Doctoral dissertations on the topic "Precision"


Consult the top 50 doctoral dissertations on the topic "Precision".


1

Amaral, Lúcio de Paula. "GEOESTATÍSTICA APLICADA AO MANEJO FLORESTAL EXPERIMENTAL EM FLORESTA OMBRÓFILA MISTA". Universidade Federal de Santa Maria, 2014. http://repositorio.ufsm.br/handle/1/4807.

Full text available
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Forests present spatio-temporal structure, and their management can be aided by geostatistics. The present study aimed to use geostatistics in the experimental forest management of Mixed Ombrophilous Forest (MOF) in Rio Grande do Sul, Brazil, through two case studies. The specific objectives were to determine production zones for a population of Araucaria angustifolia and to check the sensitivity of geostatistics to different intensities of management (selective wood harvesting) at different time points, before and after the intervention in the forest. The first study was carried out in an area of 11.35 ha in Tapera, using census data from a population of Araucaria as a virtual sampling. Punctual ordinary kriging and co-kriging were applied to the data of the 52 virtual sampling units (30x30 m) obtained. Cross semivariograms were adjusted, based on the spatial structure of the number of individuals, for basal area (G), volume (V), biomass (B) and carbon (C), which were combined through map algebra to determine the production zones (PZ). The second study was held at Tupi Farm, Nova Prata, using sample units of 0.50 ha with 10x10 m subunits, where selective wood harvestings were implemented in 2002, with the removal of 0 (control), 20 (light harvest), 40 (medium harvest) and 60% (heavy harvest) of basal area in all diameter classes. Inventories were carried out in 2001 (pre-harvesting), 2006 and 2010 (1st and 2nd monitorings). The available data were basal area and commercial volume, organized by subunits. In the first study, low, medium and high production zones were obtained (55.03, 35.54 and 9.43% of the forest fragment area, respectively). We observed that the forest was under disturbance and that the population had a balanced diameter distribution.
In the second study, the light harvesting caused the fewest changes in the spatial structure of the forest, most noticeable in the simulated surface relative to the semivariogram, with replacement of the wood removed when compared to the other treatments. The control area was not more structured than the light harvesting, and it also produced less wood. For the medium harvesting we observed a pure nugget effect, because it intensified the randomness already present in the sample unit before the intervention. In the heavy harvesting, however, there were major changes in forest structure: areas of high basal area and commercial volume became low-value areas, owing to mortality of the remaining individuals in the former and to increment and ingrowth of trees in the latter. The light selective harvesting was the most suitable; it was spatially less structured than the control, but more productive. Therefore, geostatistics may be used in forest management, since it detects changes in the spatial structure of the forest and describes the behavior of its variables.
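The production-zone mapping above rests on the empirical semivariogram, which measures how dissimilar pairs of plot values become as their separation grows; fitted semivariogram models then supply the kriging weights. A minimal sketch of the classical Matheron estimator, run on invented plot data rather than the thesis's inventory, might look like this:

```python
import numpy as np

def empirical_semivariogram(coords, values, lags, tol):
    """Matheron estimator: gamma(h) = half the mean squared difference
    between values at pairs of locations separated by roughly h."""
    coords = np.asarray(coords, float)
    values = np.asarray(values, float)
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    i, j = np.triu_indices(len(values), k=1)      # each pair counted once
    pair_dist = dist[i, j]
    pair_sq = (values[i] - values[j]) ** 2
    gamma = []
    for h in lags:
        m = (pair_dist >= h - tol) & (pair_dist < h + tol)
        gamma.append(0.5 * pair_sq[m].mean() if m.any() else np.nan)
    return np.array(gamma)

# Invented "basal area" values on a 6x6 grid of 30 m plot centres:
# a gentle east-west trend plus noise, so semivariance grows with lag.
rng = np.random.default_rng(0)
xy = np.array([(30.0 * i, 30.0 * j) for i in range(6) for j in range(6)])
z = 0.01 * xy[:, 0] + rng.normal(0.0, 0.5, len(xy))
gam = empirical_semivariogram(xy, z, lags=[30, 60, 90, 120], tol=15)
print(gam)  # one semivariance estimate per lag bin
```

A model (e.g. spherical or exponential) fitted to such a curve provides the spatial weights used by ordinary kriging and co-kriging; a flat curve (pure nugget effect) is the spatial randomness the second case study detected after the medium harvest.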
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Parris, Andrew Nicholas. "Precision stretch forming of metal for precision assembly". Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/10916.

Full text available
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Neto, Cátia Alexandra Monteiro Bragança. "Fatores condicionantes na adoção de tecnologias de viticultura de precisão em empresas portuguesas". Master's thesis, Instituto Superior de Economia e Gestão, 2019. http://hdl.handle.net/10400.5/19981.

Full text available
Abstract:
Master's in Information Systems Management
In the past years, the concept of precision viticulture information technologies has propelled significant restructuring of the grape-producing industry's parameters: a vineyard is no longer seen as a resource with uniform behaviour, but is segmented into sections with distinct properties and characteristics, allowing a more precise and accurate input-application methodology. Although an extensive wave of precision viticulture promotion campaigns has been carried out in Portugal, the level of adoption fell short of expectations, highlighting the importance of understanding the motives for acquiring precision viticulture technologies. Thus, a study was conducted on the factors that condition the adoption of precision viticulture technologies in Portuguese companies. To undertake this investigation, enterprises in the grape-growing industry were interviewed regarding their knowledge of precision viticulture, as well as their reasons for adopting it. The results point to a dependency on end users' openness to precision viticulture technologies, on the complexity of the enterprise's production process, and on other factors intrinsic and extrinsic to the user. It is expected that the results of this investigation will help in the process of marketing precision viticulture technologies, along with encouraging further studies of precision agriculture in the Portuguese farming industry.
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Aumond, Bernardo Dantas 1972. "High precision profilometry". Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/46102.

Full text available
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Santos, Marcílio Manuel dos. "Quantum precision sensing". Thesis, University of Aberdeen, 2014. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=215279.

Full text available
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Alke, Jenny, and Maria Sandahl. "Precision Court Sweeper". Thesis, KTH, Mekatronik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-296224.

Full text available
Abstract:
Tennis is a popular sport, and in summer it is often played outdoors. After an outdoor tennis court has been used it needs to be brushed: first the whole court with a large brush, and then the white lines with a smaller brush. The aim of this thesis was to design and build a working prototype of a robot that can do all of this by itself, i.e. sweep the court and then the white lines. The budget for prototype components was limited to 1,000 SEK; tools and other resources such as 3D printers, soldering equipment and laser cutters were provided by KTH free of charge. First, information and inspiration about self-driving cars and driving patterns were collected, with earlier bachelor's theses among the important sources. The necessary components and dimensions could then be determined. The main components in this project were an Arduino Uno, two DC motors, an L298 H-bridge, an ultrasonic distance sensor, an on/off switch, AAA batteries and a 9 V battery. The conclusion drawn was that the robot can work well enough to sweep a court using only a preprogrammed path; to sweep the white lines, however, sensors would be necessary. It could also be concluded that the robot could sweep the court in the same time it takes two people to sweep one half each.
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Andersson, Fredrik, and Henrik Jönsson. "Förbättrad precision vid ankomstkontroll". Thesis, Växjö University, School of Technology and Design, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-2142.

Full text available
Abstract:

The thesis project was carried out at Balco AB in Växjö, which manufactures and supplies balcony systems as turnkey contracts. Because Balco's increased turnover has meant that goods-receiving inspection can no longer be performed with sufficiently high precision, production planning has suffered. The aim of the project was therefore to enable Balco to report incoming aluminium profiles from subcontractors smoothly and with high precision. To solve Balco's problems at goods receiving, so that arrivals can easily be reported and a good overview of received goods obtained, the project group arrived at three alternative solutions: a barcode system, an RFID system, and manual identification. For the best results, and for effective use of the identification systems, it is probably necessary to implement an MPS (production planning and control) system. The conclusion and recommendations chiefly comprise a barcode-system solution, but the primary key to increased precision and control of incoming goods lies in introducing a well-functioning computer system.

Styles: APA, Harvard, Vancouver, ISO, etc.
8

McKinnon, Neil 1971. "Passage, persistence and precision". Monash University, Dept. of Philosophy, 2002. http://arrow.monash.edu.au/hdl/1959.1/8203.

Full text available
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Busck, Fredrik. "Low cost precision reflectometer". Thesis, Linköpings universitet, Institutionen för teknik och naturvetenskap, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-97118.

Full text available
Abstract:
This Master's thesis contains an evaluation of the six-port reflectometer (SPR), an alternative method for measuring reflection. This technique allows the measurement of complex reflection using only scalar detectors. Obtaining both amplitude and phase information makes it possible to correct systematic errors such as feeder cable loss and directivity error. The report contains a literature study covering the six-port reflectometer technique as well as a historical overview of reflection measurement techniques. Furthermore, it contains simulation results for parts of the design as well as for the complete system. The calibration algorithm of the SPR is presented step by step, with an improvement made to reduce the number of calculations. Results of the reflection measurements are presented in comparison with those obtained from a network analyzer. The simulations showed high accuracy when simulating variations in the return loss of the loads as well as variations in the input signal frequency, and also gave some indications of how to influence the accuracy of the reflection measurements. Measurements showed high accuracy when measuring unknown loads with changing return loss; variations in the input power also yielded good results. Measurements performed with varied input signal frequency failed to show the same high accuracy as the simulations. However, some improvements are suggested to increase the accuracy in a potential future product.
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Tang, Zhongwei. "High precision camera calibration". PhD thesis, École normale supérieure de Cachan - ENS Cachan, 2011. http://tel.archives-ouvertes.fr/tel-00675484.

Full text available
Abstract:
The thesis focuses on precision aspects of 3D reconstruction, with a particular emphasis on camera distortion correction. The causes of imprecision in stereoscopy can be found at any step of the chain. Imprecision introduced at a given step nullifies the precision gained in the previous steps, and is then propagated, amplified or mixed with errors in the following steps, finally leading to an imprecise 3D reconstruction. It seems impossible to directly improve the overall precision of a reconstruction chain that yields imprecise final 3D data; the appropriate approach to obtaining a precise 3D model is to study the precision of every component. Maximal attention is paid to camera calibration, for three reasons. First, it is often the first component in the chain. Second, it is by itself already a complicated system containing many unknown parameters. Third, the intrinsic parameters of a camera need to be calibrated only once for a given camera configuration (and at constant temperature). The camera calibration problem has been considered solved for years. Nevertheless, calibration methods and models that were valid for past precision requirements are becoming unsatisfactory for new digital cameras permitting a higher precision. In our experiments, we regularly observed that current global calibration methods can leave behind a residual distortion error as big as one pixel, which can lead to distorted reconstructed scenes. We propose two methods in the thesis to correct the distortion with far higher precision. With an objective evaluation tool, it is shown that the finally achievable correction precision is about 0.02 pixels. This value measures the average deviation of an observed straight line crossing the image domain from its perfectly straight regression line. High precision is also needed or desired for other image processing tasks crucial in 3D, such as image registration.
In contrast to the advances in the invariance of feature detectors, matching precision has not been studied carefully. We analyze the SIFT (scale-invariant feature transform) method and evaluate its matching precision. It is shown that with some simple modifications in the SIFT scale space, the matching precision can be improved to about 0.05 pixels on synthetic tests. A more realistic algorithm is also proposed to increase the registration precision of two real images when their transformation is assumed to be locally smooth. A multiple-image denoising method, called "burst denoising", is proposed to take advantage of precise image registration to estimate and remove the noise at the same time. This method produces an accurate noise curve, which can be used to guide denoising by simple averaging and the classic block-matching method. Burst denoising is particularly powerful at recovering fine non-periodic textured parts of images, even compared to the best state-of-the-art denoising methods.
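The 0.02-pixel figure quoted above is a straightness score: the average deviation of points on a detected line from their ideal regression line. A small sketch of that measure, here with a total-least-squares (orthogonal) fit on synthetic points rather than lines detected in images, could be:

```python
import numpy as np

def line_deviation(points):
    """Mean orthogonal distance of 2-D points from their total-least-squares
    regression line (smallest-variance direction found by an SVD)."""
    p = np.asarray(points, float)
    centred = p - p.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    normal = vt[-1]                      # unit normal of the fitted line
    return float(np.abs(centred @ normal).mean())

t = np.linspace(0.0, 100.0, 201)
straight = np.c_[t, 0.5 * t + 3.0]                        # perfectly straight
bowed = np.c_[t, 0.5 * t + 3.0 + (t - 50.0) ** 2 / 2500]  # ~1 px of bow
print(line_deviation(straight))  # essentially 0
print(line_deviation(bowed))     # clearly non-zero
```

Applied to lines detected in photographs of a straight-edge pattern, driving this score down toward hundredths of a pixel is what distortion correction aims for.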
Styles: APA, Harvard, Vancouver, ISO, etc.
11

Tuncer, Munir Cihangir. "Precision forging hollow parts". Thesis, University of Birmingham, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.558075.

Full text available
Styles: APA, Harvard, Vancouver, ISO, etc.
12

Bingham, Brian S. (Brian Steven) 1973. "Precision autonomous underwater navigation". Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/29629.

Full text available
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2003.
Includes bibliographical references (p. 175-185).
Deep-sea archaeology, an emerging application of autonomous underwater vehicle (AUV) technology, requires precise navigation and guidance. As science requirements and engineering capabilities converge, navigating in the sensor-limited ocean remains a fundamental challenge. Despite the logistical cost, the standards of archaeological survey necessitate using fixed acoustic transponders - an instrumented navigation environment. This thesis focuses on the problems particular to operating precisely within such an environment by developing a design method and a navigation algorithm. Responsible documentation, through remote sensing images, distinguishes archaeology from salvage, and fine-resolution imaging demands precision navigation. This thesis presents a design process for making component and algorithm level tradeoffs to achieve system-level performance satisfying the archaeological standard. A specification connects the functional requirements of archaeological survey with the design parameters of precision navigation. Tools based on estimation fundamentals - the Cramér-Rao lower bound and the extended Kalman filter - predict the system-level precision of candidate designs. Non-dimensional performance metrics generalize the analysis results. Analyzing a variety of factors and levels articulates the key tradeoffs: sensor selection, acoustic beacon configuration, algorithm selection, etc. The abstract analysis is made concrete by designing a survey and navigation system for an expedition to image the USS Monitor. Hypothesis grid (Hgrid) is both a representation of the sensed environment and an algorithm for building the representation. Range observations measuring the line-of-sight distance between two acoustic transducers are subject to multipath errors and spurious returns.
The quality of this measurement is dependent on the location of the estimator. Hgrids characterize the measurement quality by generating a priori association probabilities - the belief that subsequent measurements will correspond to the direct-path, a multipath, or an outlier - as a function of the estimated location. The algorithm has three main components: the mixed-density sensor model using Gaussian and uniform probability distributions, the measurement classification and multipath model identification using expectation-maximization (EM), and the grid-based spatial representation. Application to data from an autonomous benthic explorer (ABE) dive illustrates the algorithm and shows the feasibility of the approach.
by Brian Steven Bingham.
Ph.D.
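The mixed-density sensor model described above combines a Gaussian direct-path component with a uniform outlier component; an association probability is simply the posterior responsibility of the Gaussian component, i.e. one E-step of EM. A toy version for a single range residual, with all parameter values invented for illustration (none are from the thesis), might read:

```python
import math

def direct_path_posterior(residual, sigma=0.5, prior_direct=0.8,
                          outlier_span=50.0):
    """Posterior probability that a range residual (metres) was generated
    by the Gaussian direct-path component rather than the uniform
    outlier component of the mixture."""
    gauss = math.exp(-0.5 * (residual / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
    uniform = 1.0 / outlier_span
    direct = prior_direct * gauss
    return direct / (direct + (1.0 - prior_direct) * uniform)

print(direct_path_posterior(0.1))   # close to 1: consistent with direct path
print(direct_path_posterior(10.0))  # close to 0: almost surely an outlier
```

An Hgrid would store such probabilities per map cell, re-estimating sigma and the priors by EM; the full M-step and the multipath component are omitted from this sketch.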
Styles: APA, Harvard, Vancouver, ISO, etc.
13

Aumond, Bernardo Dantas 1972. "High precision stereo profilometry". Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/88892.

Full text available
Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2001.
Includes bibliographical references (leaves 186-190).
Metrological data from sample surfaces can be obtained by using a variety of profilometry methods. Atomic Force Microscopy (AFM), which relies on contact inter-atomic forces to extract topographical images of a sample, is one such method that can be used on a wide range of surface types, with possible nanometer resolution (both vertical and lateral). However, AFM images are commonly distorted by convolution, which reduces metrological accuracy. This type of distortion is more significant when the sample surface contains high-aspect-ratio features such as lines, steps or sharp edges, or when probe and sample share similar characteristic dimensions. Therefore, as the sizes of engineered features are pushed into the micrometer and sub-micrometer range by the development of new high-precision fabrication techniques, convolution distortions embedded in the images become increasingly significant. Aiming at mitigating these distortions and recovering metrological soundness, we introduce a novel image deconvolution scheme based on the principle of stereo imaging. Multiple images of a sample, taken at different angles, allow for separation of convolution artifacts from true topographic data. As a result, accurate sample reconstruction and probe shape estimation can be achieved simultaneously. Additionally, shadow zones, which are areas of the sample that cannot be reached by the AFM probe, are greatly reduced. Most importantly, this technique does not require a priori probe characterization or any sort of shape assumption. It also reduces the need for slender or sharper probes, which, on one hand, induce less convolution distortion but, on the other hand, are more prone to wear and damage, thus decreasing the overall reliability of the inspection system.
This research project includes a survey of current high-precision metrology tools and an in-depth analysis of state-of-the-art deconvolution techniques for probe-based metrology instruments. Next, the stereo imaging algorithm is introduced, simulation results are presented, and an error analysis is conducted. Finally, experimental validations of the technique are carried out for an industrial inspection application where the characteristic dimensions of the samples are in the nanometer range. The technique was found to be robust and insensitive to probe or sample geometries. Furthermore, the same framework was deemed applicable to other probe-based imaging techniques such as mechanical stylus profilometers and scanning tunneling microscopy.
by Bernardo Dantas Aumond.
Ph.D.
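A standard way to model the tip-convolution distortion discussed above is morphological: the AFM image is (approximately) the surface dilated by the reflected probe, and single-image erosion by the probe, the baseline that the stereo scheme improves upon, recovers the surface only where the tip can reach. A 1-D toy illustration with an invented step feature and probe follows (this is the classical morphological model, not the thesis's stereo algorithm):

```python
import numpy as np

def dilate(surface, probe):
    """Grey-scale dilation: the AFM image is approximately the sample
    surface dilated by the (reflected) probe shape."""
    n, k = len(surface), len(probe)
    r = k // 2
    pad = np.pad(surface, r, constant_values=-np.inf)
    return np.array([np.max(pad[i:i + k] + probe[::-1]) for i in range(n)])

def erode(image, probe):
    """Grey-scale erosion by the probe: the classical single-image
    reconstruction, exact only where the tip actually touches."""
    n, k = len(image), len(probe)
    r = k // 2
    pad = np.pad(image, r, constant_values=np.inf)
    return np.array([np.min(pad[i:i + k] - probe) for i in range(n)])

surface = np.zeros(21); surface[8:13] = 5.0          # a tall step feature
probe = np.array([-2.0, -0.5, 0.0, -0.5, -2.0])      # blunt tip profile
image = dilate(surface, probe)
recon = erode(image, probe)
print(np.max(np.abs(recon - surface)))  # → 2.0, residual at the step flanks
```

The residual at the step flanks is exactly the probe-limited distortion on high-aspect-ratio features that motivates the stereo approach; the plateau and the flat background are recovered exactly.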
Styles: APA, Harvard, Vancouver, ISO, etc.
14

Markova, Mariana (Mariana T.). "Precision hybrid pipelined ADC". Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/87932.

Full text available
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 205-[208]).
Technology scaling poses challenges for analog circuit design because of decreased intrinsic gain and reduced swing. An alternative to using high-gain amplifiers in the implementation of switched-capacitor circuits has been proposed that replaces the amplifier with a current source and a comparator. The technique has been generalized to zero-crossing based circuits (ZCBC). It has been demonstrated in, but is not limited to, single-ended and differential pipelined ADCs, with effective number of bits (ENOB) ranging from 8 to 11 bits at sampling rates from 10 MS/s to 100 MS/s. The purpose of this project was to explore the use of the ZCBC technique for high-precision ADCs; the goal was a 13-bit pipelined ADC operating at up to 100 MS/s. A two-phase hybrid ZCBC operation is used to improve the power-linearity tradeoff of the A/D conversion: the first phase approximates the final output value, while the second phase allows the output to settle to its accurate value. Since the output is allowed to settle in the second phase, the currents through the capacitors decay, permitting higher accuracy and power-supply rejection compared with standard ZCBCs. Linearization techniques for the ramp waveforms are implemented; linear ramp waveforms require less correction in the second phase for a given linearity, thus allowing faster operation. Techniques for improving linearity beyond using a cascoded current source are explored, including output pre-sampling and bidirectional output operation. Current steering is used to minimize the overall delay contributing to the first-phase error, known as overshoot error; reducing overshoot at the end of the first phase relaxes the linearity requirements of the final phase. Automated background overshoot reduction is introduced, though it is not included on the prototype ADC. A prototype ADC was designed in a 1 V, 65 nm CMOS process to demonstrate the techniques introduced in this work.
The prototype ADC did not meet the intended design goal, achieving 11-bit ENOB at 21 MS/s and an SFDR of 81 dB. The main performance limitations are the lack of overshoot reduction in the third pipeline stage of the prototype and mid-range errors introduced by the bidirectional ramp linearization technique, which limit the attainable output accuracy.
by Mariana Markova.
Ph. D.
Styles: APA, Harvard, Vancouver, ISO, etc.
15

Willoughby, Patrick (Patrick John) 1978. "Elastically averaged precision alignment". Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/30361.

Full text available
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2005.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 107-110).
One of the most important steps in designing a machine is the consideration of the effect of interfaces between components. A badly designed interface can vary from costly difficulties such as additional control or calibration to machine failure. For precision assemblies such as automobile engines, robotics, and many measurement devices, exact constraint techniques have been used to align removable components. Exact constraint typically requires controlled precision machining to allow an interface to be repeatable and interchangeable. Elastic averaging techniques can be used instead of exact constraint to create less repeatable interfaces with more generous machining requirements. Elastic averaging represents a subset of coupling types where improved accuracy is derived from the averaging of errors over a large number of relatively compliant contacting members. Repeatability and accuracy obtained through elastic averaging can be nearly as high as in deterministic systems, elastic averaging design allows for higher stiffness and lower local stress when compared to kinematic couplings. In this thesis, a model of elastic averaging has been developed to predict the effects of manufacturing variations on design. To demonstrate the capabilities of this model, a new fiber optic connector has been designed with elastic averaging and precision injection molding in mind. Simulations predict repeatability of approximately 5 micrometers for a 5X scale version, which agreed with experimental measurements. Fidelity parts were produced using the Silicon Insert Molded Plastics process (SIMP). SIMP uses microfabricated silicon inserts in a traditional injection mold to create parts with micro-scale features.
(cont.) The SIMP fidelity parts were measured to estimate a manufacturing repeatability of approximately 5 micrometers. Using this repeatability, simulations predict that the actual-scale version has a repeatability of approximately 0.5 micrometers.
by Patrick Willoughby.
Ph.D.
Style APA, Harvard, Vancouver, ISO itp.
16

Omar, Basil A. "Precision laser beam measurements". Thesis, Loughborough University, 1990. https://dspace.lboro.ac.uk/2134/13794.

Pełny tekst źródła
Streszczenie:
The work described in this thesis is concerned with two main areas of investigation. The first involves the measurement and characterisation of the fluorescence of various doped glasses when excited by a pulsed ultra-violet laser beam, with a view to finding a material which acts as a suitable ultra-violet to visible image converter. A system is described, based on a glass fluorescer, which writes the beam profile of a single-shot KrF laser directly into computer memory and hence permits powerful image processing, and measurements to be made on the laser beam profile. The system was developed primarily for the spatial profiling of 'Sprite', Europe's largest ultra-violet laser, and is currently in routine use at the Rutherford Appleton Laboratory for this purpose.
Style APA, Harvard, Vancouver, ISO itp.
17

McVicar, Mhairi Thomson. "Precision in architectural production". Thesis, Cardiff University, 2016. http://orca.cf.ac.uk/97224/.

Pełny tekst źródła
Streszczenie:
In the professionalised context of contemporary architectural practice, precise communications are charged with the task of translating architectural intentions into a prosaic language to guarantee certainty in advance of construction. To do so, regulatory and advisory bodies advise the architectural profession that ‘the objective is certainty.’1 Uncertainty is denied in a context which explicitly defines architectural quality as ‘fitness for purpose.’2 Theoretical critiques of a more architectural nature, meanwhile, employ a notably different language, applauding risk and deviation as central to definitions of architectural quality. Philosophers, sociologists and architectural theorists, critics and practitioners have critiqued the implications of a built environment constructed according to a framework of certainty, risk avoidance, and standardisation, refuting claims that communication is ever free from slippage of meaning, or that it can, or should, ever be unambiguously precise when attempting to translate the richness of architectural intentions. Through close readings of architectural documentations accompanying six architectural details constructed between 1856 and 2006, this thesis explores the desire for, and consequences of, precision in architectural production.
From the author’s experience of a 2004 self-build residence in the Orkney Islands, to architectural critiques of mortar joints at Sigurd Lewerentz’s 1966 Church of St Peter’s, Klippan; from the critical rejection of the 1856 South Kensington Iron Museum, to Caruso St John Architects’ resistance to off-the-peg construction at their 2006 entrance addition to the same relocated structure in Bethnal Green; and from the precise deviation of a pressed steel window frame at Mies van der Rohe’s 1954 Commons Building at IIT, Chicago, to the precise control of a ‘crude’ gypsum board ceiling at OMA’s 2003 adjoining McCormick Tribune Campus Centre, this thesis explores means by which precision in architectural production is historically and critically defined, applied, pursued and challenged in pursuit of the rich ambiguities of architectural quality.
Style APA, Harvard, Vancouver, ISO itp.
18

Nayak, Ankita Manjunath. "Precision Tunable Hardware Design". University of Cincinnati / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1479814631903673.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
19

Edwards, Adam Michael. "Precision Aggregated Local Models". Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/102125.

Pełny tekst źródła
Streszczenie:
Large scale Gaussian process (GP) regression is infeasible for larger data sets due to the cubic scaling of flops and quadratic storage involved in working with covariance matrices. Remedies in recent literature focus on divide-and-conquer, e.g., partitioning into sub-problems and inducing functional (and thus computational) independence. Such approximations can be speedy, accurate, and sometimes even more flexible than an ordinary GP. However, a big downside is loss of continuity at partition boundaries. Modern methods like local approximate GPs (LAGPs) imply effectively infinite partitioning and are thus pathologically good and bad in this regard. Model averaging, an alternative to divide-and-conquer, can maintain absolute continuity but often over-smooths, diminishing accuracy. Here I propose putting LAGP-like methods into a local experts-like framework, blending partition-based speed with model-averaging continuity, as a flagship example of what I call precision aggregated local models (PALM). Using N_C LAGPs, each selecting n from N data pairs, I illustrate a scheme that is at most cubic in n, quadratic in N_C, and linear in N, drastically reducing computational and storage demands. Extensive empirical illustration shows how PALM is at least as accurate as LAGP, can be much faster, and furnishes continuous predictive surfaces. Finally, I propose a sequential updating scheme which greedily refines a PALM predictor up to a computational budget, and several variations on the basic PALM that may provide predictive improvements.
Doctor of Philosophy
Occasionally, when describing the relationship between two variables, it may be helpful to use a so-called ``non-parametric" regression that is agnostic to the function that connects them. Gaussian Processes (GPs) are a popular method of non-parametric regression used for their relative flexibility and interpretability, but they have the unfortunate drawback of being computationally infeasible for large data sets. Past work on solving the scaling issues for GPs has focused on ``divide and conquer" style schemes that spread the data out across multiple smaller GP models. While these models make GP methods much more accessible to large data sets, they do so at the expense of either local predictive accuracy or global surface continuity. Precision Aggregated Local Models (PALM) is a novel divide-and-conquer method for GP models that is scalable for large data while maintaining local accuracy and a smooth global model. I demonstrate that PALM can be built quickly and performs well predictively compared to other state-of-the-art methods. This document also provides a sequential algorithm for selecting the location of each local model, and variations on the basic PALM methodology.
Style APA, Harvard, Vancouver, ISO itp.
20

Braathen, Johannes. "Automating Higgs precision calculations". Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS073/document.

Pełny tekst źródła
Streszczenie:
The Standard Model-like Higgs boson provides an excellent setting for the indirect search of New Physics, through the study of its properties. In particular its mass is now measured with an astonishing precision, of the order of 0.1%, while being predicted in some models of Beyond the Standard Model (BSM) Physics, such as supersymmetric (SUSY) models. The main purpose of this thesis is to push further the calculation of radiative corrections to Higgs boson masses in BSM models, as well as the automation of these calculations, in order to set or improve constraints on New Physics coupling to the Higgs boson. A first chapter is devoted to the computation of the leading two-loop O(alpha_s alpha_t) corrections to neutral scalar masses in SUSY models with Dirac gauginos. Then, we show how to address the Goldstone Boson Catastrophe -- a case of infra-red divergences due to massless Goldstone bosons that plague the calculation of effective potentials, tadpole equations, and self-energies -- in the context of general renormalisable field theories, by adopting an on-shell renormalisation scheme for the Goldstone masses. Afterwards, we illustrate the numerical implementation of our solution to the Goldstone Boson Catastrophe in the public tool SARAH. Finally, in a last chapter, we consider the high-scale behaviour of non-supersymmetric models with extended Higgs sectors.
Style APA, Harvard, Vancouver, ISO itp.
21

Vallance, Robert Ryan. "Precision connector assembly automation". Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/38433.

Pełny tekst źródła
Streszczenie:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 1999.
Includes bibliographical references (p. 209-214).
Telecommunication systems, network servers, mainframes, and high-performance computers contain several printed circuit boards (PCBs) that are mounted in card-cage assemblies. Level-3 connectors, often called board-to-board connectors, transmit signals between the primary backplane PCB and the daughter card PCBs. These connectors are customized for each PCB by configuring modules along the length of the connector. Hence, the connector's assembly system must flexibly accommodate the connector configurations. Prior to this research, the assembly of daughter card connectors was a manual process. This thesis presents the conceptual design of an assembly cell, and thoroughly presents the selected concept, a flexible assembly system. In the flexible assembly system, the connector is fixtured on a pallet and transferred to assembly stations on a conveyor. The pallet must be precisely located at each station, to minimize the relative errors between the new component and the connector on the pallet. Kinematic couplings deterministically locate one rigid body with respect to another. Therefore, a pallet system was developed that uses split-groove kinematic couplings between the pallets and machines. Experiments demonstrated that the split-groove kinematic pallet was approximately 10X more repeatable than conventional pallet location methods. The design is evident in the fabrication and operation of the first automated machines for the connector assembly system. In automated machinery, kinematically coupled bodies are often subjected to ranges of disturbance forces. This thesis presents new methods for analyzing the static equilibrium, errors due to contact deformation, and contact stresses that result from disturbance forces. In addition, the manufacturing errors within individual pallets and machines combine to cause system-wide variability in pallet location. Two methods are presented for estimating the system-wide variability in the position and orientation of the pallets.
by Robert Ryan Vallance.
Ph.D.
Style APA, Harvard, Vancouver, ISO itp.
22

MATOS, Romário Ferreira de. "INDICADORES INTERNOS E EXTERNOS PARA ESTIMATIVA DA DIGESTIBILIDADE APARENTE DA MATÉRIA SECA EM OVINOS". Universidade Federal do Maranhão, 2017. http://tedebc.ufma.br:8080/jspui/handle/tede/1277.

Pełny tekst źródła
Streszczenie:
Made available in DSpace on 2017-04-12. Previous issue date: 2017-02-21.
The objective of this study was to evaluate the accuracy and precision of apparent dry matter digestibility estimates obtained using internal and external markers in sheep fed diets containing sugar-cane-tip hay, either untreated or treated with urea or calcium oxide. The robustness of the markers in relation to variation in dry matter intake (CMS) and mean live weight (PV) of the animals was also evaluated. Twenty uncastrated male crossbred sheep without defined breed pattern (SPRD) x Santa Inês were used, with a mean live weight of 29.64 ± 5.53 kg and age of approximately 12 months, in a randomized block design based on live weight. Estimates of total fecal dry matter yield and of the digestibility of DM and nutrients were obtained using the method of total fecal collection and using internal markers, represented by the indigestible constituents MSi, FDNi and FDAi, and the external marker titanium dioxide (TiO2). Accuracy of the markers was evaluated by the mean bias, which is the difference between the value predicted by the marker and the value observed by total collection of feces; the most accurate marker is the one whose mean bias is closest to zero. Precision, a measure of dispersion between predicted and observed values, represents the mean distance variability between these values and was evaluated by the root mean square of the prediction error. The robustness analysis of each marker was performed by regressing the bias on the variables CMS and PV. TiO2 presented a fecal recovery rate (TRF) of less than 100%, while for the internal markers MSi, FDNi and FDAi the TRF was greater than 100%. There was a difference in the mean bias (P < 0.05), which shows that the markers differ in their accuracy for fecal yield estimates and, consequently, for estimates of apparent dry matter digestibility (DMS) in sheep. 
Estimates of dry matter digestibility (DMS) obtained with the internal markers MSi, FDNi and FDAi are recommended, because the results obtained with them are not influenced by CMS and PV.
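The two evaluation measures described above have simple computational definitions. A minimal sketch with made-up numbers (not the study's data): accuracy is the mean bias between marker-predicted and observed values, and precision is the root mean square of the prediction error.

```python
import numpy as np

# Hypothetical marker-predicted vs. observed (total-collection) fecal
# dry matter output, g/day, for five animals (illustrative values only).
predicted = np.array([412.0, 388.0, 450.0, 405.0, 430.0])
observed = np.array([400.0, 395.0, 440.0, 410.0, 428.0])

# Accuracy: mean bias (predicted minus observed); closer to zero is better.
mean_bias = np.mean(predicted - observed)

# Precision: root mean square of the prediction error (RMSPE), the average
# distance between predicted and observed values.
rmspe = np.sqrt(np.mean((predicted - observed) ** 2))

print(mean_bias, rmspe)  # bias of 2.4 g/day, RMSPE of about 8.0 g/day
```

A marker can be accurate (bias near zero) yet imprecise (large RMSPE), which is why the study reports both quantities.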
Style APA, Harvard, Vancouver, ISO itp.
23

Enz, Christian Charles. "High precision CMOS micropower amplifiers /". [S.l.] : [s.n.], 1989. http://library.epfl.ch/theses/?nr=802.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
24

Parthey, Christian Godehard. "Precision Spectroscopy on atomic hydrogen". Diss., lmu, 2011. http://nbn-resolving.de/urn:nbn:de:bvb:19-139433.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
25

Rogers, Adam Gregory. "Precision mechatronics lab robot development". Texas A&M University, 2007. http://hdl.handle.net/1969.1/85854.

Pełny tekst źródła
Streszczenie:
This thesis presents the results from a modification of a previously existing research project titled the Intelligent Pothole Repair Vehicle (IPRV). The direction of the research in this thesis was changed toward the development of an industrially based mobile robot. The principal goal of this work was the demonstration of the Precision Mechatronics Lab (PML) robot. This robot should be capable of traversing any known distance while maintaining a minimal position error. An optical correction capability has been added with the addition of a webcam and the appropriate image processing software. The primary development goal was the ability to maintain the accuracy and performance of the robot with inexpensive and low-resolution hardware. Combining the two abilities of dead-reckoning and optical correction on a single platform will yield a robot with the ability to accurately travel any distance. As shown in this thesis, the additional capability of off-loading its visual processing tasks to a remote computer allows the PML robot to be developed with less expensive hardware. The majority of the literature research presented in this paper is in the area of visual processing. Various methods used in industry to accomplish robotic mobility, optical processing, image enhancement, and target interception have been presented. This background material is important in understanding the complexity of this field of research and the potential application of the work conducted in this thesis. The methods shown in this research can be extended to other small robotic vehicles, with two separate drive wheels. An empirical method based upon system identification was used to develop the motion controllers. This research demonstrates a successful combination of a dead-reckoning capability, an optical correction method, and a simplified controller methodology capable of accurate path following. 
Implementation of this procedure could be extended to multiple and inexpensive robots used in a manufacturing setting.
Style APA, Harvard, Vancouver, ISO itp.
26

Pocock, Trudy Louise. "An Analysis of Precision teaching". The University of Waikato, 2006. http://hdl.handle.net/10289/2622.

Pełny tekst źródła
Streszczenie:
This research examined three components of precision teaching; charting, timed practices, and performance aims. In the first study beginner skaters performed two roller skating skills, forward crosses and back scissors, with the aim of increasing fluency in these skills using precision teaching methods. Skaters were told to perform the skills as fast as they could during 1-min practises, aiming at a set performance aim, or goal. After each timing skaters were told how many repetitions they had performed. One group charted back scissors only and the other forward crosses only. The skaters became faster in both skills and charting did not produce faster rates. The improvement seen may have been a direct result of the performance aims. Therefore the second study, using back crosses, compared a fixed, difficult performance aim (complete 50 per minute) for one group and an easier, flexible performance aim (beat your previous sessions' high score) for a second group. After each timing skaters were told how many back crosses they had performed. Performance rates increased similarly for both groups, thus the different performance aims did not have different effects, contrary to the goal-setting literature. A third study investigated this further. Skaters performed forward crosses and back scissors during a baseline condition, where there were no performance aims or feedback. Increases in performance rates for both skills occurred. In a second condition, a performance aim higher than their number of repetitions in the previous condition was set and feedback was given for one skill only. There was an immediate increase in rate of the targeted skill for 3 of the 4 skaters, suggesting that the goal, when given with feedback, influenced the rate at which the skaters performed the skill. In the fourth study, where the effect of feedback and practice was examined more closely, soccer players dribbled a ball in and out of cones. 
As expected, those who took part in eight to ten sessions, were told to do their best (an easy goal), and were not given feedback performed this skill faster than those who completed only two sessions under the same conditions. Unexpectedly, they also performed faster than those set a performance aim of beating their previous highest score (a hard goal) who were given feedback. Methodological issues that may have been responsible for this latter result were addressed in the fifth study. Skaters completing 10 sessions of forward crosses, with feedback and with a performance aim of completing 60 repetitions in one minute (a hard goal), became faster than skaters completing 10 sessions without feedback who were told to do their best. Skaters told to do their best, who completed only three sessions without feedback, did not get faster. These results support the finding in the goal-setting literature that hard goals with feedback have more effect than being told to do your best. Overall these studies show that short, timed practices and hard performance aims, or goals, may be effective components of precision teaching, while visual feedback from charting may not. Further, precision teaching methods were effective when applied to sporting skills such as those used by roller skaters and soccer players for building fluency of basic skills.
Style APA, Harvard, Vancouver, ISO itp.
27

Graikou, Eleni [Verfasser]. "High precision timing / Eleni Graikou". Bonn : Universitäts- und Landesbibliothek Bonn, 2019. http://d-nb.info/1188732013/34.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
28

Peppa, Maria Valasia. "Precision analysis of 3D camera". Thesis, KTH, Geodesi och geoinformatik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-131457.

Pełny tekst źródła
Streszczenie:
Three dimensional mapping is becoming an increasingly attractive product nowadays. Many devices like laser scanners or stereo systems provide 3D scene reconstruction. A new type of active sensor, the Time of Flight (ToF) camera, obtains direct depth observations (the 3rd dimensional coordinate) at a high video rate, useful for interactive robotic and navigation applications. The high frame rate combined with the low weight and compact design of ToF cameras constitutes an alternative solution in 3D measuring technology. However, a deep understanding of the errors involved in ToF camera observations is essential in order to upgrade their accuracy and enhance ToF camera performance. This thesis work addresses the depth error characteristics of the SR4000 ToF camera and indicates potential error models for compensating their impact. In the beginning of the work the thesis investigates the error sources, their characteristics and how they influence the depth measurements. In the practical part, the work covers the above analysis via experiments. Last, the work proposes simple methods to reduce the depth error so that the ToF camera can be used for high accuracy applications. An overall result of the work indicates that the depth acquired by the Time of Flight (ToF) camera deviates by several centimeters; specifically, the SR4000 camera exhibits errors of up to 35 cm over the working range of 1-8 m. After error compensation the depth offset fluctuates within 15 cm over the same working range. The error is smaller when the camera is set up close to the test field than when it is further away.
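A generic illustration of the kind of systematic-offset compensation the abstract alludes to (simulated offsets, not the SR4000 calibration data or the thesis's actual method): model the depth error as a smooth function of measured depth, then subtract the fitted offset.

```python
import numpy as np

# Hypothetical ToF calibration run: reference depths over a 1-8 m working
# range, with a made-up smooth systematic offset added to the measurements.
ref = np.linspace(1.0, 8.0, 15)                  # reference depths, metres
offset_true = 0.20 * np.sin(ref) + 0.05          # simulated "wiggling" error
measured = ref + offset_true                     # what the camera reports

# Fit the systematic depth offset as a low-order polynomial in measured
# depth, then subtract it from subsequent measurements.
coef = np.polyfit(measured, measured - ref, deg=7)
corrected = measured - np.polyval(coef, measured)

resid_before = np.max(np.abs(measured - ref))    # decimetre-level error
resid_after = np.max(np.abs(corrected - ref))    # centimetre-level or better
```

The point of the sketch is the workflow (characterise the error against a reference field, fit a smooth model, compensate), which mirrors the thesis's finding that compensation shrinks the depth offset substantially over the working range.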
Style APA, Harvard, Vancouver, ISO itp.
29

Zhu, Q. S. "Precision electrical impedance tomography instrumentation". Thesis, Oxford Brookes University, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.332494.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
30

Gordon, Timothy Alistair. "Computer controlled precision distance measurement". Thesis, University of Nottingham, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.335008.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
31

Kirk, C. P. "Precision measurement of microscopic images". Thesis, University of Leeds, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.355929.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
32

Imre, Egemen. "High precision relative motion modelling". Thesis, University of Surrey, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.436083.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
33

Kim, Won-jong. "High-precision planar magnetic levitation". Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/10419.

Pełny tekst źródła
Streszczenie:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997.
Includes bibliographical references (v. 2, leaves 392-409).
by Won-jong Kim.
Ph.D.
Style APA, Harvard, Vancouver, ISO itp.
34

Weber, Alexis Christian 1974. "Precision passive alignment of wafers". Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/89364.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
35

Stimac, Andrew K. (Andrew Kenneth) 1977. "Precision navigation for aerospace applications". Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/16676.

Pełny tekst źródła
Streszczenie:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2004.
Vita.
Includes bibliographical references (p. 162).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Navigation is important in a variety of aerospace applications, and commonly uses a blend of GPS and inertial sensors. In this thesis, a navigation system is designed, developed, and tested. Several alternatives are discussed, but the ultimate design is a loosely-coupled Extended Kalman Filter using rigid body dynamics as the process with a small angle linearization of quaternions. Simulations are run using real flight data. A bench top hardware prototype is tested. Results show good performance and give a variety of insights into the design of navigation systems. Special attention is given to convergence and the validity of linearization.
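One common ingredient of the small-angle quaternion linearization mentioned above can be sketched as follows (a generic illustration under stated assumptions, not the thesis's actual filter): the EKF estimates a three-component attitude error vector, which is folded back into the reference quaternion via a first-order error quaternion.

```python
import numpy as np

def quat_mul(q, r):
    # Hamilton product of two quaternions, scalar-first convention [w, x, y, z].
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([
        w0 * w1 - x0 * x1 - y0 * y1 - z0 * z1,
        w0 * x1 + x0 * w1 + y0 * z1 - z0 * y1,
        w0 * y1 - x0 * z1 + y0 * w1 + z0 * x1,
        w0 * z1 + x0 * y1 - y0 * x1 + z0 * w1,
    ])

def apply_small_angle_correction(q, dtheta):
    # Small-angle linearization: for a small rotation-error vector dtheta
    # (radians), the error quaternion is approximately [1, dtheta/2], which
    # is multiplied into the reference attitude and renormalized.
    dq = np.concatenate(([1.0], 0.5 * np.asarray(dtheta, dtype=float)))
    q_new = quat_mul(q, dq)
    return q_new / np.linalg.norm(q_new)

q_ref = np.array([1.0, 0.0, 0.0, 0.0])                    # identity attitude
q_upd = apply_small_angle_correction(q_ref, [0.01, 0.0, 0.0])
```

Working with the 3-vector dtheta instead of the full 4-component quaternion keeps the filter's attitude error state minimal and avoids the unit-norm constraint inside the covariance propagation; the renormalization step absorbs the first-order approximation error.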
by Andrew K. Stimac.
S.M.
Style APA, Harvard, Vancouver, ISO itp.
36

Manrai, Arjun Kumar. "Statistical foundations for precision medicine". Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/97826.

Pełny tekst źródła
Streszczenie:
Thesis: Ph. D., Harvard-MIT Program in Health Sciences and Technology, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references.
Physicians must often diagnose their patients using disease archetypes that are based on symptoms as opposed to underlying pathophysiology. The growing concept of "precision medicine" addresses this challenge by recognizing the vast yet fractured state of biomedical data, and calls for a patient-centered view of data in which molecular, clinical, and environmental measurements are stored in large shareable databases. Such efforts have already enabled large-scale knowledge advancement, but they also risk enabling large-scale misuse. In this thesis, I explore several statistical opportunities and challenges central to clinical decision-making and knowledge advancement with these resources. I use the inherited heart disease hypertrophic cardiomyopathy (HCM) to illustrate these concepts. HCM has proven tractable to genomic sequencing, which guides risk stratification for family members and tailors therapy for some patients. However, these benefits carry risks. I show how genomic misclassifications can disproportionately affect African Americans, amplifying healthcare disparities. These findings highlight the value of diverse population sequencing data, which can prevent variant misclassifications by identifying ancestry informative yet clinically uninformative markers. As decision-making for the individual patient follows from knowledge discovery by the community, I introduce a new quantity called the "dataset positive predictive value" (dPPV) to quantify reproducibility when many research teams separately mine a shared dataset, a growing practice that mirrors genomic testing in scale but not synchrony. I address only a few of the many challenges of delivering sound interpretation of genetic variation in the clinic and the challenges of knowledge discovery with shared "big data." These examples nonetheless serve to illustrate the need for grounded statistical approaches to reliably use these powerful new resources.
by Arjun Kumar Manrai.
Ph. D.
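The dataset-level reproducibility problem the dPPV targets can be illustrated with a toy simulation of many teams independently mining one shared dataset. This is only a sketch under assumed values (number of teams, prior probability that a mined hypothesis is true, power, and alpha are all invented); the actual dPPV formulation is developed in the thesis itself.

```python
import numpy as np

# Toy illustration: each "team" tests one hypothesis on the shared dataset.
# All parameters (n_teams, prior_true, power, alpha) are invented for illustration.
rng = np.random.default_rng(0)

def dataset_ppv(n_teams=100, prior_true=0.1, power=0.8, alpha=0.05):
    """Fraction of positive claims, pooled across teams, that are true positives."""
    truths = rng.random(n_teams) < prior_true      # which mined hypotheses are real
    p_claim = np.where(truths, power, alpha)       # chance each team reports a positive
    claims = rng.random(n_teams) < p_claim
    if claims.sum() == 0:
        return float("nan")
    return (claims & truths).sum() / claims.sum()
```

With these assumed values the expected dataset-level PPV is 0.1·0.8 / (0.1·0.8 + 0.9·0.05) ≈ 0.64: every individual claim clears α = 0.05, yet roughly a third of the pooled positive claims from the shared dataset would be false.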
37

Doroski, Adam D. "Precision stationkeeping with azimuthing thrusters". Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/68834.

Abstract:
Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references.
Precision positioning of an unmanned surface vehicle (USV) in a nautical environment is a difficult task. With a dual azimuthing thruster scheme, the thruster outputs are optimized online to minimize error. The necessary calculations are simplified by assuming that the rotating thrusters are always parallel, which makes the system holonomic. The scheme accommodates limitations in actuator outputs, including rotation limits and time-lagged thrusts, and was implemented in a MATLAB simulation that tested its response to step errors and disturbance forces similar to those it would encounter in actual deployment. It successfully achieved commanded outputs in all three degrees of freedom, typically within 25 seconds, and rejects constant and sinusoidal disturbance forces. However, specific configurations arise in which the USV is momentarily uncontrollable, and the system recovers only after being perturbed further into a controllable configuration.
by Adam D. Doroski.
S.B.
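The parallel-thruster assumption described in the abstract can be sketched as a closed-form allocation problem. The geometry (two stern thrusters at (−a, ±b) sharing a common azimuth) and all numbers are illustrative assumptions, not taken from the thesis; the sketch also shows where singular, momentarily uncontrollable configurations come from.

```python
import math

def allocate(fx, fy, mz, a=1.0, b=0.5):
    """Allocate two stern azimuthing thrusters at (-a, +b) and (-a, -b),
    constrained to a common azimuth angle (the parallel assumption).
    Returns (theta, f1, f2); thrusts may come out negative (reverse) in
    this idealized sketch. Geometry values a, b are illustrative."""
    theta = math.atan2(fy, fx)          # common azimuth of both thrusters
    s = math.hypot(fx, fy)              # total thrust f1 + f2
    c = math.cos(theta)
    if abs(c) < 1e-9:
        # Pure-sway command: differential thrust loses yaw authority,
        # one of the uncontrollable configurations noted in the abstract.
        raise ValueError("singular configuration: no differential-thrust yaw authority")
    # Moment balance: mz = -a*sin(theta)*(f1+f2) + b*cos(theta)*(f2-f1)
    d = (mz + a * math.sin(theta) * s) / (b * c)    # f2 - f1
    return theta, (s - d) / 2.0, (s + d) / 2.0
```

For example, allocate(10.0, 0.0, 2.0) gives theta = 0 with f1 = 3 and f2 = 7, reproducing the commanded surge force and yaw moment exactly.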
38

Schmiechen, Philipp. "Design of precision kinematic systems". Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/12628.

39

Lock, Andrew. "Digital watermarking of precision imagery". Thesis, University of Aberdeen, 2013. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=214152.

Abstract:
There has been growing interest recently in reversible watermarking of medical images for security reasons. Typically, humans are assumed to be the end users of watermarked images; however, in many cases machine vision processes may be additional consumers. Therefore, any watermarking performed on these images must be imperceptible not only to human users but also to these machine vision processes. The objective of this thesis is to understand the extent to which reversible watermarking affects the ability of computer vision algorithms to perform correctly. We address both the effect on primitive feature detection and on complete machine vision processes, and investigate the ability to predict these effects using Image Quality Metrics (IQMs). Additionally, we describe the development of a new watermarking algorithm. We perform primitive feature detection on original and watermarked images, comparing the output feature maps. Subsequently we use statistical modelling to allow prediction of feature map differences based on various IQMs of a watermarked image. We then conduct a similar experiment using manually specified feature maps and edge detectors across their full parameter space. Watermarking algorithms showing the least impact are highlighted, and a priori prediction of poorer performance is investigated. In many cases watermarking is shown to cause a significant difference in the output feature map; however, prediction of the difference is often possible with excellent discrimination. A validation system for utilising these results in practical applications is presented. Three machine vision processes are investigated using a range of watermarking algorithms and embedding capacities – iris recognition, medical image registration, and diabetic retinopathy assessment. Significant differences are found in some cases; however, at low capacities the iris and retinopathy processes show no significant differences.
In addition, prediction of erroneous results for the retinopathy process was possible with excellent discrimination.
40

Kölzow, Krister, i Emil Grundén. "8ARM : Open Source Precision Pump". Thesis, KTH, Maskinkonstruktion (Inst.), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-184258.

Abstract:
Today many tasks are executed by robots, but one business that has been little affected by this is the restaurant business, and more particularly bartending. This project takes on the challenge of building an Open Source bartender robot from cheap parts and investigating how good a precision can be achieved. The project does not account for different viscosities of the liquids or abnormal temperatures. The key component of this project is the peristaltic pump, which is used to transport the liquid. The pump is an Open Source 3D-printable pump distributed through Thingiverse® and can be modified parametrically in the software OpenSCAD. Other components used in this project are an ArduinoTM Uno and a tachometer. These are assembled into a demonstrator that is governed by feedback control. A graphical user interface is also constructed using an object-based model-view-controller architecture, written in the programming language PHP and running on a Raspberry Pi. Testing of the demonstrator shows that the robot has an error of 5 percent when pumping small amounts of liquid. The total cost of this project is 1930 SEK, but it could be reduced by choosing a cheaper motor, at the price of a slower machine.
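The closed-loop dosing idea (tachometer feedback around the peristaltic pump) can be sketched as a simple proportional controller in simulation. All constants below (millilitres per revolution, control period, maximum speed, gain) are invented for illustration and are not the project's actual values.

```python
# Closed-loop dosing sketch: a proportional controller drives the pump motor,
# and the tachometer's revolution count gives the dispensed volume.
# All constants are assumptions for illustration.
ML_PER_REV = 0.8    # assumed pump displacement per revolution [ml]
DT = 0.05           # control period [s]
MAX_RPS = 5.0       # assumed maximum motor speed [rev/s]

def dispense(target_ml, kp=2.0):
    """Run the control loop until the tachometer-derived volume reaches target."""
    revs = 0.0
    for _ in range(10_000):
        error = target_ml - revs * ML_PER_REV   # remaining volume [ml]
        if error <= 0.0:
            break
        speed = min(MAX_RPS, kp * error)        # commanded speed, clamped
        revs += speed * DT                      # revolutions counted this period
    return revs * ML_PER_REV
```

In this idealized loop dispense(40.0) converges from below toward the target; a real controller would also have to stop slightly early to account for drip and tube elasticity.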
41

Lawrence, Robert S., George Gregory, Derryl Stutz, Jerry Sanchez i Brent Neal. "CIGTF Enhanced Precision Reference Systems". International Foundation for Telemetering, 2003. http://hdl.handle.net/10150/606697.

Abstract:
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada
The 746th Test Squadron at Holloman AFB has developed and utilized the Central Inertial Guidance Test Facility (CIGTF) High Accuracy Post-processing Reference System (CHAPS). CHAPS is a multi-sensor navigation reference system used to evaluate position, velocity, and attitude performance of Global Positioning System (GPS), Inertial Navigation System (INS), and Embedded GPS/INS (EGI) navigation systems on large vehicles and aircraft. Reference data are processed post-test, with accuracies ranging from one meter to sub-meter depending on the reference configuration and test environment (profile, trajectory dynamics, GPS jamming, etc.). The GPS Aided Inertial Navigation Reference (GAINR) system developed by the Air Force Flight Test Center (Edwards AFB) offered additional utilization capabilities (test beds and post-processing time). Its basic sensor assembly is an EGI navigation system, and the data are post-processed with the Multisensor Optimal Smoothing Estimation Software (MOSES). Incorporating CHAPS and GAINR capabilities generates a reference system with enhanced (sub-meter) accuracy in dynamic GPS non-jamming/jamming environments. This paper presents the enhanced reference system combining CHAPS and GAINR capabilities, along with its characterization process and development methodology.
42

Ebbrell, Stephen. "Process requirements for precision grinding". Thesis, Liverpool John Moores University, 2003. http://researchonline.ljmu.ac.uk/5633/.

43

Sanders, Rebecca. "Precision in RNA molecular measurement". Thesis, Cardiff University, 2016. http://orca.cf.ac.uk/91295/.

Abstract:
Measurement of gene expression profiles represents a snapshot of cellular metabolism or activity at the molecular scale. This involves measurement of messenger (m)RNA employing techniques such as reverse transcription quantitative polymerase chain reaction (RT-qPCR). To truly assign biological significance to associated findings, researchers must consider the idiosyncrasies of this method and the associated technical error, termed measurement uncertainty. Significant error can occur at the sample source, RNA extraction, RT and qPCR levels. This thesis explores the steps that may introduce potential bias. It is hypothesised that error in mRNA measurement can be partitioned across different experimental stages. Within this thesis, RNA measurement from sample source to qPCR has been analysed at each stage to delineate the variability contributions attributed to specific steps, using synthetic and validated endogenous reference genes, single cell lines, 3D models and complex bone tissue. These data showed that total RNA yields remained consistent between treatment (2D cell mineralisation, 3D co-culture mechanical loading) and control groups (p > 0.06). Sample complexity was positively correlated with RNA extraction yield variability. Evaluation of different extraction methods demonstrated that total RNA yields differed between methods (p < 0.001). Assessing total RNA quantity and quality, different metrics (Bioanalyzer, Nanodrop and Qubit) generated different yield estimates (p < 0.05), although quality estimates from the different metrics were found to be comparable. In addition, different cell batches (cultures of the same cells from different cryovials) generated disparate total RNA yields (p < 0.02), with variable quality estimates, despite normalisation for cell count.
RT-digital PCR analysis revealed quantification differences and detection sensitivity biases between different RT enzymes (p < 0.0001), suggesting cDNA prepared using different RT enzymes cannot be meaningfully compared. The ERCC synthetic targets were variable under the model conditions assessed and therefore not suitable as normalisers in these circumstances. This work provides a guide for the approaches necessary to reduce error, improve experimental design and minimise uncertainties.
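The partitioning hypothesis can be illustrated with a toy variance-components calculation: simulate yields whose variation comes from two stages (cell batch and assay replicate) and recover each stage's contribution from a one-way random-effects ANOVA. The stage variances and sample sizes below are invented, not the thesis data.

```python
import numpy as np

# Toy sketch: partition total measurement variance into a between-batch
# component and a within-batch (assay) component. All values are assumptions.
rng = np.random.default_rng(42)
n_batches, n_reps = 200, 20
sigma_batch, sigma_assay = 2.0, 1.0            # true stage SDs (assumed)

batch_effects = rng.normal(0.0, sigma_batch, n_batches)
yields = batch_effects[:, None] + rng.normal(0.0, sigma_assay, (n_batches, n_reps))

ms_within = yields.var(axis=1, ddof=1).mean()           # estimates sigma_assay**2
ms_between = n_reps * yields.mean(axis=1).var(ddof=1)   # estimates sigma_assay**2 + n_reps*sigma_batch**2
var_assay = ms_within
var_batch = (ms_between - ms_within) / n_reps
```

With these assumed variances the recovered components should sit near the true values of 1.0 (assay) and 4.0 (batch), showing how stage-wise contributions can be separated from pooled measurements.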
44

Bardt, Jeffrey A. "Precision molding of metallic microcomponents". [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0011822.

45

Rusch, Peter C. "Precision farming in South Africa". Diss., Pretoria : [s.n.], 2001. http://upetd.up.ac.za/thesis/available/etd-01072004-153302.

46

Pereira, Fabio Irigon. "High precision monocular visual odometry". reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/183233.

Abstract:
Recovering three-dimensional information from two-dimensional images is an important problem in computer vision that finds several applications in our society. Robotics, the entertainment industry, medical diagnosis and prosthetics, and even interplanetary exploration benefit from vision-based 3D estimation. The problem can be divided into two interdependent operations: estimating the camera position and orientation when each image was produced, and estimating the 3D scene structure. This work focuses on computer vision techniques used to estimate the trajectory of a camera-equipped vehicle, a problem known as visual odometry. In order to provide an objective measure of estimation efficiency and to compare the achieved results to the state-of-the-art works in visual odometry, a popular high-precision dataset was selected and used. In the course of this work new techniques for image feature tracking, camera pose estimation, 3D point position calculation and scale recovery are proposed. The achieved results outperform the best-ranked results on the chosen dataset.
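One building block of such a pipeline, recovering a 3D point once two camera poses are known, can be sketched with linear (DLT) triangulation. The intrinsics, poses, and scene point below are invented for illustration; the thesis's own tracking and pose-estimation techniques are more involved.

```python
import numpy as np

def project(P, X):
    """Project 3D point X through a 3x4 projection matrix P to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: solve A X = 0 for the point seen at x1, x2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])                               # assumed pinhole intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])             # first camera at the origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # camera translated 1 m

X_true = np.array([0.5, 0.2, 4.0])                            # made-up scene point
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With a monocular camera the baseline (here 1 m) is not observable from the images alone, which is exactly why a separate scale-recovery step is needed in monocular visual odometry.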
47

Ester, Edward F. "Neural Mechanisms of Mnemonic Precision". Thesis, University of Oregon, 2011. http://hdl.handle.net/1794/12106.

Abstract:
xii, 78 p. : ill. (some col.)
Working memory (WM) enables the storage of information in a state that can be rapidly accessed and updated. This system is a core component of higher cognitive function - individual differences in WM ability are strongly predictive of general intelligence (IQ) and scholastic achievement (e.g., SAT scores), and WM ability is compromised in many psychiatric (e.g., schizophrenia) and neurological (e.g., Parkinson's) disorders. Thus, there is a strong motivation to understand the basic properties of this system. Recent studies suggest that WM ability is determined by two independent factors: the number of items an individual can store and the precision with which representations can be maintained. Significant progress has been made in developing neural measures that are sensitive to the number of items stored in WM. For example, electrophysiological and neuroimaging studies have demonstrated that activity in posterior parietal cortex is directly modulated by the number of items stored in WM and reaches a plateau at the same set size where individual memory capacity is exceeded. However, comparably little is known regarding the neural mechanisms that enable the storage of high-fidelity information in WM. This dissertation describes two experiments that evaluate so-called sensory-recruitment models of WM, in which the storage of high-fidelity information in WM is mediated by sustained activity in the sensory cortices that encode memoranda. In Chapter II, functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis were used to demonstrate that sustained patterns of activation observed in striate cortex discriminate the specific feature attribute(s) (e.g., orientation) that an observer is holding in WM.
In Chapter III, I show that these patterns of activation can be observed in regions of visual cortex that are not retinotopically mapped to the spatial location of a remembered stimulus and suggest that this spatially global recruitment of visual cortex enhances memory precision by facilitating robust population coding of the stored information. Together, these results provide strong support for so-called sensory recruitment models of WM, where the storage of fine visual details is mediated by sustained activity in sensory cortices that encode information. This dissertation includes previously published and co-authored material.
Committee in charge: Edward Awh, Chairperson and Advisor; Edward Vogel, Member; Nash Unsworth, Member; Terry Takahashi, Outside Member
48

Mourad, Jacob, i Emil Gustafsson. "Curve Maneuvering for Precision Planter". Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-157339.

Abstract:
With a larger global population and fewer farmers, harvests will have to be larger and easier to manage. With high-precision planting, each crop will have the same available area on the field, yielding an even size of the crops, which means the whole field can be harvested at the same time. This thesis investigates the possibility of such precision planting in curves. Currently, Väderstad's planter collection Tempo can deliver precision in the centimeter range at speeds up to 20 km/h when driving straight, but not when turning. This thesis makes use of the sensors available on the planters, but also investigates possible improvements from including additional sensors. An Extended Kalman Filter is used to estimate the individual speeds of the planting row units, thus enabling high-precision planting for an arbitrary motion. The filter is shown to yield a satisfactory result when using the inertial measurement units, the radar speed sensor and the GPS already mounted on the planter. By implementing the filter, a higher precision is obtained compared to using the same global speed for all planting row units.
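The geometric core of why each row unit needs its own speed in a curve can be sketched directly: a unit offset laterally from the reference point traces an arc of a different radius. The speeds and offsets below are illustrative; the thesis estimates these quantities with an Extended Kalman Filter fusing IMU, radar, and GPS measurements rather than assuming a perfectly known yaw rate.

```python
def row_unit_speeds(v, omega, lateral_offsets):
    """Ground speed of each row unit, given vehicle speed v [m/s], yaw rate
    omega [rad/s] (positive = left turn), and signed lateral offsets [m]
    (positive = toward the left of the reference point). A unit on the
    inside of the turn moves slower than the vehicle; when omega = 0 all
    units share the vehicle speed."""
    return [v - omega * d for d in lateral_offsets]

# Example with assumed values: 5 m/s forward, 0.5 rad/s left turn,
# three row units at -2 m, 0 m, and +2 m lateral offset.
speeds = row_unit_speeds(5.0, 0.5, [-2.0, 0.0, 2.0])   # -> [6.0, 5.0, 4.0]
```

The outermost unit must plant at 6 m/s while the innermost runs at 4 m/s, which is why a single global speed breaks seed spacing in curves.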
49

Tessmer, Lavender. "Textile precision for customized assemblies". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123603.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Architecture, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages [55]-[57]).
With the potential to configure patterns and materials with stitch-level control, textiles are becoming an increasingly desirable method of producing mass-customized items. However, current textile machines lack the ability to transfer three-dimensional information between digital models and production with the same level of control and accuracy as other machines. Designers are accustomed to generating three-dimensional objects in a digital model and then converting these into instructions for machines such as 3D printers or laser cutters, but current design interfaces and production machines for textiles provide no comparable workflow for producing items that rely on precise control of physical size and fit. Customized assemblies, such as footwear or architectural projects with complex geometries, increasingly integrate textile components with parts produced through a variety of other industrial processes. Furthermore, there is growing interest in the use of three-dimensional data, such as 3D body scanning, to aid in the production of custom-fit products. As mass customization becomes more widespread as an alternative to mass production, general-purpose machines are increasingly capable of generating customized items with high efficiency, relying on design-to-machine workflows to control geometric changes. However, current textile machines are unable to adapt to changing geometric information with the same efficiency. The challenges to dimensional precision in textiles are wide-ranging, affected by computational interfaces, production machines, and material technique. Addressing these problems, this thesis demonstrates a design-to-fabrication workflow that enables the transfer of three-dimensional information directly to a device for textile production. The proposed workflow seeks a solution to the material, mechanical, and computational bottlenecks related to spatial accuracy in textile production.
by Lavender Tessmer.
S.M.
S.M. Massachusetts Institute of Technology, Department of Architecture
50

Iborra, Egea Oriol. "Novel approaches towards precision medicine in acute and chronic heart failure". Doctoral thesis, Universitat Autònoma de Barcelona, 2019. http://hdl.handle.net/10803/669734.

Abstract:
Myocardial infarction (MI) is caused by a sudden stop of blood flow that leads to local ischemia in the heart and triggers pathologic remodeling, which can ultimately give rise to heart failure (HF). Although it may present as a static event, this is a complex and dynamic process. In this thesis, we aimed to assess HF across the whole spectrum of the disease: from the acute phase, in which a patient suddenly falls victim to a drastic illness, to the molecular transition towards chronification, and on to the mechanisms of action (MoA) of the newest pharmaceutical therapies in chronic HF. Moreover, growing evidence supports the idea that specific biological processes are likely influenced by their biological context (for example, a specific tissue or a certain disease). This approach constantly generates vast amounts of data, such that collecting, analyzing, and interpreting this information constitutes an overwhelming task. Consequently, we harnessed artificial intelligence techniques to combine molecular data with clinical responses observed in patients, thus generating a mathematical model capable of both reproducing existing knowledge and discerning MoAs hidden beneath thousands of otherwise inaccessible molecular interactions. First, we analyzed the two drugs that are revolutionizing HF management: Entresto® (Sacubitril/Valsartan), which showed a 22% reduction in the number of deaths and admissions in recent clinical trials, and Empagliflozin (an SGLT2 inhibitor indicated for type 2 diabetes mellitus patients), which showed an unexpected 32% reduction in the development of new HF cases in the EMPAREG trial. Our first study revealed that Sacubitril/Valsartan acts synergistically by blocking both cell death and the pathological remodeling of the extracellular matrix of cardiac cells. Most importantly, we discovered a core of 8 proteins that emerge as key players in this process.
Secondly, the MoA of Empagliflozin was deciphered using deep learning analyses, which achieved 94.7% accuracy and showed an amelioration of cardiomyocyte cell death through restoration of the activity of two genes suppressed during HF, XIAP and BIRC5. These results were confirmed in an in vivo rat model and proved independent of the presence of diabetes, suggesting that Empagliflozin may emerge as a new standalone treatment in HF. Although both drugs have very distinct indications and intrinsic MoAs, their benefits in slowing HF progression were remarkably similar, evidencing a key role for ventricular remodeling. Thus, we next aimed to explore cardiac remodeling to delineate a structured and clear picture of the complete post-MI remodeling process towards HF. Here, we identified the altered proteins most related to cardiac remodeling in both MI and HF, and used them to look for processes with sustained enrichment throughout MI progression. Once we had established which processes are affected at different stages and how they evolve during MI, we finally sought to identify the key proteins driving these signaling cascades. Chronic HF is the leading cause of inter-hospital mortality worldwide and constitutes an authentic pandemic. However, many of these patients either develop HF following an acute event or experience a drastic worsening of their condition during recurrent hospitalizations. Indeed, acute HF is the leading cause of intra-hospital mortality in more-developed countries, and cardiogenic shock (CS) represents its most aggressive form. Yet acute HF receives little attention compared to the chronic form of the disease. Using transcriptomic and advanced proteomics techniques, we first investigated new potential biomarkers to aid CS management, which remains the leading intra-hospital cardiovascular cause of death worldwide.
Assessing microRNAs and proteins differentially expressed in afflicted patients, we describe the current status of biomarker research in CS, and develop a new molecular score, the CS4P, to reliably predict the prognostic outcomes of these patients.