Selection of scientific literature on the topic "Linear encoders"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Consult the lists of recent articles, books, dissertations, reports, and other scholarly sources on the topic "Linear encoders".

Next to every work in the list of references there is an "Add to bibliography" option. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read its online annotation whenever the relevant parameters are available in the metadata.

Journal articles on the topic "Linear encoders"

1

Paredes, Ferran, Cristian Herrojo and Ferran Martín. „Position Sensors for Industrial Applications Based on Electromagnetic Encoders“. Sensors 21, no. 8 (April 13, 2021): 2738. http://dx.doi.org/10.3390/s21082738.

Annotation:
Optical and magnetic linear/rotary encoders are well-known systems traditionally used in industry for the accurate measurement of linear/angular displacements and velocities. Recently, a different approach for the implementation of linear/rotary encoders has been proposed. Such an approach uses electromagnetic signals, and the working principle of these electromagnetic encoders is very similar to that of optical encoders, i.e., pulse counting. Specifically, a transmission-line-based structure fed by a harmonic signal tuned to a certain frequency, the stator, is perturbed by encoder motion. Such an encoder consists of a linear or circular chain (or chains) of inclusions (metallic, dielectric, or apertures) on a dielectric substrate, rigid or flexible, and made of different materials, including plastics, organic materials, rubber, etc. The harmonic signal is amplitude modulated by the encoder chain, and the envelope function contains the information relative to the position and velocity. The paper mainly focuses on linear encoders based on metallic and dielectric inclusions. Moreover, it is shown that synchronous electromagnetic encoders, able to provide the quasi-absolute position (plus the velocity and direction of motion in some cases), can be implemented. Several prototype examples are reviewed in the paper, including encoders implemented by means of additive processes, such as 3D-printed and screen-printed encoders.
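To make the pulse-counting principle mentioned in this abstract concrete, the following minimal Python sketch counts pulses in a demodulated envelope and converts them to displacement and mean velocity. The pitch, threshold, and signal names are illustrative assumptions, not the signal-processing chain used by the authors.

```python
import numpy as np

def position_from_envelope(envelope, pitch_mm, sample_rate_hz, threshold=0.5):
    """Estimate displacement and mean velocity by counting envelope pulses.

    envelope       -- demodulated amplitude of the harmonic carrier (1-D array)
    pitch_mm       -- assumed spacing of the encoder inclusions (illustrative)
    sample_rate_hz -- sampling rate of the envelope signal
    threshold      -- normalized level separating "inclusion" from "gap"
    """
    # Normalize to [0, 1] so a single threshold works for any signal level.
    rng = envelope.max() - envelope.min()
    env = (envelope - envelope.min()) / (rng + 1e-12)
    above = env > threshold
    # A pulse is counted on every rising edge of the thresholded envelope.
    rising_edges = np.count_nonzero(above[1:] & ~above[:-1])
    displacement_mm = rising_edges * pitch_mm
    duration_s = len(envelope) / sample_rate_hz
    velocity_mm_s = displacement_mm / duration_s if duration_s > 0 else 0.0
    return displacement_mm, velocity_mm_s

# Example: a synthetic envelope with 10 pulses over one second.
t = np.linspace(0, 1, 10_000)
env = 0.5 + 0.5 * np.sign(np.sin(2 * np.pi * 10 * t))
print(position_from_envelope(env, pitch_mm=1.0, sample_rate_hz=10_000))
```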
2

Wesel, R. D., Xueting Liu, J. M. Cioffi and C. Komninakis. „Constellation labeling for linear encoders“. IEEE Transactions on Information Theory 47, no. 6 (2001): 2417–31. http://dx.doi.org/10.1109/18.945255.

3

Alejandre, I., and M. Artes. „Thermal non-linear behaviour in optical linear encoders“. International Journal of Machine Tools and Manufacture 46, no. 12-13 (October 2006): 1319–25. http://dx.doi.org/10.1016/j.ijmachtools.2005.10.010.

4

Yang, Shengtian, Thomas Honold, Yan Chen, Zhaoyang Zhang and Peiliang Qiu. „Constructing Linear Encoders With Good Spectra“. IEEE Transactions on Information Theory 60, no. 10 (October 2014): 5950–65. http://dx.doi.org/10.1109/tit.2014.2341560.

5

Dong, L. X., A. Subramanian, B. J. Nelson and Y. Sun. „Nanotube Encoders“. Solid State Phenomena 121-123 (March 2007): 1363–66. http://dx.doi.org/10.4028/www.scientific.net/ssp.121-123.1363.

Annotation:
Linear encoders for nanoscale position sensing based on vertical arrays of single multi-walled carbon nanotubes (MWNTs) are investigated from experimental, theoretical, and design perspectives. Vertically aligned single MWNTs are realized using a combination of e-beam lithography and plasma-enhanced chemical vapor deposition (PECVD) growth. Field emission properties of the array are investigated inside a scanning electron microscope (SEM) equipped with a 3-DOF nanorobotic manipulator with nanometer resolution functioning as a scanning anode. Lateral position of the scanning anode is sensed from the emission distribution. High resolution (best: 12.9 nm; practical: 38.0 nm) for lateral position sensing around an emitter has been realized.
6

Merino, S., A. Retolaza and I. Lizuain. „Linear optical encoders manufactured by imprint lithography“. Microelectronic Engineering 83, no. 4-9 (April 2006): 897–901. http://dx.doi.org/10.1016/j.mee.2006.01.018.

7

Alejandre, I., and M. Artes. „Real Thermal Coefficient in Optical Linear Encoders“. Experimental Techniques 28, no. 4 (July 2004): 18–22. http://dx.doi.org/10.1111/j.1747-1567.2004.tb00172.x.

8

Jovanović, Jelena, Dragan Denić and Uglješa Jovanović. „An Improved Linearization Circuit Used for Optical Rotary Encoders“. Measurement Science Review 17, no. 5 (October 1, 2017): 241–49. http://dx.doi.org/10.1515/msr-2017-0029.

Annotation:
Optical rotary encoders generate nonlinear sine and cosine signals in response to a change of the angular position being measured. Due to the nonlinear shape of the encoder output signals, encoder sensitivity to very small changes of angular position is low, causing a poor measurement accuracy level. To improve the optical encoder sensitivity and to increase its accuracy, an improved linearization circuit based on pseudo-linear signal generation and its further linearization with a two-stage piecewise-linear analog-to-digital converter is presented in this paper. The proposed linearization circuit is composed of a mixed-signal circuit, which generates the analog pseudo-linear signal and determines the first four bits of the final digital result, and the two-stage piecewise-linear analog-to-digital converter, which performs simultaneous linearization and digitalization of the pseudo-linear signal. As a result, the maximal value of the absolute measurement error equals 3.77168·10⁻⁵ rad (0.00216°) over the full measurement range of 2π rad.
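As a rough illustration of the two stages described above (pseudo-linear signal generation from the sine and cosine channels, followed by piecewise-linear correction), the Python sketch below uses one commonly assumed pseudo-linear function, s/(|s|+|c|), and a small interpolation table. The particular function and the 16-segment table are assumptions for illustration, not the mixed-signal circuit of the paper.

```python
import numpy as np

def pseudo_linear(sin_ch, cos_ch):
    """An assumed pseudo-linearization of quadrature signals:
    s/(|s|+|c|) is monotonic and roughly linear in the angle."""
    return sin_ch / (np.abs(sin_ch) + np.abs(cos_ch))

def build_correction_table(segments=16):
    """Tabulate the pseudo-linear value on a coarse angle grid so the residual
    nonlinearity can be removed by piecewise-linear interpolation (mimicking
    a two-stage piecewise-linear converter in software)."""
    theta = np.linspace(-np.pi / 2, np.pi / 2, segments + 1)
    return pseudo_linear(np.sin(theta), np.cos(theta)), theta

def linearized_angle(sin_ch, cos_ch, table):
    values, theta = table
    return np.interp(pseudo_linear(sin_ch, cos_ch), values, theta)

table = build_correction_table()
true_angle = 0.3                      # radians, within one quarter period
est = linearized_angle(np.sin(true_angle), np.cos(true_angle), table)
print(abs(est - true_angle))          # small residual from the 16-segment table
```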
9

Karim, Ahmad M., Hilal Kaya, Mehmet Serdar Güzel, Mehmet R. Tolun, Fatih V. Çelebi and Alok Mishra. „A Novel Framework Using Deep Auto-Encoders Based Linear Model for Data Classification“. Sensors 20, no. 21 (November 9, 2020): 6378. http://dx.doi.org/10.3390/s20216378.

Annotation:
This paper proposes a novel data classification framework, combining sparse auto-encoders (SAEs) and a post-processing system consisting of a linear system model relying on Particle Swarm Optimization (PSO) algorithm. All the sensitive and high-level features are extracted by using the first auto-encoder which is wired to the second auto-encoder, followed by a Softmax function layer to classify the extracted features obtained from the second layer. The two auto-encoders and the Softmax classifier are stacked in order to be trained in a supervised approach using the well-known backpropagation algorithm to enhance the performance of the neural network. Afterwards, the linear model transforms the calculated output of the deep stacked sparse auto-encoder to a value close to the anticipated output. This simple transformation increases the overall data classification performance of the stacked sparse auto-encoder architecture. The PSO algorithm allows the estimation of the parameters of the linear model in a metaheuristic policy. The proposed framework is validated by using three public datasets, which present promising results when compared with the current literature. Furthermore, the framework can be applied to any data classification problem by considering minor updates such as altering some parameters including input features, hidden neurons and output classes.
10

Gurauskis, Donatas, Artūras Kilikevičius and Sergejus Borodinas. „Experimental Investigation of Linear Encoder’s Subdivisional Errors under Different Scanning Speeds“. Applied Sciences 10, no. 5 (March 4, 2020): 1766. http://dx.doi.org/10.3390/app10051766.

Annotation:
Optical encoders are widely used in applications requiring precise displacement measurement and fluent motion control. To reach high positioning accuracy and repeatability, and to create a more stable speed-control loop, essential attention must be directed to the subdivisional error (SDE) of the used encoder. This error influences the interpolation process and restricts the ability to achieve a high resolution. The SDE could be caused by various factors, such as the particular design of the reading head and the optical scanning principle, quality of the measuring scale, any kind of relative orientation changes between the optical components caused by mechanical vibrations or deformations, or scanning speed. If the distorted analog signals are not corrected before interpolation, it is very important to know the limitations of the used encoder. The methodology described in this paper could be used to determine the magnitude of an SDE and its trend. This method is based on a constant-speed test and does not require high-accuracy reference. The performed experimental investigation of the standard optical linear encoder SDE under different scanning speeds revealed the linear relationship between the tested encoder’s traversing velocity and the error value. A more detailed investigation of the obtained results was done on the basis of fast Fourier transformation (FFT) to understand the physical nature of the SDE, and to consider how to improve the performance of the encoder.
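The constant-speed evaluation outlined above can be mimicked in software: sample the encoder at constant traversing speed, remove the linear trend, and inspect the FFT of the residual, whose components at the signal frequency and its harmonics indicate the SDE. The Python sketch below is illustrative only; the signal period and traverse speed are assumed values, not those of the tested encoder.

```python
import numpy as np

def subdivisional_error_spectrum(position_um, sample_rate_hz, signal_period_um=20.0):
    """Estimate the SDE spectrum from a constant-speed run.

    position_um      -- interpolated encoder readings sampled at constant speed
    sample_rate_hz   -- sampling frequency
    signal_period_um -- grating/signal period (assumed value for illustration)
    """
    n = len(position_um)
    t = np.arange(n) / sample_rate_hz
    # At constant speed the ideal position is a straight line; the residual
    # after a least-squares linear fit contains the subdivisional error.
    slope, offset = np.polyfit(t, position_um, 1)
    residual = position_um - (slope * t + offset)
    spectrum = np.abs(np.fft.rfft(residual)) / n * 2
    freqs = np.fft.rfftfreq(n, d=1 / sample_rate_hz)
    # The SDE shows up at the signal frequency v/period and its harmonics.
    signal_freq_hz = abs(slope) / signal_period_um
    return freqs, spectrum, signal_freq_hz

# Synthetic example: 5 mm/s traverse with a 0.1 um error at the signal period.
fs, v, T = 10_000, 5_000.0, 20.0
t = np.arange(0, 1, 1 / fs)
pos = v * t + 0.1 * np.sin(2 * np.pi * v / T * t)
freqs, spec, f_sig = subdivisional_error_spectrum(pos, fs, T)
print(f_sig, spec[np.argmin(np.abs(freqs - f_sig))])  # ~250 Hz, ~0.1 um
```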

Dissertations on the topic "Linear encoders"

1

Boyd, Phillip L. „Recovery of unknown constraint length and encoder polynomials for rate 1/2 linear convolutional encoders“. Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1999. http://handle.dtic.mil/100.2/ADA375935.

Annotation:
Thesis (M.S. in Electrical Engineering) Naval Postgraduate School, December 1999.
"December 1999". Thesis advisor(s): Clark Robertson, Tri Ha, Ray Ramey. Includes bibliographical references (p. 79). Also available online.
2

Balák, Pavel. „Konstrukce otočného lineárně přesuvného stolu s pevnou boční upínací deskou pro stroj FGU RT“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2013. http://www.nusl.cz/ntk/nusl-230818.

Annotation:
The aim of this diploma thesis is the design of a rotary-linear table with a side clamping plate. The rotary-linear table is mounted on a linear table that runs on profiled guideways. The work focuses on the design of the individual nodes and their calculations.
3

Rosenfeld, Carl. „Automatiserad provrörskarusell : Elektronikkonstruktion och utvärdering“. Thesis, Uppsala University, Signals and Systems Group, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-131153.

Annotation:

This thesis describes the work on an automated sample tube mover. It is a degree project carried out at the company Q-linea AB. A carousel-like construction with a stepper motor has been designed for the task of moving samples between a number of positions. A microcontroller has been programmed to control the movements and handle sensor data. LabVIEW has been used together with a USB camera in order to evaluate and test the system. The goal was to design a prototype that fulfils the required precision and timing, which was achieved. The thesis describes the work process and concludes with recommendations for further work.

4

Lee, Kwan Yee. „Analysis-by-synthesis linear predictive coding“. Thesis, University of Surrey, 1990. http://epubs.surrey.ac.uk/844188/.

Annotation:
Applications such as satellite and digital mobile radio systems (DMR) have gained widespread acceptance in recent years, and efficient digital processing techniques are gradually replacing the older analogue systems. An important subsystem of these applications is voiceband communication, especially digital speech encoding. Digital encoding of speech has been a focus of speech processing research for many years, and recently this activity, together with the rapid advances in digital hardware, has begun to produce realistic working algorithms. This is typified by the Pan-European DMR system which operates at 13Kbit/s. For applications operating below this coding capacity, sophisticated algorithms have been developed. A particular class of these, termed Analysis-by-Synthesis Linear Predictive Coding (ABS-LPC), has been a subject of active world-wide research. In this thesis, ABS-LPC algorithms are investigated with particular emphasis on the Code-Excited Linear Predictive coding (CELP) variant. The aim of the research is to produce high communication quality speech at 8Kbit/s and below by considering aspects of quantisation, computational complexity and robustness. The ABS-LPC algorithms operate by exploiting short-term and long-term correlations of speech signals. Line Spectral Frequency (LSF) representation of the short-term correlation is examined and various LSF derivations and quantisation procedures are presented. The variants of ABS-LPC are compared for their advantages and disadvantages to determine an algorithm suitable for in-depth analysis. The particular chosen variant, CELP, was pursued. A study on the importance of the long-term prediction, and the simplification of CELP without sacrificing speech quality is presented. The derived alternative approaches for the computation of the long-term predictor and the filter excitation have enabled the previously impractical CELP algorithm to produce high communication quality speech at rates below 8Kbit/s, and yet remain implementable in real time on a single chip. Refinements of the CELP algorithm followed in order to improve the coder towards higher speech quality at 4.8Kbit/s and below. This involved the examination of the weaknesses of the basic CELP algorithm, and alternative strategies to overcome these limitations are presented.
5

Guan, Jun [author], and Rainer [academic supervisor] Tutsch. „Interferometric Encoder for Linear Displacement Metrology / Jun Guan ; Betreuer: Rainer Tutsch“. Braunschweig: Technische Universität Braunschweig, 2013. http://d-nb.info/1175822043/34.

6

Dostál, Martin. „Konstrukční návrh lineární osy pro multifunkční obráběcí centrum“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-443239.

Annotation:
This diploma thesis is concerned with the design of a linear X axis for a multifunctional machining centre. The work presents a characterisation of machining centres, an overview of manufacturers, a list of the main construction components used in the linear axis together with their evaluation, and an assessment of various design options, which are then explained in more detail. These detailed design options include calculations and the subsequent choice of the feed-system components. Finally, an evaluation of the chosen option is provided, together with an economic assessment, a 3D model, and a mechanical drawing.
7

Karpf, Sebastian. „A system for time-encoded non-linear spectroscopy and microscopy“. Diss., Ludwig-Maximilians-Universität München, 2015. http://nbn-resolving.de/urn:nbn:de:bvb:19-183458.

Annotation:
Raman scattering can be applied to biological imaging to identify molecules in a sample without the need for adding labels. Raman microscopy can be used to visualize functional areas at the cellular level by means of a molecular contrast and is thus a highly desired imaging tool to identify diseases in biomedical imaging. The underlying Raman scattering effect is an optical inelastic scattering effect, where energy is transferred to molecular excitations. Molecules can be identified by monitoring this energy loss of the pump light, which corresponds to a vibrational or rotational energy of the scattering molecule. With Raman scattering, the molecules can be identified by their specific vibrational energies and even quantified due to the signal height. This technique has been known for almost a century and finds vast applications from biology to medicine and from chemistry to homeland security. A problem is the weak effect, where usually only one in a billion photons are scattered. Non-linear enhancement techniques can improve the signal by many orders of magnitude. This can be especially important for fast biomedical imaging of highly scattering media and for high resolution spectroscopy, surpassing the resolution of usual spectrometers. In this thesis a new system for stimulated Raman spectroscopy (SRS) and hyperspectral Raman microscopy with a rapidly wavelength swept laser is presented. A time-encoded (TICO) technique was developed that enables direct encoding of the Raman transition energy in time and direct detection of the intensity change on the Stokes laser by employing fast analogue-to-digital converter (ADC) cards (1.8 Gigasamples/s). Therefore, a homebuilt pump laser was developed based on a fiber-based master oscillator power amplifier (MOPA) at 1064 nm and extended by a Raman shifter that can shift the output wavelength to 1122 nm or 1186 nm. This is achieved by seeding the Raman amplification in the fiber with a narrowband 1122 nm laser diode. Surprisingly, this also leads to narrowband (0.4 cm-1) cascaded Raman shifts at 1186 nm and 1257 nm, which is in contrast to the usually broadband spontaneous Raman transition in fused silica. The underlying effect was examined and therefore concluded that it is most probably due to a combined four-wave-mixing and cascaded Raman scattering mechanism. Experimentally, the narrowband cascaded Raman line was used to record a high-resolution TICO-Raman spectrum of benzene. As Raman Stokes laser, a rapidly wavelength swept Fourier domain mode-locked (FDML) laser was employed which provides many advantages for SRS. The most important advantages of this fiber based laser are that it enables coverage of the whole range of relevant Raman energies from 250 cm-1 up to 3150 cm-1, while being a continuous wave (CW) laser, which at the same time allows high resolution (0.5 cm-1) spectroscopy. Further, it enables a new dual stage balanced detection which permits shot noise limited SRS measurements and, due to the well-defined wavelength sweep, the TICO-Raman technique directly provides high-quality Raman spectra with accurate Raman transition energy calibration. This setup was used for different applications, including Raman spectroscopy and non-linear microscopy. As results, broadband Raman spectra are presented and compared to a state-of-the-art spontaneous Raman spectrum. Furthermore, several spectroscopic features are explored. 
For first imaging results, samples were raster scanned with a translational stage and at each pixel a TICO-Raman spectrum acquired. This led to a hyperspectral Raman image which was transformed into a color-coded image with molecular contrast. Biological imaging of a plant stem is presented. The setup further allowed performing multi-photon absorption imaging by two-photon excited fluorescence (TPEF). In summary, this thesis presents the design, development and preliminary testing of a new and promising platform for spectroscopy and non-linear imaging. This setup holds the capability of biological multi-modal imaging, including modalities like optical coherence tomography (OCT), absorption spectroscopy, SRS, TPEF, second harmonic generation (SHG), third-harmonic generation (THG) and fluorescence lifetime imaging (FLIM). Amongst the most promising characteristics of this setup is the fiber-based design, paving the way for an endoscopic imaging setup. Already now, this makes it a robust, alignment-free, reliable and easy-to-use system.
8

Trnkócy, Tomáš. „Návrh a realizace testovacího zařízení manipulačního mechanismu vzorku pro elektronový mikroskop“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2013. http://www.nusl.cz/ntk/nusl-230625.

Annotation:
The thesis deals with the design and implementation of a test station for the sample manipulation mechanism of an electron microscope. The test station and its software provide measurement of several parameters of the mechanism, their statistical evaluation, and comparison against the specification. The goal is to create a comprehensive test station with a simple user interface that replaces the existing non-modular and unstable solution and extends it with tests of additional parameters.
9

Shirley, Matt, and n/a. „Characterisation of an 84 kb linear plasmid that encodes DDE cometabolism in Terrabacter sp. strain DDE-1“. University of Otago. Department of Microbiology & Immunology, 2006. http://adt.otago.ac.nz./public/adt-NZDU20060804.094902.

Annotation:
DDT, an extremely widely used organochlorine pesticide, was banned in most developed countries more than 30 years ago. However, DDT residues, including 1,1-dichloro-2,2-bis(4-chlorophenyl)ethylene (DDE), still persist in the environment and have been identified as priority pollutants due to their toxicity and their ability to bioaccumulate and biomagnify in the food chain. In particular, DDE was long believed to be "non-biodegradable", however some microorganisms have now been isolated that are able to metabolise DDE in pure culture. Terrabacter sp. strain DDE-1 was enriched from a DDT-contaminated agricultural soil from the Canterbury plains and is able to metabolise DDE to 4-chlorobenzoic acid when induced with biphenyl. The primary objective of this study was to identify the gene(s) responsible for Terrabacter sp. strain DDE-1's ability to metabolise DDE and, in particular, to investigate the hypothesis that DDE-1 degrades DDE cometabolically via a biphenyl degradation pathway. Catabolism of biphenyl by strain DDE-1 was demonstrated, and a biphenyl degradation (bph) gene cluster containing bphDA1A2A3A4BCST genes was identified. The bphDA1A2A3A4BC genes are predicted to encode a biphenyl degradation upper pathway for the degradation of biphenyl to benzoate and cis-2-hydroxypenta-2,4-dienoate and the bphST genes are predicted to encode a two-component signal transduction system involved in regulation of biphenyl catabolism. The bph gene cluster was found to be located on a linear plasmid, designated pBPH1. A plasmid-cured strain (MJ-2) was unable to catabolise both biphenyl and DDE, supporting the hypothesis that strain DDE-1 degrades DDE cometabolically via the biphenyl degradation pathway. Furthermore, preliminary evidence from DDE overlayer agar plate assays suggested that Pseudomonas aeruginosa carrying the strain DDE-1 bphA1A2A3A4BC genes is able to catabolise DDE when grown in the presence of biphenyl. A second objective of this study was to characterise pBPH1. The complete 84,054-bp sequence of the plasmid was determined. Annotation of the DNA sequence data revealed seventy-six ORFs predicted to encode proteins, four pseudogenes, and ten gene fragments. Putative functions were assigned to forty-two of the ORFs and pseudogenes. Besides biphenyl catabolism, the major functional classes of the predicted proteins were transposition, regulation, heavy metal transport/resistance, and plasmid maintenance and replication. It was shown that pBPH1 has the terminal structural features of an actinomycete invertron, including terminal proteins and terminal inverted repeats (TIRs). This is the first report detailing the nucleotide sequence and characterisation of a (linear) plasmid from the genus Terrabacter.
10

Oberhauser, Joseph Q. „Design, Construction, Control, and Analysis of Linear Delta Robot“. Ohio University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1460045979.


Books on the topic "Linear encoders"

1

Recovery of Unknown Constraint Length and Encoder Polynomials for Rate 1/2 Linear Convolutional Encoders. Storming Media, 1999.

2

Deruelle, Nathalie, and Jean-Philippe Uzan. The wave vector of light. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198786399.003.0022.

Annotation:
This chapter shows how simple world lines of zero length can describe an undulatory aspect of light—namely, its frequency. It first encodes the information about the frequency of a monochromatic light wave in the zeroth component of its wave vector. An alternative method of taking into account the wave nature of light is based on the fact that the emission of successive light corpuscles by the source also defines the period of a light signal. To illustrate, the chapter provides the example of a light source and a receiver moving along the X axis of a frame S. Finally, this chapter illustrates the idea of a particle horizon as well as the limits of validity of the spectral shift formulas introduced in the chapter by the example of two objects which exchange light signals.
3

Koch, Christof. Biophysics of Computation. Oxford University Press, 1998. http://dx.doi.org/10.1093/oso/9780195104912.001.0001.

Annotation:
Neural network research often builds on the fiction that neurons are simple linear threshold units, completely neglecting the highly dynamic and complex nature of synapses, dendrites, and voltage-dependent ionic currents. Biophysics of Computation: Information Processing in Single Neurons challenges this notion, using richly detailed experimental and theoretical findings from cellular biophysics to explain the repertoire of computational functions available to single neurons. The author shows how individual nerve cells can multiply, integrate, or delay synaptic inputs and how information can be encoded in the voltage across the membrane, in the intracellular calcium concentration, or in the timing of individual spikes. Key topics covered include the linear cable equation; cable theory as applied to passive dendritic trees and dendritic spines; chemical and electrical synapses and how to treat them from a computational point of view; nonlinear interactions of synaptic input in passive and active dendritic trees; the Hodgkin-Huxley model of action potential generation and propagation; phase space analysis; linking stochastic ionic channels to membrane-dependent currents; calcium and potassium currents and their role in information processing; the role of diffusion, buffering and binding of calcium, and other messenger systems in information processing and storage; short- and long-term models of synaptic plasticity; simplified models of single cells; stochastic aspects of neuronal firing; the nature of the neuronal code; and unconventional models of sub-cellular computation. Biophysics of Computation: Information Processing in Single Neurons serves as an ideal text for advanced undergraduate and graduate courses in cellular biophysics, computational neuroscience, and neural networks, and will appeal to students and professionals in neuroscience, electrical and computer engineering, and physics.
4

Menon, Deepa U. Autism and Intellectual Disabilities. Oxford University Press, 2017. http://dx.doi.org/10.1093/med/9780199937837.003.0053.

Annotation:
PTEN (phosphatase and tensin homologue) on chromosome 10q23.3 is a tumor suppressor gene that encodes for a dual specificity phosphatase that regulates the phosphatidylinositol 3- kinase pathway and has an important role in brain development by affecting neuronal survival, neurite outgrowth, synaptic plasticity, and learning memory. Germline mutations of the PTEN gene have been implicated in a group of related tumor syndromes with autosomal dominant inheritance and variable expression and include the Cowden syndrome, Bannayan-Riley-Ruvalcaba syndrome, Proteus syndrome, and Juvenile Polyposis syndrome. These syndromes are collectively called the PTEN hamartoma tumor syndromes (PHTS) because they have a predisposition to tumors and hamartomas. PTEN germ line mutations have also been recently linked to autism and macrocephaly and the prevalence of PTEN mutation in children with autism spectrum disorder, and macrocephaly is reported to range from 1.1% to 16.7%.
5

Jackendoff, Ray. Constructions in the Parallel Architecture. Edited by Thomas Hoffmann and Graeme Trousdale. Oxford University Press, 2013. http://dx.doi.org/10.1093/oxfordhb/9780195396683.013.0005.

Annotation:
This chapter discusses what the Parallel Architecture has taken from Construction Grammar and what it might contribute to Construction Grammar. After outlining the fundamentals of the architecture, it explains why rules of grammar should be formulated as lexical items encoded as pieces of structure: there is no hard line between words, constructions, and standard rules. The chapter also argues for a “heterogeneous” variety of Construction Grammar, which does not insist that every syntactic construction is invested with meaning. Finally, it discusses the crucial issue of semiproductivity, usually thought to be a property of morphology, showing that constructions too can be either productive or semiproductive.
6

Duffley, Patrick. Linguistic Meaning Meets Linguistic Form. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198850700.001.0001.

Annotation:
This book steers a middle course between two opposing conceptions that currently dominate the field of semantics, the logical and cognitive approaches. It brings to light the inadequacies of both frameworks, and argues along with the Columbia School that linguistic semantics must be grounded on the linguistic sign itself and the meaning it conveys across the full range of its uses. The book offers 12 case studies demonstrating the explanatory power of a sign-based semantics, dealing with topics such as complementation with aspectual and causative verbs, control and raising, wh- words, full-verb inversion, and existential-there constructions. It calls for a radical revision of the semantics/pragmatics interface, proposing that the dividing-line be drawn between semiologically-signified notional content (i.e. what is linguistically encoded) and non-semiologically-signified notional content (i.e. what is not encoded but still communicated). This highlights a dimension of embodiment that concerns the basic design architecture of human language itself: the ineludable fact that the fundamental relation on which language is based is the association between a mind-engendered meaning and a bodily produced sign. It is argued that linguistic analysis often disregards this fact and treats meaning on the level of the sentence or the construction, rather than on that of the lower-level linguistic items where the linguistic sign is stored in a stable, permanent, and direct relation with its meaning outside of any particular context. Building linguistic analysis up from the ground level provides it with a more solid foundation and increases its explanatory power.
7

Voll, Reinhard E., and Barbara M. Bröker. Innate vs acquired immunity. Oxford University Press, 2013. http://dx.doi.org/10.1093/med/9780199642489.003.0048.

Annotation:
The innate and the adaptive immune system efficiently cooperate to protect us from infections. The ancient innate immune system, dating back to the first multicellular organisms, utilizes phagocytic cells, soluble antimicrobial peptides, and the complement system for an immediate line of defence against pathogens. Using a limited number of germline-encoded pattern recognition receptors including the Toll-like, RIG-1-like, and NOD-like receptors, the innate immune system recognizes so-called pathogen-associated molecular patterns (PAMPs). PAMPs are specific for groups of related microorganisms and represent highly conserved, mostly non-protein molecules essential for the pathogens' life cycles. Hence, escape mutants strongly reduce the pathogen's fitness. An important task of the innate immune system is to distinguish between harmless antigens and potentially dangerous pathogens. Ideally, innate immune cells should activate the adaptive immune cells only in the case of invading pathogens. The evolutionarily rather new adaptive immune system, which can be found in jawed fish and higher vertebrates, needs several days to mount an efficient response upon its first encounter with a certain pathogen. As soon as antigen-specific lymphocyte clones have been expanded, they powerfully fight the pathogen. Importantly, memory lymphocytes can often protect us from reinfections. During the development of T and B lymphocytes, many millions of different receptors are generated by somatic recombination and hypermutation of gene segments making up the antigen receptors. This process carries the inherent risk of autoimmunity, causing most inflammatory rheumatic diseases. In contrast, inadequate activation of the innate immune system, especially activation of the inflammasomes, may cause autoinflammatory syndromes.

Book chapters on the topic "Linear encoders"

1

Touir, Ameur, and Brigitte Kerhervé. „Pattern Translation in Images Encoded by Linear Quadtree“. In Modeling in Computer Graphics, 231–46. Tokyo: Springer Japan, 1991. http://dx.doi.org/10.1007/978-4-431-68147-2_15.

2

Zuo, W., Zhi Jing Feng, S. T. Huang and G. M. Zhao. „Electronic Subdividing Method for Linear Encoder of High Speed Position Detection“. In Advances in Machining & Manufacturing Technology VIII, 230–34. Stafa: Trans Tech Publications Ltd., 2006. http://dx.doi.org/10.4028/0-87849-999-7.230.

3

Schwabacher, Alan W., Christopher W. Johnson and Peter Geissinger. „Linear Spatially Encoded Combinatorial Chemistry with Fourier Transform Library Analysis“. In High-Throughput Analysis, 93–104. Boston, MA: Springer US, 2003. http://dx.doi.org/10.1007/978-1-4419-8989-5_6.

4

Luo, Jiapeng, Lei Cao and Jianhua Xu. „A Non-linear Label Compression Coding Method Based on Five-Layer Auto-Encoder for Multi-label Classification“. In Neural Information Processing, 415–24. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46675-0_45.

5

Leutgeb, Lorenz, Georg Moser and Florian Zuleger. „ATLAS: Automated Amortised Complexity Analysis of Self-adjusting Data Structures“. In Computer Aided Verification, 99–122. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81688-9_5.

Annotation:
Being able to argue about the performance of self-adjusting data structures such as splay trees has been a main objective, when Sleator and Tarjan introduced the notion of amortised complexity. Analysing these data structures requires sophisticated potential functions, which typically contain logarithmic expressions. Possibly for these reasons, and despite the recent progress in automated resource analysis, they have so far eluded automation. In this paper, we report on the first fully-automated amortised complexity analysis of self-adjusting data structures. Following earlier work, our analysis is based on potential function templates with unknown coefficients. We make the following contributions: 1) We encode the search for concrete potential function coefficients as an optimisation problem over a suitable constraint system. Our target function steers the search towards coefficients that minimise the inferred amortised complexity. 2) Automation is achieved by using a linear constraint system in conjunction with suitable lemmata schemes that encapsulate the required non-linear facts about the logarithm. We discuss our choices that achieve a scalable analysis. 3) We present our tool ATLAS and report on experimental results for splay trees, splay heaps and pairing heaps. We completely automatically infer complexity estimates that match previous results (obtained by sophisticated pen-and-paper proofs), and in some cases even infer better complexity estimates than previously published.
6

Khedr, Haitham, James Ferlez and Yasser Shoukry. „PEREGRiNN: Penalized-Relaxation Greedy Neural Network Verifier“. In Computer Aided Verification, 287–300. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81685-8_13.

Annotation:
Neural Networks (NNs) have increasingly apparent safety implications commensurate with their proliferation in real-world applications: both unanticipated as well as adversarial misclassifications can result in fatal outcomes. As a consequence, techniques of formal verification have been recognized as crucial to the design and deployment of safe NNs. In this paper, we introduce a new approach to formally verify the most commonly considered safety specifications for ReLU NNs – i.e. polytopic specifications on the input and output of the network. Like some other approaches, ours uses a relaxed convex program to mitigate the combinatorial complexity of the problem. However, unique in our approach is the way we use a convex solver not only as a linear feasibility checker, but also as a means of penalizing the amount of relaxation allowed in solutions. In particular, we encode each ReLU by means of the usual linear constraints, and combine this with a convex objective function that penalizes the discrepancy between the output of each neuron and its relaxation. This convex function is further structured to force the largest relaxations to appear closest to the input layer; this provides the further benefit that the most “problematic” neurons are conditioned as early as possible, when conditioning layer by layer. This paradigm can be leveraged to create a verification algorithm that is not only faster in general than competing approaches, but is also able to verify considerably more safety properties; we evaluated PEREGRiNN on a standard MNIST robustness verification suite to substantiate these claims.
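For readers wondering what "the usual linear constraints" for a ReLU typically look like, the triangle relaxation commonly used in such verifiers is shown below as a hedged illustration (assuming known pre-activation bounds l < 0 < u); it is not necessarily the exact encoding used by PEREGRiNN. Here x is the neuron's pre-activation and ŷ the relaxed output, and a convex objective can then penalize how far ŷ sits above the exact ReLU value.

```latex
\hat{y} \ge 0, \qquad
\hat{y} \ge x, \qquad
\hat{y} \le \frac{u\,(x - l)}{u - l}, \qquad
l \le x \le u, \quad l < 0 < u .
```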
7

Jaminet, Jean, Gabriel Esquivel and Shane Bugni. „Serlio and Artificial Intelligence: Problematizing the Image-to-Object Workflow“. In Proceedings of the 2021 DigitalFUTURES, 3–12. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-5983-6_1.

Annotation:
Virtual design production demands that information be increasingly encoded and decoded with image compression technologies. Since the Renaissance, the discourses of language and drawing and their actuation by the classical disciplinary treatise have been fundamental to the production of knowledge within the building arts. These early forms of data compression provoke reflection on theory and technology as critical counterparts to perception and imagination unique to the discipline of architecture. This research examines the illustrated expositions of Sebastiano Serlio through the lens of artificial intelligence (AI). The mimetic powers of technological data storage and retrieval and Serlio’s coded operations of orthographic projection drawing disclose other aesthetic and formal logics for architecture and its image that exist outside human perception. Examination of aesthetic communication theory provides a conceptual dimension of how architecture and artificial intelligent systems integrate both analog and digital modes of information processing. Tools and methods are reconsidered to propose alternative AI workflows that complicate normative and predictable linear design processes. The operative model presented demonstrates how augmenting and interpreting layered generative adversarial networks drive an integrated parametric process of three-dimensionalization. Concluding remarks contemplate the role of human design agency within these emerging modes of creative digital production.
8

Rüttgers, Mario, Seong-Ryong Koh, Jenia Jitsev, Wolfgang Schröder and Andreas Lintermann. „Prediction of Acoustic Fields Using a Lattice-Boltzmann Method and Deep Learning“. In Lecture Notes in Computer Science, 81–101. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-59851-8_6.

Annotation:
Using traditional computational fluid dynamics and aeroacoustics methods, the accurate simulation of aeroacoustic sources requires high compute resources to resolve all necessary physical phenomena. In contrast, once trained, artificial neural networks such as deep encoder-decoder convolutional networks allow the prediction of aeroacoustics at lower cost and, depending on the quality of the employed network, also at high accuracy. The architecture for such a neural network is developed to predict the sound pressure level in a 2D square domain. It is trained by numerical results from up to 20,000 GPU-based lattice-Boltzmann simulations that include randomly distributed rectangular and circular objects, and monopole sources. Types of boundary conditions, the monopole locations, and cell distances for objects and monopoles serve as input to the network. Parameters are studied to tune the predictions and to increase their accuracy. The complexity of the setup is successively increased along three cases and the impact of the number of feature maps, the type of loss function, and the number of training data on the prediction accuracy is investigated. An optimal choice of the parameters leads to network-predicted results that are in good agreement with the simulated findings. This is corroborated by negligible differences of the sound pressure level between the simulated and the network-predicted results along characteristic lines and by small mean errors.
9

Torii, Akihiro, Kazuhiro Hane and Shigeru Okuma. „Multi-Probe Force Microscope for a Precise Linear Encoder“. In International Progress in Precision Engineering, 1003–6. Elsevier, 1993. http://dx.doi.org/10.1016/b978-0-7506-9484-1.50115-9.

10

Pollack, Martha E., and Ioannis Tsamardinos. „Efficiently Dispatching Plans Encoded as Simple Temporal Problems“. In Intelligent Techniques for Planning, 296–319. IGI Global, 2005. http://dx.doi.org/10.4018/978-1-59140-450-7.ch009.

Annotation:
The Simple Temporal Problem (STP) formalism was developed to encode flexible quantitative temporal constraints, and it has been adopted as a commonly used framework for temporal plans. This chapter addresses the question of how to automatically dispatch a plan encoded as an STP, that is, how to determine when to perform its constituent actions so as to ensure that all of its temporal constraints are satisfied. After reviewing the theory of STPs and their use in encoding plans, we present detailed descriptions of the algorithms that have been developed to date in the literature on STP dispatch. We distinguish between off-line and online dispatch, and present both basic algorithms for dispatch and techniques for improving their efficiency in time-critical situations.
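To make the dispatch task concrete, the Python sketch below builds the minimal network of a toy STP with Floyd-Warshall and then assigns execution times while propagating the remaining time windows. The example constraints are invented for illustration, and the naive propagation scheme stands in for the more efficient dispatchers the chapter describes.

```python
from itertools import product

INF = float("inf")

def dispatch_stp(n, constraints):
    """Greedy dispatch of a Simple Temporal Problem.

    n           -- number of time points; point 0 is the reference 'start'
    constraints -- list of (i, j, lo, hi) meaning lo <= t_j - t_i <= hi
    Returns an assignment of times respecting all constraints (if consistent).
    """
    # Distance-graph encoding: edge i->j with weight hi, edge j->i with -lo.
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, j, lo, hi in constraints:
        d[i][j] = min(d[i][j], hi)
        d[j][i] = min(d[j][i], -lo)
    # Floyd-Warshall gives the all-pairs minimal network.
    for k, i, j in product(range(n), repeat=3):
        d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    if any(d[i][i] < 0 for i in range(n)):
        raise ValueError("inconsistent STP")
    # Initial execution windows relative to the reference point (t_0 = 0).
    lb = [-d[i][0] for i in range(n)]
    ub = [d[0][i] for i in range(n)]
    times, remaining = {0: 0.0}, set(range(1, n))
    while remaining:
        # Pick the unexecuted event with the earliest lower bound ...
        i = min(remaining, key=lambda x: lb[x])
        t = lb[i]
        times[i] = t
        remaining.remove(i)
        # ... and propagate the chosen time to the remaining windows.
        for j in remaining:
            lb[j] = max(lb[j], t - d[j][i])
            ub[j] = min(ub[j], t + d[i][j])
    return times

# Invented example: action A starts 1-3 after start, B starts 2-4 after A.
print(dispatch_stp(3, [(0, 1, 1, 3), (1, 2, 2, 4)]))
```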

Conference papers on the topic "Linear encoders"

1

Chen, Brian, and Jen-Yuan (James) Chang. „Mechatronic Integration of Magnetic Linear Encoding Medium Manufacturing“. In ASME 2014 Conference on Information Storage and Processing Systems. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/isps2014-6936.

Annotation:
Linear encoders have been widely used for position control in industry, especially in the machinery industry. The purpose of using linear encoders is to provide precise position control in dynamic applications. Furthermore, using linear encoders helps minimize errors caused by human or mechanical problems such as backlash and thermal expansion [2]. There are various types of linear encoders, such as mechanical, optical, and magnetic. Nevertheless, magnetic encoders are able to withstand harsh environments such as oil, grease, and dust much more effectively than the rest. Magnetic encoders have several advantageous qualities: low cost, fast response, and high reliability [1, 3]. Figure 1 shows the magnetic field of a magnetic scale, where the yellow curves indicate the change of magnetic poles. The upper half of the scale is the incremental mark and the bottom half is the reference mark. Prior to magnetization, the scale has only the incremental mark, and the magnetizing process magnetizes the bottom half of the incremental mark into the reference mark, as shown in Fig. 1.
2

Hosszu, Eva, Christina Fragouli and Janos Tapolcai. „Combinatorial error detection in linear encoders“. In 2015 IEEE 16th International Conference on High-Performance Switching and Routing (HPSR). IEEE, 2015. http://dx.doi.org/10.1109/hpsr.2015.7483114.

3

Pulanco, William Michael, and Stephen Derby. „Design and Implementation of a Linear Testbed for Encoder Validation“. In ASME 1996 Design Engineering Technical Conferences and Computers in Engineering Conference. American Society of Mechanical Engineers, 1996. http://dx.doi.org/10.1115/96-detc/cie-1438.

Annotation:
It was desired to build a test station that would test the maximum operating speed and assist in the adjustment of the potentiometer during assembly of the linear encoders. A linear slide table provides the best linear motion for testing encoders. It was also decided that a belt drive system would allow for the desired performance. The belt drive system is connected to a motor which needs some type of control. Many different options were considered for controlling the motor and some were tried, but the final solution was to use a stand-alone controller that takes input from a control panel. The total test station includes a motor & encoder, controller, amplifier, power supply, and interface board. The controller is a stand-alone unit that is programmed from the RS232 port on a personal computer. The controller can also send data back through the RS232 port for analysis purposes.
4

Ng, Kim-Gau, and Joey K. Parker. „A Two-Encoder Finger Position Sensing System for a Two-Degree-of-Freedom Robot Hand“. In ASME 1991 Design Technical Conferences. American Society of Mechanical Engineers, 1991. http://dx.doi.org/10.1115/detc1991-0171.

Annotation:
As part of a robot hand with two independently controlled fingers, each having one degree of freedom, a novel two-encoder position sensing system was designed for each of the fingers. In this system, a combination of a linear encoder and a rotary encoder is used to indicate finger position. The linear encoder provides coarse measurements while the rotary encoder provides fine measurements between two adjacent linear encoder counts. This two-encoder system permits more precise measurements than a system with only the linear encoder. The two encoders are connected to an IBM PC through an interface system. This paper presents the complete design and implementation of this two-encoder position sensing system.
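The coarse/fine combination described above reduces to a small piece of arithmetic. The following hedged Python sketch shows one way to merge the two counts; the pitch and counts-per-pitch values are illustrative assumptions, not the parameters of the original hand.

```python
def finger_position_mm(linear_counts, rotary_counts,
                       linear_pitch_mm=0.5, rotary_counts_per_pitch=1024):
    """Combine a coarse linear-encoder count with a fine rotary-encoder count.

    The linear encoder resolves position to one pitch; the rotary encoder
    subdivides the interval between two adjacent linear counts.
    (Pitch and counts-per-pitch are illustrative, not the values of the
    original two-finger hand.)
    """
    coarse = linear_counts * linear_pitch_mm
    fine = (rotary_counts % rotary_counts_per_pitch) / rotary_counts_per_pitch
    return coarse + fine * linear_pitch_mm

# 12 full linear counts plus 256/1024 of a pitch -> 6.0 mm + 0.125 mm = 6.125 mm
print(finger_position_mm(12, 256))
```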
5

Yamaguchi, Ichirou, and Tadashige Fujita. „Linear And Rotary Encoders Using Electronic Speckle Correlation“. In 33rd Annual Technical Symposium, edited by Ryszard J. Pryputniewicz. SPIE, 1990. http://dx.doi.org/10.1117/12.962748.

6

Leviton, Douglas B. „New ultrahigh-sensitivity absolute, linear, and rotary encoders“. In SPIE's International Symposium on Optical Science, Engineering, and Instrumentation, edited by Edward W. Taylor. SPIE, 1998. http://dx.doi.org/10.1117/12.326695.

7

Prokofev, Aleksandr V., Aleksandr N. Timofeev, Sergey V. Mednikov and Elena A. Sycheva. „Power calculation of linear and angular incremental encoders“. In SPIE Photonics Europe, edited by Frank Wyrowski, John T. Sheridan and Youri Meuret. SPIE, 2016. http://dx.doi.org/10.1117/12.2227336.

8

Borodzhieva, Adriana Naydenova. „FPGA Implementation of Systematic Linear Block Encoders for Educational Purposes“. In 2019 X National Conference with International Participation (ELECTRONICA). IEEE, 2019. http://dx.doi.org/10.1109/electronica.2019.8825652.

9

Halbgewachs, Clemens, Thomas J. Kentischer, Karsten Sändig, Joerg Baumgartner, Alexander Bell, Andreas Fischer, Stefan Funk et al. „Qualification of HEIDENHAIN linear encoders for picometer resolution metrology in VTF Etalons“. In SPIE Astronomical Telescopes + Instrumentation, edited by Christopher J. Evans, Luc Simard and Hideki Takami. SPIE, 2016. http://dx.doi.org/10.1117/12.2232297.

10

Chee, Yeow Meng, Han Mao Kiah and Tuan Thanh Nguyen. „Linear-Time Encoders for Codes Correcting a Single Edit for DNA-Based Data Storage“. In 2019 IEEE International Symposium on Information Theory (ISIT). IEEE, 2019. http://dx.doi.org/10.1109/isit.2019.8849643.


Reports of organizations on the topic "Linear encoders"

1

Kang, G. S., and L. J. Fransen. Low-Bit Rate Speech Encoders Based on Line-Spectrum Frequencies (LSFs). Fort Belvoir, VA: Defense Technical Information Center, January 1985. http://dx.doi.org/10.21236/ada150518.

2

Yesha, Yaacov. Channel coding for code excited linear prediction (CELP) encoded speech in mobile radio applications. Gaithersburg, MD: National Institute of Standards and Technology, 1994. http://dx.doi.org/10.6028/nist.ir.5503.

3

Shu, Deming. Development of a laser Doppler displacement encoder system with ultra-low-noise-level for linear displacement measurement with subnanometer resolution - Final CRADA Report. Office of Scientific and Technical Information (OSTI), January 2016. http://dx.doi.org/10.2172/1334289.

4

Karlstrom, Karl, Laura Crossey, Allyson Matthis and Carl Bowman. Telling time at Grand Canyon National Park: 2020 update. National Park Service, April 2021. http://dx.doi.org/10.36967/nrr-2285173.

Annotation:
Grand Canyon National Park is all about time and timescales. Time is the currency of our daily life, of history, and of biological evolution. Grand Canyon’s beauty has inspired explorers, artists, and poets. Behind it all, Grand Canyon’s geology and sense of timelessness are among its most prominent and important resources. Grand Canyon has an exceptionally complete and well-exposed rock record of Earth’s history. It is an ideal place to gain a sense of geologic (or deep) time. A visit to the South or North rims, a hike into the canyon of any length, or a trip through the 277-mile (446-km) length of Grand Canyon are awe-inspiring experiences for many reasons, and they often motivate us to look deeper to understand how our human timescales of hundreds and thousands of years overlap with Earth’s many timescales reaching back millions and billions of years. This report summarizes how geologists tell time at Grand Canyon, and the resultant “best” numeric ages for the canyon’s strata based on recent scientific research. By best, we mean the most accurate and precise ages available, given the dating techniques used, geologic constraints, the availability of datable material, and the fossil record of Grand Canyon rock units. This paper updates a previously-published compilation of best numeric ages (Mathis and Bowman 2005a; 2005b; 2007) to incorporate recent revisions in the canyon’s stratigraphic nomenclature and additional numeric age determinations published in the scientific literature. From bottom to top, Grand Canyon’s rocks can be ordered into three “sets” (or primary packages), each with an overarching story. The Vishnu Basement Rocks were once tens of miles deep as North America’s crust formed via collisions of volcanic island chains with the pre-existing continent between 1,840 and 1,375 million years ago. The Grand Canyon Supergroup contains evidence for early single-celled life and represents basins that record the assembly and breakup of an early supercontinent between 729 and 1,255 million years ago. The Layered Paleozoic Rocks encode stories, layer by layer, of dramatic geologic changes and the evolution of animal life during the Paleozoic Era (period of ancient life) between 270 and 530 million years ago. In addition to characterizing the ages and geology of the three sets of rocks, we provide numeric ages for all the groups and formations within each set. Nine tables list the best ages along with information on each unit’s tectonic or depositional environment, and specific information explaining why revisions were made to previously published numeric ages. Photographs, line drawings, and diagrams of the different rock formations are included, as well as an extensive glossary of geologic terms to help define important scientific concepts. The three sets of rocks are separated by rock contacts called unconformities formed during long periods of erosion. This report unravels the Great Unconformity, named by John Wesley Powell 150 years ago, and shows that it is made up of several distinct erosion surfaces. The Great Nonconformity is between the Vishnu Basement Rocks and the Grand Canyon Supergroup. The Great Angular Unconformity is between the Grand Canyon Supergroup and the Layered Paleozoic Rocks. Powell’s term, the Great Unconformity, is used for contacts where the Vishnu Basement Rocks are directly overlain by the Layered Paleozoic Rocks. 
The time missing at these and other unconformities within the sets is also summarized in this paper—a topic that can be as interesting as the time recorded. Our goal is to provide a single up-to-date reference that summarizes the main facets of when the rocks exposed in the canyon’s walls were formed and their geologic history. This authoritative and readable summary of the age of Grand Canyon rocks will hopefully be helpful to National Park Service staff including resource managers and park interpreters at many levels of geologic understandings...
