
Journal articles on the topic "Camerana"

Create a precise citation in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic "Camerana".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Gómez Urdáñez, José Luis. "Subsistencia y descapitalización en el Camero Viejo al final del Antiguo Régimen". Brocar. Cuadernos de Investigación Histórica, no. 12 (June 28, 1986): 103–40. http://dx.doi.org/10.18172/brocar.1834.

Abstract
José Luis Gómez Urdáñez has studied the agriculture of the Cameros in order to demonstrate its importance, at a time when the basic studies of the area have dealt above all with livestock farming. He points out the buffering role of agricultural production and the unequal distribution of livestock ownership, as well as the weight of the decapitalization imposed by the large livestock owners, three key factors for understanding the economic crisis of the Cameros between the mid-eighteenth century and the first third of the nineteenth.
2

Melón Jiménez, Miguel Ángel. "De los Cameros a Extremadura: historia y comportamiento de los ganaderos riojanos en tierras de Cáceres (1720-1800)". Brocar. Cuadernos de Investigación Histórica, no. 12 (June 28, 1986): 141–58. http://dx.doi.org/10.18172/brocar.1835.

Abstract
In this article, Miguel Ángel Melón studies how the demographic expansion and agrarian advance of eighteenth-century Extremadura hindered transhumance, especially that of the Cameros. The article complements the previous one by contributing, with abundant documentation, causes external to the phenomenon of the Cameros crisis, while confirming the growth and strength of the movement of struggle for land in Cáceres and its district.
3

Cole, Selina R. "Phylogeny and morphologic evolution of the Ordovician Camerata (Class Crinoidea, Phylum Echinodermata)". Journal of Paleontology 91, no. 4 (February 9, 2017): 815–28. http://dx.doi.org/10.1017/jpa.2016.137.

Abstract
The subclass Camerata (Crinoidea, Echinodermata) is a major group of Paleozoic crinoids that represents an early divergence in the evolutionary history and morphologic diversification of class Crinoidea, yet phylogenetic relationships among early camerates remain unresolved. This study conducted a series of quantitative phylogenetic analyses using parsimony methods to infer relationships of all well-preserved Ordovician camerate genera (52 taxa), establish the branching sequence of early camerates, and test the monophyly of traditionally recognized higher taxa, including orders Monobathrida and Diplobathrida. The first phylogenetic analysis identified a suitable outgroup for rooting the Ordovician camerate tree and assessed affinities of the atypical dicyclic family Reteocrinidae. The second analysis inferred the phylogeny of all well-preserved Ordovician camerate genera. Inferred phylogenies confirm: (1) the Tremadocian genera Cnemecrinus and Eknomocrinus are sister to the Camerata; (2) as historically defined, orders Monobathrida and Diplobathrida do not represent monophyletic groups; (3) with minimal revision, Monobathrida and Diplobathrida can be re-diagnosed to represent monophyletic clades; (4) family Reteocrinidae is more closely related to camerates than to other crinoid groups currently recognized at the subclass level; and (5) several genera in subclass Camerata represent stem taxa that cannot be classified as either true monobathrids or true diplobathrids. The clade containing Monobathrida and Diplobathrida, as recognized herein, is termed Eucamerata to distinguish its constituent taxa from more basally positioned taxa, termed stem eucamerates. The results of this study provide a phylogenetic framework for revising camerate classification, elucidating patterns of morphologic evolution, and informing outgroup selection for future phylogenetic analyses of post-Ordovician camerates.
4

Anjum, Nadeem. "Camera Localization in Distributed Networks Using Trajectory Estimation". Journal of Electrical and Computer Engineering 2011 (2011): 1–13. http://dx.doi.org/10.1155/2011/604647.

Abstract
This paper presents an algorithm for camera localization using trajectory estimation (CLUTE) in a distributed network of nonoverlapping cameras. The algorithm recovers the extrinsic calibration parameters, namely, the relative position and orientation of the camera network on a common ground plane coordinate system. We first model the observed trajectories in each camera's field of view using Kalman filtering, then we use this information to estimate the missing trajectory information in the unobserved areas by fusing the results of a forward and backward linear regression estimation from adjacent cameras. These estimated trajectories are then filtered and used to recover the relative position and orientation of the cameras by analyzing the estimated and observed exit and entry points of an object in each camera's field of view. The final configuration of the network is established by considering one camera as a reference and by adjusting the remaining cameras with respect to this reference. We demonstrate the algorithm on both simulated and real data and compare the results with state-of-the-art approaches. The experimental results show that the proposed algorithm is more robust to noisy and missing data and in case of camera failure.
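As an illustrative aside (not code from the cited paper), the forward/backward fusion step described above might be sketched as follows; the array layout, the linear fits and the time-based blending weights are assumptions made for the example.

```python
import numpy as np

def extrapolate_gap(track_a, track_b, t_gap):
    """Estimate positions in an unobserved gap by fusing a forward linear
    fit of the track leaving camera A with a backward linear fit of the
    track entering camera B (illustrative sketch only).

    track_a, track_b : arrays of shape (N, 3) with columns (t, x, y)
    t_gap            : 1-D array of time stamps inside the gap
    """
    def linear_fit(track):
        t, x, y = track[:, 0], track[:, 1], track[:, 2]
        # Fit x(t) and y(t) with degree-1 polynomials.
        return np.polyfit(t, x, 1), np.polyfit(t, y, 1)

    (ax, ay), (bx, by) = linear_fit(track_a), linear_fit(track_b)

    fwd = np.stack([np.polyval(ax, t_gap), np.polyval(ay, t_gap)], axis=1)
    bwd = np.stack([np.polyval(bx, t_gap), np.polyval(by, t_gap)], axis=1)

    # Weight each prediction by its proximity to the observed segments, so the
    # forward fit dominates near camera A and the backward fit near camera B.
    w = (t_gap - track_a[-1, 0]) / (track_b[0, 0] - track_a[-1, 0])
    w = w[:, None]
    return (1.0 - w) * fwd + w * bwd

if __name__ == "__main__":
    # Example: an object leaves camera A around t=4 and reappears in camera B at t=8.
    a = np.array([[0, 0.0, 0.0], [1, 1.0, 0.5], [2, 2.1, 1.0], [3, 2.9, 1.4], [4, 4.0, 2.1]])
    b = np.array([[8, 8.2, 4.0], [9, 9.1, 4.4], [10, 10.0, 5.1]])
    print(extrapolate_gap(a, b, np.array([5.0, 6.0, 7.0])))
```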
5

Karaimer, Hakki Can and Michael S. Brown. "Beyond raw-RGB and sRGB: Advocating Access to a Colorimetric Image State". Color and Imaging Conference 2019, no. 1 (October 21, 2019): 86–90. http://dx.doi.org/10.2352/issn.2169-2629.2019.27.16.

Abstract
Most modern cameras allow captured images to be saved in two color spaces: (1) raw-RGB and (2) standard RGB (sRGB). The raw-RGB image represents a scene-referred sensor image whose RGB values are specific to the color sensitivities of the sensor's color filter array. The sRGB image represents a display-referred image that has been rendered through the camera's image signal processor (ISP). The rendering process involves several camera-specific photo-finishing manipulations intended to make the sRGB image visually pleasing. For applications that want to use a camera for purposes beyond photography, both the raw-RGB and sRGB color spaces are undesirable. For example, because the raw-RGB color space is dependent on the camera's sensor, it is challenging to develop applications that work across multiple cameras. Similarly, the camera-specific photo-finishing operations used to render sRGB images also hinder applications intended to run on different cameras. Interestingly, the ISP camera pipeline includes a colorimetric conversion stage where the raw-RGB images are converted to a device-independent color space. However, this image state is not accessible. In this paper, we advocate for the ability to access the colorimetric image state and recommend that cameras output a third image format that is based on this device-independent colorimetric space. To this end, we perform experiments to demonstrate that image pixel values in a colorimetric space are more similar across different makes and models than sRGB and raw-RGB.
6

Pack, Michael L., Brian L. Smith and William T. Scherer. "Automated Camera Repositioning Technique for Video Image Vehicle Detection Systems: Integrating with Freeway Closed-Circuit Television Systems". Transportation Research Record: Journal of the Transportation Research Board 1856, no. 1 (January 2003): 25–33. http://dx.doi.org/10.3141/1856-04.

Abstract
Transportation agencies have invested significantly in extensive closed-circuit television (CCTV) systems to monitor freeways in urban areas. While these systems have proven to be very effective in supporting incident management, they do not support the collection of quantitative measures of traffic conditions. Instead, they simply provide images that must be interpreted by trained operators. While there are several video image vehicle detection systems (VIVDS) on the market that have the capability to automatically derive traffic measures from video imagery, these systems require the installation of fixed-position cameras. Thus, they have not been integrated with the existing moveable CCTV cameras. VIVDS camera positioning and calibration challenges were addressed and a prototype machine-vision system was developed that successfully integrated existing moveable CCTV cameras with VIVDS. Results of testing the prototype are presented, indicating that when the camera's initial zoom level was kept between ×1 and ×1.5, the camera consistently could be returned to its original position with a repositioning accuracy of less than 0.03 to 0.1 regardless of the camera's displaced pan, tilt, or zoom settings at the time of repositioning. This level of positional accuracy, when combined with a VIVDS, resulted in vehicle count errors of less than 1%.
7

Elias, Melanie, Anette Eltner, Frank Liebold and Hans-Gerd Maas. "Assessing the Influence of Temperature Changes on the Geometric Stability of Smartphone- and Raspberry Pi Cameras". Sensors 20, no. 3 (January 23, 2020): 643. http://dx.doi.org/10.3390/s20030643.

Abstract
Knowledge about the interior and exterior camera orientation parameters is required to establish the relationship between 2D image content and 3D object data. Camera calibration is used to determine the interior orientation parameters, which are valid as long as the camera remains stable. However, information about the temporal stability of low-cost cameras due to the physical impact of temperature changes, such as those in smartphones, is still missing. This study investigates, on the one hand, the influence of heat-dissipating smartphone components on the geometric integrity of the built-in cameras and, on the other hand, the impact of ambient temperature changes on the geometry of uncoupled low-cost cameras, considering a Raspberry Pi camera module that is exposed to controlled thermal radiation changes. If these impacts are neglected, transferring image measurements into object space will lead to wrong measurements due to high correlations between temperature and the camera's geometric stability. Monte Carlo simulation is used to simulate temperature-related variations of the interior orientation parameters and to assess the extent of potential errors in the 3D data, which range from a few millimetres up to five centimetres on a target in the X- and Y-directions. The target is positioned at a distance of 10 m from the camera and the Z-axis is aligned with the camera's depth direction.
8

Liu, W., H. Wang, W. Jiang, F. Qian and L. Zhu. "REAL-TIME ON-ORBIT CALIBRATION OF ANGLES BETWEEN STAR SENSOR AND EARTH OBSERVATION CAMERA FOR OPTICAL SURVEYING AND MAPPING SATELLITES". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2/W5 (May 29, 2019): 583–88. http://dx.doi.org/10.5194/isprs-annals-iv-2-w5-583-2019.

Abstract
In the field of space remote sensing stereo mapping, on-orbit variation of the angle between the star sensor's optical axis and the earth observation camera's optical axis affects the positioning accuracy when optical mapping is performed without ground control points (GCPs). This work analyses the formation factors and elimination methods for both the star sensor's error and the angle error between the star sensor's optical axis and the earth observation camera's optical axis. Based on that, to address the low attitude stability and long calibration times of current satellite cameras, a method is then proposed for real-time on-orbit calibration of the angle between the star sensor's optical axis and the earth observation camera's optical axis, based on the principle of auto-collimation. This method is experimentally verified to realize real-time on-orbit autonomous calibration of the angle between the star sensor's optical axis and the earth observation camera's optical axis.
9

Liu, Chun Feng, Shan Shan Kong and Hai Ming Wu. "Research on a Single Camera Location Model and its Application". Applied Mechanics and Materials 50-51 (February 2011): 468–72. http://dx.doi.org/10.4028/www.scientific.net/amm.50-51.468.

Abstract
Digital cameras are widely used in road transportation, railway transportation and security systems. To address the positioning of digital cameras in these fields, this paper proposes a geometry calibration method based on feature-point extraction from an arbitrary target. The paper first defines four kinds of coordinate systems, including the world coordinate system and the camera coordinate system, whose origin is the camera's optical center. The coordinate transformation of the same point between different coordinate systems is used to determine the relationship between the world coordinate system and the camera coordinate system, and thus to determine the camera's internal and external parameters, with the external parameters expressed as a rotation matrix and a translation vector, establishing a single-camera location model. According to the model, the camera's external parameters are used to obtain the image-plane coordinates of the target circle's center point.
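For readers unfamiliar with the coordinate systems mentioned above, a minimal sketch of the standard world-to-camera transform and pinhole projection is given below; the intrinsic and extrinsic values are invented for illustration and this is not the authors' implementation.

```python
import numpy as np

def world_to_pixel(p_world, R, t, K):
    """Map a 3-D world point to pixel coordinates with the usual pinhole model:
    p_cam = R @ p_world + t, followed by projection with the intrinsic matrix K.
    (Illustrative sketch of the transforms discussed above; R, t, K are made up.)"""
    p_cam = R @ p_world + t                 # world frame -> camera frame
    u, v, w = K @ p_cam                     # perspective projection
    return np.array([u / w, v / w])         # pixel coordinates

if __name__ == "__main__":
    K = np.array([[800.0, 0.0, 320.0],      # fx, skew, cx
                  [0.0, 800.0, 240.0],      # fy, cy
                  [0.0,   0.0,   1.0]])
    R = np.eye(3)                           # camera axes aligned with world axes
    t = np.array([0.0, 0.0, 5.0])           # world origin 5 m in front of the camera
    print(world_to_pixel(np.array([0.1, -0.2, 0.0]), R, t, K))
```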
10

Cui, Yu, Kenta Nishimura, Yusuke Sunami, Mamoru Minami, Takayuki Matsuno and Akira Yanou. "Analyses about Trackability of Hand-Eye-Vergence Visual Servoing in Lateral Direction". Applied Mechanics and Materials 772 (July 2015): 512–17. http://dx.doi.org/10.4028/www.scientific.net/amm.772.512.

Abstract
Visual servoing to a moving target with fixed hand-eye cameras mounted at the hand of a robot is inevitably affected by the hand's dynamical oscillations, so it is hard to keep the target at the centre of the camera's image, since nonlinear dynamical effects of the whole manipulator stand against tracking ability. In order to solve this problem, an eye-vergence system is proposed, in which the visual servoing controllers of the hand and of the eye-vergence are controlled independently, so that the cameras can observe the target object at the center of the camera images through eye-vergence functions. The light mass of the eyes makes the cameras' eye-sight direction rotate quickly, so the tracking ability of the eye-vergence motion is superior to that of the fixed hand-eye configuration. In this report the merits of eye-vergence visual servoing for pose tracking have been confirmed through frequency response experiments.
11

Nikodem, Maciej, Mariusz Słabicki, Tomasz Surmacz, Paweł Mrówka and Cezary Dołęga. "Multi-Camera Vehicle Tracking Using Edge Computing and Low-Power Communication". Sensors 20, no. 11 (June 11, 2020): 3334. http://dx.doi.org/10.3390/s20113334.

Abstract
Typical approaches to visual vehicle tracking across a large area require several cameras and complex algorithms to detect, identify and track the vehicle route. Due to memory requirements, computational complexity and hardware constraints, the video images are transmitted to a dedicated workstation equipped with powerful graphic processing units. However, this requires large volumes of data to be transmitted and may raise privacy issues. This paper presents dedicated deep learning detection and tracking algorithms that can be run directly on the camera's embedded system. This method significantly reduces the stream of data from the cameras, reduces the required communication bandwidth and expands the range of communication technologies that can be used. Consequently, short-range radio communication can be used to transmit vehicle-related information directly between the cameras, and multi-camera tracking can be implemented directly in the cameras. The proposed solution includes detection and tracking algorithms, and a dedicated low-power short-range communication for multi-target multi-camera tracking systems that can be applied in parking and intersection scenarios. System components were evaluated in various scenarios including different environmental and weather conditions.
12

Yi, Xue Feng and Si Chun Long. "Precision Displacement Measurement of Single Lens Reflex Digital Camera". Applied Mechanics and Materials 103 (September 2011): 82–86. http://dx.doi.org/10.4028/www.scientific.net/amm.103.82.

Abstract
When the stripes are photographed by digital cameras, Moiré fringes will appear in the photograph. With the displacement amplification of Moiré fringes, tiny movements of the stripes can be calculated precisely. We expound the calculation methods in this paper. Experimental data show that high precision can be guaranteed. It is well known that the high-rate zoom characteristic of the Moiré fringe technique can be well used in precise measurement of a digital camera's displacement and, besides, is widely used in machining, laboratory and photo-electric equipment. With the development and popularization of digital cameras, especially the great advancement in image sensor performance, it now becomes possible to accurately measure displacement from far away by taking into account the sensor's Moiré effect.
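As a rough illustration of the displacement amplification the abstract refers to, the textbook relation for two parallel gratings with slightly different pitches can be evaluated numerically; the pitch values below are invented and the cited paper's exact configuration may differ.

```python
# Illustrative numbers only: the standard Moire relation for two parallel
# line gratings with nearly equal pitches p1 and p2 (not data from the paper).
p1, p2 = 0.100, 0.102                        # grating pitches in mm (assumed values)

fringe_period = p1 * p2 / abs(p2 - p1)       # beat period of the Moire fringes
amplification = p2 / abs(p2 - p1)            # fringe shift per unit grating shift

delta = 0.001                                # a 1 micron displacement of one grating
fringe_shift = amplification * delta         # the much larger, easily measured shift

print(f"fringe period = {fringe_period:.2f} mm")
print(f"amplification = {amplification:.0f}x")
print(f"1 um grating shift -> fringe moves {fringe_shift:.3f} mm")
```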
13

Qi, Xing Guang and Yi Zhen. "Research of the Paper Defect On-Line Inspection System Based on Distributed Machine Vision". Advanced Materials Research 562-564 (August 2012): 1805–8. http://dx.doi.org/10.4028/www.scientific.net/amr.562-564.1805.

Abstract
This paper presents a distributed machine vision inspection system, which has a large field of view (FOV) and can perform high-precision, high-speed real-time inspection for wide paper sheet detection. The system consists of multiple GigE Vision line-scan cameras which are connected through Gigabit Ethernet. The cameras are arranged into a linear array so that every camera's FOV is merged into one large FOV while the resolution remains unchanged. In order to achieve high processing speed, the captured images from each camera are sent to a dedicated computer for distributed and parallel image processing. Experimental results show that the system, with fine detection capability, can satisfy the requirements of real-time detection and find the defects on the production line effectively.
14

Srisamosorn, Veerachart, Noriaki Kuwahara, Atsushi Yamashita, Taiki Ogata and Jun Ota. "Human-tracking system using quadrotors and multiple environmental cameras for face-tracking application". International Journal of Advanced Robotic Systems 14, no. 5 (September 1, 2017): 172988141772735. http://dx.doi.org/10.1177/1729881417727357.

Abstract
In this article, a system for tracking a human's position and orientation in an indoor environment was developed utilizing environmental cameras. The system consists of cameras installed in the environment at fixed locations and orientations, called environmental cameras, and a moving robot which carries a camera, called the moving camera. The environmental cameras detect the location and direction of each person in the space, as well as the position of the moving robot. The robot is then controlled to move and follow the person's movement based on the person's location and orientation, mimicking the act of the moving camera tracking his/her face. The number of cameras needed to cover the experimental area, as well as each camera's position and orientation, was obtained by using a particle swarm optimization algorithm. Sensor fusion among multiple cameras is done by simple weighted averaging based on distance and knowledge of the number of robots being used. Xbox Kinect sensors and a miniature quadrotor were used to implement the system. The tracking experiment was done with one person walking and rotating in the area. The result shows that the proposed system can track the person and the quadrotor to within 10 cm, and the quadrotor can follow the person's movement as desired. At least one camera was guaranteed to be tracking the person and the quadrotor at any time, with a minimum of two cameras tracking the person and only a few moments in which a single camera was tracking the quadrotor.
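The "simple weighted averaging based on distance" mentioned above could, for example, be implemented as inverse-distance weighting; the following sketch is only an assumed illustration, not the authors' code.

```python
import numpy as np

def fuse_estimates(positions, camera_positions, eps=1e-6):
    """Fuse position estimates from several cameras by weighted averaging,
    weighting each estimate by the inverse of the camera-to-target distance
    (illustrative sketch; the cited system's exact weighting is not given here).

    positions        : (N, 2) estimated target positions, one per camera
    camera_positions : (N, 2) positions of the cameras that produced them
    """
    positions = np.asarray(positions, dtype=float)
    camera_positions = np.asarray(camera_positions, dtype=float)
    dists = np.linalg.norm(positions - camera_positions, axis=1)
    weights = 1.0 / (dists + eps)            # nearer cameras are trusted more
    weights /= weights.sum()
    return weights @ positions               # weighted average of the estimates

if __name__ == "__main__":
    est = [[2.0, 1.9], [2.2, 2.1], [1.8, 2.0]]      # per-camera estimates (m)
    cams = [[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]]     # camera locations (m)
    print(fuse_estimates(est, cams))
```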
15

GENET, R. M. "PORTABLE SPECKLE INTERFEROMETRY CAMERA SYSTEM". Journal of Astronomical Instrumentation 02, no. 02 (December 2013): 1340008. http://dx.doi.org/10.1142/s2251171713400084.

Abstract
Speckle interferometry of close double stars avoids seeing limitations through a series of diffraction-limited high speed observations made faster than the atmospheric coherence time scale. Electron multiplying CCD cameras have low read noise at high read speeds, making them ideal for speckle interferometry. A portable speckle camera system was developed based on relatively low cost, off-the-shelf components. The camera's modular components can be exchanged to adapt the system to a wide range of telescopes.
16

Kluykov, A. A. "Attitude determination of system coordinate gradiometer with respect to inertial space". Geodesy and Cartography 930, no. 12 (January 20, 2018): 2–8. http://dx.doi.org/10.22389/0016-7126-2017-930-12-2-8.

Abstract
The article presents an algorithm for determining the attitude of the gradiometer coordinate system with respect to inertial space. The problem can be solved in two steps. The first step is to determine the transformation matrix from the celestial system (ICRF) to the star camera coordinate system (SSRF) using star observations. The second step is to determine the transformation matrix from the star camera coordinate system (SSRF) to the gradiometer coordinate system (GRF). This problem is solved through the sensor systems mounted on board the satellite. In the GOCE mission, three star cameras are mounted on board. The transformation matrix from the star camera coordinate system (SSRF) to the gradiometer coordinate system (GRF) is determined for every star camera. The values of the transformation matrix are given in the data file AUX_EGG_DB. Processing the star cameras' observations includes the following steps
17

Park, Ji Hun. "3D Position Based Human Movement Computation Using Multiple Images". Applied Mechanics and Materials 865 (June 2017): 565–70. http://dx.doi.org/10.4028/www.scientific.net/amm.865.565.

Abstract
This paper presents a computation method of human movement using 3D points, regarding a human as a rigid body. The movement computation method uses ray vectors cast from cameras. Ray vectors cast from cameras to feature points carry the values of the cameras' external parameters. Given four or more non-planar known points in one input image, we calculate the camera's external parameters for the input image using values computed from partially overlapping, adjacent input image frames using Newton's root-finding algorithm. This is achieved by computing fixed points in the environment, camera distortion values and external parameters from stationary scenes, and the camera external parameters of the input frame. Using the computed camera external parameters, a tracked object's rigid movement is computed using projected intersection points between ray vectors. Our method is demonstrated using various input images. The result is used in human tracking.
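Newton's root-finding algorithm, which the abstract uses to recover camera external parameters, can be illustrated generically as below; the demo system of equations is arbitrary and stands in for, rather than reproduces, the paper's actual camera-parameter equations.

```python
import numpy as np

def newton(f, jac, x0, tol=1e-10, max_iter=50):
    """Generic multivariate Newton's method: solve f(x) = 0 by repeatedly
    solving jac(x) * dx = -f(x).  This illustrates the root-finding step the
    abstract mentions; it is not the paper's specific parameter system."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        x = x + np.linalg.solve(jac(x), -fx)
    return x

if __name__ == "__main__":
    # Arbitrary 2x2 demo system: x^2 + y^2 = 4 and x*y = 1
    f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
    jac = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [v[1], v[0]]])
    print(newton(f, jac, [2.0, 0.5]))
```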
18

Bernacki, Jaroslaw. "Digital camera identification based on analysis of optical defects". Multimedia Tools and Applications 79, no. 3-4 (December 4, 2019): 2945–63. http://dx.doi.org/10.1007/s11042-019-08182-z.

Abstract
In this paper we deal with the problem of digital camera identification by photographs. Identifying the camera is possible by analyzing the camera's sensor artifacts that occur during the process of photo processing. The problem of digital camera identification has been popular for a long time. Recently many effective and robust algorithms for solving this problem have been proposed. However, almost all solutions are based on the state-of-the-art algorithm proposed by Lukás et al. in 2006. The core of this algorithm is to calculate the so-called sensor pattern noise based on denoising images with a wavelet-based denoising filter. This technique is very efficient but very time-consuming. In this paper we consider tracing cameras by analyzing defects of their optical systems, like vignetting and lens distortion. We show that analysis of the vignetting defect allows the brand of the camera to be recognized. Lens distortion can be used to distinguish images from different cameras. Experimental evaluation was carried out on 60 devices (compact cameras and smartphones) for a total number of 12 051 images, with support of the Dresden Image Database. The proposed methods do not require denoising images with a wavelet-based denoising filter, which has a significant influence on the speed of image processing compared with the state-of-the-art algorithm.
19

HOLEVA, LEE F. "RANGE ESTIMATION FROM CAMERA BLUR BY REGULARIZED ADAPTIVE IDENTIFICATION". International Journal of Pattern Recognition and Artificial Intelligence 08, no. 06 (December 1994): 1273–300. http://dx.doi.org/10.1142/s0218001494000644.

Abstract
One of the fundamental problems of machine vision is the estimation of object depth from perceived images. This paper describes both an apparatus and the corresponding algorithms for the passive extraction of object depth. Here passive extraction implies the processing of images acquired using only the existing illumination, in this case roughly uniform white light. Depth from defocused algorithms are extremely sensitive to image variations. Regularization, the application of a priori constraints, is employed to improve the accuracy of the range measurements. When the camera’s point spread function is shift invariant, an adaptive algorithm is developed in the frequency domain. The constraints imposed upon the solution power spectrum vector vary temporally. When the camera’s point spread function is shift varying, an adaptive algorithm is developed in the spatial domain. The constraints imposed upon the solution point spread vector vary spatially. Data is acquired from line scan cameras. Only a single range measurement or a single depth profile is extracted. By relying upon the motion of the observed object on a conveyor belt, a complete range image may be generated.
20

Liu, Zhe, Zhaozong Meng, Nan Gao and Zonghua Zhang. "Calibration of the Relative Orientation between Multiple Depth Cameras Based on a Three-Dimensional Target". Sensors 19, no. 13 (July 8, 2019): 3008. http://dx.doi.org/10.3390/s19133008.

Abstract
Depth cameras play a vital role in three-dimensional (3D) shape reconstruction, machine vision, augmented/virtual reality and other visual information-related fields. However, a single depth camera cannot obtain complete information about an object by itself due to the limitation of the camera’s field of view. Multiple depth cameras can solve this problem by acquiring depth information from different viewpoints. In order to do so, they need to be calibrated to be able to accurately obtain the complete 3D information. However, traditional chessboard-based planar targets are not well suited for calibrating the relative orientations between multiple depth cameras, because the coordinates of different depth cameras need to be unified into a single coordinate system, and the multiple camera systems with a specific angle have a very small overlapping field of view. In this paper, we propose a 3D target-based multiple depth camera calibration method. Each plane of the 3D target is used to calibrate an independent depth camera. All planes of the 3D target are unified into a single coordinate system, which means the feature points on the calibration plane are also in one unified coordinate system. Using this 3D target, multiple depth cameras can be calibrated simultaneously. In this paper, a method of precise calibration using lidar is proposed. This method is not only applicable to the 3D target designed for the purposes of this paper, but it can also be applied to all 3D calibration objects consisting of planar chessboards. This method can significantly reduce the calibration error compared with traditional camera calibration methods. In addition, in order to reduce the influence of the infrared transmitter of the depth camera and improve its calibration accuracy, the calibration process of the depth camera is optimized. A series of calibration experiments were carried out, and the experimental results demonstrated the reliability and effectiveness of the proposed method.
21

McIsaac, Jacqueline. "Cameras in the Countryside: Recreational Photography in Rural Ontario, 1851-1920". Scientia Canadensis 36, no. 1 (June 26, 2014): 5–31. http://dx.doi.org/10.7202/1025787ar.

Abstract
The introduction and subsequent refinement of glass plate negative technology facilitated photography's appropriation within rural Ontario. As a recreational consumer technology, the camera became easier to use, financially accessible, and portable, thus better suiting the needs of rural consumers. While technological advancements allowed the camera to be adopted as a leisure pursuit, its use was directed by the countryside's cultural values and social norms. These interests influenced who used cameras, how photo-supplies were purchased, the camera's place within household income diversification strategies, and the photographer's gaze, all of which suggest that when photo-technology was used in the countryside, it was as an extension of, not a challenge to, rural cultural values. At the same time, as the first photography system that was accessible to the middle and labouring classes, glass plates cannot help but reveal the visual priorities of this new group of consumers, thus contributing to current discussions on cultural aspects of rural society. Consequently, glass plate cameras in Ontario's countryside functioned both as a documentary medium and as a form of cultural expression.
22

Yin, Lei, Xiangjun Wang, Yubo Ni, Kai Zhou and Jilong Zhang. "Extrinsic Parameters Calibration Method of Cameras with Non-Overlapping Fields of View in Airborne Remote Sensing". Remote Sensing 10, no. 8 (August 16, 2018): 1298. http://dx.doi.org/10.3390/rs10081298.

Abstract
Multi-camera systems are widely used in the fields of airborne remote sensing and unmanned aerial vehicle imaging. The measurement precision of these systems depends on the accuracy of the extrinsic parameters. Therefore, it is important to accurately calibrate the extrinsic parameters between the onboard cameras. Unlike conventional multi-camera calibration methods with a common field of view (FOV), multi-camera calibration without overlapping FOVs has certain difficulties. In this paper, we propose a calibration method for a multi-camera system without common FOVs, which is used in aerial photogrammetry. First, the extrinsic parameters of any two cameras in the multi-camera system are calibrated, and the extrinsic matrix is optimized using the re-projection error. Then, the extrinsic parameters of each camera are unified to the system reference coordinate system by using a global optimization method. A simulation experiment and a physical verification experiment are designed to verify the theoretical derivation. The experimental results show that this method is workable. The rotation error angle of the camera's extrinsic parameters is less than 0.001 rad and the translation error is less than 0.08 mm.
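The step of unifying each camera's extrinsic parameters to a system reference coordinate system amounts to composing rigid transforms; a minimal sketch (assuming a simple chain of pairwise calibrations, not the authors' global optimization) is given below.

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a rotation matrix and translation vector into a 4x4 pose matrix."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def unify_to_reference(pairwise):
    """Chain pairwise extrinsics T_(i-1 -> i) so every camera pose is expressed
    in camera 0's coordinate system (a sketch of the 'unify to a reference
    frame' step; the cited method additionally refines these globally)."""
    poses = [np.eye(4)]                       # camera 0 is the reference
    for T_step in pairwise:
        poses.append(poses[-1] @ T_step)      # compose transforms along the chain
    return poses

if __name__ == "__main__":
    # Assumed example: camera 1 sits 0.5 m to the right of camera 0,
    # camera 2 is rotated 90 degrees about Z relative to camera 1.
    T01 = to_homogeneous(np.eye(3), [0.5, 0.0, 0.0])
    Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    T12 = to_homogeneous(Rz, [0.3, 0.0, 0.0])
    for i, T in enumerate(unify_to_reference([T01, T12])):
        print(f"camera {i} pose in the reference frame:\n{np.round(T, 3)}")
```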
23

YAO, YI, CHUNG-HAO CHEN, BESMA ABIDI, DAVID PAGE, ANDREAS KOSCHAN and MONGI ABIDI. "MULTI-CAMERA POSITIONING FOR AUTOMATED TRACKING SYSTEMS IN DYNAMIC ENVIRONMENTS". International Journal of Information Acquisition 07, no. 03 (September 2010): 225–42. http://dx.doi.org/10.1142/s0219878910002208.

Abstract
Most existing camera placement algorithms focus on coverage and/or visibility analysis, which ensures that the object of interest is visible in the camera's field of view (FOV). According to recent literature, handoff safety margin is introduced to sensor planning so that sufficient overlapped FOVs among adjacent cameras are reserved for successful and smooth target transition. In this paper, we investigate the sensor planning problem when considering the dynamic interactions between moving targets and observing cameras. The probability of camera overload is explored to model the aforementioned interactions. The introduction of the probability of camera overload also considers the limitation that a given camera can simultaneously monitor or track a fixed number of targets and incorporates the target's dynamics into sensor planning. The resulting camera placement not only achieves the optimal balance between coverage and handoff success rate but also maintains the optimal balance in environments with various target densities. The proposed camera placement method is compared with a reference algorithm by Erdem and Sclaroff. Consistently improved handoff success rate is illustrated via experiments using typical office floor plans with various target densities.
24

Liu, Zhong Yan, Guo Quan Wang and Dong Ping Wang. "A 3D Reconstruction Method Based on Binocular View Geometry". Applied Mechanics and Materials 33 (October 2010): 299–303. http://dx.doi.org/10.4028/www.scientific.net/amm.33.299.

Abstract
A method is proposed for three-dimensional (3D) reconstruction based on binocular view geometry. Images for calibrating the cameras and reconstructing a car's rearview mirror are collected with an image acquisition system. From the calibration images, the cameras' intrinsic and extrinsic parameters and the projective and fundamental matrices are obtained with Matlab 7.1. The collected rearview-mirror images are preprocessed to extract the refined laser stripe and feature points, and suitable matching points are found using the epipolar geometry principle. The coordinates of the space points are then calculated according to the camera imaging model, the point cloud is displayed, and the space points are fitted to reconstruct the car's rearview mirror. Experimental results show that this method can recover the 3D information of the car's rearview mirror well.
25

Yu, Fujia, Wei Song, Mamoru Minami, Akira Yanou and Mingcong Deng. "Experimental Evaluations of Approaching Hand/Eye-Vergence Visual Servoing". Journal of Advanced Computational Intelligence and Intelligent Informatics 15, no. 7 (September 20, 2011): 878–87. http://dx.doi.org/10.20965/jaciii.2011.p0878.

Abstract
We focus on controlling a robot's end-effector to track a moving object while approaching the object with a desired tracking pose for grasping it - a process we call Approaching Visual Servoing (AVS). AVS using binocular cameras inherently requires eye-vergence to keep the target in the camera images at the center of the camera frame, because the approaching motion narrows the camera's visual field or may even cause it to lose sight of the object. Experiments using our proposed hand and eye-vergence dual control involved full 6-degree-of-freedom AVS to a moving object by using a 7-link manipulator with a binocular camera, confirming the feasibility of hand and eye-vergence control.
26

AARSVOLD, J. N., R. A. MINTZER, N. J. YASILLO, S. J. HEIMSATH, T. A. BLOCK, K. L. MATTHEWS, X. PAN et al. "A Miniature Gamma Camera". Annals of the New York Academy of Sciences 720, no. 1 (May 1994): 192–205. http://dx.doi.org/10.1111/j.1749-6632.1994.tb30447.x.
27

Liu, Jia, Yulei Xie, Shuang Gu and Xu Chen. "A SLAM-Based Mobile Augmented Reality Tracking Registration Algorithm". International Journal of Pattern Recognition and Artificial Intelligence 34, no. 01 (June 19, 2019): 2054005. http://dx.doi.org/10.1142/s0218001420540051.

Abstract
This paper proposes a simultaneous localization and mapping (SLAM)-based markerless mobile-end tracking registration algorithm to address the problem of virtual image drift caused by fast camera motion in mobile augmented reality (AR). The proposed algorithm combines the AGAST-FREAK-SLAM algorithm with inertial measurement unit (IMU) data to construct a scene map and localize the camera's pose. The extracted feature points are matched with feature points in a map library for real-time camera localization and precise registration of virtual objects. Experimental results show that the proposed method can track feature points in real time, accurately construct scene maps, and locate cameras; moreover, it improves upon the tracking and registration robustness of earlier algorithms.
28

David, Michael Joseph C. and Antonio H. Chua. "The Flexmount Ringlight: An Inexpensive Lighting Solution for Intraoral Photodocumentation". Philippine Journal of Otolaryngology-Head and Neck Surgery 24, no. 1 (June 15, 2009): 21–26. http://dx.doi.org/10.32412/pjohns.v24i1.709.

Abstract
Objective: To fabricate an inexpensive, reproducible and portable ringlight with a flexible, quick-release mount for use with point-and-shoot consumer digital cameras in intraoral photodocumentation. Materials and Methods: Design: Instrumentation. Setting: Tertiary Care Hospital. Procedure: A commercially available battery-powered mountaineer's LED (Light Emitting Diode) headlight (manufacturer, place) was converted into a portable ringlight with a flexible, quick-release mount for intraoral photodocumentation. Results: The Flexmount Ringlight delivered an even and white illumination of the oral cavity and oropharynx at a working distance of more than 5 cm from the subject in focus. It resulted in sharper pictures due to its constant illumination, which assisted the camera's autofocus system in focusing accurately intraorally. It also allowed the camera to use smaller apertures that put more elements in focus, and faster shutter speeds that markedly reduced motion blur. Conclusion: The Flexmount Ringlight is an inexpensive, easy-to-assemble and portable ringlight that can be used with point-and-shoot consumer digital cameras. Its constant and even illumination resulted in reproducible, sharp, shadowless photographs of the oral cavity and oropharynx. Keywords: ringlight, flexmount, intraoral photodocumentation
29

Zhou, Shichao, Haibin Zhu, Qinwei Ma and Shaopeng Ma. "Heat Transfer and Temperature Characteristics of a Working Digital Camera". Sensors 20, no. 9 (April 30, 2020): 2561. http://dx.doi.org/10.3390/s20092561.

Abstract
Digital cameras, represented by industrial cameras, are widely used as image acquisition sensors in the field of image-based mechanics measurement, and their thermal effect inevitably induces thermally induced errors in the measurements. To deeply understand these errors, research on the digital camera's thermal effect is necessary. This study systematically investigated the heat transfer processes and temperature characteristics of a working digital camera. Concretely, based on the temperature distribution of a typical working digital camera, the heat transfer of the working digital camera was investigated, and a model describing the temperature variation and distribution was presented and verified experimentally. With this model, the thermal equilibrium time and thermal equilibrium temperature of the camera system were calculated. Then, the influences of the thermal parameters of the digital camera and the environmental temperature on the temperature characteristics of the working digital camera were simulated and experimentally investigated. The theoretical analysis and experimental results demonstrate that the presented model can accurately describe the temperature characteristics and further calculate the thermal equilibrium state of a working digital camera, all of which contributes to guiding mechanics measurement and thermal design based on such camera sensors.
30

Luski, Aim. "Cameras". Philosophy of Photography 4, no. 1 (September 1, 2013): 3–12. http://dx.doi.org/10.1386/pop.4.1.3_7.
31

Cameron, Matt. "CAMERON". Ecological Management & Restoration 21, no. 2 (May 2020): 158. http://dx.doi.org/10.1111/emr.12405.
32

Park, Ji Hun and Sung Hun Park. "Object Movement Computation from Two Images". Applied Mechanics and Materials 752-753 (April 2015): 1085–89. http://dx.doi.org/10.4028/www.scientific.net/amm.752-753.1085.

Abstract
This paper presents a new object movement computation method using ray vectors generated from two cameras. We compute the cameras' internal and external parameters for the input images using values computed from partially overlapping input image frames which share the same corresponding fixed feature points. This is achieved by computing fixed points in the environment, camera distortion values and internal and external parameters from stationary objects. Ray vectors cast from each camera to feature points carry the camera's external parameter values. Using the computed camera external parameters, a tracked object's rigid movement is estimated using maximum likelihood estimation by setting the projected intersection points between ray vectors as part of the objective function. Our method is demonstrated and the results are compared with another of our movement computation algorithms.
33

Evtikhiev, Nickolay N., Alexander V. Kozlov, Vitaly V. Krasnov, Vladislav G. Rodin, Rostislav S. Starikov and Pavel A. Cheremkhin. "Estimation of efficiency of measurement of digital camera photosensor noise by automatic segmentation of non-uniform target method and the standard EMVA 1288". Izmeritel`naya Tekhnika, no. 4 (2021): 28–35. http://dx.doi.org/10.32446/0368-1025it.2021-4-28-35.

Abstract
In this paper, the important task of estimating a digital camera's noise parameters is considered. The relation between the accuracy of data obtained with a digital camera and the photosensor noise is discussed. The standard of the European Machine Vision Association, EMVA 1288, and the fast automatic segmentation of non-uniform target (ASNT) noise estimation method are compared. The noise characteristics of the machine-vision PixeLink PL-B781F, scientific Retiga R6 and amateur mirrorless Canon EOS M100 cameras have been investigated. The accuracy of the measurements, the speed of calculation and the experimental realization have been analyzed. The accuracy of temporal noise estimation by the modified ASNT method is no less than that of the standard EMVA 1288, but the ASNT method can be implemented much faster than the standard EMVA 1288, even with additional frames for accuracy improvement.
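For context, a common EMVA 1288-style estimate of temporal noise uses two frames of a static, uniformly illuminated scene; the sketch below shows that textbook calculation on synthetic data and is not the code evaluated in the paper.

```python
import numpy as np

def temporal_noise_two_frames(frame_a, frame_b):
    """Estimate temporal noise (standard deviation, in digital numbers) from two
    frames of the same static, uniformly lit scene: the fixed pattern cancels in
    the difference image, so var(A - B) is roughly 2 * sigma_temporal^2.
    (Textbook EMVA 1288-style estimate for illustration only.)"""
    diff = frame_a.astype(float) - frame_b.astype(float)
    return np.sqrt(diff.var() / 2.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fixed_pattern = rng.normal(500.0, 5.0, size=(480, 640))   # per-pixel offsets
    sigma_true = 3.0                                          # simulated temporal noise
    a = fixed_pattern + rng.normal(0.0, sigma_true, size=fixed_pattern.shape)
    b = fixed_pattern + rng.normal(0.0, sigma_true, size=fixed_pattern.shape)
    print(f"estimated temporal noise: {temporal_noise_two_frames(a, b):.2f} DN")
```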
34

March, Robert H. "Ugo Camerini". Physics Today 68, no. 7 (July 2015): 55. http://dx.doi.org/10.1063/pt.3.2853.
35

Fang, Bin, Wu Sheng Chou, Xiao Qi Guo and Xin Ma. "The Design of an Miniature Underwater Robot for Hazardous Environment". Applied Mechanics and Materials 347-350 (August 2013): 711–14. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.711.

Abstract
A miniature underwater robotic system for hazardous environments has been developed. The system consists of an underwater robot, a robot control station and a camera control station. The underwater robot is fitted with two cameras for inspection: one is a radiation-resistant camera with a two-degree-of-freedom PTZ at the front of the robot, and the other is a fixed camera at the back of the robot. A miniature manipulator is mounted under the fore-camera to catch small parts such as bolts and nuts in the pools. The movement of the underwater robot is controlled by the robot control station, and the camera control station controls the rotation and focus of the fore-camera. Besides, the underwater robot is equipped with sensors, such as a MEMS inertial measurement unit, magnetometers, side-scan sonar and water-depth gauges, which are integrated to determine the orientation and location of the robot. Meanwhile the navigation information is displayed in a virtual environment, which is modeled upon the real pools of the nuclear power plant. The underwater robotic system is easy to operate and will be applied to hazardous environments such as nuclear environments in the future.
36

McKay, Carolyn and Murray Lee. "Body-worn images: Point-of-view and the new aesthetics of policing". Crime, Media, Culture: An International Journal 16, no. 3 (September 15, 2019): 431–50. http://dx.doi.org/10.1177/1741659019873774.

Abstract
Police organisations across much of the Western world have eagerly embraced body-worn video camera technology, seen as a way to enhance public trust in police, provide transparency in policing activity, reduce conflict between police and citizens and provide a police perspective of incidents and events. Indeed, the cameras have become an everyday piece of police ‘kit’. Despite the growing ubiquity of the body-worn video camera, understandings of the nature and value of the audiovisual footage produced by police remain inchoate. Given body-worn video camera’s promise of veracity, this article is interested in the aesthetics of the camera images and the socio-cultural construction of the cameras as tellers of truth. We treat body-worn video cameras as image-making devices linked to techniques and technologies of power, which construct and frame police encounters in specific ways, and we suggest that the aesthetics and point-of-view nature of the image contribute greatly to the truth-value that the images acquire. This article begins by providing an historical context for the use of cameras and images in policing. We then introduce our framework of visual criminology and present theories of point-of-view as a construct in the diverse areas of gaming, pornography and the visual arts, as well as in television and cinema. The article deploys the cinematic use of point-of-view to unpack the affective impact and aesthetic of the police body-worn video camera footage. We suggest that viewers of the footage are placed in the position of the corporeally absent police officer whose experience has been recorded by a viewfinderless device. This generates a vacillating interplay between subjectivity and objectivity, given that the alleged faithful recording of the event by the body-worn video camera presents a singular perspective and incomplete document that may not necessarily capture the full context of the law enforcement event.
37

Christensen, Gordon J. "Intraoral television cameras versus digital cameras, 2007". Journal of the American Dental Association 138, no. 8 (August 2007): 1145–47. http://dx.doi.org/10.14219/jada.archive.2007.0328.
38

Gautam, Shikha and Anand Singh Jalal. "An Image Forgery Detection Approach Based on Camera's Intrinsic Noise Properties". International Journal of Computer Vision and Image Processing 8, no. 1 (January 2018): 92–101. http://dx.doi.org/10.4018/ijcvip.2018010106.

Abstract
Digital images are found everywhere from cell phones to the pages of online news sites. With the rapid growth of the Internet and the popularity of digital image capturing devices, images have become a major source of information. Nowadays, faking images has become easy due to powerful advanced photo-editing software and high-resolution cameras. In this article, the authors present a method for detecting forgery by estimating the camera's intrinsic noise properties. Differences in the noise parameters of the image are used as evidence of image tampering. The method works in two steps. In the first step, the given image is classified as forged or non-forged. In the second step, the forged region in the image is detected. Results show that the proposed method outperforms the previous methods and shows a detection accuracy of 85.76%.
39

Xu, De and Qingbin Wang. "A new vision measurement method based on active object gazing". International Journal of Advanced Robotic Systems 14, no. 4 (July 1, 2017): 172988141771598. http://dx.doi.org/10.1177/1729881417715984.

Abstract
A new vision measurement system is developed with two cameras. One is fixed in pose to serve as a monitor camera. It finds and tracks objects in image space. The other is actively rotated to track the object in Cartesian space, working as an active object-gazing camera. The intrinsic parameters of the monitor camera are calibrated. The view angle corresponding to the object is calculated from the object’s image coordinates and the camera’s intrinsic parameters. The rotation angle of the object-gazing camera is measured with an encoder. The object’s depth is computed with the rotation angle and the view angle. Then the object’s three-dimensional position is obtained with its depth and normalized imaging coordinates. The error analysis is provided to assess the measurement accuracy. The experimental results verify the effectiveness of the proposed vision system and measurement method.
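The depth computation described above is, geometrically, a two-ray triangulation; the sketch below shows the classical relation under assumed angle conventions and invented numbers, which may differ from the paper's exact formulation.

```python
import math

def depth_from_angles(baseline, alpha, beta):
    """Classical two-ray triangulation: two cameras separated by `baseline`
    observe the same point at angles alpha and beta, each measured between
    the baseline and that camera's line of sight to the point.  Returns the
    perpendicular distance of the point from the baseline.
    (Generic geometry used to illustrate the depth computation the abstract
    describes; the paper's exact angle conventions may differ.)"""
    return baseline * math.sin(alpha) * math.sin(beta) / math.sin(alpha + beta)

if __name__ == "__main__":
    b = 0.4                                  # assumed 0.4 m between the two cameras
    alpha = math.radians(70.0)               # view angle from the monitor camera
    beta = math.radians(65.0)                # rotation angle of the gazing camera
    print(f"depth = {depth_from_angles(b, alpha, beta):.3f} m")
```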
40

BOULENGER, G. A. "On the Lacerta depressa of Camerano". Proceedings of the Zoological Society of London 74, no. 4 (August 21, 2009): 332–39. http://dx.doi.org/10.1111/j.1469-7998.1905.tb08341.x.
41

Query, Julia, Loren Cameron and Sue O'Sullivan. "Candid Cameron". Women's Review of Books 14, no. 12 (September 1997): 26. http://dx.doi.org/10.2307/4022788.
42

Loussouarn, Sophie. "David Cameron". Outre-Terre N° 49, no. 4 (2016): 234. http://dx.doi.org/10.3917/oute1.049.0234.
43

Lynch, Patrick K. "Digital Cameras". Biomedical Instrumentation & Technology 42, no. 2 (March 2008): 87. http://dx.doi.org/10.2345/0899-8205(2008)42[87:dc]2.0.co;2.
44

Neal, Larry. "Rondo Cameron". Enterprise & Society 3, no. 2 (June 2002): 352. http://dx.doi.org/10.1017/s1467222700011691.
45

Pittman, Andrew T. "Surveillance Cameras". Journal of Physical Education, Recreation & Dance 79, no. 8 (October 2008): 52–53. http://dx.doi.org/10.1080/07303084.2008.10598232.
46

Higgins, C. "Alick Cameron". BMJ 347, sep04 1 (September 4, 2013): f5031. http://dx.doi.org/10.1136/bmj.f5031.
47

Naqvi, A. "Compact cameras". British Dental Journal 208, no. 1 (January 2010): 3. http://dx.doi.org/10.1038/sj.bdj.2010.5.
48

Gamble, Andrew. "Project Cameron". Public Policy Research 18, no. 3 (September 2011): 173–78. http://dx.doi.org/10.1111/j.1744-540x.2011.00660.x.
49

McCreary, Michael D. "Digital Cameras". Scientific American 278, no. 6 (June 1998): 102. http://dx.doi.org/10.1038/scientificamerican0698-102.
50

Travin, Mark I. "Cardiac Cameras". Seminars in Nuclear Medicine 41, no. 3 (May 2011): 182–201. http://dx.doi.org/10.1053/j.semnuclmed.2010.12.007.