Journal articles on the topic 'Camere arbitrali'

To see the other types of publications on this topic, follow the link: Camere arbitrali.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Camere arbitrali.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Sali, Rinaldo. "L’Arbitrato Istituzionale – Il Modello della Camera Arbitrale di Milano." Revista Brasileira de Arbitragem 6, no. 21 (February 1, 2009): 125–35. http://dx.doi.org/10.54648/rba2009007.

Abstract:
The article describes the mechanisms of the arbitration rules of an international arbitral institution such as the Milan Chamber of Arbitration (Camera Arbitrale di Milano). Good arbitration rules should not attempt to regulate every small detail of the arbitral procedure, but should instead strengthen the fundamental guarantees of administered arbitration, namely: the guarantee of independence and impartiality of the arbitrators and of the institution itself; the guarantee of procedural freedom (the arbitrators' freedom to decide which procedural tools are best suited to each case); the guarantee of short time frames; and the guarantee of predictable and reasonable costs. The Rules of the Milan Chamber of Arbitration also provide for a code of ethics that arbitrators must observe while acting within the arbitral institution.
2

Kano, Hiroshi, and Takeo Kanade. "Stereo vision with arbitrary camera arrangement and with camera calibration." Systems and Computers in Japan 29, no. 2 (February 1998): 47–56. http://dx.doi.org/10.1002/(sici)1520-684x(199802)29:2<47::aid-scj6>3.0.co;2-q.

3

Kim, Jun-Sik, and In So Kweon. "Camera calibration based on arbitrary parallelograms." Computer Vision and Image Understanding 113, no. 1 (January 2009): 1–10. http://dx.doi.org/10.1016/j.cviu.2008.06.003.

4

Steger, Carsten, and Markus Ulrich. "A Multi-view Camera Model for Line-Scan Cameras with Telecentric Lenses." Journal of Mathematical Imaging and Vision 64, no. 2 (October 13, 2021): 105–30. http://dx.doi.org/10.1007/s10851-021-01055-x.

Abstract:
We propose a novel multi-view camera model for line-scan cameras with telecentric lenses. The camera model supports an arbitrary number of cameras and assumes a linear relative motion with constant velocity between the cameras and the object. We distinguish two motion configurations. In the first configuration, all cameras move with independent motion vectors. In the second configuration, the cameras are mounted rigidly with respect to each other and therefore share a common motion vector. The camera model can model arbitrary lens distortions by supporting arbitrary positions of the line sensor with respect to the optical axis. We propose an algorithm to calibrate a multi-view telecentric line-scan camera setup. To facilitate a 3D reconstruction, we prove that an image pair acquired with two telecentric line-scan cameras can always be rectified to the epipolar standard configuration, in contrast to line-scan cameras with entocentric lenses, for which this is possible only under very restricted conditions. The rectification allows an arbitrary stereo algorithm to be used to calculate disparity images. We propose an efficient algorithm to compute 3D coordinates from these disparities. Experiments on real images show the validity of the proposed multi-view telecentric line-scan camera model.
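For orientation, the depth-independence that distinguishes telecentric from entocentric (perspective) imaging can be pictured with a scaled-orthographic projection. This is a minimal sketch under assumed parameter names (m, cx, cy, R, t), not the authors' multi-view line-scan model:

```python
import numpy as np

# Minimal sketch (assumed scaled-orthographic approximation of a telecentric lens,
# not the authors' multi-view line-scan model): image coordinates depend on the
# magnification m and the in-plane position, but not on the depth z.
def telecentric_project(points_xyz, m=0.05, cx=1024.0, cy=768.0,
                        R=np.eye(3), t=np.zeros(3)):
    p_cam = points_xyz @ R.T + t      # world frame -> camera frame
    u = m * p_cam[:, 0] + cx          # no division by depth, ...
    v = m * p_cam[:, 1] + cy          # ... unlike a perspective camera
    return np.stack([u, v], axis=1)

# Two points that differ only in depth map to the same pixel.
pts = np.array([[10.0, 5.0, 100.0], [10.0, 5.0, 250.0]])
print(telecentric_project(pts))
```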
5

Huang, Biao, and Shiping Zou. "A New Camera Calibration Technique for Serious Distortion." Processes 10, no. 3 (February 28, 2022): 488. http://dx.doi.org/10.3390/pr10030488.

Abstract:
A new camera calibration technique for cameras with serious lens distortion is proposed, which only requires the camera to observe a planar pattern in an arbitrary orientation. It uses the geometrical imaging principle and a radial distortion model to acquire the radial lens distortion coefficient and the image coordinates (u0, v0), and then solves a linear equation for the other camera parameters. The method has the following characteristics: firstly, the position of the camera relative to the plane is arbitrary, and the technique needs only a single observation of the planar pattern; secondly, it is suitable for calibrating cameras with serious distortion; thirdly, it does not need expensive ancillary equipment, accurate movement, or many photos taken from different orientations. Validated by computer simulation and real experiments, the results of the proposed technique have proved satisfactory. The research also paves a new way for further studies in camera calibration.
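As a rough illustration of the kind of radial distortion model such calibration methods work with, here is a minimal sketch with assumed coefficient names (k1, k2), not the paper's exact formulation:

```python
import numpy as np

# Minimal sketch (standard polynomial radial distortion on normalized image
# coordinates, assumed for illustration; not the paper's exact model).
def apply_radial_distortion(x, y, k1, k2=0.0):
    r2 = x ** 2 + y ** 2
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2   # radial scaling of the ideal point
    return x * factor, y * factor

# An ideal normalized point and its distorted observation for strong barrel distortion.
print(apply_radial_distortion(0.4, 0.2, k1=-0.3))
```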
6

Luan, LiKang, and Liam Crosbie. "A Three-Camera Digital Image Correlation System For Full-Field 3D Shape and Motion Measurement." Materials Evaluation 80, no. 11 (November 1, 2022): 34–41. http://dx.doi.org/10.32548/2022.me-04293.

Abstract:
A cluster-approach-based three-camera digital image correlation (DIC) system is introduced for full-field 3D shape and motion measurement. In this system, three cameras are employed to measure the same specimen area at different viewing angles. Data points within the region of interest can be evaluated by arbitrary camera pairs as a stereo DIC system so that data points with the smallest 3D residuum are selected and mapped into one common coordinate system. Two stationary shape measurements and one out-of-plane motion measurement were carried out with the three-camera DIC system. Test results were analyzed based on the same image series, projection calibration, and correlation parameters, but compared using different camera combinations (i.e., three-camera and two-camera data). Three-camera test results show not only an improved surface coverage due to the additional camera viewing angle for uneven specimen surfaces, but also a smaller and more homogeneously distributed measurement uncertainty compared to the two-camera test results. The selection of data points with the smallest 3D residuum evaluated from any arbitrary camera pairs enables a better tolerance of the three-camera DIC system against various measurement error sources such as limited depth of field, lens distortion, and speckle pattern distortion due to tilted camera viewing angles.
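The per-point best-pair selection described above can be pictured with a small array-indexing sketch (assumed data layout, not the authors' code):

```python
import numpy as np

# Minimal sketch (assumed data layout, not the authors' code): for every data point,
# keep the 3D reconstruction from whichever camera pair yields the smallest residuum.
residua = np.array([[0.8, 0.3, 0.5],       # residua of pairs (1-2), (1-3), (2-3) for point 0
                    [0.2, 0.9, 0.4]])      # ... and for point 1
points_by_pair = np.random.rand(2, 3, 3)   # (point, pair, xyz) candidate reconstructions
best_pair = residua.argmin(axis=1)         # index of the best pair per point
best_points = points_by_pair[np.arange(len(best_pair)), best_pair]
print(best_pair, best_points.shape)
```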
7

Imaide, T., T. Kurashige, T. Kinugasa, H. Ohtsubo, and M. Masuda. "A digital video camera with an arbitrary aspect ratio." IEEE Transactions on Consumer Electronics 37, no. 3 (1991): 506–12. http://dx.doi.org/10.1109/30.85560.

8

Gong, Xiaojin, Ying Lin, and Jilin Liu. "3D LIDAR-Camera Extrinsic Calibration Using an Arbitrary Trihedron." Sensors 13, no. 2 (February 1, 2013): 1902–18. http://dx.doi.org/10.3390/s130201902.

9

Fu, Bo, Yue Wang, Xiaqing Ding, Yanmei Jiao, Li Tang, and Rong Xiong. "LiDAR-Camera Calibration Under Arbitrary Configurations: Observability and Methods." IEEE Transactions on Instrumentation and Measurement 69, no. 6 (June 2020): 3089–102. http://dx.doi.org/10.1109/tim.2019.2931526.

10

Sipkens, T. A., S. J. Grauer, A. M. Steinberg, S. N. Rogak, and P. Kirchen. "New transform to project axisymmetric deflection fields along arbitrary rays." Measurement Science and Technology 33, no. 3 (December 21, 2021): 035201. http://dx.doi.org/10.1088/1361-6501/ac3f83.

Abstract:
Axisymmetric tomography is used to extract quantitative information from line-of-sight measurements of gas flow and combustion fields. For instance, background-oriented schlieren (BOS) measurements are typically inverted by tomographic reconstruction to estimate the density field of a high-speed or high-temperature flow. Conventional reconstruction algorithms are based on the inverse Abel transform, which assumes that rays are parallel throughout the target object. However, camera rays are not parallel, and this discrepancy can result in significant errors in many practical imaging scenarios. We present a generalization of the Abel transform for use in tomographic reconstruction of light-ray deflections through an axisymmetric target. The new transform models the exact path of camera rays instead of assuming parallel paths, thereby improving the accuracy of estimates. We demonstrate our approach with a simulated BOS scenario in which we reconstruct noisy synthetic deflection data across a range of camera positions. Results are compared to state-of-the-art Abel-based algorithms. Reconstructions computed using the new transform are consistently more stable and accurate than conventional reconstructions.
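For reference, the conventional parallel-ray Abel forward projection that this transform generalizes can be sketched numerically. This is a minimal illustration with assumed names and simple quadrature, not the authors' generalized transform:

```python
import numpy as np

# Minimal sketch (the conventional parallel-ray Abel forward projection, assumed
# names, simple quadrature): P(y) = 2 * integral_y^R f(r) * r / sqrt(r^2 - y^2) dr
def abel_forward(f_of_r, y, R=1.0, n=4000):
    r = np.linspace(y + 1e-6, R, n)              # avoid the integrable singularity at r = y
    g = f_of_r(r) * r / np.sqrt(r ** 2 - y ** 2)
    return 2.0 * float(np.sum((g[:-1] + g[1:]) * np.diff(r)) / 2.0)

f = lambda r: np.exp(-r ** 2 / 0.1)              # example axisymmetric field
print([round(abel_forward(f, y), 3) for y in (0.0, 0.2, 0.4)])
```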
11

TOGAMI, Naohisa, Geunho LEE, and Takahiro FUKUDOME. "Single Camera based Motion Investigations of Throwing with Arbitrary Trajectories." Proceedings of the Symposium on sports and human dynamics 2020 (2020): B—6–1. http://dx.doi.org/10.1299/jsmeshd.2020.b-6-1.

12

Thacker, Neil A., and John EW Mayhew. "Optimal combination of stereo camera calibration from arbitrary stereo images." Image and Vision Computing 9, no. 1 (February 1991): 27–32. http://dx.doi.org/10.1016/0262-8856(91)90045-q.

13

Al Attar, Mohammad, Edward Ziter, and Lisa Wedeen. "Could You Please Look into the Camera?" TDR/The Drama Review 58, no. 3 (September 2014): 124–55. http://dx.doi.org/10.1162/dram_a_00376.

Abstract:
Are theatre texts documents of rapidly changing events in times of extreme complexity? If so, then what kind of documents are they? These questions provided the first motivation for writing Could You Please Look into the Camera? The second motivation was purely personal: the Assad regime’s program of arbitrary detention had stepped up the frequency of arrests, creating a republic of fear. This ugly fear that Syrians now share—whether arrested or not—had to be exposed in public.
14

Zeng, Huai En, Qing Lin Yi, and Yi Zheng. "A Non-Iterative Algorithm to Determine Camera Position and Orientation." Key Engineering Materials 500 (January 2012): 409–15. http://dx.doi.org/10.4028/www.scientific.net/kem.500.409.

Abstract:
Determination of camera position and orientation is a basic research subject in photogrammetry and computer vision. This paper introduces a new explicit approach to determine the camera-point distances based on the computer algebra software Matlab, then derives the basic mathematical model of pose estimation, puts forward the analytical calculation of the rotation matrix based on the Gibbs vector or Rodrigues matrix, and finally presents explicit formulae for camera orientation and position. A simulated case is studied, and the result shows that the presented algorithm is correct and efficient for images with arbitrary pose angles.
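For context, the axis-angle (Rodrigues) construction of a rotation matrix that such pose parameterizations rely on can be sketched as follows. This is the standard textbook form, given here for illustration only; the paper itself works with the Gibbs vector or Rodrigues matrix:

```python
import numpy as np

# Minimal sketch (standard Rodrigues formula, assumed for illustration): build a
# rotation matrix from a rotation vector g = angle * unit_axis.
def rotation_from_vector(g):
    theta = np.linalg.norm(g)
    if theta < 1e-12:
        return np.eye(3)
    k = g / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

R = rotation_from_vector(np.array([0.0, 0.0, np.pi / 2]))
print(np.round(R, 3))    # 90-degree rotation about the z-axis
```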
15

Fu, Bo, Yue Wang, Xiaqing Ding, Yanmei Jiao, Li Tang, and Rong Xiong. "Erratum to “LiDAR-Camera Calibration Under Arbitrary Configurations: Observability and Methods”." IEEE Transactions on Instrumentation and Measurement 70 (2021): 1. http://dx.doi.org/10.1109/tim.2020.3034990.

16

Kruchinin, A. Yu. "INDUSTRIAL DATAMATRIX BARCODE RECOGNITION FOR AN ARBITRARY CAMERA ANGLE AND ROTATION." Computer Optics 38, no. 4 (January 1, 2014): 865–70. http://dx.doi.org/10.18287/0134-2452-2014-38-4-865-870.

17

Farin, D., and P. H. N. de With. "Enabling arbitrary rotational camera motion using multisprites with minimum coding cost." IEEE Transactions on Circuits and Systems for Video Technology 16, no. 4 (April 2006): 492–506. http://dx.doi.org/10.1109/tcsvt.2006.872781.

18

Zheng, Yinqiang. "Camera calibration using one perspective view of two arbitrary coplanar circles." Optical Engineering 47, no. 6 (June 1, 2008): 067203. http://dx.doi.org/10.1117/1.2943263.

19

SHIBATA, Masatoshi, Takashi KAWAKAMI, Takafumi OHORI, and Masahiro KINOSHITA. "2A1-K09 Intelligible Guitar e-Learning Contents using Arbitrary Camera Angles." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2007 (2007): _2A1—K09_1—_2A1—K09_2. http://dx.doi.org/10.1299/jsmermd.2007._2a1-k09_1.

20

Chang, Wen-Chung. "Automated quality inspection of camera zooming with real-time vision." Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 232, no. 12 (January 17, 2017): 2236–41. http://dx.doi.org/10.1177/0954405416683973.

Abstract:
Industrial automated production technologies have been the research focus of many recent studies, comprising the two research streams of automated assembly and automated product testing. Camera lens-shake detection is an effective way to measure the quality of video cameras during zooming. Conventional testing methods involve time-consuming manual operation procedures. This study proposes a novel automated camera lens-shake detection method, in which real-time visual tracking of two arbitrary features is used to measure and analyze camera zooming quality. The camera lens-shake detection approach can be used to screen out video cameras for the purpose of quality control. It can be effectively employed to replace conventional testing methods and enhance efficiency and stability of product manufacturing.
21

Neves, J. C., J. C. Moreno, and H. Proença. "A Master-Slave Calibration Algorithm with Fish-Eye Correction." Mathematical Problems in Engineering 2015 (2015): 1–8. http://dx.doi.org/10.1155/2015/427270.

Abstract:
Surveillance systems capable of autonomously monitoring vast areas are an emerging trend, particularly when wide-angle cameras are combined with pan-tilt-zoom (PTZ) cameras in a master-slave configuration. The use of fish-eye lenses allows the master camera to maximize the coverage area while the PTZ acts as a foveal sensor, providing high-resolution images of regions of interest. Despite the advantages of this architecture, the mapping between image coordinates and pan-tilt values is the major bottleneck in such systems, since it depends on depth information and fish-eye effect correction. In this paper, we address these problems by exploiting geometric cues to perform height estimation. This information is used both for inferring 3D information from a single static camera deployed on an arbitrary position and for determining lens parameters to remove fish-eye distortion. When compared with the previous approaches, our method has the following advantages: (1) fish-eye distortion is corrected without relying on calibration patterns; (2) 3D information is inferred from a single static camera disposed on an arbitrary location of the scene.
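The image-to-pan/tilt mapping discussed above ultimately amounts to pointing the slave camera at an estimated 3D target. A minimal geometric sketch with an assumed frame convention, not the authors' master-slave calibration:

```python
import math

# Minimal sketch (assumed frame convention, not the authors' master-slave calibration):
# pan/tilt angles that aim a PTZ camera at a 3D target given in the PTZ camera's frame.
def pan_tilt_to_target(x, y, z):
    pan = math.degrees(math.atan2(x, z))                  # rotation about the vertical axis
    tilt = math.degrees(math.atan2(y, math.hypot(x, z)))  # elevation out of the horizontal plane
    return pan, tilt

print(pan_tilt_to_target(1.0, 0.5, 4.0))
```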
22

Yamato, Kazuki, Toshihiko Yamashita, Hiroyuki Chiba, and Hiromasa Oku. "Fast Volumetric Feedback under Microscope by Temporally Coded Exposure Camera." Sensors 19, no. 7 (April 3, 2019): 1606. http://dx.doi.org/10.3390/s19071606.

Abstract:
We developed a temporally coded exposure (TeCE) camera that can cope with high-speed focus variations of a tunable acoustic gradient index (TAG) lens. The TeCE camera can execute a very short exposure multiple times at an arbitrary timing during one shot. Furthermore, by accumulating the photoelectrons generated by each exposure, it is possible to maintain the brightness even with a short exposure time. By synchronously driving the TeCE camera and the TAG lens, different focal planes of an observation target can be acquired at high speed. As a result, high-speed three-dimensional measurement becomes possible, and this can be used for feedback of three-dimensional information. In the work described in this paper, we conducted a focus tracking experiment to evaluate the feedback performance of the TeCE camera. From the experimental results, we confirmed the feedback capability of the TeCE camera.
23

Ogi, Tetsuro, and Takeshi Yokota. "Effect of visual attention guidance by camera work in visualization using dome display." International Journal of Modeling, Simulation, and Scientific Computing 09, no. 03 (May 24, 2018): 1840003. http://dx.doi.org/10.1142/s1793962318400032.

Abstract:
The dome display is expected to serve as an effective visualization environment for modeling and simulation owing to its frameless design and highly immersive sensation. However, since users of the dome display can freely view the projected image in any direction, it is difficult to share information among the viewers. In this research, in order to solve this problem, the effect of visual attention guidance in the dome environment through camera work was examined. As a visualization system, DomePlayer, which can express the effect of camera work based on a camera work description language, was developed. From the results of evaluation experiments using this system, the constraint conditions on camera work in the dome environment were derived and the effect of visual attention guidance by camera work was evaluated.
24

Gieseler, Oliver, Hubert Roth, and Jürgen Wahrburg. "A novel 4 camera multi-stereo tracking system for application in surgical navigation systems." tm - Technisches Messen 87, no. 7-8 (July 26, 2020): 451–58. http://dx.doi.org/10.1515/teme-2019-0144.

Abstract:
In this paper, we present a novel 4 camera stereo system for application as optical tracking component in navigation systems in computer-assisted surgery. This shall replace a common stereo camera system in several applications. The objective is to provide a tracking component consisting of four single industrial cameras. The system can be built up flexibly in the operating room, e.g. at the operating room lamp. The concept is characterized by independent, arbitrary camera mounting poses and demands easy on-site calibration procedures of the camera setup. Following a short introduction describing the environment, motivation and advantages of the new camera system, a simulation of the camera setup and arrangement is depicted in Section 2. From this, we gather important information and parameters for the hardware setup, which is described in Section 3. Section 4 includes the calibration of the cameras. Here, we illustrate the background of camera model and applied calibration procedures, a comparison of calibration results obtained with different calibration programs and a new concept for fast and easy extrinsic calibration.
25

Blokhinov, Y. B., M. S. Verkeenko, S. V. Skryabin, E. E. Andrienko, V. A. Gorbachev, and I. V. Osokin. "Interpolation of Google Street View for arbitrary position of a virtual camera." Geodesy and Aerophotosurveying 62, no. 5 (2018): 495–506. http://dx.doi.org/10.30533/0536-101x-2018-62-5-495-506.

26

Tu, Zongjie, and Prabir Bhattacharya. "Game-theoretic surveillance over arbitrary floor plan using a video camera network." Signal, Image and Video Processing 7, no. 4 (May 7, 2013): 705–21. http://dx.doi.org/10.1007/s11760-013-0484-8.

27

Lourakis, M., M. Pateraki, I. A. Karolos, C. Pikridas, and P. Patias. "POSE ESTIMATION OF A MOVING CAMERA WITH LOW-COST, MULTI-GNSS DEVICES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2020 (August 12, 2020): 55–62. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2020-55-2020.

Abstract:
Without additional prior information, the pose of a camera estimated with computer vision techniques is expressed in a local coordinate frame attached to the camera’s initial location. Albeit sufficient in many cases, such an arbitrary representation is not convenient for employment in certain applications and has to be transformed to a coordinate system external to the camera before further use. Assuming a camera that is firmly mounted on a moving platform, this paper describes a method for continuously tracking the pose of that camera in a projected coordinate system. By combining exterior orientation from a known target with incremental pose changes inferred from accurate multi-GNSS positioning, the full 6 DoF pose of the camera is updated with low processing overhead and without requiring the continuous visual tracking of ground control points. Experimental results of applying the proposed method to a moving vehicle and a mobile port crane are reported, demonstrating its efficacy and potential.
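The incremental pose update described above can be pictured as composing homogeneous transforms. This is a minimal sketch with assumed conventions and made-up coordinate values, not the authors' pipeline:

```python
import numpy as np

# Minimal sketch (assumed conventions, not the authors' pipeline): keep the camera pose as
# a 4x4 homogeneous transform in a projected (map) frame and apply an incremental motion.
T_map_cam = np.eye(4)
T_map_cam[:3, 3] = [500000.0, 4649000.0, 120.0]   # initial exterior orientation (map frame)

T_step = np.eye(4)
T_step[:3, 3] = [0.0, 0.0, 1.5]                   # 1.5 m of motion along the camera axis

T_map_cam = T_map_cam @ T_step                    # updated pose after the increment
print(T_map_cam[:3, 3])
```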
28

CAMPBELL, S. D., I. L. GOODIN, S. D. GROBE, Q. SU, and R. GROBE. "DECOMPOSITION BASED IMAGING WITH FIBER AND CCD CAMERA DETECTIONS." Journal of Innovative Optical Health Sciences 01, no. 01 (June 2008): 115–24. http://dx.doi.org/10.1142/s1793545808000029.

Abstract:
We generalize a previously proposed imaging scheme to situations for which the set of hidden objects embedded in the highly scattering medium can take arbitrary shapes. We compare the accuracy of images obtained from optical detection fibers with those from a CCD camera. The latter approach is more efficient and can be applied to non-contact geometries, but it requires an a priori linearization of the obtained digitized images. We discuss some details of this calibration for the camera and establish its potential as a new tool for decomposition based imaging.
29

Qu, Shao Cheng, Yao Tian, and Chan Chen. "Fuzzy PID Control for Intelligent Smartcar System with CCD Camera." Key Engineering Materials 467-469 (February 2011): 433–36. http://dx.doi.org/10.4028/www.scientific.net/kem.467-469.433.

Abstract:
This paper introduces a design of an intelligent smartcar system based on the MCU SPCE061A and the MCU 89S52. The practical path images are obtained by the CCD camera, and then a double-P control method is applied to dynamically adjust the steering angle. Based on the fuzzy relationship between angle and velocity, a PD controller is proposed to achieve path identification and tracking of arbitrary paths. Experiments show that the response and stability of the proposed system are excellent.
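A generic discrete PD step of the kind referred to above can be sketched as follows, with assumed gains; this is not the paper's tuned fuzzy scheme:

```python
# Minimal sketch (generic discrete PD controller with assumed gains; not the paper's
# tuned fuzzy scheme): steering correction computed from the lateral path error.
def pd_step(error, prev_error, kp=0.8, kd=0.15, dt=0.02):
    derivative = (error - prev_error) / dt
    return kp * error + kd * derivative

print(pd_step(error=5.0, prev_error=4.2))
```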
30

CHEN, ZEN, and CHAO-MING WANG. "A TWO-PARAMETER GENERATE-AND-TEST METHOD FOR CAMERA PARAMETER ESTIMATION WITH ANY PLANAR CALIBRATION OBJECT." International Journal of Pattern Recognition and Artificial Intelligence 08, no. 04 (August 1994): 877–98. http://dx.doi.org/10.1142/s0218001494000450.

Abstract:
In this paper we address the camera parameter estimation problem using an arbitrary planar calibration object. We intend not to use a system of linear or nonlinear equations in terms of the camera parameters derived from the correspondence relationships between the image features (points or lines) and the object features. Instead, we assume that the shape of the planar object can be arbitrary. Therefore, there may not exist any extractable object point or line features. We formulate the estimation method as a generate-and-test method. To make the method effective, we reduce the dimensionality of the parameter space from six to two using a simplified model of the perspective transformation under some mild conditions. Furthermore, several other measures are also taken to cut down the generation time and the verification time. Real, computer-generated, and noisy imagery data are used to illustrate our method in the experiments. The statistics on the speed performance of the method are also included.
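The generate-and-test idea over a reduced two-parameter space can be pictured with a small grid search. This is a generic sketch with an assumed stand-in error function, not the paper's verification measures:

```python
import itertools

# Minimal sketch (generic two-parameter generate-and-test with an assumed error function;
# not the paper's verification measures): pick the candidate pair with the lowest error.
def generate_and_test(candidates_a, candidates_b, error_fn):
    return min(itertools.product(candidates_a, candidates_b),
               key=lambda ab: error_fn(*ab))

error = lambda a, b: (a - 0.3) ** 2 + (b + 1.2) ** 2   # stand-in verification error
grid_a = [i / 10 for i in range(-10, 11)]
grid_b = [i / 10 for i in range(-20, 21)]
print(generate_and_test(grid_a, grid_b, error))        # expected near (0.3, -1.2)
```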
31

Liu, Peng, Haibo Tian, and Xinzhou Qiao. "Minimum Cable Tensions and Tension Sensitivity for Long-Span Cable-Driven Camera Robots with Applications to Stability Analysis." Actuators 12, no. 1 (December 31, 2022): 17. http://dx.doi.org/10.3390/act12010017.

Abstract:
Employing cables with strong flexibility and unidirectional restraints to operate a camera platform leads to stability issues for a camera robot with long-span cables when the cable mass is considered. Cable tensions, which are the constraints for the camera platform, have a critical influence on the stability of the robot. Consequently, this paper focuses on two special problems of minimum cable tension distributions (MCTDs) within the workspace and the cable tension sensitivity analysis (CTSA) for a camera robot by taking the cable mass into account, which can be used to investigate the stability of the robot. Firstly, three minimum cable tension distribution indices (MCTDIs) were proposed for the camera robot. An important matter is that the three proposed MCTDIs, which represent the weakest constraints for the camera platform, can be employed for investigating the stability of the robot. In addition, a specified minimum cable tension workspace (SMCTW) is introduced, where the minimum cable tension when the camera platform is located at an arbitrary position meets the given requirement. Secondly, the CTSA model and cable tension sensitivity analysis index (CTSAI) for the camera robot were proposed with the grey relational analysis method, in which the influence mechanism and the degree of influence of the camera platform's position on the cable tensions were investigated in detail. Lastly, the reasonableness of the presented MCTDIs and of the method for the CTSA, with applications in the stability analysis of the camera robot, was supported by simulation studies.
32

Liu, Ya Wei, and Jian Wei Li. "A New Technique for Camera Calibration Using Circle Template." Key Engineering Materials 467-469 (February 2011): 1917–20. http://dx.doi.org/10.4028/www.scientific.net/kem.467-469.1917.

Abstract:
In view of the poor anti-noise performance of traditional methods, the principle of camera calibration is analyzed based on a calibration template with planar circles, and a new calibration technique is proposed based on the pin-hole perspective model. Firstly, the edges of the image ellipses are located with sub-pixel accuracy by a spatial-moment operator. Then elliptic equations are obtained by the least-squares fitting method, and the common tangents of two arbitrary ellipses are also calculated. Finally, the extrinsic and intrinsic parameters are computed with the Radial Alignment Constraint (RAC) model. Experimental results indicate that the proposed method performs well, with lower computational complexity and higher precision.
33

Yang, Seung-Eun. "Gesture Spotting by Web-Camera in Arbitrary Two Positions and Fuzzy Garbage Model." KIPS Transactions on Software and Data Engineering 1, no. 2 (November 30, 2012): 127–36. http://dx.doi.org/10.3745/ktsde.2012.1.2.127.

34

Laviada, Jaime, Mohammad Tayeb Ghasr, Miguel Lopez-Portugues, Fernando Las-Heras, and Reza Zoughi. "Real-Time Multiview SAR Imaging Using a Portable Microwave Camera With Arbitrary Movement." IEEE Transactions on Antennas and Propagation 66, no. 12 (December 2018): 7305–14. http://dx.doi.org/10.1109/tap.2018.2870485.

35

Ferreira, Flávio P., Paulo M. F. Forte, Paulo E. R. Felgueiras, Boris P. J. Bret, Michael S. Belsley, and Eduardo J. Nunes-Pereira. "Evaluating sub-pixel functional defects of a display using an arbitrary resolution camera." Displays 49 (September 2017): 40–50. http://dx.doi.org/10.1016/j.displa.2017.06.001.

36

Madhav, Priti, James E. Bowsher, Spencer J. Cutler, and Martin P. Tornai. "Characterizing the MTF in 3D for a Quantized SPECT Camera Having Arbitrary Trajectories." IEEE Transactions on Nuclear Science 56, no. 3 (June 2009): 661–70. http://dx.doi.org/10.1109/tns.2009.2013464.

37

Aly, H. A., and E. Dubois. "Design of Optimal Camera Apertures Adapted to Display Devices Over Arbitrary Sampling Lattices." IEEE Signal Processing Letters 11, no. 4 (April 2004): 443–45. http://dx.doi.org/10.1109/lsp.2004.824062.

38

Zheng, Qinfen, and Rama Chellappa. "Automatic feature point extraction and tracking in image sequences for arbitrary camera motion." International Journal of Computer Vision 15, no. 1-2 (June 1995): 31–76. http://dx.doi.org/10.1007/bf01450849.

39

Liu, Chun Feng, Shan Shan Kong, and Hai Ming Wu. "Research on a Single Camera Location Model and its Application." Applied Mechanics and Materials 50-51 (February 2011): 468–72. http://dx.doi.org/10.4028/www.scientific.net/amm.50-51.468.

Abstract:
Digital cameras have been widely used in the areas of road transportation, railway transportation, and security systems. To address the positioning of digital cameras in these fields, this paper proposes a geometric calibration method based on feature-point extraction from an arbitrary target. The paper first defines four coordinate systems, among them the world coordinate system and the camera coordinate system centered at the camera's optical center, and uses the coordinate transformation of the same point between different coordinate systems to determine the relationship between the world and camera coordinate systems. The camera's internal and external parameters are thus determined, with the external parameters expressed through a transformation (rotation) matrix and a translation vector, and a single camera location model is established. According to this model, the camera's external parameters are used to obtain the image-plane coordinates of the center point of the target circle.
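The coordinate-system chain sketched in this abstract is the textbook pinhole relation between world, camera, and image coordinates. A minimal illustration with assumed intrinsic and extrinsic values (not the paper's calibration):

```python
import numpy as np

# Minimal sketch (textbook pinhole model with assumed values, not the paper's calibration):
# project a world point into pixel coordinates via extrinsic (R, t) and intrinsic (K) parameters.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                       # world -> camera rotation
t = np.array([0.0, 0.0, 5.0])       # world -> camera translation

X_world = np.array([0.2, -0.1, 0.0])
X_cam = R @ X_world + t             # world coordinates -> camera coordinates
uvw = K @ X_cam                     # camera coordinates -> homogeneous image coordinates
u, v = uvw[:2] / uvw[2]
print(round(u, 2), round(v, 2))
```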
40

BROWN, MICHAEL S., and W. BRENT SEALES. "INCORPORATING GEOMETRIC REGISTRATION WITH PC-CLUSTER RENDERING FOR FLEXIBLE TILED DISPLAYS." International Journal of Image and Graphics 04, no. 04 (October 2004): 683–700. http://dx.doi.org/10.1142/s0219467804001592.

Abstract:
We describe a display system that automatically registers an array of casually positioned projectors and drives the display with a PC cluster. The display system uses a camera to compute the necessary transformations to geometrically align multiple overlapping projectors. Geometric alignment is realized through real-time corrective warping applied in the rendering pipeline. This combination of camera-aided display alignment with the PC-cluster rendering engine provides a display system that is easy to configure and reconfigure, accommodates casually tiled projectors and arbitrary display surfaces, and can be operational in a very short period of time.
41

Rubin, Noah A., Gabriele D’Aversa, Paul Chevalier, Zhujun Shi, Wei Ting Chen, and Federico Capasso. "Matrix Fourier optics enables a compact full-Stokes polarization camera." Science 365, no. 6448 (July 4, 2019): eaax1839. http://dx.doi.org/10.1126/science.aax1839.

Abstract:
Recent developments have enabled the practical realization of optical elements in which the polarization of light may vary spatially. We present an extension of Fourier optics—matrix Fourier optics—for understanding these devices and apply it to the design and realization of metasurface gratings implementing arbitrary, parallel polarization analysis. We show how these gratings enable a compact, full-Stokes polarization camera without standard polarization optics. Our single-shot polarization camera requires no moving parts, specially patterned pixels, or conventional polarization optics and may enable the widespread adoption of polarization imaging in machine vision, remote sensing, and other areas.
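As background on what a full-Stokes camera recovers, a Stokes vector can be reconstructed from intensities measured behind several polarization analyzers. This is a minimal sketch with idealized analyzer vectors, assumed for illustration and unrelated to the metasurface-grating design itself:

```python
import numpy as np

# Minimal sketch (idealized analyzers, assumed; not the metasurface grating design):
# an ideal analyzer with Stokes vector a transmits I = 0.5 * (S0 + a1*S1 + a2*S2 + a3*S3).
A = 0.5 * np.array([[1.0,  1.0,  0.0,  0.0],   # horizontal linear analyzer
                    [1.0, -1.0,  0.0,  0.0],   # vertical linear analyzer
                    [1.0,  0.0,  1.0,  0.0],   # 45-degree linear analyzer
                    [1.0,  0.0,  0.0,  1.0]])  # right-circular analyzer

S_true = np.array([1.0, 0.3, -0.2, 0.1])       # example Stokes vector (S0, S1, S2, S3)
I = A @ S_true                                  # simulated measured intensities
S_est = np.linalg.lstsq(A, I, rcond=None)[0]    # recover the Stokes vector
print(np.round(S_est, 6))
```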
42

Sharma, Mayank, Raghavendra Sara, and Shefali Agrawal. "framework to Georeference Point Cloud in GPS Shadowed Urban Environment by Integrating Computer Vision Algorithms and Photogrammetric Techniques." Journal of Geomatics 16, no. 2 (October 31, 2022): 146–51. http://dx.doi.org/10.58825/jog.2022.16.2.37.

Abstract:
The integration of computer vision algorithms and photogrammetric techniques has become an alternative to high-cost Mobile Mapping Systems (MMS), and point cloud generation through the Structure from Motion (SfM) algorithm is the best example of it. The point cloud generated using SfM is in an arbitrary coordinate system, and for its georeferencing the known global coordinates of the camera exposure stations and the rotational and translational parameters are required. The global coordinates of the exposure stations are obtained through GNSS (Global Navigation Satellite System). GPS (Global Positioning System) is widely used for getting the positional information of a point. The problem in georeferencing the point cloud arises if the coordinates of a few camera exposure stations are unknown due to GPS shadowing or poor GDOP (Geometric Dilution of Precision). This issue is common in MMS that use laser scanners, GNSS and an IMU (inertial measurement unit). In this paper, efforts are made to develop a methodology for handling GPS shadowing or poor accuracy in the georeferencing of arbitrary point clouds generated through SfM. The adopted method uses a blend of the photogrammetric techniques of space resection and space intersection to determine the unknown camera exposure stations' coordinates. Bundle adjustment is applied to improve the accuracy of the results obtained. The developed methodology is analyzed in different cases, and the results show good accuracy in the respective cases.
43

Daku, Gábor, and János Vad. "Preliminary Design Guidelines for Considering the Vibration and Noise of Low-Speed Axial Fans Due to Profile Vortex Shedding." International Journal of Turbomachinery, Propulsion and Power 7, no. 1 (January 7, 2022): 2. http://dx.doi.org/10.3390/ijtpp7010002.

Abstract:
This paper presents a critical overview of worst-case design scenarios for which low-speed axial flow fans may exhibit an increased risk of blade resonance due to profile vortex shedding. To set up a design example, a circular-arc-cambered plate of 8% relative curvature is investigated using the twofold approaches of blade mechanics and aerodynamics. For these purposes, the frequency of the first bending mode of a plate of arbitrary circular camber is expressed by modeling the fan blade as a cantilever beam. Furthermore, an iterative blade design method is developed for checking the risky scenarios for which spanwise and spatially coherent shed vortices, stimulating pronounced vibration and noise, may occur. Coupling these two approaches, cases for vortex-induced blade resonance are set up. On this basis, design guidelines are elaborated for avoiding such resonance. Based on the approach presented herein, guidelines are also developed for moderating the annoyance due to the vortex shedding noise.
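For a sense of scale, the first bending frequency of a uniform flat cantilever is a standard Euler-Bernoulli result. This is a minimal sketch with assumed material and geometry values; the paper itself extends the treatment to plates of arbitrary circular camber:

```python
import math

# Minimal sketch (standard Euler-Bernoulli result for a uniform flat cantilever, with
# assumed values; the paper treats circular-arc-cambered plates): first bending frequency.
def cantilever_first_mode_hz(E, I, rho, A, L):
    beta1_L = 1.8751                                 # first-mode eigenvalue of a cantilever
    return (beta1_L ** 2 / (2.0 * math.pi)) * math.sqrt(E * I / (rho * A * L ** 4))

# Example: a 300 mm long, 30 mm wide, 1 mm thick steel strip.
E, rho, L, b, h = 210e9, 7850.0, 0.3, 0.03, 0.001
print(round(cantilever_first_mode_hz(E, b * h ** 3 / 12, rho, b * h, L), 1), "Hz")
```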
44

Nagano, Tatsuki, Ryosuke Yajima, Shunsuke Hamasaki, Keiji Nagatani, Alessandro Moro, Hiroyuki Okamoto, Genki Yamauchi, Takeshi Hashimoto, Atsushi Yamashita, and Hajime Asama. "Arbitrary Viewpoint Visualization for Teleoperated Hydraulic Excavators." Journal of Robotics and Mechatronics 32, no. 6 (December 20, 2020): 1233–43. http://dx.doi.org/10.20965/jrm.2020.p1233.

Abstract:
In this paper, we propose a visualization system for the teleoperation of excavation works using a hydraulic excavator. An arbitrary viewpoint visualization system is a visualization system that enables teleoperators to observe the environment around a machine by combining multiple camera images. However, when applied to machines with arms (such as hydraulic excavators), a part of the field of view is shielded by the image of the excavator’s arm; hence, an occlusion occurs behind the arm. Furthermore, it is difficult for teleoperators to understand the three-dimensional (3D) condition of the excavating point because the current system approximates the surrounding environment with a predetermined shape. To solve these problems, we propose two methods: (1) a method to reduce the occluded region and expand the field of view, and (2) a method to measure and integrate the 3D information of the excavating point to the image. In addition, we conduct experiments using a real hydraulic excavator, and we demonstrate that an image with sufficient accuracy can be presented in real-time.
45

Antuña-Sánchez, Juan C., Roberto Román, Victoria E. Cachorro, Carlos Toledano, César López, Ramiro González, David Mateos, Abel Calle, and Ángel M. de Frutos. "Relative sky radiance from multi-exposure all-sky camera images." Atmospheric Measurement Techniques 14, no. 3 (March 22, 2021): 2201–17. http://dx.doi.org/10.5194/amt-14-2201-2021.

Abstract:
All-sky cameras are frequently used to detect cloud cover; however, this work explores the use of these instruments for the more complex purpose of extracting relative sky radiances. An all-sky camera (SONA202-NF model) with three colour filters narrower than usual for this kind of camera is configured to capture raw images at seven exposure times. A detailed camera characterization of the black level, readout noise, hot pixels and linear response is carried out. A methodology is proposed to obtain a linear high dynamic range (HDR) image and its uncertainty, which represents the relative sky radiance (in arbitrary units) maps at three effective wavelengths. The relative sky radiances are extracted from these maps and normalized by dividing every radiance of one channel by the sum of all radiances at this channel. Then, the normalized radiances are compared with the sky radiance measured at different sky points by a sun and sky photometer belonging to the Aerosol Robotic Network (AERONET). The camera radiances correlate with the photometer ones except for scattering angles below 10°, which is probably due to some light reflections on the fisheye lens and camera dome. Camera and photometer wavelengths are not coincident; hence, camera radiances are also compared with sky radiances simulated by a radiative transfer model at the same camera effective wavelengths. This comparison reveals an uncertainty in the normalized camera radiances of about 3.3 %, 4.3 % and 5.3 % for 467, 536 and 605 nm, respectively, if specific quality criteria are applied.
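The multi-exposure combination step can be sketched as averaging dark-corrected counts per unit exposure time over the valid (unsaturated) frames, followed by normalization. This is a minimal illustration with assumed dark-level and saturation values, not the SONA processing chain:

```python
import numpy as np

# Minimal sketch (assumed dark level and saturation values, not the SONA processing chain):
# combine a multi-exposure raw stack into a relative radiance per pixel, then normalize.
def relative_radiance(raw_stack, exposures, dark_level=100.0, saturation=4000.0):
    raw = np.asarray(raw_stack, dtype=float)
    corrected = raw - dark_level                              # black-level correction
    valid = (corrected > 0) & (raw < saturation)              # drop dark and saturated samples
    per_exposure = corrected / np.asarray(exposures, dtype=float)[:, None]
    counts = (per_exposure * valid).sum(axis=0)
    n = valid.sum(axis=0)
    return np.where(n > 0, counts / np.maximum(n, 1), np.nan)

stack = [[120, 800], [600, 3900], [2900, 4095]]               # 3 exposure times x 2 pixels
radiance = relative_radiance(stack, exposures=[1, 5, 25])
print(radiance, radiance / np.nansum(radiance))               # relative and normalized radiances
```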
46

Hirabayashi, Taketsugu, Kazuki Abukawa, Tomoo Sato, Sayuri Matsumoto, and Muneo Yoshie. "First Trial of Underwater Excavator Work Supported by Acoustic Video Camera." Journal of Robotics and Mechatronics 28, no. 2 (April 19, 2016): 138–48. http://dx.doi.org/10.20965/jrm.2016.p0138.

Abstract:
External recognition is important for underwater machinery works. However, acquisition of external field information from optical camera images may not be possible, owing to the muddiness of the water caused by such work. Furthermore, in order to improve the workability of machines during remote operation, it is important to know the positional relation between a target object and the end effector. To solve these problems, an acoustic video camera was developed and performance test experiments were conducted at a caisson dockyard. In the experiments, a prototype of the acoustic video camera was used to measure and recognize target objects and an underwater construction machine, and the feasibility of monitoring underwater construction using the acoustic videos was evaluated. As a result, it was found that despite the lower accuracy of shape recognition on account of a resolution problem, the positional relation could be recognized satisfactorily since the video images could be presented from an arbitrary viewpoint.
47

Ratcliffe, Jerry H., and Elizabeth R. Groff. "A Longitudinal Quasi-Experimental Study of Violence and Disorder Impacts of Urban CCTV Camera Clusters." Criminal Justice Review 44, no. 2 (December 3, 2018): 148–64. http://dx.doi.org/10.1177/0734016818811917.

Abstract:
Methodological challenges have hampered a number of previous studies into the crime reduction effectiveness of closed-circuit television (CCTV) surveillance systems. These have included the use of arbitrary fixed distances to represent estimated camera deterrence areas and a lack of control for camera sites with overlapping surveillance areas. The current article overcomes the first of these challenges by using camera view areas individually constructed by researchers viewing and manipulating cameras to determine precise camera viewsheds. The second challenge is addressed by grouping cameras into clusters of combined viewshed areas. The longitudinal crime and disorder reduction effectiveness of these clusters of overlapping CCTV cameras is tested in Philadelphia, PA. Multilevel mixed-effects models with time-varying covariates and measures from a noncomparable control area are applied to 10 years of crime data (2003–2012) within the viewsheds of 86 CCTV cameras grouped into 13 clusters. Models applied across violent street felonies and disorder incidents find no significant impact associated with the introduction of CCTV surveillance. Potential reasons for this are discussed.
48

Craven, J. M., E. Meeks, G. Delich, E. Ayars, H. K. Pechkis, and J. A. Pechkis. "A low-cost shutter driver and arbitrary waveform generator for optical switching using a programmable system-on-chip (PSoC) device." Review of Scientific Instruments 93, no. 11 (November 1, 2022): 113002. http://dx.doi.org/10.1063/5.0105884.

Abstract:
We have developed a low-cost mechanical shutter driver with integrated arbitrary waveform generation for optical switching and control using a programmable system-on-chip device. This microcontroller-based device with configurable digital and analog blocks is readily programmed using free software, allowing for easy customization for a variety of applications. Additional digital and analog outputs with arbitrary timings can be used to control a variety of devices, such as additional shutters, acousto-optical modulators, or camera trigger pulses, for complete control and imaging of laser light. Utilizing logic-level control signals, this device can be readily integrated into existing computer control and data acquisition systems for expanded hardware capabilities.
49

Hwang, Seo-Yeon, Jae-Bok Song, and Mun Sang Kim. "Robust Extraction of Arbitrary-Shaped Features in Ceiling for Upward-Looking Camera-Based SLAM." IFAC Proceedings Volumes 44, no. 1 (January 2011): 8165–70. http://dx.doi.org/10.3182/20110828-6-it-1002.00459.

50

Chen, Zen, Chao-Ming Wang, and Shinn-Ying Ho. "An effective search approach to camera parameter estimation using an arbitrary planar calibration object." Pattern Recognition 26, no. 5 (May 1993): 655–66. http://dx.doi.org/10.1016/0031-3203(93)90119-h.
