Journal articles on the topic "Single object"

To see other types of publications on this topic, follow the link: Single object.

Consult the top 50 journal articles for your research on the topic "Single object".

You can also download the full text of the scholarly publication as a PDF and read its abstract online, when this information is included in the metadata.

Browse journal articles from a variety of disciplines and organize your bibliography correctly.

1

Dong, Yu Bing, Ying Sun, and Ming Jing Li. "Multi-Object Tracking with Single Camera". Applied Mechanics and Materials 740 (March 2015): 668–71. http://dx.doi.org/10.4028/www.scientific.net/amm.740.668.

Full text
Abstract:
Multi-object tracking has been a challenging topic in computer vision. A simple and efficient moving multi-object tracking algorithm is proposed. A new tracking method that combines trajectory prediction with sub-block matching is used to handle object occlusion. Experimental results show that the proposed algorithm performs well.
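The abstract of entry 1 names two ingredients, trajectory prediction and sub-block matching, without giving details. A minimal sketch of how these two pieces are commonly combined (a constant-velocity predictor plus a sum-of-absolute-differences block search; all function names are illustrative, not taken from the paper):

```python
import numpy as np

def predict_next(track):
    """Constant-velocity trajectory prediction: extrapolate the next
    position from the last two observed positions of a track."""
    (x1, y1), (x2, y2) = track[-2], track[-1]
    return (2 * x2 - x1, 2 * y2 - y1)

def sad(block_a, block_b):
    """Sum of absolute differences, the usual block-matching cost."""
    return int(np.abs(block_a.astype(int) - block_b.astype(int)).sum())

def match_subblock(frame, template, center, radius):
    """Search a window of +/-radius pixels around `center` for the
    sub-block that best matches `template`; returns the best top-left
    position and its cost. During occlusion, the predicted position
    from `predict_next` would seed `center`."""
    h, w = template.shape
    cx, cy = center
    best, best_pos = None, center
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x, y = cx + dx, cy + dy
            patch = frame[y:y + h, x:x + w]
            if patch.shape != template.shape:
                continue  # window partially outside the frame
            cost = sad(patch, template)
            if best is None or cost < best:
                best, best_pos = cost, (x, y)
    return best_pos, best
```

In a tracker, the prediction seeds the search window so that a briefly occluded object is re-acquired near where its trajectory says it should be.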
2

Moscatelli, Alberto. "A single object rotating". Nature Nanotechnology 13, no. 9 (September 2018): 769. http://dx.doi.org/10.1038/s41565-018-0265-1.

Full text
3

Ren, Xiaoyuan, Libing Jiang, and Zhuang Wang. "Pose Estimation of Uncooperative Unknown Space Objects from a Single Image". International Journal of Aerospace Engineering 2020 (18 July 2020): 1–9. http://dx.doi.org/10.1155/2020/9966311.

Full text
Abstract:
Estimating the 3D pose of a space object from a single image is an important but challenging task. Most existing methods estimate the 3D pose of known space objects and assume that the detailed geometry of the specific object is known, so they are not applicable to unknown objects whose geometry is unavailable. In contrast to previous works, this paper addresses estimating the 3D pose of an unknown space object from a single image, recovering not only the pose but also the shape of the object. A hierarchical shape model is proposed to represent the prior structural information of typical space objects, and on this basis the pose and shape parameters are estimated simultaneously. Experimental results demonstrate the effectiveness of our method in estimating the 3D pose and inferring the geometry of unknown typical space objects from a single image, and show its advantage over methods that rely on known object geometry.
4

Osiurak, François, Ghislaine Aubin, Philippe Allain, Christophe Jarry, Isabelle Richard, and Didier Le Gall. "Object utilization and object usage: A single-case study". Neurocase 14, no. 2 (27 June 2008): 169–83. http://dx.doi.org/10.1080/13554790802108372.

Full text
5

Dubanov, Aleksandr. "GEOMETRIC MODEL OF GROUP PURSUIT OF A SINGLE TARGET BY THE CHASE METHOD". Geometry & Graphics 10, no. 2 (10 October 2022): 20–26. http://dx.doi.org/10.12737/2308-4898-2022-10-2-20-26.

Full text
Abstract:
The article describes a model of group pursuit of a single target by the chase method. All objects participating in the pursuit model move at a speed of constant magnitude. One of the participants moves along a certain trajectory and releases objects at specified intervals, whose task is to reach the target by the chase method. All objects are constrained in the curvature of their motion paths. The single target, in turn, is tasked with reaching the object that releases the pursuers, using the parallel-approach method. For each pursuing object a detection area is formed as two rays, with the object's velocity vector directed along the bisector of the angle they form. If the target enters the detection area, the object begins pursuit and its velocity vector is directed at the target; if the target leaves the detection area, the object moves uniformly in a straight line. The task is to implement a dynamic model of multiple group pursuit in which each object has its own task, realized by the chase method. As an example where the model developed in the article could be in demand, consider a low-maneuverability object being overtaken by a faster target: as a means of protection, instead of releasing passive heat traps, it is proposed to release a variety of autonomously controlled devices. An analysis of existing studies shows that such means of protecting aircraft do not yet exist. The results may be useful in the design of unmanned aerial vehicles with elements of autonomous control and artificial intelligence.
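Two behaviours in entry 5's abstract lend themselves to a short sketch: the chase (pure-pursuit) method, where the velocity vector is aimed at the target, and the detection cone whose bisector is the velocity vector. The paper's actual model also includes curvature limits and timed releases, which are omitted here; the function names are illustrative:

```python
import math

def chase_step(p, target, speed, dt):
    """One step of the chase method: the pursuer's velocity vector
    points directly at the target's current position."""
    dx, dy = target[0] - p[0], target[1] - p[1]
    d = math.hypot(dx, dy)
    if d == 0:
        return p  # target reached
    step = min(speed * dt, d)
    return (p[0] + step * dx / d, p[1] + step * dy / d)

def in_detection_cone(p, heading, target, half_angle):
    """Detection area as two rays with the velocity vector along the
    bisector: the target is detected when the bearing to it deviates
    from the heading by no more than half_angle (radians)."""
    bearing = math.atan2(target[1] - p[1], target[0] - p[0])
    dev = abs((bearing - heading + math.pi) % (2 * math.pi) - math.pi)
    return dev <= half_angle
```

Iterating `chase_step` traces the classical pursuit curve; switching between pursuit and straight-line motion based on `in_detection_cone` reproduces the abstract's mode change.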
6

Abraham, Glincy, K. A. Narayanankutty, and K. P. Soman. "Sparsity based Single Object Tracking". INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 9, no. 2 (15 July 2013): 1004–11. http://dx.doi.org/10.24297/ijct.v9i2.4167.

Full text
Abstract:
Object tracking is important in various video-processing applications such as video surveillance, perceptual user interfaces, driver assistance, etc. This paper presents a new tracking technique that combines dictionary-based background subtraction with sparsity-based tracking. The speed and performance challenges of sparsity-based tracking alone are addressed, since the method builds on background-subtraction preprocessing and local compressive tracking. It also overcomes the difficulties traditional techniques face with illumination variation and changes in the object's pose and shape. The output of the proposed technique is compared with that of the compressive tracking technique.
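Entry 6's dictionary-based background subtraction is not detailed in the abstract. As a hedged illustration of the preprocessing idea, the simplest thresholded form of background subtraction looks like this (the paper's dictionary-learning variant is considerably more elaborate):

```python
import numpy as np

def foreground_mask(frame, background, thresh=25):
    """Plain background subtraction: pixels whose absolute difference
    from the background model exceeds a threshold are foreground."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > thresh
```

The resulting mask restricts the (expensive) sparsity-based tracker to regions that actually moved, which is where the speed gain comes from.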
7

Alonso, Estrella, and Juan Tejada. "Risk optimal single-object auctions". Cuadernos de Economía 35, no. 99 (September 2012): 131–38. http://dx.doi.org/10.1016/s0210-0266(12)70030-4.

Full text
8

Mishra, Debasis, and Abdul Quadir. "Non-bossy single object auctions". Economic Theory Bulletin 2, no. 1 (5 March 2014): 93–110. http://dx.doi.org/10.1007/s40505-014-0031-y.

Full text
9

Champion, Benjamin, Mo Jamshidi, and Matthew Joordens. "Depth Estimation of an Underwater Object Using a Single Camera". KnE Engineering 2, no. 2 (9 February 2017): 112. http://dx.doi.org/10.18502/keg.v2i2.603.

Full text
Abstract:
Underwater robotics is a growing field. Autonomously finding and collecting objects on land and in the air is already a complicated problem, and it is only compounded in the underwater setting. Different techniques have been developed over the years to attempt to solve it, many involving expensive sensors. This paper explores a method for finding the depth of an object underwater using a single camera and a known object. Once this known object has been found, information about unknown objects surrounding it can be determined, and those objects can then be collected.
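Entry 9's abstract does not give the paper's formula, but single-camera depth from an object of known size is classically the pinhole-camera relation Z = f·W/w. A sketch under that assumption (parameter names are illustrative):

```python
def depth_from_known_width(focal_px, real_width_m, pixel_width):
    """Pinhole-camera relation: an object of known physical width
    real_width_m that appears pixel_width pixels wide, seen by a
    camera with focal length focal_px (in pixels), lies at depth
    Z = f * W / w metres."""
    return focal_px * real_width_m / pixel_width
```

Once the depth of the known reference object is fixed this way, nearby unknown objects can be placed relative to it, which is the strategy the abstract describes.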
10

Yow, Kin-Choong, and Insu Kim. "General Moving Object Localization from a Single Flying Camera". Applied Sciences 10, no. 19 (4 October 2020): 6945. http://dx.doi.org/10.3390/app10196945.

Full text
Abstract:
Object localization is an important task in the visual surveillance of scenes, with applications in locating personnel and/or equipment in large open spaces such as a farm or a mine. Traditionally, object localization can be performed using stereo vision: two fixed cameras for a moving object, or a single moving camera for a stationary object. This research addresses the problem of determining the location of a moving object using only a single moving camera, without any prior information on the type or size of the object. Our technique uses a single camera mounted on a quadrotor drone, which flies in a specific pattern relative to the object in order to remove the depth ambiguity associated with their relative motion. In our previous work, we showed that with three images we can recover the location of an object moving parallel to the direction of motion of the camera. Here, we find that with four images we can recover the location of an object moving linearly in an arbitrary direction. We evaluated our algorithm on over 70 image sequences of objects moving in various directions, and the results showed a much smaller depth error rate (typically less than 8.0%) than other state-of-the-art algorithms.
11

Dai, Ziying, Xiaoguang Mao, Yan Lei, Yuhua Qi, Rui Wang, and Bin Gu. "Compositional Mining of Multiple Object API Protocols through State Abstraction". Scientific World Journal 2013 (2013): 1–13. http://dx.doi.org/10.1155/2013/171647.

Full text
Abstract:
API protocols specify correct sequences of method invocations. Despite their usefulness, API protocols are often unavailable in practice because writing them is cumbersome and error prone. Multiple object API protocols are more expressive than single object API protocols. However, the huge number of objects of typical object-oriented programs poses a major challenge to the automatic mining of multiple object API protocols: besides maintaining scalability, it is important to capture various object interactions. Current approaches utilize various heuristics to focus on small sets of methods. In this paper, we present a general, scalable, multiple object API protocols mining approach that can capture all object interactions. Our approach uses abstract field values to label object states during the mining process. We first mine single object typestates as finite state automata whose transitions are annotated with states of interacting objects before and after the execution of the corresponding method and then construct multiple object API protocols by composing these annotated single object typestates. We implement our approach for Java and evaluate it through a series of experiments.
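A single-object typestate, as mined in entry 11, is a finite-state automaton whose transitions are method calls. A minimal sketch of replaying a call sequence against such an automaton (the file protocol below is an illustrative hand-written example, not mined output, and the paper's multi-object composition and state annotations are not reproduced):

```python
# Typestate automaton encoded as: state -> {method: next_state}.
FILE_PROTOCOL = {
    "closed": {"open": "opened"},
    "opened": {"read": "opened", "write": "opened", "close": "closed"},
}

def obeys_protocol(automaton, start, calls):
    """Replay a method-call sequence on the typestate automaton; the
    sequence is legal iff every call has an outgoing transition from
    the current state."""
    state = start
    for method in calls:
        if method not in automaton.get(state, {}):
            return False  # illegal call in this state
        state = automaton[state][method]
    return True
```

The paper composes such per-object automata, annotated with the abstract states of interacting objects, into multi-object protocols.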
12

Fernández, Unai J., Sonia Elizondo, Naroa Iriarte, Rafael Morales, Amalia Ortiz, Sebastian Marichal, Oscar Ardaiz, and Asier Marzo. "A Multi-Object Grasp Technique for Placement of Objects in Virtual Reality". Applied Sciences 12, no. 9 (21 April 2022): 4193. http://dx.doi.org/10.3390/app12094193.

Full text
Abstract:
Some daily tasks involve grasping multiple objects in one hand and releasing them in a determined order, for example laying out a surgical table or distributing items on shelves. For training these tasks in Virtual Reality (VR), there is no technique for allowing users to grasp multiple objects in one hand in a realistic way, and it is not known if such a technique would benefit user experience. Here, we design a multi-object grasp technique that enables users to grasp multiple objects in one hand and release them in a controlled way. We tested an object placement task under three conditions: real life, VR with single-object grasp and VR with multi-object grasp. Task completion time, distance travelled by the hands and subjective experience were measured in three scenarios: sitting in front of a desktop table, standing up in front of shelves and a room-size scenario where walking was required. Results show that the performance in a real environment is better than in Virtual Reality, both for single-object and multi-object grasping. The single-object technique performs better than the multi-object, except for the room scenario, where multi-object leads to less distance travelled and reported physical demand. For use cases where the distances are small (i.e., desktop scenario), single-object grasp is simpler and easier to understand. For larger scenarios, the multi-object grasp technique represents a good option that can be considered by other application designers.
13

Pinilla, Samuel, Laura Galvis, Karen Egiazarian, and Henry Arguello. "Single-shot Coded Diffraction System for 3D Object Shape Estimation". Electronic Imaging 2020, no. 14 (26 January 2020): 59–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.14.coimg-059.

Full text
Abstract:
Three-dimensional (3D) shape reconstruction of an object is a task of high interest in autonomous vehicles, detection of moving objects, and precision agriculture. A common methodology to recover the 3D shape of an object is to use its optical phase. However, this approach involves solving a non-convex, computationally demanding inverse problem known as phase retrieval (PR) in a setup that records coded diffraction patterns (CDP). Usually, several snapshots of the scene must be acquired to solve the PR problem. This work proposes a single-shot 3D shape estimation technique using the optical phase of the object from CDP. The presented approach accurately estimates the optical phase of the object by low-pass filtering the leading eigenvector of a carefully constructed matrix; the estimated phase is then used to infer the 3D object shape. Notably, the estimation procedure does not involve a full, time-demanding reconstruction of the objects. Numerical results on synthetic data demonstrate that the proposed methodology closely estimates the 3D surface of an object, with a normalized mean-square error of at most 0.27 under both noiseless and noisy scenarios. Additionally, the proposed method requires up to 60% fewer measurements than a state-of-the-art methodology to accurately estimate the 3D surface.
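The "leading eigenvector of a carefully constructed matrix" step in entry 13 is typically computed by power iteration, the workhorse of spectral initializations for phase retrieval. A generic sketch (the paper's matrix construction and subsequent low-pass filtering are not reproduced here):

```python
import numpy as np

def leading_eigenvector(M, iters=200):
    """Power iteration: repeatedly applying M and renormalizing
    converges to the eigenvector of the eigenvalue with the largest
    magnitude (for a symmetric M with a spectral gap)."""
    v = np.ones(M.shape[0])
    for _ in range(iters):
        v = M @ v
        v /= np.linalg.norm(v)
    return v
```

In spectral phase-retrieval initializations, M is assembled from the measured intensities so that its top eigenvector aligns with the unknown signal (here, the optical phase).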
14

Yu, Yong Yan, and Zhi Jian Wang. "Research on 3D Reconstruction Based on a Single Image". Advanced Materials Research 108-111 (May 2010): 3–10. http://dx.doi.org/10.4028/www.scientific.net/amr.108-111.3.

Full text
Abstract:
Since most objects are symmetrical, we present a method for reconstructing a 3D model from a single 2D image. First, the rotation, inclination, and pivot angles are established in the perspective transformation matrix T, and the calibration that indicates the outline of the object is used to pick up its characteristic lines. The vanishing-point information is resolved from the projection angles of groups of parallel lines, which determines the viewpoint position and the object's plane of symmetry. Exploiting this symmetry, three pairs of known symmetric image coordinates and the space coordinates of the corresponding points are specified interactively to confirm the perspective transformation matrix T, from which the remaining space coordinates of the object's surface are derived.
15

Zhao, Dong, Baoqing Ding, Yulin Wu, Lei Chen, and Hongchao Zhou. "Unsupervised Learning from Videos for Object Discovery in Single Images". Symmetry 13, no. 1 (29 December 2020): 38. http://dx.doi.org/10.3390/sym13010038.

Full text
Abstract:
This paper proposes a method for discovering the primary objects in single images by learning from videos in a purely unsupervised manner—the learning process is based on videos, but the generated network is able to discover objects from a single input image. The rough idea is that an image typically consists of multiple object instances (like the foreground and background) that have spatial transformations across video frames and they can be sparsely represented. By exploring the sparsity representation of a video with a neural network, one may learn the features of each object instance without any labels, which can be used to discover, recognize, or distinguish object instances from a single image. In this paper, we consider a relatively simple scenario, where each image roughly consists of a foreground and a background. Our proposed method is based on encoder-decoder structures to sparsely represent the foreground, background, and segmentation mask, which further reconstruct the original images. We apply the feed-forward network trained from videos for object discovery in single images, which is different from the previous co-segmentation methods that require videos or collections of images as the input for inference. The experimental results on various object segmentation benchmarks demonstrate that the proposed method extracts primary objects accurately and robustly, which suggests that unsupervised image learning tasks can benefit from the sparsity of images and the inter-frame structure of videos.
16

Maktab Dar Oghaz, Mahdi, Manzoor Razaak, and Paolo Remagnino. "Enhanced Single Shot Small Object Detector for Aerial Imagery Using Super-Resolution, Feature Fusion and Deconvolution". Sensors 22, no. 12 (8 June 2022): 4339. http://dx.doi.org/10.3390/s22124339.

Full text
Abstract:
One common issue in object detection in aerial imagery is the small size of objects relative to the overall image. This is mainly caused by the high camera altitude and the wide-angle lenses commonly used on drones to maximize coverage. State-of-the-art general-purpose object detectors tend to under-perform and struggle with small-object detection, due to the loss of spatial features, the weak feature representation of small objects, and the sheer imbalance between objects and background. This paper addresses small-object detection in aerial imagery with a Convolutional Neural Network (CNN) model that uses the Single Shot multi-box Detector (SSD) as the baseline network and extends its small-object detection performance with feature-enhancement modules, including super-resolution, deconvolution, and feature fusion. These modules are collectively aimed at improving the feature representation of small objects at the prediction layer. The performance of the proposed model is evaluated on three datasets, including two aerial-image datasets that mainly consist of small objects, and compared with state-of-the-art small-object detectors. Experimental results demonstrate improvements in mean Average Precision (mAP) and recall over the detectors investigated in this study.
17

Mathai, Anumol, Ningqun Guo, Dong Liu, and Xin Wang. "3D Transparent Object Detection and Reconstruction Based on Passive Mode Single-Pixel Imaging". Sensors 20, no. 15 (29 July 2020): 4211. http://dx.doi.org/10.3390/s20154211.

Full text
Abstract:
Transparent object detection and reconstruction are significant because of their practical applications. The appearance and behaviour of light in these objects make reconstruction methods tailored for Lambertian surfaces fail badly. In this paper, we introduce a fixed multi-viewpoint approach to ascertain the shape of transparent objects, thereby avoiding rotation or movement of the object during imaging. In addition, a simple and cost-effective experimental setup is presented, which employs two single-pixel detectors and a digital micromirror device to image transparent objects by projecting binary patterns. In the setup, a dark framework is placed around the object to create shade at its boundaries. By triangulating the light path from the object, the surface shape is recovered without considering reflections or the number of refractions, so the method can handle transparent objects of relatively complex shape with unknown refractive index. Implementing compressive sensing in this technique further simplifies the acquisition process by reducing the number of measurements. Experimental results show that the 2D images obtained from the single-pixel detectors are of good quality at a resolution of 32×32, and the obtained disparity and error maps indicate the feasibility and accuracy of the proposed method. This work provides new insight into 3D transparent object detection and reconstruction based on single-pixel imaging at an affordable cost, using only a few detectors.
18

Cheong, Yun Zhe, and Wei Jen Chew. "The Application of Image Processing to Solve Occlusion Issue in Object Tracking". MATEC Web of Conferences 152 (2018): 03001. http://dx.doi.org/10.1051/matecconf/201815203001.

Full text
Abstract:
Object tracking is a computer vision field that involves identifying and tracking either a single object or multiple objects in an environment. This is extremely useful for observing the movements of target objects such as people in the street or cars on the road. However, a common issue when tracking an object in an environment with many moving objects is occlusion, which can cause the system to lose track of the object, or to track the wrong object after an overlap. In this paper, a system that correctly tracks occluded objects is proposed. It includes algorithms for foreground object segmentation, colour tracking, object specification, and occlusion handling. A video is fed into the system and every frame is analysed: foreground objects are segmented with the object-segmentation algorithm, tracked with the colour-tracking algorithm, and assigned an ID. Results show that the proposed system is able to continuously track an object and maintain the correct identity even after it has been occluded by another object.
19

Trifunovic, Dejan. "Single object auctions with interdependent values". Ekonomski anali 56, no. 188 (2011): 125–69. http://dx.doi.org/10.2298/eka1188125t.

Full text
Abstract:
This paper reviews single-object auctions in which bidders' values for the object are interdependent. We will see how the auction forms can be ranked in terms of expected revenue when the signals bidders have about the value of the object are affiliated. The discussion then turns to reserve prices and entry fees. Furthermore, we examine the conditions that must be met for an English auction with asymmetric bidders to allocate the object efficiently. Finally, common-value auctions are considered, where all bidders have the same value for the object.
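For readers outside auction theory, the textbook baseline among the single-object auction forms that entries 7, 8, and 19 compare is the sealed-bid second-price (Vickrey) auction: the highest bidder wins but pays the second-highest bid. A minimal sketch (the survey's affiliated-signal models are far richer than this):

```python
def second_price_auction(bids):
    """Sealed-bid second-price (Vickrey) auction for a single object:
    returns (winner_index, price), where the winner is the highest
    bidder and the price is the second-highest bid."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winner = order[0]
    price = bids[order[1]] if len(bids) > 1 else bids[winner]
    return winner, price
```

Under private values this rule makes truthful bidding a dominant strategy, which is why it anchors revenue comparisons across auction forms.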
20

Newell, F. N. "Searching for Objects in the Visual Periphery: Effects of Orientation". Perception 25, no. 1_suppl (August 1996): 110. http://dx.doi.org/10.1068/v96l1111.

Full text
Abstract:
Previous studies have found that the recognition of familiar objects is dependent on the orientation of the object in the picture plane. Here the time taken to locate rotated objects in the periphery was examined. Eye movements were also recorded. In all experiments, familiar objects were arranged in a clock face display. In experiment 1, subjects were instructed to locate a match to a central, upright object from amongst a set of randomly rotated objects. The target object was rotated in the frontoparallel plane. Search performance was dependent on rotation, yielding the classic ‘M’ function found in recognition tasks. When matching a single object in periphery, match times were dependent on the angular deviations between the central and target objects and showed no advantage for the upright (experiment 2). In experiment 3 the central object was shown in either the upright rotation or rotated by 120° from the upright. The target object was similarly rotated given four different match conditions. Distractor objects were aligned with the target object. Search times were faster when the centre and target object were aligned and also when the centre object was rotated and the target was upright. Search times were slower when matching a central upright object to a rotated target object. These results suggest that in simple tasks matching is based on image characteristics. However, in complex search tasks a contribution from the object's representation is made which gives an advantage to the canonical, upright view in peripheral vision.
21

Ortega, Ana, and Mubarak Shah. "From Shape from Shading to Object Recognition". International Journal of Pattern Recognition and Artificial Intelligence 12, no. 07 (November 1998): 969–84. http://dx.doi.org/10.1142/s0218001498000531.

Full text
Abstract:
Recognition of objects is one of the main goals of computer vision. Several approaches have been proposed to solve this problem using 3-D shapes. In most of them it is assumed that the 3-D shape (depth map) is available. Several object recognition systems use range images to extract the 3-D shape. We present a method that uses a shape from shading algorithm to perform 3-D object recognition for simple objects. This method extracts the 3-D information from a single intensity image, then segments the object into regions. After computing the properties of the regions, it compares the input object with the model objects in the database. To test our method, several images with slightly different viewing angles of single objects are matched against five models in the database.
22

Kim, Changwon. "Robust Single-Image Dehazing". Electronics 10, no. 21 (28 October 2021): 2636. http://dx.doi.org/10.3390/electronics10212636.

Full text
Abstract:
This paper proposes a new single-image dehazing method, which is an important preprocessing step in vision applications to overcome the limitations of the conventional dark channel prior. The dark channel prior has a tendency to underestimate transmissions of bright regions or objects that can generate color distortions during the process of dehazing. In order to suppress the distortions in a large sky area or a bright white object, the sky probabilities and the white-object probabilities calculated in the non-sky area are proposed. The sky area is detected by combining the advantages of a region-based and a boundary-based sky segmentation in order to consider various sky shapes in road scenes. The performance of the proposed methods is evaluated using synthetic and real-world datasets. When compared to conventional methods in the reviewed literature, the proposed method produces significant improvements concerning visual and numerical criteria.
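The dark channel prior that entry 22 builds on assigns each pixel the minimum intensity over the colour channels within a local patch; bright sky regions violate the prior, which is exactly where this paper intervenes. A straightforward (unoptimized) sketch of the dark channel and the transmission estimate derived from it, not the paper's sky-aware refinement:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel: per-pixel minimum over colour channels, followed
    by a local minimum filter over a patch x patch neighbourhood."""
    min_rgb = img.min(axis=2)
    h, w = min_rgb.shape
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def transmission(img, atmosphere, omega=0.95, patch=15):
    """Haze transmission estimate t = 1 - omega * dark_channel(I / A);
    overestimated dark channels in bright regions push t too low,
    the distortion the paper's sky and white-object probabilities
    are designed to suppress."""
    normalized = img.astype(float) / atmosphere
    return 1.0 - omega * dark_channel(normalized, patch)
```

For haze-free outdoor scenes the dark channel is near zero almost everywhere, while sky and bright white objects break the assumption, motivating the paper's region-plus-boundary sky segmentation.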
23

Wiesmann, Sandro L., and Melissa L. H. Vo. "Is one object enough? Diagnosticity of single objects for fast scene categorization". Journal of Vision 22, no. 14 (5 December 2022): 4146. http://dx.doi.org/10.1167/jov.22.14.4146.

Full text
24

Gorbatsevich, V., Y. Vizilter, V. Knyaz, and A. Moiseenko. "SINGLE-SHOT SEMANTIC MATCHER FOR UNSEEN OBJECT DETECTION". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2 (30 May 2018): 379–84. http://dx.doi.org/10.5194/isprs-archives-xlii-2-379-2018.

Full text
Abstract:
In this paper we combine the ideas of image matching, object detection, image retrieval, and zero-shot learning to state and solve the semantic matching problem. A semantic matcher takes two images (test and request) as input and returns the detected objects (bounding boxes) in the test image corresponding to the semantic class represented by the request (sample) image. We implement our single-shot semantic matcher CNN architecture based on the GoogLeNet and YOLO/DetectNet architectures, and propose detection-by-request training and testing protocols for semantic matching algorithms. We train and test our CNN on ILSVRC 2014 with 200 seen and 90 unseen classes, and provide real-time object detection with mAP 23 for seen and mAP 21 for unseen classes.
25

White, Olivier, Noreen Dowling, R. Martyn Bracewell, and Jörn Diedrichsen. "Hand Interactions in Rapid Grip Force Adjustments Are Independent of Object Dynamics". Journal of Neurophysiology 100, no. 5 (November 2008): 2738–45. http://dx.doi.org/10.1152/jn.90593.2008.

Full text
Abstract:
Object manipulation requires rapid increase in grip force to prevent slippage when the load force of the object suddenly increases. Previous experiments have shown that grip force reactions interact between the hands when holding a single object. Here we test whether this interaction is modulated by the object dynamics experienced before the perturbation of the load force. We hypothesized that coupling of grip forces should be stronger when holding a single object than when holding separate objects. We measured the grip force reactions elicited by unpredictable load perturbations when participants were instructed to hold one single or two separate objects. We simulated these objects both visually and dynamically using a virtual environment consisting of two robotic devices and a calibrated stereo display. In contrast to previous studies, the load forces arising from a single object could be uncoupled at the moment of perturbation, allowing for a pure measurement of grip force coupling. Participants increased grip forces rapidly (onset ∼70 ms) in response to perturbations. Grip force increases were stronger when the load force on the other hand also increased. No such coupling was present in the reaction of the arms to the load force increase. Surprisingly, however, the grip force interaction did not depend on the nature of the manipulated object. These results show fast obligatory coupling of bimanual grip force responses. Although this coupling may play a functional role for providing stability in bimanual object manipulation, it seems to constitute a relatively hard-wired modulation of a reflex.
26

An, Na, and Wei Qi Yan. "Multitarget Tracking Using Siamese Neural Networks". ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 2s (17 May 2021): 1–16. http://dx.doi.org/10.1145/3441656.

Full text
Abstract:
In this article, we detect and track visual objects using a Siamese (twin) neural network. The Siamese network is constructed to classify moving objects based on the association of an object detection network and an object tracking network, which are thought of as the two branches of the twin network. The proposed tracking method was designed for single-target tracking and implements multitarget tracking by using deep neural networks and object detection. The contributions of this article are as follows. First, we implement the proposed method for visual object tracking based on multiclass classification using deep neural networks. Then, we attain multitarget tracking by combining the object detection network and the single-target tracking network. Next, we improve the tracking performance by fusing the outcomes of the object detection network and the object tracking network. Finally, we address the object-occlusion problem based on IoU and a similarity score, which effectively diminishes its influence in multitarget tracking.
Styles APA, Harvard, Vancouver, ISO, etc.
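The IoU-based occlusion reasoning described in this abstract can be illustrated with a minimal sketch; the threshold and the `occluded` helper are our own illustrative assumptions, not the paper's implementation:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def occluded(track_box, other_boxes, iou_thresh=0.5):
    """Flag a track as occluded when another box overlaps it heavily."""
    return any(iou(track_box, b) > iou_thresh for b in other_boxes)
```

A tracker using this test would typically fall back to the detection branch (or a motion model) for any track flagged as occluded.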
27

Peng, Jiansheng, Kui Fu, Qingjin Wei, Yong Qin, and Qiwen He. "Improved Multiview Decomposition for Single-Image High-Resolution 3D Object Reconstruction". Wireless Communications and Mobile Computing 2020 (December 26, 2020): 1–14. http://dx.doi.org/10.1155/2020/8871082.

Full text
Abstract:
As a representative technology of artificial intelligence, 3D reconstruction based on deep learning can be integrated into the edge computing framework to form an intelligent edge and then realize the intelligent processing of the edge. Recently, high-resolution representation of 3D objects using multiview decomposition (MVD) architecture is a fast reconstruction method for generating objects with realistic details from a single RGB image. The results of high-resolution 3D object reconstruction are related to two aspects. On the one hand, a low-resolution reconstruction network represents a good 3D object from a single RGB image. On the other hand, a high-resolution reconstruction network maximizes fine low-resolution 3D objects. To improve these two aspects and further enhance the high-resolution reconstruction capabilities of the 3D object generation network, we study and improve the low-resolution 3D generation network and the depth map superresolution network. Eventually, we get an improved multiview decomposition (IMVD) network. First, we use a 2D image encoder with multifeature fusion (MFF) to enhance the feature extraction capability of the model. Second, a 3D decoder using an effective subpixel convolutional neural network (3D ESPCN) improves the decoding speed in the decoding stage. Moreover, we design a multiresidual dense block (MRDB) to optimize the depth map superresolution network, which allows the model to capture more object details and reduce the model parameters by approximately 25% when the number of network layers is doubled. The experimental results show that the proposed IMVD is better than the original MVD in the 3D object superresolution experiment and the high-resolution 3D reconstruction experiment of a single image.
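The "effective subpixel convolutional neural network" (ESPCN) decoder mentioned above ends in a depth-to-space rearrangement of convolution channels. A minimal 2-D NumPy sketch of that rearrangement step (the paper's 3D ESPCN applies the same idea to voxel grids; the function name is ours):

```python
import numpy as np

def pixel_shuffle(x, r):
    """ESPCN-style subpixel rearrangement: (C*r*r, H, W) -> (C, H*r, W*r).
    The preceding convolution is omitted; this is only the depth-to-space step."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)      # split the channel dim into (c, r, r)
    x = x.transpose(0, 3, 1, 4, 2)    # interleave: (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)
```

Because the upscaling happens as a cheap reshape of many low-resolution channels, the network can run all its convolutions at low resolution, which is where the decoding-speed gain comes from.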
28

Ihee, Hyotcherl. "Novel Single-Molecule Technique by Single-Object Scattering Sampling (SOSS)". Bulletin of the Korean Chemical Society 32, no. 6 (June 20, 2011): 1849–50. http://dx.doi.org/10.5012/bkcs.2011.32.6.1849.

Full text
29

Koning, Arno, and Johan Wagemans. "Detection of Symmetry and Repetition in One and Two Objects". Experimental Psychology 56, no. 1 (January 2009): 5–17. http://dx.doi.org/10.1027/1618-3169.56.1.5.

Full text
Abstract:
Symmetry is usually easier to detect within a single object than in two objects (one-object advantage), while the reverse is true for repetition (two-objects advantage). This interaction between regularity and number of objects could reflect an intrinsic property of encoding spatial relations within and across objects or it could reflect a matching strategy. To test this, regularities between two contours (belonging to a single object or two objects) had to be detected in two experiments. Projected three-dimensional (3-D) objects rotated in depth were used to disambiguate figure-ground segmentation and to make matching based on simple translations of the two-dimensional (2-D) contours unlikely. Experiment 1 showed the expected interaction between regularity and number of objects. Experiment 2 used two-objects displays only and prevented a matching strategy by also switching the positions of the two objects. Nevertheless, symmetry was never detected more easily than repetition in these two-objects displays. We conclude that structural coding, not matching strategies, underlies the one-object advantage for symmetry and the two-objects advantage for repetition.
30

Bahmanyar, R., S. M. Azimi, and P. Reinartz. "Multiple Vehicles and People Tracking in Aerial Imagery Using Stack of Micro Single-Object-Tracking CNNs". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W18 (October 18, 2019): 163–70. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w18-163-2019.

Full text
Abstract:
Geo-referenced real-time vehicle and person tracking in aerial imagery has a variety of applications, such as traffic and large-scale event monitoring, disaster management, and input into predictive traffic and crowd models. However, object tracking in aerial imagery is still an unsolved, challenging problem due to the tiny size of the objects, their varying scales, and the limited temporal resolution of geo-referenced datasets. In this work, we propose a new approach based on Convolutional Neural Networks (CNNs) to track multiple vehicles and people in aerial image sequences. As the large number of objects in aerial images can exponentially increase the processing demands in multiple object tracking scenarios, the proposed approach utilizes a stack of micro CNNs, where each micro CNN is responsible for a single-object tracking task. We call our approach Stack of Micro Single-Object-Tracking CNNs (SMSOT-CNN). More precisely, using a two-stream CNN, we extract a set of features from two consecutive frames for each object, given the location of the object in the previous frame. Then, we assign each micro CNN the extracted features of one object to predict that object's location in the current frame. We train and validate the proposed approach on the vehicle and person sets of the KIT AIS dataset of object tracking in aerial image sequences. Results indicate accurate and time-efficient tracking of multiple vehicles and people by the proposed approach.
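The stack-of-micro-trackers idea — one small per-object tracker, each looking only at a search window around the object's previous location — can be sketched as follows. A toy brightest-pixel locator stands in for the per-object CNN, and all names are illustrative assumptions:

```python
import numpy as np

def crop(frame, cx, cy, half):
    """Clamped square search window around (cx, cy); returns window and its origin."""
    h, w = frame.shape
    x1, x2 = max(0, cx - half), min(w, cx + half + 1)
    y1, y2 = max(0, cy - half), min(h, cy + half + 1)
    return frame[y1:y2, x1:x2], x1, y1

def track_step(prev_positions, frame, half=5):
    """One micro tracker per object: each sees only its own search window.
    A brightest-pixel locator substitutes for the per-object regression CNN."""
    new_positions = []
    for cx, cy in prev_positions:
        win, ox, oy = crop(frame, cx, cy, half)
        iy, ix = np.unravel_index(np.argmax(win), win.shape)
        new_positions.append((ox + ix, oy + iy))
    return new_positions
```

The point of the arrangement is that cost grows linearly with the number of objects, and each micro tracker's input stays tiny regardless of image size.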
31

Yu, Yuanlong, George K. I. Mann, and Raymond G. Gosine. "A Single-Object Tracking Method for Robots Using Object-Based Visual Attention". International Journal of Humanoid Robotics 09, no. 04 (December 2012): 1250030. http://dx.doi.org/10.1142/s0219843612500302.

Full text
Abstract:
Tracking a target in a complex environment is a quite challenging problem for robots due to appearance changes of the target and background, large variations in motion, partial and full occlusion, motion of the camera, and so on. However, humans are capable of coping with these difficulties by using their cognitive capabilities, mainly the visual attention and learning mechanisms. This paper therefore presents a single-object tracking method for robots based on the object-based attention mechanism. This tracking method consists of four modules: pre-attentive segmentation, top-down attentional selection, post-attentive processing, and online learning of the target model. The pre-attentive segmentation module first divides the scene into uniform proto-objects. Then the top-down attention module selects one proto-object over the predicted region by using a discriminative feature of the target. The post-attentive processing module then validates the attended proto-object. If it is confirmed to be the target, it is used to obtain the complete target region. Otherwise, the recovery mechanism is automatically triggered to globally search for the target. Given the complete target region, the online learning algorithm autonomously updates the target model, which consists of appearance and saliency components. The saliency component is used to automatically select a discriminative feature for top-down attention, while the appearance component is used for bias estimation in the top-down attention module and validation in the post-attentive processing module. Experiments have shown that the proposed method outperforms other algorithms without attention for tracking a single target in cluttered and dynamically changing environments.
32

Gao, Kaiye, Xiangbin Yan, Xiang-dong Liu, and Rui Peng. "Object defence of a single object with preventive strike of random effect". Reliability Engineering & System Safety 186 (June 2019): 209–19. http://dx.doi.org/10.1016/j.ress.2019.02.023.

Full text
33

Bahl, Vasudha, Nidhi Sengar, Akash Kumar, and Amita Goel. "Real-Time Object Detection Model". International Journal for Modern Trends in Science and Technology 6, no. 12 (December 18, 2020): 360–64. http://dx.doi.org/10.46501/ijmtst061267.

Full text
Abstract:
Object detection is a field of study within computer vision. An object detection model recognizes real-world objects present either in a captured image or in real-time video, where the objects can belong to any class, such as humans, animals, and other objects. This project is an implementation of an object detection algorithm called You Only Look Once (YOLOv3). The YOLO architecture is extremely fast compared to all previous methods. The YOLOv3 model applies a single neural network to the given image and then divides the image into predetermined bounding boxes, which are weighted by the predicted probabilities. After non-max suppression, it returns the recognized objects together with their bounding boxes. YOLO trains on and runs object detection directly over full images.
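The non-max suppression step mentioned in this abstract can be sketched as a greedy filter over scored boxes; the threshold value is illustrative, not YOLOv3's exact setting:

```python
def nms(boxes, scores, iou_thresh=0.45):
    """Greedy non-max suppression: keep the highest-scoring boxes and drop
    lower-scoring boxes that overlap them. Boxes are (x1, y1, x2, y2)."""
    def iou(a, b):
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        def area(r):
            return (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union else 0.0

    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_thresh]
    return keep
```

The function returns indices of surviving boxes, so the caller can recover both the boxes and their class scores after filtering.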
34

Wang, Jun, Lili Jiang, Qingwen Qi, and Yongji Wang. "Exploration of Semantic Geo-Object Recognition Based on the Scale Parameter Optimization Method for Remote Sensing Images". ISPRS International Journal of Geo-Information 10, no. 6 (June 20, 2021): 420. http://dx.doi.org/10.3390/ijgi10060420.

Full text
Abstract:
Image segmentation is of significance because it can provide objects that are the minimum analysis units for geographic object-based image analysis (GEOBIA). Most segmentation methods usually set parameters to identify geo-objects, and different parameter settings lead to different segmentation results; thus, parameter optimization is critical to obtain satisfactory segmentation results. Currently, many parameter optimization methods have been developed and successfully applied to the identification of single geo-objects. However, few studies have focused on the recognition of the union of different types of geo-objects (semantic geo-objects), such as a park. The recognition of semantic geo-objects is likely more crucial than that of single geo-objects because the former type of recognition is more correlated with the human perception. This paper proposes an approach to recognize semantic geo-objects. The key concept is that a single geo-object is the smallest component unit of a semantic geo-object, and semantic geo-objects are recognized by iteratively merging single geo-objects. Thus, the optimal scale of the semantic geo-objects is determined by iteratively recognizing the optimal scales of single geo-objects and using them as the initiation point of the reset scale parameter optimization interval. In this paper, we adopt the multiresolution segmentation (MRS) method to segment Gaofen-1 images and tested three scale parameter optimization methods to validate the proposed approach. The results show that the proposed approach can determine the scale parameters, which can produce semantic geo-objects.
35

Cant, Jonathan S., Sol Z. Sun, and Yaoda Xu. "Distinct cognitive mechanisms involved in the processing of single objects and object ensembles". Journal of Vision 15, no. 4 (September 11, 2015): 12. http://dx.doi.org/10.1167/15.4.12.

Full text
36

Gaibatova, K. D., and M. A. Aliverdieva. "Land Plot as a Single Real Estate Object". Law Herald of Dagestan State University 34, no. 2 (2020): 114–17. http://dx.doi.org/10.21779/2224-0241-2020-34-2-114-117.

Full text
Abstract:
This article addresses the problems of considering a land plot as a single property. Particular attention is paid to the content of the principle of the unity of fate of a land plot and the real estate located on it. Foreign practice in implementing this principle in different legal systems is considered. It is noted that Russian legislation is in a transitional stage toward a "single property", which raises a number of problematic issues that need to be resolved. The authors propose to work out a definition of a single real estate object that will comply with the principle of the unity of fate of land plots and the real estate objects firmly connected with them. To do this, it is necessary to establish a legal connection in which, regardless of which object is alienated, the other inextricably follows its fate. The article concludes that it is necessary to introduce the principle of "unity of the property", which will eliminate the contradictions, inconsistencies, and gaps in the legal regulation of the investigated sphere of legal relations. In addition, the authors conclude that introducing the concept of a single real estate object into civil law will facilitate tax administration and taxation for citizens and organizations, as well as simplify cadastral registration and registration of the transfer of ownership of real estate.
37

Arslan, Ozan. "3D Object Reconstruction from a Single Image". International Journal of Environment and Geoinformatics 1, no. 1 (November 10, 2014): 21–28. http://dx.doi.org/10.30897/ijegeo.300724.

Full text
38

Riaz, Irfan, Xue Fan, and Hyunchul Shin. "Single image dehazing with bright object handling". IET Computer Vision 10, no. 8 (June 17, 2016): 817–27. http://dx.doi.org/10.1049/iet-cvi.2015.0451.

Full text
39

Li, Zuoyong, Kezong Tang, Yong Cheng, and Yong Hu. "Transition region-based single-object image segmentation". AEU - International Journal of Electronics and Communications 68, no. 12 (December 2014): 1214–23. http://dx.doi.org/10.1016/j.aeue.2014.06.010.

Full text
40

Lippitz, Markus, Florian Kulzer, and Michel Orrit. "Statistical Evaluation of Single Nano-Object Fluorescence". ChemPhysChem 6, no. 5 (May 13, 2005): 770–89. http://dx.doi.org/10.1002/cphc.200400560.

Full text
41

Gogel, Walter C., and Thomas J. Sharkey. "Measuring Attention Using Induced Motion". Perception 18, no. 3 (June 1989): 303–20. http://dx.doi.org/10.1068/p180303.

Full text
Abstract:
Attention was measured by means of its effect upon induced motion. Perceived horizontal motion was induced in a vertically moving test spot by the physical horizontal motion of inducing objects. All stimuli were in a frontoparallel plane. The induced motion vectored with the physical motion to produce a clockwise or counterclockwise tilt in the apparent path of motion of the test spot. Either a single inducing object or two inducing objects moving in opposite directions were used. Twelve observers were instructed to attend to or to ignore the single inducing object while fixating the test object and, when the two opposing inducing objects were present, to attend to one inducing object while ignoring the other. Tracking of the test spot was visually monitored. The tilt of the path of apparent motion of the test spot was measured by tactile adjustment of a comparison rod. It was found that the measured tilt was substantially larger when the single inducing object was attended rather than ignored. For the two inducing objects, attending to one while ignoring the other clearly increased the effectiveness of the attended inducing object. The results are analyzed in terms of the distinction between voluntary and involuntary attention. The advantages of measuring attention by its effect on induced motion as compared with the use of a precueing procedure, and a hypothesis regarding the role of attention in modifying perceived spatial characteristics are discussed.
42

Wen, Mingyun, and Kyungeun Cho. "Object-Aware 3D Scene Reconstruction from Single 2D Images of Indoor Scenes". Mathematics 11, no. 2 (January 12, 2023): 403. http://dx.doi.org/10.3390/math11020403.

Full text
Abstract:
Recent studies have shown that deep learning achieves excellent performance in reconstructing 3D scenes from multiview images or videos. However, these reconstructions do not provide the identities of objects, and object identification is necessary for a scene to be functional in virtual reality or interactive applications. The objects in a scene reconstructed as one mesh are treated as a single object, rather than individual entities that can be interacted with or manipulated. Reconstructing an object-aware 3D scene from a single 2D image is challenging because the image conversion process from a 3D scene to a 2D image is irreversible, and the projection from 3D to 2D reduces a dimension. To alleviate the effects of dimension reduction, we proposed a module to generate depth features that can aid the 3D pose estimation of objects. Additionally, we developed a novel approach to mesh reconstruction that combines two decoders that estimate 3D shapes with different shape representations. By leveraging the principles of multitask learning, our approach demonstrated superior performance in generating complete meshes compared to methods relying solely on implicit representation-based mesh reconstruction networks (e.g., local deep implicit functions), as well as producing more accurate shapes compared to previous approaches for mesh reconstruction from single images (e.g., topology modification networks). The proposed method was evaluated on real-world datasets. The results showed that it could effectively improve the object-aware 3D scene reconstruction performance over existing methods.
43

Zhao, Qijie, Tao Sheng, Yongtao Wang, Zhi Tang, Ying Chen, Ling Cai, and Haibin Ling. "M2Det: A Single-Shot Object Detector Based on Multi-Level Feature Pyramid Network". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9259–66. http://dx.doi.org/10.1609/aaai.v33i01.33019259.

Full text
Abstract:
Feature pyramids are widely exploited by both state-of-the-art one-stage object detectors (e.g., DSSD, RetinaNet, RefineDet) and two-stage object detectors (e.g., Mask R-CNN, DetNet) to alleviate the problems arising from scale variation across object instances. Although these object detectors with feature pyramids achieve encouraging results, they have some limitations because they simply construct the feature pyramid according to the inherent multiscale, pyramidal architecture of backbones that were originally designed for the object classification task. In this work, we present the Multi-Level Feature Pyramid Network (MLFPN) to construct more effective feature pyramids for detecting objects of different scales. First, we fuse multi-level features (i.e., multiple layers) extracted by the backbone as the base feature. Second, we feed the base feature into a block of alternating joint Thinned U-shape Modules and Feature Fusion Modules and exploit the decoder layers of each U-shape module as the features for detecting objects. Finally, we gather up the decoder layers with equivalent scales (sizes) to construct a feature pyramid for object detection, in which every feature map consists of the layers (features) from multiple levels. To evaluate the effectiveness of the proposed MLFPN, we design and train a powerful end-to-end one-stage object detector, which we call M2Det, by integrating it into the architecture of SSD, and achieve better detection performance than state-of-the-art one-stage detectors. Specifically, on the MS COCO benchmark, M2Det achieves an AP of 41.0 at a speed of 11.8 FPS with a single-scale inference strategy and an AP of 44.2 with a multi-scale inference strategy, which are the new state-of-the-art results among one-stage detectors. The code will be made available on https://github.com/qijiezhao/M2Det.
44

Xu, Chi, Jiale Chen, Mengyang Yao, Jun Zhou, Lijun Zhang, and Yi Liu. "6DoF Pose Estimation of Transparent Object from a Single RGB-D Image". Sensors 20, no. 23 (November 27, 2020): 6790. http://dx.doi.org/10.3390/s20236790.

Full text
Abstract:
6DoF object pose estimation is a foundation for many important applications, such as robotic grasping and automatic driving. However, it is very challenging to estimate the 6DoF pose of a transparent object, which is commonly seen in our daily life, because the optical characteristics of transparent materials lead to significant depth errors, which in turn cause false estimates. To solve this problem, a two-stage approach is proposed to estimate the 6DoF pose of a transparent object from a single RGB-D image. In the first stage, the influence of the depth error is eliminated by transparent segmentation, surface normal recovery, and RANSAC plane estimation. In the second stage, an extended point-cloud representation is presented to accurately and efficiently estimate the object pose. As far as we know, this is the first deep-learning-based approach that focuses on 6DoF pose estimation of transparent objects from a single RGB-D image. Experimental results show that the proposed approach can effectively estimate the 6DoF pose of transparent objects, and it outperforms the state-of-the-art baselines by a large margin.
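The RANSAC plane-estimation step used in the first stage can be sketched as follows; the parameter values and function name are our own illustrative assumptions:

```python
import random
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.01, seed=0):
    """Fit a plane n·p + d = 0 to 3-D points with RANSAC: repeatedly sample
    3 points, take the plane through them, keep the model with most inliers."""
    rng = random.Random(seed)
    pts = np.asarray(points, dtype=float)
    best_model, best_inliers = None, 0
    for _ in range(n_iters):
        a, b, c = pts[rng.sample(range(len(pts)), 3)]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-12:              # degenerate (collinear) sample
            continue
        n = n / norm
        d = -np.dot(n, a)
        inliers = int(np.sum(np.abs(pts @ n + d) < tol))
        if inliers > best_inliers:
            best_model, best_inliers = (n, d), inliers
    return best_model, best_inliers
```

Points far from the winning plane (here, the erroneous depth readings caused by the transparent surface) are then treated as outliers and discarded or re-estimated.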
45

Thelen, Antonia, and Micah M. Murray. "The Efficacy of Single-Trial Multisensory Memories". Multisensory Research 26, no. 5 (2013): 483–502. http://dx.doi.org/10.1163/22134808-00002426.

Full text
Abstract:
This review article summarizes evidence that multisensory experiences at one point in time have long-lasting effects on subsequent unisensory visual and auditory object recognition. The efficacy of single-trial exposure to task-irrelevant multisensory events lies in its ability to modulate memory performance and brain activity in response to the unisensory components of these events presented later in time. Object recognition (either visual or auditory) is enhanced if the initial multisensory experience had been semantically congruent, and can be impaired if this multisensory pairing was either semantically incongruent or entailed meaningless information in the task-irrelevant modality, when compared to objects encountered exclusively in a unisensory context. Processes active during encoding cannot straightforwardly explain these effects; performance on all initial presentations was indistinguishable despite leading to opposing effects with stimulus repetitions. Brain responses to unisensory stimulus repetitions differ during early processing stages (∼100 ms post-stimulus onset) according to whether or not they had been initially paired in a multisensory context. Moreover, the network exhibiting differential responses varies according to whether memory performance is enhanced or impaired. The collective findings we review indicate that multisensory associations formed via single-trial learning exert influences on later unisensory processing to promote distinct object representations that manifest as differentiable brain networks whose activity is correlated with memory performance. These influences occur incidentally, despite many intervening stimuli, and are distinguishable from the encoding/learning processes active during the formation of the multisensory associations. The consequences of multisensory interactions thus persist over time to impact memory retrieval and object discrimination.
46

Corcoran, Padraig. "Topology Based Object Tracking". Mathematical and Computational Applications 24, no. 3 (September 18, 2019): 84. http://dx.doi.org/10.3390/mca24030084.

Full text
Abstract:
A model for tracking objects whose topological properties change over time is proposed. Such changes include the splitting of an object into multiple objects or the merging of multiple objects into a single object. The proposed model employs a novel formulation of the tracking problem in terms of homology theory whereby 0-dimensional homology classes, which correspond to connected components, are tracked. A generalisation of this model for tracking spatially close objects lying in an ambient metric space is also proposed. This generalisation is particularly suitable for tracking spatial-temporal phenomena such as rain clouds. The utility of the proposed model is demonstrated with respect to tracking communities in a social network and tracking rain clouds in radar imagery.
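Tracking 0-dimensional homology classes (connected components) across frames, including the split and merge events the paper models, can be sketched as follows; representing components as sets of grid cells is our simplification:

```python
def track_components(prev_comps, curr_comps):
    """Match connected components (0-dimensional homology classes) between
    two frames by spatial overlap; report which split and which merged.
    Components are given as sets of grid cells."""
    links = [(i, j) for i, p in enumerate(prev_comps)
                    for j, c in enumerate(curr_comps) if p & c]
    # A previous component linked to several current ones has split;
    # a current component linked to several previous ones is a merge.
    splits = [i for i in range(len(prev_comps))
              if sum(1 for a, _ in links if a == i) > 1]
    merges = [j for j in range(len(curr_comps))
              if sum(1 for _, b in links if b == j) > 1]
    return links, splits, merges
```

For phenomena such as rain clouds, the cell sets would come from thresholding radar imagery and labeling connected regions in each frame.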
47

Tsuji, Hiroyuki, Shinji Tokumasu, Hiroki Takahashi, and Masayuki Nakajima. "Extracting Objects Using Contour Evolutions in Edge-Based Object Tracking". Journal of Advanced Computational Intelligence and Intelligent Informatics 10, no. 3 (May 20, 2006): 362–71. http://dx.doi.org/10.20965/jaciii.2006.p0362.

Full text
Abstract:
We propose edge-based object extraction targeting automatic video object plane (VOP) generation in MPEG-4 content-based video coding. In an edge-based VOP generation framework proposed by Meier, the object is represented as a binary edge image that does not generally form a closed contour and that also contains many extra edges, making extracting the object contour accurately less straightforward in such situations. To solve this problem, we adopt a PDE-based contour evolution approach to evolve initial multiple contours contained inside the object toward its boundary based on evolution equations, and to finally merge them into a single contour that accurately represents the object’s shape. Our experimental results using an MPEG standard image sequence show that object contours obtained as we propose appear subjectively more natural in shape compared with those obtained by two conventional methods, especially when the binary object model is not in good condition.
48

Franconeri, S. L., S. V. Jonathan, and J. M. Scimeca. "Tracking Multiple Objects Is Limited Only by Object Spacing, Not by Speed, Time, or Capacity". Psychological Science 21, no. 7 (June 9, 2010): 920–25. http://dx.doi.org/10.1177/0956797610373935.

Full text
Abstract:
In dealing with a dynamic world, people have the ability to maintain selective attention on a subset of moving objects in the environment. Performance in such multiple-object tracking is limited by three primary factors—the number of objects that one can track, the speed at which one can track them, and how close together they can be. We argue that this last limit, of object spacing, is the root cause of all performance constraints in multiple-object tracking. In two experiments, we found that as long as the distribution of object spacing is held constant, tracking performance is unaffected by large changes in object speed and tracking time. These results suggest that barring object-spacing constraints, people could reliably track an unlimited number of objects as fast as they could track a single object.
49

Tian, Ye, Zhen Wei Wang, and Feng Chen. "Using Single Image Parallelepipeds for Camera Calibration". Applied Mechanics and Materials 496-500 (January 2014): 1869–72. http://dx.doi.org/10.4028/www.scientific.net/amm.496-500.1869.

Full text
Abstract:
Human vision is generally regarded as a complicated process from sensation to cognition. In other words, it involves both a projection from a 3-D object to a 2-D image and the recognition of real objects from 2-D images. The process of modeling a real object from a set of images is called 3-D reconstruction. Camera calibration currently attracts many researchers; it covers the internal and the external parameters, such as the coordinates of the principal point and the parameters of rotation and translation. Some researchers have pointed out that a parallelepiped has a strict topological structure and geometric constraints, and it is therefore suitable for camera self-calibration. This paper briefly explains parallelepiped methods and applies them to self-calibration. The experiments show that this method is flexible and effective.
50

Rocha, Márcio S., and Oscar N. Mesquita. "New tools to study biophysical properties of single molecules and single cells". Anais da Academia Brasileira de Ciências 79, no. 1 (March 2007): 17–28. http://dx.doi.org/10.1590/s0001-37652007000100003.

Full text
Abstract:
We present a review on two new tools to study biophysical properties of single molecules and single cells. A laser incident through a high numerical aperture microscope objective can trap small dielectric particles near the focus. This arrangement is named optical tweezers. This technique has the advantage to permit manipulation of a single individual object. We use optical tweezers to measure the entropic elasticity of a single DNA molecule and its interaction with the drug Psoralen. Optical tweezers are also used to hold a kidney cell MDCK away from the substrate to allow precise volume measurements of this single cell during an osmotic shock. This procedure allows us to obtain information about membrane water permeability and regulatory volume increase. Defocusing microscopy is a recent technique invented in our laboratory, which allows the observation of transparent objects, by simply defocusing the microscope in a controlled way. Our physical model of a defocused microscope shows that the image contrast observed in this case is proportional to the defocus distance and to the curvature of the transparent object. Defocusing microscopy is very useful to study motility and mechanical properties of cells. We show here the application of defocusing microscopy to measurements of macrophage surface fluctuations and their influence on phagocytosis.