Journal articles on the topic 'Multiple objects'

To see the other types of publications on this topic, follow the link: Multiple objects.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Multiple objects.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Bosov, A. A., V. M. Ilman, and N. V. Khalipova. "MULTIPLE OBJECTS." Science and Transport Progress. Bulletin of Dnipropetrovsk National University of Railway Transport, no. 3(57) (July 2, 2015): 145–61. http://dx.doi.org/10.15802/stp2015/46075.

2

Morgan, Jane L., and Antje S. Meyer. "Processing of Extrafoveal Objects During Multiple-Object Naming." Journal of Experimental Psychology: Learning, Memory, and Cognition 31, no. 3 (2005): 428–42. http://dx.doi.org/10.1037/0278-7393.31.3.428.

3

Killingsworth, S., and D. Levin. "Goal Objects Reduce Accuracy in Multiple Object Tracking." Journal of Vision 12, no. 9 (August 10, 2012): 549. http://dx.doi.org/10.1167/12.9.549.

4

Jia, Maoshen, Ziyu Yang, Changchun Bao, Xiguang Zheng, and Christian Ritz. "Encoding Multiple Audio Objects Using Intra-Object Sparsity." IEEE/ACM Transactions on Audio, Speech, and Language Processing 23, no. 6 (June 2015): 1082–95. http://dx.doi.org/10.1109/taslp.2015.2419980.

5

Holcombe, Alex O. "Multiple object tracking." WikiJournal of Science 6, no. 1 (2023): X. http://dx.doi.org/10.15347/wjs/2023.003.

Abstract:
In psychology and neuroscience, multiple object tracking (MOT) refers to the ability of humans and other animals to simultaneously monitor multiple objects as they move. It is also the term for a laboratory technique used to study this ability. In an MOT study, a number of identical moving objects are presented on a display. Some of the objects are designated as targets while the rest serve as distractors. Study participants try to monitor the changing positions of the targets as they and the distractors move about. At the end of the trial, participants typically are asked to indicate the final positions of the targets. The results of MOT experiments have revealed dramatic limitations on humans' ability to simultaneously monitor multiple moving objects. For example, awareness of features such as color and shape is disrupted by the objects' movement.
6

Hirano, Masahiro, Keigo Iwakuma, and Yuji Yamakawa. "Multiple High-Speed Vision for Identical Objects Tracking." Journal of Robotics and Mechatronics 34, no. 5 (October 20, 2022): 1073–84. http://dx.doi.org/10.20965/jrm.2022.p1073.

Abstract:
In multi-object tracking of identical objects, it is difficult to return to tracking after occlusions occur due to three-dimensional intersections between objects because the objects cannot be distinguished by their appearances. In this paper, we propose a method for multi-object tracking of identical objects using multiple high-speed vision systems. By using high-speed vision, we take advantage of the fact that tracking information, such as the position of each object in each camera and the presence or absence of occlusion, can be obtained with high time density. Furthermore, we perform multi-object tracking of identical objects by efficiently performing occlusion handling using geometric constraints satisfied by multiple high-speed vision systems; these can be used by appropriately positioning them with respect to the moving region of the object. Through experiments using table-tennis balls as identical objects, this study shows that stable multi-object tracking can be performed in real time, even when frequent occlusions occur.
7

Ayare, Shubhamkar, and Nisheeth Srivastava. "Multiple Object Tracking Without Pre-attentive Indexing." Open Mind 8 (2024): 278–308. http://dx.doi.org/10.1162/opmi_a_00128.

Abstract:
Multiple object tracking (MOT) involves simultaneous tracking of a certain number of target objects amongst a larger set of objects as they all move unpredictably over time. The prevalent explanation for successful target tracking by humans in MOT involving visually identical objects is based on the Visual Indexing Theory. This assumes that each target is indexed by a pointer using a non-conceptual mechanism to maintain an object’s identity even as its properties change over time. Thus, successful tracking requires successful indexing and the absence of identification errors. Identity maintenance and successful tracking are measured in terms of identification (ID) and tracking accuracy respectively, with higher accuracy indicating better identity maintenance or better tracking. Existing evidence suggests that humans have high tracking accuracy despite poor identification accuracy, suggesting that it might be possible to perform MOT without indexing. Our work adds to existing evidence for this position through two experiments, and presents a computational model of multiple object tracking that does not require indexes. Our empirical results show that identification accuracy is aligned with tracking accuracy in humans for tracking up to three objects, but is lower when tracking more objects. Our computational model of MOT without indexing accounts for several empirical tracking accuracy patterns shown in earlier studies, reproduces the dissociation between tracking and identification accuracy produced earlier in the literature as well as in our experiments, and makes several novel predictions.
8

Franconeri, S. L., J. Halberda, L. Feigenson, and G. A. Alvarez. "Common fate can define objects in multiple object tracking." Journal of Vision 4, no. 8 (August 1, 2004): 365. http://dx.doi.org/10.1167/4.8.365.

9

Ogawa, H., and A. Yagi. "The processing of untracked objects during multiple object tracking." Journal of Vision 2, no. 7 (March 15, 2010): 242. http://dx.doi.org/10.1167/2.7.242.

10

Feria, C. "Effects of Distinct Distractor Objects in Multiple Object Tracking." Journal of Vision 10, no. 7 (August 3, 2010): 306. http://dx.doi.org/10.1167/10.7.306.

11

Nichol, David, and Merrilyn Fiebig. "Tracking multiple moving objects by binary object forest segmentation." Image and Vision Computing 9, no. 6 (December 1991): 362–71. http://dx.doi.org/10.1016/0262-8856(91)90003-8.

12

Li, Dong Mei, Tao Li, Tao Xiang, and Wei Xu. "Multiple Objects Tracking Based on Linear Fitting." Applied Mechanics and Materials 602-605 (August 2014): 1438–41. http://dx.doi.org/10.4028/www.scientific.net/amm.602-605.1438.

Abstract:
For multiple-object tracking in complex scenes, a new tracking algorithm based on linear fitting for multiple moving objects is proposed. The DG_CENTRIST feature and a color feature are combined to describe each object, and the overlapping ratio of the tracked object is calculated. The object in the current frame is matched using a coincidence degree. If there is occlusion, we predict the path of each object by linear fitting and adjust the tracking results in order to obtain correct results. The experimental results show that this method can effectively improve the accuracy of multiple-target tracking.
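For readers who want a concrete picture of the path-prediction step described above, the sketch below fits a line to each coordinate of an object's recent centroids and extrapolates through an occlusion. It is only a generic illustration of linear fitting in Python; the window length, NumPy routines, and helper names are assumptions here, not the authors' implementation or their DG_CENTRIST feature pipeline.

```python
import numpy as np

def predict_position(track, n_future=1, window=10):
    """Extrapolate an occluded object's path by linear fitting.

    track: sequence of (x, y) centroids from recent frames.
    Returns the predicted centroid n_future frames ahead.
    """
    pts = np.asarray(track[-window:], dtype=float)
    t = np.arange(len(pts))
    # Fit x(t) and y(t) with degree-1 polynomials (linear fitting).
    fx = np.polyfit(t, pts[:, 0], 1)
    fy = np.polyfit(t, pts[:, 1], 1)
    t_next = len(pts) - 1 + n_future
    return float(np.polyval(fx, t_next)), float(np.polyval(fy, t_next))

# Example: a target moving right at ~2 px/frame and down at ~1 px/frame.
history = [(x, 0.5 * x) for x in range(0, 20, 2)]
print(predict_position(history))  # roughly (20.0, 10.0)
```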
13

Harada, Kensuke, Shinya Nakano, Makoto Kaneko, and Toshio Tsuji. "Manipulation for Multiple Objects." Journal of the Robotics Society of Japan 18, no. 2 (2000): 236–43. http://dx.doi.org/10.7210/jrsj.18.236.

14

Erdmann, Michael, and Tomás Lozano-Pérez. "On multiple moving objects." Algorithmica 2, no. 1-4 (November 1987): 477–521. http://dx.doi.org/10.1007/bf01840371.

15

Merz, Martina. "Multiplex and Unfolding: Computer Simulation in Particle Physics." Science in Context 12, no. 2 (1999): 293–316. http://dx.doi.org/10.1017/s0269889700003434.

Abstract:
What kind of objects are computer programs used for simulation purposes in scientific settings? The current investigation treats a special case. It focuses on “event generators,” the program packages that particle physicists construct and use to simulate mechanisms of particle production. The paper is an attempt to bring the multiplex and unfolding character of such knowledge objects to the fore: Multiple meanings and functions are embodied in the object and can be drawn out selectively according to the requirements of a work setting. The object's conceptual complexity governs its application in some contexts, while the object is considered a mere “black box,” transparent and ready-to-hand, in others. These two poles span a full spectrum of object aspects, functions, and conceptions. Event generators are ideas turned into software, testing grounds for models, just a tool to study the performance of a detector, etc. The object's multiplex nature is submitted to negotiation among different actors.
16

Botterill, K., R. Allen, and P. McGeorge. "Multiple-Object Tracking." Experimental Psychology 58, no. 3 (November 1, 2011): 196–200. http://dx.doi.org/10.1027/1618-3169/a000085.

Abstract:
The Multiple-Object Tracking paradigm has most commonly been utilized to investigate how subsets of targets can be tracked from among a set of identical objects. Recently, this research has been extended to examine the function of featural information when tracking is of objects that can be individuated. We report on a study whose findings suggest that, while participants can only hold featural information for roughly two targets, this task does not detrimentally affect tracking performance, and this points to a discontinuity between the cognitive processes that subserve spatial location and featural information.
17

Tsuji, T., C. Jasnoch, R. Kurazume, and T. Hasegawa. "Tracking Multiple Objects Using the Fast Level Set Method." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2004 (2004): 202. http://dx.doi.org/10.1299/jsmermd.2004.202_4.

18

Li, Dong Mei, and Tao Li. "Multiple Objects Tracking Based on Mixture Features." Advanced Materials Research 945-949 (June 2014): 1869–74. http://dx.doi.org/10.4028/www.scientific.net/amr.945-949.1869.

Abstract:
For multiple-object tracking in complex scenes, this paper proposes a new tracking algorithm for multiple moving objects. The algorithm computes likelihoods using the new DG_CENTRIST feature and a color feature, and then calculates the overlapping ratio between the tracked object and the object in the current frame, using a coincidence degree to measure occlusion. The algorithm has good robustness and stability, and the experimental results show that this method can effectively improve the accuracy of multiple-target tracking.
19

Děchtěrenko, Filip, Daniela Jakubková, Jiří Lukavský, and Christina J. Howard. "Tracking multiple fish." PeerJ 10 (March 3, 2022): e13031. http://dx.doi.org/10.7717/peerj.13031.

Abstract:
Although the Multiple Object Tracking (MOT) task is a widely used experimental method for studying divided attention, tracking objects in the real world usually looks different. For example, in the real world, objects are usually clearly distinguishable from each other and also possess different movement patterns. One such case is tracking groups of creatures, such as tracking fish in an aquarium. We used movies of fish in an aquarium and measured general tracking performance in this task (Experiment 1). In Experiment 2, we compared tracking accuracy within-subjects in fish tracking, tracking typical MOT stimuli, and in a third condition using standard MOT uniform objects which possessed movement patterns similar to the real fish. This third condition was added to further examine the impact of different motion characteristics on tracking performance. Results within a Bayesian framework showed that tracking real fish shares similarities with tracking simple objects in a typical laboratory MOT task. Furthermore, we observed a close relationship between performance in both laboratory MOT tasks (typical and fish-like) and real fish tracking, suggesting that the commonly used laboratory MOT task possesses a good level of ecological validity.
20

Hansen, Rickard. "Smoke Stratification in a Mine Drift with Multiple Objects Downstream." Mining Revue 29, no. 1 (March 1, 2023): 1–18. http://dx.doi.org/10.2478/minrv-2023-0001.

Abstract:
The smoke behaviour and smoke stratification of a fire in a mine drift will be one of the decisive factors affecting the risk to mining personnel during a fire. This paper studies the smoke stratification in a mine drift with multiple objects downstream of the fire, at varying distances and numbers of objects. Data for the study was provided by earlier model-scale fire experiments, and CFD modelling was performed for in-depth analysis of specific phenomena. It was found that at considerable downstream distances from the fire, the smoke stratification differences were significant, reflecting the high impact of multiple objects. With an increasing distance between the objects downstream, an increased degree of mixing and decreased stratification occurred. With an increasing distance between the burning object and the second object, the smoke layer will descend further before encountering the object, and the smoke stratification on the upstream side of the second object will decrease. The increased mixing of the hot gases flowing from the burning object will have a more significant effect on the overall stratification due to the higher temperatures. An increasing number of objects downstream will not by itself lead to increased stratification; with shorter distances between the objects and an increasing number of objects, the smoke stratification may instead be retained for a longer distance. An increasing flow velocity will result in decreasing stratification, found foremost downstream of the burning object, as the tilt of the plume will increase and interact increasingly with the second object.
21

Baugh, Lee A., Amelie Yak, Roland S. Johansson, and J. Randall Flanagan. "Representing multiple object weights: competing priors and sensorimotor memories." Journal of Neurophysiology 116, no. 4 (October 1, 2016): 1615–25. http://dx.doi.org/10.1152/jn.00282.2016.

Abstract:
When lifting an object, individuals scale lifting forces based on long-term priors relating external object properties (such as material and size) to object weight. When experiencing objects that are poorly predicted by priors, people rapidly form and update sensorimotor memories that can be used to predict an object's atypical size-weight relation in support of predictively scaling lift forces. With extensive experience in lifting such objects, long-term priors, assessed with weight judgments, are gradually updated. The aim of the present study was to understand the formation and updating of these memory processes. Participants lifted, over multiple days, a set of black cubes with a normal size-weight mapping and green cubes with an inverse size-weight mapping. Sensorimotor memory was assessed with lifting forces, and priors associated with the black and green cubes were assessed with the size-weight illusion (SWI). Interference was observed in terms of adaptation of the SWI, indicating that priors were not independently adjusted. Half of the participants rapidly learned to scale lift forces appropriately, whereas reduced learning was observed in the others, suggesting that individual differences may be affecting sensorimotor memory abilities. A follow-up experiment showed that lifting forces are not accurately scaled to objects when concurrently performing a visuomotor association task, suggesting that sensorimotor memory formation involves cognitive resources to instantiate the mapping between object identity and weight, potentially explaining the results of experiment 1. These results provide novel insight into the formation and updating of sensorimotor memories and provide support for the independent adjustment of sensorimotor memory and priors.
22

Lidbetter, Thomas, and Kyle Y. Lin. "Searching for multiple objects in multiple locations." European Journal of Operational Research 278, no. 2 (October 2019): 709–20. http://dx.doi.org/10.1016/j.ejor.2019.05.002.

23

Hua, Tianyu, Hongdong Zheng, Yalong Bai, Wei Zhang, Xiao-Ping Zhang, and Tao Mei. "Exploiting Relationship for Complex-scene Image Generation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1584–92. http://dx.doi.org/10.1609/aaai.v35i2.16250.

Abstract:
The significant progress on Generative Adversarial Networks (GANs) has facilitated realistic single-object image generation based on language input. However, complex-scene generation (with various interactions among multiple objects) still suffers from messy layouts and object distortions, due to diverse configurations in layouts and appearances. Prior methods are mostly object-driven and ignore their inter-relations that play a significant role in complex-scene images. This work explores relationship-aware complex-scene image generation, where multiple objects are inter-related as a scene graph. With the help of relationships, we propose three major updates in the generation framework. First, reasonable spatial layouts are inferred by jointly considering the semantics and relationships among objects. Compared to standard location regression, we show relative scales and distances serve a more reliable target. Second, since the relations between objects have significantly influenced an object's appearance, we design a relation-guided generator to generate objects reflecting their relationships. Third, a novel scene graph discriminator is proposed to guarantee the consistency between the generated image and the input scene graph. Our method tends to synthesize plausible layouts and objects, respecting the interplay of multiple objects in an image. Experimental results on Visual Genome and HICO-DET datasets show that our proposed method significantly outperforms prior arts in terms of IS and FID metrics. Based on our user study and visual inspection, our method is more effective in generating logical layout and appearance for complex-scenes.
24

Thomas, Bruce H. "Examining User Perception of the Size of Multiple Objects in Virtual Reality." Applied Sciences 10, no. 11 (June 11, 2020): 4049. http://dx.doi.org/10.3390/app10114049.

Abstract:
This article presents a user study into user perception of an object’s size when presented in virtual reality. Critical for users understanding of virtual worlds is their perception of the size of virtual objects. This article is concerned with virtual objects that are within arm’s reach of the user. Examples of such virtual objects could be virtual controls such as buttons, dials and levers that the users manipulate to control the virtual reality application. This article explores the issue of a user’s ability to judge the size of an object relative to a second object of a different colour. The results determined that the points of subjective equality for height and width judgement tasks ranging from 10 to 90 mm were all within an acceptable value. That is to say, participants were able to perceive height and width judgements very close to the target values. The results for height judgement task for just-noticeable difference were all less than 1.5 mm and for the width judgement task less than 2.3 mm.
25

KIM, DAE-NYEON, HOANG-HON TRINH, and KANG-HYUN JO. "OBJECT RECOGNITION BY SEGMENTED REGIONS USING MULTIPLE CUES ON OUTDOOR ENVIRONMENT." International Journal of Information Acquisition 04, no. 03 (September 2007): 205–13. http://dx.doi.org/10.1142/s0219878907001290.

Abstract:
This work describes a method for recognizing objects for autonomous robot navigation in outdoor environments. The method segments and recognizes objects in an image taken by a moving robot outdoors. We classify objects as natural or artificial: trees are classified as natural objects and buildings as artificial objects, and their characteristics are defined individually. In the process, we segment objects after preprocessing using multiple cues and present a segmentation method based on low-level features using multiple cues. The multiple cues are color, line segments, context information, HCM (Hue Co-occurrence Matrix), PCs (Principal Components), and vanishing points. Objects can be recognized when the predefined multiple cues are combined. The correct object recognition rate of the proposed system is over 92% on our test database, which consists of about 1200 images. We present the results of image segmentation using multiple cues and of object recognition through experiments.
26

Ngomo, Macaire. "Multiple Inheritance Mechanisms In Logic Objects Approach Based On Multiple Specializations of Objects." International Journal of Computer Trends and Technology 69, no. 10 (October 25, 2021): 1–11. http://dx.doi.org/10.14445/22312803/ijctt-v69i10p101.

27

Suben, A., and B. Scholl. "Recently disoccluded objects are preferentially attended during multiple-object tracking." Journal of Vision 12, no. 9 (August 10, 2012): 542. http://dx.doi.org/10.1167/12.9.542.

28

Kushnier, A., and Z. W. Pylyshyn. "Can flashing objects grab visual indexes in multiple object tracking?" Journal of Vision 3, no. 9 (March 18, 2010): 581. http://dx.doi.org/10.1167/3.9.581.

29

Jeong, Su Keun, and Yaoda Xu. "Task-context-dependent Linear Representation of Multiple Visual Objects in Human Parietal Cortex." Journal of Cognitive Neuroscience 29, no. 10 (October 2017): 1778–89. http://dx.doi.org/10.1162/jocn_a_01156.

Abstract:
A host of recent studies have reported robust representations of visual object information in the human parietal cortex, similar to those found in ventral visual cortex. In ventral visual cortex, both monkey neurophysiology and human fMRI studies showed that the neural representation of a pair of unrelated objects can be approximated by the averaged neural representation of the constituent objects shown in isolation. In this study, we examined whether such a linear relationship between objects exists for object representations in the human parietal cortex. Using fMRI and multivoxel pattern analysis, we examined object representations in human inferior and superior intraparietal sulcus, two parietal regions previously implicated in visual object selection and encoding, respectively. We also examined responses from the lateral occipital region, a ventral object processing area. We obtained fMRI response patterns to object pairs and their constituent objects shown in isolation while participants viewed these objects and performed a 1-back repetition detection task. By measuring fMRI response pattern correlations, we found that all three brain regions contained representations for both single object and object pairs. In the lateral occipital region, the representation for a pair of objects could be reliably approximated by the average representation of its constituent objects shown in isolation, replicating previous findings in ventral visual cortex. Such a simple linear relationship, however, was not observed in either parietal region examined. Nevertheless, when we equated the amount of task information present by examining responses from two pairs of objects, we found that representations for the average of two object pairs were indistinguishable in both parietal regions from the average of another two object pairs containing the same four component objects but with a different pairing of the objects (i.e., the average of AB and CD vs. that of AD and CB). Thus, when task information was held consistent, the same linear relationship may govern how multiple independent objects are represented in the human parietal cortex as it does in ventral visual cortex. These findings show that object and task representations coexist in the human parietal cortex and characterize one significant difference of how visual information may be represented in ventral visual and parietal regions.
30

Pallos, Tamás, Gábor Sziebig, Péter Korondi, and Bjørn Solvang. "Multiple-Camera Optical Glyph Tracking." Advanced Materials Research 222 (April 2011): 367–71. http://dx.doi.org/10.4028/www.scientific.net/amr.222.367.

Abstract:
The Optical Glyph Tracking technology utilizes image processing and pattern recognition methods to calculate a given object’s position and orientation in the three dimensional space. Cost, precision and reliability are the key aspects when designing such a tracking system. The advantage of optical tracking is that even the simplest cameras can be used to follow the object, and it requires only a single marker. In this paper support for multiple cameras is introduced. All cameras recognize a set of markers on their images and the calculated coordinates are averaged to get a global value. Also multiple different markers are placed on the tracked objects and when even one of them is recognized, the coordinates of the object can be calculated. The Optical Glyph Tracking is implemented as a standalone module and could be used as an input device for any kind of application.
31

Martins de Freitas, Greice, and Clésio Luis Tozzi. "Object Tracking by Multiple State Management and Eigenbackground Segmentation." International Journal of Natural Computing Research 1, no. 4 (October 2010): 29–36. http://dx.doi.org/10.4018/jncr.2010100103.

Abstract:
This paper presents a multiple target tracking system through a fixed video camera, based on approaches found in literature. The proposed system is composed of three steps: foreground identification through background subtraction techniques; object association through color, area and centroid position matching, by using the Kalman filter to estimate the object’s position in the next frame; object classification according to an object management system. The obtained results showed that the proposed tracking system was able to recognize and track objects in movement on videos, as well as dealing with occlusions and separations, while encouraging future studies in its application on real time security systems.
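As a rough illustration of the "estimate the object's position in the next frame" step mentioned in the abstract above, the sketch below runs a single predict/update cycle of a constant-velocity Kalman filter on an object centroid. The state layout, noise settings, and function name are illustrative assumptions for a generic tracker, not the authors' implementation.

```python
import numpy as np

# State: [x, y, vx, vy]; constant-velocity motion model (illustrative values).
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only the centroid is observed
Q = np.eye(4) * 1e-2                         # process noise (assumed)
R = np.eye(2) * 1.0                          # measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle given the previous state x, covariance P,
    and the measured centroid z = [cx, cy] of the matched foreground blob."""
    # Predict where the object will be in the next frame.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the measured centroid.
    y = z - H @ x_pred                       # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

# Example: object at (10, 20) moving +1 px/frame in x, measured at (11.2, 20.1).
x1, P1 = kalman_step(np.array([10.0, 20.0, 1.0, 0.0]), np.eye(4), np.array([11.2, 20.1]))
```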
32

KIM, DAE-NYEON, HOANG-HON TRINH, and KANG-HYUN JO. "OBJECTS SEGMENTATION USING MULTIPLE FEATURES FOR ROBOT NAVIGATION ON OUTDOOR ENVIRONMENT." International Journal of Information Acquisition 06, no. 02 (June 2009): 99–108. http://dx.doi.org/10.1142/s0219878909001862.

Abstract:
This paper presents a method for recognizing objects for autonomous robot navigation in an outdoor environment. The method segments objects in an image taken by a moving robot outdoors. It begins with object segmentation, which uses multiple features to obtain segmented object regions. The multiple features are color, context information, line segments, edges, Hue Co-occurrence Matrix (HCM), Principal Components (PCs) and Vanishing Points (VPs). We model the objects of the outdoor environment and define their characteristics individually. We segment each region as a mixture using the proposed features and methods. Objects can be detected when the predefined multiple features are combined. The next stage classifies the objects into natural and artificial ones: we detect sky and trees as natural objects, and buildings as artificial objects. The last stage combines appearance and context information. We demonstrate the results of object segmentation using multiple features through experiments.
33

Thornton, Ian, and Todd Horowitz. "Does action disrupt Multiple Object Tracking (MOT)?" Psihologija 48, no. 3 (2015): 289–301. http://dx.doi.org/10.2298/psi1503289t.

Abstract:
While the relationship between action and focused attention has been well-studied, less is known about the ability to divide attention while acting. In the current paper we explore this issue using the multiple object tracking (MOT) paradigm (Pylyshyn & Storm, 1988). We asked whether planning and executing a display-relevant action during tracking would substantially affect the ability to track and later identify targets. In all trials the primary task was to track 4 targets among a set of 8 identical objects. Several times during each trial, one object, selected at random, briefly changed colour. In the baseline MOT trials, these changes were ignored. During active trials, each changed object had to be quickly touched. On a given trial, changed objects were either from the tracking set or were selected at random from all 8 objects. Although there was a small dual-task cost, the need to act did not substantially impair tracking under either touch condition.
34

Mahalakshmi, N., and S. R. Saranya. "Robust Visual Tracking for Multiple Targets with Data Association and Track Management." International Journal of Advance Research and Innovation 3, no. 2 (2015): 68–71. http://dx.doi.org/10.51976/ijari.321516.

Abstract:
Multi-object tracking is still a challenging task in computer vision. A robust approach is proposed to realize multi-object tracking using camera networks. Detection algorithms are utilized to detect object regions with confidence scores for the initialization of individual particle filters. Since data association is the key issue in the tracking-by-detection mechanism, an efficient HOG algorithm and an SVM classifier are used for tracking multiple objects. Furthermore, tracking in single cameras is realized by a greedy matching method. Afterwards, 3D geometry positions are obtained from the rectangular relationship between objects. Corresponding objects are tracked across cameras to take advantage of camera-based tracking. The proposed algorithm runs online, does not need any information about the scene, imposes no restrictions on enter-and-exit zones, makes no assumptions about the areas in which objects move, and can be extended to any class of object tracking. Experimental results show the benefits of using camera networks through higher accuracy in detecting the objects.
35

Hyönä, Li, and Oksama. "Eye Behavior During Multiple Object Tracking and Multiple Identity Tracking." Vision 3, no. 3 (July 31, 2019): 37. http://dx.doi.org/10.3390/vision3030037.

Abstract:
We review all published eye-tracking studies to date that have used eye movements to examine multiple object tracking (MOT) or multiple identity tracking (MIT). In both tasks, observers dynamically track multiple moving objects. In MOT the objects are identical, whereas in MIT they have distinct identities. In MOT, observers prefer to fixate on blank space, which is often the center of gravity formed by the moving targets (centroid). In contrast, in MIT observers have a strong preference for the target-switching strategy, presumably to refresh and maintain identity-location bindings for the targets. To account for the qualitative differences between MOT and MIT, two mechanisms have been posited: a position tracking (MOT) and an identity tracking (MOT & MIT) mechanism. Eye-tracking studies of MOT have also demonstrated that observers execute rescue saccades toward targets that are in danger of becoming occluded or are about to change direction after a collision. Crowding attracts the eyes toward the crowded region in order to increase visual acuity for the crowded objects and prevent target loss. It is suggested that future studies should concentrate more on MIT, as MIT more closely resembles tracking in the real world.
36

Saranya, M., Kariketi Tharun Reddy, Madhumitha Raju, and Manoj Kutala. "Multiple Objects and Road Detection in Unmanned Aerial Vehicle." International Journal of Computer Science, Engineering and Information Technology 12, no. 4 (August 31, 2022): 1–19. http://dx.doi.org/10.5121/ijcseit.2022.12401.

Abstract:
Unmanned Aerial Vehicles have great potential to be widely used in military and civil applications. Additionally, when equipped with cameras, they can also be used in agriculture and surveillance. Aerial imagery poses its own unique challenges that differ from the training sets of modern-day object detectors, since it consists of images of larger areas than regular datasets while the objects themselves are very small. These problems do not allow us to use common object detection models. Many current computer vision algorithms are designed around human-centric photographs, but in top-view imagery taken vertically the objects of interest are small, have fewer features, mostly appear flat and rectangular, and objects close to each other can overlap. Detecting most objects from a bird's-eye view is therefore a challenging task. Hence, this work focuses on detecting multiple objects in such images using enhanced ResNet, FPN and Faster R-CNN models, thereby providing effective surveillance for the UAV; the extraction of road networks from aerial images is also of fundamental importance.
37

HARADA, Kensuke, Jun NISHIYAMA, Yoshihiro MURAKAMI, and Makoto KANEKO. "Pushing Manipulation for Multiple Objects." Transactions of the Japan Society of Mechanical Engineers Series C 69, no. 677 (2003): 195–203. http://dx.doi.org/10.1299/kikaic.69.195.

38

Franconeri, S., S. Helseth, and P. Mok. "Splitting attention over multiple objects." Journal of Vision 10, no. 7 (August 3, 2010): 240. http://dx.doi.org/10.1167/10.7.240.

39

Harada, Kensuke, Jun Nishiyama, Yoshihiro Murakami, and Makoto Kaneko. "Pushing Manipulation for Multiple Objects." Journal of Dynamic Systems, Measurement, and Control 128, no. 2 (2006): 422. http://dx.doi.org/10.1115/1.2199857.

40

Ye, Lin, Wilson Cardwell, and Leonard S. Mark. "Perceiving Multiple Affordances for Objects." Ecological Psychology 21, no. 3 (July 29, 2009): 185–217. http://dx.doi.org/10.1080/10407410903058229.

41

Han, Yuexing, Hideki Koike, and Masanori Idesawa. "Recognizing objects with multiple configurations." Pattern Analysis and Applications 17, no. 1 (June 21, 2012): 195–209. http://dx.doi.org/10.1007/s10044-012-0277-7.

42

Merlo, Giovanni. "Multiple reference and vague objects." Synthese 194, no. 7 (April 5, 2016): 2645–66. http://dx.doi.org/10.1007/s11229-016-1075-3.

43

Minsky, Naftaly H., and Partha Pratim Pal. "Providing multiple views for objects." Software: Practice and Experience 30, no. 7 (June 2000): 803–23. http://dx.doi.org/10.1002/(sici)1097-024x(200006)30:7<803::aid-spe320>3.0.co;2-f.

44

Scimeca, Jason M., and Steven L. Franconeri. "Selecting and tracking multiple objects." Wiley Interdisciplinary Reviews: Cognitive Science 6, no. 2 (December 15, 2014): 109–18. http://dx.doi.org/10.1002/wcs.1328.

45

Ghodake, Abhijit. "Review: Multiple Objects Detection and Tracking using Deep Learning Approach." International Journal of Scientific Research in Engineering and Management 08, no. 05 (May 21, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem34402.

Abstract:
Multiple Object Tracking (MOT) is a crucial tool with diverse applications, such as object detection, object counting, and security systems. Precise identification and monitoring of numerous objects are essential in several computer vision uses, such as monitoring, self-driving cars, and computer-human communication. Very little has been done to address occlusion problems in order to enable the best moving-object tracking with detection. The tracking of visual objects is one of the most important components of computer vision. The process of tracking an object (or a group of objects) across time is called object tracking. Visual object tracking is used to identify or link target items over successive video frames. In this study, we analyze the tracking-by-detection strategy, which includes YOLO-based detection and SORT-based tracking. This work elucidates a general approach to tracking and recognizing many objects with an emphasis on accuracy improvement. We aim to revolutionize computer vision by applying Non-Maximum Suppression (NMS) and Intersection over Union (IoU) approaches, and by combining the state-of-the-art YOLO NAS algorithm with conventional tracking methods or an alternative version of the YOLO algorithm for object identification. It is expected that our work will have a major impact on many different applications, enabling more precise and reliable object tracking and detection in difficult real-world scenarios. Keywords: Multiple Object Detection, Kalman Filters, Multiple Object Tracking, DeepSORT, YOLO, IoU, NMS.
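Since the abstract above leans on Intersection over Union (IoU) and Non-Maximum Suppression (NMS), here is a minimal, self-contained sketch of both under the common (x1, y1, x2, y2) box convention. It is a generic illustration of these standard operations, not code from the reviewed paper or from any particular YOLO implementation.

```python
def iou(a, b):
    """Intersection over Union for boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop any remaining box that overlaps it by more than thresh."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < thresh]
    return keep

# Example: two heavily overlapping detections collapse to one.
print(nms([(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)], [0.9, 0.8, 0.7]))
```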
46

Jahn, Georg, Frank Papenmeier, Hauke S. Meyerhoff, and Markus Huff. "Spatial Reference in Multiple Object Tracking." Experimental Psychology 59, no. 3 (January 1, 2012): 163–73. http://dx.doi.org/10.1027/1618-3169/a000139.

Abstract:
Spatial reference in multiple object tracking is available from configurations of dynamic objects and static reference objects. In three experiments, we studied the use of spatial reference in tracking and in relocating targets after abrupt scene rotations. Observers tracked 1, 2, 3, 4, and 6 targets in 3D scenes, in which white balls moved on a square floor plane. The floor plane was either visible, thus providing static spatial reference, or it was invisible. Without scene rotations, the configuration of dynamic objects provided sufficient spatial reference and static spatial reference was not advantageous. In contrast, with abrupt scene rotations of 20°, static spatial reference supported the relocation of targets. A wireframe floor plane lacking local visual detail was as effective as a checkerboard. Individually colored geometric forms as static reference objects provided no additional benefit either, even if targets were centered on these forms at the abrupt scene rotation. Individualizing the dynamic objects themselves by color for a brief interval around the abrupt scene rotation, however, did improve performance. We conclude that attentional tracking of moving targets proceeds within dynamic configurations but detached from static local background.
47

Tan, Yihua, Yuan Tai, and Shengzhou Xiong. "NCA-Net for Tracking Multiple Objects across Multiple Cameras." Sensors 18, no. 10 (October 11, 2018): 3400. http://dx.doi.org/10.3390/s18103400.

Abstract:
Tracking multiple pedestrians across multi-camera scenarios is an important part of intelligent video surveillance and has great potential application for public security, which has been an attractive topic in the literature in recent years. In most previous methods, artificial features such as color histograms, HOG descriptors and Haar-like feature were adopted to associate objects among different cameras. But there are still many challenges caused by low resolution, variation of illumination, complex background and posture change. In this paper, a feature extraction network named NCA-Net is designed to improve the performance of multiple objects tracking across multiple cameras by avoiding the problem of insufficient robustness caused by hand-crafted features. The network combines features learning and metric learning via a Convolutional Neural Network (CNN) model and the loss function similar to neighborhood components analysis (NCA). The loss function is adapted from the probability loss of NCA aiming at object tracking. The experiments conducted on the NLPR_MCT dataset show that we obtain satisfactory results even with a simple matching operation. In addition, we embed the proposed NCA-Net with two existing tracking systems. The experimental results on the corresponding datasets demonstrate that the extracted features using NCA-net can effectively make improvement on the tracking performance.
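For context on the loss family the abstract above refers to, the sketch below computes the classic neighborhood components analysis (NCA) soft-neighbor probabilities from a batch of embedding vectors. It shows only the standard NCA formulation that the paper adapts; the exact loss used in NCA-Net, its network architecture, and the function name here are assumptions, not details taken from the paper.

```python
import numpy as np

def nca_probabilities(features):
    """Soft-neighbor probabilities from classic NCA (Goldberger et al., 2004):
    p_ij = exp(-||f_i - f_j||^2) / sum_{k != i} exp(-||f_i - f_k||^2), with p_ii = 0.
    In an NCA-style loss, maximizing the probability mass assigned to same-identity
    neighbors pulls matching objects together in the embedding space."""
    f = np.asarray(features, dtype=float)
    d2 = ((f[:, None, :] - f[None, :, :]) ** 2).sum(axis=-1)  # pairwise squared distances
    np.fill_diagonal(d2, np.inf)                              # exclude self-matches
    e = np.exp(-d2)
    return e / e.sum(axis=1, keepdims=True)

# Example: three 2-D embeddings; the first two are close, the third is far away.
p = nca_probabilities([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
print(p[0])  # nearly all of row 0's probability mass falls on index 1
```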
48

Kazanovich, Yakov, and Roman Borisyuk. "An Oscillatory Neural Model of Multiple Object Tracking." Neural Computation 18, no. 6 (June 2006): 1413–40. http://dx.doi.org/10.1162/neco.2006.18.6.1413.

Abstract:
An oscillatory neural network model of multiple object tracking is described. The model works with a set of identical visual objects moving around the screen. At the initial stage, the model selects into the focus of attention a subset of objects initially marked as targets. Other objects are used as distractors. The model aims to preserve the initial separation between targets and distractors while objects are moving. This is achieved by a proper interplay of synchronizing and desynchronizing interactions in a multilayer network, where each layer is responsible for tracking a single target. The results of the model simulation are presented and compared with experimental data. In agreement with experimental evidence, simulations with a larger number of targets have shown higher error rates. Also, the functioning of the model in the case of temporarily overlapping objects is presented.
49

CHANG, CHIA-JUNG, JUN-WEI HSIEH, YUNG-SHENG CHEN, and WEN-FONG HU. "TRACKING MULTIPLE MOVING OBJECTS USING A LEVEL-SET METHOD." International Journal of Pattern Recognition and Artificial Intelligence 18, no. 02 (March 2004): 101–25. http://dx.doi.org/10.1142/s0218001404003071.

Abstract:
This paper presents a novel approach to track multiple moving objects using the level-set method. The proposed method can track different objects no matter if they are rigid, nonrigid, merged, split, with shadows, or without shadows. At the first stage, the paper proposes an edge-based camera compensation technique for dealing with the problem of object tracking when the background is not static. Then, after camera compensation, different moving pixels can be easily extracted through a subtraction technique. Thus, a speed function with three ingredients, i.e. pixel motions, object variances and background variances, can be accordingly defined for guiding the process of object boundary detection. According to the defined speed function, different object boundaries can be efficiently detected and tracked by a curve evolution technique, i.e. the level-set-based method. Once desired objects have been extracted, in order to further understand the video content, this paper takes advantage of a relation table to identify and observe different behaviors of tracked objects. However, the above analysis sometimes fails due to the existence of shadows. To avoid this problem, this paper adopts a technique of Gaussian shadow modeling to remove all unwanted shadows. Experimental results show that the proposed method is much more robust and powerful than other traditional methods.
50

CHANG, KO-CHIANG, and WEN-HSIANG TSAI. "3-D OBJECT INSPECTION FROM MULTIPLE 2-D CAMERA VIEWS." International Journal of Pattern Recognition and Artificial Intelligence 01, no. 01 (April 1987): 85–102. http://dx.doi.org/10.1142/s0218001487000072.

Abstract:
A new approach to 3-D object inspection using 2-D camera views is proposed. With input objects put on a turntable for inspection, a top view of the object is first taken by a top camera right above the table. The object shape boundary points are extracted, and the centroid as well as the principal axis of the shape are computed for top-view registration with the model shape, using the generalized Hough transform. The distance-weighted correlation is then used as the similarity measure for shape matching. Input objects with low similarity values are rejected. Non-rejected objects are inspected further from lateral views, using a lateral camera for image taking and the turntable for object rotation. Experimental results with a high inspection rate and a fast inspection speed show the feasibility of the proposed approach.
