To view other types of publications on this topic, follow the link: - different object task.

Journal articles on the topic "- different object task"

Format your citation in APA, MLA, Chicago, Harvard, and other styles

Browse the top 50 journal articles for your research on the topic "- different object task".

Next to every work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read the abstract of the work online, if the corresponding data are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Barrett, Maeve M., and Fiona N. Newell. "Developmental processes in audiovisual object recognition and object location." Seeing and Perceiving 25 (2012): 38. http://dx.doi.org/10.1163/187847612x646604.

Abstract:
This study investigated whether performance in recognising and locating target objects benefited from the simultaneous presentation of a crossmodal cue. Furthermore, we examined whether these ‘what’ and ‘where’ tasks were affected by developmental processes by testing across different age groups. Using the same set of stimuli, participants performed either an object recognition task or an object location task. For the recognition task, participants were required to respond to two of four target objects (animals) and withhold response to the remaining two objects. For the location task, participants responded when an object occupied either of two target locations and withheld response if the object occupied a different location. Target stimuli were presented by vision alone, audition alone, or bimodally. In both tasks, cross-modal cues were either congruent or incongruent. The results revealed that response time performance in both the object recognition task and the object location task benefited from the presence of a congruent cross-modal cue, relative to incongruent or unisensory conditions. In the younger adult group, the effect was strongest for response times, although the same pattern was found for accuracy in the object location task but not in the recognition task. Following recent studies on multisensory integration in children (e.g., Brandwein, 2010; Gori, 2008), we then tested performance in children (8–14 year olds) using the same task. Although overall performance was affected by age, our findings suggest interesting parallels in the benefit of congruent cross-modal cues between children and adults, for both object recognition and location tasks.
2

Tyler, L. K., E. A. Stamatakis, P. Bright, K. Acres, S. Abdallah, J. M. Rodd, and H. E. Moss. "Processing Objects at Different Levels of Specificity." Journal of Cognitive Neuroscience 16, no. 3 (April 2004): 351–62. http://dx.doi.org/10.1162/089892904322926692.

Abstract:
How objects are represented and processed in the brain is a central topic in cognitive neuroscience. Previous studies have shown that knowledge of objects is represented in a feature-based distributed neural system primarily involving occipital and temporal cortical regions. Research with nonhuman primates suggests that these features are structured in a hierarchical system, with posterior neurons in the inferior temporal cortex representing simple features and anterior neurons in the perirhinal cortex representing complex conjunctions of features (Bussey & Saksida, 2002; Murray & Bussey, 1999). On this account, the perirhinal cortex plays a crucial role in object identification by integrating information from different sensory systems into more complex polymodal feature conjunctions. We tested the implications of these claims for human object processing in an event-related fMRI study in which we presented colored pictures of common objects for 19 subjects to name at two levels of specificity: basic and domain. We reasoned that domain-level naming requires access to a coarser-grained representation of objects, thus involving only posterior regions of the inferior temporal cortex. In contrast, basic-level naming requires finer-grained discrimination to differentiate between similar objects, and thus should involve anterior temporal regions, including the perirhinal cortex. We found that object processing always activated the fusiform gyrus bilaterally, irrespective of the task, whereas the perirhinal cortex was only activated when the task required finer-grained discriminations. These results suggest that the same kind of hierarchical structure, which has been proposed for object processing in the monkey temporal cortex, functions in the human.
3

Quaney, Barbara M., Randolph J. Nudo, and Kelly J. Cole. "Can Internal Models of Objects be Utilized for Different Prehension Tasks?" Journal of Neurophysiology 93, no. 4 (April 2005): 2021–27. http://dx.doi.org/10.1152/jn.00599.2004.

Abstract:
We examined if object information obtained during one prehension task is used to produce fingertip forces for handling the same object in a different prehension task. Our observations address the task specificity of the internal models presumed to issue commands for grasping and transporting objects. Two groups participated in a 2-day experiment in which they lifted a novel object (230 g; 1.2 g/cm3). On Day One, the high force group (HFG) lifted the object by applying 10 N of grip force prior to applying vertical lift force. This disrupted the usual coordination of grip and lift forces and represented a higher grip force than necessary. The self-selected force group (SSFG) lifted the object on Day One with no instructions regarding their grip or lift forces. They first generated grip forces of 5.8 N, which decreased to 2.6 N by the 10th lift. Four hours later, they lifted the same object in the manner of the HFG. On Day Two, both groups lifted the same object “naturally and comfortably” with the opposite hand. The SSFG began Day Two using a grip force of 2.5 N, consistent with the acquisition of an accurate object representation during Day One. The HFG began Day Two using accurately scaled lift forces, but produced grip forces that virtually replicated those of the SSFG on Day One. We concur with recent suggestions that separate, independently adapted internal models produce grip and lift commands. The object representation that scaled lift force was not available to scale grip force. Furthermore, the concept of a general-purpose object representation that is available across prehension tasks was not supported.
4

Mecklinger, A., and N. Müller. "Dissociations in the Processing of “What” and “Where” Information in Working Memory: An Event-Related Potential Analysis." Journal of Cognitive Neuroscience 8, no. 5 (September 1996): 453–73. http://dx.doi.org/10.1162/jocn.1996.8.5.453.

Abstract:
Based on recent research that suggests that the processing of spatial and object information in the primate brain involves functionally and anatomically different systems, we examined whether the encoding and retention of object and spatial information in working memory are associated with different ERP components. In a study-test procedure subjects were asked to either remember simple geometric objects presented in a 4 by 4 spatial matrix irrespective of their position (object memory task) or to remember spatial positions of the objects irrespective of their forms (spatial memory task). The EEG was recorded from 13 electrodes during the study phase and the test phase. Recognition performance (reaction time and accuracy) was not different for the two memory tasks. PCA analyses suggest that the same four ERP components are evoked in the study phase by both tasks, which could be identified as N100, P200, P300, and slow wave. ERPs started to differ as a function of memory task 225 msec after stimulus onset at the posterior recording sites: An occipital maximal P200 component, lateralized to the right posterior temporal recording site, was observed for the object memory but not for the spatial memory task. Between-tasks differences were also obtained for P300 scalp distribution. Moreover, ERPs evoked by objects that were remembered later were more positive than ERPs to objects that were not remembered, starting at 400 msec poststimulus. The PCA analysis suggests that P300 and a slow wave following P300 at the frontal recordings contribute to these differences. A similar differential effect was not found between positions remembered or not remembered later. Post hoc analyses revealed that the absence of such effects in the spatial memory task could be due to less elaborated mnemonic strategies used in the spatial task compared to the object memory task.
In the face of two additional behavioral experiments showing that subjects exclusively encode object features in the object memory task and spatial stimulus features in the spatial memory task, the present data provide evidence that encoding and rehearsal of object and spatial information in working memory are subserved by functionally and anatomically different subsystems.
5

Proud, Keaton, James B. Heald, James N. Ingram, Jason P. Gallivan, Daniel M. Wolpert, and J. Randall Flanagan. "Separate motor memories are formed when controlling different implicitly specified locations on a tool." Journal of Neurophysiology 121, no. 4 (April 1, 2019): 1342–51. http://dx.doi.org/10.1152/jn.00526.2018.

Abstract:
Skillful manipulation requires forming and recalling memories of the dynamics of objects linking applied force to motion. It has been assumed that such memories are associated with entire objects. However, we often control different locations on an object, and these locations may be associated with different dynamics. We have previously demonstrated that multiple memories can be formed when participants are explicitly instructed to control different visual points marked on an object. A key question is whether this novel finding generalizes to more natural situations in which control points are implicitly defined by the task. To answer this question, we used objects with no explicit control points and tasks designed to encourage the use of distinct implicit control points. Participants moved a handle, attached to a robotic interface, to control the position of a rectangular object (“eraser”) in the horizontal plane. Participants were required to move the eraser straight ahead to wipe away a column of dots (“dust”), located to either the left or right. We found that participants adapted to opposing dynamics when linked to the left and right dust locations, even though the movements required for these two contexts were the same. Control conditions showed this learning could not be accounted for by contextual cues or the fact that the task goal required moving in a straight line. These results suggest that people naturally control different locations on manipulated objects depending on the task context and that doing so affords the formation of separate motor memories. NEW & NOTEWORTHY Skilled manipulation requires forming motor memories of object dynamics, which have been assumed to be associated with entire objects. However, we recently demonstrated that people can form multiple memories when explicitly instructed to control different visual points on an object. 
In this article we show that this novel finding generalizes to more natural situations in which control points are implicitly defined by the task.
6

Kitayama, Shinobu, Sean Duffy, Tadashi Kawamura, and Jeff T. Larsen. "Perceiving an Object and Its Context in Different Cultures." Psychological Science 14, no. 3 (May 2003): 201–6. http://dx.doi.org/10.1111/1467-9280.02432.

Abstract:
In two studies, a newly devised test (framed-line test) was used to examine the hypothesis that individuals engaging in Asian cultures are more capable of incorporating contextual information and those engaging in North American cultures are more capable of ignoring contextual information. On each trial, participants were presented with a square frame, within which was printed a vertical line. Participants were then shown another square frame of the same or different size and asked to draw a line that was identical to the first line in either absolute length (absolute task) or proportion to the height of the surrounding frame (relative task). The results supported the hypothesis: Whereas Japanese were more accurate in the relative task, Americans were more accurate in the absolute task. Moreover, when engaging in another culture, individuals tended to show the cognitive characteristic common in the host culture.
7

Soans, Melisa Andrea. "Review on Different Methods for Real Time Object Detection for Visually Impaired." International Journal for Research in Applied Science and Engineering Technology 10, no. 4 (April 30, 2022): 3414–21. http://dx.doi.org/10.22214/ijraset.2022.41438.

Abstract:
Real-time object detection is the task of performing object detection with fast inference while maintaining a base level of accuracy. Real-time object detection helps the visually impaired detect the objects around them. Object detection can be done using different models, such as the YOLOv3 model and the SSD MobileNet model. This paper aims to review and analyze the implementation and performance of various methodologies for real-time object detection that will help the visually impaired. Each technique has its advantages and limitations. This paper reviews the different methods and helps in selecting the best method for object detection.
8

Şık, Ayhan, Petra van Nieuwehuyzen, Jos Prickaerts, and Arjan Blokland. "Performance of different mouse strains in an object recognition task." Behavioural Brain Research 147, no. 1-2 (December 2003): 49–54. http://dx.doi.org/10.1016/s0166-4328(03)00117-7.

9

Tinguria, Ajay, and R. Sudhakar. "Extracting Task Designs Using Fuzzy and Neuro-Fuzzy Approaches." International Journal of Computer Science and Mobile Computing 11, no. 7 (July 30, 2022): 72–82. http://dx.doi.org/10.47760/ijcsmc.2022.v11i07.007.

Abstract:
Several applications generate large volumes of data on movements, including vehicle navigation, fleet management, wildlife tracking and, in the near future, cell phone tracking. Such applications require support to manage the growing volumes of movement data. Understanding how an object moves in space and time is fundamental to the development of an appropriate movement model of the object. Many objects are dynamic and their positions change with time. The ability to reason about the changing positions of moving objects over time thus becomes crucial. Explanations of the movements of an object require descriptions of the patterns they exhibit over space and time. Every moving object exhibits a wide range of patterns, some of which repeat, but not exactly, over space and time, such as an animal foraging or a delivery truck moving about a city. Even though movement patterns are not exactly the same, they are not completely different. Moving objects may move on the same or nearly similar paths and visit the same locations over time. In this paper we discuss some techniques based on fuzzy approaches.
10

Müller, Dagmar, István Winkler, Urte Roeber, Susann Schaffer, István Czigler, and Erich Schröger. "Visual Object Representations Can Be Formed outside the Focus of Voluntary Attention: Evidence from Event-related Brain Potentials." Journal of Cognitive Neuroscience 22, no. 6 (June 2010): 1179–88. http://dx.doi.org/10.1162/jocn.2009.21271.

Abstract:
There is an ongoing debate whether visual object representations can be formed outside the focus of voluntary attention. Recently, implicit behavioral measures suggested that grouping processes can occur for task-irrelevant visual stimuli, thus supporting theories of preattentive object formation (e.g., Lamy, D., Segal, H., & Ruderman, L. Grouping does not require attention. Perception and Psychophysics, 68, 17–31, 2006; Russell, C., & Driver, J. New indirect measures of “inattentive” visual grouping in a change-detection task. Perception and Psychophysics, 67, 606–623, 2005). We developed an ERP paradigm that allows testing for visual grouping when neither the objects nor its constituents are related to the participant's task. Our paradigm is based on the visual mismatch negativity ERP component, which is elicited by stimuli deviating from a regular stimulus sequence even when the stimuli are ignored. Our stimuli consisted of four pairs of colored discs that served as objects. These objects were presented isochronously while participants were engaged in a task related to the continuously presented fixation cross. Occasionally, two color deviances occurred simultaneously either within the same object or across two different objects. We found significant ERP differences for same- versus different-object deviances, supporting the notion that forming visual object representations by grouping can occur outside the focus of voluntary attention. Also our behavioral experiment, in which participants responded to color deviances—thus, this time the discs but, again, not the objects were task relevant—showed that the object status matters. Our results stress the importance of early grouping processes for structuring the perceptual world.
11

Smith, Edward E., John Jonides, Robert A. Koeppe, Edward Awh, Eric H. Schumacher, and Satoshi Minoshima. "Spatial versus Object Working Memory: PET Investigations." Journal of Cognitive Neuroscience 7, no. 3 (July 1995): 337–56. http://dx.doi.org/10.1162/jocn.1995.7.3.337.

Abstract:
We used positron emission tomography (PET) to answer the following question: Is working memory a unitary storage system, or does it instead include different storage buffers for different kinds of information? In Experiment 1, PET measures were taken while subjects engaged in either a spatial-memory task (retain the position of three dots for 3 sec) or an object-memory task (retain the identity of two objects for 3 sec). The results manifested a striking double dissociation, as the spatial task activated only right-hemisphere regions, whereas the object task activated primarily left-hemisphere regions. The spatial (right-hemisphere) regions included occipital, parietal, and prefrontal areas, while the object (left-hemisphere) regions included inferotemporal and parietal areas. Experiment 2 was similar to Experiment 1 except that the stimuli and trial events were identical for the spatial and object tasks; whether spatial or object memory was required was manipulated by instructions. The PET results once more showed a double dissociation, as the spatial task activated primarily right-hemisphere regions (again including occipital, parietal and prefrontal areas), whereas the object task activated only left-hemisphere regions (again including inferotemporal and parietal areas). Experiment 3 was a strictly behavioral study, which produced another double dissociation. It used the same tasks as Experiment 2, and showed that a variation in spatial similarity affected performance in the spatial but not the object task, whereas a variation in shape similarity affected performance in the object but not the spatial task. Taken together, the results of the three experiments clearly imply that different working-memory buffers are used for storing spatial and object information.
12

Grekov, R., and A. Borisov. "CHARACTERIZATION OF THE EFFICIENCY OF THE FEATURES AGGREGATE IN FUZZY PATTERN RECOGNITION TASK." Environment. Technology. Resources. Proceedings of the International Scientific and Practical Conference 1 (June 27, 1997): 78. http://dx.doi.org/10.17770/etr1997vol1.1858.

Abstract:
Let a set of objects exist, each of which is described by N features X1, ..., XN, where each feature Xj is a real number. Thus each object is specified by an N-dimensional vector (X1, ..., XN) and represents a point in the space of object descriptions, R^N. There are also objects for which the degrees of membership in either class are unknown. A decision rule should be determined that enables estimation of the membership in the given classes of any object with unknown degrees of membership (Ozols and Borisov, 1996). To determine the decision rule, features should be found which make it possible to distinguish objects belonging to different classes, i.e., features that are specific to each class. That is why a subtask of estimating the efficiency of features should be solved: a function S should be determined which enables estimation of the efficiency both of separate features and of groups of features. Thus, the task is reduced to the determination of a number of features from the set of N that will best describe groups of objects and will enable recognition of an object's class membership that is as correct as possible.
13

Muto, Hiroyuki. "Correlational Evidence for the Role of Spatial Perspective-Taking Ability in the Mental Rotation of Human-Like Objects." Experimental Psychology 68, no. 1 (January 2021): 41–48. http://dx.doi.org/10.1027/1618-3169/a000505.

Abstract:
Abstract. People can mentally rotate objects that resemble human bodies more efficiently than nonsense objects in the same/different judgment task. Previous studies proposed that this human-body advantage in mental rotation is mediated by one's projections of body axes onto a human-like object, implying that human-like objects elicit a strategy shift, from an object-based to an egocentric mental rotation. To test this idea, we investigated whether mental rotation performance involving a human-like object had a stronger association with spatial perspective-taking, which entails egocentric mental rotation, than a nonsense object. In the present study, female participants completed a chronometric mental rotation task with nonsense and human-like objects. Their spatial perspective-taking ability was then assessed using the Road Map Test and the Spatial Orientation Test. Mental rotation response times (RTs) were shorter for human-like than for nonsense objects, replicating previous research. More importantly, spatial perspective-taking had a stronger negative correlation with RTs for human-like than for nonsense objects. These findings suggest that human-like stimuli in the same/different mental rotation task induce a strategy shift toward efficient egocentric mental rotation.
14

Jeong, Su Keun, and Yaoda Xu. "Task-context-dependent Linear Representation of Multiple Visual Objects in Human Parietal Cortex." Journal of Cognitive Neuroscience 29, no. 10 (October 2017): 1778–89. http://dx.doi.org/10.1162/jocn_a_01156.

Abstract:
A host of recent studies have reported robust representations of visual object information in the human parietal cortex, similar to those found in ventral visual cortex. In ventral visual cortex, both monkey neurophysiology and human fMRI studies showed that the neural representation of a pair of unrelated objects can be approximated by the averaged neural representation of the constituent objects shown in isolation. In this study, we examined whether such a linear relationship between objects exists for object representations in the human parietal cortex. Using fMRI and multivoxel pattern analysis, we examined object representations in human inferior and superior intraparietal sulcus, two parietal regions previously implicated in visual object selection and encoding, respectively. We also examined responses from the lateral occipital region, a ventral object processing area. We obtained fMRI response patterns to object pairs and their constituent objects shown in isolation while participants viewed these objects and performed a 1-back repetition detection task. By measuring fMRI response pattern correlations, we found that all three brain regions contained representations for both single object and object pairs. In the lateral occipital region, the representation for a pair of objects could be reliably approximated by the average representation of its constituent objects shown in isolation, replicating previous findings in ventral visual cortex. Such a simple linear relationship, however, was not observed in either parietal region examined. Nevertheless, when we equated the amount of task information present by examining responses from two pairs of objects, we found that representations for the average of two object pairs were indistinguishable in both parietal regions from the average of another two object pairs containing the same four component objects but with a different pairing of the objects (i.e., the average of AB and CD vs. that of AD and CB). 
Thus, when task information was held consistent, the same linear relationship may govern how multiple independent objects are represented in the human parietal cortex as it does in ventral visual cortex. These findings show that object and task representations coexist in the human parietal cortex and characterize one significant difference of how visual information may be represented in ventral visual and parietal regions.
15

Hu, Jianqiu, Jiazhou He, Pan Jiang, and Yuwei Yin. "SOMC:A Object-Level Data Augmentation for Sea Surface Object Detection." Journal of Physics: Conference Series 2171, no. 1 (January 1, 2022): 012033. http://dx.doi.org/10.1088/1742-6596/2171/1/012033.

Abstract:
The deep learning model is a data-driven model, and more high-quality data will bring it better results. In the task of object detection for Unmanned Surface Vessels (USVs) based on optical images or videos, objects are sparser than targets in natural scenes. Current datasets of sea scenes often have disadvantages such as high image acquisition costs, a wide range of changes in object size, and an imbalance in the number of different objects, which limit the generalization of models for the detection of sea surface objects. In order to address the problems of insufficient scene coverage and poor performance in current sea surface object detection, an object-level data augmentation for sea surface objects called SOMC is proposed. According to the different scenarios faced by the USV when performing autonomous obstacle avoidance, patrol and other tasks, SOMC generates suitable scenarios by conveniently mixing and copying targets, providing the possibility of unlimited expansion of the sea surface object data. The experiments used images from video taken by the camera on top of the USV. A sufficient number of comparative experiments prove that SOMC integrates with existing data augmentations and achieves an improvement in detection performance, demonstrating the effectiveness and practicality of SOMC in the perception task of the USV.
16

Marful, Alejandra, Daniela Paolieri, and M. Teresa Bajo. "Is naming faces different from naming objects? Semantic interference in a face- and object-naming task." Memory & Cognition 42, no. 3 (October 16, 2013): 525–37. http://dx.doi.org/10.3758/s13421-013-0376-8.

17

SALLEH, Ahmad Faizal, Ryojun IKEURA, Soichiro HAYAKAWA, and Hideki SAWAI. "Cooperative Object Transfer: Effect of Observing Different Part of the Object on the Cooperative Task Smoothness." Journal of Biomechanical Science and Engineering 6, no. 4 (2011): 343–60. http://dx.doi.org/10.1299/jbse.6.343.

18

Korjoukov, Ilia, Danique Jeurissen, Niels A. Kloosterman, Josine E. Verhoeven, H. Steven Scholte, and Pieter R. Roelfsema. "The Time Course of Perceptual Grouping in Natural Scenes." Psychological Science 23, no. 12 (November 8, 2012): 1482–89. http://dx.doi.org/10.1177/0956797612443832.

Abstract:
Visual perception starts with localized filters that subdivide the image into fragments that undergo separate analyses. The visual system has to reconstruct objects by grouping image fragments that belong to the same object. A widely held view is that perceptual grouping occurs in parallel across the visual scene and without attention. To test this idea, we measured the speed of grouping in pictures of animals and vehicles. In a classification task, these pictures were categorized efficiently. In an image-parsing task, participants reported whether two cues fell on the same or different objects, and we measured reaction times. Despite the participants’ fast object classification, perceptual grouping required more time if the distance between cues was larger, and we observed an additional delay when the cues fell on different parts of a single object. Parsing was also slower for inverted than for upright objects. These results imply that perception starts with rapid object classification and that rapid classification is followed by a serial perceptual grouping phase, which is more efficient for objects in a familiar orientation than for objects in an unfamiliar orientation.
19

Chiatti, Agnese, Gianluca Bardaro, Emanuele Bastianelli, Ilaria Tiddi, Prasenjit Mitra, and Enrico Motta. "Task-Agnostic Object Recognition for Mobile Robots through Few-Shot Image Matching." Electronics 9, no. 3 (February 25, 2020): 380. http://dx.doi.org/10.3390/electronics9030380.

Abstract:
To assist humans with their daily tasks, mobile robots are expected to navigate complex and dynamic environments, presenting unpredictable combinations of known and unknown objects. Most state-of-the-art object recognition methods are unsuitable for this scenario because they require that: (i) all target object classes are known beforehand, and (ii) a vast number of training examples is provided for each class. This evidence calls for novel methods to handle unknown object classes, for which fewer images are initially available (few-shot recognition). One way of tackling the problem is learning how to match novel objects to their most similar supporting example. Here, we compare different (shallow and deep) approaches to few-shot image matching on a novel data set, consisting of 2D views of common object types drawn from a combination of ShapeNet and Google. First, we assess if the similarity of objects learned from a combination of ShapeNet and Google can scale up to new object classes, i.e., categories unseen at training time. Furthermore, we show how normalising the learned embeddings can impact the generalisation abilities of the tested methods, in the context of two novel configurations: (i) where the weights of a Convolutional two-branch Network are imprinted and (ii) where the embeddings of a Convolutional Siamese Network are L2-normalised.
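The few-shot matching strategy this abstract describes — assigning a novel object to the class of its most similar support example, after L2-normalising the learned embeddings — can be sketched as follows. This is a minimal illustration with made-up toy embeddings and hypothetical function names, not the authors' actual pipeline; in their work the embeddings come from trained convolutional two-branch or Siamese networks.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    # Scale each embedding to unit length so that dot products
    # become cosine similarities.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def match_few_shot(query_emb, support_embs, support_labels):
    """Assign the query to the class of its most similar support example."""
    q = l2_normalize(query_emb)
    s = l2_normalize(support_embs)
    sims = s @ q  # cosine similarity of the query to each support embedding
    return support_labels[int(np.argmax(sims))]

# Toy example: four support embeddings from two (hypothetical) classes.
support = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
labels = ["mug", "mug", "chair", "chair"]
query = np.array([0.2, 0.8])
print(match_few_shot(query, support, labels))  # -> chair
```

Because every embedding is reduced to unit length, the comparison depends only on the direction of the embedding vectors, which is one plausible reason normalisation can affect generalisation to unseen classes.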
20

Jeong, Su Keun, and Yaoda Xu. "Neural Representation of Targets and Distractors during Object Individuation and Identification." Journal of Cognitive Neuroscience 25, no. 1 (January 2013): 117–26. http://dx.doi.org/10.1162/jocn_a_00298.

Abstract:
In many everyday activities, we need to attend and encode multiple target objects among distractor objects. For example, when driving a car on a busy street, we need to simultaneously attend objects such as traffic signs, pedestrians, and other cars, while ignoring colorful and flashing objects in display windows. To explain how multiple visual objects are selected and encoded in visual STM and in perception in general, the neural object file theory argues that, whereas object selection and individuation is supported by inferior intraparietal sulcus (IPS), the encoding of detailed object features that enables object identification is mediated by superior IPS and higher visual areas such as the lateral occipital complex (LOC). Nevertheless, because task-irrelevant distractor objects were never present in previous studies, it is unclear how distractor objects would impact neural responses related to target object individuation and identification. To address this question, in two fMRI experiments, we asked participants to encode target object shapes among distractor object shapes, with targets and distractors shown in different spatial locations and in different colors. We found that distractor-related neural processing only occurred at low, but not at high, target encoding load and impacted both target individuation in inferior IPS and target identification in superior IPS and LOC. However, such distractor-related neural processing was short-lived, as it was only present during the visual STM encoding but not the delay period. Moreover, with spatial cuing of target locations in advance, distractor processing was attenuated during target encoding in superior IPS. These results are consistent with the load theory of visual information processing. They also show that, whereas inferior IPS and LOC were automatically engaged in distractor processing under low task load, with the help of precuing, superior IPS was able to only encode the task-relevant visual information.
21

Xu, Yaoda. "Distinctive Neural Mechanisms Supporting Visual Object Individuation and Identification." Journal of Cognitive Neuroscience 21, no. 3 (March 2009): 511–18. http://dx.doi.org/10.1162/jocn.2008.21024.

Abstract:
Many everyday activities, such as driving on a busy street, require the encoding of distinctive visual objects from crowded scenes. Given resource limitations of our visual system, one solution to this difficult and challenging task is to first select individual objects from a crowded scene (object individuation) and then encode their details (object identification). Using functional magnetic resonance imaging, two distinctive brain mechanisms were recently identified that support these two stages of visual object processing. While the inferior intraparietal sulcus (IPS) selects a fixed number of about four objects via their spatial locations, the superior IPS and the lateral occipital complex (LOC) encode the features of a subset of the selected objects in great detail (object shapes in this case). Thus, the inferior IPS individuates visual objects from a crowded display and the superior IPS and higher visual areas participate in subsequent object identification. Consistent with the prediction of this theory, even when only object shape identity but not its location is task relevant, this study shows that object individuation in the inferior IPS treats four identical objects similarly as four objects that are all different, whereas object shape identification in the superior IPS and the LOC treat four identical objects as a single unique object. These results provide independent confirmation supporting the dissociation between visual object individuation and identification in the brain.
22

Suzuki, Wendy A., Earl K. Miller, and Robert Desimone. "Object and Place Memory in the Macaque Entorhinal Cortex." Journal of Neurophysiology 78, no. 2 (August 1, 1997): 1062–81. http://dx.doi.org/10.1152/jn.1997.78.2.1062.

Abstract:
Suzuki, Wendy A., Earl K. Miller, and Robert Desimone. Object and place memory in the macaque entorhinal cortex. J. Neurophysiol. 78: 1062–1081, 1997. Lesions of the entorhinal cortex in humans, monkeys, and rats impair memory for a variety of kinds of information, including memory for objects and places. To begin to understand the contribution of entorhinal cells to different forms of memory, responses of entorhinal cells were recorded as monkeys performed either an object or place memory task. The object memory task was a variation of delayed matching to sample. A sample picture was presented at the start of the trial, followed by a variable sequence of zero to four test pictures, ending with a repetition of the sample (i.e., a match). The place memory task was a variation of delayed matching to place. In this task, a cue stimulus was presented at a variable sequence of one to four “places” on a computer screen, ending with a repetition of one of the previously shown places (i.e., a match). For both tasks, the animals were rewarded for releasing a bar to the match. To solve these tasks, the monkey must 1) discriminate the stimuli, 2) maintain a memory of the appropriate stimuli during the course of the trial, and 3) evaluate whether a test stimulus matches previously presented stimuli. The responses of entorhinal cortex neurons were consistent with a role in all three of these processes in both tasks. We found that 47% and 55% of the visually responsive entorhinal cells responded selectively to the different objects or places presented during the object or place task, respectively. Similar to previous findings in prefrontal but not perirhinal cortex on the object task, some entorhinal cells had sample-specific delay activity that was maintained throughout all of the delay intervals in the sequence. For the place task, some cells had location-specific maintained activity in the delay immediately following a specific cue location. 
In addition, 59% and 22% of the visually responsive cells recorded during the object and place task, respectively, responded differently to the test stimuli according to whether they were matching or nonmatching to the stimuli held in memory. Responses of some cells were enhanced to matching stimuli, whereas others were suppressed. This suppression or enhancement typically occurred well before the animals' behavioral response, suggesting that this information could be used to perform the task. These results indicate that entorhinal cells receive sensory information about both objects and spatial locations and that their activity carries information about objects and locations held in short-term memory.
23

Arshad, Usama. "Object Detection in Last Decade - A Survey." Scientific Journal of Informatics 8, no. 1 (May 10, 2021): 60–70. http://dx.doi.org/10.15294/sji.v8i1.28956.

Abstract:
In the last decade, object detection is one of the interesting topics that played an important role in revolutionizing the present era. Especially when it comes to computer vision, object detection is a challenging and most fundamental problem. Researchers in the last decade enhanced object detection and made many advanced discoveries using technological advancements. When we talk about object detection, we also must talk about deep learning and its advancements over time. This research work describes the advancements in object detection over the last 10 years (2010-2020). Different papers published in the last 10 years related to object detection and its types are discussed with respect to their role in the advancement of object detection. This research work also describes different types of object detection, which include text detection, face detection, etc. It clearly describes the changes in object detection techniques over the period of the last 10 years. Object detection is divided into two groups: general detection and task-based detection. General detection is discussed chronologically and with its different variants, while task-based detection includes many state-of-the-art algorithms and techniques according to tasks. We also describe the basic comparison of how some algorithms and techniques have been updated and played a major role in the advancement of different fields related to object detection. We conclude that the most important advancements happened in the last decade, and the future promises much more advancement in object detection on the basis of the work done in this decade.
24

Lin, Guan-Ting, Vinay Malligere Shivanna, and Jiun-In Guo. "A Deep-Learning Model with Task-Specific Bounding Box Regressors and Conditional Back-Propagation for Moving Object Detection in ADAS Applications." Sensors 20, no. 18 (September 15, 2020): 5269. http://dx.doi.org/10.3390/s20185269.

Abstract:
This paper proposes a deep-learning model with task-specific bounding box regressors (TSBBRs) and a conditional back-propagation mechanism for detecting objects in motion for advanced driver assistance system (ADAS) applications. The proposed model separates the object detection networks for objects of different sizes and applies the proposed algorithm to achieve better detection results for both larger and tinier objects. For larger objects, a neural network with a larger visual receptive field is used to acquire information from larger areas. For the detection of tinier objects, a network with a smaller receptive field utilizes fine-grained features. A conditional back-propagation mechanism yields different types of TSBBRs to perform data-driven learning for the set criterion and learn the representation of different object sizes without degrading each other. The design of dual-path object bounding box regressors can simultaneously detect objects at various dissimilar scales and aspect ratios. Only a single inference of the neural network is needed for each frame to support the detection of multiple types of objects, such as bicycles, motorbikes, cars, buses, trucks, and pedestrians, and to locate their exact positions. The proposed model was developed and implemented on different NVIDIA devices such as the 1080 Ti, DRIVE-PX2, and Jetson TX-2, achieving processing performance of 67 frames per second (fps), 19.4 fps, and 8.9 fps, respectively, for video input at 448 × 448 resolution. The proposed model can detect objects as small as 13 × 13 pixels and achieves 86.54% accuracy on the publicly available Pascal Visual Object Class (VOC) car database and 82.4% mean average precision (mAP) on a large collection of common road real-scene images (iVS database).
25

Todd, Steven, and Arthur F. Kramer. "Attentional Guidance in Visual Attention." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 37, no. 19 (October 1993): 1378–82. http://dx.doi.org/10.1518/107118193784162290.

Abstract:
Earlier research has shown that a task-irrelevant sudden onset of an object will capture or draw an observer's visual attention to that object's location (e.g., Yantis & Jonides, 1984). In the four experiments reported here, we explore the question of whether task-irrelevant properties other than sudden-onset may capture attention. Our results suggest that a uniquely colored or luminous object, as well as an irrelevant boundary, may indeed capture or guide attention, though apparently to a lesser degree than a sudden onset: it appears that the degree of attentional capture is dependent on the relative salience of the varied, irrelevant dimension. Whereas a sudden onset is very salient, a uniquely colored object, for example, is only salient relative to the other objects within view, both to the degree that it is different in hue from its neighbors and the number of neighbors from which it differs. The relationship of these findings to work in the fields of visual momentum and visual scanning is noted.
26

Srikesavan, Cynthia S., Barbara Shay, and Tony Szturm. "Test-Retest Reliability and Convergent Validity of a Computer Based Hand Function Test Protocol in People with Arthritis." Open Orthopaedics Journal 9, no. 1 (February 27, 2015): 57–67. http://dx.doi.org/10.2174/1874325001509010057.

Abstract:
Objectives: A computer based hand function assessment tool has been developed to provide a standardized method for quantifying task performance during manipulations of common objects/tools/utensils with diverse physical properties and grip/grasp requirements for handling. The study objectives were to determine test-retest reliability and convergent validity of the test protocol in people with arthritis. Methods: Three different object manipulation tasks were evaluated twice in forty people with rheumatoid arthritis (RA) or hand osteoarthritis (HOA). Each object was instrumented with a motion sensor and moved in concert with a computer generated visual target. Self-reported joint pain and stiffness levels were recorded before and after each task. Task performance was determined by comparing the object movement with the computer target motion. This was correlated with grip strength, nine hole peg test, Disabilities of Arm, Shoulder, and Hand (DASH) questionnaire, and the Health Assessment Questionnaire (HAQ) scores. Results: The test protocol indicated moderate to high test-retest reliability of performance measures for three manipulation tasks, intraclass correlation coefficients (ICCs) ranging between 0.5 to 0.84, p<0.05. Strength of association between task performance measures with self- reported activity/participation composite scores was low to moderate (Spearman rho <0.7). Low correlations (Spearman rho < 0.4) were observed between task performance measures and grip strength; and between three objects’ performance measures. Significant reduction in pain and joint stiffness (p<0.05) was observed after performing each task. Conclusion: The study presents initial evidence on the test retest reliability and convergent validity of a computer based hand function assessment protocol in people with rheumatoid arthritis or hand osteoarthritis. 
The novel tool objectively measures overall task performance during a variety of object manipulation tasks by having the user track a computer-based visual target. This offers a more informative method of assessing performance than measuring only the time taken to complete a task or relying on subjective self-reports that cover a limited range of objects and tasks. In addition, joint pain and stiffness levels before and after a manipulation task are tracked, which is lacking in other hand outcome measures. Performance measures during a broad range of object manipulation tasks relate to many activities relevant to life role participation. Therefore, task performance evaluation with common objects, utensils, or tools would be more valuable for gauging the difficulties encountered in daily life by people with arthritis. Future studies should consider a few revisions of the present protocol and evaluate a number of different objects targeting strength-, fine-, and gross-dexterity-based tasks for a broader application of the tool in arthritis populations.
27

Taniguchi, Kosuke, Kana Kuraguchi, and Yukuo Konishi. "Task Difficulty Makes ‘No’ Response Different From ‘Yes’ Response in Detection of Fragmented Object Contours." Perception 47, no. 9 (July 17, 2018): 943–65. http://dx.doi.org/10.1177/0301006618787395.

Abstract:
Two-alternative forced choice tasks are often used in object detection, which regard detecting an object as a ‘yes’ response and detecting no object as a ‘no’ response. Previous studies have suggested that yes/no responses arise from identical or similar processing. In this study, we investigated the difference in processing between detecting an object (‘yes’ response) and not detecting any object (‘no’ response) by controlling the task difficulty in terms of fragment length and stimulus duration. The results indicated that a ‘yes’ response depends on accurate and stable decisions through grouping processing, whereas a ‘no’ response might involve two distinct processes: accurate decisions and intuitive decisions. Accurate decisions of ‘no’ may arise after the rejection of a ‘yes’ response with grouping processing, which is an accurate but slow response in an easy task. Intuitive decisions of ‘no’ arise as the result of breaking down the decision process when the received information is insufficient for grouping processing in a difficult task. Therefore, intuitive decisions of ‘no’ arise quickly but are inaccurate. The different processes associated with yes/no responses are discussed in terms of the hierarchical structure of object recognition, especially with respect to receiving information and grouping.
28

Nassar, Ahmed Samy, Sébastien Lefèvre, and Jan Dirk Wegner. "Multi-View Instance Matching with Learned Geometric Soft-Constraints." ISPRS International Journal of Geo-Information 9, no. 11 (November 18, 2020): 687. http://dx.doi.org/10.3390/ijgi9110687.

Abstract:
We present a new approach for matching urban object instances across multiple ground-level images for the ultimate goal of city-scale mapping of objects with high positioning accuracy. What makes this task challenging is the strong change in view-point, different lighting conditions, high similarity of neighboring objects, and variability in scale. We propose to turn object instance matching into a learning task, where image-appearance and geometric relationships between views fruitfully interact. Our approach constructs a Siamese convolutional neural network that learns to match two views of the same object given many candidate image cut-outs. In addition to image features, we propose utilizing location information about the camera and the object to support image evidence via soft geometric constraints. Our method is compared to existing patch matching methods to prove its edge over state-of-the-art. This takes us one step closer to the ultimate goal of city-wide object mapping from street-level imagery to benefit city administration.
29

Zhao, Binglei, and Sergio Della Sala. "Different representations and strategies in mental rotation." Quarterly Journal of Experimental Psychology 71, no. 7 (January 1, 2018): 1574–83. http://dx.doi.org/10.1080/17470218.2017.1342670.

Abstract:
It is still debated whether holistic or piecemeal transformation is applied to carry out mental rotation (MR) as an aspect of visual imagery. It has been recently argued that various mental representations could be flexibly generated to perform MR tasks. To test the hypothesis that imagery ability and types of stimuli interact to affect the format of representation and the choice of strategy in performing MR task, participants, grouped as good or poor imagers, were assessed using four MR tasks, comprising two sets of ‘Standard’ cube figures and two sets of ‘non-Standard’ ones, designed by withdrawing cubes from the Standard ones. Both good and poor imagers performed similarly under the two Standard conditions. Under non-Standard conditions, good imagers performed much faster in non-Standard objects than Standard ones, whereas poor imagers performed much slower in non-Standard objects than Standard ones. These results suggested that (1) individuals did not differ in processing the integrated Standard object, whereas (2) in processing the non-Standard objects, various visual representations and strategies could be applied in MR by diverse individuals: Good imagers were more flexible in generating different visual representations, whereas poor imagers applied different strategies under different task demands.
30

Yokoi, Isao, Atsumichi Tachibana, Takafumi Minamimoto, Naokazu Goda, and Hidehiko Komatsu. "Dependence of behavioral performance on material category in an object-grasping task with monkeys." Journal of Neurophysiology 120, no. 2 (August 1, 2018): 553–63. http://dx.doi.org/10.1152/jn.00748.2017.

Abstract:
Material perception is an essential part of our cognitive function that enables us to properly interact with our complex daily environment. One important aspect of material perception is its multimodal nature. When we see an object, we generally recognize its haptic properties as well as its visual properties. Consequently, one must examine behavior using real objects that are perceived both visually and haptically to fully understand the characteristics of material perception. As a first step, we examined whether there is any difference in the behavioral responses to different materials in monkeys trained to perform an object grasping task in which they saw and grasped rod-shaped real objects made of various materials. We found that the monkeys’ behavior in the grasping task, which was measured based on the success rate and the pulling force, differed depending on the material category. Monkeys easily and correctly grasped objects of some materials, such as metal and glass, but failed to grasp objects of other materials. In particular, monkeys avoided grasping fur-covered objects. The differences in the behavioral responses to the material categories cannot be explained solely based on the degree of familiarity with the different materials. These results shed light on the organization of multimodal representation of materials, where their biological significance is an important factor. In addition, a monkey that avoided touching real fur-covered objects readily touched images of the same objects presented on a CRT display. This suggests that employing real objects is important when studying behaviors related to material perception. NEW & NOTEWORTHY We tested monkeys using an object-grasping task in which monkeys saw and grasped rod-shaped real objects made of various materials. 
We found that the monkeys’ behavior differed dramatically across the material categories and that the behavioral differences could not be explained solely based on the degree of familiarity with the different materials. These results shed light on the organization of multimodal representation of materials, where the biological significance of materials is an important factor.
31

Ellis, R., D. A. Allport, G. W. Humphreys, and J. Collis. "Varieties of Object Constancy." Quarterly Journal of Experimental Psychology Section A 41, no. 4 (November 1989): 775–96. http://dx.doi.org/10.1080/14640748908402393.

Abstract:
Three experiments are described in which two pictures of isolated man-made objects were presented in succession. The subjects’ task was to decide, as rapidly as possible, whether the two pictured objects had the same name. With a stimulus-onset asynchrony (SOA) of above 200 msec two types of facilitation were observed: (1) the response latency was reduced if the pictures showed the same object, even though seen from different viewpoints (object benefit); (2) decision time was reduced further if the pictures showed the same object from the same angle of view (viewpoint benefit). These facilitation effects were not affected by projecting the pictures to different retinal locations. Significant benefits of both types were also obtained when the projected images differed in size. However, in these circumstances there was a small but significant performance decrement in matching two similar views of a single object, but not if the views were different. Conversely, the object benefit, but not the viewpoint benefit, was reduced when the SOA was only 100 msec. The data suggest the existence of (at least) two different visual codes, one non-retinotopic but viewer-centred, the other object-centred.
32

Sabes, Philip N., Boris Breznen, and Richard A. Andersen. "Parietal Representation of Object-Based Saccades." Journal of Neurophysiology 88, no. 4 (October 1, 2002): 1815–29. http://dx.doi.org/10.1152/jn.2002.88.4.1815.

Abstract:
When monkeys make saccadic eye movements to simple visual targets, neurons in the lateral intraparietal area (LIP) display a retinotopic, or eye-centered, coding of the target location. However natural saccadic eye movements are often directed at objects or parts of objects in the visual scene. In this paper we investigate whether LIP represents saccadic eye movements differently when the target is specified as part of a visually displayed object. Monkeys were trained to perform an object-based saccade task that required them to make saccades to previously cued parts of an abstract object after the object reappeared in a new orientation. We recorded single neurons in area LIP of two macaque monkeys and analyzed their activity in the object-based saccade task, as well as two control tasks: a standard memory saccade task and a fixation task with passive object viewing. The majority of LIP neurons that were tuned in the memory saccade task were also tuned in the object-based saccade task. Using a hierarchical generalized linear model analysis, we compared the effects of three different spatial variables on the firing rate: the retinotopic location of the target, the object-fixed location of the target, and the orientation of the object in space. There was no evidence of an explicit object-fixed representation in the activity in LIP during either of the object-based tasks. In other words, no cells had receptive fields that rotated with the object. While some cells showed a modulation of activity due to the location of the target on the object, these variations were small compared to the retinotopic effects. For most cells, firing rates were best accounted for by either the retinotopic direction of the movement, the orientation of the object, or both spatial variables. The preferred direction of these retinotopic and object orientation effects were found to be invariant across tasks. 
On average, the object orientation effects were consistent with the retinotopic coding of potential target locations on the object. This interpretation is supported by the fact that the magnitudes of these two effects were roughly equal in the early portions of the trial, but around the time of the motor response, the retinotopic effects dominated. We conclude that LIP uses the same retinotopic coding of the saccade target whether the target is specified as an absolute point in space or as a location on a moving object.
33

Flittner*, Jonathan, John Luksas, and Joseph L. Gabbard. "Predicting User Performance in Augmented Reality User Interfaces with Image Analysis Algorithms." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 64, no. 1 (December 2020): 2108–12. http://dx.doi.org/10.1177/1071181320641511.

Abstract:
This study determines how to apply existing image analysis measures of visual clutter to augmented reality user interfaces, in conjunction with other factors that may affect performance, such as the percentage of virtual objects relative to real objects in an interface and the type of object a user is searching for (real or virtual). Image analysis measures of clutter were specifically chosen because they can be applied to the complex, naturalistic images commonly experienced while using an AR UI. The end goal of this research is to develop an algorithm capable of predicting user performance for a given AR UI. In this experiment, twelve participants performed a visual search task of locating a target object in an array of objects where some objects were virtual and some were real. Participants completed this task under three different clutter levels (low, medium, high), five different levels of virtual object percentage (0%, 25%, 50%, 75%, 100%), and two types of targets (real, virtual), with repetition. Task performance was measured through response time. Results show significant differences in response time between clutter levels and between virtual object percentages, but not target types. Participants consistently had more difficulty finding objects in more cluttered scenes, where clutter was determined through image analysis methods, and had more difficulty finding objects when the percentage of virtual objects was 50% as opposed to the other scenarios. Response time correlated positively with measures of clutter computed over the combined (virtual and real) arrays, but not with measures of clutter for the individual array components (virtual or real), and correlated positively with the clutter scores of the target objects themselves.
34

Gregorics, Tibor. "Object-oriented backtracking." Acta Universitatis Sapientiae, Informatica 9, no. 2 (December 20, 2017): 144–61. http://dx.doi.org/10.1515/ausi-2017-0010.

Abstract:
Several versions of backtracking are known. In this paper, the focus is on those versions that solve problems whose problem space can be described with a special directed tree. The traversal strategies of this tree will be analyzed, and they will be implemented in object-oriented style. In this way, the traversal is made by an enumerator object which iterates over all the paths (partial solutions) of the tree. Two different “backtracking enumerators” are going to be presented, and the backtracking algorithm will be a linear search over one of these enumerators. Since these algorithms consist of independent objects (the enumerator, the linear search, and the task which must be solved), it is very easy to exchange one component in order to solve another problem. Even the linear search could be substituted with another algorithm pattern, for example, with a counting or a maximum selection if the task had to be solved with a backtracking counting or a backtracking maximum selection.
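The design this abstract describes can be sketched roughly as follows (hypothetical names and a toy task, not Gregorics's actual classes): an enumerator yields the paths of the search tree, and the backtracking algorithm is simply a linear search over that enumeration, so either component can be swapped independently.

```python
def paths(domain, depth, prefix=()):
    # Depth-first enumeration of the tree whose nodes branch over `domain`;
    # yields every complete path (tuple) of length `depth`.
    if len(prefix) == depth:
        yield prefix
        return
    for value in domain:
        yield from paths(domain, depth, prefix + (value,))

def linear_search(enumerator, is_solution):
    # Backtracking expressed as a linear search over the enumerator:
    # return the first enumerated path satisfying the predicate.
    for candidate in enumerator:
        if is_solution(candidate):
            return candidate
    return None

# Toy task: find a 3-tuple over {0, 1, 2} whose entries are all different.
result = linear_search(paths((0, 1, 2), 3), lambda t: len(set(t)) == 3)
```

Because the enumerator and the search are independent objects, `linear_search` could be replaced by a counting or maximum-selection pattern over the same enumerator, as the abstract suggests.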
35

Grill-Spector, Kalanit, and Nancy Kanwisher. "Visual Recognition." Psychological Science 16, no. 2 (February 2005): 152–60. http://dx.doi.org/10.1111/j.0956-7976.2005.00796.x.

Abstract:
What is the sequence of processing steps involved in visual object recognition? We varied the exposure duration of natural images and measured subjects' performance on three different tasks, each designed to tap a different candidate component process of object recognition. For each exposure duration, accuracy was lower and reaction time longer on a within-category identification task (e.g., distinguishing pigeons from other birds) than on a perceptual categorization task (e.g., birds vs. cars). However, strikingly, at each exposure duration, subjects performed just as quickly and accurately on the categorization task as they did on a task requiring only object detection: By the time subjects knew an image contained an object at all, they already knew its category. These findings place powerful constraints on theories of object recognition.
36

GREENWALD, HAL S., and DAVID C. KNILL. "A comparison of visuomotor cue integration strategies for object placement and prehension." Visual Neuroscience 26, no. 1 (January 2009): 63–72. http://dx.doi.org/10.1017/s0952523808080668.

Abstract:
Visual cue integration strategies are known to depend on cue reliability and how rapidly the visual system processes incoming information. We investigated whether these strategies also depend on differences in the information demands for different natural tasks. Using two common goal-oriented tasks, prehension and object placement, we determined whether monocular and binocular information influence estimates of three-dimensional (3D) orientation differently depending on task demands. Both tasks rely on accurate 3D orientation estimates, but 3D position is potentially more important for grasping. Subjects placed an object on or picked up a disc in a virtual environment. On some trials, the monocular cues (aspect ratio and texture compression) and binocular cues (e.g., binocular disparity) suggested slightly different 3D orientations for the disc; these conflicts either were present upon initial stimulus presentation or were introduced after movement initiation, which allowed us to quantify how information from the cues accumulated over time. We analyzed the time-varying orientations of subjects’ fingers in the grasping task and those of the object in the object placement task to quantify how different visual cues influenced motor control. In the first experiment, different subjects performed each task, and those performing the grasping task relied on binocular information more when orienting their hands than those performing the object placement task. When subjects in the second experiment performed both tasks in interleaved sessions, binocular cues were still more influential during grasping than object placement, and the different cue integration strategies observed for each task in isolation were maintained. In both experiments, the temporal analyses showed that subjects processed binocular information faster than monocular information, but task demands did not affect the time course of cue processing.
How one uses visual cues for motor control depends on the task being performed, although how quickly the information is processed appears to be task invariant.
37

Kunimatsu, Jun, Shinya Yamamoto, Kazutaka Maeda, and Okihide Hikosaka. "Environment-based object values learned by local network in the striatum tail." Proceedings of the National Academy of Sciences 118, no. 4 (January 19, 2021): e2013623118. http://dx.doi.org/10.1073/pnas.2013623118.

Abstract:
Basal ganglia contribute to object-value learning, which is critical for survival. The underlying neuronal mechanism is the association of each object with its rewarding outcome. However, object values may change in different environments and we then need to choose different objects accordingly. The mechanism of this environment-based value learning is unknown. To address this question, we created an environment-based value task in which the value of each object was reversed depending on the two scene-environments (X and Y). After experiencing this task repeatedly, the monkeys became able to switch the choice of object when the scene-environment changed unexpectedly. When we blocked the inhibitory input from fast-spiking interneurons (FSIs) to medium spiny projection neurons (MSNs) in the striatum tail by locally injecting IEM-1460, the monkeys became unable to learn scene-selective object values. We then studied the mechanism of the FSI-MSN connection. Before and during this learning, FSIs responded to the scenes selectively, but were insensitive to object values. In contrast, MSNs became able to discriminate the objects (i.e., stronger response to good objects), but this occurred clearly in one of the two scenes (X or Y). This was caused by the scene-selective inhibition by FSI. As a whole, MSNs were divided into two groups that were sensitive to object values in scene X or in scene Y. These data indicate that the local network of striatum tail controls the learning of object values that are selective to the scene-environment. This mechanism may support our flexible switching behavior in various environments.
38

Yoon, Eun Young, Glyn W. Humphreys, Sanjay Kumar, and Pia Rotshtein. "The Neural Selection and Integration of Actions and Objects: An fMRI Study." Journal of Cognitive Neuroscience 24, no. 11 (November 2012): 2268–79. http://dx.doi.org/10.1162/jocn_a_00256.

Abstract:
There is considerable evidence that there are anatomically and functionally distinct pathways for action and object recognition. However, little is known about how information about action and objects is integrated. This study provides fMRI evidence for task-based selection of brain regions associated with action and object processing, and on how the congruency between the action and the object modulates neural response. Participants viewed videos of objects used in congruent or incongruent actions and attended either to the action or the object in a one-back procedure. Attending to the action led to increased responses in a fronto-parietal action-associated network. Attending to the object activated regions within a fronto-inferior temporal network. Stronger responses for congruent action–object clips occurred in bilateral parietal, inferior temporal, and putamen. Distinct cortical and thalamic regions were modulated by congruency in the different tasks. The results suggest that (i) selective attention to action and object information is mediated through separate networks, (ii) object–action congruency evokes responses in action planning regions, and (iii) the selective activation of nuclei within the thalamus provides a mechanism to integrate task goals in relation to the congruency of the perceptual information presented to the observer.
39

Hasson, Christopher J., Tian Shen, and Dagmar Sternad. "Energy margins in dynamic object manipulation." Journal of Neurophysiology 108, no. 5 (September 1, 2012): 1349–65. http://dx.doi.org/10.1152/jn.00019.2012.

Abstract:
Many tasks require humans to manipulate dynamically complex objects and maintain appropriate safety margins, such as placing a cup of coffee on a coaster without spilling. This study examined how humans learn such safety margins and how they are shaped by task constraints and changing variability with improved skill. Eighteen subjects used a manipulandum to transport a shallow virtual cup containing a ball to a target without losing the ball. Half were to complete the cup transit in a comfortable target time of 2 s (a redundant task with infinitely many equivalent solutions), and the other half in minimum time (a nonredundant task with one explicit cost to optimize). The safety margin was defined as the ball energy relative to escape, i.e., as an energy margin. The first hypothesis, that subjects converge to a single strategy in the minimum-time task but choose different strategies in the less constrained target-time task, was not supported. Both groups developed individualized strategies with practice. The second hypothesis, that subjects decrease safety margins in the minimum-time task but increase them in the target-time task, was supported. The third hypothesis, that in both tasks subjects modulate energy margins according to their execution variability, was partially supported. In the target-time group, changes in energy margins correlated positively with changes in execution variability; in the minimum-time group, such a relation was observed only at the end of practice, not across practice. These results show that when learning a redundant object manipulation task, most subjects increase their safety margins and shape their movement strategies in accordance with their changing variability.
40

Ojemann, Jeffrey G., George A. Ojemann, and Ettore Lettich. "Cortical stimulation mapping of language cortex by using a verb generation task: effects of learning and comparison to mapping based on object naming." Journal of Neurosurgery 97, no. 1 (July 2002): 33–38. http://dx.doi.org/10.3171/jns.2002.97.1.0033.

Abstract:
Object. Cortical stimulation mapping has traditionally relied on disruption of object naming to define essential language areas. In this study, the authors reviewed the use of a different language task, verb generation, in mapping language. This task has greater use in brain imaging studies and may be used to test aspects of language different from those of object naming. Methods. In 14 patients, cortical stimulation mapping performed using a verb generation task provided a map of language areas in the frontal and temporoparietal cortices. These verb generation maps often overlapped object naming ones and, in many patients, different areas of cortex were found to be involved in the two functions. In three patients, stimulation mapping was performed during the initial performance of the verb generation task and also during learned performance of the task. Parallel to findings of published neuroimaging studies, a larger area of stimulated cortex led to disruption of verb generation in response to stimulation during novel task performance than during learned performance. Conclusions. Results of cortical stimulation mapping closely resemble those of functional neuroimaging when both implement the verb generation task. The precise map of the temporoparietal language cortex depends on the task used for mapping.
41

Fornia, Luca, Marco Rossi, Marco Rabuffetti, Antonella Leonetti, Guglielmo Puglisi, Luca Viganò, Luciano Simone, et al. "Direct Electrical Stimulation of Premotor Areas: Different Effects on Hand Muscle Activity during Object Manipulation." Cerebral Cortex 30, no. 1 (September 2, 2019): 391–405. http://dx.doi.org/10.1093/cercor/bhz139.

Abstract:
Dorsal and ventral premotor (dPM and vPM) areas are crucial in control of hand muscles during object manipulation, although their respective role in humans is still debated. In patients undergoing awake surgery for brain tumors, we studied the effect of direct electrical stimulation (DES) of the premotor cortex on the execution of a hand manipulation task (HMt). A quantitative analysis of the activity of extrinsic and intrinsic hand muscles recorded during and in absence of DES was performed. Results showed that DES applied to premotor areas significantly impaired HMt execution, affecting task-related muscle activity with specific features related to the stimulated area. Stimulation of dorsal vPM induced both a complete task arrest and clumsy task execution, characterized by general muscle suppression. Stimulation of ventrocaudal dPM evoked a complete task arrest mainly due to a dysfunctional recruitment of hand muscles engaged in task execution. These results suggest that vPM and dPM contribute differently to the control of hand muscles during object manipulation. Stimulation of both areas showed a significant impact on motor output, although the different effects suggest a stronger relationship of dPM with the corticomotoneuronal circuit promoting muscle recruitment and a role for vPM in supporting sensorimotor integration.
42

Zhang, Fan, Jiaxing Luan, Zhichao Xu, and Wei Chen. "DetReco: Object-Text Detection and Recognition Based on Deep Neural Network." Mathematical Problems in Engineering 2020 (July 14, 2020): 1–15. http://dx.doi.org/10.1155/2020/2365076.

Abstract:
Deep learning-based object detection method has been applied in various fields, such as ITS (intelligent transportation systems) and ADS (autonomous driving systems). Meanwhile, text detection and recognition in different scenes have also attracted much attention and research effort. In this article, we propose a new object-text detection and recognition method termed “DetReco” to detect objects and texts and recognize the text contents. The proposed method is composed of object-text detection network and text recognition network. YOLOv3 is used as the algorithm for the object-text detection task and CRNN is employed to deal with the text recognition task. We combine the datasets of general objects and texts together to train the networks. At test time, the detection network detects various objects in an image. Then, the text images are passed to the text recognition network to derive the text contents. The experiments show that the proposed method achieves 78.3 mAP (mean Average Precision) for general objects and 72.8 AP (Average Precision) for texts in regard to detection performance. Furthermore, the proposed method is able to detect and recognize affine transformed or occluded texts with robustness. In addition, for the texts detected around general objects, the text contents can be used as the identifier to distinguish the object.
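The two-stage pipeline the abstract describes — a detection network that finds both general objects and text regions, followed by a recognition network that transcribes the text crops — can be sketched as follows. The detector and recognizer here are stand-ins for YOLOv3 and CRNN; all names and signatures are illustrative assumptions, not the paper's code:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class Detection:
    label: str                       # e.g. "car" or "text"
    box: Tuple[int, int, int, int]   # (x1, y1, x2, y2) in pixels
    text: Optional[str] = None       # transcription, filled in for text regions only

def detect_and_recognize(image, detector: Callable, recognizer: Callable) -> List[Detection]:
    """Run the detector over the whole image, then pass each detected
    text region through the recognizer to obtain its contents."""
    results = []
    for label, box in detector(image):
        det = Detection(label, box)
        if label == "text":
            x1, y1, x2, y2 = box
            crop = [row[x1:x2] for row in image[y1:y2]]  # crop the text region
            det.text = recognizer(crop)
        results.append(det)
    return results
```

The recognized strings can then serve as identifiers to distinguish the general objects detected nearby, as the paper suggests.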
43

Zhang, Xiaoliang, Kehe Wu, Qi Ma, and Zuge Chen. "Research on Object Detection Model Based on Feature Network Optimization." Processes 9, no. 9 (September 14, 2021): 1654. http://dx.doi.org/10.3390/pr9091654.

Abstract:
As the object detection dataset scale is smaller than the image recognition dataset ImageNet scale, transfer learning has become a basic training method for deep learning object detection models, which pre-trains the backbone network of the object detection model on an ImageNet dataset to extract features for detection tasks. However, the classification task of detection focuses on the salient region features of an object, while the location task of detection focuses on the edge features, so there is a certain deviation between the features extracted by a pretrained backbone network and those needed by a localization task. To solve this problem, a decoupled self-attention (DSA) module is proposed for one-stage object-detection models in this paper. A DSA includes two decoupled self-attention branches, so it can extract appropriate features for different tasks. It is located between the Feature Pyramid Networks (FPN) and head networks of subtasks, and used to independently extract global features for different tasks based on FPN-fused features. Although the DSA network module is simple, it can effectively improve the performance of object detection, and can easily be embedded in many detection models. Our experiments are based on the representative one-stage detection model RetinaNet. In the Common Objects in Context (COCO) dataset, when ResNet50 and ResNet101 are used as backbone networks, the detection performances can be increased by 0.4 and 0.5% AP, respectively. When the DSA module and object confidence task are both applied in RetinaNet, the detection performances based on ResNet50 and ResNet101 can be increased by 1.0 and 1.4% AP, respectively. The experiment results show the effectiveness of the DSA module.
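The decoupling idea — two independent global self-attention branches over the same FPN-fused features, one feeding the classification head and one the localization head — can be illustrated with a minimal NumPy sketch (a single-head simplification under my own assumptions, not the paper's implementation):

```python
import numpy as np

def global_attention(feats, w_q, w_k, w_v):
    """Single-head global self-attention over a flattened feature map
    of shape (N positions, C channels)."""
    q, k, v = feats @ w_q, feats @ w_k, feats @ w_v
    scores = q @ k.T / np.sqrt(k.shape[1])
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over positions
    return weights @ v

def decoupled_self_attention(fpn_feats, cls_params, loc_params):
    """Two decoupled branches with separate projection weights: one extracts
    global features for the classification head (salient regions), the other
    for the localization head (edges)."""
    return (global_attention(fpn_feats, *cls_params),
            global_attention(fpn_feats, *loc_params))
```

Because each branch has its own projections, the two subtask heads receive differently weighted global context from identical input features, which is the deviation the DSA module is meant to resolve.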
44

Martinovic, Jasna, Thomas Gruber, and Matthias Müller. "Priming of object categorization within and across levels of specificity." Psihologija 42, no. 1 (2009): 27–46. http://dx.doi.org/10.2298/psi0901027m.

Abstract:
Identification of objects can occur at different levels of specificity. Depending on task and context, an object can be classified at the superordinate level (as an animal), at the basic level (a bird) or at the subordinate level (a sparrow). What are the interactions between these representational levels and do they rely on the same sequential processes that lead to successful object identification? In this electroencephalogram study, a task-switching paradigm (covert naming or living/non-living judgment) was used. Images of objects were repeated either within the same task, or with a switch from a covert naming task to a living or non-living judgment and vice versa. While covert naming accesses entry-level (basic or subordinate), living/non-living judgments rely on superordinate classification. Our behavioral results demonstrated clear priming effects within both tasks. However, asymmetries were found when task-switching had occurred, with facilitation for covert naming but not for categorization. We also found lower accuracy and early-starting and persistent enhancements of event-related potentials (ERPs) for covert naming, indicating that this task was more difficult and involved more intense perceptual and semantic processing. Perceptual priming was marked by consistent reductions of the ERP component L1 for repeated presentations, both with and without task switching. Additional repetition effects were found in early event-related activity between 150-190 ms (N1) when a repeated image had been named at initial presentation. We conclude that differences in N1 indicate task-related changes in the identification process itself. Such enhancements for covert naming again emerge in a later time window associated with depth of semantic processing. Meanwhile, L1 reflects modulations due to implicit memory of objects.
In conclusion, evidence was found for representational overlap; changes in ERP markers started early and revealed cross-task priming at the level of object structure analysis and more intense perceptual and semantic processing for covert naming.
45

Karne, Ms Archana, Mr RadhaKrishna Karne, Mr V. Karthik Kumar, and Dr A. Arunkumar. "Convolutional Neural Networks for Object Detection and Recognition." Journal of Artificial Intelligence, Machine Learning and Neural Network, no. 32 (February 4, 2023): 1–13. http://dx.doi.org/10.55529/jaimlnn.32.1.13.

Abstract:
One of the essential technologies in the fields of target extraction, pattern recognition, and motion measurement is moving object detection. Finding a moving object or a number of moving objects across a series of frames is called object tracking. Basically, object tracking is a difficult task. Unexpected changes in the surroundings, an item's mobility, noise, etc., might make it difficult to follow an object. Different tracking methods have been developed to solve these issues. This paper discusses a number of object tracking and detection approaches and the major methods for identifying objects in images. Recent years have seen impressive advancements in fields like pattern recognition and machine learning, both of which use convolutional neural networks (CNNs). It is mostly caused by graphics processing units' (GPUs) enhanced parallel processing capacity. This article describes many kinds of object classification, object tracking, and object detection techniques. Our results showed that the suggested algorithm can detect moving objects reliably and efficiently in a variety of situations.
46

Rau, Pei-Luen Patrick, Jian Zheng, Lijun Wang, Jingyu Zhao, and Dangxiao Wang. "Haptic and Auditory–Haptic Attentional Blink in Spatial and Object-Based Tasks." Multisensory Research 33, no. 3 (July 1, 2020): 295–312. http://dx.doi.org/10.1163/22134808-20191483.

Abstract:
Dual-task performance depends on both modalities (e.g., vision, audition, haptics) and task types (spatial or object-based), and on the order in which different task types are organized. Previous studies on haptic and especially auditory–haptic attentional blink (AB) are scarce, and the effect of task types and their order has not been fully explored. In this study, 96 participants, divided into four groups of task type combinations, identified auditory or haptic Target 1 (T1) and haptic Target 2 (T2) in rapid series of sounds and forces. We observed a haptic AB (i.e., the accuracy of identifying T2 increased with increasing stimulus onset asynchrony between T1 and T2) in spatial, object-based, and object–spatial tasks, but not in the spatial–object task. Changing the modality of an object-based T1 from haptics to audition eliminated the AB, but a similar haptic-to-auditory change of the modality of a spatial T1 had no effect on the AB (if it exists). Our findings fill a gap in the literature regarding the auditory–haptic AB, and substantiate the importance of modalities, task types and their order, and the interaction between them. These findings were explained by how the cerebral cortex is organized for processing spatial and object-based information in different modalities.
47

Ramos, Shayenne Elizianne, Luis David Solis Murgas, Monica Rodrigues Ferreira, and Carlos Alberto Mourao Junior. "Learning and Working Memory In Mice Under Different Lighting Conditions." Revista Neurociências 21, no. 3 (September 30, 2013): 349–55. http://dx.doi.org/10.34024/rnc.2013.v21.8158.

Abstract:
Objective. This study aimed to investigate the effect of different light/dark cycles and light intensity during behavioral tests of learning and working memory in Swiss mice. Method. Fifty-seven Swiss mice were kept in a housing room in either a 12:12h light/dark cycle (LD), constant light (LL), or constant darkness (DD). The animals were then tested in Lashley maze and Object recognition task under either 500 or 0 lux illumination, resulting in six treatments (LD-500, LD-0, LL-500, LL-0, DD-500, and DD-0). Results. There were no significant differences between the conditions of light/dark, or between tests at 500 and 0 lux. Animals kept in constant darkness and tested at 0 lux (DD-0) had learning and working memory impaired, as demonstrated by slower learning in Lashley III maze, and no object recognition in Object recognition task. Conclusion. Continuous darkness throughout the experiment affected the learning and working memory of Swiss mice.
48

Koivisto, Mika, Simone Grassini, Niina Salminen-Vaparanta, and Antti Revonsuo. "Different Electrophysiological Correlates of Visual Awareness for Detection and Identification." Journal of Cognitive Neuroscience 29, no. 9 (September 2017): 1621–31. http://dx.doi.org/10.1162/jocn_a_01149.

Abstract:
Detecting the presence of an object is a different process than identifying the object as a particular object. This difference has not been taken into account in designing experiments on the neural correlates of consciousness. We compared the electrophysiological correlates of conscious detection and identification directly by measuring ERPs while participants performed either a task only requiring the conscious detection of the stimulus or a higher-level task requiring its conscious identification. Behavioral results showed that, even if the stimulus was consciously detected, it was not necessarily identified. A posterior electrophysiological signature 200–300 msec after stimulus onset was sensitive for conscious detection but not for conscious identification, which correlated with a later widespread activity. Thus, we found behavioral and neural evidence for elementary visual experiences, which are not yet enriched with higher-level knowledge. The search for the mechanisms of consciousness should focus on the early elementary phenomenal experiences to avoid the confounding effects of higher-level processes.
49

Ravinder M., Arunima Jaiswal, and Shivani Gulati. "Deep Learning-Based Object Detection in Diverse Weather Conditions." International Journal of Intelligent Information Technologies 18, no. 1 (January 2022): 1–14. http://dx.doi.org/10.4018/ijiit.296236.

Abstract:
The number of different types of composite images has grown very rapidly in recent years, making object detection an extremely critical task, one that requires a deeper understanding of the deep learning strategies that help detect objects with higher accuracy in less time. This paper briefly describes object detection strategies under various weather conditions, with their advantages and disadvantages. To overcome their limitations, transfer learning has been used, with an implementation based on two pretrained models, YOLO and Resnet50 with denoising, which detect objects under different weather conditions such as sunny, snowy, rainy, and hazy weather, and a comparison has been made between the two models. When objects are detected from images under these different conditions, Resnet50 is identified as the better model.
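The transfer-learning recipe this line of work relies on — reuse a frozen pretrained backbone as a feature extractor and train only a new head for the weather-conditioned task — can be illustrated with a toy NumPy stand-in. The real systems use YOLO and Resnet50; everything below (a random ReLU projection standing in for the backbone, a ridge-regression head) is a simplified assumption for illustration:

```python
import numpy as np

def extract_features(x, backbone_w):
    """Stand-in for a frozen pretrained backbone: a fixed nonlinear
    projection whose weights are never updated."""
    return np.maximum(x @ backbone_w, 0.0)  # ReLU features

def fit_head(feats, labels, n_classes, lam=1e-3):
    """Train only the replacement classification head
    (ridge regression to one-hot targets)."""
    targets = np.eye(n_classes)[labels]
    gram = feats.T @ feats + lam * np.eye(feats.shape[1])
    return np.linalg.solve(gram, feats.T @ targets)

def predict(x, backbone_w, head_w):
    """Class scores; argmax over columns gives the predicted class."""
    return extract_features(x, backbone_w) @ head_w
```

Only `fit_head` learns anything; the backbone weights pass through untouched, which is what makes transfer learning viable when the target dataset (here, weather-degraded images) is small.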
50

Pérez, Javier, Jose-Luis Guardiola, Alberto J. Perez, and Juan-Carlos Perez-Cortes. "Probabilistic Evaluation of 3D Surfaces Using Statistical Shape Models (SSM)." Sensors 20, no. 22 (November 17, 2020): 6554. http://dx.doi.org/10.3390/s20226554.

Abstract:
Inspecting a 3D object whose shape has elastic manufacturing tolerances in order to find defects is a challenging and time-consuming task. This task usually involves humans, either in the specification stage followed by some automatic measurements, or at other points along the process. Even when a detailed inspection is performed, the measurements are limited to a few dimensions instead of a complete examination of the object. In this work, a probabilistic method to evaluate 3D surfaces is presented. This algorithm relies on a training stage to learn the shape of the object by building a statistical shape model. Making use of this model, any inspected object can be evaluated, obtaining a probability that the whole object, or any of its dimensions, is compatible with the model, thus allowing defective objects to be found easily. Results in simulated and real environments are presented and compared with two different alternatives.
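The train-then-evaluate scheme can be sketched as a PCA-based statistical shape model with a Mahalanobis-style compatibility score: fit the mean shape and principal modes of variation from good samples, then score how far a new shape's mode coefficients fall from the training distribution. This is a minimal sketch under my own assumptions, not the authors' implementation:

```python
import numpy as np

def fit_ssm(training_shapes):
    """Fit a statistical shape model: mean shape plus principal modes
    of variation (via SVD of the centered training matrix)."""
    X = np.asarray(training_shapes, dtype=float)
    mean = X.mean(axis=0)
    _, s, modes = np.linalg.svd(X - mean, full_matrices=False)
    var = s ** 2 / (len(X) - 1)   # variance captured by each mode
    return mean, modes, var

def shape_compatibility(shape, mean, modes, var, eps=1e-12):
    """Mahalanobis-style distance of a shape from the model in mode
    space; a large score means the shape is likely defective."""
    b = modes @ (np.asarray(shape, dtype=float) - mean)  # mode coefficients
    return float(np.sum(b ** 2 / (var + eps)))
```

In practice the score would be thresholded (e.g. against a chi-square quantile) to decide whether an inspected object, or a single dimension of it, is compatible with the learned tolerances.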