Journal articles on the topic "A object"

To see the other types of publications on this topic, follow the link: A object.

Consult the top 50 journal articles for your research on the topic "A object".

Next to each work in the bibliography, an "Add to bibliography" option is available. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are available in the metadata.

Browse journal articles on a wide variety of subject areas and compile your bibliography correctly.

1

Calogero, Rachel M. "Objects Don’t Object." Psychological Science 24, no. 3 (January 22, 2013): 312–18. http://dx.doi.org/10.1177/0956797612452574.

2

Bergin, Joseph, Richard Kick, Judith Hromcik, and Kathleen Larson. "The object is objects." ACM SIGCSE Bulletin 34, no. 1 (March 2002): 251. http://dx.doi.org/10.1145/563517.563438.

3

Carey, Susan, and Fei Xu. "Infants' knowledge of objects: beyond object files and object tracking." Cognition 80, no. 1-2 (June 2001): 179–213. http://dx.doi.org/10.1016/s0010-0277(00)00154-2.

4

Remhof, Justin. "Object Constructivism and Unconstructed Objects." Southwest Philosophy Review 30, no. 1 (2014): 177–85. http://dx.doi.org/10.5840/swphilreview201430117.

5

Neubauer, Peter B. "Preoedipal Objects and Object Primacy." Psychoanalytic Study of the Child 40, no. 1 (January 1985): 163–82. http://dx.doi.org/10.1080/00797308.1985.11823027.

6

Vannucci, Manila, Giuliana Mazzoni, Carlo Chiorri, and Lavinia Cioli. "Object imagery and object identification: object imagers are better at identifying spatially-filtered visual objects." Cognitive Processing 9, no. 2 (January 24, 2008): 137–43. http://dx.doi.org/10.1007/s10339-008-0203-5.

7

Kingo, Osman S., and Peter Krøjgaard. "Object manipulation facilitates kind-based object individuation of shape-similar objects." Cognitive Development 26, no. 2 (April 2011): 87–103. http://dx.doi.org/10.1016/j.cogdev.2010.08.009.

8

Newell, F. N. "Searching for Objects in the Visual Periphery: Effects of Orientation." Perception 25, no. 1_suppl (August 1996): 110. http://dx.doi.org/10.1068/v96l1111.

Abstract:
Previous studies have found that the recognition of familiar objects is dependent on the orientation of the object in the picture plane. Here the time taken to locate rotated objects in the periphery was examined. Eye movements were also recorded. In all experiments, familiar objects were arranged in a clock face display. In experiment 1, subjects were instructed to locate a match to a central, upright object from amongst a set of randomly rotated objects. The target object was rotated in the frontoparallel plane. Search performance was dependent on rotation, yielding the classic ‘M’ function found in recognition tasks. When matching a single object in periphery, match times were dependent on the angular deviations between the central and target objects and showed no advantage for the upright (experiment 2). In experiment 3 the central object was shown in either the upright rotation or rotated by 120° from the upright. The target object was similarly rotated given four different match conditions. Distractor objects were aligned with the target object. Search times were faster when the centre and target object were aligned and also when the centre object was rotated and the target was upright. Search times were slower when matching a central upright object to a rotated target object. These results suggest that in simple tasks matching is based on image characteristics. However, in complex search tasks a contribution from the object's representation is made which gives an advantage to the canonical, upright view in peripheral vision.
9

Shioiri, Satoshi, Kotaro Hashimoto, Kazumichi Matsumiya, Ichiro Kuriki, and Sheng He. "Extracting the orientation of rotating objects without object identification: Object orientation induction." Journal of Vision 18, no. 9 (September 17, 2018): 17. http://dx.doi.org/10.1167/18.9.17.

10

Ciribuco, Andrea, and Anne O’Connor. "Translating the object, objects in translation." Translation and Interpreting Studies 17, no. 1 (July 5, 2022): 1–13. http://dx.doi.org/10.1075/tis.00052.int.

11

Gao, T., and B. Scholl. "Are objects required for object-files?" Journal of Vision 7, no. 9 (March 30, 2010): 916. http://dx.doi.org/10.1167/7.9.916.

12

Downey, T. Wayne. "Early Object Relations into New Objects." Psychoanalytic Study of the Child 56, no. 1 (January 2001): 39–67. http://dx.doi.org/10.1080/00797308.2001.11800664.

13

Clark, Don. "Object-lessons from self-explanatory objects." Computers & Education 18, no. 1-3 (January 1992): 11–22. http://dx.doi.org/10.1016/0360-1315(92)90031-y.

14

Sari, Marlindia Ike, Anang Sularsa, Rini Handayani, Surya Badrudin Alamsyah, and Siswandi Riki Rizaldi. "3D Scanner Using Infrared for Small Object." JOIV: International Journal on Informatics Visualization 7, no. 3 (September 10, 2023): 935. http://dx.doi.org/10.30630/joiv.7.3.2050.

Abstract:
Three-dimensional scanning is a method to convert a set of distance measurements into a visualization of an object in 3-dimensional form. Developing a 3D scanner involves various methods and techniques depending on the scanner's purpose and the size of the target object. This research aims to build a prototype of a 3D scanner for scanning small objects with maximum dimensions of (10 x 7 x 23) cm. The study applied a three-dimensional (3D) scanner using infrared and a motor that moves the infrared sensor upward to obtain the Z-coordinate. The infrared sensor is used to scan an object and visualize the result based on the distances it measures, while the motor that rotates the object provides the (X, Y) coordinates. The object was placed in the center of the scanner, and the maximum distance of the object from the infrared sensor was 20 cm. The model uses infrared to measure the object's distance, collects the results for each object height, and visualizes them in the graphical user interface. In this research, we tested the scanner with distances between the object and the infrared sensor of 7 cm, 10 cm, 15 cm, and 20 cm. The best result was 80% accuracy, obtained with a distance of 10 cm between the object and the infrared sensor. The best result was obtained when the scanner was used on a cylindrical object and an object made of a non-glossy material. The design of this study is not recommended for objects with edge points and metal material.
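The cylindrical scanning geometry described in this abstract (an infrared range sensor, a turntable supplying the rotation angle, and a vertical stage supplying the height) can be illustrated with a small coordinate-conversion sketch. This is a generic illustration, not the authors' code; the sensor-to-axis distance and the sample values are assumptions.

```python
import math

def scan_point_to_xyz(distance_cm, angle_deg, height_cm, sensor_to_axis_cm=10.0):
    """Convert one scan sample to Cartesian coordinates.

    distance_cm: range reported by the infrared sensor
    angle_deg:   turntable rotation angle for this sample
    height_cm:   current height of the sensor on the Z stage
    sensor_to_axis_cm: assumed distance from the sensor to the rotation axis
    """
    # Radius of the measured surface point, taken from the rotation axis.
    radius = sensor_to_axis_cm - distance_cm
    theta = math.radians(angle_deg)
    return (radius * math.cos(theta), radius * math.sin(theta), height_cm)

# Example: one full revolution sampled every 10 degrees at a fixed height.
layer = [scan_point_to_xyz(7.0, a, 2.5) for a in range(0, 360, 10)]
print(layer[0])  # (x, y, z) of the first sample
```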
15

Ju, Ginny, and Irving Biederman. "Tests of a Theory of Human Image Understanding: Part I the Perception of Colored and Partial Objects." Proceedings of the Human Factors Society Annual Meeting 30, no. 3 (September 1986): 297–300. http://dx.doi.org/10.1177/154193128603000322.

Abstract:
Object recognition can be conceptualized as a process in which the perceptual input is successfully matched with a stored representation of the object. A theory of pattern recognition, Recognition by Components (RBC), assumes that objects are represented as simple volumetric primitives (e.g., bricks, cylinders, etc.) in specified relations to each other. According to RBC, speeded recognition should be possible from only a few components, as long as those components uniquely identify an object. Neither the full complement of an object's components, nor the object's surface characteristics (e.g., color and texture), need be present for rapid identification. The results from two experiments on the perception of briefly presented objects are offered in support of the sufficiency of the theory. Line drawings are identified about as rapidly and as accurately as full color slides. Partial objects could be rapidly (though not optimally) identified. Complex objects are more readily identified than simple objects.
16

Chen, Liang-Chia, Thanh-Hung Nguyen, and Shyh-Tsong Lin. "Viewpoint-independent 3D object segmentation for randomly stacked objects using optical object detection." Measurement Science and Technology 26, no. 10 (September 15, 2015): 105202. http://dx.doi.org/10.1088/0957-0233/26/10/105202.

17

Woods, Andrew T., Allison Moore, and Fiona N. Newell. "Canonical Views in Haptic Object Perception." Perception 37, no. 12 (January 1, 2008): 1867–78. http://dx.doi.org/10.1068/p6038.

Abstract:
Previous investigations of visual object recognition have found that some views of both familiar and unfamiliar objects promote more efficient recognition performance than other views. These views are considered as canonical and are often the views that present the most information about an object's 3-D structure and features in the image. Although objects can also be efficiently recognised with touch alone, little is known whether some views promote more efficient recognition than others. This may seem unlikely, given that the object structure and features are readily available to the hand during object exploration. We conducted two experiments to investigate whether canonical views existed in haptic object recognition. In the first, participants were required to position each object in a way that would present the best view for learning the object with touch alone. We found a large degree of consistency of viewpoint position across participants for both familiar and unfamiliar objects. In a second experiment, we found that these consistent, or canonical, views promoted better haptic recognition performance than other random views of the objects. Interestingly, these haptic canonical views were not necessarily the same as the canonical views normally found in visual perception. Nevertheless, our findings provide support for the idea that both the visual and the tactile systems are functionally equivalent in terms of how objects are represented in memory and subsequently recognised.
18

Abdulhamid, Mohanad, and Adam Olalo. "Implementation of Moving Object Tracker System." Data Science: Journal of Computing and Applied Informatics 5, no. 2 (October 5, 2021): 102–6. http://dx.doi.org/10.32734/jocai.v5.i2-6450.

Abstract:
The field of computer vision is increasingly becoming an active area of research with tremendous efforts being put towards giving computers the capability of sight. As human beings we are able to see, distinguish between different objects based on their unique features and even trace their movements if they are within our view. For computers to really see they also need to have the capability of identifying different objects and equally track them. This paper focuses on that aspect of identifying objects which the user chooses; the object chosen is differentiated from other objects by comparison of pixel characteristics. The chosen object is then to be tracked with a bounding box for ease of identification of the object's location. A real time video feed captured by a web camera is to be utilized and it’s from this environment visible within the camera view that an object is to be selected and tracked. The scope of this paper mainly focuses on the development of a software application that will achieve real time object tracking. The software module will allow the user to identify the object of interest someone wishes to track, while the algorithm employed will enable noise and size filtering for ease of tracking of the object.
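As a rough illustration of the pipeline this abstract describes (select an object by its pixel characteristics, filter by size, and track it with a bounding box), the sketch below uses a simple HSV colour range with OpenCV. It is not the authors' implementation; the colour thresholds, minimum area, and camera index are assumptions.

```python
import cv2
import numpy as np

# Assumed HSV range for the object of interest and a minimum blob size
# to filter out noise; in practice both would be chosen by the user.
LOWER_HSV = np.array([100, 150, 50])
UPPER_HSV = np.array([130, 255, 255])
MIN_AREA = 500

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    # OpenCV 4.x returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        largest = max(contours, key=cv2.contourArea)
        if cv2.contourArea(largest) > MIN_AREA:          # size filtering
            x, y, w, h = cv2.boundingRect(largest)       # bounding box
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracker", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```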
19

Abdulhamid, Mohanad. "Implementation of Moving Object Tracker System." Journal of Siberian Federal University. Engineering & Technologies 14, no. 8 (December 2021): 986–95. http://dx.doi.org/10.17516/1999-494x-0367.

Abstract:
The field of computer vision is increasingly becoming an active area of research with tremendous efforts being put towards giving computers the capability of sight. As human beings we are able to see, distinguish between different objects based on their unique features and even trace their movements if they are within our view. For computers to really see they also need to have the capability of identifying different objects and equally track them. This paper focuses on that aspect of identifying objects which the user chooses; the object chosen is differentiated from other objects by comparison of pixel characteristics. The chosen object is then to be tracked with a bounding box for ease of identification of the object's location. A real time video feed captured by a web camera is to be utilized and it's from this environment visible within the camera view that an object is to be selected and tracked. The scope of this paper mainly focuses on the development of a software application that will achieve real time object tracking. The software module will allow the user to identify the object of interest someone wishes to track, while the algorithm employed will enable noise and size filtering for ease of tracking of the object
20

Abdulhamid, Mohanad, and Adam Olalo. "Implementation of moving object tracker system." Journal of Engineering Sciences and Innovation 4, no. 4 (December 2, 2019): 427–38. http://dx.doi.org/10.56958/jesi.2019.4.4.427.

Abstract:
The field of computer vision is increasingly becoming an active area of research with tremendous efforts being put towards giving computers the capability of sight. As human beings we are able to see, distinguish between different objects based on their unique features and even trace their movements if they are within our view. For computers to really see they also need to have the capability of identifying different objects and equally track them. This paper focuses on that aspect of identifying objects which the user chooses; the object chosen is differentiated from other objects by comparison of pixel characteristics. The chosen object is then to be tracked with a bounding box for ease of identification of the object's location. A real time video feed captured by a web camera is to be utilized and it’s from this environment visible within the camera view that an object is to be selected and tracked. The scope of this paper mainly focuses on the development of a software application that will achieve real time object tracking. The software module will allow the user to identify the object of interest someone wishes to track, while the algorithm employed will enable noise and size filtering for ease of tracking of the object.
21

Wibowo, Edy, Naily Ulya, Whibatsu Helvantriyudo, Muhammad Maliki Azyumardi, Fata Hafiduddin, Mamat Rokhmat, Ismudiati Puri Handayani, et al. "Misconceptions on the understanding of flying objects in fluids." Momentum: Physics Education Journal 7, no. 2 (June 1, 2023): 178–87. http://dx.doi.org/10.21067/mpej.v7i2.6881.

Abstract:
The concepts of floating, flying, and sinking objects have been studied since junior high school. However, we still often find students' misconceptions regarding these concepts, especially that of flying objects, even at the university level. This work aims to propose a clarification of the concept of a flying object in a fluid so that the condition for a flying object can be described correctly. We used eggs, water, and salt solutions to demonstrate sinking, rising, and floating objects in the fluids. The results showed that when the density of the object is the same as the density of the fluid, the object remains at the bottom of the fluid rather than flying in the middle of the depth of the fluid. However, the object does not touch the bottom of the container, so the object's height is zero. This is because the object does not experience a driving force (Fd = 0) that pushes it upward towards the surface of the fluid to float. When the density of the fluid slightly exceeds the density of the object, the object immediately moves upward to the fluid surface and the floating phenomenon begins. The greater the difference between the density of the fluid and the density of the object, the faster the object moves towards the surface. The object cannot stay at any position between the bottom and the surface of the fluid; a stable position is reached only when the object reaches the surface of the fluid and floats. This work is expected to increase students' understanding of flying objects in fluids.
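For reference, the condition discussed in this abstract follows from Archimedes' principle. The relation below is a standard textbook identity, not an equation quoted from the paper; the driving force Fd mentioned in the abstract roughly corresponds to the net upward force.

```latex
% Standard Archimedes relation for a fully submerged object of volume V
% (illustrative reference, not taken from the paper):
F_{\text{net}} = \rho_{\text{fluid}} V g - \rho_{\text{object}} V g
               = \left(\rho_{\text{fluid}} - \rho_{\text{object}}\right) V g
% F_net = 0 when the densities are equal (the suspended, "flying" case);
% F_net > 0 when the fluid is denser, so the object rises and floats.
```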
22

Schinkel, Anders. "The Object of History." Essays in Philosophy 7, no. 2 (2006): 212–31. http://dx.doi.org/10.5840/eip2006726.

Abstract:
The phrase ‘the object of history’ may mean all sorts of things. In this article, a distinction is made between object1, the object of study for historians, and object2, the goal or purpose of the study of history. Within object2, a distinction is made between a goal intrinsic to the study of history (object2in) and an extrinsic goal (object2ex), the latter being what the study of history should contribute to society (or anything else outside itself). The main point of the article, which is illustrated by a discussion of the work of R. G. Collingwood, E. H. Carr, and G. R. Elton, is that in the work of historians and philosophers of history, these kinds of ‘object of history’ are usually (closely) connected. If they are not, something is wrong. That does not mean, however, that historians or even philosophers of history are always aware of these connections. For that reason, the distinctions made in this article provide a useful analytical tool for historians and theorists of history alike.
23

Landau, Barbara, and Ray Jackendoff. "'What' and 'where' in spatial language and spatial cognition." Behavioral and Brain Sciences 16, no. 2 (June 1993): 217–38. http://dx.doi.org/10.1017/s0140525x00029733.

Abstract:
Fundamental to spatial knowledge in all species are the representations underlying object recognition, object search, and navigation through space. But what sets humans apart from other species is our ability to express spatial experience through language. This target article explores the language of objects and places, asking what geometric properties are preserved in the representations underlying object nouns and spatial prepositions in English. Evidence from these two aspects of language suggests there are significant differences in the geometric richness with which objects and places are encoded. When an object is named (i.e., with count nouns), detailed geometric properties – principally the object's shape (axes, solid and hollow volumes, surfaces, and parts) – are represented. In contrast, when an object plays the role of either "figure" (located object) or "ground" (reference object) in a locational expression, only very coarse geometric object properties are represented, primarily the main axes. In addition, the spatial functions encoded by spatial prepositions tend to be nonmetric and relatively coarse, for example, "containment," "contact," "relative distance," and "relative direction." These properties are representative of other languages as well. The striking differences in the way language encodes objects versus places lead us to suggest two explanations: First, there is a tendency for languages to level out geometric detail from both object and place representations. Second, a nonlinguistic disparity between the representations of "what" and "where" underlies how language represents objects and places. The language of objects and places converges with and enriches our understanding of corresponding spatial representations.
24

Kumar, Lokesh, Pramod Kumar, and Parag Jain. "Object ID Tracking in Videos: A Review." International Transactions in Mathematical Sciences and Computer 15, no. 02 (2022): 157–66. http://dx.doi.org/10.58517/itmsc.2022.15204.

Abstract:
Assigning and monitoring distinct identifiers to objects or entities in a video stream is known as object ID tracking in videos. For the purpose of tracking and analyzing object movement over time, computer vision, video analytics, and surveillance systems employ this technology extensively. Object detection is the first step in the process, wherein computer vision algorithms locate and identify things within individual video frames. This may entail methods like region-based Convolutional Neural Network (Faster R-CNN) or YOLO (You Only Look Once), which are object detection models based on deep learning. Based on the object's past trajectory, the tracking algorithm assists in predicting the object's position in the frame.
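The final point of this abstract, predicting an object's position in the next frame from its past trajectory, can be illustrated with a constant-velocity extrapolation. The sketch below is a generic example under that assumption, not a method taken from the review.

```python
def predict_next_position(track, steps_ahead=1):
    """Constant-velocity prediction from the two most recent observations.

    track: list of (x, y) centre positions of a tracked object, oldest first.
    Returns the extrapolated (x, y) position `steps_ahead` frames later.
    """
    if len(track) < 2:
        return track[-1]  # not enough history: assume the object stays put
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = x1 - x0, y1 - y0          # per-frame displacement
    return (x1 + vx * steps_ahead, y1 + vy * steps_ahead)

# Example: an object moving right and slightly down.
history = [(100, 50), (104, 52), (108, 54)]
print(predict_next_position(history))  # -> (112, 56)
```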
25

Korda, Andrea. "Object Lessons in Victorian Education: Text, Object, Image." Journal of Victorian Culture 25, no. 2 (January 8, 2020): 200–222. http://dx.doi.org/10.1093/jvcult/vcz064.

Abstract:
Lessons on common objects, known as ‘object lessons’, were a customary occurrence in Victorian schoolrooms. This article looks at Victorian object lessons around mid-century as a means of examining the variety of meanings that common objects, and particularly manufactured objects, might have held both inside and outside Victorian schoolrooms. While model scripts for object lessons circulated widely and clarified the meaning of common objects in print, the objects themselves had the potential to complicate and challenge these meanings. Drawing primarily on publications by Elizabeth Mayo and the Home and Colonial School Society (established in 1836), this article outlines the theological, industrial and imperialist ways of looking that informed the model object lesson. Yet close study of the objects employed in object lessons – feathers, an object lesson specimen box, and a series of illustrations of animals – demonstrates how full sensory engagement with material objects can disrupt these disciplined ways of looking and learning. The final section of the article describes the decline of object lesson pedagogies once they were established within the official curriculum for England and Wales over the course of the 1880s and 1890s. Increasingly, pictures and nature study came to replace common objects in Victorian schoolrooms, and had their own implications for the ways that schoolchildren were taught to look at and learn from the world around them.
26

Zelinsky, Gregory J., and Gregory L. Murphy. "Synchronizing Visual and Language Processing: An Effect of Object Name Length on Eye Movements." Psychological Science 11, no. 2 (March 2000): 125–31. http://dx.doi.org/10.1111/1467-9280.00227.

Abstract:
Are visual and verbal processing systems functionally independent? Two experiments (one using line drawings of common objects, the other using faces) explored the relationship between the number of syllables in an object's name (one or three) and the visual inspection of that object. The tasks were short-term recognition and visual search. Results indicated more fixations and longer gaze durations on objects having three-syllable names when the task encouraged a verbal encoding of the objects (i.e., recognition). No effects of syllable length on eye movements were found when implicit naming demands were minimal (i.e., visual search). These findings suggest that implicitly naming a pictorial object constrains the oculomotor inspection of that object, and that the visual and verbal encoding of an object are synchronized so that the faster process must wait for the slower to be completed before gaze shifts to another object. Both findings imply a tight coupling between visual and linguistic processing, and highlight the utility of an oculomotor methodology to understand this coupling.
27

Xu, Shu, and Fu Ming Li. "Study of Re-Entrant Lines Modeling Based on Object-Oriented Petri Net." Applied Mechanics and Materials 303-306 (February 2013): 1280–85. http://dx.doi.org/10.4028/www.scientific.net/amm.303-306.1280.

Abstract:
This article puts forward an object-oriented Petri net modeling method, which possesses good encapsulation and modularity compared with the current ordinary modeling method. On the macro level, it divides the re-entrant lines into different object modules according to the technology, so that the complexity of the models is largely reduced through message delivery between objects. On the micro level, it explains the objects' internal operational mechanism; in other words, each object's internal operation cannot be affected by other objects or the environment. Finally, it performs modeling and dynamic analysis, taking the LED chip processing flow as an example, showing that a re-entrant lines model based on object-oriented Petri nets possesses good modeling ability.
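To make the encapsulation idea concrete, the sketch below shows a tiny object-oriented Petri-net-style module in Python: each module owns its internal marking, and other modules influence it only by sending tokens to an input place. The class and method names are illustrative assumptions, not the authors' model.

```python
class PetriModule:
    """One encapsulated object module: transitions fire on internal state only,
    and other modules interact solely by sending tokens to an input place."""

    def __init__(self, name, places):
        self.name = name
        self.marking = dict(places)          # place -> token count (hidden state)

    def receive(self, place, tokens=1):      # message passing between objects
        self.marking[place] = self.marking.get(place, 0) + tokens

    def fire(self, inputs, outputs):
        """Fire an internal transition if every input place holds a token."""
        if all(self.marking.get(p, 0) > 0 for p in inputs):
            for p in inputs:
                self.marking[p] -= 1
            for p in outputs:
                self.marking[p] = self.marking.get(p, 0) + 1
            return True
        return False

# Two stations of a re-entrant line exchanging a part (token) via messages.
station_a = PetriModule("A", {"idle": 1, "in": 0, "done": 0})
station_b = PetriModule("B", {"in": 0})
station_a.receive("in")                       # a part arrives at station A
if station_a.fire(["idle", "in"], ["done"]):  # A processes the part internally
    station_b.receive("in")                   # ...then passes it on to B
print(station_a.marking, station_b.marking)
```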
28

Pulungan, Ali Basrah, and Zhafranul Nafis. "Rancangan Alat Pendeteksi Benda dengan Berdasarkan Warna, Bentuk, dan Ukuran dengan Webcam." JTEIN: Jurnal Teknik Elektro Indonesia 2, no. 1 (February 11, 2021): 49–54. http://dx.doi.org/10.24036/jtein.v2i1.111.

Abstract:
Along with the times, technology is developing very rapidly, and one of the innovations in this development is the use of webcams. A webcam can be used as a sensor for detecting an object through several stages of image processing, with the aim of simplifying an automation system so that it can perform several tasks at once. Therefore, the author intends to design and manufacture an object detector with the object's color, shape, and size as measurement parameters. This tool uses a webcam as the sensing sensor, a program written in Python to recognize the objects to be detected, and a servo motor to drive the object actuator. The tool has been tested and is able to detect objects properly based on the predetermined color, shape, and size. It is also able to separate objects that meet the specifications from objects that do not. Object detection using a webcam and an object-separating actuator works well.
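As an illustration of the colour, shape, and size parameters mentioned in this abstract, a contour extracted from a webcam frame can be classified roughly by its vertex count and area with OpenCV, as in the sketch below. The thresholds are assumptions and this is a generic example, not the authors' program.

```python
import cv2

def classify_contour(contour, min_area=300):
    """Return a rough (shape, area) label for one contour, or None if too small."""
    area = cv2.contourArea(contour)
    if area < min_area:                       # size filter
        return None
    # Approximate the contour with a polygon; the vertex count hints at the shape.
    approx = cv2.approxPolyDP(contour, 0.04 * cv2.arcLength(contour, True), True)
    corners = len(approx)
    if corners == 3:
        shape = "triangle"
    elif corners == 4:
        shape = "rectangle"
    else:
        shape = "circle-like"
    return shape, area
```

A colour check (for example with cv2.inRange on an HSV image, as in the tracker sketch further up) would typically be applied before the contour is classified.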
29

Brickman, Arthur S. "'Object'? I object." Psychoanalytic Psychology 26, no. 4 (October 2009): 402–14. http://dx.doi.org/10.1037/a0017715.

30

Clarke, Alex, Philip J. Pell, Charan Ranganath, and Lorraine K. Tyler. "Learning Warps Object Representations in the Ventral Temporal Cortex." Journal of Cognitive Neuroscience 28, no. 7 (July 2016): 1010–23. http://dx.doi.org/10.1162/jocn_a_00951.

Abstract:
The human ventral temporal cortex (VTC) plays a critical role in object recognition. Although it is well established that visual experience shapes VTC object representations, the impact of semantic and contextual learning is unclear. In this study, we tracked changes in representations of novel visual objects that emerged after learning meaningful information about each object. Over multiple training sessions, participants learned to associate semantic features (e.g., “made of wood,” “floats”) and spatial contextual associations (e.g., “found in gardens”) with novel objects. fMRI was used to examine VTC activity for objects before and after learning. Multivariate pattern similarity analyses revealed that, after learning, VTC activity patterns carried information about the learned contextual associations of the objects, such that objects with contextual associations exhibited higher pattern similarity after learning. Furthermore, these learning-induced increases in pattern information about contextual associations were correlated with reductions in pattern information about the object's visual features. In a second experiment, we validated that these contextual effects translated to real-life objects. Our findings demonstrate that visual object representations in VTC are shaped by the knowledge we have about objects and show that object representations can flexibly adapt as a consequence of learning with the changes related to the specific kind of newly acquired information.
31

Khrychov, V., and M. Legenkiy. "About reducing the visibility of complex object on the background of underlying surface." 35, no. 35 (December 29, 2021): 15–26. http://dx.doi.org/10.26565/2311-0872-2021-35-02.

Abstract:
Relevance: Reducing the radar visibility of an object is an important task in the creation of military equipment. Real objects are often located on some underlying surface, which leads to a significant increase in the field scattered by such a system in comparison with the field scattered by the object alone, without taking re-reflection from the underlying surface into account. The development of methods for reducing the reflected field plays an important role among the tasks of reducing radar signature. The purpose of the work is to consider the existing methods for modeling the scattering of electromagnetic waves on complex-shape objects against the background of the underlying surface, to analyze the level of the reflected field components, to propose methods for reducing the radar visibility of an object, and to carry out a simulation for some object in order to assess the effectiveness of the proposed methods. Materials and methods: The problem of diffraction on a complex-shape object located on the underlying surface is solved. In this case, different components of the scattered field are taken into account: single reflection from different elements of the object's surface (the physical-optics component); one-time re-reflections between different parts of the object; and re-reflection between the object and the underlying surface. In the numerical modeling of the field scattered by an object located on the underlying surface, the underlying surface is considered as a rectangle of finite size. Results: The possibilities of optimizing a model of a complex-shape object in order to reduce its radar visibility are considered. In particular, geometric modifications of the object's surface and the use of radio-absorbing materials are considered. In order to demonstrate the effect of these techniques, simulations have been carried out using a technique previously proposed by the authors for determining the field scattered by a complex-shape object located against the background of the underlying surface. Conclusion: Methods of optimizing a model of a complex-shape object to reduce its radar visibility are proposed. For most real objects, the largest contribution to the total reflected field is made by the field reflected from the smooth part of the object and the re-reflection field between parts of the object and between the object and the underlying surface.
32

Gordon, A. M., G. Westling, K. J. Cole, and R. S. Johansson. "Memory representations underlying motor commands used during manipulation of common and novel objects." Journal of Neurophysiology 69, no. 6 (June 1, 1993): 1789–96. http://dx.doi.org/10.1152/jn.1993.69.6.1789.

Abstract:
1. While subjects lifted a variety of commonly handled objects of different shapes, weights, and densities, the isometric vertical lifting force opposing the object's weight was recorded from an analog weight scale, which was instrumented with high-stiffness strain gauge transducers. 2. The force output was scaled differently for the various objects from the first lift, before sensory information related to the object's weight was available. The force output was successfully specified from information in memory related to the weight of common objects, because only small changes in the force-rate profiles occurred across 10 consecutive lifts. This information was retrieved during a process related to visual identification of the target object. 3. The amount of practice necessary to appropriately scale the vertical lifting and grip (pinch) force was also studied when novel objects (equipped with force transducers at the grip surfaces) of different densities were encountered. The mass of a test object that subjects had not seen previously was adjusted to either 300 or 1,000 g by inserting an appropriate mass in the object's base without altering its appearance. This resulted in either a density that was in the range of most common objects (1.2 kg/l) or a density that was unusually high (4.0 kg/l). 4. Low vertical-lifting and grip-force rates were used initially with the high-density object, as if a lighter object had been expected. However, within the first few trials, the duration of the loading phase (period of isometric force increase before lift-off) was reduced by nearly 50% and the employed force-rate profiles were targeted for the weight of the object.(ABSTRACT TRUNCATED AT 250 WORDS)
33

Kwon, Keehang. "Accessing Objects Locally in Object-Oriented Languages." Journal of Object Technology 4, no. 2 (2005): 151. http://dx.doi.org/10.5381/jot.2005.4.2.a4.

34

Kogashi, Kaen, Yang Wu, Shohei Nobuhara, and Ko Nishino. "Human–object interaction detection with missing objects." Image and Vision Computing 113 (September 2021): 104262. http://dx.doi.org/10.1016/j.imavis.2021.104262.

35

Erlikhman, Gennady, Taissa Lytchenko, Nathan H. Heller, Marvin R. Maechler, and Gideon P. Caplovitz. "Object-based attention generalizes to multisurface objects." Attention, Perception, & Psychophysics 82, no. 4 (January 9, 2020): 1599–612. http://dx.doi.org/10.3758/s13414-019-01964-5.

36

Sun, Zekun, and Chaz Firestone. "Curious objects: Preattentive processing of object complexity." Journal of Vision 18, no. 10 (September 1, 2018): 1055. http://dx.doi.org/10.1167/18.10.1055.

37

Wong, J. H., A. P. Hillstrom, and Y. C. Chai. "What changes to objects disrupt object constancy?" Journal of Vision 5, no. 8 (September 1, 2005): 1042. http://dx.doi.org/10.1167/5.8.1042.

38

Reiland, Hans. "The object beyond objects and the sacred." Scandinavian Psychoanalytic Review 27, no. 2 (January 2004): 78–86. http://dx.doi.org/10.1080/01062301.2004.10592945.

39

Li, M., and Z. Yang. "Statistics of natural objects and object recognition." Journal of Vision 10, no. 7 (August 12, 2010): 993. http://dx.doi.org/10.1167/10.7.993.

40

Harrison, Victoria S. "Mathematical objects and the object of theology." Religious Studies 53, no. 4 (October 18, 2016): 479–96. http://dx.doi.org/10.1017/s0034412516000238.

Abstract:
This article brings mathematical realism and theological realism into conversation. It outlines a realist ontology that characterizes abstract mathematical objects as inaccessible to the senses, non-spatiotemporal, and acausal. Mathematical realists are challenged to explain how we can know such objects. The article reviews some promising responses to this challenge before considering the view that the object of theology also possesses the three characteristic features of abstract objects, and consequently may be known through the same methods that yield knowledge of mathematical objects.
41

Takahashi, Kohske, and Katsumi Watanabe. "Seeing Objects as Faces Enhances Object Detection." i-Perception 6, no. 5 (September 30, 2015): 204166951560600. http://dx.doi.org/10.1177/2041669515606007.

42

Drozdek, Adam. "Object-Oriented Programming and Representation of Objects." Studies in Logic, Grammar and Rhetoric 40, no. 1 (March 1, 2015): 293–302. http://dx.doi.org/10.1515/slgr-2015-0014.

Abstract:
In this paper, a lesson is drawn from the way class definitions are provided in object-oriented programming. The distinction is introduced between the visible structure given in a class definition and the hidden structure, and then possible connections are indicated between these two structures and the structure of an entity modeled by the class definition.
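A minimal Python sketch of the distinction drawn in this abstract, offered as an illustration rather than an example from the paper: the visible structure is the interface the class definition advertises, while the hidden structure is the internal state the object actually keeps.

```python
class BankAccount:
    """Visible structure: deposit(), withdraw(), balance are what the
    class definition promises to the outside world."""

    def __init__(self):
        self._ledger = []                 # hidden structure: internal bookkeeping

    def deposit(self, amount):
        self._ledger.append(amount)

    def withdraw(self, amount):
        self._ledger.append(-amount)

    @property
    def balance(self):
        # The visible 'balance' is derived from the hidden ledger.
        return sum(self._ledger)

account = BankAccount()
account.deposit(100)
account.withdraw(30)
print(account.balance)  # 70
```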
43

Albert, Elvira, Puri Arenas, Jesús Correas, Samir Genaim, Miguel Gómez-Zamalloa, Germán Puebla, and Guillermo Román-Díez. "Object-sensitive cost analysis for concurrent objects." Software Testing, Verification and Reliability 25, no. 3 (March 4, 2015): 218–71. http://dx.doi.org/10.1002/stvr.1569.

44

Chow, Jason, Thomas Palmeri, and Isabel Gauthier. "Tactile object recognition performance on graspable objects, but not texture-like objects, relates to visual object recognition ability." Journal of Vision 20, no. 11 (October 20, 2020): 188. http://dx.doi.org/10.1167/jov.20.11.188.

45

Pruetz, J. D., and M. A. Bloomsmith. "Comparing Two Manipulable Objects as Enrichment for Captive Chimpanzees." Animal Welfare 1, no. 2 (May 1992): 127–37. http://dx.doi.org/10.1017/s096272860001486x.

Abstract:
This study compared the effectiveness of kraft wrapping paper and rubber toys as enrichment for 22 chimpanzees group-housed in conventional indoor/outdoor runs. Objects were tested separately during 67 hours of data collection using a group scan sampling technique. Paper was used a mean 27 per cent of the available time, while the Kong Toys™ were used a mean 10 per cent of the available time. The degree of object manipulation and object contact was higher with the paper, but the level of social play and solitary play with the object was not differentially affected by the two objects. The objects had differing effects on the subjects’ levels of grooming, but affiliation, agonism, inactivity and sexual behaviour did not vary according to the object being used. A gender-by-age interaction was found, with immature males exhibiting the highest levels of solitary play with objects. Object use steadily declined over the first hour of exposure, showing evidence of habituation. Object use when the Kong Toy™ was present declined over the course of the study, but use of the paper remained consistent. Texture, destructibility, portability, complexity and adaptability may be important in determining the object's value as effective enrichment. The destructible wrapping paper was a more worthwhile enrichment object than the indestructible Kong Toy™ for the captive chimpanzees in this study.
46

Liao, Xiaolian, Jun Li, Leyi Li, Caoxi Shangguan, and Shaoyan Huang. "RGBD Salient Object Detection, Based on Specific Object Imaging." Sensors 22, no. 22 (November 19, 2022): 8973. http://dx.doi.org/10.3390/s22228973.

Abstract:
RGBD salient object detection, based on the convolutional neural network, has achieved rapid development in recent years. However, existing models often focus on detecting salient object edges, instead of objects. Importantly, detecting objects can more intuitively display the complete information of the detection target. To take care of this issue, we propose a RGBD salient object detection method, based on specific object imaging, which can quickly capture and process important information on object features, and effectively screen out the salient objects in the scene. The screened target objects include not only the edge of the object, but also the complete feature information of the object, which realizes the detection and imaging of the salient objects. We conduct experiments on benchmark datasets and validate with two common metrics, and the results show that our method reduces the error by 0.003 and 0.201 (MAE) on D3Net and JLDCF, respectively. In addition, our method can still achieve a very good detection and imaging performance in the case of the greatly reduced training data.
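The MAE figures quoted in this abstract refer to the usual mean absolute error between a predicted saliency map and its ground truth. A minimal NumPy sketch of that metric (generic, not the authors' evaluation code) is:

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a predicted saliency map and its ground truth.
    Both arrays are expected to have the same shape, with values in [0, 1]."""
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    return float(np.mean(np.abs(pred - gt)))

# Example with two random maps of the same size.
rng = np.random.default_rng(0)
print(mae(rng.random((64, 64)), rng.random((64, 64))))
```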
47

Landau, Barbara, Linda Smith, and Susan Jones. "Object Shape, Object Function, and Object Name." Journal of Memory and Language 38, no. 1 (January 1998): 1–27. http://dx.doi.org/10.1006/jmla.1997.2533.

48

Zhang, Jun, Feiteng Han, Yutong Chun, Kangwei Liu, and Wang Chen. "Detecting Objects from No-Object Regions: A Context-Based Data Augmentation for Object Detection." International Journal of Computational Intelligence Systems 14, no. 1 (2021): 1871. http://dx.doi.org/10.2991/ijcis.d.210622.003.

49

Wang, Binglu, Chenxi Guo, Yang Jin, Haisheng Xia, and Nian Liu. "TransGOP: Transformer-Based Gaze Object Prediction." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 9 (March 24, 2024): 10180–88. http://dx.doi.org/10.1609/aaai.v38i9.28883.

Abstract:
Gaze object prediction aims to predict the location and category of the object that is watched by a human. Previous gaze object prediction works use CNN-based object detectors to predict the object's location. However, we find that Transformer-based object detectors can predict more accurate object location for dense objects in retail scenarios. Moreover, the long-distance modeling capability of the Transformer can help to build relationships between the human head and the gaze object, which is important for the GOP task. To this end, this paper introduces Transformer into the fields of gaze object prediction and proposes an end-to-end Transformer-based gaze object prediction method named TransGOP. Specifically, TransGOP uses an off-the-shelf Transformer-based object detector to detect the location of objects and designs a Transformer-based gaze autoencoder in the gaze regressor to establish long-distance gaze relationships. Moreover, to improve gaze heatmap regression, we propose an object-to-gaze cross-attention mechanism to let the queries of the gaze autoencoder learn the global-memory position knowledge from the object detector. Finally, to make the whole framework end-to-end trained, we propose a Gaze Box loss to jointly optimize the object detector and gaze regressor by enhancing the gaze heatmap energy in the box of the gaze object. Extensive experiments on the GOO-Synth and GOO-Real datasets demonstrate that our TransGOP achieves state-of-the-art performance on all tracks, i.e., object detection, gaze estimation, and gaze object prediction. Our code will be available at https://github.com/chenxi-Guo/TransGOP.git.
50

Bourbakis, N., P. Yuan, and P. Kakumanu. "A Graph Based Object Description and Recognition Methodology." International Journal on Artificial Intelligence Tools 17, no. 06 (December 2008): 1161–94. http://dx.doi.org/10.1142/s0218213008004345.

Abstract:
This paper presents a methodology for recognizing 3D objects using synthesis of 2D views. In particular, the methodology uses wavelets for rearranging the shape of the perceived 2D view of an object for attaining a desirable size, local-global (LG) graphs for representing the shape, color and location of each image object's region obtained by an image segmentation method, and the synthesis of these regions that compose that particular object. The synthesis of the regions is obtained by composing their local graph representations under certain neighborhood criteria. The LG graph representation of the extracted object is compared against a set of LG-based object-models stored in a Database (DB). The methodology is accurate for recognizing objects existing in the DB and it has the capability of "learning" the LG patterns of new objects by associating them with attributes from existing LG patterns in the DB. Note that for each object-model stored in the database there are only six views, since all the intermediate views can be generated by appropriately synthesizing these six views. Illustrative examples are also provided.