Journal articles on the topic "A object"

To see the other types of publications on this topic, follow the link: A object.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "A object".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will generate the bibliographic reference to the chosen work automatically, in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when such details are provided in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Calogero, Rachel M. "Objects Don’t Object." Psychological Science 24, no. 3 (January 22, 2013): 312–18. http://dx.doi.org/10.1177/0956797612452574.

2

Bergin, Joseph, Richard Kick, Judith Hromcik, and Kathleen Larson. "The object is objects." ACM SIGCSE Bulletin 34, no. 1 (March 2002): 251. http://dx.doi.org/10.1145/563517.563438.

3

Carey, Susan, and Fei Xu. "Infants' knowledge of objects: beyond object files and object tracking." Cognition 80, no. 1-2 (June 2001): 179–213. http://dx.doi.org/10.1016/s0010-0277(00)00154-2.

4

Remhof, Justin. "Object Constructivism and Unconstructed Objects." Southwest Philosophy Review 30, no. 1 (2014): 177–85. http://dx.doi.org/10.5840/swphilreview201430117.

5

Neubauer, Peter B. "Preoedipal Objects and Object Primacy." Psychoanalytic Study of the Child 40, no. 1 (January 1985): 163–82. http://dx.doi.org/10.1080/00797308.1985.11823027.

6

Vannucci, Manila, Giuliana Mazzoni, Carlo Chiorri, and Lavinia Cioli. "Object imagery and object identification: object imagers are better at identifying spatially-filtered visual objects." Cognitive Processing 9, no. 2 (January 24, 2008): 137–43. http://dx.doi.org/10.1007/s10339-008-0203-5.

7

Kingo, Osman S., and Peter Krøjgaard. "Object manipulation facilitates kind-based object individuation of shape-similar objects." Cognitive Development 26, no. 2 (April 2011): 87–103. http://dx.doi.org/10.1016/j.cogdev.2010.08.009.

8

Newell, F. N. "Searching for Objects in the Visual Periphery: Effects of Orientation." Perception 25, no. 1_suppl (August 1996): 110. http://dx.doi.org/10.1068/v96l1111.

Abstract:
Previous studies have found that the recognition of familiar objects depends on the orientation of the object in the picture plane. Here the time taken to locate rotated objects in the periphery was examined, and eye movements were also recorded. In all experiments, familiar objects were arranged in a clock-face display. In experiment 1, subjects were instructed to locate a match to a central, upright object from amongst a set of randomly rotated objects. The target object was rotated in the frontoparallel plane. Search performance depended on rotation, yielding the classic 'M' function found in recognition tasks. When matching a single object in the periphery, match times depended on the angular deviation between the central and target objects and showed no advantage for the upright (experiment 2). In experiment 3 the central object was shown either upright or rotated by 120° from the upright. The target object was similarly rotated, giving four different match conditions. Distractor objects were aligned with the target object. Search times were faster when the central and target objects were aligned, and also when the central object was rotated and the target was upright. Search times were slower when matching a central upright object to a rotated target object. These results suggest that in simple tasks matching is based on image characteristics. In complex search tasks, however, a contribution from the object's representation is made, which gives an advantage to the canonical, upright view in peripheral vision.
9

Shioiri, Satoshi, Kotaro Hashimoto, Kazumichi Matsumiya, Ichiro Kuriki, and Sheng He. "Extracting the orientation of rotating objects without object identification: Object orientation induction." Journal of Vision 18, no. 9 (September 17, 2018): 17. http://dx.doi.org/10.1167/18.9.17.

10

Ciribuco, Andrea, and Anne O’Connor. "Translating the object, objects in translation." Translation and Interpreting Studies 17, no. 1 (July 5, 2022): 1–13. http://dx.doi.org/10.1075/tis.00052.int.

11

Gao, T., and B. Scholl. "Are objects required for object-files?" Journal of Vision 7, no. 9 (March 30, 2010): 916. http://dx.doi.org/10.1167/7.9.916.

12

Downey, T. Wayne. "Early Object Relations into New Objects." Psychoanalytic Study of the Child 56, no. 1 (January 2001): 39–67. http://dx.doi.org/10.1080/00797308.2001.11800664.

13

Clark, Don. "Object-lessons from self-explanatory objects." Computers & Education 18, no. 1-3 (January 1992): 11–22. http://dx.doi.org/10.1016/0360-1315(92)90031-y.

14

Sari, Marlindia Ike, Anang Sularsa, Rini Handayani, Surya Badrudin Alamsyah, and Siswandi Riki Rizaldi. "3D Scanner Using Infrared for Small Object." JOIV : International Journal on Informatics Visualization 7, no. 3 (September 10, 2023): 935. http://dx.doi.org/10.30630/joiv.7.3.2050.

Abstract:
Three-dimensional scanning is a method for converting a set of distance measurements into a visualization of an object in three-dimensional form. A 3D scanner can be built with various methods and techniques, depending on its purpose and the size of the target object. This research aims to build a prototype 3D scanner for small objects with maximum dimensions of (10 × 7 × 23) cm. The study applied a three-dimensional (3D) scanner using an infrared sensor, with a motor that moves the sensor upward to obtain the Z coordinate. The infrared sensor scans the object, and the result is visualized from its distance measurements; at the same time, a motor rotating the object yields the (X, Y) coordinates. The object was placed at the center of the scanner, and the maximum distance between the object and the infrared sensor was 20 cm. The model uses infrared to measure the object's distance, collects the results for each object height, and visualizes them in the graphical user interface. In this research, we tested the scanner with distances of 7 cm, 10 cm, 15 cm, and 20 cm between the object and the infrared sensor. The best result was 80% accuracy, obtained with the object 10 cm from the sensor, and when the scanner was used on a cylindrical object made of a non-glossy material. The design of this study is not recommended for objects with edge points or made of metal.
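A turntable scanner of this kind reconstructs each surface point from three measurements: the turntable angle, the infrared distance reading, and the current sensor height. A minimal sketch of that conversion (the function name and the assumed sensor-to-axis distance are illustrative, not taken from the paper):

```python
import math

def scan_point(sensor_to_axis_cm, distance_cm, angle_deg, height_cm):
    """Convert one infrared reading into an (x, y, z) point.

    The object sits on a turntable whose rotation axis is the origin.
    The surface radius is the sensor-to-axis distance minus the
    measured sensor-to-surface distance.
    """
    radius = sensor_to_axis_cm - distance_cm
    theta = math.radians(angle_deg)
    return (radius * math.cos(theta), radius * math.sin(theta), height_cm)

# A cylinder of radius 3 cm scanned with the sensor 10 cm from the axis:
# every reading is 7 cm, so every reconstructed point lies 3 cm from the axis.
points = [scan_point(10.0, 7.0, angle, 0.0) for angle in range(0, 360, 90)]
```

Sweeping the angle through a full turn at each motor height then yields one ring of the point cloud per Z level.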
15

Ju, Ginny, and Irving Biederman. "Tests of a Theory of Human Image Understanding: Part I the Perception of Colored and Partial Objects." Proceedings of the Human Factors Society Annual Meeting 30, no. 3 (September 1986): 297–300. http://dx.doi.org/10.1177/154193128603000322.

Abstract:
Object recognition can be conceptualized as a process in which the perceptual input is successfully matched with a stored representation of the object. A theory of pattern recognition, Recognition by Components (RBC), assumes that objects are represented as simple volumetric primitives (e.g., bricks, cylinders, etc.) in specified relations to each other. According to RBC, speeded recognition should be possible from only a few components, as long as those components uniquely identify an object. Neither the full complement of an object's components, nor the object's surface characteristics (e.g., color and texture), need be present for rapid identification. The results from two experiments on the perception of briefly presented objects are offered in support of the sufficiency of the theory. Line drawings were identified about as rapidly and as accurately as full-color slides. Partial objects could be rapidly (though not optimally) identified. Complex objects were more readily identified than simple objects.
16

Chen, Liang-Chia, Thanh-Hung Nguyen, and Shyh-Tsong Lin. "Viewpoint-independent 3D object segmentation for randomly stacked objects using optical object detection." Measurement Science and Technology 26, no. 10 (September 15, 2015): 105202. http://dx.doi.org/10.1088/0957-0233/26/10/105202.

17

Woods, Andrew T., Allison Moore, and Fiona N. Newell. "Canonical Views in Haptic Object Perception." Perception 37, no. 12 (January 1, 2008): 1867–78. http://dx.doi.org/10.1068/p6038.

Abstract:
Previous investigations of visual object recognition have found that some views of both familiar and unfamiliar objects promote more efficient recognition performance than other views. These views are considered canonical and are often the views that present the most information about an object's 3-D structure and features in the image. Although objects can also be efficiently recognised with touch alone, little is known about whether some views promote more efficient recognition than others. This may seem unlikely, given that the object structure and features are readily available to the hand during object exploration. We conducted two experiments to investigate whether canonical views exist in haptic object recognition. In the first, participants were required to position each object in the way that would present the best view for learning the object with touch alone. We found a large degree of consistency of viewpoint position across participants for both familiar and unfamiliar objects. In a second experiment, we found that these consistent, or canonical, views promoted better haptic recognition performance than other random views of the objects. Interestingly, these haptic canonical views were not necessarily the same as the canonical views normally found in visual perception. Nevertheless, our findings provide support for the idea that the visual and tactile systems are functionally equivalent in terms of how objects are represented in memory and subsequently recognised.
18

Abdulhamid, Mohanad, and Adam Olalo. "Implementation of Moving Object Tracker System." Data Science: Journal of Computing and Applied Informatics 5, no. 2 (October 5, 2021): 102–6. http://dx.doi.org/10.32734/jocai.v5.i2-6450.

Abstract:
The field of computer vision is increasingly becoming an active area of research, with tremendous effort being put towards giving computers the capability of sight. As human beings we are able to see, distinguish between different objects based on their unique features, and even trace their movements if they are within our view. For computers to really see, they also need the capability of identifying different objects and tracking them. This paper focuses on identifying an object which the user chooses; the chosen object is differentiated from other objects by comparison of pixel characteristics. The chosen object is then tracked with a bounding box for ease of identifying the object's location. A real-time video feed captured by a web camera is utilized, and it is from the environment visible within the camera view that an object is selected and tracked. The scope of this paper is the development of a software application that achieves real-time object tracking. The software module allows the user to identify the object of interest they wish to track, while the algorithm employed enables noise and size filtering for ease of tracking the object.
19

Abdulhamid, Mohanad. "Implementation of Moving Object Tracker System." Journal of Siberian Federal University. Engineering & Technologies 14, no. 8 (December 2021): 986–95. http://dx.doi.org/10.17516/1999-494x-0367.

Abstract:
The field of computer vision is increasingly becoming an active area of research, with tremendous effort being put towards giving computers the capability of sight. As human beings we are able to see, distinguish between different objects based on their unique features, and even trace their movements if they are within our view. For computers to really see, they also need the capability of identifying different objects and tracking them. This paper focuses on identifying an object which the user chooses; the chosen object is differentiated from other objects by comparison of pixel characteristics. The chosen object is then tracked with a bounding box for ease of identifying the object's location. A real-time video feed captured by a web camera is utilized, and it is from the environment visible within the camera view that an object is selected and tracked. The scope of this paper is the development of a software application that achieves real-time object tracking. The software module allows the user to identify the object of interest they wish to track, while the algorithm employed enables noise and size filtering for ease of tracking the object.
20

Abdulhamid, Mohanad, and Adam Olalo. "Implementation of moving object tracker system." Journal of Engineering Sciences and Innovation 4, no. 4 (December 2, 2019): 427–38. http://dx.doi.org/10.56958/jesi.2019.4.4.427.

Abstract:
The field of computer vision is increasingly becoming an active area of research, with tremendous effort being put towards giving computers the capability of sight. As human beings we are able to see, distinguish between different objects based on their unique features, and even trace their movements if they are within our view. For computers to really see, they also need the capability of identifying different objects and tracking them. This paper focuses on identifying an object which the user chooses; the chosen object is differentiated from other objects by comparison of pixel characteristics. The chosen object is then tracked with a bounding box for ease of identifying the object's location. A real-time video feed captured by a web camera is utilized, and it is from the environment visible within the camera view that an object is selected and tracked. The scope of this paper is the development of a software application that achieves real-time object tracking. The software module allows the user to identify the object of interest they wish to track, while the algorithm employed enables noise and size filtering for ease of tracking the object.
21

Wibowo, Edy, Naily Ulya, Whibatsu Helvantriyudo, Muhammad Maliki Azyumardi, Fata Hafiduddin, Mamat Rokhmat, Ismudiati Puri Handayani, et al. "Misconceptions on the understanding of flying objects in fluids." Momentum: Physics Education Journal 7, no. 2 (June 1, 2023): 178–87. http://dx.doi.org/10.21067/mpej.v7i2.6881.

Abstract:
The concepts of floating, suspended ("flying"), and sinking objects have been studied since junior high school. However, we still often find students' misconceptions about these concepts, especially that of flying objects, even at the university level. This work proposes a clarification of the concept of a flying object in a fluid, so that the condition for a flying object is correctly described. We used eggs, water, and salt solutions to demonstrate sinking, rising, and floating objects in fluids. The results showed that when the density of the object equals the density of the fluid, the object still sits at the bottom of the fluid rather than flying in the middle of the fluid's depth; the object does not touch the bottom of the container, but its height above the bottom is zero. This is because no driving force (Fd = 0) pushes the object upward towards the surface of the fluid to float. When the density of the fluid slightly exceeds the density of the object, the object immediately moves upward to the fluid surface, and the floating phenomenon begins. The greater the difference between the density of the fluid and the density of the object, the faster the object moves towards the surface. The object cannot stay at any position between the bottom and the surface of the fluid; a stable position is reached only when the object reaches the surface of the fluid to float. This work is expected to increase students' understanding of flying objects in fluids.
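The condition the abstract describes, namely that an object rises to float only when the fluid density exceeds the object density, follows from Archimedes' principle and can be sketched as a simple density comparison (a simplified model with illustrative names, not the authors' materials):

```python
def behaviour_in_fluid(object_density, fluid_density):
    """Classify what a fully submerged object does, per Archimedes' principle.

    The net upward force is proportional to (fluid_density - object_density),
    so the object rises to float only when the fluid is denser than the object.
    """
    if fluid_density > object_density:
        return "rises to the surface and floats"
    if fluid_density < object_density:
        return "sinks to the bottom"
    return "neutrally buoyant: no net force pushes it upward"

# Fresh water (~1.00 g/cm^3) vs. an egg (~1.03 g/cm^3): the egg sinks.
# Dissolving enough salt raises the fluid density above the egg's, so it rises.
fresh = behaviour_in_fluid(1.03, 1.00)
salty = behaviour_in_fluid(1.03, 1.10)
```

The equal-density branch matches the paper's point: with no density difference there is no driving force, so the object does not rise off the bottom.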
22

Schinkel, Anders. "The Object of History." Essays in Philosophy 7, no. 2 (2006): 212–31. http://dx.doi.org/10.5840/eip2006726.

Abstract:
The phrase ‘the object of history’ may mean all sorts of things. In this article, a distinction is made between object1, the object of study for historians, and object2, the goal or purpose of the study of history. Within object2, a distinction is made between a goal intrinsic to the study of history (object2in) and an extrinsic goal (object2ex), the latter being what the study of history should contribute to society (or anything else outside itself). The main point of the article, which is illustrated by a discussion of the work of R. G. Collingwood, E. H. Carr, and G. R. Elton, is that in the work of historians and philosophers of history, these kinds of ‘object of history’ are usually (closely) connected. If they are not, something is wrong. That does not mean, however, that historians or even philosophers of history are always aware of these connections. For that reason, the distinctions made in this article provide a useful analytical tool for historians and theorists of history alike.
23

Landau, Barbara, and Ray Jackendoff. "“What” and “where” in spatial language and spatial cognition." Behavioral and Brain Sciences 16, no. 2 (June 1993): 217–38. http://dx.doi.org/10.1017/s0140525x00029733.

Abstract:
Fundamental to spatial knowledge in all species are the representations underlying object recognition, object search, and navigation through space. But what sets humans apart from other species is our ability to express spatial experience through language. This target article explores the language of objects and places, asking what geometric properties are preserved in the representations underlying object nouns and spatial prepositions in English. Evidence from these two aspects of language suggests there are significant differences in the geometric richness with which objects and places are encoded. When an object is named (i.e., with count nouns), detailed geometric properties – principally the object's shape (axes, solid and hollow volumes, surfaces, and parts) – are represented. In contrast, when an object plays the role of either "figure" (located object) or "ground" (reference object) in a locational expression, only very coarse geometric object properties are represented, primarily the main axes. In addition, the spatial functions encoded by spatial prepositions tend to be nonmetric and relatively coarse, for example, "containment," "contact," "relative distance," and "relative direction." These properties are representative of other languages as well. The striking differences in the way language encodes objects versus places lead us to suggest two explanations: First, there is a tendency for languages to level out geometric detail from both object and place representations. Second, a nonlinguistic disparity between the representations of "what" and "where" underlies how language represents objects and places. The language of objects and places converges with and enriches our understanding of corresponding spatial representations.
24

Kumar, Lokesh, Pramod Kumar, and Parag Jain. "Object ID Tracking in Videos: A Review." International Transactions in Mathematical Sciences and Computer 15, no. 02 (2022): 157–66. http://dx.doi.org/10.58517/itmsc.2022.15204.

Abstract:
Assigning and maintaining distinct identifiers for the objects or entities in a video stream is known as object ID tracking. Computer vision, video analytics, and surveillance systems employ this technique extensively for tracking and analyzing object movement over time. Object detection is the first step in the process, wherein computer vision algorithms locate and identify objects within individual video frames. This may entail deep-learning-based object detection models such as Faster R-CNN (a region-based convolutional neural network) or YOLO (You Only Look Once). The tracking algorithm then helps predict each object's position in the next frame based on the object's past trajectory.
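The pipeline outlined here, detect boxes in each frame and then keep each object's ID by matching new detections against existing tracks, is commonly implemented by greedily pairing boxes with the highest intersection-over-union (IoU). A minimal sketch of that matching step (not any specific tracker from the review; names and the 0.3 threshold are illustrative):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def assign_ids(tracks, detections, threshold=0.3):
    """Greedily match detections to tracked boxes; unmatched ones get new IDs.

    `tracks` maps object ID -> last known box; returns the updated mapping.
    """
    updated, next_id = {}, max(tracks, default=-1) + 1
    unmatched = dict(tracks)
    for det in detections:
        best = max(unmatched, key=lambda i: iou(unmatched[i], det), default=None)
        if best is not None and iou(unmatched[best], det) >= threshold:
            updated[best] = det       # same object moved slightly: keep its ID
            del unmatched[best]
        else:
            updated[next_id] = det    # no overlapping track: start a new ID
            next_id += 1
    return updated

tracks = {0: (10, 10, 50, 50)}
tracks = assign_ids(tracks, [(12, 11, 52, 51), (200, 200, 240, 240)])
```

Production trackers typically replace the greedy loop with optimal assignment (e.g., the Hungarian algorithm) and add motion prediction, but the ID-keeping idea is the same.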
25

Korda, Andrea. "Object Lessons in Victorian Education: Text, Object, Image." Journal of Victorian Culture 25, no. 2 (January 8, 2020): 200–222. http://dx.doi.org/10.1093/jvcult/vcz064.

Abstract:
Lessons on common objects, known as 'object lessons', were a customary occurrence in Victorian schoolrooms. This article looks at Victorian object lessons around mid-century as a means of examining the variety of meanings that common objects, and particularly manufactured objects, might have held both inside and outside Victorian schoolrooms. While model scripts for object lessons circulated widely and clarified the meaning of common objects in print, the objects themselves had the potential to complicate and challenge these meanings. Drawing primarily on publications by Elizabeth Mayo and the Home and Colonial School Society (established in 1836), this article outlines the theological, industrial and imperialist ways of looking that informed the model object lesson. Yet close study of the objects employed in object lessons – feathers, an object lesson specimen box, and a series of illustrations of animals – demonstrates how full sensory engagement with material objects can disrupt these disciplined ways of looking and learning. The final section of the article describes the decline of object lesson pedagogies once they were established within the official curriculum for England and Wales over the course of the 1880s and 1890s. Increasingly, pictures and nature study came to replace common objects in Victorian schoolrooms, and had their own implications for the ways that schoolchildren were taught to look at and learn from the world around them.
26

Zelinsky, Gregory J., and Gregory L. Murphy. "Synchronizing Visual and Language Processing: An Effect of Object Name Length on Eye Movements." Psychological Science 11, no. 2 (March 2000): 125–31. http://dx.doi.org/10.1111/1467-9280.00227.

Abstract:
Are visual and verbal processing systems functionally independent? Two experiments (one using line drawings of common objects, the other using faces) explored the relationship between the number of syllables in an object's name (one or three) and the visual inspection of that object. The tasks were short-term recognition and visual search. Results indicated more fixations and longer gaze durations on objects having three-syllable names when the task encouraged a verbal encoding of the objects (i.e., recognition). No effects of syllable length on eye movements were found when implicit naming demands were minimal (i.e., visual search). These findings suggest that implicitly naming a pictorial object constrains the oculomotor inspection of that object, and that the visual and verbal encoding of an object are synchronized so that the faster process must wait for the slower to be completed before gaze shifts to another object. Both findings imply a tight coupling between visual and linguistic processing, and highlight the utility of an oculomotor methodology to understand this coupling.
27

Xu, Shu, and Fu Ming Li. "Study of Re-Entrant Lines Modeling Based on Object-Oriented Petri Net." Applied Mechanics and Materials 303-306 (February 2013): 1280–85. http://dx.doi.org/10.4028/www.scientific.net/amm.303-306.1280.

Abstract:
This article puts forward an object-oriented Petri net modeling method, which possesses good encapsulation and modularity compared with current ordinary modeling methods. On the macro level, it divides the re-entrant lines into different object modules according to the technology, so that the complexity of the models is largely reduced through message delivery between objects. On the micro level, it explains the objects' internal operational mechanism; in other words, each object's internal operation cannot be affected by other objects or the environment. Finally, it performs modeling and dynamic analysis taking an LED chip processing flow as an example, showing that a re-entrant lines model based on object-oriented Petri nets possesses good modeling ability.
28

Pulungan, Ali Basrah, and Zhafranul Nafis. "Rancangan Alat Pendeteksi Benda dengan Berdasarkan Warna, Bentuk, dan Ukuran dengan Webcam." JTEIN: Jurnal Teknik Elektro Indonesia 2, no. 1 (February 11, 2021): 49–54. http://dx.doi.org/10.24036/jtein.v2i1.111.

Abstract:
As times change, technology is developing rapidly; one innovation in this development is the use of webcams. A webcam can serve as a sensor for detecting an object through several stages of image processing, and its use can simplify an automation system so that it performs several tasks at once. The authors therefore designed and built an object detector whose measurement parameters are the object's color, shape, and size. This device uses a webcam as its sensing element, a program written in Python to recognize the objects to be detected, and a servo motor to drive the object actuators. The device has been tested and is able to detect objects properly based on predetermined color, shape, and size. It is also able to separate objects that meet the specifications from objects that do not. Object detection using a webcam together with an object-separating actuator can work well.
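Once the image-processing stages have produced an object's colour, shape, and size, the separation step the abstract describes reduces to comparing those attributes against the specification. A sketch of that final check (attribute names, values, and the size tolerance are made up for illustration, not taken from the paper):

```python
def meets_spec(measured, spec, size_tolerance_cm=0.5):
    """Return True when a detected object's colour, shape, and size
    all match the target specification, within a size tolerance."""
    return (measured["color"] == spec["color"]
            and measured["shape"] == spec["shape"]
            and abs(measured["size_cm"] - spec["size_cm"]) <= size_tolerance_cm)

spec = {"color": "red", "shape": "circle", "size_cm": 4.0}
detected = [
    {"color": "red", "shape": "circle", "size_cm": 4.2},   # within tolerance
    {"color": "red", "shape": "square", "size_cm": 4.0},   # wrong shape
    {"color": "blue", "shape": "circle", "size_cm": 4.0},  # wrong colour
]
# Objects passing the check would be routed to the "accepted" bin
# by the servo actuator; the rest are diverted.
accepted = [d for d in detected if meets_spec(d, spec)]
```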
29

Brickman, Arthur S. ""Object"? I object." Psychoanalytic Psychology 26, no. 4 (October 2009): 402–14. http://dx.doi.org/10.1037/a0017715.

30

Clarke, Alex, Philip J. Pell, Charan Ranganath, and Lorraine K. Tyler. "Learning Warps Object Representations in the Ventral Temporal Cortex." Journal of Cognitive Neuroscience 28, no. 7 (July 2016): 1010–23. http://dx.doi.org/10.1162/jocn_a_00951.

Abstract:
The human ventral temporal cortex (VTC) plays a critical role in object recognition. Although it is well established that visual experience shapes VTC object representations, the impact of semantic and contextual learning is unclear. In this study, we tracked changes in representations of novel visual objects that emerged after learning meaningful information about each object. Over multiple training sessions, participants learned to associate semantic features (e.g., “made of wood,” “floats”) and spatial contextual associations (e.g., “found in gardens”) with novel objects. fMRI was used to examine VTC activity for objects before and after learning. Multivariate pattern similarity analyses revealed that, after learning, VTC activity patterns carried information about the learned contextual associations of the objects, such that objects with contextual associations exhibited higher pattern similarity after learning. Furthermore, these learning-induced increases in pattern information about contextual associations were correlated with reductions in pattern information about the object's visual features. In a second experiment, we validated that these contextual effects translated to real-life objects. Our findings demonstrate that visual object representations in VTC are shaped by the knowledge we have about objects and show that object representations can flexibly adapt as a consequence of learning with the changes related to the specific kind of newly acquired information.
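The "pattern similarity" in analyses of this kind is typically the correlation between two voxel activity vectors: higher correlation means the two objects evoke more similar neural patterns. A dependency-free sketch of that comparison (the vectors are synthetic toy values, not the study's data):

```python
def pearson(u, v):
    """Pearson correlation between two equal-length activity patterns."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    sd_u = sum((a - mu) ** 2 for a in u) ** 0.5
    sd_v = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (sd_u * sd_v)

# Two objects sharing a learned context should show more similar
# response patterns than a pair of unrelated objects.
pattern_a = [0.2, 0.9, 0.4, 0.7]
pattern_b = [0.3, 0.8, 0.5, 0.6]   # similar response profile
pattern_c = [0.9, 0.1, 0.8, 0.2]   # opposite response profile
```

Comparing such correlations before and after learning is what lets the authors say the learned associations "warped" the representations.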
31

Khrychov, V., and M. Legenkiy. "About reducing the visibility of complex object on the background of underlying surface." 35, no. 35 (December 29, 2021): 15–26. http://dx.doi.org/10.26565/2311-0872-2021-35-02.

Abstract:
Relevance: Reducing the radar visibility of an object is an important task in the creation of military equipment. Real objects are often located on some underlying surface, which leads to a significant increase in the field scattered by such a system compared with the field scattered by the object alone, without re-reflection from the underlying surface. The development of methods for reducing the reflected field plays an important role among the tasks of reducing radar signature. The purpose of the work is to review existing methods for modeling the scattering of electromagnetic waves on complex-shape objects against the background of the underlying surface, to analyze the level of the reflected field components, to propose methods for reducing the radar visibility of an object, and to carry out a simulation for some object in order to assess the effectiveness of the proposed methods. Materials and methods: The problem of diffraction on a complex-shape object located on the underlying surface is solved. Different components of the scattered field are taken into account: single reflection from different elements of the object's surface (the physical-optics component); one-time re-reflections between different parts of the object; and re-reflection between the object and the underlying surface. In the numerical modeling of the field scattered by an object located on the underlying surface, the underlying surface is treated as a rectangle of finite size. Results: The possibilities of optimizing a model of a complex-shape object in order to reduce its radar visibility are considered, in particular geometric modifications of the object's surface and the use of radio-absorbing materials. To demonstrate the effect of these techniques, simulations were carried out using a technique previously proposed by the authors for determining the field scattered by a complex-shape object located against the background of the underlying surface. Conclusion: Methods of optimizing a model of a complex-shape object to reduce its radar visibility are proposed. For most real objects, the largest contribution to the total reflected field is made by the field reflected from the smooth part of the object and by the re-reflection field between parts of the object and between the object and the underlying surface.
32

Gordon, A. M., G. Westling, K. J. Cole, and R. S. Johansson. "Memory representations underlying motor commands used during manipulation of common and novel objects." Journal of Neurophysiology 69, no. 6 (June 1, 1993): 1789–96. http://dx.doi.org/10.1152/jn.1993.69.6.1789.

Full text of the source
Citation styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
1. While subjects lifted a variety of commonly handled objects of different shapes, weights, and densities, the isometric vertical lifting force opposing the object's weight was recorded from an analog weight scale, which was instrumented with high-stiffness strain gauge transducers. 2. The force output was scaled differently for the various objects from the first lift, before sensory information related to the object's weight was available. The force output was successfully specified from information in memory related to the weight of common objects, because only small changes in the force-rate profiles occurred across 10 consecutive lifts. This information was retrieved during a process related to visual identification of the target object. 3. The amount of practice necessary to appropriately scale the vertical lifting and grip (pinch) force was also studied when novel objects (equipped with force transducers at the grip surfaces) of different densities were encountered. The mass of a test object that subjects had not seen previously was adjusted to either 300 or 1,000 g by inserting an appropriate mass in the object's base without altering its appearance. This resulted in either a density that was in the range of most common objects (1.2 kg/l) or a density that was unusually high (4.0 kg/l). 4. Low vertical-lifting and grip-force rates were used initially with the high-density object, as if a lighter object had been expected. However, within the first few trials, the duration of the loading phase (period of isometric force increase before lift-off) was reduced by nearly 50% and the employed force-rate profiles were targeted for the weight of the object.(ABSTRACT TRUNCATED AT 250 WORDS)
33

Kwon, Keehang. "Accessing Objects Locally in Object-Oriented Languages." Journal of Object Technology 4, no. 2 (2005): 151. http://dx.doi.org/10.5381/jot.2005.4.2.a4.

34

Kogashi, Kaen, Yang Wu, Shohei Nobuhara, and Ko Nishino. "Human–object interaction detection with missing objects." Image and Vision Computing 113 (September 2021): 104262. http://dx.doi.org/10.1016/j.imavis.2021.104262.

35

Erlikhman, Gennady, Taissa Lytchenko, Nathan H. Heller, Marvin R. Maechler, and Gideon P. Caplovitz. "Object-based attention generalizes to multisurface objects." Attention, Perception, & Psychophysics 82, no. 4 (January 9, 2020): 1599–612. http://dx.doi.org/10.3758/s13414-019-01964-5.

36

Sun, Zekun, and Chaz Firestone. "Curious objects: Preattentive processing of object complexity." Journal of Vision 18, no. 10 (September 1, 2018): 1055. http://dx.doi.org/10.1167/18.10.1055.

37

Wong, J. H., A. P. Hillstrom, and Y. C. Chai. "What changes to objects disrupt object constancy?" Journal of Vision 5, no. 8 (September 1, 2005): 1042. http://dx.doi.org/10.1167/5.8.1042.

38

Reiland, Hans. "The object beyond objects and the sacred." Scandinavian Psychoanalytic Review 27, no. 2 (January 2004): 78–86. http://dx.doi.org/10.1080/01062301.2004.10592945.

39

Li, M., and Z. Yang. "Statistics of natural objects and object recognition." Journal of Vision 10, no. 7 (August 12, 2010): 993. http://dx.doi.org/10.1167/10.7.993.

40

HARRISON, VICTORIA S. "Mathematical objects and the object of theology." Religious Studies 53, no. 4 (October 18, 2016): 479–96. http://dx.doi.org/10.1017/s0034412516000238.

Abstract:
This article brings mathematical realism and theological realism into conversation. It outlines a realist ontology that characterizes abstract mathematical objects as inaccessible to the senses, non-spatiotemporal, and acausal. Mathematical realists are challenged to explain how we can know such objects. The article reviews some promising responses to this challenge before considering the view that the object of theology also possesses the three characteristic features of abstract objects, and consequently may be known through the same methods that yield knowledge of mathematical objects.
41

Takahashi, Kohske, and Katsumi Watanabe. "Seeing Objects as Faces Enhances Object Detection." i-Perception 6, no. 5 (September 30, 2015): 204166951560600. http://dx.doi.org/10.1177/2041669515606007.

42

Drozdek, Adam. "Object-Oriented Programming and Representation of Objects." Studies in Logic, Grammar and Rhetoric 40, no. 1 (March 1, 2015): 293–302. http://dx.doi.org/10.1515/slgr-2015-0014.

Abstract:
In this paper, a lesson is drawn from the way class definitions are provided in object-oriented programming. The distinction is introduced between the visible structure given in a class definition and the hidden structure, and then possible connections are indicated between these two structures and the structure of an entity modeled by the class definition.
43

Albert, Elvira, Puri Arenas, Jesús Correas, Samir Genaim, Miguel Gómez-Zamalloa, Germán Puebla, and Guillermo Román-Díez. "Object-sensitive cost analysis for concurrent objects." Software Testing, Verification and Reliability 25, no. 3 (March 4, 2015): 218–71. http://dx.doi.org/10.1002/stvr.1569.

44

Chow, Jason, Thomas Palmeri, and Isabel Gauthier. "Tactile object recognition performance on graspable objects, but not texture-like objects, relates to visual object recognition ability." Journal of Vision 20, no. 11 (October 20, 2020): 188. http://dx.doi.org/10.1167/jov.20.11.188.

45

Pruetz, J. D., and M. A. Bloomsmith. "Comparing Two Manipulable Objects as Enrichment for Captive Chimpanzees." Animal Welfare 1, no. 2 (May 1992): 127–37. http://dx.doi.org/10.1017/s096272860001486x.

Abstract:
This study compared the effectiveness of kraft wrapping paper and rubber toys as enrichment for 22 chimpanzees group-housed in conventional indoor/outdoor runs. Objects were tested separately during 67 hours of data collection using a group scan sampling technique. Paper was used a mean 27 per cent of the available time, while the Kong Toys™ were used a mean 10 per cent of the available time. The degree of object manipulation and object contact was higher with the paper, but the level of social play and solitary play with the object was not differentially affected by the two objects. The objects had differing effects on the subjects' levels of grooming, but affiliation, agonism, inactivity and sexual behaviour did not vary according to the object being used. A gender-by-age interaction was found, with immature males exhibiting the highest levels of solitary play with objects. Object use steadily declined over the first hour of exposure, showing evidence of habituation. Object use when the Kong Toy™ was present declined over the course of the study, but use of the paper remained consistent. Texture, destructibility, portability, complexity and adaptability may be important in determining the object's value as effective enrichment. The destructible wrapping paper was a more worthwhile enrichment object than the indestructible Kong Toy™ for the captive chimpanzees in this study.
46

Liao, Xiaolian, Jun Li, Leyi Li, Caoxi Shangguan, and Shaoyan Huang. "RGBD Salient Object Detection, Based on Specific Object Imaging." Sensors 22, no. 22 (November 19, 2022): 8973. http://dx.doi.org/10.3390/s22228973.

Abstract:
RGBD salient object detection based on convolutional neural networks has achieved rapid development in recent years. However, existing models often focus on detecting salient object edges instead of the objects themselves. Importantly, detecting whole objects displays the complete information of the detection target more intuitively. To address this issue, we propose an RGBD salient object detection method based on specific object imaging, which can quickly capture and process important object-feature information and effectively screen out the salient objects in a scene. The screened target objects include not only the edges of the objects but also their complete feature information, realizing both detection and imaging of the salient objects. We conduct experiments on benchmark datasets and validate with two common metrics, and the results show that our method reduces the error by 0.003 and 0.201 (MAE) on D3Net and JLDCF, respectively. In addition, our method still achieves very good detection and imaging performance even when the training data are greatly reduced.
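The MAE figures quoted in this abstract refer to the standard saliency-map mean absolute error. A minimal sketch of that metric (the function name and toy maps below are illustrative, not from the cited paper):

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a predicted saliency map and the
    ground-truth mask, both assumed to be normalized to [0, 1]."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    return float(np.mean(np.abs(pred - gt)))

# Toy 2x2 "maps": per-pixel errors are 0.1, 0.1, 0.2, 0.2, so MAE is about 0.15.
score = mae([[0.9, 0.1], [0.2, 0.8]], [[1.0, 0.0], [0.0, 1.0]])
print(score)
```

Lower MAE is better; a reduction of 0.003 on an already-strong baseline is a small absolute but nontrivial relative improvement at this scale.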
47

Landau, Barbara, Linda Smith, and Susan Jones. "Object Shape, Object Function, and Object Name." Journal of Memory and Language 38, no. 1 (January 1998): 1–27. http://dx.doi.org/10.1006/jmla.1997.2533.

48

Zhang, Jun, Feiteng Han, Yutong Chun, Kangwei Liu, and Wang Chen. "Detecting Objects from No-Object Regions: A Context-Based Data Augmentation for Object Detection." International Journal of Computational Intelligence Systems 14, no. 1 (2021): 1871. http://dx.doi.org/10.2991/ijcis.d.210622.003.

49

Wang, Binglu, Chenxi Guo, Yang Jin, Haisheng Xia, and Nian Liu. "TransGOP: Transformer-Based Gaze Object Prediction." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 9 (March 24, 2024): 10180–88. http://dx.doi.org/10.1609/aaai.v38i9.28883.

Abstract:
Gaze object prediction aims to predict the location and category of the object that is watched by a human. Previous gaze object prediction works use CNN-based object detectors to predict the object's location. However, we find that Transformer-based object detectors can predict more accurate object location for dense objects in retail scenarios. Moreover, the long-distance modeling capability of the Transformer can help to build relationships between the human head and the gaze object, which is important for the GOP task. To this end, this paper introduces Transformer into the fields of gaze object prediction and proposes an end-to-end Transformer-based gaze object prediction method named TransGOP. Specifically, TransGOP uses an off-the-shelf Transformer-based object detector to detect the location of objects and designs a Transformer-based gaze autoencoder in the gaze regressor to establish long-distance gaze relationships. Moreover, to improve gaze heatmap regression, we propose an object-to-gaze cross-attention mechanism to let the queries of the gaze autoencoder learn the global-memory position knowledge from the object detector. Finally, to make the whole framework end-to-end trained, we propose a Gaze Box loss to jointly optimize the object detector and gaze regressor by enhancing the gaze heatmap energy in the box of the gaze object. Extensive experiments on the GOO-Synth and GOO-Real datasets demonstrate that our TransGOP achieves state-of-the-art performance on all tracks, i.e., object detection, gaze estimation, and gaze object prediction. Our code will be available at https://github.com/chenxi-Guo/TransGOP.git.
50

BOURBAKIS, N., P. YUAN, and P. KAKUMANU. "A GRAPH BASED OBJECT DESCRIPTION AND RECOGNITION METHODOLOGY." International Journal on Artificial Intelligence Tools 17, no. 06 (December 2008): 1161–94. http://dx.doi.org/10.1142/s0218213008004345.

Abstract:
This paper presents a methodology for recognizing 3D objects using the synthesis of 2D views. In particular, the methodology uses wavelets for rearranging the shape of the perceived 2D view of an object to attain a desirable size; local-global (LG) graphs for representing the shape, color, and location of each of the object's image regions obtained by an image segmentation method; and the synthesis of the regions that compose that particular object. The synthesis of the regions is obtained by composing their local graph representations under certain neighborhood criteria. The LG graph representation of the extracted object is compared against a set of LG-based object models stored in a database (DB). The methodology is accurate for recognizing objects existing in the DB and has the capability of "learning" the LG patterns of new objects by associating them with attributes from existing LG patterns in the DB. Note that for each object model stored in the database there are only six views, since all the intermediate views can be generated by appropriately synthesizing these six views. Illustrative examples are also provided.
