To see the other types of publications on this topic, follow the link: Objects.

Journal articles on the topic 'Objects'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Objects.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Calogero, Rachel M. "Objects Don’t Object." Psychological Science 24, no. 3 (January 22, 2013): 312–18. http://dx.doi.org/10.1177/0956797612452574.

2

Bergin, Joseph, Richard Kick, Judith Hromcik, and Kathleen Larson. "The object is objects." ACM SIGCSE Bulletin 34, no. 1 (March 2002): 251. http://dx.doi.org/10.1145/563517.563438.

3

Ju, Ginny, and Irving Biederman. "Tests of a Theory of Human Image Understanding: Part I the Perception of Colored and Partial Objects." Proceedings of the Human Factors Society Annual Meeting 30, no. 3 (September 1986): 297–300. http://dx.doi.org/10.1177/154193128603000322.

Abstract:
Object recognition can be conceptualized as a process in which the perceptual input is successfully matched with a stored representation of the object. A theory of pattern recognition, Recognition by Components (RBC), assumes that objects are represented as simple volumetric primitives (e.g., bricks, cylinders, etc.) in specified relations to each other. According to RBC, speeded recognition should be possible from only a few components, as long as those components uniquely identify an object. Neither the full complement of an object's components, nor the object's surface characteristics (e.g., color and texture), need be present for rapid identification. The results from two experiments on the perception of briefly presented objects are offered in support of the sufficiency of the theory. Line drawings are identified about as rapidly and as accurately as full-color slides. Partial objects could be rapidly (though not optimally) identified. Complex objects are more readily identified than simple objects.
4

Remhof, Justin. "Object Constructivism and Unconstructed Objects." Southwest Philosophy Review 30, no. 1 (2014): 177–85. http://dx.doi.org/10.5840/swphilreview201430117.

5

Neubauer, Peter B. "Preoedipal Objects and Object Primacy." Psychoanalytic Study of the Child 40, no. 1 (January 1985): 163–82. http://dx.doi.org/10.1080/00797308.1985.11823027.

6

Wang, Chao, Xuehe Zhang, Xizhe Zang, Yubin Liu, Guanwen Ding, Wenxin Yin, and Jie Zhao. "Feature Sensing and Robotic Grasping of Objects with Uncertain Information: A Review." Sensors 20, no. 13 (July 2, 2020): 3707. http://dx.doi.org/10.3390/s20133707.

Abstract:
As intelligent robots find more applications, their task objects become more varied. However, handling unfamiliar objects remains a challenge for a robot. We review recent work on the feature sensing and robotic grasping of objects with uncertain information. In particular, we focus on how the robot perceives the features of an object so as to reduce its uncertainty, and how the robot completes object grasping through learning-based approaches when the traditional approach fails. The uncertain information is classified into geometric information and physical information. Based on the type of uncertain information, objects are further classified into three categories: geometric-uncertain objects, physical-uncertain objects, and unknown objects. Furthermore, approaches to the feature sensing and robotic grasping of these objects are presented based on the characteristics of each type of object. Finally, we summarize the reviewed approaches and identify some interesting issues for future investigation. We find that an object's features, such as material and compactness, are difficult to sense, and that grasping approaches based on learning networks play a more important role as the unknown degree of the task object increases.
7

Azaad, Shaheed, and Simon M. Laham. "Pixel asymmetry predicts between-object differences in the object-based compatibility effect." Quarterly Journal of Experimental Psychology 73, no. 12 (August 16, 2020): 2376–88. http://dx.doi.org/10.1177/1747021820947374.

Abstract:
When participants make left/right responses to unimanually graspable objects, response times (RTs) are faster when the responding hand is aligned with the viewed object's handle. This object-based compatibility effect (CE) is often attributed to motor activation elicited by the object's afforded grasp. However, some evidence suggests that the object-based CE is an example of spatial CEs, or Simon effects, elicited by the protruding nature of objects' handles. Moreover, recent work shows that the way in which objects are centred on-screen might attenuate or reverse CEs, perhaps due to differences in pixel asymmetry (the proportion of pixels either side of fixation) between centralities. In this study, we tested whether pixel asymmetry also contributes to between-object variation in object-based CEs. In Experiment 1 (N = 34), we found that between-object differences in asymmetry predicted object-based CEs, such that objects with a greater proportion of pixels to the handle-congruent side of fixation produced larger CEs. In Experiment 2 (N = 35), we presented participants with mug (low asymmetry) and frying pan (high asymmetry) images and found that between-object and within-object (due to stimulus centrality) differences in pixel asymmetry interact to moderate CEs. Base-centred stimuli (centred according to the width of the object's base) produced conventional CEs, whereas object-centred stimuli (centred according to the object's total width) produced negative CEs (NCEs). Furthermore, the effect of centrality was smaller for mugs than pans, indicating an interaction between within-object and between-object differences in pixel asymmetry.
8

Harlastputra, Amario Fausta, Hadi Nasbey, and Haris Suhendar. "YOLOv3 Algorithm to Measure Free Fall Time and Gravity Acceleration." Current STEAM and Education Research 1, no. 2 (December 31, 2023): 65–70. http://dx.doi.org/10.58797/cser.010204.

Abstract:
Computer vision methods are a feasible alternative to sensors in modern physics experiments due to their speed, accuracy, and low cost. The You Only Look Once (YOLO) algorithm is widely used in computer vision because it detects object positions quickly and accurately. This research uses YOLO version 3 (YOLOv3) to compute an object's falling time and gravitational acceleration. Two steps are performed in this study: first, the detection of predefined objects using YOLOv3, and second, the use of the trained YOLOv3 to track the object's coordinates. Based on the object tracking results, the object's falling time can be measured, and the gravitational acceleration is then calculated from the time data. The measured fall time is compared with data from a sensor, and the calculated gravitational acceleration is evaluated for its relative error against 9.78150 m/s², the value of gravitational acceleration in Jakarta. The results show that YOLOv3 can accurately detect objects and measure free-fall motion, with a time measurement error of only 1.1 milliseconds compared to sensor measurements. The error obtained in measuring the Earth's gravity is 0.634%.
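As an illustration of the calculation this abstract describes (not code from the paper), a body dropped from height h satisfies h = ½gt², so g can be recovered from the measured fall time as g = 2h/t². A minimal sketch, assuming a hypothetical drop height and using the 1.1 ms timing error quoted in the abstract:

```python
import math

g_ref = 9.78150   # gravitational acceleration in Jakarta, quoted in the abstract
h = 1.20          # drop height in metres -- an assumed value, not from the paper

# Ideal fall time for that height, from h = 1/2 * g * t^2
t_ideal = math.sqrt(2 * h / g_ref)

# Simulate the 1.1 ms measurement error the abstract reports
t_meas = t_ideal + 0.0011

# Recover g from the measured time and compute the relative error in percent
g_est = 2 * h / t_meas ** 2
rel_err = abs(g_est - g_ref) / g_ref * 100
```

A longer measured time yields a slightly smaller g estimate, which is why the paper reports the error as a small percentage rather than an absolute offset.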
9

Sapkota, Raju P., Shahina Pardhan, and Ian van der Linde. "Change Detection in Visual Short-Term Memory." Experimental Psychology 62, no. 4 (September 2015): 232–39. http://dx.doi.org/10.1027/1618-3169/a000294.

Abstract:
Numerous kinds of visual event challenge our ability to keep track of the objects that populate our visual environment from moment to moment. These include blinks, occlusion, shifting visual attention, and changes to objects' visual and spatial properties over time. These visual events may lead to objects falling out of our visual awareness, but can also lead to unnoticed changes, such as undetected object replacements and positional exchanges. Current visual memory models do not predict which visual changes are likely to be the most difficult to detect. We examine the accuracy with which switches (where two objects exchange locations) and substitutions (where one or two objects are replaced) are detected. We found inferior performance for one-object substitutions versus two-object switches, along with superior performance for two-object substitutions versus two-object switches. Our results are interpreted in terms of object file theory, trade-offs between diffused and localized attention, and net visual change.
10

Hui, Yuk. "On the Soul of Technical Objects: Commentary on Simondon’s ‘Technics and Eschatology’ (1972)." Theory, Culture & Society 35, no. 6 (March 8, 2018): 97–111. http://dx.doi.org/10.1177/0263276418757318.

Abstract:
This article comments on a paper titled ‘Technique et eschatologie: le devenir des objets techniques’ that Gilbert Simondon presented in 1972. For Simondon, eschatology rests on a basic presupposition: the duality between the immortal soul and the corruptible body. The eschatology of technical objects can be seen as the object’s becoming against time. Simondon suggests that in the epoch of artisans, the product through its perfection searches for the ‘immortality of his producer’, while in the industrial epoch standardization becomes the key mover, in the sense that different parts of the object can be replaced. This analysis by Simondon of the relation between technics and eschatology allows for speculation on the soul of technical objects by tracing his earlier works. This conception of the soul, as this article tries to show, allows Simondon to address the alienation of technical objects in juxtaposition to a Marxist critique of alienation.
11

Lo, Wen-Chien, Chung-Cheng Chiu, and Jia-Horng Yang. "Three-Dimensional Object Segmentation and Labeling Algorithm Using Contour and Distance Information." Applied Sciences 12, no. 13 (June 29, 2022): 6602. http://dx.doi.org/10.3390/app12136602.

Abstract:
Object segmentation and object labeling are important techniques in the field of image processing. Because object segmentation techniques developed using two-dimensional images may cause segmentation errors for overlapping objects, this paper proposes a three-dimensional object segmentation and labeling algorithm that combines the segmentation and labeling functions using contour and distance information for static images. The proposed algorithm can segment and label objects without relying on the dynamic information of consecutive images and without obtaining the characteristics of the segmented objects in advance. The algorithm can also effectively segment and label complex overlapping objects and estimate an object's distance and size according to the labeling contour information. In this paper, a self-made image capture system is developed to capture test images, and the actual distances and sizes of the objects are also measured using measuring tools. The measured data are used as a reference for the data estimated by the proposed algorithm. The experimental results show that the proposed algorithm can effectively segment and label complex overlapping objects, obtain the estimated distance and size of each object, and satisfy the detection requirements for objects at long range in outdoor scenes.
12

Kisil, N. V. "THE ESSENCE AND CONTENT OF FORENSIC EXPERT ACTIVITY IN THE SPHERE OF INTELLECTUAL PROPERTY." Theory and Practice of Forensic Science and Criminalistics 15 (November 30, 2016): 346–54. http://dx.doi.org/10.32353/khrife.2015.43.

Abstract:
The research has identified that examination in the sphere of intellectual property establishes factual data on the properties, features, and regularities of the creation and use of the objects specific to this class of forensic examinations: objects of intellectual property rights. The main tasks include: identifying the features of an intellectual property object in the object referred for study; examining the object's conformity with the criteria necessary to grant legal protection (novelty, industrial applicability, etc.); identifying whether the body of features of an intellectual property object has been used; identifying reproduction of a copyrightable object; and calculating the value of property rights in intellectual property objects and the damages incurred through their infringement. The nature of the expert's special knowledge comprises knowledge of the properties and features of intellectual property objects, the regularities of their creation and use, the calculation of their value, incurred damages, and economic transactions regarding these objects, as well as the methods and means used to solve these tasks.
13

Lee, Eun-Seok, and Byeong-Seok Shin. "Vertex Chunk-Based Object Culling Method for Real-Time Rendering in Metaverse." Electronics 12, no. 12 (June 9, 2023): 2601. http://dx.doi.org/10.3390/electronics12122601.

Abstract:
Popular content built on the Metaverse concept allows users to freely place objects in a world space without constraints. To render the various high-resolution objects placed by users in real time, algorithms such as view frustum culling, visibility culling, and occlusion culling exist. These algorithms selectively remove objects outside the camera's view and eliminate objects that are too small to render. However, these methods require additional operations to select objects to cull, which can slow down rendering in a world scene with a massive number of objects. This paper introduces an object-culling technique using vertex chunks to render a massive number of objects in real time. The method compresses the bounding boxes of objects into data units called vertex chunks to reduce the input data for rendering passes, and utilizes GPU parallel processing to quickly restore the data and select culled objects. It redistributes the bottleneck of object validity determination from the GPU to the CPU, allowing massive numbers of objects to be rendered; previously, existing methods performed all object validity checks on the GPU. The technique can therefore efficiently reduce the computation time of previous methods. The experimental results showed a performance improvement of about 15%, with a greater effect when many objects were placed.
14

Clarke, Alex, Philip J. Pell, Charan Ranganath, and Lorraine K. Tyler. "Learning Warps Object Representations in the Ventral Temporal Cortex." Journal of Cognitive Neuroscience 28, no. 7 (July 2016): 1010–23. http://dx.doi.org/10.1162/jocn_a_00951.

Abstract:
The human ventral temporal cortex (VTC) plays a critical role in object recognition. Although it is well established that visual experience shapes VTC object representations, the impact of semantic and contextual learning is unclear. In this study, we tracked changes in representations of novel visual objects that emerged after learning meaningful information about each object. Over multiple training sessions, participants learned to associate semantic features (e.g., “made of wood,” “floats”) and spatial contextual associations (e.g., “found in gardens”) with novel objects. fMRI was used to examine VTC activity for objects before and after learning. Multivariate pattern similarity analyses revealed that, after learning, VTC activity patterns carried information about the learned contextual associations of the objects, such that objects with contextual associations exhibited higher pattern similarity after learning. Furthermore, these learning-induced increases in pattern information about contextual associations were correlated with reductions in pattern information about the object's visual features. In a second experiment, we validated that these contextual effects translated to real-life objects. Our findings demonstrate that visual object representations in VTC are shaped by the knowledge we have about objects, and show that object representations can flexibly adapt as a consequence of learning, with the changes related to the specific kind of newly acquired information.
15

Ревенко, В. Ю., І. І. Сафін, and Ю. В. Лукашук. "INCREASING THE RADAR CONTRAST OF TWO OBJECTS SIMULTANEOUSLY OBSERVED BY THE SHIP RADAR USING THE ENERGY MATRIX OF LOSSES." SHIP POWER PLANTS 43, no. 1 (September 7, 2021): 167–71. http://dx.doi.org/10.31653/smf343.2021.167-171.

Abstract:
The article discusses increasing the radar contrast of two objects simultaneously observed by a ship's radar, using information about the power loss of the electromagnetic wave irradiating these objects after its interaction with their surface or internal structure. It is shown that the scattering or reflection of electromagnetic energy from the surface or from the internal volume of an object is associated with the conductivity of the object's surface or the dielectric constant of a volumetric object whose internal structure consists of scattering and absorbing particles of different sizes, shapes, and dielectric constants. An algorithm is considered for determining the matrix of electromagnetic energy losses during scattering or reflection by objects, which makes it possible to increase the contrast of objects by changing the polarization of the irradiating wave.
Keywords: radar contrast, objects, radar observation, loss matrix, radar basis, scattering matrix, object's own basis.
16

Weng, Li Yuan, Min Li, and Zhen Bang Gong. "On Sonar Image Processing Techniques for Detection and Localization of Underwater Objects." Applied Mechanics and Materials 236-237 (November 2012): 509–14. http://dx.doi.org/10.4028/www.scientific.net/amm.236-237.509.

Abstract:
This paper presents an underwater object detection and localization system based on multi-beam sonar image processing techniques. First, the sonar data flow collected by the multi-beam sonar is processed with a median filter to reduce noise. Second, an improved adaptive thresholding method based on the Otsu method is proposed to extract foreground objects from the sonar image. Finally, the object's contour is calculated by the Moore-Neighbor tracing algorithm to locate the object. Experiments show that the proposed system can detect underwater objects quickly and figure out their positions accurately.
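The Otsu step in this pipeline picks the grey-level threshold that maximizes between-class variance of the image histogram. A minimal NumPy sketch of plain Otsu thresholding (the paper's improved adaptive variant is not reproduced here), applied to a toy bimodal "sonar" image:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance
    for an 8-bit grayscale image (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    cum_prob = np.cumsum(prob)                   # class-0 weight up to level t
    cum_mean = np.cumsum(prob * np.arange(256))  # class-0 mass up to level t
    total_mean = cum_mean[-1]
    best_t, best_var = 0, 0.0
    for t in range(255):
        w0, w1 = cum_prob[t], 1.0 - cum_prob[t]
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t] / w0
        mu1 = (total_mean - cum_mean[t]) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Toy bimodal image: 900 dark background pixels, 100 bright object pixels
img = np.concatenate([np.full(900, 30, np.uint8), np.full(100, 200, np.uint8)])
t = otsu_threshold(img)
mask = img > t   # foreground (object) pixels
```

On this synthetic input the threshold falls between the two modes, so the mask isolates exactly the bright object pixels; real sonar imagery has broader, noisier modes, which is what motivates the paper's median filtering and adaptive refinement.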
17

Gordon, A. M., G. Westling, K. J. Cole, and R. S. Johansson. "Memory representations underlying motor commands used during manipulation of common and novel objects." Journal of Neurophysiology 69, no. 6 (June 1, 1993): 1789–96. http://dx.doi.org/10.1152/jn.1993.69.6.1789.

Abstract:
1. While subjects lifted a variety of commonly handled objects of different shapes, weights, and densities, the isometric vertical lifting force opposing the object's weight was recorded from an analog weight scale, which was instrumented with high-stiffness strain gauge transducers. 2. The force output was scaled differently for the various objects from the first lift, before sensory information related to the object's weight was available. The force output was successfully specified from information in memory related to the weight of common objects, because only small changes in the force-rate profiles occurred across 10 consecutive lifts. This information was retrieved during a process related to visual identification of the target object. 3. The amount of practice necessary to appropriately scale the vertical lifting and grip (pinch) force was also studied when novel objects (equipped with force transducers at the grip surfaces) of different densities were encountered. The mass of a test object that subjects had not seen previously was adjusted to either 300 or 1,000 g by inserting an appropriate mass in the object's base without altering its appearance. This resulted in either a density that was in the range of most common objects (1.2 kg/l) or a density that was unusually high (4.0 kg/l). 4. Low vertical-lifting and grip-force rates were used initially with the high-density object, as if a lighter object had been expected. However, within the first few trials, the duration of the loading phase (period of isometric force increase before lift-off) was reduced by nearly 50% and the employed force-rate profiles were targeted for the weight of the object.(ABSTRACT TRUNCATED AT 250 WORDS)
18

Newell, F. N. "Searching for Objects in the Visual Periphery: Effects of Orientation." Perception 25, no. 1_suppl (August 1996): 110. http://dx.doi.org/10.1068/v96l1111.

Abstract:
Previous studies have found that the recognition of familiar objects is dependent on the orientation of the object in the picture plane. Here, the time taken to locate rotated objects in the periphery was examined, and eye movements were also recorded. In all experiments, familiar objects were arranged in a clock-face display. In Experiment 1, subjects were instructed to locate a match to a central, upright object from amongst a set of randomly rotated objects. The target object was rotated in the frontoparallel plane. Search performance was dependent on rotation, yielding the classic ‘M’ function found in recognition tasks. When matching a single object in the periphery, match times were dependent on the angular deviations between the central and target objects and showed no advantage for the upright (Experiment 2). In Experiment 3, the central object was shown either upright or rotated by 120° from the upright, and the target object was similarly rotated, giving four different match conditions. Distractor objects were aligned with the target object. Search times were faster when the central and target objects were aligned, and also when the central object was rotated and the target was upright. Search times were slower when matching a central upright object to a rotated target object. These results suggest that in simple tasks matching is based on image characteristics, whereas in complex search tasks a contribution from the object's representation is made, which gives an advantage to the canonical, upright view in peripheral vision.
19

Xu, Shu, and Fu Ming Li. "Study of Re-Entrant Lines Modeling Based on Object-Oriented Petri Net." Applied Mechanics and Materials 303-306 (February 2013): 1280–85. http://dx.doi.org/10.4028/www.scientific.net/amm.303-306.1280.

Abstract:
This article puts forward an object-oriented Petri net modeling method, which possesses good encapsulation and modularity compared with current ordinary modeling methods. At the macro level, it divides the re-entrant lines into different object modules according to the technology, so that the complexity of the models is largely reduced through message delivery between objects. At the micro level, it explains the objects' internal operational mechanism; in other words, each object's internal operation cannot be affected by other objects or the environment. Finally, it performs modeling and dynamic analysis, taking the processing flow of LED chips as an example, showing that the re-entrant lines model based on object-oriented Petri nets possesses good modeling ability.
20

Levene, Merrick, Daisy Z. Hu, and Ori Friedman. "The glow of grime: Why cleaning an old object can wash away its value." Judgment and Decision Making 14, no. 5 (September 2019): 565–72. http://dx.doi.org/10.1017/s1930297500004861.

Abstract:
For connoisseurs of antiques and antiquities, cleaning old objects can reduce their value. In five experiments (total N = 1,019), we show that lay people also often judge that old objects are worth less when cleaned, and we test two explanations for why cleaning can reduce object value. In Experiment 1, participants judged that cleaning an old object would reduce its value, but judged that cleaning would not reduce the value of an object made from a rare material. In Experiments 2 and 3 we described the nature, age and origin of the traces that cleaning would remove. Now participants judged that cleaning old historical traces would reduce the object’s value, but cleaning recently acquired traces would not. In Experiment 4, participants judged that the current value of an old object is reduced even when it was cleaned in ancient times. However, participants in Experiment 5 valued objects cleaned in ancient times as much as uncleaned ones, while judging that objects cleaned recently are worth less. Together, our findings suggest that cleaning objects may reduce value by removing valued historical traces, and by changing objects from their historic state. We also outline potential implications for previous studies showing that cleaning reduces the value of objects used by admired celebrities.
21

Jelbert, Sarah A., Rachael Miller, Martina Schiestl, Markus Boeckle, Lucy G. Cheke, Russell D. Gray, Alex H. Taylor, and Nicola S. Clayton. "New Caledonian crows infer the weight of objects from observing their movements in a breeze." Proceedings of the Royal Society B: Biological Sciences 286, no. 1894 (January 9, 2019): 20182332. http://dx.doi.org/10.1098/rspb.2018.2332.

Abstract:
Humans use a variety of cues to infer an object's weight, including how easily objects can be moved. For example, if we observe an object being blown down the street by the wind, we can infer that it is light. Here, we tested whether New Caledonian crows make this type of inference. After training that only one type of object (either light or heavy) was rewarded when dropped into a food dispenser, birds observed pairs of novel objects (one light and one heavy) suspended from strings in front of an electric fan. The fan was either on—creating a breeze which buffeted the light, but not the heavy, object—or off, leaving both objects stationary. In subsequent test trials, birds could drop one, or both, of the novel objects into the food dispenser. Despite having no opportunity to handle these objects prior to testing, birds touched the correct object (light or heavy) first in 73% of experimental trials, and were at chance in control trials. Our results suggest that birds used pre-existing knowledge about the behaviour exhibited by differently weighted objects in the wind to infer their weight, using this information to guide their choices.
22

Hove, Philip, Alison M. Tollner, Martina I. Klein, Michael A. Riley, and Marie-Vee Santana. "Haptic Perception of Whole and Partial Extents of Small Objects." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 46, no. 26 (September 2002): 2193–96. http://dx.doi.org/10.1177/154193120204602619.

Abstract:
Success at using hand-held objects in the absence of vision implies that the haptic perceptual system is capable of registering information that specifies certain properties of the objects, such as object length or orientation. Research has indicated that people are capable of non-visually perceiving a multitude of object properties. Moreover, research has revealed that those haptic perceptions seem to be constrained by an object's distribution of mass (i.e., inertial properties). However, the majority of this research has been done with large hand-held objects. We sought to test if this relation holds with very small objects. We concluded that participants were able to perceive the whole lengths of the small rods and that whole and partial length are perceptually independent. The results lend support to the hypothesis that this form of touch perception (at the small scale) is anchored in the haptic system's sensitivity to the object's resistance to being rotated (i.e., inertia). Human Factors applications of this line of research are discussed.
23

Ciribuco, Andrea, and Anne O’Connor. "Translating the object, objects in translation." Translation and Interpreting Studies 17, no. 1 (July 5, 2022): 1–13. http://dx.doi.org/10.1075/tis.00052.int.

24

Gao, T., and B. Scholl. "Are objects required for object-files?" Journal of Vision 7, no. 9 (March 30, 2010): 916. http://dx.doi.org/10.1167/7.9.916.

25

Downey, T. Wayne. "Early Object Relations into New Objects." Psychoanalytic Study of the Child 56, no. 1 (January 2001): 39–67. http://dx.doi.org/10.1080/00797308.2001.11800664.

26

Clark, Don. "Object-lessons from self-explanatory objects." Computers & Education 18, no. 1-3 (January 1992): 11–22. http://dx.doi.org/10.1016/0360-1315(92)90031-y.

27

Lutsenko, Nickolay A. "Numerical Comparison of Gas Flows through Plane Porous Heat-Evolutional Object with Axisymmetric one when Object's Outlet is Partially Closed." Advanced Materials Research 1040 (September 2014): 529–34. http://dx.doi.org/10.4028/www.scientific.net/amr.1040.529.

Abstract:
Using numerical experiments, the gas flow in a gravity field through a plane porous object with internal heat sources and partial closure of the object's outlet has been investigated and compared with the axisymmetric case. The influence of partially closing the object's outlet on the cooling of plane porous objects with a non-uniform distribution of heat sources has been analyzed by means of computational experiment. It is revealed that the effect of the top cover on the cooling of plane porous objects is qualitatively the same as in axisymmetric objects, but the quantitative differences are significant.
28

Kerimkhan, Bekzhan, Аinur Zhumadillayeva, and Alexander Nedzvedz. "ANALYSIS OF DYNAMICAL CHANGES FROM LARGE SET OF REMOTE SENSING IMAGES." Scientific Journal of Astana IT University, no. 11 (September 30, 2022): 4–13. http://dx.doi.org/10.37943/suet5603.

Abstract:
The basic elements of change in multi-temporal satellite images and their corresponding sets of dynamic objects are formulated and defined; their main characteristics define a dynamic object as an area of motion. Such dependencies are inherited not only between objects but also between their dynamic groups. Accordingly, the concept of dynamic objects in a multi-temporal sequence of satellite images is developed based on a formalization of the processes occurring in a stream of changes. A specific methodology has been developed to select a dynamic object from a dynamic group based on an analysis of the changing characteristics of the object's environment: objects in a group change similarly in separate characteristics, fall within specified ranges, and are combined into dynamic groups. A dynamic group is characterized by the changes of every object in it, and monitoring is performed for such a group. The technique includes six stages: image acquisition and pre-processing, image scene segmentation and selection of regions, image scene analysis for segmented areas, control of compliance with the conditions for behavioral characteristics, and classification of the behavioral line of objects in the region. As a result, it is possible to describe the behavior of the group and of the objects in the group as separate characteristics. Compliance with the conditions for a behavioral characteristic is checked in order to control each object as an element of a dynamic group. Thus, monitoring is carried out as control of the motion of many objects rather than of images.
APA, Harvard, Vancouver, ISO, and other styles
29

Pulungan, Ali Basrah, and Zhafranul Nafis. "Rancangan Alat Pendeteksi Benda dengan Berdasarkan Warna, Bentuk, dan Ukuran dengan Webcam." JTEIN: Jurnal Teknik Elektro Indonesia 2, no. 1 (February 11, 2021): 49–54. http://dx.doi.org/10.24036/jtein.v2i1.111.

Full text
Abstract:
Technology is developing rapidly, and one innovation in this development is the use of webcams: a webcam can serve as a sensor for detecting an object through several stages of image processing. Using a webcam simplifies an automation system so that it can perform several tasks at once. Therefore, the author designed and built an object detector with the object's color, shape, and size as measurement parameters. The device uses a webcam as its sensing element, a program written in Python to recognize the objects to be detected, and a servo motor to drive the object actuators. The device has been tested and is able to detect objects correctly based on predetermined color, shape, and size. It is also able to separate objects that meet the specifications from objects that do not. Object detection using a webcam together with an object-separating actuator works well.
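A minimal sketch of the three checks this abstract describes (color, shape, size). The shape rule (classifying by the vertex count of a polygonal contour approximation) and all names and thresholds are illustrative assumptions, not the paper's actual implementation:

```python
# Illustrative accept/reject decision: an object passes only if its color,
# shape, and pixel area all match the predetermined specification.
def classify_shape(num_vertices):
    """Shape from the vertex count of a polygonal contour approximation."""
    if num_vertices == 3:
        return "triangle"
    if num_vertices == 4:
        return "rectangle"
    return "circle"  # many vertices: treat the contour as round

# Hypothetical specification for the "accept" bin
SPEC = {"color": "red", "shape": "circle", "min_area": 500, "max_area": 2000}

def meets_spec(color, num_vertices, area_px, spec=SPEC):
    return (color == spec["color"]
            and classify_shape(num_vertices) == spec["shape"]
            and spec["min_area"] <= area_px <= spec["max_area"])
```

An object failing any one of the three tests would be routed to the reject actuator.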
APA, Harvard, Vancouver, ISO, and other styles
30

Luria, Roy, and Edward K. Vogel. "Come Together, Right Now: Dynamic Overwriting of an Object's History through Common Fate." Journal of Cognitive Neuroscience 26, no. 8 (August 2014): 1819–28. http://dx.doi.org/10.1162/jocn_a_00584.

Full text
Abstract:
The objects around us constantly move and interact, and the perceptual system needs to monitor on-line these interactions and to update the object's status accordingly. Gestalt grouping principles, such as proximity and common fate, play a fundamental role in how we perceive and group these objects. Here, we investigated situations in which the initial object representation as a separate item was updated by a subsequent Gestalt grouping cue (i.e., proximity or common fate). We used a version of the color change detection paradigm, in which the objects started to move separately, then met and stayed stationary, or moved separately, met, and then continued to move together. We monitored the object representations on-line using the contralateral delay activity (CDA; an ERP component indicative of the number of maintained objects), during their movement, and after the objects disappeared and became working memory representations. The results demonstrated that the objects' representations (as indicated by the CDA amplitude) persisted as being separate, even after a Gestalt proximity cue (when the objects “met” and remained stationary on the same position). Only a strong common fate Gestalt cue (when the objects not just met but also moved together) was able to override the objects' initial separate status, creating an integrated representation. These results challenge the view that Gestalt principles cause reflexive grouping. Instead, the object initial representation plays an important role that can override even powerful grouping cues.
APA, Harvard, Vancouver, ISO, and other styles
31

Tyler, Lorraine K., Shannon Chiu, Jie Zhuang, Billi Randall, Barry J. Devereux, Paul Wright, Alex Clarke, and Kirsten I. Taylor. "Objects and Categories: Feature Statistics and Object Processing in the Ventral Stream." Journal of Cognitive Neuroscience 25, no. 10 (October 2013): 1723–35. http://dx.doi.org/10.1162/jocn_a_00419.

Full text
Abstract:
Recognizing an object involves more than just visual analyses; its meaning must also be decoded. Extensive research has shown that processing the visual properties of objects relies on a hierarchically organized stream in ventral occipitotemporal cortex, with increasingly more complex visual features being coded from posterior to anterior sites culminating in the perirhinal cortex (PRC) in the anteromedial temporal lobe (aMTL). The neurobiological principles of the conceptual analysis of objects remain more controversial. Much research has focused on two neural regions—the fusiform gyrus and aMTL, both of which show semantic category differences, but of different types. fMRI studies show category differentiation in the fusiform gyrus, based on clusters of semantically similar objects, whereas category-specific deficits, specifically for living things, are associated with damage to the aMTL. These category-specific deficits for living things have been attributed to problems in differentiating between highly similar objects, a process that involves the PRC. To determine whether the PRC and the fusiform gyri contribute to different aspects of an object's meaning, with differentiation between confusable objects in the PRC and categorization based on object similarity in the fusiform, we carried out an fMRI study of object processing based on a feature-based model that characterizes the degree of semantic similarity and difference between objects and object categories. Participants saw 388 objects for which feature statistic information was available and named the objects at the basic level while undergoing fMRI scanning. 
After controlling for the effects of visual information, we found that feature statistics that capture similarity between objects formed category clusters in fusiform gyri, such that objects with many shared features (typical of living things) were associated with activity in the lateral fusiform gyri whereas objects with fewer shared features (typical of nonliving things) were associated with activity in the medial fusiform gyri. Significantly, a feature statistic reflecting differentiation between highly similar objects, enabling object-specific representations, was associated with bilateral PRC activity. These results confirm that the statistical characteristics of conceptual object features are coded in the ventral stream, supporting a conceptual feature-based hierarchy, and integrating disparate findings of category responses in fusiform gyri and category deficits in aMTL into a unifying neurocognitive framework.
APA, Harvard, Vancouver, ISO, and other styles
32

Mustile, Magda, Flora Giocondo, Daniele Caligiore, Anna M. Borghi, and Dimitrios Kourtis. "Motor Inhibition to Dangerous Objects: Electrophysiological Evidence for Task-dependent Aversive Affordances." Journal of Cognitive Neuroscience 33, no. 5 (April 1, 2021): 826–39. http://dx.doi.org/10.1162/jocn_a_01690.

Full text
Abstract:
Previous work suggests that perception of an object automatically facilitates actions related to object grasping and manipulation. Recently, the notion of automaticity has been challenged by behavioral studies suggesting that dangerous objects elicit aversive affordances that interfere with encoding of an object's motor properties; however, related EEG studies have provided little support for these claims. We sought EEG evidence that would support the operation of an inhibitory mechanism that interferes with the motor encoding of dangerous objects, and we investigated whether such a mechanism would be modulated by the perceived distance of an object and the goal of a given task. EEGs were recorded from 24 participants who passively perceived dangerous and neutral objects in their peripersonal, boundary, or extrapersonal space and performed either a reachability judgment task or a categorization task. Our results showed that greater attention, reflected in the visual P1 potential, was drawn by dangerous and reachable objects. Crucially, a frontal N2 potential, associated with motor inhibition, was larger for dangerous objects only when participants performed a reachability judgment task. Furthermore, a larger parietal P3b potential for dangerous objects indicated the greater difficulty in linking a dangerous object to the appropriate response, especially when it was located in the participants' extrapersonal space. Taken together, our results show that perception of dangerous objects elicits aversive affordances in a task-dependent way and provides evidence for the operation of a neural mechanism that does not code affordances of dangerous objects automatically, but rather on the basis of contextual information.
APA, Harvard, Vancouver, ISO, and other styles
33

O’Rourke, Michael. "Objects, Objects, Objects (and Some Objections)." Feminist Formations 25, no. 3 (2013): 190–201. http://dx.doi.org/10.1353/ff.2013.0041.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Fernandes, Alexandra M., and Pedro B. Albuquerque. "Working memory span for pictures, names, and touched objects." Seeing and Perceiving 25 (2012): 21. http://dx.doi.org/10.1163/187847612x646433.

Full text
Abstract:
Through an immediate serial recall task, working memory for objects’ pictures, objects’ names, and touched objects was evaluated with and without a simultaneous articulatory suppression task. Each group performed the task in one modality: seeing object pictures presented on a computer screen, reading out loud two-syllable object names presented on a computer screen, or touching real objects without sight. The task was performed twice by the participants, once with articulatory suppression and once without. The objects were presented sequentially for three seconds each, starting with lists of two items and progressively increasing the number of items by one, according to the participant’s performance, until a maximum of 10 items. The results showed that span values were similar in the three modalities, with an average of five items being recalled without articulatory suppression and about four items with articulatory suppression. This study suggests similar performance in immediate serial recall tasks regardless of presentation modality. Articulatory suppression had an equivalent effect in all groups, impairing recall performance by one item on average. Results are discussed in light of the verbal nature of the task, which implies recall of the object’s name, and of the impact of articulatory suppression on the encoding of common objects.
APA, Harvard, Vancouver, ISO, and other styles
35

Woods, Andrew T., Allison Moore, and Fiona N. Newell. "Canonical Views in Haptic Object Perception." Perception 37, no. 12 (January 1, 2008): 1867–78. http://dx.doi.org/10.1068/p6038.

Full text
Abstract:
Previous investigations of visual object recognition have found that some views of both familiar and unfamiliar objects promote more efficient recognition performance than other views. These views are considered canonical and are often the views that present the most information about an object's 3-D structure and features in the image. Although objects can also be efficiently recognised with touch alone, little is known about whether some views promote more efficient recognition than others. This may seem unlikely, given that the object structure and features are readily available to the hand during object exploration. We conducted two experiments to investigate whether canonical views existed in haptic object recognition. In the first, participants were required to position each object in a way that would present the best view for learning the object with touch alone. We found a large degree of consistency of viewpoint position across participants for both familiar and unfamiliar objects. In a second experiment, we found that these consistent, or canonical, views promoted better haptic recognition performance than other random views of the objects. Interestingly, these haptic canonical views were not necessarily the same as the canonical views normally found in visual perception. Nevertheless, our findings provide support for the idea that both the visual and the tactile systems are functionally equivalent in terms of how objects are represented in memory and subsequently recognised.
APA, Harvard, Vancouver, ISO, and other styles
36

Ringer, Ryan V., Allison M. Coy, Adam M. Larson, and Lester C. Loschky. "Investigating Visual Crowding of Objects in Complex Real-World Scenes." i-Perception 12, no. 2 (March 2021): 204166952199415. http://dx.doi.org/10.1177/2041669521994150.

Full text
Abstract:
Visual crowding, the impairment of object recognition in peripheral vision due to flanking objects, has generally been studied using simple stimuli on blank backgrounds. While crowding is widely assumed to occur in natural scenes, it has not been shown rigorously yet. Given that scene contexts can facilitate object recognition, crowding effects may be dampened in real-world scenes. Therefore, this study investigated crowding using objects in computer-generated real-world scenes. In two experiments, target objects were presented with four flanker objects placed uniformly around the target. Previous research indicates that crowding occurs when the distance between the target and flanker is approximately less than half the retinal eccentricity of the target. In each image, the spacing between the target and flanker objects was varied considerably above or below the standard (0.5) threshold to either suppress or facilitate the crowding effect. Experiment 1 cued the target location and then briefly flashed the scene image before participants could move their eyes. Participants then selected the target object’s category from a 15-alternative forced choice response set (including all objects shown in the scene). Experiment 2 used eye tracking to ensure participants were centrally fixating at the beginning of each trial and showed the image for the duration of the participant’s fixation. Both experiments found object recognition accuracy decreased with smaller spacing between targets and flanker objects. Thus, this study rigorously shows crowding of objects in semantically consistent real-world scenes.
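The spacing rule this abstract applies (crowding expected when the target-flanker distance is less than about half the target's retinal eccentricity, often called Bouma's law) can be expressed directly; the function name is ours:

```python
# Bouma-style crowding criterion, as used in the abstract: flankers within
# roughly half the target's eccentricity are expected to impair recognition.
def is_crowded(spacing_deg, eccentricity_deg, ratio=0.5):
    """All quantities in degrees of visual angle."""
    return spacing_deg < ratio * eccentricity_deg
```

The study varied spacing well above or below this 0.5 threshold to suppress or facilitate crowding; e.g., at 8 deg eccentricity, 2 deg spacing falls inside the crowding zone while 5 deg does not.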
APA, Harvard, Vancouver, ISO, and other styles
37

Pashchenko, Dmitry V., Dmitry A. Trokoz, Irina G. Sergina, and Elena A. Balzannikova. "Methods for the Classification of Radar Objects." Nexo Revista Científica 35, no. 03 (September 30, 2022): 835–44. http://dx.doi.org/10.5377/nexo.v35i03.15012.

Full text
Abstract:
Currently, many different algorithms have been developed that implement the classification problem. Of particular interest is the use of automatic classification algorithms in radar systems. In each such system, one has to solve the classification and recognition problem for detected targets. The work of an operator analyzing the information can take place in conditions that hinder assessment of the control object’s state. This is due to the absence or insufficient amount of reliable information about some radar target properties. A radar target is understood as any material object that can be detected, and its location and movement parameters can be measured using radar methods. We consider air, ground, and surface objects as radar objects. Object classification algorithms are used in various fields when analyzing the properties of an object. This paper discusses the main methods for the classification of radar objects.
APA, Harvard, Vancouver, ISO, and other styles
38

van Beers, Robert J., Daniel M. Wolpert, and Patrick Haggard. "Sensorimotor Integration Compensates for Visual Localization Errors During Smooth Pursuit Eye Movements." Journal of Neurophysiology 85, no. 5 (May 1, 2001): 1914–22. http://dx.doi.org/10.1152/jn.2001.85.5.1914.

Full text
Abstract:
To localize a seen object, the CNS has to integrate the object's retinal location with the direction of gaze. Here we investigate this process by examining the localization of static objects during smooth pursuit eye movements. The normally experienced stability of the visual world during smooth pursuit suggests that the CNS essentially compensates for the eye movement when judging target locations. However, certain systematic localization errors are made, and we use these to study the process of sensorimotor integration. During an eye movement, a static object's image moves across the retina. Objects that produce retinal slip are known to be mislocalized: objects moving toward the fovea are seen too far on in their trajectory, whereas errors are much smaller for objects moving away from the fovea. These effects are usually studied by localizing the moving object relative to a briefly flashed one during fixation: moving objects are then mislocalized, but flashes are not. In our first experiment, we found that a similar differential mislocalization occurs for static objects relative to flashes during pursuit. This effect is not specific for horizontal pursuit but was also found in other directions. In a second experiment, we examined how this effect generalizes to positions outside the line of eye movement. We found that large localization errors were found in the entire hemifield ahead of the pursuit target and were predominantly aligned with the direction of eye movement. In a third experiment, we determined whether it is the flash or the static object that is mislocalized ahead of the pursuit target. In contrast to fixation conditions, we found that during pursuit it is the flash, not the static object, which is mislocalized. In a fourth experiment, we used egocentric localization to confirm this result. Our results suggest that the CNS compensates for the retinal localization errors to maintain position constancy for static objects during pursuit. 
This compensation is achieved in the process of sensorimotor integration of retinal and gaze signals: different retinal areas are integrated with different gaze signals to guarantee the stability of the visual world.
APA, Harvard, Vancouver, ISO, and other styles
39

Szinte, Martin, Marisa Carrasco, Patrick Cavanagh, and Martin Rolfs. "Attentional trade-offs maintain the tracking of moving objects across saccades." Journal of Neurophysiology 113, no. 7 (April 2015): 2220–31. http://dx.doi.org/10.1152/jn.00966.2014.

Full text
Abstract:
In many situations like playing sports or driving a car, we keep track of moving objects, despite the frequent eye movements that drastically interrupt their retinal motion trajectory. Here we report evidence that transsaccadic tracking relies on trade-offs of attentional resources from a tracked object's motion path to its remapped location. While participants covertly tracked a moving object, we presented pulses of coherent motion at different locations to probe the allocation of spatial attention along the object's entire motion path. Changes in the sensitivity for these pulses showed that during fixation attention shifted smoothly in anticipation of the tracked object's displacement. However, just before a saccade, attentional resources were withdrawn from the object's current motion path and reflexively drawn to the retinal location the object would have after saccade. This finding demonstrates the predictive choice the visual system makes to maintain the tracking of moving objects across saccades.
APA, Harvard, Vancouver, ISO, and other styles
40

Kampis, Dora, and Ágnes Melinda Kovács. "Seeing the World From Others’ Perspective: 14-Month-Olds Show Altercentric Modulation Effects by Others’ Beliefs." Open Mind 5 (2021): 189–207. http://dx.doi.org/10.1162/opmi_a_00050.

Full text
Abstract:
Humans have a propensity to readily adopt others’ perspective, which often influences their behavior even when it seemingly should not. This altercentric influence has been widely studied in adults, yet we lack an understanding of its ontogenetic origins. The current studies investigated whether 14-month-olds’ search in a box for potential objects is modulated by another person’s belief about the box’s content. We varied the person’s potential belief such that in her presence/absence an object was removed, added, or exchanged for another, leading to her true/false belief about the object’s presence (Experiment 1, n = 96); or transformed into another object, leading to her true/false belief about the object’s identity (i.e., the objects represented under a specific aspect, Experiment 2, n = 32). Infants searched longer if the other person believed that an object remained in the box, showing an altercentric influence early in development. These results suggest that infants spontaneously represent others’ beliefs involving multiple objects and raise the possibility that infants can appreciate that others encode the world under a unique aspect.
APA, Harvard, Vancouver, ISO, and other styles
41

Manion, Todd M. "Objects objects everywhere." XRDS: Crossroads, The ACM Magazine for Students 7, no. 3 (March 15, 2001): 10–14. http://dx.doi.org/10.1145/367884.367893.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

TSICHRITZIS, DENNIS, OSCAR NIERSTRASZ, and SIMON GIBBS. "BEYOND OBJECTS: OBJECTS." International Journal of Cooperative Information Systems 01, no. 01 (March 1992): 43–60. http://dx.doi.org/10.1142/s0218215792000039.

Full text
Abstract:
Object-orientation offers more than just objects, classes and inheritance as means to structure applications. It is an approach to application development in which software systems can be constructed by composing and refining pre-designed, plug-compatible software components. But for this approach to be successfully applied, programming languages must provide better support for component specification and software composition, the software development life-cycle must separate the issues of generic component design and reuse from that of constructing applications to meet specific requirements, and, more generally, the way we develop, manage, exchange and market software must adapt to better support large-scale reuse for software communities. In this paper we shall explore these themes and we will highlight a number of key research directions and open problems to be explored as steps towards improving the effectiveness of object technology.
APA, Harvard, Vancouver, ISO, and other styles
43

Zhou, Yun Cheng. "Research on Method of CIM-Based Data Exchange for Electric Power Enterprise." Advanced Materials Research 986-987 (July 2014): 2151–57. http://dx.doi.org/10.4028/www.scientific.net/amr.986-987.2151.

Full text
Abstract:
A novel CIM-based approach is proposed to realize power-enterprise data exchange in a heterogeneous IT environment. A specification for encoding CIM objects in XML is introduced: an object is expressed as an XML complex element, and the object’s properties are encoded as simple elements embedded in it. To solve several data-interchange problems, a CIM/XSD schema governing CIM data syntax and validation is established using XML Schema Definition (XSD) technology, and an attribute group, “AssociationAttributeGroup”, is designed to serialize complex relationships between CIM objects. The attribute group provides syntax support for marshaling linkages between objects by two methods: “embedding” and “referring”. Two operators, serialization and deserialization, are added to each CIM class; in this way, CIM objects can be converted quickly and bidirectionally between in-memory objects and CIM/XML documents. The algorithms for the two operators are designed in detail and can convert complex object sets bidirectionally and efficiently. A case study shows that the CIM object encoding specification, the CIM/XML schema, and the serialization algorithms can be applied to exchange and share CIM data in an electric power enterprise.
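The encoding convention this abstract describes — an object as a complex element whose properties are embedded simple elements, with associations marshaled either by "embedding" or by "referring" — might be sketched as follows. The class and attribute names and the use of Python's ElementTree are illustrative assumptions; the paper defines its own CIM/XSD schema:

```python
import xml.etree.ElementTree as ET

# Illustrative CIM-style serialization: the object is a complex element,
# its properties are simple child elements, and an association is either
# embedded (nested object) or referred to (resource attribute).
def serialize_breaker(breaker_id, name, substation_id, embed=False):
    obj = ET.Element("cim:Breaker", {"rdf:ID": breaker_id})
    ET.SubElement(obj, "cim:IdentifiedObject.name").text = name
    if embed:
        # "embedding": the associated object is nested inside this one
        ET.SubElement(obj, "cim:Substation", {"rdf:ID": substation_id})
    else:
        # "referring": the association only points at the object's id
        ET.SubElement(obj, "cim:Equipment.Substation",
                      {"rdf:resource": "#" + substation_id})
    return ET.tostring(obj, encoding="unicode")

xml_ref = serialize_breaker("BRK1", "Feeder breaker", "SUB7")
xml_embed = serialize_breaker("BRK1", "Feeder breaker", "SUB7", embed=True)
```

Deserialization would walk the same structure in reverse, resolving `rdf:resource` references back to in-memory objects.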
APA, Harvard, Vancouver, ISO, and other styles
44

HARRISON, VICTORIA S. "Mathematical objects and the object of theology." Religious Studies 53, no. 4 (October 18, 2016): 479–96. http://dx.doi.org/10.1017/s0034412516000238.

Full text
Abstract:
This article brings mathematical realism and theological realism into conversation. It outlines a realist ontology that characterizes abstract mathematical objects as inaccessible to the senses, non-spatiotemporal, and acausal. Mathematical realists are challenged to explain how we can know such objects. The article reviews some promising responses to this challenge before considering the view that the object of theology also possesses the three characteristic features of abstract objects, and consequently may be known through the same methods that yield knowledge of mathematical objects.
APA, Harvard, Vancouver, ISO, and other styles
45

Duy Cong, Vo, and Le Hoai Phuong. "Design and development of a delta robot system to classify objects using image processing." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 3 (June 1, 2023): 2669. http://dx.doi.org/10.11591/ijece.v13i3.pp2669-2676.

Full text
Abstract:
In this paper, a delta robot is designed to grasp objects in an automatic sorting system. The system consists of a delta robot arm for grasping objects, a belt conveyor for transporting them, a camera mounted above the conveyor to capture images of the objects, and a computer that processes the images to classify the objects. The delta robot is driven by three direct-current (DC) servo motors. The controller is implemented with an Arduino board and a Raspberry Pi 4 computer: the Arduino is programmed to provide rotation to each corresponding motor, while the Raspberry Pi 4 processes images of the objects to classify them by color. An image-processing algorithm is developed in which the blue-green-red (BGR) image of the objects is converted to HSV color space and different thresholds are applied to recognize each object’s color. The robot grasps objects and places them in the correct position according to the information received from the Raspberry Pi. Experimental results show that the classification accuracy is 100% for red and yellow objects and 97.5% for green objects. The system takes an average of 1.8 s to sort an object.
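The color test this abstract describes (BGR-to-HSV conversion followed by per-color thresholds) can be sketched in pure Python with the standard-library `colorsys` module. The hue ranges below are illustrative assumptions, not the paper's calibrated values:

```python
import colorsys

# Illustrative per-color hue ranges on OpenCV's 0-179 hue scale (assumed,
# not the paper's calibrated thresholds).
HUE_RANGES = {"red": (0, 10), "yellow": (20, 35), "green": (40, 80)}

def classify_color(b, g, r):
    """Classify a BGR pixel by converting it to HSV and thresholding hue."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    hue = h * 179  # scale 0..1 hue to OpenCV's 0-179 range
    for name, (lo, hi) in HUE_RANGES.items():
        if lo <= hue <= hi:
            return name
    return "unknown"
```

In the actual system this decision would be applied to the segmented object region rather than a single pixel, and the resulting label would be sent to the Arduino to select the drop-off position.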
APA, Harvard, Vancouver, ISO, and other styles
46

Zelinsky, Gregory J., and Gregory L. Murphy. "Synchronizing Visual and Language Processing: An Effect of Object Name Length on Eye Movements." Psychological Science 11, no. 2 (March 2000): 125–31. http://dx.doi.org/10.1111/1467-9280.00227.

Full text
Abstract:
Are visual and verbal processing systems functionally independent? Two experiments (one using line drawings of common objects, the other using faces) explored the relationship between the number of syllables in an object's name (one or three) and the visual inspection of that object. The tasks were short-term recognition and visual search. Results indicated more fixations and longer gaze durations on objects having three-syllable names when the task encouraged a verbal encoding of the objects (i.e., recognition). No effects of syllable length on eye movements were found when implicit naming demands were minimal (i.e., visual search). These findings suggest that implicitly naming a pictorial object constrains the oculomotor inspection of that object, and that the visual and verbal encoding of an object are synchronized so that the faster process must wait for the slower to be completed before gaze shifts to another object. Both findings imply a tight coupling between visual and linguistic processing, and highlight the utility of an oculomotor methodology to understand this coupling.
APA, Harvard, Vancouver, ISO, and other styles
47

Hua, Tianyu, Hongdong Zheng, Yalong Bai, Wei Zhang, Xiao-Ping Zhang, and Tao Mei. "Exploiting Relationship for Complex-scene Image Generation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1584–92. http://dx.doi.org/10.1609/aaai.v35i2.16250.

Full text
Abstract:
The significant progress on Generative Adversarial Networks (GANs) has facilitated realistic single-object image generation based on language input. However, complex-scene generation (with various interactions among multiple objects) still suffers from messy layouts and object distortions, due to diverse configurations in layouts and appearances. Prior methods are mostly object-driven and ignore the inter-relations among objects, which play a significant role in complex-scene images. This work explores relationship-aware complex-scene image generation, where multiple objects are inter-related as a scene graph. With the help of relationships, we propose three major updates in the generation framework. First, reasonable spatial layouts are inferred by jointly considering the semantics and relationships among objects. Compared to standard location regression, we show relative scales and distances serve as a more reliable target. Second, since the relations between objects significantly influence an object's appearance, we design a relation-guided generator to generate objects reflecting their relationships. Third, a novel scene graph discriminator is proposed to guarantee the consistency between the generated image and the input scene graph. Our method tends to synthesize plausible layouts and objects, respecting the interplay of multiple objects in an image. Experimental results on Visual Genome and HICO-DET datasets show that our proposed method significantly outperforms prior arts in terms of IS and FID metrics. Based on our user study and visual inspection, our method is more effective in generating logical layout and appearance for complex scenes.
APA, Harvard, Vancouver, ISO, and other styles
48

Chao, Linda L., and Alex Martin. "Cortical Regions Associated with Perceiving, Naming, and Knowing about Colors." Journal of Cognitive Neuroscience 11, no. 1 (January 1999): 25–35. http://dx.doi.org/10.1162/089892999563229.

Full text
Abstract:
Positron emission tomography (PET) was used to investigate whether retrieving information about a specific object attribute requires reactivation of brain areas that mediate perception of that attribute. During separate PET scans, subjects passively viewed colored and equiluminant gray-scale Mondrians, named colored and achromatic objects, named the color of colored objects, and generated color names associated with achromatic objects. Color perception was associated with activations in the lingual and fusiform gyri of the occipital lobes, consistent with previous neuroimaging and human lesion studies. Retrieving information about object color (generating color names for achromatic objects relative to naming achromatic objects) activated the left inferior temporal, left frontal, and left posterior parietal cortices, replicating previous findings from this laboratory. When subjects generated color names for achromatic objects relative to the low-level baseline of viewing gray-scale Mondrians, additional activations in the left fusiform/lateral occipital region were detected. However, these activations were lateral to the occipital regions associated with color perception and identical to occipital regions activated when subjects simply named achromatic objects relative to the same low-level baseline. This suggests that the occipital activations associated with retrieving color information were due to the perception of object form rather than to the top-down influence of brain areas that mediate color perception. Taken together, these results indicate that retrieving previously acquired information about an object's typical color does not require reactivation of brain regions that subserve color perception.
APA, Harvard, Vancouver, ISO, and other styles
49

Mohanad Abdulhamid and Adam Olalo. "Implementation of Moving Object Tracker System." Data Science: Journal of Computing and Applied Informatics 5, no. 2 (October 5, 2021): 102–6. http://dx.doi.org/10.32734/jocai.v5.i2-6450.

Full text
Abstract:
The field of computer vision is increasingly becoming an active area of research, with tremendous effort being put towards giving computers the capability of sight. As human beings we are able to see, to distinguish between different objects based on their unique features, and even to trace their movements if they are within our view. For computers truly to see, they also need the capability of identifying different objects and tracking them. This paper focuses on identifying an object that the user chooses; the chosen object is differentiated from other objects by comparison of pixel characteristics. The chosen object is then tracked with a bounding box for easy identification of the object's location. A real-time video feed captured by a web camera is utilized, and it is from this environment, visible within the camera view, that an object is selected and tracked. The scope of this paper is mainly the development of a software application that achieves real-time object tracking. The software module allows the user to identify the object of interest to be tracked, while the algorithm employed enables noise and size filtering for easier tracking of the object.
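The noise and size filtering this abstract mentions might look like the following sketch: detected regions whose pixel area falls outside a plausible range are discarded, and the largest survivor is tracked with its bounding box. The thresholds and blob data are illustrative assumptions:

```python
# Illustrative size-filtering step for a tracker: drop blobs that are too
# small (noise) or too large (background), then track the biggest survivor.
def filter_and_track(blobs, min_area=50, max_area=5000):
    """blobs: list of (x, y, w, h) bounding boxes of detected regions.
    Returns the bounding box to track, or None if nothing qualifies."""
    candidates = [b for b in blobs if min_area <= b[2] * b[3] <= max_area]
    if not candidates:
        return None
    return max(candidates, key=lambda b: b[2] * b[3])  # largest plausible blob

# Hypothetical detections from one frame
blobs = [(5, 5, 3, 3),        # noise speck: 9 px, too small
         (40, 60, 30, 20),    # candidate object: 600 px
         (0, 0, 200, 150)]    # background patch: 30000 px, too large
tracked = filter_and_track(blobs)
```

In the full system this step would run per frame on regions segmented by pixel-characteristic comparison, and the returned box would be drawn over the live video feed.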
APA, Harvard, Vancouver, ISO, and other styles
50

Abdulhamid, Mohanad. "Implementation of Moving Object Tracker System." Journal of Siberian Federal University. Engineering & Technologies 14, no. 8 (December 2021): 986–95. http://dx.doi.org/10.17516/1999-494x-0367.

Full text
Abstract:
The field of computer vision is increasingly becoming an active area of research, with tremendous effort being put towards giving computers the capability of sight. As human beings we are able to see, to distinguish between different objects based on their unique features, and even to trace their movements if they are within our view. For computers truly to see, they also need the capability of identifying different objects and tracking them. This paper focuses on identifying an object that the user chooses; the chosen object is differentiated from other objects by comparison of pixel characteristics. The chosen object is then tracked with a bounding box for easy identification of the object's location. A real-time video feed captured by a web camera is utilized, and it is from this environment, visible within the camera view, that an object is selected and tracked. The scope of this paper is mainly the development of a software application that achieves real-time object tracking. The software module allows the user to identify the object of interest to be tracked, while the algorithm employed enables noise and size filtering for easier tracking of the object.
APA, Harvard, Vancouver, ISO, and other styles