Academic literature on the topic 'Egocentric'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Egocentric.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Egocentric"

1

Urzha, A. V. "Egocentric Units with Epistemic Meaning: Classification, Pragmatics, Functioning (on the Material of Original and Translated Narratives in Russian)." Bulletin of Udmurt University. Series History and Philology 30, no. 5 (October 27, 2020): 765–73. http://dx.doi.org/10.35634/2412-9534-2020-30-5-765-773.

Abstract:
The article characterizes a specific group of egocentric words and constructions whose semantics relate to knowledge, the lack of knowledge, and the shift from the latter to the former. These egocentric units with epistemic meaning demonstrate the limits of any point of view in the text (belonging to the narrator or the focal hero), and they create suspense in the narrative. These elements are called epistemic egocentric units (as opposed to deictic, evaluative, and other egocentric units). The epistemic cluster includes persuasive and evidential markers, words and expressions denoting uncertainty and unexpectedness, and constructions expressing similarity, likeness, and identification. Analysis of original and translated texts in Russian shows how different types of epistemic egocentric units interact within the perspective of the narratives. The core material for the study consists of several Russian translations of the novels “Dracula” (B. Stoker) and “The Adventures of Tom Sawyer” (M. Twain). The comparative analysis of the translated versions shows that active use of epistemic egocentrics in Russian translations reinforces suspense and dramatization within some translators' strategies, highlighting the original devices.
2

Urzha, A. V. "Combining Linguistic Methods of Studying Egocentric Units in Russian Translated Narratives." Bulletin of Kemerovo State University 22, no. 3 (October 29, 2020): 879–88. http://dx.doi.org/10.21603/2078-8975-2020-22-3-879-888.

Abstract:
The present research featured a functional comparative analysis of egocentric language units in contemporary Russian translated narratives, namely six Russian translations of The Adventures of Tom Sawyer. The study was based on parallel corpora within the Russian National Corpus and a set of digitized translations. The research objective was to present a classification of egocentric units applicable to the analysis of translations, as well as to describe ways of combining various linguistic methods of studying egocentrics in translated narratives. Egocentric units were studied within several semantic clusters: actualizing (deictic), evaluative, epistemic, modal, and interactive. Using the heuristic method, the authors found and counted the contexts containing egocentric units of a certain type within the parallel corpora. The inductive method made it possible to reveal trends based on the data obtained. The hypotheses were verified using the deductive method. The research was based on wide narrative contexts and took into account the writing style, the genre and composition of the text, the use of egocentrics in the target language, and the individual translation strategies. The paper focuses on the lexical markers of uncertainty added by the Russian translators of Mark Twain. They are often used as additional markers of focalization in Russian translations. On the one hand, this phenomenon reflects specific ways of foregrounding subjectivity in the Russian language; on the other hand, it reveals the strategies of building up suspense applied by individual translators.
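The heuristic counting step described in this abstract, finding and counting contexts that contain lexical markers of a given type, can be sketched roughly as follows. This is an illustrative sketch only: the marker set uses English stand-ins rather than the study's Russian material, and `count_markers` is a hypothetical helper, not the authors' tooling.

```python
import re
from collections import Counter

# Illustrative English stand-ins; the study counts Russian markers.
UNCERTAINTY_MARKERS = {"perhaps", "apparently", "it seems", "as if"}

def count_markers(text, markers=UNCERTAINTY_MARKERS):
    """Count occurrences of lexical markers of uncertainty in a text,
    mirroring the heuristic counting step the abstract describes."""
    t = text.lower()
    return Counter({m: len(re.findall(r"\b" + re.escape(m) + r"\b", t))
                    for m in markers})
```

Counts like these, collected per translation, would then feed the inductive step of comparing translators' strategies.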
3

Hayashi, Yugo. "Facilitating Perspective Taking in Groups." International Journal of Software Science and Computational Intelligence 5, no. 1 (January 2013): 1–14. http://dx.doi.org/10.4018/ijssci.2013010101.

Abstract:
The present study investigates the nature of egocentric biases in a situation where a speaker is surrounded by social actors with different perspectives. In this context, the author investigated how communication channels function to ease egocentric bias during collaborative activities. To investigate this point, the author used conversational agents as social actors. The present study therefore created a virtual situation where a speaker was surrounded by several speakers. The author hypothesized that the diversity of communication channels available to the audience would increase the awareness of others and facilitate the adoption of an exocentric perspective. The results of the analysis show that participants who engaged in the collaboration task with various communication channels used fewer egocentric perspectives. Studies in egocentrism and communication have not yet investigated the conversational dynamics of multiple speakers. This study therefore provides a new perspective about the kinds of factors that may ease such biases.
4

Smith, Joel. "Egocentric Space." International Journal of Philosophical Studies 22, no. 3 (May 27, 2014): 409–33. http://dx.doi.org/10.1080/09672559.2014.913888.

5

Field, Hartry. "Egocentric Content." Noûs 51, no. 3 (April 13, 2016): 521–46. http://dx.doi.org/10.1111/nous.12141.

6

Epley, Nicholas, and Eugene M. Caruso. "Egocentric Ethics." Social Justice Research 17, no. 2 (June 2004): 171–87. http://dx.doi.org/10.1023/b:sore.0000027408.72713.45.

7

Elgharib, Mohamed, Mohit Mendiratta, Justus Thies, Matthias Niessner, Hans-Peter Seidel, Ayush Tewari, Vladislav Golyanik, and Christian Theobalt. "Egocentric videoconferencing." ACM Transactions on Graphics 39, no. 6 (November 26, 2020): 1–16. http://dx.doi.org/10.1145/3414685.3417808.

8

Nakashima, Ryoichi, and Takatsune Kumada. "Peripersonal versus extrapersonal visual scene information for egocentric direction and position perception." Quarterly Journal of Experimental Psychology 71, no. 5 (January 1, 2018): 1090–99. http://dx.doi.org/10.1080/17470218.2017.1310267.

Abstract:
When perceiving the visual environment, people simultaneously perceive their own direction and position in the environment (i.e., egocentric spatial perception). This study investigated what visual information in a scene is necessary for egocentric spatial perceptions. In two perception tasks (the egocentric direction and position perception tasks), observers viewed two static road images presented sequentially. In Experiment 1, the critical manipulation involved an occluded region in the road image, an extrapersonal region (far-occlusion) and a peripersonal region (near-occlusion). Egocentric direction perception was worse in the far-occlusion condition than in the no-occlusion condition, and egocentric position perceptions were worse in the far- and near-occlusion conditions than in the no-occlusion condition. In Experiment 2, we conducted the same tasks manipulating the observers’ gaze location in a scene—an extrapersonal region (far-gaze), a peripersonal region (near-gaze) and the intermediate region between the former two (middle-gaze). Egocentric direction perception performance was the best in the far-gaze condition, and egocentric position perception performances were not different among gaze location conditions. These results suggest that egocentric direction perception is based on fine visual information about the extrapersonal region in a road landscape, and egocentric position perception is based on information about the entire visual scene.
9

Coluccia, Emanuele, Irene C. Mammarella, and Cesare Cornoldi. "Centred Egocentric, Decentred Egocentric, and Allocentric Spatial Representations in the Peripersonal Space of Congenital Total Blindness." Perception 38, no. 5 (January 1, 2009): 679–93. http://dx.doi.org/10.1068/p5942.

Abstract:
The distinction between different spatial representations in the peripersonal space was examined in two experiments by requiring sighted blindfolded and blind participants to remember the locations of haptically explored objects. In experiment 1, object relocation took place either from the same position as learning—with the same (centred egocentric condition) or 90°-rotated (rotated egocentric condition) object array—or from a position different from the learning position (allocentric condition). Results revealed that, in both sighted and blind people, distance errors were higher in the allocentric and rotated conditions than in the centred egocentric condition, and that blind participants made more distance errors than sighted participants only in the allocentric condition. Experiment 2 repeated the rotated egocentric and allocentric conditions, while the centred egocentric condition was replaced by a decentred egocentric condition in which object relocation took place from the same position as learning (egocentric) but started from a decentred point. The decentred egocentric condition remained significantly different from the rotated condition, but not from the allocentric condition. Moreover, blind participants were specifically impaired in the allocentric condition. Overall, our results confirm that different types of spatial constraints and representations, including the decentred egocentric one, can be distinguished in the peripersonal space, and that blind people are as efficient as sighted people in the egocentric and rotated conditions, but encounter difficulties in recalling locations even in the peripersonal space, especially when an allocentric representation is required.
10

Rácz, Anna. "Measuring egocentric networks." Magyar Pszichológiai Szemle 69, no. 3 (September 1, 2014): 567–93. http://dx.doi.org/10.1556/mpszle.69.2014.3.6.

Abstract:
In this study we examine ways of mapping egocentric networks. We first review the relational layers surrounding the ego, and then the instruments with which these layers can be explored. Among these instruments we present name generators in detail, but we also briefly cover other methods suitable for studying ego-centred networks. Finally, we review some methodological questions related to name-generator studies.

Dissertations / Theses on the topic "Egocentric"

1

Surie, Dipak. "Egocentric interaction for ambient intelligence." Doctoral thesis, Umeå universitet, Institutionen för datavetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-50822.

Abstract:
Ambient intelligence refers to the vision of computationally augmented everyday environments that are sensitive, adaptive and responsive to humans and intelligently support their daily lives. Ambient ecologies are the infrastructures of ambient intelligence. To enable system developers to frame and manage the dynamic and complex interaction of humans with ambient ecologies consisting of a mixture of physical (real) and virtual (digital) objects, novel interaction paradigms are needed. Traditional interaction paradigms like the WIMP (windows, icons, menus, and pointing devices) paradigm for desktop computing operate in a closed world, unaware of the physical, social and cultural context. They restrict human perception and action to screen, mouse and keyboard, with the assumption that human attention will be fully devoted to interaction with the computer. Emerging interaction paradigms for ambient intelligence are typically centered on specific devices, specific computing environments or specific human capabilities. Also, many of them are driven by technological advancements rather than viewing the human agent as their starting point. What has been lacking is a principled theoretical approach that is centered on the individual human agent, their situation and activities; that is comprehensive and integrated; and that is at the same time instrumental in the design of ambient ecologies. This thesis introduces egocentric interaction as an approach towards the modeling of ambient ecologies, with the distinguishing feature of taking the human agent's body, situation and activities as its center of reference, as opposed to the more common device-centric approaches to facilitating human-environment interaction. Egocentric interaction is encapsulated in a number of assumptions and principles such as situatedness, the proximity principle, the physical-virtual equity principle, perception and action instead of “input” and “output,” and activity-centeredness.
A situative space model is proposed based on some of these principles. It is intended to capture what a specific human agent can perceive and not perceive, reach and not reach, at any given moment in time. The situative space model is for the egocentric interaction paradigm what the virtual desktop is for the WIMP interaction paradigm: more or less everything of interest to a specific human agent is assumed and supposed to happen here. In addition, the conception and implementation of the easy ADL ecology based on egocentric interaction, comprising smart objects, a personal activity-centric middleware, ambient intelligence applications aimed at everyday activity support, and a human agent literally in the middle of it all, are described. The middleware was developed to address important challenges in ambient intelligence: (1) tracking and managing smart objects; (2) tracking a human agent's situative spaces; (3) recognizing human activities and actions; (4) managing and facilitating human-environment interaction; and (5) easing the development of ambient intelligence applications. The easy ADL ecology was first simulated in immersive virtual reality, and then set up physically as a living laboratory in order to (1) evaluate the technological and technical performance of individual middleware components, (2) perform a user experience evaluation assessing various aspects of user satisfaction in relation to the support offered by the easy ADL ecology, and (3) serve as a research test bed for addressing challenges in ambient intelligence. While it is problematic to directly compare the "proof-of-concept" easy ADL ecology with related research efforts, it is clear from the user experience evaluation that the subjects were positive about the services it offered.
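The situative space model summarized above, which captures what a specific agent can perceive and reach at a given moment, might be sketched under the simplifying assumption that perception and reach are plain distance thresholds. The radii and the `Obj` record are invented for illustration; the thesis's actual model is richer than this.

```python
from dataclasses import dataclass
import math

@dataclass
class Obj:
    """A (smart) object placed in the environment; hypothetical layout."""
    name: str
    x: float
    y: float

def situative_spaces(agent_xy, objects, perceive_r=5.0, reach_r=1.0):
    """Partition objects into what the agent can currently perceive
    and what it can reach, using simple distance thresholds
    (the radii are assumed parameters, not values from the thesis)."""
    ax, ay = agent_xy
    dist = lambda o: math.hypot(o.x - ax, o.y - ay)
    perceivable = {o.name for o in objects if dist(o) <= perceive_r}
    reachable = {o.name for o in objects if dist(o) <= reach_r}
    return perceivable, reachable
```

A middleware tracking such spaces over time could trigger activity support whenever relevant objects enter the reachable set.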
2

Ostell, Carol. "Individual differences in egocentric orientation." Thesis, University of York, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.325649.

3

Hipiny, Irwandi. "Egocentric activity recognition using gaze." Thesis, University of Bristol, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.682564.

Abstract:
When coupled with an egocentric camera, a gaze tracker provides the image point where the person is fixating. While performing a familiar task, we tend to fixate on activity-relevant objects at the points in time they are needed in the task at hand. The resulting sequence of gaze regions is therefore very useful for inferring the subject's activity and action class. This thesis addresses the problem of visual recognition of human activity and action from an egocentric point of view. The higher-level task of activity recognition is based on processing the entire sequence of gaze regions as users perform tasks such as cooking or assembling objects, while the mid-level task of action recognition, such as pouring into a cup, is addressed via the automatic segmentation of mutually exclusive sequences prior to recognition. Temporal segmentation is performed by tracking two motion-based features inside successive gaze regions. These features model the underlying structure of image-motion data at natural temporal cuts. The segmentation is further improved by incorporating a 2D colour-histogram-based detection of human hands inside gaze regions. The proposed method learns activity and action models from the sequence of gaze regions. Activities are learned as a bag of visual words, and a multi-voting scheme is introduced to reduce the effect of noisy matching. Actions are, in addition, modelled as a string of visual words, which enforces the structural constraint of an action. We introduce contextual information in the form of location-based priors. Furthermore, this thesis addresses the problem of measuring task performance from gaze-region modelling. The hypothesis is that subjects with greater task-performance scores exhibit specific gaze patterns as they conduct the task, which is expected to indicate the presence of domain knowledge. This may be reflected in, for example, requiring minimal visual feedback during the completion of a task.
This consistent and strategic use of gaze produces nearly identical activity models among those who score higher, whilst greater variation is observed between models learned from subjects who performed less well in the given task. Results are shown on datasets captured using an egocentric gaze tracker with two cameras: a frontal-facing camera that captures the scene, and an inward-facing camera that tracks the movement of the pupil to estimate the subject's gaze fixation. Our activity and action recognition results are comparable to the current literature in egocentric activity recognition, and, to the best of our knowledge, the results of the task-performance evaluation are the first steps towards automatically modelling user performance from gaze patterns.
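The bag-of-visual-words classification with a multi-voting scheme that this abstract mentions could look roughly like the sketch below. The word representation and the `top_k` voting rule are assumptions made for illustration, not the thesis's actual formulation.

```python
from collections import Counter

def build_activity_model(word_sequences):
    """Bag-of-visual-words model: one word-count histogram per
    activity, built from that activity's training sequences."""
    return {a: Counter(w for seq in seqs for w in seq)
            for a, seqs in word_sequences.items()}

def classify(query_words, models, top_k=2):
    """Multi-voting: each visual word in the query casts a vote for
    the top_k activity models in which it is most frequent, instead
    of a single winner-takes-all match, damping noisy matches."""
    votes = Counter()
    for w in query_words:
        ranked = sorted(models, key=lambda a: models[a][w], reverse=True)
        for a in ranked[:top_k]:
            if models[a][w] > 0:   # only models that actually contain w
                votes[a] += 1
    return votes.most_common(1)[0][0] if votes else None
```

In practice the "words" would be quantized visual descriptors extracted from gaze regions rather than strings.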
4

Sundaram, Sudeep. "Egocentric activity recognition on the move." Thesis, University of Bristol, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.617591.

Abstract:
Advances in the design and efficiency of cameras, combined with a significant increase in the computational capabilities of machines, have directly resulted in the rapid evolution of computer vision systems. Cameras, which capture all texture, objects and motion in their field of view, hold tremendous potential as wearable sensors that aid daily living. Given these advantages, visual sensors have been surprisingly under-used as wearables on the move. This thesis addresses the problem of visual recognition of human activity on the move from an egocentric point of view. The problem of perceived background motion brought about by the use of moving cameras is handled by a novel method for background subtraction. Once foreground motion has been disambiguated, a unique representation of actions as collections of smaller space-time volumes is presented, using which two novel methods are proposed for the recognition of egocentric and external actions. The element of context plays a vital role in the complete understanding of human activities. This aspect is addressed through the recognition of user location using a Simultaneous Localisation and Mapping system. Activities and locations are modelled in a complementary manner, such that knowledge of what happens where enhances the mapping of large environments and also increases the accuracy of activity recognition. The combination of action recognition and location recognition is applied to the sequential recognition of activities in continuous video, which in turn opens doors to applications such as life logging and activity retrieval. Results are shown on datasets captured using a shoulder-worn camera as a solitary sensor. Although the proposed methods outperform the state of the art in egocentric visual activity recognition, they remain a significant first step towards truly autonomous wearable assistance.
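As a rough illustration of the background-subtraction idea, here is a minimal per-pixel temporal-median model for a fixed camera. Note that the thesis's contribution is precisely handling moving cameras, whose ego-motion this sketch does not attempt to compensate for; the threshold is an assumed parameter.

```python
from statistics import median

def subtract_background(frames, thresh=25):
    """Naive background subtraction: model the static background as
    the per-pixel temporal median of the frame stack, then flag
    pixels deviating from it by more than `thresh` as foreground.
    Frames are lists of rows of grayscale values (illustrative)."""
    h, w = len(frames[0]), len(frames[0][0])
    background = [[median(f[y][x] for f in frames) for x in range(w)]
                  for y in range(h)]
    masks = [[[abs(f[y][x] - background[y][x]) > thresh for x in range(w)]
              for y in range(h)] for f in frames]
    return background, masks
```

For a moving camera, frames would first have to be registered (e.g. by estimating ego-motion) before such a per-pixel model makes sense.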
5

Herlihey, Tracey A. "Optic flow, egocentric direction and walking." Thesis, Cardiff University, 2010. http://orca.cf.ac.uk/54390/.

Abstract:
This research explored two aspects of visually guided walking: (1) what is the role of optic flow in the recalibration of misperceived direction while walking, and (2) how does a change in perceived direction map onto a change in walking direction? Data from five studies investigating adaptation to displaced direction (by prism glasses) suggested the following. First, optic flow is important in the recalibration of perceived direction. Further, processing optic flow is attentionally demanding, such that when cognitive load is increased, recalibration decreases. The results also demonstrated that the time course of recalibration changed as a function of the presence, or absence, of optic flow. With regard to the relationship between egocentric direction and walking direction, we demonstrated that a change in visual straight ahead could be mapped onto a change in target-heading error. We found that this relationship held when we unpacked the data according to the direction of displacement to which observers were exposed. The important relationship between visually perceived direction and walking direction was also highlighted in a study of patients whose perception of direction was endogenously shifted after a right-hemisphere stroke. Taken together, the results of this thesis help to highlight the role of optic flow in the recalibration of perceived direction, and the role of perceived direction in the visual guidance of walking. It is argued that optic flow promotes rapid recalibration of visual direction, and that a change in perceived visual straight ahead can be mapped onto a change in walking direction.
6

Aghaei, Maedeh. "Social Signal Processing from Egocentric Photo-Streams." Doctoral thesis, Universitat de Barcelona, 2018. http://hdl.handle.net/10803/650918.

Abstract:
Wearable photo-cameras offer a hands-free way to record images of daily experiences as they are lived, from the camera wearer's perspective, without needing to interrupt recording because of device battery or storage limitations. This stream of images, known as an egocentric photo-stream, contains important visual data about the life of the user, among which social events are of special interest. Social interactions have been shown to be key to longevity: having too few interactions carries the same risk factor as smoking regularly. Given the importance of the matter, it is no wonder that automatic analysis of social interactions is attracting broad interest from the scientific community. Analysis of unconstrained photo-streams, however, poses novel challenges for social signal processing with respect to conventional video. Due to the free motion of the camera and its low temporal resolution, abrupt changes in the field of view, in illumination conditions, and in the target's location are highly frequent. Also, since images are acquired under real-world conditions, occlusions occur regularly and people's appearance undergoes intensive variation from one event to another. For a user wearing a photo-camera over a given period, this thesis, driven by the social signal processing paradigm, presents a framework for comprehensive characterization of the user's social patterns. In social signal processing, the second step after recording the scene is to track the appearance of the multiple people involved in the social events. Hence, our proposal begins by introducing a multi-face tracker with characteristics suited to the challenges posed by egocentric photo-streams. The next step in social signal processing is to extract the so-called social signals from the tracked people.
In this step, besides the conventionally studied social signals, clothing is proposed as a novel social signal for further study within social signal processing. The final step is social signal analysis itself. In this thesis, social signal analysis is essentially defined as reaching an understanding of the social patterns of a wearable photo-camera user by reviewing the photos captured by the worn camera over a period of time. Our proposal for social signal analysis comprises three steps: first, detecting the social interactions of the user, where the impact of several social signals on the task is explored; second, categorizing the detected social events into different kinds of social meetings; and third, characterizing the social patterns of the user. Our goal is to quantify the duration, diversity, and frequency of the user's social relations in various social situations. This goal is achieved by discovering recurrences of the same people across the whole set of social events related to the user. Each step of the proposed pipeline is validated on relevant datasets, and the obtained results are reported quantitatively and qualitatively. For each section of the pipeline, a comparison with related state-of-the-art models is provided, together with a discussion highlighting the advantages, shortcomings, and differences of the proposed models with regard to the state of the art.
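The final characterization step, quantifying the duration, diversity and frequency of the user's social relations, can be sketched as follows. The event layout (a set of names plus a duration in minutes) is a hypothetical simplification of the thesis's photo-stream-derived data.

```python
from collections import Counter

def social_pattern_stats(events):
    """Quantify a user's social relations from a list of detected
    social events. Each event is a (set_of_people, duration_minutes)
    pair; discovering the same person across events yields that
    person's frequency and cumulative duration of contact."""
    freq = Counter()      # how many events each person appears in
    minutes = Counter()   # total minutes spent with each person
    for people, dur in events:
        for p in people:
            freq[p] += 1
            minutes[p] += dur
    return {"frequency": dict(freq),
            "duration_min": dict(minutes),
            "diversity": len(freq)}   # number of distinct relations
```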
7

Cartas, Ayala Alejandro. "Recognizing Action and Activities from Egocentric Images." Doctoral thesis, Universitat de Barcelona, 2020. http://hdl.handle.net/10803/670752.

Abstract:
Egocentric action recognition consists in determining what a wearable-camera user is doing from his or her own perspective. Its defining characteristic is that the wearer is only partially visible in the images, through the hands. As a result, recognition of actions must rely solely on the user's interactions with objects, other people, and the scene. Egocentric action recognition has numerous assistive-technology applications, in particular in the field of rehabilitation and preventive medicine. The type of egocentric camera determines the activities or actions that can be predicted. There are roughly two kinds: lifelogging cameras and video cameras. The former can continuously take pictures every 20-30 seconds over day-long periods. The sequences of pictures they produce are called visual lifelogs or photo-streams. In comparison with video, they lack the motion information that has typically been used to disambiguate actions. We present several egocentric action recognition approaches for both settings. We first introduce an approach that classifies still images from lifelogs by combining a convolutional network and a random forest. Since lifelogs show temporal coherence across consecutive images, we also present two architectures based on the long short-term memory (LSTM) network. In order to thoroughly measure their generalization performance, we introduce the largest photo-streams dataset for activity recognition. These tests consider not only hidden days and multiple users but also the effect of the time boundaries of events. We finally present domain adaptation strategies for dealing with unknown-domain images in a real-world scenario. Our work on egocentric action recognition from videos is primarily focused on object interactions. We present a deep network that, at the first level, models person-to-object interactions and, at the second level, models sequences of actions as parts of a single activity.
The spatial relationship between hands and objects is modeled using a region-based network, whereas the actions and activities are modeled using a hierarchical LSTM. Our last approach explores the importance of the audio produced by egocentric manipulations of objects. It combines a sparse temporal sampling strategy with a late fusion of audio, RGB, and temporal streams. Experimental results on the EPIC-Kitchens dataset show that multimodal integration leads to better performance than unimodal approaches.
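The late-fusion idea mentioned above (combining scores from audio, RGB, and temporal streams) can be sketched in a few lines; the stream names, scores, and uniform weighting here are illustrative assumptions, not the thesis's actual configuration:

```python
def late_fuse(stream_scores, weights=None):
    """Combine per-class scores from several modality streams by a weighted average.

    stream_scores: dict mapping modality name -> list of per-class scores.
    weights: optional dict of per-stream weights (defaults to uniform).
    Returns the index of the highest-scoring class after fusion.
    """
    modalities = list(stream_scores)
    if weights is None:
        weights = {m: 1.0 / len(modalities) for m in modalities}
    n_classes = len(next(iter(stream_scores.values())))
    fused = [0.0] * n_classes
    for m in modalities:
        for c, score in enumerate(stream_scores[m]):
            fused[c] += weights[m] * score
    return max(range(n_classes), key=fused.__getitem__)

# Hypothetical per-class scores for three action classes from three streams:
scores = {
    "rgb":   [0.2, 0.7, 0.1],
    "flow":  [0.3, 0.5, 0.2],
    "audio": [0.1, 0.8, 0.1],
}
print(late_fuse(scores))  # class 1 wins under uniform weighting
```

Each stream can be trained independently and fused only at prediction time, which is what makes late fusion attractive when modalities differ as much as audio and RGB do.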
APA, Harvard, Vancouver, ISO, and other styles
8

Tamir, Diana Ilse. "A Social Neuroscience Perspective on Egocentric Influence." Thesis, Harvard University, 2014. http://dissertations.umi.com/gsas.harvard:11523.

Full text
Abstract:
This dissertation explores the cognitive mechanisms and motivations that guide two aspects of human social behavior: thinking about others' experiences and communicating with others. In both cases, the studies investigated the possibility that self-referential thought guides our social behavior. First, Papers 1 and 2 investigated how people come to understand others' thoughts and experiences, suggesting that people may use their own self-knowledge as a starting point for making inferences about others. Using functional magnetic resonance imaging and behavioral measures, these studies tested whether individuals make social inferences through the cognitive process of egocentric anchoring-and-adjustment, whereby individuals first anchor on self-knowledge and then serially adjust away from these anchors to correct for differences between self and other. Results provided evidence consistent with egocentric anchoring-and-adjustment: increases in self-other discrepancy corresponded both to increases in activity in the MPFC (Paper 1), a neural region associated with both self-referential thought and social cognition, and to increases in response time (Paper 2), though only for targets for which self-knowledge is particularly relevant. Paper 3 then investigated a prominent social behavior, self-disclosure, the act of sharing information about the self with others, which comprises 30-40% of human conversation. Using both functional magnetic resonance imaging and behavioral-economics methodology, five studies tested whether people communicate their thoughts and feelings to others because they are intrinsically motivated to do so. Results supported the hypothesis that individuals experience sharing their thoughts with others as subjectively rewarding: self-disclosure was associated with increased activation in brain regions that form the mesolimbic dopamine reward system, and individuals were willing to forgo money in order to self-disclose.
Moreover, both the self and the disclosure aspects of self-disclosure independently contributed to its value. Together, these papers contribute to our understanding of the ways in which our internal world grounds elements of our external social acts.
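The anchoring-and-adjustment process tested in Papers 1 and 2 can be caricatured as a serial procedure whose step count, a stand-in for response time, grows with self-other discrepancy; the step size and tolerance below are arbitrary illustrative values, not parameters from the dissertation:

```python
def adjust_from_anchor(self_view, target_view, step=0.1, tolerance=0.05):
    """Serially adjust an estimate away from a self-knowledge anchor.

    self_view: the anchor (e.g., one's own rating on some attribute).
    target_view: the other person's true value the judge converges toward.
    Returns (final_estimate, n_steps); n_steps stands in for response time.
    """
    estimate, steps = self_view, 0
    while abs(estimate - target_view) > tolerance:
        estimate += step if target_view > estimate else -step
        steps += 1
    return estimate, steps

# Larger self-other discrepancy -> more serial adjustments (slower response).
_, near = adjust_from_anchor(self_view=0.5, target_view=0.6)
_, far = adjust_from_anchor(self_view=0.5, target_view=0.9)
print(near, far)
```

The prediction the papers test falls out directly: response time (step count) should increase monotonically with the distance between self-knowledge and the target.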
Psychology
APA, Harvard, Vancouver, ISO, and other styles
9

Spera, Emiliano. "Egocentric Vision Based Localization of Shopping Cart." Doctoral thesis, Università di Catania, 2019. http://hdl.handle.net/10761/4139.

Full text
Abstract:
Indoor camera localization from egocentric images is a challenging computer vision problem that has been intensively investigated in recent years. Localizing a camera in 3D space can enable many useful applications in different domains. In this work, we address this challenge by localizing shopping carts in stores. This thesis makes three main contributions. First, we propose a new dataset for shopping cart localization which includes both RGB and depth images, together with the 3-DOF data corresponding to the cart's position and orientation in the store. The dataset is also labelled with respect to 16 different classes associated with different areas of the considered retail store. The second contribution is a benchmark study in which different methods are compared for both cart pose estimation and retail area classification. The last contribution is a computational analysis of the considered approaches.
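Evaluating the 3-DOF (x, y, orientation) cart pose described above typically involves a translation error plus an angular error wrapped into (-180°, 180°]; this is a generic sketch of those two metrics, not the thesis's exact evaluation code:

```python
import math

def pose_error(pred, true):
    """3-DOF pose error: (Euclidean position error, absolute heading error).

    pred and true are (x, y, theta_degrees) tuples.
    The heading difference is wrapped so 350 deg vs 10 deg counts as 20 deg,
    not 340 deg.
    """
    dx, dy = pred[0] - true[0], pred[1] - true[1]
    position_err = math.hypot(dx, dy)
    dtheta = (pred[2] - true[2] + 180.0) % 360.0 - 180.0
    return position_err, abs(dtheta)

print(pose_error((3.0, 4.0, 350.0), (0.0, 0.0, 10.0)))  # (5.0, 20.0)
```

The angle wrapping is the part that is easy to get wrong: without it, predictions that straddle the 0°/360° boundary are penalized by nearly a full turn.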
APA, Harvard, Vancouver, ISO, and other styles
10

Boutaleb, Mohamed Yasser. "Egocentric Hand Activity Recognition : The principal components of an egocentric hand activity recognition framework, exploitable for augmented reality user assistance." Electronic Thesis or Diss., CentraleSupélec, 2022. http://www.theses.fr/2022CSUP0007.

Full text
Abstract:
Humans use their hands for various tasks in daily life and industry, making research in this area a recent focus of significant interest. Moreover, analyzing and interpreting human behavior from visual signals is one of the most active and explored areas of computer vision. With the advent of new augmented reality technologies, researchers are increasingly interested in hand-activity understanding from a first-person perspective, exploring its suitability for human guidance and assistance. Our work builds on machine learning to contribute to this research area; deep neural networks have recently proven their outstanding effectiveness in many fields, enabling significant jumps in efficiency and robustness. This thesis's main objective is to propose a user activity recognition framework comprising four key components, which can be used to assist users during activities oriented towards specific objectives: industry 4.0 (e.g., assisted assembly, maintenance) and teaching. The system observes the user's hands and the manipulated objects from the user's viewpoint in order to recognize the performed hand activity. The desired framework must robustly recognize the user's usual activities. It must nevertheless also detect unusual ones, in order to give feedback and prevent the user from performing wrong maneuvers, a fundamental requirement for user assistance. This thesis therefore combines techniques from the research fields of computer vision and machine learning to propose the hand activity recognition components essential for a complete assistance tool.
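One common way to meet the "detect unusual activities" requirement described above is open-set rejection by softmax confidence thresholding; the threshold value and activity labels here are illustrative assumptions rather than the thesis's actual mechanism:

```python
import math

def classify_with_rejection(logits, labels, threshold=0.6):
    """Return the predicted label, or "unusual" if max softmax prob < threshold."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best] if probs[best] >= threshold else "unusual"

labels = ["screw", "unscrew", "insert"]
print(classify_with_rejection([4.0, 0.5, 0.2], labels))  # confident -> "screw"
print(classify_with_rejection([1.0, 0.9, 0.8], labels))  # uncertain -> "unusual"
```

When the classifier's probability mass is spread across known activities, the sample is flagged rather than forced into a known class, which is exactly the feedback behavior an assistance tool needs.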
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Egocentric"

1

Kecskes, Istvan, and Jacob Mey, eds. Intention, Common Ground and the Egocentric Speaker-Hearer. Berlin, New York: Mouton de Gruyter, 2008. http://dx.doi.org/10.1515/9783110211474.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Foley, Richard. Working without a net: Study of egocentric epistemology. New York: Oxford University Press, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

István, Kecskés, and Mey Jacob, eds. Intention, common ground and the egocentric speaker-hearer. Berlin: Mouton de Gruyter, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Reason and morality: A defense of the egocentric perspective. Ithaca: Cornell University Press, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Working without a net: A study of egocentric epistemology. New York: Oxford University Press, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Weiss, Liad. Egocentric Categorization: Self as a Reference Category in Product Judgment and Consumer Choice. [New York, N.Y.?]: [publisher not identified], 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hibbs, Douglas A. Solidarity or egoism?: The economics of sociotropic and egocentric influences on political behavior : Denmark in international and theoretical perspective. Aarhus, Denmark: Aarhus University Press, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Free: The end of the human condition : the biological reason why humans have had to be individual, competitive, egocentric, and aggressive. Sydney, Australia: Centre for Humanity's Adulthood, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Josephson, Erland. Självporträtt: En egocentrisk dialog. [Stockholm]: Bromberg, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Socio-egocentrism: Theory, research, and practice. San Francisco: International Scholars Publications, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Egocentric"

1

Ellenbroek, Bart, Alfonso Abizaid, Shimon Amir, Martina de Zwaan, Sarah Parylak, Pietro Cottone, Eric P. Zorrilla, et al. "Egocentric." In Encyclopedia of Psychopharmacology, 457. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-540-68706-1_1637.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Walker, Steven M., and Daryl V. Watkins. "Egocentric Cases." In Toxic Leadership, 87–95. New York: Routledge, 2022. http://dx.doi.org/10.4324/9781003202462-10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Fiset, Sylvain. "Egocentric Frame." In Encyclopedia of Animal Cognition and Behavior, 1–3. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-47829-6_1094-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Fiset, Sylvain. "Egocentric Frame." In Encyclopedia of Animal Cognition and Behavior, 2226–29. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-319-55065-7_1094.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Talavera, Estefania, Petia Radeva, and Nicolai Petkov. "Towards Egocentric Sentiment Analysis." In Computer Aided Systems Theory – EUROCAST 2017, 297–305. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-74727-9_35.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Rushton, Simon K. "Egocentric Direction and Locomotion." In Optic Flow and Beyond, 339–62. Dordrecht: Springer Netherlands, 2004. http://dx.doi.org/10.1007/978-1-4020-2092-6_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Cohen, David. "Egocentric or social animals?" In How the child's mind develops, 3rd ed., 56–67. New York: Routledge, 2018. http://dx.doi.org/10.4324/9781315201375-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Seah, Vincent Pei-wen, and Jeff S. Shamma. "Multiagent Cooperation through Egocentric Modeling." In Cooperative Control of Distributed Multi-Agent Systems, 213–29. Chichester, UK: John Wiley & Sons, Ltd, 2007. http://dx.doi.org/10.1002/9780470724200.ch9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yeap, Wai Kiang. "On Egocentric and Allocentric Maps." In Spatial Cognition IX, 62–75. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-11215-2_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Jain, Samriddhi, Renu M. Rameshan, and Aditya Nigam. "Object Triggered Egocentric Video Summarization." In Computer Analysis of Images and Patterns, 428–39. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-64698-5_36.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Egocentric"

1

Lucia, William, and Elena Ferrari. "EgoCentric." In CIKM '14: 2014 ACM Conference on Information and Knowledge Management. New York, NY, USA: ACM, 2014. http://dx.doi.org/10.1145/2661829.2661990.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Park, Hyun Soo, Jyh-Jing Hwang, Yedong Niu, and Jianbo Shi. "Egocentric Future Localization." In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2016. http://dx.doi.org/10.1109/cvpr.2016.508.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhang, Mengmi, Keng Teck Ma, Shih-Cheng Yen, Joo Hwee Lim, Qi Zhao, and Jiashi Feng. "Egocentric Spatial Memory." In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018. http://dx.doi.org/10.1109/iros.2018.8593435.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Finocchiaro, Jessica, Aisha Urooj Khan, and Ali Borji. "Egocentric Height Estimation." In 2017 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2017. http://dx.doi.org/10.1109/wacv.2017.132.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Fathi, Alireza, Ali Farhadi, and James M. Rehg. "Understanding egocentric activities." In 2011 IEEE International Conference on Computer Vision (ICCV). IEEE, 2011. http://dx.doi.org/10.1109/iccv.2011.6126269.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Thapar, Daksh, Aditya Nigam, and Chetan Arora. "Anonymizing Egocentric Videos." In 2021 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2021. http://dx.doi.org/10.1109/iccv48922.2021.00232.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Zhang, Tianyu, Weiqing Min, Jiahao Yang, Tao Liu, Shuqiang Jiang, and Yong Rui. "What If We Could Not See? Counterfactual Analysis for Egocentric Action Anticipation." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/182.

Full text
Abstract:
Egocentric action anticipation aims at predicting the near future based on past observation in first-person vision. Because future actions may be wrongly predicted due to dataset bias, we present a counterfactual analysis framework for egocentric action anticipation (CA-EAA) to enhance prediction. In the factual case, we predict the upcoming action based on visual features and semantic labels from past observation. Imagining a counterfactual situation in which no visual representation had been observed, we would obtain a counterfactual predicted action using only past semantic labels. In this way, we can reduce the side effect caused by semantic labels via a comparison between the factual and counterfactual outcomes, which moves a step towards unbiased prediction for egocentric action anticipation. We conduct experiments on two large-scale egocentric video datasets. Qualitative and quantitative results validate the effectiveness of our proposed CA-EAA.
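The factual-vs-counterfactual comparison can be sketched as subtracting the prediction obtained from labels alone from the full prediction; both the subtraction form and the scores below are illustrative assumptions about the general counterfactual-debiasing recipe, not the paper's exact formulation:

```python
def debiased_prediction(factual_scores, counterfactual_scores):
    """Subtract the label-only (counterfactual) scores from the full
    (visual + label) factual scores to suppress the label-prior bias."""
    return [f - c for f, c in zip(factual_scores, counterfactual_scores)]

# Hypothetical per-action scores: the label prior alone strongly favors
# action 0, inflating its factual score.
factual = [2.0, 1.8, 0.5]
counterfactual = [1.5, 0.2, 0.1]  # what we'd predict having seen no video at all
scores = debiased_prediction(factual, counterfactual)
best = max(range(len(scores)), key=scores.__getitem__)
print(best)  # action 1: its score survives the bias removal
```

The counterfactual pass isolates how much of the prediction comes from the label prior alone, so the difference keeps only the evidence contributed by the visual observation.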
APA, Harvard, Vancouver, ISO, and other styles
8

Li, Yin, Zhefan Ye, and James M. Rehg. "Delving into egocentric actions." In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2015. http://dx.doi.org/10.1109/cvpr.2015.7298625.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Corujeira, José G. P., and Ian Oakley. "Stereoscopic egocentric distance perception." In SAP' 13: ACM Symposium on Applied Perception 2013. New York, NY, USA: ACM, 2013. http://dx.doi.org/10.1145/2492494.2492509.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Spera, Emiliano, Antonino Furnari, Sebastiano Battiato, and Giovanni Maria Farinella. "Egocentric Shopping Cart Localization." In 2018 24th International Conference on Pattern Recognition (ICPR). IEEE, 2018. http://dx.doi.org/10.1109/icpr.2018.8545516.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Egocentric"

1

Liu, Feng, and Miguel Figliozzi. Utilizing Egocentric Video and Sensors to Conduct Naturalistic Bicycling Studies. Portland State University, August 2016. http://dx.doi.org/10.15760/trec.154.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Song, So Young, and Youn-Kyung Kim. Altercentric versus Egocentric 'Green Jeans' Advertising: Impact on Dual Emotional Warmth. Ames: Iowa State University, Digital Repository, November 2015. http://dx.doi.org/10.31274/itaa_proceedings-180814-171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Jerome, Christian J., and Bob G. Witmer. The Perception and Estimation of Egocentric Distance in Real and Augmented Reality Environments. Fort Belvoir, VA: Defense Technical Information Center, May 2008. http://dx.doi.org/10.21236/ada493544.

Full text
APA, Harvard, Vancouver, ISO, and other styles