Academic literature on the topic "HCI Systems, Human Factors, Digital Traces"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Browse the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "HCI Systems, Human Factors, Digital Traces".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "HCI Systems, Human Factors, Digital Traces"

1

Balcombe, Luke, and Diego De Leo. "Human-Computer Interaction in Digital Mental Health". Informatics 9, no. 1 (February 22, 2022): 14. http://dx.doi.org/10.3390/informatics9010014.

Full text
Abstract
Human-computer interaction (HCI) has contributed to the design and development of some efficient, user-friendly, cost-effective, and adaptable digital mental health solutions. However, HCI has not been well integrated into technological developments, resulting in quality and safety concerns. Digital platforms and artificial intelligence (AI) have good potential to improve prediction, identification, coordination, and treatment by mental health care and suicide prevention services. AI drives web-based and smartphone apps; it is mostly used for self-help and guided cognitive behavioral therapy (CBT) for anxiety and depression. Interactive AI may help real-time screening and treatment in outdated, strained, or under-resourced mental healthcare systems. The barriers to using AI in mental healthcare include accessibility, efficacy, reliability, usability, safety, security, ethics, suitable education and training, and socio-cultural adaptability. Apps, real-time machine learning algorithms, immersive technologies, and digital phenotyping are notable prospects. Generally, there is a need for better integration of human factors with machine interaction and automation, more rigorous effectiveness evaluation, and the application of blended, hybrid, or stepped care as an adjunct approach. HCI modeling may assist in the design and development of usable applications, help to recognize, acknowledge, and address the inequities of mental health care and suicide prevention, and support the digital therapeutic alliance.
2

Shanthi, N., Sathishkumar V E, K. Upendra Babu, P. Karthikeyan, Sukumar Rajendran, and Shaikh Muhammad Allayear. "Analysis on the Bus Arrival Time Prediction Model for Human-Centric Services Using Data Mining Techniques". Computational Intelligence and Neuroscience 2022 (September 26, 2022): 1–13. http://dx.doi.org/10.1155/2022/7094654.

Full text
Abstract
Human-computer interaction has become inevitable in the digital world: HCI helps humans incorporate technology to solve even their day-to-day problems. The main objective of the paper is to apply HCI in Intelligent Transportation Systems. In India, the most common and convenient mode of transportation is the bus. Every state government provides bus transportation on all routes at an affordable cost. The main difficulty faced by passengers is the lack of information about which bus numbers serve a particular route and the Estimated Time of Arrival (ETA) of the buses. Buses may be delayed for different reasons, including heavy traffic, breakdowns, and bad weather. Passengers waiting at bus stops are aware of neither the delay nor the arrival time. These issues can be resolved by providing an HCI-based web/mobile application that lets passengers track their bus locations in real time and check the ETA of a particular bus, calculated using machine learning techniques that account for environmental dynamics and other factors such as traffic density and weather conditions. This can be achieved by developing a real-time bus management system for the benefit of passengers, bus drivers, and bus managers, effectively addressing the problems of bus timing transparency and arrival time forecasting. The buses are equipped with a real-time vehicle-tracking module containing a Raspberry Pi, GPS, and GSM. Traffic density at the bus's current location and weather data are among the factors used for ETA prediction with the Support Vector Regression algorithm. The model showed an RMSE of 27 seconds when tested and performs well compared with other models.
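The modeling pipeline the abstract describes can be sketched compactly. Below is a minimal, illustrative Support Vector Regression example; the feature set, the synthetic data, and the hyperparameters are our own assumptions for demonstration, not the authors' implementation.

```python
# Illustrative sketch of SVR-based ETA prediction (assumed features and data).
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Hypothetical features: distance_km, traffic_density (0-1), rain_mm, hour_of_day.
X = rng.uniform([0.5, 0.0, 0.0, 6.0], [15.0, 1.0, 20.0, 22.0], size=(500, 4))
# Synthetic travel times in seconds: distance-dominated, slowed by traffic and rain.
y = X[:, 0] * 180 * (1 + 0.8 * X[:, 1] + 0.02 * X[:, 2]) + rng.normal(0, 30, 500)

# Scale features, then fit an RBF-kernel Support Vector Regressor.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=5.0))
model.fit(X[:400], y[:400])

# Evaluate with RMSE, the metric the paper reports (~27 s on its own data).
rmse = np.sqrt(mean_squared_error(y[400:], model.predict(X[400:])))
print(f"ETA RMSE on held-out samples: {rmse:.1f} s")
```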
3

Евдокимов, К. Н. "ON THE ISSUE OF IMPROVING SPECIAL MEASURES TO COUNTERACT TECHNOTRONIC CRIME IN THE RUSSIAN FEDERATION". Digest of research works "Criminalistics: yesterday, today, tomorrow", no. 2(22) (June 30, 2022): 45–55. http://dx.doi.org/10.55001/2587-9820.2022.93.56.006.

Full text
Abstract
The modern world is characterized by the rapid development of information relations, information, communication, and robotic technologies, global cyberspace, social information networks and cognitive technologies, as well as the total computerization of human society. We live in the era of the fourth scientific and technological revolution, in which electronic computers, semiconductor computers, and the Internet have been succeeded by global information systems, information and telecommunication networks, automated control systems and instant information exchange systems of a new generation, and new means of creating, using, processing, and distributing information, based on different physical principles. Russian technotronic society is completely dependent on information, communication, aerospace, energy, transport, production, biological, industrial, scientific, and other high technologies. At the same time, most people, lacking special knowledge and skills, have little idea of the principles by which these technologies work. The technical illiteracy of the population, the growing volume of information, the complexity and diversity of modern knowledge, a psychology of consumption rather than of creation, creativity, and cognition, the lack of an individual and public culture of technological security, and other factors of a scientific and technical nature have led to the emergence of computer crime and its subsequent transformation into technotronic crime. The article argues that technotronic crime is one of the systemic threats to the national security of the Russian Federation, and the author substantiates the need to improve the complex of measures to counter this new type of high-tech crime. The study concludes that successfully counteracting technotronic crime requires, in addition to legal, organizational, and technical measures, the adoption of certain forensic measures. In particular, it proposes the creation of a single database for Russian law enforcement agencies, "Technotron", with elements of artificial intelligence, recording technotronic criminals, the criminal acts they have committed, and the digital traces of crimes that have been detected. With appropriate software, the database could automatically compare "digital traces" and computer viruses seized from crime scenes against information about technotronic criminals and their proven criminal acts; when matches are detected, it could return information about unsolved crimes and the persons who, with a high degree of probability, committed them. The use of forensic records with elements of artificial intelligence would contribute to the solving of complex technotronic crimes and to their more effective investigation.
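To make the matching idea concrete, here is a rough sketch of how seized digital traces could be compared against a registry of known traces. This is our own illustration, not the proposed "Technotron" system; exact hash matching and all names are assumptions.

```python
# Hypothetical sketch: matching seized digital traces against known records.
import hashlib

# Assumed registry: digest of a known trace -> the case it is linked to.
known_traces = {
    hashlib.sha256(b"malware-sample-A").hexdigest(): "case 101 (unsolved)",
    hashlib.sha256(b"keylogger-build-7").hexdigest(): "case 214 (convicted offender)",
}

def match_traces(seized):
    """Return (digest, linked case) pairs for seized traces already on record."""
    hits = []
    for blob in seized:
        digest = hashlib.sha256(blob).hexdigest()
        if digest in known_traces:
            hits.append((digest, known_traces[digest]))
    return hits

# A real system would need fuzzy or similarity hashing; exact hashing only
# matches byte-identical artifacts.
print(match_traces([b"malware-sample-A", b"unknown-artifact"]))
```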
4

Kerasidou, Xaroula (Charalampia). "Regressive Augmentation: Investigating Ubicomp’s Romantic Promises". M/C Journal 16, no. 6 (November 7, 2013). http://dx.doi.org/10.5204/mcj.733.

Full text
Abstract
Machines that fit the human environment instead of forcing humans to enter theirs will make using a computer as refreshing as taking a walk in the woods. Mark Weiser on ubiquitous computing (21st Century Computer 104) In 2007, a forum entitled HCI 2020: Human Values in a Digital Age sought to address the questions: What will our world be like in 2020? Digital technologies will continue to proliferate, enabling ever more ways of changing how we live. But will such developments improve the quality of life, empower us, and make us feel safer, happier and more connected? Or will living with technology make it more tiresome, frustrating, angst-ridden, and security-driven? What will it mean to be human when everything we do is supported or augmented by technology? (Harper et al. 10) The forum came as a response to what many call post-PC technological developments: developments that seek to engulf our lives in digital technologies which in their various forms are meant to support and augment our everyday lives. One of these developments has been the project of ubiquitous computing along with its kin project, tangible computing. Ubiquitous computing (ubicomp) made its appearance in the late 1980s in the labs of Xerox’s Palo Alto Research Center (PARC) as the “third wave” in computing, following those of the mainframe and personal computing (Weiser, Open House 2). Mark Weiser, who coined the term, along with his collaborators at Xerox PARC, envisioned a “new technological paradigm” which would leave behind the traditional one-to-one relationship between human and computer, and spread computation “ubiquitously, but invisibly, throughout the environment” (Weiser, Gold and Brown 693). Since then, the field has grown and now counts several peer-reviewed journals, conferences, and academic and industrial research centres around the world, which have set out to study the new “post-PC computing” under names such as Pervasive Computing, Ambient Intelligence, Tangible Computing, The Internet of Things, etc. Instead of providing a comprehensive account of all the different ubicomp incarnations, this paper seeks to focus on the early projects and writings of some of ubicomp’s most prominent figures and tease out, as a way of critique, the origins of some of its romantic promises. From the outset, ubiquitous computing was heavily informed by a human-centred approach that sought to shift the focus from the personal computer back to its users. On the grounds that the PC has dominated the technological landscape at the expense of its human counterparts, ubiquitous computing promised a different human-machine interaction, with “machines that fit the human environment instead of forcing humans to enter theirs” (104, my italics), placing the two in opposite and antagonistic terrains. The problem comes about in the form of interaction between people and machines … So when the two have to meet, which side should dominate? In the past, it has been the machine that dominates. In the future, it should be the human. (Norman 140) Within these early ubicomp discourses, the computer came to embody a technological menace, the machine that threatened the liberal humanist value of being free and in control. For example, in 1999, in a book that was characterized as “the bible of ‘post-PC’ thinking” by Business Week, Donald Norman exclaimed: we have let ourselves be trapped. … I don’t want to be controlled by a technology. I just want to get on with my life, … So down with PC’s; down with computers.
All they do is complicate our lives. (72) And we read on the website of MIT’s first ubicomp project Oxygen: For over forty years, computation has centered about machines, not people. We have catered to expensive computers, pampering them in air-conditioned rooms or carrying them around with us. Purporting to serve us, they have actually forced us to serve them. Ubiquitous computing, then, in its early incarnations, was presented as the solution: the human-centred, somewhat natural approach, which would shift the emphasis away from the machine and bring control back to its legitimate owner, the liberal autonomous human subject, becoming the facilitator of our apparently threatened humanness. Its promise? An early promise of regressive augmentation, I would say, since it promised to augment our lives, not by changing them, but by returning us to a past, better world that the alienating PC has supposedly displaced, enabling us to “have more time to be more fully human” (Weiser and Brown). And it sought to achieve this through the key characteristic of invisibility, which was based on the paradox that while more and more computers will permeate our lives, they will effectively disappear.

Ubicomp’s Early Romantic Promises

The question of how we can make computers disappear has been addressed in computer research in various ways. One of the earliest and most prominent of these is the approach which focuses on the physicality of the world, seeking to build tangible interfaces. One of the main advocates of this approach is MIT’s Tangible Media Group, led by Professor Hiroshi Ishii. The group has been working on their vision, which they call “Tangible Bits,” for almost two decades now, and in 2009 they were awarded the “Lasting Impact Award” at the ACM Symposium on User Interface Software and Technology (UIST) for their metaDesk platform, presented in 1997 (fig. 1), which explores the coupling of everyday physical objects with digital information (Ullmer and Ishii). Also, in 2004, in a special paper titled “Bottles: A Transparent Interface as a Tribute to Mark Weiser”, Ishii presented once again an early project he and his group developed in 1999, and for which they were personally commended by Weiser himself. According to Ishii, bottles (fig. 2)—a system which comprises three glass bottles “filled with music”, each representing a different musical instrument, placed on a Plexiglas “stage” and controlled by their physical manipulation (moving, opening or closing them)—no less, “illustrates Mark Weiser’s vision of the transparent (or invisible) interface that weaves itself into the fabric of everyday life” (1299).

Figure 1: metaDesk platform (MIT Tangible Media Group)
Figure 2: musicBottles (MIT Tangible Media Group)

Tangible computing was based on the premise that we inhabit two worlds: the physical world and cyberspace, or, as Ishii and Ullmer put it, the world of atoms and the world of bits, claiming that there is a gap between these two worlds that left us “torn between these parallel but disjoint spaces” (1). This agreed with Weiser’s argument that cyberspace, and specifically the computer, has taken centre stage, leaving the real world—the real people, the real interactions—in the background and neglected. Tangible computing then sought to address this problem by "bridging the gaps between both cyberspace and the physical environment" (1).
As Ishii and Ullmer wrote in 1997: The aim of our research is to show concrete ways to move beyond the current dominant model of GUI [Graphic User Interface] bound to computers with a flat rectangular display, windows, a mouse, and a keyboard. To make computing truly ubiquitous and invisible, we seek to establish a new type of HCI that we call "Tangible User Interfaces" (TUIs). TUIs will augment the real physical world by coupling digital information to everyday physical objects and environments. (2) “Our intention is to take advantage of natural physical affordances to achieve a heightened legibility and seamlessness of interaction between people and information” (2). In his earlier work, computer scientist Paul Dourish turned to phenomenology and the concept of embodiment in order to develop an understanding of interaction as embodied. This was prior to his recent work with cultural anthropologist Bell, where they examined the motivating mythology of ubiquitous computing along with the messiness of its lived experience (Dourish and Bell). Dourish, in this earlier work, observed that one of the common critical features early tangible and ubiquitous computing shared is that “they both attempt to exploit our natural familiarity with the everyday environment and our highly developed spatial and physical skills to specialize and control how computation can be used in concert with naturalistic activities” (Context-Aware Computing 232). They then sought to exploit this familiarity in order to build natural computational interfaces that fit seamlessly within our everyday, real world (Where the Action Is 17). This idea of an existing set of natural tactile skills appears to come hand-in-hand with a nostalgic, romantic view of an innocent, simple, and long gone world that the early projects of tangible and ubiquitous computing sought to revive: a world where the personal computer not only did not fit, an innocent world in fact displaced by the personal computer. In 1997, Ishii and Ullmer wrote about their decision to start their investigations about the “future of HCI” in the museum of the Collection of Historic Scientific Instruments at Harvard University in their efforts to get inspired by “the aesthetics and rich affordances of these historical scientific instruments”, concerned that, “alas, much of this richness has been lost to the rapid flood of digital technologies” (1). Elsewhere Ishii explained that the origin of his idea to design a bottle interface began with the concept of a “weather forecast bottle”, an idea he intended to develop as a present for his mother. “Upon opening the weather bottle, she would be greeted by the sound of singing birds if the next day’s weather was forecasted to be clear” (1300). Here, we are introduced to a nice elderly lady who has opened thousands of bottles while cooking for her family in her kitchen. This senior lady, who is made to embody the symbolic alignment between woman, the domestic and nature (see Soper, Rose, Plumwood), “has never clicked a mouse, typed a URL, nor booted a computer in her life” (Ishii 1300). Instead, “my mother simply wanted to know the following day’s weather forecast. Why should this be so complicated?” (1300, my italics). Weiser also mobilised nostalgic sentiments in order to paint a picture of what it would be to live with ubiquitous computing.
So, for example, when seeking a metaphor for ubiquitous computing, he proposed “childhood – playful, a building of foundations, constant learning, a bit mysterious and quickly forgotten by adults” (Not a Desktop 8). He viewed the ubicomp home as the ideal retreat to a state of childhood: playfully reaching out to the unknown, while being securely protected and safely “at home” (Open House). These early ideas of a direct experience of the world through our bodily senses, along with the romantic view of a past, simple, and better world that the computer threatened and that future technological developments promised, could point towards what Leo Marx has described as America’s “pastoral ideal”, a force that, according to Marx, is ingrained in the American view of life. Balancing between primitivism and civilisation, nature and culture, the pastoral ideal “is an embodiment of what Lovejoy calls ‘semi-primitivism’; it is located in a middle ground somewhere ‘between’, yet in a transcendent relation to, the opposing forces of civilisation and nature” (Marx 23). It appears that the early advocates of tangible and ubiquitous computing sought to strike a similar balance to the American pastoral ideal; a precarious position that managed to reconcile the disfavour and fear of Europe’s “satanic mills” with an admiration for the technological power of the Industrial Revolution, the admiration for technological development with the bucolic ideal of an unspoiled and pure nature. But how was such a balance to be achieved? How could the ideal middle state be achieved, balancing the opposing forces of technological development and the dream of the return to a serene pastoral existence? According to Leo Marx, for the European colonisers, the New World was to provide the answer to this exact question (101). The American landscape was to become the terrain where old and new, nature and technology harmonically meet to form a libertarian utopia. Technology was seen as “‘naturally arising’ from the landscape as another natural ‘means of happiness’ decreed by the Creator in his design of the continent. So, far from conceding that there might be anything alien or ‘artificial’ about mechanization, technology was seen as inherent in ‘nature’; both geographic and human” (160). Since then, according to Marx, the idea of the “return” to a new Golden Age has been engrained in American culture, and it appears that it informs ubiquitous computing’s own early visions. The idea of a “naturally arising” technology which would facilitate our return to the once lost garden of security and nostalgia appears to have become a common theme within ubiquitous computing discourses, making appearances across time and borders. So, for example, while in 1991 Weiser envisioned that ubiquitous technologies will make “using a computer as refreshing as taking a walk in the woods” (21st Century Computer 11), twelve years later Marzano, writing about Philips’s vision of Ambient Intelligence, promised that “the living space of the future could look more like that of the past than that of today” (9). While the pastoral defined nature in terms of the geographical landscape, early ubiquitous computing appeared to define nature in terms of the objects, tools and technologies that surround us and our interactions with them.
While pastoral America defined itself in contradistinction to the European industrial sites and the dirty, smoky and alienating cityscapes, within those early ubiquitous computing discourses the role of the alienating force was assigned to the personal computer. And whereas the personal computer with its “grey box” was early on rejected as the modern embodiment of the European satanic mills, computation was welcomed as a “naturally arising” technological solution which would infuse the objects which, “through the ages, … are most relevant to human life—chairs, tables and beds, for instance, … the objects we can’t do without” (Marzano 9). Or else, it would infuse the—newly constructed—natural landscape, fulfilling the promise that when the “world of bits” and the “world of atoms” are finally bridged, the balance will be restored. But how did these two worlds come into existence? How did bits and atoms come to occupy different and separate ontological spheres? Far from being obvious or commonsensical, the idea of the separation between bits and atoms has a history that grounds it in specific times and places, and consequently makes those early ubiquitous and tangible computing discourses part of a bigger story that, as documented (Hayles) and argued (Agre), started some time ago. The view that we inhabit the two worlds of atoms and bits (Ishii and Ullmer) was endorsed by both early ubiquitous and tangible computing; it was based on the idea of the separation of computation from its material instantiation, presenting the former as a free-floating entity able to infuse our world. As we saw earlier, tangible computing took the idea of this separation as an unquestionable fact, which then served as the basis for its research goals. As we read on the home page of the Tangible Media Group’s website: Where the sea of bits meets the land of atoms, we are now facing the challenge of reconciling our dual citizenship in the physical and digital worlds. "Tangible Bits" is our vision of Human Computer Interaction (HCI): we seek a seamless coupling of bits and atoms by giving physical form to digital information and computation (my italics). The idea that digital information does not have to have a physical form, but is given one in order to achieve a coupling of the two worlds, not only reinforces the view of digital information as an immaterial entity, but also places it in a privileged position against the material world. Under this light, those early ideas of augmentation or of “awakening” the physical world (Ishii and Ullmer 3) appear to be based on the idea of a passive material world that can be brought to life and become worthy and meaningful through computation, making ubiquitous computing part of a bigger and more familiar story. Restaging the dominant Cartesian dualism between the “ensouled” subject and the “soulless” material object, the latter is rendered passive, manipulable, and void of agency and, just like Ishii’s old bottles, it is performed as a mute, docile “empty vessel” ready to carry out any of its creator’s wishes: hold perfumes and beverages, play music, or tell the weather. At the same time, computation was presented as the force that could breathe life into a mundane and passive world; a free-floating, somewhat natural, immaterial entity, like oxygen (hence the name of MIT’s first ubicomp project), like the air we breathe that could travel unobstructed through any medium, our everyday objects and our environment.
But it is interesting to see that in those early ubicomp discourses computation’s power did not extend too far. While computation appeared to be foregrounded as a powerful, almost magic, entity able to give life and soul to a soulless material world, at the same time it was presented as controlled and muted. The computational power that would fill our lives, according to Weiser’s ubiquitous computing, would be invisible: it wouldn’t “intrude on our consciousness” (Weiser, Not a Desktop 7); it would leave no traces and bring no radical changes. If anything, it would enable us to re-establish our humanness and return us to our past, natural state, promising not to change us, or our lives, by introducing something new and unfamiliar, but to enable us to “remain serene and in control” (Weiser and Brown). In other words, ubiquitous computing, as this early story goes, would not be alienating, complex, obtrusive, or even noticeable, for that matter, and so, at the end of this paper, we come full circle to ubicomp’s early goals of invisibility with its underpinnings of the precarious pastoral ideal. This short paper focused on some of ubicomp’s early stories and projects and specifically on its promise to return us to a past and implicitly better world that the PC has arguably displaced. By reading these early promises of, what I call, regressive augmentation through Marx’s work on the “pastoral ideal,” this paper sought to tease out, in order to unsettle, the origins of some of ubicomp’s romantic promises.

References

Agre, P. E. Computation and Human Experience. New York: Cambridge University Press, 1997. Dourish, P. “Seeking a Foundation for Context-Aware Computing.” Human–Computer Interaction 16.2-4 (2001): 229-241. ———. Where the Action Is: The Foundations of Embodied Interaction. Cambridge: MIT Press, 2001. Dourish, P., and Genevieve Bell. Divining a Digital Future: Mess and Mythology in Ubiquitous Computing. Cambridge, Massachusetts: MIT Press, 2011. Grimes, A., and R. Harper. “Celebratory Technology: New Directions for Food Research in HCI.” In CHI’08, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. New York: ACM, 2008. 467-476. Harper, R., T. Rodden, Y. Rogers, and A. Sellen (eds.). Being Human: Human-Computer Interaction in the Year 2020. Microsoft Research, 2008. 1 Dec. 2013 ‹http://research.microsoft.com/en-us/um/Cambridge/projects/hci2020/downloads/BeingHuman_A3.pdf›. Hayles, K. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press, 1999. Ishii, H. “Bottles: A Transparent Interface as a Tribute to Mark Weiser.” IEICE Transactions on Information and Systems 87.6 (2004): 1299-1311. Ishii, H., and B. Ullmer. “Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms.” In CHI ’97, Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems. New York: ACM, 1997. 234-241. Marx, L. The Machine in the Garden: Technology and the Pastoral Ideal in America. 35th ed. New York: Oxford University Press, 2000. Marzano, S. “Cultural Issues in Ambient Intelligence.” In E. Aarts and S. Marzano (eds.), The New Everyday: Views on Ambient Intelligence. Rotterdam: 010 Publishers, 2003. Norman, D. The Invisible Computer: Why Good Products Can Fail, the Personal Computer Is So Complex, and Information Appliances Are the Solution. Cambridge, Mass.: MIT Press, 1999. Plumwood, V. Feminism and the Mastery of Nature. London, New York: Routledge, 1993. Rose, G. Feminism and Geography.
Cambridge: Polity, 1993. Soper, K. “Naturalised Woman and Feminized Nature.” In L. Coupe (ed.), The Green Studies Reader: From Romanticism to Ecocriticism. London: Routledge, 2000. Ullmer, B., and H. Ishii. “The metaDESK: Models and Prototypes for Tangible User Interfaces.” In UIST '97, Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology. New York: ACM, 1997. 223-232. Weiser, M. “The Computer for the 21st Century.” Scientific American 265.3 (1991): 94-104. ———. “The Open House.” ITP Review 2.0, 1996. 1 Dec. 2013 ‹http://makingfurnitureinteractive.files.wordpress.com/2007/09/wholehouse.pdf›. ———. “The World Is Not a Desktop.” Interactions 1.1 (1994): 7-8. Weiser, M., and J.S. Brown. “The Coming Age of Calm Technology.” 1996. 1 Dec. 2013 ‹http://www.johnseelybrown.com/calmtech.pdf›. Weiser, M., R. Gold, and J.S. Brown. “The Origins of Ubiquitous Computing at PARC in the Late 80s.” Pervasive Computing 38 (1999): 693-696.
5

Döllinger, Nina, Carolin Wienrich, and Marc Erich Latoschik. "Challenges and Opportunities of Immersive Technologies for Mindfulness Meditation: A Systematic Review". Frontiers in Virtual Reality 2 (April 27, 2021). http://dx.doi.org/10.3389/frvir.2021.644683.

Full text
Abstract
Mindfulness is considered an important factor in an individual's subjective well-being. Consequently, Human-Computer Interaction (HCI) has investigated approaches that strengthen mindfulness, e.g., by developing multimedia technologies to support mindfulness meditation. These approaches often use smartphones, tablets, or consumer-grade desktop systems to allow everyday usage in users' private lives or in the scope of organized therapies. Virtual, Augmented, and Mixed Reality (VR, AR, MR; in short: XR) significantly extend the design space for such approaches. XR covers a wide range of potential sensory stimulation, perceptive and cognitive manipulations, content presentation, interaction, and agency. These facilities are linked to typical XR-specific perceptions that are conceptually closely related to mindfulness research, such as (virtual) presence and (virtual) embodiment. However, a successful exploitation of XR that strengthens mindfulness requires a systematic analysis of the potential interrelation and influencing mechanisms between XR technology (its properties, factors, and phenomena) and existing models and theories of the construct of mindfulness. This article reports such a systematic analysis of XR-related research from HCI and the life sciences to determine the extent to which existing research frameworks on HCI and mindfulness can be applied to XR technologies, the potential of XR technologies to support mindfulness, and open research gaps. Fifty papers from the ACM Digital Library and the National Institutes of Health's National Library of Medicine (PubMed), with and without empirical efficacy evaluation, were included in our analysis. The results reveal that, at the current time, empirical research on XR-based mindfulness support mainly focuses on therapy and therapeutic outcomes. Furthermore, most of the currently investigated XR-supported mindfulness interactions are limited to vocally guided meditations within nature-inspired virtual environments. While an analysis of empirical research on those systems did not reveal differences in mindfulness compared to non-mediated mindfulness practices, various design proposals illustrate that XR has the potential to provide interactive and body-based innovations for mindfulness practice. We propose a structured approach for future work to specify and further explore the potential of XR as a mindfulness support. The resulting framework provides design guidelines for XR-based mindfulness support based on the elements and psychological mechanisms of XR interactions.
6

Jethani, Suneel, and Robbie Fordyce. "Darkness, Datafication, and Provenance as an Illuminating Methodology". M/C Journal 24, no. 2 (April 27, 2021). http://dx.doi.org/10.5204/mcj.2758.

Full text
Abstract
Data are generated and employed for many ends, including governing societies, managing organisations, leveraging profit, and regulating places. In all these cases, data are key inputs into systems that paradoxically are implemented in the name of making societies more secure, safe, competitive, productive, efficient, transparent and accountable, yet do so through processes that monitor, discipline, repress, coerce, and exploit people. (Kitchin, 165)

Introduction

Provenance refers to the place of origin or earliest known history of a thing. It refers to the custodial history of objects. It is a term that is commonly used in the art world but has also come into the language of other disciplines such as computer science. It has also been applied in reference to the transactional nature of objects in supply chains and circular economies. In an interview with Scotland’s Institute for Public Policy Research, Adam Greenfield suggests that provenance has a role to play in the “establishment of reliability”: if a “transaction or artifact has a specified provenance, then that assertion can be tested and verified to the satisfaction of all parties” (Lawrence). Recent debates on the unrecognised effects of digital media have convincingly argued that data is fully embroiled within capitalism, but it is necessary to remember that data is more than just a transactable commodity. One challenge in bringing processes of datafication into critical light is how we understand what happens to data from its point of acquisition to the point where it becomes instrumental in the production of outcomes that are of ethical concern. All data gather their meaning through relationality, whether acting as a representation of an exterior world or representing relations between other data points. Data objectifies relations, and despite any higher-order complexities, at its core, data is involved in factualising a relation into a binary. Assumptions like these about data shape reasoning, decision-making and evidence-based practice in private, personal and economic contexts. If processes of datafication are to be better understood, then we need to seek out conceptual frameworks that are adequate to the way that data is used and understood by its users. Deborah Lupton suggests that often we give data “other vital capacities because they are about human life itself, have implications for human life opportunities and livelihoods, [and] can have recursive effects on human lives (shaping action and concepts of embodiment ... selfhood [and subjectivity]) and generate economic value”. But when data are afforded such capacities, the analysis of its politics also calls for us to “consider context” and “making the labour [of datafication] visible” (D’Ignazio and Klein). For Jenny L. Davis, getting beyond simply thinking about what data affords involves bringing to light how it continually and dynamically requests, demands, encourages, discourages, and refuses certain operations and interpretations. It is in this re-orientation of the question from what to how that “practical analytical tool[s]” (Davis) can be found. Davis writes: requests and demands are bids placed by technological objects, on user-subjects. Encourage, discourage and refuse are the ways technologies respond to bids user-subjects place upon them. Allow pertains equally to bids from technological objects and the object’s response to user-subjects.
(Davis) Building on Lupton, Davis, and D’Ignazio and Klein, we see three principles that we consider crucial for work on data, darkness and light: data is not simply a technological object that exists within sociotechnical systems without having undergone any priming or processing, so as a consequence the data-collecting entity imposes standards and ways of imagining data before it comes into contact with user-subjects; data is not neutral and does not possess qualities that make it equivalent to the things that it comes to represent; data is partial, situated, and contingent on technical processes, but the outcomes of its use afford it properties beyond those that are purely informational. This article builds from these principles and traces a framework for investigating the complications arising when data moves from one context to another. We draw from “data provenance” as it is applied in the computing and informational sciences, where it is used to query the location and accuracy of data in databases. In developing “data provenance”, we adapt provenance from an approach that solely focuses on technical infrastructures and material processes that move data from one place to another and turn to sociotechnical, institutional, and discursive forces that bring about data acquisition, sharing, interpretation, and re-use. As data passes through open, opaque, and darkened spaces within sociotechnical systems, we argue that provenance can shed light on gaps and overlaps in technical, legal, ethical, and ideological forms of data governance. Whether data becomes exclusive by moving from light to dark (as has happened with the removal of many pages and links from Facebook around the Australian news revenue-sharing bill), or is publicised by shifting from dark to light (such as the Australian government releasing investigative journalist Andie Fox’s welfare history to the press), or even recontextualised from one dark space to another (as with genetic data shifting from medical to legal contexts, or the theft of personal financial data), there is still a process of transmission here that we can assess and critique through provenance. These different modalities, which guide data acquisition, sharing, interpretation, and re-use, cascade and influence different elements and apparatuses within data-driven sociotechnical systems to different extents depending on context. Attempts to illuminate and make sense of these complex forces, we argue, expose data-driven practices as inherently political in terms of whose interests they serve.

Provenance in Darkness and in Light

When processes of data capture, sharing, interpretation, and re-use are obscured, it impacts on the extent to which we might retrospectively examine cases where malpractice in responsible data custodianship and stewardship has occurred, because it makes it difficult to see how things have been rendered real and knowable, changed over time, had causality ascribed to them, and to what degree of confidence a decision has been made based on a given dataset. To borrow from this issue’s concerns, the paradigm of dark spaces covers a range of different kinds of valences on the idea of private, secret, or exclusive contexts. We can parallel it with the idea of ‘light’ spaces, which equally holds a range of different concepts about what is open, public, or accessible.
For instance, in the use of social data garnered from online platforms, the practices of academic researchers and analysts working in the private sector often fall within a grey zone when it comes to consent and transparency. Here the binary notion of public and private is complicated by the passage of data from light to dark (and back to light). Writing in a different context, Michael Warner complicates the notion of publicness. He observes that the idea of something being public is in and of itself always sectioned off, divorced from being fully generalisable, and it is “just whatever people in a given context think it is” (11). Michael Hardt and Antonio Negri argue that publicness is already shadowed by an idea of state ownership, leaving us in a situation where public and private already both sit on the same side of the propertied/commons divide as if the “only alternative to the private is the public, that is, what is managed and regulated by states and other governmental authorities” (vii). The same can be said about the way data is conceived as a public good or common asset. These ideas of light and dark are useful categorisations for deliberately moving past the tensions that arise when trying to qualify different subspecies of privacy and openness. The problem with specific linguistic dyads of private vs. public, or open vs. closed, and so on, is that they are embedded within legal, moral, technical, economic, or rhetorical distinctions that already involve normative judgements on whether such categories are appropriate or valid. Data may be located in a dark space for legal reasons that fall under the legal domain of ‘private’ or it may be dark because it has been stolen. It may simply be inaccessible, encrypted away behind a lost password on a forgotten external drive. Equally, there are distinctions around lightness that can be glossed – the openness of Open Data (see: theodi.org) is of an entirely separate category to the AACS encryption key, which was illegally but enthusiastically shared across the internet in 2007 to the point where it is now accessible on Wikipedia. The language of light and dark spaces allows us to cut across these distinctions and discuss in deliberately loose terms the degree to which something is accessed, with any normative judgments reserved for the cases themselves. Data provenance, in this sense, can be used as a methodology to critique the way that data is recontextualised from light to dark, dark to light, and even within these distinctions. Data provenance critiques the way that data is presented as if it were “there for the taking”. This also suggests that when data is used for some or another secondary purpose – generally for value creation – some form of closure or darkening is to be expected. Data in the public domain is more than simply a specific informational thing: there is always context, and this contextual specificity, we argue, extends far beyond anything that can be captured in a metadata schema or a licensing model. Even the transfer of data from one open, public, or light context to another will evoke new degrees of openness and luminosity that should not be assumed to be straightforward. And with this a new set of relations between data-user-subjects and stewards emerges. 
The movement of data between public and private contexts, by virtue of the growing amount of personal information that is generated through the traces left behind as people make use of increasingly digitised services going about their everyday lives, means that data-motile processes are constantly occurring behind the scenes – in darkness – where data comes into the view, or possession, of third parties without obvious mechanisms of consent, disclosure, or justification. Given that there are “many hands” (D’Ignazio and Klein) involved in making data portable between light and dark spaces, equally there can be diversity in the approaches taken to generate critical literacies of these relations. There are two complexities that we argue are important for considering the ethics of data motility from light to dark, and this differs from the concerns that we might have when we think about other illuminating tactics such as open data publishing, freedom-of-information requests, or when data is anonymously leaked in the public interest. The first is that the terms of ethics must be communicable to individuals and groups whose data literacy may be low, effectively non-existent, or not oriented around the objective of upholding or generating data-luminosity as an element of a wider, more general form of responsible data stewardship. Historically, a productive approach to data literacy has been finding appropriate metaphors from adjacent fields that can help add depth – by way of analogy – to understanding data motility. Here we return to our earlier assertion that data is more than simply a transactable commodity. Consider the notion of “giving” and “taking” in the context of darkness and light. The analogy of giving and taking is deeply embedded in the notion of data acquisition and sharing by virtue of the etymology of the word data itself: in Latin, “things having been given”, whereby in French données, a natural gift, perhaps one that is given to those that attempt capture for the purposes of empiricism – representation in quantitative form is a quality that is given to phenomena being brought into the light. However, in the contemporary parlance of “analytics”, data is “taken” in the form of recording, measuring, and tracking. Data is considered to be something valuable enough to give or take because of its capacity to stand in for real things. The empiricist’s preferred method is to take rather than to accept what is given (Kitchin, 2); the data-capitalist’s is to incentivise the act of giving or to take what is already given (or yet to be taken). Because data-motile processes are not simply passive forms of reading what is contained within a dataset, the materiality and subjectivity of data extraction and interpretation is something that should not be ignored. These processes represent the recontextualisation of data from one space to another and are expressed in the landmark case of Cambridge Analytica, where a private research company extracted data from Facebook and used it to engage in psychometric analysis of unknowing users.

Table 1: Mechanisms of Data Capture (data capture mechanism: characteristics and approach to data stewardship)
Historical: Information created, recorded, or gathered about people or things directly from the source or a delegate, but accessed for secondary purposes.
Observational: Represents patterns and realities of everyday life, collected by subjects by their own choice and with some degree of discretion over the methods. Third parties access this data through a reciprocal arrangement with the subject (e.g., in exchange for providing a digital service such as online shopping, banking, healthcare, or social networking).
Purposeful: Data gathered with a specific purpose in mind and collected with the objective to manipulate its analysis to achieve certain ends.
Integrative: Places less emphasis on specific data types but rather looks towards social and cultural factors that afford access to and facilitate the integration and linkage of disparate datasets.

There are ethical challenges associated with data that has been sourced from pre-existing sets or that has been extracted from websites and online platforms through scraping and then enriched through cleaning, annotation, de-identification, aggregation, or linking to other data sources (tab. 1). As a way to address this challenge, our suggestion of “data provenance” can be defined as where a data point comes from, how it came into being, and how it became valuable for some or another purpose. In developing this idea, we borrow from both the computational and biological sciences (Buneman et al.), where provenance, as a form of qualitative inquiry into data-motile processes, centres around understanding the origin of a data point as part of a broader, almost forensic, analysis of quality and error-potential in datasets. Provenance is an evaluation of a priori computational inputs and outputs from the results of database queries and audits. Provenance can also be applied to other contexts where data passes through sociotechnical systems, such as behavioural analytics, targeted advertising, machine learning, and algorithmic decision-making. Conventionally, data provenance is based on understanding where data has come from and why it was collected. Both these questions are concerned with the evaluation of the nature of a data point within the wider context of a database that is itself situated within a larger sociotechnical system where the data is made available for use. In its conventional sense, provenance is a means of ensuring that a data point is maintained as a single source of truth (Buneman, 89), and by way of a reproducible mechanism which allows for its path through a set of technical processes, it affords the assessment of how reliable a system’s output might be by sheer virtue of the ability for one to retrace the steps from point A to B. “Where” and “why” questions are illuminating because they offer an ends-and-means view of the relation between the origins and ultimate uses of a given data point or set. Provenance is interesting when studying data luminosity because means and ends have much to tell us about the origins and uses of data in ways that gesture towards a more accurate and structured research agenda for data ethics that takes the emphasis away from individual moral patients and reorients it towards practices that occur within information management environments. Provenance offers researchers seeking to study data-driven practices a similar heuristic to a journalist’s line of questioning: who, what, when, where, why, and how? This last question of how is something that can be incorporated into conventional models of provenance, making them useful in data ethics. The question of how data comes into being extends questions of power, legality, literacy, permission-seeking, and harm in an entangled way and notes how these factors shape the nature of personal data as it moves between contexts.
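To make the "where, why, how" line of questioning concrete, the following is a minimal sketch, under our own assumptions, of a provenance record that travels with a dataset as it is recontextualised; the field names are hypothetical and not drawn from the authors or from any provenance standard.

```python
# A toy provenance record; fields and method names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    origin: str        # where: the source system or custodian
    purpose: str       # why: the stated reason for collection
    acquisition: str   # how: e.g. historical, observational, purposeful, integrative
    transformations: list = field(default_factory=list)  # cleaning, linkage, ...

    def recontextualise(self, step: str) -> None:
        """Log each movement of the data between contexts (light or dark)."""
        self.transformations.append(step)

record = ProvenanceRecord(
    origin="social platform export",
    purpose="service analytics",
    acquisition="observational",
)
record.recontextualise("linked with a third-party marketing dataset")
record.recontextualise("de-identified and aggregated for resale")
print(record)
```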
Forms of provenance accumulate from transaction to transaction, cascading along as a dataset ‘picks up’ the types of provenance that have led to its creation. This may involve multiple forms of overlapping provenance – methodological and epistemological, legal and illegal – which modulate different elements and apparatuses. Provenance, we argue, is an important methodological consideration for workers in the humanities and social sciences. Provenance provides a set of shared questions on which models of transparency, accountability, and trust may be established. It points us towards tactics that might help data-subjects understand privacy in a contextual manner (Nissenbaum) and even establish practices of obfuscation and “informational self-defence” against regimes of datafication (Brunton and Nissenbaum). Here provenance is not just a declaration of what the means and ends of data capture, sharing, linkage, and analysis are. We sketch the outlines of a provenance model in table 2 below.

Table 2: Forms of Data Provenance (type; metaphorical frame; dark; light; guiding questions)

What? Frame: The epistemological structure of a database determines the accuracy of subsequent decisions; data must be consistent. Dark: What data is asked of a person beyond what is strictly needed for service delivery. Light: Data that is collected for a specific stated purpose with informed consent from the data-subject. Questions: How does the decision about what to collect disrupt existing polities and communities? What demands for conformity does the database make of its subjects?

Where? Frame: The contents of a database are important for making informed decisions; data must be represented. Dark: The parameters of inclusion/exclusion that create unjust risks or costs to people because of their inclusion or exclusion in a dataset. Light: The parameters of inclusion or exclusion that afford individuals representation or acknowledgement by being included or excluded from a dataset. Questions: How are populations recruited into a dataset? What divides exist that systematically exclude individuals?

Who? Frame: Who has access to data, and how privacy is framed, is important for the security of data-subjects; data access is political. Dark: Access to the data by parties not disclosed to the data-subject. Light: Who has collected the data and who has or will access it? Questions: How is the data made available to those beyond the data subjects?

How? Frame: Data is created with a purpose and is never neutral; data is instrumental. Dark: How the data is used, to what ends, discursively, practically, instrumentally; is it a private record, a source of value creation, the subject of extortion or blackmail? Light: How the data was intended to be used at the time that it was collected.

Why? Frame: Data is created by people who are shaped by ideological factors; data has potential. Dark: The political rationality that shapes data governance with regard to technological innovation. Light: The trade-offs that are made known to individuals when they contribute data into sociotechnical systems over which they have limited control.

Conclusion

As an illuminating methodology, provenance offers a specific line of questioning for practices that take information through darkness and light.
The emphasis that it places on a narrative for data assets themselves (asking what, when, who, how, and why) offers a mechanism for traceability and has potential for application across contexts and cases that allows us to see data malpractice as something that can be productively generalised and understood as a series of ideologically driven technical events with social and political consequences, without being marred by perceptions of the exceptionality of individual, localised cases of data harm or data violence.

References

Brunton, Finn, and Helen Nissenbaum. "Political and Ethical Perspectives on Data Obfuscation." Privacy, Due Process and the Computational Turn: The Philosophy of Law Meets the Philosophy of Technology. Eds. Mireille Hildebrandt and Katja de Vries. New York: Routledge, 2013. 171-195. Buneman, Peter, Sanjeev Khanna, and Wang-Chiew Tan. "Data Provenance: Some Basic Issues." International Conference on Foundations of Software Technology and Theoretical Computer Science. Berlin: Springer, 2000. Davis, Jenny L. How Artifacts Afford: The Power and Politics of Everyday Things. Cambridge: MIT Press, 2020. D'Ignazio, Catherine, and Lauren F. Klein. Data Feminism. Cambridge: MIT Press, 2020. Hardt, Michael, and Antonio Negri. Commonwealth. Cambridge: Harvard UP, 2009. Kitchin, Rob. "Big Data, New Epistemologies and Paradigm Shifts." Big Data & Society 1.1 (2014). Lawrence, Matthew. "Emerging Technology: An Interview with Adam Greenfield. 'God Forbid That Anyone Stopped to Ask What Harm This Might Do to Us'." Institute for Public Policy Research, 13 Oct. 2017. <https://www.ippr.org/juncture-item/emerging-technology-an-interview-with-adam-greenfield-god-forbid-that-anyone-stopped-to-ask-what-harm-this-might-do-us>. Lupton, Deborah. "Vital Materialism and the Thing-Power of Lively Digital Data." Social Theory, Health and Education. Eds. Deana Leahy, Katie Fitzpatrick, and Jan Wright. London: Routledge, 2018. Nissenbaum, Helen F. Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford: Stanford Law Books, 2020. Warner, Michael. "Publics and Counterpublics." Public Culture 14.1 (2002): 49-90.

Theses on the topic "HCI Systems, Human Factors, Digital Traces"

1

Ferracani, Andrea. "Exploiting Digital Traces and Human Factors to Design and Improve HCI Systems for Online, Outdoor and Indoor Environments". Doctoral thesis, 2018. http://hdl.handle.net/2158/1130560.

Full text
Abstract
This PhD thesis deals with the study of new paradigms for exploiting data obtained from virtual and real digital traces (i.e. from Social Networks, Location-Based Social Networks, remote sensing and crowd-sourcing, and physical sensors) and human factors (e.g. interest profiling, and behavioral aspects such as movements, voice, and contextual information) as a basis for the design and implementation of innovative infovis technologies, multimedia recommendation and browsing systems, and human-computer interaction paradigms in real and virtual spaces, i.e. in online, outdoor, and indoor environments.
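As a toy illustration of one ingredient of the thesis (interest profiling from location-based digital traces), the sketch below derives a normalised interest vector from check-in categories; the data and labels are invented, and the thesis systems are of course far richer.

```python
# Invented check-in traces; a real profile would draw on many more signals.
from collections import Counter

checkins = [
    {"venue": "Uffizi", "category": "museum"},
    {"venue": "Teatro Verdi", "category": "theatre"},
    {"venue": "Museo Galileo", "category": "museum"},
]

def interest_profile(traces):
    """Normalised category frequencies as a simple user-interest vector."""
    counts = Counter(t["category"] for t in traces)
    total = sum(counts.values())
    return {category: n / total for category, n in counts.items()}

# e.g. {'museum': 0.666..., 'theatre': 0.333...}
print(interest_profile(checkins))
```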

Book chapters on the topic "HCI Systems, Human Factors, Digital Traces"

1

Townsend, Scott, and Maria Patsarika. "Rethinking Cultural Probes in Community Research and Design as Ethnographic Practice". In Frontiers in Sociology and Social Research, 37–57. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11756-5_3.

Full text
Abstract
Understanding social practices as they co-evolve between researcher and community is fundamental in "design and social innovation", where local knowledge, resources, and agency meet to solve wicked problems (Rittel and Webber, Policy Sciences, 4, 155–169, 1973). In this chapter, we seek to explore the traces that researchers and community members leave behind as indexical forms of representation. Contemporary perspectives urge a critical examination of the interplay between design and broader structural and cultural issues (Björgvinsson et al., CoDesign, 8(2–3), 127–144, 2012). Design methods, however, are often chosen arbitrarily, reflecting a "toolbox" mentality that potentially misses culturally embedded nuances (Dourish, Implications for design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 541–550), 2006). Cultural probes, as part of this "toolbox", are often associated with ethnographic methods, yet they were never intended to generate data, whereas ethnography goes beyond data gathering to analyze socio-cultural meaning and practices (Boehner et al., How HCI interprets the probes. In CHI Proceedings Designing for Specific Cultures, 2007). We present two case studies to discuss the use of cultural probes in participatory design as enablers of dialogue in open-ended conversations with communities. We draw on reflexive practices and Manzini's concepts of "diffuse design" and "expert design." Working in communities can thus become a form of "public ethnography", an effort to understand and analyze social practices from multiple knowledge perspectives as an ongoing process.
2

Patnaik, Priyadarsini. "Human-Machine Interactions". In Advances in Electronic Government, Digital Divide, and Regional Development, 101–22. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-9710-1.ch006.

Full text
Abstract
Research on human-computer interaction (HCI) has developed widely since the 1960s as a result of the rapid growth of information systems. The purpose of HCI is the design of ergonomic human-computer interfaces. In early systems the user had complete control of the computer, but the interfaces themselves were mostly static. Over the past few years, the management of increasingly complex and coupled systems has become indispensable to both the public and private sectors. Humans and machines must therefore be able to adapt to unforeseen circumstances by maintaining degrees of freedom. Although modern machines manage many activities that humans performed in the past, processing information and dealing with dynamic situations, uncertain factors play a crucial role in these dynamic situations and are difficult to control. The trend the industry is experiencing demands a human-machine cooperative (HMC) approach to handle new challenges such as cybersecurity. The chapter explains how HMC might improve HCI to address problems emerging from this change.

Conference papers on the topic "HCI Systems, Human Factors, Digital Traces"

1

Bowler, Ryan David, Benjamin Bach, and Larissa Pschetz. "Exploring Uncertainty in Digital Scheduling, and The Wider Implications of Unrepresented Temporalities in HCI". In CHI '22: CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3491102.3502107.

Full text
2

Vieira, Rômulo, Gabriel Rocha, and Flávio Schiavoni. "Current research on the use of HCI in decision-making to build digital musical instruments". In IHC '20: XIX Brazilian Symposium on Human Factors in Computing Systems. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3424953.3426646.

Full text
