Jethani, Suneel. "Lists, Spatial Practice and Assistive Technologies for the Blind." M/C Journal 15, no. 5 (12 Oct. 2012). http://dx.doi.org/10.5204/mcj.558.
Introduction

Supermarkets are functionally challenging environments for people with vision impairments. A supermarket is likely to house an average of 45,000 products in a median floor-space of 4,529 square metres, and many visually impaired people are unable to shop without assistance, which greatly impedes personal independence (Nicholson et al.). The task of selecting goods in a supermarket is an “activity that is expressive of agency, identity and creativity” (Sutherland) from which many vision-impaired persons are excluded. In response to this, a number of proof-of-concept (demonstrating feasibility) and prototype assistive technologies are being developed which aim to use smart phones as potential sensorial aides for vision impaired persons. In this paper, I discuss two such prototype technologies, ShopTalk and BlindShopping. I engage with this issue’s list theme by suggesting that, on the one hand, list making is a uniquely human activity that demonstrates our need for order and our reliance on memory, reveals our idiosyncrasies, and provides insights into our private lives (Keaggy 12). On the other hand, lists feature in the creation of spatial inventories that represent physical environments (Perec 3-4, 9-10). The use of lists in the architecture of assistive technologies for shopping illuminates the interaction between these two modalities of list use, where items contained in a list are not only textual but also cartographic elements that link the material and immaterial in space and time (Haber 63). I argue that despite the emancipatory potential of assistive shopping technologies, their efficacy in practical situations is highly dependent on the extent to which they can integrate a number of lists to produce representations of space that are meaningful for vision impaired users. I suggest that the extent to which these prototypes may translate into commercially viable, widely adopted technologies is heavily reliant upon commercial and institutional infrastructures, data sources, and regulation. Thus, their design, manufacture and adoption potential are shaped by the extent to which certain data inventories are accessible and made interoperable. To overcome such constraints, it is important to better understand the “spatial syntax” associated with the shopping task for a vision impaired person; that is, the connected ordering of real and virtual spatial elements that results in the supermarket becoming a knowable space within which an assisted “spatial practice” of shopping can occur (Kellerman 148, Lefebvre 16).

In what follows, I use the concept of lists to discuss the production of supermarket-space in relation to the enabling and disabling potentials of assistive technologies. First, I discuss mobile digital technologies relative to disability and impairment and describe how the shopping task produces a disabling spatial practice. Second, I present a case study showing how assistive technologies function in aiding vision impaired users in completing the task of supermarket shopping. Third, I discuss various factors that may inhibit the liberating potential of technology assisted shopping by vision-impaired people.

Addressing Shopping as a Disabling Spatial Practice

Consider how a shopping list might inform one’s experience of supermarket space. The way shopping lists are written demonstrates the variability in the logic that governs list writing.
As Bill Keaggy demonstrates in his found shopping list Web project and subsequent book, Milk Eggs Vodka, a shopping list may be written on a variety of materials, be arranged in a number of orientations, and the writer may use differing textual attributes, such as size or underlining, to show emphasis. The writer may use longhand, abbreviate, write neatly, scribble, and use an array of alternate spelling and naming conventions. For example, items may be listed based on knowledge of the location of products, they may be arranged on a list as a result of an inventory of a pantry or fridge, or they may be copied in the order they appear in a recipe. Whilst shopping, some may strictly follow the order of their list, crossing back and forth between aisles; others may work through their list item by item, perhaps scanning forward to achieve greater economies of time and space. As a person shops, their memory may be stimulated by visual cues reminding them of products they need that may not be included on their list.

For the vision impaired, this task is nearly impossible to complete without the assistance of a relative, friend, agency volunteer, or store employee. Such forms of assistance are often unsatisfactory, as delays may be caused by the unavailability of an assistant, or by an assistant having limited literacy, knowledge, or patience to adequately meet the shopper’s needs. Home delivery services, though readily available, impede personal independence (Nicholson et al.). Katie Ellis and Mike Kent argue that “an impairment becomes a disability due to the impact of prevailing ableist social structures” (3). It can be said, then, that supermarkets function as a disability-producing space for the vision impaired shopper. For the vision impaired, a supermarket is a “hegemonic modern visual infrastructure” where, for example, merchandisers may reposition items regularly to induce customers to explore areas of the shop that they would not usually visit, a move which adds to the difficulty faced by those customers with impaired vision who work on the assumption that items remain where they usually are (Schillmeier 161).

In addressing this issue, much emphasis has been placed on the potential of mobile communications technologies to afford vision impaired users greater mobility and flexibility (Jolley 27). However, as Gerard Goggin argues, the adoption of mobile communication technologies has not necessarily “gone hand in hand with new personal and collective possibilities” given the limited access to standard features, even if the device is text-to-speech enabled (98). Issues with Digital Rights Management (DRM), which limit the way a device accesses and reproduces information, and confusion over whether audio rights are needed to convert text to speech, impede the accessibility of mobile communications technologies for vision impaired users (Ellis and Kent 136). Accessibility and functionality issues like these arise because the needs, desires, and expectations of the visually impaired as a user group are considered as an afterthought rather than as a significant factor in the early phases of design and prototyping (Goggin 89). Thus, the development of assistive technologies for the vision impaired has been left to third parties who must adapt their solutions to fit within certain technical parameters. It is valuable to consider what is involved in the task of shopping in order to appreciate the considerations that must be made in the design of assistive technologies intended for shopping.
Shopping generally consists of five sub-tasks: travelling to the store; finding items in-store; paying for and bagging items at the register; exiting the store and getting home; and the often overlooked task of putting items away once at home. In this process supermarkets exhibit a “trichotomous spatial ontology” consisting of locomotor space, through which a shopper moves around the store; haptic space, in the immediate vicinity of the shopper; and search space, where individual products are located (Nicholson et al.). In completing these tasks, a shopper will constantly be moving through and switching between all three of these spaces. In the next section I examine how assistive technologies function in producing supermarkets as both enabling and disabling spaces for the vision impaired.

Assistive Technologies for Vision Impaired Shoppers

Jason Farman (43) and Adriana de Souza e Silva both argue that in many ways spaces have always acted as information interfaces where data of all types can reside. Global Positioning System (GPS), Radio Frequency Identification (RFID), and Quick Response (QR) codes all allow for practically every spatial encounter to be an encounter with information. Site-specific and location-aware technologies address the desire for meaningful representations of space for use in everyday situations by the vision impaired. Further, the possibility of an “always-on” connection to spatial information via a mobile phone with WiFi or 3G connections transforms spatial experience by “enfolding remote [and latent] contexts inside the present context” (de Souza e Silva).

A range of GPS navigation systems adapted for vision-impaired users are currently on the market. Typically, these systems convert GPS information into text-to-speech instructions and are either standalone devices, such as the Trekker Breeze, or they use the compass, accelerometer, and 3G or WiFi functions found on most smart phones, such as Loadstone. Whilst both these products are adequate in guiding a vision-impaired user from their home to a supermarket, there are significant differences in their interfaces and data architectures. Trekker Breeze is a standalone hardware device that produces talking menus, maps, and GPS information. While its navigation functionality relies on a worldwide radio-navigation system that uses a constellation of 24 satellites to triangulate one’s position (May and LaPierre 263-64), its map and text-to-speech functionality relies on data supplied on a DVD provided with the unit. Loadstone is an open source software system for Nokia devices that has been developed within the vision-impaired community. Loadstone is built on GNU General Public License (GPL) software and is developed from private and user-based funding; this overcomes the issue of Trekker Breeze’s reliance on the trading policies and pricing models of the few global vendors of satellite navigation data.

Both products have significant shortcomings if viewed in the broader context of the five sub-tasks involved in shopping described above. Trekker Breeze and Loadstone require that additional devices be connected to them: in the case of Trekker Breeze it is a tactile keypad, and with Loadstone it is an aftermarket screen reader. To function optimally, Trekker Breeze requires that routes be pre-recorded and, according to a review conducted by the American Foundation for the Blind, it requires a 30-minute warm-up time to properly orient itself. Both Trekker Breeze and Loadstone allow users to create and share Points of Interest (POI) databases showing the location of various places along a given route. Non-standard or duplicated user-generated content in POI databases may, however, have a negative effect on usability (Ellis and Kent 2). Furthermore, GPS-based navigation systems are accurate to approximately ten metres, which means that users must rely on their own mobility skills when they are required to change direction or stop for traffic.
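To make this constraint concrete, the sketch below models a shared POI record and a proximity announcement whose radius must be widened to allow for roughly ten metres of positional error. It is a hypothetical illustration only, not part of either product: the names PointOfInterest, GPS_ERROR_M, and announce_nearby, and the toy coordinates, are my own assumptions.

```python
import math
from dataclasses import dataclass

# Hypothetical sketch only: neither Trekker Breeze nor Loadstone exposes this interface.
GPS_ERROR_M = 10.0  # approximate GPS accuracy cited above (about ten metres)

@dataclass
class PointOfInterest:
    """One user-contributed POI record, as might be shared between users."""
    label: str   # free text, hence prone to non-standard or duplicate entries
    lat: float
    lon: float

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two fixes, in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def announce_nearby(fix_lat, fix_lon, pois, radius_m=25.0):
    """Return spoken-style announcements for POIs that may be nearby.

    The radius is widened by the GPS error margin: the device can say that the
    supermarket entrance is roughly twenty metres ahead, but never exactly where
    the kerb or the door is; the shopper's own mobility skills fill that gap.
    """
    messages = []
    for poi in pois:
        d = distance_m(fix_lat, fix_lon, poi.lat, poi.lon)
        if d <= radius_m + GPS_ERROR_M:
            messages.append(f"{poi.label}: approximately {round(d)} metres away")
    return messages

# Duplicate, non-standard labels degrade usability, as noted above.
pois = [
    PointOfInterest("Supermarket entrance", -37.8101, 144.9623),
    PointOfInterest("supermkt front door", -37.8101, 144.9624),  # duplicate in all but name
]
print(announce_nearby(-37.8102, 144.9622, pois))
```

The duplicate entries in the toy data also illustrate how non-standard, user-generated labels can clutter a shared POI database.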
This issue with GPS accuracy is more pronounced when a vision-impaired user is approaching a supermarket, where they are likely to encounter environmental hazards with greater frequency, and both pedestrian and vehicular traffic in greater density. Here the relations between spaces defined, and spaces poorly defined or undefined, by the GPS device interact to produce the supermarket surrounds as a disabling space (Galloway).

Prototype Systems for Supermarket Navigation and Product Selection

In the discussion to follow, I look at two prototype systems using QR codes and RFID that are designed to be used in-store by vision-impaired shoppers. ShopTalk is a proof-of-concept system developed by researchers at Utah State University that uses synthetic verbal route directions to assist vision impaired shoppers with supermarket navigation, product search, and selection (Nicholson et al.). Its hardware consists of a portable computational unit, a numeric keypad, a wireless barcode scanner and base station, headphones for the user to receive the synthetic speech instructions, a USB hub to connect all the components, and a backpack to carry them, with the exception of the barcode scanner, which has been slightly modified with a plastic stabiliser to assist in correct positioning. ShopTalk represents the supermarket environment using two data structures. The first comprises two elements: a topological map of locomotor space that allows directional labels of “left,” “right,” and “forward” to be added to the supermarket floor plan; and, for navigation of haptic space, the supermarket inventory management system, which is used to create verbal descriptions of product information. The second data structure is a Barcode Connectivity Matrix (BCM), which associates each shelf barcode with several pieces of information such as aisle, aisle side, section, shelf, position, Universal Product Code (UPC) barcode, product description, and price. Nicholson et al. suggest that one of their “most immediate objectives for future work is to migrate the system to a more conventional mobile platform” such as a smart phone (see Mobile Shopping).
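A minimal sketch of what a single BCM entry might look like follows. It is an illustration inferred from the description above rather than ShopTalk's actual implementation; the field and function names (BCMEntry, describe) and the sample data are assumptions of my own. It simply shows how a scanned shelf barcode could be resolved into the kind of verbal product description that is passed to a speech synthesiser.

```python
from dataclasses import dataclass

# Illustrative sketch of a Barcode Connectivity Matrix (BCM) entry, inferred
# from the description above; not ShopTalk's actual data structures or code.
@dataclass
class BCMEntry:
    aisle: int
    aisle_side: str   # e.g. 'left' or 'right' relative to the aisle's direction
    section: int
    shelf: int
    position: int     # position of the product along the shelf
    upc: str          # Universal Product Code of the product stocked here
    description: str
    price: float

# The matrix associates each *shelf* barcode with the information above.
bcm = {
    "SHELF-0001": BCMEntry(3, "left", 2, 4, 7, "012345678905",
                           "wholegrain rolled oats 750 g", 4.50),
}

def describe(shelf_barcode: str) -> str:
    """Turn a scanned shelf barcode into a verbal product description."""
    e = bcm.get(shelf_barcode)
    if e is None:
        return "Barcode not recognised. Please scan again."
    return (f"{e.description}, {e.price:.2f} dollars. "
            f"Aisle {e.aisle}, {e.aisle_side} side, section {e.section}, "
            f"shelf {e.shelf}, position {e.position}.")

print(describe("SHELF-0001"))  # text that would be handed to the speech synthesiser
```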
The Personalisable Interactions with Resources on AMI-Enabled Mobile Dynamic Environments (PRIAmIDE) research group at the University of Deusto is also approaching Ambient Assisted Living (AAL) by exploring the smart phone’s sensing, communication, computing, and storage potential. As part of their work, the prototype system BlindShopping was developed to address the issue of assisted shopping, using entirely off-the-shelf technology and minimal environmental adjustments to navigate the store and to search, browse, and select products (López-de-Ipiña et al. 34). BlindShopping’s architecture is based on three components. Firstly, a navigation system provides the user with synthetic verbal instructions via headphones connected to the smart phone in order to guide them around the store. This requires an RFID reader to be attached to the tip of the user’s white cane and road-marking-like RFID tag lines to be distributed throughout the aisles. A smartphone application processes the RFID data received via Bluetooth and generates the verbal navigation commands. Secondly, products are recognised by pointing a QR-code-reader-enabled smart phone at an embossed code located on a shelf. Thirdly, the system is managed by a Rich Internet Application (RIA) interface, which operates via a Web browser and is used to register the RFID tags situated in the aisles and the QR codes located on the shelves (López-de-Ipiña et al. 37-38).

A typical use-scenario for BlindShopping involves a user activating the system by tracing an “L” on the screen or issuing the “Location” voice command, which activates the supermarket navigation system; the system then asks the user either to touch an RFID floor marking with their cane or to scan a QR code on a nearby shelf to orient the system. The application then asks the user to dictate the product or category of product that they wish to locate. The smart phone maintains a continuous Bluetooth connection with the RFID reader to keep track of the user’s location at all times. By drawing a “P” or issuing the “Product” voice command, a user can switch the device into product recognition mode, where the smart phone camera is pointed at an embossed QR code on a shelf to retrieve information about a product, such as manufacturer, name, weight, and price, via synthetic speech (López-de-Ipiña et al. 38-39).
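The interaction flow described above can be summarised in the following schematic sketch. It is a reconstruction for illustrative purposes only, not BlindShopping's actual code; the ShoppingAssistant class, its method and event names, and the toy route and catalogue data are assumptions introduced here.

```python
# Schematic sketch of the BlindShopping interaction flow described above;
# a reconstruction for illustration, not the project's actual implementation.

class ShoppingAssistant:
    def __init__(self, product_catalogue):
        self.mode = "idle"
        self.catalogue = product_catalogue  # QR payload -> product record

    def command(self, gesture_or_voice):
        """Tracing 'L' or saying 'Location' selects navigation; 'P'/'Product' selects product mode."""
        if gesture_or_voice in ("L", "Location"):
            self.mode = "navigation"
            return ("Navigation mode. Touch an RFID floor marking with your cane "
                    "or scan a nearby shelf QR code to orient the system.")
        if gesture_or_voice in ("P", "Product"):
            self.mode = "product"
            return "Product mode. Point the camera at an embossed QR code."
        return "Command not recognised."

    def on_rfid(self, tag_id, route_graph):
        """RFID tags read by the cane arrive over Bluetooth and update the user's position."""
        if self.mode != "navigation":
            return None
        return route_graph.get(tag_id, "Recalculating route.")

    def on_qr(self, payload):
        """A decoded shelf QR code is resolved into spoken product information."""
        if self.mode != "product":
            return None
        p = self.catalogue.get(payload)
        if p is None:
            return "Product not found."
        return f"{p['name']}, {p['weight']}, by {p['manufacturer']}, {p['price']:.2f} euros."

# Example run with toy data.
assistant = ShoppingAssistant({"QR-42": {"name": "orange juice", "weight": "1 litre",
                                         "manufacturer": "Acme Foods", "price": 2.10}})
print(assistant.command("L"))
print(assistant.on_rfid("TAG-7", {"TAG-7": "Turn left; aisle three is ahead."}))
print(assistant.command("P"))
print(assistant.on_qr("QR-42"))
```

Even in this simplified form, the sketch makes clear how heavily such a system depends on externally maintained data: the route graph and product catalogue stand in for the store-specific inventories discussed below.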
Despite both systems aiming to operate with as little environmental adjustment as possible, and to minimise the extent to which a supermarket would need to allocate infrastructural, administrative, and human resources to implementing assistive technologies for vision impaired shoppers, there will undoubtedly be significant establishment and maintenance costs associated with the adoption of production versions of systems resembling either prototype described in this paper. As both systems rely on data obtained from a server by invoking Web services, supermarkets would need to provide in-store WiFi. Further, both systems’ dependence on store inventory data would mean that commercial versions of either of these systems are likely to be supermarket specific or exclusive, given that policies will be in place that forbid third-party access to inventory systems containing pricing information. In addition, an assumption in the design of both prototypes is that the shopping task ends with the user arriving at home; this overlooks the important task of being able to recognise products in order to put them away or to use them at a later time.

The BCM and QR product recognition components of the respective prototype systems associate information with products in order to assist users in the product search and selection sub-tasks. However, information such as use-by dates, discount offers, country of manufacture, country of manufacturer’s origin, nutritional information, and the labelling of products as Halal, Kosher, containing alcohol, nuts, gluten, lactose, phenylalanine, and so on, create further challenges in how different data sources are managed within the devices’ software architecture. The reliance of both systems on existing smartphone technology is also problematic. Changes in the production and uptake of mobile communication devices, and in the software that they operate on, occur rapidly. Once a retail space has been fitted out with the instrumentation necessary to accommodate a particular system, that system is unlikely to be able to cater to the requirement for frequent upgrades, as built environments are less flexible in the upgrading of their technological infrastructure (Kellerman 148). This sets up a scenario where the supermarket may persist as a disabling space due to a gap between the functional capacities of applications designed for mobile communication devices and the environments in which they are to be used.

Lists and Disabling Spatial Practice

The development and provision of access to assistive technologies and the data they rely upon is a commercial issue (Ellis and Kent 7). The use of assistive technologies in supermarket-spaces that rely on the inter-functional coordination of multiple inventories may have the unintended effect of excluding people with disabilities from access to legitimate content (Ellis and Kent 7). With de Certeau, we can ask of supermarket-space, “What spatial practices correspond, in the area where discipline is manipulated, to these apparatuses that produce a disciplinary space?” (96).

In designing assistive technologies, such as those discussed in this paper, developers must strive to achieve integration across multiple data inventories. Software architectures must be optimised to overcome issues relating to intellectual property, cross-platform access, standardisation, fidelity, potential duplication, and mass storage. This need for “cross sectioning,” however, “merely adds to the muddle” (Lefebvre 8). This is a predicament that only intensifies as space and objects in space become increasingly “representable” (Galloway), and as the impetus for the project of spatial politics for the vision impaired moves beyond representation to centre on access and meaning-making.

Conclusion

Supermarkets act as sites of hegemony, resistance, difference, and transformation, where the vision impaired and their allies resist the “repressive socialization of impaired bodies” through their own social movements relating to environmental accessibility and the technology-assisted spatial practice of shopping (Gleeson 129). It is undeniable that the prototype technologies described in this paper, and others like them, have a great deal of emancipatory potential. However, it should be understood that these devices produce representations of supermarket-space as a simulation within a framework that attempts to mimic the real, and that these representations are pre-determined by the industrial, technological, and regulatory forces that govern their production (Lefebvre 8). Thus, the potential of assistive technologies is dependent upon a range of constraints relating to data accessibility, and upon the interaction of various kinds of lists across the geographic area that surrounds the supermarket; the locomotor, haptic, and search spaces of the supermarket; the home-space; and the internal spaces of a shopper’s imaginary. These interactions are important in contributing to the reproduction of disability in supermarkets through the use of assistive shopping technologies. The ways in which people make and read shopping lists complicate the relations between supermarket-space as location data and product inventories, and supermarket-space as it is intuited and experienced by a shopper (Sutherland).
Not only should we be creating inventories of supermarket locomotor, haptic, and search spaces; developers working in this area of assistive technologies should also look beyond the challenges of spatial representation and move towards a focus on issues of interoperability and expanded access to spatial inventory databases and data within and beyond supermarket-space.

References

De Certeau, Michel. The Practice of Everyday Life. Berkeley: University of California Press, 1984.

De Souza e Silva, Adriana. “From Cyber to Hybrid: Mobile Technologies as Interfaces of Hybrid Spaces.” Space and Culture 9.3 (2006): 261-78.

Ellis, Katie, and Mike Kent. Disability and New Media. New York: Routledge, 2011.

Farman, Jason. Mobile Interface Theory: Embodied Space and Locative Media. New York: Routledge, 2012.

Galloway, Alexander. “Are Some Things Unrepresentable?” Theory, Culture and Society 28 (2011): 85-102.

Gleeson, Brendan. Geographies of Disability. London: Routledge, 1999.

Goggin, Gerard. Cell Phone Culture: Mobile Technology in Everyday Life. London: Routledge, 2006.

Haber, Alex. “Mapping the Void in Perec’s Species of Spaces.” Tattered Fragments of the Map. Ed. Adam Katz and Brian Rosa. S.l.: Thelimitsoffun.org, 2009.

Jolley, William M. When the Tide Comes In: Towards Accessible Telecommunications for People with Disabilities in Australia. Sydney: Human Rights and Equal Opportunity Commission, 2003.

Keaggy, Bill. Milk Eggs Vodka: Grocery Lists Lost and Found. Cincinnati, Ohio: HOW Books, 2007.

Kellerman, Aharon. Personal Mobilities. London: Routledge, 2006.

Kleege, Georgia. “Blindness and Visual Culture: An Eyewitness Account.” The Disability Studies Reader. 2nd ed. Ed. Lennard J. Davis. New York: Routledge, 2006. 391-98.

Lefebvre, Henri. The Production of Space. Oxford: Blackwell, 1991.

López-de-Ipiña, Diego, Tania Lorido, and Unai López. “Indoor Navigation and Product Recognition for Blind People Assisted Shopping.” Ambient Assisted Living. Ed. J. Bravo, R. Hervás, and V. Villarreal. Berlin: Springer-Verlag, 2011. 25-32.

May, Michael, and Charles LaPierre. “Accessible Global Positioning System (GPS) and Related Orientation Technologies.” Assistive Technology for Visually Impaired and Blind People. Ed. Marion A. Hersh and Michael A. Johnson. London: Springer-Verlag, 2008. 261-88.

Nicholson, John, Vladimir Kulyukin, and Daniel Coster. “ShopTalk: Independent Blind Shopping through Verbal Route Directions and Barcode Scans.” The Open Rehabilitation Journal 2.1 (2009): 11-23.

Perec, Georges. Species of Spaces and Other Pieces. Trans. and ed. John Sturrock. London: Penguin Books, 1997.

Schillmeier, Michael W. J. Rethinking Disability: Bodies, Senses, and Things. New York: Routledge, 2010.

Sutherland, I. “Mobile Media and the Socio-Technical Protocols of the Supermarket.” Australian Journal of Communication 36.1 (2009): 73-84.