To view other types of publications on this topic, follow this link: Indoor positional navigation.

Journal articles on the topic "Indoor positional navigation"

Create a citation for a source in APA, MLA, Chicago, Harvard, and other citation styles


Familiarize yourself with the top 35 journal articles for research on the topic "Indoor positional navigation".

Next to every work in the bibliography you will find the option "Add to bibliography". Use it, and a bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in its metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Subhash Reddy, S., and Y. Bhaskar Rao. "Indoor Navigation System for Blind People Using VLC". International Journal of Engineering & Technology 7, no. 3.27 (August 15, 2018): 77. http://dx.doi.org/10.14419/ijet.v7i3.27.17659.

Annotation:
We propose an indoor navigation system that uses visible light communication technology employing LED lights together with a geomagnetic correction method, aimed at supporting visually impaired people who travel indoors. To verify the effectiveness of this system, we conducted an experiment with visually impaired participants. Although acquiring accurate positional information and detecting directions indoors is difficult, we confirmed that with this system accurate positional information and the direction of travel can be obtained by using visible light communication through LED lights and by correcting the values of the geomagnetic sensor integrated in a smartphone.
2

Irshad, Liu, Arshad, Sohail, Murthy, Khokhar, and Uba. "A Novel Localization Technique Using Luminous Flux". Applied Sciences 9, no. 23 (November 21, 2019): 5027. http://dx.doi.org/10.3390/app9235027.

Annotation:
As global navigation satellite system (GNSS) signals are unable to enter indoor spaces, substitute methods such as visible light communication (VLC)-based indoor localization are gaining the attention of researchers. In this paper, a systematic investigation of a VLC channel is performed for both direct and indirect line of sight (LoS) by utilizing the impulse response of indoor optical wireless channels. To examine the localization scenario, two light-emitting diode (LED) grid patterns are used. The received signal strength (RSS) is observed based on the positional dilution of precision (PDoP), a subset of the dilution of precision (DoP) used in GNSS positioning. In total, 31 × 31 possible positional tags are set for a given PDoP configuration. The positional error is evaluated in terms of the root mean square error (RMSE) and the sum of squared errors (SSE). The performance of the proposed approach is validated by simulation results for the selected indoor space. The results show that the position accuracy is enhanced at short range by 24% by utilizing the PDoP metric; as confirmation, the modeled accuracy is compared with perceived accuracy results. This study informs the application and design of future optical wireless systems, specifically for indoor localization.
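The PDoP and RMSE quantities this abstract relies on are standard. As a rough, generic illustration (not the authors' code; the LED layout, function names, and receiver position below are invented), PDoP follows from the unit line-of-sight vectors between the receiver and the LED anchors, and RMSE summarizes positional error over test points:

import numpy as np

def pdop(anchors, pos):
    # Geometry matrix of unit line-of-sight vectors from the receiver to each anchor.
    diffs = anchors - pos
    units = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
    q = np.linalg.inv(units.T @ units)        # (H^T H)^-1 shapes the error covariance
    return float(np.sqrt(np.trace(q)))        # positional dilution of precision

def rmse(estimates, truth):
    # Root mean square positional error over a set of test points.
    return float(np.sqrt(np.mean(np.sum((estimates - truth) ** 2, axis=1))))

# Four ceiling LEDs in a 5 m x 5 m room; receiver 1 m above the floor centre.
leds = np.array([[0, 0, 3], [5, 0, 3], [0, 5, 3], [5, 5, 3]], dtype=float)
print(pdop(leds, np.array([2.5, 2.5, 1.0])))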
3

Vieira, M. A., M. Vieira, V. Silva, P. Louro, L. Mateus, and P. Vieira. "Indoor positioning using a-SiC:H technology". MRS Advances 1, no. 55 (2016): 3685–90. http://dx.doi.org/10.1557/adv.2016.381.

Annotation:
Abstract. The nonlinear property of SiC multilayer devices under ultraviolet (UV) irradiation is used to design an optical processor for indoor positioning. The transducers combine the simultaneous demultiplexing operation with photodetection and self-amplification. Moreover, we present a way to achieve indoor positioning using the parity bits and the navigation syndrome. A 4-bit representation of the original colour message string and the transmitted 7-bit string, the encoding and decoding processes for accurate positional information, and the design of SiC navigation syndrome generators are discussed. The visible multilateration method estimates the device’s position by using the MUX signal received from several non-collinear transmitters. The location and motion information is found by mapping the position and estimating the location areas.
4

Rajchowski, Piotr, Jacek Stefanski, Jaroslaw Sadowski, and Krzysztof K. Cwalina. "Person Tracking in Ultra-Wide Band Hybrid Localization System Using Reduced Number of Reference Nodes". Sensors 20, no. 7 (April 2, 2020): 1984. http://dx.doi.org/10.3390/s20071984.

Annotation:
In this article a novel method of positional data integration in an indoor hybrid localization system combining inertial navigation with radio distance measurements is presented. Of particular interest is the situation when the positional data and the radio distance measurements are obtained from fewer than three reference nodes and it is impossible to unambiguously localize the moving person due to an underdetermined set of positional equations. The presented method makes it possible to provide a continuous localization service even in areas with disturbed propagation of the radio signals. The authors performed simulation and measurement studies of the proposed method to verify the precision of position estimation of a moving person in an indoor environment. To determine the simulation parameters and carry out the experimental studies, a demonstrator of the hybrid localization system was developed, combining inertial navigation and radio distance measurements. In the proposed solution, distance measurements to fewer than three reference nodes are used to compensate the drift of the position estimated with the inertial sensor. In the simulation and experimental results it was possible to reduce the localization error by nearly 50% relative to the case when only inertial navigation was used, while keeping the long-term root mean square error at a level of about 0.50 m. This amounts to a degradation of localization precision of less than 0.1 m with respect to fusion Kalman filtering when four reference nodes are present.
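The core idea the abstract describes, using even a single radio range to rein in inertial drift, can be sketched geometrically. The toy function below is an invented illustration with a made-up blending weight, not the paper's Kalman-style fusion: it pulls an INS position estimate part-way onto the sphere defined by one measured distance to a reference node:

import numpy as np

def range_correct(p_ins, node, r_meas, weight=0.5):
    # With fewer than three nodes the position is underdetermined, but one
    # measured distance still constrains drift: move the INS estimate radially
    # toward the sphere of radius r_meas centred on the node.
    d = p_ins - node
    r_ins = np.linalg.norm(d)
    if r_ins == 0.0:
        return p_ins                    # degenerate: estimate sits on the node
    scale = 1.0 + weight * (r_meas - r_ins) / r_ins
    return node + scale * d

# Drifted estimate ~4.12 m from the node, radio says 5 m: nudged outward.
print(range_correct(np.array([4.0, 1.0, 0.0]), np.zeros(3), 5.0))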
5

dos Santos, D. R., F. P. Freiman, and N. L. Pavan. "GLOBAL REFINEMENT OF TERRESTRIAL LASER SCANNER DATA REGISTRATION USING WEIGHTED SENSOR POSES". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-1 (September 26, 2018): 121–25. http://dx.doi.org/10.5194/isprs-archives-xlii-1-121-2018.

Annotation:
Abstract. A terrestrial laser scanner (TLS) captures highly dense and accurate point clouds useful for indoor and outdoor mapping, navigation, 3D reconstruction, surveillance, industrial projects, infrastructure management, and other applications. In this paper, we present a global registration method that weights the sensor poses to refine the registration of TLS data. Our global refinement method assumes that the variance-covariance matrix describing the uncertainty of the sensor poses is available to refine the registration errors. The effectiveness of the proposed method is demonstrated with a TLS dataset acquired in an outdoor environment. Our results show that weighting the sensor poses obtained in the registration task improves the positional accuracy of the TLS sensor.
6

Sithole, George, and Sisi Zlatanova. "POSITION, LOCATION, PLACE AND AREA: AN INDOOR PERSPECTIVE". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-4 (June 3, 2016): 89–96. http://dx.doi.org/10.5194/isprsannals-iii-4-89-2016.

Annotation:
Over the last decade, harnessing the commercial potential of smart mobile devices in indoor environments has spurred interest in indoor mapping and navigation. Users experience indoor environments differently. For this reason navigational models have to be designed to adapt to a user’s personality, and to reflect as many cognitive maps as possible. This paper presents an extension of a previously proposed framework. In this extension the notion of placement is accounted for, thereby enabling one aspect of the ‘personalised indoor experience’. In the paper, firstly referential expressions are used as a tool to discuss the different ways of thinking of placement within indoor spaces. Next, placement is expressed in terms of the concept of Position, Location, Place and Area. Finally, the previously proposed framework is extended to include these concepts of placement. An example is provided of the use of the extended framework. Notable characteristics of the framework are: (1) Sub-spaces, resources and agents can simultaneously possess different types of placement, e.g., a person in a room can have an xyz position and a location defined by the room number. While these entities can simultaneously have different forms of placement, only one is dominant. (2) Sub-spaces, resources and agents are capable of possessing modifiers that alter their access and usage. (3) Sub-spaces inherit the modifiers of the resources or agents contained in them. (4) Unlike conventional navigational models which treat resources and obstacles as different types of entities, in the proposed framework there are only resources and whether a resource is an obstacle is determined by a modifier that determines whether a user can access the resource. The power of the framework is that it blends the geometry and topology of space, the influence of human activity within sub-spaces together with the different notions of placement in a way that is simple and yet very flexible.
7

Sithole, George, and Sisi Zlatanova. "POSITION, LOCATION, PLACE AND AREA: AN INDOOR PERSPECTIVE". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-4 (June 3, 2016): 89–96. http://dx.doi.org/10.5194/isprs-annals-iii-4-89-2016.

Annotation:
Over the last decade, harnessing the commercial potential of smart mobile devices in indoor environments has spurred interest in indoor mapping and navigation. Users experience indoor environments differently. For this reason navigational models have to be designed to adapt to a user’s personality, and to reflect as many cognitive maps as possible. This paper presents an extension of a previously proposed framework. In this extension the notion of placement is accounted for, thereby enabling one aspect of the ‘personalised indoor experience’. In the paper, firstly referential expressions are used as a tool to discuss the different ways of thinking of placement within indoor spaces. Next, placement is expressed in terms of the concept of Position, Location, Place and Area. Finally, the previously proposed framework is extended to include these concepts of placement. An example is provided of the use of the extended framework. Notable characteristics of the framework are: (1) Sub-spaces, resources and agents can simultaneously possess different types of placement, e.g., a person in a room can have an xyz position and a location defined by the room number. While these entities can simultaneously have different forms of placement, only one is dominant. (2) Sub-spaces, resources and agents are capable of possessing modifiers that alter their access and usage. (3) Sub-spaces inherit the modifiers of the resources or agents contained in them. (4) Unlike conventional navigational models which treat resources and obstacles as different types of entities, in the proposed framework there are only resources and whether a resource is an obstacle is determined by a modifier that determines whether a user can access the resource. The power of the framework is that it blends the geometry and topology of space, the influence of human activity within sub-spaces together with the different notions of placement in a way that is simple and yet very flexible.
8

Garcia-Fernandez, Miquel, Isaac Hoyas-Ester, Alex Lopez-Cruces, Malgorzata Siutkowska, and Xavier Banqué-Casanovas. "Accuracy in WiFi Access Point Position Estimation Using Round Trip Time". Sensors 21, no. 11 (June 1, 2021): 3828. http://dx.doi.org/10.3390/s21113828.

Annotation:
WiFi Round Trip Time (RTT) unlocks meter-level accuracies in user terminal positioning where no other navigation system, such as the Global Navigation Satellite Systems (GNSS), is able to operate (e.g., indoors). However, little has been done so far to obtain a scalable and automated system that computes the positions of WiFi Access Points (WAP) using RTT and that is able to estimate, in addition to the position, the hardware biases that offset the WiFi ranging measurements. These biases have a direct impact on the ultimate position accuracy of the terminals. This work proposes a method in which the WiFi Access Point positions and hardware biases (i.e., products) are estimated from the ranges and position fixes provided by user terminals (i.e., inverse positioning), and details how this can be improved if raw GNSS measurements (pseudoranges and carrier phase) are also available in the terminal. The data setup used for the performance assessment was configured in a benign scenario (open sky with no obstructions) in order to obtain an upper bound on the positioning error achievable with the proposed method. Under these conditions, accuracies better than 1.5 m were achieved for the WAP position and hardware bias. The proposed method is suitable for implementation in an automated manner, without having to rely on dedicated campaigns to survey 802.11mc-compliant WAPs. This paper offers a technique to automatically estimate both mild-indoor WAP products (where terminals have both Wi-Fi RTT and GNSS coverage) and deep-indoor WAP products (with no GNSS coverage, where the terminals obtain their position exclusively from previously estimated mild-indoor WAPs).
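The inverse-positioning step described here, solving for a WAP position plus a constant hardware range bias from terminal position fixes and RTT ranges, reduces to a small nonlinear least-squares problem. A minimal sketch on noiseless synthetic data (the setup and names are invented; real data would need noise weighting):

import numpy as np
from scipy.optimize import least_squares

def estimate_wap(fixes, rtt_ranges):
    # fixes: (n, 3) known terminal positions (e.g., GNSS); rtt_ranges: (n,)
    # RTT-derived distances offset by an unknown constant hardware bias.
    def residuals(theta):
        wap, bias = theta[:3], theta[3]
        return np.linalg.norm(fixes - wap, axis=1) + bias - rtt_ranges
    x0 = np.append(fixes.mean(axis=0), 0.0)    # start at centroid, zero bias
    return least_squares(residuals, x0).x

rng = np.random.default_rng(0)
fixes = rng.uniform(0.0, 10.0, size=(30, 3))
ranges = np.linalg.norm(fixes - np.array([2.0, 3.0, 2.5]), axis=1) + 1.2
print(estimate_wap(fixes, ranges))             # ~ [2, 3, 2.5, 1.2]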
9

Schelkshorn, S., and J. Detlefsen. "Position finding using simple Doppler sensors". Advances in Radio Science 5 (June 12, 2007): 153–56. http://dx.doi.org/10.5194/ars-5-153-2007.

Annotation:
Abstract. An increasing number of modern applications and services is based on knowledge of the user's actual position. Depending on the application, a rough position estimate is sufficient, e.g., for services in cellular networks that use the information about the user's current cell. Other applications, e.g., navigation systems, use GPS for accurate position finding. Beyond these outdoor applications, a growing number of indoor applications requires position information. The previously mentioned methods for position finding (mobile cell, GPS) are not usable for these indoor applications. Within this paper we present a system that relies on the simultaneous measurement of Doppler signals at four different positions to obtain the position and velocity of an unknown object. It is therefore suitable for indoor usage, extending already existing wireless infrastructure.
10

Jamil, Faisal, and DoHyeun Kim. "Enhanced Kalman filter algorithm using fuzzy inference for improving position estimation in indoor navigation". Journal of Intelligent & Fuzzy Systems 40, no. 5 (April 22, 2021): 8991–9005. http://dx.doi.org/10.3233/jifs-201352.

Annotation:
In recent years, the widespread applications of indoor navigation have compelled the research community to propose novel solutions for detecting object positions in indoor environments, and various approaches to indoor positioning systems have been proposed and implemented. This study proposes a fuzzy-inference-based Kalman filter to improve position estimation in indoor navigation. The presented system is based on a FIS-based Kalman filter aimed at predicting the actual sensor readings from the available noisy sensor measurements. The proposed approach has two main components: a multi-sensor fusion algorithm for position estimation and a FIS-based Kalman filter algorithm. The position estimation module determines the object location in an indoor environment in an accurate way, while the fuzzy inference system controls and tunes the Kalman filter by considering the previous output as feedback. The Kalman filter predicts the actual sensor readings from the available noisy readings. To evaluate the proposed approach, a next-generation inertial measurement unit is used to acquire three-axis gyroscope and accelerometer data. Lastly, the proposed approach's performance is investigated using the MAD, RMSE, and MSE metrics. The obtained results illustrate that the FIS-based Kalman filter improves the prediction accuracy over the traditional Kalman filter approach.
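The feedback loop the abstract describes, a Kalman filter whose behaviour is tuned from its own output, can be illustrated in one dimension. The crisp three-level rule below merely stands in for genuine fuzzy inference (which would use membership functions and rule aggregation), and every parameter is invented:

import numpy as np

def adapt_r(innovation, spread):
    # Crude stand-in for fuzzy inference: the further the innovation lies
    # outside the expected spread, the less the measurement is trusted.
    a = abs(innovation) / spread
    if a < 1.0:
        return 1.0      # "small" innovation: normal trust
    if a < 3.0:
        return 2.0      # "medium": downweight the measurement
    return 5.0          # "large": treat the reading as mostly noise

def filter_series(zs, q=1e-3, r=0.25):
    # 1-D random-walk Kalman filter with innovation-driven rescaling of R.
    x, p = zs[0], 1.0
    for z in zs[1:]:
        p += q                                   # predict
        innov = z - x
        r_k = r * adapt_r(innov, np.sqrt(p + r))
        k = p / (p + r_k)                        # Kalman gain
        x += k * innov                           # update
        p *= 1.0 - k
    return x

noisy = 1.0 + 0.3 * np.random.default_rng(3).normal(size=200)
print(filter_series(noisy))    # settles near the true constant position 1.0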
11

Akgül, Batur Alp, Bülent HAZNEDAR, Abdurrahman YAŞAR, and Mustafa Ersan ÇİNKILIÇ. "A INDOOR POSITION ROUTING (IPR) AND DATA MONITOR USING BLUETOOTH LOW ENERGY TECHNOLOGY (iBEACON-BLE): AN IMPLEMENTATION STUDY". ICONTECH INTERNATIONAL JOURNAL 5, no. 1 (March 28, 2021): 38–61. http://dx.doi.org/10.46291/icontechvol5iss1pp38-61.

Annotation:
Rapid advances in the mobile industry over the last decade have produced new technological ideas and applications for researchers through smart devices. In recent years, the need for Indoor Position Routing (IPR) and Location-Based Advertisement (LBA) systems has grown, and such systems have become very popular. Thanks to the development of several technologies, it is now possible to build software and hardware applications for IPR and LBA in indoor environments; such a system should be based on low-cost technology and be suitable for integration and indoor operation. The iBeacon Bluetooth Low Energy (BLE) radio protocol presents new options and possibilities for indoor localization: it supports portable, battery-powered nodes that can be deployed smoothly at low cost, giving it distinct advantages over Wi-Fi. In this study, a technological infrastructure based on iBeacon-BLE is therefore created to solve the navigation problem in closed locations, and a data-monitoring information system providing IPR and LBA on currently available smart devices is proposed. The localization of objects based on iBeacon-BLE, and combinations thereof, is determined using data measured with the developed application. The available hardware, software, and network technologies for building an indoor IPR system are presented, together with the concept of the indoor monitoring system. The system is made up of iBeacon-BLE sensor nodes, a smart device, and a mobile application that provides IPR and LBA services by measuring the distance between transmitter (TX) and receiver (RX). The proposed model uses sensory data and the trilateration method, which allows the mobile application to determine the exact location of an object at the micro level.
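The trilateration step named in this abstract, once RSSI has been converted to distances, has a closed-form planar solution. A minimal sketch with an invented beacon layout (real BLE ranges are noisy, so a least-squares variant over more beacons would be used in practice):

import numpy as np

def trilaterate(beacons, dists):
    # Subtracting the first circle equation (x - xi)^2 + (y - yi)^2 = di^2
    # from the other two leaves a linear system in (x, y).
    (x1, y1), (x2, y2), (x3, y3) = beacons
    d1, d2, d3 = dists
    a = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(a, b)

d = np.sqrt(18.0)    # all three distances point at (3, 3)
print(trilaterate([(0, 0), (6, 0), (0, 6)], [d, d, d]))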
12

Kee, Changdon, Doohee Yun, and Haeyoung Jun. "Precise calibration method of pseudolite positions in indoor navigation systems". Computers & Mathematics with Applications 46, no. 10-11 (November 2003): 1711–24. http://dx.doi.org/10.1016/s0898-1221(03)90205-7.

13

Jeong, Jae-Hoon, and Kiwon Park. "Numerical Analysis of 2-D Positioned, Indoor, Fuzzy-Logic, Autonomous Navigation System Based on Chromaticity and Frequency-Component Analysis of LED Light". Sensors 21, no. 13 (June 25, 2021): 4345. http://dx.doi.org/10.3390/s21134345.

Annotation:
Topics concerning autonomous navigation, especially those related to positioning systems, have recently attracted increased research attention. The commonly available global positioning system (GPS) is unable to determine the positions of vehicles in GPS-shaded regions. To address this concern, this paper presents a fuzzy-logic system capable of determining the position of a moving robot in a GPS-shaded indoor environment by analyzing the chromaticity and frequency-component ratio of LED lights installed under the ceiling. The proposed system’s performance was analyzed by performing a MATLAB simulation of an indoor environment with obstacles. During the simulation, the mobile robot utilized a fuzzy autonomous navigation system with behavioral rules to approach targets successfully in a variety of indoor environments without colliding with obstacles. The robot utilized the x and y coordinates of the fuzzy positioning system. The results obtained in this study confirm the suitability of the proposed method for use in applications involving autonomous navigation of vehicles in areas with poor GPS-signal reception, such as in tunnels.
14

Zhou, Qi, and YuanJian Tian. "Spatial-temporal Evolution and Completeness Analysis of OpenStreetMap Building Data in China from 2012 to 2017". Abstracts of the ICA 1 (July 15, 2019): 1–2. http://dx.doi.org/10.5194/ica-abs-1-436-2019.

Annotation:
Abstract. OpenStreetMap (OSM), as a typical volunteered geographic information project, is an online map with free content that everyone can edit and use (Goodchild 2007). A range of applications has been proposed using OSM data, including routing and navigation, crisis mapping, 3D modelling, and land use/cover mapping. This is because OSM data is not only free to use but also has global coverage and high currentness. Despite these advantages, however, most OSM data are contributed by 'non-professional' or 'amateur geographers' (Goodchild 2008; Haklay 2010). Much attention has therefore been paid to the quality of OSM data, and assessing it has become a hot topic in the field of geographic information science.

Extensive studies have been carried out on assessing various quality measures (e.g., positional accuracy, completeness, and attribute accuracy) of OSM datasets in different countries or districts such as Germany, England, France, Italy, Canada, and the United States, with much attention paid to road features. To our knowledge, however, no study has focused on assessing the quality of OSM building data in China, although it may be an essential data source for urban planning and management, 3D modelling, and indoor navigation. The aim of this study is therefore to investigate OSM building data in China; more precisely, an analysis of the spatial-temporal evolution and completeness of OSM building data in China from 2012 to 2017 was carried out. The tenet of our study was to employ two quality indicators, i.e., building count (Gröchenig et al. 2013, Barron et al. 2014, Fan 2016) and building density (Zhou 2018), for the analyses. First, the numbers of OSM buildings from 2012 to 2017 were calculated for both provincial- and prefecture-level divisions in China; the OSM building counts were then compared among divisions and across the years 2012–2017 to analyse the evolution of OSM building data in China in both the temporal dimension and the spatial scale. Moreover, the correlations between OSM building counts and four potential factors (i.e., gross domestic product (GDP), population, urban land area, and OSM road length), which may influence the development of OSM building data in China, were investigated. Second, a 1 × 1 km² regular grid was overlaid on the OSM building datasets in urban areas to calculate the OSM building density of each grid cell; high-density grid cells (whose OSM building data were almost complete) were then extracted and analysed with a simple clustering method to investigate the spatial pattern of OSM building data in urban areas. Results showed that:

1) The OSM building data in China increased almost 20 times from 2012 to 2017, especially in the eastern coastal regions (e.g., the provincial-level divisions Jiangsu, Zhejiang, Guangzhou, and Shandong and the prefecture-level divisions Beijing, Nantong, Shanghai, Tianjin, Suzhou, Yangzhou, and Dalian). In most cases, both the GDP and OSM road length factors had a moderate correlation with the OSM building count.

2) Most grid cells in urban areas still had no buildings, or their building densities were equal to 0%, indicating that the OSM building dataset in China is far from complete. From analysing the high-density grid cells, two typical spatial distribution modes, dispersion and aggregation, were found in different prefecture-level divisions. For example, the high-density grid cells of some prefecture-level divisions (e.g., Luoyang, Yueyang, and Dalian) were mostly aggregated in the city cores, while those of others (e.g., Beijing, Tianjin, and Shanghai) were located in hot spots such as business districts, attractions, and transportation hubs.

These results may help users (especially researchers and educators) choose appropriate study areas from the OSM building dataset in China, and may motivate volunteers around the world to contribute more OSM building data in this region. Further research may include developing quality indicators for quantitative completeness estimation of OSM building data, especially in rural areas, and investigating other quality measures (e.g., positional accuracy and semantic accuracy) or geographical features (e.g., railways, land uses, and points of interest) in China's OSM dataset.
15

Fan, Qigao, Yaheng Wu, Jing Hui, Lei Wu, Zhenzhong Yu, and Lijuan Zhou. "Integrated Navigation Fusion Strategy of INS/UWB for Indoor Carrier Attitude Angle and Position Synchronous Tracking". Scientific World Journal 2014 (2014): 1–13. http://dx.doi.org/10.1155/2014/215303.

Annotation:
Under some GPS failure conditions, positioning a mobile target is difficult. This paper proposes a new INS/UWB-based method for synchronously tracking the attitude angle and position of an indoor carrier. First, an error model of the INS/UWB integrated system is built, including the error equations of INS and UWB, and a combined INS/UWB filtering model is studied; simulation results show that the two subsystems are complementary. Second, an integrated navigation data fusion strategy for INS/UWB based on Kalman filtering theory is proposed; simulation results show that the FAKF method is better than conventional Kalman filtering. Finally, an indoor experimental platform geared to the needs of a coal mine working environment is established to verify the INS/UWB integrated navigation theory. Static and dynamic positioning results show that the INS/UWB integrated navigation system is stable and real-time, and its positioning precision meets the requirements of the working conditions and is better than that of either independent subsystem.
16

Jamil, Faisal, Naeem Iqbal, Shabir Ahmad, and Do-Hyeun Kim. "Toward Accurate Position Estimation Using Learning to Prediction Algorithm in Indoor Navigation". Sensors 20, no. 16 (August 7, 2020): 4410. http://dx.doi.org/10.3390/s20164410.

Annotation:
The Internet of Things is advancing, and the augmented role of smart navigation in automating processes is at its vanguard. Smart navigation and location tracking systems are finding increasing use in mission-critical indoor scenarios, logistics, medicine, and security. Indoor localization is a demanding emerging area owing to the growing interest in location-based services. Numerous inertial measurement unit-based indoor localization mechanisms have been suggested in this regard; however, these methods have many shortcomings pertaining to accuracy and consistency. In this study, we propose a novel position estimation system based on a learning-to-prediction model to address the above challenges. The designed system consists of two modules: a learning-to-prediction module and position estimation using sensor fusion in an indoor environment. The prediction algorithm is attached to the learning module, which continuously controls, observes, and enhances the efficiency of the prediction algorithm by evaluating its output and taking into account the exogenous factors that may have an impact on its outcome. On top of that, we consider a situation where the prediction algorithm is applied to anticipate accurate gyroscope and accelerometer readings from the noisy sensor readings: a learning module based on an artificial neural network and a Kalman filter are used as the prediction algorithm to predict the actual accelerometer and gyroscope readings from the noisy ones. Moreover, to acquire data, we use a next-generation inertial measurement unit containing a 3-axis accelerometer and gyroscope. Finally, to assess the performance and accuracy of the proposed system, we carried out a number of experiments and observed that the proposed Kalman filter with learning module performed better than the traditional Kalman filter algorithm in terms of the root mean square error metric.
17

Nakagawa, M., K. Akano, T. Kobayashi, and Y. Sekiguchi. "RELATIVE PANORAMIC CAMERA POSITION ESTIMATION FOR IMAGE-BASED VIRTUAL REALITY NETWORKS IN INDOOR ENVIRONMENTS". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2/W4 (September 14, 2017): 349–54. http://dx.doi.org/10.5194/isprs-annals-iv-2-w4-349-2017.

Annotation:
Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated at lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite System (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.
18

Deng, Zhi-An, Guofeng Wang, Ying Hu, and Yang Cui. "Carrying Position Independent User Heading Estimation for Indoor Pedestrian Navigation with Smartphones". Sensors 16, no. 5 (May 11, 2016): 677. http://dx.doi.org/10.3390/s16050677.

19

Melo, Leonimer Flávio de, João Mauricio Rosário, and Almiro Franco da Silveira Junior. "Mobile Robot Indoor Autonomous Navigation with Position Estimation Using RF Signal Triangulation". Positioning 04, no. 01 (2013): 20–35. http://dx.doi.org/10.4236/pos.2013.41004.

20

Anbarasu, B., and G. Anitha. "Vision-based Position estimation and Indoor scene recognition algorithm for Quadrotor Navigation". Journal of Physics: Conference Series 1969, no. 1 (July 1, 2021): 012001. http://dx.doi.org/10.1088/1742-6596/1969/1/012001.

21

Sugiura, Akihiko, and Takuya Shoji. "A Pedestrian Navigation System Using Cellular Phone Video-Conferencing Functions". International Journal of Vehicular Technology 2012 (December 31, 2012): 1–8. http://dx.doi.org/10.1155/2012/945365.

Annotation:
A field for specifying a user's position has been developed using Global Positioning System (GPS) technology, and devices have been developed in which a pedestrian navigation unit carries a GPS receiver to determine position with cellular phones. However, GPS cannot specify a position in a subterranean environment or indoors, which is beyond the reach of the transmitted signals. In addition, the position-specification precision of GPS, that is, its resolution, is on the order of several meters, which is insufficient for pedestrians. In this study, we proposed and evaluated a technique for locating a user's 3D position by setting up markers in the navigation space that are detected in the image of a cellular phone. By experiment, we verified the effectiveness and accuracy of the proposed method. Additionally, we improved the positional precision by measuring distances to numerous markers.
22

Wang, Chuang, Li Xing, and Xiaowei Tu. "A Novel Position and Orientation Sensor for Indoor Navigation Based on Linear CCDs". Sensors 20, no. 3 (January 29, 2020): 748. http://dx.doi.org/10.3390/s20030748.

Annotation:
The position and orientation of a mobile agent, such as a robot or drone, should be estimated in a timely way while it operates in a structured indoor environment, so as to ensure the security and efficiency of task execution. Because position and orientation are often estimated separately by different kinds of sensors in off-the-shelf methods, we design a novel position-orientation sensor (POS). The POS consists of four pairs of linear charge-coupled devices (CCDs) and cylindrical lenses, which can estimate the 3D coordinates of an anchor in the POS's field of view. After detecting at least three anchors sequentially, the Rodrigues coordinate transformation algorithm is utilized to estimate the position and orientation of the POS simultaneously. The position and orientation are estimated at the receiver side, so there is no privacy concern associated with this system. The architecture of the proposed POS is symmetrical and redundant: even if one of the linear CCDs or cylindrical lenses malfunctions, the whole system can still work normally. The proposed method is cost-effective and extends easily to a wide range. Numerical simulation demonstrates the feasibility and high accuracy of the proposed method, which outperforms off-the-shelf methods.
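The Rodrigues coordinate transformation mentioned in the abstract builds on the standard axis-angle rotation formula R = I + sin(t)K + (1 - cos(t))K^2, where K is the skew-symmetric matrix of the rotation axis. A minimal reference implementation of that formula (not the sensor's actual estimation pipeline):

import numpy as np

def rodrigues(axis, theta):
    # Rotation matrix for a rotation of theta radians about the unit vector axis.
    k = np.asarray(axis, dtype=float)
    k /= np.linalg.norm(k)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])          # cross-product (skew) matrix
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# Rotating the x axis by 90 degrees about z yields the y axis.
print(rodrigues([0, 0, 1], np.pi / 2) @ np.array([1.0, 0.0, 0.0]))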
23

Laoudias, Christos, Artyom Nikitin, Panagiotis Karras, Moustafa Youssef, and Demetrios Zeinalipour-Yazti. "Indoor Quality-of-position Visual Assessment Using Crowdsourced Fingerprint Maps". ACM Transactions on Spatial Algorithms and Systems 7, no. 2 (February 2021): 1–32. http://dx.doi.org/10.1145/3433026.

Annotation:
Internet-based Indoor Navigation (IIN) architectures organize signals collected by crowdsourcers in Fingerprint Maps (FMs) to improve localization, given that satellite-based technologies do not operate accurately in indoor spaces, where people spend 80%–90% of their time. In this article, we study the Quality-of-Position (QoP) assessment problem, which aims to assess, in an offline manner, the localization accuracy that a user localizing with a FM can expect to obtain. In particular, our proposed ACCES framework uses a generic interpolation method based on Gaussian Processes (GP), upon which a navigability score at any location is derived using the Cramer-Rao Lower Bound (CRLB). We derive adaptations of ACCES for both magnetic and Wi-Fi data and implement a complete visual assessment environment, which has been incorporated in the Anyplace open-source IIN. Our experimental evaluation of ACCES in Anyplace suggests the high qualitative and quantitative benefits of our propositions.
24

Park, JeeWoong, Yong K. Cho, and Diego Martinez. "A BIM and UWB integrated Mobile Robot Navigation System for Indoor Position Tracking Applications". Journal of Construction Engineering and Project Management 6, no. 2 (June 1, 2016): 30–39. http://dx.doi.org/10.6106/jcepm.2016.6.2.030.

25

Koh, Kyoung C., Jae S. Kim, and Hyung S. Cho. "A position estimation system for mobile robots using a monocular image of a 3-D landmark". Robotica 12, no. 5 (September 1994): 431–41. http://dx.doi.org/10.1017/s0263574700017987.

Annotation:
Summary. This paper presents an absolute position estimation system for a mobile robot moving on a flat surface. In this system, a 3-D landmark with four coplanar points and one non-coplanar point is utilized to improve the accuracy of position estimation and to guide the robot during navigation. Through theoretical analysis, we investigate the image sensitivity of the proposed 3-D landmark compared with the conventional 2-D landmark. In the camera calibration stage of the experiments, we employ a neural network as a computational tool. The neural network is trained on a set of learning data collected at various points around the mark so that the extrinsic and intrinsic parameters of the camera system can be resolved. The overall estimation algorithm, from mark identification to position determination, is implemented on a 32-bit personal computer with an image digitizer and an arithmetic accelerator. To demonstrate the effectiveness of the proposed 3-D landmark and the neural network-based calibration scheme, a series of navigation experiments was performed on a wheeled mobile robot (LCAR) in an indoor environment. The results show the feasibility of the position estimation system for a mobile robot's real-time navigation.
26

Guo, Ying, Qinghua Liu, Xianlei Ji, Shengli Wang, Mingyang Feng, and Yuxi Sun. "Multimode Pedestrian Dead Reckoning Gait Detection Algorithm Based on Identification of Pedestrian Phone Carrying Position". Mobile Information Systems 2019 (October 31, 2019): 1–14. http://dx.doi.org/10.1155/2019/4709501.

Annotation:
Pedestrian dead reckoning (PDR) is an essential technology for positioning and navigation in complex indoor environments. When a mobile phone is used for PDR positioning and navigation, the gait information acquired by the inertial sensors differs with the carrying position, as does the noise contained in the heading information, resulting in excessive gait detection deviation and greatly reduced PDR positioning accuracy. Using data from mobile phone accelerometer and gyroscope signals, this paper takes various phone carrying positions and switching between positions as the research objective and analyses the time-domain characteristics of the three-axis accelerometer and gyroscope signals. A principal component analysis algorithm is used to reduce the dimension of the extracted multidimensional gait features, and a random forest is trained on the extracted features to distinguish the phone carrying positions. The results show that the accuracy of step detection and distance estimation in the gait detection process greatly improves once the phone carrying position is recognized, which enhances the robustness of the PDR algorithm.
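The processing chain described, time-domain features reduced by principal component analysis and classified by a random forest, maps directly onto a standard scikit-learn pipeline. A minimal sketch on placeholder data (the feature count, labels, and random data below are invented stand-ins for the paper's accelerometer/gyroscope features):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 24))       # 24 hypothetical time-domain features per gait window
y = rng.integers(0, 3, size=300)     # invented labels: 0=hand, 1=pocket, 2=swinging

clf = make_pipeline(PCA(n_components=10), RandomForestClassifier(n_estimators=100))
clf.fit(X, y)
print(clf.predict(X[:5]))            # recognised carrying positions for new windows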
27

Atia, Mohamed M., Shifei Liu, Heba Nematallah, Tashfeen B. Karamat, and Aboelmagd Noureldin. "Integrated Indoor Navigation System for Ground Vehicles With Automatic 3-D Alignment and Position Initialization". IEEE Transactions on Vehicular Technology 64, no. 4 (April 2015): 1279–92. http://dx.doi.org/10.1109/tvt.2015.2397004.

28

Zhe Huang, Jigui Zhu, Linghui Yang, Bin Xue, Jun Wu, and Ziyue Zhao. "Accurate 3-D Position and Orientation Method for Indoor Mobile Robot Navigation Based on Photoelectric Scanning". IEEE Transactions on Instrumentation and Measurement 64, no. 9 (September 2015): 2518–29. http://dx.doi.org/10.1109/tim.2015.2415031.

29

Abugabal, Muhammad, Yasmine Fahmy, and Hazim Tawfik. "Novel Position Estimation using Differential Timing Information for Asynchronous LTE/NR Networks". International journal of Computer Networks & Communications 13, no. 04 (July 31, 2021): 39–52. http://dx.doi.org/10.5121/ijcnc.2021.13403.

Annotation:
Positioning techniques have been a common objective since the early development of wireless networks. However, current positioning methods in cellular networks are still primarily based on the Global Navigation Satellite System (GNSS), which has several limitations, such as high power drain and failure in indoor scenarios. This study introduces a novel approach that employs standard LTE signaling to provide high-accuracy position estimation. The proposed technique is designed in analogy to the human sound localization system, eliminating the need for information from three spatially diverse Base Stations (BSs); it is inspired by the near-perfect 3D sound localization humans achieve with two ears. A field study was carried out in a dense urban city to verify the accuracy of the proposed technique, with more than 20 thousand measurement samples collected. The achieved positioning accuracy meets the latest Federal Communications Commission (FCC) requirements in the planar dimension.
30

Seiffert Simões, Walter C. S., and Vicente F. de Lucena. "Indoor Navigation Assistant for Visually Impaired by Pedestrian Dead Reckoning and Position Estimative of Correction for Patterns Recognition". IFAC-PapersOnLine 49, no. 30 (2016): 167–70. http://dx.doi.org/10.1016/j.ifacol.2016.11.149.

31

Fu, Qing, and Guenther Retscher. "Active RFID Trilateration and Location Fingerprinting Based on RSSI for Pedestrian Navigation". Journal of Navigation 62, no. 2 (March 12, 2009): 323–40. http://dx.doi.org/10.1017/s0373463308005195.

Annotation:
In the work package ‘Integrated Positioning’ of the Ubiquitous Cartography for Pedestrian Navigation project (UCPNAVI), alternative location methods using active Radio Frequency Identification (RFID) are investigated for positioning pedestrians in areas where no GNSS position determination is possible due to obstruction of the satellite signals. In most common RFID applications, positioning is performed cell-based: RFID tags can be installed at active landmarks (i.e., known locations) in the surroundings, and a user equipped with an RFID reader can be positioned using Cell of Origin (CoO). The positioning accuracy, however, depends on the size of the cell defined by the maximum range of the signal; using long-range RFID, the cell size can be quite large, i.e., around 20 m. Therefore, the paper proposes two new methods for positioning when more than one RFID tag is visible: trilateration and location fingerprinting based on received signal strength indication (RSSI). The trilateration approach is based on the deduction of ranges to the RFID tags from RSSI. An iterative approach to model the signal propagation is introduced, i.e., the International Telecommunication Union (ITU) indoor location model, which can be simplified to a logarithmic model; a simple polynomial model is also employed for the signal strength-to-range conversion. In a second attempt, location fingerprinting based on RSSI is investigated. In this case, RSSI is measured in a training phase at known locations inside the building and stored in a database; in the positioning phase these measurements are used together with the current measurements to obtain the current location of the user. For the estimation of the current location, different approaches are employed and tested, i.e., a direction-based approach, a tag-based approach, a direction-tag-based approach and a heading-based approach. Using trilateration or fingerprinting, positioning accuracies at the level of one to a few metres can usually be achieved. The concept and the iterative approach of the different methods, as well as test results, are discussed in this paper.
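The logarithmic signal-strength-to-range model referred to here is typically a log-distance path-loss law, inverted to turn an RSSI reading into a range for trilateration. A minimal sketch with invented calibration constants (the reference power at 1 m and the path-loss exponent must be fitted per site):

def rssi_to_range(rssi_dbm, p0_dbm=-45.0, n=2.2):
    # RSSI(d) = P0 - 10 n log10(d / 1 m)  =>  d = 10 ** ((P0 - RSSI) / (10 n))
    return 10.0 ** ((p0_dbm - rssi_dbm) / (10.0 * n))

print(rssi_to_range(-67.0))   # ~10 m under the assumed model parameters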
32

Sun, Manhui, Shaowu Yang, and Hengzhu Liu. "Convolutional neural network-based coarse initial position estimation of a monocular camera in large-scale 3D light detection and ranging maps". International Journal of Advanced Robotic Systems 16, no. 6 (November 1, 2019): 172988141989351. http://dx.doi.org/10.1177/1729881419893518.

Annotation:
Initial position estimation in global maps, a prerequisite for accurate localization, plays a critical role in mobile robot navigation tasks. Global positioning system signals often become unreliable in disaster sites or indoor areas, so other localization methods are required to help the robot in search and rescue. Many visual approaches focus on estimating a robot's position within prior maps acquired with cameras. In contrast to conventional methods, which need a coarse estimate of the initial position to precisely localize a camera in a given map, we propose a novel approach that estimates the initial position of a monocular camera within a given 3D light detection and ranging map using a convolutional neural network, with no retraining required. It enables a mobile robot to estimate a coarse position of itself in 3D maps with only a monocular camera. The key idea of our work is to use depth information as intermediate data to retrieve a camera image within immense point clouds. We employ an unsupervised learning framework to predict depth from a single image, then use a pretrained convolutional neural network model to generate depth image descriptors that represent places. We retrieve the position by computing similarity scores between the current depth image and the depth images projected from the 3D maps. Experiments on the publicly available KITTI data sets demonstrate the efficiency and feasibility of the presented algorithm.
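The retrieval step described, scoring the current depth image against depth images rendered from the LiDAR map via descriptor similarity, can be illustrated with plain cosine similarity (an assumed stand-in for the sketch; the descriptors and data below are synthetic):

import numpy as np

def retrieve(query_desc, map_descs):
    # Best-matching map view = highest cosine similarity between descriptors.
    q = query_desc / np.linalg.norm(query_desc)
    m = map_descs / np.linalg.norm(map_descs, axis=1, keepdims=True)
    scores = m @ q
    return int(np.argmax(scores)), scores

rng = np.random.default_rng(2)
map_descs = rng.normal(size=(100, 512))              # descriptors of 100 rendered map views
query = map_descs[42] + 0.1 * rng.normal(size=512)   # noisy view of place 42
print(retrieve(query, map_descs)[0])                 # -> 42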
33

Scheuermann, Edward, and Mark Costello. "Specialized Algorithm for Navigation of a Micro Hopping Air Vehicle Using Only Inertial Sensors". Journal of Dynamic Systems, Measurement, and Control 135, no. 2 (December 21, 2012). http://dx.doi.org/10.1115/1.4007975.

Annotation:
The need for accurate and reliable navigation techniques for micro air vehicles plays an important part in enabling autonomous operation. Traditional navigation systems typically rely on periodic global positioning system updates and provide little benefit when operating indoors or in other similarly shielded environments. Moreover, direct integration of the onboard inertial measurement unit data stream often results in substantial drift errors, yielding virtually unusable positional information. This paper presents a new strategy for obtaining an accurate navigation solution for the special case of a micro hopping air vehicle, beginning from a known location and heading, using only one triaxial accelerometer and one triaxial gyroscope. Utilizing the unique dynamics of the hopping vehicle, a piece-wise navigation solution is constructed by selectively integrating the inertial data stream only for those short periods while the vehicle is airborne. Inter-hop data post-processing and sensor bias recalibration are also used to further improve estimation accuracy. To assess the performance of the proposed algorithm, a series of tests was conducted in which the estimated vehicle position following a sequence of 10 consecutive hops was compared with measurements from an optical motion-capture system. On average, the final estimated vehicle position was within 0.70 m, or just over 6%, of its actual location, based on a total traveled distance of approximately 11 m.
34

Darmawan, Adytia, Sanggar Dewanto, and Dadet Pramadihanto. "An Implementation of Error Minimization Position Estimate in Wireless Inertial Measurement Unit using Modification ZUPT". EMITTER International Journal of Engineering Technology 4, no. 2 (December 15, 2016). http://dx.doi.org/10.24003/emitter.v4i2.156.

Annotation:
Position estimation using a Wireless Inertial Measurement Unit (WIMU) is one of the emerging technologies in the field of indoor positioning systems. A WIMU can detect movement and does not depend on GPS signals. The position is estimated using a modified ZUPT (Zero Velocity Update) method built on Filter Magnitude Acceleration (FMA), Variance Magnitude Acceleration (VMA), and Angular Rate (AR) estimation. The performance of this method was evaluated on a six-legged robot navigation system. Experimental results show that the combination of VMA and AR gives the best position estimation.
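The FMA and VMA tests named in this abstract are the usual stance-phase detectors behind a ZUPT: the acceleration magnitude should sit near gravity while its windowed variance stays small, and velocity is reset to zero wherever both hold. A minimal sketch with invented thresholds (the paper's actual modification also folds in the angular rate):

import numpy as np

def zero_velocity_mask(acc, win=20, g=9.81, mag_thresh=0.3, var_thresh=0.05):
    # acc: (n, 3) accelerometer samples; returns True where the sensor is still.
    mag = np.linalg.norm(acc, axis=1)
    still = np.zeros(len(mag), dtype=bool)
    for i in range(len(mag) - win + 1):
        w = mag[i:i + win]
        if abs(w.mean() - g) < mag_thresh and w.var() < var_thresh:
            still[i:i + win] = True      # apply a zero-velocity update here
    return still

acc = np.tile([0.0, 0.0, 9.81], (100, 1))    # 100 perfectly stationary samples
print(zero_velocity_mask(acc).all())         # -> True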
35

Avram, Horea. "The Convergence Effect: Real and Virtual Encounters in Augmented Reality Art". M/C Journal 16, no. 6 (November 7, 2013). http://dx.doi.org/10.5204/mcj.735.

Annotation:
Augmented Reality—The Liminal Zone
Within the larger context of the post-desktop technological philosophy and practice, an increasing number of efforts are directed towards finding solutions for integrating as close as possible virtual information into specific real environments; a short list of such endeavors includes Wi-Fi connectivity, GPS-driven navigation, mobile phones, GIS (Geographic Information System), and various technological systems associated with what is loosely called locative, ubiquitous and pervasive computing. Augmented Reality (AR) is directly related to these technologies, although its visualization capabilities and the experience it provides assure it a particular place within this general trend. Indeed, AR stands out for its unique capacity (or ambition) to offer a seamless combination—or what I call here an effect of convergence—of the real scene perceived by the user with virtual information overlaid on that scene interactively and in real time. The augmented scene is perceived by the viewer through the use of different displays, the most common being the AR glasses (head-mounted display), video projections or monitors, and hand-held mobile devices such as smartphones or tablets, increasingly popular nowadays. One typical example of an AR application is Layar, a browser that layers information of public interest—delivered through an open-source content management system—over the actual image of a real space, streamed live on the mobile phone display. An increasing number of artists employ this type of mobile AR app to create artworks that consist in perceptually combining material reality and virtual data: as the user points the smartphone or tablet to a specific place, virtual 3D-modelled graphics or videos appear in real time, seamlessly inserted in the image of that location, according to the user’s position and orientation. In the engineering and IT design fields, one of the first researchers to articulate a coherent conceptualization of AR and to underlie its specific capabilities is Ronald Azuma. He writes that, unlike Virtual Reality (VR) which completely immerses the user inside a synthetic environment, AR supplements reality, therefore enhancing “a user’s perception of and interaction with the real world” (355-385). Another important contributor to the foundation of AR as a concept and as a research field is industrial engineer Paul Milgram. He proposes a comprehensive and frequently cited definition of “Mixed Reality” (MR) via a schema that includes the entire spectrum of situations that span the “continuum” between actual reality and virtual reality, with “augmented reality” and “augmented virtuality” between the two poles (283). Important to remark with regard to terminology (MR or AR) is that especially in the non-scientific literature, authors do not always explain a preference for either MR or AR. This suggests that the two terms are understood as synonymous, but it also provides evidence for my argument that, outside of the technical literature, AR is considered a concept rather than a technology. Here, I use the term AR instead of MR considering that the phrase AR (and the integrated idea of augmentation) is better suited to capturing the convergence effect. As I will demonstrate in the following lines, the process of augmentation (i.e. the convergence effect) is the result of an enhancement of the possibilities to perceive and understand the world—through adding data that augment the perception of reality—and not simply the product of a mix.
Nevertheless, there is surely something “mixed” about this experience, at least for the fact that it combines reality and virtuality. The experiential result of combining reality and virtuality in the AR process is what media theorist Lev Manovich calls an “augmented space,” a perceptual liminal zone which he defines as “the physical space overlaid with dynamically changing information, multimedia in form and localized for each user” (219). The author derives the term “augmented space” from the term AR (already established in the scientific literature), but he sees AR, and implicitly augmented space, not as a strictly defined technology, but as a model of visuality concerned with the intertwining of the real and virtual: “it is crucial to see this as a conceptual rather than just a technological issue – and therefore as something that in part has already been an element of other architectural and artistic paradigms” (225-6). Surely, it is hard to believe that AR has appeared in a void or that its emergence is strictly related to certain advances in technological research. AR—as an artistic manifestation—is informed by other attempts (not necessarily digital) to merge real and fictional in a unitary perceptual entity, particularly by installation art and Virtual Reality (VR) environments. With installation art, AR shares the same spatial strategy and scenographic approach—they both construct “fictional” areas within material reality, that is, a sort of mise-en-scène that are aesthetically and socially produced and centered on the active viewer. From the media installationist practice of the previous decades, AR inherited the way of establishing a closer spatio-temporal interaction between the setting, the body and the electronic image (see for example Bruce Nauman’s Live-Taped Video Corridor [1970], Peter Campus’s Interface [1972], Dan Graham’s Present Continuous Pasts(s) [1974], Jeffrey Shaw’s Viewpoint [1975], or Jim Campbell’s Hallucination [1988]). On the other hand, VR plays an important role in the genealogy of AR for sharing the same preoccupation for illusionist imagery and—at least in some AR projects—for providing immersive interactions in “expanded image spaces experienced polysensorily and interactively” (Grau 9). VR artworks such as Paul Sermon, Telematic Dreaming (1992), Char Davies’ Osmose (1995), Michael Naimark’s Be Now Here (1995-97), Maurice Benayoun’s World Skin: A Photo Safari in the Land of War (1997), Luc Courchesne’s Where Are You? (2007-10), are significant examples for the way in which the viewer can be immersed in “expanded image-spaces.” Offering no view of the exterior world, the works try instead to reduce as much as possible the critical distance the viewer might have to the image he/she experiences. Indeed, AR emerged in great part from the artistic and scientific research efforts dedicated to VR, but also from the technological and artistic investigations of the possibilities of blending reality and virtuality, conducted in the previous decades. For example, in the 1960s, computer scientist Ivan Sutherland played a crucial role in the history of AR contributing to the development of display solutions and tracking systems that permit a better immersion within the digital image. Another important figure in the history of AR is computer artist Myron Krueger whose experiments with “responsive environments” are fundamental as they proposed a closer interaction between participant’s body and the digital object. 
More recently, architect and theorist Marcos Novak contributed to the development of the idea of AR by introducing the concept of “eversion,” “the counter-vector of the virtual leaking out into the actual.” Today, AR research and the applications made available by various developers and artists focus more and more on mobility and ubiquitous access to information rather than on immersivity and illusionist effects. A few examples of mobile AR include Layar and Wikitude—“world browsers” that overlay site-specific information in real time onto a live view (video stream) of a place; Streetmuseum (launched in 2010) and Historypin (launched in 2011)—applications that insert archive images into the street view of the specific location where the old images were taken; and Google Glass (launched in 2012)—a device that gives the wearer access to Google’s key cloud features, in situ and in real time.

While recognizing the importance of these technological developments and of artistic manifestations such as installation art and VR as predecessors of AR, we should emphasize that AR moves beyond these artistic and technological models. AR extends the installationist precedent by proposing a consistent and seamless integration of informational elements with the very physical space of the spectator; at the same time, it rejects the idea of secluding the viewer in a completely artificial environment, as VR systems do, by opening the perceptual field to the surrounding environment. Instead of leaving the viewer in a sort of epistemological “lust” within the closed limits of immersive virtual systems, AR treats virtuality rather as a “component of experiencing the real” (Farman 22). Thus, the questions that arise—and which this essay aims to answer—are: Is there a specific spatial dimension in AR? If so, can we distinguish it as a different—if not new—spatial and aesthetic paradigm? Is AR’s intricate topology able to be the place not only of convergence, but also of possible tensions between its real and virtual components, between the ideal of obtaining perceptual continuity and the inherent (technical) limitations that undermine that ideal?

Converging Spaces in the Artistic Mode: Between Continuum and Discontinuum

As key examples of the way in which AR creates a specific spatial experience—in which convergence appears as a fluctuation between continuity and discontinuity—I discuss three of the most accomplished works in the field, works that also, significantly, expose the essential role played by the interface in providing this experience: Living-Room 2 (2007) by Jan Torpus, Under Scan (2005-2008) by Rafael Lozano-Hemmer, and Hans RichtAR (2013) by John Craig Freeman and Will Pappenheimer. The three works illustrate the three main categories of interface used for the AR experience: head-attached displays, spatial displays, and hand-held devices (Bimber and Raskar). These types of interface—together with the whole array of adjacent devices, software, and tracking systems—play a central role in determining the forms and outcomes of the user’s experience, and consequently inform to a certain degree the aesthetic and socio-cultural interpretative discourse surrounding AR. Indeed, it is not the same to have an immersive but solitary experience of an AR artwork or application as it is to have a mobile and public one. The first example is Living-Room 2, an immersive AR installation realized by a collective coordinated by Jan Torpus in 2007 at the University of Applied Sciences and Arts FHNW, Basel, Switzerland.
The work consists of a built “living room” with pieces of furniture and domestic objects that are perceptually augmented by means of a “see-through” head-mounted display (HMD). The viewer perceives at the same time the real room and a series of virtual graphics superimposed on it, such as illusionist natural vistas that “erase” the walls or strange creatures that “invade” the living room. The user can select different augmenting “scenarios” by interacting with both the physical interfaces (the real furniture and objects) and the graphical interfaces (provided as virtual images in the visual field of the viewer and activated via a handheld device). For example, in one of the proposed scenarios, the user is prompted to design his or her own extended living room by augmenting the content and context of the given real space with different “spatial dramaturgies” or “AR décors.” Another scenario offers the possibility of creating an “Ecosystem”—a real-digital world perceived through the HMD in which strange creatures virtually occupy the living room, intertwining with the physical configuration of the set design and with the user’s viewing direction, body movement, and gestures. Particular attention is paid to the participant’s position in the room: a tracking device measures the coordinates of the participant’s location and direction of view, computes occlusions of real space, and superimposes congruent 3D images upon it.

Figure 1: Jan Torpus, Living-Room 2 (Ecosystems), Augmented Reality installation (2007). Courtesy of the artist.

Figure 2: Jan Torpus, Living-Room 2 (AR décors), Augmented Reality installation (2007). Courtesy of the artist.

In this sense, the title of the work acquires a double meaning: “living” is both descriptive and metaphoric. As Torpus explains, “Living-Room” is an ambiguous phrase: it can be both a living room and a room that actually lives, an observation that suggests the idea of a continuum and of immersion in an environment with no apparent ruptures between reality and virtuality. Of course, immersion is in these circumstances not about the creation of a purely artificial, secluded space of experience like that of VR environments, but rather about a dialogical exercise that unifies two different phenomenal levels, real and virtual, within a (dis)continuous environment (with the prefix “dis” as a necessary provision). Media theorist Ron Burnett’s observations about the instability of the dividing line between different levels of experience—more exactly, of the real-virtual continuum—in what he calls immersive “image-worlds” have a particular relevance in this context:

Viewing or being immersed in images extend the control humans have over mediated spaces and is part of a perceptual and psychological continuum of struggle for meaning within image-worlds. Thinking in terms of continuums lessens the distinctions between subjects and objects and makes it possible to examine modes of influence among a variety of connected experiences. (113)

It is precisely this preoccupation with lessening any (or most) distinctions between subjects and objects, and between real and virtual spaces, that lies at the core of every artistic experiment under the AR rubric. The fact that this distinction is never entirely erased—as Living-Room 2 proves—is part of the very condition of AR.
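A brief technical aside: the tracking-and-superimposition step described above can be sketched in a few lines. The following Python fragment is a toy model with invented numbers, not the installation’s actual software. It shows how a tracked head pose might place a virtual creature in the viewer’s image, and how a depth comparison against the known room geometry lets real furniture occlude the virtual object, producing the kind of congruent superimposition the passage describes.

```python
import numpy as np

def look_at_view(eye, gaze_dir, up=(0.0, 0.0, 1.0)):
    """Build a world-to-camera rotation from a tracked eye position and gaze direction."""
    f = np.asarray(gaze_dir, float); f /= np.linalg.norm(f)
    r = np.cross(f, up); r /= np.linalg.norm(r)
    u = np.cross(r, f)
    return np.stack([r, u, -f])  # rows: camera right, up, and backward axes

def project(point_world, eye, rot, focal=800.0, cx=640.0, cy=360.0):
    """Pinhole projection of a world point into the HMD image; None if behind the viewer."""
    p = rot @ (np.asarray(point_world, float) - np.asarray(eye, float))
    if p[2] >= 0:            # the camera looks down -z, so z >= 0 is behind it
        return None
    depth = -p[2]
    return (cx + focal * p[0] / depth, cy + focal * p[1] / depth), depth

def visible(virtual_depth, room_depth_at_pixel):
    """Occlusion test: draw the creature only if no real surface is nearer on this ray."""
    return virtual_depth < room_depth_at_pixel

# Example: a viewer standing at the door (eye height 1.6 m) looking along +x,
# a virtual creature 2 m away, and a real sofa surface 3 m away on the same ray.
eye = (0.0, 0.0, 1.6)
rot = look_at_view(eye, gaze_dir=(1.0, 0.0, 0.0))
result = project((2.0, 0.2, 1.5), eye, rot)
if result:
    (x, y), d = result
    print("draw creature at pixel", (round(x), round(y)), "visible:", visible(d, 3.0))
```

In a real system the room geometry would come from a pre-built 3D model or a depth sensor, and the pose from the installation’s tracker, but the principle of pose-driven projection plus depth-based occlusion is the same.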
The ambition to create a continuum is, after all, not about producing perfectly homogeneous spaces but, as Ron Burnett points out (113), “about modalities of interaction and dialogue” between real worlds and virtual images. Another way of framing the same problem of creating a provisional spatial continuum between reality and virtuality, this time in a non-immersive fashion (i.e. with projective interface means), occurs in Rafael Lozano-Hemmer’s Under Scan (2005-2008). The work, part of the larger series Relational Architecture, is an interactive video installation conceived for outdoor and indoor environments and presented in various public spaces. It is a complex system comprising a powerful light source, video projectors, computers, and a tracking device. The powerful light casts shadows of passers-by within the dark environment of the work’s setting. The tracking device registers where viewers are positioned and permits the system to project different video sequences onto their shadows. Shot in advance by local videographers and producers, the filmed sequences show full-body images of ordinary people moving freely but also watching the camera. As they appear within pedestrians’ shadows, the filmed figures interact with the viewers, moving and establishing eye contact.

Figure 3: Rafael Lozano-Hemmer, Under Scan (Relational Architecture 11), 2005. Shown here: Trafalgar Square, London, United Kingdom, 2008. Photo: Antimodular Research. Courtesy of the artist.

Figure 4: Rafael Lozano-Hemmer, Under Scan (Relational Architecture 11), 2005. Shown here: Trafalgar Square, London, United Kingdom, 2008. Photo: Antimodular Research. Courtesy of the artist.

One of the most interesting attributes of this work with respect to the question of AR’s (im)possible perceptual spatial continuity is its ability to create an experientially stimulating and conceptually sophisticated play between illusion and the subversion of illusion. In Under Scan, the integration of video projections into the real environment via the active body of the viewer is aimed at tempering as much as possible any disparities or dialectical tensions—that is, any successive or alternative reading—between real and virtual. Although non-immersive, the work fuses the two levels by provoking an intimate but mute dialogue between the real, present body of the viewer and the virtual, absent body of the filmed figure via the ambiguous entity of the shadow. The latter is an illusion (it marks the presence of a body) that is transcended by another illusion (the video projection). Moreover, being “under scan,” the viewer inhabits both the “here” of the immediate space and the “there” of virtual information: “the body” is equally a presence in flesh and bones and an occurrence in bits and bytes. Yet, however convincing this reality-virtuality pseudo-continuum might be, spatial and temporal fragmentations inevitably persist: there is always a certain break at the phenomenological level between the experience of real space, the bodily absence/presence in the shadow, and the displacements and delays of the video projection.
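To see how the shadow can serve as a projection target at all, consider a deliberately simplified geometric sketch in Python (all positions and the calibration matrix are invented; the installation’s actual tracker-projector calibration is certainly more elaborate). Knowing the position of the light source and the tracked position of a pedestrian, the system can estimate where the cast shadow falls on the floor and map that spot into projector pixel coordinates.

```python
import numpy as np

def shadow_anchor(light_pos, head_pos, floor_z=0.0):
    """Intersect the ray from the light through the pedestrian's head with the
    floor plane: an estimate of where the tip of the cast shadow lands."""
    light = np.asarray(light_pos, float)
    head = np.asarray(head_pos, float)
    t = (floor_z - light[2]) / (head[2] - light[2])
    return light + t * (head - light)

def to_projector(point_floor, homography):
    """Map a floor point (x, y) into projector pixels via a pre-calibrated
    3x3 floor-to-projector homography (here a toy placeholder)."""
    p = homography @ np.array([point_floor[0], point_floor[1], 1.0])
    return p[:2] / p[2]

# Example: a light 10 m up, and a tracked pedestrian whose head is at (4, 2, 1.7).
tip = shadow_anchor(light_pos=(0.0, 0.0, 10.0), head_pos=(4.0, 2.0, 1.7))
H = np.array([[100.0, 0.0, 960.0],
              [0.0, 100.0, 540.0],
              [0.0, 0.0, 1.0]])  # invented calibration: metres to pixels
print("project the video portrait around projector pixel", to_projector(tip, H))
```

The point of the sketch is only that the “ambiguous entity of the shadow” has a computable location, which is what allows the video portrait to be registered with a moving viewer in real time.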
Figure 5: John Craig Freeman and Will Pappenheimer, Hans RichtAR, augmented reality installation included in the exhibition “Hans Richter: Encounters”, Los Angeles County Museum of Art, 2013. Courtesy of the artists.

Figure 6: John Craig Freeman and Will Pappenheimer, Hans RichtAR, augmented reality installation included in the exhibition “Hans Richter: Encounters”, Los Angeles County Museum of Art, 2013. Courtesy of the artists.

The third example of an AR artwork that engages the problem of real-virtual spatial convergence as a play between perceptual continuity and discontinuity, this time using a hand-held mobile interface, is Hans RichtAR by John Craig Freeman and Will Pappenheimer. The work is an AR installation included in the exhibition “Hans Richter: Encounters” at the Los Angeles County Museum of Art in 2013. The project recreates the spirit of the 1929 exhibition held in Stuttgart, entitled Film und Foto (“FiFo”), for which avant-garde artist Hans Richter served as film curator. Featured in the augmented reality is a re-imaging of the FiFo Russian Room designed by El Lissitzky, where a selection of Russian photographs, film stills, and actual film footage was presented. Users access the work through tablets made available at the exhibition entrance. Pointing the tablet at the exhibition and moving around the room, the viewer discovers a new, complex installation superimposed on the screen over the existing installation and gallery space at LACMA. The work effectively recreates and interprets the original design of the Russian Room, with its scaffoldings and surfaces at various heights, while virtually juxtaposing photography and moving images, to which the authors have added creative elements of their own. By manipulating and converging real space and virtual forms in an illusionist way, AR is able—as one of the artists maintains—to destabilize the way we construct representation. Indeed, the work makes a statement about visuality that complicates the relationship between the visible object and its representation and interpretation in the virtual realm—one that actually shows the fragility of establishing an illusionist continuum, of a perfect convergence between reality and represented virtuality, whatever the means employed.

AR: A Different Spatial Practice

Regardless of the degree of “perfection” the convergence process entails, what we can safely assume—following the examples above—is that the complex nature of AR operations permits a closer integration of virtual images within real space, one that, I argue, constitutes a new spatial paradigm. This is the perceptual outcome of the convergence effect, that is, the process and the product of consolidating different—and differently situated—elements of the real and virtual worlds into a single space-image. Of course, illusion plays a crucial role, as it makes permeable the perceptual limit between the represented objects and the material spaces we inhabit. By making the interface transparent—in both the literal and figurative senses—and integrating it into the surrounding space, AR “erases” the medium, with the effect of suspending—at least for a limited time—the perceptual (but not ontological!) differences between what is real and what is represented. These aspects are what distinguish AR from other technological and artistic endeavors that aim at creating more inclusive spaces of interaction. Unlike the CAVE experience (a display solution frequently used in VR applications), which isolates the viewer within the image-space, in AR virtual information is coextensive with reality. As the example of Living-Room 2 shows, regardless of the degree of immersivity, in AR there is no dismissal of the real in favor of an ideal view of a perfect and completely controllable artificial environment, as in VR.
The “redemptive” vision of a total virtual environment is replaced in AR by the open solution of sharing physical and digital realities within the same sensorial and spatial configuration. In AR the real is not denounced but reflected; it is not excluded but integrated. Yet AR also distinguishes itself from other projects that presuppose a real-world environment overlaid with data, such as urban surfaces covered with screens, Wi-Fi-enabled areas, or video installations that are not site-specific and viewer-inclusive. Although closely related to these types of projects, AR remains different: its spatiality is not simply a “space of interaction” that connects; rather, it integrates real and virtual elements. Unlike other non-AR media installations, AR does not merely place the real and virtual spaces in adjacent positions (or replace one with the other), but makes them perceptually convergent in an—ideally—seamless way (and here Hans RichtAR is a relevant example). Moreover, as Lev Manovich notes, “electronically augmented space is unique – since the information is personalized for every user, it can change dynamically over time, and it is delivered through an interactive multimedia interface” (225-6). Nevertheless, as the examples show, any AR experience is negotiated in the user-machine encounter with varying degrees of success and sustainability. Indeed, the realization of the convergence effect is sometimes problematic, since AR is never perfectly continuous, spatially or temporally. The convergence effect is the momentary appearance of a continuity that will never take full effect for the viewer, given the internal (perhaps inherent?) tensions between the ideal of seamlessness and the mostly technical inconsistencies in the visual construction of the pieces (such as real-time inadequacy or real-virtual registration errors). It should be noted that many criticisms of AR visualization systems (be they practical applications or artworks) are directed at precisely this imperfect alignment between reality and digital information in the augmented space-image. However, not only can AR applications function with an estimated (and acceptable) registration error; I would argue that such visual imperfections testify to a distinctive aesthetic aspect of AR. The alleged flaws can be assumed—especially in artistic AR projects—as the “trace,” the “tool’s stroke” that reflects the unique play between illusion and its subversion, between the transparency of the medium and its reflexive strategy. In fact, this is what defines AR as a different perceptual paradigm: the creation of a convergent space—which will remain inevitably imperfect—between material reality and virtual information.

References

Azuma, Ronald T. “A Survey of Augmented Reality.” Presence: Teleoperators and Virtual Environments 6.4 (Aug. 1997): 355-385. <http://www.hitl.washington.edu/projects/knowledge_base/ARfinal.pdf>.

Benayoun, Maurice. World Skin: A Photo Safari in the Land of War. 1997. Immersive installation: CAVE, computer, video projectors, 1 to 5 real photo cameras, 2 to 6 magnetic or infrared trackers, shutter glasses, audio system, Internet connection, color printer. Maurice Benayoun, Works. <http://www.benayoun.com/projet.php?id=16>.

Bimber, Oliver, and Ramesh Raskar. Spatial Augmented Reality: Merging Real and Virtual Worlds. Wellesley, Massachusetts: AK Peters, 2005. 71-92.

Burnett, Ron. How Images Think. Cambridge, Mass.: MIT Press, 2004.

Campbell, Jim. Hallucination. 1988-1990. Black-and-white video camera, 50-inch rear-projection video monitor, laser disc players, custom electronics. Collection of Don Fisher, San Francisco.

Campus, Peter. Interface. 1972. Closed-circuit video installation: black-and-white camera, video projector, light projector, glass sheet, empty dark room. Centre Georges Pompidou Collection, Paris, France.

Courchesne, Luc. Where Are You? 2005. Immersive installation: Panoscope 360°, a single-channel immersive display with a large inverted dome, a hemispheric lens and projector, a computer, and a surround sound system. Collection of the artist. <http://courchel.net/#>.

Davies, Char. Osmose. 1995. Computer, sound synthesizers and processors, stereoscopic head-mounted display with 3D localized sound, breathing/balance interface vest, motion capture devices, video projectors, and silhouette screen. Char Davies, Immersence, Osmose. <http://www.immersence.com>.

Farman, Jason. Mobile Interface Theory: Embodied Space and Locative Media. New York: Routledge, 2012.

Graham, Dan. Present Continuous Past(s). 1974. Closed-circuit video installation: black-and-white camera, one black-and-white monitor, two mirrors, microprocessor. Centre Georges Pompidou Collection, Paris, France.

Grau, Oliver. Virtual Art: From Illusion to Immersion. Trans. Gloria Custance. Cambridge, Massachusetts; London: MIT Press, 2003.

Hansen, Mark B.N. New Philosophy for New Media. Cambridge, Mass.: MIT Press, 2004.

Harper, Douglas. Online Etymology Dictionary, 2001-2012. <http://www.etymonline.com>.

Manovich, Lev. “The Poetics of Augmented Space.” Visual Communication 5.2 (2006): 219-240.

Milgram, Paul, Haruo Takemura, Akira Utsumi, and Fumio Kishino. “Augmented Reality: A Class of Displays on the Reality-Virtuality Continuum.” SPIE [The International Society for Optical Engineering] Proceedings 2351: Telemanipulator and Telepresence Technologies (1994): 282-292.

Naimark, Michael. Be Now Here. 1995-97. Stereoscopic interactive panorama: 3-D glasses, two 35mm motion-picture cameras, rotating tripod, input pedestal, stereoscopic projection screen, four-channel audio, 16-foot (4.87 m) rotating floor. Originally produced at Interval Research Corporation with additional support from the UNESCO World Heritage Centre, Paris, France. <http://www.naimark.net/projects/benowhere.html>.

Nauman, Bruce. Live-Taped Video Corridor. 1970. Wallboard, video camera, two video monitors, videotape player, and videotape; dimensions variable. Solomon R. Guggenheim Museum, New York.

Novak, Marcos. Interview with Leo Gullbring, Calimero journalistik och fotografi, 2001. <http://www.calimero.se/novak2.htm>.

Sermon, Paul. Telematic Dreaming. 1992. ISDN telematic installation: two video projectors, two video cameras, two bed sets. The National Museum of Photography, Film & Television, Bradford, England.

Shaw, Jeffrey, and Theo Botschuijver. Viewpoint. 1975. Photo installation. Shown at the 9th Biennale de Paris, Musée d’Art Moderne, Paris, France.
APA, Harvard, Vancouver, ISO and other citation styles
