Academic literature on the topic 'Camera recognition'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Camera recognition.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Camera recognition"

1. Zhao, Ruiyi, Yangshi Ge, Ye Duan, and Quanhong Jiang. "Large-field Gesture Tracking and Recognition for Augmented Reality Interaction." Journal of Physics: Conference Series 2560, no. 1 (August 1, 2023): 012016. http://dx.doi.org/10.1088/1742-6596/2560/1/012016.

Abstract:
In recent years, with the continuous development of computer vision and artificial intelligence technology, gesture recognition has become widely used in fields such as virtual reality and augmented reality. However, the traditional binocular camera architecture is limited by its narrow field of view and depth perception range. Fisheye cameras are gradually being applied to gesture recognition because of their larger field of view, which offers a wider field of vision than conventional binocular cameras and allows a greater range of gesture recognition. This gives fisheye cameras a distinct advantage in situations that require a wide field of view. However, because the imaging model of a fisheye camera differs from that of a traditional camera, its images exhibit a degree of distortion, which complicates the gesture recognition computation. Our goal is to design a distortion correction strategy suitable for fisheye cameras in order to extend the range of gesture recognition and achieve large-field-of-view gesture recognition. Combined with binocular techniques, the acquired hand depth information can be used to enrich the means of interaction. Exploiting the large viewing angle of the fisheye camera expands the range of gesture recognition and makes it more extensive and accurate. This will help improve the real-time performance and precision of gesture recognition, which has important implications for artificial intelligence, virtual reality and augmented reality.

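A practical aside on the distortion-correction step this abstract refers to: OpenCV ships a fisheye camera model that is commonly used for exactly this purpose. The sketch below is illustrative only and is not the authors' strategy; the intrinsic matrix, distortion coefficients and file names are assumed placeholders that would normally come from a prior calibration step (e.g. cv2.fisheye.calibrate on checkerboard views).

```python
import cv2
import numpy as np

img = cv2.imread("fisheye_frame.png")            # hypothetical input frame
h, w = img.shape[:2]

K = np.array([[420.0, 0.0, w / 2.0],             # assumed focal lengths / principal point
              [0.0, 420.0, h / 2.0],
              [0.0, 0.0, 1.0]])
D = np.array([-0.05, 0.01, 0.0, 0.0])            # assumed k1..k4 fisheye coefficients

# Build the undistortion maps once, then rectify every incoming frame with remap.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
rectified = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)

cv2.imwrite("rectified_frame.png", rectified)
```
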
2. WANG, Chenyu, Yukinori KOBAYASHI, Takanori EMARU, and Ankit RAVANKAR. "1A1-H04 Recognition of 3-D Grid Structure Recognition with Fixed Camera and RGB-D Camera." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2015 (2015): _1A1—H04_1—_1A1—H04_4. http://dx.doi.org/10.1299/jsmermd.2015._1a1-h04_1.

3. Reddy, K. Manideep. "Face Recognition for Criminal Detection." International Journal for Research in Applied Science and Engineering Technology 10, no. 6 (June 30, 2022): 2856–60. http://dx.doi.org/10.22214/ijraset.2022.44528.

Abstract:
These days, surveillance camera systems are rapidly gaining ground as security systems, since they can monitor remote spots using Web cameras connected to video screens over a network. Moreover, digital equipment such as Web cameras and hard disk drives is manufactured efficiently and sold at low cost, and the performance of this equipment is improving at a fast rate. A modern surveillance camera system displays live images from several monitored areas, shot by multiple Web cameras simultaneously. This tires the observer's mind and body, because he or she has to watch a large number of continuously updated live images, and the system has a further problem: the observer can miss the scene of an offence. This study extracts the motion region of a moving person and measures the motion quantity to assess his or her activity state. The proposed method also locates the distinctive point of questionable activity and estimates the degree of risk of the suspicious behaviour.

4. Chen, Zhuo, Hai Bo Wu, and Sheng Ping Xia. "A Cooperative Dual-Camera System for Face Recognition and Video Monitoring." Advanced Materials Research 998-999 (July 2014): 784–88. http://dx.doi.org/10.4028/www.scientific.net/amr.998-999.784.

Abstract:
In an ordinary video monitoring system, the whole of a small scene is usually observed by one or a few stationary cameras, so the system cannot rapidly zoom in and focus on a target of interest, nor can it obtain a high-resolution image of a target at a far distance. Therefore, based on research into dual-camera cooperation and on an RSOM clustering tree and the CSHG algorithm, this paper designs a cooperative dual-camera system, made up of a Stationary Wide Field of View (SWFV) camera and a Pan-Tilt-Zoom (PTZ) camera, to track and recognize faces quickly in large-scale, far-distance scenes. At the same time, the algorithm can meet real-time requirements.

5. Tseng, Hung Li, Chao Nan Hung, Sun Yen Tan, Chiu Ching Tuan, Chi Ping Lee, and Wen Tzeng Huang. "Single Camera for Multiple Vehicles License Plate Localization and Recognition on Multilane Highway." Applied Mechanics and Materials 418 (September 2013): 120–23. http://dx.doi.org/10.4028/www.scientific.net/amm.418.120.

Abstract:
License plate recognition systems can be classified into several categories: systems with a single camera for motionless vehicles, systems with a single camera for a moving vehicle, and systems with multiple cameras for moving vehicles on highways (one camera for each lane). In this paper we present an innovative system that can locate multiple moving vehicles and recognize their license plates with only a single camera. Our system is therefore highly cost-effective in comparison with other systems. It achieves a license plate localization success rate of 94% and a license plate recognition success rate of 88%. These success rates are quite satisfactory considering that the system works on fast-moving vehicles on a highway.

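The abstract reports localization and recognition success rates but does not spell out the pipeline. As a rough illustration of how plate localization is often approached (not the authors' algorithm), the OpenCV sketch below emphasises vertical character strokes, closes them into candidate bands and keeps regions with plate-like aspect ratios; all thresholds are assumed values.

```python
import cv2

def locate_plate_candidates(frame_bgr):
    """Return bounding boxes that look like license plates (rough heuristic)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Sobel(gray, cv2.CV_8U, 1, 0, ksize=3)            # vertical strokes of characters
    _, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3))  # merge characters into a band
    band = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(band, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if h > 0 and 2.0 < w / h < 6.0 and w > 60:               # plate-like aspect ratio
            candidates.append((x, y, w, h))
    return candidates
```
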
6. Nwokoma, Francisca O., Juliet N. Odii, Ikechukwu I. Ayogu, and James C. Ogbonna. "Camera-based OCR scene text detection issues: A review." World Journal of Advanced Research and Reviews 12, no. 3 (December 30, 2021): 484–89. http://dx.doi.org/10.30574/wjarr.2021.12.3.0705.

Abstract:
Camera-based scene text detection and recognition is a research area that has attracted considerable attention and made noticeable progress with deep learning technology, computer vision, and pattern recognition. Such systems are highly recommended for capturing text in scene images (signboards), documents with multipart and complex backgrounds, images of thick books, and documents that are highly fragile. The technology encourages real-time processing, since handheld cameras are built with very high processing speed and internal memory and are easier and more flexible to use than traditional scanners, whose usability is limited because they are not portable and cannot be used on images captured by cameras. However, characters captured by traditional scanners pose fewer computational difficulties than camera-captured images, which come with diverse challenges and the consequence of high computational complexity and recognition difficulty. This paper therefore reviews the various factors that increase the computational difficulty of camera-based OCR and makes recommendations on best practices for camera-based OCR systems.

7. Fan, Zhijie, Zhiwei Cao, Xin Li, Chunmei Wang, Bo Jin, and Qianjin Tang. "Video Surveillance Camera Identity Recognition Method Fused With Multi-Dimensional Static and Dynamic Identification Features." International Journal of Information Security and Privacy 17, no. 1 (March 9, 2023): 1–18. http://dx.doi.org/10.4018/ijisp.319304.

Abstract:
With the development of smart cities, video surveillance networks have become an important infrastructure for urban governance. However, by replacing or tampering with surveillance cameras, an important front-end device, attackers are able to access the internal network. In order to identify illegal or suspicious camera identities in advance, a camera identity identification method that incorporates multidimensional identification features is proposed. By extracting the static information of cameras and dynamic traffic information, a camera identity system that incorporates explicit, implicit, and dynamic identifiers is constructed. The experimental results show that the explicit identifiers have the highest contribution, but they are easy to forge; the dynamic identifiers rank second, but the traffic preprocessing is complex; the static identifiers rank last but are indispensable. Experiments on 40 cameras verified the effectiveness and feasibility of the proposed identifier system for camera identification, and the accuracy of identification reached 92.5%.

8. Park, Yeonji, Yoojin Jeong, and Chaebong Sohn. "Suspicious behavior recognition using deep learning." Journal of Advances in Military Studies 4, no. 1 (April 30, 2021): 43–59. http://dx.doi.org/10.37944/jams.v4i1.78.

Abstract:
The purpose of this study is to reinforce defense and security systems by recognizing the behavior of suspicious persons both inside and outside the military using deep learning. Surveillance cameras help detect criminals and people who are acting unusually, but it is inefficient for an administrator to monitor all the images transmitted from the cameras: it incurs a large cost and is vulnerable to human error. Therefore, in this study we propose a method to find a person who should be watched carefully using only surveillance camera images. For this purpose, video data of doubtful behaviors were collected. After applying an algorithm that normalizes the different heights and motions of each person in the input images, we trained a model combining a CNN, a bidirectional LSTM, and a DNN. As a result, the recognition accuracy for suspicious behaviors was improved. If deep learning is applied to existing surveillance cameras, it is therefore expected that dubious persons can be found efficiently.

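For readers who want a concrete starting point, the model family named here (a per-frame CNN, a bidirectional LSTM over the clip, and dense layers on top) can be sketched with tf.keras as below. The input shape, layer widths and number of classes are assumptions, not the authors' configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, H, W, C = 16, 112, 112, 3     # assumed clip length and frame size
NUM_CLASSES = 2                        # e.g. normal vs. suspicious (assumed)

# Small CNN applied to every frame of the clip.
frame_cnn = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(H, W, C)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
])

model = models.Sequential([
    layers.TimeDistributed(frame_cnn, input_shape=(SEQ_LEN, H, W, C)),
    layers.Bidirectional(layers.LSTM(128)),       # temporal context in both directions
    layers.Dense(64, activation="relu"),          # the dense "DNN" head
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```
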
9. Ake, Kanako, Tadatoshi Ogura, Yayoi Kaneko, and Gregory S. A. Rasmussen. "Automated photogrammetric method to identify individual painted dogs (Lycaon pictus)." Zoology and Ecology 29, no. 2 (July 30, 2019): 103–8. http://dx.doi.org/10.35513/21658005.2019.2.5.

Abstract:
The painted dog (Lycaon pictus) has been visually identified by its tricolor patterns in surveys, and whilst computerised recognition methods have been used in other species, they have not been used in painted dogs. This study compares results achieved with Hotspotter software against human recognition. Fifteen individual painted dogs in Yokohama Zoo, Japan, were photographed using camera-traps and hand-held cameras from October 17–20, 2017. Twenty examinees identified 297 photos visually, and the same images were identified using Hotspotter. In the visual identification, the mean accuracy rate was 61.20% and the mean finish time was 4,840 seconds. At 90.57%, the accuracy rate for Hotspotter was significantly higher, with a mean finish time of 3,168 seconds. This highlights that visual photo-recognition may not be of value for untrained eyes, while software recognition can be useful for this species. For visual identification there was a significant difference in accuracy rates between hand-held cameras and camera-traps, whereas for software identification there was no significant difference, showing that the accuracy of software identification may be unaffected by the type of photographic device. With software identification there was a significant difference with camera-trap height, which may be because the images of one camera-trap at a lower position became dark due to it being in shadow.

10. Rusydi, Muhammad Ilhamdi, Aulia Novira, Takayuki Nakagome, Joseph Muguro, Rio Nakajima, Waweru Njeri, Kojiro Matsushita, and Minoru Sasaki. "Autonomous Movement Control of Coaxial Mobile Robot based on Aspect Ratio of Human Face for Public Relation Activity Using Stereo Thermal Camera." Journal of Robotics and Control (JRC) 3, no. 3 (May 1, 2022): 361–73. http://dx.doi.org/10.18196/jrc.v3i3.14750.

Abstract:
In recent years, robots that recognize the people around them and provide guidance, information, and monitoring have been attracting attention. Conventional human recognition technology mainly uses a camera or a laser range finder. However, recognition with a camera is difficult under fluctuating lighting, and a laser range finder is often affected by the recognition environment, for example mistaking a chair's leg for a person's leg. We therefore propose a human recognition method using a thermal camera that can visualize human body heat. This study aims to realize human-following autonomous movement based on human recognition. In addition, the distance from the robot to the person is measured with a stereo thermal camera composed of two thermal cameras. A compact coaxial two-wheeled robot capable of turning on the spot is used as the mobile platform. Finally, combining these elements, we conduct autonomous movement experiments with the coaxial mobile robot based on human recognition. We performed human-following experiments using the stereo thermal camera and confirmed that the robot moves appropriately to the location of the recognized person in multiple use cases (scenarios). However, the accuracy of distance measurement by stereo vision is inferior to that of laser measurement, and it needs to be improved for movements that require greater accuracy.

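The stereo thermal distance measurement mentioned above reduces, in its simplest form, to triangulation from disparity, Z = f * B / d. The toy function below illustrates the relation with assumed focal length and baseline values; the real system also needs image rectification and matching of the person's blob between the two thermal views.

```python
def depth_from_disparity(disparity_px: float,
                         focal_length_px: float = 400.0,    # assumed focal length in pixels
                         baseline_m: float = 0.12) -> float:  # assumed camera baseline in metres
    """Distance (metres) of a target observed with the given pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# e.g. a person whose centroid differs by 24 px between the two thermal views:
print(round(depth_from_disparity(24.0), 2), "m")   # -> 2.0 m
```
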

Dissertations / Theses on the topic "Camera recognition"

1. Johansson, Fredrik. "Recognition of Targets in Camera Networks." Thesis, Linköpings universitet, Institutionen för teknik och naturvetenskap, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-95351.

Abstract:
This thesis presents a re-recognition model for use in camera network surveillance systems. The method relies on a mix of covariance matrix feature descriptions and Bayesian networks for topological information. The system consists of an object recognition model and a re-recognition model. The object recognition model is responsible for separating people from the background and generating the position and description of each person in each frame. This is done by using a foreground-background segmentation model to separate a person from the background. The segmented image is then tracked by a tracking algorithm that produces the coordinates for each person. It is also responsible for creating a silhouette that is used to build a feature vector consisting of a covariance matrix describing the person's appearance. A hypothesis engine then connects the coordinates into a continuous track that describes the trajectory a person has followed. Every trajectory is stored and made available to the re-recognition model, which compares two covariance matrices using a sophisticated distance method to generate a probabilistic score. The score is then combined with the likelihood of the topological match, generated with a Bayesian network structure containing gathered statistical data. The topological information is mainly intended to filter out the most unlikely matches.

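As an illustration of the two ingredients this abstract combines, the sketch below builds a region covariance descriptor from simple per-pixel features and compares two descriptors with the generalized-eigenvalue (log-eigenvalue) metric. The feature choice is a common one from the region-covariance literature, not necessarily the exact set used in the thesis.

```python
import numpy as np
from scipy.linalg import eigh

def covariance_descriptor(patch_rgb: np.ndarray) -> np.ndarray:
    """patch_rgb: (H, W, 3) float array cropped around the tracked person."""
    h, w, _ = patch_rgb.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gray = patch_rgb.mean(axis=2)
    gy, gx = np.gradient(gray)                       # simple intensity gradients
    feats = np.stack([xs, ys,
                      patch_rgb[..., 0], patch_rgb[..., 1], patch_rgb[..., 2],
                      np.abs(gx), np.abs(gy)], axis=-1).reshape(-1, 7)
    return np.cov(feats, rowvar=False)               # 7x7 covariance descriptor

def covariance_distance(c1: np.ndarray, c2: np.ndarray) -> float:
    """Log-eigenvalue metric between two covariance matrices (lower = more similar)."""
    lam = eigh(c1, c2, eigvals_only=True)            # generalized eigenvalues of (c1, c2)
    return float(np.sqrt(np.sum(np.log(lam) ** 2)))
```
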
2. Tadesse, Girmaw Abebe. "Human activity recognition using a wearable camera." Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/668914.

Abstract:
Advances in wearable technologies are facilitating the understanding of human activities using first-person vision (FPV) for a wide range of assistive applications. In this thesis, we propose robust multiple motion features for human activity recognition from first-person videos. The proposed features encode discriminant characteristics from the magnitude, direction and dynamics of motion estimated using optical flow. Moreover, we design novel virtual-inertial features from video, without using an actual inertial sensor, from the movement of the intensity centroid across frames. Results on multiple datasets demonstrate that centroid-based inertial features improve the recognition performance of grid-based features. Moreover, we propose a multi-layer modelling framework that encodes hierarchical and temporal relationships among activities. The first layer operates on groups of features that effectively encode motion dynamics and temporal variations of intra-frame appearance descriptors of activities with a hierarchical topology. The second layer exploits the temporal context by weighting the outputs of the hierarchy during modelling. In addition, a post-decoding smoothing technique utilises decisions on past samples based on the confidence of the current sample. We validate the proposed framework with several classifiers, and the temporal modelling is shown to improve recognition performance. We also investigate the use of deep networks to simplify the feature engineering from first-person videos. We propose a stacking of spectrograms to represent short-term global motions that contains a frequency-time representation of multiple motion components. This enables us to apply 2D convolutions to extract or learn motion features. We employ a long short-term memory recurrent network to encode long-term temporal dependency among activities. Furthermore, we apply cross-domain knowledge transfer between inertial-based and vision-based approaches for egocentric activity recognition. We propose a sparsity-weighted combination of information from different motion modalities and/or streams. Results show that the proposed approach performs competitively with existing deep frameworks, moreover with reduced complexity.

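The "virtual-inertial" idea described above can be sketched as follows: track the intensity centroid of each frame and differentiate its trajectory to obtain velocity- and acceleration-like signals, with no physical IMU involved. The frame rate and the feature layout here are assumptions made for illustration.

```python
import numpy as np

def intensity_centroid(gray: np.ndarray) -> np.ndarray:
    """Return the intensity-weighted (x, y) centroid of a grayscale frame."""
    h, w = gray.shape
    total = float(gray.sum()) + 1e-9
    x = (gray.sum(axis=0) * np.arange(w)).sum() / total
    y = (gray.sum(axis=1) * np.arange(h)).sum() / total
    return np.array([x, y])

def virtual_inertial_features(frames, fps: float = 30.0) -> np.ndarray:
    """frames: iterable of grayscale frames -> (T, 4) velocity/acceleration signals."""
    c = np.stack([intensity_centroid(f) for f in frames])    # (T, 2) centroid trajectory
    vel = np.gradient(c, 1.0 / fps, axis=0)                  # pixels per second
    acc = np.gradient(vel, 1.0 / fps, axis=0)                # pixels per second squared
    return np.hstack([vel, acc])
```
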
3. Erhard, Matthew John. "Visual intent recognition in a multiple camera environment." Online version of thesis, 2006. http://hdl.handle.net/1850/3365.

4. Soh, Ling Min. "Recognition using tagged objects." Thesis, University of Surrey, 2000. http://epubs.surrey.ac.uk/844110/.

Abstract:
This thesis describes a method for the recognition of objects in an unconstrained environment with widely ranging illumination, imaged from unknown viewpoints against complicated backgrounds. The general problem is simplified by placing specially designed patterns on the object, which allows us to solve the pose determination problem easily. There are several key components involved in the proposed recognition approach, including pattern detection, pose estimation, model acquisition and matching, and searching and indexing the model database. Other crucial issues pertaining to the individual components of the recognition system, such as the choice of pattern, the reliability and accuracy of the pattern detector, pose estimator and matching, and the speed of the overall system, are addressed. After establishing the methodological framework, experiments are carried out on a wide range of both synthetic and real data to illustrate the validity and usefulness of the proposed methods. The principal contribution of this research is a methodology for Tagged Object Recognition (TOR) in unconstrained conditions. A robust pattern (calibration chart) detector is developed for off-the-shelf use. To empirically assess the effectiveness of the pattern detector and the pose estimator under various scenarios, simulated data generated using a graphics rendering process is used. This simulated data provides ground truth, which is difficult to obtain in projected images. Using the ground truth, the detection error, which is usually ignored, can be analysed. For model matching, the Chamfer matching algorithm is modified to give a more reliable matching score. The technique facilitates reliable Tagged Object Recognition. Finally, the results of extensive quantitative and qualitative tests are presented that show the plausibility of practical use of TOR. The features characterising the enabling technology developed are the ability to a) recognise an object which is tagged with the calibration chart, b) establish camera position with respect to a landmark, and c) test any camera calibration and 3D pose estimation routines, thus facilitating future research and applications in mobile robot navigation, 3D reconstruction and stereo vision.

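The Chamfer matching step mentioned in this abstract is, in its textbook form, a distance-transform lookup over a template's edge points; the thesis modifies the scoring, and that modification is not reproduced here. A minimal sketch with assumed Canny thresholds follows (lower scores mean better matches).

```python
import cv2
import numpy as np

def chamfer_score(scene_gray: np.ndarray, template_edges: np.ndarray,
                  offset=(0, 0)) -> float:
    """Mean distance from template edge points to the nearest scene edge."""
    scene_edges = cv2.Canny(scene_gray, 50, 150)
    # Distance to the nearest scene edge for every pixel (edges are the zero pixels).
    dist = cv2.distanceTransform(cv2.bitwise_not(scene_edges), cv2.DIST_L2, 3)
    ys, xs = np.nonzero(template_edges)
    ys = np.clip(ys + offset[1], 0, dist.shape[0] - 1)
    xs = np.clip(xs + offset[0], 0, dist.shape[1] - 1)
    return float(dist[ys, xs].mean())
```
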
5. Mudduluru, Sravani. "Indian Sign Language Numbers Recognition using Intel RealSense Camera." DigitalCommons@CalPoly, 2017. https://digitalcommons.calpoly.edu/theses/1815.

Abstract:
Gesture-based interaction with devices has been a significant area of research in computer science for many years. The main idea of this kind of interaction is to ease the user experience by providing a high degree of freedom and a more natural, interactive way of communicating with technology. Significant application areas of gesture recognition include video gaming, human-computer interaction, virtual reality, smart home appliances, medical systems, robotics and several others. With the availability of devices such as the Kinect, Leap Motion and Intel RealSense cameras, access to depth as well as color information has become affordable to the public. The Intel RealSense camera is a USB-powered controller with modest hardware requirements (Windows 8 and above). Like the Kinect and Leap Motion, it can track human body information, and it was designed specifically to provide fine-grained information about different parts of the body such as the face and hands, enabling natural and intuitive interactions with smart devices through features such as 3D avatars, high-quality 3D prints, high-quality gaming visuals and virtual reality. The main aim of this study is to analyze hand tracking information and build a training model in order to decide whether this camera is suitable for sign language recognition. In this study, we extracted the joint information of 22 joint labels per single hand and trained the model to identify the Indian Sign Language (ISL) numbers 0-9. We found that a multi-class SVM model showed a higher accuracy of 93.5% when compared to the decision tree and KNN models.

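The classification stage described above (22 hand joints per sample, a multi-class SVM compared against decision tree and KNN models) can be prototyped with scikit-learn as below. The file names, feature layout and hyperparameters are illustrative assumptions, not the thesis's exact setup.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical files: X holds 22 joints x 3 coordinates per sample, y holds digits 0-9.
X = np.load("isl_hand_joints.npy").reshape(-1, 22 * 3)
y = np.load("isl_labels.npy")

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y,
                                          random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```
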
6. Bellando, John Louis. "Modeling and Recognition of Gestures Using a Single Camera." University of Cincinnati / OhioLINK, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=ucin973088031.

7. Brauer, Henrik Siebo Peter. "Camera based human localization and recognition in smart environments." Thesis, University of the West of Scotland, 2014. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.739946.

8. Hannuksela, J. (Jari). "Camera based motion estimation and recognition for human-computer interaction." Doctoral thesis, University of Oulu, 2008. http://urn.fi/urn:isbn:9789514289781.

Abstract:
Communicating with mobile devices has become an unavoidable part of our daily life. Unfortunately, the current user interface designs are mostly taken directly from desktop computers. This has resulted in devices that are sometimes hard to use. Since more processing power and new sensing technologies are already available, there is a possibility to develop systems to communicate through different modalities. This thesis proposes some novel computer vision approaches, including head tracking, object motion analysis and device ego-motion estimation, to allow efficient interaction with mobile devices. For head tracking, two new methods have been developed. The first method detects a face region and facial features by employing skin detection, morphology, and a geometrical face model. The second method, designed especially for mobile use, detects the face and eyes using local texture features. In both cases, Kalman filtering is applied to estimate the 3-D pose of the head. Experiments indicate that the methods introduced can be applied on platforms with limited computational resources. A novel object tracking method is also presented. The idea is to combine Kalman filtering and EM algorithms to track an object, such as a finger, using motion features. This technique is also applicable when some conventional methods, such as colour segmentation and background subtraction, cannot be used. In addition, a new feature-based camera ego-motion estimation framework is proposed. The method introduced exploits gradient measures for feature selection and feature displacement uncertainty analysis. Experiments with a fixed-point implementation testify to the effectiveness of the approach on a camera-equipped mobile phone. The feasibility of the methods developed is demonstrated in three new mobile interface solutions. One of them estimates the ego-motion of the device with respect to the user's face and utilises that information for browsing large documents or bitmaps on small displays. The second solution is to use device or finger motion to recognize simple gestures. In addition to these applications, a novel interactive system to build document panorama images is presented. The motion estimation and recognition techniques presented in this thesis have clear potential to become practical means for interacting with mobile devices. In fact, cameras in future mobile devices may, for most of the time, be used as sensors for intuitive user interfaces rather than for digital photography.

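Several of the components above lean on Kalman filtering to smooth noisy detections. A minimal constant-velocity filter over 2-D image coordinates, using OpenCV's KalmanFilter, is sketched below; the noise covariances and the toy detections are assumed values (the thesis itself estimates full 3-D head pose).

```python
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)          # state: x, y, vx, vy; measurement: x, y
dt = 1.0                             # assumed time step between frames
kf.transitionMatrix = np.array([[1, 0, dt, 0],
                                [0, 1, 0, dt],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1.0

for u, v in [(100, 120), (104, 123), (109, 127)]:      # toy face detections
    prediction = kf.predict()
    kf.correct(np.array([[u], [v]], np.float32))
    print("predicted:", prediction[:2].ravel(), "measured:", (u, v))
```
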
9. Akman, Oytun. "Multi-camera Video Surveillance: Detection, Occlusion Handling, Tracking and Event Recognition." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608620/index.pdf.

Abstract:
In this thesis, novel methods for background modeling, tracking, occlusion handling and event recognition via multi-camera configurations are presented. As the initial step, the building blocks of typical single-camera surveillance systems, namely moving object detection, tracking and event recognition, are discussed, and various widely accepted methods for these building blocks are tested to assess their performance. Next, for multi-camera surveillance systems, background modeling, occlusion handling, tracking and event recognition for two-camera configurations are examined. Various foreground detection methods are discussed, and a background modeling algorithm based on a multivariate mixture of Gaussians is proposed. During the occlusion handling studies, a novel method for segmenting occluded objects is proposed, in which a top view of the scene, free of occlusions, is generated from multi-view data. The experiments indicate that the occlusion handling algorithm operates successfully on various test data. A novel tracking method using multi-camera configurations is also proposed. The main idea of multi-camera employment is fusing the 2D information coming from the cameras to obtain 3D information for better occlusion handling and seamless tracking. The proposed algorithm is tested on different data sets and shows clear improvement over a single-camera tracker. Finally, multi-camera trajectories of objects are classified by the proposed multi-camera event recognition method, in which concatenated trajectories from different views are used to train Gaussian mixture hidden Markov models. The experimental results indicate an improvement in multi-camera event recognition performance over event recognition using a single camera.

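As a stand-in for the mixture-of-Gaussians background model discussed above (the thesis proposes its own multivariate variant), OpenCV's MOG2 subtractor plus simple blob extraction sketches the per-camera foreground-detection building block; the video file name and thresholds are assumptions.

```python
import cv2

cap = cv2.VideoCapture("camera0.avi")                  # hypothetical recording
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                     # per-pixel foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    moving_objects = [cv2.boundingRect(c) for c in contours
                      if cv2.contourArea(c) > 500]     # ignore small noise blobs
cap.release()
```
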
10. Kurihata, Hiroyuki, Tomokazu Takahashi, Ichiro Ide, Yoshito Mekada, Hiroshi Murase, Yukimasa Tamatsu, and Takayuki Miyahara. "Rainy weather recognition from in-vehicle camera images for driver assistance." IEEE, 2005. http://hdl.handle.net/2237/6798.

Books on the topic "Camera recognition"

1. Iwamura, Masakazu, and Faisal Shafait, eds. Camera-Based Document Analysis and Recognition. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-29364-1.

2. Iwamura, Masakazu, and Faisal Shafait, eds. Camera-Based Document Analysis and Recognition. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-05167-3.

3. Javed, Omar. Automated Multi-Camera Surveillance: Algorithms and Practice. Boston, MA: Springer Science+Business Media, LLC, 2008.

4. Shafait, Faisal, and SpringerLink (Online service), eds. Camera-Based Document Analysis and Recognition: 4th International Workshop, CBDAR 2011, Beijing, China, September 22, 2011, Revised Selected Papers. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

5. Wang, Jiang, Zicheng Liu, and Ying Wu. Human Action Recognition with Depth Cameras. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-04561-0.

6. United States. National Aeronautics and Space Administration, ed. A model-based approach for detection of runways and other objects in image sequences acquired using an on-board camera: Final technical report for NASA grant NAG-1-1371, "analysis of image sequences from sensors for restricted visibility operations", period of the grant January 24, 1992 to May 31, 1994. [Washington, DC]: National Aeronautics and Space Administration, 1994.

7. Hooper, John R. The illustrated Camaro recognition guide. Westminster, MD: J & D Publications, 1992.

8. Remondino, Fabio. TOF Range-Imaging Cameras. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013.

Book chapters on the topic "Camera recognition"

1. Zhang, Shu, Guangqi Hou, and Zhenan Sun. "Eyelash Removal Using Light Field Camera for Iris Recognition." In Biometric Recognition, 319–27. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-12484-1_36.

2. Fischer, Stephan, Ivica Rimac, and Ralf Steinmetz. "Automatic Recognition of Camera Zooms." In Visual Information and Information Systems, 253–60. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48762-x_32.

3. Srivastava, Gaurav, Johnny Park, Avinash C. Kak, Birgi Tamersoy, and J. K. Aggarwal. "Multi-camera Human Action Recognition." In Computer Vision, 501–11. Boston, MA: Springer US, 2014. http://dx.doi.org/10.1007/978-0-387-31439-6_776.

4. Xompero, Alessio, and Andrea Cavallaro. "Cross-Camera View-Overlap Recognition." In Lecture Notes in Computer Science, 253–69. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-25075-0_19.

5. Xie, Xiaohua, Yan Gao, Wei-Shi Zheng, Jianhuang Lai, and Junyong Zhu. "One-Snapshot Face Anti-spoofing Using a Light Field Camera." In Biometric Recognition, 108–17. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-69923-3_12.

6. Liu, Yanqiong, Gang Shi, Qing Cui, Yuhong Sheng, and Guoqun Liu. "A Method of Personnel Location Based on Monocular Camera in Complex Terrain." In Biometric Recognition, 175–85. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-97909-0_19.

7. Kasar, Thotreingam, and Angarai G. Ramakrishnan. "Multi-script and Multi-oriented Text Localization from Scene Images." In Camera-Based Document Analysis and Recognition, 1–14. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-29364-1_1.

8. Bukhari, Syed Saqib, Faisal Shafait, and Thomas M. Breuel. "Border Noise Removal of Camera-Captured Document Images Using Page Frame Detection." In Camera-Based Document Analysis and Recognition, 126–37. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-29364-1_10.

9. Bukhari, Syed Saqib, Faisal Shafait, and Thomas M. Breuel. "An Image Based Performance Evaluation Method for Page Dewarping Algorithms Using SIFT Features." In Camera-Based Document Analysis and Recognition, 138–49. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-29364-1_11.

10. Nagy, Robert, Anders Dicker, and Klaus Meyer-Wegener. "NEOCR: A Configurable Dataset for Natural Image Text Recognition." In Camera-Based Document Analysis and Recognition, 150–63. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-29364-1_12.

Conference papers on the topic "Camera recognition"

1. Rajaraman, Srinivasan, Danielle M. Caruccio, Nicholas C. Fung, and Cory J. Hayes. "Fully automatic, unified stereo camera and LiDAR-camera calibration." In Automatic Target Recognition XXXI, edited by Timothy L. Overman, Riad I. Hammoud, and Abhijit Mahalanobis. SPIE, 2021. http://dx.doi.org/10.1117/12.2587806.

2. Wu, Haoyu, Shaomin Xiong, and Toshiki Hirano. "A Real-Time Human Recognition and Tracking System With a Dual-Camera Setup." In ASME 2019 28th Conference on Information Storage and Processing Systems. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/isps2019-7469.

Abstract:
Most surveillance camera systems are still controlled and monitored by humans. Smart surveillance camera systems have been proposed to automatically understand the captured scene, identify objects of interest, detect abnormalities, and so on. However, most surveillance cameras are either wide-angle or pan-tilt-zoom (PTZ). When a camera is in wide-view mode, small objects can be hard to recognize; on the other hand, when it is zoomed in on an object of interest, the global view is not covered and important events outside the zoomed view will be missed. In this paper, we propose a system composed of a wide-angle camera and a PTZ camera that captures the wide view and the zoomed view at the same time, taking advantage of both. A real-time neural-network-based human detection and identification algorithm is developed, allowing the system to efficiently and effectively recognize humans, distinguish different identities, and follow the person of interest with the PTZ camera. A multi-target multi-camera (MTMC) system is then built on top of this setup, in which multiple cameras placed at different locations observe different views: the same person appearing in any camera is recognized as the same identity, while different persons are distinguished across all the cameras.

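A small geometric sketch of the wide-to-PTZ hand-off described in this abstract: a detection's pixel offset from the wide camera's principal point is converted into pan and tilt angles, assuming the two cameras are roughly co-located and the wide camera's focal length in pixels is known from calibration. The numbers below are assumed values, not the paper's parameters.

```python
import math

def pixel_to_pan_tilt(u, v, img_w=1920, img_h=1080, focal_px=1000.0):
    """Map a pixel (u, v) in the wide view to pan/tilt angles in degrees."""
    dx = u - img_w / 2.0
    dy = v - img_h / 2.0
    pan = math.degrees(math.atan2(dx, focal_px))
    tilt = math.degrees(math.atan2(-dy, focal_px))     # image y grows downwards
    return pan, tilt

# e.g. a person detected near the right edge of the wide view:
print(pixel_to_pan_tilt(1800, 540))                    # -> roughly (40.0, 0.0) degrees
```
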
3. Perez-Yus, A., G. Lopez-Nicolas, and J. J. Guerrero. "A novel hybrid camera system with depth and fisheye cameras." In 2016 23rd International Conference on Pattern Recognition (ICPR). IEEE, 2016. http://dx.doi.org/10.1109/icpr.2016.7900058.

4. Sumida, Hiroaki, Fuji Ren, Shun Nishide, and Xin Kang. "Environment Recognition Using Robot Camera." In 2020 5th IEEE International Conference on Big Data Analytics (ICBDA). IEEE, 2020. http://dx.doi.org/10.1109/icbda49040.2020.9101205.

5. Castells-Rufas, David, and Jordi Carrabina. "Camera-based Digit Recognition System." In 2006 13th IEEE International Conference on Electronics, Circuits and Systems. IEEE, 2006. http://dx.doi.org/10.1109/icecs.2006.379899.

6. Kessler, Viktor, Patrick Thiam, Mohammadreza Amirian, and Friedhelm Schwenker. "Pain recognition with camera photoplethysmography." In 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA). IEEE, 2017. http://dx.doi.org/10.1109/ipta.2017.8310110.

7. Hiew, B. Y., Andrew B. J. Teoh, and Y. H. Pang. "Digital camera based fingerprint recognition." In 2007 IEEE International Conference on Telecommunications and Malaysia International Conference on Communications. IEEE, 2007. http://dx.doi.org/10.1109/ictmicc.2007.4448572.

8. Shahreza, Hatef Otroshi, Alexandre Veuthey, and Sébastien Marcel. "Face Recognition Using Lensless Camera." In ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024. http://dx.doi.org/10.1109/icassp48485.2024.10446710.

9. Yao, Yi, Chung-Hao Chen, Besma Abidi, David Page, Andreas Koschan, and Mongi Abidi. "Sensor planning for PTZ cameras using the probability of camera overload." In 2008 19th International Conference on Pattern Recognition (ICPR). IEEE, 2008. http://dx.doi.org/10.1109/icpr.2008.4761040.

10. Lei, Bangjun, Shuifa Sun, and Sheng Zheng. "Passive geometric camera calibration for arbitrary camera configuration." In Sixth International Symposium on Multispectral Image Processing and Pattern Recognition, edited by Mingyue Ding, Bir Bhanu, Friedrich M. Wahl, and Jonathan Roberts. SPIE, 2009. http://dx.doi.org/10.1117/12.832609.

Reports on the topic "Camera recognition"

1. Steves, Michelle, Brian Stanton, Mary Theofanos, Dana Chisnell, and Hannah Wald. Camera Recognition. National Institute of Standards and Technology, March 2013. http://dx.doi.org/10.6028/nist.ir.7921.

2. Shapovalov, Viktor B., Yevhenii B. Shapovalov, Zhanna I. Bilyk, Anna P. Megalinska, and Ivan O. Muzyka. "The Google Lens analyzing quality: an analysis of the possibility to use in the educational process." [б. в.], February 2020. http://dx.doi.org/10.31812/123456789/3754.

Abstract:
Biology is a fairly complicated school subject because it involves knowledge of biodiversity. Google Lens is a unique mobile application that allows the user to recognize the species and genus of the plant a student is looking at. The article is devoted to analyzing how effectively Google Lens handles botanical objects. To perform the analysis, botanical objects were classified by plant type (grass, tree, bush) and by the part of the plant (stem, flower, fruit) represented in the analyzed photo. It was shown that Google Lens correctly identified the plant species in 92.6% of cases. This is quite a high result, which supports recommending the program for use in teaching. The greatest accuracy of Google Lens was observed when analyzing trees and plant stems; the worst accuracy was observed when recognizing fruits and the stems of bushes. However, the accuracy was still high in those cases, and Google Lens can still support research there. Google Lens was not able to analyze locally endemic Ukrainian flora. It was shown that recognition efficiency depends more on the resolution of the photo than on the physical characteristics of the camera with which it was taken. The article shows that using Google Lens in the educational process is a simple way to bring the principles of STEM education and the 'New Ukrainian School' into classes.

3. Tao, Yang, Amos Mizrach, Victor Alchanatis, Nachshon Shamir, and Tom Porter. "Automated imaging broiler chick sexing for gender-specific and efficient production." United States Department of Agriculture, December 2014. http://dx.doi.org/10.32747/2014.7594391.bard.

Abstract:
Extending the previous two years of research results (Mizrach et al., 2012; Tao, 2011, 2012), the third year's efforts in both Maryland and Israel were directed towards the engineering of the system. The activities included development of robust chick handling and its conveyor system, optical system improvement, online dynamic motion imaging of chicks, multi-image-sequence optimal feather extraction and detection, and pattern recognition.

Mechanical system engineering: the third model of the mechanical chick handling system with a high-speed imaging system was built as shown in Fig. 1. This system has improved chick holding cups and motion mechanisms that enable chicks to open their wings through the view section. The mechanical system has achieved a speed of 4 chicks per second, which exceeds the design spec of 3 chicks per second. In the center of the conveyor a high-speed camera with a UV-sensitive optical system (Fig. 2) was installed, which captures multiple frames per chick (45 images, system selectable) as the chick passes through the view area. Through intensive discussions and efforts, the PIs in Maryland and at ARO have created a joint hardware and software protocol that uses sequential images of the chick in its falling motion to capture the opening wings and extract the optimal opening positions. This approach enables reliable feather feature extraction in dynamic motion and pattern recognition.

Improving chick wing deployment: the mechanical system for chick conveying, and especially the section that causes chicks to deploy their wings wide open under the fast video camera and the UV light, was investigated during the third study year. As a natural behavior, chicks tend to deploy their wings to balance their body when a sudden change in vertical movement is applied. In the previous two years this was achieved by letting the chicks move in free fall, under earth gravity (g), along a short vertical distance. The chicks always tended to deploy their wings, but not always in a wide, horizontally open position, which is required to obtain a successful image under the video camera. In addition, the cells holding the chicks stopped abruptly at the end of the free-fall path, which caused the chicks' legs to collapse inside the cells and the wing images to become blurred. To improve the movement and prevent the chicks' legs from collapsing, a slowing-down mechanism was designed and tested. This was done by installing a plastic block, printed with a predesigned variable slope (Fig. 3), at the end of the path of the falling cells (Fig. 4). The cells move down at a variable velocity according to the block slope and reach zero velocity at the end of the path. The slope was designed so that the deceleration becomes 0.8g instead of the free-fall gravity (g) present without the block. The tests showed better deployment and wider wing opening, as well as better balance along the movement. The design of additional block slopes is under investigation; slopes that create decelerations of 0.7g and 0.9g, as well as variable decelerations, are being designed to improve the movement path and the images.

4. Yan, Yujie, and Jerome F. Hajjar. Automated Damage Assessment and Structural Modeling of Bridges with Visual Sensing Technology. Northeastern University, May 2021. http://dx.doi.org/10.17760/d20410114.

Abstract:
Recent advances in visual sensing technology have gained much attention in the field of bridge inspection and management. Coupled with advanced robotic systems, state-of-the-art visual sensors can be used to obtain accurate documentation of bridges without the need for any special equipment or traffic closure. The captured visual sensor data can be post-processed to gather meaningful information for the bridge structures and hence to support bridge inspection and management. However, state-of-the-practice data postprocessing approaches require substantial manual operations, which can be time-consuming and expensive. The main objective of this study is to develop methods and algorithms to automate the post-processing of the visual sensor data towards the extraction of three main categories of information: 1) object information such as object identity, shapes, and spatial relationships - a novel heuristic-based method is proposed to automate the detection and recognition of main structural elements of steel girder bridges in both terrestrial and unmanned aerial vehicle (UAV)-based laser scanning data. Domain knowledge on the geometric and topological constraints of the structural elements is modeled and utilized as heuristics to guide the search as well as to reject erroneous detection results. 2) structural damage information, such as damage locations and quantities - to support the assessment of damage associated with small deformations, an advanced crack assessment method is proposed to enable automated detection and quantification of concrete cracks in critical structural elements based on UAV-based visual sensor data. In terms of damage associated with large deformations, based on the surface normal-based method proposed in Guldur et al. (2014), a new algorithm is developed to enhance the robustness of damage assessment for structural elements with curved surfaces. 3) three-dimensional volumetric models - the object information extracted from the laser scanning data is exploited to create a complete geometric representation for each structural element. In addition, mesh generation algorithms are developed to automatically convert the geometric representations into conformal all-hexahedron finite element meshes, which can be finally assembled to create a finite element model of the entire bridge. To validate the effectiveness of the developed methods and algorithms, several field data collections have been conducted to collect both the visual sensor data and the physical measurements from experimental specimens and in-service bridges. The data were collected using both terrestrial laser scanners combined with images, and laser scanners and cameras mounted to unmanned aerial vehicles.

5. Hall, Mark, and Neil Price. Medieval Scotland: A Future for its Past. Society of Antiquaries of Scotland, September 2012. http://dx.doi.org/10.9750/scarf.09.2012.165.

Abstract:
The main recommendations of the panel report can be summarised under five key headings. Underpinning all five areas is the recognition that human narratives remain crucial for ensuring the widest access to our shared past. There is no wish to see political and economic narratives abandoned but the need is recognised for there to be an expansion to more social narratives to fully explore the potential of the diverse evidence base. The questions that can be asked are here framed in a national context but they need to be supported and improved a) by the development of regional research frameworks, and b) by an enhanced study of Scotland's international context through time.

1. From North Britain to the Idea of Scotland: Understanding why, where and how 'Scotland' emerges provides a focal point of research. Investigating state formation requires work from a variety of sources, exploring the relationships between centres of consumption - royal, ecclesiastical and urban - and their hinterlands. Working from site-specific work to regional analysis, researchers can explore how what would become 'Scotland' came to be, and whence sprang its inspiration.

2. Lifestyles and Living Spaces: Holistic approaches to exploring medieval settlement should be promoted, combining landscape studies with artefactual, environmental, and documentary work. Understanding the role of individual sites within wider local, regional and national settlement systems should be promoted, and chronological frameworks developed to chart the changing nature of Medieval settlement.

3. Mentalities: The holistic understanding of medieval belief (particularly, but not exclusively, in its early medieval or early historic phase) needs to broaden its contextual understanding with reference to prehistoric or inherited belief systems and frames of reference. Collaborative approaches should draw on international parallels and analogues in pursuit of defining and contrasting local or regional belief systems through integrated studies of portable material culture, monumentality and landscape.

4. Empowerment: Revisiting museum collections and renewing the study of newly retrieved artefacts is vital to a broader understanding of the dynamics of writing within society. Text needs to be seen less as a metaphor and more as a technological and social innovation in material culture which will help the understanding of it as an experienced, imaginatively rich reality of life. In archaeological terms, the study of the relatively neglected cultural areas of sensory perception, memory, learning and play needs to be promoted to enrich the understanding of past social behaviours.

5. Parameters: Multi-disciplinary, collaborative, and cross-sector approaches should be encouraged in order to release the research potential of all sectors of archaeology. Creative solutions should be sought to the challenges of transmitting the importance of archaeological work and conserving the resource for current and future research.

6. Dalglish, Chris, and Sarah Tarlow, eds. Modern Scotland: Archaeology, the Modern past and the Modern present. Society of Antiquaries of Scotland, September 2012. http://dx.doi.org/10.9750/scarf.09.2012.163.

Abstract:
The main recommendations of the panel report can be summarised under five key headings:

HUMANITY: The Panel recommends recognition that research in this field should be geared towards the development of critical understandings of self and society in the modern world. Archaeological research into the modern past should be ambitious in seeking to contribute to understanding of the major social, economic and environmental developments through which the modern world came into being. Modern-world archaeology can add significantly to knowledge of Scotland's historical relationships with the rest of the British Isles, Europe and the wider world. Archaeology offers a new perspective on what it has meant to be a modern person and a member of modern society, inhabiting a modern world.

MATERIALITY: The Panel recommends approaches to research which focus on the materiality of the recent past (i.e. the character of relationships between people and their material world). Archaeology's contribution to understandings of the modern world lies in its ability to situate, humanise and contextualise broader historical developments. Archaeological research can provide new insights into the modern past by investigating historical trends not as abstract phenomena but as changes to real lives, affecting different localities in different ways. Archaeology can take a long-term perspective on major modern developments, researching their 'prehistory' (which often extends back into the Middle Ages) and their material legacy in the present. Archaeology can humanise and contextualise long-term processes and global connections by working outwards from individual life stories, developing biographies of individual artefacts and buildings and evidencing the reciprocity of people, things, places and landscapes. The modern person and modern social relationships were formed in and through material environments and, to understand modern humanity, it is crucial that we understand humanity's material relationships in the modern world.

PERSPECTIVE: The Panel recommends the development, realisation and promotion of work which takes a critical perspective on the present from a deeper understanding of the recent past. Research into the modern past provides a critical perspective on the present, uncovering the origins of our current ways of life and of relating to each other and to the world around us. It is important that this relevance is acknowledged, understood, developed and mobilised to connect past, present and future. The material approach of archaeology can enhance understanding, challenge assumptions and develop new and alternative histories. Archaeology can evidence varied experience of social, environmental and economic change in the past. It can consider questions of local distinctiveness and global homogeneity in complex and nuanced ways. It can reveal the hidden histories of those whose ways of life diverged from the historical mainstream. Archaeology can challenge simplistic, essentialist understandings of the recent Scottish past, providing insights into the historical character and interaction of Scottish, British and other identities and ideologies.

COLLABORATION: The Panel recommends the development of integrated and collaborative research practices. Perhaps above all other periods of the past, the modern past is a field of enquiry where there is great potential benefit in collaboration between different specialist sectors within archaeology, between different disciplines, between Scottish-based researchers and researchers elsewhere in the world and between professionals and the public. The Panel advocates the development of new ways of working involving integrated and collaborative investigation of the modern past. Extending beyond previous modes of inter-disciplinary practice, these new approaches should involve active engagement between different interests developing collaborative responses to common questions and problems.

REFLECTION: The Panel recommends that a reflexive approach is taken to the archaeology of the modern past, requiring research into the nature of academic, professional and public engagements with the modern past and the development of new reflexive modes of practice. Archaeology investigates the past but it does so from its position in the present. Research should develop a greater understanding of modern-period archaeology as a scholarly pursuit and social practice in the present. Research should provide insights into the ways in which the modern past is presented and represented in particular contexts. Work is required to better evidence popular understandings of and engagements with the modern past and to understand the politics of the recent past, particularly its material aspect. Research should seek to advance knowledge and understanding of the moral and ethical viewpoints held by professionals and members of the public in relation to the archaeology of the recent past. There is a need to critically review public engagement practices in modern-world archaeology and develop new modes of public-professional collaboration and to generate practices through which archaeology can make positive interventions in the world. And there is a need to embed processes of ethical reflection and beneficial action into archaeological practice relating to the modern past.