Journal articles on the topic 'Photography – Digital techniques – Classification'

Consult the top 50 journal articles for your research on the topic 'Photography – Digital techniques – Classification.'

You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Farhat, Farshid, Mohammad Mahdi Kamani, and James Z. Wang. "CAPTAIN: Comprehensive Composition Assistance for Photo Taking." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 1 (January 31, 2022): 1–24. http://dx.doi.org/10.1145/3462762.

Full text
Abstract:
Many people are interested in taking astonishing photos and sharing them with others. Emerging high-tech hardware and software facilitate the ubiquitousness and functionality of digital photography. Because composition matters in photography, researchers have leveraged some common composition techniques, such as the rule of thirds and the perspective-related techniques, in providing photo-taking assistance. However, composition techniques developed by professionals are far more diverse than well-documented techniques can cover. We present a new approach to leverage the underexplored photography ideas, which are virtually unlimited, diverse, and correlated. We propose a comprehensive fork-join framework, named CAPTAIN (Composition Assistance for Photo Taking), to guide a photographer with a variety of photography ideas. The framework consists of a few components: integrated object detection, photo genre classification, artistic pose clustering, and personalized aesthetics-aware image retrieval. CAPTAIN is backed by a large managed dataset crawled from a website with ideas from photography enthusiasts and professionals. The work proposes steps to decompose a given amateurish shot into composition ingredients and compose them to bring the photographer a list of useful and related ideas. The work addresses personal preferences for composition by presenting a user-specified preference list of photography ideas. We have conducted many experiments on the newly proposed components and reported findings. A user study demonstrates that the work is useful to those taking photos.
2

Díaz, Gastón Mauro, and José Daniel Lencinas. "Model-based local thresholding for canopy hemispherical photography." Canadian Journal of Forest Research 48, no. 10 (October 2018): 1204–16. http://dx.doi.org/10.1139/cjfr-2018-0006.

Full text
Abstract:
Canopy hemispherical photography (HP) is widely used to estimate forest structural variables. To achieve good results with HP, a classification algorithm is needed to produce binary images to accurately estimate the gap fraction. Our aim was to develop a local thresholding method for binarizing carefully acquired hemispherical photographs. The method was implemented in the R package “caiman”. Working with photographs of artificial structures and using a linear model, our method turns the cumbersome problem of finding the optimal threshold value into a simpler one, which is estimating the digital number (DN) of the sky. Using hemispherical photographs of a deciduous forest, we compared our method with several standard and state-of-the-art binarization techniques. Our method was as accurate as the best-tested binarization techniques, regardless of the exposure, as long as it was between 0 and 2 stops over the open sky auto-exposure. Moreover, our method did not require knowing the exact relative exposure. Intending to balance accuracy and practicality, we mapped the sky DN using the values extracted from gaps. However, we discussed whether a more accurate but less practical way to map sky DN could provide, along with our method, a new benchmark.
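
To make the thresholding idea above concrete, here is a minimal Python sketch, assuming a hypothetical linear sky-DN model (threshold = a + b * sky_DN); the coefficient values and function names are illustrative only and are not taken from the authors' "caiman" package.

    import numpy as np

    def binarize_with_sky_model(blue_channel, sky_dn, a=0.0, b=0.9):
        """Binarize a hemispherical photograph given an estimated sky digital
        number (DN); a and b stand in for the calibrated linear-model terms."""
        threshold = a + b * sky_dn                           # model-based threshold
        return (blue_channel >= threshold).astype(np.uint8)  # 1 = sky gap, 0 = canopy

    # Illustrative use with a synthetic 8-bit blue channel and a guessed sky DN
    rng = np.random.default_rng(0)
    blue = rng.integers(0, 256, size=(512, 512))
    binary = binarize_with_sky_model(blue, sky_dn=200)
    print("gap fraction:", binary.mean())
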
3

Lim, H. S., M. Z. Matjafri, and K. Abdullah. "Land Use/Cover Classification over Small Areas Using Conventional Digital Camcorder Imagery Based on Frequency-Based Contextual and Neural Network Classification Techniques." Advanced Materials Research 650 (January 2013): 658–63. http://dx.doi.org/10.4028/www.scientific.net/amr.650.658.

Full text
Abstract:
An airborne survey was conducted to produce land cover/use maps. The feasibility of using a conventional digital camcorder to acquire remotely sensed data was investigated, and the imagery for land cover mapping using remote sensing techniques was evaluated. The study area was the Universiti Sains Malaysia campus, Penang, located in Peninsular Malaysia. Digital images were taken from a low-altitude light aircraft, a Cessna 172Q, at an average altitude of 2.4384 km above sea level. The use of a digital camcorder as a sensor to capture digital images is more economical compared with other airborne sensors. This technique is designed to overcome the problem of obtaining cloud-free photographs from a satellite platform in equatorial regions. Digital video imagery was taken in the red, green, and blue bands. A comparison between frequency-based contextual and neural network classification techniques for analyzing digital camcorder imagery is presented. Frequency-based contextual and neural network classification techniques were applied to the digital camera spectral bands (red, green, and blue) to extract thematic information from the acquired scenes. The classified map was compared with the ground truth data, and accuracy was evaluated by an error matrix. Results indicate that a conventional digital camcorder can be used to acquire digital imagery for land cover/use mapping of a small area of coverage.
4

Kumar, Halaguru Basavarajappa Basanth, and Haranahalli Rajanna Chennamma. "Classification of Computer Graphic Images and Photographic Images Based on Fusion of Color and Texture Features." Revue d'Intelligence Artificielle 35, no. 3 (June 30, 2021): 201–7. http://dx.doi.org/10.18280/ria.350303.

Full text
Abstract:
The rapid advancement of digital image rendering techniques allows users to create photo-realistic computer graphic (CG) images that are hard to distinguish from photographs captured by digital cameras. In this paper, the classification of CG images and photographic (PG) images based on the fusion of global features is presented. The color and texture of an image represent global features. Texture feature descriptors such as the gray level co-occurrence matrix (GLCM) and the local binary pattern (LBP) are considered. Different combinations of these global features are investigated on various datasets. Experimental results show that the fusion of the color and texture feature subsets achieves the best classification results over other feature combinations.
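
As an illustration of this kind of color and texture fusion, the sketch below concatenates a color histogram with GLCM and LBP descriptors and feeds the result to an SVM; the descriptor parameters and the exact feature set are illustrative choices, not the authors' configuration.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
    from sklearn.svm import SVC

    def fused_features(rgb):
        """Concatenate a color histogram, GLCM statistics and an LBP histogram."""
        gray = rgb.mean(axis=2).astype(np.uint8)
        color_hist, _ = np.histogramdd(rgb.reshape(-1, 3), bins=(8, 8, 8),
                                       range=((0, 256),) * 3)
        glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                            symmetric=True, normed=True)
        glcm_feats = [graycoprops(glcm, p)[0, 0]
                      for p in ("contrast", "homogeneity", "energy", "correlation")]
        lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
        lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        return np.concatenate([color_hist.ravel() / color_hist.sum(),
                               glcm_feats, lbp_hist])

    # Tiny demo on a synthetic image (real use: fit the SVC on many labelled images)
    rng = np.random.default_rng(0)
    demo = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
    print(fused_features(demo).shape)
    clf = SVC(kernel="rbf")  # fit on [fused_features(im) for im in images] and labels
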
5

Nahid, Abdullah-Al, Mohamad Ali Mehrabi, and Yinan Kong. "Histopathological Breast Cancer Image Classification by Deep Neural Network Techniques Guided by Local Clustering." BioMed Research International 2018 (2018): 1–20. http://dx.doi.org/10.1155/2018/2362108.

Full text
Abstract:
Breast cancer is a serious threat and one of the leading causes of death among women throughout the world. The identification of cancer largely depends on digital biomedical photography analysis, such as histopathological images, by doctors and physicians. Analyzing histopathological images is a nontrivial task, and decisions from investigation of these kinds of images always require specialised knowledge. However, Computer Aided Diagnosis (CAD) techniques can help the doctor make more reliable decisions. State-of-the-art Deep Neural Networks (DNN) have recently been introduced for biomedical image analysis. Normally each image contains structural and statistical information. This paper classifies a set of biomedical breast cancer images (the BreakHis dataset) using novel DNN techniques guided by structural and statistical information derived from the images. Specifically, a Convolutional Neural Network (CNN), a Long Short-Term Memory (LSTM) network, and a combination of CNN and LSTM are proposed for breast cancer image classification. Softmax and Support Vector Machine (SVM) layers have been used for the decision-making stage after extracting features utilising the proposed novel DNN models. In this experiment, the best accuracy of 91.00% is achieved on the 200x dataset, the best precision of 96.00% on the 40x dataset, and the best F-Measure on both the 40x and 100x datasets.
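
A minimal Keras sketch of the CNN-plus-LSTM idea described above, ending in a softmax decision layer (the paper also evaluates an SVM layer); the layer sizes and the 128x128 input are illustrative, not the configuration used on BreakHis.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    def cnn_lstm_classifier(input_shape=(128, 128, 3), num_classes=2):
        """Toy CNN front-end whose feature-map rows are fed to an LSTM,
        followed by a softmax decision layer."""
        inp = layers.Input(shape=input_shape)
        x = layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
        x = layers.MaxPooling2D(2)(x)
        x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
        x = layers.MaxPooling2D(2)(x)                    # (32, 32, 64)
        x = layers.Reshape((32, 32 * 64))(x)             # treat rows as a sequence
        x = layers.LSTM(64)(x)
        out = layers.Dense(num_classes, activation="softmax")(x)
        model = models.Model(inp, out)
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    model = cnn_lstm_classifier()
    model.summary()
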
6

Mohammed, Hersh A., Shahab W. Kareem, and Amin S. Mohammed. "A COMPARATIVE EVALUATION OF DEEP LEARNING METHODS IN DIGITAL IMAGE CLASSIFICATION." Kufa Journal of Engineering 13, no. 4 (October 14, 2022): 53–69. http://dx.doi.org/10.30572/2018/kje/130405.

Full text
Abstract:
White blood cells are important in determining a person's overall health, and blood disease diagnosis involves the characterization and identification of a patient's blood samples. Neural Networks (NN), Convolutional Neural Networks (CNN), and mixed CNN-NN models are used in recent techniques to improve visual content understanding. Drawing on their expertise in medical image analysis, the authors set out to uncover distinctive characteristics in example photographs. For blood cell classification, the overall performance of approaches based on individual cell patches extracted using blood smear techniques has been excellent. These approaches, however, are incapable of dealing with the issue of multiple overlapping cells. Because of the overlapping blood cell pictures, the input image dimension is compressed, the classification time is reduced, and the network works better with more accurate parameter estimates. In this review, we present a detailed scientific comparison of some of the approaches used to improve WBC classification, describe methods used to classify the cells automatically, and compare the results of tests performed on available data against existing blood cell classification techniques.
7

Mohammed, Hersh A., Shahab W. Kareem, and Amin S. Mohammed. "A COMPARATIVE EVALUATION OF DEEP LEARNING METHODS IN DIGITAL IMAGE CLASSIFICATION." Kufa Journal of Engineering 13, no. 4 (October 14, 2022): 53–69. http://dx.doi.org/10.30572/2018/kje/1305.

Full text
Abstract:
White blood cells are important in determining a person's overall health, and blood disease diagnosis involves the characterization and identification of a patient's blood samples. Neural Networks (NN), Convolutional Neural Networks (CNN), and mixed CNN-NN models are used in recent techniques to improve visual content understanding. Drawing on their expertise in medical image analysis, the authors set out to uncover distinctive characteristics in example photographs. For blood cell classification, the overall performance of approaches based on individual cell patches extracted using blood smear techniques has been excellent. These approaches, however, are incapable of dealing with the issue of multiple overlapping cells. Because of the overlapping blood cell pictures, the input image dimension is compressed, the classification time is reduced, and the network works better with more accurate parameter estimates. In this review, we present a detailed scientific comparison of some of the approaches used to improve WBC classification, describe methods used to classify the cells automatically, and compare the results of tests performed on available data against existing blood cell classification techniques.
8

Basanth Kumar, Halaguru Basavarajappa, and Haranahalli Rajanna Chennamma. "Dataset for classification of computer graphic images and photographic images." IAES International Journal of Artificial Intelligence (IJ-AI) 11, no. 1 (March 1, 2022): 137. http://dx.doi.org/10.11591/ijai.v11.i1.pp137-147.

Full text
Abstract:
Recent advancements in computer graphics (CG) image rendering techniques have made it easy for content creators to produce high-quality computer graphics similar to photographic (PG) images, confounding even the most naïve users. Such images, used with negative intent, cause serious problems to society. In such cases, proving the authenticity of an image is a big challenge in digital image forensics due to the high photo-realism of CG images. Existing datasets used to assess the performance of classification models lack: (i) a larger dataset size, (ii) diversified image contents, and (iii) images generated with recent digital image rendering techniques. To fill this gap, we created two new datasets, namely the 'JSSSTU CG and PG image dataset' and the 'JSSSTU PRCG image dataset'. Further, the complexity of the new datasets and benchmark datasets is evaluated using handcrafted texture feature descriptors such as the gray level co-occurrence matrix and local binary pattern, and VGG variants (VGG16 and VGG19), which are pre-trained convolutional neural network (CNN) models. Experimental results showed that the CNN-based pre-trained techniques outperformed the conventional support vector machine (SVM)-based classifier in terms of classification accuracy. The proposed datasets attained a low f-score when compared to existing datasets, indicating that they are very challenging.
9

Wittmann, Florian, Dieter Anhuf, and Wolfgang J. Funk. "Tree species distribution and community structure of central Amazonian várzea forests by remote-sensing techniques." Journal of Tropical Ecology 18, no. 6 (September 25, 2002): 805–20. http://dx.doi.org/10.1017/s0266467402002523.

Full text
Abstract:
In central Amazonian white-water floodplains (várzea), different forest types become established in relation to the flood-level gradient. The formations are characterized by typical patterns of species composition, and their architecture results in different light reflectance patterns, which can be detected by Landsat TM image data. Ground checking comprised a detailed forest inventory of 4 ha, with Digital Elevation Models (DEM) being generated for all sites. The results indicate that, at the average flood level of 3 m, species diversity and architecture of the forests changes, thus justifying the classification into the categories of low várzea (várzea baixa) and high várzea (várzea alta). In a first step to scale up, the study sites were observed by aerial photography. Tree heights, crown sizes, the projected crown area coverage and the gap frequencies provide information, which confirms a remotely sensed classification into three different forest types. The structure of low várzea depends on the successional stage, and species diversity increases with increasing age of the formations. In high várzea, only one successional stage was found and species diversity is higher than in all low-várzea formations. The more complex architecture of the high-várzea forest results in a more diffuse behaviour pattern in pixel distribution, when scanned by TM image data.
10

Malik, Owais A., Idrus Puasa, and Daphne Teck Ching Lai. "Segmentation for Multi-Rock Types on Digital Outcrop Photographs Using Deep Learning Techniques." Sensors 22, no. 21 (October 22, 2022): 8086. http://dx.doi.org/10.3390/s22218086.

Full text
Abstract:
The basic identification and classification of sedimentary rocks into sandstone and mudstone are important in the study of sedimentology, and they are executed by a sedimentologist. However, such manual activity involves countless hours of observation and data collection prior to any interpretation. When such activity is conducted in the field as part of an outcrop study, the sedimentologist is likely to be exposed to challenging conditions such as the weather and limited accessibility to the outcrops. This study uses high-resolution photographs acquired from a sedimentological study to test an alternative basic multi-rock identification through machine learning. While existing studies have effectively applied deep learning techniques to classify rock types in field rock images, their approaches only handle a single rock-type classification per image. One study applied deep learning techniques to classify multiple rock types in each image; however, the test was performed on artificially overlaid images of different rock types, not on naturally occurring rock surfaces of multiple rock types. To the best of our knowledge, no study has applied semantic segmentation to solve the multi-rock classification problem using digital photographs of multiple rock types. This paper presents the application of two state-of-the-art segmentation models, namely U-Net and LinkNet, to identify multiple rock types in digital photographs by segmenting the sandstone, mudstone, and background classes in a self-collected dataset of 102 images from a field in Brunei Darussalam. Four pre-trained networks, namely Resnet34, Inceptionv3, VGG16, and Efficientnetb7, were used as backbones for both models, and the performances of the individual models and their ensembles were compared. We also investigated the impact of image enhancement and different color representations on the performance of these segmentation models. The experimental results of this study show that, among the individual models, LinkNet with Efficientnetb7 as a backbone had the best performance, with a mean intersection over union (MIoU) value of 0.8135 over all classes, while the ensemble of U-Net models (with all four backbones) performed slightly better, with an MIoU of 0.8201. When different color representations and image enhancements were explored, the best performance (MIoU = 0.8178) was observed for the L*a*b* color representation with Efficientnetb7 using U-Net segmentation. For the individual classes of interest (sandstone and mudstone), U-Net with Efficientnetb7 was found to be the best model for the segmentation. Thus, this study demonstrates the potential of semantic segmentation for automating the reservoir characterization process, whereby patches of interest can be extracted from the rocks for much deeper study and modeling.
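
A minimal sketch of building one of the compared configurations (U-Net with an EfficientNetB7 encoder), assuming the open-source segmentation_models Keras package; this is not the authors' code, and the loss and metric choices are illustrative.

    # Sketch only: assumes the `segmentation_models` Keras package is installed
    import segmentation_models as sm

    BACKBONE = "efficientnetb7"                      # one of the four backbones compared
    model = sm.Unet(BACKBONE, classes=3, activation="softmax",
                    encoder_weights="imagenet")      # sandstone / mudstone / background
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=[sm.metrics.IOUScore()])   # reported as (M)IoU in the paper

    # X: (N, H, W, 3) photographs preprocessed with sm.get_preprocessing(BACKBONE)
    # Y: (N, H, W, 3) one-hot masks for the three classes
    # model.fit(X, Y, batch_size=2, epochs=50)
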
11

Yu, Xiaolei, and Xulin Guo. "Extracting Fractional Vegetation Cover from Digital Photographs: A Comparison of In Situ, SamplePoint, and Image Classification Methods." Sensors 21, no. 21 (November 3, 2021): 7310. http://dx.doi.org/10.3390/s21217310.

Full text
Abstract:
Fractional vegetation cover is a key indicator of rangeland health. However, survey techniques such as line-point intercept transect, pin frame quadrats, and visual cover estimates can be time-consuming and are prone to subjective variations. For this reason, most studies only focus on overall vegetation cover, ignoring variation in live and dead fractions. In the arid regions of the Canadian prairies, grass cover is typically a mixture of green and senescent plant material, and it is essential to monitor both green and senescent vegetation fractional cover. In this study, we designed and built a camera stand to acquire the close-range photographs of rangeland fractional vegetation cover. Photographs were processed by four approaches: SamplePoint software, object-based image analysis (OBIA), unsupervised and supervised classifications to estimate the fractional cover of green vegetation, senescent vegetation, and background substrate. These estimates were compared to in situ surveys. Our results showed that the SamplePoint software is an effective alternative to field measurements, while the unsupervised classification lacked accuracy and consistency. The Object-based image classification performed better than other image classification methods. Overall, SamplePoint and OBIA produced mean values equivalent to those produced by in situ assessment. These findings suggest an unbiased, consistent, and expedient alternative to in situ grassland vegetation fractional cover estimation, which provides a permanent image record.
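
A minimal sketch of the unsupervised-classification route to fractional cover (k-means into three pixel groups); assigning the clusters to green vegetation, senescent vegetation and substrate is left to the analyst, which is one reason the study found this approach the least consistent.

    import numpy as np
    from sklearn.cluster import KMeans

    def fractional_cover(rgb, n_classes=3, random_state=0):
        """Cluster pixels into n_classes groups and report each group's cover fraction."""
        pixels = rgb.reshape(-1, 3).astype(float)
        labels = KMeans(n_clusters=n_classes, n_init=10,
                        random_state=random_state).fit_predict(pixels)
        return {k: float(np.mean(labels == k)) for k in range(n_classes)}

    # Example with a synthetic close-range photograph
    rng = np.random.default_rng(1)
    photo = rng.integers(0, 256, size=(200, 200, 3))
    print(fractional_cover(photo))
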
12

Locke, Chris, Mark White, Jacqueline Michel, Charlie Henry, Jon D. Sellars, and Micheal L. Aslaksen. "USE OF VERTICAL DIGITAL PHOTOGRAPHY AT THE BAYOU PEROT, LA SPILL FOR OIL MAPPING AND VOLUME ESTIMATION." International Oil Spill Conference Proceedings 2008, no. 1 (May 1, 2008): 127–30. http://dx.doi.org/10.7901/2169-3358-2008-1-127.

Full text
Abstract:
The January 2007 release of over 8,000 barrels of a condensate crude oil from a damaged well in Bayou Perot, Louisiana, resulted in intermittent oiling of remote mud flats and salt marshes over a 30-square-mile area. NOAA's National Geodetic Survey collected aerial vertical digital photography 17 days after the spill to assist in locating and quantifying areas of oiling. The effective pixel size was 34 centimeters; however, the data were processed to 40-centimeter resolution. Useful products were posted to the web within two days of acquisition. Standard supervised and unsupervised image processing techniques were used in conjunction with oblique photography and field knowledge to define the oiling signatures. Time constraints required that the classification be conducted on mosaiced, non-color-balanced images (ideally each image would be classified independently to account for differences in illumination and/or processing). However, the strong visible signature of the oiled areas and ground-truth data from field surveys resulted in high confidence levels for several oil types, which in turn were used to enhance the identification of the remaining classes. Five oil types were identified: Black (218,000 ft2), Red (81,000 ft2), Orange (154,000 ft2), Yellow (38,000 ft2), and Light Yellow (349,000 ft2), corresponding to the color and attributes of the oil. The total conservative estimate of oiled area was 840,000 ft2, or nearly 20 acres. Based on estimated thicknesses of the different oils, the total volume of oil present at the time of imagery acquisition was 3,330 barrels. This value was close to the actual amount of oil recovered over the time period between the date of imagery acquisition and the end of cleanup.
13

Boniecki, Piotr, Maciej Zaborowicz, Agnieszka Pilarska, and Hanna Piekarska-Boniecka. "Identification Process of Selected Graphic Features Apple Tree Pests by Neural Models Type MLP, RBF and DNN." Agriculture 10, no. 6 (June 10, 2020): 218. http://dx.doi.org/10.3390/agriculture10060218.

Full text
Abstract:
In this paper, the classification capabilities of perceptron and radial basis function neural networks are compared, using the identification of selected pests feeding in apple tree orchards in Poland as an example. The goal of the study was the neural separation of five selected apple tree orchard pests. The classification was based on graphical information coded as selected characteristic features of the pests, presented in digital images. In the paper, MLP (MultiLayer Perceptron), RBF (Radial Basis Function) and DNN (Deep Neural Network) classification models are compared, generated using learning files acquired on the basis of information contained in digital photographs of the five selected pests. In order to classify the pests, neural modeling methods were used, including digital image analysis techniques. A qualitative analysis of the neural models enabled the selection of the optimal network topology, characterized by the highest classification capability. Five selected shape coefficients and two defined graphical features of the classified objects were chosen as representative graphic features. The created neural model is intended as a core for computer systems supporting the decision processes occurring during apple production, particularly in the context of automating apple tree orchard pest protection.
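
A minimal scikit-learn sketch of the MLP variant of such a classifier operating on seven-element feature vectors (five shape coefficients plus two graphical features); the data here are synthetic and the network size is illustrative (RBF and deep models would require other tooling).

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    # One feature vector per pest image: five shape coefficients + two graphic features
    # (the actual descriptors are defined in the paper; values here are synthetic).
    rng = np.random.default_rng(0)
    X = rng.random((500, 7))
    y = rng.integers(0, 5, 500)          # five orchard pest classes

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
    mlp.fit(X_tr, y_tr)
    print("test accuracy:", mlp.score(X_te, y_te))
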
14

Pires de Lima, Rafael, and Kurt Marfurt. "Convolutional Neural Network for Remote-Sensing Scene Classification: Transfer Learning Analysis." Remote Sensing 12, no. 1 (December 25, 2019): 86. http://dx.doi.org/10.3390/rs12010086.

Full text
Abstract:
Remote-sensing image scene classification can provide significant value, ranging from forest fire monitoring to land-use and land-cover classification. Beginning with the first aerial photographs of the early 20th century to the satellite imagery of today, the amount of remote-sensing data has increased geometrically with a higher resolution. The need to analyze these modern digital data motivated research to accelerate remote-sensing image classification. Fortunately, great advances have been made by the computer vision community to classify natural images or photographs taken with an ordinary camera. Natural image datasets can range up to millions of samples and are, therefore, amenable to deep-learning techniques. Many fields of science, remote sensing included, were able to exploit the success of natural image classification by convolutional neural network models using a technique commonly called transfer learning. We provide a systematic review of transfer learning application for scene classification using different datasets and different deep-learning models. We evaluate how the specialization of convolutional neural network models affects the transfer learning process by splitting original models in different points. As expected, we find the choice of hyperparameters used to train the model has a significant influence on the final performance of the models. Curiously, we find transfer learning from models trained on larger, more generic natural images datasets outperformed transfer learning from models trained directly on smaller remotely sensed datasets. Nonetheless, results show that transfer learning provides a powerful tool for remote-sensing scene classification.
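
A minimal Keras sketch of the transfer-learning recipe reviewed above: an ImageNet-pretrained backbone (ResNet50 chosen arbitrarily here), frozen up to an adjustable split point, with a new classification head; all hyperparameters are illustrative.

    import tensorflow as tf
    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import ResNet50

    def scene_classifier(num_classes, trainable_from=None):
        """ImageNet-pretrained backbone with a new head; `trainable_from` mimics
        splitting the original model at different points (earlier layers frozen)."""
        base = ResNet50(weights="imagenet", include_top=False,
                        input_shape=(224, 224, 3), pooling="avg")
        for layer in base.layers:
            layer.trainable = False
        if trainable_from is not None:
            for layer in base.layers[trainable_from:]:
                layer.trainable = True
        out = layers.Dense(num_classes, activation="softmax")(base.output)
        model = models.Model(base.input, out)
        model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                      loss="sparse_categorical_crossentropy", metrics=["accuracy"])
        return model

    model = scene_classifier(num_classes=45)   # e.g. 45 scene categories
    model.summary()
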
15

Kemper, G., A. Weidauer, and T. Coppack. "MONITORING SEABIRDS AND MARINE MAMMALS BY GEOREFERENCED AERIAL PHOTOGRAPHY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B8 (June 23, 2016): 689–94. http://dx.doi.org/10.5194/isprs-archives-xli-b8-689-2016.

Full text
Abstract:
The assessment of anthropogenic impacts on the marine environment is challenged by the accessibility, accuracy and validity of biogeographical information. Offshore wind farm projects require large-scale ecological surveys before, during and after construction, in order to assess potential effects on the distribution and abundance of protected species. The robustness of site-specific population estimates depends largely on the extent and design of spatial coverage and the accuracy of the applied census technique. Standard environmental assessment studies in Germany have so far included aerial visual surveys to evaluate potential impacts of offshore wind farms on seabirds and marine mammals. However, low flight altitudes, necessary for the visual classification of species, disturb sensitive bird species and also hold significant safety risks for the observers. Thus, aerial surveys based on high-resolution digital imagery, which can be carried out at higher (safer) flight altitudes (beyond the rotor-swept zone of the wind turbines), have become a mandatory requirement, technically solving the problem of distance-related observation bias. A purpose-assembled imagery system including medium-format cameras in conjunction with a dedicated geo-positioning platform delivers series of orthogonal digital images that meet the current technical requirements of authorities for surveying marine wildlife at a comparatively low cost. At a flight altitude of 425 m, a focal length of 110 mm, implemented forward motion compensation (FMC) and exposure times ranging between 1/1600 and 1/1000 s, the twin-camera system generates high quality 16 bit RGB images with a ground sampling distance (GSD) of 2 cm and an image footprint of 155 x 410 m. The image files are readily transferrable to a GIS environment for further editing, taking overlapping image areas and areas affected by glare into account. The imagery can be routinely screened by the human eye guided by purpose-programmed software to distinguish biological from non-biological signals. Each detected seabird or marine mammal signal is identified to species level or assigned to a species group and automatically saved into a geo-database for subsequent quality assurance, geo-statistical analyses and data export to third-party users. The relative size of a detected object can be accurately measured, which provides key information for species identification. During the development and testing of this system until 2015, more than 40 surveys have produced around 500,000 digital aerial images, of which some were taken in specially protected areas (SPA) of the Baltic Sea and thus include a wide range of relevant species. Here, we present the technical principles of this comparatively new survey approach and discuss the key methodological challenges related to optimizing survey design and workflow in view of the pending regulatory requirements for effective environmental impact assessments.
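
The stated altitude, focal length and GSD are related by the standard photogrammetric scale relation GSD = pixel pitch * altitude / focal length; the short computation below back-calculates the pixel pitch and sensor extent those figures imply (the 410 m dimension presumably spans the twin-camera arrangement), as an assumption-labelled illustration rather than manufacturer data.

    # Scale relation implied by the numbers quoted in the abstract
    altitude_m = 425.0
    focal_length_m = 0.110
    gsd_m = 0.02

    pixel_pitch_m = gsd_m * focal_length_m / altitude_m
    print(f"implied pixel pitch: {pixel_pitch_m * 1e6:.1f} micrometres")   # ~5.2 um

    # The 155 m x 410 m footprint likewise implies the combined sensor extent
    footprint_m = (155.0, 410.0)
    sensor_mm = tuple(d * focal_length_m / altitude_m * 1000 for d in footprint_m)
    print(f"implied sensor extent: {sensor_mm[0]:.0f} mm x {sensor_mm[1]:.0f} mm")
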
16

Kemper, G., A. Weidauer, and T. Coppack. "MONITORING SEABIRDS AND MARINE MAMMALS BY GEOREFERENCED AERIAL PHOTOGRAPHY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B8 (June 23, 2016): 689–94. http://dx.doi.org/10.5194/isprsarchives-xli-b8-689-2016.

Full text
Abstract:
The assessment of anthropogenic impacts on the marine environment is challenged by the accessibility, accuracy and validity of biogeographical information. Offshore wind farm projects require large-scale ecological surveys before, during and after construction, in order to assess potential effects on the distribution and abundance of protected species. The robustness of site-specific population estimates depends largely on the extent and design of spatial coverage and the accuracy of the applied census technique. Standard environmental assessment studies in Germany have so far included aerial visual surveys to evaluate potential impacts of offshore wind farms on seabirds and marine mammals. However, low flight altitudes, necessary for the visual classification of species, disturb sensitive bird species and also hold significant safety risks for the observers. Thus, aerial surveys based on high-resolution digital imagery, which can be carried out at higher (safer) flight altitudes (beyond the rotor-swept zone of the wind turbines), have become a mandatory requirement, technically solving the problem of distance-related observation bias. A purpose-assembled imagery system including medium-format cameras in conjunction with a dedicated geo-positioning platform delivers series of orthogonal digital images that meet the current technical requirements of authorities for surveying marine wildlife at a comparatively low cost. At a flight altitude of 425 m, a focal length of 110 mm, implemented forward motion compensation (FMC) and exposure times ranging between 1/1600 and 1/1000 s, the twin-camera system generates high quality 16 bit RGB images with a ground sampling distance (GSD) of 2 cm and an image footprint of 155 x 410 m. The image files are readily transferrable to a GIS environment for further editing, taking overlapping image areas and areas affected by glare into account. The imagery can be routinely screened by the human eye guided by purpose-programmed software to distinguish biological from non-biological signals. Each detected seabird or marine mammal signal is identified to species level or assigned to a species group and automatically saved into a geo-database for subsequent quality assurance, geo-statistical analyses and data export to third-party users. The relative size of a detected object can be accurately measured, which provides key information for species identification. During the development and testing of this system until 2015, more than 40 surveys have produced around 500,000 digital aerial images, of which some were taken in specially protected areas (SPA) of the Baltic Sea and thus include a wide range of relevant species. Here, we present the technical principles of this comparatively new survey approach and discuss the key methodological challenges related to optimizing survey design and workflow in view of the pending regulatory requirements for effective environmental impact assessments.
17

Yoshida, Keisuke, Shijun Pan, Junichi Taniguchi, Satoshi Nishiyama, Takashi Kojima, and Md Touhidul Islam. "Airborne LiDAR-assisted deep learning methodology for riparian land cover classification using aerial photographs and its application for flood modelling." Journal of Hydroinformatics 24, no. 1 (January 1, 2022): 179–201. http://dx.doi.org/10.2166/hydro.2022.134.

Full text
Abstract:
In response to challenges in land cover classification (LCC), many researchers have recently experimented with classification methods based on artificial intelligence techniques. For LCC mapping of the vegetated Asahi River in Japan, the current study uses the deep learning (DL)-based DeepLabV3+ module for image segmentation of aerial photographs. We modified the existing model by concatenating data on its resultant output port to access the airborne laser bathymetry (ALB) dataset, including voxel-based laser points and vegetation height (i.e. digital surface model data minus digital terrain model data). Findings revealed that the modified approach greatly improved the accuracy of LCC compared to our earlier unsupervised ALB-based method, with 25 and 35% improvements, respectively, in overall accuracy and the macro F1-score for the November 2017 dataset (no-leaf condition). Finally, by estimating flow-resistance parameters in flood modelling using LCC mapping-derived data, we conclude that the upgraded DL methodology produces a better fit between numerically analyzed and observed peak water levels.
18

Schweier, C., M. Markus, and E. Steinle. "Simulation of earthquake caused building damages for the development of fast reconnaissance techniques." Natural Hazards and Earth System Sciences 4, no. 2 (April 16, 2004): 285–93. http://dx.doi.org/10.5194/nhess-4-285-2004.

Full text
Abstract:
Abstract. Catastrophic events like strong earthquakes can cause big losses in life and economic values. An increase in the efficiency of reconnaissance techniques could help to reduce the losses in life as many victims die after and not during the event. A basic prerequisite to improve the rescue teams' work is an improved planning of the measures. This can only be done on the basis of reliable and detailed information about the actual situation in the affected regions. Therefore, a bundle of projects at Karlsruhe university aim at the development of a tool for fast information retrieval after strong earthquakes. The focus is on urban areas as the most losses occur there. In this paper the approach for a damage analysis of buildings will be presented. It consists of an automatic methodology to model buildings in three dimensions, a comparison of pre- and post-event models to detect changes and a subsequent classification of the changes into damage types. The process is based on information extraction from airborne laserscanning data, i.e. digital surface models (DSM) acquired through scanning of an area with pulsed laser light. To date, there are no laserscanning derived DSMs available to the authors that were taken of areas that suffered damages from earthquakes. Therefore, it was necessary to simulate such data for the development of the damage detection methodology. In this paper two different methodologies used for simulating the data will be presented. The first method is to create CAD models of undamaged buildings based on their construction plans and alter them artificially in such a way as if they had suffered serious damage. Then, a laserscanning data set is simulated based on these models which can be compared with real laserscanning data acquired of the buildings (in intact state). The other approach is to use measurements of actual damaged buildings and simulate their intact state. It is possible to model the geometrical structure of these damaged buildings based on digital photography taken after the event by evaluating the images with photogrammetrical methods. The intact state of the buildings is simulated based on on-site investigations, and finally laserscanning data are simulated for both states.
19

Shrestha, U. S. "Land Use and Land Cover Classification from ETM Sensor Data : A Case Study from Tamakoshi River Basin of Nepal." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-8 (November 28, 2014): 943–48. http://dx.doi.org/10.5194/isprsarchives-xl-8-943-2014.

Full text
Abstract:
The mountain watershed of Nepal is highly rugged, inaccessible and difficult for acquiring field data. Landsat ETM sensor data with 30-meter pixel resolution has been used for land use and land cover classification of the Tamakoshi River Basin (TRB) of Nepal. The paper examines the strength of image classification methods in the derivation of land use and land cover classes. Supervised digital image classification techniques were used to examine the thematic classification. Field verification, Google Earth imagery, aerial photographs, topographical sheets and GPS locations were used for classifying land use and land cover types, selecting training samples and assessing the accuracy of the classification results. Six major land use and land cover types were extracted using the method: forest land, water bodies, bush/grass land, barren land, snow land and agricultural land. Moreover, there is spatial variation in the statistics of the classified land use and land cover types depending upon the classification method. The image data revealed that the major portion of the surface area is covered by unclassified bush and grass land (34.62 per cent), followed by barren land (28 per cent). The knowledge derived from supervised classification was applied for the study. The result based on the field survey of the area during July 2014 also verifies the same result. Thus, image classification is found to be more reliable for land use and land cover classification of the mountain watersheds of Nepal.
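
A minimal sketch of per-pixel supervised classification of a six-band stack into the six classes listed above; a random forest stands in here for whichever supervised classifier was actually used, and the synthetic arrays exist only to make the example run.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    CLASSES = ["forest", "water", "bush/grass", "barren", "snow", "agriculture"]

    def classify_scene(bands, train_pixels, train_labels):
        """Per-pixel supervised classification of a (rows, cols, n_bands) stack
        from training samples digitized over field/GPS reference sites."""
        rows, cols, n_bands = bands.shape
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(train_pixels, train_labels)
        labels = clf.predict(bands.reshape(-1, n_bands)).reshape(rows, cols)
        share = {CLASSES[c]: 100 * float(np.mean(labels == c))   # % of the scene
                 for c in np.unique(labels)}
        return labels, share

    # Synthetic demo: six ETM-like bands and 300 labelled training pixels
    rng = np.random.default_rng(0)
    stack = rng.random((100, 100, 6))
    Xtr, ytr = rng.random((300, 6)), rng.integers(0, 6, 300)
    label_map, percentages = classify_scene(stack, Xtr, ytr)
    print(percentages)
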
20

Ekhande, Sonali, Uttam Patil, and Kshama Vishwanath Kulhalli. "Review on effectiveness of deep learning approach in digital forensics." International Journal of Electrical and Computer Engineering (IJECE) 12, no. 5 (October 1, 2022): 5481. http://dx.doi.org/10.11591/ijece.v12i5.pp5481-5592.

Full text
Abstract:
Cyber forensics is the use of scientific methods for the definite description of cybercrime activities. It deals with collecting, processing and interpreting digital evidence for cybercrime analysis. Cyber forensic analysis plays a very important role in criminal investigations. Although a lot of research has been done in cyber forensics, the field is still expected to face new challenges in the near future. Analysis of digital media, specifically photographic images, audio and video recordings, is crucial in forensics. This paper focuses specifically on digital forensics. There are several methods for digital forensic analysis. Currently, deep learning (DL), mainly the convolutional neural network (CNN), has proved very promising in the classification of digital images and in sound analysis techniques. This paper presents a compendious study of recent research and methods in forensic areas based on CNNs, with a view to guiding researchers working in this area. We first define and explain the preliminary models of DL. In the next section, out of several DL models, we focus on CNNs and their usage in areas of digital forensics. Finally, the conclusion and future work are discussed. The review shows that CNNs have proved good in most forensic domains and promise to become still better.
21

Khan, Muhammad, and Muhammad Rajwana. "Remote Sensing Image Classification Via Vision Transformer and Transfer Learning." International Journal of Advances in Soft Computing and its Applications 14, no. 1 (March 28, 2022): 213–25. http://dx.doi.org/10.15849/ijasca.220328.14.

Full text
Abstract:
Aerial scene classification, which aims to automatically tag an aerial image with a specific semantic category, is a fundamental problem in understanding high-resolution remote sensing imagery. The classification of remote sensing image scenes can provide significant value, from forest fire monitoring to land use and land cover classification. From the first aerial photographs of the early 20th century to today's satellite imagery, the amount of remote sensing data has increased geometrically with higher resolution. The need to analyze this modern digital data has motivated research to accelerate the classification of remotely sensed images. Fortunately, the computer vision community has made great strides in classifying natural images. The transformer, first applied to the field of natural language processing, is a type of deep neural network based mainly on the self-attention mechanism. Thanks to its strong representation capabilities, researchers are looking at ways to apply transformers to computer vision tasks. In a variety of visual benchmarks, transformer-based models perform similarly to or better than other types of networks, such as convolutional and recurrent networks. Given its high performance and lower need for vision-specific inductive bias, the transformer is receiving more and more attention from the computer vision community. In this paper, we provide a systematic review of transfer learning and transformer techniques for scene classification using the AID dataset. The two approaches achieve accuracies of 80% and 84%, respectively, on the AID dataset. Keywords: remote sensing, vision transformers, transfer learning, classification accuracy
22

Schmidt, Jochen, Phil Tonkin, and Allan Hewitt. "Quantitative soil - landscape models for the Haldon and Hurunui soil sets, New Zealand." Soil Research 43, no. 2 (2005): 127. http://dx.doi.org/10.1071/sr04074.

Full text
Abstract:
Limited resources and large areas of steeplands with limited field access forced soil and land resource surveyors in New Zealand often to develop generalised models of soil–landscape relationships and to use these to produce soil maps by manual interpretation of aerial photographs and field survey. This method is subjective and non-reproducible. Recent studies showed the utility of digital information and analysis to complement manual soil survey. The study presents quantitative soil–landscape models for the Hurunui and Haldon soil sets (New Zealand), developed from conceptual soil–landscape models. Spatial modelling techniques, including terrain analysis and fuzzy classification, are applied to compute membership maps of landform components for the study areas. The membership maps can be used to derive a ‘hard’ classification of land components and uncertainty maps. A soil taxonomic model is developed based on field data (soil profiles), which attaches dominant soil profiles and soil properties, including their uncertainties, to the defined land components. The method presented in this study is proposed as a potential technique for modelling land components of steepland areas in New Zealand, in which the spatial soil variation is dominantly controlled by landform properties. A soil map was developed that includes the uncertainty in the fundamental definitions of landscape units and the variability of soil properties within landscape units.
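
A minimal sketch of fuzzy classification of terrain attributes into landform components, using the standard fuzzy c-means membership formula; the attributes, prototypes and values below are illustrative, not those of the Haldon and Hurunui models.

    import numpy as np

    def fuzzy_memberships(attributes, prototypes, m=2.0):
        """Membership of each grid cell in each landform-component prototype,
        computed with the fuzzy c-means membership formula (rows sum to 1)."""
        # attributes: (n_cells, n_attrs); prototypes: (n_classes, n_attrs)
        d = np.linalg.norm(attributes[:, None, :] - prototypes[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)
        inv = d ** (-2.0 / (m - 1.0))
        return inv / inv.sum(axis=1, keepdims=True)

    # Terrain attributes per cell (e.g. slope in degrees, curvature) and two
    # landform-component prototypes (ridge-like vs. valley-like); values illustrative.
    cells = np.array([[30.0, 0.2], [5.0, -0.1], [18.0, 0.05]])
    protos = np.array([[32.0, 0.25], [4.0, -0.15]])
    M = fuzzy_memberships(cells, protos)
    hard = M.argmax(axis=1)              # 'hard' land-component classification
    uncertainty = 1.0 - M.max(axis=1)    # simple per-cell uncertainty index
    print(M, hard, uncertainty)
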
23

Kloser, R. J., N. J. Bax, T. Ryan, A. Williams, and B. A. Barker. "Remote sensing of seabed types in the Australian South East Fishery; development and application of normal incident acoustic techniques and associated 'ground truthing'." Marine and Freshwater Research 52, no. 4 (2001): 475. http://dx.doi.org/10.1071/mf99181.

Full text
Abstract:
Calibrated acoustic backscattering measurements using 12, 38 and 120 kHz were collected over depths of 30–230 m, together with benthic epi- and in-fauna, sediments, photographs and video data. Each acoustic ping was envelope detected and digitized by echo sounder to include both the first and second echoes, and specifically designed software removed signal biases. A reference set of distinct habitat types at different depths was established, and a simple classification of the seabed combined both biological and geological attributes. Four seabed types were identified as having broad biological and geological significance;the simple acoustic indices could discriminate three of these at a single frequency. This demonstrates that the acoustic indices are not directly related to specific seabed properties but to a combination of seabed hardness and roughness attributes at a particular sampling frequency. The acoustic-derived maps have greater detail of seabed structure than previously described by sediment surveys and fishers’ interpretation. The collection of calibrated digital acoustic data at multiple frequencies and the creation of reference seabed sites will ensure that new shape-and energy-based feature extraction methods on the ping-based data can begin to unravel the complexities of the seabed. The methods described can be transferred to higher-resolution swath-mapping acoustic-sampling devices such as digital side-scan sonars and multi-beam echo sounders.
24

Apollonio, Fabrizio Ivan, Filippo Fantini, Simone Garagnani, and Marco Gaiani. "A Photogrammetry-Based Workflow for the Accurate 3D Construction and Visualization of Museums Assets." Remote Sensing 13, no. 3 (January 30, 2021): 486. http://dx.doi.org/10.3390/rs13030486.

Full text
Abstract:
Nowadays digital replicas of artefacts belonging to the Cultural Heritage (CH) are one of the most promising innovations for museums exhibitions, since they foster new forms of interaction with collections, at different scales. However, practical digitization is still a complex task dedicated to specialized operators. Due to these premises, this paper introduces a novel approach to support non-experts working in museums with robust, easy-to-use workflows based on low-cost widespread devices, aimed at the study, classification, preservation, communication and restoration of CH artefacts. The proposed methodology introduces an automated combination of acquisition, based on mobile equipment and visualization, based on Real-Time Rendering. After the description of devices used along the workflow, the paper focuses on image pre-processing and geometry processing techniques adopted to generate accurate 3D models from photographs. Assessment criteria for the developed process evaluation are illustrated. Tests of the methodology on some effective museum case studies are presented and discussed.
25

Czyżewski, Dariusz, and Irena Fryc. "Luminance Calibration and Linearity Correction Method of Imaging Luminance Measurement Devices." Photonics Letters of Poland 13, no. 2 (June 30, 2021): 25. http://dx.doi.org/10.4302/plp.v13i2.1094.

Full text
Abstract:
This paper presents that the opto-electrical characteristic of a typical CCD based digital camera is nonlinear. It means that the digital electric signal of the camera's CCD detector is not a linear function of the luminance value on the camera's lens. The opto-electrical characteristic feature of a digital camera needs to be transformed into a linear function if this camera is to be used as a luminance distribution measurement device known as an Imaging Luminance Measurement Device (ILMD). The article presents the methodology for obtaining the opto-electrical characteristic feature of a typical CCD digital camera and focuses on the non-linearity correction method.
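
A minimal sketch of the kind of non-linearity correction the abstract refers to, assuming a simple power-law (gamma) response model fitted to calibration pairs of luminance versus digital number; the model form and the calibration values are illustrative, not taken from the paper.

    import numpy as np
    from scipy.optimize import curve_fit

    def response(luminance, k, gamma):
        """Assumed camera model: digital number DN = k * L**gamma (nonlinear)."""
        return k * luminance ** gamma

    # Calibration pairs: luminance of a reference source (cd/m^2) vs. mean DN
    L_cal = np.array([10, 50, 100, 500, 1000, 3000], dtype=float)
    DN_cal = np.array([22, 60, 88, 230, 330, 640], dtype=float)   # illustrative values

    (k, gamma), _ = curve_fit(response, L_cal, DN_cal, p0=(1.0, 0.5))

    def dn_to_luminance(dn):
        """Inverse (linearizing) transform applied when the camera acts as an ILMD."""
        return (dn / k) ** (1.0 / gamma)

    print(f"fitted k={k:.2f}, gamma={gamma:.2f}")
    print("luminance at DN=200:", dn_to_luminance(200.0))
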
26

Lally, Eugene F. "INNOVATIONS IN PHOTOGRAPHY: Digital Printing Techniques." Anthropology News 44, no. 4 (April 2003): 17. http://dx.doi.org/10.1111/an.2003.44.4.17.1.

Full text
27

Loganadan, Suriya, Murnisari Dardjan, Nani Murniati, Fahmi Oscandar, Yuti Malinda, and Dewi Zakiawati. "Preliminary Research: Description of Lip Print Patterns in Children and Their Parents among Deutero-Malay Population in Indonesia." International Journal of Dentistry 2019 (March 13, 2019): 1–6. http://dx.doi.org/10.1155/2019/7629146.

Full text
Abstract:
Introduction. Human identification is vital not only in legal medicine but also in criminal inquiries and identification. Cheiloscopy is the study of lip prints which are unique, individual, and heritable that is used for personal identification purposes in forensic odontology. Objective. The aim of this study is to identify the possibility of the child to inherit the lip print patterns from their parents and also to describe the lip print patterns in children and their parents among the Deutero-Malay population. Method. The descriptive research used lip samples of 90 individuals including father, mother, and a child who are biologically related and their age ranges from 12 to 60 years old. The samples chosen are from the Deutero-Malay ethnic in Indonesia at least for the past two generation who obeys all the exclusion criteria of this research. Purposive nonrandom sampling method was used to collect samples by photography technique using a digital camera, and the data obtained were then analysed using Adobe Photoshop CS3 software. Grooves and wrinkles of primary quadrants one, three, six, and seven of lips were studied according to Suzuki and Tsuchihashi’s classification in 1971. Result. In the present study, it is found that Type I′ (30.28%) is the most dominant lip print pattern and Type I (1.39%) is the least dominant among the Deutero-Malay population. Besides, this study has shown that the similarity of lip print pattern between mother and the child (57.89%) is greater compared to the father and the child (42.22%). Conclusion. Based on this, we can conclude that lip print can be inherited and dissimilar for every population of race; likewise, the Deutero-Malay population has the Type I′ as the most dominant lip print pattern.
28

Leibova, N. A., and M. B. Leibov. "Digital Anthropological photography." VESTNIK ARHEOLOGII, ANTROPOLOGII I ETNOGRAFII, no. 4(59) (December 15, 2022): 132–46. http://dx.doi.org/10.20874/2071-0437-2022-59-4-11.

Full text
Abstract:
Despite the fact that in recent years the anthropologist's arsenal has significantly expanded due to the introduction of digital 3D scanning, computed tomography, microtomography, etc. into the practice of anthropological research, for most researchers photography remains an important part of the scientific process. Moreover, the resulting images are increasingly subject to higher requirements, since they often enter scientific circulation much faster than before, bypassing editors and professional retouchers of publishers thanks to various kinds of Internet resources, such as presentations, online Internet conferences, reports, etc. In this new digital reality, the researcher acts as expert, director, and operator of a photo session and is solely responsible for the quality of the result and for its compliance with the goals of the shooting. The high intelligence of modern digital cameras creates a false impression in the beginner's mind that the camera can always be given freedom in making decisions regarding the shooting parameters. However, as shown in the article, there are a number of shooting situations when targeted manual management of shooting parameters is necessary to obtain a positive result. The following information will help the photographer do this. The purpose of our article is to help the researcher anthropologist solve his problems qualitatively using a digital camera. We try to give an idea of those basic concepts, features of technology and techniques that determine the work of a photographer within the digital space. To this end, the article discusses the main technical and methodological techniques of anthropological photography within the digital space. A brief definition of the basic concepts of the “digital world” and the most important technical characteristics of modern digital cameras are given. The main part of the article is devoted to the photography of paleoanthropological materials. Particular attention is paid to the shooting of the skull and odontological materials. Specific recommendations are given on the management of shooting parameters and on the organization of the shooting process, the use of which will allow the researcher to obtain high-quality digital photographs of the studied anthropological objects that meet both the requirements of modern printing and the requirements of representation on Internet resources.
APA, Harvard, Vancouver, ISO, and other styles
29

Schmidt, Alison. "Practical retinal photography and digital imaging techniques." American Journal of Ophthalmology 137, no. 6 (June 2004): 1177–78. http://dx.doi.org/10.1016/j.ajo.2004.05.008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Siok, Katarzyna, Ireneusz Ewiak, and Agnieszka Jenerowicz. "Multi-Sensor Fusion: A Simulation Approach to Pansharpening Aerial and Satellite Images." Sensors 20, no. 24 (December 11, 2020): 7100. http://dx.doi.org/10.3390/s20247100.

Full text
Abstract:
The growing demand for high-quality imaging data and the current technological limitations of imaging sensors require the development of techniques that combine data from different platforms in order to obtain comprehensive products for detailed studies of the environment. To meet the needs of modern remote sensing, the authors present an innovative methodology of combining multispectral aerial and satellite imagery. The methodology is based on the simulation of a new spectral band with a high spatial resolution which, when used in the pansharpening process, yields an enhanced image with a higher spectral quality compared to the original panchromatic band. This is important because spectral quality determines the further processing of the image, including segmentation and classification. The article presents a methodology of simulating new high-spatial-resolution images taking into account the spectral characteristics of the photographed types of land cover. The article focuses on natural objects such as forests, meadows, or bare soils. Aerial panchromatic and multispectral images acquired with a digital mapping camera (DMC) II 230 and satellite multispectral images acquired with the S2A sensor of the Sentinel-2 satellite were used in the study. Cloudless data with a minimal time shift were obtained. Spectral quality analysis of the generated enhanced images was performed using a method known as “consistency” or “Wald’s protocol first property”. The resulting spectral quality values clearly indicate less spectral distortion of the images enhanced by the new methodology compared to using a traditional approach to the pansharpening process.
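As a rough illustration of the "consistency" (first property of Wald's protocol) check mentioned in this abstract, the sketch below degrades a pansharpened image back to the resolution of the original multispectral bands and compares them band by band. The array shapes and the simple block-averaging degradation are assumptions made for illustration, not the authors' exact procedure.

```python
import numpy as np

def degrade(band: np.ndarray, factor: int) -> np.ndarray:
    """Spatially degrade a high-resolution band by block averaging (a crude MTF stand-in)."""
    h, w = band.shape
    h, w = h - h % factor, w - w % factor            # crop to a multiple of the factor
    blocks = band[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

def consistency_rmse(pansharpened: np.ndarray, original_ms: np.ndarray, factor: int) -> list[float]:
    """Per-band RMSE between degraded pansharpened bands (bands, H, W) and the original
    multispectral bands (bands, H/factor, W/factor). Lower values mean less spectral distortion."""
    return [float(np.sqrt(np.mean((degrade(p, factor) - m) ** 2)))
            for p, m in zip(pansharpened, original_ms)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ms = rng.random((4, 64, 64))                     # 4 synthetic multispectral bands
    pan_sharp = np.repeat(np.repeat(ms, 4, axis=1), 4, axis=2) + rng.normal(0, 0.01, (4, 256, 256))
    print(consistency_rmse(pan_sharp, ms, factor=4))
```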
APA, Harvard, Vancouver, ISO, and other styles
31

Rotz, J. D., A. O. Abaye, R. H. Wynne, E. B. Rayburn, G. Scaglia, and R. D. Phillips. "Classification of Digital Photography for Measuring Productive Ground Cover." Rangeland Ecology & Management 61, no. 2 (March 2008): 245–48. http://dx.doi.org/10.2111/07-011.1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Comert, R., U. Avdan, and T. Gorum. "RAPID MAPPING OF FORESTED LANDSLIDE FROM ULTRA-HIGH RESOLUTION UNMANNED AERIAL VEHICLE DATA." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W4 (March 6, 2018): 171–76. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w4-171-2018.

Full text
Abstract:
The Black Sea Region is one of the most landslide-prone areas of Turkey because of its high slope gradients, heavy rainfall, and highly weathered hillslope material. Landslide occurrences in this region are mainly controlled by hydro-climatic conditions and anthropogenic activity. Rapid regional landslide inventory mapping after a major event is one of the main difficulties in this densely vegetated region. Landslide inventories are nevertheless the first and necessary step towards susceptibility assessment, following the principle that the past is the key to the future: future landslides are more likely to occur under conditions similar to those that led to past and present instability. It is therefore important to apply rapid mapping techniques to create regional landslide inventory maps. This study presents preliminary results of semi-automated landslide mapping from unmanned aerial vehicle (UAV) data with an object-based image analysis (OBIA) approach. Ultra-high-resolution aerial photographs were acquired with a fixed-wing UAV system on 17 August 2017 over the landslide zones triggered by the prolonged heavy rainfall of 12–13 August 2016 in Bartın Kurucaşile province. A 10 cm resolution orthomosaic and a Digital Surface Model (DSM) of the area were produced from the photographs. A test area was selected from the overall research area and semi-automatic landslide detection was performed using object-based image analysis. OBIA was implemented in three steps: image segmentation, image-object metric calculation, and classification. The accuracy of the resulting maps was assessed by comparison with an expert-based landslide inventory map of the area; 80 % of the 240 landslides in the area were detected correctly.
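A minimal sketch of the three OBIA steps named in this abstract (segmentation, per-object metrics, rule-based classification), written here with scikit-image on a synthetic orthomosaic. The specific metrics and thresholds are placeholders, not the authors' rule set.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.measure import regionprops

def detect_candidate_landslides(orthomosaic: np.ndarray, dsm: np.ndarray,
                                n_segments: int = 500,
                                brightness_min: float = 0.55,
                                slope_proxy_min: float = 0.02) -> np.ndarray:
    """Return a boolean mask of segments flagged as landslide candidates.

    1) Segmentation: group pixels into image objects with SLIC.
    2) Metrics: mean brightness per object (bare soil is bright in vegetated terrain)
       and a crude slope proxy from the local DSM gradient.
    3) Classification: simple thresholds on the two metrics (placeholder rules).
    """
    segments = slic(orthomosaic, n_segments=n_segments, compactness=10, start_label=1)
    gy, gx = np.gradient(dsm)
    slope_proxy = np.hypot(gx, gy)
    brightness = orthomosaic.mean(axis=2)

    mask = np.zeros(segments.shape, dtype=bool)
    for region in regionprops(segments):
        coords = tuple(region.coords.T)              # pixel indices of this image object
        if (brightness[coords].mean() > brightness_min and
                slope_proxy[coords].mean() > slope_proxy_min):
            mask[coords] = True
    return mask

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    rgb = rng.random((200, 200, 3))                  # stand-in for the 10 cm orthomosaic
    dsm = np.cumsum(rng.normal(0, 0.05, (200, 200)), axis=0)
    print(detect_candidate_landslides(rgb, dsm).sum(), "pixels flagged")
```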
APA, Harvard, Vancouver, ISO, and other styles
33

Skorobogatov, E. V., I. A. Stepanova, V. S. Orekhov, and M. K. Beklemishev. "The use of NIR Fluorimetry with photographic data acquisition in the fingerprinting method with the addition of fluorophores to the samples: discrimination of apple juices." Аналитика и контроль 26, no. 1 (2022): 21–30. http://dx.doi.org/10.15826/analitika.2022.26.1.005.

Full text
Abstract:
The application of dyes that fluoresce in the near-infrared (NIR, 700–800 nm) region to the recognition of samples by a fingerprinting method in which fluorophores are added to the samples ("fluorescent eye") is proposed. The technique has been applied successfully to the classification of samples of various natures. In the present work, the strategy was tested on the discrimination of 17 samples of apple juice from different manufacturers, purchased at different times. An indolenine-series heptamethine carbocyanine dye in the presence of surfactants was used as the added fluorophore, red LEDs served as the excitation source, and the signal was recorded with a digital camera fitted with an additional IR filter; a spectrofluorimeter with a 96-well plate accessory was used to record the spectra. The photographic images were processed with Unscrambler X and Excel software, and the results were plotted in the coordinates NIR fluorescence intensity versus visible-light reflection intensity. This representation allowed the samples to be divided into groups associated with the manufacturer. Intrinsic fluorescence spectra were also obtained, including spectra with the NIR dye added, and were processed by principal component analysis. Intrinsic emission distinguished 5–6 groups of samples, not counting the blank, while the spectra with the added dye yielded the largest number of groups (9); however, classification from the spectra did not group the juices by producer. Obtaining photographs with the visualizer was also easier and faster than recording the fluorescence spectra, and joint processing of the emission spectra and photographs did not improve the quality of discrimination.
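An illustrative sketch of the two processing routes described above: extracting a per-sample (NIR fluorescence, visible reflection) pair from photographs, and running principal component analysis on emission spectra with scikit-learn. The channel choices, ROI, and array shapes are assumptions, not the authors' workflow.

```python
import numpy as np
from sklearn.decomposition import PCA

def photo_features(nir_image: np.ndarray, visible_image: np.ndarray,
                   roi: tuple[slice, slice]) -> tuple[float, float]:
    """Mean NIR fluorescence intensity and mean visible reflection over a well ROI."""
    return float(nir_image[roi].mean()), float(visible_image[roi].mean())

def pca_scores(spectra: np.ndarray, n_components: int = 2) -> np.ndarray:
    """Project fluorescence spectra (samples x wavelengths) onto the first components.

    PCA centres the data internally, so raw spectra can be passed directly."""
    return PCA(n_components=n_components).fit_transform(spectra)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    roi = (slice(40, 60), slice(40, 60))
    nir, vis = rng.random((100, 100)), rng.random((100, 100))   # stand-ins for camera frames
    print("photo features:", photo_features(nir, vis, roi))
    spectra = rng.random((17, 120))                              # 17 juices, 120 wavelengths
    print("PCA scores shape:", pca_scores(spectra).shape)
```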
APA, Harvard, Vancouver, ISO, and other styles
34

FUJIWARA, Tadao, Shuji NISHIHARA, and Koji HIROSE. "Color Flow-Visualization Photography and Digital Image Processing Techniques." JSME international journal. Ser. 2, Fluids engineering, heat transfer, power, combustion, thermophysical properties 31, no. 1 (1988): 39–46. http://dx.doi.org/10.1299/jsmeb1988.31.1_39.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

FUJIWARA, Tadao, Shuji NISHIHARA, and Koji HIROSE. "Color flow-visualization photography and digital image processing techniques." Transactions of the Japan Society of Mechanical Engineers Series B 53, no. 493 (1987): 2762–70. http://dx.doi.org/10.1299/kikaib.53.2762.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Thalheimer, Raquel D., Vanessa L. Merker, K. Ina Ly, Amanda Champlain, Jennifer Sawaya, Naomi L. Askenazi, Hamilton P. Herr, et al. "Validating Techniques for Measurement of Cutaneous Neurofibromas." Neurology 97, no. 7 Supplement 1 (July 6, 2021): S32—S41. http://dx.doi.org/10.1212/wnl.0000000000012428.

Full text
Abstract:
Objective: To assess the reliability and variability of digital calipers, 3D photography, and high-frequency ultrasound (HFUS) for measurement of cutaneous neurofibromas (cNF) in patients with neurofibromatosis type 1 (NF1). Background: cNF affect virtually all patients with NF1 and are a major source of morbidity. Reliable techniques for measuring cNF are needed to develop therapies for these tumors. Methods: Adults with NF1 were recruited. For each participant, 6 cNF were assessed independently by 3 different examiners at 5 different time points using digital calipers, 3D photography, and HFUS. The intraclass correlation coefficient (ICC) was used to assess intrarater and interrater reliability of linear and volumetric measurements for each technique, with ICC values >0.90 defined as excellent reliability. The coefficient of variation (CV) was used to estimate the minimal detectable difference (MDD) for each technique. Results: Fifty-seven cNF across 10 participants were evaluated. The ICC for image acquisition and measurement was >0.97 within and across examiners for HFUS and 3D photography. The ICC for digital calipers was 0.62–0.88. CV varied by measurement tool, linear vs volumetric measurement, and tumor size. Conclusions: HFUS and 3D photography demonstrate excellent reliability, whereas digital calipers have good to excellent reliability in measuring cNF. The MDD for each technique was used to create tables of proposed thresholds for investigators to use as guides for clinical trials focused on cNF size. These criteria should be updated as the performance of these end points is evaluated.
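A minimal sketch of the reliability statistics named in this abstract: a one-way random-effects ICC computed from an examiners-by-tumors measurement matrix, plus a coefficient of variation and the common 1.96·√2·CV form of a minimal detectable difference. These are textbook formulas, not necessarily the exact variants used in the study, and the data are synthetic.

```python
import numpy as np

def icc_oneway(measurements: np.ndarray) -> float:
    """ICC(1,1) from a (raters x subjects) matrix: (MSB - MSW) / (MSB + (k-1) * MSW)."""
    k, n = measurements.shape                        # k raters, n subjects (tumors)
    subject_means = measurements.mean(axis=0)
    grand_mean = measurements.mean()
    msb = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)        # between subjects
    msw = np.sum((measurements - subject_means) ** 2) / (n * (k - 1))    # within subjects
    return float((msb - msw) / (msb + (k - 1) * msw))

def cv_and_mdd(repeats: np.ndarray) -> tuple[float, float]:
    """Coefficient of variation of repeated measures and an MDD of 1.96 * sqrt(2) * CV (both in %)."""
    cv = float(repeats.std(ddof=1) / repeats.mean())
    return 100 * cv, 100 * 1.96 * np.sqrt(2) * cv

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    true_size = rng.uniform(2, 10, size=20)                   # 20 hypothetical tumors (mm)
    raters = true_size + rng.normal(0, 0.2, size=(3, 20))     # 3 examiners with small error
    print("ICC(1,1):", round(icc_oneway(raters), 3))
    print("CV%, MDD%:", cv_and_mdd(raters[:, 0]))
```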
APA, Harvard, Vancouver, ISO, and other styles
37

Morrison, Annie O., and Jerad M. Gardner. "Microscopic Image Photography Techniques of the Past, Present, and Future." Archives of Pathology & Laboratory Medicine 139, no. 12 (May 19, 2015): 1558–64. http://dx.doi.org/10.5858/arpa.2014-0315-ra.

Full text
Abstract:
Context: The field of pathology is driven by microscopic images. Educational activities for trainees and practicing pathologists alike are conducted through exposure to images of a variety of pathologic entities in textbooks, publications, online tutorials, national and international conferences, and interdepartmental conferences. During the past century and a half, photographic technology has progressed from primitive and bulky glass-lantern projector slides to static and/or whole-slide digital image formats that can now be transferred around the world in moments via the Internet. Objective: To provide a historic and technologic overview of the evolution of microscopic-image photographic tools and techniques. Data Sources: Primary historic methods of microscopic image capture were delineated through interviews conducted with senior staff members in the Emory University Department of Pathology. Searches for the historic image-capturing methods were conducted using the Google search engine. The Google Scholar and PubMed databases were used to research methods of digital photography, whole-slide scanning, and smartphone cameras for microscopic image capture in a pathology practice setting. Conclusions: Although film-based cameras dominated for much of this period, the rise of digital cameras outside of pathology generated a shift toward digital image-capturing methods, including mounted digital cameras and whole-slide digital scanning. Digital image capture techniques have ushered in new applications for slide sharing and second-opinion consultations of unusual or difficult cases in pathology. Given their recent surge in popularity, we suspect that smartphone cameras are poised to become a widespread, cost-effective method for pathology image acquisition.
APA, Harvard, Vancouver, ISO, and other styles
38

Montgomery, Joshua, Craig Mahoney, Brian Brisco, Lyle Boychuk, Danielle Cobbaert, and Chris Hopkinson. "Remote Sensing of Wetlands in the Prairie Pothole Region of North America." Remote Sensing 13, no. 19 (September 28, 2021): 3878. http://dx.doi.org/10.3390/rs13193878.

Full text
Abstract:
The Prairie Pothole Region (PPR) of North America is an extremely important habitat for a diverse range of wetland ecosystems that provide a wealth of socio-economic value. This paper describes the ecological characteristics and importance of PPR wetlands and the use of remote sensing for mapping and monitoring applications. While recent publications offer comprehensive reviews of wetland remote sensing in general, there is no comprehensive review of the use of remote sensing in the PPR. First, the PPR is described, including the wetland classification systems that have been used, the water regimes that control the surface water and water levels, and the soil and vegetation characteristics of the region. The tools and techniques that have been used in the PPR for analyses of geospatial data for wetland applications are described. Field observations for ground-truth data are critical for good validation and accuracy assessment of the many products that are produced. Wetland classification approaches are reviewed, including decision trees, machine learning, and object-based versus pixel-based approaches. A comprehensive description of the remote sensing systems and data that have been employed by various studies in the PPR is provided. A wide range of data can be used for various applications, including passive optical data such as aerial photographs or satellite-based Earth-observation data. Both airborne and spaceborne lidar studies are described, and a detailed account of Synthetic Aperture Radar (SAR) data and research is provided. The state of the art is the use of multi-source data and hybrid approaches to achieve higher accuracies. Digital Surface Models are also being incorporated in geospatial analyses to separate forest, shrub, and emergent systems based on vegetation height. Remote sensing provides a cost-effective mechanism for mapping and monitoring PPR wetlands, especially given the logistical difficulties and cost of field-based methods. The wetland characteristics of the PPR dictate the need for high resolution in both time and space, which is increasingly possible with the numerous and growing remote sensing systems available and the trend towards open-source data and tools. The fusion of multi-source remote sensing data via state-of-the-art machine learning is recommended for wetland applications in the PPR; such data promote flexibility for sensor addition, subtraction, or substitution as a function of application needs and potential cost restrictions. This is important in the PPR because of the challenges related to the highly dynamic nature of this unique region.
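To make the recommended fusion of multi-source data via machine learning concrete, the sketch below stacks optical and SAR-derived features per pixel and trains a random forest wetland classifier with scikit-learn on labelled samples. The feature set, class labels, and data are synthetic placeholders rather than anything taken from the reviewed studies.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Synthetic per-pixel features: optical bands (red, NIR), SAR backscatter, DSM-derived height.
n_pixels = 2000
features = np.column_stack([
    rng.random(n_pixels),            # red reflectance
    rng.random(n_pixels),            # NIR reflectance
    rng.normal(-12, 3, n_pixels),    # SAR backscatter (dB)
    rng.gamma(2.0, 1.0, n_pixels),   # vegetation height proxy from a DSM
])
labels = rng.integers(0, 3, n_pixels)  # 0 = open water, 1 = emergent wetland, 2 = upland (placeholder)

X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", round(clf.score(X_test, y_test), 3))  # near chance here, since labels are random
```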
APA, Harvard, Vancouver, ISO, and other styles
39

Yu, Cheng Xin, Xiang Xia, Feng Hua Qin, and Peng Xiao. "The Application of Digital Photography Techniques in Structure Deformation Measurement." Applied Mechanics and Materials 475-476 (December 2013): 204–8. http://dx.doi.org/10.4028/www.scientific.net/amm.475-476.204.

Full text
Abstract:
The article introduces the use of the 3D time baseline parallax method for deformation measurement of steel structures, describes the calculation and data processing methods, and presents experimental results. The method takes into account image resolution, focal length, and the influence of external environmental conditions, and it yields small observation errors and high-precision results. Using ordinary digital cameras for the photographic survey avoids the complex film-processing workflow of metric measuring cameras, and the field photography and office data processing can be carried out in parallel. The solver methods best suited to digital cameras are the direct linear transformation and the time baseline parallax method; of these, the 3D time baseline parallax method is mainly used for measuring the spatial displacement of objects.
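As a rough, hedged illustration of the geometric idea behind a time-baseline-parallax measurement with a fixed camera, the snippet below converts an image-plane parallax into an object-space displacement by scaling the pixel shift by the pixel pitch and the object-distance-to-focal-length ratio. This simplified pinhole relation ignores the corrections for camera orientation and environmental effects that the article considers, and the example numbers are hypothetical.

```python
def displacement_mm(parallax_px: float, pixel_pitch_mm: float,
                    object_distance_mm: float, focal_length_mm: float) -> float:
    """Object-space displacement for a fixed camera viewing a moving target (pinhole model).

    parallax_px: shift of the target between two epochs, in pixels.
    pixel_pitch_mm: physical size of one sensor pixel.
    """
    parallax_on_sensor = parallax_px * pixel_pitch_mm
    return parallax_on_sensor * object_distance_mm / focal_length_mm

if __name__ == "__main__":
    # Hypothetical numbers: 2.3 px shift, 0.005 mm pixels, target 15 m away, 85 mm lens.
    d = displacement_mm(parallax_px=2.3, pixel_pitch_mm=0.005,
                        object_distance_mm=15000, focal_length_mm=85)
    print(f"estimated displacement ≈ {d:.2f} mm")   # ≈ 2.03 mm
```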
APA, Harvard, Vancouver, ISO, and other styles
40

Durusoy, Murat. "In-Game Photography: Creating New Realities through Video Game Photography." Membrana Journal of Photography, Vol. 3, no. 1 (2018): 42–47. http://dx.doi.org/10.47659/m4.042.art.

Full text
Abstract:
Computers and photography have had a long and complicated relationship. As image processing and manipulation capabilities advanced on the computer front, photography re-birthed itself with digital cameras and digital imaging techniques. The development of interconnected social sharing networks such as Instagram and Twitter feeds photographers' and users' urge to show off their momentary "been there/seen that – capture the moment/share the moment" instincts. Another, less expected front that emerged as the image-processing power of consumer electronics improved is the "video game world", in which telematic travellers may shoot photographs in constructed fantasy worlds as if travelling in real life. While life-like graphics manufactured by computers raise questions about the authenticity and truthfulness of the image, the possible future of photography as socially efficient visual knowledge is in constant flux. This article reflects on today's trends in in-game photography and tries to foresee how this emerging genre and its constructed realities will transpose the old with the new photographic data in the post-truth condition, calling for a re-evaluation of photography's truth value. Keywords: digital image, lens-based, photography, screenshot, video games
APA, Harvard, Vancouver, ISO, and other styles
41

Norflus, Fran. "Using Digital Photography to Supplement Learning of Biotechnology." American Biology Teacher 74, no. 4 (April 1, 2012): 232–36. http://dx.doi.org/10.1525/abt.2012.74.4.5.

Full text
Abstract:
The author used digital photography to supplement the learning of biotechnology by students with a variety of learning styles and educational backgrounds. Because one approach would not be sufficient to reach all the students, digital photography was used to explain the techniques and results to the whole class instead of teaching each student individually. To evaluate the effectiveness of this teaching technique, the students' responses to various examination questions were analyzed.
APA, Harvard, Vancouver, ISO, and other styles
42

Englund, Sylvia R., Joseph J. O'Brien, and David B. Clark. "Evaluation of digital and film hemispherical photography and spherical densiometry for measuring forest light environments." Canadian Journal of Forest Research 30, no. 12 (December 1, 2000): 1999–2005. http://dx.doi.org/10.1139/x00-116.

Full text
Abstract:
This study presents the results of a comparison of digital and film hemispherical photography as means of characterizing forest light environments and canopy openness. We also compared hemispherical photography to spherical densiometry. Our results showed that differences in digital image quality due to the loss of resolution that occurred when images were processed for computer analysis did not affect estimates of unweighted openness. Weighted openness and total site factor estimates were significantly higher in digital images compared with film photos. The differences between the two techniques might be a result of underexposure of the film images or differences in lens optical quality and field of view. We found that densiometer measurements significantly increased in consistency with user practice and were correlated with total site factor and weighted-openness estimates derived from hemispherical photography. Digital photography was effective, and more convenient and less expensive than film photography, but until the differences we observed are better explained, we recommend caution when comparisons are made between the two techniques. We also concluded that spherical densiometers effectively characterize forest light environments.
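A minimal sketch of deriving unweighted canopy openness from a binarized hemispherical photograph: threshold the blue channel into sky versus canopy inside the circular image area and report the sky-pixel fraction. The fixed threshold and circular-mask geometry are illustrative assumptions; they are not the image-processing procedures compared in the paper.

```python
import numpy as np

def unweighted_openness(blue_channel: np.ndarray, threshold: float = 0.6) -> float:
    """Fraction of sky pixels inside the hemispherical (circular) image area.

    blue_channel: 2D array scaled to [0, 1]; pixels brighter than `threshold`
    are treated as sky, the rest as canopy.
    """
    h, w = blue_channel.shape
    cy, cx, radius = h / 2, w / 2, min(h, w) / 2
    yy, xx = np.mgrid[:h, :w]
    inside = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    sky = blue_channel > threshold
    return float(np.count_nonzero(sky & inside) / np.count_nonzero(inside))

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    fake_photo = rng.random((480, 480))              # stand-in for a blue channel
    print(f"openness ≈ {unweighted_openness(fake_photo):.2%}")
```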
APA, Harvard, Vancouver, ISO, and other styles
43

Bai, Ben Du, Wei Hua Liu, Ying Liu, and Jiu Lun Fan. "Recent Advances in High Dynamic Range Imaging." Applied Mechanics and Materials 441 (December 2013): 699–702. http://dx.doi.org/10.4028/www.scientific.net/amm.441.699.

Full text
Abstract:
This paper presents a review of recent techniques in high dynamic range imaging (HDRI), a topic spanning research areas that include image processing, computer graphics, and photography. HDRI (or simply HDR) is a set of techniques that allows a greater dynamic range between the lightest and darkest areas of an image than standard digital imaging or photographic methods. HDR imaging technologies will extend their influence across the imaging industry, including digital cinema, digital photography, and next-generation broadcast, because of their high quality and expressive power. The paper discusses recent advances and future directions in HDRI. Its major goal is to provide a reference source for researchers involved in HDRI, regardless of their particular application areas.
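As one concrete example of the family of techniques surveyed, the sketch below fuses a bracketed exposure series into a single high-dynamic-range-style result using OpenCV's Mertens exposure fusion. The file names are placeholders, and this is only one of many HDR pipelines (others calibrate a camera response and tone-map a true radiance map).

```python
import cv2
import numpy as np

# Placeholder file names for an exposure bracket (e.g. -2 EV, 0 EV, +2 EV).
paths = ["bracket_under.jpg", "bracket_normal.jpg", "bracket_over.jpg"]
images = [cv2.imread(p) for p in paths]
if any(img is None for img in images):
    raise SystemExit("Could not read one of the bracketed exposures.")

# Mertens fusion weights each pixel by contrast, saturation, and well-exposedness,
# producing a displayable result without needing exposure times or tone mapping.
fusion = cv2.createMergeMertens().process(images)          # float image, roughly in [0, 1]
out = np.clip(fusion * 255, 0, 255).astype("uint8")
cv2.imwrite("fused_hdr.jpg", out)
```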
APA, Harvard, Vancouver, ISO, and other styles
44

Purwanti, Endah, and Retna Apsari. "Classification of Digital Mammograms Using Nearest Neighbor Techniques." International Journal of Computer Trends and Technology 17, no. 4 (November 25, 2014): 196–99. http://dx.doi.org/10.14445/22312803/ijctt-v17p137.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Lahuerta Zamora, L., and M. T. Pérez-Gracia. "Using digital photography to implement the McFarland method." Journal of The Royal Society Interface 9, no. 73 (February 15, 2012): 1892–97. http://dx.doi.org/10.1098/rsif.2011.0809.

Full text
Abstract:
The McFarland method allows the concentration of bacterial cells in a liquid medium to be determined by either of two instrumental techniques: turbidimetry or nephelometry. The microbes act by absorbing and scattering incident light, so the absorbance (turbidimetry) or light intensity (nephelometry) measured is directly proportional to their concentration in the medium. In this work, we developed a new analytical imaging method for determining the concentration of bacterial cells in liquid media. Digital images of a series of McFarland standards are used to assign turbidity-based colour values with the aid of dedicated software. Such values are proportional to bacterial concentrations, which allows a calibration curve to be readily constructed. This paper assesses the calibration reproducibility of an intra-laboratory study and compares the turbidimetric and nephelometric results with those provided by the proposed method, which is relatively simple and affordable; in fact, it can be implemented with a digital camera and the public domain software ImageJ.
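A minimal sketch of the image-based calibration idea described above: measure a mean intensity inside a fixed region of interest for each photographed McFarland standard, fit a straight line against the nominal concentration, and invert it for an unknown sample. The ROI position, the linearity assumption, and the synthetic intensities are illustrative choices, not the authors' exact protocol.

```python
import numpy as np

def roi_intensity(image: np.ndarray, roi: tuple[slice, slice]) -> float:
    """Mean pixel value inside a fixed region of interest of a photograph."""
    return float(image[roi].mean())

def fit_calibration(intensities: np.ndarray, concentrations: np.ndarray) -> tuple[float, float]:
    """Least-squares line: intensity = slope * concentration + intercept."""
    slope, intercept = np.polyfit(concentrations, intensities, deg=1)
    return float(slope), float(intercept)

def predict_concentration(intensity: float, slope: float, intercept: float) -> float:
    """Invert the calibration line for an unknown sample."""
    return (intensity - intercept) / slope

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    # Nominal McFarland standards and synthetic ROI intensities standing in for real photos.
    standards = np.array([0.5, 1.0, 2.0, 3.0, 4.0])
    intensities = 30 + 12 * standards + rng.normal(0, 0.5, standards.size)
    slope, intercept = fit_calibration(intensities, standards)
    unknown_photo = np.full((100, 100), 55.0) + rng.normal(0, 0.5, (100, 100))
    measured = roi_intensity(unknown_photo, (slice(40, 60), slice(40, 60)))
    print(f"unknown sample ≈ {predict_concentration(measured, slope, intercept):.2f} McFarland units")
```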
APA, Harvard, Vancouver, ISO, and other styles
46

सिं Sing, अशोकमान Ashokman. "फोटोग्राफी प्रविधि : सङ्क्षिप्त इतिहास {Photography Techniques: A Brief History}." SIRJANĀ – A Journal on Arts and Art Education 4, no. 1 (December 1, 2017): 28–39. http://dx.doi.org/10.3126/sirjana.v4i1.44507.

Full text
Abstract:
[Translated from the Nepali] Photography technology has travelled a long road to reach its present state. The journey began with the camera obscura and continued through the heliograph, the daguerreotype, the calotype, the wet-collodion process, the dry-plate negative process, and the Kodak camera, arriving today at digital photography. The use of photography keeps growing in every sphere of society: it has established itself not only as entertainment and hobby but also as a powerful medium of art and mass communication. This article briefly introduces the concept of the camera and its development, internationally renowned photographers, the history of photography in the Nepali context, and established Nepali photographers.
APA, Harvard, Vancouver, ISO, and other styles
47

Cavalcanti, Claudio S. V. C., Herman Martins Gomes, and José Eustáquio Rangel De Queiroz. "A survey on automatic techniques for enhancement and analysis of digital photography." Journal of the Brazilian Computer Society 19, no. 3 (March 26, 2013): 341–59. http://dx.doi.org/10.1007/s13173-013-0102-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Hakanson, Emma C., Kevin J. Hakanson, Paula S. Anich, and Jonathan G. Martin. "Techniques for documenting and quantifying biofluorescence through digital photography and color quantization." Journal of Photochemistry and Photobiology 12 (December 2022): 100149. http://dx.doi.org/10.1016/j.jpap.2022.100149.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Khaldi, Amine. "Steganographic Techniques Classification According to Image Format." International Annals of Science 8, no. 1 (November 4, 2019): 143–49. http://dx.doi.org/10.21467/ias.8.1.143-149.

Full text
Abstract:
In this work, we present a classification of steganographic methods applicable to digital images, organised according to the type of image used. We observed that no single method can be applied to all image formats: each type of image has its own characteristics, and each steganographic method operates on a particular colorimetric representation. The classification provides an overview of the techniques used for the steganography of digital images.
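As a simple illustration of a format-dependent method of the kind surveyed, the sketch below hides a short message in the least significant bits of a lossless (e.g. PNG-style) RGB array. LSB embedding is a classic spatial-domain example, is not tied to any particular method discussed in the article, and would not survive lossy JPEG re-encoding.

```python
import numpy as np

def embed_lsb(cover: np.ndarray, message: bytes) -> np.ndarray:
    """Hide `message` (length-prefixed) in the least significant bits of a uint8 image."""
    payload = len(message).to_bytes(4, "big") + message
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = cover.reshape(-1).copy()
    if bits.size > flat.size:
        raise ValueError("cover image too small for this message")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(cover.shape)

def extract_lsb(stego: np.ndarray) -> bytes:
    """Recover the length-prefixed message from the least significant bits."""
    flat = stego.reshape(-1) & 1
    length = int.from_bytes(np.packbits(flat[:32]).tobytes(), "big")
    bits = flat[32:32 + 8 * length]
    return np.packbits(bits).tobytes()

if __name__ == "__main__":
    cover = np.random.default_rng(6).integers(0, 256, (64, 64, 3), dtype=np.uint8)
    stego = embed_lsb(cover, b"hidden note")
    print(extract_lsb(stego))   # b'hidden note'
```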
APA, Harvard, Vancouver, ISO, and other styles
50

Micklem, Kingsley. "Developing Digital Photomicroscopy." Cells 11, no. 2 (January 16, 2022): 296. http://dx.doi.org/10.3390/cells11020296.

Full text
Abstract:
(1) The need for efficient ways of recording and presenting multicolour immunohistochemistry images in a pioneering laboratory developing new techniques motivated a move away from photography to electronic and ultimately digital photomicroscopy. (2) Initially, broadcast-quality analogue cameras were used in the absence of practical digital cameras; this allowed the development of digital image processing, storage, and presentation. (3) As an early adopter of digital cameras, the laboratory recognised their advantages and limitations during implementation. (4) The adoption of immunofluorescence for multiprobe detection prompted further developments, particularly a critical approach to probe colocalization. (5) Subsequently, whole-slide scanning was implemented, greatly enhancing histology for diagnosis, research, and teaching.
APA, Harvard, Vancouver, ISO, and other styles