Ready-made bibliography on the topic "IMAGE PATCHING"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles


See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "IMAGE PATCHING".

An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, provided the relevant parameters are available in the work's metadata.

Journal articles on the topic "IMAGE PATCHING"

1

Tsai, Yi-Chang (James), Yi-Ching Wu, and Geoffrey Price. "A Cost-Effective and Objective Full-Depth Patching Identification Method using 3D Sensing Technology with Automated Crack Detection and Classification." Transportation Research Record: Journal of the Transportation Research Board 2672, no. 40 (2018): 50–58. http://dx.doi.org/10.1177/0361198118798474.

Abstract:
Full-depth patching is one of the commonly used asphalt pavement maintenance and rehabilitation methods in which deteriorated base and surface layers are repaired to restore strength and improve ride quality. During resurfacing projects, areas requiring full-depth patching are identified and quantified as construction priorities because of the high costs associated with the labor and materials for the procedure. Currently, the manual surveys conducted to identify these locations are time-consuming and labor-intensive. Thus, large projects often cannot easily quantify the full-depth patching need because of the significant labor that would be required. This paper proposes a method that uses emerging 3D laser technology to identify the full-depth patching need by processing and analyzing the pavement distresses automatically extracted from 3D laser images. The proposed method consists of five steps: (1) 3D data acquisition, calibration, and validation, (2) crack detection, (3) crack classification, (4) rutting detection and measurement, and (5) determination of image-based patching need using the established decision tree. A case study of one mile of 3D pavement images, collected from US 80/S.R. 26, was conducted to demonstrate the use and feasibility of the proposed method. Results show the proposed method is capable of correctly classifying 95.4% of the images that show pavements requiring patching and 84.2% of the images showing pavements not requiring patching for a combined accuracy of 94.1%. The method shows promise for identifying patch locations in a cost-effective manner and will save money and time for transportation agencies.
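
The method in the abstract above culminates in a decision tree (step 5) that maps per-image distress measurements to a patching/no-patching call. The paper's actual features and thresholds are not given here, so the sketch below is only a hypothetical illustration of such a rule, with made-up feature names and cut-off values.

```python
# Hypothetical illustration of step (5) of the pipeline described above: a
# hand-written decision rule that turns per-image distress measurements into
# a full-depth-patching recommendation. Feature names and thresholds are
# placeholders, not the values used in the paper.
from dataclasses import dataclass

@dataclass
class ImageDistress:
    fatigue_crack_area_pct: float   # % of image area covered by fatigue (alligator) cracking
    transverse_crack_count: int     # number of classified transverse cracks
    rut_depth_mm: float             # measured rut depth in millimetres

def needs_full_depth_patching(d: ImageDistress) -> bool:
    """Placeholder decision tree: severe fatigue cracking, or moderate
    cracking combined with deep rutting, triggers a patching recommendation."""
    if d.fatigue_crack_area_pct > 20.0:
        return True
    if d.fatigue_crack_area_pct > 10.0 and d.rut_depth_mm > 12.0:
        return True
    if d.transverse_crack_count > 5 and d.rut_depth_mm > 15.0:
        return True
    return False

print(needs_full_depth_patching(ImageDistress(25.0, 2, 6.0)))   # True
print(needs_full_depth_patching(ImageDistress(4.0, 1, 18.0)))   # False
```
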
2

Zhong, Haidong, Xianyi Chen, and Qinglong Tian. "An Improved Reversible Image Transformation Using K-Means Clustering and Block Patching." Information 10, no. 1 (2019): 17. http://dx.doi.org/10.3390/info10010017.

Abstract:
Recently, reversible image transformation (RIT) technology has attracted considerable attention because it is able not only to generate stego-images that look similar to target images of the same size, but also to recover the secret image losslessly. Therefore, it is very useful in image privacy protection and reversible data hiding in encrypted images. However, the amount of accessorial information for recording the transformation parameters is very large in the traditional RIT method, which results in an abrupt degradation of the stego-image quality. In this paper, an improved RIT method for reducing the auxiliary information is proposed. Firstly, we divide secret and target images into non-overlapping blocks, and classify these blocks into K classes by using the K-means clustering method. Secondly, we match blocks in the last (K-T)-classes using the traditional RIT method for a threshold T, in which the secret and target blocks are paired with the same compound index. Thirdly, the accessorial information (AI) produced by the matching can be represented as a secret segment, and the secret segment can be hidden by patching blocks in the first T-classes. Experimental results show that the proposed strategy can reduce the AI and improve the stego-image quality effectively.
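
A minimal sketch of the block-clustering idea described above, assuming grayscale images, non-overlapping square blocks, a simple (mean, standard deviation) feature per block, and scikit-learn's KMeans; the paper's compound-index matching and the actual information-hiding step are not reproduced.

```python
# Minimal sketch: split two grayscale images into non-overlapping blocks,
# describe each block by (mean, std), cluster all blocks with K-means, and
# pair secret blocks with target blocks from the same cluster. The feature
# choice and pairing rule are illustrative assumptions, not the paper's.
import numpy as np
from sklearn.cluster import KMeans

def split_blocks(img: np.ndarray, b: int) -> np.ndarray:
    h, w = img.shape
    img = img[: h - h % b, : w - w % b]                    # crop to a multiple of b
    blocks = img.reshape(h // b, b, -1, b).swapaxes(1, 2)  # (rows, cols, b, b)
    return blocks.reshape(-1, b, b)

def block_features(blocks: np.ndarray) -> np.ndarray:
    return np.stack([blocks.mean(axis=(1, 2)), blocks.std(axis=(1, 2))], axis=1)

def pair_blocks_by_cluster(secret: np.ndarray, target: np.ndarray, b: int = 8, k: int = 4):
    s_blocks, t_blocks = split_blocks(secret, b), split_blocks(target, b)
    feats = np.vstack([block_features(s_blocks), block_features(t_blocks)])
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
    s_lab, t_lab = labels[: len(s_blocks)], labels[len(s_blocks):]
    pairs = []
    for c in range(k):                                     # pair within each class, in scan order
        s_idx, t_idx = np.where(s_lab == c)[0], np.where(t_lab == c)[0]
        pairs += list(zip(s_idx, t_idx))
    return pairs

rng = np.random.default_rng(0)
secret = rng.integers(0, 256, (64, 64)).astype(np.float64)
target = rng.integers(0, 256, (64, 64)).astype(np.float64)
print(len(pair_blocks_by_cluster(secret, target)))
```
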
3

Gong, Yan. "Panoramic Image Patching Algorithm Based on Global Optimization." Journal of Information and Computational Science 12, no. 14 (2015): 5523–30. http://dx.doi.org/10.12733/jics20150019.

4

Celaya-Padilla, Jose M., Carlos E. Galvan T, J. Ruben Delgado C, Issac Galvan-Tejada, and Ernesto Ivan Sandoval. "Multi-seed texture synthesis to fast image patching." Procedia Engineering 35 (2012): 210–16. http://dx.doi.org/10.1016/j.proeng.2012.04.182.

5

Totsuka, Satoru, Tomoya Handa, Hitoshi Ishikawa, and Nobuyuki Shoji. "Improvement of Adherence with Occlu-Pad Therapy for Pediatric Patients with Amblyopia." BioMed Research International 2018 (November 22, 2018): 1–5. http://dx.doi.org/10.1155/2018/2394562.

Abstract:
We aimed to examine the visual acuity improvement effect and adherence in amblyopia training using tablet-type vision training equipment (Occlu-pad). The subjects were 138 patients with amblyopia (average age 5.5 ± 1.6 years); their amblyopic visual acuity at the start of training was logMAR 0.15 to 1.3. Occlu-pad is a device that processes images such that amblyopic eyes can only view the image as it passes through polarized glasses; this is achieved by peeling off the polarizing film layer in the liquid crystal display of an iPad (Apple). Amblyopia training comprised either the instructional training with Occlu-pad or the eye patch (Patching) as a family training, after wearing perfectly corrected glasses. Visual acuity improvement following amblyopia training by Occlu-pad and Patching was significantly different after 6 months in patients with anisometropic amblyopia (p < 0.05). In patients with strabismic amblyopia, a significant difference between training methods was observed after 9 months (p < 0.05). Use of the Occlu-pad resulted in better adherence for patients with either anisometropic amblyopia or strabismic amblyopia; a significant difference in adherence was observed after 3 months, compared with Patching (p < 0.05). Amblyopia training with Occlu-pad supports greater visual acuity improvement and adherence than Patching.
6

Dan, Han-Cheng, Hao-Fan Zeng, Zhi-Heng Zhu, Ge-Wen Bai, and Wei Cao. "Methodology for Interactive Labeling of Patched Asphalt Pavement Images Based on U-Net Convolutional Neural Network." Sustainability 14, no. 2 (2022): 861. http://dx.doi.org/10.3390/su14020861.

Abstract:
Image recognition based on deep learning generally demands a huge sample size for training, so image labeling inevitably becomes laborious and time-consuming. In the case of evaluating the pavement quality condition, many pavement distress patching images would need manual screening and labeling, and the subjectivity of the labeling personnel would greatly affect the accuracy of image labeling. In this study, to recognize pavement patching images accurately and efficiently, an interactive labeling method is proposed based on the U-Net convolutional neural network, using active learning combined with reverse and correction labeling. According to the calculation results in this paper, the sample size required by the interactive labeling is about half of that required by the traditional labeling method for the same recognition precision. Meanwhile, the accuracy of the interactive labeling method based on the mean intersection over union (mean_IOU) index is 6% higher than that of the traditional method using the same sample size and training epochs. In addition, the accuracy analysis of the noise and boundary of the prediction results shows that this method eliminates 92% of the noise in the predictions (the proportion of noise is reduced from 13.85% to 1.06%), and the image definition is improved by 14.1% in terms of the boundary gray area ratio. Interactive labeling is considered a significantly valuable approach, as it reduces the sample size in each epoch of active learning, greatly alleviates the demand for manpower, and improves learning efficiency and accuracy.
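
The key mechanism above is active learning: the annotator only corrects the images the current model is least certain about, instead of labeling everything up front. The sketch below illustrates that selection step only, with a random stand-in (`model_predict`) for the trained U-Net's softmax output; it is not the paper's implementation.

```python
# Sketch of the active-learning selection step: score each unlabeled pavement
# image by the mean per-pixel entropy of the current model's predictions and
# send the most uncertain images to a human for (correction) labeling.
# `model_predict` is a random stand-in for a trained U-Net.
import numpy as np

def model_predict(images: np.ndarray, n_classes: int = 2) -> np.ndarray:
    """Stand-in for U-Net inference: returns (N, H, W, C) softmax maps."""
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(*images.shape, n_classes))
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def select_for_labeling(images: np.ndarray, budget: int) -> np.ndarray:
    probs = model_predict(images)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=-1)  # (N, H, W)
    scores = entropy.mean(axis=(1, 2))                       # one uncertainty score per image
    return np.argsort(scores)[::-1][:budget]                 # indices of the most uncertain images

unlabeled = np.zeros((20, 64, 64))            # 20 dummy grayscale pavement images
print(select_for_labeling(unlabeled, budget=5))
```
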
7

Tokuda, Kenichi, Tetsuya Kinugasa, Ryota Hayashi, Takafumi Haji, and Hisanori Amano. "Shredded Image Patching of Inner Crawler Cameras for Disaster Robot." Proceedings of Mechanical Engineering Congress, Japan 2016 (2016): G1500504. http://dx.doi.org/10.1299/jsmemecj.2016.g1500504.

8

Takahashi, Ryo, Takashi Matsubara, and Kuniaki Uehara. "Data Augmentation Using Random Image Cropping and Patching for Deep CNNs." IEEE Transactions on Circuits and Systems for Video Technology 30, no. 9 (2020): 2917–31. http://dx.doi.org/10.1109/tcsvt.2019.2935128.

9

Maeda, Keisuke, Saya Takada, Tomoki Haruyama, Ren Togo, Takahiro Ogawa, and Miki Haseyama. "Distress Detection in Subway Tunnel Images via Data Augmentation Based on Selective Image Cropping and Patching." Sensors 22, no. 22 (2022): 8932. http://dx.doi.org/10.3390/s22228932.

Abstract:
Distresses, such as cracks, directly reflect the structural integrity of subway tunnels. Therefore, the detection of subway tunnel distress is an essential task in tunnel structure maintenance. This paper presents the performance improvement of deep learning-based distress detection to support the maintenance of subway tunnels through a new data augmentation method, selective image cropping and patching (SICAP). Specifically, we generate effective data for training the distress detection model by focusing on the distressed regions via SICAP. After the data augmentation, we train a distress detection model using the expanded training data. The new image generated based on SICAP does not change the pixel values of the original image. Thus, there is little loss of information, and the generated images are effective in constructing a robust model for various subway tunnel lines. We conducted experiments with some comparative methods. The experimental results show that the detection performance can be improved by our data augmentation.
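
The abstract does not spell out the exact SICAP procedure beyond "focusing on the distressed regions", so the following is only an illustrative guess at such an augmentation: four crops, each centred on an annotated distress pixel, are pasted into the quadrants of a new training image (a RICAP-like layout). Function names, crop sizes, and the quadrant layout are assumptions, not the paper's method.

```python
# Illustrative sketch of a "selective cropping and patching" augmentation in
# the spirit of the abstract above: four crops are taken so that each one
# contains annotated distress pixels, then pasted into the four quadrants of
# a new training image. Details are assumptions made for illustration.
import numpy as np

def selective_crop(img: np.ndarray, mask: np.ndarray, size: int, rng) -> np.ndarray:
    """Crop a size x size window whose centre is a randomly chosen distress pixel."""
    ys, xs = np.nonzero(mask)
    i = rng.integers(len(ys))
    cy = int(np.clip(ys[i], size // 2, img.shape[0] - size // 2))
    cx = int(np.clip(xs[i], size // 2, img.shape[1] - size // 2))
    return img[cy - size // 2: cy + size // 2, cx - size // 2: cx + size // 2]

def sicap_like(img: np.ndarray, mask: np.ndarray, out: int = 128, seed: int = 0) -> np.ndarray:
    """Paste four distress-centred crops into the quadrants of a new image."""
    rng = np.random.default_rng(seed)
    half = out // 2
    patched = np.zeros((out, out), dtype=img.dtype)
    for r in range(2):
        for c in range(2):
            patched[r * half:(r + 1) * half, c * half:(c + 1) * half] = \
                selective_crop(img, mask, half, rng)
    return patched

img = np.random.default_rng(1).integers(0, 256, (256, 256)).astype(np.uint8)
mask = np.zeros_like(img)
mask[100:140, 60:200] = 1                  # fake crack annotation
print(sicap_like(img, mask).shape)         # (128, 128)
```
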
10

Wang, Dacheng, and Sargur N. Srihari. "Analysis of Form Images." International Journal of Pattern Recognition and Artificial Intelligence 8, no. 5 (1994): 1031–52. http://dx.doi.org/10.1142/s0218001494000528.

Abstract:
Automatic analysis of images of forms is a problem of both practical and theoretical interest, due to its importance in office automation and the conceptual challenges it poses for document image analysis, respectively. We describe an approach to the extraction of text, both typed and handwritten, from scanned and digitized images of filled-out forms. In decomposing a filled-out form into three basic components of boxes, line segments and the remainder (handwritten and typed characters, words, and logos), the method does not use a priori knowledge of form structure. The input binary image is first segmented into small and large connected components. Complex boxes are decomposed into elementary regions using an approach based on key-point analysis. Handwritten and machine-printed text that touches or overlaps guide lines and boxes is separated by removing lines. Characters broken by line removal are rejoined using a character patching method. Experimental results with filled-out forms from several different domains (insurance, banking, tax, retail, and postal) are given.
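
As a rough illustration of the line-removal and character-patching steps mentioned above, the sketch below uses standard OpenCV morphology on a binary form image; the kernel sizes are assumptions, and the paper's key-point-based decomposition is not reproduced.

```python
# Sketch of line removal and character patching on a binary form image
# (foreground = 255), using OpenCV morphology. Kernel sizes are illustrative
# assumptions, not the paper's parameters.
import cv2
import numpy as np

def remove_lines_and_patch(binary: np.ndarray) -> np.ndarray:
    # 1. Detect long horizontal and vertical strokes (form guide lines / box edges).
    h_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (40, 1))
    v_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 40))
    h_lines = cv2.morphologyEx(binary, cv2.MORPH_OPEN, h_kernel)
    v_lines = cv2.morphologyEx(binary, cv2.MORPH_OPEN, v_kernel)
    # 2. Remove the detected lines from the image.
    no_lines = cv2.subtract(binary, cv2.bitwise_or(h_lines, v_lines))
    # 3. "Patch" characters broken by the removal: close the small vertical
    #    gaps left where a character stroke crossed a removed line.
    patch_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 5))
    return cv2.morphologyEx(no_lines, cv2.MORPH_CLOSE, patch_kernel)

# Toy example: a short character stroke crossed by a horizontal guide line.
form = np.zeros((100, 200), dtype=np.uint8)
form[50, :] = 255            # horizontal guide line
form[40:65, 100] = 255       # character stroke crossing the line
print(int(remove_lines_and_patch(form)[50, 100]))   # 255: the broken stroke was rejoined
```
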
More sources