Journal articles on the topic 'Printed image'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Printed image.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Yao, Yong, Wei Hua Wang, Dong Fang Zhang, and Hong Yan Guo. "Printed Character Database Analysis Based Printed Document Examination." Applied Mechanics and Materials 411-414 (September 2013): 1260–66. http://dx.doi.org/10.4028/www.scientific.net/amm.411-414.1260.

Abstract:
This paper presents a study of printed-character-database image analysis for printed document examination, with the aim of identifying the printer that created a suspect printed document. The system is composed of printed document image acquisition, image pre-processing, feature extraction and classification. After characters are extracted and recognized in pre-processing, the stroke feature sequence of each text block is calculated, and the Hu moments of the sequence are also calculated. Finally, a Euclidean distance classifier and an MQDF classifier are used to recognize the fonts using these two kinds of font features, respectively. Experiments are carried out on a database covering 40 LaserJet printers. The experimental results demonstrate the effectiveness of the proposed method.
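
As a rough illustration of the Hu-moment and Euclidean-distance steps mentioned in this abstract (a minimal sketch assuming OpenCV, not the authors' implementation), the feature vector of a binarized character crop and a nearest-printer lookup could look like this:

```python
# Sketch: Hu-moment features for a character image + nearest match by Euclidean distance.
import cv2
import numpy as np

def hu_features(char_gray):
    """char_gray: 8-bit grayscale crop of a single printed character."""
    _, binary = cv2.threshold(char_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    hu = cv2.HuMoments(cv2.moments(binary)).flatten()
    # Log-scale the invariants, which span many orders of magnitude.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

def nearest_printer(query_vec, reference_db):
    """reference_db: dict mapping printer id -> mean reference feature vector."""
    return min(reference_db, key=lambda pid: np.linalg.norm(query_vec - reference_db[pid]))
```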
2

Liang, Zhigang, Xiangying Du, Xiaojuan Guo, Dongdong Rong, Ruiying Kang, Guangyun Mao, Jiabin Liu, and Kuncheng Li. "Comparison of dry laser printer versus paper printer in full-field digital mammography." Acta Radiologica 51, no. 3 (April 2010): 235–39. http://dx.doi.org/10.3109/02841850903485755.

Abstract:
Background: Paper printers have been used to document radiological findings in some hospitals. It is critical to establish whether paper printers can achieve the same efficacy and quality as dry laser printers for full-field digital mammography (FFDM). Purpose: To compare the image quality and detection rate of dry laser printers and paper printers for FFDM. Material and Methods: Fifty-five cases (25 with single clustered microcalcifications and 30 controls) were selected by a radiologist not participating in the image review. All images were printed on film and paper by one experienced mammography technologist using the processing algorithm routinely used for our mammograms. Two radiologists evaluated hard copies from dry laser printers and paper printers for image quality and detectability of clustered microcalcifications. For the image quality comparisons, agreement between the reviewers was evaluated by means of kappa statistics. The significance of differences between both of the printers was determined using Wilcoxon's signed-rank test. The detection rate of two printing systems was evaluated using receiver operating characteristic (ROC) analysis. Results: From 110 scores (55 patients, two readers) per printer system, the following quality results were achieved for dry laser printer images: 70 (63.6%) were rated as good and 40 (36.4%) as moderate. By contrast, for the paper printer images, 25 scores (22.7%) were rated as good and 85 (77.3%) as moderate. Therefore, the image quality of the dry laser printer was superior to that achieved by the paper printer ( P=0.00). The average area-under-the-curve (Az) values for the dry laser printer and the paper printer were 0.991 and 0.805, respectively. The difference was 0.186. Results of ROC analysis showed significant difference in observer performance between the dry laser printer and paper printer ( P=0.0015). Conclusion: The performance of dry laser printers is superior to that of paper printers. Paper printers should not be used in FFDM.
3

Zhang, Runzhe, Yi Yang, Eric Maggard, Yousun Bang, Minki Cho, and Jan Allebach. "A comprehensive system for analyzing the presence of print quality defects." Electronic Imaging 2020, no. 9 (January 26, 2020): 314–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.9.iqsp-314.

Abstract:
Print quality (PQ) is most important in the printing industry. It plays a role in users’ satisfaction with their products. Page quality will be degraded when there are print quality defects on the printed page, which could be caused by the electrophotographic printer (EP) process and associated print mechanism. To identify the print quality issue, customers have to consult a printer user manual or contact customer service to describe the problems. In this paper, we propose a comprehensive system to analyze the printed page automatically and extract the important defect features to determine the type and severity of defects on the scanned page. This system incorporates many of our previous works. The input of this system is the master digital image and the scanned image of the printed page. The comprehensive system includes three modules: the region of interest (ROI) extraction module, the scanned image pre-processing module (image alignment and color calibration procedure), and the print defect analysis module (text fading detection, color fading detection, streak detection, and banding detection). This system analyzes the scanned images based on different ROIs, and each ROI will produce a printer defect feature vector. The final output is the whole feature vector including all the ROI feature vectors of the printed page, and this feature vector will be uploaded to customer service to analyze the printer defect.
4

Kuo, Chung-Feng Jeffrey, Cheng-Lin Lee, and Chung-Yang Shih. "Image database of printed fabric with repeating dot patterns part (I) – image archiving." Textile Research Journal 87, no. 17 (August 12, 2016): 2089–105. http://dx.doi.org/10.1177/0040517516663160.

Abstract:
An image database of printed fabrics with repeating dot patterns was created to alleviate issues associated with management of and searches for numerous dot printed fabrics in the printing industry. The function of the database is to archive and allow retrieval of images. First, we discuss image archiving of repeating pattern-based dot printed fabrics. The color image was scanned by resolution of 200 dpi. The wavelet transformation was used to preprocess the image to obtain a scanned image 1/16 of the size of the original to be the stored image. To acquire images with repeating pattern color and repeating pattern template, the binary image of each pattern was obtained using the Sobel edge detection method and a morphological operation. Then pattern elements identical to the target pattern element were screened out. Afterwards, the centroid positions of these identical pattern elements were used to subdivide the repeating pattern color image and repeating pattern template image using a vertical vector method. Finally, the RGB 512-color histogram was used as the color feature of the dot printed fabrics, and the geometric and moment invariant feature values of the repeating pattern template image were used as the pattern feature of the dot printed fabrics. Our experimental results show that images can be acquired that are suitable for use in a dot printed fabric image database. The color and template images of the repeating patterns, which represent the image content of the printed fabrics, were obtained to create an image database of repeating pattern-based dot printed fabrics. This image database contains data on 300 printed fabrics which can be used for subsequent research on database image retrieval.
5

Liu, Hao Xue, Gui Hua Cui, Min Huang, Bing Wu, and Yu Liu. "Color-Difference Threshold for Printed Images." Applied Mechanics and Materials 469 (November 2013): 236–39. http://dx.doi.org/10.4028/www.scientific.net/amm.469.236.

Abstract:
Five ISO 400 images were used as test images, and a method-of-limits psychophysical experiment was designed to test the color-difference threshold in printed images. The color appearance of each original image was modified by an exponential function for CIELAB lightness and chroma and an offset function for CIELAB hue, in 20 steps for each attribute. The modified images and their originals were paired to form the test image pairs. The mean color differences of the image pairs, ranging from 0 to 4 CIELAB units, were calculated with the CIELAB color-difference formula and nearly uniformly divided into 21 grades for each attribute. The test image pairs were assessed in a CPC-8n lighting booth, and 12 observers with normal color vision took part in the experiment. The results showed that the mean color-difference thresholds for the lightness, chroma and hue attributes were 1.49, 1.53 and 0.78 CIELAB units, respectively; the threshold for hue was clearly smaller than those for lightness and chroma, and the thresholds of different images depended on the image content or color distribution.
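
For reference, the color differences quoted above are CIELAB 1976 ΔE*ab values, computed from the lightness, a* and b* differences of each image pair (presumably averaged over corresponding pixels of the pair) as

$$\Delta E^{*}_{ab} = \sqrt{(\Delta L^{*})^{2} + (\Delta a^{*})^{2} + (\Delta b^{*})^{2}}$$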
6

Amornraksa, Thumrongrat, and Kharittha Thongkor. "Effects of Spatial Domain Image Watermarking on Types of Printers and Printing Papers." Applied Mechanics and Materials 781 (August 2015): 564–67. http://dx.doi.org/10.4028/www.scientific.net/amm.781.564.

Abstract:
This paper presents the performance investigations of the spatial domain image watermarking for camera-captured images on different types of printers and printable materials. We examine the effects of our previous watermarking method based on the modification of image pixels on three types of printers, i.e. inkjet, laser and photo printers, and four different types of printing papers, i.e. uncoated, matte, glossy and semi-glossy papers. In the experiments, the DSLR camera is used as tool to capture the printed watermarked images, while the image registration technique based on projective transformation is used to diminish the RST and perspective distortions in the captured image. The performances in terms of extracted watermark accuracy at equivalent watermarked image quality on different types of printers and printing papers are measured and compared.
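
A minimal sketch of the registration step described here, assuming OpenCV and hypothetical corner coordinates (the paper's detection of reference points is not reproduced): a projective transform maps the four detected page corners of the camera capture back to an upright rectangle before watermark extraction.

```python
# Sketch: undo perspective distortion of a camera-captured print with a homography.
import cv2
import numpy as np

captured = cv2.imread("captured_watermarked_print.jpg")              # hypothetical file
src = np.float32([[102, 87], [1490, 95], [1510, 1120], [95, 1105]])  # detected page corners (example values)
dst = np.float32([[0, 0], [1400, 0], [1400, 1050], [0, 1050]])       # upright target rectangle

H = cv2.getPerspectiveTransform(src, dst)                     # 3x3 projective transform
registered = cv2.warpPerspective(captured, H, (1400, 1050))   # image used for watermark extraction
```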
7

Ferreira, Anselmo, Ehsan Nowroozi, and Mauro Barni. "VIPPrint: Validating Synthetic Image Detection and Source Linking Methods on a Large Scale Dataset of Printed Documents." Journal of Imaging 7, no. 3 (March 8, 2021): 50. http://dx.doi.org/10.3390/jimaging7030050.

Abstract:
The possibility of carrying out a meaningful forensic analysis on printed and scanned images plays a major role in many applications. First of all, printed documents are often associated with criminal activities, such as terrorist plans, child pornography, and even fake packages. Additionally, printing and scanning can be used to hide the traces of image manipulation or the synthetic nature of images, since the artifacts commonly found in manipulated and synthetic images are gone after the images are printed and scanned. A problem hindering research in this area is the lack of large scale reference datasets to be used for algorithm development and benchmarking. Motivated by this issue, we present a new dataset composed of a large number of synthetic and natural printed face images. To highlight the difficulties associated with the analysis of the images of the dataset, we carried out an extensive set of experiments comparing several printer attribution methods. We also verified that state-of-the-art methods to distinguish natural and synthetic face images fail when applied to print and scanned images. We envision that the availability of the new dataset and the preliminary experiments we carried out will motivate and facilitate further research in this area.
8

McConnell, James, and Mari Marutani. "413 PB 060 A PRINT-ON-DEMAND SYSTEM FOR PRODUCING EDUCATIONAL AND EXTENSION MATERIALS." HortScience 29, no. 5 (May 1994): 490b—490. http://dx.doi.org/10.21273/hortsci.29.5.490b.

Abstract:
A Print-On-Demand (POD) System was developed for the rapid production of educational and extension materials such as fact-sheets. Information is stored in a final format on the computer and the number of copies of a specific publication can be printed as needed. The system greatly reduces the time to having the finished product and allows any number of publications to be printed. The printing cost ranges from $.43 to $.80 per page with a 300dpi color thermal wax printer. Photo CDs and video capture images are the most common sources of color images used in the POD system. Photo CDs produce higher quality images but require time to process a film before images are used in the system. In live video capture, an image can be captured by a video camera, and sent to a computer for immediate production of a fact-sheet. Tape playback reduces the image quality compared to live video. Live video also gives the best feedback in determining whether the image shows the desired information. In general, the image is video captured at twice the needed size and reduced while increasing the resolution from 72 dpi to 130 dpi. This produces a better quality image. Other sources of pictures are flatbed scanners and slide scanners.
9

McConnell, James, and Maria I. D. Pangelinan. "Producing Print-on-demand Publications for Instructional and Extension Materials." HortTechnology 8, no. 2 (April 1998): 210–20. http://dx.doi.org/10.21273/horttech.8.2.210.

Abstract:
Print-on-demand (POD) publications are being produced from computer to printer to increase the diversity of printed extension and educational materials. The layouts are stored in libraries on the computer and text files and digital images are added to the layouts. Images can be edited before insertion into the layouts to enhance the image. The completed materials are stored in portable document format (PDF) on disk and are printed as needed or distributed over computer networks. Printing materials as needed greatly increases the diversity of materials and gives greater flexibility in revising publications than bulk printing.
10

Hu, Sige, Daulet Kenzhebalin, Bakedu Choi, George Chiu, Zillion Lin, Davi He, and Jan Allebach. "Developing an inkjet printer III: Multibit CMY halftones to hardware-ready bits." Electronic Imaging 2020, no. 15 (January 26, 2020): 352–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.15.color-351.

Abstract:
Nowadays, inkjet printers are widely used all around the world. But how do they transfer the digital image to a map that can control nozzle firing? In this paper, we briefly illustrate the part of the printing pipeline that starts from a halftone image and ends with Hardware Ready Bits (HRBs). We also describe the implementation of the multi-pass printing method with a designed print mask. The HRB stage reads an input halftone CMY image and outputs a binary map for each color that decides whether or not to eject the corresponding color drop at each pixel position. In general, for an inkjet printer, each row of the image corresponds to one specific nozzle in each swath, so that each swath will be the height of the printhead [1]. To avoid visible white streaks due to clogged or burned-out color nozzles, a method called multi-pass printing is implemented. Subsequently, a print mask is introduced so that we can decide during which pass each pixel should be printed.
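
As a minimal sketch of the multi-pass idea described here (illustrative assumptions only, not the paper's actual mask design), a checkerboard print mask can split one colorant plane of a halftone into two complementary passes:

```python
# Sketch: split one binary colorant plane into two passes with a checkerboard
# print mask, so a nozzle that misfires in one pass is covered by the other.
import numpy as np

def two_pass_split(halftone_plane):
    """halftone_plane: 2-D array of 0/1 drop decisions for one colorant."""
    rows, cols = np.indices(halftone_plane.shape)
    mask = (rows + cols) % 2                 # 0 -> printed in pass 1, 1 -> pass 2
    pass1 = halftone_plane * (mask == 0)
    pass2 = halftone_plane * (mask == 1)
    return pass1, pass2                      # the two passes sum back to the original plane
```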
11

McMillan, Alexandra, Armine Kocharyan, Simone E. Dekker, Elias George Kikano, Anisha Garg, Victoria W. Huang, Nicholas Moon, Malcolm Cooke, and Sarah E. Mowry. "Comparison of Materials Used for 3D-Printing Temporal Bone Models to Simulate Surgical Dissection." Annals of Otology, Rhinology & Laryngology 129, no. 12 (May 4, 2020): 1168–73. http://dx.doi.org/10.1177/0003489420918273.

Abstract:
Objective: To identify 3D-printed temporal bone (TB) models that most accurately recreate cortical mastoidectomy for use as a training tool by comparison of different materials and fabrication methods. Background: There are several different printers and materials available to create 3D-printed TB models for surgical planning and trainee education. Current reports using Acrylonitrile Butadiene Styrene (ABS) plastic generated via fused deposition modeling (FDM) have validated the capacity for 3D-printed models to serve as accurate surgical simulators. Here, a head-to-head comparison of models produced using different materials and fabrication processes was performed to identify superior models for application in skull base surgical training. Methods: High-resolution CT scans of normal TBs were used to create stereolithography files with image conversion for application in 3D-printing. The 3D-printed models were constructed using five different materials and four printers, including ABS printed on a MakerBot 2x printer, photopolymerizable polymer (Photo) using the Objet 350 Connex3 Printer, polycarbonate (PC) using the FDM-Fortus 400 mc printer, and two types of photocrosslinkable acrylic resin, white and blue (FLW and FLB, respectively), using the Formlabs Form 2 stereolithography printer. Printed TBs were drilled to assess the haptic experience and recreation of TB anatomy with comparison to the current paradigm of ABS. Results: Surgical drilling demonstrated that FLW models created by FDM as well as PC and Photo models generated using photopolymerization more closely recreated cortical mastoidectomy compared to ABS models. ABS generated odor and did not represent the anatomy accurately. Blue resin performed poorly in simulation, likely due to its dark color and translucent appearance. Conclusions: PC, Photo, and FLW models best replicated surgical drilling and anatomy as compared to ABS and FLB models. These prototypes are reliable simulators for surgical training.
12

Field, G. "Image Structure Aspects of Printed Image Quality." Journal of Photographic Science 38, no. 4-5 (July 1989): 197–200. http://dx.doi.org/10.1080/00223638.1989.11737104.

13

Zhou, Yi Hua, Jun Qian, and Xue Mei Yu. "Study on Precision and Uniformity Evaluation Method of PEDOT/PSS Ink-Jet Print Film." Advanced Materials Research 634-638 (January 2013): 3840–43. http://dx.doi.org/10.4028/www.scientific.net/amr.634-638.3840.

Abstract:
PEDOT/PSS performs excellently in electronic components, and ink-jet printing is widely used in printed electronics. However, the film quality of PEDOT/PSS produced by ink-jet printers can be limited by many parameters and is very difficult to evaluate accurately. In this paper, we propose uniformity and precision as the key indicators and incorporate image processing methods into the evaluation. First, we analyse the characteristics and limitations of the ink-jet printer, and then use image processing methods to evaluate the experimental data. The results demonstrate that the new method is feasible and yields more detail, and that it can be applied to future dedicated ink-jet printer studies and quality inspection.
14

del Barrio Cortés, Elisa, Clara Matutano Molina, Luis Rodríguez-Lorenzo, and Nieves Cubo-Mateo. "Generation of Controlled Micrometric Fibers inside Printed Scaffolds Using Standard FDM 3D Printers." Polymers 15, no. 1 (December 26, 2022): 96. http://dx.doi.org/10.3390/polym15010096.

Abstract:
New additive manufacturing techniques, such as melting electro-writing (MEW) or near-field electrospinning (NFES), are now used to include microfibers inside 3D printed scaffolds as FDM printers present a limited resolution in the XY axis, not making it easy to go under 100 µm without dealing with nozzle troubles. This work studies the possibility of creating reproducible microscopic internal fibers inside scaffolds printed by standard 3D printing. For this purpose, novel algorithms generating deposition routines (G-code) based on primitive geometrical figures were created by python scripts, modifying basic deposition conditions such as temperature, speed, or material flow. To evaluate the influence of these printing conditions on the creation of internal patterns at the microscopic level, an optical analysis of the printed scaffolds was carried out using a digital microscope and subsequent image analysis with ImageJ software. To conclude, the formation of heterogeneously shaped microfilaments (48 ± 12 µm, mean ± S.D.) was achieved in a standard FDM 3D Printer with the strategies developed in this work, and it was found that the optimum conditions for obtaining such microfibers were high speeds and a reduced extrusion multiplier.
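
To illustrate the idea of generating deposition routines (G-code) from a Python script, as the abstract describes, here is a hedged sketch; the path, feed rate, flow and extrusion constants are placeholders, not the authors' algorithms or settings:

```python
# Sketch: emit G-code for one straight deposition move with a reduced extrusion
# multiplier ("flow") and a chosen speed; values are illustrative only.
import math

def line_fill_gcode(x0, y0, x1, y1, feed_mm_min=1200, flow=0.6, e_per_mm=0.033):
    length = math.hypot(x1 - x0, y1 - y0)
    extrusion = length * e_per_mm * flow     # lower flow encourages thinner deposited fibers
    return [
        f"G0 X{x0:.2f} Y{y0:.2f}",                                  # travel to the start point
        f"G1 X{x1:.2f} Y{y1:.2f} E{extrusion:.4f} F{feed_mm_min}",  # printing move
    ]

print("\n".join(line_fill_gcode(10.0, 10.0, 60.0, 10.0)))
```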
15

Kim, Seongsu, Min Ho Lee, and Tae-Kyu Lee. "Individualized three-dimensional printed model for skull base tumor surgery." Journal of Korean Skull base society 17, no. 2 (September 30, 2022): 61–67. http://dx.doi.org/10.55911/jksbs.22.0003.

Abstract:
Background: An individualized three-dimensional (3D) printed skull base tumor model was produced for each patient to discuss the surgical plan with the surgeon and assistant before surgery. This article demonstrates 3D printer use in skull base tumor surgery. Materials and Methods: Between January 2019 and August 2021, an individualized 3D-printed model was produced for preoperative planning of skull base tumor cases. The radiologic image files were obtained from each patient. The skull and other structures were extracted from the images and segmented using a commercial program. The generated objects were exported to a 3D image file and implemented as a 3D model, and each segmented object was printed with a 3D printer. Results: Seven cases were enrolled in the study: four male patients and three female patients. The median age was 61 years (range, 36-67 years). Four cases were skull base meningiomas, one was a schwannoma, one was a giant pituitary adenoma, and one was a cerebellar metastatic tumor. Two patients underwent gross total resection, four underwent near-total resection, and one was resected subtotally. There were no significant postoperative morbidities. Conclusions: Individualized 3D-printed models are beneficial in pre-planning resection of skull base tumors. Although they are time-consuming and limited by budget, it is expected that more detailed models can be provided at lower cost as the technology advances in the near future. This approach provides a conceptual understanding of the complex anatomy involved.
16

TOSAKA, T., K. TAIRA, Y. YAMANAKA, K. FUKUNAGA, A. NISHIKATA, and M. HATTORI. "Reconstruction of Printed Image Using Electromagnetic Disturbance from Laser Printer." IEICE Transactions on Communications E90-B, no. 3 (March 1, 2007): 711–15. http://dx.doi.org/10.1093/ietcom/e90-b.3.711.

17

Chandrakala, M. "Image Analysis of Sauvola and Niblack Thresholding Techniques." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (June 14, 2021): 2353–57. http://dx.doi.org/10.22214/ijraset.2021.34569.

Abstract:
Image segmentation is a critical problem in computer vision and other image processing applications, and it has become quite challenging over the years due to its widespread use. Image thresholding is a popular segmentation technique, and the quality of the segmented image is determined by how the threshold value is chosen. A locally adaptive thresholding method based on neighborhood processing is presented in this paper. The performance of local thresholding methods such as Niblack and Sauvola is demonstrated on real-world images, printed text, and handwritten text images. The threshold-based segmentation methods are evaluated using misclassification error, MSE and PSNR. Experiments show that the Sauvola method gives the better results on real-world images and on printed and handwritten text images in terms of misclassification error, PSNR, and MSE.
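
For context, both local rules compared above derive a per-pixel threshold from the mean m and standard deviation s of a window around the pixel: Niblack uses T = m + k·s, while Sauvola uses T = m·(1 + k·(s/R − 1)), with R the dynamic range of the standard deviation. A minimal sketch assuming scikit-image's built-in implementations and a hypothetical input file:

```python
# Sketch: Niblack (T = m + k*s) and Sauvola (T = m*(1 + k*(s/R - 1))) local thresholds.
from skimage import filters, io

image = io.imread("document.png", as_gray=True)   # hypothetical grayscale document image

t_niblack = filters.threshold_niblack(image, window_size=25, k=0.8)
t_sauvola = filters.threshold_sauvola(image, window_size=25, k=0.2)

binary_niblack = image > t_niblack
binary_sauvola = image > t_sauvola
```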
18

Eraso, Francisco Eduardo, William C. Scarfe, Yoshihiko Hayakawa, Jane Goldsmith, and Allan G. Farman. "Teledentistry: protocols for the transmission of digitized radiographs of the temporomandibular joint." Journal of Telemedicine and Telecare 2, no. 4 (December 1, 1996): 217–23. http://dx.doi.org/10.1258/1357633961930103.

Abstract:
Tomograms of the temporomandibular joint were digitized in three different formats using a PC-based system. The image resolution for various projections was determined at different camera-film distances. Three series of images were transmitted by telephone, and transmission times were measured. The original radiographs, the digitized images, the transmitted images and the transmitted-and-printed images were presented to 10 observers, who were asked to rate image quality. No difference in image quality was found between the initial digitized and the transmitted images. However, transmitted and transmitted-and-printed images were of significantly lower quality than the original radiographs or the digitized images viewed on a computer monitor. Transmission time was reduced significantly (by about 50%) by cropping the images before transmission. The image quality of individual radiographs was better than that of radiographs formatted as a series.
19

Sheth, Gaurav, Katherine Carpenter, and Susan Farnand. "Image Quality Assessment of Displayed and Printed Smartphone Images." Color and Imaging Conference 25, no. 1 (September 11, 2017): 13–19. http://dx.doi.org/10.2352/issn.2169-2629.2017.25.13.

20

Mendez, Patina K., Sangyeon Lee, and Chris E. Venter. "Imaging natural history museum collections from the bottom up: 3D print technology facilitates imaging of fluid-stored arthropods with flatbed scanners." ZooKeys 795 (November 5, 2018): 49–65. http://dx.doi.org/10.3897/zookeys.795.28416.

Abstract:
Availability of 3D-printed laboratory equipment holds promise to improve arthropod digitization efforts. A 3D-printed specimen scanning box was designed to image fluid-based arthropod collections using a consumer-grade flatbed scanner. The design was customized to accommodate double-width microscope slides and printed in both Polylactic Acid (PLA) and nylon (Polyamide). The workflow with two or three technicians imaged Trichoptera lots in batches of six scanning boxes. Individual images were cropped from batch images using an R script. PLA and nylon both performed similarly with no noticeable breakdown of the plastic; however, dyed nylon leached color into the ethanol. The total time for handling, imaging, and cropping was ~8 minutes per vial, including returning material to vials and replacing ethanol. Image quality at 2400 dpi was the best and revealed several diagnostic structures valuable for partial identifications, with higher utility if structures of the genitalia were captured; however, lower resolution scans may be adequate for natural history collection imaging. Image quality from this technique is similar to other natural history museum imaging techniques; yet the scanning approach may have wider applications to morphometrics because of its lack of distortion. The approach can also be applied to image vouchering for biomonitoring and other ecological studies.
21

Ibrahim, Zuwairie, Ismail Ibrahim, Kamal Khalil, Sophan Wahyudi Nawawi, Muhammad Arif Abdul Rahim, Zulfakar Aspar, and Wan Khairunizam Wan Ahmad. "Noise Elimination for Image Subtraction in Printed Circuit Board Defect Detection Algorithm." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 10, no. 2 (August 5, 2013): 1317–29. http://dx.doi.org/10.24297/ijct.v10i2.7000.

Abstract:
The image subtraction operation has frequently been used for automated visual inspection of printed circuit board (PCB) defects. Even though image subtraction is able to detect all defects occurring on a PCB, some unwanted noise can be detected as well. Hence, before image subtraction can be applied to real PCB images, an image registration operation should be performed to align the defective PCB image against a template PCB image. This study shows how the image registration operation is combined with a thresholding algorithm to eliminate unwanted noise. The results show that all defects occurring in real PCB images can be correctly detected without interference from unwanted noise.
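
A rough sketch of the subtraction-and-thresholding idea (OpenCV-based, with hypothetical file names; it assumes the test image has already been registered to the template, as the study requires):

```python
# Sketch: subtract a registered PCB image from its template, then threshold and
# apply a morphological opening to suppress unwanted noise.
import cv2

template = cv2.imread("pcb_template.png", cv2.IMREAD_GRAYSCALE)      # hypothetical files
test = cv2.imread("pcb_test_registered.png", cv2.IMREAD_GRAYSCALE)   # already aligned to the template

diff = cv2.absdiff(template, test)                                   # image subtraction
_, defects = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)         # keep strong differences only
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
defects = cv2.morphologyEx(defects, cv2.MORPH_OPEN, kernel)          # remove residual speckle noise
```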
22

Gardan, Julien. "Method for characterization and enhancement of 3D printing by binder jetting applied to the textures quality." Assembly Automation 37, no. 2 (April 3, 2017): 162–69. http://dx.doi.org/10.1108/aa-01-2016-007.

Abstract:
Purpose: This paper aims to present a technical approach to evaluating the quality of textures obtained by an inkjet during binder jetting in 3D printing on a powder bed, using contour detection to improve the quality of the printed surface according to the result of the assembly between the inkjet and a granular product. Design/methodology/approach: The manufacturing process is based on the use of computer-aided design and a 3D printer via binder jetting. Image processing measures the edge deviation of a texture on the granular surface, with the possibility of implementing a correction in an active assembly through a “design for manufacturing” (DFM) approach. An example application is presented through first tests. Findings: This approach observes a shape alteration of the printed image on a 3D printed product, and the work used the image processing method to improve the model according to the DFM approach. Originality/value: This paper introduces a solution for improving texture quality on 3D printed products realized via binder jetting. The DFM approach proposes an active assembly by compensating the print errors upstream of the product life cycle.
23

Satyanarayana, Y. V. V., U. Ravi Babu, and S. Maruthu Permal. "Printed Telugu Numeral Recognition based on Structural, Skeleton and Water Reservoir Features." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 10, no. 7 (August 22, 2013): 1815–24. http://dx.doi.org/10.24297/ijct.v10i7.7042.

Abstract:
The choice of feature extraction method is the most important factor in achieving high recognition performance in automatic numeral recognition systems. This paper proposes a new system for printed Telugu numeral recognition using the number of contours, skeleton features, water reservoir features, and the ratio of the length of the top line to the bottom line of the image. Printed Telugu numerals are scanned, converted to binary images, and the features are extracted irrespective of image size. The proposed method is applied with success to a database of 3150 multi-font printed Telugu numerals, and a recognition accuracy of 100% is achieved. The experimental results are encouraging and comparable with other methods found in the literature survey. The novelty of the proposed method is that it requires neither size normalization nor a separate classification method.
24

Hrytsenko, Olha, Dmytro Hrytsenko, Vitaliy Shvalagin, Galyna Grodziuk, and Nataliia Andriushyna. "The Influence of Parameters of Ink-Jet Printing on Photoluminescence Properties of Nanophotonic Labels Based on Ag Nanoparticles for Smart Packaging." Journal of Nanomaterials 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/3485968.

Abstract:
Ag nanoparticles are perspective for the use in ink-jet printed smart packaging labels in order to protect a customer from counterfeit or inform them about the safety of consumption of a packaged product via changeable luminescence properties. It is determined that, to obtain printed images with the highest luminescence intensity, using the most technologically permissible concentration of fluorescent component in the ink composition and applying inks to papers with the lowest absorbance are recommended. The highest contrast of a tone fluorescent image can be obtained on papers with high degree of sizing. It is found that the use of papers with low optical brightness agent (OBA) content with a wide range of luminescence intensity allows obtaining the same visual legibility of a printed nanophotonic label. The increase in the relative area of raster elements of an image leads to nonlinear increase in luminescence intensity of printed images in long-wave area of visible spectrum, affecting the luminescence color of a printed label. For wide industrial production of printed nanophotonic labels for smart packaging, the created principles of reproduction of nanophotonic images applied onto paper materials by ink-jet printing technique using printing inks containing Ag nanoparticles should be taken into account.
25

Scribner, R. W. "THE PRINTED IMAGE AS HISTORICAL EVIDENCE." German Life and Letters 48, no. 3 (July 1995): 324–37. http://dx.doi.org/10.1111/j.1468-0483.1995.tb01634.x.

26

R, Jaichandran, Somasundaram K, Bhagyashree Basfore, Menaka I.S, and Uma S. "Prototype to Help Visually Impaired Person in Reading Printed Learning Materials using Raspberry PI." International Journal of Engineering & Technology 7, no. 3.1 (August 4, 2018): 82. http://dx.doi.org/10.14419/ijet.v7i3.1.16803.

Abstract:
This paper presents a prototype to help visually impaired persons read printed learning materials using a Raspberry Pi. Tesseract, an open-source optical character recognition engine, is used to extract text from printed images, which is then converted to audio output using text-to-speech software. The prototype is tested using printed text pages with various font sizes and line spacings as test cases. Results show that the prototype performs well in converting printed text to speech; however, image quality, font size, and line spacing affect its performance.
27

Chen, Jianhang, Qian Lin, and Jan P. Allebach. "Deep Learning for Printed Mottle Defect Grading." Electronic Imaging 2020, no. 8 (January 26, 2020): 184–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.8.imawm-184.

Abstract:
In this paper, we propose a new method for printed mottle defect grading. By training the data scanned from printed images, our deep learning method based on a Convolutional Neural Network (CNN) can classify various images with different mottle defect levels. Different from traditional methods to extract the image features, our method utilizes a CNN for the first time to extract the features automatically without manual feature design. Different data augmentation methods such as rotation, flip, zoom, and shift are also applied to the original dataset. The final network is trained by transfer learning using the ResNet-34 network pretrained on the ImageNet dataset connected with fully connected layers. The experimental results show that our approach leads to a 13.16% error rate in the T dataset, which is a dataset with a single image content, and a 20.73% error rate in a combined dataset with different contents.
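
A hedged sketch of the transfer-learning setup described above (ResNet-34 pretrained on ImageNet with a new fully connected head); the class count and layer sizes are illustrative assumptions, and a recent torchvision API is assumed:

```python
# Sketch: ResNet-34 backbone with a replacement head for mottle-grade classification.
import torch.nn as nn
from torchvision import models

num_grades = 5                                   # hypothetical number of mottle defect levels
model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
model.fc = nn.Sequential(
    nn.Linear(model.fc.in_features, 256),
    nn.ReLU(),
    nn.Linear(256, num_grades),
)
```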
28

Li, Matthew, Ninad Mahajan, Jessica Menold, and Christopher McComb. "Image collection of 3D-printed prototypes and non-3D-printed prototypes." Data in Brief 27 (December 2019): 104691. http://dx.doi.org/10.1016/j.dib.2019.104691.

29

Peters, Klaus-Ruediger. "On LaserJet Printers:." Microscopy Today 4, no. 10 (December 1996): 17. http://dx.doi.org/10.1017/s1551929500063367.

Abstract:
LaserJets, like many other printers, print in lines per inch (lpi). In digital image printing, these lines represent lines of pixels. If the pixels are printed squarely, then each square area (of a width equal to the actual line width) must be printed with the desired amount of ink variation in order to generate the gray levels (see John Russ' book for a nice illustration of how this is done in a LaserJet). LaserJets and inkjets use dots as the smallest printed entity.
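
For context, a standard piece of halftoning arithmetic (not part of the original note): a printer with an addressability of dpi dots per inch rendering a clustered-dot screen of lpi lines per inch has (dpi/lpi)² dots per halftone cell, so the number of reproducible gray levels is roughly

$$N \approx \left(\frac{\mathrm{dpi}}{\mathrm{lpi}}\right)^{2} + 1, \qquad \text{e.g.}\ \left(\frac{600}{100}\right)^{2} + 1 = 37.$$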
30

Tian, Jieni, Jiangping Yuan, Hua Li, Danyang Yao, and Guangxue Chen. "Advanced Surface Color Quality Assessment in Paper-Based Full-Color 3D Printing." Materials 14, no. 4 (February 4, 2021): 736. http://dx.doi.org/10.3390/ma14040736.

Abstract:
Color 3D printing allows for 3D-printed parts to represent 3D objects more realistically, but its surface color quality evaluation lacks comprehensive objective verification considering printing materials. In this study, a unique test model was designed and printed using eco-friendly and vivid paper-based full-color 3D printing as an example. By measuring the chromaticity, roughness, glossiness, and whiteness properties of 3D-printed surfaces and by acquiring images of their main viewing surfaces, this work skillfully explores the correlation between the color representation of a paper-based 3D-printed coloring layer and its attached underneath blank layer. Quantitative analysis was performed using ΔE*ab, feature similarity index measure of color image (FSIMc), and improved color-image-difference (iCID) values. The experimental results show that a color difference on color-printed surfaces exhibits a high linear correlation trend with its FSIMc metric and iCID metric. The qualitative analysis of microscopic imaging and the quantitative analysis of the above three surface properties corroborate the prediction of the linear correlation between color difference and image-based metrics. This study can provide inspiration for the development of computational coloring materials for additive manufacturing.
31

Hrytsenko, Olha, Dmytro Hrytsenko, Vitaliy Shvalagin, Galyna Grodziuk, and Mikhail Kompanets. "The Use of Carbon Nanoparticles for Inkjet-Printed Functional Labels for Smart Packaging." Journal of Nanomaterials 2018 (July 2, 2018): 1–10. http://dx.doi.org/10.1155/2018/6485654.

Abstract:
Smart packaging functions can be provided by printing functional labels onto packaging materials using inkjet printing and inks with changeable photoluminescence properties. Carbon nanoparticles are considered a perspective fluorescent component of such inks. Ink compositions based on carbon nanoparticles are developed and adapted for inkjet printing on paper packaging materials for producing smart packaging labels. The influence of technological factors of the printing process on the photoluminescence characteristics of the printed images is investigated. The main investigated factors are the concentration of carbon nanoparticles, the relative area of raster elements of a raster field of a tone image, the absorbance and surface smoothness of paper. The resulting parameters are photoluminescence intensity and color. It is found that in case of changes in surface smoothness and absorbance of paper and concentrations of carbon nanoparticles in the ink compositions, the photoluminescence intensity of a printed image changes while its photoluminescence color remains the same. To obtain the highest contrast of tone inkjet-printed images with carbon nanoparticles on papers with any absorbance, the highest concentration of carbon nanoparticles in the ink composition should be used. However, the highest contrast and the highest own photoluminescence intensity of a tone inkjet-printed image with inks with carbon nanoparticles can be achieved only on papers with the lowest absorbance. The most noticeable difference between photoluminescence intensity of printed images on papers with any absorbance can be obtained with the lower concentration of carbon nanoparticles in the ink composition (10 mg/mL). The optimum concentrations of carbon nanoparticles in the composition are determined: for papers with low absorbance—10 mg/mL, and for papers with medium and high absorbance—25 mg/mL. Analytical dependency is created for photoluminescence intensity of images printed with inkjet printing inks with carbon nanoparticles as a function of the studied technological factors. Some design solutions for photoluminescent labels are suggested.
32

Petrov, S. M. "Using Quantitative Characteristics of Document Images for Identification of Laser Printers." Theory and Practice of Forensic Science 16, no. 2 (July 30, 2021): 69–88. http://dx.doi.org/10.30764/1819-2785-2021-2-69-88.

Abstract:
The article provides a brief overview of the current state of the theory and practice of identifying laser printers and the results of research work aimed at discovering individual features of the printing mechanism of a laser printer. The author analyses the scheme of a laser printer, describes the printing cycle, presents the main results of the analysis of a printer’s mechanism and the influence of its individual parts on the optical density of printing. The method of assessment of the optical density of the print by the digital image of the printed document is proposed, the complex of the necessary technical and program tools is described. A hypothesis on the correlation between fluctuations in the optical density of printing of solid fills and fluctuations in the area of printed elements was put forward and confirmed; a visual representation of the study results in graphical form is presented, the relationship between the shape of the obtained graphs and defects of the printer’s components and parts is substantiated. The author proposes ways to detect the inhomogeneity of printing density on text arrays based on changes in the area of printed elements and processing of the results, which allows comparing distributions for texts printed in fonts of different sizes and styles. Based on experimental material, the individuality of the form of the obtained distributions and the possibility of their use as identifying features of the printing device are substantiated.
33

Wang, Cai Yin, Chao Li, and Li Jiang Huo. "A Security Printing Method by Black Ink Hiding Infrared Image." Applied Mechanics and Materials 200 (October 2012): 730–33. http://dx.doi.org/10.4028/www.scientific.net/amm.200.730.

Abstract:
In multi-color printing, the four inks used are cyan, magenta, yellow and black, which respond differently in the near-infrared spectral region: a black composed of the CMY color inks has no response, while a black printed with the K ink has a strong response. However, the two kinds of black are both visible and equivalent under daylight. Based on this principle, this paper introduces a new method of security printing in which the K ink hides an infrared image. To hide the infrared image in the source image, the Yule-Nielsen modified spectral Neugebauer equations are employed; the Yule-Nielsen parameter is found by a least-squares regression over a training set of spectral measurements, and the source image is then color-separated with a mask that is the security image. The security image must be a grey-scale map. The security information is stored in the K channel of the color-separated image, which is then printed. In daylight the visual appearance of the printed image is the same as that of the source image, but the security image can be detected with an IR camera. Finally, a series of experiments is performed on an HP Color LaserJet CP2035 printer. Experimental results show that the proposed method is promising and feasible for preventing presswork forgery.
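
For reference, the Yule-Nielsen modified spectral Neugebauer model mentioned in this abstract predicts the reflectance spectrum of a printed color from the fractional area coverages a_i of the Neugebauer primaries, their measured spectra R_i(λ), and the Yule-Nielsen parameter n (fitted here by least-squares regression over a training set):

$$R(\lambda) = \left( \sum_{i} a_{i}\, R_{i}(\lambda)^{1/n} \right)^{\! n}$$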
34

Havenko, Svitlana, Oleh NAZAR, Viktoria KOCHUBEI, and Lesia PELYK. "THE RESEARCH OF QUALITY OF THERMOTRANSFER PRINT IMAGES ON COTTON TEXTILE MATERIAL." Herald of Khmelnytskyi National University 303, no. 6 (December 2021): 235–39. http://dx.doi.org/10.31891/2307-5732-2021-303-6-235-239.

Abstract:
The article presents thermogravimetric studies of cotton textile material before and after thermal transfer printing. Thermal transfer printing on garments and knitwear, umbrellas, bags, advertising banners, posters, etc. is popular. Transfer printing technology involves transferring the image to the textile material via an intermediate medium. First, the desired image is formed on a special paper or film using screen printing; then, with the help of temperature in special presses, it is transferred to the textile material. If the image is multicolored, the whole process is repeated separately for each color. Heat transfer technology makes it possible to apply multi-color images to finished or semi-finished products with high accuracy and to carry out personalized printing. Since thermal transfer printing involves high temperatures to obtain an image on the material, a comprehensive thermal analysis of cotton fabric was performed before and after printing. A test scale with a screen ruling from 100 to 140 lines/cm was used for the research. Densitometric indicators of the quality of the thermotransfer images formed with plastisol inks are given. It is established that as the screen ruling of raster images increases, the color indicators of the prints on the textile material decrease slightly, which must be taken into account when fulfilling orders in industrial conditions. Using electron microscopy, the interaction of the dye with cotton fibers during fixation of the printed images was studied. The significant influence of the surface structure of the cotton fibers, their internal structure, the dye composition and the printing modes on print quality is confirmed. It is established that printing on cotton fabric at 140 °C provides high-quality color thermal transfer images, as confirmed by such qualimetric indicators as optical density, image contrast and brightness. The mechanism of fixing the printed image on the fabric during thermal transfer printing can be modeled in four stages: diffusion of the dye from the environment to the surface of the fibers; sorption of the dye on the surface; diffusion of the dye inside the fiber; and sorption of the dye on the inner surface of the fiber. These stages require more detailed and in-depth study.
35

Oe, Shunichiro, Kennichi Kaida, Daisuke Nagai, Mituo Nakamura, Tomohiro Kimura, and Koichi Kameyama. "Inspection System of Soldering Joint on Printed Circuit Board by Using Neural Network." Journal of Robotics and Mechatronics 7, no. 3 (June 20, 1995): 225–29. http://dx.doi.org/10.20965/jrm.1995.p0225.

Abstract:
This paper deals with a new inspection system of soldering joint on printed circuit board by using neural network. A sensor unit of this system consists of a semiconductor laser unit, four PSDs, and a pin photo-diode. We can obtain four types of images which are called height image, PSD brightness image, vertical image and vector image, by using four sensor units. We extract the features which show the state of soldering joint from these images and develop an inspection system using the neural networks constructed for the features and the state of soldering joint.
36

Lin, Chi-Ching, Fu-Ling Chang, Yuan-Shing Perng, and Shih-Tsung Yu. "Effects of Single and Blended Coating Pigments on the Inkjet Image Quality of Dye Sublimation Transfer Printed Paper: SiO2, CaCO3, Talc, and Sericite." Advances in Materials Science and Engineering 2016 (2016): 1–10. http://dx.doi.org/10.1155/2016/4863024.

Abstract:
In this study, we investigated the effects on the image quality of CaCO3, SiO2, talc, and sericite on coated inkjet paper. The papers serve as dye sublimation transfer paper for printing on fabrics. The brightness, smoothness, and contact angle of the coated papers were evaluated. The papers were then printed with a textile color image evaluation test form, and the imprinted images were evaluated with respect to six criteria of the solid ink density, tone value increase, print contrast, ink trapping, grayness, and hue error. The overall printed image quality was correlated with the smoothness and brightness of the coated paper but showed no correlation with the contact angle. For single-pigment-coated papers, CaCO3 produced paper with the best color difference performance and could be substituted for silica. On the other hand, SiO2 was found to be suitable for blending with talc, calcium carbonate, and sericite, and its combination with these materials generally produced better image qualities than silica alone. Talc and sericite, when blended with silica as composite coating pigments, produced better printed image qualities than those as single-pigment-coated papers. The overall image quality ranking suggests that the best performance was achieved with CaCO3-, SiO2/talc-, CaCO3/SiO2-, SiO2/sericite-, and SiO2-coated papers.
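
For readers unfamiliar with the listed criteria, solid ink density and tone value increase (dot gain) are standard densitometric quantities; with reflectance R measured relative to the unprinted paper, they are commonly computed as

$$D = -\log_{10} R, \qquad a_{\mathrm{MD}} = \frac{1 - 10^{-D_{\mathrm{tint}}}}{1 - 10^{-D_{\mathrm{solid}}}}, \qquad \mathrm{TVI} = a_{\mathrm{MD}} - a_{\mathrm{nominal}},$$

where a_MD is the Murray-Davies effective dot area of a tint patch.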
37

Castro-Bleda, M. J., S. España-Boquera, J. Pastor-Pellicer, and F. Zamora-Martínez. "The NoisyOffice Database: A Corpus To Train Supervised Machine Learning Filters For Image Processing." Computer Journal 63, no. 11 (November 30, 2019): 1658–67. http://dx.doi.org/10.1093/comjnl/bxz098.

Abstract:
Abstract This paper presents the ‘NoisyOffice’ database. It consists of images of printed text documents with noise mainly caused by uncleanliness from a generic office, such as coffee stains and footprints on documents or folded and wrinkled sheets with degraded printed text. This corpus is intended to train and evaluate supervised learning methods for cleaning, binarization and enhancement of noisy images of grayscale text documents. As an example, several experiments of image enhancement and binarization are presented by using deep learning techniques. Also, double-resolution images are also provided for testing super-resolution methods. The corpus is freely available at UCI Machine Learning Repository. Finally, a challenge organized by Kaggle Inc. to denoise images, using the database, is described in order to show its suitability for benchmarking of image processing systems.
38

Eerola, Tuomas, Lasse Lensu, Heikki Kälviäinen, and Alan C. Bovik. "Study of no-reference image quality assessment algorithms on printed images." Journal of Electronic Imaging 23, no. 6 (August 13, 2014): 061106. http://dx.doi.org/10.1117/1.jei.23.6.061106.

39

Zhao, Lei, and Guang Xue Chen. "Offset Ink Quantity Estimation Model Based on Image Block Processing." Advanced Materials Research 174 (December 2010): 199–202. http://dx.doi.org/10.4028/www.scientific.net/amr.174.199.

Abstract:
In the offset printing industry, ink quantity plays an important role. At present, ink evaluation usually depends on the human naked eye. Although this method is easy to carry out, for the same original image the results usually differ from the practical ones. In this research, the processed gray prepress digital images to be input into the plate-making device are considered; using image block processing theory and parameters such as screen ruling and ink density, the ink quantity of the original image on the printed sheet is calculated. Similarly, the ink quantity of a batch of printed sheets can be calculated. The experiment shows that the estimated ink quantity is quite accurate and approaches the actual ink quantity in the printing workroom with fairly high precision.
40

Win, Htwe Pa Pa, Phyo Thu Thu Khine, and Khin Nwe Ni Tun. "Character Segmentation Scheme for OCR System." International Journal of Computer Vision and Image Processing 1, no. 4 (October 2011): 50–58. http://dx.doi.org/10.4018/ijcvip.2011100104.

Abstract:
Automatic recognizers of machine-printed optical characters or text (OCR) are highly desirable for a multitude of modern IT applications, including digital library software. However, state-of-the-art OCR systems do not handle Myanmar scripts well, as the language poses many challenges for document understanding. Therefore, the authors design an Optical Character Recognition System for Myanmar Printed Documents (OCRMPD), with several proposed techniques that can automatically recognize Myanmar printed text from document images. To obtain a more accurate system, the authors propose a method for isolating the character image that uses not only projection methods but also structural analysis of wrongly segmented characters. To reveal the effectiveness of the segmentation technique, the authors follow a new hybrid feature extraction method and choose an SVM classifier for recognition of the character image. The proposed algorithms have been tested on a variety of Myanmar printed documents, and the results indicate that the methods can increase segmentation accuracy as well as recognition rates.
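
As a generic illustration of the projection step used for character isolation (a sketch only; OCRMPD's actual rules and its structural analysis of wrongly segmented characters are not reproduced here):

```python
# Sketch: isolate character candidates in a binarized text line with a vertical
# projection profile (columns with no ink separate adjacent characters).
import numpy as np

def split_by_vertical_projection(binary_line):
    """binary_line: 2-D 0/1 array of one text line (1 = ink).
    Returns (start, end) column ranges of candidate characters."""
    has_ink = binary_line.sum(axis=0) > 0
    segments, start = [], None
    for col, ink in enumerate(has_ink):
        if ink and start is None:
            start = col
        elif not ink and start is not None:
            segments.append((start, col))
            start = None
    if start is not None:
        segments.append((start, len(has_ink)))
    return segments
```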
41

Tan, J., J. H. Lai, P. Wang, and N. Bi. "Multiscale Region Projection Method to Discriminate Between Printed and Handwritten Text on Registration Forms." International Journal of Pattern Recognition and Artificial Intelligence 29, no. 08 (November 22, 2015): 1553005. http://dx.doi.org/10.1142/s0218001415530055.

Abstract:
Techniques to identify printed and handwritten text in scanned documents differ significantly. In this paper, we address the question of how to discriminate between each type of writing on registration forms. Registration-form documents consist of various zone types, such as printed text, handwriting, tables, images, noise, etc., so segmenting the various zones is a challenge. We adopt herein an approach called “multiscale-region projection” to identify printed text and handwriting. An important aspect of our approach is the use of multiscale techniques to segment document images. A new set of projection features extracted from each zone is also proposed. The classification rules are mined and are used to discern printed text and table lines from handwritten text. The proposed system was tested on 11,118 samples in two registration-form-image databases. Several measures of efficiency are computed, and in each case the proposed approach performs better than traditional methods.
42

Qiao Naosheng, 乔闹生, 张奋 Zhang Fen, and 黎小琴 Li Xiaoqin. "Defect Image Preprocessing of Printed Circuit Board." Laser & Optoelectronics Progress 52, no. 2 (2015): 021003. http://dx.doi.org/10.3788/lop52.021003.

43

Itoh, Naoki, and Takashi Nakayama. "Image Understanding System for Printed Chemical Structures." Proceedings of Annual Conference, Japan Society of Information and Knowledge 3 (1995): 61–66. http://dx.doi.org/10.2964/jsikproc.3.0_61.

44

Horng-Hai Loh and Ming-Sing Lu. "Printed circuit board inspection using image analysis." IEEE Transactions on Industry Applications 35, no. 2 (1999): 426–32. http://dx.doi.org/10.1109/28.753638.

45

Hakro, Dil Nawaz, and Abdullah Zawawi Talib. "Printed Text Image Database for Sindhi OCR." ACM Transactions on Asian and Low-Resource Language Information Processing 15, no. 4 (June 2, 2016): 1–18. http://dx.doi.org/10.1145/2846093.

46

Natarajan, Dr Balakrishnan, and Dr A. Vanitha. "A Comprehensive Study on Intelligence System for Automatize Event Tracker System Using Learning Method." Alinteri Journal of Agriculture Sciences 36, no. 2 (July 13, 2021): 18–21. http://dx.doi.org/10.47059/alinteri/v36i2/ajas21111.

Full text
Abstract:
In image processing, a robust scheme is required for extracting the desired content from an image. Such extraction plays a critical role in providing significant information and is needed in various automation fields. A method for separating textual content from images is proposed that follows a sparse matrix representation; text components are grouped according to heuristic rules and clustered for sentence generation. This paper presents a study on image analysis that inspects visual items as objects and different text patterns. Logistic Regression, Linear Discriminant Analysis, and the naïve Bayes algorithm are used to predict the image forms. The proposed work introduces a learning algorithm, the Learning Vector Quantization Prediction Algorithm (LVQ Predict), which is used to analyze the parts of the image. Features are extracted and classified into printed and non-printed text. These texts are then normalized and documented.
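Since the abstract names an LVQ-based prediction step without detailing it, the sketch below shows only the standard nearest-prototype prediction used in generic Learning Vector Quantization; the prototype values and labels are made up for illustration.

```python
# Generic LVQ prediction: assign the label of the nearest prototype (illustrative data).
import numpy as np

def lvq_predict(x, prototypes, labels):
    """Return the label of the prototype closest to x (Euclidean distance)."""
    distances = np.linalg.norm(prototypes - x, axis=1)
    return labels[int(np.argmin(distances))]

# Example: two hypothetical prototypes for "printed" vs "non-printed" text features.
protos = np.array([[0.9, 0.1], [0.2, 0.8]])
y = np.array(["printed", "non-printed"])
print(lvq_predict(np.array([0.85, 0.2]), protos, y))   # -> "printed"
```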
APA, Harvard, Vancouver, ISO, and other styles
47

Guo, Zhongyuan, Hong Zheng, Changhui You, Xiaohang Xu, Xiongbin Wu, Zhaohui Zheng, and Jianping Ju. "Digital Forensics of Scanned QR Code Images for Printer Source Identification Using Bottleneck Residual Block." Sensors 20, no. 21 (November 5, 2020): 6305. http://dx.doi.org/10.3390/s20216305.

Full text
Abstract:
With the rapid development of information technology and the widespread use of the Internet, QR codes are used in all walks of life and have a profound impact on people’s work and lives. However, a QR code can easily be printed and forged, which can cause serious economic losses and facilitate criminal offenses. It is therefore of great significance to identify the printer source of a QR code. A method of printer source identification for scanned QR code image blocks based on a convolutional neural network (PSINet) is proposed, which introduces a bottleneck residual block (BRB). We give a detailed theoretical discussion and experimental analysis of PSINet in terms of the network input, the design of the first convolution layer based on the residual structure, and the overall architecture of the proposed convolutional neural network (CNN). Experimental results show that the proposed PSINet achieves excellent printer source identification performance: the accuracy of printer source identification of QR codes from eight printers reaches 99.82%, which is not only better than LeNet and AlexNet, widely used in the field of digital image forensics, but also exceeds state-of-the-art deep learning methods in the field of printer source identification.
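As a hedged illustration of the bottleneck residual block named in the abstract, the following PyTorch sketch implements the standard 1×1 → 3×3 → 1×1 bottleneck with a skip connection; the channel counts and the exact PSINet configuration are assumptions, not taken from the paper.

```python
# Standard bottleneck residual block sketch (channel sizes are illustrative).
import torch
import torch.nn as nn

class BottleneckResidualBlock(nn.Module):
    def __init__(self, channels, reduced):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, reduced, kernel_size=1, bias=False),
            nn.BatchNorm2d(reduced), nn.ReLU(inplace=True),
            nn.Conv2d(reduced, reduced, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(reduced), nn.ReLU(inplace=True),
            nn.Conv2d(reduced, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + x)   # residual (skip) connection

# e.g. a 64-channel feature map passed through one block:
# y = BottleneckResidualBlock(64, 16)(torch.randn(1, 64, 32, 32))
```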
APA, Harvard, Vancouver, ISO, and other styles
48

Li, Chao, Cai Yin Wang, and Shu Jie Wang. "A Black Generation Method for Black Ink Hiding Infrared Security Image." Applied Mechanics and Materials 262 (December 2012): 9–12. http://dx.doi.org/10.4028/www.scientific.net/amm.262.9.

Full text
Abstract:
In multi-color printing, the four inks used are cyan, magenta, yellow, and black, which have different responses in the near-infrared spectral region: black composed of CMY color inks has no response, whereas black printed with K ink has a strong response. The two kinds of black, however, are visually equivalent under daylight. Based on this principle and gray component replacement (GCR) theory, this paper introduces a black-generation method for hiding an infrared security image in the black ink. Firstly, a color lookup table for GCR is established and the substitution relation between K and CMY is found. Secondly, the source image is color-separated with the security image used as a mask: the source image is converted to CMY data, the black value of each pixel of the security image is detected and treated as the GCR value of the corresponding source-image pixel, the CMY values found in the GCR LUT are subtracted from the three primary CMY values, and an adjustment coefficient is employed for color correction. Thirdly, the color-separated image is printed. In daylight the printed image looks the same as the source image, but the security image can be detected by an IR camera. Finally, a series of experiments is performed on an HP Color LaserJet CP2035 printer. Experimental results show that the proposed method is promising and feasible for black-ink-hidden IR security images.
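The per-pixel GCR substitution described above can be sketched as follows; the lookup table and adjustment coefficient from the paper are simplified to a single scalar here, so this is an illustration of the idea rather than the published method.

```python
# Simplified per-pixel GCR embedding of an IR security image into the K channel.
import numpy as np

def embed_ir_security(cmy_source, k_security, adjust=1.0):
    """cmy_source: HxWx3 array in [0, 1]; k_security: HxW array in [0, 1].
    Returns an HxWx4 CMYK separation in which the security image is carried by K ink."""
    k = np.clip(k_security * adjust, 0.0, None)
    k = np.minimum(k, cmy_source.min(axis=2))        # cannot replace more gray than the CMY share
    cmy_out = np.clip(cmy_source - k[..., None], 0.0, 1.0)
    return np.dstack([cmy_out, k])                    # C, M, Y, K planes
```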
APA, Harvard, Vancouver, ISO, and other styles
49

Natarajan, Premkumar, Zhidong Lu, Richard Schwartz, Issam Bazzi, and John Makhoul. "Multilingual Machine Printed OCR." International Journal of Pattern Recognition and Artificial Intelligence 15, no. 01 (February 2001): 43–63. http://dx.doi.org/10.1142/s0218001401000745.

Full text
Abstract:
This paper presents a script-independent methodology for optical character recognition (OCR) based on the use of hidden Markov models (HMMs). The feature extraction, training, and recognition components of the system are all designed to be script-independent. The training and recognition components were taken without modification from a continuous speech recognition system; the only component specific to OCR is feature extraction. To port the system to a new language, all that is needed is text-image training data from the new language, along with ground truth giving the identity of the sequence of characters along each line of each text image, without specifying the locations of the characters on the image. The parameters of the character HMMs are estimated automatically from the training data, without the need for laborious handwritten rules. The system does not require presegmentation of the data, either at the word level or at the character level, and is thus able to handle languages with connected characters in a straightforward manner. The script independence of the system is demonstrated in three languages with different types of script: Arabic, English, and Chinese. The robustness of the system is further demonstrated by testing on fax data. An unsupervised adaptation method is then described to improve performance under degraded conditions.
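To make the speech-like treatment of text lines concrete, here is a rough sketch of turning a binarized text-line image into a left-to-right sequence of feature frames for HMM training; the frame width, step, and features chosen are illustrative assumptions rather than the paper's recipe.

```python
# Sliding-window frame extraction from a text line, analogous to speech frames (illustrative).
import numpy as np

def line_to_frames(line_image, frame_width=4, step=2):
    """Slide a narrow window across a binarized text line (rows x cols) and emit
    one feature vector per position."""
    frames = []
    for x in range(0, line_image.shape[1] - frame_width + 1, step):
        window = line_image[:, x:x + frame_width]
        ink = window.sum(axis=1).astype(float)            # vertical ink profile of the window
        frames.append([ink.mean(), ink.std(),
                       float(ink.argmax()) / len(ink)])   # normalized row of maximum ink
    return np.array(frames)                               # frames x features
```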
APA, Harvard, Vancouver, ISO, and other styles
50

Nousiainen, Katri, and Teemu Mäkelä. "Measuring geometric accuracy in magnetic resonance imaging with 3D-printed phantom and nonrigid image registration." Magnetic Resonance Materials in Physics, Biology and Medicine 33, no. 3 (October 23, 2019): 401–10. http://dx.doi.org/10.1007/s10334-019-00788-6.

Full text
Abstract:
Objective: We aimed to develop a vendor-neutral and interaction-free quality assurance protocol for measuring the geometric accuracy of head and brain magnetic resonance (MR) images. We investigated the usability of nonrigid image registration in the analysis and looked for the optimal registration parameters. Materials and methods: We constructed a 3D-printed phantom and imaged it with 12 MR scanners using clinical sequences. We registered a geometric-ground-truth computed tomography (CT) acquisition to the MR images using an open-source nonrigid-registration toolbox with varying parameters. We applied the transforms to a set of control points in the CT image and compared their locations to the corresponding visually verified reference points in the MR images. Results: With optimized registration parameters, the mean difference (and standard deviation) of control-point locations compared to the reference method was (0.17 ± 0.02) mm for the 12 studied scanners. The maximum displacements ranged from 0.50 to 1.35 mm with the vendors' distortion correction on, and from 0.89 to 2.30 mm with it off. Discussion: Nonrigid CT–MR registration can provide a robust and relatively test-object-agnostic method for estimating intra- and inter-scanner variations in geometric distortion.
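The end-point comparison described in the abstract reduces to simple displacement statistics between transformed control points and reference points; a minimal sketch, assuming both point sets are given as N×3 coordinate arrays in millimetres, is shown below.

```python
# Displacement statistics between registered control points and reference points (illustrative).
import numpy as np

def displacement_stats(transformed_points, reference_points):
    """Both inputs: N x 3 arrays of point coordinates in millimetres."""
    d = np.linalg.norm(transformed_points - reference_points, axis=1)
    return {"mean_mm": d.mean(), "std_mm": d.std(), "max_mm": d.max()}
```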
APA, Harvard, Vancouver, ISO, and other styles