Journal articles on the topic 'Computer generated images'

Consult the top 50 journal articles for your research on the topic 'Computer generated images.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Kerlow, Isaac Victor. "Computer-generated images and traditional printmaking." Visual Computer 4, no. 1 (January 1988): 8–18. http://dx.doi.org/10.1007/bf01901075.

2

Ramek, Michael. "Colour vision and computer-generated images." Journal of Physics: Conference Series 237 (June 1, 2010): 012018. http://dx.doi.org/10.1088/1742-6596/237/1/012018.

3

Bouhali, Othmane, and Ali Sheharyar. "Distributed rendering of computer-generated images on commodity compute clusters." Qatar Foundation Annual Research Forum Proceedings, no. 2012 (October 2012): CSP16. http://dx.doi.org/10.5339/qfarf.2012.csp16.

4

Pini, Ezequiel. "Computer Generated Inspiration." Temes de Disseny, no. 36 (October 1, 2020): 192–207. http://dx.doi.org/10.46467/tdd36.2020.192-207.

Abstract:
This pictorial addresses the new use of Computer Generated Imagery as a tool for contextualising and inspiring futures. Using research through design experimental methodology, these techniques allow us to create utopic spaces by embracing accidental outcomes, displaying an as yet unexplored path lacking the limitations of the real world. The resulting images prove how 3D digital imagery used in the design context can serve as a new medium for artistic self-expression, as a tool for future designs and as an instrument to raise awareness about environmental challenges. The term we have coined, Computer Generated Inspiration, embraces the freedom of experimentation and artistic expression and the goal of inspiring others through unreal collective imaginaries.
5

Lucas, Gale M., Bennett Rainville, Priya Bhan, Jenna Rosenberg, Kari Proud, and Susan M. Koger. "Memory for Computer-Generated Graphics: Boundary Extension in Photographic vs. Computer-Generated Images." Psi Chi Journal of Psychological Research 10, no. 2 (2005): 43–48. http://dx.doi.org/10.24839/1089-4136.jn10.2.43.

6

Sando, Yusuke, Masahide Itoh, and Toyohiko Yatagai. "Color computer-generated holograms from projection images." Optics Express 12, no. 11 (2004): 2487. http://dx.doi.org/10.1364/opex.12.002487.

7

Wanger, L. R., J. A. Ferwerda, and D. P. Greenberg. "Perceiving spatial relationships in computer-generated images." IEEE Computer Graphics and Applications 12, no. 3 (May 1992): 44–58. http://dx.doi.org/10.1109/38.135913.

8

Katoh, N., and M. Ito. "Gamut Mapping for Computer Generated Images (II)." Color and Imaging Conference 4, no. 1 (January 1, 1996): 126–28. http://dx.doi.org/10.2352/cic.1996.4.1.art00034.

9

Alfaqheri, Taha, Akuha Solomon Aondoakaa, Mohammad Rafiq Swash, and Abdul Hamid Sadka. "Low-delay single holoscopic 3D computer-generated image to multiview images." Journal of Real-Time Image Processing 17, no. 6 (June 19, 2020): 2015–27. http://dx.doi.org/10.1007/s11554-020-00991-y.

Abstract:
Due to the nature of holoscopic 3D (H3D) imaging technology, H3D cameras can capture more angular information than their conventional 2D counterparts. This is mainly attributed to the macrolens array which captures the 3D scene with slightly different viewing angles and generates holoscopic elemental images based on the fly's-eye imaging concept. However, this advantage comes at the cost of decreasing the spatial resolution in the reconstructed images. On the other hand, the consumer market is looking to find an efficient multiview capturing solution for the commercially available autostereoscopic displays. The autostereoscopic display provides multiple viewers with the ability to simultaneously enjoy a 3D viewing experience without the need for wearing 3D display glasses. This paper proposes a low-delay content adaptation framework for converting a single holoscopic 3D computer-generated image into multiple viewpoint images. Furthermore, it investigates the effects of varying interpolation step sizes on the converted multiview images using the nearest neighbour and bicubic sampling interpolation techniques. In addition, it evaluates the effects of changing the macrolens array size, using the proposed framework, on the perceived visual quality both objectively and subjectively. The experimental work is conducted on computer-generated H3D images with different macrolens sizes. The experimental results show that the proposed content adaptation framework can be used to capture multiple viewpoint images to be visualised on autostereoscopic displays.
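The nearest-neighbour sampling the abstract compares can be illustrated with a minimal sketch (a generic integer-step upsampler, not the authors' actual framework; the function name and list-of-lists image format are assumptions, and the bicubic variant is omitted):

```python
def nn_upsample(image, step):
    """Nearest-neighbour upsampling by an integer step size: every pixel
    is replicated `step` times horizontally and vertically."""
    out = []
    for row in image:
        wide = [v for v in row for _ in range(step)]  # repeat each pixel across
        for _ in range(step):                         # repeat each row down
            out.append(list(wide))
    return out
```

For example, `nn_upsample([[1, 2]], 2)` yields `[[1, 1, 2, 2], [1, 1, 2, 2]]`.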
10

Weeks, Arthur R. "Computer-generated noise images for the evaluation of image processing algorithms." Optical Engineering 32, no. 5 (1993): 982. http://dx.doi.org/10.1117/12.130267.

11

Wang, Yan, and Xiao Wang. "Preprocessing and Edge Detection of Natural Images and Computer Generated Images." International Journal of Signal Processing, Image Processing and Pattern Recognition 9, no. 5 (May 31, 2016): 281–90. http://dx.doi.org/10.14257/ijsip.2016.9.5.25.

12

Durbhakula, K., and S. Dhali. "Computer-generated images of streamer propagation in nitrogen." IEEE Transactions on Plasma Science 27, no. 1 (1999): 24–25. http://dx.doi.org/10.1109/27.763008.

13

Baluja, Shumeet, Dean Pomerleau, and Todd Jochem. "Towards Automated Artificial Evolution for Computer-generated Images." Connection Science 6, no. 2-3 (January 1994): 325–54. http://dx.doi.org/10.1080/09540099408915729.

14

Goncharsky, Anton, and Svyatoslav Durlevich. "Cylindrical computer-generated hologram for displaying 3D images." Optics Express 26, no. 17 (August 13, 2018): 22160. http://dx.doi.org/10.1364/oe.26.022160.

15

Barrett, Tara M., Hans R. Zuuring, and Treg Christopher. "Interpretation of forest characteristics from computer-generated images." Landscape and Urban Planning 80, no. 4 (May 2007): 396–403. http://dx.doi.org/10.1016/j.landurbplan.2006.09.006.

16

Shim, Hyunjung, and Seungkyu Lee. "Automatic color realism enhancement for computer generated images." Computers & Graphics 36, no. 8 (December 2012): 966–73. http://dx.doi.org/10.1016/j.cag.2012.09.001.

17

Dang, L., Syed Hassan, Suhyeon Im, Jaecheol Lee, Sujin Lee, and Hyeonjoon Moon. "Deep Learning Based Computer Generated Face Identification Using Convolutional Neural Network." Applied Sciences 8, no. 12 (December 13, 2018): 2610. http://dx.doi.org/10.3390/app8122610.

Abstract:
Generative adversarial networks (GANs) are an emerging class of generative models that have made impressive progress in the last few years in generating photorealistic facial images. As a result, it has become more and more difficult to differentiate between computer-generated and real face images, even with the human eye. If the generated images are used with the intent to mislead and deceive readers, they could cause severe ethical, moral, and legal issues. Moreover, it is challenging to collect a dataset for computer-generated face identification that is large enough for research purposes, because the number of realistic computer-generated images is still limited and scattered across the internet. Thus, the development of a novel decision support system for analyzing and detecting computer-generated face images generated by GANs is crucial. In this paper, we propose a customized convolutional neural network, namely CGFace, which is specifically designed for the computer-generated face detection task by customizing the number of convolutional layers, so it performs well in detecting computer-generated face images. After that, an imbalanced framework (IF-CGFace) is created by altering CGFace's layer structure to address the imbalanced-data issue, extracting features from CGFace layers and using them to train AdaBoost and eXtreme Gradient Boosting (XGB) classifiers. Next, we explain the process of generating a large computer-generated dataset based on the state-of-the-art PCGAN and BEGAN models. Then, various experiments are carried out to show that the proposed model with augmented input yields the highest accuracy, at 98%. Finally, we provide comparative results by applying the proposed CNN architecture to images generated by other GAN research.
18

Hu, Bingtao, and Jinwei Wang. "Deep Learning for Distinguishing Computer Generated Images and Natural Images: A Survey." Journal of Information Hiding and Privacy Protection 2, no. 2 (2020): 95–105. http://dx.doi.org/10.32604/jihpp.2020.010464.

19

Zhang, Rui-Song, Wei-Ze Quan, Lu-Bin Fan, Li-Ming Hu, and Dong-Ming Yan. "Distinguishing Computer-Generated Images from Natural Images Using Channel and Pixel Correlation." Journal of Computer Science and Technology 35, no. 3 (May 2020): 592–602. http://dx.doi.org/10.1007/s11390-020-0216-9.

20

Jiao, Yuzhong, Kayton Wai Keung Cheung, Mark Ping Chan Mok, and Yiu Kei Li. "Spatial Distance-based Interpolation Algorithm for Computer Generated 2D+Z Images." Electronic Imaging 2020, no. 2 (January 26, 2020): 140–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.2.sda-140.

Abstract:
Computer generated 2D plus Depth (2D+Z) images are common input data for 3D displays with the depth image-based rendering (DIBR) technique. Due to their simplicity, linear interpolation methods are usually used to convert low-resolution images into high-resolution images, for depth maps as well as 2D RGB images. However, linear methods suffer from zigzag artifacts in both the depth map and RGB images, which severely affect the 3D visual experience. In this paper, a spatial distance-based interpolation algorithm for computer generated 2D+Z images is proposed. The method interpolates RGB images with the help of depth and edge information from depth maps. The spatial distance from the interpolated pixel to the surrounding available pixels is used to obtain the weight factors of the surrounding pixels. Experimental results show that such spatial distance-based interpolation can achieve sharp edges and fewer artifacts for 2D RGB images, and can therefore improve the performance of 3D displays. Since bilinear interpolation is used in homogeneous areas, the proposed algorithm keeps computational complexity low.
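The core of the distance-based weighting described in this abstract can be sketched as inverse-distance interpolation (a simplified stand-in: the function name, the neighbour format, and plain inverse distance as the weight factor are assumptions, and the paper's edge-aware use of the depth map is omitted):

```python
def idw_interpolate(pixel_xy, neighbors):
    """Estimate an RGB value at pixel_xy from surrounding available pixels,
    weighting each neighbour by the inverse of its spatial distance so that
    closer pixels contribute more. `neighbors` is a list of
    ((x, y), (r, g, b)) pairs, none coincident with pixel_xy."""
    x, y = pixel_xy
    total_w, accum = 0.0, [0.0, 0.0, 0.0]
    for (nx, ny), rgb in neighbors:
        d = ((x - nx) ** 2 + (y - ny) ** 2) ** 0.5
        w = 1.0 / d  # weight factor from spatial distance
        total_w += w
        for c in range(3):
            accum[c] += w * rgb[c]
    return tuple(round(v / total_w) for v in accum)
```

With two equidistant neighbours the result is their average: `idw_interpolate((1, 0), [((0, 0), (0, 0, 0)), ((2, 0), (100, 100, 100))])` gives `(50, 50, 50)`.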
21

Elkins, James. "Art History and the Criticism of Computer-Generated Images." Leonardo 27, no. 4 (1994): 335. http://dx.doi.org/10.2307/1576009.

22

Han Zhe, Qi Yan, Wang Yanwei, and Yan Boxia. "Zoom Technology of Reconstructed Images of Computer Generated Holograms." Chinese Journal of Lasers 45, no. 5 (2018): 0509001. http://dx.doi.org/10.3788/cjl201845.0509001.

23

Rosen, Joseph. "Computer-generated holograms of images reconstructed on curved surfaces." Applied Optics 38, no. 29 (October 10, 1999): 6136. http://dx.doi.org/10.1364/ao.38.006136.

24

Tan, D. Q., X. J. Shen, J. Qin, and H. P. Chen. "Detecting computer generated images based on local ternary count." Pattern Recognition and Image Analysis 26, no. 4 (October 2016): 720–25. http://dx.doi.org/10.1134/s1054661816040167.

25

Upfold, J. B., M. S. R. Smith, and M. J. Edwards. "Three-dimensional reconstruction of tissue using computer-generated images." Journal of Neuroscience Methods 20, no. 2 (June 1987): 131–38. http://dx.doi.org/10.1016/0165-0270(87)90045-8.

26

Tepper, Belinda Emmily, Benjamin Francis, Lijing Wang, and Bin Lee. "Acquisition and Application of Reflectance for Computer-Generated Images." International Journal of Computer Vision and Image Processing 13, no. 1 (October 3, 2023): 1–26. http://dx.doi.org/10.4018/ijcvip.331386.

Abstract:
In the field of computer graphics, accurate representation of material properties is crucial for rendering realistic imagery. This paper focuses on the bidirectional reflectance distribution function (BRDF) and its role in determining how materials interact with light. The authors review the state of the art in reflectance measurement systems, with a focus on BRDF and bidirectional texture function (BTF) measurement. They discuss practical limitations in measuring multi-dimensional functions and provide examples of how researchers have addressed these challenges. Additionally, they analyse various approaches to converting measured data into practical analytical functions for use in commercial rendering software, including data-driven methods such as neural networks and hybridized approaches.
27

Liu, Peng, Fuyu Li, Shanshan Yuan, and Wanyi Li. "Unsupervised Image-Generation Enhanced Adaptation for Object Detection in Thermal Images." Mobile Information Systems 2021 (December 27, 2021): 1–6. http://dx.doi.org/10.1155/2021/1837894.

Abstract:
Object detection in thermal images is an important computer vision task with many applications such as unmanned vehicles, robotics, surveillance, and night vision. Deep learning-based detectors have achieved major progress but usually need a large amount of labelled training data. However, labelled data for object detection in thermal images is scarce and expensive to collect. How to take advantage of the large number of labelled visible images and adapt them to the thermal image domain remains to be solved. This paper proposes an unsupervised image-generation enhanced adaptation method for object detection in thermal images. To reduce the gap between the visible domain and the thermal domain, the proposed method generates simulated fake thermal images that are similar to the target images and preserves the annotation information of the visible source domain. The image generation includes a CycleGAN-based image-to-image translation and an intensity inversion transformation. Generated fake thermal images are used as the renewed source domain, and then the off-the-shelf domain-adaptive Faster RCNN is utilized to reduce the gap between the generated intermediate domain and the thermal target domain. Experiments demonstrate the effectiveness and superiority of the proposed method.
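Of the two generation stages described, the intensity inversion transformation is simple enough to sketch (the CycleGAN translation stage is not reproduced; 8-bit grayscale values and the function name are assumptions):

```python
def intensity_inversion(image):
    """Invert 8-bit grayscale intensities (v -> 255 - v), so bright
    visible-light regions become dark and vice versa, roughly mimicking
    the appearance of thermal imagery."""
    return [[255 - v for v in row] for row in image]
```

For example, `intensity_inversion([[0, 255], [100, 30]])` returns `[[255, 0], [155, 225]]`.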
28

Dang, L. Minh, Kyungbok Min, Sujin Lee, Dongil Han, and Hyeonjoon Moon. "Tampered and Computer-Generated Face Images Identification Based on Deep Learning." Applied Sciences 10, no. 2 (January 10, 2020): 505. http://dx.doi.org/10.3390/app10020505.

Abstract:
Image forgery is an active topic in digital image tampering that is performed by moving a region from one image into another image, combining two images to form one image, or retouching an image. Moreover, recent developments of generative adversarial networks (GANs) that are used to generate human facial images have made it more challenging for even humans to detect the tampered ones. The spread of those images on the internet can cause severe ethical, moral, and legal issues if the manipulated images are misused. As a result, much research has been conducted in the last few years to detect facial image manipulation by applying machine learning algorithms to tampered face datasets. This paper introduces a deep learning-based framework that can identify manipulated facial images and GAN-generated images. It is comprised of multiple convolutional layers, which can efficiently extract features using multi-level abstraction from tampered regions. In addition, a data-based approach, a cost-sensitive learning-based approach (class weight), and an ensemble-based approach (eXtreme Gradient Boosting) are applied to the proposed model to deal with the imbalanced data problem (IDP). The superiority of the proposed model in dealing with the IDP is verified using a tampered face dataset and a GAN-generated face dataset under various scenarios. Experimental results proved that the proposed framework outperformed existing expert systems, which have been used for identifying manipulated facial images and GAN-generated images, in terms of computational complexity, area under the curve (AUC), and robustness. As a result, the proposed framework inspires the development of research on image forgery identification and enables the potential to integrate these models into practical applications, which require tampered facial image detection.
29

Meena, Kunj Bihari, and Vipin Tyagi. "Distinguishing computer-generated images from photographic images using two-stream convolutional neural network." Applied Soft Computing 100 (March 2021): 107025. http://dx.doi.org/10.1016/j.asoc.2020.107025.

30

Kocsis, O., L. Costaridou, G. Mandellos, D. Lymberopoulos, and G. Panayiotakis. "Compression assessment based on medical image quality concepts using computer-generated test images." Computer Methods and Programs in Biomedicine 71, no. 2 (June 2003): 105–15. http://dx.doi.org/10.1016/s0169-2607(02)00090-1.

31

Duminil, Alexandra, Sio-Song Ieng, and Dominique Gruyer. "A Comprehensive Exploration of Fidelity Quantification in Computer-Generated Images." Sensors 24, no. 8 (April 11, 2024): 2463. http://dx.doi.org/10.3390/s24082463.

Abstract:
Generating realistic road scenes is crucial for advanced driving systems, particularly for training deep learning methods and validation. Numerous efforts aim to create larger and more realistic synthetic datasets using graphics engines or synthetic-to-real domain adaptation algorithms. In the realm of computer-generated images (CGIs), assessing fidelity is challenging and involves both objective and subjective aspects. Our study adopts a comprehensive conceptual framework to quantify the fidelity of RGB images, unlike existing methods that are predominantly application-specific. This is probably due to the data complexity and huge range of possible situations and conditions encountered. In this paper, a set of distinct metrics assessing the level of fidelity of virtual RGB images is proposed. For quantifying image fidelity, we analyze both local and global perspectives of texture and the high-frequency information in images. Our focus is on the statistical characteristics of realistic and synthetic road datasets, using over 28,000 images from at least 10 datasets. Through a thorough examination, we aim to reveal insights into texture patterns and high-frequency components contributing to the objective perception of data realism in road scenes. This study, exploring image fidelity in both virtual and real conditions, takes the perspective of an embedded camera rather than the human eye. The results of this work, including a pioneering set of objective scores applied to real, virtual, and improved virtual data, offer crucial insights and are an asset for the scientific community in quantifying fidelity levels.
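One plausible proxy for the high-frequency information this study analyses is the energy of a discrete Laplacian response (a sketch only: the paper's actual fidelity metrics are more elaborate, and the function name and grayscale list-of-lists format are assumptions):

```python
def highfreq_energy(image):
    """Mean squared response of a 4-neighbour discrete Laplacian over the
    image interior. Flat regions score 0.0; sharp, high-frequency texture
    scores high. Expects an image of at least 3x3 pixels."""
    h, w = len(image), len(image[0])
    total, count = 0.0, 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (4 * image[y][x] - image[y - 1][x] - image[y + 1][x]
                   - image[y][x - 1] - image[y][x + 1])
            total += lap * lap
            count += 1
    return total / count
```

A constant image scores `0.0`, while a 3x3 checkerboard of 0s and 255s scores `1040400.0`; comparing such scores between real and synthetic road images is one simple way to probe data realism objectively.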
32

Surma, Mateusz, Izabela Ducin, Przemyslaw Zagrajek, and Agnieszka Siemion. "Sub-Terahertz Computer Generated Hologram with Two Image Planes." Applied Sciences 9, no. 4 (February 15, 2019): 659. http://dx.doi.org/10.3390/app9040659.

Abstract:
An advanced optical structure such as a synthetic hologram (also called a computer-generated hologram) is designed for sub-terahertz radiation. The detailed design process is carried out using the ping-pong method, which is based on the modified iterative Gerchberg–Saxton algorithm. The novelty lies in designing and manufacturing a single hologram structure creating two different images at two distances. The hologram area is small in relation to the wavelength used (the largest hologram dimension is equivalent to around 57 wavelengths). Thus, it consists of a small amount of coded information, but despite this fact, the reconstruction is successful. Moreover, one of the reconstructed images is larger than the hologram area. Good accordance between numerical simulations and experimental evaluation was obtained.
33

Kochańska, Paula Adrianna, and Michal Makowski. "Compression of computer-generated holograms in image projection." Photonics Letters of Poland 9, no. 2 (July 1, 2017): 60. http://dx.doi.org/10.4302/plp.v9i2.719.

Abstract:
Computer-generated holography is a technique for lossless, lens-less forming of images. Methods that use local devices to compute such holograms are very power- and time-consuming. In order to make it possible to transfer the calculations to the cloud, it is necessary to elaborate efficient algorithms for lossless compression. In this paper two methods of compression are presented and supported by both simulation and experimental results. A lossy compression method omitting certain bit-planes of the holographic data is also presented, which allows an insignificant loss of information while achieving a greater compression ratio.
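The lossy bit-plane omission this abstract describes can be sketched in a few lines (a generic illustration, not the authors' exact algorithm; the function name and 8-bit samples are assumptions):

```python
def drop_bitplanes(samples, n):
    """Zero the n least-significant bit-planes of 8-bit hologram samples.
    The discarded low-order planes carry little visual information, and
    the remaining planes compress better with a lossless coder."""
    mask = 0xFF & ~((1 << n) - 1)  # e.g. n=3 -> 0b11111000
    return [v & mask for v in samples]
```

For example, `drop_bitplanes([255, 37, 8], 3)` returns `[248, 32, 8]`: each sample keeps only its five most significant bits.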
34

Hushlak, Gerald, and Jennifer Eiserman. "The Mistake: The Importance of Errors in Computer-generated Images." International Journal of the Image 1, no. 2 (2011): 93–102. http://dx.doi.org/10.18848/2154-8560/cgp/v01i02/44173.

35

Maciejewski, Ross, Tobias Isenberg, William M. Andrews, David S. Ebert, Mario Costa Sousa, and Wei Chen. "Measuring Stipple Aesthetics in Hand-Drawn and Computer-Generated Images." IEEE Computer Graphics and Applications 28, no. 2 (March 2008): 62–74. http://dx.doi.org/10.1109/mcg.2008.35.

36

Kim, Eun-Seok. "Holographic stereogram using a geometric method for computer-generated images." Optical Engineering 37, no. 9 (September 1, 1998): 2449. http://dx.doi.org/10.1117/1.601767.

37

Bergen, S. D., C. A. Ulbricht, J. L. Fridley, and M. A. Ganter. "The validity of computer-generated graphic images of forest landscape." Journal of Environmental Psychology 15, no. 2 (June 1995): 135–46. http://dx.doi.org/10.1016/0272-4944(95)90021-7.

38

Barbour, Christopher G., and Gary W. Meyer. "Visual cues and pictorial limitations for computer generated photorealistic images." Visual Computer 9, no. 3 (March 1992): 151–65. http://dx.doi.org/10.1007/bf01902554.

39

Kurihara, Takayuki, and Yasuhiro Takaki. "Speckle-free, shaded 3D images produced by computer-generated holography." Optics Express 21, no. 4 (February 11, 2013): 4044. http://dx.doi.org/10.1364/oe.21.004044.

40

Colonna, Carl M., Michael T. Zugelder, and John E. Anderson. "Computer-generated images and the deceased actor: Intellectual property rights." International Advances in Economic Research 2, no. 2 (May 1996): 196. http://dx.doi.org/10.1007/bf02295065.

41

Chryssafis, A. "Anti-Aliasing of Computer-Generated Images: A Picture Independent Approach." Computer Graphics Forum 5, no. 2 (June 1986): 125–29. http://dx.doi.org/10.1111/j.1467-8659.1986.tb00281.x.

42

de Rezende, Edmar R. S., Guilherme C. S. Ruppert, Antônio Theóphilo, Eric K. Tokuda, and Tiago Carvalho. "Exposing computer generated images by using deep convolutional neural networks." Signal Processing: Image Communication 66 (August 2018): 113–26. http://dx.doi.org/10.1016/j.image.2018.04.006.

43

Lui, Nicholas, Bryan Chia, William Berrios, Candace Ross, and Douwe Kiela. "Leveraging Diffusion Perturbations for Measuring Fairness in Computer Vision." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14220–28. http://dx.doi.org/10.1609/aaai.v38i13.29333.

Abstract:
Computer vision models have been known to encode harmful biases, leading to the potentially unfair treatment of historically marginalized groups, such as people of color. However, there remains a lack of datasets balanced along demographic traits that can be used to evaluate the downstream fairness of these models. In this work, we demonstrate that diffusion models can be leveraged to create such a dataset. We first use a diffusion model to generate a large set of images depicting various occupations. Subsequently, each image is edited using inpainting to generate multiple variants, where each variant refers to a different perceived race. Using this dataset, we benchmark several vision-language models on a multi-class occupation classification task. We find that images generated with non-Caucasian labels have a significantly higher occupation misclassification rate than images generated with Caucasian labels, and that several misclassifications are suggestive of racial biases. We measure a model’s downstream fairness by computing the standard deviation in the probability of predicting the true occupation label across the different identity groups. Using this fairness metric, we find significant disparities between the evaluated vision-and-language models. We hope that our work demonstrates the potential value of diffusion methods for fairness evaluations.
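The downstream fairness metric described in this abstract, the standard deviation across identity groups of the probability of predicting the true occupation label, can be computed directly (the function name is an assumption):

```python
import statistics

def fairness_disparity(group_probs):
    """Population standard deviation of per-group probabilities of
    predicting the true label. 0.0 means identical treatment of all
    identity groups; larger values mean larger disparities."""
    return statistics.pstdev(group_probs)
```

Identical per-group probabilities give a disparity of `0.0`, while `fairness_disparity([0.5, 0.9])` gives roughly `0.2`.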
44

Peng, Fei, Juan Liu, and Min Long. "Identification of Natural Images and Computer Generated Graphics Based on Hybrid Features." International Journal of Digital Crime and Forensics 4, no. 1 (January 2012): 1–16. http://dx.doi.org/10.4018/jdcf.2012010101.

Abstract:
Examining the identification of natural images (NI) and computer generated graphics (CG), a novel method is proposed based on hybrid features. Since the image acquisition pipelines are different, some differences exist in statistical, visual, and noise characteristics between natural images and computer generated graphics. Firstly, the mean, variance, kurtosis, skewness, and median of the histograms of grayscale image in the spatial and wavelet domain are selected as statistical features. Secondly, the fractal dimensions of grayscale image and wavelet sub-bands are extracted as visual features. Thirdly, considering the shortage of the photo response non-uniformity noise (PRNU) acquired from wavelet based de-noising filter, a pre-processing of Gaussian high pass filter is applied to the image before the extraction of PRNU, and the physical features are calculated from the enhanced PRNU. In the identification, a support vector machine (SVM) classifier is used in experiments and an average classification accuracy of 94.29% is achieved, where the classification accuracy for computer generated graphics is 97.3% and for natural images is 91.28%. Analysis and discussion show that the method is suitable for the identification of natural images and computer generated graphics and can achieve better identification accuracy than the existing methods with fewer dimensions of features.
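The first-order histogram statistics listed in this abstract are straightforward to compute (a sketch of the statistical features only; the wavelet-domain, fractal-dimension, and PRNU features are omitted, and the function name is an assumption):

```python
import statistics

def histogram_stats(pixels):
    """Mean, variance, skewness, kurtosis, and median of grayscale pixel
    values, computed from population moments."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = var ** 0.5
    skew = sum((p - mean) ** 3 for p in pixels) / (n * std ** 3)
    kurt = sum((p - mean) ** 4 for p in pixels) / (n * var ** 2)
    return {"mean": mean, "variance": var, "skewness": skew,
            "kurtosis": kurt, "median": statistics.median(pixels)}
```

In the paper's pipeline, feature vectors like these (together with the visual and physical features) would then be fed to an SVM classifier.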
45

Banda, Anish. "Image Captioning using CNN and LSTM." International Journal for Research in Applied Science and Engineering Technology 9, no. 8 (August 31, 2021): 2666–69. http://dx.doi.org/10.22214/ijraset.2021.37846.

Abstract:
In the proposed model, we examine a deep neural network-based image caption generation technique. Given an image as input, the model produces output in three forms: a sentence in three different languages describing the image, an mp3 audio file, and an image file. The model uses techniques from both computer vision and natural language processing, combining a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network to generate a caption. The CNN compares the target image with a large dataset of training images and serves as the encoder that extracts image features; an LSTM decodes these features into a description of the image, so the model generates a decent description using the trained data. The accuracy of the generated caption is evaluated with the BLEU metric, which grades the quality of the generated content, and performance is calculated with standard evaluation metrics.
Keywords: CNN, RNN, LSTM, BLEU score, encoder, decoder, captions, image description.
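The BLEU idea used for evaluation can be illustrated with clipped unigram precision (a toy version: real BLEU combines several n-gram orders with a brevity penalty, and the function name is an assumption):

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """Fraction of candidate-caption words that also appear in the
    reference caption, with each word's count clipped to its count in
    the reference."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum(min(c, ref[w]) for w, c in cand.items())
    return overlap / sum(cand.values())
```

For example, `unigram_precision("a cat on a mat", "a cat sits on the mat")` gives `0.8`: four of the five candidate words are matched after clipping.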
46

Lam, Y. K., W. C. Situ, and P. W. M. Tsang. "Fast compression of computer-generated holographic images based on a GPU-accelerated skip-dimension vector quantization method." Chinese Optics Letters 11, no. 5 (2013): 050901–050905. http://dx.doi.org/10.3788/col201311.050901.

47

Vitkine, Alexandre. "Photographic and Electronically Generated Images." Leonardo 19, no. 4 (1986): 305. http://dx.doi.org/10.2307/1578376.

48

Bouhamidi, Yacine, and Kai Wang. "Simple Methods for Improving the Forensic Classification between Computer-Graphics Images and Natural Images." Forensic Sciences 4, no. 1 (March 14, 2024): 164–83. http://dx.doi.org/10.3390/forensicsci4010010.

Abstract:
From the information forensics point of view, it is important to correctly classify between natural images (outputs of digital cameras) and computer-graphics images (outputs of advanced graphics rendering engines), so as to know the source of the images and the authenticity of the scenes described in the images. It is challenging to achieve good classification performance when the forensic classifier is tested on computer-graphics images generated by unknown rendering engines and when we have a limited number of training samples. In this paper, we propose two simple yet effective methods to improve the classification performance under such challenging situations, respectively based on data augmentation and the combination of local and global prediction results. Compared with existing methods, our methods are conceptually simple and computationally efficient, while achieving satisfying classification accuracy. Experimental results on datasets comprising computer-graphics images generated by four popular and advanced graphics rendering engines demonstrate the effectiveness of the proposed methods.
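One plausible reading of the "combination of local and global prediction results" mentioned in this abstract is averaging per-patch scores with a whole-image score (a hedged sketch, not the authors' exact fusion rule; the function name and equal weighting are assumptions):

```python
def fuse_predictions(global_score, patch_scores):
    """Average the per-patch (local) CG-vs-natural scores, then average
    that with the whole-image (global) score, yielding a fused decision
    score in [0, 1]."""
    local = sum(patch_scores) / len(patch_scores)
    return 0.5 * (global_score + local)
```

For example, `fuse_predictions(0.5, [0.25, 0.75])` returns `0.5`: the local scores average to 0.5, matching the global score.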
49

Salgado, Tomas Garcia. "Comment on "Art History and the Criticism of Computer-Generated Images"." Leonardo 29, no. 1 (1996): 82. http://dx.doi.org/10.2307/1576292.

50

Koreshev, S. N., S. O. Starovoitov, D. S. Smorodinov, and M. A. Frolova. "Quality assessment of binary object images reconstructed by computer-generated holograms." Scientific and Technical Journal of Information Technologies, Mechanics and Optics 20, no. 3 (June 1, 2020): 327–34. http://dx.doi.org/10.17586/2226-1494-2020-20-3-327-334.
