Academic literature on the topic 'Computer generated images'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Computer generated images.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Computer generated images"

1

Kerlow, Isaac Victor. "Computer-generated images and traditional printmaking." Visual Computer 4, no. 1 (January 1988): 8–18. http://dx.doi.org/10.1007/bf01901075.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ramek, Michael. "Colour vision and computer-generated images." Journal of Physics: Conference Series 237 (June 1, 2010): 012018. http://dx.doi.org/10.1088/1742-6596/237/1/012018.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bouhali, Othmane, and Ali Sheharyar. "Distributed rendering of computer-generated images on commodity compute clusters." Qatar Foundation Annual Research Forum Proceedings, no. 2012 (October 2012): CSP16. http://dx.doi.org/10.5339/qfarf.2012.csp16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Pini, Ezequiel. "Computer Generated Inspiration." Temes de Disseny, no. 36 (October 1, 2020): 192–207. http://dx.doi.org/10.46467/tdd36.2020.192-207.

Full text
Abstract:
This pictorial addresses the new use of Computer Generated Imagery as a tool for contextualising and inspiring futures. Using research through design experimental methodology, these techniques allow us to create utopic spaces by embracing accidental outcomes, displaying an as yet unexplored path lacking the limitations of the real world. The resulting images prove how 3D digital imagery used in the design context can serve as a new medium for artistic self-expression, as a tool for future designs and as an instrument to raise awareness about environmental challenges. The term we have coined, Computer Generated Inspiration, embraces the freedom of experimentation and artistic expression and the goal of inspiring others through unreal collective imaginaries.
APA, Harvard, Vancouver, ISO, and other styles
5

Lucas, Gale M., Bennett Rainville, Priya Bhan, Jenna Rosenberg, Kari Proud, and Susan M. Koger. "Memory for Computer-Generated Graphics: Boundary Extension in Photographic vs. Computer-Generated Images." Psi Chi Journal of Psychological Research 10, no. 2 (2005): 43–48. http://dx.doi.org/10.24839/1089-4136.jn10.2.43.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sando, Yusuke, Masahide Itoh, and Toyohiko Yatagai. "Color computer-generated holograms from projection images." Optics Express 12, no. 11 (2004): 2487. http://dx.doi.org/10.1364/opex.12.002487.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Wanger, L. R., J. A. Ferwerda, and D. P. Greenberg. "Perceiving spatial relationships in computer-generated images." IEEE Computer Graphics and Applications 12, no. 3 (May 1992): 44–58. http://dx.doi.org/10.1109/38.135913.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Katoh, N., and M. Ito. "Gamut Mapping for Computer Generated Images (II)." Color and Imaging Conference 4, no. 1 (January 1, 1996): 126–28. http://dx.doi.org/10.2352/cic.1996.4.1.art00034.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Alfaqheri, Taha, Akuha Solomon Aondoakaa, Mohammad Rafiq Swash, and Abdul Hamid Sadka. "Low-delay single holoscopic 3D computer-generated image to multiview images." Journal of Real-Time Image Processing 17, no. 6 (June 19, 2020): 2015–27. http://dx.doi.org/10.1007/s11554-020-00991-y.

Full text
Abstract:
Due to the nature of holoscopic 3D (H3D) imaging technology, H3D cameras can capture more angular information than their conventional 2D counterparts. This is mainly attributed to the macrolens array which captures the 3D scene with slightly different viewing angles and generates holoscopic elemental images based on the fly's eye imaging concept. However, this advantage comes at the cost of decreasing the spatial resolution in the reconstructed images. On the other hand, the consumer market is looking to find an efficient multiview capturing solution for the commercially available autostereoscopic displays. The autostereoscopic display provides multiple viewers with the ability to simultaneously enjoy a 3D viewing experience without the need for wearing 3D display glasses. This paper proposes a low-delay content adaptation framework for converting a single holoscopic 3D computer-generated image into multiple viewpoint images. Furthermore, it investigates the effects of varying interpolation step sizes on the converted multiview images using the nearest neighbour and bicubic sampling interpolation techniques. In addition, it evaluates the effects of changing the macrolens array size, using the proposed framework, on the perceived visual quality both objectively and subjectively. The experimental work is conducted on computer-generated H3D images with different macrolens sizes. The experimental results show that the proposed content adaptation framework can be used to capture multiple viewpoint images to be visualised on autostereoscopic displays.
APA, Harvard, Vancouver, ISO, and other styles
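The abstract above compares nearest-neighbour and bicubic sampling when converting a holoscopic image into multiview images. As a minimal illustration of that comparison (not the paper's actual framework), the Python sketch below upsamples a stand-in viewpoint image with both methods via scipy; the array size and step factor are assumptions.

```python
# Illustrative only: the image and the step factor are hypothetical stand-ins.
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(0)
viewpoint = rng.random((64, 64))   # stand-in for one low-resolution viewpoint image

step = 4                           # assumed upsampling (interpolation step) factor
nearest = zoom(viewpoint, step, order=0)   # nearest-neighbour interpolation
bicubic = zoom(viewpoint, step, order=3)   # cubic spline interpolation

print(nearest.shape, bicubic.shape)        # (256, 256) (256, 256)
print(np.abs(nearest - bicubic).mean())    # how much the two methods disagree
```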
10

Weeks, Arthur R. "Computer-generated noise images for the evaluation of image processing algorithms." Optical Engineering 32, no. 5 (1993): 982. http://dx.doi.org/10.1117/12.130267.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Computer generated images"

1

Evans, Steven R. "The pixel generator : a VLSI device for computer generated images." Thesis, University of Sussex, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.332912.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Strassmann, Steven Henry. "Hairy brushes in computer-generated images." Thesis, Massachusetts Institute of Technology, 1986. http://hdl.handle.net/1721.1/78948.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Quan, Weize. "Detection of computer-generated images via deep learning." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALT076.

Full text
Abstract:
With the advances of image editing and generation software tools, it has become easier to tamper with the content of images or create new images, even for novices. These generated images, such as computer graphics (CG) images and colorized images (CI), have high-quality visual realism and potentially pose serious threats in many important scenarios. For instance, judicial departments need to verify that pictures are not produced by computer graphics rendering technology, colorized images can cause recognition/monitoring systems to produce incorrect decisions, and so on. Therefore, the detection of computer-generated images has attracted widespread attention in the multimedia security research community. In this thesis, we study the identification of different computer-generated images, including CG images and CIs, namely, identifying whether an image is acquired by a camera or generated by a computer program. The main objective is to design an efficient detector, which has high classification accuracy and good generalization capability. Specifically, we consider dataset construction, network architecture, training methodology, and visualization and understanding for the considered forensic problems. The main contributions are: (1) a colorized image detection method based on negative sample insertion, (2) a generalization method for colorized image detection, (3) a method for the identification of natural images (NI) and CG images based on a convolutional neural network (CNN), and (4) a CG image identification method based on the enhancement of feature diversity and adversarial samples.
APA, Harvard, Vancouver, ISO, and other styles
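The thesis above trains CNN detectors that decide whether an image was acquired by a camera or generated by a computer program. A minimal sketch of such a binary CNN classifier follows, using Keras; the architecture, input size, and directory layout are illustrative assumptions, not the author's networks.

```python
# Minimal sketch of a camera-vs-computer-generated image classifier.
# Architecture and data layout are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical layout: data/camera/*.jpg and data/generated/*.jpg.
# Labels follow alphabetical folder order (camera=0, generated=1),
# so the sigmoid output estimates P(image is computer-generated).
train = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=(128, 128), batch_size=32, label_mode="binary")
model.fit(train, epochs=5)
```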
4

Bassanino, May Nahab. "The perception of computer generated architectural images." Thesis, University of Liverpool, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.367240.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Beale, Gareth. "Representing Roman statuary using computer generated images." Thesis, University of Southampton, 2013. https://eprints.soton.ac.uk/375493/.

Full text
Abstract:
This thesis explores the potential of computer graphics as a means of producing hypothetical visual reconstructions of a painted statue of a young woman discovered at Herculaneum in 2006 (inv. 4433/87021). The visualisations incorporate accurate representation of experimentally derived data using physically accurate rendering techniques. The statue is reconstructed according to a range of different hypotheses and is visualised within a selection of architectural contexts. The work presented here constitutes both a technical and theoretical innovation for archaeological research. The methodology describes the implementation of physically accurate computer graphical simulation as a tool for the interpretation, visualisation and hypothetical reconstruction of Roman sculpture. These developments are underpinned by a theoretical re-assessment of the value of computationally generated images and computational image making processes to archaeological practice.
APA, Harvard, Vancouver, ISO, and other styles
6

Cartwright, Paul. "Realisation of computer generated integral three dimensional images." Thesis, De Montfort University, 2000. http://hdl.handle.net/2086/13289.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Payne, Doug. "Simulating perceived 3D images replayed by computer generated holograms." Thesis, Heriot-Watt University, 2004. http://hdl.handle.net/10399/355.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Rosenzweig, Elizabeth. "A personal color proofing system for computer generated images." Thesis, Massachusetts Institute of Technology, 1985. http://hdl.handle.net/1721.1/77304.

Full text
Abstract:
Thesis (M.S.V.S.)--Massachusetts Institute of Technology, Dept. of Architecture, 1985.
Includes bibliographical references (leaves 71-73).
Just as the invention of photography challenged the world's visual literacy at the turn of the century, so has computer graphics begun to reshape visual communication. Although still in its infancy, recently developed tools for visual communication have made computer graphics and electronic imaging accessible to a much larger number of artists and other professionals. The use of these systems has emphasized a new need for an automatic color image proofing system combining text and images in variable size. Such a system would ease the distribution of and accessibility to digital images with high quality and low cost. This thesis describes the Color Proofer, a system that has been designed to meet these needs. This system includes a software package that performs basic image processing functions similar to basic darkroom manipulations, designed for color hardcopy. It also includes a system for creating halftone dot fonts to aid in the production of high quality color proofs. A typesetter capable of producing magazine quality color images and text is an integral component of the Color Proofer. The Color Proofer is designed for photographers and graphic designers who want high quality proofs of computer generated images.
APA, Harvard, Vancouver, ISO, and other styles
9

Youssef, Osama Hassan. "Acceleration techniques for photo realistic computer generated integral images." Thesis, De Montfort University, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.699807.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Oglesby, Corliss Deionn. "The New Frontier of Advertising: Computer-Generated Images as Influencers." UNF Digital Commons, 2019. https://digitalcommons.unf.edu/etd/861.

Full text
Abstract:
The use of computer-generated images as influencers of consumer opinions and behavior is an emerging advertising strategy. This research investigates the advantages and disadvantages of using computer-generated images (CGI) as influencers of human behavior from the perspective of promoting a brand. The power of electronic word of mouth (social media) and how it is incorporated in computer-generated images used as influencers is discussed as a major factor in consumer decision making. The social media account content of computer-generated images was analyzed by conducting a frame analysis to discern how computer-generated images are portrayed on social media. A frame analysis of 577 social media posts was used to develop a framework for future social media strategies for computer-generated images. CGIs can be portrayed as transparent, engaging, and as socialites. Best practices for using computer-generated images were identified by conducting an interview with a representative of a brand that has collaborated with an influential computer-generated image. Innovation, listening to the consumer voice, and creative control should be prioritized in this growing field of CGIs and CGI partnerships.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Computer generated images"

1

Magnenat-Thalmann, Nadia, and Daniel Thalmann, eds. Computer-Generated Images. Tokyo: Springer Japan, 1985. http://dx.doi.org/10.1007/978-4-431-68033-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bigand, André, Julien Dehos, Christophe Renaud, and Joseph Constantin. Image Quality Assessment of Computer-generated Images. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-73543-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Cartwright, Paul. Realisation of computer generated integral three dimensional images. Leicester: De Montfort University, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Smarandache, Florentin. Hieroglyphs & diagrams: Computer-generated outer-art : composed, found, changed, modified, alternated computer-programmed images. Gallup, NM: F. Smarandache, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Magnenat-Thalmann, Nadia. Computer-Generated Images: The State of the Art Proceedings of Graphics Interface '85. Tokyo: Springer Japan, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Napleton, Steven. The technological and aesthetic impact of computer-generated images on the Hollywood cinema. London: University of East London, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

The colourful world of computer-generated images, visual interaction and visual communication: Computer graphics in practice ; technology transfer bridging research to new products, sciences and applications on the global market. Darmstadt: Europ. Wirtschafts-Verl., 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Huber, Dieter. Klones: Computergenerierte Fotoarbeiten = computer generated photographs. Nürnberg: Verlag für Moderne Kunst, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Hall, Roy. Illumination and Color in Computer Generated Imagery. New York, NY: Springer New York, 1989. http://dx.doi.org/10.1007/978-1-4612-3526-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Computer generated images"

1

Greenberg, Joel M. "Computer Animation in Distance Teaching." In Computer-Generated Images, 260–66. Tokyo: Springer Japan, 1985. http://dx.doi.org/10.1007/978-4-431-68033-8_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Cléroux, R., Y. Lepage, and N. Ranger. "Computer Graphics for Multivariate Data." In Computer-Generated Images, 395–410. Tokyo: Springer Japan, 1985. http://dx.doi.org/10.1007/978-4-431-68033-8_35.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Heath, A. M., and R. B. Flavell. "Colour Coding Scales and Computer Graphics." In Computer-Generated Images, 307–18. Tokyo: Springer Japan, 1985. http://dx.doi.org/10.1007/978-4-431-68033-8_27.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ratib, Osman, and Alberto Righetti. "Computer Analysis of Cardiac Wall Motion Asynchrony." In Computer-Generated Images, 98–105. Tokyo: Springer Japan, 1985. http://dx.doi.org/10.1007/978-4-431-68033-8_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zeltzer, David. "Towards an Integrated View of 3-D Computer Animation." In Computer-Generated Images, 230–48. Tokyo: Springer Japan, 1985. http://dx.doi.org/10.1007/978-4-431-68033-8_20.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lichten, Larry, and Ronald Eaton. "An Innovative User Interface for Microcomputer-Based Computer-Aided Design." In Computer-Generated Images, 321–29. Tokyo: Springer Japan, 1985. http://dx.doi.org/10.1007/978-4-431-68033-8_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Mills, Michael I. "Image Synthesis." In Computer-Generated Images, 3–10. Tokyo: Springer Japan, 1985. http://dx.doi.org/10.1007/978-4-431-68033-8_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Michaud, C., A. S. Malowany, and M. D. Levine. "Multi-Robot Assembly of IC’s." In Computer-Generated Images, 106–17. Tokyo: Springer Japan, 1985. http://dx.doi.org/10.1007/978-4-431-68033-8_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Appel, R., M. Funk, C. Pellegrini, D. Hochstrasser, and A. F. Müller. "A Computerized System for Spot Detection and Analysis of Two-Dimensional Electrophoresis Images." In Computer-Generated Images, 118–24. Tokyo: Springer Japan, 1985. http://dx.doi.org/10.1007/978-4-431-68033-8_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Prusinkiewicz, Przemyslaw, and Mark Christopher. "Hologram-Like Transmission of Pictures." In Computer-Generated Images, 125–34. Tokyo: Springer Japan, 1985. http://dx.doi.org/10.1007/978-4-431-68033-8_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Computer generated images"

1

Kornfeld, Gertrude H. "Interpreting computer-generated IR images." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1986. http://dx.doi.org/10.1364/oam.1986.thc2.

Full text
Abstract:
Computer generation of infrared imagery for automatic target recognizer (ATR) evaluation and human perception studies is discussed in this presentation. Initially, a videotape with driving vehicles and observer motion is shown. A comparison of forward-looking infrared radiometer (FLIR) imagery and computer generations demonstrates the achieved realism. A computer-generated FLIR search pattern with an approaching tank demonstrates a motion that is difficult for humans to detect, but ATRs programmed for center-of-hot-spot detection and frame subtraction might detect the tank motion. Finally, a simulation of the imagery seen by a FLIR mounted in a helicopter shows a low overflight of trees before a serpentine approach to a tank column; it illustrates the difficulties of distance estimates in FLIR imagery and the degrees of freedom of sensor motion that were programmed.
APA, Harvard, Vancouver, ISO, and other styles
2

Hazza, Zubaidah Muataz, and Normaziah Abdul Aziz. "Detecting Computer Generated Images for Image Spam Filtering." In 2012 International Conference on Advanced Computer Science Applications and Technologies (ACSAT). IEEE, 2012. http://dx.doi.org/10.1109/acsat.2012.38.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Newswanger, Craig, Chris Outwater, and David Coons. "Holographic Stereograms From Computer Generated Images." In Hague International Symposium, edited by Jean P. L. Ebbeni. SPIE, 1987. http://dx.doi.org/10.1117/12.941628.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Bielecki, Dustin, Prakhar Jaiswal, and Rahul Rai. "Binary Image Recognition Utilizing Computer Generated Templates." In ASME 2017 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/detc2017-67983.

Full text
Abstract:
This paper covers a method of taking images of physical parts which are then preprocessed and compared against CAD generated templates. A pseudo milling operation was performed on discretized points along CAD generated mill paths to create binary image templates. The computer-generated images were then tested against one another as a preliminarily sorting technique. This was done to reduce the number of sorting approaches used, by selecting the most reliable and discerning ones, and discarding the others. To apply the selected sorting methods for comparing CAD generated images and the images of physical parts, a translational and scaling normalization technique was implemented. Rotational variation occurs while scanning physical parts and it was addressed using two different techniques: first by determination of best rotation based on modified-Hausdorff distance (MHD); and second by comparing against all CAD based images for all template rotations. The proposed approach for automated sorting of physical parts was demonstrated by categorizing multiple geometries.
APA, Harvard, Vancouver, ISO, and other styles
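The paper above selects the best template rotation using the modified Hausdorff distance (MHD). The sketch below implements the standard MHD of Dubuisson and Jain over two point sets; the point sets and the rotation dictionary are toy stand-ins, not the paper's data.

```python
# Modified Hausdorff distance between two 2D point sets (illustrative).
import numpy as np

def modified_hausdorff(A: np.ndarray, B: np.ndarray) -> float:
    """MHD between point sets A (m,2) and B (n,2), after Dubuisson & Jain (1994)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())

# Hypothetical usage: pick the template rotation that minimizes MHD to the part.
part = np.array([[0, 0], [1, 0], [2, 0], [2, 1]], float)  # foreground pixel coords
templates = {0: part, 90: part[:, ::-1]}   # crude stand-ins for rotated CAD templates
best = min(templates, key=lambda r: modified_hausdorff(part, templates[r]))
print(best)   # rotation (degrees) with the smallest MHD
```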
5

Wenqi, Gao, Tan Suqing, and Zhou Jin. "Computer-generated hologram for reconstruction of unusual mode image." In Diffractive Optics and Micro-Optics. Washington, D.C.: Optica Publishing Group, 1996. http://dx.doi.org/10.1364/domo.1996.jtub.26.

Full text
Abstract:
In the reconstruction of a Fourier computer-generated hologram (FCGH), a lens is usually required so that the image forms at a finite distance rather than at infinity. The reconstructed images are mutually inverted (one upright image, one inverted image) and both appear in the same plane. Can the imaging lens be omitted when reconstructing an FCGH? Can the two reconstructed images be separated spatially? Can the two images have the same orientation but different shapes in the same plane? These questions motivated this study. Theoretical analysis and experimental reconstruction show that these aims can essentially be realized.
APA, Harvard, Vancouver, ISO, and other styles
6

Lin, T., Pengwei Hao, and Sang Uk Lee. "Efficient coding of computer generated compound images." In 2005 International Conference on Image Processing. IEEE, 2005. http://dx.doi.org/10.1109/icip.2005.1529812.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hollosi, Janos, and Aron Ballagi. "Training Neural Networks with Computer Generated Images." In 2019 IEEE 15th International Scientific Conference on Informatics. IEEE, 2019. http://dx.doi.org/10.1109/informatics47936.2019.9119273.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Dirik, A. E., S. Bayram, H. T. Sencar, and N. Memon. "New Features to Identify Computer Generated Images." In 2007 IEEE International Conference on Image Processing. IEEE, 2007. http://dx.doi.org/10.1109/icip.2007.4380047.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bassanino, May Nahab, and Andre G. P. Brown. "Computer Generated Architectural Images: A Comparative Study." In eCAADe 1999: Architectural Computing: From Turing to 2000. eCAADe, 1999. http://dx.doi.org/10.52842/conf.ecaade.1999.552.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Computer generated images"

1

Johra, Hicham, Martin Veit, Mathias Østergaard Poulsen, Albert Daugbjerg Christensen, Rikke Gade, Thomas B. Moeslund, and Rasmus Lund Jensen. Training and testing labelled image and video datasets of human faces for different indoor visual comfort and glare visual discomfort situations. Department of the Built Environment, 2023. http://dx.doi.org/10.54337/aau542153983.

Full text
Abstract:
The aim of this technical report is to provide a description and access to labelled image and video datasets of human faces that have been generated for different indoor visual comfort and glare visual discomfort situations. These datasets have been used to train and test a computer-vision artificial neural network detecting glare discomfort from images of human faces.
APA, Harvard, Vancouver, ISO, and other styles
2

Mbani, Benson, Timm Schoening, and Jens Greinert. Automated and Integrated Seafloor Classification Workflow (AI-SCW). GEOMAR, May 2023. http://dx.doi.org/10.3289/sw_2_2023.

Full text
Abstract:
The Automated and Integrated Seafloor Classification Workflow (AI-SCW) is a semi-automated underwater image processing pipeline that has been customized for use in classifying the seafloor into semantic habitat categories. The current implementation has been tested against a sequence of underwater images collected by the Ocean Floor Observation System (OFOS) in the Clarion-Clipperton Zone of the Pacific Ocean. Nevertheless, the workflow could also be applied to images acquired by other platforms such as an Autonomous Underwater Vehicle (AUV) or Remotely Operated Vehicle (ROV). The modules in AI-SCW have been implemented using the Python programming language, specifically using libraries such as scikit-image for image processing, scikit-learn for machine learning and dimensionality reduction, keras for computer vision with deep learning, and matplotlib for generating visualizations. AI-SCW's modularized implementation therefore allows users to accomplish a variety of underwater computer vision tasks, which include: detecting laser points from the underwater images for use in scale determination; performing contrast enhancement and color normalization to improve the visual quality of the images; semi-automated generation of annotations to be used downstream during supervised classification; training a convolutional neural network (Inception v3) using the generated annotations to semantically classify each image into one of the pre-defined seafloor habitat categories; evaluating sampling strategies for generation of balanced training images to be used for fitting an unsupervised k-means classifier; and visualization of classification results in both feature space view and in map view geospatial co-ordinates. Thus, the workflow is useful for a quick but objective generation of image-based seafloor habitat maps to support monitoring of remote benthic ecosystems.
APA, Harvard, Vancouver, ISO, and other styles
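The AI-SCW abstract above lists contrast enhancement as one module of its scikit-image-based pipeline. A minimal sketch of such a step (CLAHE on a single frame) follows; the filename and clip limit are assumptions, not the workflow's actual configuration.

```python
# One illustrative AI-SCW-style preprocessing step: CLAHE contrast enhancement
# of an underwater image with scikit-image. Inputs and parameters are assumed.
from skimage import io, exposure, color
from skimage.util import img_as_ubyte

frame = io.imread("ofos_frame.jpg")                   # hypothetical OFOS image
gray = color.rgb2gray(frame)                          # work on luminance only
enhanced = exposure.equalize_adapthist(gray, clip_limit=0.02)  # CLAHE boost
io.imsave("ofos_frame_enhanced.png", img_as_ubyte(enhanced))
```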
3

Fader, G. B. J., R. O. Miller, and B. J. Todd. Geological interpretation of Halifax Harbour, Nova Scotia, Canada. Natural Resources Canada/CMSS/Information Management, 2023. http://dx.doi.org/10.4095/331504.

Full text
Abstract:
An important part of seabed mapping is understanding the shape of the seabed and the depth of water. Hydrographic charts are produced for this purpose by the Canadian Hydrographic Service. During the final survey stages of the Harbour a new technology called multibeam bathymetry became available for high resolution mapping. This system uses transducers (sound sources) mounted on a ship that produce many independent sound beams and can map a large swath of the seabed at one time covering 100% of the bottom. The images that are produced are computer shaded to look as if the water is drained and you are flying over the area. They are the underwater equivalent of aerial photographs of the adjacent land. Because the information is collected digitally, many different kinds of maps can be produced to show subtle aspects of sediment deposition, erosion, and seabed features. The information can also be displayed using various colour schemes to represent seabed shape and computer generated fly-throughs can be produced. The multibeam bathymetric images nicely complement the other geological data sets.
APA, Harvard, Vancouver, ISO, and other styles
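The abstract above describes computer-shaded bathymetry images that look "as if the water is drained." A generic hillshade of the kind used for such shading can be computed from a depth grid as in the sketch below; the grid, cell size, and sun angles are illustrative, not taken from the survey.

```python
# Generic hillshading of an elevation/bathymetry grid (illustrative values).
import numpy as np

def hillshade(z, azimuth_deg=315.0, altitude_deg=45.0, cellsize=1.0):
    """Return illumination (0..1) of grid z for a given sun position."""
    az, alt = np.radians(azimuth_deg), np.radians(altitude_deg)
    dz_dy, dz_dx = np.gradient(z, cellsize)           # surface slopes
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(-dz_dx, dz_dy)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)

# Hypothetical 100 x 100 bathymetry patch with a small mound on the seabed:
y, x = np.mgrid[0:100, 0:100]
depth = -50 + 10 * np.exp(-((x - 50) ** 2 + (y - 50) ** 2) / 200.0)
print(hillshade(depth).mean())
```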
4

Anderson, Gerald L., and Kalman Peleg. Precision Cropping by Remotely Sensed Prototype Plots and Calibration in the Complex Domain. United States Department of Agriculture, December 2002. http://dx.doi.org/10.32747/2002.7585193.bard.

Full text
Abstract:
This research report describes a methodology whereby multi-spectral and hyperspectral imagery from remote sensing is used for deriving predicted field maps of selected plant growth attributes which are required for precision cropping. A major task in precision cropping is to establish areas of the field that differ from the rest of the field and share a common characteristic. Yield distribution maps can be prepared by yield monitors, which are available for some harvester types. Other field attributes of interest in precision cropping, e.g. soil properties, leaf nitrate, biomass, etc., are obtained by manual sampling of the field in a grid pattern. Maps of various field attributes are then prepared from these samples by the "Inverse Distance" interpolation method or by Kriging. An improved interpolation method was developed which is based on minimizing the overall curvature of the resulting map. Such maps are the ground truth reference, used for training the algorithm that generates the predicted field maps from remote sensing imagery. Both the reference and the predicted maps are stratified into "Prototype Plots", e.g. 15x15 blocks of 2 m pixels, whereby the block size is 30x30 m. This averaging reduces the datasets to manageable size and significantly improves the typically poor repeatability of remote sensing imaging systems. In the first two years of the project we used the Normalized Difference Vegetation Index (NDVI) for generating predicted yield maps of sugar beets and corn. The NDVI was computed from image cubes of three spectral bands, generated by an optically filtered three-camera video imaging system. A two-dimensional FFT-based regression model Y=f(X) was used, wherein Y was the reference map and X=NDVI was the predictor. The FFT regression method applies the "Wavelet Based", "Pixel Block" and "Image Rotation" transforms to the reference and remote images, prior to the Fast Fourier Transform (FFT) regression method with the "Phase Lock" option. A complex-domain map Yfft is derived by least-squares minimization between the amplitude matrices of X and Y, via the 2D FFT. For one-time predictions, the phase matrix of Y is combined with the amplitude matrix of Yfft, whereby an improved predicted map Yplock is formed. Usually, the residuals of Yplock versus Y are about half of the values of Yfft versus Y. For long-term predictions, the phase matrix of a "field mask" is combined with the amplitude matrices of the reference image Y and the predicted image Yfft. The field mask is a binary image of a pre-selected region of interest in X and Y. The resultant maps Ypref and Ypred are modified versions of Y and Yfft, respectively. The residuals of Ypred versus Ypref are even lower than the residuals of Yplock versus Y. The maps Ypref and Ypred represent a close consensus of two independent imaging methods which "view" the same target. In the last two years of the project our remote sensing capability was expanded by the addition of a CASI II airborne hyperspectral imaging system and an ASD hyperspectral radiometer. Unfortunately, the cross-noise and poor repeatability problem we had in multi-spectral imaging was exacerbated in hyperspectral imaging. We have been able to overcome this problem by over-flying each field twice in rapid succession and developing the Repeatability Index (RI). The RI quantifies the repeatability of each spectral band in the hyperspectral image cube. Thereby, it is possible to select the bands of higher repeatability for inclusion in the prediction model while bands of low repeatability are excluded. Further segregation of high and low repeatability bands takes place in the prediction model algorithm, which is based on a combination of a "Genetic Algorithm" and "Partial Least Squares" (PLS-GA). In summary, a modus operandi was developed for deriving important plant growth attribute maps (yield, leaf nitrate, biomass and sugar percent in beets) from remote sensing imagery, with sufficient accuracy for precision cropping applications. This achievement is remarkable, given the inherently high cross-noise between the reference and remote imagery as well as the highly non-repeatable nature of remote sensing systems. The above methodologies may be readily adopted by commercial companies that specialize in providing remotely sensed data to farmers.
APA, Harvard, Vancouver, ISO, and other styles
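The report above builds its prediction models from the NDVI, which is a simple band ratio. The sketch below shows the standard NDVI formula on synthetic red and near-infrared bands, plus the 15x15 "prototype plot" block averaging mentioned in the abstract; all values are illustrative.

```python
# Standard NDVI band ratio on synthetic reflectance values (illustrative).
import numpy as np

red = np.array([[0.10, 0.20], [0.15, 0.30]])   # hypothetical red reflectance
nir = np.array([[0.50, 0.40], [0.60, 0.35]])   # hypothetical near-infrared reflectance

ndvi = (nir - red) / (nir + red + 1e-9)        # NDVI in [-1, 1]; epsilon avoids /0
print(ndvi)

# "Prototype plot" averaging of a full H x W NDVI grid into 15x15 pixel blocks
# (2 m pixels -> 30 m blocks), assuming H and W are multiples of 15:
# blocks = ndvi_full.reshape(H // 15, 15, W // 15, 15).mean(axis=(1, 3))
```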
5

Atherosclerosis Biomarkers by Computed Tomography Angiography (CTA). Chair Andrew Buckler, Luca Saba, and Uwe Joseph Schoepf. Radiological Society of North America (RSNA) / Quantitative Imaging Biomarkers Alliance (QIBA), March 2023. http://dx.doi.org/10.1148/qiba/20230328.

Full text
Abstract:
The clinical application of Computed Tomography Angiography (CTA) is widely available as a technique to optimize the therapeutic approach to treating vascular disease. Evaluation of atherosclerotic arterial plaque characteristics is currently based on qualitative biomarkers. However, the reproducibility of such findings has historically been limited even among experts (1). Quantitative imaging biomarkers have been shown to have additive value above traditional qualitative imaging metrics and clinical risk scores regarding patient outcomes (2). However, many definitions and cut-offs are present in the current literature; therefore, standardization of quantitative evaluation of CTA datasets is needed before it can become a valuable tool in daily clinical practice. To establish these biomarkers in clinical practice, techniques are required to standardize quantitative imaging across different manufacturers with cross-calibration. Moreover, the post-processing of atherosclerotic plaque segmentation needs to be optimized and standardized. The goal of a Quantitative Imaging Biomarker Alliance (QIBA) Profile is to provide an implementation guide to generate a biomarker with an effective level of performance, mostly by reducing variability and bias in the measurement. The performance claims represent expert consensus and will be empirically demonstrated at a subsequent stage. Users of this Profile are encouraged to refer to the following site to understand the document's context: http://qibawiki.rsna.org/index.php/QIBA_Profile_Stages. All statistical performance assessments are stated in carefully considered metrics and according to strict definitions as given in (3-8), which also include detailed, peer-reviewed rationale on the importance of adhering to such standards. The expected performance is expressed as Claims (Section 1.2). To achieve those claims, Actors (Scanners, Reconstruction Software, Image Analysis Tools, Imaging Physicians, Physicists, and Technologists) must meet the Checklist Requirements (Section 3) covering Subject Handling, Image Data Acquisition, Image Data Reconstruction, Image QA, and Image Analysis. This Profile is at the Clinically Feasible stage (qibawiki.rsna.org/index.php/QIBA_Profile_Stages), which indicates that multiple sites have performed the Profile, found it to be practical, and expect it to achieve the claimed performance. QIBA Profiles for other CT, MRI, PET, and Ultrasound biomarkers can be found at qibawiki.rsna.org.
APA, Harvard, Vancouver, ISO, and other styles
6

Fader, G. B. J., R. O. Miller, and B. J. Todd. Making a three-dimensional model of Halifax Harbour, Nova Scotia, Canada. Natural Resources Canada/CMSS/Information Management, 2023. http://dx.doi.org/10.4095/331507.

Full text
Abstract:
Halifax Harbour is one of the best-studied harbours in the world. Researchers at the Bedford Institute of Oceanography map the seabed, perform geochemical analyses of sediment core samples, measure currents and tides, and study the effects of pollution on the biota. To illustrate the complexity and intricate detail that exists on the seabed, a physical relief model of the harbour and surrounding area was constructed using the most recent technology. The model, which was milled from lightweight surfboard foam, shows underwater relief (bathymetry) as well as the land topography. Onshore, high-resolution satellite imagery was "draped" over the topographic relief using a specially designed 3-D plotter. In underwater areas, bathymetry is represented by a suite of colours ranging from light blue, to indicate shallow areas, to darker blue for deeper water. Computer generated shading was applied to emphasize detailed texture. Four "zoom" panels were also produced to focus on some of the finer details that are evident on the seabed. These details help us understand more about the harbour's geological history as well as the processes that are active today, both natural and man-made. This poster explains the many stages in the process of creating the Halifax Harbour relief model.
APA, Harvard, Vancouver, ISO, and other styles
7

Malinowski, Owen, Scott Riccardella, and Jason Van Velsor. PR-335-203810-R03 CT Fundamentals with Calibration and Reference Standards for Pipeline Anomaly Detection. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), March 2022. http://dx.doi.org/10.55274/r0012216.

Full text
Abstract:
X-ray Computed Tomography (XRCT) was initially developed and utilized in the medical industry to image the internal structure of the human body. X-ray imaging was conceived and realized at the turn of the 20th century; XRCT was subsequently conceived in the middle of the 20th century, and its development continues today. Near the end of the 20th century, industrial cone-beam XRCT branched off for applications such as dimensional metrology, including its use for identifying and dimensioning flaws. XRCT has been utilized successfully for three-dimensional imaging of flaws in small panel cut-outs from steel oil and gas transmission pipelines. However, the performance of XRCT on full-circumference pipe samples has not been assessed to determine if the technology can be used to obtain flaw dimensional information with the same accuracy that has been observed on panel cut-outs. This would enable the industry to generate full-circumference reference samples with well-characterized flaw dimensions, which would be much more practical and useful for qualification, certification, and validation of inline inspection and nondestructive examination tools, personnel, and procedures. The tasks for this project were to evaluate the state of the art in XRCT technology, establish guidelines for XRCT scanning of pipeline samples, compare XRCT performance on artificial and natural flaws, and compare performance of lab-based and in-the-ditch XRCT technologies on artificial and natural flaws by scanning multiple samples utilizing multiple XRCT vendors and subsequently destructively testing the samples. The overall objective of the project was to determine if XRCT is a viable alternative to destructive testing for collecting "truth" data from flaw reference samples.
APA, Harvard, Vancouver, ISO, and other styles
8

Blais-Stevens, A., A. Castagner, A. Grenier, and K. D. Brewer. Preliminary results from a subbottom profiling survey of Seton Lake, British Columbia. Natural Resources Canada/CMSS/Information Management, 2023. http://dx.doi.org/10.4095/332277.

Full text
Abstract:
Seton Lake is a freshwater fiord located in southwestern British Columbia, roughly 4 km west of Lillooet and 250 km north-northeast of Vancouver. Located in the Coast Mountains, it is an alpine lake about 22 km long and roughly 1-1.5 km wide. It is separated from nearby Anderson Lake, located to the west, by a large pre-historic rock avalanche deposit at Seton Portage. The lake stands at about 243 m above sea level and is up to about 150 m deep (BC gov., 1953). Water level is controlled by a hydroelectric dam (i.e., Seton dam) located at the eastern end of the lake. Here, the lake drains east into Seton Canal, a 5 km diversion of the flow of the Seton River, which begins at the Seton dam. The Seton Canal pushes water to the Seton Powerhouse, a hydroelectric generating station at the Fraser River, just south of the community of Sekw'el'was and the confluence of the Seton River, which drains into the Fraser River at Lillooet. Seton Portage, Shalatlh, South Shalatlh, Tsal'alh (Shalath), Sekw'el'was (Cayoosh Creek), and T'it'q'et (Lillooet) are communities that surround the lake. Surrounded by mountainous terrain, the lake is flanked at mid-slope by glacial and colluvial sediments deposited during the last glacial and deglacial periods (Clague, 1989; Jakob, 2018). The bedrock consists mainly of mafic to ultramafic volcanic rocks with minor carbonate and argillite from the Carboniferous to Middle Jurassic periods (Journeay and Monger, 1994). As part of the Public Safety Geoscience Program at the Geological Survey of Canada (Natural Resources Canada), our goal is to provide baseline geoscience information to nearby communities, stakeholders and decision-makers. Our objective was to see what kind of sediments were deposited and specifically if we could identify underwater landslide deposits. Thus, we surveyed the lake using a Pinger SBP sub bottom profiler made by Knudsen Engineering Ltd., with dual 3.5 / 200 kHz transducers mounted to a small boat (see photo). This instrument transmits sound energy down through the water column that reflects off the lake bottom surface and underlying sediment layers. At the lake surface, the reflected sound energy is received by the profiler, recorded on a laptop computer, and integrated with GPS data. These data are processed to generate a two-dimensional image (or profile) showing the character of the lake bottom and underlying sediments along the route that the boat passed over. Our survey in 2022 recorded 98 profiles along Seton Lake. The red transect lines show the locations of the 20 profiles displayed on the poster. The types of sediments observed are mostly fine-grained glaciolacustrine sediments that are horizontally bedded, with a subtle transition from glaciolacustrine to lacustrine (e.g., profiles A-A'; C-C'; F-F'; S-S'). Profile S-S' displays this transition zone. The glaciolacustrine sediments were probably deposited as the Cordilleran Ice Sheet retreated from the local area (~13,000-11,000 years ago; Clague, 2017) and the lacustrine sediments, after the ice receded to present-day conditions. Some of the parallel reflections are interrupted, suggesting abrupt sedimentation by deposits that are not horizontally bedded; these are interpreted as landslide deposits (see pink or blue deposits on profiles).
The deposits that show disturbance in the sedimentation found within the horizontal beds are thought to be older landslides (e.g., blue arrows/deposits in profiles C-C'; E-E'; F-F'; G-G'; I-I'; J-J'; K-K'; N-N'; P-P'; Q-Q'; R-R'; T-T'; U-U'), but the ones that are found on top of the horizontally laminated sediments (red arrows/pink deposits), and close to the lake wall, are interpreted to be younger (e.g., profiles B-B'; C-C'; H-H'; K-K'; M-M'; O-O'; P-P'; Q-Q'). At the fan delta just west of Seton dam, where there was no acoustic signal penetration, it is interpreted that the delta failed and brought down coarser deposits to the bottom of the lake (e.g., profiles H-H'; M-M'; and perhaps K-K'). However, these could be glacial deposits, bedrock, or other coarser deposits. Some of the deposits that reflect poor penetration of the acoustic signal, below the glaciolacustrine sediments, could represent glacial deposits, old landslide deposits, or perhaps the presence of gas (orange arrows; e.g., B-B'; D-D'; J-J'; O-O'; T-T'). The preliminary results from sub bottom profiling reveal that there are underwater landslide deposits of widely varying ages buried in the bottom of the lake. However, the exact timing of these is not known. Hence our preliminary survey gives an overview of the distribution of landslides; a larger number of landslides is recorded in the narrower eastern portion of the lake.
APA, Harvard, Vancouver, ISO, and other styles
9

L'Estampe en France: Thirty-Four Young Printmakers. Inter-American Development Bank, February 1999. http://dx.doi.org/10.18235/0006415.

Full text
Abstract:
Forty-five limited edition prints (primarily etching and engraving, but also lithography, silkscreen, and computer-generated images, among others) by French printmakers under forty years of age. Through the Association Française d'Action Artistique (AFAA) and L'Association Les Ateliers, an association of Parisian printmaking workshops, the Center brought contemporary works by master printers whose work represents an extraordinary diversity of vision. The exhibition was organized in honor of Paris, France, site of the 40th Annual Meeting of the IDB Board of Governors in March, 1999, and later in the year travelled on exhibition to Brazil.
APA, Harvard, Vancouver, ISO, and other styles
10

Foundation models such as ChatGPT through the prism of the UNESCO Recommendation on the Ethics of Artificial Intelligence. UNESCO, 2023. http://dx.doi.org/10.54678/bgiv6160.

Full text
Abstract:
The release into the public domain and massive growth in the user base of artificial intelligence (AI) foundation models for text, images, and audio is fuelling debate about the risks they pose to work, education, scientific research, and democracy, as well as their potential negative impacts on cultural diversity and cross-cultural interactions, among other areas. Foundation models are AI systems that are characterized by the use of very large machine learning models trained on massive unlabelled data sets using considerable compute resources. Examples include large language models (LLMs) such as the GPT series and Bard, and image generator tools such as DALL·E 2 and Stable Diffusion. This discussion paper focuses on a widely used foundation model, ChatGPT, as a case study, but many of the points below are applicable to other LLMs and foundation models more broadly. UNESCO Catno: 0000385629
APA, Harvard, Vancouver, ISO, and other styles