Academic literature on the topic '006.7/8'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Contents
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic '006.7/8.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "006.7/8"
Editorial, E. "Erratum: The impact of volumetric arc radiotherapy on clinical outcomes of patients with gynaecological malignancies." Medical Review 76, no. 11-12 (2023): 352. http://dx.doi.org/10.2298/mpns2312352e.
Lisenyi, Oleksandr, Vitalii Hlukhovskyi, Mykolai Marienkov, Svitlana Dubovyk, Ivan Liubchenko, and Mykhailo Yakovenko. "Inspection, Assessment of the Technical Condition, and Conditions for the Restoration of the Residential Building at 6-A V. Lobanovskyi Avenue in Kyiv, Damaged as a Result of Military Action." Science and Construction (Nauka ta Budivnytstvo) 33, no. 3-4 (January 27, 2023). http://dx.doi.org/10.33644/10.33644/2313-6679-34-2022-6.
Dissertations / Theses on the topic "006.7/8"
Moreira, Julian. "Évaluer l'apport du binaural dans une application mobile audiovisuelle." Thesis, Paris, CNAM, 2019. http://www.theses.fr/2019CNAM1243/document.
In recent years, the overall performance of smartphones and tablets has increased significantly (CPU, screen resolution, cameras, etc.). This is particularly visible in the video quality of mobile media services, such as video-streaming applications and interactive applications (e.g., video games). However, these advances have rarely been accompanied by high-quality sound reproduction. In parallel, new technologies for spatialized sound over headphones have been developed, notably the binaural rendering model, which uses HRTF (Head-Related Transfer Function) filters.
In this thesis, we assess the potential contribution of binaural technology to the quality of experience of an audiovisual mobile application. Part of our work was devoted to defining what an "audiovisual mobile application" is, what kinds of applications could benefit from binaural sound, and which of those could support a comparative experiment with and without binaural rendering.
First, coupling binaural sound with a mobile-rendered visual raises a perceptual question: how should a virtual scene be arranged spatially when its sound can surround the user while its visual is confined to a very small screen? We propose an experiment under these conditions to study how far a sound and a visual can be moved apart without breaking their perceptual fusion. The results reveal a strong tolerance among subjects to spatial discrepancies between the two modalities. Notably, neither the individualization of the HRTF filters nor a large separation in elevation between sound and visual seems to affect perception. Moreover, subjects treat the virtual scene as if they were projected inside it, at the camera's position, regardless of their distance to the phone. All these results suggest that pairing binaural sound with a visual on a smartphone could work for the general public.
In the second part, we address the main question of the thesis, the contribution of binaural rendering, through an experiment in a realistic context of use. Thirty subjects played an Infinite Runner video game in their daily lives. The game was developed for the occasion in two versions, one monophonic and one binaural. The experiment lasted five weeks, at a rate of two sessions per day, following a protocol known as the Experience Sampling Method. At each session we collected ratings of immersion, memorization, and performance, and compared them between the monophonic and binaural sessions. The results indicate significantly better immersion in the binaural sessions; no effect of sound rendering was found for memorization or performance. Beyond the contribution of binaural rendering, we discuss the protocol and the validity of the collected data, and weigh theoretical considerations against practical feasibility.
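At its core, the binaural rendering the abstract refers to amounts to convolving a mono source with a pair of left- and right-ear HRTF impulse responses (HRIRs). A minimal illustrative sketch, with made-up 4-tap filters standing in for measured HRIRs (not the thesis's actual implementation):

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with left/right HRIR filters to
    produce a two-channel binaural signal (illustrative only)."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=0)

# Toy example: a unit click filtered by two hypothetical 4-tap HRIRs.
mono = np.array([1.0, 0.0, 0.0, 0.0])
hrir_l = np.array([0.9, 0.3, 0.1, 0.05])  # hypothetical left-ear impulse response
hrir_r = np.array([0.5, 0.4, 0.2, 0.1])   # hypothetical right-ear impulse response
stereo = render_binaural(mono, hrir_l, hrir_r)
print(stereo.shape)  # (2, 7): two channels, len(mono) + len(hrir) - 1 samples
```

In practice the HRIR pair is selected (and interpolated) per source direction, which is what lets headphone playback place a sound anywhere around the listener.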
Randrianarivo, Hicham. "Apprentissage statistique de classes sémantiques pour l'interprétation d'images aériennes." Thesis, Paris, CNAM, 2016. http://www.theses.fr/2016CNAM1117/document.
This work concerns the interpretation of the content of very-high-resolution panchromatic aerial optical images. Two methods are proposed for classifying such images: the first detects instances of an object class, and the second segments superpixels extracted from the images using a contextual model of the relations between superpixels. The object-detection method uses a mixture of appearance models for an object class and then fuses the hypotheses returned by the models. We develop a method that clusters training samples into visual subcategories through a two-stage procedure using metadata and visual information. The clustering step learns models specialized in recognizing a subset of the dataset, and their fusion yields a more general object detector. The performance of the method is evaluated on several very-high-resolution image datasets covering multiple resolutions and locations. The method proposed for contextual semantic segmentation combines a visual description of each superpixel extracted from the image with contextual information gathered between a superpixel and its neighbors. The contextual representation is a graph whose nodes are the superpixels and whose edges are the relations between neighbors. Finally, we predict the category of each superpixel using its neighbors' predictions under the contextual model, making the prediction more reliable. We test our method on a dataset of very-high-resolution images.
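The contextual step the abstract describes, refining a superpixel's label using its neighbors' predictions over an adjacency graph, can be sketched as a simple neighbor-voting pass. This is an illustrative toy (the node labels, graph, and voting weight are invented for the example), not the thesis's actual model:

```python
from collections import Counter

def contextual_relabel(labels, adjacency, weight=1.0):
    """One pass of neighbor voting on a superpixel adjacency graph:
    each node keeps a vote for its own label plus weighted votes
    from its neighbors' labels (illustrative sketch)."""
    new_labels = {}
    for node, own in labels.items():
        votes = Counter({own: 1.0})
        for nb in adjacency.get(node, []):
            votes[labels[nb]] += weight
        new_labels[node] = votes.most_common(1)[0][0]
    return new_labels

# Toy graph: node 2 is a noisy 'road' superpixel surrounded by 'building'.
labels = {0: "building", 1: "building", 2: "road", 3: "building"}
adjacency = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
print(contextual_relabel(labels, adjacency))  # node 2 flips to 'building'
```

In the thesis the analogous mechanism operates on per-class prediction scores rather than hard labels, but the graph structure (superpixels as nodes, neighbor relations as edges) is the same idea.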
Books on the topic "006.7/8"
Ransome, James F., ed. Cloud Computing: Implementation, Management, and Security. Boca Raton, FL: CRC Press, 2010.
Roberts, Jason, ed. Director 8 Demystified: The Official Guide to Macromedia Director, Lingo, and Shockwave. Berkeley, Calif.: Peachpit, in association with Macromedia Press, 2000.
Macromedia Flash MX 2004 for Windows and Macintosh. Berkeley, Calif.: Peachpit Press, 2004.
Sams Teach Yourself Macromedia Flash MX 2004 in 24 Hours. Indianapolis: Sams, 2004.
Cloud Computing: Implementation, Management, and Security. CRC Press LLC, 2012.
Adobe Flash CS3 Professional Bible. Wiley, 2007.
Adobe Dreamweaver CS3: Video Training Book. Peachpit Press, 2007.
Adobe Dreamweaver CS3: Includes Exercise Files and Demo Movies. Berkeley, CA: lynda.com/books: Peachpit Press, 2007.
Beyond Bullet Points: Using Microsoft Office PowerPoint 2007 to Create Presentations That Inform, Motivate, and Inspire. Microsoft Press, 2007.