Selection of scientific literature on the topic "006.7/8"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Table of contents
Consult the lists of current articles, books, dissertations, reports, and other scientific sources on the topic "006.7/8".
Next to each work in the bibliography, the option "Add to bibliography" is available. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scientific publication in PDF format and read an online abstract of the work, provided the relevant parameters are available in its metadata.
Journal articles on the topic "006.7/8"
Editorial, E. "Erratum: The impact of volumetric arc radiotherapy on clinical outcomes of patients with gynaecological malignancies". Medical review 76, no. 11-12 (2023): 352. http://dx.doi.org/10.2298/mpns2312352e.
Лісений, Олександр, Віталій Глуховський, Миколай Мар’єнков, Світлана Дубовик, Іван Любченко and Михайло Яковенко. "ОБСТЕЖЕННЯ, ОЦІНКА ТЕХНІЧНОГО СТАНУ ТА УМОВИ ВІДНОВЛЕННЯ ЖИТЛОВОГО БУДИНКУ НА ПРОСПЕКТІ В. ЛОБАНОВСЬКОГО, 6-А В М. КИЄВІ, ПОШКОДЖЕНОГО ВНАСЛІДОК ВОЄННИХ ДІЙ". Наука та будівництво 33, no. 3-4 (27.01.2023). http://dx.doi.org/10.33644/10.33644/2313-6679-34-2022-6.
Dissertations on the topic "006.7/8"
Moreira, Julian. "Évaluer l'apport du binaural dans une application mobile audiovisuelle". Thesis, Paris, CNAM, 2019. http://www.theses.fr/2019CNAM1243/document.
In recent years, the overall performance of smartphones and tablets has increased significantly (CPU, screen resolution, cameras, etc.). This is particularly visible in the video quality of mobile media services such as video streaming applications or interactive applications (e.g., video games). However, these evolutions have rarely been accompanied by the integration of high-quality sound reproduction systems. Alongside them, new technologies for spatialized sound over headphones have been developed, namely binaural rendering based on HRTF (Head-Related Transfer Function) filters.

In this thesis, we assess the potential contribution of binaural technology to the quality of experience of an audiovisual mobile application. Part of our work was dedicated to defining what an "audiovisual mobile application" is, what kind of application could benefit from binaural sound, and which of those applications could support a comparative experiment with and without binaural rendering.

First, coupling binaural sound with a visual rendered on a mobile device raises a perceptual question: how should a virtual scene be spatially arranged when its sound can be spread all around the user while its visual is confined to a very small space? We propose an experiment in these conditions to study how far a sound and a visual can be moved apart without breaking their perceptual fusion. The results reveal a strong tolerance of subjects to spatial discrepancies between the two modalities. Notably, neither the individualization of the HRTF filters nor a large separation in elevation between sound and visual appears to affect perception. Moreover, subjects treat the virtual scene as if they were projected inside it, at the camera's position, regardless of their distance from the phone. All these results suggest that the association of binaural sound with a visual on a smartphone could be used by the general public.

In the second part, we address the main question of the thesis, i.e., the contribution of binaural rendering, through an experiment in a realistic context of use. Thirty subjects played an infinite-runner video game in their daily lives. The game was developed for the occasion in two versions, one monophonic and one binaural. The experiment lasted five weeks, at a rate of two sessions per day, following a protocol known as the Experience Sampling Method. At each session we collected immersion, memorization, and performance scores, and we compared these scores between the monophonic and the binaural sessions. Results indicate significantly better immersion in the binaural sessions; no effect of sound rendering was found for memorization or performance. Beyond the contribution of binaural rendering, we discuss the protocol and the validity of the collected data, weighing theoretical considerations against practical feasibility.
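As an aid to readers unfamiliar with the binaural rendering technique this abstract refers to, here is a minimal, purely illustrative sketch of HRIR-based binaural synthesis in Python. It is not the thesis implementation; the function name, the file names in the usage comment, and the assumption that HRIRs for the desired direction are already available are all placeholders.

```python
# Minimal sketch of binaural rendering via HRIR convolution (illustrative only,
# not the thesis code). A mono source is convolved with the left and right
# head-related impulse responses (HRIRs) measured for the target direction.
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono: np.ndarray, hrir_left: np.ndarray, hrir_right: np.ndarray) -> np.ndarray:
    """Convolve a mono signal with left/right HRIRs to produce a 2-channel signal."""
    left = fftconvolve(mono, hrir_left, mode="full")
    right = fftconvolve(mono, hrir_right, mode="full")
    stereo = np.stack([left, right], axis=1)
    # Normalize to avoid clipping when the result is written to an audio file.
    peak = np.max(np.abs(stereo))
    return stereo / peak if peak > 0 else stereo

# Hypothetical usage ("source.npy", "hrir_L.npy", "hrir_R.npy" are placeholders):
# mono = np.load("source.npy")
# stereo = render_binaural(mono, np.load("hrir_L.npy"), np.load("hrir_R.npy"))
```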
Randrianarivo, Hicham. "Apprentissage statistique de classes sémantiques pour l'interprétation d'images aériennes". Thesis, Paris, CNAM, 2016. http://www.theses.fr/2016CNAM1117/document.
This work addresses the interpretation of the content of very-high-resolution panchromatic aerial optical images. Two methods are proposed for the classification of such images: the first detects instances of an object class, and the second segments superpixels extracted from the images using a contextual model of the relations between superpixels.

The object detection method uses a mixture of appearance models for an object class and then fuses the hypotheses returned by the models. We develop a method that clusters training samples into visual subcategories through a two-stage procedure using metadata and visual information. The clustering step learns models specialized in recognizing a subset of the dataset, and their fusion yields a more general object detector. The performance of the method is evaluated on several very-high-resolution image datasets covering several resolutions and locations.

The method proposed for contextual semantic segmentation combines a visual description of each superpixel extracted from the image with contextual information gathered between the superpixel and its neighbors. The contextual representation is based on a graph whose nodes are the superpixels and whose edges are the relations between neighbors. Finally, we predict the category of a superpixel using the predictions made for its neighbors under the contextual model, in order to make the prediction more reliable. We test our method on a dataset of very-high-resolution images.
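To illustrate the kind of neighbor-based smoothing this abstract describes, here is a small, hypothetical Python sketch of contextual superpixel classification. It is not the author's method: the function, the mixing weight `alpha`, and the toy data in the usage comment are illustrative assumptions showing only the general idea of combining a superpixel's own class scores with those of its graph neighbors.

```python
# Illustrative sketch of contextual superpixel classification (not the thesis code).
# Each superpixel has per-class scores from an appearance classifier; an adjacency
# list encodes the neighbor graph used to smooth the final prediction.
import numpy as np

def contextual_predict(scores: np.ndarray, neighbors: list, alpha: float = 0.5) -> np.ndarray:
    """Return one class index per superpixel after mixing own and neighbor scores.

    scores    : (n_superpixels, n_classes) appearance-based class scores
    neighbors : adjacency list; neighbors[i] lists the superpixels adjacent to i
    alpha     : weight of the superpixel's own scores vs. the neighborhood average
    """
    smoothed = np.empty_like(scores)
    for i, nbrs in enumerate(neighbors):
        # Average the scores of the adjacent superpixels; fall back to the
        # superpixel's own scores when it has no neighbors.
        context = scores[nbrs].mean(axis=0) if nbrs else scores[i]
        smoothed[i] = alpha * scores[i] + (1.0 - alpha) * context
    return smoothed.argmax(axis=1)

# Hypothetical usage with 3 superpixels and 2 classes:
# scores = np.array([[0.9, 0.1], [0.4, 0.6], [0.8, 0.2]])
# labels = contextual_predict(scores, neighbors=[[1], [0, 2], [1]])
```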
Books on the topic "006.7/8"
Ransome, James F., ed. Cloud computing: Implementation, management, and security. Boca Raton, FL: CRC Press, 2010.
Roberts, Jason, ed. Director 8 demystified: The official guide to Macromedia Director, Lingo, and Shockwave. Berkeley, Calif.: Peachpit in association with Macromedia Press, 2000.
Macromedia Flash MX 2004 for Windows and Macintosh. Berkeley, Calif.: Peachpit Press, 2004.
Sams teach yourself Macromedia Flash MX 2004 in 24 hours. Indianapolis: Sams, 2004.
Cloud Computing: Implementation, Management, and Security. CRC Press LLC, 2012.
Adobe Flash CS3 Professional Bible. Wiley, 2007.
Adobe Dreamweaver CS3: Video Training Book. Peachpit Press, 2007.
Adobe Dreamweaver CS3: Includes exercise files and demo movies. Berkeley, CA: lynda.com/books: Peachpit Press, 2007.
Beyond Bullet Points: Using Microsoft Office PowerPoint 2007 to Create Presentations That Inform, Motivate, and Inspire (BPG - Other). Microsoft Press, 2007.