Academic literature on the topic 'Generative audio models'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Generative audio models.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Generative audio models"
Evans, Zach, Scott H. Hawley, and Katherine Crowson. "Musical audio samples generated from joint text embeddings." Journal of the Acoustical Society of America 152, no. 4 (October 2022): A178. http://dx.doi.org/10.1121/10.0015956.
Wang, Heng, Jianbo Ma, Santiago Pascual, Richard Cartwright, and Weidong Cai. "V2A-Mapper: A Lightweight Solution for Vision-to-Audio Generation by Connecting Foundation Models." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15492–501. http://dx.doi.org/10.1609/aaai.v38i14.29475.
Sakirin, Tam, and Siddartha Kusuma. "A Survey of Generative Artificial Intelligence Techniques." Babylonian Journal of Artificial Intelligence 2023 (March 10, 2023): 10–14. http://dx.doi.org/10.58496/bjai/2023/003.
Broad, Terence, Frederic Fol Leymarie, and Mick Grierson. "Network Bending: Expressive Manipulation of Generative Models in Multiple Domains." Entropy 24, no. 1 (December 24, 2021): 28. http://dx.doi.org/10.3390/e24010028.
Aldausari, Nuha, Arcot Sowmya, Nadine Marcus, and Gelareh Mohammadi. "Video Generative Adversarial Networks: A Review." ACM Computing Surveys 55, no. 2 (March 31, 2023): 1–25. http://dx.doi.org/10.1145/3487891.
Shen, Qiwei, Junjie Xu, Jiahao Mei, Xingjiao Wu, and Daoguo Dong. "EmoStyle: Emotion-Aware Semantic Image Manipulation with Audio Guidance." Applied Sciences 14, no. 8 (April 10, 2024): 3193. http://dx.doi.org/10.3390/app14083193.
Andreu, Sergi, and Monica Villanueva Aylagas. "Neural Synthesis of Sound Effects Using Flow-Based Deep Generative Models." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 18, no. 1 (October 11, 2022): 2–9. http://dx.doi.org/10.1609/aiide.v18i1.21941.
Lattner, Stefan, and Javier Nistal. "Stochastic Restoration of Heavily Compressed Musical Audio Using Generative Adversarial Networks." Electronics 10, no. 11 (June 5, 2021): 1349. http://dx.doi.org/10.3390/electronics10111349.
Yang, Junpeng, and Haoran Zhang. "Development And Challenges of Generative Artificial Intelligence in Education and Art." Highlights in Science, Engineering and Technology 85 (March 13, 2024): 1334–47. http://dx.doi.org/10.54097/vaeav407.
Choi, Ha-Yeong, Sang-Hoon Lee, and Seong-Whan Lee. "DDDM-VC: Decoupled Denoising Diffusion Models with Disentangled Representation and Prior Mixup for Verified Robust Voice Conversion." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 17862–70. http://dx.doi.org/10.1609/aaai.v38i16.29740.
Full textDissertations / Theses on the topic "Generative audio models"
Douwes, Constance. "On the Environmental Impact of Deep Generative Models for Audio." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS074.
In this thesis, we investigate the environmental impact of deep learning models for audio generation, and we aim to put computational cost at the core of the evaluation process. In particular, we focus on different types of deep learning models specialized in raw waveform audio synthesis. These models are now a key component of modern audio systems, and their use has increased significantly in recent years. Their flexibility and generalization capabilities make them powerful tools in many contexts, from text-to-speech synthesis to unconditional audio generation. However, these benefits come at the cost of expensive training sessions on large amounts of data, operated on energy-intensive dedicated hardware, which incurs large greenhouse gas emissions. The measures we use as a scientific community to evaluate our work are at the heart of this problem. Currently, deep learning researchers evaluate their work primarily based on improvements in accuracy, log-likelihood, reconstruction, or opinion scores, all of which overshadow the computational cost of generative models. Therefore, we propose a new methodology based on Pareto optimality to help the community better evaluate the significance of their work, bringing the energy footprint -- and ultimately carbon emissions -- to the same level of interest as sound quality. In the first part of this thesis, we present a comprehensive report on the use of various evaluation measures of deep generative models for audio synthesis tasks. Even though computational efficiency is increasingly discussed, quality measurements are the most commonly used metrics to evaluate deep generative models, while energy consumption is almost never mentioned. We address this issue by estimating the carbon cost of training generative models and comparing it to other noteworthy carbon costs, demonstrating that it is far from insignificant. In the second part of this thesis, we propose a large-scale evaluation of pervasive neural vocoders, a class of generative models used for speech generation conditioned on mel-spectrograms. We introduce a multi-objective analysis based on Pareto optimality of both quality, from human-based evaluation, and energy consumption. Within this framework, we show that lighter models can perform better than more costly ones. By proposing to rely on a novel definition of efficiency, we intend to provide practitioners with a decision basis for choosing the best model for their requirements. In the last part of the thesis, we propose a method to reduce the inference costs of neural vocoders, based on quantized neural networks. We show a significant reduction in memory size and offer guidance for the future use of these models on embedded hardware. Overall, we provide keys to better understand the impact of deep generative models for audio synthesis as well as a new framework for developing models while accounting for their environmental impact. We hope this work raises awareness of the need to investigate energy-efficient models alongside high perceived quality.
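As a rough illustration of the Pareto-optimality idea this abstract describes (a sketch of ours, not code from the thesis; the model names and numbers are invented), the following snippet keeps only those models for which no other model is both higher in quality and lower in energy consumption:

```python
def pareto_front(models):
    """Return the models not dominated on (quality up, energy down)."""
    front = []
    for name, quality, energy in models:
        dominated = any(
            q >= quality and e <= energy and (q > quality or e < energy)
            for _, q, e in models
        )
        if not dominated:
            front.append((name, quality, energy))
    return front

# Hypothetical vocoders: (name, mean opinion score, kWh per training run)
candidates = [
    ("vocoder-A", 4.1, 120.0),
    ("vocoder-B", 4.3, 310.0),
    ("vocoder-C", 3.9, 95.0),
    ("vocoder-D", 4.0, 150.0),  # dominated by vocoder-A: worse on both axes
]

for name, mos, kwh in pareto_front(candidates):
    print(f"{name}: MOS={mos}, energy={kwh} kWh")
```

Here vocoder-D drops out of the front, while the three remaining models each represent a different quality/energy trade-off a practitioner might legitimately choose.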
Caillon, Antoine. "Hierarchical temporal learning for multi-instrument and orchestral audio synthesis." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS115.
Recent advances in deep learning have offered new ways to build models addressing a wide variety of tasks through the optimization of a set of parameters that minimize a cost function. Amongst these techniques, probabilistic generative models have yielded impressive advances in text, image and sound generation. However, musical audio signal generation remains a challenging problem. This comes from the complexity of audio signals themselves, since a single second of raw audio spans tens of thousands of individual samples. Modeling musical signals is even more challenging because important information is structured across different time scales, from micro (e.g., timbre, transients, phase) to macro (e.g., genre, tempo, structure). Modeling every scale at once would require large architectures, precluding the use of the resulting models in real-time setups for computational-complexity reasons. In this thesis, we study how a hierarchical approach to audio modeling can address the musical signal modeling task while offering different levels of control to the user. Our main hypothesis is that extracting different representation levels of an audio signal allows each modeling stage to abstract away the complexity of the lower levels. This would eventually allow the use of lightweight architectures, each modeling a single audio scale. We start by addressing raw audio modeling, proposing an audio model combining Variational Autoencoders and Generative Adversarial Networks that yields high-quality 48 kHz neural audio synthesis while being 20 times faster than real time on CPU. Then, we study how autoregressive models can be used to capture the temporal behavior of the representation yielded by this low-level audio model, using optional additional conditioning signals such as acoustic descriptors or tempo. Finally, we propose a method for using all the proposed models directly on audio streams, allowing their use in the real-time applications developed during this thesis. We conclude by presenting various creative collaborations conducted in parallel with this work with several composers and musicians, directly integrating the current state of the proposed technologies into musical pieces.
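The following toy sketch (ours, not the thesis's code; the sizes, strides, and module choices are assumptions) illustrates the two-stage hierarchy the abstract describes: a low-level autoencoder compresses raw audio into a much slower latent sequence, and a separate autoregressive model captures the temporal behavior of those latents:

```python
import torch
import torch.nn as nn

class LatentAutoencoder(nn.Module):
    """Stage 1: compress raw audio (B, 1, T) into latents (B, D, T/64)."""
    def __init__(self, dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 32, 9, stride=4, padding=4), nn.ELU(),
            nn.Conv1d(32, 64, 9, stride=4, padding=4), nn.ELU(),
            nn.Conv1d(64, dim, 9, stride=4, padding=4),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(dim, 64, 8, stride=4, padding=2), nn.ELU(),
            nn.ConvTranspose1d(64, 32, 8, stride=4, padding=2), nn.ELU(),
            nn.ConvTranspose1d(32, 1, 8, stride=4, padding=2), nn.Tanh(),
        )

    def forward(self, audio):
        z = self.encoder(audio)       # 64x fewer time steps than the waveform
        return self.decoder(z), z

class LatentPrior(nn.Module):
    """Stage 2: autoregressive model over the latent sequence."""
    def __init__(self, dim=16, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(dim, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, dim)

    def forward(self, z_seq):          # z_seq: (B, steps, D)
        h, _ = self.rnn(z_seq)
        return self.proj(h)            # prediction of the next latent frame

audio = torch.randn(2, 1, 4096)        # two fake 4096-sample clips
ae = LatentAutoencoder()
recon, z = ae(audio)                    # z: (2, 16, 64)
prior = LatentPrior()
z_seq = z.transpose(1, 2)               # (B, steps, D) for the GRU
pred = prior(z_seq[:, :-1])             # predict z[t+1] from z[<=t]
loss = nn.functional.mse_loss(pred, z_seq[:, 1:])
print(recon.shape, z.shape, loss.item())
```

Because the prior only operates on 64 latent frames rather than 4096 samples, each stage can stay lightweight, which is the point of the hierarchical decomposition.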
Nishikimi, Ryo. "Generative, Discriminative, and Hybrid Approaches to Audio-to-Score Automatic Singing Transcription." Doctoral thesis, Kyoto University, 2021. http://hdl.handle.net/2433/263772.
Chemla-Romeu-Santos, Axel Claude André. "Manifold Representations of Musical Signals and Generative Spaces." Doctoral thesis, Università degli Studi di Milano, 2020. http://hdl.handle.net/2434/700444.
Among the diverse research fields within computer music, the synthesis and generation of audio signals epitomize the cross-disciplinarity of this domain, jointly nourishing scientific and artistic practices since its creation. Inherent in computer music since its genesis, audio generation has inspired numerous approaches, evolving along with musical practices and scientific and technical advances. Moreover, some synthesis processes also naturally handle the reverse process, named analysis, such that synthesis parameters can be partially or totally extracted from actual sounds, providing an alternative representation of the analyzed audio signals. On top of that, the recent rise of machine learning algorithms has deeply questioned scientific research, bringing powerful data-centred methods that raised several epistemological questions amongst researchers, in spite of their efficiency. In particular, a family of machine learning methods, called generative models, focuses on the generation of original content using features extracted from an existing dataset. Such methods question not only previous approaches to generation, but also the ways of integrating these methods into existing creative processes. Yet, while these new generative frameworks are progressively being introduced in the domain of image generation, the application of such generative techniques in audio synthesis is still marginal. In this work, we propose a new audio analysis-synthesis framework based on these modern generative models, enhanced by recent advances in machine learning. We first review existing approaches, both in sound synthesis and in generative machine learning, and focus on how our work inserts itself into both practices and what can be expected from their combination. Subsequently, we focus more closely on generative models and on how modern advances in the domain can be exploited to learn complex sound distributions while remaining sufficiently flexible to be integrated into the creative flow of the user. We then propose an inference/generation process, mirroring the analysis/synthesis paradigms that are natural in the audio processing domain, using latent models based on a continuous higher-level space that we use to control generation. We first provide preliminary results of our method applied to spectral information extracted from several datasets, and evaluate the obtained results both qualitatively and quantitatively. Subsequently, we study how to make these methods more suitable for learning audio data, tackling three different aspects in turn. First, we propose two latent regularization strategies specifically designed for audio, based on signal/symbol translation and on perceptual constraints. Then, we propose different methods to address the inner temporality of musical signals, based on the extraction of multi-scale representations and on prediction, so that the obtained generative spaces also model the dynamics of the signal. In a last chapter, we shift from a scientific approach to a more research-and-creation-oriented point of view: first, we describe the architecture and design of our open-source library, vsacids, intended to be used by expert and non-expert music makers as an integrated creation tool.
Then, we propose a first musical use of our system through the creation of a real-time performance, called ægo, based jointly on our framework vsacids and an explorative agent trained with reinforcement learning during the performance. Finally, we draw some conclusions on the different ways to improve and reinforce the proposed generation method, as well as on possible further creative applications.
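As a small illustration of why a continuous latent space gives the user control over generation (our sketch, not from the thesis), two latent codes can be smoothly interpolated, and each intermediate point could then be decoded into a sound by a trained model, which is only stubbed out here:

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation, often preferred over linear blending
    in high-dimensional latent spaces."""
    omega = np.arccos(np.clip(
        np.dot(z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return z0
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(0)
z_start, z_end = rng.normal(size=64), rng.normal(size=64)

# Five points along a smooth path between two latent codes; a trained
# decoder (not shown) would turn each point into an audio frame.
path = [slerp(z_start, z_end, t) for t in np.linspace(0.0, 1.0, 5)]
print([round(float(np.linalg.norm(z)), 2) for z in path])
```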
Guenebaut, Boris. "Automatic Subtitle Generation for Sound in Videos." Thesis, University West, Department of Economics and IT, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-1784.
The last ten years have witnessed the emergence of all kinds of video content, and the appearance of dedicated websites for this phenomenon has increased the importance the public gives to it. At the same time, some individuals are deaf and cannot understand such videos when no text transcription is available. It is therefore necessary to find solutions to make these media artefacts accessible to most people. Several software packages offer utilities to create subtitles for videos, but all require extensive participation from the user. Hence, a more automated approach is envisaged. This thesis report describes a way to generate standards-compliant subtitles using speech recognition. Three parts are distinguished. The first consists in separating the audio from the video and converting the audio to a suitable format if necessary. The second performs recognition of the speech contained in the audio. The final stage generates a subtitle file from the recognition results of the previous step. Directions of implementation have been proposed for the three distinct modules. The experimental results were not fully satisfactory, and adjustments will have to be made in further work. Decoding parallelization, use of well-trained models, and punctuation insertion are some of the improvements to be made.
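As a minimal sketch of the third stage described here (our illustration; the timings and text are invented, and a real system would take them from the speech recognizer), recognition results can be written out as a standards-compliant SubRip (.srt) file:

```python
def to_srt_timestamp(seconds):
    """Format seconds as the HH:MM:SS,mmm timestamp SubRip requires."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def write_srt(segments, path):
    """Write (start_seconds, end_seconds, text) triples as numbered cues."""
    with open(path, "w", encoding="utf-8") as f:
        for i, (start, end, text) in enumerate(segments, start=1):
            f.write(f"{i}\n{to_srt_timestamp(start)} --> "
                    f"{to_srt_timestamp(end)}\n{text}\n\n")

write_srt([(0.0, 2.4, "Hello and welcome."),
           (2.6, 5.1, "Today we talk about subtitles.")], "output.srt")
```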
Scarlato, Michele. "Sicurezza di rete, analisi del traffico e monitoraggio." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amslaurea.unibo.it/3223/.
Mehri, Soroush. "Sequential modeling, generative recurrent neural networks, and their applications to audio." Thesis, 2016. http://hdl.handle.net/1866/18762.
Full textBooks on the topic "Generative audio models"
Osipov, Vladimir. Control and audit of the activities of a commercial organization: external and internal. INFRA-M Academic Publishing LLC, 2021. http://dx.doi.org/10.12737/1137320.
Full textKerouac, Jack. Big-Sur: Roman. Sankt-Peterburg: Azbuka, 2013.
Kazimagomedov, Abdulla, Aida Abdulsalamova, M. Mel'nikov, and N. Gadzhiev. Analysis of the activities of a commercial bank. INFRA-M Academic Publishing LLC, 2022. http://dx.doi.org/10.12737/1831614.
Kerouac, Jack. Big-Sur: Roman. Sankt-Peterburg: Azbuka, 2013.
Colmeiro, José. Peripheral Visions / Global Sounds. Liverpool University Press, 2018. http://dx.doi.org/10.5949/liverpool/9781786940308.001.0001.
Aguayo, Angela J. Documentary Resistance. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780190676216.001.0001.
Kerouac, Jack. Big Sur. Penguin Books, Limited, 2018.
Kerouac, Jack. Big Sur. Penguin Books, 2011.
Kerouac, Jack. Big Sur. Penguin Classics, 2001.
Kerouac, Jack. Big Sur. McGraw-Hill Companies, 1990.
Kerouac, Jack. Big Sur. Independently Published, 2021.
Book chapters on the topic "Generative audio models"
Huzaifah, Muhammad, and Lonce Wyse. "Deep Generative Models for Musical Audio Synthesis." In Handbook of Artificial Intelligence for Music, 639–78. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72116-9_22.
Ye, Sheng, Yu-Hui Wen, Yanan Sun, Ying He, Ziyang Zhang, Yaoyuan Wang, Weihua He, and Yong-Jin Liu. "Audio-Driven Stylized Gesture Generation with Flow-Based Model." In Lecture Notes in Computer Science, 712–28. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20065-6_41.
Wyse, Lonce, Purnima Kamath, and Chitralekha Gupta. "Sound Model Factory: An Integrated System Architecture for Generative Audio Modelling." In Artificial Intelligence in Music, Sound, Art and Design, 308–22. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-03789-4_20.
Farkas, Michal, and Peter Lacko. "Using Advanced Audio Generating Techniques to Model Electrical Energy Load." In Engineering Applications of Neural Networks, 39–48. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-65172-9_4.
Golani, Mati, and Shlomit S. Pinter. "Generating a Process Model from a Process Audit Log." In Lecture Notes in Computer Science, 136–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-44895-0_10.
de Berardinis, Jacopo, Valentina Anita Carriero, Nitisha Jain, Nicolas Lazzari, Albert Meroño-Peñuela, Andrea Poltronieri, and Valentina Presutti. "The Polifonia Ontology Network: Building a Semantic Backbone for Musical Heritage." In The Semantic Web – ISWC 2023, 302–22. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-47243-5_17.
Kim, Sang-Kyun, Doo Sun Hwang, Ji-Yeun Kim, and Yang-Seock Seo. "An Effective News Anchorperson Shot Detection Method Based on Adaptive Audio/Visual Model Generation." In Lecture Notes in Computer Science, 276–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11526346_31.
Yoshii, Kazuyoshi, and Masataka Goto. "MusicCommentator: Generating Comments Synchronized with Musical Audio Signals by a Joint Probabilistic Model of Acoustic and Textual Features." In Lecture Notes in Computer Science, 85–97. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04052-8_8.
Renugadevi, R., J. Shobana, K. Arthi, Kalpana A. V., D. Satishkumar, and M. Sivaraja. "Real-Time Applications of Artificial Intelligence Technology in Daily Operations." In Advances in Computational Intelligence and Robotics, 243–57. IGI Global, 2024. http://dx.doi.org/10.4018/979-8-3693-2615-2.ch012.
Carpio de los Pinos, Carmen, and Arturo Galán González. "Facilitating Accessibility: A Study on Innovative Didactic Materials to Generate Emotional Interactions with Pictorial Art." In The Science of Emotional Intelligence. IntechOpen, 2021. http://dx.doi.org/10.5772/intechopen.97796.
Conference papers on the topic "Generative audio models"
Yang, Hyukryul, Hao Ouyang, Vladlen Koltun, and Qifeng Chen. "Hiding Video in Audio via Reversible Generative Models." In 2019 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2019. http://dx.doi.org/10.1109/iccv.2019.00119.
Nguyen, Viet-Nhat, Mostafa Sadeghi, Elisa Ricci, and Xavier Alameda-Pineda. "Deep Variational Generative Models for Audio-Visual Speech Separation." In 2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2021. http://dx.doi.org/10.1109/mlsp52302.2021.9596406.
Gu, Mingliang, and Yuguo Xia. "Fusing generative and discriminative models for Chinese dialect identification." In 2008 International Conference on Audio, Language and Image Processing (ICALIP). IEEE, 2008. http://dx.doi.org/10.1109/icalip.2008.4590173.
Shah, Neil, Dharmeshkumar M. Agrawal, and Niranjan Pedanekar. "Adding Crowd Noise to Sports Commentary using Generative Models." In Life Improvement in Quality by Ubiquitous Experiences Workshop. Brazilian Computing Society, 2021. http://dx.doi.org/10.5753/lique.2021.15715.
Barnett, Julia. "The Ethical Implications of Generative Audio Models: A Systematic Literature Review." In AIES '23: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3600211.3604686.
Ye, Zhenhui, Zhou Zhao, Yi Ren, and Fei Wu. "SyntaSpeech: Syntax-Aware Generative Adversarial Text-to-Speech." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/620.
Agiomyrgiannakis, Yannis. "B-Spline Pdf: A Generalization of Histograms to Continuous Density Models for Generative Audio Networks." In ICASSP 2018 - 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018. http://dx.doi.org/10.1109/icassp.2018.8461399.
Vatanparvar, Korosh, Viswam Nathan, Ebrahim Nemati, Md Mahbubur Rahman, and Jilong Kuang. "Adapting to Noise in Speech Obfuscation by Audio Profiling Using Generative Models for Passive Health Monitoring." In 2020 42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) in conjunction with the 43rd Annual Conference of the Canadian Medical and Biological Engineering Society. IEEE, 2020. http://dx.doi.org/10.1109/embc44109.2020.9176156.
Schimbinschi, Florin, Christian Walder, Sarah M. Erfani, and James Bailey. "SynthNet: Learning to Synthesize Music End-to-End." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/467.
Farooq, Ahmed, Jari Kangas, and Roope Raisamo. "TAUCHI-GPT: Leveraging GPT-4 to create a Multimodal Open-Source Research AI tool." In AHFE 2023 Hawaii Edition. AHFE International, 2023. http://dx.doi.org/10.54941/ahfe1004176.
Full textReports on the topic "Generative audio models"
Decleir, Cyril, Mohand-Saïd Hacid, and Jacques Kouloumdjian. A Database Approach for Modeling and Querying Video Data. Aachen University of Technology, 1999. http://dx.doi.org/10.25368/2022.90.
Vakaliuk, Tetiana A., Valerii V. Kontsedailo, Dmytro S. Antoniuk, Olha V. Korotun, Iryna S. Mintii, and Andrey V. Pikilnyak. Using game simulator Software Inc in the Software Engineering education. [n.p.], February 2020. http://dx.doi.org/10.31812/123456789/3762.