Academic literature on the topic "Deep Generative Models"
Create a correct reference in APA, MLA, Chicago, Harvard, and other styles
Contents
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic "Deep Generative Models".
Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication as a PDF and read its abstract online, whenever this information is included in the metadata.
Journal articles on the topic "Deep Generative Models"
Mehmood, Rayeesa, Rumaan Bashir, and Kaiser J. Giri. "Deep Generative Models: A Review." Indian Journal of Science and Technology 16, no. 7 (February 21, 2023): 460–67. http://dx.doi.org/10.17485/ijst/v16i7.2296.
Ragoza, Matthew, Tomohide Masuda, and David Ryan Koes. "Generating 3D molecules conditional on receptor binding sites with deep generative models." Chemical Science 13, no. 9 (2022): 2701–13. http://dx.doi.org/10.1039/d1sc05976a.
Salakhutdinov, Ruslan. "Learning Deep Generative Models." Annual Review of Statistics and Its Application 2, no. 1 (April 10, 2015): 361–85. http://dx.doi.org/10.1146/annurev-statistics-010814-020120.
Partaourides, Harris, and Sotirios P. Chatzis. "Asymmetric deep generative models." Neurocomputing 241 (June 2017): 90–96. http://dx.doi.org/10.1016/j.neucom.2017.02.028.
Du, Changsheng, Yong Li, and Ming Wen. "G-DCS: GCN-Based Deep Code Summary Generation Model." 網際網路技術學刊 24, no. 4 (July 2023): 965–73. http://dx.doi.org/10.53106/160792642023072404014.
Wu, Han. "Face image generation and feature visualization using deep convolutional generative adversarial networks." Journal of Physics: Conference Series 2634, no. 1 (November 1, 2023): 012041. http://dx.doi.org/10.1088/1742-6596/2634/1/012041.
Berrahal, Mohammed, Mohammed Boukabous, Mimoun Yandouzi, Mounir Grari, and Idriss Idrissi. "Investigating the effectiveness of deep learning approaches for deep fake detection." Bulletin of Electrical Engineering and Informatics 12, no. 6 (December 1, 2023): 3853–60. http://dx.doi.org/10.11591/eei.v12i6.6221.
Che, Tong, Xiaofeng Liu, Site Li, Yubin Ge, Ruixiang Zhang, Caiming Xiong, and Yoshua Bengio. "Deep Verifier Networks: Verification of Deep Discriminative Models with Deep Generative Models." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7002–10. http://dx.doi.org/10.1609/aaai.v35i8.16862.
Scurto, Hugo, Thomas Similowski, Samuel Bianchini, and Baptiste Caramiaux. "Probing Respiratory Care With Generative Deep Learning." Proceedings of the ACM on Human-Computer Interaction 7, CSCW2 (September 28, 2023): 1–34. http://dx.doi.org/10.1145/3610099.
Patil, Prakash, et al. "GAN-Enhanced Medical Image Synthesis: Augmenting CXR Data for Disease Diagnosis and Improving Deep Learning Performance." Journal of Electrical Systems 19, no. 3 (January 25, 2024): 53–61. http://dx.doi.org/10.52783/jes.651.
Texte intégralThèses sur le sujet "Deep Generatve Models"
Miao, Yishu. "Deep generative models for natural language processing." Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:e4e1f1f9-e507-4754-a0ab-0246f1e1e258.
Misino, Eleonora. "Deep Generative Models with Probabilistic Logic Priors." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24058/.
Nilsson, Mårten. "Augmenting High-Dimensional Data with Deep Generative Models." Thesis, KTH, Robotik, perception och lärande, RPL, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-233969.
Texte intégralDataaugmentering är en teknik som kan utföras på flera sätt för att förbättra träningen av diskriminativa modeller. De senaste framgångarna inom djupa generativa modeller har öppnat upp nya sätt att augmentera existerande dataset. I detta arbete har ett ramverk för augmentering av annoterade dataset med hjälp av djupa generativa modeller föreslagits. Utöver detta så har en metod för kvantitativ evaulering av kvaliteten hos genererade data set tagits fram. Med hjälp av detta ramverk har två dataset för pupillokalisering genererats med olika generativa modeller. Både väletablerade modeller och en ny modell utvecklad för detta syfte har testats. Den unika modellen visades både kvalitativt och kvantitativt att den genererade de bästa dataseten. Ett antal mindre experiment på standardiserade dataset visade exempel på fall där denna generativa modell kunde förbättra prestandan hos en existerande diskriminativ modell. Resultaten indikerar att generativa modeller kan användas för att augmentera eller ersätta existerande dataset vid träning av diskriminativa modeller.
Lindqvist, Niklas. "Automatic Question Paraphrasing in Swedish with Deep Generative Models." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-294320.
Paraphrase generation refers to the task of, given a sentence or text, automatically generating a paraphrase, that is, another text with the same meaning. It is a fundamental yet challenging task in natural language processing, used in a range of applications such as information retrieval, conversational systems, question answering, etc. In this study, we investigate the problem of paraphrase generation for Swedish questions by evaluating two deep generative models that have shown promising results on paraphrase generation for English questions. The first model is a conditional variational autoencoder (C-VAE). The second model is also a C-VAE but adds a discriminator, which makes the model a generative adversarial network (GAN). In addition to the models above, a non-machine-learning method was implemented as a baseline. The models were evaluated with both quantitative and qualitative measures, including grammatical correctness and semantic equivalence between the paraphrase and the original question. The results show that the deep generative models outperform the baseline on all quantitative metrics. Furthermore, the qualitative evaluation showed that the deep generative models generated grammatically correct questions to a greater extent than the baseline. There was, however, no major difference in semantic equivalence between the paraphrase and the original question across the models.
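Both evaluated models build on a conditional VAE, whose core is the reparameterization trick plus a closed-form KL penalty. A minimal numerical sketch of those two pieces (an illustration in NumPy, not the thesis code, which would use a deep encoder/decoder):

```python
import numpy as np

rng = np.random.default_rng(1)

def reparameterize(mu, log_var):
    # VAE reparameterization trick: z = mu + sigma * eps with
    # eps ~ N(0, I), so the sampling step stays differentiable
    # with respect to the encoder outputs (mu, log_var).
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL(q(z|x, c) || N(0, I)) used in the (C-)VAE loss.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

mu, log_var = np.zeros(16), np.zeros(16)
z = reparameterize(mu, log_var)
# When q is already N(0, I), the KL penalty vanishes.
kl = kl_to_standard_normal(mu, log_var)
```

In the conditional setting, the original question is encoded and concatenated to both the encoder input and the decoder input, so the same latent machinery generates paraphrases conditioned on the source question.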
Gane, Georgiana Andreea. "Building generative models over discrete structures: from graphical models to deep learning." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/121611.
Cataloged from the PDF version of the thesis. Page 173 blank.
Includes bibliographical references (pages 159–172).
The goal of this thesis is to investigate generative models over discrete structures, such as binary grids, alignments or arbitrary graphs. We focused on developing models easy to sample from, and we approached the task from two broad perspectives: defining models via structured potential functions, and via neural network based decoders. In the first case, we investigated Perturbation Models, a family of implicit distributions where samples emerge through optimization of randomized potential functions. Designed explicitly for efficient sampling, Perturbation Models are strong candidates for building generative models over structures, and the leading open questions pertain to understanding the properties of the induced models and developing practical learning algorithms.
In this thesis, we present theoretical results showing that, in contrast to the more established Gibbs models, low-order potential functions, after undergoing randomization and maximization, lead to high-order dependencies in the induced distributions. Furthermore, while conditioning in Gibbs' distributions is straightforward, conditioning in Perturbation Models is typically not, but we theoretically characterize cases where the straightforward approach produces the correct results. Finally, we introduce a new Perturbation Models learning algorithm based on Inverse Combinatorial Optimization. We illustrate empirically both the induced dependencies and the inverse optimization approach, in learning tasks inspired by computer vision problems. In the second case, we sequentialize the structures, converting structure generation into a sequence of discrete decisions, to enable the use of sequential models.
We explore maximum likelihood training with step-wise supervision and continuous relaxations of the intermediate decisions. With respect to intermediate discrete representations, the main directions consist of using gradient estimators or designing continuous relaxations. We discuss these solutions in the context of unsupervised scene understanding with generative models. In particular, we asked whether a continuous relaxation of the counting problem also discovers the objects in an unsupervised fashion (given the increased training stability that continuous relaxations provide) and we proposed an approach based on Adaptive Computation Time (ACT) which achieves the desired result. Finally, we investigated the task of iterative graph generation. We proposed a variational lower-bound to the maximum likelihood objective, where the approximate posterior distribution renormalizes the prior distribution over local predictions which are plausible for the target graph.
For instance, the local predictions may be binary values indicating the presence or absence of an edge indexed by the given time step, for a canonical edge indexing chosen a priori. The plausibility of each local prediction is assessed by solving a combinatorial optimization problem, and we discuss relevant approaches, including an induced sub-graph isomorphism-based algorithm for the generic graph generation case, and a polynomial algorithm for the special case of graph generation resulting from solving graph clustering tasks. In this thesis, we focused on the generic case, and we investigated the approximate posterior's relevance on synthetic graph datasets.
Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science.
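The "optimization of randomized potential functions" idea in the abstract above is easiest to see in its exact single-variable form, the Gumbel-max trick: perturb each potential with i.i.d. Gumbel noise and take the argmax, which samples exactly from the softmax of the potentials. The sketch below is an illustration of that base case, not the thesis's algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

def perturb_and_map(theta, n_samples):
    # Gumbel-max trick: argmax over potentials perturbed with
    # i.i.d. Gumbel noise draws exact samples from softmax(theta).
    g = rng.gumbel(size=(n_samples, len(theta)))
    return np.argmax(theta + g, axis=1)

theta = np.array([2.0, 1.0, 0.0])
samples = perturb_and_map(theta, 200_000)
freq = np.bincount(samples, minlength=3) / len(samples)
target = np.exp(theta) / np.exp(theta).sum()  # softmax(theta)
```

With low-order potentials over many variables, independent low-dimensional perturbations no longer reproduce the Gibbs distribution exactly; characterizing the dependencies the induced distribution does exhibit is precisely the kind of question the thesis addresses.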
Mescheder, Lars Morten. "Stability and Expressiveness of Deep Generative Models." Tübingen: Universitätsbibliothek Tübingen, 2020. http://d-nb.info/1217249257/34.
Rastgoufard, Rastin. "Multi-Label Latent Spaces with Semi-Supervised Deep Generative Models." ScholarWorks@UNO, 2018. https://scholarworks.uno.edu/td/2486.
Douwes, Constance. "On the Environmental Impact of Deep Generative Models for Audio." Electronic thesis or dissertation, Sorbonne Université, 2023. http://www.theses.fr/2023SORUS074.
In this thesis, we investigate the environmental impact of deep learning models for audio generation and aim to put computational cost at the core of the evaluation process. In particular, we focus on different types of deep learning models specialized in raw waveform audio synthesis. These models are now a key component of modern audio systems, and their use has increased significantly in recent years. Their flexibility and generalization capabilities make them powerful tools in many contexts, from text-to-speech synthesis to unconditional audio generation. However, these benefits come at the cost of expensive training sessions on large amounts of data, operated on energy-intensive dedicated hardware, which incurs large greenhouse gas emissions. The measures we use as a scientific community to evaluate our work are at the heart of this problem. Currently, deep learning researchers evaluate their work primarily based on improvements in accuracy, log-likelihood, reconstruction, or opinion scores, all of which overshadow the computational cost of generative models. We therefore propose a new methodology based on Pareto optimality to help the community better evaluate the significance of their work while raising the energy footprint, and ultimately carbon emissions, to the same level of interest as sound quality. In the first part of this thesis, we present a comprehensive report on the use of various evaluation measures of deep generative models for audio synthesis tasks. Even though computational efficiency is increasingly discussed, quality measurements are the most commonly used metrics to evaluate deep generative models, while energy consumption is almost never mentioned. We address this issue by estimating the carbon cost of training generative models and comparing it to other noteworthy carbon costs to demonstrate that it is far from insignificant.
In the second part of this thesis, we propose a large-scale evaluation of pervasive neural vocoders, a class of generative models used for speech generation conditioned on mel-spectrograms. We introduce a multi-objective analysis based on Pareto optimality of both quality, from human-based evaluation, and energy consumption. Within this framework, we show that lighter models can perform better than more costly ones. By relying on a novel definition of efficiency, we intend to provide practitioners with a decision basis for choosing the best model based on their requirements. In the last part of the thesis, we propose a method to reduce the inference cost of neural vocoders, based on quantized neural networks. We show a significant gain in memory size and give some hints for the future use of these models on embedded hardware. Overall, we provide keys to better understand the impact of deep generative models for audio synthesis, as well as a new framework for developing models while accounting for their environmental impact. We hope that this work raises awareness of the need to investigate energy-efficient models simultaneously with high perceived quality.
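The Pareto-optimality criterion used in the multi-objective analysis above can be stated in a few lines: a model stays on the front unless some other model is at least as good on both objectives and strictly better on one. The model names and (quality, energy) figures below are invented purely for illustration:

```python
def pareto_front(models):
    # Keep models that no other model dominates. A dominator is at
    # least as good on both objectives (quality higher is better,
    # energy lower is better) and strictly better on at least one.
    front = []
    for name, q, e in models:
        dominated = any(
            q2 >= q and e2 <= e and (q2 > q or e2 < e)
            for _, q2, e2 in models
        )
        if not dominated:
            front.append(name)
    return front

# Hypothetical vocoders: (name, perceptual quality, energy cost).
models = [("big", 4.2, 9.0), ("mid", 4.1, 3.0),
          ("small", 3.9, 1.0), ("weak", 3.5, 5.0)]
front = pareto_front(models)  # "weak" is dominated by "mid"
```

Note that the most expensive model can remain on the front alongside much lighter ones: the front exposes the quality-versus-energy trade-off instead of crowning a single winner, which is what lets practitioners pick the cheapest model that meets their quality requirement.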
Patsanis, Alexandros. "Network Anomaly Detection and Root Cause Analysis with Deep Generative Models." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-397367.
Alabdallah, Abdallah. "Human Understandable Interpretation of Deep Neural Networks Decisions Using Generative Models." Thesis, Högskolan i Halmstad, Halmstad Embedded and Intelligent Systems Research (EIS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-41035.
Texte intégralLivres sur le sujet "Deep Generatve Models"
Mukhopadhyay, Anirban, Ilkay Oksuz, Sandy Engelhardt, Dajiang Zhu, and Yixuan Yuan, eds. Deep Generative Models. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-18576-2.
Mukhopadhyay, Anirban, Ilkay Oksuz, Sandy Engelhardt, Dajiang Zhu, and Yixuan Yuan, eds. Deep Generative Models. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-53767-7.
Engelhardt, Sandy, Ilkay Oksuz, Dajiang Zhu, Yixuan Yuan, Anirban Mukhopadhyay, Nicholas Heller, Sharon Xiaolei Huang, Hien Nguyen, Raphael Sznitman, and Yuan Xue, eds. Deep Generative Models, and Data Augmentation, Labelling, and Imperfections. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88210-5.
Ali, Hazrat, Mubashir Husain Rehmani, and Zubair Shah, eds. Advances in Deep Generative Models for Medical Artificial Intelligence. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-46341-9.
Bongard, Josh. Modeling self and others. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780199674923.003.0011.
Mukhopadhyay, Anirban, Dajiang Zhu, Sandy Engelhardt, Ilkay Oksuz, and Yixuan Yuan. Deep Generative Models: Second MICCAI Workshop, DGM4MICCAI 2022, Held in Conjunction with MICCAI 2022, Singapore, September 22, 2022, Proceedings. Springer, 2022.
Mukhopadhyay, Anirban, Dajiang Zhu, Sandy Engelhardt, Ilkay Oksuz, and Yixuan Yuan. Deep Generative Models, and Data Augmentation, Labelling, and Imperfections: First Workshop, DGM4MICCAI 2021, and First Workshop, DALI 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, October 1, 2021, Proceedings. Springer International Publishing AG, 2021.
Constantinesco, Thomas. Writing Pain in the Nineteenth-Century United States. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780192855596.001.0001.
Joho, Tobias. Thucydides, Epic, and Tragedy. Edited by Sara Forsdyke, Edith Foster, and Ryan Balot. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780199340385.013.40.
Aguayo, Angela J. Documentary Resistance. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780190676216.001.0001.
Texte intégralChapitres de livres sur le sujet "Deep Generatve Models"
Vasudevan, Shriram K., Sini Raj Pulari, and Subashri Vasudevan. "Generative Models." In Deep Learning, 209–25. New York: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003185635-9.
Calin, Ovidiu. "Generative Models." In Deep Learning Architectures, 591–609. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-36721-3_19.
Tomczak, Jakub M. "Autoregressive Models." In Deep Generative Modeling, 13–25. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-93158-2_2.
Tomczak, Jakub M. "Energy-Based Models." In Deep Generative Modeling, 143–58. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-93158-2_6.
Tomczak, Jakub M. "Flow-Based Models." In Deep Generative Modeling, 27–56. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-93158-2_3.
Tomczak, Jakub M. "Latent Variable Models." In Deep Generative Modeling, 57–127. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-93158-2_4.
Schön, Julian, Raghavendra Selvan, and Jens Petersen. "Interpreting Latent Spaces of Generative Models for Medical Images Using Unsupervised Methods." In Deep Generative Models, 24–33. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-18576-2_3.
Mensing, Daniel, Jochen Hirsch, Markus Wenzel, and Matthias Günther. "3D (c)GAN for Whole Body MR Synthesis." In Deep Generative Models, 97–105. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-18576-2_10.
Dietrichstein, Marc, David Major, Martin Trapp, Maria Wimmer, Dimitrios Lenis, Philip Winter, Astrid Berg, Theresa Neubauer, and Katja Bühler. "Anomaly Detection Using Generative Models and Sum-Product Networks in Mammography Scans." In Deep Generative Models, 77–86. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-18576-2_8.
Trullo, Roger, Quoc-Anh Bui, Qi Tang, and Reza Olfati-Saber. "Image Translation Based Nuclei Segmentation for Immunohistochemistry Images." In Deep Generative Models, 87–96. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-18576-2_9.
Texte intégralActes de conférences sur le sujet "Deep Generatve Models"
Han, Tian, Jiawen Wu, and Ying Nian Wu. "Replicating Active Appearance Model by Generator Network." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/305.
Misra, Siddharth, Jungang Chen, Polina Churilova, and Yusuf Falola. "Generative Artificial Intelligence for Geomodeling." In International Petroleum Technology Conference. IPTC, 2024. http://dx.doi.org/10.2523/iptc-23477-ms.
Li, Chen, Chikashige Yamanaka, Kazuma Kaitoh, and Yoshihiro Yamanishi. "Transformer-based Objective-reinforced Generative Adversarial Network to Generate Desired Molecules." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/539.
Chen, Wei, and Faez Ahmed. "PaDGAN: A Generative Adversarial Network for Performance Augmented Diverse Designs." In ASME 2020 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/detc2020-22729.
Xiao, Chaowei, Bo Li, Jun-yan Zhu, Warren He, Mingyan Liu, and Dawn Song. "Generating Adversarial Examples with Adversarial Networks." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/543.
Oussidi, Achraf, and Azeddine Elhassouny. "Deep generative models: Survey." In 2018 International Conference on Intelligent Systems and Computer Vision (ISCV). IEEE, 2018. http://dx.doi.org/10.1109/isacv.2018.8354080.
Liu, Bochao, Pengju Wang, Shikun Li, Dan Zeng, and Shiming Ge. "Model Conversion via Differentially Private Data-Free Distillation." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/243.
Cintas, Celia, Payel Das, Brian Quanz, Girmaw Abebe Tadesse, Skyler Speakman, and Pin-Yu Chen. "Towards Creativity Characterization of Generative Models via Group-Based Subset Scanning." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/683.
Çelik, Mustafa, and Ahmet Haydar Örnek. "GAN-Based Data Augmentation and Anonymization for Mask Classification." In 10th International Conference on Natural Language Processing (NLP 2021). Academy and Industry Research Collaboration Center (AIRCC), 2021. http://dx.doi.org/10.5121/csit.2021.112315.
Lopez, Christian, Scarlett R. Miller, and Conrad S. Tucker. "Human Validation of Computer vs Human Generated Design Sketches." In ASME 2018 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/detc2018-85698.
Texte intégralRapports d'organisations sur le sujet "Deep Generatve Models"
Ogunbire, Abimbola, Panick Kalambay, Hardik Gajera, and Srinivas Pulugurtha. Deep Learning, Machine Learning, or Statistical Models for Weather-related Crash Severity Prediction. Mineta Transportation Institute, December 2023. http://dx.doi.org/10.31979/mti.2023.2320.
Huang, Lei, Meng Song, Hui Shen, Huixiao Hong, Ping Gong, Hong-Wen Deng, and Chaoyang Zhang. Deep learning methods for omics data imputation. Engineer Research and Development Center (U.S.), February 2024. http://dx.doi.org/10.21079/11681/48221.
Notin, Pascal. Designing ultrastable carbonic anhydrase with deep generative models and high-throughput assays. Experiment, October 2023. http://dx.doi.org/10.18258/57574.
Sadoune, Igor, Marcelin Joanis, and Andrea Lodi. Implementing a Hierarchical Deep Learning Approach for Simulating multilevel Auction Data. CIRANO, September 2023. http://dx.doi.org/10.54932/lqog8430.
Bidier, S., U. Khristenko, A. Kodakkal, C. Soriano, and R. Rossi. D7.4 Final report on Stochastic Optimization results. Scipedia, 2022. http://dx.doi.org/10.23967/exaqute.2022.3.02.
Malej, Matt, and Fengyan Shi. Suppressing the pressure-source instability in modeling deep-draft vessels with low under-keel clearance in FUNWAVE-TVD. Engineer Research and Development Center (U.S.), May 2021. http://dx.doi.org/10.21079/11681/40639.
Mohammadi, N., D. Corrigan, A. A. Sappin, and N. Rayner. Evidence for a Neoarchean to earliest-Paleoproterozoic mantle metasomatic event prior to formation of the Mesoproterozoic-age Strange Lake REE deposit, Newfoundland and Labrador, and Quebec, Canada. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/330866.
Huijser, M. P., J. W. Duffield, C. Neher, A. P. Clevenger, and T. McGuire. Final Report 2022: Update and expansion of the WVC mitigation measures and their cost-benefit model. Nevada Department of Transportation, October 2022. http://dx.doi.org/10.15788/ndot2022.10.
Mbani, Benson, Timm Schoening, and Jens Greinert. Automated and Integrated Seafloor Classification Workflow (AI-SCW). GEOMAR, May 2023. http://dx.doi.org/10.3289/sw_2_2023.
Buesseler, Ken, Daniele Bianchi, Fei Chai, Jay T. Cullen, Margaret Estapa, Nicholas Hawco, Seth John, et al. Paths forward for exploring ocean iron fertilization. Woods Hole Oceanographic Institution, October 2023. http://dx.doi.org/10.1575/1912/67120.