Ready-made bibliography on the topic "Transfert de style"
Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles
Table of contents
Browse lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Transfert de style".
An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, when the corresponding details are available in the source's metadata.
Journal articles on the topic "Transfert de style"
Duez, Bernard. "Le transfert comme paradigme processuel de la groupalité psychique : de l'habitude au style". Revue de psychothérapie psychanalytique de groupe 45, no. 2 (2005): 31. http://dx.doi.org/10.3917/rppg.045.0031.
Full text source
Okubo, Miki. "La notion "gothique" traduite dans la culture pop du Japon contemporain". Jangada: crítica | literatura | artes 1, no. 17 (6.08.2021): 390–408. http://dx.doi.org/10.35921/jangada.v1i17.351.
Full text source
Koffi, Vivi, and Jean Lorrain. "L’intégration du successeur dans l’équipe de gestion des entreprises familiales : le cas des femmes chefs d’entreprise". Revue internationale P.M.E. 18, no. 3-4 (16.02.2012): 73–92. http://dx.doi.org/10.7202/1008483ar.
Full text source
Gold, Iwala. "L’emprunt En Traduction". Tasambo Journal of Language, Literature, and Culture 3, no. 01 (15.02.2024): 390–95. http://dx.doi.org/10.36349/tjllc.2024.v03i01.045.
Full text source
Lonvaud-Funel, Aline. "La désacidification biologique des vins. Etat de la question, perspectives d'avenir". OENO One 28, no. 2 (30.06.1994): 161. http://dx.doi.org/10.20870/oeno-one.1994.28.2.1149.
Full text source
Feuillat, François, Jean-René Perrin, René Keller, Danièle Aubert, Pierre Gelhaye, Claude Houssement, Jean Perrin, and Pierre Michel. "Simulation expérimentale de « l'expérience tonneau ». Mesure des cinétiques d'imprégnation du liquide dans le bois et d'évaporation de surface". OENO One 28, no. 3 (30.09.1994): 227. http://dx.doi.org/10.20870/oeno-one.1994.28.3.1140.
Full text source
Bouillon, Pierrette, and Katharina Boesefeldt. "Problèmes de traduction automatique dans les sous-langages des bulletins d’avalanches". Meta 37, no. 4 (30.09.2002): 635–46. http://dx.doi.org/10.7202/002108ar.
Full text source
Bara, Florence, and Marie-France Morin. "Est-il nécessaire d’enseigner l’écriture script en première année ? Les effets du style d’écriture sur le lien lecture/écriture". Nouveaux cahiers de la recherche en éducation 12, no. 2 (30.07.2013): 149–60. http://dx.doi.org/10.7202/1017456ar.
Full text source
Dorin, Stéphane. "Style du Velours : sociologie du transfert de capital symbolique entre Andy Warhol et le Velvet Underground (1965-1967)". A contrario 3, no. 1 (2005): 45. http://dx.doi.org/10.3917/aco.031.67.
Full text source
Handayani, Sri. "La chanson folklorique enfantine comme media de l’apprentissage interculturel et du transfert des valeurs morales". Digital Press Social Sciences and Humanities 3 (2019): 00040. http://dx.doi.org/10.29037/digitalpress.43313.
Pełny tekst źródłaRozprawy doktorskie na temat "Transfert de style"
Mohammed, Omar. "Méthodes d'apprentissage approfondi pour l'extraction et le transfert de style". Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAT035.
Full text source
One aspect of a successful human-machine interface (e.g. human-robot interaction, chatbots, speech, handwriting, etc.) is the ability to have a personalized interaction. This affects the overall human experience and allows for a more fluent interaction. At the moment, a lot of work uses machine learning to model such interactions. However, these models do not address the issue of personalized behavior: they average over the examples from the different people in the training set. Identifying human styles (personas) opens the possibility of biasing the models' output to take human preferences into account. In this thesis, we focus on the problem of styles in the context of handwriting. Defining and extracting handwriting styles is a challenging problem, since there is no formal definition of those styles (i.e., it is an ill-posed problem). Styles are both social - depending on the writer's training, especially in middle school - and idiosyncratic - depending on the writer's letter shaping (roundness, sharpness, etc.) and force distribution over time. As a consequence, there are no easy or generic metrics to measure the quality of style in a machine's behavior. We may also want to change the task or adapt to a new person. Collecting data in the human-machine interface domain can be quite expensive and time-consuming, and although the new task usually has much in common with the old one, traditional machine learning techniques fail to take advantage of this commonality, leading to a quick degradation in performance. Thus, one objective of this thesis is to study and evaluate the idea of transferring knowledge about styles between different tasks within the machine learning paradigm.
Available to us is the IRONOFF dataset, an online handwriting dataset with 410 writers and ~25K examples of uppercase letters, lowercase letters, and digit drawings. For transfer learning, we used an extra dataset, QuickDraw!, a sketch drawing dataset containing ~50 million drawings across 345 categories. The major contributions of this thesis are: 1) Proposing a pipeline to study the problem of styles in handwriting, including methodology, benchmarks, and evaluation metrics. We choose the temporal generative models paradigm in deep learning to generate drawings and evaluate their proximity/relevance to the intended ground-truth drawings. We propose two metrics, evaluating the curvature and the length of the generated drawings. To ground those metrics, we propose multiple benchmarks whose relative power we know in advance, and then verify that the metrics actually respect that relative power relationship. 2) Proposing a framework to study and extract styles, and verifying its advantage against the previously proposed benchmarks. We settled on a deep conditioned autoencoder that summarizes and extracts the style information without needing to encode the task identity (since it is given as a condition). We validate this framework against the previously proposed benchmarks using our evaluation metrics. We also visualize the extracted styles, leading to some exciting outcomes. 3) Using the proposed framework, proposing a way to transfer information about styles between different tasks, and a protocol to evaluate the quality of the transfer. We leverage the deep conditioned autoencoder by extracting its encoder part - which we believe holds the relevant information about the styles - and using it in new models trained on new tasks. We extensively test this paradigm over a range of tasks on both the IRONOFF and QuickDraw! datasets. We show that we can successfully transfer style information between different tasks.
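The conditioned-autoencoder idea described in this abstract can be illustrated with a minimal sketch (hypothetical shapes and layer sizes, not the thesis's actual architecture): the encoder maps a drawing to a low-dimensional style vector, while the decoder receives that vector together with a one-hot task (character) condition, so the latent code is free to carry style rather than task identity.

```python
import numpy as np

rng = np.random.default_rng(0)

STROKE_LEN, STROKE_DIM = 50, 3          # (dx, dy, pen) points, as in online handwriting
STYLE_DIM, N_TASKS, HIDDEN = 8, 26, 64  # hypothetical sizes

# Random weights stand in for trained parameters.
W_enc = rng.normal(0, 0.1, (STROKE_LEN * STROKE_DIM, STYLE_DIM))
W_dec = rng.normal(0, 0.1, (STYLE_DIM + N_TASKS, HIDDEN))
W_out = rng.normal(0, 0.1, (HIDDEN, STROKE_LEN * STROKE_DIM))

def encode(drawing):
    """Map a drawing to a low-dimensional style vector."""
    return np.tanh(drawing.reshape(-1) @ W_enc)

def decode(style, task_id):
    """Reconstruct a drawing from a style vector plus a one-hot task condition."""
    cond = np.zeros(N_TASKS)
    cond[task_id] = 1.0  # task identity is given, so the latent need not encode it
    h = np.tanh(np.concatenate([style, cond]) @ W_dec)
    return (h @ W_out).reshape(STROKE_LEN, STROKE_DIM)

drawing = rng.normal(size=(STROKE_LEN, STROKE_DIM))
style = encode(drawing)
# Transfer across tasks: reuse the writer's style vector for a different letter.
recon_a = decode(style, task_id=0)
recon_b = decode(style, task_id=1)
print(style.shape, recon_a.shape)  # (8,) (50, 3)
```

Transferring the trained encoder to a new task, as in the third contribution, would amount to freezing `encode` and training only a new decoder.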
Cifka, Ondrej. "Deep learning methods for music style transfer". Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT029.
Full text source
Recently, deep learning methods have enabled transforming musical material in a data-driven manner. The focus of this thesis is a family of tasks which we refer to as (one-shot) music style transfer, where the goal is to transfer the style of one musical piece or fragment onto another. In the first part of this work, we focus on supervised methods for symbolic music accompaniment style transfer, aiming to transform a given piece by generating a new accompaniment for it in the style of another piece. The method we have developed is based on supervised sequence-to-sequence learning using recurrent neural networks (RNNs) and leverages a synthetic parallel (pairwise aligned) dataset generated for this purpose using existing accompaniment generation software. We propose a set of objective metrics to evaluate performance on this new task, and we show that the system succeeds in generating an accompaniment in the desired style while following the harmonic structure of the input. In the second part, we investigate a more basic question: the role of positional encodings (PE) in music generation using Transformers. In particular, we propose stochastic positional encoding (SPE), a novel form of PE capturing relative positions while remaining compatible with a recently proposed family of efficient Transformers. We demonstrate that SPE allows for better extrapolation beyond the training sequence length than the commonly used absolute PE. Finally, in the third part, we turn from symbolic music to audio and address the problem of timbre transfer. Specifically, we are interested in transferring the timbre of an audio recording of a single musical instrument onto another such recording while preserving the pitch content of the latter. We present a novel method for this task, based on an extension of the vector-quantized variational autoencoder (VQ-VAE), along with a simple self-supervised learning strategy designed to obtain disentangled representations of timbre and pitch. As in the first part, we design a set of objective metrics for the task and show that the proposed method outperforms existing ones.
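The VQ-VAE extension mentioned in this abstract rests on vector quantization: each encoder output frame is snapped to its nearest codebook vector, so a discrete code can capture pitch-like content while a separate pathway carries timbre. A minimal sketch of that quantization step (toy dimensions, not the thesis's model):

```python
import numpy as np

rng = np.random.default_rng(1)

K, D = 16, 4                    # codebook size and code dimension (toy values)
codebook = rng.normal(size=(K, D))
z_e = rng.normal(size=(10, D))  # encoder outputs for 10 frames

# Nearest-neighbour lookup: squared distance from every frame to every code.
dists = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
indices = dists.argmin(axis=1)  # discrete codes, one per frame
z_q = codebook[indices]         # quantized latents fed to the decoder
```

In a trained VQ-VAE the codebook is learned jointly with the encoder and decoder; here random vectors only demonstrate the lookup mechanics.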
Fares, Mireille. "Multimodal Expressive Gesturing With Style". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS017.
Full text source
The generation of expressive gestures allows Embodied Conversational Agents (ECAs) to articulate speech intent and content in a human-like fashion. The central theme of the manuscript is to leverage and control ECAs' behavioral expressivity by modelling the complex multimodal behavior that humans employ during communication. The driving forces of the thesis are twofold: (1) to exploit speech prosody, visual prosody, and language with the aim of synthesizing expressive and human-like behaviors for ECAs; (2) to control the style of the synthesized gestures so that we can generate them in the style of any speaker. With these motivations in mind, we first propose a semantically aware and speech-driven facial and head gesture synthesis model trained on the TEDx Corpus, which we collected. We then propose ZS-MSTM 1.0, an approach to synthesize stylized upper-body gestures, driven by the content of a source speaker's speech and corresponding to the style of any target speaker, seen or unseen by our model. It is trained on the PATS Corpus, which includes multimodal data of speakers with different behavioral styles. ZS-MSTM 1.0 is not limited to PATS speakers: it can generate gestures in the style of any new speaker without further training or fine-tuning, rendering our approach zero-shot. Behavioral style is modelled from multimodal speaker data - language, body gestures, and speech - and is independent of the speaker's identity ("ID"). We additionally propose ZS-MSTM 2.0 to generate stylized facial gestures in addition to the upper-body gestures. We train ZS-MSTM 2.0 on the PATS Corpus, which we extended to include dialog acts and 2D facial landmarks.
Shen, Tianxiao. "Language style transfer". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/117822.
Full text source
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 41-45).
This thesis studies style transfer on the basis of non-parallel text. This is an instance of a broad family of problems including machine translation, decipherment, and attribute modification. The key challenge is to separate the content from the style in an unsupervised manner. We assume a shared latent content distribution across different text corpora, and propose a method that leverages refined alignment of latent representations to perform style transfer. The transferred sentences from one style should match example sentences from the other style as a population. To demonstrate the flexibility of the proposed model, we test it on three tasks: sentiment modification, decipherment of word substitution ciphers, and word order recovery. In both automatic and human evaluation, our method achieves strong performance.
by Tianxiao Shen.
S.M.
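The shared-latent-content assumption behind the thesis above can be caricatured with a toy numerical experiment (entirely illustrative, not the thesis's actual model): if two corpora share a content distribution and differ by a style offset in latent space, then re-centering a latent code from one corpus onto the other corpus's distribution performs a crude "style transfer" whose outputs match the target corpus as a population.

```python
import numpy as np

rng = np.random.default_rng(2)

D = 5
content = rng.normal(size=(1000, D))  # shared latent content distribution
style_a = np.full(D, -2.0)            # style modelled as a fixed offset (toy)
style_b = np.full(D, +3.0)
corpus_a = content[:500] + style_a
corpus_b = content[500:] + style_b

# "Transfer": align corpus A's latent distribution onto corpus B's.
transferred = corpus_a - corpus_a.mean(axis=0) + corpus_b.mean(axis=0)

# As a population, transferred samples now match corpus B, not corpus A.
gap_to_b = np.linalg.norm(transferred.mean(axis=0) - corpus_b.mean(axis=0))
gap_to_a = np.linalg.norm(transferred.mean(axis=0) - corpus_a.mean(axis=0))
print(gap_to_b < 1e-9, gap_to_a > 1.0)  # True True
```

The actual method aligns full distributions of learned representations adversarially rather than matching means, but the population-level matching criterion is the same.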
Hart, David Marvin. "Light-Field Style Transfer". BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7763.
Full text source
Graffieti, Gabriele. "Style Transfer with Generative Adversarial Networks". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/17015/.
Full text source
Matthews, Nicholas (Nicholas J.). "Evaluating style transfer in natural language". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119734.
Full text source
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 46-47).
Style transfer is an active area of research growing in popularity in the natural language setting. The goal of this thesis is to present a comprehensive review of the style transfer tasks used to date, analyze these tasks, and delineate important properties and candidate tasks for future methods researchers. Several challenges still exist, including the difficulty of distinguishing between content and style in a sentence. While some state-of-the-art models attempt to overcome this problem, even tasks as simple as sentiment transfer remain non-trivial. Problems of granularity, transferability, and distinguishability have yet to be solved. I provide a comprehensive analysis of the popular sentiment transfer task along with a number of metrics that highlight its shortcomings. Finally, I introduce possible new tasks for consideration, news outlet style transfer and non-parallel error correction, and provide a similar analysis of the feasibility of using these tasks as style transfer baselines.
by Nicholas Matthews.
M. Eng.
Battilana, Pietro. "Convolutional Neural Networks for Image Style Transfer". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16770/.
Full text source
Shih, YiChang. "Data-driven photographic style using local transfer". Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99846.
Full text source
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 139-154).
After taking pictures, photographers often seek to convey their unique moods by altering the style of their photographs, which can involve meticulous contrast management, lighting, dodging, and burning. In this sense, not only are advanced photographers concerned about their pictures' styles; casual photographers who take pictures with cellphone cameras also process their pictures using built-in applications to adjust the image's luminance, coloring, and details. In general, photographers who stylize pictures give them new, different visual appearances while also preserving the original content. In this context, we investigate novel image stylization problems, including reproducing the precise time of day when the lighting and atmosphere can make a landscape glow, and making a portrait's style resemble that created by a renowned photographer. Given an already captured image, however, automatically achieving given styles is challenging. In fact, changing the appearance in a photograph to mimic another time of day requires the analysis and modeling of complex 3-D physical light interactions in the scene, while reproducing a portrait photographer's unique style requires computers to acquire artistic tastes and a glimpse of the artist's creative process. In this dissertation, we sidestep these AI-complete problems and instead leverage the power of data. We exploit an image database consisting of time-lapse data describing variations in scene appearance during the course of an entire day, and stylish portraits that were deliberately processed by artists. To leverage these data, we present new algorithms that put input images in dense, local correspondence with examples. In our first method, we change the time of day with a single image as the input, which we put in correspondence with a reference time-lapse video. We then extract the local appearance transformations between different frames of the reference and apply them to the input.
In our second method, we transfer the style of a portrait onto a new input by way of local and multi-scale transformations. We demonstrate our methods on public datasets and a large set of photos downloaded from the Internet. We show that we can successfully handle lighting at different times of day and styles by a variety of different artists.
by YiChang Shih.
Ph. D.
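At their simplest, the "local appearance transformations" described in the abstract above can be approximated by matching the per-channel mean and standard deviation of corresponding local patches between an example pair, then applying the same shift to the input. This is a statistics-matching sketch in the spirit of classic color transfer, not Shih's full algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)

def match_stats(patch, ref_patch, eps=1e-8):
    """Give `patch` the per-channel mean/std of `ref_patch`."""
    mu, sigma = patch.mean(axis=(0, 1)), patch.std(axis=(0, 1))
    mu_r, sigma_r = ref_patch.mean(axis=(0, 1)), ref_patch.std(axis=(0, 1))
    return (patch - mu) / (sigma + eps) * sigma_r + mu_r

def local_transfer(img, ref, tile=8):
    """Apply statistics matching tile by tile, so the edit stays local."""
    out = np.empty_like(img)
    h, w, _ = img.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            out[y:y+tile, x:x+tile] = match_stats(
                img[y:y+tile, x:x+tile], ref[y:y+tile, x:x+tile])
    return out

day = rng.uniform(0.4, 1.0, (32, 32, 3))   # stand-in for a daytime frame
dusk = rng.uniform(0.0, 0.4, (32, 32, 3))  # stand-in for a dusk frame
styled = local_transfer(day, dusk)
```

Working tile by tile rather than globally is what lets spatially varying effects, such as a sky darkening faster than the ground, carry over from the reference.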
Tao, Joakim, i David Thimrén. "Smoothening of Software documentation : comparing a self-made sequence to sequence model to a pre-trained model GPT-2". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-178186.
Full text source
This thesis was presented on June 22, 2021; the presentation was held online via Microsoft Teams.
Books on the topic "Transfert de style"
Style and the successful girl: Transform your look, transform your life. New York: Gotham Books, 2013.
Find full text source
Stencil style: Ideas and projects to transform your home. London: Ward Lock, 1995.
Find full text source
Riley, Lesley. Creative image transfer--any artist, any style, any surface: 16 new mixed-media projects using transfer artist paper. Lafayette, CA: C&T Publishing, 2014.
Find full text source
Bump it up: Transform your pregnancy into the ultimate style statement. New York: Ballantine Books, 2010.
Find full text source
Sternberg, Irv, ed. How to run your business so you can leave it in style. New York, NY: Amacom, American Management Association, 1990.
Find full text source
Carroll, Kathryn B., and John H. Brown (1947-), eds. The completely revised how to run your business so you can leave it in style. 2nd ed. [Denver, Colo.]: Business Enterprise Press, 1997.
Find full text source
Rosenberg, Merrick. Taking flight!: Master the four behavioral styles and transform your career, your relationships-- your life. Upper Saddle River, N.J: FT Press, 2013.
Find full text source
Hughes, Mike. Tweak to transform: Improving teaching: a practical handbook for school leaders. Stafford: Network Educational Press, 2002.
Find full text source
Wang, Xun (1969-), ed. Ji ceng zhi li mo shi zhuan xing: Yang Cun ge an yan jiu = Administration style transfer at the grass-roots: a case study of Yang Village. Beijing Shi: She hui ke xue wen xian chu ban she, 2008.
Find full text source
Znajdź pełny tekst źródłaCzęści książek na temat "Transfert de style"
Sarang, Poornachandra. "Style Transfer". In Artificial Neural Networks with TensorFlow 2, 577–611. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-6150-7_12.
Full text source
Liu, Shiguang. "Style Transfer". In Synthesis Lectures on Visual Computing: Computer Graphics, Animation, Computational Photography and Imaging, 55–64. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-26030-8_6.
Full text source
Wutt, Karl. "Stile". In Edition Transfer, 54–57. Vienna: Springer Vienna, 2010. http://dx.doi.org/10.1007/978-3-211-99154-1_7.
Full text source
Kim, Sunnie S. Y., Nicholas Kolkin, Jason Salavon, and Gregory Shakhnarovich. "Deformable Style Transfer". In Computer Vision – ECCV 2020, 246–61. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58574-7_15.
Full text source
Chen, Dongdong, Lu Yuan, and Gang Hua. "Deep Style Transfer". In Computer Vision, 1–8. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-03243-2_863-1.
Full text source
Dana, Kristin J. "Texture Style Transfer". In Computational Texture and Patterns, 47–50. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-031-01823-7_6.
Full text source
Paper, David. "Fast Style Transfer". In State-of-the-Art Deep Learning Models in TensorFlow, 295–319. Berkeley, CA: Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-7341-8_12.
Full text source
Chen, Dongdong, Lu Yuan, and Gang Hua. "Deep Style Transfer". In Computer Vision, 269–76. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-63416-2_863.
Full text source
Teoh, Teik Toe, and Zheng Rong. "Neural Style Transfer". In Machine Learning: Foundations, Methodologies, and Applications, 303–19. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-8615-3_19.
Full text source
McAllister, Tyler, and Björn Gambäck. "Music Style Transfer Using Constant-Q Transform Spectrograms". In Artificial Intelligence in Music, Sound, Art and Design, 195–211. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-03789-4_13.
Pełny tekst źródłaStreszczenia konferencji na temat "Transfert de style"
Schaldenbrand, Peter, Zhixuan Liu, and Jean Oh. "StyleCLIPDraw: Coupling Content and Style in Text-to-Drawing Translation". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/688.
Full text source
Ponamaryov, Valeriy, and Victor Kitov. "Image Style Transfer With a Group of Similar Styles". In 33rd International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2023. http://dx.doi.org/10.20948/graphicon-2023-565-571.
Full text source
Chen, Jiafu, Boyan Ji, Zhanjie Zhang, Tianyi Chu, Zhiwen Zuo, Lei Zhao, Wei Xing, and Dongming Lu. "TeSTNeRF: Text-Driven 3D Style Transfer via Cross-Modal Learning". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/642.
Full text source
Yi, Xiaoyuan, Zhenghao Liu, Wenhao Li, and Maosong Sun. "Text Style Transfer via Learning Style Instance Supported Latent Space". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/526.
Full text source
Zuo, Zhiwen, Lei Zhao, Shuobin Lian, Haibo Chen, Zhizhong Wang, Ailin Li, Wei Xing, and Dongming Lu. "Style Fader Generative Adversarial Networks for Style Degree Controllable Artistic Style Transfer". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/693.
Full text source
Lu, Haofei, and Zhizhong Wang. "Universal Video Style Transfer via Crystallization, Separation, and Blending". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/687.
Full text source
Wang, Zhizhong, Lei Zhao, Haibo Chen, Zhiwen Zuo, Ailin Li, Wei Xing, and Dongming Lu. "DivSwapper: Towards Diversified Patch-based Arbitrary Style Transfer". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/690.
Full text source
Yin, Di, Shujian Huang, Xin-Yu Dai, and Jiajun Chen. "Utilizing Non-Parallel Text for Style Transfer by Making Partial Comparisons". In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/747.
Full text source
Srivastava, Hardik, Sneha Sunil, and K. Shantha Kumari. "Neural Text Style Transfer with Custom Language Styles for Personalized Communication Systems". In 2022 International Conference on Knowledge Engineering and Communication Systems (ICKECS). IEEE, 2022. http://dx.doi.org/10.1109/ickecs56523.2022.10059789.
Full text source
Semmo, Amir, Tobias Isenberg, and Jürgen Döllner. "Neural style transfer". In the Symposium. New York, New York, USA: ACM Press, 2017. http://dx.doi.org/10.1145/3092919.3092920.
Pełny tekst źródłaRaporty organizacyjne na temat "Transfert de style"
Stock, Gregory J. An Investigation of Applications of Neural Style Transfer to Forensic Footwear Comparison. Gaithersburg, MD: National Institute of Standards and Technology, 2023. http://dx.doi.org/10.6028/nist.gcr.23-040.
Full text source
Stock, Gregory. An Investigation of Applications of Neural Style Transfer to Forensic Footwear Comparison. Gaithersburg, MD: National Institute of Standards and Technology, 2023. http://dx.doi.org/10.6028/nist.ir.8460.
Full text source
Eyal, Yoram, and Sheila McCormick. Molecular Mechanisms of Pollen-Pistil Interactions in Interspecific Crossing Barriers in the Tomato Family. United States Department of Agriculture, May 2000. http://dx.doi.org/10.32747/2000.7573076.bard.
Full text source
Law, Edward, Samuel Gan-Mor, Hazel Wetzstein, and Dan Eisikowitch. Electrostatic Processes Underlying Natural and Mechanized Transfer of Pollen. United States Department of Agriculture, May 1998. http://dx.doi.org/10.32747/1998.7613035.bard.
Full text source
Xu, Jin-Rong, and Amir Sharon. Comparative studies of fungal pathogeneses in two hemibiotrophs: Magnaporthe grisea and Colletotrichum gloeosporioides. United States Department of Agriculture, May 2008. http://dx.doi.org/10.32747/2008.7695585.bard.
Full text source