Ready-made bibliography on the topic "Transformeur robuste"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles
Table of contents
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Transformeur robuste".
An "Add to bibliography" button appears next to every work in the list. Click it, and we will automatically generate a bibliographic reference for the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a publication as a ".pdf" file and read its abstract online, whenever the relevant details are available in the metadata.
Journal articles on the topic "Transformeur robuste"
Yang, Mingze, Hai Zhu, Runzhe Zhu, Fei Wu, Ling Yin, and Yuncheng Yang. "WiTransformer: A Novel Robust Gesture Recognition Sensing Model with WiFi". Sensors 23, no. 5 (February 27, 2023): 2612. http://dx.doi.org/10.3390/s23052612.
Santamaria-Bonfil, Guillermo, Gustavo Arroyo-Figueroa, Miguel A. Zuniga-Garcia, Carlos Gustavo Azcarraga Ramos, and Ali Bassam. "Power Transformer Fault Detection: A Comparison of Standard Machine Learning and autoML Approaches". Energies 17, no. 1 (December 22, 2023): 77. http://dx.doi.org/10.3390/en17010077.
Wei, Jiangshu, Jinrong Chen, Yuchao Wang, Hao Luo, and Wujie Li. "Improved deep learning image classification algorithm based on Swin Transformer V2". PeerJ Computer Science 9 (October 30, 2023): e1665. http://dx.doi.org/10.7717/peerj-cs.1665.
Ottele, Andy, and Rahmat Shoureshi. "Neural Network-Based Adaptive Monitoring System for Power Transformer". Journal of Dynamic Systems, Measurement, and Control 123, no. 3 (February 11, 1999): 512–17. http://dx.doi.org/10.1115/1.1387248.
Sai, K. N., A. Galodha, P. Jain, and D. Sharma. "DEEP AND MACHINE LEARNING FOR MONITORING GROUNDWATER STORAGE BASINS AND HYDROLOGICAL CHANGES USING THE GRAVITY RECOVERY AND CLIMATE EXPERIMENT (GRACE) SATELLITE MISSION AND SENTINEL-1 DATA FOR THE GANGA RIVER BASIN IN THE INDIAN REGION". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W2-2023 (December 13, 2023): 1265–70. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w2-2023-1265-2023.
Remigius Obinna Okeke, Akan Ime Ibokette, Onuh Matthew Ijiga, Enyejo, Lawrence Anebi, Godslove Isenyo Ebiega, and Odeyemi Michael Olumubo. "THE RELIABILITY ASSESSMENT OF POWER TRANSFORMERS". Engineering Science & Technology Journal 5, no. 4 (April 3, 2024): 1149–72. http://dx.doi.org/10.51594/estj.v5i4.981.
Paul, Sayak, and Pin-Yu Chen. "Vision Transformers Are Robust Learners". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 2071–81. http://dx.doi.org/10.1609/aaai.v36i2.20103.
Jancarczyk, Daniel, Marcin Bernaś, and Tomasz Boczar. "Classification of Low Frequency Signals Emitted by Power Transformers Using Sensors and Machine Learning Methods". Sensors 19, no. 22 (November 10, 2019): 4909. http://dx.doi.org/10.3390/s19224909.
Cortés-Caicedo, Brandon, Oscar Danilo Montoya, and Andrés Arias-Londoño. "Application of the Hurricane Optimization Algorithm to Estimate Parameters in Single-Phase Transformers Considering Voltage and Current Measures". Computers 11, no. 4 (April 11, 2022): 55. http://dx.doi.org/10.3390/computers11040055.
Xie, Fei, Dalong Zhang, and Chengming Liu. "Global–Local Self-Attention Based Transformer for Speaker Verification". Applied Sciences 12, no. 19 (October 10, 2022): 10154. http://dx.doi.org/10.3390/app121910154.
Pełny tekst źródłaRozprawy doktorskie na temat "Transformeur robuste"
Douzon, Thibault. "Language models for document understanding". Electronic Thesis or Diss., Lyon, INSA, 2023. http://www.theses.fr/2023ISAL0075.
Pełny tekst źródłaEvery day, an uncountable amount of documents are received and processed by companies worldwide. In an effort to reduce the cost of processing each document, the largest companies have resorted to document automation technologies. In an ideal world, a document can be automatically processed without any human intervention: its content is read, and information is extracted and forwarded to the relevant service. The state-of-the-art techniques have quickly evolved in the last decades, from rule-based algorithms to statistical models. This thesis focuses on machine learning models for document information extraction. Recent advances in model architecture for natural language processing have shown the importance of the attention mechanism. Transformers have revolutionized the field by generalizing the use of attention and by pushing self-supervised pre-training to the next level. In the first part, we confirm that transformers with appropriate pre-training were able to perform document understanding tasks with high performance. We show that, when used as a token classifier for information extraction, transformers are able to exceptionally efficiently learn the task compared to recurrent networks. Transformers only need a small proportion of the training data to reach close to maximum performance. This highlights the importance of self-supervised pre-training for future fine-tuning. In the following part, we design specialized pre-training tasks, to better prepare the model for specific data distributions such as business documents. By acknowledging the specificities of business documents such as their table structure and their over-representation of numeric figures, we are able to target specific skills useful for the model in its future tasks. We show that those new tasks improve the model's downstream performances, even with small models. 
Using this pre-training approach, we can match the performance of significantly larger models at no additional cost during fine-tuning or inference. Finally, in the last part, we address a drawback of the transformer architecture: its computational cost on long sequences. We show that efficient architectures derived from the classic transformer require fewer resources and perform better on long sequences. However, because of how they approximate the attention computation, efficient models suffer a small but significant performance drop on short sequences compared with classical architectures. This motivates choosing the model according to input length, and makes it possible to concatenate multimodal inputs into a single sequence.
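The trade-off this abstract describes, exact attention being quadratic in sequence length while efficient variants approximate it in linear time, can be sketched with a minimal NumPy example. This is an illustration only: the `elu(x) + 1` feature map is one published kernelization of attention (linear transformers), not the specific architectures studied in the thesis.

```python
import numpy as np

def softmax_attention(Q, K, V):
    # Exact attention: materializes an n x n score matrix, so cost is O(n^2) in sequence length.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def linear_attention(Q, K, V, eps=1e-6):
    # Kernelized approximation: with a feature map phi, attention factorizes as
    # phi(Q) @ (phi(K)^T V), so cost grows linearly with sequence length n.
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1, keeps features positive
    Qp, Kp = phi(Q), phi(K)
    kv = Kp.T @ V                   # d x d summary, independent of n
    z = Qp @ Kp.sum(axis=0) + eps   # per-query normalizer
    return (Qp @ kv) / z[:, None]

rng = np.random.default_rng(0)
n, d = 8, 4
Q, K, V = rng.normal(size=(3, n, d))
exact = softmax_attention(Q, K, V)
approx = linear_attention(Q, K, V)
print(exact.shape, approx.shape)  # both (8, 4)
```

The two outputs have the same shape but differ slightly in value, which mirrors the small accuracy gap between efficient and exact attention that the abstract reports on short sequences.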
Sanchez, Sébastien. "Contribution à la conception de coupleurs magnétiques robustes pour convertisseurs multicellulaires parallèles". Phd thesis, Toulouse, INPT, 2015. http://oatao.univ-toulouse.fr/14499/1/sanchez_partie_1_sur_2_2.pdf.
Athanasius, Germane (Information Technology & Electrical Engineering, Australian Defence Force Academy, UNSW). "Robust decentralised output feedback control of interconnected grid system". Awarded by: University of New South Wales – Australian Defence Force Academy, 2008. http://handle.unsw.edu.au/1959.4/39591.
Swain, Sushree Diptimayee. "Design and Experimental Realization of Robust and Adaptive Control Schemes for Hybrid Series Active Power Filter". Thesis, 2017. http://ethesis.nitrkl.ac.in/9369/1/2017_PhD_SDSwain_512EE109.pdf.
Pradhan, Prangya Parimita. "Robust Control Schemes for a Doubly Fed Induction Generator based Wind Energy Conversion System". Thesis, 2022. http://ethesis.nitrkl.ac.in/10344/1/2022_PhD_PPPradhan_514EE1009_Robust.pdf.
Deng, Yu-Shiu, and 鄧宇修. "A Fast and Robust Local Descriptor Using Features in The Intensity and Transformed Domains". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/38514468834376849455.
Pełny tekst źródła國立中興大學
資訊科學與工程學系
100
Feature point matching finds the point correspondences between two images of the same scene or object, and it is a vital part of many image processing techniques, such as image matching, object recognition, and other vision-based applications. However, various kinds of transformation often exist between the two images and degrade the matching result. The best remedy is to build a local descriptor from robust, invariant local features extracted from the interest region, which yields better matching. SIFT is the most robust local descriptor and has been widely used in many applications, but because it is high-dimensional and its feature extraction is complex, its main disadvantage is that it is very time-consuming. To construct a local descriptor that is both efficient to compute and accurate to match, we draw on the Contrast Context Histogram (CCH), which matches well with fast computation, and on low-frequency DCT coefficients, which retain the important information of an image; we then propose a fast and robust local descriptor that combines features from the intensity and transformed domains. The experimental results show that the proposed descriptor matches well under different kinds of transformation. Compared with other local descriptors with good matching performance, it is much faster at feature extraction and has a lower dimension, so it has greater potential for real-time applications.
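The descriptor idea in this abstract, pairing CCH-style intensity-contrast statistics with low-frequency DCT coefficients, can be sketched as follows. This is a hypothetical simplification: the thesis's actual CCH binning and zig-zag coefficient ordering are omitted, and the patch size and coefficient count are arbitrary choices.

```python
import numpy as np

def dct2(block):
    # Orthonormal 2-D DCT-II built from the 1-D transform matrix C.
    n = block.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block @ C.T

def local_descriptor(patch, n_dct=6):
    # Intensity part: positive/negative contrast sums relative to the center pixel (CCH-like).
    patch = patch.astype(float)
    center = patch[patch.shape[0] // 2, patch.shape[1] // 2]
    diff = patch - center
    contrast = np.array([diff[diff > 0].sum(), diff[diff < 0].sum()])
    # Transformed part: keep only the low-frequency corner of the DCT spectrum.
    coeffs = dct2(patch)[:n_dct, :n_dct].ravel()
    desc = np.concatenate([contrast, coeffs])
    return desc / (np.linalg.norm(desc) + 1e-12)  # unit-normalize for distance-based matching

patch = np.arange(64, dtype=float).reshape(8, 8)
d = local_descriptor(patch)
print(d.shape)  # (38,)
```

The resulting 38-dimensional vector is far smaller than SIFT's 128 dimensions, which is the kind of dimensionality reduction the abstract credits for the speed advantage.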
"Robust Control of Wide Bandgap Power Electronics Device Enabled Smart Grid". Doctoral diss., 2017. http://hdl.handle.net/2286/R.I.46215.
Dissertation/Thesis
Doctoral Dissertation Electrical Engineering 2017
Books on the topic "Transformeur robuste"
Helfont, Samuel. A Transformed Religious Landscape. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190843311.003.0009.
Rid, Thomas, and Marc Hecker. War 2.0. Praeger Security International, 2009. http://dx.doi.org/10.5040/9798216033455.
Pearson, David. Rebel Music in the Triumphant Empire. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780197534885.001.0001.
Woźniak, Monika, and Maria Wyke, eds. The Novel of Neronian Rome and its Multimedial Transformations. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198867531.001.0001.
Pełny tekst źródłaCzęści książek na temat "Transformeur robuste"
Barnes, John R. "Inductors and Transformers". In Robust Electronic Design Reference Book, 126–55. New York, NY: Springer US, 2004. http://dx.doi.org/10.1007/1-4020-7830-7_9.
Liao, Brian Hsuan-Cheng, Chih-Hong Cheng, Hasan Esen, and Alois Knoll. "Are Transformers More Robust? Towards Exact Robustness Verification for Transformers". In Lecture Notes in Computer Science, 89–103. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40923-3_8.
Gu, Jindong, Volker Tresp, and Yao Qin. "Are Vision Transformers Robust to Patch Perturbations?" In Lecture Notes in Computer Science, 404–21. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19775-8_24.
Wang, Libo, Si Chen, Zhen Wang, Da-Han Wang, and Shunzhi Zhu. "Graph Attention Transformer Network for Robust Visual Tracking". In Communications in Computer and Information Science, 165–76. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1639-9_14.
Zhang, Yaning, Tianyi Wang, Minglei Shu, and Yinglong Wang. "A Robust Lightweight Deepfake Detection Network Using Transformers". In Lecture Notes in Computer Science, 275–88. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20862-1_20.
Mvula, Paul K., Paula Branco, Guy-Vincent Jourdan, and Herna L. Viktor. "HEART: Heterogeneous Log Anomaly Detection Using Robust Transformers". In Discovery Science, 673–87. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-45275-8_45.
Peiris, Himashi, Munawar Hayat, Zhaolin Chen, Gary Egan, and Mehrtash Harandi. "A Robust Volumetric Transformer for Accurate 3D Tumor Segmentation". In Lecture Notes in Computer Science, 162–72. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-16443-9_16.
Najari, Naji, Samuel Berlemont, Grégoire Lefebvre, Stefan Duffner, and Christophe Garcia. "RESIST: Robust Transformer for Unsupervised Time Series Anomaly Detection". In Advanced Analytics and Learning on Temporal Data, 66–82. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-24378-3_5.
Pande, Jay, Wookhee Min, Randall D. Spain, Jason D. Saville, and James Lester. "Robust Team Communication Analytics with Transformer-Based Dialogue Modeling". In Lecture Notes in Computer Science, 639–50. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-36272-9_52.
Almalik, Faris, Mohammad Yaqub, and Karthik Nandakumar. "Self-Ensembling Vision Transformer (SEViT) for Robust Medical Image Classification". In Lecture Notes in Computer Science, 376–86. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-16437-8_36.
Pełny tekst źródłaStreszczenia konferencji na temat "Transformeur robuste"
Ottele, Andy, Rahmat Shoureshi, Duane Torgerson, and John Work. "Neural Network-Based Adaptive Monitoring System for Power Transformer". In ASME 1999 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 1999. http://dx.doi.org/10.1115/imece1999-0069.
Wen, Qingsong, Tian Zhou, Chaoli Zhang, Weiqi Chen, Ziqing Ma, Junchi Yan, and Liang Sun. "Transformers in Time Series: A Survey". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/759.
Li, Jinpeng, Haibo Jin, Shengcai Liao, Ling Shao, and Pheng-Ann Heng. "RePFormer: Refinement Pyramid Transformer for Robust Facial Landmark Detection". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/152.
Roncancio, José. "Uso del Toolbox Simulink de Matlab en la enseñanza del cálculo de la disponibilidad programada en un sistema de producción". In Ingeniería para transformar territorios. Asociación Colombiana de Facultades de Ingeniería - ACOFI, 2023. http://dx.doi.org/10.26507/paper.3151.
Liu, Qinqing, Fei Dou, Meijian Yang, Ezana Amdework, Guiling Wang, and Jinbo Bi. "Customized Positional Encoding to Combine Static and Time-varying Data in Robust Representation Learning for Crop Yield Prediction". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/676.
León Montañez, Johan Sebastián, Jennifer Gabriela Ortiz Mora, Camilo Hernández Acevedo, Camilo Ayala García, Édgar Alejandro Marañón León, Andrés González Barrios, Óscar Alberto Álvarez Solano, and Niyireth Porras Holguín. "Metodologías de extracción de aceite de coraza de marañón: evaluación de impacto ambiental por emisiones de CO2". In Ingeniería para transformar territorios. Asociación Colombiana de Facultades de Ingeniería - ACOFI, 2023. http://dx.doi.org/10.26507/paper.3159.
Fernández, Javier Ernesto, Javier Leyton, María Cristina Ledezma, Patricia Acosta, and Andrés Quiroga. "Experiencias en el control del patógeno emergente Helicobacter pylori en sistemas de abastecimiento de agua rural por medio de la tecnología de Filtración en Múltiples Etapas". In Ingeniería para transformar territorios. Asociación Colombiana de Facultades de Ingeniería - ACOFI, 2023. http://dx.doi.org/10.26507/paper.2989.
Lin, Yuzhang, and Ali Abur. "Robust transformer tap estimation". In 2017 IEEE Manchester PowerTech. IEEE, 2017. http://dx.doi.org/10.1109/ptc.2017.7980919.
Mao, Xiaofeng, Gege Qi, Yuefeng Chen, Xiaodan Li, Ranjie Duan, Shaokai Ye, Yuan He, and Hui Xue. "Towards Robust Vision Transformer". In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.01173.
Prato García, Dorian. "Hidrógeno en Colombia: evaluando el potencial de la agroindustria para una transición energética sostenible". In Ingeniería para transformar territorios. Asociación Colombiana de Facultades de Ingeniería - ACOFI, 2023. http://dx.doi.org/10.26507/paper.2780.
Pełny tekst źródłaRaporty organizacyjne na temat "Transformeur robuste"
Jones, Emily, Beatriz Kira, Anna Sands, and Danilo B. Garrido Alves. The UK and Digital Trade: Which way forward? Blavatnik School of Government, February 2021. http://dx.doi.org/10.35489/bsg-wp-2021/038.