Academic literature on the topic "Transformeur robuste"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the topical lists of articles, books, theses, conference papers, and other academic sources on the topic "Transformeur robuste".
Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Transformeur robuste"
Yang, Mingze, Hai Zhu, Runzhe Zhu, Fei Wu, Ling Yin, and Yuncheng Yang. "WiTransformer: A Novel Robust Gesture Recognition Sensing Model with WiFi". Sensors 23, no. 5 (February 27, 2023): 2612. http://dx.doi.org/10.3390/s23052612.
Santamaria-Bonfil, Guillermo, Gustavo Arroyo-Figueroa, Miguel A. Zuniga-Garcia, Carlos Gustavo Azcarraga Ramos, and Ali Bassam. "Power Transformer Fault Detection: A Comparison of Standard Machine Learning and autoML Approaches". Energies 17, no. 1 (December 22, 2023): 77. http://dx.doi.org/10.3390/en17010077.
Wei, Jiangshu, Jinrong Chen, Yuchao Wang, Hao Luo, and Wujie Li. "Improved deep learning image classification algorithm based on Swin Transformer V2". PeerJ Computer Science 9 (October 30, 2023): e1665. http://dx.doi.org/10.7717/peerj-cs.1665.
Ottele, Andy, and Rahmat Shoureshi. "Neural Network-Based Adaptive Monitoring System for Power Transformer". Journal of Dynamic Systems, Measurement, and Control 123, no. 3 (February 11, 1999): 512–17. http://dx.doi.org/10.1115/1.1387248.
Sai, K. N., A. Galodha, P. Jain, and D. Sharma. "DEEP AND MACHINE LEARNING FOR MONITORING GROUNDWATER STORAGE BASINS AND HYDROLOGICAL CHANGES USING THE GRAVITY RECOVERY AND CLIMATE EXPERIMENT (GRACE) SATELLITE MISSION AND SENTINEL-1 DATA FOR THE GANGA RIVER BASIN IN THE INDIAN REGION". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W2-2023 (December 13, 2023): 1265–70. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w2-2023-1265-2023.
Okeke, Remigius Obinna, Akan Ime Ibokette, Onuh Matthew Ijiga, Lawrence Anebi Enyejo, Godslove Isenyo Ebiega, and Odeyemi Michael Olumubo. "THE RELIABILITY ASSESSMENT OF POWER TRANSFORMERS". Engineering Science & Technology Journal 5, no. 4 (April 3, 2024): 1149–72. http://dx.doi.org/10.51594/estj.v5i4.981.
Paul, Sayak, and Pin-Yu Chen. "Vision Transformers Are Robust Learners". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 2071–81. http://dx.doi.org/10.1609/aaai.v36i2.20103.
Jancarczyk, Daniel, Marcin Bernaś, and Tomasz Boczar. "Classification of Low Frequency Signals Emitted by Power Transformers Using Sensors and Machine Learning Methods". Sensors 19, no. 22 (November 10, 2019): 4909. http://dx.doi.org/10.3390/s19224909.
Cortés-Caicedo, Brandon, Oscar Danilo Montoya, and Andrés Arias-Londoño. "Application of the Hurricane Optimization Algorithm to Estimate Parameters in Single-Phase Transformers Considering Voltage and Current Measures". Computers 11, no. 4 (April 11, 2022): 55. http://dx.doi.org/10.3390/computers11040055.
Xie, Fei, Dalong Zhang, and Chengming Liu. "Global–Local Self-Attention Based Transformer for Speaker Verification". Applied Sciences 12, no. 19 (October 10, 2022): 10154. http://dx.doi.org/10.3390/app121910154.
Texto completoTesis sobre el tema "Transformeur robuste"
Douzon, Thibault. "Language models for document understanding". Electronic Thesis or Diss., Lyon, INSA, 2023. http://www.theses.fr/2023ISAL0075.
Every day, countless documents are received and processed by companies worldwide. In an effort to reduce the cost of processing each document, the largest companies have resorted to document automation technologies. In an ideal world, a document can be processed without any human intervention: its content is read, and the relevant information is extracted and forwarded to the appropriate service. The state of the art has evolved quickly over the last decades, from rule-based algorithms to statistical models. This thesis focuses on machine learning models for document information extraction.

Recent advances in model architectures for natural language processing have shown the importance of the attention mechanism. Transformers have revolutionized the field by generalizing the use of attention and by pushing self-supervised pre-training to the next level. In the first part, we confirm that transformers with appropriate pre-training are able to perform document understanding tasks with high performance. We show that, when used as token classifiers for information extraction, transformers learn the task exceptionally efficiently compared to recurrent networks: they need only a small proportion of the training data to approach maximum performance. This highlights the importance of self-supervised pre-training for subsequent fine-tuning.

In the following part, we design specialized pre-training tasks to better prepare the model for specific data distributions such as business documents. By taking into account the specificities of business documents, such as their table structure and their over-representation of numeric figures, we are able to target specific skills that the model will need in its future tasks. We show that these new tasks improve the model's downstream performance, even for small models. Using this pre-training approach, we reach the performance of significantly bigger models without any additional cost during fine-tuning or inference.

Finally, in the last part, we address one drawback of the transformer architecture: its computational cost on long sequences. We show that efficient architectures derived from the classic transformer require fewer resources and perform better on long sequences. However, because of how they approximate the attention computation, efficient models suffer from a small but significant performance drop on short sequences compared to classical architectures. This motivates the use of different models depending on the input length and enables concatenating multimodal inputs into a single sequence.
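To make the token-classification setup described in this abstract concrete, the following is a minimal sketch (not taken from the thesis) of a transformer encoder that assigns a field label to each document token; the vocabulary size, label set, and hyperparameters are placeholder assumptions chosen only for illustration.

```python
# Minimal sketch of transformer-based token classification for document
# information extraction. All sizes and label names are illustrative
# assumptions, not values from the cited thesis.
import torch
import torch.nn as nn

class TokenTagger(nn.Module):
    def __init__(self, vocab_size=30000, d_model=128, n_heads=4,
                 n_layers=2, n_labels=5, max_len=512):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_labels)  # one label per token

    def forward(self, token_ids):
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        x = self.tok_emb(token_ids) + self.pos_emb(positions)
        return self.head(self.encoder(x))  # (batch, seq_len, n_labels)

# Hypothetical field labels for a business document such as an invoice.
LABELS = ["O", "INVOICE_NUMBER", "DATE", "TOTAL", "VENDOR"]
model = TokenTagger(n_labels=len(LABELS))
dummy_ids = torch.randint(0, 30000, (1, 64))   # one 64-token document
predicted = model(dummy_ids).argmax(-1)        # one field tag per token
```

In practice, the encoder weights would come from a self-supervised pre-trained model before fine-tuning on labelled documents, which is the point the abstract makes about data efficiency.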
Sanchez, Sébastien. "Contribution à la conception de coupleurs magnétiques robustes pour convertisseurs multicellulaires parallèles". PhD thesis, Toulouse, INPT, 2015. http://oatao.univ-toulouse.fr/14499/1/sanchez_partie_1_sur_2_2.pdf.
Athanasius, Germane. "Robust decentralised output feedback control of interconnected grid system". Information Technology & Electrical Engineering, Australian Defence Force Academy, UNSW. Awarded by: University of New South Wales - Australian Defence Force Academy, 2008. http://handle.unsw.edu.au/1959.4/39591.
Swain, Sushree Diptimayee. "Design and Experimental Realization of Robust and Adaptive Control Schemes for Hybrid Series Active Power Filter". Thesis, 2017. http://ethesis.nitrkl.ac.in/9369/1/2017_PhD_SDSwain_512EE109.pdf.
Pradhan, Prangya Parimita. "Robust Control Schemes for a Doubly Fed Induction Generator based Wind Energy Conversion System". Thesis, 2022. http://ethesis.nitrkl.ac.in/10344/1/2022_PhD_PPPradhan_514EE1009_Robust.pdf.
Deng, Yu-Shiu, and 鄧宇修. "A Fast and Robust Local Descriptor Using Features in The Intensity and Transformed Domains". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/38514468834376849455.
National Chung Hsing University (國立中興大學)
Department of Computer Science and Engineering (資訊科學與工程學系)
100 (ROC calendar year)
Feature point matching aims to find point correspondences between two images of the same scene or object, and it is a vital step in many image processing techniques, such as image matching, object recognition, and other vision-based applications. However, various kinds of transformation often exist between two images and can lead to poor matching results. The best way to address this problem is to construct a local descriptor from robust, invariant local features extracted from the interest region, which yields better matching results. SIFT is the most robust local descriptor and has been widely used in many applications, but because it is high-dimensional and its feature extraction is complex, its main disadvantage is that it is very time-consuming. In order to construct a local descriptor with efficient computation and good matching performance, we draw on the Contrast Context Histogram (CCH), which offers good matching performance with fast computation, and on low-frequency DCT coefficients, which preserve the important information of an image; we then propose a fast and robust local descriptor that combines features from the intensity and transformed domains. The experimental results show that the proposed local descriptor achieves good matching performance under different kinds of transformation. Compared with other local descriptors with good matching performance, the proposed descriptor is much faster at feature extraction and has a lower dimension, so it has more potential for real-time applications.
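To illustrate the general idea of combining intensity-domain and transformed-domain features, below is a minimal sketch, not the authors' implementation: it concatenates a simple center-contrast histogram of a patch (a simplified stand-in for a CCH-like feature) with the patch's low-frequency 2D DCT coefficients. The patch size, number of bins, and coefficient block are arbitrary assumptions.

```python
# Illustrative sketch of a local descriptor mixing intensity-domain and
# DCT-domain features; all parameters are assumptions, not thesis values.
import numpy as np
from scipy.fft import dctn

def toy_descriptor(patch, n_bins=8, k=4):
    """patch: 2D grayscale array centered on an interest point."""
    patch = patch.astype(np.float64)

    # Intensity-domain part: histogram of contrasts relative to the
    # center pixel (a crude stand-in for a contrast-context feature).
    center = patch[patch.shape[0] // 2, patch.shape[1] // 2]
    contrasts = (patch - center).ravel()
    hist, _ = np.histogram(contrasts, bins=n_bins, range=(-255, 255))
    hist = hist / max(hist.sum(), 1)

    # Transformed-domain part: low-frequency 2D DCT coefficients, which
    # concentrate most of the patch's energy in a few values.
    coeffs = dctn(patch, norm="ortho")[:k, :k].ravel()

    desc = np.concatenate([hist, coeffs])
    return desc / (np.linalg.norm(desc) + 1e-8)   # normalize for matching

# Example: describe a random 16x16 patch standing in for an interest region.
patch = np.random.default_rng(0).integers(0, 256, size=(16, 16))
print(toy_descriptor(patch).shape)   # (8 + 16,) = (24,)
```

Keeping only a small k×k block of DCT coefficients is what keeps the descriptor low-dimensional and cheap to compute, which is the trade-off the abstract contrasts with SIFT.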
"Robust Control of Wide Bandgap Power Electronics Device Enabled Smart Grid". Doctoral diss., 2017. http://hdl.handle.net/2286/R.I.46215.
Doctoral Dissertation, Electrical Engineering, 2017.
Books on the topic "Transformeur robuste"
Helfont, Samuel. A Transformed Religious Landscape. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190843311.003.0009.
Rid, Thomas, and Marc Hecker. War 2.0. Praeger Security International, 2009. http://dx.doi.org/10.5040/9798216033455.
Pearson, David. Rebel Music in the Triumphant Empire. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780197534885.001.0001.
Woźniak, Monika, and Maria Wyke, eds. The Novel of Neronian Rome and its Multimedial Transformations. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198867531.001.0001.
Texto completoCapítulos de libros sobre el tema "Transformeur robuste"
Barnes, John R. "Inductors and Transformers". In Robust Electronic Design Reference Book, 126–55. New York, NY: Springer US, 2004. http://dx.doi.org/10.1007/1-4020-7830-7_9.
Liao, Brian Hsuan-Cheng, Chih-Hong Cheng, Hasan Esen, and Alois Knoll. "Are Transformers More Robust? Towards Exact Robustness Verification for Transformers". In Lecture Notes in Computer Science, 89–103. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40923-3_8.
Gu, Jindong, Volker Tresp, and Yao Qin. "Are Vision Transformers Robust to Patch Perturbations?" In Lecture Notes in Computer Science, 404–21. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19775-8_24.
Wang, Libo, Si Chen, Zhen Wang, Da-Han Wang, and Shunzhi Zhu. "Graph Attention Transformer Network for Robust Visual Tracking". In Communications in Computer and Information Science, 165–76. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1639-9_14.
Zhang, Yaning, Tianyi Wang, Minglei Shu, and Yinglong Wang. "A Robust Lightweight Deepfake Detection Network Using Transformers". In Lecture Notes in Computer Science, 275–88. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20862-1_20.
Mvula, Paul K., Paula Branco, Guy-Vincent Jourdan, and Herna L. Viktor. "HEART: Heterogeneous Log Anomaly Detection Using Robust Transformers". In Discovery Science, 673–87. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-45275-8_45.
Peiris, Himashi, Munawar Hayat, Zhaolin Chen, Gary Egan, and Mehrtash Harandi. "A Robust Volumetric Transformer for Accurate 3D Tumor Segmentation". In Lecture Notes in Computer Science, 162–72. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-16443-9_16.
Najari, Naji, Samuel Berlemont, Grégoire Lefebvre, Stefan Duffner, and Christophe Garcia. "RESIST: Robust Transformer for Unsupervised Time Series Anomaly Detection". In Advanced Analytics and Learning on Temporal Data, 66–82. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-24378-3_5.
Pande, Jay, Wookhee Min, Randall D. Spain, Jason D. Saville, and James Lester. "Robust Team Communication Analytics with Transformer-Based Dialogue Modeling". In Lecture Notes in Computer Science, 639–50. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-36272-9_52.
Almalik, Faris, Mohammad Yaqub, and Karthik Nandakumar. "Self-Ensembling Vision Transformer (SEViT) for Robust Medical Image Classification". In Lecture Notes in Computer Science, 376–86. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-16437-8_36.
Texto completoActas de conferencias sobre el tema "Transformeur robuste"
Ottele, Andy, Rahmat Shoureshi, Duane Torgerson, and John Work. "Neural Network-Based Adaptive Monitoring System for Power Transformer". In ASME 1999 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 1999. http://dx.doi.org/10.1115/imece1999-0069.
Wen, Qingsong, Tian Zhou, Chaoli Zhang, Weiqi Chen, Ziqing Ma, Junchi Yan, and Liang Sun. "Transformers in Time Series: A Survey". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/759.
Li, Jinpeng, Haibo Jin, Shengcai Liao, Ling Shao, and Pheng-Ann Heng. "RePFormer: Refinement Pyramid Transformer for Robust Facial Landmark Detection". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/152.
Roncancio, José. "Uso del Toolbox Simulink de Matlab en la enseñanza del cálculo de la disponibilidad programada en un sistema de producción". In Ingeniería para transformar territorios. Asociación Colombiana de Facultades de Ingeniería - ACOFI, 2023. http://dx.doi.org/10.26507/paper.3151.
Liu, Qinqing, Fei Dou, Meijian Yang, Ezana Amdework, Guiling Wang, and Jinbo Bi. "Customized Positional Encoding to Combine Static and Time-varying Data in Robust Representation Learning for Crop Yield Prediction". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/676.
León Montañez, Johan Sebastián, Jennifer Gabriela Ortiz Mora, Camilo Hernández Acevedo, Camilo Ayala García, Édgar Alejandro Marañón León, Andrés González Barrios, Óscar Alberto Álvarez Solano, and Niyireth Porras Holguín. "Metodologías de extracción de aceite de coraza de marañón: evaluación de impacto ambiental por emisiones de CO2". In Ingeniería para transformar territorios. Asociación Colombiana de Facultades de Ingeniería - ACOFI, 2023. http://dx.doi.org/10.26507/paper.3159.
Fernández, Javier Ernesto, Javier Leyton, María Cristina Ledezma, Patricia Acosta, and Andrés Quiroga. "Experiencias en el control del patógeno emergente Helicobacter pylori en sistemas de abastecimiento de agua rural por medio de la tecnología de Filtración en Múltiples Etapas". In Ingeniería para transformar territorios. Asociación Colombiana de Facultades de Ingeniería - ACOFI, 2023. http://dx.doi.org/10.26507/paper.2989.
Lin, Yuzhang, and Ali Abur. "Robust transformer tap estimation". In 2017 IEEE Manchester PowerTech. IEEE, 2017. http://dx.doi.org/10.1109/ptc.2017.7980919.
Mao, Xiaofeng, Gege Qi, Yuefeng Chen, Xiaodan Li, Ranjie Duan, Shaokai Ye, Yuan He, and Hui Xue. "Towards Robust Vision Transformer". In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.01173.
Prato García, Dorian. "Hidrógeno en Colombia: evaluando el potencial de la agroindustria para una transición energética sostenible". In Ingeniería para transformar territorios. Asociación Colombiana de Facultades de Ingeniería - ACOFI, 2023. http://dx.doi.org/10.26507/paper.2780.
Texto completoInformes sobre el tema "Transformeur robuste"
Jones, Emily, Beatriz Kira, Anna Sands, and Danilo B. Garrido Alves. The UK and Digital Trade: Which way forward? Blavatnik School of Government, February 2021. http://dx.doi.org/10.35489/bsg-wp-2021/038.