Selection of scientific literature on the topic "Transformeur robuste"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Table of contents
Consult the lists of current articles, books, dissertations, reports, and other scientific sources on the topic "Transformeur robuste".
Next to every work in the bibliography, the option "Add to bibliography" is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scientific publication in PDF format and read an online abstract of the work, provided the relevant parameters are present in the metadata.
Journal articles on the topic "Transformeur robuste"
Yang, Mingze, Hai Zhu, Runzhe Zhu, Fei Wu, Ling Yin, and Yuncheng Yang. "WiTransformer: A Novel Robust Gesture Recognition Sensing Model with WiFi". Sensors 23, no. 5 (February 27, 2023): 2612. http://dx.doi.org/10.3390/s23052612.
Santamaria-Bonfil, Guillermo, Gustavo Arroyo-Figueroa, Miguel A. Zuniga-Garcia, Carlos Gustavo Azcarraga Ramos, and Ali Bassam. "Power Transformer Fault Detection: A Comparison of Standard Machine Learning and autoML Approaches". Energies 17, no. 1 (December 22, 2023): 77. http://dx.doi.org/10.3390/en17010077.
Wei, Jiangshu, Jinrong Chen, Yuchao Wang, Hao Luo, and Wujie Li. "Improved deep learning image classification algorithm based on Swin Transformer V2". PeerJ Computer Science 9 (October 30, 2023): e1665. http://dx.doi.org/10.7717/peerj-cs.1665.
Ottele, Andy, and Rahmat Shoureshi. "Neural Network-Based Adaptive Monitoring System for Power Transformer". Journal of Dynamic Systems, Measurement, and Control 123, no. 3 (February 11, 1999): 512–17. http://dx.doi.org/10.1115/1.1387248.
Sai, K. N., A. Galodha, P. Jain, and D. Sharma. "DEEP AND MACHINE LEARNING FOR MONITORING GROUNDWATER STORAGE BASINS AND HYDROLOGICAL CHANGES USING THE GRAVITY RECOVERY AND CLIMATE EXPERIMENT (GRACE) SATELLITE MISSION AND SENTINEL-1 DATA FOR THE GANGA RIVER BASIN IN THE INDIAN REGION". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W2-2023 (December 13, 2023): 1265–70. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w2-2023-1265-2023.
Remigius Obinna Okeke, Akan Ime Ibokette, Onuh Matthew Ijiga, Lawrence Anebi Enyejo, Godslove Isenyo Ebiega, and Odeyemi Michael Olumubo. "THE RELIABILITY ASSESSMENT OF POWER TRANSFORMERS". Engineering Science & Technology Journal 5, no. 4 (April 3, 2024): 1149–72. http://dx.doi.org/10.51594/estj.v5i4.981.
Paul, Sayak, and Pin-Yu Chen. "Vision Transformers Are Robust Learners". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 2071–81. http://dx.doi.org/10.1609/aaai.v36i2.20103.
Jancarczyk, Daniel, Marcin Bernaś, and Tomasz Boczar. "Classification of Low Frequency Signals Emitted by Power Transformers Using Sensors and Machine Learning Methods". Sensors 19, no. 22 (November 10, 2019): 4909. http://dx.doi.org/10.3390/s19224909.
Cortés-Caicedo, Brandon, Oscar Danilo Montoya, and Andrés Arias-Londoño. "Application of the Hurricane Optimization Algorithm to Estimate Parameters in Single-Phase Transformers Considering Voltage and Current Measures". Computers 11, no. 4 (April 11, 2022): 55. http://dx.doi.org/10.3390/computers11040055.
Xie, Fei, Dalong Zhang, and Chengming Liu. "Global–Local Self-Attention Based Transformer for Speaker Verification". Applied Sciences 12, no. 19 (October 10, 2022): 10154. http://dx.doi.org/10.3390/app121910154.
Der volle Inhalt der QuelleDissertationen zum Thema "Transformeur robuste"
Douzon, Thibault. „Language models for document understanding“. Electronic Thesis or Diss., Lyon, INSA, 2023. http://www.theses.fr/2023ISAL0075.
Every day, countless documents are received and processed by companies worldwide. In an effort to reduce the cost of processing each document, the largest companies have resorted to document automation technologies. In an ideal world, a document can be processed automatically without any human intervention: its content is read, and information is extracted and forwarded to the relevant service. The state-of-the-art techniques have evolved quickly in recent decades, from rule-based algorithms to statistical models. This thesis focuses on machine learning models for document information extraction. Recent advances in model architectures for natural language processing have shown the importance of the attention mechanism. Transformers have revolutionized the field by generalizing the use of attention and by pushing self-supervised pre-training to the next level. In the first part, we confirm that transformers with appropriate pre-training are able to perform document understanding tasks with high performance. We show that, when used as token classifiers for information extraction, transformers learn the task far more efficiently than recurrent networks: they need only a small proportion of the training data to approach maximum performance. This highlights the importance of self-supervised pre-training for subsequent fine-tuning. In the following part, we design specialized pre-training tasks to better prepare the model for specific data distributions such as business documents. By acknowledging the specificities of business documents, such as their table structure and their over-representation of numeric figures, we are able to target specific skills useful for the model in its future tasks. We show that these new tasks improve the model's downstream performance, even for small models.
Using this pre-training approach, we are able to reach the performance of significantly bigger models without any additional cost during fine-tuning or inference. Finally, in the last part, we address one drawback of the transformer architecture: its computational cost on long sequences. We show that efficient architectures derived from the classic transformer require fewer resources and perform better on long sequences. However, because of how they approximate the attention computation, efficient models suffer from a small but significant performance drop on short sequences compared to classical architectures. This motivates the use of different models depending on the input length and enables concatenating multimodal inputs into a single sequence.
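The trade-off this abstract describes — full self-attention scales quadratically with sequence length, while efficient variants approximate it — can be illustrated with a minimal NumPy sketch. This is not the thesis's actual models; a simple sliding-window attention stands in for the efficient long-sequence architectures:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def full_attention(Q, K, V):
    # Classic scaled dot-product attention: the (n, n) score matrix
    # makes cost and memory grow quadratically in sequence length n.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores) @ V

def local_attention(Q, K, V, window):
    # Sparse approximation: each query attends only to keys within
    # +/- `window` positions, so cost grows linearly in n. This is the
    # kind of approximation that causes a small accuracy drop on short
    # sequences while saving resources on long ones.
    n, d = Q.shape
    out = np.empty_like(V)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        scores = Q[i] @ K[lo:hi].T / np.sqrt(d)
        out[i] = softmax(scores) @ V[lo:hi]
    return out
```

When the window covers the whole sequence, the approximation coincides with full attention; with a narrow window the outputs differ, which mirrors the performance gap the thesis reports between efficient and classical models on short inputs.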
Sanchez, Sébastien. "Contribution à la conception de coupleurs magnétiques robustes pour convertisseurs multicellulaires parallèles". PhD thesis, Toulouse, INPT, 2015. http://oatao.univ-toulouse.fr/14499/1/sanchez_partie_1_sur_2_2.pdf.
Athanasius, Germane, Information Technology & Electrical Engineering, Australian Defence Force Academy, UNSW. "Robust decentralised output feedback control of interconnected grid system". Awarded by: University of New South Wales - Australian Defence Force Academy, 2008. http://handle.unsw.edu.au/1959.4/39591.
Swain, Sushree Diptimayee. "Design and Experimental Realization of Robust and Adaptive Control Schemes for Hybrid Series Active Power Filter". Thesis, 2017. http://ethesis.nitrkl.ac.in/9369/1/2017_PhD_SDSwain_512EE109.pdf.
Pradhan, Prangya Parimita. "Robust Control Schemes for a Doubly Fed Induction Generator based Wind Energy Conversion System". Thesis, 2022. http://ethesis.nitrkl.ac.in/10344/1/2022_PhD_PPPradhan_514EE1009_Robust.pdf.
Deng, Yu-Shiu (鄧宇修). "A Fast and Robust Local Descriptor Using Features in The Intensity and Transformed Domains". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/38514468834376849455.
National Chung Hsing University, Department of Computer Science and Engineering, academic year 100 (ROC calendar, i.e. 2011–2012).
Feature point matching finds the point correspondences between two images of the same scene or object, and this task is a vital part of many image-processing techniques, such as image matching, object recognition, and other vision-based applications. However, various transformations often exist between the two images and can degrade the matching result. To address this, the best approach is to construct a local descriptor by extracting robust, invariant local features from the interest region, which yields better matching results. SIFT is the most robust local descriptor and has been widely used in many applications, but because it is high-dimensional and its feature extraction is complex, its main disadvantage is that it is very time-consuming. In order to construct a local descriptor with efficient computation and good matching performance, we draw on the Contrast Context Histogram (CCH), which achieves good matching performance with fast computation, and on low-frequency DCT coefficients, which retain the important information of an image. We then propose a fast and robust local descriptor combining features from the intensity and transformed domains. The experimental results show that the proposed local descriptor achieves good matching performance under various kinds of transformation. Compared with other local descriptors of comparable matching performance, the proposed descriptor is much faster at feature extraction and has lower dimensionality, so it has greater potential for use in real-time applications.
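The transformed-domain half of this idea — describe an interest region by its low-frequency 2D DCT coefficients, which concentrate most of the image energy into a short, low-dimensional vector — can be sketched as follows. This is a hypothetical minimal illustration, not the thesis's actual descriptor (which also combines CCH intensity features); the `keep` parameter and normalization are our assumptions:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix: row k holds the k-th cosine basis
    # sampled at pixel centers, scaled so that C @ C.T == I.
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] *= 1 / np.sqrt(n)
    C[1:] *= np.sqrt(2 / n)
    return C

def dct_descriptor(patch, keep=4):
    # 2D DCT of a square patch; keep only the top-left (low-frequency)
    # keep x keep block, which carries most of the patch's energy,
    # then L2-normalize so the descriptor is contrast-insensitive.
    n = patch.shape[0]
    C = dct_matrix(n)
    coeffs = C @ patch @ C.T
    vec = coeffs[:keep, :keep].ravel()
    return vec / (np.linalg.norm(vec) + 1e-12)
```

With `keep=4`, a 16x16 patch is summarized by a 16-dimensional vector instead of 256 raw intensities, which is the kind of dimensionality reduction that makes transformed-domain descriptors fast to match.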
"Robust Control of Wide Bandgap Power Electronics Device Enabled Smart Grid". Doctoral diss., 2017. http://hdl.handle.net/2286/R.I.46215.
Doctoral dissertation, Electrical Engineering, 2017.
Books on the topic "Transformeur robuste"
Helfont, Samuel. A Transformed Religious Landscape. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190843311.003.0009.
Rid, Thomas, and Marc Hecker. War 2.0. Praeger Security International, 2009. http://dx.doi.org/10.5040/9798216033455.
Pearson, David. Rebel Music in the Triumphant Empire. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780197534885.001.0001.
Woźniak, Monika, and Maria Wyke, eds. The Novel of Neronian Rome and its Multimedial Transformations. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198867531.001.0001.
Der volle Inhalt der QuelleBuchteile zum Thema "Transformeur robuste"
Barnes, John R. „Inductors and Transformers“. In Robust Electronic Design Reference Book, 126–55. New York, NY: Springer US, 2004. http://dx.doi.org/10.1007/1-4020-7830-7_9.
Liao, Brian Hsuan-Cheng, Chih-Hong Cheng, Hasan Esen, and Alois Knoll. "Are Transformers More Robust? Towards Exact Robustness Verification for Transformers". In Lecture Notes in Computer Science, 89–103. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40923-3_8.
Gu, Jindong, Volker Tresp, and Yao Qin. "Are Vision Transformers Robust to Patch Perturbations?" In Lecture Notes in Computer Science, 404–21. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19775-8_24.
Wang, Libo, Si Chen, Zhen Wang, Da-Han Wang, and Shunzhi Zhu. "Graph Attention Transformer Network for Robust Visual Tracking". In Communications in Computer and Information Science, 165–76. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1639-9_14.
Zhang, Yaning, Tianyi Wang, Minglei Shu, and Yinglong Wang. "A Robust Lightweight Deepfake Detection Network Using Transformers". In Lecture Notes in Computer Science, 275–88. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20862-1_20.
Mvula, Paul K., Paula Branco, Guy-Vincent Jourdan, and Herna L. Viktor. "HEART: Heterogeneous Log Anomaly Detection Using Robust Transformers". In Discovery Science, 673–87. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-45275-8_45.
Peiris, Himashi, Munawar Hayat, Zhaolin Chen, Gary Egan, and Mehrtash Harandi. "A Robust Volumetric Transformer for Accurate 3D Tumor Segmentation". In Lecture Notes in Computer Science, 162–72. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-16443-9_16.
Najari, Naji, Samuel Berlemont, Grégoire Lefebvre, Stefan Duffner, and Christophe Garcia. "RESIST: Robust Transformer for Unsupervised Time Series Anomaly Detection". In Advanced Analytics and Learning on Temporal Data, 66–82. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-24378-3_5.
Pande, Jay, Wookhee Min, Randall D. Spain, Jason D. Saville, and James Lester. "Robust Team Communication Analytics with Transformer-Based Dialogue Modeling". In Lecture Notes in Computer Science, 639–50. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-36272-9_52.
Almalik, Faris, Mohammad Yaqub, and Karthik Nandakumar. "Self-Ensembling Vision Transformer (SEViT) for Robust Medical Image Classification". In Lecture Notes in Computer Science, 376–86. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-16437-8_36.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Transformeur robuste"
Ottele, Andy, Rahmat Shoureshi, Duane Torgerson und John Work. „Neural Network-Based Adaptive Monitoring System for Power Transformer“. In ASME 1999 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 1999. http://dx.doi.org/10.1115/imece1999-0069.
Wen, Qingsong, Tian Zhou, Chaoli Zhang, Weiqi Chen, Ziqing Ma, Junchi Yan, and Liang Sun. "Transformers in Time Series: A Survey". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/759.
Li, Jinpeng, Haibo Jin, Shengcai Liao, Ling Shao, and Pheng-Ann Heng. "RePFormer: Refinement Pyramid Transformer for Robust Facial Landmark Detection". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/152.
Roncancio, José. "Uso del Toolbox Simulink de Matlab en la enseñanza del cálculo de la disponibilidad programada en un sistema de producción". In Ingeniería para transformar territorios. Asociación Colombiana de Facultades de Ingeniería - ACOFI, 2023. http://dx.doi.org/10.26507/paper.3151.
Liu, Qinqing, Fei Dou, Meijian Yang, Ezana Amdework, Guiling Wang, and Jinbo Bi. "Customized Positional Encoding to Combine Static and Time-varying Data in Robust Representation Learning for Crop Yield Prediction". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/676.
León Montañez, Johan Sebastián, Jennifer Gabriela Ortiz Mora, Camilo Hernández Acevedo, Camilo Ayala García, Édgar Alejandro Marañón León, Andrés González Barrios, Óscar Alberto Álvarez Solano, and Niyireth Porras Holguín. "Metodologías de extracción de aceite de coraza de marañón: evaluación de impacto ambiental por emisiones de CO2". In Ingeniería para transformar territorios. Asociación Colombiana de Facultades de Ingeniería - ACOFI, 2023. http://dx.doi.org/10.26507/paper.3159.
Fernández, Javier Ernesto, Javier Leyton, María Cristina Ledezma, Patricia Acosta, and Andrés Quiroga. "Experiencias en el control del patógeno emergente Helicobacter pylori en sistemas de abastecimiento de agua rural por medio de la tecnología de Filtración en Múltiples Etapas". In Ingeniería para transformar territorios. Asociación Colombiana de Facultades de Ingeniería - ACOFI, 2023. http://dx.doi.org/10.26507/paper.2989.
Lin, Yuzhang, and Ali Abur. "Robust transformer tap estimation". In 2017 IEEE Manchester PowerTech. IEEE, 2017. http://dx.doi.org/10.1109/ptc.2017.7980919.
Mao, Xiaofeng, Gege Qi, Yuefeng Chen, Xiaodan Li, Ranjie Duan, Shaokai Ye, Yuan He, and Hui Xue. "Towards Robust Vision Transformer". In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.01173.
Prato García, Dorian. "Hidrógeno en Colombia: evaluando el potencial de la agroindustria para una transición energética sostenible". In Ingeniería para transformar territorios. Asociación Colombiana de Facultades de Ingeniería - ACOFI, 2023. http://dx.doi.org/10.26507/paper.2780.
Der volle Inhalt der QuelleBerichte der Organisationen zum Thema "Transformeur robuste"
Jones, Emily, Beatriz Kira, Anna Sands und Danilo B. Garrido Alves. The UK and Digital Trade: Which way forward? Blavatnik School of Government, Februar 2021. http://dx.doi.org/10.35489/bsg-wp-2021/038.