Journal articles on the topic "Midi files"

To see the other types of publications on this topic, follow the link: Midi files.

Consult the top 50 journal articles for your research on the topic "Midi files".

Next to each work in the list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, and others.

You can also download the full text of the scholarly publication as a .pdf file and read its online abstract, provided these are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Cavalcanti, Maria Cláudia Reis, Marcelo Trannin Machado, Alessandro de Almeida Castro Cerqueira, Nelson Sampaio Araujo Júnior, and Geraldo Xexéo. "MIDIZ: content based indexing and retrieving MIDI files." Journal of the Brazilian Computer Society 6, no. 2 (1999): 00. http://dx.doi.org/10.1590/s0104-65001999000300002.

2

Järpe, Eric, and Mattias Weckstén. "Velody 2—Resilient High-Capacity MIDI Steganography for Organ and Harpsichord Music." Applied Sciences 11, no. 1 (December 23, 2020): 39. http://dx.doi.org/10.3390/app11010039.

Abstract:
A new method for musical steganography for the MIDI format is presented. The MIDI standard is a user-friendly music technology protocol that is frequently deployed by composers of different levels of ambition. There is to the author’s knowledge no fully implemented and rigorously specified, publicly available method for MIDI steganography. The goal of this study, however, is to investigate how a novel MIDI steganography algorithm can be implemented by manipulation of the velocity attribute subject to restrictions of capacity and security. Many of today’s MIDI steganography methods—less rigorously described in the literature—fail to be resilient to steganalysis. Traces (such as artefacts in the MIDI code which would not occur by the mere generation of MIDI music: MIDI file size inflation, radical changes in mean absolute error or peak signal-to-noise ratio of certain kinds of MIDI events or even audible effects in the stego MIDI file) that could catch the eye of a scrutinizing steganalyst are side-effects of many current methods described in the literature. This steganalysis resilience is an imperative property of the steganography method. However, by restricting the carrier MIDI files to classical organ and harpsichord pieces, the problem of velocities following the mood of the music can be avoided. The proposed method, called Velody 2, is found to be on par with or better than the cutting edge alternative methods regarding capacity and inflation while still possessing a better resilience against steganalysis. An audibility test was conducted to check that there are no signs of audible traces in the stego MIDI files.
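As a concrete illustration of the velocity-manipulation idea summarised above, the sketch below hides payload bits in the least significant bit of note-on velocities. It is not the published Velody 2 scheme (which adds capacity, security and carrier restrictions); it only shows the general mechanism, assuming the mido library and hypothetical file names.

```python
# Minimal illustration of velocity-based MIDI steganography (NOT the Velody 2
# scheme): hide one payload bit in the LSB of each note-on velocity.
import mido

def embed_bits(cover_path, stego_path, payload_bits):
    mid = mido.MidiFile(cover_path)
    bits = iter(payload_bits)
    for track in mid.tracks:
        for msg in track:
            # velocity > 1 so that clearing the LSB can never produce velocity 0,
            # which would turn the note-on into an implicit note-off
            if msg.type == 'note_on' and msg.velocity > 1:
                try:
                    bit = next(bits)
                except StopIteration:
                    break
                msg.velocity = (msg.velocity & ~1) | bit
    mid.save(stego_path)

def extract_bits(stego_path, n_bits):
    bits = []
    for track in mido.MidiFile(stego_path).tracks:
        for msg in track:
            if msg.type == 'note_on' and msg.velocity > 1:
                bits.append(msg.velocity & 1)
    return bits[:n_bits]

embed_bits('cover.mid', 'stego.mid', [1, 0, 1, 1, 0, 0, 1, 0])  # hypothetical files
print(extract_bits('stego.mid', 8))
```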
3

Tan, Jin Jack, Jiun Cai Ong, Kin Keong Chan, Kam Hing How, and Jee Hou Ho. "Development of a Portable Automated Piano Player CantaPlayer." Applied Mechanics and Materials 284-287 (January 2013): 2037–43. http://dx.doi.org/10.4028/www.scientific.net/amm.284-287.2037.

Abstract:
This paper describes the development of a low-cost, compact and portable automated piano player, CantaPlayer. The system accepts digital MIDI (Musical Instrument Digital Interface) files as input and generates pushing actions against piano keys, which in turn produce the sounds of the notes. CantaPlayer uses Pure Data, an audio processing environment, to parse MIDI files and serve as the user interface. The parsed information is sent to Arduino, an open-source microcontroller platform, via serial communication. The Arduino I/O pins are triggered based on the information from Pure Data, activating the connected transistors, which act as switches that draw in a larger power supply to drive the solenoids. The solenoids then push the respective piano keys and produce music. The performance of CantaPlayer is evaluated by examining the synchronization of the note-playing sequence between a source MIDI file and the corresponding reproduced MIDI. Three types of MIDI playing sequences (scale, polyphonic and rapid note switching) were tested and the results were satisfactory.
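The data flow described above (MIDI parsing feeding a microcontroller that switches solenoid drivers) can be sketched on the host side. The published system performs this step in Pure Data; the hypothetical Python equivalent below merely illustrates the idea, assuming the mido and pyserial packages, an invented 3-byte frame format, and a hypothetical port name.

```python
# Host-side sketch of a CantaPlayer-style pipeline: parse a MIDI file and stream
# note events to a microcontroller that switches solenoid drivers.
# Port name, baud rate and the 3-byte frame format are assumptions.
import mido
import serial

arduino = serial.Serial('/dev/ttyACM0', 115200, timeout=1)  # hypothetical port

mid = mido.MidiFile('song.mid')          # hypothetical input file
for msg in mid.play():                   # mido paces the messages in real time
    if msg.type in ('note_on', 'note_off'):
        on = 1 if (msg.type == 'note_on' and msg.velocity > 0) else 0
        # frame: [event flag, MIDI note number, velocity]
        arduino.write(bytes([on, msg.note, msg.velocity]))
arduino.close()
```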
4

Xia, Dan. "Analysis of MIDI Audio Files Based on Method of Information Quantity Estimation." Applied Mechanics and Materials 733 (February 2015): 838–41. http://dx.doi.org/10.4028/www.scientific.net/amm.733.838.

Abstract:
In order to protect the security of information networks, a dedicated MIDI audio steganalysis algorithm is proposed. The least-significant-bit embedding scheme in MIDI audio is discussed, the note velocity parameter is extracted, and steganalysis of MIDI audio based on entropy estimation is analyzed. Experiments show that when the embedding rate is high, the false positive and false negative rates are both low, so entropy estimation can be used as a dedicated MIDI audio steganalysis algorithm.
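A minimal version of the entropy-estimation idea from the abstract: collect the note-on velocities of a MIDI file and compute the Shannon entropy of their distribution, on the premise that LSB embedding flattens that distribution. The threshold in the sketch is illustrative only and not a value from the paper; the mido library and a hypothetical file name are assumed.

```python
# Sketch of a velocity-entropy feature for MIDI steganalysis: LSB embedding tends
# to flatten the velocity histogram, which raises its Shannon entropy.
import math
from collections import Counter
import mido

def velocity_entropy(path):
    velocities = [msg.velocity
                  for track in mido.MidiFile(path).tracks
                  for msg in track
                  if msg.type == 'note_on' and msg.velocity > 0]
    if not velocities:
        return 0.0
    counts = Counter(velocities)
    total = len(velocities)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

H = velocity_entropy('suspect.mid')      # hypothetical file
print('velocity entropy: %.3f bits' % H)
if H > 6.0:                              # illustrative threshold, not from the paper
    print('velocity distribution looks unusually flat -> possible embedding')
```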
5

Lee, Ji-Hye, Svetlana Kim, and Yong-Ik Yoon. "Real-time Orchestra Method using MIDI Files." Journal of the Korea Contents Association 10, no. 4 (April 28, 2010): 91–97. http://dx.doi.org/10.5392/jkca.2010.10.4.091.

6

Chen, Hui E. "The Design and Implementation of Micro Player." Advanced Materials Research 989-994 (July 2014): 4355–57. http://dx.doi.org/10.4028/www.scientific.net/amr.989-994.4355.

Abstract:
Windows Media Player, developed by Microsoft, is a component of Microsoft Windows whose functionality can be extended through plug-ins. In this paper, an ActiveX control is applied to design a single player that can play a variety of media files, including audio files, MIDI files, Windows Media files and MPEG movies.
7

Shen, Hung-Che. "Building a Japanese MIDI-to-Singing song synthesis using an English male voice." MATEC Web of Conferences 201 (2018): 02006. http://dx.doi.org/10.1051/matecconf/201820102006.

Abstract:
This work reports the development of a MIDI-to-Singing song synthesis system that produces audio files from MIDI data and arbitrary Romaji lyrics in Japanese. The MIDI-to-Singing system relies on Flinger (the Festival singer) for singing-voice synthesis. Originally, the MIDI-to-Singing system was developed for English; based on Japanese pronunciation rules, a Japanese MIDI-to-Singing synthesis system was derived from it. For such a language transfer of Festival synthesized singing, the two major tasks are the modification of a phoneset and of a lexicon. In principle, MIDI-to-Singing song synthesis can create singing voices in many languages, but no Japanese Festival diphone voice is currently available, so a voice transformation model in Festival was used to develop the Japanese MIDI-to-Singing synthesis. A song-listening evaluation was conducted, and the result of this voice conversion showed that the synthesized singing voice successfully migrated from English to Japanese with high voice quality.
8

White, Christopher William, and Ian Quinn. "The Yale-Classical Archives Corpus." Empirical Musicology Review 11, no. 1 (July 8, 2016): 50. http://dx.doi.org/10.18061/emr.v11i1.4958.

Abstract:
The Yale-Classical Archives Corpus (YCAC) contains harmonic and rhythmic information for a dataset of Western European Classical art music. This corpus is based on data from classicalarchives.com (http://www.classicalarchives.com/), a repository of thousands of user-generated MIDI representations of pieces from several periods of Western European music history. The YCAC makes available metadata for each MIDI file, as well as a list of pitch simultaneities ("salami slices") in the MIDI file. Metadata include the piece's composer, the composer's country of origin, date of composition, genre (e.g., symphony, piano sonata, nocturne, etc.), instrumentation, meter, and key. The processing step groups the file's pitches into vertical slices each time a pitch is added or subtracted from the texture, recording the slice's offset (measured in the number of quarter notes separating the event from the file's beginning), highest pitch, lowest pitch, prime form, scale-degrees in relation to the global key (as determined by experts), and local key information (as determined by a windowed key-profile analysis). The corpus contains 13,769 MIDI files by 571 composers yielding over 14,051,144 vertical slices. This paper outlines several properties of this corpus, along with a representative study using this dataset.
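The "salami slice" construction described above can be sketched directly: walk a merged stream of MIDI events and, every time a pitch is added to or removed from the sounding texture, record a slice with its offset in quarter notes and the current pitch content. This is a simplification of the YCAC pipeline (no prime forms, no key analysis, and simultaneous onsets yield one slice per event), assuming the mido library and a hypothetical file name.

```python
# Sketch of "salami slicing": one slice per change in the set of sounding pitches,
# with offsets measured in quarter notes from the start of the file.
import mido

def salami_slices(path):
    mid = mido.MidiFile(path)
    merged = mido.merge_tracks(mid.tracks)   # one delta-timed stream of all events
    slices, sounding, ticks = [], set(), 0
    for msg in merged:
        ticks += msg.time                    # delta time in ticks
        changed = False
        if msg.type == 'note_on' and msg.velocity > 0:
            sounding.add(msg.note)
            changed = True
        elif msg.type == 'note_off' or (msg.type == 'note_on' and msg.velocity == 0):
            sounding.discard(msg.note)
            changed = True
        if changed and sounding:
            slices.append({'offset': ticks / mid.ticks_per_beat,   # in quarter notes
                           'pitches': sorted(sounding),
                           'lowest': min(sounding),
                           'highest': max(sounding)})
    return slices

for s in salami_slices('piece.mid')[:5]:     # hypothetical file
    print(s)
```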
9

Qiu, Lvyang, Shuyu Li, and Yunsick Sung. "DBTMPE: Deep Bidirectional Transformers-Based Masked Predictive Encoder Approach for Music Genre Classification." Mathematics 9, no. 5 (March 3, 2021): 530. http://dx.doi.org/10.3390/math9050530.

Abstract:
Music is a type of time-series data. As the size of the data increases, it is a challenge to build robust music genre classification systems from massive amounts of music data. Robust systems require large amounts of labeled music data, which necessitates time- and labor-intensive data-labeling efforts and expert knowledge. This paper proposes a musical instrument digital interface (MIDI) preprocessing method, Pitch to Vector (Pitch2vec), and a deep bidirectional transformers-based masked predictive encoder (MPE) method for music genre classification. The MIDI files are considered as input. MIDI files are converted to the vector sequence by Pitch2vec before being input into the MPE. By unsupervised learning, the MPE based on deep bidirectional transformers is designed to extract bidirectional representations automatically, which are musicological insight. In contrast to other deep-learning models, such as recurrent neural network (RNN)-based models, the MPE method enables parallelization over time-steps, leading to faster training. To evaluate the performance of the proposed method, experiments were conducted on the Lakh MIDI music dataset. During MPE training, approximately 400,000 MIDI segments were utilized for the MPE, for which the recovery accuracy rate reached 97%. In the music genre classification task, the accuracy rate and other indicators of the proposed method were more than 94%. The experimental results indicate that the proposed method improves classification performance compared with state-of-the-art models.
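The abstract does not spell out the Pitch2vec preprocessing, so the snippet below only illustrates the general step it stands for: turning the pitch stream of a MIDI file into a fixed-width vector sequence that a transformer-style encoder can consume. The one-hot choice, function names and file name are assumptions.

```python
# Illustrative pitch-to-vector preprocessing (a stand-in for Pitch2vec, whose exact
# definition the abstract does not give): each note-on becomes a 128-dimensional
# one-hot vector indexed by its MIDI pitch.
import numpy as np
import mido

def pitch_sequence(path):
    return [msg.note
            for track in mido.MidiFile(path).tracks
            for msg in track
            if msg.type == 'note_on' and msg.velocity > 0]

def pitches_to_vectors(pitches):
    vecs = np.zeros((len(pitches), 128), dtype=np.float32)
    for i, p in enumerate(pitches):
        vecs[i, p] = 1.0
    return vecs                              # shape: (sequence length, 128)

print(pitches_to_vectors(pitch_sequence('track.mid')).shape)   # hypothetical file
```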
10

Miniailo, Ya O., M. V. Babenko, and O. O. Zhulkovskyi. "SOFTWARE DEVELOPMENT FOR RECEIVING MUSICAL NOTATION FROM MUSIC MIDI FORMAT FILES." Collection of scholarly papers of Dniprovsk State Technical University (Technical Sciences) 1, no. 32 (September 20, 2018): 110–14. http://dx.doi.org/10.31319/2519-2884.32.2018.176.

11

Cambouropoulos, Emilios. "Pitch Spelling: A Computational Model." Music Perception 20, no. 4 (2003): 411–29. http://dx.doi.org/10.1525/mp.2003.20.4.411.

Abstract:
In this article, cognitive and musicological aspects of pitch and pitch interval representations are explored via computational modeling. The specific task under investigation is pitch spelling, that is, how traditional score notation can be derived from a simple unstructured 12-tone representation (e.g., pitch-class set or MIDI pitch representation). This study provides useful insights both into the domain of pitch perception and into musicological aspects of score notation strategies. A computational model is described that transcribes polyphonic MIDI pitch files into the Western traditional music notation. Input to the proposed algorithm is merely a sequence of MIDI pitch numbers in the order they appear in a MIDI file. No a priori knowledge such as key signature, tonal centers, time signature, chords, or voice separation is required. Output of the algorithm is a sequence of "correctly" spelled pitches. The algorithm is based on an interval optimization approach that takes into account the frequency of occurrence of pitch intervals within the major-minor tonal scale framework. The algorithm was evaluated on 10 complete piano sonatas by Mozart and had a success rate of 98.8% (634 pitches were spelled incorrectly out of a total of 54,418 notes); it was tested additionally on three Chopin waltzes and had a slightly worse success rate. The proposed pitch interval optimization approach is also compared with and tested against other pitch-spelling strategies.
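For contrast with the interval-optimization model evaluated above, a naive pitch-spelling baseline can be written in a few lines: every MIDI pitch class receives a fixed sharp- or flat-biased spelling, with no context at all. The sketch only shows the input/output shape of the task, not the published algorithm.

```python
# Naive pitch-spelling baseline: a fixed, context-free spelling per pitch class.
# The published algorithm instead optimizes interval spellings over a context
# window; this sketch only shows the input/output shape of the task.
SHARP_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
FLAT_NAMES  = ['C', 'Db', 'D', 'Eb', 'E', 'F', 'Gb', 'G', 'Ab', 'A', 'Bb', 'B']

def spell(midi_pitches, prefer_flats=False):
    names = FLAT_NAMES if prefer_flats else SHARP_NAMES
    # MIDI note 60 is taken as C4 (middle C)
    return [f'{names[p % 12]}{p // 12 - 1}' for p in midi_pitches]

print(spell([60, 63, 66, 70]))                     # ['C4', 'D#4', 'F#4', 'A#4']
print(spell([60, 63, 66, 70], prefer_flats=True))  # ['C4', 'Eb4', 'Gb4', 'Bb4']
```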
12

Wu, Da-Chun, and Ming-Yao Chen. "Reversible data hiding in Standard MIDI Files by adjusting delta time values." Multimedia Tools and Applications 74, no. 21 (July 6, 2014): 9827–44. http://dx.doi.org/10.1007/s11042-014-2157-1.

13

Gahlawat, Vijeet, and Aakash Aakash. "MUSIC GENERATION USING DEEP LEARNING." International journal of multidisciplinary advanced scientific research and innovation 1, no. 10 (December 29, 2021): 369–73. http://dx.doi.org/10.53633/ijmasri.2021.1.10.09.

Abstract:
We examine how long short-term memory (LSTM) neural networks (NNs) may be utilised to create music compositions and offer a method for doing so in this study. Bach's musical style was chosen to train the NN in order for it to make similar music works. The recommended method converts MIDI files to song files before encoding them as NN inputs. Before the files are fed into the NN for training, they are augmented by transposing them into distinct keys. The final phase is the creation of music. The primary purpose is to assign an arbitrary note to the NN, which it will gradually modify until it produces a good piece of music. Several tests have been conducted in order to identify the ideal parameter values for producing good music. Keywords: LSTM, music generation, deep learning, machine learning, neural networks
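A generic model definition in the spirit of the abstract: MIDI notes encoded as integer tokens, an LSTM trained to predict the next note, and generation by repeatedly feeding the network its own output starting from an arbitrary seed note. This PyTorch sketch is not the authors' network; the layer sizes and sampling strategy are arbitrary.

```python
# Generic next-note LSTM in PyTorch: MIDI notes are integer tokens, the network
# predicts the next token, and generation feeds its own output back in.
import torch
import torch.nn as nn

class NoteLSTM(nn.Module):
    def __init__(self, vocab=128, emb=64, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, tokens, state=None):
        out, state = self.lstm(self.embed(tokens), state)
        return self.head(out), state          # logits over the next note

@torch.no_grad()
def generate(model, seed_note=60, length=32):
    model.eval()
    notes, state = [seed_note], None
    token = torch.tensor([[seed_note]])
    for _ in range(length):
        logits, state = model(token, state)
        probs = torch.softmax(logits[0, -1], dim=-1)
        token = torch.multinomial(probs, 1).view(1, 1)
        notes.append(int(token))
    return notes

print(generate(NoteLSTM()))    # untrained model -> essentially random notes
```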
14

Lopez-Rincon, Omar, Oleg Starostenko, and Alejandro Lopez-Rincon. "Algorithmic music generation by harmony recombination with genetic algorithm." Journal of Intelligent & Fuzzy Systems 42, no. 5 (March 31, 2022): 4411–23. http://dx.doi.org/10.3233/jifs-219231.

Abstract:
Algorithmic music composition has recently become an area of prestigious research in projects such as Google’s Magenta, Aiva, and Sony’s CSL Lab aiming to increase the composers’ tools for creativity. There are advances in systems for music feature extraction and generation of harmonies with short-time and long-time patterns of music style, genre, and motif. However, there are still challenges in the creation of poly-instrumental and polyphonic music, pieces become repetitive and sometimes these systems copy the original files. The main contribution of this paper is related to the improvement of generating new non-plagiary harmonic developments constructed from the symbolic abstraction from MIDI music non-labeled data with controlled selection of rhythmic features based on evolutionary techniques. Particularly, a novel approach for generating new music compositions by replacing existing harmony descriptors in a MIDI file with new harmonic features from another MIDI file selected by a genetic algorithm. This allows combining newly created harmony with a rhythm of another composition guaranteeing the adjustment of a new music piece to a distinctive genre with regularity and consistency. The performance of the proposed approach has been assessed using artificial intelligent computational tests, which assure goodness of the extracted features and shows its quality and competitiveness.
15

MAIA, ADOLFO, PAUL DO VALLE, JONATAS MANZOLLI, and LEONARDO N. S. PEREIRA. "A computer environment for polymodal music." Organised Sound 4, no. 2 (June 1999): 111–14. http://dx.doi.org/10.1017/s135577189900206x.

Abstract:
KYKLOS, an algorithmic composition program, is presented here. It generalises musical scales for use in composition as well as in performance. The sonic output of the system is referred to as polymodal music since it consists of four independent voices playing ‘synthetic modes’. KYKLOS is suitable for computer-assisted composition because it generates MIDI files which can be altered later by the composer. It can equally well be used in live performance for dynamic modification of parameters enabling realtime musical control.
16

He, Qi. "A Music Genre Classification Method Based on Deep Learning." Mathematical Problems in Engineering 2022 (March 29, 2022): 1–9. http://dx.doi.org/10.1155/2022/9668018.

Abstract:
Digital music resources have exploded in popularity since the dawn of the digital music age. The music genre is an important classification to use when describing music, and music labels play a crucial role in discovering and organizing digital music resources. Faced with a huge music database, relying on manual annotation for classification consumes a great deal of cost and time and cannot meet the needs of the times. The paper's primary findings and innovations are the following: to better describe the music, each piece is divided into multiple local musical instrument digital interface (MIDI) passages, passages that are close in playing style are identified by analysis, features are extracted from the passages, and a feature sequence of passages is formed. The process comprises extraction of a note feature matrix, extraction of topics and division into segments based on the note feature matrix, research into and extraction of effective features based on the segment theme, and composition of the feature sequence. Because of the shallow structure of standard classification methods, it is difficult for such classifiers to learn temporal and semantic information about music. This research therefore investigates recurrent neural networks (RNN) with attention, using the feature sequences of the input MIDI segments. To create data sets and conduct music categorization tests, 1920 MIDI files with genre labels were collected from the Internet. The music classification method is validated by the experimental accuracy of equal-length segment categorization.
17

Kim, Sarah, Jeong Mi Park, Seungyeon Rhyu, Juhan Nam, and Kyogu Lee. "Quantitative analysis of piano performance proficiency focusing on difference between hands." PLOS ONE 16, no. 5 (May 19, 2021): e0250299. http://dx.doi.org/10.1371/journal.pone.0250299.

Abstract:
Quantitative evaluation of piano performance is of interest in many fields, including music education and computational performance rendering. Previous studies utilized features extracted from audio or musical instrument digital interface (MIDI) files but did not address the difference between hands (DBH), which might be an important aspect of high-quality performance. Therefore, we investigated DBH as an important factor determining performance proficiency. To this end, 34 experts and 34 amateurs were recruited to play two excerpts on a Yamaha Disklavier. Each performance was recorded in MIDI, and handcrafted features were extracted separately for the right hand (RH) and left hand (LH). These were conventional MIDI features representing temporal and dynamic attributes of each note and computed as absolute values (e.g., MIDI velocity) or ratios between performance and corresponding scores (e.g., ratio of duration or inter-onset interval (IOI)). These note-based features were rearranged into additional features representing DBH by simple subtraction between features of both hands. Statistical analyses showed that DBH was more significant in experts than in amateurs across features. Regarding temporal features, experts pressed keys longer and faster with the RH than did amateurs. Regarding dynamic features, RH exhibited both greater values and a smoother change along melodic intonations in experts than in amateurs. Further experiments using principal component analysis (PCA) and support vector machine (SVM) verified that hand-difference features can successfully differentiate experts from amateurs according to performance proficiency. Moreover, existing note-based raw feature values (Basic features) and DBH features were tested repeatedly via 10-fold cross-validation, suggesting that adding DBH features to Basic features improved F1 scores to 93.6% (by 3.5%) over Basic features. Our results suggest that differently controlling both hands simultaneously is an important skill for pianists; therefore, DBH features should be considered in the quantitative evaluation of piano performance.
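A toy version of the difference-between-hands (DBH) construction described above: per-hand feature vectors are subtracted to form DBH features, which are appended to the basic features before classification. The feature values below are random placeholders, so the printed accuracies mean nothing; the sketch only mirrors the evaluation pipeline (SVM with 10-fold cross-validation) under assumed shapes.

```python
# Toy difference-between-hands (DBH) pipeline: per-hand features are subtracted,
# appended to the basic features and fed to an SVM with 10-fold cross-validation.
# The feature values are random placeholders, so the printed scores are meaningless.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 68                                     # 34 experts + 34 amateurs, as in the study
rh = rng.normal(size=(n, 4))               # hypothetical right-hand features
lh = rng.normal(size=(n, 4))               # hypothetical left-hand features
labels = np.array([1] * 34 + [0] * 34)     # 1 = expert, 0 = amateur

basic = np.hstack([rh, lh])                # "Basic": both hands, no interaction
combined = np.hstack([basic, rh - lh])     # "Basic + DBH": add hand differences

for name, X in [('Basic', basic), ('Basic + DBH', combined)]:
    score = cross_val_score(SVC(kernel='rbf'), X, labels, cv=10).mean()
    print(f'{name}: mean 10-fold accuracy = {score:.3f}')
```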
18

Miyawaki, T., and S. Takashima. "Control of an Automatic Performance Robot of Saxophone : Performance control using standard midi files." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2003 (2003): 33. http://dx.doi.org/10.1299/jsmermd.2003.33_2.

19

Liu, Yi-Hsin, and Da-Chun Wu. "A high-capacity performance-preserving blind technique for reversible information hiding via MIDI files using delta times." Multimedia Tools and Applications 79, no. 25-26 (January 23, 2020): 17281–302. http://dx.doi.org/10.1007/s11042-019-08526-9.

20

Wu, Da-Chun, Chin-Yu Hsiang, and Ming-Yao Chen. "Steganography via MIDI Files by Adjusting Velocities of Musical Note Sequences With Monotonically Non-Increasing or Non-Decreasing Pitches." IEEE Access 7 (2019): 154056–75. http://dx.doi.org/10.1109/access.2019.2948493.

21

Prashant Krishnan, V., S. Rajarajeswari, Venkat Krishnamohan, Vivek Chandra Sheel, and R. Deepak. "Music Generation Using Deep Learning Techniques." Journal of Computational and Theoretical Nanoscience 17, no. 9 (July 1, 2020): 3983–87. http://dx.doi.org/10.1166/jctn.2020.9003.

Abstract:
This paper primarily aims to compare two deep learning techniques in the task of learning musical styles and generating novel musical content. Long Short Term Memory (LSTM), a supervised learning algorithm is used, which is a variation of the Recurrent Neural Network (RNN), frequently used for sequential data. Another technique explored is Generative Adversarial Networks (GAN), an unsupervised approach which is used to learn a distribution of a particular style, and novelly combine components to create sequences. The representation of data from the MIDI files as chord and note embedding are essential to the performance of the models. This type of embedding in the network helps it to discover structural patterns in the samples. Through the study, it is seen how a supervised learning technique performs better than the unsupervised one. A study helped in obtaining a Mean Opinion Score (MOS), which was used as an indicator of the comparative quality and performance of the respective techniques.
22

Pinto, Luís Aleixo H. Sofía, and Nuno Correia. "A computational creativity approach from music to images." ACM SIGEVOlution 14, no. 2 (July 2021): 1–5. http://dx.doi.org/10.1145/3477379.3477380.

Abstract:
Our system generates abstract images from music that serve as inspiration for the creative process. We developed one of many possible approaches for a cross-domain association between the musical and visual domains, by extracting features from MIDI music files and associating them to visual characteristics. The associations were led by the authors' aesthetic preferences and some experimentation. Three different approaches were pursued, two with direct or random associations and a third using a genetic algorithm that considers music and color theory while searching for better results. The resulting images were evaluated through online surveys, which confirmed that not only they were abstract, but also that there was a relationship with the music that served as the basis for the association process. Moreover, the majority of the participants ranked highest the images improved with the genetic algorithm. This newsletter contribution summarizes the full version of the article, which was presented at EvoMUSART 2021 (the 10th International Conference on Artificial Intelligence in Music, Sound, Art and Design).
23

Peltroche-Llacsahuanga, Heidrun, Silke Schmidt, Michael Seibold, Rudolf Lütticken, and Gerhard Haase. "Differentiation between Candida dubliniensis and Candida albicans by Fatty Acid Methyl Ester Analysis Using Gas-Liquid Chromatography." Journal of Clinical Microbiology 38, no. 10 (2000): 3696–704. http://dx.doi.org/10.1128/jcm.38.10.3696-3704.2000.

Abstract:
Candida dubliniensis is often found in mixed culture with C. albicans, but its recognition is hampered as the color of its colonies in primary culture on CHROMagar Candida varies. Furthermore, definite identification of C. dubliniensis is difficult to achieve, time-consuming, and expensive. Therefore, a method to discriminate between these two closely related yeast species by fatty acid methyl ester (FAME) analysis using gas-liquid chromatography (Sherlock Microbial Identification System [MIS]; MIDI, Inc., Newark, Del.) was developed. Although the chromatograms of these two species revealed no obvious differences when applying FAME analysis, a new library (CADLIB) was successfully created using Sherlock Library Generation Software (MIDI). The amount and frequency of FAME was analyzed using library training files (n = 10 for each species), preferentially those comprising reference strains. For testing the performance of the CADLIB, clinical isolates genetically assigned to the respective species (C. albicans, n = 32; C. dubliniensis, n = 28) were chromatographically analyzed. For each isolate tested, MIS computed a similarity index (SI) indicating a hierarchy of possible strain fits. When using the newly created library CADLIB, the SIs for C. albicans and C. dubliniensis ranged from 0.11 to 0.96 and 0.53 to 0.93 (for all but one), respectively. Only three isolates of C. albicans (9.4%) were misidentified as C. dubliniensis, whereas all isolates of C. dubliniensis were correctly identified. Resulting differentiation accuracy was 90.6% for C. albicans and 100% for C. dubliniensis. Cluster analysis and principal component analysis of the resulting FAME profiles showed two clearly distinguishable clusters matching up with two assigned species for the strains tested. Thus, the created library proved to be well suited to discriminate between these two species.
24

Qiu, Lvyang, Shuyu Li, and Yunsick Sung. "3D-DCDAE: Unsupervised Music Latent Representations Learning Method Based on a Deep 3D Convolutional Denoising Autoencoder for Music Genre Classification." Mathematics 9, no. 18 (September 16, 2021): 2274. http://dx.doi.org/10.3390/math9182274.

Abstract:
With unlabeled music data widely available, it is necessary to build an unsupervised latent music representation extractor to improve the performance of classification models. This paper proposes an unsupervised latent music representation learning method based on a deep 3D convolutional denoising autoencoder (3D-DCDAE) for music genre classification, which aims to learn common representations from a large amount of unlabeled data to improve the performance of music genre classification. Specifically, unlabeled MIDI files are applied to 3D-DCDAE to extract latent representations by denoising and reconstructing input data. Next, a decoder is utilized to assist the 3D-DCDAE in training. After 3D-DCDAE training, the decoder is replaced by a multilayer perceptron (MLP) classifier for music genre classification. Through the unsupervised latent representations learning method, unlabeled data can be applied to classification tasks so that the problem of limiting classification performance due to insufficient labeled data can be solved. In addition, the unsupervised 3D-DCDAE can consider the musicological structure to expand the understanding of the music field and improve performance in music genre classification. In the experiments, which utilized the Lakh MIDI dataset, a large amount of unlabeled data was utilized to train the 3D-DCDAE, obtaining a denoising and reconstruction accuracy of approximately 98%. A small amount of labeled data was utilized for training a classification model consisting of the trained 3D-DCDAE and the MLP classifier, which achieved a classification accuracy of approximately 88%. The experimental results show that the model achieves state-of-the-art performance and significantly outperforms other methods for music genre classification with only a small amount of labeled data.
25

Zheng, Zhiqiang. "The Classification of Music and Art Genres under the Visual Threshold of Deep Learning." Computational Intelligence and Neuroscience 2022 (May 18, 2022): 1–8. http://dx.doi.org/10.1155/2022/4439738.

Abstract:
Wireless networks are commonly employed for ambient assisted living applications, and artificial intelligence-enabled event detection and classification processes have become familiar. However, music is a kind of time-series data, and it is challenging to design an effective music genre classification (MGC) system due to a large quantity of music data. Robust MGC techniques necessitate a massive amount of data, which is time-consuming, laborious, and requires expert knowledge. Few studies have focused on the design of music representations extracted directly from input waveforms. In recent times, deep learning (DL) models have been widely used due to their characteristics of automatic extracting advanced features and contextual representation from actual music or processed data. This paper aims to develop a novel deep learning-enabled music genre classification (DLE-MGC) technique. The proposed DLE-MGC technique effectively classifies the music genres into multiple classes by using three subprocesses, namely preprocessing, classification, and hyperparameter optimization. At the initial stage, the Pitch to Vector (Pitch2vec) approach is applied as a preprocessing step where the pitches in the input musical instrument digital interface (MIDI) files are transformed into the vector sequences. Besides, the DLE-MGC technique involves the design of a cat swarm optimization (CSO) with bidirectional long-term memory (BiLSTM) model for the classification process. The DBTMPE technique has gained a moderately increased accuracy of 94.27%, and the DLE-MGC technique has accomplished a better accuracy of 95.87%. The performance validation of the DLE-MGC technique was carried out using the Lakh MIDI music dataset, and the comparative results verified the promising performance of the DLE-MGC technique over current methods.
26

Bragagnolo, Bibiana, and Didier Guigue. "Analysis of sonority in piano pieces." Revista Música 20, no. 1 (July 6, 2020): 219–48. http://dx.doi.org/10.11606/rm.v20i1.168644.

Abstract:
This paper presents an analysis of sonority in selected parts of two performances of a piano piece by the Brazilian composer Almeida Prado, extracted from the first volume of his cycle Cartas Celestes I (1974), and of this piece as a whole. The main goal of this analysis is to present a methodology where the performative decisions are understood as main sources of information, with specific attention to the sonic aspects of the piece. In this context, the musical work is understood as a process, since the score is only one of the elements that will influence the final result (Costa, 2016). The methodology consists of three main steps, and was based on the methodology of analysis of the sonority by Guigue (2009): (1) the first step is the artistic process, which gets up to the construction of two performances of the piece by the same pianist (the first author of this paper), with different and contrasting performative decisions, and their recordings; (2) then, the piece is divided into homogeneous sonic units, always having the performative decisions as guides; and (3) at last, these units are analyzed using the software Spear and Sonic Visualiser for audio files and Open Music for MIDI files. The methodology intends to include the performance in the scope of musical analysis, as performative decisions may lead to dramatic differences in the final perceived sonic layout of a musical work.
27

Polfreman, Richard. "Modalys-ER for OpenMusic (MfOM): virtual instruments and virtual musicians." Organised Sound 7, no. 3 (December 2002): 325–38. http://dx.doi.org/10.1017/s1355771802003126.

Abstract:
Modalys-ER is a graphical environment for creating physical model instruments and generating musical sounds with them. While Modalys-ER provides users with a relatively simple-to-use interface, it has only limited methods for mapping control data onto model parameters for performance. While these are sufficient for many interesting applications, they do not bridge the gap from high-level specifications such as MIDI files or Standard Western Notation (SWN) down to low-level parameters within the physical model. With this issue in mind, a part of Modalys-ER has now been ported to OpenMusic, providing a platform for developing more sophisticated automation and control systems that can be specified through OpenMusic's visual programming interface. An overview of the MfOM library is presented and illustrated with several musical examples using some early mapping designs. Also, some of the issues relating to building and controlling virtual instruments are discussed and future directions for research in this area are suggested. The first release is now available via the IRCAM Software Forum.
28

Sundberg, Johan. "Music technology and audio processing: rall. or accel. into the new millennium?" Organised Sound 4, no. 3 (November 16, 2000): 153–60. http://dx.doi.org/10.1017/s1355771800003058.

Abstract:
Music acoustics research can provide support in terms of objective knowledge to further the rapidly developing areas of music technology and audio processing. This is illustrated by three examples taken from current projects at KTH. One concerns the improvement in quality of sound reproduction systems over the last century. A test, where expert listeners rated the year of recordings of different ages, demonstrated that significant advances were made between 1950 and 1970, while development was rather modest before and after this period. The second example investigates the secrets of timbral beauty. Acoustic analyses of recordings of an international opera tenor and a singer with an extremely unpleasant voice shed some light on the basic requirements of good vocal timbre. The unpleasant voice is found to suffer from pressed phonation, lack of a singer's formant, irregular vibrato and insufficient pitch accuracy. The third example elucidates tuning and phrasing differences between deadpan performances of MIDI files played on synthesizers and performances by musicians on real instruments. The examples suggest that future development in the areas of music technology and audio processing may gain considerably from a close interaction with music acoustics research.
29

Hellmer, Kahl, and Guy Madison. "Quantifying Microtiming Patterning and Variability in Drum Kit Recordings." Music Perception 33, no. 2 (December 1, 2015): 147–62. http://dx.doi.org/10.1525/mp.2015.33.2.147.

Abstract:
Human performers introduce temporal variability in their performance of music. The variability consists of both long-range tempo changes and microtiming variability that are note-to-note level deviations from the nominal beat time. In many contexts, microtiming is important for achieving certain preferred characteristics in a performance, such as hang, drive, or groove; but this variability is also, to some extent, stochastic. In this paper, we present a method for quantifying the microtiming variability. First, we transcribed drum performance audio files into empirical data using a very precise onset detection system. Second, we separated the microtiming variability into two components: systematic variability (SV), defined as recurrent temporal patterns, and residual variability (RV), defined as the residual, unexplained temporal deviation. The method was evaluated using computer-performed audio drum tracks and the results show a slight overestimation of the variability magnitude, but proportionally correct ratios between SV and RV. Thereafter two data sets were analyzed: drum performances from a MIDI drum kit and real-life drum performances from professional drum recordings. The results from these data sets show that up to 65 percent of the total micro-timing variability can be explained by recurring and consistent patterns.
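The split into systematic variability (SV) and residual variability (RV) can be illustrated with a toy estimator: onset deviations from the nominal grid are averaged per position of a repeating pattern (the recurrent part), and whatever is left over is treated as residual. The pattern length and the variance-based summary are assumptions; the paper's onset detection and estimator are more elaborate.

```python
# Toy SV/RV split of microtiming: deviations from the nominal grid are decomposed
# into a recurring per-position pattern (systematic variability, SV) and the
# leftover note-to-note scatter (residual variability, RV).
import numpy as np

def sv_rv_split(deviations_ms, pattern_len=16):
    d = np.asarray(deviations_ms, dtype=float)
    n = len(d) - len(d) % pattern_len
    grid = d[:n].reshape(-1, pattern_len)    # rows = repetitions of the pattern
    sv_pattern = grid.mean(axis=0)           # recurrent deviation per position
    residual = grid - sv_pattern             # what the pattern does not explain
    return sv_pattern, float(np.var(sv_pattern)), float(np.var(residual))

rng = np.random.default_rng(1)
feel = np.tile([4.0, -3.0, 2.0, -1.0] * 4, 8)     # deliberately recurring "feel"
noise = rng.normal(scale=2.0, size=feel.size)     # stochastic part
_, sv_var, rv_var = sv_rv_split(feel + noise)
print('SV share of total variance: %.2f' % (sv_var / (sv_var + rv_var)))
```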
30

Pitout, Frédéric, Laurent Koechlin, Arturo López Ariste, Luc Dettwiller, and Jean-Michel Glorian. "Solar surveillance with CLIMSO: instrumentation, database and on-going developments." Journal of Space Weather and Space Climate 10 (2020): 47. http://dx.doi.org/10.1051/swsc/2020039.

Abstract:
CLIMSO is a suite of solar telescopes installed at Pic du Midi observatory in the southwest of France. It consists of two refractors that image the full solar disk in Hα and CaII K, and two coronagraphs that capture the prominences and ejections of chromospheric matter in Hα and HeI. Synoptic observations have been carried out since 2007, and they follow those of previous instruments. CLIMSO, together with its predecessors, offers a temporal coverage of several solar cycles. With direct access to its images, CLIMSO contributes to real-time monitoring of the Sun. For that reason, the national research council for astrophysics (CNRS/INSU) has labelled CLIMSO a national observation service for "surveillance of the Sun and the terrestrial space environment". Products, in the form of images, movies or data files, are available via the CLIMSO DataBase. In this paper, we present the current instrumental configuration; we detail the available products and show how to access them; we mention some possible applications for solar and space weather; and finally, we evoke developments underway, both numerical, to valorise our data, and instrumental, to offer more and better capabilities.
31

Hu, Han. "Research on the Interaction of Genetic Algorithm in Assisted Composition." Computational Intelligence and Neuroscience 2021 (November 22, 2021): 1–15. http://dx.doi.org/10.1155/2021/3137666.

Abstract:
Computer-aided composition is an attempt to use a formalized process to minimize human (or composer) involvement in the creation of music using a computer. Exploring the problem of computer-aided composition can enable us to understand and simulate the thinking mode of composers in the special process of music creation, which is an important application of artificial intelligence in the field of art. This paper introduces feature extraction on MIDI files and, based on a genetic algorithm, proposes a sampling and coding scheme that improves the feature representation used in traditional studies of algorithmic music composition. In the traditional representation, pitch and duration are extracted from the music separately and each is encoded independently as a one-hot vector; this failure to characterize rhythm together with pitch and duration prevents composition networks from learning musical styles well. Rhythm is the combination of pitch and time values according to certain rules, and the rhythm of music affects its overall style. By associating the pitch and time-value coding, the rhythmic style of the music can be preserved better, so that the composition network can learn the style characteristics of the music more easily.
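The abstract's central point, that encoding pitch and duration as independent one-hot vectors loses rhythm while a coupled encoding preserves it, can be shown with a small sketch. The duration grid and vocabulary construction are hypothetical; the paper's actual sampling and coding scheme is not reproduced here.

```python
# Two note encodings side by side: independent one-hot indices for pitch and
# duration versus a single joint (pitch, duration) token. The joint token lets a
# model see pitch and time value together, i.e. the rhythm they form.
from itertools import product

PITCHES = list(range(128))                   # MIDI pitch numbers
DURATIONS = [0.25, 0.5, 1.0, 2.0]            # hypothetical duration grid (beats)

def encode_separately(pitch, dur):
    # two independent indices -> two independent one-hot vectors downstream
    return PITCHES.index(pitch), DURATIONS.index(dur)

JOINT_VOCAB = {pd: i for i, pd in enumerate(product(PITCHES, DURATIONS))}

def encode_jointly(pitch, dur):
    # one index into a pitch-and-duration vocabulary -> one one-hot vector
    return JOINT_VOCAB[(pitch, dur)]

print(encode_separately(60, 0.5))            # (60, 1)
print(encode_jointly(60, 0.5))               # a single joint token id
print(len(JOINT_VOCAB))                      # 512 joint tokens for this toy grid
```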
32

MORI, Takuya, Naoya WAKABAYASHI, and Suguru TAKASHIMA. "1A1-E05 Control of an Automatic Saxophone Performance Robot : Performance Control for Playing in a Band Using MIDI Files(Robots for Amusement and Entertainment)." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2014 (2014): _1A1—E05_1—_1A1—E05_4. http://dx.doi.org/10.1299/jsmermd.2014._1a1-e05_1.

33

Yang, Daniel, Arya Goutam, Kevin Ji, and TJ Tsai. "Large-Scale Multimodal Piano Music Identification Using Marketplace Fingerprinting." Algorithms 15, no. 5 (April 26, 2022): 146. http://dx.doi.org/10.3390/a15050146.

Abstract:
This paper studies the problem of identifying piano music in various modalities using a single, unified approach called marketplace fingerprinting. The key defining characteristic of marketplace fingerprinting is choice: we consider a broad range of fingerprint designs based on a generalization of standard n-grams, and then select the fingerprint designs at runtime that are best for a specific query. We show that the large-scale retrieval problem can be framed as an economics problem in which a consumer and a store interact. In our analogy, the runtime search is like a consumer shopping in the store, the items for sale correspond to fingerprints, and purchasing an item corresponds to doing a fingerprint lookup in the database. Using basic principles of economics, we design an efficient marketplace in which the consumer has many options and adopts a rational buying strategy that explicitly considers the cost and expected utility of each item. We evaluate our marketplace fingerprinting approach on four different sheet music retrieval tasks involving sheet music images, MIDI files, and audio recordings. Using a database containing approximately 375,000 pages of sheet music, our method is able to achieve 0.91 mean reciprocal rank with sub-second average runtime on cell phone image queries. On all four retrieval tasks, the marketplace method substantially outperforms previous methods while simultaneously reducing average runtime. We present comprehensive experimental results, as well as detailed analyses to provide deeper intuition into system behavior.
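The fingerprints discussed above generalize n-grams over note sequences. The sketch below shows only the plain n-gram indexing and lookup that such systems build on, with a single fixed design (pitch 4-grams); the paper's contribution, choosing among many fingerprint designs at runtime by cost and expected utility, is not reproduced.

```python
# Minimal n-gram fingerprinting for symbolic music retrieval: index every pitch
# 4-gram of every reference piece, then let the query's 4-grams vote.
from collections import defaultdict

def ngrams(pitches, n=4):
    return [tuple(pitches[i:i + n]) for i in range(len(pitches) - n + 1)]

def build_index(pieces):                     # pieces: {piece_id: [MIDI pitches]}
    index = defaultdict(set)
    for piece_id, pitches in pieces.items():
        for g in ngrams(pitches):
            index[g].add(piece_id)
    return index

def query(index, pitches):
    votes = defaultdict(int)
    for g in ngrams(pitches):
        for piece_id in index.get(g, ()):    # each matching fingerprint is a vote
            votes[piece_id] += 1
    return sorted(votes.items(), key=lambda kv: -kv[1])

db = {'fuer_elise': [76, 75, 76, 75, 76, 71, 74, 72, 69],
      'ode_to_joy': [64, 64, 65, 67, 67, 65, 64, 62]}
print(query(build_index(db), [76, 75, 76, 75, 76, 71]))   # fuer_elise ranks first
```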
35

Li, Shuyu, Sejun Jang, and Yunsick Sung. "Melody Extraction and Encoding Method for Generating Healthcare Music Automatically." Electronics 8, no. 11 (October 31, 2019): 1250. http://dx.doi.org/10.3390/electronics8111250.

Abstract:
The strong relationship between music and health has helped prove that soft and peaceful classical music can significantly reduce people’s stress; however, it is difficult to identify and collect examples of such music to build a library. Therefore, a system is required that can automatically generate similar classical music selections from a small amount of input music. Melody is the main element that reflects the rhythms and emotions of musical works; therefore, most automatic music generation research is based on melody. Given that melody varies frequently within musical bars, the latter are used as the basic units of composition. As such, there is a requirement for melody extraction techniques and bar-based encoding methods for automatic generation of bar-based music using melodies. This paper proposes a method that handles melody track extraction and bar encoding. First, the melody track is extracted using a pitch-based term frequency–inverse document frequency (TFIDF) algorithm and a feature-based filter. Subsequently, four specific features of the notes within a bar are encoded into a fixed-size matrix during bar encoding. We conduct experiments to determine the accuracy of track extraction based on verification data obtained with the TFIDF algorithm and the filter; an accuracy of 94.7% was calculated based on whether the extracted track was a melody track. The estimated value demonstrates that the proposed method can accurately extract melody tracks. This paper discusses methods for automatically extracting melody tracks from MIDI files and encoding based on bars. The possibility of generating music through deep learning neural networks is facilitated by the methods we examine within this work. To help the neural networks generate higher quality music, which is good for human health, the data preprocessing methods contained herein should be improved in future works.
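The abstract names a pitch-based TF-IDF step for melody-track extraction without defining it, so the sketch below is only a plausible reading: tracks are treated as documents and pitches as terms, and a simple placeholder score picks a candidate track. The scoring rule and the feature-based filter of the paper are not reproduced; the mido library and a hypothetical file name are assumed.

```python
# Rough pitch-based TF-IDF over MIDI tracks (tracks = documents, pitches = terms),
# followed by a placeholder pick of the highest-scoring track as the melody
# candidate. The paper's actual scoring rule and feature filter are not shown here.
import math
from collections import Counter
import mido

def track_pitches(mid):
    tracks = []
    for track in mid.tracks:
        pitches = [m.note for m in track if m.type == 'note_on' and m.velocity > 0]
        if pitches:
            tracks.append(pitches)
    return tracks

def tfidf_vectors(tracks):
    df = Counter(p for pitches in tracks for p in set(pitches))
    vectors = []
    for pitches in tracks:
        tf = Counter(pitches)
        vectors.append({p: (c / len(pitches)) * math.log(len(tracks) / df[p])
                        for p, c in tf.items()})
    return vectors

tracks = track_pitches(mido.MidiFile('song.mid'))          # hypothetical file
scores = [sum(v.values()) / max(len(v), 1) for v in tfidf_vectors(tracks)]
print('candidate melody track index:', scores.index(max(scores)))
```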
36

Pfeil, Hannah, Abram Hindle, and Hazel Campbell. "Amazing Grace: How Sweet the Sound of Synthesised Bagpipes." Alberta Academic Review 2, no. 2 (September 18, 2019): 59–60. http://dx.doi.org/10.29173/aar65.

Abstract:
A bagpipe is a type of wind instrument that contains a melody pipe, which has an enclosed reed called the chanter and other drone pipes. The chanter is the part of the bagpipe that supplies the note, and the air that the pipes are fed is provided by the bag, which is inflated by a blowpipe and driven by the player’s arm. The goal of this project was to create a bagpipe using a program called Supercollider. Supercollider is used for audio synthesis. While creating this artificial bagpipe (here on referred to as a ‘synth’), it was broken down into four components: the chanter, the base drone, the first tenor drone and the second tenor drone. The chanter has the frequency of the note, the base drone’s frequency will be half that of the chanter and the frequency of the tenor drone will be half that of the base drone. This is because of the length of the pipes in relation to each other. In order to create the synth, a sine oscillator was used, and then put through a resonance filter, and then a reverb filter. This was done in order to mimic the echo that sound has when it is forced through a tube, or enclosed space. All four pipes were added together to create the synth. In order to play a song, the synth was put into a pattern so Supercollider could receive an array of notes, which serve as the frequency of the chanter, and then play the song automatically. The notes for Amazing Grace were transcribed into midi-notes and beat durations and these arrays were fed into the pattern to create the song. The synthetic version of Amazing Grace, in terms of frequency and loudness, was then graphed and compared to the graph of a recording of Amazing Grace played on a real bagpipe. There are differences between the two sound files, the most significant being that the real bagpipe has much more variation in terms of loudness. The synthesized bagpipe had a more gradual and subdued noise level, where the natural bagpipe was much more randomized. Taking the comparisons into consideration, Supercollider can be used to create an approximation of a bagpipe, but under scrutiny, the artificial version currently falls short.
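The voicing described above (a chanter carrying the note, a bass drone at half the chanter's frequency and a tenor drone at half the bass drone's) can be roughed out with plain additive synthesis. The original is a SuperCollider synth with resonance and reverb filters, which are omitted here, as is the second tenor drone; the sample rate and amplitudes are assumptions.

```python
# Additive sketch of the bagpipe voicing: a chanter sine plus a bass drone at half
# its frequency and a tenor drone at half the bass drone's, written to a WAV file.
# The resonance/reverb filtering and the second tenor drone are omitted.
import wave
import numpy as np

SR = 44100                                   # assumed sample rate

def sine(freq, dur, amp):
    t = np.arange(int(SR * dur)) / SR
    return amp * np.sin(2 * np.pi * freq * t)

def bagpipe_note(chanter_hz, dur=2.0):
    mix = (sine(chanter_hz, dur, 0.5)        # chanter carries the melody note
           + sine(chanter_hz / 2, dur, 0.3)  # bass drone: half the chanter frequency
           + sine(chanter_hz / 4, dur, 0.2)) # tenor drone: half the bass drone
    return (mix / np.max(np.abs(mix)) * 32767).astype(np.int16)

with wave.open('drone_test.wav', 'wb') as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes(bagpipe_note(440.0).tobytes())
```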
37

Huang, Xin, Dan Lu, Daniel M. Ricciuto, Paul J. Hanson, Andrew D. Richardson, Xuehe Lu, Ensheng Weng, et al. "A model-independent data assimilation (MIDA) module and its applications in ecology." Geoscientific Model Development 14, no. 8 (August 20, 2021): 5217–38. http://dx.doi.org/10.5194/gmd-14-5217-2021.

Abstract:
Abstract. Models are an important tool to predict Earth system dynamics. An accurate prediction of future states of ecosystems depends on not only model structures but also parameterizations. Model parameters can be constrained by data assimilation. However, applications of data assimilation to ecology are restricted by highly technical requirements such as model-dependent coding. To alleviate this technical burden, we developed a model-independent data assimilation (MIDA) module. MIDA works in three steps including data preparation, execution of data assimilation, and visualization. The first step prepares prior ranges of parameter values, a defined number of iterations, and directory paths to access files of observations and models. The execution step calibrates parameter values to best fit the observations and estimates the parameter posterior distributions. The final step automatically visualizes the calibration performance and posterior distributions. MIDA is model independent, and modelers can use MIDA for an accurate and efficient data assimilation in a simple and interactive way without modification of their original models. We applied MIDA to four types of ecological models: the data assimilation linked ecosystem carbon (DALEC) model, a surrogate-based energy exascale earth system model: the land component (ELM), nine phenological models and a stand-alone biome ecological strategy simulator (BiomeE). The applications indicate that MIDA can effectively solve data assimilation problems for different ecological models. Additionally, the easy implementation and model-independent feature of MIDA breaks the technical barrier of applications of data–model fusion in ecology. MIDA facilitates the assimilation of various observations into models for uncertainty reduction in ecological modeling and forecasting.
38

JANSEN, LUISE. "Remake cinématographique, remake phonologique ? La (non-)réalisation du schwa dans Marius 1931 et 2013." Journal of French Language Studies 28, no. 3 (April 23, 2018): 377–98. http://dx.doi.org/10.1017/s0959269518000030.

Abstract:
This article examines the staging of the Marseille accent in the most emblematic film about Marseille, Marius (1931), and in its 2013 remake, through an analysis of the behaviour of schwa. The frequent realization of orthographic e is indeed one of the phonetic-phonological particularities of the prototypical Marseille accent. Today, however, the number of realized schwas appears to be decreasing steadily. Our study aims to discover to what degree the staged accent differs from the accent of present-day Marseille speakers, and whether the two films, shot 80 years apart, highlight the linguistic change under way in the Midi. Both films were transcribed and coded according to the protocol of the Phonologie du Français Contemporain (PFC) programme, and the data were then compared with those of the PFC surveys. The results show that the staged accent differs markedly from today's authentic accent; the behaviour of schwa, however, does not differ between the two films.
39

Callander, Natalie S., William T. Phillips, Darlene F. Metter, Leonel Ochoa-Bayona, Cesar O. Freytes, and Sharon A. Primeaux. "TECHNITIUM-99m-SESTAMIBI SCANNING (MIBI) CORRELATES WITH DISEASE ACTIVITY IN PATIENTS (PTS) WITH MULTIPLE MYELOMA UNDERGOING AUTOLOGOUS PERIPHERAL BLOOD STEM CELL TRANSPLANTATION (PBSCT)." Blood 104, no. 11 (November 16, 2004): 2469. http://dx.doi.org/10.1182/blood.v104.11.2469.2469.

Abstract:
Abstract Traditional radiographs are often used to establish a diagnosis of MM, as over 80% of patients will have some form of bone disease, including osteolysis and osteopenia. However, radiographs offer incomplete and sometimes misleading information regarding an individual patient’s progress. Pace et al (Eur J Nuc Med, 1998) initially reported the use of MIBI scans to examine bone disease in pts with MM. We hypothesized that MIBI scans could provide a convenient, reproducible method to track pts with MM. We therefore investigated the ability of MIBI scans, in comparison with radiographs and dual energy x-ray absorptiometry (DEXA) to diagnose and follow MM disease progression, as assessed by clinical criteria. Methods: Patients were subjects enrolled in an autologous PBSCT protocol using escalating doses of busulfan and melphalan. Pts received a complete bone survey, DEXA scanning and MIBI scanning prior to PBSCT. Pts able to return for follow up received all 3 tests at various time intervals. MIBI scans consisted of the injection of 30 mCi of technietium-99m-sestamibi followed by immediate scanning with a dual headed gamma camera to produce whole body images. Spot films were made of areas of increased activity. Two radiologists blinded to clinical outcomes reviewed and scored the MIBI films based on scan intensity rated as 0–4 in each of five areas. Scores were averaged, then summed and reported as a total score for each pt. Interrater reliability was determined by using Cohen’s kappa statistic. Results: 31 pts were enrolled on the PBSCT protocol, with 29 pts receiving baseline MIBI scanning. The initial total MIBI scores ranged from 2.5 to 16.5 with a mean score of 11 out of a possible 20. Of note, pts categorized as having either a complete remission (CR) or near complete remission (nCR) (n=4, mean score 6.1) had significantly lower (p=.02) MIBI scores in comparison to pts graded as either stable or progressive disease (n=12, mean score 11.5) prior to PBSCT . Eleven pts had serial scans performed at intervals of 6 to 12 months. Of these, 6 pts with stable/progressive disease had pre PBSCT MIBI scores ranging from 9 to 16.5 with a median score of 12. The single pt with nCR had a MIBI score of 5.5. At the first follow up, pts in stable/PD group had MIBI scores that significantly improved, ranging from 1.5 to 5.5 (median=3.3) with concomitant laboratory and marrow improvement (near CR-3, PR-3) but no change in their xray or DEXA scans. The pt with a nCR also showed improvement in protein and marrow findings to a CR with follow up MIBI score of 1.5. Two of 11 pts classified before PBSCT as "non secretory" MM with negative bone marrow biopsies had prePBSCT MIBI scores of 10.5 and 12, which improved post PBSCT to 3 and 6, respectively and corresponded to subjective improvement in bone pain and anemia. With median follow up of 30 mos for these 11 patients, 2 pts have relapsed by standard criteria, with corresponding increase in MIBI score, but no new lytic lesions documented by radiographs or DEXA. Conclusions: Technitium-99m-sestamibi scans provide a simple, inexpensive method (average cost per MIBI $175) to follow the clinical course of MM patients, especially those with nonsecretory disease, and may prove more cost effective and accurate than other imaging methods for tracking total disease burden. We believe that further investigation of MIBI scanning for this indication is warranted.
Styles: APA, Harvard, Vancouver, ISO, etc.
40

Lim, Song Hwee. "Towards a poor cinema: ubiquitous trafficking and poverty as problematic in Midi Z’s films." Transnational Cinemas 9, no. 2 (April 16, 2018): 131–46. http://dx.doi.org/10.1080/20403526.2018.1454700.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
41

Koechlin, Laurent, Luc Dettwiller, Maurice Audejean, Maël Valais, and Arturo López Ariste. "Solar survey at Pic du Midi: Calibrated data and improved images." Astronomy & Astrophysics 631 (October 21, 2019): A55. http://dx.doi.org/10.1051/0004-6361/201732504.

Full text of the source
Abstract:
Context. We carry out a solar survey with images of the photosphere, prominences, and corona at Pic du Midi observatory. This survey, named CLIMSO (for CLIchés Multiples du SOleil), is in the following spectral lines: Fe XIII corona (1.075 μm), Hα (656.3 nm), and He I (1.083 μm) prominences, and Hα and Ca II (393.4 nm) photosphere. All frames cover 1.3 times the diameter of the Sun with an angular resolution approaching one arcsecond. The frame rate is one per minute per channel (weather permitting) for the prominences and chromosphere, and one per hour for the Fe XIII corona. This survey started in 2007 for the disk and prominences and in 2015 for the corona. We have almost completed one solar cycle and hope to cover several more, keeping the same wavelengths or adding others. Aims. We seek to make the CLIMSO images easier to use and more profitable for the scientific community. Methods. At the beginning of the survey, the images that we sent to the CLIMSO database were not calibrated. We have implemented a photometric calibration for the present and future images, in order to provide “science-ready” data. The old images have been calibrated. We have also improved the contrast capabilities of our coronagraphs, which now provide images of the Fe XIII corona, in addition to previous spectral channels. We also implemented an autoguiding system based on a diffractive Fresnel array for precise positioning of the Sun behind coronagraphic masks. Results. The data, including the images and films, are publicly available and downloadable through virtual observatories and dedicated websites (use “CLIMSO” and “IRAP” keywords to find them). For the Hα and Ca II channels we calibrate the data into physical units, independent of atmospheric or instrumental conditions; we provide solar maps of spectral radiances in W m−2 sr−1 nm−1. The instrumental improvements and calibration process are presented in this paper.
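The abstract states that the Hα and Ca II frames are calibrated into spectral radiance (W m−2 sr−1 nm−1) but does not spell out the algorithm. Purely as an illustration of that kind of photometric calibration, the sketch below scales dark-subtracted, flat-fielded counts to an adopted quiet-Sun disc-centre radiance; the function, its inputs, and the reference value are placeholders, not the actual CLIMSO pipeline.

    import numpy as np

    # Placeholder reference value for the channel's quiet-Sun disc-centre spectral
    # radiance, in W m^-2 sr^-1 nm^-1 (illustrative only, not a CLIMSO constant).
    REFERENCE_RADIANCE = 3.0e4

    def calibrate_frame(raw, dark, flat, disc_centre_mask):
        """Generic photometric calibration sketch (not the CLIMSO algorithm).

        raw, dark, flat: 2-D numpy arrays; disc_centre_mask: boolean array
        selecting a quiet region near disc centre. Returns a map whose pixel
        values are spectral radiances in W m^-2 sr^-1 nm^-1.
        """
        corrected = (raw - dark) / flat          # remove dark signal, correct pixel gains
        scale = REFERENCE_RADIANCE / corrected[disc_centre_mask].mean()
        return corrected * scale                 # counts -> physical units

Tying every frame to a fixed reference in this way is what would make the resulting maps comparable across days, independent of atmospheric or instrumental conditions, which is the property the abstract emphasises for the calibrated survey data.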
Styles: APA, Harvard, Vancouver, ISO, etc.
42

Puccini, Sérgio. "A escola francesa de som direto: Jean-Pierre Ruh e Éric Rohmer." Significação: Revista de Cultura Audiovisual 49, no. 57 (February 4, 2022): 259–79. http://dx.doi.org/10.11606/issn.2316-7114.sig.2022.174752.

Full text of the source
Abstract:
The article examines one of the most prominent figures of the French school of direct sound, the sound technician Jean-Pierre Ruh, through his work with the filmmaker Éric Rohmer on the films Ma nuit chez Maud (1969), Le genou de Claire (1970), and L'amour l'après-midi (1972). Drawing mainly on Jean-Pierre Ruh's own accounts, we focus on questions central to the direct-sound school, particularly off-screen sounds, sound imperfections, and ordinary sounds, understood within a conception that values the truth and singularity of sound in a film. Each film was found to have its own particularity, as well as a unique sound, and all of them explore voices always projected in space, whether enclosed or open, which adds information about perspective and depth.
Styles: APA, Harvard, Vancouver, ISO, etc.
43

Zeng, Yao-Fu, Xu-Ge Liu, Dong-Hang Tan, Wen-Xin Fan, Yi-Na Li, Yu Guo, and Honggen Wang. "Halohydroxylation of alkenyl MIDA boronates: switchable stereoselectivity induced by B(MIDA) substituent." Chemical Communications 56, no. 31 (2020): 4332–35. http://dx.doi.org/10.1039/d0cc00722f.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
44

TOFT, J., and B. HESSE. "Are age and sex of importance in myocardial bull's eye reference files with Tc-99m MIBI?" Journal of Nuclear Cardiology 2, no. 2 (March 1995): S36. http://dx.doi.org/10.1016/s1071-3581(05)80230-2.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
45

Espindola, Andres S., and Kitty F. Cardwell. "Microbe Finder (MiFi®): Implementation of an Interactive Pathogen Detection Tool in Metagenomic Sequence Data." Plants 10, no. 2 (January 28, 2021): 250. http://dx.doi.org/10.3390/plants10020250.

Full text of the source
Abstract:
Agricultural high-throughput diagnostics need to be fast, accurate, and capable of multiplexing. Metagenomic sequencing is being widely evaluated for plant and animal diagnostics. Bioinformatic analysis of metagenomic sequence data has been a bottleneck for diagnostic analysis due to the size of the data files. Most available tools for analyzing high-throughput sequencing (HTS) data require that the user have computer coding skills and access to high-performance computing. To overcome the constraints of most sequencing-based diagnostic pipelines today, we have developed Microbe Finder (MiFi®). MiFi® is a web application for quick detection and identification of known pathogen species/strains in raw, unassembled HTS metagenomic data. HTS-based diagnostic tools developed through MiFi® must pass rigorous validation, which is outlined in this manuscript. MiFi® allows researchers to collaborate in the development and validation of HTS-based diagnostic assays using MiProbe™, a platform for developing pathogen-specific e-probes. Validated e-probes are made available to diagnosticians through MiDetect™. Here we describe the e-probe development, curation and validation process of MiFi® using grapevine pathogens as a model system. MiFi® can be used with any pathosystem and HTS platform after e-probes have been validated.
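The abstract describes screening raw, unassembled HTS reads against pathogen-specific e-probes. As a toy illustration of that idea (not the actual MiFi® matching engine, which the abstract does not describe), the sketch below counts reads hit by each pathogen's probes using exact substring matching; real e-probe pipelines use alignment-based scoring and validated thresholds. All probe and read sequences here are invented.

    def screen_reads(reads, e_probes):
        """Toy e-probe screen: count raw reads containing any probe of each pathogen.

        reads: iterable of read strings; e_probes: dict mapping pathogen name to a
        list of probe sequences. Exact substring matching stands in for the
        alignment-based matching a real pipeline would use.
        """
        hits = {pathogen: 0 for pathogen in e_probes}
        for read in reads:
            for pathogen, probes in e_probes.items():
                if any(probe in read for probe in probes):
                    hits[pathogen] += 1
        return hits

    # Invented probes and reads, for illustration only.
    e_probes = {"Pathogen_X": ["ACGTTGCA", "TTGACCGT"]}
    reads = ["NNACGTTGCAGG", "CCCCCCCCCC", "ATTTGACCGTA"]
    print(screen_reads(reads, e_probes))  # {'Pathogen_X': 2}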
Styles: APA, Harvard, Vancouver, ISO, etc.
46

Debruyne, F., G. Geuens, V. Vander Poorten, and P. Delaere. "Re-operation for secondary hyperparathyroidism." Journal of Laryngology & Otology 122, no. 9 (November 30, 2007): 942–47. http://dx.doi.org/10.1017/s0022215107001120.

Full text of the source
Abstract:
Objective: In cases of re-operation for secondary hyperparathyroidism, to evaluate the extent to which the location of recurrent hyperplasia was predicted by (1) operative data from the first intervention, and (2) pre-operative imaging (before the re-operation). Methods: The files of 18 patients undergoing surgery for recurrent secondary hyperparathyroidism were reviewed. The surgical findings were compared both with the report of the initial operation and with the results of pre-operative imaging (i.e., ultrasonography, MIBI scintigraphy or computed tomography). Results: The location of the recurrent hyperplasia corresponded with the data from the primary intervention in about one-third of patients. There was a partial correlation in one-third of patients, and no correlation at all in one-third. Pre-operative imaging enabled better prediction of the location of recurrent disease. Conclusion: Surgeons should have both sources of information at their disposal when planning a re-intervention for secondary hyperparathyroidism. However, in our series, the predictive value of imaging was superior to that of information deduced from the previous surgical record.
Styles: APA, Harvard, Vancouver, ISO, etc.
47

Fan, Wen-Xin, Ji-Lin Li, Wen-Xin Lv, Ling Yang, Qingjiang Li, and Honggen Wang. "Synthesis of fluorinated amphoteric organoborons via iodofluorination of alkynyl and alkenyl MIDA boronates." Chemical Communications 56, no. 1 (2020): 82–85. http://dx.doi.org/10.1039/c9cc08386c.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
48

Nagiub, A., and F. Mansfeld. "Microbiologically influenced corrosion inhibition (MICI) due to bacterial contamination." Materials and Corrosion 52, no. 11 (November 2001): 817–26. http://dx.doi.org/10.1002/1521-4176(200111)52:11<817::aid-maco817>3.0.co;2-1.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
49

Li, Shichao, Muyao Li, Shu-Sen Li, and Jianbo Wang. "Pd-Catalyzed coupling of benzyl bromides with BMIDA-substituted N-tosylhydrazones: synthesis of trans-alkenyl MIDA boronates." Chemical Communications 58, no. 3 (2022): 399–402. http://dx.doi.org/10.1039/d1cc06170d.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
50

Wang, Lucia, Shengjia Lin, Yawei Zhu, Daniel Ferrante, Timaf Ishak, Yuki Baba, and Abhishek Sharma. "α-Hydroxy boron-enabled regioselective access to bifunctional halo-boryl alicyclic ethers and α-halo borons." Chemical Communications 57, no. 37 (2021): 4564–67. http://dx.doi.org/10.1039/d1cc00336d.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.