Selected scientific literature on the topic "High efficiency sample introduction system"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "High efficiency sample introduction system".

Next to each source in the list of references there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication as a .pdf file and read the abstract online if it is included in the metadata.

Journal articles on the topic "High efficiency sample introduction system"

1

Patterson, Eric E., Sujeewa C. Piyankarage, Kyaw ThetMaw Myasein, Jose S. Pulido, Robert F. Dundervill, R. Mark Hatfield, and Scott A. Shippy. "A high-efficiency sample introduction system for capillary electrophoresis analysis of amino acids from dynamic samples and static dialyzed human vitreous samples". Analytical and Bioanalytical Chemistry 392, no. 3 (August 12, 2008): 409–16. http://dx.doi.org/10.1007/s00216-008-2304-5.

2

Miyashita, Shin-ichi, Shin-ichiro Fujii, and Kazumi Inagaki. "Single-particle/Cell Analysis by Highly Time-resolved ICP-MS Using a High-efficiency Sample Introduction System". Bunseki kagaku 66, no. 9 (2017): 663–76. http://dx.doi.org/10.2116/bunsekikagaku.66.663.

3

Aliaga-Campuzano, M. P., J. P. Bernal, S. B. Briceño-Prieto, O. Pérez-Arvizu, and E. Lounejeva. "Direct analysis of lanthanides by ICPMS in calcium-rich water samples using a modular high-efficiency sample introduction system–membrane desolvator". Journal of Analytical Atomic Spectrometry 28, no. 7 (2013): 1102. http://dx.doi.org/10.1039/c3ja50070e.

4

Kreisberg, N. M., D. R. Worton, Y. Zhao, G. Isaacman, A. H. Goldstein, and S. V. Hering. "Development of an automated high-temperature valveless injection system for online gas chromatography". Atmospheric Measurement Techniques 7, no. 12 (December 12, 2014): 4431–44. http://dx.doi.org/10.5194/amt-7-4431-2014.

Abstract:
Abstract. A reliable method of sample introduction is presented for online gas chromatography with a special application to in situ field portable atmospheric sampling instruments. A traditional multi-port valve is replaced with a valveless sample introduction interface that offers the advantage of long-term reliability and stable sample transfer efficiency. An engineering design model is presented and tested that allows customizing this pressure-switching-based device for other applications. Flow model accuracy is within measurement accuracy (1%) when parameters are tuned for an ambient-pressure detector and 15% accurate when applied to a vacuum-based detector. Laboratory comparisons made between the two methods of sample introduction using a thermal desorption aerosol gas chromatograph (TAG) show that the new interface has approximately 3 times greater reproducibility maintained over the equivalent of a week of continuous sampling. Field performance results for two versions of the valveless interface used in the in situ instrument demonstrate typically less than 2% week−1 response trending and a zero failure rate during field deployments ranging up to 4 weeks of continuous sampling. Extension of the valveless interface to dual collection cells is presented with less than 3% cell-to-cell carryover.
5

Kreisberg, N. M., D. R. Worton, Y. Zhao, G. Isaacman, A. H. Goldstein, and S. V. Hering. "Development of an automated high temperature valveless injection system for on-line gas chromatography". Atmospheric Measurement Techniques Discussions 7, no. 7 (July 23, 2014): 7531–67. http://dx.doi.org/10.5194/amtd-7-7531-2014.

Abstract:
Abstract. A reliable method of sample introduction is presented for on-line gas chromatography with a special application to in-situ field portable atmospheric sampling instruments. A traditional multi-port valve is replaced with a controlled pressure switching device that offers the advantage of long term reliability and stable sample transfer efficiency. An engineering design model is presented and tested that allows customizing the interface for other applications. Flow model accuracy is within measurement accuracy (1%) when parameters are tuned for an ambient detector and 15% accurate when applied to a vacuum based detector. Laboratory comparisons made between the two methods of sample introduction using a thermal desorption aerosol gas chromatograph (TAG) show approximately three times greater reproducibility maintained over the equivalent of a week of continuous sampling. Field performance results for two versions of the valveless interface used in the in-situ instrument demonstrate minimal trending and a zero failure rate during field deployments ranging up to four weeks of continuous sampling. Extension of the VLI to dual collection cells is presented with less than 3% cell-to-cell carry-over.
6

Kajner, Gyula, Ádám Bélteki, Martin Cseh, Zsolt Geretovszky, Tibor Ajtai, Lilla Barna, Mária A. Deli, Bernadett Pap, Gergely Maróti, and Gábor Galbács. "Design, Optimization, and Application of a 3D-Printed Polymer Sample Introduction System for the ICP-MS Analysis of Nanoparticles and Cells". Nanomaterials 13, no. 23 (November 25, 2023): 3018. http://dx.doi.org/10.3390/nano13233018.

Abstract:
Commonly used sample introduction systems for inductively coupled plasma mass spectrometry (ICP-MS) are generally not well-suited for single particle ICP-MS (spICP-MS) applications due to their high sample requirements and low efficiency. In this study, the first completely 3D-printed, polymer SIS was developed to facilitate spICP-MS analysis. The system is based on a microconcentric pneumatic nebulizer and a single-pass spray chamber with an additional sheath gas flow to further facilitate the transport of larger droplets or particles. The geometry of the system was optimized using numerical simulations. Its aerosol characteristics and operational conditions were studied via optical particle counting and a course of spICP-MS measurements, involving nanodispersions and cell suspensions. In a comparison of the performance of the new and the standard (quartz microconcentric nebulizer plus a double-pass spray chamber) systems, it was found that the new sample introduction system has four times higher particle detection efficiency, significantly better signal-to-noise ratio, provides ca. 20% lower size detection limit, and allows an extension of the upper limit of transportable particle diameters to about 25 µm.
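For orientation, the particle detection efficiency figures quoted above are of the kind obtained with the particle-number (frequency) method commonly used in spICP-MS work. The sketch below illustrates that calculation in Python; the uptake rate, standard concentration and event count are placeholder values, not data from the cited study.

    # Particle-number method for the transport/detection efficiency of a
    # spICP-MS sample introduction system. All numbers are illustrative.
    def transport_efficiency(events_detected, number_conc_per_ml,
                             uptake_ml_per_min, acq_time_s):
        # Fraction of introduced particles that reach the plasma and are detected.
        particles_introduced = number_conc_per_ml * uptake_ml_per_min * (acq_time_s / 60.0)
        return events_detected / particles_introduced

    # Example: 12,000 events from a 50,000 particles/mL standard,
    # 0.35 mL/min uptake, 60 s acquisition -> roughly 69%.
    eta = transport_efficiency(12_000, 50_000, 0.35, 60.0)
    print(f"transport efficiency ~ {eta:.1%}")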
7

Dvoryanchikova, E. M., K. A. Dzhevello, and D. D. Galuzin. "Experience in using high matrix introduction (HMI) technology for the analysis of lead by inductively coupled plasma mass-spectrometry (ICP-MS)". Industrial laboratory. Diagnostics of materials 87, no. 7 (July 24, 2021): 17–22. http://dx.doi.org/10.26896/1028-6861-2021-87-7-17-22.

Abstract:
The impurities contained in lead and lead-based alloys, which are widely used in various branches of industry (nuclear, medical, electrical engineering, etc.), affect their physicochemical properties, which makes a reliable method for impurity determination necessary. The photometric, spectral, and chemical-spectral methods used to address this problem are labor-intensive and do not always have the required sensitivity. Inductively coupled plasma mass spectrometry (ICP-MS) coupled with High Matrix Introduction (HMI) technology has been proposed as an alternative, easy-to-use, and more sensitive procedure. The Agilent HMI sample injection system provides inline dilution of the sample aerosol (supplied from the spray chamber to the torch) with pure argon. This method of sample introduction allows the analysis of solutions with a solute content of up to 1% and higher. The aerosol dilution reduces the concentration of the matrix and solvent at the inductively coupled plasma interface without conventional dilution. In this case, matrix suppression of impurities is almost eliminated and the CeO+/Ce+ ratio is reduced to 0.2%, whereas the typical CeO+/Ce+ ratio for Agilent 7500 mass spectrometers is 1–2%, but no more than 3%. We present the application of this method to the analysis of Mg, Ca, Fe, Cu, As, Ag, Sn, Sb, and Bi in lead by an Agilent 7500cx ICP-MS with preliminary acid digestion of the lead samples in a microwave autoclave. The use of the HMI system made it possible to eliminate the sample dilution step, reducing the possibility of sample contamination by the diluent, and to determine impurity contents in a highly concentrated matrix at the 10^-4–10^-5 % level. The efficiency of the method, as well as the possibility of using multi-element standard solutions prepared in 1% nitric acid for the analysis of samples with high lead content, is shown.
8

Zheng, Jian, Liguo Cao, Keiko Tagami, and Shigeo Uchida. "Triple-Quadrupole Inductively Coupled Plasma-Mass Spectrometry with a High-Efficiency Sample Introduction System for Ultratrace Determination of 135Cs and 137Cs in Environmental Samples at Femtogram Levels". Analytical Chemistry 88, no. 17 (August 19, 2016): 8772–79. http://dx.doi.org/10.1021/acs.analchem.6b02150.

9

Todolí, José-Luis, and Jean-Michel Mermet. "Evaluation of a direct injection high-efficiency nebulizer (DIHEN) by comparison with a high-efficiency nebulizer (HEN) coupled to a cyclonic spray chamber as a liquid sample introduction system for ICP-AES". J. Anal. At. Spectrom. 16, no. 5 (2001): 514–20. http://dx.doi.org/10.1039/b009430g.

10

Manard, Benjamin T., Veronica C. Bradley, C. Derrick Quarles, Lyndsey Hendriks, Daniel R. Dunlap, Cole R. Hexel, Patrick Sullivan, and Hunter B. Andrews. "Towards Automated and High-Throughput Quantitative Sizing and Isotopic Analysis of Nanoparticles via Single Particle-ICP-TOF-MS". Nanomaterials 13, no. 8 (April 9, 2023): 1322. http://dx.doi.org/10.3390/nano13081322.

Abstract:
The work described herein assesses the ability to characterize gold nanoparticles (Au NPs) of 50 and 100 nm, as well as 60 nm silver shelled gold core nanospheres (Au/Ag NPs), for their mass, respective size, and isotopic composition in an automated and unattended fashion. Here, an innovative autosampler was employed to mix and transport the blanks, standards, and samples into a high-efficiency single particle (SP) introduction system for subsequent analysis by inductively coupled plasma–time of flight–mass spectrometry (ICP-TOF-MS). Optimized NP transport efficiency into the ICP-TOF-MS was determined to be >80%. This combination, SP-ICP-TOF-MS, allowed for high-throughput sample analysis. Specifically, 50 total samples (including blanks/standards) were analyzed over 8 h, to provide an accurate characterization of the NPs. This methodology was implemented over the course of 5 days to assess its long-term reproducibility. Impressively, the in-run and day-to-day variation of sample transport is assessed to be 3.54 and 9.52% relative standard deviation (%RSD), respectively. The determination of Au NP size and concentration was of <5% relative difference from the certified values over these time periods. Isotopic characterization of the 107Ag/109Ag particles (n = 132,630) over the course of the measurements was determined to be 1.0788 ± 0.0030 with high accuracy (0.23% relative difference) when compared to the multi-collector–ICP-MS determination.
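As background to the sizing results above, single-particle ICP-MS typically converts each particle event into an analyte mass and then an equivalent spherical diameter. The sketch below shows that standard conversion; the sensitivity and density inputs are assumptions for illustration, not the authors' calibration values.

    import math

    # Convert a single-particle event signal into an equivalent spherical diameter.
    def particle_diameter_nm(event_counts, sensitivity_counts_per_fg,
                             density_g_cm3, mass_fraction=1.0):
        mass_fg = event_counts / sensitivity_counts_per_fg / mass_fraction  # particle mass in femtograms
        volume_cm3 = (mass_fg * 1e-15) / density_g_cm3
        d_cm = (6.0 * volume_cm3 / math.pi) ** (1.0 / 3.0)
        return d_cm * 1e7  # cm -> nm

    # Example: a 250-count Au event with an assumed 50 counts/fg sensitivity
    # and Au density 19.3 g/cm3 -> roughly 79 nm.
    print(f"{particle_diameter_nm(250, 50, 19.3):.0f} nm")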

Theses / dissertations on the topic "High efficiency sample introduction system"

1

Bastardo-Fernandez, Isabel. "Vers une fiabilité améliorée de la détermination de (nano)particules de TiO2 par single particle inductively coupled plasma-mass spectrometry : application à la caractérisation des aliments et aux études de migration". Electronic Thesis or Diss., Maisons-Alfort, École nationale vétérinaire d'Alfort, 2024. http://www.theses.fr/2024ENVA0001.

Abstract:
This PhD project aims primarily to improve the reliability of the characterisation of TiO2 nanoparticles (NPs) and to gain knowledge of the food additive E171 and in real-life applications such as migration of these NPs from food packaging. In the first part of the study (to be carried out at Anses), a new approach for TiO2 NPs characterisation will be developed and optimized by using the single particle approach in combination with inductively coupled plasma-triple quadrupole mass spectrometry (Sp-ICP-QQQMS). For this purpose, the most critical analytical parameters, such as the transport efficiency (TE) calculation methods and the sample introduction system will be assessed under different working conditions (e.g. reaction gas, choice of isotope). In the latter case, two high efficiency sample introduction systems (APEX type) will be critically compared. Further, a complementary Sp approach based on ICP-high resolution MS (Sp-ICP-HRMS) will be developed at LNE. The novelty in this case will be the use of a high resolution (magnetic sector field) ICP-MS for detection, which is the state-of-the art technique for trace and ultra-trace metals determination of highly interfered elements such as the case of Ti. An in-house injection system will also be optimized to increase the transport efficiency and sensitivity. Method validation by inter-laboratory comparison between LNE and ANSES will be achieved here. A truly added value of the project will be the assessment of the measurement uncertainty related to TiO2 NPs characterization by both Sp-ICP-MS (QQQ and HR) approaches. The uncertainty calculations will take into account, not only the experimental reproducibility and the uncertainties of each variables required to convert ICP-MS signal into NPs size and concentration, but also and for the first time, the effect of the choice of the cut-off to discriminate the ICP-MS ionic signal from that of NPs. The effect of deviations from the spherical shape on the sizes will also be explored and compared with scanning electron microscopy (SEM), which is the reference method for NPs characterisation. The project also aims at the preparation and exhaustive characterization of a real-life (food additive) reference material containing TiO2 nanoparticles. A feasibility study of the development of an E171-based RM under a suspension form will be carried out. For this purpose, a representative E171 sample will be prepared and fully characterized by a panel of complementary techniques, such as SEM, Sp-ICP-QQQ MS, Sp-ICP-HRMS, X-ray diffraction (XRD) to accurately assess the main parameters of interest, such as the median and mean diameter, size distribution, fraction of nanoparticles, chemical impurities and crystallographic fraction. Finally, both analytical approaches developed at Anses and LNE, including the developed method for global uncertainty assessment, will be applied to the study of the transfer of TiO2 NPs from food packaging. All along the project, the size data obtained by using the newly developed “single particle” based approaches for TiO2 NPs characterisation will be compared to SEM measurements, which is the reference method for size in this study field. Food packaging migration studies is indeed a selected case study where Sp-ICP-MS has the potential of supplying additional information compared to other instruments, such as: particle concentration, proportion of particulate vs. dissolved form, which are of importance for migration as well as to improve risk assessment studies

Books on the topic "High efficiency sample introduction system"

1

Plaskova, Nataliya. Methodology. Russia: INFRA-M Academic Publishing LLC, 2022. http://dx.doi.org/10.12737/1842566.

Abstract:
The monograph reveals a system of theoretical, methodological, and practical approaches to improving the processes of creating and operating a system of accounting and analytical information that comprehensively reflects the activity of an organization under the modern conditions of the development of Russia's digital economy. It presents a set of organizational and methodological tasks, and options for solving them, concerning the formation of a high-quality information base for supporting a controlling system and internal management decision-making by company management and managers, as well as for meeting the information requests of external stakeholders. The introduction of the author's proposed methods and techniques into the accounting and analytical practice of organizations makes it possible to optimize the management costs associated with accounting and management accounting, analysis, and planning; contributes to the proper functioning of the company's internal information flows, reliable disclosure of its financial situation and performance, the organization of a sound controlling system, and a timely, adequate management response to negative external and internal factors; and increases business efficiency and strengthens competitiveness. It is intended for researchers, university teachers, postgraduates, bachelor's and master's students in Economics, Management, and Finance and Credit, as well as practitioners in accounting, analysis, audit, internal control, and the management of the financial and economic activities of organizations.

Book chapters on the topic "High efficiency sample introduction system"

1

Xu, Jianfeng, Fuhui Sun, and Qiwei Chen. "High Quality and Efficiency Operation and Maintenance". In Introduction to the Smart Court System-of-Systems Engineering Project of China, 385–409. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-2382-1_8.

2

Di Gaudio, Francesca, Annamaria Cucina, and Sergio Indelicato. "Turbulent Flow Chromatography: A Unique Two-Dimensional Liquid Chromatography". In High Performance Liquid Chromatography - Recent Advances and Applications [Working Title]. IntechOpen, 2023. http://dx.doi.org/10.5772/intechopen.110427.

Abstract:
Among 2D-LC techniques, a particular approach is commercialized by Thermo Fisher Scientific that may enable the direct introduction of biological samples into an online automated extraction system without any additional pre-treatment: the TurboFlow technology. It combines chemical and size exclusion capability of chromatography columns packed with porous particles in which a turbulent solvent flow is able to separate smaller molecules from larger ones (e.g. proteins). Once extracted, the small molecules can also be transferred to an analytical column for improving separation prior to detection. This is done through a unique plumbing and customized valve-switching arrangement that allows the focusing of molecules onto the second column. This enables a very efficient chromatographic separation. The use of the TurboFlow not only eliminates extensive sample preparation, thus reducing inter-operator variability and matrix effects, but also increases the capacity for high-throughput analyses due to a unique multiplexing technology, in which multiple LC channels are connected to a single detector.
3

Zlatović, Miran, Igor Balaban, and Željko Hutinski. "A Model of the Continual Adaptive Online Knowledge Assessment System". In E-Learning and Digital Education in the Twenty-First Century - Challenges and Prospects [Working Title]. IntechOpen, 2020. http://dx.doi.org/10.5772/intechopen.95295.

Abstract:
This chapter presents a model of a novel adaptive online knowledge assessment system and tests the efficiency of its implementation. System enables continual and cumulative knowledge assessment, comprised of sequence of at least two interconnected assessments, carried-out throughout a reasonably long period of time. Important characteristics of the system are: (a) introduction of new course topics in every subsequent assessment, (b) re-assessment of earlier course topics in every subsequent assessment iteration, (c) in an adaptive manner, based on student’s achievements during previous assessments. Personalized post-assessment feedback guides each student in preparations for upcoming assessments. The efficiency has been tested on a sample of 78 students. Results indicate that the proposed adaptive system is efficient on an individual learning goal level.
4

Ahmed, L. Jubair, S. Dhanasekar, K. Martin Sagayam, Surbhi Vijh, Vipin Tyagi, Mayank Singh, and Alex Norta. "Introduction to Neuromorphic Computing Systems". In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 1–29. IGI Global, 2023. http://dx.doi.org/10.4018/978-1-6684-6596-7.ch001.

Abstract:
The process of using electronic circuits to replicate the neurobiological architectures seen in the nervous system is known as neuromorphic engineering, also referred to as neuromorphic computing. These technologies are essential for the future of computing, although most of the work in neuromorphic computing has been focused on hardware development. The execution speed, energy efficiency, accessibility and robustness against local failures are vital advantages of neuromorphic computing over conventional methods. Spiking neural networks are generated using neuromorphic computing. This chapter covers the basic ideas of neuromorphic engineering, neuromorphic computing, and its motivating factors and challenges. Deep learning models are frequently referred to as deep neural networks because deep learning techniques use neural network topologies. Deep learning techniques and their different architectures were also covered in this section. Furthermore, Emerging memory Devices for neuromorphic systems and neuromorphic circuits were illustrated.
5

Gray, John S., and Michael Elliott. "Introduction". In Ecology of Marine Sediments. Oxford University Press, 2009. http://dx.doi.org/10.1093/oso/9780198569015.003.0004.

Abstract:
As the oceans cover 70% of the earth’s surface, marine sediments constitute the second largest habitat on earth, after the ocean water column, and yet we still know more about the dark side of the moon than about the biota of this vast habitat. The primary aim of this book is to give an overview of the biota of marine sediments from an ecological perspective—we will talk of the benthos, literally the plants and animals at the bottom of the sea, but we will also use the term to include those organisms living on the intertidal sediments, the sands and muds of the shore. Given that most of that area is below the zone where light penetrates, the photic zone, the area is dominated by the animals and so we will concentrate on this component. Many of the early studies of marine sediments were taxonomic, describing new species. One of the pioneers was Carl von Linnaeus (1707–1778), the great Swedish biologist who developed the Linnaean classification system for organisms that is still used today (but under threat from some molecular biologists who argue that the Linnaean system is outdated and propose a new system called Phylocode). Linnaeus described hundreds of marine species, many of which come from marine sediments. The British marine biologist Edward Forbes was a pioneer who invented the dredge to sample marine animals that lived below the tidemarks. Forbes showed that there were fewer species as the sampled depth increased and believed that the great pressures at depths meant that no animals would be found deeper than 600 m. This was disproved by Michael Sars who in 1869 used a dredge to sample the benthos at 600 m depth off the Lofoten islands in Norway. Sars found 335 species and in fact was the first to show that the deep sea (off the continental shelf) had high numbers of species. Following these pioneering studies, one of the earliest systematic studies of marine sediments was the HMS Challenger expedition of 1872–1876, the first global expedition. The reports of the expedition were extensive but were mostly descriptive, relating to taxonomy and general natural history.
6

Kuznetsova, Maryna. "Comparative Analysis of National and Foreign Experience of Legal Regulation of the Labor Disputes Resolution with the Challenges of 2022". In Стратегія сучасного розвитку України: синтез правових, освітніх та економічних механізмів : колективна монографія / за загальною редакцією професора Старченка Г. В., 67–84. ГО «Науково-освітній інноваційний центр суспільних трансформацій», 2022. http://dx.doi.org/10.54929/monograph-12-2022-02-01.

Abstract:
The issue of legal regulation of the labor disputes resolution is quite relevant in crisis periods of the society development. To date, there are a number of problems in Ukraine related to the guarantee and protection of the labor rights of citizens, which are related, among other things, to Russian armed aggression and, as a result, the increase in the level of labor migration, problems of access to justice in the war zone, etc. The study of the current legislation and foreign experience of legal regulation of labor dispute resolution made it possible to draw a number of conclusions. In the context of the judicial system, the introduction of a system of specialized courts in Ukraine, which exists in many other countries, would make it possible to resolve labor disputes more effectively. At the same time, the parties will be able to choose arbitrators who have a high degree of knowledge in this highly specialized field. The advantages of considering cases in specialized courts should also include the confidentiality of the process, the speed and finality of decisions, which contribute to a more efficient and effective way of resolving labor conflicts. As for the pre-trial and alternative form, attention is focused on the role of the Labor Disputes Commissions, trade unions and the possibility of using mediation as an alternative form of resolving labor disputes.
7

Yousaf, Khurram, Kunjie Chen, and Muhammad Azam Khan. "An Introduction of Biomimetic System and Heat Pump Technology in Food Drying Industry". In Biomimetics [Working Title]. IntechOpen, 2020. http://dx.doi.org/10.5772/intechopen.93386.

Abstract:
Drying of food products is a relatively complex, nonlinear, and dynamic process due to simultaneous heat and mass transfer, rapid moisture evaporation, and biological and chemical reactions. Therefore, the monitoring of food quality during the drying process using bio-inspired technologies can play a vital role. The demand for high-quality dried food products and the rapid growth of energy in food processing are attracting new and renewable sources of energy. Energy efficiency, improved food product quality, and less environmental impact are always the main priorities of any drying system development. In-depth knowledge of biomimetic systems and drying kinetics would be helpful to design new dryers and technologies. Due to the excellent features (controllable drying temperature, drying time, drying air velocity, and relative humidity), heat pump drying systems have been used widely to ensure food and agricultural product quality. This chapter helps to understand the relationship between bio-inspired technologies and the role of heat pump technology in the food drying industry in terms of cost-effectiveness, energy saving, and better food product quality.
8

Dhabliya, Dharmesh, Vivek Veeraiah, Sukhvinder Singh Dari, Jambi Ratna Raja Kumar, Ritika Dhabliya, Sabyasachi Pramanik, and Ankur Gupta. "Creating a Data Lakehouse for a South African Government-Sector Learning Control Enforcing Quality Control for Incremental Extract-Load-Transform Pipe". In Big Data Quantification for Complex Decision-Making, 88–109. IGI Global, 2024. http://dx.doi.org/10.4018/979-8-3693-1582-8.ch004.

Abstract:
The Durban University of Technology is now engaged in a project to create a data lakehouse system for a Training Authority in the South African Government sector. This system is crucial for improving the monitoring and evaluation capacities of the training authority and ensuring efficient service delivery. Ensuring the high quality of data being fed into the lakehouse is crucial, since low data quality negatively impacts the effectiveness of the lakehouse system. This chapter examines quality control methods for ingestion-layer pipelines in order to present a framework for ensuring data quality. The metrics taken into account for assessing data quality were completeness, accuracy, integrity, correctness, and timeliness. The efficiency of the framework was assessed by implementing it on a sample semi-structured dataset. Suggestions for future development include integrating data from a wider range of sources and providing triggers for incremental data intake.
9

Liang, Peishen. "Risk Assessment of Supply Chain Network Through Subset Simulation-Based Reliability Analysis". In Digitalization and Management Innovation II. IOS Press, 2023. http://dx.doi.org/10.3233/faia230757.

Abstract:
In this paper, a subset simulation-based approach is proposed to tackle the network reliability analysis problem in supply chain risk assessment. The reliability of the supply chain is essential for seamless operations in modern businesses. Network reliability analysis evaluates the continuity and reliability of business operations in the event of failures occurring in nodes and paths within the supply chain system. Traditional methods for network reliability analysis often rely on computationally expensive Monte Carlo simulation due to the complexities and high-dimensional input space of supply chain systems. To overcome this challenge, a subset simulation-based approach is proposed in this paper. Subset simulation is an efficient method which employs importance sampling and Bayesian inference to accurately estimate the probability of extreme events. To implement the approach, the supply chain system is modeled as a network and sample uncertainties associated with input parameters. Subsequently, subset simulation techniques are utilized to generate a set of samples and evaluate the reliability of the supply chain system and assess potential risks. To validate the effectiveness of the proposed method, experiments are conducted on a real-world supply chain system. The results demonstrate that the approach achieves superior computational efficiency compared to traditional Monte Carlo simulation methods, while accurately assessing risks within the supply chain system.
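For contrast with the approach above, the sketch below shows the crude Monte Carlo estimator that subset simulation is designed to outperform, applied to a toy two-route supply network; the node failure probabilities are made-up values and this is not the chapter's algorithm or data.

    import random

    # Toy network: supply is lost only if both independent routes fail.
    def network_fails(p_fail_a=0.02, p_fail_b=0.05):
        a_down = random.random() < p_fail_a
        b_down = random.random() < p_fail_b
        return a_down and b_down

    # Crude Monte Carlo: expensive for rare events, which is what subset
    # simulation addresses by sampling failures in conditional stages.
    def crude_mc(n_samples=1_000_000):
        failures = sum(network_fails() for _ in range(n_samples))
        return failures / n_samples

    print(f"estimated P(failure) ~ {crude_mc():.4%}")  # analytic value: 0.02 * 0.05 = 0.1%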
10

Sharma, Aman, and Madhu Dhatwalia. "Evaluating the Guidance Needs of Secondary School Students: An Empirical Investigation in Shimla City of Himachal Pradesh, India". In Social Development and Governance Innovations in Education, Technology and Management, 31–46. QTanalytics India, 2023. http://dx.doi.org/10.48001/978-81-966500-9-4_3.

Abstract:
The contemporary world is undergoing rapid transformations, leading to intricate challenges such as conflicts, frustration, and unhealthy competition. These complexities contribute to a value crisis within the social system, affecting personal values, family dynamics, and fostering maladjustment among adolescents. Against this backdrop, the present research delves into the guidance needs of secondary school students in Shimla city. The study uses a descriptive research design, employing the Guidance Need Inventory (GNI) for data collection. The study encompasses the entire student population of arts and science streams of Government Senior Secondary Schools operating in Shimla city, with a sample size of 240 students selected through a simple random sampling technique. Descriptive statistics reveal uniformly high levels of guidance needs across all areas for all students. Intriguingly, the results indicate that guidance needs do not significantly differ based on gender and academic stream, except for the sociological part in relation to their stream. The findings underscore the pressing need for guidance programs within schools, emphasizing the importance of employing trained professionals to address the diverse and high-level guidance needs of students. In light of the ever-evolving challenges in today’s society, the study contributes valuable insights into crafting targeted interventions that can positively impact the physical, social, psychological, educational, and vocational development of students, fostering their overall efficiency and well-being.

Conference papers on the topic "High efficiency sample introduction system"

1

Pratt, David M., and David J. Moorhouse. "Common Currency for System Integration of High Intensity Energy Subsystems". In ASME/JSME 2011 8th Thermal Engineering Joint Conference. ASMEDC, 2011. http://dx.doi.org/10.1115/ajtec2011-44013.

Abstract:
Aerospace vehicle design has progressed in an evolutionary manner, with certain discrete changes such as turbine engines replacing propellers for higher speeds. The evolution has worked very well for commercial aircraft because the major components can be optimized independently. This is not true for many military configurations which require a more integrated approach. In addition, the introduction of aspects for which there is no pre-existing database requires special attention. Examples of subsystem that have no pre-existing data base include directed energy weapons (DEW) such as high power microwaves (HPM) and high energy lasers (HEL). These devices are inefficient, therefore a large portion of the energy required to operate the device is converted to waste heat and must be transferred to a suitable heat sink. For HPM, the average heat load during one ‘shot’ is on the same order as traditional subsystems and thus designing a thermal management system is possible. The challenge is transferring the heat from the HPM device to a heat sink. The power density of each shot could be hundreds of megawatts. This heat must be transferred from the HPM beam dump to a sink. The heat transfer must occur at a rate that will support shots in the 10–100Hz range. For HEL systems, in addition to the high intensity, there are substantial system level thermal loads required to provide an ‘infinite magazine.’ Present models are inadequate to analyze these problems, current systems are unable to sustain the energy dissipation required and the high intensity heat fluxes applied over a very short duration phenomenon is not well understood. These are examples of potential future vehicle integration challenges. This paper addresses these and other subsystems integration challenges using a common currency for vehicle optimization. Exergy, entropy generation minimization, and energy optimization are examples of methodologies that can enable the creation of energy optimized systems. These approaches allow the manipulation of fundamental equations governing thermodynamics, heat transfer, and fluid mechanics to produce minimized irreversibilities at the vehicle, subsystem and device levels using a common currency. Applying these techniques to design for aircraft system-level energy efficiency would identify not only which subsystems are inefficient but also those that are close to their maximum theoretical efficiency while addressing diverse system interaction and optimal subsystem integration. Such analyses would obviously guide researchers and designers to the areas having the highest payoff and enable departures from the evolutionary process and create a breakthrough design.
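The "common currency" argument above rests on exergy-type measures. A minimal sketch of specific flow exergy relative to a dead state, with assumed property values rather than anything from the paper, is:

    # Specific flow exergy, psi = (h - h0) - T0*(s - s0), neglecting kinetic
    # and potential terms. Property values below are illustrative only.
    def flow_exergy_kj_per_kg(h, s, h0, s0, t0_k):
        return (h - h0) - t0_k * (s - s0)

    # Example: hot bleed air at h = 800 kJ/kg, s = 2.6 kJ/(kg K) against a
    # 298 K dead state with h0 = 300 kJ/kg, s0 = 1.7 kJ/(kg K) -> ~232 kJ/kg.
    print(f"{flow_exergy_kj_per_kg(800.0, 2.6, 300.0, 1.7, 298.0):.0f} kJ/kg")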
2

Wang, Zhenyu, and Botao Lin. "Hydraulic Fracturing of a Tight Conglomerate Sample with Distributed Optical Fibre Monitoring". In 57th U.S. Rock Mechanics/Geomechanics Symposium. ARMA, 2023. http://dx.doi.org/10.56952/arma-2023-0097.

Abstract:
In this paper, a distributed fiber-optic monitoring system is used to finely portray the effect of gravel on crack extension during a true triaxial physical simulation experiment. The experimental results show that, under the control of the stress difference, the obstruction effect of large-grained gravels on cracks is much higher than that of small-grained gravels. The fracture mainly winds around large-grained gravels when it encounters them and produces more microfractures within the rock; the waterfall cloud map shows greater variation in the stretching and compression regions. When the fractures encounter small gravels, they are still predominantly gravel-wrapping with some gravel penetration, and the strain distribution on the waterfall cloud diagram is uniform. When the gravel is disordered, the fracture morphology is complex and diverse.

Introduction: Conglomerate reservoirs have low porosity and low permeability, and hydraulic fracturing technology contributes to their efficient development. The shape of the fracture network after fracturing affects the final production result (Kang et al., 2020). Conglomerate reservoirs, another key development target in China after shale, have large development potential. Due to the high heterogeneity of conglomerate, the extension of fractures in conglomerate reservoirs is affected by complex mechanisms, and characterization of fracture morphology is more difficult (Tang et al., 2022). Some scholars have conducted laboratory experimental studies on the hydraulic fracture morphology of conglomerate reservoirs. Ren et al. found, through large-scale physical simulations with conglomerate outcrops, that hydraulic fractures extend along the edges of conglomerate grains (Ren et al., 2023). Liu et al. used the Baikouquan formation of the Mahu depression as the object of study; indentation hardness tests found the gravel to be 5.3 times harder than the matrix (Liu et al., 2014). Li experimentally clarified several forms of interaction between fractures and gravels (Li et al., 2013).
3

Dremin, A. I. "EP-4(0) PROTECTIVE PROPERTIES EXPERIMENTAL ASSESSMENT APPROACHES UNDER HIGH-VOLTAGE EQUIPMENT MAINTENANCE". In The 4th «OCCUPATION and HEALTH» International Youth Forum (OHIYF-2022). FSBSI «IRIOH», 2022. http://dx.doi.org/10.31089/978-5-6042929-6-9-2022-1-68-72.

Abstract:
Introduction. The work of railway overhead contact system electrical staff, both on de-energized lines and on live lines, involves exposure to harmful and dangerous occupational factors, which requires reducing possible human health risks by organizing safe working conditions. Goal: to evaluate the effectiveness of shielding personal protective equipment in reducing the power frequency (PF) electric field (EF) under laboratory and actual-use test conditions. Methods: The shielding effectiveness of two generic sets of shielding personal protective equipment, one consisting of a jacket and trousers and the other of a jumpsuit, was evaluated from measurements of the power frequency electric field strength under the clothing. The investigations simulated the work of electrical staff both at ground potential and in live-line contact with the wire. Results: Both tested sets of personal protective equipment showed the required power frequency electric field shielding properties, with an attenuation coefficient of more than 30 dB under different occupational conditions. Conclusion: The tests demonstrated the high efficiency of power frequency electric field attenuation by both samples of EP-4(0) personal protective equipment. The results indicate that both laboratory and actual-use test approaches are appropriate and feasible for assessing the shielding properties of protective sets used for staff protection against power frequency electric fields.
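The attenuation coefficient reported above is the usual decibel measure for field quantities; the sketch below shows the arithmetic with illustrative field strengths, not measured values from the study.

    import math

    # Shielding attenuation in dB from field strengths measured outside and
    # under the protective clothing; 30 dB means roughly a 32-fold reduction.
    def attenuation_db(e_external_v_per_m, e_under_clothing_v_per_m):
        return 20.0 * math.log10(e_external_v_per_m / e_under_clothing_v_per_m)

    print(f"{attenuation_db(5000.0, 150.0):.1f} dB")  # -> ~30.5 dB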
4

Bilous, Vadym, and Kirill Sarachuk. "Development of the model-based planning system for augmented reality in industrial plant maintenance". In AHFE 2023 Hawaii Edition. AHFE International, 2023. http://dx.doi.org/10.54941/ahfe1004449.

Abstract:
Augmented reality (AR) is extensively used in modern industrial automation, especially in industrial plant maintenance (IPM). A growing number of both research and practical projects explore, develop and integrate AR applications for a variety of tasks in IPM. With the introduction of AR, the accomplishment of tasks like training of maintenance staff, the visualization of instructions during maintenance and error correction, the visualization of plant control processes, and many others become more visual and interactive and are, therefore, considerably simplified. Significant time efficiency with respect to commissioning of industrial equipment may also be achieved. Thus, the incorporation of AR technologies leads to comprehensive benefits in solving the automation tasks during IPM.

At the same time, the expansion of AR in IPM does not match the high potential it has demonstrated. The reasons for this comprise implementation and adaptation issues (high risks and cost of the specific AR implementation), technical problems (hardware- and software-related), special developer requirements, etc. Therefore, a model for planning the implementation of AR in IPM and for predicting the benefit in terms of AR efficiency is required. However, the majority of the projects and studies in the IPM area focus on the practical side of AR implementation. The benefits of AR introduction (usually in terms of development speedup or process time reduction) tend to be considered on a case-by-case basis. There is seemingly a lack of scientific papers that review the general planning of AR for IPM in automation: there are no models to identify the feasibility of solving a particular task using AR in general or to predict the results of AR implementation. The respective research gap consists primarily of a comprehensive analysis of the factors determining the necessity of implementing AR for a project or a process with defined characteristics, their relation to the resulting benefits, and the main emphases to be considered when planning and deploying AR technologies in the IPM area in particular and in automation in general.

To fill this research gap, we propose a model-based planning system (MBPS) for AR in the area of industrial plant maintenance. This system should provide a deep scientific analysis of the feasibility and necessity of using AR to solve particular tasks in the automation field. Additionally, this MBPS should enable predictable planning and forecasting of the results of AR integration, such as efficiency, applicability, quality and other criteria, and therefore support decision-making about AR implementation. This requires a broad study and analysis of criteria for evaluating the results of AR integration and usage in automation in general and in IPM in particular.
5

Chen, Changchang, Guodong Ji, Hongyuan Zhang, Yuqi Sun, Qiang Wu, and Haichao Jiang. "Study and Application of Rock Breaking by PDC Bit in Ultra Deep In-Situ High-Temperature and High-Pressure Environment". In 57th U.S. Rock Mechanics/Geomechanics Symposium. ARMA, 2023. http://dx.doi.org/10.56952/arma-2023-0349.

Abstract:
A finite element model of the cutting tooth-rock system is established in ABAQUS using an adaptive meshing method. The average stress at each node of the cutting tooth is solved through secondary development in the Python language, and the rock breaking efficiency, the real stress distribution, and the influence of temperature and pressure for different cutting tooth shapes and rock samples are obtained. An ultra-deep PDC bit with special non-planar teeth is optimized and designed, and has achieved good results in the south border region of Xinjiang. The research shows that the mechanical parameters of the rock change markedly under the high-temperature, high-pressure environment. High temperature produces a micro-damage structure that is conducive to rock breaking by the bit, while rock breaking efficiency is negatively related to temperature because the rock transforms from brittle to plastic behavior under high confining pressure. A double-row structure of axe-shaped and cone-shaped teeth has the best rock breaking effect in the ultra-deep high-temperature, high-pressure environment and gives the longest bit footage. The problem of bit shoulder collapse is prominent in the ultra-deep in-situ environment; a double-row tooth design on different tracks should be adopted, the bit stability design should be strengthened, and drilling parameters with large WOB, low rotary speed and large flow rate should be used.

Introduction: Onshore deep and ultra-deep layers hold 39% of the remaining oil and 57% of the remaining natural gas [1-3]. The ultra-deep layer in the south border region of the Junggar basin is an important oil and gas reservoir and production-increasing block of the Xinjiang Oilfield. The reservoir in this block is deeply buried, with high temperature and pressure, low ROP and a long drilling cycle. The average well depth of the three risk exploration wells drilled in 2020 reached 7205 m, and the estimated depth of well Matan 1, newly deployed in 2021, reaches 8200 m. There are an extraordinarily high pressure system and an extremely thick salt-gypsum layer in the deep well section, resulting in many complex situations and a long drilling cycle, with a per-well average drilling cycle of 335 d and an average ROP in the deep formation of less than 2 m/h. Improving the ROP and reducing the complexity of accidents are urgent needs and great challenges for deep drilling in the south border region. Under the ultra-deep high-temperature and high-pressure environment of the south border region, the crustal stress increases. Affected by high temperature and high pressure, the rock crushing mechanism is extremely complex, which places higher requirements on bit design [4].
6

Yuan, Zihao, and Haifeng Zhao. "Towards Ultra-Deep Exploration in the Moon: Modeling and Implementation of a Mole-Type Burrowing System". In 57th U.S. Rock Mechanics/Geomechanics Symposium. ARMA, 2023. http://dx.doi.org/10.56952/arma-2023-0174.

Abstract:
Effective scientific exploration of extraterrestrial locations requires reaching subsurface destinations and obtaining geological samples of high scientific value. One of the most widely used techniques for geological sampling is drilling, but further investigation is still needed to achieve greater depth and efficiency. This paper proposes a mole-type autonomous burrowing robotic system for drilling, steering, and accessing scientific samples from the deep subsurface of the Moon. The paper describes the robotic locomotion and operational loads of the self-burrowing robot. A self-burrowing robot with a dual-screw configuration is designed, and a mathematical model is developed to simulate its excavating capacities. Finally, the paper discusses the robot's performance and compares it with conventional drilling methods using the specific energy method.

Introduction: The core of near-earth exploration missions, such as those to the Moon, Mars, or asteroids, is extraterrestrial subsurface drilling and sampling (Gorevan et al., 2003). The primary tools for gaining insights into the evolutionary history of the solar system are subsurface exploration equipment, such as coring machines or robots. To achieve scientific objectives, it is necessary to reach the subsurface destination and obtain geological samples of high scientific value (Glass et al., 2020). Typically, the lunar subsurface within about 10 meters of depth consists of igneous regolith and disturbed materials, from fine granular particles to chunky rocks, along the depth direction (Slyuta, 2014). Drilling is one of the most widely used techniques in geological sampling of sand or soil, and it has been implemented in extraterrestrial exploration in missions such as Luna, Apollo and Chang'e (Prissel & Prissel, 2021). Conceptual designs and prototypes have been proposed for deep subsurface exploration and sample collection. Recent research has focused on new technical systems that achieve deeper and more efficient soil movement (Lopez-Arreguin & Montenegro, 2020) by imitating the locomotion mechanisms of animals and insects, such as crawling, swimming, and burrowing in granular media. Among these bio-inspired robots, typical prototypes such as the inchworm robots IDDS (Gorevan et al., 2000) and IBR (Zhang et al., 2018) operate by sequentially combining anchoring and drilling actions. The wood wasp robots (Cerkvenik et al., 2019) mimic the reciprocating motion of two saw-toothed bit halves to shear and stick into the soil.
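The specific energy method mentioned above compares excavation concepts by the energy spent per volume of material removed. The sketch below uses Teale's rotary drilling formulation with illustrative thrust, torque and penetration values, which are assumptions rather than figures from the paper.

    import math

    # Teale-style drilling specific energy: thrust term + rotary term,
    # returned in MPa (numerically equal to J/cm^3).
    def drilling_specific_energy_mpa(thrust_n, torque_nm, rpm, rop_m_per_h, bit_diameter_m):
        area_m2 = math.pi * bit_diameter_m ** 2 / 4.0
        rop_m_per_s = rop_m_per_h / 3600.0
        thrust_term_pa = thrust_n / area_m2
        rotary_term_pa = 2.0 * math.pi * (rpm / 60.0) * torque_nm / (area_m2 * rop_m_per_s)
        return (thrust_term_pa + rotary_term_pa) / 1e6

    # Example: 200 N thrust, 5 N*m torque, 120 rpm, 1 m/h, 40 mm bit -> ~180 MPa.
    print(f"{drilling_specific_energy_mpa(200.0, 5.0, 120.0, 1.0, 0.04):.0f} MPa")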
7

Teraji, David. "Taurus™ 65 Gas Turbine Product". In ASME Turbo Expo 2006: Power for Land, Sea, and Air. ASMEDC, 2006. http://dx.doi.org/10.1115/gt2006-90099.

Abstract:
This paper will review the Taurus 65 gas turbine product, the newest member of Solar Turbine’s product line. The Taurus 65 is a 6.3 MW, 32.9% efficient single shaft gas turbine specifically designed for the combined heat and power (CHP) market with low emissions and excellent exhaust heat capacity. It leverages the reliability and durability of the Centaur® and Taurus product technology. The 13-stage compressor includes the Centaur 50 compressor plus two new aft stages. The combustion system adopts Solar’s proven SoLoNOx™ technology and has a guaranteed NOx emission of 15 ppmv (15% O2). The newly designed three-stage turbine incorporates the proven Taurus 70 high efficiency design with advanced cooling and material technologies developed in the Mercury™ 50 turbine engine. Solar developed a new Taurus 65 package system design utilizing 6 Sigma methodologies. The new design incorporates features that allows for quick installation and easy operation and maintenance. A Kaizen service event successfully demonstrated the field maintenance and engine removal on the first package built. The Taurus 65 universal package design will become the standard design for the Centaur 40, Centaur 50 and Taurus 60 products, and will have the same footprint as the current Taurus 60 package. The first Taurus 65 gas turbine started development test during the fourth quarter 2004. The development test results have been excellent. A Taurus 65 gen-set unit will start endurance testing during the third quarter 2005 at Solar’s San Diego facility. The first production unit will be available for shipment in the first quarter 2006. The New Product Introduction (NPI) process, 6 Sigma process, and Kaizen processes were utilized during the product design, development and introduction phases.
8

Gómez, Elena, Roberto Del Teso, Enrique Cabrera, Elvira Estruch-Juan, Pascual Jose Maximino, Miguel Ortiz, Guillermo Del Pozo, and Carlos Marco. "Energy diagnosis of pressurized water systems with the ENERGOS tool". In 2nd WDSA/CCWI Joint Conference. València: Editorial Universitat Politècnica de València, 2022. http://dx.doi.org/10.4995/wdsa-ccwi2022.2022.14773.

Abstract:
Although the transport of fluids by pipeline is the most efficient option, it requires high energy consumption because of the volumes transferred and the pressure required. This is the consequence of transporting large volumes of water (sometimes over considerable distances) and having to deliver it at the required pressure. In the current context of climate change, with both water and energy becoming increasingly scarce, the only way to minimize their impact and control energy consumption is to improve efficiency, a process whose most relevant stages are the diagnosis, which identifies the starting point and the existing margin for savings, and the audit, which locates and quantifies inefficiencies. This paper presents a simple tool, ENERGOS, which performs the first stage, the energy diagnosis of a pressurized water system. The objective of this diagnosis is to establish the current state of the system and, more importantly, the possible margin for improvement, if any, from the introduction of very little data. This is the first step in improving the efficiency of the system. The tool, and the energy indicators presented in it, have been designed under the premise that only the minimum information that any manager should know about their system is required: global volumes billed and injected, the most representative system levels, and the energy consumed by the pumps (available from the electricity bills). ENERGOS classifies systems into three large groups and performs the diagnosis according to the group. These are, firstly, simple systems, defined as a pipeline with one entry, generally one pump, and one exit. Multi-scenario systems are systems with several inputs and outputs and constant changes in their mode of operation, where each scenario corresponds to a different layout. Finally, networks with one or more inputs and numerous outputs can operate differently, but without changing the layout. In all cases, the tool has a schematic, simple, and intuitive data-entry part, and from the data entered it calculates the diagnosis, consisting of the comparison of the current energy intensity indicator (kWh/m³) with the ideal energy intensity, the one that would imply the total absence of losses, both operational (friction, water losses, inefficiencies in the pumping stations) and structural (losses due to topography). Since it is impossible to reach the ideal energy intensity value, an intermediate indicator, the target intensity, is defined. The calculation of this intensity requires the establishment of targets for losses, that is, reasonable reference values achievable with current technology for the system under analysis. For example, the current efficiency of the pumping station is estimated, and a minimum acceptable level is calculated for it (establishing values for the efficiency of the motor, the variable-speed drive, and the pump itself). The same is established for friction, water losses, and excess pressure.
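The diagnosis described above reduces to comparing energy intensities in kWh/m³. The sketch below shows that comparison in minimal form; it is not the ENERGOS implementation, and the ideal-intensity definition and the sample figures are simplifying assumptions for illustration.

```python
RHO_G = 1000.0 * 9.81    # specific weight of water, N/m^3
J_PER_KWH = 3.6e6

def energy_intensity_kwh_per_m3(energy_kwh, volume_m3):
    """Current energy intensity: total energy used per cubic metre supplied."""
    return energy_kwh / volume_m3

def ideal_intensity_kwh_per_m3(static_lift_m, required_pressure_head_m):
    """Ideal intensity: minimum energy per m^3 with no friction, leakage, or pumping losses."""
    return RHO_G * (static_lift_m + required_pressure_head_m) / J_PER_KWH

# Hypothetical yearly figures for a small pumped network
current = energy_intensity_kwh_per_m3(energy_kwh=420_000, volume_m3=900_000)
ideal = ideal_intensity_kwh_per_m3(static_lift_m=55.0, required_pressure_head_m=25.0)
margin = (current - ideal) / current
print(f"current {current:.3f} kWh/m3, ideal {ideal:.3f} kWh/m3, improvement margin ~ {margin:.0%}")
```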
9

Khadivi, Babak, and Hossein Masoumi. "Experimental and Numerical Investigations into Fracturing Process of Brittle Rocks". In 58th U.S. Rock Mechanics/Geomechanics Symposium. ARMA, 2024. http://dx.doi.org/10.56952/arma-2024-0071.

Full text of the source
Abstract:
ABSTRACT: Tensile strength plays a vital role in the efficient design of underground openings and support systems in rock engineering practice. Among the various approaches developed for estimating this property, Brazilian testing has received considerable attention in analytical, numerical, and experimental studies. The test has been deployed for various materials such as rock and concrete; however, its fracturing process at the micro and macro scales has not yet been thoroughly characterized. In this study, a coupled experimental-numerical technique was used to characterize the failure process of granite, to better understand the fracturing process of brittle rocks under indirect tensile loading. In the experimental part, two independent monitoring techniques, Acoustic Emission (AE) and high-speed imaging, were deployed to investigate the micro- and macro-cracking processes. The sample's behaviour was then simulated using a Distinct Element Method (DEM) code through Trigon logic to further compare the brittle failure characteristics with the experimental results. The collective results revealed that tensile microcracks are the first to initiate during failure, and at higher intensity than shear microcracks, resulting in a dominant tensile failure at the macro scale. It was also noted that the sample's failure started from the center, which keeps the Brazilian results valid for brittle rocks, although some overestimation may be introduced by subsequent shearing and insufficient stiffness of the testing machine. 1. INTRODUCTION The initiation of fractures in brittle rocks can occur through tensile events, highlighting the significant role of tensile strength in the design of rock engineering projects (Griffith, 1921, Tapponnier and Brace, 1976, Stacey, 1981, Diederichs and Kaiser, 1999, Perras and Diederichs, 2014, Khadivi et al., 2023). Therefore, accurate assessment of tensile strength and the cracking mechanism is essential for better understanding the mechanical behavior of brittle rocks. The convenience of sample preparation and straightforward testing operation have made the Brazilian test the most popular approach to estimate tensile strength across a wide range of materials (Hobbs, 1964, Barla and Innaurato, 1973, Jonsén et al., 2007, Bahaaddini et al., 2019). Valid testing results are obtained by inducing a tensile fracture at the center, splitting the sample along the compressive diameter into two halves (Colback, 1966, Carmona and Aguado, 2012).
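For context, the tensile strength reported from a standard Brazilian test is conventionally computed from the peak load and disc geometry as σt = 2P/(πDt). The snippet below applies that standard formula; the disc dimensions and failure load are hypothetical, not values from the paper.

```python
import math

def brazilian_tensile_strength_mpa(peak_load_kn, diameter_mm, thickness_mm):
    """Indirect tensile strength of a Brazilian disc: sigma_t = 2P / (pi * D * t)."""
    p_n = peak_load_kn * 1e3
    d_m = diameter_mm / 1e3
    t_m = thickness_mm / 1e3
    return 2.0 * p_n / (math.pi * d_m * t_m) / 1e6   # convert Pa to MPa

# Hypothetical granite disc: 50 mm diameter, 25 mm thick, failing at 18 kN
print(f"sigma_t ~ {brazilian_tensile_strength_mpa(18.0, 50.0, 25.0):.1f} MPa")
```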
10

Bonino, Christopher A., Joshua Hlebak, Nicholas Baldasaro, and Dennis Gilmore. "Pilot-Scale System With Particle-Based Heat Transfer Fluids for Concentrated Solar Power Applications". In ASME 2020 Power Conference collocated with the 2020 International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/power2020-16588.

Full text of the source
Abstract:
Abstract Concentrated solar power (CSP) is a promising large-scale, renewable power generation and energy storage technology, yet it is limited by the material properties of the heat transfer fluid. Current CSP plants use molten salts, which degrade above 600°C and freeze below 220°C. A dry, particle-based heat transfer fluid (pHTF) can operate at temperatures up to and above 1,000°C, enabling high-efficiency power cycles, which may enhance CSP's commercial competitiveness. Demonstrating the flow and heat-transfer performance of the pHTF in a scalable process is therefore critical for assessing the feasibility of this technology. In this study, we report on a first-of-a-kind pilot system to evaluate heat transfer to/from a densely flowing pHTF. This process unit circulates the pHTF at flowrates of up to 1 tonne/h. Thermal energy is transferred to the pHTF as it flows through an electrically heated pipe. A fluidization gas in the heated zone enhances the wall-to-pHTF heat transfer rate. We found that the introduction of gas mixtures with thermal conductivities 4.6 times greater than that of air led to a 65% increase in the heat transfer coefficient compared to fluidization by air alone. In addition to the fluidization gas, the particle size also plays a critical role in heat transfer performance. Particles with an average diameter of 270 μm yielded heat transfer coefficients up to 25% greater than those of other particles of the same composition in the size range of 65 to 350 μm. The considerations for the design of an on-sun system are also discussed. Collectively, this work demonstrates the promise of this unique design in solar applications.
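A rough sense of the reported heat transfer coefficients can be obtained from the defining relation h = Q/(A·ΔT). The sketch below applies it to hypothetical test-section figures; only the roughly 65% improvement factor is taken from the abstract, everything else is an assumption.

```python
def heat_transfer_coefficient(q_w, area_m2, wall_temp_c, fluid_temp_c):
    """Wall-to-fluid heat transfer coefficient h = Q / (A * (T_wall - T_fluid)), in W/m^2.K."""
    return q_w / (area_m2 * (wall_temp_c - fluid_temp_c))

# Hypothetical heated-pipe test section: 2 kW over 0.05 m^2, wall at 620 C, particles at 560 C
h_air = heat_transfer_coefficient(2000.0, 0.05, 620.0, 560.0)
h_high_conductivity_gas = h_air * 1.65   # ~65% improvement reported with a high-conductivity gas
print(f"h(air) ~ {h_air:.0f} W/m2.K, h(high-k gas) ~ {h_high_conductivity_gas:.0f} W/m2.K")
```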

Organization reports on the topic "High efficiency sample introduction system"

1

Einarsson, Rasmus. Nitrogen in the food system. TABLE, February 2024. http://dx.doi.org/10.56661/2fa45626.

Full text of the source
Abstract:
Nitrogen (N) plays a dual role in the agri-food system: it is an essential nutrient for all life forms, yet also an environmental pollutant causing a range of environmental and human health impacts. As the plant nutrient needed in greatest quantities, and as a building block of proteins and other biomolecules, N is a necessary part of all life. In the last century, an enormous increase of N turnover in the agri-food system has enabled increasing per-capita food supply for a growing world population, but as an unintended side effect, N pollution has increased to levels widely agreed in science and policy to be far beyond sustainable limits. There is no such thing as perfectly circular N supply. Losses of N to the environment inevitably arise as N is transformed and used in the food system, for example in soil processes, in manure storage, and in fertilizer application. This lost N must be replaced by ‘new’ N, which is N converted to bioavailable forms from the vast atmospheric pool of unreactive dinitrogen (N2). New N comes mainly as synthetic N fertilizer and through a process known as biological N fixation (BNF). In addition, there is a large internal flow of recycled N in the food system, mainly in the form of livestock excreta. This recirculated N, however, is internal to the food system and cannot make up for the inevitable losses of N. The introduction of synthetic N fertilizer during the 20th century revolutionized the entire food system. The industrial production of synthetic N fertilizer was a revolution for agricultural systems because it removed the natural constraint of N scarcity. Given sufficient energy, synthetic N fertilizer can be produced in limitless quantities from atmospheric dinitrogen (N2). This has far-reaching consequences for the whole agri-food system. The annual input of synthetic N fertilizer today is more than twice the annual input of new N in pre-industrial agriculture. Since 1961, increased N input has enabled global output of both crop and livestock products to roughly triple. During the same time period, total food-system N emissions to the environment have also more than tripled. Livestock production is responsible for a large majority of agricultural N emissions. Livestock consume about three-quarters of global cropland N output and are thereby responsible for a similar share of cropland N emissions to air and water. In addition, N emissions from livestock housing and manure management systems contribute a substantial share of global N emissions to air. There is broad political agreement that global N emissions from agriculture should be reduced by about 50%. High-level policy targets of the EU and of the UN Convention on Biological Diversity are for a 50% reduction in N emissions. These targets are in line with a large body of research assessing what would be needed to stay within acceptable limits as regards ecosystem change and human health impacts. In the absence of dietary change towards less N-intensive diets, N emissions from food systems could be reduced by about 30%, compared to business-as-usual scenarios. This could be achieved by implementing a combination of technical measures, improved management practices, improved recycling of wasted N (including N from human excreta), and spatial optimization of agriculture. Human dietary change, especially in the most affluent countries, offers a huge potential for reducing N emissions from food systems. 
While many of the world’s poor would benefit nutritionally from increasing their consumption of nutrient-rich animal-source foods, many other people consume far more nutrients than is necessary and could reduce consumption of animal-source food by half without any nutritional issues. Research shows that global adoption of healthy but less N-polluting diets might plausibly cut future food-system N losses by 10–40% compared to business-as-usual scenarios. There is no single solution for solving the N challenge. Research shows that efficiency improvements and food waste reductions will almost certainly be insufficient to reach agreed environmental targets. To reach agreed targets, it seems necessary to also shift global average food consumption onto a trajectory with less animal-source food.
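The report's accounting argument, that recycled N is an internal flow and that losses must ultimately be balanced by new N inputs, can be illustrated with a toy steady-state balance; the figures below are hypothetical, not the report's data.

```python
# Toy steady-state nitrogen balance for a food system (units: Tg N per year, hypothetical values)
new_n_inputs = {"synthetic_fertilizer": 110.0, "biological_fixation": 40.0}
recycled_n = {"livestock_excreta": 100.0}   # internal flow, not a net input
n_in_food_delivered = 25.0

# At steady state, losses to air and water equal new inputs minus the N leaving in food.
losses = sum(new_n_inputs.values()) - n_in_food_delivered
use_efficiency = n_in_food_delivered / sum(new_n_inputs.values())
print(f"losses ~ {losses:.0f} Tg N/yr, full-chain N use efficiency ~ {use_efficiency:.0%}")
print(f"recycled N ({sum(recycled_n.values()):.0f} Tg N/yr) circulates internally "
      f"and does not reduce the loss total")
```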
2

Lehotay, Steven J., and Aviv Amirav. Fast, practical, and effective approach for the analysis of hazardous chemicals in the food supply. United States Department of Agriculture, April 2007. http://dx.doi.org/10.32747/2007.7695587.bard.

Full text of the source
Abstract:
Background to the topic: For food safety and security reasons, hundreds of pesticides, veterinary drugs, and environmental pollutants should be monitored in the food supply, but current methods are too time-consuming, laborious, and expensive. As a result, only a tiny fraction of the food is tested for a limited number of contaminants. Original proposal objectives: Our main original goal was to develop fast, practical, and effective new approaches for the analysis of hazardous chemicals in the food supply. We proposed to extend the QuEChERS approach to more pesticides, veterinary drugs, and pollutants; to further develop GC-MS and LC-MS with SMB; and to combine QuEChERS with GC-SMB-MS and LC-SMB-EI-MS to provide the "ultimate" approach for the analysis of hazardous chemicals in food. Major conclusions, solutions and achievements: The original QuEChERS method was validated for more than 200 pesticide residues in a variety of food crops. For the few basic pesticides for which the method gave lower recoveries, an extensive solvent suitability study was conducted, and a buffering modification was made to improve results for difficult analytes. Furthermore, the QuEChERS approach was evaluated for fatty matrices, including olives and olive oil. The QuEChERS concept was also extended to acrylamide analysis in foods. Other advanced techniques to improve the speed, ease, and effectiveness of chemical residue analysis were also successfully developed and/or evaluated, including: a simple and inexpensive solvent-in-silicone-tube extraction approach for highly sensitive detection of nonpolar pesticides in GC; ruggedness testing of low-pressure GC-MS for 3-fold faster separations; optimization and extensive evaluation of analyte protectants in GC-MS; and use of prototypical commercial automated direct sample introduction devices for GC-MS. GC-MS with SMB was further developed and combined with the Varian 1200 GC-MS/MS system, resulting in a new type of GC-MS with advanced capabilities. Careful attention was given to GC-MS sensitivity and its LOD for difficult-to-analyze samples, such as thermally labile pesticides or those with weak or no molecular ions, and record-low LODs were demonstrated and discussed. The new approach of electron ionization LC-MS with SMB was developed, its key components (the sample vaporization nozzle and fly-through ion source) were improved, and it was evaluated with a range of samples, including carbamate pesticides. A new method and software based on IAA were developed and tested on a range of pesticides in agricultural matrices. This IAA method and software, in combination with GC-MS and SMB, provide extremely high confidence in sample identification. A new type of comprehensive GCxGC (based on flow modulation) was uniquely combined with GC-MS with SMB, and we demonstrated improved pesticide separation and identification in complex agricultural matrices using this novel approach. An improved device for aroma sample collection and introduction (SnifProbe) was further developed and compared favorably with SPME for coffee aroma sampling. Implications, both scientific and agricultural: We achieved significant improvements in the analysis of hazardous chemicals in the food supply, from easy sample preparation approaches, through sample analysis by advanced new types of GC-MS and LC-MS techniques, to improved data analysis by lowering LODs and providing greater confidence in chemical identification.
As a result, the combination of the QuEChERS approach, new and superior instrumentation, and the novel monitoring methods that were developed will enable vastly reduced time and cost of analysis, increased analytical scope, and a higher monitoring rate. This provides better enforcement, an added impetus for farmers to use good agricultural practices, improved food safety and security, increased trade, and greater consumer confidence in the food supply.
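Detection limits such as those discussed above are often estimated from a calibration curve as LOD ≈ 3.3·σ/slope (the ICH-style definition). The sketch below applies that formula to made-up calibration data; it is not tied to the specific instruments or methods developed in the project.

```python
import statistics

def calibration_lod(concentrations, responses):
    """Estimate LOD ~ 3.3 * s_residual / slope from a linear calibration (ICH-style)."""
    n = len(concentrations)
    mean_x = statistics.fmean(concentrations)
    mean_y = statistics.fmean(responses)
    sxx = sum((x - mean_x) ** 2 for x in concentrations)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(concentrations, responses)) / sxx
    intercept = mean_y - slope * mean_x
    residuals = [y - (slope * x + intercept) for x, y in zip(concentrations, responses)]
    s_res = (sum(r ** 2 for r in residuals) / (n - 2)) ** 0.5
    return 3.3 * s_res / slope

# Hypothetical pesticide calibration: concentration (ng/g) vs. detector response (arbitrary units)
conc = [1, 2, 5, 10, 20, 50]
resp = [110, 205, 515, 1040, 2010, 5050]
print(f"LOD ~ {calibration_lod(conc, resp):.2f} ng/g")
```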
3

Ступнік, М. І., В. С. Моркун, and З. П. Бакум. Information and Communication Technologies in the Process of Mining Engineer Training. Криворізький державний педагогічний університет, 2013. http://dx.doi.org/10.31812/0564/405.

Full text of the source
Abstract:
Based on scientific analysis, the authors argue for the necessity of solving a priority task: the development of new educational technologies aimed at supporting the training of engineers for mining engineering as a high-tech industry. The features of mining computer technologies are determined. A project for an adaptive system for the individual training of mining engineers, the "Electronic manual", aimed at the development of future professionals, is worked out. The essence of ICT-based individual preparation of future mining engineers is defined. It is argued that improving the efficiency of the design and planning of mining operations through the introduction of ICT is currently a real way to influence the quality of mining products and will promote an individual learning orientation. Pedagogical foundations for introducing adaptive training of mining engineers are clarified for the first time.
