Academic literature on the topic 'Code-removal Patches'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Code-removal Patches.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Code-removal Patches"

1

Puglisi, Giuseppe, Gueorgui Mihaylov, Georgia V. Panopoulou, Davide Poletti, Josquin Errard, Paola A. Puglisi, and Giacomo Vianello. "Improved galactic foreground removal for B-mode detection with clustering methods." Monthly Notices of the Royal Astronomical Society 511, no. 2 (January 12, 2022): 2052–74. http://dx.doi.org/10.1093/mnras/stac069.

Full text
Abstract:
Characterizing the sub-mm Galactic emission has become increasingly critical, especially in identifying and removing its polarized contribution from the one emitted by the cosmic microwave background (CMB). In this work, we present a parametric foreground removal performed on sub-patches identified in the celestial sphere by means of spectral clustering. Our approach efficiently takes into account both the geometrical affinity and the similarity induced by the measurements and the accompanying errors. The optimal partition is then used to parametrically separate the Galactic emission encoding thermal dust and synchrotron from the CMB one, applied on two nominal observations of forthcoming experiments from the ground and from space. Moreover, the clustering is performed on tracers that are different from the data used for component separation, e.g. the spectral index maps of dust and synchrotron. Performing the parametric fit singularly on each of the clustering-derived regions results in an overall improvement: both controlling the bias and the uncertainties in the recovered CMB B-mode maps. We finally apply this technique using the map of the number of clouds along the line of sight, $\mathcal{N}_c$, as estimated from H i emission data, and perform parametric fitting on patches derived by clustering on this map. We show that adopting the $\mathcal{N}_c$ map as a tracer for the patches related to the thermal dust emission reduces the B-mode residuals post-component separation. The code is made publicly available at https://github.com/giuspugl/fgcluster.
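As an illustration of the clustering step this abstract describes, partitioning pixels by similarity of per-pixel tracers can be sketched with a minimal spectral bipartition. This is a toy stand-in for the authors' fgcluster code: only a feature-similarity affinity is used (no geometric term), and the split is simply the sign of the Fiedler vector.

```python
import numpy as np

def spectral_bipartition(features, sigma=1.0):
    """Split pixels into two regions by the sign of the Fiedler vector
    of the graph Laplacian built from feature similarity (e.g. per-pixel
    spectral indices). Toy sketch only: the paper's method also encodes
    geometrical affinity and measurement errors in the affinity matrix."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))      # Gaussian affinity
    L = np.diag(W.sum(1)) - W               # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]                    # eigenvector of 2nd-smallest eigenvalue
    return fiedler > 0

# Two well-separated 1-D feature clusters split cleanly into two regions.
features = np.array([[0.0], [0.1], [5.0], [5.1]])
labels = spectral_bipartition(features)
```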
2

Li, Hongjia, Hui Zhang, Xiaohua Wan, Zhidong Yang, Chengmin Li, Jintao Li, Renmin Han, Ping Zhu, and Fa Zhang. "Noise-Transfer2Clean: denoising cryo-EM images based on noise modeling and transfer." Bioinformatics 38, no. 7 (February 4, 2022): 2022–29. http://dx.doi.org/10.1093/bioinformatics/btac052.

Full text
Abstract:
Motivation: Cryo-electron microscopy (cryo-EM) is a widely used technology for ultrastructure determination, which constructs the 3D structures of proteins and macromolecular complexes from a set of 2D micrographs. However, limited by the electron beam dose, the micrographs in cryo-EM generally suffer from an extremely low signal-to-noise ratio (SNR), which hampers the efficiency and effectiveness of downstream analysis. In particular, the noise in cryo-EM is not simple additive or multiplicative noise, and its statistical characteristics are quite different from those of natural images, severely limiting the performance of conventional denoising methods. Results: Here, we introduce Noise-Transfer2Clean (NT2C), a denoising deep neural network (DNN) for cryo-EM designed to enhance image contrast and restore the specimen signal. Its main idea is to improve denoising performance by correctly learning the noise distribution of cryo-EM images and transferring the statistical nature of the noise into the denoiser. In particular, to cope with the complex noise model in cryo-EM, we design a contrast-guided noise and signal re-weighted algorithm to achieve clean-noisy data synthesis and data augmentation, making our method authentically achieve signal restoration based on the noise's true properties. Our work verifies the feasibility of denoising based on mining the complex cryo-EM noise patterns directly from the noise patches. Comprehensive experimental results on simulated and real datasets show that NT2C achieved a notable improvement in image denoising, especially in background noise removal, compared with commonly used methods. Moreover, a case study on the real dataset demonstrates that NT2C can greatly alleviate the obstacles the SNR poses to particle picking and simplify the identification of particles. Availability and implementation: The code is available at https://github.com/Lihongjia-ict/NoiseTransfer2Clean/.
Supplementary information: Supplementary data are available at Bioinformatics online.
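The clean-noisy data synthesis idea described in this abstract can be illustrated with a toy sketch. This is hypothetical and not the paper's method: NT2C learns and transfers the noise model with deep networks, whereas here real noise is approximated by sampling stored noise patches directly.

```python
import numpy as np

def synthesize_pairs(clean_imgs, noise_patches, rng):
    """Build clean-noisy training pairs by corrupting clean images with
    noise drawn from real noise patches, so a denoiser can learn the
    true noise statistics rather than an assumed Gaussian model."""
    pairs = []
    for img in clean_imgs:
        noise = noise_patches[rng.integers(len(noise_patches))]
        pairs.append((img, img + noise))
    return pairs

# Tiny example: one 'clean' image, one stored noise patch.
clean = [np.zeros((2, 2))]
patches = np.ones((1, 2, 2))
pairs = synthesize_pairs(clean, patches, np.random.default_rng(0))
```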
3

Ginelli, Davide, Matias Martinez, Leonardo Mariani, and Martin Monperrus. "A comprehensive study of code-removal patches in automated program repair." Empirical Software Engineering 27, no. 4 (May 5, 2022). http://dx.doi.org/10.1007/s10664-021-10100-7.

Full text
Abstract:
Automatic Program Repair (APR) techniques promise to help reduce the cost of debugging. Many relevant APR techniques follow the generate-and-validate approach: the faulty program is iteratively modified with different change operators and then validated with a test suite until a plausible patch is generated. In particular, Kali is a generate-and-validate technique developed to investigate the possibility of generating plausible patches by only removing code. Previous studies show that Kali indeed successfully addressed several faults. This paper addresses the specific case of code-removal patches in automated program repair. We investigate the reasons and the scenarios that make their creation possible, and their relationship with patches implemented by developers. Our study reveals that code-removal patches are often insufficient to fix bugs, and proposes a comprehensive taxonomy of code-removal patches that provides evidence of the problems that may affect test suites, opening new opportunities for researchers in the field of automatic program repair.
4

Doria, David. "Criminisi Inpainting." Insight Journal, February 4, 2011. http://dx.doi.org/10.54294/aqxdcz.

Full text
Abstract:
This document presents a system to fill a hole in an image by copying patches from elsewhere in the image. These patches should be a good continuation of the hole boundary into the hole. The patch copying is done in an order which attempts to preserve linear structures in the image. This implementation is based on the algorithm described in "Object Removal by Exemplar-Based Inpainting" (Criminisi et al.). The code is available here: https://github.com/daviddoria/Inpainting
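The structure-preserving fill order this abstract mentions is driven by patch priorities. The confidence term of the Criminisi priority can be sketched as follows; this is a toy fragment, not the linked implementation, and the full priority P(p) = C(p) * D(p) also multiplies in a data term D(p) measuring isophote strength at the boundary.

```python
import numpy as np

def boundary_confidences(hole, confidence, half=1):
    """Confidence term C(p) of the Criminisi fill order: the mean
    confidence of the (2*half+1)-square patch centred on each
    hole-boundary pixel. Higher values are filled first."""
    h, w = hole.shape
    priorities = {}
    for y in range(h):
        for x in range(w):
            if not hole[y, x]:
                continue
            nbhd = hole[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
            if nbhd.all():
                continue  # interior hole pixel, not on the fill front
            patch = confidence[max(0, y - half):y + half + 1,
                               max(0, x - half):x + half + 1]
            priorities[(y, x)] = patch.mean()
    return priorities

# Toy 5x5 image with a 3x3 hole; known pixels start with confidence 1.
hole = np.zeros((5, 5), dtype=bool)
hole[1:4, 1:4] = True
conf = (~hole).astype(float)
pri = boundary_confidences(hole, conf)
```

Corner pixels of the hole see more known neighbours, so they get the highest confidence and are filled first.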
5

Hazarathaiah, A., Dharani Udayabhanu, Anukupalli Anjali, Biruduraju Nymisha, and Borra Prathyusha. "An Optimum Deraining Scheme using Sparse Coding." International Journal of Emerging Research in Engineering, Science, and Management 1, no. 2 (2022). http://dx.doi.org/10.58482/ijeresm.v1i2.2.

Full text
Abstract:
Rain streak removal is a challenging and interesting task in image processing, in which rain streaks are removed from a rainy image. In the literature, a large number of proposals treat rain streak removal as image enhancement or denoising. In this paper, a rain streak removal model using sparse coding is proposed. First, the regularization terms of the rain streak removal are defined. Then, a suitable dictionary of sub-dictionaries with respect to specific patches of the input rainy image is prepared. Finally, the sparse code is applied to the patches of the input image individually. The simulation results show that the proposed technique performs well even if the raindrop size is above a certain threshold.
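Applying a sparse code to image patches, as this abstract describes, amounts to approximating each patch with a few dictionary atoms. A minimal Orthogonal Matching Pursuit solver illustrates the idea; this is a generic sparse-coding sketch, not the paper's dictionary construction or regularization terms.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily pick the dictionary atom
    (column of D) most correlated with the residual, refit all selected
    coefficients by least squares, and repeat k times."""
    residual = y.astype(float).copy()
    support, coef = [], np.array([])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Trivial dictionary (identity): the 2-sparse signal is recovered exactly.
D = np.eye(4)
y = np.array([2.0, 0.0, 3.0, 0.0])
x = omp(D, y, 2)
```

In a deraining pipeline, the residual left unexplained by the (streak-free) sub-dictionary is what gets discarded as rain.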
6

Jain, Neha, Petra Janning, and Heinz Neumann. "14-3-3 protein Bmh1 triggers short-range compaction of mitotic chromosomes by recruiting sirtuin deacetylase Hst2." Journal of Biological Chemistry, November 13, 2020, jbc.AC120.014758. http://dx.doi.org/10.1074/jbc.ac120.014758.

Full text
Abstract:
During mitosis, chromosomes are compacted in length by over 100-fold into rod-shaped forms. In yeast, this process depends on the presence of a centromere, which promotes condensation in cis by recruiting mitotic kinases such as Aurora B kinase. This licensing mechanism enables the cell to discriminate chromosomal from non-centromeric DNA and to prohibit the propagation of the latter. Aurora B kinase elicits a cascade of events starting with phosphorylation of histone H3 serine 10 (H3S10ph), which signals the recruitment of lysine deacetylase Hst2 and the removal of lysine 16 acetylation in histone 4 (H4). The unmasked H4 tails interact with the acidic patch of neighboring nucleosomes to drive short-range compaction of chromatin, but the mechanistic details surrounding Hst2 activity remain unclear. Using in vitro and in vivo assays, we demonstrate that the interaction of Hst2 with H3S10ph is mediated by the yeast 14-3-3 protein Bmh1. As a homodimer, Bmh1 binds simultaneously to H3S10ph and the phosphorylated C-terminus of Hst2. Our pulldown experiments with extracts of synchronized cells show that the Hst2-Bmh1 interaction is cell cycle dependent, peaking in the M phase. Furthermore, we show that phosphorylation of C-terminal residues of Hst2, introduced by genetic code expansion, stimulates its deacetylase activity. Hence, the data presented here identify Bmh1 as a key player in the mechanism of licensing of chromosome compaction in mitosis.
7

Wang, Danni, Frank E. Peters, and Matthew C. Frank. "A Semiautomatic, Cleaning Room Grinding Method for the Metalcasting Industry." Journal of Manufacturing Science and Engineering 139, no. 12 (November 2, 2017). http://dx.doi.org/10.1115/1.4037890.

Full text
Abstract:
This paper presents a semi-automated grinding system for the postprocessing of metalcastings. Grinding is an important procedure in the "cleaning room" of a foundry, where the removal of gate contacts, parting line flash, surface defects, and weld-repaired areas is performed, almost always manually. While the grinding of repetitive locations on medium- to high-production castings can be automated using robotics or otherwise, it is not as practical for larger castings (e.g., > 200 kg) that are typically produced in smaller production volumes. Furthermore, automation is even more challenging in that the locations of the required grinding are not constant, depending on the unique conditions and anomalies of each pouring of a component. The proposed approach is intended for a simple x-y-z positioner (gantry) device with a feedback-controlled grinding head that enables automated path planning. The process begins with touch probing of the surfaces that contain the anomaly requiring grinding, and then the system automatically handles the path planning and force control to remove the anomaly. A layer-based algorithm for path planning employs a search-and-destroy technique in which the surrounding geometry is interpolated across the grind-requiring surface patch. In this manner, each unique condition of the casting surface after initial torch or saw cutting can be handled cost-effectively without the need for human shaping and the ergonomic problems associated with it. Implementation of the proposed grinding control is prototyped at a lab scale to demonstrate the feasibility and versatility of this strategy. The average error for the prototype was on the order of 0.007 in (0.2 mm), with a flatness of the ground surface within 0.035 in (0.9 mm), which is within the cleaning room grinding requirements, as per ISO and ASTM dimensional and surface tolerance requirements.
A significant contribution of the work is the layer-based algorithm that allows an effective automation of the process planning for grinding, avoiding robot programming or numerical control code generation altogether. This is key to addressing the largely unknown and unpredictable conditions of, for example, the riser contact surface removal area on a metalcasting.

Dissertations / Theses on the topic "Code-removal Patches"

1

GINELLI, DAVIDE. "Understanding and Improving Automatic Program Repair: A Study of Code-removal Patches and a New Exception-driven Fault Localization Approach." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2021. http://hdl.handle.net/10281/317046.

Full text
Abstract:
Debugging and bug fixing are extremely important activities that are regularly performed to eliminate defects from software. They are, however, time consuming, and thus improving their degree of automation is increasingly important in a competitive world. A possible solution is offered by Automatic Program Repair (APR) techniques, through which it is possible to automatically generate patches that can be either presented to developers as candidate patches or directly integrated into the target programs. Although many different APR techniques have been developed in the last few years, there are still open challenges related to their introduction as stable and working solutions in development pipelines. Indeed, most APR techniques rely on test cases to evaluate the correctness of patches, which is a weak validation method that can lead to the generation of incorrect patches. In particular, recent empirical studies show that APR techniques often result in the generation of code-removal patches, that is, patches that drop functionality to address the faults that afflict programs. Another aspect that strongly influences the success of APR is fault localization. Indeed, if the correct location to generate the patch is not found, it is hard or even impossible to generate a patch. Experimental evidence shows that current strategies used for fault localization are often unable to identify the correct statements to be modified, making the generation of patches extremely hard. In this context, this Ph.D. thesis provides two key contributions: 1) an empirical study of the factors that influence the generation of code-removal patches and an analysis of the useful information that can be extracted from them; and 2) a new fault localization technique that exploits the semantics of exceptions to accurately guide the fault localization process.
2

Alfaro, Hidalgo Luis Adolfo. "Experimental path loss models for UWB multistatic radar systems." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14656/.

Full text
Abstract:
The use of Ultra-Wideband (UWB) radio technology in a multistatic radar system has recently gained interest for implementing Wireless Sensor Networks (WSN) capable of detecting and tracking targets in indoor environments. Due to the increasing attention towards multistatic UWB systems, it is important to perform the radio channel characterization. In this thesis we focus on the characterization of the path loss exponent (α). The methodology followed was to collect experimental data from the UWB devices using a suitable target. This information was processed with a clutter removal algorithm using the Empty Room (ER) approach; the contribution of the target was then isolated to produce a graph of energy as a function of the product of the target-to-transmitter and target-to-receiver distances in a bistatic configuration. Finally, the value of the path loss exponent was obtained from this plot. As an additional experimental result, the main statistical parameters associated with the residual clutter were calculated, which are expected to allow a better understanding and characterization of the radar system performance in the experimental environments.
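The final estimation step this abstract describes reduces to a straight-line fit in log-log coordinates. A minimal sketch with invented noiseless data follows; the actual measurements and clutter processing are of course not reproduced here.

```python
import numpy as np

def path_loss_exponent(d_product, energy):
    """Estimate alpha assuming E is proportional to (d_tx * d_rx)^(-alpha):
    a straight-line fit in log-log coordinates, as one would do on the
    energy-vs-distance-product plot described in the abstract."""
    slope, _ = np.polyfit(np.log10(d_product), np.log10(energy), 1)
    return -slope

# Hypothetical noiseless measurements with a true exponent of 2.
d = np.array([1.0, 2.0, 4.0, 8.0])
e = d ** -2.0
alpha = path_loss_exponent(d, e)
```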

Books on the topic "Code-removal Patches"

1

Liechty, Edward A., and Martin M. Fisher, eds. Coding for Pediatrics, 2016. 21st ed. American Academy of Pediatrics, 2015. http://dx.doi.org/10.1542/9781581109597.

Full text
Abstract:
Published annually and currently in its 21st edition, Coding for Pediatrics is the signature publication in a comprehensive suite of coding products offered by the American Academy of Pediatrics (AAP). This AAP exclusive complements standard coding manuals with pediatric-specific documentation and billing solutions for pediatricians, nurse practitioners, administration staff, and pediatric coders. This year's edition has been fully updated and revised to include all changes to the 2016 Current Procedural Terminology (CPT®), complete with accompanying guidelines for their application. The numerous clinical vignettes and examples featured in the book, as well as the many coding pearls included throughout, have also been fully revised and revisited. Coding for Pediatrics, 2016 continues to provide guidance on the ICD-10-CM transition, including coding tips highlighting key conventions and documentation elements to support specific and accurate ICD-10-CM code selection. Other updates for this edition include: detailed information on new and revised CPT® codes for 2016, including prolonged clinical staff time, removal of impacted cerumen with irrigation or lavage, and revision of photo-screening services; a new chapter on enhanced quality and pay for performance; expanded coding resources, including articles from the AAP Pediatric Coding Newsletter, coding fact sheets, a sample appeal letter, a denial tracking tool, and more; all clinical vignettes presented with corresponding ICD-10-CM codes, some with valuable quality measures; and online access to many additional practice resources.
Table of contents: New and Revised CPT® Codes for 2016; Diagnosis Coding: ICD-10-CM; Modifiers and Coding Edits; Evaluation and Management (E/M) Documentation and Coding Guidelines: Incident-To, PATH Guidelines, and Scope of Practice Laws; Preventive Services; Evaluation and Management Services in the Office, Outpatient, Home, or Nursing Facility Setting; Perinatal Counseling and Care of the Neonate; Noncritical Hospital Evaluation and Management Services; Emergency Department Services; Critical Care and Intensive Care; Evolving Evaluation and Management for Nonphysician Services; Common Procedures and Non-E/M Medical Services; Coding for Quality and Performance Measures; Preventing Fraud and Abuse: Compliance, Audits, and Paybacks; The Business of Medicine: From Clean Claims to Correct Payment and Emerging Payment Methodologies.
2

Liechty, Edward A., Cindy Hughes, and Becky Dolan, eds. Coding for Pediatrics, 2014. American Academy of Pediatrics, 2013. http://dx.doi.org/10.1542/9781581108354.

Full text
Abstract:
“Published annually and currently in its 19th edition, Coding for Pediatrics is the signature publication in a comprehensive suite of coding products offered by the American Academy of Pediatrics (AAP). Written by coding experts for coders and physicians, this manual is a product of the AAP Committee on Coding and Nomenclature and is extensively reviewed each year by the AAP Coding Publications Editorial Advisory Board. This edition has been fully updated and revised to include all changes to the 2014 Current Procedural Terminology (CPT®) and International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes, complete with accompanying guidelines for their application. The numerous clinical vignettes and examples featured in the book, as well as the many “Coding Pearls” included throughout, have also been fully revised and revisited. New to this edition is an emphasis throughout the manual on the upcoming transition to the International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM), with newly added “Transitioning to 10” boxes. These boxes accompany the text and highlight for the reader the various codes and situations most affected by the forthcoming change. New 2014 features and updates make Coding for Pediatrics more indispensable than ever: ICD-10-CM guidance and examples with dynamic call-out boxes; a new chapter on preventive medicine services; new information on changes to transitional care management; updated guidance for reporting new codes for interprofessional consultations; a new explanation of changes to the code for cerumen removal; web access to Coding for Pediatrics updates and alerts; and updated clinical vignettes to bring complex coding issues to life.
Updated coding fact sheets, sample letters, a denial tracking tool, and more. The basics and beyond, with chapter after chapter of important information, updates, and advice, including: New and Revised CPT® and ICD-9-CM Codes for 2014; Diagnosis Coding: ICD-9-CM and ICD-10-CM; Evaluation and Management Documentation and Coding Guidelines: Incident-To, PATH Guidelines, and Scope of Practice Laws; Preventive Evaluation and Management Services in the Office, Outpatient, Home, or Nursing Facility Setting; Noncritical Hospital Care; Perinatal Counseling and Care of the Neonate and Critically Ill Infant/Child; Emergency Department Services; Common Procedures and Non-E/M Medical Services; Modifiers and Coding Edits; Category II CPT® Codes: Pay for Performance Measures and Category III CPT® Codes: Emerging Technologies; Fraud and Abuse: Compliance for the Pediatric Practice; and The Business of Medicine: From Clean Claims to Correct Payment and Emerging Payment Methodologies. Coding for Pediatrics has the prior approval of the American Academy of Professional Coders (AAPC) for 4.0 continuing education hours. Granting of prior approval in no way constitutes endorsement by AAPC of the program content or the program sponsor.”

Conference papers on the topic "Code-removal Patches"

1

Kim, Dehee, Jaehyuk Eoh, and Tae-Ho Lee. "An Optimal Design Approach for the Decay Heat Removal System in PGSFR." In ASME 2014 4th Joint US-European Fluids Engineering Division Summer Meeting collocated with the ASME 2014 12th International Conference on Nanochannels, Microchannels, and Minichannels. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/fedsm2014-21971.

Full text
Abstract:
The Sodium-cooled Fast Reactor (SFR) is one of the Generation IV (Gen-IV) nuclear reactors. The Prototype Gen-IV SFR (PGSFR) is an SFR being developed at the Korea Atomic Energy Research Institute (KAERI). The Decay Heat Removal System (DHRS) in the PGSFR has the safety function of shutting down the reactor under abnormal plant conditions. A single DHRS loop consists of a sodium-to-sodium decay heat exchanger (DHX), a helical-tube sodium-to-air heat exchanger (AHX) or finned-tube sodium-to-air heat exchanger (FHX), loop piping, and an expansion vessel. The DHXs are located in the cold pool, and the AHXs and FHXs are installed in the upper region of the reactor building. The DHRS loop is a closed loop, and the liquid sodium coolant circulates inside the loop by natural circulation head for the passive system and by forced circulation head for the active system. There are three independent heat transport paths in the DHRS, i.e., the DHX shell-side sodium flow path, the DHRS sodium loop path through the piping, and the AHX shell-side air flow path. To design the components of the DHRS and to determine its configuration, key design parameters such as the mass flow rates in each path and the inlet/outlet temperatures of the primary and secondary flow sides of each heat exchanger should be determined, reflecting the coupled heat transfer mechanism over the heat transfer paths. The number of design parameters is larger than the number of governing equations, and an optimization approach is required for a compact design of the DHRS. Therefore, a genetic algorithm has been implemented to determine the optimal design point. A one-dimensional system design code, which can predict heat transfer rates and pressure losses through the heat exchangers and piping, calculates the objective function, and the genetic algorithm code searches for a global optimal point. In this paper, we present a design methodology for the DHRS, for which we have developed a system code coupling a one-dimensional system code with a genetic algorithm code.
As a design result, the DHRS layouts and the sizing of the heat exchangers are presented.
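The optimization loop this abstract describes couples a system code (the objective evaluator) with a genetic algorithm (the searcher). A toy real-coded GA illustrates the pattern; the smooth two-parameter objective below is hypothetical, standing in for the one-dimensional DHRS system code.

```python
import random

def genetic_minimize(objective, bounds, pop_size=30, generations=60, seed=1):
    """Toy real-coded genetic algorithm: tournament selection, box
    crossover, Gaussian mutation, and elitism. Each individual is a
    vector of design parameters constrained to the given bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        next_pop = pop[:2]  # elitism: keep the two best designs
        while len(next_pop) < pop_size:
            a = min(rng.sample(pop, 3), key=objective)  # tournament winners
            b = min(rng.sample(pop, 3), key=objective)
            child = [ai + rng.random() * (bi - ai) for ai, bi in zip(a, b)]
            if rng.random() < 0.2:  # mutate one gene, clipped to bounds
                i = rng.randrange(dim)
                lo, hi = bounds[i]
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=objective)

# Hypothetical smooth stand-in objective with its optimum at (1, 2).
best = genetic_minimize(lambda v: (v[0] - 1) ** 2 + (v[1] - 2) ** 2,
                        [(-5.0, 5.0), (-5.0, 5.0)])
```

In the paper's setting, each objective evaluation would be one run of the one-dimensional system code over the three heat transport paths.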
2

Laxmiprasad, Putta, and Sanjay Sarma. "A Feature Free Approach to 5-Axis Tool Path Generation." In ASME 1997 Design Engineering Technical Conferences. American Society of Mechanical Engineers, 1997. http://dx.doi.org/10.1115/detc97/dfm-4325.

Full text
Abstract:
In recent years, CAD/CAM integration for machining has been based on the concept of manufacturing features. In this paper we present an overview of an alternative scheme to integrate CAD with CAM. Instead of decomposing the design into manufacturing features, we generate tool paths directly from the shape of the workpiece using visibility and accessibility arguments. This approach builds on the work of a number of previous efforts to bring concepts from robotic path planning into the realm of NC planning. We present a hardware approach for computing visibility rapidly. This information is then used to determine the "principal directions", or setups, around which the search will be conducted to remove material from the workpiece. The machining is conducted in two stages. The first stage, global roughing, is performed for each principal direction. The bulk of the material is removed through 5-axis roughing tool paths. Algorithms have been developed to generate these tool paths in an efficient way. The removal volume is first stratified into "4-1/2D machining pockets", for which tool paths are generated with simple space-filling 2D curves based on the Voronoi diagram. After global roughing, the surface of the part can be finished with "face-based finishing." In this step, each face is independently machined to the required finish and accuracy using a face-oriented, rather than feature-oriented, approach. The primary concern is to meet accessibility constraints for each face while generating tool paths individually. Once tool paths have been generated, we simulate them and correct residual intersection problems before outputting the final NC code. Since this is an overview paper that attempts to summarize current and future research in an ongoing project, we concentrate on some key issues, such as accessibility, while only summarizing other individual ideas briefly. We also present some experimental results through illustrations.
3

Wu, Zhengkai, Thomas M. Tucker, Chandra Nath, Thomas R. Kurfess, and Richard W. Vuduc. "Step Ring Based 3D Path Planning via GPU Simulation for Subtractive 3D Printing." In ASME 2016 11th International Manufacturing Science and Engineering Conference. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/msec2016-8751.

Full text
Abstract:
In this paper, both a software model visualization with path simulation and the associated machined product are produced based on step-ring-based 3-axis path planning, to demonstrate model-driven graphics processing unit (GPU) features in tool path planning and 3D image model classification by GPU simulation. Subtractive 3D printing (i.e., 3D machining) is represented as an integration of 3D printing modeling and CNC machining via GPU-simulated software. Path planning is applied through material surface removal visualization in high resolution and 3D path simulation via ring-selective path planning based on the accessibility of the path through pattern selection. First, the step ring selects critical features to reconstruct the computer-aided design (CAD) model as STL (stereolithography) voxels, and then local optimization is attained within the ring area of interest for time and energy savings in GPU volume generation, as compared to global all-path planning with longer latency. The reconstructed CAD model comes from an original sample (GATech buzz) with 2D image information. The CAD model for optimization and validation is adopted to sustain manufacturing reproduction based on system simulation feedback. To avoid collision of the produced path with the retraction path, we pick adaptive ring path generation and prediction in each planning iteration, which may also minimize material removal. Moreover, we performed partition analysis and G-code optimization for large-scale models and high-density volume data. Image classification and grid analysis based on adaptive 3D tree depth are proposed for multi-level set partition of the model to define no-cutting zones. After that, an accessibility map is computed based on the accessibility space for the rotational angular space of path orientation, to compare step-ring-based path planning versus global all-path planning.
Feature analysis via the central processing unit (CPU) or GPU processor for GPU map computation contributes to high-performance computing, with cloud computing potential through parallel computing applications of subtractive 3D printing in the future.
APA, Harvard, Vancouver, ISO, and other styles
4

Hardwick, Martin, Fiona Zhao, Fred Proctor, Sid Venkatesh, David Odendahl, and Xun Xu. "A Roadmap for STEP-NC Enabled Interoperable Manufacturing." In ASME 2011 International Manufacturing Science and Engineering Conference. ASMEDC, 2011. http://dx.doi.org/10.1115/msec2011-50029.

Abstract:
STEP-NC is the result of a ten-year international effort to replace the RS274D (ISO 6983) G- and M-code standard with a modern associative language. The new standard connects CAD design data to CAM process data so that smart applications can understand both the design requirements for a part and the manufacturing solutions developed to make that part. STEP-NC builds on a previous ten-year effort to develop the STEP standard for CAD-to-CAD and CAD-to-CAM data exchange, and uses the modern geometric constructs in that standard to specify device-independent tool paths and CAM-independent volume-removal features. This paper reviews a series of demonstrations carried out to test and validate the STEP-NC standard. These demonstrations were an international collaboration between industry, academia, and research agencies. Each demonstration focused on testing and extending the STEP-NC data model for a different application.
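The associativity highlighted in the abstract (a design feature linked to its machining process, rather than bare motion commands) can be illustrated schematically. The class names below are hypothetical and do not reproduce the actual ISO 10303 / ISO 14649 entity definitions.

```python
from dataclasses import dataclass

@dataclass
class PocketFeature:            # what the part design requires
    name: str
    depth_mm: float

@dataclass
class MillingOperation:         # how the shop chose to make it
    tool_diameter_mm: float
    spindle_rpm: int

@dataclass
class WorkingStep:              # the association the new standard keeps
    feature: PocketFeature
    operation: MillingOperation

# Plain G code carries only device-level motion; design intent is lost:
gcode = ["G0 X0 Y0", "G1 Z-5 F100", "G1 X40 Y0"]

# A STEP-NC-style workingstep keeps feature and process linked, so a
# smart application can still ask "which feature does this cut make?":
step = WorkingStep(PocketFeature("POCKET_1", depth_mm=5.0),
                   MillingOperation(tool_diameter_mm=10.0, spindle_rpm=8000))
```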
5

Jeong, Hae-Yong, Kwi-Seok Ha, Kwi-Lim Lee, Young-Min Kwon, Won-Pyo Chang, Su-Dong Suk, and Yeong-Il Kim. "Pre-Test Analysis of Natural Circulation Test of PHENIX End-of-Life With the MARS-LMR Code." In 18th International Conference on Nuclear Engineering. ASMEDC, 2010. http://dx.doi.org/10.1115/icone18-29874.

Abstract:
PHENIX, a prototype sodium-cooled fast reactor (SFR), has demonstrated fast breeder reactor technology and has also served its important role as an irradiation facility for innovative fuels and materials. In 2009 PHENIX reached its final shutdown, and the CEA launched a PHENIX end-of-life (EOL) test program, which provided a unique opportunity to validate an SFR system analysis code. The Korea Atomic Energy Research Institute (KAERI) joined this program to evaluate the capabilities and limitations of the MARS-LMR code, which will be used as a basic tool for the design and analysis of future SFRs in Korea. For this purpose, pre-test analyses of the PHENIX EOL natural circulation tests have been performed and the one-dimensional thermal-hydraulic behavior of these tests has been analyzed. The natural circulation test was initiated by a decrease of heat removal through the steam generators (SGs). This resulted in an increase of the intermediate heat exchanger (IHX) secondary inlet temperature, followed by a manual reactor scram and a decrease of the secondary pump speed. After that, the primary flow rate was also reduced by the manual trip of the three primary pumps. For the pre-test analysis, the PHENIX primary system and IHXs were nodalized into several volumes. A total of 981 subassemblies in the core were modeled, divided into 7 flow channels. The 4 active IHXs were modeled independently to investigate the change of flow into each IHX. The cold pool was modeled with two axial nodes having 5 and 6 sub-volumes, respectively. The reactor vessel cooling system was modeled to match the flow balance in the primary system; its flow path is quite complicated, but it is simplified in the modeling. For the MARS-LMR simulation, the dryout of the SGs has been described through boundary conditions for the IHTS in the form of a time-temperature table.
This boundary condition reflects the increase in IHTS temperature caused by SG dryout during the initial stage of the transient, and the increase in heat removal from the opening of the two SG containments at 3 hours after the initiation of the transient. Through comparison of the pre-analysis results with the predictions of other computer codes, it is found that the MARS-LMR code predicts natural circulation phenomena in a sodium system in a reasonable manner. A final analysis validating the code against the test data will follow, with improved modeling, in the near future.
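As a rough illustration of the physics behind such a natural circulation test, a single-loop steady-state balance between buoyancy head and a lumped friction drop can be solved in closed form. This is a textbook simplification with hypothetical loop parameters, not the MARS-LMR nodalization.

```python
import math

def natural_circulation_flow(Q, H, rho, beta, cp, A, K, g=9.81):
    """Steady single-loop natural-circulation mass flow [kg/s].

    Balances the buoyancy head rho*beta*g*dT*H against a lumped
    friction drop K*mdot**2/(2*rho*A**2), with the loop temperature
    rise dT = Q/(mdot*cp), which gives
        mdot = (2*rho**2*A**2*beta*g*H*Q / (K*cp)) ** (1/3).
    """
    return (2.0 * rho**2 * A**2 * beta * g * H * Q / (K * cp)) ** (1.0 / 3.0)

# Hypothetical sodium-loop numbers (not PHENIX data): 1 MW decay heat,
# 5 m thermal-center elevation difference, lumped loss coefficient 20.
mdot = natural_circulation_flow(Q=1.0e6, H=5.0, rho=850.0,
                                beta=2.8e-4, cp=1250.0, A=0.5, K=20.0)
dT = 1.0e6 / (mdot * 1250.0)   # resulting core temperature rise [K]
```

The cube-root dependence is why natural circulation flow responds weakly to decay-heat level: an eightfold power change only doubles the flow.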
6

Pavanaskar, Sushrut, and Sara McMains. "Machine Specific Energy Consumption Analysis for CNC-Milling Toolpaths." In ASME 2015 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/detc2015-48014.

Abstract:
This paper describes our work on analyzing and modeling energy consumption in CNC machining with an emphasis on the geometric aspects of toolpaths. We address effects of geometric and other aspects of toolpaths on energy consumed in machining by providing an advanced energy consumption model for CNC machining. We performed several controlled machining experiments to isolate, identify, and analyze the effects of various aspects of toolpaths (such as path parameters, angular change, etc.) on energy consumption. Based on our analyses, we developed an analytical energy consumption model for CNC machining that, along with the commonly used input of material removal rate (MRR), incorporates the effects of geometric toolpath parameters as well as effects of machine construction when estimating energy requirements for a toolpath. We also developed a simple web-based software interface to our model, that, once customized for a particular CNC machine, provides energy requirement estimates for a toolpath given its G/M code. Such feedback can help process planners and CNC machine operators make informed choices when generating/selecting toolpath alternatives using commercial CAM software.
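The idea of augmenting an MRR-based energy estimate with toolpath-geometry terms can be sketched as below. The parser handles only simple G0/G1 X/Y moves, and all coefficients are hypothetical placeholders, not the paper's fitted model.

```python
import math
import re

def parse_moves(gcode):
    """Extract successive (x, y) targets from simple G0/G1 lines."""
    pts, x, y = [(0.0, 0.0)], 0.0, 0.0
    for line in gcode:
        if not line.startswith(("G0 ", "G1 ")):
            continue
        mx = re.search(r"X(-?\d+\.?\d*)", line)
        my = re.search(r"Y(-?\d+\.?\d*)", line)
        x = float(mx.group(1)) if mx else x
        y = float(my.group(1)) if my else y
        pts.append((x, y))
    return pts

def toolpath_energy(pts, feed, p_idle, e_cut, mrr, e_turn):
    """Energy [J]: idle power * time, plus cutting energy ~ MRR, plus a
    penalty per radian of direction change (decel/accel losses).
    All coefficients are illustrative placeholders."""
    energy = 0.0
    for i in range(1, len(pts)):
        dx, dy = pts[i][0] - pts[i-1][0], pts[i][1] - pts[i-1][1]
        t = math.hypot(dx, dy) / feed
        energy += p_idle * t + e_cut * mrr * t
        if i >= 2:
            px, py = pts[i-1][0] - pts[i-2][0], pts[i-1][1] - pts[i-2][1]
            dtheta = abs((math.atan2(dy, dx) - math.atan2(py, px)
                          + math.pi) % (2 * math.pi) - math.pi)
            energy += e_turn * dtheta
    return energy

# An L-shaped path with two 90-degree corners:
gcode = ["G1 X10 Y0", "G1 X10 Y10", "G1 X0 Y10"]
E = toolpath_energy(parse_moves(gcode), feed=5.0, p_idle=300.0,
                    e_cut=2.0, mrr=50.0, e_turn=40.0)
```

Two paths with identical length and MRR but different corner counts then get different energy estimates, which is exactly the geometric effect the abstract isolates.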
7

Felten, Frederic N. "Numerical Prediction of Solid Particle Erosion for Elbows Mounted in Series." In ASME 2014 4th Joint US-European Fluids Engineering Division Summer Meeting collocated with the ASME 2014 12th International Conference on Nanochannels, Microchannels, and Minichannels. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/fedsm2014-21172.

Abstract:
Erosive wear due to solid-particle impact is a complex phenomenon where different parameters are responsible for causing material removal from the metal surface. Some of the most critical parameters regarding the solid particles are the size, density, roundness, and volume concentration. The properties of the carrying fluid (density, dynamic viscosity, bulk modulus…), the geometry of the flow path (straight or deviated), and the surface material properties are also major contributors to the overall severity of the solid-particle erosion process. The intent of this paper is to focus on the impact of the flow path geometry on surface erosion for a specific carrier fluid, flow rate, sand type and sand-volume concentration. A numerical approach using the commercial CFD code FLUENT is applied to investigate the solid particle erosion in two 90° pipe elbows mounted in series. The distance between the two elbows is varied, as is the angle between them. A total of 16 cases are analyzed numerically. The relationships between the parameters pertinent to the two elbows and the erosion pattern, erosion intensity, and location of maximum erosion are presented. Prior to the analyses for the two elbows mounted in series, an in-depth validation effort for a single elbow geometry is undertaken to determine the appropriate mesh requirement, turbulence model, and to calibrate the inputs to the erosion model.
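For context, the solid-particle erosion models available in CFD codes typically combine a velocity exponent with an impact-angle function. The sketch below uses an E/CRC-style correlation with constants quoted from the open literature (to be re-checked before any use); it is not the specific FLUENT setup calibrated in the paper.

```python
import math

# E/CRC-style sand-erosion correlation:
#   ER = C * BH**-0.59 * Fs * V**n * F(theta)
A = [5.40, -10.11, 10.93, -6.33, 1.42]   # angle-function coefficients
C, n = 2.17e-7, 2.41                     # literature values; re-check

def angle_function(theta):
    """Polynomial impact-angle dependence F(theta), theta in radians."""
    return sum(a * theta**(i + 1) for i, a in enumerate(A))

def erosion_ratio(V, theta, BH=1.83, Fs=0.53):
    """Wall mass lost per mass of impacting sand [kg/kg].
    V: impact speed [m/s], theta: impact angle [rad],
    BH: Brinell hardness ratio, Fs: particle sharpness factor."""
    return C * BH**-0.59 * Fs * V**n * angle_function(theta)

# Shallow impacts erode a ductile wall more than normal impacts, which
# is why elbow erosion patterns depend so strongly on flow-path geometry:
shallow = erosion_ratio(V=20.0, theta=math.radians(30))
normal = erosion_ratio(V=20.0, theta=math.radians(90))
```

In a CFD workflow, each tracked particle impact contributes one such evaluation, accumulated per wall face to give the erosion pattern and the location of maximum erosion.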
8

Li, Yabing, and Xuewu Cao. "Study on Hydrogen Risk of Spent Fuel Compartment Induced by Containment Venting." In 2017 25th International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/icone25-67800.

Abstract:
Hydrogen risk in the spent fuel compartment became a matter of concern after the Fukushima accident. However, research has mainly focused on the hydrogen generated by spent fuel due to a loss of cooling. As a severe accident management strategy, one of the containment venting paths is to vent the containment through the normal residual heat removal system (RNS) to the spent fuel compartment, which will cause hydrogen to build up there. Therefore, the hydrogen risk induced by containment venting in the spent fuel compartment is studied for an advanced passive PWR in this paper. The spent fuel pool compartment model is built and analyzed with an integral accident analysis code coupled with the containment analysis. Hydrogen risk in the spent fuel pool compartment is evaluated in combination with containment venting. Since containment venting is mainly implemented in two different strategies, containment depressurization and control of hydrogen flammability, these two strategies are analyzed in this paper to evaluate the hydrogen risk in the spent fuel compartment. Results show that there will not be significant hydrogen buildup when the hydrogen control system in the containment is available. However, if the hydrogen control system is not available, venting into the spent fuel pool compartment will cause a certain level of hydrogen risk there. In addition, suggestions are made for the containment venting strategy considering hydrogen risk in the spent fuel pool compartment.
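A back-of-the-envelope version of the buildup question can be posed as a well-mixed mole balance against the roughly 4 vol% lower flammability limit of hydrogen in air. The numbers below are hypothetical and unrelated to the paper's plant model.

```python
import math

H2_LFL = 0.04   # lower flammability limit of H2 in air (~4 vol%)

def h2_fraction(t, q, V, x_in):
    """Well-mixed compartment of free volume V [m3] receiving vent flow
    q [m3/s] with H2 fraction x_in; displaced gas leaves at the mixed
    concentration:  V dx/dt = q*(x_in - x), x(0) = 0."""
    return x_in * (1.0 - math.exp(-q * t / V))

def time_to_lfl(q, V, x_in, limit=H2_LFL):
    """Time [s] until the mixture reaches the flammability limit,
    or None if the vented gas is too dilute to ever reach it."""
    if x_in <= limit:
        return None
    return -V / q * math.log(1.0 - limit / x_in)

# Hypothetical numbers: 2000 m3 compartment, 0.5 m3/s vent flow
# containing 10 vol% hydrogen.
t_lfl = time_to_lfl(q=0.5, V=2000.0, x_in=0.10)
```

The asymptote x -> x_in shows the qualitative conclusion of such analyses: a vent stream below the limit can never make the compartment flammable, while a richer stream reaches the limit in finite time.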
9

Zhang, Sheng, Shanbin Shi, Xiao Wu, Xiaodong Sun, and Richard Christensen. "Double-Wall Natural Draft Heat Exchanger Design for Tritium Control in FHRs." In 2017 25th International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/icone25-67844.

Abstract:
Tritium control is potentially a critical issue for Fluoride-salt-cooled High-temperature Reactors (FHRs) and Molten Salt Reactors (MSRs). The tritium production rate in these reactors can be significantly higher than that in Light Water Reactors (LWRs), and tritium is highly permeable through reactor structures, especially at high temperatures. Therefore, heat exchangers with large heat transfer areas in FHRs and MSRs provide practical paths for the tritium generated in the primary salt to migrate into the surroundings, such as the Natural Draft Heat Exchangers (NDHXs) in the Direct Reactor Auxiliary Cooling System (DRACS), which is proposed as a passive decay heat removal system for these reactors. A double-wall heat exchanger design was proposed in the literature to significantly reduce the tritium release rate to the environment in FHRs. This unique shell-and-tube heat exchanger design adopts a three-fluid concept in which each heat exchanger tube consists of an inner tube and an outer tube. Each of these tube units forms three flow passages, i.e., the inner channel, the annular channel, and the outer channel. While this type of heat exchanger has been proposed, few such heat exchangers have been designed in the literature taking into account both heat transfer and tritium mass transfer performance. In this study, a one-dimensional heat and mass transfer model was developed to assist the design of a double-wall NDHX for FHRs. In this model, the molten salt and air flow through the inner and outer channels, respectively. A selected sweep gas, acting as a tritium removal medium, flows in the annular channel and carries tritium away to minimize tritium leakage into the air flowing in the outer channel. The heat transfer model was benchmarked against a Computational Fluid Dynamics (CFD) code, ANSYS Fluent, and good agreement was obtained between the model simulation and the Fluent analysis.
In addition, the heat and mass transfer models, combined with the non-dominated sorting genetic algorithm (NSGA), were applied to investigate a potential NDHX design for the Advanced High-Temperature Reactor (AHTR), a pre-conceptual FHR design developed by Oak Ridge National Laboratory. A double-wall NDHX design using inner and outer fluted tubes was optimized and compared with a single-wall design in terms of performance and economics.
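The permeation path described above is commonly modeled with Richardson's law (diffusion-limited permeation with Sieverts'-law solubility). The sketch below uses order-of-magnitude property values, not the paper's one-dimensional model or any qualified material data.

```python
import math

def permeation_flux(phi0, E_a, T, d, p_high, p_low):
    """Diffusion-limited hydrogen-isotope permeation through a metal
    wall (Richardson's law with Sieverts'-law solubility):
        J = (phi0 * exp(-E_a/(R*T)) / d) * (sqrt(p_high) - sqrt(p_low))
    phi0 [mol/(m.s.Pa^0.5)], E_a [J/mol], T [K], wall thickness d [m],
    partial pressures p [Pa]  ->  flux J [mol/(m2.s)]."""
    R = 8.314
    return (phi0 * math.exp(-E_a / (R * T)) / d
            * (math.sqrt(p_high) - math.sqrt(p_low)))

# Illustrative order-of-magnitude inputs (not an alloy datasheet):
# 1 mm wall at 900 K with 1 Pa tritium on the salt side.
J_single = permeation_flux(phi0=1.0e-7, E_a=6.0e4, T=900.0,
                           d=1.0e-3, p_high=1.0, p_low=0.0)
# In a double-wall tube, the sweep-gas annulus holds the intermediate
# partial pressure near zero and carries the permeated tritium away
# before it can cross the second wall into the air channel.
```

The square-root pressure dependence is why even a small residual tritium pressure on the sweep side matters: halving p_low does far less than halving p_high would suggest.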
10

Haynau, Rémy, Jackson B. Marcinichen, Raffaele L. Amalfi, Filippo Cataldo, and John R. Thome. "Compact Thermosyphon Cooling System For High Heat Flux Servers: Validation Of Thermosyphon Simulation Code Considering New Test Data." In ASME 2021 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/ipack2021-72620.

Abstract:
Passive, gravity-driven thermosyphons represent a step change in technology towards the goal of greatly reducing the PUE (Power Usage Effectiveness) of datacenters, replacing the energy-hungry fans of the air-cooling approach with a highly reliable solution able to dissipate the rising heat loads in a cost-effective manner. The European Union has set a zero-carbon-footprint target for datacenters by 2030, which will include new standards for implementing green solutions. In the present study, a newly updated version of the general thermosyphon simulation code previously presented at InterPACK 2019 and InterPACK 2020 is considered. To facilitate the industrial transition to thermosyphon cooling technology, with its intrinsically complex flow phenomena, the availability of a general-use, widely validated design tool that handles both air-cooled and liquid-cooled thermosyphons is of paramount importance. The solver must be able to analyze and design thermosyphon-based cooling systems with high accuracy and handle the numerous geometric singularities in the working fluid's flow path, as well as those of the secondary coolant. Therefore, a new extensive validation of the thermosyphon simulation solver is performed and presented here against experimental data gathered for a compact liquid-cooled thermosyphon design, which is being considered for the cooling of high-performance servers. The new experimental database was gathered to characterize the effects of filling ratio, heat load, and secondary coolant temperature and mass flow rate on the cooling performance, using R1234ze(E) as a low-GWP (Global Warming Potential) working fluid. This compact design has experimentally demonstrated high performance, maintaining the pseudo-chip's temperature below 45°C for evaporator footprint heat fluxes up to 18 W/cm2.
The comparison shows that the solver is able to accurately predict the thermosyphon's thermal-hydraulic performance and, based on this prediction, characterize the internal flow rate generated by the thermosyphon, which is key to correctly estimating the maximum heat removal capability.
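At its core, a gravity-driven thermosyphon flow rate follows from balancing the riser/downcomer density difference against loop friction. The closed-form sketch below is a strong simplification of a full thermosyphon solver, with illustrative fluid numbers rather than the paper's R1234ze(E) test data.

```python
import math

def thermosyphon_mdot(rho_l, rho_tp, H, K, A, g=9.81):
    """Gravity-driven flow in a closed two-phase loop [kg/s].

    The static head difference between the liquid downcomer and the
    two-phase riser, (rho_l - rho_tp)*g*H, balances a lumped friction
    drop K*mdot**2/(2*rho_l*A**2). rho_tp is the mean two-phase density
    in the riser, H the riser height, A the flow area, K a lumped loss
    coefficient."""
    dp_drive = (rho_l - rho_tp) * g * H
    return math.sqrt(2.0 * rho_l * A**2 * dp_drive / K)

# Illustrative refrigerant-like numbers (not the paper's database):
# 0.3 m riser, 2 cm2 flow area, lumped loss coefficient 15.
mdot = thermosyphon_mdot(rho_l=1150.0, rho_tp=300.0, H=0.3,
                         K=15.0, A=2.0e-4)
```

Because the two-phase density itself depends on heat load and filling ratio, the real problem is implicit, which is why a dedicated iterative solver such as the one validated in the paper is needed in practice.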
