Academic literature on the topic 'Large Scale Applications Implementing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Large Scale Applications Implementing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Large Scale Applications Implementing"

1

Honea, Rosemary, and Bonnie Mensch. "Maintaining continuity of clinical operations while implementing large-scale filmless operations." Journal of Digital Imaging 12, S1 (May 1999): 50–53. http://dx.doi.org/10.1007/bf03168754.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Dong, Biao. "Architecting Large Scale Wireless Sensor Networks Publish/Subscribe Applications: A Graph-Oriented Approach." Applied Mechanics and Materials 321-324 (June 2013): 2768–71. http://dx.doi.org/10.4028/www.scientific.net/amm.321-324.2768.

Full text
Abstract:
This paper presented an approach, called GOHM, for modeling and implementing the architecture of large-scale wireless sensor network (LSWSN) publish/subscribe (Pub/Sub) applications using a graph-oriented hierarchical model. Considering the topology scalability necessary in LSWSNs, a sparse hierarchical graph (SHG) was defined based on the hierarchical topology model. The simulation indicated that the SHG provided low mean latency at high throughput for transmitting messages. The results imply that an SHG can easily be constructed while ensuring good scalability.
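To make the publish/subscribe idea concrete, here is a minimal, generic Python sketch of cluster-based routing in a two-level hierarchy. It is only an illustration of the general pattern, not the GOHM/SHG model described in the abstract, and all class and method names are hypothetical.

```python
from collections import defaultdict

class ClusterBroker:
    """One cluster-head node: keeps local subscriptions and forwards
    publications to peer cluster heads that have matching subscribers."""
    def __init__(self, name):
        self.name = name
        self.local_subs = defaultdict(set)   # topic -> sensor-node ids in this cluster
        self.peers = []                      # other cluster heads, one hierarchy level up

    def subscribe(self, node_id, topic):
        self.local_subs[topic].add(node_id)

    def publish(self, topic, payload, _from_peer=False):
        deliveries = [(n, payload) for n in self.local_subs.get(topic, ())]
        if not _from_peer:                   # forward once across the upper level
            for peer in self.peers:
                deliveries += peer.publish(topic, payload, _from_peer=True)
        return deliveries

# Two clusters whose heads are linked at the upper level of the hierarchy.
a, b = ClusterBroker("A"), ClusterBroker("B")
a.peers, b.peers = [b], [a]
b.subscribe("node-17", "temperature")
print(a.publish("temperature", 23.5))   # [('node-17', 23.5)]
```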
APA, Harvard, Vancouver, ISO, and other styles
3

Stephen Dass A. and Prabhu J. "Ameliorating the Privacy on Large Scale Aviation Dataset by Implementing MapReduce Multidimensional Hybrid k-Anonymization." International Journal of Web Portals 11, no. 2 (July 2019): 14–40. http://dx.doi.org/10.4018/ijwp.2019070102.

Full text
Abstract:
In this fast-growing data universe, data generation and data storage are moving into the next-generation process, with petabytes and gigabytes generated every hour. This leads to data accumulation in which privacy and preservation are easily neglected. The data contain sensitive, high-privacy fields that must be hidden or removed using hashing or anonymization algorithms. In this article, the authors propose a hybrid k-anonymity algorithm to handle large-scale aircraft datasets, combining concepts from Big Data analytics with privacy preservation and storing the dataset with the help of MapReduce. The published anonymized data are moved by MapReduce to the Hive database for storage. The authors propose a multidimensional hybrid k-anonymity technique to solve the privacy issue and compare the proposed system with two other anonymization methods, BUG and TDS. Three experiments were performed to evaluate classifier error, calculate the disruption value and p% hybrid anonymity, and estimate processing time.
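For readers unfamiliar with k-anonymization, the following minimal Python sketch shows the basic generalize-and-suppress step on which such methods build. It is illustrative only and does not reproduce the authors' MapReduce-based multidimensional hybrid algorithm; the field names and generalization rules are hypothetical.

```python
from collections import defaultdict

def k_anonymize(records, quasi_ids, k, generalize):
    """Generalize quasi-identifiers, then suppress groups smaller than k.

    records    : list of dicts (hypothetical flight-passenger rows)
    quasi_ids  : keys treated as quasi-identifiers, e.g. ["age", "zip"]
    generalize : dict mapping each quasi-identifier to a coarsening function
    """
    groups = defaultdict(list)
    for rec in records:
        blurred = dict(rec)
        for q in quasi_ids:
            blurred[q] = generalize[q](rec[q])
        groups[tuple(blurred[q] for q in quasi_ids)].append(blurred)
    # Keep only equivalence classes with at least k members; smaller ones are suppressed.
    return [r for grp in groups.values() if len(grp) >= k for r in grp]

# Example: bucket ages into decades and truncate ZIP codes to 3 digits.
rows = [{"age": 34, "zip": "03060", "delay": 12},
        {"age": 37, "zip": "03064", "delay": 5},
        {"age": 52, "zip": "10001", "delay": 40}]
rules = {"age": lambda a: f"{(a // 10) * 10}-{(a // 10) * 10 + 9}",
         "zip": lambda z: z[:3] + "**"}
print(k_anonymize(rows, ["age", "zip"], k=2, generalize=rules))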
APA, Harvard, Vancouver, ISO, and other styles
4

Mishra, Nilamadhab, Chung-Chih Lin, and Hsien-Tsung Chang. "A Cognitive Adopted Framework for IoT Big-Data Management and Knowledge Discovery Prospective." International Journal of Distributed Sensor Networks 2015 (2015): 1–12. http://dx.doi.org/10.1155/2015/718390.

Full text
Abstract:
In future IoT big-data management and knowledge discovery for large-scale industrial automation applications, the importance of the industrial internet is increasing day by day. Several diverse technologies, such as the IoT (Internet of Things), computational intelligence, machine-type communication, big data, and sensor technology, can be combined to improve the data management and knowledge discovery efficiency of large-scale automation applications. In this work, we propose a Cognitive Oriented IoT Big-data Framework (COIB-framework), together with an implementation architecture, an IoT big-data layering architecture, and a data organization and knowledge exploration subsystem, for effective data management and knowledge discovery well suited to large-scale industrial automation applications. The discussion and analysis show that the proposed framework and architectures provide a reasonable solution for implementing IoT big-data based smart industrial applications.
APA, Harvard, Vancouver, ISO, and other styles
5

Kareekunnan, Afsal, Tatsufumi Agari, Takeshi Kudo, Shunsuke Niwa, Yoshito Abe, Takeshi Maruyama, Hiroshi Mizuta, and Manoharan Muruganathan. "Graphene electric field sensor for large scale lightning detection network." AIP Advances 12, no. 9 (September 1, 2022): 095209. http://dx.doi.org/10.1063/5.0095449.

Full text
Abstract:
Graphene is widely used in various real-life applications due to its high sensitivity to the change in the carrier concentration. Here, we demonstrate that graphene can be used for implementing a reliable lightning detection network as it shows excellent sensitivity to the electric field of both positive and negative polarities, with a wide range of magnitude. The lowest electric field detected by our graphene sensor is 67 V/m, which is much smaller than the detection limit of previously reported graphene sensors and comparable to that of field mill and MEMS-based sensors. We also present the results of outdoor experiments where the response of the graphene sensor to the atmospheric electric field on a lightning day was tested and found to be in good agreement with the existing field mill sensor.
APA, Harvard, Vancouver, ISO, and other styles
6

Lapidus, Azariy, and Ivan Abramov. "Implementing large-scale construction projects through application of the systematic and integrated method." IOP Conference Series: Materials Science and Engineering 365 (June 2018): 062002. http://dx.doi.org/10.1088/1757-899x/365/6/062002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hickey, Amanda, Margaret Henning, and Lissa Sirois. "Lessons Learned During Large-Scale Implementation Project Focused on Workplace Lactation Practices and Policies." American Journal of Health Promotion 36, no. 3 (November 20, 2021): 477–86. http://dx.doi.org/10.1177/08901171211055692.

Full text
Abstract:
Purpose: This practice-based research, funded by the Centers for Disease Control and Prevention (CDC), focuses on the translation of evidence-based practices and policies into real-world applications. To the best of our knowledge, this is the largest study to research the implementation process for lactation accommodations and policies at work sites. Design or approach: Pre-/post-test evaluation of work-site lactation accommodations, and 6-month follow-up with businesses that worked on the project. Setting/participants: 34 businesses across New Hampshire. Method: The team developed work-site selection criteria to award mini-grants, developed trainings and a toolkit, and worked with 34 businesses over a 3-year period. Pre-/post-implementation data were collected using the CDC work-site scorecard. A 6-month follow-up phone interview was conducted with each site. Results: We assessed the CDC scorecard and evaluated the challenges of implementing lactation spaces by industry. In our 6-month follow-up, we found that the spaces were still being utilized, and we identified specific research to inform practical evidence-based applications and lessons learned when implementing a work-site lactation space. Conclusion: We successfully provided financial and technical support to develop or improve 45 lactation spaces, with policies and practices to support mothers and families, for 34 businesses. We identified key takeaway lessons that can be used to guide the development of lactation spaces and policies in work sites. Sites self-report that these work-site changes were sustainable at 6-month follow-up.
APA, Harvard, Vancouver, ISO, and other styles
8

Ortt, Roland, Claire Stolwijk, and Matthijs Punter. "Implementing Industry 4.0: assessing the current state." Journal of Manufacturing Technology Management 31, no. 5 (August 10, 2020): 825–36. http://dx.doi.org/10.1108/jmtm-07-2020-0284.

Full text
Abstract:
Purpose: The purpose of this paper is to introduce, summarize and combine the results of 11 articles in a special issue on the implementation of Industry 4.0. Industry 4.0 emerged as a phenomenon about a decade ago, which is why it is interesting now to explore the implementation of the concept. In doing so, four research questions are addressed: (1) What is Industry 4.0? (2) How to implement Industry 4.0? (3) How to assess the implementation status of Industry 4.0? (4) What is the current implementation status of Industry 4.0? Design/methodology/approach: Subgroups of articles are formed around one or more research questions involving the implementation of Industry 4.0. The articles are carefully analyzed to provide comprehensive answers. Findings: By comparing definitions systematically, the authors show important aspects for defining Industry 4.0. The articles in the special issue explore several cases of manufacturing companies that implemented Industry 4.0. In addition, systematic approaches to aid implementation are described: an approach to combine case-study results to solve new implementation problems, approaches to assess readiness or maturity of companies regarding Industry 4.0, and surveys showing the status of implementation in larger samples of companies as well as relationships between company characteristics and type of implementation. Small and large firms differ considerably in their process of implementing Industry 4.0, for example. Research limitations/implications: This special issue discusses the implementation of Industry 4.0. The issue is limited to 11 articles, each of which has its own strengths and limitations. Practical implications: The practical relevance of the issue is that it focuses on the implementation of Industry 4.0. Cases showing successful implementation, measurement instruments to assess the degree of implementation, and advice on how to build a database of cases, together with large-scale studies on the state of implementation, provide a wealth of information with large managerial relevance. Originality/value: The paper introduces an original take on Industry 4.0 by focusing on implementation. The special issue contains literature reviews, articles describing case studies of implementation, articles developing systematic measurement instruments to assess the degree of implementation, and articles reporting large-scale studies on the state of implementation of Industry 4.0, and thereby combines several perspectives on the implementation of Industry 4.0.
APA, Harvard, Vancouver, ISO, and other styles
9

Gilpin, William. "Cryptographic hashing using chaotic hydrodynamics." Proceedings of the National Academy of Sciences 115, no. 19 (April 23, 2018): 4869–74. http://dx.doi.org/10.1073/pnas.1721852115.

Full text
Abstract:
Fluids may store and manipulate information, enabling complex applications ranging from digital logic gates to algorithmic self-assembly. While controllable hydrodynamic chaos has previously been observed in viscous fluids and harnessed for efficient mixing, its application to the manipulation of digital information has been sparsely investigated. We show that chaotic stirring of a viscous fluid naturally produces a characteristic signature of the stirring process in the arrangement of particles in the fluid, and that this signature directly satisfies the requirements for a cryptographic hash function. This includes strong divergence between similar stirring protocols’ hashes and avoidance of collisions (identical hashes from distinct stirs), which are facilitated by noninvertibility and a broad chaotic attractor that samples many points in the fluid domain. The hashing ability of the chaotic fluidic map implicates several unexpected mechanisms, including incomplete mixing at short time scales that produces a hyperuniform hash distribution. We investigate the dynamics of hashing using interparticle winding statistics, and find that hashing starts with large-scale winding of kinetically disjoint regions of the chaotic attractor, which gradually gives way to smaller scale braiding of single-particle trajectories. In addition to providing a physically motivated approach to implementing and analyzing deterministic chaotic maps for cryptographic applications, we anticipate that our approach has applications in microfluidic proof-of-work systems and characterizing large-scale turbulent flows from sparse tracer data.
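As a loose illustration of the general idea of deriving a digest from a chaotic map (not the fluid-particle construction analysed in the paper, and emphatically not a secure hash), a toy Python sketch might look like this:

```python
def chaotic_digest(message: bytes, rounds: int = 16) -> str:
    """Toy 256-bit digest driven by the logistic map x -> r*x*(1-x).

    Illustrative only: it mimics the idea of letting a chaotic map scramble
    message-dependent initial conditions, but it is NOT cryptographically
    secure and is unrelated to the fluid-particle construction in the paper.
    """
    x, r = 0.5, 3.99
    for byte in message + b"\x01":            # minimal padding so trailing bytes matter
        x = (x + (byte + 1) / 257.0) % 1.0    # absorb one byte into the state
        if x == 0.0:
            x = 0.1
        for _ in range(rounds):
            x = r * x * (1.0 - x)
    out = bytearray()
    for _ in range(32):                       # squeeze 32 output bytes
        for _ in range(rounds):
            x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return out.hex()

print(chaotic_digest(b"stir protocol A"))
print(chaotic_digest(b"stir protocol B"))     # a small input change gives a very different digest
```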
APA, Harvard, Vancouver, ISO, and other styles
10

Andre, Walder. "Efficient adaptation of the Karatsuba algorithm for implementing on FPGA very large scale multipliers for cryptographic algorithms." International Journal of Reconfigurable and Embedded Systems (IJRES) 9, no. 3 (November 1, 2020): 235. http://dx.doi.org/10.11591/ijres.v9.i3.pp235-241.

Full text
Abstract:
Here, we present a modified version of the Karatsuba algorithm to facilitate the FPGA-based implementation of three signed multipliers: 32-bit × 32-bit, 128-bit × 128-bit, and 512-bit × 512-bit. We also implement the conventional 32-bit × 32-bit multiplier for comparative purposes. The Karatsuba algorithm is preferable for multiplications with very large operands such as 64-bit × 64-bit, 128-bit × 128-bit, 256-bit × 256-bit, 512-bit × 512-bit multipliers and up. Experimental results show that the Karatsuba multiplier uses less hardware in the FPGA compared to the conventional multiplier. The Xilinx xc7k325tfbg900 FPGA using the Genesis 2 development board is used to implement the proposed scheme. The results obtained are promising for applications that require rapid implementation and reconfiguration of cryptographic algorithms.
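For reference, the recursion that the article adapts for FPGA hardware can be sketched in a few lines of Python; this software version is illustrative only and omits the signed, fixed-width hardware considerations discussed in the paper.

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply two non-negative integers with the Karatsuba recursion.

    Splits each operand into high and low halves and replaces four
    sub-multiplications with three, which is what reduces hardware cost
    for very wide operands.
    """
    if x < 2**32 or y < 2**32:          # small operands: fall back to native multiply
        return x * y
    n = max(x.bit_length(), y.bit_length())
    half = n // 2
    xh, xl = x >> half, x & ((1 << half) - 1)
    yh, yl = y >> half, y & ((1 << half) - 1)
    a = karatsuba(xh, yh)               # product of high parts
    b = karatsuba(xl, yl)               # product of low parts
    c = karatsuba(xh + xl, yh + yl)     # product of combined parts
    return (a << (2 * half)) + ((c - a - b) << half) + b

assert karatsuba(2**511 + 12345, 2**511 + 67890) == (2**511 + 12345) * (2**511 + 67890)
```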
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Large Scale Applications Implementing"

1

Smaragdakis, Ioannis. "Implementing large-scale object-oriented components." 1999. Digital version accessible at http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Martínez Trujillo, Andrea. "Dynamic Tuning for Large-Scale Parallel Applications." Doctoral thesis, Universitat Autònoma de Barcelona, 2013. http://hdl.handle.net/10803/125872.

Full text
Abstract:
The current large-scale computing era is characterised by parallel applications running on many thousands of cores. However, the performance obtained when executing these applications is not always what it is expected. Dynamic tuning is a powerful technique which can be used to reduce the gap between real and expected performance of parallel applications. Currently, the majority of the approaches that offer dynamic tuning follow a centralised scheme, where a single analysis module, responsible for controlling the entire parallel application, can become a bottleneck in large-scale contexts. The main contribution of this thesis is a novel model that enables decentralised dynamic tuning of large-scale parallel applications. Application decomposition and an abstraction mechanism are the two key concepts which support this model. The decomposition allows a parallel application to be divided into disjoint subsets of tasks which are analysed and tuned separately. Meanwhile, the abstraction mechanism permits these subsets to be viewed as a single virtual application so that global performance improvements can be achieved. A hierarchical tuning network of distributed analysis modules fits the design of this model. The topology of this tuning network can be configured to accommodate the size of the parallel application and the complexity of the tuning strategy being employed. It is from this adaptability that the model's scalability arises. To fully exploit this adaptable topology, in this work a method is proposed which calculates tuning network topologies composed of the minimum number of analysis modules required to provide effective dynamic tuning. The proposed model has been implemented in the form of ELASTIC, an environment for large-scale dynamic tuning. ELASTIC presents a plugin architecture, which allows different performance analysis and tuning strategies to be applied. Using ELASTIC, experimental evaluation has been carried out on a synthetic and a real parallel application. The results show that the proposed model, embodied in ELASTIC, is able to not only scale to meet the demands of dynamic tuning over thousands of processes, but is also able to effectively improve the performance of these applications.
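As a rough illustration of how such a hierarchical tuning network might be sized, the short Python sketch below computes a balanced topology under the assumption that each analysis module manages at most a fixed number of children. It is not ELASTIC's actual topology algorithm, and the fan-out value is hypothetical.

```python
import math

def tuning_network_size(num_tasks: int, fan_out: int):
    """Return (levels, modules_per_level) for a balanced hierarchy in which
    each analysis module manages at most `fan_out` children.

    Level 0 modules manage application tasks directly; each higher level
    abstracts the level below it, up to a single root module.
    """
    levels = []
    n = num_tasks
    while n > 1:
        n = math.ceil(n / fan_out)   # modules needed to cover the level below
        levels.append(n)
    return len(levels), levels

# Example: 16384 processes, each analysis module handling at most 64 children.
depth, per_level = tuning_network_size(16384, 64)
print(depth, per_level)              # -> 3 [256, 4, 1]
```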
APA, Harvard, Vancouver, ISO, and other styles
3

Dacosta, Italo. "Practical authentication in large-scale internet applications." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44863.

Full text
Abstract:
Due to their massive user base and request load, large-scale Internet applications have mainly focused on goals such as performance and scalability. As a result, many of these applications rely on weaker but more efficient and simpler authentication mechanisms. However, as recent incidents have demonstrated, powerful adversaries are exploiting the weaknesses in such mechanisms. While more robust authentication mechanisms exist, most of them fail to address the scale and security needs of these large-scale systems. In this dissertation we demonstrate that by taking into account the specific requirements and threat model of large-scale Internet applications, we can design authentication protocols for such applications that are not only more robust but also have low impact on performance, scalability and existing infrastructure. In particular, we show that there is no inherent conflict between stronger authentication and other system goals. For this purpose, we have designed, implemented and experimentally evaluated three robust authentication protocols: Proxychain, for SIP-based VoIP authentication; One-Time Cookies (OTC), for Web session authentication; and Direct Validation of SSL/TLS Certificates (DVCert), for server-side SSL/TLS authentication. These protocols not only offer better security guarantees, but they also have low performance overheads and do not require additional infrastructure. In so doing, we provide robust and practical authentication mechanisms that can improve the overall security of large-scale VoIP and Web applications.
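Purely to illustrate the general idea behind per-request session tokens such as One-Time Cookies (the sketch below is not the OTC protocol itself; the token layout and parameter names are assumptions), here is a minimal HMAC-based example in Python:

```python
import hmac, hashlib

def make_token(session_key: bytes, request_line: str, counter: int) -> str:
    """Derive a one-time token bound to a single request.

    Because each token covers the request line and a monotonically
    increasing counter, replaying a captured token on another request
    (the classic stolen-cookie attack) is not useful.
    """
    msg = f"{counter}|{request_line}".encode()
    mac = hmac.new(session_key, msg, hashlib.sha256).hexdigest()
    return f"{counter}:{mac}"

def verify_token(session_key: bytes, request_line: str, token: str, last_counter: int) -> bool:
    counter_str, mac = token.split(":")
    counter = int(counter_str)
    if counter <= last_counter:          # reject replays
        return False
    expected = hmac.new(session_key, f"{counter}|{request_line}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)

key = b"per-session secret established at login"
tok = make_token(key, "GET /inbox HTTP/1.1", counter=7)
print(verify_token(key, "GET /inbox HTTP/1.1", tok, last_counter=6))    # True
print(verify_token(key, "POST /delete HTTP/1.1", tok, last_counter=6))  # False
```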
APA, Harvard, Vancouver, ISO, and other styles
4

Roy, Yagnaseni. "Modeling nanofiltration for large scale desalination applications." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100096.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2015.
The Donnan Steric Pore Model with dielectric exclusion (DSPM-DE) is implemented over flat-sheet and spiral-wound leaves to develop a comprehensive model for nanofiltration modules. This model allows the user to gain insight into the physics of the nanofiltration process by adjusting and investigating the effects of membrane charge, pore radius, and other membrane characteristics. The study shows how operating conditions such as feed flow rate and pressure affect the recovery ratio and solute rejection across the membrane. A comparison is made between the results for the flat-sheet and spiral-wound configurations. The comparison showed that for the spiral-wound leaf, the maximum values of transmembrane pressure, flux and velocity occur at the feed entrance (near the permeate exit), and the lowest values of these quantities occur at the diametrically opposite corner. This is in contrast to the flat-sheet leaf, where all the quantities vary only in the feed flow direction. However, it is found that the extent of variation of these quantities along the permeate flow direction in the spiral-wound membrane is negligibly small in most cases. Also, for identical geometries and operating conditions, the flat-sheet and spiral-wound configurations give similar results. Thus the computationally expensive and complex spiral-wound model can be replaced by the flat-sheet model for a variety of purposes. In addition, the model was used to predict the performance of a seawater nanofiltration system and has been validated with data obtained from a large-scale seawater desalination plant, thereby establishing a reliable model for desalination using nanofiltration.
APA, Harvard, Vancouver, ISO, and other styles
5

Huang, Jen-Cheng. "Efficient simulation techniques for large-scale applications." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53963.

Full text
Abstract:
Architecture simulation is an important performance modeling approach. Modeling hardware components in sufficient detail helps architects identify both hardware and software bottlenecks. However, the major issue with architectural simulation is the huge slowdown compared to native execution. The slowdown is even higher for emerging workloads that feature high throughput and massive parallelism, such as GPGPU kernels. In this dissertation, three simulation techniques are proposed to simulate emerging GPGPU kernels and data analytic workloads efficiently. First, TBPoint reduces the number of simulated instructions of GPGPU kernels using inter-launch and intra-launch sampling approaches. Second, GPUmech improves the simulation speed of GPGPU kernels by abstracting the simulation model using functional simulation and analytical modeling. Finally, SimProf applies stratified random sampling with performance counters to select representative simulation points for data analytic workloads in order to deal with data-dependent performance. This dissertation presents techniques that can be used to simulate emerging large-scale workloads accurately and efficiently.
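As a generic illustration of stratified sampling over execution intervals (a sketch only; SimProf's actual counter selection and weighting scheme are not reproduced here, and the MPKI feature used below is a hypothetical choice):

```python
import random
from collections import defaultdict

def pick_simulation_points(intervals, feature, num_strata, seed=0):
    """Stratified sampling of execution intervals.

    intervals : list of dicts with per-interval performance-counter values
    feature   : key used to stratify, e.g. cache misses per kilo-instruction
    Returns one representative interval per stratum plus its weight, so a
    detailed simulation of the representatives can be extrapolated to the run.
    """
    lo = min(iv[feature] for iv in intervals)
    hi = max(iv[feature] for iv in intervals)
    width = (hi - lo) / num_strata or 1.0
    strata = defaultdict(list)
    for iv in intervals:
        idx = min(int((iv[feature] - lo) / width), num_strata - 1)
        strata[idx].append(iv)
    rng = random.Random(seed)
    return [(rng.choice(members), len(members) / len(intervals))
            for members in strata.values()]

# Hypothetical intervals described by misses per kilo-instruction (MPKI).
ivals = [{"id": i, "mpki": m} for i, m in enumerate([1.1, 1.3, 9.8, 1.2, 10.4, 5.0])]
for rep, weight in pick_simulation_points(ivals, "mpki", num_strata=3):
    print(rep["id"], round(weight, 2))
```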
APA, Harvard, Vancouver, ISO, and other styles
6

Verdugo Retamal, Cristian Andrés. "Photovoltaic power converter for large scale applications." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/672343.

Full text
Abstract:
Most large-scale photovoltaic systems are based on centralized configurations, with voltage source converters of two or three output voltage levels connected to photovoltaic panels. With the development of multilevel converters, new topologies have emerged to replace current configurations in large-scale photovoltaic applications, reducing filter requirements on the ac side, increasing the operating voltage level and improving power quality. One of the main challenges of implementing multilevel converters in large-scale photovoltaic power plants is the appearance of high leakage currents and floating voltages due to the significant number of power modules in series connection. To solve this issue, multilevel converters have introduced high- or low-frequency transformers, which provide inherent galvanic isolation to the photovoltaic panels. The Cascaded H-Bridge converter (CHB) with high-frequency transformers in a second conversion stage has provided a promising solution for large-scale applications, since it eliminates the floating voltage problem and provides an isolated control stage for each dc side of the power modules. In an effort to integrate ac transformers and avoid the need for a second conversion stage, Cascaded Transformer Multilevel Inverters (CTMI) have been proposed for photovoltaic applications. These configurations use the secondary side of the ac transformers to create the series connection, while the primary side is connected to each power module, satisfying isolation requirements and providing different possibilities of winding connections for symmetrical and asymmetrical configurations. Based on the requirements of multilevel converters for large-scale photovoltaic applications, the main goal of this PhD dissertation is to develop a new multilevel converter which provides galvanic isolation to all power modules while allowing an independent control algorithm for their power generation. The proposed configuration is called the Isolated Multi-Modular Converter (IMMC) and provides galvanic isolation through ac transformers. The IMMC comprises two groups of series-connected power modules, referred to as arms, which are electrically interconnected in parallel. The power modules are based on three-phase voltage source converters connected to individual groups of photovoltaic panels on the dc side, while the ac side is connected to three-phase low-frequency transformers. Therefore, several isolated modules can be connected in series. Because the power generated by photovoltaic panels may be affected by environmental conditions, power modules are prone to generate different power levels. This scenario must be covered by the IMMC, thus providing high flexibility in power regulation. In this regard, this PhD dissertation proposes two control strategies embedded in each power module, whose role is to control the power flow based on the dc voltage level and the arm current information. The Amplitude Voltage Compensation (AVC) regulates the amplitude of the modulated voltage, while the Quadrature Voltage Compensation (QVC) regulates its phase angle by introducing a circulating current flowing through the arms. Additionally, it is demonstrated that by combining both control strategies, the capability to withstand power imbalances increases, providing higher flexibility. The IMMC was modelled and validated via simulation results. In addition, a control algorithm was proposed to regulate the total power generated.
A downscaled 10 kW experimental setup was built to support the analysis demonstrated via simulation. The study considers an IMMC connected to the electrical grid, operating in balanced and imbalanced power scenarios, demonstrating the full flexibility of the converter for implementation in large-scale photovoltaic applications.
APA, Harvard, Vancouver, ISO, and other styles
7

Branco, Miguel. "Distributed data management for large scale applications." Thesis, University of Southampton, 2009. https://eprints.soton.ac.uk/72283/.

Full text
Abstract:
Improvements in data storage and network technologies, the emergence of new high-resolution scientific instruments, the widespread use of the Internet and the World Wide Web and even globalisation have contributed to the emergence of new large-scale data-intensive applications. These applications require new systems that allow users to store, share and process data across computing centres around the world. Worldwide distributed data management is particularly important when there is a lot of data, more than can fit in a single computer or even in a single data centre. Designing systems to cope with the demanding requirements of these applications is the focus of the present work. This thesis presents four contributions. First, it introduces a set of design principles that can be used to create distributed data management systems for data-intensive applications. Second, it describes an architecture and implementation that follows the proposed design principles, and which results in a scalable, fault tolerant and secure system. Third, it presents the system evaluation, which occurred under real operational conditions using close to one hundred computing sites and with more than 14 petabytes of data. Fourth, it proposes novel algorithms to model the behaviour of file transfers on a wide-area network. This work also presents a detailed description of the problem of managing distributed data, ranging from the collection of requirements to the identification of the uncertainty that underlies a large distributed environment. This includes a critique of existing work and the identification of practical limits to the development of transfer algorithms on a shared distributed environment. The motivation for this work has been the ATLAS Experiment for the Large Hadron Collider (LHC) at CERN, where the author was responsible for the development of the data management middleware.
APA, Harvard, Vancouver, ISO, and other styles
8

Mai, Vien Van. "Large-Scale Optimization With Machine Learning Applications." Licentiate thesis, KTH, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-263147.

Full text
Abstract:
This thesis aims at developing efficient algorithms for solving some fundamental engineering problems in data science and machine learning. We investigate a variety of acceleration techniques for improving the convergence times of optimization algorithms. First, we examine how problem structure can be exploited to accelerate the solution of highly structured problems such as generalized eigenvalue problems and elastic net regression. We then consider Anderson acceleration, a generic and parameter-free extrapolation scheme, and show how it can be adapted to accelerate practical convergence of proximal gradient methods for a broad class of non-smooth problems. For all the methods developed in this thesis, we design novel algorithms, perform mathematical analysis of convergence rates, and conduct practical experiments on real-world data sets.
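For context, the proximal gradient iterations that the thesis accelerates can be illustrated with a plain ISTA solver for the lasso problem; this sketch assumes NumPy is available and omits the Anderson-acceleration wrapper that is the subject of the thesis.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient_lasso(A, b, lam, iters=500):
    """Minimize 0.5 * ||A x - b||^2 + lam * ||x||_1 with a fixed step size.

    The step size 1/L uses the Lipschitz constant L of the smooth part,
    here the largest eigenvalue of A^T A (squared spectral norm of A).
    """
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(proximal_gradient_lasso(A, b, lam=0.5), 2))
```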


APA, Harvard, Vancouver, ISO, and other styles
9

McKenzie, Donald. "Modeling large-scale fire effects: concepts and applications." Thesis, University of Washington, 1998. http://hdl.handle.net/1773/5602.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lu, Haihao. "Large-scale optimization methods for data-science applications." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122272.

Full text
Abstract:
Thesis: Ph. D. in Mathematics and Operations Research, Massachusetts Institute of Technology, Department of Mathematics, 2019
In this thesis, we present several contributions to large-scale optimization methods with applications in data science and machine learning. In the first part, we present new computational methods and associated computational guarantees for solving convex optimization problems using first-order methods. We consider a general convex optimization problem, where we presume knowledge of a strict lower bound (as arises in empirical risk minimization in machine learning). We introduce a new functional measure called the growth constant for the convex objective function, which measures how quickly the level sets grow relative to the function value and which plays a fundamental role in the complexity analysis. Based on this measure, we present new computational guarantees for both smooth and non-smooth convex optimization that can improve existing computational guarantees in several ways, most notably when the initial iterate is far from the optimal solution set.
The usual approach to developing and analyzing first-order methods for convex optimization assumes that either the gradient of the objective function is uniformly continuous (in the smooth setting) or the objective function itself is uniformly continuous. However, in many settings, especially in machine learning applications, the convex function is neither; examples include the Poisson linear inverse model, the D-optimal design problem, and the support vector machine problem. In the second part, we develop notions of relative smoothness, relative continuity and relative strong convexity that are determined relative to a user-specified "reference function" (which should be computationally tractable for algorithms), and we show that many differentiable convex functions are relatively smooth or relatively continuous with respect to a correspondingly fairly simple reference function.
We extend the mirror descent algorithm to our new setting, with associated computational guarantees. The Gradient Boosting Machine (GBM) introduced by Friedman is an extremely powerful supervised learning algorithm that is widely used in practice; it routinely features as a leading algorithm in machine learning competitions such as Kaggle and the KDDCup. In the third part, we propose the Randomized Gradient Boosting Machine (RGBM) and the Accelerated Gradient Boosting Machine (AGBM). RGBM leads to significant computational gains compared to GBM by using a randomization scheme to reduce the search in the space of weak learners. AGBM incorporates Nesterov's acceleration techniques into the design of GBM and is the first GBM-type algorithm with a theoretically justified accelerated convergence rate. We demonstrate the effectiveness of RGBM and AGBM over GBM in obtaining a model with good training and/or testing data fidelity.
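To give a flavour of the randomization idea behind RGBM, here is a toy least-squares boosting sketch in which each round searches only a random subset of decision stumps; it is illustrative only and does not reproduce the thesis's selection rule or the accelerated AGBM variant. NumPy is assumed.

```python
import numpy as np

def fit_stump(x_col, residual):
    """Best threshold split of one feature for squared loss on the residual."""
    best = None
    for thr in np.unique(x_col)[:-1]:       # splitting at the max value leaves one side empty
        left, right = residual[x_col <= thr], residual[x_col > thr]
        pred_l, pred_r = left.mean(), right.mean()
        sse = ((left - pred_l) ** 2).sum() + ((right - pred_r) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, thr, pred_l, pred_r)
    return best

def randomized_boosting(X, y, rounds=100, feature_frac=0.3, lr=0.1, seed=0):
    """Least-squares gradient boosting where each round searches only a
    random subset of weak learners (decision stumps on a feature subset)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pred, ensemble = np.zeros(n), []
    for _ in range(rounds):
        residual = y - pred                 # negative gradient of squared loss
        feats = rng.choice(d, max(1, int(feature_frac * d)), replace=False)
        best = None
        for j in feats:
            cand = fit_stump(X[:, j], residual)
            if cand is not None and (best is None or cand[0] < best[0]):
                best = cand + (j,)
        if best is None:
            break
        _, thr, pl, pr, j = best
        pred += lr * np.where(X[:, j] <= thr, pl, pr)
        ensemble.append((j, thr, lr * pl, lr * pr))
    return ensemble, pred

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
y = 3.0 * (X[:, 0] > 0) + 0.1 * rng.standard_normal(200)
_, fitted = randomized_boosting(X, y)
print(round(float(np.mean((fitted - y) ** 2)), 3))
```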
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Large Scale Applications Implementing"

1

Biegler, Lorenz T., Andrew R. Conn, Thomas F. Coleman, and Fadil N. Santosa, eds. Large-Scale Optimization with Applications. New York, NY: Springer New York, 1997. http://dx.doi.org/10.1007/978-1-4612-0693-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Biegler, Lorenz T., Thomas F. Coleman, Andrew R. Conn, and Fadil N. Santosa, eds. Large-Scale Optimization with Applications. New York, NY: Springer New York, 1997. http://dx.doi.org/10.1007/978-1-4612-1960-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Biegler, Lorenz T., Thomas F. Coleman, Andrew R. Conn, and Fadil N. Santosa, eds. Large-Scale Optimization with Applications. New York, NY: Springer New York, 1997. http://dx.doi.org/10.1007/978-1-4612-1962-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Biegler, Lorenz T., ed. Large-scale optimization with applications. New York: Springer, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Crull, Anna W., and Dick Hooker. Fuel cells for large scale applications. Norwalk, CT: Business Communications Co., 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Lizhe, and Jingying Chen, eds. Large-scale simulation: Models, algorithms, and applications. Boca Raton, FL: Taylor & Francis, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hazra, Subhendu Bikash. Large-Scale PDE-Constrained Optimization in Applications. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-01502-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Grötschel, Martin, Sven O. Krumke, and Jörg Rambau, eds. Online optimization of large scale systems. Berlin: Springer, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Supercomputers and Large-Scale Optimization (Workshop) (1988: Minneapolis, MN). Supercomputers and large-scale optimization: Algorithms, software, applications. Edited by J. B. Rosen and the University of Minnesota Supercomputer Institute, Computer Science Department. Basel, Switzerland: J. C. Baltzer Scientific Publishing Co., 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Cooper, Robert. Supporting large scale applications on networks of workstations. Washington, DC: National Aeronautics and Space Administration, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Large Scale Applications Implementing"

1

Stewart, D. E. "Aspects of Implementing a ‘C’ Matrix Library." In Linear Algebra for Large Scale and Real-Time Applications, 423–24. Dordrecht: Springer Netherlands, 1993. http://dx.doi.org/10.1007/978-94-015-8196-7_55.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Edwards, Sherrill, and Hongming Zhang. "Implementing an Advanced DTS Tool for Large-Scale Operation Training." In Advanced Power Applications for System Reliability Monitoring, 293–342. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-44544-7_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Oberai, Assad A., Manish Malhotra, and Peter M. Pinsky. "Implementing highly accurate non-reflecting boundary conditions for large scale problems in structural acoustics." In Fluid Mechanics and Its Applications, 255–64. Dordrecht: Springer Netherlands, 1998. http://dx.doi.org/10.1007/978-94-015-9095-2_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Serrano, Maria A., Erez Hadad, Roberto Cavicchioli, Rut Palmero, Luca Chiantore, Danilo Amendola, and Eduardo Quiñones. "Distributed Big Data Analytics in a Smart City." In Technologies and Applications for Big Data Value, 475–96. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-78307-5_21.

Full text
Abstract:
This chapter describes an actual smart city use-case application for advanced mobility and intelligent traffic management, implemented in the city of Modena, Italy. This use case is developed in the context of the European Union's Horizon 2020 project CLASS [4]—Edge and Cloud Computation: A highly Distributed Software for Big Data Analytics. This use case requires both real-time data processing (data in motion) for driving assistance and online city-wide monitoring, as well as large-scale offline processing of big data sets collected from sensors (data at rest). As such, it demonstrates the advanced capabilities of the CLASS software architecture to coordinate edge and cloud for big data analytics. Concretely, the CLASS smart city use case includes a range of mobility-related applications, including extended car awareness for collision avoidance, air pollution monitoring, and digital traffic sign management. These applications serve to improve the quality of road traffic in terms of safety, sustainability, and efficiency. This chapter shows the big data analytics methods and algorithms for implementing these applications efficiently.
APA, Harvard, Vancouver, ISO, and other styles
5

Bolle, Ruud M., Jonathan H. Connell, Sharath Pankanti, Nalini K. Ratha, and Andrew W. Senior. "Large-Scale Applications." In Guide to Biometrics, 177–92. New York, NY: Springer New York, 2004. http://dx.doi.org/10.1007/978-1-4757-4036-3_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zuehlsdorff, Tim Joachim. "Large-Scale Applications." In Computing the Optical Properties of Large Systems, 167–85. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-19770-8_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lyons-Thomas, Juliette, Kadriye Ercikan, Eugene Gonzalez, and Irwin Kirsch. "Implementing ILSAs." In International Handbook of Comparative Large-Scale Studies in Education, 701–19. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-88178-8_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Lyons-Thomas, Juliette, Kadriye Ercikan, Eugene Gonzalez, and Irwin Kirsch. "Implementing ILSAs." In International Handbook of Comparative Large-Scale Studies in Education, 1–19. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-38298-8_28-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Konovalova, Natalia. "Possibilities of Social Bonds Using to Finance Higher Education Institutions." In Innovation, Technology, and Knowledge Management, 295–313. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-84044-0_14.

Full text
Abstract:
In many countries, funding for higher education institutions is insufficient and requires the search for new financial instruments and financing models. One such financing model could be the issuance of social impact bonds aimed at improving the efficiency of higher education institutions. The study focuses on the use of financial instruments such as social bonds for additional funding of higher education institutions. The peculiarities of social bonds and the possibilities of their application in the field of higher education are explored in the paper. The results of the study comprise three proposed innovative approaches to the development of a mechanism for the issuance of bonds. The first approach assumes that the issuer of social bonds in favour of the university is a bank or other financial institution. The second approach is based on the methodology of issuing social bonds by a university with the participation of the state. The third approach to the use of social bonds is the creation of a platform for financing long-term educational programs; this can be done with the participation of a large company implementing large-scale socio-economic projects. Such a platform will have a great social and economic effect.
APA, Harvard, Vancouver, ISO, and other styles
10

Trinder, Philip W., Hans-Wolfgang Loidl, and Kevin Hammond. "Large Scale Functional Applications." In Research Directions in Parallel Functional Programming, 399–426. London: Springer London, 1999. http://dx.doi.org/10.1007/978-1-4471-0841-2_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Large Scale Applications Implementing"

1

Zompakis, N., L. Papadopoulos, G. Sirakoulis, and D. Soudris. "Implementing cellular automata modeled applications on network-on-chip platforms." In 2007 IFIP International Conference on Very Large Scale Integration. IEEE, 2007. http://dx.doi.org/10.1109/vlsisoc.2007.4402514.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Faghraoui, Ahmed, Mohamed-Ghassane Kabadi, Naim Kosayyer, David Morel, Dominique Sauter, and Christophe Aubrun. "SOA-based platform implementing a structural modelling for large-scale system fault detection: Application to a board machine." In 2012 IEEE International Conference on Control Applications (CCA). IEEE, 2012. http://dx.doi.org/10.1109/cca.2012.6402705.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhou, Joe, David Taylor, and David Hodgkinson. "Further Large-Scale Implementation of Advanced Pipeline Technologies." In 2008 7th International Pipeline Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/ipc2008-64472.

Full text
Abstract:
TransCanada PipeLines Limited (TransCanada) has continued its leading effort in developing and implementing pipeline technologies. With a well-structured, large-scale technology implementation program and the collaboration of many partners over a period of three years, TransCanada has successfully implemented a number of technologies in a 38 km long NPS 42 pipeline construction project. The technology implementation program included the installation of 7.3 km of Grade 690 (X100) pipe supplied by two manufacturers, deployment of a tandem welding system, a field trial of a phased-array automated ultrasonic testing (AUT) system, the application of high-performance composite coating (HPCC), and an Alternative Integrity Validation (AIV) process that led to the first-ever construction hydrostatic test waiver from the National Energy Board. The paper provides an overview of the technology implementation program and the experience gained in applying a wide range of advanced pipeline technologies.
APA, Harvard, Vancouver, ISO, and other styles
4

Yoshimura, K., I. Gaus, K. Kaku, T. Sakaki, A. Deguchi, and S. Vomvoris. "The Role of Large Scale Demonstration Experiments in Supporting the Implementation of a High Level Waste Programme." In ASME 2013 15th International Conference on Environmental Remediation and Radioactive Waste Management. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/icem2013-96048.

Full text
Abstract:
Large-scale demonstration experiments in underground research laboratories (both on-site and off-site) are currently undertaken by most high-level radioactive waste management organisations. The decision to plan and implement prototype experiments, which might have a life of several decades, has important strategic and budgetary consequences for the organisation. Careful definition of experimental objectives based on the design and safety requirements is critical. The implementation requires the involvement of many parties and needs flexible but consistent management as, for example, additional goals for the experiments, identified in the course of the implementation, might jeopardise the initial primary goals. The outcomes of an international workshop, in which European and Japanese implementers (SKB, Posiva, Andra, ONDRAF, NUMO and Nagra) as well as certain research organisations (JAEA, RWMC) participated, identified which experiments are likely to be needed depending on the progress of a disposal programme. Earlier in a programme, large-scale demonstrations are generally performed with the aim of reducing uncertainties identified during safety case development, such as validation of thermo-hydraulic-mechanical processes in the engineered barrier system and target host rock. Feasibility testing of underground construction in a potential host rock at relevant depth might also be required. Later in a programme, i.e., closer to the license application, large-scale experiments aim largely at demonstrating engineering feasibility and confirming the performance of complete repository components. Ultimately, before licensing repository operation, 1:1 scale commissioning testing will be required. Factors contributing to the successful completion of large-scale demonstration experiments in terms of planning, defining objectives and optimising results, as well as the main lessons learned over the last 30 years, are discussed. The need for international coordination in defining the objectives of new large-scale demonstration experiments is addressed. The paper is expected to provide guidance to implementing organisations (especially those in the early stages of their programmes) that are considering participating in, or conducting on their own, large-scale experiments in the near future.
APA, Harvard, Vancouver, ISO, and other styles
5

Hyman, Daniel J., and Roger Kuroda. "Flip-Chip Assembly of RF MEMS for Microwave Hybrid Circuitry." In ASME 2005 Pacific Rim Technical Conference and Exhibition on Integration and Packaging of MEMS, NEMS, and Electronic Systems collocated with the ASME 2005 Heat Transfer Summer Conference. ASMEDC, 2005. http://dx.doi.org/10.1115/ipack2005-73419.

Full text
Abstract:
XCom Wireless is a small business specializing in RF MEMS-enabled tunable filters and phase shifters for next-generation communications systems. XCom has developed a high-yielding flip-chip assembly and packaging technique for implementing RF MEMS devices into fully-packaged chip-scale hybrid integrated circuitry for radio and microwave frequency applications through 25 GHz. This paper discusses the packaging approach employed, performance and reliability aspects, and lessons learned. The packaging is similar to a hybrid module approach, with discrete RF MEMS component dies flip-chipped into larger packages containing large-area integrated passives. The first level of interconnect is a pure gold flip chip for high yield strength and reliability with small dies. The use of first-level flip-chip and second-level BGAs allows the extremely large bandwidth MEMS devices to maintain high performance characteristics.
APA, Harvard, Vancouver, ISO, and other styles
6

Carlson, J. David, and B. F. Spencer. "Magnetorheological Fluid Dampers for Seismic Control." In ASME 1997 Design Engineering Technical Conferences. American Society of Mechanical Engineers, 1997. http://dx.doi.org/10.1115/detc97/vib-4124.

Full text
Abstract:
Magnetorheological (MR) fluid dampers have recently emerged as an enabling technology for implementing semi-active control in a variety of applications. The successful use of linear and rotary MR fluid dampers in a variety of real-time control applications in the field has recently been demonstrated. Examples of several of these controllable MR fluid actuators, which are now either in commercial production or extended field testing, are described herein. This technology is presently being extended to dampers for seismic control applications. Because of their mechanical simplicity, high dynamic range, low power requirements, large force capacity and robustness, magnetorheological (MR) fluid dampers mesh well with application demands and constraints to offer an attractive means of protecting civil infrastructure systems against severe earthquake and wind loading. Following an overview of the current status of MR fluid technology, this paper presents both laboratory and full-scale studies of the efficacy of MR dampers for seismic hazard mitigation.
APA, Harvard, Vancouver, ISO, and other styles
7

Colbertaldo, Paolo, Giulio Guandalini, Elena Crespi, and Stefano Campanari. "Balancing a High-Renewables Electric Grid With Hydrogen-Fuelled Combined Cycles: A Country Scale Analysis." In ASME Turbo Expo 2020: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/gt2020-15570.

Full text
Abstract:
A key approach to large renewable energy sources (RES) power management is based on implementing storage technologies, including batteries, power-to-hydrogen (P2H), pumped-hydro, and compressed air energy storage. Power-to-hydrogen presents specific advantages in terms of suitability for large-scale and long-term energy storage as well as capability to decarbonize a wide range of end-use sectors, e.g., including both power generation and mobility. This work applies a multi-nodal model for the hourly simulation of the energy system at a nation scale, integrating the power, transport, and natural gas sectors. Three main infrastructures are considered: (i) the power grid, characterized by instantaneous supply-demand balance and featuring a variety of storage options; (ii) the natural gas network, which can host a variable hydrogen content, supplying NG-H2 blends to the final consumers; (iii) the hydrogen production, storage, and re-electrification facilities. The aim of the work is to assess the role that can be played by gas turbine-based combined cycles in the future high-RES electric grid. Combined cycles (GTCCs) would exploit hydrogen generated by P2H implementation at large scale, transported through the natural gas infrastructure at increasingly admixed fractions, thus closing the power-to-power (P2P) conversion of excess renewables and becoming a strategic asset for future grid balancing applications. A long-term scenario of the Italian energy system is analyzed, involving a massive increase of intermittent RES power generation capacity and a significant introduction of low-emission vehicles based on electric drivetrains (pure-battery or fuel-cell). The analysis highlights the role of hydrogen as clean energy vector, not only for specific use in new applications like fuel cell vehicles and stationary fuel cells, but also for substitution of fossil fuels in conventional combustion devices. The study also explores the option of repowering the combined cycles at current sites and evaluates the effect of inter-zonal limits on power and hydrogen exchange. Moreover, results include the evaluation of the required hydrogen storage size, distributed at regional scale or in correspondence of the power plant sites. Results show that when extra hydrogen generated by P2H is fed to GTCCs, up to 17–24% H2 use is achieved, reaching up to 70–100% in southern regions, with a parallel reduction in fossil NG input and CO2 emissions of the GTCC plants.
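As a highly simplified illustration of the hourly power-to-hydrogen-to-power bookkeeping that such multi-nodal models perform (a single-node sketch with assumed round-trip efficiencies, far simpler than the paper's multi-sector model):

```python
def dispatch(hourly_res, hourly_load, h2_capacity_mwh,
             eta_electrolyser=0.7, eta_gtcc_h2=0.55):
    """Single-node hourly balance: surplus renewables feed an electrolyser and an
    H2 store; deficits are covered first by a hydrogen-fired combined cycle (GTCC),
    then by other generation. All quantities are in MWh; the efficiencies are
    illustrative assumptions, not values from the paper."""
    h2_store = curtailed = other_generation = 0.0
    for res, load in zip(hourly_res, hourly_load):
        if res >= load:
            surplus = res - load
            absorbed_h2 = min(surplus * eta_electrolyser, h2_capacity_mwh - h2_store)
            h2_store += absorbed_h2
            curtailed += surplus - absorbed_h2 / eta_electrolyser
        else:
            deficit = load - res
            from_h2 = min(deficit, h2_store * eta_gtcc_h2)
            h2_store -= from_h2 / eta_gtcc_h2
            other_generation += deficit - from_h2
    return h2_store, curtailed, other_generation

# Two illustrative days: a sunny day followed by a calm, overcast one.
res = [0, 0, 50, 120, 150, 120, 40, 0] + [0] * 8
load = [60] * 16
print(dispatch(res, load, h2_capacity_mwh=200))
```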
APA, Harvard, Vancouver, ISO, and other styles
8

Ohnemus, Kenneth R. "Implementing a large scale windows help system." In the 13th annual international conference. New York, New York, USA: ACM Press, 1995. http://dx.doi.org/10.1145/223984.224006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Hu, Yi, and Damir Novosel. "Challenges in Implementing a Large-Scale PMU System." In 2006 International Conference on Power System Technology. IEEE, 2006. http://dx.doi.org/10.1109/icpst.2006.321829.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lee, Collin, Seo Jin Park, Ankita Kejriwal, Satoshi Matsushita, and John Ousterhout. "Implementing linearizability at large scale and low latency." In SOSP '15: ACM SIGOPS 25th Symposium on Operating Systems Principles. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2815400.2815416.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Large Scale Applications Implementing"

1

Anderson, R. E., E. M. Leonard, R. F. Shea, and R. R. Berggren. Nuclear-pumped lasers for large-scale applications. Office of Scientific and Technical Information (OSTI), May 1989. http://dx.doi.org/10.2172/6179433.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Klasky, Scott, Karsten Schwan, Ron A. Oldfield, and Gerald F. Lofstead II. Advanced I/O for large-scale scientific applications. Office of Scientific and Technical Information (OSTI), January 2010. http://dx.doi.org/10.2172/1004371.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Gelernter, David. Applications and Systems for Large-Scale Adaptive Parallelism. Fort Belvoir, VA: Defense Technical Information Center, March 1997. http://dx.doi.org/10.21236/ada326097.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Jawerth, Bjorn. A Fast PDE Solver Environment for Large-Scale Applications. Fort Belvoir, VA: Defense Technical Information Center, April 2001. http://dx.doi.org/10.21236/ada394522.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Bhatele, A., T. Gamblin, B. Gunney, M. Schulz, and P. Bremer. Simplifying Performance Analysis of Large-scale Adaptive Scientific Applications. Office of Scientific and Technical Information (OSTI), January 2012. http://dx.doi.org/10.2172/1104994.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bagnall, P., R. Briscoe, and A. Poppitt. Taxonomy of Communication Requirements for Large-scale Multicast Applications. RFC Editor, December 1999. http://dx.doi.org/10.17487/rfc2729.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Nieh, Jason. Final Report: Migration Mechanisms for Large-scale Parallel Applications. Office of Scientific and Technical Information (OSTI), October 2009. http://dx.doi.org/10.2172/966698.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Knight, John, Dennis Heimbigner, Alexander L. Wolf, Antonio Carzaniga, Jonathan Hill, Premkumar Devanbu, and Michael Gertz. The Willow Architecture: Comprehensive Survivability for Large-Scale Distributed Applications. Fort Belvoir, VA: Defense Technical Information Center, December 2001. http://dx.doi.org/10.21236/ada436790.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Karavanic, K. Final report for 'Automated diagnosis of large scale parallel applications'. Office of Scientific and Technical Information (OSTI), November 2000. http://dx.doi.org/10.2172/15005433.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Carr, Robert D., Todd Morrison, William Eugene Hart, Nicolas L. Benavides, Harvey J. Greenberg, Jean-Paul Watson, and Cynthia Ann Phillips. LDRD final report : robust analysis of large-scale combinatorial applications. Office of Scientific and Technical Information (OSTI), September 2007. http://dx.doi.org/10.2172/921748.

Full text
APA, Harvard, Vancouver, ISO, and other styles