Journal articles on the topic 'Fragmentation (computing)'

To see the other types of publications on this topic, follow the link: Fragmentation (computing).

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic 'Fragmentation (computing).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Hudic, Aleksandar, Shareeful Islam, Peter Kieseberg, Sylvi Rennert, and Edgar R. Weippl. "Data confidentiality using fragmentation in cloud computing." International Journal of Pervasive Computing and Communications 9, no. 1 (March 29, 2013): 37–51. http://dx.doi.org/10.1108/17427371311315743.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Park, Y. T., P. Sthapit, and J. Y. Pyun. "Energy Efficient Data Fragmentation for Ubiquitous Computing." Computer Journal 57, no. 2 (September 1, 2013): 263–72. http://dx.doi.org/10.1093/comjnl/bxt080.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Rasche, Florian, Aleš Svatoš, Ravi Kumar Maddula, Christoph Böttcher, and Sebastian Böcker. "Computing Fragmentation Trees from Tandem Mass Spectrometry Data." Analytical Chemistry 83, no. 4 (February 15, 2011): 1243–51. http://dx.doi.org/10.1021/ac101825k.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Scheubert, Kerstin, Franziska Hufsky, Florian Rasche, and Sebastian Böcker. "Computing Fragmentation Trees from Metabolite Multiple Mass Spectrometry Data." Journal of Computational Biology 18, no. 11 (November 2011): 1383–97. http://dx.doi.org/10.1089/cmb.2011.0168.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Beckham, Olly, Gord Oldman, Julie Karrie, and Dorth Craig. "Techniques used to formulate confidential data by means of fragmentation and hybrid encryption." International Research Journal of Management, IT and Social Sciences 6, no. 6 (October 15, 2019): 68–86. http://dx.doi.org/10.21744/irjmis.v6n6.766.

Full text
Abstract:
Cloud computing is a concept shifting the approach to how computing resources are deployed and purchased. Even though the cloud has a capable, elastic, and consistent design, several security concerns restrain customers from completely accepting this novel technology and moving from traditional computing to cloud computing. In this article, we present a novel architectural model for offering protection across numerous cloud service providers, with the intention of devising and extending security means for cloud computing. We present a two-tier architecture for security in multi-clouds: one tier at the client side and the other at the server side. The article presents a security governance outline for multi-clouds and supports security needs such as confidentiality, integrity, availability, authorization, and non-repudiation for cloud storage. We also propose HBDaSeC, a secure-computation protocol, to ease the challenges of enforcing data protection for information security in the cloud.
APA, Harvard, Vancouver, ISO, and other styles
6

Rasche, Florian, Aleš Svatoš, Ravi Kumar Maddula, Christoph Böttcher, and Sebastian Böcker. "Correction to Computing Fragmentation Trees from Tandem Mass Spectrometry Data." Analytical Chemistry 83, no. 17 (September 2011): 6911. http://dx.doi.org/10.1021/ac201785d.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Vivek, V., R. Srinivasan, R. Elijah Blessing, and R. Dhanasekaran. "Payload fragmentation framework for high-performance computing in cloud environment." Journal of Supercomputing 75, no. 5 (November 17, 2018): 2789–804. http://dx.doi.org/10.1007/s11227-018-2660-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Salman, Mahdi Abed, Hasanain Ali Al Essa, and Khaldoon Alhussayni. "A Distributed Approach for Disk Defragmentation." Journal of University of Babylon for Pure and Applied Sciences 26, no. 3 (January 9, 2018): 1–5. http://dx.doi.org/10.29196/jub.v26i3.548.

Full text
Abstract:
Fragmentation is a computing problem that occurs when the files of a computer system are frequently modified and replaced. In this paper, the fragments of each file are collected and grouped in one place, using ant colony optimization (ACO), as a mission for a group of ants. The study shows the ability of ants to work in a distributed environment, such as a cloud computing system, to solve this problem. The model is simulated using NetLogo.
APA, Harvard, Vancouver, ISO, and other styles
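The defragmentation objective in this entry can be made concrete with a small sketch: model the disk as an array of blocks, count each file's non-contiguous runs, and compact greedily. This is only an illustration of the measure the ants minimize; the paper itself uses ant colony optimization in NetLogo, not the greedy pass shown here, and the toy disk layout is invented.

```python
from itertools import groupby

# Toy disk: each slot holds a file id (or None for free space).
disk = ["A", None, "B", "A", "B", None, "A", "C", None, "B"]

def runs_per_file(blocks):
    """Count contiguous runs per file; a file with more than one run is fragmented."""
    counts = {}
    for fid, _ in groupby(blocks):
        if fid is not None:
            counts[fid] = counts.get(fid, 0) + 1
    return counts

def compact(blocks):
    """Greedy defragmentation: lay out each file's blocks contiguously."""
    order = sorted({b for b in blocks if b is not None})
    packed = [fid for fid in order for _ in range(blocks.count(fid))]
    return packed + [None] * (len(blocks) - len(packed))

print("before:", runs_per_file(disk))           # {'A': 3, 'B': 3, 'C': 1}
print("after: ", runs_per_file(compact(disk)))  # every file reduced to 1 run
```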
9

Pei, Xin, Huiqun Yu, and Guisheng Fan. "Fine-Grained Access Control via XACML Policy Optimization in Cloud Computing." International Journal of Software Engineering and Knowledge Engineering 25, no. 09n10 (November 2015): 1709–14. http://dx.doi.org/10.1142/s0218194015710047.

Full text
Abstract:
One primary challenge of enforcing access control in cloud computing is how to ensure access with high efficiency while preserving data security. This paper proposes a fine-grained access control method for cloud resources. The basic idea is to use XACML as the access control language and to optimize policies by data fragmentation and policy refinement algorithms. Through data fragmentation, the accessible resources are divided into disjoint data blocks, each of which is combined with a set of policy rules. This helps to refine the policy and to avoid data leakage caused by rule conflicts on resource intersections. Finally, the disjoint data blocks and the optimized policy are distributed in the three-layered cloud, and the decision on a request is made by rule matching on a specific resource rather than by traversing the whole set of policy rules. Experiments show that our proposal achieves higher efficiency in cloud-based access control.
APA, Harvard, Vancouver, ISO, and other styles
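The fragmentation step described in this entry, dividing resources into disjoint blocks so that every resource in a block is governed by exactly the same rules, can be sketched by grouping resources on their "rule signature". The mini-policy below is hypothetical and ignores XACML syntax entirely; it only shows the grouping idea.

```python
from collections import defaultdict

# Hypothetical policy: rule id -> set of resource ids the rule covers.
rules = {
    "r1": {"a", "b", "c"},
    "r2": {"b", "c", "d"},
    "r3": {"d", "e"},
}

# Group resources by the exact set of rules applying to them; each group
# is a disjoint data block paired with its refined rule set.
blocks = defaultdict(set)
for res in set().union(*rules.values()):
    signature = frozenset(rid for rid, scope in rules.items() if res in scope)
    blocks[signature].add(res)

for rule_ids, resources in blocks.items():
    print(sorted(rule_ids), "->", sorted(resources))
```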
10

Alrashidi, Mariam O. "A Framework and Cryptography Algorithm for Protecting Sensitive Data on Cloud Service Providers." Journal of King Abdulaziz University: Computing and Information Technology Sciences 8, no. 2 (March 8, 2019): 69–92. http://dx.doi.org/10.4197/comp.8-2.6.

Full text
Abstract:
Most companies are sceptical about the security and insurance measures offered by cloud services and are reluctant to store sensitive data, such as employee records, in the cloud. Thus, more effort is needed to support the security of information in cloud computing. This paper proposes a cryptography algorithm called the "random algorithm" because it is built on the idea of randomising the encryption of uploaded files among four encryption algorithms. The proposed fragmentation technique helps add security and privacy to cloud storage applications. Based on earlier studies, we have created a file-level fragmentation technique that does not work at the database level, in contrast to commonly employed fragmentation techniques such as horizontal, vertical, and hybrid fragmentation, which do work at the database level. The proposed encryption algorithm and fragmentation technique work within an integrated security framework that includes a user authentication gateway that encrypts user registration data with the Rivest-Shamir-Adleman (RSA) cryptography algorithm. The results of the proposed security framework were positive, as it reduced encryption and decryption times by approximately 99% compared to earlier studies.
APA, Harvard, Vancouver, ISO, and other styles
11

Oke, Ayodeji Emmanuel, Ahmed Farouk Kineber, Ibraheem Albukhari, Idris Othman, and Chukwuma Kingsley. "Assessment of Cloud Computing Success Factors for Sustainable Construction Industry: The Case of Nigeria." Buildings 11, no. 2 (January 23, 2021): 36. http://dx.doi.org/10.3390/buildings11020036.

Full text
Abstract:
Cloud computing has become a valuable platform for sustainability in many countries. This study evaluates cloud computing implementation and its critical success factors (CSFs) for ensuring sustainable construction projects in Nigeria. Data were collected from the previous literature, supplemented by a quantitative approach via a questionnaire survey of 104 construction professionals, and the cloud computing CSFs were examined using the Relative Importance Index (RII) and Exploratory Factor Analysis (EFA). The results show that the awareness level of cloud computing is 96.2%, meaning that the respondents are aware of the cloud computing concept, and that most of the respondents are adopting it. The analysis of the CSFs indicated that reliable data storage, performance, and the cost of accessibility and availability were the most significant CSFs for cloud computing applications. Analysis of the CSFs through EFA generated four main components: human satisfaction, organization, client's acceptance, and industry-based factors. Consequently, this study contributes to the existing body of knowledge by highlighting the cloud computing CSFs for achieving sustainable construction projects. As such, the results could be a game-changer in the construction industry, not only in Nigeria but also in other developing nations where construction projects are implemented in a similar style and procedure. This study can serve as a benchmark for supporting decision-makers in reducing data fragmentation, since the use of data is paramount to the execution of construction works. Finally, the results of this study should be useful for enhancing sustainability and the general management of construction projects through cloud computing implementation.
APA, Harvard, Vancouver, ISO, and other styles
12

Starodubov, A. N., A. N. Kadochigova, and A. V. Kaplun. "Application of the discrete element method for simulation of coal mining by a cutter-loader in a working face." Mining Industry Journal (Gornay Promishlennost), S2/2023 (November 10, 2023): 150–54. http://dx.doi.org/10.30686/1609-9192-2023-s2-150-154.

Full text
Abstract:
Simulation modeling as a method of studying complex technological processes is successfully used in mining. Through the creation of digital models, it is possible to estimate and improve the efficiency of technical and organizational solutions used in difficult mining and geological conditions. The article describes the development of a simulation model of coal extraction by a shearer and its transportation to an armored face conveyor. Fragmentation of a high coal seam was simulated under the action of the cutting planes of the shearer. The model, created in the Rocky DEM simulation environment, takes into account the physical and mechanical properties of the materials and the 3D structures of the mining equipment. The discrete element method was used as the mathematical tool for computing fragmentation of the coalbed, which made it possible to perform the computation without loss of the initial mass and volume of the materials. During validation of the simulation model, a series of tests was carried out to determine the natural slope angle of non-spherical coal particles close in granulometric composition to the mass formed during fragmentation of the coalbed. Based on the test results, an array of values of the coal particle adhesion parameters was obtained, which allows simulation studies to be conducted for existing and projected coal mines.
APA, Harvard, Vancouver, ISO, and other styles
13

Cho, Minseon, and Donghyun Kang. "FragTracer: Real-Time Fragmentation Monitoring Tool for F2FS File System." Sensors 23, no. 9 (May 5, 2023): 4488. http://dx.doi.org/10.3390/s23094488.

Full text
Abstract:
Emerging hardware devices (e.g., NVMe SSDs, RISC-V, etc.) open new opportunities for improving the overall performance of computer systems, and applications try to fully utilize hardware resources to keep up with these improvements. However, these trends can cause significant file system overheads (i.e., fragmentation issues). In this paper, we first study the causes of fragmentation issues in the F2FS file system and present a new tool, called FragTracer, which helps to analyze the ratio of fragmentation in real time. For user-friendly operation, we designed FragTracer with three main modules, for monitoring, pre-processing, and visualization, which run automatically without any user intervention. We also optimized the performance of FragTracer to hide its overhead in tracking and analyzing fragmentation issues on-the-fly. We evaluated FragTracer with three real-world databases on the F2FS file system to study the fragmentation characteristics caused by databases, and we measured its overhead. Our evaluation results clearly show that the overhead of FragTracer is negligible when running in commodity computing environments.
APA, Harvard, Vancouver, ISO, and other styles
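As a low-tech stand-in for what FragTracer monitors, the extent count reported by e2fsprogs' `filefrag` is a rough one-shot proxy for a file's fragmentation on Linux (it is neither real-time nor F2FS-specific, and the path below is just an example).

```python
import re
import subprocess

def extent_count(path: str) -> int:
    """Count a file's extents via `filefrag`; more extents means more fragmentation."""
    out = subprocess.run(["filefrag", path], capture_output=True,
                         text=True, check=True).stdout
    match = re.search(r":\s*(\d+) extents? found", out)
    return int(match.group(1)) if match else -1

print(extent_count("/var/log/syslog"))  # example path; requires Linux and filefrag
```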
14

Chen, Yuan. "The Key Challenges for the Development of a BIM and Mobile Computing Based Collaborative System in the Construction Industry." Applied Mechanics and Materials 197 (September 2012): 656–60. http://dx.doi.org/10.4028/www.scientific.net/amm.197.656.

Full text
Abstract:
The construction industry is characterized by fragmentation, including multiple construction phases and multidisciplinary participants. Advances in BIM concepts and technologies give the construction industry powerful potential to improve coordination and collaboration across the whole building lifecycle. This paper reports on an on-going project that aims to develop a BIM- and Mobile Computing-supported collaborative system and discusses the key challenges for the implementation of BIM and Mobile Computing in the construction industry. The paper concludes with the future development and validation of the system.
APA, Harvard, Vancouver, ISO, and other styles
15

Najmi, Mahboobeh. "Investigation of Improving Load Balancing and Data Fragmentation in Cloud Computing Performance." Indian Journal of Science and Technology 12, no. 7 (February 1, 2019): 1–8. http://dx.doi.org/10.17485/ijst/2019/v12i7/140946.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Pascariu, Mihai-Cosmin, Nicolae Dinca, Carolina Cojocariu, Eugen Sisu, Alina Serb, Romina Birza, and Marius Georgescu. "Computed Mass-Fragmentation Energy Profiles of Some Acetalized Monosaccharides for Identification in Mass Spectrometry." Symmetry 14, no. 5 (May 23, 2022): 1074. http://dx.doi.org/10.3390/sym14051074.

Full text
Abstract:
Our study found that quantum chemical calculations can differentiate the fragmentation energies of isomeric structures with asymmetric carbon atoms, such as those of acetalized monosaccharides. This is justified by the good results published in recent years on the discrimination of structural isomers and diastereomers by correlating calculated mass-fragmentation energy profiles with their mass spectra. Based on the quantitative structure-fragmentation relationship (QSFR), this technique compares the intensities of primary ions in the experimental spectrum with the mass-energy profiles calculated for the candidate structures; the maximum fit is obtained for the true structure. For a preliminary assessment of the accuracy of identification of some di-O-isopropylidene monosaccharide diastereomers, we used fragmentation enthalpies (ΔfH) and Gibbs energies (ΔfG) as the energetic descriptors of fragmentation. Four quantum chemical methods were used: RM1, PM7, DFT ΔfH and DFT ΔfG. The mass-energy database shows that the differences between the profiles of the isomeric candidate structures can be large enough for them to be distinguished from each other. This database allows the optimization of energy descriptors and quantum computing methods that can ensure the correct identification of these isomers.
APA, Harvard, Vancouver, ISO, and other styles
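For reference, a fragmentation enthalpy descriptor of the kind named above is conventionally obtained from the enthalpies of formation of the species involved (the analogous expression in G gives the Gibbs-energy descriptor); whether the paper defines its descriptors exactly this way is an assumption.

```latex
\Delta H_{\mathrm{frag}}
  = \Delta_f H(\text{fragment ion})
  + \Delta_f H(\text{neutral loss})
  - \Delta_f H(\text{precursor ion})
```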
17

Abdul Saleem, S., and N. Ramya. "File Fragmentation to Improve Security in Cloud Using Graph Topology Grid Algorithm: A Survey." Asian Journal of Computer Science and Technology 8, S2 (March 5, 2019): 46–51. http://dx.doi.org/10.51983/ajcst-2019.8.s2.2026.

Full text
Abstract:
Cloud computing is an emerging paradigm that provides computing, communication and storage resources as a service over a network. In existing systems, data stored in a cloud is unsafe due to eavesdropping and hacking. To overcome the drawbacks of earlier approaches, the Division and Replication of Data in the Cloud for Optimal Performance and Security (DROPS) methodology is used. Node selection is ensured by means of a graph topology grid algorithm, and the data is also encrypted for security. In this process, the data is divided across multiple nodes and the fragmented data is replicated over the cloud nodes. Each fragment is stored on a different node in an individual location, so if attackers compromise one node, no meaningful information is exposed to them. Controlled replication of the file fragments is ensured, and each fragment is replicated only once to improve security and minimize retrieval time. In this survey, various relevant approaches are studied and analyzed. Furthermore, DROPS with the graph topology grid algorithm gives a better way of securing the cloud environment than the earlier approaches.
APA, Harvard, Vancouver, ISO, and other styles
18

Kaur, Amandeep, and Pawan Luthra. "Enhanced Security Mechanism in Cloud Computing Using Hybrid Encryption Algorithm and Fragmentation: A Review." International Journal of Computers & Technology 14, no. 8 (June 2, 2015): 5987–93. http://dx.doi.org/10.24297/ijct.v14i8.1859.

Full text
Abstract:
Cloud is a term used as a metaphor for wide area networks (like the internet) or any such large networked environment. It came partly from the cloud-like symbol used to represent the complexities of networks in schematic diagrams. It represents all the complexities of the network, which may include everything from cables, routers, servers and data centers to all other such devices. Cloud-based systems save the data of multiple organizations on shared hardware systems. Data segregation is done by encrypting users' data, but encryption alone is not a complete solution. We can also segregate data by creating virtual partitions, allowing each user to access data in his partition only. We will implement cloud security aspects for data mining by implementing a cloud system. After implementing the cloud infrastructure for data mining, we shall evaluate security measures for data mining in the cloud and fix threats that data mining poses to personal/private data in cloud systems.
APA, Harvard, Vancouver, ISO, and other styles
19

Rani, K. Sasi Kala, and S. N. Deepa. "Hybrid evolutionary computing algorithms and statistical methods based optimal fragmentation in smart cloud networks." Cluster Computing 22, S1 (December 27, 2017): 241–54. http://dx.doi.org/10.1007/s10586-017-1547-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Monjezi, Masoud, Hasan Ali Mohamadi, Bahare Barati, and Manoj Khandelwal. "Application of soft computing in predicting rock fragmentation to reduce environmental blasting side effects." Arabian Journal of Geosciences 7, no. 2 (November 23, 2012): 505–11. http://dx.doi.org/10.1007/s12517-012-0770-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Baharlouii, M., D. Mafi Gholami, and M. Abbasi. "Investigating Mangrove Fragmentation Changes Using Landscape Metrics." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W18 (October 18, 2019): 159–62. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w18-159-2019.

Full text
Abstract:
Generally, investigation of long-term mangrove fragmentation changes can serve as an important tool in assessing the sensitivity and vulnerability of these ecosystems to multiple environmental hazards. The aim of this study was therefore to reveal the trend of mangrove fragmentation changes in the Khamir habitat using satellite imagery and Fragstats software over a 30-year period (1986–2016). To this end, Landsat images from 1986, 1998, and 2016 were used; after computing the normalized difference vegetation index (NDVI) to distinguish mangroves from surrounding water and land areas, the images were further processed and classified into two land-cover types (mangrove and non-mangrove areas) using the maximum likelihood classification method. By determining the extent of mangroves in the Khamir habitat in 1986, 1998 and 2016, the trend of fragmentation changes was quantified using the CA, NP, PD and LPI landscape metrics. The results showed that the extent of mangroves in the Khamir habitat (CA) decreased in the period post-1998 (1998–2016), that the NP and PD increased in this period, and that, in contrast, the LPI decreased. These results reveal the high vulnerability of mangroves in the Khamir habitat to drought occurrence; they are thus threatened by climate change. We hope that the results of this study stimulate further climate change adaptation planning efforts and help decision-makers prioritize and implement conservation measures in the mangrove ecosystems on the northern coasts of the PG and the GO and elsewhere.
APA, Harvard, Vancouver, ISO, and other styles
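The NDVI computation used in the classification step of this entry is simple enough to state exactly; the toy reflectance values below are invented, and any mangrove/non-mangrove threshold would be an assumption.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), guarded against division by zero."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    out = np.zeros_like(denom)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Toy 2x2 scene: high-NDVI pixels would be candidate mangrove cover,
# low or negative NDVI water or bare land.
nir_band = np.array([[0.60, 0.55], [0.10, 0.05]])
red_band = np.array([[0.10, 0.12], [0.09, 0.20]])
print(ndvi(nir_band, red_band))
```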
22

Anderton, Nicole, and Michiel Postema. "Fragmentation thresholds simulated for antibubbles with various infinitesimal elastic shells." Current Directions in Biomedical Engineering 8, no. 2 (August 1, 2022): 73–76. http://dx.doi.org/10.1515/cdbme-2022-1020.

Full text
Abstract:
Antibubbles are small gas bubbles comprising one or multiple liquid or solid cores, typically surrounded by stabilising shells. Acoustically active microscopic antibubbles have been proposed for use as theranostic agents. For clinical applications such as ultrasound-guided drug delivery and flash-echo imaging, it is relevant to know the fragmentation threshold of antibubbles and the influence of the stabilising shells on it. For antibubbles with an infinitesimal frictionless elastic shell of constant surface tension, we simulated ultrasound-assisted fragmentation by computing radial pulsation as a function of time using an adapted Rayleigh-Plesset equation and converting the solutions to the time-variant kinetic energy of the shell and the time-variant surface energy deficit. By repeating this over a range of pressure amplitudes, fragmentation thresholds were found for antibubbles of varying size, core volume, shell stiffness, and driving frequency. As backscattering increases with scatterer size, and as drug delivery would require vehicles just small enough to pass through capillaries with a relatively large payload, we chose to present typical results for antibubbles of resting diameter 6 μm with a 90% incompressible core. At a driving frequency of 13 MHz, the fragmentation threshold was found to correspond to a mechanical index of less than 0.4, irrespective of shell stiffness. This mechanical index is not considered unsafe in diagnostics, meaning that antibubbles acting as drug-carrying vehicles could release their payload under diagnostic conditions.
APA, Harvard, Vancouver, ISO, and other styles
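The core of the simulation described here is the numerical integration of a Rayleigh-Plesset-type equation for the bubble radius R(t). The sketch below integrates the classical equation for a free (unshelled) gas bubble driven at 13 MHz; the paper's adapted equation adds shell terms not modelled here, and the water constants and the 0.5 MPa drive amplitude are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

rho, mu, sigma = 998.0, 1.0e-3, 0.072  # water: density, viscosity, surface tension
p0, kappa = 101325.0, 1.4              # ambient pressure, polytropic exponent
R0 = 3.0e-6                            # resting radius (6 um diameter)
f, Pa = 13.0e6, 0.5e6                  # driving frequency, acoustic amplitude

def rayleigh_plesset(t, y):
    """Classical Rayleigh-Plesset: rho*(R*R'' + 1.5*R'^2) = p_gas - p_inf - 2s/R - 4mu*R'/R."""
    R, Rdot = y
    p_gas = (p0 + 2 * sigma / R0) * (R0 / R) ** (3 * kappa)
    p_drive = Pa * np.sin(2 * np.pi * f * t)
    Rddot = ((p_gas - p0 - p_drive - 2 * sigma / R - 4 * mu * Rdot / R)
             / (rho * R) - 1.5 * Rdot ** 2 / R)
    return [Rdot, Rddot]

sol = solve_ivp(rayleigh_plesset, (0.0, 10 / f), [R0, 0.0],
                method="LSODA", max_step=1 / (200 * f))
print("max expansion ratio:", sol.y[0].max() / R0)
```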
23

Sridharan, R., and S. Domnic. "Placement Strategy for Intercommunicating Tasks of an Elastic Request in Fog-Cloud Environment." Scalable Computing: Practice and Experience 20, no. 2 (May 2, 2019): 335–48. http://dx.doi.org/10.12694/scpe.v20i2.1526.

Full text
Abstract:
Cloud computing hosts a large number of modern-day applications using the virtualization concept. However, end-to-end network latencies are detrimental to the performance of IoT (Internet of Things) applications like video surveillance and health monitoring. Although edge/fog computing alleviates the latency concerns to some extent, it is still not suitable for applications with intercommunicating tasks. Further, these applications can be elastic in nature and demand more tasks during their lifetime. To address this gap, this paper proposes a network-aware co-allocation strategy for the tasks of an individual application. After modelling the problem as bin packing with additional constraints, the authors propose a novel heuristic, the IcAPER (Inter-communication Aware Placement for Elastic Requests) algorithm. The proposed algorithm places tasks on a machine in the network neighborhood once the current resource is fully utilized by the application. Using the CloudSim Plus simulator, the performance of the IcAPER algorithm is compared with the First Come First Serve (FCFS), Random and First Fit Decreasing (FFD) algorithms for (a) resource utilization, (b) resource fragmentation and (c) the number of requests with intercommunicating tasks placed on the same PM. Extensive simulation results show that IcAPER maps 34% more tasks onto the same PM, increases resource utilization by 13% and decreases resource fragmentation by 37.8% when compared to the other algorithms under consideration.
APA, Harvard, Vancouver, ISO, and other styles
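Of the baselines named here, First Fit Decreasing is the simplest to pin down; a generic sketch with invented capacities and demands is below. IcAPER's distinguishing step, falling back to a network-neighbour machine once the current one is full, is noted in a comment but not modelled.

```python
def first_fit_decreasing(demands, capacity):
    """Place task demands into as few machines as possible (FFD baseline).
    IcAPER would additionally prefer network-neighbour machines once the
    application's current machine fills up; that step is not modelled here."""
    bins = []  # each bin: [remaining_capacity, [placed demands]]
    for demand in sorted(demands, reverse=True):
        for b in bins:
            if b[0] >= demand:
                b[0] -= demand
                b[1].append(demand)
                break
        else:
            bins.append([capacity - demand, [demand]])
    return bins

for i, (free, placed) in enumerate(first_fit_decreasing([4, 8, 1, 4, 2, 1], 10)):
    print(f"PM{i}: tasks={placed}, free={free}")
```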
24

Armony, Gad, Sven Brehmer, Tharan Srikumar, Lennard Pfennig, Fokje Zijlstra, Dennis Trede, Gary Kruppa, Dirk J. Lefeber, Alain J. van Gool, and Hans J. C. T. Wessels. "The GlycoPaSER Prototype as a Real-Time N-Glycopeptide Identification Tool Based on the PaSER Parallel Computing Platform." International Journal of Molecular Sciences 24, no. 9 (April 26, 2023): 7869. http://dx.doi.org/10.3390/ijms24097869.

Full text
Abstract:
Real-time database searching allows simpler and automated proteomics workflows, as it eliminates technical bottlenecks in high-throughput experiments. Most importantly, it enables results-dependent acquisition (RDA), where search results can be used to guide data acquisition while it is still in progress. This is especially beneficial for glycoproteomics, since the wide range of physicochemical properties of glycopeptides leads to a wide range of optimal acquisition parameters. We established the GlycoPaSER prototype by extending the Parallel Search Engine in Real-time (PaSER) functionality for real-time glycopeptide identification from fragmentation spectra. Glycopeptide fragmentation spectra were decomposed into peptide and glycan moiety spectra using common N-glycan fragments. Each moiety was subsequently identified by a specialized algorithm running in real-time. GlycoPaSER can keep up with the rate of data acquisition for real-time analysis, with performance similar to other glycoproteomics software, and produces results in line with the literature reference data. The GlycoPaSER prototype presented here provides the first proof-of-concept for real-time glycopeptide identification, unlocking the future development of RDA technology to transcend data acquisition.
APA, Harvard, Vancouver, ISO, and other styles
25

Liu, Xiulei, Shoulu Hou, Qiang Tong, Xuhong Liu, Zhihui Qin, and Junyang Yu. "A Prediction Approach for Video Hits in Mobile Edge Computing Environment." Security and Communication Networks 2020 (November 17, 2020): 1–6. http://dx.doi.org/10.1155/2020/8857564.

Full text
Abstract:
Smart device users spend most of their fragmented time in entertainment applications such as videos and films. The migration and reconstruction of video copies can improve storage efficiency in distributed mobile edge computing, and the prediction of video hits is a prerequisite for migrating video copies. This paper proposes a new prediction approach for video hits based on a combination of correlation analysis and a wavelet neural network (WNN). This is achieved by establishing a video index quantification system and analyzing the correlation between the video to be predicted and videos already online; similar videos are then selected as the influencing factors for video hits. Compared with the autoregressive integrated moving average (ARIMA) and gray prediction methods, the proposed approach has higher prediction accuracy and a broader scope of application.
APA, Harvard, Vancouver, ISO, and other styles
26

Khobragade, Shrutika, Rohini Bhosale, and Rahul Jiwahe. "High Security Mechanism: Fragmentation and Replication in the Cloud with Auto Update in the System." APTIKOM Journal on Computer Science and Information Technologies 5, no. 2 (April 22, 2020): 54–59. http://dx.doi.org/10.34306/csit.v5i2.138.

Full text
Abstract:
Cloud computing makes immense use of the internet to store huge amounts of data. It provides high-quality service at low cost and with scalability, with less requirement for hardware and software management. Security plays a vital role in the cloud: since data is handled by a third party, security is the biggest concern. The proposed mechanism focuses on security issues in the cloud. A file stored in its entirety at a particular location might be affected by an attack, and the data lost. So, in this proposed work, instead of storing a complete file at a particular location, the file is divided into fragments and each fragment is stored at a different location. Fragments are further secured by providing a hash key for each fragment. This mechanism will not reveal all the information regarding a particular file even after a successful attack. Replication of the fragments is also generated, with a strong authentication process using key generation. Auto-update of a fragment or a file is supported as well: a file or a fragment can be updated online and, instead of downloading the whole file, only the affected fragment needs to be downloaded to update it. More time is saved using this methodology.
APA, Harvard, Vancouver, ISO, and other styles
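The fragment-plus-hash mechanism in this entry (entries 27 and 28 below index the same article) reduces, at its core, to splitting a file and tagging each piece with a digest. The sketch below uses SHA-256, a fragment count of four, and in-memory fragments as assumptions; a real deployment would disperse the fragments across storage locations.

```python
import hashlib
from pathlib import Path

def fragment_file(path: str, n_fragments: int = 4):
    """Split a file into fragments, each tagged with its SHA-256 hash key."""
    data = Path(path).read_bytes()
    size = -(-len(data) // n_fragments)  # ceiling division
    return [{"index": i,
             "sha256": hashlib.sha256(data[i * size:(i + 1) * size]).hexdigest(),
             "payload": data[i * size:(i + 1) * size]}
            for i in range(n_fragments)]

for frag in fragment_file(__file__):  # fragment this script itself as a demo
    print(frag["index"], frag["sha256"][:16], len(frag["payload"]), "bytes")
```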
27

Khobragade, Shrutika, Rohini Bhosale, and Rahul Jiwane. "High security mechanism: fragmentation and replication in the cloud with auto update in the system." Computer Science and Information Technologies 1, no. 2 (July 1, 2020): 78–83. http://dx.doi.org/10.11591/csit.v1i2.p78-83.

Full text
Abstract:
Cloud computing makes immense use of the internet to store huge amounts of data. It provides high-quality service at low cost and with scalability, with less requirement for hardware and software management. Security plays a vital role in the cloud: since data is handled by a third party, security is the biggest concern. The proposed mechanism focuses on security issues in the cloud. A file stored in its entirety at a particular location might be affected by an attack, and the data lost. So, in this proposed work, instead of storing a complete file at a particular location, the file is divided into fragments and each fragment is stored at a different location. Fragments are further secured by providing a hash key for each fragment. This mechanism will not reveal all the information regarding a particular file even after a successful attack. Replication of the fragments is also generated, with a strong authentication process using key generation. Auto-update of a fragment or a file is supported as well: a file or a fragment can be updated online and, instead of downloading the whole file, only the affected fragment needs to be downloaded to update it. More time is saved using this methodology.
APA, Harvard, Vancouver, ISO, and other styles
28

Khobragade, Shrutika, Rohini Bhosale, and Rahul Jiwane. "High security mechanism: fragmentation and replication in the cloud with auto update in the system." Computer Science and Information Technologies 1, no. 2 (July 1, 2020): 78–83. http://dx.doi.org/10.11591/csit.v1i2.pp78-83.

Full text
Abstract:
Cloud computing makes immense use of the internet to store huge amounts of data. It provides high-quality service at low cost and with scalability, with less requirement for hardware and software management. Security plays a vital role in the cloud: since data is handled by a third party, security is the biggest concern. The proposed mechanism focuses on security issues in the cloud. A file stored in its entirety at a particular location might be affected by an attack, and the data lost. So, in this proposed work, instead of storing a complete file at a particular location, the file is divided into fragments and each fragment is stored at a different location. Fragments are further secured by providing a hash key for each fragment. This mechanism will not reveal all the information regarding a particular file even after a successful attack. Replication of the fragments is also generated, with a strong authentication process using key generation. Auto-update of a fragment or a file is supported as well: a file or a fragment can be updated online and, instead of downloading the whole file, only the affected fragment needs to be downloaded to update it. More time is saved using this methodology.
APA, Harvard, Vancouver, ISO, and other styles
29

Zheng, Zhe, Yu Han, Yingying Chi, Fusheng Yuan, Wenpeng Cui, Hailong Zhu, Yi Zhang, and Peiying Zhang. "Network Resource Allocation Algorithm Using Reinforcement Learning Policy-Based Network in a Smart Grid Scenario." Electronics 12, no. 15 (August 3, 2023): 3330. http://dx.doi.org/10.3390/electronics12153330.

Full text
Abstract:
The exponential growth in user numbers has resulted in an overwhelming surge of data that the smart grid must process. To tackle this challenge, edge computing emerges as a vital solution. However, current heuristic resource scheduling approaches often suffer from resource fragmentation and consequently get stuck in local optima. This paper introduces a novel network resource allocation method for multi-domain virtual networks with the support of edge computing. The approach entails modeling the edge network as a multi-domain virtual network and formulating resource constraints specific to the edge computing network. A policy network is then constructed for reinforcement learning (RL), and an optimal resource allocation strategy is obtained under the premise of ensuring resource requirements. In the experimental section, our algorithm is compared with three other algorithms. The results show average improvements of 5.30%, 8.85%, 15.47% and 22.67% in the long-term average revenue-cost ratio, virtual network request acceptance ratio, long-term average revenue and CPU resource utilization, respectively.
APA, Harvard, Vancouver, ISO, and other styles
30

Momenzadeh, Zahra, and Faramarz Safi-Esfahani. "Workflow scheduling applying adaptable and dynamic fragmentation (WSADF) based on runtime conditions in cloud computing." Future Generation Computer Systems 90 (January 2019): 327–46. http://dx.doi.org/10.1016/j.future.2018.07.041.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Hung, Tran Van, and Chuan He Huang. "A Dynamic Data Fragmentation and Distribution Strategy for Main-Memory Database Cluster." Advanced Materials Research 490-495 (March 2012): 1231–36. http://dx.doi.org/10.4028/www.scientific.net/amr.490-495.1231.

Full text
Abstract:
An MMDB cluster system is a memory-optimized relational database implemented on a cluster computing platform; it provides applications with extremely fast response times and the very high throughput required by many applications in a wide range of industries. Here, a new dynamic fragment allocation algorithm (DFAPR) for the partially replicated allocation scenario is proposed. This algorithm reallocates data with respect to the changing data access pattern of each fragment: data is maintained at the current site, migrated, or replicated to remote sites depending on access frequency and average response time. Finally, simulation results show that DFAPR is suitable for MMDB clusters because it provides better response times and maximizes the locality of processing, so parallel processing of MMDBs could be developed in a cluster environment.
APA, Harvard, Vancouver, ISO, and other styles
32

Gao, Xianming, Baosheng Wang, and Xiaozhe Zhang. "VR-Cluster: Dynamic Migration for Resource Fragmentation Problem in Virtual Router Platform." Scientific Programming 2016 (2016): 1–14. http://dx.doi.org/10.1155/2016/3976965.

Full text
Abstract:
Network virtualization technology is regarded as one of the gradual schemes for network architecture evolution. With the development of network functions virtualization, operators put much effort into achieving router virtualization using general-purpose servers. In order to ensure high performance, a virtual router platform usually adopts a cluster of general-purpose servers, which can also be regarded as a special cloud computing environment. However, due to the frequent creation and deletion of router instances, a lot of resource fragmentation may be generated, preventing the platform from establishing new router instances. In order to solve this "resource fragmentation problem," we first propose VR-Cluster, which introduces two extra function planes: a switching plane and a resource management plane. The switching plane is mainly used to support seamless migration of router instances without packet loss; the resource management plane can dynamically move router instances from one server to another using VR-mapping algorithms. Three VR-mapping algorithms are proposed based on VR-Cluster: a first-fit, a best-fit, and a worst-fit mapping algorithm. Finally, we build a VR-Cluster prototype system using general X86 servers, evaluate its migration time, and further analyze the advantages and disadvantages of our proposed VR-mapping algorithms in solving the resource fragmentation problem.
APA, Harvard, Vancouver, ISO, and other styles
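The three VR-mapping policies named in this entry differ only in how a target server is chosen for a router instance; a minimal sketch of that choice (server names, capacities, and demand invented) is below.

```python
def choose_server(servers, demand, policy):
    """Pick a server with enough free capacity for a router instance.
    servers: dict of server name -> free capacity units."""
    feasible = {s: free for s, free in servers.items() if free >= demand}
    if not feasible:
        return None  # would trigger migration/consolidation in VR-Cluster
    if policy == "first":
        return next(iter(feasible))             # first-fit: first feasible server
    if policy == "best":
        return min(feasible, key=feasible.get)  # best-fit: tightest remaining gap
    if policy == "worst":
        return max(feasible, key=feasible.get)  # worst-fit: keeps large holes intact
    raise ValueError(policy)

servers = {"s1": 6, "s2": 3, "s3": 9}
for p in ("first", "best", "worst"):
    print(p, "->", choose_server(servers, demand=3, policy=p))
```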
33

Wang, Li Xuan, Li Fang Liu, Shen Ling Liu, Dong Chen, and Yu Jiao Chen. "A Secured Distributed and Data Fragmentation Model for Cloud Storage." Applied Mechanics and Materials 347-350 (August 2013): 2693–99. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.2693.

Full text
Abstract:
The increasing popularity of cloud services is leading people to concentrate more on cloud storage than traditional storage. However, cloud storage confronts many challenges, especially the security of out-sourced data (data that is not stored/retrieved on the tenant's own servers). Security should not only keep the data safe from attack but also allow the original data to be recovered efficiently after an attack. To address the security issue, we propose a new distributed and data fragmentation model of cloud storage named DDFM (Distributed and Data Fragmentation Model). DDFM aims to provide tenants with a secured and integrated cloud storage service with a layer-to-layer protection strategy. The layer-to-layer protection strategy of our model includes three main algorithms: an authentication and authorization management algorithm based on OpenID and OAuth, a data fragment algorithm based on granular computing, and the Haystack file storage algorithm. Considering tenants' security requirements, DDFM, based on these algorithms, provides a better cloud storage architecture for our tenants. Furthermore, DDFM can defend against most network threats and provides a secure way for third-party applications to access sensitive information stored in cloud storage.
APA, Harvard, Vancouver, ISO, and other styles
34

Rajani, Mrs K., Y. Sreeja, T. Tejaswini, and B. Manasa. "Optimal Performance of Security by Fragmentation and Replication of Data in Cloud." International Journal for Research in Applied Science and Engineering Technology 10, no. 8 (August 31, 2022): 1547–50. http://dx.doi.org/10.22214/ijraset.2022.45151.

Full text
Abstract:
Outsourcing data to third-party administrative control, as is done in cloud computing, gives rise to security concerns, as the data may be compromised by attacks from other users and nodes within the cloud. Therefore, strong security measures are required to protect data within the cloud; however, the employed security strategy must also take into account the optimization of the data retrieval time. In this paper, we propose Division and Replication of Data in the Cloud for Optimal Performance and Security (DROPS), which addresses the security and performance issues together. In the DROPS methodology, we divide a file into fragments and replicate the fragmented data over the cloud nodes. Each of the nodes stores only a single fragment of a particular data file, which ensures that even in case of a successful attack, no meaningful information is revealed to the attacker.
APA, Harvard, Vancouver, ISO, and other styles
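The placement constraint at the heart of DROPS, that no node holds more than one fragment of a given file, is easy to state in code. The random node choice below is a placeholder for the paper's security- and performance-aware selection, and all names are invented.

```python
import random

def drops_place(fragments, nodes):
    """Assign each fragment to a distinct node, so a compromised node
    reveals at most one fragment of the file."""
    if len(fragments) > len(nodes):
        raise ValueError("need at least as many nodes as fragments")
    return dict(zip(fragments, random.sample(nodes, len(fragments))))

print(drops_place(["f0", "f1", "f2"], ["n1", "n2", "n3", "n4", "n5"]))
```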
35

Mustafa, Rashed, Md Javed Hossain, and Thomas Chowdhury. "A better way for finding the optimal number of nodes in a distributed database management system." Daffodil International University Journal of Science and Technology 4, no. 2 (February 10, 2010): 19–22. http://dx.doi.org/10.3329/diujst.v4i2.4362.

Full text
Abstract:
Distributed database management systems (DDBMS) are one of the prime concerns in distributed computing. The driving force behind their development is the demand from applications that need to query very large databases (on the order of terabytes). Traditional client-server database systems are too slow to handle such applications. This paper presents a better way of finding the optimal number of nodes in a distributed database management system.
APA, Harvard, Vancouver, ISO, and other styles
36

Fathi, Mohamad Syazli, Mohammad Abedi, and Norshakila Rawai. "The Potential of Cloud Computing Technology for Construction Collaboration." Applied Mechanics and Materials 174-177 (May 2012): 1931–34. http://dx.doi.org/10.4028/www.scientific.net/amm.174-177.1931.

Full text
Abstract:
A major barrier to successful construction project delivery has been the fragmentation and poor relationships between players in the construction industry. This significant issue has exerted a negative influence on project objectives, especially predetermined ones such as time, budget and quality. Construction projects require intensive efforts and processes, and it is often a challenge for parties within the construction industry to access accurate information and efficient communications. The purpose of this paper is to investigate construction collaboration tools along with the concepts of context-aware computing and cloud computing. The findings of this research are based on a thorough review of the comprehensive literature on IT, computing and construction. Consequently, this study introduces and develops the concepts of potentially innovative collaborative tools, such as Context-Aware Cloud Computing Information Systems (CACCIS), for facilitating construction supply chain processes and networks by enhancing the opportunities for achieving better competitive advantages. Firstly, it is hoped that this study will lead to improved construction collaboration that enhances the competitive advantages and opportunities for the internationalisation and globalisation of the construction industry. Secondly, it presents an effective method for providing new insights into the process of integration and, most significantly, for improving the productivity, efficiency and effectiveness of the construction industry.
APA, Harvard, Vancouver, ISO, and other styles
37

Pandian, Dr A. Pasumpon, and Dr Smys S. "Effective Fragmentation Minimization by Cloud Enabled Back Up Storage." Journal of Ubiquitous Computing and Communication Technologies 2, no. 1 (March 5, 2020): 1–9. http://dx.doi.org/10.36548/jucct.2020.1.001.

Full text
Abstract:
Nowadays, to increase efficiency, consistency and quality, and to further extend business worldwide, organizations digitize the processing, storing and conveying of information. This has in turn caused a huge flow of data, paving the way for data recovery services. Cloud computing, with its massive storage capabilities, has become a predominant paradigm for data storage and recovery due to its on-demand network access, elasticity, flexibility and pay-as-you-go model. Moreover, to secure the stored information, the information is fragmented before storage. However, this fragmentation often leaves dispersed and scattered packages lacking proper order, increasing recovery time and reducing the efficiency of recovery and information collection. To bring down the restoration time and enhance its efficiency, the method proposed in this paper tries to reduce fragmentation by minimizing the number of dispersed and scattered packages. For this, the paper utilizes a hybridized history-aware algorithm (HHAR) along with a cache-aware filter to gather the historical information associated with the backup system and to identify out-of-order containers, respectively. Further, every data package is protected by applying the Advanced Encryption Standard, producing a key to authenticate access to the data. The proposed model is simulated using Network Simulator II, and the results obtained show that the recovery time is enhanced by 95% and the restore performance is improved by 94.3%.
APA, Harvard, Vancouver, ISO, and other styles
38

Иванова, Е. В., and Л. Б. Соколинский. "Using Intel Xeon Phi coprocessors for execution of natural join on compressed data." Numerical Methods and Programming (Vychislitel'nye Metody i Programmirovanie), no. 4 (December 18, 2015): 534–42. http://dx.doi.org/10.26089/nummet.v16r450.

Full text
Abstract:
A database coprocessor for high-performance cluster computing systems with many-core accelerators is described. The coprocessor uses distributed columnar indexes with interval fragmentation. The operation of the coprocessor engine is considered using the example of natural join processing: the parallel decomposition of the natural join operator is performed using the distributed columnar indexes. The proposed approach allows relational operators to be executed on computing clusters without massive data exchange. The results of computational experiments on Intel Xeon Phi coprocessors confirm the efficiency of the developed methods and algorithms.
APA, Harvard, Vancouver, ISO, and other styles
39

Li, Wei, Shuhua Li, and Yuansheng Jiang. "Generalized Energy-Based Fragmentation Approach for Computing the Ground-State Energies and Properties of Large Molecules." Journal of Physical Chemistry A 111, no. 11 (March 2007): 2193–99. http://dx.doi.org/10.1021/jp067721q.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Xu, Xin, and Huiqun Yu. "A Game Theory Approach to Fair and Efficient Resource Allocation in Cloud Computing." Mathematical Problems in Engineering 2014 (2014): 1–14. http://dx.doi.org/10.1155/2014/915878.

Full text
Abstract:
On-demand resource management is a key characteristic of cloud computing. Cloud providers should support computational resource sharing in a fair way to ensure that no user gets much better resources than others. Another goal is to improve resource utilization by minimizing resource fragmentation when mapping virtual machines to physical servers. The focus of this paper is a game-theoretic resource allocation algorithm that considers both fairness among users and resource utilization. Experiments with a FUGA implementation on an 8-node server cluster show the optimality of this algorithm in keeping fairness, by comparison with the Hadoop scheduler. Simulations based on a Google workload trace demonstrate that the algorithm is able to reduce resource wastage and achieve a better resource utilization rate than other allocation mechanisms.
APA, Harvard, Vancouver, ISO, and other styles
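Fairness of the kind this entry targets is often formalized as max-min fairness; the progressive-filling routine below is a textbook illustration of that criterion, not the paper's FUGA algorithm, and the demands and capacity are invented.

```python
def max_min_share(demands, capacity):
    """Progressive filling: repeatedly split the remaining capacity equally
    among users whose demand is not yet satisfied."""
    alloc = {u: 0.0 for u in demands}
    remaining = float(capacity)
    active = set(demands)
    while active and remaining > 1e-12:
        share = remaining / len(active)
        for u in sorted(active):
            give = min(share, demands[u] - alloc[u])
            alloc[u] += give
            remaining -= give
        active = {u for u in active if demands[u] - alloc[u] > 1e-12}
    return alloc

print(max_min_share({"u1": 2, "u2": 4, "u3": 10}, capacity=12))
# -> {'u1': 2.0, 'u2': 4.0, 'u3': 6.0}
```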
41

Al-Bakri, Ali Y., and Mohammed Sazid. "Application of Artificial Neural Network (ANN) for Prediction and Optimization of Blast-Induced Impacts." Mining 1, no. 3 (November 26, 2021): 315–34. http://dx.doi.org/10.3390/mining1030020.

Full text
Abstract:
Drilling and blasting remains the preferred technique for rock mass breaking in mining and construction projects, compared to other methods, from an economic and productivity point of view. However, rock mass breaking utilizes at most 30% of the blast's explosive energy, and around 70% is lost as waste, creating negative impacts on safety and the surrounding environment. Blast-induced impact prediction has become prominent in recent research as a recommended way to optimize blasting operations, increase efficiency, and mitigate safety and environmental concerns. Artificial neural networks (ANN) were recently introduced as a computing approach for designing computational models of blast-induced fragmentation and other impacts, with proven superior capability. This paper highlights and discusses the research articles published in this field. The prediction models for rock fragmentation and some blast-induced effects, including flyrock, ground vibration, and back-break, are investigated in detail in this review. The literature shows that applying artificial neural networks to blast event prediction is a practical way to achieve optimized blasting operations with reduced undesirable effects. At the same time, the examined papers indicate a lack of articles focused on blast-induced fragmentation prediction using the ANN technique, despite its significant importance to the overall economy of mining operations, as well as a lack of research predicting more than one blast-induced impact.
APA, Harvard, Vancouver, ISO, and other styles
42

Menikarachchi, Lochana C., and José A. Gascón. "An extrapolation method for computing protein solvation energies based on density fragmentation of a graphical surface tessellation." Journal of Molecular Graphics and Modelling 30 (September 2011): 38–45. http://dx.doi.org/10.1016/j.jmgm.2011.06.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Ronkin, Mikhail V., Elena N. Akimova, and Vladimir E. Misilov. "Review of deep learning approaches in solving rock fragmentation problems." AIMS Mathematics 8, no. 10 (2023): 23900–23940. http://dx.doi.org/10.3934/math.20231219.

Full text
Abstract:
One of the most significant challenges of the mining industry is resource yield estimation from visual data, for example, identification of the rock chunk distribution parameters in an open pit. Solving this task allows one to estimate blasting quality and other parameters of open-pit mining, and it is critical to achieving optimal operational efficiency, reducing costs and maximizing profits in the mining industry. The task is known as rock fragmentation estimation and is typically tackled using computer vision techniques like instance segmentation or semantic segmentation, problems that are often solved using deep convolutional neural networks. One of the key requirements for an industrial application is often the need for real-time operation: fast computation and accurate results are required for practical tasks, so the efficient utilization of computing power to process high-resolution images and large datasets is essential. Our survey is focused on the recent advancements in rock fragmentation, blast quality estimation, particle size distribution estimation and other related tasks. We consider most of the recent results in this field applied to open pits, conveyor belts and other types of work conditions. Most of the reviewed papers cover the period 2018-2023; however, the most significant of the older publications are also considered. The review of publications reveals their specificity, promising trends and best practices in this field. To place the rock fragmentation problems in a broader context and propose future research topics, we also discuss state-of-the-art achievements in real-time computer vision and parallel implementations of neural networks.
APA, Harvard, Vancouver, ISO, and other styles
44

Nixon, J. Sebastian, and Megersa Amenu. "Investigating Security Issues and Preventive Mechanisms in Ipv6 Deployment." International Journal of Advanced Engineering and Nano Technology 9, no. 2 (February 28, 2022): 1–20. http://dx.doi.org/10.35940/ijaent.b0466.029222.

Full text
Abstract:
Internet protocols enable communication between computing devices in computer networks. IPv6 offers additional address space and greater security than IPv4. The transition from IPv4 to IPv6 is accomplished through three primary mechanisms: dual-stack, tunneling, and translation. The IPv6 transition depends on compatibility with the enormous installed base of IPv4 nodes and routers, as well as on maintaining the security of the network against potential threats and vulnerabilities of both Internet protocols. This research identifies potential security issues in the transition mechanisms and proposes prevention mechanisms for the problems identified. The dual-stack and tunneling mechanisms were fully implemented in this research work, and the security test was based on a dual-stack network. A simulation was designed using GNS3, with penetration testing performed using the THC-IPv6 toolkit. After running the simulation, IPv6 in the dual-stack mechanism was identified as vulnerable to DoS via RA flooding and to IPv6 fragmentation attacks, exposing IPv6 security problems. Therefore, IPv6 ACLs and RA guards are proposed to protect against flooding attacks, and VFR should be configured to prevent IPv6 fragmentation attacks.
APA, Harvard, Vancouver, ISO, and other styles
45

Alekseev, Aleksandr V., and Nikolay N. Novitsky. "Information and computing spaces integration for operating enterprises of pipeline systems using “ANGARA” computer technology." E3S Web of Conferences 39 (2018): 04001. http://dx.doi.org/10.1051/e3sconf/20183904001.

Full text
Abstract:
The article describes the level of informatization of enterprises operating pipeline networks. Existing problems are presented; they are related to the fragmentation of information systems and their inability to provide information support for decision-making on the development and operation of pipeline systems. An overview of the software most commonly used to automate the activities of such enterprises is given. The need to create a common information space for an enterprise, and the opportunities it provides, is shown. A brief description of the "ANGARA" information-computing environment and the technology for organizing a common information space on its basis is given. The experience of practical application of the "ANGARA" environment in creating the common information space of the Municipal Unitary Enterprise "Vodocanal" in Irkutsk is described. The diagram of the enterprise's organizational structure shows the workplaces with access to the common information space.
APA, Harvard, Vancouver, ISO, and other styles
46

Coyle, Diane, and David Nguyen. "Cloud Computing, Cross-Border Data Flows and New Challenges for Measurement in Economics." National Institute Economic Review 249 (August 2019): R30—R38. http://dx.doi.org/10.1177/002795011924900112.

Full text
Abstract:
When economists talk about ‘measurement’ they tend to refer to metrics that can capture changes in quantity, quality and distribution of goods and services. In this paper we argue that the digital transformation of the economy, particularly the rise of cloud computing as a general-purpose technology, can pose serious challenges to traditional concepts and practices of economic measurement. In the first part we show how quality-adjusted prices of cloud services have been falling rapidly over the past decade, which is currently not captured by the deflators used in official statistics. We then discuss how this enabled the spread of data-driven business models, while also lowering entry barriers to advanced production techniques such as artificial intelligence or robotic-process-automation. It is likely that these process innovations are not fully measured at present. A final challenge to measurement arises from the fragmentation of value chains across borders and increasing use of intangible intermediate inputs such as intellectual property and data. While digital technologies make it very easy for these types of inputs to be transferred within or between companies, existing economic statistics often fail to capture them at all.
APA, Harvard, Vancouver, ISO, and other styles
47

Takaki, Tomohiro, Shinji Sakane, and Ryosuke Suzuki. "High-performance GPU computing of phase-field lattice Boltzmann simulations for dendrite growth with natural convection." IOP Conference Series: Materials Science and Engineering 1281, no. 1 (May 1, 2023): 012056. http://dx.doi.org/10.1088/1757-899x/1281/1/012056.

Full text
Abstract:
The effect of natural convection on dendrite morphology is investigated through three-dimensional large-scale phase-field lattice Boltzmann simulations using a block-structured adaptive mesh refinement scheme with the mother-leaf method in a parallel-GPU environment. The simulations confirmed that downward buoyancy enhances the growth of the primary and secondary arms, and upward buoyancy delays the growth of those arms. In addition, the effect of natural convection on the solidification morphologies gradually decreased as the primary arm tip reached the top of the computational domain and finally stopped. Furthermore, in the longer simulation under purely isothermal diffusive conditions, detachment of the secondary arms owing to curvature-driven fragmentation was observed. A large-scale non-isothermal dendrite growth simulation was also conducted, wherein it was observed that the tip growth rate of the primary arm was delayed, and the secondary arm spacing was larger than that in the isothermal condition.
APA, Harvard, Vancouver, ISO, and other styles
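The paper's solver couples a phase-field model to the lattice Boltzmann method on GPUs with adaptive mesh refinement; none of that machinery fits in a short example, but the collision and streaming core of the lattice Boltzmann method itself is compact. The following is a minimal pure-Python D2Q9 BGK sketch on a periodic grid, with the relaxation time and grid size chosen arbitrarily for illustration.

    import numpy as np

    # D2Q9 lattice: discrete velocities and weights
    c = np.array([(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
                  (1, 1), (-1, 1), (-1, -1), (1, -1)])
    w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

    def equilibrium(rho, ux, uy):
        # second-order equilibrium distribution
        cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
        usq = 1.5 * (ux ** 2 + uy ** 2)
        return w[:, None, None] * rho * (1 + cu + 0.5 * cu ** 2 - usq)

    def lbm_step(f, tau=0.6):
        rho = f.sum(axis=0)                                   # density
        ux = (f * c[:, 0, None, None]).sum(axis=0) / rho      # x-velocity
        uy = (f * c[:, 1, None, None]).sum(axis=0) / rho      # y-velocity
        f += (equilibrium(rho, ux, uy) - f) / tau             # BGK collision
        for i, (cx, cy) in enumerate(c):                      # periodic streaming
            f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
        return f

    f = equilibrium(np.ones((64, 64)), np.zeros((64, 64)), np.zeros((64, 64)))
    for _ in range(100):
        f = lbm_step(f)

A production dendrite-growth code replaces the uniform grid with block-structured AMR and moves these array operations onto the GPU.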
48

Beyer, Christoph, Stefan Bujack, Stefan Dietrich, Thomas Finnern, Martin Flemming, Patrick Fuhrmann, Martin Gasthuber, et al. "Beyond HEP: Photon and accelerator science computing infrastructure at DESY." EPJ Web of Conferences 245 (2020): 07036. http://dx.doi.org/10.1051/epjconf/202024507036.

Full text
Abstract:
DESY is one of the largest accelerator laboratories in Europe. It develops and operates state-of-the-art accelerators for fundamental science in the areas of high energy physics, photon science and accelerator development. While for decades high energy physics (HEP) has been the most prominent user of the DESY compute, storage and network infrastructure, other scientific areas such as photon science and accelerator development have caught up and now dominate the demands on DESY's infrastructure resources, with significant consequences for IT resource provisioning. In this contribution, we present an overview of the computational, storage and network resources covering the various physics communities on site. These range from high-throughput computing (HTC) batch-like offline processing on the Grid and interactive user analysis resources in the National Analysis Factory (NAF) for the HEP community, to the computing needs of accelerator development and of photon sciences such as PETRA III or the European XFEL. Since DESY is involved in these experiments and their data taking, the requirements include fast, low-latency online processing for data taking and calibration as well as offline processing; these high-performance computing (HPC) workloads run on the dedicated Maxwell HPC cluster. As all communities face significant challenges from changing environments and increasing data rates in the coming years, we discuss how this will be reflected in necessary changes to the computing and storage infrastructures. We present the DESY compute cloud and container orchestration plans as a basis for infrastructure and platform services. We show examples of Jupyter notebooks for small-scale interactive analysis, as well as their integration into large-scale resources such as batch systems or Spark clusters. To overcome the fragmentation of the various resources for all scientific communities at DESY, we explore how to integrate them into a seamless user experience in an Interdisciplinary Data Analysis Facility.
APA, Harvard, Vancouver, ISO, and other styles
49

Zharinov, A. V. "ORGANIZATION OF THE TRANSMISSION OF SIGNALING MESSAGES IN THE COMPUTER NETWORKS OF THE AIRCRAFT." Izvestiya of Samara Scientific Center of the Russian Academy of Sciences 25, no. 1 (2023): 76–82. http://dx.doi.org/10.37313/1990-5378-2023-25-1-76-82.

Full text
Abstract:
This article considers the problem of transmitting signal text messages in the onboard computer networks of an aircraft. It gives a brief overview of promising directions in the design of modern avionics, such as integrated modular avionics and Avionics Full Duplex Switched Ethernet. The article presents a protocol that forms the text of the signal zone shown on the multifunction display in the cockpit and transmits the encoded text using the ARINC 664 (AFDX) protocol. This approach allows the signal-message zone, fully formed in the computing unit, to be transmitted to the multifunction display as a whole. Its advantage is the absence of information fragmentation and, therefore, a higher refresh rate of the information on the display screen.
APA, Harvard, Vancouver, ISO, and other styles
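The key point of the abstract above, that the signal-message zone is formed completely in the computing unit and sent as a single message so that nothing is fragmented on the way to the display, can be sketched in a few lines. AFDX carries UDP/IP traffic, so the sketch below packs a list of signal messages into one UDP datagram; the field layout, size limit, address and port are hypothetical, not taken from ARINC 664 or the article.

    import socket
    import struct

    MAX_PAYLOAD = 1471  # hypothetical per-frame payload limit for the virtual link

    def encode_signal_zone(messages):
        # messages: list of (msg_id, severity, text) tuples making up the zone
        blob = b"".join(
            struct.pack(">HB", msg_id, severity) + text.encode("utf-8") + b"\x00"
            for msg_id, severity, text in messages
        )
        if len(blob) > MAX_PAYLOAD:
            raise ValueError("zone does not fit in one frame; fragmentation needed")
        return blob

    # send the fully formed zone to the display as a single datagram
    zone = encode_signal_zone([(1, 2, "HYD PRESS LOW"), (2, 1, "FUEL IMBALANCE")])
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(zone, ("10.0.0.20", 5005))

Because the whole zone travels in one datagram, the display never has to reassemble partial updates, which is the refresh-rate advantage the abstract describes.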
50

Lubashevskiy, Vasily, Seval Yurtcicek Ozaydin, and Fatih Ozaydin. "Improved Link Entropy with Dynamic Community Number Detection for Quantifying Significance of Edges in Complex Social Networks." Entropy 25, no. 2 (February 16, 2023): 365. http://dx.doi.org/10.3390/e25020365.

Full text
Abstract:
Discovering communities in complex networks is essential for analyses such as the dynamics of political fragmentation and echo chambers in social networks. In this work, we study the problem of quantifying the significance of edges in a complex network and propose a significantly improved version of the Link Entropy method. Using the Louvain, Leiden, and Walktrap methods, our proposal detects the number of communities at each iteration of the community-discovery process. Running experiments on various benchmark networks, we show that our proposed method outperforms the Link Entropy method in quantifying edge significance. Considering also the computational complexities and possible defects, we conclude that the Leiden or Louvain algorithm is the best choice for community-number detection when quantifying edge significance. We also discuss designing a new algorithm that not only discovers the number of communities but also computes the community membership uncertainties.
APA, Harvard, Vancouver, ISO, and other styles
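The dynamic community-number detection step described in the abstract above can be approximated with standard tooling. The sketch below uses NetworkX's Louvain implementation to detect the number of communities from the data and, as a crude stand-in for the paper's Link Entropy measure, reports how removing an edge shifts that number; the actual entropy computation and the Leiden/Walktrap variants are not reproduced here.

    import networkx as nx

    def community_count(G, seed=0):
        # Louvain detects the number of communities from the data,
        # rather than requiring it to be fixed in advance
        return len(nx.community.louvain_communities(G, seed=seed))

    G = nx.karate_club_graph()
    base = community_count(G)

    # crude proxy for edge significance: change in the detected
    # community count when the edge is removed
    for u, v in list(G.edges())[:10]:
        H = G.copy()
        H.remove_edge(u, v)
        print(f"edge {(u, v)}: community-count shift {community_count(H) - base}")

louvain_communities is available in recent NetworkX releases; older environments need the separate python-louvain package instead.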