Academic literature on the topic 'Java optimised processor'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Java optimised processor.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Java optimised processor"

1

Chai, Zhilei, Wenbo Xu, Shiliang Tu, and Zhanglong Chen. "Java Processor Optimized for RTSJ." EURASIP Journal on Embedded Systems 2007 (2007): 1–8. http://dx.doi.org/10.1155/2007/57575.

2

Chai, Zhilei, Wenbo Xu, Shiliang Tu, and Zhanglong Chen. "Java Processor Optimized for RTSJ." EURASIP Journal on Embedded Systems 2007, no. 1 (2007): 057575. http://dx.doi.org/10.1186/1687-3963-2007-057575.

3

Artigas, Pedro V., Manish Gupta, Samuel P. Midkiff, and José E. Moreira. "Automatic Loop Transformations and Parallelization for Java." Parallel Processing Letters 10, no. 02n03 (June 2000): 153–64. http://dx.doi.org/10.1142/s0129626400000160.

Abstract:
This paper describes a prototype Java compiler that achieves performance levels approaching those of current state-of-the-art Fortran compilers on numerical codes. We present a new transformation called alias versioning that takes advantage of the simplicity of pointers in Java. This transformation, combined with other techniques that we have developed, enables the compiler to perform high order loop transformations and parallelization completely automatically. We believe that our compiler is the first to have such capabilities of optimizing numerical Java codes. By exploiting synergies between our compiler and the Array package for Java, we achieve between 80 and 100% of the performance of highly optimized Fortran code in a variety of benchmarks. Furthermore, automatic parallelization achieves speedups of up to 3.8 on four processors. Our compiler technology makes Java a serious contender for implementing numerical applications.
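The alias-versioning transformation summarized in this abstract can be sketched by hand in Java. The sketch below is illustrative only (the class and method names are invented, and the paper's compiler emits such versions automatically): because two Java array references either alias completely or not at all, a single reference comparison selects between an optimizable no-alias loop and a conservative fallback.

```java
// Hand-written illustration of "alias versioning": the compiler emits two
// versions of a loop, guarded by a cheap runtime alias test. In Java the
// test is a single reference comparison, since arrays cannot partially
// overlap. Names here are hypothetical, not from the paper.
public class AliasVersioning {
    static void scaleAdd(double[] dst, double[] src, double k) {
        if (dst != src) {
            // No-alias version: safe to reorder, unroll, or parallelize.
            for (int i = 0; i < dst.length; i++) dst[i] = src[i] * k + dst[i];
        } else {
            // Alias version: conservative sequential loop.
            for (int i = 0; i < dst.length; i++) dst[i] = dst[i] * k + dst[i];
        }
    }
}
```

Both branches compute the same result; the point is that the first branch carries a no-alias guarantee the optimizer can exploit.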
4

Karunanidy, Dinesh, Rajakumar Ramalingam, Ankur Dumka, Rajesh Singh, Ibrahim Alsukayti, Divya Anand, Habib Hamam, and Muhammad Ibrahim. "An Intelligent Optimized Route-Discovery Model for IoT-Based VANETs." Processes 9, no. 12 (December 2, 2021): 2171. http://dx.doi.org/10.3390/pr9122171.

Abstract:
Intelligent transportation systems have become an interesting research area since Internet of Things (IoT)-based sensors were effectively incorporated into vehicular ad hoc networks (VANETs). Optimal route discovery in a VANET plays a vital role in establishing reliable communication in both the uplink and downlink directions, so efficient discovery of a loop-free optimal path makes network communication more efficient. This challenge is addressed by nature-inspired optimization algorithms (NIOAs) because of their simplicity and flexibility for solving different kinds of optimization problems. NIOAs are modeled on natural phenomena and fall under the category of metaheuristic search algorithms. Optimization problems in route discovery are intriguing because the primary objective is to find an optimal arrangement, ordering, or selection process. Therefore, many researchers have proposed different kinds of optimization algorithms to maintain the balance between intensification and diversification. To tackle this problem, we propose a novel Java macaque algorithm based on the genetic and social behavior of Java macaque monkeys. The behavior model mimicked from the Java macaque monkey maintains well-balanced exploration and exploitation in the search process. The experimental outcomes depict the efficiency of the proposed Java macaque algorithm compared to existing algorithms such as the discrete cuckoo search optimization (DCSO) algorithm, grey wolf optimizer (GWO), particle swarm optimization (PSO), and the genetic algorithm (GA).
5

Troudet, Julien, Fred Legendre, and Régine Vignes-Lebbe. "Darwin Core Spatial Processor (DwCSP): a Fast Biodiversity Occurrences Curator." Biodiversity Information Science and Standards 2 (May 22, 2018): e26104. http://dx.doi.org/10.3897/biss.2.26104.

Abstract:
Primary biodiversity data, or occurrence data, are being produced at an increasing rate and are used in numerous studies (Hampton et al. 2013, La Salle et al. 2016). This data avalanche is a remarkable opportunity, but it comes with hurdles. First, available software solutions are rare for very large datasets, and those solutions often require significant computer skills (Gaiji et al. 2013), while most biologists are not formally trained in bioinformatics (List et al. 2017). Second, large datasets are heterogeneous because they come from different producers, and they can contain erroneous data (Gaiji et al. 2013). Hence, they need to be curated. In this context, we developed a biodiversity occurrence curator designed to quickly handle large amounts of data through a simple interface: the Darwin Core Spatial Processor (DwCSP). DwCSP does not require the installation or use of third-party software and has a simple graphical user interface that requires no computer knowledge. DwCSP allows for the data enrichment of biodiversity occurrences and also ensures data quality through outlier detection. For example, the software can enrich a tabulated occurrence file (Darwin Core, for instance) with spatial data from polygon files (e.g., Esri shapefiles) or raster files (GeoTIFF). The speed of the enriching procedures is ensured through multithreading and optimized spatial access methods (R-Tree indexes). DwCSP can also detect and tag outliers based on their geographic coordinates or environmental variables. The first type of outlier detection uses a computed distance between the occurrence and its nearest neighbors, whereas the second type uses a Mahalanobis distance (Mahalanobis 1936). One hundred thousand occurrences can be processed by DwCSP in less than 20 minutes, and another test on forty million occurrences was completed in a few days on a recent personal computer.
DwCSP has an English interface including documentation and will be available as a stand-alone Java Archive (JAR) executable that works on all computers having a Java environment (version 1.8 and onward).
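As a rough illustration of the second outlier test mentioned above (environmental outliers via the Mahalanobis distance), the following Java sketch flags a 2-D point whose squared Mahalanobis distance from the sample mean exceeds a cutoff. All names are hypothetical, and the inverse covariance matrix is assumed to be precomputed; this is not DwCSP's actual implementation.

```java
// Illustrative Mahalanobis-distance outlier check for 2-D points.
// distSq computes (x - mean)^T * invCov * (x - mean), where invCov is the
// inverse of the 2x2 covariance matrix [[a, b], [b, c]].
public class MahalanobisOutlier {
    static double distSq(double[] x, double[] mean, double[][] invCov) {
        double dx = x[0] - mean[0];
        double dy = x[1] - mean[1];
        return dx * (invCov[0][0] * dx + invCov[0][1] * dy)
             + dy * (invCov[1][0] * dx + invCov[1][1] * dy);
    }

    static boolean isOutlier(double[] x, double[] mean, double[][] invCov, double cutoffSq) {
        return distSq(x, mean, invCov) > cutoffSq;
    }

    public static void main(String[] args) {
        double[] mean = {0.0, 0.0};
        // Identity inverse covariance: reduces to squared Euclidean distance.
        double[][] invCov = {{1.0, 0.0}, {0.0, 1.0}};
        System.out.println(distSq(new double[]{3.0, 4.0}, mean, invCov)); // 25.0
        System.out.println(isOutlier(new double[]{3.0, 4.0}, mean, invCov, 9.0)); // true
    }
}
```

With a non-trivial covariance matrix, the same code weights each environmental variable by its variance and correlations, which is what distinguishes this test from a plain nearest-neighbor distance.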
6

Oanta, Emil, Cornel Panait, Gheorghe Lazaroiu, and Anca Elena Dascalescu. "Computer Aided Instrument to be Used as an Automatic Design Component." Advanced Materials Research 1036 (October 2014): 1017–22. http://dx.doi.org/10.4028/www.scientific.net/amr.1036.1017.

Abstract:
The paper presents an original method to enhance the productivity of the design activity, based on a dimensioning stage which integrates information from various sources: tabular data, diagrams, experimental data and others. The integration of such data in an analytic way is based on an original data processor which interpolates data, outputs diagrams, and outputs information regarding the laws of variation, expressed as comma-separated-values data files as well as automatically generated source code libraries in C++, Java and MATLAB/GNU Octave. This automatically generated code is optimized in terms of execution time and has been verified in several applications. The data processor allows the user to visually verify the accuracy of the output data. The paper also presents some of the applications of the data processor: automatic design, a method to enhance the accuracy of computer-based calculation for both analytic and numerical methods, conversion of discontinuous experimental data into laws of variation, and experimental data processing in applied elasticity. There are several directions for future research, such as a more general mathematical approach and the development of a more general computer-based instrument with increased flexibility and accuracy. Also, complex software for applied elasticity calculations based on the results of the data processor is under development. Implementation of these applications in commercial CAD/CAE applications is another direction of research.
7

Patil, Shubangini, and Rekha Patil. "An optimized and efficient multiuser data sharing using the selection scheme design secure approach and federated learning in cloud environment." International Journal of Pervasive Computing and Communications, June 22, 2022. http://dx.doi.org/10.1108/ijpcc-02-2022-0047.

Abstract:
Purpose: A great deal of prior work has aimed to secure and preserve data passed from one user to another, including third-party auditing and key-generation schemes based on encryption algorithms such as Rivest–Shamir–Adleman. Some of the related prior works are as follows. The remote damage control resuscitation (RDCR) scheme by Yan et al. (2017) is based on minimum bandwidth and enables a third party to perform public integrity verification. Although it supports repair management for corrupted data and tries to recover the original data, in practice it fails to do so, and it therefore incurs more computation and communication cost than our proposed system. Chen et al. (2015) developed an idea for cloud storage data sharing using broadcast encryption. This technique aims to accomplish both broadcast data and dynamic sharing, allowing users to join and leave a group without affecting the electronic press kit (EPK). The theoretical notion was sound and new, but the system's practicality and efficiency were not acceptable, and its security was also jeopardised because it proposed adding a member without altering any keys. Jiang and Guo (2017) investigated an identity-based encryption strategy for data sharing, along with key management and metadata techniques to improve model security; forward and reverse ciphertext security is supplied. However, it is more difficult to put into practice, and one of its limitations is that it can only be used for very large amounts of cloud storage. It extends support for dynamic data modification by batch auditing.
The important feature of the secure and efficient privacy-preserving provable data possession scheme for cloud storage was to support every important property, including data dynamics, privacy preservation, batch auditing and blockless verification, for an untrusted, outsourced storage model (Pathare and Chouragade, 2017). A homomorphic signature mechanism based on a new ID was devised to avoid the use of public key certificates. This signature system was shown to be resistant to the ID attack in the random oracle model and to forged-message attacks (Nayak and Tripathy, 2018; Lin et al., 2017). When storing data in a public cloud, one issue is that the data owner must give an enormous number of keys to the users in order for them to access the files. Here, the key-aggregate searchable encryption (KASE) scheme was first unveiled: while sharing a huge number of documents, the data owner simply has to supply a specific key to the user, and the user only needs to provide a single trapdoor. Although the concept is innovative, the KASE technique does not apply to the increasingly common manufactured cloud. Cui et al. (2016) claim that as the amount of data grows, the distribution management system (DMS) will be unable to handle it. As a result, various provable data possession (PDP) schemes have been developed, and practically all the data lacks security. So, certificate-based PDP built on bilinear pairing was introduced; because it is robust as well as efficient, it is mostly applicable in DMS. The main purpose of this research is to design and implement a secure cloud infrastructure for sharing group data. This research provides an efficient and secure protocol for multiple-user data in the cloud, allowing many users to easily share data. Design/methodology/approach: The methodology and contribution of this paper are as follows.
The major goal of this study is to design and implement a secure cloud infrastructure for sharing group data, with an efficient and secure protocol that allows numerous users to exchange data without difficulty. The selection scheme design (SSD) comprises two algorithms: the first is designed for a limited number of users, and the second is redesigned for multiple users. Further, the authors design the SSD-security protocol, a three-phase model comprising Phase 1, Phase 2 and Phase 3. Phase 1 generates the parameters and distributes the private key, the second phase generates the general key for all available users, and the third phase is designed to prevent dishonest users from taking part in data sharing. Findings: Data sharing in cloud computing provides unlimited computational resources and storage to enterprises and individuals; moreover, cloud computing leads to several privacy and security concerns such as fault tolerance, reliability, confidentiality and data integrity. Furthermore, the key consensus mechanism is a fundamental cryptographic primitive for secure communication; motivated by this, the authors developed the SSD mechanism, which embraces multiple users in the data-sharing model. Originality/value: Files shared in the cloud should be encrypted for security; later these files are decrypted for the users to access. The authors devised the SSD mechanism, which incorporates numerous users in the data-sharing model.
For evaluation of the SSD method, the authors considered an ideal environment: Java as the programming language and Eclipse as the integrated development environment (IDE) for the evaluation of the proposed model. The hardware configuration comprises 4 GB RAM and an i7 processor, and the authors used the PBC library for the pairing operations (PBC Library, 2022). Furthermore, in the following section of the paper, the number of users is varied for comparison with the existing methodology RDIC (Li et al., 2020). For the purposes of the SSD-security protocol, a prime number is chosen as the number of users in this work.

Book chapters on the topic "Java optimised processor"

1

Schoeberl, Martin. "JOP: A Java Optimized Processor." In On The Move to Meaningful Internet Systems 2003: OTM 2003 Workshops, 346–59. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-39962-9_43.

2

Wu, Fan, Ran Sun, Wenhao Fan, Yuan'An Liu, Feng Liu, and Hui Lu. "A Privacy Protection Approach Based on Android Application's Runtime Behavior Monitor and Control." In Cyber Warfare and Terrorism, 342–61. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-2466-4.ch022.

Abstract:
This article proposes a system that focuses on Android application runtime behavior forensics. Using Linux processes, dynamic injection and a Java function hook technology, the system is able to manipulate the runtime behavior of applications without modifying the Android framework or the application's source code. Based on this method, a privacy data protection policy that reflects users' intentions is proposed by extracting and recording the privacy data usage in applications. Moreover, an optimized random forest algorithm is proposed to reduce the policy training time. The results show that the system realizes the functions of application runtime behavior monitoring and control. An experiment on 134 widely used applications shows that the basic privacy policy satisfies the majority of users' privacy intentions.
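The monitor-and-control pattern described in this abstract can be approximated on a standard JVM with a dynamic proxy: intercept calls to a "privacy data" source, record each access, then forward it. This is only a toy stand-in under stated assumptions (the real system hooks Android application methods via process injection, not JDK proxies), and all names below are illustrative.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

// Toy illustration of method hooking: wrap a privacy-sensitive interface in
// a dynamic proxy that audits every call before delegating to the real object.
public class HookDemo {
    interface ContactsSource { String readContacts(); }

    static final List<String> audit = new ArrayList<>();

    static ContactsSource monitored(ContactsSource real) {
        InvocationHandler h = (proxy, method, args) -> {
            audit.add("call:" + method.getName()); // record the access
            return method.invoke(real, args);      // then let it through
        };
        return (ContactsSource) Proxy.newProxyInstance(
                ContactsSource.class.getClassLoader(),
                new Class<?>[]{ContactsSource.class}, h);
    }

    public static void main(String[] args) {
        ContactsSource src = monitored(() -> "alice,bob");
        System.out.println(src.readContacts()); // alice,bob
        System.out.println(audit);              // [call:readContacts]
    }
}
```

A policy engine of the kind the paper proposes would sit inside the handler, deciding per call whether to forward, block, or return dummy data.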
3

Sawlikar, Alka Prasad, Zafar Jawed Khan, and Sudhir Gangadharrao Akojwar. "Efficient Energy Saving Cryptographic Techniques with Software Solution in Wireless Network." In Cryptography, 159–79. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-1763-5.ch010.

Abstract:
To reduce communication costs and to protect data from eavesdropping and unauthorized users, cryptographic algorithms are used. A cryptographic module has to be developed that combines the operations of compression and encryption synchronously on a file. The information file is first pre-processed and then converted into an intermediary form so that it can be compressed with better efficiency and security. In this paper, an optimized coding technique that addresses both issues, size and security, is introduced and characterized experimentally in Java, in which a file of any length can be compressed and encrypted using a new encryption technique; a novel energy-saving technique for wireless communication networks with an efficient hardware solution is also presented. To improve the strength and capability of the algorithms and to compress the transmitted data, an intelligent and reversible conversion technique is applied.
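A minimal Java sketch of the compress-then-encrypt pipeline discussed above, built only from the standard library (Deflate compression plus AES). This is an assumed toy arrangement, not the authors' technique; in particular, the hard-coded key and ECB mode are for demonstration only and are not secure practice.

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.Deflater;
import java.util.zip.Inflater;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

// Compress a byte stream with Deflate, then encrypt it with AES-128,
// and reverse both steps on the receiving side.
public class CompressThenEncrypt {
    static byte[] deflate(byte[] in) throws Exception {
        Deflater d = new Deflater();
        d.setInput(in);
        d.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[256];
        while (!d.finished()) out.write(buf, 0, d.deflate(buf));
        return out.toByteArray();
    }

    static byte[] inflate(byte[] in) throws Exception {
        Inflater inf = new Inflater();
        inf.setInput(in);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[256];
        while (!inf.finished()) out.write(buf, 0, inf.inflate(buf));
        return out.toByteArray();
    }

    static byte[] crypt(int mode, byte[] key16, byte[] data) throws Exception {
        // ECB with a hard-coded key: DEMO ONLY, never use in production.
        Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
        c.init(mode, new SecretKeySpec(key16, "AES"));
        return c.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "0123456789abcdef".getBytes(StandardCharsets.US_ASCII);
        byte[] plain = "compress, then encrypt".getBytes(StandardCharsets.UTF_8);
        byte[] wire = crypt(Cipher.ENCRYPT_MODE, key, deflate(plain));
        byte[] back = inflate(crypt(Cipher.DECRYPT_MODE, key, wire));
        System.out.println(new String(back, StandardCharsets.UTF_8));
    }
}
```

Compressing before encrypting matters: ciphertext is close to random and does not compress, so reversing the order would forfeit the size reduction.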
4

Sibalija, Tatjana. "Parametric Optimization of Integrated Circuit Assembly Process: An Evolutionary Computing-Based Approach." In Proceedings of CECNet 2021. IOS Press, 2021. http://dx.doi.org/10.3233/faia210408.

Abstract:
Strict demands for very tight tolerances and increasing complexity in semiconductor assembly impose a need for an accurate parametric design that deals with multiple conflicting requirements. This paper presents the application of an advanced optimization methodology, based on evolutionary algorithms (EAs), to two studies addressing parametric optimization of the wire bonding process in semiconductor assembly. The methodology involves statistical pre-processing of the experimental data, followed by accurate process modeling with artificial neural networks (ANNs). Using the neural model, the process parameters are optimized by four metaheuristics: the two most commonly used algorithms, genetic algorithm (GA) and simulated annealing (SA), and two newly designed algorithms that have rarely been utilized in semiconductor assembly optimization, teaching-learning-based optimization (TLBO) and the Jaya algorithm. The performance of the four algorithms in the two wire bonding studies is benchmarked, considering the accuracy of the obtained solutions and the convergence rate. In addition, the influence of the algorithm hyper-parameters on the algorithms' effectiveness is rigorously discussed, and directions for algorithm selection and settings are suggested. The results from the two studies clearly indicate the superiority of the TLBO and Jaya algorithms over GA and SA, especially in terms of solution accuracy and built-in algorithm robustness. Furthermore, the proposed evolutionary computing-based optimization methodology significantly outperforms four frequently used methods from the literature, explicitly demonstrating effectiveness and accuracy in locating the global optimum for delicate optimization problems.
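Of the four metaheuristics benchmarked above, the Jaya algorithm is the simplest to state: every candidate moves toward the current best solution and away from the current worst, with no algorithm-specific hyper-parameters. A minimal Java sketch on the sphere test function follows; the population size, variable ranges and iteration count are illustrative, not the paper's settings.

```java
import java.util.Random;

// Minimal Jaya algorithm sketch minimizing the sphere function sum(x_i^2).
// Jaya update: x' = x + r1*(best - |x|) - r2*(worst - |x|), with greedy acceptance.
public class JayaSketch {
    static double sphere(double[] x) {
        double s = 0;
        for (double v : x) s += v * v;
        return s;
    }

    // Returns {best fitness of the initial population, best fitness at the end}.
    static double[] run(long seed, int pop, int dim, int iters) {
        Random rnd = new Random(seed);
        double[][] X = new double[pop][dim];
        for (double[] xi : X)
            for (int j = 0; j < dim; j++) xi[j] = rnd.nextDouble() * 10 - 5;
        double initBest = bestOf(X);
        for (int it = 0; it < iters; it++) {
            int bi = 0, wi = 0; // indices of best and worst candidates
            for (int i = 1; i < pop; i++) {
                if (sphere(X[i]) < sphere(X[bi])) bi = i;
                if (sphere(X[i]) > sphere(X[wi])) wi = i;
            }
            for (int i = 0; i < pop; i++) {
                double[] cand = new double[dim];
                for (int j = 0; j < dim; j++) {
                    // Drift toward the best, away from the worst.
                    cand[j] = X[i][j]
                            + rnd.nextDouble() * (X[bi][j] - Math.abs(X[i][j]))
                            - rnd.nextDouble() * (X[wi][j] - Math.abs(X[i][j]));
                }
                if (sphere(cand) < sphere(X[i])) X[i] = cand; // greedy acceptance
            }
        }
        return new double[]{initBest, bestOf(X)};
    }

    static double bestOf(double[][] X) {
        double b = sphere(X[0]);
        for (double[] xi : X) b = Math.min(b, sphere(xi));
        return b;
    }
}
```

Because each candidate only accepts improving moves, the population's best fitness is monotonically non-increasing, which is part of the "built-in robustness" the abstract attributes to Jaya and TLBO.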
5

Basurto-Pensado, Miguel, Carlos Alberto Ochoa Ortiz Zezzatti, Rosenberg Romero, Jesús Escobedo-Alatorre, Jessica Morales-Valladares, Arturo García-Arias, and Margarita Tecpoyotl Torres. "Optical Application improved with Logistics of Artificial Intelligent and Electronic Systems." In Logistics Management and Optimization through Hybrid Artificial Intelligence Systems, 439–55. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-0297-7.ch017.

Abstract:
Computer science and electronics have a strong impact on several research areas; optics and photonics are not the exception. The utilization of computers, electronic systems, and devices has allowed the authors to develop several projects to control processes. A description of the computer tool called Laser Micro-Lithography (LML), used to characterize materials, is given. Reasoning Based on Cases (RBC) and its implementation in the software using Java are presented. In order to guarantee the lithography precision, a control system based on a microcontroller was developed and coupled to the mechanical system. An alternative to LML, considering the use of a Personal Digital Assistant (PDA) instead of a Personal Computer (PC), is described; in this case, the C language is used for programming. RBC optimizes the materials characterization, recovering information about materials previously characterized. The communication between the PDA and the displacement table is achieved by means of a system based on a DSPIC microcontroller. The developed computer tool permits obtaining lithography with channels narrower than an optical fiber, with minimum equipment. The development of irradiance meters based on electronic automation is shown; this section includes the basic theoretical concepts, the experimental device design and the experimental results. Future research trends are presented and, as a consequence of the developed work, perspectives on micro drilling and cutting are also analyzed.

Conference papers on the topic "Java optimised processor"

1

Chai, Zhilei, Wenke Zhao, and Wenbo Xu. "Real-time Java processor optimized for RTSJ." In Proceedings of the 2007 ACM Symposium on Applied Computing. New York, New York, USA: ACM Press, 2007. http://dx.doi.org/10.1145/1244002.1244332.

2

Chai, Zhilei, and Mi Zhang. "Memory Management in Java Processor Optimized for RTSJ." In 2009 International Conference on Computational Intelligence and Software Engineering. IEEE, 2009. http://dx.doi.org/10.1109/cise.2009.5365442.

3

Silveira, Jardel, David Viana, Helano Castro, Alexandre Coelho, and Jarbas Silveira. "Técnica de Proteção de Bytecodes para Processador Java em Tecnologia CMOS." In Simpósio em Sistemas Computacionais de Alto Desempenho. Sociedade Brasileira de Computação, 2009. http://dx.doi.org/10.5753/wscad.2009.17405.

Abstract:
The JOP (Java Optimized Processor) soft core for FPGAs (Field Programmable Gate Arrays) is an optimized hardware implementation of the Java virtual machine for real-time applications. However, this processor's architecture does not include fault-tolerance techniques. The work described in this article is part of a larger effort to make JOP a fault-tolerant processor. In this article, we present the results of applying a fault-tolerance technique, memory protection through ECC (Error Correction Code), to the JOP soft core, which detects and corrects errors in the code area of the SRAM (Static Random Access Memory). The occurrence of a fault is signaled at the system level through an exception, a feature available in the Java language. This article presents novel results, insofar as no other real-time, fault-tolerant Java processor has been reported in the literature.
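The protection described here is a hardware ECC on the SRAM code area. Purely as a software illustration of the same single-error-correction principle, the following Java sketch implements a Hamming(7,4) code that detects and corrects any single flipped bit in a 7-bit codeword; the bit layout and names are illustrative, not JOP's actual encoding.

```java
// Hamming(7,4) single-error-correcting code.
// Codeword positions 1..7 hold p1 p2 d1 p3 d2 d3 d4; bit i of the int is position i+1.
public class Hamming74 {
    // Encode 4 data bits (LSB first) into a 7-bit codeword.
    static int encode(int nibble) {
        int d1 = nibble & 1, d2 = (nibble >> 1) & 1, d3 = (nibble >> 2) & 1, d4 = (nibble >> 3) & 1;
        int p1 = d1 ^ d2 ^ d4; // covers positions 3, 5, 7
        int p2 = d1 ^ d3 ^ d4; // covers positions 3, 6, 7
        int p3 = d2 ^ d3 ^ d4; // covers positions 5, 6, 7
        return p1 | (p2 << 1) | (d1 << 2) | (p3 << 3) | (d2 << 4) | (d3 << 5) | (d4 << 6);
    }

    // Correct a single flipped bit (if any) and return the 4 data bits.
    static int decode(int cw) {
        int s1 = bit(cw, 1) ^ bit(cw, 3) ^ bit(cw, 5) ^ bit(cw, 7);
        int s2 = bit(cw, 2) ^ bit(cw, 3) ^ bit(cw, 6) ^ bit(cw, 7);
        int s3 = bit(cw, 4) ^ bit(cw, 5) ^ bit(cw, 6) ^ bit(cw, 7);
        int syndrome = s1 | (s2 << 1) | (s3 << 2); // 0 = clean; else faulty position
        if (syndrome != 0) cw ^= 1 << (syndrome - 1);
        return bit(cw, 3) | (bit(cw, 5) << 1) | (bit(cw, 6) << 2) | (bit(cw, 7) << 3);
    }

    static int bit(int w, int pos) { return (w >> (pos - 1)) & 1; }

    public static void main(String[] args) {
        int cw = encode(0b1011);
        int corrupted = cw ^ (1 << 4); // flip codeword position 5
        System.out.println(decode(corrupted)); // 11
    }
}
```

In the paper's design, the analogous correction happens in hardware on every instruction fetch, and an uncorrectable fault is surfaced to software as a Java exception.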
4

Khazaee, Mohammad Erfan, and Shima Hoseinzadeh. "Using Java optimized processor as an intellectual property core beside a RISC processor in FPGA." In 2014 East-West Design & Test Symposium (EWDTS). IEEE, 2014. http://dx.doi.org/10.1109/ewdts.2014.7027048.

5

Makida, Mitsumasa, Naoki Nakamura, and Osamu Nozaki. "Utilization of Cold-Flow Numerical Simulation With Overset Grid Method in Development Process of Aeroengine Combustor." In ASME Turbo Expo 2010: Power for Land, Sea, and Air. ASMEDC, 2010. http://dx.doi.org/10.1115/gt2010-23089.

Abstract:
In the TechCLEAN project of JAXA, a combustor for a small aircraft engine has been developed. The combustor was tuned to show the behavior of Rich-Lean combustion through combustion tests under atmospheric and practical conditions. In the development process of the combustor, numerical simulation methods were also utilized as analysis tools to accelerate development. For use in the screening process of the combustor design, we focused on cost-effective simulation methods and adopted the cold-flow RANS simulation code UPACS, which has been developed at JAXA. To simplify the treatment of calculation grids for the combustor's complicated configuration, we also utilized a combination of the overset grid method and the attached multi-block grid method. This simulation method was applied to three phases in the combustor development process: first to the analysis of the combustor configuration to adjust the overall pressure loss, secondly to the analysis of flame stability, and thirdly to the tuning of the air flow ratio to optimize the emission characteristics of full annular combustors. Finally, the full annular combustor was successfully tuned to reduce NOx emissions to 38.1% of the ICAO CAEP4 standard under ICAO LTO cycles, while sustaining its basic performance as an aircraft combustor.
6

Izumo, Sari, Hideo Usui, Mitsuo Tachibana, Yasuyuki Morimoto, Nobuo Takahashi, Takashi Tokuyasu, Yoshio Tanaka, and Noritake Sugitsue. "Development of Evaluation Models of Manpower Needs for Dismantling the Dry Conversion Process-Related Equipment in Uranium Refining and Conversion Plant (URCP)." In ASME 2013 15th International Conference on Environmental Remediation and Radioactive Waste Management. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/icem2013-96097.

Abstract:
Evaluation models for determining the manpower needs for dismantling various types of equipment in uranium refining and conversion plant (URCP) have been developed. The models are widely applicable to other uranium handling facilities. Additionally, a simplified model was developed for easily and accurately calculating the manpower needs for dismantling dry conversion process–related equipment (DP equipment). It is important to evaluate beforehand project management data such as manpower needs to prepare an optimized decommissioning plan and implement effective dismantling activity. The Japan Atomic Energy Agency (JAEA) has developed the project management data evaluation system for dismantling activities (PRODIA code), which can generate project management data using evaluation models. For preparing an optimized decommissioning plan, these evaluation models should be established based on the type of nuclear facility and actual dismantling data. In URCP, the dry conversion process of reprocessed uranium and others was operated until 1999, and the equipment related to the main process was dismantled from 2008 to 2011. Actual data such as manpower for dismantling were collected during the dismantling activities, and evaluation models were developed using the collected actual data on the basis of equipment classification considering the characteristics of uranium handling facility.
7

Albers, Albert, Noel Leon, Humberto Aguayo, and Thomas Maier. "Multi-Objective System Optimization of Engine Crankshafts Using an Integration Approach." In ASME 2008 International Mechanical Engineering Congress and Exposition. ASMEDC, 2008. http://dx.doi.org/10.1115/imece2008-67447.

Abstract:
The ever-increasing computer capabilities allow faster analysis in the field of Computer Aided Design and Engineering (CAD & CAE). CAD and CAE systems are currently used in parametric and structural optimization to find optimal topologies and shapes of given parts under certain conditions. This paper describes a general strategy to optimize the balance of a crankshaft, using CAD and CAE software integrated with genetic algorithms (GAs) via programming in Java. An introduction to the foundations of this strategy is given, along with the different tools used for its implementation. The analyzed crankshaft is modeled in commercial parametric 3D CAD software. CAD is used for evaluating the fitness function (the balance) and for making geometric modifications; CAE is used for evaluating dynamic restrictions (the eigenfrequencies). A Java interface is programmed to link the CAD model to the CAE software and to the genetic algorithms. In order to make geometry modifications to our case study, it was decided to substitute the profile of the counterweights with splines, replacing the original "arc-shaped" design. The variation of the splined profile via control points results in an imbalance response. The imbalance of the crankshaft was defined as an independent objective function during a first approach, followed by a Pareto optimization of the imbalance from both correction planes, plus the curvature of the profile of the counterweights as restrictions for material flow during forging. The natural frequency was considered as an additional objective function during a second approach. The optimization process runs fully automated, and the CAD program is on hold waiting for a new set of parameters to receive and process, saving computing time which is otherwise lost during the repeated startup of the CAD application.
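The CAD/CAE-in-the-loop optimization described above ultimately reduces to a GA proposing parameter vectors and an external model scoring them. The following minimal real-coded GA sketch uses a stand-in "imbalance" objective in place of the CAD evaluation; every name, constant and operator choice here is illustrative, not taken from the paper.

```java
import java.util.Random;

// Minimal real-coded GA: tournament selection, arithmetic crossover,
// Gaussian mutation, and elitism. The fitness is a stand-in "imbalance"
// measure (distance of the parameter sum from a hypothetical target);
// in the paper's setup this value would come from the CAD model instead.
public class GaSketch {
    static final Random RND = new Random(7);

    static double imbalance(double[] x) {
        double s = 0;
        for (double v : x) s += v;
        return Math.abs(s - 10.0); // hypothetical target balance value
    }

    static double[] run(int pop, int dim, int gens) {
        double[][] P = new double[pop][dim];
        for (double[] ind : P)
            for (int j = 0; j < dim; j++) ind[j] = RND.nextDouble() * 5;
        double[] best = P[0].clone();
        for (int g = 0; g < gens; g++) {
            for (double[] ind : P)
                if (imbalance(ind) < imbalance(best)) best = ind.clone();
            double[][] next = new double[pop][];
            next[0] = best.clone(); // elitism: the best individual survives
            for (int i = 1; i < pop; i++) {
                double[] a = tournament(P), b = tournament(P), child = new double[dim];
                double w = RND.nextDouble(); // arithmetic crossover weight
                for (int j = 0; j < dim; j++) child[j] = w * a[j] + (1 - w) * b[j];
                child[RND.nextInt(dim)] += RND.nextGaussian() * 0.1; // mutate one gene
                next[i] = child;
            }
            P = next;
        }
        for (double[] ind : P)
            if (imbalance(ind) < imbalance(best)) best = ind.clone();
        return best;
    }

    static double[] tournament(double[][] P) {
        double[] a = P[RND.nextInt(P.length)], b = P[RND.nextInt(P.length)];
        return imbalance(a) <= imbalance(b) ? a : b;
    }
}
```

Swapping `imbalance` for a call into an external CAD/CAE evaluator is the only change needed to reproduce the paper's architecture, which is exactly why the authors route the objective through a Java interface.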
8

Kurniawan, Hendri. "Various Alertness Cognitive Stimulation on the Optimization of Motor Recovery Among Post Stroke Patients in Neurorehabilitation Program." In The 7th International Conference on Public Health 2020. Masters Program in Public Health, Universitas Sebelas Maret, 2020. http://dx.doi.org/10.26911/the7thicph.05.06.

Abstract:
Background: Stroke affects the patient's cognitive ability. Cognitive ability also underlies the motor and functional recovery processes of post-stroke patients. Various Alertness Cognitive Stimulation (VACS) is a computer-based cognitive exercise that combines cognitive and motor components. This study aimed to examine the effect of VACS on the optimization of motor recovery among post-stroke patients. Subjects and Methods: This was an experimental study with a pre-test and post-test control design, conducted at the Mandiri Center and Neurorehabilitation Clinics in November-December 2019. A total of 40 post-stroke patients were enrolled and divided into 20 patients in the control group and 20 patients in the treatment group. The dependent variables were cognitive and motor skills; the independent variable was VACS. Data were collected using Cognistat and the Modified Motor Assessment Scale (MMAS) and analyzed by means of a comparative test. Results: The post-intervention MMAS score in the treatment group (Mean=3.40; SD=1.88) was higher than in the control group (Mean=2.25; SD=1.68). The motor ability of the treatment group was significantly different from that of the control group (t=2.04; p=0.048). Conclusion: VACS can optimize the motor recovery of post-stroke patients in a neurorehabilitation program. Keywords: cognitive training, VACS, MMAS
APA, Harvard, Vancouver, ISO, and other styles
9

Makida, Mitsumasa, Hideshi Yamada, Kazuo Shimodaira, Seiji Yoshida, Yoji Kurosawa, and Takeshi Yamamoto. "Verification of Low NOx Performance of Simple Primary Rich Combustion Approach by a Newly Established Full Annular Combustor Test Facility." In ASME Turbo Expo 2008: Power for Land, Sea, and Air. ASMEDC, 2008. http://dx.doi.org/10.1115/gt2008-51419.

Full text
Abstract:
In the TechCLEAN project of JAXA, experimental research has been conducted to develop a combustor for a small aircraft engine (with a pressure ratio of about 20). The combustor was tuned to exhibit Rich-Lean combustion behavior through tests under atmospheric and practical conditions, and in 2006 a multi-sector combustor design reduced NOx emissions to below 42% of the ICAO CAEP4 standard. Based on the tuned combustor, full annular combustors were designed. In parallel, an experimental facility to test the full annular combustors under practical conditions was newly constructed in the spring of 2007. The inlet air conditions were set to the ICAO LTO cycle conditions of the target engine: 0.3–1.8 MPa pressure, 400–700 K temperature, and 4–18 kg/s air mass flow rate. Through full annular combustion experiments under practical conditions, the combustors were tuned to maintain the good combustion performance already verified with the multi-sector combustors. The optimized full annular combustor finally achieved the following performance: NOx emissions were reduced to below 40% of the ICAO CAEP4 standard while maintaining low CO and THC emissions, good exit temperature profiles (P.T.F. = 0.19 at the take-off condition), and good lean blow-out performance (AFR > 200 at the idle condition). The process of the optimization is discussed in this report.
APA, Harvard, Vancouver, ISO, and other styles
10

Asadi, Sadegh, Riezal Arieffiandhany, Priantoro Setiawan, Hendro Vico, Christine Lorita, Achmad Mansur, Reynaldi Chrislianto, and Gunawan Sucahyo. "Integrating Advanced Acoustic Measurement and Geomechanics with Hydraulic Fracturing Field Data Helped to Improve Hydraulic Fracture Geometry Characterization and Increase Productivity." In Offshore Technology Conference Asia. OTC, 2022. http://dx.doi.org/10.4043/31647-ms.

Full text
Abstract:
Hydraulic fracturing optimisation for tight sandstone requires a reliable geomechanical model of the reservoirs and bounding formations to achieve optimum production after fracturing. This paper presents a case study of the Upper Cibulakan tight sandstone reservoirs in an oil field located in Offshore Northwest Java, Indonesia. The field structure is composed of multiple reservoir sandstones with interlayer shales. Two sandstone units with gross thicknesses up to 60 feet, effective porosity of 15%, and permeability of 8 mD were targeted for hydraulic fracturing. An integrated approach is proposed that uses available offset well data, real-time acoustic logs, a calibrated geomechanical model, and miniFrac and step-rate tests to optimise hydraulic fracturing parameters and the treatment schedule. In the pre-fracturing stage, a geomechanical model was developed for the target intervals using offset well data, including fracture closure pressures from past miniFrac tests. To estimate the Young's modulus and Poisson's ratio of the reservoir and bounding formations, compressional and dipole shear wave slowness logs as well as bulk density logs from offset wells were used. The poroelastic minimum horizontal stress in the sandstone intervals was calibrated with closure pressure data, while the bounding shale stress was calibrated with regional leak-off pressures. The final stress model of the offset wells was verified against borehole conditions and drilling experience. The target well for hydraulic fracturing was drilled with a 12¼-in wellbore, deviated 45 degrees and oriented sub-parallel to the maximum horizontal stress azimuth (north-south). Processed acoustic logs were used to revise the pre-frac rock mechanical properties, which confirmed the low range of static Young's modulus. Analysis of mini fall-off tests revealed reservoir pressure depletion of ~250 psi, which had not been captured by the offset wells' pore pressure data. The pore pressure profile across the reservoirs was modified and depletion-induced poroelastic stresses were estimated. The stress profile, calibrated with actual closure pressure data from the miniFrac test and integrated with the actual reservoir pressure, revealed a stress contrast of up to ~350 psi between the reservoir sandstones and bounding shales, which is favorable for fracture containment. The calibrated geomechanics model was used to update the treatment schedule for the main hydraulic fracturing, including optimisation of the size, volume, and concentration of injected proppants and the volume of fracturing fluid. Integrating geomechanics modelling with acoustic logging and fracturing design enabled a successful hydraulic fracturing stimulation that exceeded the planned production rate. A post-fracturing production test showed an initial rate approximately 50 barrels of oil per day (bbl/d) higher than the expected production rate from the stimulated reservoir volume. The calibrated geomechanics model provided valuable inputs for proppant size and conductivity optimisation to reduce the effects of proppant embedment, as well as proper estimation of injected proppant volume based on a robust minimum horizontal stress profile to minimize the risk of unwanted vertical fracture propagation into other zones such as water zones.
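The depletion-induced stress change mentioned in this abstract is commonly estimated with the uniaxial-strain poroelastic relation, Δσh = α·ΔPp·(1 − 2ν)/(1 − ν). The sketch below is only an illustration of that standard relation, not the authors' calibrated model; the Biot coefficient and Poisson's ratio are assumed illustrative values, with only the ~250 psi depletion taken from the abstract.

```python
def depletion_stress_change(dP_psi, biot=1.0, nu=0.25):
    """Uniaxial-strain poroelastic change in minimum horizontal stress (psi)
    caused by a pore-pressure change dP_psi (psi)."""
    return biot * dP_psi * (1 - 2 * nu) / (1 - nu)

# ~250 psi of depletion was observed in the mini fall-off analysis;
# biot = 1.0 and nu = 0.25 are illustrative assumptions, not field values.
print(round(depletion_stress_change(250.0), 1))  # → 166.7 psi of stress reduction
```

Under these assumed elastic parameters, roughly two-thirds of the pore-pressure depletion translates into a reduction of the minimum horizontal stress, which is why ignoring depletion (as the offset-well data did) can noticeably misplace the closure pressure used in treatment design.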
APA, Harvard, Vancouver, ISO, and other styles
