Selection of scientific literature on the topic "In second part there is analyzed company and information system"

Consult the lists of current articles, books, dissertations, reports, and other scientific sources on the topic "In second part there is analyzed company and information system".

Next to every entry in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication in PDF format and read an online annotation of the work, provided the relevant parameters are available in the metadata.

Journal articles on the topic "In second part there is analyzed company and information system"

1. Kuś, Agnieszka, and Paula Pypłacz. "The Importance of Information Management in the Context of Industry 4.0: Evidence from the Kuyavian-Pomeranian Forbes Diamonds." Social Sciences 8, no. 6 (June 1, 2019): 169. http://dx.doi.org/10.3390/socsci8060169.

Abstract:
Currently, the role of information, and of the tools facilitating its acquisition and processing, is so significant that economic nomenclature already includes such concepts as the information civilization or the information society. The consequence of this state of affairs is a commercial breakthrough: an enterprise's success rests on intellectual rather than material capital. This article aims to outline the information management process and to determine how its components are used in companies. The first part of the paper is devoted to the description of information management concepts in enterprises and the functioning of organizational information systems. The second part of the study includes the results of original research carried out using the Computer-Assisted Web Interview (CAWI) method on 70 Kuyavian-Pomeranian enterprises counted among the Forbes Diamonds 2018. An analysis of the obtained results indicates that the level of diversity of methods and the frequency of use of information channels depend on the size of the entity, its capital structure, and the industry in which it operates. The primary sources of information for the surveyed companies are customers and competitors. The respondents agree that a well-functioning enterprise information system facilitates decision-making in the company and improves internal communication. The most frequently implemented development strategy in the analyzed companies is the market development strategy.

2. Hlavatý, Daniel, and Jakub Kraus. "Safety of Cargo Aircraft Handling Procedure." MAD - Magazine of Aviation Development 5, no. 3 (July 18, 2017): 13. http://dx.doi.org/10.14311/mad.2017.03.02.

Abstract:
The aim of this paper is to examine ways to improve the safety management system during cargo aircraft handling. The first chapter is dedicated to general information about air cargo transportation, including the history and types of cargo aircraft handling as well as the means of handling. The second part focuses on a detailed description of cargo aircraft handling, including a description of the activities performed before and after handling. The following part covers a theoretical interpretation of safety, safety indicators, and the legislative provisions related to the safety of cargo aircraft handling. The fourth part analyzes the fault trees of events which might occur during handling. The factors found by this analysis are compared with safety reports from FedEx. Based on the comparison, a proposal is made on how to improve safety management in this transportation company.

3. Zamyshliaev, A. M. "Premises of the creation of a digital traffic safety management system." Dependability 19, no. 4 (December 17, 2019): 45–52. http://dx.doi.org/10.21683/1729-2646-2019-19-4-45-52.

Abstract:
Aim. The digital transformation of the traffic safety management system in JSC RZD involves top-level integration with the operating processes of all business units in terms of integral assessment of the risk of possible events and achievement of specified indicators. The result will be the merger of the traffic safety management system with the processes of all levels of the company's management, enabled by an integrated intelligent system for managing processes and services whose functionality includes real-time traffic safety management. Methods. The paper uses system analysis of existing approaches and methods of processing of large quantities of structured and unstructured data. Results. The paper examines the development stages of train traffic safety management, as well as automated information and control systems that enable traffic safety management. General trends in the creation of systems for collection and processing of information are analyzed. The applicability of such technologies as Big Data, Data Mining, and Data Science as part of advanced control systems is shown. The paper examines the performance of the above technologies by analyzing the effect of various factors on the average daily performance of a locomotive, where, at the first level, such factors as average daily run of a locomotive and average trainload are taken into consideration; at the second level, the focus is on the service speed, locomotive turnover at station, etc.; at the sixth level, the focus is on the type of locomotive, its technical state, etc. It is shown that, by combining statistical methods of factor analysis and link analysis with such other Data Mining methods as simulation and prediction, the average daily performance of a locomotive can be planned proactively. The author proposes a procedure of migration towards a digital traffic safety management system that would be based on models of interaction of safety and dependability factors of all railway facilities at all levels of the railway hierarchy, as well as in association with other factors that have no direct relation to dependability, yet affect the safety of the transportation process. Conclusions. The primary benefit of migration towards Big Data consists in the development of a dynamic model of traffic safety and the elimination of the human factor in control systems. Most importantly, it enables the creation within the Russian Railways company (JSC RZD) of an integrated intelligent process and service management system that enables real-time traffic safety management. An extensive process of development and deployment within the company of the URRAN Single Corporate Platform (SCP) enabled executive decision support as regards risk-based functional dependability and safety of transportation facilities. Thus, the URRAN SCP sets the stage for the digital transformation of the traffic safety management system in JSC RZD.

4. Pikhart, Marcel, and Blanka Klimova. "Information and Communication Technology-Enhanced Business and Managerial Communication in SMEs in the Czech Republic." Information 11, no. 6 (June 24, 2020): 336. http://dx.doi.org/10.3390/info11060336.

Abstract:
Current managerial communication in the global business world has recently experienced dramatic and unprecedented changes connected to the use of Information and Communication Technology (ICT) in business and managerial communication. The objective of this paper is to analyze the changes in ICT-enhanced business and managerial communication in Small and Medium Enterprises (SMEs) in the Czech Republic. The use of ICT in business and managerial communication is obvious and brings various benefits, but it also has some drawbacks that should be identified and analyzed. From a methodological point of view, this study is twofold. Firstly, we conduct a systematic review of the current literature on the topic of business and managerial communication, providing an understanding of the recent development in the area of business and managerial communication. Secondly, we conduct qualitative research into the current state of ICT-enhanced managerial and business communication in several SMEs in the Czech Republic. The findings of the literature research show that there are two key aspects that define modern business and managerial communication, i.e., interculturality and interconnectedness. These two aspects of business and managerial communication are very recent, and they bring many challenges that must be considered in order to optimize communication. These altered communication paradigms have the potential to improve global competitiveness and produce new opportunities in the global market. The second part of the research shows that the general awareness of the changes in business communication is limited, and this could potentially pose a threat to business and managerial communication, leading to a loss of opportunities and reduced competitiveness. The majority of global-based companies have already become culture-, communication-, technology- and information-dependent, and ignoring or neglecting this fact presents a significant risk, which may be one of the biggest threats to global competitiveness. Since the success of SMEs is critical for the development of the national economy, it is recommended that company communication be continuously enhanced by frequent training at all organizational levels. This presents a challenge for educational institutions and training centers, managers and businesspeople, of creating communication competencies that would be highly rewarded in the global business environment.

5. Jung, Se Young, Taehyun Kim, Hyung Ju Hwang, and Kyungpyo Hong. "Mechanism Design of Health Care Blockchain System Token Economy: Development Study Based on Simulated Real-World Scenarios." Journal of Medical Internet Research 23, no. 9 (September 13, 2021): e26802. http://dx.doi.org/10.2196/26802.

Abstract:
Background: Despite the fact that the adoption rate of electronic health records has increased dramatically among high-income nations, it is still difficult to properly disseminate personal health records. Token economy, through blockchain smart contracts, can better distribute personal health records by providing incentives to patients. However, there have been very few studies regarding the particular factors that should be considered when designing incentive mechanisms in blockchain. Objective: The aim of this paper is to provide 2 new mathematical models of token economy in real-world scenarios on health care blockchain platforms. Methods: First, roles were set for the health care blockchain platform and its token flow. Second, 2 scenarios were introduced: collecting life-log data for an incentive program at a life insurance company to motivate customers to exercise more, and recruiting participants for clinical trials of anticancer drugs. In our 2 scenarios, we assumed that there were 3 stakeholders: participants, data recipients (companies), and data providers (health care organizations). We also assumed that the incentives are initially paid out to participants by data recipients, who are focused on minimizing economic and time costs by adapting mechanism design. This concept can be seen as a part of game theory, since the willingness-to-pay of data recipients is important in maintaining the blockchain token economy. In both scenarios, the recruiting company can change the expected recruitment time and number of participants. Suppose a company considers the recruitment time to be more important than the number of participants and rewards. In that case, the company can increase the time weight and adjust cost. When the reward parameter is fixed, the corresponding expected recruitment time can be obtained. Among the reward and time pairs, the pair that minimizes the company's cost was chosen. Finally, the optimized results were compared with the simulations and analyzed accordingly. Results: To minimize the company's costs, reward-time pairs were first collected. It was observed that the expected recruitment time decreased as rewards grew, while the rewards decreased as time cost grew. Therefore, the cost was represented by a convex curve, which made it possible to obtain a minimum, an optimal point, for both scenarios. Through sensitivity analysis, we observed that, as the time weight increased, the optimized reward increased, while the optimized time decreased. Moreover, as the number of participants increased, the optimization reward and time also increased. Conclusions: In this study, we were able to model the incentive mechanism of blockchain based on a mechanism design that recruits participants through a health care blockchain platform. This study presents a basic approach to incentive modeling in personal health records, demonstrating how health care organizations and funding companies can motivate one another to join the platform.
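
To make the optimization tangible, the sketch below grid-searches the company's cost over candidate token rewards. The functional form (expected recruitment time decaying exponentially with the reward) and every number are invented assumptions for illustration, not the paper's actual model:

```python
# Minimal sketch of the reward/time trade-off described in the abstract.
# Assumption (not from the paper): expected recruitment time decays
# exponentially as the per-participant token reward grows.
import numpy as np

def expected_time(reward, t_min=5.0, t_scale=60.0, k=0.05):
    """Hypothetical E[recruitment time, days] for a given token reward."""
    return t_min + t_scale * np.exp(-k * reward)

def company_cost(reward, n_participants=100, time_weight=500.0):
    """Company cost = total rewards paid out + time-weighted recruitment delay."""
    return n_participants * reward + time_weight * expected_time(reward)

rewards = np.linspace(1, 200, 400)      # candidate reward levels (tokens)
costs = company_cost(rewards)           # convex curve: linear + convex term
best = rewards[np.argmin(costs)]
print(f"optimal reward ~ {best:.1f} tokens, "
      f"expected time ~ {expected_time(best):.1f} days")
```

Raising time_weight in this toy model shifts the optimum toward larger rewards and shorter recruitment times, mirroring the sensitivity analysis reported above.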

6. Klimaitienė, Rūta, Kristina Rudžionienė, and Andželika Verbliugevičiūtė. "The effectiveness of ABC method in small companies." Buhalterinės apskaitos teorija ir praktika, no. 16 (July 5, 2019): 40–54. http://dx.doi.org/10.15388/batp.2014.no16.4.

Abstract:
Under current economic conditions, staying competitive requires not only good economic resources but also their relevant application in production, which is impossible without good administration of cost accounting and production costs. Before calculating product cost, it is first necessary to know which costs, and how much of them, are included in the product cost. The validity and objectivity of the obtained information depend on how precisely current costs are allocated to product cost. The scientific problem is that it is unclear how the ABC method affects the calculation of partial cost in small companies, and which method of cost calculation is most beneficial for them; it is necessary to research how to distribute indirect costs, either applying activity-based costing (ABC) or using simpler, traditional distribution methods. The object is the ABC method. The aim is to research the efficiency of applying the ABC method in small companies. The first part analyses research carried out by foreign authors investigating the ABC method. A study of cost calculation in Lithuanian companies was carried out to identify whether companies apply this method, what benefit they receive, and which difficulties they face; on this basis, the advantages and disadvantages of the method were formulated. In the second part, using the cost calculation data of the company 'Baldas', the cost of the company's key products calculated under the traditional method is compared with that calculated using the ABC method, indicating whether it is useful for a company to apply the ABC method for cost calculation.
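
For readers unfamiliar with the method, the toy comparison below contrasts a traditional single-base overhead allocation with an activity-based allocation. All figures are invented; the cross-subsidization it exposes is the kind of difference such a comparison looks for:

```python
# Toy comparison: traditional overhead allocation vs. activity-based
# costing (ABC). Every number is invented for illustration only.
overhead = {"machine_setup": 20_000, "quality_control": 10_000}

products = {
    "chair": {"direct": 50_000, "mh": 800, "setups": 10, "inspections": 20},
    "table": {"direct": 30_000, "mh": 200, "setups": 40, "inspections": 80},
}

total_overhead = sum(overhead.values())
total_mh = sum(p["mh"] for p in products.values())
total_setups = sum(p["setups"] for p in products.values())
total_insp = sum(p["inspections"] for p in products.values())

for name, p in products.items():
    # Traditional: all overhead spread on one volume base (machine hours).
    trad = p["direct"] + total_overhead * p["mh"] / total_mh
    # ABC: each cost pool allocated by its own activity driver.
    abc = (p["direct"]
           + overhead["machine_setup"] * p["setups"] / total_setups
           + overhead["quality_control"] * p["inspections"] / total_insp)
    print(f"{name}: traditional={trad:,.0f}  ABC={abc:,.0f}")
```

In this toy data, the machine-hour base makes the high-volume "chair" absorb most overhead (74,000 vs. 56,000 under ABC), while the setup-intensive "table" looks artificially cheap (36,000 vs. 54,000): exactly the distortion ABC is meant to correct.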

7. Hart, A., Y. S. Ahn, C. Watson, M. Skeans, A. Barlev, B. Thompson, and V. R. Dharnidharka. "High Healthcare Resource Utilization and Cost for PTLD Patients Following Kidney Transplants." Blood 136, Supplement 1 (November 5, 2020): 32–33. http://dx.doi.org/10.1182/blood-2020-140947.

Abstract:
Background: Post-transplant lymphoproliferative disease (PTLD) is a rare and aggressive disease with high mortality rates, and may involve substantial healthcare resource utilization (HRU) and cost. Objectives: To describe the HRU and Medicare paid amounts up to 3 years after kidney transplants for PTLD patients and for all kidney transplant patients. Methods: The United States Renal Data System (USRDS) is a national data system that collects, analyzes, and distributes information about chronic kidney disease (CKD) and end-stage renal disease (ESRD) in the United States. It is a claims dataset that provides Medicare paid amounts and HRU for these patients, including their kidney transplants. Patients are Medicare-eligible for 3 years after a successful kidney transplant. The Scientific Registry of Transplant Recipients (SRTR) is a national data system that reports data from transplant centers on all solid organ transplants in the US, including data on malignancies after transplants, such as PTLD. The USRDS data were used to identify Medicare-covered kidney transplant recipients from 2007 until 2016. SRTR malignancy data were used to identify patients with a PTLD diagnosis, which were linked to the USRDS. Patients had to have a USRDS claim on the date of transplantation. Only patients enrolled in Medicare Part A and Part B fee-for-service were included. Solitary kidney re-transplants were included but not multi-organ transplants, due to a potential difference in HRU and cost in the latter. Medicare Part A (inpatient, outpatient, home health, hospice, skilled nursing) and Part B HRU and paid amounts were tabulated for up to 3 years post-transplant. The year post-transplant was calculated using 365.25 days per year following the kidney transplant. For example, recipients with a PTLD diagnosis any time within 365.25 days following their kidney transplant were considered PTLD patients within 1 year post-transplant. Costs were standardized per person-year (PPY). Patients were censored at loss of Medicare eligibility, death, or the end of year 3 post-transplant. Results: A total of 83,818 Medicare-covered kidney transplants were included in this study, with a mean age of 51.55 (SD: 15.11) at transplant and 60.77% male. The average paid amount from transplant to 1 year post-transplant was $83,546 PPY, with approximately two-thirds of the cost coming from inpatient hospitalization. For years 2 and 3 post-transplant, the average paid amount was $26,148 PPY and $25,326 PPY, respectively, with similar cost coming from inpatient and Part B (~39% each). Among these kidney transplant recipients, 281 had PTLD while Medicare-eligible, with an average age of 53.98 (SD: 19.19) years at transplant and 67.97% male. Median time from transplant to PTLD diagnosis was 1.02 years (IQR: 0.60-1.93 years). For patients who were diagnosed with PTLD within the first year after transplant (n=139), the average paid amount from the date of the first PTLD diagnosis to 1 year post-transplant was $222,336 PPY, with approximately two-thirds of the cost from inpatient hospitalization. Of these patients, for those who survived or were eligible in year 2 (n=94) and year 3 (n=64), the average cost was $60,981 PPY and $36,118 PPY, respectively. For patients who were diagnosed with PTLD in the second year after transplant (n=81), the average paid amount in year 2 post-transplant was $203,374 PPY, with approximately 60% coming from inpatient hospitalization. Year 3 post-transplant cost for these PTLD patients who survived or were eligible (n=59) was $58,879 PPY. For patients who were diagnosed with PTLD in the third year after transplant (n=61), the average paid amount in year 3 post-transplant was $211,941 PPY, with 68% coming from inpatient hospitalization. Conclusions: PTLD is associated with substantial HRU and cost (>$200k PPY in the year diagnosed), regardless of the year diagnosed post-transplant, while patients without PTLD, or who had not yet developed PTLD, had a cost of ~$83k PPY in year 1 and ~$26k PPY in years 2 and 3 post-transplant. Disclosures: Ahn: Medtronic: Current equity holder in publicly-traded company; Bristol-Myers Squibb: Research Funding. Watson: Atara Biotherapeutics: Current Employment, Current equity holder in publicly-traded company. Skeans: Bristol-Myers Squibb: Research Funding; Astellas: Research Funding; Atara Biotherapeutics: Research Funding. Barlev: Atara Biotherapeutics: Current Employment, Current equity holder in publicly-traded company. Thompson: Atara Biotherapeutics: Research Funding; Bristol-Myers Squibb: Research Funding. Dharnidharka: Atara Biotherapeutics: Consultancy, Honoraria, Research Funding; CareDx: Honoraria, Research Funding, Speakers Bureau.
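
The per-person-year standardization described in the Methods can be shown in a few lines; the follow-up windows use the study's 365.25-day year, but the dollar figures and censoring times are invented:

```python
# Sketch of per-person-year (PPY) cost standardization with censoring at
# death, loss of Medicare eligibility, or the end of the observation window.
YEAR = 365.25

# (Medicare paid in the window, days of follow-up in the window) - invented
patients = [
    (120_000, 365.25),
    (300_000, 180.0),    # censored mid-year (e.g., death)
    (80_000, 365.25),
]

total_paid = sum(paid for paid, _ in patients)
person_years = sum(days / YEAR for _, days in patients)
print(f"cost per person-year: ${total_paid / person_years:,.0f} PPY")
```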

8. Zhang, Ting, Ting Qu, George Q. Huang, Xin Chen, and Zongzhong Wang. "Sizing, pricing and common replenishment in a headquarter-managed centralized distribution center." Industrial Management & Data Systems 116, no. 6 (July 11, 2016): 1086–104. http://dx.doi.org/10.1108/imds-08-2015-0343.

Abstract:
Purpose – Commonly shared logistics services help manufacturing companies to cut down redundant logistics investments while enhancing the overall service quality. Such a service-sharing mode has been naturally adopted by group companies to form the so-called headquarter-managed centralized distribution center (HQ-CDC). The HQ-CDC manages the common inventories for the group's subsidiaries and provides shared storage services to the subsidiaries through appropriate sizing, pricing and common replenishment. Apart from seeking a global optimal solution for the whole group, the purpose of this paper is to investigate balanced solutions between the HQ-CDC and the subsidiaries. Design/methodology/approach – Two decision models are formulated: an integrated model, where the group company makes an all-in-one decision to determine the space allocation, price setting and material replenishment on behalf of the HQ-CDC and subsidiaries, and a bilevel programming model, where the HQ-CDC and subsidiaries make decisions sequentially to draw a balance between their local objectives. From the perspective of result analysis, the integrated model develops a managerial benchmark which minimizes the group company's total cost, while the bilevel programming model can be used to measure the interactive effects between local objectives as well as their final effect on the total objective. Findings – Through comparing the numerical results of the two models, two major findings are obtained. First, the HQ-CDC's profit is noticeably improved in the bilevel programming model compared to the integrated model. However, the improvement of the HQ-CDC's profit triggers a cost increase for the subsidiaries. Second, the analyses of different sizing and pricing policies reveal that the implementation of leased space leads to more flexible space utilization in the HQ-CDC and a reduced total cost for the group company, especially in the face of large demand and high demand fluctuation. Research limitations/implications – Several classical game-based decision models are to be introduced to examine the more complex relationships between the HQ-CDC and the subsidiaries, such as the Nash game model or the Stackelberg game model, and more complete and meaningful managerial implications may be found through result comparison with the integrated model. Analytical solutions may be developed to achieve more accurate results, but the mathematical models may then have to have a simpler structure or tighter assumptions. Practical implications – The group company should comprehensively consider both cost and profit before choosing the decision framework and the coordination strategy. The HQ-CDC prefers a more flexible space-usage strategy to avoid idle space and to increase space utilization. The subsidiaries with high demand uncertainties should bear a part of the cost to induce the subsidiaries with steady demands to coordinate. Transshipments should be encouraged in the HQ-CDC to reduce the aggregate inventory level as well as to maintain the customer service level. Social implications – The proposed decision frameworks and warehousing policies provide guidance for the managers in group companies to choose the proper policy and for the subsidiaries to better coordinate. Originality/value – This research studies service sharing in warehouse sizing, pricing and common replenishment in an HQ-CDC. The interactive decisions between the HQ-CDC and the subsidiaries are formulated in a bilevel programming model and then analyzed under various practical scenarios.
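
A highly stylized leader-follower sketch of the bilevel structure (not the paper's model; the outside-market price, HQ cost, and demands are invented):

```python
# Leader (HQ-CDC) posts a storage price; each follower (subsidiary) then
# books HQ space only if it beats the outside market. The leader picks the
# price that maximizes its own profit given the followers' best response.
OUTSIDE_PRICE = 10.0     # open-market price per unit of space (assumed)
HQ_UNIT_COST = 4.0       # HQ-CDC's cost per unit of space (assumed)
demands = [100, 60, 40]  # space needed by each subsidiary (assumed)

def follower_booking(price):
    """Followers' best response: book from HQ only if no more expensive."""
    return [d if price <= OUTSIDE_PRICE else 0 for d in demands]

best = None
for price in (6.0, 8.0, 10.0, 12.0):      # leader's candidate prices
    booked = follower_booking(price)
    hq_profit = sum(booked) * (price - HQ_UNIT_COST)
    subs_cost = sum(b * price + (d - b) * OUTSIDE_PRICE
                    for b, d in zip(booked, demands))
    if best is None or hq_profit > best[1]:
        best = (price, hq_profit, subs_cost)

print("leader-optimal (price, HQ profit, subsidiaries' cost):", best)
```

In this sketch the leader extracts the highest price the followers will still accept, raising the HQ-CDC's profit while the subsidiaries pay more than under a low, centrally chosen transfer price: the qualitative pattern in the Findings above.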

9. Zunic, Emir, Haris Hasic, and Sead Delalic. "Strategic Approach to Implementation and Integration of Routing-Based Tasks in Warehouse Management Information Systems." International Journal of e-Education, e-Business, e-Management and e-Learning 10, no. 4 (2020): 294–311. http://dx.doi.org/10.17706/ijeeee.2020.10.4.294-311.

Abstract:
One of the frequently occurring tasks during the development of warehouse management systems is the implementation of routing algorithms of some kind. Whether it is for routing workers during order picking, delivery vehicles, or company representatives, this task has proven to be challenging in the technical as well as the social sense. In other words, the task is heavily dependent on various general and company-specific constraints, and it directly dictates the way employees should do their job. This paper describes a strategic approach to the development and gradual integration of such algorithms which makes sure that all constraints are satisfied and, more importantly, ensures that route suggestions are viewed by the employees as a helpful tool rather than a threat to their job. In the first part of this paper, the approach is described and evaluated on a warehouse representative routing problem through a real-world case study in a medium-to-large warehouse. In the second part, the same approach is adapted to a delivery vehicle routing problem for a smaller retail company. In both cases, routing efficiency almost doubled in comparison to the previous approaches used by the companies. The most important factors of the implementation and integration stages, as well as the impact of the changes on employee satisfaction, are aggregated, analysed in detail, and discussed throughout the different stages of development.
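
The paper's algorithms are not reproduced in the abstract; as a hedged illustration, the kind of nearest-neighbour heuristic often used as a first routing baseline looks like this (warehouse coordinates invented):

```python
# Greedy nearest-neighbour routing: always visit the closest unvisited
# stop next. A simple baseline, not the paper's actual method.
import math

def nearest_neighbour_route(start, stops):
    route, current, remaining = [start], start, list(stops)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route

picks = [(5, 2), (1, 8), (9, 3), (4, 7)]   # pick locations (invented)
print(nearest_neighbour_route((0, 0), picks))
```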

10. Samuels, Janet A., and Kimberly M. Sawers. "SRS Educational Supply Company: An Instructional Budget Project." Issues in Accounting Education 32, no. 4 (April 1, 2017): 51–59. http://dx.doi.org/10.2308/iace-51733.

Abstract:
This project provides students with an opportunity to develop a master budget for a merchandiser. Students work in a team environment that also helps them discover budget process issues. The project consists of two parts. The first part requires students to grasp the mechanics of master budget creation and develop an understanding of how the individual budgets fit together and how information flows from one budget to other budgets. The second part of the project requires students to create new budget information when given a different organizational structure and incentive system. Both parts of the project expose students to budget process issues related to communication, cooperation, public/private information, information sharing, truth telling, and the influence of incentives on the budget process. The project is directed at undergraduate students in Managerial or Cost Accounting courses. We also present modifications that emphasize the budget process issues that would be useful for a graduate-level managerial accounting course (Master's in Management or M.B.A.).
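
The budget linkage the first part teaches can be sketched in a few lines; the 20% inventory policy and all figures are invented classroom-style assumptions, not taken from the case:

```python
# How one budget feeds the next for a merchandiser: the purchases budget
# is driven by budgeted sales plus the desired ending inventory.
units_sales = [1_000, 1_200, 1_100]   # budgeted unit sales per month
price, unit_cost = 25.0, 15.0
ending_inv_pct = 0.20                 # policy: hold 20% of next month's sales
beginning_inventory = 200

for m, sales in enumerate(units_sales[:-1], start=1):
    desired_ending = ending_inv_pct * units_sales[m]   # next month's sales
    purchases = sales + desired_ending - beginning_inventory
    print(f"month {m}: revenue={sales * price:,.0f}  "
          f"purchases={purchases:,.0f} units (cost {purchases * unit_cost:,.0f})")
    beginning_inventory = desired_ending
```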

Dissertations on the topic "In second part there is analyzed company and information system"

1. Kuchta, Jiří. "Posouzení informačního systému firmy a návrh změn." Master's thesis, Vysoké učení technické v Brně, Fakulta podnikatelská, 2021. http://www.nusl.cz/ntk/nusl-444583.


Book chapters on the topic "In second part there is analyzed company and information system"

1. Almazari, Ahmad Aref. "Valuation of Banking Sector." In Advances in Business Information Systems and Analytics, 175–200. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-1086-5.ch010.

Abstract:
This chapter examines the valuation of banks and is organized into five parts. It introduces several valuation approaches to find out whether there is a superior method. The chapter starts with a description of bank regulations and their impact on bank valuations and continues with an overview of valuation approaches. The second part applies the banking sector decision models. The third section presents banking sector valuation models. The fourth part presents the input factors that are needed to value a company. In the last part, financial statements are used to analyze the main ratios of Bank of America, and the calculated values are then compared over time (2014-2018) to assess the explanatory power of the bank.

2. Gavrilova, M. L. "Adaptive Algorithms for Intelligent Geometric Computing." In Machine Learning, 97–104. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-60960-818-7.ch109.

Abstract:
This chapter spans topics from such important areas as Artificial Intelligence, Computational Geometry and Biometric Technologies. The primary focus is on the proposed Adaptive Computation Paradigm and its applications to surface modeling and biometric processing. The availability of much more affordable storage and high-resolution image capturing devices has contributed significantly over the past few years to accumulating very large datasets of collected data (such as GIS maps, biometric samples, videos, etc.). On the other hand, it has also created significant challenges driven by the higher than ever volumes and complexity of the data, which can no longer be resolved through acquisition of more memory, faster processors, or optimization of existing algorithms. These developments justified the need for radically new concepts for massive data storage, processing and visualization. To address this need, the current chapter presents an original methodology based on the paradigm of Adaptive Geometric Computing. The methodology enables storing complex data in a compact form, providing efficient access to it, preserving a high level of detail, and visualizing dynamic changes in a smooth and continuous manner. The first part of the chapter discusses adaptive algorithms in real-time visualization, specifically in GIS (Geographic Information Systems) applications. Data structures such as the Real-time Optimally Adapting Mesh (ROAM) and Progressive Mesh (PM) are briefly surveyed. The adaptive method Adaptive Spatial Memory (ASM), developed by R. Apu and M. Gavrilova, is then introduced. This method allows fast and efficient visualization of complex data sets representing terrains, landscapes and Digital Elevation Models (DEM). Its advantages are briefly discussed. The second part of the chapter presents the application of the adaptive computation paradigm and evolutionary computing to missile simulation. As a result, patterns of complex behavior can be developed and analyzed. The final part of the chapter marries the concept of adaptive computation with topology-based techniques and discusses their application to the challenging area of biometric computing.

3. Yadav, Vanita, and Rajen Gupta. "A Paradigmatic and Methodological Review of Research in Outsourcing." In IT Outsourcing, 19–28. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-60566-770-6.ch002.

Abstract:
Due to the growing academic and practitioner interest in the field of outsourcing, there is a need to do a comprehensive assessment and synthesis of research activities to date. This chapter addresses this need and examines the academic literature on information systems outsourcing and business process outsourcing using a paradigmatic and methodological lens. The objective of this chapter is fourfold. Firstly, it examines the status of outsourcing research from 1995 to 2005 in eight leading academic journals, to compare the current research trends with past research directions in terms of methodologies applied. Secondly, it analyzes the research paradigms adopted in these research papers using the Operations Research Paradigm framework. Thirdly, it compares and contrasts the outsourcing research work published in three leading European journals with the work published in three leading American journals. Finally, it uncovers the implications of this study and the directions for future research.

4. Yadav, Vanita, and Rajen K. Gupta. "A Paradigmatic and Methodological Review of Research in Outsourcing." In Outsourcing and Offshoring of Professional Services, 71–88. IGI Global, 2008. http://dx.doi.org/10.4018/978-1-59904-972-4.ch004.

Abstract:
Due to the growing academic and practitioner interest in the field of outsourcing, there is a need to do a comprehensive assessment and synthesis of research activities to date. This chapter addresses this need and examines the academic literature on information systems outsourcing and business process outsourcing using a paradigmatic and methodological lens. The objective of this chapter is fourfold. Firstly, it examines the status of outsourcing research from 1995 to 2005 in eight leading academic journals, to compare the current research trends with past research directions in terms of methodologies applied. Secondly, it analyzes the research paradigms adopted in these research papers using the Operations Research Paradigm framework. Thirdly, it compares and contrasts the outsourcing research work published in three leading European journals with the work published in three leading American journals. Finally, it uncovers the implications of this study and the directions for future research.

5. Janczewski, Lech. "Road Map to Information Security Management." In Encyclopedia of Multimedia Technology and Networking, Second Edition, 1249–56. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-014-1.ch169.

Abstract:
Developments in multimedia technology and in networking offer organizations new and more effective ways of conducting their businesses, including both internal and external contacts. Practically every business person owns a mobile phone, has a PDA/laptop with wireless capabilities, and is able to communicate with colleagues/clients all over the world and from every place on the globe. As a result, well-defined barriers between different organizations are becoming less and less visible. This technical progress intensifies the competing forces. In the past, an organization was directly exposed to competition located within its city or region. Now, due to easy communication, a competitor could be located on the opposite side of the globe. The advantage of using multimedia technology and networking can be realized only if the data handled by a company are secure, that is, are available only to authorised persons (confidentiality), represent true values (i.e., have not been changed during storage, processing, or transport), and are available on demand (availability). Thus, managing the security of information becomes an obligatory part of running any modern IT system. There is no absolute IT system security. If a system is accessible by authorised people, it is by definition impossible to eliminate the chance of unauthorised access. However, proper means exist to dramatically decrease the probability of such unauthorised activities. This article illustrates the importance of properly managing information security processes in an organization and presents first-level guidance on how to approach this problem. The most widely known document on information security is the annual Computer Crime and Security Survey (CCSS), conducted by San Francisco's Computer Security Institute in cooperation with the FBI (CSI, 2006). It is based on responses from over 500 professionals representing all types and sizes of organizations, from huge international corporations to small businesses and from nationwide government agencies to small community centres. The message the survey conveys is frightening:
• Total losses for 2006 were $52,494,290 (USD) for the 313 respondents that were willing and able to estimate losses.
• Losses due to virus contamination caused the most significant loss (over $15 million).
• Unauthorised access to information was the second-most expensive computer crime among survey respondents.
• As in previous years, virus incidents (65.2%) and insider abuse of network access (47%) were the most cited forms of attack or abuse.
• The impact of the Sarbanes–Oxley Act on information security continues to be substantial. In fact, in open-ended comments, respondents noted that regulatory compliance related to information security is among the most critical security issues they face.

6. Morze, Nataliia V., Eugenia Smyrnova-Trybulska, and Olena Glazunova. "Design of a University Learning Environment for SMART Education." In Advances in Business Information Systems and Analytics, 221–48. IGI Global, 2017. http://dx.doi.org/10.4018/978-1-5225-2492-2.ch011.

Abstract:
This chapter discusses theoretical, methodological and practical aspects of the design of a university learning environment for SMART education. Smart technology is analyzed against a university background. The authors consider the process of transformation from e-learning to smart education, in particular the VLE objective according to the concept of smart education, the formation of individual learning trajectories in a smart environment, and a quality university educational environment for smart education. In the second part of the chapter, the authors look at the development of ICT competence of teachers in the system of smart education and present their conclusions. The references include more than thirty items: articles, books, chapters, and conference proceedings on SMART education, university learning environments, and virtual learning environments (VLE).

7. Yasui, Arisa, Muneyoshi Numada, and Chaitanya Krishna. "Disaster Management Process Approach: Case Study by BOSS for Disaster Response under COVID-19." In Natural Hazards - Impacts, Adjustments and Resilience [Working Title]. IntechOpen, 2020. http://dx.doi.org/10.5772/intechopen.94954.

Abstract:
Comprehensive disaster response processes need to be managed and their progress communicated to avoid ineffective management such as duplication with stakeholders, amendments resulting from leaders' incomplete instructions, and waiting without instruction from the EOC (Emergency Operation Center). Although there is existing research on the standardization and systematization of disaster response processes, a purely paper-based SOP (Standard Operation Procedure) is challenging to use in actual, practical situations concerning the standard workflow based on the SOP. For effective disaster management, this study developed a Business Operation Support System (BOSS). The BOSS provides a standard workflow chart based on related documents and experiences, such as the SOP, manuals/documents, past experiences, and knowledge. The overview, checkpoints, necessary documents, related information systems linked to the disaster management plan, and document formats are defined for every workflow. Even for young or inexperienced individuals, the BOSS can guide responders through the processes necessary during disasters. This research aims to compare the effect of responses to the 2019 massive rain disaster in Kawasaki city with and without the BOSS. First, a comprehensive workflow focusing on shelter management under the Coronavirus Disease 2019 (COVID-19) was created in the BOSS through a workshop with Kawasaki city staff and community people. Second, experiments (with and without the BOSS) were carried out to analyze the differences and the BOSS effect. "With the BOSS" means that the responders can follow the workflow in the BOSS for shelter management. "Without the BOSS" means that conventional paper-based manuals are used for the operations. Two types of manuals in Kawasaki city were used: one guides the expected shelter management points, and the other contains the explanation about COVID-19. Each team comprised one leader and two staff. As a result of the experiments, the big difference between the two teams was the leader's behavior. Because the BOSS team leader assigned different work to each staff member following the BOSS workflow, the BOSS team responded to more kinds of work compared to the manual team, and the role of every member of the BOSS team was clear. On the other hand, the manual team responded to one work item with all members, including the leader, without the leader's instruction. Due to the lack of instruction from the leader, a period of waiting before the next work was observed in the manual team. Through quantitative analysis of the demonstrative experiment, this research showed that the leader's instructions led to effective responses. For future research, the leader's behavior and decision-making should be analyzed for the BOSS's effective operation and team building.

8. Gurau, Calin. "UML as an Essential Tool for Implementing eCRM Systems." In Encyclopedia of Multimedia Technology and Networking, Second Edition, 1453–63. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-014-1.ch196.

Abstract:
Electronic commerce requires the redefinition of the firm's relationships with partners, suppliers, and customers. The goal of effective customer relationship management (CRM) practice is to increase the firm's customer equity, which is defined by the quality, quantity, and duration of customer relationships (Fjermestad & Romano, 2003). The explosive development of the online market and the rapid evolution of customer management applications have led companies to implement electronic customer relationship management (eCRM) systems, which use advanced technology to enhance customer relationship management practices. The successful implementation of an eCRM system requires a specific combination of IT applications that support the classic domains of the CRM concept: marketing, sales, and service (Kennedy, 2006). Electronic marketing aims at acquiring new customers and moving existing customers to further purchases. Electronic sales try to simplify the buying process and to provide superior customer support. Electronic service has the task of providing electronic information and services for arising questions and problems, or of conveying customers to the right contact person in the organization. The eCRM system comprises a number of business processes, interlinked in a logical succession:
• Market segmentation: The collection of historical data, complemented with information provided by third parties (such as marketing research agencies), is segmented on the basis of customer lifetime value (CLV) criteria, using data mining applications.
• Capturing the customer: The potential customer is attracted to the Web site of the firm through targeted promotional messages, diffused through various communication channels.
• Customer information retrieval: The information retrieval process can be either implicit or explicit. When implicit, the process registers the Web behaviour of customers, using specialized software applications such as "cookies." On the other hand, explicit information can be gathered through direct input of demographic data by the customer (using online registration forms or questionnaires). Often, these two categories of information are connected at the database level.
• Customer profile definition: The customer information collected is analyzed in relation to the target market segments identified through data mining, and a particular customer profile is defined. The profile can be enriched with additional data (e.g., external information from marketing information providers). This combination creates a holistic view of the customer, his needs, wants, interests and behaviour (Pan & Lee, 2003).
• Personalization of firm-customer interaction: The customer profile is used to identify the best customer management campaign (CMC), which is applied to personalize the company-customer online interaction.
• Resource management: The company-customer transactions require complex resource management operations, which are partially managed automatically, through specialized IT applications such as Enterprise Resource Planning (ERP) or Supply Chain Management (SCM), and partly through the direct involvement and coordination of operational managers.

9. Oermann, Andrea, and Jana Dittmann. "Trust in E-Technologies." In Information Security and Ethics, 3122–32. IGI Global, 2008. http://dx.doi.org/10.4018/978-1-59904-937-3.ch209.

Abstract:
When reflecting on the term trust, there are two main hypotheses which can be found in most of the literature: First, trust is presented as an amorphous phenomenon, which is difficult to measure empirically (Endress, 2002). Second, the character of trust is rather fragile. Trust as a mediator of social interactions cannot be quantified precisely; it has to be generated and recreated at any time, varying with its social context. Volken summarizes this particular connection between trust and the context in which it is created: "Trust is a complex construct with multiple dimensions, and their relative effects on innovative actions may be highly dependent on their respective social context" (Volken, 2002). In the age of globalization, trust is particularly important when one operates in the areas of e-commerce, e-government, and mobile commerce, or develops IT systems which touch the interface between technical innovation and its application by users. The latter live and work in a certain social context in which trust can be established in various ways. This necessarily has consequences for IT solutions and IT security, which this article tries to explore. Giddens (1990) pointed out that "mechanised technologies of communication have dramatically influenced all aspects of globalization since the first introduction of mechanical printing into Europe [Johannes Gutenberg, 15th century]" (p. 77). Without Johannes Gutenberg, there would have been no Reformation; without information technology, there would have been no global information age. Both historical developments, as different as they may be, took place in a certain social context, of which technical innovation became a part. At the same time, every society depends on the key ingredient which is a requirement for social interaction: trust. Just as a reader of the Gutenberg Bible trusted that his book was complete and correct, any user of information technology trusts that the applied system functions properly and is reliable. The following questions arise: How does trust, which basically is part of most social interactions, fit within information technology, which uses "0" and "1" to enable any sort of interaction? How is trust created, maintained and developed in the information age? Which forms of trust exist and are necessary to operate in an interconnected world? The article explores these questions by describing current definitions and concepts of trust outside and inside the context of information technology. After exploring the link to concepts of trust in social science and culture, a new concept of trust in e-technologies such as e-commerce, e-government, and mobile commerce is developed. Important trust-building factors such as transparency and participation are analyzed in order to conceptually deal with the increasing importance of trust in a virtual world.

10. Chaikovsky, Illya, and Maksym Boreiko. "Electrocardiography as a Part of Heart Diseases Screening during Epidemiological Research: Current State, Technological Trends, Unresolved Issues." In Priority areas for development of scientific research: domestic and foreign experience. Publishing House "Baltija Publishing", 2021. http://dx.doi.org/10.30525/978-9934-26-049-0-38.

Abstract:
The goal of this paper is to analyze modern views on electrocardiography (ECG) for heart disease screening, and to review the experience of using portable ECG devices, the amount and nature of information that can be obtained using ECG devices with different numbers of leads, and their regulatory base, especially in the context of cardiovascular disease (CVD) screening. The characteristics of various scales for determining serious cardiovascular events are given. It is concluded that there is a need to personalize risk-scale assessment, i.e. to supplement the traditional risk factors with individual physiologically important parameters recorded using instrumental methods. The most important of these instrumental methods is ECG. A detailed description of numerous studies using ECG predictors of cardiovascular events, both in the general population and in various cohorts, is given, with an indication of their evidentiary power. The evolution of views on the indications for ECG examination of clinically healthy individuals in the course of epidemiological studies is described. Miniature portable electrocardiographic devices used by the patient outside the doctor's office are considered as part of a broader trend, point-of-care testing (POCT), i.e. medical tests performed directly at the patient's location. These are mainly single-channel electrocardiographs with finger electrodes: AfibAlert (USA), AliveCor/Kardia (USA), DiCare (China), ECG Check (USA), HeartCheck Pen (Canada), InstantCheck (Taiwan), MD100E (China), PC-80 (China), REKA E 100 (Singapore), Zenicor (Sweden), Omron Heart Scan (Japan), MDK (Holland). The experience of AliveCor/Kardia is considered in particular, in the context of its successively obtaining several FDA approvals. The features of screening for cardiovascular diseases using ECG devices with a limited number of leads are analyzed. The original electrocardiographic hardware and software complexes created at the Glushkov Institute of Cybernetics of the National Academy of Sciences of Ukraine are described. The uniqueness of the software of these complexes is based on the analysis of subtle ECG changes that are invisible during the usual visual and/or automatic interpretation of the ECG signal. The idea of the analysis method consists, first, in measuring the maximum number of ECG and heart rate variability parameters and, second, in positioning each parameter on a scale between the absolute norm and extreme pathology. The software for these devices is structured according to a hierarchical principle. It consists of four levels, from individual particular indicators to a general integral indicator of the functional state of the cardiovascular system. When moving to higher levels of analysis, the information obtained at the previous level is generalized and aggregated. This is expressed in the averaging of all point values of all indicators of the previous level: first-level indicators are averaged at the second level, second-level indicators at the third, and third-level indicators at the fourth. The complex index available in the software is formed on the basis of assessments of generally accepted and original indicators of heart rate variability and characteristics of QRS complexes.
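
A minimal sketch of the described norm-to-pathology scaling and level-by-level averaging; the parameter names, bounds, and grouping are invented placeholders, not the institute's actual software:

```python
# Each raw parameter is positioned on a 0..1 scale between "absolute norm"
# (0) and "extreme pathology" (1), then scores are averaged level by level
# up to a single integral index of cardiovascular state.
def score(value, norm, pathology):
    s = (value - norm) / (pathology - norm)   # works for either direction
    return min(max(s, 0.0), 1.0)

# Level 1: individual parameters, grouped by the aspect they describe.
groups = {
    "repolarization": [score(430, norm=400, pathology=520),    # QTc, ms
                       score(0.12, norm=0.05, pathology=0.30)],
    "rate_variability": [score(25, norm=50, pathology=10)],    # SDNN, ms
}

# Level 2: average within each group; top level: average of group scores.
group_scores = {g: sum(v) / len(v) for g, v in groups.items()}
integral = sum(group_scores.values()) / len(group_scores)
print(group_scores, f"integral index = {integral:.2f}")
```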

Conference papers on the topic "In second part there is analyzed company and information system"

1. Bainier, Francis, Pascal Alas, Florian Morin, and Tony Pillay. "Two Years of Improvement and Experience in PEMS for Gas Turbines." In ASME Turbo Expo 2016: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/gt2016-56138.

Abstract:
Due to environmental regulations, nitrogen oxides (NOx), carbon monoxide (CO) and sulfur dioxide (SO2) emissions are key issues for gas turbine plants. Regulators are becoming more and more involved, and they often require complete, real-time emission information. The measurements can be done with gas analyzers; this technology is called CEMS: Continuous Emissions Monitoring System. An alternative method [1][2] is to use a calculation based on the turbine instrumentation. This technology is called PEMS: Predictive Emissions Monitoring System. But these technologies do not provide all the information required by the regulator. GRTgaz, the main gas transmission company in France, managing 44 turbines spread over 27 stations across France, decided many years ago to monitor its emissions by PEMS. Two years ago, GRTgaz successfully developed its own PEMS equations, organized answers to regulators around this technology, and decided to spread the technology across its gas turbine fleet. The complete intellectual path followed is described in paper GT2014-25242. This three-part 2016 paper describes the PEMS project's steps forward. The first part of the paper reviews the PEMS equations used at GRTgaz for NOx and CO concentrations. The various lean premixed combustion turbines differ in terms of combustion design, control and instrumentation. These differences are analyzed considering their influence on combustion and their impact on the accuracy of the PEMS results. In order to comply with regulators' requirements, a calibration of the PEMS results is done every quarter. The results of the first 2 stations equipped with PEMS are described in this first part. The second part of the paper introduces the smoke developed and the neutral air flow to complete the real-time calculation required by the regulators: SO2 concentration and the mass flowrates for NOx, CO and SO2. The final calculation integrates the mass flowrate in order to derive the total mass emitted into the atmosphere over different time periods. The last part deals with developing personnel involvement, managing the data, and compiling the results given to regulators. These aspects were more difficult to implement than expected, and their importance should not be underestimated, because the scientific credibility of PEMS cannot be confirmed without them.
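
GRTgaz's equations are not given in the abstract, so the sketch below is a generic PEMS-style stand-in: an ordinary-least-squares fit of NOx to routinely measured turbine parameters on synthetic data, followed by the kind of bias check a quarterly calibration might perform. The feature set, model form, and all numbers are assumptions:

```python
# Generic PEMS sketch: predict NOx from turbine operating parameters.
import numpy as np

rng = np.random.default_rng(0)
n = 200
load = rng.uniform(0.5, 1.0, n)                        # relative load
t_firing = 1500 + 300 * load + rng.normal(0, 10, n)    # firing temp, K
humidity = rng.uniform(4, 12, n)                       # g/kg
nox = (10 + 25 * load + 0.02 * (t_firing - 1500)
       - 0.8 * humidity + rng.normal(0, 1.0, n))       # synthetic "measured" ppm

X = np.column_stack([np.ones(n), load, t_firing, humidity])
coef, *_ = np.linalg.lstsq(X, nox, rcond=None)         # ordinary least squares

predicted = X @ coef
bias = float(np.mean(nox - predicted))                 # calibration offset
print("coefficients:", np.round(coef, 3), " residual bias:", round(bias, 4))
```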

2. Bainier, Francis, and Florian Havard. "Setting Up and Managing a PEMS Mathematical Formula to Fully Respond to the Expectations of the Regulator." In ASME Turbo Expo 2014: Turbine Technical Conference and Exposition. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/gt2014-25242.

Abstract:
Due to environmental regulations, oxides of nitrogen (NOx) and carbon monoxide (CO) are important issues for gas turbine plants. Regulators are becoming more and more involved, and they often require overall, real-time emission information. These measurements can easily be done with gas analyzers. This technology is called CEMS: Continuous Emissions Monitoring System. With CEMS, regulators easily understand proofs such as calibrations and certificates. However, this technology is expensive to buy and to maintain. An alternative method is to use a calculation based on the turbine operation. This technology is called PEMS: Predictive Emissions Monitoring System. Although PEMS is less expensive than CEMS, setting up and managing a PEMS mathematical formula to fully respond to the expectations of the regulator is more complicated than using CEMS. This paper describes an approach to setting up an equation for PEMS which can be accepted by regulators. Before starting PEMS research, expectations must be established. What are the expectations of the regulator? What are the users' expectations? And do not forget the goals of the unit operators: it is not sufficient to provide answers for the regulator at the lowest cost. A responsible user is also concerned about the environmental aspect, and this fact will help in the future. GRTgaz, the main gas transmission company in France, managing 44 turbines spread over 25 stations across France, decided to answer these questions before starting research. The answers to these questions make up the first part of this paper. The second part of the paper is a review of publications and literature. Since no PEMS applicable to all types of combustion turbines exists, analyzing empirical and theoretical formulas leads to the important parameters and the links between them. The third part is dedicated to the turbines. A lean premixed combustion system and its regulation are described. Also in this part, the available measured parameters are listed with regard to their influence on combustion and their accuracy. The fourth part deals with the establishment of the equations and their tests against the goals. The last part presents the implementation of the equations on the units. The conclusion points out that, even when tools and knowledge are available, achieving a PEMS requires a good evaluation and understanding of the expectations of the regulator, not only as they are today but also as they will and could be.
APA, Harvard, Vancouver, ISO and other citation styles
3

Chen, Hung-Che, Yung-Hua Kao, Paul C. P. Chao and Chin-Long Wey. „A New Automatic Readout Circuit for a Gas Sensor With Organic Vertical Nano-Junctions“. In ASME 2016 Conference on Information Storage and Processing Systems. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/isps2016-9582.

Full text of the source
Annotation:
A novel readout circuit for a gas sensor based on an organic diode with vertical nano-junctions (VNJ) is proposed in this study. Compared with measurement by traditional instruments, the proposed readout circuit offers faster detection, portability, low cost and fewer human operational errors, adding value for biosensors in home-care applications. The readout system comprises seven parts: a preamplifier, a peak-detect-and-hold circuit, a divider, a saturation detector, an auto-reset circuit, a logic gate with a buffer, and a micro-processor control unit (MCU). An ALIENTEK STM32 serves as the CPU of the proposed MCU, and the MCU's ADC digitizes the output data of the readout circuit. The circuit is implemented in the Taiwan Semiconductor Manufacturing Company (TSMC) 0.35 μm 2P4M 3.3 V mixed-signal CMOS process; the chip area is 0.74 × 0.75 mm². Over ammonia concentrations of 10 ppb to 3 ppm, the differences between the experimental and post-simulation results are within 7.24%. The sensing system can detect a minimum ammonia concentration of 10 ppb, while the maximum reaches around 3 ppm.
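For flavor, a small sketch of the post-ADC step: converting the MCU's ADC code back to a voltage and then to an ammonia concentration. The 12-bit ADC assumption matches a typical STM32, but the calibration-curve parameters are invented for the example.

```python
# Illustrative post-processing of the MCU's ADC output: ADC code ->
# voltage -> ammonia concentration via an assumed log-linear calibration
# curve. Curve parameters are placeholders, not values from the paper.
ADC_BITS = 12
V_REF = 3.3  # reference voltage, matching the 3.3 V process

def adc_to_voltage(code: int) -> float:
    """Map a raw ADC code to the sampled readout voltage."""
    return code * V_REF / (2**ADC_BITS - 1)

def voltage_to_ppb(v: float, v0: float = 0.5, slope: float = 0.4) -> float:
    """Assumed sensor response: v = v0 + slope * log10(concentration)."""
    return 10 ** ((v - v0) / slope)

reading = adc_to_voltage(2048)           # mid-scale ADC code
print(f"{voltage_to_ppb(reading):.1f} ppb")   # ~751.6 ppb for this example
```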
APA, Harvard, Vancouver, ISO and other citation styles
4

Lecheler, Stefan, Rainer Schnell and Bertram Stubert. „Experimental and Numerical Investigation of the Flow in a 5-Stage Transonic Compressor Rig“. In ASME Turbo Expo 2001: Power for Land, Sea, and Air. American Society of Mechanical Engineers, 2001. http://dx.doi.org/10.1115/2001-gt-0344.

Full text of the source
Annotation:
The flow in a 5-stage transonic compressor rig was investigated experimentally and numerically. The rig is geometrically scaled from the front part of an existing heavy-duty gas turbine compressor with CDA blading. At several operating points, pressures and temperatures were measured to obtain the performance map of the compressor, radial distributions at the vane leading edges and static pressures at the vanes and casing. To identify shock locations and to obtain information about the rotor-stator interactions and the flow angles, 3D L2F measurements were performed in the first and second rotors at two speeds. A newly developed double-periodic measuring L2F system was employed to analyze the results with respect to the upstream and downstream effects of both rotors. The experimental data were used for the validation and calibration of two steady 3D multistage Navier-Stokes codes with mixing-plane models between the blade rows. With STAGE3D, calculations were mainly done for the first two transonic stages in order to compare shock locations, velocities, and inlet and outlet flow angles with the measurements. Furthermore, the code was calibrated, meaning that grid sizes and numerical parameters were adjusted to give the best fit to the experiments while keeping calculation times acceptable. A set of parameters was found that will be used for the design of new transonic multistage compressors. However, not all flow quantities matched the measurements well: mass flow and pressures could not be matched at the same time. The other code used was TRACE-S, for which a procedure for a fast calculation of the whole performance map for all nine blade rows together was developed and tested. The shape of the calculated speed lines fits the measured ones well, but the mass flow is predicted too high at off-design conditions. The reason is that the calculated loss production is too low, which is confirmed by the higher calculated radial total pressure distributions.
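As a hedged illustration of the kind of speed-line comparison reported (all data values below are invented), one can interpolate a calculated speed line onto the measured mass-flow points and quote the relative deviation in pressure ratio:

```python
# Compare a calculated compressor speed line with a measured one by
# interpolating the calculated pressure ratio onto the measured
# mass-flow points. Numbers are placeholders for illustration only.
import numpy as np

m_meas  = np.array([0.90, 0.95, 1.00, 1.04])   # normalized mass flow (measured)
pr_meas = np.array([1.42, 1.38, 1.32, 1.24])   # measured total pressure ratio
m_calc  = np.array([0.92, 0.97, 1.02, 1.06])   # calculated speed line
pr_calc = np.array([1.43, 1.39, 1.31, 1.22])

pr_interp = np.interp(m_meas, m_calc, pr_calc)  # m_calc must be ascending
dev = (pr_interp - pr_meas) / pr_meas * 100.0   # relative deviation [%]
print("deviation [%]:", np.round(dev, 2))
```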
APA, Harvard, Vancouver, ISO and other citation styles
5

Schumann, Christian-Andreas, and Andreas Rutsch. „An Approach for an Automated and Market-Driven Optimisation of Product Ranges as Part of the Product Lifecycle Management (PLM) Framework“. In ASME 2008 9th Biennial Conference on Engineering Systems Design and Analysis. ASMEDC, 2008. http://dx.doi.org/10.1115/esda2008-59477.

Full text of the source
Annotation:
From the perspective of systems theory, looking at a defined product range of a company, it is possible to describe the main influences and restrictions imposed on it by the market. After figuring out which of those variables are specific, measurable, attainable, realistic and timely, there is the potential to create sensed values for a "market-driven optimization of product ranges" feedback loop. The controller of the loop drives the reference toward the optimal product range for the customer. Therefore, a second model or subsystem is needed, which commits the optimum product range with respect to the economic interests of the quoting company. Changes in market demands will then have a direct effect on the product ranges, under the restrictions of intra-corporate economic targets. Before this interlinking between market and product can be achieved, the controller and its functions need to be determined, and rules have to be developed and evaluated with respect to proper feedback on market demands. The whole system will be prototypically implemented on the basis of an integrated enterprise information system comprising Computer Aided Design (CAD), Product Data Management (PDM), Part Management (PM), Enterprise Resource Planning (ERP) and E-Commerce (EC) subsystems. The feedback loop described will be the core of the system in the PLM framework, and its main impact will be on the product data model, especially the requirements on its definition.
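A toy sketch of such a feedback loop, with an assumed proportional update rule and an assumed economic floor (none of this comes from the paper):

```python
# Toy "market-driven optimization of product ranges" loop: a controller
# nudges the offered range toward sensed market demand, but never below
# an intra-corporate economic minimum. Update rule and numbers assumed.
def update_range(offered: float, demanded: float,
                 gain: float = 0.5, economic_floor: float = 20.0) -> float:
    """One controller step: move range size toward demand,
    clipped at the economically viable minimum."""
    adjusted = offered + gain * (demanded - offered)
    return max(adjusted, economic_floor)

offered = 100.0                     # current number of variants offered
for demand in (80.0, 60.0, 10.0):   # sensed market demand per cycle
    offered = update_range(offered, demand)
    print(round(offered, 1))        # prints 90.0, 75.0, 42.5
```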
APA, Harvard, Vancouver, ISO and other citation styles
6

Bejgerowski, Wojciech, and Satyandra K. Gupta. „Runner Optimization for In-Mold Assembly of Multi-Material Compliant Mechanisms“. In ASME 2011 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2011. http://dx.doi.org/10.1115/detc2011-48573.

Full text of the source
Annotation:
The runner system in the injection molding process supplies the polymer melt from the injection nozzle to the gates of the final part cavities. Realizing complex multi-material mechanisms by the in-mold assembly process requires special runner layout design considerations due to the presence of the first-stage components. This paper presents the development of an optimization approach for runner systems in the in-mold assembly of multi-material compliant mechanisms. First, the issues specific to the in-mold assembly process are identified and analyzed. Second, the general optimization problem is formulated by identifying all parameters, design variables, objective functions and constraints. Third, the implementation of the optimization problem in the Matlab® environment is described based on a case study of a runner system for an in-mold assembled MAV drive mechanism; this multi-material compliant mechanism consists of seven rigid links interconnected by six compliant hinges. Finally, several optimization approaches are analyzed to study their performance in solving the formulated problem, and the most appropriate one is selected. The case study showed the applicability of the developed optimization approach to runner systems for complex in-mold assembled multi-material mechanism designs.
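The following sketch, in Python rather than the authors' Matlab®, illustrates the general shape of such a formulation with a deliberately simplified geometry model; the flow-resistance proxy, tolerance and all numbers are assumptions, not the paper's problem:

```python
# Hedged sketch of a runner-system optimization: minimize total runner
# volume while keeping the fill imbalance between gates below a
# tolerance, using a Hagen-Poiseuille-like resistance proxy L/d^4.
import numpy as np
from scipy.optimize import minimize

L = np.array([40.0, 55.0, 70.0])            # fixed runner lengths [mm]

def volume(d):                               # objective: total runner volume
    return float(np.sum(np.pi * (d / 2) ** 2 * L))

def fill_imbalance(d):                       # spread of branch resistances
    r = L / d**4
    return float(r.max() - r.min())

cons = {"type": "ineq", "fun": lambda d: 1e-4 - fill_imbalance(d)}
res = minimize(volume, x0=np.array([4.0, 4.0, 4.0]),
               bounds=[(2.0, 8.0)] * 3, constraints=cons, method="SLSQP")
print(res.x)   # branch diameters that balance fill at minimum volume
```

Balancing forces each diameter to scale with its length as d ∝ L^(1/4), so the longest runner ends up the thickest; the optimizer then shrinks all three until a bound becomes active.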
APA, Harvard, Vancouver, ISO and other citation styles
7

Tang, J., and K. W. Wang. „Vibration Confinement via Optimal Eigenvector Assignment and Piezoelectric Networks“. In ASME 2001 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2001. http://dx.doi.org/10.1115/detc2001/vib-21474.

Full text of the source
Annotation:
Abstract: The underlying principle of vibration confinement is to alter the structural modes so that the corresponding modal components have much smaller amplitudes in the regions of concern than in the remaining part of the structure. In this research, the state of the art in vibration confinement is advanced in two correlated ways. First, a new eigenstructure assignment algorithm is developed to suppress vibration in the regions of interest more directly. This algorithm features an optimal selection of achievable eigenvectors that minimizes the eigenvector components in the regions of concern by using the Rayleigh principle. Second, the active control input is applied through an active-passive hybrid piezoelectric network. With the introduction of circuitry elements, which are much easier to implement than changed or added mechanical components, the state matrices can be reformed and the design space for eigenstructure assignment can be greatly enlarged. The merit of the proposed system and scheme is demonstrated and analyzed using a numerical example.
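A compact numerical illustration of the selection step (with a random placeholder basis, not the paper's piezoelectric model): among vectors in an achievable subspace, the combination that minimizes amplitude at the quieted DOFs relative to the rest is the minimal eigenpair of a generalized Rayleigh quotient.

```python
# Pick the achievable eigenvector with the smallest amplitude at the
# DOFs to be quieted: minimize ||V[quiet] c|| / ||V[rest] c|| via the
# smallest eigenpair of the pencil (A, B). Basis V is a placeholder.
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3                       # DOFs, dimension of achievable subspace
V = rng.standard_normal((n, m))   # basis of achievable eigenvectors
quiet = [0, 1, 2]                 # DOFs where vibration is confined away
rest = [i for i in range(n) if i not in quiet]

A = V[quiet].T @ V[quiet]         # quadratic form at quieted DOFs
B = V[rest].T @ V[rest]           # quadratic form at remaining DOFs
w, Q = np.linalg.eig(np.linalg.solve(B, A))
c = np.real(Q[:, np.argmin(np.real(w))])   # minimal Rayleigh quotient
phi = V @ c                       # assigned eigenvector, small on `quiet`
print(np.linalg.norm(phi[quiet]) / np.linalg.norm(phi[rest]))
```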
APA, Harvard, Vancouver, ISO and other citation styles
8

Liu, Ruijun, Myriam Servières and Guillaume Moreau. „A 3D-GIS Updating Method With Interaction in an Urban Environment“. In ASME 2012 11th Biennial Conference on Engineering Systems Design and Analysis. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/esda2012-82769.

Full text of the source
Annotation:
This paper presents a method for reconstructing 3D buildings and updating Geographic Information System (GIS) data from video, using 2D-GIS data and a ground-based video sequence as inputs. The approach consists of three parts. In the first part, the data is captured and analyzed: besides the 2D-GIS data, we capture a video from a street view, obtain thousands of 3D feature points with our extraction algorithm, and design a noise filter to remove outliers. The second part is a generation process, which contains the footprint extraction and the basic facade reconstruction. The last part is the correction and updating process: after correcting the footprint and computing the height of the building, our method updates the data in the GIS. In addition, we use some user knowledge to make the results considerably more accurate; in the filtering and correcting processes, our method supports several interactive operations.
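A minimal sketch of the kind of statistical noise filter mentioned for the extracted 3D feature points; the neighbourhood size and threshold are illustrative assumptions, not the paper's filter.

```python
# Statistical outlier removal for 3D feature points: discard points whose
# mean distance to their k nearest neighbours is far above the global
# average. O(n^2) distances, fine for a sketch on thousands of points.
import numpy as np

def remove_outliers(points: np.ndarray, k: int = 8, n_std: float = 2.0):
    """points: (n, 3) array of 3D feature points; returns the inliers."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=2)          # full distance matrix
    dist.sort(axis=1)
    mean_knn = dist[:, 1:k + 1].mean(axis=1)     # skip self-distance 0
    keep = mean_knn < mean_knn.mean() + n_std * mean_knn.std()
    return points[keep]
```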
APA, Harvard, Vancouver, ISO and other citation styles
9

Boss, Terry, J. Kevin Wison, Charlie Childs and Bernie Selig. „Gas Transmission Pipeline Safety and Integrity Activities and Results for INGAA Pipeline Companies“. In 2012 9th International Pipeline Conference. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/ipc2012-90490.

Full text of the source
Annotation:
Interstate natural gas transmission pipelines have performed standardized integrity management processes since the inception of ASME B31.8 in 1942. These standardized practices have always been preceded by new technology and individual company efforts to improve processes, and they have improved through the decades via newer consensus standard editions and the adoption of pipeline safety regulations (49 CFR Part 192). The Pipeline Safety Improvement Act, which added to the list of these improved practices, was passed at the end of 2002 and was reaffirmed in January 2012. The law applies to natural gas transmission pipeline companies and mandates additional practices that pipeline operators must conduct to ensure the safety and integrity of natural gas pipelines through specific safety programs. Central to the 2002 Act is the requirement that pipeline operators implement an Integrity Management Program (IMP), which among other things requires operators to identify so-called High Consequence Areas (HCAs) on their systems, conduct risk analyses of these areas, and perform baseline integrity assessments and reassessments of each HCA according to a prescribed schedule and using prescribed methods. The 2002 Act formalized, expanded and standardized the Integrity Management (IM) practices that individual operators had been conducting on their pipeline systems, and the recently passed 2012 Pipeline Safety Act expanded this effort to include measures to improve the integrity of the total transmission pipeline system. In December 2010, INGAA launched a voluntary initiative to enhance pipeline safety and communicate the results to stakeholders. The effort focuses on analyzing data that measures the effectiveness of safety and integrity practices, detects successful practices, identifies opportunities for improvement, and further sharpens safety performance by developing an even more effective integrity management process. During 2011, a group chartered under the Integrity Management Continuous Improvement (IMCI) initiative identified information that may be useful in understanding the safety progress of the INGAA membership as its members implemented programs composed of the traditional safety practices under DOT Part 192, the PHMSA IMP regulations codified in 2004, and the individual operators' voluntary programs. The paper provides a snapshot, above and beyond the typical PHMSA-mandated reporting, of the results from the data collected and analyzed from this integrity management activity on 185,000 miles of natural gas transmission pipelines operated by interstate natural gas transmission companies. Natural gas transmission pipeline companies have made significant strides in improving their systems and the integrity and safety of their pipelines in and beyond HCAs. The findings indicate that, over the course of the data-gathering period, pipeline operators' efforts have been effective and are resulting in improved pipeline integrity. Since the inception of the IMP and the expanded voluntary IM programs, the probability of leaks in the interstate natural gas transmission pipeline system has continued on a downward slope, and the number of critical repairs being made to pipe segments reassessed under integrity programs, both mandated and voluntary, is decreasing dramatically.
Even with this progress, INGAA members committed in 2011 to a multi-year effort to expand the breadth and depth of integrity management practices on the interstate natural gas transmission pipeline systems. A key component of that extensive effort is the design of metrics to measure the program's effectiveness in achieving its goals. As such, this report documents the performance baseline before the implementation of the future program.
APA, Harvard, Vancouver, ISO and other citation styles
10

Van den Braembussche, R. A., Z. Alsalihi, T. Verstraete, A. Matsuo, S. Ibaraki, K. Sugimoto and I. Tomita. „Multidisciplinary Multipoint Optimization of a Transonic Turbocharger Compressor“. In ASME Turbo Expo 2012: Turbine Technical Conference and Exposition. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/gt2012-69645.

Full text of the source
Annotation:
A transonic centrifugal compressor for turbocharger applications has been redesigned by means of a multidisciplinary, multipoint optimization system composed of a 3D Navier-Stokes solver, a finite element stress analyzer, a genetic algorithm and an artificial neural network. The latter makes use of a database containing the geometry and corresponding performance of previously analyzed impellers, which allows a considerable reduction in computational effort. The performance of every new geometry is verified by the 3D Navier-Stokes solver, and a finite element analysis verifies the mechanical integrity of the impeller. The geometrical description of the impeller has been extended to better adapt the inducer part of the impeller to transonic flows. The splitters are no longer copies of the full blades but are specially designed for minimum losses and equal mass flow on both sides. The blade thickness and number of blades are unchanged because they are defined by robustness and inertia considerations. The operating range is guaranteed by a two-step optimization procedure: the first step provides information allowing a modification of the inlet section to guarantee the required choking mass flow, together with a more accurate prediction of the boundary conditions for the Navier-Stokes analysis of the modified impeller; the second step predicts the performance curve of the new geometry, for which the choking mass flow is then known. It is shown how these extensions of the optimization method have led to a considerable improvement of the efficiency and the corresponding pressure ratio, while respecting the surge and choking limits without increasing the stress level.
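To make the loop structure concrete, here is a heavily simplified sketch: a cheap surrogate trained on a database of verified designs ranks candidates, and only the best candidate per generation is verified by the expensive solver and added to the database. The nearest-neighbour "ANN", the quadratic "solver" and the random-sampling "GA" are all stand-ins for the real tools.

```python
# Surrogate-assisted optimization loop: the surrogate screens candidate
# geometries cheaply; only the top candidate is verified by the costly
# model and fed back into the database. Every function is a placeholder.
import random

database = []                                  # (geometry, performance) pairs

def surrogate_predict(geom):                   # nearest-neighbour stand-in
    g, p = min(database,
               key=lambda e: sum((a - b) ** 2 for a, b in zip(e[0], geom)))
    return p

def expensive_solver(geom):                    # stand-in for CFD/FEA "truth"
    return -sum((x - 0.3) ** 2 for x in geom)

for g in [[random.random() for _ in range(4)] for _ in range(10)]:
    database.append((g, expensive_solver(g)))  # seed the database

for generation in range(20):                   # random sampling stands in
    pop = [[random.random() for _ in range(4)] for _ in range(50)]
    best = max(pop, key=surrogate_predict)     # cheap surrogate ranking
    database.append((best, expensive_solver(best)))  # verify and learn

print(max(database, key=lambda e: e[1])[0])    # best verified geometry
```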
APA, Harvard, Vancouver, ISO and other citation styles