Journal articles on the topic 'Data management and data science not elsewhere classified'


Consult the top 50 journal articles for your research on the topic 'Data management and data science not elsewhere classified.'


1

Bindewald, A., S. Miocic, A. Wedler, and J. Bauhus. "Forest inventory-based assessments of the invasion risk of Pseudotsuga menziesii (Mirb.) Franco and Quercus rubra L. in Germany." European Journal of Forest Research 140, no. 4 (March 26, 2021): 883–99. http://dx.doi.org/10.1007/s10342-021-01373-0.

Abstract:
In Europe, some non-native tree species (NNT) are classified as invasive because they have spread into semi-natural habitats. Yet, available risk assessment protocols are often based on a few limited case studies with unknown representativeness and uncertain data quality. This is particularly problematic when negative impacts of NNT are confined to particular ecosystems or processes, whilst providing valuable ecosystem services elsewhere. Here, we filled this knowledge gap and assessed invasion risks of two controversially discussed NNT in Germany (Quercus rubra L., Pseudotsuga menziesii (Mirb.) Franco) for broad forest types using large scale inventory data. For this purpose, establishment success of natural regeneration was quantified in terms of cover and height classes. The current extent of spread into protected forest habitats was investigated in south-west Germany using regional data. Establishment was most successful at sites where the NNT are abundant in the canopy and where sufficient light is available in the understory. Natural regeneration of both NNT was observed in 0.3% of the total area of protected habitats. In forest habitats with sufficient light in the understory and competitively inferior tree species, there is a risk that Douglas fir and red oak cause changes in species composition in the absence of management interventions. The installation of buffer zones and regular removal of unwanted regeneration could minimize such risks for protected areas. Our study showed that forest inventories can provide valuable data for comparing the establishment risk of NNT amongst ecosystem types, regions or jurisdictions. This information can be improved by recording the abundance and developmental stage of widespread NNT, particularly in semi-natural ecosystems.
2

Feng, Shuxian, and Toshiya Yamamoto. "Preliminary research on sponge city concept for urban flood reduction: a case study on ten sponge city pilot projects in Shanghai, China." Disaster Prevention and Management: An International Journal 29, no. 6 (November 9, 2020): 961–85. http://dx.doi.org/10.1108/dpm-01-2020-0019.

Abstract:
Purpose: This research aimed to determine the differences and similarities in each pilot project to understand the primary design forms and concepts of sponge city concept (SCC) projects in China. It also aimed to examine ten pilot projects in Shanghai to extrapolate their main characteristics and the processes necessary for implementing SCC projects effectively.
Design/methodology/approach: A literature review and field survey case study were employed. Data were mostly collected through a field survey in Shanghai, focusing on both the projects and the surrounding environment. Based on the examination of these projects, a comparative method was used to determine the characteristics of the ten pilot SCC projects and programs in Shanghai.
Findings: Six main types of SCC projects among 30 pilot cities were classified in this research to find differences and similarities among the pilot cities. Four sponge design methods were identified across the ten pilot projects. After comparing the projects at the same geographical scale, three geometrical types were categorized in both existing and new city areas. SCC project characteristics could be identified by combining the four methods and three geometrical types, and those of the SCC programs by comparing the change in land use and the surrounding environment in the ten pilot projects.
Originality/value: The results are valuable for implementing SCC projects in China and elsewhere and for future research on the impact of SCC projects.
3

Geromont, H. F., and D. S. Butterworth. "Generic management procedures for data-poor fisheries: forecasting with few data." ICES Journal of Marine Science 72, no. 1 (January 15, 2014): 251–61. http://dx.doi.org/10.1093/icesjms/fst232.

Abstract:
The majority of fish stocks worldwide are not managed quantitatively as they lack sufficient data, particularly a direct index of abundance, on which to base an assessment. Often these stocks are relatively “low value”, which renders dedicated scientific management too costly, and a generic solution is therefore desirable. A management procedure (MP) approach is suggested where simple harvest control rules are simulation tested to check robustness to uncertainties. The aim of this analysis is to test some very simple “off-the-shelf” MPs that could be applied to groups of data-poor stocks which share similar key characteristics in terms of status and demographic parameters. For this initial investigation, a selection of empirical MPs is simulation tested over a wide range of operating models (OMs) representing resources of medium productivity classified as severely depleted, to ascertain how well these different MPs perform. While the data-moderate MPs (based on an index of abundance) perform somewhat better than the data-limited ones (which lack such input) as would be expected, the latter nevertheless perform surprisingly well across wide ranges of uncertainty. These simple MPs could well provide the basis to develop candidate MPs to manage data-limited stocks, ensuring if not optimal, at least relatively stable sustainable future catches.
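The empirical MPs tested in the paper map an abundance index directly to a catch recommendation. As a rough illustration only, a slope-based harvest control rule adjusts next year's total allowable catch (TAC) by the recent trend in the index; the toy logistic operating model, gain, and noise level below are assumptions of this sketch, not the paper's operating models or rules.

```python
import numpy as np

def slope_hcr(tac, index_window, gain=0.5):
    """Adjust next year's TAC by the recent log-linear trend in an
    abundance index: TAC_{y+1} = TAC_y * (1 + gain * slope)."""
    years = np.arange(len(index_window))
    slope = np.polyfit(years, np.log(index_window), 1)[0]
    return tac * (1.0 + gain * slope)

# Toy operating model: logistic biomass dynamics with a noisy survey index.
rng = np.random.default_rng(1)
r, K = 0.3, 1000.0            # intrinsic growth rate and carrying capacity
biomass, tac = 300.0, 30.0
index_hist = [biomass * np.exp(rng.normal(0, 0.1))]
for year in range(20):
    biomass += r * biomass * (1 - biomass / K) - tac   # surplus production minus catch
    biomass = max(biomass, 1.0)                        # crude crash guard
    index_hist.append(biomass * np.exp(rng.normal(0, 0.1)))
    if len(index_hist) >= 5:   # update only once a few index points exist
        tac = slope_hcr(tac, np.array(index_hist[-5:]))
print(round(biomass, 1), round(tac, 1))
```

Simulation testing in the MP sense repeats such projections across many operating models and noise realizations to check that the rule keeps catches stable and the stock above limit reference points.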
4

Liu, Yang. "Intelligent Community Management System Based on Big Data Technology." Scientific Programming 2022 (February 23, 2022): 1–10. http://dx.doi.org/10.1155/2022/5396636.

Abstract:
Community safety has become an important part of social public safety. The construction of a safe community focuses on the accumulation of community safety capabilities. This paper discusses the application of big data technology in community safety construction and the improvement of community safety promotion capabilities. We analyzed the sources and collection methods of community data, classified multisource heterogeneous community data, and constructed seven types of community data. We designed the conceptual structure and storage structure of the community database. On the basis of the construction of the community database, the architecture design of the big data platform for community security was launched. From the perspective of different user types, the functional requirements of the big data platform were analyzed. Combined with demand analysis, the overall architecture design of the community big data platform was carried out. On the basis of the overall architecture, the application architecture and technical architecture were designed in more detail, and the key technologies of the community big data platform were analyzed. Finally, it analyzes how to use the community big data platform to predict public security risks by constructing a CART regression tree model.
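The CART regression tree mentioned at the end of the abstract can be sketched with scikit-learn; the community "features" and synthetic risk score below are illustrative placeholders, not the paper's data or variables.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
# Hypothetical community features: incident counts, camera coverage,
# patrol frequency (names are illustrative, not from the paper).
X = rng.random((200, 3))
# Synthetic "public security risk" score, driven mostly by the first feature.
y = 2.0 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.05, 200)

# A shallow CART regression tree partitions the feature space into
# homogeneous regions and predicts the mean risk in each region.
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
pred = tree.predict(X)
print(round(float(np.corrcoef(y, pred)[0, 1]), 3))
```

In a platform like the one described, such a tree would be trained on historical incident records and queried to flag high-risk communities.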
5

Aljabhan, Basim, and Melese Abeyie. "Big Data Analytics in Supply Chain Management: A Qualitative Study." Computational Intelligence and Neuroscience 2022 (September 16, 2022): 1–10. http://dx.doi.org/10.1155/2022/9573669.

Abstract:
This work explores the leading supply chain processes impacted by big data analytics (BDA) techniques. Although these concepts are being extensively applied to supply chain management, the number of works that examine and classify the main processes in the current literature is still scarce. This article, therefore, provides a classification of the current literature on the use of big data analytics and provides insight from professionals in the field in relation to this topic. A well-established set of practical guidelines was used to design and carry out a systematic literature mapping. A total of 50 primary studies were analysed and classified, chosen from a sample of 5,437 studies after careful filtering to answer six research questions. In addition, a survey was prepared and administered to professionals working in the area. In total, 25 professionals answered a questionnaire with eleven questions: ten explore the importance of big data analytics for the supply chain areas addressed in this work, and one asks respondents to list the three areas where BDA can have the greatest impact. More than 60% of the studies are directly linked to the area of chain management; most performed empirical studies but rarely classified or detailed their methodological procedures; almost 50% present models to optimize some process or forecasts for better decision-making; and more than 50% of professionals working in the area believe that the processes where big data analytics can contribute most effectively are related to inventory and stockout management. This study serves as a basis for further research and future work, as it reviews the literature, pointing out the main areas being addressed and relating them to an understanding of these areas in practice.
6

Sant'Anna, Annibal P. "Data envelopment analysis of randomized ranks." Pesquisa Operacional 22, no. 2 (December 2002): 203–15. http://dx.doi.org/10.1590/s0101-74382002000200007.

Abstract:
Probabilities and odds, derived from vectors of ranks, are here compared as measures of efficiency of decision-making units (DMUs). These measures are computed with the goal of providing preliminary information before starting a Data Envelopment Analysis (DEA) or applying any other evaluation or preference-composition methodology. Preference, quality and productivity evaluations are usually measured with errors or subject to the influence of other random disturbances. Reducing evaluations to ranks and treating the ranks as estimates of location parameters of random variables, we are able to compute the probability of each DMU being classified as the best according to the consumption of each input and the production of each output. Employing the probabilities of being the best as efficiency measures, we stretch the distances between the most efficient units. We combine these partial probabilities into a global efficiency score determined in terms of proximity to the efficiency frontier.
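The core idea, treating observed ranks as noisy location estimates and scoring each DMU by its probability of coming out best, can be sketched by simulation; the normal perturbation and its scale are assumptions of this sketch, not the paper's exact randomization scheme.

```python
import numpy as np

def prob_best(ranks, noise=1.0, n_sim=20000, seed=0):
    """Estimate each unit's probability of being ranked best (lowest rank)
    when the observed ranks are perturbed by random disturbances."""
    rng = np.random.default_rng(seed)
    ranks = np.asarray(ranks, dtype=float)
    draws = ranks + rng.normal(0, noise, (n_sim, ranks.size))
    best = draws.argmin(axis=1)          # lower rank = better
    return np.bincount(best, minlength=ranks.size) / n_sim

# Four DMUs ranked 1 (best) to 4 on a single criterion.
p = prob_best([1, 2, 3, 4])
print(np.round(p, 3))
```

Per-criterion probabilities like these would then be combined into a global efficiency score, as the abstract describes.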
7

Li, Juan. "Application of Intelligent Archives Management Based on Data Mining in Hospital Archives Management." Journal of Electrical and Computer Engineering 2022 (April 7, 2022): 1–13. http://dx.doi.org/10.1155/2022/6217328.

Abstract:
Data mining belongs to knowledge discovery, the process of revealing implicit, unknown, and valuable information from a large amount of fuzzy application data. The potential information revealed by data mining can help decision makers adjust market strategies and reduce market risks. The information excavated must be real and not universally known, and it can be the discovery of a specific problem. Data mining algorithms mainly include the neural network method, decision tree method, genetic algorithm, rough set method, fuzzy set method, association rule method, and so on. Archives management, also known as archive work, is the general term for the various business activities in which archives directly manage archive entities and archive information and provide utilization services. It is also the most basic part of national archives. Hospital archives are an important part of hospital management and the accumulation of work experience, one of the important elements for building a modern hospital. Hospital archives comprise documents, work records, charts, audio recordings, videos, photos, and other documents, audio-visual materials, and physical materials, such as certificates, trophies, and medals obtained by hospitals, departments, and individuals. The purpose of this paper is to study the application of intelligent archives management based on data mining in hospital archives management, expecting to use existing data mining technology to improve current hospital archives management. This paper investigates the age and educational background of hospital archives management workers and explores the relationship between these factors and the quality of archives management. Based on a decision tree algorithm built on the database, the hospital archive data are classified and processed to improve the system's data processing capability.
The experimental results of this paper show that among the staff working in the archives management department of the hospital, 20-to-30-year-olds account for 46.2% of the total group. According to the data, the staff in the archives management department of the hospital tend to be younger. Among the staff under the age of 30, the file pass rate was 98.3% and the failure rate was 1.7%. Among the staff over 50 years old, the file pass rate was 99.9% and the failure rate was 0.1%. According to the data, performance on the job is related to the experience of the employee.
8

Beránek, Václav, Tomáš Olšan, Martin Libra, Vladislav Poulek, Jan Sedláček, Minh-Quan Dang, and Igor Tyukhov. "New Monitoring System for Photovoltaic Power Plants’ Management." Energies 11, no. 10 (September 20, 2018): 2495. http://dx.doi.org/10.3390/en11102495.

Abstract:
An innovative solar monitoring system has been developed. The system is aimed at measuring the main parameters and characteristics of solar plants and at collecting, diagnosing and processing data. The system communicates with the inverters, electrometers, meteorological equipment and additional components of the photovoltaic arrays. The long-working system is built on special data-collecting technologies; at the generating plants, a special data logger, BBbox, is installed. The new monitoring system has been used to follow 65 solar plants, totalling 175 MWp, in the Czech Republic and elsewhere. As an example, we have selected 13 PV plants in this paper that are at least seven years old. The monitoring system contributes to quality management of plants, and it also provides data for scientific purposes. Production of electricity in the built PV plants reflects the expected values according to the internationally used software PVGIS (version 5) during the previous seven years of operation. A comparison of important system parameters clearly shows the new solutions and benefits of the new Solarmon-2.0 monitoring system. Secured communications increase data protection, and a higher frequency of data saving allows higher accuracy of the mathematical models.
9

Rocha, Rafael Brandão, and Maria Aparecida Cavalcanti Netto. "A data envelopment analysis model for rank ordering suppliers in the oil industry." Pesquisa Operacional 22, no. 2 (December 2002): 123–31. http://dx.doi.org/10.1590/s0101-74382002000200002.

Abstract:
The benefits of company-supplier integration top the strategic agendas of managers. Developing a system showing which suppliers merit continuing and deepening the partnership is difficult because of the large quantity of variables to be analyzed. The internationalized petroleum industry, requiring a large variety of materials, is no different. In this context, the Brazilian company PETROBRAS S.A. has a system to evaluate its suppliers based on a consensus panel formed by its managers. This paper presents a two-phase methodology for classifying and awarding suppliers using the DEA model. First, the suppliers are classified according to their efficiency based on commercial transactions carried out. Second, they are classified according to the opinions of the managers, using a DEA model for calculating votes, with assurance regions and superefficiency defining the best suppliers. The paper presents a case study in the E&P segment of PETROBRAS and the results obtained with the methodology.
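A minimal input-oriented CCR DEA model (the basic envelopment form, without the assurance-region and superefficiency extensions the paper uses) reduces to one linear program per unit; the toy supplier data below are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr(X, Y, o):
    """Input-oriented CCR efficiency of unit o.
    X: (n, m) inputs, Y: (n, s) outputs. Solves
    min theta  s.t.  sum_j lam_j x_j <= theta x_o,  sum_j lam_j y_j >= y_o."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                  # variables: [theta, lam_1..lam_n]
    A_in = np.c_[-X[o].reshape(m, 1), X.T]       # sum lam_j x_ji - theta x_oi <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y.T]        # -sum lam_j y_jr <= -y_or
    A = np.vstack([A_in, A_out])
    b = np.r_[np.zeros(m), -Y[o]]
    res = linprog(c, A_ub=A, b_ub=b,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# Toy data: supplier 0 produces the same output with half the input.
X = np.array([[2.0], [4.0], [4.0]])
Y = np.array([[1.0], [1.0], [1.0]])
scores = [round(dea_ccr(X, Y, o), 3) for o in range(3)]
print(scores)  # supplier 0 is efficient (1.0); suppliers 1 and 2 score 0.5
```

Assurance regions would add further rows constraining the implicit multiplier ratios; superefficiency would drop unit o from the reference set.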
10

Wilson, Lee, Tiong T. Goh, and William Yu Chung Wang. "Big Data Management Challenges in a Meteorological Organisation." International Journal of E-Adoption 4, no. 2 (April 2012): 1–14. http://dx.doi.org/10.4018/jea.2012040101.

Abstract:
Data management practices strongly impact enterprise performance, especially for e-science organisations dealing with big data. This study identifies the key challenges and issues facing information system managers amid growing demand for big data operations to deliver timely meteorological products. Data were collected from in-depth interviews with five MetService information system managers, including the CIO. Secondary data sources include internal documents and the relevant literature. The study revealed that the pressing and challenging big data management issues can broadly be classified as data governance, infrastructure management, and workflow management. The study identifies a gap in adopting an effective workflow management system and a coordinated outsourcing plan within the organisation. Although the study is limited by its sample size and generalisability, the findings are useful for IT managers and practitioners of other data-intensive organisations examining their data management practices and the need to balance the demand for efficient scientific operations against sustainable business growth. The study recognised that although the organisation is implementing up-to-date and practical solutions to meet these challenges, effort is needed to harmonise and align these solutions with business growth strategies to sustain future growth. This study enhances society's understanding of the current practices of a real-world organisation.
11

Zu, Enhou, Ming-Hung Shu, Jui-Chan Huang, Bi-Min Hsu, and Chien-Ming Hu. "Management Problems of Modern Logistics Information System Based on Data Mining." Mobile Information Systems 2021 (September 20, 2021): 1–9. http://dx.doi.org/10.1155/2021/5241921.

Abstract:
With the development of technology, the data stored by humans is growing geometrically. Especially in the logistics industry, the rise of online e-commerce has created a huge data flow in the informatized logistics network. How to collect, analyze, and organize this information in time, and how to extract its meaning, is a difficult problem. The paper approaches the management of logistics systems from the perspective of statistics. This article randomly samples 1,000 customers' logistics records from the logistics enterprise information system, uses mathematical analysis and matrix theory to analyze the correlations among them, and analyzes customer types and shopping habits. Information on these habits, daily consumption patterns, and brand preferences is classified and summarized using mathematical statistics. The experimental results show that the findings can well reflect customers' daily habits and consumption habits. The experimental data show that mining effective and accurate information from massive information can help companies make decisions quickly, formulate scientific logistics management programs, improve operating efficiency, reduce operating costs, and obtain good benefits.
12

Cooper, Alan K. "Future progress in Antarctic science: improving data care, sharing and collaboration." Earth and Environmental Science Transactions of the Royal Society of Edinburgh 104, no. 1 (March 2013): 69–80. http://dx.doi.org/10.1017/s1755691013000091.

Abstract:
Data are the foundation of modern observational science. High-quality science relies on high-quality data. In Antarctica, unlike elsewhere, researchers must disperse data and conduct science differently. They must work within the laws enacted under the Antarctic Treaty, which defines Antarctica as a continent for peace and science, where data sharing and international collaboration are requisite keystones. Scientists also work under the oversight guidance of the Scientific Committee on Antarctic Research (SCAR). In the last decade, rapid technological advances and a vast increase in digital data volumes have changed the ways data are acquired, communicated, analysed, displayed and reported. Yet, the underlying science culture in which data are funded, utilised and cared for has changed little. Science-culture changes are needed for greater progress in Antarctic science. We briefly summarise and discuss aspects of Antarctic ‘data care’, which is a subset of data management. We offer perceptions on how changes to some aspects of current science-culture could inspire greater data sharing and international collaboration, to achieve greater success. The changes would place greater emphasis on data visualisation, higher national priority on data care, implementation of a data-library concept for data sharing, greater individual responsibility for data care, and further integration of the cultural arts into data and science presentations. Much effort has gone into data management in the international community, and there are many excellent examples of successful collaborative Antarctic science programs within SCAR built on existing data sets. Yet, challenges in data care remain, and the specific suggestions we make deserve attention by the science community, to further promote peace and progress in Antarctic science.
13

LU, LANTING, and CHRISTINE S. M. CURRIE. "EVALUATION OF THE ARROWS METHOD FOR CLASSIFICATION OF DATA." Asia-Pacific Journal of Operational Research 27, no. 01 (February 2010): 121–42. http://dx.doi.org/10.1142/s0217595910002600.

Abstract:
We evaluate the Arrows Classification Method (ACM) for grouping objects based on the similarity of their data. This is a new method, which aims to achieve a balance between the conflicting objectives of maximizing internal cohesion and external isolation in the output groups. The method is widely applicable, especially in simulation input and output modelling, and has previously been used for grouping machines on an assembly line, based on data on time-to-repair; and hospital procedures, based on length-of-stay data. The similarity of the data from a pair of objects is measured using the two-sample Cramér-von-Mises goodness of fit statistic, with bootstrapping employed to find the significance or p-value of the calculated statistic. The p-values coming from the paired comparisons serve as inputs to the ACM, and allow the objects to be classified such that no pair of objects that are grouped together have significantly different data. In this article, we give the technical details of the method and evaluate its use through testing with specially generated samples. We will also demonstrate its practical application with two real examples.
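The pairwise building block of the ACM, the two-sample Cramér-von-Mises test, is available in SciPy; this sketch uses SciPy's p-value in place of the bootstrapped one described in the article, and the pairing rule (group objects whose data do not differ significantly) is only summarized here.

```python
import numpy as np
from scipy.stats import cramervonmises_2samp

rng = np.random.default_rng(42)
# Two objects drawn from the same distribution, one from a shifted one
# (e.g. time-to-repair samples for three machines; data are synthetic).
same_a = rng.normal(0.0, 1.0, 300)
same_b = rng.normal(0.0, 1.0, 300)
shifted = rng.normal(1.0, 1.0, 300)

p_same = cramervonmises_2samp(same_a, same_b).pvalue
p_diff = cramervonmises_2samp(same_a, shifted).pvalue
print(round(p_same, 3), p_diff < 0.01)
```

In the ACM these pairwise p-values feed the grouping step: objects may share a group only if no pair within it differs significantly.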
14

Tonta, Yaşar. "Keynote 2: Developments in Education for Information: Will “Data” Trigger the Next Wave of Curriculum Changes in LIS Schools?" Pakistan Journal of Information Management and Libraries 17 (December 1, 2016): 2–12. http://dx.doi.org/10.47657/201617888.

Abstract:
The first university-level library schools were opened during the last quarter of the 19th century. The number of such schools gradually increased during the first half of the 20th century and especially after the Second World War, both in the USA and elsewhere. As information gained further importance in scientific endeavors and social life, librarianship became a more interdisciplinary field, and library schools were renamed schools of library and information science/information studies/information management/information to better reflect the range of education provided. In this paper, we review the major developments in education for library and information science (LIS) and the impact of these developments on the curricula of LIS schools. We then review the programs and courses introduced by some LIS schools to address data science and data curation issues. We also discuss some of the factors, such as the "data deluge" and "big data", that might have forced LIS schools to add such courses to their programs. We conclude by observing that "data" has already triggered some curriculum changes in a number of LIS schools in the USA and elsewhere, as "Data Science" is becoming an interdisciplinary research field just as "Information Science" once was (and still is).
15

Jiang, Xiaobo. "Intelligent Classification Method of Archive Data Based on Multigranular Semantics." Computational Intelligence and Neuroscience 2022 (May 14, 2022): 1–9. http://dx.doi.org/10.1155/2022/7559523.

Abstract:
With the rapid development of information technology, the amount of data in various digital archives has exploded. How to reasonably mine and analyze archive data and improve the intelligent management of newly included archives has become an urgent problem. The existing approach to archival data classification is manual classification oriented to management needs; it is inefficient and ignores the inherent content of the archives. In addition, discovering and utilizing archive information requires further analysis of the correlations between the contents of the archive data. Facing the needs of intelligent archive management, this paper analyzes manually classified archives from the perspective of their text content and proposes an intelligent classification method for archive data based on multigranular semantics. First, a semantic-label multigranularity attention model is constructed: the outputs of a stacked dilated convolutional encoding module and a label-graph attention module are jointly fed into a multigranularity attention network; the weighted labels output by the attention network are passed to a fully connected layer that maps to the predicted labels, and its output is fed into a sigmoid layer to obtain the predicted probability of each label. The model is then trained on a multilabel data set, adjusting the parameters until the semantic-label multigranularity attention model converges. Finally, taking a multilabel data set to be classified as input, the trained model outputs the classification result.
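The final prediction step the abstract describes, a fully connected layer followed by a sigmoid that yields a per-label probability, reduces to the following; the dimensions and random weights are placeholders, not the trained model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
# 5 archive documents represented by 8-dimensional feature vectors
# (in the paper, the attention network's weighted-label output).
features = rng.normal(size=(5, 8))
W = rng.normal(size=(8, 4))        # weights of the final dense layer, 4 labels
b = np.zeros(4)

probs = sigmoid(features @ W + b)  # predicted probability of each label
labels = (probs >= 0.5).astype(int)  # threshold into a multilabel decision
print(labels.shape)  # (5, 4)
```

Each document can thus receive several labels at once, which is what distinguishes multilabel from ordinary multiclass classification.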
16

Kim, Yoon-Sung, Hae-Chang Rim, and Do-Gil Lee. "Business environmental analysis for textual data using data mining and sentence-level classification." Industrial Management & Data Systems 119, no. 1 (February 4, 2019): 69–88. http://dx.doi.org/10.1108/imds-07-2017-0317.

Abstract:
Purpose: The purpose of this paper is to propose a methodology to analyze a large amount of unstructured textual data into the categories of business environmental analysis frameworks.
Design/methodology/approach: This paper uses machine learning to classify a vast amount of unstructured textual data by category of business environmental analysis framework. Generally, it is costly to produce high-quality, large-scale training data for a machine-learning-based system, so semi-supervised learning techniques are used to improve classification performance. Additionally, the lack-of-features problem from which traditional classification systems have suffered is resolved by applying semantic features obtained through word embedding, a recent technique in text mining.
Findings: The proposed methodology can be used for various business environmental analyses, and the system is fully automated in both the training and classifying phases. Semi-supervised learning can solve the problem of insufficient training data, and the proposed semantic features can improve traditional classification systems.
Research limitations/implications: This paper focuses on classifying sentences that contain business environmental analysis information in a large number of documents. The proposed methodology is limited for advanced analyses that directly help managers establish strategies, since it does not summarize the environmental variables implied in the classified sentences. Advanced summarization and recommendation techniques could extract the environmental variables from the sentences and assist managers in establishing effective strategies.
Originality/value: The feature selection technique developed in this paper has not been used in traditional systems for business and industry, so the whole process can be fully automated. It is also practical enough to be applied to various business environmental analysis frameworks. In addition, the system is more economical than traditional systems because of semi-supervised learning, and it can resolve the lack-of-features problem from which traditional systems suffer. This work is valuable for analyzing environmental factors and establishing strategies for companies.
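The semi-supervised step can be sketched with scikit-learn's self-training wrapper: a base classifier trained on the few labeled sentences pseudo-labels the confident unlabeled ones and retrains. The synthetic dense features below stand in for word-embedding sentence features, and the base classifier and confidence threshold are assumptions of this sketch.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Dense vectors stand in for embedded sentences; 2 framework categories.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)

rng = np.random.default_rng(0)
unlabeled = rng.random(400) < 0.9     # hide 90% of the labels
y_partial = y.copy()
y_partial[unlabeled] = -1             # -1 marks unlabeled samples

model = SelfTrainingClassifier(LogisticRegression(max_iter=1000),
                               threshold=0.8)
model.fit(X, y_partial)
acc = model.score(X[unlabeled], y[unlabeled])
print(round(acc, 3))
```

Evaluating on the held-back labels shows how much signal the self-training loop recovers from the unlabeled pool.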
17

Lu, Huiqun, RuiLing Wang, and Zhenju Huang. "Application of Data Mining in Performance Management of Public Hospitals." Mobile Information Systems 2022 (February 9, 2022): 1–10. http://dx.doi.org/10.1155/2022/2412928.

Abstract:
With the rapid development of computer technology, information technology covers all aspects of daily life, and the medical industry is also paying more attention to informatization. Conventional management methods have been unable to further improve hospitals' management capabilities. At the same time, hospitals in countries with stronger management practices have set a benchmark for mainland hospitals, which are reforming in order to stand out in the future. In addition to evaluating the economic benefits and work efficiency of doctors, hospitals must also consider that, as a special service industry, they cannot be measured by economic indicators alone. Performance appraisal in hospitals is therefore a multiparty game that must consider not only economic factors but also the characteristics of public services. This article is based on the case of a large domestic tertiary hospital and its performance management reform plan; through the design of performance management and incentive performance pay distribution, with data mining technology as an auxiliary means, it successfully helped the hospital complete its reform of performance and incentive performance pay. The main research work of this paper falls into the following three aspects. (1) Using data mining technology, nursing units are classified in line with the actual situation according to objective factors from the past year, such as workload, risk level, difficulty of internship, outpatient visits, surgery implementation, and critical first aid, providing a reliable basis for the reasonable and efficient allocation of hospital human resources. (2) The performance management system integrates the third-party data mining tool Weka to assist in evaluating the performance distribution plan and calculating subsequent incentive performance pay.
(3) A data mining mathematical model is used to measure and evaluate the reasonableness of historical workload and performance appraisal, determine a new incentive performance pay distribution model, and serve as a calculation tool for the internal distribution of performance wages, providing monthly incentive performance wage statistics in the future.
APA, Harvard, Vancouver, ISO, and other styles
18

Buenrostro Mazon, S., I. Riipinen, D. M. Schultz, M. Valtanen, M. Dal Maso, L. Sogacheva, H. Junninen, T. Nieminen, V. M. Kerminen, and M. Kulmala. "Classifying previously undefined days from eleven years of aerosol-particle-size distribution data from the SMEAR II station, Hyytiälä, Finland." Atmospheric Chemistry and Physics 9, no. 2 (January 28, 2009): 667–76. http://dx.doi.org/10.5194/acp-9-667-2009.

Full text
Abstract:
Studies of secondary aerosol-particle formation depend on identifying days on which new particle formation occurs and, by comparing them to days with no signs of particle formation, identifying the conditions favourable for formation. Continuous aerosol size distribution data have been collected at the SMEAR II station in a boreal forest in Hyytiälä, Finland, since 1996, making it the longest time series of aerosol size distributions available worldwide. In previous studies, the data have been classified into particle-formation event, nonevent, and undefined days, with almost 40% of the dataset classified as undefined. In the present study, eleven years (1996–2006) of undefined days (1630 days) were reanalyzed and subdivided into three new classes: failed events (37% of all previously undefined days), ultrafine-mode concentration peaks (34%), and pollution-related concentration peaks (19%). Unclassified days (10%) comprised the rest of the previously undefined days. The failed events were further subdivided into tail events (21%), where the tail of a formation event is presumed to have been advected to Hyytiälä from elsewhere, and quasi events (16%), where new particles appeared at sizes of 3–10 nm but showed unclear growth, the mode persisted for less than an hour, or both. The ultrafine concentration peak days were further subdivided into nucleation-mode peaks (24%) and Aitken-mode peaks (10%), depending on the size range in which the particles occurred. The mean annual distribution of the failed events has a maximum during summer, whereas the two peak classes have maxima during winter. The summer minimum previously found in the seasonal distribution of event days partially offsets a summer maximum in failed-event days. Daily-mean relative humidity and condensation sink values are useful in discriminating the new classes from each other. Specifically, event days had low values of relative humidity and condensation sink relative to nonevent days. Failed-event days possessed intermediate condensation sink and relative humidity values, whereas both ultrafine-mode peaks and, to a greater extent, pollution-related peaks had high values of both, similar to nonevent days. Using 96-h back trajectories, particle-size concentrations were plotted as a function of the time the trajectory spent over land. Increases in particle size and number concentration during failed-event days were similar to those during the later stages of event days, whereas the particle size and number concentration for both the nonevent and peak classes did not increase as fast as for event and failed-event days.
APA, Harvard, Vancouver, ISO, and other styles
19

Ruiz Jr, Facundo Burgos, Márcia Silva Santos, Helen Souto Siqueira, and Ulisses Correa Cotta. "Clinical features, diagnosis and treatment of acute primary headaches at an emergency center: why are we still neglecting the evidence?" Arquivos de Neuro-Psiquiatria 65, no. 4b (December 2007): 1130–33. http://dx.doi.org/10.1590/s0004-282x2007000700007.

Full text
Abstract:
In order to analyze the clinical features, approach, and treatment of patients with acute primary headaches seen at the Clinics Hospital of the Federal University of Uberlândia (HC-UFU) throughout 2005, the medical charts of 109 patients were evaluated through a standardized questionnaire as to age, gender, main diagnosis, characteristics of the headache attacks, diagnostic tests, and treatment. Probable migraine was the most common type of primary headache (47.7%), followed by probable tension-type headache (37.6%), unspecified headache (11.9%), and headache not elsewhere classified (2.8%). As to the characteristics of the attacks, the location of the pain was described in 86.2% of the patients. The drugs most commonly used for treatment of acute headache attacks were dipyrone (74.5%), tenoxicam (31.8%), diazepam (20.9%), dimenhydrinate (10.9%), and metoclopramide (9.9%). The data collected are in agreement with those reported in the literature. In most cases, treatment was not what is recommended by consensus or by clinical studies with appropriate methodology. We therefore suggest the introduction of a specific acute headache management protocol, which could facilitate the diagnosis, treatment, and management of these patients.
APA, Harvard, Vancouver, ISO, and other styles
20

Kronick, Dorothy. "Profits and Violence in Illegal Markets: Evidence from Venezuela." Journal of Conflict Resolution 64, no. 7-8 (April 22, 2020): 1499–523. http://dx.doi.org/10.1177/0022002719898881.

Full text
Abstract:
Some theories predict that profits facilitate peace in illegal markets, while others predict that profits fuel violence. I provide empirical evidence from drug trafficking in Venezuela. Using original data, I compare lethal violence trends in municipalities near a major trafficking route to trends elsewhere, both before and after counternarcotics policy in neighboring Colombia increased the use of Venezuelan transport routes. For thirty years prior to this policy change, lethal violence trends were similar; afterward, outcomes diverged: violence increased more along the trafficking route than elsewhere. Together with qualitative accounts, these findings illuminate the conditions under which profits fuel violence in illegal markets.
APA, Harvard, Vancouver, ISO, and other styles
21

Sree Hari Rao, V., and Murthy V. Jonnalagedda. "Insurance Dynamics – A Data Mining Approach for Customer Retention in Health Care Insurance Industry." Cybernetics and Information Technologies 12, no. 1 (March 1, 2012): 49–60. http://dx.doi.org/10.2478/cait-2012-0004.

Full text
Abstract:
Extraction of customer behavioral patterns is a complex task, widely studied for various industrial applications under different headings, viz. customer retention management, business intelligence, and data mining. In this paper, the authors extract behavioral patterns for customer retention in the health care insurance industry. Initially, the customers are classified into three general categories: stable, unstable, and oscillatory. To extract the patterns, the concept of a novel index tree (a variant of the k-d tree) combined with the k-nearest-neighbor algorithm is proposed for efficient classification of data as well as outliers, and the concept of insurance dynamics is proposed for analyzing customer behavioral patterns.
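The abstract above pairs an index tree with k-nearest-neighbor voting; the paper's own index tree is not reproduced here, but the k-NN classification step it feeds can be sketched in a few lines of plain Python. The feature values and category labels below are hypothetical illustrations, not data from the study.

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify a query point by majority vote among its k nearest
    training points (Euclidean distance). `train` is a list of
    (feature_vector, label) pairs."""
    neighbors = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Hypothetical customer features: (claim frequency, premium-payment regularity)
customers = [
    ((0.1, 0.9), "stable"),
    ((0.2, 0.8), "stable"),
    ((0.9, 0.2), "unstable"),
    ((0.8, 0.1), "unstable"),
    ((0.5, 0.5), "oscillatory"),
    ((0.6, 0.4), "oscillatory"),
]

print(knn_classify(customers, (0.15, 0.85)))  # -> stable
```

In the paper's setting, the index tree would replace the linear `sorted` scan so that nearest neighbors are found without comparing against every customer.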
APA, Harvard, Vancouver, ISO, and other styles
22

Womack, Dana M., Michelle R. Hribar, Linsey M. Steege, Nancy H. Vuckovic, Deborah H. Eldredge, and Paul N. Gorman. "Registered Nurse Strain Detection Using Ambient Data: An Exploratory Study of Underutilized Operational Data Streams in the Hospital Workplace." Applied Clinical Informatics 11, no. 04 (August 2020): 598–605. http://dx.doi.org/10.1055/s-0040-1715829.

Full text
Abstract:
Background Registered nurses (RNs) regularly adapt their work to ever-changing situations but routine adaptation transforms into RN strain when service demand exceeds staff capacity and patients are at risk of missed or delayed care. Dynamic monitoring of RN strain could identify when intervention is needed, but comprehensive views of RN work demands are not readily available. Electronic care delivery tools such as nurse call systems produce ambient data that illuminate workplace activity, but little is known about the ability of these data to predict RN strain. Objectives The purpose of this study was to assess the utility of ambient workplace data, defined as time-stamped transaction records and log file data produced by non-electronic health record care delivery tools (e.g., nurse call systems, communication devices), as an information channel for automated sensing of RN strain. Methods In this exploratory retrospective study, ambient data for a 1-year time period were exported from electronic nurse call, medication dispensing, time and attendance, and staff communication systems. Feature sets were derived from these data for supervised machine learning models that classified work shifts by unplanned overtime. Models for three timeframes (8, 10, and 12 hours) were created to assess each model's ability to predict unplanned overtime at various points across the work shift. Results Classification accuracy ranged from 57 to 64% across the three analysis timeframes. Accuracy was lowest at 10 hours and highest at shift end. Features with the highest importance include minutes spent using a communication device and percent of medications delivered via a syringe. Conclusion Ambient data streams can serve as information channels that contain signals related to unplanned overtime as a proxy indicator of RN strain as early as 8 hours into a work shift.
This study represents an initial step toward enhanced detection of RN strain and proactive prevention of missed or delayed patient care.
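As a rough illustration of the kind of feature derivation the study describes (turning time-stamped transaction records into per-shift counts for a classifier), the following sketch aggregates ambient-data records by source system within a shift's first hours. All record contents, system names, and timestamps are invented for illustration and are not the study's feature set.

```python
from datetime import datetime

# Hypothetical ambient-data records: (timestamp, source system, event)
log = [
    ("2020-03-01 07:10", "nurse_call", "call"),
    ("2020-03-01 07:25", "comm_device", "message"),
    ("2020-03-01 08:40", "med_dispense", "syringe"),
    ("2020-03-01 09:05", "med_dispense", "tablet"),
    ("2020-03-01 09:30", "comm_device", "message"),
]

def shift_features(records, shift_start, hours=8):
    """Count events per source system within the first `hours` of a shift,
    a simplified stand-in for the feature sets derived in the study."""
    start = datetime.fromisoformat(shift_start)
    feats = {}
    for ts, system, _event in records:
        elapsed = (datetime.fromisoformat(ts) - start).total_seconds()
        if 0 <= elapsed < hours * 3600:
            feats[system] = feats.get(system, 0) + 1
    return feats

print(shift_features(log, "2020-03-01 07:00"))
```

Features like these, computed at the 8-, 10-, and 12-hour marks, would then be fed to a supervised model labeled with whether the shift incurred unplanned overtime.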
APA, Harvard, Vancouver, ISO, and other styles
23

Payne, J. A., G. D. Moys, C. J. Hutchings, and R. J. Henderson. "Development, Calibration and Further Data Requirements of the Sewer Flow Quality Model Mosqito." Water Science and Technology 22, no. 10-11 (October 1, 1990): 103–9. http://dx.doi.org/10.2166/wst.1990.0294.

Full text
Abstract:
MOSQITO is the initial version of a sewer flow quality model being developed by Hydraulics Research Ltd and the Water Research Centre as part of the UK River Basin Management programme. MOSQITO I simulates the time-varying behaviour of suspended solids, biochemical oxygen demand, chemical oxygen demand, ammoniacal nitrogen and hydrogen sulphide on catchment surfaces and in sewer systems. The model produces discharge pollutographs for these determinands which can be used as input to a river water quality model. MOSQITO consists of four sub-models which represent washoff from catchment surfaces, foul water inflow, pollutant behaviour in pipes and channels, and pollutant behaviour in ancillary structures within drainage systems. These sub-models are linked to the flow simulation model incorporated in the WALLRUS package, which is the latest computer implementation of the Wallingford Procedure. The rationale behind the model, its structure and its operational basis have been discussed elsewhere (Moys and Henderson, 1988) and are therefore described briefly so that emphasis can be placed on the aspects which follow. Calibration and verification of the model are being carried out using data from a variety of experimental catchments in the UK. These catchments have been selected to exhibit a wide range of characteristics and include separate and combined sewer systems. Results of the calibration work are presented together with illustrations of the performance of the various sub-models and the overall model.
APA, Harvard, Vancouver, ISO, and other styles
24

Okut, Levent. "Primary school science and mathematic teachers' beliefs in terms of the relationship between education and classroom management." Pegem Eğitim ve Öğretim Dergisi 1, no. 4 (December 1, 2011): 39–51. http://dx.doi.org/10.14527/c1s4m5.

Full text
Abstract:
The purpose of this study was to determine whether there was a significant relationship between teachers' beliefs related to education and classroom management. For this purpose, the Educational Beliefs Inventory developed by Okut (2009) and the Attitudes and Beliefs on Classroom Control Inventory developed by Martin, Yin, and Baldwin (1998) and adapted to Turkish by Savran (2002) were used to gather data. The inventories were administered to 289 teachers (126 science teachers, 163 mathematics teachers). Data were analyzed using descriptive statistics, the chi-square test, one-way ANOVA, the t-test, the Pearson product-moment correlation coefficient, and the Kruskal-Wallis H-test. Results revealed that 10% of teachers were classified as transmissive, 37% as eclectic, and 53% appeared to have progressive educational beliefs. Teachers held interventionist beliefs on the Instructional Management subscale, whereas they held non-interventionist beliefs on the People Management subscale. A significant relationship was found between teachers' beliefs related to education and classroom management: teachers who were interventionist also tended to be transmissive, and similarly, teachers who were non-interventionist tended to be progressive.
APA, Harvard, Vancouver, ISO, and other styles
25

Nikfarjam, Hava, Mohsen Rostamy-Malkhalifeh, and Abbasali Noura. "A New Robust Dynamic Data Envelopment Analysis Approach for Sustainable Supplier Evaluation." Advances in Operations Research 2018 (December 9, 2018): 1–20. http://dx.doi.org/10.1155/2018/7625025.

Full text
Abstract:
Supplier selection is one of the intricate decisions of managers in the modern business era, and there are different methods and techniques for it. Data envelopment analysis (DEA) is a popular decision-making method that can be used for this purpose. In this paper, a new dynamic DEA approach is proposed that is capable of evaluating suppliers in consecutive periods based on their inputs, outputs, and the relationships between the periods, classified as desirable, undesirable, and free relationships with positive and negative natures. To this aim, various social, economic, and environmental criteria are taken into account. A new method for constructing an ideal decision-making unit (DMU) is proposed which differs from existing ones in the literature in its capability of considering periods with unit efficiencies that do not necessarily belong to a unique DMU. Furthermore, the new ideal DMU has the ability to rank suppliers with the same efficiency ratio. In the concerned problem, the supplier that has unit efficiency in each period is selected to construct an ideal supplier. Since it is possible to have more than one supplier with unit efficiency in each period, the ideal supplier can be constructed under different scenarios, each with a given probability. To deal with such uncertainty, a new robust dynamic DEA model is elaborated based on a scenario-based robust optimization approach. Computational results indicate that the proposed robust optimization approach can evaluate and rank suppliers with unit efficiencies that could not be ranked previously. Furthermore, the proposed ideal DMU can be appropriately used as a benchmark for other DMUs to adjust probable improvement plans.
APA, Harvard, Vancouver, ISO, and other styles
26

Nery, José A. C., Anna M. Sales, Mariana A. V. B. Hacker, Milton O. Moraes, Raquel C. Maia, Euzenir N. Sarno, and Ximena Illarramendi. "Low rate of relapse after twelve-dose multidrug therapy for hansen’s disease: A 20-year cohort study in a brazilian reference center." PLOS Neglected Tropical Diseases 15, no. 5 (May 3, 2021): e0009382. http://dx.doi.org/10.1371/journal.pntd.0009382.

Full text
Abstract:
The World Health Organization has raised concerns about the increasing number of Hansen's disease (HD) relapses worldwide, especially in Brazil, India, and Indonesia, which report the highest numbers of recurrent cases. Relapses are an indicator of MDT effectiveness and can reflect Mycobacterium leprae persistence or re-infection. Relapse is also a potential marker for the development or progression of disability. In this research, we studied a large cohort of persons affected by HD who were treated with full fixed-dose multibacillary (MB) multidrug therapy (MDT) and followed for up to 20 years, and observed that relapse is a rare event. We estimated the incidence density of relapse in a cohort of patients classified to receive the MB regimen (bacillary index (BI) > 0), diagnosed between September 1997 and June 2017, and treated with twelve-dose MB-MDT at an HD reference center in Rio de Janeiro, Brazil. We obtained the data from the data management system of the clinic's routine service. We linked the selected cases to the national HD relapse dataset to confirm possible relapse cases diagnosed elsewhere. We diagnosed ten cases of relapse in a cohort of 713 patients followed up for a mean of 12.1 years. This resulted in an incidence rate of 1.16 relapse cases per 1000 person-years (95% CI = 0.5915–2.076). The accumulated risk was 0.025 in 20 years. The very low risk observed in this cohort of twelve-dose-treated MB patients reinforces the success of the current MDT scheme.
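The incidence density reported in the abstract can be reproduced from its own numbers (10 relapses, 713 patients, mean follow-up of 12.1 years):

```python
relapses = 10
patients = 713
mean_follow_up_years = 12.1

# Incidence density = events / total person-time at risk
person_years = patients * mean_follow_up_years          # 8627.3 person-years
rate_per_1000_py = relapses / person_years * 1000

print(round(rate_per_1000_py, 2))  # -> 1.16, matching the reported rate
```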
APA, Harvard, Vancouver, ISO, and other styles
27

Tazeen, Nazia, and Sandhya Rani K. "A Survey on Some Big Data Applications Tools and Technologies." International Journal of Recent Technology and Engineering 9, no. 6 (March 30, 2021): 239–42. http://dx.doi.org/10.35940/ijrte.f5575.039621.

Full text
Abstract:
Big Data is a broad area that deals with enormous chunks of data sets. It is a term for enormous data sets of huge volume and increasingly diverse structure, originating from diverse sources and growing rapidly. Large amounts of data are generated through fast data transmission between devices in sectors such as healthcare, science, media, business, entertainment, and engineering, and data collection capacity and storage are a big concern. Apache Hadoop is a collection of open-source programs used to store big data and perform analytics and various other operations related to big data. Many organizations base their decisions on knowledge extracted from huge and complex data; because of this central role in decision making, Big Data has to be accurately classified and analyzed. To overcome the complex challenges posed by Big Data, various Big Data tools and technologies have been developed. Big Data applications and the tools and technologies used to handle them are briefly discussed in this paper.
APA, Harvard, Vancouver, ISO, and other styles
28

Kim, Hak-Jin, Hun Choi, and Jinwoo Kim. "A Comparative Study of the Effects of Low and High Uncertainty Avoidance on Continuance Behavior." Journal of Global Information Management 18, no. 2 (April 2010): 1–29. http://dx.doi.org/10.4018/jgim.2010040101.

Full text
Abstract:
This study examines the effects of uncertainty avoidance (UA) at the individual level on continuance behavior in the domain of mobile data services (MDS). It proposes a research model for post-expectation factors and continuance behavior that considers the moderating effect of UA, and verifies the model with online survey data gathered in Korea and Hong Kong. Post-expectation factors are classified as either intrinsic or extrinsic motivational factors, while respondents are classified according to their propensities into low-UA and high-UA groups. The results indicate that UA has substantial effects not only on the mean values of the post-expectation factors studied but also on the strength of those factors’ impact on satisfaction and continuance intention. The effects of intrinsic motivational factors on satisfaction and continuance intention are stronger for the high-UA group than for the low-UA group. In contrast, the effects of extrinsic motivational factors are generally stronger for the low-UA group.
APA, Harvard, Vancouver, ISO, and other styles
29

Eskhita, Radwan, Vijaya Kittu Manda, and Arbia Hlali. "Dubai and Barcelona as Smart Cities: Some Reflections on Data Protection Law and Privacy." Environmental Policy and Law 51, no. 6 (December 22, 2021): 403–7. http://dx.doi.org/10.3233/epl-210023.

Full text
Abstract:
This study presents a descriptive analysis of the transformation of Dubai into a smart city, as a case study in the GCC region, with reference to the Barcelona smart city. It further investigates how the Dubai smart city will deal with the huge amount of personal data collected through Internet of Things devices and applications. The theoretical analysis shows that the Barcelona smart city can be regarded as an effective model whose innovations are recommended for use in the Dubai smart city. The analysis finds that classifying the data collected inside the smart city into open and shared data does not provide sufficient privacy for personal data. Therefore, personal data should be classified explicitly so that they can be processed separately under the rules of the data protection law.
APA, Harvard, Vancouver, ISO, and other styles
30

Bodenmiller, Adam E., Adnan K. Shaout, and Zhivko V. Tyankov. "A Classified Advertisement Framework to Support Niche and other Targeted Markets." International Journal of Enterprise Information Systems 12, no. 3 (July 2016): 38–59. http://dx.doi.org/10.4018/ijeis.2016070103.

Full text
Abstract:
Classified advertisement sites often follow two different approaches to filter classified advertisement data to customers and potential customers. The first approach is to reach out to the broader market, allowing the customers to filter to their target market on the classified website. The downside of this approach is that it lacks the customization and specialization people in a niche market tend to prefer. The second approach is to create classified advertisement websites that are customized to meet the needs of a target or niche market. This specialization is more appealing to niche market customers. However, the second approach is more focused. Therefore, the customer base is smaller, and expanding to more markets requires building more websites, which can be costly in time, money and effort. In this paper the authors propose a framework that allows website developers to quickly and efficiently create targeted market websites. The framework proposed enables website developers to quickly customize text, context and features offered on the newly created targeted market website. In addition, the framework overcomes the entry barrier new websites face by obtaining the starting classified listings required to make a new classified website viable to potential buyers and sellers.
APA, Harvard, Vancouver, ISO, and other styles
31

Sheldrick, Alistair, James Evans, and Gabriele Schliwa. "Policy learning and sustainable urban transitions: Mobilising Berlin’s cycling renaissance." Urban Studies 54, no. 12 (July 8, 2016): 2739–62. http://dx.doi.org/10.1177/0042098016653889.

Full text
Abstract:
Cities are increasingly seeking to learn from experiences elsewhere when planning programmes of sustainable transition management, and the contingencies of policy-learning arrangements in this field are beginning to receive greater attention. This paper applies insights from the field of policy mobilities to the burgeoning field of transition management to critically explore a proposed ‘learning relationship’ between Berlin (Germany) and Manchester (UK) around cycling policy. Drawing on qualitative data, the paper casts doubt over the existing consensus attributing recent growth in bicycle use in Berlin to concerted governmental interventions. A multi-actor analysis suggests that contextual factors caused the growth in cycling and that policy has been largely reactive. The emergence and circulation of the Berlin cycling renaissance as a policy model is then traced through policy documents and interviews with actors in Manchester, UK, to understand why and how it has become a model for action elsewhere. It is concluded that Berlin’s cycling renaissance has been simplified and mobilised to demonstrate the requisite ambition and proficiency to secure competitive funds for sustainable urban transport. The paper develops an original study of the role policy knowledge and learning play in sustainable urban transition management, and argues that attending to the dynamics of policy learning can enhance our understanding of its successes and failures.
APA, Harvard, Vancouver, ISO, and other styles
32

Sun, Shichao, and Dongyuan Yang. "Identifying Public Transit Commuters Based on Both the Smartcard Data and Survey Data: A Case Study in Xiamen, China." Journal of Advanced Transportation 2018 (November 1, 2018): 1–10. http://dx.doi.org/10.1155/2018/9693272.

Full text
Abstract:
Understanding the travel patterns of public transit commuters is important to efforts to improve service quality, promote public transit use, and better plan the public transit system. Smartcard data, with their wide coverage and relative abundance, provide new opportunities to study public transit riders' behaviors and travel patterns at much lower cost than conventional data sources. However, a major limitation of smartcard data is the absence of social attributes of the cardholders, so they cannot by themselves clearly identify public transit commuters or explain the mechanism of their travel behaviors. This study employed a machine learning approach, the Naive Bayesian Classifier (NBC), to identify public transit commuters based on both smartcard data and survey data, demonstrated in Xiamen, China. Unlike existing methods, which were plagued by difficulty validating the accuracy of their identification results, the adopted approach is a machine learning algorithm with built-in accuracy checking. The classifier was trained and tested on survey data obtained from 532 valid questionnaires; the accuracy rate for identification of public transit commuters was 92% on the test instances. Then, at low computational cost, it identified the target cardholders in the smartcard data without requiring travel-regularity assumptions about public transit commuters. Nearly 290,000 cardholders were classified as public transit commuters. Statistics such as average first boarding time and travel frequency on workdays during peak hours were obtained. Finally, the smartcard data were fused with bus location data to reveal the spatial distributions of the home and work locations of these public transit commuters, which could be utilized to improve public transit planning and operations.
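The study's classifier is a Naive Bayesian Classifier trained on labeled survey responses. A minimal Bernoulli naive Bayes with Laplace smoothing, on invented binary cardholder features (the study's actual feature set and data are not reproduced here), can be sketched as:

```python
import math
from collections import defaultdict

def train_nb(samples):
    """Bernoulli naive Bayes. `samples` is a list of
    (binary_feature_tuple, label) pairs."""
    counts = defaultdict(int)                      # label -> sample count
    ones = defaultdict(lambda: defaultdict(int))   # label -> feature -> count of 1s
    for feats, label in samples:
        counts[label] += 1
        for i, v in enumerate(feats):
            ones[label][i] += v
    return counts, ones, len(samples[0][0])

def predict_nb(model, feats):
    counts, ones, n_feats = model
    total = sum(counts.values())
    best, best_lp = None, -math.inf
    for label, c in counts.items():
        lp = math.log(c / total)                   # log prior
        for i in range(n_feats):
            p1 = (ones[label][i] + 1) / (c + 2)    # Laplace-smoothed P(feature=1)
            lp += math.log(p1 if feats[i] else 1 - p1)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical features: (rides on >= 4 weekdays, boards during peak hours)
data = [((1, 1), "commuter"), ((1, 1), "commuter"), ((1, 0), "commuter"),
        ((0, 0), "other"), ((0, 1), "other"), ((0, 0), "other")]
model = train_nb(data)
print(predict_nb(model, (1, 1)))  # -> commuter
```

In the study's pipeline, the held-out portion of the 532 questionnaires plays the role of test data here, giving the accuracy check that raw smartcard data alone cannot provide.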
APA, Harvard, Vancouver, ISO, and other styles
33

AS, Shabir. "Emergency Endoscopic Management of Pediatric Upper Gastrointestinal Tract Foreign Bodies: A North Indian Study." Gastroenterology & Hepatology International Journal 5, no. 1 (January 4, 2020): 1–7. http://dx.doi.org/10.23880/ghij-16000172.

Full text
Abstract:
Introduction: Ingestion of a foreign body (FB) is a common pediatric emergency seen in daily clinical practice everywhere, yet scarce data on this problem are available from this part of the world. Methods: We present our experience over four years with the spectrum of foreign bodies presenting to a gastrointestinal (GI) endoscopy centre and their subsequent management. Data were collected from all consecutive patients with FB ingestion presenting to our endoscopy center from January 2015 to December 2018. The demographic data, clinical presentation, and endoscopic management were reviewed and analyzed. Results: A total of 130 patients with suspected FB ingestion underwent endoscopic management, and 130 FBs were found. Scarf pins were the most common type, seen in 69% of cases, followed by coins in 10.7% of cases. Button batteries were noted in 7.7% of patients. Most of the FBs were located in the stomach (69%), followed by the esophagus (13.8%). In the majority of patients (94.4%), the FB was successfully removed with flexible endoscopy and suitable accessory devices, without any serious procedure-related or anesthesia-related complications. Conclusion: In this part of the world, the pattern and types of upper gastrointestinal (UGI) tract foreign bodies in the pediatric population are unique and not seen elsewhere across the globe. Early endoscopic management was found to be highly safe and efficacious.
APA, Harvard, Vancouver, ISO, and other styles
34

Grassmann, Winfried K. "WARM-UP PERIODS IN SIMULATION CAN BE DETRIMENTAL." Probability in the Engineering and Informational Sciences 22, no. 3 (May 27, 2008): 415–29. http://dx.doi.org/10.1017/s0269964808000247.

Full text
Abstract:
The question of how long to run a discrete event simulation before data collection starts is an important issue when estimating steady-state performance measures such as average queue lengths. By using experiments based on numerical (nonsimulation) methods published elsewhere, we shed light on this question. Our experiments indicate that no initialization phase should be used when starting in a state with a reasonably high equilibrium probability. Delaying data collection is only justified if the starting state is highly unlikely, and data collection should start as soon as the system enters a state with reasonably high probability.
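As a toy illustration of the setting the paper studies (not its numerical method), the following sketch estimates the time-average number in an M/M/1 system from a discrete-event simulation, with an optional warm-up period discarded from data collection. Starting empty, as here, is a state with high equilibrium probability at moderate utilization, which is exactly the case where the paper argues a warm-up period is unnecessary.

```python
import random

def mm1_time_avg_number(lam, mu, warmup, horizon, seed=42):
    """Crude M/M/1 simulation: time-average number in system over the
    observation window [warmup, warmup + horizon], starting empty."""
    random.seed(seed)
    t, n, area = 0.0, 0, 0.0
    while t < warmup + horizon:
        rate = lam + (mu if n > 0 else 0.0)        # total event rate
        dt = random.expovariate(rate)
        # accumulate only the part of dt inside the observation window
        lo, hi = max(t, warmup), min(t + dt, warmup + horizon)
        if hi > lo:
            area += n * (hi - lo)
        t += dt
        # next event is an arrival w.p. lam/rate, else a departure
        n += 1 if random.random() < lam / rate else -1
    return area / horizon

# With lam=0.5, mu=1.0 the exact steady-state mean is rho/(1-rho) = 1.0
print(mm1_time_avg_number(0.5, 1.0, warmup=0, horizon=200))
print(mm1_time_avg_number(0.5, 1.0, warmup=200, horizon=200))
```

Both estimators are unbiased only asymptotically; the paper's point is that on short runs the warm-up version discards useful data when the initial state is already likely.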
APA, Harvard, Vancouver, ISO, and other styles
35

Hughes, Kenneth V., Michael C. Bard, Jean E. Lewis, Jan L. Kasperbauer, and George W. Facer. "Hemangiopericytoma of the Nasal Cavity: A Review of 15 Cases over a 40-Year Period." American Journal of Rhinology 6, no. 6 (November 1992): 203–9. http://dx.doi.org/10.2500/105065892781976655.

Full text
Abstract:
Hemangiopericytomas are rare tumors of vascular origin most commonly found in the extremities or retroperitoneal area. When they originate from the nasal cavity and paranasal sinuses, they tend to be less aggressive and generally do not metastasize. The term “hemangiopericytoma-like lesion” has been coined for sinonasal hemangiopericytomas that display more benign histologic and growth characteristics than do those located elsewhere. Fifteen cases of hemangiopericytoma of the nasal cavity and paranasal sinuses were reviewed over the period 1951 to 1990; included are follow-up data on cases reported earlier from this institution. The clinical course, management, and outcome were evaluated and correlated with the histologic characteristics of the tumors. The recurrence rate in our series was 13.3%; the mean follow-up was 11 years. No patients died of their disease or had evidence of metastatic disease. This clinicopathologic review suggests that sinonasal hemangiopericytomas should not be classified as “hemangiopericytoma-like” lesions; rather, they should be expected to have significant local recurrence rates with low rates of distant metastasis and mortality. Long-term follow-up is essential as there can be local recurrence after many years.
APA, Harvard, Vancouver, ISO, and other styles
36

García-Jara, Germán, Pavlos Protopapas, and Pablo A. Estévez. "Improving Astronomical Time-series Classification via Data Augmentation with Generative Adversarial Networks." Astrophysical Journal 935, no. 1 (August 1, 2022): 23. http://dx.doi.org/10.3847/1538-4357/ac6f5a.

Full text
Abstract:
Due to the latest advances in technology, telescopes with significant sky coverage will produce millions of astronomical alerts per night that must be classified both rapidly and automatically. Currently, classification consists of supervised machine-learning algorithms whose performance is limited by the number of existing annotations of astronomical objects and their highly imbalanced class distributions. In this work, we propose a data augmentation methodology based on generative adversarial networks (GANs) to generate a variety of synthetic light curves from variable stars. Our novel contributions, consisting of a resampling technique and an evaluation metric, can assess the quality of generative models in unbalanced data sets and identify GAN-overfitting cases that the Fréchet inception distance does not reveal. We applied our proposed model to two data sets taken from the Catalina and Zwicky Transient Facility surveys. The classification accuracy of variable stars is improved significantly when training with synthetic data and testing with real data with respect to the case of using only real data.
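The paper's GAN-based augmentation requires a deep-learning stack and is not reproduced here. As a deliberately naive stand-in that illustrates only the class-balancing goal (not the GAN itself), one can oversample rare classes with jittered copies of real light curves; the class names and curve values below are invented.

```python
import random

def jitter_augment(curves, labels, target_per_class, sigma=0.02, seed=0):
    """Balance classes by adding Gaussian-jittered copies of real curves.
    A naive stand-in for GAN-generated synthetic light curves."""
    random.seed(seed)
    by_class = {}
    for curve, label in zip(curves, labels):
        by_class.setdefault(label, []).append(curve)
    out = []
    for label, group in by_class.items():
        samples = list(group)
        while len(samples) < target_per_class:
            base = random.choice(group)
            samples.append([x + random.gauss(0.0, sigma) for x in base])
        out += [(s, label) for s in samples]
    return out

# Toy imbalanced set: 3 curves of a common class, 1 of a rare class
curves = [[0.1, 0.2, 0.1], [0.1, 0.3, 0.1], [0.2, 0.2, 0.1], [0.9, 0.8, 0.9]]
labels = ["RRLyrae", "RRLyrae", "RRLyrae", "Cepheid"]
balanced = jitter_augment(curves, labels, target_per_class=3)
print(len(balanced))  # -> 6 samples, 3 per class
```

A GAN replaces the jitter step with samples drawn from a learned generator; the paper's resampling technique and evaluation metric then judge whether those samples are diverse rather than memorized.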
APA, Harvard, Vancouver, ISO, and other styles
37

Steinmetz, Alice Alonzo, Felício Cassalho, Tamara Leitzke Caldeira, Vinícius Augusto de Oliveira, Samuel Beskow, and Luis Carlos Timm. "Assessment of soil loss vulnerability in data-scarce watersheds in southern Brazil." Ciência e Agrotecnologia 42, no. 6 (December 2018): 575–87. http://dx.doi.org/10.1590/1413-70542018426022818.

Full text
Abstract:
ABSTRACT Soil erosion is currently one of the main concerns in agriculture, water resources, soil management and natural hazards studies, mainly due to its economic, environmental and human impacts. This concern is accentuated in developing countries, where hydrological monitoring and proper soil surveys are scarce. Therefore, the use of indirect estimates of soil loss by means of empirical equations stands out. In this context, the present study proposed the assessment of the Revised Universal Soil Loss Equation (RUSLE) with the aid of Geographical Information Systems (GIS) and remote sensing for two agricultural watersheds in southern Rio Grande do Sul, Brazil. Among all RUSLE factors, LS showed the spatial pattern closest to that of the total annual soil loss, thus being a good indicator of risk areas. The total annual soil loss varied from 0 to more than 100 t ha-1 yr-1, with the vast majority (about 65% of the total area) classified as slight to moderate rates of soil loss. The results estimated according to RUSLE indicated that over 10% of the study area presented very high to extremely high soil loss rates, thus requiring immediate soil conservation practices. The present study stands out as important scientific and technical support for practitioners and decision-makers, being probably the first of its nature applied to extreme southern Brazil.
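The RUSLE estimate underlying the abstract is a simple product of five factors; a minimal sketch (the factor values and severity thresholds below are illustrative assumptions, not the study's):

```python
def rusle(R, K, LS, C, P):
    """Annual soil loss A (t ha^-1 yr^-1) as the product of the five
    RUSLE factors: rainfall erosivity R, soil erodibility K,
    topography LS, cover-management C, and support practice P."""
    return R * K * LS * C * P

def severity(A):
    """Illustrative severity classes (thresholds are assumptions,
    not the ones used in the study)."""
    if A <= 10:
        return "slight to moderate"
    if A <= 50:
        return "high"
    if A <= 100:
        return "very high"
    return "extremely high"

# assumed per-cell factor values for one grid cell
A = rusle(R=6000, K=0.03, LS=1.2, C=0.05, P=1.0)
print(round(A, 1), severity(A))  # 10.8 high
```

In a GIS workflow each factor is a raster layer and the product is computed cell by cell.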
38

Makwiza, Chikondi, Musandji Fuamba, Fadoua Houssa, and Heinz Erasmus Jacobs. "Estimating the impact of climate change on residential water use using panel data analysis: a case study of Lilongwe, Malawi." Journal of Water, Sanitation and Hygiene for Development 8, no. 2 (September 26, 2017): 217–26. http://dx.doi.org/10.2166/washdev.2017.056.

Full text
Abstract:
Abstract In this study, panel linear models were used to develop an empirical relationship between metered household water use and the independent variables plot size and theoretical irrigation requirement. The estimated statistical model provides a means of estimating the climate-sensitive component of residential water use. Ensemble averages of temperature and rainfall projections were used to quantify potential changes in water use due to climate change by 2050. Annual water use per household was estimated to increase by approximately 1.5% under the low emissions scenario or 2.3% under the high emissions scenario. The model results provide information that can enhance water conservation initiatives relating particularly to outdoor water use. The model approach presented utilizes data that are readily available to water supply utilities and can therefore be easily replicated elsewhere.
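The projected changes quoted above amount to scaling baseline household use by the scenario percentages; a minimal sketch (the 200 m³ baseline is an assumed figure):

```python
def projected_use(baseline_m3, pct_increase):
    """Scale baseline annual household water use by a projected
    percentage increase (1.5% low-emissions, 2.3% high-emissions,
    per the study's 2050 estimates)."""
    return baseline_m3 * (1 + pct_increase / 100)

baseline = 200.0  # assumed annual use per household, m^3
low = projected_use(baseline, 1.5)
high = projected_use(baseline, 2.3)
print(round(low, 1), round(high, 1))  # 203.0 204.6
```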
39

Hansen, Gretchen J. A., Stephen R. Carpenter, Jereme W. Gaeta, Joseph M. Hennessy, and M. Jake Vander Zanden. "Predicting walleye recruitment as a tool for prioritizing management actions." Canadian Journal of Fisheries and Aquatic Sciences 72, no. 5 (May 2015): 661–72. http://dx.doi.org/10.1139/cjfas-2014-0513.

Full text
Abstract:
We classified walleye (Sander vitreus) recruitment with 81% accuracy (recruitment success and failure predicted correctly in 84% and 78% of lake-years, respectively) using a random forest model. Models were constructed using 2779 surveys collected from 541 Wisconsin lakes between 1989 and 2013 and predictor variables related to lake morphometry, thermal habitat, land use, and fishing pressure. We selected predictors to minimize collinearity while maximizing classification accuracy and data availability. The final model classified recruitment success based on lake surface area, water temperature degree-days, shoreline development factor, and conductivity. On average, recruitment was most likely in lakes larger than 225 ha. Low degree-days also increased the probability of successful recruitment, but primarily in lakes smaller than 150 ha. We forecasted the probability of walleye recruitment in 343 lakes considered for walleye stocking; lakes with high probability of natural reproduction but recent history of recruitment failure were prioritized for restoration stocking. Our results highlight the utility of models designed to predict recruitment for guiding management decisions, provided models are validated appropriately.
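The random forest described above is, at heart, a majority vote over many decision trees; a toy stand-in using three hand-written decision stumps (the features follow the abstract, but the thresholds and the example lake are assumptions):

```python
def stump_predict(lake, feature, threshold, direction):
    """One decision stump: vote for recruitment success when the
    feature falls on the favourable side of its threshold."""
    value = lake[feature]
    return value > threshold if direction == "gt" else value < threshold

def forest_predict(lake, stumps):
    """Majority vote over the stumps -- a toy stand-in for the
    paper's random forest."""
    votes = sum(stump_predict(lake, *s) for s in stumps)
    return votes * 2 > len(stumps)

# features follow the abstract: large surface area and low
# degree-days favour recruitment (thresholds are illustrative).
stumps = [
    ("area_ha", 225, "gt"),
    ("degree_days", 3000, "lt"),
    ("shoreline_dev", 2.0, "gt"),
]
lake = {"area_ha": 400, "degree_days": 2500, "shoreline_dev": 1.5}
print(forest_predict(lake, stumps))  # True: two of three stumps vote success
```

A real random forest learns its trees from bootstrapped samples of the 2779 surveys rather than from hand-set thresholds.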
40

Wallace, R., K. Pathak, M. Fife, N. L. Jones, J. P. Holland, D. Stuart, J. Harris, C. Butler, and D. R. Richards. "Information infrastructure for integrated ecohydraulic and water resources modeling and assessment." Journal of Hydroinformatics 8, no. 4 (December 1, 2006): 317–33. http://dx.doi.org/10.2166/hydro.2006.007.

Full text
Abstract:
Watershed management increasingly requires ecohydraulic modeling and assessment within a regional context, rather than on a project-by-project basis. Such holistic modeling and assessment require evaluation capabilities across multiple temporal and spatial scales. Thus, modeling and assessment tools must be integrated in a scientifically and computationally effective infrastructure. The US Army Engineer Research and Development Center, in concert with the Hydrologic Engineering Center and its academic partners, including Brigham Young University, is establishing a comprehensive set of hydroinformatics modeling and assessment tools for ecohydraulic and water resources management applications, all linked based on a common data and information infrastructure. This paper presents the attributes of this information infrastructure and compares it with the analogous integration initiatives elsewhere.
41

Pahl, Jan. "Individualisation in Couple Finances: Who Pays for the Children?" Social Policy and Society 4, no. 4 (October 2005): 381–91. http://dx.doi.org/10.1017/s1474746405002575.

Full text
Abstract:
This article examines changing patterns of money management in the UK and elsewhere and argues that couples are becoming more individualised in their finances. It draws on quantitative and qualitative data and considers some of the implications of individualisation, in particular in terms of paying for children and childcare. The conclusion is that independent management of money may give both partners a sense of autonomy and personal freedom – so long as their incomes are broadly equivalent. However, if the woman's income drops, for example when children are born, while her outgoings increase, because she is expected to pay the costs of children, the situation may change. Individualisation in money management can then be a route to inequality, so long as women's earnings are lower than men's and women are responsible for paying for children and childcare.
42

Palacpac, Eric Parala, and Erwin Manantan Valiente. "Measuring Efficiencies of Dairy Buffalo Farms in the Philippines Using Data Envelopment Analysis." Journal of Buffalo Science 12 (January 24, 2023): 1–15. http://dx.doi.org/10.6000/1927-520x.2023.12.01.

Full text
Abstract:
This study aimed to measure the efficiency scores of 75 dairy buffalo farms in the province of Nueva Ecija, Central Luzon, Philippines, using an input-oriented, variable-return-to-scale Data Envelopment Analysis (DEA) model. The farmer-informants or decision-making units (DMUs) were categorized as smallholders, family modules, and semi-commercial in operations. Personal interviews using structured questionnaires were done to gather various information on the socio-economic and management practices of the DMUs. Output in the form of volume and value of milk produced and inputs such as quantities and costs of biologics, feeds, forage, and labor were also collected and evaluated among individual DMUs. The efficiency scores were computed using PIM-DEA software, which identified fully efficient DMUs lying on the frontier line (scores of 1.0) and those enveloped by it (inefficient DMUs with scores of less than 1.0). The overall mean Technical Efficiency (TE), Allocative Efficiency (AE), and Economic Efficiency (EE) scores among the DMUs were 0.80, 0.81, and 0.65, respectively. Most of the inefficient DMUs were in the smallholder category. In sum, smallholder DMUs classified under low and moderate TE clusters should reduce their inputs by 53.31% and 40.01%, respectively, to become fully efficient. Likewise, higher lambda values among efficient peer DMUs indicate the best practice frontiers that the inefficient peer DMUs can benchmark with. Extension and advisory services can help promote the best management practices of the frontiers to improve the TE, AE, and EE of the inefficient DMUs.
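Full input-oriented VRS DEA solves a linear program per DMU; in the degenerate one-input, one-output case the idea reduces to comparing each farm's output/input ratio to the best ratio, sketched here with assumed figures:

```python
def efficiency_scores(farms):
    """Technical-efficiency scores for the one-input, one-output case:
    each farm's output/input ratio relative to the best ratio.
    (Real multi-input DEA solves a linear program per DMU; this is
    the degenerate single-ratio case, for illustration only.)"""
    ratios = {name: out / inp for name, (inp, out) in farms.items()}
    best = max(ratios.values())
    return {name: round(r / best, 2) for name, r in ratios.items()}

# (feed cost, milk output in litres) -- assumed figures, not the study's data
farms = {
    "smallholder": (100, 40),
    "family": (150, 90),
    "semi-commercial": (300, 150),
}
print(efficiency_scores(farms))
```

Farms scoring 1.0 lie on the frontier; the others are enveloped by it, exactly as in the abstract's reading of the PIM-DEA output.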
43

Li, Yongquan. "Python Data Analysis and Attribute Information Extraction Method Based on Intelligent Decision System." Mobile Information Systems 2022 (April 21, 2022): 1–10. http://dx.doi.org/10.1155/2022/2495166.

Full text
Abstract:
In order to improve Python data analysis and attribute information extraction, a method based on an intelligent decision system is proposed. The method proceeds as follows: create a big data mining model for optimal decision making, integrate data functions with a smart data integration method, reconstruct management data evaluation information, and regularly extract management multidimensional information parameters. Multidimensional information is decomposed and its features are optimized, characteristics are classified according to their differences, and management and decision-making optimizations are implemented. Using the Python language, combined with rich and powerful libraries such as regular expressions, urllib2, and Beautiful Soup, this paper discusses methods for building modular web data collection, HTML parsing, and link-data capture. The experimental results show that, as the number of iterations changes, there is a clear gap in decision support time among the three methods. The decision support time of the proposed method always stays below 2 s, while the other two methods take longer: compared with them, the proposed method shortens decision support time by about 1.7 s and 3.1 s, respectively. This is because the method classifies data attribute gaps during decision support, which saves time. The results verify that the method provides fast and reliable decision support, and that an intelligent decision system can effectively improve Python data analysis and attribute information extraction.
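The link-capture step described above can be reproduced with the standard library alone; a minimal sketch using html.parser in place of Beautiful Soup (the sample page is invented):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href targets from <a> tags -- a stdlib stand-in for the
    Beautiful Soup-based link capture described in the abstract."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = '<html><body><a href="/a">A</a> <a href="/b">B</a></body></html>'
collector = LinkCollector()
collector.feed(page)
print(collector.links)  # ['/a', '/b']
```

In a real collector, `page` would come from `urllib.request.urlopen(...)` rather than a literal string.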
44

Huang, Limei. "Applications of Small and Medium Enterprise Management System Using Edge Algorithm." Mobile Information Systems 2021 (June 5, 2021): 1–11. http://dx.doi.org/10.1155/2021/8730413.

Full text
Abstract:
The traditional small and medium enterprise management system suffers from low operation and management efficiency. To address this problem, a small and medium enterprise management system based on an edge algorithm is designed in this paper. In the proposed design, the management system is used for efficient information transmission and sharing. The system consists of a collaborative function layer, a data service layer, a basic environment layer, and an application layer. The application layer realizes the storage and integration of enterprise information data through the regulation of the collaborative management center and provides privacy encryption protection of information based on the protection mechanism of edge computing. The information is transmitted to the data layer for classified storage through the network communication of the basic environment layer. The collaborative function layer uses a process management control engine combined with each service interface to complete information retrieval from the data layer. It uses an algorithm that fuses common scoring users and information interest relationships to query and push information. The test results show that the system has good information query performance, returning pushed query information within one minute. Its functions meet user needs and effectively protect information privacy.
45

Maryanaji, Zohreh, Hajar Merrikhpour, and Hamed Abbasi. "Predicting soil temperature by applying atmosphere general circulation data in west Iran." Journal of Water and Climate Change 8, no. 2 (February 6, 2017): 203–18. http://dx.doi.org/10.2166/wcc.2017.027.

Full text
Abstract:
The main objective of this study is to develop a general methodology for predicting soil temperature based on general circulation data. To meet this demand, we used air temperature data covering a 20-year period to predict soil temperature. Accordingly, air temperature data were downscaled for 2016–2035 based on LARS-WG data. The obtained results indicated that the model precisely predicted minimum and maximum temperatures. According to the results, the best correlation methods are S, cubic, and quadratic. To investigate soil temperature changes, the predicted data were classified into two separate decades (2016–2025 and 2026–2035). The results showed that air temperature increases by 1 °C in the first decade (2016–2025) and by 1.2 °C in the second decade (2026–2035), but varies among regions. The predicted air temperature is lower in the eastern part of the region. In the central region, air and soil temperatures are predicted to be higher than in other regions. It should also be mentioned that temperature changes vary with soil depth.
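The correlation fits mentioned above relate air temperature to soil temperature; a simplified linear stand-in for the study's quadratic/cubic fits (the paired temperatures are assumed data):

```python
def linear_fit(xs, ys):
    """Ordinary least squares y = a + b*x -- a simplified stand-in for
    the quadratic/cubic air-to-soil temperature fits in the study."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# assumed paired monthly means: air temperature vs soil temperature (°C)
air = [2.0, 8.0, 15.0, 24.0]
soil = [3.0, 9.5, 16.0, 25.5]
a, b = linear_fit(air, soil)
print(round(a, 2), round(b, 2))  # intercept and slope of the fit
```

The same normal-equation approach extends to the quadratic and cubic forms by adding x² and x³ regressors.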
46

Sarafoglou, Nikias, Arne Andersson, Ingvar Holmberg, and Olle Ohlsson. "Spatial infrastructure and productivity in Sweden." Yugoslav Journal of Operations Research 16, no. 1 (2006): 67–83. http://dx.doi.org/10.2298/yjor0601067s.

Full text
Abstract:
Infrastructure consists of durable resources that are classified as "collective goods" generating external effects. The purpose of this paper is to analyze the role of spatial infrastructure in industrial productivity in Sweden by utilizing two complementary approaches: a non-parametric approach, Data Envelopment Analysis, and a parametric approach, the production function. These approaches are applied to a cross-sectional data set of Swedish regions. Both approaches show that metropolitan regions have relatively low road efficiencies in comparison with other regions in Sweden. On the other hand, the northern regions are more efficient than the southern regions.
47

Chen, Huiyu, Chao Yang, and Xiangdong Xu. "Clustering Vehicle Temporal and Spatial Travel Behavior Using License Plate Recognition Data." Journal of Advanced Transportation 2017 (2017): 1–14. http://dx.doi.org/10.1155/2017/1738085.

Full text
Abstract:
Understanding the travel patterns of vehicles can support the planning and design of better services. In addition, vehicle clustering can improve management efficiency through more targeted access to groups of interest and facilitate planning through more specific survey design. This paper clustered 854,712 vehicles over one week using the K-means clustering algorithm, based on license plate recognition (LPR) data obtained in Shenzhen, China. First, several travel characteristics related to temporal and spatial variability and activity patterns are used to identify homogeneous clusters. Then, the Davies-Bouldin index (DBI) and Silhouette Coefficient (SC) are applied to determine the optimal number of groups; consequently, six groups are identified on weekdays and three on weekends, including commuting vehicles and some occasional leisure travel vehicles. Moreover, a detailed analysis of each group's spatial travel patterns and temporal changes is presented. This study highlights the possibility of applying LPR data to discover the underlying factors in vehicle travel patterns and examine the characteristics of specific groups.
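The clustering step can be sketched with plain Lloyd's K-means (deterministic initialization on the first k points; the two-feature "temporal signature" vectors are invented, and the DBI/SC model selection is omitted):

```python
def kmeans(points, k, iters=20):
    """Plain Lloyd's K-means: assign each point to its nearest center,
    then move each center to its cluster mean, and repeat."""
    centers = [list(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[d.index(min(d))].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # keep the old center if a cluster empties out
                centers[i] = [sum(v) / len(cl) for v in zip(*cl)]
    return centers, clusters

# toy "temporal signature" vectors (assumed): [AM trips, PM trips]
points = [(8, 1), (9, 2), (7, 1), (1, 9), (2, 8)]
centers, clusters = kmeans(points, k=2)
print(sorted(len(c) for c in clusters))  # [2, 3]
```

Choosing k by minimizing DBI (or maximizing SC) means rerunning this loop for each candidate k and scoring the resulting partitions.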
48

Gaikar, Dipak Damodar, Bijith Marakarkandy, and Chandan Dasgupta. "Using Twitter data to predict the performance of Bollywood movies." Industrial Management & Data Systems 115, no. 9 (October 19, 2015): 1604–21. http://dx.doi.org/10.1108/imds-04-2015-0145.

Full text
Abstract:
Purpose – The purpose of this paper is to address the shortcomings of limited research in forecasting the power of social media in India. Design/methodology/approach – This paper uses sentiment analysis and prediction algorithms to analyze the performance of Indian movies based on data obtained from social media sites. The authors used the Twitter4j Java API for extracting tweets through an authenticated connection with Twitter web sites, stored the extracted data in a MySQL database, and used the data for sentiment analysis. To perform sentiment analysis of Twitter data, the Probabilistic Latent Semantic Analysis classification model is used to find the sentiment score in the form of positive, negative and neutral. The data mining algorithm Fuzzy Inference System is used to implement sentiment analysis and predict movie performance, which is classified into three categories: hit, flop and average. Findings – In this study the authors obtained box office performance results based on the fuzzy inference system algorithm for prediction. The fuzzy inference system takes two factors, namely, sentiment score and actor rating, to get an accurate result. By calculating the opening weekend collection, the authors found that the predicted values were approximately the same as the actual values. For the movie Singham Returns, the method of prediction gave a box office collection of 84 crores, and the actual collection turned out to be 88 crores. Research limitations/implications – The current study suffers from the limitation of not having enough computing resources to crawl the data. For predicting box office collection, there is no reliable availability of ticket price information, total number of seats per screen, and total number of shows per day on all screens.
In future work the authors can add several other inputs, such as the movie budget, Central Board of Film Certification rating, movie genre, and target audience, that will improve the accuracy and quality of the prediction. Originality/value – The authors used factors for predicting box office movie performance that had not been used in previous literature. This work is valuable for promoting the products and services of firms.
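A much-simplified sketch of the pipeline's final step, combining a sentiment score with an actor rating into the hit/flop/average classes (the word lists, weights, and cut-offs are assumptions; the paper uses PLSA and a fuzzy inference system):

```python
def sentiment_score(tweets, positive, negative):
    """Fraction of sentiment-bearing words that are positive, in [0, 1].
    (A word-list stand-in for the paper's PLSA classifier.)"""
    pos = sum(w in positive for t in tweets for w in t.lower().split())
    neg = sum(w in negative for t in tweets for w in t.lower().split())
    return pos / (pos + neg) if pos + neg else 0.5

def predict_performance(score, actor_rating):
    """Combine sentiment score and actor rating (0-10) into the three
    classes used in the study; weights and cut-offs are assumptions."""
    combined = 0.7 * score + 0.3 * actor_rating / 10
    if combined >= 0.65:
        return "hit"
    if combined >= 0.45:
        return "average"
    return "flop"

positive, negative = {"great", "hit", "awesome"}, {"boring", "flop"}
tweets = ["Great movie, awesome action", "a bit boring in parts"]
score = sentiment_score(tweets, positive, negative)
print(predict_performance(score, actor_rating=8))  # hit
```

A fuzzy inference system would replace the hard cut-offs with overlapping membership functions and rule firing strengths.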
49

Qiu, Zhiqi, and Jiwei Han. "Artificial Intelligence of Internet of Things Based on Machine Learning and College Student Management." Mobile Information Systems 2022 (August 23, 2022): 1–10. http://dx.doi.org/10.1155/2022/8620277.

Full text
Abstract:
Under the background of the development of higher education, and according to the characteristics of college student management, this study analyzes the topic's background and practical significance and then constructs an intelligent Internet of Things college student management system based on machine learning. The data volume of the Internet of Things is huge, so ensuring the normal and efficient operation of the system is the primary goal. In this study, the system's data management model is constructed with the help of a recurrent neural network, a machine learning algorithm, to predict data and optimize the computer program. At the same time, system data are filled and classified by a k-nearest neighbor model, and the data are trained and simulated by constructing a secure BiLSTM neural network system. Because student information in the university database involves personal privacy, and in order to ensure system security and avoid data leakage, the study computes abnormal monitoring data and a loss function from darknet traffic using the ip2vec algorithm as the judgment standard for erroneous data flows, thereby establishing an anomaly monitoring model that identifies erroneous data flows in the system. Finally, it constructs the college student management system and expounds the basic requirements of the system use cases. After a series of tests of system performance, capacity, and stability, the results meet the basic requirements of system operation, which provides a reference for the future application of college student management systems.
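The k-nearest neighbor step used for filling and classifying records can be sketched as follows (the attendance/grade features and labels are invented examples):

```python
def knn_classify(query, records, k=3):
    """Label a record by the majority class of its k nearest
    neighbours (squared Euclidean distance) -- the kind of k-NN step
    the system uses to fill and classify student data."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(query, feats)), label)
        for feats, label in records
    )
    top = [label for _, label in dists[:k]]
    return max(set(top), key=top.count)

# assumed features: (attendance %, average grade)
records = [
    ((95, 85), "good standing"), ((90, 80), "good standing"),
    ((92, 88), "good standing"), ((40, 50), "at risk"),
    ((35, 45), "at risk"),
]
print(knn_classify((88, 82), records, k=3))  # good standing
```

Filling a missing attribute works the same way, except the neighbours' attribute values are averaged instead of voted on.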
50

Lowe, Roger C., Chris J. Cieszewski, Shangbin Liu, Qingmin Meng, Jacek P. Siry, Michał Zasada, and Jaroslaw Zawadzki. "Assessment of Stream Management Zones and Road Beautifying Buffers in Georgia Based on Remote Sensing and Various Ground Inventory Data." Southern Journal of Applied Forestry 33, no. 2 (May 1, 2009): 91–100. http://dx.doi.org/10.1093/sjaf/33.2.91.

Full text
Abstract:
Abstract Stream management zones (SMZs) and road beautifying buffers (RBBs) are voluntary in Georgia and have an unknown extent and impact on the state's forest production. We describe analyses of these buffers, including an estimation of their potential areas and volumes, and their distributions in different forest cover types under an assumption of their full implementation. We base this analysis on Landsat 7 Enhanced Thematic Mapper Plus imagery and various sources of ancillary data, such as those from the Georgia Gap Analysis Program, the Forest Inventory and Analysis large-scale forest survey, and various industrial forest ground inventories. We considered stream data classified into trout, perennial, and intermittent streams, which we combined with elevation and slope information to assess buffer widths consistent with Georgia's Best Management Practices rules. Our results indicate that minimum width 12.2-m SMZ buffers would occupy about 4.01% of the total forested area in Georgia and would cover about 4.32% of the state's volume. The area of the wider, 30.5-m SMZ buffers would cover about 8.65% of the total forested area in Georgia and would cover about 9.27% of the state's total volume. The minimum-width 12.2-m RBBs would occupy about 3.64% of the total forested area in Georgia and would cover about 3.52% of the state's volume. The area of the wider, 30.5-m RBBs would occupy almost 8.68% of the total forested area in Georgia and would cover about 8.40% of the state's total volume.
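The buffer percentages above follow from strip-area arithmetic: buffer area is roughly stream length times twice the buffer width (one strip per bank). A sketch with invented figures (not Georgia's actual stream network or forest area):

```python
def buffer_share(stream_km, width_m, forest_ha):
    """Percent of forested area taken by a stream buffer of the given
    width applied to both banks (simplified strip-area estimate)."""
    buffer_ha = stream_km * 1000 * (2 * width_m) / 10_000  # m^2 -> ha
    return 100 * buffer_ha / forest_ha

# illustrative figures: 50,000 km of streams, 10 million ha of forest
print(round(buffer_share(stream_km=50_000, width_m=12.2, forest_ha=10_000_000), 2))
```

The study's GIS analysis refines this by varying width with stream class, slope, and elevation, and by intersecting the buffers with cover-type rasters.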