Journal articles on the topic 'Records Australia Management Data processing'


Consult the top 50 journal articles for your research on the topic 'Records Australia Management Data processing.'


1

Goh, Elaine. "Clear skies or cloudy forecast?" Records Management Journal 24, no. 1 (March 11, 2014): 56–73. http://dx.doi.org/10.1108/rmj-01-2014-0001.

Abstract:
Purpose – Using the example of audiovisual materials, this paper aims to illustrate how records-related and archival legislation lags behind advances in technology. As more audiovisual materials are created on the cloud, questions arise about the applicability of national laws over the control, ownership, and custody of data and records. Design/methodology/approach – This paper analyses court cases relating to audiovisual materials in the cloud and archival legislation from three Commonwealth countries: Canada, Australia, and Singapore – representing North America, the Pacific, and Asia respectively. Findings – Current records-related and archival legislation does not effectively address the creation, processing, and preservation of records and data in a cloud environment. The paper identifies several records-related risks linked to the cloud – risks related to the ownership and custody of data, legal risks due to transborder data flow, and risks due to differing interpretations on the act of copying and ownership of audiovisual materials. Research limitations/implications – The paper identifies the need for records professionals to pay greater attention to the implications of the emerging cloud environment. There is a need for further research on how the concept of extraterritoriality and transborder laws can be applied to develop model laws for the management and preservation of records in the cloud. Originality/value – The paper identifies record-related risks linked to the cloud by analyzing court cases and archival legislation. The paper examines maritime law to find useful principles that the archival field could draw on to mitigate some of these risks.
APA, Harvard, Vancouver, ISO, and other styles
2

Campos-Andaur, Paulina, Karen Padilla-Lobo, Nicolás Contreras-Barraza, Guido Salazar-Sepúlveda, and Alejandro Vega-Muñoz. "The Wine Effects in Tourism Studies: Mapping the Research Referents." Sustainability 14, no. 5 (February 23, 2022): 2569. http://dx.doi.org/10.3390/su14052569.

Abstract:
This research provides an empirical overview of articles and authors referring to research on wine tourism, analyzed from 2000 to 2021, and how they contribute to deepening Sustainable Development Goal (SDG) 8. The articles were examined through a bibliometric approach based on data from 199 records stored in the Web of Science (JCR), applying traditional bibliometric laws and using VOSviewer for data processing and metadata. The results highlight an exponential increase in scientific production without interruptions between 2005 and 2020, with a concentration in only 35 highly cited authors and hegemony held by Australia among the co-authorship networks of worldwide relevance. The main topics observed in the literature are local development through wine tourism, sustainability and nature conservation, and strategies for sustainable development. Finally, there are six articles with great worldwide influence in wine tourism studies that maintain in their entirety the contribution made by researchers affiliated with Australian universities.
3

Haile-Mariam, M., E. Schelfhorst, and M. E. Goddard. "Effect of data collection methods on the availability of calving ease, fertility and herd health data for evaluating Australian dairy cattle." Australian Journal of Experimental Agriculture 47, no. 6 (2007): 664. http://dx.doi.org/10.1071/ea05267.

Abstract:
There is concern in the Australian dairy industry that the fertility, calving ease and disease resistance of cows are declining and that this decline is, at least in part, a genetic change. Improvement in these traits might be achieved through better herd management and genetic selection. Both these strategies are dependent on the availability of suitable data. The Australian Dairy Herd Improvement Scheme publishes estimated breeding values for fertility, calving ease and somatic cell count. However, the accuracy of the estimated breeding values is limited by the amount and quality of data collected. This paper reports on a project conducted to identify a more efficient system for collecting non-production data, with the hypothesis that quantity and quality of data collected would improve if farmers used electronic data collection methods instead of ‘traditional’ methods, such as writing in a notebook. Of 78 farmers involved in the trial, 51 used a PALM handheld (PALM group), 18 wrote data on paper and later entered it in their farm computer (PC group) and nine submitted a paper record to their data processing centres for entry into the centres’ computers (PAPER group). Data collected from these 78 trial herds during the trial period (2002–04) were compared to data collected from 88 similar non-trial farms, which kept records on PC or paper. The ratio of number of events (health, calving ease or fertility) recorded to number of calvings was considered as a measure of level of recording. The results showed that, after adjusting for location and level of recording before the trial started, the PALM group collected significantly more calving ease, pregnancy test and other fertility data per calving than farmers who were not involved in the trial and the PAPER and PC groups.
The number of records collected by the PALM group increased from 0.13 pregnancy tests per calving in 2001 to 0.36 per calving in 2004, whereas there was little change in the amount of data collected by the other groups. Similarly, the number of calving ease records increased from 0.26 in 2001 to 0.33 in 2004 and the number of heats recorded increased from 0.02 in 2001 to 0.12 in 2004. This increase in data capture among farmers using the PALM was partly due to an increase in the number of farmers who submitted any data at all. For instance, of the PALM group, 86% sent data on calving ease and 61% on pregnancy, as compared to those from the PC and PAPER groups (below 57%) or those who were not involved in the trial (below 44%). When farmers who submitted at least one record of each type of data are considered, farmers in the PALM group still submitted significantly more fertility event data than those who were not involved in the trial and those in the PAPER group. The quality of the data did not appear to be affected by the data collection methods, though the completeness of the mating data was better in PALM and PC users. The use of electronic data entry on farms would increase the amount of data available for the calculation of estimated breeding values and hence the accuracy of these values for fertility, calving ease and health traits.
4

van Gemert, Caroline, Rebecca Guy, Mark Stoove, Wayne Dimech, Carol El-Hayek, Jason Asselin, Clarissa Moreira, et al. "Pathology Laboratory Surveillance in the Australian Collaboration for Coordinated Enhanced Sentinel Surveillance of Sexually Transmitted Infections and Blood-Borne Viruses: Protocol for a Cohort Study." JMIR Research Protocols 8, no. 8 (August 8, 2019): e13625. http://dx.doi.org/10.2196/13625.

Abstract:
Background: Passive surveillance is the principal method of sexually transmitted infection (STI) and blood-borne virus (BBV) surveillance in Australia whereby positive cases of select STIs and BBVs are notified to the state and territory health departments. A major limitation of passive surveillance is that it only collects information on positive cases and notifications are heavily dependent on testing patterns. Denominator testing data are important in the interpretation of notifications. Objective: The aim of this study is to establish a national pathology laboratory surveillance system, part of a larger national sentinel surveillance system called ACCESS (the Australian Collaboration for Coordinated Enhanced Sentinel Surveillance). ACCESS is designed to utilize denominator testing data to understand trends in case reporting and monitor the uptake and outcomes of testing for STIs and BBVs. Methods: ACCESS involves a range of clinical sites and pathology laboratories, each with a separate method of recruitment, data extraction, and data processing. This paper includes pathology laboratory sites only. First established in 2007 for chlamydia only, ACCESS expanded in 2012 to capture all diagnostic and clinical monitoring tests for STIs and BBVs, initially from pathology laboratories in New South Wales and Victoria, Australia, to at least one public and one private pathology laboratory in all Australian states and territories in 2016. The pathology laboratory sentinel surveillance system incorporates a longitudinal cohort design whereby all diagnostic and clinical monitoring tests for STIs and BBVs are collated from participating pathology laboratories in a line-listed format. An anonymous, unique identifier will be created to link patient data within and between participating pathology laboratory databases and to clinical services databases.
Using electronically extracted, line-listed data, several indicators for each STI and BBV can be calculated, including the number of tests, unique number of individuals tested and retested, test yield, positivity, and incidence. Results: To date, over 20 million STI and BBV laboratory test records have been extracted for analysis for surveillance monitoring nationally. Recruitment of laboratories is ongoing to ensure appropriate coverage for each state and territory; reporting of indicators will occur in 2019 with publication to follow. Conclusions: The ACCESS pathology laboratory sentinel surveillance network is a unique surveillance system that collects data on diagnostic testing, management, and care for STIs and BBVs. It complements the ACCESS clinical network and enhances Australia’s capacity to respond to STIs and BBVs. International Registered Report Identifier (IRRID): DERR1-10.2196/13625
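As a rough illustration of how such indicators can be derived from line-listed records, the sketch below computes tests, unique individuals, and positivity for one test type. The field names and sample rows are hypothetical, not the ACCESS data specification.

```python
# Hypothetical line-listed laboratory records: one row per test result.
records = [
    {"patient_id": "a1", "test": "HIV", "result": "negative"},
    {"patient_id": "a1", "test": "HIV", "result": "negative"},  # a retest
    {"patient_id": "b2", "test": "HIV", "result": "positive"},
    {"patient_id": "c3", "test": "HCV", "result": "negative"},
]

def indicators(rows, test_name):
    """Count tests, unique individuals tested, and positivity for one test."""
    subset = [r for r in rows if r["test"] == test_name]
    n_tests = len(subset)
    n_individuals = len({r["patient_id"] for r in subset})
    n_positive = sum(r["result"] == "positive" for r in subset)
    return {
        "tests": n_tests,
        "individuals": n_individuals,
        "positivity_pct": 100.0 * n_positive / n_tests if n_tests else 0.0,
    }

hiv = indicators(records, "HIV")
```

Linking records per patient, as in this toy `patient_id` key, is the role played by the anonymous unique identifier in the protocol.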
5

Schlumpf, Heidi, Nina Gaze, Hugh Grenfell, Frances Duff, Kelly Hall, Judith Charles, and Benjamin Mortensen. "Data Detectives - The Backlog Cataloguing Project at Auckland War Memorial Museum." Biodiversity Information Science and Standards 2 (June 15, 2018): e25194. http://dx.doi.org/10.3897/biss.2.25194.

Abstract:
The Collection Access and Readiness Programme (CARP) is a unique, well-defined programme with committed funding at Auckland War Memorial Museum (AWMM). In the Natural Sciences department, CARP has funded the equivalent of five positions over five collecting areas for four years. These are filled by six part-time collection technicians and a senior full-time manager. As Collection Technicians, our role, across Botany, Entomology, Geology, Marine, and Palaeontology, is to digitise acquisitions prior to December 2012. We are processing the backlogs of our collections, which are prioritised across all museum activities in distinct taxonomic projects. The cataloguing method involves gathering and verifying all available information and entering data into Vernon, our collections management system (https://vernonsystems.com/products/vernon-cms/), with specifically designed record standards aligned to Darwin Core (Wieczorek et al. 2012). CARP has allowed us the freedom to explore backlog collections, some of which have not been fully processed, revealing mysteries that would otherwise have sat undiscovered, and to resolve uncertainties across the collections. For example, in Botany, cataloguing the foreign ferns reveals previously unrealised type specimens; in Marine, cataloguing all 9117 specimen lots of the New Zealand Bivalvia collection brought classification and locality data uncertainties to resolution. There are multiple projects running concurrently in each collecting area, continually enriching our collection data. In turn, this is opening up a far wider range of information to the public through our online collection portal, AWMM Collections Online http://www.aucklandmuseum.com/discover/collections-online (currently 800,000 records). Open accessibility promotes careful consideration of how and what data we deliver, as it is disseminated through global portals, such as the Global Biodiversity Information Facility (GBIF) and Atlas of Living Australia (ALA).
Collections that have often had no more attention than the recording of their original labels have interesting stories beyond “just” cataloguing them. As cataloguers, we have found that the uncertainties or sometimes apparent lack of detail increases our engagement with our collections. Rather than solely copying information into the database, we become detectives, resolving uncertainties and verifying the background of our objects, collection sites and collectors. This engagement and the global reach of our data mean that we are invested in the programme, so that data entry continuity and accuracy are maximised. Our presentation will give an overview of the CARP and our method, and a look at our progress two years in, highlighting some of our discoveries and how the uncertainty in our data allows us to engage more with our collections.
6

Oruganti, Yagna. "Technology Focus: Data Analytics (October 2021)." Journal of Petroleum Technology 73, no. 10 (October 1, 2021): 60. http://dx.doi.org/10.2118/1021-0060-jpt.

Abstract:
With a moderate- to low-oil-price environment being the new normal, improving process efficiency, thereby leading to hydrocarbon recovery at reduced costs, is becoming the need of the hour. The oil and gas industry generates vast amounts of data that, if properly leveraged, can generate insights that lead to recovering hydrocarbons with reduced costs, better safety records, lower costs associated with equipment downtime, and reduced environmental footprint. Data analytics and machine-learning techniques offer tremendous potential in leveraging the data. An analysis of papers in OnePetro from 2014 to 2020 illustrates the steep increase in the number of machine-learning-related papers year after year. The analysis also reveals reservoir characterization, formation evaluation, and drilling as domains that have seen the highest number of papers on the application of machine-learning techniques. Reservoir characterization in particular is a field that has seen an explosion of papers on machine learning, with the use of convolutional neural networks for fault detection, seismic imaging and inversion, and the use of classical machine-learning algorithms such as random forests for lithofacies classification. Formation evaluation is another area that has gained a lot of traction with applications such as the use of classical machine-learning techniques such as support vector regression to predict rock mechanical properties and the use of deep-learning techniques such as long short-term memory to predict synthetic logs in unconventional reservoirs. Drilling is another domain where a tremendous amount of work has been done with papers on optimizing drilling parameters using techniques such as genetic algorithms, using automated machine-learning frameworks for bit dull grade prediction, and application of natural language processing for stuck-pipe prevention and reduction of nonproductive time. 
As the application of machine learning toward solving various problems in the upstream oil and gas industry proliferates, explainable artificial intelligence or machine-learning interpretability becomes critical for data scientists and business decision-makers alike. Data scientists need the ability to explain machine-learning models to executives and stakeholders to verify hypotheses and build trust in the models. One of the three highlighted papers used Shapley additive explanations, a game-theory-based approach to explaining machine-learning outputs, to provide a layer of interpretability to their machine-learning model for identification of geomechanical facies along horizontal wells. A cautionary note: While there is significant promise in applying these techniques, there remain many challenges in capitalizing on the data: lack of common data models in the industry, data silos, data stored in on-premises resources, slow migration of data to the cloud, legacy databases and systems, lack of digitization of older/legacy reports and well logs, and lack of standardization in data-collection methodologies across different facilities and geomarkets, to name a few. I would like to invite readers to review the selection of papers to get an idea of various applications in the upstream oil and gas space where machine-learning methods have been leveraged. The highlighted papers cover the topics of fatigue damage of marine risers, well performance optimization, and identification of frackable, brittle, and producible rock along horizontal wells using drilling data. Recommended additional reading at OnePetro: www.onepetro.org. SPE 201597 - Improved Robustness in Long-Term Pressure-Data Analysis Using Wavelets and Deep Learning by Dante Orta Alemán, Stanford University, et al.
SPE 202379 - A Network Data Analytics Approach to Assessing Reservoir Uncertainty and Identification of Characteristic Reservoir Models by Eugene Tan, the University of Western Australia, et al. OTC 30936 - Data-Driven Performance Optimization in Section Milling by Shantanu Neema, Chevron, et al.
7

Hallinan, Christine Mary, Sedigheh Khademi Habibabadi, Mike Conway, and Yvonne Ann Bonomo. "Social media discourse and internet search queries on cannabis as a medicine: A systematic scoping review." PLOS ONE 18, no. 1 (January 20, 2023): e0269143. http://dx.doi.org/10.1371/journal.pone.0269143.

Abstract:
The use of cannabis for medicinal purposes has increased globally over the past decade since patient access to medicinal cannabis has been legislated across jurisdictions in Europe, the United Kingdom, the United States, Canada, and Australia. Yet, evidence relating to the effect of medical cannabis on the management of symptoms for a suite of conditions is only just emerging. Although there is considerable engagement from many stakeholders to add to the evidence base through randomized controlled trials, many gaps in the literature remain. Data from real-world and patient-reported sources can provide opportunities to address this evidence deficit. These real-world data can be captured from a variety of sources, such as those found in routinely collected health care and health services records, including but not limited to patient-generated data from medical, administrative and claims data, patient-reported data from surveys, wearable trackers, patient registries, and social media. In this systematic scoping review, we seek to understand the utility of online user-generated text for investigating the use of cannabis as a medicine. We aimed to systematically search published literature to examine the extent, range, and nature of research that utilises user-generated content to examine cannabis as a medicine. The objective of this methodological review is to synthesise primary research that uses social media discourse and internet search engine queries to answer the following questions: (i) In what way is online user-generated text used as a data source in the investigation of cannabis as a medicine? (ii) What are the aims, data sources, methods, and research themes of studies using online user-generated text to discuss the medicinal use of cannabis? We conducted a manual search of primary research studies which used online user-generated text as a data source using the MEDLINE, Embase, Web of Science, and Scopus databases in October 2022.
Editorials, letters, commentaries, surveys, protocols, and book chapters were excluded from the review. Forty-two studies were included in this review: twenty-two used manually labelled data, four used existing metadata (Google Trends/geolocation data), two used data manually coded through crowdsourcing services, two used automated coding supplied by a social media analytics company, and fifteen used computational methods for annotating data. Our review reflects a growing interest in the use of user-generated content for public health surveillance. It also demonstrates the need for the development of a systematic approach for evaluating the quality of social media studies and highlights the utility of automatic processing and computational methods (machine learning technologies) for large social media datasets. This systematic scoping review has shown that user-generated content as a data source for studying cannabis as a medicine provides another means to understand how cannabis is perceived and used in the community. As such, it provides another potential ‘tool’ with which to engage in pharmacovigilance of, not only cannabis as a medicine, but also other novel therapeutics as they enter the market.
8

Unwin, Elizabeth, James Codde, Louise Gill, Suzanne Stevens, and Timothy Nelson. "The WA Hospital Morbidity Data System: An Evaluation of its Performance and the Impact of Electronic Data Transfer." Health Information Management 26, no. 4 (December 1996): 189–92. http://dx.doi.org/10.1177/183335839702600407.

Abstract:
This paper evaluates the performance of the Hospital Morbidity Data System, maintained by the Health Statistics Branch (HSB) of the Health Department of Western Australia (WA). The time taken to process discharge summaries was compared in the first and second halves of 1995, using the number of weeks taken to process 90% of all discharges and the percentage of records processed within four weeks as indicators of throughput. Both the hospitals and the HSB showed improvements in timeliness during the second half of the year. The paper also examines the impact of a recently introduced electronic data transfer system for WA country public hospitals on the timeliness of morbidity data. The processing time of country hospital records by the HSB was reduced to a similar time as for metropolitan hospitals, but the processing time in the hospitals increased, resulting in little improvement in total processing time.
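The two throughput indicators used in the evaluation above can be computed from per-record processing times as in this small sketch; the sample times are invented for illustration.

```python
import math

def weeks_for_quantile(times_weeks, q=0.90):
    """Smallest number of weeks within which a share q of records was processed."""
    ordered = sorted(times_weeks)
    return ordered[math.ceil(q * len(ordered)) - 1]

def pct_within(times_weeks, limit_weeks=4):
    """Percentage of records processed within limit_weeks."""
    return 100.0 * sum(t <= limit_weeks for t in times_weeks) / len(times_weeks)

times = [1, 2, 2, 3, 3, 4, 5, 6, 8, 12]  # invented weeks-to-process per record
p90 = weeks_for_quantile(times)          # weeks taken to process 90% of records
within4 = pct_within(times)              # % of records processed within 4 weeks
```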
9

Mesibov, Robert. "An audit of some processing effects in aggregated occurrence records." ZooKeys 751 (April 20, 2018): 129–46. http://dx.doi.org/10.3897/zookeys.751.24791.

Abstract:
A total of ca 800,000 occurrence records from the Australian Museum (AM), Museums Victoria (MV) and the New Zealand Arthropod Collection (NZAC) were audited for changes in selected Darwin Core fields after processing by the Atlas of Living Australia (ALA; for AM and MV records) and the Global Biodiversity Information Facility (GBIF; for AM, MV and NZAC records). Formal taxon names in the genus- and species-groups were changed in 13–21% of AM and MV records, depending on dataset and aggregator. There was little agreement between the two aggregators on processed names, with names changed in two to three times as many records by one aggregator alone compared to records with names changed by both aggregators. The type status of specimen records did not change with name changes, resulting in confusion as to the name with which a type was associated. Data losses of up to 100% were found after processing in some fields, apparently due to programming errors. The taxonomic usefulness of occurrence records could be improved if aggregators included both original and the processed taxonomic data items for each record. It is recommended that end-users check original and processed records for data loss and name replacements after processing by aggregators.
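The kind of field-level audit the paper describes, comparing original museum records against their aggregator-processed copies, can be sketched as follows. The record structure, the chosen Darwin Core fields, and the sample values are hypothetical, not the paper's datasets.

```python
# Darwin Core fields to audit (an illustrative selection).
AUDIT_FIELDS = ["scientificName", "genus", "locality"]

def audit(originals, processed):
    """originals/processed: dicts keyed by occurrenceID -> field dict.
    Returns per-field counts of changed and lost values after processing."""
    changed = {f: 0 for f in AUDIT_FIELDS}
    lost = {f: 0 for f in AUDIT_FIELDS}
    for rec_id, orig in originals.items():
        proc = processed.get(rec_id, {})
        for field in AUDIT_FIELDS:
            before, after = orig.get(field), proc.get(field)
            if before and not after:
                lost[field] += 1      # value dropped during processing
            elif before != after:
                changed[field] += 1   # value replaced during processing
    return changed, lost

# Hypothetical original and aggregator-processed records.
originals = {
    "am:1": {"scientificName": "Ommatoiulus moreleti",
             "genus": "Ommatoiulus", "locality": "Hobart"},
    "am:2": {"scientificName": "Pogonosternum nigrovirens",
             "genus": "Pogonosternum", "locality": "Launceston"},
}
processed = {
    "am:1": {"scientificName": "Ommatoiulus moreletii",  # name replaced
             "genus": "Ommatoiulus", "locality": "Hobart"},
    "am:2": {"scientificName": "Pogonosternum nigrovirens",
             "genus": "Pogonosternum"},                   # locality lost
}

changed, lost = audit(originals, processed)
```

This is the comparison the paper recommends end-users perform themselves: checking original against processed records for name replacements and data loss.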
10

van Ginneken, A. M. "Modelling Domain-Knowledge: a Step toward Intelligent Data Management." Methods of Information in Medicine 32, no. 04 (1993): 270–71. http://dx.doi.org/10.1055/s-0038-1634940.

11

Pearce, Christopher, Adam McLeod, Jon Patrick, Jason Ferrigi, Michael Michael Bainbridge, Natalie Rinehart, and Anna Fragkoudi. "Coding and classifying GP data: the POLAR project." BMJ Health & Care Informatics 26, no. 1 (November 2019): e100009. http://dx.doi.org/10.1136/bmjhci-2019-100009.

Abstract:
Background: Data, particularly ‘big’ data, are increasingly being used for research in health. Using data from electronic medical records optimally requires coded data, but not all systems produce coded data. Objective: To design a suitable, accurate method for converting large volumes of narrative diagnoses from Australian general practice records into SNOMED-CT-AU codes. Such codification will make them clinically useful for aggregation for population health and research purposes. Method: The developed method consisted of using natural language processing to automatically code the texts, followed by a manual process to correct codes and subsequent natural language processing re-computation. These steps were repeated for four iterations until 95% of the records were coded. The coded data were then aggregated into classes considered to be useful for population health analytics. Results: Coding the data effectively covered 95% of the corpus. Problems with the use of SNOMED CT-AU were identified and protocols for creating consistent coding were created. These protocols can be used to guide further development of SNOMED CT-AU (SCT). The coded values will be immensely useful for the development of population health analytics for Australia, and the lessons learnt are applicable elsewhere.
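The iterative codify-then-correct loop described in the abstract can be sketched roughly as follows. The term-to-code map, the similarity cutoff, and the sample diagnoses are all illustrative assumptions, not the POLAR project's actual method or SNOMED CT-AU content.

```python
import difflib

# A toy term-to-code map standing in for a clinical terminology lookup.
TERM_MAP = {
    "asthma": "195967001",
    "type 2 diabetes mellitus": "44054006",
    "hypertension": "38341003",
}

def code_text(text, term_map, cutoff=0.75):
    """Auto-code one free-text diagnosis; return None when manual coding is needed."""
    key = text.strip().lower()
    if key in term_map:
        return term_map[key]
    close = difflib.get_close_matches(key, term_map, n=1, cutoff=cutoff)
    return term_map[close[0]] if close else None

def coding_pass(texts, term_map):
    """One automatic pass: split texts into coded and still-uncoded."""
    coded, uncoded = {}, []
    for text in texts:
        code = code_text(text, term_map)
        if code:
            coded[text] = code
        else:
            uncoded.append(text)
    return coded, uncoded

texts = ["Asthma", "type 2 diabetes", "HTN"]
coded, uncoded = coding_pass(texts, TERM_MAP)

# Manual correction step: a reviewer codes the leftover abbreviation,
# and the enriched map feeds the next automatic pass.
TERM_MAP["htn"] = "38341003"
recoded, still_uncoded = coding_pass(uncoded, TERM_MAP)
```

In the study itself the automatic step was a natural language processing system rather than simple fuzzy string matching; the sketch only illustrates how manual corrections feed back into subsequent automatic passes until coverage converges.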
12

Keakopa, Tumelo, and Olefhile Mosweu. "Data protection in Botswana." ESARBICA Journal: Journal of the Eastern and Southern Africa Regional Branch of the International Council on Archives 39, no. 1 (December 24, 2020): 65–78. http://dx.doi.org/10.4314/esarjo.v39i1.5.

Abstract:
Data protection legislation is concerned with the safeguarding of privacy rights of individuals in relation to the processing of personal data, regardless of media or format. The Government of Botswana enacted the Data Protection Act in 2018 for purposes of regulating personal data and to ensure the protection of individual privacy as it relates to personal data, and its maintenance. This paper investigates opportunities and challenges for records management, and recommends measures to be put in place in support of data protection, through proper records management practices. The study employed a desktop approach and data were collected using content analysis. The study found that opportunities such as improved retrieval and access to information, improved job opportunities for records management professionals and a conducive legislative framework are available. It also revealed that a lack of resources to drive the records management function, limitations in electronic document and records systems, and a lack of freedom of information legislation to regulate public access to public information remain challenges. The study recommends the employment of qualified records management staff with capacity to manage records in the networked environment for purposes of designing and implementing records management programmes that can facilitate compliance with the requirements prescribed by the Data Protection Act.
13

Alleway, Heidi K., Ruth H. Thurstan, Peter R. Lauer, and Sean D. Connell. "Incorporating historical data into aquaculture planning." ICES Journal of Marine Science 73, no. 5 (November 2, 2015): 1427–36. http://dx.doi.org/10.1093/icesjms/fsv191.

Abstract:
Marine historical research has made progress in bridging the gap between science and policy, but examples in which it has been effectively applied remain few. In particular, its application to aquaculture remains unexplored. Using actual examples of natural resource management in the state of South Australia, we illustrate how historical data of varying resolution can be incorporated into aquaculture planning. Historical fisheries records were reviewed to identify data on the now-extinct native oyster Ostrea angasi fishery throughout the 1800s and early 1900s. Records of catch, number of boats fishing, and catch per unit effort (cpue) were used to test fishing rates and estimate the total quantity of oysters taken from select locations across periods of time. Catch quantities enabled calculation of the minimum number of oysters per hectare for two locations. These data were presented to government scientists, managers, and industry. As a result, interest in growing O. angasi increased and new areas for oyster aquaculture were included in regulatory zoning (spatial planning). Records of introductions of the non-native oyster Saccostrea glomerata, Sydney rock oysters, from 1866 through 1959, were also identified and used to evaluate the biosecurity risk of aquaculture for this species through semi-quantitative risk assessment. Although applications to culture S. glomerata in South Australia had previously been declined, the inclusion of historical data in risk assessment led to the conclusion that applications to culture this species would be accepted. The examples presented here have been effectively incorporated into management processes and represent an important opportunity for the aquaculture industry in South Australia to diversify. This demonstrates that historical data can be used to inform planning and support industry, government, and societies in addressing challenges associated with aquaculture, as well as natural resource management more broadly.
14

Ciora, Radu Adrian, Daniela Gîfu, and Adriana-Lavinia Cioca. "A Solution for Medical Information Management." Acta Medica Transilvanica 26, no. 3 (September 1, 2021): 30–33. http://dx.doi.org/10.2478/amtsb-2021-0045.

Abstract:
Nowadays, the amount of data generated by medical devices has increased exponentially. The aim of this paper is to develop an integrated health data management tool that aggregates data from various sources, which are in various formats. With the aid of artificial intelligence (AI), these data will be processed and will help healthcare professionals be aware of improvements that could make the healthcare system more preventive, predictive and personalized. This paper introduces an integrated medical information management system, intended to manage medical activities in hospitals, clinics and laboratories, and describes its development and future directions of improvement. Furthermore, it presents a smart analysis tool that can not only generate statistical data but also infer additional information from the medical records based on natural language processing (NLP), image processing and machine learning. The novelty of the system is that it gives an overview of the patients’ medical record, statistical analysis, examination results and interpretations. Furthermore, the system tries to predict the evolution of a disease based on previous medical records.
15

Segawa, Tomoyo, and Catherine Kemper. "Cetacean strandings in South Australia (1881–2008)." Australian Mammalogy 37, no. 1 (2015): 51. http://dx.doi.org/10.1071/am14029.

Full text
Abstract:
Long-term monitoring of cetacean strandings is essential for good management. This study updates previous summaries for South Australia by adding up to 20 years of comprehensive data, including results of necropsy examinations. A total of 1078 records were examined. Thirty-one species were recorded: 9 (7% of records) mysticetes, 22 (88%) odontocetes and the rest (5%) unidentified. The number of species new to South Australia did not reach an asymptote, with potential for at least five additional species. Small cetaceans were more frequently recorded after 1990, possibly due to increased reporting effort. Stranding records increased markedly after 1970. Records for all species occurred year-round. Beaked whales stranded primarily during January–April, baleen whales during July–January and common dolphins during February–May. Geographic hotspots were identified and related to upwelling and reporting effort. A necropsy program since 1990 resulted in 315 of 856 records being assigned to a circumstance of death, with anthropogenic circumstances accounting for 42% of these. Known Entanglement (21%, 66 of 315) and Probable Entanglement (12%, 37 of 315) were the most recorded anthropogenic circumstances of death. Future research correlating strandings with oceanographic/climatic conditions may help to explain the documented patterns but first the effects of reporting effort need to be accounted for.
16

Viola, Cristina N. A., Danielle C. Verdon-Kidd, David J. Hanslow, Sam Maddox, and Hannah E. Power. "Long-Term Dataset of Tidal Residuals in New South Wales, Australia." Data 6, no. 10 (September 23, 2021): 101. http://dx.doi.org/10.3390/data6100101.

Full text
Abstract:
Continuous water level records are required to detect long-term trends and analyse the climatological mechanisms responsible for extreme events. This paper compiles nine ocean water level records from gauges located along the New South Wales (NSW) coast of Australia. These gauges represent the longest and most complete records of hourly—and in five cases 15-min—water level data for this region. The datasets were adjusted to the vertical Australian Height Datum (AHD) and had the rainfall-related peaks removed from the records. The Unified Tidal Analysis and Prediction (Utide) model was subsequently used to predict tides for datasets with at least 25 years of records to obtain the associated tidal residuals. Finally, we provide a series of examples of how this dataset can be used to analyse trends in tidal anomalies as well as extreme events and their causal processes.
17

Bernad Julvian Zebua and Lea Sri Ita Br P.A. "Statistik Data Administrasi Sensus Data Pasien Raat Inap Di RSE Medan." MAMEN: Jurnal Manajemen 1, no. 3 (July 30, 2022): 286–93. http://dx.doi.org/10.55123/mamen.v1i3.662.

Full text
Abstract:
The daily census of inpatients is the count of inpatients from 00.00 to 24.00. At RSE Medan, it is carried out by nurses and data processing officers in the medical records section. However, there are problems with medical records officers, completeness of data, effectiveness of data processing, and timeliness of information presentation. The purpose of this study is to evaluate the daily inpatient census data management activities at RSE Medan. This is a descriptive study with quantitative and qualitative approaches; the object is the daily inpatient census data management activities, with nurses, medical records officers, the head of the medical records installation, and the heads of the inpatient rooms as subjects. Data were collected using questionnaires and checklists and analysed quantitatively and qualitatively. The results show that the inpatient data in each room are incomplete, which affects the effectiveness of data processing. It is concluded that errors occur on two sides, input and output. On the input side, the education of medical records officers is not appropriate, data on length of care, age, debtor and diagnosis are incomplete, and recapitulation has not been completed; on the output side, the information for one month cannot be known in the following month. Standard operating procedures and a computer-based medical record system are recommended.
18

Mazza, Danielle, Christopher Pearce, Lyle Robert Turner, Maria De Leon-Santiago, Adam McLeod, Jason Ferriggi, and Marianne Shearer. "The Melbourne East Monash General Practice Database (MAGNET): Using data from computerised medical records to create a platform for primary care and health services research." Journal of Innovation in Health Informatics 23, no. 2 (July 4, 2016): 523. http://dx.doi.org/10.14236/jhi.v23i2.181.

Full text
Abstract:
The Melbourne East MonAsh GeNeral PracticE DaTabase (MAGNET) research platform was launched in 2013 to provide a unique data source for primary care and health services research in Australia. MAGNET contains information from the computerised records of 50 participating general practices and includes data from the computerised medical records of more than 1,100,000 patients. The data extracted is patient-level episodic information and includes a variety of fields related to patient demographics and historical clinical information, along with the characteristics of the participating general practices. While there are limitations to the data that is currently available, the MAGNET research platform continues to investigate other avenues for improving the breadth and quality of data, with the aim of providing a more comprehensive picture of primary care in Australia.
19

Souza, Rosembergue Pereira, Luiz Fernando Rust da Costa Carmo, and Luci Pirmez. "Rapid video assessment for monitoring testing facility fraud." International Journal of Quality & Reliability Management 35, no. 8 (September 3, 2018): 1508–18. http://dx.doi.org/10.1108/ijqrm-01-2017-0022.

Full text
Abstract:
Purpose The purpose of this paper is to present a procedure for finding unusual patterns in accredited tests using a rapid processing method for analyzing video records. The procedure uses the temporal differencing technique for object tracking and considers only frames not identified as statistically redundant. Design/methodology/approach An accreditation organization is responsible for accrediting facilities to undertake testing and calibration activities. Periodically, such organizations evaluate accredited testing facilities. These evaluations could use video records and photographs of the tests performed by the facility to judge their conformity to technical requirements. To validate the proposed procedure, a real-world data set with video records from accredited testing facilities in the field of vehicle safety in Brazil was used. The processing time of this proposed procedure was compared with the time needed to process the video records in a traditional fashion. Findings With an appropriate threshold value, the proposed procedure could successfully identify video records of fraudulent services. Processing time was faster than when a traditional method was employed. Originality/value Manually evaluating video records is time consuming and tedious. This paper proposes a procedure to rapidly find unusual patterns in videos of accredited tests with a minimum of manual effort.
20

Archanjo, Gabriel A., and Fernando J. Von Zuben. "Genetic Programming for Automating the Development of Data Management Algorithms in Information Technology Systems." Advances in Software Engineering 2012 (July 5, 2012): 1–14. http://dx.doi.org/10.1155/2012/893701.

Full text
Abstract:
Information technology (IT) systems are present in almost all fields of human activity, with emphasis on processing, storage, and handling of datasets. Automated methods to provide access to data stored in databases have been proposed mainly for tasks related to knowledge discovery and data mining (KDD). However, for this purpose, the database is used only to query data in order to find relevant patterns associated with the records. Processes modelled on IT systems should manipulate the records to modify the state of the system. Linear genetic programming for databases (LGPDB) is a tool proposed here for automatic generation of programs that can query, delete, insert, and update records on databases. The obtained results indicate that the LGPDB approach is able to generate programs for effectively modelling processes of IT systems, opening the possibility of automating relevant stages of data manipulation, and thus allowing human programmers to focus on more complex tasks.
21

Yoon, Seung-Chul, Tae Sung Shin, Kurt Lawrence, and Deana R. Jones. "Development of Online Egg Grading Information Management System with Data Warehouse Technique." Applied Engineering in Agriculture 36, no. 4 (2020): 589–604. http://dx.doi.org/10.13031/aea.13675.

Full text
Abstract:
Highlights:
- A digital data collection and management system is developed for the USDA-AMS’s shell-egg grading program.
- A database system consisting of OLTP, data warehouse and OLAP databases enables online data entry and trend reporting.
- Data and information management is done through web application servers.
- Users access the databases via web browsers.

Abstract: This paper is concerned with the development of a web-based online data entry and reporting system capable of centralized data storage and analytics of egg grading records produced by USDA egg graders. The USDA egg grading records are currently managed in paper form. While they contain useful information for data-driven knowledge discovery and decision making, the paper-based egg grading record system has fundamental limitations in the effective and timely management of such information. Thus, there has been a demand to store and manage the egg grading records electronically and digitally in a database for data analytics and mining, such that the quality trends of eggs observed at various levels (e.g., nation or state) are readily available to decision makers. In this study, we report the design and implementation of a web-based online data entry and reporting information system (the USDA Egg Grading Information Management System, EGIMS), based on a data warehouse framework. The developed information system consists of web applications for data entry and reporting, and internal databases for data storage, aggregation, and query processing. The internal databases consist of an online transaction processing (OLTP) database for data entry and retrieval, a data warehouse (DW) for centralized data storage, and an online analytical processing (OLAP) database for multidimensional analytical queries. Thus, the key design goal of the system was to build a platform that could provide web-based data entry and reporting capabilities while rapidly updating the OLTP, DW and OLAP databases.
The developed system was evaluated in a simulation study with statistically modeled egg grading records for one hypothetical year. The study found that the EGIMS could handle up to approximately 600 concurrent users, 32 data entries per second and 164 report requests per second, on average. The study demonstrated the feasibility of an enterprise-level data warehouse system for the USDA and the potential to provide data analytics and data mining capabilities such that queries about historical and current trends can be reported. Once fully implemented and tested in the field, the EGIMS is expected to provide a solution for modernizing the egg grading practice of the USDA and to produce useful information for timely decisions and new knowledge discovery. Keywords: Data warehouse, Database, OLTP, OLAP, Egg grading, Information management, Web application, Information system, Data.
22

Rahimi, M. M., and F. Hakimpour. "TOWARDS A CLOUD BASED SMART TRAFFIC MANAGEMENT FRAMEWORK." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W4 (September 27, 2017): 447–53. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w4-447-2017.

Full text
Abstract:
Traffic big data has brought many opportunities for traffic management applications. However, several challenges, such as the heterogeneity, storage, management, processing and analysis of traffic big data, may hinder its efficient and real-time application. All these challenges call for a well-adapted distributed framework for smart traffic management that can efficiently handle big traffic data integration, indexing, query processing, mining and analysis. In this paper, we present a novel, distributed, scalable and efficient framework for traffic management applications. The proposed cloud-computing-based framework can answer the technical challenges of efficient and real-time storage, management, processing and analysis of traffic big data. For evaluation of the framework, we used OpenStreetMap (OSM) real trajectories and road networks in a distributed environment. Our evaluation results indicate that the speed of data import into this framework exceeds 8000 records per second when the dataset size is near 5 million. We also evaluated the performance of data retrieval in our proposed framework: the data retrieval speed exceeds 15000 records per second at the same dataset size. We further evaluated the scalability and performance of our proposed framework using parallelisation of a critical pre-analysis in transportation applications. The results show that the proposed framework achieves considerable performance and efficiency in traffic management applications.
23

da Rocha, Naila Camila, Abner Macola Pacheco Barbosa, Yaron Oliveira Schnr, Juliana Machado-Rugolo, Luis Gustavo Modelli de Andrade, José Eduardo Corrente, and Liciana Vaz de Arruda Silveira. "Natural Language Processing to Extract Information from Portuguese-Language Medical Records." Data 8, no. 1 (December 29, 2022): 11. http://dx.doi.org/10.3390/data8010011.

Full text
Abstract:
Studies that use medical records are often impeded due to the information presented in narrative fields. However, recent studies have used artificial intelligence to extract and process secondary health data from electronic medical records. The aim of this study was to develop a neural network that uses data from unstructured medical records to capture information regarding symptoms, diagnoses, medications, conditions, exams, and treatment. Data from 30,000 medical records of patients hospitalized in the Clinical Hospital of the Botucatu Medical School (HCFMB), São Paulo, Brazil, were obtained, creating a corpus with 1200 clinical texts. A natural language algorithm for text extraction and convolutional neural networks for pattern recognition were used to evaluate the model with goodness-of-fit indices. The results showed good accuracy, considering the complexity of the model, with an F-score of 63.9% and a precision of 72.7%. The patient condition class reached a precision of 90.3% and the medication class reached 87.5%. The proposed neural network will facilitate the detection of relationships between diseases and symptoms and prevalence and incidence, in addition to detecting the identification of clinical conditions, disease evolution, and the effects of prescribed medications.
24

Stuart, Katharine. "Methods, methodology and madness." Records Management Journal 27, no. 2 (July 17, 2017): 223–32. http://dx.doi.org/10.1108/rmj-05-2017-0012.

Full text
Abstract:
Purpose This paper aims to present findings from a recent study examining current records management as fit for digital government in Australia. Design/methodology/approach This paper draws on findings from the first phase of research for a postdoctoral degree. This research was collected through an online quantitative survey of government records management professionals in Australia. The survey’s purpose was to understand whether the profession has kept pace with advances in, and expectations of, digital government. Building on the findings of the survey, this paper explores the concepts of methodology and methods and applies them to current digital records management in the Australian Government. Methodology for Australian Government digital records management is contained in the 2015 Digital Continuity 2020 policy. However, measuring method proved more difficult. The researcher supplemented data published by the National Archives of Australia and the Department of Finance with data from her own research to measure the validity of methods by examining suitability of current requirements. Findings Australian Government records management professionals overwhelmingly feel requirements, organisational culture and behaviour form a barrier to implementing successful records management programs. This paper finds that the Australian Government is buying ten times more digital storage per year than the sum of all of the digital Australian Government records known. This suggests perhaps not all records are recognised. While there will always be more storage than records, the ratio should not be so inflated. Further problems are found with requirements for records management being seen as mostly paper-based and too resource intensive to be of use. 
This research, combined with a contemporary literature review, shows that there is an imbalance with the current methodology and methods and asks the question: Has a methodology (Digital Continuity 2020) been created without suitable and known methods being in place? Research limitations/implications The method for collecting survey data was based on self-reporting, which can lead to limitations in that the population sample may exaggerate their response or demonstrate bias. However, responses to the survey were common enough to eliminate bias. The study is based on the Australian Government; however, findings may translate to other governments. This paper presents findings from the first phase of research of a postdoctoral degree. Not all findings are presented, only those relevant to the topic. Originality/value As the Australian Government moves to become a true digital government, records management is still required to ensure accountability of government actions and decisions. However, while the government transitions to digital, and information stores continue to grow, the question of whether records management has kept up with the rapid pace of digital information flow and expansion does not need to be asked. Instead, the time has come to ask, “What can we do to keep up?”
25

Nguyen, Chinh, Rosemary Stockdale, Helana Scheepers, and Jason Sargent. "Electronic Records Management - An Old Solution to a New Problem." International Journal of Electronic Government Research 10, no. 4 (October 2014): 94–116. http://dx.doi.org/10.4018/ijegr.2014100105.

Full text
Abstract:
The rapid development of technology and interactive nature of Government 2.0 (Gov 2.0) is generating large data sets for Government, resulting in a struggle to control, manage, and extract the right information. Therefore, research into these large data sets (termed Big Data) has become necessary. Governments are now spending significant finances on storing and processing vast amounts of information because of the huge proliferation and complexity of Big Data and a lack of effective records management. On the other hand, there is a method called Electronic Records Management (ERM), for controlling and governing the important data of an organisation. This paper investigates the challenges identified from reviewing the literature for Gov 2.0, Big Data, and ERM in order to develop a better understanding of the application of ERM to Big Data to extract useable information in the context of Gov 2.0. The paper suggests that a key building block in providing useable information to stakeholders could potentially be ERM with its well established governance policies. A framework is constructed to illustrate how ERM can play a role in the context of Gov 2.0. Future research is necessary to address the specific constraints and expectations placed on governments in terms of data retention and use.
26

Joseph, Pauline. "A case study of records management practices in historic motor sport." Records Management Journal 26, no. 3 (November 21, 2016): 314–36. http://dx.doi.org/10.1108/rmj-08-2015-0031.

Full text
Abstract:
Purpose This paper aims to report on empirical research that investigated the records management practices of two motor sport community-based organisations in Australia. Design/methodology/approach This multi-method case study was conducted on the regulator of motor sport, the Confederation of Australian Motor Sport Ltd (CAMS), and one affiliated historic car club, the Vintage Sports Car Club (VSCC), in Western Australia. Data were gathered using an online audit tool and by interviewing selected stakeholders in these organisations about their organisation’s records management practices. Findings The findings confirm that these organisations experience significant information management challenges, including difficulty in capturing, organising, managing, searching, accessing and preserving their records and archives. This highlights their inability to manage records as advocated in the best-practice standard ISO 15489. It reveals the assumption of records management roles by unskilled members of the group and emphasises that community-based organisations require assistance in managing their information assets. Research limitations/implications This research focused on historic car clubs; hence, it did not include other Australian car clubs in motor sport. Although four historical car clubs, one in each Australian state, were invited to participate, only the VSCC participated. This reduced the sample size to only one CAMS-affiliated historical car club. Hence, further research is required to investigate the records management practices of other CAMS-affiliated car clubs in all race disciplines and to confirm whether they experience similar information management challenges. Comments from key informants in this project indicated that this is likely the case. Practical implications The research highlights risks to the motor sport community’s records and archives.
It signals that without leadership by the sport’s governing body, current records and community archives of CAMS and its affiliated car clubs are in danger of being inaccessible, hence lost. Social implications The research highlights the risks in preserving the continuing memory of records and archives in leisure-based community organisations and showcases the threats in preserving its cultural identity and history. Originality/value It is the first study examining records management practices in the serious leisure sector using the motor sport community.
27

Armstrong, Kyle N., Sylvia Clarke, Aimee Linke, Annette Scanlon, Philip Roetman, Jacqui Wilson, Alan T. Hitch, and Steven C. Donnellan. "Citizen science implements the first intensive acoustics-based survey of insectivorous bat species across the Murray–Darling Basin of South Australia." Australian Journal of Zoology 68, no. 6 (2020): 364. http://dx.doi.org/10.1071/zo20051.

Full text
Abstract:
Effective land management and biodiversity conservation policy relies on good records of native species occurrence and habitat association, but for many animal groups these data are inadequate. In the Murray–Darling Basin (MDB), the most environmentally and economically important catchment in Australia, knowledge gaps exist on the occurrence and habitat associations of insectivorous bat species. We relied on the interest and effort of citizen scientists to assist with the most intensive insectivorous bat survey ever undertaken in the MDB region of South Australia. We used an existing network of Natural Resource Management groups to connect interested citizens and build on historical observations of bat species using a fleet of 30 Anabat Swift bat detectors. The survey effort more than doubled the number of bat occurrence records for the area in two years (3000 records; cf. 2693 records between 1890 and 2018; freely available through the Atlas of Living Australia). We used multinomial logistic regression to look at the relationship between three types of environmental covariates: flight space, nearest open water source and vegetation type. There were no differences in species richness among the environmental covariates. The records have been, and will continue to be, used to inform government land management policy, more accurately predict the impact of development proposals on bat populations, and update conservation assessments for microbat species. A social survey tool also showed that participation in the project led to positive behaviours, and planned positive behaviours, for improving bat habitat on private land.
28

Kraleva, Radoslava Stankova, Velin Spasov Kralev, Nina Sinyagina, Petia Koprinkova-Hristova, and Nadejda Bocheva. "Design and Analysis of a Relational Database for Behavioral Experiments Data Processing." International Journal of Online Engineering (iJOE) 14, no. 02 (February 28, 2018): 117. http://dx.doi.org/10.3991/ijoe.v14i02.7988.

Full text
Abstract:
In this paper, the results of a comparative analysis of different approaches to experimental data storage and processing are presented. Several studies related to the problem and some methods for solving it are discussed. Different types of databases, ways of using them and their areas of application are analyzed. For the purposes of the study, a relational database for storing and analyzing specific data from behavioral experiments was designed. The methodology and conditions for conducting the experiments are described. Three indicators were analyzed: the memory required to store the data, the time to load the data from an external file into computer memory, and the iteration time across all records in one cycle. The obtained results show that for storing a large number of records (on the order of tens of millions of rows), either dynamic arrays (stored on external media in binary file format) or an approach based on a local or remote database management system can be used. Regarding data loading time, the fastest approach was the one using dynamic arrays; it significantly outperforms the approaches based on a local or remote database. The results also show that the dynamic array and local dataset approaches iterated much faster across all data records than the remote database approach. The paper concludes with a proposal for further development towards the use of web services.
29

Kaur, Barjinder. "Information Management." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 3, no. 3 (August 5, 2013): 424–27. http://dx.doi.org/10.24297/ijct.v3i3a.2949.

Full text
Abstract:
Information management (IM) is the collection and management of information from one or more sources and the distribution of that information to one or more audiences. Management means the organization of and control over the planning, structure and organization, controlling, processing, evaluating and reporting of information activities in order to meet client objectives and to enable corporate functions in the delivery of information. 'Information' here refers to all types of information of value, whether having their origin inside or outside the organization, including data resources, such as production data; records and files related, for example, to the personnel function; market research data; and competitive intelligence from a wide range of sources. Information management deals with the value, quality, ownership, use and security of information in the context of organizational performance.
30

Wang, K., K. K. W. Yau, and A. H. Lee. "Factors Influencing Hospitalisation of Infants for Recurrent Gastroenteritis in Western Australia." Methods of Information in Medicine 42, no. 03 (2003): 251–54. http://dx.doi.org/10.1055/s-0038-1634357.

Full text
Abstract:
Summary Objective: To determine factors affecting length of hospitalisation of infants for recurrent gastroenteritis using linked data records from the Western Australia heath information system. Methods: A seven-year retrospective cohort study was undertaken on all infants born in Western Australia in 1995 who were admitted for gastroenteritis during their first year of life (n = 519). Linked hospitalisation records were retrieved to derive the outcome measure and other demographic variables for the cohort. Unlike previous studies that focused mainly on a single episode of gastroenteritis, the durations of successive hospitalisations were analysed using a proportional hazards model with correlated frailty to determine the prognostic factors influencing recurrent gastroenteritis. Results: Older children experienced a shorter stay with an increased discharge rate of 1.9% for each month increase in admission age. An additional comorbidity recorded in the hospital discharge summary slowed the adjusted discharge rate by 46.5%. Aboriginal infants were readmitted to hospital more frequently, and had an adjusted hazard ratio of 0.253, implying a much higher risk of prolonged hospitalisation compared to non-Aborigines. Conclusions: The use of linked hospitalisation records has the advantage of providing access to hospital-based population information in the context of medical informatics. The analysis of linked data has enabled the assessment of prognostic factors influencing length of hospitalisations for recurrent gastroenteritis with high statistical power.
31

Joseph, Pauline, and Jenna Hartel. "Visualizing information in the records and archives management (RAM) disciplines." Records Management Journal 27, no. 3 (November 20, 2017): 234–55. http://dx.doi.org/10.1108/rmj-06-2016-0017.

Full text
Abstract:
Purpose This paper aims to explore the concept of information in records and archives management (RAM) from a fresh, visual perspective by using arts-informed methodology and the draw-and-write technique. Design/methodology/approach Students and practitioners of RAM in Australia were asked to answer the question, “what is information?” in a drawing and then to describe the drawing in words. This produced a data set of 255 drawings of information or “iSquares”, for short. Compositional interpretation and a framework of graphic representations by Engelhardt were applied to determine how participants envision information and what the renderings imply for RAM. Findings The images reveal an overwhelming recognition in RAM of the diversity of media formats of information and the hyperconnectivity of information in networked information systems; and illustrate the central place of human beings within these systems. These findings offer striking, accessible illustrations of major concepts in RAM and enable new understandings through the construction of stories. Practical implications There are both pedagogical applications and practical implications of this work for students, practitioners and knowledge workers. The graphical representations of information in this research deepen the understanding of textual definitions of information. The data set of iSquares provides opportunities to create new storyboards to explain information definitions, practices and phenomena in RAM disciplines, and, to explain related concepts such as data, information, knowledge and wisdom hierarchy. Originality/value This is the first study in RAM disciplines to provide visual illustrations of information using graphical image representations.
APA, Harvard, Vancouver, ISO, and other styles
32

Li, Qian Mu, Rui Wang, Jie Yin, and Jun Hou. "The Design of Data Security Synchronization in the Network of Satellite and Ground Security Management Instrument." Key Engineering Materials 439-440 (June 2010): 208–14. http://dx.doi.org/10.4028/www.scientific.net/kem.439-440.208.

Full text
Abstract:
Keywords: network security, data management, data synchronization. The aim of the Network of Satellite and Ground Security Management Instrument (NSGMI) is to increase network availability, improve network performance and control operation costs. After analyzing the shortcomings of the traditional data synchronization mechanism, this paper reconstructs the data processing method to improve the communication ability of NSGMI, and provides a solution to guarantee real-time, reliable operation at the application layer. The new mechanism provides buffer areas and operation flows to satisfy the demands of high-efficiency data processing and strong real-time requests. It also addresses how to size the buffer when direct access changes records.
APA, Harvard, Vancouver, ISO, and other styles
33

Heath, A. M., A. L. Culver, and C. W. Luxton. "Gathering good seismic data from the Otway Basin." Exploration Geophysics 20, no. 2 (1989): 247. http://dx.doi.org/10.1071/eg989247.

Full text
Abstract:
Cultus Petroleum N.L. began exploration in petroleum permit EPP 23 of the offshore Otway Basin in December 1987. The permit was sparsely explored, containing only 2 wells and poor quality seismic data. A regional study was made taking into account the shape of the basin and the characteristics of the major seismic sequences. A prospective trend was recognised, running roughly parallel to the present shelf edge of South Australia. A new seismic survey was orientated over this prospective trend. The parameters were designed to investigate the structural control of the prospects in the basin. To improve productivity during the survey, north-south lines had to be repositioned due to excessive swell noise on the cable. The new line locations were kept in accordance with the structural model. Field displays of the raw 240 channel data gave encouraging results. Processing results showed this survey to be the best quality in the area. An FK filter was designed on the full 240 channel records. Prior to wavelet processing, an instrument dephase was used to remove any influence of the recording system on the phase of the data. Close liaison was kept with the processing centre over the selection of stacking velocities and their relevance to the geological model. DMO was found to greatly improve the resolution of steeply dipping events and is now considered to be part of the standard processing sequence for Otway Basin data. Seismic data of a high enough quality for structural and stratigraphic interpretation can be obtained from this basin.
APA, Harvard, Vancouver, ISO, and other styles
34

Vikström, Antti, Hans Moen, Sanaz Rahimi Moosavi, Tapio Salakoski, and Sanna Salanterä. "Secondary use of electronic health records: Availability aspects in two Nordic countries." Health Information Management Journal 48, no. 3 (December 16, 2018): 144–51. http://dx.doi.org/10.1177/1833358318817473.

Full text
Abstract:
Background: The potential for the secondary use of electronic health records (EHRs) is underused due to restrictions in national legislation. For privacy purposes, legislative restrictions limit the availability and content of EHR data provided to secondary users. These limitations do not encourage healthcare organisations to develop procedures to promote the secondary use of EHRs. Objective: The objective of this study is to identify factors that restrict the secondary use of unstructured EHRs in academic research in Finland and Sweden. Method: A study was conducted to identify these availability-restricting issues that pertain to the academic secondary use of unstructured EHRs. Using semi-structured interviews, 14 domain experts in science, hospital management and business were interviewed to evaluate the efficiency of procedures and technologies that are implemented in secondary use processes. Results: The results demonstrate three aspects that restrict the availability of unstructured EHRs for secondary purposes: (i) the management and (ii) privacy preservation of such data as well as (iii) potential secondary users. Conclusion: Based on these categories, two approaches for the secondary use of unstructured EHRs are identified: the protected processing environment and altered data. Implications: The protected processing environment ensures patient privacy by providing unstructured EHRs for exclusive user groups that have preferred use intentions. Compared to the use of such processing environments, data alteration enables the secondary use of unstructured EHRs for a larger user group with various use intentions but that yield less valuable content.
APA, Harvard, Vancouver, ISO, and other styles
35

Flack, Anna L., Anthony S. Kiem, Tessa R. Vance, Carly R. Tozer, and Jason L. Roberts. "Comparison of published palaeoclimate records suitable for reconstructing annual to sub-decadal hydroclimatic variability in eastern Australia: implications for water resource management and planning." Hydrology and Earth System Sciences 24, no. 12 (November 29, 2020): 5699–712. http://dx.doi.org/10.5194/hess-24-5699-2020.

Full text
Abstract:
Abstract. Knowledge of past, current, and future hydroclimatic risk is of great importance. However, like many other countries, Australia's observed hydroclimate records are at best only ∼ 120 years long (i.e. from ∼ 1900 to the present) but are typically less than ∼ 50 years long. Therefore, recent research has focused on developing longer hydroclimate records based on palaeoclimate information from a variety of different sources. Here we review and compare the insights emerging from 11 published palaeoclimate records that are relevant for annual to sub-decadal hydroclimatic variability in eastern Australia over the last ∼ 1000 years. The sources of palaeoclimate information include ice cores, tree rings, cave deposits, and lake sediment deposits. The published palaeoclimate information was then analysed to determine when (and where) there was agreement (or uncertainty) about the timing of wet and dry epochs in the pre-instrumental period (1000–1899). The occurrence, frequency, duration, and spatial extent of pre-instrumental wet and dry epochs was then compared to wet and dry epochs since 1900. The results show that instrumental records (∼ 1900–present) underestimate (or at least misrepresent) the full range of rainfall variability that has occurred, and is possible, in eastern Australia. Even more disturbing is the suggestion, based on insights from the published palaeoclimate data analysed, that 71 % of the pre-instrumental period appears to have no equivalent in the instrumental period. This implies that the majority of the past 1000 years was unlike anything encountered in the period that informs water infrastructure, planning, and policy in Australia. A case study, using a typical water storage reservoir in eastern Australia, demonstrates that current water resource infrastructure and management strategies would not cope under the range of pre-instrumental conditions that this study suggests has occurred. 
When coupled with projected impacts of climate change and growing demands, these results highlight some major challenges for water resource management and infrastructure. Though our case study location is eastern Australia, these challenges, and the limitations associated with current methods that depend on instrumental records that are too short to realistically characterise interannual to multi-decadal variability, also apply globally.
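The reservoir case study turns on a simple point: a water storage behaves like a running mass balance, so a long dry epoch can empty it even when the long-run average inflow matches demand. A minimal sketch of that behaviour (the function, the failure rule, and all numbers below are illustrative assumptions, not the paper's case-study model):

```python
def simulate_storage(inflows, demand, capacity, initial):
    """Step a simple reservoir mass balance: spill above capacity,
    count a failure whenever demand cannot be met."""
    storage, failures = float(initial), 0
    trace = []
    for inflow in inflows:
        storage = min(storage + inflow - demand, capacity)
        if storage < 0:
            failures += 1      # demand not met this step
            storage = 0.0
        trace.append(storage)
    return trace, failures

# A dry epoch (four low-inflow steps) causes a supply failure even though
# the mean inflow (68/8 = 8.5) exactly equals the per-step demand.
trace, failures = simulate_storage([10, 10, 2, 2, 2, 2, 20, 20],
                                   demand=8.5, capacity=30, initial=20)
```

The point for planning is that sequencing, not just the average, determines storage reliability, which is why short instrumental records that miss long pre-instrumental dry epochs can mislead design.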
APA, Harvard, Vancouver, ISO, and other styles
36

White, Saraya, and Warren Kealy-Bateman. "Primary evidence of seton therapy at Tarban Creek, New South Wales, 1839." Australasian Psychiatry 25, no. 3 (September 27, 2016): 293–96. http://dx.doi.org/10.1177/1039856216671666.

Full text
Abstract:
Objective: We aimed to find and explore the earliest available New South Wales asylum medical records to identify any management or therapeutic data that might be of interest to the psychiatric field. Conclusions: The earliest known existing records of New South Wales asylum data are from Tarban Creek Asylum. After almost two centuries the preserved records allow insight into treatment used in early colonial Australia, including the scarcely remembered seton therapy. This finding highlights the importance of preserving historical records. It also demonstrates the necessity and/or evolving wish within the colony to care for patients with perceived mental health difficulties based on a shared medical culture inherited from techniques used in Britain.
APA, Harvard, Vancouver, ISO, and other styles
37

Sangeeth L R, Silpa, Saji K. Mathew, and Vidyasagar Potdar. "Information Processing view of Electricity Demand Response Systems: A Comparative Study Between India and Australia." Pacific Asia Journal of the Association for Information Systems 12 (June 30, 2020): 27–63. http://dx.doi.org/10.17705/1thci.12402.

Full text
Abstract:
Background: In recent years, demand response (DR) has gained increased attention from utilities, regulators, and market aggregators to meet the growing demand for electricity. The key aspect of a successful DR program is the effective processing of data and information to gain critical insights. This study aims to identify the information processing needs and capacity that interact to improve energy DR effectiveness. To this end, organizational information processing theory (OIPT) is employed to understand the role of Information Systems (IS) resources in achieving desired DR program performance. This study also investigates how information processing for DR systems differs between a developing (India) and a developed (Australia) country. Method: This work adopts a case study methodology to propose a theoretical framework using OIPT for information processing in DR systems. The study further employs comparative case data analyses between Australian and Indian DR initiatives. Results: Our cross-case analysis identifies variables of value creation in designing DR programs: pricing structure for demand-side participation, renewable integration at the supply side, reforms in the regulatory instruments, and emergent technology. This research posits that the degree of information processing capacity mediates the influence of information processing needs on energy DR effectiveness. Further, we develop five propositions on the interaction between task-based information processing needs and capacity, and their influence on DR effectiveness. Conclusions: The study generates insights on the role of IS resources that can help stakeholders in the electricity value chain to take informed and intelligent decisions for improved performance of DR programs.
APA, Harvard, Vancouver, ISO, and other styles
38

Kaddu, Sarah, Francis Ssekitto, and Moreen Matsiko Kyarimpa. "Records Management Practices in Uganda's Public Pension Office." University of Dar es Salaam Library Journal 17, no. 2 (January 18, 2023): 17–31. http://dx.doi.org/10.4314/udslj.v17i2.3.

Full text
Abstract:
The purpose of this study was to assess the records management practices in Uganda's public pension office. The study's objectives were: to find out the categories of records managed in Uganda's public pension office; to examine the records management practices in Uganda's public pension office; to find out the challenges faced in the management of records in Uganda's public pension office; and to propose strategies to improve the management of records in Uganda's public pension office. The study adopted a mixed methods research design. It was conducted at the Ministry of Public Service, specifically in the Compensation Department and the Department of Records and Information Management. The study population was composed of thirty (30) staff working in the two departments, all of whom were included in the sample, given the small population. Data were collected through semi-structured interviews, self-administered questionnaires and a document review. The findings revealed that personnel records were mostly kept and that the records management practices followed were guided by the Basic Registry Procedures Manual, a manual specifically developed for registries at the Ministry of Public Service. Despite having a records manual in place, some staff had poor attitudes towards records management due to poor remuneration, while others had no or limited training in records management. Other challenges faced included inadequate equipment, non-streamlined records management practices and the lack of a centre for benchmarking its practices as stipulated by the National Records and Information Management Policy framework. It is expected that the findings of this study will inform policymakers, the government of Uganda and the Ministry of Public Service on the key issues to solve in a bid to strengthen records management in the public pension office and to enhance the process of pension processing, which is usually delayed by the lack of records.
APA, Harvard, Vancouver, ISO, and other styles
39

Sahrana, Oka, Safrizal Safrizal, Arfah Husna, and Dian Fera. "Process Evaluation on Medical Record Reporting and Information Usage Iskandar Muda Hospital Nagan Raya Regency." J-Kesmas: Jurnal Fakultas Kesehatan Masyarakat (The Indonesian Journal of Public Health) 8, no. 2 (October 22, 2021): 29. http://dx.doi.org/10.35308/j-kesmas.v8i2.3669.

Full text
Abstract:
Medical records are all records and documents about the patient's identity, examinations, treatments, procedures and other services provided to the patient. Medical record reporting at Iskandar Muda hospital still does not follow standards. This is due to officers' lack of discipline in filling out medical records, limited medical records training among officers and related health workers, and the absence of a Hospital Management Information System. The purpose of the study was to evaluate the reporting of medical records at Sultan Iskandar Muda hospital. This study uses qualitative research. The results showed that Sultan Iskandar Muda Hospital has been processing medical record data. The reporting procedures that did not meet the guidelines were the completion of medical resumes and the daily inpatient census. Procedures that did follow the guidelines were the recapitulation of outpatient visits, the reporting of hospital activities, and the morbidity reports for inpatients and outpatients. The medical records unit has produced internal and external reports following the guidelines, and middle-level hospital management has fully used medical record information. It can be concluded that there are some obstacles in processing medical record data: the report-making procedure does not follow the guidelines, although medical record information has been fully utilized.
APA, Harvard, Vancouver, ISO, and other styles
40

Robinson, Jo, Katrina Witt, Michelle Lamblin, Matthew J. Spittal, Greg Carter, Karin Verspoor, Andrew Page, et al. "Development of a Self-Harm Monitoring System for Victoria." International Journal of Environmental Research and Public Health 17, no. 24 (December 15, 2020): 9385. http://dx.doi.org/10.3390/ijerph17249385.

Full text
Abstract:
The prevention of suicide and suicide-related behaviour are key policy priorities in Australia and internationally. The World Health Organization has recommended that member states develop self-harm surveillance systems as part of their suicide prevention efforts. This is also a priority under Australia’s Fifth National Mental Health and Suicide Prevention Plan. The aim of this paper is to describe the development of a state-based self-harm monitoring system in Victoria, Australia. In this system, data on all self-harm presentations are collected from eight hospital emergency departments in Victoria. A natural language processing classifier that uses machine learning to identify episodes of self-harm is currently being developed. This uses the free-text triage case notes, together with certain structured data fields, contained within the metadata of the incoming records. Post-processing is undertaken to identify primary mechanism of injury, substances consumed (including alcohol, illicit drugs and pharmaceutical preparations) and presence of psychiatric disorders. This system will ultimately leverage routinely collected data in combination with advanced artificial intelligence methods to support robust community-wide monitoring of self-harm. Once fully operational, this system will provide accurate and timely information on all presentations to participating emergency departments for self-harm, thereby providing a useful indicator for Australia’s suicide prevention efforts.
APA, Harvard, Vancouver, ISO, and other styles
41

Brenton, Peter. "BioCollect - A modern cloud application for standards-base field data recording." Biodiversity Information Science and Standards 2 (May 17, 2018): e25439. http://dx.doi.org/10.3897/biss.2.25439.

Full text
Abstract:
Many organisations running citizen science projects don’t have access to or the knowledge or means to develop databases and apps for their projects. Some are also concerned about long-term data management and also how to make the data that they collect accessible and impactful in terms of scientific research, policy and management outcomes. To solve these issues, the Atlas of Living Australia (ALA) has developed BioCollect. BioCollect is a sophisticated, yet simple to use tool which has been built in collaboration with hundreds of real users who are actively involved in field data capture. It has been developed to support the needs of scientists, ecologists, citizen scientists and natural resource managers in the field-collection and management of biodiversity, ecological and natural resource management (NRM) data. BioCollect is a cloud-based facility hosted by the ALA and also includes associated mobile apps for offline data collection in the field. BioCollect provides form-based structured data collection for: Ad-hoc survey-based records; Method-based systematic structured surveys; and Activity-based projects such as natural resource management intervention projects (eg. revegetation, site restoration, seed collection, weed and pest management, etc.). This session will cover how BioCollect is being used for citizen science in Australia and some of the features of the tool.
APA, Harvard, Vancouver, ISO, and other styles
42

Vennila, M., and C. Murthy. "Procurement and processing management of ragi in Karnataka." INTERNATIONAL JOURNAL OF AGRICULTURAL SCIENCES 18, no. 1 (January 15, 2022): 219–24. http://dx.doi.org/10.15740/has/ijas/18.1/219-224.

Full text
Abstract:
Ragi is an important food crop which can be cultivated under adverse soil and climatic conditions. The present study was conducted on procurement and processing management of ragi in Karnataka. The study was carried out based on primary data, which were collected from ragi processors with the help of a well-structured questionnaire, and secondary data from the records of processing units. Growth rate analysis and a budgeting technique were applied. The procurement of raw ragi increased over the years at growth rates of 3.94 per cent in quantity and 11.17 per cent in value, respectively. The capacity utilization of the ragi processing unit was 62.00 per cent, with an annual quantity of 61.26 MT of ragi processed against the total annual installed capacity of 98.80 MT. The results revealed that the net present value of the unit was Rs. 10.75 lakhs at the end of the economic life of the project. The total processing cost for ragi was recorded as Rs. 34,461 per ton. Of the total processing cost, the total variable cost contributed a share of 85.35 per cent, worth Rs. 29,412 per ton, and the fixed cost contributed a share of 14.65 per cent, worth Rs. 5,049 per ton. The total cost incurred on value addition of ragi flour was Rs. 36,905 per ton. Out of 1000 kg of ragi, 980 kg of ragi flour was obtained, and the value of the processed product was Rs. 55 per kg. The sales price of the ragi flour was Rs. 53,900, and the net profit obtained by selling the processed product was Rs. 16,995.
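The cost figures reported in the abstract can be cross-checked with a few lines of arithmetic. All numbers below are taken from the abstract; only the simple breakdown logic is ours:

```python
# Per-ton processing costs reported in the abstract (Rs.)
variable_cost = 29_412
fixed_cost = 5_049
total_cost = variable_cost + fixed_cost      # 34,461, matching the reported total

variable_share = round(variable_cost / total_cost * 100, 2)  # 85.35 per cent
fixed_share = round(fixed_cost / total_cost * 100, 2)        # 14.65 per cent

# Value addition: 980 kg of flour from 1000 kg of ragi, sold at Rs. 55/kg
sales_price = 980 * 55                        # Rs. 53,900
net_profit = sales_price - 36_905             # Rs. 16,995
```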
APA, Harvard, Vancouver, ISO, and other styles
43

Panwar, Arvind, Vishal Bhatnagar, Manju Khari, Ahmad Waleed Salehi, and Gaurav Gupta. "A Blockchain Framework to Secure Personal Health Record (PHR) in IBM Cloud-Based Data Lake." Computational Intelligence and Neuroscience 2022 (April 12, 2022): 1–19. http://dx.doi.org/10.1155/2022/3045107.

Full text
Abstract:
Today's health systems are essential but complex and overcrowded. These hurdles can be reduced through improved health record management and blockchain technology. These technologies can handle medical data to provide security by monitoring and maintaining patient records. The processing of medical data and patient records is essential to analyze previously prescribed medicines and to understand the severity of diseases. Blockchain technology can improve the security, performance, and transparency of sharing the medical records of the current healthcare system. This paper proposed a novel framework for personal health record (PHR) management using IBM cloud data lake and a blockchain platform for an effective healthcare management process. The problems in blockchain-based healthcare management systems can be minimized with the proposed technique. Because the traditional blockchain system typically suffers on latency, the proposed technique focuses on improving latency and throughput. The result of the proposed system is calculated based on various metrics, such as F1 score, recall, and confusion matrices. The proposed work scored high accuracy and provided better results than existing techniques.
APA, Harvard, Vancouver, ISO, and other styles
44

Sheketa, Vasyl, Mykola Pasieka, Svitlana Chupakhina, Nadiia Pasieka, Uliana Ketsyk-Zinchenko, Yulia Romanyshyn, and Olha Yanyshyn. "Information System for Screening and Automation of Document Management in Oncological Clinics." Open Bioinformatics Journal 14, no. 1 (November 19, 2021): 39–50. http://dx.doi.org/10.2174/1875036202114010039.

Full text
Abstract:
Introduction: Automation of business documentation workflow in medical practice substantially accelerates and improves the process and results in better service development. Methods: Efficient use of databases, data banks, and document-oriented storage (warehouses data), including dual-purpose databases, enables performing specific actions, such as adding records, introducing changes into them, performing an either ordinary or analytical search of data, as well as their efficient processing. With the focus on achieving interaction between the distributed and heterogeneous applications and the devices belonging to the independent organizations, the specialized medical client application has been developed, as a result of which the quantity and quality of information streams of data, which can be essential for effective treatment of patients with breast cancer, have increased. Results: The application has been developed, allowing automating the management of patient records, taking into account the needs of medical staff, especially in managing patients’ appointments and creating patient’s medical records in accordance with the international standards currently in force. This work is the basis for the smoother integration of medical records and genomics data to achieve better prevention, diagnosis, prediction, and treatment of breast cancer (oncology). Conclusion: Since relevant standards upgrade the functioning of health care information technology and the quality and safety of patient’s care, we have accomplished the global architectural scheme of the specific medical automation system through harmonizing the medical services specified by the HL7 international.
APA, Harvard, Vancouver, ISO, and other styles
45

Spencer, G. A., D. F. Pridmore, and D. J. Isles. "Data integration of exploration data using colour space on an image processor." Exploration Geophysics 20, no. 2 (1989): 31. http://dx.doi.org/10.1071/eg989031.

Full text
Abstract:
Image processing in exploration has rapidly evolved into the field of data integration, whereby independent data sets which coincide in space are displayed concurrently. Interrelationships between data sets which may be crucial to exploration can thus be identified much more effectively than with conventional hard copy overlays. The use of perceptual colour space; hue, saturation and luminosity (HSL) provides an effective means for integrating raster data sets, as illustrated with the multi-spectral scanner and airborne geophysical data from the Kambalda area in Western Australia. The integration process must also cater for data in vector format, which is more appropriate for geological, topographic and cultural information, but to date, image processing systems have poorly captured and managed such data. As a consequence, the merging of vector data management software such as GIS (geographic information system) with existing advanced image enhancement packages is an area of active development in the exploration industry.
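The HSL integration idea described above can be sketched per pixel: each co-registered data set drives one perceptual channel, and the resulting triple is converted to RGB for display. A minimal illustration using Python's standard colorsys module (the channel assignments are our assumption, not those used in the paper; note that colorsys orders the channels as HLS):

```python
import colorsys

def fuse_pixel(geophys, spectral, elevation):
    """Map three co-registered, 0-1 normalised data values onto hue,
    saturation and lightness, returning an RGB triple for display."""
    hue = geophys            # e.g. an airborne geophysical channel drives colour
    saturation = spectral    # a scanner band drives colour purity
    lightness = elevation    # topography drives brightness
    return colorsys.hls_to_rgb(hue, lightness, saturation)

r, g, b = fuse_pixel(0.0, 1.0, 0.5)  # fully saturated red at mid-lightness
```

Because the three channels are perceptually independent, a reader can still separate "which data set" from "how strong" in the fused image, which is the advantage over simple overlays.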
APA, Harvard, Vancouver, ISO, and other styles
46

Yang, Zhen, Chloé Pou-Prom, Ashley Jones, Michaelia Banning, David Dai, Muhammad Mamdani, Jiwon Oh, and Tony Antoniou. "Assessment of Natural Language Processing Methods for Ascertaining the Expanded Disability Status Scale Score From the Electronic Health Records of Patients With Multiple Sclerosis: Algorithm Development and Validation Study." JMIR Medical Informatics 10, no. 1 (January 12, 2022): e25157. http://dx.doi.org/10.2196/25157.

Full text
Abstract:
Background The Expanded Disability Status Scale (EDSS) score is a widely used measure to monitor disability progression in people with multiple sclerosis (MS). However, extracting and deriving the EDSS score from unstructured electronic health records can be time-consuming. Objective We aimed to compare rule-based and deep learning natural language processing algorithms for detecting and predicting the total EDSS score and EDSS functional system subscores from the electronic health records of patients with MS. Methods We studied 17,452 electronic health records of 4906 MS patients followed at one of Canada’s largest MS clinics between June 2015 and July 2019. We randomly divided the records into training (80%) and test (20%) data sets, and compared the performance characteristics of 3 natural language processing models. First, we applied a rule-based approach, extracting the EDSS score from sentences containing the keyword “EDSS.” Next, we trained a convolutional neural network (CNN) model to predict the 19 half-step increments of the EDSS score. Finally, we used a combined rule-based–CNN model. For each approach, we determined the accuracy, precision, recall, and F-score compared with the reference standard, which was manually labeled EDSS scores in the clinic database. Results Overall, the combined keyword-CNN model demonstrated the best performance, with accuracy, precision, recall, and an F-score of 0.90, 0.83, 0.83, and 0.83 respectively. Respective figures for the rule-based and CNN models individually were 0.57, 0.91, 0.65, and 0.70, and 0.86, 0.70, 0.70, and 0.70. Because of missing data, the model performance for EDSS subscores was lower than that for the total EDSS score. Performance improved when considering notes with known values of the EDSS subscores. Conclusions A combined keyword-CNN natural language processing model can extract and accurately predict EDSS scores from patient records. 
This approach can be automated for efficient information extraction in clinical and research settings.
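The rule-based step described in the abstract, extracting the score from sentences containing the keyword "EDSS", can be sketched as a regular expression plus a half-step validity check. The pattern and validation below are illustrative assumptions, not the authors' code:

```python
import re

# Find "EDSS" followed (within a few characters) by a numeric score.
EDSS_PATTERN = re.compile(r"\bEDSS\b[^\d]{0,10}(\d{1,2}(?:\.\d)?)", re.IGNORECASE)

def extract_edss(note: str):
    """Return the EDSS score mentioned in a clinical note, or None."""
    match = EDSS_PATTERN.search(note)
    if not match:
        return None
    score = float(match.group(1))
    # EDSS is defined on 0.0-10.0 in half-step increments.
    if 0.0 <= score <= 10.0 and score * 2 == int(score * 2):
        return score
    return None

print(extract_edss("Stable course, EDSS today 3.5"))  # 3.5
```

A rule like this is precise when the keyword is present but misses scores phrased without it, which is consistent with the high precision but lower recall reported for the rule-based model.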
APA, Harvard, Vancouver, ISO, and other styles
47

Churruca, Kate, Brian Draper, and Rebecca Mitchell. "Varying impact of co-morbid conditions on self-harm resulting in mortality in Australia." Health Information Management Journal 47, no. 1 (December 29, 2016): 28–37. http://dx.doi.org/10.1177/1833358316686799.

Full text
Abstract:
Background: Research has associated some chronic conditions with self-harm and suicide. Quantifying such a relationship in mortality data relies on accurate death records and adequate techniques for identifying these conditions. Objective: This study aimed to quantify the impact of identification methods for co-morbid conditions on suicides in individuals aged 30 years and older in Australia and examined differences by gender. Method: A retrospective examination of mortality records in the National Coronial Information System (NCIS) was conducted. Two different methods for identifying co-morbidities were compared: International Statistical Classification of Diseases and Related Health Problems, 10th Revision (ICD-10) coded data, which are provided to the NCIS by the Australian Bureau of Statistics, and free-text searches of Medical Cause of Death fields. Descriptive statistics and χ2 tests were used to compare the methods for identifying co-morbidities and look at differences by gender. Results: Results showed inconsistencies between ICD-10 coded and coronial reports in the identification of suicide and chronic conditions, particularly by type (physical or mental). There were also significant differences in the proportion of co-morbid conditions by gender. Conclusion: While ICD-10 coded mortality data more comprehensively identified co-morbidities, discrepancies in the identification of suicide and co-morbid conditions in both systems require further investigation to determine their nature (linkage errors, human subjectivity) and address them. Furthermore, due to the prescriptive coding procedures, the extent to which medico-legal databases may be used to explore potential and previously unrecognised associations between chronic conditions and self-harm deaths remains limited.
APA, Harvard, Vancouver, ISO, and other styles
48

Rendle, Jessica, Bethany Jackson, Stephen Vander Hoorn, Lian Yeap, Kristin Warren, Rebecca Donaldson, Samantha J. Ward, et al. "A Retrospective Study of Macropod Progressive Periodontal Disease (“Lumpy Jaw”) in Captive Macropods across Australia and Europe: Using Data from the Past to Inform Future Macropod Management." Animals 10, no. 11 (October 23, 2020): 1954. http://dx.doi.org/10.3390/ani10111954.

Full text
Abstract:
Macropod Progressive Periodontal Disease (MPPD) is a well-recognised disease that causes high morbidity and mortality in captive macropods worldwide. Epidemiological data on MPPD are limited, although multiple risk factors associated with a captive environment appear to contribute to the development of clinical disease. The identification of risk factors associated with MPPD would assist with the development of preventive management strategies, potentially reducing mortality. Veterinary and husbandry records from eight institutions across Australia and Europe were analysed in a retrospective cohort study (1995 to 2016), examining risk factors for the development of MPPD. A review of records for 2759 macropods found incidence rates (IR) and risk of infection differed between geographic regions and individual institutions. The risk of developing MPPD increased with age, particularly for macropods >10 years (Australia Incidence Rate Ratio (IRR) 7.63, p < 0.001; Europe IRR 7.38, p < 0.001). Prognosis was typically poor, with 62.5% mortality reported for Australian and European regions combined. Practical recommendations to reduce disease risk have been developed, which will assist zoos in providing optimal long-term health management for captive macropods and, subsequently, have a positive impact on both the welfare and conservation of macropods housed in zoos globally.
APA, Harvard, Vancouver, ISO, and other styles
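The abstract above reports incidence rate ratios (IRRs) by age group. A hedged sketch of how such a ratio is computed from case counts and time at risk follows; the figures are illustrative only and are not the study's data.

```python
# Hypothetical sketch of an incidence rate ratio (IRR) calculation:
# the rate of cases per unit of animal-time in an exposed group,
# divided by the same rate in a reference group.

def incidence_rate_ratio(cases_exposed, time_exposed, cases_ref, time_ref):
    """IRR = (cases_exposed / time_exposed) / (cases_ref / time_ref)."""
    if min(time_exposed, time_ref) <= 0 or cases_ref == 0:
        raise ValueError("times must be positive and reference cases non-zero")
    return (cases_exposed / time_exposed) / (cases_ref / time_ref)

# e.g. 30 cases over 400 animal-years (older group) vs
#      10 cases over 1000 animal-years (reference group)
irr = incidence_rate_ratio(30, 400, 10, 1000)
```

An IRR well above 1, as in the study's >10 years age group, indicates a substantially elevated disease rate relative to the reference group.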
49

Leslie, Heather. "Commentary: the patient's memory stick may complement electronic health records." Australian Health Review 29, no. 4 (2005): 401. http://dx.doi.org/10.1071/ah050401.

Full text
Abstract:
THE SITUATION DESCRIBED by Stevens1 in the foregoing article is similar to that navigated by thousands of individuals in hospitals around Australia each day. Stevens has been able to identify gaps in communication, processes and timely availability of pertinent information which potentially put her health at risk. There is little doubt that her call for 'legible and enduring record systems accessible by appropriate people' (page 400) would be supported by most of the general community. Health information management is hugely complex, with large numbers of concepts and high rates of clinical knowledge change. Electronic health records (EHRs) are definitely not simple concepts that are solved by storing information in a relational database for use in a single organisational silo, but require the capture of the full breadth of health information in a manner that can be easily stored, retrieved in varying contexts, and searched. Then there is the additional and unique requirement of sharing this same information with a range of health care providers with differing foci, requirements, technical tools and term-sets. When you add in some of the other more lateral requirements such as medico-legal accountability, pooling data for public health research, and privacy, consent and authorisation for sharing sensitive health information, it becomes increasingly evident that health data management has no real equivalent in other industries. In order for shareable electronic health records to become ubiquitous, there are numerous building blocks that need to be in place: appropriate levels of funding, legislative changes, consensus on a range of standards, stakeholder engagement, implementation of massive change management programs and so on, as outlined by Grain.2 Australia's solution is the HealthConnect program, a joint Commonwealth and state government initiative, which is gradually identifying the required pieces and laying them out in a systematic way to solve the e-health system puzzle.
APA, Harvard, Vancouver, ISO, and other styles
50

Cumbane, Silvino Pedro, and Gyozo Gidófalvi. "Review of Big Data and Processing Frameworks for Disaster Response Applications." ISPRS International Journal of Geo-Information 8, no. 9 (September 3, 2019): 387. http://dx.doi.org/10.3390/ijgi8090387.

Full text
Abstract:
Natural hazards result in devastating losses to human life, environmental assets, and personal, regional, and national economies. The availability of different kinds of big data, such as satellite imagery, Global Positioning System (GPS) traces, mobile Call Detail Records (CDRs), and social media posts, in conjunction with advances in data analytic techniques (e.g., data mining and big data processing, machine learning and deep learning), can facilitate the extraction of geospatial information that is critical for rapid and effective disaster response. However, developing disaster response systems usually requires the integration of data from different sources (streaming data sources and data sources at rest) with different characteristics and types, which consequently have different processing needs. Deciding which processing framework to use for a specific big data source and task is usually a challenge for researchers from the disaster management field. Therefore, this paper contributes in four aspects. Firstly, potential big data sources are described and characterized. Secondly, the big data processing frameworks are characterized and grouped based on the sources of data they handle. Then, a short description of each big data processing framework is provided, and a comparison of the processing frameworks in each group is carried out considering the main aspects such as computing cluster architecture, data flow, data processing model, fault tolerance, scalability, latency, back-pressure mechanism, programming languages, and support for machine learning libraries, which are related to specific processing needs. Finally, a link between big data and processing frameworks is established, based on the processing provisioning for essential tasks in the response phase of disaster management.
APA, Harvard, Vancouver, ISO, and other styles
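The review above groups processing frameworks by the kinds of data sources they handle (streaming versus at-rest). The toy sketch below illustrates that grouping idea only; the framework names are common examples chosen for illustration, not the paper's exhaustive classification.

```python
# Toy mapping from a data source's arrival pattern to candidate
# processing frameworks. Names are illustrative examples only.
FRAMEWORKS = {
    "batch": ["Hadoop MapReduce", "Spark (batch)"],    # data at rest
    "stream": ["Flink", "Spark Streaming", "Storm"],   # streaming sources
}

def candidate_frameworks(source_is_streaming):
    """Return candidate frameworks for a streaming or at-rest source."""
    return FRAMEWORKS["stream" if source_is_streaming else "batch"]

# Streaming sources such as CDRs or social media posts
streaming_options = candidate_frameworks(True)
```

In practice the choice within a group would then turn on the comparison criteria the paper lists, such as latency, fault tolerance, and back-pressure support.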