Journal articles on the topic 'Multiruolo'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 22 journal articles for your research on the topic 'Multiruolo.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1. Walker, Brandon S., Lauren N. Pearson, and Robert L. Schmidt. "An Analysis of Multirules for Monitoring Assay Quality Control." Laboratory Medicine 51, no. 1 (June 27, 2019): 94–98. http://dx.doi.org/10.1093/labmed/lmz038.

Abstract:
Background: Multirules are often employed to monitor quality control (QC). The performance of multirules is usually determined by simulation and is difficult to predict. Previous studies have not provided computer code that would enable one to experiment with multirules. It would be helpful for analysts to have computer code to analyze rule performance. Objective: To provide code to calculate power curves and to investigate certain properties of multirule QC. Methods: We developed computer code in the R language to simulate multirule performance. Using simulation, we studied the incremental performance of each rule and determined the average run length and time to signal. Results: We provide R code for simulating multirule performance. We also provide a Microsoft Excel spreadsheet with a tabulation of results that can be used to create power curves. We found that the R4S and 10x rules add very little power to a multirule set designed to detect shifts in the mean. Conclusion: QC analysts should consider using a limited-rule set.
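
The paper provides its own R code; purely as a hedged illustration of the kind of simulation it describes, the Python sketch below estimates points on a power curve for a generic Westgard-style multirule (1-3s/2-2s/R-4s/4-1s/10-x with two controls per run). The rule set, window sizes and run structure are generic textbook choices, not the authors' implementation.

    import random

    def multirule_reject(values):
        # Westgard-style checks on standardized control values (newest last);
        # the current run is assumed to contribute the last two values.
        z = values[-2:]
        if any(abs(v) > 3 for v in z):                        # 1-3s
            return True
        if all(v > 2 for v in z) or all(v < -2 for v in z):   # 2-2s within the run
            return True
        if abs(z[0] - z[1]) > 4:                              # R-4s (within-run range)
            return True
        last4, last10 = values[-4:], values[-10:]
        if len(last4) == 4 and (all(v > 1 for v in last4) or all(v < -1 for v in last4)):
            return True                                       # 4-1s across runs
        if len(last10) == 10 and (all(v > 0 for v in last10) or all(v < 0 for v in last10)):
            return True                                       # 10-x across runs
        return False

    def power(shift, trials=20000, in_control_runs=10):
        # P(reject the first run after a systematic shift of `shift` SD appears)
        hits = 0
        for _ in range(trials):
            history = [random.gauss(0, 1) for _ in range(2 * in_control_runs)]
            history += [random.gauss(shift, 1) for _ in range(2)]
            hits += multirule_reject(history)
        return hits / trials

    for s in (0, 1, 2, 3, 4):
        print(f"shift = {s} SD -> P(reject) ~ {power(s):.3f}")
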
2. Carey, R. Neill. "Implementation of Multirule Quality Control Procedures." Laboratory Medicine 20, no. 6 (June 1, 1989): 393–99. http://dx.doi.org/10.1093/labmed/20.6.393.

3. Blum, A. S. "Computer evaluation of statistical procedures, and a new quality-control statistical procedure." Clinical Chemistry 31, no. 2 (February 1, 1985): 206–12. http://dx.doi.org/10.1093/clinchem/31.2.206.

Abstract:
I describe a program for definitive comparison of different quality-control statistical procedures. A microcomputer simulates quality-control results generated by repetitive analytical runs. It applies various statistical rules to each result, tabulating rule breaks to evaluate rules as routinely applied by the analyst. The process repeats with increasing amounts of random and systematic error. Rate of false rejection and true error detection for currently popular statistical procedures were comparatively evaluated together with a new multirule procedure described here. The nature of the analyst's response to out-of-control signals was also evaluated. A single-rule protocol that is as effective as the multirule protocol of Westgard et al. (Clin Chem 27:493, 1981) is reported.
4. Payra, Swagata, and Manju Mohan. "Multirule Based Diagnostic Approach for the Fog Predictions Using WRF Modelling Tool." Advances in Meteorology 2014 (2014): 1–11. http://dx.doi.org/10.1155/2014/456065.

Abstract:
The prediction of fog onset remains difficult despite the progress in numerical weather prediction. It is a complex process and requires adequate representation of the local perturbations in weather prediction models. It mainly depends upon microphysical and mesoscale processes that act within the boundary layer. This study utilizes a multirule-based diagnostic (MRD) approach using postprocessing of the model simulations for fog predictions. The empiricism involved in this approach is mainly to bridge the gap between mesoscale and microscale variables, which are related to the mechanism of fog formation. Fog occurrence is a common phenomenon during the winter season over Delhi, India, with the passage of western disturbances across the northwestern part of the country accompanied by a significant amount of moisture. This study implements the above-cited approach for the prediction of occurrences of fog and its onset time over Delhi. For this purpose, a high-resolution Weather Research and Forecasting (WRF) model is used for fog simulations. The study involves depiction of model validation and postprocessing of the model simulations for the MRD approach and its subsequent application to fog predictions. Through this approach, the model identified foggy and nonfoggy days successfully 94% of the time. Further, the onset of fog events is well captured within an accuracy of 30–90 minutes. This study demonstrates that the multirule-based postprocessing approach is a useful and highly promising tool in improving fog predictions.
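
The specific MRD rules are not reproduced here; the toy Python sketch below only illustrates the general idea of combining threshold rules on post-processed model variables to issue a fog/no-fog diagnosis. All variable names and cut-offs are hypothetical assumptions, not the thresholds used in the paper.

    # Illustrative only: a toy multirule diagnostic on post-processed NWP output.
    def fog_likely(rh2m_pct, wind10m_ms, cooling_rate_k_per_h, cloud_cover_frac):
        rules = [
            rh2m_pct >= 95,                 # near-saturated surface air
            wind10m_ms <= 3.0,              # light winds favour radiation fog
            cooling_rate_k_per_h <= -0.5,   # sustained near-surface cooling
            cloud_cover_frac <= 0.4,        # mostly clear skies
        ]
        return sum(rules) >= 3              # declare fog if most rules fire

    print(fog_likely(97, 1.8, -0.8, 0.2))   # True for a typical radiation-fog night
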
5. Lunetzky, Ellen S., and George S. Cembrowski. "Performance Characteristics of Bull’s Multirule Algorithm for the Quality Control of Multichannel Hematology Analyzers." American Journal of Clinical Pathology 88, no. 5 (November 1, 1987): 634–38. http://dx.doi.org/10.1093/ajcp/88.5.634.

6. Eggert, Arthur A., James O. Westgard, Patricia L. Barry, and Kenneth A. Emmerich. "Implementation of a multirule, multistage quality control program in a clinical laboratory computer system." Journal of Medical Systems 11, no. 6 (December 1987): 391–411. http://dx.doi.org/10.1007/bf00993007.

7. Parvin, C. A. "Comparing the Power of Quality-Control Rules to Detect Persistent Systematic Error." Clinical Chemistry 38, no. 3 (March 1, 1992): 358–63. http://dx.doi.org/10.1093/clinchem/38.3.358.

Abstract:
A simulation approach that allows direct estimation of the power of a quality-control rule to detect error that persists until detection is used to compare and evaluate the error detection capabilities of a group of quality-control rules. Two persistent error situations are considered: a constant shift and a linear trend in the quality-control mean. A recently proposed "moving slope" quality-control test for the detection of linear trends is shown to have poor error detection characteristics. A multimean quality-control rule is introduced to illustrate the strategy underlying multirule procedures, which is to increase power without sacrificing response rate. This strategy is shown to provide superior error detection capability when compared with other rules evaluated under both error situations.
8. Cembrowski, George S., Carol C. Patrick, and Cynthia A. Sentigar. "Use of a Multirule Control Chart for the Quality Control of PT and aPTT Analyses." Laboratory Medicine 20, no. 6 (June 1, 1989): 418–21. http://dx.doi.org/10.1093/labmed/20.6.418.

9. Kim, Seungwoo, Youngwan Cho, and Mignon Park. "A multirule-base controller using the robust property of a fuzzy controller and its design method." IEEE Transactions on Fuzzy Systems 4, no. 3 (1996): 315–27. http://dx.doi.org/10.1109/91.531773.

10. Parvin, C. A. "New insight into the comparative power of quality-control rules that use control observations within a single analytical run." Clinical Chemistry 39, no. 3 (March 1, 1993): 440–47. http://dx.doi.org/10.1093/clinchem/39.3.440.

Abstract:
The error detection characteristics of quality-control (QC) rules that use control observations within a single analytical run are investigated. Unlike the evaluation of QC rules that span multiple analytical runs, most of the fundamental results regarding the performance of QC rules applied within a single analytical run can be obtained from statistical theory, without the need for simulation studies. The case of two control observations per run is investigated for ease of graphical display, but the conclusions can be extended to more than two control observations per run. Results are summarized in a graphical format that offers many interesting insights into the relations among the various QC rules. The graphs provide heuristic support to the theoretical conclusions that no QC rule is best under all error conditions, but the multirule that combines the mean rule and a within-run standard deviation rule offers an attractive compromise.
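
As a small worked illustration of that point (generic rules with n = 2 controls per run, not the paper's full analysis), the false-rejection probabilities of two common within-run rules follow directly from the standard normal CDF:

    from math import erf, sqrt

    def phi(x):                      # standard normal CDF
        return 0.5 * (1 + erf(x / sqrt(2)))

    p_tail = 1 - phi(2)              # P(one control above +2 SD)  ~ 0.0228
    p_out = 2 * p_tail               # P(outside +/-2 SD)          ~ 0.0455

    p_12s = 1 - (1 - p_out) ** 2     # 1-2s, n = 2: either control out   ~ 0.089
    p_22s = 2 * p_tail ** 2          # 2-2s, n = 2: both out, same side  ~ 0.001

    print(f"false rejection, 1-2s (n=2): {p_12s:.3f}")
    print(f"false rejection, 2-2s (n=2): {p_22s:.4f}")
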
11. Bishop, J., and A. B. Nix. "Comparison of quality-control rules used in clinical chemistry laboratories." Clinical Chemistry 39, no. 8 (August 1, 1993): 1638–49. http://dx.doi.org/10.1093/clinchem/39.8.1638.

Abstract:
Numerous papers have been written to show which combinations of Shewhart-type quality-control charts are optimal for detecting systematic shifts in the mean response of a process, increases in the random error of a process, and linear drift effects in the mean response across the assay batch. One paper by Westgard et al. (Clin Chem 1977;23:1857-67) especially seems to have attracted the attention of users. Here we derive detailed results that enable the characteristics of the various Shewhart-type control schemes, including the multirule scheme (Clin Chem 1981;27:493-501), to be calculated and show that a fundamental formula proposed by Westgard et al. in the earlier paper is in error, although their derived results are not seriously wrong. We also show that, from a practical point of view, a suitably chosen Cusum scheme is near optimal for all the types and combinations of errors discussed, thereby removing the selection problem for the user.
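
For readers unfamiliar with Cusum charts, a minimal sketch of a tabular (two one-sided sums) scheme of the general kind discussed is shown below; the reference value k and decision limit h are illustrative defaults, not the parameters derived in the paper.

    import random

    def cusum_signal(z_values, k=0.5, h=4.0):
        # Return the 1-based index of the first Cusum signal, or None.
        hi = lo = 0.0
        for i, z in enumerate(z_values, start=1):
            hi = max(0.0, hi + z - k)    # accumulates evidence of an upward shift
            lo = max(0.0, lo - z - k)    # accumulates evidence of a downward shift
            if hi > h or lo > h:
                return i
        return None

    # Example: an in-control stretch followed by a +1 SD shift in the mean.
    data = [random.gauss(0, 1) for _ in range(20)] + [random.gauss(1, 1) for _ in range(30)]
    print(cusum_signal(data))
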
12. Olafsdottir, E., J. O. Westgard, S. S. Ehrmeyer, and K. D. Fallon. "Matrix effects and the performance and selection of quality-control procedures to monitor PO2 measurements." Clinical Chemistry 42, no. 3 (March 1, 1996): 392–96. http://dx.doi.org/10.1093/clinchem/42.3.392.

Abstract:
We have assessed how variation in the matrix of control materials would affect error detection and false-rejection characteristics of quality-control (QC) procedures used to monitor PO2 in blood gas measurements. To determine the expected QC performance, we generated power curves for S(mat)/S(meas) ratios of 0.0-4.0. These curves were used to estimate the probabilities of rejecting analytical runs having medically important errors, calculated from the quality required by the CLIA '88 proficiency testing criterion and the precision and accuracy expected for a typical analytical system. When S(mat)/S(meas) ratios are low, the effects of matrix on QC performance are not serious, permitting selections of QC procedures based on simple power curves for a single component of variation. As S(mat)/S(meas) ratios increase, single-rule procedures generally show a loss in error detection, whereas multirule procedures, including the 3(1)s control rule, show an increase in false rejections. An optimized QC design is presented.
13. Fallest-Strobl, Patricia C., Elin Olafsdottir, Donald A. Wiebe, and James O. Westgard. "Comparison of NCEP performance specifications for triglycerides, HDL-, and LDL-cholesterol with operating specifications based on NCEP clinical and analytical goals." Clinical Chemistry 43, no. 11 (November 1, 1997): 2164–68. http://dx.doi.org/10.1093/clinchem/43.11.2164.

Abstract:
The National Cholesterol Education Program (NCEP) performance specifications for methods that measure triglycerides, HDL-cholesterol, and LDL-cholesterol have been evaluated by deriving operating specifications from the NCEP analytical total error requirements and the clinical requirements for interpretation of the tests. We determined the maximum imprecision and inaccuracy that would be allowable to control routine methods with commonly used single and multirule quality-control procedures having 2 and 4 control measurements per run, and then compared these estimates with the NCEP guidelines. The NCEP imprecision specifications meet the operating imprecision necessary to assure meeting the NCEP clinical quality requirements for triglycerides and HDL-cholesterol but not for LDL-cholesterol. More importantly, the NCEP imprecision specifications are not adequate to assure meeting the NCEP analytical total error requirements for any of these three tests. Our findings indicate that the NCEP recommendations fail to adequately consider the quality-control requirements necessary to detect medically important systematic errors.
14. Bayat, Hassan, Sten A. Westgard, and James O. Westgard. "Multirule procedures vs moving average algorithms for IQC: An appropriate comparison reveals how best to combine their strengths." Clinical Biochemistry 102 (April 2022): 50–55. http://dx.doi.org/10.1016/j.clinbiochem.2022.01.001.

15. Kazmierczak, S. C., P. G. Catrou, and F. Van Lente. "Diagnostic accuracy of pancreatic enzymes evaluated by use of multivariate data analysis." Clinical Chemistry 39, no. 9 (September 1, 1993): 1960–65. http://dx.doi.org/10.1093/clinchem/39.9.1960.

Abstract:
We analyzed pancreatic enzyme data from 508 patients with suspected pancreatitis by neural network analysis, by an Expert multirule generation protocol, and by receiver-operator characteristic (ROC) curve analysis of a single test result. Neural network analysis showed that use of lipase provided the best means for diagnosing pancreatitis. Diagnostic accuracies achieved by using amylase only, lipase only, and amylase and lipase in combination were 76%, 82%, and 84%, respectively. Use of the Expert rule generation protocol provided a diagnostic accuracy of 92% when rules for single and multiple samplings were combined. ROC curve analysis for initial enzyme activities showed the maximal diagnostic accuracy to be 82% and 85% for amylase and lipase, respectively; use of peak enzyme activities yielded accuracies of 81% and 88%, respectively. The evaluation of laboratory test data should include analysis of the diagnostic accuracy of laboratory tests by multivariate techniques such as neural network analysis or an Expert systems approach. Multivariate analysis should allow for a more realistic assessment of the diagnostic accuracy of laboratory tests because all the available data are included in the evaluation.
16. Smith, Robert E. "Memory Exploitation in Learning Classifier Systems." Evolutionary Computation 2, no. 3 (September 1994): 199–220. http://dx.doi.org/10.1162/evco.1994.2.3.199.

Abstract:
Learning classifier systems (LCSs) offer a unique opportunity to study the adaptive exploitation of memory. Because memory is manipulated in the form of simple internal messages in the LCS, one can easily and carefully examine the development of a system of internal memory symbols. This study examines the LCS applied to a problem whose only performance goal is the effective exploitation of memory. Experimental results show that the genetic algorithm forms a relatively effective set of internal memory symbols, but that this effectiveness is directly limited by the emergence of parasite rules. The results indicate that the emergence of parasites may be an inevitable consequence in a system that must evolve its own set of internal memory symbols. The paper's primary conclusion is that the emergence of parasites is a fundamental obstacle in such problems. To overcome this obstacle, it is suggested that the LCS must form larger, multirule structures. In such structures, parasites can be more accurately evaluated and thus eliminated. This effect is demonstrated through a preliminary evaluation of a classifier corporation scheme. Final comments present future directions for research on memory exploitation in the LCS and similar evolutionary computing systems.
17. Dumontheil, Iroise, Russell Thompson, and John Duncan. "Assembly and Use of New Task Rules in Fronto-parietal Cortex." Journal of Cognitive Neuroscience 23, no. 1 (January 2011): 168–82. http://dx.doi.org/10.1162/jocn.2010.21439.

Abstract:
Severe capacity limits, closely associated with fluid intelligence, arise in learning and use of new task rules. We used fMRI to investigate these limits in a series of multirule tasks involving different stimuli, rules, and response keys. Data were analyzed both during presentation of instructions and during later task execution. Between tasks, we manipulated the number of rules specified in task instructions, and within tasks, we manipulated the number of rules operative in each trial block. Replicating previous results, rule failures were strongly predicted by fluid intelligence and increased with the number of operative rules. In fMRI data, analyses of the instruction period showed that the bilateral inferior frontal sulcus, intraparietal sulcus, and presupplementary motor area were phasically active with presentation of each new rule. In a broader range of frontal and parietal regions, baseline activity gradually increased as successive rules were instructed. During task performance, we observed contrasting fronto-parietal patterns of sustained (block-related) and transient (trial-related) activity. Block, but not trial, activity showed effects of task complexity. We suggest that, as a new task is learned, a fronto-parietal representation of relevant rules and facts is assembled for future control of behavior. Capacity limits in learning and executing new rules, and their association with fluid intelligence, may be mediated by this load-sensitive fronto-parietal network.
18. Prasetya, Hieronymus Rayi, Nurlaili Farida Muhajir, and Magdalena Putri Iriyanti Dumatubun. "PENGGUNAAN SIX SIGMA PADA PEMERIKSAAN JUMLAH LEUKOSIT DI RSUD PANEMBAHAN SENOPATI BANTUL." Journal of Indonesian Medical Laboratory and Science (JoIMedLabS) 2, no. 2 (October 1, 2021): 165–74. http://dx.doi.org/10.53699/joimedlabs.v2i2.72.

Abstract:
Internal quality assurance is a prevention and control activity that must be carried out by the laboratory continuously and covers all aspects of laboratory examination parameters. Hematology examination in the laboratory is carried out using a hematology analyzer, but this instrument has limitations, one of which is that it can make leukocyte count reading errors. In order for the results to be reliable, it is necessary to carry out quality control on the hematology analyzer. Westgard multirules are commonly used in laboratories, but the application of Six Sigma is still very rarely used, especially in the field of hematology. This research aims to assess the internal quality control of the analytical stage of the hematology analyzer for the leukocyte count based on Westgard and Six Sigma analysis. This is a descriptive study. The sample in this study is the control value data for the leukocyte count over 1 month at Panembahan Senopati Hospital. The data were analyzed using the Westgard rules and Six Sigma. At the low level, a 13s violation (random error) was obtained. At the normal level, a 12s violation (warning) was obtained. At the high level, a 12s violation (warning) was obtained. The sigma scale at all control levels was above 6. Analysis based on Six Sigma for the leukocyte count showed an average of 7.16 sigma, which indicates that leukocyte examination using a hematology analyzer has an accuracy of 99.9%.
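
The sigma metric referred to above is commonly computed as (allowable total error - |bias|) / CV. A hypothetical worked example in Python (the numbers are illustrative, not the study's data):

    def sigma_metric(tea_pct, bias_pct, cv_pct):
        # sigma = (allowable total error - |bias|) / CV, all in percent
        return (tea_pct - abs(bias_pct)) / cv_pct

    # e.g. a leukocyte count with TEa = 15%, bias = 2%, CV = 1.8%  ->  ~7.2 sigma
    print(round(sigma_metric(15, 2, 1.8), 2))
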
19. Kilpatrick, Eric S. "Quality control failures exceeding the weekly limit (QC FEWL): a simple tool to improve assay error detection." Annals of Clinical Biochemistry: International Journal of Laboratory Medicine 56, no. 6 (August 20, 2019): 668–73. http://dx.doi.org/10.1177/0004563219869043.

Abstract:
Background: Even when a laboratory analyte testing process is in control, routine quality control testing will fail with a frequency that can be predicted by the number of quality control levels used, the run frequency and the control rule employed. We explored whether simply counting the number of assay quality control run failures during a running week, and then objectively determining if there was an excess, could complement daily quality control processes in identifying an out-of-control assay. Methods: Binomial statistics were used to determine the threshold number of quality control run failures in any rolling week which would statistically exceed that expected for a particular test. Power function graphs were used to establish error detection (Ped) and false rejection rates compared with popular control rules. Results: Identifying quality control failures exceeding the weekly limit (QC FEWL) is a more powerful means of detecting smaller systematic (bias) errors than traditional daily control rules (12s, 13s or 13s/22s/R4s) and markedly superior in detecting smaller random (imprecision) errors while maintaining false identification rates below 2%. Error detection rates also exceeded those using a within- and between-run Westgard multirule (13s/22s/41s/10x). Conclusions: Daily review of tests shown to statistically exceed their rolling week limit of expected quality control run failures is more powerful than traditional quality control tools at identifying potential systematic and random test errors and so offers a supplement to daily quality control practices that has no requirement for complex data extraction or manipulation.
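
A rough Python sketch of the underlying binomial calculation: given an expected per-run QC failure probability and the number of QC runs in a rolling week, find the smallest weekly failure count whose tail probability falls below a chosen significance level. The run count, failure probability and alpha below are illustrative assumptions, not values from the paper.

    from math import comb

    def binom_tail(n, k, p):
        # P(X >= k) for X ~ Binomial(n, p)
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    def weekly_limit(runs_per_week, p_fail_per_run, alpha=0.05):
        # Smallest number of weekly QC failures that would be statistically unexpected.
        for k in range(runs_per_week + 1):
            if binom_tail(runs_per_week, k, p_fail_per_run) < alpha:
                return k
        return None

    # e.g. 3 QC runs/day for 7 days with a ~9% per-run false-rejection rate
    print(weekly_limit(21, 0.09))    # -> 5 failures in a rolling week would be unexpected
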
20. Kruk, Tamara, Sam Ratnam, Jutta Preiksaitis, Allan Lau, Todd Hatchette, Greg Horsman, Paul Van Caeseele, Brian Timmons, and Graham Tipples. "Results of Continuous Monitoring of the Performance of Rubella Virus IgG and Hepatitis B Virus Surface Antibody Assays Using Trueness Controls in a Multicenter Trial." Clinical and Vaccine Immunology 19, no. 10 (August 15, 2012): 1624–32. http://dx.doi.org/10.1128/cvi.00294-12.

Abstract:
We conducted a multicenter trial in Canada to assess the value of using trueness controls (TC) for rubella virus IgG and hepatitis B virus surface antibody (anti-HBs) serology to determine test performance across laboratories over time. TC were obtained from a single source with known international units. Seven laboratories using different test systems and kit lots included the TC in routine assay runs of the analytes. TC measurements of 1,095 rubella virus IgG and 1,195 anti-HBs runs were plotted on Levey-Jennings control charts for individual laboratories and analyzed using a multirule quality control (MQC) scheme as well as a single three-standard-deviation (3-SD) rule. All rubella virus IgG TC results were “in control” in only one of the seven laboratories. Among the rest, “out-of-control” results ranged from 5.6% to 10% with an outlier at 20.3% by MQC and from 1.1% to 5.6% with an outlier at 13.4% by the 3-SD rule. All anti-HBs TC results were “in control” in only two laboratories. Among the rest, “out-of-control” results ranged from 3.3% to 7.9% with an outlier at 19.8% by MQC and from 0% to 3.3% with an outlier at 10.5% by the 3-SD rule. In conclusion, through the continuous monitoring of assay performance using TC and quality control rules, our trial detected significant intra- and interlaboratory, test system, and kit lot variations for both analytes. In most cases the assay rejections could be attributable to the laboratories rather than to kit lots. This has implications for routine diagnostic screening and clinical practice guidelines and underscores the value of using an approach as described above for continuous quality improvement in result reporting and harmonization for these analytes.
21. Coskun, Abdurrahman. "Westgard multirule for calculated laboratory tests." Clinical Chemistry and Laboratory Medicine (CCLM) 44, no. 10 (January 1, 2006). http://dx.doi.org/10.1515/cclm.2006.233.

22. Lee, Sungjin, Suyoun Park, Muhammad Tayyab Raza, Thi Bich Ngoc Nguyen, Thi Hong Nhung Vu, Anshula Tandon, and Sung Ha Park. "Multirule-Combined Algorithmic Assembly Demonstrated by DNA Tiles." ACS Applied Polymer Materials, July 14, 2022. http://dx.doi.org/10.1021/acsapm.2c00528.
