Academic literature on the topic 'Channel reliability metrics'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Channel reliability metrics.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Channel reliability metrics"

1. Forsting, Johannes, Marlena Rohm, Martijn Froeling, Anne-Katrin Güttsches, Matthias Vorgerd, Lara Schlaffke, and Robert Rehmann. "High Inter-Rater Reliability of Manual Segmentation and Volume-Based Tractography in Healthy and Dystrophic Human Calf Muscle." Diagnostics 11, no. 9 (August 24, 2021): 1521. http://dx.doi.org/10.3390/diagnostics11091521.

Abstract:
Background: Muscle diffusion tensor imaging (mDTI) is a promising surrogate biomarker in the evaluation of muscular injuries and neuromuscular diseases. Since mDTI metrics are known to vary between different muscles, separation of different muscles is essential to achieve muscle-specific diffusion parameters. The commonly used technique to assess DTI metrics is parameter maps based on manual segmentation (MSB). Other techniques comprise tract-based approaches, which can be performed in a previously defined volume. This so-called volume-based tractography (VBT) may offer a more robust assessment of diffusion metrics and additional information about muscle architecture through tract properties. The purpose of this study was to assess DTI metrics of human calf muscles calculated with two segmentation techniques—MSB and VBT—regarding their inter-rater reliability in healthy and dystrophic calf muscles. Methods: 20 healthy controls and 18 individuals with different neuromuscular diseases underwent an MRI examination in a 3T scanner using a 16-channel Torso XL coil. DTI metrics were assessed in seven calf muscles using MSB and VBT. Coefficients of variation (CV) were calculated for both techniques. MSB and VBT were performed by two independent raters to assess inter-rater reliability by ICC analysis and Bland-Altman plots. Next to analysis of DTI metrics, the same assessments were also performed for tract properties extracted with VBT. Results: For both techniques, low CV were found for healthy controls (≤13%) and neuromuscular diseases (≤17%). Significant differences between methods were found for all diffusion metrics except for λ1. High inter-rater reliability was found for both MSB and VBT (ICC ≥ 0.972). Assessment of tract properties revealed high inter-rater reliability (ICC ≥ 0.974). Conclusions: Both segmentation techniques can be used in the evaluation of DTI metrics in healthy controls and different NMD with low rater dependency and high precision but differ significantly from each other. Our findings underline that the same segmentation protocol must be used to ensure comparability of mDTI data.
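
As a quick illustration of two of the agreement statistics used in this abstract, here is a minimal sketch; the rater values are hypothetical, and the abstract's ICC analysis would need a dedicated mixed-model routine beyond this:

```python
import numpy as np

def coefficient_of_variation(values):
    # CV in percent: sample standard deviation relative to the mean.
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

def bland_altman_limits(rater_a, rater_b):
    # Bland-Altman bias and 95% limits of agreement between two raters.
    diffs = np.asarray(rater_a, float) - np.asarray(rater_b, float)
    bias = diffs.mean()
    half_width = 1.96 * diffs.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Hypothetical fractional-anisotropy readings of one muscle by two raters.
rater_a = [0.31, 0.29, 0.33, 0.30, 0.32]
rater_b = [0.30, 0.29, 0.34, 0.31, 0.31]
print(coefficient_of_variation(rater_a))      # about 5%
print(bland_altman_limits(rater_a, rater_b))  # bias and limits of agreement
```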

2. Amezcua Valdovinos, Ismael, Patricia Elizabeth Figueroa Millán, Jesús Arturo Pérez-Díaz, and Cesar Vargas-Rosales. "Distributed Channel Ranking Scheduling Function for Dense Industrial 6TiSCH Networks." Sensors 21, no. 5 (February 25, 2021): 1593. http://dx.doi.org/10.3390/s21051593.

Abstract:
The Industrial Internet of Things (IIoT) is considered a key enabler for Industry 4.0. Modern wireless industrial protocols such as the IEEE 802.15.4e Time-Slotted Channel Hopping (TSCH) deliver high reliability to fulfill the requirements in IIoT by following strict schedules computed in a Scheduling Function (SF) to avoid collisions and to provide determinism. The standard does not define how such schedules are built. The SF plays an essential role in 6TiSCH networks since it dictates when and where the nodes are communicating according to the application requirements, thus directly influencing the reliability of the network. Moreover, typical industrial environments consist of heavy machinery and complementary wireless communication systems that can create interference. Hence, we propose a distributed SF, namely the Channel Ranking Scheduling Function (CRSF), for IIoT networks supporting IPv6 over the IEEE 802.15.4e TSCH mode. CRSF computes the number of cells required for each node using a buffer-based bandwidth allocation mechanism with a Kalman filtering technique to avoid sudden allocation/deallocation of cells. CRSF also ranks channel quality using Exponential Weighted Moving Averages (EWMAs) based on the Received Signal Strength Indicator (RSSI), Background Noise (BN) level measurements, and the Packet Delivery Rate (PDR) metrics to select the best available channel to communicate. We compare the performance of CRSF with Orchestra and the Minimal Scheduling Function (MSF), in scenarios resembling industrial environmental characteristics. Performance is evaluated in terms of PDR, end-to-end latency, Radio Duty Cycle (RDC), and the elapsed time of first packet arrival. Results show that CRSF achieves high PDR and low RDC across all scenarios with periodic and burst traffic patterns at the cost of increased end-to-end latency. Moreover, CRSF delivers the first packet earlier than Orchestra and MSF in all scenarios. We conclude that CRSF is a viable option for IIoT networks with a large number of nodes and interference. The main contributions of our paper are threefold: (i) a bandwidth allocation mechanism that uses Kalman filtering techniques to effectively calculate the number of cells required for a given time, (ii) a channel ranking mechanism that combines metrics such as the PDR, RSSI, and BN to select channels with the best performance, and (iii) a new Key Performance Indicator (KPI) that measures the elapsed time from network formation until the first packet reception at the root.
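
The channel-ranking idea lends itself to a compact sketch. The smoothing factor and the way the three EWMAs are combined into one score below are assumptions for illustration; CRSF's exact combination rule is defined in the paper:

```python
ALPHA = 0.1  # EWMA smoothing factor (illustrative choice)

class ChannelStats:
    """Tracks EWMAs of the three link-quality metrics named in the abstract."""
    def __init__(self):
        self.pdr = 1.0      # packet delivery rate estimate
        self.rssi = -70.0   # received signal strength, dBm
        self.noise = -95.0  # background noise level, dBm

    def update(self, delivered, rssi_dbm, noise_dbm):
        # EWMA update: new = (1 - alpha) * old + alpha * sample
        self.pdr = (1 - ALPHA) * self.pdr + ALPHA * (1.0 if delivered else 0.0)
        self.rssi = (1 - ALPHA) * self.rssi + ALPHA * rssi_dbm
        self.noise = (1 - ALPHA) * self.noise + ALPHA * noise_dbm

    def score(self):
        # Illustrative ranking: favour high delivery and signal margin over noise.
        return self.pdr * (self.rssi - self.noise)

channels = {ch: ChannelStats() for ch in range(11, 27)}  # 802.15.4 channels 11-26
channels[15].update(delivered=True, rssi_dbm=-62.0, noise_dbm=-94.0)
best = max(channels, key=lambda ch: channels[ch].score())  # channel to prefer
```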

3. Jin, Yong, and Guangwei Bai. "Energy-Aware Adaptive Cooperative FEC Protocol in MIMO Channel for Wireless Sensor Networks." Journal of Electrical and Computer Engineering 2013 (2013): 1–9. http://dx.doi.org/10.1155/2013/891429.

Abstract:
We propose an energy-efficiency-driven adaptive cooperative forward error correction (ACFEC) scheme that combines a Reed-Solomon (RS) coding algorithm and multiple-input multiple-output (MIMO) channel technology with signal-to-noise ratio (SNR) monitoring in wireless sensor networks. First, we propose a new Markov chain model for FEC based on RS codes and derive expressions for QoS on the basis of this model, comprising four metrics: throughput, packet error rate, delay, and energy efficiency. Then, we apply RS codes with MIMO channel technology to the cross-layer design. Numerical and simulation results show that the joint design of MIMO and adaptive cooperative FEC based on RS codes can achieve considerable gains in spectral efficiency, real-time performance, reliability, and energy utility.
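
One building block of such an analysis can be written down directly. Under the common assumption of independent symbol errors, an RS(n, k) code correcting t = (n - k) // 2 symbol errors fails to decode with the probability computed below; this is a textbook expression, not the paper's Markov-chain model:

```python
from math import comb

def rs_block_error_prob(n, k, symbol_error_prob):
    # Decoding fails when more than t = (n - k) // 2 symbols are in error.
    t = (n - k) // 2
    p = symbol_error_prob
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))

print(rs_block_error_prob(255, 223, 0.01))  # classic RS(255, 223), 1% symbol errors
```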

4. Bokharaie, V. Samadi, and A. Jahanian. "Side-channel leakage assessment metrics and methodologies at design cycle: A case study for a cryptosystem." Journal of Information Security and Applications 54 (October 2020): 102561. http://dx.doi.org/10.1016/j.jisa.2020.102561.

5. Chiariotti, Federico, Israel Leyva-Mayorga, Čedomir Stefanović, Anders E. Kalør, and Petar Popovski. "Spectrum Slicing for Multiple Access Channels with Heterogeneous Services." Entropy 23, no. 6 (May 28, 2021): 686. http://dx.doi.org/10.3390/e23060686.

Abstract:
Wireless mobile networks from the fifth generation (5G) and beyond serve as platforms for flexible support of heterogeneous traffic types with diverse performance requirements. In particular, the broadband services aim for the traditional rate optimization, while the time-sensitive services aim for the optimization of latency and reliability, and some novel metrics such as Age of Information (AoI). In such settings, the key question is the one of spectrum slicing: how these services share the same chunk of available spectrum while meeting the heterogeneous requirements. In this work we investigated the two canonical frameworks for spectrum sharing, Orthogonal Multiple Access (OMA) and Non-Orthogonal Multiple Access (NOMA), in a simple, but insightful setup with a single time-slotted shared frequency channel, involving one broadband user, aiming to maximize throughput and using packet-level coding to protect its transmissions from noise and interference, and several intermittent users, aiming either to improve their latency-reliability performance or to minimize their AoI. We analytically assessed the performance of Time Division Multiple Access (TDMA) and ALOHA-based schemes in both OMA and NOMA frameworks by deriving their Pareto regions and the corresponding optimal values of their parameters. Our results show that NOMA can outperform traditional OMA in latency-reliability oriented systems in most conditions, but OMA performs slightly better in age-oriented systems.
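
For readers unfamiliar with AoI, a toy slotted model shows the flavour of the metric; this is an illustration only, not the paper's NOMA/OMA analysis. If a fresh update is transmitted every slot and received with probability p, the inter-success time I is geometric with E[I] = 1/p and E[I^2] = (2 - p)/p^2, and the time-average age works out to:

```latex
\[
\bar{\Delta}
  = \frac{\mathbb{E}\!\left[\sum_{j=1}^{I} j\right]}{\mathbb{E}[I]}
  = \frac{\mathbb{E}[I^{2}] + \mathbb{E}[I]}{2\,\mathbb{E}[I]}
  = \frac{(2-p)/p^{2} + 1/p}{2/p}
  = \frac{1}{p}.
\]
```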

6. Kulow, Alexander, Thomas Schamberger, Lars Tebelmann, and Georg Sigl. "Finding the Needle in the Haystack: Metrics for Best Trace Selection in Unsupervised Side-Channel Attacks on Blinded RSA." IEEE Transactions on Information Forensics and Security 16 (2021): 3254–68. http://dx.doi.org/10.1109/tifs.2021.3074884.

7. Ateeq, Muhammad, Muhammad Khalil Afzal, Muhammad Naeem, Muhammad Shafiq, and Jin-Ghoo Choi. "Deep Learning-Based Multiparametric Predictions for IoT." Sustainability 12, no. 18 (September 19, 2020): 7752. http://dx.doi.org/10.3390/su12187752.

Abstract:
Wireless Sensor Networks (WSNs) and the Internet of Things (IoT) often suffer from error-prone links when deployed in resource-constrained industrial environments. Reliability is a critical performance requirement of loss-sensitive applications, and Signal-to-Noise Ratio (SNR) is a key indicator of successful communications. In addition to the improvement of the physical layer through modulation and channel coding, machine learning offers adaptive solutions by configuring various communication parameters dynamically. In this paper, we apply a Deep Neural Network (DNN) to predict SNR and Packet Delivery Ratio (PDR). Analysis results based on a real dataset show that the DNN can predict SNR and PDR with an accuracy of up to 96% and 98%, respectively, even when trained with a very small fraction (≤10%) of the data. Moreover, a common subset of features turns out to be useful in predicting both SNR and PDR, which encourages considering both metrics jointly. When SNR and PDR are predictable, transmission power can be controlled in a dynamic and adaptive manner, fulfilling reliability requirements while conserving energy. This can help in achieving a sustainable design for the communication system.
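
A minimal sketch of this kind of multiparametric prediction, using synthetic stand-in data and scikit-learn; the paper's actual architecture, features, and dataset differ:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for link-layer features (e.g. TX power, payload, channel)
# and the two targets (SNR, PDR); a real trace dataset would replace this.
rng = np.random.default_rng(0)
X = rng.uniform(size=(5000, 3))
y = np.column_stack([
    10.0 * X[:, 0] + rng.normal(0.0, 0.5, 5000),   # pseudo-SNR
    np.clip(X[:, 0] - 0.3 * X[:, 2], 0.0, 1.0),    # pseudo-PDR
])

# Mirror the small-training-fraction setting: fit on ~10% of the data.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.1, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))  # joint score over SNR and PDR
```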

8. Asuti, Manjunath G., and Prabhugoud I. Basarkod. "Efficiency enhancement using optimized static scheduling technique in TSCH networks." International Journal of Electrical and Computer Engineering (IJECE) 10, no. 2 (April 1, 2020): 1952. http://dx.doi.org/10.11591/ijece.v10i2.pp1952-1962.

Abstract:
In recent times, reliable and real-time data transmission has become a mandatory requirement for various industries and organizations due to the widespread use of Internet of Things (IoT) devices. However, IoT devices need high reliability, precise data exchange, and low power utilization, which cannot be achieved by conventional Medium Access Control (MAC) protocols due to link failures and high interference in the network. Therefore, Time-Slotted Channel Hopping (TSCH) networks can be used for link scheduling under the IEEE 802.15.4e standard. In this paper, we propose an Optimized Static Scheduling Technique (OSST) for link scheduling in IEEE 802.15.4e based TSCH networks. In OSST, the link schedule is optimized by considering packet latency information during transmission, checking the status of transmitted packets, and keeping track of data packets lost between source and destination nodes. We evaluate the proposed OSST model using the 6TiSCH Simulator and compare the different performance metrics with Simple Distributed TSCH Scheduling.
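
The TSCH machinery the paper builds on is easy to sketch. The cell-assignment hash below is hypothetical; the channel-hopping line, however, follows the standard IEEE 802.15.4e rule of indexing the hopping sequence by the Absolute Slot Number (ASN) plus a channel offset:

```python
SLOTFRAME_LEN = 101  # slots per slotframe (illustrative; coprime lengths spread hops)

def cell_for_link(node_id, neighbor_id, num_offsets=16):
    # Hypothetical static assignment of a (timeslot, channelOffset) cell to a link.
    slot = (7 * node_id + neighbor_id) % SLOTFRAME_LEN
    offset = (node_id + neighbor_id) % num_offsets
    return slot, offset

def hop_frequency(asn, channel_offset, hopping_sequence):
    # TSCH channel hopping: channel = sequence[(ASN + offset) mod sequence length].
    return hopping_sequence[(asn + channel_offset) % len(hopping_sequence)]

seq = list(range(11, 27))  # the 16 channels of IEEE 802.15.4 at 2.4 GHz
slot, off = cell_for_link(4, 9)
print(hop_frequency(asn=12345, channel_offset=off, hopping_sequence=seq))
```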

9. Gibbs, Matthew S., David McInerney, Greer Humphrey, Mark A. Thyer, Holger R. Maier, Graeme C. Dandy, and Dmitri Kavetski. "State updating and calibration period selection to improve dynamic monthly streamflow forecasts for an environmental flow management application." Hydrology and Earth System Sciences 22, no. 1 (February 1, 2018): 871–87. http://dx.doi.org/10.5194/hess-22-871-2018.

Abstract:
Monthly to seasonal streamflow forecasts provide useful information for a range of water resource management and planning applications. This work focuses on improving such forecasts by considering the following two aspects: (1) state updating to force the models to match observations from the start of the forecast period, and (2) selection of a shorter calibration period that is more representative of the forecast period, compared to a longer calibration period traditionally used. The analysis is undertaken in the context of using streamflow forecasts for environmental flow water management of an open channel drainage network in southern Australia. Forecasts of monthly streamflow are obtained using a conceptual rainfall–runoff model combined with a post-processor error model for uncertainty analysis. This model set-up is applied to two catchments, one with stronger evidence of non-stationarity than the other. A range of metrics are used to assess different aspects of predictive performance, including reliability, sharpness, bias and accuracy. The results indicate that, for most scenarios and metrics, state updating improves predictive performance for both observed rainfall and forecast rainfall sources. Using the shorter calibration period also improves predictive performance, particularly for the catchment with stronger evidence of non-stationarity. The results highlight that a traditional approach of using a long calibration period can degrade predictive performance when there is evidence of non-stationarity. The techniques presented can form the basis for operational monthly streamflow forecasting systems and provide support for environmental decision-making.
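
Generic versions of three of the metrics named above (reliability, sharpness, bias) can be computed from an ensemble of probabilistic forecasts. A rough sketch assuming 90% prediction intervals, not the paper's exact formulations:

```python
import numpy as np

def verification_metrics(obs, ensemble):
    """obs: (n,) observed monthly flows; ensemble: (n, m) forecast samples."""
    median = np.median(ensemble, axis=1)
    lo, hi = np.percentile(ensemble, [5, 95], axis=1)
    bias = float((median - obs).mean())                      # systematic error
    sharpness = float((hi - lo).mean())                      # mean 90% interval width
    reliability = float(((obs >= lo) & (obs <= hi)).mean())  # coverage, ideally ~0.90
    return bias, sharpness, reliability
```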

10. Yamashkin, Stanislav, Milan Radovanovic, Anatoliy Yamashkin, and Darko Vukovic. "Using ensemble systems to study natural processes." Journal of Hydroinformatics 20, no. 4 (March 5, 2018): 753–65. http://dx.doi.org/10.2166/hydro.2018.076.

Abstract:
Increasing accuracy of the data analysis of remote sensing of the Earth significantly affects the quality of decisions taken in the field of environmental management. The article describes the methodology for decoding multispectral space images based on the ensemble learning concept, which can effectively solve important problems of geosystems mapping, including diagnostics of the structure and condition of catchment basins, inventory of water bodies and assessment of their ecological state, study of channel processes, monitoring and forecasting of functioning, dynamics and development of geotechnical systems. The developed methodology is based on an algorithm for analyzing the structure of geosystems using ensemble systems based on a fundamentally new organization of the metaclassifier that allows for a weighted decision based on the efficiency matrix, which is characterized by an increase in accuracy of the decoding of space images and resistance to errors. The metaclassification training algorithm based on the method of weighted voting of monoclassifiers is proposed, in which the weights are calculated on the basis of error matrix metrics. The methodology was tested at the test site 'Inerka'. The performed experiments confirmed that the use of ensemble systems increases the final accuracy, objectivity, and reliability of the analysis.
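
The metaclassifier's weighted-voting step reduces to a few lines. Deriving the weights here from plain validation accuracy is a simplification; the paper derives them from an efficiency (error) matrix:

```python
import numpy as np

def weighted_vote(class_predictions, weights, n_classes):
    # class_predictions: one class id per monoclassifier for a single pixel/sample.
    scores = np.zeros(n_classes)
    for cls, w in zip(class_predictions, weights):
        scores[cls] += w
    return int(np.argmax(scores))

weights = [0.91, 0.84, 0.78]  # hypothetical per-model validation accuracies
print(weighted_vote([2, 2, 0], weights, n_classes=4))  # -> 2
```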

Dissertations / Theses on the topic "Channel reliability metrics"

1. Shaheem, Asri. "Iterative detection for wireless communications." University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2008. http://theses.library.uwa.edu.au/adt-WU2008.0223.

Abstract:
[Truncated abstract] The transmission of digital information over a wireless communication channel gives rise to a number of issues which can detract from the system performance. Propagation effects such as multipath fading and intersymbol interference (ISI) can result in significant performance degradation. Recent developments in the field of iterative detection have led to a number of powerful strategies that can be effective in mitigating the detrimental effects of wireless channels. In this thesis, iterative detection is considered for use in two distinct areas of wireless communications. The first considers the iterative decoding of concatenated block codes over slow flat fading wireless channels, while the second considers the problem of detection for a coded communications system transmitting over highly-dispersive frequency-selective wireless channels. The iterative decoding of concatenated codes over slow flat fading channels with coherent signalling requires knowledge of the fading amplitudes, known as the channel state information (CSI). The CSI is combined with statistical knowledge of the channel to form channel reliability metrics for use in the iterative decoding algorithm. When the CSI is unknown to the receiver, the existing literature suggests the use of simple approximations to the channel reliability metric. However, these works generally consider low rate concatenated codes with strong error correcting capabilities. In some situations, the error correcting capability of the channel code must be traded for other requirements, such as higher spectral efficiency, lower end-to-end latency and lower hardware cost. ... In particular, when the error correcting capability of the concatenated code is weak, the conventional metrics are observed to fail, whereas the proposed metrics are shown to perform well regardless of the error correcting capabilities of the code. The effects of ISI caused by a frequency-selective wireless channel environment can also be mitigated using iterative detection. When the channel can be viewed as a finite impulse response (FIR) filter, the state-of-the-art iterative receiver is the maximum a posteriori probability (MAP) based turbo equaliser. However, the complexity of this receiver's MAP equaliser increases exponentially with the length of the FIR channel. Consequently, this scheme is restricted for use in systems where the channel length is relatively short. In this thesis, the use of a channel shortening prefilter in conjunction with the MAP-based turbo equaliser is considered in order to allow its use with arbitrarily long channels. The prefilter shortens the effective channel, thereby reducing the number of equaliser states. A consequence of channel shortening is that residual ISI appears at the input to the turbo equaliser and the noise becomes coloured. In order to account for the ensuing performance loss, two simple enhancements to the scheme are proposed. The first is a feedback path which is used to cancel residual ISI, based on decisions from past iterations. The second is the use of a carefully selected value for the variance of the noise assumed by the MAP-based turbo equaliser. Simulations are performed over a number of highly dispersive channels and it is shown that the proposed enhancements result in considerable performance improvements. Moreover, these performance benefits are achieved with very little additional complexity with respect to the unmodified channel shortened turbo equaliser.
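
The channel reliability metric at the heart of this thesis has a standard closed form for coherent BPSK over a flat fading channel. With matched-filter output y = a x + n, fading amplitude a, symbol energy E_s, and one-sided noise density N_0, the log-likelihood ratio fed to the iterative decoder scales y by the channel reliability L_c:

```latex
\[
L(x \mid y) \;=\; \ln\frac{P(x=+1 \mid y)}{P(x=-1 \mid y)} \;=\; L_c\,y,
\qquad
L_c \;=\; \frac{4\,a\,E_s}{N_0}.
\]
```

When the CSI a is unknown, L_c must be approximated, which is precisely the trade-off the thesis examines.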

Book chapters on the topic "Channel reliability metrics"

1. Siderius, Christian, Robel Geressu, Martin C. Todd, Seshagiri Rao Kolusu, Julien J. Harou, Japhet J. Kashaigili, and Declan Conway. "High Stakes Decisions Under Uncertainty: Dams, Development and Climate Change in the Rufiji River Basin." In Climate Risk in Africa, 93–113. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-61160-6_6.

Abstract:
The need to stress test designs and decisions about major infrastructure under climate change conditions is increasingly being recognised. This chapter explores new ways to understand and—if possible—reduce the uncertainty in climate information to enable its use in assessing decisions that have consequences across the water, energy, food and environment sectors. It outlines an approach, applied in the Rufiji River Basin in Tanzania, that addresses uncertainty in climate model projections by weighting them according to different skill metrics: how well the models simulate important climate features. The impact of different weighting approaches on two river basin performance indicators (hydropower generation and environmental flows) is assessed, providing an indication of the reliability of infrastructure investments, including a major proposed dam, under different climate model projections. The chapter ends with a reflection on the operational context for applying such approaches and some of the steps taken to address challenges and to engage stakeholders.

2. Schneidewind, Norman F. "Software Requirements Risk and Maintainability." In Encyclopedia of Information Science and Technology, First Edition, 2562–66. IGI Global, 2005. http://dx.doi.org/10.4018/978-1-59140-553-5.ch454.

Abstract:
In order to continue to make progress in software measurement, as it pertains to reliability and maintainability, there must be a shift in emphasis from design and code metrics to metrics that characterize the risk of making requirements changes. By doing so, the quality of delivered software can be improved because defects related to problems in requirements specifications will be identified early in the life cycle. An approach is described for identifying requirements change risk factors as predictors of reliability and maintainability problems. This approach can be generalized to other applications with numerical results that would vary according to application. An example is provided that consists of 24 space shuttle change requests, 19 risk factors, and the associated failures and software metrics.

3. Sadi, Muhammad Sheikh, D. G. Myers, and Cesar Ortega Sanchez. "Complexity Analysis at Design Stages of Service Oriented Architectures as a Measure of Reliability Risks." In Engineering Reliable Service Oriented Architecture, 292–314. IGI Global, 2011. http://dx.doi.org/10.4018/978-1-60960-493-6.ch014.

Abstract:
Tremendous growth of interest in Service Oriented Architectures (SOA) has triggered a substantial amount of research into their reliability assurance. To minimize the risk of failure in these types of systems, it is necessary to flag those components of an SOA that are likely to have more faults. Clearly, the degree of fault protection or prevention required is not the same for all components. This chapter proposes the use of metrics that are simple heuristics used to scan the system model and flag complex components where faults are more likely to occur. The metric output is thus a priority, or a measure of the likelihood of faults in a component. The chapter then suggests possible design changes to the designer if any risk of degradation of the desired functionality remains.

4. Seah, Winston K. G., and Hwee-Xian Tan. "Quality of Service in Mobile Ad Hoc Networks." In Mobile Computing, 2833–42. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-054-7.ch214.

Abstract:
Mobile ad hoc networks (MANETs) form a class of multi-hop wireless networks that can easily be deployed on-the-fly. These are autonomous systems that do not require existing infrastructure; each participating node in the network acts as a host as well as a packet-forwarding router. In addition to the difficulties experienced by conventional wireless networks, such as wireless interference, noise and obstructions from the environment, hidden/exposed terminal problems, and limited physical security, MANETs are also characterized by dynamically changing network topology and energy constraints. While MANETs were originally designed for use in disaster emergencies and defense-related applications, there are a number of potential applications of ad hoc networking that are commercially viable. Some of these applications include multimedia teleconferencing, home networking, embedded computing, electronic classrooms, sensor networks, and even underwater surveillance. The increased interest in MANETs in recent years has led to intensive research efforts which aim to provide quality of service (QoS) support over such infrastructure-less networks with unpredictable behaviour. Generally, the QoS of any particular network can be defined as its ability to deliver a guaranteed level of service to its users and/or applications. These service requirements often include performance metrics such as throughput, delay, jitter (delay variance), bandwidth, reliability, etc., and different applications may have varying service requirements. The performance metrics can be computed in three different ways: (i) concave (e.g., minimum bandwidth along each link); (ii) additive (e.g., total delay along a path); and (iii) multiplicative (e.g., packet delivery ratio along the entire route). While much effort has been invested in providing QoS in the Internet during the last decade, leading to the development of Internet QoS models such as integrated services (IntServ) (Braden, 1994) and differentiated services (DiffServ) (Blake, 1998), the Internet is currently able to provide only best effort (BE) QoS to its applications. In such networks with predictable resource availability, providing QoS beyond best effort is already a challenge. It is therefore even more difficult to achieve a BE-QoS similar to the Internet in networks like MANETs, which experience a vast spectrum of network dynamics (such as node mobility and link instability). In addition, QoS is only plausible in a MANET if it is combinatorially stable, i.e., topological changes occur slow enough to allow the successful propagation of updates throughout the network. As such, it is often debatable as to whether QoS in MANETs is just a myth or can become a reality.
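
The three composition rules listed in this abstract map directly onto code; a small sketch with hypothetical per-link values:

```python
from functools import reduce

links = [  # hypothetical per-link QoS values along one route
    {"bw_mbps": 2.0, "delay_ms": 10.0, "pdr": 0.99},
    {"bw_mbps": 1.0, "delay_ms": 25.0, "pdr": 0.95},
    {"bw_mbps": 3.0, "delay_ms": 5.0,  "pdr": 0.97},
]

path_bw = min(l["bw_mbps"] for l in links)                # concave metric
path_delay = sum(l["delay_ms"] for l in links)            # additive metric
path_pdr = reduce(lambda p, l: p * l["pdr"], links, 1.0)  # multiplicative metric

print(path_bw, path_delay, round(path_pdr, 3))  # 1.0 40.0 0.912
```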

5. Burstein, Frada, and J. Cowie. "Mobile Decision Support for Time-Critical Decision Making." In Mobile Computing, 3552–60. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-054-7.ch259.

Abstract:
The wide availability of advanced information and communication technology has made it possible for users to expect a much wider access to decision support. Since the context of decision making is not necessarily restricted to the office desktop, decision support facilities have to be provided through access to technology anywhere, anytime, and through a variety of mediums. The spread of e-services and wireless devices has increased accessibility to data, and in turn, influenced the way in which users make decisions while on the move, especially in time-critical situations. For example, on-site decision support for fire weather forecasting during bushfires can include real-time evaluation of the quality of a local fire weather forecast in terms of accuracy and reliability. Such decision support can include simulated scenarios indicating the probability of fire spreading over nearby areas that rely on data collected locally at the scene and broader data from the regional and national offices. Decision Support Systems (DSS) available on mobile devices, which triage nurses can rely on for immediate, expert advice based on available information, can minimise delay in actions and errors in triage at emergency departments (Cowie & Godley, 2006). Time-critical decision making problems require context-dependent metrics for representing the expected cost of delaying an action (Greenwald & Dean, 1995), the expected value of revealed information, the expected value of displayed information (Horvitz, 1995), or the expected quality of service (Krishnaswamy, Loke, & Zaslavsky, 2002). Predicting the utility or value of information or services is aimed at efficient use of limited decision making time or processing time and limited resources to allow the system to respond to the time-critical situation within the required time frame. Sensitivity analysis (SA) pertains to analysis of changes in output due to changes in inputs (Churilov et al., 1996). In the context of decision support, SA traditionally includes the analysis of changes in output when some aspect of one or more of the decision model's attributes changes, and how these affect the final DSS recommendations (Triantaphyllou & Sanchez, 1997). In time-critical decision making, monitoring the relationship between changes in the current input data and how these changes will impact the expected decision outcome can be an important feature of the decision support (Hodgkin, San Pedro, & Burstein, 2004; San Pedro, Burstein, Zaslavsky, & Hodgkin, 2004). Thus, in a time-critical decision making environment, the decision maker requires information pertaining to both the robustness of the current model and the ranking of feasible alternatives, and how sensitive this information is to time, for example, whether in 2, 5, or 10 minutes a different ranking of proposed solutions may be more relevant. The use of graphical displays to relay the sensitivity of a decision to changes in parameters and the model's sensitivity to time has been shown to be a useful way of inviting the decision maker to fully investigate their decision model and evaluate the risk associated with making a decision now (whilst connectivity is possible), rather than at a later point in time (when perhaps a connection has been lost) (Cowie & Burstein, 2006). In this article, we present an overview of the available approaches to mobile decision support and specifically highlight the advantages such systems bring to the user in time-critical decision situations. We also identify the challenges that the developers of such systems have to face and resolve to ensure efficient decision support under uncertainty is provided.

Conference papers on the topic "Channel reliability metrics"

1. Hossler, Tom, Meryem Simsek, and Gerhard P. Fettweis. "Joint Analysis of Channel Availability and Time-Based Reliability Metrics for Wireless URLLC." In GLOBECOM 2018 - 2018 IEEE Global Communications Conference. IEEE, 2018. http://dx.doi.org/10.1109/glocom.2018.8647801.

2. Szczecinski, L., A. Alvarado, and R. Feick. "Probability Density Functions of Reliability Metrics for 16-QAM-Based BICM Transmission in Rayleigh Channel." In 2007 IEEE International Conference on Communications. IEEE, 2007. http://dx.doi.org/10.1109/icc.2007.172.

3. Chadha, Shyam, Daniel Hung, and Samir Rashid. "A Novel Approach to Evaluating Leak Detection CPM System Sensitivity/Reliability Performance Trade-Offs." In 2014 10th International Pipeline Conference. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/ipc2014-33650.

Abstract:
As defined in American Petroleum Institute Recommended Practice 1130 (API RP 1130), CPM system leak detection performance is evaluated on the basis of four distinct but interrelated metrics: sensitivity, reliability, accuracy and robustness. These performance metrics are captured to evaluate performance, manage risk and prioritize mitigation efforts. Evaluating and quantifying the sensitivity performance of a CPM system is paramount to ensure that the performance of the CPM system is acceptable based on a company's risk profile for detecting leaks. Employing API RP 1130 recommended testing methodologies, including parameter manipulation techniques, software-simulated leak tests and/or removal of test quantities of commodity from the pipeline, is an excellent approach to understanding the leak sensitivity metric. Good reliability (false alarm) performance is critical to ensure that control center operator desensitization does not occur through long-term exposure to false alarms. Continuously tracking and analyzing the root causes of leak alarms ensures that the effects of seasonal variations or changes to operation on CPM system performance are managed appropriately. The complexity of quantifying this metric includes qualitatively evaluating the relevance of false alarms. The interrelated nature of the above performance metrics imposes conflicting requirements and results in inherent trade-offs. Optimizing the trade-off between reliability and sensitivity involves identifying the point at which thresholds must be set to obtain a balance between the desired sensitivity and false alarm rate. This paper presents an approach to illustrate the combined sensitivity/reliability performance for an example pipeline. The paper discusses considerations addressed while determining the methodology, such as stakeholder input, ongoing CPM system enhancements, the sensitivity/reliability trade-off, risk-based capital investment and graphing techniques. The paper also elaborates on a number of identified benefits of the selected overall methodology.
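
The sensitivity/false-alarm trade-off can be visualised with a toy threshold sweep; the Gaussian flow-imbalance model below is a stand-in for illustration, not an actual CPM algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
normal_ops = rng.normal(0.0, 1.0, 100_000)  # flow imbalance, normal operation
leak_events = rng.normal(3.0, 1.0, 200)     # imbalance during simulated leaks

for threshold in (1.5, 2.0, 2.5, 3.0):
    sensitivity = (leak_events > threshold).mean()  # fraction of leaks caught
    false_alarm_rate = (normal_ops > threshold).mean()
    print(f"threshold={threshold}: sensitivity={sensitivity:.2f}, "
          f"false alarms={false_alarm_rate:.4f}")
# Raising the threshold trades sensitivity for reliability, and vice versa.
```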

4. Shaheem, A., H. J. Zepernick, and M. Caldera. "Channel reliability metric for Nakagami-m fading without channel state information." In GLOBECOM '05. IEEE Global Telecommunications Conference, 2005. IEEE, 2005. http://dx.doi.org/10.1109/glocom.2005.1577657.

5. Malhotra, Ruchika, and Ankita Bansal. "Predicting change using software metrics: A review." In 2015 4th International Conference on Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions). IEEE, 2015. http://dx.doi.org/10.1109/icrito.2015.7359253.

6. Kim, Injoong, Suresh K. Sitaraman, and Russell S. Peak. "Reliability Objects Model: A Knowledge Model of System Design for Reliability." In ASME 2005 International Mechanical Engineering Congress and Exposition. ASMEDC, 2005. http://dx.doi.org/10.1115/imece2005-79934.

Abstract:
Designing complex systems that satisfy a target reliability is difficult because of complex assembly structures and logical connections, numerous components and associated failure modes, limited reliability data or prediction models, and multi-disciplinary nature. To overcome these difficulties and to design complex systems in a systematic way, this research aims to develop a knowledge model of system design for reliability, called Reliability Object Model. This knowledge model contains a) a new failure analysis structure, b) reliability metrics that represent random failures and wearout failures, c) algorithms that allocate, predict, and assess reliability using the failure analysis structure, d) rules for design changes. The use of Reliability Object Model is demonstrated by prototype reliability tools and simplified electronic systems.
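
In standard reliability notation, the two failure regimes the model's metrics distinguish are usually written as a constant-rate exponential law (random failures) and a Weibull law (wearout); these textbook forms are shown for orientation, not as the paper's exact formulation:

```latex
\[
R_{\text{random}}(t) = e^{-\lambda t},
\qquad
R_{\text{wearout}}(t) = e^{-(t/\eta)^{\beta}} \;(\beta > 1),
\qquad
R_{\text{series}}(t) = \prod_{i} R_i(t),
\]
```

where λ is the constant failure rate, η the characteristic life, and β the Weibull shape parameter; the series-system product is what allocation and prediction over an assembly structure must respect.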

7. Nakamura, Mitsuhiro, and Tomoki Hamagami. "A Software Quality Evaluation Method Using the Change of Source Code Metrics." In 2012 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW). IEEE, 2012. http://dx.doi.org/10.1109/issrew.2012.13.

8. Moser, Raimund, Witold Pedrycz, and Giancarlo Succi. "Analysis of the reliability of a subset of change metrics for defect prediction." In Proceedings of the Second ACM-IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM '08). New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1414004.1414063.

9. Menold, Jessica, Kathryn Jablokow, Timothy Simpson, and Rafael Seuro. "Evaluating the Discriminatory Value and Reliability of Ideation Metrics for Their Application to Concept Development and Prototyping." In ASME 2017 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/detc2017-67816.

Abstract:
Approximately half of new product development projects fail in the marketplace. Within the product development process, prototyping represents the largest sunk cost; it also remains the least researched and understood. While researchers have recently started to evaluate the impact of formalized prototyping methods and frameworks on end designs, these studies have typically evaluated the success or failure of these methods using binary metrics, and they often evaluate only the design's technical feasibility. Intuitively, we know that a product's success or failure in the marketplace is determined by far more than just the product's technical quality; and yet, we have no clear way of evaluating the design changes and pivots that occur during concept development and prototyping activities, as an explicit set of rigorous and informative metrics to evaluate ideas after concept selection does not exist. The purpose of the current study was to investigate the discriminatory value and reliability of ideation metrics originally developed for concept generation as metrics to evaluate functional prototypes and related concepts developed throughout prototyping activities. Our investigation revealed that new metrics are needed in order to understand the translation of product characteristics, such as originality, novelty, and quality, from original concept through concept development and prototyping to finalized product.

10. Jin, X., P. Woytowitz, and T. Tan. "On Determination of Sample Size to Evaluate Reliability Growth Plans." In ASME 2011 International Mechanical Engineering Congress and Exposition. ASMEDC, 2011. http://dx.doi.org/10.1115/imece2011-62189.

Abstract:
The reliability performance of Semiconductor Manufacturing Equipments (SME) is very important for both equipment manufacturers and customers. However, the response variables are random in nature and can significantly change due to many factors. In order to track the equipment reliability performance with certain confidence, this paper proposes an efficient methodology to calculate the number of samples needed to measure the reliability performance of the SME tools. This paper presents a frequency-based Statistics methodology to calculate the number of sampled tools to evaluate the SME reliability field performance based on certain confidence levels and error margins. One example case has been investigated to demonstrate the method. We demonstrate that the multiple weeks accumulated average reliability metrics of multiple tools do not equal the average of the multiple weeks accumulated average reliability metrics of these tools. We show how the number of required sampled tools increases when the reliability performance is improved and quantify the larger number of sampled tools required when a tighter margin of error or higher confidence level is needed.
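
The textbook large-sample starting point for such a calculation relates the sample count to the desired margin of error E and confidence level; the paper's own method additionally handles the week-to-week accumulation effects noted in the abstract:

```latex
\[
n \;=\; \left( \frac{z_{\alpha/2}\,\sigma}{E} \right)^{2},
\]
```

for example, estimating a mean reliability metric whose tool-to-tool spread is σ = 20 hours to within E = 5 hours at 95% confidence (z ≈ 1.96) requires n = (1.96 × 20 / 5)² ≈ 62 tools.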