Journal articles on the topic 'Dual error extensions'

To see the other types of publications on this topic, follow the link: Dual error extensions.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Dual error extensions.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Wang, Ruei-Ping, and Chao-Hung Lin. "Dual Guided Aggregation Network for Stereo Image Matching." Sensors 22, no. 16 (August 16, 2022): 6111. http://dx.doi.org/10.3390/s22166111.

Full text
Abstract:
Stereo image dense matching, which plays a key role in 3D reconstruction, remains a challenging task in photogrammetry and computer vision. In addition to block-based matching, recent studies based on artificial neural networks have achieved great progress in stereo matching by using deep convolutional networks. This study proposes a novel network called a dual guided aggregation network (Dual-GANet), which utilizes both left-to-right and right-to-left image matching in network design and training to reduce the possibility of pixel mismatch. Flipped training with cost volume consistentization is introduced to realize the learning of invisible-to-visible pixel matching and left–right consistency matching. In addition, suppressed multi-regression is proposed, which suppresses unrelated information before regression and selects multiple peaks from a disparity probability distribution. The proposed dual network with the left–right consistent matching scheme can be applied to most stereo matching models. To estimate the performance, GANet, which is designed based on semi-global matching, was selected as the backbone, with extensions and modifications to guided aggregation, disparity regression, and the loss function. Experimental results on the SceneFlow and KITTI2015 datasets demonstrate the superiority of Dual-GANet over related models in terms of average end-point error (EPE) and pixel error rate (ER). Dual-GANet, with an average EPE of 0.418 and an ER (>1 pixel) of 5.81% on SceneFlow and an average EPE of 0.589 and an ER (>3 pixels) of 1.76% on KITTI2015, outperforms the backbone model, which attains an average EPE of 0.440 and an ER (>1 pixel) of 6.56% on SceneFlow and an average EPE of 0.790 and an ER (>3 pixels) of 2.32% on KITTI2015.
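The two metrics used in this abstract have simple definitions; the following is a minimal sketch (not the Dual-GANet code) of how average end-point error and pixel error rate are computed from predicted and ground-truth disparity maps, using hypothetical toy values:

```python
import numpy as np

def epe_and_error_rate(pred, gt, thresh=1.0):
    """Average end-point error and pixel error rate between disparity maps."""
    diff = np.abs(pred - gt)            # per-pixel end-point error
    epe = float(diff.mean())            # average EPE over all pixels
    er = float((diff > thresh).mean())  # fraction of pixels off by > thresh
    return epe, er

# Toy 2x2 disparity maps (hypothetical values, not from the paper):
pred = np.array([[10.0, 20.0], [30.0, 40.0]])
gt = np.array([[10.5, 20.0], [28.0, 40.0]])
epe, er = epe_and_error_rate(pred, gt, thresh=1.0)
# epe = (0.5 + 0 + 2 + 0) / 4 = 0.625; er = 1/4 = 0.25
```

The ">1 pixel" and ">3 pixels" figures in the abstract correspond to different `thresh` settings.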
APA, Harvard, Vancouver, ISO, and other styles
2

Nie, Yungui, Jiamin Chen, Wanli Wen, Min Liu, Xiong Deng, and Chen Chen. "Orthogonal Subblock Division Multiple Access for OFDM-IM-Based Multi-User VLC Systems." Photonics 9, no. 6 (May 25, 2022): 373. http://dx.doi.org/10.3390/photonics9060373.

Full text
Abstract:
In this paper, we propose and experimentally demonstrate an orthogonal subblock division multiple access (OSDMA) scheme for orthogonal frequency division multiplexing with index modulation (OFDM-IM)-based multi-user visible light communication (MU-VLC) systems, where both single-mode index modulation (SM-IM) and dual-mode index modulation (DM-IM) are considered. In order to overcome the low-pass frequency response and the light-emitting diode (LED) nonlinearity issues of practical MU-VLC systems, OSDMA is employed together with discrete Fourier transform spreading (DFT-S) and interleaving. The feasibility and superiority of the proposed scheme have been successfully verified via both simulations and hardware experiments. More specifically, we evaluate and compare the peak-to-average power ratio (PAPR) performance and the bit error rate (BER) performance of OFDM-SM-IM, DFT-S-OFDM-SM-IM, OFDM-DM-IM and DFT-S-OFDM-DM-IM, without and with interleaving. Experimental results show that remarkable distance extensions can be achieved by employing DFT spreading and interleaving for both SM-IM and DM-IM in a two-user OSDMA-VLC system.
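The PAPR metric compared in this abstract has a standard definition; a minimal generic sketch (not the authors' OSDMA transmitter chain):

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a (complex) baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

# A constant-envelope signal has 0 dB PAPR:
x = np.exp(1j * 2 * np.pi * np.arange(8) / 8)
papr = papr_db(x)
```

DFT spreading lowers the PAPR of the OFDM waveform, which is why it helps with the LED nonlinearity mentioned above.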
3

Fan, Yongcun, Haotian Shi, Shunli Wang, Carlos Fernandez, Wen Cao, and Junhan Huang. "A Novel Adaptive Function—Dual Kalman Filtering Strategy for Online Battery Model Parameters and State of Charge Co-Estimation." Energies 14, no. 8 (April 17, 2021): 2268. http://dx.doi.org/10.3390/en14082268.

Full text
Abstract:
This paper aims to improve the stability and robustness of the state-of-charge estimation algorithm for lithium-ion batteries. A new internal resistance-polarization circuit model is constructed on the basis of the Thevenin equivalent circuit to characterize the difference in internal resistance between charge and discharge. The extended Kalman filter is improved by adding an adaptive noise tracking algorithm, and the Kalman gain in the unscented Kalman filter algorithm is improved by introducing a dynamic equation. In addition, to benignize outliers of the two above-mentioned algorithms, a new dual Kalman algorithm is proposed in this paper by adding a transfer function and through weighted mutation. The model and algorithm accuracy is verified through working-condition experiments. The results show that the errors of the three algorithms are all maintained within 0.8% during the initial and middle stages of the discharge; during the latter period of the discharge, the maximum error of the improved extended Kalman algorithm exceeds 1.5%, that of the improved unscented Kalman algorithm increases to 5%, while the error of the new dual Kalman algorithm remains within 0.4%. This indicates that the accuracy and robustness of the new dual Kalman algorithm are better than those of the traditional algorithms.
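A minimal scalar sketch of the innovation-based adaptive noise tracking idea mentioned above (illustrative only; the paper's dual Kalman filter co-estimates model parameters and SOC, which is not reproduced here):

```python
import numpy as np

def adaptive_kalman(z, q=1e-4, r0=1.0, alpha=0.95):
    """Scalar Kalman filter whose measurement noise r is tracked from
    the innovation sequence (a common adaptive-noise heuristic)."""
    x, p, r = z[0], 1.0, r0
    out = []
    for zk in z:
        p = p + q                                # predict (random-walk model)
        innov = zk - x                           # innovation
        r = alpha * r + (1 - alpha) * innov**2   # adaptive noise tracking
        k = p / (p + r)                          # Kalman gain
        x = x + k * innov                        # state update
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

z = np.array([1.0, 1.2, 0.9, 1.1, 1.0])  # hypothetical measurements
est = adaptive_kalman(z)
```

Each estimate is a convex combination of the prior state and the measurement, so the filtered values stay inside the measurement range.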
4

Zhang, Xiaoping, Nancy L. Nihan, and Yinhai Wang. "Improved Dual-Loop Detection System for Collecting Real-Time Truck Data." Transportation Research Record: Journal of the Transportation Research Board 1917, no. 1 (January 2005): 108–15. http://dx.doi.org/10.1177/0361198105191700113.

Full text
Abstract:
The Washington State Department of Transportation (WSDOT) has a loop detection system on its Greater Seattle freeway network to provide real-time traffic data. The dual-loop detectors installed in the system are used to measure vehicle lengths and then classify each detected vehicle into one of four categories according to its length. The dual loop's capability of measuring vehicle length makes the loop detection system a potential real-time truck data source for freight movement studies, because truck volume estimates by basic length category can be developed from the vehicle length measurements produced by the dual-loop detectors. However, a previous study found that the dual-loop detectors were consistently underreporting truck volumes, whereas the single-loop detectors were consistently overcounting vehicle volumes. As an extension of the previous study, the research project described here investigated possible causes of loop errors under nonforced-flow traffic conditions. A new dual-loop algorithm that addresses these error causes, and can therefore tolerate erroneous loop actuation signals, was developed to improve the performance of the WSDOT loop detection system. A quick remedy was also recommended to address the dual-loop undercount problem without replacing any part of the existing system hardware or software. In addition, a laptop-based detector event data collection (DEDAC) system that can collect loop detector event data without interrupting a loop detection system's normal operation was developed in this research. The DEDAC system enables various kinds of transportation research and applications that would not otherwise be possible.
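The dual-loop length measurement described above follows from textbook relations between loop actuation times; a sketch with illustrative parameter values (not WSDOT's actual configuration or the paper's corrected algorithm):

```python
def dual_loop_length(t_on_up, t_off_up, t_on_down,
                     loop_spacing=5.0, loop_len=1.8):
    """Speed and vehicle length from dual-loop actuation times.
    loop_spacing and loop_len (metres) are hypothetical values."""
    speed = loop_spacing / (t_on_down - t_on_up)  # m/s between loop rise times
    occupancy = t_off_up - t_on_up                # upstream loop on-time (s)
    length = speed * occupancy - loop_len         # effective vehicle length (m)
    return speed, length

# A vehicle crossing 5 m of loop spacing in 0.25 s at steady speed:
speed, length = dual_loop_length(t_on_up=0.0, t_off_up=0.4, t_on_down=0.25)
# speed = 20 m/s; length = 20 * 0.4 - 1.8 = 6.2 m
```

Erroneous actuation signals corrupt the on/off timestamps, which is exactly why the paper's algorithm must tolerate them.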
5

Lord, Natacha H., and Anthony J. Mulholland. "A dual weighted residual method applied to complex periodic gratings." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 469, no. 2160 (December 8, 2013): 20130176. http://dx.doi.org/10.1098/rspa.2013.0176.

Full text
Abstract:
An extension of the dual weighted residual (DWR) method to the analysis of electromagnetic waves in a periodic diffraction grating is presented. Using the (α,0)-quasi-periodic transformation, an upper bound for the a posteriori error estimate is derived. This is then used to adaptively solve the associated Helmholtz problem. The goal is to achieve an acceptable accuracy in the computed diffraction efficiency while keeping the computational mesh relatively coarse. Numerical results are presented to illustrate the advantage of using DWR over the global a posteriori error estimate approach. The application of the method in biomimetics, to address the complex diffraction geometry of the Morpho butterfly wing, is also discussed.
6

RIYAZ, SABA, RAFIA JAN, SHOWKAT MAQBOOL, KHALID UL ISLAM RATHER, and T. R. JAN. "A MODIFIED CLASS OF DUAL TO RATIO-TYPE ESTIMATORS FOR ESTIMATING THE POPULATION VARIANCE UNDER SIMPLE RANDOM SAMPLING SCHEME AND ITS APPLICATION TO REAL DATA." Journal of Science and Arts 22, no. 3 (September 30, 2022): 593–604. http://dx.doi.org/10.46939/j.sci.arts-22.3-a06.

Full text
Abstract:
This work extends the work of [1] on ratio estimators of variance by modification using the dual-to-ratio method. The consistency conditions, bias, mean square error, optimum mean square error, and efficiency have been derived, and the estimators' performance is illustrated using natural populations. It is observed that the proposed class of estimators is most efficient at its optimum value, owing to the highest percent relative efficiency it generates when compared to the usual unbiased estimator for variance.
7

Beckers, Pierre. "Extension of dual analysis to 3-D problems: evaluation of its advantages in error estimation." Computational Mechanics 41, no. 3 (July 10, 2007): 421–27. http://dx.doi.org/10.1007/s00466-007-0198-2.

Full text
8

Ismail, Rifky, Mochammad Ariyanto, Inri A. Perkasa, Rizal Adirianto, Farika T. Putri, Adam Glowacz, and Wahyu Caesarendra. "Soft Elbow Exoskeleton for Upper Limb Assistance Incorporating Dual Motor-Tendon Actuator." Electronics 8, no. 10 (October 18, 2019): 1184. http://dx.doi.org/10.3390/electronics8101184.

Full text
Abstract:
Loss of muscle functions, such as those of the elbow, can affect a person's quality of life. This research is aimed at developing an affordable two-DOF soft elbow exoskeleton incorporating a dual motor-tendon actuator. The soft elbow exoskeleton can be used to assist two-DOF motions of the upper limb, especially elbow and wrist movements. The exoskeleton is made of fabric for the user's convenience. The dual motor-tendon actuator subsystem employs two DC motors coupled with lead screws that convert angular into linear motion. The output is connected to the upper-arm hook on the soft elbow exoskeleton. With this mechanism, the proposed actuator system is able to assist two-DOF movements: flexion/extension and pronation/supination. Proportional-integral (PI) control is implemented for controlling the motion; the optimized values of Kp and Ki are 200 and 20, respectively. Based on the test results, there is a slight steady-state error between the first and the second DC motor. When the exoskeleton is worn by a user, the steady-state error increases because of the load of the arm's weight. The test results demonstrate that the proposed soft elbow exoskeleton can be worn easily and comfortably and assists two-DOF elbow and wrist motion. The resulting range of motion (ROM) for elbow flexion–extension can be varied from 90° to 157°, whereas the maximum ROMs that can be achieved for pronation and supination are 19° and 18°, respectively.
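The PI law with the gains reported in the abstract (Kp = 200, Ki = 20) can be sketched as a single discrete update; the motor/plant model itself is not reproduced, and the time step is a hypothetical choice:

```python
def pi_step(error, integral, kp=200.0, ki=20.0, dt=0.01):
    """One discrete PI control update: u = Kp*e + Ki*integral(e dt).
    kp and ki follow the paper; dt is an assumed sample time."""
    integral += error * dt        # accumulate the error integral
    u = kp * error + ki * integral
    return u, integral

integral = 0.0
u, integral = pi_step(error=0.5, integral=integral)
# u = 200 * 0.5 + 20 * (0.5 * 0.01) = 100.1
```

The steady-state error under load that the abstract mentions is precisely what the integral term works to eliminate over time.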
9

Zhang, Jin-Hai, and Zhen-Xing Yao. "Reducing two-way splitting error of FFD method in dual domains." GEOPHYSICS 76, no. 4 (July 2011): S165—S175. http://dx.doi.org/10.1190/1.3590214.

Full text
Abstract:
The Fourier finite-difference (FFD) method is very popular in seismic depth migration, but its straightforward 3D extension creates two-way splitting error due to ignoring the cross terms of the spatial partial derivatives. Traditional correction schemes, either in the spatial domain by the implicit finite-difference method or in the wavenumber domain by phase compensation, lead to substantially increased computational costs or numerical difficulties for strong velocity contrasts. We propose compensating the two-way splitting error in dual domains, alternately in the spatial and wavenumber domains via Fourier transform. First, we organize the expanded square-root operator in terms of the two-way splitting FFD plus the usually ignored cross terms. Second, we select a group of optimized coefficients to maximize the accuracy of propagation in both the inline and crossline directions, without yet considering the diagonal directions. Finally, we further optimize the constant coefficient of the compensation part to improve the overall accuracy of the operator. In implementation, the compensation terms are similar to the high-order corrections of the generalized-screen method, but their function is to compensate the two-way splitting error rather than the expansion error. Numerical experiments show that optimized one-term compensation can achieve nearly perfect circular impulse responses, and the propagation angle with less than 1% error for all azimuths is improved from 35° up to 60°. Compared with traditional single-domain methods, our scheme handles lateral velocity variations (even strong velocity contrasts) much more easily, with only one additional Fourier transform based on the two-way splitting FFD method, which helps retain computational efficiency.
10

Lacker, Daniel. "A non-exponential extension of Sanov’s theorem via convex duality." Advances in Applied Probability 52, no. 1 (March 2020): 61–101. http://dx.doi.org/10.1017/apr.2019.52.

Full text
Abstract:
This work is devoted to a vast extension of Sanov's theorem, in Laplace principle form, based on alternatives to the classical convex dual pair of relative entropy and cumulant generating functional. The abstract results give rise to a number of probabilistic limit theorems and asymptotics. For instance, widely applicable non-exponential large deviation upper bounds are derived for empirical distributions and averages of independent and identically distributed samples under minimal integrability assumptions, notably accommodating heavy-tailed distributions. Other interesting manifestations of the abstract results include new results on the rate of convergence of empirical measures in Wasserstein distance, uniform large deviation bounds, and variational problems involving optimal transport costs, as well as an application to error estimates for approximate solutions of stochastic optimization problems. The proofs build on the Dupuis–Ellis weak convergence approach to large deviations as well as the duality theory for convex risk measures.
11

van Lint, J. W. C., and N. J. van der Zijpp. "Improving a Travel-Time Estimation Algorithm by Using Dual Loop Detectors." Transportation Research Record: Journal of the Transportation Research Board 1855, no. 1 (January 2003): 41–48. http://dx.doi.org/10.3141/1855-05.

Full text
Abstract:
An algorithm is presented for off-line estimation of route-level travel times for uninterrupted traffic flow facilities, such as motorway corridors, based on time series of traffic-speed observations taken from the sections that constitute a route. The proposed method is an extension of the widely used trajectory method. The novelty of the presented method is that trajectories are based on the assumption of piecewise linear (and continuous at section boundaries) vehicle speeds rather than piecewise constant (and discontinuous at section boundaries) speeds. From these assumptions, mathematical expressions are derived that describe the trajectories within each section. These expressions can be used to replace their existing counterparts in the traditional trajectory methods. A comparison of the accuracy of the new method and of the existing method was carried out by using simulated data. This comparison showed that the root-mean-square error (RMSE) value for the new method is about half the RMSE value for the existing method. When this RMSE is decomposed into a bias and a residual error, it turns out that the existing method significantly overestimates the travel time. However, the largest part of the reduction of the RMSE value is still caused by a reduction of the residual error. In other words, if both methods are corrected for their bias, the new method performs significantly better.
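The piecewise-linear assumption changes the per-section travel-time formula; the following sketch assumes speed varies linearly in space across the section (the authors' exact expressions may differ) and contrasts it with the classical piecewise-constant case:

```python
import math

def section_time_linear(length_m, v0, v1):
    """Travel time across a section when speed varies linearly in space
    from v0 at entry to v1 at exit: t = L * ln(v1/v0) / (v1 - v0)."""
    if abs(v1 - v0) < 1e-12:
        return length_m / v0           # degenerate case: constant speed
    return length_m * math.log(v1 / v0) / (v1 - v0)

def section_time_constant(length_m, v):
    """Classical piecewise-constant trajectory method."""
    return length_m / v

# 500 m section, speed dropping from 25 m/s to 15 m/s:
t_lin = section_time_linear(500, 25, 15)     # ~25.5 s
t_const = section_time_constant(500, 25)     # 20 s
```

With decelerating traffic, the constant-speed assumption (using the entry speed) underestimates the section travel time, consistent with the bias behaviour discussed in the abstract.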
12

Ottoni Filho, Theophilo B., Marlon G. Lopes Alvarez, Marta V. Ottoni, and Arthur Bernardo Barbosa Dib Amorim. "Extension of the Gardner exponential equation to represent the hydraulic conductivity curve." Journal of Hydrology and Hydromechanics 67, no. 4 (December 1, 2019): 359–71. http://dx.doi.org/10.2478/johh-2019-0015.

Full text
Abstract:
The relative hydraulic conductivity curve Kr(h) = K/Ks is a key variable in soil modeling. This study proposes a model to represent Kr(h), the so-called Gardner dual (GD) model, which extends the classical Gardner exponential model to h values greater than ho, the suction value at the inflection point of the Kr(h) curve in the log-log scale. The goodness of fit of GD using experimental data from UNSODA was compared to that of the MVG [two-parameter (Kro, L) Mualem-van Genuchten] model and a corresponding modified MVG model (MVGm). In 77 soils without evidence of macropore flow, GD reduced the RMSE errors by 64% (0.525 to 0.191) and 29% (0.269 to 0.193) in relation to MVG and MVGm, respectively. In the remaining 76 soils, GD generally was less accurate than MVG and MVGm, since most of these soils presented evidence of macropore flow (dual permeability). GD has three parameters and two degrees of freedom, like MVG. Two of them allow the calculation of the macroscopic capillary length, a parameter from the infiltration literature. The three parameters are highly dependent on the Kr(h) data measurement in a short wet suction range around ho, which is an experimental advantage.
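The classical Gardner exponential model that GD extends has the closed form Kr(h) = exp(−αh); a sketch with an illustrative α (the GD extension beyond the inflection suction ho is not reproduced here):

```python
import math

def gardner_kr(h, alpha=0.01):
    """Classical Gardner relative conductivity Kr(h) = exp(-alpha * h).
    h is suction; alpha here is a hypothetical fitted parameter."""
    return math.exp(-alpha * h)

kr = gardner_kr(100.0, alpha=0.01)  # exp(-1)
```

On a log-log plot this curve has no inflection, which is why the GD model must switch to a different form for h > ho.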
13

Brunker, Alexander, Thomas Wohlgemuth, Michael Frey, and Frank Gauterin. "Dual-Bayes Localization Filter Extension for Safeguarding in the Case of Uncertain Direction Signals." Sensors 18, no. 10 (October 19, 2018): 3539. http://dx.doi.org/10.3390/s18103539.

Full text
Abstract:
In order to run a localization filter for parking systems in real time, the directional information must be available as soon as a distance measurement of the wheel speed sensor is detected. When the vehicle is launching, the wheel speed sensors may already detect distance measurements in the form of Delta-Wheel-Pulse-Counts (DWPCs) without having defined a rolling direction. This phenomenon is particularly problematic during parking maneuvers, where many small correction strokes are made. If a localization filter is used for positioning, the restrained DWPCs cannot be processed in real time. Without directional information in the form of a rolling direction signal, the filter has to ignore the DWPCs or artificially stop until a rolling direction signal is present. For this reason, methods for earlier estimation of the rolling direction, based on the pattern of the incoming DWPCs and on the force equilibrium, have been presented. Since the new methods still have their weaknesses and a wrong estimation of the rolling direction can occur, an extension of a so-called Dual-Localization filter approach is presented. The Dual-Localization filter uses two localization filters and an intelligent initialization logic that ensures that both filters move in opposite directions at launch. The primary localization filter uses the estimated direction and the secondary one the opposite direction. As soon as a valid rolling direction signal is present, an initialization logic decides which localization filter has previously moved in the true direction. The localization filter that has moved in the wrong direction is initialized with the states and covariances of the other localization filter. This extension achieves fast, real-time capability, and the accumulated velocity error can be dramatically reduced.
14

Cantin, Pierre, and Alexandre Ern. "Vertex-Based Compatible Discrete Operator Schemes on Polyhedral Meshes for Advection-Diffusion Equations." Computational Methods in Applied Mathematics 16, no. 2 (April 1, 2016): 187–212. http://dx.doi.org/10.1515/cmam-2016-0007.

Full text
Abstract:
We devise and analyze vertex-based, Péclet-robust, lowest-order schemes for advection-diffusion equations that support polyhedral meshes. The schemes are formulated using Compatible Discrete Operators (CDO), namely, primal and dual discrete differential operators, a discrete contraction operator for advection, and a discrete Hodge operator for diffusion. Moreover, discrete boundary operators are devised to weakly enforce Dirichlet boundary conditions. The analysis sheds new light on the theory of Friedrichs' operators at the purely algebraic level. Moreover, an extension of the stability analysis hinging on inf-sup conditions is presented to incorporate divergence-free velocity fields under some assumptions. Error bounds and convergence rates for smooth solutions are derived and numerical results are presented on three-dimensional polyhedral meshes.
15

Nazary-Moghadam, Salman, Mahyar Salavati, Ali Esteki, Behnam Akhbari, Sohrab Keyhani, and Afsaneh Zeinalzadeh. "Reliability of Knee Flexion–Extension Lyapunov Exponent in People With and Without Anterior Cruciate Ligament Deficiency." Journal of Sport Rehabilitation 29, no. 2 (February 1, 2020): 253–56. http://dx.doi.org/10.1123/jsr.2018-0468.

Full text
Abstract:
Objectives: The current study assessed the intrasession and intersession reliability of the knee flexion–extension Lyapunov exponent in patients with anterior cruciate ligament deficiency and healthy individuals. Study Design: University research laboratory. Methods: Kinematic data were collected from 14 patients with anterior cruciate ligament deficiency and 14 healthy individuals who walked on a treadmill at self-selected, low, and high speeds, with and without cognitive load. The intraclass correlation coefficient, standard error of measurement, minimal metrically detectable change, and percentage coefficient of variation were calculated to assess the reliability. Results: The knee flexion–extension Lyapunov exponent had high intrasession reliability, with intraclass correlation coefficients ranging from .83 to .98. In addition, the intersession intraclass correlation coefficient values of these measurements ranged from .35 to .85 regardless of group, gait speed, and dual tasking. In general, relative and absolute reliability were higher in the patients with anterior cruciate ligament deficiency than in the healthy individuals. Conclusions: Although the knee flexion–extension Lyapunov exponent demonstrates good intrasession reliability, its low intersession reliability indicates that changes in these measurements between different days should be interpreted with caution.
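The standard error of measurement reported above relates to the intraclass correlation coefficient by the textbook formula SEM = SD·√(1 − ICC); a sketch with hypothetical SD and ICC values (not the study's data):

```python
import math

def sem_from_icc(sd, icc):
    """Standard error of measurement from reliability: SEM = SD * sqrt(1 - ICC)."""
    return sd * math.sqrt(1.0 - icc)

# Hypothetical between-subject SD of 0.10 and ICC of 0.84:
sem = sem_from_icc(sd=0.10, icc=0.84)  # 0.10 * sqrt(0.16) = 0.04
```

The formula makes the abstract's conclusion concrete: a low intersession ICC directly inflates the SEM, so day-to-day changes smaller than the SEM cannot be trusted.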
16

Magodora, Mangwiro, Hiranmoy Mondal, and Precious Sibanda. "Dual solutions of a micropolar nanofluid flow with radiative heat mass transfer over stretching/shrinking sheet using spectral quasilinearization method." Multidiscipline Modeling in Materials and Structures 16, no. 2 (September 13, 2019): 238–55. http://dx.doi.org/10.1108/mmms-01-2019-0028.

Full text
Abstract:
Purpose The purpose of this paper is to focus on the application of the Chebyshev spectral collocation methodology with Gauss–Lobatto grid points to micropolar fluid flow over a stretching or shrinking surface. Radiation, thermophoresis and nanoparticle Brownian motion are considered. The results have attainable scientific and technological applications in systems involving stretchable materials. Design/methodology/approach The model equations governing the flow are transformed into non-linear ordinary differential equations, which are then reworked into linear form using the Newton-based spectral quasilinearization method (SQLM). Spectral collocation is then used to solve the resulting linearised system of equations. Findings The validity of the model is established using error analysis. The velocity, temperature, micro-rotation, skin friction and couple stress parameters are presented diagrammatically and analysed in detail. Originality/value The study obtains numerical explanations for rapidly convergent solutions using the spectral quasilinearization method. Convergence of the numerical solutions was monitored using residual error analysis. The influence of radiation, heat and mass parameters on the flow is depicted graphically and analysed. The study extends the work of Zheng et al. (2012); the novelty is that the authors take into account nanoparticles, Brownian motion and thermophoresis in the flow of a micropolar fluid.
17

Kimura, Kazuhiro, Kota Sawada, Yoshiaki Toda, and Hideaki Kushima. "Creep Strength Assessment of High Chromium Ferritic Creep Resistant Steels." Materials Science Forum 539-543 (March 2007): 3112–17. http://dx.doi.org/10.4028/www.scientific.net/msf.539-543.3112.

Full text
Abstract:
The degradation mechanism and life prediction method of high chromium ferritic creep resistant steels have been investigated. In the high-stress condition, easy and rapid extension of the recovered soft region results in a significant decrease in creep strength; however, ductility is high. In the low-stress condition, extension of the recovered soft region is mainly controlled by diffusion and is slow; therefore, deformation is concentrated in the recovered soft region along grain boundaries and ductility is extremely low. Delta-ferrite produces a concentration gap due to the difference in equilibrium composition of the austenite and ferrite phases at the normalizing temperature. It increases the driving force of diffusion and promotes recovery of the tempered martensite adjacent to delta-ferrite. A concentration gap may also be produced in the heat-affected zone (HAZ), especially in the fine-grain HAZ, similar to dual phase steel, and it can promote recovery and, therefore, decrease creep strength. The advantage of region-splitting analysis of creep rupture strength for high chromium ferritic creep resistant steels has been confirmed through a residual error analysis. It is important to avoid the generation of a concentration gap in order to improve the stability of the microstructure and to maintain high creep strength.
18

Li, Zengpeng, Can Xiang, and Chengyu Wang. "Oblivious Transfer via Lossy Encryption from Lattice-Based Cryptography." Wireless Communications and Mobile Computing 2018 (September 2, 2018): 1–11. http://dx.doi.org/10.1155/2018/5973285.

Full text
Abstract:
Authentication is the first line of defence preventing malicious entities from accessing smart mobile devices (SMDs). Many cryptographic primitives are available for designing authentication protocols, and the oblivious transfer (OT) protocol is one of the important ones. The first lattice-based OT framework under the universal composability (UC) model was designed via dual mode encryption and prompted us to find an alternative efficient scheme. We note that the "lossy encryption" scheme is an extension of dual mode encryption and can be used to design a UC-secure OT protocol, but investigations of OT via lossy encryption over lattices are absent. Hence, in order to obtain an efficient authentication protocol by improving the performance of the UC-secure OT protocol, in this paper we first design a multibit lossy encryption under the decisional learning with errors (LWE) assumption and then design a new variant of the UC-secure OT protocol for authenticated protocols via the lossy encryption scheme. Additionally, our OT protocol is secure against semi-honest (static) adversaries in the common reference string (CRS) model and within the UC framework.
19

Arévalo-Verjel, Alba Nely, José Luis Lerma, Juan F. Prieto, Juan Pedro Carbonell-Rivera, and José Fernández. "Estimation of the Block Adjustment Error in UAV Photogrammetric Flights in Flat Areas." Remote Sensing 14, no. 12 (June 16, 2022): 2877. http://dx.doi.org/10.3390/rs14122877.

Full text
Abstract:
UAV-DAP (unmanned aerial vehicle-digital aerial photogrammetry) has become one of the most widely used geomatics techniques in the last decade due to its low cost and capacity to generate high-density point clouds, thus demonstrating its great potential for delivering high-precision products with a spatial resolution of centimetres. The question is: how should it be applied to obtain the best results? This research explores different flat scenarios to analyse the accuracy of this type of survey based on photogrammetric SfM (structure from motion) technology, flight planning with ground control points (GCPs), and the combination of forward and cross strips, up to the point of processing. The RMSE (root mean square error) is analysed for each scenario to verify the quality of the results. An equation is adjusted to estimate the a priori accuracy of the photogrammetric survey with digital sensors, identifying the best option for μxyz (weight coefficients depending on the layout of both the GCPs and the image network) for the four scenarios studied. The UAV flights were made in Lorca (Murcia, Spain). The study area has an extension of 80 ha, which was divided into four blocks. The GCPs and checkpoints (ChPs) were measured using dual-frequency GNSS (global navigation satellite system), with a tripod and a centring system on the mark at the indicated point. The photographs were post-processed using Agisoft Metashape Professional (64-bit) software. The flights were made with two multirotor UAVs, a Phantom 3 Professional and an Inspire 2 with a Zenmuse X5S camera. We verify the influence of including additional forward and/or cross strips, combined with four GCPs in the corners plus one additional GCP in the centre, in order to obtain better photogrammetric adjustments based on the preliminary flight planning.
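The checkpoint-based RMSE used to verify survey quality can be sketched generically (this is not the paper's fitted a priori accuracy equation; the residuals below are hypothetical):

```python
import math

def rmse_3d(residuals):
    """Combined XYZ RMSE over checkpoint residuals (dx, dy, dz), in metres."""
    n = len(residuals)
    total = sum(dx * dx + dy * dy + dz * dz for dx, dy, dz in residuals)
    return math.sqrt(total / n)

# Hypothetical residuals at three checkpoints:
res = [(0.01, -0.02, 0.03), (0.00, 0.01, -0.02), (-0.01, 0.02, 0.01)]
err = rmse_3d(res)  # sqrt(0.0025 / 3)
```

Checkpoints held out of the bundle adjustment give an independent accuracy measure, which is why the study measured ChPs separately from GCPs.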
20

Bo, Chunxin, Xiaohong Zhang, and Songtao Shao. "Non-Dual Multi-Granulation Neutrosophic Rough Set with Applications." Symmetry 11, no. 7 (July 12, 2019): 910. http://dx.doi.org/10.3390/sym11070910.

Full text
Abstract:
Multi-attribute decision-making (MADM) is a part of management decision-making and an important branch of modern decision theory and methods. MADM focuses on decision problems with discrete and finite decision schemes. Uncertain MADM is an extension and development of classical multi-attribute decision-making theory. When the attribute values of an MADM problem are expressed by neutrosophic numbers, that is, complex data that need three values to express, it is called an MADM problem in which the attribute values are neutrosophic numbers. However, in practical MADM problems, to minimize errors in individual decision making, we need to consider the ideas of many people and synthesize their opinions. Therefore, it is of great significance to study methods of attribute information aggregation. In this paper, we propose a new theory, the non-dual multi-granulation neutrosophic rough set (MS), to aggregate multiple attribute information and solve a multi-attribute group decision-making (MGDM) problem where the attribute values are neutrosophic numbers. First, we define two kinds of non-dual MS models, intersection-type MS and union-type MS, and study their properties. Then the relationships between MS, non-dual MS, the neutrosophic rough set (NRS) based on the neutrosophic intersection (union) relationship, and the NRS based on the neutrosophic transitive closure relation of the union relationship are outlined, and a figure is given to show them directly. Finally, the definition of non-dual MS on two universes is given, and we use it to solve an MGDM problem with neutrosophic numbers as the attribute values.
APA, Harvard, Vancouver, ISO, and other styles
21

Di Tocco, Joshua, Daniela Lo Presti, Alberto Rainer, Emiliano Schena, and Carlo Massaroni. "Silicone-Textile Composite Resistive Strain Sensors for Human Motion-Related Parameters." Sensors 22, no. 10 (May 23, 2022): 3954. http://dx.doi.org/10.3390/s22103954.

Full text
Abstract:
In recent years, soft and flexible strain sensors have found application in wearable devices for monitoring human motion and physiological parameters. Conductive textile-based sensors are good candidates for developing these sensors. However, achieving a robust electro-mechanical connection and limiting susceptibility to environmental factors remain open challenges. In this work, the manufacturing process of a silicone-textile composite resistive strain sensor, based on a conductive resistive textile encapsulated in a dual layer of silicone rubber, is reported. In the working range typical of biomedical applications (up to 50% strain), the proposed flexible, skin-safe and moisture-resistant strain sensor exhibited high sensitivity (gauge factor of −1.1), low hysteresis (maximum hysteresis error 3.2%) and ease of shaping into custom designs through a facile manufacturing process. To test the developed flexible sensor, two applicative scenarios covering the whole working range were considered: the recording of chest wall expansion during respiratory activity and the capture of elbow flexion/extension movements.
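The two figures of merit quoted (gauge factor −1.1, maximum hysteresis error 3.2%) follow from standard definitions; a small sketch with made-up readings:

```python
def gauge_factor(r0, r, strain):
    """GF = (dR/R0) / strain; negative when resistance falls as strain rises."""
    return ((r - r0) / r0) / strain

def hysteresis_error(loading, unloading, full_scale):
    """Maximum loading/unloading output difference at matched strain points,
    expressed as a fraction of the full-scale output."""
    return max(abs(a - b) for a, b in zip(loading, unloading)) / full_scale

# Made-up readings: resistance drops 5.5% at 5% strain -> GF = -1.1.
gf = gauge_factor(100.0, 94.5, 0.05)
hyst = hysteresis_error([0.0, 1.0, 2.0, 3.0], [0.0, 1.1, 2.05, 3.0], 3.0)
```

The loading/unloading sequences here are placeholder output values; in practice they come from a cyclic tensile test over the 0-50% working range.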
APA, Harvard, Vancouver, ISO, and other styles
22

MA, DA-ZHU, XIN WU, and FU-YAO LIU. "VELOCITY CORRECTIONS TO KEPLER ENERGY AND LAPLACE INTEGRAL." International Journal of Modern Physics C 19, no. 09 (September 2008): 1411–24. http://dx.doi.org/10.1142/s0129183108012996.

Full text
Abstract:
For each celestial body of multi-planet systems, there are two slowly varying quantities or quasi-integrals, the Kepler energy and the Laplace integral, which are closely associated with the orbital semimajor axis and eccentricity, respectively. To correct numerical errors in these quantities, we give an extension of Nacozy's approach and develop a new manifold correction method, where the corresponding reference values of the quantities at every integration step are obtained from integral invariant relations, and only velocity corrections are used to approximately satisfy the two quasi-integrals. As a result, the scheme enhances the quality of the integration by significantly raising the accuracy of the two elements. In particular, it is generally superior to the existing dual scaling method in improving the eccentricity when the adopted integrator provides sufficient precision for the eccentricity.
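The velocity-only correction idea can be illustrated for the Kepler-energy constraint alone: rescale the velocity so the integrated state exactly satisfies a reference energy. This toy sketch omits the Laplace-integral constraint and the integral invariant relations of the paper:

```python
import math

MU = 1.0  # gravitational parameter GM in normalized units (assumption)

def kepler_energy(r, v):
    """Per-unit-mass Kepler energy K = |v|^2 / 2 - MU / |r|."""
    rn = math.sqrt(sum(x * x for x in r))
    v2 = sum(x * x for x in v)
    return 0.5 * v2 - MU / rn

def velocity_scale_correction(r, v, k_ref):
    """Rescale only the velocity so the state satisfies K = k_ref exactly.
    Illustrates velocity correction for one quasi-integral; the paper
    additionally enforces the Laplace integral, omitted here."""
    rn = math.sqrt(sum(x * x for x in r))
    v2 = sum(x * x for x in v)
    s = math.sqrt(2.0 * (k_ref + MU / rn) / v2)
    return [s * x for x in v]

# A slightly drifted circular-orbit state; restore the reference energy.
r, v = [1.0, 0.0, 0.0], [0.0, 1.01, 0.0]
v_corr = velocity_scale_correction(r, v, -0.5)
```

Because only the velocity is touched, the position output of the integrator is left intact, which is the distinguishing feature of this family of manifold corrections.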
APA, Harvard, Vancouver, ISO, and other styles
23

Li, Jing, Jian Liu, Cheng Li, Hui Zhang, and Yizuo Li. "Wearable Wrist Movement Monitoring Using Dual Surface-Treated Plastic Optical Fibers." Materials 13, no. 15 (July 24, 2020): 3291. http://dx.doi.org/10.3390/ma13153291.

Full text
Abstract:
Regarding high-sensitivity human wrist joint motion monitoring in exercise rehabilitation, we develop a pair of novel wearable and sensitivity-enhanced plastic optical fiber (POF) strain sensors consisting of an etched grating fiber and a side-polished fiber stitched into a polyamide wrist brace. The two flexible and surface-treated fibers are featured, respectively, with etched periodic gratings with a pitch of 6 mm and a depth of 0.5 mm and a D-shaped side-polished zone of ~300 µm depth and ~30 mm length, which, correspondingly, show sensitivities of around 0.0176/° and 0.0167/° in normalized bending angle, far larger than a conventional commercial POF, because the surface treatment achieves a more sensitive strain-induced evanescent field interaction in the side-machined fibers. Moreover, in terms of the sensor response to bending deformation in the range of −40° to +40°, the former exhibits better sensitivity at lower angle changes, while the latter is superior as the bending angle increases; the two modified POFs are therefore arranged separately at the side and back of the human wrist in order to decouple the wrist joint behaviors induced by typical flexion-extension or abduction-adduction movements. Then, the circular and pentagonal wrist motion trajectory patterns are investigated, demonstrating a maximum average single-axis motion error of 2.94° via the transformation of spatial angle to plane coordinates for the fabricated pair of POF sensors, which is lower than the recognized standard of 5°, thus suggesting great potential in wearable exercise rehabilitation of human joints in the field of medical treatment and healing.
APA, Harvard, Vancouver, ISO, and other styles
24

Bell, Michael M., and Wen-Chau Lee. "Objective Tropical Cyclone Center Tracking Using Single-Doppler Radar." Journal of Applied Meteorology and Climatology 51, no. 5 (May 2012): 878–96. http://dx.doi.org/10.1175/jamc-d-11-0167.1.

Full text
Abstract:
This study presents an extension of the ground-based velocity track display (GBVTD)-simplex tropical cyclone (TC) circulation center–finding algorithm to further improve the accuracy and consistency of TC center estimates from single-Doppler radar data. The improved center-finding method determines a TC track that ensures spatial and temporal continuity of four primary characteristics: the radius of maximum wind, the maximum axisymmetric tangential wind, and the latitude and longitude of the TC circulation center. A statistical analysis improves the consistency of the TC centers over time and makes it possible to automate the GBVTD-simplex algorithm for tracking of landfalling TCs. The characteristics and performance of this objective statistical center-finding method are evaluated using datasets from Hurricanes Danny (1997) and Bret (1999) over 5-h periods during which both storms were simultaneously observed by two coastal Weather Surveillance Radar-1988 Doppler (WSR-88D) units. Independent single-Doppler and dual-Doppler centers are determined and used to assess the absolute accuracy of the algorithm. Reductions of 50% and 10% in the average distance between independent center estimates are found for Danny and Bret, respectively, over the original GBVTD-simplex method. The average center uncertainties are estimated to be less than 2 km, yielding estimated errors of less than 5% in the retrieved radius of maximum wind and wavenumber-0 axisymmetric tangential wind, and ~30% error in the wavenumber-1 asymmetric tangential wind. The objective statistical center-finding method can be run on a time scale comparable to that of a WSR-88D volume scan, thus making it a viable tool for both research and operational use.
APA, Harvard, Vancouver, ISO, and other styles
25

Morales, Carlos R., Fernando Rangel de Sousa, Valner Brusamarello, and Nestor C. Fernandes. "Evaluation of Deep Learning Methods in a Dual Prediction Scheme to Reduce Transmission Data in a WSN." Sensors 21, no. 21 (November 6, 2021): 7375. http://dx.doi.org/10.3390/s21217375.

Full text
Abstract:
One of the most important challenges in Wireless Sensor Networks (WSN) is extending the lifetime of the sensors, which are battery-powered devices, by reducing energy consumption. Using data prediction to decrease the amount of transmitted data is one approach to this problem. This paper provides a comparison of deep learning methods in a dual prediction scheme to reduce transmissions. The structures of the models are presented along with their parameters. A comparison of the models is provided using different performance metrics, together with the percentage of points transmitted per threshold and the errors between the final data received by the Base Station (BS) and the measured values. The results show that the best-performing model on the dataset was the model with Attention, saving a considerable amount of data in transmission while still maintaining a good representation of the measured data.
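A dual prediction scheme can be sketched independently of the deep models compared in the paper: sensor and base station run identical predictors, and a sample is transmitted only when the prediction error exceeds a threshold. Here a naive last-transmitted-value predictor stands in for the learned models:

```python
def dual_prediction_run(measurements, threshold):
    """Sensor-side simulation of a dual prediction scheme. Because the
    sensor and the base station (BS) run identical predictors, the sensor
    knows exactly what the BS will predict and sends a sample only when
    the prediction error exceeds the threshold. A last-transmitted-value
    predictor stands in here for the paper's deep models (an assumption)."""
    transmitted = []   # (index, value) pairs actually sent over the radio
    bs_series = []     # series reconstructed at the BS
    last = None
    for i, m in enumerate(measurements):
        if last is None or abs(m - last) > threshold:
            transmitted.append((i, m))
            last = m
        bs_series.append(last)  # BS keeps prediction when nothing arrives
    return transmitted, bs_series

tx, bs = dual_prediction_run([10.0, 10.05, 10.4, 10.45], 0.2)
```

By construction, the reconstruction error at the BS is bounded by the threshold, which is exactly the trade-off (transmissions saved vs. fidelity) the paper quantifies per model.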
APA, Harvard, Vancouver, ISO, and other styles
26

CHAN, HONG-MO. "YANG–MILLS DUALITY AS THE ORIGIN OF FERMION GENERATIONS." Modern Physics Letters A 18, no. 08 (March 14, 2003): 537–43. http://dx.doi.org/10.1142/s0217732303009629.

Full text
Abstract:
A non-Abelian extension of electric-magnetic duality implies that dual to confined colour SU(3), there also ought to be a broken threefold symmetry which can play the role of fermion generations. A model constructed on these premises not only gives a raison d'être for 3 and only 3 generations as observed but also offers a natural explanation for the distinctive fermion mass and mixing patterns seen in experiment. A calculation to one-loop order in this model with only 3 fitted parameters already gives correct values, all within present experimental errors, for the following quantities: the mass ratios mc/mt, ms/mb, mμ/mτ, all 9 matrix elements of the CKM mixing matrix |Vrs| for quarks, plus the lepton MNS mixing matrix elements |Uμ3| and |Ue3| studied in neutrino oscillation experiments with respectively atmospheric and reactor neutrinos.
APA, Harvard, Vancouver, ISO, and other styles
27

Vuzitas, Alexis, Marian Petrica, and Claudiu Manea. "Signal void and pseudo-pneumatized sinus in fungal rhinosinusitis – Case report." Romanian Journal of Rhinology 7, no. 28 (December 1, 2017): 251–55. http://dx.doi.org/10.1515/rjr-2017-0027.

Full text
Abstract:
BACKGROUND. Signal void, or the absence of signal on MRI sequences, in the sinonasal region may be encountered in fungal rhinosinusitis cases with the aspect of a pseudo-pneumatized sinus, leading to diagnostic errors. CASE REPORT. We present the case of a 75-year-old woman referred to our clinic for complete and persistent right-sided nasal obstruction. The patient was evaluated using sinus CT and contrast-enhanced head MRI. Opacification of the right maxillary, ethmoid and frontal sinuses, as well as of the right nasal fossa, was seen on CT, with maxillary sinus expansion and osseous erosion. The MRI showed T2 signal void in the maxillary sinus with extension to the nasal fossa, creating the appearance of a pseudo-pneumatized sinus, and hyperintense signal in the ipsilateral anterior ethmoid and frontal sinuses. The patient underwent endoscopic sinus surgery. The dual imaging evaluation of the patient aided the preoperative differential diagnosis and the choice of surgical approach.
APA, Harvard, Vancouver, ISO, and other styles
28

Kudomi, Nobuyuki, Yoshiyuki Hirano, Kazuhiro Koshino, Takuya Hayashi, Hiroshi Watabe, Kazuhito Fukushima, Hiroshi Moriwaki, Noboru Teramoto, Koji Iihara, and Hidehiro Iida. "Rapid Quantitative CBF and CMRO2 Measurements from a Single PET Scan with Sequential Administration of Dual 15O-Labeled Tracers." Journal of Cerebral Blood Flow & Metabolism 33, no. 3 (December 12, 2012): 440–48. http://dx.doi.org/10.1038/jcbfm.2012.188.

Full text
Abstract:
Positron emission tomography (PET) with 15O tracers provides essential information in patients with cerebral vascular disorders, such as cerebral blood flow (CBF), oxygen extraction fraction (OEF), and metabolic rate of oxygen (CMRO2). However, most techniques require an additional C15O scan to compensate for cerebral blood volume (CBV). We aimed to establish a technique to calculate all functional images from only a single dynamic PET scan, without losing accuracy or statistical certainty. The technique is an extension of the previous dual-tracer autoradiography (DARG) approach, but based on the basis function method (DBFM), thus estimating all functional parametric images from a single session of dynamic scanning acquired during the sequential administration of H215O and 15O2. Validity was tested on six monkeys by comparing global OEF by PET with that obtained by arteriovenous blood sampling, and feasibility was tested on young healthy subjects. The mean DBFM-derived global OEF was 0.57 ± 0.06 in monkeys, in agreement with that by the arteriovenous method (0.54 ± 0.06). Image quality was similar and no significant differences were seen from DARG: 3.57% ± 6.44% and 3.84% ± 3.42% for CBF, and −2.79% ± 11.2% and −6.68% ± 10.5% for CMRO2. A simulation study demonstrated similar error propagation between DBFM and DARG. The DBFM method enables accurate assessment of CBF and CMRO2 without an additional CBV scan within a significantly shortened examination period, in clinical settings.
APA, Harvard, Vancouver, ISO, and other styles
29

Pedraza, Carlos, Nicola Clerici, Cristian Forero, América Melo, Diego Navarrete, Diego Lizcano, Andrés Zuluaga, Juliana Delgado, and Gustavo Galindo. "Zero Deforestation Agreement Assessment at Farm Level in Colombia Using ALOS PALSAR." Remote Sensing 10, no. 9 (September 13, 2018): 1464. http://dx.doi.org/10.3390/rs10091464.

Full text
Abstract:
Due to the fast deforestation rates in the tropics, multiple international efforts have been launched to reduce deforestation and develop consistent methodologies to assess forest extent and change. Since 2010, Colombia has implemented the Mainstream Sustainable Cattle Ranching project with the participation of small farmers in a payment for environmental services (PES) scheme in which zero deforestation agreements are signed. To assess the fulfillment of such agreements at farm level, ALOS-1 and ALOS-2 PALSAR fine beam dual imagery for the years 2010 and 2016 was processed with ad hoc routines to estimate stable forest, deforestation, and stable nonforest extent for 2615 participant farms in five heterogeneous regions of Colombia. Landsat VNIR imagery was integrated into the processing chain to reduce classification uncertainties due to radar limitations. Farms in the Meta Foothills region showed zero deforestation during the period analyzed (2010–2016), while other regions showed low deforestation rates, with the exception of the Cesar River Valley (75 ha). Results suggest that topography and dry weather conditions affect radar-based mapping accuracy, i.e., the deforestation and forest classes showed lower user accuracy values in mountainous and dry regions, revealing overestimations in these environments. Nevertheless, ALOS Phased Array L-band SAR (PALSAR) data overall provided accurate, relevant, and consistent information for forest change analysis and for assessing local zero deforestation agreements. Improvements to preprocessing routines and the integration of dense radar time series should be further investigated to reduce classification errors arising from complex topography.
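The farm-level assessment boils down to differencing two forest/nonforest classifications per farm. A minimal NumPy sketch (the class names and pixel-area bookkeeping are illustrative, not the paper's processing chain):

```python
import numpy as np

def forest_change(forest_t0, forest_t1, pixel_area_ha):
    """Classify each pixel as stable forest, deforestation, or stable
    nonforest from two boolean forest masks, and report areas in hectares.
    Forest gain is ignored here, mirroring the three classes the abstract
    reports (an assumption about the class scheme)."""
    stable_forest = forest_t0 & forest_t1
    deforestation = forest_t0 & ~forest_t1
    stable_nonforest = ~forest_t0 & ~forest_t1
    area = lambda m: float(m.sum()) * pixel_area_ha
    return {"stable_forest": area(stable_forest),
            "deforestation": area(deforestation),
            "stable_nonforest": area(stable_nonforest)}

# Toy 2x2 masks, 0.25 ha per pixel.
f2010 = np.array([[True, True], [False, False]])
f2016 = np.array([[True, False], [False, True]])
areas = forest_change(f2010, f2016, 0.25)
```

A zero deforestation agreement is then fulfilled for a farm when its `deforestation` area is zero over the monitoring period.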
APA, Harvard, Vancouver, ISO, and other styles
30

Chiou, Shean-Juinn, Hsien-Ru Chu, I.-Hsum Li, and Lian-Wang Lee. "A Novel Wearable Upper-Limb Rehabilitation Assistance Exoskeleton System Driven by Fluidic Muscle Actuators." Electronics 12, no. 1 (December 31, 2022): 196. http://dx.doi.org/10.3390/electronics12010196.

Full text
Abstract:
This paper proposes a novel design using a torsion spring mechanism with a single fluidic muscle actuator (FMA) to drive a joint with one degree of freedom (DOF) through a steel wire and a proportional pressure regulating valve (PRV). We developed a 4-DOF wearable upper-limb rehabilitation assistance exoskeleton system (WURAES) that is suitable for assisting in the rehabilitation of patients with upper-limb injuries. This system is safe, has a simple mechanism, and exhibits upper-limb motion compliance. The developed WURAES enables patients with upper-limb musculoskeletal injuries and neurological disorders to engage in rehabilitation exercises. Controlling the joint is difficult because of the time-varying hysteresis properties of the FMA and the nonlinear motion between standard extension and flexion. To solve this problem, a proxy-based output feedback sliding mode control (POFSC) was developed to provide appropriate rehabilitation assistance power for the upper-limb exoskeleton and to maintain smooth and safe contact between the WURAES and the patient. The POFSC gives the WURAES overdamped recovery dynamics that realign its motion with the target trajectory without significant error overshoot caused by actuator saturation. The experimental results indicate that the proposed POFSC can control the designed WURAES effectively. The POFSC can monitor the exoskeleton system’s total disturbance and unknown state online and adapt to the exterior environment to enhance the control capability of the designed system. The results indicate that a single FMA with a torsion spring module exhibits a control response similar to a dual-FMA configuration.
APA, Harvard, Vancouver, ISO, and other styles
31

Chu, Hsien-Ru, Shean-Juinn Chiou, I.-Hsum Li, and Lian-Wang Lee. "Design, Development, and Control of a Novel Upper-Limb Power-Assist Exoskeleton System Driven by Pneumatic Muscle Actuators." Actuators 11, no. 8 (August 10, 2022): 231. http://dx.doi.org/10.3390/act11080231.

Full text
Abstract:
An innovative wearable upper-limb power-assist exoskeleton system (UPES) was designed for laborers to improve work efficiency and reduce the risk of musculoskeletal disorders. This novel wearable UPES consists of four joints, each comprising a single actuated pneumatic muscle actuator (PMA) and a torsion spring module driven via a steel cable. Unlike most single-joint applications, where dual PMAs are driven antagonistically, this design combines a torsion spring module with a single PMA via a steel cable for a 1-degree-of-freedom (1-DOF) joint controlled by a proportional-pressure regulator. The proposed wearable UPES, with four driving degrees of freedom, is suitable for power assistance in work and is characterized by a simple structure, safety, and compliance with the motion of an upper limb. However, due to the hysteresis and time-varying characteristics of the PMA, and the non-linear movement between joint flexion and extension, the model parameters are difficult to identify accurately, resulting in unmeasurable uncertainties and disturbances of the wearable UPES. To address this issue, we propose an improved proxy-based sliding mode controller integrated with a linear extended state observer (IPSMC-LESO) to achieve accurate power-assisted control for the upper limb and ensure safe interaction between the UPES and the wearer. This control method can slow the otherwise underdamped recovery motion so that it tends toward the target trajectory without overshoots from large tracking errors that would saturate the actuator, and without deteriorating the power-assist effect during regular operation. The experimental results show that IPSMC-LESO can effectively control a 4-DOF wearable UPES, observe the unknown states and total disturbance of the system online, and adapt to external environment and load changes to improve system control performance.
The results prove that the joint torsion spring module combining the single-PMA can reduce the number of PMAs and proportional-pressure regulators by half and obtain a control response similar to that of the dual-PMA structure.
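A linear extended state observer (LESO) of the kind referenced above can be sketched for a first-order plant; the gains, bandwidth, and Euler discretization below are illustrative assumptions, not the paper's design:

```python
def leso_step(z1, z2, y, u, b0, wo, dt):
    """One Euler step of a first-order linear extended state observer.
    Assumed plant model: dy/dt = f(t) + b0*u, where f lumps the 'total
    disturbance'. z1 tracks y, z2 tracks f; both observer poles are
    placed at -wo (observer bandwidth)."""
    e = y - z1
    l1, l2 = 2.0 * wo, wo * wo
    z1_next = z1 + dt * (z2 + b0 * u + l1 * e)
    z2_next = z2 + dt * (l2 * e)
    return z1_next, z2_next

# Demo: constant unknown disturbance f = 1.5, zero control input.
dt, b0, wo = 0.001, 1.0, 50.0
y, z1, z2 = 0.0, 0.0, 0.0
for _ in range(2000):
    y += dt * 1.5                    # true plant: dy/dt = f
    z1, z2 = leso_step(z1, z2, y, 0.0, b0, wo, dt)
# z2 now approximates the unknown disturbance; z1 tracks y.
```

The controller then cancels `z2` in its control law, which is what lets the sliding mode part run with smaller switching gains.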
APA, Harvard, Vancouver, ISO, and other styles
32

Li, Peng, Xingguang Duan, Guangli Sun, Xiang Li, Yang Zhou, and Yunhui Liu. "Design and control of a climbing robot for inspection of high mast lighting." Assembly Automation 39, no. 1 (February 4, 2019): 77–85. http://dx.doi.org/10.1108/aa-01-2018-006.

Full text
Abstract:
Purpose This paper aims to develop a climbing robot to help people inspect lamps of high-mast lighting. Design/methodology/approach The robot consists of a driving mechanism, a suspension mechanism and a compression mechanism. The driving mechanism is realized by link chains and sprockets, which are arranged opposite to each other to form a dual caterpillar mechanism. The compression mechanism squeezes the caterpillar, and rubber feet “grasp” the steel rope to generate enough adhesion force. The suspension mechanism is used to compensate for the contraction or extension of the chains. The robot is equipped with a DC motor with a rated power of 250 W and a wireless module to communicate with the operator’s console. The dynamic model of the robot and the control strategy are derived, and the stability of the controller is proved. Findings The payload experiment shows the robot can carry a payload of up to 3.7 times its own weight. Even when the payload is 30 kg, the robot can maintain a speed of 1 m/s. The experiments also show that the tracking error of the robot reaches zero. Practical implications The proposed moving mechanism has a high load/weight ratio, which is a verified solution for cable inspection. Originality/value A rope climbing robot for high-mast lighting inspection is proposed. The developed mechanism can reach a speed of 1 m/s with a payload of 30 kg, while its own weight is only 15.6 kg. The payload/weight ratio of the robot is 2.24, which compares well with many climbing robots reported in other renowned journals.
APA, Harvard, Vancouver, ISO, and other styles
33

Filip, Ioan, Florin Dragan, Iosif Szeidert, and Adriana Albu. "Minimum-Variance Control System with Variable Control Penalty Factor." Applied Sciences 10, no. 7 (March 27, 2020): 2274. http://dx.doi.org/10.3390/app10072274.

Full text
Abstract:
The present paper proposes (as its main contribution) an additional self-tuning mechanism for an adaptive minimum-variance control system, whose main goal is to extend its functionality over a large value range of unmeasurable perturbations which disturb the controlled process. Through the standard design procedure, a minimum variance controller by default uses an internal self-tuning mechanism based on the process parameter estimates. However, the parameter which most strongly influences the control performance is the control penalty factor (ρ). This parameter weights the term that describes the control variance in a criterion function whose minimization is the starting point of the control law design. The classical minimum-variance control involves an off-line tuning of this parameter, its value being set as constant throughout the entire operating regime. Based on the measurement of the process output error, the contribution of the proposed strategy consists in a real-time tuning of the control penalty factor, to ensure the stability of the control system even under conditions of high disturbance. The proposed tuning mechanism adjusts this parameter by implementing a bipositional switching strategy based on a sharp hysteresis loop. Therefore, instead of the standard solution, which involves a constant value of the control penalty factor ρ (a priori computed and set), this paper proposes a dual value for this controller parameter. The main objective is to allow the controlled process to operate stably even in more strongly disturbed regimes (regimes where the control system becomes unstable and is usually switched off for safety reasons). To validate the proposed strategy, an induction generator integrated into a wind energy conversion system was considered as the controlled plant.
Operating under the action of strong disturbances (wind gusts, electrical load variations), the extension of safe operating range (thus avoiding the system disengagement) is an important goal of such a control system.
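The bipositional switching strategy with a hysteresis loop can be sketched directly; the thresholds and the two ρ values below are arbitrary placeholders:

```python
def update_penalty_factor(rho, abs_error, e_low, e_high, rho_low, rho_high):
    """Bipositional switch with a sharp hysteresis loop: jump to the high
    penalty when |output error| reaches e_high (calming the control action
    under strong disturbances), and fall back to the low penalty only after
    the error has dropped below e_low < e_high."""
    if abs_error >= e_high:
        return rho_high
    if abs_error <= e_low:
        return rho_low
    return rho  # inside the hysteresis band: keep the current value

# Trace the switch over a disturbance episode (placeholder numbers).
rho, history = 0.1, []
for err in [0.1, 0.3, 0.6, 0.3, 0.1]:
    rho = update_penalty_factor(rho, err, 0.2, 0.5, 0.1, 1.0)
    history.append(rho)
```

The hysteresis band (between `e_low` and `e_high`) is what prevents rapid chattering between the two ρ values when the output error hovers near a single threshold.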
APA, Harvard, Vancouver, ISO, and other styles
34

Pimenta, Luciana Duarte, Danilo Alexandre Massini, Daniel Dos Santos, Leandro Oliveira Da Cruz Siqueira, Andrei Sancassani, Luiz Gustavo Almeida Dos Santos, Bianca Rosa Guimarães, Cassiano Merussi Neiva, and Dalton Muller Pessôa Filho. "WOMEN’S FEMORAL MASS CONTENT CORRELATES TO MUSCLE STRENGTH INDEPENDENTLY OF LEAN BODY MASS." Revista Brasileira de Medicina do Esporte 25, no. 6 (December 2019): 485–89. http://dx.doi.org/10.1590/1517-869220192506208956.

Full text
Abstract:
Introduction: There is limited consensus regarding the recommendation of the most effective form of exercise for bone integrity, despite the fact that weight training exercise promotes an increase in muscle mass and strength as recurrent responses. However, strength variations in women do not depend on muscle mass development as they do in men, but strength enhancement has shown the potential to alter bone mineral content (BMC) for both sexes. Objective: This study analyzed the potential of muscle strength, as well as that of whole-body and regional body composition, to associate with femoral BMC in young women. Methods: Fifteen female college students (aged 24.9 ± 7.2 years) were assessed for regional and whole-body composition using dual-energy X-ray absorptiometry (DXA). Maximum muscle strength was assessed by the one-repetition maximum (1RM) test in the following exercises: bench press (BP), lat pulldown (LP), knee flexion (KF), knee extension (KE) and 45° leg press (45LP). Linear regression analyzed BMC relationships with regional composition and 1RM values. Dispersion and error measures (R²adj and SEE) were tested, with significance defined at p ≤ 0.05. Results: Among body composition variables, only total lean body mass was associated with femoral BMC values (R²adj = 0.37, SEE = 21.3 g). Regarding strength values, 1RM presented determination potential on femoral BMC in the KE exercise (R²adj = 0.46, SEE = 21.3 g). Conclusions: Muscle strength aptitude in exercises for femoral regions is relevant to the femoral mineralization status, having associative potential that is similar to and independent of whole-body lean mass. Therefore, training routines to increase muscle strength in the femoral region are recommended. In addition, increasing muscle strength in different parts of the body may augment bone remodeling stimulus, since it can effectively alter total whole-body lean mass.
Level of Evidence II; Development of diagnostic criteria in consecutive patients (with universally applied reference ‘‘gold’’ standard).
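The reported statistics (R²adj and SEE) follow from standard linear-regression formulas; a small sketch with toy data:

```python
import math

def adjusted_r2_and_see(y, y_pred, n_predictors):
    """Adjusted R^2 and standard error of estimate (SEE) for a fitted
    linear regression with n_predictors explanatory variables."""
    n = len(y)
    mean_y = sum(y) / n
    ss_tot = sum((v - mean_y) ** 2 for v in y)
    ss_res = sum((v - p) ** 2 for v, p in zip(y, y_pred))
    r2 = 1.0 - ss_res / ss_tot
    dof = n - n_predictors - 1
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / dof
    see = math.sqrt(ss_res / dof)          # in the units of y (here, grams)
    return r2_adj, see

# Toy single-predictor fit (hypothetical values, not the study's data).
r2_adj, see = adjusted_r2_and_see([1.0, 2.0, 3.0, 4.0],
                                  [1.1, 1.9, 3.1, 3.9], 1)
```

SEE being expressed in grams is why the study can quote it directly against the femoral BMC values.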
APA, Harvard, Vancouver, ISO, and other styles
35

Liu, Yi, Wenpeng Cao, Wenjing Cao, X. Long Zheng, and Xiaohui Zhang. "Binding of Gpibα to VWF A2 Domain Alters Mechanical Unraveling of the A2 Domain." Blood 136, Supplement 1 (November 5, 2020): 24–25. http://dx.doi.org/10.1182/blood-2020-143275.

Full text
Abstract:
Background: Von Willebrand factor (VWF) is a large, multimeric plasma glycoprotein that plays a critical role in hemostasis. VWF is synthesized and secreted as ultra large (UL) multimers that contain up to 100 protomers. If not processed by ADAMTS13, a plasma metalloprotease, ULVWF may initiate the spontaneous formation of life-threatening thrombi, as seen in thrombotic thrombocytopenic purpura (TTP). The cleavage site is buried under the central β-sheet within the VWF-A2 domain, and tensile force is required to expose the cleavage site for ADAMTS13. Our prior studies demonstrate that several VWF-binding proteins, including coagulation factor VIII, apoB100/LDL, as well as the ectodomains of platelet glycoprotein Ibα (GPIbα), appear to function as cofactors that facilitate the proteolytic cleavage of VWF by ADAMTS13 under shear. However, the mechanism underlying the enhancing effect of GPIbα on ADAMTS13-mediated VWF proteolysis is yet to be determined. Methods: Recombinant human GPIbα was purchased from Sino Biological. The recombinant VWF-A2 domain fragment with a SpyTag on the N-terminus and an AviTag-HisTag on the C-terminus was expressed in HEK 293T cells and affinity-purified by Ni-NTA affinity chromatography. Biotinylation was performed in vitro using a biotin-labeling kit. The binding between VWF-A2 and GPIbα was studied with a custom-built atomic force microscope (AFM) using our established single-molecule binding study protocol. MicroScale Thermophoresis was conducted on a Monolith device to detect the binding affinity between VWF-A2 and GPIbα. A dual-beam mini-tweezers instrument was utilized to characterize the force-induced conformational changes of the A2 domain in the absence and presence of GPIbα with a pulling speed of 200 nm/s. Results: AFM results indicated that specific binding interactions occurred between GPIbα and VWF-A2. The Monolith MST assay revealed a strong binding affinity (Kd of ~20 nM) between GPIbα and the VWF-A2 fragment.
In the optical tweezer study, pulling on a single VWF-A2 resulted in an unfolding event at 10-30 pN with an extension ranging from 30 to 40 nm (Fig. 1A). Gaussian fits of the unfolding extension distributions revealed a most probable force-induced extension of 34.87 ± 2.2 nm (mean ± SEM) (Fig. 1B). Addition of 100 nM of GPIbα led to a noticeable decrease in both unfolding force and extension of VWF-A2 (Fig. 1A). The most probable unfolding extension reduced to 16.05 ± 0.2 nm in the presence of 100 nM of GPIbα (Fig. 1B), indicating the binding of GPIbα may influence mechanical unfolding of VWF-A2. Further, the unfolding results were analyzed by a worm-like chain model fit, which yielded a contour length for the initially folded structure of VWF-A2 at 58.83 ± 2.0 nm (mean ± SEM) and 24.53 ± 0.2 nm in the absence and presence of GPIbα, respectively (Fig. 1C), indicating that the specific interactions between GPIbα and A2 domain may partially unfold the A2 domain. Conclusions: These results demonstrate for the first time that binding of GPIbα to VWF-A2 may alter the force-induced conformational changes in the A2 domain. Under physiological conditions, the glycocalicin (or soluble GPIbα) may bind the VWF-A2 and cause A2 partial unfolding, which may result in excessive cleavage of VWF by ADAMTS13, thus regulating hemostasis. Figure legend: Fig. 1. GPIbα influences the mechanical unfolding of the A2 domain of VWF. (A) Typical optical tweezer pulling traces of A2 domain in the absence (red) and presence (blue) of GPIbα (100 nM). The arrows point to the unfolding events. The pulling speed is 200 nm/s. (B) The histograms of the unfolding extension of pulling VWF-A2 in the absence (red) or presence (blue) of 100 nM of GPIbα at 200 nm/s. Solid lines are Gaussian fits to the distributions. (C) The relationship between unfolding force (pN) and unfolding extension (nm) of pulling the VWF-A2 in the absence (red) or presence (blue) of 100 nM of GPIbα.
The data are fitted to the worm-like chain model (solid lines). Horizontal and vertical error bars are one standard deviation for force and half of the bin width for extension, respectively. Figure Disclosures Cao: Ivygen: Consultancy; Bayer: Research Funding. Zheng: Sanofi: Consultancy, Speakers Bureau; Clotsolution: Other: Co-Founder; Alexion: Consultancy, Speakers Bureau; Takeda: Consultancy, Speakers Bureau.
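The worm-like chain fit mentioned above is commonly done with the Marko-Siggia interpolation formula; the persistence length used below is an illustrative assumption, not a value from the study:

```python
def wlc_force(x, contour_len, persistence_len, kT=4.114):
    """Marko-Siggia worm-like chain interpolation: force (pN) needed to
    hold a chain of the given contour length (nm) at extension x (nm).
    kT = 4.114 pN*nm at room temperature."""
    t = x / contour_len
    return (kT / persistence_len) * (0.25 / (1.0 - t) ** 2 - 0.25 + t)

# Contour length as reported for folded VWF-A2 (58.83 nm); the 0.8 nm
# persistence length is a typical polypeptide-scale assumption.
f30 = wlc_force(30.0, 58.83, 0.8)
f40 = wlc_force(40.0, 58.83, 0.8)
```

Fitting this curve to the measured force-extension pairs is what yields the contour lengths (58.83 nm vs. 24.53 nm) that the abstract compares with and without GPIbα.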
APA, Harvard, Vancouver, ISO, and other styles
36

Larsen, Leif. "Pressure-Transient Behavior of Multibranched Wells in Layered Reservoirs." SPE Reservoir Evaluation & Engineering 3, no. 01 (February 1, 2000): 68–73. http://dx.doi.org/10.2118/60911-pa.

Full text
Abstract:
Summary. Analytical methods are presented to determine the pressure-transient behavior of multibranched wells in layered reservoirs. The computational methods are based on Laplace transforms and numerical inversion to generate type curves for use in direct analyses of pressure-transient data. Any number of branches with arbitrary direction and deviation can in principle be handled, although the computational cost will increase considerably with the number of branches. However, due to practical considerations, a large number of branches is also unlikely in most cases.
Introduction. With increased interest in multibranched wells as a means to improve productivity, it is important to have computational methods for predictions and analyses of such wells. Ozkan et al.1 presented such solutions for dual lateral wells in homogeneous formations. The present paper extends these results to multibranched wells in layered reservoirs. The approach covers reservoirs both with and without formation crossflow, but cases without crossflow can also be handled similarly to homogeneous reservoirs. Boundary effects are not included, but can be added from an equivalent homogeneous model if pseudoradial flow is reached within the infinite-acting period. The methods used in this paper are direct extensions of methods presented by Larsen2 for deviated wells in layered reservoirs. The results in Ref. 2 apply for any deviation, and hence also for horizontal segments within different layers. The approach was restricted, however, to cover at most one segment within each layer, with no vertical overlap. In the approach used in the present paper, these restrictions have been removed.
Mathematical Approach. Except for simple cases with only vertical branches, general multibranched wells will require a three-dimensional flow equation within individual layers to capture the flow geometry.
If the horizontal permeability is independent of direction within each layer, flow within Layer j can be described by the equation $k_j\left(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}\right)p_j + k_j'\,\frac{\partial^2 p_j}{\partial z^2} = \mu\phi_j c_{tj}\,\frac{\partial p_j}{\partial t}$ (1) under normal assumptions, where $k_j$ and $k_j'$ denote horizontal and vertical permeability, and the other indexed variables have the standard meaning for each layer. Copying Ref. 2, an approach similar to Refs. 3 and 4 will be followed with vertical variation of pressure within each layer removed by passing to the vertical average. For Layer j, the new pressure $p_{aj}(x,y,t) = \frac{1}{h_j}\int_{z_{j-1}}^{z_j} p_j(x,y,z,t)\,dz$ (2) is then obtained, where $z_{j-1}$ and $z_j = z_{j-1} + h_j$ are the z coordinates of the lower and upper boundaries of the layer. There is one major problem with the direct approach above. It cannot handle the boundary condition at the wellbore for nonvertical segments. To get around this problem, each perforated layer segment will be replaced by a uniform-flux fracture in the primary solution scheme. This approach is illustrated in Fig. 1 for a two-branched well in a three-layered reservoir, with Branch 1 fully perforated through the reservoir and Branch 2 fully perforated in Layers 1 and 2 and partially completed with a horizontal segment in Layer 3. Since an infinite-conductivity wellbore (consisting of the branches) will be assumed, a time-dependent skin factor is added to each fracture to get the actual branch (i.e., deviated well) pressure from the fracture solution. This is identical to the approach used in Ref. 2 for individual branches. With branch angle $\theta_j$ (as a deviation from the vertical) and completed branch length $L_{wj}$ in Layer j, the associated fracture half-length will be given by the identity $x_{fj} = \tfrac{1}{2} L_{wj}\sin\theta_j$ (3) for each j. The completed branch length $L_{wj}$ is assumed to consist of a single fully perforated interval.
The fracture half-length in layers with vertical branch segments will be set equal to the wellbore radius $r_{wa}$. To capture deviated branches with more than one interval within a layer, the model can be subdivided by introducing additional layers. If Eq. 1 is integrated from $z_{j-1}$ to $z_j$, as shown in Eq. 2, then the new flow equation $k_j h_j\left(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}\right)p_{aj} + k_j'\,\frac{\partial p_j}{\partial z}\Big|_{z_j} - k_j'\,\frac{\partial p_j}{\partial z}\Big|_{z_{j-1}} = \mu\phi_j c_{tj} h_j\,\frac{\partial p_{aj}}{\partial t}$ (4) is obtained. The two gradient terms remaining in Eq. 4 represent flux through the upper and lower boundaries of Layer j. In the standard multiple-permeability modeling of layered reservoirs, the gradient terms are replaced by difference expressions in the form $k_j'\,\frac{\partial p_j}{\partial z}\Big|_{z_j} = k_{j+1}'\,\frac{\partial p_{j+1}}{\partial z}\Big|_{z_j} = \lambda_j'(p_{a,j+1} - p_{aj})$ (5) for each j, where $\lambda_j'$ is a constant determined from reservoir parameters or adjusted to fit the response of the well. For details on how to choose crossflow parameters, see Refs. 3 and 4, and additional references cited in those papers. Additional fracture to well drawdown is assumed not to affect this approach. Since vertical flow components are important for deviated branches, the crossflow parameters in Eq. 5 will be important elements of the mathematical model. If, for instance, the standard choice from Refs. 3 and 4 is used, then vertical flow will be reduced even in isotropic homogeneous formations, but doubling the default $\lambda'$ is sufficient in many cases to remove this error. However, since these parameters will be quite uncertain in field data anyway, the modeling should be more than adequate.
APA, Harvard, Vancouver, ISO, and other styles
37

Li, Peiyang, Cunbo Li, Joyce Chelangat Bore, Yajing Si, Fali Li, Zehong Cao, Yangsong Zhang, et al. "L1-norm based time-varying brain neural network and its application to dynamic analysis for motor imagery." Journal of Neural Engineering 19, no. 2 (March 30, 2022): 026019. http://dx.doi.org/10.1088/1741-2552/ac59a4.

Full text
Abstract:
Abstract Objective. Electroencephalogram (EEG)-based motor imagery (MI) brain-computer interfaces offer a promising way to improve the efficiency of motor rehabilitation and motor skill learning. In recent years, the power of dynamic network analysis for MI classification has been demonstrated. In practice, its usability mainly depends on accurate estimation of brain connectivity. However, traditional dynamic network estimation strategies such as the adaptive directed transfer function (ADTF) are designed in the L2-norm. They typically estimate a series of pseudo connections caused by outliers, which results in biased features and further limits online application. Thus, accurately inferring dynamic causal relationships under outlier influence is an urgent problem. Approach. In this work, we proposed a novel ADTF that solves the dynamic system in the L1-norm space (L1-ADTF), so as to restrict the outlier influence. To enhance its convergence, we designed an iteration strategy with the alternating direction method of multipliers, which could be used for the solution of the dynamic state-space model restricted to the L1-norm space. Furthermore, we compared the L1-ADTF to the traditional ADTF and its dual extension across both simulation and real EEG experiments. Main results. A quantitative comparison between the L1-ADTF and other ADTFs in simulation studies demonstrates that fewer bias errors and more desirable dynamic state transformation patterns can be captured by the L1-ADTF. Application to real MI EEG datasets heavily contaminated by ocular artifacts also reveals the efficiency of the proposed L1-ADTF approach in extracting time-varying brain neural network patterns, even when more complex noises are involved. Significance. The L1-ADTF may not only be capable of tracking time-varying brain network state drifts robustly but may also be useful in solving a wide range of dynamic systems such as trajectory tracking problems and dynamic neural networks.
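The L1-norm estimation described in the abstract above relies on the alternating direction method of multipliers (ADMM). A minimal, generic sketch of ADMM for a least-absolute-deviations fit illustrates why the L1-norm resists outliers; this is an illustration of the general technique only, not the authors' L1-ADTF, and all variable names and parameters are hypothetical:

```python
import numpy as np

def lad_admm(A, b, rho=1.0, n_iter=500):
    """Minimise ||A x - b||_1 via ADMM (least absolute deviations).

    Splitting z = A x - b lets a soft-threshold handle the L1 term,
    which is what suppresses the influence of outliers.
    """
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(m), np.zeros(m)
    # Cache the least-squares solve operator (A^T A)^{-1} A^T
    ls_op = np.linalg.solve(A.T @ A, A.T)
    for _ in range(n_iter):
        x = ls_op @ (b + z - u)                  # x-update: least squares
        r = A @ x - b
        z = np.sign(r + u) * np.maximum(np.abs(r + u) - 1.0 / rho, 0.0)  # soft threshold
        u = u + r - z                            # dual (scaled multiplier) update
    return x

# Outlier-contaminated toy problem: the L1 fit stays near the true coefficients.
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 2))
w_true = np.array([2.0, -1.0])
b = A @ w_true
b[:10] += 10.0                                   # gross outliers
w_hat = lad_admm(A, b)
```

Because the clean majority of residuals dominates the L1 objective, the few corrupted observations barely move the solution, in contrast to an L2 fit.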
APA, Harvard, Vancouver, ISO, and other styles
38

Guimarães, Bianca Rosa, Luciana Duarte Pimenta, Danilo Alexandre Massini, Daniel dos Santos, Leandro Oliveira da Cruz Siqueira, Astor Reis Simionato, Luís Gustavo Almeida dos Santos, Cassiano Merussi Neiva, and Dalton Muller Pessôa Filho. "MUSCULAR STRENGTH AND REGIONAL LEAN MASS INFLUENCE BONE MINERAL HEALTH AMONG YOUNG FEMALES." Revista Brasileira de Medicina do Esporte 24, no. 3 (May 2018): 186–91. http://dx.doi.org/10.1590/1517-869220182403183956.

Full text
Abstract:
ABSTRACT Introduction: Strength training is able to stimulate bone tissue metabolism by increasing mechanical stress on the skeletal system. However, the direct relationship is not yet well established among younger women, since it is necessary to describe which strength enhancement level is able to produce effective changes in bone integrity. Objectives: This study analyzed the influence of muscle strength on bone mineral content (BMC) and bone mineral density (BMD) among female college students. Methods: Fifteen women (24.9 ± 7.2 years) were assessed for regional and whole-body composition by dual-energy X-ray absorptiometry (DXA). The one-repetition maximum (1-RM) tests were performed on flat bench press (BP), lat pulldown (LPD), leg curl (LC), knee extension (KE), and 45 degree leg press (45LP). Linear regression analyzed the relationships of BMC/BMD with regional composition and 1-RM test values. Measures of dispersion and error (R2 adj and SEE) were tested, defining a p-value of 0.05. Results: The mean value of whole-body BMC was 1925.6 ± 240.4 g and the BMD was 1.03 ± 0.07 g/cm2. Lean mass (LM) was related to BMC (R2 adj = 0.86, p<0.01, and SEE = 35.6 g) and BMD (R2 adj = 0.46, p<0.01, SEE = 0.13 g) in the lower limbs (LL). The 1-RM tests in BP were associated with BMC and BMD (R2 adj = 0.52, p<0.01, SEE = 21.4 g, and R2 adj = 0.68, p<0.01, SEE = 0.05 g/cm2, respectively) in the upper limbs, while the 1-RM tests in KE were related to BMC and BMD (R2 adj = 0.56, p<0.01, SEE = 62.6 g, and R2 adj = 0.58, p<0.01, SEE = 0.11 g/cm2, respectively) in the lower limbs. Conclusions: Hence, the 1-RM tests for multi-joint exercises are relevant to the regional BMC/BMD, reinforcing the need to include resistance exercises in training routines with the purpose of improving muscular strength and regional lean mass, thereby ensuring a healthy bone mineral mass.
Level of Evidence II; Development of diagnostic criteria in consecutive patients (with applied reference "gold" standard).
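The adjusted R² and SEE reported in the abstract above are standard linear-regression diagnostics. A minimal sketch of how they are computed (generic formulas, not the authors' code; the example data are hypothetical):

```python
import numpy as np

def fit_stats(x, y):
    """Simple linear regression with adjusted R^2 and standard error of estimate (SEE)."""
    n, p = len(x), 1                        # n observations, one predictor
    X = np.column_stack([np.ones(n), x])    # design matrix with intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    r2 = 1.0 - ss_res / ss_tot
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)   # penalises extra predictors
    see = np.sqrt(ss_res / (n - p - 1))     # standard error of the estimate
    return beta, r2_adj, see

# Example: a noiseless linear relation gives adjusted R^2 = 1 and SEE = 0.
x = np.arange(15, dtype=float)              # n = 15, matching the study's sample size
y = 1.03 + 0.5 * x
beta, r2_adj, see = fit_stats(x, y)
```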
APA, Harvard, Vancouver, ISO, and other styles
39

Calais, Eric, Roger Bayer, Jean Chery, Fabrice Cotton, Erik Doerflinger, Mireille Flouzat, Francois Jouanne, et al. "REGAL; reseau GPS permanent dans les Alpes occidentales; configuration et premiers resultats." Bulletin de la Société Géologique de France 172, no. 2 (March 1, 2001): 141–58. http://dx.doi.org/10.2113/172.2.141.

Full text
Abstract:
Abstract The kinematics of the present-day deformation in the western Alps is still poorly known, mostly because of a lack of direct measurements of block motion and internal deformation. Geodetic measurements have the potential to provide quantitative estimates of crustal strain and block motion in the Alps, but the low expected rates, close to the accuracy of the geodetic techniques, make such measurements challenging. Indeed, an analysis of 2.5 years of continuous GPS data at Torino (Italy), Grasse (France), and Zimmerwald (Switzerland), showed that the present-day differential motion across the western Alps does not exceed 3 mm/yr [Calais, 1999]. Continuous measurements performed at permanent GPS stations provide unique data sets for rigorously assessing crustal deformation in regions of low strain rates by reducing the amount of time necessary to detect a significant strain signal, minimizing systematic errors, providing continuous position time series, and possibly capturing co- and post-seismic motion. In 1997, we started the implementation of a network of permanent GPS stations in the western Alps and their surroundings (REGAL network). The REGAL network mostly operates dual frequency Ashtech Z12 CGRS GPS stations with choke-ring antennae. In most cases, the GPS antenna is installed on top of a 1.5 to 2.5 m high concrete pillar directly anchored into the bedrock. The data are currently downloaded once daily and sent to a data center located at Geosciences Azur, Sophia Antipolis where they are converted into RINEX format, quality checked, archived, and made available to users. Data are freely available in raw and RINEX format at http://kreiz.unice.fr/regal/. The GPS data from the REGAL network are routinely processed with the GAMIT software, together with 10 global IGS stations (KOSG, WZTR, NOTO, MATE, GRAZ, EBRE, VILL, CAGL, MEDI, UPAD) that serve as ties with the ITRF97.
We also include the stations ZIMM, TORI, GRAS, TOUL, GENO, HFLK, OBER because of their tectonic interest. We obtain long term repeatabilities on the order of 2-3 mm for the horizontal components, 8-10 mm for the vertical component. Using a noise model that combines white and coloured noise (flicker noise, spectral index 1), we find uncertainties on the velocities ranging from 1 mm/yr for the oldest stations (ZIMM, GRAS, TOUL, TORI, SJDV) to 4-5 mm/yr for the most recently installed (CHAT, MTPL). Station velocities obtained in ITRF97 are rotated into a Eurasian reference by subtracting the rigid rotation computed from ITRF97 velocities at 11 central European sites located away from major active tectonic structures (GOPE, JOZE, BOR1, LAMA, ZWEN, POTS, WETT, GRAZ, PENC, Effelsberg, ONSA). The resulting velocity field shows residual motions with respect to Eurasia lower than 3 mm/yr. We obtain at TORI, in the Po plain, a residual velocity of 2.3 ± 0.8 mm/yr to the SSW and a velocity of 1.9 ± 1.1 mm/yr at SJDV, on the Alpine foreland. These results indicate that the current kinematic boundary conditions across the western Alps are extensional, as also shown by the SJDV-TORI baseline time series. We obtain at MODA (internal zones) a residual velocity of 1.2 ± 1.2 mm/yr to the SSE. The MODA-FCLZ baseline shows lengthening at a rate of 1.6 ± 0.8 mm/yr. These results are still marginally significant but suggest that the current deformation regime along the Lyon-Torino transect is extensional, as also indicated by recent seismotectonic data. It is in qualitative agreement with local geodetic measurements in the internal zones (Briancon area) but excludes more than 2.4 mm/yr of extension (FCLZ-MODA baseline, upper uncertainty limit at 95% confidence). Our results indicate a different tectonic regime in the southern part of the western Alps and Provence, with NW-SE to N-S compression.
The GRAS-TORI baseline, for instance, shows shortening at a rate of 1.4 ± 1.0 mm/yr. This result is consistent with seismotectonic data and local geodetic measurements in these areas. The Middle Durance fault zone, one of the main active faults in this area, is crossed by the GINA-MICH baseline, which shows shortening at a rate of 1.0 ± 0.8 mm/yr. This result is only marginally significant, but confirms the upper bound of 2 mm/yr obtained from triangulation-GPS comparisons. The REGAL permanent GPS network has been operating since the end of 1997 for the oldest stations and will continue to be densified. Although they are still close to or within their associated uncertainties, preliminary results provide, for the first time, a direct estimate of crustal deformation across and within the western Alps.
APA, Harvard, Vancouver, ISO, and other styles
40

Saleri, N. G. "Re-Engineering Simulation: Managing Complexity and Complexification in Reservoir Projects." SPE Reservoir Evaluation & Engineering 1, no. 01 (February 1, 1998): 5–11. http://dx.doi.org/10.2118/36696-pa.

Full text
Abstract:
Summary Managing complexity and technological complexification is a necessity in today's business environment. This paper outlines a method to increase value addition significantly by multidisciplinary reservoir studies. In this context, value addition refers to a positive impact on a business decision. The approach ensures a level of complexification in line both with business questions at hand and the realities of reservoirs. Sparse well control, seismic uncertainties, imperfect geologic models, time constraints, software viruses, and computing hardware limitations represent some common reservoir realities. The process model detailed in the paper uses these apparent shortcomings to moderate (i.e., guide) the level of complexification. Several project examples illustrate the implementation of the process model. The paper is an extension of three previous investigations1–3 that deal with issues of method and uncertainty in reservoir-performance forecasting. Introduction Multidisciplinary teams and data have become the standard 1990's methods to address large-scale reservoir-management issues. Concurrently, reservoir simulation has assumed the role as a "knowledge manager" of ever-growing quantities of information. The paper pursues three basic questions: how can we maximize the value added from integrated reservoir studies; how can we achieve a pragmatic balance between business objectives/timetables and problem complexification; and how best can we use the technology dividend provided by the explosion of computing power? Primarily because of their size, Saudi Arabian fields amplify the significance of these three questions. What has emerged is the realization that reservoir simulation needs to provide a proper demarcation between scientific and business objectives to remain business-relevant. The discussion that follows consists of two main parts. First, we present an analysis of complexity in general and reservoir systems in particular.
This is followed by a process model (i.e., parallel planning plus) and a set of principles that link business needs, reservoir realities, and simulation in the context of multidisciplinary studies. The following definitions will facilitate the discussion that follows. Complex (adjective): Composed of interconnected parts. Complexity: The state of being intricate. The degree of interconnection among various parts. Complexification: The process of adding incremental levels of complexity to a system. Detail vs. Dynamic Complexity A vast array of multisourced information makes up reservoir systems (Fig. 1). Reservoir simulation is our attempt to link the "detail complexity" of such a system to the "dynamic complexity"4,5 expected in business decisions. In this regard, a systems engineering perspective to reservoir management is very relevant. Senge4 defines two types of complexity: detail and dynamic. Detail complexity entails defining individual ingredients in fine detail, while dynamic complexity refers to the dynamic, often unpredictable, outcomes of the interactions of the individual components. Senge4 states that "the real leverage in most management situations lies in understanding dynamic complexity, not detail complexity." This is precisely true for many of the questions facing reservoir-management project teams in the industry. When to initiate an EOR project or pattern realignment or how to develop a field are typical dynamic complexity problems. Relative-permeability data, field-management strategies, or wellbore hydraulics are examples of detail complexity. Geologic, geostatistical, and reservoir-simulation models are also examples of detail complexity, but represent higher orders of organization. Interestingly, reservoir-simulation models have a dual function: first, as an organizer of detail complexity, and, second, as a tool for interpreting dynamic complexity (a distinction from geologic models). 
Technological complexification is the process of adding incremental levels of detail complexity to a system to represent its dynamic complexity more rigorously. Each one of the components depicted in Fig. 1 offers an avenue of complexification. Perhaps ironically, every component also carries an element of uncertainty. New technologies are adding significantly to the detail complexity available to multidisciplinary teams. One can see that advances in computing technology, for instance, play a role in the cycle of complexification that Fig. 2 shows. As we acquire more computing power, we can build more complex models, which will further delineate the questions being addressed, calling for more computing power, and so on. The real question, however, is whether we are in fact getting a better answer to the questions posed. Or, alternatively, are we making a difference? Multidisciplinary studies are vulnerable to the tendency towards maximal detail complexity. As one of the constituent disciplines (e.g., seismic, geostatistics) produces a more detailed reservoir representation, the pressure mounts for the other disciplines to match the level of complexification in their respective areas. However, for many reservoir problems, we may have a nonlinear relationship between dynamic and detail complexity (Fig. 3). As the number of detail complexity elements rise, the number of interactions among the elements proliferate. Any one of these interactions can be a show stopper. For example, reservoir-simulation models constructed at the detail level (i.e., scale) of geocellular models can become numerically unstable or prohibitively central-processing-unit (CPU) intensive - either way, a nonsolution. Complexification vs. Error Expectations The reservoir system depicted in Fig. 1 does not represent a controlled data environment; i.e., we are not operating in a setting where we can control the quality and quantity (sufficiency) of data. 
Therefore, in reservoir systems, the concept of "garbage in/garbage out," when taken literally, is an oxymoron. There is always some contamination (error or uncertainty) in one of the detail complexity elements. Thus, we need to redefine our mission as "given the data environment as is, what is an acceptable error, and what is an appropriate level of complexification?"
APA, Harvard, Vancouver, ISO, and other styles
41

Ferrando, Pere J., and David Navarro-González. "A Multidimensional Item Response Theory Model for Continuous and Graded Responses With Error in Persons and Items." Educational and Psychological Measurement, March 10, 2021, 001316442199841. http://dx.doi.org/10.1177/0013164421998412.

Full text
Abstract:
Item response theory "dual" models (DMs), in which both items and individuals are viewed as sources of differential measurement error, have so far been proposed only for unidimensional measures. This article proposes two multidimensional extensions of existing DMs: the M-DTCRM (dual Thurstonian continuous response model), intended for (approximately) continuous responses, and the M-DTGRM (dual Thurstonian graded response model), intended for ordered-categorical responses (including binary). A rationale for the extension to the multiple-content-dimensions case, which is based on the concept of the multidimensional location index, is first proposed and discussed. Then, the models are described using both the factor-analytic and the item response theory parameterizations. Procedures for (a) calibrating the items, (b) scoring individuals, (c) assessing model appropriateness, and (d) assessing measurement precision are finally discussed. The simulation results suggest that the proposal is quite feasible, and an illustrative example based on personality data is also provided. The proposals should be of particular interest for multidimensional questionnaires in which the number of items per scale would not be enough to arrive at stable estimates if the existing unidimensional DMs were fitted on a separate-scale basis.
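For orientation, the graded-response family that the M-DTGRM extends can be illustrated with the standard unidimensional graded response model, where category probabilities are differences of cumulative logistic curves. This is a generic textbook sketch, not the authors' dual model, and the parameter values are hypothetical:

```python
import numpy as np

def grm_probs(theta, a, b):
    """Category probabilities in the (unidimensional) graded response model.

    theta : latent trait value
    a     : item discrimination
    b     : ordered category thresholds (length K-1 for K categories)
    """
    b = np.asarray(b, dtype=float)
    # Cumulative curves P(X >= k | theta), bracketed by 1 (lowest) and 0 (beyond highest)
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    cum = np.concatenate([[1.0], p_star, [0.0]])
    return cum[:-1] - cum[1:]               # adjacent differences give category probabilities

# Four ordered categories defined by three thresholds; probabilities sum to 1.
probs = grm_probs(theta=0.5, a=1.2, b=[-1.0, 0.0, 1.0])
```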
APA, Harvard, Vancouver, ISO, and other styles
42

Ern, Alexandre, Martin Vohralík, and Mohammad Zakerzadeh. "Guaranteed and robust {$L^2$}-norm a posteriori error estimates for {1D} linear advection problems." ESAIM: Mathematical Modelling and Numerical Analysis, July 6, 2020. http://dx.doi.org/10.1051/m2an/2020041.

Full text
Abstract:
We propose a reconstruction-based a posteriori error estimate for linear advection problems in one space dimension. In our framework, a stable variational ultra-weak formulation is adopted, and the equivalence of the $L^2$-norm of the error with the dual graph norm of the residual is established. This dual norm is shown to be localizable over vertex-based patch subdomains of the computational domain under the condition of the orthogonality of the residual to the piecewise affine hat functions. We show that this condition is valid for some well-known numerical methods including continuous/discontinuous Petrov--Galerkin and discontinuous Galerkin methods. Consequently, a well-posed local problem on each patch is identified, which leads to a global conforming reconstruction of the discrete solution. We prove that this reconstruction provides a guaranteed upper bound on the $L^2$ error. Moreover, up to a constant, it also gives local lower bounds on the $L^2$ error, where the generic constant is proven to be independent of mesh-refinement, polynomial degree of the approximation, and the advective velocity. This leads to robustness of our estimates with respect to the advection as well as the polynomial degree. All the above properties are verified in a series of numerical experiments, additionally leading to asymptotic exactness. Motivated by these results, we finally propose a heuristic extension of our methodology to any space dimension, achieved by solving local least-squares problems on vertex-based patches. Though no longer guaranteed, the resulting error indicator is numerically robust with respect to both advection velocity and polynomial degree, for a collection of two-dimensional test cases including discontinuous solutions aligned and not aligned with the computational mesh.
APA, Harvard, Vancouver, ISO, and other styles
43

May-Newman, Karen, Maria T. Matyska, and Martin N. Lee. "Design and Preliminary Testing of a Novel Dual-Chambered Syringe." Journal of Medical Devices 5, no. 2 (June 1, 2011). http://dx.doi.org/10.1115/1.4003822.

Full text
Abstract:
Intravenous catheterization is the most common invasive medical procedure today and is designed to introduce medication directly into the blood stream. Common practice is to administer medicine with one syringe, followed by a saline flush to clear the line of any residual medication. The risk of infection due to the introduction of bacteria in the catheter hub is increased with the number of times the hub is accessed. In addition, the two-step process adds millions of nursing hours per year and is prone to error. The goal of this effort was to design and test a dual-chamber syringe that could be reliably used for both dispensing medicine and the saline flush, and be produced at a low cost. The syringe has a novel dual-chamber design with a proximal chamber for medicine and a distal chamber that contains saline. The saline chamber has a fixed volume when the handle is locked into position, which allows the handle to control the variable volume of the medicine chamber. Between the two chambers is a plunger that surrounds the small channel (which is an extension of the distal chamber) that separates the saline from the medicine. When the distal chamber is unlocked, the handle controls the volume of the saline chamber. By this mechanism, the syringe is able to inject the medicine followed by the saline flush with a single access to the catheter hub. The smooth operation of the device relies on a locking mechanism to control the rear plunger and volume of the distal saline chamber, and a bubble plug residing in the small channel between the chambers that prevents mixing of the medicine and saline fluids. The bubble plug is held in place by a balance of forces that depend on geometric variables and fluid properties. The chosen design prevents mixing of the two fluids during the operation of the device, which was experimentally validated with mass spectrometry.
The dual-chamber syringe has successfully achieved the design goal of a single syringe for the two-step catheter procedure of dispensing medicine and a saline flush. This novel design will reduce the potential for catheter-based infection, medical errors, medical waste, and clinician time. Preliminary test results indicate that this innovation can significantly improve the safety and efficiency of catheter-based administration of medicine.
APA, Harvard, Vancouver, ISO, and other styles
44

Montenbruck, Oliver, Stefan Hackel, Martin Wermuth, and Franz Zangerl. "Sentinel-6A precise orbit determination using a combined GPS/Galileo receiver." Journal of Geodesy 95, no. 9 (September 2021). http://dx.doi.org/10.1007/s00190-021-01563-z.

Full text
Abstract:
Abstract The Sentinel-6 (or Jason-CS) altimetry mission provides a long-term extension of the Topex and Jason-1/2/3 missions for ocean surface topography monitoring. Analysis of altimeter data relies on highly-accurate knowledge of the orbital position and requires radial RMS orbit errors of less than 1.5 cm. For precise orbit determination (POD), the Sentinel-6A spacecraft is equipped with a dual-constellation GNSS receiver. We present the results of Sentinel-6A POD solutions for the first 6 months since launch and demonstrate a 1-cm consistency of ambiguity-fixed GPS-only and Galileo-only solutions with the dual-constellation product. A similar performance (1.3 cm 3D RMS) is achieved in the comparison of kinematic and reduced-dynamic orbits. While Galileo measurements exhibit 30–50% smaller RMS errors than those of GPS, the POD benefits most from the availability of an increased number of satellites in the combined dual-frequency solution. Considering obvious uncertainties in the pre-mission calibration of the GNSS receiver antenna, an independent inflight calibration of the phase centers for GPS and Galileo signal frequencies is required. As such, Galileo observations cannot provide independent scale information and the estimated orbital height is ultimately driven by the employed force models and knowledge of the center-of-mass location within the spacecraft. Using satellite laser ranging (SLR) from selected high-performance stations, a better than 1 cm RMS consistency of SLR normal points with the GNSS-based orbits is obtained, which further improves to 6 mm RMS when adjusting site-specific corrections to station positions and ranging biases. For the radial orbit component, a bias of less than 1 mm is found from the SLR analysis relative to the mean height of 13 high-performance SLR stations.
Overall, the reduced-dynamic orbit determination based on GPS and Galileo tracking is considered to readily meet the altimetry-related Sentinel-6 mission needs for RMS height errors of less than 1.5 cm.
APA, Harvard, Vancouver, ISO, and other styles
45

Pillay, Narushan, HongJun Xu, and Fambirai Takawira. "Repeat-punctured superorthogonal convolutional turbo codes on AWGN and flat Rayleigh fading channels." South African Journal of Science 106, no. 11/12 (November 18, 2010). http://dx.doi.org/10.4102/sajs.v106i9/10.180.

Full text
Abstract:
Repeat-punctured turbo codes, an extension of the conventional turbo-coding scheme, have shown a significant increase in bit-error rate performance at moderate to high signal-to-noise ratios for short frame lengths. Superorthogonal convolutional turbo codes (SCTC) make use of superorthogonal signals to improve the performance of conventional turbo codes, and a coding scheme that applies the repeat-punctured technique to SCTC has been shown to perform better. We investigated two new low-rate coding schemes, repeat-punctured superorthogonal convolutional turbo codes (RPSCTC) and dual-repeat-punctured superorthogonal convolutional turbo codes (DRPSCTC), that make use of superorthogonal signaling, together with repetition and puncturing, to improve the performance of SCTC for reliable and effective communications. Simulation results in the additive white Gaussian noise (AWGN) channel and the frequency non-selective Rayleigh fading channel are presented together with analytical bounds of bit error probabilities, derived from transfer function bounding techniques. From the simulation results and the analytical bounds presented, it is evident that RPSCTC and DRPSCTC offer superior performance to SCTC in the AWGN channel, as well as in flat Rayleigh non-line-of-sight fading channels. The distance spectrum is also presented for the new schemes and accounts for the performance improvement rendered in simulations. It is important to note that the improved performance that SCTC, and consequently RPSCTC and DRPSCTC, exhibit is achieved at the expense of bandwidth expansion and complexity, and would be ideal for power-limited satellite communication links or interference-limited systems.
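The contribution of repetition in such schemes can be isolated with a minimal Monte Carlo experiment: soft-combining several noisy copies of a BPSK symbol averages the channel noise and lowers the bit error rate at a fixed per-symbol SNR. This is an illustrative sketch of plain repetition over an AWGN channel only, not of RPSCTC/DRPSCTC, and all parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)
n_bits, reps, sigma = 20000, 3, 1.0       # bits, repetition factor, per-symbol noise std

bits = rng.integers(0, 2, n_bits)
symbols = 2.0 * bits - 1.0                # BPSK mapping: 0 -> -1, 1 -> +1

# Uncoded: one noisy observation per bit, hard decision at zero
y1 = symbols + sigma * rng.normal(size=n_bits)
ber_uncoded = float(np.mean((y1 > 0).astype(int) != bits))

# Rate-1/3 repetition: three observations per bit, soft-combined by averaging,
# which reduces the effective noise std by a factor of sqrt(3)
y3 = symbols[:, None] + sigma * rng.normal(size=(n_bits, reps))
ber_rep = float(np.mean((y3.mean(axis=1) > 0).astype(int) != bits))
```

At this per-symbol SNR the uncoded BER sits near Q(1) ≈ 0.16 while the soft-combined repetition BER drops toward Q(√3) ≈ 0.04, reflecting the bandwidth-for-reliability trade the abstract describes.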
APA, Harvard, Vancouver, ISO, and other styles
46

Lu, Hai-Han, Chung-Yi Li, Wen-Shing Tsai, Poh-Suan Chang, Yan-Yu Lin, Yu-Ting Chen, Chen-Xuan Liu, and Ting Ko. "A two-way 224-Gbit/s PAM4-based fibre-FSO converged system." Scientific Reports 12, no. 1 (January 10, 2022). http://dx.doi.org/10.1038/s41598-021-04315-3.

Full text
Abstract:
Abstract A two-way 224-Gbit/s four-level pulse amplitude modulation (PAM4)-based fibre-free-space optical (FSO) converged system through a 25-km single-mode fibre (SMF) transport with 500-m free-space transmission is successfully constructed, which adopts injection-locked vertical-cavity surface-emitting lasers with a polarisation-multiplexing mechanism for demonstration. Compared with one-way transmission, two-way transmission is an attractive architecture for a fibre-FSO converged system. Two-way transmission over SMF transport with free-space transmission not only reduces the required number of fibres and the setups of free-space transmission, but also provides the advantage of capacity doubling. Incorporating dual-wavelength PAM4 modulation with the polarisation-multiplexing mechanism, the transmission capacity of the fibre-FSO converged system is significantly enhanced to 224 Gbit/s (56 Gbit/s PAM4/wavelength × 2 wavelengths × 2 polarisations) for downlink/uplink transmission. Bit error rate and PAM4 eye diagrams (downstream/upstream) perform well over the 25-km SMF transport with 500-m free-space transmission. The proposed two-way fibre-FSO converged system is prominent not only for its integration of the fibre backbone with an optical wireless extension, but also for its two-way transmission affording high downlink/uplink data rates with good transmission performance.
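For reference, the PAM4 format used above carries two bits per symbol on four amplitude levels; a minimal Gray-coded mapper/demapper can be sketched as follows (an illustrative sketch of the modulation format only, not the authors' transceiver):

```python
import numpy as np

# Gray-coded PAM4: two bits per symbol; adjacent levels differ in exactly one bit,
# so a one-level slicing error corrupts only one bit
GRAY_MAP = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}
INV_MAP = {v: k for k, v in GRAY_MAP.items()}

def pam4_encode(bits):
    """Pack a flat bit sequence (even length) into PAM4 amplitude levels."""
    pairs = zip(bits[0::2], bits[1::2])
    return np.array([GRAY_MAP[p] for p in pairs], dtype=float)

def pam4_decode(levels):
    """Slice received levels to the nearest of {-3, -1, +1, +3} and unpack bits."""
    grid = np.array([-3, -1, 1, 3])
    out = []
    for x in levels:
        nearest = grid[np.argmin(np.abs(grid - x))]
        out.extend(INV_MAP[int(nearest)])
    return out

bits = [0, 0, 0, 1, 1, 1, 1, 0]
levels = pam4_encode(bits)
recovered = pam4_decode(levels + 0.2)      # small additive distortion still slices correctly
```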
47

MILLER, G. D., and S. L. ROBINSON. "IMPACT OF BODY COMPOSITION ON PHYSICAL PERFORMANCE TASKS IN OLDER OBESE WOMEN UNDERGOING A MODERATE WEIGHT LOSS PROGRAM." Journal of Frailty & Aging, 2013, 1–6. http://dx.doi.org/10.14283/jfa.2013.5.

Full text
Abstract:
Background: Although obesity is a recognized risk factor for impaired physical function in older adults, there is still debate on whether older obese adults should undergo intentional weight loss, due to concern of loss in lean body mass, including appendicular lean soft tissue mass. This may put them at risk for worsening muscle strength and mobility. Objectives: Therefore, the purpose of this study was to examine the effect of a weight loss intervention on body composition and physical function in obese older women. Design: Women were randomized into either a weight stable (WS) (n=20) or an intensive weight loss (WL) (n=26) group. Setting: The study setting was at a university research facility. Participants: Women (age, 67.8±1.3 yrs; BMI, 34.9 (0.7) kg/m2; mean±standard error of the mean) were recruited. Intervention: The WL intervention was for 6 months and included moderate dietary energy restriction and aerobic and strength exercise training. Measurements: Variables were obtained at baseline and 6 months and included body weight, dual energy x-ray absorptiometry (DXA), 6-minute walk distance, stair climb time, and concentric knee extension muscular strength. Results: Estimated marginal means (SEM) for weight loss at 6 months were -8.5 (0.9)% for WL and +0.7 (1.0)% for WS. There was a significant loss of body fat mass, lean body mass, appendicular lean soft tissue mass, relative muscle mass, and skeletal muscle index for WL vs. WS at 6 months. However, improvements for WL vs. WS were seen in 6-minute walk distance and stair climb time, with trends for improved relative strength and leg muscle quality. Change in body fat mass was positively related to improved physical function and muscle strength and quality. Conclusion: These results further support the use of a sound intentional weight loss program incorporating moderate dietary energy restriction and exercise training in older obese women to improve physical function. Although lean soft tissue mass was lost over the 6-month program, there was no deleterious effect on muscle strength or muscle quality.
48

Aebischer, Jason, Christoph Bobeth, and Andrzej J. Buras. "$$\varepsilon '/\varepsilon $$ in the Standard Model at the Dawn of the 2020s." European Physical Journal C 80, no. 8 (August 2020). http://dx.doi.org/10.1140/epjc/s10052-020-8267-1.

Full text
Abstract:
Abstract We reanalyse the ratio $$\varepsilon '/\varepsilon $$ in the Standard Model (SM) using the most recent hadronic matrix elements from the RBC-UKQCD collaboration in combination with the most important NNLO QCD corrections to electroweak penguin contributions and the isospin-breaking corrections. We illustrate the importance of the latter by using their latest estimate from chiral perturbation theory (ChPT) based on the octet approximation for lowest-lying mesons and a very recent estimate in the nonet scheme that takes into account the contribution of $$\eta _0$$. We find $$(\varepsilon '/\varepsilon )^{(8)}_\text {SM} = (17.4 \pm 6.1) \times 10^{-4}$$ and $$(\varepsilon '/\varepsilon )^{(9)}_\text {SM} = (13.9 \pm 5.2) \times 10^{-4}$$, respectively. Despite a very good agreement with the measured value $$(\varepsilon '/\varepsilon )_\text {exp} = (16.6 \pm 2.3) \times 10^{-4}$$, the large error in $$(\varepsilon '/\varepsilon )_\text {SM}$$ still leaves room for significant new physics (BSM) contributions to this ratio. We update the 2018 master formula for $$(\varepsilon '/\varepsilon )_\text {BSM}$$ valid in any extension beyond the SM without additional light degrees of freedom. We provide new values of the penguin parameters $$B_6^{(1/2)}(\mu )$$ and $$B_8^{(3/2)}(\mu )$$ at the $$\mu $$-scales used by the RBC-UKQCD collaboration and at lower scales $$\mathcal {O}(1\, \text {GeV})$$ used by ChPT and Dual QCD (DQCD). We present semi-analytic formulae for $$(\varepsilon '/\varepsilon )_\text {SM}$$ in terms of these parameters and $$\hat{\Omega }_\text {eff}$$, which summarizes isospin-breaking corrections to this ratio. We stress the importance of lattice calculations of the $$\mathcal {O}(\alpha _{\text {em}})$$ contributions to the hadronic matrix elements necessary for the removal of renormalization scheme dependence at $$\mathcal {O}(\alpha _{\text {em}})$$ in the present analyses of $$\varepsilon '/\varepsilon $$.
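The quoted agreement between the SM predictions and experiment can be checked with the numbers given in the abstract, combining errors in quadrature. This is a naive compatibility estimate using only the quoted values, not the authors' analysis:

```python
import math

# Values quoted in the abstract, in units of 1e-4.
exp_val, exp_err = 16.6, 2.3   # measured (eps'/eps)_exp
sm8_val, sm8_err = 17.4, 6.1   # octet-scheme SM prediction
sm9_val, sm9_err = 13.9, 5.2   # nonet-scheme SM prediction

def pull(a, da, b, db):
    """Discrepancy between two values in combined standard deviations."""
    return abs(a - b) / math.sqrt(da ** 2 + db ** 2)

p8 = pull(sm8_val, sm8_err, exp_val, exp_err)
p9 = pull(sm9_val, sm9_err, exp_val, exp_err)
print(p8, p9)  # both well below 1 sigma: "very good agreement"
```

The large SM-side uncertainties dominate both pulls, which is exactly why the abstract notes that room for BSM contributions remains despite the agreement.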
49

Chen, Qiang, Jianyuan Xiao, and Peifeng Fan. "Gauge invariant canonical symplectic algorithms for real-time lattice strong-field quantum electrodynamics." Journal of High Energy Physics 2021, no. 2 (February 2021). http://dx.doi.org/10.1007/jhep02(2021)127.

Full text
Abstract:
Abstract A class of high-order canonical symplectic structure-preserving geometric algorithms is developed for high-quality simulations of strong-field quantum electrodynamics (SFQED), based on the quantized Dirac-Maxwell theory, and of relativistic quantum plasma (RQP) phenomena. With minimal coupling, the Lagrangian density of an interacting bispinor-gauge field theory is constructed in a conjugate real fields form. The canonical symplectic form and canonical equations of this field theory are obtained from the general Hamilton's principle on the cotangent bundle. Based on discrete exterior calculus, the gauge field components are discretised to form a cochain complex, and the bispinor components are naturally discretised on a staggered dual lattice as combinations of differential forms. With pull-back and push-forward gauge covariant derivatives, the discrete action is gauge invariant. A well-defined discrete canonical Poisson bracket generates a semi-discrete lattice canonical field theory (LCFT), which admits the canonical symplectic form, unitary property, gauge symmetry and a discrete Poincaré subgroup, all good approximations of the original continuous geometric structures. The Hamiltonian splitting method, Cayley transformation and symmetric composition technique are introduced to construct a class of high-order numerical schemes for the semi-discrete LCFT. These schemes involve two degenerate fermion flavors and are locally unconditionally stable, while also preserving the geometric structures. In accordance with the Nielsen-Ninomiya theorem, the continuous chiral symmetry is partially broken on the lattice. As an extension, a pair of discrete chiral operators are introduced to reconstruct the lattice chirality. Equipped with statistically quantization-equivalent ensemble models of the Dirac vacuum and non-trivial plasma backgrounds, the schemes are expected to perform excellently in secular simulations of relativistic quantum effects, where the numerical errors of conserved quantities remain bounded by very small values without coherent accumulation. The algorithms are verified in detail by numerical energy spectra. Real-time LCFT simulations are successfully implemented for nonlinear Schwinger-mechanism-induced e−e+ pair creation and the vacuum Kerr effect, where the nonlinear and non-perturbative features captured by the solutions provide a complete strong-field physical picture over a very wide range, opening a new door toward high-quality simulations in the SFQED and RQP fields.
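The claim that conserved-quantity errors stay bounded in secular simulations is characteristic of symplectic integrators generally. A minimal toy analogue, Störmer-Verlet for a harmonic oscillator rather than the paper's lattice field scheme, illustrates the bounded, non-accumulating energy error:

```python
def verlet_energy_trace(q, p, dt, steps, omega=1.0):
    """Stormer-Verlet (symplectic) integration of a harmonic oscillator.

    A toy analogue of the structure-preserving property described above:
    a symplectic scheme keeps the energy error bounded over long runs
    instead of letting it accumulate secularly.
    """
    energies = []
    for _ in range(steps):
        p -= 0.5 * dt * omega ** 2 * q   # half kick
        q += dt * p                      # drift
        p -= 0.5 * dt * omega ** 2 * q   # half kick
        energies.append(0.5 * p ** 2 + 0.5 * omega ** 2 * q ** 2)
    return energies

trace = verlet_energy_trace(1.0, 0.0, dt=0.05, steps=20000)
max_drift = max(abs(e - 0.5) for e in trace)  # exact energy is 0.5
print(max_drift)  # small and bounded over 20,000 steps
```

A non-symplectic scheme such as explicit Euler would instead show the energy growing steadily over the same run, which is the "coherent accumulation" the abstract says these algorithms avoid.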
50

Rogers, Ian, Dave Carter, Benjamin Morgan, and Anna Edgington. "Diminishing Dreams." M/C Journal 25, no. 2 (April 25, 2022). http://dx.doi.org/10.5204/mcj.2884.

Full text
Abstract:
Introduction
In a 2019 report for the International Journal of Communication, Baym et al. positioned distributed blockchain ledger technology, and what would subsequently be referred to as Web3, as a convening technology. Riffing off Barnett, a convening technology “initiates and serves as the focus of a conversation that can address issues far beyond what it may ultimately be able to address itself” (403). The case studies for the Baym et al. research—early, aspirant projects applying the blockchain concept to music publishing and distribution—are described in the piece as speculations or provocations concerning music’s commercial and social future. What is convened in this era (pre-2017 blockchain music discourse and practice) is the potential for change: a type of widespread, broadly discussed, reimagination of the 21st-century music industries, productive precisely because near-future applications suggest the realisation of what Baym et al. call dreams. In this article, we aim to examine the Web3 music field as it lies some years later. Taking the latter half of 2021 as our subject, we present a survey of where music then resided within Web3, focussing on how the dreams of Baym et al. have morphed and evolved, and materialised and declined, in the intervening years. By investigating the discourse and functionality of 2021’s current crop of music NFTs—just one thread of music Web3’s far-reaching aspiration, but a potent and accessible manifestation nonetheless—we can make a detailed analysis of concept-led application. Volatility remains throughout the broader sector, and all of the projects listed here could be read as conditionally short-term and untested, but what they represent is a series of clearly evolved case studies of the dream, rich precisely because of what is assumed and disregarded.
WTF Is an NFT?
Non-fungible tokens inscribe indelible, unique ledger entries on a blockchain, detailing ownership of, or rights associated with, assets that exist off-chain. Many NFTs take the form of an ERC-721 smart contract that functions as an indivisible token on the Ethereum blockchain. Although all ERC-721 tokens are NFTs, the inverse is not true. Similar standards exist on other blockchains, and bridges allow these tokens to be created on alternative networks such as Polygon, Solana, WAX, Cardano and Tezos. The creation (minting) and transfer of ownership on the Ethereum network—by far the dominant chain—come with a significant and volatile transaction cost, by way of gas fees. Thus, even a “free” transaction on the main NFT network requires a currency and time investment that far outweighs the everyday routines of fiat exchange. On a technical level, the original proposal for the ERC-721 standard refers to NFTs as deeds intended to represent ownership of digital and physical assets like houses, virtual collectibles, and negative value assets such as loans (Entriken et al.). The details of these assets can be encoded as metadata, such as the name and description of the asset, including a URI that typically points to either a file somewhere on the Internet or a file hosted via IPFS, a decentralised peer-to-peer hosting network. As noted in the standard, while the data inscribed on-chain are immutable, the asset being referred to is not. Similarly, while each NFT is unique, multiple NFTs could, in theory, point to a single asset. In this respect ERC-721 tokens are different from cryptocurrencies and other tokens like stable-coins in that their value is often contingent on their accurate and ongoing association with assets outside of the blockchain on which they are traded. Further complicating matters, it is often unclear if and how NFTs confer ownership of digital assets with respect to legislative or common law.
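The ownership-ledger idea described above can be sketched in a few lines. This is an illustrative toy, not the ERC-721 interface itself; the class name, addresses, and URIs are all invented:

```python
class ToyNFTLedger:
    """Toy sketch of an NFT-style ownership ledger.

    Each token id maps to an owner and to a metadata URI that points at
    an off-chain asset, mirroring the ERC-721 pattern described above.
    """

    def __init__(self):
        self.owner_of = {}    # token_id -> owner address
        self.token_uri = {}   # token_id -> off-chain metadata URI

    def mint(self, token_id, owner, uri):
        if token_id in self.owner_of:
            raise ValueError("token ids are unique and non-fungible")
        self.owner_of[token_id] = owner
        self.token_uri[token_id] = uri

    def transfer(self, token_id, sender, recipient):
        if self.owner_of.get(token_id) != sender:
            raise PermissionError("only the current owner can transfer")
        self.owner_of[token_id] = recipient

ledger = ToyNFTLedger()
# Two distinct tokens may point at the same off-chain asset, as noted
# above: uniqueness lives in the token id, not in the asset itself.
ledger.mint(1, "0xALICE", "ipfs://QmSameAsset")
ledger.mint(2, "0xBOB", "ipfs://QmSameAsset")
ledger.transfer(1, "0xALICE", "0xCAROL")
```

Note what the sketch does not do: nothing here stores the asset, verifies it, or grants any legal rights over it, which is precisely the gap between on-chain record and off-chain ownership discussed in the surrounding text.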
NFTs rarely include any information relating to licencing or rights transfer, and high-profile NFTs such as Bored Ape Yacht Club appear to be governed by licencing terms held off-chain (Bored Ape Yacht Club). Finally, while it is possible to inscribe any kind of data, including audio, into an NFT, the ERC-721 standard and the underpinning blockchains were not designed to host multimedia content. At the time of writing, storing even a low-bandwidth stereo audio file on the Ethereum network appears cost-prohibitive. This presents a challenge for how music NFTs distinguish themselves in a marketplace dominated by visual works. The following sections of this article are divided into what we consider to be the general use cases for NFTs within music in 2021. We’ve designated three overlapping cases: audience investment, music ownership, and audience and business services.
Audience Investment
Significant discourse around NFTs focusses on digital collectibles and artwork that are conceptually, but not functionally, unique. Huge amounts of money have changed hands for specific—often celebrity brand-led—creations, resulting in media cycles of hype and derision. The high value of these NFTs has been variously ascribed to their high novelty value, scarcity, the adoption of NFTs as speculative assets by investors, and the lack of regulatory oversight allowing for price inflation via practices such as wash-trading (Rae; Das et al.; Cong et al.; Le Pennec, Fiedler, and Ante; Fazli, Owfi, and Taesiri). We see here the initial traditional split of discourse around cultural activity within a new medium: dual narratives of utopianism and dystopianism. Regardless of the discursive frame, activity has grown steadily since stories reporting the failure of Blockchain to deliver on its hype began appearing in 2017 (Ellul).
Early coverage around blockchain, music, and NFTs echoes this capacity to leverage artificial scarcity via the creation of unique digital assets (cf. Heap; Tomaino). As NFTs have developed, this discourse has become more nuanced, arguing that creators are now able to exploit both ownership and abundance. However, for the most part, music NFTs have essentially adopted the form of digital artworks and collectibles in editions ranging from 1:1 to 1:1000+. Grimes’s February 2021 Mars NFT pointed to a 32-second rotating animation of a sword-wielding cherubim above the planet Mars, accompanied by a musical cue (Grimes). Mars sold 388 NFTs for a reported fixed price of $7.5k each, grossing $2,910,000 at time of minting. By contrast, electronic artists Steve Aoki and Don Diablo have both released 1:1 NFT editions that have been auctioned via Sotheby’s, Superrare, and Nifty Gateway. Interestingly, these works have been bundled with physical goods; Diablo’s Destination Hexagonia, which sold for 600 Eth or approximately US$1.2 million at the time of sale, proffered ownership of a bespoke one-hour film hosted online, along with “a unique hand-crafted box, which includes a hard drive that contains the only copy of the high-quality file of the film” (Diablo). Aoki’s Hairy was much less elaborate but still promised to provide the winner of the $888,888 auction with a copy of the 35-second video of a fur-covered face shaking in time to downbeat electronica as an Infinite Objects video print (Aoki). In the first half of 2021, similar projects from high-profile artists including Deadmau5, The Weeknd, Snoop Dogg, Eminem, Blondie, and 3Lau have generated an extraordinary amount of money, leading to a significant, and understandable, appetite from musicians wanting to engage in this marketplace.
Many of these artists and the platforms that have enabled their sales have lauded the potential for NFTs to address an alleged poor remuneration of artists from streaming and/or bypassing “industry middlemen” (cf. Sounds.xyz); the millions of dollars generated by sales of these NFTs presents a compelling case for exploring these new markets irrespective of risk and volatility. However, other artists have expressed reservations and/or received pushback on entry into the NFT marketplace due to concerns over the environmental impact of NFTs; volatility; and a perception of NFT markets as Ponzi schemes (Poleg), insecure (Goodin), exploitative (Purtill), or scammy (Dash). As of late 2021, increased reportage began to highlight unauthorised or fraudulent NFT minting (cf. TFL; Stephen), including in music (Newstead). However, the number of contested NFTs remains marginal in comparison to the volume of exchange that occurs in the space daily. OpenSea alone oversaw over US$2.5 billion worth of transactions per month. For the most part, online NFT marketplaces like OpenSea and Solanart oversee the exchange of products on terms not dissimilar to other large online retailers; the space is still resolutely emergent and there is much debate about what products, including recently delisted pro-Nazi and Alt-Right-related NFTs, are socially and commercially acceptable (cf. Pearson; Redman). Further, there are signs this trend may impact on both the willingness and capacity of rightsholders to engage with NFTs, particularly where official offerings are competing with extant fraudulent or illegitimate ones. Despite this, at the time of writing the NFT market as a whole does not appear prone to this type of obstruction. What remains complicated is the contested relationship between NFTs, copyrights, and ownership of the assets they represent. 
This is further complicated by tension between the claims of blockchain’s independence from existing regulatory structures, and the actual legal recourse available to music rights holders.
Music Rights and Ownership
Baym et al. note that addressing the problems of rights management and metadata is one of the important discussions around music convened by early blockchain projects. While they posit that “our point is not whether blockchain can or can’t fix the problems the music industries face” (403), for some professionals, the blockchain’s promise of eliminating the need for trust seemed to provide an ideal solution to a widely acknowledged business-to-business problem: one of poor metadata leading to unclaimed royalties accumulating in “black boxes”, particularly in the case of misattributed mechanical royalties in the USA (Rethink Music Initiative). As outlined in their influential institutional research paper (partnered with music rights disruptor Kobalt), the Rethink Music Initiative implied that incumbent intermediaries were benefiting from this opacity, incentivising them to avoid transparency and a centralised rights management database. This frame provides a key example of one politicised version of “fairness”, directly challenging the interest of entrenched powers and status quo systems. Also present in the space is a more pragmatic approach which sees problems of metadata and rights flows as the result of human error which can be remedied with the proper technological intervention. O’Dair and Beaven argue that blockchain presents an opportunity to eliminate the need for trust which has hampered efforts to create a global standard database of rights ownership, while music business researcher Opal Gough offers a more sober overview of how decentralised ledgers can streamline processes, remove inefficiencies, and improve cash flow, without relying on the moral angle of powerful incumbents holding on to control accounts and hindering progress.
In the intervening two years, this discourse has shifted from transparency (cf. Taghdiri) to a practical narrative of reducing system friction and solving problems on the one hand—embodied by Paperchain (see Carnevali)—and ethical claims reliant on the concept of fairness on the other—exemplified by Resonate—but with, so far, limited widespread impact. The notion that the need for b2b collaboration on royalty flows can be successfully bypassed through a “trustless” blockchain is currently being tested. While these earlier projects were attempts to either circumvent or fix problems facing the traditional rights holders, with the advent of the NFT in particular, novel ownership structures have reconfigured the concept of a rights holder. NFTs promise fans an opportunity to not just own a personal copy of a recording or even a digitally unique version, but to share in the ownership of the actual property rights, a role previously reserved for record labels and music publishers. New NFT models have only recently launched which offer fans a share of IP revenue. “Collectors can buy royalty ownership in songs directly from their favorite artists in the form of tokens” through the service Royal. Services such as Royal and Vezt represent potentially massive cultural shifts in the traditional separation between consumers and investors; they also present possible new headaches and adventures for accountants and legal teams. The issues noted by Baym et al. are still present, and the range of new entrants into this space risks the proliferation, rather than consolidation, of metadata standards and a need to put money into multiple blockchain ecosystems. As noted in RMIT’s blockchain report, missing royalty payments … would suggest the answer to “does it need a blockchain?” is yes (although further research is needed). However, it is not clear that the blockchain economy will progress beyond the margins through natural market forces.
Some level of industry coordination may still be required. (18) Beyond the initial questions of whether system friction can be eased and standards generated without industry cooperation lie deeper philosophical issues of what will happen when fans are directly incentivised to promote recordings and artist brands as financial investors. With regard to royalty distribution, the exact role that NFTs would play in the ownership and exploitation of song IP remains conceptual rather than concrete. Even the emergent use cases are suggestive and experimental, often leaning heavily on off-chain terms, goodwill and the unknown role of existing legal infrastructure.
Audience and Business Services
Aside from the more high-profile NFT cases which focus on the digital object as an artwork providing a source of value, other systemic uses of NFTs are emerging. Both audience and business services are—to varying degrees—explorations of the utility of NFTs as a community token: i.e. digital commodities that have a market value, but also unlock ancillary community interaction. The music industries have a longstanding relationship with the sale of exclusivity and access tailored to experiential products. Historically, one of music’s most profitable commodities—the concert ticket—contains very little intrinsic value, but unlocks a hugely desirable extrinsic experience. As such, NFTs have already found adoption as tools of music exclusivity; as gateways into fan experiences, digital communities, live events ticketing and closed distribution. One case study incorporating almost all of these threads is the Deathbats club by American heavy metal band Avenged Sevenfold. Conceived of as the “ultimate fan club”, Deathbats is, according to the band’s singer M. Shadows, “every single thing that [fans] want from us, which is our time, our energy” (Chan).
At the time of writing, the Deathbats NFT had experienced expected volatility, but maintained a 30-day average sale price well above launch price. A second affordance provided by music NFTs’ ability to tokenise community is the application of this to music businesses in the form of music DAOs: decentralised autonomous organisations. DAOs and NFTs have so far intersected in a number of ways. DAOs function as digital entities that are owned by their members. They utilise smart contracts to record protocols, votes, and transactions on the blockchain. Bitcoin and Ethereum are often considered the first DAOs of note, serving as board-less venture capital funds, also known as treasuries, that cannot be accessed without the consensus of their members. More recently, DAOs have been co-opted by online communities of shared interests, who work towards an agreed goal, and operate without the need for leadership. Often, access to DAO membership is tokenised, and the more tokens a member has, the more voting rights they possess. All proposals must be put before members and voted for by a majority in order to be enacted, though voting systems differ between DAOs. Proposals must also comply with the DAO’s regulations and protocols. DAOs typically gather in online spaces such as Discord and Zoom, and utilise messaging services such as Telegram. Decentralised apps (dapps) have been developed to facilitate DAO activities such as voting systems and treasury management. Collective ownership of digital assets (in the form of NFTs) has become commonplace within DAOs. Flamingo DAO and PleasrDAO are two well-established and influential examples.
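The token-weighted voting rule described above can be sketched as follows. This is an illustrative toy with invented names and numbers; real DAOs differ in quorum rules, delegation, and proposal mechanics:

```python
def passes(proposal_votes, balances, threshold=0.5):
    """Token-weighted vote: proposal_votes maps member -> True/False,
    balances maps member -> token holdings.

    A proposal passes when the token weight voting yes exceeds the
    threshold fraction of all tokens held by the voters, so tokens,
    not head-counts, decide the outcome.
    """
    total = sum(balances[m] for m in proposal_votes)
    yes = sum(balances[m] for m, v in proposal_votes.items() if v)
    return yes > threshold * total

balances = {"ana": 100, "ben": 30, "cy": 30}
votes = {"ana": True, "ben": False, "cy": False}
print(passes(votes, balances))  # -> True: one large holder outvotes two members
```

The example makes the governance trade-off concrete: a 2-to-1 head-count majority against the proposal still loses to a single member holding most of the tokens.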
The “crypto-backed social club” Friends with Benefits (membership costs between $5,000 and $10,000) serves as a “music discovery platform, an online publication, a startup incubator and a kind of Bloomberg terminal for crypto investors” (Gottsegen), and is now hosting its own curated NFT art platform with work by the likes of Pussy Riot. Musical and cross-disciplinary artists and communities are also exploring the potential of DAOs to empower, activate, and incentivise their communities as an extension of, or in addition to, their adoption and exploration of NFTs. In collaboration with Never Before Heard Sounds, electronic artist and musical pioneer Holly Herndon is exploring ideological questions raised by the growing intelligence of AI to create digital likeness and cloning through voice models. Holly+ is a custom voice instrument that allows users to process pre-existing polyphonic audio through a deep neural network trained by recordings of Holly Herndon’s voice. The output is audio-processed through Holly Herndon’s distinct vocal sound. Users can submit their resulting audio to the Holly+ DAO, to whom she has distributed ownership of her digital likeness. DAO token-holders steward which audio is minted and certified as an NFT, ensuring quality control and only good use of her digital likeness. DAO token-holders are entitled to a percentage of profit from resales in perpetuity, thereby incentivising informed and active stewardship of her digital likeness (Herndon). Another example is LA-based label Leaving Records, which has created GENRE DAO to explore and experiment with new levels of ownership and empowerment for their pre-existing community of artists, friends, and supporters. They have created a community token—$GENRE—for which they intend a number of uses, such as “a symbol of equitable growth, a badge of solidarity, a governance token, currency to buy NFTs, or as a utility to unlock token-gated communities” (Leaving Records). 
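Several of the models above (Royal's royalty tokens, Holly+'s resale percentages, the $GENRE community token) share a pro-rata distribution core, which can be sketched with invented numbers; none of these figures reflect any service's actual terms:

```python
def split_royalties(revenue_cents, holdings):
    """Divide a revenue amount among token holders pro rata.

    holdings maps holder -> token count; integer division means any
    rounding remainder stays undistributed (real services define their
    own remainder policy).
    """
    total = sum(holdings.values())
    return {h: revenue_cents * n // total for h, n in holdings.items()}

# Hypothetical split: the artist retains 700 of 1000 royalty tokens.
payouts = split_royalties(10_000, {"artist": 700, "fan_a": 200, "fan_b": 100})
print(payouts)  # artist keeps 70% of the period's revenue, fans share the rest
```

Even this toy shows why the text flags "headaches and adventures for accountants and legal teams": once fans hold revenue-bearing tokens, every accounting period requires a distribution like this across a changing holder list.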
Taken as a whole, the spectrum of affordances and use cases presented by music NFTs can be viewed as a build-up of interest and capital around the technology.
Conclusion
The last half of 2021 was a moment of intense experimentation in the realms of music business administration and cultural expression, and at the time of writing, each week seemed to bring a new high-profile music Web3 project and/or disaster. Narratives of emancipation and domination under capitalism continue to drive our discussions around music and technology, and the direct link to debates on ecology and financialisation makes these conversations particularly polarising. High-profile cases of music projects that overstep norms of existing IP rights, such as Hitpiece’s attempt to generate NFTs of songs without right-holders’ consent, point to the ways in which this technology is portrayed as threatening and subversive to commercial musicians (Blistein). Meanwhile, the Water and Music research DAO promises to incentivise a research community to “empower music-industry professionals with the knowledge, network and skills to do more collaborative and progressive work with technology” through NFT tokens and a DAO organisational structure (Hu et al.). The assumption in many early narratives of the ability of blockchain to provide systems of remuneration that musicians would embrace as inherently fairer is far from the reality of a popular discourse marked by increasing disdain and distrust, currently centred on NFTs as lacking in artistic merit, or even as harmful. We have seen all this talk before, of course, when jukeboxes and player pianos, film synchronisation, radio, recording, and other new communication technologies steered new paths for commercial musicians and promised magical futures.
All of these innovations were met with intense scrutiny, cries of inauthentic practice, and resistance by incumbent musicians, but all were eventually sustained by the emergence of new forms of musical expression that captured the interest of the public. On the other hand, the road towards musical nirvana passes by not only the more prominent corpses of the Digital Audio Tape, SuperAudio, and countless recording formats, but if you squint and remember that technology is not always about devices or media, you can see the Secure Download Music Initiative, PressPlay, the International Music Registry, and Global Repertoire Databases in the distance, wondering if blockchain might correct some of the problems they dreamed of solving in their day. The NFT presents the artistic and cultural face of this dream of a musical future, and of course we are first seeing the emergence of old models within its contours. While the investment, ownership, and service phenomena emerging might not be reminiscent of the first moment when people were able to summon a song recording onto their computer via a telephone modem, it is important to remember that there were years of text-based chat rooms before we arrived at music through the Internet. It is early days, and there will be much confusion, anger, and experimentation before music NFTs become either another mundane medium of commercial musical practice, or perhaps a memory of another attempt to reach that goal.
References
Aoki, Steve. "Hairy." Nifty Gateway 2021. 16 Feb. 2022 <https://niftygateway.com/marketplace/collection/0xbeccd9e4a80d4b7b642760275f60b62608d464f7/1?page=1>.
Barnett, C. "Convening Publics: The Parasitical Spaces of Public Action." The SAGE Handbook of Political Geography. Eds. K.R. Cox, M. Low, and J. Robinson. London: Sage, 2008. 403–418.
Baym, Nancy, Lana Swartz, and Andrea Alarcon. "Convening Technologies: Blockchain and the Music Industry." International Journal of Communication 13.20 (2019). 13 Feb. 2022 <https://ijoc.org/index.php/ijoc/article/view/8590>.
Blistein, Jon. "Hitpiece Wants to Make Every Song in the World an NFT. But Artists Aren't Buying It." Rolling Stone 2022. 14 Feb. 2022 <https://www.rollingstone.com/music/music-news/hitpiece-nft-song-controversy-1294027/>.
Bored Ape Yacht Club. "Terms & Conditions." Yuga Labs, Inc. 2020. 14 Feb. 2022 <https://boredapeyachtclub.com/#/terms>.
Carnevali, David. "Paperchain Uses Defi to Speed Streaming Payments to Musicians; the Startup Gets Streaming Data from Music Labels and Distributors on Their Artists, Then Uses Their Invoices as Collateral for Defi Loans to Pay the Musicians More Quickly." Wall Street Journal 2021. 16 Feb. 2022 <https://www.wsj.com/articles/paperchain-uses-defi-to-speed-streaming-payments-to-musicians-11635548273>.
Chan, Anna. "How Avenged Sevenfold Is Reinventing the Fan Club with Deathbats Club NFTs." NFT Now 2021. 16 Feb. 2022 <https://avengedsevenfold.com/news/nft-now-avenged-sevenfold-reinventing-fan-club-with-deathbats-club/>.
Cong, Lin William, Xi Li, Ke Tang, and Yang Yang. "Crypto Wash Trading." SSRN 2021. 15 Feb. 2022 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3530220>.
Das, Dipanjan, Priyanka Bose, Nicola Ruaro, Christopher Kruegel, and Giovanni Vigna. "Understanding Security Issues in the NFT Ecosystem." ArXiv 2021. 16 Feb. 2022 <https://arxiv.org/abs/2111.08893>.
Dash, Anil. "NFTs Weren't Supposed to End like This." The Atlantic 2021. 16 Feb. 2022 <https://www.theatlantic.com/ideas/archive/2021/04/nfts-werent-supposed-end-like/618488/>.
Diablo, Don. "Destination Hexagonia." SuperRare 2021. 16 Feb. 2022 <https://superrare.com/artwork-v2/d%CE%BEstination-h%CE%BExagonia-by-don-diablo-23154>.
Entriken, William, Dieter Shirley, Jacob Evans, and Nastassia Sachs. "EIP-721: Non-Fungible Token Standard." Ethereum Improvement Proposals, 2022. 16 Feb. 2022 <https://eips.ethereum.org/EIPS/eip-721>.
Fashion Law, The. "From Baby Birkins to MetaBirkins, Brands Are Facing Issues in the Metaverse." 2021. 16 Feb. 2022 <https://www.thefashionlaw.com/from-baby-birkins-to-metabirkins-brands-are-being-plagued-in-the-metaverse/>.
Fazli, Mohammad Amin, Ali Owfi, and Mohammad Reza Taesiri. "Under the Skin of Foundation NFT Auctions." ArXiv 2021. 16 Feb. 2022 <https://arxiv.org/abs/2109.12321>.
Friends with Benefits. "Pussy Riot Drink My Blood." 2021. 28 Jan. 2022 <https://gallery.fwb.help/pussy-riot-drink-my-blood>.
Goodin, Dan. "Really Stupid 'Smart Contract' Bug Let Hacker Steal $31 Million in Digital Coin." ARS Technica 2021. 16 Feb. 2022 <https://arstechnica.com/information-technology/2021/12/hackers-drain-31-million-from-cryptocurrency-service-monox-finance/>.
Gottsegen, Will. "What's Next for Friends with Benefits." Coin Desk 2021. 28 Jan. 2021 <https://www.coindesk.com/layer2/culture-week/2021/12/16/whats-next-for-friends-with-benefits>.
Gottsegen, Will. "What's Next for Friends with Benefits." Yahoo! Finance 2021. 16 Feb. 2022 <https://au.finance.yahoo.com/news/next-friends-benefits-204036081.html>.
Gough, Opal. "Blockchain: A New Opportunity for Record Labels." International Journal of Music Business Research 7.1 (2018): 26-44.
Grimes. "Mars." Nifty Gateway 2021. 16 Feb. 2022 <https://niftygateway.com/itemdetail/primary/0xe04cc101c671516ac790a6a6dc58f332b86978bb/2>.
Heap, Imogen. "Blockchain Could Help Musicians Make Money Again." Harvard Business Review 2017. 16 Feb. 2022 <https://hbr.org/2017/06/blockchain-could-help-musicians-make-money-again>.
Herndon, Holly. Holly+ 2021. 1 Feb. 2022 <https://holly.mirror.xyz>.
Hu, Cherie, Diana Gremore, Katherine Rodgers, and Alexander Flores. "Introducing $STREAM: A New Tokenized Research Framework for the Music Industry." Water and Music 2021. 14 Feb. 2022 <https://www.waterandmusic.com/introducing-stream-a-new-tokenized-research-framework-for-the-music-industry/>.
Leaving Records. "Leaving Records Introducing GENRE DAO." Leaving Records 2021. 12 Jan. 2022 <https://leavingrecords.mirror.xyz/>.
Le Pennec, Guénolé, Ingo Fiedler, and Lennart Ante. "Wash Trading at Cryptocurrency Exchanges." Finance Research Letters 43 (2021).
Newstead, Al. "Artists Outraged at Website Allegedly Selling Their Music as NFTs: What You Need to Know." ABC Triple J 2022. 16 Feb. 2022 <https://www.abc.net.au/triplej/news/musicnews/hitpiece-explainer--artists-outraged-at-website-allegedly-selli/13739470>.
O'Dair, Marcus, and Zuleika Beaven. "The Networked Record Industry: How Blockchain Technology Could Transform the Record Industry." Strategic Change 26.5 (2017): 471-80.
Pearson, Jordan. "OpenSea Sure Has a Lot of Hitler NFTs for Sale." Vice: Motherboard 2021. 16 Feb. 2022 <https://www.vice.com/en/article/akgx9j/opensea-sure-has-a-lot-of-hitler-nfts-for-sale>.
Poleg, Dror. In Praise of Ponzis. 2021. 16 Feb. 2022 <https://www.drorpoleg.com/in-praise-of-ponzis/>.
Purtill, James. "Artists Report Discovering Their Work Is Being Stolen and Sold as NFTs." ABC News: Science 2021. 16 Feb. 2022 <https://www.abc.net.au/news/science/2021-03-16/nfts-artists-report-their-work-is-being-stolen-and-sold/13249408>.
Rae, Madeline. "Analyzing the NFT Mania: Is a JPG Worth Millions." SAGE Business Cases 2021. 16 Feb. 2022 <https://sk-sagepub-com.ezproxy.lib.rmit.edu.au/cases/analyzing-the-nft-mania-is-a-jpg-worth-millions>.
Redman, Jamie. "Political Cartoonist Accuses NFT Platforms Opensea, Rarible of Being 'Tools for Political Censorship'." Bitcoin.com 2021. 16 Feb. 2022 <https://news.bitcoin.com/political-cartoonist-accuses-nft-platforms-opensea-rarible-of-being-tools-for-political-censorship/>.
Rennie, Ellie, Jason Potts, and Ana Pochesneva. Blockchain and the Creative Industries: Provocation Paper. Melbourne: RMIT University, 2019.
Resonate. "Pricing." 2022. 16 Feb. 2022 <https://resonate.is/pricing/>.
Rethink Music Initiative.
Fair Music: Transparency and Payment Flows in the Music Industry. Berklee Institute for Creative Entrepreneurship, 2015. Royal. "How It Works." 2022. 16 Feb. 2022 <https://royal.io/>. Stephen, Bijan. “NFT Mania Is Here, and So Are the Scammers.” The Verge 2021. 15 Feb. 2022 <https://www.theverge.com/2021/3/20/22334527/nft-scams-artists-opensea-rarible-marble-cards-fraud-art>. Sound.xyz. Sound.xyz – Music without the Middleman. 2021. 14 Feb. 2022 <https://sound.mirror.xyz/3_TAJe4y8iJsO0JoVbXYw3BM2kM3042b1s6BQf-vWRo>. Taghdiri, Arya. "How Blockchain Technology Can Revolutionize the Music Industry." Harvard Journal of Sports & Entertainment Law 10 (2019): 173–195. Tomaino, Nick. “The Music Industry Is Waking Up to Ethereum: In Conversation with 3LAU.” SuperRare 2020. 16 Feb. 2022 <https://editorial.superrare.com/2020/10/20/the-music-industry-is-waking-up-to-ethereum-in-conversation-with-3lau/>.