Journal articles on the topic 'Applied computing not elsewhere classified'

To see the other types of publications on this topic, follow the link: Applied computing not elsewhere classified.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Applied computing not elsewhere classified.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

CSG-Ed team. "Global issues." ACM SIGCAS Computers and Society 49, no. 2 (January 22, 2021): 9. http://dx.doi.org/10.1145/3447903.3447906.

Full text
Abstract:
The growing role that computing will play in addressing the world's pressing global issues has begun to move to center stage, as Big Data for the SDGs (Sustainable Development Goals) is now included among the United Nations' Global Issues. The UN summarizes this Big Data issue as "The volume of data in the world is increasing exponentially. New sources of data, new technologies, and new analytical approaches, if applied responsibly, can allow to better monitor progress toward achievement of the SDGs in a way that is both inclusive and fair" [2]. Elsewhere, we have applauded and argued for computing initiatives, including computer science education, that specifically focus on such "pressing social, environment, and economic problems" [1], and we acknowledge our SIG's commitment to directly tackling such issues.
APA, Harvard, Vancouver, ISO, and other styles
2

Das, D., and S. Santhakumar. "An Euler correction method for computing two-dimensional unsteady transonic flows." Aeronautical Journal 103, no. 1020 (February 1999): 85–94. http://dx.doi.org/10.1017/s0001924000027780.

Full text
Abstract:
An Euler correction method is developed for unsteady, transonic inviscid flows. The strategy of this method is to treat the flow-field behind the shock as rotational flow and elsewhere as irrotational flow. The solution for the irrotational flow is obtained by solving the unsteady full-potential equation using Jameson's rotated time-marching finite-difference scheme. Clebsch's representation of velocity is followed for rotational flow. In this representation the velocities are decomposed into a potential part and a rotational part written in terms of scalar functions. The potential part is computed from the unsteady full-potential equation with appropriate modification based on Clebsch's representation of velocity. The rotational part is obtained analytically from the unsteady momentum equation written in terms of Clebsch variables. This method is applied to compute the unsteady flow-field characteristics for an oscillating NACA 64A010 aerofoil. The results of the present calculation are found to be in good agreement with both the Euler solution and experimental results.
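
For readers unfamiliar with the Clebsch representation named in this abstract, a standard form (the symbols below are generic textbook notation, not taken from the paper) decomposes the velocity field into a potential part and a rotational part built from two scalar functions:

```latex
% Clebsch representation of the velocity field:
% \phi is the velocity potential; \lambda, \mu are the Clebsch scalars.
\mathbf{v} = \nabla \phi + \lambda \nabla \mu,
\qquad
\boldsymbol{\omega} = \nabla \times \mathbf{v} = \nabla \lambda \times \nabla \mu .
```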
APA, Harvard, Vancouver, ISO, and other styles
3

Wu, Yusen, Hongwei Li, and Ahmed Alsaedi. "Center Conditions and Bifurcation of Limit Cycles Created from a Class of Second-Order ODEs." International Journal of Bifurcation and Chaos 29, no. 01 (January 2019): 1950003. http://dx.doi.org/10.1142/s0218127419500032.

Full text
Abstract:
In this paper, a class of second-order ODEs is investigated. First of all, a method of computing singular point values is established for this kind of system. Then, two classes of second-order ODEs are studied to illustrate the efficiency of the method, and the center conditions and bifurcation of limit cycles are obtained. Finally, the center conditions of a class of non-polynomial systems are classified using this method.
APA, Harvard, Vancouver, ISO, and other styles
4

Han, Daoqi, Songqi Wu, Zhuoer Hu, Hui Gao, Enjie Liu, and Yueming Lu. "A Novel Classified Ledger Framework for Data Flow Protection in AIoT Networks." Security and Communication Networks 2021 (February 19, 2021): 1–11. http://dx.doi.org/10.1155/2021/6671132.

Full text
Abstract:
The edge computing node plays an important role in the evolution of the artificial intelligence-empowered Internet of things (AIoTs) that converge sensing, communication, and computing to enhance wireless ubiquitous connectivity, data acquisition, and analysis capabilities. With full connectivity, the issue of data security in the new cloud-edge-terminal network hierarchy of AIoTs comes to the fore, for which blockchain technology is considered as a potential solution. Nevertheless, existing schemes cannot be applied to the resource-constrained and heterogeneous IoTs. In this paper, we consider the blockchain design for the AIoTs and propose a novel classified ledger framework based on lightweight blockchain (CLF-LB) that separates and stores data rights at the source and enables thorough data flow protection in the open and heterogeneous network environment of AIoT. In particular, CLF-LB divides the network into five functional layers for optimal adaptation to AIoTs applications, wherein an intelligent collaboration mechanism is also proposed to enhance the cross-layer operation. Unlike traditional full-function blockchain models, our framework includes novel technical modules, such as block regenesis, iterative reinforcement of proof-of-work, and efficient chain uploading via the system-on-chip system, which are carefully designed to fit the cloud-edge-terminal hierarchy in AIoTs networks. Comprehensive experimental results are provided to validate the advantages of the proposed CLF-LB, showing its potential to address the secrecy issues of data storage and sharing in AIoTs networks.
APA, Harvard, Vancouver, ISO, and other styles
5

Rodriguez, Diego, Diego Gomez, David Alvarez, and Sergio Rivera. "A Review of Parallel Heterogeneous Computing Algorithms in Power Systems." Algorithms 14, no. 10 (September 23, 2021): 275. http://dx.doi.org/10.3390/a14100275.

Full text
Abstract:
The power system expansion and the integration of technologies, such as renewable generation, distributed generation, high voltage direct current, and energy storage, have made power system simulation challenging in multiple applications. The current computing platforms employed for planning, operation, studies, visualization, and the analysis of power systems are reaching their operational limit since the complexity and size of modern power systems result in long simulation times and high computational demand. Time reductions in simulation and analysis lead to the better and further optimized performance of power systems. Heterogeneous computing, where different processing units interact, has shown that power system applications can take advantage of the unique strengths of each type of processing unit, such as central processing units, graphics processing units, and field-programmable gate arrays interacting in on-premise or cloud environments. Parallel Heterogeneous Computing appears as an alternative to reduce simulation times by optimizing multitask execution in parallel computing architectures with different processing units working together. This paper presents a review of Parallel Heterogeneous Computing techniques, how these techniques have been applied in a wide variety of power system applications, how they help reduce the computational time of modern power system simulation and analysis, and the current tendency regarding each application. We present a wide variety of approaches classified by technique and application.
APA, Harvard, Vancouver, ISO, and other styles
6

Çakır, Mustafa, Akhan Akbulut, and Yusuf Hatay Önen. "Analysis of the use of computational intelligence techniques for air-conditioning systems: A systematic mapping study." Measurement and Control 52, no. 7-8 (June 28, 2019): 1084–94. http://dx.doi.org/10.1177/0020294019858108.

Full text
Abstract:
In our systematic mapping study, we examined 289 published works to determine which intelligent computing methods (e.g. Artificial Neural Networks, Machine Learning, and Fuzzy Logic) used by air-conditioning systems can provide energy savings and improve thermal comfort. Our goal was to identify which methods have been used most in research on the topic, which methods of data collection have been employed, and which areas of research have been empirical in nature. We followed the rules for systematic literature reviews in identifying published works in databases (e.g. the Institute of Electrical and Electronics Engineers database, the Association for Computing Machinery Digital Library, SpringerLink, ScienceDirect, and Wiley Online Library) and classified the identified works by topic. After excluding works according to the predefined criteria, we reviewed the selected works according to the research parameters motivating our study. Results reveal that energy savings is the most frequently examined topic and that intelligent computing methods can be used to provide better indoor environments for occupants, with energy savings of up to 50%. The most common intelligent method used has been artificial neural networks, while sensors have been the tools most used to collect data, followed by searches of databases of experiments, simulations, and surveys accessed to validate the accuracy of findings.
APA, Harvard, Vancouver, ISO, and other styles
7

Choi, Yoonjo, Namhun Kim, Seunghwan Hong, Junsu Bae, Ilsuk Park, and Hong-Gyoo Sohn. "Critical Image Identification via Incident-Type Definition Using Smartphone Data during an Emergency: A Case Study of the 2020 Heavy Rainfall Event in Korea." Sensors 21, no. 10 (May 20, 2021): 3562. http://dx.doi.org/10.3390/s21103562.

Full text
Abstract:
In unpredictable disaster scenarios, it is important to recognize the situation promptly and take appropriate response actions. This study proposes a cloud computing-based data collection, processing, and analysis process that employs a crowd-sensing application. Clustering algorithms are used to define the major damage types, and hotspot analysis is applied to effectively filter critical data from the crowdsourced data. To verify the utility of the proposed process, it was applied to Icheon-si and Anseong-si, both in Gyeonggi-do, which were affected by heavy rainfall in 2020. The results show that the types of incidents at the damaged sites were effectively detected and that images reflecting the damage situation could be classified by applying the geospatial analysis technique. For 5 August 2020, which was close to the date of the event, the images were classified with a precision of 100% at a threshold of 0.4. For 24–25 August 2020, the image classification precision exceeded 95% at a threshold of 0.5, except for the mudslide mudflow in the Yul area. The location distribution of the classified images showed a distribution similar to that of damaged regions in unmanned aerial vehicle images.
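
As an illustration of the clustering step this abstract describes, here is a minimal sketch (the field layout, coordinates, and DBSCAN parameters are invented for illustration, not taken from the study) that groups geotagged smartphone reports and keeps dense clusters as candidate damage hotspots:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical crowdsourced reports: (latitude, longitude) in degrees.
reports = np.array([
    [37.27, 127.44], [37.27, 127.45], [37.28, 127.44],  # dense group
    [37.01, 127.27], [37.02, 127.28],                    # smaller group
    [36.50, 127.00],                                     # isolated report
])

# Haversine metric expects (lat, lon) in radians; eps of roughly 1 km.
eps_km = 1.0
labels = DBSCAN(
    eps=eps_km / 6371.0,          # Earth radius in km -> radians
    min_samples=2,
    metric="haversine",
    algorithm="ball_tree",
).fit_predict(np.radians(reports))

# Label -1 marks noise; other labels are candidate damage hotspots.
for cluster in sorted(set(labels)):
    members = reports[labels == cluster]
    tag = "noise" if cluster == -1 else f"cluster {cluster}"
    print(tag, "->", len(members), "reports")
```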
APA, Harvard, Vancouver, ISO, and other styles
8

el-Yazigi, A., K. Chaleby, and C. R. Martin. "A simplified and rapid test for acetylator phenotyping by use of the peak height ratio of two urinary caffeine metabolites." Clinical Chemistry 35, no. 5 (May 1, 1989): 848–51. http://dx.doi.org/10.1093/clinchem/35.5.848.

Full text
Abstract:
We describe a simplified liquid-chromatographic test in which acetylator phenotype is determined by measuring the peak height ratio of two urinary caffeine metabolites, 5-acetylamino-6-formylamino-3-methyluracil and 1-methylxanthine. We applied this test to determine the acetylator phenotypes of 52 subjects who regularly drink coffee, tea, or caffeinated beverages. Also, we determined the acetylator phenotypes of these subjects according to a well-established sulfasalazine test, which yielded identical results. We established the reproducibility of the described test by determining the acetylator phenotypes of 10 additional subjects on two different days separated by a period of two to five weeks. Of the 52 subjects examined by both tests, 40 (76.9%) were classified as slow acetylators, which agrees well with the percentage reported elsewhere for 297 similar subjects from the Saudi population.
APA, Harvard, Vancouver, ISO, and other styles
9

Conesa, Francesc C., Hector A. Orengo, Agustín Lobo, and Cameron A. Petrie. "An Algorithm to Detect Endangered Cultural Heritage by Agricultural Expansion in Drylands at a Global Scale." Remote Sensing 15, no. 1 (December 22, 2022): 53. http://dx.doi.org/10.3390/rs15010053.

Full text
Abstract:
This article presents AgriExp, a remote-based workflow for the rapid mapping and monitoring of archaeological and cultural heritage locations endangered by new agricultural expansion and encroachment. Our approach is powered by the cloud-computing data cataloguing and processing capabilities of Google Earth Engine and it uses all the available scenes from the Sentinel-2 image collection to map index-based multi-aggregate yearly vegetation changes. A user-defined index threshold maps the first per-pixel occurrence of an abrupt vegetation change and returns an updated and classified multi-temporal image aggregate in almost-real-time. The algorithm requires an input vector table such as data gazetteers or heritage inventories, and it performs buffer zonal statistics for each site to return a series of spatial indicators of potential site disturbance. It also returns time series charts for the evaluation and validation of the local to regional vegetation trends and the seasonal phenology. Additionally, we used multi-temporal MODIS, Sentinel-2 and high-resolution Planet imagery for further photo-interpretation of critically endangered sites. AgriExp was first tested in the arid region of the Cholistan Desert in eastern Pakistan. Here, hundreds of archaeological mound surfaces are threatened by the accelerated transformation of barren lands into new irrigated agricultural lands. We have provided the algorithm code with the article to ensure that AgriExp can be exported and implemented with little computational cost by academics and heritage practitioners alike to monitor critically endangered archaeological and cultural landscapes elsewhere.
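
A minimal sketch of the kind of Earth Engine workflow this abstract describes (the asset path, band names, dates, threshold, and buffer distance below are illustrative assumptions, not the published AgriExp code that accompanies the article):

```python
import ee
ee.Initialize()

# Hypothetical heritage gazetteer (an uploaded FeatureCollection asset).
sites = ee.FeatureCollection("users/example/heritage_sites")
buffers = sites.map(lambda f: f.buffer(250))  # 250 m zones, illustrative

# Per-image NDVI from Sentinel-2 surface reflectance.
def add_ndvi(img):
    return img.addBands(img.normalizedDifference(["B8", "B4"]).rename("NDVI"))

s2 = (ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
      .filterDate("2021-01-01", "2022-01-01")
      .map(add_ndvi)
      .select("NDVI"))

# Flag pixels whose yearly maximum NDVI crosses a user-defined threshold,
# a simple proxy for new cultivation appearing on formerly barren land.
new_agriculture = s2.max().gt(0.35)

# Buffer zonal statistics: fraction of disturbed pixels per site.
stats = new_agriculture.reduceRegions(
    collection=buffers, reducer=ee.Reducer.mean(), scale=10)
```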
APA, Harvard, Vancouver, ISO, and other styles
10

LIN, SONG-SUN, and TZI-SHENG YANG. "ON THE SPATIAL ENTROPY AND PATTERNS OF TWO-DIMENSIONAL CELLULAR NEURAL NETWORKS." International Journal of Bifurcation and Chaos 12, no. 01 (January 2002): 115–28. http://dx.doi.org/10.1142/s0218127402004206.

Full text
Abstract:
This work investigates binary pattern formations of two-dimensional standard cellular neural networks (CNN) as well as the complexity of the binary patterns. The complexity is measured by the exponential growth rate in which the patterns grow as the size of the lattice increases, i.e. spatial entropy. We propose an algorithm to generate the patterns in the finite lattice for general two-dimensional CNN. For the simplest two-dimensional template, the parameter space is split up into finitely many regions which give rise to different binary patterns. Qualitatively, the global patterns are classified for each region. Quantitatively, the upper bound of the spatial entropy is estimated by computing the number of patterns in the finite lattice, and the lower bound is given by observing a maximal set of patterns of a suitable size which can be adjacent to each other.
APA, Harvard, Vancouver, ISO, and other styles
11

AGORE, A. L., C. G. BONTEA, and G. MILITARU. "CLASSIFYING COALGEBRA SPLIT EXTENSIONS OF HOPF ALGEBRAS." Journal of Algebra and Its Applications 12, no. 05 (May 7, 2013): 1250227. http://dx.doi.org/10.1142/s0219498812502271.

Full text
Abstract:
For a given Hopf algebra A we classify all Hopf algebras E that are coalgebra split extensions of A by H4, where H4 is Sweedler's four-dimensional Hopf algebra. Equivalently, we classify all crossed products of Hopf algebras A # H4 by computing explicitly two classifying objects: the cohomological "group" [Formula: see text] and CRP(H4, A) ≔ the set of types of isomorphisms of all crossed products A # H4. All crossed products A # H4 are described by generators and relations and classified: they are parameterized by the set [Formula: see text] of all central primitive elements of A. Several examples are worked out in detail: in particular, over a field of characteristic p ≥ 3 an infinite family of non-isomorphic Hopf algebras of dimension 4p is constructed. The groups of automorphisms of these Hopf algebras are also described.
APA, Harvard, Vancouver, ISO, and other styles
12

Baharlouii, M., D. Mafi Gholami, and M. Abbasi. "INVESTIGATING MANGROVE FRAGMENTATION CHANGES USING LANDSCAPE METRICS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W18 (October 18, 2019): 159–62. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w18-159-2019.

Full text
Abstract:
Generally, the investigation of long-term mangrove fragmentation changes can be used as an important tool in assessing the sensitivity and vulnerability of these ecosystems to multiple environmental hazards. Therefore, the aim of this study was to reveal the trend of mangrove fragmentation changes in the Khamir habitat using satellite imagery and Fragstats software during a 30-year period (1986–2016). To this end, Landsat images from 1986, 1998, and 2016 were used, and after computing the normalized difference vegetation index (NDVI) to distinguish mangroves from surrounding water and land areas, the images were further processed and classified into two types of land cover (i.e., mangrove and non-mangrove areas) using the maximum likelihood classification method. By determining the extent of mangroves in the Khamir habitat in 1986, 1998, and 2016, the trend of fragmentation changes was quantified using the CA, NP, PD and LPI landscape metrics. The results showed that the extent of mangroves in the Khamir habitat (CA) decreased in the post-1998 period (1998–2016). The results also showed that the NP and PD increased in the post-1998 period and, in contrast, the LPI decreased in this period. These results revealed the high degree of vulnerability of the mangroves in the Khamir habitat to drought occurrence; they are thus threatened by climate change. We hope that the results of this study stimulate further climate change adaptation planning efforts and help decision-makers prioritize and implement conservation measures in the mangrove ecosystems on the northern coasts of the PG and the GO and elsewhere.
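
A minimal sketch of how the landscape metrics named here (CA, NP, PD, LPI) can be computed from a binary mangrove/non-mangrove raster; the toy grid, 30 m cell size, and 8-connectivity are illustrative assumptions rather than the Fragstats configuration used in the study:

```python
import numpy as np
from scipy import ndimage

# Toy classified raster: 1 = mangrove, 0 = non-mangrove (30 m Landsat cells).
raster = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [0, 1, 0, 0, 0],
])
cell_area_ha = 30 * 30 / 10_000          # one cell in hectares
total_ha = raster.size * cell_area_ha    # whole landscape in hectares

# Label contiguous mangrove patches (8-connected neighbourhood).
patches, n_patches = ndimage.label(raster, structure=np.ones((3, 3)))
patch_cells = ndimage.sum(raster, patches, index=range(1, n_patches + 1))

ca = raster.sum() * cell_area_ha                            # CA: class area (ha)
pd = 100.0 * n_patches / total_ha                           # PD: patches per 100 ha
lpi = 100.0 * patch_cells.max() * cell_area_ha / total_ha   # LPI: % of landscape
print(f"CA={ca:.2f} ha, NP={n_patches}, PD={pd:.1f}/100 ha, LPI={lpi:.1f}%")
```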
APA, Harvard, Vancouver, ISO, and other styles
13

Elmazoghi, Hasan G., Vail Karakale (Waiel Mowrtage), and Lubna S. Bentaher. "Comparison of neural networks and neuro-fuzzy computing techniques for prediction of peak breach outflow." Journal of Hydroinformatics 18, no. 4 (January 29, 2016): 724–40. http://dx.doi.org/10.2166/hydro.2016.078.

Full text
Abstract:
Accurate prediction of peak outflows from breached embankment dams is a key parameter in dam risk assessment. In this study, efficient models were developed to predict peak breach outflows utilizing artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS). Historical data from 93 embankment dam failures were used to train and evaluate the applicability of these models. Two scenarios were applied with each model by either considering the whole data set without classification or classifying the set into small dams (48 dams) and large dams (45 dams). In this way, nine models were developed and their results were compared to each other and to the results of the best available regression equations and recent gene expression programming. Among the different models, the ANFIS model of the first scenario exhibited better performance based on its higher efficiency (E = 0.98), higher coefficient of determination (R2 = 0.98) and lower mean absolute error (MAE = 840.9). Moreover, models based on classified data enhanced the prediction of peak outflows particularly for small dams. Finally, this study indicated the potential of the developed ANFIS and ANN models to be used as predictive tools of peak outflow rates of embankment dams.
APA, Harvard, Vancouver, ISO, and other styles
14

Khayyat, Manal, and Lamiaa Elrefaei. "A Deep Learning Based Prediction of Arabic Manuscripts Handwriting Style." International Arab Journal of Information Technology 17, no. 5 (September 1, 2020): 702–12. http://dx.doi.org/10.34028/iajit/17/5/3.

Full text
Abstract:
With the increasing amount of unorganized images existing on the internet today and the necessity to use them efficiently in various types of applications, there is a critical need for rigid models that can classify and predict images successfully and instantaneously. Therefore, this study aims to collect Arabic manuscript images into a dataset and predict their handwriting styles using the most powerful and trending technologies. There are many types of Arabic handwriting styles, including Al-Reqaa, Al-Nask, Al-Thulth, Al-Kufi, Al-Hur, Al-Diwani, Al-Farsi, Al-Ejaza, Al-Maghrabi, Al-Taqraa, etc. However, the study classified the collected dataset images according to the handwriting styles and focused on only six types of handwriting styles that existed in the collected Arabic manuscripts. To reach our goal, we applied the MobileNet pre-trained deep learning model to our classified dataset images to automatically capture and extract their features. Afterward, we evaluated the performance of the developed model by computing its recorded evaluation metrics. We found that the MobileNet convolutional neural network is a promising technology, as it reached a highest recorded accuracy of 0.9583 and an average F-score of 0.9633.
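
A minimal transfer-learning sketch in the spirit of this abstract (the input size, added layers, and training call are assumptions; the authors' exact architecture is not specified here):

```python
import tensorflow as tf

NUM_STYLES = 6  # six handwriting styles, per the abstract

# Pre-trained MobileNet as a frozen feature extractor.
base = tf.keras.applications.MobileNet(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_STYLES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_split=0.1, epochs=10)
```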
APA, Harvard, Vancouver, ISO, and other styles
15

Mills, K. L., J. J. Filliben, and A. L. Haines. "Determining Relative Importance and Effective Settings for Genetic Algorithm Control Parameters." Evolutionary Computation 23, no. 2 (June 2015): 309–42. http://dx.doi.org/10.1162/evco_a_00137.

Full text
Abstract:
Setting the control parameters of a genetic algorithm to obtain good results is a long-standing problem. We define an experiment design and analysis method to determine relative importance and effective settings for control parameters of any evolutionary algorithm, and we apply this method to a classic binary-encoded genetic algorithm (GA). Subsequently, as reported elsewhere, we applied the GA, with the control parameter settings determined here, to steer a population of cloud-computing simulators toward behaviors that reveal degraded performance and system collapse. GA-steered simulators could serve as a design tool, empowering system engineers to identify and mitigate low-probability, costly failure scenarios. In the existing GA literature, we uncovered conflicting opinions and evidence regarding key GA control parameters and effective settings to adopt. Consequently, we designed and executed an experiment to determine relative importance and effective settings for seven GA control parameters, when applied across a set of numerical optimization problems drawn from the literature. This paper describes our experiment design, analysis, and results. We found that crossover most significantly influenced GA success, followed by mutation rate and population size and then by rerandomization point and elite selection. Selection method and the precision used within the chromosome to represent numerical values had least influence. Our findings are robust over 60 numerical optimization problems.
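
For context, a minimal binary-encoded GA showing the control parameters the study ranks (population size, crossover, mutation rate, elitism); the values here are placeholders, not the effective settings the authors determined:

```python
import random

# Control parameters under study (placeholder values).
POP_SIZE, CHROM_LEN = 50, 20
CROSSOVER_RATE, MUTATION_RATE, ELITE = 0.9, 0.01, 2

def fitness(bits):                      # toy objective: maximize the ones count
    return sum(bits)

def tournament(pop):
    return max(random.sample(pop, 3), key=fitness)

pop = [[random.randint(0, 1) for _ in range(CHROM_LEN)] for _ in range(POP_SIZE)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    nxt = [p[:] for p in pop[:ELITE]]   # elite selection
    while len(nxt) < POP_SIZE:
        a, b = tournament(pop), tournament(pop)
        if random.random() < CROSSOVER_RATE:          # one-point crossover
            cut = random.randrange(1, CHROM_LEN)
            a = a[:cut] + b[cut:]
        nxt.append([bit ^ (random.random() < MUTATION_RATE) for bit in a])
    pop = nxt

print("best fitness:", fitness(max(pop, key=fitness)))
```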
APA, Harvard, Vancouver, ISO, and other styles
16

Sabir, Zulqurnain, Thongchai Botmart, Muhammad Asif Zahoor Raja, and Wajaree Weera. "An advanced computing scheme for the numerical investigations of an infection-based fractional-order nonlinear prey-predator system." PLOS ONE 17, no. 3 (March 21, 2022): e0265064. http://dx.doi.org/10.1371/journal.pone.0265064.

Full text
Abstract:
The purpose of this study is to present the numerical investigations of an infection-based fractional-order nonlinear prey-predator system (FONPPS) using the stochastic procedures of the scaled conjugate gradient (SCG) along with artificial neural networks (ANNs), i.e., SCGNNs. The infection FONPPS is classified into three dynamics: susceptible density, infected prey, and predator population density. Three cases based on the fractional-order derivative have been numerically tested to solve the nonlinear infection-based disease. Data proportions of 75%, 10%, and 15% are applied for training, validation, and testing to solve the infection FONPPS. The numerical representations are obtained through the stochastic SCGNNs to solve the infection FONPPS, and the Adams-Bashforth-Moulton scheme is implemented to compare the results. The infection FONPPS is numerically treated using the stochastic SCGNN procedures to reduce the mean square error (MSE). To check the validity, consistency, exactness, competence, and capability of the proposed stochastic SCGNNs, numerical performance is also assessed using error histograms (EHs), correlation, MSE, regression, and state transitions (STs).
APA, Harvard, Vancouver, ISO, and other styles
17

Adekiya, Aruna Olasekan, Christopher Muyiwa Aboyeji, Oluwagbenga Dunsin, Ojo Vincent Adebiyi, and Oreoluwa Titilope Oyinlola. "Effect of Urea Fertilizer and Maize Cob Ash on Soil Chemical Properties, Growth, Yield, and Mineral Composition of Okra, Abelmoschus esculentus (L.) MOENCH." Journal of Horticultural Research 26, no. 1 (June 1, 2018): 67–76. http://dx.doi.org/10.2478/johr-2018-0008.

Full text
Abstract:
Field experiments were carried out at the Teaching and Research Farm, Landmark University, Omu-Aran, Kwara State, Nigeria, in the cropping seasons of 2015 and 2016. The soil at the site of the experiment is an Alfisol classified as an Oxic Haplustalf or a Luvisol. The trial consisted of sole and combined applications of urea fertilizer (U) applied at 0, 60, and 120 kg·ha−1 and maize cob ash (M) applied at 0, 3, and 6 t·ha−1. The results showed that U and M, alone or in combination, increased the soil chemical properties, growth, yield, and mineral composition of okra compared with the control. M alone at 3 t·ha−1 produced optimum soil chemical properties, yield, and mineral composition of okra fruit. U alone at 60 kg·ha−1 produced optimum yield of okra, while growth and mineral composition were increased when urea fertilizer was applied at 120 kg·ha−1. The treatment with U applied at 60 kg·ha−1 in combination with M applied at 3 t·ha−1 (U60M3) produced the highest values of okra yield, while U applied at 120 kg·ha−1 in combination with M applied at 3 t·ha−1 (U120M3) produced the highest growth and the highest N, K, Ca, Cu, and Fe contents of okra fruit. Compared with the control and using the mean of the two years, U60M3 increased okra fruit yield by 93.3%. Therefore, for viable production of okra in the low-nutrient soil of the Nigerian derived savanna or similar soils elsewhere, 60 kg·ha−1 U + 3 t·ha−1 M (U60M3) is recommended. However, for improved mineral quality of okra, 120 kg·ha−1 U + 3 t·ha−1 M (U120M3) is recommended.
APA, Harvard, Vancouver, ISO, and other styles
18

Adler, Alexander, and Udo Kebschull. "ANaN — ANalyse And Navigate: Debugging Compute Clusters with Techniques from Functional Programming and Text Stream Processing." EPJ Web of Conferences 245 (2020): 01041. http://dx.doi.org/10.1051/epjconf/202024501041.

Full text
Abstract:
Monitoring is an indispensable tool for the operation of any large installation of grid or cluster computing, be it in high energy physics or elsewhere. Usually, monitoring is configured to collect a small amount of data, just enough to enable detection of abnormal conditions. Once detected, the abnormal condition is handled by gathering all information from the affected components. This data is processed by querying it in a manner similar to a database. This contribution shows how the metaphor of a debugger (for software applications) can be transferred to a compute cluster. The concepts of variables, assertions and breakpoints that are used in debugging can be applied to monitoring by defining variables as the quantities recorded by monitoring and breakpoints as invariants formulated via these variables. It is found that embedding fragments of a data extraction and reporting tool such as the UNIX tool awk facilitates concise notations for commonly used variables, since tools like awk are designed to process large event streams (in textual representations) with bounded memory. A functional notation similar to both the pipe notation used in the UNIX shell and the point-free style used in functional programming simplifies the combination of variables that commonly occurs when formulating breakpoints.
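
Since this listing contains no code of its own, the sketch below uses Python rather than awk to illustrate the idea the abstract describes: a bounded-memory, one-pass aggregation over a textual monitoring stream, with a "breakpoint" expressed as an invariant over the aggregated variables (the field layout and the 8.0 threshold are hypothetical):

```python
import sys
from collections import defaultdict

# One pass over "host metric value" lines, keeping only running aggregates,
# so memory stays bounded regardless of stream length (awk-like behaviour).
count = defaultdict(int)
total = defaultdict(float)

for line in sys.stdin:
    host, metric, value = line.split()
    if metric == "load":
        count[host] += 1
        total[host] += float(value)

# "Breakpoint": flag hosts whose invariant (mean load < 8.0) is violated.
for host in count:
    mean = total[host] / count[host]
    if mean >= 8.0:
        print(f"breakpoint hit: {host} mean load {mean:.2f}")
```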
APA, Harvard, Vancouver, ISO, and other styles
19

Kolejka, Jaromír, and Martin Klimánek. "Identification and typology of Czech post-industrial landscapes on national level using GIS and publicly accessed geodatabases." Ekológia (Bratislava) 34, no. 2 (March 1, 2015): 121–36. http://dx.doi.org/10.1515/eko-2015-0013.

Full text
Abstract:
The post-industrial landscape (PIL) is a generally accepted phenomenon of the present world. Its features are fossil in comparison to those in operating industrial landscapes. The required knowledge about the position, size, shape and type of PILs will help decision makers plan their future. The paper deals with the selection of identification features of PILs. Applicable data must be related to four landscape structures: natural, economic (land use), social (human) and spiritual. Present Czech geodatabases contain data of sufficient quantity and quality that they can be interpreted as a source of PIL identification criteria. GIS technology was applied for data collection, geometric and format pre-processing, thematic reclassification and final processing. Using selected identification and classification criteria, 105 PILs were identified on the territory of the Czech Republic and classified into individual types. A SWOT analysis of the results was carried out to identify the reliability level of the data and the data processing. The identified PILs represent the primary results obtained in the Czech Republic. The GIS approach allows the procedure to be repeated elsewhere in EU member states because of the similarity of available geodatabases. Of course, an improvement of the classification procedure depends on the real situation in each country.
APA, Harvard, Vancouver, ISO, and other styles
20

Sahu, Tirath Prasad, and Sarang Khandekar. "A Machine Learning-Based Lexicon Approach for Sentiment Analysis." International Journal of Technology and Human Interaction 16, no. 2 (April 2020): 8–22. http://dx.doi.org/10.4018/ijthi.2020040102.

Full text
Abstract:
Sentiment analysis can be a very useful aspect of the extraction of useful information from text documents. The main idea of sentiment analysis is to determine what people think about a particular online item, i.e. product reviews, movie reviews, etc. Sentiment analysis is the process whereby these reviews are classified as positive or negative. The web is enriched with a huge amount of reviews which can be analyzed to make them meaningful. This article presents the use of lexicon resources for sentiment analysis of different publicly available reviews. First, the polarity shift of reviews is handled by negations. Intensifiers, punctuation and acronyms are also taken into consideration during the processing phase. Second, words are extracted which carry some opinion; these words are then used for computing a score. Third, machine learning algorithms are applied, and the experimental results show that the proposed model is effective in identifying the sentiments of reviews and opinions.
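
A minimal sketch of the lexicon scoring with negation and intensifier handling that this abstract outlines (the tiny lexicon and its weights are invented for illustration):

```python
# Toy lexicon: word -> polarity score (invented values).
LEXICON = {"good": 1.0, "great": 2.0, "bad": -1.0, "terrible": -2.0}
NEGATIONS = {"not", "never", "no"}
INTENSIFIERS = {"very": 1.5, "extremely": 2.0}

def score(review: str) -> float:
    """One-pass lexicon scoring with polarity shift on negation."""
    total, negate, boost = 0.0, False, 1.0
    for word in review.lower().split():
        if word in NEGATIONS:
            negate = True
        elif word in INTENSIFIERS:
            boost = INTENSIFIERS[word]
        elif word in LEXICON:
            s = LEXICON[word] * boost
            total += -s if negate else s
            negate, boost = False, 1.0   # shift applies to the next opinion word
    return total

print(score("not a good movie"))        # -1.0 -> negative
print(score("extremely great acting"))  #  4.0 -> positive
```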
APA, Harvard, Vancouver, ISO, and other styles
21

Zaitsev, Dmitry A., Tatiana R. Shmeleva, and David E. Probert. "Applying Infinite Petri Nets to the Cybersecurity of Intelligent Networks, Grids and Clouds." Applied Sciences 11, no. 24 (December 14, 2021): 11870. http://dx.doi.org/10.3390/app112411870.

Full text
Abstract:
Correctness of networking protocols represents the principal requirement of cybersecurity. Correctness of protocols is established via the procedures of their verification. A classical communication system includes a pair of interacting systems. Recent developments of computing and communication grids for radio broadcasting, cellular networks, communication subsystems of supercomputers, specialized grids for numerical methods and networks on chips require verification of protocols for any number of devices. For the analysis of computing and communication grid structures, a new class of infinite Petri nets has been introduced and studied for more than 10 years. Infinite Petri nets were also applied to simulating cellular automata. Rectangular, triangular and hexagonal grids on the plane, and the hypercube and hypertorus in multidimensional space, have been considered. Composing and solving infinite Diophantine systems of linear equations in parametric form allowed us to prove the protocol properties for any grid size and any number of dimensions. Software generators of infinite Petri net models have been developed. Special classes of graphs, such as the graph of packet transmission directions and the graph of blockings, have been introduced and studied. Complex deadlocks have been revealed and classified. In the present paper, infinite Petri nets are divided into the following two kinds: a single infinite construct and an infinite set of constructs of specified size (and number of dimensions). Finally, the paper discusses possible future work directions.
APA, Harvard, Vancouver, ISO, and other styles
22

Roodt, Sumarie, and Carina de Villiers. "Teaching Green Information Technology Inside and Outside the Classroom." International Journal of Innovation in the Digital Economy 3, no. 3 (July 2012): 60–71. http://dx.doi.org/10.4018/jide.2012070106.

Full text
Abstract:
One socio-economic and environmental challenge facing the leaders of tomorrow is how Green Information Technology can be applied effectively by organisations to contribute to the global green revolution. The author teaches 1500 undergraduate students yearly about Green Information Technology to influence awareness positively in terms of efficient ways that computer resources can be used. In order to facilitate this process, the author supplemented the theory component with a practical assignment leveraging a number of interactive learning tools, including: social networking, on-line collaboration, and 3-D programming. These tools can be classified as one of the components of social computing. Social computing is seen as the convergence of information technology with social behaviour, and the resulting interactions. The tools used include: Alice©, Facebook©, and pbWiki©. The students were tasked with creating an animation using Alice© teaching people about Green Information Technology. Upon completion of the assignment, a questionnaire was distributed in order to ascertain what their view of Green Information Technology was. This paper details the nature of the Green Information Technology teaching techniques that were employed and details the findings of the questionnaire. The paper merges theory and practical aspects of teaching Green IT and provides educators and researchers with insight in terms of interactive teaching tools that can be employed.
APA, Harvard, Vancouver, ISO, and other styles
23

Asemi, Asefeh, and Fezzeh Ebrahimi. "A Thematic Analysis of the Articles on the Internet of Things in the Web of Science With HAC Approach." International Journal of Distributed Systems and Technologies 11, no. 2 (April 2020): 1–17. http://dx.doi.org/10.4018/ijdst.2020040101.

Full text
Abstract:
This research was carried out using the bibliometric method to thematically analyze the articles on IoT in the Web of Science with a Hierarchical Agglomerative Clustering approach. First, the descriptors of the related articles published from 2002 to 2016 were extracted from WoS by conducting a keyword search using the "Internet of Things" keyword. Data analysis and clustering were carried out in SPSS, UCINET, and PreMap. The analysis results revealed that the scientific literature published on IoT during the period had grown exponentially, with an approximately 48% growth rate in the last two years of the study period (i.e. 2015 and 2016). After analyzing the themes of the documents, the resulting concepts were classified into twelve clusters. The twelve main clusters included: Privacy and Security, Authentication and Identification, Computing, Standards and Protocols, IoT as a component, Big Data, Architecture, Applied New Techniques in IoT, Application, Connection and Communication Tools, Wireless Network Protocols, and Wireless Sensor Networks.
APA, Harvard, Vancouver, ISO, and other styles
24

Kalimoldayev, Maksat, Aliya Kalizhanova, Waldemar Wójcik, Gulzhan Kashaganova, Saltanat Amirgaliyeva, Azhibek Dasibekov, Ainur Kozbakova, and Zhalau Aitkulov. "Research of the Spectral Characteristics of Apodized Fiber Bragg Gratings." ITM Web of Conferences 24 (2019): 01015. http://dx.doi.org/10.1051/itmconf/20192401015.

Full text
Abstract:
Fiber Bragg gratings (FBG) are widely used in different areas of state-of-the-art fiber optics. Every task imposes specific requirements on the FBG spectral characteristics, which are scheduled at the grating manufacturing stage. Manufacturing and using Bragg fiber-optic gratings is impossible without measuring their characteristics at every stage of manufacturing the gratings themselves and the devices based on them. To select the FBG's optimal parameters, we compare the SGW parameter across several of the most widely used apodization functions. Strict requirements are applied to FBG parameters during manufacturing. Recording or manufacturing of fiber Bragg gratings may be classified according to the type of laser being used, radiation wavelength, recording technique, irradiation material and grating type. The article is dedicated to techniques for computing and measuring the FBG's principal parameters; it is necessary to define the optimal parameters of the characteristics for quality grating operation.
APA, Harvard, Vancouver, ISO, and other styles
25

Onyelowe, Kennedy C., Fazal E. Jalal, Michael E. Onyia, Ifeanyichukwu C. Onuoha, and George U. Alaneme. "Application of Gene Expression Programming to Evaluate Strength Characteristics of Hydrated-Lime-Activated Rice Husk Ash-Treated Expansive Soil." Applied Computational Intelligence and Soft Computing 2021 (April 14, 2021): 1–17. http://dx.doi.org/10.1155/2021/6686347.

Full text
Abstract:
Gene expression programming has been applied in this work to predict the California bearing ratio (CBR), unconfined compressive strength (UCS), and resistance value (R value) of expansive soil treated with an improved composite of rice husk ash. Pavement foundations suffer failures due to poor design and construction, poor materials handling and utilization, and management lapses. The evolution of sustainable green materials and of optimization and soft computing techniques has been deployed to improve on the deficiencies suffered in the abovementioned areas of design and construction engineering. In this work, expansive soil classified as A-7-6 group soil was treated with hydrated-lime-activated rice husk ash (HARHA) in incremental proportions to produce 121 datasets, which were used to predict the behavior of the soil's strength parameters utilizing the mutative and evolutionary algorithms of GEP. The input parameters were HARHA, liquid limit (wL), plastic limit (wP), plasticity index (IP), optimum moisture content (wOMC), clay activity (AC), and maximum dry density (δmax), while CBR, UCS, and R value were the output parameters. A multiple linear regression (MLR) was also conducted on the datasets in addition to GEP to serve as a check mechanism. At the end of the computing and iterations, the MLR and GEP optimization methods proposed three equations corresponding to the output parameters of the work. The validation of the predicted models shows a good correlation above 0.9 and a great performance index. The predicted models' performance has shown that GEP soft computing can produce models usable in the design of CBR, UCS, and R value for soils used as foundation materials and treated with admixtures as a binding component.
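
A minimal sketch of the MLR check described in this abstract, fitting one linear model per output on the seven inputs (the column order and toy rows are hypothetical stand-ins, not the 121-row dataset):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical rows: [HARHA %, wL, wP, IP, wOMC, AC, delta_max] -> CBR
X = np.array([
    [0.0, 58.0, 24.0, 34.0, 18.0, 1.3, 1.75],
    [4.0, 52.0, 23.0, 29.0, 16.5, 1.1, 1.82],
    [8.0, 47.0, 22.0, 25.0, 15.0, 0.9, 1.88],
    [12.0, 43.0, 21.0, 22.0, 14.0, 0.8, 1.93],
])
y_cbr = np.array([6.0, 11.0, 17.0, 22.0])  # invented CBR values

mlr = LinearRegression().fit(X, y_cbr)
print("coefficients:", mlr.coef_)
print("R^2 on training data:", mlr.score(X, y_cbr))
# The study fits analogous models (and GEP equations) for UCS and R value.
```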
APA, Harvard, Vancouver, ISO, and other styles
26

Sittón-Candanedo, Inés, Ricardo S. Alonso, Óscar García, Ana B. Gil, and Sara Rodríguez-González. "A Review on Edge Computing in Smart Energy by means of a Systematic Mapping Study." Electronics 9, no. 1 (December 28, 2019): 48. http://dx.doi.org/10.3390/electronics9010048.

Full text
Abstract:
Context: Smart Energy is a disruptive concept that has led to the emergence of new energy policies, technology projects, and business models. The development of those models is driven by world capitals, companies, and universities. Their purpose is to make the electric power system more efficient through distributed energy generation/storage, smart meter installation, or reduction of consumption/implementation costs. This work approaches Smart Energy as a paradigm that is concerned with systemic strategies involving the implementation of innovative technological developments in energy systems. However, many of the challenges encountered under this paradigm are yet to be overcome, such as the effective integration of solutions within Smart Energy systems. Edge Computing is included in this new technology group. Objective: To investigate developments that involve the use of Edge Computing and that provide solutions to Smart Energy problems. The research work will be developed using the methodology of systematic mapping of literature, following the guidelines established by Kitchenham and Petersen that facilitate the identification of studies published on the subject. Results: Inclusion and exclusion criteria have been applied to identify the relevant articles. We selected 80 papers that were classified according to the type of publication (journal, conference, or book chapter), type of research (conceptual, experience, or validation), type of activity (implement, validate, analyze) and asset (architecture, framework, method, or models). Conclusion: A complete review has been conducted of the 80 articles that were closely related to the questions posed in this research. To reach the goal of building Edge Computing architectures for Smart Energy environments, several lines of research have been defined. In the future, such architectures will overcome current problems, becoming highly energy-efficient, cost-effective, and capable of processing and responding in real time.
APA, Harvard, Vancouver, ISO, and other styles
27

Vanhove, S., H. J. Lee, M. Beghyn, D. Van Gansbeke, S. Brockington, and M. Vincx. "The Metazoan Meiofauna in Its Biogeochemical Environment: The Case of an Antarctic Coastal Sediment." Journal of the Marine Biological Association of the United Kingdom 78, no. 2 (May 1998): 411–34. http://dx.doi.org/10.1017/s0025315400041539.

Full text
Abstract:
The metazoan meiobenthos was investigated in an Antarctic coastal sediment (Factory Cove, Signy Island, Antarctica). The fine sands contained much higher abundances compared to major sublittoral sediments worldwide. Classified second after Narragansett Bay (North Atlantic), they reached numbers of 13 × 10^6 ind m^-2. The meiofauna was highly abundant in the surface layers, but densities decreased sharply below 2 cm. Vertical profiles mirrored steep gradients of microbiota, chloropigments and organic matter and were coincident with chemical stratification. Spatial patchiness manifested especially in the surface layer. Nematodes dominated (up to 90%), and Aponema, Chromadorita, Diplolaimella, Daptonema, Microlaimus and Neochromadora constituted almost the entire community. Overall, the nematode fauna showed a strong similarity with fine sand communities elsewhere. The dominant trophic strategies were epistratum feeding and non-selective deposit feeding, but the applied classification for the feeding guild structure of the nematodes of Factory Cove is discussed. High standing stock, low diversity and shallow depth distribution may have occurred because of the high nutritive (chlorophyll exceeded 1000 mg m^-2 and constituted almost 50% of the organic pool) and reductive character of the benthic environment. These observations must have originated from the substantial input of fresh organic matter from phytoplankton and microphytobenthic production, typical for an Antarctic coastal ecosystem during the austral summer.
APA, Harvard, Vancouver, ISO, and other styles
28

Fardinpour, Mojgan, Alireza Sadeghi Milani, and Monire Norouzi. "Towards techniques, challenges and efforts of software as a service layer based on business applications in cloud environments." Kybernetes 49, no. 12 (January 4, 2020): 2993–3018. http://dx.doi.org/10.1108/k-07-2019-0520.

Full text
Abstract:
Purpose: Cloud computing is qualified to present proper limitless storage and computation resources to users as services throughout the internet. The Software as a Service (SaaS) layer is the key paradigm perspective in the software layer of cloud computing. SaaS is connected by business applications to access consumers on existing public, private and hybrid cloud models. The purpose of this paper is to present a discussion and analysis of the SaaS layer based on business applications in the cloud environment, in the form of a classical taxonomy, to recognize the existing techniques, challenges and efforts. Design/methodology/approach: Existing techniques, challenges and efforts are classified into four categories: platform-dependent, application-dependent, data-dependent and security-dependent mechanisms. The SaaS layer mechanisms are compared with each other according to important factors such as structural properties, quality of service metrics, applied algorithms and measurement tools. Findings: The benefits and weaknesses of each research study are analyzed. In the comparison results, the authors observed that the application-based method, the non-heuristic algorithms, and the business process method have the highest percentage of usage in this literature. Originality/value: The SaaS layer mechanisms based on business applications have some main features, such as high accessibility, compatibility, reusability and collaboration, to provide activated application and operation services for users with the help of Web browsers. A comprehensive analysis is presented, as the originality of this work, of the SaaS layer mechanisms based on business applications for the high level of the cloud environment, in which 46 peer-reviewed studies were considered.
APA, Harvard, Vancouver, ISO, and other styles
29

Nwokolo, Samuel, and Julie Ogbulezie. "A critical review of theoretical models for estimating global so-lar radiation between 2012-2016 in Nigeria." International Journal of Physical Research 5, no. 2 (September 16, 2017): 60. http://dx.doi.org/10.14419/ijpr.v5i2.8160.

Full text
Abstract:
Routine measurement of solar radiation is a vital requirement for surveys in agronomy, hydrology and ecology, for sizing photovoltaic or thermal solar systems, solar architecture and molten salt power plants, and for supplying energy to natural processes like photosynthesis and estimating their performance. However, measurements of global solar radiation are not available in most locations across Nigeria. During the past 5 years, numerous empirical models have been developed for several locations in Nigeria to estimate global solar radiation on the horizontal surface on both a daily and a monthly mean daily basis. As a result, various input parameters have been utilized and different functional forms used. This study aims at comparing, classifying and reviewing the empirical and soft computing models applied for estimating global solar radiation. The empirical models utilized so far were classified into eight main categories and presented based on the input parameters employed. The models were further reclassified into several main sub-classes and finally represented according to their year of development. On the whole, 145 empirical models and 42 functional forms, 8 artificial neural network models, 1 adaptive neuro-fuzzy inference system approach, and 1 autoregressive moving average method were recorded in the literature for estimating global solar radiation in Nigeria. This review should help solar-energy researchers identify the input parameters and functional forms widely employed up until now, and recognize their importance for estimating global solar radiation using soft computing and empirical models in several locations in Nigeria.
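
For concreteness, the most common sunshine-based functional form in this literature is the Angström-Prescott relation, given here as a representative example of the model categories reviewed, not as a formula quoted from the article:

```latex
% Angström-Prescott model for monthly mean daily global solar radiation:
% H   = global radiation on a horizontal surface,
% H_0 = extraterrestrial radiation,
% n   = measured sunshine hours, N = maximum possible sunshine hours,
% a,b = empirical coefficients fitted per location.
\frac{H}{H_0} = a + b\,\frac{n}{N}
```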
APA, Harvard, Vancouver, ISO, and other styles
30

Yuan, Ye, Zhi Qiang Huang, and Ze Min Cai. "Classification of Multi-Types of EEG Time Series Based on Embedding Dimension Characteristic Parameter." Key Engineering Materials 474-476 (April 2011): 1987–92. http://dx.doi.org/10.4028/www.scientific.net/kem.474-476.1987.

Full text
Abstract:
The detection of epileptic seizures from EEG signals, using embedding dimension as the input characteristic parameter of artificial neural networks, was studied in our previous research. The results of those experiments showed that an overall accuracy as high as 100% can be achieved for distinguishing normal and epileptic EEG time series. In this paper, the classification of multiple types of EEG time series based on embedding dimension as the input characteristic parameter of an artificial neural network is studied, and a probabilistic neural network (PNN) is employed as the classifier to compare the results with those obtained before. Cao's method is applied for computing the embedding dimension of normal and epileptic EEG time series. The results show that different types of EEG time series can be classified using the embedding dimension of the EEG time series as the characteristic parameter when the number of feature points exceeds some value; however, the accuracy is not yet satisfactory, and further work is needed to improve the classification accuracy.
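
A minimal sketch of Cao's method for estimating the minimal embedding dimension, as named in this abstract (the brute-force nearest-neighbour search, fixed delay, and toy signal are simplifying assumptions):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Delay vectors y_i = (x_i, x_{i+tau}, ..., x_{i+(dim-1)tau})."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def cao_e1(x, max_dim=8, tau=1):
    """E1(d) = E(d+1)/E(d); it saturates near 1 at the minimal embedding dim."""
    e = []
    for d in range(1, max_dim + 2):
        yd = delay_embed(x, d, tau)
        yd1 = delay_embed(x, d + 1, tau)
        m = len(yd1)
        a = np.empty(m)
        for i in range(m):
            # nearest neighbour of y_i in d dimensions (Chebyshev norm)
            dist = np.max(np.abs(yd[:m] - yd[i]), axis=1)
            dist[i] = np.inf
            j = int(np.argmin(dist))
            a[i] = np.max(np.abs(yd1[i] - yd1[j])) / max(dist[j], 1e-12)
        e.append(a.mean())
    e = np.array(e)
    return e[1:] / e[:-1]  # E1(d) for d = 1 .. max_dim

# Toy quasi-periodic signal standing in for an EEG segment.
t = np.arange(2000)
x = np.sin(0.02 * t) + 0.5 * np.sin(0.051 * t)
print(np.round(cao_e1(x, max_dim=6), 3))
```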
APA, Harvard, Vancouver, ISO, and other styles
31

Kimm, Geoff. "Actual and experiential shadow origin tagging: A 2.5D algorithm for efficient precinct-scale modelling." International Journal of Architectural Computing 18, no. 1 (December 26, 2019): 41–52. http://dx.doi.org/10.1177/1478077119895218.

Full text
Abstract:
This article describes a novel algorithm for built environment 2.5D digital model shadow generation that allows identities of shadowing sources to be efficiently precalculated. For any point on the ground, all sources of shadowing can be identified and are classified as actual or experiential obstructions to sunlight. The article justifies a 2.5D raster approach in the context of modelling of architectural and urban environments that has in recent times shifted from 2D to 3D, and describes in detail the algorithm which builds on precedents for 2.5D raster calculation of shadows. The algorithm is efficient and is applicable at even precinct scale in low-end computing environments. The simplicity of this new technique, and its independence of GPU coding, facilitates its easy use in research, prototyping and civic engagement contexts. Two research software applications are presented with technical details to demonstrate the algorithm’s use for participatory built environment simulation and generative modelling applications. The algorithm and its shadow origin tagging can be applied to many digital workflows in architectural and urban design, including those using big data, artificial intelligence or community participative processes.
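
A minimal sketch of 2.5D raster shadow testing in the spirit of the algorithm this abstract describes (the toy height grid, ray step, and sun handling are illustrative; the article's distinction between actual and experiential obstructions is not reproduced here):

```python
import math
import numpy as np

def shadow_source(heights, row, col, azimuth_deg, altitude_deg, cell=1.0):
    """March from a ground cell toward the sun over a 2.5D height grid;
    return the (row, col) of the first obstructing cell, or None if lit."""
    dx = math.sin(math.radians(azimuth_deg))
    dy = -math.cos(math.radians(azimuth_deg))     # row index grows southward
    rise = math.tan(math.radians(altitude_deg))   # height gained per unit run
    x, y, dist = float(col), float(row), 0.0
    while True:
        x, y, dist = x + dx, y + dy, dist + 1.0
        r, c = int(round(y)), int(round(x))
        if not (0 <= r < heights.shape[0] and 0 <= c < heights.shape[1]):
            return None                           # ray left the grid: cell is lit
        ray_height = heights[row, col] + dist * cell * rise
        if heights[r, c] > ray_height:
            return (r, c)                         # identity of the shadowing cell

# Toy precinct: a 10x10 ground plane with one 12 m tall building block.
grid = np.zeros((10, 10))
grid[4:6, 4:6] = 12.0
print(shadow_source(grid, row=8, col=5, azimuth_deg=0.0, altitude_deg=30.0))
```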
APA, Harvard, Vancouver, ISO, and other styles
32

Pandey, Saroj Kumar, Gaurav Kumar, Shubham Shukla, Ankit Kumar, Kamred Udham Singh, and Shambhu Mahato. "Automatic Detection of Atrial Fibrillation from ECG Signal Using Hybrid Deep Learning Techniques." Journal of Sensors 2022 (September 22, 2022): 1–11. http://dx.doi.org/10.1155/2022/6732150.

Full text
Abstract:
Among cardiac rhythm disorders, atrial fibrillation (AF) is one of the most deadly. ECG signals therefore play a crucial role in preventing cardiovascular disease by promptly detecting atrial fibrillation in a patient. Unfortunately, reliable automatic AF detection in clinical settings remains difficult. Today, deep learning is a potent tool for complex data analysis since it requires little pre- and postprocessing. As a result, several machine learning and deep learning approaches have recently been applied to ECG data to diagnose AF automatically. This study analyses electrocardiogram (ECG) data from the PhysioNet/Computing in Cardiology (CinC) Challenge 2017 to differentiate between atrial fibrillation (AF) and three other rhythms: normal, other, and too noisy for assessment. The ECG data, including AF rhythm, were classified using a novel model based on a combination of traditional machine learning techniques and deep neural networks. To categorize AF rhythms from ECG data, this hybrid model combined a convolutional neural network (Residual Network (ResNet)) with a Bidirectional Long Short Term Memory (BLSTM) network and a Radial Basis Function (RBF) neural network. Both the F1-score and the accuracy of the final hybrid model are relatively high, coming in at 0.80 and 0.85, respectively.
APA, Harvard, Vancouver, ISO, and other styles
33

Devi, Rias Kumalasari, Dana Indra Sensuse, Kautsarina, and Ryan Randy Suryono. "Information Security Risk Assessment (ISRA): A Systematic Literature Review." Journal of Information Systems Engineering and Business Intelligence 8, no. 2 (October 29, 2022): 207–17. http://dx.doi.org/10.20473/jisebi.8.2.207-217.

Full text
Abstract:
Background: Information security is essential for organisations, hence the need for risk assessment. Information security risk assessment (ISRA) identifies, assesses, and prioritizes risks according to organisational goals. Previous studies have analysed and discussed information security risk assessment; therefore, it is necessary to understand the models more systematically. Objective: This study aims to determine the types of ISRA and to fill a gap in literature review research by categorizing existing frameworks, models, and methods. Methods: The systematic literature review (SLR) approach developed by Kitchenham is applied in this research. A total of 25 studies were selected, classified, and analysed according to defined criteria. Results: Most selected studies focus on implementing and developing new models for risk assessment. In addition, most are related to information systems in general. Conclusion: The findings show that there is no single best framework or model, because the best framework needs to be tailored to organisational goals. Previous researchers have developed several new ISRA models, but empirical evaluation research is needed. Future research needs to develop more robust models for risk assessment of cloud computing systems. Keywords: Information Security Risk Assessment, ISRA, Security Risk
APA, Harvard, Vancouver, ISO, and other styles
34

Yoo, Hyun, Soyoung Han, and Kyungyong Chung. "A Frequency Pattern Mining Model Based on Deep Neural Network for Real-Time Classification of Heart Conditions." Healthcare 8, no. 3 (July 26, 2020): 234. http://dx.doi.org/10.3390/healthcare8030234.

Full text
Abstract:
Recently, a massive amount of bioinformation big data has been collected by sensor-based IoT devices. The collected data are also classified into different types of health big data using various techniques. A personalized analysis technique is a basis for judging the risk factors of personal cardiovascular disorders in real time. The objective of this paper is to provide a model for personalized heart condition classification, combining a fast and effective preprocessing technique with a deep neural network, in order to process real-time accumulated biosensor input data. The model can be useful to learn input data and develop an approximation function, and it can help users recognize risk situations. For the analysis of the pulse frequency, a fast Fourier transform is applied in the preprocessing work. With the use of the frequency-by-frequency ratio data of the extracted power spectrum, data reduction is performed. To analyze the meaning of the preprocessed data, a neural network algorithm is applied. In particular, a deep neural network is used to analyze and evaluate linear data. A deep neural network can stack multiple layers and can establish an operation model of nodes with the use of gradient descent. The completed model was trained by classifying the ECG signals collected in advance into normal, control, and noise groups. Thereafter, the ECG signal input in real time through the trained deep neural network system was classified into normal, control, and noise. To evaluate the performance of the proposed model, this study utilized the ratio of data operation cost reduction and the F-measure. As a result, with the use of the fast Fourier transform and cumulative frequency percentage, the size of the ECG data was reduced at a ratio of 1:32. According to the analysis of the F-measure of the deep neural network, the model had 83.83% accuracy. Given the results, the modified deep neural network technique can reduce the size of big data in terms of computing work, and it is an effective system for reducing operation time.
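
A minimal sketch of the FFT preprocessing and frequency-ratio reduction step described here (the sampling rate, band edges, and synthetic signal are illustrative assumptions):

```python
import numpy as np

FS = 256  # assumed sampling rate in Hz

# Synthetic one-second "ECG-like" segment standing in for biosensor input.
t = np.arange(FS) / FS
signal = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 25 * t)

# Power spectrum via FFT (positive-frequency half only).
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(len(signal), d=1 / FS)

# Reduce to per-band power ratios: a handful of features instead of FS samples.
bands = [(0, 4), (4, 8), (8, 16), (16, 32), (32, 64), (64, 128)]
total = spectrum.sum()
features = [spectrum[(freqs >= lo) & (freqs < hi)].sum() / total
            for lo, hi in bands]
print(np.round(features, 3))  # compact input vector for the neural network
```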
APA, Harvard, Vancouver, ISO, and other styles
35

Radha, S., and C. Nelson Kennedy Babu. "An Enhancement of Cloud Based Sentiment Analysis and BDAAs Using SVM Based Lexicon Dictionary and Adaptive Resource Scheduling." Journal of Computational and Theoretical Nanoscience 15, no. 2 (February 1, 2018): 437–45. http://dx.doi.org/10.1166/jctn.2018.7107.

Full text
Abstract:
At present, cloud computing is an emerging technology for processing large data sets capably. With rapid data growth, large-scale data processing has become central to information systems, and customers can assess the quality of product brands using the information provided by new digital marketing channels in social media. Every enterprise therefore needs to find and analyze a large amount of digital data in order to develop its reputation among customers. In this paper, SLA (Service Level Agreement)-based BDAAs (Big Data Analytic Applications) using adaptive resource scheduling, combined with cloud-based sentiment analysis of big data, are proposed to provide deep web mining and QoS and to analyze customer behavior toward products. A spatio-temporal compression technique is applied to reduce the volume of big data. Using an SVM with a lexicon dictionary, customer opinions about a brand or product are classified as positive, negative, or neutral. In cloud computing environments, reducing resource costs while coping with the fluctuating resource requirements of BDAAs is complex; a common Analytics as a Service (AaaS) platform is therefore needed that delivers BDAAs to customers in different fields as easy-to-use, lower-cost services. Accordingly, SLA-based BDAAs are developed to apply adaptive resource scheduling based on customer behavior, providing visualization and data integrity; the method also protects the cloud owner's information through data integrity and authentication. Experimental results show that the proposed cloud-based sentiment analysis method for online products classifies customer opinions accurately and that the algorithm is effective in guaranteeing the SLA.
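A minimal sketch of the lexicon-based SVM classification step, assuming a toy lexicon and toy reviews; restricting the TF-IDF vocabulary to the lexicon is one plausible reading of "SVM with lexicon dictionary", not the paper's exact system.

# Sketch: lexicon-restricted TF-IDF features feeding a linear SVM
# (positive / negative / neutral). Lexicon and samples are toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

lexicon = ["good", "great", "love", "bad", "poor", "hate", "ok", "average"]
reviews = ["love this product", "poor build, bad value", "ok, average brand"]
labels  = ["positive", "negative", "neutral"]

model = make_pipeline(
    TfidfVectorizer(vocabulary=lexicon),   # only lexicon terms are scored
    LinearSVC(),
)
model.fit(reviews, labels)
print(model.predict(["great value, love it"]))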
APA, Harvard, Vancouver, ISO, and other styles
36

Min, Li, Yang Xin, and Xiong Liyang. "POINT CLOUD ORIENTED SHOULDER LINE EXTRACTION IN LOESS HILLY AREA." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 9, 2016): 279–82. http://dx.doi.org/10.5194/isprs-archives-xli-b3-279-2016.

Full text
Abstract:
The shoulder line is a significant feature in the hilly areas of the Loess Plateau in China, dividing the surface into positive and negative terrain (P-N terrains). Because the point cloud vegetation removal methods for P-N terrains differ, shoulder line extraction is imperative. In this paper, we propose an automatic shoulder line extraction method based on point clouds. The workflow is as follows: (i) ground points are selected using a grid filter in order to remove most noisy points; (ii) based on a DEM interpolated from those ground points, slope is mapped and classified into two classes (P-N terrains) using the Natural Breaks classification method; (iii) the common boundary between the two slope classes is extracted as the shoulder line candidate; (iv) the filter grid size is adjusted and steps i-iii are repeated until the shoulder line candidate matches its real location; (v) the shoulder line of the whole area is generated. The test area is located in Madigou, Jingbian County, Shaanxi Province, China. A total of 600 million points were acquired in the 0.23 km2 test area using a Riegl VZ400 3D laser scanner in August 2014. Owing to limited computing performance, the test area was divided into 60 blocks, and 13 blocks around the shoulder line were selected for optimizing the filter grid size. The experimental results show that the optimal filter grid size varies across sample areas and that a power function relates filter grid size to point density. The optimal grid size was determined from this relation, and the shoulder lines of all 60 blocks were then extracted. Compared with manual interpretation results, the overall accuracy reaches 85%. This method can be applied to shoulder line extraction in hilly areas, which is crucial for point cloud denoising and high-accuracy DEM generation.
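Step (i) of the workflow, the grid filter for ground-point selection, can be sketched as follows. Keeping the lowest point per cell is one common filtering choice assumed here, and the cell size is the tuning parameter the paper optimizes per block.

# Sketch: keep the lowest point per grid cell as a ground candidate.
# `grid` (cell size in metres) is an assumed tuning parameter.
import numpy as np

def ground_filter(points, grid=1.0):
    """points: (N, 3) array of x, y, z. Returns ground-candidate points."""
    keys = np.floor(points[:, :2] / grid).astype(np.int64)
    ground = {}
    for key, p in zip(map(tuple, keys), points):
        if key not in ground or p[2] < ground[key][2]:
            ground[key] = p                 # lowest z wins in each cell
    return np.array(list(ground.values()))

pts = np.random.default_rng(1).uniform(0, 10, size=(1000, 3))
print(ground_filter(pts, grid=2.0).shape)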
APA, Harvard, Vancouver, ISO, and other styles
37

Wang, Nan, Shanwu Sun, Ying Liu, and Senyue Zhang. "Business Process Model Abstraction Based on Fuzzy Clustering Analysis." International Journal of Cooperative Information Systems 28, no. 03 (September 2019): 1950007. http://dx.doi.org/10.1142/s0218843019500072.

Full text
Abstract:
The most prominent Business Process Model Abstraction (BPMA) use case is the construction of a process "quick view" for rapidly comprehending a complex process. Researchers have proposed various process abstraction methods to aggregate activities, most of which are based on k-means hard clustering. This paper focuses on a limitation of hard clustering: it cannot identify special activities (called "edge activities" in this paper), and every activity must be assigned to some subprocess. A new method is proposed to classify activities based on fuzzy clustering, which generates a fuzzy matrix by computing the possibility of each activity belonging to each subprocess; from this matrix, the edge activities can be located. Considering the structural correlation of activities within subprocesses, an approach is provided to generate the initial clusters based on the close-connection characteristics of subprocesses. A hard partition algorithm is proposed to classify the edge activities; it evaluates the generated abstract models according to a new index designed around the control-flow order-preserving requirement, and the evaluation results guide each edge activity to the optimal hard partition. The proposed method is applied to a process model repository in use. The results verify the validity of the virtual-document-based measurement for generating the fuzzy matrix, and the threshold parameter for computing the fuzzy matrix is mined from a real-world process model collection enriched with human-designed subprocesses. Furthermore, a comparison between the proposed method and k-means clustering shows that our approach more closely approximates the modelers' decisions in clustering activities, contributing to modeling support for effective process model abstraction.
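The fuzzy-matrix idea can be sketched with the standard fuzzy c-means membership formula; the fuzzifier m, the 0.1 ambiguity margin, and the initial centres below are assumptions for illustration, not the paper's settings.

# Sketch: fuzzy membership matrix for activities vs. subprocess centres,
# flagging "edge activities" whose top two memberships are nearly equal.
import numpy as np

def fuzzy_memberships(X, centres, m=2.0, eps=1e-9):
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + eps
    inv = d ** (-2.0 / (m - 1.0))          # standard FCM membership kernel
    return inv / inv.sum(axis=1, keepdims=True)   # rows sum to 1

X = np.random.default_rng(2).normal(size=(12, 4))  # activity feature vectors
centres = X[[0, 5, 9]]                              # assumed initial clusters
U = fuzzy_memberships(X, centres)
top2 = np.sort(U, axis=1)[:, -2:]
edge = np.where(top2[:, 1] - top2[:, 0] < 0.1)[0]   # ambiguous activities
print("edge activities:", edge)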
APA, Harvard, Vancouver, ISO, and other styles
38

Venkatappa, Sasaki, Shrestha, Tripathi, and Ma. "Determination of Vegetation Thresholds for Assessing Land Use and Land Use Changes in Cambodia using the Google Earth Engine Cloud-Computing Platform." Remote Sensing 11, no. 13 (June 26, 2019): 1514. http://dx.doi.org/10.3390/rs11131514.

Full text
Abstract:
As more data and technologies become available, it is important that a simple method is developed for the assessment of land use changes because of the global need to understand the potential climate mitigation that could result from a reduction in deforestation and forest degradation in the tropics. Here, we determined the threshold values of vegetation types to classify land use categories in Cambodia through the analysis of phenological behaviors and the development of a robust phenology-based threshold classification (PBTC) method for the mapping and long-term monitoring of land cover changes. We accessed 2199 Landsat collections using Google Earth Engine (GEE) and applied the Enhanced Vegetation Index (EVI) and harmonic regression methods to identify phenological behaviors of land cover categories during the leaf-shedding phenology (LSP) and leaf-flushing phenology (LFS) seasons. We then generated 722 mean phenology EVI profiles for 12 major land cover categories and determined the threshold values for selected land cover categories in the mid-LSP season. The PBTC pixel-based classified map was validated using very high-resolution (VHR) imagery. We obtained a cumulative overall accuracy of more than 88% and a cumulative overall accuracy of the referenced forest cover of almost 85%. These high accuracy values suggest that the very first PBTC map can be useful for estimating the activity data, which are critically needed to assess land use changes and related carbon emissions under the Reducing Emissions from Deforestation and forest Degradation (REDD+) scheme. We found that GEE cloud-computing is an appropriate tool to use to access remote sensing big data at scale and at no cost.
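A simplified sketch of the EVI computation and threshold classification follows, with placeholder band values and thresholds; the paper derives its actual thresholds from 722 mean phenology profiles, and the class names here are illustrative.

# Sketch: per-pixel EVI followed by mid-LSP threshold classification.
import numpy as np

def evi(nir, red, blue):
    # Standard EVI: G=2.5, C1=6, C2=7.5, L=1
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

rng = np.random.default_rng(3)
nir, red, blue = (rng.uniform(0.0, 0.6, (4, 4)) for _ in range(3))
v = evi(nir, red, blue)

# Assumed example thresholds: higher mid-LSP EVI -> denser vegetation.
classes = np.select([v < 0.2, v < 0.45], ["cropland", "shrub"], "forest")
print(classes)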
APA, Harvard, Vancouver, ISO, and other styles
39

Kelly, Matthew, and Yuriy Kuleshov. "Flood Hazard Assessment and Mapping: A Case Study from Australia’s Hawkesbury-Nepean Catchment." Sensors 22, no. 16 (August 19, 2022): 6251. http://dx.doi.org/10.3390/s22166251.

Full text
Abstract:
Floods are among the costliest natural hazards, in Australia and globally. In this study, we used an indicator-based method to assess flood hazard risk in Australia’s Hawkesbury-Nepean catchment (HNC). Australian flood risk assessments are typically spatially constrained through the common use of resource-intensive flood modelling. The large spatial scale of this study area is the primary element of novelty in this research. The indicators of maximum 3-day precipitation (M3DP), distance to river—elevation weighted (DREW), and soil moisture (SM) were used to create the final Flood Hazard Index (FHI). The 17–26 March 2021 flood event in the HNC was used as a case study. It was found that almost 85% of the HNC was classified by the FHI at ‘severe’ or ‘extreme’ level, illustrating the extremity of the studied event. The urbanised floodplain area in the central-east of the HNC had the highest FHI values. Conversely, regions along the western border of the catchment had the lowest flood hazard risk. The DREW indicator strongly correlated with the FHI. The M3DP indicator displayed strong trends of extreme rainfall totals increasing towards the eastern catchment border. The SM indicator was highly variable, but featured extreme values in conservation areas of the HNC. This study introduces a method of large-scale proxy flood hazard assessment that is novel in an Australian context. A proof-of-concept methodology of flood hazard assessment developed for the HNC is replicable and could be applied to other flood-prone areas elsewhere.
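As a hedged illustration of indicator-based hazard indexing, the sketch below normalises three indicator rasters and combines them into an index; the equal weights and class breaks are assumptions for demonstration, not the study's values.

# Sketch: combining normalised indicators into a flood hazard index.
import numpy as np

def normalise(a):
    return (a - a.min()) / (a.max() - a.min())

rng = np.random.default_rng(4)
m3dp, drew, sm = (rng.uniform(size=(5, 5)) for _ in range(3))
fhi = (normalise(m3dp) + normalise(drew) + normalise(sm)) / 3.0

labels = ["very low", "low", "moderate", "severe", "extreme"]
klass = np.digitize(fhi, [0.2, 0.4, 0.6, 0.8])   # assumed class breaks
print(np.array(labels)[klass])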
APA, Harvard, Vancouver, ISO, and other styles
40

Indraratna, B., P. Nutalaya, K. S. Koo, and N. Kuganenthira. "Engineering behaviour of a low carbon, pozzolanic fly ash and its potential as a construction fill." Canadian Geotechnical Journal 28, no. 4 (August 1, 1991): 542–55. http://dx.doi.org/10.1139/t91-070.

Full text
Abstract:
Detailed laboratory investigations were conducted on Mae Moh fly ash from northern Thailand to determine its grain size distribution, mineralogy, pozzolanic activity, compaction and strength characteristics, and collapse potential. On the basis of the experimental results, this fly ash is classified as ASTM class C, which is considered pozzolanic. It has good potential to be utilized as an effective fill for embankments (roads and dams), airfields, pavements, and building bricks, as well as for the stabilization of compressible or erodible foundations. Because Mae Moh fly ash contains only a negligible amount of unburned carbon, its pozzolanic reactivity is accelerated in comparison with the relatively inert, high-carbon fly ash produced elsewhere in Thailand and in many other parts of Asia. It is also demonstrated that Mae Moh fly ash can be easily compacted to produce acceptable dry densities over a wide range of water contents. Curing with an adequate moisture supply in the presence of calcium oxide plays an important role in accelerating the pozzolanic reactions, hence improving the time-dependent properties. This study further proposes that a curing period of 2–3 weeks is sufficient for this material to approach its maximum strength. Although the behaviour of one specific fly ash cannot generalize the wide array of other ashes, the test results obtained for Mae Moh fly ash may be applied to lignite ashes in the ASTM class C category. Key words: fly ash, structural fill, compaction, compressive strength, shear strength, collapse potential, pozzolanic activity.
APA, Harvard, Vancouver, ISO, and other styles
41

Rus Makovec, Maja, Neli Vintar, and Samo Makovec. "Self – Reported Depression, Anxiety and Evaluation of Own Pain in Clinical Sample of Patients with Different Location of Chronic Pain / Samoocenjena Depresivnost in Anksioznost Ter Evalvacija Lastne Bolečine V Kliničnem Vzorcu Pacientov Z Različno Lokacijo Kronične Bolečine." Slovenian Journal of Public Health 54, no. 1 (March 1, 2015): 1–10. http://dx.doi.org/10.1515/sjph-2015-0001.

Full text
Abstract:
Background. Depression, anxiety and chronic pain are frequently co-occurring disorders; patients with these mental disorders experience more intense pain that lasts longer. Method. A questionnaire with 228 variables was administered to 109 randomly chosen patients treated at an outpatient clinic for chronic pain at the University Clinical Centre Ljubljana from March to June 2013; 87 patients responded (79.8%). In the discriminant analysis, the location of pain by diagnosis (soft tissue disorders; headache; symptoms not elsewhere classified; back pain) was the criterion, with the following summative scores as predictors: level of depression and anxiety (the Zung Self-Rating Depression/Anxiety Scale), evaluation of pain, and perception of being threatened in social relations. Results. The average age of participants was M = 52.7 years (SD 13.9); 70.9% were female and 29.1% male. 63% of respondents reached a clinically important level of depression and 54% a clinically important level of anxiety. At the univariate level, the highest levels of depression and anxiety were found for back pain and the lowest for headache. No significant difference was found in the evaluation of pain or in perceptions of being threatened in social relations with regard to pain location. Within the discriminant analysis, self-evaluated depression carried the largest weight in differentiating between pain locations. Conclusion. Different locations of pain are differently associated with mood levels. At a preliminary level, the results indicate the need to consider mental experience in the treatment of chronic pain.
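A minimal sketch of a discriminant analysis of this form, on synthetic data; the variable names, group coding, and sample values are illustrative only, not the study's data.

# Sketch: linear discriminant analysis with pain location as the criterion
# and summative scores as predictors. All data here are synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(5)
X = rng.normal(size=(87, 4))        # depression, anxiety, pain, threat scores
y = rng.integers(0, 4, size=87)     # 0=soft tissue, 1=headache, 2=NEC, 3=back

lda = LinearDiscriminantAnalysis().fit(X, y)
# Coefficient magnitudes hint at each predictor's discriminating weight.
print(lda.coef_.round(2))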
APA, Harvard, Vancouver, ISO, and other styles
42

Zhai, Xuesong, Xiaoyan Chu, Ching Sing Chai, Morris Siu Yung Jong, Andreja Istenic, Michael Spector, Jia-Bao Liu, Jing Yuan, and Yan Li. "A Review of Artificial Intelligence (AI) in Education from 2010 to 2020." Complexity 2021 (April 20, 2021): 1–18. http://dx.doi.org/10.1155/2021/8812542.

Full text
Abstract:
This study provides a content analysis of studies aiming to disclose how artificial intelligence (AI) has been applied to the education sector and to explore potential research trends and challenges of AI in education. A total of 100 papers, including 63 empirical papers (74 studies) and 37 analytic papers, were selected from the education and educational research category of the Social Sciences Citation Index database from 2010 to 2020. The content analysis showed that the research questions could be classified into a development layer (classification, matching, recommendation, and deep learning), an application layer (feedback, reasoning, and adaptive learning), and an integration layer (affective computing, role-playing, immersive learning, and gamification). Moreover, four research trends, including the Internet of Things, swarm intelligence, deep learning, and neuroscience, as well as assessment of AI in education, were suggested for further investigation. We also identified challenges that AI may pose for education, including inappropriate use of AI techniques, the changing roles of teachers and students, and social and ethical issues. The results provide an overview of AI in the education domain, helping to strengthen the theoretical foundation of AI in education and offering a promising channel for educators and AI engineers to carry out further collaborative research.
APA, Harvard, Vancouver, ISO, and other styles
43

Chen, Yan, Vitaliy Marchenko, and Robert F. Rogers. "Joint Probability-Based Neuronal Spike Train Classification." Computational and Mathematical Methods in Medicine 10, no. 3 (2009): 229–39. http://dx.doi.org/10.1080/17486700802448615.

Full text
Abstract:
Neuronal spike trains are used by the nervous system to encode and transmit information. Euclidean distance-based methods (EDBMs) have been applied to quantify the similarity between temporally discretized spike trains and model responses. In this study, using the same discretization procedure, we developed and applied a joint probability-based method (JPBM) to classify individual spike trains of slowly adapting pulmonary stretch receptors (SARs). The activity of individual SARs was recorded in anaesthetized, paralysed adult male rabbits, which were artificially ventilated at a constant rate and at one of three different volumes. Two-thirds of the responses to the 600 stimuli presented at each volume were used to construct three response models (one for each stimulus volume) consisting of a series of time bins, each with spike probabilities. The remaining one-third of the responses were used as test responses to be classified into one of the three model responses. This was done by computing the joint probability of observing the same series of events (spikes or no spikes, dictated by the test response) in a given model and determining which of the three probabilities was highest. The JPBM generally produced better classification accuracy than the EDBM, and both performed well above chance. Both methods were similarly affected by variations in discretization parameters, response epoch duration, and two different response alignment strategies. Increasing bin widths increased classification accuracy, which also improved with increased observation time, but primarily during periods of increasing lung inflation. Thus, the JPBM is a simple and effective method for performing spike train classification.
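The classification rule is easy to sketch: for each model, compute the joint (log) probability of the observed spike/no-spike sequence under that model's bin probabilities and pick the largest. The bin count and probability profiles below are illustrative placeholders.

# Sketch: joint probability-based classification of a binned spike train.
import numpy as np

def log_joint(test_bins, model_p, eps=1e-6):
    """Log P(observed spike/no-spike sequence | model bin probabilities)."""
    p = np.clip(model_p, eps, 1.0 - eps)
    return np.sum(test_bins * np.log(p) + (1 - test_bins) * np.log(1 - p))

models = {v: np.random.default_rng(v).uniform(size=40)   # one per volume
          for v in (1, 2, 3)}
test = (np.random.default_rng(9).uniform(size=40) > 0.5).astype(int)

scores = {v: log_joint(test, p) for v, p in models.items()}
print("classified as volume", max(scores, key=scores.get))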
APA, Harvard, Vancouver, ISO, and other styles
44

Bagherian, Mohammad Ali, Kamyar Mehranzamir, Amin Beiranvand Pour, Shahabaldin Rezania, Elham Taghavi, Hadi Nabipour-Afrouzi, Mohammad Dalvi-Esfahani, and Seyed Morteza Alizadeh. "Classification and Analysis of Optimization Techniques for Integrated Energy Systems Utilizing Renewable Energy Sources: A Review for CHP and CCHP Systems." Processes 9, no. 2 (February 12, 2021): 339. http://dx.doi.org/10.3390/pr9020339.

Full text
Abstract:
Energy generation and its utilization are bound to increase in the coming years, resulting in accelerating depletion of fossil fuels and, consequently, undeniable damage to our environment. Over the past decade, despite significant efforts in renewable energy development for electricity generation, carbon dioxide emissions have been increasing rapidly. This is because there is a need to go beyond the power sector and target energy generation in an integrated manner. In this regard, energy systems integration is a concept that examines how different energy systems, or forms, can be connected in order to provide value for consumers and producers. Cogeneration and trigeneration are the two most well-established technologies capable of producing two or three different forms of energy simultaneously within a single system. Integrated energy systems make a very strong proposition since they result in energy savings, fuel diversification, and the supply of cleaner energy. Optimization of such systems can be carried out using several techniques with regard to different objective functions. In this study, a variety of optimization methods offering performance improvements, with or without constraints, are demonstrated, pinpointing the characteristics of each method along with detailed statistical reports. In this context, optimization techniques are classified into two primary groups: unconstrained and constrained optimization techniques. Further, the potential applications of evolutionary computing to the optimization of Integrated Energy Systems (IESs) utilizing renewable energy sources, particularly Combined Heat and Power (CHP) and Combined Cooling, Heating, and Power (CCHP), are reviewed thoroughly. It is shown that the use of classical optimization methods is fading out, being replaced by evolutionary computing techniques. Among modern heuristic algorithms, each method has contributed most to a certain application: while the Genetic Algorithm (GA) was favored for thermoeconomic optimization, Particle Swarm Optimization (PSO) was mostly applied for economic improvements. Given the mathematical nature and constraint-satisfaction properties of Mixed-Integer Linear Programming (MILP), this method is gaining prominence for scheduling applications in energy systems.
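For illustration, here is a bare-bones particle swarm optimiser of the kind the review surveys; the sphere objective and all hyperparameters (inertia w, acceleration coefficients c1 and c2, swarm size) are placeholder choices, not values from any reviewed study.

# Sketch: minimal PSO minimising an unconstrained objective.
import numpy as np

def pso(f, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))            # particle positions
    v = np.zeros((n, dim))                      # particle velocities
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()]                    # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()]
    return g, pval.min()

print(pso(lambda p: np.sum(p ** 2)))    # minimise the sphere function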
APA, Harvard, Vancouver, ISO, and other styles
45

Redkina, N. S. "Global trends of libraries development: optimism vs pessimism (foreign literature review) Part 1." Bibliosphere, no. 4 (December 30, 2018): 87–94. http://dx.doi.org/10.20913/1815-3186-2018-4-87-94.

Full text
Abstract:
The dynamic development of the external technological environment, on the one hand, challenges libraries and calls their future existence into question; on the other, it helps libraries work more productively, increases their competitiveness and efficiency, expands the range of social projects, and develops new ways and forms of working with users that take into account their preferences in information and services. The review is based on over 500 articles found in the world's largest databases (Google Scholar, Web of Science, Scopus, etc.) that discuss trends in and the future development of libraries. The documents were classified by section and type of library, as well as by advanced technology. Examples of information technologies were collected and reviewed, along with articles on the implementation of information technologies in creating new services, with emphasis on those that may affect libraries in the future. The latest information technologies applicable to the next-generation library were studied. The material is structured in blocks and presented in two parts. The first part presents the following sections: 1) challenges of the external environment and the future of libraries; 2) modern information technologies in library development (mobile technologies and applications, cloud computing, big data, the Internet of Things, virtual and augmented reality, technical innovations, etc.); 3) the Library 4.0 concept as a new direction for library development. The second part of the review article (Bibliosphere, 2019, no. 1) will address the following issues: 1) user preferences and new library services (software for information literacy development, research data management, web archiving, etc.); 2) libraries as centres of intellectual leisure, communication platforms, and places for learning, co-working, renting equipment, creativity, work, scientific experiments and leisure; 3) smart buildings and smart libraries; 4) future optimism. Based on a content analysis of the publications, it is concluded that libraries should not only accumulate resources and provide access to them but also renew existing approaches to the forms and content of their activities, as well as to the goals, missions and prospects of their development, using various hardware and software, cloud computing technologies, mobile technologies and apps, social networks, etc.
APA, Harvard, Vancouver, ISO, and other styles
46

Elewa, Hossam, Martina Zelenakova, and Ahmed Nosair. "Integration of the Analytical Hierarchy Process and GIS Spatial Distribution Model to Determine the Possibility of Runoff Water Harvesting in Dry Regions: Wadi Watir in Sinai as a Case Study." Water 13, no. 6 (March 15, 2021): 804. http://dx.doi.org/10.3390/w13060804.

Full text
Abstract:
Runoff water harvesting (RWH) is considered an important tool for overcoming water scarcity in arid and semi-arid regions. The present work focuses on identifying potential RWH sites in the Wadi Watir watershed in the south-eastern Sinai Peninsula. This was carried out through the integration of the analytical hierarchy process (AHP), a distributed spatial model, a geographical information system (GIS), the watershed modeling system (WMS), and remote sensing (RS) techniques. This integration of modern research tools supports the accurate identification of optimum RWH sites, which can be relied upon in development planning for arid environments. Eight effective RWH parameters were chosen for the multi-parametric decision spatial model (MPDSM): overland flow distance, volume of the annual flood, drainage density, maximum flow distance, infiltration number, watershed slope, watershed area, and watershed length. These parameters were used within ArcGIS 10.1© as thematic layers to build a distributed hydrological spatial model. The weights and ranks of each model parameter were assigned according to the magnitude of its contribution to RWH potentiality mapping, using a pairwise correlation matrix verified by calculating the consistency ratio (CR), which governs the reliability of the model application. The CR value was found to be less than 0.1 (0.069), indicating acceptable consistency and validity for use. The resulting MPDSM map classified the watershed into five categories of RWH potential, ranging from very low to very high. The high and very high classes, which are the most suitable for RWH structures, make up approximately 33.24% of the total watershed area. Accordingly, four retention dams and seven ground cisterns (tanks) were proposed in these areas to collect and store runoff water; the proposed RWH structures were sited according to soil type and the current land-use pattern. The resulting MPDSM map was validated using a topographic wetness index (TWI) map created for the watershed. This integrative, applied approach is an important technique that can be applied in similar arid environments elsewhere.
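The AHP consistency check described here follows a standard recipe: take the principal eigenvector of the pairwise comparison matrix as the weights, then compute CI = (λmax − n)/(n − 1) and CR = CI/RI. Below is a sketch with a toy 3 × 3 matrix, not the study's eight-parameter matrix.

# Sketch: AHP priority weights and consistency ratio from a pairwise matrix.
import numpy as np

RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41}  # random indices

def ahp(matrix):
    n = matrix.shape[0]
    vals, vecs = np.linalg.eig(matrix)
    k = vals.real.argmax()                  # principal eigenpair
    w = np.abs(vecs[:, k].real)
    w = w / w.sum()                         # priority vector (weights)
    ci = (vals.real[k] - n) / (n - 1)       # consistency index
    return w, ci / RI[n]                    # weights and consistency ratio

A = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1.0]])
w, cr = ahp(A)
print(w.round(3), "CR =", round(cr, 3))     # CR < 0.1 -> acceptable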
APA, Harvard, Vancouver, ISO, and other styles
47

Lenz, F. A., J. O. Dostrovsky, R. R. Tasker, K. Yamashiro, H. C. Kwan, and J. T. Murphy. "Single-unit analysis of the human ventral thalamic nuclear group: somatosensory responses." Journal of Neurophysiology 59, no. 2 (February 1, 1988): 299–316. http://dx.doi.org/10.1152/jn.1988.59.2.299.

Full text
Abstract:
1. We have studied the functional and somatotopic properties of 531 single mechanoreceptive thalamic neurons in humans undergoing stereotactic surgery for the control of movement disorders and pain. The majority of these somatosensory cells had small receptive fields (RFs) and were activated in a reproducible manner by mechanical stimuli applied to the skin or deep tissues. These neurons, which we termed "lemniscal," could be further classified into those responding to stimulation of cutaneous (76% of lemniscal sensory cells) or deep (24%) structures. 2. The incidence of neurons having cutaneous or mucosal RFs in the perioral region, thumb, and fingers (66%) was much higher than that of neurons having RFs elsewhere on the body. Most of the deep cells were activated by movements of and/or mechanical stimuli delivered to muscles or tendons controlling the elbow, wrist, and fingers. 3. Sequences of cells spanning several millimeters in the parasagittal plane often exhibited overlapping RFs. However, RFs changed markedly for cells separated by the same distances in the mediolateral direction. This suggests that the cutaneous somatotopic representation of each region of the body is organized into relatively thin sheets of cells oriented in the parasagittal plane. 4. By comparing neuronal RFs in different parasagittal planes in thalamus of individual patients we have identified a mediolateral representation of body surface following the sequence from: intraoral structures, face, thumb through fifth finger to palm, with forearm and leg laterally. 5. Along many trajectories in the parasagittal plane the sequence of cells with overlapping RFs was interrupted by another sequence of cells with RFs corresponding to a different body region. The RFs of the intervening sequence characteristically represented body regions known to be located more medially in thalamus (see 3 above). These findings could be explained if the lamellae postulated above were laterally convex. 6. Cells responding to deep stimulation (deep cells) could be further classified into those responding to joint movement (63%), deep pressure (15%), or both (22%). Deep cells were found usually at the anterior-dorsal border and sometimes at the posterior border of the region containing cells responding to cutaneous stimuli. Although there was some overlap in the RFs, deep cells representing wrist were found medial to those representing elbow, and both of these were found medial to cells representing leg.(ABSTRACT TRUNCATED AT 400 WORDS)
APA, Harvard, Vancouver, ISO, and other styles
48

Makhija, S., P.-Y. von der Weid, J. Meddings, SJ Urbanski, and PL Beck. "Octreotide in Intestinal Lymphangiectasia: Lack of a Clinical Response and Failure to Alter Lymphatic Function in a Guinea Pig Model." Canadian Journal of Gastroenterology 18, no. 11 (2004): 681–85. http://dx.doi.org/10.1155/2004/176568.

Full text
Abstract:
Intestinal lymphangiectasia, which can be classified as primary or secondary, is an unusual cause of protein-losing enteropathy. The main clinical features include edema, fat malabsorption, lymphopenia and hypoalbuminemia. Clinical management generally includes a low-fat diet and supplementation with medium-chain triglycerides. A small number of recent reports advocate the use of octreotide in intestinal lymphangiectasia. It is unclear why octreotide was used in these studies; although octreotide can alter splanchnic blood flow and intestinal motility, its actions on lymphatic function have never been investigated. A case is presented of a patient with intestinal lymphangiectasia who required a shunt procedure after failing medium-chain triglyceride and octreotide therapy. During the management of this case, all existing literature on intestinal lymphangiectasia and all the known actions of octreotide were reviewed. Because some of the case reports suggested that octreotide may improve the clinical course of intestinal lymphangiectasia by altering lymphatic function, a series of experiments was undertaken to assess this. In an established guinea pig model, the role of octreotide in lymphatic function was examined. In this model system, the mesenteric lymphatic vessels responded to 5-hydroxytryptamine with a decrease in constriction frequency, while histamine administration markedly increased lymphatic constriction frequency. Octreotide failed to produce any change in lymphatic function over a wide range of concentrations applied to the mesenteric lymphatic vessel preparation. In conclusion, in this case octreotide failed to induce a clinical response, and laboratory studies showed that it did not alter lymphatic function. Thus, the mechanisms by which octreotide induced clinical responses in the cases reported elsewhere in the literature remain unclear, but the present study suggests that it does not act by increasing lymphatic pumping.
APA, Harvard, Vancouver, ISO, and other styles
49

Clark, Bronwyn, Elisabeth Winker, Matthew Ahmadi, and Stewart Trost. "Comparison of Three Algorithms Using Thigh-Worn Accelerometers for Classifying Sitting, Standing, and Stepping in Free-Living Office Workers." Journal for the Measurement of Physical Behaviour 4, no. 1 (March 1, 2021): 89–95. http://dx.doi.org/10.1123/jmpb.2020-0019.

Full text
Abstract:
Accurate measurement of time spent sitting, standing, and stepping is important in studies seeking to evaluate interventions to reduce sedentary behavior. In this study, the authors evaluated the agreement in classification of these activities between three algorithms applied to thigh-worn ActiGraph accelerometers, using predictions from the widely used activPAL device as a criterion. Participants (n = 29, 72% female, age 23–68 years) wore the activPAL3™ micro (processed by PAL software, version 7.2.32) and the ActiGraph™ GT9X accelerometer on the right front thigh concurrently during working hours on one full workday (7.2 ± 1.2 hr). ActiGraph output was classified via the three test algorithms: ActiGraph's ActiLife software (inclinometer); an open-source method; and a machine-learning algorithm reported in the literature (Acti4). Performance at the instance level was evaluated by computing classification accuracy (F-scores) for 15-s windows. The F-scores showed high accuracy relative to the criterion for identifying sitting (96.7–97.1) and were 84.7–85.1 for identifying standing and 78.1–80.6 for identifying stepping. The four methods agreed strongly on total time spent sitting, standing, and stepping, with intraclass correlation coefficients of .96 (95% confidence interval [.92, .96]), .92 (95% confidence interval [.81, .96]), and .87 (95% confidence interval [.53, .95]), but sometimes overestimated sitting time and underestimated standing time relative to activPAL. These algorithms for identifying sitting, standing, and stepping from thigh-worn accelerometers provide estimates that are very similar to those obtained using the activPAL.
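Window-level F-scores of this kind can be computed as sketched below; the synthetic labels stand in for 15 s windows scored against the activPAL criterion, and the 90% agreement rate is an arbitrary placeholder.

# Sketch: per-class F-scores for windowed activity labels vs. a criterion.
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(6)
criterion = rng.choice(["sit", "stand", "step"], size=1000, p=[.6, .25, .15])
# Simulated algorithm output: agrees with the criterion ~90% of the time.
predicted = np.where(rng.random(1000) < 0.9, criterion,
                     rng.choice(["sit", "stand", "step"], size=1000))

scores = f1_score(criterion, predicted, average=None,
                  labels=["sit", "stand", "step"])
print(dict(zip(["sit", "stand", "step"], (scores * 100).round(1))))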
APA, Harvard, Vancouver, ISO, and other styles