Academic literature on the topic 'General k-max search'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'General k-max search.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "General k-max search":

1

Xian, Aiyong, Kaiyuan Zhu, Daming Zhu, Lianrong Pu, and Hong Liu. "Approximating Max NAE-k-SAT by anonymous local search." Theoretical Computer Science 657 (January 2017): 54–63. http://dx.doi.org/10.1016/j.tcs.2016.05.040.

2

Ahr, Dino, and Gerhard Reinelt. "A tabu search algorithm for the min–max k-Chinese postman problem." Computers & Operations Research 33, no. 12 (December 2006): 3403–22. http://dx.doi.org/10.1016/j.cor.2005.02.011.

3

Ciftci, Esra, Emel Sesli Cetin, Ebru Us, Hüseyin Haydar Kutlu, and Buket Cicioglu Arıdogan. "Investigation of Carbapenem resistance mechanisms in Klebsiella pneumoniae by using phenotypic tests and a molecular assay." Journal of Infection in Developing Countries 13, no. 11 (November 30, 2019): 992–1000. http://dx.doi.org/10.3855/jidc.10783.

Abstract:
Introduction: The aim of this study was to investigate the presence of carbapenemase production and carbapenem resistance mechanisms in 47 carbapenem resistant Klebsiella pneumoniae isolates by phenotypic confirmatory tests and molecular assay. Methodology: Carbapenem resistance genes KPC, OXA-48 and NDM were investigated with the BD MAX CRE assay kit in the BD MAX real time PCR instrument. Modified Hodge test, MBL gradient strip test, D70C Carbapenemase Detection Set, Temocillin gradient strip test methods were used as phenotypic confirmatory tests. Clonal relationship between study isolates was investigated with pulsed-field gel electrophoresis. Results: Analysis with BD MAX CRE assay revealed OXA-48 positivity in 17 (36%) strains, NDM positivity in 6 (13%) strains and coexistence of OXA-48 + NDM positivity in 8 (17%) strains. In 16 (34%) strains, none of the KPC, OXA-48 and NDM genes were detected. While MHT was the most sensitive phenotypic confirmatory test, D70C disc set had not been considered as a useful tool to assist the search for carbapenemase production. Temocillin gradient test alone could not be considered as sufficient to detect the presence of OXA-48. PFGE analyses revealed that 23 of 31 carbapenemase producing strains were in three major PFGE genotypes (A, B and C). Conclusions: This study revealed that carbapenem resistance observed in K. pneumoniae isolates was mainly due to OXA-48 and NDM genes and the increase of carbapenem resistance among K. pneumoniae strains in our hospital was due to the interhospital spread of especially 3 epidemic clones.
4

Lan, Tianming, and Lei Guo. "Discovery of User Groups Densely Connecting Virtual and Physical Worlds in Event-Based Social Networks." International Journal of Information Technologies and Systems Approach 16, no. 2 (July 28, 2023): 1–23. http://dx.doi.org/10.4018/ijitsa.327004.

Abstract:
An essential task of the event-based social network (EBSN) platform is to recommend events to user groups. Usually, users are more willing to participate in events and interest groups with their friends, forming a particularly closely connected user group. However, such groups do not explicitly exist in EBSN. Therefore, studying how to discover groups composed of users who frequently participate in events and interest groups in EBSN has essential theoretical and practical significance. This article proposes the problem of discovering maximum k fully connected user groups. To address this issue, this article designs and implements three algorithms: a search algorithm based on Max-miner (MMBS), a search algorithm based on two vectors (TVBS) and enumeration tree, and a divide-and-conquer parallel search algorithm (DCPS). The authors conducted experiments on real datasets. The comparison of experimental results of these three algorithms on datasets from different cities shows that the DCPS algorithm and TVBS algorithm significantly accelerate their computational time when the minimum support rate is low. The time consumption of DCPS algorithm can reach one tenth or even lower than that of MMBS algorithm.
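As a rough illustration of the underlying notion (not the MMBS, TVBS, or DCPS algorithms from the article), the sketch below builds a co-participation graph from toy attendance data and brute-forces the largest user groups in which every pair of members is connected; the events, users, and support threshold are invented for the example.

```python
from itertools import combinations

# Hypothetical toy data: which users attended which events (not the paper's dataset).
event_attendance = {
    "e1": {"u1", "u2", "u3"},
    "e2": {"u1", "u2", "u3", "u4"},
    "e3": {"u2", "u3", "u4"},
}

min_support = 2  # a pair counts as "connected" if they co-attend at least this many events

# Build a co-participation graph: an edge joins two users who co-attend >= min_support events.
pair_counts = {}
for attendees in event_attendance.values():
    for u, v in combinations(sorted(attendees), 2):
        pair_counts[(u, v)] = pair_counts.get((u, v), 0) + 1
edges = {p for p, c in pair_counts.items() if c >= min_support}

def fully_connected(group):
    """A group is fully connected if every pair of its members is an edge."""
    return all((min(u, v), max(u, v)) in edges for u, v in combinations(group, 2))

users = sorted({u for att in event_attendance.values() for u in att})

# Brute-force search for the largest fully connected groups (exponential; illustration only).
best = []
for k in range(len(users), 1, -1):
    hits = [set(g) for g in combinations(users, k) if fully_connected(g)]
    if hits:
        best = hits
        break
print(best)  # e.g. [{'u1', 'u2', 'u3'}, {'u2', 'u3', 'u4'}]
```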
5

Gupta, Ankita, Lakhwinder Kaur, and Gurmeet Kaur. "Drought stress detection technique for wheat crop using machine learning." PeerJ Computer Science 9 (May 19, 2023): e1268. http://dx.doi.org/10.7717/peerj-cs.1268.

Abstract:
The workflow of this research is based on numerous hypotheses involving the usage of pre-processing methods, wheat canopy segmentation methods, and whether the existing models from the past research can be adapted to classify wheat crop water stress. Hence, to construct an automation model for water stress detection, it was found that pre-processing operations known as total variation with L1 data fidelity term (TV-L1) denoising with a Primal-Dual algorithm and min-max contrast stretching are most useful. For wheat canopy segmentation curve fit based K-means algorithm (Cfit-kmeans) was also validated for the most accurate segmentation using intersection over union metric. For automated water stress detection, rapid prototyping of machine learning models revealed that there is a need only to explore nine models. After extensive grid search-based hyper-parameter tuning of machine learning algorithms and 10 K fold cross validation it was found that out of nine different machine algorithms tested, the random forest algorithm has the highest global diagnostic accuracy of 91.164% and is the most suitable for constructing water stress detection models.
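For the contrast-stretching step mentioned above, a minimal sketch of min-max stretching is shown below; the TV-L1 denoising, Cfit-kmeans segmentation, and classifier comparison from the study are not reproduced, and the pixel values are invented.

```python
import numpy as np

def minmax_stretch(img, lo=0.0, hi=255.0):
    """Min-max contrast stretching: linearly rescale intensities to [lo, hi]."""
    img = img.astype(float)
    return lo + (img - img.min()) * (hi - lo) / (img.max() - img.min() + 1e-12)

# Toy 8-bit patch with a narrow intensity range (stand-in for a canopy image).
patch = np.array([[90, 100], [110, 120]], dtype=np.uint8)
print(minmax_stretch(patch))  # [[  0.  85.] [170. 255.]] up to rounding
```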
6

Hamann, Arne, and Sabine Wölk. "Performance analysis of a hybrid agent for quantum-accessible reinforcement learning." New Journal of Physics 24, no. 3 (March 1, 2022): 033044. http://dx.doi.org/10.1088/1367-2630/ac5b56.

Abstract:
In the last decade quantum machine learning has provided fascinating and fundamental improvements to supervised, unsupervised and reinforcement learning (RL). In RL, a so-called agent is challenged to solve a task given by some environment. The agent learns to solve the task by exploring the environment and exploiting the rewards it gets from the environment. For some classical task environments, an analogue quantum environment can be constructed which allows rewards to be found quadratically faster by applying quantum algorithms. In this paper, we analytically analyze the behavior of a hybrid agent which combines this quadratic speedup in exploration with the policy update of a classical agent. This leads to faster learning of the hybrid agent compared to the classical agent. We demonstrate that if the classical agent needs on average ⟨J⟩ rewards and ⟨T⟩_cl epochs to learn how to solve the task, the hybrid agent will take ⟨T⟩_q ⩽ α_s α_o √(⟨T⟩_cl ⟨J⟩) epochs on average. Here, α_s and α_o denote constants depending on details of the quantum search and are independent of the problem size. Additionally, we prove that if the environment allows for maximally α_o k_max sequential coherent interactions, e.g. due to noise effects, an improvement given by ⟨T⟩_q ≈ α_o ⟨T⟩_cl/(4 k_max) is still possible.
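A quick sanity check of the bound ⟨T⟩_q ⩽ α_s α_o √(⟨T⟩_cl ⟨J⟩) — a heuristic reading, not a derivation from the paper: each reward classically costs about ⟨T⟩_cl/⟨J⟩ epochs, and a quadratic search speedup finds it in roughly the square root of that time.

```latex
% Heuristic reading (assumption: quadratic speedup per reward, constants alpha_s, alpha_o absorbed):
\[
  \langle T\rangle_{q}
  \;\sim\; \langle J\rangle\,\sqrt{\frac{\langle T\rangle_{\mathrm{cl}}}{\langle J\rangle}}
  \;=\; \sqrt{\langle T\rangle_{\mathrm{cl}}\,\langle J\rangle},
\]
% which matches the stated bound up to the overhead constants \alpha_s and \alpha_o.
```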
7

Wang, Yanhao, Francesco Fabbri, Michael Mathioudakis, and Jia Li. "Fair Max–Min Diversity Maximization in Streaming and Sliding-Window Models." Entropy 25, no. 7 (July 14, 2023): 1066. http://dx.doi.org/10.3390/e25071066.

Abstract:
Diversity maximization is a fundamental problem with broad applications in data summarization, web search, and recommender systems. Given a set X of n elements, the problem asks for a subset S of k≪n elements with maximum diversity, as quantified by the dissimilarities among the elements in S. In this paper, we study diversity maximization with fairness constraints in streaming and sliding-window models. Specifically, we focus on the max–min diversity maximization problem, which selects a subset S that maximizes the minimum distance (dissimilarity) between any pair of distinct elements within it. Assuming that the set X is partitioned into m disjoint groups by a specific sensitive attribute, e.g., sex or race, ensuring fairness requires that the selected subset S contains ki elements from each group i∈[m]. Although diversity maximization has been extensively studied, existing algorithms for fair max–min diversity maximization are inefficient for data streams. To address the problem, we first design efficient approximation algorithms for this problem in the (insert-only) streaming model, where data arrive one element at a time, and a solution should be computed based on the elements observed in one pass. Furthermore, we propose approximation algorithms for this problem in the sliding-window model, where only the latest w elements in the stream are considered for computation to capture the recency of the data. Experimental results on real-world and synthetic datasets show that our algorithms provide solutions of comparable quality to the state-of-the-art offline algorithms while running several orders of magnitude faster in the streaming and sliding-window settings.
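A minimal offline baseline for (unconstrained) max-min diversification is the farthest-point greedy below. It is not the fair or streaming algorithms proposed in the paper, which additionally enforce per-group quotas k_i and operate in one pass over the stream, but it shows the objective being maximized; the 2-D data are synthetic.

```python
import numpy as np

def greedy_max_min(points, k, seed=0):
    """Offline farthest-point greedy for max-min diversity: start from an arbitrary
    element, then repeatedly add the element farthest from the current selection."""
    pts = np.asarray(points, dtype=float)
    selected = [seed]
    dist = np.linalg.norm(pts - pts[seed], axis=1)     # distance of every point to the selection
    while len(selected) < k:
        nxt = int(np.argmax(dist))
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(pts - pts[nxt], axis=1))
    return selected

rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 2))                          # toy 2-D elements
S = greedy_max_min(X, k=5)
pairwise = [np.linalg.norm(X[i] - X[j]) for i in S for j in S if i < j]
print(S, round(min(pairwise), 3))                       # selected indices and their min pairwise distance
```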
8

Jayanthi, Neelampalli, Burra Babu, and Nandam Rao. "Outlier Detection Method based on Adaptive Clustering Method and Density Peak." International Journal of Intelligent Engineering and Systems 13, no. 6 (December 31, 2020): 120–30. http://dx.doi.org/10.22266/ijies2020.1231.11.

Abstract:
The outlier detection technique is widely used in the data analysis for the clustering of data. Many techniques have been applied in the outlier detection to increase the efficiency of the data analysis. The Local Projection based Outlier Detection (LPOD) method effectively identifies neighbouring values of data, but this has the drawback of random selection of the cluster centre that affects the overall clustering performance of the system. In this study, the Adaptive Clustering by Fast Search and Find of Density Peak (ACFSFDP) is proposed to select the clustering centre and density peak. This ACFSFDP method is implemented with the min-max algorithm to find the number of categories that measured the local density and distance information. The density and distance are used to select the cluster centre, but density is not calculated on the existing distance based clustering techniques. The ACFSFDP method calculates cluster centre based on the density and distance during the clustering process, whereas the existing techniques randomly select the data centre. The results indicated that the ACFSFDP method is provided effective outlier detection compared with existing Clustering by Fast Search and Find of Density Peak (CFSFDP) methods. The ACFSFDP is tested on two datasets Pen-digits and waveform datasets. The experiment results proved that Area Under Curve (AUC) of the ACFSFDP is 99.08% on the Pen-Digit dataset, while the existing distance classifier method k-Nearest Neighbour has achieved 68.7% of AUC.
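A minimal sketch of the underlying density-peak quantities (the original CFSFDP decision rule, not the adaptive ACFSFDP selection proposed in the paper): each point gets a local density rho and a distance delta to the nearest denser point, and cluster centres are points where both are large. The data and cutoff distance d_c are invented for illustration.

```python
import numpy as np

def density_peaks(X, d_c):
    """CFSFDP-style quantities: local density rho_i (cutoff kernel) and delta_i,
    the distance from point i to the nearest point of higher density."""
    X = np.asarray(X, dtype=float)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    rho = (D < d_c).sum(axis=1) - 1                 # exclude the point itself
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = D[i].max() if higher.size == 0 else D[i, higher].min()
    return rho, delta

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
rho, delta = density_peaks(X, d_c=0.5)
centers = np.argsort(rho * delta)[-2:]              # high density and high delta -> cluster centres
print(centers)
```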
9

Gaspers, Serge, Eun Jung Kim, Sebastian Ordyniak, Saket Saurabh, and Stefan Szeider. "Don't Be Strict in Local Search!" Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 486–92. http://dx.doi.org/10.1609/aaai.v26i1.8128.

Abstract:
Local Search is one of the fundamental approaches to combinatorial optimization and it is used throughout AI. Several local search algorithms are based on searching the k-exchange neighborhood. This is the set of solutions that can be obtained from the current solution by exchanging at most k elements. As a rule of thumb, the larger k is, the better are the chances of finding an improved solution. However, for inputs of size n, a naive brute-force search of the k-exchange neighborhood requires n(O(k)) time, which is not practical even for very small values of k. Fellows et al. (IJCAI 2009) studied whether this brute-force search is avoidable and gave positive and negative answers for several combinatorial problems. They used the notion of local search in a strict sense. That is, an improved solution needs to be found in the k-exchange neighborhood even if a global optimum can be found efficiently. In this paper we consider a natural relaxation of local search, called permissive local search (Marx and Schlotter, IWPEC 2009) and investigate whether it enhances the domain of tractable inputs. We exemplify this approach on a fundamental combinatorial problem, Vertex Cover. More precisely, we show that for a class of inputs, finding an optimum is hard, strict local search is hard, but permissive local search is tractable. We carry out this investigation in the framework of parameterized complexity.
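To make the k-exchange neighborhood concrete, here is a brute-force sketch of strict local search for Vertex Cover: it looks for a smaller cover that differs from the current one in at most k vertices. The graph and k are toy values; the point is the O(n^k)-sized neighborhood whose naive search the paper is concerned with.

```python
from itertools import combinations

def is_vertex_cover(cover, edges):
    return all(u in cover or v in cover for u, v in edges)

def k_exchange_improve(cover, vertices, edges, k):
    """Strict local search step: find a smaller vertex cover differing from the
    current one by at most k vertices (brute force over the k-exchange neighborhood)."""
    cover = set(cover)
    for out_size in range(1, k + 1):
        for removed in combinations(sorted(cover), out_size):
            for in_size in range(out_size):                  # strictly smaller cover
                outside = sorted(set(vertices) - cover)
                for added in combinations(outside, in_size):
                    candidate = (cover - set(removed)) | set(added)
                    if is_vertex_cover(candidate, edges):
                        return candidate
    return None                                              # locally optimal for this k

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
vertices = range(4)
cover = {0, 1, 2, 3}                                          # trivial starting cover
while (better := k_exchange_improve(cover, vertices, edges, 2)) is not None:
    cover = better
print(cover)                                                  # e.g. {1, 3}
```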
10

Pechnikov, Gennady, Alexander Blinkov, and Elena Parshina. "K. Marx on his own form and content in legal relations." Legal Science and Practice: Journal of Nizhny Novgorod Academy of the Ministry of Internal Affairs of Russia 2022, no. 4 (December 27, 2022): 66–70. http://dx.doi.org/10.36511/2078-5356-2022-4-66-70.

Abstract:
The article shows the importance of legal relations, based on the work of K. Marx “Debate on the order to steal a forest”, in which he clearly raised the question of the independence of legal relations, whether it be criminal law, or criminal procedure, or any other relationship... Any legal relationship has only its own form and content, inextricably dialectically interconnected. This is the true scientific nature of legal relations. Therefore, one legal relationship cannot be made the content of other relationships. For example, it is impossible to squeeze the Chinese procedural procedure into the French procedure without distorting its true essence, while Marx sharply criticizes the position of the Landtag deputies who tried to make the selfish private interest of forest owners the property of state criminal law and criminal procedural relations. According to Marx: “a form is devoid of any value if it is not a form of content”. Hence, the criminal process cannot be considered as a form of criminal law, since then the content of the criminal procedural legal relationship changes, since inevitably then the accused will be identified with the guilty (criminal), and the measures of procedural coercion will be identified with the measures of criminal punishment, and the presumption of innocence will be replaced on the presumption of guilt. In the same way, it is impossible to make operational-search relations the content of criminal-procedural relations and thereby erase the differences between criminal-procedural evidence and operational-search data. At the same time, the very institution of interaction between the investigator and the body of inquiry, carrying out operational-search activities, is important.

Dissertations / Theses on the topic "General k-max search":

1

Schroeder, Pascal. "Performance guaranteeing algorithms for solving online decision problems in financial systems." Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0143.

Abstract:
This thesis contains several online financial decision problems and their solutions. The problems are formulated as online problems (OP) and online algorithms (OA) are created to solve them. Because there can be several OAs for the same OP, criteria are needed with which one can make statements about the quality of an OA. In this thesis these criteria are the competitive ratio (c), the competitive difference (cd) and the numerical performance. An OA with a lower c (or cd) is preferable to one with a higher value, and an OA that attains the lowest possible c is called optimal. We consider the following OPs: the online conversion problem (OCP), the online portfolio selection problem (PSP) and the cash management problem (CMP). After the introductory chapter, the OPs, the notation and the state of the art in the field of OPs are presented. In the third chapter, three variants of the OCP with interrelated prices are solved. In the fourth chapter the time series search with interrelated prices is revisited and new algorithms are created; at the end of the chapter, the optimal OA k-DIV for the general k-max search with interrelated prices is developed. In Chapter 5 the PSP with interrelated prices is solved; the created OA, OPIP, is optimal. Using the idea of OPIP, an optimal OA for two-way trading is created (OCIP). Building on OCIP, an optimal OA for the bi-directional search with known values of θ_1 and θ_2 is created (BUND); for unknown θ_1 and θ_2, the optimal OA RUN is created. The chapter ends with empirical (for OPIP) and experimental (for OCIP, BUND and RUN) testing. Chapters 6 and 7 deal with the CMP; in both, numerical tests compare the performance of the new OAs with that of the already established ones. In Chapter 6 an optimal OA is constructed; in Chapter 7, OAs that minimize cd are designed. The OA BCSID solves the CMP with interrelated demands to optimality. The OA aBBCSID solves the CMP when the values of θ_1, θ_2, m and M are known; however, this OA is not optimal. In Chapter 7 the CMP is also solved knowing only m and M while minimizing cd (OA MRBD). For interrelated demands, a heuristic OA (HMRID) and a cd-minimizing OA (MRID) are presented; HMRID is a good compromise between numerical performance and the minimization of cd. The thesis concludes with a short discussion of the shortcomings of the considered OPs and the created OAs, followed by remarks on future research possibilities in this field.
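To fix ideas about the problem the thesis addresses, the sketch below is a generic reservation-price policy for online k-max search with known price bounds m and M; it is only an illustration and is not the optimal k-DIV algorithm, nor does it handle interrelated prices.

```python
import math

def k_max_search(prices, k, m, M):
    """Online k-max search sketch: prices arrive one by one from [m, M] and k of them
    must be picked irrevocably.  Every pick uses the single reservation price sqrt(m*M)
    (the classical 1-max rule); remaining slots are force-filled with the last prices so
    exactly k values are always selected.  Illustrative policy, not an optimal one."""
    threshold = math.sqrt(m * M)
    picked = []
    for t, p in enumerate(prices):
        remaining = len(prices) - t
        if len(picked) < k and (p >= threshold or remaining <= k - len(picked)):
            picked.append(p)
    return picked

prices = [1.2, 2.9, 1.7, 3.4, 2.2, 1.1]          # toy price sequence in [1, 4]
print(k_max_search(prices, k=2, m=1.0, M=4.0))   # [2.9, 3.4]
```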

Book chapters on the topic "General k-max search":

1

Lopez, Kyra Mikaela M., and Ma Sheila A. Magboo. "A Clinical Decision Support Tool to Detect Invasive Ductal Carcinoma in Histopathological Images Using Support Vector Machines, Naïve-Bayes, and K-Nearest Neighbor Classifiers." In Machine Learning and Artificial Intelligence. IOS Press, 2020. http://dx.doi.org/10.3233/faia200765.

Abstract:
This study aims to describe a model that will apply image processing and traditional machine learning techniques specifically Support Vector Machines, Naïve-Bayes, and k-Nearest Neighbors to identify whether or not a given breast histopathological image has Invasive Ductal Carcinoma (IDC). The dataset consisted of 54,811 breast cancer image patches of size 50px x 50px, consisting of 39,148 IDC negative and 15,663 IDC positive. Feature extraction was accomplished using Oriented FAST and Rotated BRIEF (ORB) descriptors. Feature scaling was performed using Min-Max Normalization while K-Means Clustering on the ORB descriptors was used to generate the visual codebook. Automatic hyperparameter tuning using Grid Search Cross Validation was implemented although it can also accept user supplied hyperparameter values for SVM, Naïve Bayes, and K-NN models should the user want to do experimentation. Aside from computing for accuracy, the AUPRC and MCC metrics were used to address the dataset imbalance. The results showed that SVM has the best overall performance, obtaining accuracy = 0.7490, AUPRC = 0.5536, and MCC = 0.2924.
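A condensed sketch of the described pipeline using scikit-learn, with random arrays standing in for the ORB descriptors (in the real workflow each patch would be passed through cv2.ORB_create().detectAndCompute); the codebook size, parameter grid, and labels are placeholders, not the study's settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)

# Stand-ins for ORB descriptors: one (n_keypoints, 32) array per image patch.
descriptors_per_image = [rng.integers(0, 256, size=(30, 32)).astype(float) for _ in range(40)]
labels = rng.integers(0, 2, size=40)             # 0 = IDC negative, 1 = IDC positive (toy labels)

# 1. Visual codebook: k-means over all descriptors.
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(np.vstack(descriptors_per_image))

# 2. Each image becomes a histogram of visual-word counts.
def bovw(desc):
    return np.bincount(codebook.predict(desc), minlength=codebook.n_clusters).astype(float)

X = MinMaxScaler().fit_transform([bovw(d) for d in descriptors_per_image])

# 3. Grid-searched SVM classifier (Naive Bayes or k-NN would slot in the same way).
clf = GridSearchCV(SVC(), {"C": [1, 10], "gamma": ["scale", 0.01]}, cv=5).fit(X, labels)
print(clf.best_params_, round(clf.best_score_, 3))
```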
2

Jabari, Farkhondeh, Heresh Seyedia, Sajad Najafi Ravadanegh, and Behnam Mohammadi Ivatloo. "Stochastic Contingency Analysis Based on Voltage Stability Assessment in Islanded Power System Considering Load Uncertainty Using MCS and k-PEM." In Advances in Computer and Electrical Engineering, 12–36. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9911-3.ch002.

Abstract:
Increased electricity demands and economic operation of large power systems in a deregulated environment with a limited investment in transmission expansion planning causes interconnected power grids to be operated closer to their stability limits. Meanwhile, the loads uncertainty will affect the static and dynamic stabilities. Therefore, if there is no emergency corrective control in time, occurrence of wide area contingency may lead to the catastrophic cascading outages. Studies show that many wide area blackouts which led to massive economic losses may have been prevented by a fast feasible controlled islanding decision making. This chapter introduces a novel computationally efficient approach for separating of bulk power system into several stable sections following a severe disturbance. The splitting strategy reduces the large initial search space to an interface boundary network considering coherency of synchronous generators and network graph simplification. Then, a novel islanding scenario generator algorithm denoted as BEM (Backward Elimination Method) based on PMEAs (Primary Maximum Expansion Areas) has been applied to generate all proper islanding solutions in the simplified network graph. The PPF (Probabilistic Power Flow) based on Newton-Raphson method and Q-V modal analysis has been used to evaluate the steady-state stability of created islands in each generated scenario. BICA (Binary Imperialistic Competitive Algorithm) has then been applied to minimize total load-generation mismatch considering integrity, voltage permitted range and steady-state voltage stability constraints. The best solution has then been applied to split the entire power network. A novel stochastic contingency analysis of islands based on PSVI (Probability of Static Voltage Instability) using MCS (Monte Carlo Simulation) and k-PEM (k-Point Estimate Method) have been proposed to identify the critical PQ buses and severe contingencies. In these approaches, the ITM (Inverse Transform Method) has been used to model uncertain loads with normal probability distribution function in optimal islanded power system. The robustness, effectiveness and capability of the proposed approaches have been validated on the New England 39-bus standard power system.

Conference papers on the topic "General k-max search":

1

Tönshoff, Jan, Berke Kisin, Jakob Lindner, and Martin Grohe. "One Model, Any CSP: Graph Neural Networks as Fast Global Search Heuristics for Constraint Satisfaction." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/476.

Abstract:
We propose a universal Graph Neural Network architecture which can be trained as an end-2-end search heuristic for any Constraint Satisfaction Problem (CSP). Our architecture can be trained unsupervised with policy gradient descent to generate problem specific heuristics for any CSP in a purely data driven manner. The approach is based on a novel graph representation for CSPs that is both generic and compact and enables us to process every possible CSP instance with one GNN, regardless of constraint arity, relations or domain size. Unlike previous RL-based methods, we operate on a global search action space and allow our GNN to modify any number of variables in every step of the stochastic search. This enables our method to properly leverage the inherent parallelism of GNNs. We perform a thorough empirical evaluation where we learn heuristics for well known and important CSPs, both decision and optimisation problems, from random data, including graph coloring, MAXCUT, and MAX-k-SAT, and the general RB model. Our approach significantly outperforms prior end-2-end approaches for neural combinatorial optimization. It can compete with conventional heuristics and solvers on test instances that are several orders of magnitude larger and structurally more complex than those seen during training.
2

Neagu, Laurentiu-Marian, Teodor-Mihai Cotet, Mihai Dascalu, Stefan Trausan-Matu, Laura Badescu, and Eugen Simion. "SEMANTIC AUTHOR RECOMMENDATIONS BASED ON THEIR BIOGRAPHY FROM THE GENERAL ROMANIAN DICTIONARY OF LITERATURE." In eLSE 2019. Carol I National Defence University Publishing House, 2019. http://dx.doi.org/10.12753/2066-026x-19-022.

Abstract:
The Romanian Language Dictionary is a centralized text repository which contains detailed biographies of all Romanian authors and can be used to perform various subsequent analyses. The aim of this paper is to introduce a novel method to recommend authors based on their biography from the Romanian Language Dictionary. Starting from multiple PDF input files made available by the "G. Călinescu" Institute of Literary History and Theory, we extracted relevant information on Romanian authors which was indexed into Elasticsearch, a non-relational database optimized for full-text indexing and search. The relevant information considers author's full name, their pseudonym (if any), year of birth and of death (if applicable), brief description (including studies, cities they lived in, important people they met, brief history), writings, critical references of others, etc. The indexed information is easily accessible through a RESTful API and provides a powerful starting point which may contribute to future Romanian cultural findings. Based on this consistent database, our aim is to create an interactive map showing all Romanian literature contributors, enabling the identification of similarities and differences between them based on specific features (e.g., similar writing styles, time periods, or similar text descriptions in terms of semantic models). In order to have a clearer image on how authors relate one to another, we employed the k-Means and agglomerative clustering algorithms from the Scikit-learn machine learning library. The results depict the distribution of Romanian authors throughout history and enable the identification of correlations between them based on the emerging clusters. This paper is a proof of concept that makes use of only the first volume of the Romanian Language Dictionary and represents the first step for follow-up analyses performed using the indexed dictionary.
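A minimal sketch of the clustering step with scikit-learn, using invented biography snippets in place of the indexed dictionary entries and TF-IDF features as one possible text representation (the paper's exact features are not specified here).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering, KMeans

# Toy biography snippets (placeholders; the real texts come from the indexed dictionary).
bios = [
    "poet born in Iasi, studied in Vienna, symbolist poetry",
    "novelist born in Bucharest, historical novels, literary critic",
    "poet from Iasi, symbolist, edited a literary magazine",
    "critic and historian of literature, university professor in Bucharest",
]

X = TfidfVectorizer().fit_transform(bios)

# The two clustering algorithms mentioned in the abstract, via scikit-learn.
print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X))
print(AgglomerativeClustering(n_clusters=2).fit_predict(X.toarray()))
```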
3

Seena, Abu. "Determinant Search Method for the Large Structural Systems With Small Bandwidth." In ASME 2021 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/imece2021-71761.

Abstract:
Abstract In a solution to large eigenvalue problems, the number of required eigenvalues and corresponding vectors is usually much smaller than the order of matrices. For small bandwidth problems, the determinant search method is very efficient and the oldest. Its from a family of nonlinear Eigen solution techniques also referred to as frequency scanning methods. By using the shift technique, it can also calculate the largest eigenvalues. This calculation process involves three steps: polynomial iteration, inverse iteration, and polynomial deflation. The polynomial iteration determines the eigenvalues of interest by looking at a plot of characteristic polynomial versus the eigenvalue λ, which involves evaluating the determinant |K–λM| at fine intervals of λ selected from accelerated secant iteration. The method was developed in the 1970s and was extensively used for only small-size problems due to its limitations. During that time, the memory available was very low for the in-core solver. Also, the numbers generated in polynomial iteration overflows the float point variables in a computer program. Of course, scaling was the obvious choice. However, the method still suffered the numerical scaling difficulties as a determinant of the matrix |K–λM| in structural problems is generally a very fast varying function. Despite different scaling, the number gets overflows after dozens of eigenvalues which made the technique unattractive. The variables type float, double and long double in a computer program can store values up to 1049, 10308, and 104932, respectively, which is not comparatively much smaller than the determinant of small size structural models. In the present paper, a power number representation is proposed in the polynomial iteration. The determinant |K–λM| can be stored in the power form ρD¯n, where prefactor ρ stores the determinant sign, n is the size of the matrix, and D¯ can be calculated and stored accordingly. The order analysis shows the storage of D¯ requires a variable of range 108 to 1010, which can be stored by float or double type variable. The determinant search method gets to be free from limitations and may easily estimate each large eigenpair at higher frequencies independently from all those previously calculated. The determinant search method can also estimate the clustered eigenvalues accurately. The models’ practical significance is in the FEA analysis of beams, frames, buckling analysis, and stress analysis of pipes in the industry. In the present work, the proposed approach has been applied to the piping model. The results show the determinant search algorithm can extract modes in a higher frequency range without any limitations. The modal analysis is the basis of dynamic analyses of structures such as Response spectrum analysis. The inclusion of higher modes ensures that more than 95% of modal mass has been included in the calculation.
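The overflow issue and its remedy can be illustrated with a logarithmic variant of the proposed power representation: NumPy's slogdet returns the sign and log-magnitude of det(K − λM) separately, so a λ-scan for sign changes never overflows a double. The matrices below are small random stand-ins, not a structural model.

```python
import numpy as np

def det_sign_log(K, M, lam):
    """Sign and log-magnitude of det(K - lam*M); the log form plays the role of the
    power representation and never overflows."""
    return np.linalg.slogdet(K - lam * M)

# Toy symmetric positive-definite system (placeholder for FE stiffness and mass matrices).
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
B = rng.normal(size=(6, 6))
K = A @ A.T + 6 * np.eye(6)
M = B @ B.T + 6 * np.eye(6)

# Scan lambda and report sign changes of the characteristic determinant:
# each change brackets a generalized eigenvalue of K x = lambda M x.
lams = np.linspace(0.0, 5.0, 501)
signs = np.array([det_sign_log(K, M, l)[0] for l in lams])
brackets = [(lams[i], lams[i + 1]) for i in range(len(lams) - 1) if signs[i] * signs[i + 1] < 0]
print(brackets)
print(np.sort(np.linalg.eigvals(np.linalg.solve(M, K))).real)  # check against a direct solve
```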
4

Hu, Zijian, Fengjun Bai, Huajie Wang, Chuanhui Sun, Pinwei Li, Haoyan Li, Yunlong Fu, Jie Zhang, and Yin Luo. "Deep Learning Approaches in Tight Gas Field Pay Zone Classification." In SPE Argentina Exploration and Production of Unconventional Resources Symposium. SPE, 2023. http://dx.doi.org/10.2118/212394-ms.

Abstract:
Abstract Log interpretation is critical in locating pay zones and evaluating their potential. Conventional log interpretation is done manually. In our work, deep learning methods are utilized to deal with preliminary pay zone classification. In this way, human expertise can be liberated from trivial and repetitive tasks during logging interpretation. In a fluvial depositional environment, the sand distribution varies both vertically and horizontally. Thus, a large dataset covering a large area may lead to a too "averaged" model. In our work, we select a relatively small dataset (e.g., seven wells) to reflect the regional features. Standard deep learning processes are employed. The log data are cleaned, visualized, and preprocessed for the algorithms. A preliminary random forest (RF) model is used to separate the sand (interpretation needed) from the shale (interpretation not needed) facies. In the classification model building and training stages, various types of algorithms are tried and compared, from the simple K-nearest neighbor (KNN) to dense neural network (DNN). To account for the continuity and influence of adjacent depths, a 1D convolutional neural network (CNN) model is tested. With the model, a simple self-training model is developed and discussed. K-fold validation methods are used to fully reflect the model's performance in such relatively small dataset. With the given dataset, common deep learning methods generate only moderate accuracy and are easily overfitted. On the other hand, the CNN outperforms the other approaches due its features for pattern recognition. With special caution, a self-learning approach can also further improve the performance. A comparison of different deep learning approaches in terms of time of computation, accuracy, and stability is established. Even trained from a small dataset, with the CNN model, it is possible to identify the zones of interest automatically and consistently. Due to the size of dataset, a series of techniques is utilized to reduce the impact of overfitting, including balance sampling, drop out, regularization, and early stopping, among others. During the optimization of critical hyperparameters, grid search with Bayesian statistics is used together with K-fold validation.
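A minimal 1D-CNN sketch in Keras on synthetic log windows, illustrating how a convolution over depth captures adjacent-depth context; the layer sizes, window length, and data are placeholders rather than the paper's configuration.

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in for windows of log curves: 64 depth samples x 4 curves per window,
# label 1 = pay (sand of interest), 0 = non-pay.  Real inputs would be the cleaned logs.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64, 4)).astype("float32")
y = rng.integers(0, 2, size=200)

# A small 1D CNN: each convolution sees a neighbourhood of depths at once.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 4)),
    tf.keras.layers.Conv1D(16, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))
```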
5

Ahmadov, Jamal. "Utilizing Data-Driven Models to Predict Brittleness in Tuscaloosa Marine Shale: A Machine Learning Approach." In SPE Annual Technical Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/208628-stu.

Abstract:
Abstract The Tuscaloosa Marine Shale (TMS) formation is a clay- and liquid-rich emerging shale play across central Louisiana and southwest Mississippi with recoverable resources of 1.5 billion barrels of oil and 4.6 trillion cubic feet of gas. The formation poses numerous challenges due to its high average clay content (50 wt%) and rapidly changing mineralogy, making the selection of fracturing candidates a difficult task. While brittleness plays an important role in screening potential intervals for hydraulic fracturing, typical brittleness estimation methods require the use of geomechanical and mineralogical properties from costly laboratory tests. Machine Learning (ML) can be employed to generate synthetic brittleness logs and therefore, may serve as an inexpensive and fast alternative to the current techniques. In this paper, we propose the use of machine learning to predict the brittleness index of Tuscaloosa Marine Shale from conventional well logs. We trained ML models on a dataset containing conventional and brittleness index logs from 8 wells. The latter were estimated either from geomechanical logs or log-derived mineralogy. Moreover, to ensure mechanical data reliability, dynamic-to-static conversion ratios were applied to Young's modulus and Poisson's ratio. The predictor features included neutron porosity, density and compressional slowness logs to account for the petrophysical and mineralogical character of TMS. The brittleness index was predicted using algorithms such as Linear, Ridge and Lasso Regression, K-Nearest Neighbors, Support Vector Machine (SVM), Decision Tree, Random Forest, AdaBoost and Gradient Boosting. Models were shortlisted based on the Root Mean Square Error (RMSE) value and fine-tuned using the Grid Search method with a specific set of hyperparameters for each model. Overall, Gradient Boosting and Random Forest outperformed other algorithms and showed an average error reduction of 5 %, a normalized RMSE of 0.06 and a R-squared value of 0.89. The Gradient Boosting was chosen to evaluate the test set and successfully predicted the brittleness index with a normalized RMSE of 0.07 and R-squared value of 0.83. This paper presents the practical use of machine learning to evaluate brittleness in a cost and time effective manner and can further provide valuable insights into the optimization of completion in TMS. The proposed ML model can be used as a tool for initial screening of fracturing candidates and selection of fracturing intervals in other clay-rich and heterogeneous shale formations.
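A compact sketch of the grid-search-plus-gradient-boosting step with scikit-learn, scored by RMSE and reporting a normalized RMSE on a held-out split; the synthetic features merely stand in for the well-log predictors described above.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

# Toy stand-ins for the predictor logs (e.g. neutron porosity, density, compressional
# slowness) and a brittleness-index target; the study's well data are not reproduced here.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = 0.4 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(scale=0.1, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Grid search scored by RMSE (sklearn exposes it as a negated score to be maximized).
search = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_grid={"n_estimators": [100, 300], "learning_rate": [0.05, 0.1], "max_depth": [2, 3]},
    scoring="neg_root_mean_squared_error",
    cv=5,
).fit(X_tr, y_tr)

pred = search.predict(X_te)
rmse = np.sqrt(np.mean((pred - y_te) ** 2))
nrmse = rmse / (y_te.max() - y_te.min())     # normalized RMSE, as reported in the abstract
print(search.best_params_, round(nrmse, 3))
```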
6

Barreto Fernandes, Francisco António, and Bernabé Hernandis Ortuño. "Usability and User-Centered Design - User Evaluation Experience in Self-Checkout Technologies." In Systems & Design 2017. Valencia: Universitat Politècnica València, 2017. http://dx.doi.org/10.4995/sd2017.2017.6634.

Abstract:
The increasing advance of the new technologies applied in the retail market, make it common to sell products without the personal contact between seller and buyer, being the registration and payment of the products made in electronic equipment of self-checkout. The large-scale use of these devices forces the consumer to participate in the service process, which was previously done through interaction with the company's employees. The user of the self-checkout system thus performs all the steps of the purchase, from weighing the products, registering them and making the payment. This is seen as a partial employee, whose participation or performance in providing services can be used by the company to improve the quality of its operations (KELLEY, et al 1993). However this participation does not always satisfy the user, and may cause negative experiences related to usability failures. This article presents the results of the evaluation by the users of the self-checkout system. The data were collected in Portugal through a questionnaire to 400 users. The study analyzes the degree of satisfaction regarding the quality and usability of the system, the degree of motivation for its adoption, as well as the profile of the users. Analysis of the sample data reveals that users have basic or higher education and use new technologies very often. They also have a high domain of the system and an easy learning of its use. The reason for using self-checkout instead of the traditional checkout is mainly due to "queues at checkout with operator" and "at the small volume of products". In general, the sample reveals a high degree of satisfaction with the service and with quality, however, in comparative terms, self-checkout is not considered better than operator checkout. The evaluation of the interaction with the self-checkout was classified according to twenty-six attributes of the system. The analysis identifies five groups with similar characteristics, of which two have low scores. "Cancellation of registered articles", "search for articles without a bar code", "manual registration", "bagging area", "error messages", "weight sensor" and "invoice request" are seven critical attributes of the system. The results indicate that the usability analysis oriented to the self-checkout service can be determinant for the user-system interaction. The implications of empirical findings are discussed together with guidelines for future research. Keywords: Interaction Design, Self service, Self-checkout, User evaluation, Usability.
References: ABRAHÃO, J., et al (2013). Ergonomia e Usabilidade. 1ª Edição. São Paulo: Blucher. ALEXANDRE, J. W. C., et al (2013). Análise do número de categorias da escala de Likert aplicada à gestão pela qualidade total através da teoria da resposta ao item. In: XXIII Encontro Nacional de Engenharia de Produção, Ouro Preto. BOOTH, P. (2014). An Introduction to Human-Computer Interaction (Psychology Revivals). London: Taylor and Francis. CASTRO, D., ATKINSON, R., EZELL, J., (2010). Embracing the Self-Service Economy, Information Technology and Innovation Foundation. Available at SSRN: http://dx.doi.org/10.2139/ssrn.1590982 CHANG, L.A. (1994). A psychometric evaluation of 4-point and 6-point Likert-type scale in relation to reliability and validity. Applied Psychological Measurement. v. 18, n. 2, p. 05-15. DABHOLKAR, P. A. (1996). Consumer Evaluations of New Technology-based Self-service Options: An Investigation of Alternative Models of Service Quality. International Journal of Research in Marketing, Vol. 13, pp. 29-51. DABHOLKAR, P. A., BAGOZZI, R. P. (2002). An Attitudinal Model of Technology-based Selfservice: Moderating Effects of Consumer Traits and Situational Factors. Journal of the Academy of Marketing Science, Vol. 30 (3), pp. 184-201. DABHOLKAR, P. A., BOBBITT, L. M. & LEE, E. (2003). Understanding Consumer Motivation and Behavior related to Self-scanning in Retailing. International Journal of Service Industry Management, Vol. 14 (1), pp. 59-95. DIX, A. et al (2004). Human-Computer Interaction. Third edition. Pearson/Prentice-Hall. New York. FERNANDES, F. et al, (2015). Do Ensaio à Investigação – Textos Breves Sobre a Investigação, Bernabé Hernandis, Carmen Lloret e Francisco Sanmartín (Editores), Oficina de Acción Internacional - Universidade Politécnica de Valência Edições ESAD.cr/IPL, Leiria. HELANDER, M., LANDAUER, T., PRABHU, P. (1997). Handbook of Human–Computer Interaction. North-Holland: Elsevier. KALLWEIT, K., SPREER, P. & TOPOROWSKI, W. (2014). Why do Customers use Self-service Information Technologies in Retail? The Mediating Effect of Perceived Service Quality. Journal of Retailing and Consumer Services, Vol. 21, pp. 268-276. KELLEY SW, HOFFMAN KD, DAVIS MA. (1993). A typology of retail failures and recoveries. J Retailing. 69(4):429-52.
7

Andov, Stojan, Violeta Cvetkoska, and Tea Mijac. "Unveiling Global Road Accident Patterns - Insights, Analytics, and Implications for Safer Driving Practices." In Economic and Business Trends Shaping the Future. Ss Cyril and Methodius University, Faculty of Economics-Skopje, 2023. http://dx.doi.org/10.47063/ebtsf.2023.0031.

Abstract:
Every day, we are confronted with alarming news of serious injuries and fatalities resulting from car accidents. In the past decade, these incidents have been on the rise, posing a significant concern for individuals and societies worldwide. The impact of these accidents is particularly devastating when innocent lives, including children, are affected by the long-lasting consequences. While driver behavior remains a major contributing factor to road accidents, there are also other indirect reasons such as infrastructure issues and weather conditions. Addressing this global problem is of utmost importance to safeguard lives and create a safer driving environment for everyone. Sunkpho and Wipulanusat (2020) utilized Business Intelligence (BI) methods, specifically data visualization and analytics, to analyze accident data and provincial data obtained from the Talend Data Integration tool, loaded into a MySQL database, and visualized using Tableau. Their aim was to provide insights into highway accidents and advise the Thai government on adopting this system for formulating strategy options and contingency plans to improve the accident situation. Nour et al. (2020) employ advanced data analytics methods, specifically predictive modeling techniques, to predict injury severity levels and evaluate their performance using publicly available road accident data from the UK Department of Transport spanning 2005 to 2019. Golhar and Kshirsagar M (2021) propose and implement various strategies using the Map-Reduce framework, combining video surveillance and big data analytics, to address the issues of increasing on-road traffic, road congestion, rule violations, and road accidents, aiming to improve road traffic management and make urban population life more comfortable. Yuksel and Atmaca S. (2021) use accelerometer and gyroscope sensor data and applied various machine learning algorithms, including C4.5 Decision Tree, Random Forest, Artificial Neural Network, Support-Vector Machine, K-Nearest Neighbor, Naive Bayes, and K-Star algorithms, to model and evaluate risky driving behaviors, ultimately developing a highly accurate and cost-effective system capable of recording and identifying risky driving behaviors, with potential applications in usage-based insurance policies to incentivize safe driving practices. Mesquitela et al. (2022) use a data fusion process, incorporating information from various sources such as road accidents, weather conditions, local authority reports, traffic, and fire brigade, to analyze and identify geo-referenced accident hotspots in urban areas using ArcGIS Pro and Kernel Density and Hot Spot Analysis tools, aiming to evaluate the factors influencing accident severity and provide knowledge for local municipalities to improve their infrastructure and quality of life, with the results validated by an expert committee, and the approach being applicable to other cities with similar data availability. Based on our Scopus search on "road accidents" and "analytics," no existing references were found directly aligned with our research idea. This highlights the originality of our paper, which aims to raise awareness about road accidents as a significant global issue and provide a comprehensive understanding of their key contributing factors through the analysis of road accident data from six representative countries across different continents including the UK, USA, Chile, Australia, Japan/UAE, and South Africa/Egypt. 
Our research sheds light on critical aspects of these incidents, explores trends, identifies influential factors, determines countries with low accident rates and casualties, and evaluates the potential impact of data analysis techniques on enhancing road safety. We will use datasets from the selected representative countries, focusing on road accidents that occurred between 2021 and 2022. By employing various analytical methods, we will explore the data from different angles, including descriptive analytics, diagnostic analytics, predictive analytics, prescriptive analytics, and cognitive analytics. Each method will contribute valuable insights to our analysis and understanding of the problem. We will employ Power BI for descriptive and diagnostic analytics, Python for predictive analytics using multilinear regression, Power BI for visualizing regression results, MaxDea Lite and Microsoft Excel for prescriptive analytics such as Data Envelopment Analysis (DEA) and Linear Programming, and also simulations to aid decision-making. Through our analysis, we will address key questions related to road accidents and their impact. For instance, we will determine whether the number of road accidents decreased or increased from 2021 to 2022 and identify the major contributing factors. Furthermore, we will assess the countries with the lowest accident rates and casualties based on ratios per million inhabitants for both years. By leveraging visualization techniques in Power BI, we will present the findings in an accessible and informative manner, enabling stakeholders to grasp the insights easily. The visualization and analysis will provide a deeper understanding of the trends, underlying factors, and the potential of data analysis techniques, such as DEA and Linear Programming, in addressing road safety. The importance of this research lies in its potential to generate significant impact. By shedding light on road accidents as a pressing global issue, the findings will raise awareness among individuals worldwide. Understanding the data from the six representative countries will enable comparisons, identification of best practices, and the formulation of informed strategies to reduce accidents and casualties. The results will benefit researchers, policymakers, and organizations involved in road safety initiatives. The insights gained will help shape evidence-based decisions, implement targeted interventions, and promote safer driving practices to prevent tragic outcomes caused by road accidents.
