Academic literature on the topic 'Linear programming Data processing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Linear programming Data processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Linear programming Data processing"

1

Dzyubetsʹka, M. O., and P. O. Yahanov. "Data processing modules for automated systems building management." Electronics and Communications 16, no. 3 (March 28, 2011): 92–100. http://dx.doi.org/10.20535/2312-1807.2011.16.3.266193.

Full text
Abstract:
New program modules were developed for the ACS «The Intelligent Building»; they use regression procedures, neural networks, queueing systems, linear programming, and fuzzy logic for data processing. Control panels for the operator of the ACS «The Intelligent Building» were developed using the virtual-instrument technologies of the LabVIEW graphical programming environment. The developed modules are convenient to use and can easily be integrated into existing ACS.
APA, Harvard, Vancouver, ISO, and other styles
2

Shevchenko, Hanna, and Mykola Petrushenko. "Managing change in nature-based tourism: A decision-making model using linear programming." Problems and Perspectives in Management 20, no. 2 (May 4, 2022): 199–219. http://dx.doi.org/10.21511/ppm.20(2).2022.17.

Full text
Abstract:
In conditions of forced isolation, nature-based tourism meets the need for safe and comfortable recreation and travel, combined with the solution of acute issues of medical treatment and rehabilitation during the pandemic and post-pandemic periods. This study aims to develop a model for decision-making on change management in nature tourism based on linear economic-mathematical programming. The paper formalizes changes in the parameters of the model's objective function and in its system of restrictions, following the structure of nature-based tourism assets balanced by the sustainability principle. The algorithm for implementing the model includes four stages: collection and processing of relevant data on nature-based tourism; accounting for changes in the objective function and its system of limitations; linear programming with variability tests using the simplex method; and defining the ranges/limits within which decisions are made. The initial data are summarized and averaged based on primary data on the functioning of sanatoriums and other tourist and recreational facilities in Ukraine. Short-term nature-based tourism is considered, its services classified by the primary purpose of travel: “wow-effect” tourism, sports tourism, health tourism, traditional recreation, and green tourism. The results make it possible to substantiate decisions on changes in recreational land areas and human resources and on the limits of income changes due to the dynamics of service prices, as well as to determine the price range that maintains the income structure and the sustainability limits for the natural and human assets of nature-based tourism. Acknowledgment: The paper contains the results of a study conducted under the National Academy of Science of Ukraine's grant Formation and Use of Natural-Resource Assets of the Recreational and Tourism Sphere (0120U100159) and the Nominal Scholarship of the Verkhovna Rada of Ukraine for Young Scientists-Doctors of Sciences for 2021 (0121U113482).
APA, Harvard, Vancouver, ISO, and other styles
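The study above relies on linear programming solved with the simplex method. As a rough, self-contained illustration of that kind of model (not the study's actual data or formulation), the following Python sketch solves a toy resource-allocation LP with SciPy; all coefficients are invented.

```python
# A minimal, hypothetical sketch of the kind of linear program the abstract
# describes: maximize income from two tourism services subject to land and
# labour constraints. All numbers are invented for illustration only.
from scipy.optimize import linprog

profit = [-120.0, -80.0]          # negated, since linprog minimizes

A_ub = [[2.0, 1.0],               # land (ha) used per unit of each service
        [3.0, 2.0]]               # labour (person-days) used per unit
b_ub = [100.0, 180.0]             # available land and labour

res = linprog(c=profit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2,
              method="highs")     # HiGHS includes a dual simplex solver
print(res.x, -res.fun)            # optimal service mix and total income
```

Varying `b_ub` or `profit` and re-solving mimics the abstract's variability tests on the constraint system and the objective function.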
3

KODAMA, KOICHI, KOHEI SUENAGA, and NAOKI KOBAYASHI. "Translation of tree-processing programs into stream-processing programs based on ordered linear type." Journal of Functional Programming 18, no. 3 (May 2008): 333–71. http://dx.doi.org/10.1017/s0956796807006570.

Full text
Abstract:
There are two ways to write a program for manipulating tree-structured data such as XML documents: one is to write a tree-processing program focusing on the logical structure of the data, and the other is to write a stream-processing program focusing on the physical structure. While tree-processing programs are easier to write than stream-processing programs, they are less efficient in memory usage since they use trees as intermediate data. Our aim is to establish a method for automatically translating a tree-processing program into a stream-processing one in order to take the best of both worlds. We first define a programming language for processing binary trees and a type system based on ordered linear type, and show that every well-typed program can be translated to an equivalent stream-processing program. We then extend the language and the type system to deal with XML documents. We have implemented an XML stream processor generator based on our algorithm and obtained promising experimental results.
APA, Harvard, Vancouver, ISO, and other styles
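To make the tree-versus-stream contrast in the abstract concrete, here is a small Python sketch (not the paper's typed language) computing the maximum element depth over a made-up start/end event stream both ways; the stream version needs no intermediate tree.

```python
# Made-up start/end events standing in for an XML parser's output.
events = [("start", "a"), ("start", "b"), ("end", "b"),
          ("start", "c"), ("end", "c"), ("end", "a")]

def parse(tag, it):
    """Build a (tag, children) tree node; assumes the 'start' was consumed."""
    children = []
    for kind, name in it:
        if kind == "start":
            children.append(parse(name, it))
        else:                      # the matching "end" event closes this node
            return (tag, children)

def depth(node):
    _, children = node
    return 1 + max((depth(c) for c in children), default=0)

it = iter(events)
_, root = next(it)                 # consume the root's "start" event
print("tree version:", depth(parse(root, it)))      # builds the whole tree

# Stream-processing version: constant memory, no intermediate tree.
cur = best = 0
for kind, _ in events:
    cur = cur + 1 if kind == "start" else cur - 1
    best = max(best, cur)
print("stream version:", best)
```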
4

Indriyanti, Aries Dwi, Dedy Rahman Prehanto, Ginanjar Setyo Permadi, Chamdan Mashuri, and Tanhella Zein Vitadiar. "Using Fuzzy Time Series (FTS) and Linear Programming for Production Planning and Planting Pattern Scheduling Red Onion." E3S Web of Conferences 125 (2019): 23007. http://dx.doi.org/10.1051/e3sconf/201912523007.

Full text
Abstract:
This study discusses a production planning and shallot planting-pattern scheduling system that uses fuzzy time series and linear programming methods. Fuzzy time series is used to predict demand, and the prediction results become one of the variables in the linear programming calculation. Demand data, production data, employment data, land area data, production profit data, the number of seedlings, and planting time data are the indicators used in the system's processing. The system provides recommendations for cropping patterns and the number of seeds that must be planted in one period. Shallots are harvested roughly 3-4 months after planting; the number of planted seeds is adjusted to the demand predicted with fuzzy time series, and the cropping pattern calculation is carried out using linear programming. The outputs of the system are recommendations for farmers on seedling quantities, planting schedules, and harvest schedules to meet market demand.
APA, Harvard, Vancouver, ISO, and other styles
5

Shaikhha, Amir, Mathieu Huot, Jaclyn Smith, and Dan Olteanu. "Functional collection programming with semi-ring dictionaries." Proceedings of the ACM on Programming Languages 6, OOPSLA1 (December 8, 2022): 1–33. http://dx.doi.org/10.1145/3527333.

Full text
Abstract:
This paper introduces semi-ring dictionaries, a powerful class of compositional and purely functional collections that subsume other collection types such as sets, multisets, arrays, vectors, and matrices. We developed SDQL, a statically typed language that can express relational algebra with aggregations, linear algebra, and functional collections over data such as relations and matrices using semi-ring dictionaries. Furthermore, thanks to the algebraic structure behind these dictionaries, SDQL unifies a wide range of optimizations commonly used in databases (DB) and linear algebra (LA). As a result, SDQL enables efficient processing of hybrid DB and LA workloads by putting together optimizations that are otherwise confined to either DB systems or LA frameworks. We show experimentally that a handful of DB and LA workloads can take advantage of the SDQL language and optimizations. SDQL can be competitive with or outperform a host of systems that are state of the art in their own domain: the in-memory DB systems Typer and Tectorwise for (flat, not nested) relational data; SciPy for LA workloads; the sparse tensor compiler taco; the Trance nested relational engine; and the in-database machine learning engines LMFAO and Morpheus for hybrid DB/LA workloads over relational data.
APA, Harvard, Vancouver, ISO, and other styles
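The abstract's central idea, dictionaries with semi-ring-valued entries that subsume sets, bags, and sparse vectors, can be imitated in a few lines of Python. This is an illustrative reconstruction of the concept only, not SDQL's implementation.

```python
# A toy semi-ring dictionary: addition merges keys by adding values,
# and scaling multiplies every value. Values are assumed to come from
# a semi-ring (here, plain numbers with + and *).
class SRDict(dict):
    def __add__(self, other):
        out = SRDict(self)
        for k, v in other.items():
            out[k] = out.get(k, 0) + v   # semi-ring addition on values
        return out

    def scale(self, s):
        return SRDict({k: s * v for k, v in self.items()})

# A multiset {a, a, b} is a dictionary into the counting semi-ring:
m1 = SRDict({"a": 2, "b": 1})
m2 = SRDict({"b": 3})
print(m1 + m2)           # {'a': 2, 'b': 4} -- bag union as dictionary sum

# A sparse vector is a dictionary into the real semi-ring:
v = SRDict({0: 1.5, 3: -2.0})
print(v.scale(2.0))      # {0: 3.0, 3: -4.0}
```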
6

Han, Zhi Yong, and Zhi Gang Han. "Data Processing with Transportation Cost Allocation Method of Formwork and Scaffold Leasing Companies Based on Supply Chain." Advanced Materials Research 977 (June 2014): 520–24. http://dx.doi.org/10.4028/www.scientific.net/amr.977.520.

Full text
Abstract:
The formwork and scaffold leasing companies become one of important links in the industry supply chain. The sustainable development of the company requires the alliance and the further cooperation between the leasing companies. This thesis attempts to study how to decrease the alliance cost and to achieve reasonable cost allocation through linear programming and cooperative games in a sense of transportation cost allocation. Furthermore, the thesis puts forward the data processing method of minimizing the ratio difference of cost saving. Based on this study, it can be concluded that the method is more rational for cost allocation.
APA, Harvard, Vancouver, ISO, and other styles
7

Volkova, Elena, and Vladimir Gisin. "Privacy-Preserving Two-Party Computation of Parameters of a Fuzzy Linear Regression." Voprosy kiberbezopasnosti, no. 3(43) (2021): 11–19. http://dx.doi.org/10.21681/2311-3456-2021-3-11-19.

Full text
Abstract:
Purpose: to describe a two-party computation of fuzzy linear regression with horizontal partitioning of data, while maintaining data confidentiality. Methods: the computation is designed using a transformational approach. The optimization problems of each of the two participants are transformed and combined into a common problem, whose solution can be found by one of the participants. Results: a protocol is proposed that allows two users to obtain a fuzzy linear regression model based on their combined data. Each user has a set of observation data containing the values of the explanatory variables and the value of the response variable. The data structure is shared: both users use the same set of explanatory variables and a common criterion. Regression coefficients are sought as symmetric triangular fuzzy numbers by solving the corresponding linear programming problem. It is assumed that both users are semi-honest (honest but curious, or passive and curious), i.e., they execute the protocol but may try to extract information about the partner's source data by applying arbitrary processing methods, not provided for by the protocol, to the received data. The protocol describes the transformed linear programming problem, whose solution can be found by one of the users. The number of observations of each user is known to both users; the observation data remain confidential. The correctness of the protocol is proved and its security is justified. Keywords: fuzzy numbers, collaborative solution of a linear programming problem, two-party computation, transformational approach, cloud computing, federated machine learning.
APA, Harvard, Vancouver, ISO, and other styles
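The "transformational approach" mentioned above disguises an optimization problem before another party solves it. Below is a toy Python sketch of one classic LP disguise, random positive diagonal scaling of variables and constraints; it illustrates the general idea only and is not the paper's protocol, and all problem data are invented.

```python
# Disguise an LP  min c.x  s.t.  A x <= b, x >= 0  before outsourcing it.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(42)
c = np.array([1.0, 2.0])
A = np.array([[-1.0, -1.0]])      # encodes x1 + x2 >= 1
b = np.array([-1.0])

Q = np.diag(rng.uniform(0.5, 2.0, size=2))   # hides variable scales (x = Q y)
P = np.diag(rng.uniform(0.5, 2.0, size=1))   # hides constraint scales

# The solving party sees only (c@Q, P@A@Q, P@b), not the original data.
res = linprog(c @ Q, A_ub=P @ A @ Q, b_ub=P @ b, bounds=[(0, None)] * 2,
              method="highs")
x = Q @ res.x                                 # the owner undoes the disguise
print("recovered solution:", x)               # ~ (1, 0), optimum of the original
```

Because Q is positive diagonal, y >= 0 is equivalent to x >= 0, so the disguised problem has the same feasible set up to the change of variables.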
8

Gilles, Jean-François, and Thomas Boudier. "TAPAS: Towards Automated Processing and Analysis of multi-dimensional bioimage data." F1000Research 9 (July 2, 2021): 1278. http://dx.doi.org/10.12688/f1000research.26977.2.

Full text
Abstract:
Modern microscopy is based on reproducible quantitative analysis; image data should be batch-processed by a standardized system that can be shared and easily reused by others. Furthermore, such a system should require no or minimal programming from its users. We developed TAPAS (Towards an Automated Processing and Analysis System). The goal is to design an easy system for describing and exchanging processing workflows. The protocols are simple text files comprising a linear list of commands used to process and analyse the images. An extensive set of 60 modules is already available, mostly based on the tools proposed in the 3D ImageJ Suite. We propose a wizard, called TAPAS menu, to help the user design the protocol by listing the available modules and their associated parameters. Most modules have default parameter values for the most common tasks. Once the user has designed the protocol, he/she can apply it to a set of images, which can be stored either locally or in an OMERO database. Extensive documentation, including the list of modules, various tutorials, and a link to the source code, is available at https://imagej.net/TAPAS.
APA, Harvard, Vancouver, ISO, and other styles
9

Gilles, Jean-François, and Thomas Boudier. "TAPAS: Towards Automated Processing and Analysis of multi-dimensional bioimage data." F1000Research 9 (October 28, 2020): 1278. http://dx.doi.org/10.12688/f1000research.26977.1.

Full text
Abstract:
Modern microscopy is based on reproducible quantitative analysis; image data should be batch-processed by a standardized system that can be shared and easily reused by others. Furthermore, such a system should require no or minimal programming from its users. We developed TAPAS (Towards an Automated Processing and Analysis System). The goal is to design an easy system for describing and exchanging processing workflows. The protocols are simple text files comprising a linear list of commands used to process and analyse the images. An extensive set of 60 modules is already available, mostly based on the tools proposed in the 3D ImageJ Suite. We propose a wizard, called TAPAS menu, to help the user design her protocol by listing the available modules and their associated parameters. Most modules have default parameter values for the most common tasks. Once the user has designed her protocol, she can apply it to a set of images, which can be stored either locally or in an OMERO database. Extensive documentation, including the list of modules, various tutorials, and a link to the source code, is available at https://imagej.net/TAPAS.
APA, Harvard, Vancouver, ISO, and other styles
10

Pan, Ying Hui, Shi Yin Fang, and Bo Guo. "An Algorithm of Adaptive Variable Spacing for Arc Fitting Non-Circular Curve." Advanced Materials Research 753-755 (August 2013): 1960–65. http://dx.doi.org/10.4028/www.scientific.net/amr.753-755.1960.

Full text
Abstract:
Numerical Control (NC) systems provide only linear and circular interpolation functions, so a non-circular curve must be fitted with lines and arcs before it can be processed. There are many algorithms for fitting curves with arcs, but most have shortcomings: the numbers of fitting data points and arc segments are too large, error control during arc fitting is difficult, the programming is hard to adapt to real-time NC processing requirements, and so on. This paper proposes a new algorithm that can quickly and automatically determine the best nodes for fitting a curve with arcs according to the change of curvature and the machining error, ensuring both machining precision and efficiency. The algorithm can be programmed and executed directly through the NC's macro program function without any other auxiliary software. The detailed algorithm process and the application programming are given.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Linear programming Data processing"

1

Olivier, Hannes Friedel. "The expected runtime of the (1+1) evolutionary algorithm on almost linear functions." Virtual Press, 2006. http://liblink.bsu.edu/uhtbin/catkey/1356253.

Full text
Abstract:
This thesis expands the theoretical research done in the area of evolutionary algorithms. The (1+1) EA is a simple algorithm which makes it possible to gain some insight into the behaviour of these randomized search heuristics. This work shows ways to possibly improve on existing bounds. The generally good runtime of the algorithm on linear functions is also proven for classes of quadratic functions, defined by the relative size of the quadratic and linear weights. One proof looks at a worst-case function which always exhibits worse behaviour than many other functions; this function is used as an upper bound for many different classes.
Department of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
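For readers unfamiliar with the (1+1) EA analyzed above, here is a minimal Python version running on the linear function OneMax (count of one-bits); the expected optimization time on linear functions is known to be Theta(n log n).

```python
# (1+1) evolutionary algorithm on OneMax: flip each bit independently with
# probability 1/n; keep the offspring if it is at least as good.
import random

n = 50
x = [random.randint(0, 1) for _ in range(n)]
steps = 0
while sum(x) < n:                             # fitness = number of ones
    y = [b ^ (random.random() < 1.0 / n) for b in x]
    if sum(y) >= sum(x):                      # elitist acceptance
        x = y
    steps += 1
print("steps to optimum:", steps)             # expected O(n log n)
```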
2

Barboza, Angela Olandoski. "Simulação e técnicas da computação evolucionária aplicadas a problemas de programação linear inteira mista." Centro Federal de Educação Tecnológica do Paraná, 2005. http://repositorio.utfpr.edu.br/jspui/handle/1/74.

Full text
Abstract:
Presently, companies live a reality of rapid economic transformations generated by globalization. The growth of international trade in products and services, the constant exchange of information, and cultural interchange challenge administrators to define new paths for their companies. These dynamics and the increasing competitiveness demand new knowledge and abilities from professionals, and new technologies are researched in order to improve operational efficiency. The Brazilian oil industry in particular has invested in applied research, development, and technological qualification to keep its competitiveness in the international market. Many problems remain to be studied in this production sector; among these, due to their importance, the problems of product storage and transfer stand out. This work approaches a scheduling problem that involves diesel oil storage and distribution in an oil refinery. Mixed Integer Linear Programming (MILP) techniques with discrete- and continuous-time representations were used. The models that were developed were solved with the LINGO 8.0 software, using the branch and bound algorithm. However, due to their combinatorial nature, the computational time required for the solution was excessive. Thus, four new methodologies were developed: the Hybrid Steady State Genetic Algorithm (HSSGA) and the Transgenetic ProtoG Algorithm, both integrated with Linear Programming (LP), for the discrete-time representation; and simulation with optimization using a Genetic Algorithm (GA) and simulation with optimization using the Transgenetic ProtoG Algorithm for the continuous-time representation. Several tests with these new methodologies showed that they can reach good results in acceptable computational time. The two techniques for the discrete-time representation showed satisfactory performance in terms of solution quality and computational time; among these, the methodology using the Transgenetic ProtoG Algorithm showed the best results. Also, the simulators with optimization using GA and the Transgenetic ProtoG Algorithm for the continuous-time representation proved adequate substitutes for the resolution through MILP, because they reach solutions in far less computational time than branch and bound.
APA, Harvard, Vancouver, ISO, and other styles
3

Vanden, Berghen Frank. "Constrained, non-linear, derivative-free, parallel optimization of continuous, high computing load, noisy objective functions." Doctoral thesis, Universite Libre de Bruxelles, 2004. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211177.

Full text
Abstract:
The main result is a new original algorithm: CONDOR ("COnstrained, Non-linear, Direct, parallel Optimization using trust Region method for high-computing load, noisy functions"). The aim of this algorithm is to find the minimum x* of an objective function F(x) (x is a vector whose dimension is between 1 and 150) using the least number of function evaluations of F(x). It is assumed that the dominant computing cost of the optimization process is the time needed to evaluate the objective function F(x) (one evaluation can range from 2 minutes to 2 days). The algorithm tries to minimize the number of evaluations of F(x), at the cost of a huge amount of routine work. CONDOR is a derivative-free optimization tool, i.e., the derivatives of F(x) are not required. The only information needed about the objective function is a simple method (written in Fortran, C++, etc.) or a program (a Unix, Windows, or Solaris executable) which can evaluate the objective function F(x) at a given point x. The algorithm has been specially developed to be very robust against noise in the evaluation of the objective function F(x). These hypotheses are very general, so the algorithm can be applied to a vast number of situations. CONDOR is able to use several CPUs in a cluster of computers. Different computer architectures can be mixed together and used simultaneously to deliver huge computing power. The optimizer makes simultaneous evaluations of the objective function F(x) on the available CPUs to speed up the optimization process. The experimental results are very encouraging and validate the quality of the approach: CONDOR outperforms many commercial, high-end optimizers, and it might be the fastest optimizer in its category (fastest in terms of number of function evaluations). When several CPUs are used, the performance of CONDOR is currently unmatched (May 2004). CONDOR has been used during the METHOD project to optimize the shape of the blades inside a centrifugal compressor (METHOD stands for Achievement Of Maximum Efficiency For Process Centrifugal Compressors THrough New Techniques Of Design). In this project, the objective function is based on a 3D CFD (computational fluid dynamics) code which simulates the flow of the gas inside the compressor.
Doctorate in applied sciences
APA, Harvard, Vancouver, ISO, and other styles
4

Karamalis, Constantinos. "Data perturbation analyses for linear programming." Thesis, University of Ottawa (Canada), 1994. http://hdl.handle.net/10393/6709.

Full text
Abstract:
This thesis focuses on several aspects of data perturbation for linear programming. Classical questions of degeneracy and post-optimal analysis are given a unified presentation, in view of new interior-point methods of linear programming. The performance of these methods is compared to the simplex algorithm; interior-point methods are shown to alleviate some difficulties of representation and solution of linear programs. An affine scaling algorithm is implemented in conjunction with a simple rounding heuristic to assess the benefit of interior-point trajectories for providing approximate solutions of linear integer programming.
APA, Harvard, Vancouver, ISO, and other styles
5

Li, Jun-Sheng. "Design and scheduling of chemical bath processing lines." Diss., Georgia Institute of Technology, 1989. http://hdl.handle.net/1853/24380.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Geske, Ulrich, and Hans-Joachim Goltz. "Efficiency of difference-list programming." Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2010/4156/.

Full text
Abstract:
The difference-list technique is described in the literature as an effective method for extending lists to the right without using calls of append/3. There exist some proposals for automatic transformation of list programs into difference-list programs. However, we are interested in the construction of difference-list programs by the programmer, avoiding the need for a transformation step. In [GG09] it was demonstrated how left-recursive procedures with a dangling call of append/3 can be transformed into right-recursion using the unfolding technique. To simplify the writing of difference-list programs, a new cons/2 procedure was introduced. In the present paper, we investigate how efficiency is influenced by using cons/2. We measure the efficiency of procedures using the accumulator technique, cons/2, DCGs, and difference lists, and compute the resulting speedup with respect to the simple procedure definition using append/3. Four Prolog systems were investigated, and we found different behaviour concerning the speedup from difference lists. One result of our investigation is that a piece of advice often given in the literature, to avoid calls of append/3, could not be confirmed in this strong formulation.
APA, Harvard, Vancouver, ISO, and other styles
7

Wilkes, Charles Thomas. "Programming methodologies for resilience and availability." Diss., Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/8308.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Nader, Babak. "Parallel solution of sparse linear systems." Full text open access at:, 1987. http://content.ohsu.edu/u?/etd,138.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wang, Zongyan 1969. "Implementation of distributed data processing in a database programming language." Thesis, McGill University, 2002. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=79201.

Full text
Abstract:
This thesis discusses the design and implementation of integrating Internet capability into the database programming language JRelix, so that it not only possesses the data organization, storage, and indexing capabilities of a normal DBMS but also possesses remote data processing capabilities across the Internet.
A URL-based name extension to database elements in a database programming language is adopted, which gives it collaborative and distributed capability over the Internet with no changes in syntax or semantics apart from the new structure in names. Relations, computations, statements (or queries), and relational expressions are treated uniformly as database elements in our implementation. These database elements can be accessed or executed remotely. As a result, remote data accessing and processing, as well as Remote Procedure Call (RPC), are supported.
Resource sharing is a main achievement of the implementation. In addition, site autonomy and performance transparency are accomplished; distributed view management is provided; sites need not be geographically distant; and security management is implemented.
APA, Harvard, Vancouver, ISO, and other styles
10

Ashoor, Khalil Layla Ali. "Performance analysis integrating data envelopment analysis and multiple objective linear programming." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/performance-analysis-integrating-data-envelopment-analysis-and-multiple-objective-linear-programming(65485f28-f6c5-4eff-b422-6dd05f1b46fe).html.

Full text
Abstract:
Firms or organisations implement performance assessment to improve productivity, but evaluating the performance of firms or organisations may be complex and complicated due to the existence of conflicting objectives. Data Envelopment Analysis (DEA) is a non-parametric approach utilized to evaluate the relative efficiencies of decision making units (DMUs) within firms or organizations that perform similar tasks. Although DEA measures the relative efficiency of a set of DMUs, the efficiency scores generated do not consider the decision maker's (DM's) or expert preferences. DEA is used to measure efficiency and can be extended to include DM and expert preferences by incorporating value judgements. Value judgements can be implemented by two techniques: weight restrictions or constructing an equivalent Multiple Objective Linear Programming (MOLP) model. Weight restrictions require prior knowledge to be provided by the DM, and moreover the DM cannot intervene during the assessment analysis. On the other hand, the second approach enables the DM to intervene during performance assessment without prior knowledge, while providing alternative objectives that allow the DM to reach the most preferred decision subject to available resources. The main focus of this research was to establish interactive frameworks that allow the DM to set targets according to his preferences and to test alternatives that can realistically be measured through an interactive procedure. These frameworks are based on building an equivalence model between extended DEA and a MOLP minimax formulation incorporating an interactive procedure. In this study two frameworks were established. The first is based on an equivalence model between the DEA trade-off approach and the MOLP minimax formulation, which allows for incorporating DM and expert preferences. The second is based on an equivalence model between the DEA bounded model and the MOLP minimax formulation. This allows for integrating the DM's preferences through interactive steps to measure the whole efficiency score (i.e., best and worst efficiency) of an individual DMU. In both approaches a gradient projection interactive approach is implemented to estimate, regionally, the most preferred solution along the efficient frontier. The second framework was further extended by including ranking based on the geometric average. All the frameworks developed and presented were tested through implementation on two real case studies.
APA, Harvard, Vancouver, ISO, and other styles
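The DEA efficiency scores this thesis builds on are computed by solving one small linear program per DMU. Below is a minimal input-oriented CCR sketch in Python with invented data; it shows only the basic envelopment model, not the thesis's MOLP extensions or interactive procedure.

```python
# Input-oriented CCR DEA: for each DMU j, minimize theta subject to
#   sum_i lambda_i * x_i <= theta * x_j   (aggregated inputs)
#   sum_i lambda_i * y_i >= y_j           (aggregated outputs)
#   lambda >= 0.  Data below are invented: 4 DMUs, 2 inputs, 1 output.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0], [5.0, 4.0]])  # inputs
Y = np.array([[1.0], [1.0], [1.2], [1.5]])                      # outputs

def ccr_efficiency(j):
    n = X.shape[0]
    c = np.concatenate(([1.0], np.zeros(n)))          # variables: theta, lambdas
    A1 = np.hstack([-X[j].reshape(-1, 1), X.T])       # X'lambda - theta*x_j <= 0
    A2 = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T]) # -Y'lambda <= -y_j
    A_ub = np.vstack([A1, A2])
    b_ub = np.concatenate([np.zeros(X.shape[1]), -Y[j]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun                                    # optimal theta

for j in range(4):
    print(f"DMU {j}: efficiency = {ccr_efficiency(j):.3f}")
```

A score of 1.0 marks a DMU on the efficient frontier; lower values measure how far inputs could be contracted proportionally.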

Books on the topic "Linear programming Data processing"

1

R, Garside Gerald, ed. Linear programming in Pascal. London: Edward Arnold, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bhavikatti, S. S. Structural optimisation using sequential linear programming. New Delhi: Vikas Publishing House Pvt., 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hürlimann, Tony. LPL: A structured language for modeling linear programs. Bern: P. Lang, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Mangasarian, Olvi L., and Stephen J. Wright, eds. Linear programming with MATLAB. Philadelphia: Society for Industrial and Applied Mathematics, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Arbel, Ami. Exploring interior-point linear programming: Algorithms and software. Cambridge, Mass: MIT Press, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bagus, Erich. Computergestützte Zeitreihenprognose mit linear-rekursiven Modellen. Idstein: Schulz-Kirchner Verlag, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Rommelfanger, Heinrich. PC software FULPAL 2.0: An interactive algorithm for solving multicriteria fuzzy linear programs controlled by aspiration levels. Frankfurt/Main: Johann Wolfgang Goethe-Universität Frankfurt, Fachbereich Wirtschaftswissenschaften, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Nazareth, J. L. Computer solution of linear programs. New York: Oxford University Press, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Christen, Peter. A parallel iterative linear system solver with dynamic load balancing. Aachen: Shaker, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Arboit, Geneviève. Average case reductions for subset sum and decoding of linear codes. [Toronto]: Arboit, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Linear programming Data processing"

1

Johnson, Chris F. A., and Jayant Varma. "Data Processing." In Pro Bash Programming, 161–81. Berkeley, CA: Apress, 2015. http://dx.doi.org/10.1007/978-1-4842-0121-3_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Johnson, Chris F. A. "Data Processing." In Pro Bash Programming, 157–77. Berkeley, CA: Apress, 2010. http://dx.doi.org/10.1007/978-1-4302-1998-9_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bingham, John. "Systems Design and Programming." In Data Processing, 139–50. London: Macmillan Education UK, 1989. http://dx.doi.org/10.1007/978-1-349-19938-9_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Liwu. "Linear Data Structures." In Java: Data Structures and Programming, 229–69. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/978-3-642-95851-9_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Shi, Yong, Yingjie Tian, Gang Kou, Yi Peng, and Jianping Li. "Multiple Criteria Linear Programming." In Advanced Information and Knowledge Processing, 119–32. London: Springer London, 2011. http://dx.doi.org/10.1007/978-0-85729-504-0_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Enes, Kristina. "Web Service for Point Cloud Supported Robot Programming Using Machine Learning." In Annals of Scientific Society for Assembly, Handling and Industrial Robotics 2021, 253–62. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-74032-0_21.

Full text
Abstract:
In industrial automation, the use of robots is already standard, but there is still a lot of room for further automation. One such place where improvements can be made is in the adjustment of a production system to new and unknown products. Currently, this task includes reprogramming the robot and readjusting the image processing algorithms if sensors are involved. This takes time, effort, and a specialist, something especially small and middle-sized companies shy away from. We propose to represent a physical production line with a digital twin, using the simulated production system to generate labeled data to be used for training in a deep learning component. An artificial neural network is trained to both recognize and localize the observed products. This allows the production line to handle both known and unknown products more flexibly. The deep learning component itself is located in a cloud and can be accessed through a web service, allowing any member of the staff to initiate the training, regardless of their programming skills. In summary, our approach addresses not only further automation in manufacturing but also the use of synthesized data for deep learning.
APA, Harvard, Vancouver, ISO, and other styles
7

Ennals, Robert, Richard Sharp, and Alan Mycroft. "Linear Types for Packet Processing." In Programming Languages and Systems, 204–18. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24725-8_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Roosta, Seyed H. "Data Parallel Programming." In Parallel Processing and Parallel Algorithms, 477–99. New York, NY: Springer New York, 2000. http://dx.doi.org/10.1007/978-1-4612-1220-1_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wiley, Matt, and Joshua F. Wiley. "GLMMs: Linear." In Advanced R Statistical Programming and Data Models, 479–552. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-2872-2_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Roosta, Seyed H. "Data Flow and Functional Programming." In Parallel Processing and Parallel Algorithms, 411–37. New York, NY: Springer New York, 2000. http://dx.doi.org/10.1007/978-1-4612-1220-1_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Linear programming Data processing"

1

Filar, J. A., K. E. Avrachenkov, and E. Altman. "An asymptotic simplex method for parametric linear programming." In 1999 Information, Decision and Control. Data and Information Fusion Symposium, Signal Processing and Communications Symposium and Decision and Control Symposium. Proceedings (Cat. No.99EX251). IEEE, 1999. http://dx.doi.org/10.1109/idc.1999.754195.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jun, Guo, and Yongbo Wang. "An Algorithm of Maximum Lifetime of Data Gathering Aggregation Issue Based on Linear Programming in WSNs." In 2010 Third International Symposium on Information Processing (ISIP). IEEE, 2010. http://dx.doi.org/10.1109/isip.2010.57.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Anbu Yazhini, S., and D. S. Harish Ram. "High level synthesis of data flow graphs using integer linear programming for switching power reduction." In 2011 International Conference on Signal Processing, Communication, Computing and Networking Technologies (ICSCCN 2011). IEEE, 2011. http://dx.doi.org/10.1109/icsccn.2011.6024597.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Babu, P., and P. Stoica. "A combined linear programming-maximum likelihood approach to radial velocity data analysis for extrasolar planet detection." In ICASSP 2011 - 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2011. http://dx.doi.org/10.1109/icassp.2011.5947317.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Jovanoska, Dijana, and Gjorgji Mancheski. "On-Line Big Data Processing Using Python Libraries for Multiple Linear Regression in Complex Environment." In 27th International Scientific Conference Strategic Management and Decision Support Systems in Strategic Management. University of Novi Sad, Faculty of Economics in Subotica, 2022. http://dx.doi.org/10.46541/978-86-7233-406-7_228.

Full text
Abstract:
The phenomenon called Big Data is today one of the most significant and least visible consequences of the development of technology and the Internet. The data generated by today's globally connected world grow at an exponential rate, and they are a real "gold mine" for those users who know how to correctly interpret such data and make successful decisions based on them. Data analysis and processing is one of the most important components of a big data system, and in this branch of data science the most popular tool is the Python programming language, which provides its users with a large number of constantly maintained program libraries and development environments. Importantly for legal entities and individuals, almost all program libraries and functions provided by this language come with free licenses and open code, maintained and well documented, which saves each company significant money and time. This paper is dedicated to building a multiple regression model over large amounts of data using Python, based on data provided by two market retailers, and to assessing its predictive power. Because the number of variables is large, several models were built and a comparative analysis of the different models was carried out. The analysis shows that Python is a good tool that can be used repeatedly to select different variants and evaluate the resulting model. A graphical interface can be built for the model, making it much more acceptable to end users; it can be placed on a server on the Internet or on a modern cloud platform and used on demand, and the results can be embedded in end-user interfaces. Models made in this way (with dynamic data extraction) can be used in BI and machine learning processes.
APA, Harvard, Vancouver, ISO, and other styles
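The workflow this abstract describes, fitting and evaluating a multiple linear regression in Python, can be sketched in a few lines with pandas and scikit-learn. The file name and column names below are placeholders, not the paper's retail data.

```python
# Minimal multiple linear regression workflow; data set and columns are
# hypothetical stand-ins for the retailers' data described in the paper.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

df = pd.read_csv("retail_sales.csv")            # hypothetical file
X = df[["price", "promotion", "foot_traffic"]]  # hypothetical predictors
y = df["units_sold"]                            # hypothetical response

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
print("coefficients:", dict(zip(X.columns, model.coef_)))
print("R^2 on held-out data:", r2_score(y_te, model.predict(X_te)))
```

Comparing several candidate predictor subsets this way mirrors the paper's comparative analysis of alternative models.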
6

Trösser, Fulya, Simon de Givry, and George Katsirelos. "Improved Acyclicity Reasoning for Bayesian Network Structure Learning with Constraint Programming." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/584.

Full text
Abstract:
Bayesian networks are probabilistic graphical models with a wide range of application areas including gene regulatory networks inference, risk analysis and image processing. Learning the structure of a Bayesian network (BNSL) from discrete data is known to be an NP-hard task with a superexponential search space of directed acyclic graphs. In this work, we propose a new polynomial time algorithm for discovering a subset of all possible cluster cuts, a greedy algorithm for approximately solving the resulting linear program, and a generalized arc consistency algorithm for the acyclicity constraint. We embed these in the constraint programming-based branch-and-bound solver CPBayes and show that, despite being suboptimal, they improve performance by orders of magnitude. The resulting solver also compares favorably with GOBNILP, a state-of-the-art solver for the BNSL problem which solves an NP-hard problem to discover each cut and solves the linear program exactly.
APA, Harvard, Vancouver, ISO, and other styles
7

Weatheritt, Jack, Richard D. Sandberg, Julia Ling, Gonzalo Saez, and Julien Bodart. "A Comparative Study of Contrasting Machine Learning Frameworks Applied to RANS Modeling of Jets in Crossflow." In ASME Turbo Expo 2017: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/gt2017-63403.

Full text
Abstract:
Classical RANS turbulence models have known deficiencies when applied to jets in crossflow. Identifying the linear Boussinesq stress-strain hypothesis as a major contribution to erroneous prediction, we consider and contrast two machine learning frameworks for turbulence model development. Gene Expression Programming, an evolutionary algorithm that employs a survival of the fittest analogy, and a Deep Neural Network, based on neurological processing, add non-linear terms to the stress-strain relationship. The results are Explicit Algebraic Stress Model-like closures. High fidelity data from an inline jet in crossflow study is used to regress new closures. These models are then tested on a skewed jet to ascertain their predictive efficacy. For both methodologies, a vast improvement over the linear relationship is observed.
APA, Harvard, Vancouver, ISO, and other styles
8

Negrut, Dan, Alessandro Tasora, and Mihai Anitescu. "Large-Scale Parallel Multibody Dynamics With Frictional Contact on the GPU." In ASME 2008 Dynamic Systems and Control Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/dscc2008-2139.

Full text
Abstract:
In the context of simulating the frictional contact dynamics of large systems of rigid bodies, this paper reviews a novel method for solving large cone complementarity problems by means of a fixed-point iteration algorithm. The method is an extension of the Gauss-Seidel and Gauss-Jacobi methods with overrelaxation for symmetric convex linear complementarity problems. Convergent under fairly standard assumptions, the method is implemented in a parallel framework by using a single instruction multiple data (SIMD) computation paradigm promoted by the Compute Unified Device Architecture (CUDA) library for graphical processing unit (GPU) programming. The framework is anticipated to become a viable tool for investigating the dynamics of complex systems such as ground vehicles running on sand, powder composites, and granular material flow.
APA, Harvard, Vancouver, ISO, and other styles
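The fixed-point iteration the paper reviews generalizes the projected Gauss-Seidel method to cone complementarity problems. As a scalar illustration (on an ordinary linear complementarity problem rather than a cone problem, and on the CPU rather than the GPU), consider this NumPy sketch with invented data:

```python
# Projected Gauss-Seidel for the LCP: find z >= 0 with w = M z + q >= 0
# and z . w = 0. With M symmetric positive definite, the sweep converges.
import numpy as np

M = np.array([[4.0, 1.0], [1.0, 3.0]])   # invented SPD matrix
q = np.array([-1.0, -2.0])

z = np.zeros(2)
for sweep in range(50):
    for i in range(len(z)):
        # One Gauss-Seidel update, then projection onto z_i >= 0.
        z[i] = max(0.0, z[i] - (M[i] @ z + q[i]) / M[i, i])

w = M @ z + q
print("z =", z, "w =", w, "complementarity =", z @ w)
```

The GPU versions in these papers parallelize updates of this kind across many contacts per iteration (a Jacobi-style variant), which is what makes large frictional-contact problems tractable.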
9

Tasora, Alessandro, Dan Negrut, and Mihai Anitescu. "A GPU-Based Implementation of a Cone Convex Complementarity Approach for Simulating Rigid Body Dynamics With Frictional Contact." In ASME 2008 International Mechanical Engineering Congress and Exposition. ASMEDC, 2008. http://dx.doi.org/10.1115/imece2008-66766.

Full text
Abstract:
In the context of simulating the frictional contact dynamics of large systems of rigid bodies, this paper reviews a novel method for solving large cone complementarity problems by means of a fixed-point iteration algorithm. The method is an extension of the Gauss-Seidel and Gauss-Jacobi methods with over-relaxation for symmetric convex linear complementarity problems. Convergent under fairly standard assumptions, the method is implemented in a parallel framework by using a single instruction multiple data computation paradigm promoted by the Compute Unified Device Architecture library for graphical processing unit programming. The framework supports the analysis of problems with a large number of rigid bodies in contact. Simulation thus becomes a viable tool for investigating the dynamics of complex systems such as ground vehicles running on sand, powder composites, and granular material flow.
APA, Harvard, Vancouver, ISO, and other styles
10

Guida, Francesco Ermanno, and Ernesto Voltaggio. "Programming Visual Representations. Evolutions of Visual Identities between Tangible and Intangible." In Systems & Design: Beyond Processes and Thinking. Valencia: Universitat Politècnica València, 2016. http://dx.doi.org/10.4995/ifdp.2016.3334.

Full text
Abstract:
The communication design field has changed considerably in the last 20 years and more, as has the role of the designer. Technology has modified the daily work tools, and new possible relations between the designer, the client, and the final user can be underlined. Observing some of the most experimental practices, new visual languages have drawn attention, affected by innovative approaches and mixed competencies. The area of visual identities is especially of interest, without excluding other areas of experimentation. The phenomenon of so-called dynamic or post-logo identities underlined the possibility of using more fluid and expressive, variable, context-related, processual, performative, non-linear, consistent visual languages instead of the usual static repetition of a logo or an imposed series of rules (Felsing, 2010), but also their contradictions in making an organization recognizable and in the daily management of the visual identity. An interesting evolution to be underlined is in the use of digital tools, no longer in a passive way but in an active one. Visual designers can build their own digital tools, basing them on design and aesthetic needs. Innovation is in the creative process, instead of in the final result; it is in the "way to live our own creativeness", as affirmed precisely by Soddu (1998). The designer is no longer just the user of ready-made digital tools, becoming himself a programmer of customized digital toolboxes by using open-source codes like Processing or VVVV or hardware like Arduino. This allows us to affirm that visual designers are becoming designer-producers (Bianchini & Maffei, 2012), as is happening for their colleagues in the product design field. This is not just a DIY attitude but something that is changing the control knobs of a design system in all its process and development. As far as technology support is relevant, technical matters are relegated to the background on behalf of abstraction and data parametrization, that is, on behalf of a meta-design level. The use of programming in creative and visual communication design processes "empowers the designer, freeing [him] from the constraints of predefined computational tools, and promoting creative freedom in the construction of visual metaphors" (Duro, Machado, Rebelo, 2012). The aim of this paper is to discuss this recent evolution in the field of visual identities and in the wider area of communication design practices.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Linear programming Data processing"

1

Cook, Samantha, Marissa Torres, Nathan Lamie, Lee Perren, Scott Slone, and Bonnie Jones. Automated ground-penetrating-radar post-processing software in R programming. Engineer Research and Development Center (U.S.), September 2022. http://dx.doi.org/10.21079/11681/45621.

Full text
Abstract:
Ground-penetrating radar (GPR) is a nondestructive geophysical technique used to create images of the subsurface. A major limitation of GPR is that a subject matter expert (SME) needs to post-process and interpret the data, limiting the technique's use. The goal of this study is to develop automated GPR post-processing software, compatible with Geophysical Survey Systems, Inc. (GSSI) data, in open-source R programming. This would eliminate the need for an SME to process GPR data, remove proprietary software dependencies, and render GPR more accessible. This study collected GPR profiles by using a GSSI SIR4000 control unit, a 100 MHz antenna, and a Trimble GPS. A standardized method for post-processing data was then established, which includes static data removal, time-zero correction, distance normalization, data filtering, and stacking. These steps were scripted and automated in R programming, excluding data filtering, for which an existing package, RGPR, was used. The study compared profiles processed using GSSI software to profiles processed using the R script developed here to ensure comparable functionality and output. While an SME is currently still necessary for interpretations, this script eliminates the need for one to post-process GSSI GPR data.
APA, Harvard, Vancouver, ISO, and other styles
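Two of the report's processing steps, time-zero correction and stacking, are easy to sketch with NumPy. This stand-in uses synthetic data and a deliberately crude first-break pick; it is illustrative only and is not the ERDC R script.

```python
# Sketch of two GPR post-processing steps on a synthetic radargram:
# `traces` is a (samples x traces) array of amplitudes.
import numpy as np

rng = np.random.default_rng(0)
traces = rng.normal(size=(512, 200))          # placeholder radargram

# Time-zero correction: shift each trace so its first strong arrival aligns.
first_break = np.abs(traces).argmax(axis=0)   # crude per-trace pick
shift = first_break - first_break.min()
aligned = np.stack([np.roll(traces[:, j], -shift[j])
                    for j in range(traces.shape[1])], axis=1)

# Stacking: average groups of adjacent traces to suppress uncorrelated noise.
n_stack = 4
stacked = aligned[:, : aligned.shape[1] // n_stack * n_stack]
stacked = stacked.reshape(aligned.shape[0], -1, n_stack).mean(axis=2)
print(stacked.shape)                          # (512, 50)
```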
2

Modlo, Yevhenii O., Serhiy O. Semerikov, Pavlo P. Nechypurenko, Stanislav L. Bondarevskyi, Olena M. Bondarevska, and Stanislav T. Tolmachev. The use of mobile Internet devices in the formation of ICT component of bachelors in electromechanics competency in modeling of technical objects. [б. в.], September 2019. http://dx.doi.org/10.31812/123456789/3264.

Full text
Abstract:
Computer simulation of technical objects and processes is one of the components of the system of professional training of a modern electromechanical engineer. It has been established that, although mobile Internet devices (MIDs) are actively used by electrical engineers, the methods of using them in the process of training bachelors in electromechanics are considered only in some domestic scientific studies. The article highlights the components of the methods of using MIDs in forming the ICT component of the competency of bachelors in electromechanics in modeling technical objects, providing for students to acquire basic knowledge in the field of computer science and modern ICT and the skills to use programming systems, math packages, subroutine libraries, and the like. For processing tabular data, it is proposed to use various freely distributed tools that do not differ significantly in functionality, such as Google Sheets and Microsoft Excel; for processing text data, QuickEdit Text Editor, Google Docs, and Microsoft Word. For 3D modeling and viewing design and technological documentation, the comprehensive use of Autodesk tools in the training process is proposed.
APA, Harvard, Vancouver, ISO, and other styles
3

Shabelnyk, Tetiana V., Serhii V. Krivenko, Nataliia Yu Rotanova, Oksana F. Diachenko, Iryna B. Tymofieieva, and Arnold E. Kiv. Integration of chatbots into the system of professional training of Masters. [б. в.], June 2021. http://dx.doi.org/10.31812/123456789/4439.

Full text
Abstract:
The article presents and describes innovative training technologies in the professional training of Masters. For high-quality training of students of technical specialties, it becomes necessary to rethink the purpose, results of study, and means of teaching professional disciplines in modern educational conditions. The experience of implementing a chatbot tool in teaching the discipline "Mathematical modeling of socio-economic systems" in the educational and professional program 124 System Analysis is described. The characteristics of the generalized structure of the chatbot information system for investment analysis are presented: input information, an information processing system, and output information, which together create a closed cycle (system) of direct and feedback interaction. The information processing system is represented by accounting and analytical data management blocks. The investment analysis chatbot will help Masters in the System Analysis specialty manage the investment process efficiently, based on making the right decisions, understanding investment analysis within the broader structure of financial management, and optimizing risks in these systems using a working mobile application. The chatbot will also allow them to systematically assess the disadvantages and advantages of investment projects or of a system analyst's direction of activity, while increasing interest in performing practical tasks. A set of software for developing the chatbot integrated into training was installed: the Kotlin programming language, the Retrofit library for network interaction and for receiving and transmitting data, and process linking via the HTTP API. Based on the results of the study, it is noted that integrating a chatbot into the training of Masters supports the development of their professional activities, which gives them the opportunity to become competent specialists and contributes to the organization of high-quality training.
APA, Harvard, Vancouver, ISO, and other styles
4

Searcy, Stephen W., and Kalman Peleg. Adaptive Sorting of Fresh Produce. United States Department of Agriculture, August 1993. http://dx.doi.org/10.32747/1993.7568747.bard.

Full text
Abstract:
This project includes two main parts: development of a "Selective Wavelength Imaging Sensor" and an "Adaptive Classifier System" for adaptive imaging and sorting of agricultural products, respectively. Three different technologies were investigated for building a selectable-wavelength imaging sensor: diffraction gratings, tunable filters, and linear variable filters. Each technology was analyzed and evaluated as the basis for implementing the adaptive sensor. Acousto-optic tunable filters were found to be most suitable for the selective wavelength imaging sensor. Consequently, a selectable-wavelength imaging sensor was constructed and tested using the selected technology. The sensor was tested, and algorithms for multispectral image acquisition were developed. A high-speed inspection system for fresh-market carrots was built and tested. It was shown that a combination of efficient parallel processing by a DSP and a PC-based host CPU, in conjunction with a hierarchical classification system, yielded an inspection system capable of handling 2 carrots per second with a classification accuracy of more than 90%. The adaptive sorting technique was extensively investigated and conclusively demonstrated to reduce misclassification rates in comparison to conventional non-adaptive sorting. The adaptive classifier algorithm was modeled and reduced to a series of modules that can be added to any existing produce sorting machine. A simulation of the entire process was created in Matlab using a graphical user interface to promote the accessibility of the difficult theoretical subjects. Typical grade classifiers based on k-Nearest Neighbor techniques and linear discriminants were implemented. The sample histogram, estimating the cumulative distribution function (CDF), was chosen as a characterizing feature of prototype populations, whereby the Kolmogorov-Smirnov statistic was employed as a population classifier. Simulations were run on artificial data with two dimensions, four populations, and three classes. A quantitative analysis of the adaptive classifier's dependence on population separation, training set size, and stack length determined optimal values for the different parameters involved. The technique was also applied to a real produce sorting problem: an automatic machine for sorting dates by machine vision in an Israeli date packinghouse. Extensive simulations were run on actual sorting data of dates collected over a 4-month period. In all cases, the results showed a clear reduction in classification error by using the adaptive technique versus non-adaptive sorting.
APA, Harvard, Vancouver, ISO, and other styles
5

Varga, Gabriella A., Amichai Arieli, Lawrence D. Muller, Haim Tagari, Israel Bruckental, and Yair Aharoni. Effect of Rumen Available Protein, Amimo Acids and Carbohydrates on Microbial Protein Synthesis, Amino Acid Flow and Performance of High Yielding Cows. United States Department of Agriculture, August 1993. http://dx.doi.org/10.32747/1993.7568103.bard.

Full text
Abstract:
The effect of rumen-available protein, amino acids and carbohydrates on microbial protein synthesis, amino acid flow and performance of high-yielding dairy cows was studied. A significant relationship was found between the effective degradabilities of OM in feedstuffs and the in vivo ruminal OM degradation of dairy cow diets. The in situ method enabled prediction of the response of ruminal nutrient degradability to processing of energy and nitrogenous supplements. The AA profile of the rumen-undegradable protein was modified by the processing method. In a continuous culture study, total N and postruminal AA flows and bacterial efficiency were maximal at a rumen-degradable level of 65% of the CP. Responses to rumen-degradable non-structural carbohydrate (NSC) were linear up to at least 27% of DM. Higher CP flow in the abomasum was found for cows fed a diet with high ruminally degradable OM and low ruminally degradable CP. It appeared that in dairy cow diets the ratio of rumen-degradable OM to ruminally degradable CP should be at least 5:1 in order to maximize postruminal CP flow. The efficiency of microbial CP synthesis was higher for diets supplemented with 33% rumen-undegradable protein, with greater amounts of bacterial AA reaching the abomasum. Increasing ruminal carbohydrate availability by using high-moisture corn increased the proportion of propionate, postruminal nutrient flow, postruminal starch digestibility, ruminal availability of NSC, and uptake of energy substrates by the mammary gland. These modifications resulted in improved utilization of nonessential AA for milk protein synthesis and higher milk protein yield. Higher postruminal NSC digestibility and higher efficiency of milk protein production were recorded in cows fed extruded corn. Increasing feeding frequency increased the flow of N from the rumen to the blood, reduced diurnal variation in ruminal ammonia and plasma urea, and improved postruminal NSC and CP digestibility and total tract digestibilities. Milk and constituent yields increased with more frequent feeding. In a study performed in a commercial dairy herd, changes in the levels of energy and nitrogenous substrates suggested that increasing feeding frequency may improve dietary nitrogen utilization and may shift metabolism toward more gluconeogenesis. It was concluded that the efficiency of milk protein yield in high-producing cows might be improved by optimizing the ruminal and post-ruminal supplies of energy and nitrogenous substrates. Such optimization can be achieved by processing of energy and nitrogenous feedstuffs and by increasing feeding frequency. In situ data may provide means for elucidating the optimal processing conditions.
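The report's 5:1 rule for rumen-degradable OM to rumen-degradable CP lends itself to the linear programming framing of this bibliography's topic. The sketch below formulates a hypothetical least-cost ration with that ratio as a constraint; the abstract itself does not use linear programming, and all feed costs and compositions are invented for illustration.

```python
# Illustrative only: least-cost ration as a linear program with the
# >= 5:1 degradable-OM : degradable-CP ratio imposed as a constraint.
# All feed data below are hypothetical.
from scipy.optimize import linprog

cost = [0.20, 0.35]            # $/kg for two hypothetical feeds
deg_om = [0.60, 0.40]          # kg rumen-degradable OM per kg feed
deg_cp = [0.05, 0.15]          # kg rumen-degradable CP per kg feed

# Constraint 1 (ratio): sum(deg_om_i * x_i) - 5 * sum(deg_cp_i * x_i) >= 0.
# Constraint 2 (requirement): at least 1.2 kg degradable CP per day.
# linprog uses A_ub @ x <= b_ub, so both >= constraints are negated.
A_ub = [[-(deg_om[0] - 5 * deg_cp[0]), -(deg_om[1] - 5 * deg_cp[1])],
        [-deg_cp[0], -deg_cp[1]]]
b_ub = [0.0, -1.2]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x, res.fun)          # kg of each feed per day, daily cost
```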
APA, Harvard, Vancouver, ISO, and other styles
6

Hamlin, Alexandra, Erik Kobylarz, James Lever, Susan Taylor, and Laura Ray. Assessing the feasibility of detecting epileptic seizures using non-cerebral sensor. Engineer Research and Development Center (U.S.), December 2021. http://dx.doi.org/10.21079/11681/42562.

Full text
Abstract:
This paper investigates the feasibility of using non-cerebral, time-series data to detect epileptic seizures. Data were recorded from fifteen patients (7 male, 5 female, 3 not noted, mean age 36.17 yrs), five of whom had a total of seven seizures. Patients were monitored in an inpatient setting using standard video electroencephalography (vEEG), while also wearing sensors monitoring electrocardiography, electrodermal activity, electromyography, accelerometry, and audio signals (vocalizations). A systematic and detailed study was conducted to identify the sensors and the features derived from the non-cerebral sensors that contribute most significantly to separability of data acquired during seizures from non-seizure data. Post-processing of the data using linear discriminant analysis (LDA) shows that seizure data are strongly separable from non-seizure data based on features derived from the signals recorded. The mean area under the receiver operating characteristic (ROC) curve for each individual patient that experienced a seizure during data collection, calculated using LDA, was 0.9682. The features that contribute most significantly to seizure detection differ for each patient. The results show that a multimodal approach to seizure detection using the specified sensor suite is promising in detecting seizures with both sensitivity and specificity. Moreover, the study provides a means to quantify the contribution of each sensor and feature to separability. Development of a non-electroencephalography (EEG) based seizure detection device would give doctors a more accurate seizure count outside of the clinical setting, improving treatment and the quality of life of epilepsy patients.
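As an illustration of the post-processing described above, here is a minimal sketch of linear discriminant analysis scored by the area under the ROC curve; the feature matrix is simulated, not the study's recordings.

```python
# Minimal sketch (simulated data, not the study's): LDA on multimodal
# feature vectors, scored by ROC AUC as in the post-processing above.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical features derived from ECG, EDA, EMG, accelerometry, audio.
X_seizure = rng.normal(1.0, 1.0, size=(70, 10))
X_normal = rng.normal(0.0, 1.0, size=(700, 10))
X = np.vstack([X_seizure, X_normal])
y = np.r_[np.ones(70), np.zeros(700)]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
scores = lda.decision_function(X_te)      # signed distance to the boundary
print("ROC AUC:", roc_auc_score(y_te, scores))
```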
APA, Harvard, Vancouver, ISO, and other styles
7

Modlo, Yevhenii O., Serhiy O. Semerikov, Stanislav L. Bondarevskyi, Stanislav T. Tolmachev, Oksana M. Markova, and Pavlo P. Nechypurenko. Methods of using mobile Internet devices in the formation of the general scientific component of bachelor in electromechanics competency in modeling of technical objects. [б. в.], February 2020. http://dx.doi.org/10.31812/123456789/3677.

Full text
Abstract:
An analysis of the experience of professional training of bachelors of electromechanics in Ukraine and abroad shows that one of the leading trends in its modernization is the synergistic integration of various engineering branches (mechanical, electrical, electronic engineering and automation) into mechatronics for the design, manufacture, operation and maintenance of electromechanical equipment. Teaching mechatronics provides for the meaningful integration of the various disciplines of professional and practical training of bachelors of electromechanics, based on the concept of modeling, and the technological integration of various organizational forms and teaching methods, based on the concept of mobility. Within this approach, the leading learning tools for bachelors of electromechanics are mobile Internet devices (MID): multimedia mobile devices that provide wireless access to information and communication Internet services for collecting, organizing, storing, processing, transmitting and presenting all kinds of messages and data. The authors describe the main possibilities of using MID in learning: ensuring equal access to education, personalized learning, instant feedback and evaluation of learning outcomes, mobile learning, productive use of classroom time, creation of mobile learning communities, support for situated learning, development of continuous seamless learning, bridging the gap between formal and informal learning, minimizing educational disruption in conflict and disaster areas, assisting learners with disabilities, improving the quality of communication and institutional management, and maximizing cost-efficiency. Bachelor of electromechanics competency in modeling of technical objects is a personal and vocational ability comprising a system of knowledge, skills, experience in learning and research activities on modeling mechatronic systems, and a positive value attitude towards it; the bachelor of electromechanics should be ready and able to use methods and software/hardware modeling tools for process analysis, system synthesis, and evaluation of their reliability and effectiveness in solving practical problems in the professional field. The competency structure of the bachelor of electromechanics in the modeling of technical objects comprises three groups of competencies: general scientific, general professional and specialized professional. The technique of using MID in teaching bachelors of electromechanics the modeling of technical objects is implemented through partial methods for using MID in the formation of the general scientific component of this competency, illustrated by the academic disciplines “Higher mathematics”, “Computers and programming”, “Engineering mechanics” and “Electrical machines”. The leading tools for forming the general scientific component of bachelor in electromechanics competency in modeling of technical objects are mobile augmented reality tools (to visualize the objects' structure and modeling results), mobile computer mathematical systems (universal tools used at all stages of modeling learning), cloud-based spreadsheets (as modeling tools) and text editors (to write program descriptions of models), mobile computer-aided design systems (to create and view the physical properties of models of technical objects) and mobile communication tools (to organize joint activity in modeling).
APA, Harvard, Vancouver, ISO, and other styles
8

Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, July 1996. http://dx.doi.org/10.32747/1996.7613033.bard.

Full text
Abstract:
The objectives of this project were to develop procedures and models, based on neural networks, for quality sorting of agricultural produce. Two research teams, one at Purdue University and the other in Israel, coordinated their research efforts on different aspects of each objective, utilizing both melons and tomatoes as case studies. At Purdue: An expert system was developed to measure variances in human grading. Data were acquired from eight sensors: vision, two firmness sensors (destructive and nondestructive), chlorophyll from fluorescence, color sensor, electronic sniffer for odor detection, refractometer and a scale (mass). Data were analyzed and provided input for five classification models. Chlorophyll from fluorescence was found to give the best estimation of ripeness stage, while the combination of machine vision and firmness from impact performed best for quality sorting. A new algorithm was developed to estimate and minimize training set size for supervised classification. A new criterion was established for choosing a training set such that a recurrent auto-associative memory neural network is stabilized. Moreover, this method provides for rapid and accurate updating of the classifier over growing seasons, production environments and cultivars. Different classification approaches (parametric and non-parametric) for grading were examined. Statistical methods were found to be as accurate as neural networks in grading. Classification models by voting did not enhance the classification significantly. A hybrid model that incorporated heuristic rules and either a numerical classifier or a neural network was found to be superior in classification accuracy, with half the processing required by the numerical classifier or neural network alone. In Israel: A multi-sensing approach utilizing non-destructive sensors was developed. Shape, color, stem identification, surface defects and bruises were measured using a color image processing system. Flavor parameters (sugar, acidity, volatiles) and ripeness were measured using a near-infrared system and an electronic sniffer. Mechanical properties were measured using three sensors: drop impact, resonance frequency and cyclic deformation. Classification algorithms for quality sorting of fruit based on multi-sensory data were developed and implemented. The algorithms included a dynamic artificial neural network, a back-propagation neural network and multiple linear regression. Results indicated that classification based on multiple sensors may be applied in real-time sorting and can improve overall classification. Advanced image processing algorithms were developed for shape determination, bruise and stem identification, and general color and color homogeneity. An unsupervised method was developed to extract the necessary vision features. The primary advantage of the algorithms developed is their ability to learn to determine the visual quality of almost any fruit or vegetable, with no need for specific modification and no a priori knowledge. Moreover, since no assumption is made as to the type of blemish to be characterized, the algorithm is capable of distinguishing between stems and bruises. This enables sorting of fruit without knowing the fruit's orientation. A new algorithm for on-line clustering of data was developed. The algorithm's adaptability is designed to overcome some of the difficulties encountered when incrementally clustering sparse data, and it preserves information even under memory constraints. Large quantities of data (many images) of high dimensionality (due to multiple sensors), with new information arriving incrementally (a function of the temporal dynamics of any natural process), can now be processed. Furthermore, since the learning is done on-line, it can be implemented in real time. The methodology developed was tested in determining the external quality of tomatoes based on visual information. An improved, stable model for color sorting that does not require recalibration each season was developed for color determination. Excellent classification results were obtained for both color and firmness classification. Results indicated that maturity classification can be obtained using a drop-impact and a vision sensor in order to predict the storability and marketability of harvested fruits. In conclusion: We have been able to define quantitatively the critical parameters in the quality sorting and grading of both fresh-market cantaloupes and tomatoes. We have been able to accomplish this using nondestructive measurements, in a manner consistent with expert human grading and in accordance with market acceptance. This research constructed and used large databases of both commodities for comparative evaluation and optimization of expert system, statistical and/or neural network models. The models developed in this research were successfully tested and should be applicable to a wide range of other fruits and vegetables. These findings are valuable for the development of on-line grading and sorting of agricultural produce through the incorporation of multiple measurement inputs that rapidly define quality in an automated manner consistent with human graders and inspectors.
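To make the reported comparison concrete, the sketch below trains a small back-propagation neural network and a linear baseline on hypothetical fused sensor features. Logistic regression stands in for the multiple linear regression used in the project, since the task here is classification; none of the data are from the project.

```python
# Hedged illustration: back-propagation neural network vs. a linear
# baseline for multi-sensor produce grading, on synthetic features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Hypothetical fused features: color, firmness, sugar, acidity, mass, ...
X = rng.normal(size=(300, 8))
y = (X[:, :3].mean(axis=1) > 0).astype(int)   # stand-in quality grades

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
lin = LogisticRegression()

print("MLP accuracy:   ", cross_val_score(mlp, X, y, cv=5).mean())
print("Linear accuracy:", cross_val_score(lin, X, y, cv=5).mean())
```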
APA, Harvard, Vancouver, ISO, and other styles
9

Alchanatis, Victor, Stephen W. Searcy, Moshe Meron, W. Lee, G. Y. Li, and A. Ben Porath. Prediction of Nitrogen Stress Using Reflectance Techniques. United States Department of Agriculture, November 2001. http://dx.doi.org/10.32747/2001.7580664.bard.

Full text
Abstract:
Commercial agriculture has come under increasing pressure to reduce nitrogen fertilizer inputs in order to minimize potential nonpoint-source pollution of ground and surface waters. This has resulted in increased interest in site-specific fertilizer management. One way to solve pollution problems would be to determine crop nutrient needs in real time, using remote detection, and to regulate the fertilizer dispensed by an applicator. By detecting actual plant needs, only the additional nitrogen necessary to optimize production would be supplied. This research aimed to develop techniques for real-time assessment of the nitrogen status of corn using a mobile sensor, with the potential to regulate nitrogen application based on data from that sensor. Specifically, the research first attempted to determine the system parameters necessary to optimize reflectance spectra of corn plants as a function of growth stage, chlorophyll and nitrogen status. In addition, an adaptable multispectral sensor and a signal processing algorithm were developed to provide real-time, in-field assessment of corn nitrogen status. Spectral characteristics of corn leaf reflectance were investigated in order to estimate the nitrogen status of the plants, using a commercial laboratory spectrometer. Statistical models relating leaf N and reflectance spectra were developed for both greenhouse and field plots. A basis was established for assessing nitrogen status using spectral reflectance from plant canopies. The combined effect of variety and N treatment was studied by measuring the reflectance of three varieties of different characteristic leaf color under five different N treatments. The variety effect on the reflectance at 552 nm was not significant (α = 0.01), while canonical discriminant analysis showed promising results for distinguishing variety and N treatment using spectral reflectance. Ambient illumination was found inappropriate for reliable one-beam spectral reflectance measurement of the plant canopy due to the strong spectral lines of sunlight; artificial light was therefore used. For in-field N status measurement, a dark chamber was constructed to house the sensor along with the artificial illumination. Two different approaches were tested: (i) use of spatially scattered artificial light, and (ii) use of a collimated artificial light beam. It was found that the collimated beam, along with a proper design of the sensor-beam geometry, yielded the best results in terms of reducing the noise due to variable background and maintaining the same distance from the sensor to the sample point of the canopy. A multispectral sensor assembly based on a linear variable filter was designed, constructed and tested. The sensor assembly combined two sensors to cover the range of 400 to 1100 nm, a mounting frame, and a field data acquisition system. Using the mobile dark chamber and the developed sensor, as well as an off-the-shelf sensor, the in-field nitrogen status of the plant canopy was measured. Statistical analysis of the acquired in-field data showed that the nitrogen status of the corn leaves can be predicted with a SEP (standard error of prediction) of 0.27%. The stage of maturity of the crop affected the relationship between the reflectance spectrum and the nitrogen status of the leaves; specifically, the best prediction results were obtained when a separate model was used for each maturity stage. In-field assessment of the nitrogen status of corn leaves was successfully carried out by non-contact measurement of the reflectance spectrum. This technology is now mature enough to be incorporated in field implements for on-line control of fertilizer application.
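The SEP figure quoted above is the bias-corrected standard deviation of the prediction residuals. A minimal sketch of fitting a linear model from reflectance spectra to leaf nitrogen and computing SEP, assuming simulated spectra rather than the project's measurements:

```python
# Sketch under assumed data: linear model relating reflectance spectra
# to leaf nitrogen, evaluated with the standard error of prediction (SEP).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Hypothetical reflectance spectra (50 wavelength bands) and leaf N (%).
spectra = rng.uniform(0.05, 0.6, size=(120, 50))
leaf_n = (1.5 + 4.0 * spectra[:, 20] - 2.0 * spectra[:, 35]
          + rng.normal(0, 0.1, 120))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, leaf_n, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)

residuals = y_te - model.predict(X_te)
sep = np.sqrt(np.mean((residuals - residuals.mean()) ** 2))  # bias-corrected
print(f"SEP: {sep:.3f} % N")
```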
APA, Harvard, Vancouver, ISO, and other styles