Academic literature on the topic 'Markov Decision Process Planning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Markov Decision Process Planning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Markov Decision Process Planning"

1

Pinder, Jonathan P. "An Approximation of a Markov Decision Process for Resource Planning." Journal of the Operational Research Society 46, no. 7 (July 1995): 819. http://dx.doi.org/10.2307/2583966.

2

Pinder, Jonathan P. "An Approximation of a Markov Decision Process for Resource Planning." Journal of the Operational Research Society 46, no. 7 (July 1995): 819–30. http://dx.doi.org/10.1057/jors.1995.115.

3

Mouaddib, Abdel-Illah. "Vector-Value Markov Decision Process for multi-objective stochastic path planning." International Journal of Hybrid Intelligent Systems 9, no. 1 (March 13, 2012): 45–60. http://dx.doi.org/10.3233/his-2012-0146.

4

Naguleswaran, Sanjeev, and Langford B. White. "Planning without state space explosion: Petri net to Markov decision process." International Transactions in Operational Research 16, no. 2 (March 2009): 243–55. http://dx.doi.org/10.1111/j.1475-3995.2009.00674.x.

5

Schell, Greggory J., Wesley J. Marrero, Mariel S. Lavieri, Jeremy B. Sussman, and Rodney A. Hayward. "Data-Driven Markov Decision Process Approximations for Personalized Hypertension Treatment Planning." MDM Policy & Practice 1, no. 1 (July 2016): 238146831667421. http://dx.doi.org/10.1177/2381468316674214.

6

Nguyen, Truong-Huy, David Hsu, Wee-Sun Lee, Tze-Yun Leong, Leslie Kaelbling, Tomas Lozano-Perez, and Andrew Grant. "CAPIR: Collaborative Action Planning with Intention Recognition." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 7, no. 1 (October 9, 2011): 61–66. http://dx.doi.org/10.1609/aiide.v7i1.12425.

Abstract:
We apply decision theoretic techniques to construct non-player characters that are able to assist a human player in collaborative games. The method is based on solving Markov decision processes, which can be difficult when the game state is described by many variables. To scale to more complex games, the method allows decomposition of a game task into subtasks, each of which can be modelled by a Markov decision process. Intention recognition is used to infer the subtask that the human is currently performing, allowing the helper to assist the human in performing the correct task. Experiments show that the method can be effective, giving near-human level performance in helping a human in a collaborative game.
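For illustration only (this sketch is not the cited authors' code; the subtask names, policies, and probabilities are invented): intention recognition over subtask MDPs is often realized as a Bayesian posterior over subtasks, updated from the human's observed actions with each subtask's policy serving as a likelihood model.

```python
# Hypothetical sketch of Bayesian intention recognition over subtask MDPs.
# Subtask names, policies, and probabilities are toy placeholders.

def update_subtask_belief(belief, state, human_action, subtask_policies, eps=0.05):
    """Posterior over subtasks after observing the human act in `state`.

    belief: dict subtask -> prior probability
    subtask_policies: dict subtask -> callable(state) -> dict action -> probability
    eps: small noise so a single action never rules a subtask out completely
    """
    posterior = {}
    for subtask, prior in belief.items():
        action_probs = subtask_policies[subtask](state)
        likelihood = (1 - eps) * action_probs.get(human_action, 0.0) + eps
        posterior[subtask] = prior * likelihood
    total = sum(posterior.values())
    return {k: v / total for k, v in posterior.items()}


# Toy example: the human moves "left", which the 'fetch_key' subtask favours,
# so its posterior probability rises and the helper assists with that subtask.
policies = {
    "fetch_key": lambda s: {"left": 0.8, "right": 0.2},
    "open_door": lambda s: {"left": 0.1, "right": 0.9},
}
belief = {"fetch_key": 0.5, "open_door": 0.5}
print(update_subtask_belief(belief, "room_1", "left", policies))
```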
7

Hamasha, Mohammad M., and George Rumbe. "Determining optimal policy for emergency department using Markov decision process." World Journal of Engineering 14, no. 5 (October 2, 2017): 467–72. http://dx.doi.org/10.1108/wje-12-2016-0148.

Abstract:
Purpose: Emergency departments (EDs) face a capacity-planning challenge caused by high patient demand and limited resources. Inadequate resources lead to longer delays, reduced quality of care and higher health-care costs. Such circumstances call for operational research models, such as the Markov decision process (MDP), to enable better decision-making. The purpose of this paper is to demonstrate the applicability and use of the MDP in the ED. Design/methodology/approach: The MDP provides insight into system operations across the different system states (e.g. from very busy to unoccupied) to support optimal assignment of resources at reduced cost. A descriptive health-system model based on the MDP is presented, and a numerical example illustrates its suitability for determining an optimal policy. Findings: Faced with numerous decisions, hospital managers must ensure that an appropriate technique is used to minimize undesired outcomes. The MDP is shown to be a robust approach that supports critical decision-making processes. It also provides insight into the associated costs, enabling hospital managers to allocate resources efficiently, ensuring quality health care and increased throughput while minimizing costs. Originality/value: Applying the MDP to the ED is a novel and promising starting point; the MDP is a powerful tool for decision-making in critical situations, and the ED needs such a tool.
8

Yordanova, Veronika, Hugh Griffiths, and Stephen Hailes. "Rendezvous planning for multiple autonomous underwater vehicles using a Markov decision process." IET Radar, Sonar & Navigation 11, no. 12 (December 2017): 1762–69. http://dx.doi.org/10.1049/iet-rsn.2017.0098.

9

Hai-Feng, Jiu, Chen Yu, Deng Wei, and Pang Shuo. "Underwater chemical plume tracing based on partially observable Markov decision process." International Journal of Advanced Robotic Systems 16, no. 2 (March 1, 2019): 172988141983187. http://dx.doi.org/10.1177/1729881419831874.

Abstract:
Chemical plume tracing with an autonomous underwater vehicle uses chemical cues as guidance to navigate and search unknown environments. To address the key problem of tracing and locating the source, this article proposes a path-planning strategy based on a partially observable Markov decision process (POMDP) algorithm and an artificial potential field algorithm. The POMDP algorithm is used to construct a source-likelihood map and update it in real time with environmental information from the sensors on the autonomous underwater vehicle in the search area. The artificial potential field algorithm uses the source-likelihood map to plan an accurate tracing path and guide the autonomous underwater vehicle along it until the source is detected. Simulation experiments show that the proposed algorithms perform well and are suitable for chemical plume tracing with an autonomous underwater vehicle, achieving a higher success rate and better stability than a bionic method.
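For illustration only (the grid, sensor model, and parameters below are assumptions, not the cited implementation): the source-likelihood map described in the abstract is essentially a belief over candidate source locations, updated by Bayes' rule after each detection or non-detection.

```python
import numpy as np

# Illustrative Bayesian update of a source-likelihood map over a grid.
# detection_prob is a toy sensor model: probability of detecting chemical at
# the vehicle's position given that the source sits in cell (i, j).

def update_source_belief(belief, vehicle_pos, detected, detection_prob):
    """One Bayes step; belief is a 2-D array over candidate source cells."""
    likelihood = np.empty_like(belief)
    for i in range(belief.shape[0]):
        for j in range(belief.shape[1]):
            p_detect = detection_prob(vehicle_pos, (i, j))
            likelihood[i, j] = p_detect if detected else 1.0 - p_detect
    posterior = belief * likelihood
    return posterior / posterior.sum()

def toy_detection_prob(vehicle_pos, source_pos, spread=2.0):
    # Detection is more likely the closer the vehicle is to the hypothesised source.
    d = np.hypot(vehicle_pos[0] - source_pos[0], vehicle_pos[1] - source_pos[1])
    return float(np.exp(-d / spread))

belief = np.full((10, 10), 1.0 / 100)  # uniform prior over a 10 x 10 search area
belief = update_source_belief(belief, vehicle_pos=(3, 3), detected=True,
                              detection_prob=toy_detection_prob)
print(np.unravel_index(belief.argmax(), belief.shape))  # most likely source cell
```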
10

Lin, Yong, Xingjia Lu, and Fillia Makedon. "Approximate Planning in POMDPs with Weighted Graph Models." International Journal on Artificial Intelligence Tools 24, no. 04 (August 2015): 1550014. http://dx.doi.org/10.1142/s0218213015500141.

Abstract:
Markov decision process (MDP) based heuristic algorithms have been considered simple, fast, but imprecise solutions for partially observable Markov decision processes (POMDPs). The imprecision stems mainly from how belief points are approximated. We use weighted graphs to model the state space and the belief space, allowing a detailed analysis of the MDP heuristic algorithm. As a result, we provide the prerequisite conditions for building a robust belief graph. We further introduce a dynamic mechanism to manage the belief space in the belief graph, improving efficiency and decreasing space complexity. Experimental results indicate that our approach is fast and produces high-quality solutions for POMDPs.

Dissertations / Theses on the topic "Markov Decision Process Planning"

1

Geng, Na. "Combinatorial optimization and Markov decision process for planning MRI examinations." PhD thesis, Saint-Etienne, EMSE, 2010. http://tel.archives-ouvertes.fr/tel-00566257.

Abstract:
This research is motivated by our collaboration with a large French university teaching hospital to reduce the Length of Stay (LoS) of stroke patients treated in the neurovascular department. Quick diagnosis is critical for stroke patients but relies on expensive and heavily used imaging facilities such as MRI (Magnetic Resonance Imaging) scanners. It is therefore very important for the neurovascular department to reduce patient LoS by reducing waiting times for imaging examinations. From the neurovascular department's perspective, this thesis proposes a new MRI examination reservation process to reduce patient waiting times without degrading MRI utilization. The service provider, i.e., the imaging department, reserves each week a certain number of appropriately distributed contracted time slots (CTS) for the neurovascular department to ensure quick MRI examination of stroke patients. In addition to CTS, stroke patients can still obtain MRI time slots through regular reservation (RTS). The thesis first proposes a stochastic programming model to simultaneously determine the contract decision, i.e., the number of CTS and their distribution, and the patient assignment policy that assigns patients to either CTS or RTS. To solve this problem, structural properties of the optimal patient assignment policy for a given contract are proved by an average-cost Markov decision process (MDP) approach. The contract is determined by a Monte Carlo approximation approach and then improved by local search. Computational experiments show that the proposed algorithms solve the model efficiently. The new reservation process greatly reduces the average waiting time of stroke patients; at the same time, some CTS go unused for lack of patients. To reduce the number of unused CTS, we further explore the possibility of advance cancellation of CTS. Structural properties of optimal control policies for one-day and two-day advance cancellation are established separately via an average-cost MDP approach, with appropriate modeling and advanced convexity concepts used in the control of queueing systems. Computational experiments show that appropriate advance cancellation of CTS greatly reduces the number of unused CTS while keeping waiting times nearly the same.
2

Dai, Peng. "FASTER DYNAMIC PROGRAMMING FOR MARKOV DECISION PROCESSES." UKnowledge, 2007. http://uknowledge.uky.edu/gradschool_theses/428.

Abstract:
Markov decision processes (MDPs) are a general framework used by Artificial Intelligence (AI) researchers to model decision theoretic planning problems. Solving real world MDPs has been a major and challenging research topic in the AI literature. This paper discusses two main groups of approaches in solving MDPs. The first group of approaches combines the strategies of heuristic search and dynamic programming to expedite the convergence process. The second makes use of graphical structures in MDPs to decrease the effort of classic dynamic programming algorithms. Two new algorithms proposed by the author, MBLAO* and TVI, are described here.
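For readers unfamiliar with the dynamic-programming backup that both groups of approaches accelerate, here is a minimal value iteration sketch on a toy two-state MDP; the transition table and discount factor are illustrative assumptions, not material from the thesis.

```python
# Minimal value iteration on a toy MDP (illustrative only).
# mdp[state][action] = list of (probability, next_state, reward) triples.
mdp = {
    "s0": {"stay": [(1.0, "s0", 0.0)],
           "go":   [(0.8, "s1", 5.0), (0.2, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 1.0)],
           "go":   [(1.0, "s0", 0.0)]},
}
gamma, tol = 0.95, 1e-6

values = {s: 0.0 for s in mdp}
while True:
    delta = 0.0
    for s, actions in mdp.items():
        # Bellman backup: best expected one-step return plus discounted value.
        q = {a: sum(p * (r + gamma * values[s2]) for p, s2, r in outcomes)
             for a, outcomes in actions.items()}
        best = max(q.values())
        delta = max(delta, abs(best - values[s]))
        values[s] = best
    if delta < tol:
        break

# Greedy policy extracted from the converged value function.
policy = {s: max(mdp[s], key=lambda a: sum(p * (r + gamma * values[s2])
                                           for p, s2, r in mdp[s][a]))
          for s in mdp}
print(values, policy)
```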
3

Alizadeh, Pegah. "Elicitation and planning in Markov decision processes with unknown rewards." Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCD011/document.

Abstract:
Markov decision processes (MDPs) are models for solving sequential decision problems where a user interacts with the environment and adapts her policy by taking numerical reward signals into account. The solution of an MDP reduces to formulating the user's behavior in the environment with a policy function that specifies which action to choose in each situation. In many real-world decision problems, users have various preferences, and therefore the gains of actions on states differ and should be re-decoded for each user. In this dissertation, we are interested in solving MDPs for users with different preferences. We use a model named Vector-valued MDP (VMDP) with vector rewards. We propose a propagation-search algorithm that assigns a vector-value function to each policy and identifies each user with a preference vector on the existing set of preferences, where the preference vector satisfies the user's priorities. Since the user's preference vector is not known, we present several methods for solving VMDPs while approximating the user's preference vector. We introduce two algorithms that reduce the number of queries needed to find the optimal policy of a user: 1) a propagation-search algorithm, where we propagate a set of possible optimal policies for the given MDP without knowing the user's preferences; 2) an interactive value iteration algorithm (IVI) on VMDPs, namely the Advantage-based Value Iteration (ABVI) algorithm, which uses clustering and regrouping of advantages. We also demonstrate how the ABVI algorithm works properly for two different types of users: confident and uncertain. We finally work on a minimax regret approximation method for finding the optimal policy with respect to limited information about the user's preferences. All possible objectives in the system are simply bounded between an upper and a lower bound, while the system is not aware of the user's preferences among them. We propose a heuristic minimax regret approximation method for solving MDPs with unknown rewards that is faster and less complex than existing methods in the literature.
4

Ernsberger, Timothy S. "Integrating Deterministic Planning and Reinforcement Learning for Complex Sequential Decision Making." Case Western Reserve University School of Graduate Studies / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=case1354813154.

5

Al, Sabban Wesam H. "Autonomous vehicle path planning for persistence monitoring under uncertainty using Gaussian based Markov decision process." Thesis, Queensland University of Technology, 2015. https://eprints.qut.edu.au/82297/1/Wesam%20H_Al%20Sabban_Thesis.pdf.

Abstract:
One of the main challenges facing online and offline path planners is uncertainty in the magnitude and direction of environmental energy, which is dynamic, time-varying, and hard to forecast. This thesis develops an artificial-intelligence method that lets a mobile robot learn from historical or forecasted data on the environmental energy available in the area of interest, enabling persistent monitoring under uncertainty with the developed algorithm.
6

Poulin, Nolan. "Proactive Planning through Active Policy Inference in Stochastic Environments." Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-theses/1267.

Abstract:
In multi-agent Markov Decision Processes, a controllable agent must perform optimal planning in a dynamic and uncertain environment that includes another unknown and uncontrollable agent. Given a task specification for the controllable agent, its ability to complete the task can be impeded by an inaccurate model of the intent and behaviors of other agents. In this work, we introduce an active policy inference algorithm that allows a controllable agent to infer a policy of the environmental agent through interaction. Active policy inference is data-efficient and is particularly useful when data are time-consuming or costly to obtain. The controllable agent synthesizes an exploration-exploitation policy that incorporates the knowledge learned about the environment's behavior. Whenever possible, the agent also tries to elicit behavior from the other agent to improve the accuracy of the environmental model. This is done by mapping the uncertainty in the environmental model to a bonus reward, which helps elicit the most informative exploration, and allows the controllable agent to return to its main task as fast as possible. Experiments demonstrate the improved sample efficiency of active learning and the convergence of the policy for the controllable agents.
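For illustration only (a plausible realization, not the thesis' algorithm; the scale factor and state names are invented): mapping model uncertainty to a bonus reward can be as simple as a count-based bonus that decays as the other agent's behavior is observed more often.

```python
import math
from collections import defaultdict

# Illustrative uncertainty bonus: states in which the other agent has been
# observed less often receive a larger reward bonus, steering the controllable
# agent toward informative interactions before returning to its main task.

visit_counts = defaultdict(int)  # times the other agent was observed in a state
BONUS_SCALE = 1.0                # trade-off between task reward and information

def shaped_reward(task_reward, observed_state):
    """Task reward plus a bonus that shrinks as the model becomes certain."""
    visit_counts[observed_state] += 1
    bonus = BONUS_SCALE / math.sqrt(visit_counts[observed_state])
    return task_reward + bonus

print(shaped_reward(0.0, "corridor"))  # first visit: large bonus
print(shaped_reward(0.0, "corridor"))  # repeated visits: bonus decays
```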
7

Pokharel, Gaurab. "Increasing the Value of Information During Planning in Uncertain Environments." Oberlin College Honors Theses / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=oberlin1624976272271825.

8

Stárek, Ivo. "Plánování cesty robota pomocí dynamického programování." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2009. http://www.nusl.cz/ntk/nusl-228655.

Abstract:
This work addresses robot path planning using the principles of dynamic programming in a discrete state space. The theoretical part surveys the current state of the field and the principle of applying a Markov decision process to path planning. The practical part covers the implementation of two algorithms based on MDP principles.
9

Pinheiro, Paulo Gurgel 1983. "Localização multirrobo cooperativa com planejamento." [s.n.], 2009. http://repositorio.unicamp.br/jspui/handle/REPOSIP/276155.

Abstract:
Advisor: Jacques Wainer. Master's thesis, Universidade Estadual de Campinas, Instituto de Computação, 2009.
In a cooperative multi-robot localization problem, a group of robots operates in an environment in which the exact location of each robot is unknown. In this scenario, only a probability distribution indicates the chance of a robot being in a particular state. The robots must therefore move through the environment and generate new observations, which are shared to compute new estimates. Many studies have focused on probabilistic techniques, communication models and detection models to solve the localization problem; however, the movement of the robots is generally defined by random actions, which can produce observations that are useless for improving the estimate. This work describes a proposal for multi-robot localization with support for action planning. The objective is to present a model in which the actions performed by the robots are defined by policies. By choosing the best action to perform, a robot obtains more useful information from its internal and external sensors and estimates the poses more quickly. The proposed model, called the Model of Planned Localization (MPL), uses POMDPs to model the localization problems and specific algorithms to generate policies. Markov localization is used as the probabilistic localization technique, and versions of detection models and an information-propagation model were implemented. A simulator for multi-robot localization problems was developed, in which experiments compared the proposed model to a model that does not plan its actions. The results show that the proposed model estimates the robots' poses in fewer steps, being significantly more efficient than the compared model.
10

Junyent, Barbany Miquel. "Width-Based Planning and Learning." Doctoral thesis, Universitat Pompeu Fabra, 2021. http://hdl.handle.net/10803/672779.

Abstract:
Optimal sequential decision making is a fundamental problem to many diverse fields. In recent years, Reinforcement Learning (RL) methods have experienced unprecedented success, largely enabled by the use of deep learning models, reaching human-level performance in several domains, such as the Atari video games or the ancient game of Go. In contrast to the RL approach in which the agent learns a policy from environment interaction samples, ignoring the structure of the problem, the planning approach for decision making assumes known models for the agent's goals and domain dynamics, and focuses on determining how the agent should behave to achieve its objectives. Current planners are able to solve problem instances involving huge state spaces by precisely exploiting the problem structure that is defined in the state-action model. In this work we combine the two approaches, leveraging fast and compact policies from learning methods and the capacity to perform lookaheads in combinatorial problems from planning methods. In particular, we focus on a family of planners called width-based planners, that has demonstrated great success in recent years due to its ability to scale independently of the size of the state space. The basic algorithm, Iterated Width (IW), was originally proposed for classical planning problems, where a model for state transitions and goals, represented by sets of atoms, is fully determined. Nevertheless, width-based planners do not require a fully defined model of the environment, and can be used with simulators. For instance, they have been recently applied in pixel domains such as the Atari games. Despite its success, IW is purely exploratory, and does not leverage past reward information. Furthermore, it requires the state to be factored into features that need to be pre-defined for the particular task. Moreover, running the algorithm with a width larger than 1 in practice is usually computationally intractable, prohibiting IW from solving higher width problems. We begin this dissertation by studying the complexity of width-based methods when the state space is defined by multivalued features, as in the RL setting, instead of Boolean atoms. We provide a tight upper bound on the amount of nodes expanded by IW, as well as overall algorithmic complexity results. In order to deal with more challenging problems (i.e., those with a width higher than 1), we present a hierarchical algorithm that plans at two levels of abstraction. A high-level planner uses abstract features that are incrementally discovered from low-level pruning decisions. We illustrate this algorithm in classical planning PDDL domains as well as in pixel-based simulator domains. In classical planning, we show how IW(1) at two levels of abstraction can solve problems of width 2. To leverage past reward information, we extend width-based planning by incorporating an explicit policy in the action selection mechanism. Our method, called π-IW, interleaves width-based planning and policy learning using the state-actions visited by the planner. The policy estimate takes the form of a neural network and is in turn used to guide the planning step, thus reinforcing promising paths. Notably, the representation learned by the neural network can be used as a feature space for the width-based planner without degrading its performance, thus removing the requirement of pre-defined features for the planner. 
We compare π-IW with previous width-based methods and with AlphaZero, a method that also interleaves planning and learning, in simple environments, and show that π-IW has superior performance. We also show that the π-IW algorithm outperforms previous width-based methods in the pixel setting of Atari games suite. Finally, we show that the proposed hierarchical IW can be seamlessly integrated with our policy learning scheme, resulting in an algorithm that outperforms flat IW-based planners in Atari games with sparse rewards.
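For illustration only (the toy search problem below is an assumption, not the dissertation's code): the novelty test at the heart of IW(1) can be sketched as a breadth-first search that prunes every generated state none of whose atoms (features) is new.

```python
from collections import deque

def iw1(initial_state, successors, atoms, is_goal):
    """IW(1) sketch: breadth-first search pruning states with no novel atom.

    successors(state) -> iterable of (action, next_state)
    atoms(state)      -> iterable of hashable features describing the state
    """
    seen_atoms = set(atoms(initial_state))
    queue = deque([(initial_state, [])])
    while queue:
        state, plan = queue.popleft()
        if is_goal(state):
            return plan
        for action, nxt in successors(state):
            novel = [a for a in atoms(nxt) if a not in seen_atoms]
            if not novel:
                continue            # prune: nothing new at width 1
            seen_atoms.update(novel)
            queue.append((nxt, plan + [action]))
    return None


# Toy example: move right along a line of cells until reaching cell 3.
plan = iw1(0,
           successors=lambda s: [("right", s + 1), ("left", s - 1)],
           atoms=lambda s: [("at", s)],
           is_goal=lambda s: s == 3)
print(plan)  # ['right', 'right', 'right']
```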

Books on the topic "Markov Decision Process Planning"

1

Mausam, and Andrey Kolobov. Planning with Markov Decision Processes. Cham: Springer International Publishing, 2012. http://dx.doi.org/10.1007/978-3-031-01559-5.

2

Auckland Regional Council, Resource Management Division, Regional Planning Department, Strategic Planning Section. The Regional strategic decision-making process: [report]. [Auckland]: Auckland Regional Council, 1990.

3

United States Marine Corps. Marine Corps planning process. Washington, D.C.: Headquarters, U.S. Marine Corps, 2010.

4

United States Marine Corps, ed. Marine Corps planning process. Washington, D.C.: Headquarters, U.S. Marine Corps, 2000.

5

Curriculum improvement: Decision making and process. 8th ed. Boston: Allyn and Bacon, 1992.

6

Doll, Ronald C. Curriculum improvement: Decision making and process. 9th ed. Boston: Allyn and Bacon, 1996.

7

Curriculum improvement: Decision making and process. 7th ed. Boston: Allyn and Bacon, 1989.

8

Curriculum improvement: Decision making and process. 6th ed. Boston: Allyn and Bacon, 1986.

9

Supply chain planning for the process industry. Wilmington, DE: Supply Chain Consultants, Inc., 2009.

10

The analytic hierarchy process: Planning, priority setting, resource allocation. 2nd ed. Pittsburgh: RWS publications, 1990.


Book chapters on the topic "Markov Decision Process Planning"

1

Thiébaux, Sylvie, and Olivier Buffet. "Operations Planning." In Markov Decision Processes in Artificial Intelligence, 425–52. Hoboken, NJ USA: John Wiley & Sons, Inc., 2013. http://dx.doi.org/10.1002/9781118557426.ch15.

2

Mausam, and Andrey Kolobov. "Fundamental Algorithms." In Planning with Markov Decision Processes, 31–58. Cham: Springer International Publishing, 2012. http://dx.doi.org/10.1007/978-3-031-01559-5_3.

3

Mausam, and Andrey Kolobov. "MDPs." In Planning with Markov Decision Processes, 7–29. Cham: Springer International Publishing, 2012. http://dx.doi.org/10.1007/978-3-031-01559-5_2.

4

Mausam, and Andrey Kolobov. "Advanced Notes." In Planning with Markov Decision Processes, 143–62. Cham: Springer International Publishing, 2012. http://dx.doi.org/10.1007/978-3-031-01559-5_7.

5

Mausam, and Andrey Kolobov. "Symbolic Algorithms." In Planning with Markov Decision Processes, 83–95. Cham: Springer International Publishing, 2012. http://dx.doi.org/10.1007/978-3-031-01559-5_5.

6

Mausam, and Andrey Kolobov. "Heuristic Search Algorithms." In Planning with Markov Decision Processes, 59–82. Cham: Springer International Publishing, 2012. http://dx.doi.org/10.1007/978-3-031-01559-5_4.

7

Naveed, Munir, Andrew Crampton, Diane Kitchin, and Lee McCluskey. "Real-Time Path Planning using a Simulation-Based Markov Decision Process." In Research and Development in Intelligent Systems XXVIII, 35–48. London: Springer London, 2011. http://dx.doi.org/10.1007/978-1-4471-2318-7_3.

8

Veeramani, Satheeshkumar, Sreekumar Muthuswamy, Keerthi Sagar, and Matteo Zoppi. "Multi-Head Path Planning of SwarmItFIX Agents: A Markov Decision Process Approach." In Advances in Mechanism and Machine Science, 2237–47. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20131-9_221.

9

Dawid, R., D. McMillan, and M. Revie. "Time series semi-Markov decision process with variable costs for maintenance planning." In Risk, Reliability and Safety: Innovating Theory and Practice, 1145–50. Boca Raton, FL: CRC Press, 2016. http://dx.doi.org/10.1201/9781315374987-172.

10

Souidi, Mohammed El Habib, Toufik Messaoud Maarouk, and Abdeldjalil Ledmi. "Multi-agent Ludo Game Collaborative Path Planning based on Markov Decision Process." In Inventive Systems and Control, 37–51. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-1395-1_4.


Conference papers on the topic "Markov Decision Process Planning"

1

Al Khafaf, Nameer, Mahdi Jalili, and Peter Sokolowski. "Demand Response Planning Tool using Markov Decision Process." In 2018 IEEE 16th International Conference on Industrial Informatics (INDIN). IEEE, 2018. http://dx.doi.org/10.1109/indin.2018.8472098.

2

Leal Gomes Leite, Joao Marcelo, Edilson F. Arruda, Laura Bahiense, and Lino G. Marujo. "Mine-to-client planning with Markov Decision Process." In 2020 European Control Conference (ECC). IEEE, 2020. http://dx.doi.org/10.23919/ecc51009.2020.9143651.

3

Ting, Lei, Zhu Cheng, and Zhang Weiming. "Planning for target system striking based on Markov decision process." In 2013 IEEE International Conference on Service Operations and Logistics, and Informatics (SOLI). IEEE, 2013. http://dx.doi.org/10.1109/soli.2013.6611401.

4

Lane, Terran, and Leslie Pack Kaelbling. "Approaches to macro decompositions of large Markov decision process planning problems." In Intelligent Systems and Advanced Manufacturing, edited by Douglas W. Gage and Howie M. Choset. SPIE, 2002. http://dx.doi.org/10.1117/12.457435.

5

Shobeiry, Poorya, Ming Xin, Xiaolin Hu, and Haiyang Chao. "UAV Path Planning for Wildfire Tracking Using Partially Observable Markov Decision Process." In AIAA Scitech 2021 Forum. Reston, Virginia: American Institute of Aeronautics and Astronautics, 2021. http://dx.doi.org/10.2514/6.2021-1677.

6

Mouhagir, Hafida, Reine Talj, Veronique Cherfaoui, Franck Guillemard, and Francois Aioun. "A Markov Decision Process-based approach for trajectory planning with clothoid tentacles." In 2016 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2016. http://dx.doi.org/10.1109/ivs.2016.7535551.

7

Yousefi, Shamim, Farnaz Derakhshan, and Ayub Bokani. "Mobile Agents for Route Planning in Internet of Things Using Markov Decision Process." In 2018 IEEE International Conference on Smart Energy Grid Engineering (SEGE). IEEE, 2018. http://dx.doi.org/10.1109/sege.2018.8499517.

8

Yang, Qingpeng, Pan Zhang, Yu'e Su, Yi Huang, Kai Zhong, and Zhongwei Li. "View planning for inspection of sheet metal parts based on Markov decision process." In International Conference on Optical and Photonic Engineering (icOPEN 2022), edited by Chao Zuo, Haixia Wang, Shijie Feng, and Qian Kemao. SPIE, 2023. http://dx.doi.org/10.1117/12.2667031.

9

Yang, Qiming, Jiandong Zhang, and Guoqing Shi. "Path planning for unmanned aerial vehicle passive detection under the framework of partially observable markov decision process." In 2018 Chinese Control And Decision Conference (CCDC). IEEE, 2018. http://dx.doi.org/10.1109/ccdc.2018.8407800.

10

Liu, Chenyu, Zhijun Li, Chengyao Zhang, Yufei Yan, and Rui Zhang. "Gait Planning and Control for a Hexapod Robot on Uneven Terrain Based on Markov Decision Process." In 2019 14th IEEE Conference on Industrial Electronics and Applications (ICIEA). IEEE, 2019. http://dx.doi.org/10.1109/iciea.2019.8834181.


Reports on the topic "Markov Decision Process Planning"

1

Davis, Charles N., Jr. A Decision Support Process for Planning Air Operations. Fort Belvoir, VA: Defense Technical Information Center, February 1991. http://dx.doi.org/10.21236/ada236580.

2

Bigl, Matthew, Caitlin Callaghan, Brandon Booker, Kathryn Trubac, Jacqueline Willan, Paulina Lintsai, and Marissa Torres. Energy Atlas—mapping energy-related data for DoD lands in Alaska : Phase 2—data expansion and portal development. Engineer Research and Development Center (U.S.), January 2022. http://dx.doi.org/10.21079/11681/43062.

Abstract:
As the largest Department of Defense (DoD) land user in Alaska, the U.S. Army oversees over 600,000 hectares of land, including remote areas accessible only by air, water, and winter ice roads. Spatial information related to the energy resources and infrastructure that exist on and adjacent to DoD installations can help inform decision makers when it comes to installation planning. The Energy Atlas−Alaska portal provides a secure value-added resource to support the decision-making process for energy management, investments in installation infrastructure, and improvements to energy resiliency and sustainability. The Energy Atlas–Alaska portal compiles spatial information and provides that information through a secure online portal to access and examine energy and related resource data such as energy resource potential, energy corridors, and environmental information. The information database is hosted on a secure Common Access Card-authenticated portal that is accessible to the DoD and its partners through the Army Geospatial Center’s Enterprise Portal. This Enterprise Portal provides effective visualization and functionality to support analysis and inform DoD decision makers. The Energy Atlas–Alaska portal helps the DoD account for energy in contingency planning, acquisition, and life-cycle requirements and ensures facilities can maintain operations in the face of disruption.
3

Mwamba, Isaiah C., Mohamadali Morshedi, Suyash Padhye, Amir Davatgari, Soojin Yoon, Samuel Labi, and Makarand Hastak. Synthesis Study of Best Practices for Mapping and Coordinating Detours for Maintenance of Traffic (MOT) and Risk Assessment for Duration of Traffic Control Activities. Purdue University, 2021. http://dx.doi.org/10.5703/1288284317344.

Abstract:
Maintenance of traffic (MOT) during construction periods is critical to the success of project delivery and the overall mission of transportation agencies. MOT plans may include full road closures and coordination of detours near construction areas. Various state DOTs have designed their own manuals for detour mapping and coordination. However, very limited information is provided to select optimal detour routes. Moreover, closures or detours should provide not only measurable consequences, such as vehicle operating costs and added travel time, but also various unforeseen qualitative impacts, such as business impacts and inconvenience to local communities. Since the qualitative aspects are not easily measurable they tend to be neglected in systematic evaluations and decision-making processes. In this study, the current practices obtained based on an extensive literature review, a nation-wide survey, as well as a series of interviews with INDOT and other state DOTs are leveraged to (1) identify a comprehensive set of Key Performance Indicators (KPIs) for detour route mapping, (2) understand how other state DOTs address the qualitative criteria, (3) identify how the involved risks during the planning, service time, and closure of the detour routes are managed, and (4) recommend process improvements for INDOT detour mapping guidelines. As demonstrated by two sample case studies, the proposed KPIs can be taken as a basis for developing a decision-support tool that enables decision-makers to consider both qualitative and quantitative aspects for optimal detour route mapping. In addition, the current INDOT detour policy can be updated based on the proposed process improvements.
4

Dopfer, Jaqui. Öffentlichkeitsbeteiligung bei diskursiven Konfliktlösungsverfahren auf regionaler Ebene. Potentielle Ansätze zur Nutzung von Risikokommunikation im Rahmen von e-Government. Sonderforschungsgruppe Institutionenanalyse, 2003. http://dx.doi.org/10.46850/sofia.3933795605.

Abstract:
Whereas at the end of the 20th century there were still high expectations associated with the use of new media in terms of a democratisation of social discourse and new potential for citizens to participate in political decision-making, disillusionment is now spreading. Even today, the internet is often seen only as a technical tool for the transmission of information and communication, which serves as a structural supplement to "real" discourse and decision-making processes. In fact, however, the use of new media can open up additional, previously non-existent possibilities for well-founded and substantial citizen participation, especially at regional and supra-regional level. According to the results of this study, the informal, mediative procedures for conflict resolution in the context of high-risk planning decisions, which are now also increasingly used at the regional level, have two main problem areas. Firstly, in the conception and design chosen so far, they do not offer citizens direct access to the procedure. Citizens are given almost no opportunities to exert substantial influence on the content and procedure of the process, or on the solutions found in the process. So far, this has not been remedied by the use of new media. On the other hand, it is becoming apparent that the results negotiated in the procedure are not, or only inadequately, reflected in the subsequent sovereign decision. This means that not only valuable resources for identifying the problem situation and for integrative problem-solving remain unused, but it is also not possible to realise the effects anticipated with the participation procedures within the framework of context or reflexive self-management. With the aim of advancing the development of institutionally oriented approaches at the practice level, this study discusses potential solutions at the procedural level. This takes into account legal implications as well as the action logics, motives and intentions of the actors involved and aims to improve e-government structures. It becomes evident that opening up informal participation procedures for citizen participation at the regional level can only be realised through the (targeted) use of new media. However, this requires a fundamentally new approach not only in the participation procedures carried out but also, for example, in the conception of information or communication offerings. Opportunities for improving the use of the results obtained from the informal procedures in the (sovereign) decision-making process as well as the development of potentials in the sense of stronger self-control of social subsystems are identified in a stronger interlinking of informal and sovereign procedures. The prerequisite for this is not only the establishment of suitable structures, but above all the willingness of decision-makers to allow citizens to participate in decision-making, as well as the granting of participation opportunities and rights that go beyond those previously granted in sovereign procedures.
5

Vakaliuk, Tetiana, Valerii Kontsedailo, Dmytro Antoniuk, Olha Korotun, Serhiy Semerikov, and Iryna Mintii. Using Game Dev Tycoon to Create Professional Soft Competencies for Future Engineers-Programmers. [s.n.], November 2020. http://dx.doi.org/10.31812/123456789/4129.

Abstract:
The article presents the possibilities of using the game simulator Game Dev Tycoon to develop professional soft competencies in future engineer-programmers in higher education. The choice of the term "gaming simulator" is substantiated and a generalization of this concept is given, along with definitions of the concepts "game simulation" and "professional soft competencies." The article describes how students develop professional soft competencies while working through game simulations. These competencies include: teamwork; cooperation; problem-solving; communication; decision-making; orientation to results; maintaining interpersonal relations; use of rules and procedures; reporting; attention to detail; customer service; sustainability; professional honesty and ethics; planning and prioritization; adaptation; initiative; innovation; and external and organizational awareness.
6

Rusk, Todd, Ryan Siegel, Linda Larsen, Tim Lindsey, and Brian Deal. Technical and Financial Feasibility Study for Installation of Solar Panels at IDOT-owned Facilities. Illinois Center for Transportation, August 2021. http://dx.doi.org/10.36501/0197-9191/21-024.

Abstract:
The Smart Energy Design Assistance Center assessed the administrative, technical, and economic aspects of feasibility related to the procurement and installation of photovoltaic solar systems on IDOT-owned buildings and lands. To address administrative feasibility, we explored three main ways in which IDOT could procure solar projects: power purchase agreement (PPA), direct purchase, and land lease development. Of the three methods, PPA and direct purchase are most applicable for IDOT. While solar development is not free of obstacles for IDOT, it is administratively feasible, and regulatory hurdles can be adequately met given suitable planning and implementation. To evaluate IDOT assets for solar feasibility, more than 1,000 IDOT sites were screened and narrowed using spatial analytic tools. A stakeholder feedback process was used to select five case study sites that allowed for a range of solar development types, from large utility-scale projects to small rooftop systems. To evaluate financial feasibility, discussions with developers and datapoints from the literature were used to create financial models. A large solar project request by IDOT can be expected to generate considerable attention from developers and potentially attractive PPA pricing that would generate immediate cash flow savings for IDOT. Procurement partnerships with other state agencies will create opportunities for even larger projects with better pricing. However, in the near term, it may be difficult for IDOT to identify small rooftop or other small on-site solar projects that are financially feasible. This project identified two especially promising solar sites so that IDOT can evaluate other solar site development opportunities in the future. This project also developed a web-based decision-support tool so IDOT can identify potential sites and develop preliminary indications of feasibility. We recommend that IDOT begin the process of developing at least one of their large sites to support solar electric power generation.
7

BALYSH, A., and O. CHIRICOVA. PROBLEMS OF PRODUCTION AND USE OF SHEALING SLEEVES IN THE USSR BEFORE AND DURING THE GREAT PATRIOTIC WAR. Science and Innovation Center Publishing House, 2021. http://dx.doi.org/10.12731/2077-1770-2021-13-4-2-24-33.

Abstract:
Aim of the article. One of the most interesting and topical problems in the development of the USSR military industry is the establishment and growth of the Soviet ammunition industry. The article studies one of the reasons for the poor supply of ammunition to the Red Army in the initial period of the 1941 war: a lack of sleeves (cartridge cases), which limited the production of artillery shells. The author aims to reveal the reasons for the unsatisfactory state of sleeve manufacture at Soviet industrial enterprises before the war, as well as the influence of this factor on the production and use of sleeves during the war years. Methodology. The general principles of historicism and objectivity form the theoretical and methodological basis of this work. The author also uses special historical methods: logical, systematic, chronological, actualization, and periodization. Results. The article is based on documents held in the Russian State Archive and the Russian State Economic Archive. Drawing on these documents and materials, the author concludes that in the 1930s, under forced industrial development, Soviet governmental bodies were unable to carry out an efficient planning policy in the management of enterprises, especially in the defense branches. High-level personnel deliberately disrupted the technological process, which impaired the enterprises' operation and caused defective production. Practical application. The archival data, used here for the first time in scientific investigation, and the conclusions formulated in this article can be used for further research on the USSR military industry during the industrialization period, on military production and lend-lease during the Great Patriotic War, and on Soviet history in general.
8

Lessons Learned from the Cambodia Enterprise Infirmary Guidelines development process. Population Council, 2018. http://dx.doi.org/10.31899/sbsr2018.1002.

Abstract:
Women of reproductive age in Cambodia, and many other developing countries, comprise a large part of factories’ workforce. Integrating family planning and reproductive health information and services into factories can improve workers’ health and help countries achieve FP2020 commitments. This case study looks at the process of how the Cambodian Ministry of Labor and Vocational Training launched, as formal policy, a set of workplace health infirmary guidelines for enterprises. What made this policy process unique for Cambodia—and what can be replicated by health advocates elsewhere—is that a group of organizations typically focused on public health policy successfully engaged on labor policy with a labor ministry. This case study describes the policy process, which was underpinned by the strategic use of evidence in decision-making and has been hailed by government, donors, civil society and industry as a success. The learnings presented in this case study should be useful to health advocates, labor advocates, and program designers.