Academic literature on the topic 'Learning for planning'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Learning for planning.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Learning for planning"
Sahraoui, Sofiane. "Learning through Planning." Journal of Organizational and End User Computing 15, no. 2 (April 2003): 37–53. http://dx.doi.org/10.4018/joeuc.2003040103.
Mally, Kristi. "Planning for Learning." Journal of Physical Education, Recreation & Dance 80, no. 4 (April 2009): 39–47. http://dx.doi.org/10.1080/07303084.2009.10598309.
Hodgson, David, and Heather Walford. "Planning for learning and learning about planning in social work fieldwork." Journal of Practice Teaching and Learning 7, no. 1 (January 1, 2006): 50–66. http://dx.doi.org/10.1921/17466105.7.1.50.
Hodgson, David, and Heather Walford. "Planning for learning and learning about planning in social work fieldwork." Journal of Practice Teaching and Learning 7, no. 1 (December 20, 2012): 50–66. http://dx.doi.org/10.1921/jpts.v7i1.343.
Safra, S., and M. Tennenholtz. "On Planning while Learning." Journal of Artificial Intelligence Research 2 (September 1, 1994): 111–29. http://dx.doi.org/10.1613/jair.51.
Cowley, Jennifer S. Evans, Thomas W. Sanchez, Nader Afzalan, Abel Silva Lizcano, Zachary Kenitzer, and Thomas Evans. "Learning About E-Planning." International Journal of E-Planning Research 3, no. 3 (July 2014): 53–76. http://dx.doi.org/10.4018/ijepr.2014070104.
Schaeffer, Jonathan. "Games: Planning and Learning." ICGA Journal 17, no. 1 (March 1, 1994): 40–41. http://dx.doi.org/10.3233/icg-1994-17113.
Hufford, Jon R. "Planning for Distance Learning." Journal of Library Administration 32, no. 1-2 (January 2001): 259–66. http://dx.doi.org/10.1300/j111v32n01_04.
Zorc, Samo. "Learning in Assembly Planning." IFAC Proceedings Volumes 31, no. 7 (May 1998): 17–22. http://dx.doi.org/10.1016/s1474-6670(17)40250-3.
Zuiderwijk, Dianka C., Riana Steen, and Pedro N. P. Ferreira. "Learning from operational planning." International Journal of Business Continuity and Risk Management 13, no. 2 (2023): 165–87. http://dx.doi.org/10.1504/ijbcrm.2023.131863.
Dissertations / Theses on the topic "Learning for planning"
Goodspeed, Robert (Robert Charles). "Planning support systems for spatial planning through social learning." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/81739.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (p. 240-271).
This dissertation examines new professional practices in urban planning that utilize new types of spatial planning support systems (PSS) based on geographic information systems (GIS) software. Through a mixed-methods research design, the dissertation investigates the role of these new technologies in planning workshops, processes, and as metropolitan infrastructures. In particular, PSS are viewed as supporting social learning in spatial planning processes. The study includes cases in Boston, Kansas City, and Austin. The findings indicate high levels of social learning, broadly confirming the collaborative planning theory literature. Participants at planning workshops that incorporated embodied computing interaction designs reported higher levels of two forms of learning drawn from Argyris and Schön's theory of organizational learning: single and double loop learning. Single loop learning is measured as reported learning. Double loop learning, characterized by deliberation about goals and values, is measured with a novel summative scale. These workshops utilized PSS to contribute indicators to the discussion through the use of paper maps for input and human operators for output. A regression analysis reveals that the PSS contributed to learning by encouraging imagination, engagement, and alignment. Participants' perceived identities as planners, personality characteristics, and frequency of meeting attendance were also related to the learning outcomes. However, less learning was observed at workshops with many detailed maps and limited time for discussion, and exercises lacking PSS feedback. The development of PSS infrastructure is investigated by conducting a qualitative analysis of focus groups of professional planners, and a case where a PSS was planned but not implemented. The dissertation draws on the research literatures on learning, PSS and urban computer models, and planning theory.
The research design is influenced by a sociotechnical perspective and design research paradigms from several fields. The dissertation argues social learning is required to achieve many normative goals in planning, such as institutional change and urban sustainability. The relationship between planning processes and outcomes, and implications of information technology trends for PSS and spatial planning are discussed.
by Robert Goodspeed.
Ph.D.
Zettlemoyer, Luke S. (Luke Sean) 1978. "Learning probabilistic relational planning rules." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/87896.
Park, Sooho S. M. Massachusetts Institute of Technology. "Learning for informative path planning." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45887.
Full textIncludes bibliographical references (p. 104-108).
Through the combined use of regression techniques, we will learn models of the uncertainty propagation efficiently and accurately to replace computationally intensive Monte Carlo simulations in informative path planning. This will enable us to decrease the uncertainty of the weather estimates more than current methods by enabling the evaluation of many more candidate paths given the same amount of resources. The learning method and the path planning method will be validated by numerical experiments using the Lorenz-2003 model [32], an idealized weather model.
by Sooho Park.
S.M.
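The core idea in the abstract above, replacing an expensive Monte Carlo evaluation with a cheap learned regression surrogate so that many more candidate paths can be scored, can be sketched roughly as follows. This is an illustrative toy, not the thesis's method: the path feature vectors, the hidden linear response `TRUE_W`, and the sample sizes are all invented, and the real work targets the Lorenz-2003 weather model rather than a synthetic linear function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the expensive Monte Carlo step: the "uncertainty"
# of a candidate path is a hidden linear function of a small path-feature
# vector, plus sampling noise that averages out over many samples.
TRUE_W = np.array([0.5, -1.2, 0.3])

def monte_carlo_uncertainty(path_features, n_samples=2000):
    noise = rng.normal(0.0, 0.05, n_samples)
    return float(path_features @ TRUE_W + noise.mean())

# Run the expensive simulation on only a small set of training paths...
X_train = rng.uniform(-1.0, 1.0, size=(20, 3))
y_train = np.array([monte_carlo_uncertainty(x) for x in X_train])

# ...fit a plain least-squares surrogate to the simulated results...
w_hat, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# ...then score thousands of candidate paths at negligible cost and pick
# the one predicted to reduce uncertainty the most.
X_candidates = rng.uniform(-1.0, 1.0, size=(5000, 3))
predicted = X_candidates @ w_hat
best_path = X_candidates[np.argmin(predicted)]
```

In this toy setup, the surrogate is fit from 20 simulations yet ranks 5,000 candidates; the same budget spent on direct Monte Carlo would cover only a handful of paths.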
Junyent, Barbany Miquel. "Width-Based Planning and Learning." Doctoral thesis, Universitat Pompeu Fabra, 2021. http://hdl.handle.net/10803/672779.
Sequential optimal decision making is a fundamental problem in many fields. In recent years, reinforcement learning (RL) methods have experienced unprecedented success, largely thanks to the use of deep learning models, achieving human-level performance in several domains, such as Atari video games or the ancient game of Go. In contrast to the RL approach, where the agent learns a policy from samples of interaction with the environment, ignoring the structure of the problem, the planning approach assumes known models of the agent's goals and the domain dynamics, and focuses on determining how the agent should behave to achieve its goals. Current planners are able to solve problems involving large state spaces precisely by exploiting the structure of the problem, defined in the state-action model. In this work we combine the two approaches, leveraging the fast and compact policies of learning methods and the ability of planning methods to search in combinatorial problems. In particular, we focus on a family of width-based planners, which have been very successful in recent years because their scalability is independent of the size of the state space. The basic algorithm, Iterated Width (IW), was originally proposed for classical planning problems, where the state transition model and the goals are fully determined, represented by sets of atoms. However, width-based planners do not require a fully defined model of the environment and can be used with simulators. For example, they have recently been applied to pixel-based domains such as the Atari games. Despite its success, IW is a purely exploratory algorithm and does not exploit information from previous rewards. Moreover, it requires the state to be factored into features, which must be predefined for the task at hand.
In addition, running the algorithm with a width greater than 1 is usually computationally intractable in practice, preventing IW from solving problems of higher width. We begin this thesis by studying the complexity of width-based methods when the state space is defined by multivalued features, as in RL problems, instead of Boolean atoms. We provide a tighter upper bound on the number of nodes expanded by IW, as well as general algorithmic complexity results. To tackle more complex problems (i.e., those with width greater than 1), we present a hierarchical algorithm that plans at two levels of abstraction. The high-level planner uses abstract features that are gradually discovered from pruning decisions in the low-level tree. We illustrate this algorithm in classical planning PDDL domains, as well as in pixel-based simulator domains. In classical planning, we show how IW(1) at two levels of abstraction can solve problems of width 2. To exploit information from past rewards, we incorporate an explicit policy into the action selection mechanism. Our method, called π-IW, interleaves width-based planning and policy learning using the actions visited by the planner. We represent the policy with a neural network that, in turn, is used to guide the planning, thus reinforcing promising paths. Moreover, the representation learned by the neural network can be used as features for the planner without degrading its performance, removing the requirement of predefined features. We compare π-IW with previous width-based methods and with AlphaZero, a method that also interleaves planning and learning, and show that π-IW performs better in simple environments. We also show that π-IW outperforms other width-based methods in the Atari games.
Finally, we show that the proposed hierarchical IW method can easily be integrated with our policy learning scheme, resulting in an algorithm that outperforms non-hierarchical IW-based planners in Atari games with distant rewards.
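The abstract above centers on Iterated Width. As a rough illustration of the core IW(1) idea only (not code from the thesis), the sketch below runs a breadth-first search that prunes every generated state containing no novel (feature, value) atom. The toy grid, the feature map, and the goal are invented for illustration; real width-based planners add budgets, reward accounting, and simulator interfaces.

```python
from collections import deque

def iw1(initial, successors, is_goal, features):
    """IW(1): breadth-first search that prunes states with no novel atom.

    'features' maps a state to a set of (variable, value) atoms; a generated
    state is kept only if at least one of its atoms was never seen before.
    """
    seen = set(features(initial))
    frontier = deque([(initial, [])])
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path
        for action, nxt in successors(state):
            atoms = set(features(nxt))
            if atoms - seen:              # at least one novel atom: keep
                seen |= atoms
                frontier.append((nxt, path + [action]))
    return None                           # everything pruned: width > 1

# Toy 3x3 grid: move right or up. The goal "reach x == 2" has width 1,
# so IW(1) finds it even though most states are pruned away.
def successors(state):
    x, y = state
    moves = []
    if x < 2:
        moves.append(("right", (x + 1, y)))
    if y < 2:
        moves.append(("up", (x, y + 1)))
    return moves

features = lambda s: {("x", s[0]), ("y", s[1])}
path = iw1((0, 0), successors, lambda s: s[0] == 2, features)
# path == ["right", "right"]
```

A conjunctive goal such as "reach (2, 2)" has width 2 under these single-coordinate features, so this IW(1) search would prune its way to a dead end there, which is exactly the limitation the thesis's hierarchical two-level scheme is designed to overcome.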
Dearden, Richard W. "Learning and planning in structured worlds." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0020/NQ56531.pdf.
Madigan-Concannon, Liam. "Planning for life: involving adults with learning disabilities in service planning." Thesis, London School of Economics and Political Science (University of London), 2003. http://etheses.lse.ac.uk/2664/.
Mäntysalo, R. (Raine). "Land-use planning as inter-organizational learning." Doctoral thesis, University of Oulu, 2000. http://urn.fi/urn:isbn:9514258444.
Grant, Timothy John. "Inductive learning of knowledge-based planning operators." [Maastricht: Rijksuniversiteit Limburg]; University Library, Maastricht University [Host], 1996. http://arno.unimaas.nl/show.cgi?fid=6686.
Baldassarre, Gianluca. "Planning with neural networks and reinforcement learning." Thesis, University of Essex, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.252285.
Newton, Muhammad Abdul Hakim. "Wizard: learning macro-actions comprehensively for planning." Thesis, University of Strathclyde, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.501841.
Books on the topic "Learning for planning"
Serret, Natasha, and Catherine Gripton, eds. Purposeful Planning for Learning. Abingdon, Oxon; New York, NY: Routledge, 2020. http://dx.doi.org/10.4324/9780429489266.
Sparks-Linfield, Rachel. Planning for learning through weather. Leamington Spa: Step Forward Publishing, 2005.
Linfield, Rachel Sparks. Planning for learning through summer. Leamington: Step Forward Publishing, 1998.
International Technology Education Association. Planning learning: Developing technology curricula. Reston, Va: International Technology Education Association, 2005.
Coltman, Penny, ed. Planning for learning through spring. Leamington Spa: Step Forward, 1998.
Coltman, Penny, ed. Planning for learning through winter. Leamington Spa: Step Forward, 1998.
Coltman, Penny, and Cathy Hughes, eds. Planning for learning through minibeasts. Leamington Spa: Step Forward Publishing, 1999.
Maltas, Debra, and Cathy Hughes (illustrator), eds. Planning for learning through ICT. London: Practical Pre-Schools Books, 2010.
Coltman, Penny. Planning for learning through toys. Leamington: Step Forward Publishing, 1998.
Linfield, Rachel Sparks. Planning for Learning Through Autumn. Leamington: Step Forward Publishing, 1998.
Book chapters on the topic "Learning for planning"
Weinstein, Yana, Megan Sumeracki, and Oliver Caviglioli. "Planning learning." In Understanding How We Learn, 88–100. Abingdon, Oxon; New York, NY: Routledge, 2018. http://dx.doi.org/10.4324/9780203710463-9.
Plaat, Aske. "Heuristic Planning." In Learning to Play, 71–112. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-59238-7_4.
Trede, Franziska, Lina Markauskaite, Celina McEwen, and Susie Macfarlane. "Planning Learning Activities." In Understanding Teaching-Learning Practice, 99–109. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-7410-4_7.
Haydn, Terry, and Alison Stephen. "Planning for learning." In Learning to Teach History in the Secondary School, 70–105. 5th ed. London: Routledge, 2021. http://dx.doi.org/10.4324/9780429060885-4.
Das, J. P. "Simultaneous-Successive Processing and Planning." In Learning Strategies and Learning Styles, 101–29. Boston, MA: Springer US, 1988. http://dx.doi.org/10.1007/978-1-4899-2118-5_5.
Cappellini, Mary. "Thematic Planning." In Balancing Reading and Language Learning, 79–96. New York: Routledge, 2024. http://dx.doi.org/10.4324/9781003579069-7.
Kullberg, Angelika, Åke Ingerman, and Ference Marton. "Learning Study." In Planning and Analyzing Teaching, 30–41. London: Routledge, 2024. http://dx.doi.org/10.4324/9781003194903-4.
Hindmarsh, Sarah, and Susan Hunt. "Outdoor learning." In Purposeful Planning for Learning, 67–74. Abingdon, Oxon; New York, NY: Routledge, 2020. http://dx.doi.org/10.4324/9780429489266-9.
Serret, Natasha, and Catherine Gripton. "What is planning?" In Purposeful Planning for Learning, 1–3. Abingdon, Oxon; New York, NY: Routledge, 2020. http://dx.doi.org/10.4324/9780429489266-1.
Haywood, Elaine. "Planning for sustainability." In Purposeful Planning for Learning, 75–81. Abingdon, Oxon; New York, NY: Routledge, 2020. http://dx.doi.org/10.4324/9780429489266-10.
Conference papers on the topic "Learning for planning"
Zheng, Yuanhang, Peng Li, Ming Yan, Ji Zhang, Fei Huang, and Yang Liu. "Budget-Constrained Tool Learning with Planning." In Findings of the Association for Computational Linguistics ACL 2024, 9039–52. Stroudsburg, PA, USA: Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.findings-acl.536.
Wakefield, Joshua J., Adam Neal, Stewart Haslinger, and Jason F. Ralph. "Sonar Path Planning Using Reinforcement Learning." In 2024 27th International Conference on Information Fusion (FUSION), 1–8. IEEE, 2024. http://dx.doi.org/10.23919/fusion59988.2024.10706484.
Wu, Yi, Mengsha Hu, Runxiang Jin, and Rui Liu. "Physics Representation Learning for Dexterous Manipulation Planning." In 2024 33rd IEEE International Conference on Robot and Human Interactive Communication (ROMAN), 1177–82. IEEE, 2024. http://dx.doi.org/10.1109/ro-man60168.2024.10731363.
Liu, Xian-Yi, Ming-Hao Yin, and Jia-Nan Wang. "Mapping contingent planning into multi-valued planning." In 2008 International Conference on Machine Learning and Cybernetics (ICMLC). IEEE, 2008. http://dx.doi.org/10.1109/icmlc.2008.4620739.
Weiß, Gerhard. "Planning and learning together." In the fourth international conference. New York, New York, USA: ACM Press, 2000. http://dx.doi.org/10.1145/336595.337059.
Yang, Lin. "E-Learning Planning Perspective." In Proceedings of the Third International Conference on Web-based Learning (ICWL 2004). World Scientific, 2004. http://dx.doi.org/10.1142/9789812702494_0010.
Li, Bing, and Wen-Xiang Gu. "Process semaphore planning is a entire new planning method." In 2008 International Conference on Machine Learning and Cybernetics (ICMLC). IEEE, 2008. http://dx.doi.org/10.1109/icmlc.2008.4620751.
Holman, Caitlin, Stephen J. Aguilar, Adam Levick, Jeff Stern, Benjamin Plummer, and Barry Fishman. "Planning for success." In LAK '15: The 5th International Learning Analytics and Knowledge Conference. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2723576.2723632.
Zhang, You-hong, Ming-hao Yin, and Wen-xiang Gu. "Realize utility-driven decision theoretic planning on the planning graph." In Proceedings of 2005 International Conference on Machine Learning and Cybernetics. IEEE, 2005. http://dx.doi.org/10.1109/icmlc.2005.1527172.
Liu, Jie, Ruishi Liang, and Junwei Xian. "An AI Planning Approach to Factory Production Planning and Scheduling." In 2022 International Conference on Machine Learning and Knowledge Engineering (MLKE). IEEE, 2022. http://dx.doi.org/10.1109/mlke55170.2022.00027.
Reports on the topic "Learning for planning"
Tadepalli, Prasad, and Alan Fern. Partial Planning Reinforcement Learning. Fort Belvoir, VA: Defense Technical Information Center, August 2012. http://dx.doi.org/10.21236/ada574717.
Chen, Pang C. Learning to improve path planning performance. Office of Scientific and Technical Information (OSTI), April 1995. http://dx.doi.org/10.2172/71654.
Parker, Robert. Linking Experiential Learning to Community Transportation Planning. Portland State University Library, May 2008. http://dx.doi.org/10.15760/trec.90.
Ilghami, Okhtay, Dana S. Nau, Hector Munoz-Avila, and David W. Aha. CaMeL: Learning Method Preconditions for HTN Planning. Fort Belvoir, VA: Defense Technical Information Center, January 2006. http://dx.doi.org/10.21236/ada448055.
Rosenbloom, Paul S., Soowon Lee, and Amy Unruh. Bias in Planning and Explanation-Based Learning. Fort Belvoir, VA: Defense Technical Information Center, May 1993. http://dx.doi.org/10.21236/ada269608.
Thrun, Sebastian. MAPLE: Multi-Agent Planning, Learning, and Execution. Fort Belvoir, VA: Defense Technical Information Center, February 2004. http://dx.doi.org/10.21236/ada421529.
McCormick, Michael J. Warning and Planning: Learning to Live With Ambiguity. Fort Belvoir, VA: Defense Technical Information Center, January 1995. http://dx.doi.org/10.21236/ada441101.
Crabbe, Frederick L., and Rebecca Hwa. Robot Imitation Learning of High-Level Planning Information. Fort Belvoir, VA: Defense Technical Information Center, May 2005. http://dx.doi.org/10.21236/ada460420.
Ram, Ashwin. Modeling Multistrategy Learning as a Deliberative Process of Planning. Fort Belvoir, VA: Defense Technical Information Center, December 2000. http://dx.doi.org/10.21236/ada399291.
Munoz-Avila, Hector. Transfer Learning and Hierarchical Task Network Representations and Planning. Fort Belvoir, VA: Defense Technical Information Center, February 2008. http://dx.doi.org/10.21236/ada500020.