Academic literature on the topic 'Vision model'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Vision model.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Vision model"

1

Pece, Arthur E. C. "Generative model based vision." Image and Vision Computing 21, no. 1 (January 2003): 1–3. http://dx.doi.org/10.1016/s0262-8856(02)00124-5.

2

Pece, Arthur E. C., and Rasmus Larsen. "Generative model based vision." Computer Vision and Image Understanding 106, no. 1 (April 2007): 3–4. http://dx.doi.org/10.1016/j.cviu.2006.10.006.

3

Naisberg, Y. "Biophysical vision model and learning paradigms about vision: review." Medical Hypotheses 57, no. 4 (October 2001): 409–18. http://dx.doi.org/10.1054/mehy.2001.1290.

4

Kantabutra, Sooksan, and Gayle C. Avery. "Proposed Model for Investigating Relationships between Vision Components and Business Unit Performance." Journal of Management & Organization 8, no. 2 (2002): 22–39. http://dx.doi.org/10.1017/s1833367200005009.

Abstract:
Leaders are widely exhorted to employ visions, yet surprisingly little research has been conducted on what constitutes an “effective” vision. A research model is proposed for investigating relationships between vision components and business unit performance as measured by employee and customer satisfaction. The model, expressed both graphically and as three propositions, proposes that vision attributes of brevity, clarity, abstractness, challenge, future orientation, stability, and desirability, plus vision content relating to employee and customer satisfaction, can directly affect performance. However, the model also predicts indirect effects on performance mediated by six realization factors and two intervening variables.
5

Kantabutra, Sooksan, and Gayle C. Avery. "Proposed Model for Investigating Relationships between Vision Components and Business Unit Performance." Journal of the Australian and New Zealand Academy of Management 8, no. 2 (2002): 22–39. http://dx.doi.org/10.5172/jmo.2002.8.2.22.

6

Higuchi, Kazutoshi, Mitsuru Kaise, Hiroto Noda, Go Ikeda, Teppei Akimoto, Hiroshi Yamawaki, Osamu Goto, Nobue Ueki, Seiji Futagami, and Katsuhiko Iwakiri. "Usefulness of 3-Dimensional Flexible Endoscopy in Esophageal Endoscopic Submucosal Dissection in an Ex Vivo Animal Model." Gastroenterology Research and Practice 2019 (November 3, 2019): 1–5. http://dx.doi.org/10.1155/2019/4051956.

Abstract:
Background and Aims. Three-dimensional (3D) rigid endoscopy has been clinically introduced in surgical fields to enable safer and more accurate procedures. To explore the feasibility of 3D flexible endoscopy, we conducted a study comparing 2-dimensional (2D) and 3D visions for the performance of esophageal endoscopic submucosal dissection (ESD). Methods. Six endoscopists (3 experts and 3 trainees) performed ESD of target lesions in isolated porcine esophagus using a prototype 3D flexible endoscope under 2D or 3D vision. Study endpoints were procedure time, speed of mucosal incision and submucosal dissection, number of technical adverse events (perforation, muscle layer damage, and sample damage), and degree of sense of security, fatigue, and eye strain. Results. Procedure time and speed of mucosal incision/submucosal dissection were equivalent for 2D and 3D visions in both experts and trainees. The number of technical adverse events using 2D vision (mean [standard deviation], 3.5 [4.09]) tended to be higher than that using 3D vision in trainees (1.33 [2.80]; P=.06). In experts, 2D and 3D visions were equivalent. The degree of sense of security using 3D vision (3.67 [0.82]) was significantly higher than that using 2D vision (2.67 [0.52]) in trainees (P=.04), but was equivalent in experts. The degree of eye strain using 3D vision (3.00 [0.00]) was significantly higher than that using 2D vision (2.17 [0.41]) in trainees, but was equivalent in experts. Conclusions. 3D vision improves the sense of security during ESD and may reduce technical errors, especially in trainees, indicating the feasibility of a clinical trial of ESD under 3D vision.
7

Hasegawa, Tsutomu. "Model-based Vision Robotic Manipulation." Journal of the Robotics Society of Japan 10, no. 2 (1992): 153–58. http://dx.doi.org/10.7210/jrsj.10.153.

8

Lockwood, P., S. Lu, and C. Martin. "A Model for Binocular Vision." IFAC Proceedings Volumes 28, no. 14 (June 1995): 71–74. http://dx.doi.org/10.1016/s1474-6670(17)46808-x.

9

Shi, Jin Fang, Zhen Wei Su, and Guo Hui Li. "Computational Model for Machine Vision Inspection Based on Vision Attention." Advanced Materials Research 383-390 (November 2011): 2398–403. http://dx.doi.org/10.4028/www.scientific.net/amr.383-390.2398.

Abstract:
The human vision system directs processing toward important and informative regions through visual selective attention mechanisms. A computational model for machine vision inspection is proposed that combines bottom-up and top-down processing in imitation of the human vision system. In this model, top-down, knowledge-based information is integrated into the bottom-up, stimulus-based process of visual attention. The model is tested on inspecting contaminants in cotton images. Experimental results show that the proposed model is feasible and effective for visual inspection, and quasi-equivalent to human visual attention.
10

Oh, Il Kweon, Seong Won Yeom, and Dong Weon Lee. "Modal Reduced Order Model for Vision Sensing of IPMC Actuator." Key Engineering Materials 326-328 (December 2006): 1523–26. http://dx.doi.org/10.4028/www.scientific.net/kem.326-328.1523.

Abstract:
In order to control IPMC (Ionic Polymer Metal Composite) actuators, it is necessary to use a vision sensing system and a reduced order model derived from the vision sensing data. In this study, the MROVS (Modal Reduced Order Vision Sensing) model using the least squares method has been developed for implementation of biomimetic motion generation. The simulated transverse displacement is approximated with a sum of the lower mode shapes of the cantilever beam. The NI PXI-1409 image acquisition board and a CCD camera (XC-HR50) are used in the experimental setup. Present results show that the MROVS model can efficiently process the vision sensing of the biomimetic IPMC actuator with cost-effective computational time.

Dissertations / Theses on the topic "Vision model"

1

Gomes, Herman M. "Model learning in iconic vision." Thesis, University of Edinburgh, 2002. http://hdl.handle.net/1842/323.

Abstract:
Generally, object recognition research falls into three main categories: (a) geometric, symbolic or structure based recognition, which is usually associated with CAD-based vision and 3-D object recognition; (b) property, vector or feature based recognition, involving techniques that vary from specific feature vectors, multiple filtering to global descriptors for shape, texture and colour; and (c) iconic or image based recognition, which either complies with the traditional sensor architecture of a uniform array of sampling units, or uses alternative representations. An example is the log-polar image, which is inspired by the human visual system and, besides requiring fewer pixels, has some useful mathematical properties. The context of this thesis is a combination of the above categories in the sense that it investigates the area of iconic based recognition using image features and geometric relationships. It expands an existing vision system that operates by fixating at interesting regions in a scene, extracting a number of raw primal sketch features from a log-polar image and matching new regions to previously seen ones. Primal sketch features like edges, bars, blobs and ends are believed to take part in early visual processes in humans, providing cues for an attention mechanism and more compact representations for the image data. In an earlier work, logic operators were defined to extract these features, but the results were not satisfactory. This thesis initially investigates the question of whether or not primal sketch features could be learned from log-polar images, and gives an affirmative answer. The feature extraction process was implemented using a neural network which learns examples of features in a window of receptive fields of the log-polar image. An architecture designed to encode the feature’s class, position, orientation and contrast has been proposed and tested. Success depended on the incorporation of a function that normalises the feature’s orientation and a PCA pre-processing module to produce better separation in the feature space. A strategy that combines synthetic and real features is used for the learning process. This thesis also provides an answer to the important, but so far not well explored, question of how to learn relationships from sets of iconic object models obtained from a set of images. An iconic model is defined as a set of regions, or object instances, that are similar to each other, organised into a geometric model specified by the relative scales, orientations, positions and similarity scores for each pair of image regions. Similarities are measured with a cross-correlation metric, and relative scales and orientations are obtained from the best matched translational variants generated in the log-polar space. A solution to the structure learning problem is presented in terms of a graph based representation and algorithm. Vertices represent instances of an image neighbourhood found in the scenes. An edge in the graph represents a relationship between two neighbourhoods. Intra and inter model relationships are inferred by means of the cliques found in the graph, which leads to rigid geometric models inferred from the image evidence.
2

Dickinson, John William. "Image structure and model-based vision." Thesis, University of Oxford, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.292532.

3

Brown, Gary. "An object oriented model of machine vision." Thesis, Kingston University, 1997. http://eprints.kingston.ac.uk/20614/.

Abstract:
In this thesis an object oriented model is proposed that satisfies the requirements for a generic, customisable, reusable and flexible machine vision framework. These requirements are identified as being: ease of customisation for a particular application domain; independence from image definition; independence from shape representation scheme; ability to add new domain specific shape descriptors; independence from implemented machine vision algorithms; and the ability to maximise reuse of the generic framework. The thesis begins with a review of key machine vision functions and traditional architectures. In particular, machine vision architectures predicated on a process oriented framework are examined in detail and evaluated against the criteria stated above. An object oriented model is developed within the thesis, identifying the key classes underlying the machine vision domain. The responsibilities of these classes, and the relationships between them, are analysed in the context of high level machine vision tasks, for example object recognition. This object oriented approach is then contrasted with the more traditional process oriented approach. The object oriented model and framework is subsequently evaluated through a customisation, to illustrate an example machine vision application, namely Surface Mounted Electronic Assembly inspection. The object oriented model is also evaluated in the context of two functional machine vision applications described in literature. The model developed in this thesis incorporates the fundamental object oriented concepts of abstraction, encapsulation, inheritance and polymorphism. The results show that an object oriented approach does achieve the requirements for a generic, customisable, reusable and flexible machine vision framework.
4

Vasilaki, Eleni. "A biologically inspired dynamic model for vision." Thesis, University of Sussex, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.288864.

5

Broun, A. "Autonomous model building using vision and manipulation." Thesis, University of the West of England, Bristol, 2016. http://eprints.uwe.ac.uk/25851/.

Abstract:
It is often the case that robotic systems require models, in order to successfully control themselves, and to interact with the world. Models take many forms and include kinematic models to plan motions, dynamics models to understand the interaction of forces, and models of 3D geometry to check for collisions, to name but a few. Traditionally, models are provided to the robotic system by the designers that build the system. However, for long-term autonomy it becomes important for the robot to be able to build and maintain models of itself, and of objects it might encounter. In this thesis, the argument for enabling robotic systems to autonomously build models is advanced and explored. The main contribution of this research is to show how a layered approach can be taken to building models. Thus a robot, starting with a limited amount of information, can autonomously build a number of models, including a kinematic model, which describes the robot’s body, and allows it to plan and perform future movements. Key to the incremental, autonomous approach is the use of exploratory actions. These are actions that the robot can perform in order to gain some more information, either about itself, or about an object with which it is interacting. A method is then presented whereby a robot, after being powered on, can home its joints using just vision, i.e. traditional methods such as absolute encoders, or limit switches are not required. The ability to interact with objects in order to extract information is one of the main advantages that a robotic system has over a purely passive system, when attempting to learn about or build models of objects. In light of this, the next contribution of this research is to look beyond the robot’s body and to present methods with which a robot can autonomously build models of objects in the world around it. The first class of objects examined are flat pack cardboard boxes, a class of articulated objects with a number of interesting properties. It is shown how exploratory actions can be used to build a model of a flat pack cardboard box and to locate any hinges the box may have. Specifically, it is shown how when interacting with an object, a robot can combine haptic feedback from force sensors, with visual feedback from a camera to get more information from an object than would be possible using just a single sensor modality. The final contribution of this research is to present a series of exploratory actions for a robotic text reading system that allow text to be found and read from an object. The text reading system highlights how models of objects can take many forms, from a representation of their physical extents, to the text that is written on them.
6

Hnatow, Justin Michael. "A theoretical eye model for uncalibrated real-time eye gaze estimation." Online version of thesis, 2006. http://hdl.handle.net/1850/2606.

7

Shao, Yuan. "A Bayesian reasoning framework for model-driven vision." Thesis, University of Sheffield, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.284789.

8

Du, Li. "The viewpoint consistency constraint in model-based vision." Thesis, University of Reading, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.317162.

9

Becker, Shawn C. (Shawn Carter). "Vision-assisted modeling for model-based video representations." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/29123.

10

Li, Min. "Novel frameworks for deformable model and nonrigid motion analysis." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file 2.87Mb, 145 p, 2005. http://wwwlib.umi.com/dissertations/fullcit/3181869.


Books on the topic "Vision model"

1

The etiology of vision disorders: A neuroscience model. Santa Ana, CA: Optometric Extension Program, 1990.

2

Bab-Hadiashar, Alireza, and David Suter, eds. Data Segmentation and Model Selection for Computer Vision. New York, NY: Springer New York, 2000. http://dx.doi.org/10.1007/978-0-387-21528-0.

3

Lepetit, Vincent. Monocular model-based 3D tracking of rigid objects. Boston, MA: NOW Publishers, 2005.

4

Policing Sri Lanka: A vision beyond the colonial model. Ratmalana: Sarvodaya Vishva Lekha Publishers, 2003.

5

CIE Technical Committee TC-8-01. A colour appearance model for colour management systems: CIECAM02. Vienna, Austria: CIE Central Bureau, 2004.

6

Sturm, Jürgen. Approaches to Probabilistic Model Learning for Mobile Manipulation Robots. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013.

7

Panin, Giorgio. Model-based visual tracking: The OpenTL framework. Hoboken, N.J.: Wiley, 2011.

8

Cavadini, Marco. Concept and model of a multiprocessor system for high resolution image correlation. Konstanz: Hartung-Gorre, 1999.

9

Yu, Faxin. Three-dimensional model analysis and processing. Hangzhou: Zhejiang University Press, 2010.

10

Verghese, Gilbert. Perspective alignment back-projection for real-time monocular three-dimensional model-based computer vision. Toronto: Dept. of Computer Science, University of Toronto, 1995.


Book chapters on the topic "Vision model"

1

Zhang, Zhengyou. "Camera Model." In Computer Vision, 77–80. Boston, MA: Springer US, 2014. http://dx.doi.org/10.1007/978-0-387-31439-6_165.

2

Tominaga, Shoji. "Color Model." In Computer Vision, 1–6. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-03243-2_449-1.

3

Tominaga, Shoji. "Color Model." In Computer Vision, 116–20. Boston, MA: Springer US, 2014. http://dx.doi.org/10.1007/978-0-387-31439-6_449.

4

Sturm, Peter. "Pinhole Camera Model." In Computer Vision, 610–13. Boston, MA: Springer US, 2014. http://dx.doi.org/10.1007/978-0-387-31439-6_472.

5

Tan, Ping. "Phong Reflectance Model." In Computer Vision, 1–3. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-03243-2_536-1.

6

Ghosh, Abhijeet. "Cook-Torrance Model." In Computer Vision, 146–52. Boston, MA: Springer US, 2014. http://dx.doi.org/10.1007/978-0-387-31439-6_531.

7

Tominaga, Shoji. "Dichromatic Reflection Model." In Computer Vision, 191–93. Boston, MA: Springer US, 2014. http://dx.doi.org/10.1007/978-0-387-31439-6_532.

8

Tan, Ping. "Phong Reflectance Model." In Computer Vision, 592–94. Boston, MA: Springer US, 2014. http://dx.doi.org/10.1007/978-0-387-31439-6_536.

9

Tominaga, Shoji. "Dichromatic Reflection Model." In Computer Vision, 1–3. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-03243-2_532-1.

10

Tan, Ping. "Oren-Nayar Reflectance Model." In Computer Vision, 1–3. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-03243-2_535-1.


Conference papers on the topic "Vision model"

1

Benelallam, Amine, Thomas Hartmann, Ludovic Mouline, Francois Fouquet, Johann Bourcier, Olivier Barais, and Yves Le Traon. "Raising Time Awareness in Model-Driven Engineering: Vision Paper." In 2017 ACM/IEEE 20th International Conference on Model-Driven Engineering Languages and Systems (MODELS). IEEE, 2017. http://dx.doi.org/10.1109/models.2017.11.

2

Osorio, D., and Peter J. Sobey. "Insect vision as model for machine vision." In Applications in Optical Science and Engineering, edited by David P. Casasent. SPIE, 1992. http://dx.doi.org/10.1117/12.131606.

3

Moorhead, Ian R. "Computational color vision model." In Photonics West '98 Electronic Imaging, edited by Bernice E. Rogowitz and Thrasyvoulos N. Pappas. SPIE, 1998. http://dx.doi.org/10.1117/12.320155.

4

Moorhead, Ian R. "Spatiochromatic model of vision." In Electronic Imaging: Science & Technology, edited by Bernice E. Rogowitz and Jan P. Allebach. SPIE, 1996. http://dx.doi.org/10.1117/12.238721.

5

Podladchikova, Lubov N., Valentina I. Gusakova, Dmitry G. Shaposhnikov, Alain Faure, Alexander V. Golovan, and Natalia A. Shevtsova. "MARR: active vision model." In Intelligent Systems & Advanced Manufacturing, edited by David P. Casasent. SPIE, 1997. http://dx.doi.org/10.1117/12.290313.

6

Nazaruks, Vladislavs, and Jānis Osis. "Vision of the TFM-driven Code Acquisition." In Special Session on Model-Driven Innovations for Software Engineering. SCITEPRESS - Science and Technology Publications, 2019. http://dx.doi.org/10.5220/0007843306170624.

7

Teng, Chungte, and Panos A. Ligomenides. "ANN-implemented robust vision model." In Boston - DL tentative, edited by David P. Casasent. SPIE, 1991. http://dx.doi.org/10.1117/12.25200.

8

Rogers, M., and J. Graham. "Structured Point Distribution Models: Modelling Intermittently Present Features." In British Machine Vision Conference 2001. British Machine Vision Association, 2001. http://dx.doi.org/10.5244/c.15.5.

9

Abouelaziz, Ilyass, and Mohammed El Hassouni. "New models of visual saliency: Contourlet transform based model and hybrid model." In 2015 Intelligent Systems and Computer Vision (ISCV). IEEE, 2015. http://dx.doi.org/10.1109/isacv.2015.7105547.

10

Huang, Haoshuo, Vihan Jain, Harsh Mehta, Jason Baldridge, and Eugene Ie. "Multi-modal Discriminative Model for Vision-and-Language Navigation." In Proceedings of the Combined Workshop on Spatial Language Understanding (SpLU) and Grounded Communication for Robotics (RoboNLP). Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/w19-1605.


Reports on the topic "Vision model"

1

Jacobson, Jacob J., Robert F. Jeffers, Gretchen E. Matthern, Steven J. Piet, Benjamin A. Baker, and Joseph Grimm. VISION User Guide - VISION (Verifiable Fuel Cycle Simulation) Model. Office of Scientific and Technical Information (OSTI), August 2009. http://dx.doi.org/10.2172/968564.

2

Lindquist, George H., J. Richard Freeling, and Allyn W. Dunstan. Computational Vision Model (CVM) Research and Development. Fort Belvoir, VA: Defense Technical Information Center, March 1998. http://dx.doi.org/10.21236/ada361237.

3

Chang, Huey, Katsushi Ikeuchi, and Takeo Kanade. Model-Based Vision System by Object-Oriented Programming. Fort Belvoir, VA: Defense Technical Information Center, February 1988. http://dx.doi.org/10.21236/ada195819.

4

Shropshire, D. E., and W. H. West. Software Requirements Specification Verifiable Fuel Cycle Simulation (VISION) Model. Office of Scientific and Technical Information (OSTI), November 2005. http://dx.doi.org/10.2172/910990.

5

Jacobson, J. J., D. E. Shropshire, and W. B. West. Software Platform Evaluation - Verifiable Fuel Cycle Simulation (VISION) Model. Office of Scientific and Technical Information (OSTI), November 2005. http://dx.doi.org/10.2172/911264.

6

Shropshire, D. E., and W. H. West. Software Requirements Specification Verifiable Fuel Cycle Simulation (VISION) Model. Office of Scientific and Technical Information (OSTI), November 2005. http://dx.doi.org/10.2172/911281.

7

Antonio, Joseph C., and William E. Berkley. Night Vision Goggle Model F4949 Preflight Adjustment/Assessment Procedures. Fort Belvoir, VA: Defense Technical Information Center, August 1993. http://dx.doi.org/10.21236/ada271079.

8

Kaplan, Robert, and Phillip B. Hanselman. Suitability of Ada for Real-Time Model Based Vision Applications. Fort Belvoir, VA: Defense Technical Information Center, November 1991. http://dx.doi.org/10.21236/ada245710.

9

Jacobson, Jacob J., Robert F. Jeffers, Gretchen E. Matthern, Steven J. Piet, and Wendell D. Hintze. User Guide for VISION 3.4.7 (Verifiable Fuel Cycle Simulation) Model. Office of Scientific and Technical Information (OSTI), July 2011. http://dx.doi.org/10.2172/1027943.

10

Rhodes, Anthony. Leveraging Model Flexibility and Deep Structure: Non-Parametric and Deep Models for Computer Vision Processes with Applications to Deep Model Compression. Portland State University Library, May 2020. http://dx.doi.org/10.15760/etd.7320.
