Academic literature on the topic 'Intelligence testing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Intelligence testing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Intelligence testing"

1

Richardson, Theresa, and Erwin V. Johanningmeier. "Intelligence testing." International Journal of Educational Research 27, no. 8 (February 1998): 699–714. http://dx.doi.org/10.1016/s0883-0355(98)00007-x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Garner, J. Bradley. "Intelligence Testing or Testing Intelligently." School Psychology International 6, no. 4 (October 1985): 235–38. http://dx.doi.org/10.1177/0143034385064008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Braaten, Ellen B., and Dennis Norman. "Intelligence (IQ) Testing." Pediatrics in Review 27, no. 11 (November 2006): 403–8. http://dx.doi.org/10.1542/pir.27-11-403.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Braaten, Ellen B., and Dennis Norman. "Intelligence (IQ) Testing." Pediatrics In Review 27, no. 11 (November 1, 2006): 403–8. http://dx.doi.org/10.1542/pir.27.11.403.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

"Standardized Intelligence Testing." Journal of Developmental & Behavioral Pediatrics 16, no. 6 (December 1995): 425–27. http://dx.doi.org/10.1097/00004703-199512000-00006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Nicolas, Serge, and Zachary Levine. "Beyond Intelligence Testing." European Psychologist 17, no. 4 (January 1, 2012): 320–25. http://dx.doi.org/10.1027/1016-9040/a000117.

Full text
Abstract:
Though Alfred Binet was a prolific writer, many of his 1893–1903 works are not well known. This is partly due to a lack of English translations of the many important papers and books that he and his collaborators created during this period. Binet’s insights into intelligence testing are widely celebrated, but the centennial of his death provides an occasion to reexamine his other psychological examinations. His studies included many diverse aspects of mental life, including memory research and the science of testimony. Indeed, Binet was a pioneer of psychology and produced important research on cognitive and experimental psychology, developmental psychology, social psychology, and applied psychology. This paper seeks to elucidate these aspects of his work.
APA, Harvard, Vancouver, ISO, and other styles
7

Kaufman, Melvin E., and Wayne L. Sengstock. "Intelligence testing of children." Postgraduate Medicine 81, no. 5 (April 1987): 249–55. http://dx.doi.org/10.1080/00325481.1987.11699799.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Aftarini, Tari, Indawan Syahri, and Mulyadi Mulyadi. "How is the effect of Multiple Intelligences on Students’ Self-Confidence and English Daily Exam Scores?" Journal of Social Work and Science Education 4, no. 3 (June 24, 2023): 1208–29. http://dx.doi.org/10.52690/jswse.v4i3.667.

Full text
Abstract:
The research aims to understand the impact of multiple intelligences on students' confidence and daily exam scores in English. The study was carried out at SMP Negeri 2 Abab, Penukal Abab Lematang Ilir Regency, using a quantitative approach with data testing techniques. A multiple intelligences test was given to 193 students to identify each student's dominant intelligence, and the data testing techniques were then used to determine whether that dominant intelligence affected the students' confidence and daily exam scores in English. The results showed that (1) students' dominant intelligence did not significantly affect their English daily exam scores, and (2) some dominant intelligences did affect students' confidence in learning English, namely verbal, kinesthetic, interpersonal, intrapersonal, and naturalistic intelligence. It can therefore be concluded that students' dominant intelligence had no impact on their exam scores, while some dominant intelligences did have an impact on students' self-confidence.
APA, Harvard, Vancouver, ISO, and other styles
9

Mysiuk, R., V. Yuzevych, and I. Mysiuk. "API test automation of search functionality with artificial intelligence." Artificial Intelligence 27, jai2022.27(1) (June 20, 2022): 269–74. http://dx.doi.org/10.15407/jai2022.01.269.

Full text
Abstract:
One of the steps in software development is testing the software product. As technology has developed, the testing process has evolved toward automated testing, which reduces the impact of human error and speeds up testing. The main targets for testing are web applications, web services, and mobile applications, along with performance testing. According to the testing pyramid, testing web services requires more test cases than testing a web application. Because automation involves writing software code for testing, the use of ready-made tools speeds up the software development process. One of the most important test indicators is coverage of the search functionality. The search functionality of a web application or web service requires a large number of cases, since many conditions must be covered for the free entry of arbitrary information on the web page. One existing approach is data-driven testing, which works with a test data set through files such as CSV, XLS, JSON, XML, and others. However, finding input data for testing takes a lot of time when creating test cases and automated test scenarios. It is proposed to form the test data set using artificial data-set generators based on real values and popular queries on the website. In addition, probable test-case design techniques can be taken into account. It is proposed to divide the testing software conditionally into several layers: client, tests, data handling, checks, and reports. The Java programming language has a number of libraries for working at each of these levels. It is proposed to use Rest Assured as a RESTful client, TestNG as a library for writing tests with checks, and Allure for generating reports. The proposed approach uses artificial intelligence for automated selection of test cases when creating a test, in order to diversify testing approaches and simulate human input and behavior to maximize the use of cases.
APA, Harvard, Vancouver, ISO, and other styles
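The abstract above (entry 9) proposes data-driven testing of a search API, implemented in the paper with Java libraries (Rest Assured, TestNG, Allure). The sketch below illustrates the same data-driven idea in Python with requests and pytest rather than the paper's Java stack; the base URL, endpoint, query parameter, and expected status codes are hypothetical stand-ins for the artificially generated test data the authors describe.

```python
import pytest
import requests

BASE_URL = "https://example.com/api"  # hypothetical service under test

# (query, expected HTTP status) pairs; in the paper's approach these would come
# from CSV/XLS/JSON/XML files or from a generator seeded with popular real queries.
SEARCH_CASES = [
    ("laptop", 200),      # ordinary query
    ("", 400),            # empty input
    ("a" * 1024, 200),    # very long free-text input
]

@pytest.mark.parametrize("query,expected_status", SEARCH_CASES)
def test_search_endpoint(query, expected_status):
    # Send the search request and check the response contract.
    resp = requests.get(f"{BASE_URL}/search", params={"q": query}, timeout=10)
    assert resp.status_code == expected_status
    if resp.status_code == 200:
        # Successful responses are assumed to return a JSON body with a results list.
        assert isinstance(resp.json().get("results"), list)
```

Each (query, expected status) pair could equally be loaded from a file or produced by a generator seeded with popular site queries, as the abstract suggests.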
10

Deary, Ian J., Elizabeth J. Austin, and Peter G. Caryl. "Testing versus understanding human intelligence." Psychology, Public Policy, and Law 6, no. 1 (2000): 180–90. http://dx.doi.org/10.1037/1076-8971.6.1.180.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Intelligence testing"

1

Stockton, Patricia. "Dementia as a major public health concern : intelligence testing revisited." Thesis, London School of Economics and Political Science (University of London), 1996. http://etheses.lse.ac.uk/1441/.

Full text
Abstract:
In 1976 it was proposed that senile dementia, a potential affliction of old age, be redefined as Alzheimer's disease, a rare diagnosis previously assigned to presenile dementia occurring in middle life. In response to a "public culture" generated by those caring for the afflicted, together with leaders of the biomedical research community, substantial financing has been allocated by the U.S. Congress to the National Institutes of Health for investigation of senile dementia redefined as a "dread disease". This has funded studies in the neurosciences and a range of epidemiological and high-technology diagnostic investigations for which psychiatry developed a "case-finding" method. The "cognitive paradigm" for dementia was conceived by American psychiatry within a now dominant "biological" model which imputes physical causation to mental disorders and stresses "objectivity" in diagnosis. This has legitimated the use of "mental test" instruments based upon, or validated against, "intelligence tests" developed by psychologists for quantification of "intelligence", now redefined as "cognition". In a study funded by the National Institute of Mental Health, three cognitive assessment instruments were administered to a sample of individuals aged 60-93 with a broad range of educational experience across the age spectrum. Education rather than age was found to be the most significant predictor of test results for each instrument, and when the tests were repeated a marked "learning effect" was detected among those with the least education and lowest baseline scores. However, the identification of low education as a predictor of "cognitive impairment" indicative of dementia, albeit a less powerful one than age in other investigations, has now been interpreted as a "risk factor" rather than a confounding variable and now enters into genetic mental testing models. Negative stereotyping of "old age", strongly associated with images of "senility", and "burden of ageing" economic arguments have therefore been reinforced by the dissemination of prevalence estimates from epidemiological studies conducted in communities in which there is an inverse correlation between age and education. In the meantime, basic scientists have failed to discriminate precisely between neuropathological changes indicative of "disease" and those of "normal ageing", or to establish a functional link between such changes and dementia behaviour in vivo. In consequence, the legitimating rationale for public financing of the "Alzheimer's enterprise", i.e. "clinical benefit", remains elusive.
APA, Harvard, Vancouver, ISO, and other styles
2

Nilsson, Joakim, and Andreas Jonasson. "Using Artificial Intelligence for Gameplay Testing On Turn-Based Games." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16716.

Full text
Abstract:
Background. Game development is a constantly evolving multi-billion-dollar industry, and the need for quality products is very high. Testing the games, however, is a very time-consuming and tedious task, often coming down to repeating sequences until a requirement has been met. But what if some parts of it could be automated, handled by an artificial intelligence that can play the game day and night, giving statistics about the gameplay as well as reports about errors that occurred during the session? Objectives. This thesis is done in cooperation with Fall Damage Studio AB, and aims to find and implement a suitable artificial intelligent agent to perform automated tests on a game Fall Damage Studio AB are currently developing, Project Freedom. The objective is to identify potential problems, benefits, and use cases of using a technique such as this. A secondary objective is to also identify what is needed by the game for this kind of technique to be useful. Methods. To test the technique, a Monte-Carlo Tree Search algorithm was identified as the most suitable algorithm and implemented for use in two different types of experiments. The first was to evaluate how varying limitations in terms of the number of iterations and depth affected the results of the algorithm. This was done to see if it was possible to change these factors and find a point where acceptable levels of play were achieved and further increases to these factors gave limited enhancements to this level but increased the time. The second experiment aimed to evaluate what useful data can be extracted from a game, both in terms of gameplay-related data as well as error information from crashes. Project Freedom was only used for the second test due to constraints that were out of scope for this thesis to try and repair. Results. The thesis has identified several requirements that are needed for a game to use a technique such as this in a useful way. For Monte-Carlo Tree Search specifically, the game is required to have a game state that is quick to create a copy of and a game simulation that can be run in a short time. The game must also be tested for the depth and iteration point beyond which the value of increasing these values diminishes. More generally, the algorithm of choice must be a part of the design process, and different games might require different kinds of algorithms. Adding this type of algorithm at a late stage in development, as was done for this thesis, might be possible if precautions are taken. Conclusions. This thesis shows that using artificial intelligence agents for gameplay testing is definitely possible, but it needs to be considered in the early part of the development process as no one-size-fits-all approach is likely to exist. Different games will have their own requirements, some potentially more general for that type of game, and some unique for that specific game. Thus different algorithms will work better on certain types of games compared to other ones, and they will need to be tweaked to perform optimally on a specific game.
APA, Harvard, Vancouver, ISO, and other styles
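Entry 2's abstract applies Monte-Carlo Tree Search to automated gameplay testing and stresses two requirements: a game state that is cheap to copy and a simulation that runs quickly, both governed by iteration and depth budgets. The following minimal Python sketch of a generic MCTS loop only illustrates that technique; it is not the thesis's implementation, and the Game interface (copy, legal_moves, play, is_over, result) is a hypothetical stand-in.

```python
import math
import random

def mcts_choose(root_state, iterations=200, max_depth=20, c=1.4):
    """Pick a move for root_state via Monte-Carlo Tree Search.

    root_state is assumed to expose copy(), legal_moves(), play(move),
    is_over(), and result() returning a numeric reward for the tested agent.
    """
    root = {"parent": None, "move": None, "children": [],
            "visits": 0, "value": 0.0,
            "untried": list(root_state.legal_moves())}

    def ucb(node):
        # UCB1: exploit average value, explore rarely visited children.
        return (node["value"] / node["visits"]
                + c * math.sqrt(math.log(node["parent"]["visits"]) / node["visits"]))

    for _ in range(iterations):
        node, state = root, root_state.copy()  # cheap state copies are a stated requirement

        # Selection: descend through fully expanded nodes by UCB1.
        while not node["untried"] and node["children"]:
            node = max(node["children"], key=ucb)
            state.play(node["move"])

        # Expansion: add one untried move as a new child.
        if node["untried"]:
            move = node["untried"].pop()
            state.play(move)
            child = {"parent": node, "move": move, "children": [],
                     "visits": 0, "value": 0.0,
                     "untried": list(state.legal_moves())}
            node["children"].append(child)
            node = child

        # Simulation: random playout, capped by the depth budget.
        depth = 0
        while not state.is_over() and depth < max_depth:
            state.play(random.choice(state.legal_moves()))
            depth += 1
        reward = state.result()

        # Backpropagation: update statistics along the path to the root.
        while node is not None:
            node["visits"] += 1
            node["value"] += reward
            node = node["parent"]

    # Return the most-visited move at the root.
    return max(root["children"], key=lambda n: n["visits"])["move"]
```

The iterations and max_depth parameters correspond to the budgets the thesis's first experiment varies to find the point of diminishing returns.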
3

Hartman, Chad. "Field-testing the intelligence estimate: A strategy for genuine learning." Maxwell AFB, Ala.: School of Advanced Air and Space Studies, 2008. https://www.afresearch.org/skins/rims/display.aspx?moduleid=be0e99f3-fc56-4ccb-8dfe-670c0822a153&mode=user&action=downloadpaper&objectid=b63f14d9-aca5-49a8-b0ba-538c42a24fb3&rs=PublishedSearch.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Orrison, Nancy Lynn Robertson. "Adequate and appropriate intelligence testing of moderately mentally retarded children." W&M ScholarWorks, 1992. https://scholarworks.wm.edu/etd/1539618394.

Full text
Abstract:
The intelligence of moderately mentally retarded (MR) children is difficult to assess because they often have concurrent physical or sensory impairments which adversely affect their test performance. The purpose of this study was to determine if necessary adaptations are made when assessing children who are moderately MR for educational placement in the State of Virginia. A survey was sent to public school psychologists in the State of Virginia as identified by the 1990-91 roster obtained from the Virginia Department of Education. The survey inquired as to their normal methods of intelligence testing used with the moderately mentally retarded population. The results of the survey and a review of the literature were used to determine methods of successful assessment of children who are moderately mentally retarded. The results of the study indicate that more than one intelligence measure must be made to validate the results. The inclusion of adaptive behavior scales is necessary to satisfy the criteria for mental retardation. Modifications are often necessary to prevent physical handicaps from suppressing the child's scores on standard intelligence tests. What is needed are precisely stated modifications, included with standard intelligence tests, which accommodate the needs of moderately mentally retarded children.
APA, Harvard, Vancouver, ISO, and other styles
5

Goward, L. M. "An investigation of the factors contributing to scores on intelligence tests." Thesis, University of Manchester, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.383893.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Liau, Chee Hong. "Computational Intelligence-based Testing for Robust Circuit Design." Aachen: Shaker, 2006. http://d-nb.info/1186583703/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Emery, Kristine Louise. "Testing in the schools individualized intelligence tests and curriculum based measurements /." Online version, 2003. http://www.uwstout.edu/lib/thesis/2003/2003emeryk.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Maller, Susan Joyce. "Validity and item bias of the WISC-III with deaf children." Diss., The University of Arizona, 1994. http://hdl.handle.net/10150/186756.

Full text
Abstract:
The Wechsler Intelligence Scale for Children-Third Edition (WISC-III) is likely to become the most widely used test of intelligence with deaf children, based on the popularity of the previous versions of the test. Because the test was constructed for hearing children who use spoken English, the following major research questions were asked: (a) Does the WISC-III demonstrate adequate construct validity? and (b) Do specific items exhibit differential item functioning (DIF), and does the nature of the content of each item that exhibits DIF imply that the item is biased? The test was translated into sign language and administered to a total of 110 deaf children at three different sites. The deaf children ranged from ages 8 through 16 (M = 13.25, SD = 2.37), had hearing losses identified as severe or worse, were prelingually deaf, used sign language as their primary means of communication, and were not identified as having any additional handicapping conditions. The sample of deaf children was compared to a sample of 110 hearing children similar in age and Performance IQ. Construct validity was examined using a LISREL multi-sample covariance structure analysis. The covariance structures were different (χ²(91) = 119.42, p = .024). A Rasch model was used to detect DIF on the following subtests: Picture Completion, Information, Similarities, Arithmetic, Vocabulary, and Comprehension. All of these subtests exhibited DIF, and DIF plus the differences in mean logit ability resulted in numerous items that were more difficult for deaf children on the above Verbal subtests. Item bias was judged by examining the contents of items that exhibited DIF. Items were biased generally due to translation issues and differences in the educational curricula. Thus, deaf children are at a distinct disadvantage when taking these WISC-III subtests. Practitioners are urged to consider these findings when assessing deaf children.
APA, Harvard, Vancouver, ISO, and other styles
9

Steffey, Dixie Rae. "The relationship between the Wechsler Adult Intelligence Scale - Revised and the Stanford-Binet Intelligence Scale: Fourth Edition in brain-damaged adults." Diss., The University of Arizona, 1988. http://hdl.handle.net/10150/184412.

Full text
Abstract:
This study investigated the relationship between the Wechsler Adult Intelligence Scale-Revised (WAIS-R) and the Stanford-Binet Intelligence Scale: Fourth Edition (SBIV) in a brain-damaged adult sample. The sample in this study was composed of 30 adult patients at two residential treatment programs who completed comprehensive psychological evaluations between August, 1986 and November, 1987. Each patient was administered both the WAIS-R and the SBIV as part of these evaluations. Data gathered in this study was submitted to Pearson product moment correlational statistical procedures. Significant correlations were found in the following pairs of summary scores: the SBIV Test Composite Standard Age Score (SAS) and the WAIS-R Full Scale IQ; the SBIV Abstract/Visual Reasoning Area SAS and the WAIS-R Performance IQ; the SBIV Quantitative Reasoning Area SAS and the WAIS-R Verbal Scale IQ; the SBIV Verbal Reasoning Area SAS and the WAIS-R Verbal Scale IQ; the SBIV Short-Term Memory Area SAS and the WAIS-R Verbal Scale IQ; and the SBIV Short-Term Memory Area SAS and the WAIS-R Full Scale IQ. Significant correlations were also found in the following pairs of individual subtest results: the SBIV and WAIS-R Vocabulary subtests; the SBIV Memory for Digits subtest and the WAIS-R Digit Span subtest; the SBIV Pattern Analysis subtest and the WAIS-R Block Design subtest; and the SBIV Paper Folding and Cutting subtest and the WAIS-R Picture Arrangement subtest. Directions for future research were also suggested upon review of the subtest correlation matrix and the descriptive statistics of data generated.
APA, Harvard, Vancouver, ISO, and other styles
10

Horn, Jocelyn L. "An examination of shortened measures of intelligence in the assessment of giftedness." Virtual Press, 2006. http://liblink.bsu.edu/uhtbin/catkey/1354647.

Full text
Abstract:
The overall purpose of this study was to examine the relationships between two recently revised measures of intelligence (Woodcock-Johnson Tests of Cognitive Ability, Third Edition and Stanford Binet Intelligence Scale, Fifth Edition) and three shortened measures of intelligence (Woodcock-Johnson Tests of Cognitive Ability, Third Edition Brief Intellectual Ability Score, Stanford Binet Intelligence Scale, Fifth Edition Abbreviated IQ, and the Kaufman Brief Intelligence Test IQ Composite). Specifically, this study examined the accuracy of the three shortened scores in their ability to predict giftedness based on children's scores on the two full measures, with the intention of examining the implications of using shortened measures in a screening process for gifted identification. Participants were a group of 202 third-grade students enrolled in a suburban school district located in the Midwest. These students were selected for the study based on high achievement and/or cognitive scores on a state standardized test. The participants ranged in age from 8 years, 4 months to 10 years, 11 months and were assessed during the spring of their third grade year in 2003 and 2004. These children were administered the three measures over a two-day period in a counterbalanced order. A set of univariate and multivariate procedures were used to examine hypothesized relationships between full and shortened measures. Significant positive relationships were observed between all five measures examined, although the highest correlations were produced between the full measure scores and their short forms. Discriminant function analyses were conducted to determine the accuracy of the three shortened measures in their prediction of giftedness based on five separate criteria using two full scale measures of intelligence. The results of all five multivariate discriminant function analyses were significant, indicating that the three shortened measures were able to group children accurately as compared to full scale scores, with classification rates ranging between 76.7 and 90.6. These analyses further revealed that the WJ III COG BIA was best able to predict giftedness in most cases, regardless of the criteria used. These results are intended to provide educators with information about the accuracy of three different shortened measures of intelligence so that informed decisions can be made regarding the use of these measures in selection processes for gifted programming.
Department of Educational Psychology
APA, Harvard, Vancouver, ISO, and other styles
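The dissertation above evaluates short-form IQ scores with discriminant function analyses, checking how accurately each short form classifies children as gifted relative to a full-scale criterion. Below is a minimal sketch of that kind of analysis using synthetic scores and scikit-learn's linear discriminant analysis rather than the study's data or software; the 130-point cutoff and the score distributions are illustrative assumptions only.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 202                                          # matches the study's sample size

# Synthetic scores: a full-scale IQ and a correlated short-form score.
full_scale_iq = rng.normal(120, 12, n)
brief_iq = full_scale_iq + rng.normal(0, 5, n)

# One possible giftedness criterion, defined from the full measure.
gifted = (full_scale_iq >= 130).astype(int)

# Discriminant function analysis: predict group membership from the short form,
# then report the classification rate against the full-scale criterion.
lda = LinearDiscriminantAnalysis().fit(brief_iq.reshape(-1, 1), gifted)
predicted = lda.predict(brief_iq.reshape(-1, 1))
print(f"classification rate: {accuracy_score(gifted, predicted):.1%}")
```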

Books on the topic "Intelligence testing"

1

Glaser, Sarah. Intelligence Testing. Thousand Oaks, CA: CQ Press, 1993. http://dx.doi.org/10.4135/cqresrre19930730.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Fletcher, Richard (Richard B.), ed. Intelligence and intelligence testing. London: Routledge, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Coon, Kathy. Kathy Coon's dog intelligence test. Baton Rouge, La: K. Coon, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Darrin, Evans, ed. Know your child's IQ. New York: Penguin Books, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Luther, Michael Gerald. The genie in the lamp: Intelligence testing reconsidered. 2nd ed. North York, Ont: Captus Press, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Luther, Michael Gerald. The genie in the lamp: Intelligence testing reconsidered. North York, Ont: Captus Press, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Eysenck, H. J. Know your child's IQ. [Great Britain]: Mind Games, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Dudink, A. C. M. Verstand op nul: Leerplicht en intelligentieverloop [Mind at zero: Compulsory education and the course of intelligence]. Amersfoort: Acco, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Howard Andrew Knox: Pioneer of intelligence testing at Ellis Island. New York: Columbia University Press, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lin, Wang, ed. Er tong zhi shang ce ding yu ti gao [Measuring and improving children's IQ]. [Guangzhou Shi]: Guangdong gao deng jiao yu chu ban she, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Intelligence testing"

1

Ninnemann, Kristi. "Intelligence Testing." In Encyclopedia of Immigrant Health, 923–25. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4419-5659-0_409.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Guo, Wen, Yijun Chen, Shen Liu, and Xiaochu Zhang. "Intelligence Testing." In Encyclopedia of Evolutionary Psychological Science, 1–3. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-16999-6_2186-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

McCallum, R. Steve. "Intelligence Testing." In Encyclopedia of Child Behavior and Development, 826–29. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-79061-9_1520.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Issarraras, Abigail, and Johnny L. Matson. "Intelligence Testing." In Handbook of Childhood Psychopathology and Developmental Disabilities Assessment, 59–70. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-93542-3_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Guo, Wen, Yijun Chen, Shen Liu, and Xiaochu Zhang. "Intelligence Testing." In Encyclopedia of Evolutionary Psychological Science, 4172–74. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-319-19650-3_2186.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lochner, Katharina. "Intelligence and Intelligence Testing." In Successful Emotions, 21–41. Wiesbaden: Springer Fachmedien Wiesbaden, 2016. http://dx.doi.org/10.1007/978-3-658-12231-7_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Fehlmann, Thomas. "Testing Artificial Intelligence." In Communications in Computer and Information Science, 709–21. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28005-5_55.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Numan, Gerard. "Testing Artificial Intelligence." In The Future of Software Quality Assurance, 123–36. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-29509-7_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Spores, John M. "Intelligence Tests." In Psychological Assessment and Testing, 213–50. 2nd ed. New York: Routledge, 2022. http://dx.doi.org/10.4324/9780429326820-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Reynolds, Cecil R., Robert A. Altmann, and Daniel N. Allen. "Assessment of Intelligence." In Mastering Modern Psychological Testing, 331–82. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-59455-8_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Intelligence testing"

1

Zhu, Hong, Dongmei Liu, Ian Bayley, Rachel Harrison, and Fabio Cuzzolin. "Datamorphic Testing: A Method for Testing Intelligent Applications." In 2019 IEEE International Conference On Artificial Intelligence Testing (AITest). IEEE, 2019. http://dx.doi.org/10.1109/aitest.2019.00018.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Talukdar, S. "Low Intelligence Agents for Market Testing." In Proceedings of the 39th Annual Hawaii International Conference on System Sciences (HICSS'06). IEEE, 2006. http://dx.doi.org/10.1109/hicss.2006.257.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Mao, Ke, Mark Harman, and Yue Jia. "Crowd intelligence enhances automated mobile testing." In 2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 2017. http://dx.doi.org/10.1109/ase.2017.8115614.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Mujtaba, Dena F., and Nihar R. Mahapatra. "Artificial Intelligence in Computerized Adaptive Testing." In 2020 International Conference on Computational Science and Computational Intelligence (CSCI). IEEE, 2020. http://dx.doi.org/10.1109/csci51800.2020.00116.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Singh, Akshay, and Omar Al-Azzam. "Artificial Intelligence Applied to Software Testing." In 2nd International Conference on Software Engineering and Automation. Academy & Industry Research Collaboration Center, 2023. http://dx.doi.org/10.5121/csit.2023.132001.

Full text
Abstract:
The study investigates the background, advantages, and difficulties of AI-based testing. The use of artificial intelligence (AI) has shown great promise as a means of enhancing software testing procedures. To improve test case generation, bug prediction, and test result analysis, AI-based testing approaches use machine learning, NLP (natural language processing), GUIs (graphical user interfaces), genetic algorithms, and robotic process automation. We also provide a brief literature review of recent studies in the field, focusing on the various approaches and tools proposed for AI-based software testing. We conclude with a strategy for introducing AI-based testing and a list of possible approaches and resources. Overall, this paper provides a comprehensive survey of AI-based software testing and highlights the potential benefits and challenges of this emerging field.
APA, Harvard, Vancouver, ISO, and other styles
6

Paces, Pavel. "Avionics Testing with Artificial Intelligence Support." In 2020 IEEE/AIAA 39th Digital Avionics Systems Conference (DASC). IEEE, 2020. http://dx.doi.org/10.1109/dasc50938.2020.9256563.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Korostelev, Dmitriy Aleksandrovich, Aleksey Radchenko, Nikita Silchenko, Rostislav Krylov, and Pavel Migal. "Software Platform for Designing and Running Artificial Intelligence Competitions with a Visualization Subsystem." In 29th International Conference on Computer Graphics, Image Processing and Computer Vision, Visualization Systems and the Virtual Environment GraphiCon'2019. Bryansk State Technical University, 2019. http://dx.doi.org/10.30987/graphicon-2019-2-295-299.

Full text
Abstract:
The paper describes a solution to the problem of testing the efficiency of new ideas and algorithms for intelligent systems. The main approach is to simulate, in competitive form, the interaction of intelligent agents implementing different algorithms. A specialized software platform supports this simulation. The paper describes the platform developed for running artificial intelligence competitions and its subsystems: a server, a client, and a visualization component. Operational testing of the developed system is also described, which helps to evaluate the efficiency of various artificial intelligence algorithms in simulations such as "Naval Battle".
APA, Harvard, Vancouver, ISO, and other styles
8

Sztipanovits, J., S. Padalkar, C. Krishnamurthy, and R. B. Purves. "Testing And Validation In Artificial Intelligence Programming." In Robotics and IECON '87 Conferences, edited by Wun C. Chiou, Sr. SPIE, 1987. http://dx.doi.org/10.1117/12.942907.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Singhal, Priyank, Shakti Kundu, Harshita Gupta, and Harsh Jain. "Application of Artificial Intelligence in Software Testing." In 2021 10th International Conference on System Modeling & Advancement in Research Trends (SMART). IEEE, 2021. http://dx.doi.org/10.1109/smart52563.2021.9676244.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Crowder, James A., and John N. Carbone. "Artificial Emotional Intelligence Testing for AI Avatars." In 2023 Congress in Computer Science, Computer Engineering, & Applied Computing (CSCE). IEEE, 2023. http://dx.doi.org/10.1109/csce60160.2023.00086.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Intelligence testing"

1

Chappelle, Wayne, N. V. Tran, William Thompson, Tanya Goodman, Kellie Hyde, and Jennifer Heaton. Intelligence and Neuropsychological Aptitude Testing of U.S. Air Force MQ-1 Predator Pilot Training Candidates. Fort Belvoir, VA: Defense Technical Information Center, November 2012. http://dx.doi.org/10.21236/ada577826.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mütterlein, Joschka, Tamara Ranner, and Mareike Müller. Artificial Intelligence at Universities: Impact on Grades, Student Experience, and Teaching. Macromedia University, 2024. http://dx.doi.org/10.56843/jm001tr002.

Full text
Abstract:
The rapid technological progress in artificial intelligence (AI) has placed universities in a field of tension in which regulation and the urgently needed teaching of AI skills must be balanced. The discussion often neglects the unique position that universities can offer as a protected space for testing, making mistakes, and learning. After all, one of the fundamental tasks of a university is to prepare students for highly qualified jobs in life after graduation, and AI has already become an integral part of this.
APA, Harvard, Vancouver, ISO, and other styles
3

Chappelle, Wayne L., Bret D. Heerema, and William T. Thompson. Factor Analysis of Computer-Based Multidimensional Aptitude Battery-Second Edition Intelligence Testing from Rated U.S. Air Force Pilots. Fort Belvoir, VA: Defense Technical Information Center, September 2012. http://dx.doi.org/10.21236/ada583710.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Mitra, Sayan. Continuous Integration and Deployment Infrastructure for Rapid Testing of Autonomous Transportation Systems. Illinois Center for Transportation, June 2024. http://dx.doi.org/10.36501/0197-9191/24-017.

Full text
Abstract:
This project has led to the creation of an automated-testing infrastructure for autonomy code. The framework uses Jenkins, AWS Lambda, Docker, Kubernetes, and other open-source technologies. It was utilized and evaluated both for the Generalized Racing Intelligence Competition (GRAIC) and for evaluating student programming assignments for the principles of safe autonomy course (ECE484). This infrastructure has improved our capability to evaluate (autograde) student design assignments, and students can also receive precise feedback on their work as they progress through various design challenges. This infrastructure has significantly improved our capability in both automatic, cloud-based testing of autonomy code and making this service available to a large population of students.
APA, Harvard, Vancouver, ISO, and other styles
5

Swearingen, Julie, Wayne Chappelle, Tanya Goodman, and William Thompson. Multidimensional Aptitude Battery-Second Edition Intelligence Testing of Remotely Piloted Aircraft Training Candidates Compared with Manned Airframe Training Candidates. Fort Belvoir, VA: Defense Technical Information Center, March 2015. http://dx.doi.org/10.21236/ada623714.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Rinaudo, Christina, William Leonard, Jaylen Hopson, Christopher Morey, Robert Hilborn, and Theresa Coumbe. Enabling understanding of artificial intelligence (AI) agent wargaming decisions through visualizations. Engineer Research and Development Center (U.S.), April 2024. http://dx.doi.org/10.21079/11681/48418.

Full text
Abstract:
The process to develop options for military planning course of action (COA) development and analysis relies on human subject matter expertise. Analyzing COAs requires examining several factors and understanding complex interactions and dependencies associated with actions, reactions, proposed counteractions, and multiple reasonable outcomes. In Fiscal Year 2021, the Institute for Systems Engineering Research team completed efforts resulting in a wargaming maritime framework capable of training an artificial intelligence (AI) agent with deep reinforcement learning (DRL) techniques within a maritime scenario where the AI agent credibly competes against blue agents in gameplay. However, a limitation of using DRL for agent training relates to the transparency of how the AI agent makes decisions. If leaders were to rely on AI agents for COA development or analysis, they would want to understand those decisions. In order to support increased understanding, researchers engaged with stakeholders to determine visualization requirements and developed initial prototypes for stakeholder feedback in order to support increased understanding of AI-generated decisions and recommendations. This report describes the prototype visualizations developed to support the use case of a mission planner and an AI agent trainer. The prototypes include training results charts, heat map visualizations of agent paths, weight matrix visualizations, and ablation testing graphs.
APA, Harvard, Vancouver, ISO, and other styles
7

Aiken, Catherine. Classifying AI Systems. Center for Security and Emerging Technology, November 2021. http://dx.doi.org/10.51593/20200025.

Full text
Abstract:
This brief explores the development and testing of artificial intelligence system classification frameworks intended to distill AI systems into concise, comparable and policy-relevant dimensions. Comparing more than 1,800 system classifications, it points to several factors that increase the utility of a framework for human classification of AI systems and enable AI system management, risk assessment and governance.
APA, Harvard, Vancouver, ISO, and other styles
8

Goodwin, Sarah, Yigal Attali, Geoffrey LaFlair, Yena Park, Andrew Runge, Alina von Davier, and Kevin Yancey. Duolingo English Test - Writing Construct. Duolingo, March 2023. http://dx.doi.org/10.46999/arxn5612.

Full text
Abstract:
Assessments, especially those used for high-stakes decision making, draw on evidence-based frameworks. Such frameworks inform every aspect of the testing process, from development to results reporting. The frameworks that language assessment professionals use draw on theory in language learning, assessment design, and measurement and psychometrics in order to provide underpinnings for the evaluation of language skills including speaking, writing, reading, and listening. This paper focuses on the construct, or underlying trait, of writing ability. The paper conceptualizes the writing construct for the Duolingo English Test, a digital-first assessment. “Digital-first” includes technology such as artificial intelligence (AI) and machine learning, with human expert involvement, throughout all item development, test scoring, and security processes. This work is situated in the Burstein et al. (2022) theoretical ecosystem for digital-first assessment, the first representation of its kind that incorporates design, validation/measurement, and security all situated directly in assessment practices that are digital first. The paper first provides background information about the Duolingo English Test and then defines the writing construct, including the purposes for writing. It also introduces principles underpinning the design of writing items and illustrates sample items that assess the writing construct.
APA, Harvard, Vancouver, ISO, and other styles
9

Muelaner, Jody Emlyn. Generative Design in Aerospace and Automotive Structures. Warrendale, PA: SAE International, July 2024. http://dx.doi.org/10.4271/epr2024016.

Full text
Abstract:
<div class="section abstract"><div class="htmlview paragraph">Semi-automated computational design methods involving physics-based simulation, optimization, machine learning, and generative artificial intelligence (AI) already allow greatly enhanced performance alongside reduced cost in both design and manufacturing. As we progress, developments in user interfaces, AI integration, and automation of workflows will increasingly reduce the human inputs required to achieve this. With this, engineering teams must change their mindset from designing products to specifying requirements, focusing their efforts on testing and analysis to provide accurate specifications.</div><div class="htmlview paragraph"><b>Generative Design in Aerospace and Automotive Structures</b> discusses generative design in its broadest sense, including the challenges and recommendations regarding multi-stage optimizations.</div><div class="htmlview paragraph"><a href="https://www.sae.org/publications/edge-research-reports" target="_blank">Click here to access the full SAE EDGE</a><sup>TM</sup><a href="https://www.sae.org/publications/edge-research-reports" target="_blank"> Research Report portfolio.</a></div></div>
APA, Harvard, Vancouver, ISO, and other styles
10

Lines, Lisa M., Marque C. Long, Jamie L. Humphrey, Crystal T. Nguyen, Suzannah Scanlon, Olivia K. G. Berzin, Matthew C. Brown, and Anupa Bir. Artificially Intelligent Social Risk Adjustment: Development and Pilot Testing in Ohio. RTI Press, September 2022. http://dx.doi.org/10.3768/rtipress.2022.rr.0047.2209.

Full text
Abstract:
Prominent voices have called for a better way to measure, predict, and adjust for social factors in healthcare and population health. Local area characteristics are sometimes framed as a proxy for patient characteristics, but they are often independently associated with health outcomes. We have developed an “artificially intelligent” approach to risk adjustment for local social determinants of health (SDoH) using random forest models to understand life expectancy at the Census tract level. Our Local Social Inequity score draws on more than 150 neighborhood-level variables across 10 SDoH domains. As piloted in Ohio, the score explains 73 percent of the variation in life expectancy by Census tract, with a mean squared error of 4.47 years. Accurate multidimensional, cross-sector, small-area social risk scores could be useful in understanding the impact of healthcare innovations, payment models, and SDoH interventions in communities at higher risk for serious illnesses and diseases; identifying neighborhoods and areas at highest risk of poor outcomes for better targeting of interventions and resources; and accounting for factors outside of providers’ control for more fair and equitable performance/quality measurement and reimbursement.
APA, Harvard, Vancouver, ISO, and other styles
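The report above fits random forest models to roughly 150 neighborhood-level SDoH variables to explain Census-tract life expectancy, reporting explained variance and mean squared error. Below is a minimal sketch of that modeling pattern with scikit-learn on synthetic placeholder data; the feature matrix, coefficients, and sample size are invented for illustration and do not reproduce the Local Social Inequity score or the Ohio results.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: ~150 neighborhood SDoH variables per Census tract.
X = rng.normal(size=(3000, 150))
# Synthetic tract-level life expectancy driven by a handful of the features.
y = 75 + X[:, :10] @ rng.uniform(0.5, 1.5, 10) + rng.normal(0, 2, 3000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
forest = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)

# Evaluate on held-out tracts, analogous to the error and variance-explained
# figures the report cites for its life expectancy model.
mse = mean_squared_error(y_test, forest.predict(X_test))
print(f"held-out MSE: {mse:.2f}")
print(f"explained variance (R^2): {forest.score(X_test, y_test):.2f}")
```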