Academic literature on the topic 'Overseas country screening'

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Overseas country screening.'

Journal articles on the topic "Overseas country screening"

1

Ma, Yingyi. "Academic Elites or Economic Elites." Journal of International Students 10, no. 3 (August 15, 2020): xxiii–xxiv. http://dx.doi.org/10.32674/jis.v10i4.2003.

Abstract:
At an academic conference, I chatted with the Dean of Admissions from a prestigious public university in the Midwest and was struck by a story he told me: A Chinese doctoral student walked into his office one day and blasted him for admitting so many undergraduates from China, saying that this devalued his own credentials, as the qualities of those Chinese undergraduates, in his opinion, bore no comparison to his own. The dean narrated this story half-jokingly, apparently finding it funny. However, he might not have fully understood the roots of this student’s complaint. In the test-oriented Chinese education system, students are ranked by test scores, and by test scores only. In this student’s eyes, he had scored high on the competitive Gaokao and then been selected through an equally, if not more, competitive screening to study at this famous U.S. university (Liu 2016). In his view, he had abilities superior to those who were not able to score high on the Gaokao but, instead, paid to study at the same university he had tried so hard to get into. This student’s statements may sound crude and cruel, but they are based on the perspective of his small world. However, the larger world is changing and getting flatter (Friedman 2005). In part, that means an increasing number of Chinese students have access to world-class universities. Despite the massive growth of the higher education sector in China, only two Chinese universities are ranked among the top 100 universities in the world, while 41 of those top 100 are located in the United States (Times Higher Education 2018). With the increasing proportion of upper-middle-class families in today’s China, more and more Chinese students do not have to rely on American scholarships to study at American institutions.
The recent history of Chinese students’ dependency on full American scholarships to study abroad was merely a reflection of the country’s economic deprivation and limited education opportunities at that time. This gave rise to the mindset of academic elitism exhibited by this doctoral student, which sees prestigious universities as belonging to the few students who can outscore the masses. Perhaps, instead, he should feel happy for the younger generation of Chinese students who have the freedom to choose. This change in Chinese international students’ academic and social backgrounds, and their ensuing experiences abroad, has motivated my research over the past seven years. My book, Ambitious and Anxious (Ma 2020), has shown a diverse set of Chinese students in terms of both family backgrounds and education trajectories. Their capacity to pay for overseas education has often obscured their socioeconomic diversity, their parents’ sacrifices, and their own academic and social challenges and struggles. In other words, this freedom to choose and to access a wider set of education options overseas is backed by economic resources that are vastly unequally distributed among Chinese students and their families. Perhaps this doctoral student is frustrated partly because American universities often admit Chinese undergraduates who have the resources to study here. This touches upon a thorny identity issue that American universities, particularly selective ones, have to grapple with: How can they avoid being considered bastions of privilege and wealth? Over the past few decades, American universities have made efforts to recruit students from humble backgrounds. However, these efforts have been almost exclusively limited to domestic students. For many institutions, the tuition dollars of international students are a key revenue source for funding financial aid for domestic students.
This logic may help balance the books, but it runs the risk of challenging institutions’ meritocratic ideals. The increasing concentration of economic elites from foreign countries may not enter into the diversity metrics of campus administrators, but surely it tacitly reinforces the culture of privilege and wealth that our universities strive to break out of.
2

Warren, Robert, and Donald Kerwin. "The 2,000 Mile Wall in Search of a Purpose: Since 2007 Visa Overstays have Outnumbered Undocumented Border Crossers by a Half Million." Journal on Migration and Human Security 5, no. 1 (March 2017): 124–36. http://dx.doi.org/10.1177/233150241700500107.

Abstract:
The Trump administration has made the construction of an “impregnable” 2,000-mile wall across the length of the US-Mexico border a centerpiece of its executive orders on immigration and its broader immigration enforcement strategy. This initiative has been broadly criticized based on:
• Escalating cost projections: an internal Department of Homeland Security (DHS) study recently set the cost at $21.6 billion over three and a half years;
• Its necessity given the many other enforcement tools — video surveillance, drones, ground sensors, and radar technologies — and Border Patrol personnel that cover the US-Mexico border: former DHS Secretary Michael Chertoff and other experts have argued that a wall does not add enforcement value except in heavy crossing areas near towns, highways, or other “vanishing points” (Kerwin 2016);
• Its cost-effectiveness given diminished Border Patrol apprehensions (to roughly one-fourth the level of historic highs) and reduced illegal entries (to roughly one-tenth the 2005 level according to an internal DHS study) (Martinez 2016);
• Its efficacy as an enforcement tool: between FY 2010 and FY 2015, the current 654-mile pedestrian wall was breached 9,287 times (GAO 2017, 22);
• Its inability to meet the administration’s goal of securing “operational control” of the border, defined as “the prevention of all unlawful entries to the United States” (White House 2017);
• Its deleterious impact on bi-national border communities, the environment, and property rights (Heyman 2013); and
• Opportunity costs in the form of foregone investments in addressing the conditions that drive large-scale migration, as well as in more effective national security and immigration enforcement strategies.
The Center for Migration Studies (CMS) has reported on the dramatic decline in the US undocumented population between 2008 and 2014 (Warren 2016).
In addition, a growing percentage of border crossers in recent years have originated in the Northern Triangle states of Central America (CBP 2016). These migrants are fleeing pervasive violence, persecution, and poverty, and a large number do not seek to evade arrest, but present themselves to border officials and request political asylum. Many are de facto refugees, not illegal border crossers. This report speaks to another reason to question the necessity and value of a 2,000-mile wall: It does not reflect the reality of how the large majority of persons now become undocumented. It finds that two-thirds of those who arrived in 2014 did not illegally cross a border, but were admitted (after screening) on non-immigrant (temporary) visas, and then overstayed their period of admission or otherwise violated the terms of their visas. Moreover, this trend of increasing percentages of visa overstays will likely continue into the foreseeable future. The report presents information about the mode of arrival of the undocumented population that resided in the United States in 2014. To simplify the presentation, it divides the 2014 population into two groups: overstays and entries without inspection (EWIs). The term overstay, as used in this paper, refers to undocumented residents who entered the United States with valid temporary visas and subsequently established residence without authorization. The term EWI refers to undocumented residents who entered without proper immigration documents across the southern border. The estimates are based primarily on detailed estimates of the undocumented population in 2014 compiled by CMS and estimates of overstays for 2015 derived by DHS. Major findings include the following:
• In 2014, about 4.5 million US residents, or 42 percent of the total undocumented population, were overstays.
• Overstays accounted for about two-thirds (66 percent) of those who arrived (i.e., joined the undocumented population) in 2014.
• Overstays have exceeded EWIs every year since 2007, and 600,000 more overstays than EWIs have arrived since 2007.
• Mexico is the leading country for both overstays and EWIs; about one-third of undocumented arrivals from Mexico in 2014 were overstays.
• California has the largest number of overstays (890,000), followed by New York (520,000), Texas (475,000), and Florida (435,000).
• Two states had 47 percent of the 6.4 million EWIs in 2014: California (1.7 million) and Texas (1.3 million).
• The percentage of overstays varies widely by state: more than two-thirds of the undocumented who live in Hawaii, Massachusetts, Connecticut, and Pennsylvania are overstays. By contrast, the undocumented population in Kansas, Arkansas, and New Mexico consists of fewer than 25 percent overstays.
3

Cobanoglu, Cihan, Muhittin Cavusoglu, and Gozde Turktarhan. "A beginner’s guide and best practices for using crowdsourcing platforms for survey research: The Case of Amazon Mechanical Turk (MTurk)." Journal of Global Business Insights 6, no. 1 (March 2021): 92–97. http://dx.doi.org/10.5038/2640-6489.6.1.1177.

Abstract:
Introduction: Researchers around the globe are utilizing crowdsourcing tools to reach respondents for quantitative and qualitative research (Chambers & Nimon, 2019). Many social science and business journals are receiving studies that utilize crowdsourcing tools such as Amazon Mechanical Turk (MTurk), Qualtrics, MicroWorkers, ShortTask, ClickWorker, and Crowdsource (e.g., Ahn & Back, 2019; Ali et al., 2021; Esfahani & Ozturk, 2019; Jeong & Lee, 2017; Zhang et al., 2017). Even though these tools present a great opportunity for collecting large quantities of data quickly, some challenges must also be addressed. The purpose of this guide is to present the basic ideas behind the use of crowdsourcing for survey research and to provide a primer on best practices that will increase its validity and reliability. What is crowdsourcing research? Crowdsourcing describes the collection of information, opinions, or other types of input from a large number of people, typically via the internet, who may or may not receive (financial) compensation (Hargrave, 2019; Oxford Dictionary, n.d.). Within the behavioral sciences, crowdsourcing is defined as the use of internet services for hosting research activities and for creating opportunities for a large population of participants. Applications of crowdsourcing techniques have evolved over the decades, establishing the strong informational power of crowds. The advent of Web 2.0 has expanded the possibilities of crowdsourcing, with new online tools such as online reviews, forums, Wikipedia, Qualtrics, or MTurk, as well as other platforms such as Crowdflower and Prolific Academic (Peer et al., 2017; Sheehan, 2018). Crowdsourcing platforms in the age of Web 2.0 use remote labor recruited via the internet to help employers complete tasks that cannot be left to machines.
Key characteristics of crowdsourcing include payment for workers, their recruitment from any location, and the completion of tasks (Behrend et al., 2011). Crowdsourcing platforms also allow for relatively quick data collection compared to collection in the field, and participants are rewarded with an incentive—often financial compensation. Crowdsourcing not only offers a large participation pool but also a streamlined process for study design, participant recruitment, and data collection, as well as an integrated participant compensation system (Buhrmester et al., 2011). Also, compared to traditional marketing firms, crowdsourcing makes it easier to detect possible sampling biases (Garrow et al., 2020). Due to advantages such as reduced costs, diversity of participants, and flexibility, crowdsourcing platforms have surged in popularity among researchers. Advantages: MTurk is one of the most popular crowdsourcing platforms among researchers, allowing Requesters to submit tasks for Workers to complete (Cummings & Sibona, 2017). MTurk has been used as an online crowdsourcing platform for the recruitment of human subjects for research purposes (Paolacci & Chandler, 2014). Research has also shown MTurk to be a reliable and cost-effective tool, capable of providing representative data for research in the behavioral sciences (e.g., Crump et al., 2013; Goodman et al., 2013; Mason & Suri, 2012; Rand, 2012; Simcox & Fiez, 2014). In addition to its use in social science studies, the platform has been used in marketing, hospitality and tourism, psychology, political science, communication, and sociology contexts (Sheehan, 2018). To illustrate, between 2012 and 2017, more than 40% of the studies published in the Journal of Consumer Research used crowdsourcing websites for their data collection (Goodman & Paolacci, 2017).
Disadvantages: Although researchers have assessed crowdsourcing platforms as reliable and cost-effective for data collection in the behavioral sciences, they are not free of flaws. One disadvantage is the possibility of unsatisfactory data quality. In fact, the virtual setting of the survey means that the investigator is physically separated from the participant, and this lack of monitoring can lead to data quality issues (Sheehan, 2018). In addition, participants in survey research on crowdsourcing platforms are not always who they claim to be, creating issues of trust in the data provided and, ultimately, in the quality of the research findings (McGonagle, 2015; Smith et al., 2016). A recurrent concern with MTurk workers, for instance, is that they are experienced survey takers (Chandler et al., 2015). This experience is mainly acquired through the completion of dozens of surveys per day, especially when workers are repeatedly faced with similar items and scales. Smith et al. (2016) identified two types of problems in data collection using MTurk, namely cheaters and speeders. Compared to Qualtrics—which has strict screening and quality-control processes to ensure that participants are who they claim to be—MTurk appears to be less exacting about its workers. However, a downside of data collection with Qualtrics is higher fees—about $5.00 per questionnaire on Qualtrics, versus $0.50 to $1.50 on MTurk (Ford, 2017). Hence, few researchers have been able to conduct surveys and compare respondent pools with Qualtrics or other traditional marketing research firms (Garrow et al., 2020). Another challenge in using MTurk arises when trying to collect a desired number of responses from a population targeted to a specific city or area (Ross et al., 2010).
The issues inherent in the MTurk selection process have been the subject of several studies (e.g., Berinsky et al., 2012; Chandler et al., 2014, 2015; Harms & DeSimone, 2015; Paolacci et al., 2010; Rand, 2012). Feitosa et al. (2015) pointed out that international respondents may still identify themselves as U.S. respondents through the use of fake addresses and accounts. They found that 5% to 10% of participants identifying themselves as U.S. respondents were actually in overseas locations. Moreover, Babin et al. (2016) found that the use of trap questions allowed researchers to uncover that many respondents change their genders, ages, careers, or income within the course of a single survey. The issues of (a) experienced workers and (b) speeders—which, for MTurk, can be attributed to the platform being the main source of revenue for a given respondent—remain inherent to crowdsourcing platforms used for research purposes. Best practices: Some best practices can be recommended for the use of crowdsourcing platforms for data collection. Worker IDs can be matched with IDs from previous studies, allowing researchers to exclude responses from workers who answered previous similar studies (Goodman & Paolacci, 2017). Furthermore, researchers can manually assign qualifications on MTurk prior to data collection (Litman et al., 2015; Park & Park, 2020). When dealing with experienced workers, using multiple attention checks and designing the survey so that participants are exposed to the stimuli long enough to properly address the questions are also recommended (Sheehan, 2018). In this sense, shorter surveys are preferable to longer ones, which tax participants’ concentration and may, in turn, adversely affect the quality of their answers. Most importantly, pretest the survey to make sure that all parts work as expected.
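The ID-matching practice described above can be sketched as follows; this is a minimal illustration, assuming responses are stored as dictionaries with a hypothetical "worker_id" field (real MTurk Worker IDs are opaque alphanumeric strings, and the IDs below are made up):

```python
# Exclude workers who already took part in earlier, similar studies.
# "worker_id" is a hypothetical field name used for illustration.

def exclude_repeat_workers(responses, previous_worker_ids):
    """Keep only responses whose worker ID has not been seen before."""
    return [r for r in responses if r["worker_id"] not in previous_worker_ids]

# Usage with made-up IDs: "W2" answered a previous similar study.
current = [{"worker_id": "W1"}, {"worker_id": "W2"}, {"worker_id": "W3"}]
fresh = exclude_repeat_workers(current, {"W2"})  # keeps W1 and W3
```

In practice the same effect can be achieved before data collection by assigning a disqualifying MTurk qualification to previously seen workers, as the abstract notes.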
Researchers should also keep in mind that in the context of MTurk, the primary measurement instrument is the web interface. Thus, to avoid method biases, researchers should consider whether method factors emerge in the latent measurement models (Podsakoff et al., 2012). As such, time-lagged research designs may be preferred, as predictor and criterion variables can be measured at different points in time or administered on different platforms, such as Qualtrics vs. MTurk (Cheung et al., 2017). In general, the use of crowdsourcing platforms, including MTurk, may be appropriate depending on the research question, and data quality depends on the quality-control strategies researchers use (Cheung et al., 2017). Trade-offs between various validity types need to be prioritized according to the research objectives. From our experience using crowdsourcing tools for our own research, as editorial team members of several journals, and as chairs of several conferences, we offer the best practices outlined below. MTurk Worker (Respondent) Selection: Researchers should consider their study population before using MTurk for data collection; the platform should be used only for appropriate study populations. For example, if the study targets restaurant owners or company CEOs, MTurk workers may not be suitable. However, if the target population is diners, hotel guests, grocery shoppers, online shoppers, students, or hourly employees, an MTurk sample would be suitable. Researchers should use the selection tools in the software. For example, if you target workers from only one country, exclude responses that came from an internet protocol (IP) address outside the targeted country and report the results in the method section. Researchers should also consider the demographics of workers on MTurk, which should reflect the study’s target population.
For example, if the study focuses on baby boomers’ use of technology, then the MTurk sample should include only baby boomers. Similarly, the gender balance, racial composition, and income of the sample should mirror the target population. Researchers should use multiple screening tools that identify quality respondents and flag problematic response patterns. For example, MTurk provides an approval rate for each respondent, which reflects how often a worker’s previous submissions have been rejected for various reasons (e.g., entering a wrong code). We recommend requiring a 90% or higher approval rate. Researchers should also include screening questions of different types in different places to make sure that respondents are appropriate for the study. One approach is to use knowledge-based questions about the subject. For example, rather than asking “How experienced are you with accounting practices?”, a supplemental question such as “Which of the following is a component of an income statement?” should be integrated into a different section of the survey. Survey Validity: Researchers should conduct a pilot survey with MTurk workers to identify and fix any potential data quality and programming problems before the entire data set is collected. Researchers can estimate the time required to complete the survey from the pilot study. This average time should be used in calculating the incentive payment for the workers, such that the payment equals or exceeds the minimum wage in the targeted country. Researchers should build multiple validity-check tools into the survey. One is to ask attention-check questions such as “Please click on ‘strongly agree’ in this question” or “What is 2+2? Please choose 5” (Cobanoglu et al., 2016). Even though these attention questions are useful and should be implemented, experienced survey takers or bots easily identify and answer them correctly, but then give random answers to other questions.
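The two screening rules above (a minimum approval rate and a knowledge-based screener) can be sketched as a simple filter. The field names and the expected screener answer are illustrative assumptions, not part of the MTurk API:

```python
# Screen respondents per the recommendations above: require a >= 90%
# approval rate and a correct answer to a knowledge-based screener
# (e.g., "Which of the following is a component of an income statement?").
# Field names and the expected answer are hypothetical.

MIN_APPROVAL = 0.90

def passes_screening(resp, expected_answer="revenue"):
    return (resp["approval_rate"] >= MIN_APPROVAL
            and resp["screener_answer"].strip().lower() == expected_answer)

# Usage with made-up respondents:
ok = passes_screening({"approval_rate": 0.96, "screener_answer": "Revenue"})
bad = passes_screening({"approval_rate": 0.85, "screener_answer": "Revenue"})
```

Placing the screener in a different survey section than the self-reported experience question, as the abstract suggests, makes it harder for inattentive or dishonest respondents to answer consistently.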
Instead, we recommend building in more involved validity-check questions. One of the best is asking the same question in different places and in different forms. For example, asking respondents’ age at the beginning of the survey and then asking their year of birth at the end is an effective way to check that they are replying honestly. Exclude all those who answered the same question inconsistently, and report the results of these validity checks in the methodology. Cavusoglu (2019) found that almost 20% of surveys were eliminated due to failure of validity-check questions embedded in different places and in different forms. Researchers should also be aware of internet bots, software that runs automated tasks; some respondents use bots to reply to surveys. To avoid this, use Captcha verification, which forces respondents to perform randomized tasks—such as moving a bar to a certain area, clicking the boxes that contain cars, or checking boxes—to verify that the survey taker is not a bot. Whenever appropriate, researchers should use the time-limit options offered by online survey tools such as Qualtrics to control the time a survey taker must spend before advancing to the next question. We found this to be a great tool, especially when you want respondents to watch a video, read a scenario, or look at a picture before they respond to other questions. Researchers should also collect data on different days and at different times during the week to obtain a more diverse and representative sample. Data Cleaning: Researchers should be aware that some respondents do not read questions; they simply select random answers or type nonsense text. To exclude them from the study, manually inspect the data, and exclude anyone who filled out the survey too quickly. We recommend excluding all responses completed in less than 40% of the average time taken for the survey.
For example, if it takes 10 minutes on average to fill out a survey, we exclude everyone who completed it in 4 minutes or less. After separating these two groups, we compared them and found that the speeders’ (aka cheaters’) data was significantly different from that of the regular group. Researchers should always collect more data than needed; our rule of thumb is to collect 30% more. For example, if 500 clean responses are wanted, collect at least 650, so that the targeted number of responses is still available after cleaning. Report the data-cleaning process in the method section of your article, showing the editor and reviewers that you have taken steps to increase the validity and reliability of the survey responses. Calculating a response rate for samples drawn from MTurk is not possible; however, it is possible to calculate an active response rate (Ali et al., 2021), computed as the number of raw responses remaining after all screening and validity-check eliminations, divided by the total raw responses. For example, if you have 1,000 raw responses and you eliminate 100 responses for coming from an IP address outside of the United States and another 100 for failing the validity-check questions, then your active response rate would be 800/1,000 = 80%.
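These cleaning steps can be sketched in a few lines; this is a minimal illustration with hypothetical field names, applying the 40%-of-average-time speeder cutoff and the active-response-rate formula described above:

```python
# Clean survey data: drop speeders (completion under 40% of the average
# time) and responses that failed screening/validity checks, then
# compute the active response rate. Field names are hypothetical.

def clean_responses(responses, failed_check_ids):
    """responses: list of dicts with "id" and "seconds" keys.
    failed_check_ids: set of IDs that failed screening or validity checks."""
    avg_time = sum(r["seconds"] for r in responses) / len(responses)
    cutoff = 0.4 * avg_time  # e.g., 4 minutes for a 10-minute average
    kept = [r for r in responses
            if r["seconds"] >= cutoff and r["id"] not in failed_check_ids]
    active_response_rate = len(kept) / len(responses)
    return kept, active_response_rate
```

With 1,000 raw responses and 200 total eliminations, this function would report an active response rate of 0.80, matching the worked example above.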
4

Widdup, K. H., and D. L. Ryan. "Development of G50 alsike clover for the South Island high country." Proceedings of the New Zealand Grassland Association, January 1, 1994, 107–11. http://dx.doi.org/10.33584/jnzg.1994.56.2147.

Abstract:
A breeding programme to improve the herbage yields and persistence of alsike clover (Trifolium hybridum L.) for the South Island high country was initiated in 1984. A screening trial with germplasm from the Baltic region of Russia, local types collected in the Mackenzie Basin, selected plants from high country trials, and overseas cultivars was established under grazing at Mt John, Tekapo. Material was assessed over 3 years for seasonal herbage yields, shoot density, growth habit, and plant survival. Principal Component Analysis was used to order the agronomic performance of the alsike lines. A set of superior alsike lines from the Russian and local New Zealand groups was identified. These lines were not significantly better than commercial alsike but showed a consistent pattern of higher yields in all seasons and years. Overseas cultivars had average to poor yields, and many had low shoot densities. Elite plants were selected from the superior lines and combined in a polycross in 1988. A progeny test was sown to determine the lines with high breeding value to make up a cultivar. Parameters similar to those in the screening trial, including seedling establishment, were assessed in the progeny test. Seventeen elite progeny were identified in 1991, and the best four plants were removed from each progeny and isolated to form the 'G50' alsike clover selection. The selection is currently in comparative grazing trials in the high country. Keywords: high country, progeny test, screening, selection, Trifolium hybridum
5

Estrada, Sylvia C. "Medical Genetics: …still growing and expanding…" Acta Medica Philippina 54, no. 4 (August 28, 2020). http://dx.doi.org/10.47895/amp.v54i4.1942.

Abstract:
Today, it is not uncommon to read about novel treatments for conditions that were considered “untreatable” 30 or 40 years ago. Learning about biochemical or metabolic disorders then was challenging because there were no confirmed patients to speak of in the Philippines. Unexplained neonatal deaths were attributed to sepsis, and there was no genetic test to challenge the diagnosis, which had sound clinical basis. Multiple congenital anomalies were viewed as normal deviations or variants of an embryologic process. Did we even think then that these anomalies were part of a syndrome? With the establishment and expansion of genetic services in the country,1 among them cytogenetics, newborn screening, and molecular and biochemical genetics, our pediatric “unknowns” now have names of conditions that were once only encountered in textbooks: maple syrup urine disease, PKU, methylmalonic acidemia, glutaric aciduria, mosaic Trisomy 13, and Tetrasomy 9p Syndrome. The Philippines joined the rest of the world in December 2018 when the Department of Health implemented expanded newborn screening nationwide.2 This provided a platform to screen for at least 28 metabolic conditions, including hematologic and endocrine disorders, and ushered in a better understanding of the clinical course of conditions detected at birth and treated promptly. It opened doors for research collaboration between specialties within the country and overseas.3 More importantly, it paved the way for better care for patients through the sharing of expertise and best practices and the crafting of clinical practice guidelines.4 Acta Medica Philippina Genetics 6 highlights the diversity of applications and reach of medical genetics into various aspects of health care: preconception, early neonatal detection of treatable genetic metabolic conditions, disease risk detection and allelic associations, and gaps in the education of genetic conditions and genetic counselling.
The case reports and case series describe various conditions and scenarios that offer opportunities to become acquainted with uncommon genetic phenotypes. May Acta Genetics 6 inspire the reader to explore the limitless possibilities of the applications of medical genetics.
Sylvia C. Estrada, MD, FPPS
Institute of Human Genetics
National Institutes of Health
University of the Philippines Manila
REFERENCES
1. Padilla CD, de la Paz EMC. Genetic services and testing in the Philippines. J Community Genet. 2013 Jul; 4(3):399-411.
2. DOH Administrative Order Number 2018-0025 on National Policy and Strategic Framework on Expanded Newborn Screening for 2017-2030 [Internet]. [cited 2020 Aug]. Available from https://www.doh.gov.ph/newborn-screening
3. Abad PJB, Laurino MY, Daack-Hirsch S, Abad LR, Padilla CD. Parent-child communication about Congenital Adrenal Hyperplasia: Filipino mother’s experience. Acta Med Philipp. 2017; 51(3):175-80.
4. Abad PJB, Laurino MY. Preconception genetic counselling in a Filipino couple with family history of Trisomy 18. Acta Med Philipp. 2017; 51(3):248-50.
6

Warren, Robert, and Donald Kerwin. "The 2,000 Mile Wall in Search of a Purpose: Since 2007 Visa Overstays have Outnumbered Undocumented Border Crossers by a Half Million." Journal on Migration and Human Security 5, no. 1 (March 6, 2017). http://dx.doi.org/10.14240/jmhs.v5i1.77.

Full text
Abstract:
The Trump administration has made the construction of an “impregnable” 2,000-mile wall across the length of the US-Mexico border a centerpiece of its executive orders on immigration and its broader immigration enforcement strategy. This initiative has been broadly criticized based on: Escalating cost projections: an internal Department of Homeland Security (DHS) study recently set the cost at $21.6 billion over three and a half years; Its necessity given the many other enforcement tools — video surveillance, drones, ground sensors, and radar technologies — and Border Patrol personnel, that cover the US-Mexico border: former DHS Secretary Michael Chertoff and other experts have argued that a wall does not add enforcement value except in heavy crossing areas near towns, highways, or other “vanishing points” (Kerwin 2016); Its cost-effectiveness given diminished Border Patrol apprehensions (to roughly one-fourth the level of historic highs) and reduced illegal entries (to roughly one-tenth the 2005 level according to an internal DHS study) (Martinez 2016); Its efficacy as an enforcement tool: between FY 2010 and FY 2015, the current 654-mile pedestrian wall was breached 9,287 times (GAO 2017, 22); Its inability to meet the administration’s goal of securing “operational control” of the border, defined as “the prevention of all unlawful entries to the United States” (White House 2017); Its deleterious impact on bi-national border communities, the environment, and property rights (Heyman 2013); and Opportunity costs in the form of foregone investments in addressing the conditions that drive large-scale migration, as well as in more effective national security and immigration enforcement strategies. The Center for Migration Studies (CMS) has reported on the dramatic decline in the US undocumented population between 2008 and 2014 (Warren 2016). 
In addition, a growing percentage of border crossers in recent years have originated in the Northern Triangle states of Central America (CBP 2016). These migrants are fleeing pervasive violence, persecution, and poverty, and a large number do not seek to evade arrest, but present themselves to border officials and request political asylum. Many are de facto refugees, not illegal border crossers.

This report speaks to another reason to question the necessity and value of a 2,000-mile wall: It does not reflect the reality of how the large majority of persons now become undocumented. It finds that two-thirds of those who arrived in 2014 did not illegally cross a border, but were admitted (after screening) on non-immigrant (temporary) visas, and then overstayed their period of admission or otherwise violated the terms of their visas. Moreover, this trend in increasing percentages of visa overstays will likely continue into the foreseeable future.

The report presents information about the mode of arrival of the undocumented population that resided in the United States in 2014. To simplify the presentation, it divides the 2014 population into two groups: overstays and entries without inspection (EWIs). The term overstay, as used in this paper, refers to undocumented residents who entered the United States with valid temporary visas and subsequently established residence without authorization. The term EWI refers to undocumented residents who entered without proper immigration documents across the southern border. The estimates are based primarily on detailed estimates of the undocumented population in 2014 compiled by CMS and estimates of overstays for 2015 derived by DHS.

Major findings include the following:

- In 2014, about 4.5 million US residents, or 42 percent of the total undocumented population, were overstays.
- Overstays accounted for about two-thirds (66 percent) of those who arrived (i.e., joined the undocumented population) in 2014.
- Overstays have exceeded EWIs every year since 2007, and 600,000 more overstays than EWIs have arrived since 2007.
- Mexico is the leading country for both overstays and EWIs; about one-third of undocumented arrivals from Mexico in 2014 were overstays.
- California has the largest number of overstays (890,000), followed by New York (520,000), Texas (475,000), and Florida (435,000).
- Two states had 47 percent of the 6.4 million EWIs in 2014: California (1.7 million) and Texas (1.3 million).
- The percentage of overstays varies widely by state: more than two-thirds of the undocumented who live in Hawaii, Massachusetts, Connecticut, and Pennsylvania are overstays. By contrast, the undocumented population in Kansas, Arkansas, and New Mexico consists of fewer than 25 percent overstays.
APA, Harvard, Vancouver, ISO, and other styles
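The overstay share reported in the abstract follows directly from its headline counts; a minimal arithmetic sketch (the figures are the abstract's rounded estimates, so the computed share lands near, not exactly on, the reported 42 percent):

```python
# Estimates from the abstract above, in millions of persons (2014).
overstays = 4.5   # admitted on temporary visas, then overstayed
ewis = 6.4        # entries without inspection across the southern border

total = overstays + ewis
overstay_share = overstays / total
print(f"total undocumented: {total:.1f} million")
print(f"overstay share: {overstay_share:.1%}")  # close to the reported 42 percent
```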

Dissertations / Theses on the topic "Overseas country screening"

1

Gould, Richard Robert. "International market selection-screening technique: replacing intuition with a multidimensional framework to select a short-list of countries." RMIT University. Social Science & Planning, 2002. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20081125.145312.

Full text
Abstract:
The object of this research was to develop an international market screening methodology which selects highly attractive markets, allowing for the diversity amongst organisations, countries and products. Conventional business thought is that, every two to five years, dynamic organisations which conduct business internationally should decide which additional foreign market or markets to enter next. If they are internationally inexperienced, this will be their first market; if they are experienced, it might be, say, their 100th market. How should each organisation select its next international market? One previous attempt has been made to quantitatively test which decision variables, and what weights, should be used when choosing between the 230 countries of the world. The literature indicates that a well-informed selection decision could consider over 150 variables that measure aspects of each foreign market's economic, political, legal, cultural, technical and physical environments. Additionally, attributes of the organisation have not previously been considered when selecting the most attractive short-list of markets. The findings presented in the dissertation are that 30 criteria accounted for 95 per cent of variance at cross-classification rates of 95 per cent. The weights of each variable, and the markets selected statistically as being the most attractive, were found to vary with the capabilities, goals and values of the organisation. This frequently means that different countries will be best for different organisations selling the same product.
APA, Harvard, Vancouver, ISO, and other styles
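The screening approach the abstract describes can be sketched as a weighted multi-criteria ranking. The sketch below is illustrative only: the indicator names, weights, and values are hypothetical stand-ins, not the dissertation's 30 criteria or its statistically derived weights.

```python
# Hypothetical weighted market screen in the spirit of Gould (2002):
# score each country on normalized (0-1) indicators, weight them to
# reflect one organisation's goals, and rank to produce a short-list.

weights = {"market_size": 0.4, "political_stability": 0.3, "tariff_openness": 0.3}

countries = {
    "Country A": {"market_size": 0.9, "political_stability": 0.6, "tariff_openness": 0.5},
    "Country B": {"market_size": 0.4, "political_stability": 0.9, "tariff_openness": 0.8},
}

def score(indicators, weights):
    """Weighted sum of normalized indicator values."""
    return sum(weights[k] * indicators[k] for k in weights)

shortlist = sorted(countries, key=lambda c: score(countries[c], weights), reverse=True)
print(shortlist)  # Country A edges out Country B under these weights
```

Re-running with weights shifted toward political stability reverses the ranking, which mirrors the dissertation's finding that the most attractive markets vary with the capabilities, goals and values of the organisation.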
