
U.S. AI Policy Report Card


By Hodan Omaar | July 27, 2022

Policy discourse on artificial intelligence (AI) in the United States is
at an all-time high. The 117th Congress was the most AI-focused
congressional session in history with 130 AI bills proposed in 2021
compared with just 1 in 2015. 1 Given this high level of interest from
lawmakers, it is a good moment to take stock of the
accomplishments of U.S. AI policy to date, as well as areas where
there is room for continued progress toward U.S. leadership in this
space.

EXECUTIVE SUMMARY
In the United States, there are three sets of institutions that are primarily
responsible for initiating, importing, modifying, and diffusing AI: private firms,
publicly funded national laboratories, and universities with funding from
government, industry, and donors. AI policy refers to the public means for
nurturing the capabilities and activities of this AI ecosystem and optimizing its
applications in the service of national goals and the public good. When
government institutions and policies act properly, AI innovation flourishes.
When government fails to act or misfires, so too does AI innovation. 2

This report analyzes how the United States is performing across nine of the
most prominent policies the U.S. government uses to support AI innovation
and competitiveness. We split these policies into two groups. First are
innovation policies that directly spur AI innovation and competitiveness.
These include six types of policies that support AI research, strengthen the AI
workforce, spread AI tech hubs across the country, facilitate access to AI
resources, promote government adoption of AI, and help develop technical
standards for AI. Second are legal and regulatory policies that shape the
environment for AI innovation. These include three types of policies that
regulate the use of AI systems, incentivize AI activity through intellectual
property (IP) rights, and support AI development through international trade.

The following report card summarizes our findings for each policy area and
provides an achievement level. To be failing expectations in a policy area
means the policies in place—or lack thereof—are causing the United States to
fall behind its competitors. To be approaching expectations means the
policies in place are only partially or inconsistently bolstering U.S. AI
innovation and competitiveness. To be meeting expectations means the
policies in place are keeping the nation on par with its competitors. And to be
surpassing expectations means the policies in place are properly supporting
U.S. AI leadership. In the following sections, we provide detailed
recommendations for how U.S. policymakers can improve.



AI INNOVATION POLICIES

SUPPORTING AI RESEARCH & DEVELOPMENT


Overall grade: Approaching expectations

Reason: Current levels of direct federal AI spending and tax support are below what is needed to support AI R&D at levels that keep the country competitive.

DIRECT AI R&D SPENDING


Robust federal research and development (R&D) investment is needed to
advance AI for two main reasons. First, to make up for the fact that the
private sector invests less than societally optimal levels in AI research
because they are almost never able to capture all the spillover benefits of
their initial investment, or capture these benefits fast enough, to justify
investing at the same level as the government. The knowledge they create
spills over into the knowledge commons and competitors are able to
capitalize on it. This is especially true in the case of basic research, which is
more costly and riskier than applied R&D. 3 Second, the private sector tends
to narrowly focus its research on only the AI fields that are commercially
relevant and economically beneficial, rather than on all those that might
advance the public good. To see this, consider that only 18 percent of the AI
algorithms in use today originated from the private sector, according to a
2020 study by researchers at the Massachusetts Institute of Technology and
the University of Pennsylvania. 4 The other 82 percent originated from
research at federally funded institutions.

The Obama administration was the first to commission a federal strategy to
guide federal investment in AI R&D, which the Networking and Information
Technology Research and Development (NITRD) subcommittee published in
October 2016. Called the National AI R&D Strategic Plan, it outlined seven
strategies to help guide the overall portfolio of federal investments:

• Make long-term investments in AI research
• Develop effective methods for human-AI collaboration
• Understand and address the ethical, legal, and societal implications
of AI
• Ensure the safety and security of AI systems
• Develop shared public datasets and environments for AI training and
testing
• Measure and evaluate AI technologies through standards and
benchmarks
• Better understand the national AI R&D workforce needs 5

The Trump administration updated the plan in 2019 with an eighth strategy
to expand public-private partnerships to accelerate advances in AI and
imbued the existing plan with a stronger focus on maintaining U.S.
leadership. 6



While NITRD’s AI R&D Strategic Plan provides a national framework for AI
R&D priorities, it does not direct policy or funding. Instead, the president and
Congress set nondefense AI R&D priorities and funding for each federal
agency through an annual fiscal year budget, with defense spending set
through a separate bill called the National Defense Authorization Act (NDAA).
Figure 1 shows a breakdown of the Biden administration’s budget request for
nondefense AI R&D in FY 2022 and the amount enacted in FY 2021. In total,
funding for nondefense AI R&D increased from around $1.6 billion to $1.7
billion.

Figure 1: AI R&D investment (in millions of dollars) 7

Agency      FY 2021 Enacted   FY 2022 Requested
Treasury          0.5               0.8
NOAA              2.2               2.2
NIOSH             3.8               3.8
NASA              6.6               6.5
DOT              11.2              11.3
NIJ              11.5              11.5
DOI               8.5              13
VA               18.4              24.7
NIST             31.8              46.9
FDA              46                50
DHS              56.4              77.8
USDA            161.2             161.2
DOE             278.8             231.7
NIH             337.9             352.2
NSF             639               744.7



The Biden administration has been pushing for even more AI R&D, identifying
AI as one of the breakthrough technologies in line for increased federal
investment over the next four years. The administration’s budget request for
FY 2022 includes a little more than $1.7 billion, which includes funding for
federal agencies to create a national network of AI research centers, as
Congress directed in the National AI Initiative Act of 2020. 8 The
administration announced its budget request for FY 2023 in March 2022,
indicating it will request even more funding for major investments in AI. For
instance, the budget includes a request for $187 million for the National
Institute of Standards and Technology (NIST) to expand research initiatives
focused on accelerating AI adoption through technical standards
development. 9 Additionally, the Senate passed the U.S. Innovation and
Competition Act (USICA) in 2021, which includes a proposal to create a new
National Science Foundation (NSF) directorate focused on technology and
innovation. The Senate’s bill would authorize $9.3 billion for the directorate
by FY 2026 to strengthen the leadership of the United States in a range of
critical technologies, not just AI. 10

The question becomes, How much federal AI R&D funding is enough to
accelerate AI innovation and keep the nation competitive? According to the
National Security Commission on AI (NSCAI), an independent commission
established by Congress in 2018 to review the steps the United States needs
to take to advance AI development, the United States should be doubling
investments in nondefense AI R&D annually from the baseline of $1 billion in
FY 2020 in order to reach $32 billion in FY 2026, which would bring federal
AI spending to a level on par with biomedical research. 11 Federal funding
should therefore be at least $2 billion in FY 2022 and increase to $4 billion
in the FY 2023 budget.
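
To make the recommended trajectory concrete, here is a minimal sketch of the doubling path, assuming the annual doubling begins in FY 2022 from a roughly $1 billion baseline (the reading consistent with the $2 billion and $4 billion figures above):

$\$1\text{B} \times 2^{n} \Rightarrow \$2\text{B (FY 2022)},\ \$4\text{B (FY 2023)},\ \$8\text{B (FY 2024)},\ \$16\text{B (FY 2025)},\ \$32\text{B (FY 2026)}$

where $n$ is the number of fiscal years elapsed since FY 2021.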

To improve the overall effectiveness and productivity of federal AI R&D, the
National AI Initiative Act of 2020 also established an agency called the
National AI Initiative Office (NAIIO) within the Office of Science and
Technology Policy (OSTP) to coordinate federal support for AI R&D, education
and training, and research infrastructure. The office is still new, and how
effective it will be at coordinating federal AI R&D activities remains to be
seen.

Recommendations for improvement:


• Congress should increase AI R&D funding in FY 2023 to at least $4
billion. The Senate’s proposed funding of $9.3 billion by FY 2026 for
a new NSF directorate would get the United States part of the way to
this goal if this provision is included in the final competitiveness
legislation both chambers of Congress are working to bring across the
finish line. But to ensure the United States is on track to reach
funding levels that keep the nation competitive in AI specifically and
across agencies, Congress should increase overall AI R&D funding to
$4 billion in FY 2023.

• NITRD should update the National AI R&D Strategic Plan to include
measuring and monitoring the capabilities of AI systems. Today, it is
difficult for policymakers to identify the specific needs and priorities
of the research community because government lacks the capacity
and infrastructure to systematically measure and monitor the
capabilities of the AI ecosystem. 12 Investing in better metrics would
enable policymakers to better identify where research could support
policy needs.

R&D TAX CREDIT


Economists have shown that the R&D tax credit, which provides a tax break
for companies incurring R&D costs, is an effective tool to encourage private
companies to invest in R&D. Importantly, studies of R&D incentives show that
they not only spur firms to do more R&D than they would otherwise do, but
also lead to more of that R&D being performed in the jurisdictions with the
incentives. 13

Stimulating private investment in AI R&D is crucial to cementing U.S.
leadership in AI because the private sector in the United States plays a
uniquely important role in conducting AI R&D. 14 Consider the findings from
Stanford University’s 2021 AI Index report on R&D activities around the
world. The report finds that the highest proportion of peer-reviewed AI papers
in every major nation comes from academic institutions, but the United States
is distinct in that the second most important originators come from industry,
with corporate-affiliated research representing 19.2 percent of the total
publications (figure 2). 15 By contrast, government is the second most
important in China (15.6 percent) and the European Union (17.2 percent).

Figure 2: The biggest AI R&D originators, excluding academia, in China, the EU, and the United States

Unfortunately, tax incentives in the United States for R&D are quite minimal.
The country ranks 32nd out of 34 comparable Organization for Economic
Cooperation and Development (OECD) and BRIC (Brazil, Russia, India, and
China) nations, having slipped from 24th place in 2020. 16 As of 2022, a
provision in the 2017 Tax Cuts and Jobs Act (TCJA) no longer allows
companies to expense current R&D costs in the first year (to deduct the costs
of R&D from their taxable income in the year they incur those costs) and
instead requires costs to be amortized over a period of five years, effectively
reducing the R&D subsidy by about 5 percentage points, from around 9
percent to 4 percent. 17
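
To see why amortization dilutes the incentive, consider a simplified, hypothetical example: a firm spends $100 on R&D, the corporate tax rate is 21 percent, deductions are spread evenly over five years, and future tax savings are discounted at 5 percent (the actual rules are somewhat more complicated):

$\text{Expensing: } 0.21 \times \$100 = \$21 \text{ of tax savings up front}$

$\text{Amortization: } 0.21 \times \$20 \times \sum_{t=0}^{4} (1.05)^{-t} \approx \$4.2 \times 4.55 \approx \$19.1 \text{ in present value}$

Spreading the deduction over five years thus trims the present value of the tax benefit by roughly 9 percent in this stylized case, and by more at higher discount or inflation rates.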



This change reduces the U.S. comparative advantage in all innovation-based
industries. But because the private sector plays a unique role in AI R&D
compared with other countries, the impact on AI competitiveness is outsized.

Fortunately, the Biden administration has indicated support for enhancing tax
incentives for R&D. In response to a question from Senator Todd Young (R-IN)
about maintaining the immediate deductibility of R&D expenses, U.S.
Treasury Secretary Janet Yellen said:

[P]romoting innovation is a critical priority for President Biden, and it
is a very important contributor to productivity growth in this country.
And we’re absolutely looking for ways to do that, and certainly
continuing to allow firms to expense R&D rather than shifting to
amortizing could be one very effective way to bring that about. There
could also be more generous R&D tax credits. There might be other
approaches, but many OECD countries do permit expensing of R&D.
So this is something we certainly would want to work with you on and
find a way to be supportive of more tax support for R&D. 18

Recommendations for improvement:


• Congress should lift the overall R&D subsidy rate to levels on par with
comparable countries to better incentivize private sector AI R&D. As
the Information Technology and Innovation Foundation (ITIF)
explained in its 2020 report Enhanced Tax Incentives for R&D Would
Make Americans Richer, a fiscally responsible target would be to
increase the overall subsidy rate to at least 15.5 percent from 9.5
percent. This target could be reached by increasing the rates for the
two main investment tax credits the government uses (the Regular
Credit and the Alternative Simplified Credit) to 43.5 percent and 30.5
percent, respectively. 19

• Congress should broaden and expand the R&D credit for
collaborative research. The United States provides a 20 percent tax
credit for collaborative R&D to encourage private sector investment in
research conducted at universities, federal laboratories, and research
consortia. But it only applies to energy research. Congress should
eliminate the energy restriction to support collaborative research in
other research areas, including AI. 20

SPREADING AI TECH HUBS


Overall grade: Meeting expectations

Reason: The government could take a more rigorous approach to better choose the most promising potential AI growth centers for investment.

Many innovation and competitiveness scholars point out that America’s most
innovative firms frequently cluster together in small, relatively well-defined
regions. 21 Innovation clusters offer several benefits for the AI industry. For
one, in the early stages of technology development, clusters tend to produce
and exchange industrial “gossip” about production processes that helps
nearby firms be more competitive and productive while distant firms either
never receive this information or hear about it too late. 22 For another,
industry clusters can create efficient markets for inputs that are specific to an
industry. For example, manufacturers of customized AI chips want to be as
close to as many AI companies as possible because outside of AI system
production, there is little market for their products.

The problem is, while the clustering of AI skills and firms promotes innovation
and growth within regions, hyperconcentration of the associated economic
gains may further entrench the already imbalanced geography of the nation’s
economy. As ITIF explained in The Case for Growth Centers, the innovation
sector has generated significant technology gains and wealth but has also
helped spawn a growing gap between the nation’s dynamic “superstar”
metropolitan areas and most everywhere else. 23 Among the superstar metro
areas, the “winner take most” dynamics of the innovation economy have led
to dominance, but also to livability and competitiveness crises: spiraling real
estate costs, traffic gridlock, and increasingly uncompetitive wage and salary
costs. Meanwhile, in many of the “left-behind places,” the struggle to keep up
has brought stagnation and frustration. These uneven realities represent a
serious productivity, competitiveness, and equity problem.

Current AI activity in the United States is clustered around these existing
superstar tech hubs. Indeed, figure 3 shows that AI employment is highest
in big tech hubs such as Austin, Boston, New York, the San Francisco
highest in big tech hubs such as Austin, Boston, New York, the San Francisco
Bay area, Seattle, and Washington, D.C. The graph also shows some less
densely populated areas with higher shares of AI employment, such as in
Colorado and New Mexico where there are U.S. national labs. 24

Figure 3: U.S. employment share in AI is highest in major cities 25

This is not surprising. AI is still an emerging technology, and the earliest
stages of technology development are often spatially concentrated near the
sites of key innovations. 26 However, without federal efforts to counter the
self-reinforcing dynamics inherent to network-based systems such as the AI
industry, it is unlikely the AI economy will become more geographically
dispersed.



Fortunately, the federal government is in the midst of a new realization that
national action will be necessary to counter the current concentration of AI
investments in just a few geographic regions. NSF is leading the effort to
create the national network of AI research centers mentioned earlier, and in
2021, it established 11 new AI research institutes with ties to 40 states,
representing an investment of over $220 million and building on an earlier
first round of institutes funded in 2020. 27 NSF together with OSTP is also
planning to spread centers of AI excellence across the country as part of the
implementation plan of a National AI Research Resource (NAIRR), which is
envisioned as a shared computing and data infrastructure. 28

As a 2021 Brookings report on the geography of AI notes, however, creating
AI clusters in practice is a difficult task, and development strategies for
regions should be realistic. There are at least 87 regions in the United States
that have some AI research and commercialization capacities and are
“potential AI adoption centers.” But these regions vary widely in their starting
points with different research sectors and business activities. 29 Policymakers
should more comprehensively use data on their AI capacity and positioning to
inform strategic decisions.

Recommendations for improvement:


• NSF should coordinate selecting AI growth centers with the
Department of Commerce (DOC), which Congress has directed to
create a regional innovation hub program. The Senate’s USICA would
provide $10 billion to DOC to establish and grow at least 20 regional
technology hubs while the House’s America COMPETES (Creating
Opportunities for Manufacturing, Pre-Eminence in Technology and
Economic Strength) Act authorizes $7 billion for at least 10 hubs. As
NSF works to select the next most promising AI potential growth
centers, it should coordinate its process for selection with DOC to
ensure efforts are not duplicative and promising regions receive
sufficient federal investment to make them successful.

STRENGTHENING THE AI WORKFORCE


Overall grade: Failing expectations

Reason: AI education is uneven in scope and depth while outmoded immigration policies are preventing much-needed foreign talent from contributing to U.S. AI innovation.

AI EDUCATION

Primary and Secondary AI Education


In the United States, the responsibility for primary and secondary education,
including school financing, teaching credentials, and curricula, falls on the
states. Federal programs, including ones in the departments of Education,
Agriculture, Health and Human Services, and Labor, also distribute billions of
dollars in funding to address specific needs, such as low-income schools and
child nutrition. Since states set their own educational priorities, the extent to
which schools implement curriculum standards that address AI varies across
states and local school districts.

On one hand, this decentralized approach allows for greater creativity and
innovation in how schools develop and implement AI curricula, potentially
enabling an increase in the quality of education. But divergent approaches
can exacerbate disparities in how rigorous curricula are and the qualifications
of educators. 30 Indeed, integration of AI curricula in the United States is
already uneven in both depth and scope. Many schools do not teach computer
science (CS), which is seen as the first step in AI specialization. 31 Only 51 percent of U.S.
high schools offer foundational CS, and only 23 of the 50 states and the District
of Columbia require all high schools to offer CS. 32 The few schools that offer
explicit AI courses also vary in the content and scope of their curricula.
Consider the two U.S. public high schools with the most prominent AI
curriculums: The North Carolina School of Science and Mathematics, a
public, two-year boarding school that is administered by North Carolina’s
university system, received a $2 million gift to launch an AI program in 2018
to teach students how to use and create AI systems, with a strong focus on
understanding the ethical implications associated with these technologies. 33
Seckinger High School, established with $79 million in funding by the
Gwinnett County Public Schools district in Georgia, is incorporating AI courses
in core subjects for its entire K-12 cohort. 34 Its program introduces
elementary school students to block coding—a basic form of computer
programming that uses visual instruction blocks to construct games—while
middle and high school students can learn programming languages and apply
robotics and sensors to real-world applications. 35

An alternative to the U.S. approach is a more centralized, national
government-mandated approach to AI education such as that of China,
Korea, Bulgaria, and Kuwait. 36 In China, for instance, the Ministry of
Education revised its national education requirements for high schools in
January 2018 to officially include pedagogical content for AI in its information
technology curricula. All high school students, from those in the most
prestigious schools in Beijing to those in the hundreds of rural classrooms in
China’s outskirts, are required to complete an AI coursework module, which
includes data encoding techniques; collecting, analyzing, and visualizing
data; and learning and using a programming language to design simple
algorithms, as part of a compulsory information technology course. 37 One
pitfall of top-down approaches that mandate AI curricula, especially those
with discrete AI curricula that have specific time allocations, textbooks, and
resources, is that they can encourage students to engage in rote learning and
discourage the type of independent and creative thinking that appears to play
a supportive role in innovation and entrepreneurship. Germany has also
mandated a national AI curriculum but is implementing a flexible integration
mechanism that allows regions, school networks, and individual schools to
decide whether the curriculum is embedded into other subject areas or
delivered through out-of-school methods such as extracurricular activities. 38
These types of approaches may not fit the realities of the U.S.
education system, but the federal government can play an important role in
ensuring AI education is equitable and functions as effectively as possible.

While the U.S. school system has not fully responded to the increased
importance of AI skills, more employers, parents, and even students
recognize the benefits of learning AI. Several nonprofits and advocacy groups,
learning programs, and courses have sprung up in response. Nonprofits such
as Girls in Artificial Intelligence and Technology Education (GAITway), AI4ALL,
Black Girls Code, and Girls Who Code seek to increase access to AI education
across gender lines and socioeconomic divides, introduce AI to students at a
younger age, train more teachers, and put AI into more schools. The private
sector is also reinforcing AI integration in schools through a number of
different initiatives ranging from after-school programs to hackathons to
summer camps. A 2021 report finds that for-profit companies are responsible
for hosting 59 percent of the close to 450 AI and AI-related summer camps
that exist across the United States. 39 For example, iD Tech Camps is a
summer computer camp held at more than 150 U.S. college campuses that
offers AI and machine learning courses for 13 to 17 year olds. 40 Some of
these programs target populations typically underrepresented in AI and CS,
such as Microsoft’s DigiGirlz High Tech Camps, which are multi-day tech
programs for girls in middle and high school. 41 Other private sector initiatives
support in-classroom learning by providing teachers with resources such as
Google’s CS First, a free-to-use virtual CS curriculum for students ages 9 to
14. 42

Higher AI Education
Unlike U.S. high schools, where AI and CS education is deemed subpar, some
U.S. institutions of higher education boast strong AI programs, drawing
students from around the world. Moreover, interest in studying AI and AI-
related courses at U.S. universities is surging as the market for these skills
soars. 43 But U.S. institutions are frequently unable to meet demand because
too few universities are willing or able to persuade the qualified professors
they need to pick working in academia over lucrative private sector jobs,
leaving them unable to grow their faculties and offer more classes to students. 44
Moreover, many colleges and universities respond inadequately to
“customer” demand because they are unwilling to reallocate resources from
less in-demand academic programs to make budget space for more in-
demand ones such as CS.

At the undergraduate level, very few universities offer specific AI majors.
Carnegie Mellon University's School of Computer Science began offering an
undergraduate AI degree to a maximum of 35 students each year beginning
in 2018, becoming the first U.S. university to do so. 45 Most colleges and
universities instead offer CS bachelor’s degrees with AI concentrations or
specializations, and it is these CS majors that students across the country,
from major state universities to small private colleges, are increasingly
choosing. Swarthmore College, a private liberal arts college in
Pennsylvania, has resorted to using a lottery system to select which students
may enroll in CS classes—and in 2019, it implemented caps on the number of
courses students may take in response to consistent “high enrollment
pressures.” It had hoped to avoid placing restrictions on entry with “more
faculty lines for the department or an abatement of increasing enrollments,”
but neither came to fruition. 46 The University of Maryland, a public research
university facing similar pressures, made its CS program a limited enrollment
major in 2019, meaning students must now complete a series of gateway
courses before being admitted. 47 It also began instituting a differential pricing
model for CS majors in 2015, along with engineering and business majors,
charging students in these programs $5,600 more than their classmates for
a four-year degree. 48 Increasing the cost of CS tuition allows the university to
reduce demand by leading some students who otherwise would major in CS,
especially those who are financially disadvantaged, to turn to other fields and
enables the college to expand its CS programs by hiring more professors and
introducing more minors. 49 But barriers like caps and weed-out classes often
exacerbate gender and racial disparities in AI and CS. 50 A better solution
would be for the state of Maryland to provide more funding for CS education,
while the university at the same time reallocates resources from lower- to
higher-priority programs. Doing so could prove difficult in practice, however,
due in large part to internal institutional resistance. Consider data from the
National Center for Education Statistics, which shows that the number of
bachelor’s degrees conferred in social sciences and history decreased by 7
percent between 2009 and 2018 while those in CS increased by 124
percent. 51 Reallocating resources to support more CS courses means taking
resources from social science and humanities courses, and as Stanford
economist Paul Romer noted, a “university that has fixed investment in
faculty who teach in areas outside of the sciences and that faces internal
political pressures to maintain the relative sizes of different departments may
respond to this pressure by making it more difficult for students to complete
a degree in science.” 52

It is clear that educational institutions do not adequately respond to market
signals. As a result, it is incumbent on states and the federal government to
require or incentivize tertiary education institutions to expand their ability to
train a broader group of students in AI and CS. Many federal agencies are
also prioritizing investment in AI higher education. For example, the 2021
National Defense Authorization Act directs NSF to fund AI initiatives for higher
education (e.g., fellowships for faculty recruitment in AI) as well as AI
curricula, certifications, and other adult learning and retraining programs.

The private sector has been playing a role in supporting tertiary AI education
just as it does in primary and secondary education, providing everything from
certificate and online learning programs to subsidized access to AI systems
for teaching. 53 Perhaps no public-private AI partnership is as comprehensive
as that of the University of Florida (UF) and Nvidia, a U.S. company that
develops graphics chips. Anchored by a $25 million gift from UF alumnus and
founder of Nvidia Chris Malachowsky, and an additional $25 million in
hardware, software, training, and services from Nvidia, UF has launched an
initiative to become an “AI University.” 54 As part of this initiative, UF has
incorporated AI into all undergraduate majors and graduate programs,
developed an undergraduate AI certificate program, and committed to hiring
100 faculty members in AI and applications. 55

WORKFORCE TRAINING
Providing AI education is important, but national policy needs to also help
incumbent workers. One problem impeding successful workforce policies is
that while schools are primarily responsible for equipping the future AI
workforce with the requisite skills and knowledge they will need to succeed in
an AI economy, there is little agreement about the respective responsibilities
of individual workers, employers, and government in training the existing
workforce.

Complicating matters is that there is no commonly agreed upon definition of
what constitutes “AI expertise” or the “AI workforce,” which means there is no
common definition of a skills gap problem despite broad consensus that
there is one. Indeed, existing literature on the AI labor market vastly
disagrees on the pervasiveness, scale, and concentration of skills
misalignments. As a 2019 report from the Center for Security and Emerging
Technology (CSET) explains, there are many types of AI expertise one can
include in a measure of the AI workforce, ranging from a top computer
scientist who can lead an AI R&D team to an entry-level engineer who is not
an AI specialist but has sufficient skills to execute coding tasks. 56 There are
also many different domains of expertise; AI systems require hardware,
software, and data, while successful AI teams need expertise in all three. In a
subsequent report, CSET identified and measured the labor dynamics of four
groups of AI workers that include those that provide technical inputs to AI
applications, perform technical roles on an AI team, complement AI technical
occupations in product development (e.g., legal compliance officers), and
provide support for scaling, marketing, and acquisition of AI at the
occupational level. 57 It finds that in 2019, the U.S. AI workforce consisted of
14 million workers, or about 9 percent of total U.S. employment. Moreover, it
finds that there are divergent trends in these AI occupations, reflecting a
difference in the supply and demand gap for different segments of the AI
workforce. For example, the wage and employment growth of computer and
information research scientists, which are small but important occupations
within the AI workforce, is four times greater than the national average, while
there is no notable gap for project management specialists and user
experience designers. 58

Where skills misalignments in AI jobs do exist, evidence suggests employers
are investing in upskilling workers at suboptimal rates. Employer-provided
training takes various forms, including formal and informal on-the-job
training, tuition subsidies, classroom training, and apprenticeships. The
composition and intensity of firm-provided training is hard to quantify as
companies invest differently based on factors such as how much firm-specific
training is needed to perform tasks effectively, the cost, and the extent to
which they can share a portion of skill investment costs with workers (e.g.,
directly or in the form of lower wages). But research shows that few
companies are investing substantially and examples of firm-provided AI-
specific training are sporadic. According to a 2020 Deloitte report, only 18
percent of organizations around the world have significantly invested in AI-
specific reskilling initiatives. 59 Still, private investments in the United States
likely exceed federal government investment in training programs. Consider
that Amazon invested $700 million in 2019 to retrain 100,000 employees,
including by creating a “Machine Learning University.” And Microsoft
partnered with education provider General Assembly in the same year to
upskill 15,000 workers in AI-related skills by 2022, while Shell has created AI
courses that it offers to a range of its employees, including petroleum
engineers, chemists, and geophysicists. 60 Given that the primary federal
workforce development program, the Workforce Innovation and Opportunity
Act (WIOA), is funded at only about $5 billion each year, private company
investments are a key input to closing AI talent gaps.

It is unlikely that the rate of firms choosing to train employees will reach one
that is optimal from a societal and economic perspective without government
intervention. Corporate investment in workforce training in general has been
on a downward trend as more and more firms seek to simply hire workers
with the requisite skills instead of paying to train them. 61 After all, why would
a company want to invest in training workers in costly AI skills when so many
are leaving their jobs in record numbers? 62 The United States can work to
reverse this trend by creating incentives for firms. One option is to allow
qualified expenditures on workforce training to be taken as a knowledge tax
credit. To ensure companies use this credit to focus on the skills of most of
their workers, and not just managers, firms taking advantage of the credit
could be required to abide by rules such as those for pension program
distribution, which limit focus on highly compensated employees. 63

The federal government could further improve workforce policies for AI by
establishing wider use of skills credentialing so companies have a better way
to assess the AI skills of prospective and current workers, and workers have a
better way to identify and gain the AI skills they need to be successful. The
idea of using AI credentials to provide alternative pathways to AI jobs is not a
new one. In fact, AI certifications have proliferated over the past few years,
with several large tech companies such as Google and Microsoft launching
their own AI certifications, and many traditional online certification providers
such as Udacity and Udemy offering AI-related certifications as well. The issue
is that there is little market demand for these certifications in lieu of a four-
year degree—even from the tech companies that make their own
certifications. What’s needed is a national approach and for the government
to encourage the private sector to accept alternative certifications for AI,
namely by accepting a suitable set as a substitute for a college degree when
filling federal government jobs. 64

Fortunately, the government has already begun to recognize the need for
coordinated AI workforce policies at the national level. As part of the National
Defense Authorization Act of 2021, Congress charged NAIIO with developing
a strategic plan that establishes goals, priorities, and metrics to “support and
coordinate federal education and workforce training related to artificial
intelligence.” 65 NAIIO’s interagency committee for education and workforce
training will create AI workforce goals and priorities at the federal level.



Recommendations for improvement:
• Congress should provide funding for low-income and rural school
districts to incorporate AI into their high school curricula. 66 Funding
for educational resources for AI remains fragmented. Policymakers
should ensure that schools that have the least access to AI resources
for education can receive specific funding. Moreover, the Department
of Education should work with NAIIO to collect and disseminate best
practices in education models and materials through a centralized
hub.

• Congress should create incentives for more tertiary AI education by charging
and funding NSF to provide grants to public universities (including
Minority-Serving Institutions) that have increased or are implementing
programs to increase enrollment and retention in AI. At the university
level, policymakers need to address the barriers that limit the number
of students able to take AI-related courses. Schools seeking to
expand course offerings, hire more faculty, and provide students in AI-
related programs such as CS with more resources to improve
retention rates should be eligible to apply for these grants. A set of
these grants should specifically target Historically Black Colleges and
Universities (HBCUs), Hispanic-Serving Institutions (HSIs), and Tribal
Colleges and Universities (TCUs) to ensure underrepresented
students have equal opportunity to pursue an AI education.

• Congress should fund a program at NSF to provide competitive
awards for up to 1,000 AI researchers to remain in academia for a
period of five years. Even though businesses may benefit from
attracting the best AI faculty talent from universities, the overall AI
innovation ecosystem suffers as it reduces the number of AI experts
that can help new students cultivate these skills. These awards would
incentivize more AI researchers to stay in academia and thereby help
U.S. universities meet the demand for AI skills. 67

• Congress should create a knowledge tax credit to incentivize AI
workforce training investment. Employers are underinvesting in
workforce training for AI in the midst of a labor market in which
Americans are quitting their jobs in record numbers. Allowing
corporations to take a tax credit for at least 50 percent of training
expenditures would provide a much stronger incentive for businesses
to expand training investments.

• The Office of Personnel Management (OPM) should change current
requirements for many AI positions within the federal government to
also allow individuals with acceptable AI certifications to be eligible
rather than just those with college degrees. Doing so would
demonstrate to the private sector the feasibility of using alternative
credentials for AI. 68 OPM should work with agencies to create the list
of acceptable AI certifications for key AI job categories within
government and update the list annually.



ATTRACTING FOREIGN AI TALENT
Attracting and securing highly skilled foreign-born talent has played a vital
role in U.S. innovation and competitiveness by making up for the deficits in
the current U.S. education system in turning out sufficient AI talent. Indeed,
66 percent of students in America’s top AI PhD programs are foreign born,
more than 50 percent of computer scientists employed in the United States
are foreign born, as are about 65 percent of Silicon Valley computer and
mathematics workers, and 66 percent of the “most promising” U.S.-based AI
start-ups have at least one immigrant founder. 69

Given the importance of foreign-born AI workers to U.S. innovation success in
AI, the United States needs policies to strengthen and expand the
immigration pipeline that allows highly trained AI talent to innovate in the
United States, including foreign STEM (science, technology, engineering, and
mathematics) graduates of U.S. colleges and universities. But while many
competitor nations, including the United Kingdom, China, Canada, France,
and Australia, have adopted flexible immigration policies to attract foreign
talent in AI and other technical fields, the U.S. immigration system has
remained largely the same for the last 50 years. These outmoded visa laws,
as well as recent anti-immigrant rhetoric and international competition for AI
talent from other countries, are causing many international AI scientists and
engineers to look outside the United States for education and employment.

Table 1 summarizes the current immigration pathways for four key
populations in the AI workforce: students who are pursuing higher education
in a field related to AI; workers with higher education degrees who are
employed or seeking AI-related positions; “superstar” AI workers who are
internationally recognized for their extraordinary ability or achievements in AI
or related fields; and entrepreneurs who are planning to start AI-related
businesses.



Table 1: Current immigration pathways for foreign-born AI workers to the United States. 70



For foreign-born students who wish to pursue higher education in a field
related to AI, getting a student visa is relatively easy, but staying and working
in the United States after graduating is hard. AI graduates can use the STEM
Optional Practical Training program to work in the United States for up to
three years without getting another visa, but to stay longer, they need to find
a job with an employer that is able and willing to sponsor an H-1B temporary
work visa, the most important and sought-after channel into the U.S. AI sector
for all foreign workers. Most H-1B visas are subject to an annual cap of
85,000, meaning the United States Citizenship and Immigration Services
(USCIS) distributes a maximum of 85,000 H-1Bs each year through a lottery-
based system that randomly selects grantees from a pool of qualified
applicants. Demand for these visas is extremely high, with USCIS receiving
approximately 274,000 H-1B registrations in FY 2021 and 308,000 in FY
2022. 71 Even when an AI graduate can find a company willing to complete
the expensive H-1B administrative process on their behalf and secure one of
the few three-to-six-year H-1B visas, their chances of being able to stay in the
United States in the long term are low because the employment-based green
cards that confer permanent residency are even harder to come by, as USCIS
distributes only 140,000 employment-based green cards each year, at least
half of which go to workers’ spouses and families. 72 Moreover, under the per-
country cap set in the Immigration Act of 1990, workers from any one country
cannot be issued more than 7 percent of these green cards, a rule that has
not changed in over 30 years even though the sources of migration flows for
high-skilled workers have. 73 Today, most international students and workers
come from just two countries: China and India. 74 Moreover, Chinese and
Indian students have the highest rates of intention to stay when compared
with students from OECD member countries. 75 However, the numerical and
per-country limits have created employment-based immigration backlogs,
which have inordinately long wait times for Chinese and Indian nationals.
According to 2020 data from the Congressional Research Service, the
average wait time under current law for an EB-2 visa, an employment-based
visa for those who hold an advanced degree or equivalent, is 18 years for
Chinese nationals and 195 years for Indian nationals. 76 Given the uncertainty
and unpredictability that has come to characterize the U.S. immigration
process—the Trump administration abruptly began denying Chinese graduate
students visas in 2020 based on the Chinese universities they attended amid
tensions with the country—it should be no wonder foreign student enrollment
declined every year from 2016 to 2020 and international students who
graduate with PhDs from U.S. institutions are increasingly taking jobs in other
countries. 77 Indeed, 14 percent of all new international AI PhDs that studied
at U.S. institutions took jobs outside the United States in 2020, compared
with 8.6 percent in 2019. 78
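
For a rough sense of the bottleneck the per-country cap creates, consider a simplified calculation using the figures above (ignoring category-level subcaps and any recapture of unused visas):

$0.07 \times 140{,}000 = 9{,}800 \text{ employment-based green cards per country per year}$

and if at least half of those go to spouses and children, fewer than roughly 4,900 principal workers from any single country can receive an employment-based green card in a given year, no matter how many qualified applicants are waiting.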

For distinguished AI talent that possess extraordinary ability in their field,
such as outstanding professors or researchers, there is a temporary,
renewable three-year O-1 visa or an employment-based, first-preference EB-1
green card they can apply for. There are no caps on the number of O-1 visas
issued each year, but the eligibility requirements have historically been so
demanding and extensive that few organizations rely on these visas to secure
talent. 79 In January 2022, the Department of Homeland Security and the
Department of State announced that PhD holders in STEM fields would be
eligible for the O-1 visa and clarified how STEM talent can meet the
requirements for O-1 classification to better attract and retain foreign
talent. 80 While a step in the right direction, the guidelines fall short of the
policy changes needed to address the severe bottlenecks that exist. There
are approximately 40,000 EB-1 visas available to distinguished workers, but
current backlogs mean Chinese and Indian workers who have been approved
for this visa have to wait an average of 5 and 8 years, respectively, to receive
it, which the Congressional Research Service estimates will become an
average of 15 and 18 years for these nationals by 2030. 81

For foreign-born entrepreneurs, there is currently no visa category in the
United States, which has long been a beacon of entrepreneurialism,
attracting the kinds of people who spark the economy and propel it forward
by producing new discoveries, commercializing big ideas, and growing
successful companies. But faced with an immigration system that simply
ignores them, many foreign-born AI innovators are flocking to cities in
countries with more liberal immigration policies. 82 In the United States,
foreign-born entrepreneurs are forced to apply for residency through other
visa categories that are already oversubscribed, restrictive, complicated, and
costly. Take the example of Purva Gupta, cofounder and CEO of the
multimillion-dollar retail tech start-up Lily AI, who moved from India to the
United States on a student spouse visa. Gupta had to apply for and receive six
different types of visas before finally obtaining a green card. 83 Over the past
few years, however, several countries including Australia, Canada, France,
and the United Kingdom have introduced start-up visas they are using to
entice foreign entrepreneurs to create Silicon Valley-like tech hubs in their
own countries (figure 4). 84

Figure 4: A billboard over highway 101 in Silicon Valley 85



Recently, U.S. policymakers have recognized the importance of entrepreneurs
to maintaining U.S. competitiveness. The Biden administration revived an
immigration program called the International Entrepreneur Parole program,
first proposed by President Obama in 2017, which allows foreign
entrepreneurs to work for up to five years in the United States provided they
hire 10 employees and attract at least $250,000, or meet other
benchmarks. 86 The program does not create a new visa category but instead
allows the Department of Homeland Security to use its existing authority to
permit temporary admission to qualified individuals. The House has already
passed the America COMPETES Act, which includes a provision that would create a
new temporary visa for eligible international entrepreneurs and essential
employees affiliated with the management or operations of a start-up
entity. 87 To qualify for the new visa, entrepreneurs must have received at
least $250,000 from U.S. investors or $100,000 from government grants,
have at least a 10 percent ownership stake and play a central role in the
start-up, which itself must be less than five years old. The bill would allow an
entrepreneur to receive lawful permanent residence so long as the start-up
entity meets certain additional benchmarks, while families of visa holders
would be eligible for dependent visas. Ensuring this bill passes through the
Senate will be particularly important for AI competitiveness given many of the
nation’s largest and most successful AI companies were founded by foreign-
born entrepreneurs. A 2020 report on AI start-ups finds that two thirds of
Forbes’s list of the “most promising” U.S.-based AI start-ups have at least one
first-generation immigrant founder and 42 percent have founders that are
exclusively first-generation immigrants. 88 In crafting the visa, the United
States should look to what other nations have established in the past few
years. While many of those policies may sound promising, a number of them
have failed in practice due to unrealistic and vague metrics for business
success or long processing times. 89

Recommendations for improvement:


• Congress should better enable immigrants holding AI-relevant
graduate degrees to apply for and receive a green card, with
preference given to those with degrees from U.S. universities. To do
this, Congress should eliminate per-country caps on employment-
based green cards that have created a bottleneck preventing AI
graduates with job offers in hand from contributing to the U.S. AI
workforce in the long term.

• Congress should pilot a visa program for AI entrepreneurs.
Entrepreneurs should be required to show evidence of how their
business would support U.S. AI innovation or competitiveness and to
have received funding in the range of $500,000 from U.S. investors.
Policymakers should look at the successes and failures of
entrepreneur visas created by several peer countries in the past five
years to ensure a U.S. one is successful.



FACILITATING ACCESS TO AI RESOURCES
Overall grade: Approaching expectations

Reason: There is not sufficient access to computing resources for AI researchers in the public sector.

Access to data and computing facilities is a key enabler of AI innovation. AI
systems often rely on vast quantities of data for training, as large datasets
help AI systems develop highly accurate models to perform tasks ranging
from identifying faces to answering search queries. Moreover, machine
learning models can recognize subtle patterns in large datasets that are
difficult or impossible for humans to perceive. This is one reason many AI
systems perform certain tasks better than human experts do, such as
identifying the signs of breast cancer in mammograms. 90 In addition,
technologies such as high-performance computing, which expands the
capabilities of AI systems through massive computational power, and cloud
computing, which is a powerful technical architecture for AI that makes
access easy and economical, are driving growth, productivity, and innovation.
For example, researchers have combined supercomputers and machine
learning techniques to model climate change, and companies in the finance
and insurance industry are using cloud computing and AI to detect fraud,
identify financial risk, and predict cash flow events.

The government’s role in increasing access to AI resources differs for
academic and private sector researchers. Academic researchers typically
conduct crucial early stage AI research that provides foundational, generic
knowledge that everyone—including industry—can draw on for ideas and
innovation. However, only well-resourced institutions provide access to
expensive AI resources, such as powerful AI compute. The government’s role
is to ensure as many qualified academic researchers as possible have access
to AI resources in order to expand the pool of general AI knowledge for the
benefit of everyone. Private sector researchers typically conduct later-stage
R&D, which is important in bringing innovations to market. The private sector
already has incentives to invest in AI resources. The role for government is to
ensure the private sector’s incentives to invest in R&D for AI are sufficient to
maximize overall economic welfare. 91

Currently, publicly funded academic researchers requiring access to high-
performance computing capabilities for AI, which includes access to relevant
hardware, software, and expertise, can use resources that are hosted at
either their academic institutions or national High Performance Computing
(HPC) centers. Allocations for computing time on HPC systems at the national
level are made principally through competitive processes managed by the
Department of Energy and NSF. As the Center for Data
Innovation found in a 2020 report, however, the demand for access to the
systems these agencies provide is more than three times greater than the
supply, which is “hampering the ability of AI researchers to develop new
products and services that are vital in maintaining U.S. competitiveness,
inhibiting AI practitioners from applying AI to defense innovation, and slowing
innovation needed to address important societal challenges, including in
health care and the environment.” 92

Moreover, there is a growing divide between the computing resources and
opportunities available to researchers in academia and those available in the
private sector, which is tilting the nation’s research portfolio toward
applied, market-driven endeavors. Consider, for example, that in January
2022, Meta (a.k.a. Facebook) announced its state-of-the-art AI research
supercluster, a computing system it believes “is among the fastest AI
supercomputers running today and will be the fastest in the world once fully
built out in mid-2022.” 93

Fortunately, the United States has begun an ambitious initiative to increase
access to AI resources for academic researchers. As part of the National AI
Initiative Act of 2020, Congress directed a task force to create a roadmap for
an NAIRR, envisioned as “a shared computing and data infrastructure that
would provide AI researchers and students across scientific fields with access
to a holistic advanced computing ecosystem.” 94 As of April 2022, the task
force had held seven public meetings to investigate the feasibility and
advisability of the resource and to develop the roadmap for how it should be
established and sustained. 95 The EU is also working to spread access to
resources that will support AI development. Consider the European High
Performance Computing Joint Undertaking (EuroHPC JU), a joint initiative
between the EU, other European countries, and the private sector to develop
a high-end HPC ecosystem in Europe. 96 The goal of the initiative is to
coordinate and pool public and private resources to fund high-end systems in
a number of designated sites across the continent. The EU also has a
strategy for data and is establishing common European data spaces. 97
Meanwhile, the U.S. effort is unique in its laser focus on AI. The chief driving
force behind the initiative is to drive U.S. innovation and competitiveness in
AI specifically rather than U.S. innovation and competitiveness generally—and
strategic decisions, from what systems should be included, to what data
should be shared and how, to which users should have access, are being
made with this goal in mind.

Recommendations for improvement:


• U.S. policymakers should promote secure, energy-efficient AI
compute. Opponents of creating a national computing and data
resource claim a new resource would consume too much energy,
accelerate climate change, raise serious privacy and security
concerns, increase economic inequality, further entrench big tech
monopolies, and fuel the proliferation of inherently biased AI
systems. 98 While some of the issues raised around data security and
energy use are real and deserve smart, considered responses, most
claims are at best misleading and lack context, and at worst are just
plain wrong and therefore should not shape policy responses. To
address legitimate concerns, policymakers should embrace
pragmatic responses to minimize any negative impacts from creating
the resource, such as minimizing its energy consumption by
prioritizing computationally efficient hardware and algorithms when
designing and developing it. 99

 OSTP and NSF should prioritize the development of tools and metrics
to quantify the AI computing needs and resources of the academic
community. There is little literature on what level and type of compute
AI researchers need. 100 Without this information, policymakers
cannot effectively make decisions about which resources to invest in
or how much to invest.

 The NAIRR Task Force should prioritize providing local AI computing
resources in regions where the gap between AI compute demand and
supply is greatest. As the Center for Data Innovation explained in its
comments to OSTP and NSF, some communities, institutions, and
regions already have ready access to HPC resources, while others are
conducting high levels of AI research but have little access to
powerful systems. 101 There should be demonstrable evidence that
providing access to AI compute in a community, institution, or region
would result in an increase in AI research, because democratizing
access to AI compute is a means to an end, not an end in and of
itself.

PROMOTING GOVERNMENT ADOPTION OF AI


Overall grade: Approaching expectations

Reason: Policy actions are not sufficiently focused on
addressing structural issues that are stalling government
adoption of AI, including approach and culture; financing;
metrics and incentives; procurement; and oversight and
review.

One of the most important things government can do to spur AI is to be a
robust adopter of AI technologies. Beyond improving agency mission delivery,
removing barriers to public-sector adoption of AI would help reduce the
perceived risk of the technology and boost domestic demand for AI
innovation in the private sector.

Congress and the White House have taken important steps recently to
facilitate greater government adoption of AI, but many agencies still face
unique challenges to becoming more AI-mature organizations. Indeed,
deployment of AI in the federal government is relatively low despite 70
percent of public sector IT leaders agreeing that AI is “mission critical.” 102 The
challenges facing agencies include outdated IT infrastructures, limited
funding for capital expenditures, lack of awareness about the technology, and
risk aversion, among others.

One key and oft-cited challenge is a shortage of government workers
equipped to work with AI. While this obstacle is not unique to government,
federal agencies struggle to compete with the private sector in attracting and
retaining AI talent. And as the demand for workers with AI skills increases, the
government has an even harder time recruiting this talent, as the private
sector has greater flexibility to offer more attractive salaries and benefits.
According to a 2019 New York Times article, AI specialists with little industry
experience can make between $300,000 and $500,000 a year in salary and
stock in the private sector, with top names in the field receiving
compensation packages that extend into the millions. 103 The government
cannot match these salaries. Without AI expertise, procurement managers in
government are less able to effectively facilitate AI adoption and government
agencies will be less aware of the ways in which AI could benefit their
missions.

The bipartisan AI Training Act, which passed in December 2020, was an
effort to address this challenge by directing the Office of Management and
Budget (OMB) to establish an AI training program for a variety of federal
employees, with a focus on courses that teach the basics of AI, the ways the
technology can benefit the federal government, and the risks it poses,
particularly privacy and discrimination risks. 104 Similarly, the AI in
Government Act of 2020 directs the Office of Personnel Management (OPM) to
study how to foster the necessary
workforce skills for effective AI adoption within government. This legislation
also established an AI Center of Excellence (AI COE) within the General
Services Administration (GSA) to deploy and scale AI solutions across
government agencies.

To be sure, having federal managers and employees better understand and
care about the process of AI innovation and how to apply it to their work will
help facilitate government adoption of AI, but structural factors play a much
more important role in limiting and enabling innovation across the federal
enterprise. 105 It is usually not the case that federal managers don’t innovate
with AI because they don’t know innovation is useful; they don’t innovate
because there are few rewards and many barriers. Policy actions focused
predominantly on the importance of federal managers embracing innovation
are unlikely to do much to move the needle on large-scale government
adoption of AI. Policymakers should be focused on addressing structural
factors related to approach and culture; financing; metrics and incentives;
procurement; and oversight and review. The U.S. government has great
potential to use AI to improve its public services and gain strategic economic
advantages: in 2021, it ranked highest out of 160 countries in a
government AI readiness index by the consultancy firm Oxford Insights. 106
Policymakers’ inaction on these structural challenges squanders that
potential.

Recommendations for improvement:


 The AI COE situated within GSA should identify 20 to 50 core
processes to be transformed with AI. The challenge of innovation in
the federal government is to innovate on large-scale, core processes,
but senior managers are typically attracted to novel, pilot-scale AI
services that are often useful but do little to change the status quo.
According to a 2017 report by Deloitte, more than 10 percent of the
4.3 billion hours federal government employees work each year is spent
documenting and recording information, and another 10 percent is spent
on monitoring resources or processes and surroundings. 107 The AI COE
should identify the most important core processes in which AI
can make a difference. Ideally, these would be ones where AI would
either lead to significant improvements in customer service and
quality or reductions in cost (to both the government and users of
government services).

 Congress should allow agencies to divert a small share of their
operating budgets to AI innovation projects. Congress should create a
federal analogue to the Small Business Innovation Research
program, which allocates a small share of federal extramural R&D to
small business innovation contracts. The analogue here would be
that Congress could allow agencies to allocate a small share of their
operating budgets (perhaps half a percent) to serve as an internal
innovation seed fund to let agencies start pilot projects more easily.
IT leaders within the federal enterprise such as chief information
officers or chief AI officers should have discretion over these funds to
strengthen their agency roles. The authority could expire after five
years, after which the U.S. Government Accountability Office
(GAO) would assess the results.

 Each federal agency should develop its own AI strategy and appoint a
chief AI officer. One reason certain agencies devote so little attention
to AI is that it is generally not formally recognized as part of agency
agendas or strategic plans. Each agency should explicitly identify
specific steps for how it will connect its data, users, and mission
priorities to support AI transformation, much like the Department of
Defense, the Department of Veterans Affairs, and the Food and Drug
Administration already have. To really coordinate and drive
implementation of AI, each federal agency should consider appointing
a chief AI officer, like the Department of Health and Human Services
has done.

 The AI COE should develop an all-encompassing procurement website for
federal AI contracts. One-stop e-procurement websites and e-quoting
allow private sector firms to easily locate and apply for government
contracts. Currently, most federal contracts for AI services are
awarded to companies concentrated on the East Coast, close to
where federal agencies are located. Indeed, approximately 87
percent of the federal contracts awarded for robotic process
automation went to companies in Virginia and New York. 108 An online
portal for all contracts could help make public procurements
available to firms all across the country, ensuring the government
gets the best services and spreads economic opportunity across the
United States.

 GAO and the Council of the Inspectors General should call out agencies
for not innovating with AI. Rather than looking at waste, fraud, and
abuse alone, these organizations should look at the waste and inertia
that result from a lack of AI innovation. The federal government has been
aware of the need to digitize paper form processing and automate manual
processes since the early 2000s, and slow adoption of AI has cost
taxpayers significant sums of money. They should therefore hold federal
agencies accountable for not innovating. 109

DEVELOPING TECHNICAL STANDARDS
Overall grade: Meeting expectations

Reason: Greater government engagement in international
standards setting is needed to promote the voluntary,
industry-led approach to standards that has been successful
at bolstering AI innovation in the United States.

When one considers policies designed to drive innovation and
competitiveness, those related to standards development and
implementation are often underappreciated, or even ignored. But a robust
ecosystem of standards is foundational to a nation’s ability to effectively
develop and implement AI systems for two key reasons. First, technical
standards for AI, which can encompass a wide variety of issues, including
safety, accuracy, usability, interoperability, security, reliability, data, and even
ethics, can provide developers with clear guidelines for the design of AI
systems. This helps maximize the utility from AI systems by ensuring they can
be easily integrated with other technologies, utilize best practices for
cybersecurity and safety, and adhere to a variety of different technical
specifications. Second, common standards can serve as a mechanism to
evaluate and compare AI systems. For example, in some contexts, there may
be a legal requirement for transparency in a decision-making process, such
as judicial decision-making. 110 However, without clear standards defining
what algorithmic transparency actually is and how to measure it, it can be
prohibitively difficult to objectively evaluate whether a particular AI system
meets these requirements or expectations, or does so better than another
similar system, which discourages the adoption of these technologies. 111

The U.S. approach to standards development for AI follows the general U.S.
standards system, which has been exceptionally successful in generating
technological innovation in the United States. The U.S. standards system
focuses on voluntary consensus standards that are created by private sector
standards development organizations in response to particular needs or
issues identified by industry stakeholders, government, or consumers. For
instance, the Society of Automotive Engineers has developed definitions and
specifications of autonomy that autonomous vehicle manufacturers rely on to
adhere to rules from the Department of Transportation. And a U.S. technology
trade association brought together 50 technology and health organizations in 2020
to establish the first ever standard for AI in health care accredited by the
American National Standards Institute (ANSI), building consensus on such
terminology as “telehealth” and “remote patient monitoring.” 112

There are two categories that broadly describe how standards achieve
adoption in America. De facto standards achieve adoption through
competition among rival standards consortia. Consider the Open Neural
Network Exchange (ONNX) and the Neural Network Exchange Format (NNEF),
two examples of open data exchange protocols developed by private-sector
consortia to enable interoperability between different frameworks for training,
executing, and deploying machine learning models. 113 In the de facto
method, the market informally decides which protocol achieves the dominant
position, typically allowing the one with the best technical merit to win out. De jure
standards are also adopted through consensus, but they are usually
approved and endorsed by formal standards authorities. For instance, NIST, a
nonregulatory federal agency within DOC, has developed and approved a
succession of data format standards for the interchange of fingerprint, facial,
and other biometric information in response to government and market
needs by collaborating with other federal agencies, academia, and
industry. 114

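To illustrate concretely what an exchange format such as ONNX standardizes, the short sketch below exports a toy model built in one framework (PyTorch) to the ONNX format and then executes it with a separate runtime (ONNX Runtime). The model, file name, and package choices are illustrative assumptions, not part of either specification; the point is simply that a common, framework-neutral format lets different tools train, exchange, and run the same model.

```python
# Illustrative sketch only: a toy PyTorch model is exported to the ONNX
# exchange format and then executed by a different runtime (ONNX Runtime).
# Assumes the torch and onnxruntime packages are installed; all names are
# hypothetical.
import torch
import torch.nn as nn
import onnxruntime as ort

# A small stand-in network; any trained model could take its place.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

# Serialize the model graph and weights to the framework-neutral ONNX format.
dummy_input = torch.randn(1, 8)
torch.onnx.export(model, dummy_input, "toy_model.onnx",
                  input_names=["input"], output_names=["output"])

# A separate tool chain loads and runs the same model from the shared format.
session = ort.InferenceSession("toy_model.onnx")
result = session.run(None, {"input": dummy_input.numpy()})
print(result[0].shape)  # (1, 2)
```
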
The role the federal government has played in standards has historically
been limited, primarily focusing on orchestrating and supporting industry-led
efforts through technical assistance with reference materials, data, and
instrumentation. In February 2019, the Trump administration called on NIST
to take a more engaged role in developing AI standards, issuing an executive
order that, among other things, directed the agency to create “a plan for
Federal engagement in the development of technical standards and related
tools in support of reliable, robust, and trustworthy systems that use AI
technologies.” 115 The plan, which was released in August 2019, helped to
establish a federal AI standards coordinator within NIST charged with
gathering and sharing AI standards-related needs and best practices, and to
promote research that underlies technically sound standards for trustworthy
AI. Regarding the latter, NIST drafted “A Taxonomy and Terminology of
Adversarial Machine Learning” in 2019, “Four Principles of Explainable
Artificial Intelligence” in 2020, and “A Proposal for Identifying and Managing
Bias in Artificial Intelligence” in 2021. 116 In response to a directive from
Congress in the 2021 omnibus spending bill, NIST is also developing a
voluntary framework to manage the risks to individuals, organizations, and
society from AI systems. 117

While the generally pluralistic, demand-driven, market-led approach to
standardization remains successful in generating innovation, times have
changed since the first standards organizations were established in the late
19th century. The U.S. economy is no longer localized and agricultural as it
was then and has instead transformed into a globalized, data-driven, and
algorithmic economy in which the ability to use AI is proving critical to firms’
success. The role the government plays in international standards setting is
therefore increasingly critical, as divergent AI standards make it more difficult
and costly for global firms to sell their AI products because it means they
have to reconfigure preexisting design and production processes to suit the
specific standards in different markets and pay royalty fees for providing
products using the local standard. 118 Even worse, divergent standards can
impede the development and deployment of AI systems if stakeholders don’t
coalesce around one widely agreed upon approach. For example, AI firms
(and investors) may choose to reduce or hedge their investments as they wait
and see which standard prevails. 119

DOC, NIST, ANSI, and other agencies are rightly involved in developing
international standards for AI—the United States plays a leading role in the
international standards committee responsible for developing AI standards
(ISO/IEC JTC 1/SC 42)—but U.S. policymakers should more actively counter
state-directed, restrictive, and discriminatory approaches to standards setting
from other countries. The European Union’s AI Act, for example, would
mandate that firms developing or implementing high-risk AI systems use
standards developed and published by two regional organizations: the European
Committee for Standardization (CEN) and the European Committee for
Electrotechnical Standardization (CENELEC).
While mirror agreements between CEN and the International Organization for
Standardization (ISO) and CENELEC and the International Electrotechnical
Commission (IEC), respectively, give priority to the adoption of international
standards as harmonized European standards, Article 41 of the EU’s AI Act
creates legal channels for the EU to develop and apply region-specific
technical specifications where it is determined that relevant standards are
insufficient or do not exist. 120 This presents a clear risk that the EU will
develop specifications outside transparent, consensus-based, and industry-
driven international standards development organizations, to the detriment of
U.S. firms.

Finally, U.S. policymakers are both justifiably concerned about a loss of AI
competitiveness to China—which has established a comprehensive and state-
driven strategy for standards to bolster its own competitiveness—and wary of
the potential for unfair strategic gamesmanship in AI standards-setting
organizations by Chinese actors. In the past, China has intentionally created
domestic standards that differ from prevailing international standards as a
way of favoring Chinese products and keeping out foreign ones, such as in
2003 when it mandated that all wireless devices support the WAPI encryption
standard, which is incompatible with encryption standards used by other
nations. 121 Recently, China has indicated it is seeking more international
alignment. In its national strategy for technical standards released in 2021,
China outlines that it wants to align 85 percent of its domestic standards with
international ones. 122 While many policymakers may look at the target itself
with skepticism, it more broadly signals a move toward harmonization. The
concern among U.S. policymakers, including the Department of Justice, is
that China may intend to use its increased engagement with international
standards organizations to bias global standard development processes in
favor of its own interests, including for AI competitiveness. 123

Recommendations for improvement:


 NIST and DOC should work with the United States Trade
Representative (USTR) to launch an Indo-Pacific Standards Strategy
for AI. 124 The White House has already launched an Indo-Pacific
Economic Framework (IPEF) to strengthen the U.S. relationship with
the region. It should use this opportunity to better connect standards-
making bodies and related government agencies (and relevant
industry experts) on the development and use of standards for AI,
especially given that China’s national strategy for technical standards
calls for more alignment with countries that are participating in the
Belt and Road Initiative and for standards-related dialogues with
members of BRICS and the Asia-Pacific Economic Cooperation
forum. 125

 The National AI Office should work with NIST to create an AI
Standards Hub. The United Kingdom is piloting such a hub, intended
to "create practical tools for business, bring the UK's AI community
together... and develop education materials to help organizations
develop and benefit from global standards." 126 Not only should the
United States pilot its own hub to better enable organizations to
engage in creating technical standards for AI, but it should also
collaborate with the United Kingdom and bolster information sharing.

 The United States should use the U.S.-EU Trade and Technology
Council (TTC) discussions to counter EU proposals to pursue regional
AI standards. Because the EU’s AI Act creates legal loopholes for the
bloc to create and apply region-specific technical specifications for AI
where it is deemed that relevant standards are insufficient, the
United States should use the TTC working group on tech standards to
establish commitments ensuring that any AI standards that are developed
are industry driven and consensus built.

LEGAL AND REGULATORY POLICIES

ENSURING AI REGULATION IS INNOVATION FRIENDLY


Overall grade: Meeting expectations

Reason: Recent policies and rhetoric signal a shift away from
what has been a successful light-touch regulatory approach to
AI.

Designed properly, regulations can spur AI innovation and productivity by
reducing regulatory uncertainty and rewarding beneficial actions. A good
regulatory climate certainly does not simply mean the absence of regulations.
Instead, it is one that supports rather than blocks AI innovators and creates
the conditions to spur ever more innovation and market entry, while at the
same time providing more regulatory flexibility and efficiency for industries in
traded sectors. 127

The U.S. approach to AI regulation is generally sector specific with executive
branch agencies promulgating regulations in their domain. For instance, the
Department of Transportation regulates the use of autonomous vehicles
while the Food and Drug Administration regulates AI-based medical devices.
All agencies go through an extensive public notice and comment period in
which individuals and organizations can submit written comments the
agencies are required to review. This has generally been a strength of the
U.S. system, which enjoys a legislative framework that works to hold
government executive agencies accountable for obtaining public input and
basing rules on evidence.

Congress can sometimes require executive branch agencies to promulgate
regulations or can pass legislation itself. For instance, Senator Cory Booker
(D-NJ), Senator Ron Wyden (D-OR), and Representative Yvette Clarke (D-NY)
introduced the Algorithmic Accountability Act of 2022 in February 2022,
which would direct the Federal Trade Commission (FTC) to develop
regulations requiring large firms to conduct impact assessments for existing
and new high-risk automated decision systems. 128 The number of proposed
bills that relate to AI in the federal legislative record has increased sharply
over the past few years. In 2015, only one federal bill was proposed, while in
2021, there were 130. 129 Still, very few federal-level AI bills are being passed
into law. For instance, only 3 of the 130 bills proposed in 2021 were
passed. 130

In general, the U.S. federal government, more so than any other government,
has adhered to the innovation principle in its early regulation of AI, which
holds that because the overwhelming majority of AI innovations benefit
society and pose modest and not irreversible risks, government’s role should
be to pave the way for widespread innovation while building guardrails, where
necessary, to limit harms. 131 This approach recognizes that market forces,
tort law, existing laws and regulations, or light-touch targeted interventions
can usually manage the risks new AI technologies pose. 132

Under the Trump administration in early 2020, the White House unveiled a
set of principles for AI regulation as a follow-up to a 2019 executive order
titled “Maintaining American Leadership in Artificial Intelligence,” which
outlined a series of steps for the federal government to ensure the United
States remains at the forefront of technological innovation. The 10 principles
outlined what federal agencies should take into consideration when crafting
their approaches to AI:

• Promote “reliable, robust, and trustworthy” AI
• Provide opportunities for the public to weigh in during the rulemaking
process
• Hold information to high standards of “quality, transparency, and
compliance”
• Assess and manage risks recognizing that all activities involve trade-
offs
• Seek to maximize the “net benefits” of AI
• Prioritize adaptability in order to keep up with rapid technological
advancement
• Be mindful of the potential for discrimination and bias
• Weigh existing and potential new measures for transparency and
disclosure
• Consider safety and security throughout the development and
deployment process
• Take a “whole-of-government approach” to ensure consistency and
predictability of AI-related policies. 133

OMB issued guidance in November 2020 reaffirming these 10 principles the
White House drafted, while also reflecting a shift from principles to practice
by establishing a framework for federal agencies to assess potential
regulatory and nonregulatory approaches to emerging AI issues. For example,
the new guidance instructs agencies to precede any regulatory action with an
impact analysis that clearly articulates the problem an agency is seeking to
address, whether it be a market failure (e.g., asymmetric information),
protecting privacy or civil liberties, preventing unlawful discrimination, or
advancing the United States’ economic and national security. 134 While
indicating the potential for limited, focused regulations in certain areas, the
guidance promotes a governance framework that requires agencies to
impose regulation only when the benefits of doing so outweigh the costs to
AI-driven innovation and growth. The guidelines instruct agencies, when
deciding whether and how to regulate in an area that may affect AI
applications, to “adopt a tiered approach in which the degree of risk and
consequences of both success and failure of the technology determines the
regulatory approach, including the option of not regulating.” 135

President Biden has indicated a shift from this approach, supporting stronger
regulations. In April 2021, the FTC published a widely noted blog post on how
companies can use AI “truthfully, fairly, and equitably” and shortly after
began a rule-making process “to curb lax security practices, limit privacy
abuses, and ensure that algorithmic decision-making does not result in
unlawful discrimination.” 136 Further signaling its shift toward a greater focus
on issues of AI harm, the FTC appointed Meredith Whittaker, cofounder of the
AI Now Institute, who has written that “the vast majority of AI systems and
related technologies are being put in place with minimal oversight, few
accountability mechanisms, and little information about their broader
implications,” to serve as a senior advisor on AI to FTC Chair Lina Khan. 137
Moreover, the president’s science advisor and director of OSTP began
developing an AI Bill of Rights in October 2021 based on the premise that
current AI and biometric technologies have led to serious problems regarding
discrimination and bias. 138

Recommendations for improvement:


 Policymakers should pursue an innovation-friendly framework built
around the principle of “algorithmic accountability” in which the
operators of algorithms are held accountable for explicit and severe
harms. The framework advocates that governments hold companies
accountable for the outcomes of the AI they use by discerning
whether there was injury, whether the operator had sufficient controls
to verify its AI worked as intended, and whether the operator rectified
harmful outcomes. 139

 Congress and the administration should support increasing the
technical expertise of regulators and policymakers. Regulators should
foster relationships with communities of developers, academics, civil
society groups, and private-sector organizations invested in
algorithmic decision-making to stay abreast of technical
developments and concerns about algorithmic harms that could
influence how algorithmic accountability is achieved or enforced. This
requires ensuring regulators have the resources to hire staff with the
necessary technical expertise to scrutinize algorithms. 140

 Policymakers should continue the tried-and-true approach of
addressing AI concerns by sector. U.S. policymakers should recognize
that AI is a tool, and the locus of regulation should not be the tool but
rather the application of the tool. The focus should be on a discrete
number of sector-specific applications, with regulations tailored to
prevent specific harms.

 Congress and the administration should caution regulators against
viewing the mere act of collecting or possessing large amounts of
data (which is necessary for specific uses of AI) as anticompetitive
behavior. 141

CULTIVATING STRONG INTELLECTUAL PROPERTY (IP) RIGHTS
Overall grade: Approaching expectations

Reason: There are uncertainties in the IP system for AI that
are hindering innovation.

IP rights have long been recognized as fostering innovation. The idea is that
those who combine the spark of imagination with the grit and determination
to see their vision become reality in books, technology, medicines, designs,
sculpture, services, and more deserve the opportunity to reap the benefits of
their innovation—and that these rewards incentivize more creative output. 142
While the United States is a global leader in IP rights protection, certain
developments—both domestically and internationally—are creating some
uncertainty for global entities. 143 In particular, the advent of AI raises the
prospect that some works are now the direct output of computer systems,
including some operating autonomously. Jurisdictions around the world are
divided on how to handle the “AI inventor” while continuing to enable
innovation. 144

U.S. policy on IP rights as they relate to AI has focused predominantly on two
questions: whether AI-created works are eligible for protections, and if they
are, who should be recognized as the author or inventor with controlling
rights. In general, the U.S. Patent and Trademark Office (USPTO), which is
responsible for granting patents and trademarks, has rightly recognized that
AI is a tool and that the owner and operator of the AI system should be the
default owner of any IP it produces. It stated, in a recent report, “AI inventions
should not be treated any differently than other computer-implemented
inventions. This is consistent with how the USPTO examines AI inventions
today. AI inventions are treated like all other inventions that come before the
Office.” 145 The U.S. Copyright Office, which registers copyrights, requires a
minimum threshold of human creativity for a work to qualify for copyright
protection and will not grant a registration for a wholly AI-generated
work. 146

That does not mean the overall IP system does not need reform in light of AI.
For one, offices and courts are facing challenges deciding which AI-based
inventions are patent eligible under the law, which becomes more
problematic as the volume and share of AI patents in the United States
increase rapidly (figure 5).

Figure 5: The volume and share of public AI patent applications 147

Patent eligibility in the United States is based on section 101 of the Patent
Act, which extends protection to any new and useful process, machine,
manufacture, or composition of matter. However, the Supreme Court, through a
series of landmark cases, has
identified three categories of work that are judicial exceptions, meaning they
are excluded from this broad conception of eligible subject matter. These are
laws of nature, natural phenomena, and abstract ideas. Because AI patents
generally rely strongly on mathematical relationships and algorithms, they
may be considered abstract ideas under patent law. Patent examiners, who
must determine whether an AI invention is patentable, are hindered by such
uncertainties and in turn have rejected a significant portion of AI patents that
should have received protections. To address this issue, the USPTO issued
guidance in 2019 that clarifies what would be considered ineligible concepts
and provides examples to guide the examination process. According to
former USPTO director Andrei Iancu, the guidance has cut rejection rates for
AI patent applications from 60 percent to 32 percent.

Still, because of the unpredictable and uncertain nature of the U.S. patent
system, many AI innovators turn to trade secrets to protect their work. 148
Trade secrets have a number of advantages over other IP. The information
protected by trade secrets does not need to be novel, is protectable
immediately without the cost or lengthy registration timelines other IP rights
require, and is protected for as long as the information is commercially
valuable and can be maintained as secret. 149 However, the key distinction
between trade secrets and other forms of IP that is relevant to AI innovation
is that trade secrets work by protecting information that is undisclosed, as
opposed to patents, for example, where the IP is publicly disclosed. According
to NSCAI, because trade secrets do not contribute to accessible technical
knowledge in the public domain, they may hinder AI innovation in the long
term more than they bolster it. 150 Another disadvantage is that trade secrets do
not offer protection from reverse engineering.

Recommendations for improvement:


 The USPTO should reassess any obstacles that may be hindering
patent examiners in their examination of AI patent applications. As it did
in 2019,
the USPTO should continue to revise guidelines to streamline the
process as much as possible.

 Congress should direct both the secretary of Commerce and the under
secretary of Commerce for Intellectual Property to review the impact
of trade secrets on AI innovation. They should evaluate any reforms to
IP policies and regimes that may be needed to incentivize, expand,
and protect AI and work with the director of the USPTO to obtain AI
patent-related data.

 Congress and the White House should work with the USPTO,
Copyright Office, State Department, and any other relevant agencies
to craft a national strategy for AI that cultivates strong IP rights.

FOSTERING AI DEVELOPMENT THROUGH TRADE POLICIES


Overall grade: Approaching expectations

Reason: More agreements protecting cross-border data flows and
export controls that focus on AI chip manufacturing equipment are
needed.

CROSS-BORDER DATA FLOWS


The practices of other countries can have a significant impact on how
effectively U.S. firms can develop and deploy AI. In particular, efforts to
restrict how data can move across borders limit the amount of data at the
disposal of U.S. businesses innovating with AI. The number of these types of
measures in force around the world has more than doubled in four years. In
2017, 35 countries had implemented 67 barriers to restrict the free flow of
certain kinds of valuable data, including certain kinds of financial data,
personal data, and data from emerging digital services such as online
publishing. In 2021, 62 countries had imposed 144 restrictions—and dozens
more were under consideration. 151 The justifications for these measures are
often seemingly legitimate, such as those to preserve privacy and security,
but rules that require data to be stored domestically do not guarantee
either. 152 In reality, the primary motivation behind these approaches is
mercantilist in nature, designed to prop up domestic industries at the
expense of productivity. 153

The United States has had mixed success in protecting cross-border data
flows in past trade agreements. The Trans-Pacific Partnership (TPP), now the
Comprehensive and Progressive Agreement for Trans-Pacific Partnership
(CPTPP), was the first international trade agreement with explicit language
governing the flow of data across borders. The CPTPP includes prohibitions
against localization requirements that would force businesses to build data
storage centers or use local computing facilities when providing digital
services; protections for proprietary software source code; and a commitment
to cooperate on cybersecurity through coordinated national computer
emergency response teams. 154 Unfortunately, the United States withdrew
from the agreement in 2017, and the remaining 11 nations—Australia,
Brunei, Canada, Chile, Japan, Malaysia, Mexico, New Zealand, Peru,
Singapore, and Vietnam—forged ahead with a deal that concluded in 2018. It
appears unlikely the Biden administration will rejoin given residual political
opposition to it. 155 However, the United States secured protections for cross-
border data flows in the United States-Mexico-Canada Agreement (USMCA),
which came into force in July 2020. 156 In addition to preventing parties from
enacting protectionist data localization requirements, the USMCA includes
protections for algorithmic source code and promotes the publication of open
government data. Regarding the latter, the deal does not require parties to
publish open government data but instead supports the availability of
valuable open data as a public resource that can spur AI development. 157
These sorts of data-related provisions are important for AI development and
should serve as a model for future trade negotiations.

In the same month the USMCA came into force, the European Court of Justice
invalidated the EU-U.S. Privacy Shield, which thousands of
organizations relied on to legally transfer data abroad for operations,
customer service, communications, R&D, and human resources. 158 While
both the EU and the United States have agreed on legal tools to establish
transatlantic data flows in the past—initially the U.S.-EU Safe Harbor in 2000,
and more recently the EU-U.S. Privacy Shield—EU courts have undermined
these efforts twice with the Schrems I and Schrems II rulings. If policymakers
do not create an alternative to the EU-U.S. Privacy Shield, firms from a broad
range of sectors on both sides of the Atlantic will suffer. 159

In October 2021, President Biden announced the United States’ intention to
explore the development of an IPEF to strengthen U.S. ties in the Asian
region. While the IPEF will not be a trade agreement, it will include trade
commitments and is therefore an opportunity for the United States to create
frameworks for data sharing and data trust that support AI.

Recommendations for improvement:


 The United States should use the IPEF to support the development of
joint data trusts and other data-sharing models to improve the quality
(and quantity) of the data that is a key input to AI. The IPEF presents
an opportunity for the United States and its trading partners to
identify, develop, and support data-sharing models that organizations in
many sectors will not develop on their own. 160

 The United States and EU should conclude a new Privacy Shield
framework to guarantee the free flow of data across the two
jurisdictions. Without such an agreement, the entire transatlantic
digital economy risks fracturing in the coming years as courts strike
down ever-greater numbers of data flow arrangements. Any such
agreement should also clarify the legal definition of “personal data”
under Article 4(1) of the General Data Protection Regulation
(GDPR). 161

 The United States Trade Representative (USTR) should continue to
fight source code disclosure requirements other nations may enact to
unfairly disadvantage U.S. firms or exploit their IP.

AI CHIPS
Trade disputes can put a nation’s ability to secure semiconductors, including
AI chips, at risk. Having access to state-of-the-art AI chips is important to
ensure AI developers and users can remain competitive in AI R&D and
deployment. There is already an emerging set of AI chips that are specialized
for different tasks, which fall broadly into three categories. The first is
graphics processing units (GPUs), which are mostly used to train and develop
AI algorithms. The second is field programmable gate arrays (FPGAs), which
are mostly used to apply trained AI algorithms to new data inputs. FPGAs are
different from other AI chips because their architecture can be modified by
programmers after fabrication. 162 The third group of AI chips is application-
specific integrated circuits (ASICs), which can be used for either training or
inference tasks. ASICs have hardware that is customized for a specific
algorithm and typically provides more efficiency than FPGAs do, but because
they are so narrow in their application, they grow obsolete more quickly as
new AI algorithms are created.

There is increasing demand from AI developers for specialized chips that are
more efficient for AI because the rate of improvements in traditional
processing chips is getting slower as the ability to pack more transistors onto
a single processor is beginning to reach its physical limits. Fortunately, the
United States is still the world leader in designing chips for AI systems. The
Center for Data Innovation’s 2021 report Who Is Winning the AI Race: China,
the EU, or the United States? — 2021 Update finds that at least 62 firms in
the United States are developing AI chips, compared with 29 firms in China
and 14 in the European Union. 163 The United States has many advantages for
AI chip production, including high-quality infrastructure and logistics,
innovation clusters, leading universities, and a history of leadership in the
field. Moreover, Chinese AI chip firms are reliant on U.S. electronic design
automation software, which is the category of software tools for designing
electronic systems such as integrated circuits. 164

However, continued leadership is not guaranteed. China has targeted the
industry for a global competitive advantage, as detailed in a number of
government plans, including “Made in China 2025,” and while some of its
policy actions are fair and legitimate, many seek to unfairly benefit Chinese
firms at the expense of more-innovative foreign firms. 165 Even though some
argue it should not matter where AI chips are fabricated so long as U.S.
companies have access to the ones they need, it matters for a multitude of
economic and national security reasons, including that the industry supports
hundreds of thousands of U.S. jobs, both directly and indirectly, and that AI is
critical to the Department of Defense’s mission. 166 To keep America’s AI chip
industry competitive, Congress needs to pass two critical pieces of legislation
that would support the United States manufacturing more semiconductors
domestically: the Creating Helpful Incentives to Produce Semiconductors
(CHIPS) Act, which is part of USICA, and the Facilitating American-Built
Semiconductors (FABS) Act.

Recommendations for improvement:


 The United States should coordinate the development of AI chips with
like-minded countries. Successfully innovating in the semiconductor
sector requires an expense and scale that are difficult for any one
country to shoulder alone. As ITIF explained in An Allied Approach to
Semiconductor Leadership, the United States should coordinate
technology development with its allies. This could include establishing
Manufacturing USA Institute(s) to support AI chip industry
innovation—in activities including R&D, manufacturing, and
packaging—and inviting participation by semiconductor enterprises
headquartered in like-minded nations. 167

 The United States should coordinate export controls of AI chip
manufacturing equipment with its allies. Overly broad export controls
on AI technologies, such as general-purpose AI software, can delay
U.S. firms in getting innovative products to market, thus harming their
competitiveness. 168 Policymakers should be pursuing tailored export
controls on application-specific AI software and dual-use datasets.
However, they should note that existing export control regimes already
adequately protect many of these. The area where new export control
regulations are likely to be most effective is on AI chip manufacturing
equipment, where the United States and its allies dominate the
market. 169 By controlling the export of semiconductor equipment, the
United States can better protect against unfair replication, illicit
transfer, and theft of its semiconductor technology. The United States
should coordinate the development of controls with its allies because
export control regimes are most successful when they are
coordinated internationally. 170

Endnotes
1 Daniel Zhang et al., “The AI Index 2022 Annual Report” (AI Index Steering
Committee, Stanford Institute for Human-Centered AI, Stanford University,
March 2022), https://aiindex.stanford.edu/wp-
content/uploads/2022/03/2022-AI-Index-Report_Master.pdf.
2 Mark Zachary Taylor, The Politics of Innovation: Why Some Countries Are Better
Than Others at Science and Technology (New York: Oxford University Press,
2011), 83.
3 Ibid.
4 Neil C. Thompson, “Building the algorithm commons: Who discovered the
algorithms that underpin computing in the modern enterprise?” Wiley
(September 2020),
https://onlinelibrary.wiley.com/doi/epdf/10.1002/gsj.1393.
5 The National Artificial Intelligence Research and Development Strategic Plan
(Washington, D.C.: Networking and Information Technology Research and
Development Subcommittee, October 2016),
https://www.nitrd.gov/PUBS/national_ai_rd_strategic_plan.pdf.
6 The National Artificial Intelligence Research and Development Strategic Plan: 2019
Update (Washington, D.C.: Networking and Information Technology Research
and Development Subcommittee, June 2019),
https://www.nitrd.gov/pubs/National-AI-RD-Strategy-2019.pdf.
7 The Networking & Information Technology Research & Development Program,
Supplement to The President’s FY2021 Budget, National Science &
Technology Council (Aug. 14, 2020), https://www.nitrd.gov/pubs/FY2021-
NITRD-Supplement.pdf.
8 Ibid.
9 U.S. Department of Commerce, “President Biden’s Fiscal Year 2023 Budget Calls
for Critical Investments in Key Commerce Priorities,” press release, March
2022, https://www.commerce.gov/news/press-
releases/2022/03/president-bidens-fiscal-year-2023-budget-calls-critical-
investments-key?.
10 Robert D. Atkinson, “The Senate Proposal for a New NSF Directorate Is Superior to
the House Alternative” (ITIF, January 2022),
https://itif.org/publications/2022/01/28/senate-proposal-new-nsf-
directorate-superior-house-alternative.
11 National Security Commission on Artificial Intelligence (NSCAI) Final Report
(Washington D.C.: NSCAI Commission Members, March 2021),
https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-
1.pdf.
12 Jess Whittlestone and Jack Clark, “Why Governments Should Monitor AI
Development” (Centre for the Study of Existential Risk, August 2021),
https://www.cser.ac.uk/news/paper-why-and-how-governments-should-
monitor-ai-de/.
13 Robert D. Atkinson, “The Case for Repealing the R&D Amortization Provision in the
2017 Tax Cuts and Jobs Act” (ITIF, September 2021),
https://itif.org/publications/2021/09/07/case-repealing-rd-amortization-
provision-2017-tax-cuts-and-jobs-act.
14 John Lester and Jacek Warda, “Enhanced Tax Incentives for R&D Would Make
Americans Richer” (ITIF, September 2020),
https://itif.org/publications/2020/09/08/enhanced-tax-incentives-rd-
would-make-americans-richer.
15 Daniel Zhang et al., “The AI Index 2021 Annual Report” (AI Index Steering
Committee, Stanford Institute for Human-Centered AI, Stanford University,
March 2021), https://aiindex.stanford.edu/wp-
content/uploads/2022/03/2022-AI-Index-Report_Master.pdf.
16 John Lester and Jacek Warda, “Enhanced Tax Incentives for R&D Would Make
Americans Richer” (ITIF, September 2020),
https://itif.org/publications/2020/09/08/enhanced-tax-incentives-rd-
would-make-americans-richer.
17 Robert D. Atkinson, “The Case for Repealing the R&D Amortization Provision in the
2017 Tax Cuts and Jobs Act.”
18 “Janet Yellen Testimony on Economic Recovery, 2022 Budget Transcript,” last
modified June 16, 2021, https://www.rev.com/blog/transcripts/janet-yellen-
testimony-on-economic-recovery-2022-budget-transcript.
19 John Lester and Jacek Warda, “Enhanced Tax Incentives for R&D Would Make
Americans Richer.”
20 Robert D. Atkinson, “Effective Corporate Tax Reform in the Global Innovation
Economy” (ITIF, July 2009),
https://itif.org/publications/2009/07/19/effective-corporate-tax-reform-
global-innovation-economy.
21 Mark Zachary Taylor, The Politics of Innovation.
22 Michael Storper and Anthony J. Venables, “Buzz: face-to-face contact and the
urban economy,” Journal of Economic Geography, Volume 4, Issue 4 (August
2004), 351–370, https://doi.org/10.1093/jnlecg/lbh027.
23 Robert D. Atkinson, Mark Muro, and Jacob Whiton, “The Case for Growth Centers:
How to spread tech innovation across America” (ITIF and Brookings,
December 2019), https://www2.itif.org/2019-growth-centers.pdf.
24 Diana Gehlhaus and Ilya Rahkovsky, “U.S. AI Workforce Labor Market Dynamics”
(CSET, April 2021), https://cset.georgetown.edu/wp-content/uploads/CSET-
U.S.-AI-Workforce-Labor-Market-Dynamics.pdf.
25 Author’s calculations using county-level occupations from American Communities
Survey, “EEO 1W. Detailed Census Occupation by Sex And Race/Ethnicity For
Worksite Geography” (U.S. Census Bureau, 2018), accessed via
https://data.census.gov/cedsci/. AI workforce here is the sum of technical
team, product team, and commercial team occupations that are either
working directly with or capable of working with AI. Occupation codes
(Census EEO codes) for each team are selected based on previous
methodology from Diana Gehlhaus and Santiago Mutis, “The U.S. AI
Workforce: Understanding the Supply of AI Talent” (Center for Security and
Emerging Technology, 2021), https://cset.georgetown.edu/wp-
content/uploads/CSET-U.S.-AI-Workforce-Labor-Market-Dynamics.pdf.
26 Mark Muro and Sifan Liu, “The geography of AI: Which Cities Will Drive the Artificial
Intelligence Revolution?” (Brookings, August 2021),
https://www.brookings.edu/wp-content/uploads/2021/08/AI-
report_Full.pdf.
27 National Science Foundation, “NSF partnerships expand National AI Research
Institutes to 40 states,” news release, July 29, 2021,
https://www.nsf.gov/news/news_summ.jsp?cntn_id=303176&WT.
28 Hodan Omaar and Daniel Castro, “Comments to OSTP and NSF on a National AI
Research Resource (NAIRR),” September 28, 2021,
https://datainnovation.org/2021/09/comments-to-the-ostp-and-nsf-on-a-
national-airesearch-resource-nairr/.
29 Ibid.
30 Dahlia Peterson, Kayla Goode, and Diana Gehlhaus, “AI Education in China and the
United States” (CSET, September 2021), https://cset.georgetown.edu/wp-
content/uploads/CSET-AI-Education-in-China-and-the-United-States-1.pdf.
31 Ibid.
32 Katie Hendrickson et al., “2021 State of computer science education: Accelerating
action through advocacy,” https://advocacy.code.org/stateofcs.
33 “7th-9th Online Courses Artificial Intelligence,” North Carolina School of Science
and Mathematics website, accessed March 15, 2022,
https://www.ncssm.edu/summer-programs/accelerator/accelerator-7th-
9th/7th-9th-online-courses/artificial-intelligence.
34 Alia Malik, “New Gwinnett high school brings tech curriculum update,” The Atlanta
Journal Constitution, December 22, 2020, https://www.ajc.com/news/new-
gwinnett-high-school-brings-tech-curriculum-
update/WEBH6NEN5JEDJPL45HNCOLEBV4/.
35 Taylor Denman, “What will a theme-cluster look like? A deep dive into Seckinger
High School's AI curriculum,” Gwinnett Daily Post, February 28, 2020
https://www.gwinnettdailypost.com/local/what-will-a-theme-cluster-look-like-
a-deep-dive-into-seckinger-high-schools-ai/article_17ccb0ea-5814-11ea-
8495-c3b589b5cfbe.html.
36 United Nations Educational, Scientific and Cultural Organization (UNESCO), “K-12
AI Curricula: A mapping of government-endorsed AI curricula” (Paris:
UNESCO, 2022),
https://unesdoc.unesco.org/ark:/48223/pf0000380602/PDF/380602eng.
pdf.multi.
37 Dahlia Peterson, Kayla Goode, and Diana Gehlhaus, “AI Education in China and the
United States.”
38 UNESCO, “K-12 AI Curricula.”
39 Claire Perkins and Kayla Goode, “U.S. AI Summer Camps” (CSET, August 2021),
https://cset.georgetown.edu/publication/u-s-ai-summer-camps/.
40 ”Artificial Intelligence and Machine Learning,” iDTech website, accessed February
14, 2022, https://www.idtech.com/courses/artificial-intelligence-and-
machine-learning.
41 “Microsoft DigiGirlz,” Microsoft website, accessed February 14, 2022,
https://www.microsoft.com/en-us/diversity/programs/digigirlz/default.aspx.
42 ”CS First,” Google CS website, accessed February 14, 2022,
https://csfirst.withgoogle.com/s/en/home.
43 Cade Metz, “A.I. Researchers Are Making More Than $1 Million, Even at a
Nonprofit,” The New York Times, April 19, 2018,
https://www.nytimes.com/2018/04/19/technology/artificial-intelligence-
salaries-openai.html.
44 Natasha Singer, “The Hard Part of Computer Science? Getting Into Class,” The New
York Times, January 24, 2019,
https://www.nytimes.com/2019/01/24/technology/computer-science-
courses-college.html.
45 Bryon Spice, “Carnegie Mellon Launches Undergraduate Degree in Artificial
Intelligence,” Carnegie Mellon University News, May 10, 2018,
https://www.cmu.edu/news/stories/archives/2018/may/ai-undergraduate-
degee.html.
46 “2018-19 Changes to the CS Major,” Swarthmore website, accessed January 31,
2022,” https://www.swarthmore.edu/computer-science/2018-19-changes-
to-cs-major.
47 “FAQs About UMD's Computer Science Limited Enrollment Program,” University of
Maryland website, accessed January 31, 2022,
https://cmns.umd.edu/undergraduate/admissions/new-faqs-computer-
science-limited-enrollment.
48 University of Maryland, College Park, Office of the President, “Differential Tuition,”
accessed March 31, 2022,
https://admissions.umd.edu/finance/differential-tuition; “Additional Tuition
Information: Supplemental tuition information for business, engineering and
computer science majors,” University of Maryland website, accessed March
31, 2022, https://www.admissions.umd.edu/costs/DifferentialTuition.php.
49 Adams Nager and Robert D. Atkinson, “The Case for Improving U.S. Computer
Science Education” (ITIF, May 2016), https://www2.itif.org/2016-computer-
science-education.pdf.
50 Ibid.
51 National Center for Education Statistics website, “What are the most popular
majors for postsecondary students?” accessed June 9, 2022,
https://nces.ed.gov/fastfacts/display.asp?id=37.
52 Paul M. Romer, “Should the Government Subsidize Supply or Demand in the
Market for Scientists and Engineers?” National Bureau of Economic
Research (June 2000), DOI: 10.3386/w7723.
53 Dahlia Peterson, Kayla Goode, and Diana Gehlhaus, “AI Education in China and the
United States.”
54 “AI Across the Curriculum,” University of Florida website, accessed March 12,
2022,” https://ai.ufl.edu/.
55 Ibid.
56 Remco Zwetsloot, “Strengthening the U.S. AI Workforce (CSET, September 2019),
https://cset.georgetown.edu/publication/strengthening-the-u-s-ai-
workforce/.
57 Diana Gehlhaus and Ilya Rahkovsky, “U.S. AI Workforce Labor Market Dynamics”
(CSET, April 2021).
58 Ibid.
59 Murat Krystal, “The upskilling imperative: Building a future-ready workforce for the
AI age” (Deloitte, 2020),
https://www2.deloitte.com/content/dam/Deloitte/ca/Documents/deloitte-
analytics/ca-covid19-upskilling-EN-AODA.pdf.
60 Alex McFarland, “5 ambitious private sector initiatives preparing the workforce for
a future of AI,” Ross Dawson blog, accessed May 23, 2020,
https://rossdawson.com/futurist/implications-of-ai/5-ambitious-private-
sector-initiatives-preparing-the-workforce-for-a-future-of-ai/.
61 Robert D. Atkinson, “How to Reform Worker-Training and Adjustment Policies for
an Era of Technological Change” (ITIF, February 2018),
https://www2.itif.org/2018-innovation-employment-workforce-policies.pdf.
62 The Financial Times Editorial Board, “The great resignation is not going away,”
Financial Times, February 1, 2022, https://www.ft.com/content/857bdeba-
b61b-4012-ab82-3c9eb19506df.
63 Robert D. Atkinson, “How a knowledge tax credit could stop decline in corporate
training,” The Hill, September 3, 2015, https://thehill.com/blogs/pundits-
blog/finance/235018-how-a-knowledge-tax-credit-could-stop-decline-in-
corporate/.
64 Joseph Kennedy, Daniel Castro, and Robert D. Atkinson, “Why It’s Time to Disrupt
Higher Education by Separating Learning From Credentialing” (ITIF, August
2016), https://www2.itif.org/2016-disrupting-higher-education.pdf.
65 National Defense Authorization Act for Fiscal Year 2021, Sec. 5301(d).

66 Diana Gehlhaus et al., “U.S. AI Workforce: Policy Recommendations” (CSET,
October 2021), https://cset.georgetown.edu/publication/u-s-ai-workforce-
policy-recommendations.
67 Joshua New, “Why the United States Needs a National Artificial Intelligence
Strategy and What It Should Look Like” (Center for Data Innovation,
December 2018), http://www2.datainnovation.org/2018-national-ai-
strategy.pdf.
68 Ibid.
69 Tina Huang, Zachary Arnold, and Remco Zwetsloot, “Most of America’s ‘Most
Promising’ AI Startups Have Immigrant Founders” (CSET, October 2020),
https://cset.georgetown.edu/publication/most-of-americas-most-promising-
ai-startups-have-immigrant-founders/; Tina Huang and Zachary Arnold,
“Immigration Policy and the Global Competition for AI Talent” (CSET, June
2020), https://cset.georgetown.edu/publication/immigration-policy-and-the-
global-competition-for-ai-talent/.
70 Based on information from Tina Huang and Zachary Arnold, “Immigration Policy
and the Global Competition for AI Talent” (CSET, June 2020).
71 U.S. Citizenship and Immigration Services, “H-1B Electronic Registration Process,” last
updated March 11, 2022, https://www.uscis.gov/working-in-the-united-
states/temporary-workers/h-1b-specialty-occupations-and-fashion-
models/h-1b-electronic-registration-process.
72 John R. Dearie, “An ‘entrepreneur visa’ can help us keep our competitive edge,”
The Hill, April 10, 2019, https://thehill.com/opinion/immigration/438195-
an-entrepreneur-visa-can-help-us-keep-our-competitive-edge/.
73 Immigration Act of 1990, S.358, 101st Congress (1989–1990).
74 Stuart Anderson, “Biden Keeps Costly Trump Visa Policy Denying Chinese Grad
Students,” Forbes, August 10, 2021,
https://www.forbes.com/sites/stuartanderson/2021/08/10/biden-keeps-
costly-trump-visa-policy-denying-chinese-grad-students/?sh=913f1ee36419.
75 Remco Zwetsloot et al., “Keeping Top AI Talent in the United States” (CSET,
December 2019), https://cset.georgetown.edu/wp-
content/uploads/Keeping-Top-AI-Talent-in-the-United-States.pdf.
76 William A. Kandel, “The Employment-Based Immigration Backlog” (Congressional
Research Service, March 2020),
https://www.everycrsreport.com/files/20200326_R46291_a4731eeb00f3
9050b6df66d326d0356438414f03.pdf.
77 Edward Wong and Julian E. Barnes, “U.S. to Expel Chinese Graduate Students With
Ties to China’s Military Schools,” The New York Times, May 28, 2020,
https://www.nytimes.com/2020/05/28/us/politics/china-hong-kong-trump-
student-visas.html.
78 Daniel Zhang et al., “The AI Index 2022 Annual Report.”
79 Zachary Arnold et al., “Immigration Policy and the U.S. AI Sector” (CSET, September
2019), https://cset.georgetown.edu/wp-
content/uploads/CSET_Immigration_Policy_and_AI.pdf.
80 The White House, “FACT SHEET: Biden-Harris Administration Actions to Attract
STEM Talent and Strengthen our Economy and Competitiveness,” press
release, January 21, 2022, https://www.whitehouse.gov/briefing-
room/statements-releases/2022/01/21/fact-sheet-biden-harris-
administration-actions-to-attract-stem-talent-and-strengthen-our-economy-
and-competitiveness/.
81 William A. Kandel, “The Employment-Based Immigration Backlog.”

82 “Indian technology talent is flocking to Canada,” The Economist, December 22,
2018, https://www.economist.com/business/2018/12/22/indian-
technology-talent-is-flocking-to-canada.
83 Nicole Goodkind, “Immigrant Founders of Multi-Million Dollar Startups Struggle To
Remain in the U.S.,” Fortune, December 11, 2019,
https://fortune.com/2019/12/11/immigrant-startup-founders-struggle-to-
remain-in-us/.
84 Somini Sengupta, “Countries Seek Entrepreneurs From Silicon Valley,” The New
York Times, June 5, 2013,
https://www.nytimes.com/2013/06/06/technology/wishing-you-and-your-
start-up-were-here.html.
85 Picture taken by Matthew Hertz,
https://twitter.com/mattahertz/status/987333483768045574.
86 Michelle Hackman, “Foreign Entrepreneurs to Gain More Access to Immigration
Program,” The Wall Street Journal, May 10, 2021,
https://www.wsj.com/articles/foreign-entrepreneurs-to-gain-more-access-to-
immigration-program-11620647100.
87 Danilo Zak, “Analysis: Immigration Provisions in the America COMPETES Act,”
National Immigration Forum, March 29, 2022,
https://immigrationforum.org/article/analysis-immigration-provisions-in-the-
america-competes-act/.
88 Tina Huang, Zachary Arnold, and Remco Zwetsloot, “Most of America’s ‘Most
Promising’ AI Startups Have Immigrant Founders.”
89 Tina Huang and Zachary Arnold, “Immigration Policy and the Global Competition for
AI Talent.”
90 Jessica Hamzelou, “AI system is better than human doctors at predicting breast
cancer,” New Scientist, January 1, 2020,
www.newscientist.com/article/2228752-ai-system-is-better-than-human-
doctors-at-predicting-breast-cancer/.
91 Hodan Omaar and Daniel Castro, “Comments to OSTP and NSF on a National AI
Research Resource (NAIRR).”
92 Hodan Omaar, “How the United States Can Increase Access to Supercomputing”
(Center for Data Innovation, December 2020),
https://www2.datainnovation.org/2020-how-the-united-states-can-increase-
access-to-supercomputing.pdf.
93 “Introducing Meta’s Next-Gen AI Supercomputer,” Meta website, last modified
January 24, 2022, https://about.fb.com/news/2022/01/introducing-metas-
next-gen-ai-supercomputer/.
94 “Request for Information (RFI) on an Implementation Plan for a National Artificial
Intelligence Research Resource,” Federal Register, July 23, 2021,
https://www.federalregister.gov/documents/2021/07/23/2021-
15660/request-for-information-rfi-on-an-implementation-plan-for-a-national-
artificial-intelligence.
95 “About the Task Force,” National AI Research Resource website, accessed April 6,
2022, https://www.ai.gov/nairrtf/.
96 EuroHPC website, accessed September 17, 2021, https://eurohpc-ju.europa.eu/.
97 Common European Data Spaces website, accessed April 4, 2022,
http://dataspaces.info/common-european-data-spaces/#page-content.
98 Hodan Omaar, “Innovation Wars: Episode AI – The Techlash Strikes Back” (Center
for Data Innovation, January 2022),
https://datainnovation.org/2022/01/innovation-wars-episode-ai-the-
techlash-strikes-back/.

99 Ibid.
100 Khari Johnson, “Why the OECD wants to calculate the AI compute needs of
national governments,” VentureBeat, January 26, 2021,
https://venturebeat.com/2021/01/26/why-the-oecd-wants-to-calculate-the-
ai-compute-needs-of-national-governments/.
101 Hodan Omaar, “Comments to OSTP and NSF on a National AI Research Resource
(NAIRR).”
102 Eric Egan, “Federal IT Modernization Needs a Strategy and More Money” (ITIF,
May 2022), https://www2.itif.org/2022-federal-it-modernization.pdf.
103 Cade Metz, “A.I. Researchers Are Making More Than $1 Million, Even at a
Nonprofit.”
104 AI Training Act of 2021, S.2551, 117th Congress (2021–2022).
105 Robert D. Atkinson, Daniel Castro, and Stephen Ezell, “Enabling Customer-Driven
Innovation in the Federal Government” (ITIF, July 2017),
https://www2.itif.org/2017-federal-innovation-agenda.pdf.
106 Pablo Fuentes Nettel et al., “Government AI Readiness Index 2021” (Oxford
Insights, January 2022), https://www.oxfordinsights.com/government-ai-
readiness-index2021.
107 Peter Viechnicki and William D. Eggers, “How much time and money can AI save
government?” Deloitte website, April 26, 2017,
https://www2.deloitte.com/us/en/insights/focus/cognitive-
technologies/artificial-intelligence-government-analysis.html.
108 “CoE and GSA AI Highlight,” General Services Administration YouTube video
(8:13), https://www.youtube.com/watch?v=ryclBwyjBFg.
109 Robert D. Atkinson, Daniel Castro, and Stephen Ezell, “Enabling Customer-Driven
Innovation in the Federal Government.”
110 Joshua New and Daniel Castro, “How Policymakers Can Foster Algorithmic
Accountability” (Center for Data Innovation, May 21, 2018),
http://www2.datainnovation.org/2018-algorithmic-accountability.pdf.
111 Daniel Castro and Joshua New, “Comments to NIST on AI Standards” (Center for
Data Innovation, May 2019), https://www2.datainnovation.org/2019-nist-ai-
standards.pdf.
112 Riya Anandwala and Danielle Cassagnol, “CTA Launches First-Ever Industry-Led
Standard for AI in Health Care,” CTA press release, February 25, 2020,
https://www.cta.tech/Resources/Newsroom/Media-
Releases/2020/February/CTA-Launches-First-Ever-Industry-Led-Standard.
113 ONNX website, accessed March 4, 2022, https://onnx.ai/about.html and
https://www.khronos.org/about/.
114 National Institute of Standards and Technology (NIST), “Facial Recognition
Technology (FRT)” (testimony before the Committee on Homeland Security),
February 6, 2020, https://www.nist.gov/speech-testimony/facial-
recognition-technology-frt-0.
115 “Maintaining American Leadership in Artificial Intelligence,” Federal Register,
February 14, 2019,
https://www.federalregister.gov/documents/2019/02/14/2019-
02544/maintaining-american-leadership-in-artificial-intelligence.
116 NIST website on Fundamental AI, accessed March 6, 2022,
https://www.nist.gov/fundamental-ai.
117 Consolidated Appropriations Act of 2021, H.R.133, 116th Congress (2019–
2020).

118 Nigel Cory, “Comments to the U.S. Commerce Department on the Indo-Pacific
Economic Framework” (ITIF, March 2022), https://www2.itif.org/2022-indo-
pacific-economic-framework.pdf.
119 Ibid.
120 European Commission, Artificial Intelligence Act, Article 41.
121 Brian DeLacey et al., “Government Intervention in Standardization: The Case of
Wapi,” SSRN (September 2006),
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=930930.
122 Matt Sheehan et al., “Three Takeaways From China’s New Standards Strategy”
(Carnegie Endowment for International Peace, October 2021),
https://carnegieendowment.org/2021/10/28/three-takeaways-from-china-
s-new-standards-strategy-pub-85678.
123 U.S. Department of Justice, “Comments on the U.S. Standards Strategy,”
September 8, 2020,
https://www.justice.gov/atr/page/file/1314196/download.
124 Nigel Cory, “Comments to the U.S. Commerce Department on the Indo-Pacific
Economic Framework.”
125 Matt Sheehan et al., “Three Takeaways From China’s New Standards Strategy.”
126 United Kingdom government, “New UK initiative to shape global standards for
Artificial Intelligence,” press release, January 12, 2022,
https://www.gov.uk/government/news/new-uk-initiative-to-shape-global-
standards-for-artificial-intelligence.
127 Robert D. Atkinson et al., “President-Elect Biden’s Agenda on Technology and
Innovation Policy” (ITIF, November 2020), https://www2.itif.org/2020-biden-
tech-innovation-policy.pdf.
128 Algorithmic Accountability Act of 2022, H.R.6580, 117th Congress (2021–2022).
129 Daniel Zhang et al., “The AI Index 2022 Annual Report.”
130 Ibid.
131 Daniel Castro and Michael McLaughlin, “Ten Ways the Precautionary Principle
Undermines Progress in Artificial Intelligence” (ITIF, February 2019),
https://itif.org/publications/2019/02/04/ten-ways-precautionary-principle-
undermines-progress-artificial-intelligence.
132 Ibid.
133 U.S. Office of Management and Budget, “Memorandum for the Heads of Executive
Departments and Agencies,” January 2020,
https://www.whitehouse.gov/wp-content/uploads/2020/01/Draft-OMB-
Memo-on-Regulation-of-AI-1-7-19.pdf.
134 Hodan Omaar, “Biden Should Avoid Unwinding OMB’s Rules for AI Regulation”
(Center for Data Innovation, January 2021),
https://datainnovation.org/2021/01/biden-should-avoid-unwinding-ombs-
rules-for-ai-regulation/.
135 U.S. Office of Management and Budget, “Memorandum for the Heads of Executive
Departments and Agencies.”
136 Elisa Jillson, “Aiming for truth, fairness, and equity in your company’s use of AI,”
Federal Trade Commission (FTC) blog, https://www.ftc.gov/business-
guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai.
137 AI Now Institute website, accessed March 1, 2022,
https://ainowinstitute.org/about.html.

138 Eric Lander and Alondra Nelson, “Americans Need a Bill of Rights for an AI-
Powered World,” Wired, October 8, 2021,
https://www.wired.com/story/opinion-bill-of-rights-artificial-intelligence/.
139 Joshua New and Daniel Castro, “How Policymakers Can Foster Algorithmic
Accountability” (Center for Data Innovation, May 2018),
https://www2.datainnovation.org/2018-algorithmic-accountability.pdf.
140 Joshua New, “Why the United States Needs a National Artificial Intelligence
Strategy and What It Should Look Like.”
141 Ibid.
142 Nigel Cory and Daniel Castro, “Comments to the U.S. Patent and Trademark Office
on the Impact of Artificial Intelligence on Intellectual Property Law and
Policy” (ITIF, January 2020), https://www2.itif.org/2020-ustpo-ip-ai.pdf.
143 Ibid.
144 “AI As A Patent Inventor – An Update From South Africa And Australia,” JDSUPRA,
September 9, 2021, https://www.jdsupra.com/legalnews/ai-as-a-patent-
inventor-an-update-from-2776042/.
145 U.S. Patent and Trademark Office (USPTO), “Public Views on Artificial Intelligence
and Intellectual Property Policy” (October 2020),
https://www.uspto.gov/sites/default/files/documents/USPTO_AI-
Report_2020-10-07.pdf.
146 Ibid.
147 USPTO, “Inventing AI: Tracing the diffusion of artificial intelligence with U.S.
patents” (October 2020),
https://www.uspto.gov/sites/default/files/documents/OCE-DH-AI.pdf.
148 Jordan R. Jaffe et al., “The Rising Importance of Trade Secret Protection for
AI-Related Intellectual Property,” blog post,
https://www.quinnemanuel.com/media/wi2pks2s/the-rising-importance-of-
trade-secret-protection-for-ai-related-intellec.pdf.
149 Jessica M. Meyers, “Artificial Intelligence and Trade Secrets,” American Bar
Association website (February 2019),
https://www.americanbar.org/groups/intellectual_property_law/publication
s/landslide/2018-19/january-february/artificial-intelligence-trade-secrets-
webinar/.
150 NSCAI Final Report.
151 Nigel Cory and Luke Dascoli, “How Barriers to Cross-Border Data Flows Are
Spreading Globally, What They Cost, and How to Address Them” (ITIF, July
2021), https://itif.org/publications/2021/07/19/how-barriers-cross-border-
data-flows-are-spreading-globally-what-they-cost.
152 Daniel Castro, “The False Promise of Data Nationalism” (ITIF, December 2013),
https://www2.itif.org/2013-false-promise-data-nationalism.pdf.
153 Joshua New, “Why the United States Needs a National Artificial Intelligence
Strategy and What It Should Look Like.”
154 Australian Department of Foreign Affairs and Trade, “CPTPP outcomes: Trade in the
digital age,” https://www.dfat.gov.au/trade/agreements/in-
force/cptpp/outcomes-documents/Pages/cptpp-digital.
155 Nigel Cory, “U.S. Options to Engage on Digital Trade and Economic Issues in the
Asia-Pacific” (ITIF, February 2022),
https://itif.org/publications/2022/02/08/us-options-engage-digital-trade-
and-economic-issues-asia-pacific.

156 “United States-Mexico-Canada Agreement Text,” Office of the United States Trade
Representative, accessed February 26, 2022, https://ustr.gov/trade-
agreements/free-trade-agreements/united-states-mexico-canada-agreement.
157 Ibid.
158 Nigel Cory, Daniel Castro, and Ellysse Dick, “‘Schrems II’: What Invalidating the
EU-U.S. Privacy Shield Means for Transatlantic Trade and Innovation” (ITIF,
December 2020), https://itif.org/sites/default/files/2020-privacy-
shield.pdf.
159 Ibid.
160 Nigel Cory, “Comments to the U.S. Commerce Department on the Indo-Pacific
Economic Framework.”
161 Benjamin Mueller, “As the EU Begins to Fight for Liberalism in Earnest, Its
Technology Policy Must Catch Up” (Center for Data Innovation, March 2022),
https://datainnovation.org/2022/03/as-the-eu-begins-to-fight-for-liberalism-
in-earnest-its-technology-policy-must-catch-up/.
162 Saif M. Khan and Alexander Mann, “AI Chips: What They Are and Why They
Matter” (CSET, April 2020), https://cset.georgetown.edu/publication/ai-
chips-what-they-are-and-why-they-matter.
163 Daniel Castro and Michael McLaughlin, “Who Is Winning the AI Race: China, the
EU, or the United States? — 2021 Update” (Center for Data Innovation,
January 2021), https://datainnovation.org/2021/01/who-is-winning-the-ai-
race-china-the-eu-or-the-united-states-2021-update.
164 Saif M. Khan and Alexander Mann, “AI Chips: What They Are and Why They
Matter.”
165 Stephen Ezell, “Moore’s Law Under Attack: The Impact of China’s Policies on
Global Semiconductor Innovation” (ITIF, February 2021),
https://itif.org/publications/2021/02/18/moores-law-under-attack-impact-
chinas-policies-global-semiconductor.
166 Ibid.
167 Stephen Ezell, “An Allied Approach to Semiconductor Leadership” (ITIF,
September 2020), https://itif.org/publications/2020/09/17/allied-
approach-semiconductor-leadership.
168 Stephen Ezell and Caleb Foote, “How Stringent Export Controls on Emerging
Technologies Would Harm the U.S. Economy” (ITIF, May 2019),
https://www2.itif.org/2019-export-controls.pdf.
169 Carrick Flynn, “Recommendations on Export Controls for Artificial Intelligence”
(CSET, February 2020), https://cset.georgetown.edu/wp-
content/uploads/Recommendations-on-Export-Controls-for-Artificial-
Intelligence.pdf.
170 Stephen Ezell and Caleb Foote, “How Stringent Export Controls on Emerging
Technologies Would Harm the U.S. Economy.”

ABOUT THE AUTHOR
Hodan Omaar is a senior policy analyst at the Center for Data Innovation.
Previously, she worked as a senior consultant on technology and risk
management in London and as an economist at a blockchain start-up in
Berlin. She has an MA in Economics and Mathematics from the University of
Edinburgh.

ABOUT THE CENTER FOR DATA INNOVATION


The Center for Data Innovation is the leading global think tank studying the
intersection of data, technology, and public policy. With staff in Washington,
D.C., and Brussels, the Center formulates and promotes pragmatic public
policies designed to maximize the benefits of data-driven innovation in the
public and private sectors. It educates policymakers and the public about
the opportunities and challenges associated with data, as well as
technology trends such as open data, artificial intelligence, and the Internet
of Things. The Center is part of the nonprofit, nonpartisan Information
Technology and Innovation Foundation (ITIF).

contact: [email protected]

datainnovation.org
