What is AI?

If we look past the hype and beyond vendors' marketing materials, there is no officially agreed-upon definition. This view is supported by the literature: "there is no generally accepted definition of the term 'artificial intelligence'" (Schuett, 2019). However, in working terms we may refer to AI as a set of technologies that enable computers to perform a variety of advanced functions, including the ability to see, understand and translate spoken and written language, analyze data, make recommendations, and more (GCP).

There are 'narrow', 'reactive', 'strong', and 'limited memory' types of AI, among others, but for the purposes of an introduction, let's focus on 'narrow AI' and some of the 'narrow AI' applications you may have heard of.

'Narrow AI' (or ANI: Artificial Narrow Intelligence) "is a type of Artificial Intelligence which mainly focuses on executing specific commands. These AI tools can perform proficient tasks as per the instructions provided to them. These systems fulfill particular tasks without the capacity to learn beyond their intended purpose, such as image recognition software, self-driving cars, and AI virtual assistants like Siri." (Cloud Academy)

This document provides a general (not academic or comprehensive) view of the state of the field and how it might apply to someone working in the public sector.

If your boss is saying 'we need to use AI!' as of early 2024, chances are they want you to learn more about generative AI.

There are other related applications and concepts, like visual inference systems, deep learning, and natural language processing. We'll set those aside for now and focus on what your boss and elected officials are likely saying you should be learning about: generative AI.

What is Generative AI?

Generative AI "can be thought of as a machine-learning model that is trained to create new data, rather than making a prediction about a specific dataset. A generative AI system is one that learns to generate more objects that look like the data it was trained on." (MIT News) Tools like ChatGPT, DALL-E, and Midjourney are all examples of generative AI.

Within this category, there are different models. Different generative AI models can take different inputs (text, images, videos) to perform tasks, create content, and solve problems.

If you have used ChatGPT and given it a prompt like, 'Design a math lesson plan for grade 3 students focusing on basic algebra', you have used generative AI.
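Under the hood, a prompt like this is just structured text sent to a model's API. Here is a minimal sketch in Python of how such a request might be assembled; the model name, system message, and helper function are illustrative assumptions, and the sketch stops short of actually sending the request, which would require an API key.

```python
# A generative AI "prompt" is structured text sent to a model endpoint.
# This sketch only builds a chat-style request payload; it does not call
# any vendor's API. The model name below is illustrative, not a recommendation.

def build_chat_request(prompt, model="gpt-4o-mini"):
    """Assemble a chat-completion style payload for a hypothetical request."""
    return {
        "model": model,
        "messages": [
            # A "system" message sets the assistant's overall behavior.
            {"role": "system", "content": "You are a helpful teaching assistant."},
            # The "user" message carries the actual prompt.
            {"role": "user", "content": prompt},
        ],
    }

request = build_chat_request(
    "Design a math lesson plan for grade 3 students focusing on basic algebra"
)
print(request["messages"][1]["content"])
```

In practice, a client library would POST this payload to the vendor's endpoint and return the model's generated text.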

One of the biggest debates in the field at the moment is whether generative AI tools are 'stochastic parrots' or actually intelligent. That's an ideological, computational, and linguistic debate. We recommend reading Bender, Gebru, and co-authors on stochastic parrots.

At its core, the term “stochastic parrots” refers to large language models that are impressive in their ability to generate realistic-sounding language but ultimately do not truly understand the meaning of the language they are processing. These models rely on statistical patterns in data to generate responses but are not capable of true reasoning or understanding. (Towards AI)
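The "statistical patterns" idea can be illustrated with a toy example: a tiny bigram model that learns which word tends to follow which in a corpus, then "parrots" plausible-sounding sequences back without any understanding. This is a deliberately simplified sketch (the corpus and code are invented for illustration); real large language models are neural networks with billions of parameters, but the core move of predicting the next token from observed patterns is similar in spirit.

```python
import random
from collections import defaultdict

# A toy "stochastic parrot": learn which word follows which, then sample.
corpus = (
    "the model learns patterns the model generates text "
    "the text sounds fluent the model does not understand the text"
).split()

# Count which words follow each word in the corpus.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start, length, seed=0):
    """Sample a word sequence by repeatedly picking an observed next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        choices = following.get(words[-1])
        if not choices:  # dead end: no observed continuation
            break
        words.append(rng.choice(choices))
    return " ".join(words)

print(generate("the", 8))
```

The output is locally fluent because every word pair was seen in the training text, yet the program has no notion of what any word means. That, in miniature, is the stochastic-parrot critique.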

The results you receive from ChatGPT and other tools in response to prompts can vary widely, from 'oh cool, neat' (like when I got an interesting outline of a lesson plan) to darker problems: they hallucinate, they work from biased data and then produce biased outputs, they plagiarize, and more.

The world of work may well be revolutionized by generative AI, but how exactly?

As public servants, it's important to understand the broad outlines of the current research, tools, and problems. Programs that use generative AI still require a lot of human intervention. That's not a new idea: "people and machines can be collaborators, not competitors" (Stanford HAI).

What are some proposed use cases for Generative AI?

If vendors and consultants (and politicians) are to be believed, generative AI will "improve productivity and service quality" (BCG) and "help government to leverage the vast amounts of data it collects to create value in unprecedented ways".

That's a lot of buzzwords.

McKinsey believes that generative AI can help those of us in the public sector to:

Summarize and synthesize content

In Singapore, my home country, the government has the Pair app, "a large language model powered assistant for public officers". Apparently, it helps "with a broad range of writing, research and operational tasks".

Speed up software development

McKinsey provides the example of the United Kingdom's HM Treasury (the economic and finance ministry) "testing GitHub Copilot to accelerate software development".

Boost customer engagement

McKinsey provides the example of the city of Heidelberg's Lumi chatbot, which "enables people to easily navigate government services such as applying for a new identity card, getting a driving license, and registering a place of residence".

Generate content

Lastly, McKinsey thinks that generative AI can help to "produce a vast variety of content, including emails, social media posts, contracts, and proposals." They provide the example of the Department of Defense's "Acqbot", an AI tool that might speed up contract writing.

It is also thought that generative AI can "streamline government procurement" and "transfer more citizen services to digital platforms" (State Tech Magazine). We'll get to that in a future post.

DHS's Foray into Generative AI

The Department of Homeland Security recently announced that it would take the lead on using generative AI. Since it is the first federal agency to do so, it's worth breaking down what it is trying to do.

While the announcement is low on details ("Unveils Artificial Intelligence Roadmap, Announces Pilot Projects to Maximize Benefits of Technology, Advance Homeland Security Mission"; what are words, even?), the accompanying New York Times report says:

  • DHS will 'build up an AI corps' of 50 people
  • They will try to hire talent for this 'AI corps' from the private sector and are 'open to remote work' (they even traveled to Mountain View to recruit!)
  • DHS will "use chatbots to train immigration officials who have worked with other employees and contractors posing as refugees and asylum seekers. The A.I. tools will enable officials to get more training with mock interviews."
  • "The chatbots will also comb information about communities across the country to help them create disaster relief plans." (The Verge)
  • DHS "picked OpenAI, Anthropic and Meta to experiment with a variety of tools and will use cloud providers Microsoft, Google and Amazon in its pilot programs"
  • DHS "will report on the results of the pilot by the end of the year." (NYTimes)

Looking forward to parsing the results then.

AI Skills in Government

While there has been a mandate to 'hire AI workers', it's unclear what that really means in terms of qualifications.

According to the National AI Talent Surge program, open roles in the field are largely 'technical': data scientists, Python programmers, STEM students, AI/ML skill sets.

DHS's AI recruitment page also lists responsibilities such as "develop solutions and responsibly implement AI/ML technologies using industry best practices, principles, concepts, and standards", "ensure data security, data management, and risk assessment and management", "understand relevant laws, policies, and ethical considerations and apply them to the DHS mission", and "employ agile product lifecycle management".

"The hottest new job is 'head of AI' and nobody knows what they do." (Vox)

The White House has also just ordered federal agencies to name Chief AI Officers.

Clearly, there is a surge for talent, but if you're not a data scientist or AI researcher, what are your AI-adjacent job options in the public sector? What kinds of training can you undergo, what certifications can you take, to prepare yourself for a future where AI skills may be essential in the public sector? That's one of the areas this site hopes to address.

Personal thoughts

While generative AI has been hyped for the past year, we are still waiting to see published results of these experiments in the public sector.

Many government officials, especially elected officials, are perhaps sensitive to being perceived as 'behind' on the adoption of new technologies.

Those of us whose jobs involve implementing services to the public, through building or using technologies, should learn about the latest developments but also ask questions of the tools, of vendors, of officials, and of each other.

Here's a list of questions you can refer to before AI projects are proposed or implemented in your organization.

If my five years of public service have taught me anything, it's that our jobs are never about using the latest technology; they are about using stable and safe technology, as far as possible, to deliver services that the public needs. Our jobs are not to use AI or to be AI researchers or experts, but if, down the line, the field has proven itself so thoroughly that it's a vital part of our work, then it will be one of many tools we use.

The AI hiring surge is an interesting move that points to how much brain trust there is in the field and how many potentially useful applications there are.

What types of learning and training can a mid-career public sector professional take on to develop more AI literacy? What kind of ethics knowledge do we need? What kinds of guardrails should we advocate for?

On a personal level, I am interested: I already use AI tools to help me run faster and stronger (it's REALLY good!), write repetitive personal emails, create outlines of learning plans for personal studies, and more.

But before we jump into using generative AI in all of our tasks, perhaps we can ask ourselves these questions.

My personal position as of March 2024 is: cool tools, maybe a few interesting applications, but let's also learn about ethics, think about harms, ask lots of questions, and never forget the people we serve. Let's also review the results of some of these early forays into generative AI in government when they emerge.

Personally, I also suspect that for any push towards generative AI to be truly successful, we will need to expand our definition of AI professionals to include so-called 'non-technical people': program managers, who are domain experts in implementing government policies; content strategists and content designers, whose simplified language will (to my mind) beat any chatbot for information retrieval, or at worst help future generative AI use cases by creating better-structured content inputs in plain language; and UX researchers, who should probably be consulted, and soon, as in yesterday, on user testing of these experiments.

(This document, as with all content on this site, is open content. You may publish anywhere, with citations and credit.)

Published by Adrianna Tan, director of product management at San Francisco Digital Services. All views expressed are her own, not her employer's. All links provided are not endorsements.
