Questions to Ask

This list will be updated on an ongoing basis.

  • What did I need generative AI for?
  • What value did it provide, beyond being able to say that I used it?
  • Could I have done what I wanted to do without it?
  • Did using generative AI produce different outcomes than not using it?
  • What were the inputs in my use of generative AI?
  • Was the data secure, or did I send private information about the public to a third-party vendor, like Microsoft?
  • Is a chatbot the best interface for the service I am providing?
  • Does the chatbot work as well for non-English speakers as it does for English speakers?
  • Do I have evidence or information on how a chatbot works for people in crisis, people with ADHD, or people with disabilities?
  • If 'the computer says no', what happens? Do members of the public have recourse?
  • If the chatbot provides inaccurate (or even criminal) information, what process will your office use to address it?
  • Who is left out from the use of chatbots?
  • If the generative AI creates images, text, or other content that is offensive or discriminatory, what are the legal risks?
  • Do you have publishing safeguards to avoid going straight from generation to live? Just as you would check any content before it goes live, do you have similar or stricter levels of review for AI-generated content?
  • Is there a risk of surveillance with what I am doing with generative AI?
  • Does my generative AI project reduce the personal agency of anyone I am serving?
