Generative AI Prompt Engineering

Prompt engineering is the practice of designing input prompts for generative AI tools like ChatGPT, UM-GPT, and DALL-E to get the best possible output.

Prompt engineering is often an iterative process, meaning it's a process of making changes to your prompt and improving it as you learn more from your outputs. You may need to:

  • give clarifying information in a follow-up prompt, or
  • craft more specific instructions, or
  • write the prompt in a different way, or
  • ask the AI to give you a specific variation on your response

You will likely need to refine your prompts several times, which is actually a smart way for you, the user, to understand prompt engineering. Keep all of these iterations in the same chat thread as you narrow down your request; do not begin a new chat when refining.
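If you ever move from a chat interface to a chat-style API, the same advice applies: each refinement is appended to one conversation rather than started fresh. Here is a minimal sketch; the message wording and the "system"/"user"/"assistant" role convention are illustrative, not tied to any particular tool:

```python
# One conversation thread, represented as a growing list of messages.
# Refinements are appended to this list, not sent as isolated prompts.
thread = [
    {"role": "user", "content": "Write a poem about the University of Michigan."},
    {"role": "assistant", "content": "(first draft of the poem...)"},
    # Refinement 1: give clarifying information in a follow-up prompt
    {"role": "user", "content": "Revise it to mention campus traditions."},
    {"role": "assistant", "content": "(revised poem...)"},
    # Refinement 2: ask for a specific variation on the response
    {"role": "user", "content": "Now make it four stanzas and more upbeat."},
]

# The whole thread, not just the latest message, is what a chat model sees,
# so earlier context informs each refinement.
print(len(thread), "messages in the thread")
```

Starting a new chat would discard the earlier messages, which is exactly the context your refinements rely on.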

Strategies for crafting a prompt:

It might help to think of this as a conversation between you and the AI. Generally, the quality of the responses from generative AI tools depends significantly on the quality of the prompts you feed it. Consider incorporating the following elements to help you create a well-crafted prompt:


  • Role: Describe who you are and, therefore, the point of view from which the response will be written.
    • Ex: “I am a first-year college student writing about climate change anxiety.”
    • Ex: Assign the generative AI tool a role. “You are an environmental engineer with expertise in X….”
  • Task: Give the AI a specific task to do. Be as detailed as possible and include as much context as necessary. 
    • Ex: Instead of telling the AI to “Write a poem about the University of Michigan,” tell it to “Write a poem about the University of Michigan that touches on its location, the diversity of its students, and the traditions of the University.”
  • Requirements: Include examples of what to do or not do. Include what format, elements, or style you are looking for in the output. Be precise.
    • Ex: Instead of prompting “Give a short description of the text,” try telling the AI “Give a 3-5 sentence description of the text which does not include quotes from the text.”
  • Details: Aside from the above, consider including information about the audience or what genre you prefer in the response. 
    • Ex: “Write a X for an audience of older adults with little computer experience.”
  • Settings: For more advanced users, custom instructions can be set in the Settings menu of some generative AI tools if you would like a consistent framework for your prompts. Custom instructions can specify tone, format/style, audience, etc.
  • Refine: To practice iterating on your prompt, ask UM-GPT to help! Try: “Can you suggest ways to improve my analysis of topics X and Y?”
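The elements above can be sketched as a simple template. The helper below is hypothetical, not part of any tool, and the example strings are assumptions chosen for illustration:

```python
def build_prompt(role: str, task: str, requirements: str, details: str) -> str:
    """Combine the four prompt elements into a single prompt string."""
    return " ".join([role, task, requirements, details])

# Illustrative values for each element; substitute your own.
prompt = build_prompt(
    role="You are an environmental engineer with expertise in water quality.",
    task="Write a summary of the attached report.",
    requirements="Use 3-5 sentences and do not include direct quotes.",
    details="The audience is first-year college students.",
)
print(prompt)
```

Writing the elements out separately like this makes it easy to refine one part (say, the requirements) while keeping the rest of the prompt stable across iterations.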

Model Updates: Each prompt you create, in a new chat, may yield a different answer. Generative AI tools generate their responses probabilistically rather than deterministically, and the underlying models (such as GPT-4) are periodically updated by their developers.

Verify the Results:

No matter how carefully your prompt is constructed, you should verify the information provided. Careful prompt engineering can get you higher-quality outputs from generative AI tools, but these tools will also “confidently” fabricate false responses.

Fact-checking strategies like SIFT (Stop; Investigate the source; Find trusted sources; Trace claims, quotes, and media back to the original context) can help you carefully investigate the information provided and decide whether you should look elsewhere for better information.


Watch for Bias:

Unconscious and implicit bias can become embedded in AI models and lead to biased outputs. While bias can and does often originate in the way AI models are created (through data collection, training, and testing), bias can also be introduced in the way you craft prompts. The language you use will influence the outputs you (and others) receive. If you use fear-invoking or alarmist language in your prompts, you will likely produce negative outputs.

Question your own assumptions, and attempt to counteract known biases within an AI model, by being both specific and unambiguous. Including specific instructions can help generate unbiased and diverse responses. For example, instead of prompting “Create a list of common names,” you could ask, “Create a list of names from different cultures throughout the world.”

Prompt Examples

For more information on prompt engineering, please see the following resources: