U-M’s new generative AI tools, explained

September 6, 2023

Electrical and Computer Engineering Professor Paul Watta breaks down why the university decided to release its own custom versions of popular AI tools — and what they do.


Last month, the University of Michigan’s Information and Technology Services released three custom generative AI tools for use in the U-M community, a first-of-its-kind effort from a major American university. But why exactly did U-M decide to offer its own set of ChatGPT-like services? And what would you use them for? We recently asked Electrical and Computer Engineering Professor Paul Watta, who’s on a tri-campus generative AI advisory committee, to get us up to speed. 

So the first tool is called U-M GPT. What exactly does this one do?

So this one is meant for everybody — students, faculty and staff — and it's sort of a U-M-wrapped ChatGPT. Right now, whenever you use ChatGPT, your information is going out to the internet and can be used by OpenAI. But when you use U-M GPT, you're behind the proverbial U-M firewall, so your information never leaves U-M's protected servers. Some people had expressed privacy concerns over using these tools, and we also wanted to provide equitable access. Right now, regular access to GPT-4 from OpenAI costs you $20 a month, which some students might not be able to afford. U-M GPT, at least for now, is free for all U-M faculty, staff and students to use. Another perk is that it has been fine-tuned on U-M data. So it'll have locations of all our buildings and information about professors and things like that. The idea is that you would get more U-M-specific information than if you just went to ChatGPT.

Electrical and Computer Engineering Professor Paul Watta

And the second tool is called Maizey.

Maizey is really interesting because it allows you to index your own data. So for one of my courses, I could, say, put in all the PDFs of my lecture notes or all the information I have in Canvas, and it will index them. Then I can generate a chat application that answers questions specific to that course. Departments could also use it to index all their forms, course pages or anything else they want. So the application could be something like a chatbot on a department web page or a Canvas course page. Maizey is free until Oct. 1, so faculty and staff can play around with it and see if they want to keep using it.
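For readers curious what "index your own data and answer questions against it" means in practice, here is a deliberately tiny illustrative sketch. This is not Maizey's actual implementation — real systems use embeddings and a large language model — but a toy keyword-overlap retriever makes the basic shape of the idea visible:

```python
# Toy sketch of a document-indexing Q&A pipeline (illustrative only;
# Maizey's real internals are not public). We "index" each document
# as a set of words and answer by returning the best-matching one.

def build_index(documents):
    """Map each document name to the set of lowercase words it contains."""
    return {name: set(text.lower().split()) for name, text in documents.items()}

def answer(question, index, documents):
    """Return the document whose vocabulary best overlaps the question."""
    q_words = set(question.lower().split())
    best = max(index, key=lambda name: len(q_words & index[name]))
    return documents[best]

# Hypothetical course materials, standing in for uploaded PDFs or Canvas pages.
docs = {
    "syllabus": "office hours are tuesday 2pm in room 114",
    "grading": "homework counts for 40 percent of the final grade",
}
index = build_index(docs)
print(answer("when are office hours", index, docs))
# -> office hours are tuesday 2pm in room 114
```

A production system would retrieve the most relevant passages this way and then hand them to a language model to phrase the answer, but the retrieval step is the part that makes the chatbot "specific to my course."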

And then this third tool, U-M GPT Toolkit, seems geared toward researchers.

That’s right. This one is really for researchers who want to work with their own generative AI models. Right now, for example, there’s a lot of interest in specializing these generative AI tools, whether it's ChatGPT or something like Llama 2, which is Meta’s open-source model. So let’s say I wanted to fine-tune Llama 2 for astronomy or electrical engineering. I could give it information that wasn’t included in Llama 2’s original training, or I could add more reinforcement so it would emphasize the information I’m most interested in. That could potentially give you more useful answers than the regular version of Llama, which is more general and doesn’t have that specialization. Another reason this could be useful is that the training of these models stops at a certain date. ChatGPT, for example, doesn’t include information after September 2021, and you could build a custom model that has more current information. So, yeah, I think this is mainly for researchers and graduate students who are trying to study generative AI models, make new ones and improve their performance.

As you’re describing these tools, I see the potential on the one hand. On the other, I’m wondering: do faculty and staff have the time, interest or expertise to make this happen? It seems like people would need a little help. Are there resources for that?

That’s a great point, and it’s something we talked a lot about on the committee. One of our recommendations is that we’re going to need training sessions at all levels — for students, faculty and staff — to get people up to speed. We’re really looking at the fall semester as a time of experimentation. We don’t know all the potential applications or problems yet. So we want people to be excited about trying these tools and figuring out what works and what doesn’t. Another of our recommendations is that we need to make it easy for people to share what they've learned, so if someone does something cool, other faculty can learn from it.

There have been a lot of interesting discussions about the technology’s risks and benefits, and this whole move to release our own generative AI tools seems, on the whole, really pro-AI. As someone who is on this advisory committee, can you describe the general mood of the conversations that are happening around generative AI at the university? 

The tri-campus committee was formed in the spring, and our first real meeting was a retreat, a full day of listening to thought leaders from across the campuses talk about the technology. I was pretty blown away by the presentations and I really learned a lot. For example, there is a growing sense that we may already be approaching some limits: we may not be able to make these models much bigger or better, mainly because we’re running out of information for them to digest. But there could be all these emerging applications for the underlying technology, like planning, or tasks where we give the models more agency. People also seem confident that the problems of hallucination and misinformation are solvable — and probably within years, not decades. So I think the general view of the committee is that there is no escaping these technologies and that we should be on the side of using them, so we can figure out how to use them wisely and productively.

The whole process has been really interesting. There is much more concern about cheating and about how this changes how we teach, particularly how we teach writing. There was also a lot of concern about students’ information getting out, so there was some relief that we’re going to be protected under the U-M umbrella and students won't have to expose their information. But in general, I’d say that on our campus there is more trepidation.


Want to learn more about generative AI services at U-M? Start with this ITS page, which has lots of good information and videos about the new AI tools. Or dive into the full report from the U-M Tri-Campus Generative Artificial Intelligence Advisory Committee. UM-Dearborn online guides on specific generative AI topics will also be available within a few weeks. Watch for announcements from the Hub and the Mardigian Library regarding upcoming training sessions.

Interview by Lou Blouin.