Case Notes · 7 min read

From Custom GPT to Workspace Agent: A Migration Story

One of the cleanest ways to understand Workspace Agents is to watch a Custom GPT get migrated. Here's what that actually looks like, with the gotchas that surprised us.

Custom GPTs aren't dead — they still work fine for user-driven, chat-style assistants. But some Custom GPTs should be Workspace Agents, and until you've migrated one, it's hard to appreciate what changes and what doesn't.

This is a story about one migration. Client name and details abstracted, but the shape is real. Six hours of skilled work, a handful of surprises, and a clear answer to 'was this worth doing?'

The starting point

The team had a Custom GPT called 'RFP Helper.' It had custom instructions (~800 words), uploaded knowledge (50 MB of past proposals and approved answers), and two Actions — one to push drafted answers to a Notion database, one to fetch the latest pricing from a pricing-catalog API.

People used it constantly. An SE would paste questions from an RFP into the chat, the GPT would return drafted answers, the SE would edit and copy into the actual RFP document. It worked. The team was happy with it.

Why migrate at all

The reason wasn't 'Custom GPTs are worse.' The reason was that the team wanted to change the trigger model. Instead of the SE pasting questions into a chat, they wanted the agent to watch a specific Notion database for new RFP uploads, process the whole document automatically, and deliver a draft doc for review.

That's an agentic workflow, not a chat workflow. Custom GPTs can't do it. Workspace Agents can. So: migrate.

What carried over directly

  1. The system prompt

    Lightly edited — removed references to 'when the user asks' and added trigger-specific context. ~15 minutes of work. The substance was unchanged; the voice guide, the rules about confidence scoring, the instruction to never hallucinate compliance answers — all preserved verbatim.

  2. The knowledge files

    Converted from uploaded knowledge to live connector references. Instead of static uploads, the agent now reads a Notion database of approved answers live. Benefit: when the team updates an answer in Notion, the agent immediately uses the new version. Cost: ~1 hour to wire up the Notion connector with explicit per-page sharing.

What had to be rebuilt

  1. The Actions

    Custom GPT Actions port conceptually but not mechanically. Reused the OpenAPI specs we already had, but re-wired authentication through the Workspace Agent's admin controls (service account instead of per-user OAuth). ~2 hours, mostly spent on the compliance conversation with IT about the new service account.

  2. The trigger

    Entirely new. Custom GPTs trigger on user messages; Workspace Agents can trigger on events (new Notion database row), schedule (nightly), or user message. We chose 'on new row in the RFP Uploads database' — no more pasting questions, the agent sees the whole document.

  3. The memory

    Custom GPTs don't have persistent memory across sessions. Workspace Agents do. We added a memory structure that tracks 'answers I've given that required SE edits' — so the agent learns which of its confident answers were actually wrong and flags similar cases for earlier human review.

  4. The governance

    Custom GPTs have basic sharing. Workspace Agents have per-connector scoping, audit logs, and per-agent usage caps. We set up audit retention, scoped connectors to the specific Notion pages and Drive folders, and configured a credit cap. About an hour of admin work but none of it avoidable.
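The event trigger from item 02 can be pictured as a "which rows are new?" check followed by one agent run per new row. This is an illustrative sketch only — the row shape, the polling mechanism, and the handler are assumptions, since the real trigger is configured in the agent platform, not hand-coded:

```python
def new_rows_since(db_rows: list[dict], last_seen_id: int) -> list[dict]:
    """Return rows added after the last processed row id."""
    return [row for row in db_rows if row["id"] > last_seen_id]

def handle_new_rfp(row: dict) -> str:
    # Placeholder for "process the whole document, deliver a draft doc".
    return f"draft for {row['title']}"

# Stand-in for the "RFP Uploads" Notion database.
rfp_uploads = [
    {"id": 1, "title": "Acme RFP"},
    {"id": 2, "title": "Globex RFP"},
]
last_seen = 1  # the agent already processed row 1

drafts = [handle_new_rfp(r) for r in new_rows_since(rfp_uploads, last_seen)]
print(drafts)  # only the newly uploaded RFP gets a draft
```

The point of the sketch: the SE never appears in it. The upload itself is the trigger, which is exactly the change a chat-driven Custom GPT can't make.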
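The memory structure from item 03 is conceptually small: remember which topics an SE had to correct, and flag those topics for earlier review next time. A hedged sketch, with every name and the exact matching rule invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class EditMemory:
    """Tracks topics where a confident agent answer needed SE edits."""
    edited_topics: set[str] = field(default_factory=set)

    def record_review(self, topic: str, agent_answer: str, final_answer: str) -> None:
        # If the SE changed the answer, remember the topic as unreliable.
        if agent_answer != final_answer:
            self.edited_topics.add(topic)

    def needs_early_review(self, topic: str) -> bool:
        return topic in self.edited_topics

memory = EditMemory()
memory.record_review(
    "soc2-scope",
    "We are fully SOC 2 certified.",
    "SOC 2 Type II, in-scope services only.",
)
print(memory.needs_early_review("soc2-scope"))  # flagged after an SE edit
print(memory.needs_early_review("pricing"))     # no history, no flag
```

A production version would match on question similarity rather than exact topic keys, but the loop is the same: SE edits feed back into what the agent trusts itself on.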

The unexpected work

The biggest time sink wasn't technical. It was spec clarification. Writing the trigger was easy; agreeing on what the trigger should be took a 45-minute call with the team. Example debates: 'should the agent process the whole RFP, or wait for an SE to mark which sections need answering?' 'Should it attempt answers even when retrieval confidence is low, or only fill in confident ones and flag the rest?' 'Where does the draft land — Notion, Drive, email?'

These weren't questions we asked when the Custom GPT was built, because a chat-driven workflow doesn't force them. The SE pasted the questions they wanted answered, in the order they wanted answered, and the GPT responded. Now the agent decides those things on its own — which means someone has to specify the decision rules.
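One of those debated rules, written down as code, shows what "specify the decision rules" actually means. The threshold value and field names here are assumptions for illustration, not the team's real configuration:

```python
CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff, tuned per team in practice

def route_section(section: dict) -> dict:
    """Per RFP section: auto-fill confident answers, flag the rest."""
    if section["confidence"] >= CONFIDENCE_THRESHOLD:
        return {"section": section["name"], "action": "auto-fill"}
    return {"section": section["name"], "action": "flag-for-SE"}

sections = [
    {"name": "security", "confidence": 0.93},
    {"name": "custom-sla", "confidence": 0.41},
]
print([route_section(s) for s in sections])
```

In the chat workflow, the SE made this call implicitly every time they chose what to paste. The migration just forced the same judgment into an explicit, reviewable rule.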

This was the real migration cost. Not porting code. Not learning a new platform. It was the forcing function of moving from user-driven to autonomous, which required the team to articulate rules that had been implicit.

Was it worth it?

For this team, yes. The SE team went from 'paste questions in, copy drafts out' (~45 min per RFP) to 'upload RFP, review the draft doc' (~15 min per RFP). The agent handles the tedious middle step. On a monthly basis: 30 min × 8 RFPs = 4 hours of SE time returned. Not huge, but real, and applied to their highest-cost humans.

Secondary benefit: the rule-clarification exercise during migration uncovered inconsistencies in how the team handled RFPs. Two SEs had different default assumptions about which sections to auto-fill. The migration forced that decision and documented it.

When NOT to migrate

Not every Custom GPT should become a Workspace Agent. Good reasons to leave a Custom GPT alone:

  • The workflow genuinely requires a human to decide what to do next — chat is the right interaction model
  • The GPT is published to the GPT Store — Workspace Agents don't distribute externally
  • The GPT gets used infrequently (a few times per month) — migration cost exceeds payback
  • The team likes the current interaction model and there's no specific autonomy benefit to capture

Most Custom GPTs fall into at least one of these categories. Don't migrate what works.


Ready to ship your first agent?

20-min intro call. I'll tell you which first agent is right for your team and what it would take to ship.
