As I use AI assistants more and more in the limited time I have to actually prototype and build out features, I seem to be adopting a particular workflow:

  1. Design a feature
  2. Prompt cursor to build it out
  3. Test that the flow fundamentally works
  4. Push a WIP branch
  5. Hand it off to another member of the team to finish the work properly

I find this works well for the sort of task where there are a few unknowns and you just want to prototype it to verify your assumptions, so you avoid sending a member of the team off on a wild goose chase. It does, however, mean committing a bunch of code that has had very little review and is potentially a pile of horrid slop.

As such, I’ve started to put a header at the top of these files to identify them and highlight the expectations around their use. I haven’t quite settled on one, but it tends to be something like:

/*
** This code was generated by AI and could be complete slop.
** Do not use this in production environments without thorough review and an assessment
** of whether it meets the project’s code guidelines and objectives.
*/

After the code has been reviewed and updated to match the project’s guidelines, the header can be removed. I find it a useful way to track which code has not yet had any critical thought applied and should be treated with caution (and probably rewritten).
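
One way to keep tabs on these files is a small script that searches the repository for the marker. Here’s a minimal sketch in Python; the MARKER string is an assumption based on the example header above, so adjust it to whatever wording you settle on:

#!/usr/bin/env python3
import pathlib

# Assumed marker text, matching the example header above
MARKER = "generated by AI and could be complete slop"

for path in pathlib.Path(".").rglob("*"):
    # Skip directories and anything inside .git
    if not path.is_file() or ".git" in path.parts:
        continue
    try:
        text = path.read_text(encoding="utf-8")
    except (UnicodeDecodeError, OSError):
        continue  # binary or unreadable file; not source code
    if MARKER in text:
        print(path)

Run from the repository root, this prints every file still carrying the header. Wired into CI or a pre-commit hook, it could also serve as a reminder that a branch still contains unreviewed code.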