How the zip file development process works
When a new software project is started in ChatGPT, I ask for a downloadable zip file, such as projectname1.zip, which contains all generated code and required environment information.
For a Flask project, that includes all the SQLAlchemy model files (database schema), all the UI templates, the environment variables in a .env file, the libraries needed in a requirements.txt file, all supporting config files, JSON files, image files, and documents (.csv, .md, .xlsx, .txt, .doc form-fill templates) - everything required to make the project run, laid out in the typical directory structure, all in a single zip file.
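As a concrete sketch of what such a package might contain, the snippet below builds a minimal Flask-style project zip with Python's standard zipfile module. The file names and contents (run.py, app/models.py, etc.) are purely illustrative assumptions, not a prescribed layout:

```python
import tempfile
import zipfile
from pathlib import Path

# Hypothetical minimal Flask project layout; real projects will differ.
FILES = {
    "run.py": "from app import app\n\nif __name__ == '__main__':\n    app.run()\n",
    "requirements.txt": "flask\nflask-sqlalchemy\npython-dotenv\n",
    ".env": "SECRET_KEY=change-me\n",
    "app/__init__.py": "from flask import Flask\napp = Flask(__name__)\n",
    "app/models.py": "# SQLAlchemy models go here\n",
    "app/templates/index.html": "<html><body>Hello</body></html>\n",
}

def package_project(dest: Path) -> Path:
    """Write the whole project tree into one zip, preserving relative paths."""
    zip_path = dest / "projectname1.zip"
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for relpath, content in FILES.items():
            zf.writestr(relpath, content)
    return zip_path

if __name__ == "__main__":
    out = package_project(Path(tempfile.mkdtemp()))
    print(sorted(zipfile.ZipFile(out).namelist()))
```

The point is simply that one archive carries code, templates, config, and environment files together, in the directory structure the app expects.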
If I'm continuing a project from an existing workflow somewhere else, I package everything required into a similar single zip file.
Along with the zip file, in the chat conversation, I upload any other required files and documentation, explain all the surrounding context, and provide any data or information required to build the proposed functionality in the current development step.
Any requirements which need to be satisfied for the current iteration of development work get explained in as much detail as possible in the conversation with ChatGPT. As always with LLM prompting, more detail and tighter, more focused context generally yield better output. Dividing work into smaller iteration steps is almost always better. Focusing on one manageably sized iterative goal at a time, within a well engineered overall process, is what works.
When working with the ChatGPT zip file workflow, you should ask for the final output of every development session to be provided as an incrementally numbered version of the project zip file (projectname2.zip, projectname3.zip, etc.).
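The incremental naming convention is trivial to automate; for instance, a small helper like this (hypothetical, not part of any tool mentioned here) computes the next version's filename:

```python
import re

def next_version_name(current: str) -> str:
    """Given e.g. 'projectname3.zip', return 'projectname4.zip'."""
    m = re.fullmatch(r"(\D+)(\d+)\.zip", current)
    if not m:
        raise ValueError(f"unexpected zip name: {current}")
    stem, num = m.groups()
    return f"{stem}{int(num) + 1}.zip"
```

In practice you just ask ChatGPT to follow the convention, but the same logic is handy for any local scripts that sort or prune versions.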
Download each new complete zip file to your local machine (ChatGPT will make a download link available directly in the chat), and then send it to your server with SCP (for example: scp projectname1.zip username@yourdomain.com:~/projectfolder). SCP is built into most operating systems by default, and ChatGPT runs in a browser, so you shouldn't need to install any tooling to enable this workflow.
I typically save all the versions of my project zip file in /saved and /unused folders, within a larger project folder that also contains /documentation, /mhtml, and other supporting folders. This gives me a consistent structure to store the complete history of an entire project, and all historical context, notes, emails with clients, etc., in a single folder.
When I travel or use a new machine, I simply zip up the entire project folder and transfer it to another machine. Every time I make a transfer to a new machine, I update the date in that master project folder name. Using this routine, I can transfer the entire history of multiple projects, with every piece of surrounding context, and everything needed to work on every project, in just a few minutes.
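As one possible sketch of that routine, Python's standard shutil module can produce the dated master archive in a single call (the folder names and the date-stamp naming scheme here are illustrative assumptions):

```python
import shutil
from datetime import date
from pathlib import Path

def archive_master_folder(folder: Path, dest: Path) -> Path:
    """Zip an entire master project folder under a dated name,
    e.g. 'myprojects-2025-01-15.zip' (naming convention is illustrative)."""
    stamp = date.today().isoformat()
    base = dest / f"{folder.name}-{stamp}"
    # make_archive appends '.zip' and returns the resulting path as a string
    return Path(shutil.make_archive(str(base), "zip", root_dir=folder))
```

The resulting single file is what gets copied to the new machine, the synced drives, and the online file servers.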
I back up each of these dated master project zip files on local physical hard drives that I keep automatically synced, and on online file servers which are also automatically synced, so there are multiple redundancies, some of which are physically portable (I always keep a portable flash drive, micro SD, and external drive with me when I travel), and all of which are available online.
Because Flask apps are so small, most of my complete project zip files are only a few megabytes. When they get too big, I prune any unneeded versions of the project zip files (v1.zip, v2.zip, etc.) from the archive - since those versions are always available in previous backup archives, nothing ever gets lost.
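Pruning can be scripted too; here's a sketch that keeps only the newest N versioned zips in a folder, assuming names ending in a number plus .zip as described above:

```python
import re
from pathlib import Path

def prune_versions(folder: Path, keep: int = 3) -> list:
    """Delete all but the newest `keep` versioned zips (v1.zip, v2.zip, ...).
    Returns the paths that were removed. Assumes names end in '<number>.zip'."""
    def version(p: Path) -> int:
        m = re.search(r"(\d+)\.zip$", p.name)
        return int(m.group(1)) if m else -1
    # Sort matching zips oldest-to-newest by their version number
    zips = sorted((p for p in folder.glob("*.zip") if version(p) >= 0), key=version)
    removed = zips[:-keep] if keep else zips
    for p in removed:
        p.unlink()
    return removed
```

Since every pruned version still exists in earlier backup archives, this is safe housekeeping rather than deletion of history.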
On the server, you can have as many applications as you like running simultaneously in separate tmux sessions, each with its own environment containing the necessary installed libraries (in Python, using venv and pip, for example). Simply:
- SSH into your server command line
- attach to the tmux session of the project you're working on (tmux a -t projectname)
- unzip -o projectname1.zip in the active environment
- then run the application (python3 run.py)
You can instantly revert to any previous version on the server, just by unzipping and running the corresponding zip file.
I keep a text file in each master project folder on my local machine, which contains all the SCP commands and credentials I need to SCP and SSH into the project server, so logging in on any new machine is just a matter of copy/pasting a line into the command console (it doesn't matter what OS I'm using as my local client - all these pieces like SCP and SSH are generally 100% portable).
Because SSH is built into every common OS by default, again, you don't need to install any software to enable this workflow. You may just need to run apt install unzip or tmux on your server (and perhaps nano and/or whatever other small tools you need, and perhaps whichever version of Python, Node, or other ecosystem tooling you use) the first time you use the server, but with the toolkits I prefer, that's a trivial one-time task that takes just a few seconds.
To be clear, none of this process involves editing any code files on your local machine or on the server, although you can choose to manually edit anything you want - just be sure to share any manual code edits you make back in your ChatGPT conversation, so GPT can incorporate them into the current working project zip file. Even if you change something as simple as the port an application runs on, let GPT know, so you don't have to keep making those edits manually every time you unpack a new zip file version of the application.
Typically, in this process, GPT writes all the production code - the current models of GPT do a better job than most human developers do at writing code, so I tend to simply prompt GPT to make any edits which I would have previously performed manually.
The important thing that happens with a zip file project package is that the large majority of the work you previously needed to perform in code editors, and/or in interactions with a chat bot (copying/pasting code from a conversation, for example, or using an IDE or locally hosted agentic system integrated with an LLM API to edit code), is eliminated. You don't need to edit any code, or install any tools whatsoever. You don't need to give a locally running agent access to your production system, or even your development system. You just have GPT edit your project zip files.
One key point is that with the zip file workflow, the context size of what the LLM can handle is tremendously expanded - and the work is automatically divided up and handed off to multiple agentic processes built into ChatGPT.
So instead of copying/pasting generated code into files that you need to manage manually (or doing this work in an IDE connected to an LLM coding agent), you let ChatGPT use its own built-in agentic tooling to do that work for you. It will intelligently use its own built-in tools, and work within its own internal workspace, to open up the zip file and, for example, use regular expressions to search for and intelligently explore the code in every file contained in the zip package.
Most importantly, ChatGPT will use its own agentic capabilities to spawn as many sub-process contexts as needed to achieve any required development goal, so that your main conversation context doesn't get polluted and filled up by each of those mini-goals. A separate process is launched to open the zip file, another to summarize the content of the contained files, separate processes are typically spawned to write and test newly generated code, others to surgically replace code in each existing file within the zip package based on all that writing and testing, and still others may be spawned to package those files back into a newly downloadable file, and so on.
None of those separate processes use up context in your main conversation. Each process completes its task, provides an artifact, and/or reports a summary back to the context that spawned it. GPT does all this entirely automatically with its integrated tooling, knowledge, and the workspace it has available (all separate from your local or hosted dev/prod workspaces).
When working with zip files, GPT automatically knows how to break down complex requirements into smaller steps that can be spawned as separate contexts, and the work completed in those contexts can be coordinated by GPT's built-in agentic workflow capabilities, using built-in tooling, MCP servers installed by OpenAI, etc. By automatically breaking up tasks into throw-away processes that have their own separate contexts, the context size of a project becomes virtually unlimited, and the main conversation never needs to contain the entire workflow methodology devised by GPT.
What I've been amazed by is just how much work OpenAI gives you with the $20 subscription. I've never hit a rate limit with the zip file process, despite clearly burning many tens of millions of tokens at a time in ChatGPT - sometimes building multiple projects at the same time in multiple open ChatGPT sessions, nonstop all day long, for days in a row, for many months in a row. This sort of workload would cost thousands of dollars per month using Claude Code and the Anthropic LLMs - and GPT does a comparably fantastic job not just writing code, but also intuiting your intended goals from less than perfect prompts.
Keep in mind that your success with any LLM based software development effort depends tremendously on the tech stack you choose. Using Flask and the Python ecosystem is a guaranteed win - there are many billions of lines of code and documentation published in that ecosystem, which every LLM is trained on deeply. The same is likely true of HTML/CSS/JavaScript, React and the other popular web UI libraries, Java, and other mainstream programming languages.
Just don't expect to get good code results from an LLM if you're using a lesser-known language/library/toolchain. You can accomplish some in-context learning by providing documentation - and many lesser-known tools actually do provide LLM files to help orient and guide LLMs to work more effectively in-context with unfamiliar tools - but LLMs will always work orders of magnitude more effectively with tools they've been trained on deeply.
During the process of building code, the context of your conversation with GPT should be focused on steering the LLM towards clear and achievable small goals, within a larger set of engineered steps. You should avoid filling the conversation context with written-out code - that code should stay within zip files. The conversation context should hold links to developed artifacts (new zip file packages that contain the entire newly developed project and all information/context needed to continue from that point forward), and should display the information needed to understand the decisions that were made, and the efforts taken during the time consuming process of building and testing iterative solutions. The results of all that work are contained in the zip file artifact; that artifact can be explored later as needed, and it becomes the solid basis of all future work. You shouldn't need to keep sharing previous conversations forever - just share the current zip file and work from it.
GPT generally does a great job of compacting (summarizing) your main conversation as context gets filled up, but you should do your best to keep it lean.
When a conversation context gets too big, GPT's code writing and reasoning performance will begin to degrade. When that happens, save your entire current conversation as a .mhtml file, start a new conversation with the current project .zip file and that .mhtml file attached, and tell GPT to continue working from where you left off. GPT will spawn a process to read and summarize what it needs to know from the .mhtml file - and it can continue to refer back to that file as your new conversation progresses, especially if you point out important pieces of the previous conversation which it should pay attention to.
I pay a lot of attention to making sure important logic is spelled out (in displayed text) in any chat conversation whenever I think that information may need to be available to future conversations - i.e., I make sure the displayed text of my current conversation contains everything I'd want a future conversation to be able to read and understand - so that I never need to re-do work in a future conversation. I never need to re-type any explanations, and ChatGPT never needs to perform any development work again, because everything is contained in the project zip file.
Another really important pointer is to always include this sentence in your prompts to edit code: 'Please be very careful not to change any other functionality in the application'. Current LLMs are getting much better at avoiding unintended regressions, but I still find that adding that sentence to prompts helps. I also specifically tell GPT that that sentence is very important to remember every time we're building software code.
Sometimes I have GPT actually check its own work with prompts like 'Please ensure that no app functionality has been changed, beyond what has been requested in this conversation' and 'Please confirm that the changes between v530.zip and v540.zip are only those specified in this conversation, and that no other functionality in the application has changed'. GPT will perform diffs between all the files in your project, explain what the code changes do, and explain whether or not those code diffs could lead to any unexpected behavior (that's rarely the case, if ever).
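The same version-to-version check can also be run locally. As an illustrative sketch, Python's zipfile module exposes each member's CRC32, which is enough to list the files added, removed, or changed between two project zips (the function name and the zip paths here are just examples):

```python
import zipfile
from pathlib import Path

def diff_zips(old: Path, new: Path) -> dict:
    """Compare two project zips by member name and CRC32 checksum.
    Returns the file names added, removed, and changed between versions."""
    def crcs(path: Path) -> dict:
        with zipfile.ZipFile(path) as zf:
            # Map each non-directory member to its stored CRC32
            return {i.filename: i.CRC for i in zf.infolist() if not i.is_dir()}
    a, b = crcs(old), crcs(new)
    return {
        "added": sorted(b.keys() - a.keys()),
        "removed": sorted(a.keys() - b.keys()),
        "changed": sorted(n for n in a.keys() & b.keys() if a[n] != b[n]),
    }
```

A quick local diff like this is a useful sanity check alongside GPT's own explanation of what changed.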
Along the same lines, you can use GPT to evaluate entire existing functionalities whenever you need to understand exactly what the code currently does, and how any functionality needs to be changed. You can ask for anything from a high level synopsis to the lowest level details. You can choose to never touch a database schema, or be fully involved with how any schema is created and altered. You can choose to never make a decision about which libraries should be used, or how a logical process is devised, or you can choose to control every step of the logic, down to the individual characters in the code. However involved you are in the details, you can ask GPT to explain what already exists in the functionality of the app, and then change that functionality, build upon it, etc. In most cases, if you understand what you're asking GPT to build, it will write very solid code. Just test absolutely everything before pushing to production, and if there are errors or functionality issues, provide GPT with enough debug information to understand anything that's not working, and the full scope of context needed to understand how any unexpected edge-case data values need to be handled. Treat the process like working with a team of human developers. Communicate well, clarify intent, provide complete requirement details, test and iterate, and it will do a good job.
One other tip that really helps development with GPT move more quickly is to be clear about where in the code a particular functionality is found. For example, I'm constantly using verbiage such as 'In patient chats such as https://mydomain.com/patient/262 and staff chats such as https://mydomain.com/conversation/5, the blah blah blah functionality needs to be updated to do blah blah blah blah.' Providing those route paths saves GPT a lot of work searching among pieces of code scattered throughout the entire code base to reason about where it needs to work. If you can point out a URL, or a template file, or a function name, etc., let GPT know. It very rarely hurts to provide more concrete details and to clarify context.
Another bit that may save you some time: at this point I almost always run GPT in plain old default thinking mode - I have not recently needed to use extended thinking mode for any typical work (even for complex tasks, as long as tasks are broken down so that every detail in a task can basically be explained in a single prompt). The current default GPT models are so smart that they typically get most coding tasks right first-shot, without any extended thinking, so you likely don't need to waste the time or the tokens on extended thinking.
Once you get the pipeline of this workflow fully established (all server infrastructure installed, environments configured, local SCP and SSH commands saved, project folder established, etc.), it's not uncommon to literally paste emails from clients into existing ChatGPT conversations, and get new functionality completely built without any other work whatsoever - and those sorts of improvements can continue over hundreds of iteration steps (or more)!
Being accustomed to working within a well conceived set of iterative steps, and understanding how to provide all required context, is the reason experienced engineers typically have better results completing software projects with LLM generated code than non-technical users do. Understanding how all the connected architecture works, and how every piece fits together, is still essential: supporting infrastructure, ecosystem tooling, network/database configuration, logic, CPU cycles and other hardware usage optimization (reducing big-O complexity everywhere), and so on.

Knowing what the potential solution to any issue might be, and simply being able to communicate with clients and collect requirements about what they intend to build, is a huge part of the work required to build any big project - and that work can always be steered more effectively by an engineer who has decades of experience doing all those things entirely from scratch, with hand written code and manual system configuration, throughout every stage and every detail of writing code and setting up the larger context of a project.

The more experience you have with general software development, networking/IT, and business domain knowledge (understanding the workflows your clients engage in daily, what all their data actually means, how it fits together, and how it's used to make decisions), the better you'll be able to succeed using LLMs to complete software projects. Treat LLMs like very talented and knowledgeable team members who work at the speed of light, but who still need to understand the context and specific details of any project, as well as the preferences and points of view of your clients, and the purpose of the project within the bigger set of their established unique business practices, and you'll do far better than just expecting magic output from the AI.