This topic documents some posts from the old Rebolforum topic at https://rebolforum.1y1z.com/topic/818, which track my final path from developing software with traditional tools, especially the productive no-code tools I had begun to like in 2024-2025 (Baserow, NocoDB, and others), to basically just relying on GPT and the zip file routine to do everything.
Prior to this, for about 3 years, I had been primarily using the Python Anvil framework to build production applications for clients (the 30 years before that involved hundreds of other tools, frameworks, languages, etc. - anything that helped improve productivity).
I began to really use GPT to help write code at the beginning of 2023, but its context size and code quality were very limited (GPT 3.5 initially had a context of only 4000 tokens!). It was mostly useful for completing the code in single functions. That was still a tremendous help which improved speed and reduced fatigue with things like SQL and SQLAlchemy queries, finding syntax errors in hand-written code, evaluating debug errors, and writing code based on API documentation learned in-context in chat conversations - but any code generated by GPT needed to be tested very carefully (never trusted), and it wasn't yet capable of making any sort of sweeping changes to codebases across multiple files.
By the time AI coding capabilities really improved, I had been focused on using no-code tools to help ease CRUD development fatigue. No-code tools had also given me a new approach to involving stakeholders in the development process. I was doing lots of plain old CRUD work, so I became interested in Jam.py sometime in 2024, but never put it to use in any big projects, because it just wasn't a well-supported framework.
2025 was a big year for me using Baserow (a newer no-code tool) in production, and it was also the year I really began seeing AI capabilities grow enough to build apps connected to the Baserow API - that hybrid approach was dramatically more productive than anything I had experienced previously.
By early 2026, it became obvious that GPT 5 and all the contemporary LLMs were capable of building entire large projects, without the need for big tools like no-code systems. Claude Code and other agentic systems could manage nearly endless context, spawn sub-agents, and handle the entire iterative process of building, testing, and deploying.
My zip file project management process with GPT used the agentic capabilities built into ChatGPT to further simplify complicated development routines, without needing to install any local agentic tooling (and it kept my costs for very heavy development usage of OpenAI models to $20 per month!).
Throughout this entire learning process, one of the most important repeated realizations was that the choice of development tooling was dictated more and more by which tools the LLMs were best trained on. I settled on the Flask framework and its ecosystem, because all the LLMs were absolute whizzes with it.
The deep volume of training materials available online (likely many millions of pages of tutorials and documentation, and many billions of lines of code which had already been written to solve problems many times over) gave the LLMs phenomenal capability with this ecosystem. The LLMs are likely at least as good with React and other popular ecosystems, but I wanted something very lightweight, easy to install and manage on any OS/platform, and I was more focused on server-side capability, with Python's ecosystem capabilities as a first priority.
So this topic is really just meant to provide some historical context for future users, to help them understand how this change in development practice evolved over a very short time as AI improved during the past few years. All the original posts, and much more context/history, are available at the old rebolforum, but here are just a few of the most telling posts:
Nick — 3-Oct-2025/11:13:11-7:00
Lately in practice I've been using Baserow and NocoDB for the no-code pieces of projects. Baserow has been working fantastically for the clients who've been using it. We've gotten some significantly sized production databases created with it (some with schema that has been expanded daily over the past 10 months), and it's being used in production apps which connect to many of those database tables via the Baserow API.
I have *not used the Baserow app builder nearly as much as I had initially expected. It's now so easy to build the typical sorts of CRUD interfaces that would be created with the app builder using GPT instead, that I'm not sure there's any need for a CRUD application builder of any sort, in any environment. The sorts of apps that can be built with the Baserow application builder in a day or 2 can typically be built with GPT in an hour or 2, and apps built in Flask with GPT, for example, have none of the limitations inherent in no-code app building tools.
I just don't see end-users and stakeholders being too interested in building applications, in the same way they are in building their databases.
It seems that the killer features of Baserow are really the schema and view creation features, along with the ability for users to create and share forms, filters, sorts, groupings, etc., as well as the 'IT'-related features such as workspace creation/management, server migrations, backups and trash/undelete, webhooks, and of course API connectivity.
Nick — 5-Oct-2025/12:30:21-7:00
When I take a step back from the software development work I'm focused on completing, I have to really acknowledge just how much no-code tools have changed the nature of my software development work this past year.
Of course it's always at the forefront of my attention how much LLMs have improved my bandwidth and ability to get development work completed quickly. They improve my ability to learn quickly, to get up to speed with understanding the context of requirements within complex organizational workflows, compliance obligations, etc., to communicate better with all parties involved in a project, and to dive right into building solutions without worrying that I'll hit painful dead ends. They also reduce my stress and fatigue while building codebases, help me make technical choices about which libraries and tools to use, help me understand the purpose and functionality of the data structures involved, etc. LLMs have been the star of the show lately in my head.
But when I really look at what's happened at organizations using Baserow and NocoDB, I'm struck hard by what's actually been *accomplished with those tools.
With very little hand-holding, I've watched stakeholders build some pretty staggeringly deep and useful databases - with all the UI forms and views needed to make them actually help users get work completed. From beginning to end, no-code schema, views, and forms have been used to complete a shockingly large number of workflows which would have required a lot of drudging CRUD development work in the past.
Only once this year have I had to help completely engineer a no-code schema from the ground up, along with UI forms, views, webhooks, etc., to manage a very complex multi-part workflow which involved several different stages of user group involvement, along with deep integration in an application which I built separately. Otherwise, I tend to help users understand only how to normalize typical many-to-one and many-to-many relationships, and how to create basic views and UI form workflows. That's been simple and fun work which everyone involved tends to find satisfying - not too technically challenging or frustrating - more enabling and useful.
After basic introductory instruction and some ongoing simple help, what users have completed on their own with these tools is absolutely staggering. If they had needed to rely on me to build schema and queries for these databases using full-code tools (SQL, SQLAlchemy, and/or any ORM(s)), I could have employed 10 developers full time throughout the year.
Since I'm focused on building applications which connect to these databases, my attention has been on the effectiveness of LLM code generation, but when I really consider how far my client organizations have been able to improve their data management situations, the no-code tools have been rock stars.
For me, the really interesting projects always involve novel application features which I'm asked to build - but the CRUD capabilities underlying that work still form the critical foundation of every project - and the no-code tools have turned that massive amount of work into a mostly trivial side concern for me (just helping workers when they need guidance and support).
The owners, managers, and employees have gotten a massive amount of CRUD work done for me - and they're happier to be enabled to do that. They know their workflows, their data, and their vision for how they want to manage their data effectively. They consult with me regularly about how to best organize and link tables of data, how to build filtered views, UI input forms, etc., and how to build schema that I can connect with in applications I'm commissioned to build - but then *they do that CRUD work, all on their own. That has only very rarely been possible using any sort of code tools in the past. Using a few select no-code tools, it's now easy for them to accomplish, and they have the control they want - and neither I nor they have to go through endless agile revisions of requirements preparation, prototypes, communication about how the data structures need to be adjusted, etc. They're enabled to work with real schema and data in malleable ways, entirely on their own - and that's a game changer which has far-reaching repercussions throughout the life cycle of every project.
Baserow and NocoDB are powerful, versatile, safe, and effective solutions which keep end-users from having to wait for me to build CRUD software - and everything they build with those no-code tools forms structured data which I can use in full-code application development (and of course, I can use LLMs to help reduce the complexity of that process).
When I look at what's been accomplished with no-code tools in my projects this year, and compare it to solutions and workflows I used to complete big projects which began just 2-3 years ago, it's strikingly clear just how much more productive, effective, practical and painless the no-code tools are, throughout every phase of development and the lifecycle of software projects. It's fantastic that users of software can help build their own software as needed and imagined. That capability adds an extraordinarily powerful and elegant solution to the common problems and pitfalls of traditional development approaches.
The practical work that gets accomplished with no-code tools can't really be replaced by LLMs yet, because what no-code tools do is enable a large group of stakeholders to build out the database of a project - in the ways they need and want. What I've been discovering is that in practice, that's work which is really best left to them, instead of a development team. It's easy for them to accomplish with the right tools, and not only does it form a real foundation for actual software development, it simply eliminates the need for most CRUD software development work. The CRUD foundations they complete can be wired up easily in real software development tools.
I couldn't ask any non-technical stakeholders to try and build reliable CRUD software with LLMs - that would be an utter mess, and I could never trust anything which anyone built that way. It would likely take longer to review and test 'vibe-coded' CRUD solutions than it would to just build them myself. But those issues are entirely solved with Baserow and NocoDB. Those tools enable enormous swaths of CRUD work to get completed easily, quickly, and reliably, in a consistently structured environment, by a team of non-developers.
Looking back, I'm amazed at how effective Baserow and NocoDB have been - how much work they've eliminated, how much has been built with relatively effortless ease, and with a lot of practical satisfaction among clients (owners, managers, end-users, their contractors and clients, etc.). It feels pretty darn magical to have gotten so much work done in such a slick and useful way - and everyone is better for it.
The users who build with no-code have gained a new capability and understanding, and we can work together so much more easily when they have that understanding. They've stopped relying so much on disparate Google docs, spreadsheets, emails, and all sorts of other scattered personal tools - we just put everything in their managed, secure Baserow and NocoDB environments, where it can all be immediately integrated directly with real software development tools. And I don't spend any of my time or energy writing/debugging/testing/updating schema or query code any more - doing that would feel so barbaric at this point - even though it constituted a large majority of my time in all the decades leading up to this moment. And of course that work forms the overwhelming effort and time needed to complete any typical business software project - it's basically eliminated at this point, replaced by much more natural and intuitive tooling which even non-technical users begin to master immediately.
It's so important to understand that not all no-code tools are created equal. The way that Baserow and NocoDB (as well as Teable, which I haven't used yet in any big production projects) enable users to create and move schema around, to enter, copy, paste, and drag entire selected grids of data, for example, so easily - that's what makes all the difference in the world. For non-technical users, the slick interface is what enables its use. Tools with old 'admin' types of UIs simply would not get used. They'd be hated.
I've gotten to love Baserow, but I think I could do all this with just NocoDB (and Teable may end up being an alternative, but certainly not required). That's a tiny tool in the scheme of things.
Just as importantly, all the LLM development help I enjoy using could be accomplished entirely with GPT-OSS:20b running on a local PC, along with Python and a local repository of the most common libraries (especially the Flask ecosystem), and SQLite, Postgres (plus MySQL, MSSQL if required in client environments).
I'll of course continue to use GPT and Gemini, pip, PyPI, and all the Internet-based, remotely hosted tools, but it's reassuring to know I could do all this work entirely with tooling that can fit on a flash drive, and which can be installed and run in a few minutes, entirely on an inexpensive self-hosted laptop (and/or accessed by remote desktop share on my phone, etc.).
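To give a rough idea of what that local setup looks like in practice, here's a minimal sketch of calling a locally hosted model - assuming a local runner like Ollama exposing an OpenAI-compatible endpoint, with the GPT-OSS 20b model pulled locally; the URL, model tag, and prompt are all assumptions, not a prescribed configuration:

```python
# Sketch: chatting with a locally hosted model through an
# OpenAI-compatible endpoint. Assumes a runner like Ollama is
# serving at localhost:11434 with gpt-oss:20b pulled locally.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # assumption: Ollama's OpenAI-compatible API
    api_key="unused",                      # local runners typically ignore the key
)

resp = client.chat.completions.create(
    model="gpt-oss:20b",  # assumption: the local model tag
    messages=[{"role": "user", "content": "Write a Flask route that returns JSON."}],
)
print(resp.choices[0].message.content)
```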
Anvil and some other tools like jam.py were great stepping stones which led to this current environment. If LLMs didn't exist the way they do, I'd likely still be using them. Anvil and jam.py include no-code DB tools, visual UI builders, project management tools, code completion, etc. - all of which can now be handled more effectively with LLM code generation and a simpler pipeline.
The point is to clarify just how staggeringly productive and earth-shakingly better software development has gotten, with just a few simple-to-use tools. Give me NocoDB, a local GPT-OSS 20b, and a local repository of libraries, and everything about software development is entirely different - worlds of productivity and joy away from where it was just a few years ago. Give me a project that would have taken hundreds of hours and lots of pain to complete a few years ago, and it's now a part-time week of effort and lots of joy to complete - for everyone involved. That's staggering progress.
Nick — 1-Nov-2025/11:04:28-7:00
I'm exploring Lodefy a bit as a side interest. It ticks a lot of boxes needed for simple, high productivity full stack application development.
Nick — 2-Nov-2025/14:58:42-8:00
I made a leap of faith with one particular large project during the past few months, really relying on no-code, together with using GPT to write most of the custom application code. After 2 1/2 years of learning what does and doesn't work with LLM code generation (tooling, frameworks, prompts, context management, codebase management, the workflow I described above, etc.), the project has so far been outrageously successful.
A significant production database has evolved with Baserow, over the past half year, with hundreds of tables, many non-trivial relationships, UI input forms and sortable/filterable/sharable views, authentication, etc., in a workspace that's been super easy to manage, back up, duplicate and migrate between servers - all with no-code, which has been immediately usable by all the stakeholders (owner, managers, employees, end-users). We have the whole project hosted with Atlantic.net's HIPAA VPS offering, and the entire system has been built to be rock-solid compliant every inch of the way.
I regularly spend time training users how to use the Baserow interface and to normalize schema, but that mostly consists of simple work, such as showing users how to import CSV files, manipulate columns, set up data types, copy/paste values, avoid wide tables with duplicated values, link many-to-one relationships, use lookups, write formulas, aggregations, etc. - just generally how to use the dead-simple UI system.
Having stakeholders involved in doing this labor has saved me many hundreds of hours of grunt work building schema and performing data entry, but more importantly, it's changed how the entire engineering approach works, from the ground up and throughout the lifecycle of the project, because those stakeholders know their data and intimately understand its meaning, as well as what they want to do with the data.
In this environment I've no longer had to rely on endless meetings or constant cycles of demonstrating prototypes to discover schema requirements. I give users a few rules about never deleting schema, and only ever making changes to duplicated columns, whenever any sort of schema alteration needs to be migrated. And Baserow has a full suite of tools to keep data/schema from ever being permanently lost (undo, trash can, backup/restore, audit features, etc.).
So far, everyone has just immediately understood how to use Baserow - it's been a rock star which has been utterly painless to work with. I've had zero complaints from any users. They all love it.
In this project, stakeholder involvement in the development process has ended with the database and CRUD interactions. They build schema and malleable views, and enter data. That's what they want to be able to do, and that's what I want them to be able to do.
Basically all integrated custom software development for this project has been completed by connecting auto-generated Baserow API endpoints to requests calls from Python functions in Flask, and the UI has been completed with typical Jinja HTML templates, Bootstrap, jQuery, Datatables.net, any other required JS libraries, etc. - even for all the real-time features (any other web UI can be implemented in Flask - those are just the tools I've found LLMs can use 100% reliably, without hiccups).
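As a concrete illustration of that pattern, here's a minimal sketch of such a route - assuming a Baserow API token in an environment variable; the instance URL, table id, route, and template name are placeholders, not the project's actual values:

```python
# Sketch: a Flask route that pulls rows from a Baserow table via its
# REST API and renders them with a Jinja template.
import os

import requests
from flask import Flask, render_template

app = Flask(__name__)

BASEROW_URL = "https://api.baserow.io"  # assumption: or your self-hosted instance
TABLE_ID = 123                          # placeholder table id
HEADERS = {"Authorization": f"Token {os.environ['BASEROW_TOKEN']}"}

@app.route("/patients")  # placeholder route name
def patients():
    # user_field_names=true keys rows by column names instead of field ids
    resp = requests.get(
        f"{BASEROW_URL}/api/database/rows/table/{TABLE_ID}/",
        headers=HEADERS,
        params={"user_field_names": "true"},
        timeout=10,
    )
    resp.raise_for_status()
    rows = resp.json()["results"]
    return render_template("patients.html", rows=rows)  # placeholder template
```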
I collect requirements from the stakeholders and write detailed SRSs for GPT (and GPT typically helps with reasoning, planning, and choosing directions to satisfy those requirements, as well as researching library and infrastructure choices, for example).
And in this project, I've let GPT take the wheel for all custom application code generation. I've spent a huge amount of time writing prompts, debugging, and testing code, but the coding work has been nearly entirely completed by GPT - and the results have been breathtaking, both in terms of speed and quality of output.
A complicated real-time messaging system has been built, with a massive set of hard-to-implement features, along with a pile of deep workflows for patient intake and care management, an entirely in-house geolocation/mapping system, all sorts of multimedia bells and whistles, notification systems, scheduled background processes, specialized user-role features, etc.
Once the project got rolling, and the patterns used to communicate requirements to GPT became established, I was regularly able to get difficult, large pieces of the project completed in a few hours, which would have taken weeks using the most productive tools previously available.
The engineering workflow is still the same - there's always a phase of communicating and gathering requirements, researching infrastructure, planning schema, logic, UI, etc., but all that is sped up by integrating every bit of it into conversations with the LLM, where the generation of code is all just a part of the same process - even the emails I write to communicate with stakeholders, IT staff, end-users, etc., all involve the LLM. I leverage the LLM every step of the way.
This project currently has 35,000+ lines of production code which have been written from scratch, tested, and delivered to users, and the CRUD pieces automatically created by people using Baserow represent at least a few tens of thousands of additional lines of drudge code which would have had to be written manually, using any other highly productive framework tooling - and it's all rock solid, eliminating so many write/debug/test cycles, fatigue, careful detailed attention, labor, and frustration.
Baserow automatically creates documentation for any entire database built with it, and GPT can immediately use those docs to build any required custom queries. That documentation is currently about 15 MB for the existing Baserow database - so integrating the Baserow API is dead simple and is basically handled automatically and instantly by LLM code generation. It's hard to explain how much work that saves. And of course generating code to use an internal database, built in the custom application code (with SQLAlchemy, SQL, etc.), is just as easy and fast.
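For the internal-database case, the equivalent code is equally routine - a minimal SQLAlchemy sketch, with hypothetical model and field names, just to show the shape of what GPT generates on demand:

```python
# Sketch: a small SQLAlchemy 2.0-style model and query, of the kind
# generated on demand for an app's internal database. Model, field,
# and database names here are hypothetical.
from sqlalchemy import String, create_engine, select
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

class Base(DeclarativeBase):
    pass

class Patient(Base):  # hypothetical model
    __tablename__ = "patients"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str] = mapped_column(String(120))
    status: Mapped[str] = mapped_column(String(30), default="active")

engine = create_engine("sqlite:///app.db")  # assumption: SQLite for local dev
Base.metadata.create_all(engine)

with Session(engine) as session:
    # Fetch all active patients - the sort of query written by hand for decades
    active = session.scalars(
        select(Patient).where(Patient.status == "active")
    ).all()
```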
What's really struck me in all this has been the painlessness of the work. Most of the hard challenges are gone. It still takes the same kind of patience, attention to detail, and understanding about how all the parts in the system work together, as well as time spent setting up infrastructure, communicating with other humans, etc., but the difficulties associated with all those things, not least the code-writing routine, have basically come to an end at this point, and I don't expect they'll return.
GPT has guided every complex installation and troubleshooting routine related to standing up infrastructure (which in the past would have taken 100x as long, at every step), and I've used Gemini, Grok, Kimi, GLM, etc., whenever I've wanted an additional critical eye to help test and review code, evaluate vulnerabilities, etc., but the painful time-consuming work is dissolving very quickly.
What's also been surprising to me is how well the most recent zip file technique I described above has scaled. From what I've experienced, I don't think there's an end to the scope of context which can be wrangled comfortably with it. This project encompasses several dozen separate custom applications in a suite, each of which would likely have been a significant, months-long project in the past - all neatly connected, entirely malleable, adjustable, morphable, and manageable as needed, whenever changes are required. In fact, that's been one of the greatest improvements in my outlook - I don't hesitate to tear things down, re-integrate working parts with new features, etc. The LLMs are truly spectacular at re-forming and putting together pieces into new, larger, constructed works.
I'm astounded at how successful these tools have been at building powerful systems. I've been using them all, in parts, for several years now, across different incarnations of evolving useful concepts - but the progress in terms of capability and quality has absolutely exploded, even just in the past few months. My workflow is absolutely nothing like it was half a year ago, and it just continues to improve by leaps and bounds.
Nick — 3-Nov-2025/13:54:47-8:00
It's interesting - the only time I ever previously used any sort of auto-complete or project management integration tooling was with Anvil, because Anvil's tooling was a complete system - very productive because every piece was integrated (file management, Git versioning, visual UI layout, front-end code, back-end code (and calls to server code directly from UI code), database, ORM, code editor, etc.) and all of it was controlled with one integrated language interface. In that system, auto-complete really beautifully connected all those pieces, in a uniform way, which fit a very manageable, small mental model and language vocabulary.
Objects throughout the Anvil system persist as lightweight references, so autocomplete in that environment is especially powerful. For example, rows from the built-in database, returned by ORM queries in back-end code, can be accessed directly by front-end code (without any serialization to JSON or AJAX machinery, for example) - and those data references appear as autocomplete selections, consistently throughout the entire system. So you can perform a search on the back end, those results appear as auto-complete choices in your front-end code, and then they appear again in any back-end calls made from the front end, and so on... and since those objects are passed around as references, rather than as actual data values (as they are with serialized JSON, typically used in web frameworks with AJAX), they don't take up any significant memory (like pointers in C, but much safer, easier, and higher-level to use). That tight integration is a great thing to love about Anvil.
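For anyone who hasn't seen Anvil code, a minimal sketch of that pattern (the table, function, and component names here are hypothetical, and the two halves live in separate modules of an Anvil app):

```python
# --- Server module (runs in Anvil's back end) ---
# The search results are returned as lightweight row references,
# not serialized JSON.
import anvil.server
from anvil.tables import app_tables

@anvil.server.callable
def find_customers(city):
    return app_tables.customers.search(city=city)  # hypothetical table

# --- Client-side form code (runs in the browser) ---
# The same row references arrive directly, can be handed straight to
# UI components, and can be passed back into later server calls
# without any AJAX plumbing.
rows = anvil.server.call('find_customers', 'Portland')
self.repeating_panel_1.items = rows  # hypothetical component name
```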
But now I've moved on to even deeper levels of integration, and much more powerful 'autocomplete', with LLM-driven development, because LLMs not only complete the actual code for all my engineering intentions (not just references to data, functions, and other labeled pieces), they also take part in communicating with clients, users, IT departments, etc., at the human level, as well as writing requirements documents - and reasoning, planning, choosing directions to satisfy those requirements, researching library and infrastructure choices, etc. Because all that activity happens in a single environment (the text of chat conversations), which the LLM has access to entirely - not just all the code, but a complete understanding of the real-world context of the application's purpose and goals, the environment it runs in, etc. - it can help with every single piece of work that I do, including installing infrastructure, performing code review for pieces written by other developers, etc., etc...
And at this point, the quality of code written by GPT is fantastic, and I can use other LLMs to perform code review, just as I would ask a team of experienced developers to perform code review, to uncover security issues, to suggest alternate engineering solutions, tools, code patterns, etc.
I've never spent more than $20/month on LLMs (ChatGPT is the only one I pay for), and have never hit a rate limit at that expense, often using it all day for days at a time. For my needs, I've only ever used Gemini for free in Google AI studio, and the LLMs I have running on my little home servers are fantastically capable of generating quality code (GPT-OSS 20b & 120b, GLM 4.5 Air, and Mixtral8x22b are my favorites for cheap consumer GPUs), and bigger open source models like the full GLM 4.6, MiniMax M2, Kimi K2, DeepSeek-V3.2-Exp, Qwen3-Coder-480B-A35B-Instruct are as deeply capable as GPT, Gemini, Grok, etc.
I still perform all the implementation of every piece of my projects - installation, debugging, all engineering decisions, tool choices, etc. - by hand (as opposed to giving agentic systems access to a file system and command line tools, and letting them iterate automatically). Working with zip files in GPT makes that a piece of cake, and super fast, while still keeping me in the loop about every single choice made, every line of code produced, etc., directing every choice intentionally.
I keep a full history of all the versions ever explored - reverting to a previous version is accomplished by simply unzipping a file already uploaded to the server. The decisions which went into every version, as well as all the code changes are fully documented in extraordinary detail in each chat conversation, which I store as a single .mhtml file - so reviewing everything that went into creating any piece of a project is super simple to track down - not just for me, but for the LLM too.
I typically attach a single zip file which stores an *entire project and all related files (all code, resources, supporting files of any sort, user documentation, environment variables) - everything, in the exact file structure as it exists on the server - and ask GPT to work with it, surgically changing/extending/integrating any features which need to be built - always asking it to provide the complete updated code as incrementally numbered vXXX.zip files (e.g., v148.zip). I use SCP to upload those files to the server, where they're simply unzipped with -o (automatic overwrite).
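That upload/unzip step is simple enough to wrap in a tiny helper - a sketch under assumed names (the host, remote directory, and SSH-key authentication are placeholders for whatever a given server uses):

```python
# Sketch: the zip deployment step described above, wrapped in a helper.
# Assumes SSH key authentication; HOST and REMOTE_DIR are placeholders.
import subprocess
import sys

HOST = "user@myserver.example.com"  # assumption: your SSH target
REMOTE_DIR = "/srv/myapp"           # assumption: project root on the server

def deploy(zip_name: str) -> None:
    # Copy the versioned zip (e.g. v148.zip) to the server...
    subprocess.run(["scp", zip_name, f"{HOST}:{REMOTE_DIR}/"], check=True)
    # ...then unzip in place, overwriting existing files (-o).
    subprocess.run(["ssh", HOST, f"cd {REMOTE_DIR} && unzip -o {zip_name}"], check=True)

if __name__ == "__main__":
    deploy(sys.argv[1])  # usage: python deploy.py v148.zip
```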
The LLM examines everything involved in the project, and can reason about how everything is connected. There's nothing from the project missing - and by using zip files (instead of cutting/pasting code in the conversation), GPT is able to perform absolutely massive context management.
If the LLM needs more context about previous decisions which have been made, previous debugging steps which have been completed, or anything else about the history of the project, that's all contained in single .mhtml files which can be attached to a conversation. Sometimes (rarely) I'll attach two separate zip files, for applications/projects which need to be integrated, along with multiple previous conversation .mhtml files, and GPT goes to work integrating the projects (it can update multiple zip files in a single conversation). I rarely need to type more than a sentence to provide all that context.
The main reason I'm not using Anvil in this process is that it's far more difficult to provide full context about the project, especially the visual UI and visual database management pieces of projects. It's entirely possible to work entirely in Anvil with code (and not use any visual tools), but all the LLMs are staggeringly proficient with Flask and the Flask ecosystem: SQLAlchemy, SQL, other Python ORM tools, and any well-known tool in the Python ecosystem, as well as any well-known UI framework in the web development ecosystem - they are wizards at integrating solutions with those tools.
Another important characteristic of everything in the Flask ecosystem is that it's all super lightweight and fast/easy to install on any operating system. There are basically no limitations to what can be accomplished with these ecosystems, and LLMs can help perform research and make informed choices about the best tools to fit any project goal - although they tend to use a simple core set of tools for any mainstream purpose: HTML/CSS, well-known JS libraries, Bootstrap, jQuery (or React, Vue, Svelte, etc.) - whatever tools you prefer, as long as they are well known and well documented by millions/billions of pages of text and code on the Internet. Using the best-known, most-used, best-documented tools in the industry is the key to successful LLM code generation.
Nick — 2026-03-02 14:25:17
I've reached the point at which development with GPT will likely eliminate the usefulness of no-code databases for my future projects, even though Baserow has been used with complete success in every production project where it's been implemented as the core database technology. My clients love Baserow, it's incredibly productive, it's been 100% reliable, it's so easy to connect with via the API, etc.
There's nothing not to like about Baserow, and if coding with GPT weren't so outrageously productive, I would likely continue to use Baserow as a core tool for many years (and I'll likely still support Baserow community users, for whom it's still a great fit). The thing is, using GPT to write code is just so productive that I think my workflows with it are set to eliminate the need for Baserow in any work that I expect to see in the future.
The real benefit of Baserow has been that clients get to do the work of building their own database. Not only does this eliminate much of the CRUD drudgery for me, it entirely changes the process of gathering requirements from clients. Long term, clients are able to *reliably update and maintain their own database schema and all required basic CRUD interactions. Clients know their data, and I've found that they can typically communicate requirements by building schema - and asking questions about how to instantiate the details of that schema in a no-code tool like Baserow - better than they can by trying to communicate conceptual requirements in plain English. Baserow has enabled non-technical users to do this, at a very deeply detailed technical level, so much more effectively than I ever would have believed possible.
With Baserow, the development process is just a different experience than it is when trying to collect detailed requirements outside the structure of a database system that non-technical users can understand and take part in building. And of course, when users can build all their CRUD requirements, a huge amount of drudge work, data entry and coding is eliminated on my part.
With Baserow, my involvement in much of the development process shifts more fundamentally to teaching, training, and helping users engineer schema, with good decisions about normalization, and understanding how to build views, forms, auth, user management, etc. All doubts about whether this is an effective approach have been completely eliminated in my experience.
Now step back and consider everything I just said, and ... GPT can do ALL that BETTER. Even though hooking up a custom app to the Baserow API is dead simple with GPT. Even though you can simply export your API documentation from Baserow, label the columns you want to pull data from and write to, and GPT can build API calls correctly, first shot.
Working with an internal database built by GPT is even easier and more productive. You don't even need to refer to tables/columns when engineering the implementation of any imagined software feature. GPT now does all that work intuitively, when prompted at the level of building *functionality in an application. I've built many projects over the past year, across literally hundreds of deployed versions, and there is no limit in sight regarding how deeply an application can be modified, or how complex its structure can become (using the single zip file approach I've described previously).
GPT can also help with the process of collecting and clarifying requirements - not just at the level at which engineers/developers think about specifying details of architecture, but across every level of communicating the intended purpose of an application, and the technical decisions involved in choosing libraries, frameworks, ORM/databases, UI tools, etc. GPT can literally decipher the intent of emails from users, help form emails in response to clarify that intent, evaluate engineering options which are likely to be best in the long run for any intended feature, guide an engineer through every step of installation and server configuration using new tools/frameworks, explain and discuss methodology decisions, write and debug all the code needed, etc., etc., etc. And when all the code for a project lives in a single zip file, which provides the complete context about database schema, application structure, logic, and UI (everything - all functions and wiring across every piece of front end and back end), as well as everything else about the project's environment, server configurations, etc., then you need to explain so much less for GPT to reason through solutions, even when prompted only at the extremely high level of functionality requirements.
The fact that GPT can even be involved in conversations with clients, IT personnel, all stakeholders, users, etc., to define the purpose of required functionalities, as well as every technical decision and trade-off that needs to be considered regarding resource use, the financial viability of any infrastructure choice, maintenance requirements, etc., beats virtually any other imaginable approach.
And of course, the fewer tools and infrastructure involved, the better. The experiences I've had implementing so many sorts of complex functionalities in large projects - consisting of connected applications that are each tens of thousands of lines of code, constantly evolving through hundreds of iterations of morphing requirements, including in one instance a large and deeply specific UI redesign by a hired graphic artist (which I documented in my AI thread on this forum) - have proved that the simple and lightweight Flask ecosystem can handle every requirement my clients throw at me. Even all the great benefits of using a tool like Baserow can't beat the flexibility, capability, productivity, ease of implementation, and inherent workflow simplicity of using GPT in every phase of software development work.