This is a little post from the old Rebolforum, dated 15-Oct-2023/10:10:30-7:00, back when AI software workflows were still young, reposted here for some historical perspective:

This week, I had to integrate several thousand lines of code from a desktop app written in Python, built to generate Word documents from form inputs previously collected and processed via Google Forms. The job was to recreate the user interface as a web app the client could run in-house, integrate the legacy Word document creation code into the web app, and make dozens of alterations to the layout and content of the generated Word docs.

That's a conceptually straightforward task, but the original forms were 8-9 pages long, each full of form inputs, so the project involved many hundreds of variables representing the responses to the original Google Forms questions, and many hundreds of conditional rules to parse those form values and produce the appropriate content in the generated Word documents. Many of those rules had associated CSV parsing operations (dozens of the fields contained CSV lists of values, each with different conditional parsing logic based on the configuration and values in the list), and the Word document content was produced with a Python library I had never used before.
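To give a flavor of the kind of per-field conditional parsing involved, here is a minimal sketch in plain Python. The field names and formatting rules below are hypothetical illustrations, not the client's actual form schema, and the real project had hundreds of such rules rather than two:

```python
def parse_csv_field(raw_value, multi_select=True):
    """Split a comma-separated form field into a clean list of values."""
    if not raw_value:
        return []
    values = [v.strip() for v in raw_value.split(",") if v.strip()]
    return values if multi_select else values[:1]

def render_response(field_name, raw_value, rules):
    """Apply a per-field conditional rule to produce output text for the doc."""
    values = parse_csv_field(raw_value)
    # Fall back to a plain comma-joined list when no rule is defined.
    rule = rules.get(field_name, lambda vs: ", ".join(vs))
    return rule(values)

# Hypothetical per-field rules: each form question formats its list of
# answers differently in the generated document.
rules = {
    "symptoms": lambda vs: "Reported symptoms: " + "; ".join(vs),
    "consent":  lambda vs: "Consent given" if "yes" in vs else "No consent",
}

line1 = render_response("symptoms", "fever, cough , fatigue", rules)
# -> "Reported symptoms: fever; cough; fatigue"
line2 = render_response("consent", "yes", rules)
# -> "Consent given"
```

Multiply this pattern by hundreds of fields, each with its own quirks, and the appeal of having GPT generate the boilerplate becomes obvious.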

This was the simplest challenge of all the projects I completed during the past month, but it required a large volume of repetitive, detailed grunt work. GPT saved me an enormous amount of time and frustration getting that grunt work done. I uploaded the full legacy code base and simply asked which variables needed to be changed, and which lines of code needed to be edited (and how) for every change to the text and formatting rules in the output document. That not only saved hours tracing paths through function calls and their inconsistently named arguments in the legacy code, but also saved hours of boring, mindless work looking up function call signatures in the documentation of the library used to generate each piece of the Word document (tables, bullets, headers with colors based on conditional content evaluations, lists of grouped data values within text, etc.). In most cases, GPT not only instantly identified variables that would otherwise have required painstakingly following each of hundreds of values through many functions, each with its own inconsistent naming conventions - it also instantly generated the code needed to produce each conditional change in the output document, for free.

Increasing the speed of this grunt work freed my time for much more complicated engineering challenges in other projects I was working on simultaneously with other clients. The improved response times also made the clients really happy: I was able to handle dozens of document-update tickets at a time, typically within an hour and almost always the same day, when the same sorts of tickets had taken the previous developer days or weeks to complete. And of course, Anvil made the whole process much faster, because users could instantly see and interact with each new version of the app, and Anvil's handling of development/production versioning, syntax highlighting in the IDE, etc., made project management, repetitive coding chores, and error checking super simple and fast.

In this little project, there were many other small tasks, such as removing legacy Tkinter desktop UI code and converting data saved to files in the legacy code into in-memory Anvil media objects, plus lots of detailed little data transformations and paths through function calls and logic, for which GPT instantly generated working code that eased what would have been even more painstaking grunt work - all for free. Its ability to integrate code from multiple previously existing examples into one final block of working code is really impressive and productive.
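The file-to-memory conversion mentioned above follows a common pattern. Here is a minimal sketch using only the standard library - the function names are hypothetical, and the Anvil-specific wrapping is shown only as a comment, since it needs the Anvil runtime:

```python
import csv
import io

def export_to_disk(rows, path):
    """Legacy desktop pattern: write results to a file on disk."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(rows)

def export_to_memory(rows):
    """Web-app pattern: build the same bytes in memory instead.

    In an Anvil server module, these bytes would then typically be
    wrapped in a media object for download, e.g.:
        anvil.BlobMedia("text/csv", data, name="report.csv")
    """
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue().encode("utf-8")

data = export_to_memory([["name", "score"], ["Ada", "97"]])
```

The same substitution works for Word documents: most document libraries accept a file-like object such as `io.BytesIO` wherever they accept a path, so the legacy save-to-disk call can usually be swapped for an in-memory buffer with a one-line change.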

Working with unknown libraries and APIs is the sort of thing that used to require lots of Googling, reading documentation, looking up questions on StackOverflow, etc., and an enormous amount of painstaking, detailed grunt work to complete. GPT and other LLMs never get tired and they never complain :) This sort of work can now very much be automated with LLMs, and the quality of generated output is staggeringly good when working with well-known libraries. I'm consistently impressed with how GPT can generate code examples from documentation - even when there are no examples in the documentation (this is in-context learning) - and then integrate each of those pieces into a larger context, based on an explanation of the reasoning required. It's honestly often astounding how well it does, compared to how well a human would do on a first pass reasoning through the requirements - and it can output hundreds of working lines of code instantly, when that work would previously have taken lots of time, regardless of how simple or complex the task.

This not only improves my productivity, but also my whole outlook and state of mind when approaching projects with lots of time-consuming detail work.

But that's just the tip of the iceberg. I think one of the most fantastic capabilities of GPT is its ability to interact with questions about the reasoning required to complete a task. Using GPT's code interpreter, you can provide instructions for a task, and it will perform multiple iterations of code generation, based on reasoned evaluations of whether the code actually produces the correct outcome. It will change its approach and devise new, thoughtful approaches based on how the output fails to conform to the requested outcome. It will display the code used at each step along the way, and it will explain the thought process it chose to discover a solution.