
Supercharging ChatGPT Coding with Context Packs

  • Writer: Ryan Johnston
  • May 30
  • 7 min read

Leverage the power of context packs, smart chunking, and ChatGPT’s Projects & Custom Instructions to turn a massive API reference into a nimble, AI-augmented coding workflow.


  1. Introduction to Supercharging ChatGPT Coding with Context Packs

Ever stared at a 10-MB CHM file or an API website for the first time and wondered how to make sense of the thousands of API topics inside? By extracting, filtering, and packaging only what you need—and by pairing that with laser-focused prompts and ChatGPT’s built-in organization features—you can treat your AI assistant like a true coding partner. This post walks through:


  1. Building & using context packs

  2. Understanding ChatGPT’s rate limits & chunking strategy

  3. Harnessing Projects & Custom Instructions

  4. Best practices for ongoing context updates


Let’s dive in.


  2. Creating Context Packs

ChatGPT helps you most effectively when you point it directly at the material you want it to use. Sometimes it can find references on its own, but it works best with curated context, which is why we build context packs out of relevant data. In my case, the source was an API reference in a CHM (compiled HTML) file hosted on my computer. A CHM is essentially a large, locally stored documentation set that opens from Windows Explorer; the problem was that ChatGPT couldn't access it. It wasn't available online, and ChatGPT couldn't accept the file type, which is what led me down the context-pack route. The short version of this process: I find the information relevant to my coding project, turn it into a txt file, and upload that into my ChatGPT coding project so ChatGPT can reference it. Here's how I do it:


While CHM extraction is my go-to for the API docs I need, you can build context packs from nearly any text-based source. Here are a few approaches:


2.1 Extracting Your CHM into TXT Files

First decompile the CHM into its HTML topics (on Windows, hh.exe -decompile does this), then use a Python script to spill that directory of HTML files into .txt files, one topic per file. The output is thousands of small, searchable text files. Code for the extraction:

# html_to_txt.py
# Converts a folder of decompiled CHM topics (.html/.htm) into .txt files.
import os

from bs4 import BeautifulSoup

# os.path.expandvars resolves %USERNAME%; Python won't expand it on its own
input_folder  = os.path.expandvars(r"C:\Users\%USERNAME%\Downloads\NET API")
output_folder = os.path.expandvars(r"C:\Users\%USERNAME%\Downloads\NET API_TEXT")

os.makedirs(output_folder, exist_ok=True)

print("Checking folder:", input_folder)

html_files_found = False

for subdir, dirs, files in os.walk(input_folder):
    for file in files:
        if not (file.endswith(".html") or file.endswith(".htm")):
            continue  # skip non-HTML files instead of reprocessing a stale path
        html_files_found = True
        input_file = os.path.join(subdir, file)

        try:
            with open(input_file, 'r', encoding='utf-8') as f:
                soup = BeautifulSoup(f, 'html.parser')
                text = soup.get_text(separator='\n', strip=True)

            # Save as .txt (swap the .html/.htm extension for .txt)
            output_file = os.path.join(output_folder, os.path.splitext(file)[0] + '.txt')
            with open(output_file, 'w', encoding='utf-8') as out:
                out.write(text)
            print(f"Converted: {file}")

        except Exception as e:
            print(f"Failed to process {file}: {e}")

if not html_files_found:
    print("No HTML/HTM files found in the input folder.")

print("Conversion complete")

2.2 Scraping a Documentation Website

Many APIs publish their docs on the web, and you can pull those into text too. The outcome is a plain-text dump of live, versioned docs. This script is ChatGPT-generated and hasn't been tested, but it looks like a good start:

import requests
from bs4 import BeautifulSoup

def scrape_page(url, out_path):
    resp = requests.get(url)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    content = soup.select_one("div.main-content")  # adjust selector to the site
    if content is None:
        print(f"No main content found at {url}")
        return
    text = content.get_text(separator="\n")
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(text)

# Example: scrape a few known endpoint pages
base = "https://docs.example.com/api/"
for endpoint in ["ClipPlane", "Viewpoint", "ClashTest"]:
    scrape_page(base + endpoint, f"webdocs_{endpoint}.txt")
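The loop above hard-codes three endpoint names. If the docs site lists its pages in a sidebar nav, you can discover the URLs instead of typing them out. This is a hypothetical sketch: the nav.sidebar a selector and the example base URL are assumptions you would adjust to the real site.

```python
# Hypothetical helper: collect every doc page linked from a sidebar nav.
# The CSS selector and base URL are assumptions -- adjust to your docs site.
from urllib.parse import urljoin

from bs4 import BeautifulSoup

def extract_nav_links(html, base_url, selector="nav.sidebar a"):
    """Return absolute URLs for every link matched by the nav selector."""
    soup = BeautifulSoup(html, "html.parser")
    return [urljoin(base_url, a["href"]) for a in soup.select(selector) if a.get("href")]

# Usage sketch (untested against a live site):
# import requests
# html = requests.get("https://docs.example.com/api/").text
# for url in extract_nav_links(html, "https://docs.example.com/api/"):
#     scrape_page(url, "webdocs_" + url.rstrip("/").split("/")[-1] + ".txt")
```

Feeding each discovered URL back into scrape_page gives you one txt file per doc page without maintaining a hand-written endpoint list.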

2.3 Cloning GitHub Repositories

If you need code examples or README docs, clone the repository first (git clone plus the repo URL), then pull only the relevant files into your pack:

mkdir contextpack_github
cp Navisworks-API-Examples/ClipPlane/*.cs contextpack_github/
cp Navisworks-API-Examples/Transactions/*.md contextpack_github/

Or script it with Python:

import shutil, glob

for fname in glob.glob("Navisworks-API-Examples/**/*.cs", recursive=True):
    if "ClipPlane" in fname:
        shutil.copy(fname, "contextpack_github/")

2.4 Parsing PDFs or Word Docs

If you have a PDF spec or DOCX design doc:

import textract

text = textract.process("design_spec.pdf").decode("utf-8", errors="ignore")
with open("spec_design.txt", "w", encoding="utf-8") as f:
    f.write(text)

2.5 Aggregating Code Comments

Don’t overlook inline code comments in your own repo:

grep -R "// TODO" -n src/ > contextpack_todos.txt
grep -R "/// <summary>" -n src/ > contextpack_xml_comments.txt
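Those grep commands assume a Unix-style shell, which stock Windows doesn't have. A rough Python equivalent, with example paths and marker strings, could look like this:

```python
# Rough grep substitute for Windows: collect every line containing a
# marker string from a source tree into one pack file. The ".cs" filter
# and the example markers are assumptions -- adjust for your repo.
import os

def collect_comment_lines(src_dir, marker, out_path):
    with open(out_path, "w", encoding="utf-8") as out:
        for subdir, _dirs, files in os.walk(src_dir):
            for name in files:
                if not name.endswith(".cs"):
                    continue
                path = os.path.join(subdir, name)
                with open(path, encoding="utf-8", errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        if marker in line:
                            out.write(f"{path}:{lineno}: {line.strip()}\n")

# collect_comment_lines("src", "// TODO", "contextpack_todos.txt")
# collect_comment_lines("src", "/// <summary>", "contextpack_xml_comments.txt")
```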

2.6 Combining Your Snippets

Depending on which method you used, we likely have a lot of text files at this point. My CHM file produced over 3,000 txt files totaling 4.16 MB. If we uploaded each individual txt file to ChatGPT, it either wouldn't upload or wouldn't work well because there would be too much context. That makes the next step to search for what we need and combine the results into a context pack via a Python script.


Results of Extracting CHM file as individual txt files

So my first step is to go into this folder and use the Windows Explorer search to find the specific API items I want to turn into a context pack. For me, this is usually classes, namespaces, or methods. It's important to search not just the file names but also the file contents. Once the search is complete:

  • grab all the files referenced (see section 3 on rate limits and chunking),

  • copy the search results into their own folder,

  • run the Python script below to concatenate those text files into a single txt file for upload,

    • the script creates a new txt file and writes each source file's name and contents into it,

    • name the combined file something representative of the context pack, such as the namespace/class/method you searched for (e.g. __contextpack_PluginsNamespace),

  • upload the combined txt file into your ChatGPT project (more on setting up the project below).

import os

# os.path.expandvars resolves %USERNAME%; Python won't expand it on its own
SOURCE_DIR  = os.path.expandvars(r"C:\Users\%USERNAME%\Downloads\ContextPack_PluginsNamespace")
OUTPUT_FILE = os.path.join(SOURCE_DIR, "__contextpack_PluginsNamespace.txt")

# Combine txt files into a single text file, writing each file's name before
# its contents and excluding the output file itself from the source directory
def combine_txt_files(source_dir, output_file):
    with open(output_file, 'w', encoding='utf-8') as outfile:
        for filename in sorted(os.listdir(source_dir)):
            if filename.endswith('.txt') and filename != os.path.basename(output_file):
                file_path = os.path.join(source_dir, filename)
                with open(file_path, 'r', encoding='utf-8') as infile:
                    outfile.write(f"=== {filename} ===\n\n")
                    outfile.write(infile.read())
                    outfile.write('\n\n\n=======================================================')

combine_txt_files(SOURCE_DIR, OUTPUT_FILE)

Context Pack Folder with Concatenated Txt File

Example of the Contents of the Concatenated Txt File

Congrats, the hard part is done.



  3. Navigating Rate Limits & Chunking


3.1 Token & Context Budget

  • Limit: ~32 K tokens (≈20 K–25 K words).

  • Practical ceiling per upload: ~20 K tokens (≈80 K characters), to leave room for the AI-generated response.

3.2 How to Tell You’re Overloading

  • ChatGPT stops referencing earlier context.

  • You get a “context truncated” warning.

  • Answers start repeating or going off-topic.

3.3 Chunking Heuristics

  • Single feature (e.g. ClipPlanes): 5–10 small files → one 5–10 KB pack

  • 2–3 related areas: two medium packs (~10–20 KB each), uploaded sequentially

  • Broad API survey: split by namespace (api_transactions.txt, api_boundingboxes.txt, etc.)

Pro tip: Keep each combined pack under ~80 KB (≈80 K characters, or about 20 K tokens) and name it clearly.
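To check a pack against that ceiling before uploading, a quick size report helps. The ~4-characters-per-token ratio below is a common rule of thumb for English text, not an exact count:

```python
# Report a pack's size and a rough token estimate before uploading.
# The 4-chars-per-token ratio is a rule of thumb, not an exact tokenizer.
import os

def pack_stats(path, chars_per_token=4):
    chars = os.path.getsize(path)  # bytes ~ characters for mostly-ASCII docs
    return {"kb": chars / 1024, "approx_tokens": chars // chars_per_token}

# stats = pack_stats("__contextpack_PluginsNamespace.txt")
# if stats["kb"] > 80:
#     print("Pack too big -- split it by namespace or feature.")
```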



  4. Organize ChatGPT with Projects


4.1 Why Use Projects?

  • Centralize all chats within a single folder

  • Upload Project Files (your context packs and code context) so they’re always at hand.

  • Track milestones by using a new chat within the project for every feature (UI done, grouping logic merged, etc.).

  • Give Custom Instructions so ChatGPT understands the project without you having to repeat yourself in every new chat.


4.2 Getting Started

  1. In the sidebar, click + New Project.

  2. Name it for your application or feature.

  3. Drag in any existing chats, your contextpack_combined.txt, and your compiled code.

  4. Pin critical messages or code snippets.



  5. Tailor with Custom Instructions

5.1 What to Include

  • Your role & goals: “I’m building an MVVM WPF plugin in C# version 7.3 for Navisworks. The purpose of this plugin is to ____.”

  • Coding standards: “Use ICommand and service-based ViewModels. Show try/catch around API calls.”

  • Tone & format: “Respond like a colleague: concise, actionable snippets, numbered steps. It's very important that if you think I'm going about something the wrong way to tell me and propose other solutions. This is a collaborative effort between you and me.”

  • Reference Context Packs: "I have the API broken up into txt files that I can feed you if you feel like you're missing anything. We'll call those contextpacks. These contextpacks will include the names of [hopefully] all related files and the contents of those files in a single txt file. Please try to reference the specific file name you're using if we find ourselves bouncing around a solution so I can help with referencing."


5.2 Setup

  1. Projects → Instructions.

  2. Fill in “How Can ChatGPT best help you with this project”



  6. Keeping Your AI Partner Updated

6.1 Best Practices for Context Updates

  • Incremental packs: When you add a new module (e.g. bounding boxes), build a mini-pack and upload.

  • Changelog messages: At the top of a new pack, prepend:

# Update 2025-06-01
- Added contextpack_boundingbox.txt exploring BoundingBoxDataContext
- Updated ViewCreationService.cs snippets

  • Pin & comment: In your Project sidebar, pin the update and add a one-sentence summary.

  • Reference back: In your prompt: “Using the June 1 update pack, how do I integrate bounding-box filters into my dashboard?”

  • Master context pack: Generate a weekly combined pack of any changed doc or code comments.

  • Feature packs: For each sprint, upload only the new or updated modules.

  • Git-driven packs: Hook your CI/CD to run a script that collects changed .cs, .md, .txt into a dated pack and alerts you.

  • In-chat summaries: Start each new conversation with a quick bullet list of recent commits or feature flags.
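The Git-driven packs idea above can be sketched in a few lines: ask git which files changed since a ref, then concatenate the text-like ones into a dated pack. The default ref, the extension filter, and the output name here are assumptions to adapt to your repo:

```python
# Sketch of a Git-driven pack builder: concatenate files changed since a
# ref into a dated context pack. Ref, extensions, and naming are examples.
import datetime
import os
import subprocess

def build_changed_pack(repo_dir, since_ref="HEAD~1", exts=(".cs", ".md", ".txt")):
    """Return (pack filename, list of files included)."""
    changed = subprocess.run(
        ["git", "diff", "--name-only", since_ref],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    picked = [f for f in changed
              if f.endswith(exts) and os.path.exists(os.path.join(repo_dir, f))]  # skip deletions
    out_name = f"contextpack_{datetime.date.today():%Y%m%d}.txt"
    with open(out_name, "w", encoding="utf-8") as out:
        for rel in picked:
            out.write(f"=== {rel} ===\n")
            with open(os.path.join(repo_dir, rel), encoding="utf-8", errors="ignore") as f:
                out.write(f.read() + "\n\n")
    return out_name, picked

# build_changed_pack(".", since_ref="origin/main")
```

Wired into a CI step, this produces the dated pack automatically on each push; you still upload it to the project yourself.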


6.2 Automate Notifications (Optional)

  • Daily check-ins: Use a scheduled reminder in ChatGPT’s Automations to ask:

    “Any thoughts on the latest context updates?”

  • Branch-based updates: If your code is in Git, script a CI step to auto-generate a combined pack of changed .txt files whenever you push and notify ChatGPT.



7. Prompting for Success

7.1 Example Prompts

  • “Using the combined pack, show me how to wrap edits in BeginTransaction/CommitTransaction around multiple ClashTest changes.”
    Why it works: specifies the API, method names, and context.

  • “Write a WPF ICommand implementation in C# that toggles a ClipPlaneSet on/off.”
    Why it works: calls out MVVM, the language, and the API area.

  • “What exceptions should I catch when loading a plugin via reflection?”
    Why it works: targets the plugin-loading context.

7.2 Anatomy of a Good Prompt

  1. Scope: Clear feature or class names.

  2. Pattern: MVVM, WPF, C#… whatever architecture you’re using.

  3. Reference: “Using the combined pack…” ensures ChatGPT uses your uploaded text.

  4. Output format: Full snippet, method signature, XAML + code-behind, etc.



  8. Conclusion

By extracting, filtering, and chunking your massive API references into bite-sized context packs (from CHM files, live websites, GitHub repos, PDFs, and code comments) and pairing them with precision prompts, Projects, and Custom Instructions, you can make ChatGPT a truly integrated coding teammate. Whether you're building something simple, looking for feedback, or crafting a WPF dashboard, this workflow keeps your AI partner in sync with your codebase: no context lost, no tedious re-uploads, and maximum productivity.


Happy coding!
