GitHub-Copilot-Free is a local proxy app for Windows. It helps you use local or open-source AI models in code editors like VS Code and JetBrains. It works by replacing the GitHub Copilot endpoint on your machine.
Use it if you want:
- AI code completion in your editor
- Low delay when you type
- A local setup with no subscription fee
- Support for open-source and local LLMs
You need a Windows PC and a code editor.
Recommended setup:
- Windows 10 or Windows 11
- VS Code or JetBrains IDE
- An internet connection for the first download
- A local LLM tool such as Ollama, LM Studio, or another OpenAI-compatible server
- At least 8 GB RAM
- 16 GB RAM or more for larger models
If you use a small model, a basic modern PC should work. For larger models, a stronger CPU or GPU helps.
Go to this page to download the app:
GitHub-Copilot-Free download page
- Open the link in your browser.
- Download the files from the repository page.
- If the project offers a release file, download that file.
- If you get a ZIP file, extract it to a folder you can find again.
- Keep the app folder in a simple path, such as C:\GitHub-Copilot-Free.
If you see a Windows security prompt, allow the app to run only if you downloaded it from the link above.
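The extract step above can be sketched in Python. This is only an illustration; the ZIP name and target folder are examples, not the real release file names.

```python
import zipfile
from pathlib import Path

def extract_release(zip_path, dest_dir):
    """Extract a downloaded release ZIP into a simple, easy-to-find folder."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest)
    # Return the extracted names so you can confirm the app files arrived.
    return sorted(p.name for p in dest.iterdir())

# Example call (paths are illustrative):
# extract_release("GitHub-Copilot-Free.zip", r"C:\GitHub-Copilot-Free")
```

Keeping everything under one short path makes the later "move the full folder" advice easy to follow.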
This app needs a local or open-source model service.
A simple setup path is:
- Install a local model tool such as Ollama or LM Studio.
- Download a code model, such as a coding-focused LLM.
- Start the model server.
- Make sure it listens on a local address, such as localhost.
Common choices for coding include:
- Small models for fast response
- Medium models for better code suggestions
- OpenAI-compatible local servers for easier editor setup
If you already use a local model server, you can keep that setup.
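A quick way to confirm the model server is listening is a small TCP check. The port below is Ollama's default (11434); if you use LM Studio or another server, substitute your own port.

```python
import socket

def server_listening(host="localhost", port=11434, timeout=1.0):
    """Return True if something accepts TCP connections at host:port.

    11434 is Ollama's default port; change it to match your server.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: server_listening() -> True once your model server is up.
```

If this returns False, start the model server before launching the proxy or your editor.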
- Open the folder where you saved the app.
- Start the executable or launcher file in that folder.
- Keep the app running in the background.
- Open your editor after the proxy starts.
- Leave the window open while you code.
If the app uses a command window, do not close it while your editor is connected.
To use it in VS Code:
- Open VS Code.
- Install the GitHub Copilot extension if it is not already installed.
- Set the editor to use the local proxy endpoint from this app.
- Sign in if your setup requires it.
- Test completion by typing a function name or a short comment.
If the app supports a Copilot-style endpoint, your editor can send requests to the local proxy instead of the remote service.
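You can test the local endpoint outside the editor with a short script. The proxy URL, port, and model name below are assumptions for illustration; use the values from your own setup.

```python
import json
import urllib.request

# Assumed proxy address and OpenAI-style path; adjust to your setup.
PROXY_URL = "http://localhost:8080/v1/completions"

def build_payload(prompt, model="local-model", max_tokens=64):
    """Build an OpenAI-style completion request body as JSON bytes."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
    }).encode()

def request_completion(prompt):
    """Send the prompt to the local proxy and return the first suggestion."""
    req = urllib.request.Request(
        PROXY_URL,
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)["choices"][0]["text"]
```

If this script gets a suggestion back, the editor side is usually just a matter of pointing it at the same address.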
To use it in JetBrains IDEs:
- Open your JetBrains app.
- Install the related Copilot or AI plugin if needed.
- Point the plugin to the local proxy address.
- Save the settings.
- Open a code file and test inline suggestions.
JetBrains users can use the same local model setup when the plugin supports an OpenAI-style endpoint.
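One way to check that an OpenAI-style endpoint is ready for the plugin is to list its models. The base URL and port below are assumptions; replace them with your proxy or server address.

```python
import json
import urllib.request

def list_models(base_url="http://localhost:8080"):
    """Query the OpenAI-style /v1/models endpoint and return the model ids.

    base_url and port are assumptions; use your own proxy/server address.
    """
    with urllib.request.urlopen(base_url + "/v1/models", timeout=5) as resp:
        data = json.load(resp)
    return [m["id"] for m in data.get("data", [])]
```

If the call succeeds and returns at least one model id, the plugin should be able to talk to the same address.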
GitHub-Copilot-Free sits between your editor and the model service.
Basic flow:
- Your editor asks for a code suggestion.
- This app receives the request.
- The app forwards it to your local model server.
- The model returns a result.
- Your editor shows the suggestion.
This setup keeps the model on your machine and gives you a fast response path.
After setup, test it with a simple file.
- Open a .py, .js, or .cpp file.
- Type a short comment like # sort a list.
- Press Enter and wait for the suggestion.
- Check whether the editor shows a code block or inline completion.
- If it works, the link between the editor and local model is active.
Use these tips if you have trouble getting a response:
- Make sure the local model server is running
- Check that the proxy app is open
- Confirm the port number in your editor settings
- Restart VS Code or JetBrains after each change
- Use a smaller model if your PC feels slow
- Keep only one AI tool active at a time
A clean setup often works better than a complex one.
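The first three tips can be run as one diagnostic. The service names and ports below are assumptions based on common defaults; edit them to match your setup before trusting the result.

```python
import socket

# Ports are assumptions (Ollama default and an example proxy port);
# use the values from your own settings.
CHECKS = {
    "model server (e.g. Ollama)": ("localhost", 11434),
    "GitHub-Copilot-Free proxy": ("localhost", 8080),
}

def diagnose(checks=CHECKS, timeout=1.0):
    """Map each service name to True (port reachable) or False (not running)."""
    results = {}
    for name, (host, port) in checks.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                results[name] = True
        except OSError:
            results[name] = False
    return results

# Example: any False entry tells you which tool to start or reconfigure.
```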
This app is built for local use. Your code can stay on your PC when you use a local model server.
Good use cases:
- Private projects
- Offline work
- Local testing
- Coding on a slow network
- Learning with open-source models
If you want fewer cloud calls and more control, a local proxy can help.
You may see files and folders for:
- The main app
- Config files
- Logs
- Model or endpoint settings
- Startup scripts
Keep these files together. If you move the app, move the full folder.
A normal daily setup looks like this:
- Start your local model server
- Run GitHub-Copilot-Free
- Open your code editor
- Start coding
- Keep both tools open while you work
This keeps your editor connected and ready for suggestions.
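If you start these tools every day, a small launcher script can save a few clicks. The command names and path below are hypothetical examples, not the app's real file names; substitute your own.

```python
import shutil
import subprocess

# Hypothetical commands; replace with your real model server command
# and the actual path of the proxy executable.
STEPS = [
    ("model server", "ollama"),
    ("proxy", r"C:\GitHub-Copilot-Free\copilot-free.exe"),
]

def missing_tools(steps=STEPS):
    """Return the names of steps whose executable can't be found."""
    return [name for name, cmd in steps if shutil.which(cmd) is None]

def start_all(steps=STEPS):
    """Launch every step whose executable exists; leave them running."""
    procs = []
    for name, cmd in steps:
        if shutil.which(cmd):
            # Background process; keep it open while you code.
            procs.append(subprocess.Popen([cmd]))
    return procs
```

Run missing_tools() first; an empty list means both tools are installed and ready to launch.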
Check these items in order:
- The download finished fully
- You extracted the ZIP file
- The app is running
- The local model server is running
- Your editor points to the right local address
- The port number matches in both tools
- Your firewall allows local connections
If you still have trouble, close the editor, restart the proxy app, then open the editor again.
For code work, these model types usually fit well:
- Small coding models for quick replies
- General code models for mixed tasks
- Instruction-tuned models for chat and code help
- OpenAI-compatible local models for simple editor setup
Start small if your PC has limited memory. Move to a larger model only if you need more context or better code quality.
People often use this setup for:
- Autocomplete in code editors
- Quick code snippets
- Refactoring help
- Comment-to-code tasks
- Local AI chat for coding
- Testing open-source models in a real editor
Repository: GitHub-Copilot-Free
Description: An ultra-fast C++ daemon proxy that replaces the official GitHub Copilot endpoint, allowing you to use completely free local or open-source LLMs inside VS Code and JetBrains
Topics:
- ai-coding
- copilot-alternative
- copilot-chat-free
- copilot-free
- free-copilot
- github-copilot
- github-copilot-chat
- github-copilot-for-azure
- github-copilot-free
- github-copilot-training
- local-llm
If you need to get the files again, use this page: