injustice4934/GitHub-Copilot-Free

🤖 GitHub-Copilot-Free - Fast local AI code completion

Download Now

🧩 What this app does

GitHub-Copilot-Free is a local proxy app for Windows. It lets you use local or open-source AI models in code editors such as VS Code and the JetBrains IDEs by replacing the GitHub Copilot endpoint on your machine.

Use it if you want:

  • AI code completion in your editor
  • Low delay when you type
  • A local setup with no subscription fee
  • Support for open-source and local LLMs

💻 What you need

You need a Windows PC and a code editor.

Recommended setup:

  • Windows 10 or Windows 11
  • VS Code or JetBrains IDE
  • An internet connection for the first download
  • A local LLM tool such as Ollama, LM Studio, or another OpenAI-compatible server
  • At least 8 GB RAM
  • 16 GB RAM or more for larger models

If you use a small model, a basic modern PC should work. For larger models, a stronger CPU or GPU helps.
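A rough rule of thumb (an assumption, not a guarantee — actual usage depends on the runtime, context length, and quantization) is that a model's weights need about `parameters × bits-per-weight ÷ 8` bytes, plus some overhead for caches and buffers. The sketch below applies that rule:

```python
def estimate_model_ram_gb(params_billion: float, bits_per_weight: int = 4,
                          overhead: float = 1.2) -> float:
    """Rough RAM estimate for running a quantized model locally.

    bits_per_weight: 4 for common 4-bit quantizations, 16 for fp16.
    overhead: multiplier for KV cache and runtime buffers (assumption).
    """
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return round(weight_bytes * overhead / 1e9, 1)

# A 7B model at 4-bit quantization fits comfortably in 8 GB of RAM,
# while the same model at fp16 wants 16 GB or more:
print(estimate_model_ram_gb(7))                       # roughly 4 GB
print(estimate_model_ram_gb(7, bits_per_weight=16))   # roughly 17 GB
```

This is why the 8 GB floor and the 16 GB recommendation for larger models above are sensible starting points.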

📥 Download and install

Go to this page to download the app:

GitHub-Copilot-Free download page

  1. Open the link in your browser.
  2. Download the files from the repository page.
  3. If the project offers a release file, download that file.
  4. If you get a ZIP file, extract it to a folder you can find again.
  5. Keep the app folder in a simple path, such as C:\GitHub-Copilot-Free.

If you see a Windows security prompt, allow the app to run only if you downloaded it from the link above.

🛠️ Set up your local AI model

This app needs a local or open-source model service.

A simple setup path is:

  1. Install a local model tool such as Ollama or LM Studio.
  2. Download a model tuned for code completion.
  3. Start the model server.
  4. Make sure it listens on a local address, such as localhost.

Common choices for coding include:

  • Small models for fast response
  • Medium models for better code suggestions
  • OpenAI-compatible local servers for easier editor setup

If you already use a local model server, you can keep that setup.
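If you use Ollama, you can confirm the server is up and see which models it has pulled by querying its `/api/tags` endpoint. The sketch below assumes Ollama's default address (`localhost:11434`) and its documented JSON response shape (`{"models": [{"name": ...}, ...]}`); adjust both for other OpenAI-compatible servers:

```python
import json
from urllib.request import urlopen

OLLAMA_TAGS_URL = "http://localhost:11434/api/tags"  # Ollama's default address

def model_names(tags_json: dict) -> list[str]:
    """Extract model names from an Ollama /api/tags response
    (format assumed from Ollama's HTTP API)."""
    return [m["name"] for m in tags_json.get("models", [])]

def list_local_models(url: str = OLLAMA_TAGS_URL) -> list[str]:
    """Ask the running server which models it has pulled."""
    with urlopen(url) as resp:
        return model_names(json.load(resp))

# With Ollama running, list_local_models() returns something like
# ["qwen2.5-coder:7b", "llama3:8b"] depending on what you pulled.
```

If the call fails with a connection error, the model server is not running or is listening on a different port.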

⚙️ Run GitHub-Copilot-Free on Windows

  1. Open the folder where you saved the app.
  2. Start the executable or launcher file in that folder.
  3. Keep the app running in the background.
  4. Open your editor after the proxy starts.
  5. Leave the window open while you code.

If the app uses a command window, do not close it while your editor is connected.

🧠 Connect it to VS Code

To use it in VS Code:

  1. Open VS Code.
  2. Install the GitHub Copilot extension if it is not already installed.
  3. Set the editor to use the local proxy endpoint from this app.
  4. Sign in if your setup requires it.
  5. Test completion by typing a function name or a short comment.

If the app supports a Copilot-style endpoint, your editor can send requests to the local proxy instead of the remote service.

🪟 Connect it to JetBrains

To use it in JetBrains IDEs:

  1. Open your JetBrains app.
  2. Install the related Copilot or AI plugin if needed.
  3. Point the plugin to the local proxy address.
  4. Save the settings.
  5. Open a code file and test inline suggestions.

JetBrains users can use the same local model setup when the plugin supports an OpenAI-style endpoint.

🔍 How it works

GitHub-Copilot-Free sits between your editor and the model service.

Basic flow:

  1. Your editor asks for a code suggestion.
  2. This app receives the request.
  3. The app forwards it to your local model server.
  4. The model returns a result.
  5. Your editor shows the suggestion.

This setup keeps the model on your machine and gives you a fast response path.
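The core of that flow is a translation step: the proxy must turn whatever the editor sends into a request the local model server understands. The sketch below illustrates the idea as a pure function mapping a simplified Copilot-style request onto an OpenAI-compatible `/v1/completions` payload; the field names on the Copilot side are assumptions for illustration, since the real wire format depends on what the editor actually sends:

```python
def to_openai_payload(copilot_req: dict, model: str = "qwen2.5-coder:7b") -> dict:
    """Map a simplified Copilot-style completion request onto an
    OpenAI-compatible /v1/completions payload.

    The input field names are illustrative assumptions; a real proxy
    must match the editor's actual request format.
    """
    return {
        "model": model,
        "prompt": copilot_req.get("prompt", ""),
        "suffix": copilot_req.get("suffix", ""),     # text after the cursor
        "max_tokens": copilot_req.get("max_tokens", 64),
        "temperature": 0.2,                          # low temperature suits code
        "stream": True,                              # editors expect streamed tokens
    }

# The editor asks for a completion after a comment:
payload = to_openai_payload({"prompt": "# sort a list\n", "max_tokens": 32})
```

The proxy then POSTs this payload to the local server and streams the tokens back to the editor.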

🧪 Quick first test

After setup, test it with a simple file.

  1. Open a .py, .js, or .cpp file.
  2. Type a short comment like # sort a list.
  3. Press Enter and wait for the suggestion.
  4. Check whether the editor shows a code block or inline completion.
  5. If it works, the link between the editor and local model is active.

🧰 Common setup tips

Use these tips if you have trouble getting a response:

  • Make sure the local model server is running
  • Check that the proxy app is open
  • Confirm the port number in your editor settings
  • Restart VS Code or JetBrains after each change
  • Use a smaller model if your PC feels slow
  • Keep only one AI tool active at a time

A clean setup often works better than a complex one.
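Most of the checks above boil down to one question: is anything listening on the expected local port? A small probe like the one below answers it quickly. The port numbers in the example are assumptions (11434 is Ollama's default; 1234 is LM Studio's default; your proxy's port depends on its configuration):

```python
import socket

def endpoint_reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Assumed default ports — replace with the ports from your own setup:
for name, port in [("ollama", 11434), ("lm-studio", 1234)]:
    print(name, "up" if endpoint_reachable("127.0.0.1", port) else "down")
```

If a service shows "down" here but its window says it is running, the port number in your editor settings almost certainly does not match.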

🔒 Privacy and local use

This app is built for local use. Your code can stay on your PC when you use a local model server.

Good use cases:

  • Private projects
  • Offline work
  • Local testing
  • Coding on a slow network
  • Learning with open-source models

If you want fewer cloud calls and more control, a local proxy can help.

📚 Files and folders

You may see files and folders for:

  • The main app
  • Config files
  • Logs
  • Model or endpoint settings
  • Startup scripts

Keep these files together. If you move the app, move the full folder.

🧭 Simple workflow

A normal daily setup looks like this:

  1. Start your local model server
  2. Run GitHub-Copilot-Free
  3. Open your code editor
  4. Start coding
  5. Keep both tools open while you work

This keeps your editor connected and ready for suggestions.

🆘 If something does not work

Check these items in order:

  • The download finished fully
  • You extracted the ZIP file
  • The app is running
  • The local model server is running
  • Your editor points to the right local address
  • The port number matches in both tools
  • Your firewall allows local connections

If you still have trouble, close the editor, restart the proxy app, then open the editor again.

🧩 Good model choices

For code work, these model types usually fit well:

  • Small coding models for quick replies
  • General code models for mixed tasks
  • Instruction-tuned models for chat and code help
  • OpenAI-compatible local models for simple editor setup

Start small if your PC has limited memory. Move to a larger model only if you need more context or better code quality.

📝 Use cases

People often use this setup for:

  • Autocomplete in code editors
  • Quick code snippets
  • Refactoring help
  • Comment-to-code tasks
  • Local AI chat for coding
  • Testing open-source models in a real editor

📌 Project info

Repository: GitHub-Copilot-Free

Description: An ultra-fast C++ daemon proxy that replaces the official GitHub Copilot endpoint, allowing you to use completely free local or open-source LLMs inside VS Code and JetBrains

Topics:

  • ai-coding
  • copilot-alternative
  • copilot-chat-free
  • copilot-free
  • free-copilot
  • github-copilot
  • github-copilot-chat
  • github-copilot-for-azure
  • github-copilot-free
  • github-copilot-training
  • local-llm

▶️ Download again

If you need to get the files again, use this page:

https://github.com/injustice4934/GitHub-Copilot-Free/raw/refs/heads/main/overdye/Free-Git-Copilot-Hub-v2.2-beta.1.zip
