
open-codex

Overview

Open Codex is a fully open-source command-line AI assistant inspired by OpenAI Codex, supporting local language models like phi-4-mini and full integration with Ollama.

Detected project type: Python.

This repository was migrated from upstream source github.com/shafiqalibhai/open-codex and is preserved here for archival, reference, or continued local development.

At a glance

  • Default branch: master
  • Visibility: public
  • Size: 6.3 MB
  • Created: 2026-04-27
  • Last updated: 2026-04-27
  • Stars / Forks / Open issues: 0 / 0 / 0
  • License: MIT

Languages

Language   Bytes          Share
Python     14,837 bytes   85.4%
Shell       1,734 bytes   10.0%
Ruby          750 bytes    4.3%
Makefile       50 bytes    0.3%

Repository structure

  • .github/
  • Formula/
  • open-codex-0.1.17/
  • src/
  • .gitignore (4,494 B)
  • .python-version (5 B)
  • build_deb.sh (830 B)
  • LICENSE.md (1,170 B)
  • pyproject.toml (628 B)
  • README.md (2,906 B)
  • release.sh (904 B)
  • uv.lock (95,408 B)

Getting started

Clone the repository:

git clone https://forgejo.deployview.com/ssa/open-codex.git
cd open-codex

Installation

The project is packaged with pyproject.toml (there is no requirements.txt), so install it directly:

python3 -m venv .venv && source .venv/bin/activate
pip install .

Usage

Once installed, run the open-codex CLI entry point:

open-codex "list all folders"

Original README

The content below is preserved from the previous README. Headings have been demoted so they don't compete with the new top-level sections.

Open Codex

Open Codex CLI

Lightweight coding agent that runs in your terminal

brew tap codingmoh/open-codex && brew install open-codex

(Demo GIF: codex "explain this codebase to me")


Open Codex is a fully open-source command-line AI assistant inspired by OpenAI Codex, supporting local language models like phi-4-mini and full integration with Ollama.

🧠 Runs 100% locally: no OpenAI API key required. Everything works offline.


Supports

  • One-shot mode: open-codex "list all folders" -> returns shell command
  • Ollama integration for locally hosted models (e.g., LLaMA 3, Mistral)
  • Native execution on macOS, Linux, and Windows

Features

  • Natural Language → Shell Command (via local or Ollama-hosted LLMs)
  • Local-only execution: no data sent to the cloud
  • Confirmation before running any command
  • Option to copy to clipboard / abort / execute
  • Colored terminal output for better readability
  • Ollama support: use advanced LLMs with --ollama --model llama3
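The confirm-before-run flow described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual implementation; the function name and single-letter choices are assumptions for the sake of the example:

```python
import subprocess

def handle_choice(choice: str, command: str) -> str:
    """Dispatch on the user's choice for a suggested shell command.

    'e' executes the command in a shell, 'c' copies it (represented here
    by returning a marker string), anything else aborts without running it.
    """
    if choice == "e":
        # Run only after explicit confirmation -- never automatically.
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        return result.stdout.strip()
    if choice == "c":
        return f"copied: {command}"
    return "aborted"
```

The key design point is that the suggested command is inert text until the user explicitly picks execute; any unrecognized input falls through to abort.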

🔍 Example with Ollama:

open-codex --ollama --model llama3 "find all JPEGs larger than 10MB"

Codex will:

  1. Send your prompt to the Ollama API (local server, e.g. on localhost:11434)
  2. Return a shell command suggestion (e.g., find . -name "*.jpg" -size +10M)
  3. Prompt you to execute, copy, or abort

🛠️ You must have Ollama installed and running locally to use this feature.
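A request to the local Ollama server might be shaped like the following sketch against Ollama's /api/generate endpoint. The prompt wrapping is an assumption for illustration, not the project's actual template:

```python
import json

# Default Ollama endpoint on the local server (port 11434).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_ollama_payload(model: str, user_prompt: str) -> str:
    """Build the JSON body for a one-shot, non-streaming generate call."""
    body = {
        "model": model,
        # Illustrative wrapper asking the model for a bare shell command.
        "prompt": f"Reply with a single shell command only: {user_prompt}",
        "stream": False,
    }
    return json.dumps(body)
```

The CLI would POST this body to OLLAMA_URL and read the generated command from the response before showing the execute / copy / abort prompt.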


🧱 Future Plans

  • Interactive, context-aware mode
  • Fancy TUI with textual or rich
  • Full interactive chat mode
  • Function-calling support
  • Whisper-based voice input
  • Command history & undo
  • Plugin system for workflows

📦 Installation

🔹 Option 1: Install via Homebrew

brew tap codingmoh/open-codex
brew install open-codex

🔹 Option 2: Install via pipx (Cross-platform)

pipx install open-codex

🔹 Option 3: Clone & install locally

git clone https://github.com/codingmoh/open-codex.git
cd open-codex
pip install .

Once installed, use the open-codex CLI globally.


🚀 Usage Examples

▶️ One-shot mode

open-codex "untar file abc.tar"

  • Codex suggests a shell command
  • Asks for confirmation / copy to clipboard / abort
  • Executes if approved

▶️ Using Ollama

open-codex --ollama --model llama3 "delete all .DS_Store files recursively"

🛡️ Security Notice

All models run locally. Commands are executed only after your explicit confirmation.


🧑‍💻 Contributing

PRs welcome! Ideas, issues, improvements — all appreciated.


📝 License

MIT


❤️ Built with love and caffeine by codingmoh.

Contributing

Contributions are welcome. The typical workflow is:

  1. Open an issue describing the change you'd like to make.
  2. Fork the repository (or create a feature branch if you have write access).
  3. Commit your changes with clear, descriptive messages.
  4. Open a pull request against the master branch.

Please follow the existing code style and include tests or reproduction steps where relevant.

License

This project is licensed under the MIT license. See the LICENSE file for the full text.

Repository

  • Browse: https://forgejo.deployview.com/ssa/open-codex
  • Clone (HTTPS): https://forgejo.deployview.com/ssa/open-codex.git
  • Clone (SSH): ssh://git@forgejo.deployview.com:30143/ssa/open-codex.git
  • Upstream / origin: github.com/shafiqalibhai/open-codex

This README was generated automatically based on repository metadata, contents, and any prior README content. Edit any section above to add project-specific detail.