🚀 Full Library Coming Soon!

We're excited to announce that the full Context Manager library will be released soon, featuring one of our top-performing context rerankers for enhanced AI performance.

Context Manager Light

A modular, plug-and-play memory and context management layer for AI agents

Overview

Context Manager Light is a powerful library designed to enhance AI agents with intelligent memory and context management capabilities. It provides a seamless way to add memory to your existing AI applications without requiring architectural changes.

Short-Term Memory

Manages recent conversation turns with automatic token-aware eviction and fast access for immediate context.
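
The eviction policy can be pictured as a token-budgeted queue. The sketch below is illustrative only, not the library's internals; it uses tiktoken (already a dependency) to count tokens and drops the oldest turns once the budget is exceeded.

# Illustrative sketch of token-aware eviction (not the library's internals)
from collections import deque
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

class TokenAwareBuffer:
    def __init__(self, capacity_tokens: int = 8000):
        self.capacity = capacity_tokens
        self.turns = deque()
        self.tokens = 0

    def add(self, text: str):
        n = len(enc.encode(text))
        self.turns.append((text, n))
        self.tokens += n
        # Evict oldest turns until the buffer fits the budget again
        while self.tokens > self.capacity and self.turns:
            _, evicted = self.turns.popleft()
            self.tokens -= evicted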

Long-Term Memory

Vector-based semantic storage using FAISS with hierarchical summaries and persistent storage across sessions.
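
Conceptually, the long-term store pairs a sentence-transformers embedder with a FAISS index. The following is a minimal sketch of that general pattern using the listed dependencies; the library's actual storage layer adds hierarchical summaries and persistence on top.

# Minimal FAISS + sentence-transformers pattern (conceptual sketch)
import faiss
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
texts = ["Tokyo itinerary", "Kyoto temples", "Daily budget of $250"]

embeddings = model.encode(texts).astype("float32")
index = faiss.IndexFlatL2(embeddings.shape[1])  # exact L2 search
index.add(embeddings)

# Retrieve the 2 nearest memories for a query
query = model.encode(["How much will the trip cost?"]).astype("float32")
distances, ids = index.search(query, 2)
print([texts[i] for i in ids[0]])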

Key Benefit: Context Manager Light automatically handles the complexity of memory management, allowing you to focus on building your AI applications while maintaining optimal context for your language models.

🚀 Coming Soon: Full Library Release

Context Manager Light is just the beginning! We're preparing to release the full Context Manager library with advanced features including:

  • Advanced Context Reranker: One of our top-performing rerankers for enhanced context relevance
  • Multi-Modal Support: Handle text, images, and structured data
  • Enterprise Features: Advanced security, scalability, and monitoring
  • Cloud Integration: Seamless deployment and management

Stay tuned! The full release is just around the corner.

Quick Start

Installation

pip install artiik

Basic Usage

from artiik import ContextManager

# Initialize with default settings
cm = ContextManager()

# Your agent workflow
user_input = "Can you help me plan a 10-day trip to Japan?"
context = cm.build_context(user_input)
response = call_llm(context)  # Your LLM call
cm.observe(user_input, response)
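
Here call_llm stands in for your own completion function. A throwaway stub for exercising the memory loop without any API calls might be:

def call_llm(context: str) -> str:
    # Stand-in for a real LLM call; returns a fixed reply for testing
    return f"(stub reply to {len(context)} chars of context)"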

Installation

From PyPI

pip install artiik

From Source

git clone https://github.com/BoualamHamza/Context-Manager.git
cd Context-Manager
pip install -e .

Dependencies

The library automatically installs these dependencies:

  • openai >= 1.0.0
  • anthropic >= 0.7.0
  • faiss-cpu >= 1.7.4
  • sentence-transformers >= 2.2.0
  • pydantic >= 2.0.0
  • tiktoken >= 0.5.0

Basic Usage

Simple Integration

from artiik import ContextManager
from openai import OpenAI

cm = ContextManager()
client = OpenAI(api_key="your-api-key")  # client interface for openai >= 1.0.0

def simple_agent(user_input: str) -> str:
    context = cm.build_context(user_input)
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": context}],
        max_tokens=500,
    )
    assistant_response = response.choices[0].message.content
    cm.observe(user_input, assistant_response)
    return assistant_response
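
Because observe() runs on every turn, repeated calls to simple_agent share memory automatically. For example (assuming a valid API key):

print(simple_agent("I'm planning a 10-day trip to Japan."))
print(simple_agent("What did I say I was planning?"))  # earlier turn is in context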

Memory Querying

from artiik import ContextManager

cm = ContextManager()

# Add conversation history
conversation = [
    ("I'm planning a trip to Japan", "That sounds exciting!"),
    ("I want to visit Tokyo and Kyoto", "Great choices!"),
    ("What's the best time to visit?", "Spring for cherry blossoms!"),
    ("How much should I budget?", "Around $200-300 per day.")
]

for user_input, response in conversation:
    cm.observe(user_input, response)

# Query memory
results = cm.query_memory("Japan budget", k=3)
for text, score in results:
    print(f"Score {score:.2f}: {text}")

Features

🔧 Drop-in Integration

Works with existing agents without architecture changes

🧠 Intelligent Memory

Automatic short-term and long-term memory management

📝 Hierarchical Summarization

Multi-level conversation summarization

🔍 Semantic Search

Vector-based memory retrieval with FAISS

📥 External Indexing

Ingest files and directories into long-term memory

💰 Token Optimization

Smart context assembly within budget constraints

🔄 Multi-LLM Support

OpenAI, Anthropic, and extensible adapters

📊 Debug Tools

Context building visualization and monitoring

Configuration

Custom Configuration

from artiik import Config, ContextManager, MemoryConfig, LLMConfig

# Custom configuration
config = Config(
    memory=MemoryConfig(
        stm_capacity=8000,          # Short-term memory tokens
        chunk_size=2000,            # Summarization chunk size
        recent_k=5,                 # Recent turns in context
        ltm_hits_k=7,               # Long-term memory results
        prompt_token_budget=12000,  # Final context limit
    ),
    llm=LLMConfig(
        provider="openai",
        model="gpt-4",
        api_key="your-api-key"
    )
)

cm = ContextManager(config)
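
For anything beyond a demo, avoid hardcoding the key; one common variation is to read it from the environment with the same configuration objects:

import os

config = Config(
    llm=LLMConfig(
        provider="openai",
        model="gpt-4",
        api_key=os.environ["OPENAI_API_KEY"],  # avoids committing secrets
    )
)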

Examples

Indexing External Data

from artiik import ContextManager

cm = ContextManager()

# Ingest a single file
chunks = cm.ingest_file("docs/README.md", importance=0.8)
print(f"Ingested {chunks} chunks from README.md")

# Ingest a directory
total = cm.ingest_directory(
    "./my_repo",
    file_types=[".py", ".md"],
    recursive=True,
    importance=0.7,
)
print(f"Total chunks ingested: {total}")

# Now you can ask questions about your indexed data
context = cm.build_context("Where is authentication handled?")

Memory Statistics

from artiik import ContextManager

cm = ContextManager()

# Add some conversation
for i in range(5):
    cm.observe(f"User {i}", f"Assistant {i}")

# Get memory statistics
stats = cm.get_stats()

print("Memory Statistics:")
print(f"STM turns: {stats['short_term_memory']['num_turns']}")
print(f"STM tokens: {stats['short_term_memory']['current_tokens']}")
print(f"STM utilization: {stats['short_term_memory']['utilization']:.2%}")
print(f"LTM entries: {stats['long_term_memory']['num_entries']}")
print(f"LTM index size: {stats['long_term_memory']['index_size']}")

API Reference

ContextManager

The main class for managing context and memory.

Constructor

ContextManager(
    config: Optional[Config] = None,
    session_id: Optional[str] = None,
    task_id: Optional[str] = None,
    allow_cross_session: bool = False,
    allow_cross_task: bool = False,
)
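
A short sketch of scoped managers, using only the parameters in the signature above (the sharing behavior is inferred from the flag names):

# Managers with different sessions keep isolated memories
cm_a = ContextManager(session_id="user-a")
cm_b = ContextManager(session_id="user-b")

# Opt in to retrieving memories written by other sessions
cm_shared = ContextManager(session_id="user-a", allow_cross_session=True)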

Key Methods

  • observe(user_input: str, assistant_response: str) - Store a conversation turn
  • build_context(user_input: str) -> str - Build optimized context for LLM
  • query_memory(query: str, k: int = 5) -> List[Tuple[str, float]] - Query memory for relevant information
  • ingest_file(file_path: str, importance: float = 0.5) -> int - Ingest a file into long-term memory
  • get_stats() -> Dict[str, Any] - Get memory statistics