We're excited to announce that the full Context Manager library will be released soon, featuring one of our top context rerankers for enhanced AI performance.
A modular, plug-and-play memory and context management layer for AI agents
Context Manager Light is a library that gives AI agents intelligent memory and context management. It adds memory to your existing AI applications without requiring architectural changes.
Short-term memory: manages recent conversation turns with automatic token-aware eviction and fast access for immediate context.
Long-term memory: vector-based semantic storage using FAISS, with hierarchical summaries and persistence across sessions.
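Conceptually, the two tiers behave like a token-budgeted buffer sitting in front of a vector index: turns evicted from the buffer are embedded and demoted to the index. A minimal sketch of that idea, not the library's internals (the embedding function and FAISS index type here are illustrative):

from collections import deque
import faiss
import numpy as np

DIM, STM_TOKEN_BUDGET = 384, 8000

stm = deque()                  # recent turns, newest last
stm_tokens = 0
ltm = faiss.IndexFlatL2(DIM)   # long-term vector store
ltm_texts = []

def embed(text: str) -> np.ndarray:
    # stand-in for a real sentence-embedding model
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    return rng.random((1, DIM), dtype=np.float32)

def observe(turn: str):
    global stm_tokens
    stm.append(turn)
    stm_tokens += len(turn.split())        # crude token count
    while stm_tokens > STM_TOKEN_BUDGET:   # token-aware eviction
        evicted = stm.popleft()
        stm_tokens -= len(evicted.split())
        ltm.add(embed(evicted))            # demoted turns become searchable
        ltm_texts.append(evicted)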
Key Benefit: Context Manager Light automatically handles the complexity of memory management, allowing you to focus on building your AI applications while maintaining optimal context for your language models.
Context Manager Light is just the beginning! We're preparing to release the full Context Manager library with advanced features, including the context reranker mentioned above.
Stay tuned! The full release is just around the corner.
from artiik import ContextManager
# Initialize with default settings
cm = ContextManager()
# Your agent workflow
user_input = "Can you help me plan a 10-day trip to Japan?"
context = cm.build_context(user_input)  # assemble prompt-ready context
response = call_llm(context)  # your LLM call
cm.observe(user_input, response)  # record the turn so memory stays current
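Those three calls are the entire agent loop: build context, call your model, record the turn. A sketch of a multi-turn loop (call_llm stands in for any model call, as above):

def run_turn(user_input: str) -> str:
    context = cm.build_context(user_input)  # recent turns + relevant memories
    response = call_llm(context)
    cm.observe(user_input, response)  # keep memory in sync with the conversation
    return response

for msg in ["Book flights for day 1", "What did I ask you first?"]:
    print(run_turn(msg))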
git clone https://github.com/BoualamHamza/Context-Manager.git
cd Context-Manager
pip install -e .
Installing in editable mode also pulls in the required dependencies automatically, including FAISS for vector storage.
from artiik import ContextManager
import openai
cm = ContextManager()
openai.api_key = "your-api-key"
def simple_agent(user_input: str) -> str:
    context = cm.build_context(user_input)
    # openai.ChatCompletion targets the pre-1.0 OpenAI SDK (openai<1.0)
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": context}],
        max_tokens=500
    )
    assistant_response = response.choices[0].message.content
    cm.observe(user_input, assistant_response)
    return assistant_response
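If you're on the openai>=1.0 SDK, the equivalent agent looks like this (a sketch; the ContextManager calls are unchanged):

from artiik import ContextManager
from openai import OpenAI

cm = ContextManager()
client = OpenAI(api_key="your-api-key")

def simple_agent(user_input: str) -> str:
    context = cm.build_context(user_input)
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": context}],
        max_tokens=500,
    )
    assistant_response = response.choices[0].message.content
    cm.observe(user_input, assistant_response)
    return assistant_response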
from artiik import ContextManager
cm = ContextManager()
# Add conversation history
conversation = [
    ("I'm planning a trip to Japan", "That sounds exciting!"),
    ("I want to visit Tokyo and Kyoto", "Great choices!"),
    ("What's the best time to visit?", "Spring for cherry blossoms!"),
    ("How much should I budget?", "Around $200-300 per day."),
]
for user_input, response in conversation:
    cm.observe(user_input, response)

# Query memory
results = cm.query_memory("Japan budget", k=3)
for text, score in results:
    print(f"Score {score:.2f}: {text}")
Works with existing agents without architecture changes
Automatic short-term and long-term memory management
Multi-level conversation summarization
Vector-based memory retrieval with FAISS
Ingest files and directories into long-term memory
Smart context assembly within budget constraints
OpenAI, Anthropic, and extensible adapters (see the Anthropic sketch after this list)
Context building visualization and monitoring
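Because the manager only hands your agent a context string and records turns afterward, switching providers doesn't touch the memory layer. A sketch of the same agent against Anthropic's SDK (the model name and token limit are illustrative):

from artiik import ContextManager
import anthropic

cm = ContextManager()
client = anthropic.Anthropic(api_key="your-api-key")

def simple_agent(user_input: str) -> str:
    context = cm.build_context(user_input)
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=500,
        messages=[{"role": "user", "content": context}],
    )
    assistant_response = response.content[0].text
    cm.observe(user_input, assistant_response)
    return assistant_response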
from artiik import Config, MemoryConfig, LLMConfig, ContextManager

# Custom configuration
config = Config(
    memory=MemoryConfig(
        stm_capacity=8000,          # Short-term memory tokens
        chunk_size=2000,            # Summarization chunk size
        recent_k=5,                 # Recent turns in context
        ltm_hits_k=7,               # Long-term memory results
        prompt_token_budget=12000,  # Final context limit
    ),
    llm=LLMConfig(
        provider="openai",
        model="gpt-4",
        api_key="your-api-key"
    )
)
cm = ContextManager(config)
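To sanity-check that assembled contexts respect prompt_token_budget, you can measure them with a tokenizer. A sketch using tiktoken (an extra dependency for this check, not something the library requires):

import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")
context = cm.build_context("Summarize our discussion so far")
print(f"Context tokens: {len(enc.encode(context))} (budget: 12000)")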
from artiik import ContextManager
cm = ContextManager()
# Ingest a single file
chunks = cm.ingest_file("docs/README.md", importance=0.8)
print(f"Ingested {chunks} chunks from README.md")
# Ingest a directory
total = cm.ingest_directory(
    "./my_repo",
    file_types=[".py", ".md"],
    recursive=True,
    importance=0.7,
)
print(f"Total chunks ingested: {total}")
# Now you can ask questions about your indexed data
context = cm.build_context("Where is authentication handled?")
from artiik import ContextManager
cm = ContextManager()
# Add some conversation
for i in range(5):
    cm.observe(f"User {i}", f"Assistant {i}")
# Get memory statistics
stats = cm.get_stats()
print("Memory Statistics:")
print(f"STM turns: {stats['short_term_memory']['num_turns']}")
print(f"STM tokens: {stats['short_term_memory']['current_tokens']}")
print(f"STM utilization: {stats['short_term_memory']['utilization']:.2%}")
print(f"LTM entries: {stats['long_term_memory']['num_entries']}")
print(f"LTM index size: {stats['long_term_memory']['index_size']}")
The main class for managing context and memory.
ContextManager(
    config: Optional[Config] = None,
    session_id: Optional[str] = None,
    task_id: Optional[str] = None,
    allow_cross_session: bool = False,
    allow_cross_task: bool = False,
)
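The session and task identifiers scope what memory a manager can see, and the cross-access flags default to isolation. A sketch built only from the signature above (the isolation behavior is inferred from the flag names):

from artiik import ContextManager

# Each session gets its own memory scope
support_cm = ContextManager(session_id="user-42-support")
sales_cm = ContextManager(session_id="user-42-sales")

support_cm.observe("My invoice is wrong", "Sorry about that, let me check.")

# With allow_cross_session=False (the default), the sales session
# should not see the support conversation
print(sales_cm.query_memory("invoice", k=3))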