# Creating MTLLM Plugins: A Beginner's Guide
This guide walks you through creating your own plugins for MTLLM (Multi-Modal Large Language Model), the plugin system behind Jaclang's `with_llm` feature.
## Understanding the Plugin System
MTLLM uses a plugin architecture based on Pluggy, the same plugin system used by pytest. Plugins allow you to extend or modify how MTLLM handles LLM calls in Jaclang programs.
### How Plugins Work
When you use Jaclang's `by llm()` syntax, the runtime system looks for registered plugins that implement the `call_llm` hook. This allows you to:
- Implement custom LLM providers
- Add preprocessing/postprocessing logic
- Implement caching mechanisms
- Add logging or monitoring
- Create mock implementations for testing
## Plugin Architecture Overview
The plugin system consists of three main components:
- Hook Specifications: Define the interface that plugins must implement
- Hook Implementations: Your plugin code that implements the hooks
- Plugin Registration: How plugins are discovered and loaded
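Pluggy's dispatch can be pictured with a simplified pure-Python model: each registered plugin offers an implementation, and for a "firstresult" hook the first implementation to return a non-`None` value wins, with later registrations taking priority. This sketch only illustrates the mechanism; real MTLLM plugins go through Pluggy itself:

```python
from typing import Callable


class MiniHookRegistry:
    """Simplified picture of Pluggy's firstresult hook dispatch."""

    def __init__(self) -> None:
        self._impls: list[Callable] = []

    def register(self, impl: Callable) -> None:
        # Later registrations take priority, mirroring Pluggy's LIFO call order
        self._impls.insert(0, impl)

    def call_firstresult(self, **kwargs: object) -> object:
        for impl in self._impls:
            result = impl(**kwargs)
            if result is not None:
                return result  # first non-None answer wins
        return None
```

Returning `None` from an implementation means "I decline; try the next plugin", which is why a plugin that does not want to handle a call should delegate rather than silently return nothing.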
## Creating Your First Plugin
### Step 1: Set Up Your Plugin Package
Create a new Python package for your plugin:
```
my-mtllm-plugin/
├── pyproject.toml
├── README.md
└── my_mtllm_plugin/
    ├── __init__.py
    └── plugin.py
```
### Step 2: Define Your Plugin Class
Create your plugin implementation in `my_mtllm_plugin/plugin.py`:
"""Custom MTLLM Plugin."""
from typing import Callable
from jaclang.runtimelib.machine import hookimpl
from mtllm.llm import Model
class MyMtllmMachine:
"""Custom MTLLM Plugin Implementation."""
@staticmethod
@hookimpl
def call_llm(
model: Model, caller: Callable, args: dict[str | int, object]
) -> object:
"""Custom LLM call implementation."""
# Your custom logic here
print(f"Custom plugin intercepted call to: {caller.__name__}")
print(f"Arguments: {args}")
# You can either:
# 1. Modify the call and delegate to the original model
result = model.invoke(caller, args)
# 2. Or implement completely custom logic
# result = your_custom_llm_logic(caller, args)
print(f"Result: {result}")
return result
### Step 3: Configure Package Registration
In your `pyproject.toml`, register your plugin using entry points:
```toml
[tool.poetry]
name = "my-mtllm-plugin"
version = "0.1.0"
description = "My custom MTLLM plugin"
authors = ["Your Name <your.email@example.com>"]

[tool.poetry.dependencies]
python = "^3.11"
mtllm = "*"
jaclang = "*"

[tool.poetry.plugins."jac"]
my-mtllm-plugin = "my_mtllm_plugin.plugin:MyMtllmMachine"

[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
```
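If your project uses standard PEP 621 metadata (for example with setuptools) instead of Poetry, the equivalent registration would look roughly like this sketch, assuming the same package layout:

```toml
[project]
name = "my-mtllm-plugin"
version = "0.1.0"
dependencies = ["mtllm", "jaclang"]

[project.entry-points."jac"]
my-mtllm-plugin = "my_mtllm_plugin.plugin:MyMtllmMachine"
```

Either way, the entry-point group must be `"jac"` for the plugin to be discovered.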
### Step 4: Install and Test Your Plugin
1. Install your plugin in development mode.
2. Create a test Jaclang file to verify your plugin works.
3. Run your test.
## Advanced Plugin Examples
### Example 1: Caching Plugin
"""Caching MTLLM Plugin."""
import hashlib
import json
from typing import Callable, Any
from jaclang.runtimelib.machine import hookimpl
from mtllm.llm import Model
class CachingMtllmMachine:
"""Plugin that caches LLM responses."""
_cache: dict[str, Any] = {}
@staticmethod
@hookimpl
def call_llm(
model: Model, caller: Callable, args: dict[str | int, object]
) -> object:
"""Cache LLM responses."""
# Create cache key from function and arguments
cache_key = hashlib.md5(
json.dumps({
"function": caller.__name__,
"args": str(args),
"model": model.model_name
}, sort_keys=True).encode()
).hexdigest()
# Check cache first
if cache_key in CachingMtllmMachine._cache:
print(f"Cache hit for {caller.__name__}")
return CachingMtllmMachine._cache[cache_key]
# Call original implementation
result = model.invoke(caller, args)
# Store in cache
CachingMtllmMachine._cache[cache_key] = result
print(f"Cached result for {caller.__name__}")
return result
### Example 2: Logging Plugin
"""Logging MTLLM Plugin."""
import time
from typing import Callable
from jaclang.runtimelib.machine import hookimpl
from mtllm.llm import Model
class LoggingMtllmMachine:
"""Plugin that logs all LLM calls."""
@staticmethod
@hookimpl
def call_llm(
model: Model, caller: Callable, args: dict[str | int, object]
) -> object:
"""Log LLM calls with timing information."""
start_time = time.time()
print(f"[LLM CALL] Starting: {caller.__name__}")
print(f"[LLM CALL] Model: {model.model_name}")
print(f"[LLM CALL] Args: {args}")
try:
result = model.invoke(caller, args)
duration = time.time() - start_time
print(f"[LLM CALL] Completed: {caller.__name__} in {duration:.2f}s")
print(f"[LLM CALL] Result: {result}")
return result
except Exception as e:
duration = time.time() - start_time
print(f"[LLM CALL] Failed: {caller.__name__} after {duration:.2f}s")
print(f"[LLM CALL] Error: {e}")
raise
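In production you would typically route these messages through Python's `logging` module rather than `print`, so that levels and handlers can be configured externally. A standalone sketch of the same timing pattern (`FakeModel` is an illustrative duck-typed stand-in for an mtllm `Model`, not a real class):

```python
import logging
import time
from typing import Callable

logger = logging.getLogger("mtllm.plugin")


def timed_invoke(model, caller: Callable, args: dict) -> object:
    """Invoke the model with timing, logging success or failure."""
    start = time.perf_counter()
    logger.info("Starting %s on %s", caller.__name__, getattr(model, "model_name", "?"))
    try:
        result = model.invoke(caller, args)
    except Exception:
        # logger.exception records the traceback at ERROR level
        logger.exception("%s failed after %.2fs", caller.__name__, time.perf_counter() - start)
        raise
    logger.info("%s completed in %.2fs", caller.__name__, time.perf_counter() - start)
    return result


class FakeModel:
    """Illustrative stand-in for an mtllm Model."""

    model_name = "fake"

    def invoke(self, caller: Callable, args: dict) -> object:
        return f"ok: {caller.__name__}"
```

Using lazy `%s` formatting (rather than f-strings) avoids building log messages that a disabled log level would discard anyway.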
### Example 3: Custom Model Provider
"""Custom Model Provider Plugin."""
from typing import Callable
from jaclang.runtimelib.machine import hookimpl
from mtllm.llm import Model
class CustomProviderMachine:
"""Plugin that implements a custom model provider."""
@staticmethod
@hookimpl
def call_llm(
model: Model, caller: Callable, args: dict[str | int, object]
) -> object:
"""Handle custom model providers."""
# Check if this is a custom model
if model.model_name.startswith("custom://"):
return CustomProviderMachine._handle_custom_model(
model, caller, args
)
# Delegate to default implementation
return model.invoke(caller, args)
@staticmethod
def _handle_custom_model(
model: Model, caller: Callable, args: dict[str | int, object]
) -> object:
"""Implement custom model logic."""
model_type = model.model_name.replace("custom://", "")
if model_type == "echo":
# Simple echo model for testing
return f"Echo: {list(args.values())[0]}"
elif model_type == "random":
# Random response model
import random
responses = ["Yes", "No", "Maybe", "I don't know"]
return random.choice(responses)
else:
raise ValueError(f"Unknown custom model: {model_type}")
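As the number of custom schemes grows, the if/elif chain is easier to maintain as a handler registry. A sketch of that design (all names here are illustrative, not MTLLM API):

```python
from typing import Callable

# Registry mapping a custom model type to its handler function
_HANDLERS: dict[str, Callable[[dict], object]] = {}


def register_handler(model_type: str) -> Callable:
    """Decorator that registers a handler for one custom model type."""
    def wrap(fn: Callable[[dict], object]) -> Callable[[dict], object]:
        _HANDLERS[model_type] = fn
        return fn
    return wrap


@register_handler("echo")
def _echo(args: dict) -> object:
    return f"Echo: {list(args.values())[0]}"


def handle_custom_model(model_name: str, args: dict) -> object:
    """Route a 'custom://<type>' model name to its registered handler."""
    model_type = model_name.removeprefix("custom://")
    try:
        return _HANDLERS[model_type](args)
    except KeyError:
        raise ValueError(f"Unknown custom model: {model_type}") from None
```

New model types can then be added by decorating a function, without touching the dispatch code.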
## Plugin Hook Reference
### The `call_llm` Hook
The primary hook that all MTLLM plugins implement:
```python
@hookimpl
def call_llm(
    model: Model,
    caller: Callable,
    args: dict[str | int, object],
) -> object:
    """Called when Jaclang executes a 'by llm()' statement.

    Args:
        model: The Model instance with configuration
        caller: The function being called with LLM
        args: Arguments passed to the function

    Returns:
        The result that should be returned to the Jaclang program
    """
```
## Best Practices
### 1. Handle Errors Gracefully
```python
@hookimpl
def call_llm(model: Model, caller: Callable, args: dict[str | int, object]) -> object:
    try:
        return model.invoke(caller, args)
    except Exception as e:
        # Log error and provide fallback
        print(f"LLM call failed: {e}")
        return "Error: Unable to process request"
```
### 2. Preserve Original Functionality
Unless you're completely replacing the LLM functionality, always delegate to the original implementation:
```python
@hookimpl
def call_llm(model: Model, caller: Callable, args: dict[str | int, object]) -> object:
    # Your pre-processing logic
    result = model.invoke(caller, args)  # Delegate to original
    # Your post-processing logic
    return result
```
### 3. Use Configuration
Allow your plugin to be configured:
```python
class ConfigurableMachine:
    def __init__(self) -> None:
        self.config = self._load_config()

    def _load_config(self) -> dict:
        # Load from environment, file, etc.
        return {"enabled": True, "log_level": "INFO"}

    @hookimpl
    def call_llm(self, model: Model, caller: Callable, args: dict[str | int, object]) -> object:
        if not self.config["enabled"]:
            return model.invoke(caller, args)
        # Your plugin logic
        ...
```
### 4. Testing Your Plugin
Create comprehensive tests:
```python
from mtllm.llm import Model

from my_mtllm_plugin.plugin import MyMtllmMachine


def test_plugin():
    machine = MyMtllmMachine()
    model = Model("mockllm", outputs=["test response"])

    def test_function(x: str) -> str:
        """Test function."""
        pass

    result = machine.call_llm(model, test_function, {"x": "test input"})
    assert result == "test response"
```
## Plugin Discovery and Loading
Plugins are automatically discovered and loaded when:

- They're installed as Python packages
- They register the `"jac"` entry point in their `pyproject.toml`
- Jaclang is imported or run
The discovery itself happens in `jaclang/__init__.py`.
## Debugging Plugins
### Enable Debug Logging
Set environment variables to see plugin loading.
### Verify Plugin Registration
You can check if your plugin is loaded:
```python
from jaclang.runtimelib.machine import plugin_manager

# List all registered plugins
for plugin in plugin_manager.get_plugins():
    print(f"Loaded plugin: {plugin}")
```
## Common Pitfalls
- Not using the `@hookimpl` decorator: your methods won't be recognized as hook implementations
- Incorrect entry point name: it must be `"jac"` to be discovered
- Wrong hook signature: it must match exactly: `call_llm(model, caller, args)`
- Forgetting to delegate: if you don't call `model.invoke()`, the original functionality is lost
## Conclusion
Creating MTLLM plugins allows you to extend Jaclang's LLM capabilities in powerful ways. Whether you're adding caching, logging, custom providers, or other functionality, the plugin system provides a clean and extensible way to enhance the LLM experience.
Remember to:

- Follow the hook specification exactly
- Test thoroughly with different scenarios
- Document your plugin's functionality
- Consider backward compatibility
- Handle errors gracefully
For more examples and advanced use cases, check out the official MTLLM plugin implementation.