
LLM API Reference

Auto-generated API documentation for the LLM module.

iso8583sim.llm

LLM module public interface.

llm

LLM-powered features for ISO 8583 message handling.

This module provides AI-powered tools for explaining and generating ISO 8583 messages using various LLM providers.

Features:

- MessageExplainer: Explain messages in plain English
- MessageGenerator: Generate messages from natural language

Supported Providers:

- Anthropic (Claude)
- OpenAI (GPT)
- Google (Gemini)
- Ollama (local models)

Example

from iso8583sim.llm import MessageExplainer, MessageGenerator

# Explain a message
explainer = MessageExplainer()  # Auto-detects provider
print(explainer.explain(message))

# Generate a message
generator = MessageGenerator(provider="anthropic")
message = generator.generate("$100 VISA purchase")

Installation

# Install with all LLM providers
pip install iso8583sim[llm]

# Or install specific providers
pip install iso8583sim[anthropic]
pip install iso8583sim[openai]
pip install iso8583sim[google]
pip install iso8583sim[ollama]

GenerationError

Bases: LLMError

Raised when message generation fails.

LLMError

Bases: Exception

Base exception for LLM-related errors.
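Because GenerationError, ProviderConfigError, and ProviderNotAvailableError all derive from LLMError, callers can treat it as a single umbrella. A minimal handling sketch (import paths as documented in this module):

try:
    message = MessageGenerator().generate("$25 Mastercard refund")
except GenerationError as e:
    # Generation-specific failure (unparseable JSON, failed validation)
    print(f"Could not generate message: {e}")
except LLMError as e:
    # Any other provider or API failure
    print(f"LLM call failed: {e}")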

LLMProvider

Bases: ABC

Abstract base class for LLM providers.

All LLM providers must implement this interface to be compatible with MessageExplainer and MessageGenerator.

Example

class MyProvider(LLMProvider):
    def complete(self, prompt, system=None):
        return "response"

    @property
    def name(self):
        return "my-provider"
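A provider built this way can be handed straight to the higher-level tools. A minimal offline sketch (EchoProvider and its canned reply are illustrative, not part of the library):

class EchoProvider(LLMProvider):
    """Toy provider that echoes the prompt -- handy for tests without an API key."""

    def complete(self, prompt: str, system: str | None = None) -> str:
        return f"(echo) {prompt[:60]}"

    @property
    def name(self) -> str:
        return "echo"

explainer = MessageExplainer(provider=EchoProvider())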

name abstractmethod property

name: str

Return the provider name for logging/display.

model property

model: str

Return the model name being used.

complete abstractmethod

complete(prompt: str, system: str | None = None) -> str

Send a prompt to the LLM and return the response text.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `prompt` | `str` | The user prompt to send | *required* |
| `system` | `str \| None` | Optional system prompt for context | `None` |

Returns:

| Type | Description |
|------|-------------|
| `str` | The LLM's response text |

Raises:

| Type | Description |
|------|-------------|
| `LLMError` | If the API call fails |

Source code in iso8583sim/llm/base.py
@abstractmethod
def complete(self, prompt: str, system: str | None = None) -> str:
    """Send a prompt to the LLM and return the response text.

    Args:
        prompt: The user prompt to send
        system: Optional system prompt for context

    Returns:
        The LLM's response text

    Raises:
        LLMError: If the API call fails
    """
    pass

complete_with_metadata

complete_with_metadata(prompt: str, system: str | None = None) -> LLMResponse

Send a prompt and return response with metadata.

Default implementation wraps complete(). Providers can override for more detailed metadata.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `prompt` | `str` | The user prompt to send | *required* |
| `system` | `str \| None` | Optional system prompt for context | `None` |

Returns:

| Type | Description |
|------|-------------|
| `LLMResponse` | `LLMResponse` with content and metadata |

Source code in iso8583sim/llm/base.py
def complete_with_metadata(self, prompt: str, system: str | None = None) -> LLMResponse:
    """Send a prompt and return response with metadata.

    Default implementation wraps complete(). Providers can override
    for more detailed metadata.

    Args:
        prompt: The user prompt to send
        system: Optional system prompt for context

    Returns:
        LLMResponse with content and metadata
    """
    content = self.complete(prompt, system)
    return LLMResponse(
        content=content,
        model=self.model,
        provider=self.name,
    )

LLMResponse dataclass

LLMResponse(content: str, model: str, provider: str, usage: dict[str, int] | None = None)

Response from an LLM provider.
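A short sketch of reading metadata off a response (note that usage may be None when a provider does not report token counts):

resp = provider.complete_with_metadata(
    "Explain MTI 0110",
    system="You are an ISO 8583 expert.",
)
print(resp.provider, resp.model)
if resp.usage:
    print(resp.usage["input_tokens"], resp.usage["output_tokens"])
print(resp.content)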

ProviderConfigError

Bases: LLMError

Raised when provider configuration is invalid.

ProviderNotAvailableError

ProviderNotAvailableError(provider: str, package: str)

Bases: LLMError

Raised when a provider's dependencies are not installed.

Source code in iso8583sim/llm/base.py
def __init__(self, provider: str, package: str):
    self.provider = provider
    self.package = package
    super().__init__(
        f"{provider} provider requires '{package}' package. "
        f"Install with: pip install iso8583sim[{provider.lower()}]"
    )

MessageExplainer

MessageExplainer(provider: LLMProvider | str | None = None)

Explains ISO 8583 messages in plain English using an LLM.

This class uses an LLM provider to generate human-readable explanations of ISO 8583 messages, making them easier to understand for developers and analysts.

Example

explainer = MessageExplainer()  # Auto-detects provider
message = parser.parse("0100...")
print(explainer.explain(message))
# "This is a VISA authorization request for $100.00..."

# Or explain a raw message string
print(explainer.explain("0100702406C120E09000..."))

# Use a specific provider
explainer = MessageExplainer(provider="anthropic")

Initialize the MessageExplainer.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `provider` | `LLMProvider \| str \| None` | LLM provider to use. Can be an `LLMProvider` instance (used directly), a provider name (`'anthropic'`, `'openai'`, `'google'`, `'ollama'`), or `None` to auto-detect an available provider. | `None` |
Source code in iso8583sim/llm/explainer.py
def __init__(self, provider: LLMProvider | str | None = None):
    """Initialize the MessageExplainer.

    Args:
        provider: LLM provider to use. Can be:
            - LLMProvider instance: Use directly
            - str: Provider name ('anthropic', 'openai', 'google', 'ollama')
            - None: Auto-detect available provider
    """
    if isinstance(provider, LLMProvider):
        self._provider = provider
    elif isinstance(provider, str):
        self._provider = get_provider(provider)
    else:
        self._provider = get_provider()

    self._parser = ISO8583Parser()

provider property

provider: LLMProvider

Return the LLM provider being used.

explain

explain(message: ISO8583Message | str, verbose: bool = False) -> str

Explain an ISO 8583 message in plain English.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `message` | `ISO8583Message \| str` | `ISO8583Message` object or raw message string to explain | *required* |
| `verbose` | `bool` | If True, include more technical details | `False` |

Returns:

| Type | Description |
|------|-------------|
| `str` | Human-readable explanation of the message |

Source code in iso8583sim/llm/explainer.py
def explain(self, message: ISO8583Message | str, verbose: bool = False) -> str:
    """Explain an ISO 8583 message in plain English.

    Args:
        message: ISO8583Message object or raw message string to explain
        verbose: If True, include more technical details

    Returns:
        Human-readable explanation of the message
    """
    # Parse if string
    if isinstance(message, str):
        parsed = self._parser.parse(message)
        raw_message = message
    else:
        parsed = message
        raw_message = message.raw_message

    # Format prompt
    network = parsed.network.value if parsed.network else None
    prompt = format_explainer_prompt(
        mti=parsed.mti,
        fields=parsed.fields,
        network=network,
        raw_message=raw_message,
    )

    if verbose:
        prompt += "\n\nPlease include additional technical details about field formats and validation."

    # Get explanation from LLM
    return self._provider.complete(prompt, system=ISO8583_SYSTEM_PROMPT)

explain_field

explain_field(field_number: int, value: str) -> str

Explain a specific ISO 8583 field value.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `field_number` | `int` | The field number (e.g., 2, 39, 55) | *required* |
| `value` | `str` | The field value to explain | *required* |

Returns:

| Type | Description |
|------|-------------|
| `str` | Human-readable explanation of the field and its value |

Source code in iso8583sim/llm/explainer.py
def explain_field(self, field_number: int, value: str) -> str:
    """Explain a specific ISO 8583 field value.

    Args:
        field_number: The field number (e.g., 2, 39, 55)
        value: The field value to explain

    Returns:
        Human-readable explanation of the field and its value
    """
    # Get field definition for context
    field_def = get_field_definition(field_number)
    field_name = field_def.description if field_def else f"Field {field_number}"

    prompt = format_field_explainer_prompt(
        field_number=field_number,
        value=value,
        field_name=field_name,
    )

    return self._provider.complete(prompt, system=ISO8583_SYSTEM_PROMPT)
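For example, to decode a response code value (the code shown is illustrative):

print(explainer.explain_field(39, "05"))  # DE 39 value "05" is "do not honor"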

explain_error

explain_error(error: str, message: ISO8583Message | str) -> str

Explain a validation or parsing error in context.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `error` | `str` | The error message to explain | *required* |
| `message` | `ISO8583Message \| str` | The message that caused the error | *required* |

Returns:

| Type | Description |
|------|-------------|
| `str` | Human-readable explanation of the error and how to fix it |

Source code in iso8583sim/llm/explainer.py
def explain_error(self, error: str, message: ISO8583Message | str) -> str:
    """Explain a validation or parsing error in context.

    Args:
        error: The error message to explain
        message: The message that caused the error

    Returns:
        Human-readable explanation of the error and how to fix it
    """
    # Parse if string
    if isinstance(message, str):
        try:
            parsed = self._parser.parse(message)
        except Exception:
            # If parsing fails, create minimal context
            parsed = ISO8583Message(mti="????", fields={})
    else:
        parsed = message

    prompt = format_error_explainer_prompt(
        error=error,
        mti=parsed.mti,
        fields=parsed.fields,
    )

    return self._provider.complete(prompt, system=ISO8583_SYSTEM_PROMPT)

explain_response_code

explain_response_code(code: str) -> str

Explain an ISO 8583 response code.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `code` | `str` | The response code (e.g., "00", "51", "05") | *required* |

Returns:

| Type | Description |
|------|-------------|
| `str` | Human-readable explanation of the response code |

Source code in iso8583sim/llm/explainer.py
    def explain_response_code(self, code: str) -> str:
        """Explain an ISO 8583 response code.

        Args:
            code: The response code (e.g., "00", "51", "05")

        Returns:
            Human-readable explanation of the response code
        """
        prompt = f"""Explain ISO 8583 response code "{code}".

Include:
1. What this code means
2. Common scenarios when this response is returned
3. Recommended actions for the merchant/cardholder"""

        return self._provider.complete(prompt, system=ISO8583_SYSTEM_PROMPT)
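A quick sketch of the error- and code-level helpers (the error text and codes are illustrative):

explainer = MessageExplainer()
print(explainer.explain_error("Field 4 has invalid length", "0100..."))
print(explainer.explain_response_code("51"))  # commonly "insufficient funds"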

MessageGenerator

MessageGenerator(provider: LLMProvider | str | None = None)

Generates ISO 8583 messages from natural language descriptions.

This class uses an LLM to interpret natural language descriptions and generate valid ISO 8583 messages.

Example

generator = MessageGenerator()
message = generator.generate("$100 VISA purchase at a gas station")
print(message.mti)        # "0100"
print(message.fields[4])  # "000000010000"

# Generate without validation
message = generator.generate("refund $50", validate=False)

Initialize the MessageGenerator.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `provider` | `LLMProvider \| str \| None` | LLM provider to use. Can be an `LLMProvider` instance (used directly), a provider name (`'anthropic'`, `'openai'`, `'google'`, `'ollama'`), or `None` to auto-detect an available provider. | `None` |
Source code in iso8583sim/llm/generator.py
def __init__(self, provider: LLMProvider | str | None = None):
    """Initialize the MessageGenerator.

    Args:
        provider: LLM provider to use. Can be:
            - LLMProvider instance: Use directly
            - str: Provider name ('anthropic', 'openai', 'google', 'ollama')
            - None: Auto-detect available provider
    """
    if isinstance(provider, LLMProvider):
        self._provider = provider
    elif isinstance(provider, str):
        self._provider = get_provider(provider)
    else:
        self._provider = get_provider()

    self._builder = ISO8583Builder()
    self._validator = ISO8583Validator()

provider property

provider: LLMProvider

Return the LLM provider being used.

generate

generate(description: str, validate: bool = True) -> ISO8583Message

Generate an ISO 8583 message from a natural language description.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `description` | `str` | Natural language description of the desired message (e.g., "$100 VISA purchase at a gas station in NYC") | *required* |
| `validate` | `bool` | Whether to validate the generated message | `True` |

Returns:

| Type | Description |
|------|-------------|
| `ISO8583Message` | Valid `ISO8583Message` object |

Raises:

| Type | Description |
|------|-------------|
| `GenerationError` | If the message cannot be generated or validated |

Source code in iso8583sim/llm/generator.py
def generate(self, description: str, validate: bool = True) -> ISO8583Message:
    """Generate an ISO 8583 message from a natural language description.

    Args:
        description: Natural language description of the desired message
                    (e.g., "$100 VISA purchase at a gas station in NYC")
        validate: Whether to validate the generated message

    Returns:
        Valid ISO8583Message object

    Raises:
        GenerationError: If the message cannot be generated or validated
    """
    prompt = format_generator_prompt(description)

    # Get response from LLM
    response = self._provider.complete(prompt, system=ISO8583_SYSTEM_PROMPT)

    # Parse JSON from response
    try:
        message_data = self._extract_json(response)
    except json.JSONDecodeError as e:
        raise GenerationError(f"Failed to parse LLM response as JSON: {e}\nResponse: {response}") from e

    # Validate structure
    if "mti" not in message_data or "fields" not in message_data:
        raise GenerationError(f"Invalid message structure. Expected 'mti' and 'fields'. Got: {message_data.keys()}")

    # Create message
    try:
        # Convert field keys to integers
        fields = {}
        for key, value in message_data["fields"].items():
            field_num = int(key)
            fields[field_num] = str(value)

        # Ensure field 0 matches MTI
        fields[0] = message_data["mti"]

        message = ISO8583Message(
            mti=message_data["mti"],
            fields=fields,
        )
    except (KeyError, ValueError, TypeError) as e:
        raise GenerationError(f"Failed to create message from LLM response: {e}") from e

    # Validate if requested
    if validate:
        errors = self._validator.validate_message(message)
        if errors:
            # Try to fix common issues
            message = self._fix_common_issues(message, errors)
            errors = self._validator.validate_message(message)
            if errors:
                raise GenerationError(f"Generated message failed validation: {errors}")

    return message
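When strict validation is too aggressive for a drafting workflow, a caller can fall back to an unvalidated draft and review it manually. A sketch:

generator = MessageGenerator()
try:
    message = generator.generate("$250 Mastercard e-commerce purchase")
except GenerationError:
    # Keep the raw draft for manual review instead of failing outright
    message = generator.generate("$250 Mastercard e-commerce purchase", validate=False)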

suggest_fields

suggest_fields(partial_message: ISO8583Message) -> dict[int, str]

Suggest missing fields for a partial message.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `partial_message` | `ISO8583Message` | Partial `ISO8583Message` with some fields populated | *required* |

Returns:

| Type | Description |
|------|-------------|
| `dict[int, str]` | Dictionary of suggested field values |

Source code in iso8583sim/llm/generator.py
    def suggest_fields(self, partial_message: ISO8583Message) -> dict[int, str]:
        """Suggest missing fields for a partial message.

        Args:
            partial_message: Partial ISO8583Message with some fields populated

        Returns:
            Dictionary of suggested field values
        """
        # Format current fields
        fields_str = "\n".join(f"  F{num}: {value}" for num, value in sorted(partial_message.fields.items()))

        prompt = f"""Analyze this partial ISO 8583 message and suggest values for commonly required missing fields.

**Current Message:**
- MTI: {partial_message.mti}
- Network: {partial_message.network.value if partial_message.network else "Unknown"}
- Fields present:
{fields_str}

Suggest values for missing required fields. Return as JSON:
```json
{{
  "suggested_fields": {{
    "11": "123456",
    ...
  }},
  "reasoning": "explanation"
}}
```"""

        response = self._provider.complete(prompt, system=ISO8583_SYSTEM_PROMPT)

        try:
            data = self._extract_json(response)
            suggested = data.get("suggested_fields", {})
            # Convert keys to integers
            return {int(k): str(v) for k, v in suggested.items()}
        except (json.JSONDecodeError, KeyError, ValueError):
            return {}
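A usage sketch that merges suggestions into a draft without overwriting fields that are already set (field values illustrative):

partial = ISO8583Message(mti="0100", fields={0: "0100", 4: "000000010000"})
for num, value in sorted(generator.suggest_fields(partial).items()):
    partial.fields.setdefault(num, value)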

get_provider

get_provider(name: str | None = None, **kwargs) -> LLMProvider

Get an LLM provider instance.

If name is provided, creates that specific provider. If name is None, auto-detects the first available provider.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `name` | `str \| None` | Provider name (`'anthropic'`, `'openai'`, `'google'`, `'ollama'`) or `None` for auto-detection | `None` |
| `**kwargs` | | Additional arguments passed to the provider constructor | `{}` |

Returns:

| Type | Description |
|------|-------------|
| `LLMProvider` | An initialized `LLMProvider` instance |

Raises:

| Type | Description |
|------|-------------|
| `ProviderConfigError` | If no provider is available or configured |

Example

provider = get_provider()  # Auto-detect
provider = get_provider("anthropic")  # Specific provider
provider = get_provider("openai", model="gpt-4-turbo")

Source code in iso8583sim/llm/providers/__init__.py
def get_provider(name: str | None = None, **kwargs) -> LLMProvider:
    """Get an LLM provider instance.

    If name is provided, creates that specific provider.
    If name is None, auto-detects the first available provider.

    Args:
        name: Provider name ('anthropic', 'openai', 'google', 'ollama')
              or None for auto-detection
        **kwargs: Additional arguments passed to the provider constructor

    Returns:
        An initialized LLMProvider instance

    Raises:
        ProviderConfigError: If no provider is available or configured

    Example:
        >>> provider = get_provider()  # Auto-detect
        >>> provider = get_provider("anthropic")  # Specific provider
        >>> provider = get_provider("openai", model="gpt-4-turbo")
    """
    if name:
        # Specific provider requested
        provider_class, is_available = _import_provider(name.lower())
        if provider_class is None:
            raise ProviderConfigError(
                f"{name} provider package is not installed. Install with: pip install iso8583sim[{name.lower()}]"
            )
        return provider_class(**kwargs)

    # Auto-detect available provider
    for provider_name in _PROVIDER_PRIORITY:
        provider_class, is_available = _import_provider(provider_name)
        if provider_class and is_available:
            return provider_class(**kwargs)

    # No provider available
    raise ProviderConfigError(
        "No LLM provider available. Install one of:\n"
        "  pip install iso8583sim[anthropic]  # Claude\n"
        "  pip install iso8583sim[openai]     # GPT\n"
        "  pip install iso8583sim[google]     # Gemini\n"
        "  pip install iso8583sim[ollama]     # Local\n"
        "  pip install iso8583sim[llm]        # All providers\n"
        "\nThen set the appropriate API key environment variable."
    )

list_available_providers

list_available_providers() -> list[str]

List all available and configured providers.

Returns:

| Type | Description |
|------|-------------|
| `list[str]` | List of provider names that are installed and configured |

Source code in iso8583sim/llm/providers/__init__.py
def list_available_providers() -> list[str]:
    """List all available and configured providers.

    Returns:
        List of provider names that are installed and configured
    """
    available = []
    for name in _PROVIDER_PRIORITY:
        _, is_available = _import_provider(name)
        if is_available:
            available.append(name)
    return available

list_installed_providers

list_installed_providers() -> list[str]

List all installed providers (may not be configured).

Returns:

| Type | Description |
|------|-------------|
| `list[str]` | List of provider names that have their packages installed |

Source code in iso8583sim/llm/providers/__init__.py
def list_installed_providers() -> list[str]:
    """List all installed providers (may not be configured).

    Returns:
        List of provider names that have their packages installed
    """
    installed = []
    for name in _PROVIDER_PRIORITY:
        provider_class, _ = _import_provider(name)
        if provider_class is not None:
            installed.append(name)
    return installed
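A discovery sketch combining the three factory helpers (assuming they are imported from iso8583sim.llm.providers as documented above):

print(list_installed_providers())   # packages present, e.g. ['anthropic', 'ollama']
print(list_available_providers())   # installed AND configured with credentials
if list_available_providers():
    provider = get_provider()       # first match in priority order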

iso8583sim.llm.base

Base classes and exceptions.

base

Base classes for LLM providers.

This module defines the abstract base class for LLM providers, allowing multiple backend implementations (Anthropic, OpenAI, Google, Ollama).

The members of this module (`LLMResponse`, `LLMError`, `ProviderNotAvailableError`, `ProviderConfigError`, `GenerationError`, `LLMProvider`) are re-exported and documented in full under `iso8583sim.llm` above.

iso8583sim.llm.explainer

Message explanation using LLMs.

explainer

Message Explainer using LLM to provide human-readable explanations.

`MessageExplainer` is re-exported and documented in full under `iso8583sim.llm` above.

iso8583sim.llm.generator

Message generation from natural language.

generator

Message Generator using LLM to create ISO 8583 messages from natural language.

`MessageGenerator` is re-exported and documented in full under `iso8583sim.llm` above.

iso8583sim.llm.providers

Provider factory and utilities.

providers

LLM provider factory and auto-detection.

This module provides a factory function to create LLM providers and auto-detect available providers based on installed packages and configured API keys.

`get_provider`, `list_available_providers`, and `list_installed_providers` are re-exported and documented in full under `iso8583sim.llm` above.

iso8583sim.llm.providers.anthropic

Anthropic (Claude) provider.

anthropic

Anthropic (Claude) LLM provider implementation.

AnthropicProvider

AnthropicProvider(api_key: str | None = None, model: str | None = None, max_tokens: int | None = None)

Bases: LLMProvider

LLM provider using Anthropic's Claude API.

Example

provider = AnthropicProvider()  # Uses ANTHROPIC_API_KEY env var
response = provider.complete("Explain ISO 8583")
print(response)

# Or with explicit API key
provider = AnthropicProvider(api_key="sk-ant-...")

Initialize the Anthropic provider.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `api_key` | `str \| None` | Anthropic API key. If not provided, uses `ANTHROPIC_API_KEY` env var. | `None` |
| `model` | `str \| None` | Model to use. Defaults to `claude-sonnet-4-20250514`. | `None` |
| `max_tokens` | `int \| None` | Maximum tokens in response. Defaults to 4096. | `None` |

Raises:

| Type | Description |
|------|-------------|
| `ProviderNotAvailableError` | If the `anthropic` package is not installed. |
| `ProviderConfigError` | If no API key is available. |

Source code in iso8583sim/llm/providers/anthropic.py
def __init__(
    self,
    api_key: str | None = None,
    model: str | None = None,
    max_tokens: int | None = None,
):
    """Initialize the Anthropic provider.

    Args:
        api_key: Anthropic API key. If not provided, uses ANTHROPIC_API_KEY env var.
        model: Model to use. Defaults to claude-sonnet-4-20250514.
        max_tokens: Maximum tokens in response. Defaults to 4096.

    Raises:
        ProviderNotAvailableError: If anthropic package is not installed.
        ProviderConfigError: If no API key is available.
    """
    if not _ANTHROPIC_AVAILABLE:
        raise ProviderNotAvailableError("Anthropic", "anthropic")

    self._api_key = api_key or os.environ.get("ANTHROPIC_API_KEY")
    if not self._api_key:
        raise ProviderConfigError(
            "Anthropic API key not found. Set ANTHROPIC_API_KEY environment variable or pass api_key parameter."
        )

    self._model = model or self.DEFAULT_MODEL
    self._max_tokens = max_tokens or self.MAX_TOKENS
    self._client = anthropic.Anthropic(api_key=self._api_key)

name property

name: str

Return the provider name.

model property

model: str

Return the model name being used.

complete

complete(prompt: str, system: str | None = None) -> str

Send a prompt to Claude and return the response.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `prompt` | `str` | The user prompt to send | *required* |
| `system` | `str \| None` | Optional system prompt for context | `None` |

Returns:

| Type | Description |
|------|-------------|
| `str` | The response text from Claude |

Raises:

| Type | Description |
|------|-------------|
| `LLMError` | If the API call fails |

Source code in iso8583sim/llm/providers/anthropic.py
def complete(self, prompt: str, system: str | None = None) -> str:
    """Send a prompt to Claude and return the response.

    Args:
        prompt: The user prompt to send
        system: Optional system prompt for context

    Returns:
        The response text from Claude

    Raises:
        LLMError: If the API call fails
    """
    try:
        message = self._client.messages.create(
            model=self._model,
            max_tokens=self._max_tokens,
            system=system or "",
            messages=[{"role": "user", "content": prompt}],
        )
        return message.content[0].text
    except anthropic.APIError as e:
        raise LLMError(f"Anthropic API error: {e}") from e
    except Exception as e:
        raise LLMError(f"Unexpected error calling Anthropic: {e}") from e

complete_with_metadata

complete_with_metadata(prompt: str, system: str | None = None) -> LLMResponse

Send a prompt and return response with metadata.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `prompt` | `str` | The user prompt to send | *required* |
| `system` | `str \| None` | Optional system prompt for context | `None` |

Returns:

| Type | Description |
|------|-------------|
| `LLMResponse` | `LLMResponse` with content and usage metadata |

Source code in iso8583sim/llm/providers/anthropic.py
def complete_with_metadata(self, prompt: str, system: str | None = None) -> LLMResponse:
    """Send a prompt and return response with metadata.

    Args:
        prompt: The user prompt to send
        system: Optional system prompt for context

    Returns:
        LLMResponse with content and usage metadata
    """
    try:
        message = self._client.messages.create(
            model=self._model,
            max_tokens=self._max_tokens,
            system=system or "",
            messages=[{"role": "user", "content": prompt}],
        )
        return LLMResponse(
            content=message.content[0].text,
            model=message.model,
            provider=self.name,
            usage={
                "input_tokens": message.usage.input_tokens,
                "output_tokens": message.usage.output_tokens,
            },
        )
    except anthropic.APIError as e:
        raise LLMError(f"Anthropic API error: {e}") from e
    except Exception as e:
        raise LLMError(f"Unexpected error calling Anthropic: {e}") from e

is_available

is_available() -> bool

Check if Anthropic provider is available.

Returns:

| Type | Description |
|------|-------------|
| `bool` | True if the `anthropic` package is installed and the API key is configured. |

Source code in iso8583sim/llm/providers/anthropic.py
def is_available() -> bool:
    """Check if Anthropic provider is available.

    Returns:
        True if anthropic package is installed and API key is configured.
    """
    if not _ANTHROPIC_AVAILABLE:
        return False
    return bool(os.environ.get("ANTHROPIC_API_KEY"))
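Because is_available is a class-level check, it makes a cheap guard before construction; the same pattern applies to the other providers:

if AnthropicProvider.is_available():
    provider = AnthropicProvider()
else:
    print("Install the anthropic extra and set ANTHROPIC_API_KEY first.")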

iso8583sim.llm.providers.openai

OpenAI (GPT) provider.

openai

OpenAI (GPT) LLM provider implementation.

OpenAIProvider

OpenAIProvider(api_key: str | None = None, model: str | None = None, max_tokens: int | None = None)

Bases: LLMProvider

LLM provider using OpenAI's GPT API.

Example

provider = OpenAIProvider()  # Uses OPENAI_API_KEY env var
response = provider.complete("Explain ISO 8583")
print(response)

# Or with explicit API key
provider = OpenAIProvider(api_key="sk-...")

Initialize the OpenAI provider.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `api_key` | `str \| None` | OpenAI API key. If not provided, uses `OPENAI_API_KEY` env var. | `None` |
| `model` | `str \| None` | Model to use. Defaults to `gpt-4o`. | `None` |
| `max_tokens` | `int \| None` | Maximum tokens in response. Defaults to 4096. | `None` |

Raises:

| Type | Description |
|------|-------------|
| `ProviderNotAvailableError` | If the `openai` package is not installed. |
| `ProviderConfigError` | If no API key is available. |

Source code in iso8583sim/llm/providers/openai.py
def __init__(
    self,
    api_key: str | None = None,
    model: str | None = None,
    max_tokens: int | None = None,
):
    """Initialize the OpenAI provider.

    Args:
        api_key: OpenAI API key. If not provided, uses OPENAI_API_KEY env var.
        model: Model to use. Defaults to gpt-4o.
        max_tokens: Maximum tokens in response. Defaults to 4096.

    Raises:
        ProviderNotAvailableError: If openai package is not installed.
        ProviderConfigError: If no API key is available.
    """
    if not _OPENAI_AVAILABLE:
        raise ProviderNotAvailableError("OpenAI", "openai")

    self._api_key = api_key or os.environ.get("OPENAI_API_KEY")
    if not self._api_key:
        raise ProviderConfigError(
            "OpenAI API key not found. Set OPENAI_API_KEY environment variable or pass api_key parameter."
        )

    self._model = model or self.DEFAULT_MODEL
    self._max_tokens = max_tokens or self.MAX_TOKENS
    self._client = openai.OpenAI(api_key=self._api_key)

name property

name: str

Return the provider name.

model property

model: str

Return the model name being used.

complete

complete(prompt: str, system: str | None = None) -> str

Send a prompt to GPT and return the response.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `prompt` | `str` | The user prompt to send | *required* |
| `system` | `str \| None` | Optional system prompt for context | `None` |

Returns:

| Type | Description |
|------|-------------|
| `str` | The response text from GPT |

Raises:

| Type | Description |
|------|-------------|
| `LLMError` | If the API call fails |

Source code in iso8583sim/llm/providers/openai.py
def complete(self, prompt: str, system: str | None = None) -> str:
    """Send a prompt to GPT and return the response.

    Args:
        prompt: The user prompt to send
        system: Optional system prompt for context

    Returns:
        The response text from GPT

    Raises:
        LLMError: If the API call fails
    """
    try:
        messages = []
        if system:
            messages.append({"role": "system", "content": system})
        messages.append({"role": "user", "content": prompt})

        response = self._client.chat.completions.create(
            model=self._model,
            max_tokens=self._max_tokens,
            messages=messages,
        )
        return response.choices[0].message.content or ""
    except openai.APIError as e:
        raise LLMError(f"OpenAI API error: {e}") from e
    except Exception as e:
        raise LLMError(f"Unexpected error calling OpenAI: {e}") from e

complete_with_metadata

complete_with_metadata(prompt: str, system: str | None = None) -> LLMResponse

Send a prompt and return response with metadata.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `prompt` | `str` | The user prompt to send | *required* |
| `system` | `str \| None` | Optional system prompt for context | `None` |

Returns:

| Type | Description |
|------|-------------|
| `LLMResponse` | `LLMResponse` with content and usage metadata |

Source code in iso8583sim/llm/providers/openai.py
def complete_with_metadata(self, prompt: str, system: str | None = None) -> LLMResponse:
    """Send a prompt and return response with metadata.

    Args:
        prompt: The user prompt to send
        system: Optional system prompt for context

    Returns:
        LLMResponse with content and usage metadata
    """
    try:
        messages = []
        if system:
            messages.append({"role": "system", "content": system})
        messages.append({"role": "user", "content": prompt})

        response = self._client.chat.completions.create(
            model=self._model,
            max_tokens=self._max_tokens,
            messages=messages,
        )

        usage = None
        if response.usage:
            usage = {
                "input_tokens": response.usage.prompt_tokens,
                "output_tokens": response.usage.completion_tokens,
            }

        return LLMResponse(
            content=response.choices[0].message.content or "",
            model=response.model,
            provider=self.name,
            usage=usage,
        )
    except openai.APIError as e:
        raise LLMError(f"OpenAI API error: {e}") from e
    except Exception as e:
        raise LLMError(f"Unexpected error calling OpenAI: {e}") from e

is_available

is_available() -> bool

Check if OpenAI provider is available.

Returns:

| Type | Description |
|------|-------------|
| `bool` | True if the `openai` package is installed and the API key is configured. |

Source code in iso8583sim/llm/providers/openai.py
def is_available() -> bool:
    """Check if OpenAI provider is available.

    Returns:
        True if openai package is installed and API key is configured.
    """
    if not _OPENAI_AVAILABLE:
        return False
    return bool(os.environ.get("OPENAI_API_KEY"))

iso8583sim.llm.providers.google

Google (Gemini) provider.

google

Google (Gemini) LLM provider implementation.

GoogleProvider

GoogleProvider(api_key: str | None = None, model: str | None = None)

Bases: LLMProvider

LLM provider using Google's Gemini API.

Example

provider = GoogleProvider()  # Uses GOOGLE_API_KEY env var
response = provider.complete("Explain ISO 8583")
print(response)

# Or with explicit API key
provider = GoogleProvider(api_key="...")

Initialize the Google provider.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `api_key` | `str \| None` | Google API key. If not provided, uses `GOOGLE_API_KEY` env var. | `None` |
| `model` | `str \| None` | Model to use. Defaults to `gemini-1.5-flash`. | `None` |

Raises:

| Type | Description |
|------|-------------|
| `ProviderNotAvailableError` | If the `google-generativeai` package is not installed. |
| `ProviderConfigError` | If no API key is available. |

Source code in iso8583sim/llm/providers/google.py
def __init__(
    self,
    api_key: str | None = None,
    model: str | None = None,
):
    """Initialize the Google provider.

    Args:
        api_key: Google API key. If not provided, uses GOOGLE_API_KEY env var.
        model: Model to use. Defaults to gemini-1.5-flash.

    Raises:
        ProviderNotAvailableError: If google-generativeai package is not installed.
        ProviderConfigError: If no API key is available.
    """
    if not _GOOGLE_AVAILABLE:
        raise ProviderNotAvailableError("Google", "google-generativeai")

    self._api_key = api_key or os.environ.get("GOOGLE_API_KEY")
    if not self._api_key:
        raise ProviderConfigError(
            "Google API key not found. Set GOOGLE_API_KEY environment variable or pass api_key parameter."
        )

    self._model_name = model or self.DEFAULT_MODEL
    genai.configure(api_key=self._api_key)
    self._model = genai.GenerativeModel(self._model_name)
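Since the model argument simply overrides DEFAULT_MODEL, a non-default Gemini model can be selected at construction time. A minimal sketch (gemini-1.5-pro is only an illustrative model name, not a documented default):

from iso8583sim.llm.providers.google import GoogleProvider

# "gemini-1.5-pro" is an illustrative model name, not a library default.
provider = GoogleProvider(api_key="...", model="gemini-1.5-pro")
print(provider.model)  # -> gemini-1.5-pro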

name property

name: str

Return the provider name.

model property

model: str

Return the model name being used.

complete

complete(prompt: str, system: str | None = None) -> str

Send a prompt to Gemini and return the response.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| prompt | str | The user prompt to send | required |
| system | str \| None | Optional system prompt for context | None |

Returns:

| Type | Description |
| --- | --- |
| str | The response text from Gemini |

Raises:

| Type | Description |
| --- | --- |
| LLMError | If the API call fails |

Source code in iso8583sim/llm/providers/google.py
def complete(self, prompt: str, system: str | None = None) -> str:
    """Send a prompt to Gemini and return the response.

    Args:
        prompt: The user prompt to send
        system: Optional system prompt for context

    Returns:
        The response text from Gemini

    Raises:
        LLMError: If the API call fails
    """
    try:
        # Combine system and user prompts for Gemini
        full_prompt = prompt
        if system:
            full_prompt = f"{system}\n\n{prompt}"

        response = self._model.generate_content(full_prompt)
        return response.text
    except Exception as e:
        raise LLMError(f"Google API error: {e}") from e

complete_with_metadata

complete_with_metadata(prompt: str, system: str | None = None) -> LLMResponse

Send a prompt and return response with metadata.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| prompt | str | The user prompt to send | required |
| system | str \| None | Optional system prompt for context | None |

Returns:

| Type | Description |
| --- | --- |
| LLMResponse | LLMResponse with content and metadata |

Source code in iso8583sim/llm/providers/google.py
def complete_with_metadata(self, prompt: str, system: str | None = None) -> LLMResponse:
    """Send a prompt and return response with metadata.

    Args:
        prompt: The user prompt to send
        system: Optional system prompt for context

    Returns:
        LLMResponse with content and metadata
    """
    try:
        full_prompt = prompt
        if system:
            full_prompt = f"{system}\n\n{prompt}"

        response = self._model.generate_content(full_prompt)

        usage = None
        if hasattr(response, "usage_metadata") and response.usage_metadata:
            usage = {
                "input_tokens": response.usage_metadata.prompt_token_count,
                "output_tokens": response.usage_metadata.candidates_token_count,
            }

        return LLMResponse(
            content=response.text,
            model=self._model_name,
            provider=self.name,
            usage=usage,
        )
    except Exception as e:
        raise LLMError(f"Google API error: {e}") from e

is_available

is_available() -> bool

Check if Google provider is available.

Returns:

| Type | Description |
| --- | --- |
| bool | True if google-generativeai package is installed and API key is configured. |

Source code in iso8583sim/llm/providers/google.py
def is_available() -> bool:
    """Check if Google provider is available.

    Returns:
        True if google-generativeai package is installed and API key is configured.
    """
    if not _GOOGLE_AVAILABLE:
        return False
    return bool(os.environ.get("GOOGLE_API_KEY"))

iso8583sim.llm.providers.ollama

Ollama (local models) provider.

ollama

Ollama (local) LLM provider implementation.

OllamaProvider

OllamaProvider(model: str | None = None, host: str | None = None)

Bases: LLMProvider

LLM provider using local Ollama server.

Example

provider = OllamaProvider()  # Uses llama3.2 on localhost
response = provider.complete("Explain ISO 8583")
print(response)

Or with custom model and host

provider = OllamaProvider(model="mistral", host="http://192.168.1.100:11434")

Initialize the Ollama provider.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| model | str \| None | Model to use. Defaults to llama3.2. | None |
| host | str \| None | Ollama server URL. Defaults to http://localhost:11434. | None |

Raises:

| Type | Description |
| --- | --- |
| ProviderNotAvailableError | If ollama package is not installed. |

Source code in iso8583sim/llm/providers/ollama.py
def __init__(
    self,
    model: str | None = None,
    host: str | None = None,
):
    """Initialize the Ollama provider.

    Args:
        model: Model to use. Defaults to llama3.2.
        host: Ollama server URL. Defaults to http://localhost:11434.

    Raises:
        ProviderNotAvailableError: If ollama package is not installed.
    """
    if not _OLLAMA_AVAILABLE:
        raise ProviderNotAvailableError("Ollama", "ollama")

    self._model_name = model or self.DEFAULT_MODEL
    self._host = host or self.DEFAULT_HOST
    self._client = ollama.Client(host=self._host)

name property

name: str

Return the provider name.

model property

model: str

Return the model name being used.

complete

complete(prompt: str, system: str | None = None) -> str

Send a prompt to Ollama and return the response.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| prompt | str | The user prompt to send | required |
| system | str \| None | Optional system prompt for context | None |

Returns:

| Type | Description |
| --- | --- |
| str | The response text from Ollama |

Raises:

| Type | Description |
| --- | --- |
| LLMError | If the API call fails |

Source code in iso8583sim/llm/providers/ollama.py
def complete(self, prompt: str, system: str | None = None) -> str:
    """Send a prompt to Ollama and return the response.

    Args:
        prompt: The user prompt to send
        system: Optional system prompt for context

    Returns:
        The response text from Ollama

    Raises:
        LLMError: If the API call fails
    """
    try:
        messages = []
        if system:
            messages.append({"role": "system", "content": system})
        messages.append({"role": "user", "content": prompt})

        response = self._client.chat(
            model=self._model_name,
            messages=messages,
        )
        return response["message"]["content"]
    except Exception as e:
        raise LLMError(f"Ollama error: {e}") from e

complete_with_metadata

complete_with_metadata(prompt: str, system: str | None = None) -> LLMResponse

Send a prompt and return response with metadata.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| prompt | str | The user prompt to send | required |
| system | str \| None | Optional system prompt for context | None |

Returns:

| Type | Description |
| --- | --- |
| LLMResponse | LLMResponse with content and metadata |

Source code in iso8583sim/llm/providers/ollama.py
def complete_with_metadata(self, prompt: str, system: str | None = None) -> LLMResponse:
    """Send a prompt and return response with metadata.

    Args:
        prompt: The user prompt to send
        system: Optional system prompt for context

    Returns:
        LLMResponse with content and metadata
    """
    try:
        messages = []
        if system:
            messages.append({"role": "system", "content": system})
        messages.append({"role": "user", "content": prompt})

        response = self._client.chat(
            model=self._model_name,
            messages=messages,
        )

        usage = None
        if "eval_count" in response or "prompt_eval_count" in response:
            usage = {
                "input_tokens": response.get("prompt_eval_count", 0),
                "output_tokens": response.get("eval_count", 0),
            }

        return LLMResponse(
            content=response["message"]["content"],
            model=self._model_name,
            provider=self.name,
            usage=usage,
        )
    except Exception as e:
        raise LLMError(f"Ollama error: {e}") from e

is_available

is_available() -> bool

Check if Ollama provider is available.

Returns:

| Type | Description |
| --- | --- |
| bool | True if ollama package is installed. Note: doesn't check if server is running. |

Source code in iso8583sim/llm/providers/ollama.py
def is_available() -> bool:
    """Check if Ollama provider is available.

    Returns:
        True if ollama package is installed. Note: doesn't check if server is running.
    """
    return _OLLAMA_AVAILABLE
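Since is_available() only reflects that the ollama package is importable, code that needs a live server must probe it separately. A minimal sketch, assuming the ollama client's list() call (a cheap server round-trip that lists local models) exists in the installed client version:

import ollama

from iso8583sim.llm.providers.ollama import OllamaProvider

def ollama_server_reachable(host: str = "http://localhost:11434") -> bool:
    """Best-effort probe; is_available() never contacts the server."""
    try:
        ollama.Client(host=host).list()  # any successful round-trip will do
        return True
    except Exception:
        return False

if OllamaProvider.is_available() and ollama_server_reachable():
    provider = OllamaProvider()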