Prompts

Create re‑usable, parameterised prompt templates for MCP clients.

Prompts are structured message blueprints that guide an LLM toward consistent, purposeful output. Hyperia lets you define them with a simple @mcp.prompt() decorator.


What Are Prompts?

Workflow:

  1. Client requests a prompt by name.

  2. Hyperia locates the definition.

  3. Parameters (if any) are validated against the function signature.

  4. The function runs and returns message content.

  5. Messages are streamed to the LLM to steer its response.

The result: one canonical place to maintain complex system/user prompts that every downstream app can share.
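The lookup → validate → render workflow above can be sketched, independent of Hyperia itself, with a plain dict registry. All names here (`PROMPTS`, `get_prompt`) are illustrative, not Hyperia internals:

```python
import inspect

# Minimal stand-in for a prompt registry, mirroring the workflow steps.
PROMPTS = {}

def prompt(fn):
    PROMPTS[fn.__name__] = fn          # step 2: server stores the definition
    return fn

def get_prompt(name, arguments):
    fn = PROMPTS[name]                 # steps 1-2: client asks by name, server locates it
    inspect.signature(fn).bind(**arguments)  # step 3: validate args against the signature
    return fn(**arguments)             # step 4: run the function, return message content

@prompt
def ask_about(topic: str) -> str:
    return f"Can you please explain the concept of '{topic}'?"
```

Calling `get_prompt("ask_about", {"topic": "recursion"})` returns the rendered prompt, while a mismatched argument name fails validation with a `TypeError`.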


Defining a Prompt with @prompt

from hyperia import Hyperia
from hyperia.prompts import Message, PromptMessage, TextContent

mcp = Hyperia("PromptServer")

@mcp.prompt()
def ask_about(topic: str) -> str:
    """Ask the LLM to explain a concept."""
    return f"Can you please explain the concept of '{topic}'?"

@mcp.prompt()
def code_request(language: str, task: str) -> PromptMessage:
    content = f"Write a {language} function that {task}."
    return PromptMessage(role="user", content=TextContent(text=content))

  • Name – defaults to the function name (ask_about).

  • Description – defaults to docstring.

Variadic parameters (*args / **kwargs) are not supported, because Hyperia must generate a complete parameter schema for each prompt.
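The restriction follows from schema generation: a schema must enumerate every parameter by name, which variadics cannot do. A sketch of such a check (hypothetical helper, not Hyperia code):

```python
import inspect

def check_schema_friendly(fn):
    # A parameter schema must list every argument by name, so *args/**kwargs
    # (which accept arbitrary extras) cannot be described.
    params = inspect.signature(fn).parameters.values()
    for p in params:
        if p.kind in (inspect.Parameter.VAR_POSITIONAL,
                      inspect.Parameter.VAR_KEYWORD):
            raise TypeError(f"variadic parameter '{p.name}' has no fixed schema")
    return [p.name for p in params]
```

A fixed-arity function yields a clean parameter list; any `*args` or `**kwargs` is rejected up front.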


Return Types

| Return value | How Hyperia treats it |
| --- | --- |
| `str` | Wrapped as a single `PromptMessage` with `role="user"` |
| `PromptMessage` | Used verbatim |
| `list[str \| PromptMessage]` | Each item converted as above, producing a multi-message sequence |
| other | Coerced to `str`, then wrapped as a `PromptMessage` |

Returning a list produces a multi-message conversation:

from hyperia.prompts import Message

@mcp.prompt()
def roleplay(character: str, situation: str) -> list[Message]:
    return [
        Message(f"Let's role‑play. You are {character}. Situation: {situation}"),
        Message("Okay, I'm ready.", role="assistant"),
    ]
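
The coercion rules in the table can be sketched with a plain stand-in message type (`Msg` here is an assumed stand-in, not Hyperia's actual `PromptMessage` class):

```python
from dataclasses import dataclass

@dataclass
class Msg:                      # stand-in for PromptMessage
    content: str
    role: str = "user"

def normalize(value):
    # str -> one user message; Msg -> verbatim; list -> each item
    # converted in turn; anything else -> coerced to str first.
    if isinstance(value, Msg):
        return [value]
    if isinstance(value, list):
        return [m for item in value for m in normalize(item)]
    return [Msg(str(value))]
```

So `normalize("hi")` yields one user message, and a mixed list of strings and messages flattens into a uniform message sequence.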

Type Annotations & Validation

Annotate parameters for schema generation and runtime validation:

from pydantic import Field
from typing import Literal, Optional

@mcp.prompt()
def content_request(
    topic: str = Field(description="Main subject"),
    fmt: Literal["blog", "email", "social"] = "blog",
    tone: str = "professional",
    words: Optional[int] = None,
) -> str:
    prompt = f"Write a {fmt} about {topic} in a {tone} tone."
    if words:
        prompt += f" ~{words} words."
    return prompt

Required vs Optional

Parameters without default values are required; parameters with defaults are optional.
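That rule can be read straight off the function signature. A small illustrative helper (assumed to mirror Hyperia's behaviour, not its actual code):

```python
import inspect

def split_params(fn):
    # Parameters without a default are required; the rest are optional.
    required, optional = [], []
    for name, p in inspect.signature(fn).parameters.items():
        (required if p.default is inspect.Parameter.empty else optional).append(name)
    return required, optional

def content_request(topic: str, fmt: str = "blog", tone: str = "professional"):
    ...
```

For `content_request` above, `topic` is required while `fmt` and `tone` are optional.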


Metadata Overrides

@mcp.prompt(
    name="analyse_data_request",
    description="Ask the LLM to analyse a dataset with parameters.",
    tags={"analysis", "data"},
)
def analysis_prompt(uri: str, kind: str = "summary") -> str:
    return f"Please perform a '{kind}' analysis on data at {uri}."

  • name – explicit identifier

  • description – overrides docstring

  • tags – categorisation for UI/clients


Async Prompts

Perfect for prompts that depend on remote data.

import aiohttp

@mcp.prompt()
async def data_based(data_id: str) -> str:
    async with aiohttp.ClientSession() as s:
        async with s.get(f"https://api.example.com/data/{data_id}") as r:
            data = (await r.json())["content"]
    return f"Analyse this data: {data}"

Accessing Context

from hyperia import Hyperia, Context

mcp = Hyperia("PromptServer")

@mcp.prompt()
async def report_request(kind: str, ctx: Context) -> str:
    return f"Please create a {kind} report. Request ID: {ctx.request_id}"

See Context docs for logging, sampling, etc.


Server Behaviour Flags

Duplicate Prompt Names

mcp = Hyperia(on_duplicate_prompts="error")

Options: "warn" (default), "error", "replace", "ignore".
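A sketch of what these four flags imply at registration time, using a plain dict registry (`register_prompt` is an assumed illustration, not Hyperia's internals):

```python
import warnings

def register_prompt(registry, name, fn, on_duplicate="warn"):
    if name in registry:
        if on_duplicate == "error":
            raise ValueError(f"prompt {name!r} already registered")
        if on_duplicate == "ignore":
            return                       # keep the existing definition
        if on_duplicate == "warn":
            warnings.warn(f"replacing prompt {name!r}")
    registry[name] = fn                  # "replace" (and "warn") overwrite
```

"ignore" keeps the first registration, "replace" silently takes the latest, "warn" replaces but emits a warning, and "error" refuses outright.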
