# Structured Output
In many use cases, you may want the LLM to output a specific structure, such as a list or a dictionary with predefined keys.
There are several approaches to achieve a structured output:

- **Prompting** the LLM to strictly return a defined structure.
- Using LLMs that natively support **schema enforcement**.
- **Post-processing** the LLM's response to extract structured content.
In practice, prompting is simple and reliable for modern LLMs.
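For the prompting approach, the request can simply spell out the expected keys. A minimal sketch; the wording and keys are illustrative, not a fixed API:

```python
# A hypothetical prompt builder for the prompting approach.
def make_prompt(document: str) -> str:
    return (
        "Extract the product details from the document below. Reply with "
        "only YAML using the keys name, price, and description.\n\n"
        + document
    )

prompt = make_prompt("Widget Pro costs $199.99 and is built for professionals.")
```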
## Example Use Cases

### Extracting Key Information
```yaml
product:
  name: Widget Pro
  price: 199.99
  description: |
    A high-quality widget designed for professionals.
    Recommended for advanced users.
```

### Summarizing Documents into Bullet Points
```yaml
summary:
  - This product is easy to use.
  - It is cost-effective.
  - Suitable for all skill levels.
```

### Generating Configuration Files
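A flow can emit a config file the same way; the service names and keys below are purely illustrative:

```yaml
server:
  host: 0.0.0.0
  port: 8080
logging:
  level: info
  file: app.log
```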
## Prompt Engineering
When prompting the LLM to produce structured output:
- **Wrap** the structure in code fences (e.g., a `yaml` block).
- **Validate** that all required fields exist (and let `Node` handle retries).
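The two steps above can be sketched together. The `extract_yaml` helper and the canned `response` are illustrative, and PyYAML is assumed for parsing:

```python
import yaml  # requires PyYAML

FENCE = "```"  # built programmatically so the snippet stays readable

def extract_yaml(response: str) -> dict:
    """Pull the YAML payload out of a fenced yaml block and parse it."""
    assert FENCE + "yaml" in response, "response missing a yaml code fence"
    payload = response.split(FENCE + "yaml")[1].split(FENCE)[0]
    return yaml.safe_load(payload)

# Hypothetical LLM response, hardcoded for illustration.
response = FENCE + "yaml\nsummary:\n  - Easy to use.\n  - Cost-effective.\n" + FENCE

result = extract_yaml(response)

# Validate that all required fields exist; in a Node, a failed assert
# raised here is what triggers the retry.
assert isinstance(result, dict) and "summary" in result
assert isinstance(result["summary"], list)
```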
## Example Text Summarization
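A minimal end-to-end sketch of the pattern, with a stubbed `call_llm` standing in for a real LLM call and PyYAML assumed for parsing; in practice this logic would live inside a `Node` so that failed assertions trigger its retry:

```python
import yaml  # requires PyYAML

FENCE = "```"

def call_llm(prompt: str) -> str:
    # Stub for illustration; swap in your real LLM call.
    return (FENCE + "yaml\n"
            "summary:\n"
            "  - Easy to use.\n"
            "  - Cost-effective.\n"
            "  - Fits all skill levels.\n" + FENCE)

def summarize(text: str) -> dict:
    prompt = (
        "Summarize the following text as YAML with exactly 3 bullet points "
        f"under a top-level summary key, inside a {FENCE}yaml code fence.\n\n"
        + text
    )
    response = call_llm(prompt)
    payload = response.split(FENCE + "yaml")[1].split(FENCE)[0]
    result = yaml.safe_load(payload)

    # Validate the structure before returning it downstream.
    assert isinstance(result, dict) and "summary" in result
    assert isinstance(result["summary"], list) and len(result["summary"]) == 3
    return result
```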
{% hint style="info" %} Besides using assert statements, another popular way to validate schemas is Pydantic {% endhint %}
## Why YAML instead of JSON?

Current LLMs struggle with escaping. YAML handles strings more gracefully because they don't always need quotes.
### In JSON

- Every double quote inside the string must be escaped with `\"`.
- Each newline in the dialogue must be represented as `\n`.
### In YAML

- No need to escape interior quotes: just place the entire text under a block literal (`|`).
- Newlines are naturally preserved without needing `\n`.
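The difference is easy to see in Python on a dialogue string containing both quotes and newlines; only the stdlib `json` module is used, and the YAML block is built by hand for illustration:

```python
import json

dialogue = 'Alice said: "Hello Bob.\nHow are you?"'

# JSON: interior quotes and newlines must be escaped.
encoded = json.dumps({"dialogue": dialogue})
assert '\\"' in encoded  # escaped double quotes
assert "\\n" in encoded  # escaped newline

# YAML block literal: the same text needs no escaping at all.
yaml_block = "dialogue: |\n" + "\n".join(
    "  " + line for line in dialogue.splitlines()
)
print(yaml_block)
```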