
2 posts tagged with "LLM"


· 6 min read
DahnM20

Getting an LLM to return structured JSON output reliably usually requires extra work: writing a system prompt that demands JSON, handling cases where the model doesn't comply, parsing the response, and validating the schema. If you're doing this in Python, it's manageable. If you're repeating it across multiple pipelines, it becomes repetitive boilerplate.
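For context, here is a rough Python sketch of that manual loop: stripping the code fences models sometimes add, parsing, and checking that the expected keys came back. The helper name is illustrative, not from any particular library.

```python
import json

def parse_llm_json(raw: str, required_keys: set[str]) -> dict:
    """Parse a model response that is supposed to be JSON, with the
    usual defensive steps: strip markdown fences, parse, check keys."""
    text = raw.strip()
    if text.startswith("```"):
        # drop the opening fence line and the closing fence
        text = text.split("\n", 1)[1].rsplit("```", 1)[0]
    data = json.loads(text)  # raises JSONDecodeError if the model rambled
    missing = required_keys - data.keys()
    if missing:
        # in a real pipeline you'd typically re-prompt here
        raise ValueError(f"model omitted keys: {sorted(missing)}")
    return data
```

Multiply that by every pipeline that needs structured output, and the appeal of a dedicated node becomes clear.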

AI-Flow has a dedicated node for this: GPT Structured Output. You define a JSON Schema, connect your data, and the model always returns a valid object matching that schema. No parsing code. No validation loop. This article walks through a practical pipeline built with it.

How structured JSON output works in AI-Flow

The GPT Structured Output node uses OpenAI's native structured output mode. You pass it three things:

  • Context — the data you want the model to process (a document, a customer message, a product description, anything text-based)
  • Prompt — the task instruction (e.g., "Extract the job title, company, and required skills from this posting")
  • JSON Schema — the exact structure you expect back

The node enforces strict mode automatically, so the model is constrained to always return a valid object. You get a native JSON output — not a string that contains JSON — which means downstream nodes can work with it directly.
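Under the hood this corresponds to OpenAI's response_format parameter. A minimal sketch of the request body such a node assembles (the wrapper function and field values are illustrative; the response_format shape follows OpenAI's structured-outputs API, not AI-Flow internals):

```python
def build_request(context: str, prompt: str, schema: dict) -> dict:
    """Build a Chat Completions request body that constrains the
    model to return JSON matching `schema`."""
    return {
        "model": "gpt-4o-mini",
        "messages": [
            {"role": "system", "content": prompt},
            {"role": "user", "content": context},
        ],
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "extraction",
                "strict": True,   # strict mode: output must match the schema
                "schema": schema,
            },
        },
    }
```

The node fills all of this in for you; you only supply the three inputs listed above.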

If you prefer Gemini models, there's also a Gemini Structured Output node with the same interface, supporting Gemini 2.5 Flash, 2.5 Pro, and newer Gemini 3 models.

Your API keys (OpenAI or Google) are stored in AI-Flow's secure key store, accessible from the settings tab — you configure them once and all nodes can use them.

Workflow example: extracting structured data from job postings

The scenario: you have raw job posting text and want to extract a clean, consistent object with the title, company, location, salary range, and a list of required skills — for every posting, in the same format.

Step 1 — Add a Text Input node

Drop a Text Input node on the canvas. Paste a raw job posting into it. This is your data source; in a real pipeline you'd connect this to an API input or a scraping node, but for testing a direct input works fine.

Text Input node with a sample job posting

Step 2 — Add the GPT Structured Output node

Drop a GPT Structured Output node on the canvas. Connect the output of the Text Input node to its Context field.

In the Prompt field, write the extraction instruction:

Extract the job information from the posting in the context.
Return only the fields defined in the schema.

Select your model — GPT-4o-mini is sufficient for extraction tasks and is cheap to run.

GPT Structured Output node with prompt field and model selector

Step 3 — Define the JSON Schema

In the json_schema field, enter the structure you want back:

{
  "type": "object",
  "properties": {
    "title": { "type": "string" },
    "company": { "type": "string" },
    "location": { "type": "string" },
    "salary_range": { "type": "string" },
    "required_skills": {
      "type": "array",
      "items": { "type": "string" }
    }
  }
}

That's the full configuration. AI-Flow handles the strict mode enforcement internally — you don't need to add required arrays or additionalProperties: false yourself; the node adds them before sending the request to the API.
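As a rough approximation of that preprocessing (a sketch of the idea, not AI-Flow's actual code — OpenAI's strict mode requires every property to be listed in required and additionalProperties set to false on each object):

```python
def to_strict(schema: dict) -> dict:
    """Recursively add the fields OpenAI strict mode requires."""
    out = dict(schema)
    if out.get("type") == "object":
        props = {k: to_strict(v) for k, v in out.get("properties", {}).items()}
        out["properties"] = props
        out["required"] = list(props)          # every property is required
        out["additionalProperties"] = False    # no extra keys allowed
    elif out.get("type") == "array" and "items" in out:
        out["items"] = to_strict(out["items"])
    return out
```

You write the minimal schema; the node fills in the strictness boilerplate.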

JSON schema field filled in

Step 4 — Run and inspect the output

Hit Run. The result appears beneath the node as a JSON object:

{
  "title": "Senior Backend Engineer",
  "company": "Acme Corp",
  "location": "Remote",
  "salary_range": "$130,000 – $160,000",
  "required_skills": ["Go", "PostgreSQL", "Kubernetes", "gRPC"]
}

Every run returns the same structure. Change the input text and run again — the schema stays consistent across all inputs.

Step 5 — Use the output downstream

Because the node outputs native JSON, you can connect it to other nodes without any conversion step.

Extract a single field: Connect the GPT Structured Output node to an Extract JSON node. Set the mode to Extract Key and enter required_skills. The output is just the skills array — useful if you want to pass it to another prompt or format it separately.
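For comparison, the same key extraction in plain Python is a short path walk (the helper below is illustrative, not AI-Flow's implementation):

```python
def extract_key(obj: dict, path: str):
    """Walk a dot-separated key path into a nested JSON object,
    mirroring what an Extract Key step does."""
    for part in path.split("."):
        obj = obj[part]
    return obj
```

Because the node's output is already native JSON, this is all the "conversion" that ever happens downstream.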

Format the result as text: Connect to a JSON Template node. Write a template like:

**${json.title}** at ${json.company}
Location: ${json.location}
Salary: ${json.salary_range:Not specified}
Skills: {% for skill in json.required_skills %}${skill}, {% endfor %}

The JSON Template node supports path access, loops, conditionals, and fallback values — it turns the structured object into whatever text format you need downstream.
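A plain-Python equivalent of that template, run against the sample output above, looks like this (illustrative; the fallback and join mirror the template's behavior):

```python
def render(job: dict) -> str:
    """Format the extracted job object as display text."""
    skills = ", ".join(job.get("required_skills", []))
    salary = job.get("salary_range") or "Not specified"  # fallback value
    return (
        f"**{job['title']}** at {job['company']}\n"
        f"Location: {job['location']}\n"
        f"Salary: {salary}\n"
        f"Skills: {skills}"
    )
```

With the template node, this logic lives on the canvas instead of in code.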

Canvas showing the full pipeline with JSON output visible

When to use Gemini Structured Output instead

The Gemini Structured Output node works identically from a workflow perspective — same fields, same JSON schema interface, same native JSON output. Use it when:

  • You already have a Gemini API key and want to keep costs on one provider
  • You need to process files alongside text (the Gemini node accepts file URLs, including PDFs and images)
  • You want to compare output quality between GPT and Gemini on your specific extraction task — both nodes can sit on the same canvas for easy side-by-side testing

Exposing the pipeline as an API

Once the extraction workflow is working, you can expose it as a REST endpoint using AI-Flow's API Builder. Add an API Input node in place of the Text Input node and an API Output node at the end. AI-Flow generates an endpoint and a key — you POST the raw text, get back the structured JSON. No server to maintain, no framework to configure.
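A sketch of calling such an endpoint from Python. The URL, key, and request field name below are placeholders, not AI-Flow's actual API shape; substitute the values AI-Flow generates for your workflow.

```python
import json
import urllib.request

# Hypothetical values — replace with the endpoint and key AI-Flow gives you.
ENDPOINT = "https://api.example.com/flows/extract-job"
API_KEY = "your-flow-api-key"

def build_call(raw_posting: str) -> urllib.request.Request:
    """Build a POST request carrying the raw posting text."""
    body = json.dumps({"input": raw_posting}).encode()
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# resp = urllib.request.urlopen(build_call("Senior Backend Engineer at Acme..."))
# result = json.load(resp)  # structured JSON in the same shape as your schema
```

That's the entire client side: one POST in, one structured object out.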

This is useful for integrating the extraction pipeline into a larger application without copying the LLM logic into your codebase.

Starting from a template

If you'd rather start from an existing pipeline than build from scratch, the AI-Flow templates library has data extraction and processing workflows you can adapt. Load one, swap the schema and prompt for your use case, and run.

Try it

The GPT Structured Output node is available on the AI-Flow free tier. You need an OpenAI API key — add it once in the key store under settings, and it's available to all nodes in all your workflows.

· 6 min read
DahnM20

n8n is a genuinely great tool. It's open source, self-hostable, has hundreds of integrations, and a large active community. If you're automating business processes — syncing CRMs, routing webhooks, connecting SaaS tools — it's hard to beat.

This article isn't about whether n8n is good. It is. It's about a specific scenario where the two tools diverge: pipelines where the primary work is done by AI models. Chaining LLM calls, generating images, extracting structured data, routing based on model output. That's where the difference starts to matter.

How they're designed differently

n8n is a general-purpose automation platform. AI capabilities were added on top of a foundation built for connecting business applications.

AI-Flow is built specifically for AI model pipelines. The node set, the canvas, the output handling — everything is oriented around calling models, routing their outputs, and iterating on the results.

Working with AI models directly

In n8n, native AI nodes cover OpenAI and Anthropic basics. For anything outside that list — a specific Replicate model, a newer Gemini variant, a specialized image model — you reach for the HTTP Request node and write the API call yourself. That means handling the schema, authentication, polling for async results, and managing file URLs manually.
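To make that concrete, here's roughly what just the polling half of that manual work looks like in Python (the helper is generic; the status values follow Replicate's prediction API, and `fetch` stands in for a GET against the prediction URL):

```python
import time

def poll_prediction(fetch, interval: float = 1.0, timeout: float = 120.0):
    """Poll an async prediction until it reaches a terminal state.

    `fetch` returns the current prediction dict, e.g. from a GET
    against Replicate's /v1/predictions/{id} endpoint.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        pred = fetch()
        if pred["status"] in ("succeeded", "failed", "canceled"):
            return pred
        time.sleep(interval)  # back off before the next poll
    raise TimeoutError("prediction did not finish in time")
```

And that's before authentication, input schemas, and file-URL handling — all of which a dedicated node absorbs.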

AI-Flow has dedicated nodes for Claude, GPT, Gemini, Grok, DeepSeek, DALL-E, Stability AI, and a Replicate node that gives you access to 1000+ models through a searchable catalog. You pick a model, the node generates the input form from the model's schema, and you connect it to the rest of the pipeline. No raw API calls to write.

Every parameter the model exposes is available in the node — nothing is abstracted away or hidden behind simplified controls. If the model has 15 input fields, you see all 15.

Replicate node model selector with input fields visible after selection

This matters most during iteration. When you're tweaking a pipeline — swapping a model, adjusting a parameter, comparing outputs — doing it through a visual node is faster than editing API call parameters in code or HTTP request bodies.

Structured output

A common pattern in AI pipelines: use an LLM to extract structured data from unstructured input — a document, a customer message, a product listing — and feed the result into downstream steps.

AI-Flow has dedicated GPT Structured Output and Gemini Structured Output nodes. You define a JSON Schema in the node, and the model is constrained via native structured output mode — it always returns a valid object matching that schema. The output is a native JSON object that downstream nodes work with directly. No parsing code, no validation loop.

{
  "type": "object",
  "properties": {
    "sentiment": { "type": "string" },
    "score": { "type": "number" },
    "key_topics": {
      "type": "array",
      "items": { "type": "string" }
    }
  }
}

Define that schema, write a prompt, and every run returns that exact structure regardless of input variation.

GPT Structured Output node with schema field and JSON output

Visual iteration on model pipelines

n8n's canvas works well for automation logic. For AI pipelines specifically, the visual feedback loop matters more — you're often running a chain multiple times, looking at intermediate outputs, adjusting a prompt, and running again.

AI-Flow's canvas shows live output beneath each node as the workflow runs. You can see the result of each step — the LLM output, the extracted JSON, the generated image — without navigating away from the canvas. When something looks wrong, you can identify exactly which node produced it.

Canvas mid-run with output visible beneath each node

This isn't a feature n8n lacks so much as a different design priority — AI-Flow treats the canvas as an interactive scratchpad for building and debugging model pipelines, not just a diagram of a deployed automation.

BYOK and cost model

Both tools support BYOK. In AI-Flow, you add your Anthropic, OpenAI, Replicate, and Google keys once in the secure key store — every node in every workflow uses them automatically. You pay the model provider directly at their rate. AI-Flow's platform fee is a one-time credit purchase, not a subscription, and there's no markup on what you pay to providers.

For workflows running heavy model workloads, that cost structure is worth understanding before choosing a platform.

Exposing pipelines as APIs

Both tools let you call a workflow from an external application. In n8n, you configure a Webhook trigger node. In AI-Flow, you add an API Input node at the start and an API Output node at the end — AI-Flow generates a REST endpoint with an API key automatically. You can also expose the workflow as a simplified form-based UI without any frontend code, which is useful for sharing a pipeline with non-technical collaborators.

When to use n8n

If your automation connects business applications — CRMs, databases, Slack, email, hundreds of SaaS tools — n8n is the right choice. It has deeper integrations, a larger community, self-hosting options, and years of production use behind it. AI-Flow has some utility integrations (HTTP, Webhooks, Notion, Airtable, Telegram) but it's not competing on that dimension.

When to use AI-Flow

If the pipeline's primary work is model calls — chaining LLMs, generating images or video, extracting structured data, routing based on model output — AI-Flow is designed for that. The model coverage is broader, structured output is a first-class feature, the canvas is built for iterating on prompts and parameters, and every model parameter is exposed without abstraction.

The specific cases where it tends to matter:

  • You need models beyond OpenAI and Anthropic basics, especially Replicate's catalog
  • You want structured JSON output without writing parsing code
  • You're iterating quickly on prompt chains and want visual feedback at each step
  • You want to expose the pipeline as a REST API with minimal setup
  • You want to avoid per-execution platform fees on top of model costs

If you're building something AI-model-heavy, the templates library has pre-built pipelines to start from. The free tier works with your own API keys — open AI-Flow and try building the pipeline there.