Evaluator

Evaluate content

Assess content quality using customizable evaluation metrics and scoring criteria. Create objective evaluation frameworks with numeric scoring to measure performance across multiple dimensions.
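
To make the idea of a scoring dimension concrete, here is a minimal sketch of what a single metric could look like. This is an illustration only, not the block's documented schema; the field names (name, description, range) are assumptions, and the documented inputs are listed under Inputs (API) below.

// Hypothetical shape of one evaluation metric: a named dimension
// scored on a numeric range by the evaluator.
interface EvaluationMetric {
  name: string;                         // e.g. "clarity" or "accuracy"
  description: string;                  // what the evaluator should judge
  range: { min: number; max: number };  // numeric scoring bounds
}

// Example: two dimensions, each scored from 1 to 5.
const exampleMetrics: EvaluationMetric[] = [
  { name: "clarity", description: "How clearly the ideas are expressed", range: { min: 1, max: 5 } },
  { name: "accuracy", description: "Factual correctness of the claims", range: { min: 1, max: 5 } },
];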

Usage

  1. Add the block to your workflow and connect it to the upstream step.
  2. Configure any required credentials or tokens in the inputs.
  3. Fill in required inputs and optional parameters for the run.
  4. Run a test execution, inspect outputs, and iterate before deploying.
  5. Deploy the evaluator block with monitoring enabled in production.

Inputs (UI)

Evaluation Metrics (eval-input): layout full
Content (short-input): layout full; placeholder "Enter the content to evaluate"
Model (dropdown): layout half; options are dynamic
API Key (short-input): layout full; placeholder "Enter your API key"
System Prompt (code): layout full; hidden by default

Inputs (API)

metrics (json, required): array of metrics to evaluate against
model (string, required): the model used to run the evaluation
apiKey (string, required): API key for the selected model provider
content (string, required): the content to evaluate
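
A minimal sketch of a full set of API inputs, assuming a plain JSON-style body with the four documented fields. The model id, the inner metric fields, and the environment variable are illustrative assumptions, not part of the documented schema.

// Illustrative payload for the Evaluator block's API inputs.
// Only metrics, model, apiKey, and content are documented; everything
// inside the metric objects is an assumption for this example.
const evaluatorInput = {
  metrics: [
    { name: "clarity", description: "How clearly the ideas are expressed", range: { min: 1, max: 5 } },
    { name: "relevance", description: "How well the content addresses the topic", range: { min: 1, max: 5 } },
  ],
  model: "gpt-4o",                          // hypothetical: any model offered by the dropdown
  apiKey: process.env.MODEL_API_KEY ?? "",  // hypothetical env var holding the provider key
  content: "The draft text to be evaluated...",
};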

Outputs

Primary response type:

{
  "content": "string",
  "model": "string",
  "tokens": "json"
}

Conditional output based on: metrics

When metrics is empty: {"content": "string", "model": "string", "tokens": "json"}
When metrics is filled: "json"
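
Because the response shape depends on whether metrics were supplied, downstream code may want to branch on that. The sketch below assumes as much; the exact structure of the "filled" JSON result is not documented here, so it is treated as unknown.

// Base shape returned when no metrics are configured, per the schema above.
interface EvaluatorBaseOutput {
  content: string;
  model: string;
  tokens: Record<string, unknown>; // documented only as "json"
}

function handleEvaluatorOutput(hasMetrics: boolean, output: unknown) {
  if (!hasMetrics) {
    // No metrics: the block returns the model response envelope.
    const { content, model, tokens } = output as EvaluatorBaseOutput;
    console.log(`Model ${model} returned ${content.length} characters`, tokens);
  } else {
    // Metrics supplied: the block returns a JSON value whose shape depends
    // on the metrics you defined (e.g. a score per dimension).
    const scores = output as Record<string, unknown>;
    console.log("Evaluation scores:", scores);
  }
}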

Tool Access

openai_chat, anthropic_chat, google_chat, xai_chat, deepseek_chat, deepseek_reasoner