How to Add Human Review to Your AI Agent's Output

Tutorial · 8 min read

Your AI agent just generated a 2,000-word report. It looks good. Probably. But "probably" isn't enough when the output goes to a client, gets published, or feeds into a downstream system. You need a human to review it — and you need the agent to act on that feedback.

This is the human-in-the-loop problem. Most teams solve it with email attachments, shared Google Docs, or Slack threads full of copy-pasted snippets. The agent never sees the feedback programmatically. Someone has to manually relay corrections back into the system.

Notebind gives your agent a proper AI agent feedback loop: push a document, get a share link, let humans comment and suggest changes, then pull that feedback via the API. The agent processes structured data — not chat messages.

Let's build this workflow step by step.

1. Agent generates markdown locally

Your agent writes its output as a markdown file. This is the starting point — it could be a report, a draft blog post, a proposal, anything.

Agent output Python
# Your agent generates a document
report = agent.generate_report(data)

with open("report.md", "w") as f:
    f.write(report)

2. Push to Notebind

Now push the document to Notebind so reviewers can access it in the browser. You can use the CLI or the API directly.

With the CLI:

Terminal bash
notebind push report.md

With the API:

Request curl
curl -X POST https://notebind.com/api/documents \
  -H "Authorization: Bearer nb_sk_..." \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Q1 Performance Report",
    "content": "# Q1 Performance Report\n\nRevenue grew 23%..."
  }'
Response JSON
{
  "data": {
    "id": "doc_a1b2c3d4",
    "title": "Q1 Performance Report",
    "created_at": "2026-03-13T10:30:00Z"
  },
  "error": null
}

Or in Python:

Push Python
import requests

API = "https://notebind.com/api"
HEADERS = {
    "Authorization": "Bearer nb_sk_...",
    "Content-Type": "application/json"
}

# Push the document
with open("report.md") as f:
    content = f.read()

doc = requests.post(f"{API}/documents", headers=HEADERS, json={
    "title": "Q1 Performance Report",
    "content": content
}).json()["data"]

doc_id = doc["id"]
print(f"Document created: {doc_id}")
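Every response in this post wraps its payload in a {data, error} envelope. The examples chain .json()["data"] directly; in real code a tiny helper makes failures loud instead of silent. A sketch, assuming error is null on success and a message string on failure (the examples here only show the null case):

Unwrap helper Python

```python
def unwrap(payload: dict):
    """Return the data field of a Notebind response, raising on error."""
    if payload.get("error"):
        raise RuntimeError(f"Notebind API error: {payload['error']}")
    return payload["data"]
```

Then each call becomes `doc = unwrap(requests.post(...).json())`, and a bad API key fails with a clear message instead of a KeyError.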

3. Generate a share link

Create a share link so your reviewer can access the document without needing an account. Set the permission to comment so they can leave feedback but not edit the source.

Request curl
curl -X POST https://notebind.com/api/documents/doc_a1b2c3d4/share \
  -H "Authorization: Bearer nb_sk_..." \
  -H "Content-Type: application/json" \
  -d '{"permission": "comment"}'
Response JSON
{
  "data": {
    "url": "https://notebind.com/s/abc123",
    "permission": "comment"
  },
  "error": null
}

Send that URL to your reviewer — via email, Slack, or however your workflow runs. They open it in a browser and see the rendered markdown with commenting tools.

4. Human reviews the document

This is the part where humans do what they're good at. Your reviewer reads the document in Notebind's editor and can:

  • Comment — highlight text and leave a note ("This number looks wrong" or "Add more context here")
  • Suggest — propose specific text changes with track-changes style diffs ("Revenue grew 23%" → "Revenue grew 23% YoY")

Every comment and suggestion is anchored to a specific span of text in the document. No ambiguity about what the feedback refers to.

5. Agent pulls feedback

Once the reviewer is done, the agent fetches structured feedback via two API calls — one for comments, one for suggestions.

Pull feedback Python
# Fetch comments
comments = requests.get(
    f"{API}/documents/{doc_id}/comments",
    headers=HEADERS
).json()["data"]

# Fetch suggestions
suggestions = requests.get(
    f"{API}/documents/{doc_id}/suggestions",
    headers=HEADERS
).json()["data"]

print(f"Got {len(comments)} comments, {len(suggestions)} suggestions")

Comments come back as structured objects with the anchored text and the reviewer's note:

Comment object JSON
{
  "id": "cmt_x1y2z3",
  "body": "This should be year-over-year, not quarter-over-quarter",
  "anchor_text": "Revenue grew 23%",
  "anchor_start": 45,
  "anchor_end": 61,
  "resolved": false
}
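Before acting on a comment, the agent can sanity-check that the stored offsets still line up with the document it holds. A minimal sketch, assuming offsets are zero-based and end-exclusive (the post doesn't spell out the convention):

Verify anchor Python

```python
def anchor_matches(content: str, comment: dict) -> bool:
    """Check that a comment's stored offsets still point at its anchor text."""
    start, end = comment["anchor_start"], comment["anchor_end"]
    return content[start:end] == comment["anchor_text"]
```

If the check fails (say, the document changed after the comment was left), fall back to searching for anchor_text rather than trusting the offsets.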

Suggestions include the original text and the proposed replacement:

Suggestion object JSON
{
  "id": "sug_p4q5r6",
  "original_text": "Revenue grew 23%",
  "suggested_text": "Revenue grew 23% year-over-year",
  "status": "pending"
}
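One gap the calls above leave open: knowing when the reviewer is actually done. Notebind's notification options aren't covered in this post, so as a simple assumption-level approach, the agent can poll until feedback appears:

Poll for feedback Python

```python
import time

def wait_for_feedback(fetch, interval=30.0, max_attempts=120):
    """Call fetch() until it returns a non-empty list or attempts run out.

    fetch is any zero-argument callable, e.g. a lambda wrapping the
    comments request from this step.
    """
    for _ in range(max_attempts):
        items = fetch()
        if items:
            return items
        time.sleep(interval)
    return []
```

An empty list after max_attempts means nothing arrived in the window, which the caller can treat as a timeout.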

6. Agent processes the feedback

Now the agent can programmatically apply suggestions and reason about comments. Suggestions are straightforward — find-and-replace with the reviewer's proposed text. Comments might require the agent to regenerate a section.

Process feedback Python
# Apply suggestions directly
doc = requests.get(
    f"{API}/documents/{doc_id}", headers=HEADERS
).json()["data"]
content = doc["content"]

for sug in suggestions:
    # Replace only the first occurrence so repeated phrases aren't all rewritten
    content = content.replace(sug["original_text"], sug["suggested_text"], 1)

    # Accept the suggestion
    requests.patch(
        f"{API}/documents/{doc_id}/suggestions/{sug['id']}",
        headers=HEADERS,
        json={"action": "accept"}
    )

# Feed comments to your LLM for reasoning
for comment in comments:
    revision = llm.revise(
        section=comment["anchor_text"],
        feedback=comment["body"],
        full_document=content
    )
    content = content.replace(comment["anchor_text"], revision)

    # Resolve the comment
    requests.patch(
        f"{API}/documents/{doc_id}/comments/{comment['id']}",
        headers=HEADERS,
        json={"resolved": True}
    )
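One caveat with find-and-replace: it silently hits the wrong span if the anchored text appears more than once. Since feedback carries character offsets, a safer variant splices by position, applying edits from the end of the document backward so earlier offsets stay valid. A sketch, assuming zero-based, end-exclusive, non-overlapping offsets:

Offset-based edits Python

```python
def apply_anchored_edits(content: str, edits: list) -> str:
    """Apply (start, end, replacement) edits back-to-front so offsets stay valid.

    Edits must not overlap; sorting descending by start lets each splice
    leave every earlier offset untouched.
    """
    for start, end, replacement in sorted(edits, reverse=True):
        content = content[:start] + replacement + content[end:]
    return content
```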

7. Push the updated document

Push the revised content back to Notebind. The reviewer can check the updates and leave more feedback if needed — completing the loop.

Update Python
# Push the revised version
requests.patch(
    f"{API}/documents/{doc_id}",
    headers=HEADERS,
    json={"content": content}
)

print("Document updated with reviewer feedback")

Or with the CLI, after saving the updated content locally:

Terminal bash
notebind push report.md

The full loop

That's the complete human-review workflow for your AI agent. Your agent writes, a human reviews, and the agent incorporates feedback — all through structured APIs, not copy-paste.

1. Write — Agent generates markdown
2. Push — Sync to Notebind via CLI or API
3. Share — Send a review link to the human
4. Review — Human comments and suggests changes
5. Pull — Agent fetches structured feedback
6. Process — Agent applies changes and resolves feedback
7. Repeat — Push updates, review again if needed
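Stitched together, the loop is a handful of lines of orchestration. This sketch injects the pull, process, and update steps as callables (all hypothetical names, standing in for the requests shown above) so the control flow reads on its own:

Review loop Python

```python
def run_review_cycle(doc_id, content, pull_feedback, process, update, max_rounds=3):
    """Repeat pull, process, push until the reviewer has nothing left to say."""
    for _ in range(max_rounds):
        comments, suggestions = pull_feedback(doc_id)
        if not comments and not suggestions:
            break  # no outstanding feedback, so the loop is done
        content = process(content, comments, suggestions)
        update(doc_id, content)
    return content
```

max_rounds caps the cycle so an agent and a picky reviewer can't loop forever.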

Each cycle tightens the output. The first pass catches factual errors and tone issues. The second might refine structure. By the third, you have a document that neither the agent nor the human could have produced alone.

This is what a real human-in-the-loop system looks like — not a one-time gate, but a continuous feedback loop where both sides contribute what they're best at.

Add human review to your agent today

Notebind is free with no limits. Get an API key and start building your feedback loop in minutes.

Get started free