Your AI agent just generated a 2,000-word report. It looks good. Probably. But "probably" isn't enough when the output goes to a client, gets published, or feeds into a downstream system. You need a human to review it — and you need the agent to act on that feedback.
This is the human-in-the-loop problem. Most teams solve it with email attachments, shared Google Docs, or Slack threads full of copy-pasted snippets. The agent never sees the feedback programmatically. Someone has to manually relay corrections back into the system.
Notebind gives your agent a proper AI agent feedback loop: push a document, get a share link, let humans comment and suggest changes, then pull that feedback via API. The agent processes structured data — not chat messages.
Let's build this workflow step by step.
1. Agent generates markdown locally
Your agent writes its output as a markdown file. This is the starting point — it could be a report, a draft blog post, a proposal, anything.
```python
# Your agent generates a document
report = agent.generate_report(data)

with open("report.md", "w") as f:
    f.write(report)
```

2. Push to Notebind
Now push the document to Notebind so reviewers can access it in the browser. You can use the CLI or the API directly.
With the CLI:
```shell
notebind push report.md
```

With the API:
```shell
curl -X POST https://notebind.com/api/documents \
  -H "Authorization: Bearer nb_sk_..." \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Q1 Performance Report",
    "content": "# Q1 Performance Report\n\nRevenue grew 23%..."
  }'
```

The response:

```json
{
  "data": {
    "id": "doc_a1b2c3d4",
    "title": "Q1 Performance Report",
    "created_at": "2026-03-13T10:30:00Z"
  },
  "error": null
}
```

Or in Python:
```python
import requests

API = "https://notebind.com/api"
HEADERS = {
    "Authorization": "Bearer nb_sk_...",
    "Content-Type": "application/json"
}

# Push the document
with open("report.md") as f:
    content = f.read()

doc = requests.post(f"{API}/documents", headers=HEADERS, json={
    "title": "Q1 Performance Report",
    "content": content
}).json()["data"]

doc_id = doc["id"]
print(f"Document created: {doc_id}")
```

3. Generate a share link
Create a share link so your reviewer can access the document without needing an account. Set the permission to comment so they can leave feedback but not edit the source.
```shell
curl -X POST https://notebind.com/api/documents/doc_a1b2c3d4/share \
  -H "Authorization: Bearer nb_sk_..." \
  -H "Content-Type: application/json" \
  -d '{"permission": "comment"}'
```

The response:

```json
{
  "data": {
    "url": "https://notebind.com/s/abc123",
    "permission": "comment"
  },
  "error": null
}
```

Send that URL to your reviewer — via email, Slack, or however your workflow runs. They open it in a browser and see the rendered markdown with commenting tools.
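If your agent scripts the whole flow in Python rather than shelling out to curl, the same call is a one-liner with `requests`. A minimal sketch, assuming the `/share` endpoint and response shape shown above (the `create_share_link` helper name is my own):

```python
import requests

API = "https://notebind.com/api"
HEADERS = {
    "Authorization": "Bearer nb_sk_...",
    "Content-Type": "application/json",
}

def create_share_link(doc_id: str, permission: str = "comment") -> str:
    """Create a share link for a document and return its URL."""
    resp = requests.post(
        f"{API}/documents/{doc_id}/share",
        headers=HEADERS,
        json={"permission": permission},
    )
    resp.raise_for_status()
    return resp.json()["data"]["url"]
```

Using `"comment"` as the default permission keeps reviewers from editing the source directly, matching the curl example.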
4. Human reviews the document
This is the part where humans do what they're good at. Your reviewer reads the document in Notebind's editor and can:
- Comment — highlight text and leave a note ("This number looks wrong" or "Add more context here")
- Suggest — propose specific text changes with track-changes style diffs ("Revenue grew 23%" → "Revenue grew 23% YoY")
Every comment and suggestion is anchored to a specific span of text in the document. No ambiguity about what the feedback refers to.
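Span anchoring also lets the agent sanity-check feedback before acting on it: if the document changed since the review, the character offsets may no longer line up with the anchored text. A small hypothetical helper (the field names match the comment objects Notebind returns; `anchor_is_valid` and the sample data are my own):

```python
def anchor_is_valid(content: str, comment: dict) -> bool:
    """Check that a comment's character offsets still match its anchored text."""
    start, end = comment["anchor_start"], comment["anchor_end"]
    return content[start:end] == comment["anchor_text"]

# Sample document and a comment anchored to one span of it
doc_content = "Q1 summary. Revenue grew 23% across all regions."
comment = {
    "body": "Specify year-over-year",
    "anchor_text": "Revenue grew 23%",
    "anchor_start": 12,
    "anchor_end": 28,
}
assert anchor_is_valid(doc_content, comment)  # offsets still match the text
```

If the check fails, the agent can fall back to searching for `anchor_text` directly, or flag the comment for a human to re-anchor.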
5. Agent pulls feedback
Once the reviewer is done, the agent fetches structured feedback via two API calls — one for comments, one for suggestions.
```python
# Fetch comments
comments = requests.get(
    f"{API}/documents/{doc_id}/comments",
    headers=HEADERS
).json()["data"]

# Fetch suggestions
suggestions = requests.get(
    f"{API}/documents/{doc_id}/suggestions",
    headers=HEADERS
).json()["data"]

print(f"Got {len(comments)} comments, {len(suggestions)} suggestions")
```

Comments come back as structured objects with the anchored text and the reviewer's note:
```json
{
  "id": "cmt_x1y2z3",
  "body": "This should be year-over-year, not quarter-over-quarter",
  "anchor_text": "Revenue grew 23%",
  "anchor_start": 45,
  "anchor_end": 63,
  "resolved": false
}
```

Suggestions include the original text and the proposed replacement:
```json
{
  "id": "sug_p4q5r6",
  "original_text": "Revenue grew 23%",
  "suggested_text": "Revenue grew 23% year-over-year",
  "status": "pending"
}
```

6. Agent processes the feedback
Now the agent can programmatically apply suggestions and reason about comments. Suggestions are straightforward — find-and-replace with the reviewer's proposed text. Comments might require the agent to regenerate a section.
```python
# Apply suggestions directly
doc = requests.get(
    f"{API}/documents/{doc_id}", headers=HEADERS
).json()["data"]
content = doc["content"]

for sug in suggestions:
    content = content.replace(sug["original_text"], sug["suggested_text"])
    # Accept the suggestion
    requests.patch(
        f"{API}/documents/{doc_id}/suggestions/{sug['id']}",
        headers=HEADERS,
        json={"action": "accept"}
    )

# Feed comments to your LLM for reasoning
for comment in comments:
    revision = llm.revise(
        section=comment["anchor_text"],
        feedback=comment["body"],
        full_document=content
    )
    content = content.replace(comment["anchor_text"], revision)
    # Resolve the comment
    requests.patch(
        f"{API}/documents/{doc_id}/comments/{comment['id']}",
        headers=HEADERS,
        json={"resolved": True}
    )
```

7. Push the updated document
Push the revised content back to Notebind. The reviewer can check the updates and leave more feedback if needed — completing the loop.
```python
# Push the revised version
requests.patch(
    f"{API}/documents/{doc_id}",
    headers=HEADERS,
    json={"content": content}
)
print("Document updated with reviewer feedback")
```

Or with the CLI, after saving the updated content locally:

```shell
notebind push report.md
```

The full loop
That's the complete human-review workflow for an AI agent. Your agent writes, a human reviews, and the agent incorporates feedback — all through structured APIs, not copy-paste.
1. Write — Agent generates markdown
2. Push — Sync to Notebind via CLI or API
3. Share — Send a review link to the human
4. Review — Human comments and suggests changes
5. Pull — Agent fetches structured feedback
6. Process — Agent applies changes and resolves feedback
7. Repeat — Push updates, review again if needed
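The steps above can be sketched as one orchestration loop. This is a simplified skeleton, not Notebind's API: each callable stands in for the corresponding API calls from earlier steps, and the reviewer is notified out of band with the share link:

```python
def review_loop(generate, create_doc, update_doc, pull_feedback,
                apply_feedback, max_rounds=3):
    """Write -> push -> review -> pull -> process, repeated until clean.

    Each argument is a callable, so the loop stays independent of the
    transport layer (HTTP calls, CLI, test fakes, etc.).
    """
    content = generate()
    doc_id = create_doc(content)  # push the first draft, get a document id
    for _ in range(max_rounds):
        comments, suggestions = pull_feedback(doc_id)
        if not comments and not suggestions:
            break  # reviewer left no feedback; the loop converged
        content = apply_feedback(content, comments, suggestions)
        update_doc(doc_id, content)  # push the revision for re-review
    return content
```

Capping the rounds with `max_rounds` keeps a disagreement between agent and reviewer from cycling forever.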
Each cycle tightens the output. The first pass catches factual errors and tone issues. The second might refine structure. By the third, you have a document that neither the agent nor the human could have produced alone.
This is what a real human-in-the-loop system looks like — not a one-time gate, but a continuous feedback loop where both sides contribute what they're best at.
Add human review to your agent today
Notebind is free with no limits. Get an API key and start building your feedback loop in minutes.
Get started free