Banklyze Developer Docs

Pagination

All list endpoints in the Banklyze API use page-based pagination. Pass page and per_page as query parameters to control which slice of results you receive. Every list response includes a meta object with the total record count and total page count so you can determine when to stop fetching.

Response Format

Every list endpoint returns an object with two top-level keys: data (the array of records for the current page) and meta (pagination metadata).

| Name | Type | Presence | Description |
| --- | --- | --- | --- |
| data | array | Always | Array of resource objects for the current page. Empty array when there are no results. |
| meta.page | integer | Always | The current page number (1-indexed). |
| meta.per_page | integer | Always | The number of records per page, as requested. |
| meta.total | integer | Always | Total number of matching records across all pages. |
| meta.total_pages | integer | Always | Total number of pages. When page equals total_pages, you have reached the last page. |
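The last-page check described above can be written directly against the meta object. A minimal sketch (the meta values mirror the example response later on this page):

```python
# A meta object as returned alongside data in a list response
meta = {"page": 2, "per_page": 10, "total": 47, "total_pages": 5}

# You have reached the last page once page equals (or exceeds) total_pages
is_last_page = meta["page"] >= meta["total_pages"]
print(is_last_page)  # False: page 2 of 5, three more pages remain
```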

Query Parameters

| Name | Type | Default | Description |
| --- | --- | --- | --- |
| page | integer | 1 | Page number to fetch. Must be greater than or equal to 1. Returns 422 if the requested page exceeds total_pages. |
| per_page | integer | 25 | Number of records per page. Maximum is 100 for deals, statements, and most endpoints. The transactions endpoint allows up to 200. |

Most list endpoints also support filtering and sorting query parameters such as q (search), status, sort (field name), and order (asc or desc). See individual endpoint documentation in the API Reference for the full parameter list.

Example Request & Response

Fetch the second page of ten ready deals, sorted by most recently updated:

curl — paginated deals list
curl -G https://api.banklyze.com/v1/deals \
  -H "X-API-Key: bk_your_api_key" \
  --data-urlencode "page=2" \
  --data-urlencode "per_page=10" \
  --data-urlencode "status=ready" \
  --data-urlencode "sort=updated_at" \
  --data-urlencode "order=desc"
Response — 200 OK
{
  "data": [
    {
      "id": 42,
      "business_name": "Acme Trucking LLC",
      "status": "ready",
      "health_score": 81.4,
      "health_grade": "B",
      "created_at": "2026-02-10T09:00:00Z",
      "updated_at": "2026-02-15T14:30:00Z"
    },
    {
      "id": 41,
      "business_name": "Sunrise Bakery Inc",
      "status": "ready",
      "health_score": 67.2,
      "health_grade": "C",
      "created_at": "2026-02-09T11:20:00Z",
      "updated_at": "2026-02-14T16:45:00Z"
    }
  ],
  "meta": {
    "page": 2,
    "per_page": 10,
    "total": 47,
    "total_pages": 5
  }
}

From the meta object you can see there are 47 matching deals across 5 pages. You are currently on page 2, with 10 records per page.

Iterating Through Pages

Use meta.total_pages to iterate through all pages. Stop when page >= total_pages.

Python — using the Banklyze SDK

Python — iterate all deals with SDK
import os
from banklyze import BanklyzeClient

client = BanklyzeClient(api_key=os.environ["BANKLYZE_API_KEY"])


def iter_all_deals(status: str | None = None):
    """Yield all deals across all pages."""
    page = 1
    while True:
        result = client.deals.list(
            page=page,
            per_page=100,
            status=status,
        )
        yield from result.data

        if page >= result.meta.total_pages:
            break
        page += 1


# Iterate over every ready deal
for deal in iter_all_deals(status="ready"):
    print(f"Deal {deal.id}: {deal.business_name} — Grade {deal.health_grade}")

Python — iterating transactions (per_page up to 200)

Python — iterate all transactions
import os
from banklyze import BanklyzeClient

client = BanklyzeClient(api_key=os.environ["BANKLYZE_API_KEY"])


def iter_all_transactions(statement_id: int):
    """Yield every transaction for a statement using max per_page."""
    page = 1
    while True:
        result = client.transactions.list(
            statement_id=statement_id,
            page=page,
            per_page=200,  # Transactions endpoint allows up to 200
        )
        yield from result.data

        if page >= result.meta.total_pages:
            break
        page += 1


# Count debits vs credits
debits = credits = 0
for txn in iter_all_transactions(statement_id=15):
    if txn.amount < 0:
        debits += 1
    else:
        credits += 1

print(f"{debits} debits, {credits} credits")

bash — loop with jq

bash — page through all deals
#!/usr/bin/env bash
# Page through all deals and print business names

API_KEY="bk_your_api_key"
PAGE=1
TOTAL_PAGES=1

while [ "$PAGE" -le "$TOTAL_PAGES" ]; do
  RESPONSE=$(curl -s -G "https://api.banklyze.com/v1/deals" \
    -H "X-API-Key: $API_KEY" \
    --data-urlencode "page=$PAGE" \
    --data-urlencode "per_page=100")

  # Extract and print business names
  echo "$RESPONSE" | jq -r '.data[].business_name'

  # Update total pages from meta
  TOTAL_PAGES=$(echo "$RESPONSE" | jq -r '.meta.total_pages')

  PAGE=$((PAGE + 1))
done

echo "Done — fetched $((PAGE - 1)) page(s)"

Batch Processing

When you need to process or export all records, fetch pages in sequence using per_page=100 (or per_page=200 for transactions) to minimize the number of API requests:

Python — bulk export all deals
import os
import requests

API_KEY = os.environ["BANKLYZE_API_KEY"]
BASE_URL = "https://api.banklyze.com/v1"


def fetch_page(endpoint: str, page: int, per_page: int = 100, **params) -> dict:
    resp = requests.get(
        f"{BASE_URL}{endpoint}",
        headers={"X-API-Key": API_KEY},
        params={"page": page, "per_page": per_page, **params},
    )
    resp.raise_for_status()
    return resp.json()


def batch_export_deals():
    """Export all deals in batches of 100 for bulk processing."""
    first_page = fetch_page("/deals", page=1, per_page=100)
    total_pages = first_page["meta"]["total_pages"]
    total = first_page["meta"]["total"]

    print(f"Exporting {total} deals across {total_pages} page(s)...")

    all_deals = list(first_page["data"])  # Include first page results

    for page in range(2, total_pages + 1):
        result = fetch_page("/deals", page=page, per_page=100)
        all_deals.extend(result["data"])
        print(f"Fetched page {page}/{total_pages} ({len(all_deals)}/{total})")

    return all_deals


deals = batch_export_deals()
print(f"Export complete: {len(deals)} deals")

Best Practices

Tips for efficient pagination:
  • Use per_page=100 (or 200 for transactions) when doing batch exports to reduce total request count.
  • Always check meta.total_pages to know when to stop — never assume a fixed number of pages.
  • Apply filters (status, q, date ranges) to narrow results rather than fetching all pages and filtering client-side.
  • For real-time updates, prefer webhooks over polling paginated endpoints.
  • If you need to process a large dataset, add a small delay between pages (100 ms) to stay well within your rate limit.
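The first and last tips can be combined into a small generic pagination helper. This is an illustrative sketch, not part of the Banklyze SDK: paginate and its fetch argument are hypothetical names, where fetch stands in for any function that performs the actual HTTP request and returns the parsed JSON body.

```python
import time
from typing import Callable, Iterator


def paginate(
    fetch: Callable[[int, int], dict],
    per_page: int = 100,
    delay: float = 0.1,
) -> Iterator[dict]:
    """Yield records page by page, pausing `delay` seconds between requests.

    `fetch(page, per_page)` must return a dict shaped like a Banklyze list
    response: {"data": [...], "meta": {"total_pages": N, ...}}.
    """
    page = 1
    while True:
        result = fetch(page, per_page)
        yield from result["data"]
        if page >= result["meta"]["total_pages"]:
            break
        time.sleep(delay)  # small pause between pages to stay within rate limits
        page += 1
```

Passing delay=0.1 inserts the suggested 100 ms pause between page requests; any function that hits a Banklyze list endpoint and returns the parsed JSON can serve as fetch.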