Write YAML. brz clicks the buttons.
One Go binary. Stealth baked in.
Works alone or inside your LLM agent.
You know the drill. The web app you depend on has your data, but no way to get it out programmatically. So you log in, click through four pages, hit Export, wait, download a CSV, then do it again tomorrow. brz does that whole sequence from one terminal command. You write the steps in YAML, brz drives Chrome, and you get JSON back.
Your CRM, your analytics dashboard, your admin panel... they all have an Export button buried three clicks deep. brz logs in, finds it, clicks it, downloads the file. Set it up once, run it forever.
```
brz run exports.yaml login,download-report
```
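Here's a sketch of what exports.yaml could look like. The action names match the command above, but the per-step key names are assumptions, not the verified schema — check yours with brz validate.

```yaml
# Illustrative schema: action name -> url + steps. Key names are assumptions.
login:
  url: https://app.example.com/login
  steps:
    - fill: { selector: "#email", value: "${CRM_USER}" }
    - fill: { selector: "#password", value: "${CRM_PASS}" }
    - click: { selector: "button[type=submit]" }
    - wait_url: { contains: "/dashboard" }

download-report:
  url: https://app.example.com/reports
  steps:
    - click: { text: "Export" }
    - download: {}
```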
That thing you do every Monday morning where you log into two apps, download a CSV from one, upload it to the other, and then check it worked? That's a 6-line YAML file and a cron job now.
```
brz run weekly-sync.yaml export,upload
```
Your agent needs to fill out a form, scrape a table, or download a report. brz inspect tells it what's on the page. The agent writes the YAML. brz run executes it. JSON comes back. That's the whole loop.
```
brz inspect $URL --compact --json
```
App A has pricing data. App B needs it. Neither has an API. Chain two actions in one brz session: export from the first, upload to the second. Cookies persist, so you're not re-authenticating every time.
```
brz run sync.yaml export-a,import-b
```
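A sketch of sync.yaml — the key names here are illustrative, not the verified schema. Both actions run in one browser session, so the cookies export-a earns carry into import-b:

```yaml
export-a:
  url: https://app-a.example.com/pricing
  steps:
    - click: { text: "Export CSV" }
    - download: {}

import-b:
  url: https://app-b.example.com/import
  steps:
    - upload: { selector: "input[type=file]", file: pricing.csv }  # path illustrative
    - click: { text: "Import" }
    - wait_text: { text: "Import complete" }
```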
Sometimes you just need one value: a price, a stock count, a status. brz eval runs JavaScript on any page and returns the result as JSON. Wrap it in a shell script. Check it every hour.
```
brz eval $URL "document.querySelector('.price').textContent"
```
The export completed, but was the CSV empty? Did the page redirect to an error? Eval assertions check the actual outcome: row counts, column names, URL, visible text. If the data's wrong, brz fails loud.
```yaml
eval: [download_min_rows: 1, url_contains: "/success"]
```
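In a workflow, those assertions would hang off the action whose outcome they check. Placement and surrounding keys here are a sketch — only download_min_rows and url_contains come from above, the rest of the schema is assumed:

```yaml
download-report:
  url: https://app.example.com/reports
  steps:
    - click: { text: "Export" }
    - download: {}
  eval: [download_min_rows: 1, url_contains: "/success"]
```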
Same pattern every time. Find the selectors, write the YAML, run it.
Point brz inspect at any URL. Get back every interactive element with its CSS selector, tag, type, and name. Filter with --tag and --name. Pipe to jq.
Define your workflow as a YAML file. Steps are declarative: fill, click, select, wait, download, upload, eval. No imperative code. Validate with brz validate.
Execute with brz run. Chain multiple actions in one browser session. Get JSON output with download paths, timing, and eval results.
Each one does one thing. Pipe them together. Let your agent call them in a loop.
Shows every interactive element on a page with CSS selectors you can paste straight into YAML.
Runs a workflow action in Chrome. Comma-separate to chain them: login,export.
Captures a full-page screenshot. Returns path + size as JSON.
Executes JavaScript on any page. Promises are auto-awaited.
Checks YAML syntax. Counts actions and steps. Fast feedback.
Lists all actions in a workflow with URLs and step counts.
Prints the embedded LLM agent prompt. Copy it into your agent's system prompt.
Prints the version. That's it.
The binary has zero site-specific code. Your workflow YAML is the whole program. Write it yourself or let your agent generate it.
Each step is a single YAML line. Selectors come from brz inspect. Environment variables interpolate with ${VAR}.
| Step | What it does |
|---|---|
| click | Click by selector, text, or nth |
| fill | Type into input fields |
| select | Native or Select2 dropdowns |
| upload | Set file on `input[type=file]` |
| download | Capture file downloads |
| navigate | Go to a URL |
| wait_visible | Wait for element |
| wait_text | Wait for text on page |
| wait_url | Wait for URL match |
| eval | Run JS in page context |
| screenshot | Capture page state |
| sleep | Pause for a duration |
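Putting a few of these together, with a value pulled from the environment. A sketch only — the per-step fields are assumptions, so run brz validate before trusting it:

```yaml
update-profile:
  url: https://app.example.com/settings
  steps:
    - wait_visible: { selector: "#profile-form" }
    - fill: { selector: "#display-name", value: "${DISPLAY_NAME}" }
    - select: { selector: "#timezone", value: "UTC" }
    - click: { selector: "button[type=submit]" }
    - wait_text: { text: "Saved" }
    - screenshot: {}
```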
brz outputs JSON when piped, returns meaningful exit codes, and includes similar selectors in error messages so your agent can self-correct.
Run it in a terminal, you get readable text. Pipe it somewhere, you get JSON. Automatic.
When a step fails, the JSON includes page_elements with nearby selectors. The agent fixes the YAML without having to re-inspect.
brz prompt prints a system prompt you paste into your agent. It covers all the commands, flags, and failure modes.
Cookies stick around between runs. Log in once with --headed, then every brz command after that is authenticated.
It's a Go binary. You install it, you run it. There is no config file.
Masks the webdriver flag, spoofs Client Hints, uses a real User-Agent. Sites behind Cloudflare and DataDome don't know it's automated.
BRZ_HEADED=auto starts headless but pops open a real window if it hits a CAPTCHA or stale session. Solve it manually, brz picks back up.
Captures before/after JPEGs for every step in an in-memory ring buffer and surfaces them only on failure, so you pay nothing on success. When something breaks, you see exactly what Chrome saw.
After the steps run, check the result: did the CSV have rows? Did the URL end up where it should? Was the error text absent? Nine different checks.
If the next action targets the same URL, brz skips the navigation. Actions with no URL just keep working on whatever page is loaded.
The select step figures out if it's a real `<select>` or a Select2 widget and handles both. Retries if the dropdown is still loading from AJAX.
Finds and uses the Chrome already on your machine. Only downloads Chromium if you don't have Chrome installed at all.
Import the workflow/ package into your own Go code. It's mutex-protected and safe to call from multiple goroutines.