bilig

X Reply Growth Playbook

Status: public, account-safe outreach playbook for growing proompteng/bilig.

Goal: earn legitimate attention from developers who already care about spreadsheet engines, workbook automation, local-first software, coding agents, formula semantics, or open-source infrastructure. Do not optimize for evading platform enforcement. Optimize for useful replies that would still make sense if the link were removed.

Ground Rules

X’s own rules, updated in April 2026, say unsolicited automated replies based only on keyword searches are not permitted. Its developer guidelines require official API use instead of scraping or browser automation, and its behavior guidance treats repeated, duplicative unsolicited replies as spam. That makes the correct strategy simple: fewer replies, higher fit, and no automation.

Daily Reply Budget

Use a hard cap until the account has real inbound discussion:

This is not about hiding from filters. It is about keeping the account useful enough that a person reading the reply sees a relevant contribution, not a growth tactic.

Thread Fit

Good targets:

Bad targets:

Voice

Keep the tone close to the common builder style on X: lowercase, direct, short, and specific. Avoid polished marketing language.

Useful shape:

  1. agree or add one concrete distinction
  2. name the implementation evidence
  3. link only if it clarifies the claim

Avoid:

Current Style Scan

Checked on 2026-05-07 from public X pages and indexed web results for @sama. Recent high-engagement posts and replies tend to be compressed, casual, and low-friction:

Use that shape only as a tone reference. Do not impersonate anyone, copy phrasing, or turn posts into vague ai hype. For bilig, the useful version is lowercase and human but still specific: workbook state, formula parity, readback, fixtures, examples, and measured caveats.

Research Refresh - 2026-05-07

What is working:

Working tone for bilig:

Good one-line reply shapes:

the grid is the ui. workbook state is the api the agent needs to mutate and verify.
the hard part is not generating formulas. it is proving ranges, formulas, recalc, and export still match after edits.
i think "excel compatible" has to be replaced with fixture-scoped claims and verifier commands.

Live Reply Queue - 2026-05-07

These are hand-picked targets from the logged-in Atlas X session. Do not treat this as a scraping queue. Use it as a small manual queue and stop after the first reply unless a real follow-up appears.

ChatGPT spreadsheet add-on post

Target: https://x.com/ChatGPTapp/status/2051776032127238266

Why it fits:

Draft reply:

this is exactly the direction. one thing we keep hitting while building bilig is
that agents need typed workbook operations and verification hooks, not
screenshots of grids.

open-source node api if useful for anyone experimenting:
https://github.com/proompteng/bilig

Lower-promotion variant:

this is exactly the direction. the thing i keep wanting for agents is typed
workbook operations + verification hooks, not screenshots of grids.

the grid is the ui. workbook state is the api.

Use the linked version only if the maintainer account is comfortable openly owning the project in the reply. Use the no-link version when the thread feels too crowded or the account needs more normal participation before linking.
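
To make "typed workbook operations and verification hooks" concrete before using either variant, here is a minimal sketch. Every name in it is hypothetical; it is not the @bilig/headless API, just the shape the reply is gesturing at.

  // sketch: a typed workbook op plus a verification hook.
  // all names here are hypothetical, not the real bilig api.
  type CellAddress = { sheet: string; cell: string };
  type SetCellOp = { kind: "setCell"; at: CellAddress; formula: string };

  type Workbook = Map<string, string>; // "Sheet1!B2" -> formula or literal

  const key = (at: CellAddress) => `${at.sheet}!${at.cell}`;

  function apply(wb: Workbook, op: SetCellOp): void {
    wb.set(key(op.at), op.formula);
  }

  // the verification hook reads state back instead of trusting the edit.
  function verify(wb: Workbook, op: SetCellOp): boolean {
    return wb.get(key(op.at)) === op.formula;
  }

  const wb: Workbook = new Map();
  const op: SetCellOp = {
    kind: "setCell",
    at: { sheet: "Sheet1", cell: "B2" },
    formula: "=SUM(A1:A10)",
  };
  apply(wb, op);
  console.log(verify(wb, op) ? "writeback verified" : "writeback mismatch");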

Follow-up artifact if anyone engages:

AI Excel agent startups thread

Target: https://x.com/IM_Aeneas/status/2050729841947709822

Why it fits:

Draft reply:

probably means the serious ones have to go deeper than chat around cells.

the hard part is typed workbook ops, import/export fidelity, recalculation
correctness, and verification after edits.

Only add a repo link if someone asks what an implementation of that boundary looks like. If that happens, link the website or adoption kit rather than dropping the repository into the first reply.
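
If "recalculation correctness" and "verification after edits" need unpacking, the toy sketch below edits an input cell, recomputes, and asserts the dependent cell moved. The recalc model is deliberately naive and every shape is invented.

  // toy recalc check: edit an input cell, then assert the dependent formula
  // cell actually recalculated. every shape here is invented.
  type Sheet = { values: Record<string, number>; formulas: Record<string, string[]> };

  // naive recalc: each "formula" is just the list of input cells it sums.
  function recalc(sheet: Sheet): Record<string, number> {
    const out: Record<string, number> = {};
    for (const [cell, inputs] of Object.entries(sheet.formulas)) {
      out[cell] = inputs.reduce((acc, ref) => acc + (sheet.values[ref] ?? 0), 0);
    }
    return out;
  }

  const sheet: Sheet = {
    values: { A1: 10, A2: 20 },
    formulas: { B1: ["A1", "A2"] }, // stands in for =SUM(A1:A2)
  };

  const before = recalc(sheet);
  sheet.values.A1 = 15; // the agent's edit
  const after = recalc(sheet);

  console.assert(before.B1 === 30 && after.B1 === 35, "recalc did not propagate");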

PopSheets AI spreadsheet mention

Target: https://x.com/AudreyLimsAi/status/2052019390225555807

Why it fits:

Draft reply:

this is the right product surface.

the infrastructure layer i keep watching is whether the agent can prove what it
changed: ranges, formulas, recalc, readback, and export fidelity.
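
"Prove what it changed" can be as simple as diffing two workbook snapshots. A sketch with an invented snapshot shape:

  // sketch: diff two workbook snapshots to report exactly which cells changed.
  type Snapshot = Record<string, string>; // "Sheet1!A1" -> formula or literal
  type Change = { cell: string; from?: string; to?: string };

  function diff(before: Snapshot, after: Snapshot): Change[] {
    const cells = new Set([...Object.keys(before), ...Object.keys(after)]);
    const changes: Change[] = [];
    for (const cell of cells) {
      if (before[cell] !== after[cell]) {
        changes.push({ cell, from: before[cell], to: after[cell] });
      }
    }
    return changes;
  }

  const pre: Snapshot = { "Sheet1!A1": "100", "Sheet1!B1": "=A1*2" };
  const post: Snapshot = { "Sheet1!A1": "100", "Sheet1!B1": "=A1*3", "Sheet1!C1": "=B1+1" };
  console.log(diff(pre, post));
  // -> B1 changed formula, C1 was added; A1 is untouched and absent from the diff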

ChatGPT Excel add-in personalization reply

Target: https://x.com/KieranJame86217/status/2051998803771949424

Why it fits:

Draft reply:

yeah, the interesting bit is making the instructions operational.

not just "remember how i like models", but stable workbook ops the add-in can
run and then verify after edits.

Native Excel agent launch reply

Target: https://x.com/gardnersmitha/status/2051456316942458898

Why it fits:

Draft reply:

direct data access is a big deal.

the other half i care about for spreadsheet agents is writeback verification:
after the model changes a workbook, can you inspect formulas/ranges and prove
what changed.
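
One reading of "prove what changed" is an allowlist check: the agent declares the cells it intends to touch, and verification rejects the writeback if anything else moved. A sketch, with all shapes invented:

  // sketch: verify the agent only changed the cells it declared. all shapes
  // here are invented; the proof comes from workbook state, not agent claims.
  type Snapshot = Record<string, string>;

  function undeclaredChanges(before: Snapshot, after: Snapshot, declared: Set<string>): string[] {
    const cells = new Set([...Object.keys(before), ...Object.keys(after)]);
    return [...cells].filter((cell) => before[cell] !== after[cell] && !declared.has(cell));
  }

  const before: Snapshot = { "Sheet1!A1": "1", "Sheet1!B1": "=A1+1" };
  const after: Snapshot = { "Sheet1!A1": "2", "Sheet1!B1": "=A1+1", "Sheet1!Z9": "oops" };
  const declared = new Set(["Sheet1!A1"]);

  console.log(undeclaredChanges(before, after, declared));
  // -> ["Sheet1!Z9"], so this writeback should be rejected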

Atlas Search Pass - 2026-05-07

Query used manually in Atlas:

("excel add-in" OR "spreadsheet automation" OR "workbook automation") (agent OR chatgpt OR ai)

Do not work this like a lead list. Pick one high-fit thread, reply once, and then wait. If a thread is mostly a launch announcement for another product, use a no-link technical reply or skip it.

OpenCode Excel agent demo

Target: https://x.com/moalzq/status/2051147753993224594

Why it fits:

Draft reply:

this is the right direction for spreadsheet agents.

the part i keep wanting after "it writes code until done" is a boring
verification layer: what ranges changed, what formulas recalculated, and can the
workbook round-trip after the edit.

Only add a link if someone asks for implementation evidence. Then use:

i have been building the headless version of that layer here:
https://github.com/proompteng/bilig/tree/main/examples/headless-workpaper

the useful command is npm run agent:verify because it checks writeback instead
of only showing a grid demo.
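
For orientation only, the checklist below sketches the kind of checks a writeback verifier can run. It is a guess at the shape, not the contents of npm run agent:verify; the linked example is the real thing.

  // hypothetical verifier skeleton. the check bodies are placeholders; a real
  // verifier would compare snapshots, recalc results, and a round-tripped file.
  type Check = { name: string; run: () => boolean };

  const checks: Check[] = [
    { name: "edited ranges match the declared ranges", run: () => true },
    { name: "dependent formulas recalculated", run: () => true },
    { name: "workbook round-trips after the edit", run: () => true },
  ];

  let failed = 0;
  for (const c of checks) {
    const ok = c.run();
    console.log(`${ok ? "ok  " : "FAIL"} ${c.name}`);
    if (!ok) failed += 1;
  }
  process.exitCode = failed > 0 ? 1 : 0;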

PDF to spreadsheet automation failure thread

Target: https://x.com/Kcherupalli/status/2049149610187165716

Why it fits:

Draft reply:

this is the part most demos skip.

spreadsheet automation needs fixtures and failure states, not just a happy-path
agent run. once it breaks, you need to know whether extraction, formulas,
writeback, or export changed.

Do not include a link in the first reply. If the author asks about tooling, point to the XLSX verifier walkthrough or the headless WorkPaper example.
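
"Fixtures and failure states" can be modeled as a per-stage result, so a broken run names whether extraction, formulas, writeback, or export changed. A hypothetical sketch:

  // sketch: name the stage that failed instead of raising one opaque error.
  // the stage names come from the reply above; everything else is invented.
  type Stage = "extraction" | "formulas" | "writeback" | "export";
  type StageResult = { stage: Stage; ok: boolean; detail?: string };

  function firstFailure(results: StageResult[]): StageResult | undefined {
    return results.find((r) => !r.ok);
  }

  const run: StageResult[] = [
    { stage: "extraction", ok: true },
    { stage: "formulas", ok: true },
    { stage: "writeback", ok: false, detail: "Sheet1!B2 expected =SUM(A1:A10), got 0" },
    { stage: "export", ok: true },
  ];

  const failure = firstFailure(run);
  console.log(failure ? `broke at ${failure.stage}: ${failure.detail}` : "all stages ok");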

ChatGPT Excel add-in differentiation question

Target: https://x.com/joserod__/status/2047402008882085973

Why it fits:

Draft reply:

i would separate product surface from automation substrate.

an add-in is where the user talks to the workbook. the substrate is the boring
part underneath: typed workbook ops, recalc, writeback checks, persistence, and
import/export fidelity.

No link first. If someone asks for a concrete open-source example, use @bilig/headless and link the runnable example, not the root repo.

Reply Templates

Use these as starting points, not copy/paste automation.

agents and spreadsheets

yeah, screenshots are the weak primitive here.

the useful thing is a workbook api the agent can mutate and verify. i wrote up
the shape we use in bilig:
https://github.com/proompteng/bilig/blob/main/docs/why-agents-need-workbook-apis.md

formula parity

i think the honest version is fixture-scoped parity, not "excel compatible".

for bilig i'm trying to make each claim point at the exact fixture + verifier
command, like this xlookup exact case:
https://github.com/proompteng/bilig/blob/main/docs/formula-edge-xlookup-exact-fixture.md
the useful version is one behavior per fixture.

for criteria aggregates, this is the kind of claim i trust: one sumifs case,
expected rows, registry status, and the verifier command:
https://github.com/proompteng/bilig/blob/main/docs/formula-edge-sumifs-paired-criteria-fixture.md
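
As a reference while using these templates, "one behavior per fixture" might look like the sketch below. The field names and the verify command are invented for illustration; the linked docs define the real fixture format.

  // hypothetical fixture: one behavior, one expected value, one command.
  const xlookupExactFixture = {
    id: "xlookup-exact-match",
    cells: { A1: "apple", A2: "banana", B1: "1", B2: "2" },
    formula: '=XLOOKUP("banana", A1:A2, B1:B2)',
    expected: "2",
    verify: "npm run verify -- --fixture xlookup-exact-match", // invented command
  };

  console.log(`${xlookupExactFixture.id}: expect ${xlookupExactFixture.expected}`);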

dynamic arrays

the interesting spreadsheet-agent cases are not just scalar formulas.

for bilig i am trying to keep grouped spill claims small and auditable: one
groupby fixture, exact expected output, and the verifier command:
https://github.com/proompteng/bilig/blob/main/docs/formula-edge-groupby-spill-fixture.md
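
For grouped spills the expected output is a block of rows rather than a scalar, which is why these fixtures stay small. A sketch with an invented shape:

  // hypothetical grouped-spill fixture: the expected result is a block of
  // rows, not a single value. not the real fixture format.
  const groupbySpillFixture = {
    id: "groupby-sum-spill",
    cells: { A1: "east", A2: "west", A3: "east", B1: "10", B2: "5", B3: "20" },
    formula: "=GROUPBY(A1:A3, B1:B3, SUM)",
    anchor: "D1",
    expectedSpill: [
      ["east", "30"],
      ["west", "5"],
    ],
  };

  console.log(`${groupbySpillFixture.id}: ${groupbySpillFixture.expectedSpill.length} expected rows`);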

xlsx compatibility

cached xlsx parity is a useful test, but it should be described as corpus parity.

this is the report shape we use: matching formulas, mismatches, skipped formulas,
and why they were skipped:
https://github.com/proompteng/bilig/blob/main/docs/xlsx-corpus-verifier-walkthrough.md
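
The report shape described in that template (matching, mismatches, skipped plus reasons) could be typed roughly like this. The field names are guesses; the walkthrough doc is authoritative.

  // hypothetical typing of the corpus parity report described above.
  // the walkthrough doc defines the real shape; these names are guesses.
  interface CorpusParityReport {
    matching: number;
    mismatches: { cell: string; expected: string; actual: string }[];
    skipped: { cell: string; reason: string }[]; // why each formula was skipped
  }

  function summarize(r: CorpusParityReport): string {
    const total = r.matching + r.mismatches.length + r.skipped.length;
    return `${r.matching}/${total} match, ${r.mismatches.length} mismatch, ${r.skipped.length} skipped`;
  }

  const report: CorpusParityReport = {
    matching: 118,
    mismatches: [{ cell: "Sheet1!C4", expected: "42", actual: "#VALUE!" }],
    skipped: [{ cell: "Sheet1!D9", reason: "unsupported function" }],
  };

  console.log(summarize(report)); // -> 118/120 match, 1 mismatch, 1 skipped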

local-first workbooks

the part that gets interesting is when the workbook document can round-trip
through json and still preserve formula-backed state.

this is the small node example i keep pointing people at:
https://github.com/proompteng/bilig/tree/main/examples/headless-workpaper
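
The round-trip claim is easy to picture as a check: serialize the document to json, parse it back, and assert the formula-backed state survived. A minimal sketch with an invented document shape:

  // sketch: round-trip a workbook document through json and assert the
  // formula-backed state survives. the document shape is invented.
  type Doc = { cells: Record<string, { formula?: string; value: number }> };

  const original: Doc = {
    cells: {
      A1: { value: 10 },
      A2: { value: 20 },
      B1: { formula: "=SUM(A1:A2)", value: 30 },
    },
  };

  const restored: Doc = JSON.parse(JSON.stringify(original));

  // formulas must survive, not just the last computed values.
  const preserved = Object.entries(original.cells).every(
    ([ref, cell]) =>
      restored.cells[ref]?.formula === cell.formula &&
      restored.cells[ref]?.value === cell.value,
  );

  console.log(preserved ? "round-trip preserved formula-backed state" : "state lost");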

benchmarks

the benchmark claim has to stay narrow or it becomes noise.

for bilig the public claim is 46/46 mean wins on scorecard-eligible comparable
workloads, with the p95 caveat left attached:
https://github.com/proompteng/bilig/blob/main/docs/what-workpaper-benchmark-proves.md

Use these when a link would feel premature:

this is where i think spreadsheet engines need more boring audit trails: exact
fixture, expected value, verifier command, and explicit gaps.

the thing i would separate is "can import the file" vs "can prove the formulas
match cached results for this corpus".

for agent workflows, i think the grid is the ui, not the api. the api needs
stable workbook state and readback.

Follow-Up Loop

For every useful reply:

  1. Save the thread URL.
  2. Note the actual question or objection.
  3. Convert repeated questions into a doc, example, test, fixture, or issue.
  4. Add substantial feedback to https://github.com/proompteng/bilig/discussions/115 when it is not yet a concrete issue.
  5. Reply once with the new artifact only if it directly answers the thread.

This compounds better than raw posting volume because it turns market feedback into repository evidence.

Continuous Growth Cadence

Run this as a weekly loop:

  1. Ship one small proof artifact in the repo: fixture walkthrough, runnable example, benchmark note, compatibility caveat, or starter issue.
  2. Publish one maintainer post that explains the proof in lowercase, direct language.
  3. Spend 3 days watching related X discussions and add at most 2 high-context replies per day.
  4. Log every serious objection and convert repeated questions into docs, issues, fixtures, or examples before posting the next link.
  5. At the end of the week, compare GitHub stars, npm downloads, GitHub traffic referrers, issue quality, and repeat questions.

Do not optimize for reply count. The compounding unit is a public artifact that is good enough to link when the same question appears again.

Sources