
Data-Driven Decisions: When to Bring Numbers

Not every decision needs data — but when numbers are relevant, they transform the conversation. Learn the data relevance test, structuring tips, and three real examples of data done right.

The Bo1 Team


Not every decision needs data. Some of the best meetings on Board of One involve zero numbers — a founder working through a positioning question, a middle manager deciding how to restructure a team, a growth-stage company debating whether to enter a new market.

But when data is relevant, it transforms the conversation. The expert panel stops reasoning from general principles and starts reasoning from your reality. The advice shifts from "typically, companies should..." to "given your numbers, you should..."

The trick is knowing when data helps, when it's noise, and how to structure it so the experts can actually use it.

The Data Relevance Test

Before uploading a spreadsheet, ask yourself one question: Does this decision involve measurable outcomes where my specific numbers change the recommendation?

If yes, bring data. If the recommendation would be the same regardless of your numbers, skip it.

Data helps:
  • Pricing decisions (your revenue, churn, and usage data change the math)
  • Resource allocation (your capacity numbers determine what's feasible)
  • Growth strategy (your funnel metrics reveal where leverage exists)
  • Hiring timing (your workload data shows whether you're actually at capacity)

Data is noise:
  • Brand positioning (this is about perception, not metrics)
  • Partnership evaluation (the data that matters — partner reliability, cultural fit — isn't in a spreadsheet)
  • Team structure decisions (org design is about communication patterns, not KPIs)
  • "Should I build this product?" (early-stage market validation is qualitative)

There's a middle zone too. A market entry decision might benefit from your current revenue mix by geography, but the core question is strategic. Use judgment. When in doubt, include the data — the expert panel will use what's relevant and ignore what's not.

How to Structure Data for AI Reasoning

Board of One can process CSVs and structured data. But not all data formats are equally useful. The goal is to make it easy for the expert panel to extract insights without getting lost in noise.

Keep It Focused

Don't dump your entire analytics export. Extract the specific data relevant to the decision.

Too much: Your full Stripe export with 10,000 transactions.

Just right: A summary table showing MRR by plan tier, monthly for the last 6 months, with churn rate per tier.

| Month | Basic ($15) | Pro ($29) | Enterprise ($79) | Churn (Basic) | Churn (Pro) | Churn (Enterprise) |
|-------|-------------|-----------|------------------|---------------|-------------|--------------------|
| Sep   | $4,500      | $8,700    | $3,160           | 8%            | 4%          | 1%                 |
| Oct   | $4,200      | $9,280    | $3,950           | 9%            | 3%          | 0%                 |
| ...   | ...         | ...       | ...              | ...           | ...         | ...                |

The experts can immediately see that Basic tier is a leaky bucket and Enterprise is sticky. That insight drives the pricing recommendation.
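If you're comfortable with a little scripting, the extraction itself is a few lines. Here's a minimal pandas sketch of collapsing a raw per-subscriber export into that summary table; the column names (`month`, `plan`, `mrr`, `churned`) are assumptions, so map them to whatever your own export uses.

```python
import pandas as pd

# Hypothetical raw export: one row per subscriber per month.
# Column names here are assumptions -- adapt to your own data.
raw = pd.DataFrame({
    "month":   ["Sep", "Sep", "Sep", "Oct", "Oct", "Oct"],
    "plan":    ["Basic", "Pro", "Basic", "Basic", "Pro", "Pro"],
    "mrr":     [15, 29, 15, 15, 29, 29],
    "churned": [1, 0, 0, 0, 0, 1],  # 1 if the subscriber churned that month
})

# Collapse thousands of rows into a per-tier monthly summary:
# total MRR plus churn rate (churned subscribers / all subscribers).
summary = (
    raw.groupby(["month", "plan"], sort=False)
       .agg(mrr=("mrr", "sum"), churn_rate=("churned", "mean"))
       .reset_index()
)
print(summary)
```

Export `summary` to CSV and upload that instead of the raw transactions — same insight, a fraction of the noise.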

Label Everything

Column headers matter. "Rev" is ambiguous. "Monthly Recurring Revenue ($)" is not. Include units, time periods, and any definitions that aren't obvious.

Include Context Rows

If you're asking about a trend, include enough history to show the trend. Three months of data doesn't show seasonality. Twelve months might. The right window depends on your decision's time horizon.

Separate Facts from Projections

If your spreadsheet mixes actual data with forecasts, label them clearly. The expert panel will (correctly) treat projections with more skepticism than actuals.
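The simplest way to make that labeling survive a CSV export is an explicit column. A small sketch, assuming a hypothetical `data_type` column name — any unambiguous label works:

```python
import pandas as pd

# Hypothetical mixed sheet: the last row is a forecast, not an actual.
rows = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar", "Apr"],
    "mrr":   [8200, 8600, 9100, 9800],
})

# Tag each row explicitly so the actual/forecast distinction
# travels with the data instead of living in cell colors.
rows["data_type"] = ["actual", "actual", "actual", "forecast"]

rows.to_csv("mrr_with_labels.csv", index=False)
```

Spreadsheet formatting (bold, color, italics) is stripped on CSV export; a label column is not.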

Three Examples of Data Done Right

Example 1: Pricing Analysis (Solo Founder)

Problem statement: "I'm considering raising prices on my $15/mo plan to $19/mo. Here's my revenue and churn data by acquisition channel for the last 6 months."

Data attached: A CSV with columns for month, plan, acquisition channel, active users, churned users, and MRR. Around 50 rows total.

What happens: The panel notices that organic-acquired users churn at 3% regardless of price, while paid-acquired users churn at 11%. The pricing recommendation becomes channel-specific: raise prices for the organic segment (they're price-insensitive), and fix the onboarding problem for paid users before touching their price. Without the channel-level data, the advice would have been a generic "test a price increase with a subset."

Example 2: Product Prioritization (Growth-Stage Team)

Problem statement: "We need to decide between three features for Q2. Here's our product usage data showing which features correlate with retention and expansion."

Data attached: A CSV showing feature adoption rates, correlation with 90-day retention, and correlation with plan upgrades. Plus a separate table of feature requests by customer segment and deal size.

What happens: The expert panel — a product strategist, a data-informed PM, and a revenue operations specialist — cross-references the tables. They find that the feature with the most requests (dashboards) has weak retention correlation, while the least-requested feature (workflow automation) has the strongest upgrade correlation. The recommendation: build workflow automation, because it drives expansion revenue; dashboards are a satisfaction feature, not a growth lever. The team was about to build dashboards based on request volume alone.

Example 3: Hiring Decision (Middle Manager)

Problem statement: "My team of 6 handles all customer support. I've been asked to absorb product feedback triage without adding headcount. Here's our ticket volume, resolution time, and utilization data."

Data attached: Weekly ticket volume for 12 weeks, average resolution time, tickets per agent, plus a breakdown of time spent on different ticket categories.

What happens: The panel spots that 28% of ticket volume is password resets and account access issues — automatable with a self-service flow. Another 15% is duplicate reports that could be deflected with better status pages. The recommendation isn't "push back on the ask" or "demand headcount" — it's "automate the bottom third of your ticket volume, which frees up roughly 2 FTEs of capacity, then absorb the triage work." The data turned a political negotiation into an engineering problem.

When Data Misleads

A warning: data can also anchor the conversation in the wrong place.

If you bring revenue numbers to a question that's really about product vision, the experts will optimize for revenue. If you bring engagement metrics to a question about team morale, you'll get a data-driven answer to the wrong question.

The best practice: write your problem statement first, without thinking about data. Then ask yourself whether any data you have would change the expert panel's recommendation. If the answer is yes, include it. If you're including data because it feels rigorous, leave it out. Rigor comes from clear thinking, not from spreadsheets.

Preparing Your Data: A Checklist

Before uploading data to a meeting:

  • Relevant? Does it directly inform the decision's options or outcomes?
  • Sized right? Summary-level for strategic questions (often 50-100 rows), granular for operational ones (up to 10,000 rows).
  • Labeled? Clear column headers with units and time periods.
  • Clean? No blank rows, no merged cells, no formulas — just values.
  • Contextualized? Does your problem statement explain what the data represents and why it matters?
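Most of the mechanical items on that checklist can be handled in one short cleanup pass. A sketch, using the article's own "Rev" header example as a placeholder for your columns:

```python
import pandas as pd

# Placeholder sheet with a blank row and an ambiguous header.
df = pd.DataFrame({
    "Rev":   [4500.0, None, 4200.0],
    "Month": ["Sep", None, "Oct"],
})

df = df.dropna(how="all")  # no blank rows
df = df.rename(columns={"Rev": "Monthly Recurring Revenue ($)"})  # units in headers
df.to_csv("clean_upload.csv", index=False)  # plain values: no formulas, no merged cells
```

Relevance, sizing, and context still require judgment — only the labeling and cleaning steps are automatable.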

Get these right, and data becomes the difference between generic advice and a recommendation built on your specific situation.

---

Learn more: Read our help center guide on preparing data for your meetings for formatting tips, CSV templates, and examples of effective data-driven problem statements.

Related Topics

data-driven decisions · business data · AI advisory · CSV data · metrics · analytics

Stuck on a decision?

Board of One gives you multiple expert perspectives on your actual problem. Not generic advice — analysis shaped by your business context.

Try It Free