The Ultimate AI-Powered Analytics Workflow
Stop rewriting SQL from scratch. Stop losing days to analysis cycles. Instructions (Rules + Skills) & MCP (Context & Data Access) give your team a workflow that compounds with every project.
What you'll walk away with
2.5 hrs
end-to-end
vs. 2–3 days manual
3
report formats
PDF · HTML · Dashboard
Once
setup effort
reusable across every project
10×
faster cycle
measured across workshop teams
AI Infrastructure That Remembers Your Stack
Before running any business question, set up the AI infrastructure that makes every subsequent step faster and more consistent. Create Cursor rules, build Claude Skills, load your semantic models, and configure MCP servers for data access and context.
Every analysis follows the same documentation structure. Create this layout before starting - it enables reproducibility, data persistence, and report generation.
<my_repo>/
├── schema.yml # Semantic layer - table definitions, columns, relationships
├── metrics.yml # Semantic layer - business logic, metric calculations
└── analyses/ # All analysis projects
└── YYYY-MM-DD_analysis-name/ # One folder per analysis (date-stamped)
├── conclusions/
│ └── conclusions.md # Synthesis, cohesive story, recommendations
├── data/ # MANDATORY: Query results as JSON (NN_query-name.json)
├── deliverables/ # Final reports (PDF, HTML)
│ ├── report_summary.pdf
│ ├── report.html
│ └── report_interactive.html
└── queries/ # SQL queries (NN_query-name.sql)
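If you want to automate this scaffolding, a small helper can create the date-stamped layout above. This is a sketch; the function name and arguments are illustrative:

```python
from datetime import date
from pathlib import Path

def scaffold_analysis(repo: Path, name: str) -> Path:
    """Create the date-stamped analysis folder layout (conclusions/, data/,
    deliverables/, queries/) under analyses/YYYY-MM-DD_name/."""
    root = repo / "analyses" / f"{date.today():%Y-%m-%d}_{name}"
    for sub in ("conclusions", "data", "deliverables", "queries"):
        (root / sub).mkdir(parents=True, exist_ok=True)
    return root
```

Usage: `scaffold_analysis(Path("."), "revenue-decline")` creates, e.g., `analyses/2026-02-01_revenue-decline/` with all four subfolders.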
- analyses/YYYY-MM-DD_name/ - One folder per analysis. Date-stamped naming keeps analyses organized and versioned.
- queries/ - SQL queries. Filename format: NN_query-name.sql.
- data/ - Save ALL query results immediately after execution. Filename format: NN_query-name.json (matches the SQL filename). Enables reproducibility.
- conclusions/ - Synthesis, cohesive story per finding, recommendations. Save as conclusions.md.
- deliverables/ - Final reports: PDF, static HTML, interactive dashboard.
- schema.yml + metrics.yml - At repo root. Load from GitHub MCP during the Data Quality phase (before validation).
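The exact structure of these semantic-layer files is up to your team; a minimal sketch, consistent with the orders table and total_revenue metric used later in this guide, might look like this (field names are illustrative, not a fixed spec):

```yaml
# schema.yml (sketch; keys are illustrative)
tables:
  users:
    primary_key: user_id
    columns:
      user_id: {type: uuid}
      created_at: {type: timestamptz, required: true}
  orders:
    primary_key: order_id
    columns:
      order_id: {type: uuid}
      user_id: {type: uuid, references: users.user_id}
      order_amount: {type: numeric}
      order_date: {type: timestamptz}

# metrics.yml (sketch)
metrics:
  total_revenue:
    table: orders
    sql: SUM(order_amount)
    description: Gross revenue from completed orders
```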
Learn how to create project-specific rules that guide AI behavior across your analytics workflow
Learn how to create reusable Claude Skills that automate analytics behaviors
- Skills activate only when relevant - no wasted tokens on irrelevant context
- Rules persist across projects - define once, reuse everywhere
- Schema context = better SQL - AI understands your tables before writing queries
- MCP servers = live data - query Supabase and manage GitHub directly from Cursor
Supabase MCP Setup
Install Supabase MCP Server
Add to your Cursor MCP config
npx -y @supabase/mcp install
Configure Connection
Set up your Supabase project URL and service key in MCP settings
Test Connection
Prompt: "Using Supabase MCP, list all tables and their row counts"
GitHub MCP Setup
Install GitHub MCP Server
npm install -g @modelcontextprotocol/server-github
Authenticate
Generate a GitHub personal access token with repo permissions
Load Semantic Layer
Prompt: "Using GitHub MCP, read schema.yml and metrics.yml from analytics/ and load into context"
How Cursor Coordinates Both MCPs
Once configured, Cursor AI uses both servers in a single workflow:
Prompt in Cursor chat:
"@schema.yml @metrics.yml
Using GitHub MCP, read the latest schema.yml.
Then using Supabase MCP, run a sample query on the users
and orders tables to verify the schema matches reality.
Report any discrepancies."
B. Load schema.yml + metrics.yml from GitHub (MANDATORY FIRST STEP)
Use GitHub MCP to load your semantic layer files into Cursor context. This gives AI full knowledge of your table structure and business logic BEFORE creating rules and skills:
Prompt in Cursor chat:
"Using GitHub MCP tool user-github-get_file_contents, load schema.yml
from <my_repo>/schema.yml"
"Using GitHub MCP tool, load metrics.yml from
<my_repo>/metrics.yml"
Result: All subsequent rules, skills, and queries reference schema.yml
for table structure/relationships and metrics.yml for consistent
business logic.
C. Create Cursor Rules for Supabase Optimization
Cursor Rules define persistent instructions that apply to every AI interaction. Create a rule optimized for Supabase PostgreSQL queries, now informed by the schema context you just loaded:
Prompt in Cursor chat:
"Create a Cursor rule that optimizes queries for Supabase PostgreSQL.
Include best practices for:
- Array operations and JSONB queries
- RLS (Row Level Security) considerations
- Index usage and query planning
- Temporal filtering patterns (created_at, updated_at)
- Avoiding common Supabase pitfalls (e.g., missing RLS policies)
- Reference schema.yml and metrics.yml for table structure and metrics"
Result: A .cursor/rules/supabase-optimization.md file that Cursor
references automatically in every future query generation.
D. Build Claude Skills for Reusable Behaviors
Claude Skills are reusable instruction sets that activate when needed. Build skills for EDA, Data Quality, and HTML Reporting, now informed by the schema context:
Prompt in Cursor chat:
"Create a Claude Skill called 'EDA Agent' that:
1. Reads @schema.yml for table context
2. Samples data to understand shape and distributions
3. Generates explainable SQL with inline comments
4. Runs queries via Supabase MCP
5. Summarizes findings in structured markdown
6. Flags outliers and data quality issues
Save as .cursor/skills/eda-agent.md"
Design the Approach Before Running a Single Query
Use Cursor Plan Mode to collaboratively design your analysis approach before executing queries. Iterate on the plan with AI to ensure you're fully aligned on methodology, data sources, queries, and deliverables. You stay in the driver's seat.
Learn how to use Plan Mode to design analysis workflows collaboratively before execution
- Stay in control - Review and approve the analysis approach before AI executes anything
- Catch issues early - Identify missing data sources, incorrect assumptions, or scope gaps before running queries
- Iterate efficiently - Refine the plan through conversation until you're confident in the approach
- Document decisions - The plan becomes a record of your analytical reasoning and methodology
- Align stakeholders - Share the plan for approval before investing time in execution
Step 1: Activate Plan Mode
Switch to Plan Mode in Cursor to enter collaborative planning mode. In this mode, AI cannot execute code or queries - it only helps you design the approach.
In Cursor chat:
1. Click the mode selector (usually shows "Agent" or "Chat")
2. Select "Plan" mode
3. Or type: "Switch to Plan Mode"
Result: You're now in read-only planning mode where AI helps
design without executing.
Step 2: Describe Your Analysis Goal
Prompt in Plan Mode:
"@schema.yml @metrics.yml
I need to analyze monthly revenue trends and identify drivers
of the 15% MoM decline we saw in January.
Help me plan:
1. What data quality checks should I run first?
2. What EDA analyses are needed?
3. What specific queries will answer the business question?
4. What deliverables should I produce?
Create a step-by-step plan with checkpoints."
Step 3: Iterate on the Plan
Review the AI's proposed plan and refine it through conversation. Ask questions, challenge assumptions, add constraints:
Example refinements:
"Add a cohort analysis to identify if the decline is concentrated
in specific user segments"
"Include a comparison with the same period last year to account
for seasonality"
"The plan should produce both a PDF summary for leadership and
an interactive dashboard for the product team"
"Add a checkpoint after EDA to review data quality before running
the main analysis queries"
Step 4: Approve and Execute
Once you're satisfied with the plan, switch back to Agent Mode to execute it:
In Cursor chat:
"Switch to Agent Mode and execute the plan we just created.
Pause at each checkpoint for my review before proceeding."
Result: AI follows the approved plan step-by-step, pausing
at each checkpoint for your confirmation.
Pro Tip: Save Your Plans
Save approved plans as markdown files in your repo. This creates a library of reusable analysis patterns:
"Save this plan as plans/revenue_decline_analysis.md
and commit via GitHub MCP with message 'Analysis plan: revenue decline'"
For You (The Analyst)
- Maintain control over the analysis direction
- Catch methodological issues before execution
- Build confidence in the approach
- Document your analytical reasoning
For Stakeholders
- Review methodology before time is invested
- Provide input on scope and deliverables
- Understand the analysis approach upfront
- Align on expected outputs and timeline
Catch Data Issues Before They Surprise Your Stakeholders
Use your pre-built Claude Skill to automatically run validation checks against Supabase tables. The skill reads schema.yml for context, executes checks, and returns a structured pass/fail report.
See how a pre-built Claude Skill automates PK/FK, temporal, and completeness checks
- Skills activate only when relevant - no token waste on irrelevant context
- Works across projects - not copy-paste, truly reusable
- Evolves with your workflow - update once, use everywhere
- Reduces manual prompting - automated behaviors triggered by context
Activate the Data Quality Skill
With the Skill and schema.yml in context, a single prompt triggers comprehensive validation:
Prompt in Cursor chat:
"@schema.yml Run data quality checks on users and orders tables"
Cursor (using Data Quality Skill):
1. Reads schema.yml to identify primary keys, foreign keys, required fields
2. Generates validation queries for each check type
3. Executes via Supabase MCP
4. Returns structured report:
✓ CHECK: PK Uniqueness (users.user_id)
What: Verifying primary key has no duplicates
Result: PASS
Details: 0 duplicates found
✓ CHECK: FK Integrity (orders.user_id → users.user_id)
What: Verifying join doesn't cause fan-out or data loss
Result: PASS
Details: 0 orphan foreign keys
✓ CHECK: Temporal Consistency (orders)
What: created_at <= updated_at, no future dates
Result: PASS
Details: 0 violations
⚠ CHECK: Completeness (users.created_at)
What: Checking null rates for required fields
Result: WARNING
Details: 0.2% null in a required field (below the 5% fail threshold)
Copyable Validation Prompts
Run these prompts individually or combine them:
"Run PK/FK validation checks on tables defined in schema.yml"
"Check for temporal consistency: created_at <= updated_at, no future dates"
"Calculate null rates for required fields, flag any >5%"
"Identify duplicate records and orphan foreign keys"
What the Skill Checks
- Primary Key Uniqueness: Auto-detects PKs from schema.yml and verifies no duplicates
- Foreign Key Integrity: Validates all FK relationships defined in schema.yml
- Temporal Consistency: Checks created_at ≤ updated_at, no future dates
- Completeness: Identifies null rates in required fields (flags >5%)
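In the workflow the Skill issues these checks as SQL via Supabase MCP; purely for intuition, the same logic over rows-as-dicts can be sketched in plain Python (helper names are hypothetical):

```python
from collections import Counter

def pk_duplicates(rows, pk):
    """Primary key uniqueness: return PK values appearing more than once."""
    counts = Counter(r[pk] for r in rows)
    return [v for v, n in counts.items() if n > 1]

def orphan_fks(child_rows, fk, parent_rows, parent_pk):
    """FK integrity: FK values in the child table with no matching parent row."""
    parents = {r[parent_pk] for r in parent_rows}
    return [r[fk] for r in child_rows if r[fk] not in parents]

def null_rate(rows, col):
    """Completeness: fraction of rows where col is NULL/None."""
    return sum(r[col] is None for r in rows) / len(rows)

# Tiny illustrative data
users = [{"user_id": 1, "created_at": "2026-01-01"},
         {"user_id": 2, "created_at": None}]
orders = [{"order_id": 10, "user_id": 1},
          {"order_id": 11, "user_id": 3}]

pk_duplicates(users, "user_id")                     # → []
orphan_fks(orders, "user_id", users, "user_id")     # → [3]
null_rate(users, "created_at")                      # → 0.5
```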
Save Results for Downstream Steps
Prompt in Cursor chat:
"Save the data quality results as conclusions/dq_report.md
and commit via GitHub MCP with message
'Data quality check - [date]'"
Supabase MCP (Execution)
- Executes PK/FK validation queries
- Runs temporal consistency checks
- Computes null rates per column
- Returns result sets to Cursor
GitHub MCP (Context + Storage)
- Reads schema.yml for table structure
- Provides FK relationship context
- Stores DQ reports in version control
- Tracks quality trends over time
Cursor AI Coordination
Prompt in Cursor chat:
"@schema.yml Using Supabase MCP, run data quality checks on
all tables defined in schema.yml. Save the report to
conclusions/dq_report.md and commit via GitHub MCP with message
'Weekly DQ check - Feb 2026'"
Run Thorough EDA With Full Business Context
Run exploratory data analysis with full schema context. The EDA Agent reads schema.yml, executes distribution and correlation analysis via Supabase MCP, and generates structured markdown reports - all from a single prompt.
Use Cursor's data analysis chat mode for interactive EDA with schema context
- Schema.yml auto-loaded - AI knows table relationships before writing SQL
- Explainable queries - every SQL statement has inline comments explaining logic
- Structured output - findings organized in markdown, not scattered across query results
- Iterative exploration - ask follow-up questions without re-explaining context
- 5-phase systematic process - from data profiling to pattern detection, ensuring nothing is missed
Conversational Checkpoints
By default, the analysis pauses after each phase for your review. This ensures data quality issues are caught before running main queries. You can skip confirmations by saying "run the full analysis" or "skip confirmations".
Phase 1: Data Profile & Quality (ALWAYS DO THIS FIRST)
Prompt in Cursor chat:
"Profile the dataset: show shape, memory usage, data types,
duplicates, missing values"
Decision point: Does data need cleaning before proceeding?
Phase 2: Column Classification & Context
Prompt in Cursor chat:
"Classify all columns: numeric, categorical, date, ID, binary.
Flag any ambiguous columns."
Checkpoint: User confirms column interpretations
Phase 3: Distribution Analysis
Prompt in Cursor chat:
"Analyze numeric distributions: summary stats, skewness, kurtosis,
outliers (IQR method). Generate histograms and box plots."
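The IQR method referenced in the prompt is Tukey's 1.5×IQR fence rule; as a rough sketch (function name is illustrative):

```python
import statistics

def iqr_outliers(values):
    """Flag values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] (Tukey's rule)."""
    xs = sorted(values)
    q1, _median, q3 = statistics.quantiles(xs, n=4)  # quartile cut points
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in xs if x < lo or x > hi]

iqr_outliers([1, 2, 3, 4, 5, 100])  # → [100]
```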
"Analyze categorical distributions: unique counts, top values,
check for high cardinality."
Phase 4: Correlation & Relationship Analysis
Prompt in Cursor chat:
"Calculate correlation matrix, identify high correlations (|r| >= 0.7),
generate heatmap."
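The coefficient behind that |r| >= 0.7 cutoff is plain Pearson correlation, which can be computed without any libraries (a sketch; function name is illustrative):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

pearson_r([1, 2, 3], [2, 4, 6])  # → 1.0 (perfect positive)
pearson_r([1, 2, 3], [3, 2, 1])  # → -1.0 (perfect negative)
```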
"Test categorical relationships with a chi-square test of independence."
Phase 5: Pattern Detection & Insights
Prompt in Cursor chat (adapt based on dataset type):
For time series: "Identify temporal patterns, seasonality, trends"
For segmentation: "Cluster analysis and segment profiling"
For behavioral: "User journey patterns and conversion funnels"
Complete EDA Example
Prompt in Cursor chat:
"@schema.yml Perform EDA on the orders table:
- Analyze order_amount distribution (min, max, quartiles, outliers)
- Identify seasonal patterns in order_date
- Check correlation between order_count and avg_order_value
- Profile user segments by order frequency"
EDA Agent:
1. Samples data to understand shape (first 1000 rows)
2. Generates SQL with inline comments explaining each step
3. Runs queries via Supabase MCP
4. Summarizes findings in structured markdown:
## Distribution: order_amount
- Range: $12.50 - $4,890.00
- Median: $185.00 | Mean: $245.30
- Q1: $85.00 | Q3: $320.00
- Outliers: 23 orders > $2,000 (1.2% of total)
## Seasonal Patterns
- Peak months: November, December (+40% vs baseline)
- Trough: January (-25% vs baseline)
## Correlation
- order_count vs avg_order_value: r = -0.32 (weak negative)
- Higher-volume days have slightly lower AOV
Step 3: Deep-Dive with Follow-Up Questions
Prompt in Cursor chat:
"Drill into the 23 high-value outlier orders:
- What user segments do they belong to?
- Are they concentrated in specific time periods?
- Check if they correlate with marketing campaigns"
The EDA Agent maintains full context from Step 2 -
no need to re-explain table structure or prior findings.
GitHub MCP (Setup Phase)
- Reads schema.yml + metrics.yml from repo
- Provides table structure and business logic context
- Tracks EDA report versions
Supabase MCP (Execution Phase)
- Uses schema.yml context to understand tables
- Executes EDA queries against live database
- Returns result sets to Cursor for summarization
Cursor AI Coordination
Prompt in Cursor chat:
"@schema.yml Using Supabase MCP, analyze the orders table
distribution. Save findings to conclusions/eda_orders.md and
commit via GitHub MCP with message 'EDA: orders table analysis'"
Write Production-Ready SQL From a Single Prompt
Use Cursor rules with DWH documentation to write optimized, production-ready queries. The semantic layer (schema.yml + metrics.yml) ensures consistent business logic across all analyses.
See how Cursor rules with DWH docs produce optimized, consistent SQL
- Cursor rules encode DWH best practices - every query follows optimization patterns
- metrics.yml = single source of truth - no more "which revenue formula do we use?"
- Schema context prevents errors - AI knows valid joins, column types, and constraints
- Production-ready output - queries respect indexes, RLS, and Supabase-specific patterns
- Data persistence is mandatory - save all query results for reproducibility and report generation
Using the Semantic Layer for Consistent Metrics
The Cursor rule instructs AI to always reference metrics.yml for business logic. This eliminates metric definition drift:
Prompt in Cursor chat:
"@schema.yml @metrics.yml
Calculate monthly revenue for the last 6 months
using the total_revenue metric."
Cursor (applying Supabase optimization rule + metrics.yml):
1. Reads metrics.yml → total_revenue = SUM(order_amount)
2. Reads schema.yml → orders table, order_date column
3. Applies Supabase rule → uses index on order_date
4. Generates optimized SQL:
SELECT
DATE_TRUNC('month', order_date) AS month,
SUM(order_amount) AS total_revenue -- from metrics.yml
FROM orders
WHERE order_date >= NOW() - INTERVAL '6 months'
GROUP BY DATE_TRUNC('month', order_date)
ORDER BY month DESC;
Environment-Aware Optimization
Cursor rules with DWH documentation mean every query follows platform-specific best practices:
# .cursor/rules/supabase-optimization.md
## Query Optimization for Supabase PostgreSQL
### Index Usage
- Always filter on indexed columns first (created_at, user_id)
- Use BETWEEN for date ranges instead of >= AND <=
- Prefer EXISTS over IN for subqueries
### JSONB Patterns
- Use ->> for text extraction, -> for nested access
- Create GIN indexes for frequently queried JSONB paths
- Avoid full-table JSONB scans
### RLS Considerations
- Queries execute with RLS policies applied
- Use service_role key for admin queries via MCP
- Test with both anon and service_role to verify access
### Supabase-Specific
- Use pg_stat_statements to identify slow queries
- Prefer materialized views for heavy aggregations
- Use EXPLAIN ANALYZE to verify query plans
Complex Analysis Example
Prompt in Cursor chat:
"@schema.yml @metrics.yml
Build a monthly retention cohort analysis:
- Cohort by user signup month
- Measure retention using the active_users metric
- Show 6-month retention curve
- Highlight cohorts with above/below average retention"
Cursor generates production-ready SQL that:
✓ Uses metrics.yml active_users definition
✓ Applies Supabase index optimizations
✓ Includes EXPLAIN ANALYZE for query plan validation
✓ Adds inline comments referencing metric definitions
Data Persistence (MANDATORY)
Save ALL query results to the data/ folder immediately after execution. This enables reproducibility and report generation:
Prompt in Cursor chat:
"Save query results to data/01_monthly-revenue.json immediately
after execution"
Filename format: NN_query-name.json (matches SQL query filename)
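A minimal sketch of a helper that persists one result set under that naming convention (function and argument names are illustrative):

```python
import json
from pathlib import Path

def save_result(data_dir, seq, name, rows):
    """Persist query results as data/NN_query-name.json,
    matching the corresponding NN_query-name.sql file."""
    path = Path(data_dir) / f"{seq:02d}_{name}.json"
    # default=str keeps dates/decimals serializable without extra deps
    path.write_text(json.dumps(rows, indent=2, default=str))
    return path
```

For example, `save_result("data", 1, "monthly-revenue", rows)` writes `data/01_monthly-revenue.json`.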
Why this matters:
- Enables reproducibility without re-running queries
- Provides data for report generation
- Creates audit trail of analysis results
- Allows data review before synthesis
Validation Loop with Supabase MCP
Cursor can generate, execute, and validate queries in a single loop:
Prompt in Cursor chat:
"@schema.yml @metrics.yml
Generate the monthly revenue query, execute it via Supabase MCP,
and validate the results make sense:
- Check for NULL months
- Verify revenue values are positive
- Compare against last month's report for sanity"
Cursor:
1. Generates optimized SQL (using rules + metrics.yml)
2. Executes via Supabase MCP
3. Validates results automatically
4. Reports: "Query returned 6 rows, all months present,
revenue range $120K–$185K, +5% vs previous report ✓"
Supabase MCP Usage
- Execute generated SQL against live data
- Run EXPLAIN ANALYZE for optimization
- Validate results in real-time
- Test with different RLS contexts
GitHub MCP Usage
- Read metrics.yml for business logic
- Version control optimized queries
- Track query performance over time
- Share validated queries with team
Turn Raw Results Into a Decision-Ready Story
Transform raw EDA findings and query results into actionable insights. Use Cursor to summarize analysis results into a cohesive story and generate prioritized recommendations with business impact.
- Analysis without conclusions is just data - stakeholders need the "so what?"
- Cursor maintains full context - all EDA + query results are available for synthesis
- Structured synthesis frameworks - prevent cherry-picking and keep reasoning defensible
- Reproducible reasoning - the AI documents how it arrived at each conclusion
Step 1: Summarize Analysis Results into a Cohesive Story
For each key finding, build a defensible narrative from observation to recommendation that anyone can follow:
Prompt in Cursor chat:
"@eda_orders.md @dq_report.md
Summarize the analysis results into a cohesive story:
1. Observation (what happened)
2. Context (why it matters)
3. Supporting data (from EDA results)
4. Root cause analysis
5. Business impact (quantified)
6. Recommendation (specific, actionable)"
Cursor generates:
# Insight: New customer acquisition declined 20% MoM
## What Happened
New customer signups dropped from 1,000 to 800 while
overall traffic remained stable (+2%).
## Root Cause
- Conversion rate fell from 5% to 4%
- 30% abandonment increase on payment page
- Correlated with payment processor downtime (Feb 15-18)
## Business Impact
- Revenue: $50K lost (20% of new customer LTV)
- Compounding: Affects future months' retention cohorts
## Recommendation
- Implement backup payment processor (est. $5K/mo)
- Add error monitoring alerts for checkout flow
- Offer recovery discount to abandoned carts
Step 2: Prioritize Recommendations
Prompt in Cursor chat:
"Based on the analysis summary, create a prioritized
recommendation table with:
- Action item
- Expected impact (revenue/efficiency)
- Effort estimate (low/medium/high)
- Priority score
- Owner suggestion
Format as a markdown table and save to
conclusions/recommendations.md"
Cursor:
| # | Action | Impact | Effort | Priority |
|---|---------------------------|----------|--------|----------|
| 1 | Backup payment processor | $50K/mo | Medium | P0 |
| 2 | Checkout error monitoring | $20K/mo | Low | P0 |
| 3 | Cart recovery campaign | $15K/mo | Low | P1 |
| 4 | Price optimization review | $30K/mo | High | P2 |
Conclusion Framework
Every insight should answer four questions:
- What happened? Clear statement of the observation with data
- Why does it matter? Business impact quantified in dollars or KPIs
- What caused it? Root cause with supporting data from EDA
- What should we do? Specific, actionable recommendations with effort/impact
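The workflow doesn't prescribe a scoring formula for the priority column; one simple, hypothetical impact-per-effort scheme, with thresholds chosen here to reproduce the example table above, would be:

```python
EFFORT_COST = {"low": 1, "medium": 2, "high": 3}

def priority(impact_per_month, effort):
    """Higher impact per unit of effort = do sooner.
    Thresholds are illustrative, not a standard."""
    score = impact_per_month / EFFORT_COST[effort]
    if score >= 20_000:
        return "P0"
    if score >= 15_000:
        return "P1"
    return "P2"

priority(50_000, "medium")  # → "P0"  (backup payment processor)
priority(30_000, "high")    # → "P2"  (price optimization review)
```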
End-to-End Synthesis Prompt
Prompt in Cursor chat:
"@eda_orders.md @dq_report.md @metrics.yml
Synthesize all findings into a conclusions document:
1. Summarize analysis results into a cohesive story per key finding
2. Create prioritized recommendation table
3. Write executive summary (5 bullet points max)
Save to conclusions/conclusions.md and commit
via GitHub MCP with message 'Analysis synthesis - Feb 2026'"
Input Sources
- EDA results from Checkpoint 4
- Query results from Checkpoint 5
- Data quality report from Checkpoint 3
- metrics.yml for consistent definitions
Output Artifacts
- Summary of analysis findings per key insight
- Prioritized recommendation table
- Executive summary for stakeholders
Deliver Three Polished Reports — From One Cursor Session
Transform synthesized conclusions into polished, stakeholder-ready deliverables. Generate three distinct report types: Executive PDF for leadership, Static HTML for branded reports, and Interactive Dashboard for deep-dive analysis.
Generate branded, consistent HTML reports from analysis results using a Cursor Skill
- Executive PDF (report_summary.pdf) - Concise presentation format for leadership, board pre-reads, email attachments (~8 pages)
- Static HTML (report.html) - Self-contained branded report with embedded ECharts, shareable via link or file
- Interactive Dashboard (report_interactive.html) - Bootstrap + DataTables + ECharts for deep-dive analysis with sortable/filterable tables
- Consistent structure - same professional layout every time using Cursor Skills
- One prompt per type - from synthesis to stakeholder-ready deliverable
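The real deliverables use Bootstrap, DataTables, and ECharts via the reporting Skill; as a stripped-down illustration of the core step (turning a saved JSON result set into an HTML table), a sketch might be:

```python
from html import escape

def json_to_html_table(rows):
    """Render a list of dicts (one saved query result) as a plain HTML table."""
    if not rows:
        return "<p>No data.</p>"
    headers = list(rows[0])
    head = "".join(f"<th>{escape(h)}</th>" for h in headers)
    body = "".join(
        "<tr>" + "".join(f"<td>{escape(str(r.get(h, '')))}</td>" for h in headers) + "</tr>"
        for r in rows
    )
    return f"<table><thead><tr>{head}</tr></thead><tbody>{body}</tbody></table>"
```

Feeding this the contents of `data/01_monthly-revenue.json` yields a table fragment you can drop into any of the HTML deliverables; the Skill layers styling and interactivity on top.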
Purpose & Use Cases
Concise presentation format (~8 pages) for email to leadership, board pre-reads, and executive distribution. Uses ReportLab for professional PDF generation.
Prompt in Cursor chat:
"Generate executive PDF summary with key metrics dashboard
and top 3 recommendations"
"@conclusions/conclusions.md
Generate a report_summary.pdf with:
- Page 1-2: Executive summary + key metrics dashboard
- Page 3-5: Top insights with supporting analysis summary
- Page 6-7: Prioritized recommendations with impact/effort matrix
- Page 8: Methodology and data quality status
Use company brand colors and logo.
Save as deliverables/report_summary.pdf"
Purpose & Use Cases
Self-contained static HTML with embedded ECharts (CDN). Shareable branded report with charts and full methodology. Perfect for web hosting or file sharing.
Prompt in Cursor chat:
"Generate branded HTML report with company colors, embedded charts,
and full methodology"
"@conclusions/conclusions.md @dq_report.md
Create a self-contained report.html that includes:
- Executive summary with top 3 insights
- Key metrics dashboard with MoM comparisons
- Embedded ECharts visualizations (using CDN)
- Detailed findings from synthesis
- Data quality status
- Prioritized recommendations
- Methodology appendix
Use consistent CSS with brand colors.
Save as deliverables/report.html"
Purpose & Use Cases
Deep-dive analysis dashboard with Bootstrap 5 + DataTables + ECharts. Features sortable/filterable tables, dynamic filtering, and interactive visualizations for exploratory analysis.
Prompt in Cursor chat:
"Generate interactive dashboard with DataTables for all result tables
and dynamic filtering"
"@conclusions/conclusions.md @data/01_monthly-revenue.json
Create report_interactive.html with:
- Bootstrap 5 responsive layout
- DataTables for all result tables (sortable, filterable, searchable)
- ECharts for interactive visualizations
- Dynamic filters for date ranges and segments
- Drill-down capabilities for detailed analysis
- Export functionality (CSV, Excel)
Save as deliverables/report_interactive.html"
Add Scrollytelling (Optional)
Scrollytelling adds scroll-triggered chart updates that guide stakeholders through insights like a story:
Prompt in Cursor chat:
"Add scrollytelling to the interactive dashboard:
- Section 1: Revenue overview (chart updates as user scrolls)
- Section 2: Zoom into key trends
- Section 3: User segment breakdown
- Section 4: Recommendations with impact estimates
Use Intersection Observer for scroll triggers and
CSS transitions for smooth chart animations."
End-to-End Delivery Workflow
Prompt in Cursor chat:
"@conclusions/conclusions.md @dq_report.md @data/
Generate all three deliverables:
1. Executive PDF summary (report_summary.pdf)
→ ~8 pages, leadership format
2. Static HTML report (report.html)
→ Self-contained with embedded ECharts
3. Interactive dashboard (report_interactive.html)
→ Bootstrap + DataTables + ECharts
Commit all via GitHub MCP with message
'Monthly analysis deliverables - February 2026'"
Cursor:
1. Activates HTML Report Generator Skill
2. Generates report_summary.pdf using ReportLab
3. Creates report.html with embedded charts (ECharts CDN)
4. Builds report_interactive.html with DataTables
5. Saves all files to deliverables/
6. Commits via GitHub MCP
7. Confirms: "3 deliverables generated and committed ✓"
Executive PDF
ReportLab format, ~8 pages, perfect for email to leadership and board pre-reads
Static HTML
Self-contained with ECharts CDN, shareable branded report with embedded visualizations
Interactive Dashboard
Bootstrap + DataTables + ECharts for deep-dive analysis with sortable tables
Take it further
Want this running in your team — not just on your laptop?
The specific tools don't matter. When AI has direct access to your data, reusable skills for your team's workflows, and solid context about your business — analytical throughput transforms. I've seen it happen with teams across industries, and that's what I help build.