Objective: Prompt an LLM to perform data analysis and render visualizations via AVM’s distributed network.

Data Analysis

Use LLMs to generate analysis and plotting code, execute it across AVM nodes, and retrieve rich outputs such as charts or summary statistics.

Scenario: Trend Discovery

Analyze token transaction CSVs to extract metrics and visualize patterns without manual scripting.

Solution: LLM + AVM

  1. Prepare Sample
    Load a CSV subset locally; see the sampling sketch after these steps.
  2. Model Prompt
    Ask the LLM to write a Python execute(input) function that computes metrics and renders a chart.
  3. Sandbox Execution
    Run the code with AVM’s runPython, leveraging pandas and matplotlib.
  4. Collect Results
    Extract base64-encoded images or JSON stats for downstream use.
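
For step 1, a minimal sketch of trimming the CSV to a manageable subset before it reaches the model and the sandbox; the line-based split and the 200-row limit are assumptions, and the full example below reads the entire file for brevity.

import fs from "fs";

// Keep the header plus the first 200 data rows so the prompt and the
// sandbox payload stay small; adjust the limit to your dataset.
const lines = fs.readFileSync("data.csv", "utf-8").split("\n");
const csvSample = lines.slice(0, 201).join("\n");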

Example (TypeScript)

import { runPythonTool } from "@avm-ai/avm-vercel-ai";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import fs from "fs";

const tool = runPythonTool();
const csvSample = fs.readFileSync("data.csv", "utf-8");

async function analyze() {
  // Ask the model to write a self-contained execute(input) function.
  const prompt = `
Write a Python function execute(input) that reads input["data"] as CSV,
computes min, max, mean, and median, and returns a line chart as a
base64-encoded PNG.`;
  const { text: code } = await generateText({
    model: openai("gpt-4o"),
    prompt,
  });

  // Run the generated code in AVM's sandbox with the CSV sample as input.
  const result = await tool.exec({ code, input: { data: csvSample } });
  // result.output.image holds the base64-encoded PNG chart.
  return result;
}

analyze().catch(console.error);
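
To make step 4 concrete, here is a minimal sketch of handling the returned payload. It assumes result.output.image is a base64-encoded PNG string, as the comment above indicates; the stats field is hypothetical and depends on what your generated execute(input) actually returns.

import fs from "fs";

// Assumed result shape: output.image (base64 PNG) plus optional JSON stats.
function saveOutputs(result: { output: { image?: string; stats?: unknown } }) {
  if (result.output.image) {
    // Decode the base64 PNG and write the chart to disk for inspection.
    fs.writeFileSync("chart.png", Buffer.from(result.output.image, "base64"));
  }
  if (result.output.stats) {
    // Persist summary statistics (min, max, mean, median) as JSON.
    fs.writeFileSync("stats.json", JSON.stringify(result.output.stats, null, 2));
  }
}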

Next Steps

  • Customize visuals (bar, scatter); see the prompt sketch after this list.
  • Integrate seaborn or plotly via custom runtimes.
  • Embed in agent workflows via MCP.
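
Changing the visualization usually only requires changing the prompt; below is a minimal sketch requesting a scatter plot instead of a line chart. The wording and the column references are illustrative assumptions, and the rest of the pipeline (generateText, tool.exec, result handling) stays the same.

// Same flow as above, with the chart type swapped in the prompt.
const scatterPrompt = `
Write a Python function execute(input) that reads input["data"] as CSV,
computes min, max, mean, and median, and returns a scatter plot of
transaction value over time as a base64-encoded PNG.`;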