n8n AI Automation: Integrating AI Workflows with TypeScript

n8n can orchestrate AI workloads by integrating with external AI APIs. Using TypeScript, you can build custom AI nodes, manage prompts, handle API rate limits, and deploy robust AI-driven workflows.

Why Integrate AI in n8n with TypeScript?

n8n handles the orchestration side (triggers, branching, routing data between services), while TypeScript gives you type-safe custom nodes, version-controlled prompt templates, and proper error and rate-limit handling around the AI APIs you call.

Step 1: Setup and Environment

Install n8n globally and initialize a TypeScript project:

# Install n8n
npm install n8n -g

# Create custom n8n node project
mkdir n8n-ai-nodes
cd n8n-ai-nodes
npm init -y

# Install TypeScript and dependencies
npm install -D typescript ts-node-dev @types/node
npm install -D eslint @typescript-eslint/parser @typescript-eslint/eslint-plugin
npm install -D prettier husky lint-staged

# Install n8n core and AI SDKs
npm install n8n-core n8n-workflow
npm install openai@3  # the custom node below targets the v3 SDK (Configuration/OpenAIApi)
npm install dotenv

Configure TypeScript in tsconfig.json:

{
  "compilerOptions": {
    "target": "ES2020",
    "module": "CommonJS",
    "declaration": true,
    "outDir": "dist",
    "rootDir": "src",
    "strict": true,
    "esModuleInterop": true
  },
  "include": ["src"]
}
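
The Development Commands at the end of this post assume a few npm scripts. A minimal sketch of the scripts block for package.json (the script names are assumptions; swap in your preferred test runner):

{
  "scripts": {
    "build": "tsc",
    "watch": "tsc --watch",
    "test": "echo \"no tests configured yet\""
  }
}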

Step 2: Custom AI Node Example

Create src/nodes/OpenAICreateCompletion.ts:

import {
  INodeType,
  INodeTypeDescription,
  IExecuteFunctions,
  INodeExecutionData,
} from "n8n-workflow";
import { Configuration, OpenAIApi } from "openai";

export class OpenAICreateCompletion implements INodeType {
  description: INodeTypeDescription = {
    displayName: "OpenAI Create Completion",
    name: "openAICreateCompletion",
    icon: "file:openai.svg",
    group: ["transform"],
    version: 1,
    description: "Generate text using OpenAI GPT models",
    defaults: { name: "OpenAI Completion", color: "#29AAE3" },
    credentials: [{ name: "openAIApi", required: true }],
    inputs: ["main"],
    outputs: ["main"],
    properties: [
      {
        displayName: "Model",
        name: "model",
        type: "string",
        default: "text-davinci-003",
      },
      {
        displayName: "Prompt",
        name: "prompt",
        type: "string",
        default: "",
        typeOptions: { rows: 5 },
      },
      {
        displayName: "Max Tokens",
        name: "maxTokens",
        type: "number",
        default: 150,
      },
      {
        displayName: "Temperature",
        name: "temperature",
        type: "number",
        default: 0.7,
      },
    ],
  };

  async execute(this: IExecuteFunctions): Promise<INodeExecutionData[][]> {
    const items = this.getInputData();
    const returnData: INodeExecutionData[] = [];

    // Credentials come from the matching "openAIApi" credentials type
    const credentials = await this.getCredentials("openAIApi");
    const config = new Configuration({ apiKey: credentials.apiKey as string });
    const openai = new OpenAIApi(config);

    for (let i = 0; i < items.length; i++) {
      const prompt = this.getNodeParameter("prompt", i) as string;
      const model = this.getNodeParameter("model", i) as string;
      const maxTokens = this.getNodeParameter("maxTokens", i) as number;
      const temperature = this.getNodeParameter("temperature", i) as number;

      try {
        const response = await openai.createCompletion({
          model,
          prompt,
          max_tokens: maxTokens,
          temperature,
        });

        const text = response.data.choices[0]?.text || "";
        items[i].json = { ...items[i].json, completion: text };
      } catch (error) {
        // With "Continue on Fail" enabled, attach the error to the item
        // instead of aborting the whole workflow run
        if (this.continueOnFail()) {
          items[i].json = { ...items[i].json, error: (error as Error).message };
        } else {
          throw error;
        }
      }

      returnData.push(items[i]);
    }

    return this.prepareOutputData(returnData);
  }
}
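
The node requests credentials named openAIApi, so a matching credentials type must be registered with n8n as well. A minimal sketch, assuming the file lives at src/credentials/OpenAIApi.credentials.ts (only the name has to match what the node asks for):

import { ICredentialType, INodeProperties } from "n8n-workflow";

export class OpenAIApi implements ICredentialType {
  name = "openAIApi";
  displayName = "OpenAI API";
  properties: INodeProperties[] = [
    {
      displayName: "API Key",
      name: "apiKey",
      type: "string",
      // Mask the key in the n8n UI
      typeOptions: { password: true },
      default: "",
    },
  ];
}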

Step 3: Prompt Management and Templates

Store prompt templates as JSON or Markdown files under src/prompts and load them dynamically:

import fs from 'fs/promises'
import path from 'path'

export async function loadPrompt(name: string): Promise<string> {
  const filePath = path.join(__dirname, '..', 'prompts', `${name}.md`)
  return await fs.readFile(filePath, 'utf-8')
}

Use {{variable}} placeholders in the templates and replace them at runtime:

function renderPrompt(
  template: string,
  variables: Record<string, any>
): string {
  return Object.entries(variables).reduce(
    (acc, [key, value]) =>
      acc.replace(new RegExp(`{{${key}}}`, "g"), String(value)),
    template
  );
}
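
Putting the two helpers together (the summarize.md template and its variables are hypothetical examples):

// Assuming src/prompts/summarize.md contains something like:
//   Summarize the following article in {{language}}:
//   {{article}}
const article = "..."; // e.g. text taken from the incoming workflow item
const template = await loadPrompt("summarize");
const prompt = renderPrompt(template, { language: "English", article });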

Step 4: Rate Limit and Retry Strategies

Implement exponential backoff for OpenAI rate limits (HTTP 429):

async function callWithRetry<T>(
  fn: () => Promise<T>,
  retries = 3,
  delay = 1000
): Promise<T> {
  try {
    return await fn();
  } catch (error: any) {
    if (retries > 0 && error.response?.status === 429) {
      await new Promise(r => setTimeout(r, delay));
      return callWithRetry(fn, retries - 1, delay * 2);
    }
    throw error;
  }
}

Use it inside the node:

const response = await callWithRetry(() =>
  openai.createCompletion({ model, prompt, max_tokens: maxTokens })
);
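
As a refinement, 429 responses often carry a Retry-After header, and honoring it is gentler than a blind fixed delay. A sketch, assuming the axios-style error shape thrown by the v3 SDK:

function retryDelayMs(error: any, fallbackMs: number): number {
  // Retry-After is specified in seconds; fall back to our own
  // backoff delay when the header is missing or unparsable
  const header = error.response?.headers?.["retry-after"];
  const seconds = Number(header);
  return Number.isFinite(seconds) ? seconds * 1000 : fallbackMs;
}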

Step 5: Production Deployment

  1. Dockerize n8n with custom AI nodes mounted (see the sketch after this list)
  2. Manage secrets with process.env and .env
  3. Deploy n8n in Kubernetes or cloud VM
  4. Scale workflows with database queues (Postgres/Redis)
  5. Monitor AI usage and costs via dashboards
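
A minimal Docker sketch for point 1 (the /home/node/custom mount point and the local ./dist/nodes path are assumptions; adjust to your build output and image version):

# Run the official n8n image with the compiled custom nodes mounted
docker run -d --name n8n \
  -p 5678:5678 \
  -e N8N_CUSTOM_EXTENSIONS="/home/node/custom" \
  -v "$(pwd)/dist/nodes:/home/node/custom" \
  -v n8n_data:/home/node/.n8n \
  n8nio/n8n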

Best Practices Summary

  1. Pin the OpenAI SDK version your nodes are compiled against
  2. Keep prompts in version-controlled template files instead of inline strings
  3. Wrap every API call in retry logic with exponential backoff
  4. Respect continueOnFail() so one bad item does not abort a whole batch
  5. Keep API keys in n8n credentials or environment variables, never in code

Development Commands

# Build custom nodes
npm run build

# Start n8n with AI nodes
export N8N_CUSTOM_EXTENSIONS="$(pwd)/dist/nodes"
n8n start

# Watch and rebuild
npm run watch

# Run tests
npm test

Your n8n instance is now enhanced with AI automation workflows using TypeScript, custom nodes, and best practices for robust AI integrations!

