Making the Web Fun Again: Building a Digital Zen Garden
Rediscovering the joy of building delightful web experiences through a digital Zen garden. A reminder that websites can be both technically sound and genuinely fun.
The landscape of AI tooling has shifted dramatically over the past year, from Transformer-based architectures to vector databases to agentic systems, and keeping up with these trends and terms can be exhausting. While enterprise AI applications and open-source options compete for the spotlight, there's a sweet spot for developers looking to enhance their personal projects with AI capabilities. Let's explore how to do this effectively, with real costs and implementation details.
Before diving into specific ideas for integrating AI into your apps, it’s crucial to understand the available platforms and their tradeoffs. Having recently passed the AWS AI Practitioner certification, I’ve experimented with various services, but the ones listed below are what I’ve found most useful:
AWS AI Services:
Each platform requires different setup and authentication patterns. Let’s look at some real implementation examples.
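Before wiring up any service, I like to centralize credential loading. Here's a minimal sketch (the environment variable names are common conventions, and `loadAIConfig` is a hypothetical helper of my own, not part of any SDK):

```javascript
// Minimal sketch: centralize credential loading so each AI client reads
// from environment variables rather than hardcoded keys. The variable
// names below are common conventions, not requirements of any SDK.
function loadAIConfig(env = process.env) {
  const config = {
    openaiApiKey: env.OPENAI_API_KEY ?? null,
    awsRegion: env.AWS_REGION ?? "us-east-1",
  };
  // Fail fast on missing required credentials instead of at request time.
  if (!config.openaiApiKey) {
    throw new Error("OPENAI_API_KEY is not set");
  }
  return config;
}

const config = loadAIConfig({ OPENAI_API_KEY: "sk-test", AWS_REGION: "us-west-2" });
console.log(config.awsRegion);
```

Failing fast at startup beats discovering a missing key halfway through a batch of API calls.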
Content enhancement is often the easiest entry point for AI integration. Here's how I've seen AI used in various content workflows:
Streamline your development workflow with AI integrations. Here are some practical examples I use:
Example GitHub Action workflow for PR descriptions (similar to how I use GitHub Actions for Astro deployments, CO2 emissions tracking, and REI inventory monitoring):

```yaml
name: AI PR Review
on:
  pull_request:
    types: [opened, synchronize]

jobs:
  generate-description:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'

      - name: Install Dependencies
        run: npm install openai

      - name: Generate PR Description
        uses: actions/github-script@v6
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        with:
          script: |
            const generateDescription = require('./src/scripts/generatePRDescription.js');
            const commits = await github.rest.pulls.listCommits({
              owner: context.repo.owner,
              repo: context.repo.repo,
              pull_number: context.issue.number
            });
            const description = await generateDescription(commits);
            await github.rest.pulls.update({
              owner: context.repo.owner,
              repo: context.repo.repo,
              pull_number: context.issue.number,
              body: description
            });
```
The workflow's `require()` call pulls in `src/scripts/generatePRDescription.js`, which looks like this (written as CommonJS so the `require()` above works):

```javascript
const { Configuration, OpenAIApi } = require("openai");

async function generateDescription(commits) {
  // Initialize OpenAI client
  const openai = new OpenAIApi(new Configuration({
    apiKey: process.env.OPENAI_API_KEY
  }));

  // Extract commit messages and changes
  const commitMessages = commits.data.map(commit => commit.commit.message);

  // Create a prompt for OpenAI
  const prompt = `
    Given these git commit messages from a pull request:
    ${commitMessages.join('\n')}

    Generate a clear, concise PR description that:
    1. Summarizes the main changes
    2. Lists key modifications
    3. Highlights any breaking changes
    4. Uses markdown formatting

    Format as:
    ## Summary
    [summary text]

    ## Changes
    - [change 1]
    - [change 2]

    ## Breaking Changes
    - [if any]
  `;

  // Generate description using OpenAI
  const completion = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: prompt }],
    temperature: 0.7,
    max_tokens: 500
  });

  return completion.data.choices[0].message.content;
}

// CommonJS export so the workflow's require() gets the function directly
module.exports = generateDescription;
```
When generating tests, I've found that with Cursor, the Cline VS Code plugin, or GitHub Copilot, you should start by having the tool write tests that capture your user stories. Typically, I define a set of user stories in a large .yaml file. The coding agent then takes that information and generates test cases. From there, I have the agent write the code needed to make the tests pass while still satisfying the use cases I defined earlier.
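A hypothetical sketch of what such a user-stories file might look like (the field names are illustrative, not a required schema):

```yaml
# user-stories.yaml (hypothetical) - the coding agent turns each story
# into test cases before writing the implementation.
stories:
  - id: US-1
    as_a: "blog reader"
    i_want: "to search posts by tag"
    so_that: "I can find related writing quickly"
    acceptance:
      - "searching for an existing tag returns at least one post"
      - "searching for an unknown tag returns an empty list"
```

Keeping acceptance criteria as plain, testable sentences gives the agent something concrete to assert against.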
Cost breakdown:
Please note that the cost breakdown depends on how much you use the large language model and its current pricing. I've found that with Cline, if you aren't careful, you can quickly get above $5.00 USD. Just recently, I had Cline read my entire /posts folder and interlink my writing better, and it cost ~$1.50.
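A rough back-of-the-envelope estimator helps before kicking off a big agent run. The per-token prices below are illustrative placeholders, not current rates; always check the provider's pricing page:

```javascript
// Rough cost estimator for API-based LLM usage. The prices are
// ILLUSTRATIVE placeholders (USD per 1K tokens) -- check the provider's
// current pricing page before relying on these numbers.
const PRICE_PER_1K = {
  input: 0.0005,
  output: 0.0015,
};

function estimateCostUSD(inputTokens, outputTokens, prices = PRICE_PER_1K) {
  const cost =
    (inputTokens / 1000) * prices.input +
    (outputTokens / 1000) * prices.output;
  // Round to 4 decimal places for display.
  return Math.round(cost * 10000) / 10000;
}

// e.g. a run that reads a whole posts folder might send ~500K input tokens:
console.log(estimateCostUSD(500000, 50000)); // 0.325
```

Even a crude estimate like this makes it obvious when a "quick" agent task is about to read half your repository.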
Transform and optimize media assets automatically. Here’s how I handle media in my projects:
Real implementation example from a prototype I built out:
```javascript
import { RekognitionClient, DetectLabelsCommand } from "@aws-sdk/client-rekognition";
import { Configuration, OpenAIApi } from "openai";

async function generateAltText(imageBuffer) {
  // First, get labels from Rekognition
  const rekognition = new RekognitionClient({ region: "us-east-1" });
  const detectLabelsCommand = new DetectLabelsCommand({
    Image: { Bytes: imageBuffer },
    MaxLabels: 5,
    MinConfidence: 90
  });
  const { Labels } = await rekognition.send(detectLabelsCommand);

  // Then, use GPT to create a natural description
  const openai = new OpenAIApi(new Configuration({
    apiKey: process.env.OPENAI_API_KEY
  }));
  const prompt = `
    Create a natural alt text description using these image labels:
    ${Labels.map(label => label.Name).join(', ')}
  `;
  const completion = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: prompt }]
  });

  return completion.data.choices[0].message.content;
}

// Usage example:
const altText = await generateAltText(imageBuffer);
console.log(`Generated alt text: ${altText}`);
```
Real costs from my prototype:
Turn raw data into actionable insights. Here’s how I use AI to analyze my blog’s performance:
Real implementation for generating weekly reports:
```javascript
import { Configuration, OpenAIApi } from "openai";
import { PostHogClient } from "posthog-node";

async function generateWeeklyReport(startDate, endDate) {
  // Initialize clients
  const posthog = new PostHogClient(process.env.POSTHOG_API_KEY);
  const openai = new OpenAIApi(new Configuration({
    apiKey: process.env.OPENAI_API_KEY
  }));

  // Fetch analytics data
  const metrics = await posthog.events.list({
    from_date: startDate,
    to_date: endDate
  });

  // Structure data for analysis
  const analyticsData = {
    pageViews: metrics.filter(e => e.event === 'pageview'),
    uniqueVisitors: metrics.filter(e => e.properties.distinct_id),
    popularPosts: getTopPosts(metrics),
    bounceRate: calculateBounceRate(metrics)
  };

  // Generate insights using GPT
  const prompt = `
    Analyze this weekly blog performance data and provide key insights:
    - Total pageviews: ${analyticsData.pageViews.length}
    - Unique visitors: ${analyticsData.uniqueVisitors.length}
    - Top posts: ${JSON.stringify(analyticsData.popularPosts)}
    - Bounce rate: ${analyticsData.bounceRate}%

    Focus on:
    1. Notable trends
    2. Content performance
    3. User engagement patterns
    4. Actionable recommendations
  `;

  const completion = await openai.createChatCompletion({
    model: "gpt-4",
    messages: [{ role: "user", content: prompt }],
    temperature: 0.7
  });

  return {
    rawData: analyticsData,
    insights: completion.data.choices[0].message.content,
    generatedAt: new Date().toISOString()
  };
}

// Helper functions
function getTopPosts(metrics) {
  return metrics
    .filter(e => e.event === 'pageview' && e.properties.path.includes('/posts/'))
    .reduce((acc, curr) => {
      const path = curr.properties.path;
      acc[path] = (acc[path] || 0) + 1;
      return acc;
    }, {});
}

function calculateBounceRate(metrics) {
  // Implementation for bounce rate calculation
}

// Usage example:
const lastWeekReport = await generateWeeklyReport(
  '2025-01-26',
  '2025-02-02'
);
console.log(lastWeekReport.insights);
```
Costs and performance metrics:
Enhance user interaction with smart features. Here’s how I implement AI-powered UX in my blog:
Implementation costs and performance:
Cost-effective implementation strategies:
When implementing AI features, here are the main challenges I’ve encountered and how to address them:
Most AI services have strict rate limits and varying context window sizes. Here’s how I handle them:
```javascript
class RateLimiter {
  constructor(options) {
    this.maxRequests = options.maxRequests;
    this.perSeconds = options.perSeconds;
    this.requests = [];
  }

  async acquire() {
    const now = Date.now();
    // Drop timestamps that have aged out of the current window
    this.requests = this.requests.filter(
      time => now - time < this.perSeconds * 1000
    );
    if (this.requests.length >= this.maxRequests) {
      const oldestRequest = this.requests[0];
      const waitTime = (oldestRequest + this.perSeconds * 1000) - now;
      await new Promise(resolve => setTimeout(resolve, waitTime));
    }
    // Record when the slot is actually taken (not the stale `now`
    // captured before any waiting)
    this.requests.push(Date.now());
  }
}

// Usage example:
const limiter = new RateLimiter({
  maxRequests: 10,
  perSeconds: 60
});

async function makeAIRequest() {
  await limiter.acquire();
  // Make your API call here
}
```
Similar to the RateLimiter class above, you can also chunk your requests so each one fits within the LLM's context window. There are a few different ways to go about this, with different results; be sure to investigate and try a couple of options to see what works best for your use case.
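The simplest version is naive fixed-size chunking. A sketch, using character counts as a stand-in for a real token budget (production code would count tokens with the model's tokenizer, and might split on sentence or paragraph boundaries instead):

```javascript
// Naive text chunking to stay under a model's context window.
// maxChars is a stand-in for a real token budget; counting tokens with
// the model's tokenizer would be more accurate.
function chunkText(text, maxChars) {
  const chunks = [];
  for (let i = 0; i < text.length; i += maxChars) {
    chunks.push(text.slice(i, i + maxChars));
  }
  return chunks;
}

// Each chunk can then be sent as its own (rate-limited) request.
const pieces = chunkText("a".repeat(2500), 1000);
console.log(pieces.length); // 3
```

The trade-off: fixed-size chunks are trivial to implement but can split a sentence mid-thought, which is why boundary-aware splitting often produces better summaries.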
When you're running autonomous agents, something is always happening somewhere. You need to keep in mind that there is a cost to every action (might be a new Einstein law of relativity…).
There are, of course, subscriptions you can purchase, like Anthropic's Claude Pro for ~$20 USD per month, or GitHub Copilot. However, from my own experimentation, I've found that Cline is currently the best tool from a developer-experience perspective. You can give it full control to run commands in the terminal, write and create files, and much more. It's fascinating to never have to leave the Cursor IDE.
Again, you need to be cautious about your spending when relying solely on API-based LLMs.
When implementing AI features, it's crucial to consider the broader implications, especially user privacy. Here's how I approach responsible AI implementation:
```javascript
class PrivacyManager {
  constructor() {
    // NOTE - You can expand these to include other sensitive strings too.
    // The `g` flag ensures every occurrence is redacted, not just the first.
    this.sensitivePatterns = [
      /\b[\w\.-]+@[\w\.-]+\.\w{2,}\b/g, // Email
      /\b\d{3}[-.]?\d{3}[-.]?\d{4}\b/g, // Phone
      /\b\d{3}[-.]?\d{2}[-.]?\d{4}\b/g  // SSN
    ];
  }

  sanitizeText(text) {
    let sanitized = text;
    this.sensitivePatterns.forEach(pattern => {
      sanitized = sanitized.replace(pattern, '[REDACTED]');
    });
    return sanitized;
  }

  async processUserData(data) {
    return {
      ...data,
      // Only pass non-sensitive data to AI
      content: this.sanitizeText(data.content),
      // Store minimal required data
      metadata: {
        timestamp: new Date(),
        contentType: data.type
      }
    };
  }
}
```
Design your AI integrations to be:
Example abstraction layer:
```javascript
class AIProvider {
  constructor(provider) {
    this.provider = provider;
  }

  async generate(prompt) {
    return this.provider.generate(prompt);
  }
}
```
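To make the abstraction concrete, here's a self-contained sketch with two hypothetical providers behind the same interface. Both provider classes are mocks standing in for real SDK clients, not actual vendor APIs:

```javascript
// Provider-agnostic wrapper: swapping vendors means changing one
// constructor argument, not every call site. Both providers below are
// hypothetical mocks standing in for real SDK clients.
class AIProvider {
  constructor(provider) {
    this.provider = provider;
  }

  async generate(prompt) {
    return this.provider.generate(prompt);
  }
}

class MockOpenAIProvider {
  async generate(prompt) {
    return `openai:${prompt}`;
  }
}

class MockBedrockProvider {
  async generate(prompt) {
    return `bedrock:${prompt}`;
  }
}

// Call sites depend only on AIProvider, so switching vendors is one line:
const ai = new AIProvider(new MockOpenAIProvider());
ai.generate("hello").then(console.log); // openai:hello
```

This is the same shape as the abstraction layer above; when a provider raises prices or deprecates a model, only the constructor argument changes.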
When implementing AI features in personal projects:
Remember: AI should enhance your project, not define it. Focus on solving real problems and providing value to yourself or to users.
If you’ve made it this far and want to discuss AI implementations in your projects, reach out to me.