Full-stack Nuxt framework with database, KV, blob, and cache layers. Multi-cloud support (Cloudflare, Vercel, Deno, Netlify).

- **For Nuxt server patterns:** use the `nuxt` skill (`server.md`)
- **For content with a database:** use the `nuxt-content` skill
## Setup

```bash
npx nuxi module add hub
```

```ts
// nuxt.config.ts
export default defineNuxtConfig({
  modules: ['@nuxthub/core'],
  hub: {
    db: 'sqlite', // 'sqlite' | 'postgresql' | 'mysql'
    kv: true,
    blob: true,
    cache: true,
    dir: '.data', // local storage directory
    remote: false // use production bindings in dev (v0.10+)
  }
})
```

### Advanced Config

```ts
hub: {
  db: {
    dialect: 'postgresql',
    driver: 'postgres-js', // Optional: auto-detected
    casing: 'snake_case', // camelCase JS -> snake_case DB (v0.10.3+)
    migrationsDirs: ['server/db/custom-migrations/'],
    applyMigrationsDuringBuild: true, // default
    replica: { // Read replica support (v0.10.6+)
      connection: { connectionString: process.env.DATABASE_REPLICA_URL }
    }
  },
  remote: true // Use production Cloudflare bindings in dev (v0.10+)
}
```
## Database

Define your schema in `server/db/schema.ts` or `server/db/schema/*.ts`:

```ts
// server/db/schema.ts (SQLite)
import { integer, sqliteTable, text } from 'drizzle-orm/sqlite-core'

export const users = sqliteTable('users', {
  id: integer().primaryKey({ autoIncrement: true }),
  name: text().notNull(),
  email: text().notNull().unique(),
  createdAt: integer({ mode: 'timestamp' }).notNull()
})
```

PostgreSQL variant:

```ts
import { pgTable, serial, text, timestamp } from 'drizzle-orm/pg-core'

export const users = pgTable('users', {
  id: serial().primaryKey(),
  name: text().notNull(),
  email: text().notNull().unique(),
  createdAt: timestamp().notNull().defaultNow()
})
```

### Database API

`db` and `schema` are auto-imported on the server side.

```ts
import { db, schema } from 'hub:db'
import { eq } from 'drizzle-orm'

// Select
const users = await db.select().from(schema.users)
const user = await db.query.users.findFirst({ where: eq(schema.users.id, 1) })

// Insert
const [newUser] = await db.insert(schema.users)
  .values({ name: 'John', email: 'john@example.com' })
  .returning()

// Update
await db.update(schema.users).set({ name: 'Jane' }).where(eq(schema.users.id, 1))

// Delete
await db.delete(schema.users).where(eq(schema.users.id, 1))
```

### Migrations

```bash
npx nuxt db generate                  # Generate migrations from schema
npx nuxt db migrate                   # Apply pending migrations
npx nuxt db sql "SELECT * FROM users" # Execute raw SQL
npx nuxt db drop <TABLE>              # Drop a specific table
npx nuxt db drop-all                  # Drop all tables (v0.10+)
npx nuxt db squash                    # Squash migrations into one (v0.10+)
npx nuxt db mark-as-migrated [NAME]   # Mark as migrated without running
```
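Putting the schema and API together, here is a minimal sketch of a create-user route. The route path, request shape, and inline validation are illustrative assumptions, not NuxtHub requirements:

```typescript
// server/api/users.post.ts — hypothetical endpoint
import { db, schema } from 'hub:db'

export default defineEventHandler(async (event) => {
  const body = await readBody<{ name?: string; email?: string }>(event)

  // Minimal guard; swap in real validation (e.g. zod) as needed
  if (!body?.name || !body?.email) {
    throw createError({ statusCode: 400, statusMessage: 'name and email are required' })
  }

  const [user] = await db.insert(schema.users)
    .values({ name: body.name, email: body.email, createdAt: new Date() })
    .returning()

  return user
})
```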
Migrations are applied automatically during `npx nuxi dev` and `npx nuxi build`, tracked in the `_hub_migrations` table. The local SQLite database lives at `.data/db/sqlite.db`.

Driver environment variables:

- Turso (`TURSO_DATABASE_URL`, `TURSO_AUTH_TOKEN`)
- postgres-js (`DATABASE_URL`); neon-http (v0.10.2+, `DATABASE_URL`)
- MySQL (`DATABASE_URL`, `MYSQL_URL`)

## KV

`kv` is auto-imported on the server side.

```ts
import { kv } from 'hub:kv'

await kv.set('key', { data: 'value' })
await kv.set('key', { data: 'value' }, { ttl: 60 }) // TTL in seconds
const value = await kv.get('key')
const exists = await kv.has('key')
await kv.del('key')
const keys = await kv.keys('prefix:')
await kv.clear('prefix:')
```
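A common pattern on top of this API is a read-through cache. A sketch, where the route path and the `computeExpensiveTotal` helper are hypothetical:

```typescript
// server/api/stats.get.ts — hypothetical read-through cache via KV
import { kv } from 'hub:kv'

export default defineEventHandler(async () => {
  const cached = await kv.get('stats:total')
  if (cached) return cached

  const fresh = { total: await computeExpensiveTotal() } // hypothetical helper
  await kv.set('stats:total', fresh, { ttl: 300 }) // recompute at most every 5 min
  return fresh
})
```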
KV driver environment variables:

- `@upstash/redis`: `UPSTASH_REDIS_REST_URL`, `UPSTASH_REDIS_REST_TOKEN`
- `ioredis`: `REDIS_URL`
- Cloudflare: `KV` binding in `wrangler.jsonc`
- Vercel KV: `KV_REST_API_URL`, `KV_REST_API_TOKEN`

## Blob

`blob` is auto-imported on the server side.

```ts
import { blob } from 'hub:blob'

// Upload
const result = await blob.put('path/file.txt', body, {
  contentType: 'text/plain',
  access: 'public', // 'public' | 'private' (v0.10.2+)
  addRandomSuffix: true,
  prefix: 'uploads'
})
// Returns: { pathname, contentType, size, httpEtag, uploadedAt }

// Download
const file = await blob.get('path/file.txt') // Returns Blob or null

// List
const { blobs, cursor, hasMore, folders } = await blob.list({
  prefix: 'uploads/',
  limit: 10,
  folded: true
})

// Serve (with proper headers)
return blob.serve(event, 'path/file.txt')

// Delete
await blob.del('path/file.txt')
await blob.del(['file1.txt', 'file2.txt']) // Multiple

// Metadata only
const meta = await blob.head('path/file.txt')
```

### Upload Helpers

```ts
// Server: validate + upload handler
export default eventHandler(async (event) => {
  return blob.handleUpload(event, {
    formKey: 'files',
    multiple: true,
    ensure: { maxSize: '10MB', types: ['image/png', 'image/jpeg'] },
    put: { addRandomSuffix: true, prefix: 'images' }
  })
})

// Validate before manual upload
ensureBlob(file, { maxSize: '10MB', types: ['image'] })

// Multipart upload for large files (>10MB)
export default eventHandler(async (event) => {
  return blob.handleMultipartUpload(event)
  // Route: /api/files/multipart/[action]/[...pathname]
})
```

### Vue Composables

```ts
// Simple upload
const upload = useUpload('/api/upload')
const result = await upload(inputElement)

// Multipart with progress
const mpu = useMultipartUpload('/api/files/multipart')
const { completed, progress, abort } = mpu(file)
```
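On the client, `useUpload` pairs with a server route built on `handleUpload`. A sketch of the script-setup side of an upload component, where the endpoint path is an assumption:

```typescript
// components/FileUpload.vue — <script setup lang="ts"> portion, illustrative
const upload = useUpload('/api/upload')

async function onFileInput(e: Event) {
  const input = e.target as HTMLInputElement
  if (!input.files?.length) return
  const uploaded = await upload(input) // server responds with blob metadata
  console.log(uploaded)
}
```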
Blob driver environment variables:

- Cloudflare: `BLOB` binding in `wrangler.jsonc`
- `@vercel/blob`: `BLOB_READ_WRITE_TOKEN`
- `aws4fetch` (S3): `S3_ACCESS_KEY_ID`, `S3_SECRET_ACCESS_KEY`, `S3_BUCKET`, `S3_REGION`

## Cache

```ts
export default cachedEventHandler((event) => {
  return { data: 'cached', date: new Date().toISOString() }
}, {
  maxAge: 60 * 60, // 1 hour
  getKey: event => event.path
})
```

### Function Caching

```ts
export const getStars = defineCachedFunction(
  async (event: H3Event, repo: string) => {
    const data = await $fetch(`https://api.github.com/repos/${repo}`)
    return data.stargazers_count
  },
  { maxAge: 3600, name: 'ghStars', getKey: (event, repo) => repo }
)
```

### Cache Invalidation

```ts
// Remove a specific entry (keyed by the `name` option, not the variable name)
await useStorage('cache').removeItem('nitro:functions:ghStars:repo-name.json')

// Clear by prefix
await useStorage('cache').clear('nitro:handlers')
```
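A cached function is then called like a regular function; only the first call per key hits GitHub until `maxAge` expires. A sketch, where the route and query parameter are illustrative and `getStars` is assumed to live in `server/utils/` so Nitro auto-imports it:

```typescript
// server/api/stars.get.ts — hypothetical consumer of getStars
export default defineEventHandler(async (event) => {
  const { repo = 'nuxt/nuxt' } = getQuery(event)
  return { repo, stars: await getStars(event, String(repo)) }
})
```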
Cache entries are stored under `${group}:${name}:${getKey(...args)}.json` (defaults: `group='nitro'`, `name='handlers' | 'functions' | 'routes'`).

## Cloudflare Deployment

NuxtHub generates `wrangler.json` from your hub config; no manual `wrangler.jsonc` is required:

```ts
// nuxt.config.ts
export default defineNuxtConfig({
  hub: {
    db: { dialect: 'sqlite', driver: 'd1', connection: { databaseId: '<database-id>' } },
    kv: { driver: 'cloudflare-kv-binding', namespaceId: '<kv-namespace-id>' },
    cache: { driver: 'cloudflare-kv-binding', namespaceId: '<cache-namespace-id>' },
    blob: { driver: 'cloudflare-r2', bucketName: '<bucket-name>' }
  }
})
```

**Observability (recommended):** enable logging for production deployments:

```jsonc
// wrangler.jsonc (optional)
{
  "observability": {
    "logs": {
      "enabled": true,
      "head_sampling_rate": 1,
      "invocation_logs": true,
      "persist": true
    }
  }
}
```

Create resources via the Cloudflare dashboard or CLI:

```bash
npx wrangler d1 create my-db            # Get database-id
npx wrangler kv namespace create KV     # Get kv-namespace-id
npx wrangler kv namespace create CACHE  # Get cache-namespace-id
npx wrangler r2 bucket create my-bucket # Get bucket-name
```
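For programmatic invalidation, that key layout can be captured in a small helper (the helper is illustrative, not part of NuxtHub):

```typescript
// Build the storage key for a cached function entry:
// `${group}:functions:${name}:${key}.json`, with group defaulting to 'nitro'
function functionCacheKey(name: string, key: string, group = 'nitro'): string {
  return `${group}:functions:${name}:${key}.json`
}

// functionCacheKey('ghStars', 'repo-name') -> 'nitro:functions:ghStars:repo-name.json'
```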
Set `CLOUDFLARE_ENV=preview` for preview deployments. To reach D1 over HTTP instead of a binding, use the `d1-http` driver:

```ts
hub: {
  db: { dialect: 'sqlite', driver: 'd1-http' }
}
```
Required environment variables: `NUXT_HUB_CLOUDFLARE_ACCOUNT_ID`, `NUXT_HUB_CLOUDFLARE_API_TOKEN`, `NUXT_HUB_CLOUDFLARE_DATABASE_ID`.

## Module Hooks

```ts
// Extend schema
nuxt.hook('hub:db:schema:extend', async ({ dialect, paths }) => {
  paths.push(await resolvePath(`./schema/custom.${dialect}`))
})

// Add migration directories
nuxt.hook('hub:db:migrations:dirs', (dirs) => {
  dirs.push(resolve('./db-migrations'))
})

// Post-migration queries (idempotent)
nuxt.hook('hub:db:queries:paths', (paths, dialect) => {
  paths.push(resolve(`./seed.${dialect}.sql`))
})
```

## Type Sharing

```ts
// shared/types/db.ts
import type { users } from '~/server/db/schema'

export type User = typeof users.$inferSelect
export type NewUser = typeof users.$inferInsert
```
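These shared types let the client consume API responses without duplicating the schema. A sketch, assuming a `/api/users` route that returns `User[]` (the import alias may differ by Nuxt version):

```typescript
// app/pages/users.vue — <script setup lang="ts"> portion, illustrative
import type { User } from '~~/shared/types/db'

const { data: users } = await useFetch<User[]>('/api/users')
```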
## WebSockets

Enable Nitro's experimental WebSocket support:

```ts
// nuxt.config.ts
nitro: { experimental: { websocket: true } }
```
```ts
// server/routes/ws/chat.ts
export default defineWebSocketHandler({
  open(peer) {
    peer.subscribe('chat')
    peer.publish('chat', 'User joined')
  },
  message(peer, message) {
    peer.publish('chat', message.text())
  },
  close(peer) {
    peer.unsubscribe('chat')
  }
})
```
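From the browser, the handler above is reachable at `/ws/chat`; a small URL helper keeps the ws/wss choice in one place (the helper is illustrative, not NuxtHub API):

```typescript
// Derive the WebSocket URL for the chat route (ws in dev, wss behind TLS)
function wsUrl(host: string, secure: boolean, path = '/ws/chat'): string {
  return `${secure ? 'wss' : 'ws'}://${host}${path}`
}

// In the browser:
// const ws = new WebSocket(wsUrl(location.host, location.protocol === 'https:'))
// ws.onmessage = (e) => console.log('chat:', e.data)
// ws.onopen = () => ws.send('hello')
```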
## Migrating to v0.10

| Removed / changed | Replacement |
| --- | --- |
| `hubAI()` | Use the AI SDK with the Workers AI provider |
| `hubBrowser()` | Puppeteer |
| `hubVectorize()` | Vectorize |
| `npx nuxthub deploy` | Use `wrangler deploy` |
| `hubDatabase()` | `import { db, schema } from 'hub:db'`; `db.select()`, `db.insert()`, etc. |
| `hubKV()` | `import { kv } from 'hub:kv'`; `kv.get()`, `kv.set()`, etc. |
| `hubBlob()` | `import { blob } from 'hub:blob'`; `blob.put()`, `blob.get()`, etc. |