SebastienWae/maven-lightning-search

Maven Lightning Search

A search application for discovering and exploring Maven workshops and talks and their instructors. Built with modern web technologies and deployed on Cloudflare.

Features

  • Full-text search - Search workshops by title and description
  • Advanced filtering - Filter by tags, instructors, and status (Scheduled/Live/Recorded)
  • Sortable results - Sort by start time or duration (ascending/descending)
  • Pagination - Configurable rows per page (10, 20, 50, 100)
  • CSV export - Download filtered results as CSV
  • RSS feed - Subscribe to filtered results via RSS
  • Featured talks - Highlighted workshops marked as featured
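The search, filter, sort, and pagination features above can be sketched in plain TypeScript. This is an illustrative in-memory version only; the real app queries D1 through Drizzle ORM, and the field names here are assumptions, not the actual schema:

```typescript
// Hypothetical talk shape; field names are illustrative, not the real schema.
type Talk = {
  title: string;
  description: string;
  tags: string[];
  status: "Scheduled" | "Live" | "Recorded";
  durationMinutes: number;
};

type Query = {
  text?: string;            // full-text search over title + description
  tag?: string;             // tag filter
  status?: Talk["status"];  // status filter
  page: number;             // 1-based page index
  rowsPerPage: 10 | 20 | 50 | 100;
};

function searchTalks(talks: Talk[], q: Query): Talk[] {
  const needle = q.text?.toLowerCase();
  return talks
    .filter((t) =>
      (!needle || (t.title + " " + t.description).toLowerCase().includes(needle)) &&
      (!q.tag || t.tags.includes(q.tag)) &&
      (!q.status || t.status === q.status),
    )
    .sort((a, b) => a.durationMinutes - b.durationMinutes) // ascending by duration
    .slice((q.page - 1) * q.rowsPerPage, q.page * q.rowsPerPage);
}
```

The same shape maps naturally onto a SQL query (WHERE, ORDER BY, LIMIT/OFFSET) against D1.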

Tech Stack

Category        Technology
Runtime         Bun
Monorepo        Bun workspaces
Database        Cloudflare D1 (SQLite)
ORM             Drizzle ORM
Frontend        React 19
Routing         TanStack Router
Data Fetching   TanStack Query
SSR             TanStack Start
Styling         Tailwind CSS v4
Components      shadcn/ui (Base UI)
Scraper         Cloudflare Workers
Hosting         Cloudflare Pages
Linting         Biome
Logging         tslog

Prerequisites

  • Bun (runtime and package manager)
  • A Cloudflare account

Setup

1. Install dependencies

bun install

2. Run Cloudflare setup

Create a Cloudflare API token with the following permissions:

  • D1: Edit
  • Account Settings: Read

Add the token to your .env (create the file if needed):

# .env
CLOUDFLARE_API_TOKEN=...

Then run the setup script:

bun run setup_cf.ts

This will automatically:

  • Create a D1 database (maven_search_prod)
  • Overwrite .env with your credentials
  • Create wrangler.jsonc configuration files

You’ll also need to authenticate Wrangler for migrations and deployment:

bunx wrangler login

3. Set up the database

Generate and run migrations:

# Generate migrations from schema
bun drizzle-kit generate

# Run migrations on remote D1
bun drizzle-kit migrate

For local development, prefix commands with LOCAL=1:

LOCAL=1 bun drizzle-kit migrate   # Run migrations locally
LOCAL=1 bun drizzle-kit studio    # Open Drizzle Studio
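One way the LOCAL=1 switch can work is a drizzle.config.ts that branches on the environment variable. This is a hedged sketch, not the repo's actual config: the schema/output paths, the local SQLite path, and the extra env var names are assumptions (drizzle-kit does support a d1-http driver with accountId/databaseId/token credentials):

```typescript
// drizzle.config.ts (illustrative sketch; paths and env var names are assumptions)
import { defineConfig } from "drizzle-kit";

const local = process.env.LOCAL === "1";

export default defineConfig(
  local
    ? {
        // Local dev: point drizzle-kit at wrangler's local SQLite file.
        dialect: "sqlite",
        schema: "./src/db/schema.ts",
        out: "./drizzle",
        dbCredentials: { url: ".wrangler/state/v3/d1/..." },
      }
    : {
        // Remote: talk to the D1 database over Cloudflare's HTTP API.
        dialect: "sqlite",
        driver: "d1-http",
        schema: "./src/db/schema.ts",
        out: "./drizzle",
        dbCredentials: {
          accountId: process.env.CLOUDFLARE_ACCOUNT_ID!,
          databaseId: process.env.CLOUDFLARE_DATABASE_ID!,
          token: process.env.CLOUDFLARE_API_TOKEN!,
        },
      },
);
```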

4. Seed the database

Start the scraper locally and trigger a scrape:

bun run scrape:dev
curl http://localhost:8080/trigger

Development

Web application

bun run web:dev

Opens at http://localhost:3000

Scraper (local)

bun run scrape:dev

The scraper runs as a Cloudflare Worker with:

  • Cron trigger: Automated scheduled scraping
  • HTTP trigger: Manual scrape via GET /trigger
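These two triggers correspond to a Worker's scheduled and fetch handlers. A minimal sketch of that shape, where runScrape is a hypothetical stand-in for the real scraping logic:

```typescript
// Minimal Cloudflare Worker with both a cron and an HTTP trigger.
// runScrape is a hypothetical stand-in for the real scraping logic.
async function runScrape(): Promise<number> {
  // ...fetch Maven pages, upsert talks into D1...
  return 0; // number of talks scraped
}

const worker = {
  // Cron trigger: the runtime invokes this on the schedule in wrangler.jsonc.
  async scheduled(_event: unknown, _env: unknown, ctx: { waitUntil(p: Promise<unknown>): void }) {
    ctx.waitUntil(runScrape());
  },

  // HTTP trigger: GET /trigger starts a scrape manually.
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    if (request.method === "GET" && url.pathname === "/trigger") {
      const count = await runScrape();
      return new Response(`scraped ${count} talks`);
    }
    return new Response("not found", { status: 404 });
  },
};

export default worker;
```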

Quality checks

bun run check
bun run type-check

To auto-fix lint/format issues, run bun run check:fix.

Deployment

Deploy the scraper

bun run scrape:deploy

Deploys the scraper as a Cloudflare Worker with scheduled cron triggers.

Deploy the web app

bun run web:deploy

Builds and deploys to Cloudflare Pages.

Common Scripts

Script               Description
bun install          Install dependencies
bun run web:dev      Run the web app locally
bun run scrape:dev   Run the scraper locally
bun run check        Lint + format checks (Biome)
bun run check:fix    Auto-fix lint/format issues
bun run type-check   TypeScript type checking

For the full list of scripts, see package.json.

License

MIT License. See LICENSE for details.
