A minimal, artistic web and mobile app for discovering events in cities worldwide.
- ❌ NO DARK BUTTONS: Avoid using dark background colors for buttons (especially in the admin panel).
- ✅ Light/Pastel Theme: Prefer light, translucent (`rgba`), or pastel background colors with dark text.
- Minimalist Style: Maintain an airy, modern, and minimalist aesthetic.
- Hover Transitions: Most buttons should be light by default and only become fully saturated on hover.
- ⚠️ ALWAYS activate virtual environment first: `source venv/bin/activate && python`
- ❌ NEVER use `python3` directly: Causes "no module named bs4" and other dependency errors
- ✅ CORRECT: `source venv/bin/activate && python scripts/venue_event_scraper.py`
- ❌ WRONG: `python3 scripts/venue_event_scraper.py` (uses system Python without dependencies)
- Use port 5001, never 5000: App runs on `http://localhost:5001`
- Environment variables: Add API keys to the `.env` file (never commit this file)
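As a guard against accidentally running scripts outside the venv, a small check like the following can fail fast (a hypothetical helper, not part of the current codebase):

```python
import sys

def running_in_venv() -> bool:
    """True when the interpreter belongs to a virtual environment.

    Inside a venv, sys.prefix points at the venv directory while
    sys.base_prefix still points at the system Python installation.
    """
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

def require_venv():
    """Fail fast with a helpful message when the venv is not active."""
    if not running_in_venv():
        raise SystemExit("Activate the venv first: source venv/bin/activate")
```

Calling `require_venv()` at the top of a scraper script turns the cryptic "no module named bs4" failure into an actionable message.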
Local source reload
- `curl -X POST http://localhost:5001/api/admin/reload-sources` - Use after editing `data/sources.json`
- Reloads sources from JSON into the local database
- Verify source changes locally before deploy/sync
Production sync (cities, venues, sources)

```bash
# Set PRODUCTION_DATABASE_URL in .env (PostgreSQL public URL from Railway), then:

# Dry run (preview only)
./venv/bin/python scripts/sync_json_to_production.py

# Apply changes
./venv/bin/python scripts/sync_json_to_production.py --apply
```

- Syncs cities, venues, and sources only (not events)
- Run the dry run first before `--apply`
- Prefer `PRODUCTION_DATABASE_URL` in `.env` so you do not need to prefix the command; falls back to `DATABASE_URL` if it is PostgreSQL
- Use the Railway Postgres public URL when running from a laptop (do not use `postgres.railway.internal` locally)
- Duplicate conflicts are skipped for manual review
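The dry-run-then-apply convention above can be sketched like this (a simplified illustration; the real `scripts/sync_json_to_production.py` logic may differ):

```python
import argparse

def plan_changes(local_rows, prod_rows):
    """Return rows present locally but missing in production (sketch).

    Real code would also diff fields and flag duplicate conflicts
    for manual review instead of applying them.
    """
    prod_names = {r["name"] for r in prod_rows}
    return [r for r in local_rows if r["name"] not in prod_names]

def main(argv=None):
    parser = argparse.ArgumentParser(description="Sync JSON data to production")
    parser.add_argument("--apply", action="store_true",
                        help="actually write changes; default is a dry-run preview")
    args = parser.parse_args(argv)

    changes = plan_changes(
        [{"name": "Hirshhorn"}, {"name": "NGA"}],  # placeholder local data
        [{"name": "NGA"}],                          # placeholder production data
    )
    if args.apply:
        print(f"Applying {len(changes)} change(s)")
    else:
        print(f"Dry run: {len(changes)} change(s) would be applied")
    return changes

if __name__ == "__main__":
    main()
```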
- Default OCR: Tesseract (local), Google Vision API (deployment)
- LLM Processing: Google Gemini for intelligent event extraction
- Smart Detection: Automatically chooses optimal OCR engine
- Instagram Context: Extracts page names, handles, and poster info
- Intelligent Processing: 90% confidence with Vision API, 80% with Tesseract
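The "smart detection" described above might look roughly like this (a sketch; the env-var checks are assumptions, with confidence values taken from the list above):

```python
def choose_ocr_engine(env):
    """Pick an OCR engine and expected confidence from environment vars.

    env is a mapping like os.environ. The RAILWAY_ENVIRONMENT /
    GOOGLE_APPLICATION_CREDENTIALS checks are illustrative assumptions,
    not the app's exact detection logic.
    """
    if env.get("GOOGLE_APPLICATION_CREDENTIALS") or env.get("RAILWAY_ENVIRONMENT"):
        return ("google_vision", 0.90)  # deployment: Vision API
    return ("tesseract", 0.80)          # local default
```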
STATUS: Feature fully functional - Auto-Fill button now works correctly!
- Paste Any Event URL: Automatically scrape and create events from web pages
- Auto-Fill Button: Click to extract event details before creating
- Smart Time Periods: Choose today, tomorrow, this week, this month, or custom dates
- Recurring Event Handling: Detects schedules like "Fridays 6:30pm - 7:30pm" or "Weekdays 3pm"
- Multi-Event Creation: Automatically creates events for all matching days in the period
- 🎯 Intelligent Extraction: Pulls title, description, times, and images from the page
- 🛡️ Bot Protection Bypass: Uses cloudscraper with retry logic (Railway-compatible)
- 🤖 LLM Fallback: Automatically uses AI (Gemini/Groq) when web scraping is blocked
- ✅ Duplicate Prevention: Skips events that already exist in the database
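Schedule strings like "Fridays 6:30pm - 7:30pm" or "Weekdays 3pm" can be detected with a pattern along these lines (an illustrative sketch, not the production regex):

```python
import re

SCHEDULE_RE = re.compile(
    r"(?P<day>Mondays|Tuesdays|Wednesdays|Thursdays|Fridays|Saturdays|Sundays|Weekdays)"
    r"\s+(?P<start>\d{1,2}(?::\d{2})?\s*[ap]m)"
    r"(?:\s*-\s*(?P<end>\d{1,2}(?::\d{2})?\s*[ap]m))?",
    re.IGNORECASE,
)

def parse_schedule(text):
    """Return {day, start, end} for the first schedule found, else None.

    end is None when the page only lists a start time (e.g. "Weekdays 3pm").
    """
    m = SCHEDULE_RE.search(text)
    if not m:
        return None
    return {"day": m.group("day"), "start": m.group("start"), "end": m.group("end")}
```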
- Auto-Fill Button Fixed (October 10, 2025):
  - Problem: Button click with `onclick="autoFillFromUrl(event)"` wasn't triggering
  - Root Cause: Event parameter not properly passed in inline onclick handler
  - Solution:
    - Removed inline onclick handler
    - Added button ID (`autoFillBtn`)
    - Simplified `autoFillFromUrl()` to not require parameters
    - Added proper event listener in `openUrlScraperModal()`
  - Status: ✅ Fully functional
- Venue Dropdown Loading:
  - Fixed: Changed from `/api/venues` to `/api/admin/venues`
  - Status: ✅ Working
- LLM Fallback Added (October 10, 2025):
  - Enhancement: Added automatic LLM extraction when bot protection blocks scraping
  - How it Works: Tries web scraping first (3 attempts), then automatically falls back to the LLM
  - LLM Providers: Google Gemini, Groq, OpenAI, Anthropic (automatic fallback chain)
  - Result: Bot-protected sites (like the Met Museum) now work!
  - Indicator: Extracted data includes `llm_extracted: true` and a confidence level
  - Status: ✅ Fully functional
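The provider fallback chain can be sketched as follows (a hypothetical helper; the real extraction code wires in actual Gemini/Groq/OpenAI/Anthropic clients):

```python
def extract_with_fallback(text, providers):
    """Try each (name, extract_fn) in order; the first non-empty result wins.

    The returned dict is tagged with llm_extracted / provider, mirroring
    the indicator fields described above.
    """
    for name, extract_fn in providers:
        try:
            result = extract_fn(text)
        except Exception:
            continue  # provider down, blocked, or rate-limited; try the next one
        if result:
            result["llm_extracted"] = True
            result["provider"] = name
            return result
    return None
```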
- Click the "From URL" button in the Events section
- Paste the event page URL
- Click the "Auto-Fill" button (now works!)
- Review and edit extracted data
- Select venue (optional) and city (required)
- Choose time period for recurring events
- Click "Create Events"
- ✅ `/api/admin/extract-event-from-url` endpoint functional
- ✅ `extract_event_data_from_url()` function works
- ✅ Cloudscraper bypasses bot protection
- ✅ Schedule detection works ("Fridays 6:30pm - 7:30pm")
- ✅ `/api/admin/scrape-event-from-url` creates events correctly
- ✅ Auto-Fill button click handler (FIXED)
- ✅ Event parameter passing (FIXED)
- ✅ Preview section display
- ✅ Form submission with extracted data
```bash
# Test extraction API
curl -X POST http://localhost:5001/api/admin/extract-event-from-url \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com"}'
```

📖 For detailed usage guide, see docs/URL_EVENT_CREATION_GUIDE.md
- Today-Focused: Scrapes events for TODAY only (more relevant and useful)
- Smart Schedule Detection:
  - Reads actual webpage content to find the schedule (e.g., "Fridays 6:30pm - 7:30pm")
  - Day-aware filtering: Only shows events if today matches the specified day
  - Uses cloudscraper to bypass bot protection (Railway-compatible, no browser needed)
  - Extracts both start AND end times from page text
  - Falls back to URL-based time extraction (e.g., "630pm" in URL → 6:30 PM)
- Tour Duration Assumption: If no end time is specified, assumes a 1-hour duration
  - Example: 3:00 PM start → 4:00 PM end (automatically calculated)
  - Essential for Google Calendar integration - calendar events require both start and end times
  - Makes events more complete and useful for planning
- Enhanced Title Extraction: Converts generic dates to descriptive titles
  - "Friday, October 10" → "Museum Highlights Tour"
  - "Collection Tour: Islamic Art" (from actual page titles)
- Meeting Point Detection: Extracts specific locations like "Gallery 534, Vélez Blanco Patio"
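The 1-hour default-duration rule above can be expressed in a few lines (a sketch; the helper name is illustrative):

```python
from datetime import datetime, timedelta

def ensure_end_time(start, end=None, default=timedelta(hours=1)):
    """Return (start, end), assuming a 1-hour tour when no end time was found.

    Calendar events need both times, so a missing end is filled in
    from the start time plus the default duration.
    """
    return (start, end if end is not None else start + default)
```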
- 🚨 CRITICAL RULE: Unless we have made a tailored scraper, the generic scraper should be used
  - Any venue without a specialized/tailored scraper MUST use the generic scraper
  - The generic scraper is the default for all venues
  - Only venues with explicitly implemented tailored scrapers should bypass the generic scraper
- Universal Compatibility: A generic scraper (`scripts/generic_venue_scraper.py`) that works for any venue/location by using common patterns learned from specialized scrapers
- Two-Tier Architecture:
  - Specialized Scrapers (Priority): Custom scrapers for specific venues (Hirshhorn, NGA, etc.) with venue-specific logic
  - Generic Scraper (Default): Universal scraper that uses common HTML patterns, CSS selectors, and extraction methods - used for ALL venues unless a tailored scraper exists
- Automatic Usage: The system automatically uses the generic scraper for any venue that doesn't have a tailored scraper implementation
- Pattern-Based Extraction: Uses patterns learned from specialized scrapers:
  - Common CSS selectors (`.event`, `.event-item`, `.program`, `.tour`, etc.)
  - JSON-LD structured data extraction
  - Multiple date/time format parsing
  - Automatic event type detection
  - Image extraction from listing pages
- Continuous Improvement: The generic scraper is designed to evolve - as we create new specialized scrapers or discover new patterns, we extract reusable patterns and add them to the generic scraper, making it work better for more venues over time
- Error Handling: Includes bot protection bypass (cloudscraper), SSL error handling, and retry logic
- Documentation: See `docs/GENERIC_SCRAPER_GUIDE.md` for detailed usage and integration examples
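The two-tier rule above amounts to a simple dispatch: look up a tailored scraper, fall back to the generic one (an illustrative sketch, not the actual dispatch code):

```python
# Selector list taken from the patterns named above
EVENT_SELECTORS = [".event", ".event-item", ".program", ".tour"]

def generic_scrape(venue_name):
    # Placeholder standing in for scripts/generic_venue_scraper.py logic
    return {"venue": venue_name, "scraper": "generic"}

def scrape_venue(venue_name, specialized_scrapers):
    """specialized_scrapers: {venue_name: scraper_fn}; generic is the default."""
    scraper = specialized_scrapers.get(venue_name, generic_scrape)
    return scraper(venue_name)
```

With this shape, adding a new tailored scraper is just one more dictionary entry; every other venue silently keeps using the generic path.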
- Development Phase (Current): Individual scraper buttons in admin interface for testing specific venues/museums
  - Used for development and debugging individual scrapers
  - Each scraper (NGA, SAAM, Asian Art, etc.) has its own button for quick testing
  - Note: These are temporary development tools, not the final production workflow
- Production Phase (Manual): Unified scraping interface via main page
  - Workflow: Main page → Select city → Select filters (venues/sources) → Click "Scrape" button
  - The unified scraping system (`/api/scrape`) handles all selected venues/sources automatically
  - Uses progress tracking with real-time updates in the UI
  - Supports filtering by city, event type, time range, and specific venues/sources
- Production Phase (Automated): Cronjob-based scheduled scraping
  - Scrapers will run automatically on a schedule (e.g., daily, hourly)
  - All scrapers must be callable programmatically (not just from UI)
  - Comprehensive logging for monitoring and debugging
  - Error handling that works without UI dependencies
  - Progress tracking is optional for cronjobs (no UI needed)
- Standard Pattern for All Scrapers:
  - Backend: Progress tracking via `scraping_progress.json`, error handling, always return JSON
  - Frontend: Progress modal with real-time updates, table refreshes during scraping
  - Database Saving: Update progress every 5 events for real-time feedback
  - Error Handling: Graceful degradation - continue with other events if one fails
  - Timeout Handling: Retry logic with exponential backoff, increased timeout values
  - Standalone Operation: All scrapers work both from UI and programmatically (for cronjobs)
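The `scraping_progress.json` pattern can be sketched as an atomic write that the UI polls (illustrative; the field names are assumptions, not the file's actual schema):

```python
import json
import os

def write_progress(path, done, total, message=""):
    """Write progress atomically so the polling UI never reads a half-written file."""
    payload = {"done": done, "total": total, "message": message}
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(payload, f)
    os.replace(tmp, path)  # atomic rename on POSIX

def read_progress(path):
    with open(path) as f:
        return json.load(f)
```

Calling `write_progress` every 5 saved events (as the pattern above suggests) keeps the UI responsive without hammering the disk.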
- Platform: Railway with custom domain `planner.ozayn.com`
- Database: PostgreSQL (Railway managed)
- Environment Detection: Automatically switches OCR engines based on environment
- Security: All API keys properly protected, no exposed credentials
- 🚨 DEPLOYMENT PREFERENCE:
  - ✅ ALWAYS use GitHub integration: Push to GitHub and let Railway auto-deploy
  - ❌ NEVER use `railway up`: Bypasses GitHub, creates inconsistency
  - Deployment process: `git push` → Railway auto-detects → Builds → Runs `reset_railway_database.py` → Deploys
  - Wait time: ~2-3 minutes for automatic deployment to complete
⚠️ CRITICAL: Local SQLite and Railway PostgreSQL have different city IDs
- Local: New York = city_id 2, Washington = city_id 1
- Production: New York = city_id 452, Washington = city_id 451
- 🚨 IMPORTANT: After adding a city to `data/cities.json`, you MUST reload it into the database
- Complete Workflow (follow these exact steps):
  - Add city to JSON: Edit `data/cities.json` and add your new city entry
    - Example format: `"28": { "name": "Coimbra", "state": "", "country": "Portugal", "timezone": "Europe/Lisbon" }`
  - Reload cities into local database: `curl -X POST http://localhost:5001/api/admin/reload-cities`
    - This updates existing cities and adds new ones from `cities.json` (preserves city IDs to avoid breaking events/venues/sources)
    - Verify: Check admin interface or `curl http://localhost:5001/api/admin/cities`
  - Commit and push JSON file: `git add data/cities.json && git commit -m "Add new city: [City Name]" && git push`
  - Wait for Railway deployment (2-3 minutes)
  - Reload cities in production: `curl -X POST https://planner.ozayn.com/api/admin/reload-cities`
  - Verify: Check production admin interface or API endpoint
- Why reload is needed: The JSON file is the source of truth, but cities must be loaded into the database to appear in the app
- Common mistake: Adding to JSON but forgetting to reload → city won't appear in admin interface or dropdowns
- ✅ Safe to reload: The reload process preserves existing cities (updates them instead of deleting), so your events, venues, and sources remain linked correctly
- When to use: If you add/edit a city, venue, or source directly in production (via admin interface)
- Problem: Changes are in the production database but NOT in JSON files → will be lost on next reload
- Solution: Update JSON files from the production database, then commit to git

Complete Workflow (follow these exact steps):

- Update JSON files from production database:

  ```bash
  # Update all JSON files at once (recommended)
  curl -X POST https://planner.ozayn.com/api/admin/update-all-json

  # Or update individually:
  curl -X POST https://planner.ozayn.com/api/admin/update-cities-json
  curl -X POST https://planner.ozayn.com/api/admin/update-venues-json
  curl -X POST https://planner.ozayn.com/api/admin/update-sources-json
  ```

  - This updates the JSON files directly in the `data/` directory on production
- Pull updated JSON files from git (after Railway auto-commits or manual commit): `git pull origin master`
  - Note: On Railway, file changes are ephemeral. You need to either:
    - Option A: Download the updated files and commit manually (see Alternative below)
    - Option B: Use Railway CLI to commit changes (if configured)
- Verify JSON files are correct (check structure matches expected format)
- Commit and push (if files were downloaded manually): `git add data/*.json && git commit -m "Sync JSON files from production database" && git push`
- Reload in local (optional, to sync local database):

  ```bash
  curl -X POST http://localhost:5001/api/admin/reload-cities
  curl -X POST http://localhost:5001/api/admin/reload-venues-from-json
  curl -X POST http://localhost:5001/api/admin/reload-sources
  ```

Alternative: Download and commit manually:

- Download updated JSON files from production:

  ```bash
  # After running update-all-json, download the files
  # (You may need to access Railway file system or use export endpoints)
  curl -X POST https://planner.ozayn.com/api/admin/export-cities -o data/cities.json
  curl -X POST https://planner.ozayn.com/api/admin/export-venues -o data/venues.json
  curl -X POST https://planner.ozayn.com/api/admin/export-sources -o data/sources.json
  ```

- Commit and push: `git add data/*.json && git commit -m "Sync JSON files from production database" && git push`

💡 Best Practice: After adding/editing cities, venues, or sources in production:
- Immediately run `/api/admin/update-all-json` to sync JSON files
- Download and commit the updated JSON files to git
- This ensures consistency between the production database and JSON files
- Syncing Workflow (follow these exact steps):
  - Make data changes locally (e.g., update venue URLs in local database)
  - Export to JSON: `source venv/bin/activate && python scripts/update_venues_json.py` (or cities, sources)
  - Commit and push JSON files: `git add data/*.json && git commit -m "Update venue data" && git push`
  - Wait 2-3 minutes for Railway auto-deploy
  - Call reload endpoint: `curl -X POST https://planner.ozayn.com/api/admin/reload-venues-from-json`
  - Verify changes: Check production at `https://planner.ozayn.com/api/admin/venues`
- Available Reload Endpoints:
  - `/api/admin/reload-cities` - Reload cities from `cities.json` (updates existing, adds new - preserves IDs)
    - Updates existing cities by name (preserves city IDs to avoid breaking events/venues/sources)
    - Adds new cities that don't exist
    - Returns: `{cities_updated, cities_added, cities_skipped, total_cities}`
  - `/api/admin/reload-venues-from-json` - Sync venues from JSON to production DB
    - Matches venues by name only (handles city_id mismatch between environments)
    - Updates existing venues (preserves venue IDs)
    - Adds new venues that don't exist
    - Updates all venue fields (website_url, social media, contact info, etc.)
    - Returns stats: `{updated_count, venues_in_json, venues_matched}`
  - `/api/admin/reload-sources` - Reload sources from `sources.json` (updates existing, adds new - preserves IDs)
    - Updates existing sources by name (preserves source IDs)
    - Adds new sources that don't exist
    - Returns: `{sources_updated, sources_added, sources_skipped, total_sources}`
  - `/api/admin/load-all-data` - Load all data (cities, venues, sources) from JSON files
    - Use this after deployment if venues/sources are empty
    - Updates existing items and adds new ones (preserves IDs - safe for existing events)
    - Handles city matching automatically (case-insensitive)
    - Returns: `{cities_loaded, venues_loaded, venues_skipped, sources_loaded, total_items}`
- Available Update Endpoints (for syncing production → JSON):
  - `/api/admin/update-cities-json` - Update `data/cities.json` from production database
  - `/api/admin/update-venues-json` - Update `data/venues.json` from production database
  - `/api/admin/update-sources-json` - Update `data/sources.json` from production database
  - `/api/admin/update-all-json` - Update all JSON files at once (recommended)
  - Use these when you add/edit cities/venues/sources in production and need to sync back to JSON files
- Why Not `railway run`?:
  - ❌ Can't run local scripts on Railway (connection to postgres.railway.internal fails)
  - ✅ Use API endpoints instead - they run in the production environment with access to the production DB
- Data Flow: Local DB → JSON files → Git → Railway → Reload API → Production DB
- Common Issue: If URLs don't update, check that JSON was committed/pushed and Railway finished deploying
- Fix Script Available: Use `scripts/fix_all_venue_urls.py` to fix known fake URLs in batch
- 🚨 After Railway Deployment: If venues/sources are empty, call `/api/admin/load-all-data` to reload everything
- Venue Loading Structure:
  - `venues.json` has venues at top level (not nested under cities)
  - Each venue has a `city_name` field used for matching
  - City matching is case-insensitive and handles different formats
  - If venues don't load, check the `venues_skipped` count in the response
- Venue Loading Notes:
  - If `venues_skipped > 0`, check Railway logs for city matching errors
  - The endpoint loads cities first, then venues (matching by city_name), then sources
- "no module named bs4" Error:
  - Cause: Using system Python instead of the virtual environment
  - Solution: `source venv/bin/activate && python` instead of `python3`
  - Check: `which python` should show `/Users/oz/Dropbox/2025/planner/venv/bin/python`
- "no module named requests" Error:
  - Cause: Same issue - virtual environment not activated
  - Solution: Always use `source venv/bin/activate && python`
- Port 5000 already in use:
  - Cause: Another app using port 5000
  - Solution: Use port 5001 (already configured in app.py)
- Railway deployment fails:
  - Cause: Missing dependencies or environment variables
  - Solution: Check requirements.txt and .env file are committed
- Venues/Sources Empty After Deployment:
  - Cause: Railway deployment may clear the database or fail to reload data
  - Solution: Call the reload endpoint after deployment: `curl -X POST https://planner.ozayn.com/api/admin/load-all-data`
  - Note: The `/api/admin/load-all-data` endpoint updates/adds cities, venues, and sources from JSON files (preserves existing IDs)
  - Structure: `venues.json` has venues at top level with a `city_name` field (not nested under cities)
  - City Matching: Uses case-insensitive matching to find cities by name
  - Response: Check `venues_loaded` and `venues_skipped` in the response to debug issues
  - Known Issue (Fixed): The `load-all-data` endpoint had a bug where it returned early after loading venues, preventing sources from loading. This has been fixed.
  - If venues still don't load: Check Railway logs for city matching errors - venues are skipped if their `city_name` doesn't match any city in the database
  - ✅ Safe to use: The endpoint now updates/adds instead of deleting, so existing events remain linked
CRITICAL LESSON LEARNED: Never declare local variables that shadow window-scoped data arrays!

```javascript
// ❌ WRONG - Creates local variable that shadows window scope
let allSources = [];
async function loadSources() {
    allSources = await fetch('/api/admin/sources').then(r => r.json());
    // Problem: sortTable() looks for window.allSources but finds local allSources
}
function sortTable(tableId, field) {
    dataArray = window.allSources; // ❌ Undefined! Local variable shadows it
}

// ✅ CORRECT - No local declaration, uses window scope directly
async function loadSources() {
    window.allSources = await fetch('/api/admin/sources').then(r => r.json());
    // Now sortTable() can find window.allSources correctly
}
function sortTable(tableId, field) {
    dataArray = window.allSources; // ✅ Works! Finds window-scoped variable
}
```

- Admin table sorting relies on `window.allEvents`, `window.allVenues`, `window.allCities`, `window.allSources`
- Local variable declarations (`let`, `const`, `var`) create shadowing in that scope
- Symptom: Sorting appears broken, console shows "Data arrays not available"
- Fix: Remove local declarations, always use `window.variableName` explicitly
- ❌ NO `let allTableName = []` declarations
- ✅ USE `window.allTableName` in load function
- ✅ USE `window.allTableName` in all references
- ✅ USE `window.filteredTableName` for filtered data
- ✅ TEST sorting immediately after adding new table
```javascript
// Before (BROKEN):
let allSources = []; // ❌ This line broke sorting!
async function loadSources() {
    allSources = await response.json(); // Sets local, not window
}

// After (FIXED):
// No declaration here!
async function loadSources() {
    window.allSources = await response.json(); // ✅ Sets window scope
}
```

Remember: Events, Venues, and Cities tables work because they DON'T have local declarations!
- NEVER commit the `.env` file: Contains sensitive API keys and secrets
- NEVER hardcode API keys: Always use environment variables
- Check `.gitignore`: Ensure `.env`, `*.key`, `*.pem` files are ignored
- Rotate keys regularly: Change API keys periodically for security
- Use different keys: Separate keys for development, staging, and production
- Monitor key usage: Check API dashboards for unusual activity
- Secure credentials file: `config/google-vision-credentials.json` contains the Google service account
- Environment-specific configs: Use `FLASK_ENV=development` locally, `production` on Railway
- Security tools available:
  - Run `./scripts/security_check.sh` to scan for exposed secrets
  - Use `./clean_api_key.sh` if API keys were accidentally committed
- Always use modal forms for edit functions: Never use `prompt()` dialogs for editing data
- Prevent event bubbling: Add `event.stopPropagation()` to action buttons to prevent row click events
- Use proper form validation: All modals should have client-side and server-side validation
- Maintain consistent UX: All tables should follow the same interaction patterns
- 🚨 Database schema changes: When adding/modifying table columns, update ALL related components:
  - Model definition: Add new fields to the SQLAlchemy model class
  - Database migration: ✅ AUTOMATIC - Schema auto-migrates on Railway startup
  - Modal forms: Add/edit forms must include new fields with proper validation
  - Table headers: Update display logic to show new fields
  - API endpoints: Update all CRUD endpoints to handle new fields
  - Data processing: Update hybrid event processor and extraction logic
  - Form validation: Add client-side and server-side validation for new fields
  - JavaScript functions: Update all functions that reference field names
  - Calendar integration: Update calendar event creation to include new fields
  - Deployment database: ✅ AUTOMATIC - Railway PostgreSQL auto-syncs with local SQLite
  - Backward compatibility: Maintain legacy field support during transition
  - Testing: Test all forms, APIs, and integrations with the new schema
- 🚨 CRITICAL: Monitor ALL processes that populate form fields:
  - LLM prompts: Update extraction prompts to request new fields
  - Response parsing: Update JSON parsing logic to handle new fields
  - Fallback extraction: Update regex patterns and fallback logic
  - Data flow tracing: Follow data from extraction → processing → storage → display
  - Field mapping: Ensure all field references are updated consistently
  - Logging: Update log messages to reflect new field names
- ✅ Automatic Migration: The app now automatically migrates Railway PostgreSQL to match the local SQLite schema on startup
- ✅ No Manual Steps: Schema changes are automatically deployed when you push code to GitHub
- ✅ Type Conversion: SQLite types are automatically converted to PostgreSQL equivalents
- ✅ Error Handling: Migration failures are logged but don't crash the app
- Manual Sync: If needed, run `python scripts/sync_schema.py` with the Railway environment
- Problem: Railway PostgreSQL doesn't automatically create new columns when SQLAlchemy models are updated
- Root Cause: SQLAlchemy only creates tables, not column additions, on existing databases
- Solution: Auto-migration function runs on Railway startup to add missing columns
- Key Insight: Always define expected columns in code rather than reading from local SQLite (which doesn't exist on Railway)
- Dashboard Impact: Missing columns cause API errors, resulting in "undefined" counts on admin dashboard
- Prevention: The auto-migration function prevents this recurring problem permanently
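The auto-migration idea - compare expected columns (defined in code, per the key insight above) against what the live database reports, then emit ALTER TABLE statements - can be sketched as (simplified; the real function runs against Railway PostgreSQL):

```python
def plan_column_migrations(expected, existing):
    """expected: {table: {column: sql_type}}; existing: {table: set_of_columns}.

    Returns ALTER TABLE statements for columns missing in the live database.
    Real code would obtain `existing` from information_schema.columns.
    """
    statements = []
    for table, columns in expected.items():
        have = existing.get(table, set())
        for column, sql_type in columns.items():
            if column not in have:
                statements.append(
                    f"ALTER TABLE {table} ADD COLUMN {column} {sql_type}"
                )
    return statements
```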
- Symptom: Dashboard shows "undefined" counts for Cities, Venues, Events, Sources
- Cause: Missing columns in Railway PostgreSQL database
- Check: Look for errors like "column events.social_media_platform does not exist" in Railway logs
- Solution: The auto-migration should fix this automatically on next deployment
- Manual Fix: If auto-migration fails, run `python scripts/sync_schema.py` with the Railway environment
- Verification: Check API endpoints return correct counts: `/api/admin/cities`, `/api/admin/venues`, etc.
/api/admin/cities,/api/admin/venues, etc. - π¨ Timezone handling: Always use CITY timezone for event date/time processing:
- Image upload event extraction MUST use city timezone, not server timezone
- Date/time parsing should consider the event's location timezone
- Never use
datetime.now()ordate.today()without timezone context
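With Python 3.9+'s `zoneinfo`, interpreting a parsed event time in the city's timezone looks like this (a sketch; the helper name is illustrative):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def localize_event_time(naive_dt, city_timezone):
    """Attach the CITY's timezone to a naive parsed datetime.

    Never rely on the server's local time: an event at 6:30 PM in
    Washington must stay 6:30 PM America/New_York wherever the app runs.
    """
    return naive_dt.replace(tzinfo=ZoneInfo(city_timezone))
```

The timezone string comes straight from the city record (e.g. `"Europe/Lisbon"` in `cities.json`), so every city carries its own correct offset and DST rules.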
- Test in both environments: Always test locally AND on Railway deployment
- Check browser console: Look for JavaScript errors before reporting issues
- Verify data persistence: After adding/editing, refresh page to confirm data saved
- Test all CRUD operations: Create, Read, Update, Delete for each entity type
- Mobile responsiveness: Test on mobile devices - app uses pastel design for mobile-first
- 🚨 Always check the `.env` file first: When troubleshooting, check `.env` for API keys, configuration, and environment settings
- Never modify files in `/archive/`: These are outdated - use current files in root directories
- Keep scripts in `/scripts/`: All utility scripts belong there, not in root
- Update documentation: When changing functionality, update relevant docs in `/docs/`
- Use proper imports: Import from correct modules (e.g., `scripts/utils.py`, not `utils.py`)
- Monitor API calls: Check browser Network tab for failed requests
- Image optimization: Large images in `/uploads/` can slow down the app
- Database queries: Use browser dev tools to monitor slow database operations
- Memory leaks: Check for JavaScript memory leaks in long-running sessions
- Cities: 25 loaded
- Venues: 178 loaded
- Sources: 37 loaded
- Hybrid Processing: ✅ Production ready
- Instagram Recognition: ✅ Working perfectly
- 🌍 Global Cities: Support for 22 major cities worldwide with 147+ venues
- 🏛️ Venue Management: Comprehensive venue database with images, hours, and details
- 📰 Event Sources: 36+ event sources for Washington DC with smart scraping
- 🎨 Minimal Design: Pastel colors, artistic fonts, icon-based UI
- 🔧 Admin Interface: Full CRUD operations for cities, venues, and sources
- 🛡️ Bulletproof Setup: Automated restart script with dependency management
- 🤖 LLM Integration: Multiple AI providers (Groq, OpenAI, Anthropic, etc.)
- 📊 Data Management: JSON-based data with database synchronization
```bash
# Clone and setup
git clone https://github.com/ozayn/planner.git
cd planner

# Create virtual environment (if not exists)
python3 -m venv venv

# 🔥 ALWAYS RUN THIS FIRST!
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install with setup.py (recommended)
python setup.py install

# Or install dependencies manually
pip install -r requirements.txt

# Setup environment
cp .env.example .env
# Edit .env with your API keys

# Initialize database
python scripts/data_manager.py load

# Start the app
python app.py
```

Visit: http://localhost:5001

📖 For detailed setup instructions, see docs/setup/SETUP_GUIDE.md
- Museum tours with start/end times
- Meeting locations (entrance, rotunda, specific floors)
- Images (tour-specific or museum default)
- Opening hours tracking
- Google Calendar integration
- Museums, buildings, locations
- Opening hours for specific dates
- Location and images
- Instagram links for additional events
- Date ranges
- Specific locations within venues
- Multi-day calendar events
- Single or multi-day events
- Multiple locations for different days
- Start/end times and locations
- Descriptions and details
- Backend: Python Flask with SQLAlchemy
- Database: SQLite (local development), PostgreSQL (Railway production)
- Frontend: HTML/CSS/JavaScript with minimal design
- AI Integration: Multiple LLM providers (Groq, OpenAI, Anthropic, Cohere, Google, Mistral)
- Data Management: JSON files with database synchronization
- Admin Interface: Dynamic CRUD operations
- Deployment: Railway-ready with Procfile
- Design: Pastel colors, minimal UI, icon-based interactions
```
planner/
├── app.py                       # Main Flask application
├── data/                        # JSON data files
│   ├── cities.json              # Predefined cities
│   ├── venues.json              # Predefined venues
│   └── sources.json             # Event sources
├── scripts/                     # Utility scripts
│   ├── data_manager.py          # Database management
│   ├── utils.py                 # Core utilities
│   ├── env_config.py            # Environment configuration
│   └── enhanced_llm_fallback.py # LLM integration
├── templates/                   # HTML templates
│   ├── index.html               # Main web interface
│   ├── admin.html               # Admin interface
│   └── debug.html               # Debug interface
├── docs/                        # Comprehensive documentation
│   ├── setup/                   # Setup & installation guides
│   ├── deployment/              # Deployment guides
│   ├── admin/                   # Admin interface docs
│   ├── data/                    # Data management guides
│   ├── session-notes/           # Development session notes
│   └── README.md                # Documentation index
├── archive/                     # Archived files
│   ├── outdated_scripts/        # Old scripts
│   └── outdated_docs/           # Old documentation
├── requirements.txt             # Production dependencies
├── restart.sh                   # Bulletproof restart script
├── setup_github.sh              # GitHub setup script
└── README.md                    # This file
```
All documentation is organized in the docs/ directory:
- Documentation Index - Complete documentation overview
- Setup Guide - Detailed installation instructions
- Deployment Guide - Railway deployment
- Google Vision Setup - OCR configuration
- Architecture - System design
- API Documentation - API endpoints
```bash
# LLM API Keys (optional - app works without them)
GROQ_API_KEY=your_groq_api_key
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
COHERE_API_KEY=your_cohere_api_key
GOOGLE_API_KEY=your_google_api_key
MISTRAL_API_KEY=your_mistral_api_key

# Google Maps (optional)
GOOGLE_MAPS_API_KEY=your_google_maps_api_key

# Eventbrite API (optional - for scraping Eventbrite events)
# Use Personal OAuth Token (Private Token) - this is what you need for reading public events
EVENTBRITE_API_TOKEN=your_eventbrite_personal_oauth_token
# Alternative name (both work):
# EVENTBRITE_PRIVATE_TOKEN=your_eventbrite_personal_oauth_token
# Optional: Public token for anonymous access (limited functionality)
# EVENTBRITE_PUBLIC_TOKEN=ZRQRSTL4V3Y5X2X5X2X5

# Flask Configuration
FLASK_ENV=development
FLASK_DEBUG=True
SECRET_KEY=your_secret_key

# Database
DATABASE_URL=sqlite:///instance/events.db
```

- Primary Pastel: `#E8F4FD`
- Secondary Pastel: `#F0F8E8`
- Accent Pastel: `#FFF0F5`
- Neutral Pastel: `#F8F9FA`
- Display: Playfair Display (artistic serif)
- Body: Inter (clean sans-serif)
- Minimal text, maximum icons
- Soft shadows and rounded corners
- No dark buttons or harsh edges
- Pastel color palette throughout
The app uses a comprehensive database schema supporting:
- Cities with timezone information
- Venues with opening hours
- Multiple event types (tours, exhibitions, festivals, photowalks, workshops)
- Polymorphic event inheritance
- Calendar integration tracking
- `GET /api/cities` - Get available cities
- `GET /api/events` - Get events with filters
- `GET /api/venues` - Get venues for a city
- `POST /api/calendar/add` - Add event to Google Calendar
Currently seeded with 22 cities including:
- Washington, DC (38 venues, 36 sources)
- New York, NY (10 venues)
- Los Angeles, CA (5 venues)
- San Francisco, CA (5 venues)
- Chicago, IL (5 venues)
- Boston, MA (5 venues)
- Seattle, WA (5 venues)
- Miami, FL (5 venues)
- London, UK (10 venues)
- Paris, France (10 venues)
- Tokyo, Japan (5 venues)
- Sydney, Australia (5 venues)
- Montreal, Canada (5 venues)
- Toronto, Canada (5 venues)
- Vancouver, Canada (5 venues)
- Tehran, Iran (5 venues)
- Baltimore, MD (5 venues)
- Philadelphia, PA (5 venues)
- Madrid, Spain (5 venues)
- Berlin, Germany (5 venues)
- Munich, Germany (5 venues)
- Princeton, NJ (9 venues)
This project follows professional Python development practices:
- Clean Architecture: Separated concerns with config/, scripts/, tests/ directories
- Modular Design: Each component has its own module with proper imports
- Testing Suite: Comprehensive unit tests in tests/ directory
- Configuration Management: Environment-based configuration with settings.py
- Development Tools: Separate requirements-dev.txt for development dependencies
- Documentation: Comprehensive README and inline documentation
- Code Organization: No messy files in root directory - everything properly organized
- 🚨 Image processing broken? → QUICK_FIX_GUIDE.md
- Port 5000 in use: App runs on port 5001 by default
- Database errors: Run `python scripts/data_manager.py load` to reload data
- Python not found: Use `python3` to create the venv, then always run via the activated venv's `python`
- Dependencies not found: Make sure the virtual environment is activated with `source venv/bin/activate`
- API key errors: Add your API keys to the `.env` file (GROQ_API_KEY, OPENAI_API_KEY, etc.)
- Quick fixes: QUICK_FIX_GUIDE.md - Common issues solved in 2 minutes
- Detailed setup: docs/setup/SETUP_GUIDE.md
- Architecture: docs/ARCHITECTURE.md
- Follow the minimal design principles
- Use pastel colors and artistic fonts
- Prefer icons over text labels
- Maintain timezone accuracy
- Test on both web and mobile
- Follow the established project structure
- Add tests for new features
- Update documentation as needed
MIT License - feel free to use and modify!