# Architecture & Build

Vite + Bun build system with multiple deployment targets and optimized chunk splitting for performance.

## System Architecture Overview
```mermaid
graph TB
    subgraph Client["Client Layer"]
        Browser[Browser / PWA]
        Desktop[Tauri Desktop App]
    end
    subgraph Presentation["Presentation Layer"]
        Apps[17 App Modules]
        Layout[Layout Components]
        UI[UI Components]
        Themes[4 OS Themes]
    end
    subgraph State["State Layer"]
        Zustand[21 Zustand Stores]
        Contexts[React Contexts]
    end
    subgraph Persistence["Persistence Layer"]
        LocalStorage[(localStorage)]
        IndexedDB[(IndexedDB)]
    end
    subgraph API["API Layer (Vercel Edge)"]
        Chat[Chat API]
        Media[Media APIs]
        Rooms[Chat Rooms API]
        Utility[Utility APIs]
    end
    subgraph External["External Services"]
        AI[AI Providers<br/>OpenAI, Anthropic, Google]
        Pusher[Pusher<br/>Real-time]
        Redis[(Upstash Redis)]
        YouTube[YouTube API]
    end
    Browser --> Presentation
    Desktop --> Presentation
    Presentation --> State
    State --> Persistence
    Presentation --> API
    API --> External
    State --> API
```
## Deployment Targets
| Target | Technology | Description |
|---|---|---|
| Web (PWA) | Vercel | Primary deployment with CDN, edge functions |
| Desktop | Tauri | Native app for macOS, Windows, Linux |
| Development | Vite | Local server with HMR |
```mermaid
graph LR
    subgraph Source
        A[Source Code]
    end
    subgraph Build["Vite + Bun"]
        B[Bundle & Optimize]
    end
    subgraph Targets
        C[Web PWA]
        D[Desktop]
        E[Dev Server]
    end
    A --> B
    B --> C
    B --> D
    B --> E
    C --> F[Vercel CDN]
    D --> G[Tauri - macOS/Windows/Linux]
    E --> H[localhost + HMR]
```
## Chunk Splitting Strategy
The build system uses intelligent chunk splitting to optimize initial load time while enabling on-demand loading of heavy dependencies.
### Core Chunks (Immediate Load)
These chunks are loaded on initial page load:
| Chunk | Packages | Size |
|---|---|---|
| `react` | react, react-dom | ~150KB |
| `ui-core` | `@radix-ui/*` (dialog, dropdown, select, etc.) | ~80KB |
| `zustand` | zustand, persist middleware | ~10KB |
| `motion` | framer-motion | ~100KB |
### Deferred Chunks (Lazy Load)
These chunks are loaded when their corresponding apps are opened:
| Chunk | Contents | Trigger Apps |
|---|---|---|
| `audio` | tone.js, wavesurfer.js | Soundboard, iPod, Synth |
| `tiptap` | `@tiptap/*` (editor framework) | TextEdit |
| `three` | three.js (3D rendering) | Virtual PC |
| `ai-sdk` | ai, `@ai-sdk/*` | Chats, Internet Explorer |
```mermaid
graph TD
    subgraph Core["Core Chunks (Immediate)"]
        R[react]
        UI[ui-core]
        Z[zustand]
        M[motion]
    end
    subgraph Deferred["Deferred Chunks (Lazy)"]
        AU[audio]
        TT[tiptap]
        TH[three]
        AI[ai-sdk]
    end
    subgraph Apps
        SB[Soundboard]
        IP[iPod]
        SY[Synth]
        TE[TextEdit]
        PC[Virtual PC]
        CH[Chats]
        IE[Internet Explorer]
    end
    SB -.->|"on open"| AU
    IP -.->|"on open"| AU
    SY -.->|"on open"| AU
    TE -.->|"on open"| TT
    PC -.->|"on open"| TH
    CH -.->|"on open"| AI
    IE -.->|"on open"| AI
```
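In Vite, a split like this is expressed through Rollup's `manualChunks` option. The sketch below mirrors the chunk tables above; the module-id patterns are illustrative and may differ from the project's actual `vite.config.ts`:

```typescript
// Sketch of a manualChunks function matching the chunk tables above.
// Patterns are assumptions, not the project's real config.
function manualChunks(id: string): string | undefined {
  if (!id.includes("node_modules")) return undefined; // app code: default handling

  // Core chunks (loaded immediately)
  if (/node_modules\/(react|react-dom)\//.test(id)) return "react";
  if (id.includes("@radix-ui")) return "ui-core";
  if (id.includes("zustand")) return "zustand";
  if (id.includes("framer-motion")) return "motion";

  // Deferred chunks (loaded when a triggering app opens)
  if (/node_modules\/(tone|wavesurfer\.js)\//.test(id)) return "audio";
  if (id.includes("@tiptap")) return "tiptap";
  if (id.includes("node_modules/three/")) return "three";
  if (id.includes("@ai-sdk") || /node_modules\/ai\//.test(id)) return "ai-sdk";

  return undefined; // remaining vendors: let Rollup decide
}

console.log(manualChunks("/p/node_modules/react-dom/client.js")); // "react"
console.log(manualChunks("/p/node_modules/three/build/three.module.js")); // "three"
```

Such a function is passed under `build.rollupOptions.output.manualChunks`; returning `undefined` leaves the module to Rollup's default chunking.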
## Lazy Component Pattern
Apps use React's lazy loading with a custom wrapper for HMR compatibility:
```tsx
// Lazy loading with caching for HMR
function createLazyComponent<T = unknown>(
  importFn: () => Promise<{ default: ComponentType<AppProps<T>> }>,
  cacheKey: string
): ComponentType<AppProps<T>> {
  // Return cached component if it exists (prevents HMR issues)
  const cached = lazyComponentCache.get(cacheKey);
  if (cached) return cached;

  const LazyComponent = lazy(importFn);
  const WrappedComponent = (props: AppProps<T>) => (
    <Suspense fallback={null}>
      <LazyComponent {...props} />
      <LoadSignal instanceId={props.instanceId} />
    </Suspense>
  );
  lazyComponentCache.set(cacheKey, WrappedComponent);
  return WrappedComponent;
}
```
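The cache-then-create step can be understood without React. A minimal sketch of the same pattern with a plain `Map` (names are illustrative): repeated calls with the same key return the identical object, so hot reloads do not recreate, and thus remount, the component.

```typescript
// Minimal sketch of the HMR-safe caching idea, without React.
type AnyComponent = (props: unknown) => unknown;

const componentCache = new Map<string, AnyComponent>();

function getOrCreate(key: string, factory: () => AnyComponent): AnyComponent {
  const cached = componentCache.get(key);
  if (cached) return cached; // stable identity across hot reloads

  const created = factory();
  componentCache.set(key, created);
  return created;
}

const first = getOrCreate("textedit", () => () => "TextEdit v1");
const second = getOrCreate("textedit", () => () => "TextEdit v2");
console.log(first === second); // true: the cached instance wins
```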
## PWA Caching Strategy
The service worker implements different caching strategies based on resource type:
| Resource Pattern | Strategy | TTL | Rationale |
|---|---|---|---|
| Navigation (HTML) | NetworkFirst | 1 day | Always get latest app shell |
| JS Chunks | NetworkFirst (3s timeout) | 1 day | Fresh code with fast fallback |
| CSS | StaleWhileRevalidate | 7 days | Use cached, update in background |
| Images | CacheFirst | 30 days | Rarely change, prioritize speed |
| Fonts | CacheFirst | 1 year | Never change once deployed |
| API Responses | NetworkOnly | - | Always fresh data |
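With a Workbox-based service worker (e.g. via vite-plugin-pwa), policies like these are declared as `runtimeCaching` entries. A hedged sketch of two rows from the table above; the URL patterns and cache names are assumptions, not the project's actual config:

```typescript
// Illustrative Workbox runtimeCaching entries for two of the rows above.
const runtimeCaching = [
  {
    urlPattern: /\.js$/,
    handler: "NetworkFirst" as const,
    options: {
      cacheName: "js-chunks",
      networkTimeoutSeconds: 3, // fall back to cache after 3s
      expiration: { maxAgeSeconds: 24 * 60 * 60 }, // 1 day
    },
  },
  {
    urlPattern: /\.(?:png|webp|jpg|svg)$/,
    handler: "CacheFirst" as const,
    options: {
      cacheName: "images",
      expiration: { maxEntries: 100, maxAgeSeconds: 30 * 24 * 60 * 60 }, // 30 days
    },
  },
];
```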
### Cache Invalidation
```mermaid
sequenceDiagram
    participant SW as Service Worker
    participant Cache as Cache Storage
    participant Network as Network
    participant App as Application
    App->>SW: Request resource
    alt CacheFirst (images, fonts)
        SW->>Cache: Check cache
        Cache-->>SW: Return if exists
        SW->>App: Serve from cache
    else NetworkFirst (JS, HTML)
        SW->>Network: Fetch (with timeout)
        alt Success within timeout
            Network-->>SW: Response
            SW->>Cache: Update cache
            SW->>App: Serve response
        else Timeout/Failure
            SW->>Cache: Fallback to cache
            Cache-->>SW: Cached response
            SW->>App: Serve cached
        end
    else StaleWhileRevalidate (CSS)
        SW->>Cache: Get cached version
        Cache-->>SW: Cached response
        SW->>App: Serve immediately
        SW->>Network: Fetch update (background)
        Network-->>SW: Fresh response
        SW->>Cache: Update cache
    end
```
## Module Resolution
The project uses TypeScript path aliases for clean imports:
| Alias | Path | Usage |
|---|---|---|
| `@/` | `src/` | Source code root |
| `@/components` | `src/components/` | UI components |
| `@/hooks` | `src/hooks/` | Custom hooks |
| `@/stores` | `src/stores/` | Zustand stores |
| `@/apps` | `src/apps/` | App modules |
| `@/utils` | `src/utils/` | Utility functions |
| `@/lib` | `src/lib/` | Libraries |
| `@/types` | `src/types/` | TypeScript types |
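These aliases are typically declared twice: once for the TypeScript compiler (`paths` in tsconfig) and once for the bundler, and the two must stay in sync. A sketch of the bundler side; the resolution root and config shape are illustrative:

```typescript
import path from "node:path";

// Illustrative fragment of a vite.config.ts resolve section.
// The tsconfig counterpart would be: "paths": { "@/*": ["src/*"] }
const viteResolve = {
  alias: {
    // "@" maps to src/, so "@/stores/x" resolves under <root>/src/stores/x
    "@": path.resolve(process.cwd(), "src"),
  },
};

console.log(viteResolve.alias["@"].endsWith("src")); // true
```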
## Environment Configuration

### Development

```bash
# Local development with HMR
bun run dev          # Vite dev server
bun run dev:vercel   # Vercel CLI (recommended for API routes)
```

### Production Build

```bash
bun run build    # Production build
bun run preview  # Preview production build
```

### Desktop (Tauri)

```bash
bun run tauri:dev    # Development with native shell
bun run tauri:build  # Build native applications
```
## Performance Optimizations

### Initial Load Optimizations
- Code Splitting: Apps loaded on-demand via React.lazy
- Font Loading: System font stacks with web font fallbacks
- Image Optimization: Responsive images, WebP format
- CSS Layers: Tailwind with theme-specific overrides
### Runtime Optimizations
- Zustand Selectors: Fine-grained subscriptions prevent re-renders
- Memo/Callback: Strategic memoization for expensive computations
- Virtual Lists: Large lists use virtualization (iPod, Finder)
- Debounced Actions: User inputs debounced for performance
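The selector mechanism can be sketched without the library: a zustand-like store where a subscriber passes a selector and is notified only when its selected slice actually changes (store shape and names below are illustrative):

```typescript
type Listener = () => void;

// Minimal zustand-like store sketch with selector-based subscriptions.
function createStore<S extends object>(initial: S) {
  let state = initial;
  const listeners = new Set<Listener>();
  return {
    getState: () => state,
    setState(partial: Partial<S>) {
      state = { ...state, ...partial };
      listeners.forEach((l) => l());
    },
    subscribeWithSelector<T>(selector: (s: S) => T, onChange: (v: T) => void) {
      let prev = selector(state);
      const listener = () => {
        const next = selector(state);
        if (!Object.is(next, prev)) { // skip updates to unrelated slices
          prev = next;
          onChange(next);
        }
      };
      listeners.add(listener);
      return () => listeners.delete(listener);
    },
  };
}

const store = createStore({ count: 0, title: "untitled" });
let notifications = 0;
store.subscribeWithSelector((s) => s.count, () => notifications++);
store.setState({ title: "renamed" }); // count unchanged: no notification
store.setState({ count: 1 });         // count changed: one notification
console.log(notifications); // 1
```

In a React component the same idea is `useStore((s) => s.count)`: the component re-renders only when `count` changes, not on every store write.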
### Audio Optimizations
- Shared AudioContext: Single context prevents resource exhaustion
- Lazy AudioBuffer Loading: Sounds loaded on first interaction
- LRU Cache: Limited audio buffer cache with eviction
- Concurrent Source Limiting: Prevents audio overload
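The LRU idea behind the buffer cache can be sketched with a `Map`, which preserves insertion order; capacity and value types below are illustrative (the real cache would hold decoded `AudioBuffer`s):

```typescript
// Minimal LRU sketch for the audio-buffer cache idea.
class LRUCache<K, V> {
  private map = new Map<K, V>();
  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key)!;
    this.map.delete(key);
    this.map.set(key, value); // re-insert to mark as most recently used
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    else if (this.map.size >= this.capacity) {
      // Map iterates in insertion order, so the first key is least recently used
      const oldest = this.map.keys().next().value as K;
      this.map.delete(oldest);
    }
    this.map.set(key, value);
  }

  has(key: K): boolean {
    return this.map.has(key);
  }
}

const sounds = new LRUCache<string, string>(2);
sounds.set("click", "buffer-a");
sounds.set("beep", "buffer-b");
sounds.get("click");             // touch "click" so it is most recent
sounds.set("alert", "buffer-c"); // evicts "beep", the least recently used
console.log(sounds.has("beep"), sounds.has("click")); // false true
```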
```mermaid
graph TD
    subgraph Optimization["Performance Optimization Layers"]
        L1[Initial Load<br/>Code Splitting, Lazy Loading]
        L2[Runtime<br/>Selectors, Memoization]
        L3[Audio<br/>Shared Context, Caching]
        L4[Network<br/>PWA Caching, Prefetch]
    end
    L1 --> L2 --> L3 --> L4
```
## Build Pipeline
```mermaid
flowchart LR
    subgraph Input
        TS[TypeScript]
        TSX[React TSX]
        CSS[Tailwind CSS]
        Assets[Static Assets]
    end
    subgraph Vite["Vite Build"]
        SWC[SWC Compiler]
        Rollup[Rollup Bundler]
        PostCSS[PostCSS]
    end
    subgraph Output
        JS[Optimized JS Chunks]
        StyleSheet[Minified CSS]
        Static[Hashed Assets]
        SW[Service Worker]
    end
    TS --> SWC
    TSX --> SWC
    CSS --> PostCSS
    SWC --> Rollup
    PostCSS --> Rollup
    Assets --> Static
    Rollup --> JS
    Rollup --> StyleSheet
    Rollup --> SW
```
## Related Documentation
- Application Framework - App structure and lifecycle
- State Management - Zustand stores and persistence
- API Architecture - Backend API design