Building "Monitoring the Situation": A Real-Time Travel Risk Dashboard
I'm planning a nomad trip for 2026. The route is Rome, Split, Manchester, Bratislava, Gran Canaria, Tenerife, and back to Manchester. A mix of cities I love, places I haven't been, and reliable bases to work from. The whole thing is mapped out, flights are flexible, and I'm ready to go.
Then the Iran-US-Israel conflict escalated, and suddenly "is it safe to fly over the eastern Mediterranean next month?" became a real question. I started checking multiple news sources, flight tracking sites, and government travel advisories daily. It was scattered, time-consuming, and easy to miss something important.
So I built a dashboard to do it for me.
How It Grew
It started simple - five flight cards and an oil price chart. That was the whole thing. But each feature answered one question and immediately surfaced the next. What about prediction markets? Should I get notified when something changes? Can I ask the dashboard a question on my phone?
By the end it had 13 sections: airport status strips, travel advisories, ferry operations, commodity charts, a news ticker, prediction market widgets, an itinerary timeline, and a recommended actions briefing. I even wired up push notifications via ntfy.sh so I wake up to an overnight summary on my Pixel every morning. The scope grew organically because each addition felt genuinely useful - not because I was padding it.
I also actually booked my Gran Canaria to Tenerife ferry because the dashboard told me to. The ferry section flagged a route and I acted on it. That felt like the moment the project became real.
What It Tracks
- Flight risk assessment - an overall risk score with a confidence percentage, updated twice daily. This considers airspace closures, conflict zones, and airline route changes
- Commodity prices - oil and jet fuel prices, because spikes in these directly affect flight availability and pricing
- Airport operational health - live status for airports along my route, tracking delays, cancellations, security incidents, and operational disruptions
- Travel advisories - government-issued advisories from the UK, US, and EU, filtered for the countries on my itinerary
- Ferry operations - because sometimes the backup plan is to not fly at all
- Prediction markets - aggregated prediction market data on relevant geopolitical events. These markets are surprisingly good leading indicators - often pricing in risk before traditional media reports it
- Interactive itinerary timeline - my actual trip mapped on a timeline with a "you are here" marker, so I can see at a glance which legs of the journey are affected by current risk levels
- Push notifications via ntfy.sh - the overnight summary lands on my phone before I open my laptop. Not the dashboard - the notification. The dashboard is just where you go to dig deeper. The notification is where the value lives.
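The push side is the simplest part of the whole system: ntfy.sh accepts a plain HTTP POST to a topic URL. Here's a minimal sketch of how the morning summary could be sent - the topic name, summary shape, and wording are illustrative, not the project's real ones:

```typescript
// Hypothetical shape of the overnight summary the pipeline produces.
interface OvernightSummary {
  overallRisk: "green" | "amber" | "red";
  headline: string;
  changedSections: string[];
}

// Render the summary as the plain-text body of the notification.
function formatSummary(s: OvernightSummary): string {
  const changes = s.changedSections.length
    ? `Changed overnight: ${s.changedSections.join(", ")}`
    : "No changes overnight";
  return `[${s.overallRisk.toUpperCase()}] ${s.headline}\n${changes}`;
}

// ntfy.sh takes a bare POST; Title and Priority are standard ntfy headers.
// "my-travel-risk" is a placeholder topic name.
async function pushSummary(topic: string, s: OvernightSummary): Promise<void> {
  await fetch(`https://ntfy.sh/${topic}`, {
    method: "POST",
    headers: {
      Title: "Travel risk: overnight summary",
      Priority: s.overallRisk === "red" ? "high" : "default",
    },
    body: formatSummary(s),
  });
}
```

Subscribing the phone is just pointing the ntfy app at the same topic - no accounts, no push certificates.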
The Two-Agent Validation Pipeline
This is the part I'm most proud of technically. The biggest challenge wasn't building the UI or aggregating data sources - it was making sure the data was correct and current.
Travel risk information is uniquely dangerous to get wrong. A false "safe" signal could mean booking a flight into a conflict zone. A false "danger" signal could mean cancelling plans unnecessarily. I needed the data to be trustworthy.
The solution is a two-agent approach:
Agent 1: Research (Claude Sonnet)
The first agent handles data gathering. It uses Brave Search API to find current information about each tracked metric - flight disruptions, airspace changes, commodity prices, travel advisories. It's instructed to only look at information from the last 24 hours, keeping everything current.
Sonnet is the right choice here because research is a breadth task - it needs to process many sources quickly and extract structured data.
Agent 2: Validation (Claude Opus)
The second agent receives everything the research agent found and validates it. It cross-references claims against multiple sources, checks for contradictions, verifies that quoted statistics are plausible, and flags anything that looks like misinformation or outdated data.
Opus is necessary here because validation requires judgment. Is a 15% increase in jet fuel prices in one day plausible? Does this travel advisory match what other governments are saying? These are reasoning-heavy questions that benefit from a more capable model.
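The handoff between the two agents can be sketched as two sequential calls to the Anthropic Messages API, with the research output fed into the validator's prompt. The model identifiers and prompt wording below are placeholders, not the project's real ones:

```typescript
const ANTHROPIC_URL = "https://api.anthropic.com/v1/messages";

// One round-trip to the Messages API; returns the first text block.
async function callClaude(model: string, prompt: string): Promise<string> {
  const res = await fetch(ANTHROPIC_URL, {
    method: "POST",
    headers: {
      "x-api-key": process.env.ANTHROPIC_API_KEY ?? "",
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model,
      max_tokens: 2048,
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.content[0].text;
}

// Wrap the research findings in the validator's instructions.
function buildValidationPrompt(findings: string): string {
  return [
    "You are a validator. Cross-check the research findings below.",
    "Flag any claim sourced from material older than 24 hours, any",
    "contradiction between sources, and any implausible statistic.",
    "Return corrected JSON only.",
    "",
    "FINDINGS:",
    findings,
  ].join("\n");
}

async function runPipeline(researchPrompt: string): Promise<string> {
  const findings = await callClaude("claude-sonnet-placeholder", researchPrompt); // breadth pass
  return callClaude("claude-opus-placeholder", buildValidationPrompt(findings)); // judgment pass
}
```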
The verifier caught a real mistake during development. The research agent came back reporting that the Strait of Hormuz had gone from "contested" to "CLOSED" - genuinely alarming. I nearly acted on it. Turned out it was based on a month-old article that had resurfaced in search results. On the next full run, Opus correctly reverted the strait status to "contested" and stripped the stale headlines. The system caught its own mistake before I saw it. That was the moment I knew the two-agent approach was worth the extra complexity.
The Automated Pipeline
The whole thing runs on autopilot via GitHub Actions:
- A scheduled workflow runs twice daily (8am and 8pm UTC)
- It triggers the research agent, which gathers fresh data via Brave Search
- The validation agent reviews the findings
- If the data has changed from the last run, it auto-commits the updated JSON data files
- The site rebuilds with the fresh data
The "only commit if changed" step matters. Most runs produce the same data because geopolitical situations don't change every 12 hours. Without this check, you'd get a noisy commit history full of identical data.
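The gate itself is small. A twice-daily schedule is one cron line in the workflow (`0 8,20 * * *` for 8am and 8pm UTC), and the change check is just `git status --porcelain`, which prints one line per modified file and nothing when the tree is clean. A sketch, with the data directory and commit message as placeholders:

```typescript
import { execSync } from "node:child_process";

// git status --porcelain emits one line per changed file under the given
// path; empty output means the new data is identical to the last run.
function hasChanges(porcelainOutput: string): boolean {
  return porcelainOutput.trim().length > 0;
}

function commitIfChanged(dataDir: string): boolean {
  const status = execSync(`git status --porcelain -- ${dataDir}`).toString();
  if (!hasChanges(status)) return false; // identical data: skip the commit
  execSync(`git add ${dataDir}`);
  execSync(`git commit -m "chore: refresh risk data"`);
  return true;
}
```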
The Design: Committing to the Vibe
The direction I gave myself early on was "super intense." Chakra Petch font, CRT scan lines, pulsing red dots on ACT NOW items, a LIVE indicator on the news ticker. Dark background, grid overlays, glow effects, risk-based colour coding, data-dense panels. It looks like something from a control room.
That was a conscious choice - but it required actually committing to it. The first version was a narrow single-column mobile-first layout that looked fine but felt generic. I pushed it to a full-width dashboard grid. Then I pushed again when all the sections looked the same. Each one needed its own visual personality: airports as a compact departure-board strip, travel advisories as an inline dot-grid, local events as alert banners with coloured left borders, fuel surcharges as clean table rows. Once everything had its own identity, the page stopped reading as a list of cards and started reading as a dashboard.
The lesson I took from this: committing fully to a design vibe, without hedging, is often the best approach. Half-hearted intensity is just noise. Full intensity becomes atmosphere.
The colour coding is functional, not just decorative:
- Green - normal conditions, no action needed
- Amber - elevated risk, worth monitoring. Check back before booking
- Red - significant risk, consider alternatives. This is the "maybe take the ferry" level
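Keeping that coding functional means deriving the tier from the score in one place and reusing it everywhere. A hypothetical version - the numeric thresholds and class strings are illustrative, not the dashboard's actual cut-offs:

```typescript
type RiskTier = "green" | "amber" | "red";

// Map a 0-100 risk score to a tier. Thresholds here are assumptions.
function riskTier(score: number): RiskTier {
  if (score >= 70) return "red";   // significant risk: consider alternatives
  if (score >= 40) return "amber"; // elevated risk: monitor before booking
  return "green";                  // normal conditions, no action needed
}

// Tailwind-style classes keyed by tier, so every panel colours consistently
// (and only red earns the pulsing treatment).
const tierClasses: Record<RiskTier, string> = {
  green: "text-emerald-400 border-emerald-500/40",
  amber: "text-amber-400 border-amber-500/40",
  red: "text-red-400 border-red-500/40 animate-pulse",
};
```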
The Tech Stack
- Frontend: React 19, Vite 8, TypeScript, Tailwind 4
- Charts: Recharts for commodity price trends and risk score history
- AI: Claude (Sonnet for research, Opus for validation)
- Search: Brave Search API for current information
- Notifications: ntfy.sh for push notifications to mobile
- Automation: GitHub Actions for the twice-daily pipeline
- Data: JSON files committed to the repo, consumed by the frontend at build time
The data-as-JSON-in-repo approach is intentionally simple. There's no database, no API server, no runtime backend. The GitHub Actions pipeline produces JSON files, the frontend reads them at build time, and the static site deploys. Fast, cheap to host, zero runtime dependencies beyond a CDN.
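The whole contract between pipeline and frontend is then just a TypeScript interface over the committed JSON. The field names and file path below are assumptions about the shape, not the real schema; with Vite, a static JSON import is resolved at build time, so the deployed site never fetches at runtime:

```typescript
// Hypothetical shape of one committed snapshot file.
interface RiskSnapshot {
  generatedAt: string;            // ISO timestamp of the pipeline run
  overallRisk: "green" | "amber" | "red";
  confidence: number;             // 0-100, from the validation pass
  sections: Record<string, { status: string; updatedAt: string }>;
}

// In the app this would be a build-time import, e.g.:
// import snapshot from "./data/risk.json";

// Guard against a stalled pipeline: flag data older than the refresh window.
function isStale(snapshot: RiskSnapshot, now: Date, maxAgeHours = 24): boolean {
  const ageMs = now.getTime() - new Date(snapshot.generatedAt).getTime();
  return ageMs > maxAgeHours * 3_600_000;
}
```

A staleness check like this is worth having precisely because there's no backend to notice that the twice-daily run silently stopped.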
Building for Real Needs
This project exists because I have a genuine need for it. I'm not building a travel risk dashboard as a portfolio piece - I'm building it because I'm actually going on this trip and I want to make informed decisions about when and where to fly.
That real-world motivation shaped every decision. The two-agent validation pipeline exists because I can't afford bad data. The 24-hour freshness window exists because last week's travel advisory is useless when you're booking tomorrow's flight. The push notification exists because I don't want to remember to open a dashboard - I want the dashboard to tell me when something matters. The cyberpunk design exists because the gravity of the data deserves a serious interface.
The pattern of using AI for research-then-validation is something I'll carry to other projects. Any time you're aggregating information from multiple sources and accuracy matters, the two-agent approach - one to gather, one to verify - produces meaningfully better results than a single pass. It's slower and costs more in API calls, but for high-stakes data, the reliability is worth it.