In 2025, we joined Codegeist again and used the opportunity to build something that had been on my mind for a while: better observability for Forge apps without immediately spinning up a full external stack.
Our project is Forge Log Dashboards for Bitbucket (Devpost entry: Advanced Dashboards for Forge Logs).
The core question behind it was simple: when a Forge app grows beyond “small utility,” how do you investigate behavior quickly without context-switching all the time?
Why we built this during Codegeist
In day-to-day app work, logs are usually where debugging and incident analysis start. Forge gives you built-in logging, which is great. But as soon as complexity increases, you need richer filtering, comparisons over time, and repeatable views for your team.
For this hackathon, I wanted to explore a middle ground: keep analysis close to development work, inside Atlassian tooling, and still get more analytical depth than plain log browsing.
What we built
We built dashboards directly inside Bitbucket with widget-based visualizations and query-driven exploration.
- Dashboard widgets can be added, moved, and resized
- Log data can be explored with SQL-style queries
- Dashboards can be shared/exported as JSON
- Starter content helps with onboarding and testing
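To give a flavor of the query-driven exploration, here is a minimal sketch of how a SQL-style query over log entries can reduce to a filter plus group-by on the client. The `LogEntry` shape and field names are illustrative assumptions, not the app's actual schema or query engine:

```typescript
// Illustrative log entry shape -- not the app's real schema.
interface LogEntry {
  timestamp: string;
  level: "INFO" | "WARN" | "ERROR";
  message: string;
}

// A SQL-style query such as
//   SELECT level, COUNT(*) FROM logs WHERE level != 'INFO' GROUP BY level
// boils down to a filter + group-by over the loaded entries:
function countByLevel(logs: LogEntry[]): Record<string, number> {
  return logs
    .filter((e) => e.level !== "INFO")
    .reduce<Record<string, number>>((acc, e) => {
      acc[e.level] = (acc[e.level] ?? 0) + 1;
      return acc;
    }, {});
}
```

Doing this aggregation in the frontend keeps the feedback loop fast for interactive exploration, at the cost of having to load the relevant slice of log data first.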
Technically, the solution combines Forge backend components with a frontend-heavy analysis approach, so interactive exploration stays responsive.
What I personally learned from this project
1) Log quality matters more than log quantity.
The biggest improvement is often not “more logs,” but cleaner and more structured logs. Once fields are consistent, analysis gets dramatically easier.
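A minimal sketch of what "structured" means in practice: emit one JSON object per log line with consistent field names, instead of free-form strings. The helper and field names below are assumptions for illustration, not part of Forge's API:

```typescript
// Hypothetical helper: one JSON object per line, consistent field names.
function logEvent(
  level: string,
  event: string,
  fields: Record<string, unknown> = {}
): string {
  const entry = { ts: new Date().toISOString(), level, event, ...fields };
  const line = JSON.stringify(entry);
  console.log(line);
  return line;
}

// Instead of: console.log(`sync failed for repo ${repoId} after 3 tries`)
// you would write: logEvent("ERROR", "sync_failed", { repoId, attempts: 3 });
```

Once every line parses as JSON with the same keys, queries like "all `sync_failed` events with more than two attempts" become trivial, where regex-matching free-form messages is fragile.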
2) Developer UX is a feature, not a luxury.
If dashboards are slow or clunky, people stop using them. Fast feedback loops are essential for real debugging workflows.
3) Good defaults reduce friction.
Demo data and starter dashboards made it much easier to explain and test the concept quickly.
4) Platform limits shape architecture early.
Request/payload constraints force good engineering decisions around data transport, caching, and incremental loading.
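One concrete pattern that payload limits push you toward is cursor-based incremental loading: fetch data in bounded pages instead of one oversized response. The sketch below assumes a hypothetical `fetchPage` backend call; the app's real transport is not shown:

```typescript
// Cursor-based incremental loading under a payload size limit.
// `fetchPage` is a stand-in for whatever backend call returns one
// bounded page of results plus an optional continuation cursor.
interface Page<T> {
  items: T[];
  nextCursor?: string;
}

async function loadAll<T>(
  fetchPage: (cursor?: string) => Promise<Page<T>>
): Promise<T[]> {
  const all: T[] = [];
  let cursor: string | undefined;
  do {
    const page = await fetchPage(cursor);
    all.push(...page.items);
    cursor = page.nextCursor;
  } while (cursor !== undefined);
  return all;
}
```

The same loop structure also supports streaming results into widgets page by page, rather than waiting for the full set, which keeps the dashboard usable while data is still arriving.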
The hackathon reality
Like many hackathon projects, this one had a chaotic side as well — and for us it got very real in the final stretch. Shortly before the deadline, one developer laptop crashed and we suddenly had to recover parts of our work under serious time pressure. That changed the mood from “let’s polish” to “let’s rescue what matters most”: stabilize the core flow, keep the demo working, and make hard trade-offs about what to cut.
In hindsight, that moment was frustrating, but also one of the most valuable parts of the whole experience. It forced us to communicate clearly, prioritize brutally, and focus on what actually delivers value to users — not what looks fancy in a hackathon demo. That pressure test taught us more about our architecture and team workflow than a smooth week ever could.
What’s next
If we continue this project, the next priorities are clear:
- Cleaner dashboard scoping by repository/workspace
- More prebuilt widgets for common troubleshooting tasks
- Further hardening around ingestion and edge cases
Codegeist was a great forcing function to test this idea in a short, focused cycle. I’m happy with what we learned, and even more interested in where this can go next.
If you’re curious, here is the project page again:
https://devpost.com/software/advanced-dashboards-for-forge-logs