User Journeys

Key workflows across all DIAL user personas
Last updated: 2026-03-10
Personas:
👤 Unauthenticated
🧑‍💻 Member
🛡️ Admin
🔐 Authentication & Onboarding

Getting into DIAL — sign-up, sign-in, and joining an organization

1. Land on sign-in page
User arrives at the application. noAuthGuard redirects already-authenticated users directly to the dashboard.
/auth/signin
2. Sign up or sign in
New users register with email and password. Existing users authenticate via Firebase Auth. Password reset available via email link.
/auth/signup  |  /auth/reset
3. Join an organization via invite code
After sign-up, new members enter the invite code provided by their Admin. This links their profile to the organization and unlocks the team dashboard.
Alt path: Arriving via a direct invite link (/invite/:token) auto-populates the invite code and skips manual entry.
4. Redirect to dashboard
authGuard confirms authentication and the user lands on their role-appropriate dashboard — member view or admin view depending on their isAdmin flag.
/dashboard
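The routing decisions in steps 1–4 can be condensed into one pure function. This is a hypothetical sketch, not DIAL's actual guard code: `resolveLanding` and the `/auth/invite` route are assumptions, and the real logic lives in Angular's `authGuard`/`noAuthGuard`.

```typescript
// Hypothetical sketch of the redirect logic described above.
interface SessionState {
  isAuthenticated: boolean;
  isAdmin: boolean;
  orgId: string | null; // null until an invite code links the profile to an org
}

function resolveLanding(s: SessionState): string {
  // authGuard behavior: unauthenticated users are sent to sign-in.
  if (!s.isAuthenticated) return "/auth/signin";
  // Assumed: a signed-in user with no org is routed to the invite-code step.
  if (s.orgId === null) return "/auth/invite";
  // noAuthGuard behavior: authenticated users skip sign-in and land on the
  // dashboard; member vs. admin view is then chosen from the isAdmin flag.
  return "/dashboard";
}
```

The point of pulling this into a pure function is that the redirect policy becomes trivially unit-testable without a router.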
📋 Assessment Lifecycle

Creating, completing, and reviewing a DORA capability assessment

1. Start a new assessment
Admin or member initiates a new assessment from the dashboard. The assessment is created in Firestore with status: active and linked to the org.
/assessment/new
2. Work through the capability wizard
Team members rate each of the 20 capabilities (Emerging / Advancing / Leading) and provide qualitative notes about their current practices. Progress is saved to Firestore as capabilityResponses.
/assessment/:id/capabilities
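A minimal sketch of the data the wizard collects, under assumed shapes: the `CapabilityResponse` fields and the 1/2/3 numeric mapping for Emerging/Advancing/Leading are illustrative, not DIAL's actual Firestore schema.

```typescript
// Assumed shape of one entry in capabilityResponses (illustrative only).
type Rating = "Emerging" | "Advancing" | "Leading";

interface CapabilityResponse {
  capabilityId: string;
  rating: Rating;
  notes: string; // qualitative notes about current practices
}

// Assumed numeric mapping used for downstream scoring.
const RATING_SCORE: Record<Rating, number> = {
  Emerging: 1,
  Advancing: 2,
  Leading: 3,
};

function scoreResponse(r: CapabilityResponse): number {
  return RATING_SCORE[r.rating];
}
```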
3. AI generates assessments per capability
For each capability, the assessCapability Cloud Function sends the team's responses to Claude Sonnet. The AI returns a maturity assessment grounded in DORA research.
🤖 Claude Sonnet processes capability-specific context in up to 3 conversational turns. The turn budget is managed server-side in Cloud Functions.
4. Review summary and radar chart
The assessment summary page shows a radar chart of scores across all 4 categories, a composite maturity badge, and per-capability AI assessments. Scores are stored in assessmentResults.
/assessment/:id/summary
5. Explore recommendations
generateRecommendations produces prioritized improvement recommendations for each capability. Members can drill into any capability for deeper AI guidance.
/assessment/:id/results
6. Export to PDF
The full assessment report — scores, AI assessments, and recommendations — can be exported as a formatted PDF via PdfExportService (jsPDF + autotable).
🏢 Organization & Team Management

Admin-only: setting up the org, teams, and inviting members

1. Create organization
Admin creates an organization with name and settings. A unique invite code is generated automatically and stored in the organizations collection.
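A sketch of one way the unique invite code might be generated. The alphabet, length, and lack of collision handling are all assumptions; the doc only states that a unique code is generated and stored in the organizations collection.

```typescript
// Assumed: a short code drawn from an alphabet that omits easily
// confused characters (0/O, 1/I/L). Uniqueness against existing codes
// would be checked against Firestore in the real system.
const CODE_ALPHABET = "ABCDEFGHJKMNPQRSTUVWXYZ23456789";

function generateInviteCode(length = 8): string {
  let code = "";
  for (let i = 0; i < length; i++) {
    code += CODE_ALPHABET[Math.floor(Math.random() * CODE_ALPHABET.length)];
  }
  return code;
}
```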
2. Create teams
Admin subdivides the org into teams (e.g., Platform, Product, QA). Teams are stored in the teams collection and displayed in org dashboards.
3. Invite members
Admin shares the invite code or a direct invite link (/invite/:token). Members use this to join the org after signing up. Invite tokens are stored in the invites collection.
4. Manage member roles
Admin can view all members, update roles (member → admin), and assign members to teams. User profiles store isAdmin flag and orgId.
5. Monitor org insights
Admin dashboard shows org-level aggregate scores, AI insights generated by generateInsights, and assessment history across all teams.
/org/insights
📊 Dashboard & Analytics

Viewing assessment results, radar charts, and org health at a glance

📈 Radar Chart
Chart.js radar visualization shows capability scores across all 4 categories. Each axis represents a category aggregate score.
🏅 Composite Score
An overall maturity badge (Emerging / Advancing / Leading) is calculated from the weighted average of all 20 capability scores.
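The scoring math behind the radar axes and the composite badge can be sketched as two pure functions. Only the Emerging/Advancing/Leading badge levels, the 4 category axes, and the "weighted average of all 20 capability scores" come from the doc; the default weight of 1 and the badge thresholds are assumptions.

```typescript
interface CapabilityScore {
  category: string; // one of the 4 DORA categories
  score: number;    // assumed scale: 1 = Emerging, 2 = Advancing, 3 = Leading
  weight?: number;  // assumed default weight of 1
}

// Per-category mean, as plotted on each radar axis.
function categoryAverages(scores: CapabilityScore[]): Map<string, number> {
  const sums = new Map<string, { total: number; n: number }>();
  for (const s of scores) {
    const e = sums.get(s.category) ?? { total: 0, n: 0 };
    e.total += s.score;
    e.n += 1;
    sums.set(s.category, e);
  }
  const out = new Map<string, number>();
  for (const [cat, { total, n }] of sums) out.set(cat, total / n);
  return out;
}

// Weighted average across all capabilities, mapped to a maturity badge.
function compositeBadge(scores: CapabilityScore[]): string {
  let total = 0;
  let weightSum = 0;
  for (const s of scores) {
    const w = s.weight ?? 1;
    total += s.score * w;
    weightSum += w;
  }
  const avg = total / weightSum;
  if (avg >= 2.5) return "Leading";   // thresholds assumed, not documented
  if (avg >= 1.5) return "Advancing";
  return "Emerging";
}
```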
📋 Per-Capability Drill-Down
Click any capability on the radar or results table to see the AI assessment, team responses, and recommendations in detail.
🔍 Org-Level Insights
Admins see AI-generated strategic insights synthesizing patterns across all team assessments. Stored in orgInsights collection.
⚖️ Team Comparison

Side-by-side radar charts comparing capability scores across teams

1. Select teams to compare
Admin selects 2 or more teams from the org. The compare view loads assessment results for each selected team.
/compare
2. View side-by-side radar charts
Each team's scores are rendered on overlapping or adjacent radar charts. Score differences are highlighted to surface the largest gaps between teams.
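The gap-highlighting step can be sketched as a ranking over absolute score differences. The function name and input shape are hypothetical; the doc only says the largest gaps between teams are surfaced.

```typescript
// Given two teams' per-capability scores, rank shared capabilities by
// absolute score difference and return the top N gaps.
function largestGaps(
  a: Record<string, number>,
  b: Record<string, number>,
  topN = 3,
): Array<{ capability: string; gap: number }> {
  const gaps: Array<{ capability: string; gap: number }> = [];
  for (const cap of Object.keys(a)) {
    if (cap in b) {
      gaps.push({ capability: cap, gap: Math.abs(a[cap] - b[cap]) });
    }
  }
  gaps.sort((x, y) => y.gap - x.gap);
  return gaps.slice(0, topN);
}
```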
3. Review AI cross-team insights
AI insights highlight systemic patterns and capability gaps that appear across multiple teams, surfacing org-wide improvement opportunities.
🗓️ Assessment History

Viewing past assessments and resuming with pre-populated responses

1. Browse past assessments
The history page lists all completed assessments for the org, sorted by date. Each entry shows the composite score and assessment date.
/history
2. Review a past assessment
Drill into any historical assessment to see the full radar chart, capability scores, AI assessments, and recommendations as they were at the time.
3. Copy responses for a new assessment
A previous assessment's capabilityResponses can be copied as a starting point for a new assessment, so teams only update what has changed rather than starting from scratch.
💡 This enables longitudinal tracking — teams can see how their capability maturity evolves across multiple assessment cycles.
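The copy-forward step can be sketched as cloning the previous `capabilityResponses` into a fresh assessment. The `Assessment` shape and `cloneForNewCycle` name are assumptions; only "status: active on creation" and "copy responses as a starting point" come from the doc.

```typescript
// Assumed shapes, illustrative only.
interface Assessment {
  id: string;
  status: "active" | "completed";
  capabilityResponses: Record<string, { rating: string; notes: string }>;
}

function cloneForNewCycle(prev: Assessment, newId: string): Assessment {
  return {
    id: newId,
    status: "active", // new assessments start active, per the lifecycle above
    // Deep-copy so edits in the new cycle never mutate the historical record.
    capabilityResponses: structuredClone(prev.capabilityResponses),
  };
}
```

The deep copy is the important design choice here: it keeps the historical assessment immutable, which is what makes the longitudinal comparison trustworthy.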
🤖 AI-Guided Assessment Conversation

Exploring a capability in depth with Claude Sonnet

1. Open a capability for deeper exploration
From the assessment results or dashboard, a member selects a capability to explore via AI chat. The chatAboutRecommendations Cloud Function is invoked.
2. Ask questions across 3 turns
The member can ask follow-up questions about the AI's assessment or recommendations. Each exchange consumes one turn of the fixed 3-turn budget per capability session.
🤖 Turn budget: 3 turns per capability, tracked server-side. Claude Sonnet is aware of the remaining turns and adjusts its responses to maximize usefulness within the budget.
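Server-side turn accounting like this can be captured in a small class. This is a minimal sketch assuming only the fixed 3-turn budget per capability session; the class and method names are hypothetical, not the Cloud Function's actual internals.

```typescript
// Minimal sketch of per-session turn accounting.
class TurnBudget {
  private used = 0;

  constructor(private readonly limit = 3) {}

  get remaining(): number {
    return this.limit - this.used;
  }

  // Consume one turn; returns false once the budget is exhausted,
  // signaling the graceful wrap-up described in step 4 below.
  consume(): boolean {
    if (this.used >= this.limit) return false;
    this.used += 1;
    return true;
  }
}
```

Tracking this server-side (rather than trusting the client) is what makes the budget enforceable, and exposing `remaining` is how the model can be told how many turns are left.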
3. Receive contextual recommendations
The AI grounds its guidance in the team's specific responses, current maturity level, and DORA research. Recommendations are practical and scoped to the team's context.
4. Budget exhausted — graceful wrap-up
When the turn budget is exhausted, Claude summarizes the key actions and directs the member back to the full results view for their remaining capabilities.