How to use the DPE incident dashboard

A guide to running the query, loading the data, reading every panel, and exporting a report.

1. End-to-end workflow

  1. Open the dashboard (DPE-dashboard-fork.html) in your browser.
  2. Click Show Kusto query at the top to copy the canonical query, then run it in the Kusto Web UI against icmdataro.centralus.kusto.windows.net / IcMDataWarehouse.
  3. In the Kusto result grid: Ctrl+A then Ctrl+C.
  4. Paste into the dashboard's text area — rendering is automatic on paste, and the textarea is cleared immediately so the data does not linger in the DOM.
  5. Use the filters to narrow the dataset; every chart, KPI, and the table updates live.
  6. Use Export ▾ (top) for a full Markdown report, or Export incidents (MD) (above the table) to dump just the filtered rows.
Privacy. The page is pure HTML/JS. The pasted data is parsed in browser memory and is never written to disk, browser storage, or any server. The only output that touches your filesystem is a Markdown file you explicitly save via the browser's Save dialog. Refresh the page and all in-memory data is gone.
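
The paste-and-render step can be sketched as follows. This is not the dashboard's actual parser, just a minimal illustration assuming the Kusto grid copies as tab-separated text with a header row (the usual Ctrl+A / Ctrl+C behaviour):

```javascript
// Sketch only: split a Kusto grid paste (tab-separated, first row =
// column headers) into an array of row objects keyed by column name.
function parseKustoPaste(text) {
  const lines = text.trim().split(/\r?\n/);
  const headers = lines[0].split('\t');
  return lines.slice(1).map(line => {
    const cells = line.split('\t');
    const row = {};
    headers.forEach((h, i) => { row[h] = cells[i] ?? ''; });
    return row;
  });
}
```

Because everything stays in a local variable like this, refreshing the page really does discard all data.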

2. Source columns

| Field | Source | Meaning |
| --- | --- | --- |
| IncidentId | IcM | Unique incident id. Linked in the table to the IcM portal. |
| SupportTicketId | IcM | Linked support case (may be empty). |
| CreateDate | IcM | When the incident was first created. |
| ImpactStartDate | IcM | When customer impact began. |
| MitigateDate | IcM | When impact stopped (mitigation applied). Empty if still active. |
| ModifiedDate | IcM | Last time the incident was touched. Used for the Period filter and the timeline. |
| TTM | Computed | Time to mitigate, in hours. |
| TTR | Computed | Time to resolve, in hours. Empty when the incident is still open. |
| Severity | IcM | 1 (highest) → 4 (lowest). |
| TransferCount | Computed | Number of team-to-team hand-offs the incident went through. |
| IcMTeam | Mapped | Which of your tracked teams owned/touched the incident (e.g. DPE, NIT, DPE+NIT, DPE+VCPE). |
| TransferSet | IcM | Ordered list of every owning team with timestamps. Drives the queue-time and heatmap calculations. |

3. Filters panel

Period

The button label shows All, the single value, or N selected. Empty selection means no period filter.

Transfer rule

Search

Free-text match across IncidentId, IcMTeam, and every destination team in the transfer set. Useful for pulling a specific case (700966944) or every incident that touched a specific team (expressroute).
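
A hypothetical predicate mirroring that behaviour: a case-insensitive substring match over IncidentId, IcMTeam, and each transfer destination. The function and field names (`matchesSearch`, `transferDestinations`) are illustrative, not the dashboard's internals:

```javascript
// Illustrative search predicate: true when the query appears (case-
// insensitively) in the incident id, the mapped team, or any destination.
function matchesSearch(row, query) {
  const q = query.trim().toLowerCase();
  if (!q) return true; // empty search matches everything
  const haystacks = [
    String(row.IncidentId),
    row.IcMTeam || '',
    ...(row.transferDestinations || []),
  ];
  return haystacks.some(s => s.toLowerCase().includes(q));
}
```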

Tracked queues

Comma-separated list of team names whose ownership time should be summed into Queue time (defaults to CLOUDNET\EEECloudnetSev2, CLOUDNET\NetworkingNinjas). Comparison is case-insensitive and tolerant of escaped backslashes (\\) and surrounding quotes from the JSON-style TransferSet. Editing this field recomputes the Queue time column and KPIs immediately.
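
The name normalisation implied above can be sketched like this (function name is illustrative): strip surrounding quotes, collapse the escaped `\\` from JSON-style strings to a single backslash, and lowercase for the case-insensitive comparison:

```javascript
// Sketch of tracked-queue name normalisation, assuming JSON-style input
// like "\"CLOUDNET\\\\EEECloudnetSev2\"".
function normalizeTeamName(name) {
  return name
    .trim()
    .replace(/^"+|"+$/g, '')   // drop surrounding quotes
    .replace(/\\\\/g, '\\')    // collapse escaped backslashes: \\ -> \
    .toLowerCase();            // case-insensitive comparison
}
```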

Severity / IcM team chips (tri-state)

Each chip cycles through three states on click: neutral (no effect) → Include (only matching rows are kept) → Exclude (matching rows are hidden) → back to neutral.

Includes and excludes can be mixed across groups, e.g. include Sev 2, Sev 3 while excluding a specific noisy team.
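
The cycle is a three-state loop. A minimal sketch, with state names of my choosing rather than the dashboard's actual labels:

```javascript
// Tri-state chip cycle: neutral -> include -> exclude -> neutral.
const CHIP_STATES = ['neutral', 'include', 'exclude'];
function nextChipState(state) {
  const i = CHIP_STATES.indexOf(state);
  return CHIP_STATES[(i + 1) % CHIP_STATES.length];
}
```

So one click on a neutral chip includes it, a second click excludes it, and a third clears it, matching the scenarios in section 10.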

Heatmap selection

Click any tile in Top transfer destinations to constrain the table to incidents whose TransferSet hit that destination at least once. Click again to deselect. Multi-select is supported (OR'd).

4. KPI cards

| Card | What it shows | How to read it |
| --- | --- | --- |
| Visible incidents | Distinct IncidentId after filters. | The "how many incidents are we looking at" number. |
| Visible rows | Total parsed rows after filters. | Equal to Visible incidents in normal data; differs only if the export has duplicates. |
| Zero transfers | Count and % of visible incidents with TransferCount = 0. | Routed correctly first try. Higher is better. |
| Zero or one transfer | Count and % with TransferCount ≤ 1. | Minimal hand-off — useful as a routing-health KPI. |
| Avg TTM (h) | Mean of TTM over rows that have one. | Detail line shows p50 and p90. |
| Avg TTR (h) | Mean of TTR. | Open incidents (empty TTR) are excluded; the worst cases may simply not be done yet. |
| Avg transfers | Mean TransferCount. | High avg + high p90 = some incidents bounce a lot. |
| Total queue time (h) | Sum of QueueHours across visible rows. | How much time, in total, your tracked teams owned these incidents. |
| Avg queue time / incident (h) | Mean QueueHours. | Per-incident workload signal for the tracked queues. |

5. Queue time calculation

For each incident the dashboard walks the TransferSet chronologically. The team in entry i owns the case from entry[i].at until entry[i+1].at. The final segment is capped at MitigateDate → ModifiedDate → "now", whichever is the first available. Only segments whose owning team is in the Tracked queues input contribute to QueueHours.

Worked example — transfer set:

2026-03-18T22:25:06Z -> CLOUDNET\EEECLOUDNETSEV2
2026-03-19T00:09:22Z -> CLOUDNET\EXPRESSROUTESUPPORT
2026-03-19T00:11:10Z -> CLOUDNET\EEECLOUDNETSEV2
2026-03-19T00:11:33Z -> CLOUDNET\EXPRESSROUTESUPPORT

With the default tracked queue CLOUDNET\EEECloudnetSev2, that team owns two segments: 22:25:06 → 00:09:22 (≈ 1.74 h) and 00:11:10 → 00:11:33 (≈ 0.01 h), so

QueueHours ≈ 1.75 h.
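
The walk can be sketched as below. The entry shape `{ at, team }` and the function names are assumptions for illustration; the cap is applied by passing the first available of MitigateDate, ModifiedDate, or the current time as `end`:

```javascript
// Sketch of the queue-time walk: entry i's team owns the case from
// entry[i].at to entry[i+1].at; the final segment runs until `end`.
// Only segments owned by a tracked team are summed.
function queueHours(transferSet, trackedTeams, end) {
  const tracked = new Set(trackedTeams.map(t => t.toLowerCase()));
  let hours = 0;
  for (let i = 0; i < transferSet.length; i++) {
    const from = new Date(transferSet[i].at);
    const to = i + 1 < transferSet.length
      ? new Date(transferSet[i + 1].at)
      : new Date(end);
    if (tracked.has(transferSet[i].team.toLowerCase())) {
      hours += (to - from) / 36e5; // ms -> hours
    }
  }
  return hours;
}
```

Running it on the worked example above reproduces the ≈ 1.75 h figure.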

Queue time colour dot

Each value in the Queue time (h) column shows a dot comparing the incident to the full-sample mean, with a small ±1% tolerance to avoid flicker: green when the incident is below the mean, yellow when it is within the tolerance band around the mean, red when it is above.

Hover the dot to see the exact comparison.
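
The classification can be sketched like this (the green-below / red-above mapping is assumed from the colour convention, not confirmed by the source):

```javascript
// Illustrative dot classification against the full-sample mean,
// with a ±1% (tol = 0.01) tolerance band to avoid flicker.
function queueDot(value, mean, tol = 0.01) {
  if (value < mean * (1 - tol)) return 'green';
  if (value > mean * (1 + tol)) return 'red';
  return 'yellow';
}
```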

6. Charts

Incidents over time

Compact monthly bar chart bucketed by ModifiedDate (matching the Period filter). The peak month is called out below the chart.
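
The bucketing amounts to grouping rows by the year-month of ModifiedDate. A sketch, assuming ISO 8601 timestamps so the "YYYY-MM" prefix is the bucket key:

```javascript
// Group rows into monthly buckets keyed by "YYYY-MM" of ModifiedDate.
function bucketByMonth(rows) {
  const buckets = new Map();
  for (const row of rows) {
    const key = String(row.ModifiedDate).slice(0, 7); // "YYYY-MM"
    buckets.set(key, (buckets.get(key) || 0) + 1);
  }
  return buckets;
}
```

The peak month called out below the chart is simply the bucket with the largest count.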

Severity breakdown

One bar per severity. Bar length = count, percentage = share of the visible set.

Top transfer destinations (heatmap)

Top 10 teams that incidents got transferred into, counted once per incident (so an incident that pings a team three times still counts as 1). Click a tile to filter the rest of the dashboard by that destination; click again to deselect.
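
The once-per-incident counting can be sketched as below, assuming each incident carries a list of destination teams (the `destinations` field name is illustrative):

```javascript
// Count each destination team at most once per incident: an incident
// that transfers into the same team three times still contributes 1.
function destinationCounts(incidents) {
  const counts = new Map();
  for (const inc of incidents) {
    for (const team of new Set(inc.destinations)) {
      counts.set(team, (counts.get(team) || 0) + 1);
    }
  }
  return counts;
}
```

Sorting the resulting map by count and taking the first 10 entries gives the heatmap tiles.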

Team distribution

One row per IcMTeam (DPE, NIT, etc.). Columns: Share bar, Count (+ % of visible), and avg / p50 / p90 for TTM, TTR, and Transfers per team.

7. Filtered incidents table

Up to 500 rows. Click any column header to sort (click again to flip direction). The IncidentId is a link to the IcM portal. The Transfer set cell shows the first 5 destinations and a (+N) suffix when truncated. The Queue time (h) column is sortable and shows the green/yellow/red dot described above.

If you need more than 500 rows, use Export incidents (MD) — it includes all matching rows in the same sort order as the table, plus the same Summary KPIs / Severity / Team distribution sections as the main report.

8. Export

Two Markdown export options are available:

  1. Export ▾ (top): the full Markdown report for the current view.
  2. Export incidents (MD) (above the table): just the filtered rows, not capped at 500, preceded by the same summary sections.

Both buttons are disabled until data is loaded and re-disabled by Clear all. Each export header records the active filter scope (period, transfer mode, sev/team include & exclude sets, search, tracked queues) so the file is self-describing.

9. Statistics primer (avg vs p50 vs p90)

Incident timings are right-skewed: many fast incidents, a long tail of slow ones. So the three numbers tell different stories.

| Statistic | Definition | What it answers |
| --- | --- | --- |
| Average (mean) | Sum ÷ count. | "On total volume, how much time per incident?" — sensitive to outliers. |
| p50 (median) | Middle value when sorted. | "What does the typical incident look like?" |
| p90 | 90% of incidents are at or below this value. | "What does the bad case look like?" — the "1 in 10 worst" line. |

Worked example — 10 TTRs (hours), sorted: 1, 2, 2, 3, 4, 5, 6, 8, 12, 200. Mean = 24.3 h, p50 = 4.5 h, p90 = 12 h (nearest-rank): the single 200 h outlier drags the mean far above the typical incident.
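
The three statistics on that sample, computed directly. p50 uses midpoint interpolation for an even count; p90 uses the nearest-rank rule (the smallest value with at least 90% of the sample at or below it). The dashboard's exact percentile method is not specified, so treat this as one common choice:

```javascript
// Mean, median (midpoint interpolation), and p90 (nearest-rank).
function mean(xs) { return xs.reduce((a, b) => a + b, 0) / xs.length; }
function median(xs) {
  const s = [...xs].sort((a, b) => a - b);
  const mid = s.length / 2;
  return s.length % 2 ? s[Math.floor(mid)] : (s[mid - 1] + s[mid]) / 2;
}
function p90(xs) {
  const s = [...xs].sort((a, b) => a - b);
  return s[Math.ceil(0.9 * s.length) - 1]; // nearest-rank
}
```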

Rule of thumb: if mean >> p50 you have outliers; if p90 >> p50 you have a long tail worth investigating.

10. Common scenarios

"Show me Sev 2 incidents from FY26Q3 that bounced more than 5 times"

  1. Period → By fiscal quarter → click Value → tick FY26Q3
  2. Severity chips → click Sev 2 once (becomes Include).
  3. Transfer rule → At least, Count = 5

"Hide everything routed to a specific team"

  1. IcM team chips → click the team twice (cycles to Exclude).

"Compare DPE-only vs DPE+NIT in queue time"

  1. IcM team chips → click DPE once (Include) → check the Total queue time KPI.
  2. Click DPE twice more to clear, then click DPE+NIT → compare.

"Which team has the slowest tail?"

  1. Clear all filters.
  2. In Team distribution, compare the p90 of TTR across teams.
  3. Big delta between p50 and p90 = unpredictable; large p90 alone = consistently slow tail.

11. Troubleshooting

Browser cache: after any change to DPE-dashboard-fork.html, do a hard reload (Ctrl+F5) before pasting again.