Data quality is how Insite makes reporting defensible
Data quality in Insite is not positioned as a “cleanup project”. It’s a control layer across the data lifecycle (aggregation → import staging → reporting), so the figures you submit are defensible: you can show what the number is, how trustworthy it is, and what is undermining it.
Most organisations discover data issues too late (during the submission crunch): extracts disagree, definitions drift, missing tags break classification, and teams reconcile in spreadsheets that no one can defend when questions land.
Insite flips that by making data quality a first-class, measurable control, without requiring you to “get to perfect data” before you can keep moving.
What “data quality” means here
Insite treats quality as a set of measurable dimensions that matter for compliance reporting:
- Accuracy — does the value reflect reality (and reconcile to trusted sources)?
- Completeness — are required fields and records present?
- Consistency — are definitions and mappings applied the same way across sources and time?
- Validity — does the value meet required formats/rules/ranges?
- Uniqueness — are duplicates controlled (no double counting)?
- Freshness — is the data current enough for the reporting cut?
This matters because social compliance reporting often fails not due to one “wrong number”, but because governance can’t show how reliable the number is and what it depends on.
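To make the dimensions concrete, here is a minimal sketch of scoring two of them (completeness and validity) over a small batch of records. The field names, date rule, and sample records are illustrative assumptions, not Insite’s actual schema.

```python
# Hedged sketch: scoring completeness and validity over a small batch.
# REQUIRED_FIELDS and the date rule are invented for illustration.
REQUIRED_FIELDS = ["employee_id", "site", "hire_date"]

def completeness(records):
    """Share of required field slots that are present and non-empty."""
    total = len(records) * len(REQUIRED_FIELDS)
    present = sum(
        1 for r in records for f in REQUIRED_FIELDS if r.get(f) not in (None, "")
    )
    return present / total if total else 1.0

def validity(records, rules):
    """Share of present values that pass their field's format rule."""
    checked = passed = 0
    for r in records:
        for field, rule in rules.items():
            value = r.get(field)
            if value in (None, ""):
                continue  # a missing value is a completeness issue, not validity
            checked += 1
            passed += bool(rule(value))
    return passed / checked if checked else 1.0

records = [
    {"employee_id": "E1", "site": "ZA-01", "hire_date": "2024-03-01"},
    {"employee_id": "E2", "site": "", "hire_date": "01/03/2024"},
]
rules = {"hire_date": lambda v: len(v) == 10 and v[4] == "-" and v[7] == "-"}
print(round(completeness(records), 2))  # 0.83: one required field is empty
print(validity(records, rules))         # 0.5: one date is in the wrong format
```

Note how the two dimensions fail independently: the second record drags down completeness (empty site) and validity (non-ISO date) for different reasons, which is exactly why they are tracked separately.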
The core approach: controls mapped to quality dimensions
Insite uses common control types (for example: reconciliations, validations, reasonableness checks, and analytical review) and links them explicitly to the quality dimensions above.
That turns “data quality” from gut feel into a shared operational language:
- you can see what kind of check failed,
- which dimension it affects (completeness vs freshness vs validity),
- and why it matters for defensible reporting.
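That linkage can be sketched as a simple lookup from control types to the dimensions they exercise. The mapping below is an assumption for illustration, not Insite’s actual configuration.

```python
# Illustrative mapping from the control types named above to quality
# dimensions. The specific pairings are assumptions for this sketch.
CONTROL_DIMENSIONS = {
    "reconciliation": ["accuracy", "completeness"],
    "validation": ["validity", "completeness"],
    "reasonableness_check": ["accuracy", "consistency"],
    "analytical_review": ["consistency", "freshness"],
}

def affected_dimensions(failed_controls):
    """Translate failed control types into the quality dimensions they affect."""
    dims = set()
    for control in failed_controls:
        dims.update(CONTROL_DIMENSIONS.get(control, []))
    return sorted(dims)

print(affected_dimensions(["validation", "analytical_review"]))
# ['completeness', 'consistency', 'freshness', 'validity']
```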
Two-level experience: trust at a glance + drill-down truth
1) Quality Bar: a single trust indicator per report
The Quality Bar is a single percentage that rolls up the quality of the fields used by a report across its contributing metrics — a fast “can we trust this?” signal for leadership.
This is the board-grade output: it converts an invisible risk (“we think the spreadsheet is right”) into an explicit trust position.
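A roll-up in the spirit of the Quality Bar can be sketched as follows; the plain average used here is an assumption, since the document does not describe the actual weighting across contributing metrics.

```python
# Minimal sketch of a report-level trust roll-up. A plain average of
# per-field scores is assumed; the real weighting is not described here.
def quality_bar(field_scores):
    """field_scores: {field_name: score in [0, 1]} for fields the report uses."""
    if not field_scores:
        return None  # no contributing fields, so no trust signal to show
    return round(100 * sum(field_scores.values()) / len(field_scores), 1)

print(quality_bar({"headcount": 1.0, "spend_local": 0.9, "training_hours": 0.65}))
# 85.0
```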
2) Data Quality Report: root cause and impact
The Data Quality Report provides the “why” and the “where”:
- quantified quality performance across dimensions,
- drill-down to the specific fields/collections contributing to quality outcomes,
- visibility into which reports/outputs are impacted by gaps.
This is the operations-grade output: it lets teams fix the right things instead of “hunting across systems”.
Catch issues early: the Importer staging area (and why it’s not a blocker)
Insite’s Importer is a staging area where data is uploaded, aligned, and prepared without affecting live/production data. It’s often the first place quality issues become visible, before aggregation and reporting.
This is where teams replace ad‑hoc spreadsheet “patches” with repeatable, explainable fixes applied under a defensible rule: mapping and alignment, standardising codes, and controlled derivations whose transformation path is auditable.
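The “flag, don’t block” behaviour described in the principle below can be sketched like this; the finding shape and check names are assumptions for illustration.

```python
# Sketch of flag-don't-block staging: checks run against an uploaded batch
# and record findings, but the batch is still accepted for import.
def stage_batch(records, checks):
    findings = []
    for row, record in enumerate(records):
        for name, check in checks.items():
            if not check(record):
                findings.append({"row": row, "check": name})
    # Imperfect data is accepted; findings feed quality reporting instead.
    return {"accepted": True, "rows": len(records), "findings": findings}

checks = {"has_site": lambda r: bool(r.get("site"))}
result = stage_batch([{"site": "ZA-01"}, {"site": ""}], checks)
print(result["findings"])  # [{'row': 1, 'check': 'has_site'}]
```

The design choice is the last line of the function: acceptance is unconditional, and the findings list (not a rejection) is what carries the quality signal forward.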
Key principle: Insite does not block you from importing data just because it has quality issues.
That’s a big deal in the real world:
- many quality issues take months to fix because they require process changes and sometimes upstream system changes,
- you still need to report during that period,
- and the organisation needs a defensible way to state what is reliable, what is not, and what is impacted.
Insite supports this operating reality by letting you proceed while making the trust position explicit via the Quality Bar and the drill-down in the Data Quality Report. Leadership can then declare the trust position and its limitations alongside the pack, and withstand audit/verification scrutiny.
Some issues still must be fixed upstream (missing capture, process gaps, master-data discipline, tagging, and evidence realities) and those changes take time. Teams drive these to closure via the exception closure list (owners + due dates), without hiding workarounds in spreadsheets.
Missing/incomplete data isn’t a nuisance — it’s a compliance risk
Insite is explicit about the reality: missing or incomplete data can be the difference between being able to aggregate/report and being unable to support a defensible submission.
This is especially true where:
- schemas and mandatory fields must be met for consistent processing, and
- classification/tagging affects how information is treated in scorecards and regulatory views (including “system tags” that influence report treatment).
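A mandatory-field and tag check of the kind described above can be sketched as follows. The schema and tag vocabulary are invented for this sketch; real schemas and “system tags” depend on the reporting framework.

```python
# Illustrative mandatory-field and tag check. SCHEMA is invented for the
# sketch, not a real Insite schema.
SCHEMA = {
    "mandatory": ["supplier_id", "spend"],
    "known_tags": {"local", "black_owned"},
}

def classify_issues(record):
    """Return the schema/tagging issues that would undermine aggregation."""
    issues = []
    for field in SCHEMA["mandatory"]:
        if record.get(field) in (None, ""):
            issues.append(f"missing:{field}")
    for tag in record.get("tags", []):
        if tag not in SCHEMA["known_tags"]:
            issues.append(f"unknown_tag:{tag}")
    return issues

print(classify_issues({"supplier_id": "S1", "tags": ["local", "bee"]}))
# ['missing:spend', 'unknown_tag:bee']
```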
The “muscle memory”: pre/post aggregation checklists
Insite includes pre/post aggregation checklists (People, Procurement, Training) to formalise governance:
- use pre-aggregation checks to reduce misalignment issues before authorisation,
- reject and re-upload corrected files when issues can’t be resolved safely in-place,
- standardise the discipline so outcomes don’t depend on a few stressed experts.
The point isn’t bureaucracy — it’s repeatability and defensibility.
“Defensible numbers”, not just reports
Insite’s biggest value isn’t that it produces an output document — it’s that it produces defensible numbers:
- leadership gets a clear trust signal (per report),
- operations gets field-level truth about what failed and what it impacts,
- teams can keep reporting while quality improves over time,
- the organisation reduces submission surprises and audit risk because limitations are visible and explainable.