B‑BBEE Pack — what you get (client-facing spec)
This is the standard, template‑driven deliverables specification for a B‑BBEE pack produced from a controlled baseline (governed inputs + repeatable rules).
Outcome: a verification‑ready B‑BBEE pack leadership can stand behind and defend, with traceability from source → metric → scorecard line item, plus an evidence index, exception closure trail, pack provenance, and approval pages.
What this enables (in plain English):
- a declared measured entity scope and code set that stays consistent across the pack,
- a verifier‑friendly evidence and traceability trail tied to each scored line item,
- a clear review position: what’s supported by evidence, what’s missing/expired, who owns closure, and by when,
- optional, quantified non‑binding uplift scenarios framed as sensitivity analysis (no recommendations).
Important: this document contains no client figures. The actual pack we deliver is client‑specific (your measured entity scope, your figures, and your evidence pack references) and follows this same consistent structure.
What this pack will not include
- “Recommendations”, “strategy”, or bespoke advisory narrative.
- A prescriptive improvement programme. Where uplift scenarios are included, they are non‑binding and framed as quantified “if X were true…” scenarios with explicit evidence/owner dependencies.
- Any claim that points/levels are “guaranteed” without the required evidence; missing/expired evidence is treated as an exception, not hand‑waved.
Usage rules (non‑negotiable)
- One pack per measured entity/company per measurement period.
- Scope is explicitly stated and consistent across every output: the measured entity and any included subsidiaries/operating units (if applicable).
- Applicable code set is declared up front: Generic Codes and/or Construction Charter (and the entity’s classification/thresholds as applicable).
- Commentary is limited to:
  - scope/definitions,
  - objective comparisons (current vs prior period; targets/thresholds where applicable),
  - exceptions and closure status.
- Every scorecard/schedule includes:
  - measurement period label (start → end),
  - code set label (Generic / Construction Charter),
  - scope label (measured entity; optional sub‑scope where required),
  - Insite export name (or report name), and
  - export date/time.
- Preserve traceability: source → metric → scorecard line item → evidence.
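The source → metric → scorecard line item → evidence chain above can be pictured as linked records. A minimal sketch for illustration only (all class and field names here are hypothetical, not Insite's actual data model):

```python
from dataclasses import dataclass, field

# Hypothetical records illustrating the traceability chain:
# source record -> metric -> scorecard line item (with evidence refs).
@dataclass
class SourceRecord:
    system: str          # e.g. a payroll or training extract
    extract_date: str    # date of the controlled extract
    record_id: str

@dataclass
class Metric:
    name: str
    value: float
    sources: list[SourceRecord] = field(default_factory=list)

@dataclass
class LineItem:
    element: str                  # e.g. "Skills Development"
    points_claimed: float
    metrics: list[Metric] = field(default_factory=list)
    evidence_refs: list[str] = field(default_factory=list)  # evidence index pointers

def trace(item: LineItem) -> list[str]:
    """Walk a scored line item back to its contributing source records."""
    return [f"{s.system}:{s.record_id} ({s.extract_date})"
            for m in item.metrics for s in m.sources]
```

For example, a Skills Development line item backed by one training record would trace to `["training:T-001 (2024-02-29)"]`; the point is that every claimed figure resolves to identifiable source records and evidence references.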
Authorisation + release review
- Insite includes a data owner authorisation step when promoting final imports from staging to production for the reporting cut.
- Final release review/approval of the exported pack is completed outside Insite (review meeting and/or signature pages).
Pack inputs (what must be defined up front)
- Measured entity/company (legal name; registration/tax identifiers as required by the verifier)
- Measurement period (start → end) and intended verification date (if known)
- Applicable code set(s): Generic and/or Construction Charter (plus entity classification/thresholds as applicable)
- Organisational boundary:
  - included subsidiaries/operating units (if any),
  - exclusions (if any),
  - consolidation rules (where applicable)
- Source systems and controlled extracts (as agreed), including extract dates/timestamps:
  - people/payroll/HR,
  - training/HRD,
  - procurement/AP/vendor master,
  - any other systems required for the selected code set
- Evidence sources (where evidence lives, ownership, and how it will be provided to the verifier)
Deliverables (artifacts you receive)
- B‑BBEE scorecard pack (PDF): versioned scorecard outputs populated with your measured entity scope + figures, for the selected code set(s), including overall results and element-level breakdowns.
- Working schedules/workbook (XLSX, where required): structured schedules aligned to the scorecard numbers (for review and verifier workflows).
- Evidence index (table): for every scored line item, the required evidence list, evidence reference/location, owner, and status (present / missing / expired / needs update).
- Exception closure list: gaps/anomalies/queries per element with owner, due date, and closure status (including any explicitly accepted assumptions).
- Approval pages: signature blocks per element/metric owner and final authorised signatory page.
- Pack provenance: scope, period, reports used, extract dates, rules/mappings, and data quality position.
- Board summary (optional, 1–2 pages): readiness status, key open exceptions, accepted assumptions, and review/approval position.
- Score leakage & uplift scenarios (optional, non‑binding): quantified table showing where points are unclaimed and potential uplift under explicit scenarios, with assumptions, evidence dependencies, and owners (no recommendations).
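The evidence index and exception statuses described above follow a simple status model. A hedged sketch of how such rows could be represented and filtered (field names are illustrative, not the delivered workbook schema):

```python
from enum import Enum
from dataclasses import dataclass

# Status values mirror the evidence index statuses named in the deliverables list.
class EvidenceStatus(Enum):
    PRESENT = "present"
    MISSING = "missing"
    EXPIRED = "expired"
    NEEDS_UPDATE = "needs update"

@dataclass
class EvidenceRow:
    line_item: str
    required_evidence: str
    reference: str          # location in the evidence pack ("" if absent)
    owner: str
    status: EvidenceStatus

def open_exceptions(rows: list[EvidenceRow]) -> list[EvidenceRow]:
    """Anything not PRESENT is an exception needing an owner and closure date."""
    return [r for r in rows if r.status is not EvidenceStatus.PRESENT]
```

This is the mechanical rule behind the exception closure list: a row with missing, expired, or needs-update evidence is never silently dropped; it surfaces as an open exception with an owner.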
Traceability (in Insite)
Traceability is provided in the system (not as a separate exported “index”):
- Drill down from scorecard line items to the list of contributing records used.
- Open record‑level trace to see where an individual record came from.
- Export source ↔ final record mappings where required.
Pack structure (example)
1. Cover + scope statement
- Cover page: measured entity, period, code set(s), version/date.
- Scope statement: included entities/operating units and exclusions (if applicable).
- Data sources summary: systems/extracts used (with export dates/times).
2. Scorecard summary (overall position)
Purpose: present the overall scorecard position in a way that is defensible and reviewable.
Outputs typically include:
- Overall scorecard summary (level/points as applicable) and element totals.
- Priority element / threshold summary (where applicable to the selected code set).
- Changes vs prior period (where available) limited to factual deltas and exception references.
2A. Score leakage & uplift scenarios (optional, non‑binding)
Purpose: show where points are being left on the table (or are at risk) and make the “way forward” visible as quantified scenarios — without prescribing actions.
This section is framed as sensitivity analysis: “if X were true and evidence Y is available, points impact may be up to Z”.
| Element / line item | Requirement (scoring rule) | Current (claimed) | Gap | Potential uplift (points) | Evidence / unlock condition | Owner | Due date | Status |
|---|---|---|---|---|---|---|---|---|
| Skills Development | Qualifying spend categories + proof of spend | — | — | — | Training register + invoices + proof of payment | — | — | Open |
| ESD / Procurement | Supplier status confirmation + qualifying spend | — | — | — | Supplier affidavits/certificates + vendor master mapping | — | — | Open |
| Management Control | EAP alignment by level + scope confirmation | — | — | — | HR headcount extract + role leveling + demographic mapping | — | — | Open |
3. Elements (Generic Codes and/or Construction Charter)
Purpose: provide element‑by‑element outputs with traceability, evidence pointers, and review readiness.
For each in‑scope element, the pack includes:
- Element score summary (points and key sub-metrics as applicable).
- Input definition notes (how sources were mapped and categorised; high-signal only).
- Evidence index excerpt for the element (what evidence supports what points).
- Exceptions summary (open vs closed) and owner approval block.
Typical elements (Generic Codes) include:
- Ownership
- Management Control
- Skills Development
- Enterprise and Supplier Development (including procurement-related scoring where applicable)
- Socio‑Economic Development
4. Evidence + traceability (in Insite)
Purpose: make verification efficient and defensible.
Outputs typically include:
- Evidence index (full) and evidence pack references.
- Insite traceability: drill down from scorecard line items to contributing records; open record‑level trace to source; export source ↔ final mappings where required.
- Accepted assumptions / decisions log (where required), with owner approval.
5. Annexures (typical)
- Controlled extracts list and dates (pack provenance detail).
- Material categorisation notes (kept factual).
- Glossary of definitions used in the pack.
Output checks (before release)
- Measurement period label matches the stated start → end dates on every scorecard/schedule.
- Code set(s) and measured entity scope match the declared organisational boundary.
- Every scored line item has a traceability path in Insite (drill‑down to contributing records) and an evidence reference (or is flagged as an exception).
- Exceptions are either closed (with evidence) or explicitly accepted (with owner approval recorded on the approval pages).
- Pack provenance is present and matches the stated scope, period, and included exports.
- Approval pages are present and signed (outside Insite) or explicitly marked as pending approval.
- If score leakage/uplift scenarios are included, they are labelled as non‑binding scenarios and include assumptions + evidence dependencies (not recommendations).
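The checks above can be run as a mechanical release gate before the pack leaves review. A minimal sketch, assuming a simple dictionary representation of the pack (the structure and field names here are assumptions, not the actual export format):

```python
def release_check(pack: dict) -> list[str]:
    """Return a list of blocking issues; an empty list means ready for release."""
    issues = []
    # Measurement period label must be present and match every schedule.
    if pack.get("period_start") is None or pack.get("period_end") is None:
        issues.append("measurement period label missing")
    for sched in pack.get("schedules", []):
        if (sched.get("period_start"), sched.get("period_end")) != (
                pack.get("period_start"), pack.get("period_end")):
            issues.append(f"period mismatch on schedule {sched.get('name')}")
    # Every scored line item needs evidence or an explicit exception flag.
    for item in pack.get("line_items", []):
        if not item.get("evidence_refs") and not item.get("exception_flag"):
            issues.append(f"line item {item.get('name')}: no evidence, no exception flag")
    # Exceptions must be closed or explicitly accepted, never silently open.
    for exc in pack.get("exceptions", []):
        if exc.get("status") not in ("closed", "accepted"):
            issues.append(f"exception {exc.get('id')} is open")
    return issues
```

The design choice matters more than the code: each check returns a named issue rather than a pass/fail boolean, so the review meeting sees exactly which scorecard, line item, or exception is blocking release.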