Matching Our Services to Customers

Yet another Prompt of the Day

The Challenge

ZRS Senior Leadership has made it clear: we are growing. The challenge is simple but urgent. We are introducing new services, and if we want customers to adopt them, we must explain them clearly and efficiently. Everyone in our organization should be equipped, and expected, to do their part: including suggested services in as much communication as possible and making it easy for our underwriting business partners to do the same. That requires tools and processes that help every team member surface the right services at the right time.

---

TL;DR

This article defines a repeatable approach to map opportunities (industry, geography, scope, history) to applicable services for industries or specific customers—using only approved sources, with built‑in guardrails.

We walk through the design process, show how to implement it, and provide example test cases. By starting with small tests, gathering feedback, and iterating, we can scale this into a consistent way to make our growing list of services visible and actionable across accounts, work products, and campaigns.

Here is the prompt:


# Persona
You are a Zurich Resilience Solutions (ZRS) service-matching assistant.

# Task
Recommend a draft menu of relevant Zurich
Resilience Solutions services for a given account or
opportunity.

# RULES / GUARDRAILS
* Only use the services listed in the sources provided
  below.

* Do not invent or re-label services. If something seems
  implied but not in the sources, list it under
  "Possible Fits — Needs Human Confirmation."

* Always cite the source and section for each recommended
  service.

* Do not add any legalese or fine print.

* If required details are missing, ask targeted
  follow-up questions before proceeding. Offer a
  "Proceed Anyway" option, but list all assumptions
  clearly.

# SOURCES OF TRUTH
[Insert link(s) to official ZRS website pages,
 uploaded catalog(s), or past service descriptions here]

# OPPORTUNITY DETAILS
## Industry:

[Insert industry]

## Geography:

[Insert region, state, or site]

## Scope:

[Enterprise-wide, regional, or specific location]

## Past Services Delivered:

[Optional: list history here]

## Constraints:

[Optional: budget, timing, regulatory drivers]

# OUTPUT
* Grouped list of relevant services by
  industry/geography/scope, with citations

* Assumptions and missing info block

* Risks and "What to Confirm" items for client validation

## Formats:

With each request, generate the following formats
in a way that makes it easy for the user to copy and paste
them where needed:

   * short email blurb

   * work product appendix snippet

   * marketing insert

   * a JSON representation of the output for testing and auditing

## JSON

Use this schema:

{
  "prompt_version": "string",
  "model_id": "string",
  "run_timestamp": "ISO-8601",
  "input_fingerprint": "sha256-hex",
  "source_manifest": [
    {
      "title": "string",
      "uri_or_file": "string",
      "version_or_date": "string",
      "hash": "sha256-hex"
    }
  ],
  "opportunity": {
    "industry": "string",
    "geography": "string",
    "scope": "string",
    "history": ["string"],
    "constraints": ["string"]
  },
  "recommendations": [
    {
      "service_name": "string",
      "why": "string",
      "citations": [
        {
          "source_title": "string",
          "section_or_anchor": "string"
        }
      ],
      "what_to_confirm": ["string"]
    }
  ],
  "possible_fits_needing_confirmation": [
    {
      "hypothesis": "string",
      "why": "string",
      "missing_evidence": ["string"]
    }
  ],
  "assumptions": ["string"],
  "export": {
    "email_blurb": "string",
    "work_product_snippet": "string",
    "marketing_insert": "string"
  }
}

---

Design and Approach

Define the Use Case

We’re solving a simple but powerful problem: generating a tailored set of ZRS services for a given account or opportunity. The chatbot isn’t writing a full proposal; it produces a draft menu of services that a client-facing person can refine and use as a conversation starter. The same approach also supports inserting a list of potential services into work products, marketing campaigns, or client emails.

Interaction Model (Multi‑Step)

Users won’t always provide complete inputs. So this prompt must operate as a multi‑turn assistant that:

  • Validates inputs against a minimal set (service sources; opportunity signals like industry, geography, scope/site; and any constraints).
  • Asks targeted follow‑ups when something’s missing, with a quick Proceed Anyway option that explicitly lists assumptions and risks.
  • Applies guardrails to prevent invented services; cites sources for every recommendation; and flags uncertainties instead of guessing.
  • Generates export‑ready snippets (email/marketing insert) once approved.

Sequence Diagram

sequenceDiagram
    participant U as User
    participant C as Chatbot
    participant S as Sources<br>(ZRS site, Internal catalog, Past services)

    U->>C: Initial request (account/opportunity info + any sources)
    C->>C: Validate inputs (services source, industry, geography, scope, history)
    alt Inputs incomplete
        C->>U: Ask targeted follow-ups (industry pick, attach catalog, confirm geography)
        U->>C: Provide details or choose Proceed Anyway
    else Inputs sufficient
        C->>U: Acknowledge completeness and proceed
    end

    C->>S: Retrieve/parse services from provided sources
    S-->>C: Canonical service set

    C->>C: Apply guardrails (no invented services, match only to source list)
    C->>C: Map opportunity signals → relevant ZRS services
    C->>U: Draft menu grouped by industry/geography + citations + assumptions

    U->>C: Edits/clarifications (add site, budget, regulatory focus)
    C->>S: (If needed) Refresh source subset
    S-->>C: Updated data

    C->>U: Revised menu + risks/gaps + next questions
    U->>C: Approve for export (work product/marketing/email)
    C-->>U: Export-ready snippet (with sources and disclaimers)

Minimal Input Checklist

  • Services source(s): link(s) or uploaded catalog(s) with version/date.
  • Opportunity signals: industry, geography, scope/site; optional history.
  • Constraints (optional): timeline, budget, regulatory drivers.
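
The checklist lends itself to a simple pre-flight check. Here is a sketch of that step (the field names and question wording are illustrative, not part of the prompt):

```python
REQUIRED_INPUTS = ["sources", "industry", "geography", "scope"]

FOLLOW_UPS = {
    "sources": "Please attach or link the ZRS services catalog(s), with version/date.",
    "industry": "Which industry best describes the account?",
    "geography": "Which region, state, or site applies?",
    "scope": "Is this enterprise-wide, regional, or a single location?",
}

def missing_inputs(request: dict) -> list[str]:
    """Checklist items still missing; history and constraints are optional."""
    return [field for field in REQUIRED_INPUTS if not request.get(field)]

def follow_up_questions(request: dict) -> list[str]:
    """Targeted questions to ask before proceeding (or to list as
    assumptions if the user chooses Proceed Anyway)."""
    return [FOLLOW_UPS[field] for field in missing_inputs(request)]
```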

Proceed‑Anyway Mode

  • List missing items as assumptions and generate a conservative, clearly marked draft.
  • Include a “What to Confirm” note next to each recommendation.

Guardrails (negative instructions)

  • Do not invent services not present in the provided source(s).
  • Only recommend services that exactly match entries in the supplied catalog(s) or ZRS public content.
  • If a likely service is implied but absent, put it under Possible Fits—Needs Human Confirmation (do not recommend it).
  • Always cite the source and section for each service.

Output Contract

  • Grouped list of services (by industry/geography/scope) with citations.
  • Assumptions & missing info block.
  • Risks & next‑questions for client validation.
  • Export options: short email blurb, work product appendix snippet, marketing insert.

Prompt Skeleton (Multi‑Turn)

  • System/Setup: ZRS service‑matching assistant. Use only the provided sources. Never invent services. Cite source + section for each item. If unsure, ask or flag as “Needs Confirmation.”
  • Turn 1 — User: Provide account/opportunity summary; industry, geography, scope/site; past services; constraints; and links/uploads to catalogs.
  • Turn 1 — Chatbot (validation): Confirm received sources/metadata; ask targeted questions; offer Proceed Anyway.
  • Turn 2 — Chatbot (draft): Return grouped service menu + citations + assumptions + “What to Confirm.” Offer export formats.
  • Turn 3+ — Iterate/Export: Incorporate clarifications; regenerate export‑ready snippet with version tag/date.

---

Identify the Source of ZRS Services Information

The chatbot needs a defined source of truth for ZRS offerings; that is how we prevent unwanted hallucinations. The prompt should include guardrails (negative instructions) that stop it from inventing or re-labeling services. Options for that source include:

  • Official ZRS website and brochures: the baseline, safe, and public. (Best practice: ensure outdated services are removed or clearly marked to avoid confusion.)
  • Internal services catalogs: if available, these must be uploaded or linked.
  • Past services for the account: history can guide what’s relevant, but the user must supply that context.

Key point: the human must feed the bot the right sources. Without them, the results will be incomplete or unreliable.

Best Practices:

  • Remove or clearly mark outdated services so they aren’t mistakenly recommended.
  • Use the most authoritative catalog or source available.
  • Keep sources updated and aligned across marketing and client-facing teams.

---

Define the Opportunity Clearly

An “opportunity” can mean different things. It might be:

  • Industry: Healthcare, Data Centers, Food & Beverage, Logistics, Manufacturing, etc.
  • Geographic area: Country, region, or specific city/state.
  • Scope: Enterprise-wide, regional operations, or a single facility.
  • Collective group: A vertical, a consortium, or a cluster of sites.
  • Specific customer location: A single plant, warehouse, or office.
  • Historical context: Services already delivered to this account.

Any combination of these factors may apply. The clearer the description of the opportunity, the better the chatbot’s recommendations.

---

Inputs and Outputs

For this workflow, it helps to document inputs and outputs.

Inputs (provided by the human):

  • Services catalog or source material
  • Opportunity description (industry, geography, scope, history)
  • Constraints (budget, timeline, regulatory focus)

Outputs (expected from the chatbot):

  • A draft menu of relevant ZRS services
  • Framing or grouping by industry/geography
  • Notes on uncertainties or gaps to check with the client

Build a Test Plan

A good prompt should be tested before it’s rolled out widely. Examples:

  • Feed in a single-industry, single-site opportunity (e.g., a data center in Texas). Expect the chatbot to surface ZRS data center offerings.
  • Feed in a multi-region manufacturer. Expect geographic tailoring and mention of cross-border/regional issues.
  • Feed in an existing customer with history. Expect acknowledgment of past services and complementary recommendations.

Success criteria: recommendations are relevant, sources are cited, and gaps are flagged instead of ignored.

Rollout and Iteration

  • Start small—use a single account or industry.
  • Collect feedback from the team on whether the results are useful.
  • Adjust the prompt, update the sources, and iterate.
  • Once stable, template it into the prompt library so it can be reused across teams.

Implementation

The prompt at the top of this article is a fill-in-the-blank scaffold, written as a Markdown code block so you can copy, paste, and adapt it. Keep line lengths short to make editing easier.

Test Strategy and Approach

Goal: Verify the prompt produces safe, source-grounded, immediately usable service suggestions across common and edge scenarios—while staying stable over time.

Principles

  • Isolate → Combine: Validate each input dimension alone (industry, geography, scope, history) before testing interactions.
  • Source-bounded: Every recommendation must cite a provided source and section; anything not in sources must land in “Possible Fits—Needs Human Confirmation.”
  • Deterministic as possible: Fix inputs (including source versions) so JSON outputs can be compared as golden files for regression and drift.
  • Tight feedback: Each run emits structured JSON; we diff JSON across versions/runs to detect breakage or subtle shifts.

Test Phases

  1. Unit (Dimension-Isolation)
     • Industry-only, Geography-only, Scope-only, History-only.
  2. Integration (Dimension-Combinations)
     • Industry × Geography; Industry × Scope; Industry × Geography × Scope; + History overlays.
  3. Real-World Edge Cases
     • Secondary/tertiary lines of business (e.g., Amazon physical retail vs “Amazon” as a whole).
     • Multi-region, regulated sectors, unusual facility types.
  4. Negative/Guardrail
     • Missing sources; outdated services present; ambiguous industry labels; conflicting inputs.
  5. Regression & Drift
     • Re-run a curated suite weekly or on source updates; compare JSON to golden files.
     • Alert on schema changes, missing citations, new/removed services, or confidence shifts.

Pass/Fail Criteria

  • 100% of recommended services have valid citations to provided sources.
  • No invented services in recommendations (implied items only appear under “Possible Fits—Needs Human Confirmation”).
  • Output JSON validates against schema; required fields present.
  • Assumptions are explicit when Proceed Anyway is used.
  • For baseline cases, JSON matches golden files within allowed tolerances.
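
These criteria can be checked mechanically. A minimal sketch in Python (standard library only; the field names follow the JSON schema in the prompt, and the canonical service set is assumed to have been parsed from the sources beforehand):

```python
import json

REQUIRED_TOP_LEVEL = [
    "prompt_version", "model_id", "run_timestamp", "input_fingerprint",
    "source_manifest", "opportunity", "recommendations",
    "possible_fits_needing_confirmation", "assumptions", "export",
]

def validate_output(raw: str) -> list[str]:
    """Return a list of problems with a run's JSON; empty list = pass."""
    try:
        doc = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    problems = [
        f"missing top-level field: {key}"
        for key in REQUIRED_TOP_LEVEL if key not in doc
    ]
    # Every recommended service must carry at least one citation.
    for i, rec in enumerate(doc.get("recommendations", [])):
        if not rec.get("citations"):
            problems.append(f"recommendation {i} has no citations")
    return problems

def invented_services(doc: dict, canonical: set[str]) -> list[str]:
    """Recommended services that do not exactly match the source list.

    Implied-but-unsourced items are allowed only under
    possible_fits_needing_confirmation, so only recommendations are scanned.
    """
    return [
        rec.get("service_name", "")
        for rec in doc.get("recommendations", [])
        if rec.get("service_name") not in canonical
    ]
```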

Versioning & Repro

  • Record in JSON: prompt_version, model_id, run_timestamp, source_manifest (URL/file + hash + version/date), and input_fingerprint (hash of the OPPORTUNITY DETAILS block).
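
A sketch of how that metadata might be assembled, hashing the opportunity block with the standard library (function names here are illustrative):

```python
import hashlib
import json
from datetime import datetime, timezone

def input_fingerprint(opportunity: dict) -> str:
    """SHA-256 over a canonical serialization of the OPPORTUNITY DETAILS
    block, so reruns with identical inputs hash identically regardless
    of key order or whitespace."""
    canonical = json.dumps(opportunity, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def run_metadata(prompt_version: str, model_id: str, opportunity: dict) -> dict:
    """Fields to record with every run for reproducibility."""
    return {
        "prompt_version": prompt_version,
        "model_id": model_id,
        "run_timestamp": datetime.now(timezone.utc).isoformat(),
        "input_fingerprint": input_fingerprint(opportunity),
    }
```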

---

Test Case Types (what to generate)

Dimension-Isolation Cases

  • Industry-only (dozen majors): Healthcare, Data Centers, Food & Beverage, Logistics, Manufacturing, Retail, Hospitality, Construction, Energy/Utilities, Pharma/Biotech, Transportation, Financial Services.
  • Geography-only: US-national; EU; APAC; state-specific (e.g., TX); city-specific (e.g., Toronto).
  • Scope-only: Enterprise-wide; regional ops; single facility (plant/warehouse/office); greenfield site.
  • History-only: “Past fire protection audit (2023)”; “Cyber assessment (2022)”; “Natural hazard study (2021)”.

Integration Cases

  • Industry × Geography: Data Center × Texas; Manufacturing × Germany; Hospitality × Japan.
  • Industry × Scope: Food & Beverage × single refrigerated warehouse; Logistics × multi-region network.
  • Industry × Geography × Scope: Pharma × EU × enterprise-wide.
  • Add History overlays: “Add: prior [X] service performed in 2023.”

Real-World Nuance

  • Secondary/Tertiary lines: Amazon physical stores; big box retailer’s in-house bakery; hospital with research labs; university with medical center; airline with catering operations.
  • Regulatory hotspots: California (CARB), EU (NIS2), coastal flood zones, seismic regions.
  • Operational quirks: Aging facility; recent acquisition; outsourced maintenance; high-hazard storage.

Negative / Guardrail

  • No sources provided → bot must refuse to recommend and request sources.
  • Outdated services present → ensure they’re excluded or marked as deprecated.
  • Ambiguous input labels (“tech company” w/o specifics) → bot asks clarifying questions or proceeds with explicit assumptions.
  • Conflicting inputs (industry says Healthcare, history says “container terminal”) → force a clarifying question.

Regression & Drift Suite

  • Curate 12–20 representative cases across A–C.
  • Save their JSON as golden files with source_manifest and prompt_version.
  • Re-run on schedule or on any source change; diff and log deviations.
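
The diff step can be as simple as comparing top-level fields after stripping the values expected to vary between runs. A minimal sketch, assuming golden files are stored as JSON on disk:

```python
import json
from pathlib import Path

# Fields expected to differ between runs; excluded from the diff.
VOLATILE_FIELDS = {"run_timestamp", "model_id"}

def strip_volatile(doc: dict) -> dict:
    return {k: v for k, v in doc.items() if k not in VOLATILE_FIELDS}

def diff_against_golden(run: dict, golden_path: Path) -> list[str]:
    """Top-level fields where this run deviates from the saved golden file."""
    golden = json.loads(golden_path.read_text())
    a, b = strip_volatile(golden), strip_volatile(run)
    return [key for key in sorted(set(a) | set(b)) if a.get(key) != b.get(key)]
```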

Automated Test Case Generation

Using this document (minus this section) for reference, we can ask our chatbots to generate a set of test cases.

Industries (isolation): Use the Implementation prompt with only Industry set to each of: Healthcare, Data Centers, Food & Beverage, Logistics, Manufacturing, Retail, Hospitality, Construction, Energy/Utilities, Pharma/Biotech, Transportation, Financial Services. Leave other fields blank; do not invent. Return JSON only.

Integration (Industry × Geography × Scope): For each tuple:

  • (Data Centers, Texas, Single Facility)
  • (Pharma/Biotech, EU, Enterprise-wide)
  • (Hospitality, Japan, Regional Ops)

Run with the same sources; return JSON only.

Secondary/Tertiary lines: Treat the primary brand as broad; focus on the specified sub-line: “Amazon physical retail stores (not e-commerce)”. Return JSON only.

Guardrail tests: Run with no sources. The assistant must refuse to recommend, ask for sources, and return JSON with empty recommendations and a clear assumptions/what_to_confirm.

Examples

To support automated testing, we can use JSON. Let’s start with 3:

// Test Case 1 — Industry Isolation (Data Centers)
{
  "opportunity": {
    "industry": "Data Centers",
    "geography": "",
    "scope": "",
    "history": [],
    "constraints": []
  },
  "sources": [
    "https://www.zurich.com/en/zrs/data-centers"
  ]
}

// Test Case 2 — Integration (Pharma/Biotech, EU, Enterprise-wide)
{
  "opportunity": {
    "industry": "Pharma/Biotech",
    "geography": "European Union",
    "scope": "Enterprise-wide",
    "history": [],
    "constraints": ["EU regulatory focus"]
  },
  "sources": [
    "https://www.zurich.com/en/zrs/manufacturing"
  ]
}

// Test Case 3 — Real-World Nuance (Amazon physical retail stores)
{
  "opportunity": {
    "industry": "Retail",
    "geography": "United States",
    "scope": "Regional operations",
    "history": ["Cyber assessment 2022"],
    "constraints": []
  },
  "sources": [
    "https://www.zurich.com/en/zrs/retail"
  ]
}
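
A small harness can screen each case before it is run, mirroring the guardrail tests described earlier (a sketch; the validation rules shown are illustrative):

```python
import json

def load_case(raw: str) -> dict:
    """Parse one test-case JSON and screen it before running the prompt."""
    case = json.loads(raw)
    opp = case.get("opportunity", {})
    if not case.get("sources"):
        # Mirrors the guardrail: with no sources, the bot must refuse.
        raise ValueError("no sources provided; assistant must refuse to recommend")
    if not any([opp.get("industry"), opp.get("geography"),
                opp.get("scope"), opp.get("history")]):
        raise ValueError("no opportunity signals set")
    return case
```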

Potential Road Map

This is how we can move from concept to adoption step by step:

  • Testing the Prompt
  • MVP Rollout
  • Gather feedback
  • Implement feedback
  • Create a custom chatbot
  • Integrate into internal tools
  • Integrate into external tools

Conclusion

This approach shows how prompt design can make ZRS services visible in everyday communications. By defining the use case, providing sources of truth, and adding guardrails, we can turn a chatbot into a reliable assistant that helps every team member surface the right services at the right time. The urgency is real: we are growing, we are adding new services, and if we want customers to adopt them, we need ways to explain them consistently and efficiently.

Call to Action

Try this prompt with one of your accounts or opportunities. See what it produces, and notice where it helps and where it falls short. Share your feedback so we can refine the design and improve the results together. The more we experiment and iterate, the better equipped we’ll be to match our services to customers.