AriHelder — Ethics Guidelines for the Use of AI (Internal + Customer Data)

These guidelines define how AriHelder may use AI and—equally important—where we will not use AI, especially anywhere that could expose customers to unwanted profiling, tracking, or misuse of personal data. AriHelder is a wellness company with a “home-first” philosophy, focused on making light-based wellness simple, safe, and trustworthy.

1) Core principles (non-negotiable)

  1. Privacy-first by design

    We minimize data collection and avoid architectures that create tracking trails.
  2. No selling of customer data — ever

    AriHelder does not sell customer personal data, device-use data, or inferred wellness-related signals.
  3. Avoid “AI exposure” to customers unless it’s clearly beneficial and safe

    We treat AI as a tool for AriHelder’s operations—not as something customers must interact with.
  4. Safety and trust beat cleverness

    If a feature risks misuse, surveillance, profiling, or “data used against you,” we won’t ship it.
  5. Local-first product philosophy

    Where possible, AriHelder designs products that do not depend on cloud services.

2) Scope: we separate AI into two domains

A) AI in production + internal procedures (Allowed, with controls)

AI may be used to improve operations, quality, and efficiency—without involving customer identity.

Examples (typical “OK” uses):

  • Manufacturing process optimization (yield analysis, defect pattern detection)
  • Quality inspection support (e.g., identifying cosmetic defects in non-customer photos)
  • Document drafting (SOPs, training materials, internal checklists)
  • Translation/localization of non-personal content
  • Support tooling that uses anonymized and aggregated data only

B) AI related to a user and the user’s ID (Restricted / generally avoided)

Any AI use that touches a customer’s identity, device usage linked to a person, or metadata that can profile them is treated as high risk and is off by default.

We strongly prefer:

  • No user accounts required
  • No cloud dependency
  • No collection of session-level device telemetry linked to an individual

3) What we will not do (hard prohibitions)

AriHelder will not:

  1. Sell customer data or allow third parties to purchase, rent, or broker it.
  2. Use AI to create health or psychological profiles of customers, even when framed as “wellness” features.
  3. Use AI to infer or label sensitive attributes (e.g., stress level, mood, illness likelihood) tied to an identifiable person.
  4. Enable targeted advertising based on device use, routine behavior, or inferred wellness state.
  5. Share identifiable customer data with ad-tech platforms (including “hashed identifiers” for matching).
  6. Use “dark patterns” to push customers into data sharing.
  7. Deploy AI features that require customers to hand over personal photos, voice, biometrics, or routine logs unless strictly necessary and based on explicit consent.

4) Data minimization and metadata defense

Because metadata can be used against users, AriHelder operates on these rules:

Data minimization

  • If we don’t need it, we don’t collect it.
  • We prefer aggregate counters (e.g., total device units sold, total warranty cases) over user-level histories.
  • We avoid creating persistent identifiers unless required for legal/warranty service.
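
The rule "aggregate counters over user-level histories" can be made concrete in code. A minimal sketch, with illustrative event names, of a metrics store that records only that an event happened, never who triggered it:

```python
from collections import Counter

class AggregateMetrics:
    """Privacy-preserving counters: record *that* an event happened,
    never *who* triggered it. No user ID, no per-user timeline."""

    def __init__(self):
        self._counts = Counter()

    def record(self, event: str) -> None:
        # Only the event name is stored; callers must not smuggle
        # identifiers (user IDs, serial numbers, IPs) into event names.
        self._counts[event] += 1

    def totals(self) -> dict:
        return dict(self._counts)

metrics = AggregateMetrics()
metrics.record("warranty_case_opened")
metrics.record("warranty_case_opened")
metrics.record("unit_sold")
print(metrics.totals())  # → {'warranty_case_opened': 2, 'unit_sold': 1}
```

Because nothing per-user is ever written, there is nothing to delete, leak, or subpoena at the individual level.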

Metadata defense (practical rules)

  • Avoid integrations that leak identity or behavior (ad pixels, third-party analytics tags, cross-site trackers).
  • Prefer privacy-preserving analytics (e.g., self-hosted, IP anonymization, short retention, no fingerprinting).
  • Keep logs short-lived and access-controlled.
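
As one example of "IP anonymization" in practice, a common technique is to truncate addresses before they are ever written to a log (IPv4 to a /24, IPv6 to a /48). A minimal sketch using Python's standard `ipaddress` module:

```python
import ipaddress

def anonymize_ip(raw: str) -> str:
    """Truncate an IP before logging: IPv4 keeps the first 24 bits,
    IPv6 the first 48 bits, so no single household is identifiable."""
    ip = ipaddress.ip_address(raw)
    prefix = 24 if ip.version == 4 else 48
    net = ipaddress.ip_network(f"{raw}/{prefix}", strict=False)
    return str(net.network_address)

print(anonymize_ip("203.0.113.42"))          # → 203.0.113.0
print(anonymize_ip("2001:db8:abcd:1234::9")) # → 2001:db8:abcd::
```

The prefix lengths shown are conventional choices, not a mandated figure; the point is that the full address never reaches storage.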

5) Customer-facing AI: the “exception-only” rule

If AriHelder ever considers AI that touches customer data or could identify a user, we require all of the following:

  1. Clear benefit (not “nice to have”)
  2. Explicit opt-in (not default)
  3. Minimal data (only what is required)
  4. Short retention (auto-delete by default)
  5. No third-party re-use (vendors cannot train on AriHelder customer data)
  6. Human override for any automated decision that affects service or customer outcomes
  7. Plain-language disclosure explaining:

    • What data is used
    • Why it’s needed
    • Where it is processed (device vs. server)
    • How long it is retained
    • How to opt out / delete
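
"Short retention (auto-delete by default)" means deletion happens on a schedule unless someone documents a reason to keep data longer. A minimal sketch, with an illustrative 30-day window (not a mandated figure):

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)  # illustrative window, not a policy number

def purge_expired(records, now=None):
    """Keep only records inside the retention window. Deleting on
    schedule is the default; retaining longer needs a documented reason."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - RETENTION
    return [r for r in records if r["created_at"] >= cutoff]
```

Run as a scheduled job, this makes deletion the system's default behavior rather than something a human must remember to do.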


6) Using AI in support, warranty, and communications

Allowed (with guardrails)

  • AI-assisted drafting of support replies without including personal health details.
  • AI summarization of support tickets after redacting personal identifiers.
  • Pattern analysis of issues using anonymized and aggregated warranty data.

Not allowed

  • Feeding raw support messages containing personal identifiers into an AI system that stores prompts for training.
  • Using AI to rank customers by “value,” “risk,” or “likelihood to return.”
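
"Redacting personal identifiers" before a ticket reaches any AI system can be automated. A minimal sketch with two illustrative patterns only; a production redactor would need a broader, tested rule set (names, addresses, order numbers, and so on):

```python
import re

# Illustrative patterns only — not an exhaustive PII rule set.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace identifiers with placeholder tags before the text
    is sent to any external AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact me at jane@example.com or +1 555 010 9999."))
# → Contact me at [EMAIL] or [PHONE].
```

Redaction must run before the text leaves AriHelder's systems; redacting on the vendor side defeats the purpose.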

7) Vendor and model governance

Any AI vendor or model provider must meet these minimum requirements:

  • Data processing agreement with: no training on AriHelder inputs, no resale, no sharing.
  • Clear retention policy with deletion controls.
  • Security standards appropriate for customer-related data (even if we aim not to process it).
  • Ability to run self-hosted or in a controlled environment for sensitive workflows when needed.

8) Security, access, and retention

  • Access to any dataset (even anonymized) is role-based and logged.
  • Retention defaults:

    • Operational logs: shortest practical period
    • Support/warranty: only as needed for service and legal obligations
    • Any AI training datasets: must not include customer-identified data
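
"Role-based and logged" access can be enforced in one place rather than by convention. A minimal sketch with an illustrative role table (real role definitions would live in access-control configuration, not code):

```python
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("audit")

# Illustrative roles and datasets — names are placeholders.
ROLES = {
    "quality_eng": {"defect_stats"},
    "support_lead": {"warranty_summaries"},
}

def read_dataset(user: str, role: str, dataset: str) -> str:
    """Role-based access with an audit trail: every attempt is logged,
    and access is denied unless the role explicitly grants it."""
    allowed = dataset in ROLES.get(role, set())
    audit.info("user=%s role=%s dataset=%s allowed=%s",
               user, role, dataset, allowed)
    if not allowed:
        raise PermissionError(f"role {role!r} may not read {dataset!r}")
    return f"<contents of {dataset}>"
```

Denials are logged as well as grants, so the audit trail shows attempted access, not just successful access.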


9) Product alignment: “no cloud dependency” as an ethical choice

AriHelder products are designed to stay simple: by default, core functionality does not depend on cloud services or companion apps. This supports:

  • Lower privacy risk
  • Lower “surveillance surface area”
  • Lower chance of data being repurposed against customers

10) Transparency and customer rights

Customers have the right to:

  • Ask what data AriHelder holds about them (if any)
  • Request correction or deletion where feasible
  • Choose non-digital pathways (e.g., offline use, showroom support)

This matches AriHelder’s mission to be a trusted household wellness tool—safe, calm, and respectful.

11) Governance: who approves AI use

Any AI project must have:

  • A named owner
  • A written “AI Use Brief” covering: purpose, data involved, risks, mitigations, retention, vendor terms
  • Approval from leadership and (where relevant) product + quality oversight

If it touches user identity or device usage linked to a person, it automatically becomes a restricted project and must meet the “exception-only” rule above.