Does AI Gaur train AI models on client data?
No. Client data is never used to train external or third-party AI models. We use it only to operate the services we built for you. We contractually require all AI vendors (Anthropic, OpenAI, etc.) to honor this.
Which AI models does AI Gaur use?
We primarily use Anthropic Claude (4.6 / 4.7) and OpenAI GPT-4o, selecting per task based on accuracy, latency, and cost. For non-sensitive workloads we also run lightweight open-source models (Llama 3, Mistral) on isolated infrastructure.
How do you prevent AI hallucinations?
- Retrieval-augmented generation (RAG) anchored to your verified knowledge base.
- Schema-constrained outputs — agents emit JSON validated against strict schemas.
- Multi-step verification — outputs cross-checked by a second model before publishing.
- Human review on outbound customer communications by default.
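The schema-constraint step above can be sketched in a few lines. This is a minimal illustration, not AI Gaur's production validator: the field names and types in `SCHEMA` are hypothetical, and a real pipeline would use a full JSON Schema validator rather than hand-rolled type checks.

```python
import json

# Hypothetical shape for an agent's structured reply; the field names
# and types here are illustrative assumptions, not the real schema.
SCHEMA = {"answer": str, "confidence": float, "sources": list}

def validate_agent_output(raw: str) -> dict:
    """Parse model output as JSON and enforce the expected shape.

    Raises ValueError on any mismatch, so the pipeline can retry or
    escalate instead of publishing an unchecked answer.
    """
    data = json.loads(raw)
    for field, expected_type in SCHEMA.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], expected_type):
            raise ValueError(f"wrong type for field: {field}")
    return data

ok = validate_agent_output('{"answer": "42", "confidence": 0.9, "sources": []}')
```

The point of the gate is that a free-text hallucination cannot slip through as structured output: anything that fails to parse or match the schema is rejected before it reaches a customer.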
Can a human override an AI decision?
Yes. Every agent we deploy ships with a human-override path, a complete audit log, and one-click rollback. No agent takes irreversible action (sending a public message, charging a card, deleting data) without human confirmation by default.
How do you prevent prompt injection & data exfiltration?
- Trusted vs. untrusted input segregation — agents never blindly trust web/email content.
- Tool-use allowlists per agent; no agent has more access than its job requires.
- Output filtering against secret-leakage patterns (API keys, PII).
- Quarterly red-team exercises against our agent fleet.
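The output-filtering step above amounts to scanning agent output for secret-shaped strings before it crosses the trust boundary. A minimal sketch, assuming two illustrative patterns; a real filter would cover many more formats (cloud credentials, bearer tokens, additional PII) than these:

```python
import re

# Illustrative leakage patterns only; not an exhaustive or production list.
LEAK_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),    # OpenAI-style API key prefix
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-shaped number
]

def redact_secrets(text: str) -> str:
    """Replace anything matching a known secret pattern with a marker
    before the agent's output leaves the system."""
    for pattern in LEAK_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

Pattern filters catch the obvious exfiltration paths; the segregation and allowlist controls above are what keep an injected prompt from reaching secrets in the first place.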
Where is data processed and stored?
Data is processed in US-East and EU-Central regions by default. EU clients can request EU-only processing, including AI inference via EU-hosted endpoints. Data at rest is encrypted with AES-256; data in transit is protected with TLS 1.3.
Can I opt out of AI processing?
Yes. We offer a manual mode for every service. AI processing is never mandatory. Email praneeta.biz@gmail.com to switch your account.
How do you handle bias and fairness?
For customer-facing flows (lead scoring, support routing, content recommendations) we run quarterly fairness audits. We diversify training and prompt examples, sample outputs by demographic where consent has been given, and adjust prompts or pipelines if disparate impact appears.
Will you disclose AI use to my customers?
When required by law (the EU AI Act, certain US state laws) or when an end customer asks directly, we disclose AI involvement in the channel they're using. Where disclosure is not required, we follow your brand's preferred language; our default is transparency.
How can I report a concern?
Report any concern about an AI system we run for you to praneeta.biz@gmail.com. We acknowledge within one business day and provide a remediation plan within five.