Paper Metadata

Title

Domain-Specific Fine-Tuning of PersonaPlex-7b for Customer Persona Simulation

Description

We fine-tuned NVIDIA's PersonaPlex-7b-v1 model on 200 synthetic coffee shop customer conversations to address hallucinations and poor task adherence in customer-facing deployments. Training used LoRA with ChatterboxTTS audio and LibriSpeech voices, and we introduced a semantic-weighted loss function and a voice prompt injection mechanism to improve emotional accuracy and reduce role-inversion hallucinations. LLM-as-judge evaluation shows improvements over the base model across all three tested configurations.

Resource type

Preprint

Resource language

English

Date created

February 2026

Date modified

February 2026

Contributors

Yoav Goldberg, Richard Day, Simeon Goldberg