If it were, that line would not be an indicator. Distillation is done on useful prompts, not on "Who are you?" - "I'm this model from that company."
Name training is always shallow; Claude itself will claim it's GPT-3, GPT-4, or Reddit (heh) when confused. It's just dataset contamination, because the web is full of slop. Never trust self-reported model names.
muzani 3 hours ago [-]
They're all trained on each other. Claude says it's DeepSeek if you ask it in Mandarin.
SilverElfin 2 hours ago [-]
Most people seem to think that phenomenon is not the same thing. Experiments with different prompts have shown that even in Mandarin, Claude correctly identifies itself as Claude while it is doing a task for you; it's only when asked directly about its identity that it sometimes says DeepSeek. The current theory is that it has run into Chinese web content containing chat logs where a DeepSeek model answers that it is DeepSeek. But the inconsistency across prompts suggests this is something other than distillation.
SilverElfin 2 hours ago [-]
This has been a common issue with the Chinese open-weight models. It appears most or all have been trained via distillation on OpenAI and Anthropic models.