I remember how we would get into trouble for copying each other’s homework in high school.
Now we get in trouble for generating each other’s homework 🤷
OK Tech bros, repeat after me:
“Thou shalt not make a machine in the likeness of a human mind.”
I wonder if a good fine tuned model beats every general purpose LLM if you need it for a really specific purpose
Yes it does
Of course, and this is why the new hotness is Mixture of Experts: one model that is effectively a bunch of experts arguing over the answer. On a different scale there's also Mixture of Agents, where different specialized agents each perform specialized tasks.
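The MoE idea above can be sketched in a few lines: a gating network scores each expert for a given input, only the top-k experts run, and their outputs are mixed by the normalized gate weights. This is a toy illustration with made-up scalar "experts" and hand-picked gate logits, not any particular model's implementation.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of gate logits.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical "experts": each is just a scalar function here;
# in a real MoE layer these would be feed-forward sub-networks.
EXPERTS = [
    lambda x: x * 2.0,   # expert 0
    lambda x: x + 10.0,  # expert 1
    lambda x: -x,        # expert 2
]

def moe(x, gate_logits, top_k=2):
    """Route x to the top_k experts by gate score and mix their outputs."""
    weights = softmax(gate_logits)
    ranked = sorted(range(len(EXPERTS)), key=lambda i: weights[i], reverse=True)[:top_k]
    norm = sum(weights[i] for i in ranked)  # renormalize over the chosen experts
    return sum(weights[i] / norm * EXPERTS[i](x) for i in ranked)

# Gate strongly prefers experts 0 and 1, so expert 2 never runs.
print(moe(3.0, gate_logits=[2.0, 1.0, -1.0]))
```

The sparsity is the whole trick: only top_k experts execute per input, so you get the capacity of many specialists at roughly the compute cost of a couple of them.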
There's a new project where they share ModernBERT fine-tuned on specific tasks. Here's the org: https://huggingface.co/adaptive-classifier
I thought DeepSeek’s selling point was efficiency?
It was beating near-frontier models at a tenth of the cost, it could be hosted by you or by any cloud service for you, and the open weights were not censored for China's PR and could be jailbroken to, say, write code for shady stuff that any frontier model would refuse.
(None of them are THAT much better than the previous generation, and some are even worse in certain areas.)
Wait a second.
Grok’s symbol is a lightning bolt?
Grok 2 will have two lightning bolts
how about you grok deez nuts (it is 1 am please forgive me)