LLMs are in trouble - YouTube
[https://www.youtube.com/watch?v=o2s8I6yBrxE] - - public:mzimmerm
A super small amount of data (350 documents with SUDO) poison a LLM model of any size
A super small amount of data (350 documents with SUDO) poison a LLM model of any size
AI Halucination is ineviteble as users prefer certainty of answers over correctness - and so the models are trained that way. In addition, models are statistical engines, where errors accumulate with length and complexity.