Many modern API clients (like AsyncOpenAI) support concurrent execution out of the box. But understanding these concepts helps you add your own concurrency controls on top. You’re not reinventing the wheel; you’re adapting it to your particular use case.
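For instance, here’s a minimal sketch of one such control: capping in-flight requests with an `asyncio.Semaphore` around the AsyncOpenAI client (the model name and prompts are just placeholders, not anything from the post):

```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment
semaphore = asyncio.Semaphore(5)  # allow at most 5 requests in flight


async def ask(prompt: str) -> str:
    # Once 5 requests are already running, this line waits for a free slot
    async with semaphore:
        response = await client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content


async def main() -> None:
    prompts = [f"Summarize item {i}" for i in range(20)]
    answers = await asyncio.gather(*(ask(p) for p in prompts))
    print(answers[0])


if __name__ == "__main__":
    asyncio.run(main())
```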
Random Sampling Is Sabotaging Your Models!
Did you know answer diversity can help detect hallucinations? Ask an LLM “What’s the capital of France?” five times and you’ll get some variation of “Paris” over and over again. But ask that same LLM about the “boiling point of a dragon’s scale” and you’ll get five different made-up answers.
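Here’s a rough sketch of that idea: sample the same question a few times and score how much the answers agree. Exact-match agreement is a deliberate simplification (real detectors compare answers semantically), and the model name is a placeholder:

```python
from collections import Counter
from openai import OpenAI

client = OpenAI()


def agreement_score(question: str, n: int = 5) -> float:
    """Ask the same question n times and return the share of answers that
    match the most common one. Low agreement is a hallucination red flag."""
    answers = []
    for _ in range(n):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            temperature=1.0,      # keep sampling random on purpose
            messages=[{"role": "user", "content": question}],
        )
        answers.append(response.choices[0].message.content.strip().lower())
    most_common_count = Counter(answers).most_common(1)[0][1]
    return most_common_count / n


# "Paris" five times -> score near 1.0; five made-up answers -> score near 1/n
print(agreement_score("What's the capital of France?"))
print(agreement_score("What's the boiling point of a dragon's scale?"))
```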
Self-Hosting LLMs with vLLM
Get ready to save some money 💰. In this post, you’ll learn how to set up your own LLM server using vLLM, choose the right models, and build an architecture that fits your use case.
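To give you a taste, here’s a minimal sketch of talking to a locally running vLLM server through its OpenAI-compatible API (the launch command and model name are illustrative and may differ by vLLM version):

```python
from openai import OpenAI

# Assumes a vLLM server is already running locally, e.g. started with:
#   vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
# vLLM exposes an OpenAI-compatible endpoint, so the usual client code works.
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="not-needed",  # vLLM ignores the key unless you configure one
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # must match the served model
    messages=[{"role": "user", "content": "Why self-host an LLM?"}],
)
print(response.choices[0].message.content)
```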
Testing Your LLM Applications (Without Going Broke)
Traditional software is deterministic. Same input, same output, every time. LLMs? Non-deterministic by design. Same input, different output. Even with temperature=0, you can still get variations. There are ways to test your code reliably, though, and in this post you’ll learn them.
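One common pattern is to assert on properties of the output instead of exact strings. Here’s a sketch with pytest-style tests, assuming hypothetical `summarize` and `extract_invoice` functions under test:

```python
import json

from my_app import summarize, extract_invoice  # hypothetical functions under test


def test_summary_is_short_and_on_topic():
    # Don't assert an exact string: the wording changes between runs.
    summary = summarize("LLMs are non-deterministic, so tests should check properties.")
    assert len(summary.split()) < 60
    assert "llm" in summary.lower() or "non-deterministic" in summary.lower()


def test_extraction_returns_valid_json_with_required_fields():
    # Check structure, not wording: the fields must exist and parse.
    raw = extract_invoice("Invoice #42, total $19.99, due 2024-07-01")
    data = json.loads(raw)
    assert set(data) >= {"invoice_number", "total", "due_date"}
```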
Stop Guessing What Your LLM Returns
If you don’t know what data types your LLM function returns, you’re basically playing Russian roulette at runtime 😨
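One way out, sketched below: validate whatever the model returns against a Pydantic schema, so the function signature tells you exactly what you get back. The schema, model name, and prompt here are illustrative:

```python
from openai import OpenAI
from pydantic import BaseModel, ValidationError

client = OpenAI()


class Invoice(BaseModel):
    invoice_number: str
    total: float
    due_date: str


def extract_invoice(text: str) -> Invoice:
    """Always returns an Invoice (or raises), never a bare string or dict."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        response_format={"type": "json_object"},  # ask for JSON back
        messages=[
            {"role": "system", "content": "Extract invoice_number, total and due_date as JSON."},
            {"role": "user", "content": text},
        ],
    )
    raw = response.choices[0].message.content
    try:
        return Invoice.model_validate_json(raw)
    except ValidationError:
        # Fail loudly instead of passing an unknown shape downstream
        raise ValueError(f"Model returned an unexpected shape: {raw!r}")
```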
Your LLM Works in a Notebook. Now What?
Right now, there’s probably a data scientist waking up to a $3,000 OpenAI bill because a bot found their exposed API key and has been making calls to GPT-4 for 12 hours straight.
