Running Large Language Models locally 2/3

In a series of three workshops we want to help you set up Ollama and run your own local LLMs. Running a Large Language Model locally has several advantages: besides not paying for a pro plan or API costs, it also means you never share your chat data with a third party.

Thanks to recent developments such as quantization, models like Mixtral 8x7B can now run on your laptop! There are also many tools that help you run, create, and share LLMs locally from the command line, such as the open-source app Ollama.
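If you want a head start before the workshop: once Ollama is installed and you have pulled a model (for example with "ollama pull mistral"), it serves a REST API on http://localhost:11434 by default. The Python sketch below shows one way to send a prompt to that local server; the model name and prompt are just placeholders, so swap in whatever model you pulled.

    import json
    import urllib.request

    # Ask the local Ollama server for a completion.
    # Assumes "ollama pull mistral" has been run and the server is up.
    payload = json.dumps({
        "model": "mistral",
        "prompt": "Explain quantization in one sentence.",
        "stream": False,  # return a single JSON object instead of a stream
    }).encode("utf-8")

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)

    print(result["response"])  # the model's answer as plain text

Nothing here leaves your machine: the request goes to the Ollama server running locally, which is exactly the privacy benefit described above.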

Workshop 1/3 (April 17th): getting started
Workshop 2/3 (May 15th): making the most of Ollama on a variety of devices
Workshop 3/3 (June 19th): fine-tuning your LLM

https://www.meetup.com/sensemakersams/events/299714048/
