Ollama & running Large Language Models locally


In a series of three workshops we want to help you set up Ollama and run your own local LLMs. Being able to run a Large Language Model locally has many advantages: besides not paying for a pro plan or API costs, it also means not sharing your chat data.

Thanks to recent developments such as quantization, models like Mixtral 8x7B can now run on your laptop! There are also many tools that help you run, create, and share LLMs locally from the command line, such as the open-source app Ollama.
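
As a small taste of what the first workshop covers, here is a minimal sketch of talking to a locally running Ollama server from Python. It assumes you have installed Ollama, pulled a model (for example with `ollama pull mistral`), and that the server is running on its default port; the model name and prompt are just illustrative choices.

```python
import requests

# Minimal sketch: ask a locally running Ollama server (default port 11434)
# for a completion via its HTTP API. Assumes `ollama serve` is running and
# the `mistral` model has already been pulled.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mistral",   # any model tag you have pulled locally
        "prompt": "Why would you run an LLM locally?",
        "stream": False,      # return one JSON object instead of a stream
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])
```

Because everything runs on your own machine, the prompt and the answer never leave your laptop.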

Workshop 1/3 (April 17th, during the Appril Festival): getting started
Workshop 2/3 (May 15th): making the most of Ollama on a variety of devices
Workshop 3/3 (June 19th): fine-tuning your LLM

https://www.meetup.com/sensemakersams/events/298443520/
