Thomas Bley
PulseAugur coverage of Thomas Bley — every cluster mentioning Thomas Bley across labs, papers, and developer communities, ranked by signal.
1 day with sentiment data
Local LLM Guide Updated With Qwen 3.6 and Gemma 4
Thomas Bley has released an updated guide for running large language models locally, featuring Qwen 3.6 and Gemma 4. The setup includes configurations for permissions and for the models' different "thinking" variants, aiming to make local…
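The guide itself is not reproduced here, but a minimal sketch of the kind of setup it describes might look like the following: chatting with a locally served model over an OpenAI-compatible endpoint and toggling its "thinking" variant. The server URL, the model id, and the /think and /no_think soft switches (a Qwen 3 convention) are assumptions, not details taken from the guide.

```ts
// Sketch: query a locally served model (e.g. `llama-server -m qwen.gguf`
// or Ollama) through its OpenAI-compatible chat endpoint. All names
// below are illustrative assumptions.
const BASE_URL = "http://localhost:8080/v1"; // llama-server's default port

async function chat(prompt: string, thinking: boolean): Promise<string> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "qwen3.6", // placeholder; the server answers with whatever model it loaded
      messages: [
        // Qwen 3 models switch between thinking and non-thinking modes
        // via soft switches in the prompt; a Qwen 3.6 release may differ.
        { role: "user", content: `${thinking ? "/think" : "/no_think"} ${prompt}` },
      ],
      temperature: 0.7,
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

console.log(await chat("Summarize WebGPU in one sentence.", false));
```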
Run LLMs locally with LFM 2 and Transformers.js, using WebGPU
Thomas Bley has released new slides detailing how to run large language models (LLMs) locally using LFM 2. The presentation also covers using Transformers.js with WebGPU for privacy filters, function calling, and embeddings.
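The slides are only summarized above; as a concrete illustration, the Transformers.js v3 API can run an embedding model on WebGPU entirely in the browser, which is what makes client-side privacy filtering possible. The model id below is an example, not necessarily the one used in the presentation.

```ts
// In-browser embeddings with Transformers.js on WebGPU. Text never
// leaves the client, so it can feed a local privacy filter.
import { pipeline } from "@huggingface/transformers";

const embed = await pipeline(
  "feature-extraction",
  "Xenova/all-MiniLM-L6-v2", // example embedding model
  { device: "webgpu" },      // requires a browser with WebGPU support
);

// Mean-pool and normalize to get one unit-length vector per input.
const output = await embed("User message to screen locally", {
  pooling: "mean",
  normalize: true,
});
console.log(output.dims);             // e.g. [1, 384]
console.log(output.data.slice(0, 4)); // first few embedding values
```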
Nvidia's Nemotron 3 Nano Omni and Llama.cpp enable local LLM execution
Thomas Bley has released new presentation slides detailing how to run large language models locally. The slides cover Nvidia's Nemotron 3 Nano Omni, built-in tools for Llama.cpp, and the use of Transformers.js with WebGPU.
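As a hedged illustration of llama.cpp's built-in tooling: a recent llama-server build started with --jinja exposes OpenAI-compatible tool calling, which a local client can exercise as sketched below. The model file name and the weather tool are invented for the example and are not taken from the slides.

```ts
// Sketch: OpenAI-style tool calling against a local llama-server, e.g.
//   llama-server -m nemotron-3-nano-omni.gguf --jinja
// Model file and tool schema are illustrative assumptions.
const res = await fetch("http://localhost:8080/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    messages: [{ role: "user", content: "What's the weather in Berlin?" }],
    tools: [
      {
        type: "function",
        function: {
          name: "get_weather", // hypothetical tool the client would implement
          description: "Look up the current weather for a city",
          parameters: {
            type: "object",
            properties: { city: { type: "string" } },
            required: ["city"],
          },
        },
      },
    ],
  }),
});
const data = await res.json();
// When the model opts to call the tool, the reply carries structured
// tool_calls instead of plain text content.
console.log(data.choices[0].message.tool_calls ?? data.choices[0].message.content);
```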