LARQL is a new tool that lets users query neural network weights as if they were stored in a graph database, without requiring a GPU. It decompiles transformer models into a queryable format called a vindex and provides a query language, LQL, for browsing, editing, and recompiling model knowledge. The tool supports multiple extraction levels, quantization, and slicing for different deployment scenarios, enabling efficient local inference or distributed setups.
Summary written by gemini-2.5-flash-lite from 1 source.