Eugene Yanayt
PulseAugur coverage of Eugene Yanayt — every cluster mentioning Eugene Yanayt across labs, papers, and developer communities, ranked by signal.
No coverage in the last 90 days.
-
Eugene Yan: AI collaboration requires context, configuration, and memory
Eugene Yan outlines a framework for effectively collaborating with AI tools, emphasizing practices that enable compounding improvements over time. Key principles include providing clear context, encoding personal prefer…
-
Eugene Yan outlines a 3-step process for effective LLM product evaluations
Eugene Yan's guide outlines a three-step process for developing product evaluations for LLMs. The first step involves labeling a small dataset, focusing on binary pass/fail or win/lose labels to ensure clarity and consi…
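The snippet covers only the first step, but the labeling idea it names can be sketched minimally. The field names and data below are illustrative assumptions, not taken from Yan's guide:

```python
# Minimal sketch of the labeling step: a small dataset with binary
# pass/fail labels, plus the pass rate computed from those labels.
# Field names ("output", "label") and the sample rows are hypothetical.

def pass_rate(labels):
    """Fraction of samples labeled pass (True)."""
    if not labels:
        raise ValueError("no labels provided")
    return sum(labels) / len(labels)

# A small hand-labeled dataset: each entry is a model output plus a
# binary judgment (True = pass, False = fail).
dataset = [
    {"output": "Paris is the capital of France.", "label": True},
    {"output": "The capital of France is Lyon.", "label": False},
    {"output": "France's capital city is Paris.", "label": True},
]

rate = pass_rate([row["label"] for row in dataset])
```

Binary labels keep the task unambiguous for labelers, which is presumably why the guide recommends them over graded scales.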
-
Eugene Yan offers advice for new principal tech engineers and scientists
Eugene Yan's advice for principal-level technical individual contributors emphasizes a shift from individual coding to broader influence and technical vision. Principals are encouraged to be hands-on but focus their cor…
-
Eugene Yan trains LLM-recommender hybrid for steerable, explainable recommendations
Eugene Yan has developed a novel approach to recommender systems by training a hybrid language model that understands both natural language and item IDs. This model, which extends the vocabulary of a language model with…
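The core mechanism named in the snippet, extending a language model's vocabulary with item IDs, can be illustrated without any ML framework. The `<item_N>` token format is an assumption for illustration, not Yan's actual scheme:

```python
# Pure-Python sketch of vocabulary extension: add one token per item ID
# to an existing token->index mapping, so a model trained on this vocab
# can emit item tokens alongside natural language.

def extend_vocab(vocab, item_ids):
    """Return a copy of `vocab` with one new token per item ID appended."""
    vocab = dict(vocab)
    for item_id in item_ids:
        token = f"<item_{item_id}>"
        if token not in vocab:
            vocab[token] = len(vocab)  # next free index
    return vocab

base_vocab = {"the": 0, "user": 1, "liked": 2}
vocab = extend_vocab(base_vocab, [101, 205])
```

In a real system the model's embedding matrix would also be resized to match the enlarged vocabulary; this sketch shows only the token-mapping step.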
-
Eugene Yan explores exceptional leadership qualities: vision, execution, and empathy
Eugene Yan's article outlines exceptional leadership qualities, categorizing them into vision, execution, and empathy. Exceptional leaders embody all three, while good leaders excel in at least two. The piece further de…
-
Eugene Yan builds news agents using Amazon Q and MCP for daily recaps
Eugene Yan has developed a system for generating daily news recaps using an agentic workflow powered by Amazon Q CLI and MCP. This system splits news feeds into chunks, with separate sub-agents processing each chunk. Th…
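The fan-out pattern the snippet describes, split the feed into chunks and give each chunk to a sub-agent, can be sketched with plain functions. `summarize_chunk` is a stand-in for the real Amazon Q/MCP sub-agent call, and the feed items are invented:

```python
# Sketch of chunked fan-out: split feed items into fixed-size chunks,
# run a stand-in sub-agent per chunk, then merge the partial recaps.

def chunk(items, size):
    """Split a list of feed items into chunks of at most `size`."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def summarize_chunk(items):
    # Placeholder: a real implementation would invoke an LLM sub-agent here.
    return f"{len(items)} stories: " + "; ".join(items)

def daily_recap(feed, chunk_size=2):
    partials = [summarize_chunk(c) for c in chunk(feed, chunk_size)]
    return "\n".join(partials)

feed = ["model release", "eval tooling", "agent framework", "GPU pricing"]
recap = daily_recap(feed)
```

Chunking keeps each sub-agent's input small, which matters when feeds exceed a single model call's context window.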
-
Eugene Yan: LLM-as-judge won't fix AI product evals; focus on process
Eugene Yan argues that relying solely on tools like LLM-as-judge will not fix product evaluation issues. Instead, he emphasizes that a robust evaluation process, akin to the scientific method, is crucial for improving A…
-
Eugene Yan discusses building LLM-powered applications at NVIDIA GTC 2025
Eugene Yan presented at NVIDIA GTC 2025 on a panel discussing the development of applications powered by large language models. The session, titled "Insights and Lessons Learned From Building LLM-Powered Applications," …
-
Eugene Yan explores the paradoxical rules of effective writing
Eugene Yan's article explores the paradoxical nature of writing, suggesting that effective writing often involves embracing seemingly contradictory approaches. He posits that clarity can be achieved through both simple …
-
Technical writer shares strategies for building an audience through AI content
Technical writer Hamel Husain shares strategies for building an audience, emphasizing authentic engagement with others' work and consistent content creation. He advises developers to add value to existing discussions an…
-
Eugene Yan shares guide to running weekly AI paper club for learning communities
Eugene Yan details a successful weekly paper club that has met for 18 months, discussing at least 80 AI-related papers. The club focuses on foundational concepts, models, training, and inference techniques within machin…
-
Eugene Yan shares minimal MacBook Pro setup guide for developers
Eugene Yan details his minimal setup for a new M4 MacBook Pro, emphasizing a clean slate approach over restoring from a backup. He outlines configurations for macOS settings, essential developer tools like Homebrew, War…
-
Weights & Biases hackathon showcases creative LLM evaluation projects
Eugene Yan, a judge at the Weights & Biases LLM-Evaluator Hackathon, shared insights from the event where over 100 participants built creative projects. Teams focused on areas like knowledge graph construction, LLM eval…
-
OpenAI, Yan, and Latent Space detail effective LLM prompting techniques
OpenAI has released a guide on prompting fundamentals, emphasizing clear instructions and conversational interaction to improve ChatGPT responses. The guide suggests being specific about desired outcomes, providing cont…
-
Eugene Yan shares insights on LLM system building and AI engineering trends
Eugene Yan presented key learnings from building with Large Language Models (LLMs) at the AI Engineer World's Fair 2024. The keynote, co-authored with others, focused on practical aspects of LLM system development, incl…
-
How To Hire AI Engineers — with James Brady & Adam Wiggins of Elicit
Hiring managers for AI and Machine Learning roles should focus on a structured interview process that assesses both technical and non-technical skills. Key areas include software engineering proficiency, demonstrated th…
-
Eugene Yan shares LLM challenges for Netflix recommendation systems
Eugene Yan, a Senior Applied Scientist at Amazon, presented at the 2024 Netflix Workshop on Personalization, Recommendation, and Search. His talk focused on the practical challenges encountered when developing and deplo…
-
Eugene Yan shares lessons learned from a year of building with LLMs
Eugene Yan's article distills a year's worth of experience in developing applications powered by large language models. The insights cover a broad spectrum, from the practical, hands-on aspects of implementation to the …
-
Eugene Yan launches AlignEval to simplify and automate LLM evaluation
Eugene Yan has launched AlignEval, a new application designed to simplify and automate the process of evaluating large language models (LLMs). The tool guides users through uploading data, labeling samples as pass or fa…
-
Eugene Yan advises against mocking machine learning models in unit tests
Eugene Yan's article discusses the challenges of applying traditional unit testing practices to machine learning code. Unlike standard software where logic is handcrafted, ML models learn logic from data, making direct …
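One common alternative to mocking, consistent with the snippet's framing, is to test learned behavior directly on a tiny deterministic model. The `MeanRegressor` below is a stand-in for illustration, not an example from Yan's article:

```python
# Illustrative alternative to mocking: assert on properties of a real
# (trivially small) model's predictions rather than on hand-written
# mock outputs, since the model's logic comes from data, not code.

class MeanRegressor:
    """Trivial model: predicts the mean of the training targets."""
    def fit(self, X, y):
        self.mean_ = sum(y) / len(y)
        return self
    def predict(self, X):
        return [self.mean_ for _ in X]

def test_predictions_stay_in_target_range():
    # Behavioral check: predictions lie within the observed target range,
    # a property that holds regardless of the exact learned value.
    X, y = [[1], [2], [3]], [10.0, 20.0, 30.0]
    preds = MeanRegressor().fit(X, y).predict([[4], [5]])
    assert all(min(y) <= p <= max(y) for p in preds)

test_predictions_stay_in_target_range()
```

Property-style assertions like this survive retraining, whereas a mock encodes one fixed output and silently diverges from the real model.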