<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>In-Context Learning on Producthunt daily</title>
        <link>https://producthunt.programnotes.cn/en/tags/in-context-learning/</link>
        <description>Recent content in In-Context Learning on Producthunt daily</description>
        <generator>Hugo -- gohugo.io</generator>
        <language>en</language>
        <lastBuildDate>Fri, 25 Jul 2025 15:33:48 +0800</lastBuildDate>
        <atom:link href="https://producthunt.programnotes.cn/en/tags/in-context-learning/index.xml" rel="self" type="application/rss+xml" />
        <item>
        <title>rag-from-scratch</title>
        <link>https://producthunt.programnotes.cn/en/p/rag-from-scratch/</link>
        <pubDate>Fri, 25 Jul 2025 15:33:48 +0800</pubDate>
        <guid>https://producthunt.programnotes.cn/en/p/rag-from-scratch/</guid>
        <description>&lt;img src="https://images.unsplash.com/photo-1642376344452-4c6001638603?ixid=M3w0NjAwMjJ8MHwxfHJhbmRvbXx8fHx8fHx8fDE3NTM0Mjg3NDh8&amp;ixlib=rb-4.1.0" alt="Featured image of post rag-from-scratch" /&gt;&lt;h1 id=&#34;langchain-airag-from-scratch&#34;&gt;&lt;a class=&#34;link&#34; href=&#34;https://github.com/langchain-ai/rag-from-scratch&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;langchain-ai/rag-from-scratch&lt;/a&gt;
&lt;/h1&gt;&lt;h1 id=&#34;rag-from-scratch&#34;&gt;RAG From Scratch
&lt;/h1&gt;&lt;p&gt;LLMs are trained on a large but fixed corpus of data, limiting their ability to reason about private or recent information. Fine-tuning is one way to mitigate this, but is often &lt;a class=&#34;link&#34; href=&#34;https://www.anyscale.com/blog/fine-tuning-is-for-form-not-facts&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;not well-suited for factual recall&lt;/a&gt; and &lt;a class=&#34;link&#34; href=&#34;https://www.glean.com/blog/how-to-build-an-ai-assistant-for-the-enterprise&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;can be costly&lt;/a&gt;.
Retrieval augmented generation (RAG) has emerged as a popular and powerful mechanism to expand an LLM&amp;rsquo;s knowledge base, using documents retrieved from an external data source to ground the LLM generation via in-context learning.
These notebooks accompany a &lt;a class=&#34;link&#34; href=&#34;https://youtube.com/playlist?list=PLfaIDFEXuae2LXbO1_PKyVJiQ23ZztA0x&amp;amp;feature=shared&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;video playlist&lt;/a&gt; that builds up an understanding of RAG from scratch, starting with the basics of indexing, retrieval, and generation.
&lt;img src=&#34;https://github.com/langchain-ai/rag-from-scratch/assets/122662504/54a2d76c-b07e-49e7-b4ce-fc45667360a1&#34; loading=&#34;lazy&#34; alt=&#34;rag_detail_v2&#34;&gt;&lt;/p&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;https://www.youtube.com/playlist?list=PLfaIDFEXuae2LXbO1_PKyVJiQ23ZztA0x&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Video playlist&lt;/a&gt;&lt;/p&gt;
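The indexing, retrieval, and generation loop described above can be sketched in a few lines of Python. This is a toy illustration, not code from the repo: the bag-of-words embedding and overlap score stand in for the learned embeddings and vector search a real pipeline would use.

```python
# Toy RAG pipeline (hypothetical sketch, not code from the repo):
# index documents, retrieve the most relevant one, and use it to
# ground the prompt that is sent to the LLM.
from collections import Counter

def embed(text):
    # Bag-of-words counts stand in for a real learned embedding.
    return Counter(text.lower().split())

def similarity(a, b):
    # Overlap between two bag-of-words vectors.
    return sum(min(a[w], b[w]) for w in a if w in b)

def retrieve(query, docs):
    # Retrieval: pick the indexed document closest to the query.
    q = embed(query)
    return max(docs, key=lambda d: similarity(q, embed(d)))

def build_prompt(query, docs):
    # Generation is grounded by prepending the retrieved context
    # to the question, i.e. in-context learning.
    context = retrieve(query, docs)
    return f"Context: {context}\nQuestion: {query}\nAnswer:"
```

Swapping `embed` for a real embedding model and `retrieve` for a vector store is what the accompanying notebooks build up step by step.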
</description>
        </item>
        
    </channel>
</rss>
