<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>CPU on Producthunt daily</title>
        <link>https://producthunt.programnotes.cn/en/tags/cpu/</link>
        <description>Recent content in CPU on Producthunt daily</description>
        <generator>Hugo -- gohugo.io</generator>
        <language>en</language>
        <lastBuildDate>Fri, 19 Sep 2025 15:27:28 +0800</lastBuildDate><atom:link href="https://producthunt.programnotes.cn/en/tags/cpu/index.xml" rel="self" type="application/rss+xml" /><item>
        <title>monad</title>
        <link>https://producthunt.programnotes.cn/en/p/monad/</link>
        <pubDate>Fri, 19 Sep 2025 15:27:28 +0800</pubDate>
        
        <guid>https://producthunt.programnotes.cn/en/p/monad/</guid>
        <description>&lt;img src="https://images.unsplash.com/photo-1694608108899-b70271860e86?ixid=M3w0NjAwMjJ8MHwxfHJhbmRvbXx8fHx8fHx8fDE3NTgyNjY3ODh8&amp;ixlib=rb-4.1.0" alt="Featured image of post monad" /&gt;&lt;h1 id=&#34;category-labsmonad&#34;&gt;&lt;a class=&#34;link&#34; href=&#34;https://github.com/category-labs/monad&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;category-labs/monad&lt;/a&gt;
&lt;/h1&gt;&lt;h1 id=&#34;monad-execution&#34;&gt;Monad Execution
&lt;/h1&gt;&lt;h2 id=&#34;overview&#34;&gt;Overview
&lt;/h2&gt;&lt;p&gt;This repository contains the execution component of a Monad node. It
handles the transaction processing for new blocks, and keeps track of
the state of the blockchain. Consequently, this repository contains
the source code for Category Labs&amp;rsquo; custom
&lt;a class=&#34;link&#34; href=&#34;https://docs.monad.xyz/monad-arch/execution/native-compilation&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;EVM implementation&lt;/a&gt;,
its &lt;a class=&#34;link&#34; href=&#34;https://docs.monad.xyz/monad-arch/execution/monaddb&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;database implementation&lt;/a&gt;,
and the high-level &lt;a class=&#34;link&#34; href=&#34;https://docs.monad.xyz/monad-arch/execution/parallel-execution&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;transaction scheduling&lt;/a&gt;.
The other main repository is &lt;a class=&#34;link&#34; href=&#34;https://github.com/category-labs/monad-bft&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;monad-bft&lt;/a&gt;,
which contains the source code for the consensus component.&lt;/p&gt;
&lt;h2 id=&#34;building-the-source-code&#34;&gt;Building the source code
&lt;/h2&gt;&lt;h3 id=&#34;package-requirements&#34;&gt;Package requirements
&lt;/h3&gt;&lt;p&gt;Execution has two kinds of dependencies on third-party libraries:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Self-managed&lt;/strong&gt;: execution&amp;rsquo;s CMake build system will check out most of
its third-party dependencies as git submodules, and build them as part
of its own build process, as CMake subprojects; this will happen
automatically during the build, but you must run:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;git submodule update --init --recursive
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;after checking out this repository.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;System&lt;/strong&gt;: some dependencies are expected to already be part of the
system in a default location, i.e., they are expected to come from the
system&amp;rsquo;s package manager. The primary development platform is Ubuntu, so
the required packages use the Debian/Ubuntu package names; an up-to-date
list of the required system dependencies can be found in the docker
configuration file &lt;code&gt;docker/release.Dockerfile&lt;/code&gt; (you will need all
the packages installed via the &lt;code&gt;apt install&lt;/code&gt; commands).&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id=&#34;minimum-development-tool-requirements&#34;&gt;Minimum development tool requirements
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;gcc-15 or clang-19&lt;/li&gt;
&lt;li&gt;CMake 3.27&lt;/li&gt;
&lt;li&gt;Even when using clang, the only standard library supported is libstdc++;
libc++ may work but it is not a tested platform&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;cpu-compilation-requirements&#34;&gt;CPU compilation requirements
&lt;/h3&gt;&lt;p&gt;As explained in the &lt;a class=&#34;link&#34; href=&#34;https://docs.monad.xyz/monad-arch/hardware-requirements&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;hardware requirements&lt;/a&gt;,
a Monad node requires a relatively recent CPU. Execution explicitly
requires this at compile time: it emits machine code for fast
cryptographic operations that is only supported on recent CPU models.&lt;/p&gt;
&lt;p&gt;The minimum ISA support corresponds to the &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/X86-64#Microarchitecture_levels&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;x86-64-v3&lt;/a&gt;
feature level. Consequently, the minimum flag you must pass to the compiler
is &lt;code&gt;-march=x86-64-v3&lt;/code&gt;, or alternatively &lt;code&gt;-march=haswell&lt;/code&gt; (&amp;ldquo;Haswell&amp;rdquo; was
the codename of the first Intel CPU to support all of these features).&lt;/p&gt;
&lt;p&gt;You may also pass any higher architecture level if you wish, although
the compiled binary may not work on older CPUs. The execution docker
files use &lt;code&gt;-march=haswell&lt;/code&gt; because that flag maximizes the number of
systems the resulting binary can run on. If you are only running locally
(i.e., the binary does not need to run anywhere else), use &lt;code&gt;-march=native&lt;/code&gt;.&lt;/p&gt;
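The flag choice above can be sketched as a small script. This is a hypothetical helper, not part of the repository: it picks `-march=native` when the host CPU advertises AVX2 (one of the x86-64-v3 features), and otherwise falls back to the portable `haswell` baseline. It assumes a Linux host where `/proc/cpuinfo` exists.

```shell
# Hypothetical helper (not part of the repository): choose a -march flag.
# AVX2 support is used here as a rough proxy for the x86-64-v3 level.
pick_march() {
  if grep -qw avx2 /proc/cpuinfo 2>/dev/null; then
    echo "-march=native"    # local-only build: target the host CPU
  else
    echo "-march=haswell"   # portable x86-64-v3 baseline
  fi
}
pick_march
```

The result could then be passed as `CFLAGS`/`CXXFLAGS` to the configure step shown below.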
&lt;h3 id=&#34;compiling-the-execution-code&#34;&gt;Compiling the execution code
&lt;/h3&gt;&lt;p&gt;First, change your working directory to the root of the execution
git repository, then run:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt;1
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;2
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nv&#34;&gt;CC&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;gcc-15 &lt;span class=&#34;nv&#34;&gt;CXX&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;g++-15 &lt;span class=&#34;nv&#34;&gt;CFLAGS&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;-march=haswell&amp;#34;&lt;/span&gt; &lt;span class=&#34;nv&#34;&gt;CXXFLAGS&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;-march=haswell&amp;#34;&lt;/span&gt; &lt;span class=&#34;nv&#34;&gt;ASMFLAGS&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;-march=haswell&amp;#34;&lt;/span&gt; &lt;span class=&#34;se&#34;&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;./scripts/configure.sh &lt;span class=&#34;o&#34;&gt;&amp;amp;&amp;amp;&lt;/span&gt; ./scripts/build.sh
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;The above command will do several things:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Use gcc-15 instead of the system&amp;rsquo;s default compiler&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Emit machine code using Haswell-era CPU extensions&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run CMake, and generate a &lt;a class=&#34;link&#34; href=&#34;https://ninja-build.org/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;ninja&lt;/a&gt; build
system in the &lt;code&gt;&amp;lt;path-to-execution-repo&amp;gt;/build&lt;/code&gt; directory with
the &lt;a class=&#34;link&#34; href=&#34;https://cmake.org/cmake/help/latest/variable/CMAKE_BUILD_TYPE.html&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;&lt;code&gt;CMAKE_BUILD_TYPE&lt;/code&gt;&lt;/a&gt;
set to &lt;code&gt;RelWithDebInfo&lt;/code&gt; by default&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Build the CMake &lt;code&gt;all&lt;/code&gt; target, which builds everything&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The compiler and CPU options are injected via environment variables that
are read by CMake. If you want debug binaries instead, pass
&lt;code&gt;CMAKE_BUILD_TYPE=Debug&lt;/code&gt; via the environment as well.&lt;/p&gt;
&lt;p&gt;When finished, this will build all of the execution binaries. The main one is
the execution daemon, &lt;code&gt;build/cmd/monad&lt;/code&gt;. This binary can provide block
execution services for different EVM-compatible blockchains:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;When used as part of a Monad blockchain node, it behaves as the block
execution service for the Category Labs consensus daemon (for details, see
&lt;a class=&#34;link&#34; href=&#34;docs/overview.md#how-is-execution-used&#34; &gt;here&lt;/a&gt;); when running in this mode,
Monad EVM extensions (e.g., Monad-style staking) are enabled&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;It can also replay the history of other EVM-compatible blockchains, by
executing their historical blocks as inputs; a common developer workflow
(and a good full system test) is to replay the history of the original
Ethereum mainnet and verify that the computed Merkle roots match after
each block&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can also run the full test suite in parallel with:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-fallback&#34; data-lang=&#34;fallback&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;CTEST_PARALLEL_LEVEL=$(nproc) ctest
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;h2 id=&#34;a-tour-of-execution&#34;&gt;A tour of execution
&lt;/h2&gt;&lt;p&gt;To understand how the source code is organized, you should start by reading
the execution &lt;a class=&#34;link&#34; href=&#34;docs/overview.md&#34; &gt;developer overview&lt;/a&gt;, which explains how
execution and consensus fit together, and where in the source tree you can
find different pieces of functionality.&lt;/p&gt;
</description>
        </item>
        <item>
        <title>LMCache</title>
        <link>https://producthunt.programnotes.cn/en/p/lmcache/</link>
        <pubDate>Wed, 20 Aug 2025 15:28:48 +0800</pubDate>
        
        <guid>https://producthunt.programnotes.cn/en/p/lmcache/</guid>
        <description>&lt;img src="https://images.unsplash.com/photo-1478034460338-249ef2da6c0f?ixid=M3w0NjAwMjJ8MHwxfHJhbmRvbXx8fHx8fHx8fDE3NTU2NzQ5MDF8&amp;ixlib=rb-4.1.0" alt="Featured image of post LMCache" /&gt;&lt;h1 id=&#34;lmcachelmcache&#34;&gt;&lt;a class=&#34;link&#34; href=&#34;https://github.com/LMCache/LMCache&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;LMCache/LMCache&lt;/a&gt;
&lt;/h1&gt;&lt;div align=&#34;center&#34;&gt;
  &lt;p align=&#34;center&#34;&gt;
    &lt;img src=&#34;https://raw.githubusercontent.com/LMCache/LMCache/dev/asset/logo.png&#34; width=&#34;720&#34; alt=&#34;lmcache logo&#34;&gt;
  &lt;/p&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;https://docs.lmcache.ai/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;&lt;img src=&#34;https://img.shields.io/badge/docs-live-brightgreen&#34;
	
	
	
	loading=&#34;lazy&#34;
	
		alt=&#34;Docs&#34;
	
	
&gt;&lt;/a&gt;
&lt;a class=&#34;link&#34; href=&#34;https://pypi.org/project/lmcache/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;&lt;img src=&#34;https://img.shields.io/pypi/v/lmcache&#34;
	
	
	
	loading=&#34;lazy&#34;
	
		alt=&#34;PyPI&#34;
	
	
&gt;&lt;/a&gt;
&lt;a class=&#34;link&#34; href=&#34;https://pypi.org/project/lmcache/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;&lt;img src=&#34;https://img.shields.io/pypi/pyversions/lmcache&#34;
	
	
	
	loading=&#34;lazy&#34;
	
		alt=&#34;PyPI - Python Version&#34;
	
	
&gt;&lt;/a&gt;
&lt;a class=&#34;link&#34; href=&#34;https://buildkite.com/lmcache/lmcache-unittests&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;&lt;img src=&#34;https://badge.buildkite.com/ce25f1819a274b7966273bfa54f0e02f092c3de0d7563c5c9d.svg&#34;
	
	
	
	loading=&#34;lazy&#34;
	
		alt=&#34;Unit Tests&#34;
	
	
&gt;&lt;/a&gt;
&lt;a class=&#34;link&#34; href=&#34;https://github.com/LMCache/LMCache/actions/workflows/code_quality_checks.yml&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;&lt;img src=&#34;https://github.com/lmcache/lmcache/actions/workflows/code_quality_checks.yml/badge.svg?branch=dev&amp;amp;label=tests&#34;
	
	
	
	loading=&#34;lazy&#34;
	
		alt=&#34;Code Quality&#34;
	
	
&gt;&lt;/a&gt;
&lt;a class=&#34;link&#34; href=&#34;https://buildkite.com/lmcache/lmcache-vllm-integration-tests&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;&lt;img src=&#34;https://badge.buildkite.com/108ddd4ab482a2480999dec8c62a640a3315ed4e6c4e86798e.svg&#34;
	
	
	
	loading=&#34;lazy&#34;
	
		alt=&#34;Integration Tests&#34;
	
	
&gt;&lt;/a&gt;&lt;/p&gt;
   &lt;br /&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;https://www.bestpractices.dev/projects/10841&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;&lt;img src=&#34;https://www.bestpractices.dev/projects/10841/badge&#34;
	
	
	
	loading=&#34;lazy&#34;
	
		alt=&#34;OpenSSF Best Practices&#34;
	
	
&gt;&lt;/a&gt;
&lt;a class=&#34;link&#34; href=&#34;https://scorecard.dev/viewer/?uri=github.com/LMCache/LMCache&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;&lt;img src=&#34;https://api.scorecard.dev/projects/github.com/LMCache/LMCache/badge&#34;
	
	
	
	loading=&#34;lazy&#34;
	
		alt=&#34;OpenSSF Scorecard&#34;
	
	
&gt;&lt;/a&gt;
&lt;a class=&#34;link&#34; href=&#34;https://deepwiki.com/LMCache/LMCache/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;&lt;img src=&#34;https://deepwiki.com/badge.svg&#34;
	
	
	
	loading=&#34;lazy&#34;
	
		alt=&#34;Ask DeepWiki&#34;
	
	
&gt;&lt;/a&gt;
&lt;a class=&#34;link&#34; href=&#34;https://github.com/LMCache/LMCache/graphs/commit-activity&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;&lt;img src=&#34;https://img.shields.io/github/commit-activity/w/LMCache/LMCache&#34;
	
	
	
	loading=&#34;lazy&#34;
	
		alt=&#34;GitHub commit activity&#34;
	
	
&gt;&lt;/a&gt;
&lt;a class=&#34;link&#34; href=&#34;https://pypi.org/project/lmcache/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;&lt;img src=&#34;https://img.shields.io/pypi/dm/lmcache&#34;
	
	
	
	loading=&#34;lazy&#34;
	
		alt=&#34;PyPI - Downloads&#34;
	
	
&gt;&lt;/a&gt;
&lt;a class=&#34;link&#34; href=&#34;https://www.youtube.com/channel/UC58zMz55n70rtf1Ak2PULJA&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;&lt;img src=&#34;https://img.shields.io/youtube/channel/views/UC58zMz55n70rtf1Ak2PULJA&#34;
	
	
	
	loading=&#34;lazy&#34;
	
		alt=&#34;YouTube Channel Views&#34;
	
	
&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;p&gt;| &lt;a class=&#34;link&#34; href=&#34;https://blog.lmcache.ai/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;&lt;strong&gt;Blog&lt;/strong&gt;&lt;/a&gt;
| &lt;a class=&#34;link&#34; href=&#34;https://docs.lmcache.ai/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;&lt;strong&gt;Documentation&lt;/strong&gt;&lt;/a&gt;
| &lt;a class=&#34;link&#34; href=&#34;https://join.slack.com/t/lmcacheworkspace/shared_invite/zt-36x1m765z-8FgDA_73vcXtlZ_4XvpE6Q&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;&lt;strong&gt;Join Slack&lt;/strong&gt;&lt;/a&gt;
| &lt;a class=&#34;link&#34; href=&#34;https://forms.gle/MHwLiYDU6kcW3dLj7&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;&lt;strong&gt;Interest Form&lt;/strong&gt;&lt;/a&gt;
| &lt;a class=&#34;link&#34; href=&#34;https://github.com/LMCache/LMCache/issues/1253&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;&lt;strong&gt;Roadmap&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;🔥 &lt;strong&gt;NEW: For enterprise-scale deployment of LMCache and vLLM, please check out vLLM &lt;a class=&#34;link&#34; href=&#34;https://github.com/vllm-project/production-stack&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Production Stack&lt;/a&gt;. LMCache is also officially supported in &lt;a class=&#34;link&#34; href=&#34;https://github.com/llm-d/llm-d/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;llm-d&lt;/a&gt; and &lt;a class=&#34;link&#34; href=&#34;https://github.com/kserve/kserve&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;KServe&lt;/a&gt;!&lt;/strong&gt;&lt;/p&gt;
&lt;h2 id=&#34;summary&#34;&gt;Summary
&lt;/h2&gt;&lt;p&gt;LMCache is an &lt;strong&gt;LLM&lt;/strong&gt; serving engine extension that &lt;strong&gt;reduces TTFT&lt;/strong&gt; and &lt;strong&gt;increases throughput&lt;/strong&gt;, especially in long-context scenarios. By storing the KV caches of reusable text across various locations (GPU, CPU DRAM, local disk), LMCache reuses the KV cache of &lt;strong&gt;&lt;em&gt;any&lt;/em&gt;&lt;/strong&gt; repeated text (not necessarily a prefix) in &lt;strong&gt;&lt;em&gt;any&lt;/em&gt;&lt;/strong&gt; serving engine instance. Thus, LMCache saves precious GPU cycles and reduces user response delay.&lt;/p&gt;
&lt;p&gt;By combining LMCache with vLLM, developers achieve 3-10x reductions in response delay and GPU cycle usage in many LLM use cases, including multi-round QA and RAG.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://github.com/user-attachments/assets/86137f17-f216-41a0-96a7-e537764f7a4c&#34;
	
	
	
	loading=&#34;lazy&#34;
	
		alt=&#34;performance&#34;
	
	
&gt;&lt;/p&gt;
&lt;h2 id=&#34;features&#34;&gt;Features
&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;&lt;input checked=&#34;&#34; disabled=&#34;&#34; type=&#34;checkbox&#34;&gt; 🔥 Integration with vLLM v1 with the following features:
&lt;ul&gt;
&lt;li&gt;High performance CPU KVCache offloading&lt;/li&gt;
&lt;li&gt;Disaggregated prefill&lt;/li&gt;
&lt;li&gt;P2P KVCache sharing&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;input checked=&#34;&#34; disabled=&#34;&#34; type=&#34;checkbox&#34;&gt; LMCache is supported in the &lt;a class=&#34;link&#34; href=&#34;https://github.com/vllm-project/production-stack/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;vLLM production stack&lt;/a&gt;, &lt;a class=&#34;link&#34; href=&#34;https://github.com/llm-d/llm-d/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;llm-d&lt;/a&gt;, and &lt;a class=&#34;link&#34; href=&#34;https://github.com/kserve/kserve&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;KServe&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;input checked=&#34;&#34; disabled=&#34;&#34; type=&#34;checkbox&#34;&gt; Stable support for non-prefix KV caches&lt;/li&gt;
&lt;li&gt;&lt;input checked=&#34;&#34; disabled=&#34;&#34; type=&#34;checkbox&#34;&gt; Storage support as follows:
&lt;ul&gt;
&lt;li&gt;CPU&lt;/li&gt;
&lt;li&gt;Disk&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://github.com/ai-dynamo/nixl&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;NIXL&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;input checked=&#34;&#34; disabled=&#34;&#34; type=&#34;checkbox&#34;&gt; Installation support via pip, compatible with the latest vLLM&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;installation&#34;&gt;Installation
&lt;/h2&gt;&lt;p&gt;To use LMCache, simply install &lt;code&gt;lmcache&lt;/code&gt; from your package manager, e.g. pip:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;pip install lmcache
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;This works on Linux with NVIDIA GPUs.&lt;/p&gt;
&lt;p&gt;More &lt;a class=&#34;link&#34; href=&#34;https://docs.lmcache.ai/getting_started/installation&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;detailed installation instructions&lt;/a&gt; are available in the docs, particularly if you are not using the latest stable version of vLLM or are using another serving engine with different dependencies. Any &amp;ldquo;undefined symbol&amp;rdquo; errors or torch version mismatches can be resolved by following the documentation.&lt;/p&gt;
&lt;h2 id=&#34;getting-started&#34;&gt;Getting started
&lt;/h2&gt;&lt;p&gt;The best way to get started is to check out the &lt;a class=&#34;link&#34; href=&#34;https://docs.lmcache.ai/getting_started/quickstart/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Quickstart Examples&lt;/a&gt; in the docs.&lt;/p&gt;
&lt;h2 id=&#34;documentation&#34;&gt;Documentation
&lt;/h2&gt;&lt;p&gt;Check out the LMCache &lt;a class=&#34;link&#34; href=&#34;https://docs.lmcache.ai/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;documentation&lt;/a&gt; which is available online.&lt;/p&gt;
&lt;p&gt;We also post regularly in &lt;a class=&#34;link&#34; href=&#34;https://blog.lmcache.ai/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;LMCache blogs&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;examples&#34;&gt;Examples
&lt;/h2&gt;&lt;p&gt;Go hands-on with our &lt;a class=&#34;link&#34; href=&#34;https://github.com/LMCache/LMCache/tree/dev/examples&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;examples&lt;/a&gt;,
demonstrating how to address different use cases with LMCache.&lt;/p&gt;
&lt;h2 id=&#34;interested-in-connecting&#34;&gt;Interested in Connecting?
&lt;/h2&gt;&lt;p&gt;Fill out the &lt;a class=&#34;link&#34; href=&#34;https://forms.gle/mQfQDUXbKfp2St1z7&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;interest form&lt;/a&gt;, &lt;a class=&#34;link&#34; href=&#34;https://mailchi.mp/tensormesh/lmcache-sign-up-newsletter&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;sign up for our newsletter&lt;/a&gt;, &lt;a class=&#34;link&#34; href=&#34;https://join.slack.com/t/lmcacheworkspace/shared_invite/zt-2viziwhue-5Amprc9k5hcIdXT7XevTaQ&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;join LMCache slack&lt;/a&gt;, &lt;a class=&#34;link&#34; href=&#34;https://lmcache.ai/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;check out LMCache website&lt;/a&gt;, or &lt;a class=&#34;link&#34; href=&#34;mailto:contact@lmcache.ai&#34; &gt;drop an email&lt;/a&gt;, and our team will reach out to you!&lt;/p&gt;
&lt;h2 id=&#34;community-meeting&#34;&gt;Community meeting
&lt;/h2&gt;&lt;p&gt;The &lt;a class=&#34;link&#34; href=&#34;https://uchicago.zoom.us/j/6603596916?pwd=Z1E5MDRWUSt2am5XbEt4dTFkNGx6QT09&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;community meeting&lt;/a&gt; for LMCache is hosted bi-weekly. All are welcome to join!&lt;/p&gt;
&lt;p&gt;Meetings are held on Tuesdays at 9:00 AM PT – &lt;a class=&#34;link&#34; href=&#34;https://drive.usercontent.google.com/u/0/uc?id=1f5EXbooGcwNwzIpTgn5u4PHqXgfypMtu&amp;amp;export=download&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Add to Calendar&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;We keep notes from each meeting on this &lt;a class=&#34;link&#34; href=&#34;https://docs.google.com/document/d/1_Fl3vLtERFa3vTH00cezri78NihNBtSClK-_1tSrcow&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;document&lt;/a&gt; for summaries of standups, discussion, and action items.&lt;/p&gt;
&lt;p&gt;Recordings of meetings are available on the &lt;a class=&#34;link&#34; href=&#34;https://www.youtube.com/channel/UC58zMz55n70rtf1Ak2PULJA&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;YouTube LMCache channel&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;contributing&#34;&gt;Contributing
&lt;/h2&gt;&lt;p&gt;We welcome and value all contributions and collaborations. Please check out the &lt;a class=&#34;link&#34; href=&#34;CONTRIBUTING.md&#34; &gt;Contributing Guide&lt;/a&gt; to learn how to contribute.&lt;/p&gt;
&lt;p&gt;We continually update &lt;a class=&#34;link&#34; href=&#34;https://github.com/LMCache/LMCache/issues/627&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;[Onboarding] Welcoming contributors with good first issues!&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;citation&#34;&gt;Citation
&lt;/h2&gt;&lt;p&gt;If you use LMCache for your research, please cite our papers:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt; 1
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt; 2
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt; 3
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt; 4
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt; 5
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt; 6
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt; 7
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt; 8
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt; 9
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;10
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;11
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;12
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;13
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;14
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;15
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;16
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;17
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;18
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;19
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;20
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;21
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;22
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;23
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;24
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-fallback&#34; data-lang=&#34;fallback&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;@inproceedings{liu2024cachegen,
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  title={Cachegen: Kv cache compression and streaming for fast large language model serving},
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  author={Liu, Yuhan and Li, Hanchen and Cheng, Yihua and Ray, Siddhant and Huang, Yuyang and Zhang, Qizheng and Du, Kuntai and Yao, Jiayi and Lu, Shan and Ananthanarayanan, Ganesh and others},
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  booktitle={Proceedings of the ACM SIGCOMM 2024 Conference},
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  pages={38--56},
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  year={2024}
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;@article{cheng2024large,
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  title={Do Large Language Models Need a Content Delivery Network?},
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  author={Cheng, Yihua and Du, Kuntai and Yao, Jiayi and Jiang, Junchen},
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  journal={arXiv preprint arXiv:2409.13761},
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  year={2024}
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;@inproceedings{10.1145/3689031.3696098,
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  author = {Yao, Jiayi and Li, Hanchen and Liu, Yuhan and Ray, Siddhant and Cheng, Yihua and Zhang, Qizheng and Du, Kuntai and Lu, Shan and Jiang, Junchen},
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  title = {CacheBlend: Fast Large Language Model Serving for RAG with Cached Knowledge Fusion},
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  year = {2025},
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  url = {https://doi.org/10.1145/3689031.3696098},
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  doi = {10.1145/3689031.3696098},
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  booktitle = {Proceedings of the Twentieth European Conference on Computer Systems},
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  pages = {94–109},
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;h2 id=&#34;socials&#34;&gt;Socials
&lt;/h2&gt;&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;https://www.linkedin.com/company/lmcache-lab/?viewAsMember=true&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Linkedin&lt;/a&gt; | &lt;a class=&#34;link&#34; href=&#34;https://x.com/lmcache&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Twitter&lt;/a&gt; | &lt;a class=&#34;link&#34; href=&#34;https://www.youtube.com/@LMCacheTeam&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Youtube&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;license&#34;&gt;License
&lt;/h2&gt;&lt;p&gt;The LMCache codebase is licensed under Apache License 2.0. See the &lt;a class=&#34;link&#34; href=&#34;LICENSE&#34; &gt;LICENSE&lt;/a&gt; file for details.&lt;/p&gt;
</description>
        </item>
        <item>
        <title>hashcat</title>
        <link>https://producthunt.programnotes.cn/en/p/hashcat/</link>
        <pubDate>Wed, 06 Aug 2025 15:37:25 +0800</pubDate>
        
        <guid>https://producthunt.programnotes.cn/en/p/hashcat/</guid>
        <description>&lt;img src="https://images.unsplash.com/photo-1650749837474-a9ab19e3d1af?ixid=M3w0NjAwMjJ8MHwxfHJhbmRvbXx8fHx8fHx8fDE3NTQ0NjU3NTF8&amp;ixlib=rb-4.1.0" alt="Featured image of post hashcat" /&gt;&lt;h1 id=&#34;hashcathashcat&#34;&gt;&lt;a class=&#34;link&#34; href=&#34;https://github.com/hashcat/hashcat&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;hashcat/hashcat&lt;/a&gt;
&lt;/h1&gt;&lt;h2 id=&#34;hashcat&#34;&gt;&lt;em&gt;hashcat&lt;/em&gt;
&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;hashcat&lt;/strong&gt; is the world&amp;rsquo;s fastest and most advanced password recovery utility, supporting five unique modes of attack for over 300 highly-optimized hashing algorithms. hashcat currently supports CPUs, GPUs, and other hardware accelerators on Linux, Windows, and macOS, and has facilities to help enable distributed password cracking.&lt;/p&gt;
&lt;h3 id=&#34;license&#34;&gt;License
&lt;/h3&gt;&lt;p&gt;&lt;strong&gt;hashcat&lt;/strong&gt; is licensed under the MIT license. Refer to &lt;a class=&#34;link&#34; href=&#34;docs/license.txt&#34; &gt;docs/license.txt&lt;/a&gt; for more information.&lt;/p&gt;
&lt;h3 id=&#34;installation&#34;&gt;Installation
&lt;/h3&gt;&lt;p&gt;Download the &lt;a class=&#34;link&#34; href=&#34;https://hashcat.net/hashcat/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;latest release&lt;/a&gt; and unpack it in the desired location. Please remember to use &lt;code&gt;7z x&lt;/code&gt; when unpacking the archive from the command line to ensure full file paths remain intact.&lt;/p&gt;
&lt;h3 id=&#34;usagehelp&#34;&gt;Usage/Help
&lt;/h3&gt;&lt;p&gt;Please refer to the &lt;a class=&#34;link&#34; href=&#34;https://hashcat.net/wiki/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Hashcat Wiki&lt;/a&gt; and the output of &lt;code&gt;--help&lt;/code&gt; for usage information and general help. A list of frequently asked questions may also be found &lt;a class=&#34;link&#34; href=&#34;https://hashcat.net/wiki/doku.php?id=frequently_asked_questions&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;here&lt;/a&gt;. The &lt;a class=&#34;link&#34; href=&#34;https://hashcat.net/forum/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Hashcat Forum&lt;/a&gt; also contains a plethora of information. If you still need help from a real human, come to &lt;a class=&#34;link&#34; href=&#34;https://discord.gg/HFS523HGBT&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Discord&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;building&#34;&gt;Building
&lt;/h3&gt;&lt;p&gt;Refer to &lt;a class=&#34;link&#34; href=&#34;BUILD.md&#34; &gt;BUILD.md&lt;/a&gt; for instructions on how to build &lt;strong&gt;hashcat&lt;/strong&gt; from source.&lt;/p&gt;
&lt;p&gt;Tests:&lt;/p&gt;
&lt;table&gt;
  &lt;thead&gt;
      &lt;tr&gt;
          &lt;th&gt;Travis&lt;/th&gt;
          &lt;th&gt;Coverity&lt;/th&gt;
          &lt;th&gt;GitHub Actions&lt;/th&gt;
      &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
      &lt;tr&gt;
          &lt;td&gt;&lt;a class=&#34;link&#34; href=&#34;https://travis-ci.org/hashcat/hashcat&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;&lt;img src=&#34;https://travis-ci.org/hashcat/hashcat.svg?branch=master&#34; loading=&#34;lazy&#34; alt=&#34;Hashcat Travis Build status&#34;&gt;&lt;/a&gt;&lt;/td&gt;
          &lt;td&gt;&lt;a class=&#34;link&#34; href=&#34;https://scan.coverity.com/projects/hashcat&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;&lt;img src=&#34;https://scan.coverity.com/projects/11753/badge.svg&#34; loading=&#34;lazy&#34; alt=&#34;Coverity Scan Build Status&#34;&gt;&lt;/a&gt;&lt;/td&gt;
          &lt;td&gt;&lt;a class=&#34;link&#34; href=&#34;https://github.com/hashcat/hashcat/actions/workflows/build.yml&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;&lt;img src=&#34;https://github.com/hashcat/hashcat/actions/workflows/build.yml/badge.svg&#34; loading=&#34;lazy&#34; alt=&#34;Hashcat GitHub Actions Build status&#34;&gt;&lt;/a&gt;&lt;/td&gt;
      &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id=&#34;contributing&#34;&gt;Contributing
&lt;/h3&gt;&lt;p&gt;Contributions are welcome and encouraged, provided your code is of sufficient quality. Before submitting a pull request, please ensure your code adheres to the following requirements:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Licensed under the MIT license, or dedicated to the public domain (BSD-, GPL-, and similarly licensed code is incompatible)&lt;/li&gt;
&lt;li&gt;Adheres to the gnu99 standard&lt;/li&gt;
&lt;li&gt;Compiles cleanly with no warnings when compiled with &lt;code&gt;-W -Wall -std=gnu99&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Uses &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Indent_style#Allman_style&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Allman-style&lt;/a&gt; code blocks &amp;amp; indentation&lt;/li&gt;
&lt;li&gt;Uses 2 spaces for indentation, or a tab where required (for example, in Makefiles)&lt;/li&gt;
&lt;li&gt;Uses lower-case function and variable names&lt;/li&gt;
&lt;li&gt;Avoids the use of &lt;code&gt;!&lt;/code&gt; and uses positive conditionals wherever possible (e.g., &lt;code&gt;if (foo == 0)&lt;/code&gt; instead of &lt;code&gt;if (!foo)&lt;/code&gt;, and &lt;code&gt;if (foo)&lt;/code&gt; instead of &lt;code&gt;if (foo != 0)&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Uses &lt;code&gt;array[index + 0]&lt;/code&gt; alongside &lt;code&gt;array[index + 1]&lt;/code&gt; to keep indexing visually aligned&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;You can use GNU Indent to help assist you with the style requirements:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-fallback&#34; data-lang=&#34;fallback&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;indent -st -bad -bap -sc -bl -bli0 -ncdw -nce -cli0 -cbi0 -pcs -cs -npsl -bs -nbc -bls -blf -lp -i2 -ts2 -nut -l1024 -nbbo -fca -lc1024 -fc1
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;Your pull request should fully describe the functionality you are adding/removing or the problem you are solving. Regardless of whether your patch modifies one line or one thousand lines, you must describe what has prompted and/or motivated the change.&lt;/p&gt;
&lt;p&gt;Solve only one problem in each pull request. If you&amp;rsquo;re fixing a bug and adding a new feature, you need to make two separate pull requests. If you&amp;rsquo;re fixing three bugs, you need to make three separate pull requests. If you&amp;rsquo;re adding four new features, you need to make four separate pull requests. So on, and so forth.&lt;/p&gt;
&lt;p&gt;If your patch fixes a bug, please be sure there is an &lt;a class=&#34;link&#34; href=&#34;https://github.com/hashcat/hashcat/issues&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;issue&lt;/a&gt; open for the bug before submitting a pull request. If your patch aims to improve performance or optimize an algorithm, be sure to quantify your optimizations and document the trade-offs, and back up your claims with benchmarks and metrics.&lt;/p&gt;
&lt;p&gt;In order to maintain the quality and integrity of the &lt;strong&gt;hashcat&lt;/strong&gt; source tree, all pull requests must be reviewed and signed off by at least two &lt;a class=&#34;link&#34; href=&#34;https://github.com/orgs/hashcat/people&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;board members&lt;/a&gt; before being merged. The &lt;a class=&#34;link&#34; href=&#34;https://github.com/jsteube&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;project lead&lt;/a&gt; has the ultimate authority in deciding whether to accept or reject a pull request. Do not be discouraged if your pull request is rejected!&lt;/p&gt;
&lt;h3 id=&#34;happy-cracking&#34;&gt;Happy Cracking!
&lt;/h3&gt;</description>
        </item>
        <item>
        <title>tinygrad</title>
        <link>https://producthunt.programnotes.cn/en/p/tinygrad/</link>
        <pubDate>Wed, 21 May 2025 15:30:12 +0800</pubDate>
        
        <guid>https://producthunt.programnotes.cn/en/p/tinygrad/</guid>
        <description>&lt;img src="https://images.unsplash.com/photo-1718539503170-cec2c93a2f3d?ixid=M3w0NjAwMjJ8MHwxfHJhbmRvbXx8fHx8fHx8fDE3NDc4MTI0ODF8&amp;ixlib=rb-4.1.0" alt="Featured image of post tinygrad" /&gt;&lt;h1 id=&#34;tinygradtinygrad&#34;&gt;&lt;a class=&#34;link&#34; href=&#34;https://github.com/tinygrad/tinygrad&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;tinygrad/tinygrad&lt;/a&gt;
&lt;/h1&gt;&lt;div align=&#34;center&#34;&gt;
&lt;picture&gt;
  &lt;source media=&#34;(prefers-color-scheme: light)&#34; srcset=&#34;https://producthunt.programnotes.cn/docs/logo_tiny_light.svg&#34;&gt;
&lt;/picture&gt;
&lt;p&gt;tinygrad: For something between &lt;a class=&#34;link&#34; href=&#34;https://github.com/pytorch/pytorch&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;PyTorch&lt;/a&gt; and &lt;a class=&#34;link&#34; href=&#34;https://github.com/karpathy/micrograd&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;karpathy/micrograd&lt;/a&gt;. Maintained by &lt;a class=&#34;link&#34; href=&#34;https://tinygrad.org&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;tiny corp&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;https://github.com/tinygrad/tinygrad&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Homepage&lt;/a&gt; | &lt;a class=&#34;link&#34; href=&#34;https://docs.tinygrad.org/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Documentation&lt;/a&gt; | &lt;a class=&#34;link&#34; href=&#34;https://discord.gg/ZjZadyC7PK&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Discord&lt;/a&gt;&lt;/p&gt;
&lt;/h3&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;https://github.com/tinygrad/tinygrad/stargazers&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;&lt;img src=&#34;https://img.shields.io/github/stars/tinygrad/tinygrad&#34; loading=&#34;lazy&#34; alt=&#34;GitHub Repo stars&#34;&gt;&lt;/a&gt;
&lt;a class=&#34;link&#34; href=&#34;https://github.com/tinygrad/tinygrad/actions/workflows/test.yml&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;&lt;img src=&#34;https://github.com/tinygrad/tinygrad/actions/workflows/test.yml/badge.svg&#34; loading=&#34;lazy&#34; alt=&#34;Unit Tests&#34;&gt;&lt;/a&gt;
&lt;a class=&#34;link&#34; href=&#34;https://discord.gg/ZjZadyC7PK&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;&lt;img src=&#34;https://img.shields.io/discord/1068976834382925865&#34; loading=&#34;lazy&#34; alt=&#34;Discord&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;p&gt;This may not be the best deep learning framework, but it is a deep learning framework.&lt;/p&gt;
&lt;p&gt;Due to its extreme simplicity, it aims to be the easiest framework to add new accelerators to, with support for both inference and training. If XLA is CISC, tinygrad is RISC.&lt;/p&gt;
&lt;p&gt;tinygrad is still alpha software, but we &lt;a class=&#34;link&#34; href=&#34;https://geohot.github.io/blog/jekyll/update/2023/05/24/the-tiny-corp-raised-5M.html&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;raised some money&lt;/a&gt; to make it good. Someday, we will tape out chips.&lt;/p&gt;
&lt;h2 id=&#34;features&#34;&gt;Features
&lt;/h2&gt;&lt;h3 id=&#34;llama-and-stable-diffusion&#34;&gt;LLaMA and Stable Diffusion
&lt;/h3&gt;&lt;p&gt;tinygrad can run &lt;a class=&#34;link&#34; href=&#34;https://producthunt.programnotes.cn/docs/showcase.md#llama&#34; &gt;LLaMA&lt;/a&gt; and &lt;a class=&#34;link&#34; href=&#34;https://producthunt.programnotes.cn/docs/showcase.md#stable-diffusion&#34; &gt;Stable Diffusion&lt;/a&gt;!&lt;/p&gt;
&lt;h3 id=&#34;laziness&#34;&gt;Laziness
&lt;/h3&gt;&lt;p&gt;Try a matmul. See how, despite the style, it is fused into one kernel with the power of laziness.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt;1
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;2
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;3
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;4
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-sh&#34; data-lang=&#34;sh&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nv&#34;&gt;DEBUG&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;m&#34;&gt;3&lt;/span&gt; python3 -c &lt;span class=&#34;s2&#34;&gt;&amp;#34;from tinygrad import Tensor;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s2&#34;&gt;N = 1024; a, b = Tensor.rand(N, N), Tensor.rand(N, N);
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s2&#34;&gt;c = (a.reshape(N, 1, N) * b.T.reshape(1, N, N)).sum(axis=2);
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s2&#34;&gt;print((c.numpy() - (a.numpy() @ b.numpy())).mean())&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;And we can change &lt;code&gt;DEBUG&lt;/code&gt; to &lt;code&gt;4&lt;/code&gt; to see the generated code.&lt;/p&gt;
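Conceptually, the laziness above can be sketched in a few lines of plain Python. This is a hypothetical toy, not tinygrad's actual machinery: operations only record a graph, and nothing is computed until the result is asked for (the point at which a real framework would fuse the recorded ops into one kernel).

```python
# A toy lazy tensor: arithmetic builds a graph; nothing runs until realize().
class Lazy:
  def __init__(self, value=None, op=None, srcs=()):
    self.value, self.op, self.srcs = value, op, srcs

  def __add__(self, other): return Lazy(op=lambda a, b: a + b, srcs=(self, other))
  def __mul__(self, other): return Lazy(op=lambda a, b: a * b, srcs=(self, other))

  def realize(self):
    # Walk the recorded graph once and cache the result.
    # A real framework would fuse these ops into a single kernel here.
    if self.value is None:
      self.value = self.op(*(s.realize() for s in self.srcs))
    return self.value

a, b = Lazy(3), Lazy(4)
c = (a + b) * b        # no arithmetic has happened yet
print(c.realize())     # -> 28
```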
&lt;h3 id=&#34;neural-networks&#34;&gt;Neural networks
&lt;/h3&gt;&lt;p&gt;As it turns out, 90% of what you need for neural networks is a decent autograd/tensor library.
Throw in an optimizer, a data loader, and some compute, and you have all you need.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt; 1
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt; 2
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt; 3
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt; 4
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt; 5
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt; 6
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt; 7
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt; 8
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt; 9
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;10
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;11
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;12
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;13
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;14
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;15
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;16
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;17
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;18
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;19
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;20
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-python&#34; data-lang=&#34;python&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;kn&#34;&gt;from&lt;/span&gt; &lt;span class=&#34;nn&#34;&gt;tinygrad&lt;/span&gt; &lt;span class=&#34;kn&#34;&gt;import&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;Tensor&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;nn&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;k&#34;&gt;class&lt;/span&gt; &lt;span class=&#34;nc&#34;&gt;LinearNet&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;k&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;fm&#34;&gt;__init__&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;bp&#34;&gt;self&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;):&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;bp&#34;&gt;self&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;l1&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;Tensor&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;kaiming_uniform&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;784&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;mi&#34;&gt;128&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;bp&#34;&gt;self&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;l2&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;Tensor&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;kaiming_uniform&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;128&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;mi&#34;&gt;10&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;k&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;fm&#34;&gt;__call__&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;bp&#34;&gt;self&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;x&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;Tensor&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;-&amp;gt;&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;Tensor&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;return&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;x&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;flatten&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;1&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;dot&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;bp&#34;&gt;self&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;l1&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;relu&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;()&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;dot&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;bp&#34;&gt;self&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;l2&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;n&#34;&gt;model&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;LinearNet&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;n&#34;&gt;optim&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;nn&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;optim&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;Adam&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;([&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;model&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;l1&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;model&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;l2&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;],&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;lr&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;mf&#34;&gt;0.001&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;n&#34;&gt;x&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;y&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;Tensor&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;rand&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;4&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;mi&#34;&gt;1&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;mi&#34;&gt;28&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;mi&#34;&gt;28&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;),&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;Tensor&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;([&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;2&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;4&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;3&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;7&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;])&lt;/span&gt;  &lt;span class=&#34;c1&#34;&gt;# replace with real mnist dataloader&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;k&#34;&gt;with&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;Tensor&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;train&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;():&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;k&#34;&gt;for&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;i&lt;/span&gt; &lt;span class=&#34;ow&#34;&gt;in&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;range&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;10&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;):&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;n&#34;&gt;optim&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;zero_grad&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;n&#34;&gt;loss&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;model&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;x&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;sparse_categorical_crossentropy&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;y&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;backward&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;n&#34;&gt;optim&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;step&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nb&#34;&gt;print&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;i&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;loss&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;item&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;())&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;See &lt;a class=&#34;link&#34; href=&#34;examples/beautiful_mnist.py&#34; &gt;examples/beautiful_mnist.py&lt;/a&gt; for the full version, which reaches 98% accuracy in about 5 seconds.&lt;/p&gt;
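The claim that a decent autograd library covers most of the need can be illustrated with a toy scalar autograd in plain Python, in the spirit of micrograd rather than tinygrad's real internals:

```python
# Minimal reverse-mode autograd on scalars: each op records how to
# propagate an incoming gradient back to its operands (the chain rule).
class Value:
  def __init__(self, data):
    self.data, self.grad, self._grad_fn = data, 0.0, None

  def __add__(self, other):
    out = Value(self.data + other.data)
    out._grad_fn = lambda g: [(self, g), (other, g)]
    return out

  def __mul__(self, other):
    out = Value(self.data * other.data)
    out._grad_fn = lambda g: [(self, g * other.data), (other, g * self.data)]
    return out

  def backward(self, grad=1.0):
    # Accumulate gradients through every recorded path.
    self.grad += grad
    if self._grad_fn:
      for parent, g in self._grad_fn(grad):
        parent.backward(g)

x, y = Value(2.0), Value(3.0)
z = x * y + x
z.backward()
print(x.grad, y.grad)  # dz/dx = y + 1 = 4.0, dz/dy = x = 2.0
```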
&lt;h2 id=&#34;accelerators&#34;&gt;Accelerators
&lt;/h2&gt;&lt;p&gt;tinygrad already supports numerous accelerators, including:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;input checked=&#34;&#34; disabled=&#34;&#34; type=&#34;checkbox&#34;&gt; &lt;a class=&#34;link&#34; href=&#34;tinygrad/runtime/ops_gpu.py&#34; &gt;GPU (OpenCL)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;input checked=&#34;&#34; disabled=&#34;&#34; type=&#34;checkbox&#34;&gt; &lt;a class=&#34;link&#34; href=&#34;tinygrad/runtime/ops_cpu.py&#34; &gt;CPU (C Code)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;input checked=&#34;&#34; disabled=&#34;&#34; type=&#34;checkbox&#34;&gt; &lt;a class=&#34;link&#34; href=&#34;tinygrad/runtime/ops_llvm.py&#34; &gt;LLVM&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;input checked=&#34;&#34; disabled=&#34;&#34; type=&#34;checkbox&#34;&gt; &lt;a class=&#34;link&#34; href=&#34;tinygrad/runtime/ops_metal.py&#34; &gt;METAL&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;input checked=&#34;&#34; disabled=&#34;&#34; type=&#34;checkbox&#34;&gt; &lt;a class=&#34;link&#34; href=&#34;tinygrad/runtime/ops_cuda.py&#34; &gt;CUDA&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;input checked=&#34;&#34; disabled=&#34;&#34; type=&#34;checkbox&#34;&gt; &lt;a class=&#34;link&#34; href=&#34;tinygrad/runtime/ops_amd.py&#34; &gt;AMD&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;input checked=&#34;&#34; disabled=&#34;&#34; type=&#34;checkbox&#34;&gt; &lt;a class=&#34;link&#34; href=&#34;tinygrad/runtime/ops_nv.py&#34; &gt;NV&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;input checked=&#34;&#34; disabled=&#34;&#34; type=&#34;checkbox&#34;&gt; &lt;a class=&#34;link&#34; href=&#34;tinygrad/runtime/ops_qcom.py&#34; &gt;QCOM&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;input checked=&#34;&#34; disabled=&#34;&#34; type=&#34;checkbox&#34;&gt; &lt;a class=&#34;link&#34; href=&#34;tinygrad/runtime/ops_webgpu.py&#34; &gt;WEBGPU&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;And it is easy to add more! Your accelerator of choice only needs to support a total of ~25 low level ops.&lt;/p&gt;
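Loosely speaking, and with hypothetical op names rather than tinygrad's actual op set, "supporting ~25 low level ops" means a backend amounts to a dispatch table mapping a small set of primitives to device implementations:

```python
import math

# A hypothetical CPU backend: each primitive op maps to an implementation.
cpu_ops = {
  "ADD": lambda a, b: [x + y for x, y in zip(a, b)],
  "MUL": lambda a, b: [x * y for x, y in zip(a, b)],
  "EXP": lambda a: [math.exp(x) for x in a],
  "SUM": lambda a: [sum(a)],
}

def run(backend, op, *args):
  # A new accelerator only needs to supply its own table of primitives.
  return backend[op](*args)

print(run(cpu_ops, "ADD", [1, 2], [3, 4]))  # -> [4, 6]
```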
&lt;p&gt;To check the default accelerator, run: &lt;code&gt;python3 -c &amp;quot;from tinygrad import Device; print(Device.DEFAULT)&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2 id=&#34;installation&#34;&gt;Installation
&lt;/h2&gt;&lt;p&gt;The current recommended way to install tinygrad is from source.&lt;/p&gt;
&lt;h3 id=&#34;from-source&#34;&gt;From source
&lt;/h3&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt;1
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;2
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;3
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-sh&#34; data-lang=&#34;sh&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;git clone https://github.com/tinygrad/tinygrad.git
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nb&#34;&gt;cd&lt;/span&gt; tinygrad
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;python3 -m pip install -e .
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;h3 id=&#34;direct-master&#34;&gt;Direct (master)
&lt;/h3&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-sh&#34; data-lang=&#34;sh&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;python3 -m pip install git+https://github.com/tinygrad/tinygrad.git
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;h2 id=&#34;documentation&#34;&gt;Documentation
&lt;/h2&gt;&lt;p&gt;Documentation along with a quick start guide can be found on the &lt;a class=&#34;link&#34; href=&#34;https://docs.tinygrad.org/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;docs website&lt;/a&gt; built from the &lt;a class=&#34;link&#34; href=&#34;https://producthunt.programnotes.cn/docs&#34; &gt;docs/&lt;/a&gt; directory.&lt;/p&gt;
&lt;h3 id=&#34;quick-example-comparing-to-pytorch&#34;&gt;Quick example comparing to PyTorch
&lt;/h3&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt;1
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;2
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;3
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;4
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;5
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;6
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;7
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;8
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;9
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-python&#34; data-lang=&#34;python&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;kn&#34;&gt;from&lt;/span&gt; &lt;span class=&#34;nn&#34;&gt;tinygrad&lt;/span&gt; &lt;span class=&#34;kn&#34;&gt;import&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;Tensor&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;n&#34;&gt;x&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;Tensor&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;eye&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;3&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;requires_grad&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;kc&#34;&gt;True&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;n&#34;&gt;y&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;Tensor&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;([[&lt;/span&gt;&lt;span class=&#34;mf&#34;&gt;2.0&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;0&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;-&lt;/span&gt;&lt;span class=&#34;mf&#34;&gt;2.0&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;]],&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;requires_grad&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;kc&#34;&gt;True&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;n&#34;&gt;z&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;y&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;matmul&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;x&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;sum&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;n&#34;&gt;z&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;backward&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nb&#34;&gt;print&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;x&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;grad&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;tolist&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;())&lt;/span&gt;  &lt;span class=&#34;c1&#34;&gt;# dz/dx&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nb&#34;&gt;print&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;y&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;grad&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;tolist&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;())&lt;/span&gt;  &lt;span class=&#34;c1&#34;&gt;# dz/dy&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;The same thing but in PyTorch:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt;1
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;2
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;3
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;4
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;5
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;6
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;7
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;8
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;9
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-python&#34; data-lang=&#34;python&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;kn&#34;&gt;import&lt;/span&gt; &lt;span class=&#34;nn&#34;&gt;torch&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;n&#34;&gt;x&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;torch&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;eye&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;3&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;requires_grad&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;kc&#34;&gt;True&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;n&#34;&gt;y&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;torch&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;tensor&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;([[&lt;/span&gt;&lt;span class=&#34;mf&#34;&gt;2.0&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;0&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;-&lt;/span&gt;&lt;span class=&#34;mf&#34;&gt;2.0&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;]],&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;requires_grad&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;kc&#34;&gt;True&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;n&#34;&gt;z&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;y&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;matmul&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;x&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;sum&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;n&#34;&gt;z&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;backward&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nb&#34;&gt;print&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;x&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;grad&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;tolist&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;())&lt;/span&gt;  &lt;span class=&#34;c1&#34;&gt;# dz/dx&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nb&#34;&gt;print&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;y&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;grad&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;tolist&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;())&lt;/span&gt;  &lt;span class=&#34;c1&#34;&gt;# dz/dy&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
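Both versions print the same gradients. As a sanity check that assumes neither framework, the closed-form gradients of z = (y @ x).sum() can be computed by hand in plain Python: dz/dx[i][j] = y[0][i], and dz/dy[0][i] is the i-th row sum of x.

```python
# Hand-computed gradients for z = (y @ x).sum(), matching the autograd
# examples above: dz/dx[i][j] = y[0][i], dz/dy[0][i] = sum_j x[i][j].
x = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]  # eye(3)
y = [[2.0, 0.0, -2.0]]

dz_dx = [[y[0][i] for _ in range(3)] for i in range(3)]
dz_dy = [[sum(x[i]) for i in range(3)]]

print(dz_dx)  # [[2.0, 2.0, 2.0], [0.0, 0.0, 0.0], [-2.0, -2.0, -2.0]]
print(dz_dy)  # [[1.0, 1.0, 1.0]]
```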
&lt;/div&gt;&lt;h2 id=&#34;contributing&#34;&gt;Contributing
&lt;/h2&gt;&lt;p&gt;There has been a lot of interest in tinygrad lately. Following these guidelines will help your PR get accepted.&lt;/p&gt;
&lt;p&gt;We&amp;rsquo;ll start with what will get your PR closed with a pointer to this section:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;No code golf! While low line count is a guiding light of this project, anything that remotely looks like code golf will be closed. The true goal is reducing complexity and increasing readability, and deleting &lt;code&gt;\n&lt;/code&gt;s does nothing to help with that.&lt;/li&gt;
&lt;li&gt;All docs and whitespace changes will be closed unless you are a well-known contributor. The people writing the docs should be those who know the codebase the absolute best. People who have not demonstrated that shouldn&amp;rsquo;t be messing with docs. Whitespace changes are both useless &lt;em&gt;and&lt;/em&gt; carry a risk of introducing bugs.&lt;/li&gt;
&lt;li&gt;Anything you claim is a &amp;ldquo;speedup&amp;rdquo; must be benchmarked. In general, the goal is simplicity, so even if your PR makes things marginally faster, you have to consider the tradeoff with maintainability and readability.&lt;/li&gt;
&lt;li&gt;In general, the code outside the core &lt;code&gt;tinygrad/&lt;/code&gt; folder is not well tested, so unless the current code there is broken, you shouldn&amp;rsquo;t be changing it.&lt;/li&gt;
&lt;li&gt;If your PR looks &amp;ldquo;complex&amp;rdquo;, is a big diff, or adds lots of lines, it won&amp;rsquo;t be reviewed or merged. Consider breaking it up into smaller PRs that are individually clear wins. A common pattern I see is prerequisite refactors before adding new functionality. If you can (cleanly) refactor to the point that the feature is a 3 line change, this is great, and something easy for us to review.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Now, what we want:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Bug fixes (with a regression test) are great! This library isn&amp;rsquo;t 1.0 yet, so if you stumble upon a bug, fix it, write a test, and submit a PR, this is valuable work.&lt;/li&gt;
&lt;li&gt;Solving bounties! tinygrad &lt;a class=&#34;link&#34; href=&#34;https://docs.google.com/spreadsheets/d/1WKHbT-7KOgjEawq5h5Ic1qUWzpfAzuD_J06N1JwOCGs/edit?usp=sharing&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;offers cash bounties&lt;/a&gt; for certain improvements to the library. All new code should be high quality and well tested.&lt;/li&gt;
&lt;li&gt;Features. However, if you are adding a feature, consider the line tradeoff. If it&amp;rsquo;s 3 lines, there&amp;rsquo;s less of a bar of usefulness it has to meet over something that&amp;rsquo;s 30 or 300 lines. All features must have regression tests. In general with no other constraints, your feature&amp;rsquo;s API should match torch or numpy.&lt;/li&gt;
&lt;li&gt;Refactors that are clear wins. In general, if your refactor isn&amp;rsquo;t a clear win it will be closed. But some refactors are amazing! Think about readability in a deep core sense. A whitespace change or moving a few functions around is useless, but if you realize that two 100 line functions can actually use the same 110 line function with arguments while also improving readability, this is a big win. Refactors should pass &lt;a class=&#34;link&#34; href=&#34;#process-replay-tests&#34; &gt;process replay&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Tests/fuzzers. If you can add tests that are non-brittle, they are welcome. We have some fuzzers in here too, and there&amp;rsquo;s a plethora of bugs that can be found with them and by improving them. Finding bugs, even writing broken tests (that should pass) with &lt;code&gt;@unittest.expectedFailure&lt;/code&gt; is great. This is how we make progress.&lt;/li&gt;
&lt;li&gt;Dead code removal from core &lt;code&gt;tinygrad/&lt;/code&gt; folder. We don&amp;rsquo;t care about the code in extra, but removing dead code from the core library is great. Less for new people to read and be confused by.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;running-tests&#34;&gt;Running tests
&lt;/h3&gt;&lt;p&gt;You should install the pre-commit hooks with &lt;code&gt;pre-commit install&lt;/code&gt;. This will run the linter, mypy, and a subset of the tests on every commit.&lt;/p&gt;
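If you have not used pre-commit before, a typical setup looks like this (a sketch; the actual hooks run are whatever the repository's pre-commit config defines):

```shell
python3 -m pip install pre-commit   # install the pre-commit tool itself
pre-commit install                  # register the git hook in this clone
pre-commit run --all-files          # optionally run every hook once up front
```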
&lt;p&gt;For more examples on how to run the full test suite please refer to the &lt;a class=&#34;link&#34; href=&#34;.github/workflows/test.yml&#34; &gt;CI workflow&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Some examples of running tests locally:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt;1
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;2
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;3
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-sh&#34; data-lang=&#34;sh&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;python3 -m pip install -e &lt;span class=&#34;s1&#34;&gt;&amp;#39;.[testing]&amp;#39;&lt;/span&gt;  &lt;span class=&#34;c1&#34;&gt;# install extra deps for testing&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;python3 test/test_ops.py                &lt;span class=&#34;c1&#34;&gt;# just the ops tests&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;python3 -m pytest test/                 &lt;span class=&#34;c1&#34;&gt;# whole test suite&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;h4 id=&#34;process-replay-tests&#34;&gt;Process replay tests
&lt;/h4&gt;&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;https://github.com/tinygrad/tinygrad/blob/master/test/external/process_replay/README.md&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Process replay&lt;/a&gt; compares your PR&amp;rsquo;s generated kernels against master. If your PR is a refactor or speedup without any expected behavior change, it should include [pr] in the pull request title.&lt;/p&gt;
</description>
        </item>
        
    </channel>
</rss>
