diff --git a/content/posts/blackwell_datacenter_vs_geforce.mdx b/content/posts/blackwell_datacenter_vs_geforce.mdx
index ab00318..2e103f9 100644
--- a/content/posts/blackwell_datacenter_vs_geforce.mdx
+++ b/content/posts/blackwell_datacenter_vs_geforce.mdx
@@ -17,7 +17,28 @@ you'll see that the new tensor core gen 5 instructions are only compatible with
### What's in the new tensor cores?
The blackwell tensor cores now support lower precision, namely FP6 and FP4, which the previous Hopper generation didn't. This enables extremely fast low precision matrix multiplications.
-To test out the nvfp4 "Nvidia's low precision format. " support, I downloaded the cutlass repo and ran the nvfp4 matrix multiply example. Here's what I got
+The PTX ISA also introduces the `tcgen05` instructions, which make use of `TMEM`, or tensor memory, supported only on the datacenter cards. This additional memory sits next to the tensor cores and can
+be used independently of the registers used by the CUDA cores. The GeForce cards get 128KB of shared memory per SM, while the datacenter cards and the Jetson Thor get 228KB of SMEM plus 256KB of TMEM. That's an absolutely insane amount of on-chip memory for
+any kind of workload. Why did I have to dig so hard to find this information? The 5090 is an enthusiast-tier card, and I feel it deserves a clear description of what you're buying.
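+
+As a sanity check on those figures, here's the arithmetic (the per-SM sizes are the ones quoted above; the rest is just unit conversion):
+
+```python
+KB = 1024
+
+# Per-SM on-chip memory, using the figures quoted above
+geforce_smem = 128 * KB   # shared memory per SM, GeForce Blackwell
+dc_smem = 228 * KB        # shared memory per SM, datacenter Blackwell
+dc_tmem = 256 * KB        # tensor memory (TMEM) per SM, tcgen05-only
+
+dc_total = dc_smem + dc_tmem
+print(f"GeForce: {geforce_smem // KB}KB per SM")
+print(f"Datacenter: {dc_total // KB}KB per SM, "
+      f"{dc_total / geforce_smem:.2f}x the GeForce budget")
+```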
+
+I needed to confirm this myself. NVFP4 is Nvidia's new low-precision format. I downloaded the CUTLASS repo and ran the NVFP4 matrix multiply example. Here's what I got:

+Over a PETAFLOP of NVFP4 compute! ggs. This is already insane, and I'm very happy with it. I didn't get `wgmma` from Hopper, nor the `tcgen05` instructions and `TMEM`, but I did get a petaflop of NVFP4 compute.
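+
+For reference, here's how a GEMM runtime turns into a FLOPS figure like that. The problem size and runtime below are made-up illustrative numbers, not my actual measurements:
+
+```python
+def gemm_tflops(m: int, n: int, k: int, runtime_s: float) -> float:
+    """A dense GEMM does 2*m*n*k FLOPs (a multiply and an add per MAC)."""
+    return 2 * m * n * k / runtime_s / 1e12
+
+# e.g. an 8192^3 GEMM finishing in 1.0 ms works out to ~1.1 PFLOPS
+print(gemm_tflops(8192, 8192, 8192, 1.0e-3))
+```
+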
+Nsight Compute tells us exactly what we'd expect:
+
+
+
+The tensor cores are so fast that memory is bottlenecking them; the shared memory is completely saturated. Huh, I guess Nvidia realised this and created `tcgen05`, but we don't get to see any of that.
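+
+A back-of-the-envelope number shows why: feeding a petaflop of NVFP4 with no operand reuse would need absurd bandwidth, so everything hinges on how much reuse the on-chip memories can provide. The DRAM bandwidth below is an assumed round figure for a 5090-class card, not a measurement:
+
+```python
+compute = 1.0e15       # ~1 PFLOP/s of NVFP4 compute, as measured above
+dram_bw = 1.8e12       # assumed ~1.8 TB/s of DRAM bandwidth
+bytes_per_flop = 0.5   # 1 byte of 4-bit operands per 2-FLOP MAC, no reuse
+
+needed_reuse = compute * bytes_per_flop / dram_bw
+print(f"each operand byte must be reused ~{needed_reuse:.0f}x on-chip")
+```
+
+That reuse comes from tiling through shared memory (and TMEM on the datacenter parts), which lines up with the shared memory pressure NCU shows here.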
+
+
+
+
+To see how the GPU folk in datacenters live, I booted up a Vast.ai instance and ran the same matmul, but with CUTLASS kernels built for `sm_100a`.
+
+
+
+We're getting over 2 petaflops, and I'm sure these things can go even faster with better code. Not having `tcgen05` really holds back the GeForce cards.
+Why, Jensen, why?
+
diff --git a/public/images/1_blackwell_dc_vs_gf/geforce_ncu.png b/public/images/1_blackwell_dc_vs_gf/geforce_ncu.png
new file mode 100644
index 0000000..e131aa1
Binary files /dev/null and b/public/images/1_blackwell_dc_vs_gf/geforce_ncu.png differ