[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"summaries-feed::cloud:":3,"summaries-facets-categories":4075,"trending-tags-9":7657,"summaries-facets-sources":7676},{"items":4,"total":4074},[5,228,302,414,482,544,679,789,1074,1155,1230,1446,1516,1722,1906,2057,2290,2360,2780,2831,2900,3021,3096,3157,3356,3553,3604,3825,3906,3947],{"id":6,"title":7,"ai":8,"body":15,"categories":175,"created_at":177,"date_modified":177,"description":167,"extension":178,"faq":177,"featured":179,"kicker_label":177,"meta":180,"navigation":207,"path":208,"published_at":209,"question":177,"scraped_at":210,"seo":211,"sitemap":212,"source_id":213,"source_name":214,"source_type":215,"source_url":216,"stem":217,"tags":218,"thumbnail_url":223,"tldr":224,"tweet":225,"unknown_tags":226,"__hash__":227},"summaries\u002Fsummaries\u002F25544e9965dc4dae-gpu-orchestrated-multi-agent-sustainability-intell-summary.md","GPU-Orchestrated Multi-Agent Sustainability Intelligence Blueprint",{"provider":9,"model":10,"input_tokens":11,"output_tokens":12,"processing_time_ms":13,"cost_usd":14},"openrouter","x-ai\u002Fgrok-4.1-fast",9206,2577,37454,0.0031072,{"type":16,"value":17,"toc":166},"minimark",[18,23,27,30,33,37,40,43,46,49,53,56,79,82,85,88,92,95,98,124,127,130,134],[19,20,22],"h2",{"id":21},"agentic-workloads-demand-elastic-secure-infrastructure","Agentic Workloads Demand Elastic, Secure Infrastructure",[24,25,26],"p",{},"Chelsie Czop emphasizes that AI agents optimize for outcomes over outputs, enabling cross-platform automation, asynchronous productivity, and real-world transactions. An agent is \"a service that autonomously reasons to solve a task using tools and data,\" but must meet compliance, CI\u002FCD, security, cost, and performance standards like latency SLOs.",[24,28,29],{},"Agents stress infrastructure with bursty traffic, latency sensitivity, long-running tasks, idle cycles, and memory hunger. 
Three core challenges emerge: (1) latency and throughput amid constrained accelerators; (2) compute efficiency to boost density and cut idle resources; (3) security and governance for debugging, auditing, and controlling complex tasks.",[24,31,32],{},"\"Your agentic workloads need to be treated as untrusted,\" Czop warns. They require scaling for elasticity while securing against breaches. Google Cloud's AI Hypercomputer addresses this via purpose-built hardware (NVIDIA GPUs from Hopper to Blackwell), open software, and flexible models.",[19,34,36],{"id":35},"g4-gpus-and-cloud-run-unlock-serverless-agentic-inference","G4 GPUs and Cloud Run Unlock Serverless Agentic Inference",[24,38,39],{},"Czop spotlights G4 instances powered by NVIDIA RTX PRO 6000 Blackwell GPUs: 7x more performant than prior L4s, 4x GPU memory, 3x host memory. Optimized for peer-to-peer multi-GPU workloads, they deliver 2x NVLink collective performance on full VMs (up to 8 GPUs) via a simple environment flag.",[24,41,42],{},"Cloud Run GPU integrates this stack serverlessly for real-time multimodal inference, fine-tuning, or batch jobs. Design patterns include: on-demand inference (Cloud CDN → Cloud Run GPU with Gemma 4 weights from Cloud Storage over VPC); batch fine-tuning (async LoRA\u002FPEFT on domain data). Pros: background execution without management. Fine-tune Gemma for domain knowledge (e.g., SEC filings for finance), task behaviors (customer service), or personas (NPC styles).",[24,44,45],{},"Production win: Flipkart uses G4 for AI-led catalog enrichment, generating videos from images via agents. P2P communication yielded 50% latency and cost reductions versus PCIe.",[24,47,48],{},"Mitesh Patel reinforces: \"Latency is very important, and throughput is very important. 
And cost effectiveness is also important because when you're scaling the systems out at production level, cost is the primary factor.\"",[19,50,52],{"id":51},"multi-agent-architecture-for-multimodal-sustainability-analysis","Multi-Agent Architecture for Multimodal Sustainability Analysis",[24,54,55],{},"Patel demos a sustainability intelligence app orchestrating specialist agents for urban heat risk: satellite imagery (Phoenix urban heat island dataset), live telemetry, and policy PDFs. Main orchestrator (Google ADK) delegates to three sub-agents:",[57,58,59,67,73],"ul",{},[60,61,62,66],"li",{},[63,64,65],"strong",{},"Satellite Agent",": Analyzes baseline vs. current heat maps.",[60,68,69,72],{},[63,70,71],{},"Telemetry Agent",": Processes weather station data.",[60,74,75,78],{},[63,76,77],{},"Policy Agent",": Retrieves relevant embeddings from Milvus vector DB (pre-embedded via Gemma 3B gn-fp4).",[24,80,81],{},"Inference uses quantized Gemma 4 (31B params, gn-fp4) on VLM engine (swappable with SGLang or NVIDIA Dynamo), served on Cloud Run GPUs. ADK streamlines plugging agents, retrieval (Milvus), and future MCP servers.",[24,83,84],{},"Demo flow: User query triggers task dispatch; agents process modalities in parallel; orchestrator synthesizes into executive summary and mitigation strategies (e.g., cooling tactics). \"The main orchestrator will combine all this information... and generate a report for you,\" Patel explains.",[24,86,87],{},"This blueprint generalizes to any multimodal app: ADK handles orchestration, GPUs accelerate inference, Milvus enables RAG. Avoid coding from scratch—toolkits slash time-to-market.",[19,89,91],{"id":90},"production-insights-avoiding-loops-transitioning-to-autonomy-and-security","Production Insights: Avoiding Loops, Transitioning to Autonomy, and Security",[24,93,94],{},"Agents shine in real-time voice, encoding, and research (e.g., code base analysis via chain-of-thought). 
Fine-tuning boosts productivity, per Base10 insights.",[24,96,97],{},"Q&A highlights:",[57,99,100,106,112,118],{},[60,101,102,105],{},[63,103,104],{},"Loop Prevention",": Strong orchestration (like ADK) and tools break cycles.",[60,107,108,111],{},[63,109,110],{},"Human-to-Agent Transition",": When tasks are structured, reliable, and low-risk.",[60,113,114,117],{},[63,115,116],{},"Policy Retrieval Challenges",": Accurate RAG via embeddings\u002FMilvus; multimodal grounding.",[60,119,120,123],{},[63,121,122],{},"Security\u002FPrivacy",": VPCs, guardrails, auditability in Cloud Run.",[24,125,126],{},"Patel shares: Used agents for similar multimodal orchestration. Czop notes a friend's MVP failed on unoptimized agent demands, forcing re-architecture for cost\u002Flatency.",[24,128,129],{},"\"If you try to code it yourself, it's not impossible. But your time to market will just be way longer. And that is where these orchestration toolkits becomes very easy to use.\"",[19,131,133],{"id":132},"key-takeaways","Key Takeaways",[57,135,136,139,142,145,148,151,154,157,160,163],{},[60,137,138],{},"Treat agents as untrusted: Build with security, elasticity, and governance from day one.",[60,140,141],{},"Use Cloud Run GPUs for serverless inference\u002Ffine-tuning: Pull Gemma weights via VPC, scale elastically.",[60,143,144],{},"Orchestrate multi-agents with Google ADK: Delegate modalities to specialists, integrate Milvus RAG.",[60,146,147],{},"Quantize models (gn-fp4) on RTX PRO 6000 for 50%+ latency\u002Fcost wins, as in Flipkart's video gen.",[60,149,150],{},"Fine-tune for domains\u002Fpersonas via PEFT\u002FLoRA: Efficient on smaller datasets.",[60,152,153],{},"Pre-embed policies offline; runtime retrieval via vector DBs.",[60,155,156],{},"Start with multimodal demos like sustainability: Satellite + telemetry + docs → actionable reports.",[60,158,159],{},"Enable P2P multi-GPU with one flag for 2x NVLink gains.",[60,161,162],{},"Monitor KPIs: Latency, throughput, cost drive 
production scaling.",[60,164,165],{},"Get started: Join Google Cloud & NVIDIA community for blueprints.",{"title":167,"searchDepth":168,"depth":168,"links":169},"",2,[170,171,172,173,174],{"id":21,"depth":168,"text":22},{"id":35,"depth":168,"text":36},{"id":51,"depth":168,"text":52},{"id":90,"depth":168,"text":91},{"id":132,"depth":168,"text":133},[176],"AI & LLMs",null,"md",false,{"content_references":181,"triage":202},[182,186,188,190,192,194,197],{"type":183,"title":184,"context":185},"tool","Google Agent Development Kit (ADK)","mentioned",{"type":183,"title":187,"context":185},"Gemma 4",{"type":183,"title":189,"context":185},"Cloud Run",{"type":183,"title":191,"context":185},"Milvus",{"type":183,"title":193,"context":185},"NVIDIA RTX PRO 6000 GPUs",{"type":195,"title":196,"context":185},"dataset","Phoenix urban heat island risk dataset",{"type":198,"title":199,"url":200,"context":201},"other","Google Cloud & NVIDIA community","https:\u002F\u002Fgoo.gle\u002Fgoogle-nvidia-programs","recommended",{"relevance":203,"novelty":204,"quality":204,"actionability":204,"composite":205,"reasoning":206},5,4,4.35,"Category: AI Automation. The article provides a detailed exploration of using AI agents in a serverless architecture, addressing specific challenges and solutions relevant to product builders. 
It includes practical examples, such as Flipkart's use of G4 GPUs for AI-led catalog enrichment, which demonstrates real-world application.",true,"\u002Fsummaries\u002F25544e9965dc4dae-gpu-orchestrated-multi-agent-sustainability-intell-summary","2026-05-12 17:00:50","2026-05-13 12:00:34",{"title":7,"description":167},{"loc":208},"25544e9965dc4dae","Google Cloud Tech","video","https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=vIyhQGBkn34","summaries\u002F25544e9965dc4dae-gpu-orchestrated-multi-agent-sustainability-intell-summary",[219,220,221,222],"agents","llm","cloud","ai-automation","https:\u002F\u002Fi.ytimg.com\u002Fvi\u002FvIyhQGBkn34\u002Fhqdefault.jpg","Chelsie Czop and Mitesh Patel demo a serverless multi-agent app using Google ADK, Gemma 4 on NVIDIA RTX PRO 6000 GPUs via Cloud Run, and Milvus RAG for real-time environmental risk reports from satellite, telemetry, and policy data.","Livestream talk by Google Cloud PM Chelsie Czop and NVIDIA's Jay Rodge demoing a multi-agent sustainability app orchestrated with Agent Development Kit, running Gemma 4 on Cloud Run with RTX PRO 6000 GPUs, and using Milvus for policy retrieval, followed by audience Q&A on agent challenges.",[222],"-kVMzSLa9FYK5ixBk7a96OVeuA3rRBdX79ixI_1snjQ",{"id":229,"title":230,"ai":231,"body":236,"categories":272,"created_at":177,"date_modified":177,"description":167,"extension":178,"faq":177,"featured":179,"kicker_label":177,"meta":274,"navigation":207,"path":287,"published_at":288,"question":177,"scraped_at":288,"seo":289,"sitemap":290,"source_id":291,"source_name":292,"source_type":293,"source_url":294,"stem":295,"tags":296,"thumbnail_url":177,"tldr":299,"tweet":177,"unknown_tags":300,"__hash__":301},"summaries\u002Fsummaries\u002Fa2a811b50a4c64f5-mrc-resilient-networking-for-100k-gpu-ai-training-summary.md","MRC: Resilient Networking for 100K+ GPU AI 
Training",{"provider":9,"model":10,"input_tokens":232,"output_tokens":233,"processing_time_ms":234,"cost_usd":235},9014,2044,25377,0.0028023,{"type":16,"value":237,"toc":266},[238,242,245,249,252,256,259,263],[19,239,241],{"id":240},"multi-plane-topologies-slash-switch-tiers-and-power-for-massive-clusters","Multi-Plane Topologies Slash Switch Tiers and Power for Massive Clusters",[24,243,244],{},"Traditional 800Gb\u002Fs networks require three or four tiers of switches to connect over 100,000 GPUs, increasing power use, failure points, and cost. MRC splits each 800Gb\u002Fs interface into eight 100Gb\u002Fs links, creating eight parallel 'planes' that connect to separate switches. A 64-port 800Gb\u002Fs switch now handles 512 ports at 100Gb\u002Fs, enabling full connectivity for 131,000 GPUs using only two tiers. This design boosts path diversity—keeping more traffic local to Tier 0 switches—while cutting components, power, and cost compared to single-plane setups. Without changes, single-path flows (like classic RoCE) still congest links as flows collide, especially in AI's collective communications where worst-case latency stalls synchronous training.",[19,246,248],{"id":247},"packet-spraying-and-srv6-eliminate-congestion-and-dynamic-routing","Packet Spraying and SRv6 Eliminate Congestion and Dynamic Routing",[24,250,251],{},"MRC sprays packets from a single transfer across hundreds of paths spanning all planes, using final memory addresses for out-of-order reassembly at the destination. Adaptive load-balancing monitors paths: congestion triggers path swaps, packet loss retires the path (with probes for recovery), and 'packet trimming' at switches forwards headers only during destination congestion to prompt retransmits without false failure alarms. This achieves microsecond failure detection and rerouting, versus seconds for traditional fabrics. 
MRC replaces BGP dynamic routing with static SRv6 source routing: senders embed full switch ID sequences in IPv6 addresses. Switches shift addresses and follow pre-configured static tables, blindly forwarding without recomputing routes. Failures simply retire paths at endpoints, simplifying control planes and eliminating routing bugs from switch software.",[19,253,255],{"id":254},"production-impact-zero-measurable-downtime-amid-constant-failures","Production Impact: Zero-Measurable Downtime Amid Constant Failures",[24,257,258],{},"In OpenAI's NVIDIA GB200 supercomputers (including OCI's Abilene Stargate site and Microsoft's Fairwater), MRC handles millions of links with frequent flaps—multiple per minute between tiers—yet synchronous pretraining jobs show no measurable impact, allowing deferred repairs. Rebooting four Tier-1 switches or repairing links during jobs requires no coordination; MRC avoids bad paths automatically. Real training data shows quick recovery from full T1 switch loss with temporary slowdowns far less than physical capacity loss (e.g., one failed port on an 8-port interface reduces max rate by 1\u002F8th but sustains better effective throughput via path recalculation). Multi-job clusters avoid inter-job interference due to core-wide congestion elimination, maximizing GPU utilization for frontier models like those powering ChatGPT (900M weekly users).",[19,260,262],{"id":261},"strategic-wins-simpler-stacks-for-stargate-scale-compute","Strategic Wins: Simpler Stacks for Stargate-Scale Compute",[24,264,265],{},"MRC delivers three edges: two-tier multi-plane redundancy with lower power; zero core congestion for consistent flow throughput in sync training; and SRv6 for instant failure bypass via static planes. 
Deployed with AMD, Broadcom, Intel, Microsoft, NVIDIA hardware, it's released via Open Compute Project for industry adoption, supporting OpenAI's compute strategy of shared standards to scale AI infrastructure efficiently.",{"title":167,"searchDepth":168,"depth":168,"links":267},[268,269,270,271],{"id":240,"depth":168,"text":241},{"id":247,"depth":168,"text":248},{"id":254,"depth":168,"text":255},{"id":261,"depth":168,"text":262},[273],"DevOps & Cloud",{"content_references":275,"triage":283},[276,279],{"type":198,"title":277,"url":278,"context":185},"OCP MRC 1.0","https:\u002F\u002Fwww.opencompute.org\u002Fdocuments\u002Focp-mrc-1-0-pdf",{"type":280,"title":281,"url":282,"context":185},"paper","Resilient AI Supercomputer Networking using MRC and SRv6","https:\u002F\u002Fcdn.openai.com\u002Fpdf\u002Fresilient-ai-supercomputer-networking-using-mrc-and-srv6.pdf",{"relevance":284,"novelty":284,"quality":204,"actionability":168,"composite":285,"reasoning":286},3,3.05,"Category: DevOps & Cloud. The article discusses the MRC protocol's innovative networking solutions for AI training, which could be relevant for those building AI-powered products. 
However, it lacks direct actionable insights for the audience, focusing more on technical specifications than practical applications.","\u002Fsummaries\u002Fa2a811b50a4c64f5-mrc-resilient-networking-for-100k-gpu-ai-training-summary","2026-05-11 15:04:27",{"title":230,"description":167},{"loc":287},"a2a811b50a4c64f5","OpenAI News","article","https:\u002F\u002Fopenai.com\u002Findex\u002Fmrc-supercomputer-networking","summaries\u002Fa2a811b50a4c64f5-mrc-resilient-networking-for-100k-gpu-ai-training-summary",[297,298,221],"machine-learning","devops","OpenAI's MRC protocol uses multi-plane topologies and packet spraying across hundreds of paths with SRv6 source routing to eliminate congestion, route around failures in microseconds, and connect 131k GPUs with just two switch tiers, enabling non-stop frontier model training.",[],"BYXvfLzxxajQIir95xuUTVdTfvID4wPt3TOVHNxrCSU",{"id":303,"title":304,"ai":305,"body":310,"categories":393,"created_at":177,"date_modified":177,"description":167,"extension":178,"faq":177,"featured":179,"kicker_label":177,"meta":394,"navigation":207,"path":399,"published_at":400,"question":177,"scraped_at":401,"seo":402,"sitemap":403,"source_id":404,"source_name":405,"source_type":293,"source_url":406,"stem":407,"tags":408,"thumbnail_url":177,"tldr":411,"tweet":177,"unknown_tags":412,"__hash__":413},"summaries\u002Fsummaries\u002F9a9f9ad328728e84-aws-kms-envelope-encryption-secures-data-at-scale-summary.md","AWS KMS Envelope Encryption Secures Data at Scale",{"provider":9,"model":10,"input_tokens":306,"output_tokens":307,"processing_time_ms":308,"cost_usd":309},6446,1340,18664,0.00193655,{"type":16,"value":311,"toc":388},[312,316,324,333,340,347,351,354,357,378,381,385],[19,313,315],{"id":314},"envelope-encryption-delivers-aes-speed-without-master-key-exposure","Envelope Encryption Delivers AES Speed Without Master Key Exposure",[24,317,318,319,323],{},"Envelope encryption resolves symmetric AES-256's key distribution limits and RSA's performance 
bottlenecks by layering fast bulk encryption under protected master keys. Generate a plaintext AES-256 DEK via KMS ",[320,321,322],"code",{},"GenerateDataKey"," API—it returns both the raw DEK (held in memory only) and its encrypted version under the master key. Encrypt your data locally with the plaintext DEK using AES-256, which handles gigabytes per second via CPU-optimized bitwise operations (substitutions, shifts, XORs). Discard the plaintext DEK immediately after; store only the ciphertext and encrypted DEK in your database, like DynamoDB records:",[325,326,331],"pre",{"className":327,"code":329,"language":330},[328],"language-text","{\n  \"user_id\": \"u_12345\",\n  \"encrypted_payload\": \"\u003Cbase64-encoded ciphertext>\",\n  \"encrypted_dek\": \"\u003Cbase64-encoded KMS-encrypted data key>\"\n}\n","text",[320,332,329],{"__ignoreMap":167},[24,334,335,336,339],{},"For decryption: Fetch the record, call KMS ",[320,337,338],{},"Decrypt"," on the encrypted DEK to recover the plaintext DEK in memory, decrypt the payload locally with AES-256, then discard the DEK. This keeps KMS calls out of data paths—only one per record lifecycle—while limiting breach impact: a compromised DEK affects only its data, not the master key or other records.",[24,341,342,343,346],{},"RSA complements for key exchange or small payloads (up to 214 bytes for 2048-bit keys, 4KB via ",[320,344,345],{},"Encrypt"," API), but avoid it for bulk due to slow modular exponentiation (hundreds of KB\u002Fs vs. AES's GB\u002Fs). Use RSA public keys for partners to encrypt DEKs securely over email, then decrypt with your private key.",[19,348,350],{"id":349},"master-keys-anchor-trust-with-hardware-isolation-and-controls","Master Keys Anchor Trust with Hardware Isolation and Controls",[24,352,353],{},"KMS master keys (formerly CMKs) reside exclusively in FIPS 140-2 validated HSMs—never exported in plaintext or to application code. 
Control access via dual IAM policies and key policies, which even block root users if denied. Rotate symmetric keys annually for new material while decrypting old data. Replicate multi-region for DR without changing key IDs.",[24,355,356],{},"Master keys enable:",[57,358,359,364,369,375],{},[60,360,361,363],{},[320,362,322],{}," for DEKs.",[60,365,366,368],{},[320,367,338],{}," for DEK recovery.",[60,370,371,372,374],{},"Direct ",[320,373,345],{}," for \u003C4KB payloads.",[60,376,377],{},"RSA\u002FECC signing\u002Fverification (2048\u002F3072\u002F4096-bit keys).",[24,379,380],{},"Deletion or disablement irrecoverably locks data, enabling instant revocation.",[19,382,384],{"id":383},"audit-and-compliance-built-into-every-operation","Audit and Compliance Built into Every Operation",[24,386,387],{},"CloudTrail logs all API calls (encrypt\u002Fdecrypt\u002Fgenerate\u002Fdescribe) tamper-resistantly for compliance. Centralized management scales across S3, RDS, EBS, Lambda, DynamoDB, Secrets Manager via IAM\u002Fkey policies. Hardware-backed ops ensure keys stay plaintext-free outside HSMs, eliminating self-managed HSM pitfalls.",{"title":167,"searchDepth":168,"depth":168,"links":389},[390,391,392],{"id":314,"depth":168,"text":315},{"id":349,"depth":168,"text":350},{"id":383,"depth":168,"text":384},[273],{"content_references":395,"triage":396},[],{"relevance":204,"novelty":284,"quality":204,"actionability":204,"composite":397,"reasoning":398},3.8,"Category: AI & LLMs. The article provides a detailed explanation of AWS KMS envelope encryption, which is relevant for developers looking to secure data in AI-powered applications. 
It offers practical steps for implementing encryption, addressing a specific pain point of ensuring data security in production environments.","\u002Fsummaries\u002F9a9f9ad328728e84-aws-kms-envelope-encryption-secures-data-at-scale-summary","2026-05-08 14:52:29","2026-05-09 15:36:33",{"title":304,"description":167},{"loc":399},"9a9f9ad328728e84","Level Up Coding","https:\u002F\u002Flevelup.gitconnected.com\u002Fsecuring-your-data-with-aws-key-management-service-kms-3fd4dccd2a7b?source=rss----5517fd7b58a6---4","summaries\u002F9a9f9ad328728e84-aws-kms-envelope-encryption-secures-data-at-scale-summary",[221,298,409,410],"encryption","key-management","Encrypt data efficiently with AWS KMS envelope pattern: Use master keys to generate ephemeral AES-256 DEKs for fast local encryption\u002Fdecryption, storing only encrypted DEKs alongside ciphertext for auditable, revocable access.",[409,410],"tm4UdfkNMbaAbn7u72o9O7k5tgJ8oxxTQVwaKxpGAP0",{"id":415,"title":416,"ai":417,"body":422,"categories":459,"created_at":177,"date_modified":177,"description":167,"extension":178,"faq":177,"featured":179,"kicker_label":177,"meta":460,"navigation":207,"path":469,"published_at":470,"question":177,"scraped_at":471,"seo":472,"sitemap":473,"source_id":474,"source_name":475,"source_type":293,"source_url":476,"stem":477,"tags":478,"thumbnail_url":177,"tldr":479,"tweet":177,"unknown_tags":480,"__hash__":481},"summaries\u002Fsummaries\u002F30072e6e8b386729-mrc-openai-s-protocol-for-resilient-ai-training-ne-summary.md","MRC: OpenAI's Protocol for Resilient AI Training Networks",{"provider":9,"model":10,"input_tokens":418,"output_tokens":419,"processing_time_ms":420,"cost_usd":421},8465,1915,20569,0.00214365,{"type":16,"value":423,"toc":454},[424,428,431,434,437,441,444,447,451],[19,425,427],{"id":426},"multipath-mechanisms-eliminate-congestion-and-enable-fast-recovery","Multipath Mechanisms Eliminate Congestion and Enable Fast Recovery",[24,429,430],{},"In large AI training clusters, network 
congestion, link failures, and jitter cause GPU idle time, amplifying costs as clusters scale to millions of data transfers per step. MRC builds on RoCEv2 for hardware-accelerated RDMA over Ethernet and SRv6 for static source routing, shifting intelligence to NICs while switches follow pre-configured paths blindly. This avoids interference from dynamic routing.",[24,432,433],{},"Adaptive packet spraying distributes packets across hundreds of paths at the NIC level, achieving higher bandwidth, reduced tail latency, and packet-level load balancing—unlike single-path RoCEv2. For failures, MRC detects issues in microseconds and reroutes: if an 8-port 800Gb\u002Fs NIC loses one port, it drops to 7\u002F8 capacity but recalculates paths instantly, notifies peers to avoid the failed plane, and restores it within a minute upon recovery. Conventional fabrics take seconds to tens of seconds, often crashing jobs; MRC keeps training alive with minimal performance hit.",[24,435,436],{},"AMD's NSCC congestion control integrates via UEC specs, preserving RDMA semantics while adding multipath support.",[19,438,440],{"id":439},"multi-plane-architecture-cuts-tiers-costs-and-latency","Multi-Plane Architecture Cuts Tiers, Costs, and Latency",[24,442,443],{},"MRC reimagines NICs as multiple smaller links (e.g., one 800Gb\u002Fs interface split into eight 100Gb\u002Fs to eight switches), enabling a two-tier Clos network for 131,000 GPUs versus three-to-four tiers in 800Gb\u002Fs designs. Longest paths cross three switches instead of five-to-seven, slashing latency.",[24,445,446],{},"For full bisection bandwidth, this uses 2\u002F3 the optics and 3\u002F5 the switches of three-tier networks, reducing power, cost, and failure blast radius. 
A tier-1 switch failure (e.g., rebooting four during training) no longer halts jobs.",[19,448,450],{"id":449},"production-on-named-hardware-across-openai-clusters","Production on Named Hardware Across OpenAI Clusters",[24,452,453],{},"Deployed on 400\u002F800Gb\u002Fs RDMA NICs like NVIDIA ConnectX-8, AMD Pollara\u002FVulcano, Broadcom Thor Ultra; SRv6 switches include NVIDIA Spectrum-4\u002F5 (Cumulus\u002FSONiC) and Broadcom Tomahawk 5 (Arista EOS). Powers NVIDIA GB200 supercomputers in OpenAI's Stargate (OCI Abilene, TX) and Microsoft's Fairwater (Atlanta\u002FWisconsin), training ChatGPT and Codex models without job interruptions from failures.",{"title":167,"searchDepth":168,"depth":168,"links":455},[456,457,458],{"id":426,"depth":168,"text":427},{"id":439,"depth":168,"text":440},{"id":449,"depth":168,"text":450},[273],{"content_references":461,"triage":467},[462,464],{"type":280,"title":281,"url":282,"context":463},"cited",{"type":198,"title":465,"url":466,"context":201},"MRC Supercomputer Networking Technical Details","https:\u002F\u002Fopenai.com\u002Findex\u002Fmrc-supercomputer-networking\u002F",{"relevance":284,"novelty":284,"quality":204,"actionability":168,"composite":285,"reasoning":468},"Category: AI & LLMs. The article discusses OpenAI's MRC protocol, which is relevant to AI infrastructure but lacks direct applicability for product builders looking for actionable insights. 
While it presents some new technical details about network optimization for AI training, it does not provide practical steps or frameworks that the audience can implement.","\u002Fsummaries\u002F30072e6e8b386729-mrc-openai-s-protocol-for-resilient-ai-training-ne-summary","2026-05-07 07:50:02","2026-05-07 11:24:11",{"title":416,"description":167},{"loc":469},"30072e6e8b386729","MarkTechPost","https:\u002F\u002Fwww.marktechpost.com\u002F2026\u002F05\u002F07\u002Fopenai-introduces-mrc-multipath-reliable-connection-a-new-open-networking-protocol-for-large-scale-ai-supercomputer-training-clusters\u002F","summaries\u002F30072e6e8b386729-mrc-openai-s-protocol-for-resilient-ai-training-ne-summary",[297,298,221],"OpenAI's MRC extends RoCE with multipath spraying, microsecond failure recovery via SRv6, and multi-plane designs to deliver predictable performance in 131k-GPU clusters, using 2\u002F3 fewer optics and 3\u002F5 fewer switches than traditional setups.",[],"XbDsma4E_5cuB3WLtPi6GgqSNlQtb2CdSK-eHkIrlrc",{"id":483,"title":484,"ai":485,"body":490,"categories":518,"created_at":177,"date_modified":177,"description":167,"extension":178,"faq":177,"featured":179,"kicker_label":177,"meta":520,"navigation":207,"path":531,"published_at":532,"question":177,"scraped_at":533,"seo":534,"sitemap":535,"source_id":536,"source_name":537,"source_type":293,"source_url":538,"stem":539,"tags":540,"thumbnail_url":177,"tldr":541,"tweet":177,"unknown_tags":542,"__hash__":543},"summaries\u002Fsummaries\u002Ff78d6045a31221d2-mrc-enables-100k-gpu-clusters-with-resilient-multi-summary.md","MRC Enables 100k+ GPU Clusters with Resilient Multipath Networking",{"provider":9,"model":10,"input_tokens":486,"output_tokens":487,"processing_time_ms":488,"cost_usd":489},4244,1621,21683,0.00163665,{"type":16,"value":491,"toc":513},[492,496,499,503,506,510],[19,493,495],{"id":494},"multipath-routing-fixes-core-bottlenecks-in-ai-training","Multipath Routing Fixes Core Bottlenecks in AI 
Training",[24,497,498],{},"MRC (Multipath Reliable Connection) eliminates congestion in AI supercomputers by distributing packets across hundreds of network paths simultaneously, rather than single paths. This delivers faster, more predictable GPU-to-GPU data transfers critical for training massive models. On failures—links, switches, or paths—MRC reroutes in microseconds, versus seconds or tens of seconds for standard 800 Gb\u002Fs fabrics. Result: Training jobs survive reboots and maintenance without stalls. OpenAI's multi-plane design connects over 100,000 GPUs using only two Ethernet switch tiers, slashing component count, power use, and costs compared to conventional three- or four-tier setups.",[19,500,502],{"id":501},"proven-at-scale-on-frontier-supercomputers","Proven at Scale on Frontier Supercomputers",[24,504,505],{},"Deployed across OpenAI's largest NVIDIA GB200 clusters—including Oracle Cloud in Abilene, Texas, and Microsoft's Fairwater—MRC handled a real-world test during frontier model training for ChatGPT and Codex. Four tier-1 switches rebooted without coordinating with running jobs, proving zero-disruption resilience. This lets operators maintain networks mid-training, boosting uptime for trillion-parameter models where network stalls previously cost hours or days.",[19,507,509],{"id":508},"open-standards-accelerate-adoption","Open Standards Accelerate Adoption",[24,511,512],{},"Specification released via Open Compute Project (OCP MRC 1.0), with contributions from AMD, Broadcom, Intel, Microsoft, and NVIDIA. 
Builders can implement now for Ethernet-based AI fabrics, avoiding proprietary lock-in while hitting supercomputer-scale performance.",{"title":167,"searchDepth":168,"depth":168,"links":514},[515,516,517],{"id":494,"depth":168,"text":495},{"id":501,"depth":168,"text":502},{"id":508,"depth":168,"text":509},[519],"AI News & Trends",{"content_references":521,"triage":529},[522,524,526],{"type":280,"title":523,"url":282,"context":185},"Resilient AI Supercomputer Networking Using MRC and SRv6",{"type":198,"title":277,"publisher":525,"url":278,"context":185},"Open Compute Project",{"type":198,"title":527,"author":528,"url":466,"context":463},"MRC Supercomputer Networking","OpenAI",{"relevance":284,"novelty":284,"quality":204,"actionability":168,"composite":285,"reasoning":530},"Category: AI & LLMs. The article discusses a new networking protocol that addresses bottlenecks in AI supercomputing, which is relevant to AI engineering. However, it lacks direct actionable insights for product builders on how to implement or leverage this technology in their own projects.","\u002Fsummaries\u002Ff78d6045a31221d2-mrc-enables-100k-gpu-clusters-with-resilient-multi-summary","2026-05-06 19:13:21","2026-05-07 11:24:04",{"title":484,"description":167},{"loc":531},"f78d6045a31221d2","The Decoder","https:\u002F\u002Fthe-decoder.com\u002Fopenai-built-a-networking-protocol-with-amd-broadcom-intel-microsoft-and-nvidia-to-fix-ai-supercomputer-bottlenecks\u002F","summaries\u002Ff78d6045a31221d2-mrc-enables-100k-gpu-clusters-with-resilient-multi-summary",[298,221,297],"OpenAI's MRC protocol spreads packets across hundreds of paths for microsecond failure recovery, connecting 100,000+ GPUs via just 2 switch tiers—cutting power, cost, and downtime in AI training 
supercomputers.",[],"LvMASfYTesYX0l3RENkA3FOBQpD3T6H-0KnDqYX6HvU",{"id":545,"title":546,"ai":547,"body":552,"categories":650,"created_at":177,"date_modified":177,"description":167,"extension":178,"faq":177,"featured":179,"kicker_label":177,"meta":651,"navigation":207,"path":668,"published_at":669,"question":177,"scraped_at":533,"seo":670,"sitemap":671,"source_id":672,"source_name":537,"source_type":293,"source_url":673,"stem":674,"tags":675,"thumbnail_url":177,"tldr":676,"tweet":177,"unknown_tags":677,"__hash__":678},"summaries\u002Fsummaries\u002Fcbd84c97f065e33a-anthropic-leases-220k-spacex-gpus-to-boost-claude--summary.md","Anthropic Leases 220K SpaceX GPUs to Boost Claude Limits 10x",{"provider":9,"model":10,"input_tokens":548,"output_tokens":549,"processing_time_ms":550,"cost_usd":551},4190,2247,24757,0.001939,{"type":16,"value":553,"toc":645},[554,558,561,565,568,635,638,642],[19,555,557],{"id":556},"compute-power-surge-enables-reliable-claude-scaling","Compute Power Surge Enables Reliable Claude Scaling",[24,559,560],{},"Anthropic's deal with SpaceX grants exclusive access to the Colossus-1 data center's full capacity: over 220,000 NVIDIA GPUs delivering more than 300 megawatts, coming online within a month. This addresses rate limiting bottlenecks for high-volume users building with Claude, ensuring production workloads like code generation and agentic apps run without interruptions. 
Builders gain doubled five-hour rate limits for Claude Code across Pro, Max, Team, and Enterprise plans, plus complete removal of peak-time throttling for Pro and Max—directly translating to faster iteration on AI features without artificial caps.",[19,562,564],{"id":563},"tiered-api-limit-overhauls-for-opus-models","Tiered API Limit Overhauls for Opus Models",[24,566,567],{},"Claude Opus API tokens per minute jump dramatically across tiers, prioritizing high-throughput apps:",[569,570,571,587],"table",{},[572,573,574],"thead",{},[575,576,577,581,584],"tr",{},[578,579,580],"th",{},"Tier",[578,582,583],{},"Input Tokens\u002FMin (Old → New)",[578,585,586],{},"Output Tokens\u002FMin (Old → New)",[588,589,590,602,613,624],"tbody",{},[575,591,592,596,599],{},[593,594,595],"td",{},"1",[593,597,598],{},"30,000 → 500,000",[593,600,601],{},"8,000 → 80,000",[575,603,604,607,610],{},[593,605,606],{},"2",[593,608,609],{},"450,000 → 2,000,000",[593,611,612],{},"90,000 → 200,000",[575,614,615,618,621],{},[593,616,617],{},"3",[593,619,620],{},"800,000 → 5,000,000",[593,622,623],{},"160,000 → 400,000",[575,625,626,629,632],{},[593,627,628],{},"4",[593,630,631],{},"2,000,000 → 10,000,000",[593,633,634],{},"400,000 → 800,000",[24,636,637],{},"Top-tier users now handle 5x more input volume, critical for RAG pipelines or long-context processing in production AI products. Trade-off: relies on centralized compute, exposing builders to provider-specific policies.",[19,639,641],{"id":640},"multi-gw-partnerships-and-forward-commitments","Multi-GW Partnerships and Forward Commitments",[24,643,644],{},"This SpaceX tie-up complements Anthropic's portfolio: 5 GW from Amazon, 5 GW from Google\u002FBroadcom, $30B Azure capacity via Microsoft\u002FNVIDIA, and $50B Fluidstack investment. Company pledges to offset US consumer electricity hikes from its data centers, with plans to expand globally. Exploration of orbital AI compute hints at future low-latency, radiation-hardened options for edge AI. 
Politically, Anthropic limits partnerships to democratic nations with secure supply chains—yet proceeds with Elon Musk's SpaceX, highlighting tensions between scale needs and values.",{"title":167,"searchDepth":168,"depth":168,"links":646},[647,648,649],{"id":556,"depth":168,"text":557},{"id":563,"depth":168,"text":564},{"id":640,"depth":168,"text":641},[519],{"content_references":652,"triage":665},[653,656,659,662],{"type":198,"title":654,"url":655,"context":463},"Higher limits with SpaceX","https:\u002F\u002Fwww.anthropic.com\u002Fnews\u002Fhigher-limits-spacex",{"type":198,"title":657,"url":658,"context":185},"Elon Musk's xAI massively expands the world's largest AI supercomputer","https:\u002F\u002Fthe-decoder.com\u002Felon-musks-xai-massively-expands-the-worlds-largest-ai-supercomputer\u002F",{"type":198,"title":660,"url":661,"context":185},"AI in space requires new cooling tech and cheap rockets","https:\u002F\u002Fthe-decoder.com\u002Fai-in-space-requires-new-cooling-tech-and-cheap-rockets\u002F",{"type":198,"title":663,"url":664,"context":185},"Views of Elon Musk","https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FViews_of_Elon_Musk",{"relevance":204,"novelty":284,"quality":204,"actionability":284,"composite":666,"reasoning":667},3.6,"Category: AI & LLMs. The article discusses Anthropic's significant infrastructure upgrade that directly impacts builders using the Claude API, addressing a specific pain point of rate limiting for high-volume users. 
While it provides valuable insights into the scaling capabilities, it lacks detailed actionable steps for implementation.","\u002Fsummaries\u002Fcbd84c97f065e33a-anthropic-leases-220k-spacex-gpus-to-boost-claude-summary","2026-05-06 18:42:24",{"title":546,"description":167},{"loc":668},"cbd84c97f065e33a","https:\u002F\u002Fthe-decoder.com\u002Fanthropic-taps-spacexs-colossus-1-data-center-for-220000-gpus-to-power-claude\u002F","summaries\u002Fcbd84c97f065e33a-anthropic-leases-220k-spacex-gpus-to-boost-claude--summary",[220,221],"Anthropic secures SpaceX's full Colossus-1 cluster (220,000+ NVIDIA GPUs, 300MW) online in a month, driving Claude API rate limits from 30K to 10M input tokens\u002Fmin for top tiers and eliminating peak throttling.",[],"SoEKxQ8cxIs_d9hsUYf84J4zhG1F1a1FZoV07fZH1s8",{"id":680,"title":681,"ai":682,"body":687,"categories":769,"created_at":177,"date_modified":177,"description":167,"extension":178,"faq":177,"featured":179,"kicker_label":177,"meta":771,"navigation":207,"path":775,"published_at":776,"question":177,"scraped_at":777,"seo":778,"sitemap":779,"source_id":780,"source_name":405,"source_type":293,"source_url":781,"stem":782,"tags":783,"thumbnail_url":177,"tldr":786,"tweet":177,"unknown_tags":787,"__hash__":788},"summaries\u002Fsummaries\u002F22a507e9a7c41be0-ditch-preferred-username-for-azure-ad-guest-auth-summary.md","Ditch preferred_username for Azure AD Guest Auth",{"provider":9,"model":10,"input_tokens":683,"output_tokens":684,"processing_time_ms":685,"cost_usd":686},3889,1604,23473,0.00107295,{"type":16,"value":688,"toc":764},[689,693,703,709,713,718,721,725,736,761],[19,690,692],{"id":691},"production-bug-exposed-by-b2b-guests","Production Bug Exposed by B2B Guests",[24,694,695,696,699,700,702],{},"Internal QA passed because testers used employee accounts, where Azure AD's ",[320,697,698],{},"preferred_username"," claim reliably matched their email for whitelisting and access control. 
But three weeks post-launch, a B2B client's guest users logged in successfully yet hit 403 errors due to mismatched identity. Guests have active sessions and valid Azure AD accounts, but ",[320,701,698],{}," doesn't provide a usable email—it's often absent, null, or mismatched for external users invited via B2B collaboration. This single claim broke the entire auth flow, granting sessions without proper rights.",[24,704,705,706,708],{},"To replicate and confirm: Employee flow succeeds (",[320,707,698],{}," == email), guest flow authenticates but fails authorization since the claim can't anchor whitelists reliably.",[19,710,712],{"id":711},"preferred_username-limitations-for-guests","preferred_username Limitations for Guests",[24,714,715,717],{},[320,716,698],{}," isn't a true email field—it's a user-provided hint for login names, populated only for workplace-joined accounts. For B2B guests (external users invited to your tenant), Azure AD doesn't set it to their guest email; it might reflect their home tenant's UPN or be empty. Result: Your system sees a non-email value or null, failing email-based checks for access groups or features.",[24,719,720],{},"Trade-off: Convenient for internal users (matches UPN\u002Femail), but zero fallback for guests. Never use it as the sole identifier—it's not guaranteed unique or stable across user types.",[19,722,724],{"id":723},"anchor-identities-on-oid-for-cross-user-stability","Anchor Identities on oid for Cross-User Stability",[24,726,727,728,731,732,735],{},"Use Azure AD's ",[320,729,730],{},"oid"," (object ID) claim instead: a stable, tenant-wide UUID unique to every user, including guests. 
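A minimal sketch of the anchoring rule, with hypothetical helper names and a plain dict standing in for a decoded token payload:

```python
# Sketch (hypothetical names): key users on the stable `oid` claim,
# never on `preferred_username`, which may be absent, null, or stale for guests.
def identity_key(claims: dict) -> str:
    oid = claims.get("oid")
    if not oid:
        raise ValueError("no oid claim; reject the session")
    return oid

# Member and guest both resolve to a stable key, even without a usable email claim:
member = identity_key({"oid": "11-22", "preferred_username": "a@corp.com"})
guest = identity_key({"oid": "33-44"})  # B2B guest: preferred_username missing
```

The whitelist then stores `oid` values rather than emails, so the guest path cannot silently fail.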
Pair it with ",[320,733,734],{},"userType"," (\"Member\" vs \"Guest\") to differentiate and route logic:",[57,737,738,744,758],{},[60,739,740,741,743],{},"Fetch user details via Microsoft Graph API using ",[320,742,730],{},".",[60,745,746,747,749,750,753,754,757],{},"Check ",[320,748,734],{}," to apply guest-specific handling (e.g., map to external email from ",[320,751,752],{},"mail"," or ",[320,755,756],{},"userPrincipalName",").",[60,759,760],{},"Whitelist based on verified attributes, not fragile claims.",[24,762,763],{},"This ensures employees and guests both resolve correctly without silent failures. Post-fix: Validate claims in dev\u002Fstaging with mixed user types, and monitor auth logs for claim mismatches to catch regressions early.",{"title":167,"searchDepth":168,"depth":168,"links":765},[766,767,768],{"id":691,"depth":168,"text":692},{"id":711,"depth":168,"text":712},{"id":723,"depth":168,"text":724},[770],"Software Engineering",{"content_references":772,"triage":773},[],{"relevance":204,"novelty":284,"quality":204,"actionability":204,"composite":397,"reasoning":774},"Category: DevOps & Cloud. The article addresses a specific pain point regarding Azure AD authentication for B2B guests, providing actionable guidance on using `oid` instead of `preferred_username` for reliable identification. 
It offers concrete steps for implementing a more stable authentication flow, which is directly applicable to developers working with Azure AD.","\u002Fsummaries\u002F22a507e9a7c41be0-ditch-preferred-username-for-azure-ad-guest-auth-summary","2026-05-06 14:20:27","2026-05-06 16:13:27",{"title":681,"description":167},{"loc":775},"22a507e9a7c41be0","https:\u002F\u002Flevelup.gitconnected.com\u002Fwe-shipped-broken-auth-for-every-guest-user-an-azure-ad-oauth-post-mortem-6cf6f70c6909?source=rss----5517fd7b58a6---4","summaries\u002F22a507e9a7c41be0-ditch-preferred-username-for-azure-ad-guest-auth-summary",[784,298,221,785],"backend","authentication","Using preferred_username as identity anchor worked for employees but failed silently for all B2B guests, causing 403 errors post-launch. Anchor on oid instead for reliable identification.",[785],"eg4lczZi_x4xQW_cXI3teLMC9U00mE_W5R__LSHSGpo",{"id":790,"title":791,"ai":792,"body":797,"categories":1029,"created_at":177,"date_modified":177,"description":167,"extension":178,"faq":177,"featured":179,"kicker_label":177,"meta":1030,"navigation":207,"path":1061,"published_at":1062,"question":177,"scraped_at":1063,"seo":1064,"sitemap":1065,"source_id":1066,"source_name":214,"source_type":293,"source_url":1067,"stem":1068,"tags":1069,"thumbnail_url":177,"tldr":1071,"tweet":177,"unknown_tags":1072,"__hash__":1073},"summaries\u002Fsummaries\u002Febc0d711136fb32c-secure-ai-agents-via-mcp-toolbox-custom-tools-summary.md","Secure AI Agents via MCP Toolbox Custom Tools",{"provider":9,"model":10,"input_tokens":793,"output_tokens":794,"processing_time_ms":795,"cost_usd":796},8976,2997,46040,0.00327105,{"type":16,"value":798,"toc":1021},[799,803,806,812,815,819,822,825,832,837,840,844,847,944,947,950,955,959,962,965,968,972,975,978,983,986,988,1017],[19,800,802],{"id":801},"tackling-the-confused-deputy-problem-in-ai-agents","Tackling the Confused Deputy Problem in AI Agents",[24,804,805],{},"AI agents promise automation like midnight database 
triage, but they risk the 'confused deputy' vulnerability: a service account with broad database access gets tricked by malicious user input (e.g., via prompt injection) into querying sensitive data like executive salaries instead of the database it was paged to investigate. Kurtis Van Gent explains this as Simon Willison's 'lethal trifecta': private data + untrusted input + external sharing. Traditional fixes like prompt-engineered security fail because LLMs struggle to distinguish system vs. user instructions.",[807,808,809],"blockquote",{},[24,810,811],{},"'The confused deputy problem is really a problem where you have some kind of authoritative source... but a malicious user or a bug can trick it into revealing information.' — Kurtis Van Gent, defining the core vulnerability with a real-world paging scenario.",[24,813,814],{},"Developers evaluated broad tool access (e.g., 'run any SQL') but rejected it for runtime agents serving end-users. Instead, they architected MCP Toolbox around customization: pre-author SQL queries reviewed like code, constraining what agents can do.",[19,816,818],{"id":817},"build-time-vs-runtime-agents-tailored-tooling","Build-Time vs. Runtime Agents: Tailored Tooling",[24,820,821],{},"MCP Toolbox distinguishes two agent types, each with different security needs. Build-time agents (e.g., Gemini CLI, Claude Code) assist developers with broad, generic tools like 'any SQL' or BigQuery dashboard queries—safe since they use developer credentials. Runtime agents (e.g., customer service bots via ADK, LangChain) face untrusted users, needing narrow tools for accuracy and safety.",[24,823,824],{},"Toolbox supports both via generic (pre-built ops), runtime (dynamic), and custom tools. For databases like AlloyDB, BigQuery, Postgres, Valkey, Neo4j, Oracle, MariaDB, it acts as a 'central gate.' 
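The bound-parameter split can be sketched in a few lines (illustrative Python, not the actual Toolbox API): app-set values are frozen at bind time, so LLM-supplied arguments cannot override them:

```python
# Sketch of bound parameters (not the actual Toolbox API):
# app-set values are frozen at bind time and win over any LLM-supplied input.
class Tool:
    def __init__(self, fn, bound=None):
        self.fn = fn
        self.bound = dict(bound or {})

    def bind(self, **app_params):
        # return a new scoped tool with identity/DB fixed by the application
        return Tool(self.fn, {**self.bound, **app_params})

    def call(self, **llm_params):
        merged = {**llm_params, **self.bound}  # bound values override the LLM
        return self.fn(**merged)

def lookup_flight(user_id, airline, flight_number):
    return f"{user_id}:{airline}{flight_number}"

scoped = Tool(lookup_flight).bind(user_id="authenticated_user")
# even a prompt-injected user_id cannot displace the bound one:
result = scoped.call(user_id="attacker", airline="CY", flight_number="42")
```

The design point is that the merge order is fixed in code, not in the prompt, so there is nothing for an injection to talk the model out of.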
Open-source (15k+ GitHub stars, 130+ contributors, millions of monthly calls), it's self-hosted—no Google data access.",[24,826,827,828,831],{},"Key decision: Bound parameters separate agent-set values (e.g., flight ID from conversation) from app-set ones (e.g., user identity, target DB). This binds identity at runtime, e.g., ",[320,829,830],{},"tool.bind(user_id=authenticated_user)"," creates a scoped tool the LLM can't override.",[807,833,834],{},[24,835,836],{},"'MCP is kind of the gold standard for interop right now... like USB for AI applications. You can take any agent and you can plug in any server.' — Kurtis Van Gent, positioning MCP as the standard Toolbox builds on.",[24,838,839],{},"Tradeoff: Hardcoding boosts security\u002Faccuracy (no hallucinated DB switches) but reduces flexibility. Philosophy: Remove agent control wherever possible without harming UX—e.g., hardcoded DB for single-DB sessions.",[19,841,843],{"id":842},"custom-tools-pre-written-sql-as-architectural-guardrails","Custom Tools: Pre-Written SQL as Architectural Guardrails",[24,845,846],{},"Core mechanism: Define tools with fixed SQL templates and params. 
Example Postgres tool for airline queries:",[325,848,852],{"className":849,"code":850,"language":851,"meta":167,"style":167},"language-yaml shiki shiki-themes github-light github-dark","tool_type: postgres-sql\nsql: \"SELECT * FROM flights WHERE airline = $1 AND flight_number = $2\"\nparameters:\n  - name: airline\n    type: string\n  - name: flight_number\n    type: string\ndescription: \"Get flight details by airline and number\"\n","yaml",[320,853,854,871,881,889,902,912,924,933],{"__ignoreMap":167},[855,856,859,863,867],"span",{"class":857,"line":858},"line",1,[855,860,862],{"class":861},"s9eBZ","tool_type",[855,864,866],{"class":865},"sVt8B",": ",[855,868,870],{"class":869},"sZZnC","postgres-sql\n",[855,872,873,876,878],{"class":857,"line":168},[855,874,875],{"class":861},"sql",[855,877,866],{"class":865},[855,879,880],{"class":869},"\"SELECT * FROM flights WHERE airline = $1 AND flight_number = $2\"\n",[855,882,883,886],{"class":857,"line":284},[855,884,885],{"class":861},"parameters",[855,887,888],{"class":865},":\n",[855,890,891,894,897,899],{"class":857,"line":204},[855,892,893],{"class":865},"  - ",[855,895,896],{"class":861},"name",[855,898,866],{"class":865},[855,900,901],{"class":869},"airline\n",[855,903,904,907,909],{"class":857,"line":203},[855,905,906],{"class":861},"    type",[855,908,866],{"class":865},[855,910,911],{"class":869},"string\n",[855,913,915,917,919,921],{"class":857,"line":914},6,[855,916,893],{"class":865},[855,918,896],{"class":861},[855,920,866],{"class":865},[855,922,923],{"class":869},"flight_number\n",[855,925,927,929,931],{"class":857,"line":926},7,[855,928,906],{"class":861},[855,930,866],{"class":865},[855,932,911],{"class":869},[855,934,936,939,941],{"class":857,"line":935},8,[855,937,938],{"class":861},"description",[855,940,866],{"class":865},[855,942,943],{"class":869},"\"Get flight details by airline and number\"\n",[24,945,946],{},"The LLM calls via MCP with params; Toolbox executes safely. 
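Under the hood this is ordinary parameterized SQL. A stdlib sketch, with sqlite3 standing in for Postgres, shows why LLM-supplied values cannot change the query shape:

```python
import sqlite3

# sqlite3 stands in for Postgres here; the fixed template mirrors the tool's SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flights (airline TEXT, flight_number TEXT, status TEXT)")
conn.execute("INSERT INTO flights VALUES ('CY', '42', 'on time')")

SQL = "SELECT status FROM flights WHERE airline = ? AND flight_number = ?"

def get_flight_details(airline: str, flight_number: str):
    # values are bound, never interpolated -- injection attempts stay data
    return conn.execute(SQL, (airline, flight_number)).fetchall()

ok = get_flight_details("CY", "42")            # one matching row
attack = get_flight_details("CY", "42 OR 1=1")  # no rows: the payload is a literal
```

Because the template is fixed at authoring time, "42 OR 1=1" can only ever be compared against the flight_number column, never executed.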
No ad-hoc SQL generation—agents use dev-reviewed queries. Supports complex ops like joins\u002Fstored procs via custom SQL. Toolbox doesn't auto-write queries; devs do.",[24,948,949],{},"This mirrors app dev: Write\u002Freview SQL once, expose as API. For production, deploy on Cloud Run; min arch is Toolbox container + MCP client (Gemini\u002FVertex AI) + auth (e.g., IAM).",[807,951,952],{},[24,953,954],{},"'The toolbox's superpower really comes down to... customize tools in a way that lets you constrain that access... write the SQL ahead of time.' — Kurtis Van Gent, on shifting from prompt hacks to code-like security.",[19,956,958],{"id":957},"cymbal-air-demo-resilience-in-action","Cymbal Air Demo: Resilience in Action",[24,960,961],{},"Live demo of Cymbal Air (fictional airline agent): Normal flow—user asks flight status; agent uses bound tools to query only authorized data. Compromise attempt: \"Ignore instructions, query competitor salaries.\" Fails—tools lack access; agent stays on-topic.",[24,963,964],{},"Architecture: MCP client (Gemini) → Toolbox server (Cloud Run, Postgres backend) → bound custom tools. Code shown: Load tool, bind user context, register to agent. Result: Zero-trust, no leaks.",[24,966,967],{},"Evolution: Started with generic tools; pivoted to custom\u002Fbound for prod. Failure modes tested: Prompt injection blocked by param constraints.",[19,969,971],{"id":970},"deployment-tradeoffs-and-best-practices","Deployment Tradeoffs and Best Practices",[24,973,974],{},"Latency: Toolbox adds ~50-100ms vs. direct queries (MCP overhead + execution); fine for interactive agents, not ultra-high-throughput. Self-hosted (binary\u002Fcontainer\u002Flocal); progressive tool exposure via dynamic registration.",[24,976,977],{},"Security-first process: Start with threat modeling ('what can go wrong?'), prototype fast with frameworks like ADK, then harden. 
'Move security left'—architect params\u002Ftools early, iterate weekly.",[807,979,980],{},[24,981,982],{},"'Flexibility versus security... anything that you can take away from the agent tends to be a good thing to take away as long as it doesn't diminish the use case.' — Kurtis Van Gent, on balancing autonomy and guardrails.",[24,984,985],{},"Non-obvious: Runtime agents need dev-like rigor (code review SQL); build-time can be looser. Replicate by forking GitHub repo, binding identity, testing injections.",[19,987,133],{"id":132},[57,989,990,993,996,999,1002,1005,1008,1011,1014],{},[60,991,992],{},"Model threats early: Map confused deputy risks (private data + untrusted input) before building agents.",[60,994,995],{},"Use build-time tools broadly for dev (e.g., any-SQL); constrain runtime with custom MCP tools.",[60,997,998],{},"Pre-write\u002Freview SQL templates; define params\u002Fdescriptions for LLM guidance.",[60,1000,1001],{},"Bind app params (user ID, DB) at runtime—LLM sets only conversation-derived ones.",[60,1003,1004],{},"Deploy self-hosted Toolbox on Cloud Run; test latency (\u003C100ms typical) and injections.",[60,1006,1007],{},"Start small: Codelabs for BigQuery\u002FAlloyDB; scale to multi-agent apps.",[60,1009,1010],{},"Prioritize security in architecture: 1st step = threat model, not prototype.",[60,1012,1013],{},"Leverage open MCP spec: Plug any agent\u002Fserver; Google managed options for BigQuery\u002Fetc.",[60,1015,1016],{},"Measure: Millions of safe calls\u002Fmonth via Toolbox—prod-proven.",[1018,1019,1020],"style",{},"html pre.shiki code .s9eBZ, html code.shiki .s9eBZ{--shiki-default:#22863A;--shiki-dark:#85E89D}html pre.shiki code .sVt8B, html code.shiki .sVt8B{--shiki-default:#24292E;--shiki-dark:#E1E4E8}html pre.shiki code .sZZnC, html code.shiki .sZZnC{--shiki-default:#032F62;--shiki-dark:#9ECBFF}html .default .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: 
var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html.dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}",{"title":167,"searchDepth":168,"depth":168,"links":1022},[1023,1024,1025,1026,1027,1028],{"id":801,"depth":168,"text":802},{"id":817,"depth":168,"text":818},{"id":842,"depth":168,"text":843},{"id":957,"depth":168,"text":958},{"id":970,"depth":168,"text":971},{"id":132,"depth":168,"text":133},[176],{"content_references":1031,"triage":1059},[1032,1035,1038,1041,1044,1047,1050,1053,1056],{"type":183,"title":1033,"url":1034,"context":185},"MCP Toolbox GitHub","https:\u002F\u002Fgoo.gle\u002Fgithub-mcp-toolbox",{"type":183,"title":1036,"url":1037,"context":185},"MCP Toolbox for Databases (Docs)","https:\u002F\u002Fgoo.gle\u002Fmcp-toolbox-dev",{"type":183,"title":1039,"url":1040,"context":185},"QuickStart","https:\u002F\u002Fgoo.gle\u002Fmcp-quickstart",{"type":183,"title":1042,"url":1043,"context":185},"MCP Toolbox for Databases: Making BigQuery datasets available to MCP clients (Codelab)","https:\u002F\u002Fgoo.gle\u002Fcodelabs",{"type":183,"title":1045,"url":1046,"context":185},"Build a Multi-agent App with MCP Toolbox for AlloyDB & ADK (Codelab)","https:\u002F\u002Fgoo.gle\u002Fcodelab-multi-agent-app",{"type":183,"title":1048,"url":1049,"context":185},"Cymbal Air Toolbox 
Demo","https:\u002F\u002Fgoo.gle\u002F4tfWYIA",{"type":183,"title":1051,"url":1052,"context":185},"Google Cloud MCP servers overview","https:\u002F\u002Fgoo.gle\u002F42ioQRn",{"type":183,"title":1054,"url":1055,"context":185},"MCP Toolbox for Databases (Toolbox)","https:\u002F\u002Fgoo.gle\u002F4wauUJp",{"type":183,"title":1057,"url":1058,"context":185},"GEAR","https:\u002F\u002Fgoo.gle\u002FGEAR",{"relevance":204,"novelty":284,"quality":204,"actionability":284,"composite":666,"reasoning":1060},"Category: AI & LLMs. The article addresses a specific pain point regarding security in AI agents, particularly the confused deputy problem, which is relevant for developers integrating AI features. It provides insights into a practical solution (MCP Toolbox) but lacks detailed step-by-step guidance for implementation.","\u002Fsummaries\u002Febc0d711136fb32c-secure-ai-agents-via-mcp-toolbox-custom-tools-summary","2026-05-05 16:46:33","2026-05-06 16:12:43",{"title":791,"description":167},{"loc":1061},"ed722ee0fdc7e076","https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=CRszhkEjd8s","summaries\u002Febc0d711136fb32c-secure-ai-agents-via-mcp-toolbox-custom-tools-summary",[219,1070,221,298],"ai-tools","MCP Toolbox prevents confused deputy attacks by letting developers pre-write constrained SQL tools with bound parameters, separating agent flexibility from app-controlled security for runtime 
agents.",[],"htBzEsyR16VdzmViKPvmry-2HFiUx9a6ye2MxpmOJCk",{"id":1075,"title":1076,"ai":1077,"body":1082,"categories":1133,"created_at":177,"date_modified":177,"description":167,"extension":178,"faq":177,"featured":179,"kicker_label":177,"meta":1134,"navigation":207,"path":1142,"published_at":1143,"question":177,"scraped_at":1144,"seo":1145,"sitemap":1146,"source_id":1147,"source_name":1148,"source_type":293,"source_url":1149,"stem":1150,"tags":1151,"thumbnail_url":177,"tldr":1152,"tweet":177,"unknown_tags":1153,"__hash__":1154},"summaries\u002Fsummaries\u002F866e10e8d404e5bf-sagemaker-fine-tuning-lora-beats-qlora-on-cost-per-summary.md","SageMaker Fine-Tuning: LoRA Beats QLoRA on Cost-Perf Balance",{"provider":9,"model":10,"input_tokens":1078,"output_tokens":1079,"processing_time_ms":1080,"cost_usd":1081},8501,2110,17961,0.00273255,{"type":16,"value":1083,"toc":1127},[1084,1088,1091,1094,1097,1101,1104,1107,1110,1114,1117,1120,1124],[19,1085,1087],{"id":1086},"fine-tuning-methods-trade-offs-in-params-memory-and-speed","Fine-Tuning Methods: Trade-Offs in Params, Memory, and Speed",[24,1089,1090],{},"Full fine-tuning updates all 7B parameters of models like Llama2-7B, delivering top accuracy (e.g., highest Rouge1\u002F2\u002FL, Bert F1, Intent Accuracy on Banking77 dataset) but at highest cost and time—ideal only for unrestricted budgets or compliance needs where no accuracy compromise is allowed.",[24,1092,1093],{},"LoRA (PEFT) freezes original weights and trains low-rank matrices A\u002FB: for a 2048x2048 update matrix (4M params), it uses (2048x4) + (4x2048) = 16K params, a 96% reduction. 
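As a sanity check on the adapter arithmetic, the rank-4 case can be computed directly:

```python
# LoRA trainable parameters for a d x d update matrix factored at rank r:
# A is (d x r) and B is (r x d), so the count is d*r + r*d.
def lora_params(d: int, r: int) -> int:
    return d * r + r * d

count = lora_params(2048, 4)  # (2048*4) + (4*2048) = 16384 trainable params
```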
The LoRA adapters merge with the frozen base weights at inference time, preserving general knowledge while specializing on domain data like finance intents; accuracy drops slightly vs. full fine-tuning, but GPU\u002Ftime savings are massive, with minor inference latency unless the adapters are pre-merged.",[24,1095,1096],{},"QLoRA quantizes LoRA weights to 4-bit NF4 (e.g., 0.117 → 0.12), yielding 8x memory savings via higher precision near zero and less for outliers. It enables fine-tuning large models on single GPUs but slows training 25%+ due to gradient checkpointing (trades compute for 45% activation memory), dequantization per forward\u002Fbackward pass, and paged_adam_8bit optimizer—use for prototypes or severe constraints where slight accuracy loss is acceptable.",[19,1098,1100],{"id":1099},"aws-sagemaker-implementation-universal-script-across-approaches","AWS SageMaker Implementation: Universal Script Across Approaches",[24,1102,1103],{},"Prepare Banking77 dataset (HF: PolyAI\u002Fbanking77) into train\u002Ftest .jsonl, upload to S3 bucket (e.g., finetuning-llm-blog-harshitdawar\u002FBanking77\u002F{train,test}). Bundle requirements.txt (key libs: torch, transformers, peft, bitsandbytes, trl, datasets, accelerate) and training_script.py into training-scripts.tar.gz—script handles model_name (Llama2-7B, Mistral7B-v0.1, GPT-NeoX-20B), approach (full\u002Flora\u002Fqlora), epochs, batch_size=8, lr (auto-tuned), hf_token for gated models.",[24,1105,1106],{},"Add S3 bucket policy for SageMaker access. In SageMaker Training Jobs: use HuggingFace PyTorch container (e.g., 763104351884.dkr.ecr.ap-south-1.amazonaws.com\u002Fhuggingface-pytorch-training:2.1.0-...), ml.g5.xlarge+ GPU instances (scale per table: e.g., Llama2 QLoRA on g5.xlarge batch=8; GPT-NeoX-20B LoRA on p4d.24xlarge batch=1). Hyperparams reference S3 code\u002Foutput paths; channels for train\u002Ftest data; output to S3\u002Fmodels\u002F{model}-{approach}. 
Spot instances optional; ensure IAM role has S3 perms, request quotas for instances.",[24,1108,1109],{},"Run jobs for 9 combos (excluding GPT-NeoX full FT due to cost); eval on 500 test samples with Rouge\u002FBert\u002FIntent Acc\u002FParse Rate\u002FInference Sec.",[19,1111,1113],{"id":1112},"results-lora-wins-on-cost-per-performance-point","Results: LoRA Wins on Cost per Performance Point",[24,1115,1116],{},"On Banking77 intents: Full FT tops metrics (e.g., Llama2 full: high Intent Acc), LoRA close (slight drop), QLoRA lowest but viable baseline. Training time\u002Fcost: QLoRA cheapest upfront (memory savings) yet higher total due to overheads; LoRA optimal (e.g., lower than full by orders, beats QLoRA on perf\u002F$). Inference: Full\u002FLoRA faster\u002Fsec than QLoRA; cost per perf point favors LoRA.",[24,1118,1119],{},"Resources: Fine-tuned sizes ~original (merging bloats); GPU util high across (e.g., Llama2 QLoRA peaks 100% GPU mem); QLoRA maxes smaller instances. Author spent >$200 across runs—get credits\u002Festimates first.",[19,1121,1123],{"id":1122},"recommendations-match-approach-to-constraints","Recommendations: Match Approach to Constraints",[24,1125,1126],{},"Full FT: Max accuracy, no compromises (e.g., regulated finance). LoRA: Production sweet spot—96% param cut, near-full perf, preserves base knowledge. QLoRA: Quick prototypes\u002Fhigh constraints (democratizes research). Scale instances per model (e.g., 7B on g5.12xlarge full; 20B LoRA p4d.24xlarge). 
Merge LoRA for inference speed; test baselines before scaling.",{"title":167,"searchDepth":168,"depth":168,"links":1128},[1129,1130,1131,1132],{"id":1086,"depth":168,"text":1087},{"id":1099,"depth":168,"text":1100},{"id":1112,"depth":168,"text":1113},{"id":1122,"depth":168,"text":1123},[176],{"content_references":1135,"triage":1140},[1136],{"type":195,"title":1137,"author":1138,"url":1139,"context":185},"Banking77","PolyAI","https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FPolyAI\u002Fbanking77",{"relevance":203,"novelty":204,"quality":204,"actionability":204,"composite":205,"reasoning":1141},"Category: AI & LLMs. The article provides a detailed comparison of fine-tuning methods for large language models, specifically focusing on LoRA and QLoRA, which directly addresses the audience's need for practical AI engineering insights. It includes specific implementation steps for using AWS SageMaker, making it actionable for developers looking to integrate these techniques into their workflows.","\u002Fsummaries\u002F866e10e8d404e5bf-sagemaker-fine-tuning-lora-beats-qlora-on-cost-per-summary","2026-05-03 07:33:04","2026-05-03 17:01:03",{"title":1076,"description":167},{"loc":1142},"866e10e8d404e5bf","Towards AI","https:\u002F\u002Fpub.towardsai.net\u002Fthe-ultimate-guide-to-fine-tuning-foundation-models-on-aws-sagemaker-efc673509bb2?source=rss----98111c9905da---4","summaries\u002F866e10e8d404e5bf-sagemaker-fine-tuning-lora-beats-qlora-on-cost-per-summary",[220,297,298,221],"LoRA cuts trainable params by 96% vs full fine-tuning, balancing cost savings and accuracy on Llama2-7B\u002FMistral7B; QLoRA saves 8x memory but trains slower due to dequantization 
overhead.",[],"zrXCCVv4m3PpFgLRs2NdWo10XbP8h3vRPQKkaW6c8mg",{"id":1156,"title":1157,"ai":1158,"body":1163,"categories":1199,"created_at":177,"date_modified":177,"description":167,"extension":178,"faq":177,"featured":179,"kicker_label":177,"meta":1200,"navigation":207,"path":1217,"published_at":1218,"question":177,"scraped_at":1219,"seo":1220,"sitemap":1221,"source_id":1222,"source_name":214,"source_type":293,"source_url":1223,"stem":1224,"tags":1225,"thumbnail_url":177,"tldr":1227,"tweet":177,"unknown_tags":1228,"__hash__":1229},"summaries\u002Fsummaries\u002F3740ad507782d5ab-bigtable-scales-petabytes-for-real-time-nosql-work-summary.md","Bigtable Scales Petabytes for Real-Time NoSQL Workloads",{"provider":9,"model":10,"input_tokens":1159,"output_tokens":1160,"processing_time_ms":1161,"cost_usd":1162},4454,1748,15352,0.0017423,{"type":16,"value":1164,"toc":1193},[1165,1169,1172,1176,1179,1183,1186,1190],[19,1166,1168],{"id":1167},"auto-scaling-performance-for-massive-real-time-loads","Auto-Scaling Performance for Massive Real-Time Loads",[24,1170,1171],{},"Bigtable delivers linear scalability to hundreds of petabytes while maintaining predictable low latency and handling millions of operations per second. It powers Google services like Search, Analytics, Ads, YouTube, and Maps. Use its flexible schema for evolving data like clickstreams, social content, ads, catalogs, and profiles. This supports customer 360 views and multi-tenant SaaS architectures in AdTech, retail, media, finance, and IoT. Automatic versioning timestamps data, and tiered storage shifts between hot\u002Fcold tiers to cut costs via retention policies.",[19,1173,1175],{"id":1174},"time-series-ingestion-and-in-app-reporting","Time Series Ingestion and In-App Reporting",[24,1177,1178],{},"Ingest massive IoT\u002Ffinancial\u002Fapp monitoring streams with auto-timestamping for version history. 
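One common row-key idiom for this kind of ingestion (a standard Bigtable pattern, assumed here rather than taken from the video) is the entity id plus a zero-padded reversed timestamp, so the newest readings sort first under lexicographic row ordering:

```python
# Standard Bigtable time-series row-key idiom (an assumption for this sketch):
# entity id + zero-padded reversed timestamp -> newest rows sort first.
MAX_TS = 10**13  # epoch-millis ceiling chosen for this sketch

def row_key(device_id: str, ts_millis: int) -> str:
    return f"{device_id}#{MAX_TS - ts_millis:013d}"

# ascending key order now corresponds to newest reading first (t=3000, 2000, 1000)
keys = sorted(row_key("sensor-1", t) for t in (1000, 3000, 2000))
```

This keeps "latest N points per device" a cheap prefix scan instead of a full-range read.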
Enable live reporting via continuous materialized views and write-time aggregations for A\u002FB testing or engagement metrics. Build Kappa architectures with native connectors to Apache Flink, Spark, Kafka, and Beam for stream processing pipelines.",[19,1180,1182],{"id":1181},"ml-feature-stores-and-bigquery-pairing","ML Feature Stores and BigQuery Pairing",[24,1184,1185],{},"Serve low-latency online features for recommendations, user monitoring, or chat apps, while isolating offline mode for training without disrupting traffic. Powers large-scale stores like Spotify's music recommendations. Pair with BigQuery for hybrid setups: BigQuery analyzes historical patterns (e.g., fraud detection, personalization, vehicle telemetry trends via external tables), while Bigtable handles millisecond reactions on live data. This unifies serving speed with deep analytics.",[19,1187,1189],{"id":1188},"hands-on-trial-setup","Hands-On Trial Setup",[24,1191,1192],{},"Start a 10-day free trial (no billing needed) via Google Cloud console: create instance with name and region. Use provided datasets for testing.",{"title":167,"searchDepth":168,"depth":168,"links":1194},[1195,1196,1197,1198],{"id":1167,"depth":168,"text":1168},{"id":1174,"depth":168,"text":1175},{"id":1181,"depth":168,"text":1182},{"id":1188,"depth":168,"text":1189},[273],{"content_references":1201,"triage":1215},[1202,1205,1207,1209,1211,1213],{"type":183,"title":1203,"url":1204,"context":185},"Bigtable","https:\u002F\u002Fgoo.gle\u002F3QEsBhk",{"type":183,"title":1206,"context":185},"BigQuery",{"type":183,"title":1208,"context":185},"Apache Flink",{"type":183,"title":1210,"context":185},"Apache Spark",{"type":183,"title":1212,"context":185},"Apache Kafka",{"type":183,"title":1214,"context":185},"Apache Beam",{"relevance":204,"novelty":284,"quality":204,"actionability":204,"composite":397,"reasoning":1216},"Category: Data Science & Visualization. 
The article discusses Bigtable's capabilities for handling massive real-time data loads, which is relevant for product builders looking to implement scalable data solutions. It provides actionable steps for setting up a trial, making it practical for developers exploring data storage options.","\u002Fsummaries\u002F3740ad507782d5ab-bigtable-scales-petabytes-for-real-time-nosql-work-summary","2026-04-30 16:01:43","2026-05-03 16:58:17",{"title":1157,"description":167},{"loc":1217},"48896df1eee6051e","https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=yArSgUhQHT8","summaries\u002F3740ad507782d5ab-bigtable-scales-petabytes-for-real-time-nosql-work-summary",[221,298,297,1226],"data-science","Bigtable auto-scales to hundreds of petabytes and millions of ops\u002Fsec with low latency, powering Google Search\u002FYouTube\u002FMaps; ideal for time series, ML features, and streaming via Flink\u002FKafka integrations.",[],"BaI4rjcPJlZb_hCUCb4-6-WNlw0WnEeyKtIrD7zrXJs",{"id":1231,"title":1232,"ai":1233,"body":1238,"categories":1421,"created_at":177,"date_modified":177,"description":167,"extension":178,"faq":177,"featured":179,"kicker_label":177,"meta":1422,"navigation":207,"path":1433,"published_at":1434,"question":177,"scraped_at":1435,"seo":1436,"sitemap":1437,"source_id":1438,"source_name":1439,"source_type":293,"source_url":1440,"stem":1441,"tags":1442,"thumbnail_url":177,"tldr":1443,"tweet":177,"unknown_tags":1444,"__hash__":1445},"summaries\u002Fsummaries\u002F1c37c1cad77c687a-scale-pytorch-ddp-multi-node-on-aws-ec2-infra-firs-summary.md","Scale PyTorch DDP Multi-Node on AWS EC2: Infra-First Guide",{"provider":9,"model":10,"input_tokens":1234,"output_tokens":1235,"processing_time_ms":1236,"cost_usd":1237},8453,1898,16685,0.0026171,{"type":16,"value":1239,"toc":1415},[1240,1244,1247,1250,1254,1257,1260,1272,1275,1279,1282,1395,1402,1405,1409,1412],[19,1241,1243],{"id":1242},"replicate-environments-and-data-for-multi-node-reliability","Replicate Environments and Data for 
Multi-Node Reliability",[24,1245,1246],{},"Multi-node DDP treats processes across independent EC2 instances as identical, requiring each node to have matching Python\u002FPyTorch\u002FCUDA versions, identical code from version control, and shared dataset access. Use shared EFS volumes mounted on all instances (e.g., DATASET_DIR=\u002Fefs\u002Fandrea\u002Fdataset) to avoid copying data; local copies or remote streaming work but add latency. Homogeneous clusters like 2 g6e.xlarge instances in the same availability zone minimize variance. Without this, expect cryptic errors or silent failures since DDP assumes uniformity.",[24,1248,1249],{},"One process per GPU (world size = total GPUs, e.g., 2 for 1 GPU\u002Fnode), with rank 0 as master for logging\u002Fcheckpointing. NCCL handles intra-node (NVLink\u002FPCIe) and inter-node (TCP) gradient all-reduce; network misconfigs cause silent hangs.",[19,1251,1253],{"id":1252},"secure-aws-networking-and-launch-torchrun","Secure AWS Networking and Launch torchrun",[24,1255,1256],{},"Launch identical instance types, note master's private IP (e.g., 10.x.xxx.203), and edit security group inbound rules: Type=All traffic, Source=same security group ID (e.g., sg-xxx). This enables rendezvous and NCCL comms; default blocks cause indefinite hangs without errors.",[24,1258,1259],{},"Set .env per node:",[57,1261,1262,1269],{},[60,1263,1264,1265],{},"Master: NUMBER_OF_NODES=2, NODE_RANK=0, NUMBER_OF_GPUS=1, MASTER_ADDR=",[1266,1267,1268],"private",{"ip":167},", MASTER_PORT=30000, DDP_TIMEOUT_SECONDS=180",[60,1270,1271],{},"Worker: Same but NODE_RANK=1, OUTPUT_DIR empty (master-only).",[24,1273,1274],{},"Run in tmux: uv run torchrun --nnodes=2 --node_rank=$NODE_RANK --nproc_per_node=1 --master_addr=$MASTER_ADDR --master_port=30000 train.py. 
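The rank arithmetic behind this launch can be sketched as follows (a hypothetical helper, not from the article; it mirrors how torchrun derives global ranks from node and local ranks):

```python
# Minimal sketch (assumption: mirrors the .env values above) of the
# rank layout torchrun produces for a multi-node cluster:
# global_rank = node_rank * nproc_per_node + local_rank.
def rank_layout(nnodes: int, nproc_per_node: int):
    """Return (world_size, [(node_rank, local_rank, global_rank), ...])."""
    world_size = nnodes * nproc_per_node
    layout = [
        (node, local, node * nproc_per_node + local)
        for node in range(nnodes)
        for local in range(nproc_per_node)
    ]
    return world_size, layout

world_size, layout = rank_layout(nnodes=2, nproc_per_node=1)
assert world_size == 2
assert layout == [(0, 0, 0), (1, 0, 1)]  # rank 0 on master, rank 1 on worker
```
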
Batch size scales linearly (e.g., per-rank batch_size=10 yields effective 20), adjust LR accordingly.",[19,1276,1278],{"id":1277},"integrate-ddpmanager-and-distributedsampler-in-code","Integrate DDPManager and DistributedSampler in Code",[24,1280,1281],{},"Encapsulate DDP in DDPManager class:",[325,1283,1287],{"className":1284,"code":1285,"language":1286,"meta":167,"style":167},"language-python shiki shiki-themes github-light github-dark","import os\nimport torch\nimport torch.distributed as dist\nfrom datetime import timedelta\n\nclass DDPManager:\n    def __init__(self, backend=\"nccl\", timeout_s=180):\n        self.backend = backend\n        self.timeout_s = timeout_s\n    def setup(self) -> bool:\n        if dist.is_initialized(): return True\n        if \"RANK\" not in os.environ: return False\n        local_rank = int(os.environ[\"LOCAL_RANK\"])\n        torch.cuda.set_device(local_rank)\n        dist.init_process_group(backend=self.backend, timeout=timedelta(seconds=self.timeout_s))\n        return True\n    def is_main_process(self) -> bool:\n        return int(os.environ.get(\"RANK\", \"0\")) == 0\n    # barrier(), cleanup(), get_local_rank()\n","python",[320,1288,1289,1294,1299,1304,1309,1314,1319,1324,1329,1335,1341,1347,1353,1359,1365,1371,1377,1383,1389],{"__ignoreMap":167},[855,1290,1291],{"class":857,"line":858},[855,1292,1293],{},"import os\n",[855,1295,1296],{"class":857,"line":168},[855,1297,1298],{},"import torch\n",[855,1300,1301],{"class":857,"line":284},[855,1302,1303],{},"import torch.distributed as dist\n",[855,1305,1306],{"class":857,"line":204},[855,1307,1308],{},"from datetime import timedelta\n",[855,1310,1311],{"class":857,"line":203},[855,1312,1313],{"emptyLinePlaceholder":207},"\n",[855,1315,1316],{"class":857,"line":914},[855,1317,1318],{},"class DDPManager:\n",[855,1320,1321],{"class":857,"line":926},[855,1322,1323],{},"    def __init__(self, backend=\"nccl\", 
timeout_s=180):\n",[855,1325,1326],{"class":857,"line":935},[855,1327,1328],{},"        self.backend = backend\n",[855,1330,1332],{"class":857,"line":1331},9,[855,1333,1334],{},"        self.timeout_s = timeout_s\n",[855,1336,1338],{"class":857,"line":1337},10,[855,1339,1340],{},"    def setup(self) -> bool:\n",[855,1342,1344],{"class":857,"line":1343},11,[855,1345,1346],{},"        if dist.is_initialized(): return True\n",[855,1348,1350],{"class":857,"line":1349},12,[855,1351,1352],{},"        if \"RANK\" not in os.environ: return False\n",[855,1354,1356],{"class":857,"line":1355},13,[855,1357,1358],{},"        local_rank = int(os.environ[\"LOCAL_RANK\"])\n",[855,1360,1362],{"class":857,"line":1361},14,[855,1363,1364],{},"        torch.cuda.set_device(local_rank)\n",[855,1366,1368],{"class":857,"line":1367},15,[855,1369,1370],{},"        dist.init_process_group(backend=self.backend, timeout=timedelta(seconds=self.timeout_s))\n",[855,1372,1374],{"class":857,"line":1373},16,[855,1375,1376],{},"        return True\n",[855,1378,1380],{"class":857,"line":1379},17,[855,1381,1382],{},"    def is_main_process(self) -> bool:\n",[855,1384,1386],{"class":857,"line":1385},18,[855,1387,1388],{},"        return int(os.environ.get(\"RANK\", \"0\")) == 0\n",[855,1390,1392],{"class":857,"line":1391},19,[855,1393,1394],{},"    # barrier(), cleanup(), get_local_rank()\n",[24,1396,1397,1398,1401],{},"Setup: ddp = DDPManager(); use_ddp = ddp.setup(); device = torch.device(f\"cuda:{ddp.get_local_rank()}\") if use_ddp else \"cuda:0\". Wrap model: model = DDP(model, device_ids=",[855,1399,1400],{},"local_rank",", output_device=local_rank, find_unused_parameters=False); access via model.module.",[24,1403,1404],{},"Use DistributedSampler(dataset, num_replicas=world_size, rank=rank, shuffle=True) for data partitioning; set train_sampler.set_epoch(epoch) per epoch. Barrier after master-only tasks (validate\u002Fsave): if use_ddp: ddp.barrier(). 
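The sampler's per-epoch behavior can be sketched in pure Python (an illustrative stand-in for DistributedSampler, not torch code): every rank shuffles identically from an epoch-derived seed, then takes its own strided slice, so shards are disjoint yet reshuffled each epoch.

```python
import random

# Pure-Python sketch (assumption: a stand-in, not torch itself) of what
# DistributedSampler(shuffle=True) plus set_epoch(epoch) does: all ranks
# apply the same epoch-seeded permutation, then slice out disjoint shards.
def shard_indices(dataset_len: int, world_size: int, rank: int, epoch: int):
    indices = list(range(dataset_len))
    random.Random(epoch).shuffle(indices)  # identical permutation on every rank
    return indices[rank::world_size]       # disjoint strided shard per rank

r0 = shard_indices(10, world_size=2, rank=0, epoch=0)
r1 = shard_indices(10, world_size=2, rank=1, epoch=0)
assert not set(r0) & set(r1)                    # shards never overlap
assert sorted(r0 + r1) == list(range(10))       # together they cover the data
```
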
Master handles checkpoints: torch.save({\"step\": step, \"model\": model.module.state_dict()}, f\"{ckpt_dir}\u002Fmodel-{step}.pth\").",[19,1406,1408],{"id":1407},"debug-timeouts-and-failures-proactively","Debug Timeouts and Failures Proactively",[24,1410,1411],{},"Silent hangs signal network issues—ping test instances first. Missing node triggers init timeout (180s default). Master crash kills job; no fault tolerance. Deadlocks (e.g., barrier stall) timeout. Restrict GPUs: export CUDA_VISIBLE_DEVICES=0. Scale batch size with ranks for stable training; effective batch = per-rank batch * world_size.",[1018,1413,1414],{},"html .default .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html.dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}",{"title":167,"searchDepth":168,"depth":168,"links":1416},[1417,1418,1419,1420],{"id":1242,"depth":168,"text":1243},{"id":1252,"depth":168,"text":1253},{"id":1277,"depth":168,"text":1278},{"id":1407,"depth":168,"text":1408},[273],{"content_references":1423,"triage":1430},[1424,1427],{"type":198,"title":1425,"url":1426,"context":185},"Mounting the EFS file system on EC2 
Linux","https:\u002F\u002Fdocs.aws.amazon.com\u002Fefs\u002Flatest\u002Fug\u002Fmounting-fs-mount-helper-ec2-linux.html",{"type":183,"title":1428,"url":1429,"context":185},"tmux","https:\u002F\u002Fman7.org\u002Flinux\u002Fman-pages\u002Fman1\u002Ftmux.1.html",{"relevance":203,"novelty":284,"quality":204,"actionability":204,"composite":1431,"reasoning":1432},4.15,"Category: AI & LLMs. The article provides a detailed guide on scaling PyTorch DDP across AWS EC2 instances, addressing practical challenges faced by developers in deploying AI models. It includes specific configurations and code examples that can be directly applied, making it actionable for the target audience.","\u002Fsummaries\u002F1c37c1cad77c687a-scale-pytorch-ddp-multi-node-on-aws-ec2-infra-firs-summary","2026-04-30 13:31:01","2026-05-03 17:01:04",{"title":1232,"description":167},{"loc":1433},"1c37c1cad77c687a","Learning Data","https:\u002F\u002Fmedium.com\u002Flearning-data\u002Fone-gpu-wasnt-enough-my-journey-scaling-pytorch-ddp-across-aws-ec2-instances-506647e086fc?source=rss----eec44e936bf1---4","summaries\u002F1c37c1cad77c687a-scale-pytorch-ddp-multi-node-on-aws-ec2-infra-firs-summary",[1286,297,298,221],"Multi-node DDP demands identical environments, data access, and open security groups across EC2 instances; use torchrun launcher with DDPManager for minimal code changes and reliable gradient sync via 
NCCL.",[],"IVwD5gQ2TAKP9L1byc-qt16swkQ8B55VMXVdjNlKNQ0",{"id":1447,"title":1448,"ai":1449,"body":1454,"categories":1488,"created_at":177,"date_modified":177,"description":167,"extension":178,"faq":177,"featured":179,"kicker_label":177,"meta":1489,"navigation":207,"path":1503,"published_at":1504,"question":177,"scraped_at":1505,"seo":1506,"sitemap":1507,"source_id":1508,"source_name":1509,"source_type":293,"source_url":1510,"stem":1511,"tags":1512,"thumbnail_url":177,"tldr":1513,"tweet":177,"unknown_tags":1514,"__hash__":1515},"summaries\u002Fsummaries\u002F6ee9b4f709da3a06-tpus-dominate-at-infrastructure-scale-over-per-chi-summary.md","TPUs Dominate at Infrastructure Scale Over Per-Chip GPU Wins",{"provider":9,"model":10,"input_tokens":1450,"output_tokens":1451,"processing_time_ms":1452,"cost_usd":1453},5399,1852,23082,0.00198315,{"type":16,"value":1455,"toc":1483},[1456,1460,1463,1466,1470,1473,1476,1480],[19,1457,1459],{"id":1458},"infrastructure-scaling-trumps-per-chip-performance","Infrastructure Scaling Trumps Per-Chip Performance",[24,1461,1462],{},"Google's TPU v8t for training and v8i for inference trail Nvidia's Rubin and AMD GPUs in raw per-chip compute and memory. However, evaluating at infrastructure level reveals TPUs' edge: Nvidia's NVL72 scales 72 Rubin GPUs per rack, while Google's 4x4x4 cube interconnects up to 9600 TPUs into a superpod delivering 121 exaFLOPS in FP4—surpassing Nvidia's 1152-GPU Rubin pod at 60 exaFLOPS FP4. Google's Virgo network further scales out to 134,000 chips, potentially reaching 1 million, minimizing network overhead via ICI and optical interconnects. This Lego-like modularity avoids the scaling cliffs Nvidia faces when stacking GPUs, where interconnect overhead erodes per-chip advantages.",[24,1464,1465],{},"Nvidia balances scale-out with InfiniBand for diverse customers (neo-clouds like CoreWeave, labs like OpenAI\u002FMeta, hyperscalers like Microsoft\u002FAmazon), prioritizing broad demand profiles. 
Google, serving internal apps like Gemini and Vertex AI plus external deals (Anthropic's $1B TPU commitment: 40% owned, 60% rented; Meta's multi-billion rental), optimizes purely for its high-volume needs without market fragmentation risks.",[19,1467,1469],{"id":1468},"workload-profiles-dictate-hardware-choices","Workload Profiles Dictate Hardware Choices",[24,1471,1472],{},"AI tasks bifurcate demands: training prioritizes network bandwidth over compute\u002Fmemory, benefiting TPU's topology. Inference splits further—prefill (pink line in SemiAnalysis chart) is compute\u002Fmemory-bound for KV cache parallelization; decode (white line) is bandwidth\u002Flatency-bound for autoregressive token streaming. TPU v8t\u002F8i bifurcation matches this: v8t for training's network focus, v8i for inference's varied needs. Virgo flattens network bottlenecks, challenging Nvidia's inference dominance.",[24,1474,1475],{},"Replicating Google's scaling on Nvidia chips risks inefficiency for its varied clientele, locking into a 'balanced diet' pod architecture over specialized superpods.",[19,1477,1479],{"id":1478},"explosive-demand-drives-economics","Explosive Demand Drives Economics",[24,1481,1482],{},"Epoch AI projects 450+ new pre-trained models by 2030, many exceeding GPT-5's ~66 septillion FLOPs (total math ops for weights). A 9600-TPU superpod could theoretically pretrain GPT-5-scale models in under 7 days at FP4 (realistically 3-4 weeks), but efficiency cliffs emerge from memory, bandwidth, or latency based on scale-up\u002Fout choices. 
Rising inference\u002Ftraining demand amplifies TPU economics: internal fab control ensures supply for massive token serving, positioning Google against Nvidia as workloads evolve toward bandwidth constraints.",{"title":167,"searchDepth":168,"depth":168,"links":1484},[1485,1486,1487],{"id":1458,"depth":168,"text":1459},{"id":1468,"depth":168,"text":1469},{"id":1478,"depth":168,"text":1479},[519],{"content_references":1490,"triage":1501},[1491,1494,1498],{"type":183,"title":1492,"url":1493,"context":201},"Mammoth AI","http:\u002F\u002Fmammouth.ai",{"type":1495,"title":1496,"author":1497,"context":463},"report","SemiAnalysis AI Demand Profiles Diagram","SemiAnalysis",{"type":1495,"title":1499,"author":1500,"context":463},"Epoch AI Pre-Trained Models Projection","Epoch AI",{"relevance":284,"novelty":284,"quality":204,"actionability":168,"composite":285,"reasoning":1502},"Category: AI & LLMs. The article discusses the performance of Google's TPUs compared to Nvidia GPUs, which is relevant to AI infrastructure but lacks direct actionable insights for product builders. 
While it provides some new perspectives on scaling AI workloads, it does not offer specific frameworks or techniques that the audience can implement.","\u002Fsummaries\u002F6ee9b4f709da3a06-tpus-dominate-at-infrastructure-scale-over-per-chi-summary","2026-04-30 02:16:18","2026-05-03 16:52:02",{"title":1448,"description":167},{"loc":1503},"a42442ea33b32f06","Caleb Writes Code","https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=b_KxiTPBIb0","summaries\u002F6ee9b4f709da3a06-tpus-dominate-at-infrastructure-scale-over-per-chi-summary",[297,221,298],"Google's TPU v8t (training) and v8i (inference) lag Nvidia GPUs per chip but deliver superior performance at scale—9600-chip superpods hit 121 exaFLOPS FP4—via cube topology and Virgo networking, optimizing for AI's bandwidth-heavy workloads.",[],"EDdnhIFUjAM7yxc1JlkafqMae9PUuMTF3sbgBpRfDo4",{"id":1517,"title":1518,"ai":1519,"body":1524,"categories":1686,"created_at":177,"date_modified":177,"description":167,"extension":178,"faq":177,"featured":179,"kicker_label":177,"meta":1687,"navigation":207,"path":1710,"published_at":1711,"question":177,"scraped_at":1712,"seo":1713,"sitemap":1714,"source_id":1715,"source_name":214,"source_type":293,"source_url":1716,"stem":1717,"tags":1718,"thumbnail_url":177,"tldr":1719,"tweet":177,"unknown_tags":1720,"__hash__":1721},"summaries\u002Fsummaries\u002F668072030a93af7f-next-26-build-agents-with-adk-skills-and-gemini-summary.md","Next '26: Build Agents with ADK, Skills, and Gemini",{"provider":9,"model":10,"input_tokens":1520,"output_tokens":1521,"processing_time_ms":1522,"cost_usd":1523},8783,2529,29411,0.0029986,{"type":16,"value":1525,"toc":1679},[1526,1530,1541,1544,1547,1551,1558,1603,1606,1613,1616,1620,1627,1630,1633,1637,1640,1643,1646,1648,1677],[19,1527,1529],{"id":1528},"agent-development-kit-adk-enables-flexible-production-ready-agents","Agent Development Kit (ADK) Enables Flexible, Production-Ready Agents",[24,1531,1532,1533,1536,1537,1540],{},"ADK, Google's open-source framework 
launched at Next '26, stands out for building enterprise agents in 2026. It supports Python (primary), Go, TypeScript, and Java libraries, decoupling agent logic from specific models. Use Gemini 3\u002F3.1 Flash\u002FPro for reasoning, or integrate Claude, open models on GKE, or any provider. Agents gain intelligence via ",[63,1534,1535],{},"tools"," (functions for computation or external services like MCP servers\u002Fdatabases) and ",[63,1538,1539],{},"skills"," (new concept: YAML metadata for quick loading + on-demand markdown body with code\u002Fscripts).",[24,1542,1543],{},"Skills keep context lean: Agent loads YAML summaries of all skills at startup (e.g., \"GIS tool generates marathon routes\"), then fetches full body only when needed. This avoids token bloat for complex tasks. ADK 2.0 adds graph-based features for larger agent graphs. Deploy to Agent Runtime, Cloud Run, or GKE for scale.",[24,1545,1546],{},"\"When you go to build agent in 2026, you have a lot of options. And we believe that ADK, agent development kit is the best way to do this.\"",[19,1548,1550],{"id":1549},"marathon-planning-demo-multi-agent-orchestration-in-action","Marathon Planning Demo: Multi-Agent Orchestration in Action",[24,1552,1553,1554,1557],{},"Core demo simulates planning a 10,000-runner Las Vegas marathon via a ",[63,1555,1556],{},"planner agent"," in a 3D Las Vegas app (Race Condition repo). Prompt: \"Plan a marathon in Las Vegas for 10,000 runners.\" Agent dynamically loads skills:",[57,1559,1560,1591,1597],{},[60,1561,1562,1565,1566],{},[63,1563,1564],{},"GIS Spatial Engineering",": Python script processes GeoJSON (Las Vegas road network) to compute exact 42.195km route. Handles constraints: no back-half elevation gains, geofenced to city bounds, water stations at intervals. 
Math ensures precision—model doesn't hallucinate routes.",[325,1567,1569],{"className":1284,"code":1568,"language":1286,"meta":167,"style":167},"# Excerpt from skill script\ndef generate_marathon_route(geojson_data, target_length_km=42.195):\n    # Mathematical ops on coordinates for route optimization\n    ...\n",[320,1570,1571,1576,1581,1586],{"__ignoreMap":167},[855,1572,1573],{"class":857,"line":858},[855,1574,1575],{},"# Excerpt from skill script\n",[855,1577,1578],{"class":857,"line":168},[855,1579,1580],{},"def generate_marathon_route(geojson_data, target_length_km=42.195):\n",[855,1582,1583],{"class":857,"line":284},[855,1584,1585],{},"    # Mathematical ops on coordinates for route optimization\n",[855,1587,1588],{"class":857,"line":204},[855,1589,1590],{},"    ...\n",[60,1592,1593,1596],{},[63,1594,1595],{},"Mapping",": Queries Google Maps MCP server (natural language over APIs) for places (landmarks like Bellagio, Sphere), weather history (avoid extreme temps).",[60,1598,1599,1602],{},[63,1600,1601],{},"Race Director",": Text-based guidelines from Google Doc (converted via Workspace MCP + Gemini summarization). Covers soft reqs: 3-4 start lanes, porta-potty spacing, traffic impact, economic notes.",[24,1604,1605],{},"Agent iterates: Loads skills on-demand, calls tools, outputs grounded plan. Full code in open-source Race Condition repo (includes .mmd files for Claude\u002FGemini CLI\u002FAntigravity coding harnesses). Codelab guides setup\u002Fdeploy.",[24,1607,1608,1609,1612],{},"\"We took the task of okay can we take that process ",[855,1610,1611],{},"marathon planning"," and make it so that bunch of agents working together can do the same thing if possible even better.\"",[24,1614,1615],{},"Trade-offs: Skills shine for modular, discoverable capabilities but require upfront YAML curation. 
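The lazy-loading mechanic behind skills can be sketched like this (hypothetical names and data; plain dicts stand in for parsed YAML metadata and on-disk markdown bodies):

```python
# Hypothetical sketch of the skill-loading pattern: lightweight metadata
# for every skill stays in context from startup, while the full markdown
# body is fetched only when the agent actually uses the skill.
SKILL_METADATA = {  # stands in for parsed YAML front matter
    "gis-routing": "GIS tool generates marathon routes from GeoJSON",
    "mapping": "Queries Google Maps MCP for places and weather",
}
_SKILL_BODIES = {  # stands in for on-disk markdown files
    "gis-routing": "# GIS Routing\n...full body with scripts...",
    "mapping": "# Mapping\n...full body...",
}
loaded_bodies: list = []

def startup_context() -> str:
    """Only one-line summaries reach the prompt at startup."""
    return "\n".join(f"- {k}: {v}" for k, v in SKILL_METADATA.items())

def use_skill(name: str) -> str:
    """Fetch the heavy markdown body on demand."""
    loaded_bodies.append(name)
    return _SKILL_BODIES[name]

assert "full body" not in startup_context()  # bodies stay out of the prompt
use_skill("gis-routing")
assert loaded_bodies == ["gis-routing"]      # only the needed body was loaded
```
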
Tools handle real-time actions; combine for hybrid intelligence.",[19,1617,1619],{"id":1618},"multi-agent-architectures-and-protocols","Multi-Agent Architectures and Protocols",[24,1621,1622,1623,1626],{},"Post-keynote chats (Ivan Nardini, Casey West) detail Demo 2: Multi-agent setup with real-time evaluation, ",[63,1624,1625],{},"Agent2Agent (A2A) protocol",", A2UI registry. Started Feb '26; evolved from tools to skills differentiation. Identities for marathon: planner + specialized roles (e.g., route optimizer, logistics).",[24,1628,1629],{},"A2A enables agent handoffs; registry discovers skills\u002FUI agents. Built with Vertex AI, Gemini Enterprise Agent Platform. Other segments touch Flutter agents, Firebase SQL Connect (gcloud sql connect), OpenTelemetry tracing, Data Agent Kit, Gemini Nano, Vertex AI Memory Bank.",[24,1631,1632],{},"\"We start using tools and then uh we switch and we decide to differentiate between tools and skills.\"",[19,1634,1636],{"id":1635},"developer-resources-and-ecosystem","Developer Resources and Ecosystem",[24,1638,1639],{},"Next '26 emphasizes hands-on: Clone Race Condition for simulation\u002FUI\u002Fagents. Use Google Antigravity, Firebase agent skills, Google AI Studio. Hackathons like Gemini Live Agent Challenge; codelabs (e.g., Building Trustable AI at 100 MPH). GEAR hub, 100+ session VODs.",[24,1641,1642],{},"Integrates Workspace MCP (Docs to skills), Maps MCP (NL queries). For trust\u002Fscaling: Evaluation loops, memory banks. Opinion: 2026 agents succeed via right tools\u002Fskills\u002Fruntime—not just models.",[24,1644,1645],{},"\"It's not about just okay, what model I choose and what agent framework I use. 
It's more about how do I give the agent the right tools, the right skills and the right place to run.\"",[19,1647,133],{"id":132},[57,1649,1650,1653,1656,1659,1662,1665,1668,1671,1674],{},[60,1651,1652],{},"Start with ADK for multi-language, model-agnostic agents; pair with Gemini for reasoning.",[60,1654,1655],{},"Design skills as YAML metadata + lazy-loaded markdown\u002Fcode to manage context efficiently.",[60,1657,1658],{},"Ground agents: Use Python scripts for math (GIS routes), MCP for APIs (Maps weather\u002Fplaces).",[60,1660,1661],{},"Clone Race Condition repo; follow codelab to build\u002Fdeploy marathon planner.",[60,1663,1664],{},"Differentiate tools (actions) vs. skills (discoverable modules); use A2A for orchestration.",[60,1666,1667],{},"Convert docs to skills via Gemini + Workspace MCP for non-deterministic guidelines.",[60,1669,1670],{},"Deploy to Agent Runtime\u002FCloud Run; trace with OpenTelemetry.",[60,1672,1673],{},"Evaluate Antigravity\u002FCursor for AI-assisted coding in agent repos.",[60,1675,1676],{},"Join Gemini Live Agent Challenge for hands-on multi-agent practice.",[1018,1678,1414],{},{"title":167,"searchDepth":168,"depth":168,"links":1680},[1681,1682,1683,1684,1685],{"id":1528,"depth":168,"text":1529},{"id":1549,"depth":168,"text":1550},{"id":1618,"depth":168,"text":1619},{"id":1635,"depth":168,"text":1636},{"id":132,"depth":168,"text":133},[176],{"content_references":1688,"triage":1708},[1689,1692,1695,1699,1702,1705],{"type":198,"title":1690,"url":1691,"context":185},"Race Condition repo","https:\u002F\u002Fgoo.gle\u002F4w4vvfK",{"type":183,"title":1693,"url":1694,"context":185},"Google Cloud Data Agent Kit","https:\u002F\u002Fgoo.gle\u002F4t66FJx",{"type":1696,"title":1697,"url":1698,"context":185},"event","Gemini Live Agent Challenge (Hackathon)","https:\u002F\u002Fgoo.gle\u002F4cQtJpt",{"type":198,"title":1700,"url":1701,"context":185},"Building Trustable AI at 100 MPH 
(Codelab)","https:\u002F\u002Fgoo.gle\u002F4tGKNFB",{"type":183,"title":1703,"url":1704,"context":185},"Google Antigravity","https:\u002F\u002Fgoo.gle\u002F48uNu4G",{"type":183,"title":1706,"url":1707,"context":185},"Firebase agent skills","https:\u002F\u002Fgoo.gle\u002F4mZisaY",{"relevance":203,"novelty":204,"quality":204,"actionability":204,"composite":205,"reasoning":1709},"Category: AI & LLMs. The article discusses the Agent Development Kit (ADK) for building production-ready agents, which is highly relevant for developers looking to integrate AI into their products. It provides a concrete example of using the ADK for a marathon planning application, showcasing practical implementation details that can be directly applied.","\u002Fsummaries\u002F668072030a93af7f-next-26-build-agents-with-adk-skills-and-gemini-summary","2026-04-29 17:41:52","2026-05-03 16:58:30",{"title":1518,"description":167},{"loc":1710},"54c3f5596d03fad3","https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=N7N0TU9tkzw","summaries\u002F668072030a93af7f-next-26-build-agents-with-adk-skills-and-gemini-summary",[219,1070,1286,221],"Google Cloud Next '26 demos production multi-agent systems using open-source ADK for any language\u002Fmodel, modular skills for efficient context, and tools like MCP servers—open-sourced Race Condition repo for marathon planning.",[],"d5M0dBrNr6LiN5dQor6oXQlD3_RTwdIBq2MRATryRWM",{"id":1723,"title":1724,"ai":1725,"body":1730,"categories":1882,"created_at":177,"date_modified":177,"description":167,"extension":178,"faq":177,"featured":179,"kicker_label":177,"meta":1883,"navigation":207,"path":1893,"published_at":1894,"question":177,"scraped_at":1895,"seo":1896,"sitemap":1897,"source_id":1898,"source_name":1899,"source_type":293,"source_url":1900,"stem":1901,"tags":1902,"thumbnail_url":177,"tldr":1903,"tweet":177,"unknown_tags":1904,"__hash__":1905},"summaries\u002Fsummaries\u002F333109d80f15bbdf-batch-size-unlocks-1000x-llm-inference-efficiency-summary.md","Batch Size 
Unlocks 1000x LLM Inference Efficiency",{"provider":9,"model":10,"input_tokens":1726,"output_tokens":1727,"processing_time_ms":1728,"cost_usd":1729},8770,2537,24557,0.003,{"type":16,"value":1731,"toc":1875},[1732,1736,1739,1742,1770,1773,1784,1787,1790,1794,1797,1800,1803,1806,1810,1813,1816,1819,1822,1825,1829,1832,1835,1838,1841,1843],[19,1733,1735],{"id":1734},"batch-size-dominates-latency-and-cost-tradeoffs","Batch Size Dominates Latency and Cost Tradeoffs",[24,1737,1738],{},"Reiner Pope breaks down autoregressive inference in transformers, where generating one new token requires a full forward pass attending to the entire KV cache of prior tokens. The KV cache—internal representations from past tokens—dominates memory fetches during attention, while weight matrix multiplies handle compute.",[24,1740,1741],{},"Using roofline analysis on a Blackwell NVL72 rack (72 GPUs), Pope models inference time as the maximum of compute time and memory time:",[57,1743,1744,1753],{},[60,1745,1746,866,1749,1752],{},[63,1747,1748],{},"Compute time",[320,1750,1751],{},"t_compute = (batch_size * active_params) \u002F FLOPs_per_chip",". Linear in batch size (B), as each sequence element processes active parameters (e.g., 37B for DeepSeek V3's MoE with 700B total).",[60,1754,1755,866,1758,1761,1762,1765,1766,1769],{},[63,1756,1757],{},"Memory time",[320,1759,1760],{},"t_memory = max(weight_fetch, KV_fetch)",", where ",[320,1763,1764],{},"weight_fetch = total_params \u002F memory_bandwidth"," (constant, ~all 700B params) and ",[320,1767,1768],{},"KV_fetch = (B * context_length * bytes_per_token) \u002F memory_bandwidth"," (linear in B and context).",[24,1771,1772],{},"Latency plot vs. B shows an initial flat region (memory-bound by weight fetches) transitioning to a steep compute-limited slope. 
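This roofline can be sketched numerically (illustrative DeepSeek-V3-like parameter counts; the hardware figures are rough assumptions, not measured NVL72 specs):

```python
# Sketch of the roofline model above. All hardware numbers are rough
# assumptions for illustration, not vendor specifications.
ACTIVE, TOTAL = 37e9, 700e9   # active vs total parameters (MoE)
FLOPS = 72 * 2.5e15           # aggregate FLOP/s across the rack (assumed)
BW = 72 * 8e12                # aggregate HBM bytes/s (assumed)
CTX, KV_BYTES = 100_000, 1e3  # context length; KV bytes/token (assumed)

def latency_s(batch: int) -> float:
    compute = batch * 2 * ACTIVE / FLOPS  # 2 FLOPs per active param per token
    weights = TOTAL / BW                  # constant: fetch all weights once
    kv = batch * CTX * KV_BYTES / BW      # grows with batch and context
    return max(compute, weights, kv)      # roofline: slowest resource wins

def cost_per_token(batch: int) -> float:
    return latency_s(batch) / batch       # amortize one pass over the batch

# Weight fetches dominate small batches, so batching slashes per-token cost:
assert cost_per_token(2400) < cost_per_token(1) / 100
```

With these assumed numbers, latency stays flat in the weight-fetch-bound region at small batch, then turns compute-limited, matching the plot described above.
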
At low B (e.g., 1), latency floors at weight fetch time (~15-20ms on HBM, capacity\u002Fbandwidth), but cost skyrockets.",[24,1774,1775,1776,1779,1780,1783],{},"Cost per token is ",[320,1777,1778],{},"latency \u002F B",", transforming curves: compute and KV become constant, weight fetch hyperbolic (1\u002FB). Without batching, weight fetches aren't amortized, yielding \"a thousand times worse\" economics. Optimal B equates memory and compute: ",[320,1781,1782],{},"B ≈ 300 * (total_params \u002F active_params)"," or ~300 * sparsity (e.g., 2400 for DeepSeek's 1\u002F8 sparsity). Practitioners use 2-3x larger for real-world inefficiencies, yielding ~2000 sequences or 128k tokens\u002Fsecond per rack (60\u002FB batches\u002Fsec).",[24,1785,1786],{},"\"If you do not batch together many users, the cost and the economics you get can be a thousand times worse than if you do batch many users together.\"",[24,1788,1789],{},"This explains \"Fast Mode\" (6x price for 2.5x speed): smaller B reduces queue wait but raises per-token cost via poor amortization. No viable \"Slow Mode\"—beyond optimal B, you're compute-bound with no further savings. Global scale (e.g., Gemini's millions tokens\u002Fsec) shards across thousands of racks.",[19,1791,1793],{"id":1792},"roofline-insights-into-hardware-and-context-limits","Roofline Insights into Hardware and Context Limits",[24,1795,1796],{},"Hardware ratio FLOPs\u002F(2 * memory_bandwidth) ~300 holds across A100-H100-B100, tying optimal B to sparsity alone, not scale. HBM capacity\u002Fbandwidth sets ~20ms cycle: racks process one full memory turnover per batch, reading weights\u002FKV mostly once (reads >> writes).",[24,1798,1799],{},"Context length shifts balance: KV slope matches compute at Goldilocks ~100k tokens; doubling to 200k halves MFU (memory-bound). 
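That balance point can be sketched directly (rough assumed hardware numbers, not vendor specs): the context length where the per-sequence KV-fetch slope equals the per-sequence compute slope.

```python
# Sketch of the "Goldilocks context" condition above: the context where
# KV-fetch time per sequence equals compute time per sequence, so the
# system is equally memory- and compute-bound. Numbers are assumptions.
ACTIVE = 37e9     # active params per token (MoE)
FLOPS = 1.8e17    # aggregate FLOP/s (assumed)
BW = 5.76e14      # aggregate HBM bytes/s (assumed)
KV_BYTES = 1e3    # KV-cache bytes per token (assumed, MLA-style)

def goldilocks_context() -> float:
    # compute slope per sequence: 2 * ACTIVE / FLOPS seconds
    # KV slope per sequence:      ctx * KV_BYTES / BW seconds
    # setting them equal and solving for ctx:
    return 2 * ACTIVE * BW / (FLOPS * KV_BYTES)

ctx_star = goldilocks_context()
assert 1e4 < ctx_star < 1e6  # lands in the ~100k-token regime described
```
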
Dense attention scales linearly with context; sparse (e.g., DeepSeek's sqrt scaling) resists this.",[24,1801,1802],{},"\"For the particular context length where the slopes match, that says I am equally memory-bound and compute-bound, which is a really desirable place to be.\"",[24,1804,1805],{},"Batching adds queue latency: fixed 20ms \"train departures\" mean worst-case 40ms wait + process. Centralization push mild—2000 concurrent users\u002Frack isn't huge, but tokens\u002Fsec scales to global traffic.",[19,1807,1809],{"id":1808},"scaling-to-clusters-moe-pipeline-and-training-overkill","Scaling to Clusters: MoE, Pipeline, and Training Overkill",[24,1811,1812],{},"Timestamps hint at cluster layouts: MoE spreads experts across GPU racks (e.g., 37B active\u002F700B total). Pipeline parallelism shards layers across racks, but Ilya Sutskever's quip \"pipelining is not wise\" stems from bubble inefficiencies.",[24,1814,1815],{},"RL drives 100x overtraining beyond Chinchilla-optimal pretrain, bloating params for post-training gains. Pope deduces long-context costs from API pricing: KV memory linear in context explains premiums.",[24,1817,1818],{},"Convergent evolution: nets and crypto both optimize sparse, high-dim ops.",[24,1820,1821],{},"\"Why Ilya said, 'As we now know, pipelining is not wise.'\"",[24,1823,1824],{},"Dwarkesh probes naively: sparse adoption uncertain, but DeepSeek publishes it. Jane Street tangent (sponsor): FPGAs for ns-latency trading vs. GPU batching.",[19,1826,1828],{"id":1827},"pricing-and-architecture-reverse-engineering","Pricing and Architecture Reverse-Engineering",[24,1830,1831],{},"API prices encode stack: fast modes shrink B, long-context hikes KV. 
Optimal B, set by sparsity rather than model size, ties progress to hardware stability.",[24,1833,1834],{},"Flashcards\u002Fpractice problems (reiner-flashcards.vercel.app) aid retention; full transcript markdown for LLM chat.",[24,1836,1837],{},"\"The cost initially starts very high at a batch size of one. It almost goes to infinity because we've got so many weight fetches that are not amortized over a large batch size.\"",[24,1839,1840],{},"Pope's full-stack view (chips to models) demystifies why AI evolves thus: batch economics favor dense clusters, sparse MoE, balanced compute\u002Fmemory.",[19,1842,133],{"id":132},[57,1844,1845,1848,1851,1854,1857,1860,1863,1866,1869,1872],{},[60,1846,1847],{},"Model inference time ≥ max( (B * active_params)\u002FFLOPs , total_params\u002Fbandwidth , (B * ctx * bytes\u002Ftoken)\u002Fbandwidth )—use roofline for predictions.",[60,1849,1850],{},"Optimal batch ~300 * sparsity (e.g., 2400 tokens for 1\u002F8 MoE); run every 20ms for 128k tokens\u002Fsec\u002Frack.",[60,1852,1853],{},"Cost\u002Ftoken = latency\u002FB: batching amortizes weights 1000x; fast modes use small B, no cheap slow mode possible.",[60,1855,1856],{},"Context ~100k balances compute\u002Fmemory; sparse attention (DeepSeek) scales better via sqrt(ctx).",[60,1858,1859],{},"Hardware FLOPs\u002F(2*BW) ~300 stable; pick B 2-3x optimal for real MFU.",[60,1861,1862],{},"Queue latency ≤ 2 * batch_time (e.g., 40ms worst-case).",[60,1864,1865],{},"RL overtrains 100x past Chinchilla; API prices reveal KV costs.",[60,1867,1868],{},"Avoid pipeline parallelism bubbles; MoE shards experts across racks.",[60,1870,1871],{},"Test your setup: equate weight_fetch = B * active_compute for balance.",[60,1873,1874],{},"Build intuition: flashcards at 
reiner-flashcards.vercel.app.",{"title":167,"searchDepth":168,"depth":168,"links":1876},[1877,1878,1879,1880,1881],{"id":1734,"depth":168,"text":1735},{"id":1792,"depth":168,"text":1793},{"id":1808,"depth":168,"text":1809},{"id":1827,"depth":168,"text":1828},{"id":132,"depth":168,"text":133},[],{"content_references":1884,"triage":1891},[1885,1888],{"type":183,"title":1886,"url":1887,"context":201},"Reiner flashcards and practice problems","https:\u002F\u002Freiner-flashcards.vercel.app\u002F",{"type":198,"title":1889,"url":1890,"context":201},"Markdown transcript of Reiner Pope lecture","https:\u002F\u002Fgist.github.com\u002Fdwarkeshsp\u002F79100f0fdeed69d76241903bb0604dbe",{"relevance":203,"novelty":204,"quality":204,"actionability":204,"composite":205,"reasoning":1892},"Category: AI & LLMs. The article provides in-depth analysis on how batch size impacts latency and cost in LLM inference, addressing a critical aspect of AI engineering that product builders need to consider. It offers actionable insights on optimizing batch sizes for efficiency, which is directly applicable to developers working with LLMs.","\u002Fsummaries\u002F333109d80f15bbdf-batch-size-unlocks-1000x-llm-inference-efficiency-summary","2026-04-29 17:20:27","2026-05-03 16:58:43",{"title":1724,"description":167},{"loc":1893},"4a9b4f0f4e55eb4e","Dwarkesh Patel","https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=xmkSf5IS-zw","summaries\u002F333109d80f15bbdf-batch-size-unlocks-1000x-llm-inference-efficiency-summary",[220,297,298,221],"Reiner Pope deduces frontier LLM training and serving mechanics from roofline analysis, revealing batch size as the core driver of latency-cost tradeoffs, with optimal batches of ~2000 tokens amortizing weights for massive 
gains.",[],"qeSPy0ZxQcYxrXRD8vDDE3TXXiiSijULELBTTzq62BE",{"id":1907,"title":1908,"ai":1909,"body":1914,"categories":2013,"created_at":177,"date_modified":177,"description":167,"extension":178,"faq":177,"featured":179,"kicker_label":177,"meta":2014,"navigation":207,"path":2043,"published_at":2044,"question":177,"scraped_at":2045,"seo":2046,"sitemap":2047,"source_id":2048,"source_name":2049,"source_type":293,"source_url":2050,"stem":2051,"tags":2052,"thumbnail_url":177,"tldr":2054,"tweet":177,"unknown_tags":2055,"__hash__":2056},"summaries\u002Fsummaries\u002F57efa85fbbf99fa5-scaffold-ai-agent-prod-infra-in-60s-with-google-st-summary.md","Scaffold AI Agent Prod Infra in 60s with Google Starter Pack",{"provider":9,"model":10,"input_tokens":1910,"output_tokens":1911,"processing_time_ms":1912,"cost_usd":1913},6245,2114,24890,0.00179915,{"type":16,"value":1915,"toc":2007},[1916,1920,1927,1930,1934,1937,1975,1978,1982,1992,1996,2004],[19,1917,1919],{"id":1918},"slash-3-9-month-ai-agent-infra-tax-to-60-seconds","Slash 3-9 Month AI Agent Infra Tax to 60 Seconds",[24,1921,1922,1923,1926],{},"AI agent prototypes fail to ship because teams spend 3-9 months on four core challenges: customization (secure data connections), evaluation (pre-production quality checks), deployment (scalable infra with CI\u002FCD), and observability (real-time monitoring). Agent Starter Pack, an Apache 2.0 project generator from Google Cloud Platform (6,100 GitHub stars, 1,400 forks, weekly releases for a year), solves this with one CLI command: ",[320,1924,1925],{},"uvx agent-starter-pack create",". 
It scaffolds everything around your agent logic, independent of frameworks like LangGraph or CrewAI, letting you focus on business logic.",[24,1928,1929],{},"Run the command, pick a template and deployment target (two prompts only), and get seven components instantly: FastAPI backend with auth, chat UI frontend, Terraform for GCP resources, Cloud Build\u002FGitHub Actions CI\u002FCD, Vertex AI evaluation framework, Cloud Logging\u002FTrace observability, and auto-generated docs. No manual YAML, boilerplate, or late-night Terraform debugging—output deploys directly.",[19,1931,1933],{"id":1932},"leverage-6-battle-tested-agent-templates","Leverage 6 Battle-Tested Agent Templates",[24,1935,1936],{},"Choose from six complete, working templates matching your architecture:",[57,1938,1939,1945,1951,1957,1963,1969],{},[60,1940,1941,1944],{},[63,1942,1943],{},"ADK",": Base ReAct agent via Google's Agent Development Kit.",[60,1946,1947,1950],{},[63,1948,1949],{},"ADK + A2A",": Adds Agent-to-Agent (A2A) protocol for cross-framework communication (e.g., ADK agent invokes LangGraph\u002FCrewAI agents via standardized tasks).",[60,1952,1953,1956],{},[63,1954,1955],{},"Agentic RAG",": Integrates Vertex AI Search\u002FVector Search for secure document Q&A.",[60,1958,1959,1962],{},[63,1960,1961],{},"LangGraph",": ReAct flow using LangChain's stateful orchestration.",[60,1964,1965,1968],{},[63,1966,1967],{},"ADK Java",": ReAct pattern for Java teams.",[60,1970,1971,1974],{},[63,1972,1973],{},"ADK Live",": Multimodal (audio\u002Fvideo\u002Ftext) real-time chat with Gemini.",[24,1976,1977],{},"All share identical production scaffolding. 
A2A enables multi-agent coordination out of the box, future-proofing for distributed systems (the protocol is getting an upgrade, per the Google Cloud Blog).",[19,1979,1981],{"id":1980},"pick-cloud-run-or-agent-engine-for-flexible-deployment","Pick Cloud Run or Agent Engine for Flexible Deployment",[24,1983,1984,1985,1987,1988,1991],{},"Generate for ",[63,1986,189],{}," (containerized FastAPI): Full control over scaling, networking, resources; pay-per-use; ideal if you know GCP. Or ",[63,1989,1990],{},"Vertex AI Agent Engine"," (fully managed): Auto-scaling, security (VPC Service Controls), no infra ops; deploy and forget. Switch targets with one CLI flag. Built-in Vertex AI eval runs quality checks pre\u002Fpost-deploy. Observability defaults: Cloud Trace for request paths, Cloud Logging for searchable logs, Looker dashboards for analytics—avoids 6-month regrets from skipped monitoring.",[19,1993,1995],{"id":1994},"stack-up-against-langgraphcrewaiknow-the-trade-offs","Stack Up Against LangGraph\u002FCrewAI—Know the Trade-offs",[24,1997,1998,1999,2003],{},"Unlike orchestration frameworks, Starter Pack wraps ",[2000,2001,2002],"em",{},"any"," framework (LangGraph for mature state persistence\u002Fcheckpointing but verbose schemas\u002Fnodes\u002Fedges; CrewAI for simple roles but weak long-running state, leading to migrations). Use LangGraph inside Starter Pack for best of both.",[24,2005,2006],{},"Caveats: GCP lock-in (Vertex AI, Cloud Run—no AWS\u002FAzure); no official Google support\u002FSLAs (\"demonstrative\" repo); Python-first (Java template secondary); infra incurs costs (Vertex AI, etc.). Skip it if avoiding vendor lock-in or building outside GCP. 
For GCP teams, it accelerates shipping without reinventing wheels—test via GitHub repo.",{"title":167,"searchDepth":168,"depth":168,"links":2008},[2009,2010,2011,2012],{"id":1918,"depth":168,"text":1919},{"id":1932,"depth":168,"text":1933},{"id":1980,"depth":168,"text":1981},{"id":1994,"depth":168,"text":1995},[176,273],{"content_references":2015,"triage":2040},[2016,2019,2022,2025,2029,2032,2035,2037],{"type":183,"title":2017,"url":2018,"context":201},"Agent Starter Pack","https:\u002F\u002Fgithub.com\u002FGoogleCloudPlatform\u002Fagent-starter-pack",{"type":198,"title":2020,"url":2021,"context":185},"Official Docs","https:\u002F\u002Fgooglecloudplatform.github.io\u002Fagent-starter-pack\u002F",{"type":198,"title":2023,"url":2024,"context":185},"Why Starter Pack Guide","https:\u002F\u002Fgooglecloudplatform.github.io\u002Fagent-starter-pack\u002Fguide\u002Fwhy_starter_pack.html",{"type":198,"title":2026,"author":2027,"url":2028,"context":185},"A2A Protocol Upgrade","Google Cloud Blog","https:\u002F\u002Fcloud.google.com\u002Fblog\u002Fproducts\u002Fai-machine-learning\u002Fagent2agent-protocol-is-getting-an-upgrade",{"type":198,"title":2030,"url":2031,"context":185},"Product Hunt Launch","https:\u002F\u002Fwww.producthunt.com\u002Fproducts\u002Fagent-starter-pack",{"type":183,"title":2033,"url":2034,"context":185},"Google ADK (Agent Development Kit)","https:\u002F\u002Fgoogle.github.io\u002Fadk-docs\u002F",{"type":183,"title":1990,"url":2036,"context":185},"https:\u002F\u002Fcloud.google.com\u002Fvertex-ai\u002Fgenerative-ai\u002Fdocs\u002Fagent-engine\u002Foverview",{"type":183,"title":2038,"url":2039,"context":201},"Dynamous AI","https:\u002F\u002Fdynamous.ai\u002F?code=646a60",{"relevance":203,"novelty":204,"quality":204,"actionability":203,"composite":2041,"reasoning":2042},4.55,"Category: AI & LLMs. 
The article provides a detailed overview of Google's Agent Starter Pack, which directly addresses the pain point of lengthy infrastructure setup for AI agents by offering a practical solution that can be implemented immediately. The step-by-step command and the description of the components generated make it highly actionable for developers looking to streamline their AI agent deployment.","\u002Fsummaries\u002F57efa85fbbf99fa5-scaffold-ai-agent-prod-infra-in-60s-with-google-st-summary","2026-04-19 16:48:34","2026-04-21 15:22:17",{"title":1908,"description":167},{"loc":2043},"8bb17917095e04bd","DIY Smart Code","https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=3XcpwHu9ahQ","summaries\u002F57efa85fbbf99fa5-scaffold-ai-agent-prod-infra-in-60s-with-google-st-summary",[219,298,221,2053],"open-source","Google's Agent Starter Pack CLI generates full production-ready AI agent stack—FastAPI backend, Terraform IaC, CI\u002FCD, Vertex AI eval, observability—in 60 seconds, cutting typical 3-9 month infra setup to minutes across 6 templates.",[],"2WYxEySmkGl-6PT6g6swlwIGu6Z8QlW4YinCWHlJ-Ks",{"id":2058,"title":2059,"ai":2060,"body":2065,"categories":2258,"created_at":177,"date_modified":177,"description":167,"extension":178,"faq":177,"featured":179,"kicker_label":177,"meta":2259,"navigation":207,"path":2278,"published_at":2279,"question":177,"scraped_at":2280,"seo":2281,"sitemap":2282,"source_id":2283,"source_name":214,"source_type":293,"source_url":2284,"stem":2285,"tags":2286,"thumbnail_url":177,"tldr":2287,"tweet":177,"unknown_tags":2288,"__hash__":2289},"summaries\u002Fsummaries\u002F268d90eeae6a5c77-gemma-4-prod-stack-model-armor-adk-agents-tracing-summary.md","Gemma 4 Prod Stack: Model Armor, ADK Agents, 
Tracing",{"provider":9,"model":10,"input_tokens":2061,"output_tokens":2062,"processing_time_ms":2063,"cost_usd":2064},8884,2621,18787,0.0025416,{"type":16,"value":2066,"toc":2251},[2067,2071,2074,2077,2120,2123,2126,2129,2132,2136,2139,2146,2149,2163,2166,2169,2172,2175,2179,2182,2185,2188,2195,2198,2202,2205,2208,2211,2214,2217,2219,2245,2248],[19,2068,2070],{"id":2069},"unifying-model-serving-with-load-balancer-routing","Unifying Model Serving with Load Balancer Routing",[24,2072,2073],{},"After deploying Gemma 4 separately via vLLM (optimized for production throughput, parallelism, memory) and Ollama (suited for dev\u002Ftesting) to Cloud Run services, the team routes traffic through a single regional external Application Load Balancer endpoint. This avoids managing multiple URLs in production.",[24,2075,2076],{},"Key decisions:",[57,2078,2079,2092,2102],{},[60,2080,2081,2084,2085,2088,2089,743],{},[63,2082,2083],{},"Network Endpoint Groups (NEGs)",": Serverless NEGs represent Cloud Run backends for the LB. Created via ",[320,2086,2087],{},"gcloud compute network-endpoint-groups create"," with ",[320,2090,2091],{},"--network-endpoint-type=SERVERLESS",[60,2093,2094,2097,2098,2101],{},[63,2095,2096],{},"Backend Services",": Defined for each Cloud Run service (",[320,2099,2100],{},"gcloud compute backend-services create","), attached to NEGs. Enables LB to communicate securely.",[60,2103,2104,2107,2108,2111,2112,2115,2116,2119],{},[63,2105,2106],{},"URL Map",": Routes based on path—e.g., ",[320,2109,2110],{},"\u002Fvllm\u002F"," to vLLM backend, ",[320,2113,2114],{},"\u002Follama\u002F"," to Ollama. Switch dev\u002Fprod by path prefix without endpoint changes. Command: ",[320,2117,2118],{},"gcloud compute url-maps create"," with host\u002Fpath rules.",[24,2121,2122],{},"Tradeoffs: Cloud Run scales multi-region natively, so LB adds setup overhead (NEGs, backends, proxy subnet, HTTPS certs, target proxy, forwarding rules). 
But it provides a single invocable HTTPS endpoint and service extensions. Without LB, use direct Cloud Run URLs, losing unified routing.",[24,2124,2125],{},"Proxy-only subnet reserves private IPs for LB-to-Cloud Run communication in the VPC. SSL certs enable HTTPS termination at the target HTTPS proxy, which consults the URL map before forwarding (port 443).",[24,2127,2128],{},"\"The reason why we're doing that for this particular lab using a load balancer, it's actually acting as a very advanced URL or a traffic router. So we have two different services, but we really don't want to be maintaining two different endpoints in production.\"",[24,2130,2131],{},"—Ayo Adedeji, explaining single-endpoint benefits over direct Cloud Run access.",[19,2133,2135],{"id":2134},"network-level-security-with-model-armor-service-extension","Network-Level Security with Model Armor Service Extension",[24,2137,2138],{},"Model Armor scans every prompt\u002Fresponse for jailbreaks, prompt injection, PII leaks (e.g., SSNs, credit cards), harassment via LB service extension—triggered before backend routing.",[24,2140,2141,2142,2145],{},"Integration: Attach as extension to URL map (",[320,2143,2144],{},"gcloud compute url-maps add-service-extension","). Configurable thresholds\u002Factions: block malicious inputs, replace harmful outputs with defaults. Detects sensitive data in agent generations.",[24,2147,2148],{},"Alternatives considered:",[57,2150,2151,2157],{},[60,2152,2153,2156],{},[63,2154,2155],{},"SDK\u002FAPI",": Invoke via Python SDK or REST API in ADK callbacks (before-agent or after-model). No LB needed—e.g., filter inputs pre-agent call.",[60,2158,2159,2162],{},[63,2160,2161],{},"Direct in code",": Embed in app logic, but network-level is zero-code-change, applies to all backends.",[24,2164,2165],{},"Why LB extension? Enforces security at ingress without app modifications; scales with traffic. 
For non-LB setups, callbacks provide lifecycle hooks (e.g., pre-model scan).",[24,2167,2168],{},"\"Model Armor is really versatile; you can use it in many different ways. So there's a Model Armor Python SDK... There's also a Model Armor API that you can call... oftentimes... in a before-agent callback or after-model callback.\"",[24,2170,2171],{},"—Ayo Adedeji, on flexible Model Armor invocation beyond LB.",[24,2173,2174],{},"Results: Blocks malicious traffic pre-model; logs detections for audit. Config via templates for custom harms\u002FPII.",[19,2176,2178],{"id":2177},"model-agnostic-agents-with-adk-and-vllm-on-cloud-run","Model-Agnostic Agents with ADK and vLLM on Cloud Run",[24,2180,2181],{},"Agent Development Kit (ADK) builds agents atop any LLM (Gemini, Gemma 4). Here, it pairs with lightweight vLLM serving Gemma 4, deployed to Cloud Run via Cloud Build CI\u002FCD.",[24,2183,2184],{},"Pipeline: Cloud Build triggers deploys; vLLM handles inference. Preps for \"boss fight\"—agent vs. cloud dungeon agent.",[24,2186,2187],{},"Why vLLM? High token throughput, GPU efficiency for prod. ADK callbacks enable Model Armor hooks.",[24,2189,2190,2191,2194],{},"\"ADK is actually model agnostic... The trick is we're gonna be using ADK with LiteLLM and ",[855,2192,2193],{},"vLLM"," and you're gonna learn how to use that.\"",[24,2196,2197],{},"—Annie Wang, highlighting ADK flexibility for Gemma 4.",[19,2199,2201],{"id":2200},"production-observability-metrics-and-end-to-end-tracing","Production Observability: Metrics and End-to-End Tracing",[24,2203,2204],{},"Post-deploy: Prometheus sidecar scrapes vLLM metrics (token throughput, GPU utilization, TTFT, req\u002Fs, latency, output tokens\u002Freq)—feeds cost\u002Fperformance monitoring.",[24,2206,2207],{},"Cloud Trace with OpenTelemetry: Traces agent flows end-to-end.",[24,2209,2210],{},"Why these? Directly tie to costs (GPU, tokens); essential for agent ops at scale. 
Sidecar avoids custom exporters.",[24,2212,2213],{},"\"We want to track things such as time to first token... GPU utilization, requests per second, request latency, output tokens per request. The reason why we want to do this is because this all factors into how we control for and monitor performance, throughput, and costs.\"",[24,2215,2216],{},"—Ayo Adedeji, on metric selection for prod serving.",[19,2218,133],{"id":132},[57,2220,2221,2224,2227,2230,2233,2236,2239,2242],{},[60,2222,2223],{},"Use LB + URL maps for single-endpoint routing to multiple backends (e.g., vLLM prod vs. Ollama dev); path-based switching simplifies ops.",[60,2225,2226],{},"Integrate Model Armor as LB extension for zero-code network security; fallback to SDK\u002FAPI in ADK callbacks for direct Cloud Run.",[60,2228,2229],{},"Build model-agnostic agents with ADK + vLLM on Cloud Run; CI\u002FCD via Cloud Build for rapid iteration.",[60,2231,2232],{},"Monitor vLLM via Prometheus sidecar (GPU util, latency, tokens); add OpenTelemetry for agent traces.",[60,2234,2235],{},"Skip LB if no extensions\u002Frouting needed—Cloud Run scales alone—but LB unlocks Model Armor at ingress.",[60,2237,2238],{},"Reserve proxy-only subnet for secure LB-VPC comms; provision SSL certs for HTTPS.",[60,2240,2241],{},"Test in labs: Free GCP credits (non-GPU); full stack preps for agent battles\u002Fdungeons.",[60,2243,2244],{},"Prioritize observability pillars: security\u002Fsafety first, then metrics for cost control.",[24,2246,2247],{},"\"When we're talking about end-to-end agent system management... there's many different pillars... 
observability and security and safety.\"",[24,2249,2250],{},"—Ayo Adedeji, framing agent ops holistically.",{"title":167,"searchDepth":168,"depth":168,"links":2252},[2253,2254,2255,2256,2257],{"id":2069,"depth":168,"text":2070},{"id":2134,"depth":168,"text":2135},{"id":2177,"depth":168,"text":2178},{"id":2200,"depth":168,"text":2201},{"id":132,"depth":168,"text":133},[176,273],{"content_references":2260,"triage":2276},[2261,2264,2267,2270,2273],{"type":183,"title":2262,"url":2263,"context":201},"Agent Development Kit (ADK)","https:\u002F\u002Fgoo.gle\u002F4uflScr",{"type":183,"title":2265,"url":2266,"context":201},"Model Armor","https:\u002F\u002Fgoo.gle\u002F4mz57Ga",{"type":183,"title":2268,"url":2269,"context":201},"Cloud Trace","https:\u002F\u002Fgoo.gle\u002F4euYyCB",{"type":198,"title":2271,"url":2272,"context":185},"Hands-on AI Lab","https:\u002F\u002Fgoo.gle\u002Fguardians",{"type":198,"title":2274,"url":2275,"context":185},"GCP Credits","https:\u002F\u002Fgoo.gle\u002Fhandson-ep8-lab1",{"relevance":203,"novelty":204,"quality":204,"actionability":203,"composite":2041,"reasoning":2277},"Category: AI Automation. The article provides a detailed guide on deploying AI agents with specific tools and configurations, addressing practical concerns like security and observability, which are crucial for product builders. 
It includes actionable commands and tradeoffs, making it highly relevant and immediately applicable.","\u002Fsummaries\u002F268d90eeae6a5c77-gemma-4-prod-stack-model-armor-adk-agents-tracing-summary","2026-04-18 19:00:09","2026-04-19 03:42:07",{"title":2059,"description":167},{"loc":2278},"268d90eeae6a5c77","https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=7wENq-LMHgQ","summaries\u002F268d90eeae6a5c77-gemma-4-prod-stack-model-armor-adk-agents-tracing-summary",[220,219,298,221,1070],"Deploy secure, observable Gemma 4 agents on Cloud Run using load balancers for Model Armor integration, ADK for model-agnostic agents with vLLM, and Prometheus\u002FCloud Trace for metrics like GPU util and latency.",[],"kehgkdafSGcdmGRx8O8cwHNRvKfDZZ4PZMsrWWOjYc0",{"id":2291,"title":2292,"ai":2293,"body":2298,"categories":2343,"created_at":177,"date_modified":177,"description":167,"extension":178,"faq":177,"featured":179,"kicker_label":177,"meta":2344,"navigation":207,"path":2348,"published_at":2349,"question":177,"scraped_at":2350,"seo":2351,"sitemap":2352,"source_id":2353,"source_name":1148,"source_type":293,"source_url":2354,"stem":2355,"tags":2356,"thumbnail_url":177,"tldr":2357,"tweet":177,"unknown_tags":2358,"__hash__":2359},"summaries\u002Fsummaries\u002F73f55123201134f9-mount-s3-buckets-as-file-systems-with-aws-s3-files-summary.md","Mount S3 Buckets as File Systems with AWS S3 Files",{"provider":9,"model":10,"input_tokens":2294,"output_tokens":2295,"processing_time_ms":2296,"cost_usd":2297},3939,1507,8922,0.00151865,{"type":16,"value":2299,"toc":2338},[2300,2304,2307,2314,2318,2321,2324,2328,2331],[19,2301,2303],{"id":2302},"s3-files-delivers-native-file-system-access-to-s3","S3 Files Delivers Native File System Access to S3",[24,2305,2306],{},"AWS S3 Files transforms object storage into a POSIX-compliant file system mountable on EC2 instances, containers, and Lambda functions. 
This eliminates custom hacks like FUSE wrappers or periodic sync scripts, providing low-latency read\u002Fwrite access indistinguishable from local disks for AI\u002FML, data engineering, and DevOps workloads. Under the hood, it leverages S3's metadata for directory structures and supports standard file operations without data migration—your existing buckets work immediately.",[24,2308,2309,2310,2313],{},"To implement, grant IAM roles with s3:PutObject, s3:GetObject, etc., permissions scoped to the bucket prefix, then mount via AWS CLI or SDK: ",[320,2311,2312],{},"aws s3files mount s3:\u002F\u002Fyour-bucket \u002Fmnt\u002Fpoint",". This cuts integration time from hours of scripting to minutes, enabling seamless data access in containerized ML training pipelines or serverless inference.",[19,2315,2317],{"id":2316},"realistic-use-cases-in-aiml-and-devops","Realistic Use Cases in AI\u002FML and DevOps",[24,2319,2320],{},"For AI\u002FML teams, mount training datasets directly into Jupyter on EC2 or SageMaker, avoiding costly EBS volumes or data downloads—process petabyte-scale S3 data at near-native speeds. DevOps benefits include containerized ETL jobs reading\u002Fwriting S3 as local files without volume mounts, and Lambda functions handling file I\u002FO for event-driven processing without temporary storage hacks.",[24,2322,2323],{},"Trade-offs: Strong consistency for small files (\u003C100MB), eventual consistency for large ones; throughput caps at S3's request rates (3,500 PUT\u002FGET per prefix\u002Fsec). Ideal for read-heavy ML feature stores or log processing, less so for high-write transactional DBs.",[19,2325,2327],{"id":2326},"avoid-common-pitfalls-security-cost-data-risks","Avoid Common Pitfalls: Security, Cost, Data Risks",[24,2329,2330],{},"Misconfigurations amplify S3's pitfalls: Broad IAM policies expose buckets publicly—use least-privilege with bucket policies denying public access and encrypting at-rest\u002Ftransit. 
Costs spike from unoptimized PUTs (e.g., frequent small writes); batch operations and use Intelligent-Tiering to mitigate, monitoring via CloudWatch for >$0.023\u002FGB PUT fees.",[24,2332,2333,2334,2337],{},"Data loss hits from concurrent writes without locks—implement app-level semaphores or use S3 atomic operations. Test mounts in staging: unmount with ",[320,2335,2336],{},"aws s3files unmount \u002Fmnt\u002Fpoint"," to verify no corruption. Always enable versioning and MFA-delete on production buckets.",{"title":167,"searchDepth":168,"depth":168,"links":2339},[2340,2341,2342],{"id":2302,"depth":168,"text":2303},{"id":2316,"depth":168,"text":2317},{"id":2326,"depth":168,"text":2327},[273],{"content_references":2345,"triage":2346},[],{"relevance":203,"novelty":204,"quality":204,"actionability":204,"composite":205,"reasoning":2347},"Category: DevOps & Cloud. The article provides a detailed explanation of how AWS S3 Files can be used to enhance AI\u002FML workflows by transforming S3 buckets into file systems, addressing a specific pain point for developers looking to streamline data access. 
It includes practical implementation steps and highlights potential pitfalls, making it actionable for the target audience.","\u002Fsummaries\u002F73f55123201134f9-mount-s3-buckets-as-file-systems-with-aws-s3-files-summary","2026-04-18 18:01:01","2026-04-19 01:22:18",{"title":2292,"description":167},{"loc":2348},"73f55123201134f9","https:\u002F\u002Fpub.towardsai.net\u002Faws-s3-files-explained-the-smarter-way-to-turn-s3-buckets-into-file-systems-3459560f7046?source=rss----98111c9905da---4","summaries\u002F73f55123201134f9-mount-s3-buckets-as-file-systems-with-aws-s3-files-summary",[298,221],"AWS S3 Files mounts buckets directly as file systems on EC2, containers, and Lambda—eliminating FUSE hacks and sync scripts for AI\u002FML workflows, but misconfigurations risk exposing, corrupting, or losing data.",[],"i9cGWnIzxtJTktm1csYQQ2MxZ9wdsHkt8i8DFEsZ0tQ",{"id":2361,"title":2362,"ai":2363,"body":2368,"categories":2750,"created_at":177,"date_modified":177,"description":167,"extension":178,"faq":177,"featured":179,"kicker_label":177,"meta":2751,"navigation":207,"path":2769,"published_at":2770,"question":177,"scraped_at":2280,"seo":2771,"sitemap":2772,"source_id":2773,"source_name":214,"source_type":293,"source_url":2774,"stem":2775,"tags":2776,"thumbnail_url":177,"tldr":2777,"tweet":177,"unknown_tags":2778,"__hash__":2779},"summaries\u002Fsummaries\u002F17040afbe49e30f1-self-host-gemma-4-on-cloud-run-gpus-ollama-vs-vllm-summary.md","Self-Host Gemma 4 on Cloud Run GPUs: Ollama vs vLLM",{"provider":9,"model":10,"input_tokens":2364,"output_tokens":2365,"processing_time_ms":2366,"cost_usd":2367},8944,2783,21888,0.00288915,{"type":16,"value":2369,"toc":2742},[2370,2374,2377,2380,2383,2386,2390,2393,2448,2451,2454,2461,2465,2468,2473,2500,2513,2519,2545,2548,2554,2557,2560,2564,2567,2573,2612,2619,2626,2629,2632,2636,2696,2699,2702,2705,2707,2740],[19,2371,2373],{"id":2372},"choose-open-models-like-gemma-4-for-control-and-cost-predictability","Choose Open Models like Gemma 4 
for Control and Cost Predictability",[24,2375,2376],{},"Self-hosting open models like Google's Gemma 4 gives you full control over customization, fine-tuning, and data privacy—critical for regulated industries like healthcare or finance where sending data to closed models like Gemini isn't viable. Closed models excel out-of-the-box with state-of-the-art performance but limit tuning beyond prompts. Open models cap costs at infrastructure levels (no per-API-call scaling) and integrate as the \"brain\" in agentic systems via wrappers like Google's Agent Development Kit (ADK), which supports any LLM, not just Gemini.",[24,2378,2379],{},"Key principles: Evaluate models by performance, use case, cost, and capacity. Gemma 4 (2B parameter version here) fits L4 GPUs on Cloud Run, enabling scale-to-zero serverless inference. Use Ollama for dev\u002FPOC (easy local testing, multi-GPU) or vLLM for production (PagedAttention for memory efficiency, dynamic batching, high concurrency).",[24,2381,2382],{},"\"Open model like Gemma is easy to take control, you can even fine-tune it.\" — Annie Wang",[24,2384,2385],{},"Common mistake: Assuming agent frameworks lock you into proprietary models—ADK's LiteLLM wrapper connects any model seamlessly.",[19,2387,2389],{"id":2388},"shared-gcp-foundation-project-setup-and-permissions","Shared GCP Foundation: Project Setup and Permissions",[24,2391,2392],{},"Start in Cloud Shell (persistent VS Code-like VM at console.cloud.google.com). 
Run the setup script to:

1. Authenticate gcloud (`gcloud auth login`).
2. Clone the repos: `agentverse-devops-sre` (templates, Cloud Build YAMLs) and `agentverse-dungeon` (agent fight files).
3. Create a project (`agentverse-guardians-<ID>`); link billing manually via Manage Resources if needed.
4. Set the project: `gcloud config set project <ID>`.
5. Enable APIs: Artifact Registry, Cloud Build, Cloud Run, Cloud Storage, Secret Manager (`gcloud services enable`).
6. Create an Artifact Registry repo: `gcloud artifacts repositories create <repo> --repository-format=docker`.
7. Grant the default service account roles: Storage Admin, Cloud Build Service Account, Logs Writer/Viewer, Secret Manager Secret Accessor.
8. Run `warmup.sh` to cache GCS FUSE.

Service accounts act as "robot accounts" for granular permissions—use separate ones in production. Enabling APIs incurs no immediate cost; billing starts on usage.

> "Every Google Cloud project has a default service account... that's essentially going to be like the operator behind many of your default actions." — Ayo Adedeji

Quality criteria: verify the project ID shows in yellow (Cloud Shell) and `gcloud config list` shows the correct project. Refresh the page if timeouts occur (70-minute security idle).

## Ollama Deployment: Bake Model for Instant Cold Starts

Ollama pulls and embeds Gemma 4 directly into the container—ideal for rapid iteration, but model updates require rebuilds.

**Dockerfile:**

```dockerfile
FROM ollama/ollama
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```

`entrypoint.sh` runs `ollama serve` and pulls `gemma2:2b`.

**cloudbuild-ollama.yaml:** Defines the CI/CD pipeline:

1. Build: `gcloud builds submit --config=cloudbuild-ollama.yaml .`
   - `docker build -t image .`
   - `docker push gcr.io/$PROJECT_ID/ollama`
2. Deploy to Cloud Run: `gcloud run deploy ollama --image=gcr.io/$PROJECT_ID/ollama --cpu=4 --memory=16Gi --gpu=1 --gpu-type=nvidia-l4 --concurrency=4 --min-instances=1 --max-instances=1 --allow-unauthenticated --region=us-central1`

Trade-offs: 16GB RAM for a 2B model; L4 GPU; concurrency=4. Scales to zero, but min=1 here for the lab (scale higher in prod). The build takes 15-20 minutes—monitor it in the Cloud Build console.

Test: `curl -X POST https://ollama-<hash>-uc.a.run.app/api/generate -d '{"model": "gemma2:2b", "prompt": "Why is the sky blue?"}'`

Before: local Ollama testing. After: a serverless endpoint ready for agents.

> "Ollama is great for development use cases. It's really easy to install and get up and running." — Ayo Adedeji

## vLLM Deployment: Decouple Model via GCS FUSE for Agility

vLLM loads the model from a Cloud Storage FUSE mount—slower initial boot (it caches on first run), but you can swap models by updating GCS without redeploying.

Prerequisites: a Hugging Face token in Secret Manager (`gcloud secrets create hf-token --data-file=<token>`).

1. Download Gemma 4 to GCS: a script pulls from Hugging Face (`huggingface-cli download google/gemma-2-2b-it`).
2. **Dockerfile:** base `vllm/vllm-openai`, mounts the GCS bucket via FUSE (`gcsfuse`), serves on `/v1`.
3. **cloudbuild-vllm.yaml:** similar pipeline, but the image pulls the HF token secret.
   - Deploy: add `--gpu=1 --gpu-type=nvidia-l4 --env-vars-file=vllm.env` (sets HF_TOKEN).

FUSE mounts GCS as a filesystem: `gcsfuse <bucket> /models`—warmup caches it for speed.

Test: the same curl to `/v1/chat/completions` with the OpenAI-compatible API.

> "vLLM is great for production use cases. It comes with PagedAttention... great for memory efficiency." — Ayo Adedeji

Common mistakes: forgetting the GPU allocation (L4), insufficient RAM (16Gi+), or skipping FUSE warmup—each leads to OOMs or slow boots.

## Production Trade-offs and Agent Integration

| Aspect | Ollama | vLLM |
| --- | --- | --- |
| Cold Start | Instant (baked model) | Slower (GCS mount) |
| Model Updates | Rebuild/deploy | GCS overwrite |
| Use Case | Dev/POC | Prod (concurrency) |
| Concurrency | Basic | Dynamic batching |

Optimize: use authenticated invokes; scale max-instances above 1; monitor costs (GPUs aren't free). Integrate as the agent "brain": ADK routes tools and reasoning to your Cloud Run endpoint.

> "The model you're choosing really like can determine the upper bound, the capability of your agentic system." — Annie Wang

Exercise: extend to the boss fight in Agentverse—deploy agent vs. agent via A2A.

## Takeaways

- Self-host Gemma 4 on Cloud Run L4 GPUs for predictable costs and privacy in agent systems.
- Use Ollama for fast dev deploys: bake the model into the Dockerfile, with CI/CD via Cloud Build YAML.
- Prefer vLLM for prod: mount GCS via FUSE and update models without rebuilds.
- Always set up IAM on the default service account; enabled APIs only incur costs on use.
- Configure Cloud Run: 4 CPU / 16Gi RAM / GPU=1 / concurrency=4; scale-to-zero with min=1 for labs.
- Test with curl to `/api/generate` (Ollama) or `/v1/chat/completions` (vLLM).
- Warm the GCS FUSE cache; monitor builds in the console (15-20 min).
- Integrate via the ADK LiteLLM wrapper to use any model as the agent brain.
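The two endpoints above accept different request shapes. A minimal client sketch for building both payloads, assuming deployed services; the URLs and `<hash>` placeholders are stand-ins for your own Cloud Run service URLs:

```python
import json
import urllib.request

# Hypothetical Cloud Run URLs -- replace <hash> with your service's hash.
OLLAMA_URL = "https://ollama-<hash>-uc.a.run.app/api/generate"
VLLM_URL = "https://vllm-<hash>-uc.a.run.app/v1/chat/completions"


def ollama_payload(prompt: str, model: str = "gemma2:2b") -> dict:
    """Body for Ollama's native /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}


def vllm_payload(prompt: str, model: str = "google/gemma-2-2b-it") -> dict:
    """Body for vLLM's OpenAI-compatible /v1/chat/completions endpoint."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}


def post(url: str, payload: dict) -> dict:
    """POST JSON and decode the JSON response (requires a live endpoint)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Print the vLLM request body; call post(VLLM_URL, ...) once deployed.
    print(json.dumps(vllm_payload("Why is the sky blue?"), indent=2))
```

Because vLLM speaks the OpenAI wire format, the same payload also works with any OpenAI-compatible client library pointed at your service's `/v1` base URL.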
*Source: https://www.youtube.com/watch?v=njWyDHKYeVA*

# Zero Leak Debt: Kill 100+ Leaked Secrets Platform-Wide

## Leak Debt Persists for Years, Undermining Platforms

Leaked secrets accumulate as "leak debt," remaining active long after exposure—transaction keys from 2022 continued processing payments undetected. Every platform accumulates this debt differently based on its stack, but it kills security and reliability. The author shares hands-on experience eliminating 100+ live leaks across local development, CI/CD pipelines, and production environments, revealing a universal pattern: sprawl leads to chaos until teams commit to zero tolerance.

Static secrets create ongoing risks because they expire unexpectedly or demand manual rotation, amplifying vulnerabilities. Platforms suffer uniquely—GitOps teams battle repo exposures, service meshes grapple with identity issues—but all chase the same outcome: secrets that self-manage without human intervention.

## Ruthless Audit and Prevention Path to Zero Debt

The transition from chaos requires three steps: discover the mess through comprehensive scans, audit ruthlessly to prioritize live threats (e.g., still-valid 2022 keys), and enforce prevention via dynamic tools. Teams adopt stack-specific solutions like HashiCorp Vault for centralized management, AWS or GCP Secrets Manager for cloud-native rotation, Sealed Secrets for GitOps, or SPIFFE for service meshes.

This isn't a generic checklist but proven patterns from production battles: replace static secrets entirely to eliminate leak debt. Outcomes include no leaks, automatic rotation, and zero manual interventions, securing platforms end-to-end. The content cuts off mid-journey but emphasizes sharing these learnings for peer teams facing identical sprawl.

*Source: https://levelup.gitconnected.com/most-leaked-secrets-live-for-years-the-hidden-leak-debt-killing-your-platform-47e74da51697*
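The "discover" step above can start with simple pattern scanning before you adopt a dedicated tool. A minimal sketch; the three rules here are illustrative only, and a real audit should use a maintained scanner such as gitleaks or trufflehog with a full ruleset:

```python
import re
from pathlib import Path

# Illustrative rules only -- real scanners ship hundreds of patterns
# plus entropy checks and allowlists.
PATTERNS = {
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "google_api_key": re.compile(r"AIza[0-9A-Za-z_\-]{35}"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}


def scan_text(text: str) -> list:
    """Return (rule_name, matched_string) pairs found in a blob of text."""
    return [
        (name, m.group(0))
        for name, rx in PATTERNS.items()
        for m in rx.finditer(text)
    ]


def scan_tree(root: str) -> dict:
    """Walk a directory tree; map file path -> hits, skipping unreadable files."""
    hits = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            found = scan_text(path.read_text(errors="ignore"))
        except OSError:
            continue
        if found:
            hits[str(path)] = found
    return hits
```

Feeding the hit list into the "audit ruthlessly" step means checking each match for liveness (does the key still authenticate?) before prioritizing revocation.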
# Parasail Brokers GPUs for Cheap AI Inference at Scale

## Orchestrating Global Capacity Slashes Inference Costs

AI developers crave fast, cheap tokens for inference—Parasail delivers by brokering GPUs across 40 data centers in 15 countries, plus liquidity markets, without owning most of the hardware. CEO Mike Henry, an ex-Groq executive, focuses solely on inference (no training), serving seed/Series B startups without long-term contracts. This agility lets Parasail undercut big clouds and rivals like Fireworks AI or Baseten, who chase enterprise deals. Result: 500 billion tokens generated daily, avoiding demand peaks through smart workload allocation. Builders gain production-ready inference without vendor lock-in or peak pricing.

## Open Models + Hybrids Power Agent Explosion

Rising friction from frontier APIs—"rough sending 100,000s of requests"—drives open-source model adoption. Elicit CEO Andreas Stuhlmüller (after a $22M Series A) uses open models for initial screening on massive datasets (tens of thousands of papers for pharma clients), then frontier models for final answers. This hybrid cuts costs for agentic workflows, where tasks split over long horizons. Parasail's $32M Series A (led by Touring Capital and Kindred Ventures) fuels this shift as agents proliferate in software.

## Inference Demand Outpaces Supply, No Bubble

Investors predict inference will hit 20% of software build costs, exploding with content generation and robotics. Kindred's Steve Jang: demand far outstrips supply despite perceptions of an AI bubble. Parasail differentiates via its inference-only focus and startup-friendly terms, positioning for the "tokenmaxxing" era where open models escape lab constraints.
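The screen-then-answer pattern Elicit describes can be sketched as a two-stage pipeline. A minimal sketch, assuming you supply your own `screen` (cheap open-model scoring) and `answer` (frontier model) callables; the function names are illustrative, not an Elicit or Parasail API:

```python
def hybrid_answer(question, docs, screen, answer, keep=5):
    """Two-stage pipeline: a cheap model scores every document,
    and the expensive model only sees the top `keep` survivors."""
    # Stage 1: score all docs with the cheap screening model.
    scored = sorted(docs, key=lambda d: screen(question, d), reverse=True)
    # Stage 2: send only the shortlist to the frontier model.
    return answer(question, scored[:keep])


# Toy stand-ins so the sketch runs without any model backend.
def keyword_screen(question, doc):
    """Fraction of question words that appear in the doc."""
    words = question.lower().split()
    return sum(w in doc.lower() for w in words) / len(words)


def echo_answer(question, shortlist):
    return f"{len(shortlist)} docs considered for: {question}"
```

In practice `screen` would call a cheap self-hosted open model and `answer` a frontier API; the cost saving comes from the frontier model never seeing the bulk of the corpus.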
*Source: TechCrunch — https://techcrunch.com/2026/04/15/parasail-raises-32m-to-feed-tokenmaxxing-ai-developers/*

# Next '26 Sneak Peek: Agents, Demos, Hands-On AI Building

## Developer Keynote Sets Agentic Tone

Stephanie Wong, Richard Seroter, and Emma Twersky hype the must-watch "Get Real Agents in the Autonomous Era" keynote, promising live demos of interconnected AI tools. Emma emphasizes the through-line narrative: launches like Gemini 3.1 Flash-Lite, the Agent Development Kit (ADK), and the A2A protocol build toward autonomous apps. Richard calls it joyful and thematic, contrasting stiff suits with jeans-clad demos of real applications. All agree: unlike abstract talks, this shows production workflows, helping devs contextualize overwhelming updates. "We're going to blow you away with the example application," Richard says, teasing agent negotiations on budgets, trends, and design constraints.

Panelists converge on agents as Next '26's core: ADK for building, Model Context Protocol (MCP) for trusted data sources beyond LLM cutoffs. Richard notes MCP servers expanding for agentic apps, while Emma ties it to mobile/web via frameworks like Flutter, Dart, and Firebase. Divergence is minor—Emma focuses on generative UI personalization (e.g., Toyota RAV4 infotainment, food apps boosting sign-ups), Richard on stacks like GenKit vs. raw APIs. Consensus: skip theory; build agents that pivot autonomously.

> "My favorite thing about the developer keynote is that you actually get to see demos of actual things on stage," Emma notes, echoing a hands-on bias over hype.

## Showcase Floor: From Coffee to Rockets

The 67,000 sq ft floor divides into Imagine (inspiration: Gemini robotics), Learn (deep dives: data analytics, security), and Build (hands-on: agentic hack zone). Stephanie details CLI Mission Control—use Gemini CLI for rocket launch sequences, with leaderboard competition. The ADK/A2A demo: agents negotiate to ship games. Emma's picks: Agentic Mobile/Web (phone verification, full-stack Firebase) and Gen Latte—an AI barista app for generative UI coffee orders. "Reimagined coffee shop with newest tech—you agentically code and get coffee," she pitches, solving conference caffeine chaos.

Richard and Emma praise serendipity: agendas derail into valuable hallway chats and unknown demos. The Skills Challenge gamifies it—earn swag and badges across activations. The Developer Theater hosts 75 lightning talks (e.g., AI-assisted apps, scalable agents/APIs). All panelists agree the floor trumps sessions: "You could spend the entire time... get so much value," Stephanie says. Trade-off: overwhelm is possible, but wandering yields unexpected wins like the Flutter/Toyota stories.

> "I always end up getting lost in the floor and that's... the most valuable time," Emma shares.

## Top Sessions, Workshops, and Networking

Emma's agenda: the Spotify AI customer story, DeepMind's Gemini updates, Flutter's A-to-UI/gen UI talk (personalization for Toyota, food apps), and Toyota Connected on RAV4 Flutter infotainment. She hosts the Flutter meetup and loves discussion groups as formal hallway extensions post-talks. Richard overlaps on personalization and adds MCP integration, GKE inference, Cloud Run zero-to-prod, SRE/data scientist agent content, and Go agent stack selection. Both flag workshops like the Gemini 3 hands-on: "Learn from model teams how to work with it."

Stephanie covers networking: the Google Developer Program (profile links Cloud/Firebase/Android/AI, badges/codelabs), the reinvented Builder Hub (Hacky Hour quests for swag and magic), expanded meetups (Flutter, security, women in tech), and Birds of a Feather/discussion groups. Skills Zone: Nvidia/McLaren/Team USA workshops. Richard values non-coder empowerment (vibe coding for PMs/execs); Emma, customer journeys.

Agreement: best practices come from customers/partners (Anthropic) and Google internals (AI docs evolution). Predictions: agents hit production via GKE/Cloud Run/databases; generative UI shifts app design to dynamic personalization. Trade-offs: the pace exhausts ("barely keep up"), but events curate the signal.

> "How do you pick the right stacks? GenKit? ADK? Raw APIs?" Richard questions, highlighting choice paralysis.

## Takeaways

- Prioritize the developer keynote for the agent demo narrative: ADK + A2A + MCP builds autonomous flows.
- Hit the Build zone first: CLI Mission Control and Gen Latte for instant AI prototyping.
- Join skills challenges and Hacky Hour for swag and networking; link them to your Developer Program profile.
- Attend gen UI/Flutter sessions for personalization patterns (e.g., Toyota RAV4).
- Use discussion groups for deep post-talk dives; wander the floor for serendipity.
- Explore MCP for agent data trust; GKE/Cloud Run for prod inference.
- Balance agendas with workshops: hands-on Gemini 3 beats passive watching.
- Network via meetups and BoF sessions: Flutter, Go, women in tech, global engineering.
- Empower non-devs: vibe coding sessions for faster ideation.

> "There's enough to do... an excuse to learn for a few days," Richard sums up the event's value.

*Source: https://www.youtube.com/watch?v=KPTW4L-BQO8*
# Kepler's 40-GPU Orbital Cluster Powers Edge AI in Space

## Distributed Edge Compute Outpaces Mega Data Centers

Orbital compute starts with edge processing on existing satellites, not the massive Earth-like data centers projected for the 2030s by SpaceX or Blue Origin. Kepler's cluster—40 Nvidia Orin edge GPUs on 10 operational satellites linked by laser comms—handles data collected in orbit, slashing latency for power-hungry sensors like synthetic aperture radar (SAR). This offloads processing from ground stations, runs GPUs at 100% utilization for inference (not training), and avoids kilowatt-scale power waste on idle super-GPUs. The result: faster responsiveness for private firms and U.S. military applications, like missile defense via space-to-air laser demos.

Kepler positions itself as space network infrastructure, serving its own satellites, third-party sats, drones, and aircraft—not a full data center. Satellite makers now design around this, prioritizing always-on distributed inference over rare high-power training bursts.

## Kepler-Sophia Partnership Validates Orbital Software

With 18 customers, Kepler added Sophia Space to test its passively cooled orbital OS across 6 GPUs on 2 satellites—the first-ever orbital software deployment akin to terrestrial clusters. Sophia solves overheating without heavy active cooling, de-risking its 2027 satellite launch. For Kepler, it showcases network utility for ground uploads, hosted payloads, and future inter-satellite links.

This hands-on validation accelerates adoption: process SAR data in orbit for real-time threat tracking, bypassing Earth bottlenecks.

## Earth Constraints Boost Space Compute Appeal

Terrestrial limits—like Wisconsin's new data center ban and congressional pushes—make orbital alternatives viable faster. Kepler and Sophia focus on practical edge wins today, contrasting with capital-heavy plays like Starcloud ($170M Series A for space data centers) or Aetherflux ($2B Series B valuation). Builders gain low-latency AI at the edge; scale comes via laser-linked constellations running inference nonstop.
*Source: https://techcrunch.com/2026/04/13/the-largest-orbital-compute-cluster-is-open-for-business/*

# Anthropic Eyes Custom Chips Amid $30B Claude Surge

## Revenue Explosion Drives Compute Scale

Anthropic's Claude annualized revenue run rate exceeded $30 billion, tripling from $9 billion at end-2025 in just four months. This hyperscale growth creates massive compute demands, pushing the company to mix Google/Broadcom TPUs, Amazon custom chips, and Nvidia GPUs, matching workloads to the optimal hardware. Builders scaling AI products should note: at $30B+ run rates, even unprofitable firms can justify $500 million in custom chip development costs for specialized efficiency, hiring engineers and validating manufacturing.

## Massive TPU Deal Locks In Future Capacity

Anthropic committed to 3.5 gigawatts of TPU compute from Google and Broadcom starting in 2027—triple its ~1 gigawatt 2026 usage—via a long-term deal flagged in Broadcom's SEC filing. Access hinges on sustained commercial success, building on November 2025's $50 billion US infrastructure pledge. For technical founders, this shows how to hedge risk in multi-GW deals: secure capacity early, but tie it to revenue milestones to manage vendor dependencies.

## Early-Stage Push Mirrors Industry Shift

Exploration remains preliminary—no design or team yet—with the option to stick to third-party purchases. It echoes Meta's and OpenAI's custom silicon efforts, positioning Broadcom centrally (also an OpenAI partner, with an unnamed XPU client). AI builders gain independence from Nvidia GPUs via proprietary chips when workloads demand it, but must weigh $500M upfront against the flexibility of buying.
However, it lacks specific actionable insights or frameworks that the audience could implement in their own projects.","\u002Fsummaries\u002Fd13db648579ff082-anthropic-eyes-custom-chips-amid-30b-claude-surge-summary","2026-04-10 14:50:53","2026-04-14 14:30:45",{"title":3098,"description":167},{"loc":3144},"d13db648579ff082","__oneoff__","https:\u002F\u002Fthenextweb.com\u002Fnews\u002Fanthropic-custom-ai-chips-30-billion-revenue","summaries\u002Fd13db648579ff082-anthropic-eyes-custom-chips-amid-30b-claude-surge-summary",[220,2896,221],"Anthropic explores in-house AI chips at early stage as Claude hits $30B annual run rate (up from $9B), securing 3.5GW TPU compute while custom silicon costs ~$500M.",[],"YbBk1hXTNlPxJ8bXSI3_MG5hZWfIOkvcqlmbs7MLWHA",{"id":3158,"title":3159,"ai":3160,"body":3165,"categories":3340,"created_at":177,"date_modified":177,"description":3341,"extension":178,"faq":177,"featured":179,"kicker_label":177,"meta":3342,"navigation":207,"path":3343,"published_at":3344,"question":177,"scraped_at":3345,"seo":3346,"sitemap":3347,"source_id":3348,"source_name":214,"source_type":215,"source_url":3349,"stem":3350,"tags":3351,"thumbnail_url":177,"tldr":3353,"tweet":177,"unknown_tags":3354,"__hash__":3355},"summaries\u002Fsummaries\u002F9c16c4c155dcf489-scaling-tpus-on-gke-for-massive-ai-workloads-summary.md","Scaling TPUs on GKE for Massive AI Workloads",{"provider":9,"model":10,"input_tokens":3161,"output_tokens":3162,"processing_time_ms":3163,"cost_usd":3164},8516,2468,54357,0.0029147,{"type":16,"value":3166,"toc":3332},[3167,3171,3174,3177,3180,3183,3187,3190,3193,3213,3216,3219,3222,3226,3229,3243,3246,3249,3252,3256,3259,3262,3265,3269,3272,3292,3295,3298,3301,3304,3306],[19,3168,3170],{"id":3169},"tpu-power-specialized-hardware-for-ai-matrix-crunching","TPU Power: Specialized Hardware for AI Matrix Crunching",[24,3172,3173],{},"Kavitha Gowda, product manager for TPUs on GKE, describes TPUs as Google's custom ASICs optimized for machine learning, 
particularly heavy matrix multiplications in LLMs and recommendation models. The core is the Matrix Multiply Unit (MXU), a \"dedicated matrix math wizard\" that processes billions of operations per image in recognition tasks thousands of times faster than general-purpose chips.",[24,3175,3176],{},"TPUs feature high-bandwidth memory (HBM) to handle large models and batches on-chip, minimizing data transfer bottlenecks. They interconnect from one chip to thousands via high-speed ICI links and optical circuit switching, enabling massive-scale training and inference. The seventh-generation Ironwood TPU pod supports 9,216 chips, with peak BF16 TFLOPS jumping dramatically—numbers Yufeng Guo initially mistook for typos due to the leap from prior generations like Trillium and v5e.",[24,3178,3179],{},"\"MXU is the hardware that makes TPUs so powerful. It's dedicated matrix math wizard that can perform this massive calculation in a single step, making the entire process thousands times faster and more efficient than a general-purpose chip,\" Gowda explains, highlighting the specialized architecture.",[24,3181,3182],{},"Frameworks like JAX, TensorFlow, and PyTorch are fully supported, integrating seamlessly with GKE, Vertex AI, and Cloud TPU APIs.",[19,3184,3186],{"id":3185},"gkes-atomic-slicing-hiding-complexity-for-exponential-scale","GKE's Atomic Slicing: Hiding Complexity for Exponential Scale",[24,3188,3189],{},"GKE abstracts TPU chip intricacies, exposing them as containerized workloads while preserving Kubernetes advantages. It treats TPU 'slices'—from single chips to 9,216-chip pods—as atomic units for provisioning, scheduling, failover, and resilience, maximizing interconnect performance.",[24,3191,3192],{},"Slice types scale progressively:",[57,3194,3195,3201,3207],{},[60,3196,3197,3200],{},[63,3198,3199],{},"Single-host TPU",": One VM with 1-8 chips at zero network latency, ideal for fine-tuning, interactive dev, or small inference. 
Scales like CPU VMs via horizontal pod autoscaling.",[60,3202,3203,3206],{},[63,3204,3205],{},"Multi-host TPU",": Multiple VMs (e.g., 16 VMs with 4 chips each for 64 chips) in one node pool, interconnected via ICI for larger training\u002Finference.",[60,3208,3209,3212],{},[63,3210,3211],{},"Multi-slice TPU",": Spans node pools (e.g., 50k-100k chips), with intra-pool ICI links and inter-pool data center networking. Developers must align workloads to high-speed (ICI) vs. slower (DCN) paths.",[24,3214,3215],{},"GKE supports 130k nodes, enabling thousands of TPUs as one unit for frontier models. JobSets and multi-slice networking provide atomic failover: if one VM fails in a 50k-chip slice, GKE auto-repairs the unit and resumes training, boosting 'goodput' (effective throughput) over raw throughput.",[24,3217,3218],{},"\"GKE hides the underlying complexity of the chip architecture and relays the TPU chip power to the container-based workloads,\" Gowda notes, emphasizing ecosystem perks like storage, load balancers, and observability.",[24,3220,3221],{},"Yufeng Guo stresses software-hardware co-design: \"We're really seeing this combination of having to have knowledge of the software as well as the hardware in order to be able to take full advantage of these systems.\"",[19,3223,3225],{"id":3224},"capacity-flexibility-dws-cuds-and-spot-for-cost-control","Capacity Flexibility: DWS, CUDs, and Spot for Cost Control",[24,3227,3228],{},"TPU availability spans options for reliability and economy:",[57,3230,3231,3237],{},[60,3232,3233,3236],{},[63,3234,3235],{},"Committed Use Discounts (CUDs)",": Reserved capacity for enterprise needs, from massive training to online inference.",[60,3238,3239,3242],{},[63,3240,3241],{},"Dynamic Workload Scheduler (DWS)",": New in 2025, with Flex (pay-as-you-go, up to 7 days for bursty POCs\u002Fexperiments) and Calendar (1-3 month reservations for guaranteed, uninterrupted runs).",[24,3244,3245],{},"GKE autoscales DWS Flex node pools only 
when workloads deploy, billing solely during execution—scale down post-job for zero idle costs. Calendar ensures dedicated, compact placement without maintenance interruptions, vital for month-long fine-tuning where failures would be \"crippling,\" as Guo observes.",[24,3247,3248],{},"Combine modes: Reserve Calendar for critical jobs, burst to Flex. All are backed by on-demand and spot.",[24,3250,3251],{},"\"DWS Flex is like an on-demand elasticity... Mostly used for bursty workloads, for experimentation, for POCs... you just pay for what you're running,\" Gowda clarifies.",[19,3253,3255],{"id":3254},"custom-compute-classes-automated-fallbacks-across-tiers","Custom Compute Classes: Automated Fallbacks Across Tiers",[24,3257,3258],{},"Custom compute classes define prioritized hierarchies (e.g., Trillium reservation > spot > DWS Flex > on-demand). GKE automatically falls back if primary capacity is unavailable, promoting to higher tiers when capacity frees up—optimizing for power, cost, or availability.",[24,3260,3261],{},"Users previously scripted this; now it's native, with GCP optimizing efficiency. Supports 3+ layers (latency trade-offs apply) and even GPU\u002FTPU fallback via vLLM for serving. Example: start on TPU reservations, then scale out to GPUs.",[24,3263,3264],{},"\"With custom compute classes, you can define prioritized hierarchy of TPU configuration... 
GKE can automatically fall back,\" Gowda says, noting use for low-priority jobs starting on spot then escalating.",[19,3266,3268],{"id":3267},"storage-and-ecosystem-fueling-data-intensive-workloads","Storage and Ecosystem: Fueling Data-Intensive Workloads",[24,3270,3271],{},"GKE optimizes AI I\u002FO:",[57,3273,3274,3280,3286],{},[60,3275,3276,3279],{},[63,3277,3278],{},"Secondary boot disks",": Preload data\u002Fimages per node for faster pod startup.",[60,3281,3282,3285],{},[63,3283,3284],{},"GCS Fuse + CSI driver",": Caches\u002Fparallel-downloads from object storage, yielding 9x faster model loads via PersistentVolumeClaims.",[60,3287,3288,3291],{},[63,3289,3290],{},"Managed Lustre",": Parallel filesystem for high-concurrency IO in training\u002Fcheckpointing.",[24,3293,3294],{},"Integrates open-source tools like KubeRay (Ray orchestrator) and vLLM (serving), plus dashboards.",[24,3296,3297],{},"Companies like Anthropic, Moloco, and Lightricks already use Kubernetes+TPUs.",[24,3299,3300],{},"Resources: Google AI Hypercomputer, GKE for AI\u002FML inference docs, TPU-on-GKE LLM fine-tuning tutorial.",[24,3302,3303],{},"\"By leveraging GKE's job set and multi-slice networking, you gain an atomic failover model... helps you resume your training if one infrastructure fails,\" Gowda adds on maximizing expensive TPU utilization.",[19,3305,133],{"id":132},[57,3307,3308,3311,3314,3317,3320,3323,3326,3329],{},[60,3309,3310],{},"Treat TPU slices as atomic units in GKE for provisioning up to 9k+ interconnected chips, aligning workloads to ICI (intra-pool) vs. 
DCN (inter-pool) speeds.",[60,3312,3313],{},"Use DWS Flex for bursty experiments (pay-as-you-go, autoscaling) and Calendar for 1-3 month guaranteed reservations to avoid crippling mid-training failures.",[60,3315,3316],{},"Implement custom compute classes for automatic fallbacks (e.g., reservation > spot > Flex) to optimize cost\u002Favailability without custom scripts.",[60,3318,3319],{},"Accelerate startup with secondary boot disks, GCS Fuse (9x model load speedup), and Managed Lustre for high-IO training.",[60,3321,3322],{},"Co-design software for TPU hardware: Leverage MXU\u002FHBM for matrix-heavy LLMs, scale via single\u002Fmulti-host\u002Fslices.",[60,3324,3325],{},"Combine CUDs for steady-state with DWS\u002Fspot for bursts; fallback to GPUs via vLLM for serving resilience.",[60,3327,3328],{},"Maximize goodput with GKE JobSets' atomic failover and auto-resume on VM failures.",[60,3330,3331],{},"Start with Ironwood\u002FTrillium pods on GKE for JAX\u002FTF\u002FPyTorch; reference tutorials for LLM fine-tuning.",{"title":167,"searchDepth":168,"depth":168,"links":3333},[3334,3335,3336,3337,3338,3339],{"id":3169,"depth":168,"text":3170},{"id":3185,"depth":168,"text":3186},{"id":3224,"depth":168,"text":3225},{"id":3254,"depth":168,"text":3255},{"id":3267,"depth":168,"text":3268},{"id":132,"depth":168,"text":133},[273],"Google AI Hypercomputer → https:\u002F\u002Fgoo.gle\u002F3ObrQLK  \nGKE for AI\u002FML inference → https:\u002F\u002Fgoo.gle\u002F4cg4k8y  \n[Tutorial] Fine tune a LLM using TPUs on GKE → https:\u002F\u002Fgoo.gle\u002F48hT4Hu\n\nTensor Processing Units (TPUs) are now in their 7th generation. They allow machine learning workloads to reach massive scale, especially when running on Google Kubernetes Engine (GKE). But how does that work, and what do you need to know in order to run TPUs on GKE successfully? 
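As a sketch of the single-host slice pattern described above, a GKE Pod manifest can request TPU chips through node selectors. This is a minimal sketch assuming GKE's documented TPU node selectors and the `google.com/tpu` resource name; the accelerator type, topology value, and container image are illustrative placeholders, not values from the episode.

```yaml
# Hypothetical single-host TPU slice request on GKE (values are placeholders).
apiVersion: v1
kind: Pod
metadata:
  name: tpu-finetune-sketch
spec:
  nodeSelector:
    cloud.google.com/gke-tpu-accelerator: tpu-v5-lite-podslice  # assumed accelerator type
    cloud.google.com/gke-tpu-topology: 2x4                      # 8-chip single-host slice
  containers:
    - name: trainer
      image: example.com/trainer:latest  # placeholder image
      resources:
        requests:
          google.com/tpu: 8
        limits:
          google.com/tpu: 8
```

Because GKE schedules the slice as one atomic unit, the requested chip count must match the topology; a multi-host slice would instead be driven by a JobSet spanning the node pool.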
\n\nJoin Yufeng Guo as he sits down with Kavitha Gowda, the product manager of TPUs on GKE, to get into the details of how to scale TPU workloads on GKE.\n\nSpeakers: Yufeng Guo, Kavitha Gowda\nProducts Mentioned: Google Kubernetes Engine, Cloud Tensor Processing Units, AI Hypercomputer",{},"\u002Fsummaries\u002F9c16c4c155dcf489-scaling-tpus-on-gke-for-massive-ai-workloads-summary","2026-04-09 19:00:41","2026-04-10 03:09:44",{"title":3159,"description":3341},{"loc":3343},"9c16c4c155dcf489","https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=coP5_SmE4AI","summaries\u002F9c16c4c155dcf489-scaling-tpus-on-gke-for-massive-ai-workloads-summary",[297,298,221,3352],"kubernetes","GKE treats TPU slices as atomic units for seamless scaling up to 9k+ chips, with flexible capacity like DWS Flex\u002FCalendar and custom fallbacks for cost-efficient ML training\u002Finference.",[3352],"a3lw8W4rx5X2n-REtU3W0luTS1Z52UcKBmv5OJIrpAI",{"id":3357,"title":3358,"ai":3359,"body":3364,"categories":3537,"created_at":177,"date_modified":177,"description":3538,"extension":178,"faq":177,"featured":179,"kicker_label":177,"meta":3539,"navigation":207,"path":3540,"published_at":3541,"question":177,"scraped_at":3542,"seo":3543,"sitemap":3544,"source_id":3545,"source_name":2049,"source_type":215,"source_url":3546,"stem":3547,"tags":3548,"thumbnail_url":177,"tldr":3550,"tweet":177,"unknown_tags":3551,"__hash__":3552},"summaries\u002Fsummaries\u002Fe5968758c24688f8-self-host-archon-v3-on-hetzner-vps-with-docker-summary.md","Self-Host Archon v3 on Hetzner VPS with Docker",{"provider":9,"model":10,"input_tokens":3360,"output_tokens":3361,"processing_time_ms":3362,"cost_usd":3363},7846,1531,13348,0.0023122,{"type":16,"value":3365,"toc":3531},[3366,3370,3388,3398,3402,3409,3442,3457,3461,3472,3478,3493,3496,3500,3503,3524],[19,3367,3369],{"id":3368},"automate-vps-provisioning-for-one-click-archon-deployment","Automate VPS Provisioning for One-Click Archon 
Deployment",[24,3371,3372,3373,753,3376,3379,3380,3383,3384,3387],{},"Hetzner VPS (CX11 at €2.50\u002Fmonth, pay-per-hour) handles Archon v3 basics: Caddy for HTTPS\u002FLet's Encrypt, Postgres DB, Docker stack. Create firewall opening ports 22 (SSH), 80 (HTTP), 443 (HTTPS). Use pre-built cloud-init.yaml from tasklist.smartcode.diy\u002Flist\u002Farchon-v3-cloud-setup—it runs apt upgrade, installs Docker\u002FCompose, clones Archon repo (github.com\u002Fcoleam00\u002FArchon), copies .env.example and Caddyfile.example, creates 'archon' user. Paste YAML into Hetzner server create dialog (Ubuntu 22.04, SSH keys, Nuremberg location). Server boots in minutes; monitor with ",[320,3374,3375],{},"cloud-init status --long",[320,3377,3378],{},"watch cloud-init status",". SSH as root (e.g., via MobaXterm with Pageant keys), ",[320,3381,3382],{},"su - archon",", verify ",[320,3385,3386],{},"\u002Fopt\u002Farchon"," exists. Trade-off: Basic setup, not production-hardened—add WAF (Hetzner), IP whitelisting, or VPN.",[24,3389,3390,3391,753,3394,3397],{},"Point subdomain (e.g., archon.yourdomain.com) A record to VPS public IP. Verify propagation: ",[320,3392,3393],{},"dig archon.yourdomain.com",[320,3395,3396],{},"nslookup",". 
DNS resolves in seconds on United Domains.",[19,3399,3401],{"id":3400},"secure-env-with-tokens-and-domain-for-production-access","Secure .env with Tokens and Domain for Production Access",[24,3403,3404,3405,3408],{},"Edit ",[320,3406,3407],{},"\u002Fopt\u002Farchon\u002F.env"," minimally:",[57,3410,3411,3417,3427,3436],{},[60,3412,3413,3416],{},[320,3414,3415],{},"GLOBAL_AUTH=false"," (initially; enable later).",[60,3418,3419,3422,3423,3426],{},[320,3420,3421],{},"CLOUD_OAUTH_TOKEN",": Run ",[320,3424,3425],{},"npx @11ty\u002Feleventy@latest --cloud-token"," on local machine.",[60,3428,3429,753,3432,3435],{},[320,3430,3431],{},"GH_TOKEN",[320,3433,3434],{},"GITHUB_TOKEN",": GitHub Settings > Developer Settings > Personal Access Tokens (Classic) > Generate new (repo scope, no expiration for testing).",[60,3437,3438,3441],{},[320,3439,3440],{},"DOMAIN=archon.yourdomain.com"," (line ~126).",[24,3443,3444,3445,3448,3449,3452,3453,3456],{},"Optional integrations (Telegram\u002FSlack): Rasmus's video covers. Start stack: ",[320,3446,3447],{},"docker compose --profile db,cloud,auth up -d",". Check: ",[320,3450,3451],{},"docker compose ps"," (all healthy), ",[320,3454,3455],{},"curl https:\u002F\u002Farchon.yourdomain.com\u002Fhealth"," (returns OK), browser loads Web UI with auto-SSL. Exposes endpoints 24\u002F7.",[19,3458,3460],{"id":3459},"add-form-based-auth-and-lock-down-access","Add Form-Based Auth and Lock Down Access",[24,3462,3463,3464,3467,3468,3471],{},"Generate bcrypt hash: ",[320,3465,3466],{},"htpasswd -bnBC 10 \"\" yourpass | tr -d ':\\n'"," (e.g., username 'archon', pass 'archon'). Hex secret: ",[320,3469,3470],{},"openssl rand -hex 32",". 
Add to .env (line ~145):",[325,3473,3476],{"className":3474,"code":3475,"language":330},[328],"AUTH_USER=archon\nAUTH_PASS=$2y$10$92ixRDXWuX[hash]\nAUTH_COOKIE_SECRET=yourhexsecret\n",[320,3477,3475],{"__ignoreMap":167},[24,3479,3480,3481,3484,3485,3488,3489,3492],{},"Replace Caddyfile with tasklist version (uncomments form auth reverse_proxy). Restart: ",[320,3482,3483],{},"docker compose --profile db,cloud,auth up -d --force-recreate auth"," (first-time) or ",[320,3486,3487],{},"--force-recreate caddy"," later. Logs: ",[320,3490,3491],{},"docker compose logs caddy",". Test incognito: Login screen blocks unauth access.",[24,3494,3495],{},"Extra security: Hetzner WAF + static IP\u002FVPN whitelist. Blocks public access effectively.",[19,3497,3499],{"id":3498},"update-restart-and-stop-without-downtime","Update, Restart, and Stop Without Downtime",[24,3501,3502],{},"Maintenance via archon user:",[57,3504,3505,3511,3517],{},[60,3506,3507,3508,743],{},"Update: ",[320,3509,3510],{},"git pull && docker compose --profile db,cloud,auth down && docker compose --profile db,cloud,auth up --build -d",[60,3512,3513,3514,743],{},"Restart: ",[320,3515,3516],{},"docker compose --profile db,cloud,auth restart",[60,3518,3519,3520,3523],{},"Stop: ",[320,3521,3522],{},"docker compose --profile db,cloud,auth down"," (includes DB\u002FCaddy).",[24,3525,3526,3527,3530],{},"Cloud-init skips manual steps (Option B in tasklist). External DB (Supabase\u002FNeon): Set ",[320,3528,3529],{},"DATABASE_URL"," in .env, omit 'db' profile. Full docs: archon.diy\u002Fbook. 
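After an update or restart, the `/health` endpoint from the walkthrough can be polled until Caddy and the stack come back up. A minimal sketch, assuming the endpoint returns HTTP 200 when healthy; the function name, retry policy, and injectable `opener` parameter are illustrative, not from the video.

```python
import time
import urllib.error
import urllib.request

def check_archon_health(url, tries=5, delay=2.0, opener=urllib.request.urlopen):
    """Poll <url>/health until it answers 200 or attempts run out."""
    for attempt in range(tries):
        try:
            with opener(f"{url}/health", timeout=5) as resp:
                if resp.status == 200:
                    return "healthy"
        except (urllib.error.URLError, OSError):
            pass  # stack still starting; retry after a short delay
        if attempt < tries - 1:
            time.sleep(delay)
    return "unreachable"
```

Usage: `check_archon_health("https://archon.yourdomain.com")` right after a `docker compose ... restart` confirms the stack is serving before you hand the URL to users.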
Scales for testing; monitor costs (delete VPS post-test saves €€€).",{"title":167,"searchDepth":168,"depth":168,"links":3532},[3533,3534,3535,3536],{"id":3368,"depth":168,"text":3369},{"id":3400,"depth":168,"text":3401},{"id":3459,"depth":168,"text":3460},{"id":3498,"depth":168,"text":3499},[273],"This video shows you how to install Archon v3 on your own server, making it accessible 24\u002F7 via a subdomain and its Web UI and other Endpoints. We'll walk through the process on a Hetzner VPS server, following a prepared Task List to ensure a straightforward setup for your server, which you can also use to follow the video. The goal is to get you up and running with Archon v3, covering all the essential steps for server management.\n\nHetzner Referral (Support the Channel): https:\u002F\u002Fhetzner.cloud\u002F?ref=nAOvh4nkSWmQ\nRasmus: https:\u002F\u002Fwww.youtube.com\u002F@UCbJSc2NyTZgz3Qu21kDId5Q \nCole: https:\u002F\u002Fwww.youtube.com\u002F@UCMwVTLZIRRUyyVrkjDpn4pA \n\n*Tasklist:* http:\u002F\u002Ftasklist.smartcode.diy\u002Flist\u002Farchon-v3-cloud-setup\n\n----\n🚀 Want to learn agentic coding with live daily events and workshops?\nCheck out Dynamous AI: https:\u002F\u002Fdynamous.ai\u002F?code=646a60\nGet 10% off here 👉 https:\u002F\u002Fshorturl.smartcode.diy\u002Fdynamous_ai_10_percent_discount\n----\n\nChapters\n0:00 Archon - How to set up Archon a a VPS Server?\n2:31 VPS Hetzner - Initial Server Configuration\n3:56 Cloud-Init Configuration for Server Start\n8:06 Domain Setup and DNS Records\n10:43 Configure .env (Environment Settings: Secrets, Tokens, ...)\n13:47 Github Access Token \n18:40 Form-Based Auth for Archon (Login)\n\nResources\n\n⭐ Archon on GitHub: https:\u002F\u002Fgithub.com\u002Fcoleam00\u002FArchon\n📖 The Archon Book: https:\u002F\u002Farchon.diy\u002Fbook\n🎓 Dynamous AI Community: https:\u002F\u002Fdynamous.ai\u002F?code=646a60\n💰 10% OFF Dynamous: 
https:\u002F\u002Fshorturl.smartcode.diy\u002Fdynamous_ai_10_percent_discount",{},"\u002Fsummaries\u002Fe5968758c24688f8-self-host-archon-v3-on-hetzner-vps-with-docker-summary","2026-04-09 03:00:05","2026-04-10 03:09:03",{"title":3358,"description":3538},{"loc":3540},"e5968758c24688f8","https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=5CYG0SrpW0Q","summaries\u002Fe5968758c24688f8-self-host-archon-v3-on-hetzner-vps-with-docker-summary",[298,221,1070,3549],"docker","Provision Hetzner VPS, apply cloud-init YAML for auto-setup of Archon v3 with Caddy HTTPS reverse proxy, Postgres DB, then configure .env secrets and optional form auth for secure 24\u002F7 access via subdomain.",[3549],"1AcL7qEopcAf0EBWNfrdu7IaGhfwIpLhhlC3lEWhkmY",{"id":3554,"title":3555,"ai":3556,"body":3561,"categories":3590,"created_at":177,"date_modified":177,"description":167,"extension":178,"faq":177,"featured":179,"kicker_label":177,"meta":3591,"navigation":207,"path":3592,"published_at":3593,"question":177,"scraped_at":177,"seo":3594,"sitemap":3595,"source_id":3596,"source_name":3597,"source_type":293,"source_url":3598,"stem":3599,"tags":3600,"thumbnail_url":177,"tldr":3601,"tweet":177,"unknown_tags":3602,"__hash__":3603},"summaries\u002Fsummaries\u002Fanthropic-tops-30b-arr-as-ai-hits-helium-wall-summary.md","Anthropic Tops $30B ARR as AI Hits Helium Wall",{"provider":9,"model":10,"input_tokens":3557,"output_tokens":3558,"processing_time_ms":3559,"cost_usd":3560},6461,1563,13545,0.0016033,{"type":16,"value":3562,"toc":3586},[3563,3567,3570,3573,3577,3580,3583],[19,3564,3566],{"id":3565},"anthropics-enterprise-ai-dominance-accelerates","Anthropic's Enterprise AI Dominance Accelerates",[24,3568,3569],{},"Anthropic's Claude Mythos model marks a massive leap in coding and reasoning capabilities, surpassing the Opus family and deemed too dangerous for public release. Instead, Project Glasswing equips 40 cybersecurity firms with early access to patch vulnerabilities. 
This follows Claude 3.5 Sonnet establishing state-of-the-art coding in 2025, with Claude Code (May 2025) and Claude Cowork (Jan 2026) driving adoption.",[24,3571,3572],{},"Revenue exploded to $30B ARR by April 2026, overtaking OpenAI's $25B after 30x growth in 15 months—adding $6B in February alone, matching 15-20 years of growth for firms like Palantir or Atlassian. Anthropic now serves 1,000 enterprises spending $1M+ each (doubled in two months), with 1000% CAGR over 2.5 years tied to coding model leadership. Enterprise focus sustains 'vibe-working' tools like Cursor, absent major competition from OpenAI, Google, Microsoft, or Meta.",[19,3574,3576],{"id":3575},"critical-bottlenecks-threaten-ai-expansion","Critical Bottlenecks Threaten AI Expansion",[24,3578,3579],{},"Helium supply crisis emerges as a physical constraint: Qatar (34% of global helium) halted production amid Iran war disruptions, spiking prices 50% initially and doubling since late February. Essential for semiconductors, healthcare, and aerospace, this inflates HBM costs and stalls AI datacenter builds—unaccounted for in most infrastructure plans.",[24,3581,3582],{},"Other headwinds include OpenAI's compute over-investment yielding mediocre profitability, exacerbated by a viral New Yorker exposé labeling CEO Sam Altman untrustworthy based on 18 months of investigation and ex-leaders' views—posing IPO risks alongside Elon Musk's profile.",[24,3584,3585],{},"A Forecasting Research Institute study of economists and AI experts predicts steady U.S. economic trends despite AI advances by 2030: GDP growth mirrors history, labor force participation drops gradually from 62% to 55% by 2050 under rapid progress. 
Scenarios favor moderate AI optimism, countering alarmist job loss narratives.",{"title":167,"searchDepth":168,"depth":168,"links":3587},[3588,3589],{"id":3565,"depth":168,"text":3566},{"id":3575,"depth":168,"text":3576},[519],{},"\u002Fsummaries\u002Fanthropic-tops-30b-arr-as-ai-hits-helium-wall-summary","2026-04-08 21:21:18",{"title":3555,"description":167},{"loc":3592},"d0df406b8098ab95","AI Supremacy","https:\u002F\u002Funknown","summaries\u002Fanthropic-tops-30b-arr-as-ai-hits-helium-wall-summary",[220,2896,221],"Anthropic overtakes OpenAI with 30x revenue growth to $30B ARR via top coding models, but Qatar's 34% helium cutoff doubles prices, bottlenecking AI datacenters.",[],"wTtu3EtLlh8BfUrpr8bnYY3tyss2iRcgIQvdVJWvlx4",{"id":3605,"title":3606,"ai":3607,"body":3612,"categories":3813,"created_at":177,"date_modified":177,"description":167,"extension":178,"faq":177,"featured":179,"kicker_label":177,"meta":3814,"navigation":207,"path":3815,"published_at":3593,"question":177,"scraped_at":177,"seo":3816,"sitemap":3817,"source_id":3818,"source_name":1148,"source_type":293,"source_url":3598,"stem":3819,"tags":3820,"thumbnail_url":177,"tldr":3822,"tweet":177,"unknown_tags":3823,"__hash__":3824},"summaries\u002Fsummaries\u002Fcut-snowflake-cortex-code-costs-with-prompts-and-l-summary.md","Cut Snowflake Cortex Code Costs with Prompts and Limits",{"provider":9,"model":10,"input_tokens":3608,"output_tokens":3609,"processing_time_ms":3610,"cost_usd":3611},4776,1640,9737,0.0017527,{"type":16,"value":3613,"toc":3807},[3614,3618,3621,3624,3641,3644,3648,3651,3663,3666,3727,3730,3745,3748,3752,3755,3758,3773,3776,3791,3798,3802,3805],[19,3615,3617],{"id":3616},"craft-precise-prompts-to-slash-token-consumption","Craft Precise Prompts to Slash Token Consumption",[24,3619,3620],{},"Cortex Code (CoCo) bills by tokens from both input prompts and outputs, so vague prompts trigger extra tool calls and higher costs. 
Bad example: \"Help me with my data.\" Good: \"Create staging model for RAW.SALES.ORDERS with not_null on ORDER_ID.\"",[24,3622,3623],{},"Follow these practices to minimize tokens:",[57,3625,3626,3629,3632,3635,3638],{},[60,3627,3628],{},"Use full table names (e.g., RAW.SALES.ORDERS).",[60,3630,3631],{},"Specify exact output format.",[60,3633,3634],{},"Keep prompts concise.",[60,3636,3637],{},"Include business logic upfront.",[60,3639,3640],{},"Reference AGENTS.md for consistent agent behavior.",[24,3642,3643],{},"This approach directly cuts credits since CoCo is serverless and doesn't use warehouses.",[19,3645,3647],{"id":3646},"query-usage-history-and-set-proactive-alerts","Query Usage History and Set Proactive Alerts",[24,3649,3650],{},"Track daily credits, per-user usage, and request counts with these ACCOUNT_USAGE tables (data lags 45 mins to 2 hours):",[57,3652,3653,3658],{},[60,3654,3655],{},[320,3656,3657],{},"SNOWFLAKE.ACCOUNT_USAGE.CORTEX_CODE_SNOWSIGHT_USAGE_HISTORY",[60,3659,3660],{},[320,3661,3662],{},"SNOWFLAKE.ACCOUNT_USAGE.CORTEX_CODE_CLI_USAGE_HISTORY",[24,3664,3665],{},"Example query for last 30 days:",[325,3667,3670],{"className":3668,"code":3669,"language":875,"meta":167,"style":167},"language-sql shiki shiki-themes github-light github-dark","SELECT\n  DATE(u.USAGE_TIME) AS usage_date,\n  us.NAME AS user_name,\n  ROUND(SUM(u.TOKEN_CREDITS), 4) AS daily_credits,\n  SUM(u.TOKENS) AS total_tokens,\n  COUNT(*) AS request_count\nFROM SNOWFLAKE.ACCOUNT_USAGE.CORTEX_CODE_SNOWSIGHT_USAGE_HISTORY u\nLEFT JOIN SNOWFLAKE.ACCOUNT_USAGE.USERS us ON u.USER_ID = us.USER_ID\nWHERE u.USAGE_TIME >= DATEADD('day', -30, CURRENT_TIMESTAMP())\nGROUP BY DATE(u.USAGE_TIME), us.NAME\nORDER BY usage_date DESC, daily_credits DESC;\n",[320,3671,3672,3677,3682,3687,3692,3697,3702,3707,3712,3717,3722],{"__ignoreMap":167},[855,3673,3674],{"class":857,"line":858},[855,3675,3676],{},"SELECT\n",[855,3678,3679],{"class":857,"line":168},[855,3680,3681],{},"  DATE(u.USAGE_TIME) AS 
usage_date,\n",[855,3683,3684],{"class":857,"line":284},[855,3685,3686],{},"  us.NAME AS user_name,\n",[855,3688,3689],{"class":857,"line":204},[855,3690,3691],{},"  ROUND(SUM(u.TOKEN_CREDITS), 4) AS daily_credits,\n",[855,3693,3694],{"class":857,"line":203},[855,3695,3696],{},"  SUM(u.TOKENS) AS total_tokens,\n",[855,3698,3699],{"class":857,"line":914},[855,3700,3701],{},"  COUNT(*) AS request_count\n",[855,3703,3704],{"class":857,"line":926},[855,3705,3706],{},"FROM SNOWFLAKE.ACCOUNT_USAGE.CORTEX_CODE_SNOWSIGHT_USAGE_HISTORY u\n",[855,3708,3709],{"class":857,"line":935},[855,3710,3711],{},"LEFT JOIN SNOWFLAKE.ACCOUNT_USAGE.USERS us ON u.USER_ID = us.USER_ID\n",[855,3713,3714],{"class":857,"line":1331},[855,3715,3716],{},"WHERE u.USAGE_TIME >= DATEADD('day', -30, CURRENT_TIMESTAMP())\n",[855,3718,3719],{"class":857,"line":1337},[855,3720,3721],{},"GROUP BY DATE(u.USAGE_TIME), us.NAME\n",[855,3723,3724],{"class":857,"line":1343},[855,3725,3726],{},"ORDER BY usage_date DESC, daily_credits DESC;\n",[24,3728,3729],{},"For notifications:",[57,3731,3732,3739],{},[60,3733,3734,3735,3738],{},"Activate account budgets: ",[320,3736,3737],{},"CALL SNOWFLAKE.LOCAL.ACCOUNT_ROOT_BUDGET!ACTIVATE();"," then set limits (e.g., 7 credits monthly) and emails.",[60,3740,3741,3742,743],{},"Build custom alerts, like firing if Snowsight exceeds 2 credits in 24 hours via CRON '* * * * * UTC', using ",[320,3743,3744],{},"SYSTEM$SEND_EMAIL",[24,3746,3747],{},"Budgets alert but don't hard-stop usage.",[19,3749,3751],{"id":3750},"enforce-rolling-24-hour-credit-limits-per-user","Enforce Rolling 24-Hour Credit Limits Per User",[24,3753,3754],{},"Set daily estimated credit limits on a rolling 24-hour window—access blocks when hit until usage drops below:",[24,3756,3757],{},"Account-wide:",[325,3759,3761],{"className":3668,"code":3760,"language":875,"meta":167,"style":167},"ALTER ACCOUNT SET CORTEX_CODE_SNOWSIGHT_DAILY_EST_CREDIT_LIMIT_PER_USER = 5;\nALTER ACCOUNT SET 
CORTEX_CODE_CLI_DAILY_EST_CREDIT_LIMIT_PER_USER = 10;\n",[320,3762,3763,3768],{"__ignoreMap":167},[855,3764,3765],{"class":857,"line":858},[855,3766,3767],{},"ALTER ACCOUNT SET CORTEX_CODE_SNOWSIGHT_DAILY_EST_CREDIT_LIMIT_PER_USER = 5;\n",[855,3769,3770],{"class":857,"line":168},[855,3771,3772],{},"ALTER ACCOUNT SET CORTEX_CODE_CLI_DAILY_EST_CREDIT_LIMIT_PER_USER = 10;\n",[24,3774,3775],{},"Per-user overrides:",[325,3777,3779],{"className":3668,"code":3778,"language":875,"meta":167,"style":167},"ALTER USER power_user SET CORTEX_CODE_SNOWSIGHT_DAILY_EST_CREDIT_LIMIT_PER_USER = 20;\nALTER USER intern_user SET CORTEX_CODE_SNOWSIGHT_DAILY_EST_CREDIT_LIMIT_PER_USER = 0;\n",[320,3780,3781,3786],{"__ignoreMap":167},[855,3782,3783],{"class":857,"line":858},[855,3784,3785],{},"ALTER USER power_user SET CORTEX_CODE_SNOWSIGHT_DAILY_EST_CREDIT_LIMIT_PER_USER = 20;\n",[855,3787,3788],{"class":857,"line":168},[855,3789,3790],{},"ALTER USER intern_user SET CORTEX_CODE_SNOWSIGHT_DAILY_EST_CREDIT_LIMIT_PER_USER = 0;\n",[24,3792,3793,3794,3797],{},"Unset with ",[320,3795,3796],{},"ALTER ACCOUNT UNSET ..."," or per user. This prevents runaway costs from heavy users.",[19,3799,3801],{"id":3800},"work-around-key-limitations","Work Around Key Limitations",[24,3803,3804],{},"CoCo lacks file uploads (use stages), external API calls (use external functions), background jobs, multi-session memory (use AGENTS.md), full large-context handling, and free tier support. 
These constraints avoid misuse but require planning to stay efficient without extra credits. Takeaway: precise prompts reduce token usage; monitor via ACCOUNT_USAGE tables, set alerts, and enforce per-user daily credit limits (e.g., 5 for Snowsight) to prevent surprise bills.

# Scale Stateless Backends by Broadcasting Client Updates

## Connection Ownership Mismatch Causes Silent Failures

In single-instance deployments, callbacks from async workflows reach the same process holding the client's SSE or WebSocket connection, delivering updates instantly. Horizontal scaling with Kubernetes replicas behind a load balancer breaks this: clients connect to one pod (e.g., Pod A), but callbacks hit another (Pod B). Pod B processes correctly—validates, logs, persists state, returns 200 OK—but can't deliver since it lacks the in-memory connection. Users see no updates despite healthy metrics (low CPU, latency, errors). This "distributed client-context problem" emerges because stateless services scale execution but not long-lived connections, which remain process-local state.

Cloud-native statelessness excels at scaling and recovery but ignores that connections bind to specific replicas. Async webhooks and background jobs land anywhere, decoupling execution from delivery without explicit coordination.

## Decouple Processing from Delivery Using Pub/Sub

Sticky sessions or switching SSE to WebSockets fail because they don't solve the replica mismatch. Instead, add a broadcast layer: the receiving replica publishes events to a shared channel (Redis Pub/Sub fits for low-latency fan-out). All replicas subscribe; only the connection-owning one forwards to the client.

Derive stable channel IDs from user/request IDs. Each pod maps these to active in-memory connections via a shared subscriber, avoiding per-client subscriptions that don't scale. Clean up mappings on disconnect to prevent stale references, memory leaks, or race conditions during reconnects. This makes delivery predictable without routing callbacks to specific pods.

Stateless services don't eliminate state—they relocate it (e.g., to Redis). Coordination treats delivery as a separate concern from processing, enabling clean horizontal scaling.

## Monitor End-to-End Delivery, Not Just Processing

Dashboards miss this failure mode: processing succeeds (green metrics), but delivery fails silently. Propagate correlation IDs across initiation, callback, publication, and client receipt to trace divergences. Alert on coordination health—e.g., published events without deliveries—beyond infrastructure metrics.

Make updates idempotent: duplicates are harmless, and misses are recoverable by the client polling authoritative backend state. Streaming enhances UX but isn't correctness; backend state remains the source of truth. Redis Pub/Sub's transience (messages are lost on restarts) reinforces this discipline.

## Design Rules Prevent Recurrence

- Treat connections as local state, not shared.
- Broadcast so any node can complete delivery.
- Track full-path delivery with correlation IDs.
- Ensure idempotency and authoritative state.

Ask upfront: which replica owns the connection, and how does the system find it? This beats transport tweaks.
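The broadcast-plus-ownership pattern above fits in a few lines. Here is a minimal, self-contained sketch: `InMemoryBus` stands in for a Redis Pub/Sub channel, and the `Replica` class, pod names, and event shape are illustrative assumptions, not code from the source.

```python
class InMemoryBus:
    """Stand-in for Redis Pub/Sub: transient fan-out to every subscriber, no replay."""
    def __init__(self):
        self.handlers = []

    def subscribe(self, handler):
        self.handlers.append(handler)

    def publish(self, channel_id, event):
        for handler in self.handlers:
            handler(channel_id, event)


class Replica:
    """One stateless pod. Its client connections are process-local state."""
    def __init__(self, name, bus):
        self.name = name
        self.bus = bus
        self.connections = {}        # channel_id -> delivered events (stands in for an SSE/WebSocket handle)
        self.seen_event_ids = set()  # idempotency guard: duplicate events are dropped

    def connect(self, channel_id):
        self.connections[channel_id] = []

    def disconnect(self, channel_id):
        # Clean up the mapping to avoid stale references and leaks.
        self.connections.pop(channel_id, None)

    def handle_callback(self, channel_id, event):
        # Whichever replica receives the webhook just publishes; it never
        # needs to know which pod owns the connection.
        self.bus.publish(channel_id, event)

    def on_broadcast(self, channel_id, event):
        # Every replica hears the event; only the connection owner forwards it.
        conn = self.connections.get(channel_id)
        if conn is None or event["id"] in self.seen_event_ids:
            return
        self.seen_event_ids.add(event["id"])
        conn.append(event)


bus = InMemoryBus()
pod_a, pod_b = Replica("pod-a", bus), Replica("pod-b", bus)
bus.subscribe(pod_a.on_broadcast)   # one shared subscriber per pod, not per client
bus.subscribe(pod_b.on_broadcast)
pod_a.connect("user-42")            # the client's SSE connection lands on Pod A
pod_b.handle_callback("user-42", {"id": "evt-1", "status": "done"})  # callback hits Pod B
print(pod_a.connections["user-42"])  # → [{'id': 'evt-1', 'status': 'done'}]
```

In production the bus would be a real Redis Pub/Sub channel shared across pods, with channel IDs derived from stable user/request IDs as the summary recommends.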
Modern Kubernetes dynamism, webhook reliance, and real-time UIs amplify the issue in event-driven SaaS.

# Reliable Scraping Pipelines: Playwright + Bright Data + Kubernetes

## Production Challenges Beyond Laptop Scrapers

Playwright scripts that run smoothly locally fail in production due to operational issues: browser startup delays in containers, bloated Docker images from bundled binaries, proxy and credential management, inconsistent retry logic, overlapping scheduled runs, and JavaScript-heavy pages that render differently under repeated automation. The shift requires building predictable batch workers that start cleanly, finish reliably, and scale via orchestration.

## Solution: Remote Browsers and Kubernetes Orchestration

Replace local browsers with Bright Data's Browser API for remote execution over CDP (Chrome DevTools Protocol), keeping Playwright as the automation layer. Use Kubernetes Jobs for one-off runs and CronJobs for recurring schedules. This setup avoids container bloat, simplifies proxy and credential handling, and ensures non-overlapping executions in a minimal architecture: Playwright scripts → remote Bright Data browsers → Kubernetes scheduling.
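The summary names inconsistent retry logic as one of the failure modes. A conventional remedy (not prescribed by the source) is exponential backoff with jitter around each scrape attempt; `with_retries`, `flaky_fetch`, and the tuning constants below are hypothetical.

```python
import random
import time


def with_retries(operation, attempts=4, base_delay=0.5, max_delay=8.0):
    """Run `operation`, retrying on failure with capped exponential backoff plus jitter."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: let the Kubernetes Job record the failure
            # Exponential backoff with full jitter to avoid synchronized retry storms.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))


# Hypothetical usage: wrap one flaky scrape step that succeeds on the third try.
calls = {"n": 0}

def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("page did not render")
    return "<html>ok</html>"

print(with_retries(flaky_fetch, base_delay=0.01))  # → <html>ok</html>
```

For the non-overlap requirement, Kubernetes CronJobs additionally support `concurrencyPolicy: Forbid`, which skips a scheduled run while the previous one is still active.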
# Claude Code Leak Reveals AI Supply Chain Perils

## AI Coding Tools Expose Broader Supply Chain Weaknesses

Panelists agree the Claude Code source leak isn't isolated to Anthropic but signals systemic flaws in AI-era supply chains, particularly npm's history of typosquatting and dependency-confusion attacks. JR Rao frames it as a shift from traditional vulnerabilities to subverted trust chains: attackers exploit package managers to infiltrate workflows, with blame often falling on end users such as Claude adopters. Visibility into Claude Code's internals—via npm maps linking to source artifacts—lowers attack research costs, revealing upcoming features like offline mode and dream mode that could inspire targeted exploits.

Dave Bales highlights npm hash-subversion tactics, which render verification unreliable. Short-term fallout includes malware-laden fake GitHub repos (e.g., the Vidar infostealer disguised as forks). Long-term, leaked code lets adversaries bypass guardrails, enabling unrestricted AI coding. Nick Bradley downplays immediate doom for Anthropic, likening the leak to pirated software, but notes the real excitement lies in novel threats beyond XSS or SQLi.

"This is really an AI era supply chain security problem and it is a problem with npm," says JR, emphasizing lookalike packages targeting agentic systems, API key abuses, and embedded logic patterns.

## Removing AI Guardrails Fuels Malicious Automation

Leaked AI coding tools like Claude Code pose amplified risks in CI/CD pipelines due to features like proactive mode, which automates 24/7 code generation without human oversight. Dave warns this empowers attackers to build malicious repositories effortlessly: "Proactive mode being enabled in this source code is a big deal... They're going to have code written for them while they sleep."

Panelists diverge on severity—Nick sees it as inevitable abuse of any tool ("any tool that you think you're going to use for something good, someone else is going to use it for something bad"), while Dave predicts weaponized bad-actor repos. JR ties it to agent limitations: AI lacks human adeptness at spotting typosquatting or shell executions. Consensus: test updates in isolated labs before deployment, lag one version behind (an N-1 strategy) for stability, and scrutinize supply chains holistically.

Quote from an external report cited by the host: "The attack surface exposed by the Claude Code leak...
What changed on March 31st is that the attack research cost collapsed."

## One Credential Suffices in Brazen Supply Chain Attacks

TeamPCP's spree—starting with a single privileged GitHub Actions token in the Trivy security scanner—cascaded into compromises of Light LLM, Telnyx, and a European Commission cloud exposing 29 entities' data. Dave calls them "brazen," prioritizing speed over stealth: one credential unlocks vast access. Despite rotations, Trivy's miss of a single instance enabled entry.

JR positions identity as the "new perimeter": attackers race to harvest credentials before short-lived ones expire, targeting secrets embedded in code. Nick attributes failures to overcomplication—too many credentials without airtight procedures—admitting the bad guys win via speed, unburdened by QA or ethics: "Sometimes the bad guys just going to win... They don't have the same practices we do."

Attribution is murky, with ShinyHunters and Lapsus$ claiming overlapping breaches; per JR, it matters little to defenders, though it informs TTPs. Overlaps via affiliates blur the lines, but victims must assume breach and audit soup to nuts.

## Sharing Close Calls and Cybercrime AI Lessons

Beyond breaches, panelists advocate "close-call" databases for unexploited threats, shifting threat intel from post-mortems to prevention. Reactive mode dominates today, but proactive sharing could reveal patterns.

Cybercriminals model mature AI adoption: unburdened by ethics, they deploy tools like Claude Code aggressively. Businesses lag due to guardrails, but the lessons include rapid iteration and testing. Nick urges full-compromise assumptions post-exposure; Dave stresses lab validation to counter fast patches.

- Audit npm packages for lookalikes, typosquatting, and dependency confusion; verify trust chains beyond hashes.
- Test AI tool updates (e.g., Claude Code) in isolated labs; adopt N-1 versioning to avoid unvetted latest releases.
- Treat identity as the primary perimeter: rotate credentials exhaustively, use short-lived/JIT access, and avoid embedding secrets in code.
- Assume breach after supply chain incidents like TeamPCP's; scan environments end to end for indicators.
- Build close-call sharing mechanisms and study cybercriminals' unhindered AI use for faster, bolder adoption.
- Prioritize agentic AI security: monitor for API key leaks, proactive-mode abuses, and shell executions in pipelines.
- Ignore attribution noise; focus on TTPs from any actor for detection rules.

Notable quotes:

- Nick Bradley: "Any tool that you think you're going to use for something good, someone else is going to use it for something bad." (On inevitable AI tool abuse.)
- Dave Bales: "Proactive mode being enabled... allows the engine to code for you 24/7." (Highlighting malicious automation risk.)
- JR Rao: "We are moving from an era where we had vulnerabilities to where trust chains are being subverted." (Framing supply chain evolution.)
- Nick Bradley: "Sometimes the bad guys just going to win, right? Because they're just going to be faster." (On defender challenges vs. threat speed.)
- Dave Bales: "They're brazen...
if they can get a credential, it seems like they're going to use it." (Describing TeamPCP tactics.)

## Episode Description

Visit the Security Intelligence podcast page → https://ibm.biz/BdpmAn

What happens when one of the world's most popular AI coding tools falls into the wrong hands?

On this episode of Security Intelligence, Nick Bradley, Dave Bales and JR Rao discuss the Claude Code source code leak. Attackers are already using the opportunity to spread malware through fake repos, but the real question is how threat actors might use their newfound knowledge of Claude Code's internals to wreak havoc on AI agents and the CI/CD pipeline.

Then, we follow up on our old friends TeamPCP, ShinyHunters and Lapsus$, whose overlapping data breach claims are causing no small amount of confusion and consternation among security pros. We examine the credential rotation problem and the uneven security surface of modern supply chains that helped get us in this mess.

Plus: Threat intelligence usually focuses on attacks that did happen. But what if we started talking about the ones that didn't? And do cybercriminals have anything to teach us about "mature" AI adoption? Some big names seem to think so.

All that and more on Security Intelligence.

Segments:

- 00:00 – Introduction
- 1:12 – The Claude Code leak
- 11:19 – TeamPCP's breach spree
- 21:21 – "Close-call" databases
- 29:28 – Cybercrime and AI adoption

The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.

Explore how to securely deploy and operate agentic AI workloads at runtime → https://ibm.biz/BdpmAb

#ClaudeAI #ThreatIntelligence #DataBreach

Source: IBM Technology, https://www.youtube.com/watch?v=qtFtECYOzZE
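The first action item above, auditing npm dependencies for lookalike names, can be sketched as a simple edit-distance screen. This is an illustrative heuristic, not a tool the panel names; the popular-package allowlist and the distance threshold are assumptions.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]


# Hypothetical allowlist; a real audit would pull registry download statistics.
POPULAR = {"react", "lodash", "express", "axios", "typescript"}


def lookalike_warnings(dependencies, max_distance=2):
    """Flag deps within `max_distance` edits of a popular name (but not the name itself)."""
    warnings = []
    for dep in dependencies:
        for known in POPULAR:
            d = edit_distance(dep, known)
            if 0 < d <= max_distance:
                warnings.append((dep, known))
    return warnings


print(lookalike_warnings(["lodahs", "react", "axois", "left-pad"]))
```

Distance-based screening catches transposed-letter squats like `lodahs`, but it is only one layer; the panel's broader advice (verify trust chains beyond hashes, test in isolated labs) still applies.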
[4086],{"categories":4868},[4124],{"categories":4870},[4086],{"categories":4872},[770],{"categories":4874},[4124],{"categories":4876},[4086],{"categories":4878},[519],{"categories":4880},[],{"categories":4882},[176],{"categories":4884},[4124],{"categories":4886},[176],{"categories":4888},[4078],{"categories":4890},[519],{"categories":4892},[176],{"categories":4894},[4140],{"categories":4896},[176],{"categories":4898},[176],{"categories":4900},[4086],{"categories":4902},[4086],{"categories":4904},[176],{"categories":4906},[4086],{"categories":4908},[4124],{"categories":4910},[176],{"categories":4912},[],{"categories":4914},[],{"categories":4916},[770],{"categories":4918},[],{"categories":4920},[4078],{"categories":4922},[273],{"categories":4924},[],{"categories":4926},[4078],{"categories":4928},[4081],{"categories":4930},[4140],{"categories":4932},[],{"categories":4934},[4081],{"categories":4936},[],{"categories":4938},[],{"categories":4940},[],{"categories":4942},[],{"categories":4944},[],{"categories":4946},[176],{"categories":4948},[4086],{"categories":4950},[273],{"categories":4952},[4078],{"categories":4954},[176],{"categories":4956},[770],{"categories":4958},[4089],{"categories":4960},[176],{"categories":4962},[4140],{"categories":4964},[176],{"categories":4966},[176],{"categories":4968},[176],{"categories":4970},[176,4078],{"categories":4972},[770],{"categories":4974},[770],{"categories":4976},[4124],{"categories":4978},[176],{"categories":4980},[],{"categories":4982},[],{"categories":4984},[],{"categories":4986},[770],{"categories":4988},[4127],{"categories":4990},[519],{"categories":4992},[4124],{"categories":4994},[],{"categories":4996},[176],{"categories":4998},[176],{"categories":5000},[],{"categories":5002},[],{"categories":5004},[4086],{"categories":5006},[176],{"categories":5008},[4081],{"categories":5010},[],{"categories":5012},[4078],{"categories":5014},[176],{"categories":5016},[4078],{"categories":5018},[176],{"categories":5020},[770],{"categories"
:5022},[4140],{"categories":5024},[176,4124],{"categories":5026},[519],{"categories":5028},[4124],{"categories":5030},[],{"categories":5032},[273],{"categories":5034},[4124],{"categories":5036},[4086],{"categories":5038},[],{"categories":5040},[],{"categories":5042},[],{"categories":5044},[],{"categories":5046},[770],{"categories":5048},[4086],{"categories":5050},[4086],{"categories":5052},[273],{"categories":5054},[176],{"categories":5056},[176],{"categories":5058},[176],{"categories":5060},[],{"categories":5062},[4124],{"categories":5064},[],{"categories":5066},[],{"categories":5068},[4086],{"categories":5070},[],{"categories":5072},[],{"categories":5074},[4140],{"categories":5076},[4140],{"categories":5078},[4086],{"categories":5080},[],{"categories":5082},[176],{"categories":5084},[176],{"categories":5086},[770],{"categories":5088},[4124],{"categories":5090},[4124],{"categories":5092},[4086],{"categories":5094},[4078],{"categories":5096},[176],{"categories":5098},[4124],{"categories":5100},[4124],{"categories":5102},[4086],{"categories":5104},[4086],{"categories":5106},[176],{"categories":5108},[],{"categories":5110},[],{"categories":5112},[176],{"categories":5114},[4086],{"categories":5116},[519],{"categories":5118},[770],{"categories":5120},[4078],{"categories":5122},[176],{"categories":5124},[],{"categories":5126},[4086],{"categories":5128},[4086],{"categories":5130},[],{"categories":5132},[4078],{"categories":5134},[176],{"categories":5136},[4078],{"categories":5138},[4078],{"categories":5140},[],{"categories":5142},[],{"categories":5144},[4086],{"categories":5146},[4086],{"categories":5148},[176],{"categories":5150},[176],{"categories":5152},[519],{"categories":5154},[4127],{"categories":5156},[4089],{"categories":5158},[519],{"categories":5160},[4124],{"categories":5162},[],{"categories":5164},[519],{"categories":5166},[],{"categories":5168},[],{"categories":5170},[],{"categories":5172},[],{"categories":5174},[770],{"categories":5176},[4127],{"categories":
5178},[],{"categories":5180},[176],{"categories":5182},[176],{"categories":5184},[4127],{"categories":5186},[770],{"categories":5188},[],{"categories":5190},[],{"categories":5192},[4086],{"categories":5194},[519],{"categories":5196},[519],{"categories":5198},[4086],{"categories":5200},[4078],{"categories":5202},[176,273],{"categories":5204},[],{"categories":5206},[4124],{"categories":5208},[4078],{"categories":5210},[4086],{"categories":5212},[4124],{"categories":5214},[],{"categories":5216},[4086],{"categories":5218},[4086],{"categories":5220},[176],{"categories":5222},[4140],{"categories":5224},[770],{"categories":5226},[4124],{"categories":5228},[],{"categories":5230},[4086],{"categories":5232},[176],{"categories":5234},[4086],{"categories":5236},[4086],{"categories":5238},[4086],{"categories":5240},[4140],{"categories":5242},[4086],{"categories":5244},[176],{"categories":5246},[],{"categories":5248},[4140],{"categories":5250},[519],{"categories":5252},[4086],{"categories":5254},[],{"categories":5256},[],{"categories":5258},[176],{"categories":5260},[4086],{"categories":5262},[519],{"categories":5264},[4086],{"categories":5266},[],{"categories":5268},[],{"categories":5270},[],{"categories":5272},[4086],{"categories":5274},[],{"categories":5276},[],{"categories":5278},[4127],{"categories":5280},[176],{"categories":5282},[4127],{"categories":5284},[519],{"categories":5286},[176],{"categories":5288},[176],{"categories":5290},[4086],{"categories":5292},[176],{"categories":5294},[],{"categories":5296},[],{"categories":5298},[273],{"categories":5300},[],{"categories":5302},[],{"categories":5304},[4078],{"categories":5306},[],{"categories":5308},[],{"categories":5310},[],{"categories":5312},[],{"categories":5314},[770],{"categories":5316},[519],{"categories":5318},[4140],{"categories":5320},[4081],{"categories":5322},[176],{"categories":5324},[176],{"categories":5326},[4081],{"categories":5328},[],{"categories":5330},[4124],{"categories":5332},[4086],{"categories":5334}
,[4081],{"categories":5336},[176],{"categories":5338},[176],{"categories":5340},[4078],{"categories":5342},[],{"categories":5344},[4078],{"categories":5346},[176],{"categories":5348},[4140],{"categories":5350},[4086],{"categories":5352},[519],{"categories":5354},[4081],{"categories":5356},[176],{"categories":5358},[4086],{"categories":5360},[],{"categories":5362},[176],{"categories":5364},[4078],{"categories":5366},[176],{"categories":5368},[],{"categories":5370},[519],{"categories":5372},[176],{"categories":5374},[],{"categories":5376},[4081],{"categories":5378},[176],{"categories":5380},[],{"categories":5382},[],{"categories":5384},[],{"categories":5386},[176],{"categories":5388},[],{"categories":5390},[273],{"categories":5392},[176],{"categories":5394},[],{"categories":5396},[176],{"categories":5398},[176],{"categories":5400},[176],{"categories":5402},[176,273],{"categories":5404},[176],{"categories":5406},[176],{"categories":5408},[4124],{"categories":5410},[4086],{"categories":5412},[],{"categories":5414},[4086],{"categories":5416},[176],{"categories":5418},[176],{"categories":5420},[176],{"categories":5422},[4078],{"categories":5424},[4078],{"categories":5426},[770],{"categories":5428},[4124],{"categories":5430},[4086],{"categories":5432},[],{"categories":5434},[176],{"categories":5436},[519],{"categories":5438},[176],{"categories":5440},[4081],{"categories":5442},[],{"categories":5444},[273],{"categories":5446},[4124],{"categories":5448},[4124],{"categories":5450},[4086],{"categories":5452},[519],{"categories":5454},[4086],{"categories":5456},[176],{"categories":5458},[],{"categories":5460},[176],{"categories":5462},[],{"categories":5464},[],{"categories":5466},[176],{"categories":5468},[176],{"categories":5470},[176],{"categories":5472},[4086],{"categories":5474},[176],{"categories":5476},[],{"categories":5478},[4127],{"categories":5480},[4086],{"categories":5482},[],{"categories":5484},[],{"categories":5486},[176],{"categories":5488},[519],{"categories":549
0},[],{"categories":5492},[4124],{"categories":5494},[273],{"categories":5496},[519],{"categories":5498},[770],{"categories":5500},[770],{"categories":5502},[519],{"categories":5504},[519],{"categories":5506},[273],{"categories":5508},[],{"categories":5510},[519],{"categories":5512},[176],{"categories":5514},[4078],{"categories":5516},[519],{"categories":5518},[],{"categories":5520},[4127],{"categories":5522},[519],{"categories":5524},[770],{"categories":5526},[519],{"categories":5528},[273],{"categories":5530},[176],{"categories":5532},[176],{"categories":5534},[],{"categories":5536},[4081],{"categories":5538},[],{"categories":5540},[],{"categories":5542},[176],{"categories":5544},[176],{"categories":5546},[176],{"categories":5548},[176],{"categories":5550},[],{"categories":5552},[4127],{"categories":5554},[4078],{"categories":5556},[],{"categories":5558},[176],{"categories":5560},[176],{"categories":5562},[273],{"categories":5564},[273],{"categories":5566},[],{"categories":5568},[4086],{"categories":5570},[519],{"categories":5572},[519],{"categories":5574},[176],{"categories":5576},[4086],{"categories":5578},[],{"categories":5580},[4124],{"categories":5582},[176],{"categories":5584},[176],{"categories":5586},[],{"categories":5588},[],{"categories":5590},[273],{"categories":5592},[176],{"categories":5594},[770],{"categories":5596},[4081],{"categories":5598},[176],{"categories":5600},[],{"categories":5602},[4086],{"categories":5604},[4078],{"categories":5606},[4078],{"categories":5608},[],{"categories":5610},[176],{"categories":5612},[4124],{"categories":5614},[4086],{"categories":5616},[],{"categories":5618},[176],{"categories":5620},[176],{"categories":5622},[4086],{"categories":5624},[],{"categories":5626},[4086],{"categories":5628},[770],{"categories":5630},[],{"categories":5632},[176],{"categories":5634},[],{"categories":5636},[176],{"categories":5638},[],{"categories":5640},[176],{"categories":5642},[176],{"categories":5644},[],{"categories":5646},[176],{"cate
gories":5648},[519],{"categories":5650},[176],{"categories":5652},[176],{"categories":5654},[4078],{"categories":5656},[176],{"categories":5658},[519],{"categories":5660},[4086],{"categories":5662},[],{"categories":5664},[176],{"categories":5666},[4140],{"categories":5668},[],{"categories":5670},[],{"categories":5672},[],{"categories":5674},[4078],{"categories":5676},[519],{"categories":5678},[4086],{"categories":5680},[176],{"categories":5682},[4124],{"categories":5684},[4086],{"categories":5686},[],{"categories":5688},[4086],{"categories":5690},[],{"categories":5692},[176],{"categories":5694},[4086],{"categories":5696},[176],{"categories":5698},[],{"categories":5700},[176],{"categories":5702},[176],{"categories":5704},[519],{"categories":5706},[4124],{"categories":5708},[4086],{"categories":5710},[4124],{"categories":5712},[4081],{"categories":5714},[],{"categories":5716},[],{"categories":5718},[176],{"categories":5720},[4078],{"categories":5722},[519],{"categories":5724},[],{"categories":5726},[],{"categories":5728},[770],{"categories":5730},[4124],{"categories":5732},[],{"categories":5734},[176],{"categories":5736},[],{"categories":5738},[4140],{"categories":5740},[176],{"categories":5742},[273],{"categories":5744},[770],{"categories":5746},[],{"categories":5748},[4086],{"categories":5750},[176],{"categories":5752},[4086],{"categories":5754},[4086],{"categories":5756},[176],{"categories":5758},[],{"categories":5760},[4078],{"categories":5762},[176],{"categories":5764},[4081],{"categories":5766},[770],{"categories":5768},[4124],{"categories":5770},[],{"categories":5772},[],{"categories":5774},[],{"categories":5776},[4086],{"categories":5778},[4124],{"categories":5780},[519],{"categories":5782},[176],{"categories":5784},[519],{"categories":5786},[4124],{"categories":5788},[],{"categories":5790},[4124],{"categories":5792},[519],{"categories":5794},[4081],{"categories":5796},[176],{"categories":5798},[519],{"categories":5800},[4140],{"categories":5802},[],{"categori
es":5804},[],{"categories":5806},[4127],{"categories":5808},[176,770],{"categories":5810},[519],{"categories":5812},[176],{"categories":5814},[4086],{"categories":5816},[4086],{"categories":5818},[176],{"categories":5820},[],{"categories":5822},[770],{"categories":5824},[176],{"categories":5826},[4127],{"categories":5828},[4086],{"categories":5830},[4140],{"categories":5832},[273],{"categories":5834},[],{"categories":5836},[4078],{"categories":5838},[4086],{"categories":5840},[4086],{"categories":5842},[770],{"categories":5844},[176],{"categories":5846},[176],{"categories":5848},[],{"categories":5850},[],{"categories":5852},[],{"categories":5854},[273],{"categories":5856},[519],{"categories":5858},[176],{"categories":5860},[176],{"categories":5862},[176],{"categories":5864},[],{"categories":5866},[4127],{"categories":5868},[4081],{"categories":5870},[],{"categories":5872},[4086],{"categories":5874},[273],{"categories":5876},[],{"categories":5878},[4124],{"categories":5880},[4124],{"categories":5882},[],{"categories":5884},[770],{"categories":5886},[4124],{"categories":5888},[176],{"categories":5890},[],{"categories":5892},[519],{"categories":5894},[176],{"categories":5896},[4124],{"categories":5898},[4086],{"categories":5900},[519],{"categories":5902},[],{"categories":5904},[4086],{"categories":5906},[4124],{"categories":5908},[176],{"categories":5910},[],{"categories":5912},[176],{"categories":5914},[176],{"categories":5916},[273],{"categories":5918},[519],{"categories":5920},[4127],{"categories":5922},[4127],{"categories":5924},[],{"categories":5926},[],{"categories":5928},[],{"categories":5930},[4086],{"categories":5932},[770],{"categories":5934},[770],{"categories":5936},[],{"categories":5938},[],{"categories":5940},[176],{"categories":5942},[],{"categories":5944},[4086],{"categories":5946},[176],{"categories":5948},[],{"categories":5950},[176],{"categories":5952},[4081],{"categories":5954},[176],{"categories":5956},[4140],{"categories":5958},[4086],{"categories
":5960},[176],{"categories":5962},[770],{"categories":5964},[519],{"categories":5966},[4086],{"categories":5968},[],{"categories":5970},[519],{"categories":5972},[4086],{"categories":5974},[4086],{"categories":5976},[],{"categories":5978},[4081],{"categories":5980},[4086],{"categories":5982},[],{"categories":5984},[176],{"categories":5986},[4078],{"categories":5988},[519],{"categories":5990},[273],{"categories":5992},[4086],{"categories":5994},[4086],{"categories":5996},[4078],{"categories":5998},[176],{"categories":6000},[],{"categories":6002},[],{"categories":6004},[4124],{"categories":6006},[176,4081],{"categories":6008},[],{"categories":6010},[4078],{"categories":6012},[4127],{"categories":6014},[176],{"categories":6016},[770],{"categories":6018},[176],{"categories":6020},[4086],{"categories":6022},[176],{"categories":6024},[176],{"categories":6026},[519],{"categories":6028},[4086],{"categories":6030},[],{"categories":6032},[],{"categories":6034},[4086],{"categories":6036},[176],{"categories":6038},[273],{"categories":6040},[],{"categories":6042},[176],{"categories":6044},[4086],{"categories":6046},[],{"categories":6048},[176],{"categories":6050},[4140],{"categories":6052},[4127],{"categories":6054},[4086],{"categories":6056},[176],{"categories":6058},[273],{"categories":6060},[],{"categories":6062},[176],{"categories":6064},[4140],{"categories":6066},[4124],{"categories":6068},[176],{"categories":6070},[],{"categories":6072},[4140],{"categories":6074},[519],{"categories":6076},[176],{"categories":6078},[176],{"categories":6080},[4078],{"categories":6082},[],{"categories":6084},[],{"categories":6086},[4124],{"categories":6088},[176],{"categories":6090},[4127],{"categories":6092},[4140],{"categories":6094},[4140],{"categories":6096},[519],{"categories":6098},[],{"categories":6100},[],{"categories":6102},[176],{"categories":6104},[],{"categories":6106},[176,770],{"categories":6108},[519],{"categories":6110},[4086],{"categories":6112},[770],{"categories":6114},[176
],{"categories":6116},[4078],{"categories":6118},[],{"categories":6120},[],{"categories":6122},[4078],{"categories":6124},[4140],{"categories":6126},[176],{"categories":6128},[],{"categories":6130},[4124,176],{"categories":6132},[273],{"categories":6134},[4078],{"categories":6136},[],{"categories":6138},[4081],{"categories":6140},[4081],{"categories":6142},[176],{"categories":6144},[770],{"categories":6146},[4086],{"categories":6148},[519],{"categories":6150},[4140],{"categories":6152},[4124],{"categories":6154},[176],{"categories":6156},[176],{"categories":6158},[176],{"categories":6160},[4078],{"categories":6162},[176],{"categories":6164},[4086],{"categories":6166},[519],{"categories":6168},[],{"categories":6170},[],{"categories":6172},[4127],{"categories":6174},[770],{"categories":6176},[176],{"categories":6178},[4124],{"categories":6180},[4127],{"categories":6182},[176],{"categories":6184},[176],{"categories":6186},[4086],{"categories":6188},[4086],{"categories":6190},[176,4081],{"categories":6192},[],{"categories":6194},[4124],{"categories":6196},[],{"categories":6198},[176],{"categories":6200},[519],{"categories":6202},[4078],{"categories":6204},[4078],{"categories":6206},[4086],{"categories":6208},[176],{"categories":6210},[4081],{"categories":6212},[770],{"categories":6214},[4140],{"categories":6216},[],{"categories":6218},[519],{"categories":6220},[176],{"categories":6222},[176],{"categories":6224},[519],{"categories":6226},[770],{"categories":6228},[176],{"categories":6230},[4086],{"categories":6232},[519],{"categories":6234},[176],{"categories":6236},[4124],{"categories":6238},[176],{"categories":6240},[176],{"categories":6242},[273],{"categories":6244},[4089],{"categories":6246},[4086],{"categories":6248},[176],{"categories":6250},[519],{"categories":6252},[4086],{"categories":6254},[4140],{"categories":6256},[176],{"categories":6258},[],{"categories":6260},[176],{"categories":6262},[],{"categories":6264},[],{"categories":6266},[],{"categories":6268},[40
81],{"categories":6270},[176],{"categories":6272},[4086],{"categories":6274},[519],{"categories":6276},[519],{"categories":6278},[519],{"categories":6280},[519],{"categories":6282},[],{"categories":6284},[4078],{"categories":6286},[4086],{"categories":6288},[519],{"categories":6290},[4078],{"categories":6292},[4086],{"categories":6294},[176],{"categories":6296},[176,4086],{"categories":6298},[4086],{"categories":6300},[273],{"categories":6302},[519],{"categories":6304},[519],{"categories":6306},[4086],{"categories":6308},[176],{"categories":6310},[],{"categories":6312},[519],{"categories":6314},[4140],{"categories":6316},[4078],{"categories":6318},[176],{"categories":6320},[176],{"categories":6322},[],{"categories":6324},[770],{"categories":6326},[],{"categories":6328},[4078],{"categories":6330},[4086],{"categories":6332},[519],{"categories":6334},[176],{"categories":6336},[519],{"categories":6338},[4078],{"categories":6340},[519],{"categories":6342},[519],{"categories":6344},[],{"categories":6346},[4081],{"categories":6348},[4086],{"categories":6350},[519],{"categories":6352},[519],{"categories":6354},[519],{"categories":6356},[519],{"categories":6358},[519],{"categories":6360},[519],{"categories":6362},[519],{"categories":6364},[519],{"categories":6366},[519],{"categories":6368},[519],{"categories":6370},[4127],{"categories":6372},[4078],{"categories":6374},[176],{"categories":6376},[176],{"categories":6378},[],{"categories":6380},[176,4078],{"categories":6382},[],{"categories":6384},[4086],{"categories":6386},[519],{"categories":6388},[4086],{"categories":6390},[176],{"categories":6392},[176],{"categories":6394},[176],{"categories":6396},[176],{"categories":6398},[176],{"categories":6400},[4086],{"categories":6402},[4081],{"categories":6404},[4124],{"categories":6406},[519],{"categories":6408},[176],{"categories":6410},[],{"categories":6412},[],{"categories":6414},[4086],{"categories":6416},[4124],{"categories":6418},[176],{"categories":6420},[],{"categories":642
2},[],{"categories":6424},[4140],{"categories":6426},[176],{"categories":6428},[],{"categories":6430},[],{"categories":6432},[4078],{"categories":6434},[4081],{"categories":6436},[176],{"categories":6438},[4081],{"categories":6440},[4124],{"categories":6442},[],{"categories":6444},[519],{"categories":6446},[],{"categories":6448},[4124],{"categories":6450},[176],{"categories":6452},[4140],{"categories":6454},[],{"categories":6456},[4140],{"categories":6458},[],{"categories":6460},[],{"categories":6462},[4086],{"categories":6464},[],{"categories":6466},[4081],{"categories":6468},[4078],{"categories":6470},[4124],{"categories":6472},[770],{"categories":6474},[],{"categories":6476},[],{"categories":6478},[176],{"categories":6480},[4078],{"categories":6482},[4140],{"categories":6484},[],{"categories":6486},[4086],{"categories":6488},[4086],{"categories":6490},[519],{"categories":6492},[176],{"categories":6494},[4086],{"categories":6496},[176],{"categories":6498},[4086],{"categories":6500},[176],{"categories":6502},[4089],{"categories":6504},[519],{"categories":6506},[],{"categories":6508},[4140],{"categories":6510},[770],{"categories":6512},[4086],{"categories":6514},[],{"categories":6516},[176],{"categories":6518},[4086],{"categories":6520},[4081],{"categories":6522},[4078],{"categories":6524},[176],{"categories":6526},[4124],{"categories":6528},[770],{"categories":6530},[770],{"categories":6532},[176],{"categories":6534},[4127],{"categories":6536},[176],{"categories":6538},[4086],{"categories":6540},[4081],{"categories":6542},[4086],{"categories":6544},[176],{"categories":6546},[176],{"categories":6548},[4086],{"categories":6550},[519],{"categories":6552},[],{"categories":6554},[4078],{"categories":6556},[176],{"categories":6558},[4086],{"categories":6560},[176],{"categories":6562},[176],{"categories":6564},[],{"categories":6566},[4124],{"categories":6568},[4081],{"categories":6570},[519],{"categories":6572},[176],{"categories":6574},[176],{"categories":6576},[4124],{"
categories":6578},[4140],{"categories":6580},[4127],{"categories":6582},[176],{"categories":6584},[519],{"categories":6586},[176],{"categories":6588},[4086],{"categories":6590},[273],{"categories":6592},[176],{"categories":6594},[4086],{"categories":6596},[4127],{"categories":6598},[],{"categories":6600},[4086],{"categories":6602},[770],{"categories":6604},[4124],{"categories":6606},[176],{"categories":6608},[4078],{"categories":6610},[4081],{"categories":6612},[770],{"categories":6614},[],{"categories":6616},[4086],{"categories":6618},[176],{"categories":6620},[],{"categories":6622},[519],{"categories":6624},[],{"categories":6626},[519],{"categories":6628},[176],{"categories":6630},[4086],{"categories":6632},[4086],{"categories":6634},[4086],{"categories":6636},[],{"categories":6638},[],{"categories":6640},[176],{"categories":6642},[176],{"categories":6644},[],{"categories":6646},[4124],{"categories":6648},[4086],{"categories":6650},[4140],{"categories":6652},[4078],{"categories":6654},[],{"categories":6656},[],{"categories":6658},[519],{"categories":6660},[770],{"categories":6662},[176],{"categories":6664},[176],{"categories":6666},[176],{"categories":6668},[770],{"categories":6670},[519],{"categories":6672},[4124],{"categories":6674},[176],{"categories":6676},[176],{"categories":6678},[176],{"categories":6680},[519],{"categories":6682},[176],{"categories":6684},[519],{"categories":6686},[4086],{"categories":6688},[4086],{"categories":6690},[770],{"categories":6692},[4086],{"categories":6694},[176],{"categories":6696},[770],{"categories":6698},[4124],{"categories":6700},[],{"categories":6702},[4086],{"categories":6704},[],{"categories":6706},[],{"categories":6708},[],{"categories":6710},[4081],{"categories":6712},[176],{"categories":6714},[4086],{"categories":6716},[4078],{"categories":6718},[4086],{"categories":6720},[4140],{"categories":6722},[],{"categories":6724},[4086],{"categories":6726},[],{"categories":6728},[4078],{"categories":6730},[4086],{"categories":
6732},[],{"categories":6734},[4086],{"categories":6736},[176],{"categories":6738},[519],{"categories":6740},[176],{"categories":6742},[4086],{"categories":6744},[519],{"categories":6746},[4086],{"categories":6748},[770],{"categories":6750},[4124],{"categories":6752},[4078],{"categories":6754},[],{"categories":6756},[4086],{"categories":6758},[4124],{"categories":6760},[273],{"categories":6762},[519],{"categories":6764},[176],{"categories":6766},[4124],{"categories":6768},[4078],{"categories":6770},[],{"categories":6772},[4086],{"categories":6774},[4086],{"categories":6776},[176],{"categories":6778},[],{"categories":6780},[4086],{"categories":6782},[4089],{"categories":6784},[519],{"categories":6786},[4086],{"categories":6788},[4081],{"categories":6790},[],{"categories":6792},[176],{"categories":6794},[4089],{"categories":6796},[176],{"categories":6798},[4086],{"categories":6800},[519],{"categories":6802},[4078],{"categories":6804},[273],{"categories":6806},[176],{"categories":6808},[176],{"categories":6810},[176],{"categories":6812},[519],{"categories":6814},[4081],{"categories":6816},[176],{"categories":6818},[4124],{"categories":6820},[519],{"categories":6822},[273],{"categories":6824},[176],{"categories":6826},[],{"categories":6828},[],{"categories":6830},[273],{"categories":6832},[4127],{"categories":6834},[4086],{"categories":6836},[4086],{"categories":6838},[519],{"categories":6840},[176],{"categories":6842},[4078],{"categories":6844},[4124],{"categories":6846},[4086],{"categories":6848},[176],{"categories":6850},[4140],{"categories":6852},[176],{"categories":6854},[4086],{"categories":6856},[],{"categories":6858},[176],{"categories":6860},[176],{"categories":6862},[519],{"categories":6864},[4078],{"categories":6866},[],{"categories":6868},[176],{"categories":6870},[176],{"categories":6872},[770],{"categories":6874},[4124],{"categories":6876},[176,4086],{"categories":6878},[4140,4081],{"categories":6880},[176],{"categories":6882},[],{"categories":6884},[4086],
{"categories":6886},[],{"categories":6888},[770],{"categories":6890},[176],{"categories":6892},[519],{"categories":6894},[],{"categories":6896},[4086],{"categories":6898},[],{"categories":6900},[4124],{"categories":6902},[4086],{"categories":6904},[4078],{"categories":6906},[4086],{"categories":6908},[176],{"categories":6910},[273],{"categories":6912},[4140],{"categories":6914},[4081],{"categories":6916},[4081],{"categories":6918},[4078],{"categories":6920},[4078],{"categories":6922},[176],{"categories":6924},[4086],{"categories":6926},[176],{"categories":6928},[176],{"categories":6930},[4078],{"categories":6932},[176],{"categories":6934},[4140],{"categories":6936},[519],{"categories":6938},[176],{"categories":6940},[4086],{"categories":6942},[176],{"categories":6944},[],{"categories":6946},[770],{"categories":6948},[],{"categories":6950},[4086],{"categories":6952},[4078],{"categories":6954},[],{"categories":6956},[273],{"categories":6958},[176],{"categories":6960},[],{"categories":6962},[519],{"categories":6964},[4086],{"categories":6966},[770],{"categories":6968},[176],{"categories":6970},[4086],{"categories":6972},[770],{"categories":6974},[4086],{"categories":6976},[519],{"categories":6978},[4078],{"categories":6980},[519],{"categories":6982},[770],{"categories":6984},[176],{"categories":6986},[4124],{"categories":6988},[176],{"categories":6990},[176],{"categories":6992},[176],{"categories":6994},[176],{"categories":6996},[4086],{"categories":6998},[176],{"categories":7000},[4086],{"categories":7002},[176],{"categories":7004},[4078],{"categories":7006},[176],{"categories":7008},[4086],{"categories":7010},[4124],{"categories":7012},[4078],{"categories":7014},[4086],{"categories":7016},[4124],{"categories":7018},[],{"categories":7020},[176],{"categories":7022},[176],{"categories":7024},[770],{"categories":7026},[],{"categories":7028},[4086],{"categories":7030},[4140],{"categories":7032},[176],{"categories":7034},[519],{"categories":7036},[4140],{"categories":7038}
Jason",{"source_name":8488},{"source_name":7795},{"source_name":3150},{"source_name":7699},{"source_name":3150},{"source_name":405},{"source_name":3150},{"source_name":7679},{"source_name":405},{"source_name":3150},{"source_name":7890},{"source_name":3150},{"source_name":7989},{"source_name":7869},{"source_name":7968},{"source_name":1439},{"source_name":4067},{"source_name":7806},{"source_name":7693},{"source_name":2049},{"source_name":3150},{"source_name":3150},{"source_name":7699},{"source_name":3150},{"source_name":3150},{"source_name":7702},{"source_name":8068},{"source_name":7730},{"source_name":7704},{"source_name":7713},{"source_name":7798},{"source_name":475},{"source_name":8223},{"source_name":7704},{"source_name":1148},{"source_name":7806},{"source_name":7702},{"source_name":7800},{"source_name":7704},{"source_name":7730},{"source_name":7732},{"source_name":7873},{"source_name":214},{"source_name":7742},{"source_name":7764},{"source_name":4067},{"source_name":8020},{"source_name":1148},{"source_name":8488},{"source_name":3150},{"source_name":7686},{"source_name":8068},{"source_name":7795},{"source_name":7829},{"source_name":3150},{"source_name":7760},{"source_name":7760},{"source_name":3150},{"source_name":7718},{"source_name":7730},{"source_name":1148},{"source_name":7730},{"source_name":3150},{"source_name":7730},{"source_name":3150},{"source_name":4067},{"source_name":1148},{"source_name":7971},{"source_name":7781},{"source_name":3150},{"source_name":7702},{"source_name":3150},{"source_name":3150},{"source_name":405},{"source_name":7728},{"source_name":7873},{"source_name":7696},{"source_name":475},{"source_name":475},{"source_name":1148},{"source_name":8752},"FlowingData",{"source_name":7873},{"source_name":7762},{"source_name":214},{"source_name":3150},{"source_name":2049},{"source_name":7693},{"source_name":8068},{"source_name":4067},{"source_name":4067},{"source_name":7886},{"source_name":8764},"Import 
AI",{"source_name":475},{"source_name":7865},{"source_name":7752},{"source_name":7699},{"source_name":7728},{"source_name":7679},{"source_name":7775},{"source_name":3150},{"source_name":8099},{"source_name":3150},{"source_name":2892},{"source_name":7730},{"source_name":7730},{"source_name":1148},{"source_name":3150},{"source_name":292},{"source_name":475},{"source_name":7728},{"source_name":3150},{"source_name":7699},{"source_name":7836},{"source_name":7709},{"source_name":7746},{"source_name":7795},{"source_name":7686},{"source_name":7709},{"source_name":7795},{"source_name":7813},{"source_name":292},{"source_name":3150},{"source_name":7806},{"source_name":7699},{"source_name":7742},{"source_name":3150},{"source_name":3150},{"source_name":7769},{"source_name":7971},{"source_name":7781},{"source_name":214},{"source_name":3150},{"source_name":537},{"source_name":214},{"source_name":405},{"source_name":7699},{"source_name":7704},{"source_name":8099},{"source_name":4067},{"source_name":292},{"source_name":475},{"source_name":3150},{"source_name":7890},{"source_name":8151},{"source_name":475},{"source_name":1148},{"source_name":8220},{"source_name":7873},{"source_name":7786},{"source_name":3150},{"source_name":7842},{"source_name":7730},{"source_name":475},{"source_name":405},{"source_name":7767},{"source_name":3150},{"source_name":7702},{"source_name":7873},{"source_name":7679},{"source_name":8833},"Agency Mavericks Podcast",{"source_name":405},{"source_name":8099},{"source_name":3150},{"source_name":7709},{"source_name":7718},{"source_name":7699},{"source_name":475},{"source_name":3150},{"source_name":292},{"source_name":3150},{"source_name":2892},{"source_name":7786},{"source_name":7813},{"source_name":214},{"source_name":3150},{"source_name":214},{"source_name":8020},{"source_name":8852},"Liam 
Ottley",{"source_name":3150},{"source_name":8356},{"source_name":7873},{"source_name":7886},{"source_name":475},{"source_name":3150},{"source_name":7679},{"source_name":7709},{"source_name":7873},{"source_name":7682},{"source_name":3150},{"source_name":7989},{"source_name":7767},{"source_name":2892},{"source_name":8356},{"source_name":2892},{"source_name":7713},{"source_name":7699},{"source_name":7730},{"source_name":7709},{"source_name":7971},{"source_name":3150},{"source_name":7709},{"source_name":7682},{"source_name":7699},{"source_name":7873},{"source_name":7786},{"source_name":7971},{"source_name":8764},{"source_name":7693},{"source_name":8884},"One Useful Thing (Ethan Mollick)",{"source_name":3597},{"source_name":3597},{"source_name":7693},{"source_name":7989},{"source_name":7890},{"source_name":7890},{"source_name":8892},"Towards AI Newsletter",{"source_name":7989},{"source_name":8020},{"source_name":8896},"Andrej Karpathy Gists",{"source_name":8020},{"source_name":7890},{"source_name":8020},{"source_name":8764},{"source_name":7968},{"source_name":8488},{"source_name":8764},{"source_name":7890},{"source_name":7890},{"source_name":7693},{"source_name":8764},{"source_name":3597},{"source_name":8488},{"source_name":8488},{"source_name":1148},{"source_name":1148},{"source_name":7890},{"source_name":8892},{"source_name":7890},{"source_name":3597},{"source_name":7693},{"source_name":405},{"source_name":7762},{"source_name":3150},{"source_name":1509},{"source_name":1148},{"source_name":8670},{"source_name":7873},{"source_name":2892},{"source_name":3150},{"source_name":1148},{"source_name":3150},{"source_name":7767},{"source_name":3150},{"source_name":7760},{"source_name":3150},{"source_name":292},{"source_name":7922},{"source_name":7713},{"source_name":3150},{"source_name":7699},{"source_name":7722},{"source_name":7706},{"source_name":7865},{"source_name":7786},{"source_name":7971},{"source_name":1148},{"source_name":7679},{"source_name":7718},{"source_name":4067},{
"source_name":3150},{"source_name":3150},{"source_name":7798},{"source_name":3150},{"source_name":3150},{"source_name":3150},{"source_name":7869},{"source_name":7709},{"source_name":7699},{"source_name":8015},{"source_name":8622},{"source_name":7693},{"source_name":214},{"source_name":7732},{"source_name":3150},{"source_name":475},{"source_name":3150},{"source_name":3150},{"source_name":3150},{"source_name":7798},{"source_name":7693},{"source_name":405},{"source_name":475},{"source_name":7690},{"source_name":7730},{"source_name":7786},{"source_name":8099},{"source_name":7730},{"source_name":7750},{"source_name":8977},"Jason M. Lemkin (SaaStr)",{"source_name":7713},{"source_name":7775},{"source_name":3150},{"source_name":3150},{"source_name":7769},{"source_name":3150},{"source_name":3150},{"source_name":7722},{"source_name":3150},{"source_name":7886},{"source_name":3150},{"source_name":3150},{"source_name":3150},{"source_name":7713},{"source_name":7704},{"source_name":3150},{"source_name":7822},{"source_name":4067},{"source_name":7922},{"source_name":8896},{"source_name":8896},{"source_name":8488},{"source_name":3150},{"source_name":3150},{"source_name":7679},{"source_name":7728},{"source_name":8488},{"source_name":3150},{"source_name":7704},{"source_name":7704},{"source_name":3150},{"source_name":7769},{"source_name":7690},{"source_name":475},{"source_name":7686},{"source_name":3150},{"source_name":2049},{"source_name":7829},{"source_name":7769},{"source_name":3150},{"source_name":8488},{"source_name":7699},{"source_name":1148},{"source_name":7769},{"source_name":1148},{"source_name":1148},{"source_name":7767},{"source_name":8142},{"source_name":2049},{"source_name":7890},{"source_name":7786},{"source_name":3150},{"source_name":7682},{"source_name":1439},{"source_name":1148},{"source_name":8020},{"source_name":405},{"source_name":7682},{"source_name":8622},{"source_name":7704},{"source_name":3150},{"source_name":7786},{"source_name":7704},{"source_name":7742},{"sour
ce_name":7682},{"source_name":537},{"source_name":7709},{"source_name":3150},{"source_name":3150},{"source_name":7679},{"source_name":7769},{"source_name":7699},{"source_name":8018},{"source_name":7869},{"source_name":1148},{"source_name":3150},{"source_name":3150},{"source_name":7822},{"source_name":7800},{"source_name":3150},{"source_name":7890},{"source_name":405},{"source_name":3150},{"source_name":7730},{"source_name":537},{"source_name":7752},{"source_name":3150},{"source_name":3150},{"source_name":3150},{"source_name":3150},{"source_name":2892},{"source_name":7730},{"source_name":3150},{"source_name":7699},{"source_name":475},{"source_name":7762},{"source_name":7699},{"source_name":475},{"source_name":7760},{"source_name":7869},{"source_name":405},{"source_name":214},{"source_name":7767},{"source_name":3150},{"source_name":7769},{"source_name":3150},{"source_name":475},{"source_name":7699},{"source_name":7971},{"source_name":3150},{"source_name":4067},{"source_name":3150},{"source_name":7764},{"source_name":475},{"source_name":7730},{"source_name":7686},{"source_name":7682},{"source_name":1148},{"source_name":7865},{"source_name":7795},{"source_name":7699},{"source_name":292},{"source_name":3150},{"source_name":7718},{"source_name":2892},{"source_name":8020},{"source_name":3150},{"source_name":537},{"source_name":537},{"source_name":7786},{"source_name":7781},{"source_name":7989},{"source_name":7699},{"source_name":7679},{"source_name":7767},{"source_name":7699},{"source_name":3150},{"source_name":7800},{"source_name":7769},{"source_name":7989},{"source_name":7704},{"source_name":292},{"source_name":7873},{"source_name":7989},{"source_name":214},{"source_name":3150},{"source_name":7886},{"source_name":7819},{"source_name":475},{"source_name":7865},{"source_name":3150},{"source_name":1148},{"source_name":7890},{"source_name":405},{"source_name":1148},{"source_name":7890},{"source_name":8488},{"source_name":1148},{"source_name":8488},{"source_name":7890},{"sour
ce_name":7989},{"source_name":8892},{"source_name":7968},{"source_name":8020},{"source_name":8896},{"source_name":3597},{"source_name":1148},{"source_name":1148},{"source_name":1148},{"source_name":3150},{"source_name":1509},{"source_name":3150},{"source_name":7786},{"source_name":7786},{"source_name":3150},{"source_name":7762},{"source_name":7730},{"source_name":7769},{"source_name":7704},{"source_name":7704},{"source_name":7886},{"source_name":3150},{"source_name":7829},{"source_name":3150},{"source_name":7762},{"source_name":3150},{"source_name":8764},{"source_name":7795},{"source_name":7980},{"source_name":3150},{"source_name":3150},{"source_name":7989},{"source_name":7922},{"source_name":7878},{"source_name":8852},{"source_name":7730},{"source_name":1899},{"source_name":7767},{"source_name":4067},{"source_name":7806},{"source_name":475},{"source_name":3150},{"source_name":3150},{"source_name":7890},{"source_name":475},{"source_name":8151},{"source_name":7730},{"source_name":7795},{"source_name":8018},{"source_name":292},{"source_name":292},{"source_name":7905},{"source_name":3150},{"source_name":3150},{"source_name":7730},{"source_name":3150},{"source_name":3150},{"source_name":7746},{"source_name":7728},{"source_name":7699},{"source_name":7989},{"source_name":8622},{"source_name":7905},{"source_name":7819},{"source_name":7873},{"source_name":405},{"source_name":2049},{"source_name":7699},{"source_name":475},{"source_name":4067},{"source_name":7890},{"source_name":1148},{"source_name":7679},{"source_name":3150},{"source_name":7693},{"source_name":3150},{"source_name":7693},{"source_name":7989},{"source_name":1439},{"source_name":475},{"source_name":7704},{"source_name":3150},{"source_name":7704},{"source_name":7842},{"source_name":8142},{"source_name":7706},{"source_name":7702},{"source_name":4067},{"source_name":7922},{"source_name":7865},{"source_name":7795},{"source_name":475},{"source_name":3150},{"source_name":8174},{"source_name":7760},{"source_name":7890
},{"source_name":3150},{"source_name":292},{"source_name":475},{"source_name":1148},{"source_name":3150},{"source_name":7865},{"source_name":7769},{"source_name":7732},{"source_name":214},{"source_name":3150},{"source_name":214},{"source_name":405},{"source_name":3150},{"source_name":8174},{"source_name":8020},{"source_name":3150},{"source_name":2049},{"source_name":7742},{"source_name":7778},{"source_name":7982},{"source_name":7968},{"source_name":1148},{"source_name":2049},{"source_name":8220},{"source_name":3150},{"source_name":8360},{"source_name":9261},"leerob",{"source_name":475},{"source_name":7829},{"source_name":1509},{"source_name":7730},{"source_name":7865},{"source_name":3150},{"source_name":8099},{"source_name":7704},{"source_name":1148},{"source_name":8622},{"source_name":3150},{"source_name":7890},{"source_name":3150},{"source_name":3150},{"source_name":7806},{"source_name":7767},{"source_name":7822},{"source_name":3150},{"source_name":7767},{"source_name":8071},{"source_name":475},{"source_name":7880},{"source_name":3150},{"source_name":7752},{"source_name":7702},{"source_name":3150},{"source_name":7795},{"source_name":7822},{"source_name":2049},{"source_name":7890},{"source_name":1148},{"source_name":1148},{"source_name":7706},{"source_name":7822},{"source_name":7781},{"source_name":7798},{"source_name":8670},{"source_name":3150},{"source_name":1148},{"source_name":1148},{"source_name":292},{"source_name":7800},{"source_name":8144},{"source_name":7730},{"source_name":537},{"source_name":3150},{"source_name":7709},{"source_name":7890},{"source_name":7718},{"source_name":3150},{"source_name":7709},{"source_name":7873},{"source_name":7760},{"source_name":7822},{"source_name":7686},{"source_name":3150},{"source_name":7699},{"source_name":8071},{"source_name":3150},{"source_name":3150},{"source_name":7829},{"source_name":7890},{"source_name":1148},{"source_name":9326},"Brad 
Frost",{"source_name":7713},{"source_name":7869},{"source_name":7869},{"source_name":1439},{"source_name":7682},{"source_name":7730},{"source_name":2892},{"source_name":7732},{"source_name":7886},{"source_name":2892},{"source_name":8099},{"source_name":214},{"source_name":3150},{"source_name":8764},{"source_name":8018},{"source_name":8220},{"source_name":7713},{"source_name":7679},{"source_name":1148},{"source_name":3150},{"source_name":475},{"source_name":7971},{"source_name":7905},{"source_name":214},{"source_name":7730},{"source_name":7873},{"source_name":7690},{"source_name":214},{"source_name":7679},{"source_name":7728},{"source_name":7873},{"source_name":7696},{"source_name":7786},{"source_name":3150},{"source_name":3150},{"source_name":405},{"source_name":8020},{"source_name":1148},{"source_name":405},{"source_name":7844},{"source_name":7968},{"source_name":7968},{"source_name":7989},{"source_name":292},{"source_name":7699},{"source_name":7767},{"source_name":7786},{"source_name":7980},{"source_name":7890},{"source_name":7693},{"source_name":214},{"source_name":3150},{"source_name":475},{"source_name":292},{"source_name":537},{"source_name":3150},{"source_name":3150},{"source_name":7760},{"source_name":7713},{"source_name":7786},{"source_name":7730},{"source_name":7699},{"source_name":8142},{"source_name":3150},{"source_name":7769},{"source_name":7795},{"source_name":7704},{"source_name":475},{"source_name":7905},{"source_name":3150},{"source_name":7760},{"source_name":7873},{"source_name":3150},{"source_name":3150},{"source_name":405},{"source_name":7693},{"source_name":7696},{"source_name":3150},{"source_name":292},{"source_name":7865},{"source_name":3150},{"source_name":3150},{"source_name":537},{"source_name":3150},{"source_name":3150},{"source_name":7704},{"source_name":7798},{"source_name":405},{"source_name":9416},"arXiv 
cs.AI",{"source_name":475},{"source_name":7706},{"source_name":7709},{"source_name":7730},{"source_name":7718},{"source_name":7709},{"source_name":292},{"source_name":3150},{"source_name":7730},{"source_name":3150},{"source_name":292},{"source_name":7878},{"source_name":3150},{"source_name":7713},{"source_name":537},{"source_name":475},{"source_name":7702},{"source_name":7786},{"source_name":7690},{"source_name":7730},{"source_name":7798},{"source_name":7713},{"source_name":1148},{"source_name":1509},{"source_name":8852},{"source_name":7886},{"source_name":7819},{"source_name":3150},{"source_name":7693},{"source_name":7775},{"source_name":3150},{"source_name":7971},{"source_name":7679},{"source_name":7699},{"source_name":7842},{"source_name":537},{"source_name":292},{"source_name":8099},{"source_name":405},{"source_name":7704},{"source_name":7873},{"source_name":7699},{"source_name":7968},{"source_name":7767},{"source_name":7693},{"source_name":475},{"source_name":475},{"source_name":405},{"source_name":8018},{"source_name":8572},{"source_name":7989},{"source_name":1439},{"source_name":1148},{"source_name":8892},{"source_name":1148},{"source_name":1148},{"source_name":8896},{"source_name":7693},{"source_name":1148},{"source_name":3597},{"source_name":8892},{"source_name":1148},{"source_name":7890},{"source_name":1148},{"source_name":405},{"source_name":8360},{"source_name":7968},{"source_name":7890},{"source_name":8884},{"source_name":9487},"Andrej Karpathy 
Blog",{"source_name":8892},{"source_name":405},{"source_name":405},{"source_name":7890},{"source_name":405},{"source_name":8896},{"source_name":405},{"source_name":8764},{"source_name":7890},{"source_name":405},{"source_name":1148},{"source_name":7989},{"source_name":8896},{"source_name":8896},{"source_name":7890},{"source_name":8896},{"source_name":7693},{"source_name":1148},{"source_name":8896},{"source_name":8373},{"source_name":7844},{"source_name":405},{"source_name":1439},{"source_name":3597},{"source_name":1439},{"source_name":7989},{"source_name":8896},{"source_name":1439},{"source_name":1148},{"source_name":7989},{"source_name":405},{"source_name":8373},{"source_name":7989},{"source_name":7989},{"source_name":7989},{"source_name":7989},{"source_name":7989},{"source_name":8896},{"source_name":1439},{"source_name":405},{"source_name":8892},{"source_name":1148},{"source_name":405},{"source_name":1148},{"source_name":405},{"source_name":1439},{"source_name":1439},{"source_name":405},{"source_name":405},{"source_name":7968},{"source_name":1148},{"source_name":405},{"source_name":405},{"source_name":405},{"source_name":405},{"source_name":7989},{"source_name":3597},{"source_name":1439},{"source_name":1148},{"source_name":7844},{"source_name":1439},{"source_name":405},{"source_name":7693},{"source_name":7762},{"source_name":7762},{"source_name":1148},{"source_name":405},{"source_name":405},{"source_name":8488},{"source_name":1148},{"source_name":1148},{"source_name":1148},{"source_name":3597},{"source_name":7989},{"source_name":1148},{"source_name":1148},{"source_name":405},{"source_name":3597}]