<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[cengiz han]]></title><description><![CDATA[Production-ready AI-native engineering—not demos, not hype, just what actually works when you're shipping to prod]]></description><link>https://www.cengizhan.com</link><image><url>https://substackcdn.com/image/fetch/$s_!tX6C!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc245cac-7de8-41b4-b86a-7a23db932e6d_1024x1024.png</url><title>cengiz han</title><link>https://www.cengizhan.com</link></image><generator>Substack</generator><lastBuildDate>Mon, 04 May 2026 11:32:52 GMT</lastBuildDate><atom:link href="https://www.cengizhan.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Cengiz Han]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[hancengiz@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[hancengiz@substack.com]]></itunes:email><itunes:name><![CDATA[Cengiz Han]]></itunes:name></itunes:owner><itunes:author><![CDATA[Cengiz Han]]></itunes:author><googleplay:owner><![CDATA[hancengiz@substack.com]]></googleplay:owner><googleplay:email><![CDATA[hancengiz@substack.com]]></googleplay:email><googleplay:author><![CDATA[Cengiz Han]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[fabriqa.ai: turning scattered AI coding tools into one coordinated, spec-driven workspace]]></title><description><![CDATA[The AI coding tools spectrum itself is actually a good way of working.]]></description><link>https://www.cengizhan.com/p/fabriqaai-turning-scattered-ai-coding</link><guid 
isPermaLink="false">https://www.cengizhan.com/p/fabriqaai-turning-scattered-ai-coding</guid><dc:creator><![CDATA[Cengiz Han]]></dc:creator><pubDate>Sun, 29 Mar 2026 15:08:42 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!N4Le!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9f91b03-24b8-45f8-a95e-b4cb3f750595_2312x2154.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!N4Le!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9f91b03-24b8-45f8-a95e-b4cb3f750595_2312x2154.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!N4Le!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9f91b03-24b8-45f8-a95e-b4cb3f750595_2312x2154.png 424w, https://substackcdn.com/image/fetch/$s_!N4Le!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9f91b03-24b8-45f8-a95e-b4cb3f750595_2312x2154.png 848w, https://substackcdn.com/image/fetch/$s_!N4Le!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9f91b03-24b8-45f8-a95e-b4cb3f750595_2312x2154.png 1272w, https://substackcdn.com/image/fetch/$s_!N4Le!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9f91b03-24b8-45f8-a95e-b4cb3f750595_2312x2154.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!N4Le!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9f91b03-24b8-45f8-a95e-b4cb3f750595_2312x2154.png" width="1456" height="1356" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c9f91b03-24b8-45f8-a95e-b4cb3f750595_2312x2154.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1356,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:696326,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.cengizhan.com/i/192513447?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9f91b03-24b8-45f8-a95e-b4cb3f750595_2312x2154.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!N4Le!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9f91b03-24b8-45f8-a95e-b4cb3f750595_2312x2154.png 424w, https://substackcdn.com/image/fetch/$s_!N4Le!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9f91b03-24b8-45f8-a95e-b4cb3f750595_2312x2154.png 848w, https://substackcdn.com/image/fetch/$s_!N4Le!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9f91b03-24b8-45f8-a95e-b4cb3f750595_2312x2154.png 1272w, https://substackcdn.com/image/fetch/$s_!N4Le!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9f91b03-24b8-45f8-a95e-b4cb3f750595_2312x2154.png 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>The AI coding tools spectrum itself is actually a good way of working. Each tool brings its own strengths for different contexts, and using multiple tools across a project is natural. The problem is what happens in between. I was working on a spec-driven development project recently and found myself reaching for Codex when I wanted autonomous execution against my specifications. Codex is genuinely good at following structured specs and running through implementation tasks without hand-holding. 
But when the implementation introduced an edge case bug, I tried troubleshooting with Codex multiple times and it could not find the issue it had created. The bug was subtle enough that the same model that wrote the code kept missing it on review.</p><p>So I switched to Claude Code CLI. But I needed to give it the full context of what had been built, what the specs were, and where things had gone wrong. I actually asked Codex to write me a handover prompt first, a summary of the current state, the implementation decisions, and the specific failure. I copied that prompt into Claude Code, and as I expected, it identified the edge case almost immediately. That entire workflow, using one tool&#8217;s strength to compensate for another&#8217;s blind spot, with a manual copy-paste handover in between, is something I do constantly. It works, but it is held together with clipboard and memory.</p><p>That is the problem fabriqa solves. fabriqa is not another editor. It is an AI Development Orchestration Layer: a coordinated, spec-driven workspace for the tools you already use.</p><div><hr></div><h2><strong>Why You Should Use It</strong></h2><p>There are three big reasons to use fabriqa.</p><p>First, the unified worktree experience is already useful today. If you are already paying for tools like Claude Code, Codex, Cursor, OpenCode, Gemini CLI, or Kiro CLI, fabriqa gives them one shared place to work. You can switch tools without losing the thread, keep the same project history visible, inspect diffs and git changes in one place, and avoid the usual copy-paste handoff mess. 
The broader ACP lineup already works too:</p><ul><li><p>Amp</p></li><li><p>Auggie CLI</p></li><li><p>Autohand Code</p></li><li><p>Claude Agent</p></li><li><p>Cline</p></li><li><p>Codebuddy Code</p></li><li><p>Corust Agent</p></li><li><p>crow-cli</p></li><li><p>DimCode</p></li><li><p>Factory Droid</p></li><li><p>GitHub Copilot</p></li><li><p>goose</p></li><li><p>Junie</p></li><li><p>Kilo</p></li><li><p>Kimi CLI</p></li><li><p>Minion Code</p></li><li><p>Mistral Vibe</p></li><li><p>pi ACP</p></li><li><p>Qoder CLI</p></li><li><p>Qwen Code</p></li><li><p>Stakpak</p></li></ul><p>fabriqa fetches the ACP registry and hot-swaps new entries into the catalog, so this list keeps growing without me having to ship a release every time a new tool shows up.</p><p>Second, specs are the real point. That is the part I care about most, and it is the reason I think fabriqa can become much more than a tool switcher. The specs module is coming in April 2026, in a couple of weeks. That is what I am focused on getting right at the moment. I believe spec-driven development is a fundamental skill everyone needs to learn if they want agents to work like real teammates instead of glorified autocomplete. If you cannot define the work clearly, you cannot expect autonomous agents to execute it well.</p><p>Third, multi-agent orchestration is where this goes next. That part is arriving in phases. Today, worktrees plus manual prompts already let you do a practical version of multi-agent work inside fabriqa. But that is not the final goal. The real goal is: define specs, define dependencies and execution order, then fire up multi-agent profiles that can run those tasks fully autonomously. I want that layer to sit on top of a best-in-class specs system, not on top of vague prompts. That is why I am pushing specs first and the deeper orchestration layer after that.</p><p>fabriqa is in alpha, and it is free right now. When I start charging, it will be a small platform fee. 
I am not going to charge for tokens or meter your LLM usage. The model side is BYOK, bring your own keys. The agent side is BYOS, bring your own subscriptions. Native LLM integrations through API keys are already there, but they are still limited. The full agentic loop on that side is not yet where I want it.</p><p>It runs as a desktop application on macOS, Windows, and Linux. I started out maintaining a CLI TUI and the desktop app together, but for now I have stopped trying to keep the TUI at parity with desktop. That is intentional. I think the popularity of CLIs is decreasing, so the desktop experience is the primary focus. If there is real demand, I will continue investing in the TUI more seriously.</p><p>A session in fabriqa is not just a chat thread. It is a full execution context backed by a database that tracks what actually happened.</p><div><hr></div><h2><strong>What Is Coming</strong></h2><p>This alpha focuses on the foundation: coordinated execution across the tools you already pay for, plus a user experience that feels like a real application, not a weekend side project. Since releasing fabriqa on March 3, 2026, I have been using it as my main daily driver. Until mid-February 2026 I was using Claude Code more heavily. Around the GPT-5.3 release, Codex became my main subscription inside fabriqa. I still keep a Claude Code subscription and use it where I find it better, especially for more interactive troubleshooting sessions and a lot of UI design work, but not only for those.</p><p>But the main unique value proposition of fabriqa is not just putting existing tools into one window. It is spec-driven execution. That layer is under active development now and is planned for release in April 2026. I am already testing these workflows myself with a limited number of early testers. 
If you want to get into the specs testing group and do not want to wait another month, reach out.</p><p>After that, multi-agent orchestration builds on top of those specs. The goal is not random agent swarms. The goal is coordinated execution against explicit intent, structured artifacts, and clear workflow state, with git worktree isolation and conflict detection where that makes sense.</p><div><hr></div><h2><strong>The Details That Matter</strong></h2><p>Global hotkeys bring fabriqa to the front and send it to the background instantly. In the chat view, pressing the right and left arrow keys acts as page up and page down. Command-up takes you to the top of the conversation. There is a strict sticky scroll behavior that always keeps your last message to the AI visible at the top of the viewport. If you are a multitasker like I am and you switch back to a fabriqa chat after working on something else, you do not have to wonder what you were doing. Your last message is right there, and you immediately have context on where you left off. When you scroll up, your previous message stays anchored and visible.</p><p>There is light mode, dark mode, and a bunch of themes. Git changes are visible directly in the interface. If you are curious about how I implemented ACP, I actually kept the ACP debug logs that I used during development open as a feature. You can open the ACP debug panel and see the raw protocol messages going back and forth between fabriqa and the agents. The settings page, the command palette, and the keyboard shortcuts have a bunch of things in them that are not common in tools like this yet. 
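</p><p>For a sense of what the ACP debug panel shows: ACP is JSON-RPC 2.0 under the hood, so a streamed session update has roughly this shape (an illustrative sketch of the protocol, not an exact capture from fabriqa):</p><pre><code><code>{
  "jsonrpc": "2.0",
  "method": "session/update",
  "params": {
    "sessionId": "sess_abc123",
    "update": {
      "sessionUpdate": "agent_message_chunk",
      "content": { "type": "text", "text": "Looking at the failing test..." }
    }
  }
}
</code></code></pre><p>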
I am prioritizing features over documentation right now, so some of these are things you discover by exploring the application itself.</p><div><hr></div><h2><strong>Where This Is Going</strong></h2><p>I have been writing about spec-driven development and the <a href="https://www.cengizhan.com/p/the-ai-native-way-of-building">Explore-Specify-Engineer workflow</a> for a while. fabriqa is where those ideas become tooling. Mark my words: if you are not working with specs yet, you are missing where this is going. Making specs a first-class part of how you build should be a top priority.</p><p>The specs-driven system in fabriqa is being built to be generic, not hardcoded to one opinionated flow. It will include my own <a href="http://specs.md/">specs.md</a> FIRE flow, AWS AI-DLC flows as implemented in <a href="https://specs.md/">specs.md</a>, and commonly used patterns like BMAD-METHOD as built-in fabriqa workflows. I am also working on a meta-workflow for fabriqa itself, where you can talk to fabriqa agents to design your own workflows around your own needs, with those agents being aware of fabriqa&#8217;s workflow DSL instead of treating workflows like raw text.</p><p>The release planned for May 2026 adds a marketplace so fabriqa users can share their own workflows with each other. That matters because the long-term goal is not just to ship my workflows. It is to make fabriqa a system where good workflows can be created, evolved, reused, and shared.</p><p>fabriqa is also architected in a way that lets me run it as a hosted web application later. I plan to offer that as fabriqa.cloud with the exact same core experience. That is possible because the frontend is React-based and the server-side architecture is cleanly separated, more like Slack than like a one-off desktop app. 
The hosted version will run in cloud sandboxes, but that is not the only future I care about.</p><p>I also want a mobile app that can connect to your fabriqa instance on fabriqa.cloud, on your own machine, or on your own premises. I am planning around private-network approaches like Tailscale so fabriqa.cloud does not have to be mandatory for mobile. I want fabriqa to be something you can keep building with while you are on the go, not something that traps you into one deployment model.</p><div><hr></div><h2><strong>What You Get Today</strong></h2><p>If you use fabriqa today, you get a real desktop workspace for coordinating the AI coding tools you already pay for.</p><ul><li><p>One place to switch between tools like Claude Code, Codex, Gemini CLI, OpenCode, Cursor, and more without losing context</p></li><li><p>A unified worktree and git-aware workflow where chats, diffs, progress, and handoffs live together</p></li><li><p>A practical path to multi-agent execution today through worktrees and manual prompt coordination, with the specs layer landing in April</p></li><li><p>Access to the ACP ecosystem without vendor lock-in, plus occasional free-model opportunities that come through platforms like OpenCode and Kilo</p></li></ul><p>The desktop builds are available at <strong><a href="https://fabriqa.ai/">fabriqa.ai</a>. Go download fabriqa for free</strong> and start using it with your own subscriptions like Claude Code and Codex. You can also benefit from free model offers that show up through platforms like OpenCode and Kilo. For example, OpenCode is currently hosting Xiaomi MiMo-V2-Pro free for OpenCode users for a limited time, and those kinds of campaign models are useful inside fabriqa too.</p><p>fabriqa itself is free during the alpha. I am not asking anyone to sign up right now. Just download fabriqa and start using it. 
Later, when I have user accounts or email capture in place, the people who helped me make fabriqa better during this alpha will get free fabriqa access and should not need to pay that small platform fee. I have not really settled on a pricing model yet, and I want to be honest about that. My instinct is that it should cost less than a Starbucks coffee, not something meaningful compared to what you already spend on the tools around it. My main goal right now is to get fabriqa into the hands of agentic AI developers and teams so they can help me make fabriqa the best AI Development Orchestration Layer for software development in the world. If you want early access to the specs workflows, reach out. fabriqa is still early, but amazing things are planned. It will be nothing like what already exists.</p>]]></content:encoded></item><item><title><![CDATA[Building a Permanent Archive of Every AI Conversation]]></title><description><![CDATA[with Claude Code]]></description><link>https://www.cengizhan.com/p/building-a-permanent-archive-of-every</link><guid isPermaLink="false">https://www.cengizhan.com/p/building-a-permanent-archive-of-every</guid><dc:creator><![CDATA[Cengiz Han]]></dc:creator><pubDate>Sun, 18 Jan 2026 14:09:01 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mMj7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ec546dc-7b2b-490d-b5be-677fe4e113e7_1200x600.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>Every interaction I have with Claude Code generates valuable data. The prompts I write, the clarifications I need, the approaches that work and the ones that fail. This data sits in <code>~/.claude/projects/</code> as JSONL files, but here is the problem: Claude Code automatically deletes these logs after 30 days by default. All that context, all those conversations, gone. 
I recently made a change to how I work with this data that preserves it permanently and opens up possibilities I have not fully explored yet.</p><p>The change is simple: I now export all my Claude Code logs to markdown files and track them in a git repository. Instead of regenerating the entire archive each time, the tool appends only new sessions. This creates a permanent, version-controlled history of every AI-assisted coding conversation I have ever had.</p><h2><strong>Why Markdown and Git</strong></h2><p>The JSONL format that Claude Code uses is optimized for machine processing, not human reading. I built <a href="https://github.com/fabriqaai/claude-code-logs">claude-code-logs</a> to convert these into readable HTML pages with search functionality, and that solved the immediate problem of finding past conversations. But I realized I was missing something more fundamental: a persistent record that grows over time and survives across machines, operating system reinstalls, and years of development work.</p><p>Git provides exactly what I need here. Each conversation becomes a commit. The history is immutable. I can see how my prompting style evolves over months or years. I can grep through years of conversations with familiar tools. And because it is just a git repository, I can sync it across machines, back it up to remote origins, and know that this knowledge is not going anywhere.</p><p>The markdown format matters because it is both human-readable and machine-parseable. I can open any conversation in my editor and read it directly. I can also write scripts that analyze patterns across thousands of files. The format serves both purposes without compromise.</p><h2><strong>What This Enables</strong></h2><p>The most obvious application is what I am calling a &#8220;year wrapped&#8221; analysis. 
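</p><p>As a sketch of what that kind of retrospective can start from, a few lines of Python can tally sessions per project straight off the archive (this assumes one folder per project with one markdown file per session; adjust to the layout claude-code-logs actually emits):</p><pre><code><code>from collections import Counter
from pathlib import Path

def sessions_per_project(archive_root: str) -> Counter:
    """Count exported conversation files under each top-level project folder."""
    root = Path(archive_root)
    counts = Counter()
    for md_file in root.rglob("*.md"):
        rel = md_file.relative_to(root)
        # Attribute each session to its project directory; files sitting at
        # the top level (no project folder) are grouped under "(root)".
        project = rel.parts[0] if len(rel.parts) > 1 else "(root)"
        counts[project] += 1
    return counts
</code></code></pre><p>Running that against <code>~/claude-code-logs</code> and calling <code>counts.most_common(5)</code> is already a rough &#8220;most active projects&#8221; list for the year.</p><p>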
At the end of 2026, I will have a complete record of every conversation: which projects I worked on, which problems I struggled with, which approaches I kept returning to, which tools I underutilized. This is the kind of retrospective that requires data collected over time; you cannot reconstruct it later from memory.</p><p>But the more interesting applications are the ones I am discovering as I think through the possibilities. I already built a <a href="https://www.cengizhan.com/p/claude-code-prompt-coach-skill-to">Prompt Coach skill</a> (<a href="https://github.com/hancengiz/claude-code-prompt-coach-skill">GitHub</a>) that analyzes recent Claude Code sessions and scores prompt quality against Anthropic&#8217;s official guidelines. With a permanent archive, I can run this analysis across months or years of data. I can see whether my prompts are actually improving over time, or whether I keep making the same mistakes.</p><p>I use <a href="https://specs.md/">specs.md</a> for spec-driven development on most of my projects. The philosophy is simple: write a detailed specification before coding, then let Claude implement against that spec. The opposite approach is vibe coding, where you iterate through ad-hoc prompts and hope for the best. With a permanent log archive, I can measure the ratio between these two modes. When I start a project with <a href="http://specs.md/">specs.md</a>, how many follow-up prompts do I need? Is that number decreasing as I write better specs? Is it decreasing as models improve? The data to answer these questions now exists.</p><h2><strong>The Unknown Future Uses</strong></h2><p>There is a category of value I cannot predict yet. Having a complete record of how I worked with AI tools from 2026 onwards creates optionality for future analysis. Perhaps in 2027 there will be tools for analyzing developer-AI collaboration patterns that do not exist today. Perhaps I will want to train a personal model on my coding style and preferences. 
Perhaps some researcher will want to study how early adopters of AI coding tools evolved their practices over time.</p><p>I do not know what I will want to do with this data in five years. But I know that if I do not capture it now, I will not have the option later. Storage on GitHub is free. The cost of not having the data when you need it is potentially significant.</p><h2><strong>Implementation Details</strong></h2><p>The setup is straightforward. I run <code>claude-code-logs serve --watch</code> which generates markdown files to <code>~/claude-code-logs</code> by default and watches for new conversations in real-time. The tool only processes new sessions since the last run, skipping files that have not changed, which makes it practical to run continuously without regenerating everything.</p><p>I added a simple git workflow: after generating new logs, commit them with a timestamp. This happens automatically on a schedule. The result is a repository that grows organically as I work, without requiring any conscious effort to maintain.</p><p>For anyone who wants to replicate this approach, the key insight is that the value compounds over time. Starting now means having a richer dataset next year. The tooling exists. The storage is free on GitHub. The only question is whether you care enough about understanding your own development practices to capture the data while it is being generated.</p><h2><strong>What Comes Next</strong></h2><p>I am planning to build analysis tools specifically designed for this archive format. The Prompt Coach skill works on recent sessions, but a persistent archive enables long-term analysis that was not possible before. Trends over months. Comparisons across projects. Correlations between prompting patterns and project outcomes.</p><p>The archive also raises interesting questions about privacy and sharing. My conversations contain proprietary code, and half-formed ideas that I would not want published. 
But anonymized patterns, aggregated statistics, and general insights could be valuable to share with the community. The right abstraction layer would let me analyze everything locally while sharing only the meta-patterns publicly.</p><h2><strong>How to Set This Up</strong></h2><p>If you want to replicate this approach, the setup takes a few minutes.</p><p>First, install claude-code-logs via Homebrew:</p><pre><code><code>brew tap fabriqaai/tap
brew install claude-code-logs
</code></code></pre><p>Create a private git repository on GitHub for your logs. Clone it to the default output location that claude-code-logs uses:</p><pre><code><code>git clone git@github.com:yourusername/your-private-logs-repo.git ~/claude-code-logs
</code></code></pre><p>Run the tool to generate markdown files from your existing Claude Code conversations:</p><pre><code><code>claude-code-logs serve
</code></code></pre><p>This generates markdown files in <code>~/claude-code-logs</code> and starts a local server for browsing. The tool only processes new or changed sessions, so subsequent runs are fast.</p><p>If you want to select specific projects instead of processing everything, use the <code>--list</code> flag for interactive project selection:</p><pre><code><code>claude-code-logs serve --list
</code></code></pre><p>For continuous monitoring that automatically picks up new conversations as they happen, use the <code>--watch</code> flag:</p><pre><code><code>claude-code-logs serve --watch
</code></code></pre><p>After generating new logs, commit and push the changes:</p><pre><code><code>cd ~/claude-code-logs
git add .
git commit -m "Update logs $(date +%Y-%m-%d)"
git push
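</code></code></pre><p>If you would rather not run that checkpoint by hand, a nightly crontab entry along these lines works (an illustrative sketch; note that <code>%</code> is special in crontab and must be escaped):</p><pre><code><code># m  h  dom mon dow  command
55 23 *   *   *   cd ~/claude-code-logs && git add -A && git commit -m "Update logs $(date +\%Y-\%m-\%d)" && git push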
</code></code></pre><p>You can automate this with a cron job or run it manually whenever you want to checkpoint your archive. The key is consistency: the value compounds over time, and starting now means having a richer dataset next year.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://github.com/fabriqaai/claude-code-logs" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mMj7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ec546dc-7b2b-490d-b5be-677fe4e113e7_1200x600.png 424w, https://substackcdn.com/image/fetch/$s_!mMj7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ec546dc-7b2b-490d-b5be-677fe4e113e7_1200x600.png 848w, https://substackcdn.com/image/fetch/$s_!mMj7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ec546dc-7b2b-490d-b5be-677fe4e113e7_1200x600.png 1272w, https://substackcdn.com/image/fetch/$s_!mMj7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ec546dc-7b2b-490d-b5be-677fe4e113e7_1200x600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mMj7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ec546dc-7b2b-490d-b5be-677fe4e113e7_1200x600.png" width="1200" height="600" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4ec546dc-7b2b-490d-b5be-677fe4e113e7_1200x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:600,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;fabriqaai/claude-code-logs&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:&quot;https://github.com/fabriqaai/claude-code-logs&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="fabriqaai/claude-code-logs" title="fabriqaai/claude-code-logs" srcset="https://substackcdn.com/image/fetch/$s_!mMj7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ec546dc-7b2b-490d-b5be-677fe4e113e7_1200x600.png 424w, https://substackcdn.com/image/fetch/$s_!mMj7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ec546dc-7b2b-490d-b5be-677fe4e113e7_1200x600.png 848w, https://substackcdn.com/image/fetch/$s_!mMj7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ec546dc-7b2b-490d-b5be-677fe4e113e7_1200x600.png 1272w, https://substackcdn.com/image/fetch/$s_!mMj7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ec546dc-7b2b-490d-b5be-677fe4e113e7_1200x600.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.cengizhan.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.cengizhan.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[A new Simple Flow added to SPECS.MD: for solo devs and small teams]]></title><description><![CDATA[Kiro-Style Specs for Any AI Coding Tool]]></description><link>https://www.cengizhan.com/p/simple-flow-lightweight-specs-for</link><guid isPermaLink="false">https://www.cengizhan.com/p/simple-flow-lightweight-specs-for</guid><dc:creator><![CDATA[Cengiz Han]]></dc:creator><pubDate>Tue, 13 Jan 2026 07:21:02 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!9Dhf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F305b9953-efe1-4952-8e5c-f7462c23cddb_2816x1504.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a href="http://specs.md/">specs.md</a> now has a new flow: Simple Flow.</p><p>It&#8217;s spec-driven development stripped down to three phases and one agent.</p><p>Imagine Kiro specs in any AI coding tool you like.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9Dhf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F305b9953-efe1-4952-8e5c-f7462c23cddb_2816x1504.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9Dhf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F305b9953-efe1-4952-8e5c-f7462c23cddb_2816x1504.png 424w, https://substackcdn.com/image/fetch/$s_!9Dhf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F305b9953-efe1-4952-8e5c-f7462c23cddb_2816x1504.png 848w, https://substackcdn.com/image/fetch/$s_!9Dhf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F305b9953-efe1-4952-8e5c-f7462c23cddb_2816x1504.png 1272w, https://substackcdn.com/image/fetch/$s_!9Dhf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F305b9953-efe1-4952-8e5c-f7462c23cddb_2816x1504.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!9Dhf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F305b9953-efe1-4952-8e5c-f7462c23cddb_2816x1504.png" width="1456" height="778" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/305b9953-efe1-4952-8e5c-f7462c23cddb_2816x1504.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:778,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8690415,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.cengizhan.com/i/184377482?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F305b9953-efe1-4952-8e5c-f7462c23cddb_2816x1504.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9Dhf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F305b9953-efe1-4952-8e5c-f7462c23cddb_2816x1504.png 424w, https://substackcdn.com/image/fetch/$s_!9Dhf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F305b9953-efe1-4952-8e5c-f7462c23cddb_2816x1504.png 848w, https://substackcdn.com/image/fetch/$s_!9Dhf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F305b9953-efe1-4952-8e5c-f7462c23cddb_2816x1504.png 1272w, https://substackcdn.com/image/fetch/$s_!9Dhf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F305b9953-efe1-4952-8e5c-f7462c23cddb_2816x1504.png 1456w" 
sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2><strong>What It Is</strong></h2><p><strong>Requirements</strong> &#8594; <strong>Design</strong> &#8594; <strong>Tasks</strong></p><p>One agent (<code>/specsmd-agent</code>) guides you through all three. No context switching between specialized agents. No complex handoffs. You describe what you want to build, and the agent generates documents at each phase, waiting for your approval before continuing.</p><p>Install it:</p><pre><code><code>npx specsmd@latest install</code></code></pre><p>Select &#8220;Simple&#8221; when prompted. 
Done.</p><p>Works with all major agentic coding tools:</p><ul><li><p>Claude Code</p></li><li><p>Cursor</p></li><li><p>Kiro (Amazon)</p></li><li><p>Windsurf</p></li><li><p>GitHub Copilot</p></li><li><p>Cline</p></li><li><p>Roo</p></li><li><p>Gemini</p></li><li><p>Codex (OpenAI)</p></li><li><p>Antigravity (Google)</p></li><li><p>OpenCode</p></li></ul><p>The installer auto-detects which tools you have. For Kiro, it creates a symlink so the editor detects your specs automatically.</p><h2><strong>How It Works</strong></h2><p>Invoke the agent with your feature idea:</p><pre><code><code>/specsmd-agent Create a user authentication system with email login</code></code></pre><p>The agent:</p><ol><li><p>Derives a feature name (<code>user-auth</code>)</p></li><li><p>Generates a requirements document with user stories and <a href="https://alistairmavin.com/ears/">EARS</a> acceptance criteria</p></li><li><p><strong>Waits for your approval</strong></p></li><li><p>Generates a technical design with architecture and data models</p></li><li><p><strong>Waits for your approval</strong></p></li><li><p>Generates numbered implementation tasks</p></li><li><p><strong>Waits for your approval</strong></p></li><li><p>Executes tasks one at a time, pausing after each</p></li></ol><p>The pattern is generate, then ask. Every phase requires explicit approval. Say &#8220;yes,&#8221; &#8220;approved,&#8221; or &#8220;looks good&#8221; to continue. Say anything else to trigger revision.</p><h2><strong>The Pause Is Intentional</strong></h2><p>By default, Simple Flow executes one task, then stops.</p><p>This is deliberate. You review what was built. You understand the changes. Then you decide whether to continue.</p><p>If you&#8217;re in flow and trust the direction, tell the agent: &#8220;continue until done&#8221; or &#8220;go yolo.&#8221; The guardrails are there. You choose when to lower them.</p><h2><strong>What Gets Generated</strong></h2><p>After completing the phases:</p><pre><code><code>specs/
&#9492;&#9472;&#9472; user-auth/
    &#9500;&#9472;&#9472; requirements.md    # What to build
    &#9500;&#9472;&#9472; design.md          # How to build it
    &#9492;&#9472;&#9472; tasks.md           # Step-by-step plan</code></code></pre><p>These documents persist. When you return to the project (or start a new session), the agent reads these files to understand context. The spec becomes the source of truth.</p><h2><strong>EARS Format for Requirements</strong></h2><p>Acceptance criteria use <a href="https://alistairmavin.com/ears/">EARS</a> (Easy Approach to Requirements Syntax):</p><pre><code>Event-driven: WHEN [trigger], THE [system] SHALL [response]
State-driven: WHILE [condition], THE [system] SHALL [response]
Unwanted behavior: IF [condition], THEN THE [system] SHALL [response]</code></pre><p>Example:</p><pre><code><code>WHEN user submits login form, THE Auth_System SHALL validate credentials
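WHILE user is logged in, THE Auth_System SHALL display the active session indicator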
IF password is invalid, THEN THE Auth_System SHALL display error message</code></code></pre><p>This format makes requirements unambiguous and testable.</p><h2><strong>Simple Flow vs AI-DLC</strong></h2><p>They&#8217;re independent flows for different project types. Not a progression.</p><p><strong>Simple Flow:</strong></p><ul><li><p>Solo developers and small teams</p></li><li><p>Prototypes and MVPs</p></li><li><p>Features where you want structure without ceremony</p></li><li><p>One agent, three phases</p></li></ul><p><strong>AI-DLC:</strong></p><ul><li><p>Production systems with multiple stakeholders</p></li><li><p>Teams needing coordination across phases</p></li><li><p>Complex domains requiring Domain-Driven Design</p></li><li><p>Four agents, full methodology</p></li></ul><p>Choose based on your context. Install one or the other.</p><h2><strong>Commands Reference</strong></h2><p><strong>Create new spec:</strong> <code>/specsmd-agent Create a [feature idea]</code></p><p><strong>Continue existing:</strong> <code>/specsmd-agent</code></p><p><strong>Resume specific spec:</strong> <code>/specsmd-agent --spec="user-auth"</code></p><p><strong>Execute next task:</strong> <code>/specsmd-agent What's the next task?</code></p><p><strong>Execute specific task:</strong> <code>/specsmd-agent Execute task 2.1</code></p><h2><strong>Getting Started</strong></h2><pre><code><code>npx specsmd@latest install</code></code></pre><p>Select Simple. Invoke the agent with your feature idea. Review and approve each phase.</p><p>That&#8217;s it. 
Structure without the overhead.</p><div><hr></div><p><strong>Resources:</strong></p><ul><li><p><a href="https://specs.md/">specs.md Documentation</a></p></li><li><p><a href="https://specs.md/simple-flow/quick-start">Simple Flow Quick Start</a></p></li><li><p><a href="https://specs.md/simple-flow/three-phases">Three Phases Deep Dive</a></p></li></ul><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.cengizhan.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading my blog!</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[llm-cli: Simple Aliases for Your Favorite AI Models]]></title><description><![CDATA[A simple tool to run Gemini and Claude CLI for single prompt calls]]></description><link>https://www.cengizhan.com/p/llm-cli-simple-aliases-for-your-favorite</link><guid isPermaLink="false">https://www.cengizhan.com/p/llm-cli-simple-aliases-for-your-favorite</guid><dc:creator><![CDATA[Cengiz Han]]></dc:creator><pubDate>Thu, 01 Jan 2026 18:49:10 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Qhi-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F84b59057-75c1-49d7-a09f-afc9e36f9741_1312x816.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I use multiple AI models every day. Claude for code. Gemini for quick queries. Sometimes I want Opus for deep reasoning, sometimes Haiku for fast responses. 
Sometimes Flash for something lightweight.</p><p>I found myself starting Claude CLI for single small prompts all day. Just quick calls, fire and forget.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.cengizhan.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.cengizhan.com/subscribe?"><span>Subscribe now</span></a></p><p><strong>The old way:</strong></p><p>Start Claude. Type my command. Or start Gemini. Type my command. Different tools for different models, every single time.</p><p>So I spent an afternoon building <strong>llm-cli</strong>. A tiny Go wrapper that gives me one simple interface for everything.</p><p><strong>Here&#8217;s a quick demo:</strong></p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;c79f04b0-69eb-4013-bebc-ac2ef84fc9d4&quot;,&quot;duration&quot;:null}"></div><p><em>See it in action on <a href="https://asciinema.org/a/Kjg9itpahwD8pO5KjU25Ev8lq">asciinema</a> if the video doesn&#8217;t load.</em></p><div><hr></div><p>I have multiple AI CLIs installed, each with different model naming conventions. I wanted one unified interface that could handle everything through simple aliases.</p><p>I just wanted to type <code>llm-cli opus "prompt"</code> and have it work.</p><div><hr></div><h2><strong>The Solution</strong></h2><p>llm-cli is a simple wrapper. That&#8217;s it. You call it with a model alias and a prompt, it routes to the right underlying CLI.</p><p><strong>The new way:</strong></p><pre><code><code># Simple aliases for everything
llm-cli haiku "what is 2+2?"
llm-cli opus "explain quantum computing"
llm-cli gemini "translate hello to spanish"
llm-cli flash "quick summary"
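# Output streams to stdout, so results compose with ordinary
# shell redirection and pipes (file name here is illustrative)
llm-cli flash "one-line summary" > summary.txt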
</code></code></pre><p>One command. Easy aliases. No way I could remember <code>claude-opus-4-5-20251101</code> anyway.</p><div><hr></div><h2><strong>Features (That I Actually Use)</strong></h2><p><strong>Model Aliases</strong></p><p>Instead of typing <code>claude-opus-4-5-20251101</code>, just type <code>opus</code>. Instead of <code>gemini-3-flash-preview</code>, just type <code>flash</code>.</p><p><strong>Unified Interface</strong></p><p>Same command structure whether you&#8217;re using Claude or Gemini models. You don&#8217;t have to remember which CLI handles which provider.</p><p><strong>Config File</strong></p><p>Run <code>llm-cli</code> once and it generates <code>~/.llm-cli/models.json</code> with all the defaults. Add your own aliases. Change the default model. Remove models you never use.</p><p><strong>Session Management</strong></p><p>By default, sessions are stored centrally in <code>~/.llm-cli/sessions/</code>. Set <code>run_on_current_directory</code> to <code>true</code> to store sessions in your current directory instead. This is useful for project-specific conversations that you can resume with <code>claude --resume</code>.</p><p>When running in current directory mode, the CLI gets access to files in that directory. This means if you run it from your project folder, it can read those files as part of the conversation.</p><p>For example:</p><pre><code><code># From your project directory (with run_on_current_directory: true)
llm-cli opus "show me the content of test.md"

# The content of `test.md` is:
# llm-cli is great
</code></code></pre><p>Use the <code>-t</code> flag to temporarily run in temp/sessions mode, where files in your current directory aren&#8217;t accessible:</p><pre><code><code>llm-cli -t opus "show me the content of test.md"

# The file `test.md` does not exist in the current directory (`~/.llm-cli/sessions`).
</code></code></pre><p>Configure the default behavior in <code>~/.llm-cli/options.json</code>:</p><pre><code><code>{
  "run_on_current_directory": false
}
</code></code></pre><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Qhi-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F84b59057-75c1-49d7-a09f-afc9e36f9741_1312x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Qhi-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F84b59057-75c1-49d7-a09f-afc9e36f9741_1312x816.png 424w, https://substackcdn.com/image/fetch/$s_!Qhi-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F84b59057-75c1-49d7-a09f-afc9e36f9741_1312x816.png 848w, https://substackcdn.com/image/fetch/$s_!Qhi-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F84b59057-75c1-49d7-a09f-afc9e36f9741_1312x816.png 1272w, https://substackcdn.com/image/fetch/$s_!Qhi-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F84b59057-75c1-49d7-a09f-afc9e36f9741_1312x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Qhi-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F84b59057-75c1-49d7-a09f-afc9e36f9741_1312x816.png" width="1312" height="816" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/84b59057-75c1-49d7-a09f-afc9e36f9741_1312x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1312,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1814176,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.cengizhan.com/i/183164580?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe46ad81c-8d36-4e45-b89c-1493c4a88b88_1312x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Qhi-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F84b59057-75c1-49d7-a09f-afc9e36f9741_1312x816.png 424w, https://substackcdn.com/image/fetch/$s_!Qhi-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F84b59057-75c1-49d7-a09f-afc9e36f9741_1312x816.png 848w, https://substackcdn.com/image/fetch/$s_!Qhi-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F84b59057-75c1-49d7-a09f-afc9e36f9741_1312x816.png 1272w, https://substackcdn.com/image/fetch/$s_!Qhi-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F84b59057-75c1-49d7-a09f-afc9e36f9741_1312x816.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><h2><strong>Getting Started (Literally Two Commands)</strong></h2><pre><code><code>brew tap fabriqaai/tap
brew install llm-cli
</code></code></pre><p>That&#8217;s it. Run it once to generate default configs, then customize if you want:</p><pre><code><code>llm-cli "hello"
# Now edit ~/.llm-cli/models.json
</code></code></pre><p><strong>Or from source:</strong></p><pre><code><code>go install github.com/fabriqaai/llm-cli@latest
</code></code></pre><div><hr></div><h2><strong>Usage Examples</strong></h2><pre><code><code># Simple prompt with default model (haiku)
llm-cli "what is the capital of france?"

# Use a specific model alias
llm-cli opus "explain go interfaces"
llm-cli sonnet "write a python function"
llm-cli gemini "what is 2+2?"
llm-cli flash "translate hello to spanish"

# Using flags
llm-cli -m opus -s "You are a Go expert" "how do I use interfaces?"

# List all available models
llm-cli models

# Check version
llm-cli version
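
# Under the hood, llm-cli just shells out to the underlying CLI.
# Roughly (illustrative mapping, not the exact flags used):
#   llm-cli opus "..."   ->  claude --model claude-opus-4-5-20251101 -p "..."
#   llm-cli flash "..."  ->  gemini -m gemini-3-flash-preview -p "..."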
</code></code></pre><div><hr></div><h2><strong>Configuration (Optional)</strong></h2><p>The <code>~/.llm-cli/models.json</code> file is where everything lives. After first run it&#8217;s populated with defaults:</p><pre><code><code>{
  "default_model": "haiku",
  "models": {
    "haiku": { "cli": "claude", "model_id": "claude-haiku-4-5-20251001" },
    "opus": { "cli": "claude", "model_id": "claude-opus-4-5-20251101" },
    "sonnet": { "cli": "claude", "model_id": "claude-sonnet-4-5-20251001" },
    "gemini": { "cli": "gemini", "model_id": "gemini-3-pro-preview" },
    "flash": { "cli": "gemini", "model_id": "gemini-3-flash-preview" }
  }
}
</code></code></pre><p>Add custom models. Change defaults. Remove what you don&#8217;t use. It&#8217;s just JSON.</p><div><hr></div><h2><strong>Should You Use This?</strong></h2><p><strong>Yes, if:</strong></p><ul><li><p>You use both Claude and Gemini CLIs regularly</p></li><li><p>You&#8217;re tired of typing long model IDs</p></li><li><p>You want a unified interface for multiple AI providers</p></li><li><p>You like customizing aliases</p></li></ul><p><strong>No, if:</strong></p><ul><li><p>You only use one AI CLI</p></li><li><p>You don&#8217;t care about model aliases</p></li><li><p>You&#8217;re happy with your current workflow</p></li></ul><p>I built this for me. If it helps you too, great. If not, no worries.</p><div><hr></div><h2><strong>Source Code</strong></h2><p>Go + Cobra (CLI framework) + some JSON config parsing. It shells out to the underlying CLIs, captures output, and streams it back to you. Nothing fancy, just works.</p><p><strong>Published places:</strong></p><ul><li><p><strong>GitHub:</strong> <a href="https://github.com/fabriqaai/llm-cli">github.com/fabriqaai/llm-cli</a></p></li><li><p><strong>Homebrew:</strong> <code>brew tap fabriqaai/tap &amp;&amp; brew install llm-cli</code></p></li><li><p><strong>Go:</strong> <code>go install github.com/fabriqaai/llm-cli@latest</code></p></li></ul><div><hr></div><p>This is a tiny tool that solves a tiny annoyance. But sometimes those are the best tools.</p><p>If it sounds useful, give it a try. If you find bugs or have ideas, hit up the GitHub.</p><div><hr></div><p>Thanks for reading! Subscribe for free to receive new posts and support my work.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.cengizhan.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading cengizhan.com! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Announcing claude-code-logs: A Searchable Web View of Your Claude Code Conversations ]]></title><description><![CDATA[If you use Claude Code, you&#8217;ve probably had this moment: you remember solving a problem brilliantly with Claude last week, but now you can&#8217;t find that conversation.]]></description><link>https://www.cengizhan.com/p/announcing-claude-code-logs-a-searchable</link><guid isPermaLink="false">https://www.cengizhan.com/p/announcing-claude-code-logs-a-searchable</guid><dc:creator><![CDATA[Cengiz Han]]></dc:creator><pubDate>Mon, 29 Dec 2025 20:31:42 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!cDDb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F920b07c7-7fe0-41ba-9ebb-0b8e0db835ad_2208x1920.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you use Claude Code, you&#8217;ve probably had this moment: you remember solving a problem brilliantly with Claude last week, but now you can&#8217;t find that conversation. Or you crafted a prompt that combined sequential thinking with MCP tools to do deep research, and it worked perfectly. Now you want to reuse that approach, but you can&#8217;t remember exactly how you constructed it.</p><p>Even if you&#8217;re disciplined about spec-driven development for production code, discovery and exploration still happen through vibe coding. 
Those experimental sessions, the prompts that led to breakthroughs, the dead ends that taught you something: they&#8217;re worth revisiting. But only if you can find them.</p><p>Good luck searching through <code>~/.claude/projects/</code>. Those JSONL files weren&#8217;t designed for human eyes.</p><p>I built <code>claude-code-logs</code> to fix this. Open it, type what you&#8217;re looking for, and it searches across every Claude Code conversation on your machine.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!cDDb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F920b07c7-7fe0-41ba-9ebb-0b8e0db835ad_2208x1920.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!cDDb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F920b07c7-7fe0-41ba-9ebb-0b8e0db835ad_2208x1920.png 424w, https://substackcdn.com/image/fetch/$s_!cDDb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F920b07c7-7fe0-41ba-9ebb-0b8e0db835ad_2208x1920.png 848w, https://substackcdn.com/image/fetch/$s_!cDDb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F920b07c7-7fe0-41ba-9ebb-0b8e0db835ad_2208x1920.png 1272w, https://substackcdn.com/image/fetch/$s_!cDDb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F920b07c7-7fe0-41ba-9ebb-0b8e0db835ad_2208x1920.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!cDDb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F920b07c7-7fe0-41ba-9ebb-0b8e0db835ad_2208x1920.png" width="1456" height="1266" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/920b07c7-7fe0-41ba-9ebb-0b8e0db835ad_2208x1920.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1266,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:5887111,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.cengizhan.com/i/182896303?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F920b07c7-7fe0-41ba-9ebb-0b8e0db835ad_2208x1920.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!cDDb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F920b07c7-7fe0-41ba-9ebb-0b8e0db835ad_2208x1920.png 424w, https://substackcdn.com/image/fetch/$s_!cDDb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F920b07c7-7fe0-41ba-9ebb-0b8e0db835ad_2208x1920.png 848w, https://substackcdn.com/image/fetch/$s_!cDDb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F920b07c7-7fe0-41ba-9ebb-0b8e0db835ad_2208x1920.png 1272w, https://substackcdn.com/image/fetch/$s_!cDDb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F920b07c7-7fe0-41ba-9ebb-0b8e0db835ad_2208x1920.png 1456w" 
sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><h2><strong>The Problem</strong></h2><p>Claude Code stores every conversation as JSONL files in <code>~/.claude/projects/</code>. Great for persistence, terrible for humans. You can&#8217;t browse them. You can&#8217;t search them. That brilliant regex Claude wrote for you? 
Gone into the void.</p><h2><strong>The Solution</strong></h2><p><code>claude-code-logs</code> is a single-binary CLI that:</p><ol><li><p><strong>Discovers</strong> all your Claude Code projects automatically</p></li><li><p><strong>Generates</strong> beautiful HTML pages that mirror <a href="http://claude.ai/">Claude.ai</a>&#8216;s own aesthetic</p></li><li><p><strong>Serves</strong> them locally with full-text search</p></li><li><p><strong>Watches</strong> for new conversations and updates in real-time</p></li></ol><p>The UI uses Claude&#8217;s signature warm cream backgrounds, serif fonts for assistant responses, and that familiar orange accent. It feels like home.</p><h2><strong>Quick Start</strong></h2><pre><code><code># Install via Homebrew
brew tap fabriqaai/tap
brew install claude-code-logs

# Start browsing
claude-code-logs serve
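
# Optional: watch mode picks up new conversations automatically
claude-code-logs serve --watch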

# Open http://localhost:8080</code></code></pre><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pkXs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefb326ee-21a7-4621-b97b-7f56b442ba56_2814x2476.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pkXs!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefb326ee-21a7-4621-b97b-7f56b442ba56_2814x2476.png 424w, https://substackcdn.com/image/fetch/$s_!pkXs!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefb326ee-21a7-4621-b97b-7f56b442ba56_2814x2476.png 848w, https://substackcdn.com/image/fetch/$s_!pkXs!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefb326ee-21a7-4621-b97b-7f56b442ba56_2814x2476.png 1272w, https://substackcdn.com/image/fetch/$s_!pkXs!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefb326ee-21a7-4621-b97b-7f56b442ba56_2814x2476.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!pkXs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefb326ee-21a7-4621-b97b-7f56b442ba56_2814x2476.png" width="1456" height="1281"
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/efb326ee-21a7-4621-b97b-7f56b442ba56_2814x2476.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1281,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1421609,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.cengizhan.com/i/182896303?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefb326ee-21a7-4621-b97b-7f56b442ba56_2814x2476.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!pkXs!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefb326ee-21a7-4621-b97b-7f56b442ba56_2814x2476.png 424w, https://substackcdn.com/image/fetch/$s_!pkXs!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefb326ee-21a7-4621-b97b-7f56b442ba56_2814x2476.png 848w, https://substackcdn.com/image/fetch/$s_!pkXs!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefb326ee-21a7-4621-b97b-7f56b442ba56_2814x2476.png 1272w, https://substackcdn.com/image/fetch/$s_!pkXs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefb326ee-21a7-4621-b97b-7f56b442ba56_2814x2476.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>And your entire Claude Code history is now searchable and readable in web format.</p><h2><strong>Key Features</strong></h2><p><strong>Project Navigation</strong>: The left sidebar lists all your projects. Click to see every conversation.</p><p><strong>Full-Text Search</strong>: Find that regex, that API call, that deployment script. 
Search across all sessions or filter by project.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!j4sp!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa85dd407-6f77-4a3d-bb39-71f75538fb68_2814x2476.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!j4sp!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa85dd407-6f77-4a3d-bb39-71f75538fb68_2814x2476.png 424w, https://substackcdn.com/image/fetch/$s_!j4sp!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa85dd407-6f77-4a3d-bb39-71f75538fb68_2814x2476.png 848w, https://substackcdn.com/image/fetch/$s_!j4sp!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa85dd407-6f77-4a3d-bb39-71f75538fb68_2814x2476.png 1272w, https://substackcdn.com/image/fetch/$s_!j4sp!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa85dd407-6f77-4a3d-bb39-71f75538fb68_2814x2476.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!j4sp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa85dd407-6f77-4a3d-bb39-71f75538fb68_2814x2476.png" width="1456" height="1281" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a85dd407-6f77-4a3d-bb39-71f75538fb68_2814x2476.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1281,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1382111,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.cengizhan.com/i/182896303?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa85dd407-6f77-4a3d-bb39-71f75538fb68_2814x2476.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!j4sp!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa85dd407-6f77-4a3d-bb39-71f75538fb68_2814x2476.png 424w, https://substackcdn.com/image/fetch/$s_!j4sp!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa85dd407-6f77-4a3d-bb39-71f75538fb68_2814x2476.png 848w, https://substackcdn.com/image/fetch/$s_!j4sp!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa85dd407-6f77-4a3d-bb39-71f75538fb68_2814x2476.png 1272w, https://substackcdn.com/image/fetch/$s_!j4sp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa85dd407-6f77-4a3d-bb39-71f75538fb68_2814x2476.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>Watch Mode</strong>: Run <code>claude-code-logs serve --watch</code> and new conversations appear automatically.</p><p><strong>Works Offline</strong>: No API calls, no cloud dependencies. Your conversations stay on your machine.</p><p><strong>Static Fallback</strong>: The generated HTML files work without the server, too. Share them, archive them, whatever you need.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.cengizhan.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading cengizhan.com! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2><strong>Why Go?</strong></h2><p>Single binary. No runtime. Cross-platform. Install once, run forever. The whole thing compiles to about 10MB and starts in milliseconds.</p><h2><strong>100% AI-Generated with <a href="http://specs.md/">specs.md</a></strong></h2><p>Here&#8217;s the interesting part: this entire project was generated using <a href="https://specs.md/">specs.md</a>.</p><p>Every feature, every refactoring decision, every future enhancement lives as an &#8220;intent&#8221; in the project&#8217;s memory bank. The tree view sidebar? An intent. The resizable panels? An intent. Simplifying the CLI commands? Also an intent. </p><p>The <code>memory-bank/</code> directory contains the complete specification hierarchy:</p><ul><li><p><strong>Intents</strong> define what we&#8217;re building and why</p></li><li><p><strong>Units</strong> break intents into deliverable chunks</p></li><li><p><strong>Stories</strong> specify individual features</p></li><li><p><strong>Bolts</strong> capture implementation details</p></li></ul><p>If you&#8217;re curious how it works, the <a href="https://github.com/fabriqaai/claude-code-logs">GitHub repo</a> is the source code. But the <code>memory-bank/</code> folder is where the real blueprint lives. 
It&#8217;s a practical example of spec-driven AI development in action.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!19Sl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F854c0491-1a06-4607-84df-9b827c7494f9_2804x2586.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!19Sl!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F854c0491-1a06-4607-84df-9b827c7494f9_2804x2586.png 424w, https://substackcdn.com/image/fetch/$s_!19Sl!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F854c0491-1a06-4607-84df-9b827c7494f9_2804x2586.png 848w, https://substackcdn.com/image/fetch/$s_!19Sl!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F854c0491-1a06-4607-84df-9b827c7494f9_2804x2586.png 1272w, https://substackcdn.com/image/fetch/$s_!19Sl!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F854c0491-1a06-4607-84df-9b827c7494f9_2804x2586.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!19Sl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F854c0491-1a06-4607-84df-9b827c7494f9_2804x2586.png" width="2804" height="2586" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/854c0491-1a06-4607-84df-9b827c7494f9_2804x2586.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2586,&quot;width&quot;:2804,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1758552,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.cengizhan.com/i/182896303?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc704906-bb1c-417e-bcb2-c59ce68b712d_2804x2586.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!19Sl!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F854c0491-1a06-4607-84df-9b827c7494f9_2804x2586.png 424w, https://substackcdn.com/image/fetch/$s_!19Sl!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F854c0491-1a06-4607-84df-9b827c7494f9_2804x2586.png 848w, https://substackcdn.com/image/fetch/$s_!19Sl!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F854c0491-1a06-4607-84df-9b827c7494f9_2804x2586.png 1272w, https://substackcdn.com/image/fetch/$s_!19Sl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F854c0491-1a06-4607-84df-9b827c7494f9_2804x2586.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Built with Go, generated with specs.md, served with &#10084;&#65039;.</figcaption></figure></div><div><hr></div><p></p><h2><strong>What&#8217;s Next</strong></h2><p>The code is open source. If you&#8217;re a Claude Code user drowning in chat logs, give it a try. 
If you find bugs or want features, PRs welcome.</p><p></p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Building a Million-Token Research Agent for Claude Code]]></title><description><![CDATA[I built a Gemini research specialist that uses Google&#8217;s 1 million token context window for research.]]></description><link>https://www.cengizhan.com/p/building-a-million-token-research</link><guid isPermaLink="false">https://www.cengizhan.com/p/building-a-million-token-research</guid><dc:creator><![CDATA[Cengiz Han]]></dc:creator><pubDate>Mon, 29 Dec 2025 11:10:58 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!_MuC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2cd52a95-1ec6-416d-ac4f-cb24bb04904b_2048x2048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1><strong>Building a Million-Token Research Agent for Claude Code</strong></h1><p><strong>TL;DR:</strong> Claude Code&#8217;s extensible agent system lets you create custom sub-agents that leverage different AI models for specialized tasks. 
I built a Gemini research specialist that uses Google&#8217;s 1 million token context window for deep research&#8212;and it&#8217;s transformed how I gather information during coding sessions.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_MuC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2cd52a95-1ec6-416d-ac4f-cb24bb04904b_2048x2048.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_MuC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2cd52a95-1ec6-416d-ac4f-cb24bb04904b_2048x2048.png 424w, https://substackcdn.com/image/fetch/$s_!_MuC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2cd52a95-1ec6-416d-ac4f-cb24bb04904b_2048x2048.png 848w, https://substackcdn.com/image/fetch/$s_!_MuC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2cd52a95-1ec6-416d-ac4f-cb24bb04904b_2048x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!_MuC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2cd52a95-1ec6-416d-ac4f-cb24bb04904b_2048x2048.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_MuC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2cd52a95-1ec6-416d-ac4f-cb24bb04904b_2048x2048.png" width="1456" height="1456" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2cd52a95-1ec6-416d-ac4f-cb24bb04904b_2048x2048.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6033776,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.cengizhan.com/i/182847784?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2cd52a95-1ec6-416d-ac4f-cb24bb04904b_2048x2048.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_MuC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2cd52a95-1ec6-416d-ac4f-cb24bb04904b_2048x2048.png 424w, https://substackcdn.com/image/fetch/$s_!_MuC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2cd52a95-1ec6-416d-ac4f-cb24bb04904b_2048x2048.png 848w, https://substackcdn.com/image/fetch/$s_!_MuC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2cd52a95-1ec6-416d-ac4f-cb24bb04904b_2048x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!_MuC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2cd52a95-1ec6-416d-ac4f-cb24bb04904b_2048x2048.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><h2><strong>The Research Problem Every Developer Knows</strong></h2><p>You&#8217;re deep in a coding session. The flow state is perfect. Then you hit a wall: you need to understand how a library handles edge cases, compare authentication approaches, or research best practices for a pattern you&#8217;ve never implemented.</p><p>What happens next? You open a browser. Start Googling. Open fifteen tabs. Lose your flow state. Spend 45 minutes reading documentation, Stack Overflow answers, and blog posts. Try to hold all that context in your head while switching back to code.</p><p>The mental cost is brutal. Context switching destroys flow state. 
But the bigger problem is synthesis&#8212;turning scattered information into actionable knowledge while keeping your coding context intact.</p><p>What if your coding assistant could delegate research to a specialized agent that processes a million tokens of context and returns synthesized insights without you ever leaving your terminal?</p><div><hr></div><h2><strong>Why Build a Custom Research Agent?</strong></h2><p>Claude Code is powerful for coding tasks. But research has different requirements:</p><ul><li><p><strong>Web access</strong>: Need current information, not just training data</p></li><li><p><strong>Massive context</strong>: Processing entire documentation sets, not snippets</p></li><li><p><strong>Synthesis focus</strong>: Connecting dots across multiple sources</p></li><li><p><strong>Background execution</strong>: Research while you continue coding</p></li></ul><p>Claude&#8217;s context window is substantial, but Gemini offers something different: a 1 million token context window. That&#8217;s not just &#8220;bigger&#8221;&#8212;it&#8217;s a different category of capability for research tasks.</p><p>The insight: <strong>use the right model for the right job</strong>. Claude for code understanding and editing. Gemini for web research and massive context synthesis.</p><div><hr></div><h2><strong>The Gemini Research Specialist Agent</strong></h2><p>I created a custom agent called <code>gemini-research-specialist</code> that Claude Code can delegate to for research tasks. 
Here&#8217;s what it does:</p><ul><li><p>Leverages Gemini&#8217;s web search capabilities</p></li><li><p>Processes research using the 1 million token context window</p></li><li><p>Synthesizes findings into actionable developer-focused insights</p></li><li><p>Returns results directly into my Claude Code session</p></li></ul><p>Think of it as adding a research department to your coding assistant.</p><h3><strong>How It Works in Practice</strong></h3><p>When I&#8217;m in Claude Code and need research, the system recognizes research-oriented requests and spawns the Gemini agent:</p><pre><code><code>You: I'm building a recommendation system. Can you research
     best practices for collaborative filtering?

Claude Code: Let me use the gemini-research-specialist agent
             to research best practices for collaborative
             filtering in recommendation systems.

[Agent runs in background, processing web sources]

Claude Code: Based on the research, here are the key findings...
</code></code></pre><p>The agent runs asynchronously. I can continue other work while it gathers and synthesizes information. When it returns, the insights integrate directly into my conversation context.</p><div><hr></div><h2><strong>The 1 Million Token Context Window: Why It Matters</strong></h2><p>Here&#8217;s where things get interesting. Gemini&#8217;s context window isn&#8217;t just &#8220;bigger&#8221;&#8212;it&#8217;s categorically different.</p><p><strong>Traditional research with smaller context windows:</strong></p><ul><li><p>Retrieve a document chunk</p></li><li><p>Summarize and discard</p></li><li><p>Retrieve next chunk</p></li><li><p>Try to connect insights across lossy summaries</p></li><li><p>Lose nuance, miss connections, make errors</p></li></ul><p><strong>Research with a 1 million token context window:</strong></p><ul><li><p>Load entire documentation sets simultaneously</p></li><li><p>Process complete technical specifications</p></li><li><p>Hold multiple full research papers at once</p></li><li><p>See connections across sources that smaller windows miss</p></li><li><p>Synthesize without information loss</p></li></ul><p>To put this in perspective: 1 million tokens is roughly 750,000 words. That&#8217;s approximately:</p><ul><li><p>4-5 complete technical books</p></li><li><p>Hundreds of documentation pages</p></li><li><p>Dozens of research papers</p></li><li><p>Thousands of Stack Overflow answers</p></li></ul><p>All held in working memory. Simultaneously. While looking for patterns and connections.</p><h3><strong>The Mental Model: Research Synthesis at Scale</strong></h3><p>Think of traditional AI research like looking through a keyhole&#8212;you see one thing at a time and try to remember what you saw before. A million-token context window is like having the entire wall removed. 
You see everything at once and can trace connections that keyhole viewing would never reveal.</p><p>For developers, this means research that actually captures:</p><ul><li><p>How library X&#8217;s approach compares to library Y&#8217;s across their full documentation</p></li><li><p>The evolution of best practices from 2020 recommendations to current consensus</p></li><li><p>Edge cases mentioned in GitHub issues that contradict documentation claims</p></li><li><p>The &#8220;why&#8221; behind decisions, not just the &#8220;what&#8221;</p></li></ul><div><hr></div><h2><strong>Real-World Use Cases</strong></h2><h3><strong>1. Learning New Protocols and Technologies</strong></h3><p>This goes far beyond searching for library docs. I used this approach to learn the Agent Commerce Protocol (ACP). Instead of bouncing between websites, reading scattered documentation, and trying to piece together understanding, I used the research agent as a learning partner:</p><pre><code><code>"Explain the Agent Commerce Protocol architecture. What are
the core concepts, how do agents discover each other, and
what's the payment flow?"
</code></code></pre><p>Then I asked follow-up questions, drilling deeper into areas I didn&#8217;t understand. Once I had clarity, I asked the agent to synthesize everything into step-by-step tutorials that my entire team could use to learn the protocol.</p><p>The workflow becomes:</p><ol><li><p><strong>Ask questions</strong> - Use the agent to explore and understand</p></li><li><p><strong>Drill deeper</strong> - Follow up on confusing parts</p></li><li><p><strong>Create artifacts</strong> - Turn your learning into tutorials, guides, documentation</p></li><li><p><strong>Share knowledge</strong> - Now your whole team benefits from your learning session</p></li></ol><p>This transforms the agent from a search tool into a <strong>learning accelerator</strong>. You&#8217;re not just finding information&#8212;you&#8217;re building understanding and creating reusable knowledge assets.</p><h3><strong>2. Academic Research and Whitepaper Discovery</strong></h3><p>Staying current with academic research is nearly impossible manually. New papers drop daily on arXiv, and finding the ones relevant to your work requires constant vigilance. The research agent changes this:</p><pre><code><code>"Find recent whitepapers and academic research on AI-native
software engineering methodologies. Focus on papers from
arxiv.org discussing agentic development, LLM-assisted
coding workflows, and human-AI collaboration in software
development. Summarize the key approaches and findings."
</code></code></pre><p>The agent can surface papers you&#8217;d never find through casual browsing, synthesize their key contributions, and help you understand how academic research connects to practical engineering. This is how I stay current on AI-native engineering approaches&#8212;not by manually checking arXiv every day, but by periodically asking the agent to find what&#8217;s new and explain what matters.</p><p>For practitioners, this bridges the gap between academic innovation and real-world application. You get the insights without the hours of reading dense papers.</p><h3><strong>3. Technology Comparisons</strong></h3><pre><code><code>"Compare the performance characteristics of different vector
databases for a use case with 10M embeddings, focusing on
query latency, scalability, and operational complexity"
</code></code></pre><p>The agent processes documentation, benchmarks, and community experiences simultaneously, returning a synthesis that would take hours to compile manually.</p><div><hr></div><h2><strong>Limitations</strong></h2><ul><li><p><strong>Research latency</strong>: 30-60 seconds for complex queries. Thorough research isn&#8217;t instant.</p></li><li><p><strong>Synthesis quality</strong>: Large context enables better synthesis, but verify critical findings.</p></li><li><p><strong>Token costs</strong>: Use targeted requests rather than &#8220;tell me everything about X.&#8221;</p></li></ul><div><hr></div><h2><strong>The Complete Source Code</strong></h2><p>Here&#8217;s the full implementation. Create this file at <code>~/.claude/agents/gemini-research-specialist.md</code>:</p><p></p><div class="github-gist" data-attrs="{&quot;innerHTML&quot;:&quot;<div id=\&quot;gist144023782\&quot; class=\&quot;gist\&quot;>\n    <div class=\&quot;gist-file\&quot; translate=\&quot;no\&quot; data-color-mode=\&quot;light\&quot; data-light-theme=\&quot;light\&quot;>\n      <div class=\&quot;gist-data\&quot;>\n        <div class=\&quot;js-gist-file-update-container js-task-list-container\&quot;>\n  <div id=\&quot;file-gemini-research-specialist-md\&quot; class=\&quot;file my-2\&quot;>\n      <div id=\&quot;file-gemini-research-specialist-md-readme\&quot; class=\&quot;Box-body readme blob p-5 p-xl-6 \&quot;\n    style=\&quot;overflow: auto\&quot; tabindex=\&quot;0\&quot; role=\&quot;region\&quot;\n    aria-label=\&quot;gemini-research-specialist.md content, created by hancengiz on 01:03PM today.\&quot;\n  >\n    <article class=\&quot;markdown-body entry-content container-lg\&quot; itemprop=\&quot;text\&quot;><markdown-accessiblity-table><table>\n  <thead>\n  <tr>\n  <th>name</th>\n  <th>description</th>\n  <th>model</th>\n  </tr>\n  </thead>\n  <tbody>\n  <tr>\n  <td><div dir=\&quot;auto\&quot;>gemini-research-specialist</div></td>\n  <td><div dir=\&quot;auto\&quot;>Use this 
agent when the user needs to research information, gather data from the web, investigate topics, find current information, or explore subjects that require internet search capabilities.</div></td>\n  <td><div dir=\&quot;auto\&quot;>sonnet</div></td>\n  </tr>\n  </tbody>\n</table></markdown-accessiblity-table>\n\n<p dir=\&quot;auto\&quot;>You are an elite Research Specialist with expertise in conducting thorough, efficient, and accurate research using the Gemini AI model in headless mode. Your primary tool is the command <code>gemini -p \&quot;prompt\&quot;</code> which you will use to gather information from the web and synthesize findings.</p>\n<div class=\&quot;markdown-heading\&quot; dir=\&quot;auto\&quot;><h2 class=\&quot;heading-element\&quot; dir=\&quot;auto\&quot;>Core Responsibilities</h2><a id=\&quot;user-content-core-responsibilities\&quot; class=\&quot;anchor\&quot; aria-label=\&quot;Permalink: Core Responsibilities\&quot; href=\&quot;#core-responsibilities\&quot;><svg class=\&quot;octicon octicon-link\&quot; viewBox=\&quot;0 0 16 16\&quot; version=\&quot;1.1\&quot; width=\&quot;16\&quot; height=\&quot;16\&quot; aria-hidden=\&quot;true\&quot;><path d=\&quot;m7.775 3.275 1.25-1.25a3.5 3.5 0 1 1 4.95 4.95l-2.5 2.5a3.5 3.5 0 0 1-4.95 0 .751.751 0 0 1 .018-1.042.751.751 0 0 1 1.042-.018 1.998 1.998 0 0 0 2.83 0l2.5-2.5a2.002 2.002 0 0 0-2.83-2.83l-1.25 1.25a.751.751 0 0 1-1.042-.018.751.751 0 0 1-.018-1.042Zm-4.69 9.64a1.998 1.998 0 0 0 2.83 0l1.25-1.25a.751.751 0 0 1 1.042.018.751.751 0 0 1 .018 1.042l-1.25 1.25a3.5 3.5 0 1 1-4.95-4.95l2.5-2.5a3.5 3.5 0 0 1 4.95 0 .751.751 0 0 1-.018 1.042.751.751 0 0 1-1.042.018 1.998 1.998 0 0 0-2.83 0l-2.5 2.5a1.998 1.998 0 0 0 0 2.83Z\&quot;></path></svg></a></div>\n<ol dir=\&quot;auto\&quot;>\n<li>\n<p dir=\&quot;auto\&quot;><strong>Execute Targeted Research</strong>: When given a research task, formulate precise, well-structured prompts for Gemini that will yield the most relevant and comprehensive 
information.</p>\n</li>\n<li>\n<p dir=\&quot;auto\&quot;><strong>Strategic Prompt Design</strong>: Craft your Gemini prompts to:</p>\n<ul dir=\&quot;auto\&quot;>\n<li>Be specific and focused on the exact information needed</li>\n<li>Request current, factual information when timeliness matters</li>\n<li>Ask for multiple perspectives or sources when appropriate</li>\n<li>Include requests for examples, data, or evidence to support findings</li>\n<li>Specify the desired format or structure of the response when helpful</li>\n</ul>\n</li>\n<li>\n<p dir=\&quot;auto\&quot;><strong>Synthesize and Present Findings</strong>: After receiving results from Gemini:</p>\n<ul dir=\&quot;auto\&quot;>\n<li>Organize information logically and coherently</li>\n<li>Highlight key findings and insights</li>\n<li>Identify any gaps or limitations in the research</li>\n<li>Present information in a clear, actionable format</li>\n<li>Cite or reference the nature of sources when relevant</li>\n</ul>\n</li>\n</ol>\n<div class=\&quot;markdown-heading\&quot; dir=\&quot;auto\&quot;><h2 class=\&quot;heading-element\&quot; dir=\&quot;auto\&quot;>Operational Guidelines</h2><a id=\&quot;user-content-operational-guidelines\&quot; class=\&quot;anchor\&quot; aria-label=\&quot;Permalink: Operational Guidelines\&quot; href=\&quot;#operational-guidelines\&quot;><svg class=\&quot;octicon octicon-link\&quot; viewBox=\&quot;0 0 16 16\&quot; version=\&quot;1.1\&quot; width=\&quot;16\&quot; height=\&quot;16\&quot; aria-hidden=\&quot;true\&quot;><path d=\&quot;m7.775 3.275 1.25-1.25a3.5 3.5 0 1 1 4.95 4.95l-2.5 2.5a3.5 3.5 0 0 1-4.95 0 .751.751 0 0 1 .018-1.042.751.751 0 0 1 1.042-.018 1.998 1.998 0 0 0 2.83 0l2.5-2.5a2.002 2.002 0 0 0-2.83-2.83l-1.25 1.25a.751.751 0 0 1-1.042-.018.751.751 0 0 1-.018-1.042Zm-4.69 9.64a1.998 1.998 0 0 0 2.83 0l1.25-1.25a.751.751 0 0 1 1.042.018.751.751 0 0 1 .018 1.042l-1.25 1.25a3.5 3.5 0 1 1-4.95-4.95l2.5-2.5a3.5 3.5 0 0 1 4.95 0 .751.751 0 0 1-.018 1.042.751.751 0 0 1-1.042.018 
1.998 1.998 0 0 0-2.83 0l-2.5 2.5a1.998 1.998 0 0 0 0 2.83Z\&quot;></path></svg></a></div>\n<p dir=\&quot;auto\&quot;><strong>Research Process</strong>:</p>\n<ul dir=\&quot;auto\&quot;>\n<li>Begin by clarifying the research objective and scope</li>\n<li>Break complex research questions into focused sub-queries if needed</li>\n<li>Execute Gemini searches using the exact format: <code>gemini -p \&quot;your precise prompt here\&quot;</code></li>\n<li>Evaluate the quality and relevance of returned information</li>\n<li>Conduct follow-up searches if initial results are incomplete or require deeper investigation</li>\n</ul>\n<p dir=\&quot;auto\&quot;><strong>Quality Assurance</strong>:</p>\n<ul dir=\&quot;auto\&quot;>\n<li>Cross-reference information when making critical claims</li>\n<li>Note when information may be time-sensitive or subject to change</li>\n<li>Distinguish between factual information, expert opinions, and speculation</li>\n<li>Acknowledge uncertainty when sources conflict or information is limited</li>\n</ul>\n<p dir=\&quot;auto\&quot;><strong>Prompt Engineering Best Practices</strong>:</p>\n<ul dir=\&quot;auto\&quot;>\n<li>Use clear, unambiguous language in your Gemini prompts</li>\n<li>Include relevant context that helps narrow the search scope</li>\n<li>Request specific types of information (statistics, examples, comparisons, etc.)</li>\n<li>Ask for recent or current information when timeliness is important</li>\n<li>Frame questions to elicit comprehensive yet focused responses</li>\n</ul>\n<p dir=\&quot;auto\&quot;><strong>Output Standards</strong>:</p>\n<ul dir=\&quot;auto\&quot;>\n<li>Present research findings in a well-structured format (use headings, bullet points, or numbered lists as appropriate)</li>\n<li>Lead with the most important or directly relevant information</li>\n<li>Provide context and background when it aids understanding</li>\n<li>Include actionable insights or recommendations when applicable</li>\n<li>Clearly indicate if 
additional research would be beneficial</li>\n</ul>\n<div class=\&quot;markdown-heading\&quot; dir=\&quot;auto\&quot;><h2 class=\&quot;heading-element\&quot; dir=\&quot;auto\&quot;>Edge Cases and Special Situations</h2><a id=\&quot;user-content-edge-cases-and-special-situations\&quot; class=\&quot;anchor\&quot; aria-label=\&quot;Permalink: Edge Cases and Special Situations\&quot; href=\&quot;#edge-cases-and-special-situations\&quot;><svg class=\&quot;octicon octicon-link\&quot; viewBox=\&quot;0 0 16 16\&quot; version=\&quot;1.1\&quot; width=\&quot;16\&quot; height=\&quot;16\&quot; aria-hidden=\&quot;true\&quot;><path d=\&quot;m7.775 3.275 1.25-1.25a3.5 3.5 0 1 1 4.95 4.95l-2.5 2.5a3.5 3.5 0 0 1-4.95 0 .751.751 0 0 1 .018-1.042.751.751 0 0 1 1.042-.018 1.998 1.998 0 0 0 2.83 0l2.5-2.5a2.002 2.002 0 0 0-2.83-2.83l-1.25 1.25a.751.751 0 0 1-1.042-.018.751.751 0 0 1-.018-1.042Zm-4.69 9.64a1.998 1.998 0 0 0 2.83 0l1.25-1.25a.751.751 0 0 1 1.042.018.751.751 0 0 1 .018 1.042l-1.25 1.25a3.5 3.5 0 1 1-4.95-4.95l2.5-2.5a3.5 3.5 0 0 1 4.95 0 .751.751 0 0 1-.018 1.042.751.751 0 0 1-1.042.018 1.998 1.998 0 0 0-2.83 0l-2.5 2.5a1.998 1.998 0 0 0 0 2.83Z\&quot;></path></svg></a></div>\n<ul dir=\&quot;auto\&quot;>\n<li><strong>Insufficient Results</strong>: If initial research yields limited information, reformulate your prompt with different angles or broader/narrower scope</li>\n<li><strong>Conflicting Information</strong>: When sources disagree, present multiple perspectives and note the discrepancy</li>\n<li><strong>Rapidly Evolving Topics</strong>: Explicitly note that information may change quickly and recommend follow-up research timelines</li>\n<li><strong>Highly Technical Topics</strong>: Break down complex findings into accessible explanations while maintaining accuracy</li>\n<li><strong>Ambiguous Requests</strong>: Proactively ask clarifying questions before conducting research to ensure you're investigating the right topic</li>\n</ul>\n<div 
class=\&quot;markdown-heading\&quot; dir=\&quot;auto\&quot;><h2 class=\&quot;heading-element\&quot; dir=\&quot;auto\&quot;>Self-Verification</h2><a id=\&quot;user-content-self-verification\&quot; class=\&quot;anchor\&quot; aria-label=\&quot;Permalink: Self-Verification\&quot; href=\&quot;#self-verification\&quot;><svg class=\&quot;octicon octicon-link\&quot; viewBox=\&quot;0 0 16 16\&quot; version=\&quot;1.1\&quot; width=\&quot;16\&quot; height=\&quot;16\&quot; aria-hidden=\&quot;true\&quot;><path d=\&quot;m7.775 3.275 1.25-1.25a3.5 3.5 0 1 1 4.95 4.95l-2.5 2.5a3.5 3.5 0 0 1-4.95 0 .751.751 0 0 1 .018-1.042.751.751 0 0 1 1.042-.018 1.998 1.998 0 0 0 2.83 0l2.5-2.5a2.002 2.002 0 0 0-2.83-2.83l-1.25 1.25a.751.751 0 0 1-1.042-.018.751.751 0 0 1-.018-1.042Zm-4.69 9.64a1.998 1.998 0 0 0 2.83 0l1.25-1.25a.751.751 0 0 1 1.042.018.751.751 0 0 1 .018 1.042l-1.25 1.25a3.5 3.5 0 1 1-4.95-4.95l2.5-2.5a3.5 3.5 0 0 1 4.95 0 .751.751 0 0 1-.018 1.042.751.751 0 0 1-1.042.018 1.998 1.998 0 0 0-2.83 0l-2.5 2.5a1.998 1.998 0 0 0 0 2.83Z\&quot;></path></svg></a></div>\n<p dir=\&quot;auto\&quot;>Before presenting findings, ask yourself:</p>\n<ul dir=\&quot;auto\&quot;>\n<li>Does this information directly address the research question?</li>\n<li>Have I provided sufficient depth and breadth of coverage?</li>\n<li>Are there obvious gaps or follow-up questions that should be addressed?</li>\n<li>Is the information presented in a clear, actionable format?</li>\n<li>Have I noted any important caveats or limitations?</li>\n</ul>\n<p dir=\&quot;auto\&quot;>Your goal is to be a reliable, efficient research partner that delivers high-quality, relevant information through strategic use of Gemini's capabilities. 
Always prioritize accuracy, clarity, and usefulness in your research outputs.</p>\n</article>\n  </div>\n\n  </div>\n</div>\n\n      </div>\n      <div class=\&quot;gist-meta\&quot;>\n        <a href=\&quot;https://gist.github.com/hancengiz/63ccfad08f297c57b778c5da13849275/raw/a3b7105f07e630821cfae95118de22d088c48190/gemini-research-specialist.md\&quot; style=\&quot;float:right\&quot; class=\&quot;Link--inTextBlock\&quot;>view raw</a>\n        <a href=\&quot;https://gist.github.com/hancengiz/63ccfad08f297c57b778c5da13849275#file-gemini-research-specialist-md\&quot; class=\&quot;Link--inTextBlock\&quot;>\n          gemini-research-specialist.md\n        </a>\n        hosted with &amp;#10084; by <a class=\&quot;Link--inTextBlock\&quot; href=\&quot;https://github.com\&quot;>GitHub</a>\n      </div>\n    </div>\n</div>\n&quot;,&quot;stylesheet&quot;:&quot;https://github.githubassets.com/assets/gist-embed-ed91f9610ae6.css&quot;}" data-component-name="GitgistToDOM"><link rel="stylesheet" href="https://github.githubassets.com/assets/gist-embed-ed91f9610ae6.css"><div id="gist144023782" class="gist">
    <div class="gist-file" data-color-mode="light" data-light-theme="light">
      <div class="gist-data">
        <div class="js-gist-file-update-container js-task-list-container">
  <div id="file-gemini-research-specialist-md" class="file my-2">
      <div id="file-gemini-research-specialist-md-readme" class="Box-body readme blob p-5 p-xl-6 " style="overflow:auto">
    <article class="markdown-body entry-content container-lg" itemprop="text"><table>
  <thead>
  <tr>
  <th>name</th>
  <th>description</th>
  <th>model</th>
  </tr>
  </thead>
  <tbody>
  <tr>
  <td><div>gemini-research-specialist</div></td>
  <td><div>Use this agent when the user needs to research information, gather data from the web, investigate topics, find current information, or explore subjects that require internet search capabilities.</div></td>
  <td><div>sonnet</div></td>
  </tr>
  </tbody>
</table>

<p>You are an elite Research Specialist with expertise in conducting thorough, efficient, and accurate research using the Gemini AI model in headless mode. Your primary tool is the command <code>gemini -p "prompt"</code> which you will use to gather information from the web and synthesize findings.</p>
<div class="markdown-heading"><h2 class="heading-element">Core Responsibilities</h2><a id="user-content-core-responsibilities" class="anchor" href="#core-responsibilities"></a></div>
<ol>
<li>
<p><strong>Execute Targeted Research</strong>: When given a research task, formulate precise, well-structured prompts for Gemini that will yield the most relevant and comprehensive information.</p>
</li>
<li>
<p><strong>Strategic Prompt Design</strong>: Craft your Gemini prompts to:</p>
<ul>
<li>Be specific and focused on the exact information needed</li>
<li>Request current, factual information when timeliness matters</li>
<li>Ask for multiple perspectives or sources when appropriate</li>
<li>Include requests for examples, data, or evidence to support findings</li>
<li>Specify the desired format or structure of the response when helpful</li>
</ul>
</li>
<li>
<p><strong>Synthesize and Present Findings</strong>: After receiving results from Gemini:</p>
<ul>
<li>Organize information logically and coherently</li>
<li>Highlight key findings and insights</li>
<li>Identify any gaps or limitations in the research</li>
<li>Present information in a clear, actionable format</li>
<li>Cite or reference the nature of sources when relevant</li>
</ul>
</li>
</ol>
<div class="markdown-heading"><h2 class="heading-element">Operational Guidelines</h2><a id="user-content-operational-guidelines" class="anchor" href="#operational-guidelines"></a></div>
<p><strong>Research Process</strong>:</p>
<ul>
<li>Begin by clarifying the research objective and scope</li>
<li>Break complex research questions into focused sub-queries if needed</li>
<li>Execute Gemini searches using the exact format: <code>gemini -p "your precise prompt here"</code></li>
<li>Evaluate the quality and relevance of returned information</li>
<li>Conduct follow-up searches if initial results are incomplete or require deeper investigation</li>
</ul>
<p><strong>Quality Assurance</strong>:</p>
<ul>
<li>Cross-reference information when making critical claims</li>
<li>Note when information may be time-sensitive or subject to change</li>
<li>Distinguish between factual information, expert opinions, and speculation</li>
<li>Acknowledge uncertainty when sources conflict or information is limited</li>
</ul>
<p><strong>Prompt Engineering Best Practices</strong>:</p>
<ul>
<li>Use clear, unambiguous language in your Gemini prompts</li>
<li>Include relevant context that helps narrow the search scope</li>
<li>Request specific types of information (statistics, examples, comparisons, etc.)</li>
<li>Ask for recent or current information when timeliness is important</li>
<li>Frame questions to elicit comprehensive yet focused responses</li>
</ul>
<p><strong>Output Standards</strong>:</p>
<ul>
<li>Present research findings in a well-structured format (use headings, bullet points, or numbered lists as appropriate)</li>
<li>Lead with the most important or directly relevant information</li>
<li>Provide context and background when it aids understanding</li>
<li>Include actionable insights or recommendations when applicable</li>
<li>Clearly indicate if additional research would be beneficial</li>
</ul>
<div class="markdown-heading"><h2 class="heading-element">Edge Cases and Special Situations</h2><a id="user-content-edge-cases-and-special-situations" class="anchor" href="#edge-cases-and-special-situations"></a></div>
<ul>
<li><strong>Insufficient Results</strong>: If initial research yields limited information, reformulate your prompt with different angles or broader/narrower scope</li>
<li><strong>Conflicting Information</strong>: When sources disagree, present multiple perspectives and note the discrepancy</li>
<li><strong>Rapidly Evolving Topics</strong>: Explicitly note that information may change quickly and recommend follow-up research timelines</li>
<li><strong>Highly Technical Topics</strong>: Break down complex findings into accessible explanations while maintaining accuracy</li>
<li><strong>Ambiguous Requests</strong>: Proactively ask clarifying questions before conducting research to ensure you're investigating the right topic</li>
</ul>
<div class="markdown-heading"><h2 class="heading-element">Self-Verification</h2><a id="user-content-self-verification" class="anchor" href="#self-verification"></a></div>
<p>Before presenting findings, ask yourself:</p>
<ul>
<li>Does this information directly address the research question?</li>
<li>Have I provided sufficient depth and breadth of coverage?</li>
<li>Are there obvious gaps or follow-up questions that should be addressed?</li>
<li>Is the information presented in a clear, actionable format?</li>
<li>Have I noted any important caveats or limitations?</li>
</ul>
<p>Your goal is to be a reliable, efficient research partner that delivers high-quality, relevant information through strategic use of Gemini's capabilities. Always prioritize accuracy, clarity, and usefulness in your research outputs.</p>
</article>
  </div>

  </div>
</div>

      </div>
      <div class="gist-meta">
        <a href="https://gist.github.com/hancengiz/63ccfad08f297c57b778c5da13849275/raw/a3b7105f07e630821cfae95118de22d088c48190/gemini-research-specialist.md" style="float:right" class="Link--inTextBlock">view raw</a>
        <a href="https://gist.github.com/hancengiz/63ccfad08f297c57b778c5da13849275#file-gemini-research-specialist-md" class="Link--inTextBlock">
          gemini-research-specialist.md
        </a>
        hosted with &#10084; by <a class="Link--inTextBlock" href="https://github.com">GitHub</a>
      </div>
    </div>
</div>
</div><h3><strong>Prerequisites</strong></h3><p>Before the agent can work, you need the Gemini CLI installed and configured:</p><pre><code><code># Install Gemini CLI (choose one)
npm install -g @google/gemini-cli    # npm
brew install gemini-cli               # Homebrew (macOS/Linux)
npx https://github.com/google-gemini/gemini-cli  # Run without installing

# First run - authenticate with Google (recommended, free tier)
gemini
# Select "Login with Google" when prompted
# This gives you 60 requests/min and 1,000 requests/day for free

# Alternative: Use API key instead
export GEMINI_API_KEY="your-api-key-here"  # from https://aistudio.google.com/apikey

# Test it works
gemini -p "What is the current date?"
</code></code></pre><p>The Google login option is recommended for most users since it&#8217;s simpler and includes a generous free tier.</p><h3><strong>How It Works Under the Hood</strong></h3><p>The core pattern is simple: <strong>a Claude Code agent that invokes another CLI tool</strong>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Rmn7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f810490-0de6-4b46-965a-ef5da1497eeb_784x414.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Rmn7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f810490-0de6-4b46-965a-ef5da1497eeb_784x414.png 424w, https://substackcdn.com/image/fetch/$s_!Rmn7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f810490-0de6-4b46-965a-ef5da1497eeb_784x414.png 848w, https://substackcdn.com/image/fetch/$s_!Rmn7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f810490-0de6-4b46-965a-ef5da1497eeb_784x414.png 1272w, https://substackcdn.com/image/fetch/$s_!Rmn7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f810490-0de6-4b46-965a-ef5da1497eeb_784x414.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Rmn7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f810490-0de6-4b46-965a-ef5da1497eeb_784x414.png" width="784" height="414" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7f810490-0de6-4b46-965a-ef5da1497eeb_784x414.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:414,&quot;width&quot;:784,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:36027,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.cengizhan.com/i/182847784?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f810490-0de6-4b46-965a-ef5da1497eeb_784x414.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Rmn7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f810490-0de6-4b46-965a-ef5da1497eeb_784x414.png 424w, https://substackcdn.com/image/fetch/$s_!Rmn7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f810490-0de6-4b46-965a-ef5da1497eeb_784x414.png 848w, https://substackcdn.com/image/fetch/$s_!Rmn7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f810490-0de6-4b46-965a-ef5da1497eeb_784x414.png 1272w, https://substackcdn.com/image/fetch/$s_!Rmn7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f810490-0de6-4b46-965a-ef5da1497eeb_784x414.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 
0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>This is the key insight: Claude Code agents can invoke any CLI tool.</strong> The agent is just a markdown file that defines when to trigger and what instructions to give Claude. The actual research happens through the Gemini CLI, which Claude calls like any other command-line tool.</p><p>You could use this same pattern to integrate:</p><ul><li><p><code>perplexity</code> CLI for search</p></li><li><p><code>llm</code> CLI for other models</p></li><li><p><code>gh</code> for GitHub operations</p></li><li><p>Any tool with a command-line interface</p></li></ul><div><hr></div><h2><strong>Getting Started: Step by Step</strong></h2><p><strong>Create the agents directory</strong> (if it doesn&#8217;t exist):</p><pre><code><code>mkdir -p ~/.claude/agents
</code></code></pre><p><strong>Create the agent file</strong>:</p><pre><code><code># Copy the source code above into this file
nano ~/.claude/agents/gemini-research-specialist.md
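
# Alternatively, create the file with a heredoc. The frontmatter fields
# (name, description, model) mirror the table at the top of the gist;
# the body here is a shortened placeholder -- paste the full system
# prompt from the source code above in its place.
mkdir -p ~/.claude/agents
cat > ~/.claude/agents/gemini-research-specialist.md <<'EOF'
---
name: gemini-research-specialist
description: Use this agent when the user needs to research information, gather data from the web, investigate topics, find current information, or explore subjects that require internet search capabilities.
model: sonnet
---
You are an elite Research Specialist. Your primary tool is the command
gemini -p "prompt", which you use to gather information from the web
and synthesize findings. (Replace this body with the full source above.)
EOF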
</code></code></pre><p><strong>Install and configure Gemini CLI</strong>:</p><pre><code><code>npm install -g @google/gemini-cli
export GEMINI_API_KEY="your-key-here"
</code></code></pre><p><strong>Test the agent</strong>: In Claude Code, try:</p><pre><code><code>Research the latest best practices for TypeScript monorepo tooling in 2025
</code></code></pre><p><strong>Iterate</strong>: Adjust the system prompt based on the quality of research you receive</p><p>Have fun!</p>]]></content:encoded></item><item><title><![CDATA[Announcing specs.md: Structure for AI-Native Development ]]></title><description><![CDATA[https://specs.md]]></description><link>https://www.cengizhan.com/p/announcing-specsmd-structure-for</link><guid isPermaLink="false">https://www.cengizhan.com/p/announcing-specsmd-structure-for</guid><dc:creator><![CDATA[Cengiz Han]]></dc:creator><pubDate>Wed, 24 Dec 2025 21:47:04 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Uajg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4fd04f3-a9d0-47aa-bcdd-ef7bf7b712b4_1424x1400.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1><strong><a href="http://specs.md/">https://specs.md</a></strong></h1><p><strong>TL;DR:</strong> Vibe-coding works for side projects. But production systems need more.<br><a href="http://specs.md/">specs.md</a> brings the <em>Explore &#8594; Specify &#8594; Engineer</em> loop into your AI coding workflow.<br>It&#8217;s open source, in alpha, and I&#8217;m looking for feedback from engineers who care about building things that actually work.</p><div><hr></div><h2><strong>The Problem We All Know</strong></h2><p>I&#8217;ve seen it happen dozens of times.</p><p>You start with AI&#8212;throw prompts, get code, iterate fast. It works. Until it doesn&#8217;t.</p><p>The AI forgets context between sessions. You&#8217;re re-explaining your architecture every time. Code quality swings wildly. Nobody remembers why decisions were made. Debugging becomes archaeology.</p><p><strong>Vibes got you here. 
Vibes won&#8217;t get you to production.</strong></p><p>The spec-first crowd says &#8220;write everything down before you start.&#8221; But that&#8217;s just as broken&#8212;specs written before learning are guesses dressed up as documentation.</p><p>What actually works is the loop: <strong>explore freely, specify what you learned, then engineer with rigor.</strong></p><p>That&#8217;s what <a href="http://specs.md/">specs.md</a> does.</p><div><hr></div><h2><strong>What Is <a href="http://specs.md/">specs.md</a>?</strong></h2><p><a href="http://specs.md/">specs.md</a> is an open-source framework that implements <strong>AI-DLC</strong> (AI-Driven Development Lifecycle)&#8212;a methodology that came out of AWS for AI-native software development.</p><p>It gives you:</p><ul><li><p><strong>Specialized agents</strong> for each phase&#8212;Inception, Construction, Operations</p></li><li><p><strong>Memory Bank</strong>&#8212;persistent context that survives across sessions</p></li><li><p><strong>Human gates</strong>&#8212;validation checkpoints that catch errors before they cascade</p></li><li><p><strong>Standards you define once</strong>&#8212;tech stack, coding conventions, architecture patterns</p></li></ul><p>It plugs into the tools you already use: Claude Code, Cursor, GitHub Copilot.</p><div><hr></div><h2><strong>How It Works</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://asciinema.org/a/763995" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Uajg!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4fd04f3-a9d0-47aa-bcdd-ef7bf7b712b4_1424x1400.png 424w, 
https://substackcdn.com/image/fetch/$s_!Uajg!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4fd04f3-a9d0-47aa-bcdd-ef7bf7b712b4_1424x1400.png 848w, https://substackcdn.com/image/fetch/$s_!Uajg!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4fd04f3-a9d0-47aa-bcdd-ef7bf7b712b4_1424x1400.png 1272w, https://substackcdn.com/image/fetch/$s_!Uajg!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4fd04f3-a9d0-47aa-bcdd-ef7bf7b712b4_1424x1400.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Uajg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4fd04f3-a9d0-47aa-bcdd-ef7bf7b712b4_1424x1400.png" width="1424" height="1400" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c4fd04f3-a9d0-47aa-bcdd-ef7bf7b712b4_1424x1400.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1400,&quot;width&quot;:1424,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:299491,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://asciinema.org/a/763995&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.cengizhan.com/i/182536264?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4fd04f3-a9d0-47aa-bcdd-ef7bf7b712b4_1424x1400.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!Uajg!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4fd04f3-a9d0-47aa-bcdd-ef7bf7b712b4_1424x1400.png 424w, https://substackcdn.com/image/fetch/$s_!Uajg!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4fd04f3-a9d0-47aa-bcdd-ef7bf7b712b4_1424x1400.png 848w, https://substackcdn.com/image/fetch/$s_!Uajg!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4fd04f3-a9d0-47aa-bcdd-ef7bf7b712b4_1424x1400.png 1272w, https://substackcdn.com/image/fetch/$s_!Uajg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4fd04f3-a9d0-47aa-bcdd-ef7bf7b712b4_1424x1400.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>You don&#8217;t start by prompting blindly. You start by capturing intent.</p><pre><code><code>/specsmd-master-agent
</code></code></pre><p>The Master Agent asks: what are you building?</p><p>From there:</p><ol><li><p><strong>Inception Agent</strong> elaborates your goal into requirements, user stories, and system context</p></li><li><p><strong>Construction Agent</strong> executes bolts&#8212;time-boxed sessions with validated stages</p></li><li><p><strong>Operations Agent</strong> handles deployment and monitoring</p></li></ol><p>Every artifact gets stored in a file-based Memory Bank. Readable by humans. Parseable by AI. Context that persists.</p><p>The recording above shows what this looks like in action.</p><div><hr></div><h2><strong>Why This Matters</strong></h2><p>Because AI amplifies whatever you give it.</p><p>Feed it chaos, you get faster chaos.<br>Feed it structure, you get faster delivery.</p><p><a href="http://specs.md/">specs.md</a> doesn&#8217;t slow you down. It stops you from crashing later.</p><p>The human gates aren&#8217;t bureaucracy&#8212;they&#8217;re the 30% that makes AI output production-ready. You review at each stage. Errors get caught early, before they cascade downstream.</p><div><hr></div><h2><strong>It&#8217;s Alpha&#8212;I Need Your Feedback</strong></h2><p>This is early. Some things work well. Some things don&#8217;t yet.</p><p>The Operations Agent has known issues I&#8217;m actively fixing. The framework is evolving based on real usage.</p><p>I&#8217;m not looking for users who want a polished product. I&#8217;m looking for <strong>opinionated engineers</strong> who want to shape how AI-native development actually works in practice.</p><p>If you&#8217;ve felt the pain of vibe-coding at scale&#8212;if you&#8217;ve seen promising AI experiments fail when they hit production&#8212;I want to hear from you.</p><div><hr></div><h2><strong>Get Started</strong></h2><pre><code><code>npx specsmd@latest install
</code></code></pre><p>Then open your AI coding tool and type:</p><pre><code><code>/specsmd-master-agent
</code></code></pre><p><strong>Links:</strong></p><ul><li><p>Documentation: <a href="https://specs.md/">specs.md</a></p></li><li><p>GitHub: <a href="https://github.com/fabriqaai/specsmd">github.com/fabriqaai/specsmd</a></p></li><li><p>Report issues: <a href="https://github.com/fabriqaai/specsmd/issues">GitHub Issues</a></p></li></ul><div><hr></div><p>Vibes are fun. Specs are how you ship.</p><p>Let&#8217;s build the next generation of engineering, together.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.cengizhan.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading cengizhan.com! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Developer’s Guide: The State of AI Global Survey 2025]]></title><description><![CDATA[This is a developer-focused personal analysis of McKinsey&#8217;s November 2025 &#8220;The State of AI&#8221; survey.]]></description><link>https://www.cengizhan.com/p/developers-guide-mckinsey-state-of</link><guid isPermaLink="false">https://www.cengizhan.com/p/developers-guide-mckinsey-state-of</guid><dc:creator><![CDATA[Cengiz Han]]></dc:creator><pubDate>Mon, 10 Nov 2025 01:24:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!oadI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc6670b-be7f-431a-867a-45dbef5f520e_2040x1224.png" 
length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote><p><strong>DISCLAIMER</strong></p><p>This report was generated by <strong>Claude Code</strong> analyzing <a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai">McKinsey&#8217;s November 2025 State of AI survey report</a> for my own use, and I decided it was worth sharing. This is NOT an official McKinsey publication&#8212;it&#8217;s my personal analysis created by working with AI to extract developer-relevant insights from their research. <strong>This is how I personally research and digest big reports like this one, and these are the insights I captured along the way.</strong></p><p>All McKinsey data is clearly marked with &#128202;. Everything else is interpretation, context, and technical translation.</p></blockquote><div><hr></div><h2><strong>What This Is</strong></h2><p>This is a <strong>developer-focused analysis</strong> of McKinsey&#8217;s November 2025 &#8220;State of AI&#8221; survey report. 
Instead of wading through 52 pages of business-speak, I used Claude Code to extract what actually matters for software engineers and translate McKinsey&#8217;s findings into actionable technical insights.</p><p><strong>The big question I asked:</strong> <em>&#8220;From a developer&#8217;s perspective, what does McKinsey&#8217;s State of AI 2025 report actually mean for my career and daily work?&#8221;</em></p><p><strong>What you&#8217;ll find here:</strong></p><ul><li><p><strong>&#128202; McKinsey&#8217;s data</strong> (1,993 respondents, 105 countries) on AI adoption, scaling challenges, and what separates high performers</p></li><li><p><strong>Technical translations</strong> of business concepts into engineering reality</p></li><li><p><strong>Real-world context</strong> from 2025 job market data, Big Tech layoffs, and actual implementation patterns</p></li><li><p><strong>Actionable guidance</strong> on what skills to learn, what questions to ask, and how to position yourself</p></li></ul><p><strong>Not interested in the developer angle?</strong> Read the <strong><a href="https://github.com/hancengiz/research_reports/blob/main/2-analysis/executive-summary-november-2025.md">Executive Summary</a></strong> for the business leadership perspective on the same report.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!oadI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc6670b-be7f-431a-867a-45dbef5f520e_2040x1224.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!oadI!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc6670b-be7f-431a-867a-45dbef5f520e_2040x1224.png 424w, 
https://substackcdn.com/image/fetch/$s_!oadI!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc6670b-be7f-431a-867a-45dbef5f520e_2040x1224.png 848w, https://substackcdn.com/image/fetch/$s_!oadI!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc6670b-be7f-431a-867a-45dbef5f520e_2040x1224.png 1272w, https://substackcdn.com/image/fetch/$s_!oadI!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc6670b-be7f-431a-867a-45dbef5f520e_2040x1224.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!oadI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc6670b-be7f-431a-867a-45dbef5f520e_2040x1224.png" width="1456" height="874" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1cc6670b-be7f-431a-867a-45dbef5f520e_2040x1224.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:874,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2643240,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.cengizhan.com/i/178458745?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc6670b-be7f-431a-867a-45dbef5f520e_2040x1224.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!oadI!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc6670b-be7f-431a-867a-45dbef5f520e_2040x1224.png 424w, https://substackcdn.com/image/fetch/$s_!oadI!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc6670b-be7f-431a-867a-45dbef5f520e_2040x1224.png 848w, https://substackcdn.com/image/fetch/$s_!oadI!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc6670b-be7f-431a-867a-45dbef5f520e_2040x1224.png 1272w, https://substackcdn.com/image/fetch/$s_!oadI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc6670b-be7f-431a-867a-45dbef5f520e_2040x1224.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h2><strong>How This Was Created</strong></h2><p>I loaded <strong>all seven McKinsey State of AI reports (2020-2025)</strong> into Claude Code and analyzed them with my <a href="https://www.cengizhan.com/p/vibe-coded-a-pdf-reader-mcp-tool">pdf reader mcp</a> to identify trends, shifts, and patterns over time. This isn&#8217;t just about the November 2025 report&#8212;it&#8217;s about understanding how <a href="https://github.com/hancengiz/research_reports/blob/main/2-analysis/analysis.md#-mckinseys-current-position-november-2025">McKinsey&#8217;s perspective on AI has evolved and what that means for developers.</a></p><p><strong>Questions I asked Claude Code:</strong></p><ul><li><p>&#8220;How has the definition of &#8216;high performers&#8217; changed from 2021 to 2025?&#8221;</p></li><li><p>&#8220;If AI will create more job opportunities in tech, why are there Big Tech layoffs?&#8221;</p></li><li><p>&#8220;What trends changed between 2023, 2024, and 2025?&#8221;</p></li></ul><p>The result is this analysis&#8212;combining McKinsey&#8217;s multi-year survey research with technical context, market data, trend analysis, and engineering best practices.</p><p><strong>Visual markers in this document:</strong></p><ul><li><p><strong>&#128202; = McKinsey data</strong> (directly from their November 2025 report)</p></li><li><p><strong>Paragraphs with orange lines = External context</strong> (Claude&#8217;s analysis, web searches, technical interpretation, real-world examples)</p></li></ul><div><hr></div><h1><strong>&#128202; McKinsey&#8217;s Current Position (November 2025)</strong></h1><p><em>A paragraph from the <a 
href="https://github.com/hancengiz/research_reports/blob/main/2-analysis/analysis.md#-mckinseys-current-position-november-2025">general analysis document </a>I generated.</em></p><p><strong> The Realistic Assessment:</strong></p><p>&#128202; <strong>From McKinsey&#8217;s November 2025 report:</strong></p><ul><li><p>&#8220;Most organizations are still navigating the transition from experimentation to scaled deployment.&#8221;</p></li><li><p>&#8220;While AI tools are now commonplace, most organizations have not yet embedded them deeply enough into their workflows and processes to realize material enterprise-level benefits.&#8221;</p></li><li><p>&#8220;The transition from pilots to scaled impact remaining a work in progress at most organizations.&#8221;</p></li></ul><p><strong>The Path Forward:</strong></p><p>&#128202; <strong>McKinsey&#8217;s recommended practices:</strong></p><ul><li><p>Think transformatively, not incrementally</p></li><li><p>Redesign workflows fundamentally</p></li><li><p>Pursue innovation and growth, not just efficiency</p></li><li><p>Invest heavily and track ROI rigorously</p></li><li><p>Ensure C-suite ownership and commitment</p></li><li><p>Follow comprehensive best practices</p></li><li><p>Build organizational capabilities, not just deploy technology</p></li></ul><p><strong>The Ultimate Message:</strong></p><p>&#128202; <strong>From McKinsey&#8217;s November 2025 report:</strong></p><blockquote><p>&#8220;As AI tools, including agents, improve and companies&#8217; capabilities mature, the opportunity to embed AI more fully into the enterprise will offer organizations new ways to capture value and create competitive advantage.&#8221;</p></blockquote><p>&#128202; <strong>Synthesis:</strong> The journey from 2023&#8217;s &#8220;breakout year&#8221; to 2025&#8217;s &#8220;agents, innovation, and transformation&#8221; reflects McKinsey&#8217;s view evolving from technological excitement to organizational realism - recognizing that AI&#8217;s promise 
remains ahead, but achieving it requires fundamental business transformation, not just technology adoption.</p><div><hr></div><h2><strong>TL;DR for Developers</strong></h2><p><strong>The Bottom Line:</strong> Your company is probably using AI (88% are), but they&#8217;re likely stuck in pilot hell (68% haven&#8217;t scaled). The ones succeeding aren&#8217;t just adding AI to existing code&#8212;they&#8217;re fundamentally redesigning workflows and building with AI agents. This is a <strong>systems architecture challenge</strong>, not just an API integration problem.</p><p><strong>Three Critical Insights:</strong></p><ol><li><p><strong>AI Agents are the new frontier</strong> (62% experimenting, 23% scaling)</p></li><li><p><strong>Workflow redesign &gt; Tool adoption</strong> (2.8x gap between high/low performers)</p></li><li><p><strong>Your job is changing, not disappearing</strong> (IT/dev functions seeing headcount increases)</p></li></ol><div><hr></div><h2><strong>1. 
AI Agents: The Technical Shift You Need to Understand</strong></h2><h3><strong>What McKinsey Defines as &#8220;AI Agents&#8221;</strong></h3><p>&#128202; <strong>McKinsey&#8217;s Definition:</strong> &#8220;AI agents are systems based on foundation models that can act in the real world. Unlike a gen AI chatbot or copilot, which is largely reactive, an agentic solution can plan and execute multiple steps in a workflow.&#8221;</p><blockquote><p><strong>This information is from LLM, external sources</strong></p><p><strong>Translation for Developers:</strong></p><pre><code><code>Traditional Gen AI (2023-2024):          AI Agents (2025+):
&#9500;&#9472; User prompt &#8594; AI response            &#9500;&#9472; Goal &#8594; Multi-step planning
&#9500;&#9472; Single-turn interaction              &#9500;&#9472; Multi-turn autonomous execution
&#9500;&#9472; No state persistence                 &#9500;&#9472; State management &amp; context retention
&#9500;&#9472; Human-driven workflow                &#9500;&#9472; AI-driven workflow orchestration
&#9492;&#9472; Example: ChatGPT, Copilot prompts   &#9492;&#9472; Example: AutoGPT, agent frameworks
</code></code></pre></blockquote><h3><strong>Adoption Reality Check</strong></h3><p>&#128202; <strong>Current State (McKinsey Nov 2025):</strong></p><ul><li><p><strong>62%</strong> of organizations are experimenting with or piloting AI agents</p></li><li><p><strong>23%</strong> are scaling agents somewhere in their enterprise</p></li><li><p><strong>Industries leading:</strong> Technology (24%), Media/Telecom (21%), Healthcare (18%)</p></li></ul><p><strong>Where Agents Are Being Deployed:</strong></p><ol><li><p><strong>IT and knowledge management</strong> (most common)</p></li><li><p><strong>Service operations</strong></p></li><li><p><strong>Software engineering</strong> &#8592; You&#8217;re here</p></li><li><p><strong>Product development</strong></p></li></ol><h3><strong>The Reality vs. The Hype</strong></h3><p>&#128202; <strong>McKinsey&#8217;s Realistic Assessment (Michael Chui):</strong></p><p>&#8220;When it comes to agents, it takes hard work to do it well.&#8221;</p><blockquote><p><strong>This information is from LLM, external sources</strong></p><p><strong>What This Means for You:</strong></p><ul><li><p>Building production-ready agents is NOT just using LangChain or AutoGPT</p></li><li><p>You need robust error handling, fallback mechanisms, and human oversight</p></li><li><p>Most implementations are still exploratory (not production-critical)</p></li><li><p>The technical challenges are real: state management, reliability, cost control</p></li></ul></blockquote><div><hr></div><h2><strong>2. The Scaling Gap: Why Most AI Projects Fail to Deploy</strong></h2><h3><strong>Where Organizations Are Stuck</strong></h3><p>&#128202; <strong>McKinsey Data (Nov 2025):</strong></p><pre><code><code>Experimentation:  31% &#8592; Still testing, PoC phase
Piloting:         30% &#8592; Limited production, single team/use case
Scaling:          25% &#8592; Multiple teams, cross-functional deployment
Fully Scaled:      7% &#8592; Enterprise-wide, integrated into core systems
&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;
STUCK (not scaling): 68%
</code></code></pre><blockquote><p><strong>This information is from LLM, external sources</strong></p><p><strong>Developer Translation:</strong></p><ul><li><p><strong>Experimenting</strong> = Jupyter notebooks, side projects, hackathons</p></li><li><p><strong>Piloting</strong> = One team using it, hardcoded configs, manual processes</p></li><li><p><strong>Scaling</strong> = Multi-team adoption, CI/CD integration, monitoring</p></li><li><p><strong>Fully Scaled</strong> = Platform-level integration, automated ops, org-wide access</p></li></ul></blockquote><h3><strong>Why Technical Teams Get Stuck</strong></h3><p>&#128202; <strong>Why Companies Get Stuck (from McKinsey):</strong></p><ol><li><p><strong>Incremental thinking:</strong> Use-case-by-use-case approach creates technical debt</p></li><li><p><strong>Efficiency-only objectives:</strong> Cost focus limits organizational energy</p></li><li><p><strong>No workflow redesign:</strong> Adding AI to broken processes doesn&#8217;t transform outcomes</p></li><li><p><strong>IT ownership:</strong> Delegating to IT instead of a CEO-led transformation</p></li><li><p><strong>Incomplete execution:</strong> Following some best practices, but not all 6 dimensions</p></li></ol><blockquote><p><strong>This information is from LLM, external sources</strong></p><p><strong>Technical Translation:</strong></p><p>What this means for developers and technical teams:</p><ol><li><p><strong>&#8220;Incremental thinking creates technical debt&#8221;</strong> translates to:</p><ul><li><p>Adding AI to existing systems without redesigning architecture</p></li><li><p>Building one-off solutions instead of reusable platforms</p></li><li><p>Result: Fragile integrations, maintenance nightmares</p></li></ul></li><li><p><strong>&#8220;No workflow redesign&#8221;</strong> translates to:</p><ul><li><p>Just adding AI endpoints to existing code</p></li><li><p>Not rethinking the entire system architecture</p></li><li><p>Missing opportunities for AI-native 
design patterns</p></li></ul></li><li><p><strong>&#8220;IT ownership without business context&#8221;</strong> translates to:</p><ul><li><p>Missing MLOps pipelines and AI infrastructure</p></li><li><p>No monitoring/observability for AI systems</p></li><li><p>Building what&#8217;s asked, not what drives business value</p></li><li><p>Disconnect between technical capability and impact</p></li></ul></li></ol></blockquote><h3><strong>The High Performer Difference (Technical Practices)</strong></h3><p>&#128202; <strong>What the top 6% do differently:</strong></p><table><thead><tr><th>Practice</th><th>High Performers</th><th>Others</th><th>Gap</th></tr></thead><tbody><tr><td><strong>Technology infrastructure</strong> allowing latest tech implementation</td><td>60%</td><td>22%</td><td><strong>2.7x</strong></td></tr><tr><td><strong>Iterative solution development</strong> with established improvement processes</td><td>54%</td><td>23%</td><td><strong>2.3x</strong></td></tr><tr><td><strong>Human-in-the-loop processes</strong> clearly defined</td><td>65%</td><td>24%</td><td><strong>2.7x</strong></td></tr><tr><td><strong>Workflow redesign</strong> embedding AI into business processes</td><td>58%</td><td>20%</td><td><strong>2.9x</strong></td></tr></tbody></table><blockquote><p><strong>This information is from LLM, external sources</strong></p><p><strong>Translation:</strong></p><ul><li><p>They build <strong>platforms</strong>, not one-off solutions</p></li><li><p>They have <strong>CI/CD for AI models</strong>, not manual deployments</p></li><li><p>They define <strong>when humans validate outputs</strong>, not ad-hoc checking</p></li><li><p>They <strong>redesign the system</strong>, not just add AI endpoints</p></li></ul></blockquote><div><hr></div><h2><strong>3. 
Workflow Redesign: The #1 Technical Success Factor</strong></h2><h3><strong>What McKinsey Found</strong></h3><p>&#128202; <strong>Key Finding:</strong></p><p>&#8220;Out of 31 variables tested, workflow redesign has <strong>one of the strongest contributions</strong> to achieving meaningful business impact.&#8221;</p><p><strong>Statistics:</strong></p><ul><li><p>Only <strong>21%</strong> of all organizations have fundamentally redesigned workflows</p></li><li><p><strong>55%</strong> of high performers redesigned workflows vs <strong>20%</strong> of others (<strong>2.8x gap</strong>)</p></li><li><p>This is the <strong>highest correlation</strong> with EBIT impact across all factors tested</p></li></ul><h3><strong>What &#8220;Fundamental Redesign&#8221; Means for Developers</strong></h3><blockquote><p><strong>This information is from LLM, external sources</strong></p><p>&#10060; <strong>NOT Workflow Redesign (Adding AI to existing process):</strong></p><pre><code><code># Before: Manual customer support ticket handling
def handle_ticket(ticket):
    assign_to_human(ticket)
    human_resolves_ticket(ticket)

# After: Adding AI to existing workflow
def handle_ticket(ticket):
    ai_suggests_response(ticket)  # &#8592; Just added AI
    assign_to_human(ticket)        # &#8592; Same old process
    human_resolves_ticket(ticket)
</code></code></pre><p>&#9989; <strong>Workflow Redesign (Rearchitecting around AI capabilities):</strong></p><pre><code><code># Redesigned: AI-first with human oversight
def handle_ticket(ticket):
    # AI handles entire workflow
    severity = ai_classify_severity(ticket)

    if severity == "low":
        # AI resolves autonomously
        response = ai_generate_resolution(ticket)
        ai_send_response(response)
        human_review_sample(response, probability=0.1)

    elif severity == "medium":
        # AI drafts, human approves
        draft = ai_generate_resolution(ticket)
        human_approves_and_sends(draft)

    else:  # high severity
        # Human-led with AI assistance
        context = ai_gather_context(ticket)
        assign_to_specialist(ticket, context)
        ai_monitor_and_suggest(ticket)
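
# A hedged sketch (my addition): the autonomous branch above implies a
# confidence gate before anything is sent without review. The threshold
# value is illustrative, not a figure from the report.
def route_by_confidence(confidence, threshold=0.8):
    # confidence: the model's score for its own output, in [0, 1]
    if confidence >= threshold:
        return "autonomous"     # AI may act, with sampled human review
    return "human_review"       # otherwise always escalate to a human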
</code></code></pre><p><strong>The Architectural Difference:</strong></p><ul><li><p><strong>Before:</strong> Linear process, AI as optional helper</p></li><li><p><strong>After:</strong> Branching logic based on AI capabilities, human-in-the-loop as checkpoints</p></li></ul><h3><strong>Real-World Implications for Your Architecture</strong></h3><p><strong>What changes:</strong></p><ol><li><p><strong>Role transformation:</strong></p><ul><li><p>Developers: From building features &#8594; Building AI-enabled systems</p></li><li><p>Users: From doers &#8594; Overseers/validators</p></li></ul></li><li><p><strong>New system requirements:</strong></p><ul><li><p>Confidence scoring for AI outputs</p></li><li><p>Audit trails for AI decisions</p></li><li><p>Fallback mechanisms when AI fails</p></li><li><p>Human escalation paths</p></li><li><p>Feedback loops for model improvement</p></li></ul></li><li><p><strong>Infrastructure needs:</strong></p><ul><li><p>Real-time model inference at scale</p></li><li><p>A/B testing infrastructure for AI variants</p></li><li><p>Monitoring for AI-specific failures (hallucinations, bias, drift)</p></li><li><p>Cost tracking per AI call (tokens, compute)</p></li></ul></li></ol></blockquote><div><hr></div><h2><strong>4. 
Your Job: Changing, Not Disappearing</strong></h2><h3><strong>The Nuanced Reality</strong></h3><p>&#128202; <strong>McKinsey&#8217;s Workforce Data (Nov 2025):</strong></p><ul><li><p><strong>32%</strong> expect workforce decreases of 3%+ in the next year</p></li><li><p><strong>43%</strong> expect little to no change</p></li><li><p><strong>13%</strong> expect increases of 3%+</p></li></ul><p><strong>Function-Level Breakdown:</strong></p><ul><li><p>&#128994; <strong>Software Engineering/Dev</strong> &#8594; Headcount likely to INCREASE</p></li><li><p>&#128994; <strong>IT</strong> &#8594; Headcount likely to INCREASE</p></li><li><p>&#128994; <strong>Product/Service Development</strong> &#8594; Headcount likely to INCREASE</p></li><li><p>&#128308; <strong>Service Operations</strong> &#8594; Headcount likely to decrease</p></li><li><p>&#128308; <strong>Supply Chain/Inventory</strong> &#8594; Headcount likely to decrease</p></li></ul><p>&#128202; <strong>McKinsey Quote (Lareina Yee):</strong></p><p>&#8220;Even in these early days of adoption, we are seeing changes in the <strong>skills demanded</strong> for a range of jobs.&#8221;</p><h3><strong>The Big Tech Reality: What&#8217;s Actually Happening in November 2025</strong></h3><blockquote><p><strong>This information is from LLM, external sources</strong></p><p>While McKinsey&#8217;s survey data shows IT/dev functions expecting headcount increases overall, the 2025 reality for Big Tech companies is more complex:</p><p><strong>Big Tech Layoffs (2025 Data):</strong></p><ul><li><p><strong>178,635 tech workers</strong> laid off in 2025 across 606 layoff events</p></li><li><p><strong>627 tech workers losing jobs every day</strong> in AI-driven restructuring</p></li><li><p>Major cuts: Amazon (14K), Microsoft (9K), Intel (25K), IBM (8K), Salesforce (2.5-5K)</p></li><li><p>Over <strong>17,000 jobs explicitly attributed to AI</strong>, another 20,000 to automation</p></li></ul><p><strong>BUT: Tech Jobs ARE Migrating to Non-Tech 
Industries:</strong></p><p>Large non-tech companies are absorbing tech talent:</p><ul><li><p><strong>Walmart</strong>: +5,000 tech workers hired in 2025</p></li><li><p><strong>JP Morgan Chase</strong>: 55,000 technology employees total</p></li><li><p><strong>United Health</strong>: +10,000 tech workers over past decade</p></li><li><p><strong>Goldman Sachs, Citizens Financial</strong>: Active hiring sprees</p></li></ul><p><strong>What This Means for You:</strong></p><ul><li><p>Big Tech (FAANG) is contracting and using &#8220;AI efficiency&#8221; as rationale</p></li><li><p>Large traditional companies (finance, retail, healthcare) are hiring tech talent</p></li><li><p>Tech jobs spreading from Big Tech to non-tech Fortune 500 companies</p></li><li><p>Small/mid companies (&lt;$500M revenue) still face talent constraints</p></li></ul><p><strong>Key Insight:</strong> McKinsey&#8217;s data showing &#8220;larger companies hiring AI talent at 2x rate&#8221; refers to large NON-TECH companies, not Big Tech. 
The democratization of tech jobs is real, but it&#8217;s migrating to traditional industries becoming software-enabled, not to startups.</p></blockquote><h3><strong>What This Means for Software Engineers</strong></h3><blockquote><p><strong>This information is from LLM, external sources</strong></p><p><strong>Your role is evolving, and where you work may change too.</strong></p><p><strong>Old Job Description (2023):</strong></p><ul><li><p>Write code to implement features</p></li><li><p>Debug and fix issues</p></li><li><p>Deploy applications</p></li><li><p>Maintain systems</p></li></ul><p><strong>New Job Description (2025+):</strong></p><ul><li><p>Design AI-enabled systems architecture</p></li><li><p>Define human-AI collaboration patterns</p></li><li><p>Build AI observability/monitoring</p></li><li><p>Implement safety guardrails</p></li><li><p>Optimize AI workflows for cost &amp; performance</p></li><li><p><strong>Use AI to augment your own productivity</strong> (Copilot, etc.)</p></li></ul></blockquote><h3><strong>Where to Look for Opportunities</strong></h3><blockquote><p><strong>This information is from LLM, external sources</strong></p><p>Based on 2025 hiring trends, consider these sectors:</p><p><strong>High Growth Sectors for Tech Talent:</strong></p><ol><li><p><strong>Financial Services</strong> - JP Morgan (55K tech employees), Goldman Sachs, Citizens Financial</p></li><li><p><strong>Retail/E-commerce</strong> - Walmart (+5K in 2025), Target, other large retailers</p></li><li><p><strong>Healthcare</strong> - United Health (+10K over decade), insurance companies, health tech</p></li><li><p><strong>Traditional Enterprise</strong> - Fortune 500 companies building software capabilities</p></li></ol><p><strong>Lower Growth/Higher Risk:</strong></p><ul><li><p>Big Tech (FAANG) - Significant layoffs despite selective AI hiring</p></li><li><p>Startups (&lt;$500M revenue) - Resource constraints, limited AI hiring</p></li></ul><p><strong>Education Pipeline 
Shift:</strong></p><ul><li><p>Cornell CS graduates going to finance increased from 16% &#8594; 22% (since 2022)</p></li><li><p>Carnegie Mellon (Heinz College): Finance placements rose from 16% &#8594; 19%</p></li><li><p>Students choosing finance/healthcare/retail over Big Tech for stability</p></li></ul></blockquote><h3><strong>New Roles Emerging (High Demand)</strong></h3><p>&#128202; <strong>From McKinsey March 2025 Report:</strong></p><pre><code><code>&#8220;Respondents at larger companies are more likely than their peers at smaller organizations
to report hiring a broad range of AI-related roles, with the largest gaps seen in hiring:

&#8226; AI data scientists
&#8226; Machine learning engineers
&#8226; Data engineers&#8221;
</code></code></pre><blockquote><p><strong>This information is from LLM, external sources</strong></p><p><strong>Translation: These roles are in HIGH demand:</strong></p><ol><li><p><strong>ML Engineers</strong> - Building and deploying models</p></li><li><p><strong>Data Engineers</strong> - Building pipelines for AI training data</p></li><li><p><strong>MLOps Specialists</strong> - CI/CD for AI systems</p></li><li><p><strong>AI Product Managers</strong> - Defining AI-enabled products</p></li><li><p><strong>AI Safety/Compliance Engineers</strong> - Ensuring responsible AI use</p></li></ol><p><strong>Reality Check on &#8220;AI replacing developers&#8221;:</strong></p><ul><li><p>Yale Budget Lab research: Only <strong>1% of service firms</strong> reported AI as reason for layoffs (down from 10% in 2024)</p></li><li><p>AI may be a convenient excuse rather than primary driver of tech layoffs</p></li><li><p>Amazon CEO admitted layoffs were &#8220;not even really AI driven&#8221;</p></li><li><p>Real driver: ~$1 trillion in AI infrastructure spending forcing cost-cutting elsewhere</p></li></ul><p><strong>This information is from LLM, external sources</strong></p><h3><strong>Skills to Develop Now</strong></h3><p><strong>Technical Skills:</strong></p><ul><li><p>Understanding of LLM APIs and prompt engineering</p></li><li><p>Agent frameworks (LangChain, AutoGPT, CrewAI)</p></li><li><p>Vector databases (Pinecone, Weaviate, Chroma)</p></li><li><p>Model evaluation and monitoring</p></li><li><p>Cost optimization for AI systems</p></li></ul><p><strong>System Design Skills:</strong></p><ul><li><p>Designing for human-in-the-loop</p></li><li><p>Building feedback mechanisms</p></li><li><p>Failure mode analysis for AI</p></li><li><p>State management for multi-step agents</p></li><li><p>Scalability patterns for AI workloads</p></li></ul><p><strong>Business Skills:</strong></p><ul><li><p>Understanding ROI of AI features</p></li><li><p>Identifying high-impact use 
cases</p></li><li><p>Communicating AI limitations to stakeholders</p></li><li><p>Balancing automation with human oversight</p></li></ul></blockquote><div><hr></div><h2><strong>5. Investment &amp; Resource Realities</strong></h2><h3><strong>The Resource Gap</strong></h3><p>&#128202; <strong>Digital Budget Allocation to AI (McKinsey Nov 2025):</strong></p><table><thead><tr><th>Metric</th><th>High Performers</th><th>Others</th><th>Gap</th></tr></thead><tbody><tr><td>Spend &gt;20% of digital budget on AI</td><td>35%</td><td>10%</td><td><strong>3.5x</strong></td></tr><tr><td>Spend &gt;11% of digital budget on AI</td><td>55%</td><td>25%</td><td><strong>2.2x</strong></td></tr><tr><td>Spend &#8804;5% of digital budget on AI</td><td>6%</td><td>44%</td><td><strong>0.14x</strong></td></tr></tbody></table><blockquote><p><strong>This information is from LLM, external sources</strong></p><p><strong>What This Means for Your Team:</strong></p><p>If your company is treating AI as a 5% side project, you&#8217;re in the &#8220;others&#8221; category. 
High performers are making AI a <strong>core budget priority</strong>.</p><p><strong>Questions to Ask Your Leadership:</strong></p><ol><li><p>What % of our engineering budget is allocated to AI initiatives?</p></li><li><p>Are we building platforms or one-off solutions?</p></li><li><p>Do we have dedicated headcount for AI infrastructure?</p></li><li><p>What&#8217;s our 3-year AI roadmap?</p></li></ol></blockquote><h3><strong>Company Size Matters (A Lot)</strong></h3><p>&#128202; <strong>McKinsey Data: Large vs Small Companies</strong></p><p><strong>Large Companies ($5B+ revenue):</strong></p><ul><li><p><strong>47%</strong> in scaling phase</p></li><li><p><strong>2x more likely</strong> to hire specialized AI roles</p></li><li><p>Can afford comprehensive AI infrastructure</p></li><li><p>Have resources for dedicated AI teams</p></li></ul><p><strong>Small/Mid Companies (&lt;$500M revenue):</strong></p><ul><li><p><strong>29%</strong> in scaling phase</p></li><li><p>Limited specialized hiring</p></li><li><p>Must be scrappier with resources</p></li><li><p>Often rely on external expertise</p></li></ul><blockquote><p><strong>This information is from LLM, external sources</strong></p><p><strong>Implication for Developers:</strong></p><ul><li><p>At large companies: Expect specialized roles, bigger teams, more structure</p></li><li><p>At small/mid companies: Expect to wear multiple hats, use managed services, prioritize ruthlessly</p></li></ul></blockquote><div><hr></div><h2><strong>6. 
Technical Risks You&#8217;ll Need to Handle</strong></h2><h3><strong>The Risk Landscape</strong></h3><p>&#128202; <strong>McKinsey Data (Nov 2025):</strong></p><ul><li><p><strong>51%</strong> of organizations experienced at least one negative consequence from AI</p></li><li><p><strong>Top consequences:</strong> Inaccuracy (30%), cybersecurity issues, IP infringement</p></li><li><p><strong>Organizations are mitigating more:</strong> Average of 4 risks mitigated (up from 2 in 2022)</p></li></ul><blockquote><p><strong>This information is from LLM, external sources</strong></p><h3><strong>What This Means for Your Code</strong></h3><p><strong>You need to build for these failure modes:</strong></p><ol><li><p><strong>Inaccuracy / Hallucinations (30% experienced this)</strong></p></li></ol><pre><code><code># Don&#8217;t just trust the output
response = llm.generate(prompt)

# Add verification layers
if is_factual_claim(response):
    verified = fact_check(response)
    if not verified:
        flag_for_human_review(response)
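
# The names above (llm, is_factual_claim, fact_check, flag_for_human_review)
# are app-specific placeholders, not a real library API. One possible
# fact_check, sketched here with the verifier model passed in as a callable
# so the helper stays easy to test (a heuristic, not ground truth):
def fact_check(text, ask_verifier, samples=3):
    # Self-consistency vote: ask a second "verifier" model the same yes/no
    # question several times, then keep the majority answer.
    # (Use an odd samples count so the vote cannot tie.)
    question = f"Is this claim accurate? Answer yes or no: {text}"
    votes = ["yes" in ask_verifier(question).lower() for _ in range(samples)]
    return max(set(votes), key=votes.count)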
</code></code></pre><ol start="2"><li><p><strong>Cybersecurity Issues</strong></p><ul><li><p>Prompt injection attacks</p></li><li><p>Data leakage through model outputs</p></li><li><p>Unauthorized access via AI interfaces</p></li></ul><p><strong>Your responsibility:</strong></p><ul><li><p>Input sanitization for prompts</p></li><li><p>Output filtering for sensitive data</p></li><li><p>Access controls on AI endpoints</p></li></ul></li><li><p><strong>IP Infringement</strong></p><ul><li><p>Models trained on copyrighted data</p></li><li><p>Outputs that reproduce training data</p></li></ul><p><strong>Your responsibility:</strong></p><ul><li><p>Document model training data sources</p></li><li><p>Implement plagiarism detection</p></li><li><p>Have legal review of model outputs (especially for public-facing features)</p></li></ul></li></ol></blockquote><h3><strong>Human Oversight Patterns</strong></h3><p>&#128202; <strong>McKinsey Data:</strong></p><ul><li><p><strong>27%</strong> review ALL gen AI outputs before use</p></li><li><p><strong>27%</strong> review &#8804;20% of outputs</p></li><li><p><strong>Industries with highest oversight:</strong> Business, legal, professional services</p></li></ul><blockquote><p><strong>This information is from LLM, external sources</strong></p><p><strong>Design Pattern: Confidence-Based Review</strong></p><pre><code><code>import random  # used by the sampled-review branch below

def handle_ai_output(input_data):
    # ai_model, auto_execute and queue_for_review are app-specific stubs
    result = ai_model.predict(input_data)
    confidence = result.confidence_score

    if confidence &gt; 0.95:
        # Auto-approve for high confidence
        return auto_execute(result)
    elif confidence &gt; 0.70:
        # Sample review for medium confidence
        if random.random() &lt; 0.20:  # 20% review rate
            return queue_for_review(result)
        return auto_execute(result)
    else:
        # Always review for low confidence
        return queue_for_review(result)
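
# The 0.95 / 0.70 thresholds and the 20% sample rate above are illustrative
# defaults, not figures from the report; calibrate them against your own
# labeled review outcomes. A quick, seeded sanity check that sampled review
# converges on the target rate:
import random

def observed_review_rate(p=0.20, n=10_000, seed=7):
    # Draw n review decisions, each True with probability p, then report
    # the observed review fraction. Seeded so the result is reproducible.
    rng = random.Random(seed)
    picks = rng.choices([True, False], weights=[p, 1 - p], k=n)
    return sum(picks) / n

# observed_review_rate() lands close to 0.20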
</code></code></pre></blockquote><div><hr></div><blockquote><p><strong>This information is from LLM, external sources</strong></p><h2><strong>7. The Path Forward: What to Do Monday Morning</strong></h2><h3><strong>Immediate Actions (This Week)</strong></h3><ol><li><p><strong>Audit your current AI usage:</strong></p><ul><li><p>What AI tools is your team using? (Copilot, ChatGPT, Claude?)</p></li><li><p>Are they sanctioned or shadow IT?</p></li><li><p>What % of your workflow includes AI?</p></li></ul></li><li><p><strong>Assess your scaling phase:</strong></p><ul><li><p>Experimenting? (Just testing, POCs)</p></li><li><p>Piloting? (One team, limited production)</p></li><li><p>Scaling? (Multiple teams, real users)</p></li><li><p>Fully scaled? (Integrated into core systems)</p></li></ul></li><li><p><strong>Identify one workflow to redesign:</strong></p><ul><li><p>Don&#8217;t just add AI to existing process</p></li><li><p>Ask: &#8220;If we built this from scratch with AI-first, what would it look like?&#8221;</p></li><li><p>Start small but think transformatively</p></li></ul></li></ol><h3><strong>Short-Term (This Quarter)</strong></h3><ol><li><p><strong>Skill Up:</strong></p><ul><li><p>Take a course on LLM APIs (OpenAI, Anthropic, local models)</p></li><li><p>Build a small agent that does multi-step tasks</p></li><li><p>Experiment with vector databases</p></li><li><p>Learn prompt engineering beyond basic chat</p></li></ul></li><li><p><strong>Propose Infrastructure Improvements:</strong></p><ul><li><p>Monitoring for AI costs (token usage)</p></li><li><p>A/B testing framework for AI features</p></li><li><p>Feedback collection mechanism</p></li><li><p>Human review workflow</p></li></ul></li><li><p><strong>Document Your AI Usage:</strong></p><ul><li><p>What models are you using?</p></li><li><p>What prompts/configurations?</p></li><li><p>What are the failure modes?</p></li><li><p>How do you handle errors?</p></li></ul></li></ol><h3><strong>Long-Term (This 
Year)</strong></h3><ol><li><p><strong>Position Yourself as AI-Native:</strong></p><ul><li><p>Be the person who understands both traditional software AND AI</p></li><li><p>Learn to explain AI capabilities/limitations to non-technical stakeholders</p></li><li><p>Contribute to your org&#8217;s AI strategy discussions</p></li></ul></li><li><p><strong>Build Platform Thinking:</strong></p><ul><li><p>Stop building one-off AI integrations</p></li><li><p>Design reusable components (prompt templates, agent frameworks, monitoring)</p></li><li><p>Create internal tools that let others leverage AI</p></li></ul></li><li><p><strong>Stay Ahead of the Curve:</strong></p><ul><li><p>Follow AI agent frameworks development</p></li><li><p>Track production AI case studies</p></li><li><p>Join communities (r/LocalLLaMA, AI engineering Slack groups)</p></li><li><p>Contribute to open source AI tools</p></li></ul></li></ol></blockquote><div><hr></div><blockquote><p><strong>This information is from LLM, external sources</strong></p><h2><strong>8. 
Critical Questions to Ask Your Organization</strong></h2><h3><strong>Strategy Questions</strong></h3><ol><li><p><strong>&#8220;What&#8217;s our AI vision beyond cost savings?&#8221;</strong></p><ul><li><p>&#128202; High performers set growth/innovation goals (80% vs 50%)</p></li><li><p>If your company only talks about efficiency, that&#8217;s a red flag</p></li></ul></li><li><p><strong>&#8220;Are we redesigning workflows or just adding AI to existing processes?&#8221;</strong></p><ul><li><p>&#128202; Workflow redesign is the #1 success factor</p></li><li><p>If you&#8217;re just wrapping AI around old processes, you&#8217;ll struggle to scale</p></li></ul></li><li><p><strong>&#8220;Who owns AI strategy&#8212;IT or the C-suite?&#8221;</strong></p><ul><li><p>&#128202; Sukharevsky: &#8220;Delegating to IT is a recipe for failure&#8221;</p></li><li><p>CEO-led initiatives are 3x more successful</p></li></ul></li></ol><h3><strong>Technical Questions</strong></h3><ol start="4"><li><p><strong>&#8220;Do we have MLOps infrastructure?&#8221;</strong></p><ul><li><p>CI/CD for models?</p></li><li><p>Model versioning?</p></li><li><p>Monitoring and alerting?</p></li></ul></li><li><p><strong>&#8220;What&#8217;s our human-in-the-loop policy?&#8221;</strong></p><ul><li><p>&#128202; 65% of high performers have this clearly defined vs 24% of others</p></li><li><p>If undefined, you&#8217;re building on shaky ground</p></li></ul></li><li><p><strong>&#8220;How do we track AI costs and ROI?&#8221;</strong></p><ul><li><p>&#128202; High performers track well-defined KPIs (52% vs 13%)</p></li><li><p>Token costs can spiral quickly without tracking</p></li></ul></li></ol><h3><strong>Career Questions</strong></h3><ol start="7"><li><p><strong>&#8220;What AI skills are we hiring for?&#8221;</strong></p><ul><li><p>Are we building internal capabilities or outsourcing?</p></li><li><p>What&#8217;s the career path for AI-focused engineers?</p></li></ul></li><li><p><strong>&#8220;What % of engineering 
time is spent on AI projects?&#8221;</strong></p><ul><li><p>If &lt;10%, AI is a side project</p></li><li><p>If &gt;30%, AI is a strategic priority</p></li></ul></li></ol></blockquote><div><hr></div><h2><strong>9. Common Developer Misconceptions (Corrected)</strong></h2><h3><strong>Misconception #1: &#8220;AI will replace developers&#8221;</strong></h3><p>&#128202; <strong>Reality (McKinsey):</strong> IT and product development functions are <strong>increasing headcount</strong>, not decreasing.</p><blockquote><p><strong>This information is from LLM, external sources</strong></p><p><strong>The 2025 Market Reality:</strong></p><ul><li><p>Big Tech laid off <strong>178,635 workers</strong> in 2025 (often citing &#8220;AI efficiency&#8221;)</p></li><li><p>However, <strong>only 1% of firms</strong> actually report AI as layoff reason (Yale research)</p></li><li><p>Tech jobs ARE migrating: From Big Tech &#8594; Finance/Retail/Healthcare</p></li><li><p>Large non-tech companies (Walmart, JP Morgan, United Health) hiring thousands</p></li></ul><p><strong>What&#8217;s actually happening:</strong></p><ul><li><p>Junior developers using AI become more productive (less junior work needed)</p></li><li><p>Senior developers focus on system design, AI integration, oversight (more senior work needed)</p></li><li><p>New roles emerge (ML engineers, MLOps, AI safety)</p></li><li><p><strong>Job market shifting</strong>: Big Tech contracting, traditional industries expanding tech teams</p></li></ul></blockquote><h3><strong>Misconception #2: &#8220;We just need to add ChatGPT API and we&#8217;re done&#8221;</strong></h3><p>&#128202; <strong>Reality (McKinsey):</strong> Only 32% of organizations are scaling AI despite 88% adoption.</p><blockquote><p><strong>This information is from LLM, external sources</strong></p><p><strong>Why adding an API isn&#8217;t enough:</strong></p><ul><li><p>No monitoring/observability</p></li><li><p>No cost controls</p></li><li><p>No human oversight 
patterns</p></li><li><p>No workflow redesign</p></li><li><p>No feedback loops for improvement</p></li></ul></blockquote><h3><strong>Misconception #3: &#8220;AI agents will work autonomously right away&#8221;</strong></h3><p>&#128202; <strong>Reality (Michael Chui, McKinsey):</strong> &#8220;When it comes to agents, it takes hard work to do it well.&#8221;</p><blockquote><p><strong>This information is from LLM, external sources</strong></p><p><strong>The hard parts:</strong></p><ul><li><p>Error handling when agents get stuck</p></li><li><p>State management across multi-step workflows</p></li><li><p>Cost control (agents can burn through API credits)</p></li><li><p>Defining when to escalate to humans</p></li><li><p>Building trust with users</p></li></ul></blockquote><h3><strong>Misconception #4: &#8220;Smaller companies will hire more AI engineers&#8221;</strong></h3><p>&#128202; <strong>Reality (McKinsey):</strong> Large companies are hiring AI talent at <strong>2x the rate</strong> of smaller companies.</p><blockquote><p><strong>This information is from LLM, external sources</strong></p><p><strong>Why:</strong></p><ul><li><p>Large companies have bigger budgets for specialized roles</p></li><li><p>They can afford comprehensive AI infrastructure</p></li><li><p>They&#8217;re further along in scaling (47% vs 29%)</p></li></ul></blockquote><h3><strong>Misconception #5: &#8220;High performers just move faster&#8221;</strong></h3><p>&#128202; <strong>Reality (McKinsey):</strong> High performers are <strong>3.6x more ambitious</strong>, not just faster.</p><blockquote><p><strong>This information is from LLM, external sources</strong></p><p><strong>Difference:</strong></p><ul><li><p>Others: Add AI to 10 existing processes (incremental)</p></li><li><p>High performers: Redesign the entire business model around AI (transformative)</p></li></ul></blockquote><div><hr></div><h2><strong>10. 
The Developer&#8217;s Reality Check</strong></h2><h3><strong>What McKinsey&#8217;s Data Really Tells Us</strong></h3><p><strong>The Harsh Truth:</strong></p><ul><li><p><strong>68%</strong> of companies are stuck in pilots</p></li><li><p><strong>61%</strong> see no enterprise-level EBIT impact</p></li><li><p><strong>79%</strong> haven&#8217;t fundamentally redesigned workflows</p></li><li><p><strong>Most AI projects fail to scale</strong></p></li></ul><p><strong>But also:</strong></p><ul><li><p>The <strong>6% who succeed</strong> follow clear patterns</p></li><li><p>Workflow redesign is THE differentiator (2.8x gap)</p></li><li><p>Developer/IT roles are growing, not shrinking</p></li><li><p>AI agents are the next frontier (62% experimenting)</p></li></ul><blockquote><p><strong>This information is from LLM, external sources</strong></p><h3><strong>What This Means for Your Career</strong></h3><p><strong>Short-term (1-2 years):</strong></p><ul><li><p>Learn AI tooling (APIs, agents, vector DBs)</p></li><li><p>Build AI-augmented features</p></li><li><p>Understand AI limitations and failure modes</p></li></ul><p><strong>Medium-term (3-5 years):</strong></p><ul><li><p>Master AI system design</p></li><li><p>Become expert in human-AI collaboration patterns</p></li><li><p>Lead AI infrastructure initiatives</p></li><li><p>Understand business impact (not just technical capability)</p></li></ul><p><strong>Long-term (5+ years):</strong></p><ul><li><p>Be the bridge between traditional software and AI-native systems</p></li><li><p>Design entirely new workflows around AI capabilities</p></li><li><p>Lead transformative (not incremental) AI initiatives</p></li></ul><h3><strong>The Mindset Shift Required</strong></h3><p>&#10060; <strong>Old Developer Mindset:</strong><br>&#8220;I write code that implements business logic.&#8221;</p><p>&#9989; <strong>New AI-Native Developer Mindset:</strong><br>&#8220;I design systems where AI and humans collaborate, with clear fallback patterns, 
monitoring, and continuous improvement loops.&#8221;</p></blockquote><div><hr></div><h2><strong>Final Takeaway: Think Transformatively, Not Incrementally</strong></h2><p>&#128202; <strong>McKinsey&#8217;s Core Message (Alex Singla):</strong></p><p>&#8220;It pays to think big. The organizations that are building a genuine and lasting competitive advantage from their AI efforts are the ones that are thinking in terms of wholesale transformative change that stands to alter their business models, cost structures, and revenue streams&#8212;rather than proceeding incrementally.&#8221;</p><blockquote><p><strong>This information is from LLM, external sources</strong></p><h3><strong>For Developers, This Means:</strong></h3><p><strong>Don&#8217;t just:</strong></p><ul><li><p>Add AI to your existing code</p></li><li><p>Use Copilot to write boilerplate faster</p></li><li><p>Build one-off AI features</p></li></ul><p><strong>Instead:</strong></p><ul><li><p>Redesign your architecture around AI capabilities</p></li><li><p>Build platforms that enable AI-first workflows</p></li><li><p>Create systems where AI and humans collaborate effectively</p></li><li><p>Think about what&#8217;s possible if AI handles 80% of the work</p></li></ul><p><strong>The companies winning are not moving incrementally. Neither should you.</strong></p></blockquote><div><hr></div><p><strong>Report Citation:</strong><br>McKinsey &amp; Company (November 2025). &#8220;The state of AI in 2025: Agents, innovation, and transformation.&#8221; McKinsey Global Survey, 1,993 participants across 105 nations. 
Survey fielded June 25 - July 29, 2025.</p><p><strong>Authors:</strong> Alex Singla, Alexander Sukharevsky, Lareina Yee, Michael Chui, Bryce Hall, Tara Balakrishnan (QuantumBlack, AI by McKinsey)</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.cengizhan.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.cengizhan.com/subscribe?"><span>Subscribe now</span></a></p><h3><strong>Explore All Analyses</strong></h3><p><strong>Browse the full analysis folder:</strong> <a href="https://github.com/hancengiz/research_reports/tree/main/2-analysis">github.com/hancengiz/research_reports/tree/main/2-analysis</a></p><p>This folder contains all my ongoing conversations with Claude Code as I work to understand McKinsey&#8217;s reports and other industry research. New analyses are added as I explore different angles and questions.</p><h3><strong>Main Repository</strong></h3><p><strong>Full repository:</strong> <a href="https://github.com/hancengiz/research_reports">github.com/hancengiz/research_reports</a></p><p>Contains all source PDFs, text extractions, and the framework to analyze them yourself with Claude Code.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.cengizhan.com/p/developers-guide-mckinsey-state-of?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.cengizhan.com/p/developers-guide-mckinsey-state-of?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p>]]></content:encoded></item><item><title><![CDATA[Claude Code Prompt Coach Skill to analyse your AI-Assisted Coding Skills]]></title><description><![CDATA[Claude Code Prompt Coach 
Skill to analyse your AI-assisted coding skills and Claude Code usage]]></description><link>https://www.cengizhan.com/p/claude-code-prompt-coach-skill-to</link><guid isPermaLink="false">https://www.cengizhan.com/p/claude-code-prompt-coach-skill-to</guid><dc:creator><![CDATA[Cengiz Han]]></dc:creator><pubDate>Sun, 09 Nov 2025 14:47:49 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!MI4y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e812055-75fc-4746-a006-6a849abc4bed_1408x736.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>Ever wonder if you&#8217;re actually good at working with AI tools? Or if you&#8217;re just burning tokens with vague prompts and wondering why Claude keeps asking for clarification?</p><p>Yeah, me too.</p><p>So I built Prompt Coach&#8212;a Claude Code skill that analyzes your coding session logs to tell you exactly how good (or bad) your prompts are, where you&#8217;re wasting time, and which powerful tools you&#8217;re completely ignoring.</p><p><strong>The surprising part?</strong> Claude Code has been logging everything you do. Every prompt. Every tool call. Every token spent. 
It&#8217;s all sitting in <code>~/.claude/projects/</code> as JSONL files, waiting to be analyzed.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MI4y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e812055-75fc-4746-a006-6a849abc4bed_1408x736.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MI4y!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e812055-75fc-4746-a006-6a849abc4bed_1408x736.png 424w, https://substackcdn.com/image/fetch/$s_!MI4y!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e812055-75fc-4746-a006-6a849abc4bed_1408x736.png 848w, https://substackcdn.com/image/fetch/$s_!MI4y!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e812055-75fc-4746-a006-6a849abc4bed_1408x736.png 1272w, https://substackcdn.com/image/fetch/$s_!MI4y!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e812055-75fc-4746-a006-6a849abc4bed_1408x736.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MI4y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e812055-75fc-4746-a006-6a849abc4bed_1408x736.png" width="1408" height="736" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1e812055-75fc-4746-a006-6a849abc4bed_1408x736.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:736,&quot;width&quot;:1408,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1068463,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.cengizhan.com/i/178417033?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e812055-75fc-4746-a006-6a849abc4bed_1408x736.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MI4y!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e812055-75fc-4746-a006-6a849abc4bed_1408x736.png 424w, https://substackcdn.com/image/fetch/$s_!MI4y!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e812055-75fc-4746-a006-6a849abc4bed_1408x736.png 848w, https://substackcdn.com/image/fetch/$s_!MI4y!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e812055-75fc-4746-a006-6a849abc4bed_1408x736.png 1272w, https://substackcdn.com/image/fetch/$s_!MI4y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e812055-75fc-4746-a006-6a849abc4bed_1408x736.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.cengizhan.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2><strong>What Is This Thing?</strong></h2><p>Prompt Coach is a Claude Code skill&#8212;basically a markdown file that teaches Claude how to read and analyze your session logs. Once installed, you can ask Claude natural questions like:</p><ul><li><p>&#8220;How much have I spent on tokens this month?&#8221;</p></li><li><p>&#8220;Analyze my prompt quality from last week&#8221;</p></li><li><p>&#8220;Which tools do I use most?&#8221;</p></li><li><p>&#8220;When am I most productive?&#8221;</p></li></ul><p>And it responds with detailed analysis, scores your prompts against official Anthropic best practices, and gives you actionable recommendations.</p><p>No external services. No API calls. Just Claude reading your own logs and telling you where you&#8217;re messing up.</p><h2><strong>The Problem with Adapting to AI-Assisted Development</strong></h2><p>Working with AI tools is a skill. A weird, new skill that nobody taught us.</p><p>We know how to write good code. We know how to use git. We know debugging. But prompting? Using tools efficiently? Understanding when you&#8217;re being too vague vs. too specific?</p><p>Most people just... wing it. They don&#8217;t know if they&#8217;re good at it or not. They don&#8217;t know if that clarification Claude asked for was because their prompt was unclear, or if Claude was just being thorough.</p><p><strong>Prompt Coach quantifies it.</strong></p><p>It reads your session logs and calculates:</p><ul><li><p>How often your prompts need clarification (the clarification rate)</p></li><li><p>Which tools you use vs. 
which you should be using</p></li><li><p>How many iterations you need per task</p></li><li><p>When you&#8217;re most productive</p></li><li><p>Where you&#8217;re burning tokens unnecessarily</p></li></ul><p>Then it maps your patterns to <a href="https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview">official Anthropic prompt engineering guidelines</a> and tells you exactly what to improve.</p><h2><strong>What I Learned About My Claude Code Usage (On My Personal Projects)</strong></h2><p>Running Prompt Coach on my own 30 days of Claude Code sessions (133 sessions, 6,091 prompts across 17 projects) revealed some eye-opening patterns:</p><h3><strong>1. I&#8217;m Burning Through My Usage Limits with Opus</strong></h3><p><strong>The pattern:</strong> 17% of my API calls use Opus 4.1, but they account for <strong>47% of my token consumption </strong>($136.74 equivalent out of $291.30 calculated usage).</p><p>That&#8217;s 5x more expensive per token than Sonnet. Most of those Opus calls? Totally unnecessary. Code reviews, documentation, refactoring&#8212;Sonnet handles all of these perfectly fine at 1/5th the token cost.</p><p><strong>Reality check:</strong> I&#8217;m on a Claude Code subscription, so I&#8217;m not actually saving money here. But I <em>am</em> hitting my usage limits faster by choosing Opus unnecessarily. If I were on pay-per-token pricing, this would cost me <strong>$80-100/month extra</strong>.</p><h3><strong>2. Prompt Quality: Good, But Room for Improvement</strong></h3><p><strong>Average prompt score: 7.61/10 (Very Good)</strong></p><ul><li><p>54% of my prompts scored 8-10/10 (excellent)</p></li><li><p>44% scored 5-7/10 (good)</p></li><li><p>1.4% scored 0-4/10 (needs work)</p></li></ul><p>The excellent prompts? Context-rich commands like this one. Efficient communication.</p><p><code>@agent-youtube-transcript-analyzer what was the ironman metaphor karpathy gave in this video? 
I love it but forgot how it was exactly, quote him directly and explain? https://www.youtube.com/watch?v=&lt;VIDEO_ID&gt;</code></p><p>The bad ones? Extremely brief standalone prompts lacking context: &#8220;run&#8221;, &#8220;ok&#8221;, &#8220;where&#8221;, &#8220;delete&#8221;&#8212;all triggering unnecessary clarification rounds.</p><p><strong>The opportunity:</strong> 88 low-scoring prompts that could&#8217;ve been clearer. Each unclear prompt costs ~2 minutes in clarification rounds. That adds up.</p><h3><strong>3. My Cache Game is Strong</strong></h3><p><strong>99.9% cache hit rate.</strong> Since I&#8217;m on a subscription, this means faster responses and making the most of my usage limits. (If I were on pay-per-token pricing, this would&#8217;ve saved <strong>$806.79</strong> over 30 days.)</p><p>The secret? Focused sessions on single tasks. No excessive project hopping. Work in uninterrupted blocks where Claude can keep your context warm.</p><p>When cache is hot, responses are faster. When you context-switch constantly, you pay the cold cache penalty with slower first responses.</p><div><hr></div><p>The data doesn&#8217;t lie. Three clear patterns emerged: <strong>I&#8217;m burning through usage limits with Opus</strong> (47% of token consumption for 17% of calls), <strong>my prompt quality has room for improvement</strong> (88 unclear prompts causing extra clarification rounds), and <strong>my cache efficiency is excellent</strong> (99.9% hit rate, keeping responses fast).</p><p>The analysis also flagged context switching across 17 projects as a productivity drain. But here&#8217;s the contradiction: if my cache hit rate is 99.9%, I&#8217;m clearly <em>not</em> losing productivity to context switching. 
The skill is counting every folder I open with Claude Code as a &#8220;project switch&#8221;&#8212;including quick 2-minute productivity tasks like &#8220;rename these files&#8221; or &#8220;explain this error message.&#8221;</p><p>I use Claude Code both for deep coding work <em>and</em> as a general productivity tool throughout the day. Those small quick-hit uses shouldn&#8217;t count as context switches. This is actually a limitation of the current skill&#8212;it needs to filter out projects where I spent less than, say, 10 minutes. I might update it to capture this distinction.</p><p>If you&#8217;re also curious about how you use Claude Code, keep reading. (Or <a href="https://www.cengizhan.com/p/claude-code-prompt-coach-skill-to#%C2%A7how-can-you-start-using-this">jump straight</a> to installation if you want to try it now.)</p><h2><strong>What It Actually Shows You</strong></h2><h3><strong>1. Prompt Quality Analysis</strong></h3><p>The killer feature. It scores your prompts on four dimensions:</p><ul><li><p><strong>Clarity</strong>: How clear and unambiguous is the request?</p></li><li><p><strong>Specificity</strong>: Does it include file paths, error messages, context?</p></li><li><p><strong>Actionability</strong>: Can Claude act immediately or does it need clarification?</p></li><li><p><strong>Scope</strong>: Is the task appropriately sized and focused?</p></li></ul><p>The analysis is context-aware&#8212;it recognizes when brief prompts like &#8220;git commit&#8221; or &#8220;run tests&#8221; are actually <em>excellent</em> because Claude has environmental context. It distinguishes between efficient communication and vague requests.</p><p>Then it gives you a breakdown:</p><pre><code><code>&#128221; Prompt Quality Analysis

Total prompts: 99
Context-aware analysis: 99 prompts categorized
Average prompt score: 7.2/10 (Very Good!)

&#9989; Context-Rich Brief Prompts Identified: 18 (18%)
Examples: &#8220;git commit&#8221;, &#8220;yes&#8221;, &#8220;1&#8221;, &#8220;v&#8221;, &#8220;clear&#8221;.
These score 8-10/10 - excellent use of environmental context!

&#128202; Prompt Category Breakdown:
- Excellent (8-10): 71 prompts (72%) - Context-rich OR detailed
- Good (5-7): 12 prompts (12%) - Adequate information
- Needs Work (3-4): 13 prompts (13%) - Brief AND low context
- Poor (0-2): 3 prompts (3%)

Clarifications needed: 13 (13%)

&#128681; Most Common Issues (context-poor prompts only):
1. Missing URLs when referencing videos: 5 prompts
2. Formatting errors from rushed typing: 4 prompts
3. Ambiguous pronouns without clear referents: 3 prompts

&#128308; Real Examples from Your Logs:

**Example 1: Missing URL Reference**
&#10060; Your prompt: &#8220;get the english transcript for this video&#8221;
&#129300; Problem: Which video? No URL provided
&#9989; Better: &#8220;get the english transcript for https://www.youtube.com/watch?v=&lt;VIDEO_ID&gt;&#8221;
&#128201; Cost: +1 minute clarification needed

**Example 2: Context-Rich Brief Prompt** &#9989;
&#9989; Your prompt: &#8220;git commit&#8221;
&#128161; Claude used git diff to create perfect commit message
&#9889; Time saved: ~2 minutes by trusting Claude&#8217;s context awareness

**Example 3: Formatting Error**
&#10060; Your prompt: &#8220;test our fetcher for this videohttps://www.youtube.com/watch?v=...&#8221;
&#129300; Problem: Missing space, vague &#8220;our fetcher&#8221;
&#9989; Better: &#8220;test the YouTube transcript fetcher with this video: https://...&#8221;
&#128201; Cost: +30 seconds parsing issues
</code></code></pre><p>Of course, it&#8217;s not only &#8220;git commit&#8221; and other short prompts that counted as excellent. I added a context-awareness check to the skill so that it scores these short prompts as fine prompts. See the <a href="https://github.com/hancengiz/claude-code-prompt-coach-skill/blob/main/docs/prompt-quality-analysis-report-public.md#excellent-prompts-8-1010-71-prompts-72">full list</a> in my recent report. Also, please share your findings in the comments section on this post.<br><br>It&#8217;s like having a prompt engineering coach that watched every conversation you&#8217;ve ever had with Claude and graded you on it.</p><h3><strong>2. Token Usage &amp; Cache Efficiency</strong></h3><p>Claude Code uses prompt caching on Anthropic&#8217;s servers&#8212;when you send context (system instructions, files you&#8217;ve read, conversation history), it gets cached for ~5 minutes. Future requests reuse that cache instead of reprocessing everything.</p><p><strong>What this means:</strong></p><ul><li><p>Pay-per-token users: Cached tokens cost 10x less ($0.30 vs $3.00 per million tokens)</p></li><li><p>Subscription users: Faster responses, less server processing time needed</p></li><li><p>Everyone: More context available, better session continuity</p></li></ul><p><strong>Why it matters:</strong> Staying focused in one project keeps your cache hot. Switching projects = cold cache = slower first response.</p><pre><code><code>&#128202; Token Usage Analysis (18 days)

Total Cost: $287.03 (matches ccusage within 0.4%)

By Model:
SONNET-4.5 (3,703 calls, 81.4%)
  Input:        191,981 ($0.58)
  Output:       145,676 ($2.19)
  Cache writes:  20.4M  ($76.40)
  Cache reads:   243.2M ($72.96)
  Subtotal: $152.12 (53.0% of cost)

OPUS-4.1 (768 calls, 16.9%) &#9888;&#65039; 5x more expensive!
  Input:         3,175 ($0.05)
  Output:       30,084 ($2.26)
  Cache writes:   2.5M ($46.53)
  Cache reads:   57.2M ($85.74)
  Subtotal: $134.57 (46.9% of cost)

HAIKU-4.5 (78 calls, 1.7%)
  Subtotal: $0.34 (0.1% of cost)

&#128203; Deduplication: 6,508 duplicates removed (14.7%)
&#9889; Cache efficiency: 92.8% hit rate, saving $1,428.88

&#128161; Key insight: Opus is 16.9% of calls but 46.9% of cost.
Each Opus call costs 4.3x more than Sonnet.
</code></code></pre><p>The analysis now uses <strong>model-specific pricing</strong> that matches the popular <code>ccusage</code> tool within 0.4% (287.03 <em>vs </em>288.13). The deduplication logic filters out streaming response duplicates to show actual Anthropic billing.</p><p><em>Verification: The token cost calculation is now consistent with ccusage. The 0.4% difference ($1.10) comes from deduplication and model-specific pricing&#8212;both tools now apply identical strategies. (The difference is probably just a timing issue between my runs.)</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yg7Q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1aff0c6-249e-4da9-b778-e5ccb9712b5d_2044x1356.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!yg7Q!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1aff0c6-249e-4da9-b778-e5ccb9712b5d_2044x1356.png 424w, https://substackcdn.com/image/fetch/$s_!yg7Q!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1aff0c6-249e-4da9-b778-e5ccb9712b5d_2044x1356.png 848w, https://substackcdn.com/image/fetch/$s_!yg7Q!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1aff0c6-249e-4da9-b778-e5ccb9712b5d_2044x1356.png 1272w, https://substackcdn.com/image/fetch/$s_!yg7Q!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1aff0c6-249e-4da9-b778-e5ccb9712b5d_2044x1356.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!yg7Q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1aff0c6-249e-4da9-b778-e5ccb9712b5d_2044x1356.png" width="1456" height="966" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d1aff0c6-249e-4da9-b778-e5ccb9712b5d_2044x1356.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:966,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:329029,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.cengizhan.com/i/178417033?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1aff0c6-249e-4da9-b778-e5ccb9712b5d_2044x1356.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!yg7Q!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1aff0c6-249e-4da9-b778-e5ccb9712b5d_2044x1356.png 424w, https://substackcdn.com/image/fetch/$s_!yg7Q!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1aff0c6-249e-4da9-b778-e5ccb9712b5d_2044x1356.png 848w, https://substackcdn.com/image/fetch/$s_!yg7Q!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1aff0c6-249e-4da9-b778-e5ccb9712b5d_2044x1356.png 1272w, https://substackcdn.com/image/fetch/$s_!yg7Q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1aff0c6-249e-4da9-b778-e5ccb9712b5d_2044x1356.png 1456w" 
sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>For CLI users:</strong> You can also run <code>npx ccusage</code> for quick token cost checking from the command line. Prompt Coach provides the same accuracy with additional context like model breakdowns, cache optimization insights, and recommendations.</p><h3><strong>3. Tool Usage Patterns</strong></h3><p>Claude Code has powerful tools&#8212;both built-in and MCP (Model Context Protocol) servers that extend its capabilities.</p><pre><code><code>&#128736;&#65039; Tool Usage Patterns (Last 30 Days)

Built-in Claude Code Tools:
&#9492;&#9472; Total: 375 uses (97.4%)
   &#9500;&#9472; Bash:       123 uses (32.0%)
   &#9500;&#9472; Edit:        82 uses (21.3%)
   &#9500;&#9472; Read:        67 uses (17.4%)
   &#9500;&#9472; Grep:        23 uses (6.0%)
   &#9500;&#9472; WebSearch:   13 uses (3.4%)
   &#9492;&#9472; Others:      67 uses (17.4%)

&#127775; MCP &amp; 3rd Party Tools:
&#9492;&#9472; youtube-transcript: 10 uses (2.6%)

Total tool calls: 385
MCP adoption: 2.6%

&#128161; Insights:

&#9989; Excellent editing practices: 10:1 Edit-to-Write ratio
   &#8594; You&#8217;re modifying existing files, not creating unnecessary new ones

&#9989; Read-before-edit discipline: 25 instances
   &#8594; You consistently review code before changing it

&#9888;&#65039;  MCP adoption is low (2.6%)
   &#8594; Only using 1 MCP server (youtube-transcript)
   &#8594; Huge opportunity: 50+ MCP servers available
   &#8594; Could automate browser tasks, GitHub workflows, PDF analysis

&#9888;&#65039;  Bash chaining: 74 instances of Bash &#8594; Bash
   &#8594; Try batching with &amp;&amp; (e.g., &#8220;git add . &amp;&amp; git commit &amp;&amp; git push&#8221;)
   &#8594; 30% efficiency gain possible
</code></code></pre><p><strong>Full disclosure:</strong> This 2.6% MCP adoption is hilariously misleading. This data is only from my personal laptop. On my work machine, I&#8217;m actually using the MCPs I built (<a href="https://www.cengizhan.com/p/one-more-piece-built-adding-youtube">YouTube Transcript MCP</a> and <a href="https://www.cengizhan.com/p/vibe-coded-a-pdf-reader-mcp-tool">PDF Reader MCP</a>) plus context7, playwright, browserbase, and a bunch of others daily.</p><p>So the real insight here: <strong>I should probably use my own tools on both machines.</strong> &#129318;&#8205;&#9794;&#65039;</p><h3><strong>4. Productivity Time Patterns</strong></h3><p>When are you actually good at this?</p><pre><code><code>&#128336; Productivity Time Patterns (Last 30 Days)

Peak productivity hours:
1. 14:00-17:00 [============] (32 sessions, 2.1 avg iterations)
2. 09:00-12:00 [========]     (24 sessions, 2.8 avg iterations)
3. 20:00-23:00 [====]         (15 sessions, 4.2 avg iterations)

Most efficient: 14:00-17:00 (afternoon)
- 40% fewer iterations than average
- 25% faster completion time

Least efficient: 20:00-23:00 (evening)
- 50% more iterations needed
- More clarification requests

&#128161; Recommendation: Schedule complex tasks between 2-5pm on Tue-Thu
</code></code></pre><p>As a night owl, this hurts to see. But the data doesn&#8217;t lie&#8212;I&#8217;m objectively worse at coding after 8pm. &#129417;&#128148;</p><h2><strong>How Can You Start Using This?</strong></h2><p>Dead simple. Clone the repo and run the install script:</p><pre><code><code>git clone https://github.com/hancengiz/claude-code-prompt-coach-skill
cd claude-code-prompt-coach-skill
./install.sh
</code></code></pre><p>It copies the skill to <code>~/.claude/skills/prompt-coach/</code> and verifies installation.</p><p>Restart Claude Code. That&#8217;s it.</p><p><strong>First thing to try:</strong></p><blockquote><p><strong>&#8220;Give me a general analysis of my Claude Code usage&#8221;</strong></p></blockquote><p>This gives you a comprehensive overview of everything&#8212;prompt quality, token costs, tool usage, productivity patterns, and personalized recommendations. <strong>Perfect for getting started.</strong></p><p><a href="https://github.com/hancengiz/claude-code-prompt-coach-skill/blob/main/docs/general-analysis-report-public.md">See example general analysis report &#8594;</a></p><p><strong>Other analysis commands:</strong></p><blockquote><p><strong>&#8220;How much have I spent on tokens this month?&#8221;</strong></p><p><strong>&#8220;Analyze my prompt quality from last week&#8221;</strong></p><p><strong>&#8220;Which tools do I use most?&#8221;</strong></p><p><strong>&#8220;When am I most productive?&#8221;</strong></p></blockquote><p>Claude will automatically read your logs and respond with detailed analysis.</p><h2><strong>How Does It Work?</strong></h2><p>Claude Code logs every session to <code>~/.claude/projects/</code> as JSONL files. 
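As a rough illustration of what reading those logs looks like, here is a hedged Python sketch that tallies output tokens per model. The field names (<code>message</code>, <code>usage</code>, <code>model</code>, <code>output_tokens</code>) are assumptions about the log schema based on this description and may differ between Claude Code versions; the skill itself does this kind of parsing through Claude's own tools rather than a fixed script.

```python
import json
from collections import Counter
from pathlib import Path

def tally_output_tokens(log_dir: str = "~/.claude/projects") -> Counter:
    """Sum output tokens per model across Claude Code JSONL session logs.

    Assumes each log line is a JSON object whose "message" key carries
    "model" and "usage" fields; skips lines that do not match.
    """
    totals: Counter = Counter()
    for log_file in Path(log_dir).expanduser().rglob("*.jsonl"):
        for line in log_file.read_text().splitlines():
            line = line.strip()
            if not line:
                continue
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                continue  # tolerate partial or corrupt lines
            if not isinstance(event, dict):
                continue
            message = event.get("message")
            if not isinstance(message, dict):
                continue  # not an assistant-response event
            usage = message.get("usage")
            if not isinstance(usage, dict):
                continue
            totals[message.get("model", "unknown")] += usage.get("output_tokens", 0)
    return totals
```

The same pattern extends to per-model cost, cache reads vs. writes, or tool-call counts by picking different fields out of each event.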
Each line is a JSON object representing one event in the conversation:</p><ul><li><p>User prompts</p></li><li><p>Assistant responses (with token usage)</p></li><li><p>Tool calls (Read, Edit, Bash, etc.)</p></li><li><p>Timestamps</p></li><li><p>Working directory</p></li></ul><p>Prompt Coach is a skill file (<code>Skill.md</code>) that teaches Claude:</p><ul><li><p>Where the logs are stored</p></li><li><p>How to parse the JSONL format</p></li><li><p>Official Anthropic prompt engineering best practices</p></li><li><p>How to score prompts (the scoring rubric)</p></li><li><p>What metrics to calculate</p></li><li><p>How to present insights</p></li></ul><p>When you ask Claude to analyze your prompts, it uses its existing tools (Read, Bash, Grep) to:</p><ol><li><p>Find your session logs</p></li><li><p>Parse the JSON data</p></li><li><p>Score your prompts against official guidelines</p></li><li><p>Calculate metrics</p></li><li><p>Generate personalized recommendations</p></li></ol><p><strong>No external dependencies. No servers. No data leaving your machine.</strong></p><p>It&#8217;s just Claude being really good at reading structured data and applying a scoring rubric.</p><h2><strong>The &#8220;Golden Rule&#8221; of Prompt Engineering</strong></h2><p>The skill is trained on official Anthropic guidelines, but the most powerful one is this:</p><blockquote><p><strong>&#8220;Show your prompt to a colleague with minimal context. If they&#8217;re confused, Claude will likely be too.&#8221;</strong></p></blockquote><p>This one rule would&#8217;ve saved me dozens of clarification cycles.</p><blockquote><p>Instead of: &#8220;fix the bug&#8221;</p><p>Try: &#8220;fix the authentication error in src/auth/login.ts where JWT token validation fails with a 401 response&#8221;</p></blockquote><p>The difference is massive. One needs three rounds of back-and-forth. The other gets fixed immediately.</p><h2><strong>What I Learned Building This</strong></h2><h3><strong>1. 
Claude Code&#8217;s logging is incredibly detailed</strong></h3><p>Every token. Every tool call. Every timestamp. It&#8217;s all there. This opens up possibilities for all kinds of analysis&#8212;time tracking, collaboration patterns, learning curves, custom benchmarks.</p><h3><strong>2. Skills are wildly powerful</strong></h3><p>A single markdown file can teach Claude completely new capabilities. No code. No APIs. Just instructions in natural language.</p><p>The skill is 1,636 lines of markdown that explains:</p><ul><li><p>Where logs are stored</p></li><li><p>How to parse them</p></li><li><p>What patterns to look for</p></li><li><p>How to score prompts</p></li><li><p>How to present insights</p></li></ul><p>And Claude just... does it. Perfectly.</p><h3><strong>3. You can measure and improve prompt quality</strong></h3><p>Looking at real data from the YouTube Transcript MCP project:</p><ul><li><p>72% of prompts scored 8-10/10 (excellent)</p></li><li><p>Only 13% needed clarification</p></li><li><p>18% were context-rich brief prompts (efficient communication)</p></li><li><p>Time saved by trusting context: ~45 minutes</p></li><li><p>Time lost to unclear prompts: ~28 minutes</p></li></ul><p>The difference between a 3/10 prompt and a 9/10 prompt? Specifics. File paths. URLs. Error messages. Success criteria.</p><p>Being specific saves time. Being specific saves money.</p><h3><strong>4. 
Context engineering with subagents</strong></h3><p>For complex analysis like prompt quality scoring, the skill uses <strong>subagents</strong>&#8212;launching a specialized agent to handle the heavy lifting.</p><p><strong>Why this matters:</strong></p><ul><li><p>Analyzing 100+ prompts across multiple sessions would blow up the main context window</p></li><li><p>Subagents get their own fresh context, optimized for one task</p></li><li><p>The main Claude session stays clean and focused</p></li><li><p>Results come back as a structured report</p></li></ul><p><strong>The pattern:</strong></p><blockquote><p>Main Claude &#8594; Launches Task agent with specific instructions</p><p>Task agent &#8594; Reads logs, analyzes patterns, scores prompts</p><p>Task agent &#8594; Generates comprehensive report</p><p>Main Claude &#8594; Presents results to you</p></blockquote><p>This is <strong>context engineering in action</strong>&#8212;managing LLM context windows by delegating complex tasks to specialized agents instead of trying to do everything in one bloated conversation.</p><p><strong>About the skill size:</strong> Yes, at 1,636 lines, the skill file itself consumes ~20% of the context window when loaded. But this works fine because:</p><ol><li><p><strong>Subagents get fresh context</strong> - When the Task agent launches, it gets its own context budget</p></li><li><p><strong>Claude Code generates temp files</strong> - The subagent writes Python scripts to <code>/tmp</code> for log parsing, keeping analysis out of the main context</p></li><li><p><strong>Only results return</strong> - The comprehensive report comes back, not the entire log analysis process</p></li></ol><p><em>After the Task agent completes the analysis, the result is shown with context usage at the bottom: 22.9% for Claude Sonnet 4.5, 20.1% cached. 
The skill itself is 21.5% (~70.6k tokens), and the analysis report adds minimal overhead.</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!YbzI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fd06f58-5029-4fb8-8917-6c5086d4fe07_2052x1514.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!YbzI!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fd06f58-5029-4fb8-8917-6c5086d4fe07_2052x1514.png 424w, https://substackcdn.com/image/fetch/$s_!YbzI!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fd06f58-5029-4fb8-8917-6c5086d4fe07_2052x1514.png 848w, https://substackcdn.com/image/fetch/$s_!YbzI!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fd06f58-5029-4fb8-8917-6c5086d4fe07_2052x1514.png 1272w, https://substackcdn.com/image/fetch/$s_!YbzI!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fd06f58-5029-4fb8-8917-6c5086d4fe07_2052x1514.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!YbzI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fd06f58-5029-4fb8-8917-6c5086d4fe07_2052x1514.png" width="1456" height="1074" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0fd06f58-5029-4fb8-8917-6c5086d4fe07_2052x1514.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1074,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:393517,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.cengizhan.com/i/178417033?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fd06f58-5029-4fb8-8917-6c5086d4fe07_2052x1514.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!YbzI!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fd06f58-5029-4fb8-8917-6c5086d4fe07_2052x1514.png 424w, https://substackcdn.com/image/fetch/$s_!YbzI!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fd06f58-5029-4fb8-8917-6c5086d4fe07_2052x1514.png 848w, https://substackcdn.com/image/fetch/$s_!YbzI!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fd06f58-5029-4fb8-8917-6c5086d4fe07_2052x1514.png 1272w, https://substackcdn.com/image/fetch/$s_!YbzI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fd06f58-5029-4fb8-8917-6c5086d4fe07_2052x1514.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>Inside the subagent&#8217;s context: Processing 168 session files from the ~/.claude/projects/ directory. 
The agent applies model-specific pricing (Sonnet 4.5, Opus 4.1, Haiku 4.5), deduplication logic (removing 6,508 duplicate streaming responses), and cache efficiency analysis&#8212;all without bloating the main conversation.</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OpI1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe18d7efc-42f7-4fa3-a59b-7d16a7bbb8c2_2090x1942.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!OpI1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe18d7efc-42f7-4fa3-a59b-7d16a7bbb8c2_2090x1942.png 424w, https://substackcdn.com/image/fetch/$s_!OpI1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe18d7efc-42f7-4fa3-a59b-7d16a7bbb8c2_2090x1942.png 848w, https://substackcdn.com/image/fetch/$s_!OpI1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe18d7efc-42f7-4fa3-a59b-7d16a7bbb8c2_2090x1942.png 1272w, https://substackcdn.com/image/fetch/$s_!OpI1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe18d7efc-42f7-4fa3-a59b-7d16a7bbb8c2_2090x1942.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OpI1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe18d7efc-42f7-4fa3-a59b-7d16a7bbb8c2_2090x1942.png" width="1456" height="1353" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e18d7efc-42f7-4fa3-a59b-7d16a7bbb8c2_2090x1942.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1353,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:468271,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.cengizhan.com/i/178417033?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe18d7efc-42f7-4fa3-a59b-7d16a7bbb8c2_2090x1942.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!OpI1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe18d7efc-42f7-4fa3-a59b-7d16a7bbb8c2_2090x1942.png 424w, https://substackcdn.com/image/fetch/$s_!OpI1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe18d7efc-42f7-4fa3-a59b-7d16a7bbb8c2_2090x1942.png 848w, https://substackcdn.com/image/fetch/$s_!OpI1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe18d7efc-42f7-4fa3-a59b-7d16a7bbb8c2_2090x1942.png 1272w, https://substackcdn.com/image/fetch/$s_!OpI1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe18d7efc-42f7-4fa3-a59b-7d16a7bbb8c2_2090x1942.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>If context consumption becomes an issue, the skill can be refactored to be more compact. But for now, the subagent pattern handles it elegantly.</p><h3><strong>5. Data about yourself is fascinating</strong></h3><p>Seeing your productivity patterns quantified is weirdly motivating. 
Knowing that I&#8217;m 40% more efficient between 2-5pm changes how I schedule my day.</p><p>Knowing that I use Grep 10x less than I should changes how I approach searching code.</p><p>It&#8217;s like a fitness tracker, but for coding with AI.</p><h2><strong>If You Want This</strong></h2><p>The project is open source: <a href="https://github.com/hancengiz/claude-code-prompt-coach-skill">github.com/hancengiz/claude-code-prompt-coach-skill</a></p><p>Key files:</p><ul><li><p><code>Skill.md</code> - The skill that teaches Claude how to analyze logs (1,636 lines of prompt engineering best practices)</p></li><li><p><code>install.sh</code> - One-command installation script</p></li><li><p>Sample analysis reports in <code>/docs/</code>:</p><ul><li><p><code>general-analysis-report-public.md</code> - Complete overview with all metrics combined</p></li><li><p><code>prompt-quality-analysis-report-public.md</code> - Comprehensive prompt quality analysis</p></li><li><p><code>token-usage-analysis-30days-public.md</code> - Token cost breakdown</p></li><li><p><code>tool-usage-analysis-30days-public.md</code> - Tool usage patterns</p></li><li><p><code>productivity_time_patterns_report-public.md</code> - Time-based productivity analysis</p></li></ul></li></ul><p>Installation takes 30 seconds. Then just ask Claude to analyze your usage.</p><h2><strong>This Should Be Built Into Claude Code</strong></h2><p>Let&#8217;s be honest&#8212;<strong>prompt quality analysis should be a native Claude Code feature.</strong></p><p>Anthropic already has server-side telemetry. They know which prompts lead to clarification requests. They know which tool usage patterns are efficient. They know when developers are struggling vs. crushing it.</p><p><strong>They could surface this in real-time:</strong></p><ul><li><p>&#8220;Your last 3 prompts needed clarification. Try including file paths.&#8221;</p></li><li><p>&#8220;You&#8217;re using Opus 5x more than similar developers. 
Consider Sonnet for most tasks.&#8221;</p></li><li><p>&#8220;Cache hit rate dropped to 45% today. Are you context-switching between projects?&#8221;</p></li><li><p>&#8220;Your afternoon sessions have 40% fewer iterations. Schedule complex work then.&#8221;</p></li></ul><p>The data exists. The value is obvious. Developers want to improve.</p><p><strong>Imagine Claude Code with built-in prompt coaching:</strong></p><pre><code><code>claude&gt; fix the bug

&#128161; Prompt Coach: This prompt is vague. Consider specifying:
   - Which file?
   - What bug? (error message, behavior)
   - Expected vs actual behavior?

   Better example: &#8220;fix the TypeError in src/auth.ts line 45
   where user.id is undefined during logout&#8221;
</code></code></pre><p>Instead of every developer needing to build their own analysis tool (or ignore the problem entirely), make it part of the product. Use your server-side insights to guide developers toward better practices.</p><p><strong>Hey Anthropic team &#128075;</strong> - you&#8217;re sitting on incredibly valuable behavioral data that could accelerate the entire AI-native development learning curve. Surface it. Guide us. Make prompt engineering a measurable, improvable skill instead of tribal knowledge.</p><p>Until then, I&#8217;ll keep using this skill. But it really should be built-in.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.cengizhan.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2><strong>What&#8217;s Next? You Tell Me</strong></h2><p>The current version already changed how I work with Claude Code. But there&#8217;s so much more this could do.</p><p><strong>I want to hear from you:</strong></p><ul><li><p><strong>What insights would be valuable to you?</strong> Git commit patterns? Learning curve over time? 
Team collaboration metrics?</p></li><li><p><strong>What&#8217;s missing from the analysis?</strong> What questions about your workflow can&#8217;t you answer right now?</p></li><li><p><strong>What patterns have you discovered?</strong> Share your analysis results, interesting findings, or surprising insights</p></li></ul><p><strong>Ways to contribute:</strong></p><ul><li><p><strong>Open an issue</strong> on <a href="https://github.com/hancengiz/claude-code-prompt-coach-skill/issues">GitHub</a> with feature ideas or bugs</p></li><li><p><strong>Send a pull request</strong> if you&#8217;ve built something cool (new analysis types, better scoring, visualization tools)</p></li><li><p><strong>Comment below</strong> with what you&#8217;d like to see</p></li></ul><p>Some ideas I&#8217;m considering:</p><ul><li><p>Git commit pattern analysis (frequency, message quality)</p></li><li><p>Language/framework usage tracking</p></li><li><p>Learning curve visualization (are you improving over time?)</p></li><li><p>Team collaboration patterns (for shared projects)</p></li><li><p>Custom benchmarks (compare to your own history)</p></li></ul><p>But honestly? The best features will come from people actually using this and discovering what they need.</p><p><strong>Let&#8217;s build this together.</strong> Share your usage, your ideas, and your analysis. 
Let&#8217;s figure out what AI-native development patterns actually look like in practice.</p><div><hr></div><p><strong>Try it yourself:</strong> <a href="https://github.com/hancengiz/claude-code-prompt-coach-skill">github.com/hancengiz/claude-code-prompt-coach-skill</a></p><p><strong>See sample analysis reports:</strong></p><ul><li><p><a href="https://github.com/hancengiz/claude-code-prompt-coach-skill/blob/main/docs/prompt-quality-analysis-report-public.md">Prompt Quality Analysis</a> - Real analysis with context-aware scoring</p></li><li><p><a href="https://github.com/hancengiz/claude-code-prompt-coach-skill/blob/main/docs/token-usage-analysis-30days-public.md">Token Usage Analysis</a> - Cost breakdown and cache efficiency</p></li><li><p><a href="https://github.com/hancengiz/claude-code-prompt-coach-skill/blob/main/docs/tool-usage-analysis-30days-public.md">Tool Usage Patterns</a> - Which tools you use and recommendations</p></li></ul><p><strong>Learn prompt engineering:</strong> <a href="https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview">Anthropic&#8217;s Official Guide</a></p>]]></content:encoded></item><item><title><![CDATA[One More Piece Built: Adding YouTube Analysis to My Learning Iron Man Suit]]></title><description><![CDATA[Look, I have a problem. There are too many videos to watch, too little time.]]></description><link>https://www.cengizhan.com/p/one-more-piece-built-adding-youtube</link><guid isPermaLink="false">https://www.cengizhan.com/p/one-more-piece-built-adding-youtube</guid><dc:creator><![CDATA[Cengiz Han]]></dc:creator><pubDate>Tue, 04 Nov 2025 08:47:14 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!STE8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5caf8b40-357a-41e7-8d82-dcc1559e8dd5_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Look, I have a problem. 
There are too many videos to watch, too little time, and my YouTube &#8220;Watch Later&#8221; list has become a graveyard of good intentions. You know how it goes:</p><ul><li><p>Someone shares a 2-hour podcast about AI agents</p></li><li><p>Hacker News links to a 90-minute technical talk</p></li><li><p>A friend recommends a 45-minute tutorial on some framework</p></li><li><p>That conference keynote everyone&#8217;s talking about</p></li></ul><p>And I&#8217;m supposed to... what? Watch all of them? At 1.5x speed while taking notes? Please.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.cengizhan.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>So I did what any reasonable developer would do: <strong>I turned Claude Code into my learning assistant.</strong> And honestly, it&#8217;s kind of changed everything.</p><h2><strong>The PDF Reader Was Just the Beginning</strong></h2><p>A few weeks ago, I <a href="https://www.cengizhan.com/p/vibe-coded-a-pdf-reader-mcp-tool">built a PDF reader MCP tool</a> because I was tired of wasting context tokens on big consultancy reports. The idea was simple: let Claude read the PDF in its own context, extract the insights, and give me just the good stuff.</p><p>It worked beautifully.</p><p>Then I thought: &#8220;Wait... 
what about YouTube videos?&#8221;</p><h2><strong>The YouTube Problem</strong></h2><p>Here&#8217;s my typical YouTube workflow (or lack thereof):</p><p><strong>Before (The Chaos Era):</strong></p><ol><li><p>See an interesting video link</p></li><li><p>Add to &#8220;Watch Later&#8221; (population: 347 videos)</p></li><li><p>Never watch it</p></li><li><p>Feel vaguely guilty</p></li><li><p>Repeat</p></li></ol><p><strong>OR:</strong></p><ol><li><p>Actually watch the video</p></li><li><p>Spend 60 minutes watching a 60-minute video (shocking, I know)</p></li><li><p>Pause constantly to prompt ChatGPT about whatever technology or framework was just mentioned</p></li><li><p>Open 15 tabs researching that product</p></li><li><p>Go down a rabbit hole for 30 minutes</p></li><li><p>Forget I was even watching a video</p></li><li><p>Come back confused about what the video was about</p></li><li><p>Forget most of it within a week</p></li><li><p>Can&#8217;t remember which video had that one good quote</p></li><li><p>Rinse and repeat</p></li></ol><p>There had to be a better way.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!STE8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5caf8b40-357a-41e7-8d82-dcc1559e8dd5_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!STE8!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5caf8b40-357a-41e7-8d82-dcc1559e8dd5_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!STE8!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5caf8b40-357a-41e7-8d82-dcc1559e8dd5_1024x1024.png 848w, 
https://substackcdn.com/image/fetch/$s_!STE8!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5caf8b40-357a-41e7-8d82-dcc1559e8dd5_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!STE8!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5caf8b40-357a-41e7-8d82-dcc1559e8dd5_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!STE8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5caf8b40-357a-41e7-8d82-dcc1559e8dd5_1024x1024.png" width="1024" height="1024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5caf8b40-357a-41e7-8d82-dcc1559e8dd5_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1661342,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.cengizhan.com/i/177965397?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5caf8b40-357a-41e7-8d82-dcc1559e8dd5_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!STE8!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5caf8b40-357a-41e7-8d82-dcc1559e8dd5_1024x1024.png 424w, 
https://substackcdn.com/image/fetch/$s_!STE8!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5caf8b40-357a-41e7-8d82-dcc1559e8dd5_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!STE8!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5caf8b40-357a-41e7-8d82-dcc1559e8dd5_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!STE8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5caf8b40-357a-41e7-8d82-dcc1559e8dd5_1024x1024.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2><strong>Enter: YouTube Transcript MCP + Claude Code</strong></h2><p>So I built another tool. (Yeah, I know. &#129299;)</p><p>The <a href="https://github.com/hancengiz/youtube-transcript-mcp">YouTube Transcript MCP server</a> does exactly what you think it does: fetches YouTube video transcripts and lets Claude analyze them.</p><p>But here&#8217;s where it gets interesting.</p><h2><strong>My New Learning Workflow: The &#8220;Maybe I&#8217;ll Watch It&#8221; Filter</strong></h2><p>Now when someone sends me a video, I have a conversation with Claude Desktop or Claude Code that looks like this:</p><p><strong>Me:</strong></p><blockquote><p>&#8220;Should I watch this video? Do you think I will learn something that I don&#8217;t know? And what are the key learnings in this video?&#8221;</p></blockquote><p>Here&#8217;s what that actually looks like:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!UkSX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd7e4247-9920-4589-9793-33b94ea0bd60_1500x498.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!UkSX!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd7e4247-9920-4589-9793-33b94ea0bd60_1500x498.png 424w, https://substackcdn.com/image/fetch/$s_!UkSX!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd7e4247-9920-4589-9793-33b94ea0bd60_1500x498.png 848w, 
https://substackcdn.com/image/fetch/$s_!UkSX!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd7e4247-9920-4589-9793-33b94ea0bd60_1500x498.png 1272w, https://substackcdn.com/image/fetch/$s_!UkSX!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd7e4247-9920-4589-9793-33b94ea0bd60_1500x498.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!UkSX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd7e4247-9920-4589-9793-33b94ea0bd60_1500x498.png" width="1456" height="483" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/dd7e4247-9920-4589-9793-33b94ea0bd60_1500x498.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:483,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:119734,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.cengizhan.com/i/177965397?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd7e4247-9920-4589-9793-33b94ea0bd60_1500x498.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!UkSX!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd7e4247-9920-4589-9793-33b94ea0bd60_1500x498.png 424w, https://substackcdn.com/image/fetch/$s_!UkSX!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd7e4247-9920-4589-9793-33b94ea0bd60_1500x498.png 
848w, https://substackcdn.com/image/fetch/$s_!UkSX!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd7e4247-9920-4589-9793-33b94ea0bd60_1500x498.png 1272w, https://substackcdn.com/image/fetch/$s_!UkSX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd7e4247-9920-4589-9793-33b94ea0bd60_1500x498.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>Claude doesn&#8217;t just summarize the video.</strong> It evaluates what I already know against what&#8217;s in the 
video, then tells me what&#8217;s actually NEW or useful for ME specifically.</p><p>In this example:</p><blockquote><p>&#8220;Given your work with Claude Code CLI tools and terminal customization, you likely know the basics. But let me highlight what might be <strong>new or useful</strong>...&#8221;</p></blockquote><p><strong>Decision made in 30 seconds.</strong> Watch it, don&#8217;t watch it, or jump to the good parts.</p><p>It&#8217;s like having a hyper-intelligent friend who:</p><ul><li><p>Already watched everything</p></li><li><p>Knows what you already know</p></li><li><p>Can tell you if it&#8217;s worth your time</p></li><li><p>Highlights only what&#8217;s NEW to you</p></li></ul><p>This is next-level content triage.</p><p>The screenshot above is just the beginning. Claude doesn&#8217;t stop at &#8220;should you watch this?&#8221; - it goes deeper:</p><p><strong>After evaluating my existing knowledge</strong>, Claude continues with:</p><ul><li><p>Potentially new concepts I haven&#8217;t explored</p></li><li><p>Specific timestamps for topics worth my attention</p></li><li><p>Whether the video&#8217;s depth matches what I need (introductory vs. advanced)</p></li><li><p>Actionable next steps based on the content</p></li></ul><p>It&#8217;s not just content filtering. <strong>It&#8217;s personalized learning recommendations based on what I already know.</strong></p><p>And here&#8217;s the thing: Claude Code remembers our conversation history. So over time, it builds a model of what I know, what I&#8217;m interested in, and what&#8217;s worth my time.</p><p>It&#8217;s like having a learning coach who actually pays attention.</p><h2><strong>The &#8220;Extract Key Learnings&#8221; Superpower</strong></h2><p>But it gets better. 
Sometimes I use Claude to extract insights BEFORE I watch:</p><p><strong>Me:</strong></p><blockquote><p>&#8220;Get the key learnings from this Karpathy talk, especially anything about LLM limitations&#8221;</p></blockquote><p><strong>Claude:</strong></p><blockquote><p>&#8220;Here are the 7 key insights about LLM limitations:</p><ol><li><p><strong>Anterograde amnesia</strong>: LLMs don&#8217;t consolidate long-term memory...</p></li><li><p><strong>Jagged intelligence</strong>: Superhuman at some tasks, make errors no human would...</p></li><li><p><strong>Hallucinations</strong>: They&#8217;re &#8216;stochastic simulations of people spirits&#8217;...<br>[continues with structured insights]&#8221;</p></li></ol></blockquote><p>Now when I DO watch the video, I&#8217;m not passively consuming. I&#8217;m looking for:</p><ul><li><p>Nuance I missed in the transcript</p></li><li><p>Visual examples that add context</p></li><li><p>The &#8220;vibe&#8221; of how they explain things</p></li><li><p>Specific moments worth clipping</p></li></ul><p><strong>I&#8217;ve turned watching into active learning instead of passive consumption.</strong></p><h3><strong>Even Better: Ask Specific Questions About Videos</strong></h3><p>Here&#8217;s where it gets really powerful. Remember that one metaphor from a video you loved? Instead of scrubbing through trying to find it, just ask:</p><p><strong>Me:</strong></p><blockquote><p>&#8220;@agent-youtube-transcript-analyzer what was the ironman metaphor karpathy gave in this video? I love it but forgot how it was exactly, quote him directly and explain? 
in https://www.youtube.com/watch?v=LCEmiRjPEtQ&#8221;</p></blockquote><p>Here&#8217;s what that looks like:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pBXr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81d03018-ba33-42ea-932b-b1e8882d3847_1994x2100.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pBXr!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81d03018-ba33-42ea-932b-b1e8882d3847_1994x2100.png 424w, https://substackcdn.com/image/fetch/$s_!pBXr!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81d03018-ba33-42ea-932b-b1e8882d3847_1994x2100.png 848w, https://substackcdn.com/image/fetch/$s_!pBXr!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81d03018-ba33-42ea-932b-b1e8882d3847_1994x2100.png 1272w, https://substackcdn.com/image/fetch/$s_!pBXr!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81d03018-ba33-42ea-932b-b1e8882d3847_1994x2100.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!pBXr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81d03018-ba33-42ea-932b-b1e8882d3847_1994x2100.png" width="1456" height="1533" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/81d03018-ba33-42ea-932b-b1e8882d3847_1994x2100.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1533,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1243998,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.cengizhan.com/i/177965397?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81d03018-ba33-42ea-932b-b1e8882d3847_1994x2100.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!pBXr!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81d03018-ba33-42ea-932b-b1e8882d3847_1994x2100.png 424w, https://substackcdn.com/image/fetch/$s_!pBXr!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81d03018-ba33-42ea-932b-b1e8882d3847_1994x2100.png 848w, https://substackcdn.com/image/fetch/$s_!pBXr!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81d03018-ba33-42ea-932b-b1e8882d3847_1994x2100.png 1272w, https://substackcdn.com/image/fetch/$s_!pBXr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81d03018-ba33-42ea-932b-b1e8882d3847_1994x2100.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>The result?</strong> Claude finds the exact quote with timestamp (27:51-28:31) and gives me:</p><blockquote><p><strong>The Exact Quote:</strong></p><p>&#8220;One more kind of analogy that I always think through is the Iron Man suit... what I love about the Iron Man suit is that it&#8217;s both an augmentation and Tony Stark can drive it and it&#8217;s also an agent... this is the autonomy slider is we can build augmentations or we can build agents and we kind of want to do a bit of both.&#8221;</p><p><strong>He then adds:</strong></p><p>&#8220;But at this stage I would say working with fallible LLMs and so on... it&#8217;s less Iron Man robots and more Iron Man suits that you want to build. 
It&#8217;s less like building flashy demos of autonomous agents and more building partial autonomy products.&#8221;</p></blockquote><p><strong>This is huge.</strong> No more:</p><ul><li><p>Scrubbing through the video trying to find that moment</p></li><li><p>Pausing and writing down timestamps</p></li><li><p>Giving up and just saying &#8220;somewhere around the middle&#8221;</p></li></ul><p>Just ask. Get the exact quote. With timestamp. And explanation.</p><p>It&#8217;s like having Cmd+F for video content.</p><p><strong>Karpathy says build Iron Man suits, not Iron Man robots?</strong> Well, this MCP tool and workflow is my way of building my own Iron Man suit, one more piece built! &#129470;</p><p>(See what I did there? Meta.)</p><h2><strong>The Real Magic: Context Efficiency with Sub-Agents</strong></h2><p>Here&#8217;s the nerdy part (you can skip if you want, but it&#8217;s cool):</p><p>When Claude fetches a 60-minute video transcript, that&#8217;s about 20,000-30,000 tokens of text. If that goes into my main conversation, I can only analyze 2-3 videos before my context fills up.</p><p>But Claude Code has this feature called &#8220;sub-agents&#8221; - basically, temporary specialized agents that handle tasks in their own context and return only the results.</p><p>So now:</p><ul><li><p>Sub-agent fetches the transcript (in its own context)</p></li><li><p>Sub-agent analyzes the video</p></li><li><p>Sub-agent returns ONLY the insights (~2k tokens)</p></li><li><p>My main context stays clean</p></li></ul><p><strong>Result:</strong> I can analyze 10+ videos in one session, compare them, extract patterns, and still have room for a deep conversation about what I&#8217;m learning.</p><p>It&#8217;s like having unlimited working memory for learning.</p><p><strong>Want to create your own sub-agent for this MCP tool?</strong> Check out the <a href="https://github.com/hancengiz/youtube-transcript-mcp/blob/main/CLAUDE_CODE_AGENT_GUIDE.md">complete guide to setting up the 
youtube-transcript-analyzer agent</a> - includes full configuration, examples, and how to customize it for your workflow.</p><h2><strong>Real Use Cases (That Actually Changed How I Learn)</strong></h2><h3><strong>1. The Conference Sprint</strong></h3><p>Recently needed to catch up on AI agent developments. Instead of blocking out 8 hours:</p><pre><code><code>Me: &#8220;Analyze these 5 talks about AI agents and tell me:
     - What everyone agrees on
     - What they disagree about
     - What&#8217;s hype vs. reality&#8221;

Claude: *analyzes all 5 videos in parallel*

Result: 30-minute synthesis vs. 8 hours of watching
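</code></code></pre><p>The context math behind this kind of session can be sketched out. This is back-of-the-envelope, not a measurement: the 200k context window is approximate, and the reserved overhead for system prompt, tools, and chat history is a guess.</p>

```python
# Back-of-the-envelope context budgeting (all numbers illustrative).
CONTEXT_WINDOW = 200_000    # approximate total context, in tokens
RESERVED = 110_000          # guessed overhead: system prompt, tools, chat history
TRANSCRIPT_TOKENS = 30_000  # one 60-minute video transcript (20k-30k range)
INSIGHT_TOKENS = 2_000      # what a sub-agent returns to the main context

def videos_per_session(per_video_cost: int) -> int:
    """How many videos fit in the remaining budget at a given per-video cost."""
    return (CONTEXT_WINDOW - RESERVED) // per_video_cost

print(videos_per_session(TRANSCRIPT_TOKENS))  # 3  (raw transcripts in main context)
print(videos_per_session(INSIGHT_TOKENS))     # 45 (sub-agents return insights only)
```

<p>Same budget, an order of magnitude more videos, because the heavy transcripts never touch the main context.</p><pre><code><code>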
</code></code></pre><h3><strong>2. The &#8220;Should I Learn This Framework?&#8221; Decision</strong></h3><p>Friend: &#8220;You should learn OpenSpec! Watch this tutorial!&#8221;</p><pre><code><code>Me: &#8220;What are the key selling points of OpenSpec from this tutorial?
     How does it compare to other spec-driven development tools? Is it worth adopting?&#8221;

Claude: *analyzes tutorial*
        &#8220;Main innovations: [key features]...
         Worth learning if: [specific criteria]
         Stick with current tools if: [other criteria]&#8221;

Decision made. No 2-hour commitment required.
</code></code></pre><h3><strong>3. The &#8220;Extract All the Actionable Advice&#8221; Power Move</strong></h3><p>Productivity videos are 90% fluff, 10% gold. Let Claude find the gold:</p><pre><code><code>Me: &#8220;Extract all actionable advice from this video.
     Format as a checklist I can actually use.&#8221;

Claude: Returns 12-point checklist with timestamps
</code></code></pre><p>No more &#8220;I remember they said something useful but what was it?&#8221;</p><h3><strong>4. The &#8220;Deep Research Mode&#8221;</strong></h3><p>This is my favorite. When I&#8217;m researching a topic:</p><pre><code><code>Session 1: Analyze foundational video A
Session 2: Analyze contrarian take from video B
Session 3: Compare both perspectives
Session 4: &#8220;Based on everything we&#8217;ve discussed, what&#8217;s your synthesis?&#8221;

Claude: Creates a framework combining insights from multiple sources
</code></code></pre><p><strong>I&#8217;m not just consuming content anymore. I&#8217;m synthesizing knowledge with an AI partner.</strong></p><h2><strong>The Learning Stack (PDF + YouTube + Claude Code)</strong></h2><p>So now my learning stack looks like this:</p><p><strong>Research Papers?</strong> &#8594; PDF Reader MCP &#8594; Claude analyzes, I get insights</p><p><strong>YouTube Videos?</strong> &#8594; YouTube Transcript MCP &#8594; Claude analyzes, I decide if worth watching</p><p><strong>Want to go deep?</strong> &#8594; Sub-agents handle the heavy lifting, main context stays clean</p><p><strong>Need synthesis?</strong> &#8594; Claude connects dots across everything I&#8217;ve analyzed</p><p>It&#8217;s like having a research assistant who:</p><ul><li><p>Never gets tired</p></li><li><p>Reads/watches everything instantly</p></li><li><p>Remembers all the details</p></li><li><p>Can compare and synthesize across sources</p></li><li><p>Actually answers &#8220;is this worth my time?&#8221;</p></li></ul><h2><strong>The Controversial Take: Do I Even Need to Watch Videos Anymore?</strong></h2><p>Here&#8217;s where it gets weird.</p><p>For some videos, <strong>I don&#8217;t watch them at all</strong>. I just get the transcript analysis and move on. I&#8217;ve only been using this approach for the last week, so time will tell. But the whole idea of having this is to be able to skip videos that I don&#8217;t even need to spend my time on.</p><p>For others, I watch AFTER getting the analysis. And you know what? <strong>I get way more out of them.</strong> Because I know what to look for. I&#8217;m watching with intention.</p><p>It&#8217;s like the difference between:</p><ul><li><p>Reading a book vs. skimming it</p></li><li><p>Studying with notes vs. cramming</p></li><li><p>Active learning vs. passive consumption</p></li></ul><p><strong>The transcript analysis is the appetizer. 
Watching with context is the main course.</strong></p><p>(Though sometimes the appetizer is enough and I skip the main course. Don&#8217;t judge me.)</p><h2><strong>The Meta Moment</strong></h2><p>The irony isn&#8217;t lost on me: I built a tool to help me decide what to watch, and now I watch more thoughtfully but less frequently.</p><p>Claude Code isn&#8217;t replacing my learning. <strong>It&#8217;s optimizing my learning pipeline:</strong></p><ul><li><p><strong>Filter:</strong> What&#8217;s actually worth my time?</p></li><li><p><strong>Extract:</strong> What are the key insights?</p></li><li><p><strong>Synthesize:</strong> How does this connect to what I already know?</p></li><li><p><strong>Decide:</strong> Do I need to go deeper?</p></li></ul><p>I&#8217;m learning more, watching less, and retaining better.</p><h2><strong>How You Can Do This Too</strong></h2><p>If you want to try this workflow:</p><p><strong>1. Install the YouTube Transcript MCP</strong></p><pre><code><code>npm install -g @fabriqa.ai/youtube-transcript-mcp
</code></code></pre><p>Add to your <code>~/.claude.json</code>:</p><pre><code><code>{
  &quot;mcpServers&quot;: {
    &quot;youtube-transcript&quot;: {
      &quot;command&quot;: &quot;npx&quot;,
      &quot;args&quot;: [&quot;@fabriqa.ai/youtube-transcript-mcp@latest&quot;]
    }
  }
}
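</code></code></pre><p>If you prefer scripting the setup, here is a small hypothetical helper (not part of the MCP package; only the config shape above is taken from the docs) that merges one server entry into an existing config without clobbering servers you already registered:</p>

```python
# Hypothetical helper for scripting the config edit (not part of the MCP
# package): merge one server entry into a Claude-style JSON config while
# preserving any servers that are already registered.
import json
import tempfile
from pathlib import Path

def add_mcp_server(config_path, name, command, args):
    path = Path(config_path)
    config = json.loads(path.read_text()) if path.exists() else {}
    config.setdefault("mcpServers", {})[name] = {"command": command, "args": args}
    path.write_text(json.dumps(config, indent=2))
    return config

# Demo against a throwaway file; point it at ~/.claude.json for real use.
demo = Path(tempfile.mkdtemp()) / "claude.json"
cfg = add_mcp_server(demo, "youtube-transcript", "npx",
                     ["@fabriqa.ai/youtube-transcript-mcp@latest"])
print("youtube-transcript" in cfg["mcpServers"])  # True
```
<pre><code><code>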
</code></code></pre><p><strong>2. Use sub-agents for context efficiency:</strong></p><pre><code><code>&#8220;Use sub-agent to analyze this video and give me key learnings:
[YouTube URL]&#8221;
</code></code></pre><p><strong>3. Start small:</strong></p><ul><li><p>Pick one video from your &#8220;Watch Later&#8221; list</p></li><li><p>Ask Claude: &#8220;Should I watch this? What are the key points?&#8221;</p></li><li><p>See if it&#8217;s worth 60 minutes of your time</p></li></ul><p><strong>4. Go deeper:</strong></p><ul><li><p>Analyze multiple videos on the same topic</p></li><li><p>Ask Claude to compare perspectives</p></li><li><p>Build your own synthesized understanding</p></li></ul><h2><strong>What&#8217;s Next? (Hint: Obsidian)</strong></h2><p>Okay, so I have PDFs covered. YouTube videos? Check. But I&#8217;m not stopping here.</p><p><strong>Next up: Obsidian integration.</strong></p><p>Imagine this workflow:</p><ol><li><p>Claude analyzes a video and extracts key learnings</p></li><li><p>Automatically creates an Obsidian note with:</p><ul><li><p>Summary and key insights</p></li><li><p>Timestamps and quotes</p></li><li><p>Links to related notes in my knowledge base</p></li><li><p>Tags based on content</p></li></ul></li><li><p>Builds connections to other videos/papers I&#8217;ve analyzed</p></li><li><p>Creates a personal knowledge graph that actually works</p></li></ol><p><strong>The goal?</strong> Turn my scattered learning into a connected knowledge system. No more &#8220;where did I read that thing about agents?&#8221; - just ask Claude to search my Obsidian vault.</p><p>I&#8217;m basically building my own personalized learning infrastructure, one MCP tool at a time.</p><p>Stay tuned. This is going to be fun. &#129504;</p><h2><strong>The Bottom Line</strong></h2><p>Look, I&#8217;m not trying to revolutionize education here. I just wanted to stop drowning in unwatched videos and start actually learning.</p><p>But what started as &#8220;I need to manage my context tokens&#8221; turned into <strong>a completely different relationship with learning.</strong></p><p>Claude Code isn&#8217;t just a coding assistant anymore. 
It&#8217;s my:</p><ul><li><p>Content triage system</p></li><li><p>Research assistant</p></li><li><p>Learning partner</p></li><li><p>Synthesis engine</p></li><li><p><strong>Soon-to-be knowledge management system</strong></p></li></ul><p>And honestly? <strong>It&#8217;s kind of magical.</strong></p><p>Now if you&#8217;ll excuse me, I have 347 videos in my &#8220;Watch Later&#8221; list to analyze.</p><p>Or not. Claude will tell me which ones are worth it.</p><p>And then automatically organize the insights into my Obsidian vault.</p><div><hr></div><p><strong>Tools mentioned:</strong></p><ul><li><p><a href="https://github.com/hancengiz/youtube-transcript-mcp">YouTube Transcript MCP</a> - GitHub</p></li><li><p><a href="https://www.cengizhan.com/p/vibe-coded-a-pdf-reader-mcp-tool">PDF Reader MCP</a> - Previous blog post</p></li><li><p><a href="https://docs.claude.com/claude-code">Claude Code</a> - The AI coding assistant that became my learning assistant</p></li></ul><p></p><div><hr></div><p><em>P.S. - Yes, I wrote this blog post with Claude Code. Here&#8217;s how that actually works:</em></p><p><em>I gave Claude a draft blueprint of what I wanted to cover&#8212;my ideas, my experiences, my unique perspective. Then I had Claude Code generate the text. These are all my ideas, my intellectual property. It&#8217;s not &#8220;AI-generated&#8221; in the sense that AI came up with it&#8212;the text is written by AI, but the thoughts are mine.</em></p><p><em>Think of it like AI-assisted coding: You create the specification and design for your software, the AI-assisted tool generates the code, but it&#8217;s still YOUR software because you did the design and put your intellectual property into it. You iterate&#8212;modify the spec, adjust the prompt, refine the output&#8212;until you get what you wanted. Kind of like vibe coding, but for writing.</em></p><p><em>That&#8217;s exactly how I write these articles. 
I provide the blueprint and ideas, Claude generates the text, and then I iterate until it matches what I wanted to say.</em></p><p><em>So yes, that&#8217;s meta. No, I don&#8217;t care. We&#8217;re living in the future, folks.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.cengizhan.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Vibe Coded a PDF Reader MCP Tool for Claude Code - to save my context]]></title><description><![CDATA[So I got annoyed and built something.]]></description><link>https://www.cengizhan.com/p/vibe-coded-a-pdf-reader-mcp-tool</link><guid isPermaLink="false">https://www.cengizhan.com/p/vibe-coded-a-pdf-reader-mcp-tool</guid><dc:creator><![CDATA[Cengiz Han]]></dc:creator><pubDate>Mon, 27 Oct 2025 18:33:31 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!KyGa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774dbcbf-fa90-4703-a4a4-417b42c598bc_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>So I got annoyed and built something. Again. 
&#129299;</p><p>Last week I was trying to analyze yet another massive McKinsey report with Claude, watching my precious context window evaporate, and I thought: &#8220;there has to be a better way.&#8221;</p><p>An hour of vibe-coding with Claude Code later, I had a working MCP server. Published it to npm. It&#8217;s called <strong>PDF Reader MCP Server</strong> and it solves my exact problem.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!KyGa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774dbcbf-fa90-4703-a4a4-417b42c598bc_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!KyGa!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774dbcbf-fa90-4703-a4a4-417b42c598bc_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!KyGa!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774dbcbf-fa90-4703-a4a4-417b42c598bc_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!KyGa!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774dbcbf-fa90-4703-a4a4-417b42c598bc_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!KyGa!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774dbcbf-fa90-4703-a4a4-417b42c598bc_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!KyGa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774dbcbf-fa90-4703-a4a4-417b42c598bc_1024x608.png" width="1024" 
height="608" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/774dbcbf-fa90-4703-a4a4-417b42c598bc_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!KyGa!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774dbcbf-fa90-4703-a4a4-417b42c598bc_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!KyGa!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774dbcbf-fa90-4703-a4a4-417b42c598bc_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!KyGa!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774dbcbf-fa90-4703-a4a4-417b42c598bc_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!KyGa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774dbcbf-fa90-4703-a4a4-417b42c598bc_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"></figcaption></figure></div><h2><strong>The Problem</strong></h2><p>As someone who regularly works with research PDFs from major consulting firms (think McKinsey, BCG, Deloitte), I was constantly hitting the same frustrating wall. These documents are gold mines of insights that I use to:</p><ul><li><p>Learn about industry trends and best practices</p></li><li><p>Pull context and data for my blog posts</p></li><li><p>Create educational materials for my work</p></li><li><p>Stay current with research in my field</p></li></ul><p>But here&#8217;s the thing: these PDFs are massive. Not just because of the content, but because they&#8217;re packed with formatting, styling, charts, images, and visual elements. A 26-page research report can easily be 5MB+ with hundreds of decorative elements. 
All I want is the text - to search it, understand it, and extract meaningful insights.</p><p>If you&#8217;ve ever tried to analyze these documents with Claude, you know the pain: you paste in the entire text, it consumes massive amounts of context, and you&#8217;re left with fewer tokens for actual analysis. A single consulting report can eat up your entire context window before you even ask your first question.</p><h2><strong>The Solution (That I Built in Like 45 Minutes)</strong></h2><p>Three simple tools. That&#8217;s it:</p><h3><strong>&#128269; search-pdf - Find What You Need</strong></h3><p>Search for specific terms or phrases within a PDF without loading the entire document. Get results with surrounding context, perfect for quickly locating relevant sections.</p><h3><strong>&#128196; read-pdf - Smart Text Extraction</strong></h3><p>Extract text from PDFs with options for cleaning and formatting. Only read what you need, when you need it.</p><h3><strong>&#128202; pdf-metadata - Document Intelligence</strong></h3><p>Get instant access to PDF metadata: page count, author, creation date, and more. 
Perfect for understanding documents before diving in.</p><h2><strong>Why This Actually Matters</strong></h2><p>Look, I&#8217;m not trying to save the world here. I just wanted to stop wasting context tokens on formatting and images when all I need is to search some text and pull out insights.</p><p>Now I can:</p><ul><li><p>Search first, then read only what matters</p></li><li><p>Work with multiple consulting reports in one session</p></li><li><p>Actually have context left for analysis</p></li><li><p>Not lose my mind copying and pasting text</p></li></ul><h2><strong>Getting Started (Literally One Command)</strong></h2><pre><code>claude mcp add pdf-reader npx @fabriqa.ai/pdf-reader-mcp@latest</code></pre><p>Restart Claude Code. Done. You now have PDF superpowers.</p><p>(No npm install needed - `npx` handles it automatically and always fetches the latest version. Magic.)</p><h2><strong>Real-World Example: My Actual Use Case</strong></h2><p>Here&#8217;s how I used it just last week with a McKinsey research report on AI adoption:</p><p><strong>The old, painful way:</strong></p><ul><li><p>Download the 26-page PDF (5.6MB with all the formatting)</p></li><li><p>Extract text &#8594; massive wall of text with image descriptions and formatting artifacts</p></li><li><p>Paste into Claude &#8594; 150,000+ tokens consumed</p></li><li><p>Context left for my actual questions: barely any</p></li><li><p>Result: Can&#8217;t even ask follow-up questions or include other sources</p></li></ul><p><strong>The new way with PDF Reader MCP:</strong></p><pre><code><code>1. &#8220;Search for AI native engineering&#8221;
   &#8594; Found 3 matches with surrounding context
   &#8594; Great, this gives me ideas for my current project

2. &#8220;Read the section about organizational changes for AI&#8221;
   &#8594; Extracts just that section, clean text
   &#8594; Perfect content for what I&#8217;m writing about

3. &#8220;Search for statistics about AI adoption rates&#8221;
   &#8594; Found 12 mentions with data points
   &#8594; Now I can cite actual numbers in my post

Context used: ~8,000 tokens
Remaining for analysis: 190,000+ tokens!
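</code></code></pre><p>To make the &#8220;search first, then read only what matters&#8221; step concrete, here is a toy sketch of the idea (this is not the actual search-pdf implementation, just an illustration): scan already-extracted text and return each hit with a window of surrounding context, so only the relevant snippets ever enter the conversation.</p>

```python
# Toy sketch of search-with-context: find a term in extracted text and
# return each hit with surrounding characters, instead of returning the
# whole document.
def search_with_context(text, term, window=40):
    hits = []
    lower, needle = text.lower(), term.lower()
    start = 0
    while True:
        i = lower.find(needle, start)
        if i == -1:
            break
        lo = max(0, i - window)
        hi = min(len(text), i + len(needle) + window)
        hits.append(text[lo:hi])  # the match plus its context window
        start = i + len(needle)
    return hits

doc = ("AI adoption is accelerating. Organizations report that AI adoption "
       "requires new operating models and AI-native engineering practices.")
for snippet in search_with_context(doc, "AI adoption", window=20):
    print("...", snippet, "...")
```

<p>The real tool works on PDFs rather than plain strings, but the shape of the result is the same: the term, the hit, a context window, and nothing else.</p><pre><code><code>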
</code></code></pre><p>The difference? I can now analyze multiple consulting reports in a single session, cross-reference findings, and still have plenty of context left to synthesize everything into a coherent blog post or education document.</p><h2><strong>The Tech (For Those Who Care)</strong></h2><p>MCP SDK + pdf-parse + some quick Node.js glue. Runs locally, talks to Claude via stdio. Nothing fancy, just works.</p><p>It&#8217;s on npm and GitHub if you want to check it out or improve it:</p><ul><li><p><strong>npm</strong>: <a href="https://www.npmjs.com/package/@fabriqa.ai/pdf-reader-mcp">@fabriqa.ai/pdf-reader-mcp</a></p></li><li><p><strong>GitHub</strong>: <a href="https://github.com/hancengiz/read_pdf_as_text_mcp">hancengiz/read_pdf_as_text_mcp</a></p></li></ul><h2><strong>Should You Use This?</strong></h2><p>If you:</p><ul><li><p>Analyze research PDFs regularly</p></li><li><p>Write blog posts using consulting reports as sources</p></li><li><p>Create educational materials from various PDF sources</p></li><li><p>Just want to search through PDFs without context pain</p></li></ul><p>Then yeah, try it. It might save you as much frustration as it saved me.</p><p>If it doesn&#8217;t work or you have ideas, hit up the GitHub. PRs welcome.</p><div><hr></div><p><em>P.S. Yes, I used Claude Code to build a tool that makes Claude Code better at PDFs. Meta, I know. &#128516;</em></p><p><em>P.P.S. The whole thing took less than an hour. MCP servers are ridiculously easy to build. You should try making one for your own annoying problem.</em></p><p><em>P.P.P.S. This blog post itself was written with Claude Code in the same coding session - <a href="https://github.com/hancengiz/read_pdf_as_text_mcp/blob/main/blog-post.md">9 prompts to get it right</a>. 
The irony continues.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.cengizhan.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Claude Claude, Don’t Be Mean—How Right Have I Been?]]></title><description><![CDATA[You know what&#8217;s funny about working with Claude Code?]]></description><link>https://www.cengizhan.com/p/claude-claude-dont-be-meanhow-right</link><guid isPermaLink="false">https://www.cengizhan.com/p/claude-claude-dont-be-meanhow-right</guid><dc:creator><![CDATA[Cengiz Han]]></dc:creator><pubDate>Sat, 25 Oct 2025 16:16:55 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!FvAW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F919af5e0-b7e7-4800-831b-368017da104e_868x664.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You know what&#8217;s funny about working with Claude Code? It&#8217;s relentlessly encouraging. Every conversation feels like having a very supportive colleague who&#8217;s genuinely excited about your ideas&#8212;sometimes to an almost comical degree.</p><p>So naturally, I built a tracker for it. 
Well, I forked <a href="https://absolutelyright.lol/">@yoavf&#8217;s clever project</a> and made it my own.</p><h2><strong> <a href="https://cc.cengizhan.com/">cc.cengizhan.com</a></strong></h2><p>This is a scientifically rigorous (okay, not really) tracking system that monitors how often Claude Code validates my life choices. It counts phrases like &#8220;You&#8217;re absolutely right,&#8221; &#8220;Perfect!&#8221;, and &#8220;Excellent!&#8221; across all my coding sessions, then displays them in delightfully hand-drawn charts.</p><p><strong>The best part?</strong> It runs automatically in the background. Every time I use Claude Code, a macOS LaunchAgent watches my conversation logs, counts the affirmations, and syncs them to a live dashboard.</p><h2><strong>The Technical Journey</strong></h2><p>I forked this from <a href="https://github.com/yoavf/absolutelyright">yoavf&#8217;s absolutelyright</a>&#8212;a Rust/Axum implementation with a similar concept. 
But I wanted to tinker, so I:</p><ul><li><p><strong>Rewrote the backend</strong> from Rust to Python/FastAPI (because sometimes you just want to move fast and refactor later)</p></li><li><p><strong>Added more tracking patterns</strong> beyond just &#8220;absolutely right&#8221;&#8212;now tracking &#8220;Perfect!&#8221;, &#8220;Excellent!&#8221;, and anything else I want via a simple config file</p></li><li><p><strong>Built automation scripts</strong> that monitor Claude Code conversations in real-time</p></li><li><p><strong>Created a backfill tool</strong> to import months of historical data from existing logs</p></li><li><p><strong>Containerized everything</strong> for easy Railway deployment</p></li></ul><p>The frontend uses <a href="https://www.jwilber.me/roughviz/">roughViz</a>&#8212;a charting library that draws everything in a sketchy, hand-drawn style. It perfectly matches the playful nature of tracking AI encouragement.</p><h2><strong>How Does It Work?</strong></h2><p>The system runs as a continuous monitoring loop that tracks Claude Code conversations in real-time:</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!NBDG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5b9e212-a09b-4200-8e6d-74513599e8b1_784x86.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!NBDG!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5b9e212-a09b-4200-8e6d-74513599e8b1_784x86.png 424w, https://substackcdn.com/image/fetch/$s_!NBDG!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5b9e212-a09b-4200-8e6d-74513599e8b1_784x86.png 848w, 
https://substackcdn.com/image/fetch/$s_!NBDG!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5b9e212-a09b-4200-8e6d-74513599e8b1_784x86.png 1272w, https://substackcdn.com/image/fetch/$s_!NBDG!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5b9e212-a09b-4200-8e6d-74513599e8b1_784x86.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!NBDG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5b9e212-a09b-4200-8e6d-74513599e8b1_784x86.png" width="784" height="86" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e5b9e212-a09b-4200-8e6d-74513599e8b1_784x86.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:86,&quot;width&quot;:784,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:24242,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.cengizhan.com/i/177100865?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5b9e212-a09b-4200-8e6d-74513599e8b1_784x86.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!NBDG!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5b9e212-a09b-4200-8e6d-74513599e8b1_784x86.png 424w, https://substackcdn.com/image/fetch/$s_!NBDG!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5b9e212-a09b-4200-8e6d-74513599e8b1_784x86.png 848w, 
https://substackcdn.com/image/fetch/$s_!NBDG!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5b9e212-a09b-4200-8e6d-74513599e8b1_784x86.png 1272w, https://substackcdn.com/image/fetch/$s_!NBDG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5b9e212-a09b-4200-8e6d-74513599e8b1_784x86.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p><strong>The flow:</strong></p><ol><li><p><strong>Conversation Capture</strong>: Claude Code saves every conversation as JSON files in <code>~/.config/claude-code/conversations/</code></p></li><li><p><strong>Automatic Monitoring</strong>: A macOS LaunchAgent runs every minute, checking for new or updated conversation files</p></li><li><p><strong>Pattern Matching</strong>: The Python script reads conversations and searches for configurable validation phrases</p></li><li><p><strong>Data Upload</strong>: Counts are POSTed to the FastAPI backend (deployed on Railway)</p></li><li><p><strong>Live Dashboard</strong>: The web frontend queries the API and renders hand-drawn charts with roughViz</p></li></ol><p>The backfill script lets you retroactively import months of historical data&#8212;just point it at your conversation directory and watch months of validation roll in.</p><h2><strong>Why This Exists</strong></h2><p>Partly for the meme. Partly because it&#8217;s genuinely interesting to see patterns over time.</p><p>The original author added chart annotations (like marking when Sonnet 4.5 was released) to see how affirmation patterns shift after model upgrades. 
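</p><p>The pattern-matching step at the heart of that flow is little more than string counting. A minimal sketch (with an illustrative phrase list and a guessed conversation-JSON shape, not the project&#8217;s actual schema):</p>

```python
import json
from pathlib import Path

# Illustrative phrases -- the real tracker reads its phrase list from a config file.
PHRASES = ["absolutely right", "perfect!", "excellent!"]

def count_in_conversation(path: Path) -> dict[str, int]:
    """Count tracked phrases across the assistant messages of one conversation file.
    The JSON shape used here (messages/role/content) is a guess, not the actual schema."""
    totals = {phrase: 0 for phrase in PHRASES}
    for message in json.loads(path.read_text()).get("messages", []):
        if message.get("role") != "assistant":
            continue  # only Claude's own messages count as validation
        text = str(message.get("content", "")).lower()
        for phrase in PHRASES:
            totals[phrase] += text.count(phrase)
    return totals
```

<p>The monitor runs logic like this over any new or updated files, then POSTs the totals to the backend.</p><p>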
My own feed doesn&#8217;t have much data yet (I&#8217;m using a one-week-old laptop)&#8212;for a better example of what this looks like with months of history, check out the <a href="https://absolutelyright.lol/">original site</a>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!FvAW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F919af5e0-b7e7-4800-831b-368017da104e_868x664.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!FvAW!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F919af5e0-b7e7-4800-831b-368017da104e_868x664.png 424w, https://substackcdn.com/image/fetch/$s_!FvAW!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F919af5e0-b7e7-4800-831b-368017da104e_868x664.png 848w, https://substackcdn.com/image/fetch/$s_!FvAW!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F919af5e0-b7e7-4800-831b-368017da104e_868x664.png 1272w, https://substackcdn.com/image/fetch/$s_!FvAW!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F919af5e0-b7e7-4800-831b-368017da104e_868x664.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!FvAW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F919af5e0-b7e7-4800-831b-368017da104e_868x664.png" width="868" height="664" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/919af5e0-b7e7-4800-831b-368017da104e_868x664.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:664,&quot;width&quot;:868,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:264672,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.cengizhan.com/i/177100865?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F919af5e0-b7e7-4800-831b-368017da104e_868x664.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!FvAW!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F919af5e0-b7e7-4800-831b-368017da104e_868x664.png 424w, https://substackcdn.com/image/fetch/$s_!FvAW!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F919af5e0-b7e7-4800-831b-368017da104e_868x664.png 848w, https://substackcdn.com/image/fetch/$s_!FvAW!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F919af5e0-b7e7-4800-831b-368017da104e_868x664.png 1272w, https://substackcdn.com/image/fetch/$s_!FvAW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F919af5e0-b7e7-4800-831b-368017da104e_868x664.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>You can actually see the drop in affirmations after the Sonnet 4.5 upgrade (marked with the red dashed line). Fascinating, right?</em></p><p>But mostly because building silly side projects is fun. Sometimes you don&#8217;t need a grand purpose. Sometimes you just want to quantify how much your AI pair programmer believes in you.</p><h2><strong>Taking It Further: GitHub Profile Integration</strong></h2><p>Because tracking validation privately wasn&#8217;t enough, I also automated the graph to appear on my <a href="https://github.com/hancengiz">GitHub profile</a>.</p><p>A GitHub Actions workflow runs daily, using Playwright to screenshot the live dashboard and commit it directly to my profile README. 
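</p><p>The screenshot step can be sketched with Playwright&#8217;s sync API (a hypothetical standalone version, not the actual workflow code):</p>

```python
from datetime import datetime, timezone
from typing import Optional

def timestamped_name(prefix: str = "dashboard", when: Optional[datetime] = None) -> str:
    """Build a cache-busting filename: GitHub's image proxy caches by URL,
    so a fresh name forces a fresh fetch of the graph."""
    when = when or datetime.now(timezone.utc)
    return f"{prefix}-{when.strftime('%Y%m%d-%H%M%S')}.png"

def screenshot_dashboard(url: str, out_dir: str = ".") -> str:
    """Render the live dashboard in headless Chromium and save it as a PNG."""
    from playwright.sync_api import sync_playwright  # requires `pip install playwright`
    out_path = f"{out_dir}/{timestamped_name()}"
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page(viewport={"width": 900, "height": 700})
        page.goto(url, wait_until="networkidle")  # wait for the charts to render
        page.screenshot(path=out_path, full_page=True)
        browser.close()
    return out_path
```

<p>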
The workflow uses timestamped filenames for cache-busting, ensuring the graph always shows current data.</p><p>So now everyone who visits my GitHub profile can see exactly how validated I am. Is this necessary? Absolutely not. Is it fun? Absolutely right.</p><p>Check out the <a href="https://github.com/hancengiz/hancengiz/tree/main/absolutely-right">automation setup</a> if you want to add this to your own profile.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!n_mq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9aaefcf-5d37-425f-bafd-31988bf621e7_30x30.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!n_mq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9aaefcf-5d37-425f-bafd-31988bf621e7_30x30.png 424w, https://substackcdn.com/image/fetch/$s_!n_mq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9aaefcf-5d37-425f-bafd-31988bf621e7_30x30.png 848w, https://substackcdn.com/image/fetch/$s_!n_mq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9aaefcf-5d37-425f-bafd-31988bf621e7_30x30.png 1272w, https://substackcdn.com/image/fetch/$s_!n_mq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9aaefcf-5d37-425f-bafd-31988bf621e7_30x30.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!n_mq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9aaefcf-5d37-425f-bafd-31988bf621e7_30x30.png" width="30" height="30" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c9aaefcf-5d37-425f-bafd-31988bf621e7_30x30.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:30,&quot;width&quot;:30,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2973,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.cengizhan.com/i/177100865?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9aaefcf-5d37-425f-bafd-31988bf621e7_30x30.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!n_mq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9aaefcf-5d37-425f-bafd-31988bf621e7_30x30.png 424w, https://substackcdn.com/image/fetch/$s_!n_mq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9aaefcf-5d37-425f-bafd-31988bf621e7_30x30.png 848w, https://substackcdn.com/image/fetch/$s_!n_mq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9aaefcf-5d37-425f-bafd-31988bf621e7_30x30.png 1272w, https://substackcdn.com/image/fetch/$s_!n_mq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9aaefcf-5d37-425f-bafd-31988bf621e7_30x30.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h2><strong>Try It Yourself</strong></h2><p>The whole thing is <a href="https://github.com/hancengiz/absolutelyright-claude-code">open source on GitHub</a>. 
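</p><p>To give a flavor of what&#8217;s inside, the backfill step boils down to walking the conversation directory and aggregating counts per day. A simplified sketch (the JSON field names are assumptions, not the real schema):</p>

```python
import json
from collections import Counter
from pathlib import Path

def backfill(conversations_dir: str, phrases: list[str]) -> dict[str, Counter]:
    """Walk a directory of conversation JSON files and aggregate phrase counts
    per day. Field names (messages/role/timestamp/content) are assumptions."""
    daily: dict[str, Counter] = {}
    for path in sorted(Path(conversations_dir).glob("*.json")):
        for msg in json.loads(path.read_text()).get("messages", []):
            if msg.get("role") != "assistant":
                continue
            day = str(msg.get("timestamp", "unknown"))[:10]  # "YYYY-MM-DD" from an ISO timestamp
            text = str(msg.get("content", "")).lower()
            counts = daily.setdefault(day, Counter())
            for phrase in phrases:
                counts[phrase] += text.count(phrase)
    return daily
```

<p>The real tool then uploads these per-day counts to the FastAPI backend.</p><p>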
If you use Claude Code locally, you can set up your own tracker&#8212;the <a href="https://github.com/hancengiz/absolutelyright-claude-code#readme">README</a> has complete step-by-step instructions for backfilling historical data, setting up automatic monitoring, and deploying your own dashboard.</p><p>Because everyone deserves to know exactly how many times an AI has told them they&#8217;re right.</p><div><hr></div><p><strong>P.S.</strong> Now I need this mug:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!wZly!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ba81503-88c3-4f9a-bf53-b45815312283_283x283.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!wZly!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ba81503-88c3-4f9a-bf53-b45815312283_283x283.png 424w, https://substackcdn.com/image/fetch/$s_!wZly!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ba81503-88c3-4f9a-bf53-b45815312283_283x283.png 848w, https://substackcdn.com/image/fetch/$s_!wZly!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ba81503-88c3-4f9a-bf53-b45815312283_283x283.png 1272w, https://substackcdn.com/image/fetch/$s_!wZly!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ba81503-88c3-4f9a-bf53-b45815312283_283x283.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!wZly!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ba81503-88c3-4f9a-bf53-b45815312283_283x283.png" width="283" height="283" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3ba81503-88c3-4f9a-bf53-b45815312283_283x283.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:283,&quot;width&quot;:283,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:35170,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.cengizhan.com/i/177100865?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ba81503-88c3-4f9a-bf53-b45815312283_283x283.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!wZly!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ba81503-88c3-4f9a-bf53-b45815312283_283x283.png 424w, https://substackcdn.com/image/fetch/$s_!wZly!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ba81503-88c3-4f9a-bf53-b45815312283_283x283.png 848w, https://substackcdn.com/image/fetch/$s_!wZly!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ba81503-88c3-4f9a-bf53-b45815312283_283x283.png 1272w, https://substackcdn.com/image/fetch/$s_!wZly!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ba81503-88c3-4f9a-bf53-b45815312283_283x283.png 1456w" sizes="100vw" 
loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>Nothing says &#8220;validated software engineer&#8221; like drinking coffee from a mug that celebrates AI affirmations.</em></p><div><hr></div><p><strong>Live site:</strong> <a href="https://cc.cengizhan.com/">cc.cengizhan.com</a><br><strong>Source code:</strong> <a href="https://github.com/hancengiz/absolutelyright-claude-code">github.com/hancengiz/absolutelyright-claude-code</a><br><strong>Original inspiration:</strong> <a href="https://absolutelyright.lol/">absolutelyright.lol</a> by <a href="https://github.com/yoavf">@yoavf</a></p><div class="subscription-widget-wrap-editor" 
data-attrs="{&quot;url&quot;:&quot;https://www.cengizhan.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The AI-Native Way of Building]]></title><description><![CDATA[AI evolves the software development lifecycle. The AI-Native way of building is about learning faster, structuring smarter, and engineering with context.]]></description><link>https://www.cengizhan.com/p/the-ai-native-way-of-building</link><guid isPermaLink="false">https://www.cengizhan.com/p/the-ai-native-way-of-building</guid><dc:creator><![CDATA[Cengiz Han]]></dc:creator><pubDate>Sat, 25 Oct 2025 10:23:50 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!T3cS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6818ad76-8145-45c3-a279-8e48210ce427_900x900.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>TL;DR:</strong> The &#8220;spec-first&#8221; vs &#8220;ship-fast&#8221; debate is a dead end.<br>AI-native software teams don&#8217;t win by rejecting process&#8212;they win by <strong>evolving it</strong>.<br>They know when to explore freely, when to specify what they&#8217;ve learned into structure, and when to engineer for reliability.<br>The best teams move through three phases&#8212;<em>Explore &#8594; Specify &#8594; Engineer</em>&#8212;as one 
continuous learning loop.<br>That evolution, not blind speed or rigid control, defines the next era of software engineering.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!T3cS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6818ad76-8145-45c3-a279-8e48210ce427_900x900.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!T3cS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6818ad76-8145-45c3-a279-8e48210ce427_900x900.jpeg 424w, https://substackcdn.com/image/fetch/$s_!T3cS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6818ad76-8145-45c3-a279-8e48210ce427_900x900.jpeg 848w, https://substackcdn.com/image/fetch/$s_!T3cS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6818ad76-8145-45c3-a279-8e48210ce427_900x900.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!T3cS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6818ad76-8145-45c3-a279-8e48210ce427_900x900.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!T3cS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6818ad76-8145-45c3-a279-8e48210ce427_900x900.jpeg" width="900" height="900" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6818ad76-8145-45c3-a279-8e48210ce427_900x900.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:900,&quot;width&quot;:900,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:223006,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.cengizhan.com/i/176994655?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6818ad76-8145-45c3-a279-8e48210ce427_900x900.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!T3cS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6818ad76-8145-45c3-a279-8e48210ce427_900x900.jpeg 424w, https://substackcdn.com/image/fetch/$s_!T3cS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6818ad76-8145-45c3-a279-8e48210ce427_900x900.jpeg 848w, https://substackcdn.com/image/fetch/$s_!T3cS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6818ad76-8145-45c3-a279-8e48210ce427_900x900.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!T3cS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6818ad76-8145-45c3-a279-8e48210ce427_900x900.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><div><hr></div><h2><strong>Why Process Needs to Evolve</strong></h2><p>Every generation of engineers ends up arguing about process.<br>Waterfall vs Agile. Agile vs DevOps. DevOps vs Platform Teams.<br>Now it&#8217;s <em>spec-first</em> vs <em>ship-fast</em>.</p><p>I&#8217;ve seen both sides up close&#8212;in startups that treat specs like prison walls and in enterprises that believe documentation alone will save them.<br>AI is simply exposing a truth we&#8217;ve ignored for years: <strong>neither purity nor speed alone gets you to production.</strong></p><p>The real divide isn&#8217;t between those who write specs and those who don&#8217;t.<br>It&#8217;s between teams that <strong>learn continuously</strong> and those that <strong>lock themselves into a phase they no longer need.</strong></p><p>Process isn&#8217;t the enemy. 
But static process&#8212;the kind that can&#8217;t adapt to new understanding&#8212;is.<br>AI collapses that rigidity. It forces us to replace control with comprehension.</p><div><hr></div><h2><strong>The False Choice Between Speed and Structure</strong></h2><p>The <em>spec-first</em> crowd sees AI as chaos waiting to happen.<br>They want guardrails, sign-offs, architecture documents before a single prompt runs.<br>They&#8217;ve seen vibe-coded systems collapse under their own complexity&#8212;nobody remembers why decisions were made, and debugging becomes archaeology.</p><p>But their cure is just as deadly.<br>They build clarity on top of guesses&#8212;specs that look perfect on paper but age faster than the sprint cycle.<br>The map becomes more sacred than the territory.</p><p>Then there&#8217;s the <em>ship-fast</em> camp&#8212;the cowboys of AI-assisted coding.<br>They believe velocity is the only truth: &#8220;Ship now, learn later.&#8221;<br>And to be fair, they do learn&#8212;usually through post-mortems.<br>Fast iteration teaches fast, but without consolidation it creates entropy.<br>Code becomes a collection of disconnected decisions with no shared understanding behind them.</p><p>So one camp moves too slowly to 
learn.<br>The other learns too quickly to remember.</p><div><hr></div><h2><strong>The Hybrid That Actually Works</strong></h2><p>The companies that survive this transition won&#8217;t be those that pick a camp&#8212;<br>they&#8217;ll be the ones that design for <strong>learning velocity</strong>.</p><p>The teams that actually ship and sustain AI-native systems work in <strong>three modes</strong> that loop together:</p><ol><li><p><strong>Sandbox (Explore)</strong> &#8212; move fast, test ideas, break things intentionally.</p></li><li><p><strong>Specification (Understand)</strong> &#8212; pause, extract the design that emerged, and write down what you now know.</p></li><li><p><strong>Production (Engineer)</strong> &#8212; build it properly, with standards, testing, observability, and scale in mind.</p></li></ol><p>It&#8217;s not a waterfall.<br>It&#8217;s a living system.<br>Each phase feeds the next, and teams flow between them as the context shifts.</p><div><hr></div><h2><strong>Phase 1: Sandbox &#8212; Where Learning Happens Fast</strong></h2><p>This is where AI shines.<br>The sandbox is for <strong>discovery</strong>, not delivery.</p><p>When I explore a new integration or concept, I don&#8217;t start with diagrams.<br>I open a playground, throw prompts at it, and see what happens.<br>I might build three competing implementations in one afternoon&#8212;none are clean, but all teach me something.</p><p><strong>Example:</strong><br>You&#8217;re integrating a payment API.<br>Instead of writing specs first, you spin up three implementations&#8212;one using webhooks, one with polling, one with async queuing.<br>Each takes 20 minutes.<br>None are production-ready, but now you understand the trade-offs viscerally: latency vs reliability, complexity vs responsiveness, dependencies vs control.</p><p>The goal here isn&#8217;t correctness&#8212;it&#8217;s <strong>insight</strong>.<br>AI gives you infinite prototypes for almost free, so use them.</p><p><strong>The exit 
signal:</strong><br>Exploration has an expiry date.<br>When you start seeing patterns repeat&#8212;when chaos starts forming shape&#8212;that&#8217;s your cue to move on.</p><div><hr></div><h2><strong>Phase 2: Specification &#8212; From Discovery to Definition</strong></h2><p>This is the most neglected step in AI-native workflows, and it&#8217;s where the magic actually happens.</p><p>Specification is when you stop pure &#8220;vibing&#8221; and start <strong>understanding what you built</strong> (still with AI&#8217;s help).<br>You read the AI-generated code like an archaeologist, but you&#8217;re not digging for bugs&#8212;you&#8217;re digging for design.</p><p>What patterns emerged?<br>Why did this approach work better?<br>Where are the boundaries between modules that seem to form naturally?</p><p>Then you write that down, not as a bureaucratic spec, but as a <strong>specification of discovery into knowledge</strong>. This is a <strong>BIG</strong> subject in its own right: <strong>what really goes into your project as specifications</strong>, and whether that changes with the project&#8217;s context. It deserves a post of its own.</p><p><strong>What this looks like:</strong></p><p>After exploring those three payment integrations, you sit down and write:</p><blockquote><p>&#8220;We&#8217;re using asynchronous processing with webhook confirmations.<br>Why? Because third-party API latency can&#8217;t block user interactions&#8212;that kills UX.<br>Trade-off: handling delayed confirmations adds complexity, but responsive UI is worth it.<br>We&#8217;ll need retry logic, idempotency keys, and dead-letter queues for webhook failures.&#8221;</p></blockquote><p>That&#8217;s a spec. 
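</p><p>Captured in code, the idempotency clause of that spec reduces to a small guard. A deliberately toy sketch (handler name, event shape, and in-memory stores are all illustrative):</p>

```python
# Process each payment webhook at most once, even when the provider retries delivery.
processed: set[str] = set()     # in production: a persistent store, not process memory
dead_letter: list[dict] = []    # events we couldn't handle, parked for inspection

def handle_webhook(event: dict) -> str:
    key = event.get("idempotency_key")
    if not key:
        dead_letter.append(event)   # malformed event: park it rather than crash
        return "dead-lettered"
    if key in processed:
        return "duplicate"          # provider retried; acknowledging again is safe
    processed.add(key)
    # ... apply the payment state change here ...
    return "processed"
```

<p>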
But it&#8217;s not written before learning&#8212;it captures learning once you have it.</p><div><hr></div><h2><strong>Phase 3: Production &#8212; Where Rigor Earns Its Keep</strong></h2><p>Now you engineer for real.<br>This is where reliability, scalability, and compliance matter.</p><p>You refactor AI-drafted code to match your team&#8217;s patterns.<br>You design observability, failure handling, retries, metrics.<br>You test the system not against &#8220;does it run?&#8221; but &#8220;does it behave as designed under stress?&#8221;<br>You harden security, define SLAs, automate everything you can.</p><p>And because you have specs from the specification phase, you can do all of that <em>without losing speed</em>.</p><p><strong>What production-ready actually means:</strong></p><p>That payment service gets rebuilt&#8212;possibly with AI assistance, but now guided by specs.<br>Error handling covers network failures, invalid webhooks, partial successes.<br>Security review catches potential vulnerabilities.<br>Performance testing validates behavior under load.<br>Observability tracks webhook delivery rates, retry patterns, dead-letter queue depths.</p><p>The result looks similar to the prototype but has fundamentally different quality characteristics.<br>You own the critical 30% that makes code reliable.</p><p>This is where AI becomes an accelerator instead of a liability.<br>You&#8217;re no longer prompting blindly&#8212;you&#8217;re guiding the model with context, structure, and constraints that come from real understanding.</p><div><hr></div><h2><strong>Why This Works</strong></h2><p>Because it mirrors how humans actually learn.</p><p>We explore, we make sense of what we saw, then we apply that sense with discipline.<br>AI amplifies each of those stages, but it doesn&#8217;t remove the need for any of them.</p><p>Skip exploration, and you design in a vacuum.<br>Skip specification, and your team never shares the same mental model.<br>Skip engineering, and you ship 
prototypes pretending to be products.</p><p>The hybrid loop solves for all three. It&#8217;s not rebellion&#8212;it&#8217;s <strong>alignment with reality</strong>.</p><div><hr></div><h2><strong>Integration Is Non-Negotiable</strong></h2><p>There&#8217;s one more truth most AI experiments ignore: you can&#8217;t build an AI-native workflow in isolation.</p><p>Your company already has Jira, GitHub, Slack, Notion, CI/CD pipelines, compliance processes, visibility rules.<br>The teams that make AI stick don&#8217;t build a parallel universe; they plug AI into the one that already exists.</p><p>That&#8217;s where ideas like the <strong>Model Context Protocol (MCP)</strong> matter&#8212;not as buzzwords, but as bridges.</p><p><strong>What this means practically:</strong></p><p>Your AI workflow needs to operate where your team already works:</p><ul><li><p><strong>Specs sync with Jira or Linear</strong>&#8212;not locked in isolated markdown files.<br>When the PM agent creates requirements, they become real tickets your project managers can see and track.<br>When specs evolve, ticket descriptions update automatically.</p></li><li><p><strong>Dev agents create PRs in GitHub</strong>&#8212;going through the same code review process everyone else uses.<br>No special approval paths. No black box commits.<br>Every AI-generated change is visible, reviewable, and traceable.</p></li><li><p><strong>Updates flow to Slack</strong>&#8212;where everyone can see progress.<br>&#8220;Payment service specification phase complete. Specs published to PROJ-1234. 
Ready for production engineering.&#8221;<br>The team knows what&#8217;s happening without asking.</p></li><li><p><strong>Agents access company context</strong>&#8212;internal wikis, documentation repositories, design systems, API catalogs.<br>They work with <em>your</em> patterns and standards, not generic examples from training data.</p></li></ul><p>This isn&#8217;t optional infrastructure&#8212;it&#8217;s what prevents organizational antibodies from rejecting your AI workflow.</p><p>Specs, updates, and reviews should flow naturally across the same channels your teams already use.<br>Otherwise, the &#8220;AI workflow&#8221; becomes a black box&#8212;and black boxes don&#8217;t survive organizational politics, no matter how brilliant the tech inside them is.</p><div><hr></div><h2><strong>When to Be in Each Mode</strong></h2><p><strong>Explore</strong> when you&#8217;re in unknown territory:</p><ul><li><p>New APIs you&#8217;ve never touched</p></li><li><p>Unfamiliar domains</p></li><li><p>Multiple competing approaches to evaluate</p></li><li><p>Learning how something works before committing architecture</p></li></ul><p><strong>Specify</strong> once you&#8217;ve seen enough patterns to form a mental model:</p><ul><li><p>An approach clearly works better than alternatives</p></li><li><p>Multiple people need to work on related code</p></li><li><p>You&#8217;re about to make architectural decisions with long-term impact</p></li><li><p>The code needs to evolve and be maintained</p></li></ul><p><strong>Engineer</strong> when what you&#8217;re building actually matters:</p><ul><li><p>Shipping to users (real stakes, real consequences)</p></li><li><p>Building systems that compose with other systems</p></li><li><p>Code maintained by people other than the original author</p></li><li><p>Reliability matters more than exploration speed</p></li></ul><p>The art is knowing which mode you&#8217;re in, and not mixing them.<br>The biggest failures I&#8217;ve seen happen when teams 
vibe-code a production feature or write a 40-page spec for something they&#8217;ve never tested.</p><div><hr></div><h2><strong>What the Best Teams Do</strong></h2><p>They move fast <em>and</em> document fast.<br>They let AI generate a dozen wrong answers so humans can find the right one faster.<br>They treat specifications as living artifacts that evolve with learning.<br>They keep their AI agents plugged into company systems so visibility and accountability never drop.<br>They don&#8217;t argue about process&#8212;they evolve it.</p><p>And most importantly: they understand that <strong>AI-native software development isn&#8217;t about replacing engineers</strong>&#8212;it&#8217;s about <strong>amplifying learning loops</strong>.</p><div><hr></div><h2><strong>Bottom Line</strong></h2><p>You can&#8217;t <em>spec</em> your way to innovation.<br>You can&#8217;t <em>vibe</em> your way to reliability.</p><p>AI is forcing us to grow up as engineers.<br>We can&#8217;t cling to old frameworks or fake agility slogans.<br>We need to learn, specify, and engineer at the speed of understanding.</p><p>The teams that win aren&#8217;t choosing sides in the spec-first vs ship-fast debate.<br>They&#8217;re recognizing which phase they&#8217;re in and operating accordingly.<br>They&#8217;re integrating AI workflows into existing infrastructure instead of building isolated experiments.<br>They&#8217;re using frameworks like <strong>GitHub Spec Kit</strong>, <strong>BMAD-METHOD</strong>, and <strong>AWS AI-DLC</strong> not as religion but as adaptable patterns.</p><p>This philosophy is what <strong>I&#8217;m building into <a href="https://fabriqa.ai/">Fabriqa</a></strong>&#8212;an AI-native software factory designed around learning velocity and adaptive process.<br>It&#8217;s a space where human intent, specification, and code co-evolve through the <em>Explore &#8594; Specify &#8594; Engineer</em> loop.<br>I&#8217;m opening an <strong>early alpha</strong> soon and looking for 
<strong>opinionated, experienced engineers, PMs, and architects</strong> who want to shape how AI-native development actually works in practice.</p><p>If that sounds like you, I&#8217;d love your feedback&#8212;you can join the waitlist at <strong><a href="https://fabriqa.ai/">fabriqa.ai</a></strong> or reach out directly.<br>Let&#8217;s build the next generation of engineering, together.</p>]]></content:encoded></item><item><title><![CDATA[Beyond Vibe-Coding: Spec-Driven Development]]></title><description><![CDATA[Vibe-coding is fun. You throw an idea at the AI, see what it spits out, tweak it, and repeat. For side projects, that&#8217;s fine.]]></description><link>https://www.cengizhan.com/p/beyond-vibe-coding-spec-driven-development-80e80aade50e</link><guid isPermaLink="false">https://www.cengizhan.com/p/beyond-vibe-coding-spec-driven-development-80e80aade50e</guid><dc:creator><![CDATA[Cengiz Han]]></dc:creator><pubDate>Mon, 28 Jul 2025 12:31:48 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/0ab8f34f-55b0-482c-9a6a-ade630591055_1024x751.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Vibe-coding is fun.
You throw an idea at the AI, see what it spits out, tweak it, and repeat. For side projects, that&#8217;s&nbsp;fine.</p><p>But for production systems? Real products? <strong>Enterprises don&#8217;t ship on&nbsp;vibes.</strong></p><h3>We&#8217;re Coding in English Now. So&nbsp;What?</h3><p>We&#8217;re heading into a world where English <em>is</em> the new interface. I&#8217;ve never had to write a line of assembly code. Most devs today never see bytecode. Pretty soon, many won&#8217;t touch a traditional language&nbsp;either.</p><p>That doesn&#8217;t make precision any less critical. It simply means the precision has to be captured earlier, in the&nbsp;specs.</p><p>You&#8217;re not writing for the compiler anymore. You&#8217;re crafting a prompt for the AI. If your prompt is unclear, the result will not only be buggy but also incorrect in ways you'll only realize too&nbsp;late.</p><p>Like I said in a recent&nbsp;tweet:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://x.com/hancengiz/status/1945469135829299393" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!p1d5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76b0810b-dd71-4dc5-8594-26795512e689_1192x410.png 424w, https://substackcdn.com/image/fetch/$s_!p1d5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76b0810b-dd71-4dc5-8594-26795512e689_1192x410.png 848w, https://substackcdn.com/image/fetch/$s_!p1d5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76b0810b-dd71-4dc5-8594-26795512e689_1192x410.png 1272w,
https://substackcdn.com/image/fetch/$s_!p1d5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76b0810b-dd71-4dc5-8594-26795512e689_1192x410.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!p1d5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76b0810b-dd71-4dc5-8594-26795512e689_1192x410.png" width="1192" height="410" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/76b0810b-dd71-4dc5-8594-26795512e689_1192x410.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:410,&quot;width&quot;:1192,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:115743,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://x.com/hancengiz/status/1945469135829299393&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.cengizhan.com/i/176760615?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76b0810b-dd71-4dc5-8594-26795512e689_1192x410.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!p1d5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76b0810b-dd71-4dc5-8594-26795512e689_1192x410.png 424w, https://substackcdn.com/image/fetch/$s_!p1d5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76b0810b-dd71-4dc5-8594-26795512e689_1192x410.png 848w, 
https://substackcdn.com/image/fetch/$s_!p1d5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76b0810b-dd71-4dc5-8594-26795512e689_1192x410.png 1272w, https://substackcdn.com/image/fetch/$s_!p1d5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76b0810b-dd71-4dc5-8594-26795512e689_1192x410.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p>That&#8217;s the whole point.
The interface might change, but the need for clarity&nbsp;doesn&#8217;t.</p><h3>Spec-Driven Development: Less Magic, More Alignment</h3><p>Instead of jumping straight into prompts or code, I start with a spec. Simple as&nbsp;that.</p><ul><li><p>What should the system&nbsp;do?</p></li><li><p>What&#8217;s out of&nbsp;scope?</p></li><li><p>What happens when something breaks?</p></li><li><p>What are the business&nbsp;rules?</p></li></ul><p>This doesn&#8217;t mean writing 30 pages of documentation before every sprint. A good spec might be a short markdown file. But it&#8217;s clear. It&#8217;s testable. It provides the AI (or another developer) with something to align&nbsp;to.</p><p>And when that spec becomes the source of truth, everything flows better: code, tests, documentation, and even conversations.</p><h3>Amazon Kiro Is a Glimpse of What&#8217;s&nbsp;Coming</h3><p>You can already see the direction this is heading. Amazon recently launched <a href="https://kiro.dev/docs/specs/concepts/">Kiro</a>, an AI agent designed to assist in creating workflows and infrastructure. But it doesn&#8217;t just ask you to describe your app. It starts with a structured spec.</p><p>Why? Because specs reduce ambiguity. They make the AI&#8217;s job easier. And they make your code more predictable. That design choice says a&nbsp;lot.</p><p>This spec-first mindset isn&#8217;t a trend.
It&#8217;s a design pattern for tools that want to build things that actually&nbsp;work.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zcIA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd231f150-cc0e-4977-bddb-cf92eb60171d_1024x751.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zcIA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd231f150-cc0e-4977-bddb-cf92eb60171d_1024x751.png 424w, https://substackcdn.com/image/fetch/$s_!zcIA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd231f150-cc0e-4977-bddb-cf92eb60171d_1024x751.png 848w, https://substackcdn.com/image/fetch/$s_!zcIA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd231f150-cc0e-4977-bddb-cf92eb60171d_1024x751.png 1272w, https://substackcdn.com/image/fetch/$s_!zcIA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd231f150-cc0e-4977-bddb-cf92eb60171d_1024x751.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!zcIA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd231f150-cc0e-4977-bddb-cf92eb60171d_1024x751.png" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d231f150-cc0e-4977-bddb-cf92eb60171d_1024x751.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!zcIA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd231f150-cc0e-4977-bddb-cf92eb60171d_1024x751.png 424w, https://substackcdn.com/image/fetch/$s_!zcIA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd231f150-cc0e-4977-bddb-cf92eb60171d_1024x751.png 848w, https://substackcdn.com/image/fetch/$s_!zcIA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd231f150-cc0e-4977-bddb-cf92eb60171d_1024x751.png 1272w, https://substackcdn.com/image/fetch/$s_!zcIA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd231f150-cc0e-4977-bddb-cf92eb60171d_1024x751.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h3>AI Makes Things Faster. Specs Make Them&nbsp;Safer.</h3><p>Yes, AI can speed things up. No doubt about that. But speed without structure is a mess waiting to happen. I&#8217;ve seen it play out in teams that moved fast, skipped the alignment, and spent months cleaning up avoidable bugs.</p><p>Specs don&#8217;t slow you down.
They stop you from crashing&nbsp;later.</p><p>Having your spec ready before starting to write code or prompts helps everyone work more efficiently and reduces unexpected issues.</p><h3>TL;DR</h3><ul><li><p>Coding in English is here, but clarity still&nbsp;matters.</p></li><li><p>Prompting without structure leads to drift, bugs, and fragile&nbsp;systems.</p></li><li><p>Spec-first thinking ensures everything remains aligned, whether your team consists of humans or&nbsp;AI.</p></li><li><p>Tools like Kiro show where things are headed: structured input, reliable&nbsp;output.</p></li><li><p>Vibes are fun. Specs are how you&nbsp;ship.</p></li></ul><p>Write the spec. Let the tools do the rest. That&#8217;s how production gets&nbsp;done.</p>]]></content:encoded></item><item><title><![CDATA[(Micro)services with Event Notifications]]></title><description><![CDATA[Microservices is an architectural style that describes software design as independently deployable, loosely coupled services which are modelled around a particular business domain.]]></description><link>https://www.cengizhan.com/p/micro-services-with-event-notifications-c8462792e700</link><guid isPermaLink="false">https://www.cengizhan.com/p/micro-services-with-event-notifications-c8462792e700</guid><dc:creator><![CDATA[Cengiz Han]]></dc:creator><pubDate>Mon, 25 Feb 2019 09:25:56 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/508874e4-cb2a-4336-927e-43c8bd49fcd4_500x450.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Microservices is an architectural style that describes software design as independently deployable, loosely coupled services which are modelled around a particular business&nbsp;domain.</p><p>Extracting different business domain capabilities from a single-process monolith application and creating a system design that contains smaller service processes enables you to <em>scale</em> and <em>deploy</em> each service separately.</p><p>The fact that each
service needs to be deployable separately both requires and enables application deployment automation: Continuous Delivery.</p><p>Martin Fowler describes the prerequisites of microservices in his articles and emphasises the importance of <a href="https://martinfowler.com/bliki/DevOpsCulture.html">DevOpsCulture</a>.</p><blockquote><p><a href="https://martinfowler.com/bliki/MicroservicePrerequisites.html">You must be this tall to use microservices</a>.</p></blockquote><p>I have been working mostly with micro-services and event-driven systems for almost the last 7 years. In this post, I will try to explain my view on a particular challenge when designing your service-oriented&nbsp;system.</p><p><strong>How do they talk to each&nbsp;other?</strong></p><p>One of the fastest ways to start is direct service-to-service communication. It is a fast way to get one service calling another, but it brings lots of problems with it. Going down this route creates a distributed mess: it requires all services to be available all the time, and services end up depending on each other, which makes things like failing safely really hard, mostly impossible.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!GJHe!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffae8dcc5-5695-4d3a-af36-ac1eda9df111_500x450.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!GJHe!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffae8dcc5-5695-4d3a-af36-ac1eda9df111_500x450.png 424w,
https://substackcdn.com/image/fetch/$s_!GJHe!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffae8dcc5-5695-4d3a-af36-ac1eda9df111_500x450.png 848w, https://substackcdn.com/image/fetch/$s_!GJHe!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffae8dcc5-5695-4d3a-af36-ac1eda9df111_500x450.png 1272w, https://substackcdn.com/image/fetch/$s_!GJHe!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffae8dcc5-5695-4d3a-af36-ac1eda9df111_500x450.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!GJHe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffae8dcc5-5695-4d3a-af36-ac1eda9df111_500x450.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fae8dcc5-5695-4d3a-af36-ac1eda9df111_500x450.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!GJHe!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffae8dcc5-5695-4d3a-af36-ac1eda9df111_500x450.png 424w, 
https://substackcdn.com/image/fetch/$s_!GJHe!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffae8dcc5-5695-4d3a-af36-ac1eda9df111_500x450.png 848w, https://substackcdn.com/image/fetch/$s_!GJHe!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffae8dcc5-5695-4d3a-af36-ac1eda9df111_500x450.png 1272w, https://substackcdn.com/image/fetch/$s_!GJHe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffae8dcc5-5695-4d3a-af36-ac1eda9df111_500x450.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a><figcaption class="image-caption">Microservices directly calling each other. A bit of a mess,&nbsp;right?</figcaption></figure></div><p>You always need to plan for failures: how are you going to handle them, how are you going to <strong>fail safely</strong>? Service-to-service sync communication makes handling failures much harder. When a business operation needs the orchestration of a couple of services to complete successfully, you need to <em><strong>start thinking about decoupling services, eventual consistency and bounded context</strong></em> (a way of defining the boundaries of a complex domain as business contexts).</p><blockquote><p>A system&#8217;s being &#8220;<strong>fail</strong>-<strong>safe</strong>&#8221; means not that <strong>failure</strong> is impossible or improbable, but rather that the system&#8217;s design prevents or mitigates unsafe consequences of the system&#8217;s <strong>failure</strong>. That is, if and when a &#8220;<strong>fail</strong>-<strong>safe</strong>&#8221; system &#8220;<strong>fails</strong>&#8221;, it is &#8220;<strong>safe</strong>&#8221; or at least no less <strong>safe</strong> than when it was operating correctly.
<a href="https://en.wikipedia.org/wiki/Fail-safe">wikipedia</a>.</p></blockquote><p>For example, what happens if the SMTP service is down, or you get an error message back from the Notification Service when you call it from the Order Service? Are you going to cancel the order for that reason? Of course not! If you are doing service-to-service communication, you need to create a retry mechanism in the sync context, or persist state somewhere to be processed by a background job that keeps sending notifications later, when the notification service is back online and functioning. And you need to think about all the failure scenarios in all service-to-service communications; needless to say, it is a complex solution, and <em><strong>guaranteeing reliability and maintainability while keeping operability simple</strong></em> becomes a hard thing to accomplish.</p><h3>Event Driven Architecture</h3><p>Event Driven Architecture is mostly known as storing all the application state changes as a sequence of events. But when you look at different implementations on different projects, you see different usage patterns under the names of Event Sourcing, Command Query Responsibility Segregation (CQRS), and Event Notification. Martin Fowler published an article and, in April 2017 at a GOTO Conference, talked about the different types of Event Driven Architecture and created a better classification for the models we were using under that name. It is not my place to classify them when a guru has done it&nbsp;already.</p><p>I have used CQRS before in two projects; one of them went to production, one of them was changed to <em><strong>&#8220;events as a secondary concern&#8221;</strong></em> as I called it back then, without knowing the better name to identify the pattern.
I now know it is named <em><strong>Event Notification</strong></em>.</p><blockquote><p><em><strong>Event notification</strong>:</em> components communicating via events<br><em><strong>Event-based State Transfer</strong>:</em> allowing components to access data without calling the source.<br><em><strong>Event Sourcing</strong></em>: using an event log as the primary record for a system<br><em><strong>CQRS</strong>:</em> having a separate component for updating a store from any readers of the&nbsp;store</p></blockquote><blockquote><p>Please see <a href="https://martinfowler.com/articles/201701-event-driven.html">Martin&#8217;s post</a> and the video on this link for further information about the different types.</p></blockquote><h3>Event Notification</h3><p>There is a slight difference between Event Notification and Event-based State Transfer. In the <em>EbST</em> design, events contain all the information the consumer service needs, but an Event Notification contains only information about the event itself, and consumer services need to go and get the full context about that event from the origin service. Imagine you have an event called OrderCreated that contains an OrderNumber and maybe some other metadata about the order; if you are creating a Notification Service to send an email/SMS to the customer about their recent order, you might need to go to the Order Service and ask for the details of that order with the order number you just received via the OrderCreated event.</p><p>Event Notification provides great decoupling, and it allows other systems to hook up to events without the source service knowing about it. You can create a new service which is interested in an event from any service without telling the originating service or asking it for any change.</p><p>One gotcha about this design: you need to find a way to track all the dependencies, which service depends on what event from which service.
You cannot tell what happens across the whole system when a particular event occurs; you cannot just read the code in the source service and see what happens after that event. You need to go through all the subscriptions and find out what happens in the whole system. This is generally the thing we all ignore until we end up in a place where we have no idea what happens in the&nbsp;system.</p><p>One of the further things to think about once you start using Event Notification is what goes into the events. Should you use Event-based State Transfer, or do you need to call the Order Service to get more information about the order that was just&nbsp;created?</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!BCN8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45b2e97d-cf63-4e9e-8235-b7e89ac305ad_500x501.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!BCN8!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45b2e97d-cf63-4e9e-8235-b7e89ac305ad_500x501.png 424w, https://substackcdn.com/image/fetch/$s_!BCN8!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45b2e97d-cf63-4e9e-8235-b7e89ac305ad_500x501.png 848w, https://substackcdn.com/image/fetch/$s_!BCN8!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45b2e97d-cf63-4e9e-8235-b7e89ac305ad_500x501.png 1272w, https://substackcdn.com/image/fetch/$s_!BCN8!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45b2e97d-cf63-4e9e-8235-b7e89ac305ad_500x501.png 1456w" sizes="100vw"><img
src="https://substackcdn.com/image/fetch/$s_!BCN8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45b2e97d-cf63-4e9e-8235-b7e89ac305ad_500x501.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/45b2e97d-cf63-4e9e-8235-b7e89ac305ad_500x501.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!BCN8!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45b2e97d-cf63-4e9e-8235-b7e89ac305ad_500x501.png 424w, https://substackcdn.com/image/fetch/$s_!BCN8!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45b2e97d-cf63-4e9e-8235-b7e89ac305ad_500x501.png 848w, https://substackcdn.com/image/fetch/$s_!BCN8!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45b2e97d-cf63-4e9e-8235-b7e89ac305ad_500x501.png 1272w, https://substackcdn.com/image/fetch/$s_!BCN8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45b2e97d-cf63-4e9e-8235-b7e89ac305ad_500x501.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a><figcaption class="image-caption">Microservices do not know about each other; they are decoupled by event notification</figcaption></figure></div><p>In the
above model, when you use Event Notification model you create services that publish events and downstream services subscribes to those events on your event stream. This way, you push the complexity of handling failures in to event stream and event stream processors, your downstream services.</p><p>Order Service publishes an order created event and notification service subscribes to OrderCreated event and send a notification to customer.</p><p>Event store system stores all your events, you can use something like Kafka, RabbitMQ, nsq to store events and dispatch each event type to subscribers. If a service is down temporarily event store sends it to subscriber when they are back, you do not need to implement anything different in this scenario your system design handles it automatically.</p><p>Now you have an event driven architecture, that <em>notifies</em> other system when something happens, when you have a new requirement to react to O<em>rderCreated</em> event you do not need to go and make any changes in your Order Service. All you need to do is create a new subscriber service that subscribes to O<em>rderCreated</em> event.</p><p>Say, you want to show your customers product reviews separately by other customers who actually bought that product. They are verified customer reviews and more valuable to potential buyers, so you create a new subscription in your Customer Product Reviews Service to listen for O<em>rderCreated</em> and other order related events to mark customers as verified customers for that product. Very simple, it does not touch the existing part of the system, you build and deploy it separately without touching upstream&nbsp;service.</p><p>I am planning to write a follow up post with a code example to demonstrate what I mentioned here. 
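</p>

<p>To make the model concrete, here is a minimal in-process sketch of the Event Notification flow described above. The names (<em>EventBus</em>, <em>OrderCreated</em>, the handlers) are illustrative; in a real system the bus would be a broker such as Kafka or RabbitMQ, and each subscriber would be a separately deployed service.</p>

```python
# Minimal in-process sketch of the Event Notification model.
# In production the "bus" would be Kafka/RabbitMQ/nsq, not a dict.

from collections import defaultdict

class EventBus:
    """Dispatches each published event to every registered subscriber."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
notifications, verified = [], []

# Notification Service reacts to OrderCreated.
bus.subscribe("OrderCreated",
              lambda e: notifications.append(f"notify customer {e['customer_id']}"))

# Later requirement: Customer Product Reviews Service marks verified buyers.
# Added without touching the Order Service or the existing subscriber.
bus.subscribe("OrderCreated",
              lambda e: verified.append((e["customer_id"], e["product_id"])))

# The Order Service only publishes; it knows nothing about downstream services.
bus.publish("OrderCreated", {"customer_id": 42, "product_id": "sku-1"})

print(notifications)  # ['notify customer 42']
print(verified)       # [(42, 'sku-1')]
```

<p>The key property is visible in the sketch: the publisher never changes when a new subscriber appears; the second <code>subscribe</code> call is the entire "deployment" of the new requirement.</p><p>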
Until then, I highly recommend watching <a href="https://martinfowler.com/articles/201701-event-driven.html">Martin&#8217;s keynote at the GOTO conference</a>.</p><p><em>Originally published at <a href="https://medium.com/p/4425ef5be871">cengizhan.com</a> on September 4,&nbsp;2017.</em></p><div><hr></div><p><a href="https://medium.com/hepsiburadatech/micro-services-with-event-notifications-c8462792e700">(Micro)services with Event Notifications</a> was originally published in <a href="https://medium.com/hepsiburadatech">hepsiburadatech</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded></item><item><title><![CDATA[3 Pillars of Observability]]></title><description><![CDATA[Observability of the system in production comes as a requirement when we design complex systems.]]></description><link>https://www.cengizhan.com/p/3-pillars-of-observability-8e6cb5434206</link><guid isPermaLink="false">https://www.cengizhan.com/p/3-pillars-of-observability-8e6cb5434206</guid><dc:creator><![CDATA[Cengiz Han]]></dc:creator><pubDate>Sun, 19 Nov 2017 17:54:47 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/1810b8d7-07b2-49f7-9eba-383537333c91_400x600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Observability of the system in production comes as a requirement when we design complex systems. Some say being able to monitor your system in production is more important than testing all of its functionality during development. To me, they are not really comparable, and you cannot give up one for the&nbsp;other.</p><p>Traditionally, if you have an IT operations department in your organization, you probably have people who do <strong>blackbox monitoring</strong> with tools like Nagios. What these tools give you are signals like <em>system is down, server/service is down, CPU consumption is high, etc. 
</em>This is a must-have and very good for identifying the <em>symptoms</em> of a problem, but not the <em>root&nbsp;cause</em>.</p><p>Once you get these symptoms telling you something is wrong, you need to dive deep and understand the root cause. This is where <strong>whitebox monitoring</strong> comes into the picture. Whitebox monitoring can help you identify the root cause of a problem and, more importantly, if it is designed right it can alert you proactively about preventable problems by looking at trends in the system. The internals of an application can provide more valuable and actionable alerts, so you can act on critical cases or notice things like performance problems early and take action before things go&nbsp;down.</p><p>Logging, metrics and distributed tracing, on the other hand, are whitebox monitoring: a category of monitoring tools and techniques that work with data reported from the internals of a system. I would like to write about these 3 pillars of observability in the scope of whitebox monitoring. 
When you position these tools correctly you might not need to do blackbox monitoring that often, but it is still good to keep it on, if you ask&nbsp;me.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!oxZa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8db20221-644a-44d6-9a46-9af246af4fa8_400x600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!oxZa!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8db20221-644a-44d6-9a46-9af246af4fa8_400x600.png 424w, https://substackcdn.com/image/fetch/$s_!oxZa!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8db20221-644a-44d6-9a46-9af246af4fa8_400x600.png 848w, https://substackcdn.com/image/fetch/$s_!oxZa!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8db20221-644a-44d6-9a46-9af246af4fa8_400x600.png 1272w, https://substackcdn.com/image/fetch/$s_!oxZa!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8db20221-644a-44d6-9a46-9af246af4fa8_400x600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!oxZa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8db20221-644a-44d6-9a46-9af246af4fa8_400x600.png" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8db20221-644a-44d6-9a46-9af246af4fa8_400x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!oxZa!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8db20221-644a-44d6-9a46-9af246af4fa8_400x600.png 424w, https://substackcdn.com/image/fetch/$s_!oxZa!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8db20221-644a-44d6-9a46-9af246af4fa8_400x600.png 848w, https://substackcdn.com/image/fetch/$s_!oxZa!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8db20221-644a-44d6-9a46-9af246af4fa8_400x600.png 1272w, https://substackcdn.com/image/fetch/$s_!oxZa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8db20221-644a-44d6-9a46-9af246af4fa8_400x600.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><ul><li><p><em><strong>Logging</strong></em></p></li><li><p><em><strong>Metrics</strong></em></p></li><li><p><em><strong>Distributed Tracing</strong></em></p></li></ul><p>What are the differences between these three, and how do we build this observability foundation with these 3&nbsp;pillars?</p><h3><strong>Logging</strong></h3><p>This is the pillar that almost every system I 
have ever worked on has implemented.</p><p>Logs are events happening in your system: detailed, prioritized messages from your system. Thinking of logs as events in your system is not a false&nbsp;idea.</p><p>The biggest drawback of logs is that they are expensive to process, store and ship. They contain data for every single request that hits your system. If you are running your application on hundreds of servers, you need to aggregate the logs carefully to a central location; otherwise it becomes impossible to check them on each server. ELK is the most common stack here, as you probably&nbsp;know.</p><p>That said, there are also drawbacks to shipping all logs to be aggregated centrally. If you are dealing with a huge volume of traffic, you need to think about what to ship and what not to ship (hint: correct logging levels), and you also need the right scale for your aggregation cluster, in most cases an Elasticsearch cluster. It is not uncommon for an Elasticsearch cluster that aggregates all the logs to fail to catch up when there is a spike of logs on days like Black&nbsp;Friday.</p><p>Libraries like SLF4J, log4j and log4net (there are lots of options depending on your tech stack) are used to create formatted plaintext logs. The most popular way of shipping application logs is writing them to files on disk and shipping them to ELK with tools like FileBeat, but your application can also ship its logs directly to your log aggregator. There are lots of options you can evaluate for your case. Once I developed a log4net appender that pushed logs as AMQP messages (we were using RabbitMQ for this); Logstash then received the logs from RabbitMQ and inserted them into Elasticsearch, and we visualized them with&nbsp;Kibana.</p><p>Recently we started to use Docker Engine to ship our logs. Docker added a feature to ship logs to central log repositories like the ELK stack. 
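</p>

<p>As a small illustration of structured logging, here is a sketch using only Python&#8217;s standard library to emit one JSON document per log line, which shippers like FileBeat or a Docker logging driver can forward to Elasticsearch without extra parsing. The field names are my own choice for the example, not a required ELK schema.</p>

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single-line JSON document."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("order-service")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Emits: {"level": "INFO", "logger": "order-service", "message": "order created"}
logger.info("order created")
```

<p>Writing logs as JSON from the start avoids fragile pattern-based parsing on the aggregation side, and any shipper can forward the lines as-is.</p><p>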
Most of the central logging repositories I know support the Graylog Extended Log Format (GELF), and I believe that is <a href="https://docs.docker.com/engine/admin/logging/overview/#configure-the-default-logging-driver">how Docker Engine&nbsp;ships.</a></p><p>You can also get logs from your infrastructure tools. Most of the popular message brokers (Kafka, RabbitMQ, nsq), HTTP reverse proxies, load balancers, databases, firewalls, application servers and middleware provide their logs, and you can ship them to your central log aggregator.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!4Slw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe06654af-a946-472d-997b-3d4979a2f0de_1024x358.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!4Slw!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe06654af-a946-472d-997b-3d4979a2f0de_1024x358.png 424w, https://substackcdn.com/image/fetch/$s_!4Slw!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe06654af-a946-472d-997b-3d4979a2f0de_1024x358.png 848w, https://substackcdn.com/image/fetch/$s_!4Slw!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe06654af-a946-472d-997b-3d4979a2f0de_1024x358.png 1272w, https://substackcdn.com/image/fetch/$s_!4Slw!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe06654af-a946-472d-997b-3d4979a2f0de_1024x358.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!4Slw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe06654af-a946-472d-997b-3d4979a2f0de_1024x358.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e06654af-a946-472d-997b-3d4979a2f0de_1024x358.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!4Slw!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe06654af-a946-472d-997b-3d4979a2f0de_1024x358.png 424w, https://substackcdn.com/image/fetch/$s_!4Slw!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe06654af-a946-472d-997b-3d4979a2f0de_1024x358.png 848w, https://substackcdn.com/image/fetch/$s_!4Slw!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe06654af-a946-472d-997b-3d4979a2f0de_1024x358.png 1272w, https://substackcdn.com/image/fetch/$s_!4Slw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe06654af-a946-472d-997b-3d4979a2f0de_1024x358.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h3><strong>Metrics</strong></h3><p>Metrics are numbers that are aggregatable and measured over intervals of time, as time series data. 
Metrics are optimized for storage and processing, since they are just numbers aggregated over intervals of&nbsp;time.</p><p>One advantage of metrics-based monitoring is that the overhead of metric generation and storage is constant: unlike log-based monitoring, it does not grow in direct proportion to system load. Disk and processing utilization do not change as traffic increases; storage only grows as new time series are captured, which happens when you add new metrics to the instrumentation in your application code or when you spin up new services/containers/hosts.</p><p>Prometheus (p8s) clients do not send each and every measurement to the server. Popular Prometheus client libraries, for example Coda Hale&#8217;s popular <a href="http://metrics.dropwizard.io/">metrics</a> library (it is Java, but the project has direct ports to other languages), aggregate time series data in your application process and generate metrics output based on in-process calculations. I recommend watching <a href="https://www.youtube.com/watch?v=czes-oa0yik">his presentation on YouTube</a> if you want to learn more about his metrics&nbsp;library.</p><p>So, if you want to start using Prometheus to collect metrics from your application, you first need to add instrumentation to your application code. You can find a list of <a href="https://prometheus.io/docs/instrumenting/clientlibs/">client libraries</a> on the p8s website. Prometheus works pull-based: you use one of the available client libraries to collect metrics in your application and expose them over HTTP, generally on a /metrics endpoint in your application. Then you configure Prometheus to scrape metrics from your application every few&nbsp;seconds.</p><p>Metrics are far more efficient than querying and aggregating log data. 
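</p>

<p>To show what pull-based exposition looks like, here is a hand-rolled sketch of a <code>/metrics</code> endpoint: the application aggregates counters in-process and renders them as plain text for the scraper. This is a toy stand-in, not the real exposition format; in practice you would use an official Prometheus client library, which also handles metric types, labels and help text.</p>

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-process aggregation: the application only bumps numbers; nothing is pushed.
METRICS = {"http_requests_total": 0}

def render_metrics():
    """Render every time series as a 'name value' line, Prometheus-style."""
    return "\n".join(f"{name} {value}" for name, value in METRICS.items()) + "\n"

class MetricsHandler(BaseHTTPRequestHandler):
    """Serves the aggregated counters on /metrics for the scraper to pull."""
    def do_GET(self):
        if self.path == "/metrics":
            body = render_metrics().encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(body)

# Application code increments counters as requests are handled...
METRICS["http_requests_total"] += 1
# ...and the scraper pulls the rendered output every few seconds:
print(render_metrics())  # http_requests_total 1

# To actually serve it: HTTPServer(("", 8000), MetricsHandler).serve_forever()
```

<p>Note the pull direction: the cost of a scrape stays constant no matter how much traffic the counters have absorbed in between.</p><p>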
But logs can give you exact data: if you want the exact average of your server&#8217;s response times, you can log them and then write aggregation queries on Elasticsearch. We have to remember that metrics are not a hundred percent accurate; they rely on statistical algorithms. Tools like Prometheus and the popular metrics client libraries implement some advanced algorithms to give us the most accurate numbers they can. Do not get me wrong! I am not saying use logs; I am saying use both logs and metrics, each for the right&nbsp;purpose.</p><p>Finally, if you want to learn Prometheus from scratch and, like me, you like learning from videos, I highly recommend this talk: <a href="https://www.youtube.com/watch?v=5GYe_-qqP30">Infrastructure and application monitoring using Prometheus by Marco&nbsp;Pas</a></p><p>Once you collect all your metrics in Prometheus, you can use <a href="http://www.grafana.net">Grafana</a> to visualize those&nbsp;metrics.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!AJes!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d4c06d6-47f2-436d-90bc-2cd58efccdc5_1024x449.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!AJes!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d4c06d6-47f2-436d-90bc-2cd58efccdc5_1024x449.png 424w, https://substackcdn.com/image/fetch/$s_!AJes!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d4c06d6-47f2-436d-90bc-2cd58efccdc5_1024x449.png 848w, 
https://substackcdn.com/image/fetch/$s_!AJes!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d4c06d6-47f2-436d-90bc-2cd58efccdc5_1024x449.png 1272w, https://substackcdn.com/image/fetch/$s_!AJes!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d4c06d6-47f2-436d-90bc-2cd58efccdc5_1024x449.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!AJes!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d4c06d6-47f2-436d-90bc-2cd58efccdc5_1024x449.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6d4c06d6-47f2-436d-90bc-2cd58efccdc5_1024x449.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!AJes!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d4c06d6-47f2-436d-90bc-2cd58efccdc5_1024x449.png 424w, https://substackcdn.com/image/fetch/$s_!AJes!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d4c06d6-47f2-436d-90bc-2cd58efccdc5_1024x449.png 848w, 
https://substackcdn.com/image/fetch/$s_!AJes!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d4c06d6-47f2-436d-90bc-2cd58efccdc5_1024x449.png 1272w, https://substackcdn.com/image/fetch/$s_!AJes!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d4c06d6-47f2-436d-90bc-2cd58efccdc5_1024x449.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a><figcaption class="image-caption">Metrics on Prometheus</figcaption></figure></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!nmMs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46e98633-7c5c-4e0d-b7d1-cfbc82d98d8d_1024x613.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!nmMs!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46e98633-7c5c-4e0d-b7d1-cfbc82d98d8d_1024x613.png 424w, https://substackcdn.com/image/fetch/$s_!nmMs!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46e98633-7c5c-4e0d-b7d1-cfbc82d98d8d_1024x613.png 848w, https://substackcdn.com/image/fetch/$s_!nmMs!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46e98633-7c5c-4e0d-b7d1-cfbc82d98d8d_1024x613.png 1272w, https://substackcdn.com/image/fetch/$s_!nmMs!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46e98633-7c5c-4e0d-b7d1-cfbc82d98d8d_1024x613.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!nmMs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46e98633-7c5c-4e0d-b7d1-cfbc82d98d8d_1024x613.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/46e98633-7c5c-4e0d-b7d1-cfbc82d98d8d_1024x613.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!nmMs!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46e98633-7c5c-4e0d-b7d1-cfbc82d98d8d_1024x613.png 424w, https://substackcdn.com/image/fetch/$s_!nmMs!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46e98633-7c5c-4e0d-b7d1-cfbc82d98d8d_1024x613.png 848w, https://substackcdn.com/image/fetch/$s_!nmMs!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46e98633-7c5c-4e0d-b7d1-cfbc82d98d8d_1024x613.png 1272w, https://substackcdn.com/image/fetch/$s_!nmMs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46e98633-7c5c-4e0d-b7d1-cfbc82d98d8d_1024x613.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a><figcaption class="image-caption">Prometheus data visualized on&nbsp;Grafana</figcaption></figure></div><p><strong>What should 
I&nbsp;collect?</strong></p><p>Once you have the setup to collect your own metrics, this is the question you need to answer. If you are adding metrics for a microservice:</p><p>First, I would recommend capturing the number of requests, to observe how busy your service is and how many requests you receive per second/minute. <strong>Number of&nbsp;Requests</strong></p><p>Second, start capturing your service&#8217;s service time, basically the duration of each request, to capture your service&#8217;s latency. <strong>Service Response&nbsp;Time</strong></p><p>Then capture the number of erroneous requests, to observe what percentage of the requests coming to your service are failing. <strong>Error rate of requests.</strong></p><p>Lastly, <strong>always check the 95th percentile</strong> if you are not sure which percentile to check. The mean, or average, paints a happy picture, if you want to trick yourself.</p><p>There will be cases very specific to your application; these are just some suggestions to start thinking from. For example, in our last project we wanted to measure the ETL processing time for each product. We captured each product&#8217;s update time in the underlying system and calculated how long it took to reach the end of the ETL pipeline. This way we wanted to see whether there was a bottleneck in the Kafka-based data streaming pipeline. 
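</p>

<p>A tiny worked example of why the 95th percentile matters more than the mean. The latency numbers are made up for illustration:</p>

```python
def percentile(values, pct):
    """Nearest-rank percentile: the value at the pct% position of the sorted samples."""
    ordered = sorted(values)
    rank = max(0, round(pct / 100 * len(ordered)) - 1)
    return ordered[rank]

# 90 fast requests at 20 ms, 10 slow requests at 900 ms.
latencies_ms = [20] * 90 + [900] * 10

mean = sum(latencies_ms) / len(latencies_ms)
p95 = percentile(latencies_ms, 95)

print(mean)  # 108.0 -- the "happy picture"
print(p95)   # 900   -- what your slowest customers actually experience
```

<p>The same caveat applies to a metric like the ETL latency described above: the average can look fine while a tail of products lags far behind.</p><p>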
This way we could observe each stage of the data streaming pipeline, identify bottlenecks, and provision new Kafka Streams or Kafka Connect containers when&nbsp;needed.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!tpk6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F225628ca-2c44-44ab-8c8d-451068b9570e_1024x474.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!tpk6!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F225628ca-2c44-44ab-8c8d-451068b9570e_1024x474.png 424w, https://substackcdn.com/image/fetch/$s_!tpk6!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F225628ca-2c44-44ab-8c8d-451068b9570e_1024x474.png 848w, https://substackcdn.com/image/fetch/$s_!tpk6!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F225628ca-2c44-44ab-8c8d-451068b9570e_1024x474.png 1272w, https://substackcdn.com/image/fetch/$s_!tpk6!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F225628ca-2c44-44ab-8c8d-451068b9570e_1024x474.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!tpk6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F225628ca-2c44-44ab-8c8d-451068b9570e_1024x474.png" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/225628ca-2c44-44ab-8c8d-451068b9570e_1024x474.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!tpk6!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F225628ca-2c44-44ab-8c8d-451068b9570e_1024x474.png 424w, https://substackcdn.com/image/fetch/$s_!tpk6!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F225628ca-2c44-44ab-8c8d-451068b9570e_1024x474.png 848w, https://substackcdn.com/image/fetch/$s_!tpk6!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F225628ca-2c44-44ab-8c8d-451068b9570e_1024x474.png 1272w, https://substackcdn.com/image/fetch/$s_!tpk6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F225628ca-2c44-44ab-8c8d-451068b9570e_1024x474.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a><figcaption class="image-caption">Monitoring product update latency on our Kafka data streaming pipeline.</figcaption></figure></div><p>Logs and metrics both needs to exists in your application monitoring stack and they need to be owned by the your team to build, not an IT Ops team. 
Logs can give you insight about each single request and look for details of what happened exactly at a specific time but metrics can show you context and understand trends in our&nbsp;system.</p><h3><strong>Distributed Tracing</strong></h3><p>When logs can give you insight about a specific time and see what happened it is hard to correlate them when you are building a distributed system. Especially in the era of microservices, a request from a customer can cause hundreds of different service calls in your application.</p><p>Monitoring calls that took longer than expected, call that are failed, why they are failed can be hard to do with logs. Also finding matching logs with a unique request id is something you can accomplish but it would still be hard to query slowest calls that my customers faced.</p><p>Google published a paper with the title <strong><a href="https://research.google.com/pubs/pub36356.html">Dapper</a></strong><a href="https://research.google.com/pubs/pub36356.html">, a Large-Scale Distributed Systems Tracing Infrastructure</a> at 2010. They talked about how they trace distributed calls. June 2012 Twitter open sourced their internal distributed tracing project,&nbsp;<strong>Zipkin</strong>.</p><p>So if you are in the world of microservices and working on distributed system, you can imagine how valuable to have a visual of correlated distributed calls between services. I tried Zipkin in it&#8217;s early years and it was not easy to setup but now in the era of container, it is just one single command. But everyone was not using it and still not using probably. 
<a href="http://opentracing.io/">OpenTracing</a> was introduced as one single standard as all OSS projects and your application code to instrument your code without depending on one particular tracing&nbsp;vendor.</p><p>So now, you can use one of the <a href="http://opentracing.io/">listed</a> open source client libraries to instrument your code and publish this span information to one of the supported <a href="http://opentracing.io/documentation/pages/supported-tracers.html">tracers</a> (<a href="http://zipkin.io/">Zipkin</a>, <a href="http://uber.github.io/jaeger">Jaeger</a>, Appdash, LightStep, Hawkular, Instane and&nbsp;more).</p><p>If you remember your browsers developer tools and check network tab which calls are being made, it gives you very good insight about what your browsers does, which calls are made in parallell, which ones are taking too long to process and makes your customer wait. Distributed tracers gives you this kind of visualization, on server&nbsp;side.</p><p>For example, you can see which of your services being called, which ones take longer than expected or which ones fail when you receive a request to get list of products under a specific category, order by&nbsp;prices.</p><p>Zipkin interface let&#8217;s you query by longest and shortest duration of call stack. So you can focus on your low performing calls and understand which part of the system being a bottleneck. 
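</p>

<p>To sketch the span model behind Dapper and Zipkin, here is a minimal pure-Python version: every span carries the trace id of the whole request, a link to its parent span, and its own timing. A real tracer (Zipkin, Jaeger, an OpenTracing client) would ship these spans out of process instead of collecting them in a list, and names like <code>pricing-service.lookup</code> are invented for the example.</p>

```python
import time
import uuid

SPANS = []  # stand-in for a tracer backend

class Span:
    """One timed operation inside a distributed request."""
    def __init__(self, operation, trace_id=None, parent=None):
        self.operation = operation
        self.trace_id = trace_id or uuid.uuid4().hex  # shared end-to-end
        self.parent_id = parent.span_id if parent else None
        self.span_id = uuid.uuid4().hex

    def __enter__(self):
        self.start = time.monotonic()
        return self

    def __exit__(self, *exc):
        self.duration = time.monotonic() - self.start
        SPANS.append(self)

# One customer request fanning out to a downstream service call:
with Span("GET /products") as root:
    with Span("pricing-service.lookup", trace_id=root.trace_id, parent=root):
        time.sleep(0.01)  # pretend work in the downstream service

# Both spans share the trace id, so a tracer can rebuild the whole call tree.
assert all(s.trace_id == root.trace_id for s in SPANS)
```

<p>Grouping spans by <code>trace_id</code> and following the parent links is exactly how a tracing UI reconstructs the waterfall view of a request.</p><p>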
You can also get a visualization of the dependencies between services, which becomes very valuable when you have hundreds of systems talking to each&nbsp;other.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!RFtT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43ef6ed3-28fc-4266-a244-909aafb4c0ce_1024x324.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!RFtT!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43ef6ed3-28fc-4266-a244-909aafb4c0ce_1024x324.png 424w, https://substackcdn.com/image/fetch/$s_!RFtT!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43ef6ed3-28fc-4266-a244-909aafb4c0ce_1024x324.png 848w, https://substackcdn.com/image/fetch/$s_!RFtT!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43ef6ed3-28fc-4266-a244-909aafb4c0ce_1024x324.png 1272w, https://substackcdn.com/image/fetch/$s_!RFtT!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43ef6ed3-28fc-4266-a244-909aafb4c0ce_1024x324.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!RFtT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43ef6ed3-28fc-4266-a244-909aafb4c0ce_1024x324.png" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/43ef6ed3-28fc-4266-a244-909aafb4c0ce_1024x324.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!RFtT!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43ef6ed3-28fc-4266-a244-909aafb4c0ce_1024x324.png 424w, https://substackcdn.com/image/fetch/$s_!RFtT!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43ef6ed3-28fc-4266-a244-909aafb4c0ce_1024x324.png 848w, https://substackcdn.com/image/fetch/$s_!RFtT!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43ef6ed3-28fc-4266-a244-909aafb4c0ce_1024x324.png 1272w, https://substackcdn.com/image/fetch/$s_!RFtT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43ef6ed3-28fc-4266-a244-909aafb4c0ce_1024x324.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a><figcaption class="image-caption">A detailed view of a Zipkin&nbsp;trace</figcaption></figure></div><div><hr></div><p><a href="https://medium.com/hancengiz/3-pillars-of-observability-8e6cb5434206">3 Pillars of Observability</a> was originally published in <a href="https://medium.com/hancengiz">cengiz han</a> on Medium, where people are continuing the conversation by 
highlighting and responding to this story.</p>]]></content:encoded></item><item><title><![CDATA[(Micro)services with Event Notifications]]></title><description><![CDATA[Microservices is an architectural style that describes software design as independently deployable, loosely coupled services which are modelled around a particular business domain.]]></description><link>https://www.cengizhan.com/p/micro-services-with-event-notifications-4425ef5be871</link><guid isPermaLink="false">https://www.cengizhan.com/p/micro-services-with-event-notifications-4425ef5be871</guid><dc:creator><![CDATA[Cengiz Han]]></dc:creator><pubDate>Mon, 04 Sep 2017 09:20:51 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/9e628e2e-350e-44eb-847d-55fd7240155d_500x450.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Microservices is an architectural style that describes software design as independently deployable, loosely coupled services which are modelled around a particular business&nbsp;domain.</p><p>Extracting different business domain capabilities from a single-process monolith application and creating a system design made of smaller service processes enables you to <em>scale</em> and <em>deploy</em> each service separately.</p><p>The fact that each service needs to be deployable separately both requires and enables deployment automation, Continuous Delivery.</p><p>Martin Fowler describes the prerequisites of microservices in his articles and emphasises the importance of a <a href="https://martinfowler.com/bliki/DevOpsCulture.html">DevOpsCulture</a>.</p><blockquote><p><a href="https://martinfowler.com/bliki/MicroservicePrerequisites.html">You must be this tall to use microservices</a>.</p></blockquote><p>I have mostly been working with microservices and event-driven systems for almost the last 7 years. 
In this post, I will try to explain my view on a particular challenge you face when designing your service-oriented&nbsp;system.</p><p><strong>How do they talk to each&nbsp;other?</strong></p><p>One of the fastest ways people start is direct service-to-service communication. It is a quick way to begin making calls from one service to another, but it brings lots of problems with it. Going down this route creates a distributed mess: it requires all services to be available all the time, and each service may end up depending on the others, which makes things like failing safely really hard, often impossible.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TgTs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8aab88b3-d728-4709-9cd5-2b6d74b8d57d_500x450.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TgTs!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8aab88b3-d728-4709-9cd5-2b6d74b8d57d_500x450.png 424w, https://substackcdn.com/image/fetch/$s_!TgTs!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8aab88b3-d728-4709-9cd5-2b6d74b8d57d_500x450.png 848w, https://substackcdn.com/image/fetch/$s_!TgTs!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8aab88b3-d728-4709-9cd5-2b6d74b8d57d_500x450.png 1272w, https://substackcdn.com/image/fetch/$s_!TgTs!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8aab88b3-d728-4709-9cd5-2b6d74b8d57d_500x450.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!TgTs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8aab88b3-d728-4709-9cd5-2b6d74b8d57d_500x450.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8aab88b3-d728-4709-9cd5-2b6d74b8d57d_500x450.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!TgTs!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8aab88b3-d728-4709-9cd5-2b6d74b8d57d_500x450.png 424w, https://substackcdn.com/image/fetch/$s_!TgTs!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8aab88b3-d728-4709-9cd5-2b6d74b8d57d_500x450.png 848w, https://substackcdn.com/image/fetch/$s_!TgTs!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8aab88b3-d728-4709-9cd5-2b6d74b8d57d_500x450.png 1272w, https://substackcdn.com/image/fetch/$s_!TgTs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8aab88b3-d728-4709-9cd5-2b6d74b8d57d_500x450.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a><figcaption class="image-caption">Microservices directly calling each other. 
A bit of a mess,&nbsp;right?</figcaption></figure></div><p>You always need to plan for failures and for how you are going to handle them, how you are going to <strong>fail safely</strong>. Service-to-service sync communication makes handling failures much harder. When a business operation needs the orchestration of a couple of services to complete successfully, you need to <em><strong>start thinking about decoupling services, eventual consistency and bounded contexts</strong></em> (a way of dividing a complex domain into business contexts).</p><blockquote><p>A system&#8217;s being &#8220;<strong>fail</strong>-<strong>safe</strong>&#8221; means not that <strong>failure</strong> is impossible or improbable, but rather that the system&#8217;s design prevents or mitigates unsafe consequences of the system&#8217;s <strong>failure</strong>. That is, if and when a &#8220;<strong>fail</strong>-<strong>safe</strong>&#8221; system &#8220;<strong>fails</strong>&#8221;, it is &#8220;<strong>safe</strong>&#8221; or at least no less <strong>safe</strong> than when it was operating correctly. <a href="https://en.wikipedia.org/wiki/Fail-safe">wikipedia</a>.</p></blockquote><p>For example, what happens if the SMTP service is down, or you get an error message back from the Notification Service when you call it from the Order Service? Are you going to cancel the order for that reason? Of course not! If you are doing service-to-service communication, you need to create a retry mechanism in the sync context, or persist some state somewhere to be processed by a background job that keeps sending notifications later, when the notification service is back online and functioning. 
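</p><p>To make that fallback concrete, here is a minimal sketch of a retry-then-persist step in Python. The function name and the in-memory &#8220;outbox&#8221; list are purely illustrative, not from any particular framework; in a real system the outbox would be a durable store drained by a background job.</p>

```python
import time

def notify_with_retry(send, order_number, outbox, retries=3, delay=0.01):
    """Try the notification call a few times; on repeated failure, park the
    message in an outbox for later replay instead of failing the order."""
    for attempt in range(retries):
        try:
            return send(order_number)
        except ConnectionError:
            time.sleep(delay * (2 ** attempt))  # exponential backoff
    outbox.append(order_number)  # fail safely: the order itself still succeeds
    return None

calls = []
def flaky_send(order_number):
    calls.append(order_number)
    raise ConnectionError("notification service is down")

outbox = []
notify_with_retry(flaky_send, "ORD-42", outbox)
# After three failed attempts, "ORD-42" sits in the outbox for later delivery.
```

<p>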
And you need to think through all the failure scenarios in every service-to-service communication; needless to say, it is a complex solution, and <em><strong>guaranteeing reliability and maintainability while keeping operability simple</strong></em> becomes a hard thing to accomplish.</p><h3>Event Driven Architecture</h3><p>Event-Driven Architecture is mostly known as storing all application state changes as a sequence of events. But when you look at different implementations across projects, we see different usage patterns under the names Event Sourcing, Command Query Responsibility Segregation (CQRS), and Event Notification. Martin Fowler published an article and, <em>in April 2017 at a GOTO Conference</em>, talked about the different types of Event-Driven Architecture, creating a better classification for the models we had been using under that name. It is not my place to classify them when a guru has done it&nbsp;already.</p><p>I have used CQRS in two projects before; one of them went to production, and one was changed to <em><strong>&#8220;Events as a secondary concern&#8221;</strong></em>, as I called it back then without knowing the proper name for the pattern. 
I now know it is named <em><strong>Event Notification</strong></em>.</p><blockquote><p><em><strong>Event notification</strong>:</em> components communicating via events<br><em><strong>Event-based State Transfer</strong>:</em> allowing components to access data without calling the source.<br><em><strong>Event Sourcing</strong></em>: using an event log as the primary record for a system<br><em><strong>CQRS</strong>:</em> having a separate component for updating a store from any readers of the&nbsp;store</p></blockquote><blockquote><p>Please see <a href="https://martinfowler.com/articles/201701-event-driven.html">Martin&#8217;s post</a> and the video on that link for further information about the different types.</p></blockquote><h3>Event Notification</h3><p>There is a slight difference between Event Notification and Event-based State Transfer. In the <em>EbST</em> design, events contain all the information the consumer service needs, but an Event Notification contains only information about the event itself, and consumer services need to fetch the full context about that event from the origin service. Imagine you have an event called OrderCreated that contains an OrderNumber and maybe some other metadata about the order; if you are creating a Notification Service to send an email/SMS to the customer about their recent order, you might need to go to the Order Service and ask for the details of that order, using the order number you just received via the OrderCreated event.</p><p>Event Notification provides great decoupling: it allows other systems to hook into events without the source service even knowing. You can create a new service that is interested in an event from any service without telling the originating service or asking it for any change.</p><p>One gotcha about this design is that you need a way to see all the dependencies: which service depends on what event from which service. 
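</p><p>One pragmatic mitigation is to keep the subscription table itself queryable, so the &#8220;who reacts to this event?&#8221; question can be answered from a single place. Here is a toy in-process sketch in Python; all names are illustrative, and a real setup would read this from your broker&#8217;s subscription metadata instead.</p>

```python
from collections import defaultdict

class EventBus:
    """Toy in-process event bus whose subscription table doubles as a
    dependency catalogue: which service depends on which event."""

    def __init__(self):
        self.subscriptions = defaultdict(list)  # event type -> [(service, handler)]

    def subscribe(self, event_type, service, handler):
        self.subscriptions[event_type].append((service, handler))

    def publish(self, event_type, payload):
        for _, handler in self.subscriptions[event_type]:
            handler(payload)

    def dependents(self, event_type):
        # Answers "what happens in the whole system when this event fires?"
        return [service for service, _ in self.subscriptions[event_type]]

bus = EventBus()
bus.subscribe("OrderCreated", "notification-service", lambda event: None)
bus.subscribe("OrderCreated", "reviews-service", lambda event: None)
# dependents("OrderCreated") -> ["notification-service", "reviews-service"]
```

<p>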
You cannot tell what happens across the whole system when a particular event occurs; you cannot just read the code in the source service and see what happens after that event. You need to go through all the subscriptions to find out what happens in the whole system. This is generally the thing we all ignore until we end up with no idea what happens in the&nbsp;system.</p><p>A further thing to think about once you have started using Event Notification is what goes into the events. Should you use Event-based State Transfer, or do you need to call the Order Service to get more information about the order that was just&nbsp;created?</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!nH0q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30c3194d-9a7a-4c15-a914-2a190b0ed357_500x501.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!nH0q!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30c3194d-9a7a-4c15-a914-2a190b0ed357_500x501.png 424w, https://substackcdn.com/image/fetch/$s_!nH0q!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30c3194d-9a7a-4c15-a914-2a190b0ed357_500x501.png 848w, https://substackcdn.com/image/fetch/$s_!nH0q!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30c3194d-9a7a-4c15-a914-2a190b0ed357_500x501.png 1272w, https://substackcdn.com/image/fetch/$s_!nH0q!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30c3194d-9a7a-4c15-a914-2a190b0ed357_500x501.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!nH0q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30c3194d-9a7a-4c15-a914-2a190b0ed357_500x501.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/30c3194d-9a7a-4c15-a914-2a190b0ed357_500x501.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!nH0q!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30c3194d-9a7a-4c15-a914-2a190b0ed357_500x501.png 424w, https://substackcdn.com/image/fetch/$s_!nH0q!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30c3194d-9a7a-4c15-a914-2a190b0ed357_500x501.png 848w, https://substackcdn.com/image/fetch/$s_!nH0q!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30c3194d-9a7a-4c15-a914-2a190b0ed357_500x501.png 1272w, https://substackcdn.com/image/fetch/$s_!nH0q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30c3194d-9a7a-4c15-a914-2a190b0ed357_500x501.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a><figcaption class="image-caption">Microservices do not know about each other; they are decoupled by event notification</figcaption></figure></div><p>In the 
above model, when you use the Event Notification model you create services that publish events, and downstream services subscribe to those events on your event stream. This way, you push the complexity of handling failures into the event stream and the event stream processors, your downstream services.</p><p>The Order Service publishes an <em>OrderCreated</em> event; the Notification Service subscribes to it and sends a notification to the customer.</p><p>An event store system stores all your events; you can use something like Kafka, RabbitMQ or NSQ to store events and dispatch each event type to subscribers. If a service is down temporarily, the event store delivers the events when the service is back; you do not need to implement anything extra in this scenario, your system design handles it automatically.</p><p>Now you have an event-driven architecture that <em>notifies</em> other systems when something happens. When you have a new requirement to react to the <em>OrderCreated</em> event, you do not need to go and make any changes in your Order Service. All you need to do is create a new subscriber service that subscribes to the <em>OrderCreated</em> event.</p><p>Say you want to show your customers product reviews written by other customers who actually bought that product, listed separately. They are verified customer reviews and more valuable to potential buyers, so you create a new subscription in your Customer Product Reviews Service that listens for <em>OrderCreated</em> and other order-related events and marks customers as verified buyers of that product. Very simple: it does not touch the existing parts of the system, and you build and deploy it separately without touching the upstream&nbsp;service.</p><p>I am planning to write a follow-up post with a code example to demonstrate what I mentioned here. 
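</p><p>Meanwhile, here is a deliberately compressed, single-process sketch of the flow just described: the event carries only the order number (Event Notification), and the subscriber calls back to the Order Service for the rest. All names are illustrative, and the plain list of handlers stands in for a real broker such as Kafka, RabbitMQ or NSQ.</p>

```python
subscribers = []           # stands in for the broker's subscription list
orders = {}                # the Order Service's own store
sent_notifications = []

def order_service_create(order_number, email, items):
    orders[order_number] = {"email": email, "items": items}
    # Thin event: only the order number travels on the stream.
    event = {"type": "OrderCreated", "order_number": order_number}
    for handler in subscribers:
        handler(event)

def order_service_get(order_number):
    # The callback a consumer makes to fetch the full context.
    return orders[order_number]

def notification_service(event):
    if event["type"] == "OrderCreated":
        order = order_service_get(event["order_number"])
        sent_notifications.append((order["email"], event["order_number"]))

subscribers.append(notification_service)
order_service_create("ORD-1", "jane@example.com", ["book"])
# sent_notifications -> [("jane@example.com", "ORD-1")]
```

<p>Switching this to Event-based State Transfer would mean putting the whole order into the event and dropping the callback to the Order Service.</p><p>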
Till then, I highly recommend watching <a href="https://martinfowler.com/articles/201701-event-driven.html">Martin&#8217;s keynote from the GOTO conference</a>.</p><div><hr></div><p><a href="https://medium.com/hancengiz/micro-services-with-event-notifications-4425ef5be871">(Micro)services with Event Notifications</a> was originally published in <a href="https://medium.com/hancengiz">cengiz han</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded></item></channel></rss>