<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Home on Aaron Ott</title>
    <link>https://ado.im/</link>
    <description>Recent content in Home on Aaron Ott</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en-us</language>
    <lastBuildDate>Tue, 24 Feb 2026 10:00:00 -0700</lastBuildDate><atom:link href="https://ado.im/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>The Claude Customization Stack: MCP vs Skills vs Plugins</title>
      <link>https://ado.im/posts/claude-customization-stack-mcp-skills-plugins/</link>
      <pubDate>Tue, 10 Feb 2026 10:00:00 -0700</pubDate>
      
      <guid>https://ado.im/posts/claude-customization-stack-mcp-skills-plugins/</guid>
      
      <description>&lt;p&gt;&lt;img src=&#34;https://ado.im/images/featured/claude-customization-stack.png&#34; alt=&#34;The Claude Customization Stack: MCP vs Skills vs Plugins&#34;&gt;&lt;/p&gt;
&lt;p&gt;In about fourteen months, Anthropic shipped three different ways to customize and extend what Claude can do. MCP arrived in November 2024. Skills came in October 2025. Plugins landed January 30, 2026, and promptly wiped $285 billion off software stocks.&lt;/p&gt;
&lt;p&gt;Even people who work with Claude every day are mixing these three things up, and I get it. The names are vague, the capabilities overlap, and there&amp;rsquo;s no clean comparison chart anywhere. I spent some time trying to understand what each one actually does, when you&amp;rsquo;d want to use it, and where the lines blur.&lt;/p&gt;</description>
      
    </item>
    
    <item>
      <title>How I Turned a Knowledge Gap Into a Quiz Game</title>
      <link>https://ado.im/posts/how-i-turned-knowledge-gap-into-quiz-game/</link>
      <pubDate>Thu, 22 Jan 2026 10:30:57 -0700</pubDate>
      
      <guid>https://ado.im/posts/how-i-turned-knowledge-gap-into-quiz-game/</guid>
      
      <description>&lt;h1 id=&#34;how-i-turned-a-knowledge-gap-into-a-quiz-game&#34;&gt;How I Turned a Knowledge Gap Into a Quiz Game&lt;/h1&gt;
&lt;p&gt;I&amp;rsquo;m embarrassingly bad at identifying flags. Like, really bad. Show me anything beyond the major world powers and I&amp;rsquo;m guessing. I recently saw the Scottish flag and could only identify it because the name Scotland was right next to it. This, even though I was recently in Scotland (🏴󠁧󠁢󠁳󠁣󠁴󠁿 just in case you were curious). I decided to see if Claude could help me with this, so I started a quick session and said it would be fun to build a matching game to practice.&lt;/p&gt;</description>
      
    </item>
    
    <item>
      <title>Hacker Manifesto 40 Years Later</title>
      <link>https://ado.im/posts/hacker-manifesto-40-years-later/</link>
      <pubDate>Tue, 13 Jan 2026 07:55:53 -0700</pubDate>
      
      <guid>https://ado.im/posts/hacker-manifesto-40-years-later/</guid>
      
      <description>&lt;p&gt;It&amp;rsquo;s been 40 years since The Mentor (&lt;a href=&#34;https://en.wikipedia.org/wiki/Loyd_Blankenship&#34;&gt;Loyd Blankenship&lt;/a&gt;) published &amp;ldquo;&lt;a href=&#34;https://phrack.org/issues/7/3&#34;&gt;The Conscience of a Hacker&lt;/a&gt;&amp;rdquo; in Phrack. Still rings true today. I was curious to see what Claude would do with it so I asked it:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;If you were to rewrite the hacker manifesto today, how would you write it?&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;It did not disappoint. I rarely post anything directly from my Claude sessions, but this one just hit. So, with no edits from me, here it is:&lt;/p&gt;</description>
      
    </item>
    
    <item>
      <title>AI Security in 2025: What I Learned, and What I&#39;ll Be Watching for in 2026</title>
      <link>https://ado.im/posts/ai-security-in-2025/</link>
      <pubDate>Sun, 14 Dec 2025 09:41:00 -0700</pubDate>
      
      <guid>https://ado.im/posts/ai-security-in-2025/</guid>
      
      <description>&lt;p&gt;This weekend I sat down with a couple of AI security reports for the year. I wanted to understand what the industry actually experienced in 2025, what broke in the real world, and how well those lessons line up with what many of us have been seeing in our own work. I read the &lt;a href=&#34;https://adversa.ai/top-ai-security-incidents-report-2025-edition/&#34;&gt;Adversa AI incident report&lt;/a&gt;, &lt;a href=&#34;https://www.ibm.com/reports/data-breach&#34;&gt;IBM’s 2025 Cost of a Data Breach&lt;/a&gt;, and the &lt;a href=&#34;https://arxiv.org/abs/2509.10540&#34;&gt;EchoLeak research paper&lt;/a&gt; that documents the first real zero click prompt injection exploit in a production LLM system.&lt;/p&gt;</description>
      
    </item>
    
    <item>
      <title>Learning n8n by Building: A Small Experiment With Big WYSIWYG Energy</title>
      <link>https://ado.im/posts/learning-n8n-by-building-a-small-experiment-with-big-wysiwyg-energy/</link>
      <pubDate>Fri, 05 Dec 2025 09:00:00 -0700</pubDate>
      
      <guid>https://ado.im/posts/learning-n8n-by-building-a-small-experiment-with-big-wysiwyg-energy/</guid>
      
      <description>A small morning-weather automation taught me how n8n thinks. It felt a lot like building with early WYSIWYG editors, only this time the output is real automation.</description>
      
    </item>
    
    <item>
      <title>Pen Testing With Claude 4.5</title>
      <link>https://ado.im/posts/pen-testing-with-claude/</link>
      <pubDate>Fri, 03 Oct 2025 10:46:31 -0600</pubDate>
      
      <guid>https://ado.im/posts/pen-testing-with-claude/</guid>
      
      <description>&lt;p&gt;I gave Claude 4.5 access to a Kali Linux box, pointed it at an intentionally vulnerable web app, and told it to find security holes. Fifteen minutes later, it handed me a report with 21 real vulnerabilities, including SQL injection, exposed repos, and misconfigured cookies. It also missed some obvious XSS flaws.&lt;/p&gt;
&lt;p&gt;Here&amp;rsquo;s what worked, what didn&amp;rsquo;t, and when you&amp;rsquo;d actually want an AI doing your pen testing.&lt;/p&gt;
&lt;p&gt;The &lt;a href=&#34;https://assets.anthropic.com/m/12f214efcc2f457a/original/Claude-Sonnet-4-5-System-Card.pdf&#34;&gt;Claude 4.5 system card&lt;/a&gt; shows big improvements in cyber capabilities. That&amp;rsquo;s interesting in theory, but does it actually work on real vulnerabilities? And more importantly, where does it fit in a security workflow? Prior to this I’d used MCP to run nmap scans and &lt;a href=&#34;https://ado.im/posts/building-a-local-prompt-injection-lab/&#34;&gt;tested prompt-injection attacks on models&lt;/a&gt;, but this was my first time letting an AI run a full pen test. I ran this in a controlled environment so I could actually see what it&amp;rsquo;s doing and evaluate it properly, which is really the responsible way to test any tool before trusting it with anything that matters.&lt;/p&gt;</description>
      
    </item>
    
    <item>
      <title>When Do You Trust AI</title>
      <link>https://ado.im/posts/when-do-you-trust-ai/</link>
      <pubDate>Sun, 21 Sep 2025 12:34:18 -0600</pubDate>
      
      <guid>https://ado.im/posts/when-do-you-trust-ai/</guid>
      
      <description>&lt;p&gt;I was sitting with a friend this weekend, and we started talking about AI and when you actually trust what it tells you. His stance was pretty firm: &lt;em&gt;“I don’t trust anything it says until I verify it.”&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;I followed up with the obvious question: &lt;em&gt;“Well then, when are there efficiency gains in using AI?”&lt;/em&gt; If I have to fact-check every single answer, is it really saving me time, or just giving me more to verify?&lt;/p&gt;</description>
      
    </item>
    
    <item>
      <title>Coding With ChatGPT vs Claude Code</title>
      <link>https://ado.im/posts/coding-with-chatgpt-vs-claude-code/</link>
      <pubDate>Sun, 14 Sep 2025 11:32:18 -0600</pubDate>
      
      <guid>https://ado.im/posts/coding-with-chatgpt-vs-claude-code/</guid>
      
      <description>&lt;h1 id=&#34;chatgpt-vs-claude-code-how-i-built-an-analytics-stack-and-when-each-one-won&#34;&gt;ChatGPT vs Claude Code: How I Built an Analytics Stack (and When Each One Won)&lt;/h1&gt;
&lt;p&gt;I just finished standing up a lightweight analytics stack. I started with a single &lt;code&gt;analytics.js&lt;/code&gt; in the browser, asked ChatGPT to reverse-engineer the backend from that client, and then pulled the repo into Claude Code to harden it, build tests, configs, and the “don’t-break-what-works” edits.&lt;/p&gt;
&lt;p&gt;A few observations:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;ChatGPT is a fantastic architect and first-draft generator. I asked it to write an NGINX config for the backend, add security headers, and enable HTTP/2 + QUIC. It produced a full, working setup on the first pass, plus a clean FastAPI scaffold for the ingest endpoints. Great velocity.&lt;/li&gt;
&lt;li&gt;Claude Code is a careful surgeon. When I later needed targeted tweaks (adding specific analytics API endpoints, adjusting CORS/CSP, and wiring health checks), ChatGPT sometimes rewrote big sections of a config that were already correct (especially NGINX). Claude Code, operating repo-wide from the terminal, tended to find the exact lines and propose minimal diffs without collateral damage.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That pattern repeated: when the change was architectural or green-field, ChatGPT flew. When it was surgical, like threading changes through NGINX, systemd, app config, and tests, Claude Code earned its keep.&lt;/p&gt;</description>
      
    </item>
    
    <item>
      <title>Building a Local Prompt Injection Lab</title>
      <link>https://ado.im/posts/building-a-local-prompt-injection-lab/</link>
      <pubDate>Sat, 06 Sep 2025 10:00:00 -0600</pubDate>
      
      <guid>https://ado.im/posts/building-a-local-prompt-injection-lab/</guid>
      
      <description>&lt;p&gt;A couple of days ago I stumbled across this whitepaper: &lt;a href=&#34;https://arxiv.org/pdf/2508.21669&#34;&gt;&lt;em&gt;Cybersecurity AI: Hacking the AI Hackers via Prompt Injection&lt;/em&gt;&lt;/a&gt;. The premise intrigued me, using prompt injection as a way to “hack the hackers” when AI agents are in the attack chain. I wanted to see what this looked like in practice, so I spun up a local lab to reproduce (and play with) the concept.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&#34;first-attempt-docker-without-the-llm&#34;&gt;First Attempt: Docker Without the LLM&lt;/h2&gt;
&lt;p&gt;I started by dropping the whitepaper into ChatGPT and asked it to build a lab environment. The initial code generated three Docker containers:&lt;/p&gt;</description>
      
    </item>
    
    <item>
      <title>About</title>
      <link>https://ado.im/about/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>https://ado.im/about/</guid>
      
      <description>&lt;h1 id=&#34;hey-im-aaron&#34;&gt;Hey, I’m Aaron.&lt;/h1&gt;
&lt;p&gt;I help teams make sense of &lt;strong&gt;application security&lt;/strong&gt; and &lt;strong&gt;AI reliability&lt;/strong&gt; with clear writing, useful playbooks, and approaches that fit how people actually work. My aim is simple: &lt;em&gt;turn complex into practical&lt;/em&gt; and keep a friendly, no-jargon tone.&lt;/p&gt;
&lt;h3 id=&#34;what-i-work-on&#34;&gt;What I work on&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;SSDLC &amp;amp; threat modeling that developers will actually use&lt;/li&gt;
&lt;li&gt;AI resilience: guardrails, evals, fallbacks, and disaster recovery (“When the Robot Breaks”)&lt;/li&gt;
&lt;li&gt;Pen-testing workflows and lightweight automation that speed learning &amp;amp; remediation&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;a-bit-more-human&#34;&gt;A bit more human&lt;/h3&gt;
&lt;p&gt;Colorado-based. Big on dogs, hiking, and the occasional whiskey. I like teaching by building small labs and writing as I learn.&lt;/p&gt;</description>
      
    </item>
    
  </channel>
</rss>
