<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>Chris Gmyr</title>
        <link>https://chrisgmyr.dev</link>
        <description>Staff Software Engineer at Rula. Newsletter author, podcast co-host, and builder of things.</description>
        <language>en-us</language>
        <atom:link href="https://chrisgmyr.dev/feed" rel="self" type="application/rss+xml"/>
                    <item>
                <title>My AI Agent Knows What Project I&#039;m Working On Before I Tell It</title>
                <link>https://chrisgmyr.dev/blog/my-ai-agent-knows-what-project-im-working-on-before-i-tell-it</link>
                <guid isPermaLink="true">https://chrisgmyr.dev/blog/my-ai-agent-knows-what-project-im-working-on-before-i-tell-it</guid>
                <description><![CDATA[<p>I kept repeating myself. Every time I opened Claude Code in my Obsidian vault, I'd type &quot;let's work on the newsletter&quot; and then spend the first few minutes pointing Claude at the right context files. Here's the publishing context. Here's the last session file. Here's the Signals doc. Every single time.</p>
<p>The vault already had all this context organized. 33 <code>_context.md</code> files tracking project state. 23 <code>_last-session.md</code> files with handoff notes. Signals documents tracking patterns over time. The information existed. Claude just didn't know to read it.</p>
<p>On the OpenClaw side (my Discord-based AI agent), this was already solved. Each Discord channel maps to a project. The agent knows which context to load based on which channel you're talking in. But Claude Code sessions are stateless. Every conversation starts fresh.</p>
<p>So I built a hook that fixes that.</p>
<hr />
<h2>The Problem with Instructions</h2>
<p>The obvious first attempt: put &quot;read the context files at session start&quot; in CLAUDE.md. I tried that. It's guidance, not automation. Claude sees the instruction and sometimes follows it. Sometimes it doesn't. As conversations get long and context compresses, the instruction gets buried.</p>
<p>The second attempt: a hook that dumps all context files into every session. I have 23 session files. Most would be noise for any given conversation. &quot;What's a good pasta recipe&quot; doesn't need the DevX Team's last session loaded.</p>
<p>The real problem is intent detection. A hook fires before Claude processes your message. It can't understand what you're about to work on. It has to guess, and guessing wrong is worse than loading nothing.</p>
<hr />
<h2>Keyword Mapping</h2>
<p>The solution is embarrassingly simple: a JSON file that maps keywords to projects.</p>
<pre><code class="language-json">{
  &quot;id&quot;: &quot;newsletter&quot;,
  &quot;name&quot;: &quot;Newsletter (Dev Notes)&quot;,
  &quot;keywords&quot;: [&quot;newsletter&quot;, &quot;dev notes&quot;, &quot;buttondown&quot;, &quot;weekly newsletter&quot;],
  &quot;patterns&quot;: [&quot;(write|draft|send).*?(newsletter|dev notes)&quot;],
  &quot;context&quot;: [
    &quot;Publishing/_context.md&quot;,
    &quot;Publishing/Newsletters/_context.md&quot;,
    &quot;Publishing/Newsletters/_last-session.md&quot;
  ],
  &quot;suggestSkills&quot;: [&quot;newsletter-writer&quot;, &quot;writing-voice&quot;],
  &quot;priority&quot;: 2
}
</code></pre>
<p>When I say &quot;let's draft tomorrow's newsletter,&quot; the hook matches &quot;newsletter,&quot; loads the three context files, and suggests two relevant skills. No AI inference. No guessing. String matching.</p>
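<p>Stripped to its essence, the matcher is a membership check over the config. Here's an illustrative reduction in Python (the real hook is bash; the <code>mappings</code> shape mirrors the JSON above):</p>
<pre><code class="language-python">import re

def match_projects(prompt, mappings):
    '''Return every mapping whose keywords (or, failing that, regex patterns) hit the prompt.'''
    text = prompt.lower()
    hits = []
    for m in mappings:
        if any(kw in text for kw in m.get('keywords', [])):
            hits.append(m)
        elif any(re.search(p, text) for p in m.get('patterns', [])):
            hits.append(m)
    return hits
</code></pre>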
<p>The full config has 30 mappings covering every project area in the vault: Rula work projects, Modo consulting clients, publishing channels, personal projects, finances, fitness, even Cub Scouts. Each mapping defines which context files to load, which Signals doc to include, and which skills are relevant.</p>
<hr />
<h2>The Priority System</h2>
<p>Keywords overlap. &quot;Content&quot; could mean the Content project or the content calendar skill. &quot;Rula&quot; could mean the top-level employer area or a specific sub-project like DevX.</p>
<p>The fix: priority levels. Higher numbers are more specific.</p>
<table>
<thead>
<tr>
<th>Priority</th>
<th>Scope</th>
<th>Example</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>Top-level area</td>
<td>&quot;all work&quot;, &quot;personal overview&quot;</td>
</tr>
<tr>
<td>1</td>
<td>Company/area</td>
<td>&quot;rula&quot;, &quot;publishing&quot;</td>
</tr>
<tr>
<td>2</td>
<td>Specific project</td>
<td>&quot;devx&quot;, &quot;newsletter&quot;, &quot;plex&quot;</td>
</tr>
<tr>
<td>3</td>
<td>Sub-project</td>
<td>&quot;project1&quot;, &quot;project2&quot;</td>
</tr>
</tbody>
</table>
<p>When &quot;devx&quot; matches at priority 2 and &quot;rula&quot; matches at priority 1, only DevX loads. The most specific match wins. When two things match at the same priority (like &quot;content calendar&quot; hitting both Content and Newsletter), both load. That's usually what you want.</p>
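<p>The filtering step is small: find the highest priority present, keep everything at that level. A sketch, assuming each matched mapping carries its <code>priority</code> field:</p>
<pre><code class="language-python">def filter_by_priority(matches):
    '''Most specific wins; ties at the top priority all load.'''
    if not matches:
        return []
    top = max(m['priority'] for m in matches)
    return [m for m in matches if m['priority'] == top]
</code></pre>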
<hr />
<h2>The Hook</h2>
<p>The hook is a bash script wired to <code>UserPromptSubmit</code> in Claude Code's settings. It runs every time I send a message:</p>
<ol>
<li>Reads my prompt from stdin (JSON with <code>user_prompt</code> field)</li>
<li>Lowercases it and checks every keyword in the config</li>
<li>Falls back to regex patterns if no keyword matched</li>
<li>Filters by priority, keeping only the most specific matches</li>
<li>Outputs the file paths Claude should read</li>
</ol>
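<p>The wiring itself is a few lines of JSON in Claude Code's settings. This is roughly the shape the hooks config uses (the script path here is a placeholder, not my actual filename):</p>
<pre><code class="language-json">{
  &quot;hooks&quot;: {
    &quot;UserPromptSubmit&quot;: [
      {
        &quot;hooks&quot;: [
          { &quot;type&quot;: &quot;command&quot;, &quot;command&quot;: &quot;.claude/hooks/context-loader.sh&quot; }
        ]
      }
    ]
  }
}
</code></pre>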
<p>The output looks like this:</p>
<pre><code>=== PROJECT CONTEXT DETECTED ===
Matched: Content / Social, Newsletter (Dev Notes)

READ these files before responding (use the Read tool):
  - Publishing/_context.md
  - Publishing/Content/_context.md
  - Publishing/Content/_last-session.md
  - Publishing/Newsletters/_context.md
  - Publishing/Newsletters/_last-session.md
  - Publishing/Stats/Signals.md (Signals)

Relevant skills for this area:
  - /content-calendar
  - /social-coach
  - /writing-voice
  - /newsletter-writer

After completing work, suggest /handoff to save session state.
=== END PROJECT CONTEXT ===
</code></pre>
<p>Claude sees this injected into the conversation and reads the files before responding. No match means no output. &quot;What's the weather&quot; produces nothing. Zero overhead on conversations that don't need project context.</p>
<p>The hook also writes the matched project name to a temp file. More on that in a second.</p>
<hr />
<h2>Statusline: See It at a Glance</h2>
<p>Claude Code has a configurable statusline at the bottom of the terminal. Mine already shows the repo name, git branch, context window usage, session cost, and lines changed. I added one more piece: the active project.</p>
<p>The context-loader hook writes the matched project name to <code>.claude/hooks/.current-project</code>. The statusline script reads that file and displays it right after the repo name:</p>
<pre><code>Gray Matter ⟨Newsletter⟩ git:(main) | ctx: 12% | $0.42 | +35/-8 [Opus 4.6]
</code></pre>
<p>No match, no file, nothing shown. When the hook fires and matches &quot;newsletter,&quot; the statusline updates. When I switch to a general conversation, it stays on the last project (which is usually what I want, since I'm likely still in that context).</p>
<p>The <code>/ctx</code> and <code>/handoff</code> skills also update this file, so the statusline stays in sync regardless of how the project was set. <code>/handoff --clear</code> removes the file entirely, resetting the statusline when you're done with a project and switching direction.</p>
<p>It's a small thing, but glancing at the statusline and seeing <code>⟨DevX Team⟩</code> confirms the hook did its job without scrolling back to check the output.</p>
<hr />
<h2>Loading Children</h2>
<p>Some conversations span a whole area. &quot;Let's review all publishing stuff&quot; shouldn't load just the top-level Publishing context. It should load every sub-project underneath.</p>
<p>The <code>loadChildren</code> flag handles this. When a parent mapping has <code>loadChildren: true</code>, the hook runs <code>find</code> on the base directory and loads every <code>_context.md</code> and <code>_last-session.md</code> it finds. Publishing has five sub-projects (Blog, Content, Newsletters, Podcast, Stats) plus the parent area, six sets of context in total. One keyword loads all of them.</p>
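<p>In Python terms, the expansion is a recursive glob over the mapping's base directory. A sketch (the bash version uses <code>find</code>, as noted above):</p>
<pre><code class="language-python">from pathlib import Path

def expand_children(base_dir):
    '''Collect every _context.md and _last-session.md under a parent area.'''
    files = []
    for name in ('_context.md', '_last-session.md'):
        files.extend(sorted(Path(base_dir).rglob(name)))
    return [str(p) for p in files]
</code></pre>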
<hr />
<h2>The Write Side: /handoff</h2>
<p>Loading context is the read side. The write side is a <code>/handoff</code> skill that saves session state before I close a conversation.</p>
<p>When I run <code>/handoff</code>, Claude writes a structured <code>_last-session.md</code> with four sections: what was done, what needs review, deferred items, and next actions. Items from the previous session that weren't addressed get carried forward automatically. The project stays active in the statusline so the next message still has context.</p>
<p><code>/handoff --clear</code> does the same save but removes the active project from the statusline. That's for when I'm done with a project and want to switch direction mid-session. The session state is saved, the statusline resets, and the next <code>/ctx</code> or keyword match picks up a new project.</p>
<p>The file is written for a fresh agent. No &quot;as we discussed&quot; or references to the conversation. Just facts, file paths, and concrete next steps. Because the next agent to read it might be Claude Code on my laptop, or OpenClaw on my phone through Discord. Either way, it needs to stand on its own.</p>
<hr />
<h2>Manual Override: /ctx</h2>
<p>The hook handles 90% of cases. For the rest, there's <code>/ctx</code>.</p>
<pre><code>/ctx                    → list all available projects
/ctx devx               → load DevX Team context
/ctx publishing         → load all Publishing context with children
</code></pre>
<p>This covers three scenarios: the hook matched the wrong project, I want to switch projects mid-conversation, or I'm starting a conversation that doesn't have obvious keywords (&quot;let's pick up where we left off&quot; doesn't match anything useful).</p>
<hr />
<h2>Cross-System Continuity</h2>
<p>The key design choice: the <code>_last-session.md</code> files are the shared state between systems. They live in the Obsidian vault. Obsidian Sync keeps them current across devices. Both Claude Code and OpenClaw read and write the same files.</p>
<p>The hook itself only runs locally. It's wired in <code>.claude/settings.local.json</code>, which is gitignored. OpenClaw never sees it because OpenClaw doesn't need it. Discord channels already provide the project routing that the hook provides for Claude Code.</p>
<p>The session files are the contract between systems. The routing mechanism is system-specific.</p>
<hr />
<h2>What I'd Change</h2>
<p><strong>The keyword list needs real-world tuning.</strong> I seeded it with obvious terms from each project's context file, but I won't know the natural language I actually use until I've run a few weeks of sessions. The first round of edits will probably happen within days.</p>
<p><strong>Regex patterns are powerful but fragile.</strong> <code>(write|draft|send).*?(newsletter|dev notes)</code> catches &quot;draft the newsletter&quot; but not &quot;I need to finish that dev notes thing.&quot; I'm keeping patterns minimal and leaning on keywords. Patterns are the fallback, not the primary matching strategy.</p>
<p><strong>No learning loop yet.</strong> The system doesn't track which keywords actually fired vs. which projects I ended up working on. A log file that captures &quot;hook matched X, user actually worked on Y&quot; would make tuning much faster. That's probably the next thing to build.</p>
<hr />
<h2>The Stack</h2>
<p>For anyone building something similar:</p>
<ul>
<li><strong>Claude Code hooks</strong> (<code>UserPromptSubmit</code>) for automatic context injection</li>
<li><strong>A JSON config</strong> mapping keywords to file paths (no code changes needed to add projects)</li>
<li><strong>Claude Code skills</strong> for <code>/ctx</code> (manual load) and <code>/handoff</code> (session save)</li>
<li><strong>Claude Code statusline</strong> to show the active project at a glance</li>
<li><strong><code>.claude/settings.local.json</code></strong> to keep the hook local-only</li>
<li><strong>Obsidian Sync</strong> to share session files across devices and systems</li>
</ul>
<p>The hook script is about 130 lines of bash. The JSON config is where all the project knowledge lives. Adding a new project means adding a JSON object with keywords and file paths. No code changes.</p>
<hr />
<h2>How It Fits Together</h2>
<p>This is the missing piece between sessions. The vault already had project context. OpenClaw already had cross-session continuity through Discord channels. Claude Code was the gap: powerful for deep work, but amnesiac between conversations.</p>
<p>Now the flow works end-to-end. Start a Claude Code session, mention the project, context loads automatically. Do the work. Run <code>/handoff</code>. Close the session. Open Discord on my phone, pick up in the same project channel, and OpenClaw reads the same <code>_last-session.md</code> that Claude Code just wrote. Switch back to Claude Code tomorrow, and the hook loads the same file again.</p>
<p>The context follows the work, not the tool.</p>
<hr />
<p>If this sounds interesting to you, I'd love to chat with you about it. Find me on <a href="https://bsky.app/profile/cmgmyr.dev">Bluesky</a> or <a href="https://x.com/cmgmyr">X</a>.</p>
]]></description>
                <pubDate>Tue, 31 Mar 2026 13:00:00 +0000</pubDate>
            </item>
                    <item>
                <title>Not Every Podcast Has a Transcript - Now I Don&#039;t Care</title>
                <link>https://chrisgmyr.dev/blog/not-every-podcast-has-a-transcript-now-i-dont-care</link>
                <guid isPermaLink="true">https://chrisgmyr.dev/blog/not-every-podcast-has-a-transcript-now-i-dont-care</guid>
                <description><![CDATA[<p>For podcasts I co-host, transcripts already exist. Riverside generates them automatically, we import those to Transistor, and I have a skill that imports them directly into my Obsidian vault.</p>
<p>For podcasts I just <em>listen</em> to, sometimes there's nothing. No transcript, no structured notes, no way to search what was said.</p>
<p>I wanted the same workflow for both: give a URL, get a note with a summary, key takeaways, quotes, and a searchable full transcript. So I built a tool. URL in, structured Obsidian note out.</p>
<p>Here's how it works.</p>
<hr />
<h2>Step 1: Check for an Existing Transcript First</h2>
<p>Before downloading any audio, the skill checks if the episode page already has a transcript.</p>
<p>Transistor-hosted shows publish transcripts at <code>/[episode]/transcript</code>. A quick HTML fetch finds the link. If it's there, scrape it, skip the audio entirely. Free and instant.</p>
<p>Other patterns to check: embedded transcript blocks in the page HTML, <code>&lt;podcast:transcript&gt;</code> tags in the RSS feed. More shows are publishing these than you'd expect.</p>
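<p>The RSS check, for instance, is a small namespace-aware parse. A sketch against the Podcasting 2.0 namespace (verify the namespace URI against the feeds you actually consume):</p>
<pre><code class="language-python">import xml.etree.ElementTree as ET

PODCAST_NS = 'https://podcastindex.org/namespace/1.0'

def transcript_urls(rss_xml):
    '''Return any transcript URLs a feed declares via podcast:transcript tags.'''
    root = ET.fromstring(rss_xml)
    tags = root.iter('{%s}transcript' % PODCAST_NS)
    return [t.get('url') for t in tags]
</code></pre>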
<p>Only if none of that works does the skill fall back to downloading audio and running it through Whisper, OpenAI's speech-to-text API.</p>
<hr />
<h2>Step 2: Get the Audio URL</h2>
<p>If there's no transcript, I need the audio file.</p>
<p>The cleanest source is the RSS feed. Apple Podcasts pages embed the <code>feedUrl</code> in the page's JSON data. Fetch the RSS, find the right episode's <code>&lt;enclosure&gt;</code> tag, and you have a direct MP3 link.</p>
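<p>Pulling the enclosure out is another small parse. A sketch that matches episodes by GUID (element names per standard RSS 2.0):</p>
<pre><code class="language-python">import xml.etree.ElementTree as ET

def episode_audio_url(rss_xml, episode_guid):
    '''Return the enclosure URL for the item whose guid matches, else None.'''
    root = ET.fromstring(rss_xml)
    for item in root.iter('item'):
        if item.findtext('guid', default='').strip() == episode_guid:
            enc = item.find('enclosure')
            if enc is not None:
                return enc.get('url')
    return None
</code></pre>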
<p>For the first show I tested, the RSS had every episode URL, season numbers, and GUIDs for state tracking. No <code>yt-dlp</code> needed.</p>
<p><code>yt-dlp</code> is a command-line tool that can download audio from just about any podcast or video URL. It's the fallback for pages that don't expose RSS cleanly.</p>
<hr />
<h2>Step 3: Transcribe with OpenAI Whisper</h2>
<p>The Whisper API has a 25MB file limit. A typical one-hour podcast episode is 50-70MB.</p>
<p>The fix: split the audio with <code>ffmpeg</code>, a command-line tool for processing audio and video files.</p>
<pre><code class="language-python">chunk_duration = 1200  # 20 minutes
</code></pre>
<p>For a 60-minute episode, that's 3 chunks. Each goes to the Whisper API separately. The transcripts get joined. Total cost for an hour of audio: about $0.36.</p>
<p>The script handles the full flow: download the audio, detect file size, split if needed, transcribe each chunk, stitch the output.</p>
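<p>The split itself is one ffmpeg invocation per episode, using the segment muxer with stream copy so nothing gets re-encoded. A sketch of the command construction (flags as I understand ffmpeg's segment muxer; the output filename pattern is arbitrary):</p>
<pre><code class="language-python">def split_cmd(path, chunk_seconds=1200):
    '''Build the ffmpeg command that cuts an episode into 20-minute chunks.'''
    return [
        'ffmpeg', '-i', path,
        '-f', 'segment', '-segment_time', str(chunk_seconds),
        '-c', 'copy',                 # cut only, no re-encode
        'chunk_%03d.mp3',
    ]
</code></pre>
<p>Run it with <code>subprocess.run(split_cmd('episode.mp3'), check=True)</code>, then send each chunk to the API in order.</p>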
<hr />
<h2>Step 4: Process with Claude</h2>
<p>The raw Whisper transcript is a wall of text. No speaker labels, no paragraph breaks, no structure.</p>
<p>After transcription, Claude reads it and generates:</p>
<ul>
<li><strong>Summary:</strong> 2-3 sentences on what the episode covered</li>
<li><strong>Key takeaways:</strong> 5-8 bullet points with the main insights</li>
<li><strong>Notable quotes:</strong> 3-5 exact quotes pulled from the transcript</li>
<li><strong>Action items:</strong> anything actionable for the listener</li>
</ul>
<p>Here's what a processed note looks like in practice:</p>
<pre><code class="language-markdown">## Summary
The hosts break down how their team adopted trunk-based development
after years of long-lived feature branches, and what broke along the way...

## Key Takeaways
- Short-lived branches forced smaller PRs, which made reviews faster
- Feature flags replaced branch-based isolation for incomplete work
- The hardest part was trust, not tooling
- ...

## Notable Quotes
&gt; &quot;We didn't have a branching problem. We had a confidence problem.
&gt; Nobody trusted main.&quot;
</code></pre>
<p>The full transcript goes in the note too. I can re-run Claude on the same text later with different prompts. Last week I asked it to pull every question the host asked across five episodes. No re-transcription, no extra Whisper cost. The raw text is the asset.</p>
<hr />
<h2>Step 5: Save the Note</h2>
<p>Each episode becomes a structured Obsidian note with frontmatter including show, season, episode, date, and source URL. The show folder gets an <code>_index.md</code> that uses a Dataview query to auto-populate the episode list from frontmatter. No manual maintenance as new episodes come in.</p>
<pre><code>Personal/Podcasts/
  Show Name/
    _index.md              ← Dataview query, auto-populates
    2026-03-17 - Episode Title Here.md
    2026-03-10 - Another Episode Title.md
    ...
</code></pre>
<hr />
<h2>Step 6: Auto-Sync New Episodes</h2>
<p>For shows I want to follow week over week, I added a tracking config:</p>
<pre><code class="language-json">// Personal/Podcasts/_tracked.json
[
  {
    &quot;show&quot;: &quot;Show Name&quot;,
    &quot;rss&quot;: &quot;https://rss.example.com/feed.xml&quot;,
    &quot;description&quot;: &quot;One-line description of the show.&quot;,
    &quot;sync_from&quot;: &quot;2026-03-18&quot;
  }
]
</code></pre>
<p>A Python script reads this file, checks each show's RSS for episodes it hasn't seen (tracked by GUID), and transcribes any new ones. The <code>sync_from</code> date tells it where to start. Without it, a new show would backfill its entire back catalog on the first run. Per-show state lives in <code>_sync_state.json</code> so shows are fully independent.</p>
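<p>The seen/unseen check reduces to set membership plus the date cutoff. A sketch (field names here are illustrative, not the script's actual schema):</p>
<pre><code class="language-python">def new_episodes(feed_items, state, sync_from):
    '''Items not yet in the seen set, dated on or after sync_from.
    ISO dates compare lexicographically, so max(date, sync_from) == date
    holds exactly when the date is not before the cutoff.'''
    seen = set(state.get('seen_guids', []))
    return [
        it for it in feed_items
        if it['guid'] not in seen and max(it['date'], sync_from) == it['date']
    ]
</code></pre>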
<p>A cron job runs daily at 10am ET. If any tracked show has a new episode, it gets transcribed and added to the vault automatically.</p>
<p>To track a new show: add an entry to <code>_tracked.json</code>. That's it. The cron picks it up without any other changes.</p>
<p>If you're building something similar: use underscore prefixes (<code>_sync_state.json</code>), not dotfiles. Obsidian Sync skips dotfiles by default, which means your state file won't sync across devices. Learned that the hard way.</p>
<hr />
<h2>What It Actually Cost to Build</h2>
<p>A few hours. The transcription script is about 160 lines of Python. The sync script is about 300. The Obsidian skill that ties it together is a markdown file.</p>
<p>Tools used:</p>
<ul>
<li><strong>OpenAI Whisper API</strong> for transcription (~$0.006/min)</li>
<li><strong>ffmpeg</strong> for audio splitting and duration detection</li>
<li><strong>yt-dlp</strong> as a fallback for audio extraction</li>
<li><strong>Python 3</strong> for glue code (stdlib only, no pip packages)</li>
<li><strong>Claude</strong> for post-processing the raw transcript into structured notes</li>
</ul>
<p>The whole thing runs on the same DigitalOcean droplet that runs OpenClaw. No new infrastructure.</p>
<hr />
<h2>What I'd Add Next</h2>
<p>Speaker diarization is the obvious gap. Whisper doesn't identify who's speaking. AssemblyAI and Deepgram both have this built in for similar pricing. For any show with consistent hosts and guests, labeled transcripts would make the notes far more useful.</p>
<p>I'll swap the transcription backend when I have a reason to re-transcribe.</p>
<p>The other thing: cross-episode analysis. Once I have a full season of transcripts, I can ask Claude to find every time the hosts discussed a specific topic across all episodes. That's a skill worth building once there's enough content in the vault.</p>
<hr />
<p>A month ago, I couldn't search anything from the podcasts I listen to. Now every episode lands in the vault with structure, summaries, and a full transcript I can query whenever I want. The vault gets richer without extra work. Some podcasts publish transcripts. Some don't. I stopped thinking about which is which.</p>
<hr />
<p>If you're building something similar I'd love to chat with you about it. Find me on <a href="https://bsky.app/profile/cmgmyr.dev">Bluesky</a> or <a href="https://x.com/cmgmyr">X</a>.</p>
]]></description>
                <pubDate>Thu, 26 Mar 2026 13:00:00 +0000</pubDate>
            </item>
                    <item>
                <title>I Built a Content Flywheel That Runs on Obsidian and AI Skills</title>
                <link>https://chrisgmyr.dev/blog/i-built-a-content-flywheel-that-runs-on-obsidian-and-ai-skills</link>
                <guid isPermaLink="true">https://chrisgmyr.dev/blog/i-built-a-content-flywheel-that-runs-on-obsidian-and-ai-skills</guid>
                <description><![CDATA[<p>Two weeks ago I rebuilt my personal site with Claude Code. Last week I wired OpenClaw into my Obsidian vault so my AI agent could read project context and run workflows across sessions. This week the system started running itself.</p>
<p>Sunday night at 9 PM, a scheduled task kicks off on my OpenClaw server. It syncs my Bluesky posts, Twitter activity, newsletter stats, and podcast downloads into the vault. Then it aggregates everything into a cross-platform review. Then it generates next week's content calendar with seven days of social posts, newsletter angles, podcast topics, and blog candidates. All from the same vault the agent reads and writes to every day.</p>
<p>I didn't plan to build a content flywheel. I built individual tools to solve individual problems, and they connected into a loop.</p>
<p>This is how the pieces fit together, what the weekly cycle actually looks like, and how the system improves its own writing over time.</p>
<hr />
<h2>The Sunday Night Pipeline</h2>
<p>The whole thing runs as a single chained command in OpenClaw:</p>
<pre><code>/bluesky-sync → /twitter-sync → /buttondown-sync → /transistor-sync → /social-review → /content-calendar
</code></pre>
<p>Each step is a Claude Code skill. Each one reads from the vault and writes back to it. The four syncs run as separate scheduled jobs. Once they've finished, the review runs on their combined output, then the calendar.</p>
<p>Here's what each step does:</p>
<p><strong>Platform syncs</strong> pull raw data into markdown files. <code>/bluesky-sync</code> creates <code>Publishing/Stats/Bluesky/Bluesky 2026-W12.md</code> with every post, reply, like count, and follower change for the week. Same pattern for Twitter, Buttondown (newsletter), and Transistor (podcast). Each file has structured frontmatter for the numbers and a content section with the actual posts.</p>
<p><strong><code>/social-review</code></strong> reads all four sync files and produces a combined stats review. It compares engagement rates across platforms, identifies which posts performed best, tracks what content types are working (original observations vs. promo vs. personal), and maintains a running Signals document. Signals are patterns that persist across weeks: &quot;Twitter outperforming Bluesky on cross-posted content&quot; or &quot;original content outperforms promotional.&quot; When a signal shows up three weeks in a row, it gets promoted to a strategy change.</p>
<p><strong><code>/content-calendar</code></strong> reads the stats review, the signals, work logs, recent newsletters, podcast episodes, and a social backlog of unused post ideas. It produces a daily content calendar for the upcoming week: one post per day, Monday through Sunday. Each post includes the draft text, content type, source references, sanitization flags, and empty <code>Posted:</code> blocks with Twitter and Bluesky link placeholders for me to fill in after posting.</p>
<p>The calendar also suggests newsletter opening thought angles, podcast topics, and blog candidates for the week. All sourced from what's already in the vault.</p>
<hr />
<h2>The Backlog System</h2>
<p>Not every idea fits into this week's calendar. Some are too early, some need more distance from work, some are waiting for a triggering event. These used to disappear. Now they go to backlogs.</p>
<p>Four backlog files, one per medium:</p>
<table>
<thead>
<tr>
<th>Medium</th>
<th>File</th>
<th>What goes there</th>
</tr>
</thead>
<tbody>
<tr>
<td>Social</td>
<td><code>Publishing/Content/Social Backlog.md</code></td>
<td>Post drafts ready to schedule</td>
</tr>
<tr>
<td>Newsletter</td>
<td><code>Publishing/Newsletters/Newsletter Backlog.md</code></td>
<td>Opening thought angles and section ideas</td>
</tr>
<tr>
<td>Blog</td>
<td><code>Publishing/Blog/Blog Backlog.md</code></td>
<td>Post ideas with enough substance for long-form</td>
</tr>
<tr>
<td>Podcast</td>
<td><code>Publishing/Podcast/Topics Backlog.md</code></td>
<td>Episode topics, predictions to revisit, recurring questions</td>
</tr>
</tbody>
</table>
<p>Each backlog has three sections: <strong>Active</strong> (ready to use), <strong>Potential</strong> (parked, not timely yet), and <strong>Ignored</strong> (stop suggesting).</p>
<p>The <code>/content-idea</code> skill captures ideas mid-week and routes them to the right backlog. If an idea spans multiple mediums, it writes to each one and cross-links them with <code>Related:</code> fields. A social post about &quot;AI context switching overhead&quot; that could also be a newsletter angle gets entries in both backlogs, each pointing to the other.</p>
<p>When the content calendar generates next week, it reads the social backlog's Active section and can pull ideas directly into the schedule. Unposted calendar items flow back to the backlog. Items that sit in Active for two weeks without being posted get moved to Potential during grooming.</p>
<p>The backlogs are the memory between weeks. The calendar is this week's plan.</p>
<hr />
<h2>The Voice Loop</h2>
<p>The part I didn't expect to build: the system that teaches itself how I write.</p>
<p>The content calendar drafts social posts in my voice using a <code>/writing-voice</code> skill. That skill is a detailed reference document. It covers how I structure arguments, my default sentence length, how humor shifts between social and newsletters, emoji habits, hashtag patterns. Specific enough that AI-drafted posts are close to what I'd write.</p>
<p>Close, but not right. I still rewrite most posts before publishing.</p>
<p>The content calendar captures both versions. The AI draft sits in the blockquote under each day. My rewritten version goes in the <code>Posted:</code> block. Over time, these pairs accumulate.</p>
<p>Once a month, I run <code>/voice-review</code>. It reads every AI draft and Posted pair across the past 4+ weeks. It categorizes the edits I made: tone shifts, length changes, added specifics, dropped sarcasm, added emoji. It tallies patterns across weeks and proposes updates to three skills:</p>
<ol>
<li><strong>writing-voice</strong> (the reference document itself)</li>
<li><strong>social-coach</strong> (the skill that reviews posts for voice accuracy)</li>
<li><strong>content-calendar</strong> (the skill that drafts the posts)</li>
</ol>
<p>The proposals only apply if a pattern shows up in 3+ posts or across 2+ separate weeks. One-off edits are noise. Consistent rewrites are signal.</p>
<p>After the first review, I found that the AI was consistently writing resigned, self-deprecating social posts (&quot;classic.&quot;, &quot;I don't know, maybe write the doc.&quot;) and I was consistently rewriting them toward genuine enthusiasm (&quot;Super excited!&quot;, &quot;Docs save time!&quot;). The writing-voice skill now explicitly says: dry humor belongs in newsletters and podcasts, social posts get sincere energy. The next round of drafts should be closer.</p>
<p>The flywheel: write → post → review → update the voice → write better next time.</p>
<hr />
<h2>The Full Weekly Cycle</h2>
<p>Here's how a typical week runs end to end:</p>
<p><strong>Sunday night (automated):</strong>
Platform syncs → social review → content calendar generated. I wake up Monday with a plan.</p>
<p><strong>Monday through Sunday:</strong>
I review each day's post, edit if needed, post to Twitter and Bluesky, fill in the <code>Posted:</code> block and links. Mid-week ideas get captured with <code>/content-idea</code> and routed to the right backlog.</p>
<p><strong>Thursday:</strong>
I run <code>/newsletter-writer</code> to draft it, pulling from the vault's work logs, recent content, and newsletter backlog.</p>
<p><strong>Friday:</strong>
Newsletter goes out.</p>
<p><strong>Saturday:</strong>
Podcast publishes. I run <code>/import-podcast</code> to pull the episode into the vault, then <code>/podcast-review</code> to update the topics backlog.</p>
<p><strong>End of month:</strong>
<code>/voice-review</code> compares AI drafts to actual posts and proposes skill updates. <code>/review-monthly</code> aggregates weekly reviews and forces promote-or-drop decisions on lingering signals.</p>
<p>Every step reads from and writes to the same vault. The output of one skill becomes the input of another. The stats review feeds the content calendar. The content calendar produces drafts. The drafts get rewritten. The rewrites teach the voice skill. The voice skill improves the next round of drafts.</p>
<hr />
<h2>What I'd Change</h2>
<p><strong>The 70/20/10 mix target needs rethinking for daily posting.</strong> The original content strategy was built for 3-4 posts a week. Seven posts a week shifts the math. Some weeks are heavier on personal content because there are only so many value observations per week. I'm still calibrating.</p>
<p><strong>Backlog grooming needs a trigger, not just a timer.</strong> The &quot;2 weeks in Active then move to Potential&quot; rule is fine for social posts. For blog and newsletter ideas, timeliness varies more. A blog idea about a conference talk might sit for months and still be valid. I'll probably split the aging rules by medium.</p>
<p><strong>The voice review needs more data.</strong> One month of before/after pairs showed clear patterns. But the sample is small. The real test is whether the second month's drafts need fewer rewrites. I'll know in about a month.</p>
<hr />
<h2>The Stack</h2>
<p>For anyone who wants to replicate this:</p>
<ul>
<li><strong>Obsidian</strong> as the knowledge base (any vault structure works, but context files and signals make it much better)</li>
<li><strong>Claude Code</strong> for building skills (each skill is a markdown file with instructions, no traditional code)</li>
<li><strong>OpenClaw</strong> on a DigitalOcean droplet ($24/month) for the always-on agent</li>
<li><strong>Obsidian Headless</strong> to keep the server and your Mac in sync</li>
<li><strong>Discord</strong> as the chat interface (each channel maps to a project)</li>
<li><strong>Bluesky, Twitter/X, Buttondown, Transistor</strong> as publishing platforms (the sync skills pull from their APIs)</li>
</ul>
<p>The total stack cost is about $29/month: $24 for the server and $5 for Obsidian Sync. I also bought $20 in Twitter API credits to pull weekly stats, but that should last a while at this usage level. Everything else is free or already paid for.</p>
<p>At the time of writing, 26 Claude Code skills power the whole system. Each one is a markdown file. No Python scripts, no cron jobs, no custom API integrations. The skills are instructions the AI follows. If the workflow changes, I edit the markdown.</p>
<hr />
<h2>How It Started</h2>
<p>I didn't set out to build a content system. I set out to stop forgetting things.</p>
<p>The work logs started because I needed a brag doc for performance reviews. The stats syncs started because I wanted to know which platform was worth my time. The content calendar started because I was posting randomly and wanted a plan. The voice review started because the AI drafts were too polished and self-deprecating, and I kept rewriting them toward something more direct.</p>
<p>Each tool solved a specific problem. The flywheel emerged because they all read and write to the same vault.</p>
<hr />
<p>Hit me up on <a href="https://bsky.app/profile/cmgmyr.dev">Bluesky</a> or <a href="https://x.com/cmgmyr">X</a> if you're building something similar. I'd like to hear how others are approaching this.</p>
]]></description>
                <pubDate>Tue, 24 Mar 2026 13:00:00 +0000</pubDate>
            </item>
                    <item>
                <title>I Wired OpenClaw into My Obsidian Vault. Here&#039;s What It Can Do.</title>
                <link>https://chrisgmyr.dev/blog/i-wired-openclaw-into-my-obsidian-vault-heres-what-it-can-do</link>
                <guid isPermaLink="true">https://chrisgmyr.dev/blog/i-wired-openclaw-into-my-obsidian-vault-heres-what-it-can-do</guid>
                <description><![CDATA[<p>Last week's <a href="https://chrisgmyr.dev/newsletter/dev-notes-march-13-2026">newsletter</a> opened with a problem: I'm running four or more Claude sessions at once, and none of them know what the others are doing. I'm the routing layer. The coordination overhead from the tool that's supposed to make me faster is becoming its own job.</p>
<p>This week I did something about it.</p>
<p>I restructured my Obsidian vault into a format AI agents can navigate, built an automation layer with Claude Code skills, and deployed OpenClaw on a DigitalOcean droplet with headless Obsidian Sync tying it all together. The vault is the brain. The agent reads it, writes to it, and picks up where the last session left off.</p>
<p>This is how I approached it, what I learned, and what I'd change.</p>
<hr />
<h2>The Problem: Smart Tools, No Memory</h2>
<p>Claude is fast. Claude Code is faster. But every session starts from zero. You open a new thread, re-explain the project, re-describe the architecture, re-state your preferences. The AI forgets everything between conversations.</p>
<p>I tried to solve this with longer system prompts, CLAUDE.md files in repos, and careful copy-pasting of context between sessions. It helped. It wasn't enough.</p>
<p>The real issue: I have a dozen active projects across my day job, a consulting business, a podcast, a newsletter, and personal projects. No single prompt can hold all of that. And the projects connect to each other in ways that matter. A work observation becomes a newsletter angle. A podcast conversation surfaces a blog post idea. A side project teaches a lesson that applies to the day job.</p>
<p>I needed a persistent, structured knowledge base that any AI session could read and write to. I already had one. I just needed to reorganize it.</p>
<hr />
<h2>Step 1: Restructure the Vault</h2>
<p>My Obsidian vault (&quot;Gray Matter&quot;) had been a flat collection of project folders with inconsistent formats. Some projects had detailed context files. Most didn't. Meeting notes lived in one place, work logs in another, but nothing explicitly connected them.</p>
<p>The restructuring introduced three conventions:</p>
<p><strong>Domain-based hierarchy.</strong> Everything lives under <code>Work/</code>, <code>Publishing/</code>, or <code>Personal/</code>. Work splits by employer or client. Projects nest under their parent area.</p>
<p><strong>Context files per project.</strong> Every project gets three files:</p>
<ul>
<li><code>ProjectName.md</code> with a stable summary, stack, and key links</li>
<li><code>_context.md</code> with active state: current work, open decisions, blockers, signals, key people, and notes for the agent</li>
<li><code>_last-session.md</code> with what was done, what needs review, and next actions</li>
</ul>
<p><strong>Context stacking.</strong> At session start, an agent reads files in order: area context, project context, signals, last session. Each layer adds specificity without repeating what's above it. A session about a specific work project reads the area context, then the team context, then the project context, then signals, then the last session file. Five files, full picture.</p>
<p>Context files are session memory. <code>_context.md</code> answers &quot;what should an AI know right now?&quot; and <code>_last-session.md</code> answers &quot;what happened last time?&quot; Together, they eliminate the re-explanation problem.</p>
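<p>To make the convention concrete, here's a minimal <code>_context.md</code> sketch. The section names follow the description above; the specifics are illustrative, not copied from my vault:</p>
<pre><code class="language-markdown">---
updated: 2026-03-18
---
## Current work
- Migrating the billing service to the new queue

## Open decisions
- Whether to batch webhook retries (leaning yes)

## Blockers
- Waiting on staging access for the new droplet

## Signals
- Review turnaround has roughly doubled since the AI rollout

## Key people
- Alex (platform), Sam (billing)

## Notes for the agent
- Prefer small diffs; never touch the legacy payments module
</code></pre>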
<hr />
<h2>Step 2: Build the Automation Layer</h2>
<p>Restructuring the vault was the foundation. Making it self-maintaining was the real unlock.</p>
<p>I built a suite of Claude Code skills that handle the weekly workflow:</p>
<ul>
<li><strong><code>/review-daily</code></strong> enriches work log entries with data from GitHub, Jira, Confluence, and meeting notes</li>
<li><strong>Platform syncs</strong> (<code>/bluesky-sync</code>, <code>/twitter-sync</code>, <code>/buttondown-sync</code>, <code>/transistor-sync</code>) pull engagement data into the vault</li>
<li><strong><code>/social-review</code></strong> aggregates stats across all platforms and maintains a running Signals document</li>
<li><strong><code>/review-weekly</code></strong> and <strong><code>/review-monthly</code></strong> generate structured reviews from the week's or month's raw data</li>
<li><strong><code>/content-calendar</code></strong> mines the vault for post ideas, checks what resonated last week, and produces a week of social posts, newsletter angles, and podcast topics</li>
<li><strong><code>/summarize-meetings</code></strong> generates frontmatter summaries for meeting notes based on their content</li>
<li><strong><code>/import-podcast</code></strong> pulls in new episodes from Transistor</li>
</ul>
<p>Each skill reads specific vault files and writes its output back to the vault. The vault gets richer over time without manual effort. The content calendar skill reads work signals, publishing stats, recent newsletters, and podcast topics to suggest posts. It knows what performed well last week because the stats are in the vault. It knows what I'm working on because the work logs are in the vault.</p>
<p>The skills started as local Claude Code commands. Now they run from the server on a schedule or on demand through Discord.</p>
<hr />
<h2>Step 3: Deploy OpenClaw</h2>
<p>OpenClaw is a self-hosted AI agent runtime. You interact with it through Discord (or WhatsApp, or other channels). It reads your files, runs commands, and maintains persistent context across sessions.</p>
<p>The setup:</p>
<p><strong>DigitalOcean droplet</strong> (4GB RAM, 2 vCPU, $24/month). Runs Ubuntu with Node 22. OpenClaw, obsidian-headless, and pm2 for process management.</p>
<p><strong>Obsidian headless sync.</strong> This is the piece that makes the architecture work. <code>obsidian-headless</code> is a new official Obsidian product (released February 2026) that syncs your vault to a server without the desktop app. Changes I make in Obsidian on my Mac sync to the server. Changes the agent makes on the server sync back. The vault is the shared state layer.</p>
<pre><code>Mac (Obsidian) &lt;--sync--&gt; Server (obsidian-headless) &lt;--reads/writes--&gt; OpenClaw
</code></pre>
<p><strong>Discord as the interface.</strong> Each Discord channel maps to a project domain. I message the bot, it reads the relevant context files from the vault, and responds with full project awareness. The sessions persist per-channel, so a conversation about a project picks up where I left off.</p>
<p>The channel-to-project mapping lives in <code>openclaw.json</code>. Each entry points to a vault path and injects a system prompt telling the agent what it's working in:</p>
<pre><code class="language-json">&quot;channels&quot;: {
  &quot;discord&quot;: {
    &quot;guilds&quot;: {
      &quot;YOUR_GUILD_ID&quot;: {
        &quot;requireMention&quot;: false,
        &quot;users&quot;: [&quot;YOUR_USER_ID&quot;],
        &quot;channels&quot;: {
          &quot;CHANNEL_ID_WORK_PROJECT&quot;: {
            &quot;systemPrompt&quot;: &quot;This is the #work-project channel. Project vault path: /vault/Work/Company/Project/. On first message, load context following the order defined in Claude.md: area _context.md → project _context.md → Signals (if present) → _last-session.md → related projects. After completing any meaningful task, decision, or topic, update _last-session.md (frontmatter + What we did / Changes made / Open work). If project state changed, also update _context.md — overwrite stale content, do not append.&quot;
          }
        }
      }
    }
  }
}
</code></pre>
<p>Every channel follows the same pattern: vault path + context stacking instructions. The agent knows where it is, what to read first, and what to write when the session ends. Adding a new project is one new entry in this config.</p>
<p><strong>Cloudflare Tunnel for the Control UI.</strong> OpenClaw ships a browser-based Control UI for managing sessions, config, and devices. The gateway binds to loopback by default, so nothing is publicly exposed. I use a Cloudflare Tunnel to access it remotely without opening any ports, with a Cloudflare Access PIN policy as an outer auth gate. The OpenClaw token and device pairing are the inner gates. Three layers, no open ports.</p>
<p><strong>Skills on the server.</strong> Obsidian Sync doesn't sync hidden directories, so <code>.claude/skills/</code> needs its own path. I set up a separate git repo for skills. Both my local machine and the server pull from it. OpenClaw discovers skills automatically from the workspace directory.</p>
<p>The whole deployment took an afternoon. Most of the time was configuring Discord bot permissions and testing the sync pipeline end-to-end.</p>
<hr />
<h2>What This Actually Looks Like Day to Day</h2>
<p>Monday morning. I open Discord on my phone and ask the publishing channel: &quot;What's on the content calendar this week?&quot; OpenClaw reads <code>Publishing/Content/2026-W12.md</code> and gives me the rundown. I tell it to adjust Wednesday's post. It updates the file. The change syncs to my Mac by the time I sit down.</p>
<p>During a meeting, I notice a pattern worth tracking. After the meeting, I tell the work channel: &quot;Add a signal about review bottlenecks increasing since the AI rollout.&quot; It appends to <code>Work/Signals.md</code>.</p>
<p>Friday evening, I run the weekly review. The agent reads five days of work logs, meeting notes, and signals, then generates a structured review with metrics, recurring themes, and brag doc candidates. It writes the review to the vault. I read it in Obsidian on my couch.</p>
<p>The agent isn't doing anything I couldn't do manually. It's doing the things I wouldn't get around to. The weekly reviews, the stats syncs, the signal tracking. The maintenance work that makes the vault useful but never feels urgent enough to do by hand.</p>
<hr />
<h2>The Architecture Decision That Matters Most</h2>
<p>The vault is the single source of truth. Not OpenClaw's memory. Not a database. Not a separate knowledge graph. Plain markdown files in Obsidian, synced everywhere.</p>
<p><strong>Portability.</strong> If OpenClaw disappears tomorrow, the vault is still there. Every context file, every review, every signal. The data isn't locked into an agent framework. It's markdown.</p>
<p><strong>Readability.</strong> I can open any file in Obsidian and see exactly what the agent knows. No black-box memory systems. No embeddings I can't inspect. If the agent has wrong context, I edit a markdown file.</p>
<p><strong>Composability.</strong> Claude Code reads these files locally. OpenClaw reads them on the server. A future agent I haven't built yet could read them too. The format is the interface.</p>
<hr />
<h2>What I'd Do Differently</h2>
<p><strong>Start with fewer context files.</strong> I populated context files for every project at once. It was a lot. Better to start with your 2-3 most active projects and expand as you go.</p>
<p><strong>Test the sync pipeline earlier.</strong> I spent time getting the vault structure right before deploying obsidian-headless. Should have set up the sync first and iterated on the structure with the agent already running. Seeing how the agent actually uses the files changes how you write them.</p>
<p><strong>Keep <code>_context.md</code> ruthlessly current.</strong> A context file that's two weeks old is worse than no context file. It gives the agent confident but wrong information. The fix: build the update instructions directly into the channel system prompt. Instead of &quot;update at the end of a session&quot; (which never fires on Discord, since sessions don't have a clear end), use &quot;after completing any meaningful task, update both <code>_last-session.md</code> and <code>_context.md</code>.&quot; Give the agent an explicit format for each file. The agent writes mid-conversation, not on some hypothetical session close that never happens.</p>
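<p>The explicit format matters as much as the trigger. A minimal <code>_last-session.md</code> sketch of the kind I mean (the fields mirror the convention above; the contents are illustrative):</p>
<pre><code class="language-markdown">---
updated: 2026-03-18
session: discord #work-project
---
## What we did
- Drafted the retry-batching proposal

## Changes made
- Updated _context.md (open decisions)

## Open work
- Get Alex's sign-off on the batching approach
</code></pre>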
<hr />
<h2>Try It Yourself</h2>
<p>You don't need OpenClaw to start. The vault structure works with Claude Code today.</p>
<ol>
<li>
<p><strong>Pick your top 3 projects.</strong> Create <code>_context.md</code> and <code>_last-session.md</code> for each one. Write them for an AI reader, not for yourself. What does a new session need to know?</p>
</li>
<li>
<p><strong>Add a CLAUDE.md at the vault root.</strong> Tell the agent where things live, what the conventions are, and what to read first. This is your vault's instruction manual.</p>
</li>
<li>
<p><strong>Add a USER.md.</strong> Who are you, what do you work on, and what does the agent need to know to be useful? Keep it under 300 words.</p>
</li>
<li>
<p><strong>Update <code>_last-session.md</code> after meaningful work.</strong> This is the handoff note to your next session. What did you do, what's pending, what's next. If you're using OpenClaw with Discord channels, build this into the channel system prompt so the agent writes after each task automatically. Don't rely on &quot;end of session&quot; as a trigger; it never fires.</p>
</li>
<li>
<p><strong>Build one automation skill.</strong> Start with something you do every week that pulls data from multiple sources. The weekly review is a good first candidate.</p>
</li>
<li>
<p><strong>When you're ready for always-on, deploy OpenClaw.</strong> A $24/month droplet, obsidian-headless for sync, Discord for the interface. Map each Discord channel to a project in <code>openclaw.json</code>. The vault you already structured is the brain.</p>
</li>
</ol>
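<p>For step 2, the vault-root CLAUDE.md doesn't need to be long. A starting sketch, with paths and wording as examples rather than a prescription:</p>
<pre><code class="language-markdown"># Vault instructions

## Layout
- Work/ — day job and clients, one folder per project
- Publishing/ — newsletter, podcast, blog, social
- Personal/ — side projects

## Conventions
- Every project has ProjectName.md, _context.md, _last-session.md
- Read order: area _context.md → project _context.md → Signals → _last-session.md

## Rules
- Overwrite stale context; don't append
- Update _last-session.md after meaningful work
</code></pre>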
<p>Your AI gets better when it has memory. The simplest memory system is files it can read.</p>
<hr />
<p>If you try this approach or have a different solution to the multi-session problem, find me on <a href="https://bsky.app/profile/cmgmyr.dev">Bluesky</a> or <a href="https://x.com/cmgmyr">X</a>.</p>
]]></description>
                <pubDate>Thu, 19 Mar 2026 13:00:00 +0000</pubDate>
            </item>
                    <item>
                <title>I Rebuilt My Personal Site with Claude AI and Claude Code. Here&#039;s Exactly How.</title>
                <link>https://chrisgmyr.dev/blog/i-rebuilt-my-personal-site-with-claude-ai-and-claude-code-heres-exactly-how</link>
                <guid isPermaLink="true">https://chrisgmyr.dev/blog/i-rebuilt-my-personal-site-with-claude-ai-and-claude-code-heres-exactly-how</guid>
                <description><![CDATA[<p>I'd been on Hashnode for years. It was fine. But &quot;fine&quot; started to bother me.</p>
<p>My site had no space for my newsletter archive, my podcast, my projects, or my talks. It was just a blog with a custom domain. My content was scattered across Hashnode, Buttondown, Transistor, and a few other places with no unified home. I wanted a single hub for everything I publish, built with the tools I know: Laravel, Filament, Blade, Tailwind. So I rebuilt <a href="https://chrisgmyr.dev">chrisgmyr.dev</a> from scratch as a self-hosted Laravel application.</p>
<p>The whole thing, from first conversation to deployed on Laravel Cloud, took about 24 hours (not working the whole time, of course). I used Claude AI for design and Claude Code for implementation. This is how I approached it.</p>
<hr />
<h2>Step 1: Design First, In Chat</h2>
<p>Before touching code, I spent time in the Claude chat interface talking through the design. Most people skip this step. It's the most valuable one.</p>
<p>I didn't show up with a spec. I showed up with a handful of personal sites from other developers I respect, plus a vibe: dark-first, clean but distinctive, personality through typography and subtle interactions.</p>
<p>Before Claude made any recommendations, it ran a research pass, reviewing 14 developer blogs, dark mode design systems, typography pairings, and multi-content architecture patterns. That's not something I asked for explicitly. It's just what happened when I treated the conversation as a design session rather than a prompt. From that research, we worked through specifics together.</p>
<p><strong>Colors.</strong> We landed on amber (<code>#F59E0B</code>) as the accent instead of the blue or green you see on every developer site. Amber is warm, pairs well with dark backgrounds, and clears accessibility contrast requirements. We mapped out a full palette for dark and light modes.</p>
<p><strong>Typography.</strong> Space Grotesk for headings, Inter for body text, JetBrains Mono for code and metadata. Using a monospace font for dates, tags, and reading times gives the site a subtle technical feel without needing custom illustrations.</p>
<p><strong>Layout.</strong> A bento grid homepage that shows all content types at once: blog posts, latest podcast episode, newsletter signup, featured project, and recent talks, all in one responsive grid. Plus <code>/now</code> and <code>/uses</code> pages for the personal context stuff I always end up wanting to reference.</p>
<p><strong>Interactions.</strong> A cursor-following amber glow on the background, a reading progress bar on blog posts, cards that scale on hover with border color shifts, and animated link underlines. Small details that make the site feel alive without being distracting.</p>
<p>The key was treating Claude as a design collaborator, not a search engine. I pushed back on suggestions. I asked &quot;why amber over teal?&quot; and refined based on the reasoning. The conversation naturally produced a complete design system.</p>
<hr />
<h2>Step 2: Write the Plan Document</h2>
<p>After the design conversations, I compiled everything into a single markdown file: <code>chrisgmyr-dev-project-plan.md</code>. Think of it as a detailed project plan and design spec combined.</p>
<p>The document covered:</p>
<ul>
<li><strong>Design system:</strong> every color value, typography choice, spacing rule, and interactive detail</li>
<li><strong>Site architecture:</strong> routes, page layouts with ASCII wireframes, navigation structure</li>
<li><strong>Data models:</strong> migration schemas for posts, tags, newsletters, episodes, projects, talks, and pages</li>
<li><strong>Filament admin setup:</strong> resources to create, Block Builder blocks, dashboard widgets, and Unsplash integration (a dedicated <code>UnsplashService</code>, a custom picker modal inside Filament, attribution rendering on the frontend, and responsive image delivery via Unsplash CDN URL params)</li>
<li><strong>File structure:</strong> every controller, model, service, and view mapped out</li>
<li><strong>Implementation phases:</strong> six phases with specific tasks and acceptance criteria</li>
<li><strong>Package recommendations:</strong> what to install and why</li>
</ul>
<p>The document ran about 800 lines of markdown. That might sound like overkill. It saved days of back-and-forth during the build.</p>
<p>When Claude Code had a question about how something should work, the answer was already in the plan. The quality of AI output is directly tied to the quality of your input. A vague &quot;build me a blog&quot; gets you a generic blog. A plan with wireframes, color values, and acceptance criteria gets you something that matches your vision.</p>
<hr />
<h2>Step 3: Build Phase by Phase with Claude Code</h2>
<p>With the plan in place, I opened Claude Code and worked through the phases one at a time. Each phase had a clear scope and acceptance criteria, so I knew when to move on.</p>
<h3>Phase 1: Foundation</h3>
<p>Scaffolding: Laravel 12, Filament 5, Tailwind CSS 4, all migrations and models, the layout with nav and footer, and the theme toggle. Claude Code generated the initial structure based on the plan. I reviewed every file before committing.</p>
<h3>Phase 2: Filament Admin</h3>
<p>Seven Filament resources (Posts, Tags, Newsletters, Episodes, Projects, Talks, Pages), the Block Builder, an Unsplash picker modal for hero images, and dashboard widgets. This phase was the most Filament-heavy. Having the correct namespaces and patterns in the plan kept things on track.</p>
<h3>Phase 3: Public Pages</h3>
<p>Controllers and Blade views for every page. Blog rendering with markdown-to-HTML conversion, syntax highlighting with Prism.js, newsletter signup forms that post to Buttondown, and RSS feed generation.</p>
<h3>Phase 4: Homepage Bento Grid</h3>
<p>The responsive CSS Grid homepage pulling real data from all content types. This was the page I was most excited about. Seeing it come together with live data was satisfying.</p>
<h3>Phase 5: Visual Polish</h3>
<p>The cursor-following glow effect, reading progress bar, scroll-triggered animations, Open Graph meta tags, a custom 404 page, and the sitemap. The details that tell visitors you cared.</p>
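<p>The reading progress bar from this phase is mostly one calculation. Here's a sketch of that math; the function name and element ID are mine, not the site's actual code:</p>
<pre><code class="language-javascript">// Percent of the page scrolled, clamped to 0-100.
function readingProgress(scrollTop, docHeight, viewportHeight) {
  const scrollable = docHeight - viewportHeight;
  if (scrollable &lt;= 0) return 100; // content shorter than the viewport
  return Math.min(100, Math.max(0, (scrollTop / scrollable) * 100));
}

// Browser wiring, guarded so the function is testable outside the DOM.
if (typeof window !== "undefined") {
  window.addEventListener("scroll", function () {
    const pct = readingProgress(
      window.scrollY,
      document.documentElement.scrollHeight,
      window.innerHeight
    );
    document.getElementById("reading-progress").style.width = pct + "%";
  }, { passive: true });
}
</code></pre>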
<h3>Phase 6: Content Migration</h3>
<p>Importing blog posts from Hashnode, newsletter archive from Buttondown, podcast episodes from Transistor's API, and setting up 301 redirects so old URLs kept working.</p>
<hr />
<h2>The Simplify Pass</h2>
<p>Here's a workflow I want to highlight separately, because it changed how I think about building with AI.</p>
<p>After Claude Code finished each phase, I'd review the output, then ask it to simplify. Look at the git history:</p>
<pre><code>d1bbcb4 Phase 1: Foundation - models, migrations, layout, design system, routes
198d00e Phase 2: Filament admin - all content resources, dashboard, Unsplash picker
d2ed096 Simplify Phase 2: extract SlugHelper, fix duplicate labels, remove dead code
b8c72fa Phase 3: Wire up markdown rendering, prose styles, syntax highlighting
aba807a Simplify Phase 3: remove duplicate CSS, fix Prism timing, clean deps
50f9ec7 Phase 4: Homepage bento grid with all content types
8fa343d Simplify Phase 4: fix YouTube ID extraction, replace inline JS hover
ae9d3ab Phase 5: Visual polish, OG images, 404 page, sitemap
4b6e2a4 Simplify Phase 5: cache sitemap, add Project screenshot accessor
b052095 Phase 6: Content migration tooling and Hashnode redirects
331c4c9 Simplify Phase 6: fix cover images, canonical URLs, route structure
</code></pre>
<p>The first pass gets the functionality right. The second pass cleans it up: removes dead code, extracts shared helpers, fixes edge cases, tightens the CSS. This two-pass approach works well with AI code generation. Let it build fast, then review and refine.</p>
<hr />
<h2>Building Away from My Desk with Claude Code Remote Sessions</h2>
<p>One feature I used a lot that I haven't seen written about much: Claude Code remote sessions.</p>
<p>The workflow: start a Claude Code session locally on my Mac, then connect to that same session from the Claude mobile app. This let me review progress, answer Claude's questions, and steer the implementation while away from my desk, during a lunch break, sitting on the couch after the kids went to bed, or waiting somewhere.</p>
<p>Claude Code will pause and ask for input when it hits a decision point or needs clarification. Without remote sessions, those pauses meant coming back to a stalled terminal. With remote sessions, I could respond from my phone, keep the session moving, and come back to a desk with more work done.</p>
<p>If you're running longer Claude Code sessions on a personal project, set up remote access. The async nature of the workflow fits evening-and-weekend building well.</p>
<hr />
<h2>What Worked</h2>
<p><strong>The plan document was the most important investment.</strong> Claude Code referenced it for architectural decisions, design values, and implementation details. No ambiguity. If you take one thing from this post, spend time on the plan before touching code.</p>
<p><strong>Phase-based builds kept scope manageable.</strong> Each phase had clear acceptance criteria. I knew when something was &quot;done enough&quot; to move on.</p>
<p><strong>The simplify pass caught real problems.</strong> Duplicate CSS selectors, dead code from refactoring, a timing bug with Prism.js initialization, inline JavaScript that should have been Alpine.js directives. A second review pass always finds something worth fixing.</p>
<p><strong>Reviewing every commit mattered.</strong> I didn't rubber-stamp the AI output. I pushed back on approaches I didn't like and made manual tweaks where my taste differed from what was generated. The AI wrote the bulk of the code. I steered the architecture and design decisions.</p>
<hr />
<h2>What I'd Do Differently</h2>
<p><strong>Tests earlier.</strong> I added the test suite after most of the building was done. Writing tests alongside each phase would have caught bugs sooner.</p>
<p><strong>Smaller phases.</strong> Phase 2 (Filament admin) and Phase 3 (public pages) were both large. Smaller chunks would have made the simplify passes more focused.</p>
<p><strong>Screenshot-driven iteration.</strong> For visual polish, I should have captured screenshots after each change to track the progression. It would make this post more visual too.</p>
<hr />
<h2>Try It Yourself</h2>
<p>If you want to take a similar approach on your own project:</p>
<ol>
<li>
<p><strong>Chat first, code later.</strong> Spend time in Claude AI talking through your design, architecture, and content strategy. Push back. Ask why. Refine until you have a clear vision.</p>
</li>
<li>
<p><strong>Write the plan.</strong> Compile the conversation into a structured document with color values, wireframes, data models, file structure, and phased tasks. Be specific. &quot;Dark mode&quot; isn't a plan. &quot;<code>#0E0E10</code> background, <code>#18181B</code> surface, <code>#F59E0B</code> accent&quot; is.</p>
</li>
<li>
<p><strong>Phase your build.</strong> Break the work into chunks with acceptance criteria. Two to four days per phase. Commit after each one.</p>
</li>
<li>
<p><strong>Simplify after each phase.</strong> Review the output, then run a cleanup pass. Remove dead code, extract shared logic, fix edge cases.</p>
</li>
<li>
<p><strong>Use remote sessions.</strong> If you're building in short windows of time, connect your local Claude Code session to the mobile app. You can keep things moving without being at your desk.</p>
</li>
<li>
<p><strong>Own the decisions.</strong> The AI writes the code. You own the taste, the architecture, and the tradeoffs. If something doesn't feel right, change it.</p>
</li>
</ol>
<hr />
<p>The site is live at <a href="https://chrisgmyr.dev">chrisgmyr.dev</a>. It's fast (server-rendered Blade, minimal JavaScript), looks distinctive, and gives me a single home for everything I publish.</p>
<p>Claude AI for design conversations and Claude Code for implementation felt like having a design partner and a senior engineer available at odd hours. The plan document was the bridge between the two. Writing it was the best time investment of the whole project.</p>
<p>If you try this approach, I'd love to hear about it. Find me on <a href="https://bsky.app/profile/cmgmyr.dev">Bluesky</a> or <a href="https://x.com/cmgmyr">X</a>.</p>
]]></description>
                <pubDate>Thu, 05 Mar 2026 13:00:00 +0000</pubDate>
            </item>
                    <item>
                <title>Building a YNAB CLI in Laravel</title>
                <link>https://chrisgmyr.dev/blog/building-a-ynab-cli-in-laravel</link>
                <guid isPermaLink="true">https://chrisgmyr.dev/blog/building-a-ynab-cli-in-laravel</guid>
<description><![CDATA[<p>I've been using You Need a Budget (YNAB) since 2019 and love it. I'm in the budget daily, entering transactions, tinkering, and making sure my family is on track with our financial goals and current priorities. However, as the transactions and payees have piled up over the years, the web UI has become slower and slower. It's not <em>that</em> bad, but when I only want to enter a simple transaction or check the current amount for a category, I don't always want to wait 20-30 seconds for the entire web UI to load.</p>
<p>I used to use a fantastic Chrome extension called <a href="https://sproutforynab.danielcabuslay.com/">Sprout</a> for entering transactions, but unfortunately the developer took the extension off the Chrome store and stopped development. I wanted something that could be just as fast and easy to use.</p>
<p>Over this blog series, we'll use the Laravel PHP framework and its built-in tools to build a robust CLI application that interfaces with YNAB's API.</p>
<p>The CLI won't replace the excellent web UI; it just adds faster ways to handle everyday tasks. It's a perfect fit since I'm usually in my terminal during the day.</p>
<p>Before we begin, if you aren't a YNAB user, please <a href="https://ynab.com/referral/?ref=PWdXkxyHrHqoKZAw&amp;utm_source=customer_referral">sign up</a>. Using my referral link gives you a 34-day trial, and if you decide to purchase a subscription, YNAB will give us both a free month!</p>
<h2>Set up Laravel</h2>
<p>To get started, install Laravel from the <a href="https://laravel.com/docs/11.x#creating-a-laravel-project">docs</a>. I usually use the Laravel installer, but do whatever works best for you. At the time of writing, this will install a new Laravel 11 application.</p>
<pre><code class="language-plaintext">laravel new ynab
cd ynab
</code></pre>
<p>Because we want an easy to use interface within the CLI, we'll also want to install <a href="https://laravel.com/docs/11.x/prompts">Laravel Prompts</a>.</p>
<pre><code class="language-plaintext">composer require laravel/prompts
</code></pre>
<p>Next, we'll add config values for our YNAB API key and the default budget ID. For now, this is set up for a single budget; in the future we could make it more configurable, or you can adapt it in your own application.</p>
<p>Within <code>.env</code> and <code>.env.example</code>, add</p>
<pre><code class="language-plaintext">YNAB_TOKEN=
YNAB_BUDGET_ID=
</code></pre>
<p>Within <code>config/services.php</code>, add</p>
<pre><code class="language-php">    'ynab' =&gt; [
        'token' =&gt; env('YNAB_TOKEN'),
        'budget_id' =&gt; env('YNAB_BUDGET_ID'),
    ],
</code></pre>
<p>To get a YNAB API key, go into your <code>Account Settings</code>, scroll toward the bottom, and under <code>Developer Settings</code> click the <code>Developer Settings</code> button. On the next page, under <code>Personal Access Tokens</code>, click the <code>New Token</code> button. This will reveal your API token, which you can paste into your <code>.env</code> file.</p>
<p>The budget ID is the UUID in the URL when viewing your budget in the YNAB web UI, so you can copy/paste it from there, or get it from the API (<code>/budgets</code>).</p>
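<p>If you'd rather pull the ID from the API, a quick one-off in <code>php artisan tinker</code> works. This is just a sketch using Laravel's HTTP client, and it assumes <code>YNAB_TOKEN</code> and the <code>services.ynab</code> config are already set up as described above.</p>
<pre><code class="language-php">use Illuminate\Support\Facades\Http;

// List budget IDs and names via the /budgets endpoint.
$budgets = Http::withToken(config('services.ynab.token'))
    -&gt;acceptJson()
    -&gt;get('https://api.ynab.com/v1/budgets')
    -&gt;json('data.budgets');

foreach ($budgets as $budget) {
    echo $budget['id'].' =&gt; '.$budget['name'].PHP_EOL;
}
</code></pre>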
<h2>Create Transaction Command</h2>
<p>Next, we'll want to set up our initial artisan command, so we'll run <code>php artisan make:command CreateTransaction</code>, which will create <code>app/Console/Commands/CreateTransaction.php</code>. Then, we'll stub out the initial command class.</p>
<pre><code class="language-php">class CreateTransaction extends Command
{
    protected $signature = 'ynab:transaction';
    protected $description = 'Creates a new transaction in YNAB';

    public function handle(): int
    {
        return self::SUCCESS;
    }
}
</code></pre>
<p>Before we get too far, let's set up a trait so the HTTP client can be shared across all commands, since we'll be creating more commands in the next blog posts. Create <code>app/Console/Commands/Concerns/Ynab.php</code> with the following code.</p>
<pre><code class="language-php">&lt;?php
declare(strict_types=1);

namespace App\Console\Commands\Concerns;

use Illuminate\Http\Client\PendingRequest;
use Illuminate\Support\Facades\Http;

trait Ynab
{
    protected function getClient(): PendingRequest
    {
        return Http::baseUrl('https://api.ynab.com/v1/')
            -&gt;acceptJson()
            -&gt;asJson()
            -&gt;throw()
            -&gt;withToken(config('services.ynab.token'));
    }
}
</code></pre>
<p>This helper removes the boilerplate of setting up the YNAB client: it configures JSON handling, throws an exception if there's an API error, and authenticates with the token from the config file.</p>
<p>Then, we'll include that in the transaction command class</p>
<pre><code class="language-diff">class CreateTransaction extends Command
{
+    use Concerns\Ynab;
+
    protected $signature = 'ynab:transaction';
    protected $description = 'Creates a new transaction in YNAB';
</code></pre>
<p>Similar to the Sprout UI, we'll want to build a CLI to handle:</p>
<ul>
<li>
<p>Amount</p>
</li>
<li>
<p>Account</p>
</li>
<li>
<p>Payee</p>
</li>
<li>
<p>Category</p>
</li>
<li>
<p>Optional memo</p>
</li>
<li>
<p>Optional flag</p>
</li>
<li>
<p>Optional cleared/uncleared</p>
</li>
</ul>
<p>For now, we'll skip date (we'll assume today's date), approval (default to true), split transactions, and other options found in the Sprout and YNAB UIs. For reference, here is the Sprout UI.</p>
<p><img src="https://fls-a1375e46-63b7-4062-ba2b-4aae42ad07e6.laravel.cloud/posts/lCOGHRt6bepcyxRKDIwijCxGLPOlltYUiwKfMuCd.png" alt="Sprout extension UI" /></p>
<h2>Gathering Budget Data</h2>
<p>Before we start building our CLI UI, we'll need to get initial budget data from YNAB. You can find the API docs <a href="https://api.ynab.com/">here</a>. For now, we'll want to gather</p>
<ul>
<li>
<p>Accounts</p>
</li>
<li>
<p>Payees</p>
</li>
<li>
<p>Categories</p>
</li>
</ul>
<p>So, we'll stub those out in our command</p>
<pre><code class="language-diff">class CreateTransaction extends Command
{
    use Concerns\Ynab;

    protected $signature = 'ynab:transaction';
    protected $description = 'Creates a new transaction in YNAB';

    public function handle(): int
    {
+        $accounts = $this-&gt;getAccounts();
+        $payees = $this-&gt;getPayees();
+        $categories = $this-&gt;getCategories();
+
        return self::SUCCESS;
    }
</code></pre>
<h3>Getting Budget Accounts</h3>
<pre><code class="language-php">    public function getAccounts(): Collection
    {
        $response = $this-&gt;getClient()-&gt;get('budgets/' . config('services.ynab.budget_id') . '/accounts');

        return collect($response-&gt;json('data.accounts'))
            -&gt;filter(fn($account) =&gt; $account['on_budget'] &amp;&amp; !$account['closed'])
            -&gt;mapWithKeys(fn($account) =&gt; [$account['id'] =&gt; $account['name']]);
    }
</code></pre>
<p>In this step, we're</p>
<ol>
<li>
<p>Getting the raw response from the <code>/accounts</code> endpoint from the API</p>
</li>
<li>
<p>Grabbing the data from the response under <code>data.accounts</code></p>
</li>
<li>
<p>Filtering for only accounts &quot;on budget&quot; and not closed. On budget means that the money is used for specific budgeting purposes. Alternatively, there are &quot;tracking&quot; accounts that don't impact your budget positively or negatively.</p>
</li>
<li>
<p>Creating a new collection of account IDs matching their name</p>
</li>
</ol>
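<p>To make the filter/map step concrete, here's the same logic sketched with plain PHP arrays and made-up sample data. There's no API call here; the real code uses Laravel collections, but the behavior is the same.</p>
<pre><code class="language-php">&lt;?php

// Sample of the /accounts payload shape (made-up data).
$accounts = [
    ['id' =&gt; 'a1', 'name' =&gt; 'Checking',  'on_budget' =&gt; true,  'closed' =&gt; false],
    ['id' =&gt; 'a2', 'name' =&gt; 'Old Card',  'on_budget' =&gt; true,  'closed' =&gt; true],
    ['id' =&gt; 'a3', 'name' =&gt; 'Brokerage', 'on_budget' =&gt; false, 'closed' =&gt; false],
];

// Keep open, on-budget accounts (like filter()).
$open = array_filter($accounts, fn ($a) =&gt; $a['on_budget'] &amp;&amp; ! $a['closed']);

// Key the result by id =&gt; name (like mapWithKeys()).
$byId = [];
foreach ($open as $a) {
    $byId[$a['id']] = $a['name'];
}

// Only the open, on-budget Checking account remains: ['a1' =&gt; 'Checking']
</code></pre>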
<h3>Getting Budget Payees</h3>
<pre><code class="language-php">    public function getPayees(): Collection
    {
        $response = $this-&gt;getClient()-&gt;get('budgets/' . config('services.ynab.budget_id') . '/payees');

        return collect($response-&gt;json('data.payees'))
            -&gt;reject(fn($payee) =&gt; $payee['deleted'])
            -&gt;mapWithKeys(fn($payee) =&gt; [$payee['id'] =&gt; $payee['name']]);
    }
</code></pre>
<p>In this step, we're</p>
<ol>
<li>
<p>Getting the raw response from the <code>/payees</code> endpoint from the API</p>
</li>
<li>
<p>Grabbing the data from the response under <code>data.payees</code></p>
</li>
<li>
<p>Removing any deleted payees</p>
</li>
<li>
<p>Creating a new collection of payee IDs matching their name</p>
</li>
</ol>
<h3>Getting Budget Categories</h3>
<p>This step is a little more involved because the <code>/categories</code> endpoint has top-level Category Groups, then includes the categories data within each group. You can see the schema for <code>CategoryGroupWithCategories</code> <a href="https://api.ynab.com/v1">here</a>.</p>
<pre><code class="language-php">    public function getCategories(): Collection
    {
        $response = $this-&gt;getClient()-&gt;get('budgets/' . config('services.ynab.budget_id') . '/categories');

        return collect($response-&gt;json('data.category_groups'))
            -&gt;reject(fn($categoryGroup) =&gt; $categoryGroup['hidden'] || $categoryGroup['deleted'])
            -&gt;pluck('categories')
            -&gt;flatten(1)
            -&gt;reject(fn($category) =&gt; $category['hidden'] || $category['deleted'] || $category['category_group_name'] === 'Credit Card Payments' || $category['category_group_name'] === 'Internal Master Category')
            -&gt;mapWithKeys(fn($category) =&gt; [$category['id'] =&gt; $category['category_group_name'].': '.$category['name']]);
    }
</code></pre>
<p>In this step, we're</p>
<ol>
<li>
<p>Getting the raw response from the <code>/categories</code> endpoint from the API</p>
</li>
<li>
<p>Grabbing the data from the response under <code>data.category_groups</code></p>
</li>
<li>
<p>Removing hidden and deleted category groups</p>
</li>
<li>
<p>Grabbing the <code>categories</code> array from each group</p>
</li>
<li>
<p>Flattening the collection so all of the categories are in the same root of the collection</p>
</li>
<li>
<p>Removing any categories that are hidden, deleted, or match specific YNAB-generated categories we don't need</p>
</li>
<li>
<p>Creating a new collection of category IDs matching the category group name with the category name</p>
</li>
</ol>
<p>Whew, we did it! Now, we'll move on to the Laravel Prompts UI.</p>
<h2>Leveraging Laravel Prompts for the CLI UI</h2>
<p>Prompts has numerous options for CLI inputs. These can either be one-off or chained together into a form. A form enables us to have multiple questions and inputs while collecting the responses to be used later. Since we'll have a number of questions to ask, we'll be using a form.</p>
<pre><code class="language-php">        $responses = form()
            -&gt;text('Amount', required: true, name: 'amount')
            -&gt;select('Account', options: $accounts, required: true, name: 'account')
            -&gt;search(
                label: 'Payee',
                options: fn (string $value) =&gt; strlen($value) &gt; 0
                    ? $payees-&gt;filter(fn ($payee) =&gt; Str::contains($payee, $value, true))-&gt;all()
                    : [],
                name: 'payee'
            )
            -&gt;search(
                label: 'Category',
                options: fn (string $value) =&gt; strlen($value) &gt; 0
                    ? $categories-&gt;filter(fn ($category) =&gt; Str::contains($category, $value, true))-&gt;all()
                    : [],
                name: 'category'
            )
            -&gt;text('Memo (optional)', name: 'memo')
            -&gt;select('Flag color (optional)', options: ['none', 'red', 'orange', 'yellow', 'green', 'blue', 'purple'], name: 'flag_color')
            -&gt;select('Cleared', options: ['cleared', 'uncleared'], default: 'uncleared', name: 'cleared')
            -&gt;submit();
</code></pre>
<p>There's a lot to unpack here, but it should be straightforward:</p>
<ol>
<li>
<p>We'll ask for the amount of the transaction - either positive (income) or negative (expense)</p>
</li>
<li>
<p>Select the account from a <code>select</code> prompt loaded with our accounts from the API</p>
</li>
<li>
<p>A search box for the payee loaded from the API. As you type, the suggestions auto-update and let you arrow up/down to select the payee. The <code>options</code> closure filters the payees and returns the values that match the given input.</p>
</li>
<li>
<p>Similar to the payees, the categories work the same way with a search suggest</p>
</li>
<li>
<p>Memo is a simple text field and also optional</p>
</li>
<li>
<p>Flag color is also optional using a select, like accounts. YNAB allows you to use their default flags, or change them in the UI. I only use these occasionally.</p>
</li>
<li>
<p>Cleared or uncleared transaction using another select. If I already know the transaction has cleared the bank/card, I'll change it to <code>cleared</code>; otherwise, most transactions are new and haven't hit the account yet, so we'll default to <code>uncleared</code></p>
</li>
<li>
<p>Finally, submit the responses</p>
</li>
</ol>
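<p>The two <code>search</code> callbacks boil down to a case-insensitive substring match over the id =&gt; name maps. Sketched in plain PHP with sample data, where <code>stripos</code> stands in for <code>Str::contains(..., true)</code>:</p>
<pre><code class="language-php">&lt;?php

// id =&gt; name map, as returned by getPayees() (made-up sample data).
$payees = [
    'p1' =&gt; 'Harris Teeter',
    'p2' =&gt; 'Target',
    'p3' =&gt; 'Hardware Store',
];

// Mirror the search() options closure: no input means no suggestions;
// otherwise keep payees whose name contains the input, ignoring case.
function suggest(array $payees, string $value): array
{
    if (strlen($value) === 0) {
        return [];
    }

    return array_filter(
        $payees,
        fn ($name) =&gt; stripos($name, $value) !== false
    );
}

// suggest($payees, 'har') keeps 'Harris Teeter' and 'Hardware Store'.
</code></pre>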
<h2>Creating the new transaction via the API</h2>
<p>The Laravel Prompts <code>form</code> will return an array of <code>$responses</code> keyed by each input's <code>name</code>. We'll use the <code>/transactions</code> endpoint to combine our data and use some sensible defaults for other transaction-related data points.</p>
<pre><code class="language-php">        $response = $this-&gt;getClient()-&gt;post('budgets/' . config('services.ynab.budget_id') . '/transactions', [
            'transaction' =&gt; [
                'account_id' =&gt; $responses['account'],
                'payee_id' =&gt; $responses['payee'],
                'category_id' =&gt; $responses['category'],
                'amount' =&gt; $responses['amount'] * 1000,
                'memo' =&gt; $responses['memo'],
                'flag_color' =&gt; $responses['flag_color'] === 'none' ? null : $responses['flag_color'],
                'cleared' =&gt; $responses['cleared'],
                'date' =&gt; now()-&gt;toDateString(),
                'approved' =&gt; true,
            ],
        ]);
</code></pre>
<ol>
<li>
<p>The account will be the account's uuid from YNAB</p>
</li>
<li>
<p>The payee will be the payee's uuid</p>
</li>
<li>
<p>The category will be the category's uuid</p>
</li>
<li>
<p>The amount (positive or negative) needs to be converted to what YNAB calls <code>milliunits</code> or 3 decimal places, so we'll multiply our amount by 1,000 (<a href="https://api.ynab.com/#formats">docs</a>)</p>
</li>
<li>
<p>Optional memo text</p>
</li>
<li>
<p>Optional flag color, or converting <code>none</code> to <code>null</code> in case a color isn't specified</p>
</li>
<li>
<p>Cleared value</p>
</li>
<li>
<p>Current date - I usually add transactions as they happen, so we'll default to the current date. We might make this configurable in the future.</p>
</li>
<li>
<p>Approved defaulted to <code>true</code> - YNAB will highlight any new transactions from its auto-import process, which you can either approve or delete. However, since I'm entering transactions manually through the CLI, I'm approving them at the same time.</p>
</li>
</ol>
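<p>One caveat on the milliunits conversion in step 4: <code>$amount * 1000</code> is float math, so a value like <code>0.29</code> becomes <code>289.99999…</code>. A defensive helper (my own addition, not in the post's code) rounds before sending:</p>
<pre><code class="language-php">&lt;?php

// Convert a currency amount to YNAB milliunits (1/1000 of a unit).
// Rounding first avoids float artifacts like 0.29 * 1000 = 289.99999...
function toMilliunits(float $amount): int
{
    return (int) round($amount * 1000);
}

// toMilliunits(0.29) === 290, whereas (int) (0.29 * 1000) === 289.
</code></pre>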
<h2>Handling API responses</h2>
<p>At this point, I'm not too worried about response handling, and everything has been working smoothly so far. We'll expand on this in the future, but again, we'll keep things simple right now.</p>
<pre><code class="language-php">        if ($response-&gt;successful()) {
            $this-&gt;info('Expense created successfully');
        } else {
            $this-&gt;error('Failed to create expense - ' . $response-&gt;json('error.detail'));
        }

        return self::SUCCESS;
</code></pre>
<h2>Wrapping Up</h2>
<p>I hope you enjoyed this quick journey through setting up a fresh Laravel application with Prompts and the YNAB API. As stated earlier, I'll be adding more blog posts to this series as I work through other commands and refactorings along the way. I'm sure you can already see some areas that could use improvement. Further, the YNAB API offers some interesting options like Delta Requests, Rate Limiting, and better error handling that we can explore later as well.</p>
<h3>Live Demo</h3>
<p><img src="https://fls-a1375e46-63b7-4062-ba2b-4aae42ad07e6.laravel.cloud/posts/CgsjS1z7VTiIvlwTGgLG4MAVJUVGPKqMv1lzFfal.gif" alt="demo CLI transaction" /></p>
<h3>Check out the repo on GitHub</h3>
<ul>
<li>
<p><a href="https://github.com/cmgmyr/laravel-ynab-cli">laravel-ynab-cli</a></p>
</li>
<li>
<p><a href="https://github.com/cmgmyr/laravel-ynab-cli/pull/1">PR #1 for the CreateTransaction command</a></p>
</li>
</ul>
<h3>Listen to the podcast episode</h3>

<h3>Subscribe</h3>
<p>Please subscribe to my newsletter, RSS feed, podcast, or the GH repo to follow along for more content!</p>
<h3>Join the YNAB Family</h3>
<p><a href="https://ynab.com/referral/?ref=PWdXkxyHrHqoKZAw&amp;utm_source=customer_referral">Signing up</a> using my referral link will give you a 34 day trial and if you decide to purchase a subscription, YNAB will give us both a free month!</p>
]]></description>
                <pubDate>Fri, 31 Jan 2025 22:49:05 +0000</pubDate>
            </item>
                    <item>
                <title>Colocating Tests in Laravel</title>
                <link>https://chrisgmyr.dev/blog/colocating-tests-in-laravel</link>
                <guid isPermaLink="true">https://chrisgmyr.dev/blog/colocating-tests-in-laravel</guid>
<description><![CDATA[<p>In a typical Laravel application, tests are housed in the <code>tests/</code> directory and spread across <code>tests/Feature</code> and <code>tests/Unit</code>, which mimics the structure within the <code>app/</code> directory. While this is a great starting point for apps, it quickly breaks down within a more extensive application and erodes developer efficiency and happiness over time.</p>
<p>Let's look at the typical structure with minimal application code and tests.</p>
<pre><code class="language-plaintext">.
├── app/
│   ├── Http/
│   │   └── Controllers/
│   │       └── MyUserController.php
│   └── Models/
│       └── User.php
└── tests/
    ├── TestCase.php
    ├── Feature/
    │   └── Http/
    │       └── Controllers/
    │           └── MyUserControllerTest.php
    └── Unit/
        └── Models/
            └── UserTest.php
</code></pre>
<p>This issue gets exacerbated in larger applications with additional sub-directories. For example,</p>
<pre><code class="language-plaintext">.
├── app/
│   ├── Http/
│   │   └── Controllers/
│   │       └── Admin/
│   │           └── Users/
│   │               └── MyUserController.php
│   └── Models/
│       └── User.php
└── tests/
    ├── TestCase.php
    ├── Feature/
    │   └── Http/
    │       └── Controllers/
    │           └── Admin/
    │               └── Users/
    │                   └── MyUserControllerTest.php
    └── Unit/
        └── Models/
            └── UserTest.php
</code></pre>
<p>This structure further breaks down if you've chosen to break up your application into modules or domains but don't include the tests within those directories. The application and test structures could easily be five or more directories deep.</p>
<p>Discoverability of application files with their test partners is challenging, and it is unclear if a given application file has a partnering test.</p>
<h2>The solution</h2>
<p>Colocating tests is an ideal structure. The test files are elevated to the same &quot;status&quot; and layer as application files. This makes it very easy to notice, at a glance, whether a file has a partnering test. Further, this simplifies pull requests for reviewers since the two files are next to each other.</p>
<p>The structure of our application now looks like this.</p>
<pre><code class="language-plaintext">.
├── app/
│   ├── Http/
│   │   └── Controllers/
│   │       ├── MyUserController.php
│   │       └── MyUserControllerTest.php
│   └── Models/
│       ├── User.php
│       └── UserTest.php
└── tests/
    └── TestCase.php
</code></pre>
<p>or</p>
<pre><code class="language-plaintext">.
├── app/
│   ├── Http/
│   │   └── Controllers/
│   │       └── Admin/
│   │           └── Users/
│   │               ├── MyUserController.php
│   │               └── MyUserControllerTest.php
│   └── Models/
│       ├── User.php
│       └── UserTest.php
└── tests/
    └── TestCase.php
</code></pre>
<p>Now imagine if this app contains hundreds of different directories and files. Can you easily see if <code>MyUserController</code> has a test? Yes! How about the <code>User</code> model? Yes again!</p>
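<p>For a concrete picture of what lives side by side, a colocated test is just a normal test file next to the class it covers. Here's a hypothetical Pest example; it assumes the <code>User</code> model has a fillable <code>name</code> attribute.</p>
<pre><code class="language-php">&lt;?php

// app/Models/UserTest.php — lives right next to app/Models/User.php

use App\Models\User;

it('has a name', function () {
    $user = new User(['name' =&gt; 'Chris']);

    expect($user-&gt;name)-&gt;toBe('Chris');
});
</code></pre>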
<h2>The mechanics</h2>
<p>Unfortunately, we cannot just move our tests into this new structure. We'll have to make a few adjustments, but they are small and straightforward.</p>
<h3>composer.json</h3>
<p>Add <code>autoload.exclude-from-classmap</code></p>
<pre><code class="language-diff">{
    &quot;autoload&quot;: {
        &quot;psr-4&quot;: {
            &quot;App\\&quot;: &quot;app/&quot;,
            &quot;Database\\Factories\\&quot;: &quot;database/factories/&quot;,
            &quot;Database\\Seeders\\&quot;: &quot;database/seeders/&quot;
-        }
+        },
+        &quot;exclude-from-classmap&quot;: [
+            &quot;app/**/*Test&quot;
+        ]
    }
}
</code></pre>
<h3>phpunit.xml</h3>
<p>Add or adjust <code>source</code> and <code>testsuites</code>.</p>
<pre><code class="language-diff">	&lt;source&gt;  
	    &lt;include&gt;  
	        &lt;directory suffix=&quot;.php&quot;&gt;./app&lt;/directory&gt;
	    &lt;/include&gt;  
+	    &lt;exclude&gt;  
+	        &lt;directory suffix=&quot;Test.php&quot;&gt;./app&lt;/directory&gt;
+	    &lt;/exclude&gt;  
	&lt;/source&gt;
    &lt;testsuites&gt;
-        &lt;testsuite name=&quot;Unit&quot;&gt;
-            &lt;directory&gt;tests/Unit&lt;/directory&gt;
-        &lt;/testsuite&gt;
-        &lt;testsuite name=&quot;Feature&quot;&gt;
-            &lt;directory&gt;tests/Feature&lt;/directory&gt;
-        &lt;/testsuite&gt;
+        &lt;testsuite name=&quot;Application Test Suite&quot;&gt;
+            &lt;directory&gt;./app&lt;/directory&gt;
+        &lt;/testsuite&gt;
    &lt;/testsuites&gt;
</code></pre>
<h3>tests/Pest.php</h3>
<p>Ensure all tests have the <code>TestCase</code> and other <code>uses</code> classes available.</p>
<pre><code class="language-diff">uses(  
    Tests\TestCase::class,  
    // Illuminate\Foundation\Testing\RefreshDatabase::class,  
-)-&gt;in('Feature');
+)-&gt;in('../');
</code></pre>
<h3>app/Console/Kernel.php (&lt; v11)</h3>
<p>Add or adjust the <code>load()</code> method.</p>
<pre><code class="language-diff">protected function load($paths)
{
	$paths = Arr::wrap($paths);
	$namespace = $this-&gt;app-&gt;getNamespace();

	foreach ((new Finder)-&gt;in($paths)-&gt;files() as $command) {
		$command = $namespace.str_replace(
			['/', '.php'],
			['\\', ''],
			Str::after($command-&gt;getPathname(), realpath(app_path()).DIRECTORY_SEPARATOR)
		);

+		$isTestClass = Str::endsWith($command, 'Test');

		if (
+			!$isTestClass &amp;&amp;
			is_subclass_of($command, Command::class) &amp;&amp;
			!(new ReflectionClass($command))-&gt;isAbstract()
		) {
			Artisan::starting(function ($artisan) use ($command) {
				$artisan-&gt;resolve($command);
			});
		}
	}
}
</code></pre>

<h3>External Tooling</h3>
<p>If your application uses a tool like <a href="https://codeclimate.com/quality">Code Climate</a>, you might need to make additional config changes. For example, within the <code>.codeclimate.yml</code>, add <code>**/*Test.php</code> within the <code>exclude_patterns</code>.</p>
<pre><code class="language-diff">exclude_patterns:
+  - '**/*Test.php'
</code></pre>
<h2>Quality of life</h2>
<p>If you like the idea of colocating tests but don't want the additional mess of looking at test files all the time, you can enable <a href="https://www.jetbrains.com/help/phpstorm/file-nesting-dialog.html">File Nesting</a> in PhpStorm and other JetBrains IDEs. Once you add the <code>.php -&gt; Test.php</code> mapping, your files will be collapsed.</p>
<p><img src="https://fls-a1375e46-63b7-4062-ba2b-4aae42ad07e6.laravel.cloud/posts/zT2lsWk0uksUZxZAyv8S6xAQmsCvb9JgDbdQWnXp.png" alt="PHPStorm file nesting" /></p>
<p>Thanks to <a href="https://x.com/enzoinnocenzi/status/1831723197537849375">Enzo Innocenzi</a> for bringing that to my attention.</p>
<p>If you use VSCode, please check out <a href="https://marketplace.visualstudio.com/items?itemName=liamhammett.temphpest">Liam Hammett</a>'s extension.</p>
<h2>Wrapping up</h2>
<p>Even though colocating your tests within the <code>app/</code> directory of your Laravel application is currently atypical, I highly recommend you try it out. By colocating tests alongside application code, the direct mapping between the two becomes stronger, minimizing cognitive load in your app. You'll also get a simpler structure and easier refactoring if you move your modules or domains within your app or to a new one.</p>
<p>Please comment below or reach out via <a href="https://x.com/cmgmyr">X/Twitter</a> about what you think about colocating tests and if you've implemented it within your application.</p>
<h2>Listen to the Podcast</h2>

]]></description>
                <pubDate>Sun, 01 Sep 2024 20:50:16 +0000</pubDate>
            </item>
                    <item>
                <title>Improve Your Productivity While Utilizing Laravel Mocks</title>
                <link>https://chrisgmyr.dev/blog/improve-your-productivity-while-utilizing-laravel-mocks</link>
                <guid isPermaLink="true">https://chrisgmyr.dev/blog/improve-your-productivity-while-utilizing-laravel-mocks</guid>
                <description><![CDATA[<p>Laravel has numerous test helpers and <a href="https://laravel.com/docs/10.x/mocking">mocks</a> within the framework, which is fantastic. However, I see other engineers getting caught up on debugging issues when they arise.</p>
<p>The <a href="https://laravel.com/docs/10.x/mocking#queue-fake">Laravel docs</a> show this as an example</p>
<pre><code class="language-php">Queue::assertPushed(function (ShipOrder $job) use ($order) {
    return $job-&gt;order-&gt;id === $order-&gt;id;
});
</code></pre>
<p>Straightforward, right? We're trying to ensure that the <code>ShipOrder</code> job was pushed to the queue and that the order IDs match.</p>
<p>What happens when the IDs do not match, or something else happens? We get the following error</p>
<pre><code class="language-bash">Failed asserting that false is true.
</code></pre>
<p>That's not very helpful. We know that the job wasn't pushed, but was it an error with the IDs matching, or something else? This issue gets exacerbated when you have multiple conditions that need to pass.</p>
<pre><code class="language-php">Queue::assertPushed(function (ShipOrder $job) use ($order) {
    return $job-&gt;order-&gt;id === $order-&gt;id
        &amp;&amp; $job-&gt;order-&gt;second === $order-&gt;second
        &amp;&amp; $job-&gt;order-&gt;third === $order-&gt;third
        &amp;&amp; $job-&gt;order-&gt;fourth === $order-&gt;fourth;
});
</code></pre>
<p>Now, if <em>any</em> of those conditions fail, then we'll still get the same PHPUnit error message.</p>
<pre><code class="language-bash">Failed asserting that false is true.
</code></pre>
<p>There's no easy way to know which condition failed. You can remove one condition at a time, but that's a time-consuming process. There's a better way.</p>
<p>Instead of using chained conditions, we can make an assertion for each comparison, then return <code>true</code> at the end. If any assertion fails, PHPUnit will stop and report it.</p>
<pre><code class="language-php">Queue::assertPushed(function (ShipOrder $job) use ($order) {
    $this-&gt;assertSame($job-&gt;order-&gt;id, $order-&gt;id);
    $this-&gt;assertSame($job-&gt;order-&gt;second, $order-&gt;second);
    $this-&gt;assertSame($job-&gt;order-&gt;third, $order-&gt;third);
    $this-&gt;assertSame($job-&gt;order-&gt;fourth, $order-&gt;fourth);

    return true;
});
</code></pre>
<p>Now, if any of these assertions fail, PHPUnit will zero in on the exact line number that failed, making it quicker to debug and getting you on your way.</p>
<pre><code class="language-bash">Failed asserting that X is identical to Y

# ---

Failed asserting that 456 is identical to 123.
 /tests/PathToFileTest.php:62
</code></pre>
<p>In conclusion, by using assertions for each comparison, you can quickly pinpoint and resolve issues in your code. Leveraging Laravel mocks effectively can significantly improve your productivity as a developer.</p>
<hr />
<p>Don't forget to share this article with your fellow developers to help them optimize their Laravel testing experience!</p>
]]></description>
                <pubDate>Tue, 26 Sep 2023 17:00:12 +0000</pubDate>
            </item>
                    <item>
                <title>Tips, Tricks, and Good Practices with Laravel&#039;s Eloquent</title>
                <link>https://chrisgmyr.dev/blog/tips-tricks-and-good-practices-with-laravels-eloquent</link>
                <guid isPermaLink="true">https://chrisgmyr.dev/blog/tips-tricks-and-good-practices-with-laravels-eloquent</guid>
                <description><![CDATA[<p>This is a talk I gave at <a href="https://www.meetup.com/trianglephp/events/zgpswmyxlbvb/">TrianglePHP</a> on Aug 16, 2018. We'll learn how Eloquent functions on the basic levels and continue through some more well-known methods and some possibly lesser-known ones. Then we'll finish with some more advanced ideas and techniques.</p>
<hr />

<hr />
<h2>Tips, Tricks, and Good Practices</h2>
<h3>with</h3>
<h2>Laravel's Eloquent</h2>
<h3>Presented by Chris Gmyr</h3>
<hr />
<h1>What is Laravel?</h1>
<p>Laravel is a modern PHP framework that helps you create applications using simple, expressive syntax, and offers powerful features like an ORM, routing, queues, events, notifications, simple authentication...</p>
<p>...and so much more!</p>
<hr />
<h1>What is Eloquent?</h1>
<blockquote>
<p>The Eloquent ORM included with Laravel provides a beautiful, simple ActiveRecord implementation for working with your database. Each database table has a corresponding &quot;Model&quot; which is used to interact with that table. Models allow you to query for data in your tables, as well as insert new records into the table.</p>
</blockquote>
<p><a href="https://laravel.com/docs/5.6/eloquent">https://laravel.com/docs/5.6/eloquent</a></p>
<hr />
<h1>The Basics</h1>
<hr />
<h1>A Model</h1>
<pre><code class="language-php">class Post extends Model
{
    // look Ma, no code!
}
</code></pre>
<pre><code class="language-bash">- id
- title
- created_at
- updated_at
</code></pre>
<pre><code class="language-php">$post = Post::find(1);
</code></pre>
<hr />
<h1>Artisan Goodies</h1>
<pre><code class="language-bash">php artisan make:model Product
</code></pre>
<hr />
<h1>Artisan Goodies</h1>
<pre><code class="language-bash">php artisan make:model Product -mcr
</code></pre>
<p><code>-m</code> will create a migration file, <code>-c</code> will create a controller, and <code>-r</code> will indicate that the controller should be resourceful.</p>
<hr />
<h1>Cruddy</h1>
<hr />
<h1>Creating</h1>
<pre><code class="language-php">$user = new User();
$user-&gt;first_name = 'Chris';
$user-&gt;email = 'cmgmyr@gmail.com';
$user-&gt;save();
</code></pre>
<hr />
<h1>Creating</h1>
<pre><code class="language-php">$user = User::create([
    'first_name' =&gt; 'Chris',
    'email' =&gt; 'cmgmyr@gmail.com',
]);
</code></pre>
<p><strong>Note</strong>: <code>$fillable</code>/<code>$guarded</code> properties</p>
<hr />
<h1>Updating</h1>
<pre><code class="language-php">$user = User::find(1);
$user-&gt;email = 'me@chrisgmyr.com';
$user-&gt;save();
</code></pre>
<hr />
<h1>Updating</h1>
<pre><code class="language-php">$user = User::find(1);
$user-&gt;update([
    'email' =&gt; 'me@chrisgmyr.com',
]);
</code></pre>
<p><strong>Note</strong>: <code>$fillable</code>/<code>$guarded</code> properties</p>
<hr />
<h1>Updating</h1>
<pre><code class="language-php">$user = User::find(1);
$user-&gt;fill([
    'email' =&gt; 'me@chrisgmyr.com',
]);
$user-&gt;save();
</code></pre>
<p><strong>Note</strong>: <code>$fillable</code>/<code>$guarded</code> properties</p>
<hr />
<h1>Deleting</h1>
<pre><code class="language-php">$user = User::find(1);
$user-&gt;delete();
</code></pre>
<hr />
<h1>Deleting</h1>
<pre><code class="language-php">User::destroy(1);
User::destroy([1, 2, 3]);
User::destroy(1, 2, 3);
</code></pre>
<hr />
<h1>&quot;or&quot; helper methods</h1>
<pre><code class="language-php">User::findOrFail(1);

$user-&gt;saveOrFail(); // same as save(), but uses transaction

User::firstOrCreate([ /* attributes */]);

User::updateOrInsert([/* attributes to search */], [/* attributes to update */]);
</code></pre>
<hr />
<h1>Querying</h1>
<hr />
<h1>Querying</h1>
<pre><code class="language-php">$users = User::get(); // User::all()
$user  = User::where('id', 1)-&gt;first();
$user  = User::find(1);
$user  = User::findOrFail(1);
$users = User::find([1, 2, 3]);
$users = User::whereIn('id', [1, 2, 3])-&gt;get();

$users = User::where('is_admin', true)
    -&gt;where('id', '!=', Auth::id())
    -&gt;take(10)
    -&gt;orderBy('last_name', 'ASC')
    -&gt;get();
</code></pre>
<hr />
<h1>Chunking</h1>
<pre><code class="language-php">User::chunk(50, function ($users) {
    foreach ($users as $user) {
        //
    }
});
</code></pre>
<hr />
<h1>Collections</h1>
<blockquote>
<p>For Eloquent methods like <code>all()</code> and <code>get()</code> which retrieve multiple results, an instance of <code>Illuminate\Database\Eloquent\Collection</code> will be returned.</p>
</blockquote>
<pre><code class="language-php">$admins = $users-&gt;filter(function ($user) {
    return $user-&gt;is_admin;
});
</code></pre>
<hr />
<h1>Raw query methods</h1>
<pre><code class="language-php">Product::whereRaw('price &gt; IF(state = &quot;NC&quot;, ?, 100)', [200])
    -&gt;get();

Post::groupBy('category_id')
    -&gt;havingRaw('COUNT(*) &gt; 1')
    -&gt;get();

Customer::where('created_at', '&gt;', '2016-01-01')
    -&gt;orderByRaw('(updated_at - created_at) desc')
    -&gt;get();
</code></pre>
<hr />
<h1>Relationships</h1>
<hr />
<h1>Relationships</h1>
<pre><code class="language-php">hasOne() // User has one Address
belongsTo() // Address belongs to User
hasMany() // Post has many Comment
belongsToMany() // Role belongs to many User
hasManyThrough() // Country has many Post through User

// Use single table
morphTo() // Comment can be on Post, Video, Album
morphMany() // Post has many Comment

// Use pivot table
morphToMany() // Post has many Tag
morphedByMany() // Tag has many Post
</code></pre>
<hr />
<h1>Relationships</h1>
<pre><code class="language-php">class Video extends Model
{
    public function comments()
    {
        return $this-&gt;hasMany(Comment::class);
    }
}

$video = Video::find(1);
foreach ($video-&gt;comments as $comment) {
    // $comment-&gt;body
}
</code></pre>
<hr />
<h1>Relationships</h1>
<pre><code class="language-php">class Video extends Model
{
    public function comments()
    {
        return $this-&gt;hasMany(Comment::class);
    }
}

$video = Video::find(1);
foreach ($video-&gt;comments()-&gt;where('approved', true)-&gt;get() as $comment) {
    // $comment-&gt;body
}
</code></pre>
<hr />
<h1>Default conditions and ordering</h1>
<pre><code class="language-php">class Video extends Model
{
    public function comments()
    {
        return $this-&gt;hasMany(Comment::class)
        -&gt;where('approved', true)
        -&gt;latest();
    }
}
</code></pre>
<hr />
<h1>Default conditions and ordering</h1>
<pre><code class="language-php">class Video extends Model
{
    public function comments()
    {
        return $this-&gt;hasMany(Comment::class);
    }

    public function publicComments()
    {
        return $this-&gt;comments()
        -&gt;where('approved', true)
        -&gt;latest();
    }
}
</code></pre>
<hr />
<h1>Default Models</h1>
<p>Default models can be used with <code>belongsTo()</code>, <code>hasOne()</code>, and <code>morphOne()</code> relationships.</p>
<hr />
<h1>Default Models</h1>
<pre><code class="language-bash">{{ $post-&gt;author-&gt;name }} // error if author not found
</code></pre>
<pre><code class="language-php">class Post extends Model
{
    public function author()
    {
        return $this-&gt;belongsTo(User::class);
    }
}
</code></pre>
<hr />
<h1>Default Models</h1>
<pre><code class="language-bash">{{ $post-&gt;author-&gt;name ?? '' }} // meh
</code></pre>
<pre><code class="language-php">class Post extends Model
{
    public function author()
    {
        return $this-&gt;belongsTo(User::class);
    }
}
</code></pre>
<hr />
<h1>Default Models</h1>
<pre><code class="language-bash">{{ $post-&gt;author-&gt;name }} // better!
</code></pre>
<pre><code class="language-php">class Post extends Model
{
    public function author()
    {
        return $this-&gt;belongsTo(User::class)-&gt;withDefault();
    }
}
</code></pre>
<hr />
<h1>Default Models</h1>
<pre><code class="language-php">class Post extends Model
{
    public function author()
    {
        return $this-&gt;belongsTo(User::class)-&gt;withDefault([
            'name' =&gt; 'Guest Author',
        ]);
    }
}
</code></pre>
<hr />
<h1>Events</h1>
<blockquote>
<p>The <code>retrieved</code> event will fire when an existing model is retrieved from the database. When a new model is saved for the first time, the <code>creating</code> and <code>created</code> events will fire. If a model already existed in the database and the <code>save()</code> method is called, the <code>updating</code> / <code>updated</code> events will fire. However, in both cases, the <code>saving</code> / <code>saved</code> events will fire.</p>
</blockquote>
<p><a href="https://laravel.com/docs/5.6/eloquent#events">https://laravel.com/docs/5.6/eloquent#events</a></p>
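<p>The lifecycle above can also be hooked with closures in the model's <code>boot()</code> method. A minimal sketch, assuming you only need inline callbacks (the callback bodies here are illustrative):</p>
<pre><code class="language-php">class User extends Model
{
    protected static function boot()
    {
        parent::boot();

        static::creating(function ($user) {
            // runs before a new User is first saved
        });

        static::saved(function ($user) {
            // runs after any insert or update
        });
    }
}
</code></pre>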
<hr />
<h1>Events</h1>
<pre><code class="language-php">class User extends Model
{
    protected $dispatchesEvents = [
        'saved' =&gt; UserSaved::class,
        'deleted' =&gt; UserDeleted::class,
    ];
}
</code></pre>
<hr />
<h1>Observers</h1>
<p><code>php artisan make:observer UserObserver --model=User</code></p>
<pre><code class="language-php">class ModelObserverServiceProvider extends ServiceProvider
{
    public function boot()
    {
        User::observe(UserObserver::class);
    }
}
</code></pre>
<hr />
<h1>Observers</h1>
<pre><code class="language-php">class UserObserver
{
    public function created(User $user)
    {
    }

    public function updated(User $user)
    {
    }

    public function deleted(User $user)
    {
    }
}
</code></pre>
<hr />
<h1><code>boot()</code> method</h1>
<pre><code class="language-php">class Post extends Model
{
    protected static function boot()
    {
        parent::boot();

        self::creating(function ($model) {
            $model-&gt;uuid = (string) Uuid::generate();
        });
    }
}
</code></pre>
<hr />
<h1>Bootable Trait</h1>
<pre><code class="language-php">class Post extends Model
{
    use HasUuid;
}

trait HasUuid
{
    public static function bootHasUuid()
    {
        self::creating(function ($model) {
            $model-&gt;uuid = (string) Uuid::generate();
        });
    }

    // more uuid related methods
}
</code></pre>
<hr />
<h1>Helper Methods</h1>
<hr />
<h1>Increments and Decrements</h1>
<pre><code class="language-php">$post = Post::find(1);
$post-&gt;stars++;
$post-&gt;save();

$post-&gt;stars--;
$post-&gt;save();
</code></pre>
<hr />
<h1>Increments and Decrements</h1>
<pre><code class="language-php">$post = Post::find(1);
$post-&gt;increment('stars'); // add 1
$post-&gt;increment('stars', 15); // add 15

$post-&gt;decrement('stars'); // subtract 1
$post-&gt;decrement('stars', 15); // subtract 15
</code></pre>
<hr />
<h1>Aggregates</h1>
<pre><code class="language-php">$count = Product::where('active', 1)-&gt;count();
$min   = Product::where('active', 1)-&gt;min('price');
$max   = Product::where('active', 1)-&gt;max('price');
$avg   = Product::where('active', 1)-&gt;avg('price');
$sum   = Product::where('active', 1)-&gt;sum('price');
</code></pre>
<hr />
<h1>Check if Records Exist</h1>
<p>Instead of <code>count()</code>, you could use...</p>
<pre><code class="language-php">User::where('username', 'cmgmyr')-&gt;exists();

User::where('username', 'cmgmyr')-&gt;doesntExist();
</code></pre>
<hr />
<h1>Model State</h1>
<pre><code class="language-php">$model-&gt;isDirty($attributes = null);
$model-&gt;isClean($attributes = null);
$model-&gt;wasChanged($attributes = null);
$model-&gt;hasChanges($changes, $attributes = null);
$model-&gt;getDirty();
$model-&gt;getChanges();

//Indicates if the model exists.
$model-&gt;exists;

//Indicates if the model was inserted during the current request lifecycle.
$model-&gt;wasRecentlyCreated;
</code></pre>
<hr />
<h1>&quot;Magic&quot; where()</h1>
<pre><code class="language-php">$users = User::where('approved', 1)-&gt;get();
$users = User::whereApproved(1)-&gt;get();

$user = User::where('username', 'cmgmyr')-&gt;get();
$user = User::whereUsername('cmgmyr')-&gt;get();

$admins = User::where('is_admin', true)-&gt;get();
$admins = User::whereIsAdmin(true)-&gt;get();
</code></pre>
<hr />
<h1>Super &quot;Magic&quot; where()</h1>
<pre><code class="language-php">User::whereTypeAndStatus('admin', 'active')-&gt;get();
User::whereTypeOrStatus('admin', 'active')-&gt;get();
</code></pre>
<p><a href="https://twitter.com/themsaid/status/1029731544942952448">https://twitter.com/themsaid/status/1029731544942952448</a></p>
<hr />
<h1>Dates</h1>
<pre><code class="language-php">User::whereDate('created_at', date('Y-m-d'));
User::whereDay('created_at', date('d'));
User::whereMonth('created_at', date('m'));
User::whereYear('created_at', date('Y'));
</code></pre>
<hr />
<h1><code>when()</code> to eliminate conditionals</h1>
<pre><code class="language-php">$query = Author::query();

if (request('filter_by') == 'likes') {
    $query-&gt;where('likes', '&gt;', request('likes_amount', 0));
}

if (request('filter_by') == 'date') {
    $query-&gt;orderBy('created_at', request('ordering_rule', 'desc'));
}
</code></pre>
<hr />
<h1><code>when()</code> to eliminate conditionals</h1>
<pre><code class="language-php">$query = Author::query();

$query-&gt;when(request('filter_by') == 'likes', function ($q) {
    return $q-&gt;where('likes', '&gt;', request('likes_amount', 0));
});

$query-&gt;when(request('filter_by') == 'date', function ($q) {
    return $q-&gt;orderBy('created_at', request('ordering_rule', 'desc'));
});
</code></pre>
<hr />
<h1><code>replicate()</code> a Model</h1>
<pre><code class="language-php">$invoice = Invoice::find(1);
$newInvoice = $invoice-&gt;replicate();
$newInvoice-&gt;save();
</code></pre>
<hr />
<h1>Pagination</h1>
<pre><code class="language-php">// 1, 2, 3, 4, 5...
$users = User::where('active', true)-&gt;paginate(15);

// Previous/Next
$users = User::where('active', true)-&gt;simplePaginate(15);

// In Blade
{{ $users-&gt;links() }}
</code></pre>
<hr />
<h1>Pagination to JSON</h1>
<pre><code class="language-javascript">{
   &quot;total&quot;: 50,
   &quot;per_page&quot;: 15,
   &quot;current_page&quot;: 1,
   &quot;last_page&quot;: 4,
   &quot;first_page_url&quot;: &quot;https://my.app?page=1&quot;,
   &quot;last_page_url&quot;: &quot;https://my.app?page=4&quot;,
   &quot;next_page_url&quot;: &quot;https://my.app?page=2&quot;,
   &quot;prev_page_url&quot;: null,
   &quot;path&quot;: &quot;https://my.app&quot;,
   &quot;from&quot;: 1,
   &quot;to&quot;: 15,
   &quot;data&quot;:[
        {
            // Result Object
        },
        {
            // Result Object
        }
   ]
}
</code></pre>
<hr />
<h1>Model Properties</h1>
<pre><code class="language-php">protected $table = 'users';
protected $fillable = ['first_name', 'email', 'password']; // create()/update()
protected $dates = ['created', 'deleted_at']; // Carbon
protected $appends = ['full_name', 'company']; // additional JSON values
protected $casts = ['is_admin' =&gt; 'boolean', 'options' =&gt; 'array'];

protected $primaryKey = 'uuid';
public $incrementing = false;
protected $perPage = 25;
const CREATED_AT = 'created';
const UPDATED_AT = 'updated';
public $timestamps = false;
</code></pre>
<p>...and more!</p>
<hr />
<h1>Overriding <code>updated_at</code></h1>
<pre><code class="language-php">$product = Product::find(1);
$product-&gt;updated_at = '2020-01-01 10:00:00';
$product-&gt;save(['timestamps' =&gt; false]);
</code></pre>
<hr />
<h1>Primary Key Methods</h1>
<pre><code class="language-php">$video = Video::find(1);
$video-&gt;getKeyName(); // 'id'
$video-&gt;getKeyType(); // 'int'
$video-&gt;getKey(); // 1
</code></pre>
<hr />
<h1>Accessors/Mutators</h1>
<pre><code class="language-php">class User extends Model
{
    public function setFirstNameAttribute($value)
    {
        $this-&gt;attributes['first_name'] = strtolower($value);
    }
    public function setLastNameAttribute($value)
    {
        $this-&gt;attributes['last_name'] = strtolower($value);
    }
}
</code></pre>
<hr />
<h1>Accessors/Mutators</h1>
<pre><code class="language-php">class User extends Model
{
    public function getFirstNameAttribute($value)
    {
        return ucfirst($value);
    }
    public function getLastNameAttribute($value)
    {
        return ucfirst($value);
    }
    public function getEmailAttribute($value)
    {
        return new Email($value);
    }
    public function getFullNameAttribute()
    {
        return &quot;{$this-&gt;first_name} {$this-&gt;last_name}&quot;;
    }
}
</code></pre>
<hr />
<h1>Accessors/Mutators</h1>
<pre><code class="language-php">$user = User::create([
    'first_name' =&gt; 'Chris', // chris
    'last_name' =&gt; 'Gmyr', // gmyr
    'email' =&gt; 'cmgmyr@gmail.com',
]);

$user-&gt;first_name; // Chris
$user-&gt;last_name; // Gmyr
$user-&gt;email; // instance of Email
$user-&gt;full_name; // 'Chris Gmyr'
</code></pre>
<hr />
<h1>To Array/Json</h1>
<pre><code class="language-php">$user = User::find(1);
return $user-&gt;toArray();
return $user-&gt;toJson();
</code></pre>
<p>You can also return <code>$user</code> from a controller method and it will automatically return JSON.</p>
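<p>For example, a minimal controller sketch (the controller and method names are illustrative):</p>
<pre><code class="language-php">class UserController extends Controller
{
    public function show($id)
    {
        // returning the model converts it to a JSON response automatically
        return User::findOrFail($id);
    }
}
</code></pre>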
<hr />
<h1>Appending Values to JSON</h1>
<pre><code class="language-php">class User extends Model
{
    protected $appends = ['full_name']; // adds to toArray()

    public function getFullNameAttribute()
    {
        return &quot;{$this-&gt;first_name} {$this-&gt;last_name}&quot;;
    }
}

// or...
return $user-&gt;append('full_name')-&gt;toArray();
return $user-&gt;setAppends(['full_name'])-&gt;toArray();
</code></pre>
<hr />
<h1>Local Scopes</h1>
<pre><code class="language-php">$posts = Post::whereNotNull('published_at')
    -&gt;where('published_at', '&lt;=', Carbon::now())
    -&gt;latest('published_at')
    -&gt;get();
</code></pre>
<hr />
<h1>Local Scopes</h1>
<pre><code class="language-php">class Post extends Model
{
    public function scopePublished($query)
    {
        return $query-&gt;whereNotNull('published_at')
            -&gt;where('published_at', '&lt;=', Carbon::now())
            -&gt;latest('published_at');
    }
}
</code></pre>
<hr />
<h1>Local Scopes</h1>
<pre><code class="language-php">$posts = Post::published()-&gt;get();
</code></pre>
<hr />
<h1>Single Table Inheritance</h1>
<hr />
<h1>Single Table Inheritance</h1>
<pre><code class="language-php">$admins = User::where('is_admin', true)-&gt;get();
$customers = User::where('is_admin', false)-&gt;get();
</code></pre>
<hr />
<h1>Single Table Inheritance</h1>
<pre><code class="language-php">class User extends Model
{
    public function scopeAdmin($query)
    {
        return $query-&gt;where('is_admin', true);
    }
    public function scopeCustomer($query)
    {
        return $query-&gt;where('is_admin', false);
    }
}

$admins = User::admin()-&gt;get();
$customers = User::customer()-&gt;get();
</code></pre>
<hr />
<h1>Single Table Inheritance</h1>
<pre><code class="language-php">class Admin extends User
{
    protected static function boot()
    {
        parent::boot();
        static::addGlobalScope(function ($query) {
            $query-&gt;where('is_admin', true);
        });
    }
}
</code></pre>
<hr />
<h1>Single Table Inheritance</h1>
<pre><code class="language-php">class Customer extends User
{
    protected static function boot()
    {
        parent::boot();
        static::addGlobalScope(function ($query) {
            $query-&gt;where('is_admin', false);
        });
    }
}
</code></pre>
<hr />
<h1>Single Table Inheritance</h1>
<pre><code class="language-php">$admins = Admin::get();
$customers = Customer::get();
</code></pre>
<hr />
<h1>Single Table Inheritance</h1>
<p>Read more:</p>
<ul>
<li>
<p><a href="https://twitter.com/cmgmyr/status/885204646498893824">https://twitter.com/cmgmyr/status/885204646498893824</a></p>
</li>
<li>
<p><a href="https://tighten.co/blog/extending-models-in-eloquent">https://tighten.co/blog/extending-models-in-eloquent</a></p>
</li>
</ul>
<hr />
<h1>Default Model Data</h1>
<hr />
<h1>Default Model Data</h1>
<pre><code class="language-php">Schema::create('users', function (Blueprint $table) {
    $table-&gt;increments('id');
    $table-&gt;string('name');
    $table-&gt;string('email')-&gt;unique();
    $table-&gt;string('password');
    $table-&gt;string('role')-&gt;default('user'); // moderator, admin, etc
    $table-&gt;rememberToken();
    $table-&gt;timestamps();
});
</code></pre>
<hr />
<h1>Default Model Data</h1>
<pre><code class="language-php">class User extends Model
{
    protected $fillable = [
        'name', 'email', 'password', 'role'
    ];
}
</code></pre>
<hr />
<h1>Default Model Data</h1>
<pre><code class="language-php">$user = new User();
$user-&gt;name = 'Chris';
$user-&gt;email = 'cmgmyr@gmail.com';
$user-&gt;password = Hash::make('p@ssw0rd');

// $user-&gt;role is currently NULL

$user-&gt;save();

$user-&gt;role; // 'user'
</code></pre>
<hr />
<h1>Default Model Data</h1>
<p>Remove <code>-&gt;default('user')</code>:</p>
<pre><code class="language-php">Schema::create('users', function (Blueprint $table) {
    $table-&gt;increments('id');
    $table-&gt;string('name');
    $table-&gt;string('email')-&gt;unique();
    $table-&gt;string('password');
    $table-&gt;string('role'); // moderator, admin, etc
    $table-&gt;rememberToken();
    $table-&gt;timestamps();
});
</code></pre>
<hr />
<h1>Default Model Data</h1>
<p>Set <code>$attributes</code></p>
<pre><code class="language-php">class User extends Model
{
    protected $fillable = [
        'name', 'email', 'password', 'role'
    ];

    protected $attributes = [
        'role' =&gt; 'user',
    ];
}
</code></pre>
<hr />
<h1>Default Model Data</h1>
<pre><code class="language-php">$user = new User();
$user-&gt;name = 'Chris';
$user-&gt;email = 'cmgmyr@gmail.com';
$user-&gt;password = Hash::make('p@ssw0rd');

// $user-&gt;role is currently 'user'!

$user-&gt;save();

$user-&gt;role; // 'user'
</code></pre>
<hr />
<h1>Default Model Data</h1>
<pre><code class="language-php">$user = new User();
$user-&gt;name = 'Chris';
$user-&gt;email = 'cmgmyr@gmail.com';
$user-&gt;password = Hash::make('p@ssw0rd');
$user-&gt;role = 'admin'; // can override default
$user-&gt;save();

$user-&gt;role; // 'admin'
</code></pre>
<hr />
<h1>Default Models</h1>
<p>Remember our previous example?</p>
<pre><code class="language-php">class Post extends Model
{
    public function author()
    {
        return $this-&gt;belongsTo(User::class)-&gt;withDefault([
            'name' =&gt; 'Guest Author',
        ]);
    }
}
</code></pre>
<hr />
<h1>Default Models</h1>
<p>We no longer need to provide a <code>name</code>; use the <code>User</code> model's <code>$attributes</code> property!</p>
<pre><code class="language-php">class Post extends Model
{
    public function author()
    {
        return $this-&gt;belongsTo(User::class)-&gt;withDefault();
    }
}
</code></pre>
<hr />
<h1>Default Model Data</h1>
<pre><code class="language-php">class User extends Model
{
    protected $fillable = [
        'name', 'email', 'password', 'role'
    ];

    protected $attributes = [
        'name' =&gt; 'Guest Author',
        'role' =&gt; 'user',
    ];
}
</code></pre>
<hr />
<h1>Default Model Data</h1>
<p>Watch Colin DeCarlo's talk, "Keeping Eloquent Eloquent", from Laracon US 2016:</p>
<p><a href="https://streamacon.com/video/laracon-us-2016/colin-decarlo-keeping-eloquent-eloquent">https://streamacon.com/video/laracon-us-2016/colin-decarlo-keeping-eloquent-eloquent</a></p>
<hr />
<h1>Sub-Queries</h1>
<pre><code class="language-php">$customers = Customer::with('company')
    -&gt;orderByName()
    -&gt;paginate();
</code></pre>
<p>Get latest interactions?</p>
<pre><code class="language-html">&lt;p&gt;{{ $customer
    -&gt;interactions()
    -&gt;latest()
    -&gt;first()
    -&gt;created_at
    -&gt;diffForHumans() }}&lt;/p&gt;
</code></pre>
<hr />
<h1>Sub-Queries</h1>
<pre><code class="language-php">public function scopeWithLastInteractionDate($query)
{
    $subQuery = \DB::table('interactions')
        -&gt;select('created_at')
        -&gt;whereRaw('customer_id = customers.id')
        -&gt;latest()
        -&gt;limit(1);

    return $query-&gt;select('customers.*')-&gt;selectSub($subQuery, 'last_interaction_date');
}

$customers = Customer::with('company')
    -&gt;withLastInteractionDate()
    -&gt;orderByName()
    -&gt;paginate();
</code></pre>
<pre><code class="language-html">&lt;p&gt;{{ $customer-&gt;last_interaction_date-&gt;diffForHumans() }}&lt;/p&gt;
</code></pre>
<hr />
<h1>Sub-Queries</h1>
<p>Watch Jonathan Reinink's Laracon 2018 Online talk, "Advanced Querying with Eloquent":</p>
<p><a href="https://github.com/reinink/laracon2018">https://github.com/reinink/laracon2018</a></p>
<hr />
<h1>Resources</h1>
<ul>
<li>
<p><a href="https://laravel.com/docs/5.6/eloquent">https://laravel.com/docs/5.6/eloquent</a></p>
</li>
<li>
<p><a href="https://laravel-news.com/eloquent-tips-tricks">https://laravel-news.com/eloquent-tips-tricks</a></p>
</li>
<li>
<p><a href="https://twitter.com/themsaid/status/1029731544942952448">https://twitter.com/themsaid/status/1029731544942952448</a></p>
</li>
<li>
<p><a href="https://twitter.com/cmgmyr/status/885204646498893824">https://twitter.com/cmgmyr/status/885204646498893824</a></p>
</li>
<li>
<p><a href="https://tighten.co/blog/extending-models-in-eloquent">https://tighten.co/blog/extending-models-in-eloquent</a></p>
</li>
<li>
<p><a href="https://streamacon.com/video/laracon-us-2016/colin-decarlo-keeping-eloquent-eloquent">https://streamacon.com/video/laracon-us-2016/colin-decarlo-keeping-eloquent-eloquent</a></p>
</li>
<li>
<p><a href="https://github.com/reinink/laracon2018">https://github.com/reinink/laracon2018</a></p>
</li>
<li>
<p><a href="https://eloquentbyexample.com">https://eloquentbyexample.com</a></p>
</li>
</ul>
<hr />
<h1>Thank you!</h1>
<h2>Please say &quot;hi&quot;</h2>
<h4>twitter.com/cmgmyr</h4>
<h4>github.com/cmgmyr</h4>
<h4>chrisgmyr.com</h4>
]]></description>
                <pubDate>Fri, 17 Aug 2018 20:00:00 +0000</pubDate>
            </item>
                    <item>
                <title>Deploying Specific Branches with Laravel, CircleCI, and Envoyer</title>
                <link>https://chrisgmyr.dev/blog/deploying-specific-branches-with-laravel-circleci-and-envoyer</link>
                <guid isPermaLink="true">https://chrisgmyr.dev/blog/deploying-specific-branches-with-laravel-circleci-and-envoyer</guid>
<description><![CDATA[<p>A few weeks ago I was trying to update a side project's <a href="https://circleci.com/">CircleCI</a> config from version 1 to version 2 since they are <a href="https://circleci.com/blog/sunsetting-1-0/">deprecating V1</a> in August 2018. In the process, I was curious how I could deploy specific branches to specific environments in Laravel's <a href="https://envoyer.io">Envoyer</a> if the tests passed successfully.</p>
<p>My project has two main branches: <code>develop</code> and <code>master</code>. In Envoyer I have two projects, one for <code>dev.project.com</code> which uses the <code>develop</code> branch and the other for <code>project.com</code> which uses the <code>master</code> branch.</p>
<p>Here is the final result of the <code>circle.yml</code> file. Let's work through each of the sections below.</p>
<h2>Section 1: Defaults</h2>
<p>By leveraging <a href="https://learnxinyminutes.com/docs/yaml/">YAML anchors</a> we can set a group of defaults that will be used for all of our later jobs. For now, this includes our</p>
<ul>
<li>
<p>working directory</p>
</li>
<li>
<p>chosen CircleCI <a href="https://circleci.com/docs/2.0/circleci-images/">docker image</a></p>
</li>
</ul>
<h2>Section 2: Jobs</h2>
<p>In this file we have three jobs: <code>build</code> (and test), <code>deploy_develop</code>, and <code>deploy_master</code>.</p>
<p>Our <code>build</code> job</p>
<ol>
<li>
<p>Imports the defaults</p>
</li>
<li>
<p>Sets environment variables</p>
</li>
<li>
<p>Checks out the repo's code</p>
</li>
<li>
<p>Restores <code>composer</code> cache, if available</p>
</li>
<li>
<p>Runs <code>composer install</code></p>
</li>
<li>
<p>Saves a new <code>composer</code> cache</p>
</li>
<li>
<p>Runs the test suite with PHPUnit</p>
</li>
</ol>
<p>Our &quot;deploy&quot; jobs:</p>
<ol>
<li>
<p>Imports the defaults</p>
</li>
<li>
<p>Pings Envoyer to deploy the project</p>
</li>
</ol>
<h2>Section 3: Workflows</h2>
<p>Now that we have our jobs set up, we need to implement <a href="https://circleci.com/docs/2.0/workflows/">workflows</a> to pull everything together. Workflows are optional, but they can come in handy depending on what you'd like to do with your project.</p>
<p>In this example, we only need one workflow <code>notify_deploy</code> which will notify Envoyer that we want to deploy a specific branch.</p>
<p>Within the workflow, you'll notice that we are listing all of our jobs: <code>build</code>, <code>deploy_develop</code>, and <code>deploy_master</code>.</p>
<p>We start off running our <code>build</code> job, and if that is successful, we'll move forward with our deploy jobs. Each deploy job requires the <code>build</code> job to run first; then we check whether the branch matches the filter on the workflow. So <code>deploy_develop</code> only runs on the <code>develop</code> branch and <code>deploy_master</code> only runs on the <code>master</code> branch.</p>
<p>By limiting the filters to only the <code>develop</code> and <code>master</code> branches we can guarantee that we're only deploying those specific branches, but the <code>build</code> job will run on all branches (bug, hotfix, and feature branches), which is needed for pull requests.</p>
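<p>The <code>circle.yml</code> these three sections describe isn't shown in this archived copy. As a rough sketch only (the docker image, cache keys, and Envoyer deployment-hook URLs below are placeholders, not the author's actual values), it might look like:</p>
<pre><code class="language-yaml">defaults: &amp;defaults
  working_directory: ~/project
  docker:
    - image: circleci/php:7.2-browsers

version: 2
jobs:
  build:
    &lt;&lt;: *defaults
    steps:
      - checkout
      - restore_cache:
          keys:
            - composer-{{ checksum &quot;composer.lock&quot; }}
      - run: composer install -n --prefer-dist
      - save_cache:
          key: composer-{{ checksum &quot;composer.lock&quot; }}
          paths:
            - vendor
      - run: vendor/bin/phpunit
  deploy_develop:
    &lt;&lt;: *defaults
    steps:
      - run: curl -s https://envoyer.io/deploy/your-dev-hook
  deploy_master:
    &lt;&lt;: *defaults
    steps:
      - run: curl -s https://envoyer.io/deploy/your-prod-hook

workflows:
  version: 2
  notify_deploy:
    jobs:
      - build
      - deploy_develop:
          requires:
            - build
          filters:
            branches:
              only: develop
      - deploy_master:
          requires:
            - build
          filters:
            branches:
              only: master
</code></pre>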
<h2>Wrapping Up</h2>
<p>Once we merge a branch into either <code>develop</code> or <code>master</code>, CircleCI will build and, if the build succeeds, notify Envoyer to deploy. In our CircleCI dashboard, you'll now see a workflow similar to this.</p>
<p><img src="https://fls-a1375e46-63b7-4062-ba2b-4aae42ad07e6.laravel.cloud/posts/z8878DiJrK576PCBWeji4poJPLH6CAlGiUUtzu6b.png" alt="CircleCI Workflow" /></p>
<blockquote>
<p>You'll also need to make sure you turn off the &quot;Deploy When Code Is Pushed&quot; option in your Envoyer project.</p>
</blockquote>
<h2>Learn More</h2>
<p>This is only scratching the surface of what you can do with CircleCI builds and workflows. I encourage you to look through the documentation and example projects to see what you can implement in your projects.</p>
<ul>
<li>
<p><a href="https://circleci.com/docs/2.0/">2.0 Documentation</a></p>
</li>
<li>
<p><a href="https://circleci.com/docs/2.0/tutorials/">Sample Projects</a></p>
</li>
<li>
<p><a href="https://circleci.com/docs/2.0/workflows/">Workflow Documentation</a></p>
</li>
<li>
<p><a href="http://www.yaml.org/spec/1.2/spec.html">YAML 1.2 Spec</a></p>
</li>
</ul>
<blockquote>
<p>Please note: neither CircleCI nor Envoyer/Laravel paid me to write this article; I'm just a happy customer.</p>
</blockquote>
]]></description>
                <pubDate>Fri, 06 Apr 2018 20:00:00 +0000</pubDate>
            </item>
                    <item>
                <title>Revisiting Our Work</title>
                <link>https://chrisgmyr.dev/blog/revisiting-our-work</link>
                <guid isPermaLink="true">https://chrisgmyr.dev/blog/revisiting-our-work</guid>
                <description><![CDATA[<p>I recently watched David Heinemeier Hansson’s (<a href="https://twitter.com/dhh">@dhh</a>) video on code comments and refactoring. While it’s interesting to see how he tackles these code changes, the most interesting thing to me is what he said he does with the codebase.</p>

<blockquote>
<p><em>“I read through the entire codebase of Basecamp 3 and try to make things better that I don’t think are good enough, or revisit decisions that we’ve made earlier that I think now I have a better idea of how to do”</em></p>
</blockquote>
<p>There’s a lot in that statement, so let’s unpack it.</p>
<p>First, he reads through the <em>entire</em> codebase! The fact that he does this shows great care for what he does and believes in.</p>
<p>Second, he makes changes where he doesn’t feel like the code is <em>good enough</em>. While many of us have ideas about what is “good” or not, I’m sure we’ve all gone back through older code and just known when we’ve seen it.</p>
<p>Lastly, he revisits past decisions that can be better handled now. Programming is an ever-changing space and developers are enhancing their skills and knowledge on a daily basis. So why shouldn’t our code reflect our most up-to-date understandings?</p>
<p>DHH isn’t the only one who does this though. <a href="https://twitter.com/taylorotwell">Taylor Otwell</a> (creator of Laravel) does something similar.</p>
<p><a href="https://twitter.com/taylorotwell/status/818863355066798080">https://twitter.com/taylorotwell/status/818863355066798080</a></p>
<p>Going through code and documentation that you’ve already worked on isn’t the most glamorous job, and it can be quite tedious, but it’s necessary. It’s silly to think that something that was done years, weeks, or even days ago is still “good enough” for today.</p>
<p>I know I’ve fallen into the habit of not revisiting what I’ve worked on long ago, and maybe some projects don’t need it. However, if the public is using it, like a current site, app, or package, it might be time to take a look through for improvements.</p>
<h3>Next Steps</h3>
<p>I’m going to take a hard look at my <a href="https://github.com/cmgmyr/laravel-messenger">messenger</a> package (which needs some love) as well as a few side projects that I haven’t worked on in a while. What projects are you going to look at? Let’s share some successes. Send me a tweet or screenshot on <a href="https://twitter.com/cmgmyr">Twitter</a>, or leave a comment below.</p>
]]></description>
                <pubDate>Wed, 28 Feb 2018 20:26:02 +0000</pubDate>
            </item>
                    <item>
                <title>How to order by all() in Laravel</title>
                <link>https://chrisgmyr.dev/blog/how-to-order-by-all-in-laravel</link>
                <guid isPermaLink="true">https://chrisgmyr.dev/blog/how-to-order-by-all-in-laravel</guid>
                <description><![CDATA[<p>One common issue that I see with Laravel newcomers is that they have <a href="https://stackoverflow.com/questions/17553181/laravel-4-how-to-order-by-using-eloquent-orm/18289241#18289241">hangups</a> using Eloquent correctly. The most basic reference given in the documentation and tutorials is using the <code>all()</code> method.</p>
<pre><code class="language-php">$users = User::all();
</code></pre>
<p><strong>But what happens when you want to sort your users?</strong></p>
<p>As newcomers to the framework, I feel like most are too excited to “jump in and build something” instead of <a href="https://laravel.com/docs/5.4/eloquent#retrieving-models">learning more</a> about it. (But who can blame them, right?!?) So something like this would happen:</p>
<pre><code class="language-php">$users = User::all()-&gt;orderBy('name', 'ASC');

# BadMethodCallException with message 'Method orderBy does not exist.'

// or

$users = User::orderBy('name', 'ASC')-&gt;all();

# BadMethodCallException with message 'Call to undefined method Illuminate\Database\Query\Builder::all()'
</code></pre>
<h3>Forget about <code>all()</code></h3>
<p>In my experience, I’ve never needed an unordered dump of data in an application.</p>
<p>Note that <code>all()</code> is a convenience method for <code>get()</code> but does not allow you to chain additional methods. <a href="https://github.com/laravel/framework/blob/5.4/src/Illuminate/Database/Eloquent/Model.php#L340">Take a look</a>:</p>
<pre><code class="language-php">public static function all($columns = ['*'])
{
    return (new static)-&gt;newQuery()-&gt;get(
        is_array($columns) ? $columns : func_get_args()
    );
}
</code></pre>
<p>By using <code>get()</code> you’ll be able to achieve the desired results.</p>
<pre><code class="language-php">$users = User::orderBy('name', 'ASC')-&gt;get();

// and

$users = User::where('email', 'LIKE', '%@gmail.com')  
    -&gt;orderBy('name', 'ASC')-&gt;get();
</code></pre>
<p>So any time you reach for the <code>all()</code> method, I highly recommend using <code>get()</code> instead.</p>
]]></description>
                <pubDate>Thu, 01 Jun 2017 01:19:48 +0000</pubDate>
            </item>
                    <item>
                <title>Prioritizing Queued Jobs in Laravel</title>
                <link>https://chrisgmyr.dev/blog/prioritizing-queued-jobs-in-laravel</link>
                <guid isPermaLink="true">https://chrisgmyr.dev/blog/prioritizing-queued-jobs-in-laravel</guid>
                <description><![CDATA[<p>Laravel queues allow you to defer long-running, or resource-intensive, processes until a later time. A queue system is imperative for larger applications but can be helpful for smaller ones as well. But with so many jobs and queues, how can we prioritize them?</p>
<p>On a current project, I had to figure out how to grab a ton of data from an API (Facebook) that had dependent data — meaning the script cannot proceed to get new data before it’s done with the current data set. This typically ended up being an ID that was needed for the next call as well as some data that needed to be processed.</p>
<p>In pseudo-code, this would look something similar to:</p>
<pre><code class="language-php">$a = $get-&gt;a();
$b = $get-&gt;b($a);
$c = $get-&gt;c($b);
$this-&gt;doSomethingWith($c);
</code></pre>
<p>As you can see, the code should not move ahead without processing the previous data. In my situation, I had to process all of the “A” jobs, then all of the “B” jobs, and so on, before continuing.</p>
<p>The data that I needed was quite large and each round needed a good amount of processing before continuing to the next step. Luckily this is where <a href="https://laravel.com/">Laravel</a> and its queue system stepped in to help!</p>
<p>Laravel lets you specify a dynamic queue name along with a job, like so:</p>
<pre><code class="language-php">dispatch((new JobA($data))-&gt;onQueue('a'));
</code></pre>
<p>In my application, each “JobA” would dispatch a “JobB”, and each “JobB” would dispatch a “JobC”. At the end of each “JobC”, we’d do some additional work on the whole data collection. So you’d get something like this:</p>
<pre><code class="language-php">// In Controller  
dispatch((new JobA($data))-&gt;onQueue('a'));
 
// In JobA  
dispatch((new JobB($data))-&gt;onQueue('b'));
 
// In JobB  
dispatch((new JobC($data))-&gt;onQueue('c'));
 
// In JobC  
dispatch((new JobFinish($data))-&gt;onQueue('finish'));
</code></pre>
<p>I wanted to make sure we ran all of the jobs in order, and only continued on to the next set of jobs once the current batch was finished. This was very important since the application could easily have 20 or so JobAs, 50 JobBs, and hundreds of JobCs.</p>
<p>In my supervisor config file, I added something similar to this:</p>
<pre><code class="language-bash">[program:artisan-queue]  
command = php artisan queue:work --queue=a,b,c,finish
</code></pre>
<p>Now all of the jobs on the “a” queue would have to finish before the “b” jobs would start, and “c” would wait for “b”, and so on.</p>
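<p>To see why that flag produces strict ordering, here is a toy model of queue priority in plain PHP. This is just a sketch for illustration, not Laravel’s actual worker: the worker always reserves the next job from the first non-empty queue in its priority list.</p>
<pre><code class="language-php">// Toy model, not Laravel internals: take the next job from the
// first non-empty queue in the priority list until all are drained.
function workOff(array $queues, array $priority): array
{
    $processed = [];
    while (true) {
        $job = null;
        foreach ($priority as $name) {
            if (!empty($queues[$name])) {
                $job = array_shift($queues[$name]);
                break;
            }
        }
        if ($job === null) {
            return $processed;
        }
        $processed[] = $job;
    }
}

$order = workOff(
    ['a' => ['a1', 'a2'], 'b' => ['b1'], 'c' => ['c1']],
    ['a', 'b', 'c']
);
// $order is ['a1', 'a2', 'b1', 'c1']: "a" drains fully before "b" starts.
</code></pre>
<p>With a single worker process, jobs dispatched onto “b” while “a” jobs are still running will wait until “a” is empty, which is exactly the behavior the supervisor command relies on.</p>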
<p>So there you have it — a prioritized queuing system in only a few lines of code!</p>
<p>The Laravel Queue system is very robust, and if you haven’t used it, I’d highly recommend trying it in your next project. You can read more about the queue system and queue priorities <a href="https://laravel.com/docs/5.3/queues#queue-priorities">here</a>.</p>
]]></description>
                <pubDate>Tue, 10 Jan 2017 21:34:49 +0000</pubDate>
            </item>
                    <item>
                <title>Moving from self-hosted image service to Cloudinary</title>
                <link>https://chrisgmyr.dev/blog/moving-from-self-hosted-image-service-to-cloudinary</link>
                <guid isPermaLink="true">https://chrisgmyr.dev/blog/moving-from-self-hosted-image-service-to-cloudinary</guid>
                <description><![CDATA[<p><img src="https://fls-a1375e46-63b7-4062-ba2b-4aae42ad07e6.laravel.cloud/posts/a92R6euEUWCBVC8nkxpcppwsV4pJB7w6lTeDZ4Oh.png" alt="Cloudinary Logo" /></p>
<p>Image manipulation is hard. Handling images in the long term is even harder. Here at <a href="http://about.dose.com/">Dose</a> we have a lot of images and a number of ways to serve them. We have Android and iOS apps, as well as completely responsive websites, so each image has the potential of getting manipulated multiple times depending on the device, orientation, and how we optimize an image for a certain platform.</p>
<h3>Handling this ourselves</h3>
<p>We used to have an internal image service that took an author’s uploaded image and handled some pre-processing. On upload, we’d make sure it’s within a max width and encoded correctly. Animated <strong>GIF</strong>? We’d have to convert the file to <strong>MP4</strong> and <strong>WEBM</strong> also. This would end up adding extra load on our servers and more space taken in our S3 account.</p>
<p>When an asset was requested with a certain height/width combination, we’d check to see if we had it in our S3 bucket. If not, we’d return the master asset and kick off a queue job to create it, then add it to the bucket for the next request. As you can imagine, the overhead was huge. Want to change the article promo image from 500px width to 475px width? That would invalidate all previous images, and we’d need to recreate them all the next time they were requested. What a mess!</p>
<p>If your product is not image manipulation, then don’t do this yourself. Services like Cloudinary do this much more efficiently and much better than you will, so use them. And if you’re worried about the cost, think about how much it’ll cost you in development and upkeep, as well as hosting, storage, and delivery costs. More on that later.</p>
<h3>Getting Started with Cloudinary</h3>
<blockquote>
<p>Cloudinary is the market leader in providing a comprehensive cloud-based image management solution. Cloudinary is being used by tens of thousands of web and mobile application developers all around the world, from small startups to large enterprises. We are here to cover your every image-related need.</p>
</blockquote>
<p><a href="http://cloudinary.com/about">Source</a></p>
<p>Cloudinary has taken the massive task of handling and manipulating images and broken it down into something very simple: URL manipulation.</p>
<p>Take the following image URL:</p>
<p><code>http://res.cloudinary.com/demo/image/upload/sample.jpg</code></p>
<p>Let’s say we’d like to resize this to a max width of 300px; you’d get:</p>
<p><code>http://res.cloudinary.com/demo/image/upload/w_300/sample.jpg</code></p>
<p>Now let’s also restrict the height to 150px:</p>
<p><code>http://res.cloudinary.com/demo/image/upload/w_300,h_150/sample.jpg</code></p>
<p>Well, we got the size that we wanted, but it looks <a href="http://res.cloudinary.com/demo/image/upload/w_300,h_150/sample.jpg">pretty bad</a> right now. We’ll need to make some adjustments. Let’s <strong>crop</strong> it to <strong>fit</strong> the space that we need:</p>
<p><code>http://res.cloudinary.com/demo/image/upload/w_300,h_150,c_fit/sample.jpg</code></p>
<p>Yeah, <a href="http://res.cloudinary.com/demo/image/upload/w_300,h_150,c_fit/sample.jpg">this looks a lot better</a>!</p>
<h3>Combining Transformations</h3>
<p>Combining transformations is just as straightforward as single transformations; all you have to do is add another segment to the URL:</p>
<p><code>http://res.cloudinary.com/demo/image/upload/w_300,h_150/c_crop,w_50,h_50,x_100,y_75/sample.jpg</code></p>
<p>So with this example we are:</p>
<ol>
<li>
<p>Resizing the image to 300px X 150px then</p>
</li>
<li>
<p>Cropping the image to be 50px X 50px and moving the X, Y point to 100, 75 in order to focus in on the <a href="http://res.cloudinary.com/demo/image/upload/w_300,h_150/c_crop,w_50,h_50,x_100,y_75/sample.jpg">yellow part</a> of one of the flowers.</p>
</li>
</ol>
<p><a href="http://cloudinary.com/documentation/image_transformations#chained_transformations">Learn more</a></p>
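<p>The URL grammar above is simple enough to sketch in a few lines of plain PHP. This is purely illustrative (the function name and the handful of option mappings are mine, not Cloudinary’s; the official SDKs cover far more):</p>
<pre><code class="language-php">// Illustrative sketch: build Cloudinary-style transformation path
// segments from an array of chained transformation steps.
function transformationPath(array $chain): string
{
    $short = ['crop' => 'c', 'width' => 'w', 'height' => 'h', 'x' => 'x', 'y' => 'y'];

    $segments = [];
    foreach ($chain as $step) {
        $parts = [];
        foreach ($step as $option => $value) {
            $parts[] = ($short[$option] ?? $option) . '_' . $value;
        }
        $segments[] = implode(',', $parts);
    }

    return implode('/', $segments);
}

$path = transformationPath([
    ['width' => 300, 'height' => 150],
    ['crop' => 'crop', 'width' => 50, 'height' => 50, 'x' => 100, 'y' => 75],
]);
// $path is 'w_300,h_150/c_crop,w_50,h_50,x_100,y_75'
</code></pre>
<p>Each step becomes one comma-separated segment, and chaining is just joining segments with a slash, mirroring the URLs above.</p>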
<h3>File Type Transformations</h3>
<p>This is by far one of the most powerful features of Cloudinary. Say we are uploading a <strong>GIF</strong>, but want to optimize it and change it to an <strong>MP4</strong>.</p>
<p><code>http://res.cloudinary.com/demo/image/upload/kitten_fighting.gif</code></p>
<p>would turn into</p>
<p><code>http://res.cloudinary.com/demo/image/upload/kitten_fighting.mp4</code></p>
<p>Changing the extension of the file in the URL is all it takes to make this conversion. No more keeping track of different file references or different hashes. Just ask for a different file extension!</p>
<p><a href="http://cloudinary.com/blog/reduce_size_of_animated_gifs_automatically_convert_to_webm_and_mp4">Learn more</a></p>
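<p>Because the conversion is driven entirely by the URL, “transcoding” on the application side reduces to string manipulation. A minimal sketch, with a helper name of my own invention:</p>
<pre><code class="language-php">// Hypothetical helper: request a different delivery format by
// swapping the file extension on a Cloudinary URL.
function withExtension(string $url, string $extension): string
{
    return preg_replace('/\.\w+$/', '.' . $extension, $url);
}

$mp4 = withExtension('http://res.cloudinary.com/demo/image/upload/kitten_fighting.gif', 'mp4');
// $mp4 is 'http://res.cloudinary.com/demo/image/upload/kitten_fighting.mp4'
</code></pre>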
<h3>Programmatic SDKs</h3>
<p>It’s all well and good that we can convert any file asset via a URL, but if you want to make these changes more programmatically, you can use one of their many SDKs. For this example, we’ll be using their <a href="https://github.com/cloudinary/cloudinary_php">PHP SDK</a>.</p>
<p>In my initial example, we ended with the final URL of</p>
<p><code>http://res.cloudinary.com/demo/image/upload/w_300,h_150,c_fit/sample.jpg</code></p>
<p>for our image. In the PHP SDK, we’d be able to do:</p>
<pre><code class="language-php">$transformations = ['width' =&gt; 300, 'height' =&gt; 150, 'crop' =&gt; 'fit'];
$url = cloudinary_url('sample.jpg', $transformations);
</code></pre>
<p>and multiple transformations, like our second example, would look like:</p>
<pre><code class="language-php">$transformations = ['transformation' =&gt; [
    ['width' =&gt; 300, 'height' =&gt; 150],
    ['width' =&gt; 50, 'height' =&gt; 50, 'crop' =&gt; 'crop', 'x' =&gt; 100, 'y' =&gt; 75]
]];
$url = cloudinary_url('sample.jpg', $transformations);
</code></pre>
<h3>Migrating to Cloudinary</h3>
<p>Cloudinary has taken the hard work out of migrating assets over to their platform. There are currently a couple of options to choose from.</p>
<ol>
<li>
<p>Reference the full URL of your current image and Cloudinary will automatically pull it in for you.</p>
</li>
<li>
<p>Set up an auto upload mapping to your S3 bucket. When this “virtual” directory is requested, it will pull the asset from your bucket into your Cloudinary account.</p>
</li>
</ol>
<p><a href="http://cloudinary.com/blog/how_to_automatically_migrate_all_your_images_to_the_cloud">Learn more</a></p>
<h3>Why We Chose Cloudinary</h3>
<p>Before moving forward with anything new, we went through a diligent research period, looking at a handful of SaaS options as well as possibly redesigning our in-house system. As we worked through the options, one thing became very clear: handling our own system didn’t make sense. A good handful of services are more cost-effective and feature-rich than anything we could build. Our business is not handling images. AWS instance, storage, and CDN fees alone were tens of thousands of dollars per month. Add the developer costs to maintain and extend the system, and it adds up quickly. Even more importantly, each time something broke or needed to be added to our image service, it took attention away from more important tasks or services.</p>
<p>After evaluating a number of image and CDN services, we moved forward with Cloudinary for a number of reasons:</p>
<ol>
<li>
<p><strong>They were very attentive.</strong> They set up multiple meetings with their sales and tech teams to answer all of our questions and to help get us the best price for the resources we need. Even now, after the “sale”, they continue to reach out personally to share new features that haven’t been published yet and check in on analytics and our performance.</p>
</li>
<li>
<p><strong>Price.</strong> Their service is a LOT cheaper than running all of our instances, storage, and CDN traffic through AWS.</p>
</li>
<li>
<p><strong>Features.</strong> The flexibility and number of features they offer for image handling is impressive, and there is no realistic way we’d have the bandwidth to build similar features ourselves. Their system makes it very easy to make adjustments on the fly and see which combinations perform and look better.</p>
</li>
<li>
<p><strong>Uptime.</strong> The majority of our content centers on images, so if images aren’t available we can’t provide the value our users expect. Since moving to Cloudinary, our image uptime has been spectacular and image responsiveness has been significantly faster too.</p>
</li>
</ol>
<h3>Summary</h3>
<p>We have simplified our image handling process over the six months since we started working with Cloudinary. They easily handle our <strong>4.5 million images</strong> (and counting) and over <strong>1.4 billion requests per month</strong>, and they provide insights that help us further improve performance on our sites. We don’t take site performance and user experience lightly, so we are very happy with our decision to use their services. Some other helpful points:</p>
<ul>
<li>
<p><a href="http://cloudinary.com/documentation">Great documentation</a></p>
</li>
<li>
<p>A bunch of <a href="http://cloudinary.com/addons">add-on plugins</a> like JPEGmini and Imagga</p>
</li>
<li>
<p>Tons of transformation options like <a href="http://cloudinary.com/blog/adding_watermarks_credits_badges_and_text_overlays_to_images">watermarks</a>, <a href="http://cloudinary.com/cookbook/pixelate_an_image_or_a_region">custom pixelation</a>, and even <a href="http://cloudinary.com/cookbook/convert_pdf_to_jpg">PDF to image</a> conversion</p>
</li>
<li>
<p>Automatic <a href="http://cloudinary.com/blog/introducing_intelligent_responsive_image_breakpoints_solutions">responsive image</a> handling</p>
</li>
<li>
<p>Video <a href="http://cloudinary.com/documentation/video_management">management and transformations</a></p>
</li>
<li>
<p><a href="http://cloudinary.com/blog/automatic_backup_of_user_uploaded_images_using_cloudinary">Automatic backups to S3</a></p>
</li>
<li>
<p>Newly introduced “<a href="http://cloudinary.com/blog/introducing_smart_cropping_intelligent_quality_selection_and_automated_responsive_images">Auto Everything</a>”</p>
</li>
</ul>
<p>As an aside, we’re sharing this article and information as happy customers. Cloudinary did not commission this article, nor did they give us any incentive for writing it. We just want to share our experience.</p>
]]></description>
                <pubDate>Tue, 28 Jun 2016 21:59:13 +0000</pubDate>
            </item>
            </channel>
</rss>
