<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://www.gustavwengel.dk/feed.xml" rel="self" type="application/atom+xml" /><link href="https://www.gustavwengel.dk/" rel="alternate" type="text/html" /><updated>2026-04-08T09:50:32+00:00</updated><id>https://www.gustavwengel.dk/feed.xml</id><title type="html">Tinkerer</title><subtitle>Code and Climate Change. Blog about software development in ClimateTech</subtitle><author><name>Gustav Wengel</name></author><entry><title type="html">Moving My Digital Footprint out of the United States - Part 1</title><link href="https://www.gustavwengel.dk/2026/02/01/out-of-us-1.html" rel="alternate" type="text/html" title="Moving My Digital Footprint out of the United States - Part 1" /><published>2026-02-01T00:00:00+00:00</published><updated>2026-02-01T00:00:00+00:00</updated><id>https://www.gustavwengel.dk/2026/02/01/out-of-us-1</id><content type="html" xml:base="https://www.gustavwengel.dk/2026/02/01/out-of-us-1.html"><![CDATA[<p>I’ve never really been too keen on many of the US tech giants. I deleted my facebook back in 2019 (and shamefully re-created it again last year, to participate in a few groups), and I X’d Twitter back when it changed name.</p>

<p>Having the US president militarily threaten your kingdom certainly doesn’t make me more enthusiastic about sending my money or my data to the US, so I’m beginning a much fuller migration of my digital footprint away from the United States. I’m documenting my journey here to hopefully inspire others to do the same<sup id="fnref:0" role="doc-noteref"><a href="#fn:0" class="footnote" rel="footnote">1</a></sup>.</p>

<p>When starting this process I needed to reflect on a few things:</p>
<ul>
  <li>I use <em>a lot of</em> US services, and often for good reasons: they have a generous free tier, they’re the best you can get, or I’m just used to them. Migrating over won’t be something I can accomplish in one day, so I’m dedicating a few hours to the process every Sunday. I hope to dedicate a blog post to each phase.</li>
  <li>This is probably going to cost me money. I don’t think it’s necessarily going to be <em>a lot</em> of money, but I am willing to put my money where my mouth is and help strengthen the non-US tech scene.</li>
  <li>I might never be able to get away from all US technology. My personal computer still runs Windows (I might switch that, though), my smartphone still runs Android, and my credit card is still a Visa. I’ll need to be pragmatic and focus my energy where I can make reasonable choices.</li>
</ul>

<h2 id="low-hanging-fruit">Low Hanging Fruit</h2>

<p>I’ve switched my search engine to <a href="https://www.ecosia.org">Ecosia</a>. If you’re already using Chrome, I can recommend switching over to the <a href="https://www.ecosia.org/browser">Ecosia browser</a>. Ecosia is a German company that donates all its profits to help combat climate change.<br />
I’ll fully admit I don’t think the search results are as good as Google’s<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">2</a></sup>, particularly for local queries, but you can always prefix your search queries with <code class="language-plaintext highlighter-rouge">!g</code> to redirect to Google for the tricky ones.</p>

<h2 id="first-phase">First Phase</h2>

<p>The first phase for me is switching over the services that I actively pay for. The services I pay the most for are cloud file storage: I actually pay for both Google and Dropbox<sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">3</a></sup>.<br />
I’ve moved them both over to <a href="https://proton.me/drive">Proton</a>, a privacy-focused company from Switzerland. Proton also seems to offer almost a full replacement for all the Google services I actually use.</p>

<p>The process for migrating took a little while but actually wasn’t too complex:</p>

<ul>
  <li>I signed up for Proton and installed the Proton Drive application on my desktop computer.</li>
  <li>I exported all of my Google data via <a href="https://takeout.google.com/">Google Takeout</a>, and downloaded the files.
    <ul>
      <li>For my photos I followed <a href="https://proton.me/support/how-to-import-from-google-photos">this workflow</a> to get them into Proton Drive.</li>
      <li>For the rest of the Drive files, I just dropped them into a directory that Proton Drive syncs with the cloud.</li>
    </ul>
  </li>
  <li>For Dropbox I already had the desktop app installed, so synchronizing over was a simple matter of copying the files into a directory that Proton Drive syncs to.</li>
</ul>
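<p>Before deleting anything from the source, I wanted a sanity check that everything had actually made it over. A small script comparing the export against the synced directory does the trick - this is just a sketch, and the two paths are hypothetical examples you’d swap for your own:</p>

```python
# Compare a source directory (e.g. the unpacked Takeout export) against the
# directory Proton Drive syncs, by relative path and file size.
# The paths under __main__ are made-up examples - point them at your own folders.
from pathlib import Path


def snapshot(root: Path) -> dict[str, int]:
    """Map each file's path (relative to root) to its size in bytes."""
    return {
        str(p.relative_to(root)): p.stat().st_size
        for p in root.rglob("*")
        if p.is_file()
    }


def missing_or_mismatched(src: Path, dst: Path) -> list[str]:
    """Files present in src but absent (or differently sized) in dst."""
    src_snap, dst_snap = snapshot(src), snapshot(dst)
    return sorted(rel for rel, size in src_snap.items() if dst_snap.get(rel) != size)


if __name__ == "__main__":
    src = Path("~/Takeout").expanduser()       # hypothetical export location
    dst = Path("~/Proton Drive").expanduser()  # hypothetical sync directory
    if src.exists() and dst.exists():
        for rel in missing_or_mismatched(src, dst):
            print("not synced:", rel)
```

<p>Matching sizes aren’t a cryptographic guarantee of identical contents, but they’re a cheap way to catch files that silently failed to sync before you start deleting things from the source.</p>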

<p>This process wasn’t particularly difficult, but syncing many hundreds of GBs did mean I had to leave my computer running for a fair amount of time.</p>

<p>After synchronizing, I deleted most of my old or large files from both Dropbox and GDrive, and cancelled my subscriptions.</p>

<p>I was paying around £15 per month for Dropbox (I was on a subscription much too big for what I needed), and around £4 per month for Google One.<br />
I’ve managed to move all my files to a £15-a-month Proton subscription that covers both me and my partner.</p>

<p>After moving these services, the only US tech service I still actively pay for is the $6 I spend on DigitalOcean every month to host a small website.<br />
In the next couple of months, I hope to move that service to <a href="https://www.scaleway.com/en/">Scaleway</a>, which I’ve used before and quite like.</p>

<h2 id="next-phase">Next Phase</h2>

<p>I expect the next phase for me to be switching over my email. I’ve started setting up Proton so that I can use hi@gustavwengel.dk as my primary email address.<br />
I didn’t get around to migrating accounts or forwarding emails today, though - hopefully that’s for next time.</p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:0" role="doc-endnote">
      <p>I don’t necessarily think switching tech is the highest-impact thing one person can do. As an individual you might not have much power, but you can decide who you vote for and where you spend your money. But I’m a technical person, so this seems like a natural thing for me to write about. <a href="#fnref:0" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:1" role="doc-endnote">
      <p>I used to use DuckDuckGo most places before, but it has the same issue as Ecosia: local search queries in particular don’t always work as well as you’d want. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:2" role="doc-endnote">
      <p>I paid for Google One as it was convenient, and for Dropbox as a second backup, as I’d heard enough horror stories of people being locked out of their Google accounts. <a href="#fnref:2" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>Gustav Wengel</name></author><summary type="html"><![CDATA[I’ve never really been too keen on many of the US tech giants. I deleted my facebook back in 2019 (and shamefully re-created it again last year, to participate in a few groups), and I X’d Twitter back when it changed name.]]></summary></entry><entry><title type="html">Moving My Digital Footprint out of the United States - Part 2</title><link href="https://www.gustavwengel.dk/2026/02/01/out-of-us-2.html" rel="alternate" type="text/html" title="Moving My Digital Footprint out of the United States - Part 2" /><published>2026-02-01T00:00:00+00:00</published><updated>2026-02-01T00:00:00+00:00</updated><id>https://www.gustavwengel.dk/2026/02/01/out-of-us-2</id><content type="html" xml:base="https://www.gustavwengel.dk/2026/02/01/out-of-us-2.html"><![CDATA[<p>I’m trying to rely less on US services &amp; tech, and I’m using a few hours every Sunday to migrate.<br />
I’ve mentioned why I want to do this migration in the <a href="/2026/02/01/out-of-us-1.html">first part of this series</a>.</p>

<p>This Sunday I didn’t get quite as much done as I had hoped (our dishwasher broke and I had a lot of dishes that needed washing), but I did manage to get further with migrating my email over to <a href="https://proton.me/">Proton</a>, and I’ve set it up so that you can now reach me at <a href="mailto:hi@gustavwengel.dk">hi@gustavwengel.dk</a> instead of my old Gmail.</p>

<p>I spent a bit of time digging through 1Password to see which accounts I wanted to switch over, and oh boy, do I have a lot of accounts lying around for services I never use or haven’t used in years. I decided to switch over a handful of accounts that I’d actually be sad to lose access to, and leave the rest as ghost accounts still tied to my old Gmail.</p>

<p>As for my old email? I’m probably going to keep it around for a while. I’ve set it to forward to my new one (it took like 2 minutes), and Proton helpfully offers a wizard to import contacts, calendars and emails from your Gmail to Proton.</p>

<p>Changing habits is hard, and I still find myself reaching for my old mail, but hopefully having changed accounts and replaced the Gmail app on my phone will eventually make the new muscle memory stick.</p>

<p>I’ve sent my first outbound email from my new mail, and everything seems to work just dandy!</p>

<h2 id="next-phase">Next Phase</h2>

<p>Next week my plan is to try and move away from the wider Google ecosystem:</p>
<ul>
  <li>Replace GDocs with Proton Docs. I don’t imagine this will be very hard as I don’t actually use it for anything fancy, but I am uncertain about how I’ll migrate my documents.</li>
  <li>Replace my Google Calendar with Proton Calendar</li>
  <li>Figure out what to do as a replacement for Google Keep</li>
</ul>]]></content><author><name>Gustav Wengel</name></author><summary type="html"><![CDATA[I’m trying to rely less on US services &amp; tech, and I’m using a few hours every Sunday to migrate. I’ve mentioned why I want to do this migration in the first part of this series.]]></summary></entry><entry><title type="html">Moving My Digital Footprint out of the United States - Part 3</title><link href="https://www.gustavwengel.dk/2026/02/01/out-of-us-3.html" rel="alternate" type="text/html" title="Moving My Digital Footprint out of the United States - Part 3" /><published>2026-02-01T00:00:00+00:00</published><updated>2026-02-01T00:00:00+00:00</updated><id>https://www.gustavwengel.dk/2026/02/01/out-of-us-3</id><content type="html" xml:base="https://www.gustavwengel.dk/2026/02/01/out-of-us-3.html"><![CDATA[<p>I’m trying to rely less on US services &amp; tech, and I’m using a few hours every Sunday to migrate.<br />
I’ve mentioned why I want to do this migration in the <a href="/2026/02/01/out-of-us-1.html">first part of this series</a>.</p>

<p>Last week I didn’t get to do anything, as I spent all Sunday setting up a self-contained solar panel circuit with a battery and everything.<br />
I’m pretty sure it’s a hideously large solar cell for running such a small light, but it makes me happy every time I see it light up. Solar cells are super cool!</p>

<div class="img-div">
<img src="https://www.gustavwengel.dk/assets/img/solar-cell.jpg" />
Taking light from the sun, and turning it into... well, more light!
</div>

<p>Anyway, enough about solar energy - let’s talk status updates on the tech migration. There have been some good things and some bad.</p>

<h2 id="what-went-well">What Went Well</h2>

<p>First off, switching email has been much easier than expected! As emails are forwarded automatically from Gmail to my new Proton Mail, I’m not really scared of losing any emails, and I just start any new conversations from the new Proton-based address.</p>

<p>Running inbox zero in both mailboxes is a little annoying, but it’s good motivation for switching stuff over (or perhaps unsubscribing from a few things). A surprising number of services require you to write them an email to change your email address, but on the other hand, at least you <em>can</em> write them an email. I’m looking at you, horrible AI “support assistant” bots.</p>

<p>Switching over to Proton Drive and Proton Docs instead of Google Drive &amp; Google Docs has been pretty seamless.<br />
The only hiccup has been that when you export your data via <a href="https://takeout.google.com/">Google Takeout</a>, you don’t automatically get shared directories - you have to download those manually if you need them, so I had to go through my GDrive to determine what I wanted to keep.</p>

<p>I also canceled my Netflix today. A very easy decision that I’ve been putting off for a long time, mostly due to inertia - they almost never have anything I want to watch.</p>

<h2 id="what-went-poorly">What Went Poorly</h2>

<p>I was really hoping that by now I would have switched my calendar away from Google, but Proton Calendar doesn’t support reminders (or tasks, as I believe Google calls them these days), which I use extensively. It seems like I’m going to be stuck with Google here a bit longer, until Proton ups its game.</p>

<p>I also wanted to try out Mistral as an LLM - specifically, I had a small web scraping task that I figured I would test it out on<sup id="fnref:0" role="doc-noteref"><a href="#fn:0" class="footnote" rel="footnote">1</a></sup>. It’s not great. It’s doing the thing the frontier models used to do, where it says something is “perfect” and “fantastic” when it doesn’t work at all, which is a tad disappointing.</p>

<p>It is, however, much cheaper than e.g. a Claude Max subscription (€15 versus $100 or $200 a month), but it seems that here you get what you pay for.</p>
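<p>For reference, the kind of one-off scraping script I’m talking about is tiny - something like the sketch below, which collects headings from a page using only the standard library. The sample HTML and the choice of <code class="language-plaintext highlighter-rouge">h2</code> are made-up placeholders, not my actual task:</p>

```python
# A throwaway scraper: collect the text of every <h2> on a page.
# Stdlib only; in a real one-off you'd feed it the response body from urllib.
from html.parser import HTMLParser


class TitleScraper(HTMLParser):
    def __init__(self) -> None:
        super().__init__()
        self._in_h2 = False
        self.titles: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "h2":
            self._in_h2 = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self._in_h2 = False

    def handle_data(self, data):
        # Only keep text that appears inside an <h2> element.
        if self._in_h2 and data.strip():
            self.titles.append(data.strip())


def extract_titles(html: str) -> list[str]:
    parser = TitleScraper()
    parser.feed(html)
    return parser.titles


if __name__ == "__main__":
    sample = "<h2>First post</h2><p>...</p><h2>Second post</h2>"
    print(extract_titles(sample))  # → ['First post', 'Second post']
```

<p>Twenty-odd lines of glue code like this is exactly the scale of task where an LLM should shine, which made the failure all the more disappointing.</p>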

<h2 id="next-phase">Next Phase</h2>

<p>For my next phase I’m hoping to move my hosted services out of DigitalOcean, as that’s the last US service I pay actual money for.</p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:0" role="doc-endnote">
      <p>While I’m skeptical of LLMs for a fair amount of tasks, I feel like one-off web-scraping scripts are something they should be a perfect fit for. <a href="#fnref:0" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>Gustav Wengel</name></author><summary type="html"><![CDATA[I’m trying to rely less on US services &amp; tech, and I’m using a few hours every Sunday to migrate. I’ve mentioned why I want to do this migration in the first part of this series.]]></summary></entry><entry><title type="html">How I Review Pull Requests</title><link href="https://www.gustavwengel.dk/2025/02/19/pr-reviewer-practices.html" rel="alternate" type="text/html" title="How I Review Pull Requests" /><published>2025-02-19T00:00:00+00:00</published><updated>2025-02-19T00:00:00+00:00</updated><id>https://www.gustavwengel.dk/2025/02/19/pr-reviewer-practices</id><content type="html" xml:base="https://www.gustavwengel.dk/2025/02/19/pr-reviewer-practices.html"><![CDATA[<p>I’ve written quite a bit about pull requests before - covering everything from <a href="/how-thoroughly-should-you-review-pull-requests">how thoroughly you should review a pull request</a> to strategies for <a href="/how-to-not-get-blocked-while-waiting-for-code-review">avoiding being blocked by waiting for reviews</a> and making your <a href="/4-ways-to-make-your-pull-requests-faster-to-review">PR faster to review</a>. This post builds on those, focusing specifically on what <strong>I</strong> do when reviewing PRs.</p>

<p>I spend a fair amount of time reviewing code, and I’d like to think I’m both reasonably fast and thorough. Over time, I’ve developed some habits that I think make this more efficient. Not all of these habits will be relevant for everyone - we’re all wired differently.</p>

<p>I also recommend checking out <a href="https://google.github.io/eng-practices">Google’s Engineering Practices</a>, which provides a short, pragmatic take on code reviews.</p>

<h2 id="read-the-pr-description-first">Read the PR Description First</h2>
<p>The first thing I do when reviewing a PR is read the description. A good description should explain <strong>what</strong> is being changed and, more importantly, <strong>why</strong>. If I don’t understand the intent behind a PR, I can’t judge whether it’s the right change to make.</p>

<p>A test I often use: if I revisit this PR in six months, will the git history tell me what I need to know? I prefer all relevant context to be included directly in the PR description rather than relying on external issue trackers like Linear, Jira, or GitHub issues. Git logs don’t decay over time, but if your company changes issue trackers - good luck finding out why the changes described in <code class="language-plaintext highlighter-rouge">JIRA-3481</code> were made.</p>

<p>If the PR description isn’t clear or doesn’t justify the change, that’s my first point of feedback. I also often check whether the code aligns with the description, and provide feedback on that.</p>

<h2 id="keep-a-notepad">Keep a Notepad</h2>
<p>While reviewing, I almost always have somewhere to jot down thoughts and questions - things like:</p>

<ul>
  <li>“I don’t understand how X works with Y.”</li>
  <li>“This is different from how we normally do X - why?”</li>
  <li>“What happens if the user provides input X?”</li>
  <li>“This looks tricky - how well-tested is it?”</li>
</ul>

<p>Mostly I keep these as a list in a separate text file. This helps because questions you think of often aren’t answered until later in the PR. Writing them down means you don’t have to worry about forgetting them.</p>

<p>Once I finish reviewing, any unresolved notes become comments for the author - ranging from clarifying questions to suggestions for improvement.</p>

<h2 id="some-prs-are-quick-to-review">Some PRs Are Quick to Review</h2>

<p>Not all PRs require deep scrutiny. Some are quick to review, especially if they involve:</p>

<ul>
  <li>Small refactor-only changes (especially if no tests are modified).</li>
  <li>Renaming things across the codebase, e.g., renaming <code class="language-plaintext highlighter-rouge">energy</code> to <code class="language-plaintext highlighter-rouge">electricity</code>, streamlining capitalization or similar.</li>
  <li>PRs that only add additional tests or logging.</li>
  <li>Bugfix PRs that involve minimal changes and come with a test.</li>
</ul>

<p>When a PR is well-scoped and self-contained, it can sometimes be reviewed in minutes.</p>

<h2 id="some-prs-need-to-be-split-up">Some PRs Need to Be Split Up</h2>

<p>PR review time scales exponentially with size. If a PR contains multiple unrelated changes, I have to manually untangle their interactions, which takes significantly more time. I often request that PRs be split, especially if they mix refactoring with feature work or if they are difficult to review in a single pass.</p>

<p>Spending the time as an author to split up a PR can improve the overall time to ship, because the time you spend is often less than the extra time it would take your reviewer to review the larger PR.</p>

<h2 id="pay-extra-attention-to-module-boundaries">Pay Extra Attention to Module Boundaries</h2>
<p>Code generally has parts that are public, and parts that are internal. You can consider the public parts the “module boundaries”: the places where the module interacts with the outside world.<br />
The module boundaries could be an end-user-facing HTTP API, or simply the methods your particular module exposes for other code to interact with.<br />
Often, getting the module boundaries right is more important than getting the internal details perfect. If a module boundary is well-designed and its implementation is properly tested at that level, it is usually easy to refactor the internals later if needed.</p>

<p>When reviewing module boundaries, I pay particular attention to:</p>

<ul>
  <li>How easy it is to use from the perspective of other developers or API consumers.</li>
  <li>Whether the type system could enforce constraints that are currently left to developers to uphold manually.</li>
</ul>

<h2 id="evaluate-the-author--module-match">Evaluate the Author &amp; Module Match</h2>
<p>The level of scrutiny I apply depends on both <strong>who wrote the PR</strong> and <strong>which module it touches</strong>.</p>

<ul>
  <li>If the author is experienced and has a strong track record, I will probably still comment on many things, but I will be quicker to accept their reasons for the way the code looks.</li>
  <li>For newer developers or those unfamiliar with a codebase or a module, I generally take a little more convincing that their approach is correct, and challenge them a bit more. Both because I can’t take their good judgement for granted yet, and because I see it as a learning experience for them.</li>
  <li>If the author is the primary maintainer of a module, I often frame feedback as suggestions rather than things that must be fixed. They are the primary maintainer, so they often know best what constraints they are operating within.</li>
  <li>If the PR affects a critical or sensitive module, I generally invest more time in reviewing it thoroughly.</li>
  <li>If the module is experimental or still evolving, I’m okay with less than stellar code, since it will likely be refactored several times during development.</li>
</ul>

<p>Based on the author, I also decide <strong>when to insist on cleanup now vs. later</strong>:<br />
Some developers follow through on “I’ll clean this up in a later PR,” while others don’t.<br />
For developers who reliably perform the follow-up, I’m generally pretty lenient about when cleanup appears; for others, I normally insist on the cleanup work being done in the same PR.</p>

<h2 id="finding-the-important-changes">Finding the Important Changes</h2>
<p>Many PRs contain a mix of boilerplate updates and tricky changes. I generally start by skimming over the files and marking the boilerplate or irrelevant files as “viewed” so I can focus on the core modifications. If the author has left comments guiding the review, that’s always helpful.</p>

<p>Then I either start looking at the module boundaries or the tests for the given code, to get an idea of how the feature is used, before jumping into the implementation - but this is very much based on gut-feeling.</p>

<p>For complex PRs, I sometimes pull the code locally to explore it in an editor, but not that often.</p>

<h2 id="things-i-look-for-when-reviewing-production-code">Things I Look for When Reviewing Production Code</h2>

<p>Some issues are easy to spot with a fresh set of eyes - like leftover debug logs or unclear method names. Others require deeper thought. I generally focus on:</p>

<ul>
  <li>When first evaluating a PR, I consider whether this code is performance-sensitive. If it isn’t, I generally don’t think too deeply about performance.</li>
  <li>Is there anything that’s un-idiomatic or inconsistent for the codebase or the language? If so, I will generally ask for this to be corrected. I think conventions for how to do and name things are generally helpful, even if there are some I disagree with personally.<br />
I generally think developers should err on the side of respecting conventions, even ones they disagree with. Having a sub-optimal convention is better than having no convention at all.</li>
  <li>I look at <code class="language-plaintext highlighter-rouge">//TODO</code> comments and often add a PR comment asking if we need to track this in an issue somewhere else, but I often leave it up to the developer if they think we need to.</li>
  <li>I try to pay particular attention to stuff that’s not easily reversible, such as security issues, or bugs that would persist invalid data into the database.</li>
  <li>I’m a big fan of type systems, and if there is an invariant that needs to be upheld, I will often recommend relying on the type-system rather than relying on developer skills - we’re only human after all. Examples of this could be using an ordered data structure for data that must be ordered, using a list that <em>cannot</em> be empty, for lists that <em>should not</em> be empty, and so forth.</li>
</ul>
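<p>As a concrete sketch of that last point: instead of documenting “this list must never be empty” and hoping every call site checks, you can make the empty state unrepresentable. The names here are illustrative, not taken from any particular codebase:</p>

```python
# A minimal non-empty list: the invariant is enforced once, at construction,
# so downstream code never needs a defensive emptiness check.
from __future__ import annotations

from dataclasses import dataclass
from typing import Generic, TypeVar

T = TypeVar("T")


@dataclass(frozen=True)
class NonEmptyList(Generic[T]):
    head: T                   # always present - an empty state cannot be expressed
    tail: tuple[T, ...] = ()

    @classmethod
    def of(cls, items: list[T]) -> NonEmptyList[T]:
        if not items:
            raise ValueError("NonEmptyList requires at least one item")
        return cls(items[0], tuple(items[1:]))

    def first(self) -> T:     # total, unlike items[0] on a possibly-empty list
        return self.head

    def to_list(self) -> list[T]:
        return [self.head, *self.tail]
```

<p>A function that accepts a <code class="language-plaintext highlighter-rouge">NonEmptyList</code> states its precondition in the signature, and the single <code class="language-plaintext highlighter-rouge">ValueError</code> at the boundary replaces scattered emptiness checks throughout the code.</p>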

<h2 id="things-i-look-for-when-reviewing-test-code">Things I Look for When Reviewing Test Code</h2>
<p>I generally pay less attention to test code than production code. My main checks are:</p>

<ul>
  <li>Do test names clearly describe what they’re testing?</li>
  <li>Do the tests cover both happy paths and unhappy paths that are either likely, or critical?</li>
  <li>Do they answer the questions I wrote down in my notebook while reading through the implementation?</li>
</ul>

<p>If the tests cover the key scenarios I was curious about, I make a point to mention it, so reviews don’t always feel like a list of negatives.</p>

<h2 id="approve-with-suggestions">Approve with Suggestions</h2>

<p>In many cases, I like to “Approve with suggestions”, or, as Google calls it, an <a href="https://google.github.io/eng-practices/review/reviewer/speed.html#lgtm-with-comments">LGTM with comments</a>.<br />
That essentially means that I trust the author to correct the issues I pointed out. However, this depends on the fixes required, and my trust in the author to get them right without an additional review.</p>]]></content><author><name>Gustav Wengel</name></author><summary type="html"><![CDATA[I’ve written quite a bit about pull requests before - covering everything from how thoroughly you should review a pull request to strategies for avoiding being blocked by waiting for reviews and making your PR faster to review. This post builds on those, focusing specifically on what I do when reviewing PRs.]]></summary></entry><entry><title type="html">Thoughts on Product Roadmaps</title><link href="https://www.gustavwengel.dk/2024/12/10/thoughts-on-roadmaps.html" rel="alternate" type="text/html" title="Thoughts on Product Roadmaps" /><published>2024-12-10T00:00:00+00:00</published><updated>2024-12-10T00:00:00+00:00</updated><id>https://www.gustavwengel.dk/2024/12/10/thoughts-on-roadmaps</id><content type="html" xml:base="https://www.gustavwengel.dk/2024/12/10/thoughts-on-roadmaps.html"><![CDATA[<p><em>Caveat: While I consider myself a product-first engineer with solid product skills and an academic background partly founded in product design and UX, I have never worked as a full-time product manager. My experience only comes from “the other side of the table”, i.e. working with product managers as an engineer.</em></p>

<p>During my time at my current employer, I’ve spent a fair amount of time reflecting on product roadmaps - what they’re good for and what they might not be so good for. I think a shared understanding of priorities and collective ownership is important, and a roadmap can be a valuable tool for achieving that. Roadmaps also seem to be a reasonably standard industry practice, though there are <a href="https://basecamp.com/articles/options-not-roadmaps">some notable holdouts</a>.</p>

<h2 id="categories-of-roadmaps">Categories of Roadmaps</h2>

<p><a href="https://roadmunk.com/guides/what-is-a-product-roadmap/">Roadmunk</a> provides a solid overview of three overarching types of roadmaps:</p>

<ol>
  <li><strong>No-Dates Roadmap</strong>: A prioritized list of features or problems agreed upon by the team. Priorities can be defined in many ways (e.g., ROI, urgency/priority matrices), but this type of roadmap deliberately avoids assigning deadlines or timeframes to specific items.</li>
  <li><strong>Timeline-Based Roadmap</strong>: A roadmap where each item has a defined duration or deadline.</li>
  <li><strong>Hybrid Roadmap</strong>: Combines the two approaches, with timeframes for near-term items (e.g., 1-3 months) and a prioritized list for tasks further out.</li>
</ol>

<p>For my two cents, I believe the no-dates roadmap or the hybrid roadmap works best. Let’s explore why.</p>

<hr />

<h2 id="the-risk-of-commitment">The Risk of Commitment</h2>

<p>The moment you introduce dates or timeframes into your roadmap, people gravitate towards treating them as commitments. The software world is rife with stories about estimates being turned into promises. I often see it recommended to keep dates out of roadmaps shared outside the immediate team:</p>

<blockquote>
  <p><em>It is not uncommon for sales reps to share internal roadmaps with customers, as a way of closing a deal, generating interest, and keeping leads warm. Avoid having sales teams committing a product to a specific release date, by excluding release or launch dates in these roadmaps.</em> - <a href="https://www.productplan.com/learn/what-is-a-product-roadmap/">ProductPlan</a></p>
</blockquote>

<blockquote>
  <p><em>An important note: avoid including hard dates in sales roadmaps to avoid tying internal teams to potentially unrealistic dates. Unless there’s certainty about the product’s availability date, it’s a good habit to avoid including dates in an external-facing roadmap.</em> - <a href="https://www.atlassian.com/agile/product-management/product-roadmaps">Atlassian</a></p>
</blockquote>

<blockquote>
  <p><em>Some schools of thought around roadmaps believe that you should keep dates out of the roadmap due to the risk of committing to something that might not be delivered</em> - <a href="https://roadmunk.com/guides/what-is-a-product-roadmap/">also Roadmunk</a></p>
</blockquote>

<p>There are two reasons why you might be hesitant to put deadlines on roadmap items (and, by extension, make some sort of commitment):</p>

<ol>
  <li><strong>Unrealistic Deadlines</strong>: Humans are notoriously bad at estimating software complexity. Unless you build substantial slack into your timelines or do extensive, time-consuming pre-planning, you’ll likely miss deadlines.</li>
  <li><strong>Building the Wrong Thing</strong>: Sometimes you realize that what you’re building doesn’t actually solve the problem you think it does, or even that you have another problem you’d rather focus your resources on. If you’ve committed to timeframes, you either have to renege on your promise or build the wrong thing.</li>
</ol>

<blockquote>
  <p><em>Product managers rarely know what’s going to happen a year from now (market changes, discovery of new user needs, etc.), so planning for a one-year timeline doesn’t make sense. You only need the details of the who, what and how for the month and quarter, focused on working towards achieving a high-level goal or two (for agile teams and startups, even that time frame can be a stretch!)</em> - <a href="https://roadmunk.com/guides/what-is-a-product-roadmap/">Roadmunk</a></p>
</blockquote>

<p>Both of these grow more likely the longer you try to plan your timeline, as delays in earlier tasks cascade. While you might hope to make up the difference with some tasks finishing ahead of schedule, in my experience, that’s exceedingly rare.</p>

<h2 id="flexible-commitments--appetite">Flexible Commitments &amp; Appetite</h2>
<p>There’s a natural tension here: despite these risks, you often want to communicate timeframes, either internally or externally.</p>

<p>If you do have this desire, here are some strategies to make it work:</p>

<h3 id="1-limit-the-accuracy-of-commitments">1. Limit the accuracy of commitments</h3>
<p>“We’ll be working on this in Q2 of next year” is a much weaker promise than “This will be done in April next year, with this functionality…”. Often a vague commitment is “good enough”, while still leaving more room to maneuver.</p>

<h3 id="2-limit-the-timeframe-for-your-commitments">2. Limit the timeframe for your commitments.</h3>
<p>Limiting your commitments to a certain number of weeks or months in the future is generally wise for two reasons. First, planning enough to forecast anything with any accuracy is fairly labour-intensive - I’ve seen people spend days or weeks planning the next 6 months, only to figure out later that they’ve been building the wrong thing and need to scrap it all.<br />
Second, delays tend to cascade. If you’re only forecasting a month ahead, you might only be delayed by a week, while if you’re forecasting a year you could be entire quarters off.</p>

<h3 id="3-build-plenty-of-slack-into-the-system">3. Build plenty of slack into the system</h3>
<p>Unexpected things crop up, things take longer than expected, people get sick. That’s life. If you expect 100% utilization for the new features or roadmap items you’re planning on, you’re going to be in for either a world of pain when your timetable collides with the real world, or engineers cutting corners aggressively, leading to a product so filled with technical debt it’ll remind you of the 2008 financial crisis.</p>

<h3 id="4-be-extremely-flexible-in-scope">4. Be extremely flexible in scope</h3>
<p>If you do need accuracy in time, you’ll have to be flexible in scope. Determine a few high-priority items that are must-haves, and beyond that a prioritized list that you’re okay with not delivering. Cutting down to must-haves is hard to do well, as there’s a tendency to over-estimate how much can fit in there. The list of must-haves should be so short it makes you a little uncomfortable.</p>

<h3 id="5-use-appetite-rather-than-scope">5. Use appetite rather than scope</h3>
<p>Work with the concept of <a href="https://basecamp.com/shapeup/1.2-chapter-03">appetite</a> rather than scope. Appetite acknowledges that whether a problem is worth solving depends on how hard it is to solve. Rather than starting with a set of features or scope, you start with a problem and dedicate “X weeks” of appetite to solving as much of it as you can. That is your appetite for the problem. It is then up to the team to get something released within that appetite, with whatever scope fits.<br />
Obviously not all work lends itself well to appetite. If you have contractual obligations to keep e.g. your database up-to-date, you cannot simply disregard that task because it takes longer than your appetite. But I believe that for most feature-based work, appetite is a great framework that allows you to forecast timelines well into the future, at the cost of not being able to forecast scope.</p>

<hr />

<h2 id="roadmap-as-an-internal-alignment-tool">Roadmap as an Internal Alignment Tool</h2>

<p>It is important for teams and companies to agree on what problems are important to solve—and which are not. Creating a roadmap can be a catalyst for these discussions, even if the actual roadmap is never used afterward. The process of creating the roadmap creates alignment by encouraging collaboration and shared prioritization across teams.</p>

<p>As an example, sales might care more about shipping quickly to meet customer demands, while engineering often emphasizes reliability and long-term maintainability. By bringing these differing priorities to light, teams can find common ground and focus on achieving shared goals that take different priorities into account.</p>

<h2 id="when-roadmaps-become-adversarial">When Roadmaps Become Adversarial</h2>

<p>Sometimes I see roadmaps created in isolation. One team creates the first draft (or even the final version!) entirely on their own, and then sends it to others for feedback. This has a high risk of the process becoming adversarial, as the teams not part of the initial process will feel like they have to fight for their goals and time. There is a significantly different emotional quality to creating a roadmap together, versus having to fight for your priorities by critiquing an existing roadmap.</p>

<p>As an example, in many companies, sales-driven roadmaps pressure engineering teams into accepting unrealistic deadlines, sacrificing code quality and long-term maintainability.<br />
Situations like this, where the roadmap is dictated solely by product teams or executive management, can alienate stakeholders and engineers. This lack of buy-in can lead to two common reactions:</p>

<ol>
  <li>Indifference: Engineers may disengage, feeling no commitment or ownership over deadlines they had no role in setting. As a result, they might say, “If I didn’t have a say in the deadline, it’s not my problem if it isn’t met.”</li>
  <li>Stress: Unrealistic deadlines can create intense pressure, leading engineers to cut corners or burn out trying to deliver on time. Over time, this can result in a toxic environment, driving away top performers who seek healthier workplaces.</li>
</ol>

<p>In the end, roadmaps aren’t just an artifact. The way they’re created matters immensely for their effectiveness. As with so many other things that seem to be about numbers and engineering in theory, they often end up being about those squishy humans in the end. I’ll leave you with this last quote, which I think underlines the point well.</p>

<blockquote>
  <p>“Third, there’s the guilt. Yeah, guilt. Have you ever looked at a long list of things you said you were going to do but haven’t gotten around to yet? How does that list make you feel? The realities of life and uncertainty show us that 100% of the things on the roadmap are not going to happen on time the way we imagine.” - <a href="https://basecamp.com/articles/options-not-roadmaps">Basecamp</a></p>
</blockquote>]]></content><author><name>Gustav Wengel</name></author><summary type="html"><![CDATA[Caveat: While I consider myself a product-first engineer with solid product skills and an academic background partly founded in product design and UX, I have never worked as a full-time product manager. My experience only comes from “the other side of the table”, i.e. working with product managers as an engineer.]]></summary></entry><entry><title type="html">You Should Run Cleanup Code at the Start of Your Tests</title><link href="https://www.gustavwengel.dk/cleanup-at-the-start-of-tests" rel="alternate" type="text/html" title="You Should Run Cleanup Code at the Start of Your Tests" /><published>2024-11-14T00:00:00+00:00</published><updated>2024-11-14T00:00:00+00:00</updated><id>https://www.gustavwengel.dk/cleanup-at-the-start-of-tests</id><content type="html" xml:base="https://www.gustavwengel.dk/cleanup-at-the-start-of-tests"><![CDATA[<p>I recently came across an integration test that demonstrates what I believe is an antipattern.<br />
This particular test consistently failed right at the start, with a “Unique constraint violation” from the database when calling the <code class="language-plaintext highlighter-rouge">createUser</code> function.</p>

<p>The test, with irrelevant details omitted, looked like this:</p>

<div class="language-typescript highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nx">it</span><span class="p">(</span><span class="dl">"</span><span class="s2">example test</span><span class="dl">"</span><span class="p">,</span> <span class="p">()</span> <span class="o">=&gt;</span> <span class="p">{</span>
    <span class="c1">// Setup</span>
    <span class="nx">someOtherSetupCode</span><span class="p">();</span>
    <span class="nx">createUser</span><span class="p">({</span><span class="na">id</span><span class="p">:</span> <span class="dl">"</span><span class="s2">TEST_ID</span><span class="dl">"</span><span class="p">});</span>
    
    <span class="c1">// Actual test code</span>

    <span class="c1">// Cleanup</span>
    <span class="nx">someOtherTeardownCode</span><span class="p">();</span>
    <span class="nx">deleteUser</span><span class="p">({</span><span class="na">id</span><span class="p">:</span> <span class="dl">"</span><span class="s2">TEST_ID</span><span class="dl">"</span><span class="p">});</span>
<span class="p">})</span>
</code></pre></div></div>

<p>This approach to organizing your test with some setup, some test code, and some cleanup at the end might seem logical.<br />
However, there’s a significant flaw: <strong>The cleanup code might never run.</strong><br />
When we put the cleanup code at the end of the test, it means it won’t run if the test fails.</p>

<p>In this case, that meant we ended up in a particularly problematic loop: if the test failed once, it could never pass without human intervention. This was because <code class="language-plaintext highlighter-rouge">createUser</code> would throw a unique constraint error if the <code class="language-plaintext highlighter-rouge">id</code> had already been used, which meant the test would not proceed, preventing the cleanup code from ever running.</p>

<h2 id="first-alternative-beforeeach-and-aftereach">First Alternative: beforeEach and afterEach</h2>
<p>A better alternative, if your testing framework supports it, is to use hooks for running code before and after each test, such as <code class="language-plaintext highlighter-rouge">beforeEach</code> and <code class="language-plaintext highlighter-rouge">afterEach</code>. This would make the test code look like this:</p>

<div class="language-typescript highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nx">beforeEach</span><span class="p">(()</span> <span class="o">=&gt;</span> <span class="p">{</span>
    <span class="nx">someOtherSetupCode</span><span class="p">();</span>
    <span class="nx">createUser</span><span class="p">({</span><span class="na">id</span><span class="p">:</span> <span class="dl">"</span><span class="s2">TEST_ID</span><span class="dl">"</span><span class="p">});</span>
<span class="p">})</span>

<span class="nx">it</span><span class="p">(</span><span class="dl">"</span><span class="s2">example test</span><span class="dl">"</span><span class="p">,</span> <span class="p">()</span> <span class="o">=&gt;</span> <span class="p">{</span>
    <span class="c1">// Actual test code</span>
<span class="p">})</span>

<span class="nx">afterEach</span><span class="p">(()</span> <span class="o">=&gt;</span> <span class="p">{</span>
    <span class="nx">someOtherTeardownCode</span><span class="p">();</span>
    <span class="nx">deleteUser</span><span class="p">({</span><span class="na">id</span><span class="p">:</span> <span class="dl">"</span><span class="s2">TEST_ID</span><span class="dl">"</span><span class="p">});</span>
<span class="p">})</span>
</code></pre></div></div>

<p>However, this still isn’t quite optimal for a few reasons:</p>

<ul>
  <li>The <code class="language-plaintext highlighter-rouge">deleteUser</code> call is still not guaranteed to run. It might not run if, for example, <code class="language-plaintext highlighter-rouge">someOtherTeardownCode</code> fails, or if the programmer manually interrupts the test run. This is still better than the initial example because most test frameworks run <code class="language-plaintext highlighter-rouge">afterEach</code> even if the test fails, so we avoid a scenario where the test <em>never</em> succeeds—eventually, the user model will be cleaned up.</li>
  <li>Debugging database state is challenging when you always delete models at the end. If the test fails and you suspect an issue with a database interaction, you can’t inspect the database state after the test has failed because you’ve already deleted all the relevant data.</li>
  <li>Using <code class="language-plaintext highlighter-rouge">beforeEach</code> and <code class="language-plaintext highlighter-rouge">afterEach</code> requires that all of your tests share the same state. This may be fine in many cases, but if your tests have very different setup needs, it can become cumbersome.</li>
</ul>

<h2 id="best-alternative-clean-up-first">Best Alternative: Clean Up First</h2>

<p>The best approach, in my opinion, is for each test to ensure that any data that should not exist is deleted at the <em>start</em> of the test, and to leave its own data in the database at the end. The code would look like this:</p>

<div class="language-typescript highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nx">it</span><span class="p">(</span><span class="dl">"</span><span class="s2">example test</span><span class="dl">"</span><span class="p">,</span> <span class="p">()</span> <span class="o">=&gt;</span> <span class="p">{</span>
    <span class="c1">// Clean up any potential leftover state from previous test runs</span>
    <span class="nx">someOtherTeardownCode</span><span class="p">();</span>
    <span class="nx">deleteUser</span><span class="p">({</span><span class="na">id</span><span class="p">:</span> <span class="dl">"</span><span class="s2">TEST_ID</span><span class="dl">"</span><span class="p">});</span>
    
    <span class="c1">// Ensure consistent state</span>
    <span class="nx">someOtherSetupCode</span><span class="p">();</span>
    <span class="nx">createUser</span><span class="p">({</span><span class="na">id</span><span class="p">:</span> <span class="dl">"</span><span class="s2">TEST_ID</span><span class="dl">"</span><span class="p">});</span>
    
    <span class="c1">// Actual test code</span>
<span class="p">})</span>
</code></pre></div></div>
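<p>For this pattern to work, your teardown helpers must be idempotent. Here is a rough sketch of what such helpers could look like (the in-memory <code class="language-plaintext highlighter-rouge">Map</code> stands in for a real database, and these implementations are invented for illustration, not from the original test suite):</p>

<div class="language-typescript highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// In-memory stand-in for a real database table.
const db = new Map();

function createUser(user: { id: string }): void {
  // Mimics a unique constraint: creating an existing id throws.
  if (db.has(user.id)) {
    throw new Error("Unique constraint violation");
  }
  db.set(user.id, user);
}

function deleteUser(user: { id: string }): void {
  // Map.prototype.delete returns false for a missing key instead of
  // throwing, so this cleanup is safe to run against a clean database.
  db.delete(user.id);
}
</code></pre></div></div>

<p>With deletion written this way, running the cleanup at the start of a test costs nothing when the database is already clean.</p>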

<p>This way, you guarantee that the cleanup always runs before the test, and if you need to debug anything in the database after a test failure, all relevant data is still available for inspection.<br />
The only caveat is that your teardown code must handle both cases: when there is something to clean up and when there isn’t.</p>]]></content><author><name>Gustav Wengel</name></author><summary type="html"><![CDATA[I recently came across an integration test that demonstrates what I believe is an antipattern. This particular test was consistently failing at the start with a “Unique constraint violation” in the database when calling the createUser function.]]></summary></entry><entry><title type="html">Serde Errors When Deserializing Untagged Enums Are Bad - But Easy to Make Better</title><link href="https://www.gustavwengel.dk/serde-untagged-enum-errors-are-bad" rel="alternate" type="text/html" title="Serde Errors When Deserializing Untagged Enums Are Bad - But Easy to Make Better" /><published>2023-06-26T00:00:00+00:00</published><updated>2023-06-26T00:00:00+00:00</updated><id>https://www.gustavwengel.dk/serde-untagged-enums-terrible-errors</id><content type="html" xml:base="https://www.gustavwengel.dk/serde-untagged-enum-errors-are-bad"><![CDATA[<p><strong>Update 01/06/2023: Unfortunately the approach at the bottom of this article has been rejected, and how to get good error messages with untagged enums is currently in stasis</strong></p>

<p><a href="https://serde.rs/">Serde</a> is a powerful Rust library for serializing and deserializing data structures efficiently and generically. One of the cooler features is its support for untagged enums, which allow us to specify a list of structs in an enum, and Serde will parse the first one that matches. Here’s an example demonstrating this:</p>

<div class="language-rust highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nd">#[derive(Deserialize)]</span>
<span class="nd">#[serde(deny_unknown_fields)]</span>
<span class="k">pub</span> <span class="k">struct</span> <span class="n">Fruits</span> <span class="p">{</span>
    <span class="n">fruit_count</span><span class="p">:</span> <span class="nb">i32</span><span class="p">,</span>
<span class="p">}</span>

<span class="nd">#[derive(Deserialize)]</span>
<span class="k">pub</span> <span class="k">struct</span> <span class="n">Burgers</span> <span class="p">{</span>
    <span class="n">burger_count</span><span class="p">:</span> <span class="nb">i32</span><span class="p">,</span>
<span class="p">}</span>

<span class="nd">#[derive(Deserialize)]</span>
<span class="nd">#[serde(untagged)]</span>
<span class="k">pub</span> <span class="k">enum</span> <span class="n">MyFood</span> <span class="p">{</span>
    <span class="nf">Fruits</span><span class="p">(</span><span class="n">Fruits</span><span class="p">),</span>
    <span class="nf">Burgers</span><span class="p">(</span><span class="n">Burgers</span><span class="p">),</span>
<span class="p">}</span>
</code></pre></div></div>

<p>Parsing JSON<sup id="fnref:0" role="doc-noteref"><a href="#fn:0" class="footnote" rel="footnote">1</a></sup> with either <code class="language-plaintext highlighter-rouge">fruit_count</code> or <code class="language-plaintext highlighter-rouge">burger_count</code> automatically parses the corresponding enum variant:</p>

<div class="language-rust highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">let</span> <span class="n">json</span> <span class="o">=</span> <span class="nd">json!</span><span class="p">(</span>
    <span class="p">{</span>
        <span class="s">"fruit_count"</span><span class="p">:</span> <span class="mi">5</span>
    <span class="p">}</span>
<span class="p">)</span>
<span class="nf">.to_string</span><span class="p">();</span>

<span class="k">let</span> <span class="n">my_food</span><span class="p">:</span> <span class="n">MyFood</span> <span class="o">=</span> <span class="nn">serde_json</span><span class="p">::</span><span class="nf">from_str</span><span class="p">(</span><span class="o">&amp;</span><span class="n">json</span><span class="p">)</span><span class="nf">.unwrap</span><span class="p">();</span>
<span class="c1">// my_food is now the MyFood::Fruit variant</span>
</code></pre></div></div>

<p>While this feature is quite attractive and the happy path works great, it has an unfriendly downside: the error messages when parsing fails are often unclear, stating only that “data did not match any variant of your untagged enum”. Here are some examples of problematic code:</p>

<div class="language-rust highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">let</span> <span class="n">json</span> <span class="o">=</span> <span class="nd">json!</span><span class="p">(</span>
    <span class="p">{</span>
        <span class="s">"tacos"</span><span class="p">:</span> <span class="mi">5</span>
    <span class="p">}</span>
<span class="p">)</span>
<span class="nf">.to_string</span><span class="p">();</span>

<span class="k">let</span> <span class="n">my_food</span><span class="p">:</span> <span class="n">MyFood</span> <span class="o">=</span> <span class="nn">serde_json</span><span class="p">::</span><span class="nf">from_str</span><span class="p">(</span><span class="o">&amp;</span><span class="n">json</span><span class="p">)</span><span class="nf">.unwrap</span><span class="p">();</span>
<span class="c1">// Does not work because no enum variant matches "tacos"</span>
<span class="c1">// Error returned is: Error("data did not match any variant of untagged enum MyFood")</span>
</code></pre></div></div>

<div class="language-rust highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">let</span> <span class="n">json</span> <span class="o">=</span> <span class="nd">json!</span><span class="p">(</span>
    <span class="p">{</span>
        <span class="s">"fruit_count"</span><span class="p">:</span> <span class="mi">5</span><span class="p">,</span>
        <span class="s">"foo"</span><span class="p">:</span> <span class="mi">3</span>
    <span class="p">}</span>
<span class="p">)</span>
<span class="nf">.to_string</span><span class="p">();</span>

<span class="k">let</span> <span class="n">my_food</span><span class="p">:</span> <span class="n">MyFood</span> <span class="o">=</span> <span class="nn">serde_json</span><span class="p">::</span><span class="nf">from_str</span><span class="p">(</span><span class="o">&amp;</span><span class="n">json</span><span class="p">)</span><span class="nf">.unwrap</span><span class="p">();</span>
<span class="c1">// Does not work because Fruits has `deny_unknown_fields` and we have a field too much</span>
<span class="c1">// Error returned is: Error("data did not match any variant of untagged enum MyFood")</span>
</code></pre></div></div>

<div class="language-rust highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">let</span> <span class="n">json</span> <span class="o">=</span> <span class="nd">json!</span><span class="p">(</span>
    <span class="p">{</span>
        <span class="s">"fruit_count"</span><span class="p">:</span> <span class="s">"5"</span><span class="p">,</span>
    <span class="p">}</span>
<span class="p">)</span>
<span class="nf">.to_string</span><span class="p">();</span>

<span class="k">let</span> <span class="n">my_food</span><span class="p">:</span> <span class="n">MyFood</span> <span class="o">=</span> <span class="nn">serde_json</span><span class="p">::</span><span class="nf">from_str</span><span class="p">(</span><span class="o">&amp;</span><span class="n">json</span><span class="p">)</span><span class="nf">.unwrap</span><span class="p">();</span>
<span class="c1">// Does not work because "fruit_count" is supposed to be an int, but found string</span>
<span class="c1">// Error returned is: Error("data did not match any variant of untagged enum MyFood")</span>
</code></pre></div></div>

<p>These errors are not very informative for developers, and certainly can’t be forwarded to end-users. To mitigate this, we’ve adopted a workaround pattern that involves a custom <code class="language-plaintext highlighter-rouge">MaybeValid</code> untagged enum<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">2</a></sup>:</p>

<div class="language-rust highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nd">#[derive(Debug,</span> <span class="nd">Clone,</span> <span class="nd">Serialize,</span> <span class="nd">Deserialize)]</span>
<span class="nd">#[serde(untagged)]</span>
<span class="k">pub</span> <span class="k">enum</span> <span class="n">MaybeValid</span><span class="o">&lt;</span><span class="n">U</span><span class="o">&gt;</span> <span class="p">{</span>
    <span class="nf">Valid</span><span class="p">(</span><span class="n">U</span><span class="p">),</span>
    <span class="nf">Invalid</span><span class="p">(</span><span class="nn">serde_json</span><span class="p">::</span><span class="n">Value</span><span class="p">),</span>
<span class="p">}</span>

<span class="c1">// First parse it to something that's either valid or not valid.</span>
<span class="c1">// This is simplified for the sake of brevity - in production code you wouldn't unwrap here</span>
<span class="k">let</span> <span class="n">maybe_valid</span><span class="p">:</span> <span class="n">MaybeValid</span><span class="o">&lt;</span><span class="n">MyFood</span><span class="o">&gt;</span> <span class="o">=</span> <span class="nn">serde_json</span><span class="p">::</span><span class="nf">from_str</span><span class="p">(</span><span class="n">my_string</span><span class="p">)</span><span class="nf">.unwrap</span><span class="p">();</span>

<span class="k">match</span> <span class="n">maybe_valid</span> <span class="p">{</span>
    <span class="nn">MaybeValid</span><span class="p">::</span><span class="nf">Valid</span><span class="p">(</span><span class="n">valid</span><span class="p">)</span> <span class="k">=&gt;</span> <span class="p">{</span>
        <span class="c1">// Yay! We can move on.</span>
    <span class="p">}</span>
    <span class="nn">MaybeValid</span><span class="p">::</span><span class="nf">Invalid</span><span class="p">(</span><span class="n">json_value</span><span class="p">)</span> <span class="k">=&gt;</span> <span class="p">{</span>
        <span class="c1">// Parse the string to JSON ourselves, and then attempt to build a sensible error message</span>
    <span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>

<p>This pattern, while not ideal, gives us the ability to construct somewhat useful error messages. We have thousands of lines doing this parsing and error construction.</p>

<p>There’s a better way though. We could aggregate errors for all enum variants tried during parsing, resulting in more informative errors:</p>

<div class="language-rust highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">let</span> <span class="n">json</span> <span class="o">=</span> <span class="nd">json!</span><span class="p">(</span>
    <span class="p">{</span>
        <span class="s">"fruit_count"</span><span class="p">:</span> <span class="s">"5"</span><span class="p">,</span>
    <span class="p">}</span>
<span class="p">)</span>
<span class="nf">.to_string</span><span class="p">();</span>

<span class="k">let</span> <span class="n">my_food</span><span class="p">:</span> <span class="n">MyFood</span> <span class="o">=</span> <span class="nn">serde_json</span><span class="p">::</span><span class="nf">from_str</span><span class="p">(</span><span class="o">&amp;</span><span class="n">json</span><span class="p">)</span><span class="nf">.unwrap</span><span class="p">();</span>
<span class="c1">// Does not work because "fruit_count" is supposed to be an int, but found string.</span>
<span class="c1">// It could return something like: </span>
<span class="c1">// Error("data did not match any variant of untagged enum MyFood.</span>
<span class="c1">// Did not match Fruit because `fruit_count` was string but expected integer.</span>
<span class="c1">// Did not match burger because required property `burger_count` was missing.")</span>
</code></pre></div></div>

<p>This isn’t a new problem: there are several GitHub issues about it, ranging from <a href="https://github.com/serde-rs/serde/issues/773">2019</a> to <a href="https://github.com/serde-rs/serde/issues/2157">2022</a>. There’s even <a href="https://github.com/serde-rs/serde/pull/1544">a Pull Request from 2019</a> by one person who’s been maintaining his own up-to-date fork of serde that aggregates untagged enum errors. The last conflicts with the main serde branch were fixed two weeks ago. The PR has over 50 like reactions.</p>

<p>This PR is an excellent contribution, and I want to take this opportunity to bring attention to it. Merging it could greatly enhance the experience of using untagged enums in Serde.</p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:0" role="doc-endnote">
      <p>Or any other supported serde format such as yaml, postcard, bincode or others <a href="#fnref:0" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:1" role="doc-endnote">
      <p>It’s untagged enums all the way down. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>Gustav Wengel</name></author><summary type="html"><![CDATA[Update 01/06/2023: Unfortunately the approach at the bottom of this article has been rejected, and how to get good error messages with untagged enums is currently in stasis]]></summary></entry><entry><title type="html">Talk: Environmentally sustainable Serverless</title><link href="https://www.gustavwengel.dk/serverless-berlin-environmentally-sustainable-serverless" rel="alternate" type="text/html" title="Talk: Environmentally sustainable Serverless" /><published>2023-05-31T00:00:00+00:00</published><updated>2023-05-31T00:00:00+00:00</updated><id>https://www.gustavwengel.dk/serverless-berlin</id><content type="html" xml:base="https://www.gustavwengel.dk/serverless-berlin-environmentally-sustainable-serverless"><![CDATA[<p>Following some of my talks on <a href="/how-we-keep-the-climatiq-api-fast">serverless performance</a> and <a href="/greener-cloud-computing">greener cloud computing</a>, I was invited to speak at the Serverless Berlin meetup. My talk was titled “Sustainable Serverless: Serving 2million+ requests with a lean team”.</p>

<p>You can view it below, or <a href="https://youtu.be/P4qJElbYeEc">here on YouTube</a>:</p>

<div class="img-div">
<iframe width="560" height="315" src="https://www.youtube.com/embed/P4qJElbYeEc" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen=""></iframe>
</div>]]></content><author><name>Gustav Wengel</name></author><summary type="html"><![CDATA[Following some of my talks on serverless performance and greener cloud computing, I was invited to speak at the Serverless Berlin meetup. My talk was titled “Sustainable Serverless: Serving 2million+ requests with a lean team”.]]></summary></entry><entry><title type="html">How To Make Your Cloud Computing Greener</title><link href="https://www.gustavwengel.dk/greener-cloud-computing" rel="alternate" type="text/html" title="How To Make Your Cloud Computing Greener" /><published>2023-04-23T00:00:00+00:00</published><updated>2023-04-23T00:00:00+00:00</updated><id>https://www.gustavwengel.dk/greener-cloud</id><content type="html" xml:base="https://www.gustavwengel.dk/greener-cloud-computing"><![CDATA[<p>The Green Software Foundation (GSF) is a non-profit organization under the Linux Foundation that aims to reduce the climate impact of running software. With multiple projects under its belt, GSF is driving the conversation about sustainable software practices. One of the things I’ve heard most about is Carbon Aware Computing, which entails developing software that understands its carbon emissions and adjusts its behavior according to the electricity grid it’s currently running on.</p>

<h2 id="carbon-aware-computing-a-quick-overview">Carbon Aware Computing: A Quick Overview</h2>

<p>The overarching principle of carbon-aware computing is that you should do more when the electricity is greener, and less when it is not. This is most often accomplished in three ways:</p>
<ul>
  <li><strong>Location shifting:</strong> You switch your servers around to where the grid is greener.</li>
  <li><strong>Time-shifting:</strong> You run your time-insensitive jobs, such as machine learning model training or batch processing, when the percentage of renewable electricity in the grid is the highest.</li>
  <li><strong>Demand-shaping:</strong> Your software performs fewer tasks when it can tell that the electricity powering it is dirtier.</li>
</ul>
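<p>As a rough illustration of demand-shaping, a service might map the current grid intensity to a service level. The function and thresholds below are invented for this sketch, not taken from any real API:</p>

<div class="language-typescript highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// Illustrative thresholds in gCO2e/kWh; real values depend on your grid.
function serviceLevel(gridIntensity: number): string {
  if (gridIntensity > 400) {
    return "essential-only"; // dirty grid: defer all optional work
  }
  if (gridIntensity > 200) {
    return "reduced"; // moderately dirty: skip expensive extras
  }
  return "full"; // green grid: run everything
}
</code></pre></div></div>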

<p>If your infrastructure provider makes these things easy, as with <a href="https://blog.cloudflare.com/announcing-green-compute/">Cloudflare’s Green Compute</a>, you should absolutely take advantage of it.<br />
However, if you don’t have such tools available, you can still achieve a fair chunk of these benefits by going through your infrastructure once, without needing to make your software carbon-aware. Let’s take a look.</p>

<h3 id="location-shifting">Location-shifting</h3>

<p>While the composition of power grids does vary quite a bit between the worst and the best (Wyoming and Sweden have approximately a 10x difference in CO2 per kWh), the day-to-day or month-to-month differences within a single grid are much smaller than that.</p>

<div class="img-div extra-bottom">
<img src="https://www.gustavwengel.dk/assets/img/carbon-aware-computing/grid-per-month.png" />
The variance in CO2 per month in Denmark, one of the grids in the world with the highest percentage of renewables. Courtesy of <a href="https://electricitymap.org" target="_blank">electricitymap.org</a>
</div>

<div class="img-div">
<img src="https://www.gustavwengel.dk/assets/img/carbon-aware-computing/grid-per-day.png" />
And per day. Also courtesy of <a href="https://electricitymap.org" target="_blank">electricitymap.org</a>
</div>

<p>That means picking a region with a renewable-heavy grid gets you a long way without needing to shift your compute jobs around. Some cloud providers such as GCP <a href="https://cloud.withgoogle.com/region-picker/">even have tools</a> to help you find the best regions based on carbon emissions.</p>

<h3 id="time-shifting">Time-shifting</h3>

<p>Time-shifting is an interesting concept. It makes sense because the grid production of renewables relative to the general demand for electricity <a href="https://en.wikipedia.org/wiki/Peak_demand">varies from hour to hour</a>. Many grids, however, have a daily pattern that takes one of the following shapes:</p>

<ul>
  <li>The daytime has more green electricity available, apart from the mornings and afternoons, which are (often) when people are doing their cooking. Grids that are greener during the day often have a high percentage of solar in the mix.</li>
  <li>The nighttime has more green electricity available. This is often the case for grids where wind is a high percentage of energy in the mix.</li>
</ul>

<p>You can look at <a href="https://electricitymap.org">electricitymap.org</a> to spot the trends that are relevant to your grid. If you do, you should be able to capture most of the advantage of time-shifting simply by scheduling your jobs for the times when your grid’s composition is more favourable, rather than scheduling them dynamically.</p>
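A static version of this can be as simple as checking the clock before kicking off a batch job. The sketch below assumes a wind-heavy grid that tends to be greener at night; the window boundaries are invented for illustration and should be replaced with the pattern you actually see for your grid:

```python
from datetime import datetime, time

# Hypothetical low-carbon window for a wind-heavy grid that is greener
# at night - adjust to the pattern you see on electricitymap.org.
WINDOW_START = time(22, 0)  # 22:00
WINDOW_END = time(6, 0)     # 06:00 the next morning

def in_low_carbon_window(now: datetime) -> bool:
    """True if `now` falls inside the (possibly midnight-spanning) window."""
    t = now.time()
    if WINDOW_START <= WINDOW_END:
        return WINDOW_START <= t < WINDOW_END
    # Window wraps past midnight: match late evening or early morning.
    return t >= WINDOW_START or t < WINDOW_END

print(in_low_carbon_window(datetime(2024, 1, 1, 23, 30)))  # True
print(in_low_carbon_window(datetime(2024, 1, 1, 12, 0)))   # False
```

A cron schedule restricted to those hours achieves the same thing with no code at all; the point is that a fixed window captures most of the benefit a dynamic scheduler would.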

<h1 id="the-importance-of-hardware-utilization">The Importance of Hardware Utilization</h1>

<p>Some of you might be looking at the grid pictures above, (correctly) noticing that there’s a 2x difference between some of my graphs, and wondering what I’m smoking. Reducing your carbon emissions by 50% seems like a huge win.</p>

<p>That wouldn’t be an accurate number though, because the majority of actual cloud emissions don’t come from electricity usage - they come from embodied emissions: the emissions associated with the production and disposal of the servers.<br />
For consumer laptops, embodied emissions make up <a href="https://circularcomputing.com/news/carbon-footprint-laptop/">75-85%</a> of total emissions, and the general pattern is the same for servers.</p>

<div class="img-div">
<img src="https://www.gustavwengel.dk/assets/img/carbon-aware-computing/climatiq-instance.png" />
Renting out a VM on Azure. The embodied estimates make up around 65% of the total carbon estimate. <br />
Calculations courtesy of <a href="https://climatiq.io" target="_blank">climatiq.io</a>
</div>

<p>Carbon-aware computing only focuses on the electricity use of your servers, and not on their production, lifetime and disposal, where the majority of the emissions come from.</p>

<p>This means that in most cases, optimizing how well we use the physical hardware before it has to be replaced is a better use of our time than shifting load around.</p>

<p>Cloud data centers typically replace servers every 4-5 years, making it essential to use them as efficiently as possible. Reducing the number of physical servers and utilizing rented servers at as close to 100% capacity as possible can significantly lower the environmental impact. Plus, it often leads to better use of resources, resulting in cost savings.</p>

<div class="img-div">
<img src="https://www.gustavwengel.dk/assets/img/carbon-aware-computing/instance-half-utilization.png" />
</div>

<div class="img-div">
<img src="https://www.gustavwengel.dk/assets/img/carbon-aware-computing/instance-full-utilization.png" />
CPU usage in particular also doesn't scale linearly with electricity, as even a CPU at 0% utilization draws power.
This means that the power difference between a CPU running at half versus full utilization is only around 30%. <br />
Calculations courtesy of <a href="https://climatiq.io" target="_blank">climatiq.io</a>
</div>
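That ~30% figure falls out of a simple linear power model in which idle draw is a large fraction of peak draw. Here's a sketch assuming idle power is half of peak - the real ratio varies by server, so treat the wattages as placeholders:

```python
# Linear power model: draw scales between idle and peak with CPU utilization.
# Assumes idle draw is 50% of peak - the real ratio depends on the server.
PEAK_WATTS = 200.0
IDLE_WATTS = 100.0

def power_draw(utilization: float) -> float:
    """Watts drawn at a given CPU utilization (0.0 to 1.0)."""
    return IDLE_WATTS + (PEAK_WATTS - IDLE_WATTS) * utilization

half = power_draw(0.5)  # 150 W
full = power_draw(1.0)  # 200 W
print(f"full uses {(full - half) / half:.0%} more power than half")  # 33%
```

So doubling the useful work done on one machine costs only about a third more power, which is exactly why consolidating load onto fewer, busier servers pays off.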

<hr />

<p>So what is the right thing to focus on? If you’re not actively building infrastructure for others to run jobs on, here’s where I think you’ll get the most bang for your buck when it comes to optimizing your computing carbon emissions:</p>

<ol>
  <li>Don’t spend excessive time on building software that reacts dynamically to the electricity grid - the benefits are marginal.</li>
  <li>Go through your software and choose regions with a greener grid. Cloud providers often give you this information - otherwise you can try using <a href="https://climatiq.io">Climatiq’s</a> cloud computing functionality to get a feel for the different regions. If you’re not using one of the big three, <a href="https://electricitymap.org">electricitymap.org</a> is also a good bet for grid emissions, provided you know the location of your datacenter.</li>
  <li>Research if your cloud provider offers ways to switch locations and shift work around to lead to fewer emissions - and use them if they do.</li>
  <li>Consider running time-insensitive jobs on <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html">spot instances</a> for both carbon and cost savings.</li>
  <li>If you’re renting virtual machines, try to utilize them as close to 100% as possible, both in regard to CPU usage and the amount of memory allocated versus what you actually use. Serverless or managed platforms might allow your cloud provider to utilize the underlying machines more efficiently than you could yourself.</li>
</ol>]]></content><author><name>Gustav Wengel</name></author><summary type="html"><![CDATA[The Green Software Foundation (GSF) is a non-profit organization under the Linux Foundation that aims to reduce the climate impact of running software. With multiple projects under its belt, GSF is driving the conversation about sustainable software practices. One of the things I’ve heard most about is Carbon Aware Computing, which entails developing software that understands its carbon emissions and adjusts its behavior according to the electricity grid it’s currently running on.]]></summary></entry><entry><title type="html">Crafting the Ideal Specifications for Software Developers</title><link href="https://www.gustavwengel.dk/software-specifications" rel="alternate" type="text/html" title="Crafting the Ideal Specifications for Software Developers" /><published>2023-04-14T00:00:00+00:00</published><updated>2023-04-14T00:00:00+00:00</updated><id>https://www.gustavwengel.dk/software-specifications</id><content type="html" xml:base="https://www.gustavwengel.dk/software-specifications"><![CDATA[<p>At Climatiq, we practice a <a href="https://www.productboard.com/glossary/dual-track-agile/">dual-track agile</a> approach, dedicating time to discover and conceptualize parts of our product before implementing them in code. A colleague asked me about the best way to write a specification for the engineering team to effectively implement a conceptual design. The title of the post is a bit of trickery, because the ideal specification in this case is probably “no specification”<sup id="fnref:0" role="doc-noteref"><a href="#fn:0" class="footnote" rel="footnote">1</a></sup>. Creating a specification for a large piece of software isn’t worth the time, and it’s probably not even possible. Here’s why:</p>

<h2 id="code-is-a-specification">Code <strong>is</strong> a Specification</h2>

<p>Code <strong>is</strong> a specification for what the machine should do. To write a specification thorough enough for a developer to implement it exactly, you’re essentially writing the entire program. Historically, programming was viewed as a manufacturing activity - take in some specifications and spit out code. However, coding is actually <a href="https://wiki.c2.com/?TheSourceCodeIsTheDesign">primarily a design activity</a>.</p>

<p>This means you can’t craft a specification and just toss it over the fence. What you can do is give the developers the best possible conditions for performing this design activity.<br />
You do that by ensuring they understand the problem. That leads me to my next point.</p>

<h2 id="focus-on-the-problem-not-the-solution">Focus on the Problem, Not the Solution</h2>

<p>The most crucial aspect of handing over a project to developers is providing them with the necessary context, such as how the project fits into existing products, who the users are, and what they’re trying to accomplish. Communicate this information using user stories or another method that clearly conveys the user’s needs and limitations.</p>

<p>Instead of prescribing exactly what something should do, you should first explain why it is necessary. By doing so, developers can act independently, removing many rounds of feedback loops and making the questions they ask more well-informed.</p>

<h2 id="prototype">Prototype</h2>
<p>Though the solution isn’t the most important part of the handover, you should share any thoughts you have on it. One great way to do this is by sharing a prototype that demonstrates your proposed solution. Prototypes can range in fidelity from sketches and bullet points to interactive user interface prototypes or Excel-based prototypes demonstrating the functionality.</p>

<p>Prototyping also helps you refine your thinking about the solution. If you struggle to express your logic, it could be because you haven’t considered all the edge cases.</p>

<p>Remember not to go overboard with the prototype. If you’re working with higher fidelity materials, you might not need to complete more than 50-60% of it. Use the prototype as a tool for discussion and further collaboration, not necessarily as the final blueprint for developers to replicate.</p>

<h2 id="identify-rabbit-holes-and-risks">Identify Rabbit Holes and Risks</h2>

<p>Before you hand over a project, think about any risks and “rabbit holes.” Rabbit holes refer to situations where there are technical unknowns or unsolved design problems that can significantly extend the project’s completion time. By being upfront about these uncertainties, you can initiate a discussion about how to handle them early on.</p>

<p>Be prepared for developers to have their own assessments of risks and rabbit holes, based on the underlying program architecture or challenges they haven’t faced before.</p>

<h2 id="clarify-whats-not-being-done">Clarify What’s Not Being Done</h2>

<p>Outline what you are not tackling in the project proposal to set boundaries and prevent developers from spending time on unrelated tasks. This clarity will help developers understand where their focus should be and where rough edges are acceptable.</p>

<h2 id="create-an-artifact">Create an Artifact</h2>
<p>A good handover should result in one or more artifacts that capture relevant information at a high level. A prototype you’ve developed or collaborated on is one such artifact.</p>

<p>Whether or not you have a prototype, you should also create a document explaining the high-level problem and solution proposal. You can create this document yourself or in collaboration with an engineer while discussing the problem. Producing a document during the conversation helps clear up any potential misunderstandings, as you’ll need to agree on what’s written on the page.</p>

<p>The purpose of these artifacts is to serve as a reference for both you and the developers, allowing you to make adjustments and corrections when necessary. A shared, collaborative document is ideal, as it enables developers to ask questions and you can edit the document to provide answers.</p>

<hr />

<p>When working with software developers, the key to success isn’t writing detailed specifications. Instead, it’s about empowering the engineering team to understand the problem and the proposed solution well enough that they can work independently and only need to clarify major questions with you. The more developers can work independently, the less time you need to spend in back-and-forth communication, and the higher the <a href="/rules-of-thumb-for-project-management-as-ic">effective work absorption rate of the project will be</a>. Communication is important, but focusing on the problem is much more critical than dwelling on the exact solution.</p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:0" role="doc-endnote">
      <p>This doesn’t apply for cases when e.g. a senior developer is breaking down tasks to help a junior developer, only for handing over larger projects. <a href="#fnref:0" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>Gustav Wengel</name></author><summary type="html"><![CDATA[At Climatiq, we practice a dual-track agile approach, dedicating time to discover and conceptualize parts of our product before implementing them in code. A colleague asked me about the best way to write a specification for the engineering team to effectively implement a conceptual design. The title of the post is a bit of trickery, because the ideal specification in this case is probably “no specification”1. Creating a specification for a large piece of software isn’t worth the time, and it’s probably not even possible. Here’s why: This doesn’t apply for cases when e.g. a senior developer is breaking down tasks to help a junior developer, only for handing over larger projects. &#8617;]]></summary></entry></feed>