<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
<channel>
  <title>Seanland</title>
  <link>https://seanland.ca</link>
  <description>A technologist at heart who loves DIY and self sufficiency. Sharing stories and experiences.</description>
  <language>en-us</language>
  <managingEditor>me@seanland.ca (Sean Clarke)</managingEditor>
  <webMaster>me@seanland.ca (Sean Clarke)</webMaster>
  <atom:link href="https://seanland.ca/rss.xml" rel="self" type="application/rss+xml" />
    <lastBuildDate>Sat, 28 Feb 2026 04:46:00 GMT</lastBuildDate>

  <item>
      <title>Install and Setup Bazzite on the GPD Win 4</title>
      <link>https://seanland.ca/posts/2026-02-27-installing-bazzite-on-win4</link>
      <description>As Linux is becoming even more widely supported, more guides should populate the internet!  Here is a quick runthrough - with resources - on how to set up Bazzite on the 6800U version of the Win 4.</description>
      <pubDate>Fri, 27 Feb 2026 00:00:00 GMT</pubDate>
      <guid>https://seanland.ca/posts/2026-02-27-installing-bazzite-on-win4</guid>
      <enclosure url="https://seanland.ca/img/2026/bazzite-win4.png" type="image/png" />
      <category>gaming</category>
      <content:encoded><![CDATA[<h1>Install and Setup Bazzite on the GPD Win 4</h1>
<h3><em>This isn't meant to be a technical how-to, more so, a suggested path from an old Win 4 to a rejuvenated one.  Plus, there isn't enough content about Bazzite on the GPD Win 4.</em></h3>
<p>This device is entering its fourth year of existence.  I ordered it December 25, 2022, apparently an expensive Christmas gift to myself.  I have had great experiences using it on trips, hooking it up to the GPD G1 for desktop gaming and just sitting on the couch at home to <a href="https://seanland.ca/gaming">play some of the games I like</a>.  As time has gone on, I have used it less and less.  I have focused my time elsewhere, or simply brought my laptop with me if I was looking to game.  Now, I am back to a state where I don't want to lug around my "desktop replacement" and would love to simply carry a tablet and a small gaming device.  Re-enter the Win 4. </p>
<p>It needed a fresh start.  AI is getting baked into everything, and traditional operating system vendors keep adding bloat.  <a href="https://bazzite.gg/">Bazzite</a> has become my operating system of choice.  I have been using it as my daily driver on my laptop and former desktop for around a year and a half.  It is based on <a href="https://fedoraproject.org/atomic-desktops/silverblue/">Fedora Silverblue</a>.  It is the fresh start my old Win 4 needs.   </p>
<h2>Select an Image</h2>
<p>The images are built in a few different combinations.  You can see them all <a href="https://bazzite.gg/#image-picker">here</a>.</p>
<p>Select <strong>GPD</strong> for the hardware and <strong>Gnome</strong> for the desktop environment.  Gnome is my preferred desktop environment; KDE is also available.  </p>
<p>Next, select the <strong>Legacy ISO</strong>.  It will save time during the install process.</p>
<p>Write the image to a bootable device.  I use <a href="https://www.amazon.ca/Iodd-Iodd2531-Black-Virtual-Enclosures/dp/B00TDJ4BJU">this device</a>.  If you format machines all the time, I highly recommend you get something like this. </p>
<h2>Install the Operating System</h2>
<p>Hit <strong>Fn + F7</strong> to get into the boot menu</p>
<p><strong><em>NOTE: In my experience, accessing these menus can sometimes be a pain on the GPD.  If you don't wait long enough between boot attempts the device can hang.  So, if you are having those issues, hold down the power button for a hard power off and wait 15 seconds before trying again.</em></strong></p>
<p>Start the installation process...</p>
<p>Don't forget to setup the following: </p>
<ul>
<li>Timezone</li>
<li>Network (I actually usually do it after install)</li>
<li>User</li>
<li><strong>Select the appropriate drive</strong><ul>
<li>This is always ugly, select the right drive! </li>
</ul>
</li>
</ul>
<p>If you haven't formatted your device in the past, you will have to take the <strong>Custom</strong> installation route, where you delete the partitions on the old drive and create new ones.  (Or you can use more advanced options if you know how.)</p>
<p>Wait... wait... Wait some more...</p>
<p>Go check on your device, make sure it is plugged in. </p>
<p>Get back to waiting... </p>
<p>When it is ready, hit the <strong>Reboot</strong> button. </p>
<p>Go through the menus and set up the device.</p>
<h2>Let's Start Setting It Up!</h2>
<p>These are the tweaks and installations I am making on the device.</p>
<h3>Allow Downloads While Playing</h3>
<p>I don't play many online games, so I enable this on all my devices.  I do like staying up to date with the games!  In Steam, go to the settings, then <strong>Downloads</strong>.  Toggle <strong>Allow Downloads During Gameplay</strong>.</p>
<h3>Enable Battery Percentage</h3>
<p>In Steam options, go to <strong>Power</strong> then toggle <strong>Battery Percentage</strong></p>
<h3>Switch to Desktop</h3>
<p>In the Deck mode, go to <strong>Power</strong> then hit <strong>Switch to Desktop</strong>.  This part is substantially easier with an external keyboard, mouse and monitor (see Accessories below).</p>
<h3>Run Bazzite Portal Setup</h3>
<p>Go to All Apps; under <strong>Utilities</strong> you will find the <strong>Bazzite Portal Setup</strong>.  It brings up a local site in Firefox with a number of applications in it.  I suggest going through all the steps to install the additional pieces of software. </p>
<p>This is how I installed the following applications: </p>
<ul>
<li>Steam Deck Plugin Loader and any Decky addons</li>
<li>EmuDeck</li>
<li>Jellyfin</li>
</ul>
<h3>Go to Bazaar and Add More Apps!</h3>
<p>I simply replaced Firefox with Brave and added Feishin for music playback.</p>
<h3>Install Certificate (this is for my self hosting)</h3>
<p>The services I <a href="https://seanland.ca/self-hosting">self host</a> are behind https.  Adding my self signed certificate authority to the system trust store makes those certificates trusted on the device.</p>
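<p>As a sketch of what that step looks like on a Fedora-based system like Bazzite (the file name <code>my-root-ca.crt</code> is a placeholder for your own CA certificate):</p>
<pre><code class="language-bash"># Copy the self signed CA into the system trust anchors
sudo cp my-root-ca.crt /etc/pki/ca-trust/source/anchors/
# Rebuild the consolidated trust store
sudo update-ca-trust
</code></pre>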
<h3>Setup VPN</h3>
<p>All my devices connect back to my centralized VPN network to access certain resources like media and self hosted applications.  This will be done by importing a wireguard configuration file.  </p>
<p>This is done under <strong>Settings &gt; Network</strong> of the operating system.  Hit <strong>+</strong> on the VPN section, then <strong>Import from file...</strong>.</p>
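<p>If you prefer the terminal, the same import can be done with NetworkManager's CLI (the file path is a placeholder; the profile name defaults to the file name):</p>
<pre><code class="language-bash"># Import a WireGuard profile into NetworkManager
nmcli connection import type wireguard file /path/to/wg0.conf
# Bring the tunnel up
nmcli connection up wg0
</code></pre>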
<h2>Accessories</h2>
<ul>
<li><strong>ShargeDisk</strong> - portable SSD, for when the microSD is not enough storage.</li>
<li><strong>Battery Bank</strong> - I carry around a 10,000 mAh bank when I travel, which doubles the battery life. </li>
<li><strong>Bluetooth Earbuds/Headphones</strong> - This is more for the media from my perspective.  I like to have a pair of the Lenovo GM2s with me.  </li>
<li><strong>USB-C Hub</strong> - This is how you add the keyboard, mouse and monitor.</li>
<li><strong>USB-C to HDMI Cable</strong> - now you have an HTPC.</li>
<li><strong>Carrying Case</strong> - I am assuming if you have had the device this long you already have one!</li>
</ul>
<h2>Conclusion</h2>
<p>Now I have a Linux based handheld that can stream music and video, play modern day games on Steam (and GOG via Lutris) and emulate games as recent as the PlayStation 3.  Hook it up to a TV and host a Jackbox night, or simply sit in a corner listening to music or playing your favourite classic.  </p>
<h2>BONUS: GPD Win 4 Mini Review</h2>
<p>This device is great.  This is my third GPD product and I have never experienced any major hardware issues.  The "Win" line is most definitely an enthusiast line of devices that packs a pretty hard punch.  Would I buy it again? For sure.  Would I buy the 5?  Maybe, if I was still in the market.  Would I buy another GPD product? Yes, I would.  I would take this over the Steam Deck since, well, it is more than a portable gaming device.  It is a computer.  You can do what you want with it.  When it is no longer used, maybe it becomes a <a href="https://seanland.ca/self-hosting">member of my k8s cluster</a> with a built in backup.   </p>
  </item>
  <item>
      <title>Fail2ban, Building Beyond the First Rule</title>
      <link>https://seanland.ca/posts/2025-11-01-fail2ban-building-beyond-the-first-rule</link>
      <description>To build effective fail2ban rules, you should understand what type of traffic is coming into your server.  We take a look at logs and build out rules based on the traffic we discover.</description>
      <pubDate>Thu, 13 Nov 2025 00:00:00 GMT</pubDate>
      <guid>https://seanland.ca/posts/2025-11-01-fail2ban-building-beyond-the-first-rule</guid>
      <enclosure url="https://seanland.ca/img/2025/fail2ban-part-2.png" type="image/png" />
      <category>security</category>
      <category>self-hosting</category>
      <content:encoded><![CDATA[<h1>Fail2ban, Building Beyond the First Rule</h1>
<p>In February, I wrote a post about <a href="https://seanland.ca/posts/2025-02-06-the-power-of-one-fail2ban-rule">the power of one fail2ban rule</a>.  That post has become one of, if not the, most popular posts I have ever written.  I figured it was well deserving of a sequel!</p>
<p>Let's provide an update and look at three additional rules that could be implemented.  </p>
<h2>An Update on the "One Rule" 10 months later</h2>
<p>In short, the goal of that one rule was to ban an IP when someone tried to log in with an invalid user.  It watched logins via ssh and would ban an offending IP for one month.</p>
<p>The numbers as they look today:</p>
<p>There are currently 750 banned IP addresses on the list.</p>
<p><strong>The IP addresses were fed into Claude Sonnet 4.0 for analysis - these are the generated findings</strong></p>
<pre><code class="language-markdown">🇨🇳 CHINA - 392 IPs (52.3%)
Primary Ranges: 1.x, 14.x, 27.x, 36.x, 39.x, 42.x, 43.x, 49.x, 58.x, 59.x, 61.x, 101.x, 106.x, 110.x, 112-125.x, 171.x, 175.x, 180-183.x, 210-223.x

Notable concentrations:

103.x range: 67 IPs
101.x range: 35 IPs
43.x range: 23 IPs
27.x range: 21 IPs
14.x range: 18 IPs

🇺🇸 UNITED STATES - 140 IPs (18.7%)
Primary Ranges: 104.x, 107.x, 128.x, 129.x, 134.x, 136-139.x, 142-143.x, 146-147.x, 149.x, 152.x, 157.x, 159-167.x, 172.x, 198-199.x, 204-209.x

Major cloud providers:

DigitalOcean: ~34 IPs
AWS/Google Cloud: ~24 IPs
Various hosting providers: ~82 IPs

🌍 AFRICA - 47 IPs (6.3%)
Countries breakdown:

South Africa: 23 IPs (102.x range)
Nigeria: 14 IPs (197.x range)
Egypt: 6 IPs (196.x range)
Other African nations: 4 IPs

🇮🇳 INDIA - 41 IPs (5.5%)
Primary Ranges: 103.x (mixed with other Asian countries)

Concentrated in 103.x addressing space
Mix of ISPs and mobile networks

🇧🇷 BRAZIL - 23 IPs (3.1%)
Primary Ranges: 177.x, 179.x, 181.x, 186-191.x, 200-201.x

Distributed across major Brazilian ISPs
Mix of residential and commercial ranges

🇪🇺 EUROPE - 67 IPs (8.9%)
Country breakdown:

Germany: 17 IPs
Netherlands: 14 IPs
France: 11 IPs
United Kingdom: 9 IPs
Russia: 8 IPs
Other EU countries: 8 IPs

🌏 OTHER ASIA-PACIFIC - 40 IPs (5.3%)
Countries:

Vietnam: 12 IPs
Thailand: 9 IPs
Indonesia: 8 IPs
Philippines: 6 IPs
South Korea: 3 IPs
Japan: 2 IPs
</code></pre>
<p>It is interesting to see the high level details around the IP addresses.  I am curious how they would differ if my IP address was in a different block; maybe the numbers wouldn't be so high if the server was hosted with a different provider or in a different region.  </p>
<h2>Three More Rules!</h2>
<p>Here are three more rules to help limit traffic, improve security and hopefully provide hosters with better usage data.  Thinking logically about who you are trying to attract, what you are hosting and the tech stack you are using is helpful when defining rules. </p>
<h3>Ban Random Scans</h3>
<p>I was seeing a flurry of logs like these: </p>
<pre><code class="language-bash">xxx.xxx.xxx.xxx - - [23/Oct/2025:11:09:09 +0000] &quot;GET /hk.php HTTP/1.1&quot; 301 178 &quot;-&quot; &quot;-&quot;
xxx.xxx.xxx.xxx - - [23/Oct/2025:11:09:09 +0000] &quot;GET /hk.php HTTP/1.1&quot; 404 196 &quot;-&quot; &quot;-&quot;
xxx.xxx.xxx.xxx - - [23/Oct/2025:11:09:09 +0000] &quot;GET /hook.php HTTP/1.1&quot; 301 178 &quot;-&quot; &quot;-&quot;
xxx.xxx.xxx.xxx - - [23/Oct/2025:11:09:09 +0000] &quot;GET /hook.php HTTP/1.1&quot; 404 196 &quot;-&quot; &quot;-&quot;
xxx.xxx.xxx.xxx - - [23/Oct/2025:11:09:09 +0000] &quot;GET /atomlib.php HTTP/1.1&quot; 301 178 &quot;-&quot; &quot;-&quot;
xxx.xxx.xxx.xxx - - [23/Oct/2025:11:09:09 +0000] &quot;GET /atomlib.php HTTP/1.1&quot; 404 196 &quot;-&quot; &quot;-&quot;
xxx.xxx.xxx.xxx - - [23/Oct/2025:11:09:09 +0000] &quot;GET /geck.php HTTP/1.1&quot; 301 178 &quot;-&quot; &quot;-&quot;
xxx.xxx.xxx.xxx - - [23/Oct/2025:11:09:10 +0000] &quot;GET /geck.php HTTP/1.1&quot; 404 196 &quot;-&quot; &quot;-&quot;
xxx.xxx.xxx.xxx - - [23/Oct/2025:11:09:10 +0000] &quot;GET /file88.php HTTP/1.1&quot; 301 178 &quot;-&quot; &quot;-&quot;
xxx.xxx.xxx.xxx - - [23/Oct/2025:11:09:10 +0000] &quot;GET /file88.php HTTP/1.1&quot; 404 196 &quot;-&quot; &quot;-&quot;
xxx.xxx.xxx.xxx - - [23/Oct/2025:11:09:10 +0000] &quot;GET /gold.php HTTP/1.1&quot; 301 178 &quot;-&quot; &quot;-&quot;
xxx.xxx.xxx.xxx - - [23/Oct/2025:11:09:10 +0000] &quot;GET /gold.php HTTP/1.1&quot; 404 196 &quot;-&quot; &quot;-&quot;
xxx.xxx.xxx.xxx - - [23/Oct/2025:11:09:10 +0000] &quot;GET /moo.php HTTP/1.1&quot; 301 178 &quot;-&quot; &quot;-&quot;
xxx.xxx.xxx.xxx - - [23/Oct/2025:11:09:10 +0000] &quot;GET /moo.php HTTP/1.1&quot; 404 196 &quot;-&quot; &quot;-&quot;
xxx.xxx.xxx.xxx - - [23/Oct/2025:11:09:10 +0000] &quot;GET /file2.php HTTP/1.1&quot; 301 178 &quot;-&quot; &quot;
</code></pre>
<p>xxx.xxx.xxx.xxx is the redacted IP address.</p>
<p>This is an easy rule for me to make a ban around.  I run a static HTML site.  I do not have any PHP.  If someone is looking for PHP, that is an instant red flag for me.  So, let's make a rule banning any GET request ending in ".php" that returns a 301 or 404. </p>
<p>Create the filter.</p>
<pre><code class="language-bash"># /etc/fail2ban/filter.d/php-scan.conf
[Definition]
failregex = ^&lt;HOST&gt; .* &quot;GET .*\.php HTTP/.*&quot; (301|404) .*$
ignoreregex =
</code></pre>
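<p>A quick way to sanity check this pattern outside fail2ban, with <code>&lt;HOST&gt;</code> swapped for a plain IPv4 regex and a made up log line (the IP and timestamp here are placeholders):</p>
<pre><code class="language-bash"># The line is printed back if the pattern matches
sample='203.0.113.5 - - [23/Oct/2025:11:09:09 +0000] &quot;GET /hk.php HTTP/1.1&quot; 404 196 &quot;-&quot; &quot;-&quot;'
echo &quot;$sample&quot; | grep -E '^([0-9]{1,3}\.){3}[0-9]{1,3} .* &quot;GET .*\.php HTTP/.*&quot; (301|404) .*$'
</code></pre>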
<p>Create the jail and add the following. </p>
<pre><code class="language-bash"># /etc/fail2ban/jail.d/jail.local
[php-scan]
enabled = true
port = http,https
filter = php-scan
logpath = /var/log/nginx/access.log
maxretry = 3
findtime = 300
bantime = 86400
action = iptables-multiport[name=php-scan, port=&quot;http,https&quot;, protocol=tcp]
</code></pre>
<p>If you did run a [custom] PHP site, you could modify this filter to look for folders exclusive to WordPress sites, as an example.  Spoiler alert: there are a lot of scans for WordPress specific paths.   </p>
<h3>Ban Bots</h3>
<p>This rule could go either way: some people want to ban bots, others want to allow them.  Either way, I am sure both parties want control over what the bots access and which bots scrape their site.  One relevant, present day use case is banning bots from AI companies.  </p>
<pre><code class="language-bash"># Using AI to block AI!
[Definition]
failregex = ^&lt;HOST&gt; .* &quot;.*&quot; \d+ \d+ &quot;.*&quot; &quot;.*(ChatGPT-User|GPTBot|ChatGPT|OpenAI|openai\.com|CCBot.*OpenAI).*&quot;$
ignoreregex =
</code></pre>
<p>(You should be able to use the previous jail and make the appropriate modifications - yes, you can do it!)</p>
<p>This is simply a rule based on the bot's user agent metadata.  It is not guaranteed to work perfectly, as there have been <a href="https://www.tomshardware.com/tech-industry/artificial-intelligence/several-ai-companies-said-to-be-ignoring-robots-dot-txt-exclusion-scraping-content-without-permission-report">reports of companies ignoring general bot practices</a> when it comes to AI companies.  You may want to enhance rules based on activity or IP blocks... speaking of...</p>
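<p>The user agent match can be sanity checked the same way with grep (a sketch: <code>\d</code> rewritten as <code>[0-9]</code> for grep -E, <code>&lt;HOST&gt;</code> as an IPv4 regex, and a made up log line):</p>
<pre><code class="language-bash"># The line is printed back if the bot user agent matches
sample='198.51.100.7 - - [01/Nov/2025:10:00:00 +0000] &quot;GET / HTTP/1.1&quot; 200 512 &quot;-&quot; &quot;Mozilla/5.0 (compatible; GPTBot/1.0; +https://openai.com/gptbot)&quot;'
echo &quot;$sample&quot; | grep -E '^([0-9]{1,3}\.){3}[0-9]{1,3} .* &quot;.*&quot; [0-9]+ [0-9]+ &quot;.*&quot; &quot;.*(ChatGPT-User|GPTBot|ChatGPT|OpenAI).*&quot;$'
</code></pre>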
<h3>Ban IP Ranges</h3>
<p>Lastly, create rules based on IP addresses.  This is a way of limiting access from geo regions or even blocking ranges owned by certain cloud providers.  There are two things to call out: 1. IP addresses aren't a perfect proxy for specific regions, and 2. VPNs are a fairly simple way around this rule.  Still, this should be considered low hanging fruit: low effort and high reward.  </p>
<pre><code class="language-bash"># Blocking the 103.0.0.0/8 block!
[Definition]
# The lookahead (?=103\.) restricts matches to IPs beginning with 103.
failregex = ^(?=103\.)&lt;HOST&gt; - .* &quot;(GET|POST|PUT|DELETE|HEAD|OPTIONS|PATCH).*HTTP.*&quot;.*$
ignoreregex =
</code></pre>
<p>(Same goes for the jail on this one.  You are doing great!  Try again!)</p>
<h2>Remember!</h2>
<p>Just a few reminders, and tips to help when creating rules: </p>
<ul>
<li>make sure you change the jail name for each rule; copy pasta is easy to mess up when making multiple rules</li>
<li>be careful not to make rules that ban yourself!  (IE.  IP ranges, rules that can be triggered by typos, etc.)</li>
<li>keep the action, bantime and logpath in mind when generating the filter.  TIP: use the <code>fail2ban-regex</code> command to test out rules.</li>
</ul>
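<p>For that last tip, <code>fail2ban-regex</code> usage looks something like this (the paths match the php-scan example above; adjust for your own filters and logs):</p>
<pre><code class="language-bash"># Run a filter file against the live log and report matches
fail2ban-regex /var/log/nginx/access.log /etc/fail2ban/filter.d/php-scan.conf

# Or test a single sample line against an inline regex
fail2ban-regex '203.0.113.5 - - [23/Oct/2025:11:09:09 +0000] &quot;GET /hk.php HTTP/1.1&quot; 404 196 &quot;-&quot; &quot;-&quot;' '^&lt;HOST&gt; .* &quot;GET .*\.php HTTP/.*&quot; (301|404) .*$'
</code></pre>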
<p>Also, fail2ban is there to assist other tools and augment your configuration.  Ensure your defences are set up in layers.  IE.  Disable PHP if you don't use it on the server.  Block IP addresses at the firewall or network level, before the proxy.  Use a robots.txt.</p>
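<p>As a minimal sketch of the firewall layer, the same /8 from earlier could be dropped before it ever reaches the proxy (assuming iptables; adapt for nftables or whatever frontend you use):</p>
<pre><code class="language-bash"># Drop all traffic from 103.0.0.0/8 before it reaches nginx
sudo iptables -I INPUT -s 103.0.0.0/8 -j DROP
</code></pre>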
<p>I hope this helps you on your journey to hosting a more secure and efficient site!</p>]]></content:encoded>
  </item>
  <item>
      <title>How to Install Nextcloud-AIO with an External Proxy</title>
      <link>https://seanland.ca/posts/2025-09-26-how-to-install-nextcloud-with-an-external-proxy</link>
      <description>When installing Nextcloud-AIO with an external - non-"same system docker" - proxy the documents can be very confusing.  This guide is to help simplify the process and get you up and running, hopefully sooner.</description>
      <pubDate>Fri, 26 Sep 2025 00:00:00 GMT</pubDate>
      <guid>https://seanland.ca/posts/2025-09-26-how-to-install-nextcloud-with-an-external-proxy</guid>
      <enclosure url="https://seanland.ca/img/2025/nextcloud-proxy.png" type="image/png" />
      <category>self-hosting</category>
      <category>projects</category>
      <content:encoded><![CDATA[<h1>How to Install Nextcloud-AIO with an External Proxy</h1>
<p>I would like to start by complimenting the Nextcloud team with their <a href="https://github.com/nextcloud/all-in-one">documentation</a>.  It is in great depth and has details for every exception to the standard deployment, however, because of this, it can be intimidating for a new comer trying to Nextcloud trying to figure something out.  Once I began to understand the Nextcloud ecosystem a bit more, the documentation becuase invaluable.  Therefore, I hope this post, can be a bridge to easier consumption of the documentation.</p>
<h2>Configuration</h2>
<p>This is the basic flow for my configuration.</p>
<ul>
<li>Internal DNS providing the address via pihole</li>
<li>Users connect to Caddy that has self signed certificates<ul>
<li><em>Note: certificate chains are already set up on my machine, so the certificates are "valid".  That may not be the case for you.</em></li>
</ul>
</li>
<li>Caddy passes through a firewall</li>
<li>Requests hit Docker container engine hosted in Unraid</li>
</ul>
<h2>Important Points to Know</h2>
<ul>
<li>The default mode is proxy mode for the AIO (All in One; in case you missed it.)</li>
<li>The AIO image spins up other containers for additional apps and services</li>
<li>There are <strong>two services</strong> that are very important. <ul>
<li>The "apache" service - this is for end user connection</li>
<li>The "master" container - a little dated term, but this is for admin.  I will refer to it as the "AIO Container"</li>
</ul>
</li>
<li>The master container needs access to the Docker socket.  I know, I am not a fan either, but this is how it spins up the other containers.</li>
<li>We will be "flip flopping" proxy configurations because, well, it seemed to make the most sense to me, so read carefully. If you think it is a typo, it probably isn't.</li>
</ul>
<h2>Steps</h2>
<p>These steps will mostly be theoretical and have code snippets specific to minor configuration changes.  The process is the trickier part as opposed to the technology.  </p>
<h3>Deploy the Nextcloud-AIO Image</h3>
<p>These details will not be included.  If you haven't deployed an image before, you should probably start a little more basic for some practice!  You will have to do a bunch of port configuring and networking.  It is important to understand those concepts!</p>
<p>This is where it is important to note the <strong>APACHE_PORT</strong>.  You will need it. </p>
<p>Also, disable </p>
<h3>Configure the Proxy to Connect to the AIO Container</h3>
<p>Just reiterating, this is the configuration for Caddy.  That is my proxy of choice.  <a href="https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md">Their documentation for other proxies</a> is in great detail, however, like I stated earlier the ecosystem might make it complicated.  </p>
<p>This is an example configuration that worked.  With the following changes being made: </p>
<ul>
<li>DOMAIN: the domain you will access nextcloud with. </li>
<li>IP: the external IP of the docker host of the AIO container</li>
<li>PORT: the exposed port of the AIO container</li>
<li>tls: this should be configured to your certificates</li>
</ul>
<pre><code class="language-bash">DOMAIN {
    tls /certs/tls.crt /certs/tls.key
    header {
        Strict-Transport-Security &quot;max-age=63072000; includeSubDomains; preload&quot;
    }
    reverse_proxy https://IP:PORT {
        transport http {
            tls_insecure_skip_verify
        }
    }    
}
</code></pre>
<p>Now to answer some questions, why are we proxied to an https server?  Well, the AIO Container runs with a self signed certificate deployed.  This is why we are proxying to it and skipping the verification. </p>
<h3>Setup the AIO Container</h3>
<p>Go through the setup process.  Ensure you are using the domain you want to connect to.  From what I remember reading, it isn't very simple to change the domain after the fact (couldn't tell you why, sorry.).</p>
<p>Once you have completed the setup and you go to login, you will notice you get directed to a page that won't let you log in.  This is a good sign and very annoying!  This is why I wrote this guide.  </p>
<h3>Configure the Proxy to Connect to the Apache Service</h3>
<p>Once you have gotten stuck at the login loop, we re-configure the proxy to point end users at the Apache service.  Recall the notes I wrote earlier.  Take note of the admin user and initial password before you re-configure the proxy.   </p>
<ul>
<li>DOMAIN: the domain you will access nextcloud with. </li>
<li>IP: the external IP of the docker host of the <em>new</em> Apache container (should be the same as above - unless you are doing custom docker stuff)</li>
<li>APACHE_PORT: the exposed Apache port of the <em>new</em> Apache container</li>
<li>tls: this should be configured to your certificates</li>
</ul>
<pre><code class="language-bash">DOMAIN {
    tls /certs/tls.crt /certs/tls.key
    header {
        Strict-Transport-Security &quot;max-age=63072000; includeSubDomains; preload&quot;
    }
    reverse_proxy http://IP:APACHE_PORT
}
</code></pre>
<p>As you can see, we are now connecting via http.  That is not a typo.  </p>
<h3>Login?!</h3>
<p>In theory, you should be connecting to the appropriate service now and being prompted to login via a username/password box!  This is where we want to be!  Congrats!</p>
<p>If you didn't get the lovely login screen, attempt the following: </p>
<ul>
<li>Did you wait for all the containers to start up? Go back a step and ensure all containers are up.</li>
<li>Did all the images required get deployed?  On my setup I ran out of space at one point which didn't allow images to be pulled. </li>
<li>Try incognito, old faithful here. </li>
</ul>
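<p>To check on the containers from the docker host, something like this works (the <code>nextcloud-aio-</code> name prefix is what AIO used on my install):</p>
<pre><code class="language-bash"># List the AIO managed containers with their status
docker ps --filter &quot;name=nextcloud-aio&quot; --format &quot;table {{.Names}}\t{{.Status}}&quot;
</code></pre>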
<h2>Limitations</h2>
<ul>
<li>Other Apps will require custom configurations<ul>
<li>Talk requires specific ports</li>
<li>Office (Collabora backend) requires specific access for configuration</li>
</ul>
</li>
<li>Non-docker deployments become complex<ul>
<li>I wanted to do the AIO image on k8s, but, it didn't seem worth attempting.</li>
</ul>
</li>
<li>How do I access the AIO Container?  Three different ways:  <ol>
<li>Set the proxy back to connect to the AIO container</li>
<li>Setup another proxy configuration to connect to it (might require additional work)</li>
<li>Access it via IP on the network of the docker machine (this was my route if I really needed to)</li>
</ol>
</li>
</ul>
<h2>Conclusion</h2>
<p>At this point, you should be able to run basic Nextcloud functionality.  Things like file operations and the built in tooling of Nextcloud should be fine.  As listed in the limitations, operations that require custom backend services may need some work and tweaking.  At the end of the day, this might not be the right solution for me, since I have a lot of overlapping services and this might be overkill for a simple problem.  It was definitely an interesting learning experience. </p>
<p>This took way longer to figure out than it should have.  It came from a lack of knowledge on the Nextcloud ecosystem.  Hopefully this can help speed run people interested in Nextcloud.  Also, this was written off the top of my head, so if I missed any useful points or left dead ends feel free to <a href="https://seanland.ca/contact">contact me</a>.  Happy Clouding!  </p>]]></content:encoded>
  </item>
  <item>
      <title>A Basic Guide to Building An HP Mini k8s Cluster</title>
      <link>https://seanland.ca/posts/2025-04-23-a-basic-guide-to-an-hp-mini-k8s-cluster</link>
      <description>Learning kubernetes can be overwhelming.  This is meant to help anyone start learning by setting up a cluster with two machines; in this case HP Minis.</description>
      <pubDate>Wed, 23 Apr 2025 00:00:00 GMT</pubDate>
      <guid>https://seanland.ca/posts/2025-04-23-a-basic-guide-to-an-hp-mini-k8s-cluster</guid>
      <enclosure url="https://seanland.ca/img/2025/header-k8s-guide.png" type="image/png" />
      <category>self-hosting</category>
      <category>projects</category>
      <content:encoded><![CDATA[<h1>A Basic Guide to Building An HP Mini k8s Cluster</h1>
<p>For all you homelabbers who haven't started with kubernetes, here's the time!  Hopefully, this will help make a complicated task a bit easier.  Also, this doesn't have to be done with HP Minis (that was simply the hardware I used), it can be done with VMs, old computers and even Raspberry Pis (might be software limitations within these steps).  </p>
<p>This will be the start of my third cluster rebuild (simply to keep my mind fresh and practiced).  This example will show creating an HP Mini control plane and a VM as the first worker node.  It will be repeatable for other HP Minis; I am simply using a VM to try something new myself! </p>
<h2>Let's Prepare the Environments</h2>
<p>First, give the machines fun names that represent what they are.  I have named my systems after space, ninja turtles and ants.  Let's use bees and refer to these as "queen" and "worker".  The "queen" will be the control plane and the "worker" will be a regular node.  As you add more nodes, you will be adding more "worker"s.</p>
<p>These systems will both be running Ubuntu Server 24.04.2 as a baseline.  They are also both fresh installs.  So we... </p>
<p>Start with the standard updating and upgrading your packages. </p>
<pre><code class="language-bash">sudo apt-get update &amp;&amp; sudo apt-get upgrade -y
</code></pre>
<p>Disable swap on all the nodes (When referring to nodes, that's all machines unless otherwise specified).  This may not be required anymore, but, <a href="https://github.com/kubernetes/kubernetes/issues/53533">there is a long history here</a>.  </p>
<p>Edit <code>/etc/fstab</code> and comment out the swap line, so it looks similar to below.  </p>
<pre><code class="language-bash">sudo vi /etc/fstab
# changed line should look similar to this.
#/swap.img  none    swap    sw  0   0
</code></pre>
<p>Also, run <code>swapoff -a</code> to disable swap without rebooting. </p>
<pre><code class="language-bash">sudo swapoff -a
</code></pre>
<p>Since we are using ubuntu, we will install the required repositories for the kubernetes packages in <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#install-using-native-package-management">Debian-based distributions</a>.</p>
<pre><code class="language-bash">sudo apt-get install -y apt-transport-https ca-certificates curl gnupg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
sudo chmod 644 /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo chmod 644 /etc/apt/sources.list.d/kubernetes.list
</code></pre>
<p>This is their "new" way of managing packages.  It also pins the repository to a specific minor version (v1.33 here), so be mindful of that going forward if you are looking to upgrade!</p>
<p>We will now add the repositories for Docker to install the <code>containerd.io</code> package.</p>
<pre><code class="language-bash">sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to apt sources:
echo \
  &quot;deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release &amp;&amp; echo &quot;${UBUNTU_CODENAME:-$VERSION_CODENAME}&quot;) stable&quot; | \
  sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null
</code></pre>
<p>Update the apt repositories again!</p>
<pre><code class="language-bash"># Update to include the new repositories  
sudo apt-get update
</code></pre>
<p>Now let's install the <code>containerd.io</code> package.  </p>
<pre><code class="language-bash">sudo apt-get install containerd.io
</code></pre>
<p><em>Optional: Note the version if you do want to pin the package for future upgrades.  At the time I wrote this containerd.io was at 1.7.27.</em></p>
<h2>Begin with the k8s Bits</h2>
<p>We will install different packages on the <strong>queen</strong> and <strong>worker</strong>.  </p>
<p>For the <strong>queen</strong>, we will be installing the additional package kubectl to orchestrate the cluster.  </p>
<pre><code class="language-bash">sudo apt-get install -y kubelet kubeadm kubectl
</code></pre>
<p><em>Optional:  This is the pinning part, or "hold" in the context of an apt package.  This will prevent you from upgrading the package in the event a new one is released.  I would suggest this for the more important clusters to have better control of your upgrade process.</em></p>
<pre><code class="language-bash">sudo apt-mark hold kubelet kubeadm kubectl containerd.io
&lt;/pre&gt;&lt;/code&gt;
&lt;figcaption&gt;Don't forget to include containerd.io&lt;/figcaption&gt;
&lt;/figure&gt;

For the **worker**, do the same, minus the `kubectl` package. 

```bash
sudo apt-get install -y kubelet kubeadm
# Optional Step
sudo apt-mark hold kubelet kubeadm containerd.io
</code></pre>
<p>Back to running commands on both nodes, here are some additional configurations to optimize kernel operations for containerized environments.  Create the file <code>/etc/modules-load.d/containerd.conf</code> with the following contents. </p>
<pre><code class="language-bash">overlay
br_netfilter
</code></pre>
<p>Now, let's load the modules to the kernel. </p>
<pre><code class="language-bash">sudo modprobe overlay
sudo modprobe br_netfilter
</code></pre>
<p>The next step is setting up some networking.  Create the file <code>/etc/sysctl.d/kubernetes.conf</code> and add the following. </p>
<pre><code class="language-bash">net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
</code></pre>
<p>Finally, run the following command to reload the kernel parameters. </p>
<pre><code class="language-bash">sudo sysctl --system
</code></pre>
<h2>(Maybe, Optional) Adding CIFS Support</h2>
<p>This step may be optional: if you want your cluster to access SMB shares, you will want to do this step as well.  </p>
<pre><code class="language-bash">sudo apt-get install cifs-utils
# Set the volume plugin directory
VOLUME_PLUGIN_DIR=&quot;/usr/libexec/kubernetes/kubelet-plugins/volume/exec&quot;
# Create the fstab~cifs directory in the volume plugin directory
sudo mkdir -p &quot;$VOLUME_PLUGIN_DIR/fstab~cifs&quot;
# Change to the fstab~cifs directory
cd &quot;$VOLUME_PLUGIN_DIR/fstab~cifs&quot;
# Download the cifs script from Github
sudo curl -L -O https://raw.githubusercontent.com/fstab/cifs/master/cifs
# Change the permission of the cifs script to be executable
sudo chmod 755 cifs
</code></pre>
<h2>Setup containerd</h2>
<p>Populate a default config.toml for containerd. </p>
<pre><code class="language-bash">sudo containerd config default | sudo tee /etc/containerd/config.toml
</code></pre>
<p>We then configure containerd to use the systemd cgroup driver by setting <code>SystemdCgroup</code>.  Find the appropriate section in <code>/etc/containerd/config.toml</code> and change it to the following: </p>
<pre><code class="language-bash">[plugins.&quot;io.containerd.grpc.v1.cri&quot;.containerd.runtimes.runc]
  ...
  [plugins.&quot;io.containerd.grpc.v1.cri&quot;.containerd.runtimes.runc.options]
    SystemdCgroup = true
</code></pre>
<p>Restart the containerd service. </p>
<pre><code class="language-bash">sudo systemctl restart containerd
</code></pre>
<h2>Let's Start a Cluster!</h2>
<p>Head over to <strong>queen</strong> and run a <code>kubeadm init</code> command.  Please insert the IP of the <strong>queen</strong> machine in the command; that will be the "--apiserver-advertise-address": </p>
<pre><code class="language-bash">sudo kubeadm init --pod-network-cidr=10.15.0.0/16 --apiserver-advertise-address=&lt;QUEEN IP&gt; --ignore-preflight-errors=all
</code></pre>
<p>You will also be provided with a join command; copy that and keep it handy to add any <strong>worker</strong> nodes.  The output will look something like this: </p>
<pre><code class="language-bash">Then you can join any number of worker nodes by running the following on each as root:

kubeadm join &lt;QUEEN IP&gt;:6443 --token &lt;TOKEN&gt; \
    --discovery-token-ca-cert-hash sha256:&lt;REDACTED&gt;
</code></pre>
<h2>Access to the Cluster</h2>
<p>We are now going to give the <strong>queen</strong> the ability to connect to the cluster using <code>kubectl</code>.  On <strong>queen</strong> run the following commands.</p>
<pre><code class="language-bash">mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
</code></pre>
<h2>Setup the Cluster Networking</h2>
<p>All the commands in this section will be run on <strong>queen</strong>. </p>
<p>We will configure <a href="https://github.com/flannel-io/flannel">flannel</a> for the "layer 3 network fabric".</p>
<p>Download it to <strong>queen</strong>, modify the network parameter and deploy to the cluster.</p>
<pre><code class="language-bash">wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

vi kube-flannel.yml

# Modify the following parameters, ensure the --pod-network-cidr from the kubeadm command matches the network below
---
  net-conf.json: |
    {
      &quot;Network&quot;: &quot;10.15.0.0/16&quot;,
      &quot;Backend&quot;: {
        &quot;Type&quot;: &quot;vxlan&quot;
      }
    }
---

# Once saved, apply the yml
kubectl apply -f kube-flannel.yml
</code></pre>
<p>We are now going to add <a href="https://metallb.io/">metallb</a>.  This is to offer load balancing functionality for the bare metal cluster that is being built.  We will start by enabling strictARP. </p>
<pre><code class="language-bash">kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e &quot;s/strictARP: false/strictARP: true/&quot; | \
kubectl apply -f - -n kube-system
</code></pre>
<p>We will now install the most recent version at the time of writing (check the <a href="https://metallb.io/installation/">site</a> for the latest edition).</p>
<pre><code class="language-bash">kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.9/config/manifests/metallb-native.yaml
</code></pre>
<p>We will then create a required secret for metallb. </p>
<pre><code class="language-bash">kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey=&quot;$(openssl rand -base64 128)&quot;
</code></pre>
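<p><em>Aside: the <code>openssl rand -base64 128</code> above just base64-encodes 128 random bytes.  If openssl isn't handy, a rough Python equivalent (a sketch, not a metallb requirement) looks like this:</em></p>
<pre><code class="language-python">import base64
import secrets

# 128 random bytes, base64-encoded, mirroring `openssl rand -base64 128`
# (openssl additionally wraps its output every 64 characters)
key = base64.b64encode(secrets.token_bytes(128)).decode()
print(len(key))  # 172 characters of base64
</code></pre>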
<h2>Let's Get the Worker to Join</h2>
<p>Remember that <code>kubeadm init</code> output?  This is when we use it.  Run the join command on the <strong>worker</strong>.</p>
<p>At this point, you should see output similar to this.</p>
<pre><code class="language-bash">NAMESPACE        NAME                           READY   STATUS    RESTARTS        AGE
kube-flannel     kube-flannel-ds-kw5s6          1/1     Running   0               64s
kube-flannel     kube-flannel-ds-vl84d          1/1     Running   1 (3m27s ago)   36m
kube-system      coredns-674b8bbfcf-frw88       1/1     Running   1 (3m27s ago)   81m
kube-system      coredns-674b8bbfcf-n6r2h       1/1     Running   1 (3m27s ago)   81m
kube-system      etcd-nest                      1/1     Running   1 (3m27s ago)   81m
kube-system      kube-apiserver-nest            1/1     Running   1 (3m27s ago)   81m
kube-system      kube-controller-manager-nest   1/1     Running   1 (3m27s ago)   81m
kube-system      kube-proxy-4lnmf               1/1     Running   0               64s
kube-system      kube-proxy-9nkxs               1/1     Running   1 (3m27s ago)   81m
kube-system      kube-scheduler-nest            1/1     Running   1 (3m27s ago)   81m
metallb-system   controller-bb5f47665-r2w8g     1/1     Running   0               6m20s
metallb-system   speaker-ll7pj                  1/1     Running   0               49s
metallb-system   speaker-zgm26                  1/1     Running   2 (3m16s ago)   19m
</code></pre>
<p>The services might still be pending, or in crash loops.  That is fine; it should resolve shortly, in theory.  For example, the metallb-system controller will try to run on a <strong>worker</strong>. </p>
<h2>The Network Pool</h2>
<p>Next, we set up the IPAddressPool for the service IP addresses that will be available for utilization.  Create a YAML file titled <code>metallb-pool.yml</code> with the following contents:</p>
<pre><code class="language-yaml">apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.50-192.168.1.200

---

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
</code></pre>
<p>Apply the yml file.</p>
<pre><code class="language-bash">kubectl apply -f metallb-pool.yml
</code></pre>
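<p>As a quick sanity check, the pool above gives metallb 151 addresses to hand out for services; Python's <code>ipaddress</code> module can confirm the count:</p>
<pre><code class="language-python">import ipaddress

# Count the service IPs in the metallb pool 192.168.1.50-192.168.1.200
start = ipaddress.IPv4Address("192.168.1.50")
end = ipaddress.IPv4Address("192.168.1.200")
pool_size = int(end) - int(start) + 1
print(pool_size)  # 151
</code></pre>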
<h2>Congratulations!</h2>
<p>If all went well, you now have the start of a kubernetes adventure.  You can try a test deployment of a proxy, or a different container; it should run fine.  I did test this cluster with a few secrets and a deployment.</p>
<p>I hope this was helpful, please feel free to <a href="https://seanland.ca/contact">contact me</a> if you have any concerns, suggestions or feedback.  Thanks for reading!</p>
<h2>BONUS: Installing Helm</h2>
<p>Helm has become a very popular "package manager for kubernetes" (that's their tagline on their <a href="https://helm.sh">main page</a>).  These are literally the commands from their site.  It is a super simple installation to help get people started. </p>
<pre><code class="language-bash">curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg &gt; /dev/null
sudo apt-get install apt-transport-https --yes
echo &quot;deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main&quot; | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
</code></pre>]]></content:encoded>
  </item>
  <item>
      <title>Migrating My Repositories From Self Hosted GitLab to Self Hosted Gitea</title>
      <link>https://seanland.ca/posts/2025-04-10-how-i-migrated-my-repositories-from-gitlab-to-gitea</link>
      <description>I have decided to change source code management systems from GitLab to Gitea.  This is how I went about migrating all of the repositories.</description>
      <pubDate>Thu, 10 Apr 2025 00:00:00 GMT</pubDate>
      <guid>https://seanland.ca/posts/2025-04-10-how-i-migrated-my-repositories-from-gitlab-to-gitea</guid>
      <enclosure url="https://seanland.ca/img/2025/scm-migration.png" type="image/png" />
      <category>self-hosting</category>
      <category>projects</category>
      <content:encoded><![CDATA[<h1>Migrating My Repositories From Self Hosted GitLab to Self Hosted Gitea</h1>
<p>GitLab has been a great source code management (SCM) tool over the past four years.  I have extended it beyond just simple repositories, however, it just hasn't stuck.  Three years ago, I had migrated from Gitea to GitLab simply because I wanted a tool with a built in container registry, which - at the time - Gitea did not have.  That has changed.  Gitea also now has Gitea Actions and plans to become federated.  GitLab has also started to feel a little clunky.  The one thing I recall about Gitea was how quick it was.  It feels like now is a good time to go back.  This is how I did it.  </p>
<h2>One by One Repository Migration</h2>
<p>This was actually what I was hoping to do, however, it does not sound like it will be possible.  I will not be investigating it deeply.  From what I have read, there are some issues with https to http redirection when it comes to payloads and how Gitea handles it.  With multiple proxies and a kubernetes configuration, this isn't something I want to play with.  Next plan. </p>
<h2>Mass Migration Using Six Year Old Script</h2>
<p>I have come across the <a href="https://github.com/up2early/MigrateGitlabToGogs">up2early/MigrateGitlabToGogs</a> repository; the migration tool recommended by Gitea when migrating from GitLab to Gitea.  It is six years old, but a plus is that it doesn't have any issues listed!</p>
<p>The utility migrates repositories one by one.  Let's start by writing a script to automate the ... script.  </p>
<p>OK - long story short, I bailed.  This script seemed to execute the migration operation similarly to how the built in migration tool works.  The results were returning null. </p>
<p>Time to pivot to automating a manual task. </p>
<h2>Mass Migration Through Cloning and Pushing</h2>
<p>As I manually work through what all the steps are going to be, I have come up with this list of tasks. </p>
<ol>
<li>Find all the repositories</li>
<li>Loop through the repositories</li>
<li>Clone the individual repository</li>
<li>Change directory and move into the repository folder</li>
<li>Remove the original origin</li>
<li>Add the new origin</li>
<li>Push the code base to the "new" origin</li>
<li>Remove the local code base</li>
</ol>
<p>Here are the limitations doing it this way for those that are trying. </p>
<ul>
<li>Obviously, well, hopefully obviously - you can only do this with repositories you have (token) access to</li>
<li>This is only going to bring over the selected branches (I am doing only the default)</li>
<li>It will also not work for repositories in an organization</li>
<li><strong>You will lose everything not in the repository.  This is fine for me, might not be for you!</strong></li>
</ul>
<p>There are two required steps to automate this.</p>
<h3>Need to Enable Push-to-Create?!</h3>
<p>Using this method and Gitea, we are going to be pushing to create repositories.  For security, this might be a function that is best temporarily enabled for the migration.</p>
<p>Below is the environment variable that has to be set.  These values are for kubernetes; modify them for whatever configuration you require. </p>
<pre><code class="language-yaml">    - name: GITEA__REPOSITORY__ENABLE_PUSH_CREATE_USER
      value: &quot;true&quot;
</code></pre>
<h3>Save Local Credentials</h3>
<p><strong>Note: This may not be the best practice for all use cases; do with caution.  You are storing credentials.</strong></p>
<p>After you have logged in once, you should be able to store your credentials. </p>
<pre><code class="language-bash">git config --global credential.helper store
</code></pre>
<p>You can confirm they are stored by looking at the git-credentials file.</p>
<pre><code class="language-bash">more ~/.git-credentials
</code></pre>
<h3>What Do All These Steps Look Like Manually?</h3>
<p>If we break down the above steps one by one, this is what the commands will look like using an example repository.</p>
<pre><code class="language-bash">git clone https://gitlab.snld.ca/seanland/example.git
cd example
git remote remove origin
git remote add origin https://git.snld.ca/seanland/example.git
# We have to figure out what branch to use
git branch --show-current
# Default will always be private, just in case - repositories can be made public after
git push -o repo.private=true -u origin master
cd ..
# Remove the directory (this may be required to have elevated permissions)
sudo rm -r example
</code></pre>
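<p>The only real transformations in the steps above are rewriting the origin URL and extracting the repository name.  Both boil down to simple string operations, sketched here in Python with the example hostnames from above (swap in your own):</p>
<pre><code class="language-python">def to_new_origin(url):
    """Rewrite a GitLab clone URL to its Gitea equivalent."""
    return url.replace("gitlab.snld.ca", "git.snld.ca")

def repo_name(url):
    """Extract the bare repository name from a clone URL."""
    return url.split("/")[-1].removesuffix(".git")

example = "https://gitlab.snld.ca/seanland/example.git"
print(to_new_origin(example))  # https://git.snld.ca/seanland/example.git
print(repo_name(example))      # example
</code></pre>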
<h3>Let's Find a Way to Automate This</h3>
<p>I have decided to do this in two steps: </p>
<ol>
<li>Get a list of all the repositories in GitLab.  <em>Base code was AI generated.</em></li>
</ol>
<pre><code class="language-python">import requests

GITLAB_BASE_URL = &quot;&lt;insert url&gt;&quot;
PERSONAL_ACCESS_TOKEN = &quot;&lt;insert personal access token&gt;&quot;

# Define the headers for authentication
headers = {
    &quot;Private-Token&quot;: PERSONAL_ACCESS_TOKEN
}

# Function to fetch project data from GitLab
def fetch_projects():
    projects = []
    page = 1

    while True:
        response = requests.get(f&quot;{GITLAB_BASE_URL}/projects&quot;, headers=headers, params={&quot;per_page&quot;: 100, &quot;page&quot;: page})
        if response.status_code != 200:
            print(f&quot;Error fetching data: {response.status_code}, {response.text}&quot;)
            break

        data = response.json()
        if not data:
            break

        projects.extend(data)
        page += 1

    return projects

# Function to write repository URLs to a file
def write_repo_urls_to_file(projects, filename=&quot;repositories.txt&quot;):
    with open(filename, &quot;w&quot;) as file:
        for project in projects:
            file.write(f&quot;{project['http_url_to_repo']}\n&quot;)
    print(f&quot;Repository URLs have been written to {filename}&quot;)

# Main execution
if __name__ == &quot;__main__&quot;:
    projects = fetch_projects()
    write_repo_urls_to_file(projects)
</code></pre>
<ol>
<li>We run the initial steps against the list of repositories.  You will also need to change the 'new_origin' to appropriately fit your needs.  <em>Base code also generated by AI.  I am not claiming to have written this from scratch.  I was surprised when they both mostly worked.</em></li>
</ol>
<pre><code class="language-python">import subprocess
import os

def run_command(command):
    &quot;&quot;&quot;Runs a shell command and returns the output.&quot;&quot;&quot;
    result = subprocess.run(command, shell=True, text=True, capture_output=True)
    if result.returncode != 0:
        print(f&quot;Error executing command: {command}\n{result.stderr}&quot;)
        return None
    return result.stdout.strip()

def process_repository(url):
    # Extract repository name from URL
    repo_name = url.split('/')[-1].replace('.git', '')

    # Clone the repository
    print(f&quot;Cloning {url}...&quot;)
    run_command(f&quot;git clone {url}&quot;)

    if os.path.exists(repo_name):
        os.chdir(repo_name)

        # Remove old origin and add new one
        print(&quot;Updating remote origin...&quot;)
        run_command(&quot;git remote remove origin&quot;)

        # Replace 'gitlab' with 'git' in the URL to form the new origin URL
        new_origin = url.replace(&quot;gitlab.snld.ca&quot;, &quot;git.snld.ca&quot;)
        run_command(f&quot;git remote add origin {new_origin}&quot;)

        # Determine the current branch
        print(&quot;Determining the current branch...&quot;)
        current_branch = run_command(&quot;git branch --show-current&quot;)
        if not current_branch:
            print(&quot;Unable to determine the current branch. Skipping push operation.&quot;)
        else:
            # Push the current branch to the new remote as a private repository
            print(f&quot;Pushing to {new_origin} on branch {current_branch}...&quot;)
            run_command(f&quot;git push -o repo.private=true -u origin {current_branch}&quot;)

        # Move back to the base directory and remove the cloned repo
        os.chdir(&quot;..&quot;)
        print(f&quot;Removing the directory {repo_name}...&quot;)
        run_command(f&quot;sudo rm -r {repo_name}&quot;)

def main():
    # Path to the text file containing the list of URLs
    file_path = 'repositories.txt'  # Make sure to set the correct path to your file

    try:
        with open(file_path, 'r') as file:
            urls = file.readlines()
            for url in urls:
                url = url.strip()  # Remove newline characters
                if url:  # Make sure it's not an empty line
                    process_repository(url)
    except FileNotFoundError:
        print(f&quot;File not found: {file_path}&quot;)
    except Exception as e:
        print(f&quot;An error occurred: {e}&quot;)

if __name__ == &quot;__main__&quot;:
    main()
</code></pre>
<h2>What's Left?</h2>
<ul>
<li>I had to manually migrate an Organization.  There were only nine repositories, so I simply used the script to clone the repositories and remove the origin.  I, then, manually added the new origin and pushed the repositories to Gitea.</li>
<li>I need to add context to all the repositories: Descriptions, Issues (if I want), Wiki (if I want) and Gitea actions (which I will be doing!)</li>
<li>Set the GitLab repositories to Archived so I do not accidentally push code to them. </li>
</ul>
<h3>Setting GitLab Repositories to Archived</h3>
<p>Run a script to archive all of the projects in the GitLab instance.  <em>Again, whipped some words into an LLM.  This isn't a learning-to-program task, more a method to solve a problem.</em></p>
<pre><code class="language-python">import requests

# Configuration
base_url = &quot;&lt;insert url&gt;&quot;
access_token = &quot;&lt;insert personal access token&gt;&quot;

# Headers for the API request
headers = {
    'PRIVATE-TOKEN': access_token
}

# Function to list all projects
def list_all_projects():
    projects = []
    page = 1
    per_page = 100  # Number of projects per page

    while True:
        response = requests.get(
            f&quot;{base_url}/projects&quot;,
            headers=headers,
            params={'per_page': per_page, 'page': page}
        )
        response.raise_for_status()  # Raises an error for bad responses

        current_projects = response.json()
        if not current_projects:
            break

        projects.extend(current_projects)
        page += 1

    return projects

# Function to archive a project
def archive_project(project_id):
    response = requests.post(
        f&quot;{base_url}/projects/{project_id}/archive&quot;,
        headers=headers
    )
    response.raise_for_status()
    if response.status_code == 201:
        print(f&quot;Project ID {project_id} archived successfully.&quot;)
    else:
        print(f&quot;Failed to archive project ID {project_id}.&quot;)

def main():
    projects = list_all_projects()
    print(f&quot;Found {len(projects)} projects.&quot;)

    for project in projects:
        print(f&quot;Archiving project: {project['name']} (ID: {project['id']})&quot;)
        archive_project(project['id'])

if __name__ == &quot;__main__&quot;:
    main()
</code></pre>
<p>Now, you should be unable to push new code to any of the repositories!  </p>
<h2>In Summary</h2>
<p>We have three different scripts.</p>
<ol>
<li>One to list all of your repositories</li>
<li>The bait and switch script; the clone n' push. </li>
<li>The sleeper: archiving all of your repositories</li>
</ol>
<p>It is by no means perfect, but it's more than I had at my disposal.  It also meets all the requirements for my goals.  Now that I have completed the migration off of GitLab, I can begin the migration off of Jenkins.  </p>
  </item>
  <item>
      <title>How I Manage My Email Accounts</title>
      <link>https://seanland.ca/posts/2025-04-06-how-I-manage-my-email-accounts</link>
      <description>Working in security, utilizing multiple email accounts, as well as watching the technical world evolve in both good and bad ways, I have decided to improve my email situation.</description>
      <pubDate>Sun, 06 Apr 2025 00:00:00 GMT</pubDate>
      <guid>https://seanland.ca/posts/2025-04-06-how-I-manage-my-email-accounts</guid>
      <enclosure url="https://seanland.ca/img/2025/email-accounts-link.png" type="image/png" />
      <category>security</category>
      <category>personal</category>
      <content:encoded><![CDATA[<h1>How I Manage My Email Accounts</h1>
<p>I have tried multiple email services.  An email server is one (or multiple) piece of technology I am not really interested in self hosting.  From what I have found, it is very complicated.  There are great, easy to deploy solutions, however, there are so many other complications with establishing a reputation and not getting blacklisted.</p>
<h2>How It Was</h2>
<p>I, originally, had two domains with active email accounts.  Those were my personal, non-gmail account associated with <a href="https://seanland.ca">seanland.ca</a> and the <a href="https://seanland.ca/posts/2024-04-15-so-I-started-business">business account for Seanland Entertainment</a>.</p>
<p>I was using Office365 for the business account.  I had the package that contained all the goods; basically all the Office applications and email.  The cost was ~20 CAD a month.  It seemed to be worth it.  Or, so I thought.</p>
<p>My personal email was being hosted at <a href="https://hey.com">hey.com</a>.  The service was incredible.  Also, I love and firmly believe in the business philosophy of <a href="https://en.wikipedia.org/wiki/David_Heinemeier_Hansson">DHH</a>.  It feels like his methodology is for the good of the people that he works with.  I have also read a few of his books.  The cost associated with the service was ~15 CAD a month after exchange.</p>
<h2>Why the Change?</h2>
<p>There are two main reasons I wanted to set up a better solution for my emails. </p>
<h3>The Cost</h3>
<p>For ~35 CAD a month, I felt like I was paying too much to host two email accounts.  Sure, there were added benefits to both of the services.  </p>
<p>hey.com had an amazing mentality around emails and how you take action on them.  The apps had linux support, which is awesome and a rarity.</p>
<p>Office365 is great for business applications, however, I self-host <a href="https://cryptpad.fr/">CryptPad</a>, an incredible office suite that is web based and accessible anywhere with my VPN back home.  And...</p>
<h3>I Needed More</h3>
<p>I needed more.  I have started to have the desire to migrate away from my gmail account.  So, I needed to add an additional domain for how I wanted to start using my email.  I wasn't willing to increase the cost by a minimum of $5 CAD for an additional O365 account and I didn't want to host an email on yet another service to save a few bucks.    </p>
<h2>What Am I Doing Now?</h2>
<p>Not getting too deep into politics, beliefs, privacy and security, <a href="https://proton.me">Proton</a> seemed to be the best option (not affiliated and not a referral link).   The reasons I did switch: </p>
<ol>
<li>Cost Savings: The pricing overall went down to ~13 CAD a month.</li>
<li>Hosted Location: This is part of the privacy, security, politics piece.  I like that it is in Switzerland, however, the CEO has said some - let's say - debatable things. </li>
<li>Bonus Functionality: VPN service, cloud drive, password manager, cryptowallet as well as the required Calendar and Mail clients.</li>
<li>Linux Support: ♥️😍🥰</li>
<li>Support for three domains and fifteen email addresses.</li>
<li>Built in, simple, Gmail migration service.</li>
<li>The actual privacy and tracking blocking.</li>
<li>The apps are open source! </li>
</ol>
<p>The third domain has officially been added.  The goal of this third domain was to set up a mailbox that I use for signing up to different services and accounts.  I will be using one mailbox with aliases depending on the service.  It is a slow and tedious process.</p>
<p>Now all three of my addresses, as well as any aliases I use, land in one mailbox.  The address an email is sent to dictates what happens to said email.  So, for different services, emails will be auto labelled and routed to different folders, or just labelled and left in the main inbox to be moved after being addressed. </p>
<p>I am doing this to hopefully avoid spam and have better control over email address squatters.  I literally have someone that uses my gmail address to sign up for their personal activities.  It is also scary to think, what information they have ignorantly shared with me.  I was literally able to find out where they live due to their carelessness.  Maybe this is a blog for the future...   </p>
<h2>The Results</h2>
<p>I have had this set up for a few months now.  I am extremely happy with the service.  The apps are great across all the ecosystems I use and I think my email methodology is slowly paying off.  </p>
<p>If you are looking to take ownership of your email by migrating to a paid service, I think what Proton offers is hard to argue with.  I am always looking to use the best and I mean that ethically, economically and technically.  If you do have a better option, please let me know! Thanks for reading.</p>
  </item>
  <item>
      <title>The Magic of Upgrading PC Fans</title>
      <link>https://seanland.ca/posts/2025-02-08-the-magic-of-upgrading-pc-fans</link>
      <description>At times, the simplest answer is the right one.  I sit here with a fully loaded PC running a little too hot for comfort.  Let's throw some more fans at it.  This is the adventure of upgrading fans.</description>
      <pubDate>Sat, 15 Feb 2025 00:00:00 GMT</pubDate>
      <guid>https://seanland.ca/posts/2025-02-08-the-magic-of-upgrading-pc-fans</guid>
      <enclosure url="https://seanland.ca/img/2025/unraid/upgrade-fans-header.png" type="image/png" />
      <category>self-hosting</category>
      <category>projects</category>
      <content:encoded><![CDATA[<h1>The Magic of Upgrading PC Fans</h1>
<h3><em>This is a picture heavy story about upgrading fans.</em></h3>
<p>I load up a couple drives to fill out my <a href="https://www.fractal-design.com/products/cases/node/node-804/">node 804</a> (great home server case).  I stare at my drive temperatures, watching one drive going off the walls - a new one - on the cusp of triggering heat warnings.  I think to myself, is this defective out of the box? </p>
<p>I contact my friend who works in SMB IT management, thinking he must deal with defective drives all the time.  I ask him "You ever heard of a drive idling above 50C?"</p>
<p>He responds simply with "What type of cooling do you have in the case?"</p>
<p>Duh, a fully loaded case with default fans.  This drive is probably sitting right on a heat source that isn't getting the heat dispersed.  </p>
<h2>The Starting Point</h2>
<p>Here we are, I have yet to install the fans in the machine.  This is a live blog!  These are the metrics of the server at writing. </p>
<p><img alt="Screenshot of all the drive temperatures" src="https://seanland.ca/img/2025/unraid/temp-before.png" />
<em>Screenshot of all the drive temperatures</em></p>
<p>Disk 5, the culprit; you can see the nice yellow 53C right beside it.  Having said that, you can see the interesting array of temperatures amongst the drives.  If you are familiar with the case, or you did look at the design, the eight drives - which are all holding drives - are in two four-drive stacks side by side. </p>
<p>There is an 18C swing between the coolest drive (which we can assume is idle) and the hottest drive, which is idle (I have seen the temperature when it is not).</p>
<p>Let's fix it. </p>
<h2>The Solution</h2>
<p>I know this post is titled "The Magic of Upgrading PC Fans".  I also haven't installed them yet, so, I am sure you are asking "how do you know this will work?". </p>
<p>Easy, I have used these fans and I have confidence in them.  </p>
<p>I bought four <a href="https://www.amazon.ca/gp/product/B07CG2PGY6?th=1">Noctua NF-P12 Redux-1700 PWM</a> (amazon.ca link, not a referral link).  They run for ~$18 CAD a piece; in total ~$75-80 CAD after taxes.  A minor price for the potential longevity of the drives.</p>
<h2>The Inspection</h2>
<p>Ugh.  Cables.  Dust.  This is why I hate upgrades.</p>
<p><img alt="This is one packed case.  Every time I have to play with wires I cry a little." src="https://seanland.ca/img/2025/unraid/wires.png" />
<em>This is one packed case.  Every time I have to play with wires I cry a little.</em></p>
<p><img alt="Okay, it's not awful, but, still looks like a cheap Halloween decoration." src="https://seanland.ca/img/2025/unraid/dust.png" />
<em>Okay, it's not awful, but, still looks like a cheap Halloween decoration.</em></p>
<h2>The Install</h2>
<p>The case is divided into two bays.  One for the - let's call it - motherboard side and the other for the PSU and drive bays.  I will only be adding fans to the drive bay side.  </p>
<p>Immediately, I am second guessing my choice of four fans for it.  I envisioned two fans on the front blowing out (consistent with the other side's one fan) and two fans on the top blowing out.  </p>
<p>Nope. </p>
<p>I settled on replacing the back fan with an upgrade and putting the two fans at the front. </p>
<p>But, wait. </p>
<p>It is never that simple.  The fan controller with the case only accepts three-pin fans.</p>
<p>Now what? </p>
<p>Time to MacGyver something for now.  I scope the motherboard for any chassis fan pins.  There are two.  One is in use for the case fan on the motherboard side.  Sorry, fan, not for you any more.  Since I didn't want to go back to the drive side, I simply plug this fan into the optional CPU fan pins.  I plug the rear fan and one of the two front fans into the two chassis fan pins.</p>
<p>Boot it up! </p>
<p><img alt="Screenshot of all the drive temperatures right at boot." src="https://seanland.ca/img/2025/unraid/all-fans-boot.png" />
<em>Screenshot of all the drive temperatures right at boot.</em></p>
<h2>12... 24... 72 Hours Later...</h2>
<p>I had gotten side tracked and ordered a fan controller.  I ordered the <a href="https://www.amazon.ca/dp/B0BP23WWTX">Thermalright Fan HUB Controller REV. A</a> (still not using referral links) for $15 CAD.  Let's call the total just under $100 for this computer's life changing upgrade.    </p>
<p>Here are the results after a week.  </p>
<p><img alt="This is running with two upgrade fans." src="https://seanland.ca/img/2025/unraid/two-fan-boot.png" />
<em>This is running with two upgrade fans.</em></p>
<p>Not good enough.  Now I have a warning on a new drive.  Moving wires around has probably dampened the airflow.  </p>
<p>Updated plan: Let's install the fan controller, get the three Noctua fans running on the hard drive side.  Place the fourth Noctua fan on the motherboard side as the "rear" fan and put the two original Fractal fans at the front of the motherboard side.</p>
<h2>Updated Plan Numbers</h2>
<p>Here's the initial boot... </p>
<p><img alt="Screenshot of all the drive temperatures right at boot with all fans operating." src="https://seanland.ca/img/2025/unraid/all-fans-boot-2.png" />
<em>Screenshot of all the drive temperatures right at boot with all fans operating.</em></p>
<p>Also, an updated cost.  The cost of stupidity.  I left the USB drive in the side of the case when I flipped the case.  I put some weight on top and pushed the port into the case.  Fortunately, it is still functioning; the USB drive is just sunk into the case a little too far.  Dodged one there.  </p>
<h2>12 Hours Later...</h2>
<p>There you have it! This is what we can expect for numbers.  </p>
<p><img alt="Screenshot of all the drive temperatures right at boot" src="https://seanland.ca/img/2025/unraid/12-hours-later.png" />
<em>Screenshot of all the drive temperatures right at boot</em></p>
<p>The cables are a mess.  I nearly broke (actually did break) my USB port, which is also my boot drive for Unraid.  BUT, Mission Accomplished! </p>
<h2>Moral of the Story / tldr;</h2>
<ol>
<li>Fans are worth it. </li>
<li>If you can afford it, max out the machine off the bat, including fans.</li>
<li>Upgrading a machine adds years to its life, but probably takes more off of yours (alluding to the frustration).</li>
<li>Cable management does matter for airflow.</li>
<li>Dust sucks as well; actually it clogs.  Get rid of it.  </li>
</ol>
<p>In seriousness, this $100 should have been spent while I built the machine.  Cost was the biggest factor back then.  I also - ignorantly - thought this was going to remain a smaller server and wouldn't need the extra wind power.  Things changed.</p>
<p>A happy computer is a cool computer. </p>]]></content:encoded>
  </item>
  <item>
      <title>The Power of One fail2ban Rule</title>
      <link>https://seanland.ca/posts/2025-02-06-the-power-of-one-fail2ban-rule</link>
      <description>Having a VPS in AWS isn't always the safest thing by default.  fail2ban is a tool that can be used to increase its security.  Here is how you can set up one simple rule that will have a great amount of impact.</description>
      <pubDate>Thu, 06 Feb 2025 00:00:00 GMT</pubDate>
      <guid>https://seanland.ca/posts/2025-02-06-the-power-of-one-fail2ban-rule</guid>
      <enclosure url="https://seanland.ca/img/2025/banner-fail2ban.png" type="image/png" />
      <category>security</category>
      <content:encoded><![CDATA[<h1>The Power of One fail2ban Rule</h1>
<p>If you run any sort of server in the cloud, I can (unofficially) guarantee you there is someone out there trying to get into it.  Cloud servers are easy targets.  Companies are still migrating to the cloud, and there are tons of people who still don't know how to use it.  Taking even a step back, there are tons of people who don't know how to secure a computer; heck, I am sure I could do a better job in a lot of cases.  </p>
<h2>So How Do I Know?</h2>
<p>Log in and review the logs; see what you can find if you don't believe me.  If you don't have any sort of logging enabled (in this case access logs) you might want to start enabling it! </p>
<p>Run the command: </p>
<pre><code class="language-bash">more /var/log/access.log
</code></pre>
<p>This will start you at the top of your log file and allow you to work your way down with the space bar.  You will probably get my point real quick.</p>
<p>I am sure you will see something similar to this below.</p>
<figure>
<img src="https://seanland.ca/img/2025/access-logs_1200ma_.png">
<figcaption>The access logs, your logs will have a local IP address.  (I redacted mine.)</figcaption>
</figure>
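<p>To get a rough sense of scale, you can also count the unique client IPs in the log.  Here is a minimal sketch using two made-up log lines in the typical access-log shape (your real file will have far more):</p>
<pre><code class="language-bash"># Two synthetic log lines; in practice, pipe in /var/log/access.log instead
printf '%s\n' \
  '203.0.113.7 - - [06/Feb/2025:03:14:07 +0000] "GET /wp-login.php HTTP/1.1" 404 162' \
  '198.51.100.9 - - [06/Feb/2025:03:15:22 +0000] "GET /admin.php HTTP/1.1" 404 162' |
  awk '{print $1}' | sort -u | wc -l   # the first field is the client IP
</code></pre>
<p>Swap the printf for <code>cat /var/log/access.log</code> and the number that comes back may surprise you.</p>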

<h2>Setup</h2>
<p>First install fail2ban.  We are using Ubuntu and are just going to pull the apt package. </p>
<pre><code class="language-bash">sudo apt-get install fail2ban
</code></pre>
<p>We have to make two configuration changes.  We are going to create a filter - which is essentially a regex that matches offending log lines - as well as a jail.  A jail is the rule that ties a filter to an action.  Basically, if a jail's filter matches, baddies get punished. </p>
<p>Insert this for the filter; the location is commented in the code block. </p>
<pre><code class="language-bash"># /etc/fail2ban/filter.d/sshd-invaliduser.conf
[Definition]
failregex = ^.*sshd.*: Invalid user .* from &lt;HOST&gt; port .*$
ignoreregex =
</code></pre>
<p>Create the jail and add the following. </p>
<pre><code class="language-bash"># /etc/fail2ban/jail.d/jail.local
[sshd-invaliduser]
enabled = true
filter = sshd-invaliduser
action = iptables[name=SSHD, port=ssh, protocol=tcp]
logpath = /var/log/auth.log
maxretry = 1 
bantime = 2592000
</code></pre>
<p>Now simply restart the service. </p>
<pre><code class="language-bash">sudo systemctl restart fail2ban
</code></pre>
<p>Check the status of the jail</p>
<pre><code class="language-bash">sudo fail2ban-client status sshd-invaliduser
</code></pre>
<p>There you have it, you can capture all those prisoners! </p>
<h2>Wait, What Did We Just Do?</h2>
<p>In short, we ban an IP address from contacting a port using a protocol based on a filter.  The filter is looking for an invalid user.  </p>
<p>Looking at the filter, it is a regular expression.  Compare it to the logs you have found.  The key things we are trying to filter on are the process name (sshd), the term "Invalid user" as well as the host.  The host is the IP address that gets "jailed".  You may have figured it out, but, yes, we are banning anyone that enters an invalid user.  </p>
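<p>You can sanity check the expression before relying on it.  fail2ban ships its own tester, <code>fail2ban-regex</code>, but even plain grep gives a quick gut check.  A sketch against a synthetic auth.log line (fail2ban expands &lt;HOST&gt; into an address-matching group, which the grep group below only approximates):</p>
<pre><code class="language-bash"># A made-up line in the shape of /var/log/auth.log
line='Feb  6 03:14:07 myvps sshd[1234]: Invalid user admin from 203.0.113.7 port 51514'
# Rough stand-in for the failregex; grep -o prints only the matched portion
echo "$line" | grep -Eo 'Invalid user .* from ([0-9.]+) port'
</code></pre>
<p>The proper test is <code>fail2ban-regex /var/log/auth.log /etc/fail2ban/filter.d/sshd-invaliduser.conf</code>, which reports how many existing lines the filter would have matched.</p>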
<h3>OK, Question, Why?</h3>
<p>It is common for bad actors to just try common usernames and see what sticks.  This is also followed by default passwords.  Therefore, think twice when you delay changing your default password.  Make sure it's the first thing you do.  I would also suggest you use a custom user - especially if it is an admin account - followed by disabling the default one.  </p>
<h3>The Jail</h3>
<p>The jail settings are generally self explanatory, but, here is a breakdown:</p>
<ul>
<li><strong>[sshd-invaliduser]</strong> - The name</li>
<li><strong>enabled = true</strong> - If it is enabled</li>
<li><strong>filter = sshd-invaliduser</strong> - The associated filter (the other thing)</li>
<li><strong>action = iptables[name=SSHD, port=ssh, protocol=tcp]</strong> - What happens when we hit the criteria (the punishment)</li>
<li><strong>logpath = /var/log/auth.log</strong> - Which log is being monitored</li>
<li><strong>maxretry = 1</strong> - How many times it can be triggered before the ban</li>
<li><strong>bantime = 2592000</strong> - The ban time in seconds - yes, this is a month. </li>
</ul>
<h3>What Can Go Wrong?</h3>
<p>Yes, you can ban yourself.  So don't do something silly.  This is a helpful practice when you have a scripted login using a private key.  This might not be the best option if you are manually typing your login every time.  So, two things: automate it securely and have a backup plan.  (If you do lock yourself out, <code>sudo fail2ban-client set sshd-invaliduser unbanip &lt;IP&gt;</code> run from the server's console will lift the ban.)</p>
<h2>Some Interesting Findings!</h2>
<p>There are two really interesting findings from the result of this. </p>
<ol>
<li>The volume of unique IPs with incorrect users is interesting.  I am sure there is spoofing, bots, etc., but even so, within the first four days of having this configured <strong>76 IP addresses have been punished for a month.</strong></li>
<li>Some of the incorrect users banned have some logic to them.  An example: the user "seanland" has been attempted.  This makes me think the bad actors are either using the domain as a blind attempt, or are adding logic to their attack; potentially using generative AI in some form.  </li>
</ol>
<h2>What Else Can I Do?</h2>
<p>Well, basically anything you want.  This is a simple starting point.  Here are some other ideas to help lockdown your servers and services:</p>
<ol>
<li>Look at specific application logs, filter on specific end points you don't want targeted.  An example, filter a log for 404s and block popular calls (For the pages that actually don't exist.  Like maybe your site doesn't have admin.php, but, people keep trying to access it.)</li>
<li>Go deeper into ssh attempts, limit erroneous logins on the user you use! </li>
<li>Get notified on specific bans!  Maybe there are scenarios you want to be notified of?  Login? # of failed attempts etc. </li>
<li>Constantly monitor your logs and look for new rules that you can create. </li>
</ol>
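<p>As an example of the notification idea, fail2ban's stock actions can handle that for you.  A sketch (assuming a working local mail setup; the address is a placeholder) that swaps the jail's earlier action line for the bundled "ban and e-mail a whois report" action:</p>
<pre><code class="language-bash"># /etc/fail2ban/jail.d/jail.local
[sshd-invaliduser]
action = %(action_mw)s
destemail = you@example.com
</code></pre>
<p>The <code>action_mw</code> interpolation is defined in the default jail.conf; there is also <code>action_mwl</code> if you want the relevant log lines included in the e-mail.</p>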
<p>fail2ban is a very powerful tool, where the possibilities are nearly endless.  The learning curve also is not too steep.  I hope this was helpful!</p>
<p>Happy Jailin'!</p>]]></content:encoded>
  </item>
  <item>
      <title>Using An OpenWRT One to Extend a Network to a Remote Location</title>
      <link>https://seanland.ca/posts/2025-01-04-using-an-openwrt-one-to-extend-a-network-to-a-remote-location</link>
      <description>I want the compound to be able to access my network at home, heck, I want it to be like the network at home.  If I can get it set up like a network local to the environment, myself and the family will have a lot more of a technically comfortable time at the compound.</description>
      <pubDate>Sat, 04 Jan 2025 00:00:00 GMT</pubDate>
      <guid>https://seanland.ca/posts/2025-01-04-using-an-openwrt-one-to-extend-a-network-to-a-remote-location</guid>
      <enclosure url="https://seanland.ca/img/default.png" type="image/png" />
      <category>self-hosting</category>
      <category>projects</category>
      <content:encoded><![CDATA[<h1>Using An OpenWRT One to Extend a Network to a Remote Location</h1>
<p>First, why this project?  <a href="https://seanland.ca/posts/2024-05-30-the-first-acre-is-the-hardest">The place that has come to be known as the compound</a> is essentially our cottage, or home away from home.  We have our own little basement apartment and love to get away from the city to spend some time there.  We have two sources of internet there: Starlink, which was brought over from my parent's old place, and a home 5G connection I decided to get to see what it is like.  I am hoping to use the 5G connection as a stable backup for a site-to-site connection, for two main reasons.  One would be a site-to-site connection for media and whatever, on a network scale instead of an individual device.  Two would be sending information from sensors and/or IoT devices back to the main servers for processing (future project).</p>
<h2>The OpenWRT One</h2>
<p>As someone who loves DD-WRT and had never used OpenWRT, one day shopping on AliExpress for <a href="https://seanland.ca/watches/">watch parts</a> I came across the OpenWRT One.  I was instantly sold once I read that it was a collaboration between BananaPi and OpenWRT.  Though I have never used either of their products, from what I knew of them, this seemed to be a product I was willing to support (I hope I am not wrong about that).  So, I bought it. </p>
<h2>What Am I Working With?</h2>
<p>The network is already designed to take WireGuard connections at home.  Incoming WireGuard connections terminate at a pfSense firewall and are given specific access depending on the incoming client.  </p>
<p>The goal is to have this network treated in a similar fashion.  Most of the clients are not given access to the servers themselves, but, are provided with a specific DNS server to connect to and one or two proxies to redirect the approved traffic.  This will be given to all the clients of the compound site.  </p>
<h2>Let's Get Started</h2>
<figure>
<img src="https://seanland.ca/img/2025/openwrt-one/openwrtbox_1200ma_.png">
<figcaption>The box!</figcaption>
</figure>

<p>I opened the box to the new toy and - as usual - started running into the typical technical hurdles:</p>
<ol>
<li>Went to connect to WiFi; Oh, it's disabled by default</li>
<li>Went to connect via ethernet; Oh, my laptop doesn't have an ethernet adapter</li>
<li>Finally, connected!  Let's update the firmware; What do you mean this version doesn't have the UI installed</li>
<li>Now how do I change the IP addresses...</li>
</ol>
<p>Anyways, after a silly amount of time and lessons learned, we are at a spot where we have a device to configure.</p>
<p>The default LAN IP Address is the infamous <code>192.168.1.1</code>.  Let's SSH into the box and get that changed: </p>
<pre><code class="language-bash">uci set network.lan.ipaddr='192.168.100.1'
uci commit network
/etc/init.d/network restart
</code></pre>
<p>I restarted the unit to receive a new IP address.  </p>
<p>Next step is to install the required packages for WireGuard (references to LuCI are the packages for the UI).  </p>
<pre><code class="language-bash">apk update
apk upgrade
apk add luci wireguard-tools luci-proto-wireguard
</code></pre>
<p>Now, I will reboot again (or you can restart the services).</p>
<p>At this point let's generate the public and private key pair. </p>
<pre><code class="language-bash">wg genkey | tee wg.key | wg pubkey &gt; wg.pub
</code></pre>
<p>This will produce a private key and a public key, placing them in the files <code>wg.key</code> and <code>wg.pub</code> respectively.  We will need these for the configuration on the server and on this, the client.  Now, take the keys and some other parameters and place them in environment variables, so we do not store these in plain text anywhere on the machine; we will delete the files after.</p>
<p><strong>NOTE: Make sure you obtain the peer public key as well.  I have placed that in <code>wg-peer.pub</code>.  This will be used for the peer connection.</strong></p>
<p><strong>NOTE 2: All this can be done via the CLI - awesome - I just decided to do a more visual approach since I was learning the UI, CLI and OpenWRT in general!</strong></p>
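<p>For reference, the CLI equivalent of the UI steps that follow is a couple of sections in <code>/etc/config/network</code>.  This is only a sketch: the interface name, addresses, endpoint and key placeholders are example values, not my actual configuration.</p>
<pre><code class="language-bash"># /etc/config/network (sketch)
config interface 'wg0'
        option proto 'wireguard'
        option private_key '&lt;contents of wg.key&gt;'
        list addresses '10.0.0.2/24'

# Peer section: the section type embeds the interface name
config wireguard_wg0
        option description 'home-network'
        option public_key '&lt;contents of wg-peer.pub&gt;'
        option endpoint_host 'vpn.example.com'
        option endpoint_port '51820'
        list allowed_ips '0.0.0.0/0'
        option persistent_keepalive '25'
        option route_allowed_ips '1'
</code></pre>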
<h2>Poking Through The UI To Get This Done</h2>
<h4>Again, we will be going through the UI as examples, this can all be done easily with the command line and uci.</h4>
<p>I did not go through the details of the menu, so you may have to do some hunting.  I went more specific into the configuration piece.  </p>
<p>So, start by hunting down the interface page and adding a new interface.  Set the interface to the WireGuard VPN type, which will be available now that the packages are installed; it wouldn't have been there before.  Once it has been added you will be brought to the next page, shown below.     </p>
<figure>
<img src="https://seanland.ca/img/2025/openwrt-one/wg-gen_1200ma_.png">
<figcaption>The general settings for the WireGuard interface.</figcaption>
</figure>

<p>This is where we insert the information on the local machine, aka the OpenWRT One.  The information to be filled in is the private key, public key and IP Address.  The public and private keys were the ones generated earlier.  After that we move to the Advanced Settings tab. </p>
<figure>
<img src="https://seanland.ca/img/2025/openwrt-one/wg-adv_1200ma_.png">
<figcaption>The advanced settings for the WireGuard interface.</figcaption>
</figure>

<p>This is where we can set the custom DNS server (read the caveats section!) as well as disable the IPv6 assignment, which I am currently not looking to utilize.  After that, move to the Firewall Settings. </p>
<figure>
<img src="https://seanland.ca/img/2025/openwrt-one/wg-fw_1200ma_.png">
<figcaption>The firewall settings for the WireGuard interface.</figcaption>
</figure>

<p>Here we are simply creating a new zone called "vpn".  We are skipping the DHCP Server tab and moving to the Peers tab. </p>
<figure>
<img src="https://seanland.ca/img/2025/openwrt-one/wg-peers_1200ma_.png">
<figcaption>The peers settings for the WireGuard interface.</figcaption>
</figure>

<p>This is the last tab and where we will be setting up the "peer" or the "home network".  </p>
<figure>
<img src="https://seanland.ca/img/2025/openwrt-one/wg-edit-peer_1200ma_.png">
<figcaption>Adding the details for the peer.</figcaption>
</figure>

<p>This is where we configure the destination.  The key things to fill out:</p>
<ul>
<li>The public key:  This is the peer public key.  The one you got from the other end of the connection</li>
<li>Description: Yeah, call it something. </li>
<li>Allowed IPs: This is where we want to set 0.0.0.0/0.  This will route all IPs to this peer.</li>
<li>Endpoint Host: The domain or IP address of the peer. </li>
<li>Endpoint Port: The port the connection will be created on. </li>
<li>Persistent Keep Alive: Set this to 25.</li>
</ul>
<p>Now save this peer.  Lastly, we want to head to the Firewall Zones and configure the VPN zone.    </p>
<figure>
<img src="https://seanland.ca/img/2025/openwrt-one/fw-zone_1200ma_.png">
<figcaption>The VPN zone configuration.</figcaption>
</figure>

<p>Simply, this configuration - as shown above - is to allow the lan traffic to go through the wg network.  </p>
<p>After this point, one last restart should have you on the way! </p>
<h2>Caveats With This Setup!</h2>
<p>Two big things I want to point out:</p>
<ol>
<li>The DNS will be leaking like a firehose.</li>
<li>There is definitely more firewall finetuning that can take place. </li>
</ol>
<p>The DNS situation is tricky, as we want it to be established "PostUp" of the interface.  In plain WireGuard, I can simply add a one-liner that switches to an internal DNS server (one served from the peer side).  I attempted to use a hotplug script on OpenWRT, but it would break the VPN tunnel upon restart (from what I can tell).  The endpoint (the peer is a domain) would resolve to a local IP and not work.  This is probably the approach I will have to take, but I need to do some more testing.  For now, I have gotten around it by defining the DNS on the machine itself.  Fortunately, I do not have a large number of machines connecting to that network, especially now, and the connection is going to be a permanent full tunnel; so, little hassle.</p>
<p>On the security aspect, there is not a lot of risk with how it is, though I could tweak the actual rules beyond the zones.  I do have a number of proxies and a firewall beyond this peer, so there are additional mitigating factors.  A few other things I will do in the future: remove the UI and simply do all the commands via SSH.  After that, I will just lock the unit down in terms of access, both to the box and for the traffic.</p>
<h2>Takeaways</h2>
<p>This is my first time using OpenWRT and WireGuard to establish a tunnel from one site to another for personal use.  I am excited to be able to utilize that network just like I am at home.  I will also have network services running at the compound just like home, IE. media and - even better - the PiHole.  The latency and performance decrease over the VPN should be negligible or absolutely minimal.  Worst case, after all of this, I have also built a portable firewall and router.  This is a great little project for anyone that wants to travel for work and pleasure.  It is configured in such a way that it should be easily transplanted anywhere you go (unless said location blocks the traffic).</p>
<p>I hope this was helpful for anyone else looking to do something similar.  It is fairly simple to get up and running.  Good Luck! </p>]]></content:encoded>
  </item>
  <item>
      <title>Keeping My Mastodon Instance Lean</title>
      <link>https://seanland.ca/posts/2024-12-13-keeping-mastodon-instance-lean</link>
      <description>The internet is full of information.  People live on social media.  Mastodon, being self-hosted can be a space hog real quick.  This is how I keep it lean.</description>
      <pubDate>Fri, 13 Dec 2024 00:00:00 GMT</pubDate>
      <guid>https://seanland.ca/posts/2024-12-13-keeping-mastodon-instance-lean</guid>
      <enclosure url="https://seanland.ca/img/default.png" type="image/png" />
      <category>self-hosting</category>
      <category>projects</category>
      <content:encoded><![CDATA[<h1>Keeping My Mastodon Instance Lean</h1>
<p>A year ago, I ran into an issue where my Mastodon instance locked up.  Everything appeared to be working; however, posts weren't updating and I was unable to post myself.  It was strange as no settings had been changed and there hadn't been an outage.  </p>
<p>To the logs I go... </p>
<p>It appears that I had run out of space.  This was inevitable, as I run my Mastodon instance on a self-hosted Kubernetes cluster with tuned, locked resources.  So, how do I fix it?  To the internet I go.  </p>
<p>I came across the blog post by <a href="https://ricard.social/@ricard">Ricard</a> entitled <a href="https://ricard.dev/improving-mastodons-disk-usage/">Improving Mastodon’s disk usage</a>.  A great description is provided as to what each command does.  Essentially, that post is the foundation to what is shared here.  </p>
<p>Similarly, I will be setting up a cronjob.  I plan to run a monthly clean up job, removing anything older than two weeks.</p>
<p>This is what the script looks like.  I shall call it <strong>mastodon-cleaner.sh</strong>.</p>
<pre><code class="language-bash">#!/bin/bash

# Run the command and assign the output to a variable
output=$(kubectl get pods -o wide | grep mastodon)

# Use awk to get the first word
instance=$(echo "$output" | awk '{print $1}')

# Prune remote accounts that never interacted with a local user
kubectl exec -it $instance -- tootctl accounts prune

# Remove remote statuses that local users never interacted with older than 14 days
kubectl exec -it $instance -- tootctl statuses remove --days 14;

# Remove media attachments older than 14 days
kubectl exec -it $instance -- tootctl media remove --days 14;

# Remove all headers (including people I follow)
kubectl exec -it $instance -- tootctl media remove --remove-headers --include-follows --days 0;

# Remove link previews older than 14 days
kubectl exec -it $instance -- tootctl preview_cards remove --days 14;

# Remove files not linked to any post
kubectl exec -it $instance -- tootctl media remove-orphans;
</code></pre>
<p><em>mastodon-cleaner.sh</em></p>
<p>You can see the commands are different and so is the "--days" parameter.  As stated earlier, I only want to keep two weeks of history.  Also, this being a Kubernetes instance, the commands are going to be specific to the pod.  That is where the first two commands come in.</p>
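<p>If you want to check that pod-name extraction in isolation, here is a minimal sketch with mocked kubectl output (the pod name is a made-up example):</p>
<pre><code class="language-bash"># Mocked line in the shape of `kubectl get pods -o wide | grep mastodon`
output='mastodon-web-7f9c   1/1   Running   0   3d   10.42.0.5   node1'
# Same extraction the script performs: the first field is the pod name
instance=$(echo "$output" | awk '{print $1}')
echo "$instance"
</code></pre>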
<p>Next, the script will be scheduled via cron on the control-plane node of the Kubernetes cluster.  </p>
<pre><code class="language-bash">0 3 1 * * /bin/bash /home/{user}/mastodon-cleaner.sh
</code></pre>
<p>This will run on the first day of every month at 3am.  </p>
<p>The post doesn't discuss the permissions associated with running kubectl, Kubernetes or the cron user's specific permissions; however, I hope it gives a basic perspective into how solutions to others' problems can be adapted to solve your own (Don't forget to give them credit for their work!).  Happy Hosting! </p>
  </item>
  <item>
      <title>Defcon 32 In My Eyes</title>
      <link>https://seanland.ca/posts/2024-08-12-defcon-32-in-my-eyes</link>
      <description>After attending my first Defcon, this is a reflection of my experience.</description>
      <pubDate>Thu, 26 Sep 2024 00:00:00 GMT</pubDate>
      <guid>https://seanland.ca/posts/2024-08-12-defcon-32-in-my-eyes</guid>
      <enclosure url="https://seanland.ca/img/default.png" type="image/png" />
      <category>security</category>
      <content:encoded><![CDATA[<h1>Defcon 32 In My Eyes</h1>
<p>It has been over a month since I attended Defcon 32.  I have had time to process and reflect on my experience.  I wanted to do this much sooner, but didn't really have time to get around to it; specifically so I wouldn't forget anything.  In my defense, I don't think I have forgotten much, which is kind of a testament to my experience.</p>
<figure>
<img src="https://seanland.ca/img/2024/defcon/IMG_20240808_163135_1200ma_.png">
<figcaption>The excitement picture right after I received my badge!</figcaption>
</figure>

<h2>But, What is it?</h2>
<p>Well, what it isn't is <a href="https://blackhat.com/us-24/">Black Hat</a>.  It's funny, people tend to have a perception that Black Hat is the "big hacker conference" that happens every year.  No, no, that is <a href="https://defcon.org/?mob=1">Defcon</a>.  Black Hat is the marketing event people go to prior to Defcon to get information on the different products out there to protect against the potential things you could learn at Defcon.  Okay, that's not entirely true, but it paints a picture. </p>
<p>Black Hat is the marketing event.  "Come buy this", "This is how our company can protect you against this" and "Come get some free swag" are all common phrases at Black Hat.  "Did you figure out your badge?", "Did you check out that village?" and "DON'T FUCK IT UP!" are common phrases at Defcon.  Defcon is the educational and, dare I say, community or social event around cyber security. </p>
<h3>So, tell us about it...</h3>
<p>First, a commercial break for some of the other perks of going to Vegas for a conference.</p>
<figure>
<img src="https://seanland.ca/img/2024/defcon/IMG_20240808_195803_1200ma_.png">
<figcaption>You aren't experiencing Vegas without a show.  Dead and Co. at the Sphere.</figcaption>
</figure>

<h2>The Community</h2>
<p>This is literally the most inclusive community I have ever experienced.  Just to relate, I have played sports all my life, worked many different jobs, attended various pride events and even volunteered for a number of organizations.  I repeat, this is the most inclusive community.  It is both clique-y and cult-y in the most positive ways.  There are tons of small groups within this larger entity targeting specific interests (the clique-y part) as well as a very distinct following, culture and - dare I say - way of life within the conference (that's the cult-y).  It's incredible.  All of this, while being completely mindful of the individuals attending the conference.  IE. Privacy is a big factor and there is a level of respect to ask to take pictures/videos.</p>
<p>You can go there and approach anyone to start a conversation (at least from my experience).  You will find parties and events just from interacting with people.  There are traditions and even events that resonate through the years.  There are even hacking "celebrities" (I saw <a href="https://darknetdiaries.com/">Jack Rhysider from Darknet Diaries</a> and <a href="https://www.youtube.com/channel/UCVeW9qkBjo3zosnqUbG7CFw">John Hammond - Youtube</a>) in attendance taking the roles of judges or hosting parties.</p>
<h2>The Sessions</h2>
<p>The number of sessions and topics feels endless.  You could spend your entire conference experience in one village (a village is basically a topic-based section, IE. Social Engineering, Ham Radio, AppSec, etc.) if you really wanted to. I decided to just show up and pick things as I found them.  This isn't necessarily the best approach, especially because of the limited room in certain sessions and "Linecon" (part of the cult-y piece!).  </p>
<p>I was able to attend sessions on the following topics: </p>
<ul>
<li>Social Engineering - where they do it live, it was super fun.</li>
<li>Introduction to Machine Learning in Quantum Computing - this was right over my head. </li>
<li>An AppSec session on a type of exploit in PDFs - boy, this is going to bother me that I can't remember the name of the exploit. </li>
<li>A hack-along type session on Prototype Pollution - that was great.</li>
<li>Sending temperature information over Ham radio signals - this was informative as I wanted to do a similar project using LoRa</li>
<li>Hacker Jeopardy - more of a must-see event.</li>
</ul>
<figure>
<img src="https://seanland.ca/img/2024/defcon/IMG_20240809_210049_1200ma_.png">
<figcaption>Hacker Jeopardy, a peek from the back of the room.</figcaption>
</figure>

<p>This doesn't include everything, like the activity stations, social pieces, parties or other type of events.  I could have also done way more sessions, however, my focus was just soaking in everything to figure out if this is something I actually enjoy and how do I make the most out of it every year if I do!</p>
<h2>The Badge</h2>
<figure>
<img src="https://seanland.ca/img/2024/defcon/IMG_20240809_114018_1200ma_.png">
<figcaption>The badge opened up to explore how it functions.</figcaption>
</figure>

<p>I honestly feel some people just go for the badges.  This year's badge was quite a badge.  It coincided - intentionally - with the Raspberry Pi launch of the RP2350.  The badge itself was a custom board built around the RP2350 specifically for the conference.  You should look up the specs yourself, but, in short, it had two modes: badge mode, where you could customize your lights and look fairly unique if you chose, and game mode.  In game mode, you were literally playing a Game Boy game of the conference, exploring, collecting QR codes and even using the game as a map.  It was running custom firmware with a Game Boy emulator, so you could literally just play Pokemon instead if you wanted to.  Yes, I saw someone do it, and yes, I started a conversation with "Is that Pokemon Yellow?"</p>
<h2>Would I Do It Again?</h2>
<p>The day we were leaving Vegas, I told my friend this has become my annual educational conference.  One month later, I still feel the exact same way.  I have learnt things, had the opportunity to attend some fun parties (Vegas pool parties and off-Strip penthouse parties), made new friends and just had a great time doing what I enjoy.  Knowing what I do now, I would plan some activities ahead.  I would try and prioritise getting to the store and a few of the hardware based sessions.  I will also look ahead on the Hacker Tracker App and be more conscious of which are my "Must Do" sessions.  I really wish I had attended the Cruise Ship hacking simulation (yes, there were many CTFs like that).  I still have so much to learn about the conference, events and history.  Things like the Illuminati party, Goons and Black Badges.  I also want to get more involved with the community.  So much to do, for one weekend of the year.  Hope to see you at Defcon 33!  </p>
  </item>
  <item>
      <title>The Traveling Tech Stack</title>
      <link>https://seanland.ca/posts/2024-07-06-the-traveling-tech-stack</link>
      <description>As we begin to build out the cottage area, technology is going to become more of an integral part.  Here is what I am currently lugging around (not necessarily all at once).</description>
      <pubDate>Mon, 05 Aug 2024 00:00:00 GMT</pubDate>
      <guid>https://seanland.ca/posts/2024-07-06-the-traveling-tech-stack</guid>
      <enclosure url="https://seanland.ca/img/default.png" type="image/png" />
      <category>personal</category>
      <category>tech</category>
      <content:encoded><![CDATA[<h1>The Traveling Tech Stack</h1>
<p>We have hit <a href="https://seanland.ca/100-days-to-offload">double digits on #100DaysToOffload</a>.  We have the new space.  I am heading to Vegas for Black Hat and DefCon.  I foresee an endless amount of time spent between the house and "The Compound" (the name is still a work in progress).  There is a <a href="https://seanland.ca/posts/2024-05-30-the-first-acre-is-the-hardest">lot of work to be done at "The Compound"</a>; as progress, we do have the fence installed now!  There is also a never-ending list of stuff to do at the house.  </p>
<p>With that, portability and parity are key when it comes to the technology I use on a daily basis.  I have fully migrated away from using a desktop, yes, even for gaming.  I do not need to have heavy - well, super heavy - processing power on me when I have an internet connection.  Even if I don't have an internet connection, it can most likely wait.  </p>
<figure>
<img src="https://seanland.ca/img/2024/tech/IMG_20240705_221741_1200ma_.png">
<figcaption>The late night theatre and gaming setup.</figcaption>
</figure>

<h2>The Goods</h2>
<p>These have been accumulated over the years.  They have also been the product of continual learning and pivoting.  One of the biggest factors was buying a GPD G1 external graphics card, after trying the Razer Core X with an Nvidia 760.  Trial, error and sticking with what works has brought me to these pieces of technology: </p>
<ul>
<li>Framework 13 - yes, basically maxed out.  </li>
<li>Win4 - yes, also maxed out.</li>
<li>Fiio X5 (Gen 2)</li>
<li>Jackery 1000 with two 100W solar panels</li>
<li>Dewalt DCR010 Bluetooth Speaker</li>
<li>Viewsonic VG1655 Portable Monitor</li>
<li>ViewSonic M1 Mini Portable LED Projector</li>
<li>Logitech M510</li>
<li>Foldable Bluetooth keyboard from AliExpress</li>
<li>2 x-mini II speakers</li>
<li>Miscellaneous chargers and cables </li>
</ul>
<h2>The Possibilities</h2>
<p>The power that this setup packs is quite impressive.  The space this all takes up (though I don't really take it all at once) is very, very minimal.  What you have to keep in mind is that this is also excessive depending on the purpose of the trip.  </p>
<h3>Off Grid Everything!</h3>
<p>All the devices have batteries.  A constant power source is not required.  What if you did have a constant power source?  Like the sun?  I guess that means you could use these devices everywhere.  </p>
<figure>
<img src="https://seanland.ca/img/2024/tech/IMG_20240705_122507_1200ma_.png">
<figcaption>Jackery getting Jacked!</figcaption>
</figure>

<p>Outdoor movies, firepit music, portable gaming, and the list goes on.  Everything, everywhere. </p>
<h3>Doggie Radio</h3>
<p>The x-minis and the Fiio X5 now have a semi permanent place as the doggie radio.  I have had different versions of this with a Raspberry Pi, where I have <a href="https://git.snld.ca/seanland/doggie-radio">programmed a basic php site to use mpd to play playlists of music</a>.</p>
<figure>
<img src="https://seanland.ca/img/2024/tech/IMG_20240705_215147_1200ma_.png">
<figcaption>Portable Doggie Radio</figcaption>
</figure>

<p>This is the offline version.  It gets the job done, providing ample volume to hide the trucks and low-grade thunder that Chase and Zoey fear.  </p>
<h3>Desktop and Portable Gaming Experience</h3>
<p>Last but not least, the true home away from home experience.  Aside from a nicer mouse and keyboard, the only difference away from home is the dual 4k monitors and the G1 eGPU.  In terms of usability, I can do anything I want away from home; sure, things come on a smaller monitor at a lower resolution, but they are all still possible.  I also have the exact same environment (minus the above) and data when out and about.  If I have internet, I have VPN access to my infrastructure, similar to what I have at home (well, with some intentional security limitations).  Home away from home. </p>
<h2>Pack the Bags</h2>
<p>As I pack up for my trip to Vegas, what I bring is determined more by what I have time to do than by what will fit.  I could bring everything if I wanted (minus obvious redactions and replacements, like headphones); it is just a matter of what is worth bringing.  That is the struggle.  Will I have time to game?  Will I want to write from the hotel room?  On the plane?</p>
<p>What does your travelling tech stack look like?  <a href="https://seanland.ca/contact">Let me know!</a></p>]]></content:encoded>
  </item>
  <item>
      <title>Project: Office Shed</title>
      <link>https://seanland.ca/posts/2024-07-03-project-office-shed</link>
      <description>Tackling one project at a time, here is an initial look at the Office Shed project with some visionary notes.</description>
      <pubDate>Wed, 03 Jul 2024 00:00:00 GMT</pubDate>
      <guid>https://seanland.ca/posts/2024-07-03-project-office-shed</guid>
      <enclosure url="https://seanland.ca/img/default.png" type="image/png" />
      <category>projects</category>
      <category>seanland</category>
      <content:encoded><![CDATA[<h1>Project: Office Shed</h1>
<p>We are up to <a href="https://seanland.ca/100-days-to-offload">Day 9 of #100DaysToOffload</a>, pacing a little slower than I would like, but, there is still time!  There is also no lack of content!  I will need to build <a href="https://seanland.ca/posts/2024-06-22-bye-bye-bunnyman">off of my first sale</a>, <a href="https://seanland.ca/posts/2024-05-30-the-first-acre-is-the-hardest">stay up to date on projects (like this post)</a>, <a href="https://seanland.ca/posts/2024-04-11-revisting-a-vacationing-mind">spew out the millions of other things running through my head</a> and just remain focused on continually writing.  </p>
<p>The first major project (<a href="https://seanland.ca/posts/2024-05-30-the-first-acre-is-the-hardest">refer back here</a>) that I will be undertaking is the office.  My work sanctuary away from home.  Well, my play sanctuary away from home as well.  This will be the space I work my day job from; not to be confused with the workshop space in the garage, which I will have for my woodworking and lasering projects. </p>
<h2>What Do We Have?</h2>
<p>We have a structure.  An 80+ year old, former chicken coop, readapted into an insulated storage shed, to be transformed into the "Office Shed" or, as Jacob would like to put it, the "Gaming Room".  It does have two windows, one of which is operable for some fresh air.    </p>
<figure>
<img src="https://seanland.ca/img/2024/shed/office-shed-rocketbook_1200ma_.png">
<figcaption>The Top Down Measurements</figcaption>
</figure>

<p>What is that square in the middle of the drawing, you ask?  Well, that is - most likely - structural support.  Why is it there?  I am not sure, which is probably the most concerning part.  The room doesn't seem large enough to require a support pillar in the middle of it.  </p>
<p>Call it a feature or a bug!  Let's call it a feature, we are going to work with it and turn it into a support wall; creating an office and a lounge! </p>
<h3>The Front</h3>
<figure>
<img src="https://seanland.ca/img/2024/shed/IMG_20240627_200840_1200ma_.png">
<figcaption>The Outside of the Shed</figcaption>
</figure>

<p>The sliding doors have grown on me, not sure if the colour has yet.  It definitely needs a wash down and some accenting features, maybe a lawn gnome or a frog, potentially an "on-air" light?  I don't have any real plans for the exterior at this time except maybe a wash down with a power washer and some sort of footer to prevent tiny rodents from finding a home.</p>
<p>The one super cool idea that will happen is adding double-sided reflective tint to the glass of the door.  I will be taking down the blinds and putting up that reflective tint.  That should give it a nice darkened effect on the inside and hopefully help to manage the temperature internally. </p>
<figure>
<img src="https://seanland.ca/img/2024/shed/IMG_20240627_200859_1200ma_.png">
<figcaption>The Inside of the Shed</figcaption>
</figure>

<p>Taking a step in, yes, it does require a lot of work.  The obvious thing to point out is the oil on the floor.  That is where the generator used to be stored.  The carpet will be pulled and the floor will be finished.  I am thinking some sort of flooring <a href="https://seanland.ca/posts/2024-05-31-the-closet-not-a-horror-movie">comparable to what I used in the closet project</a>.  It is cheap and comfy, especially when you throw down some bean bag chairs!</p>
<h3>The Office</h3>
<p>Looking through the imaginary wall, this side is coined "the office".  This will be the section, where I have my work monitors, a little space for electronic projects and some pictures to create a lovely, real life, zoom backdrop.  </p>
<figure>
<img src="https://seanland.ca/img/2024/shed/IMG_20240627_200915_1200ma_.png">
<figcaption>The View of the Office Space</figcaption>
</figure>

<p>Ideally, I am looking at turning the wall into some sort of simple shelf storage.  I am playing with the idea of tiny shelves, a mini hidden fridge (basically a fan on a plate to pull away heat, like the silly USB fridges that were a fad in the early 00s) or a hidden bar.  Maybe I will end up doing a little of all three.  </p>
<h3>The Lounge</h3>
<p>The fun space.  The place to go on a cool summer night, when the fire pit isn't in use, or there is a light drizzle and you just want to chill.  Pulling out a fizzy beverage and just letting the games, shows or movies take you away from the real world for a short period of time.  </p>
<figure>
<img src="https://seanland.ca/img/2024/shed/IMG_20240627_200908_1200ma_.png">
<figcaption>The Views of the Lounge Space</figcaption>
</figure>

<p>Mounting a larger TV on the wall, having some consoles, or maybe a HTPC to play games on the television, or do we hook the GPD G1 dock up to it?  Is there going to be a second smaller screen to multi task playing games and watching the Leafs play (<a href="https://seanland.ca/posts/2024-05-07-flipping-leaf-tickets-for-additional-income-or-not">plug about me flipping playoff tickets</a>)?  What is max capacity?  I think having four people comfortably fit would be ideal; maybe it has to be two?  Maybe there is a vision of a swing wall to shrink the office to nothing and expand the area?</p>
<h2>The Vision</h2>
<p>Is to have a space, then a space within a space.  This is an attempt to work more freely away from my basement, as well as have more fresh air and accessibility to the outdoors when geeking out.  I mean, obviously, leaving the basement isn't physically that difficult, but, that is not the hard part.  There is always something to do along the way.  It isn't a place to focus.  Simply being here, while it is empty, waiting for the fence to be put up, I am so much more productive; a lot of the reason comes from - what I suspect is - <a href="https://seanland.ca/posts/2024-04-11-revisting-a-vacationing-mind">my vacationing mind</a>.  This space eliminates the flaws of the basement.   </p>
<p>Have an idea?  <a href="https://seanland.ca/contact">Feel free to reach out, would love to hear your thoughts!</a></p>]]></content:encoded>
  </item>
  <item>
      <title>Bye Bye Bunnyman, My First eBay Sale!</title>
      <link>https://seanland.ca/posts/2024-06-22-bye-bye-bunnyman</link>
      <description>I have completed my first transaction on eBay through the Seanland Entertainment brand.  This is a cost analysis of the one product sold.</description>
      <pubDate>Sat, 22 Jun 2024 00:00:00 GMT</pubDate>
      <guid>https://seanland.ca/posts/2024-06-22-bye-bye-bunnyman</guid>
      <enclosure url="https://seanland.ca/img/default.png" type="image/png" />
      <category>business</category>
      <content:encoded><![CDATA[<h1>Bye Bye Bunnyman, My First eBay Sale!</h1>
<figure>
<img src="https://seanland.ca/img/2024/bunnyman-email_1200ma_.png">
<figcaption>The exciting email in the inbox!</figcaption>
</figure>

<p><a href="https://seanland.ca/posts/2024-04-15-so-I-started-business">After starting up the business</a> and <a href="https://seanland.ca/posts/2024-04-25-my-first-inventory-score">acquiring some inventory</a>, we finally have our first sale!  It occurred approximately two months after officially starting the business.  So, what has happened in that time?</p>
<h2>What Have I Done in Two Months?</h2>
<p>In short, not much.  Thinking about it, this sale probably took so long because it is the <strong>only</strong> DVD that I adjusted the price for.  All the other DVDs are priced at $10, simply to get in the motion of posting items.  When I posted this, I noticed the other historic listings for a DVD of Bunnyman were priced over $20 CAD.</p>
<p>A lot of thought has been put into how to make it easier, in my defense!  I have been playing around with different LLM models to try and find a way to optimize my posting and information gathering.  I am hoping to use image detection to identify a UPC, take that information, scrape what the media title is and gather all the appropriate information that needs to be posted on the listing.  </p>
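<p>As a taste of what that pipeline involves, here is a minimal sketch of one early step: checking that a detected barcode is a plausible UPC-A code before spending a metadata lookup on it.  The check-digit math below is the standard UPC-A algorithm; the rest of the pipeline (image detection, scraping) is still hand-waving on my part.</p>

```python
def upc_a_is_valid(upc: str) -> bool:
    """Validate a 12-digit UPC-A code against its check digit.

    Digits in odd positions (1st, 3rd, ..., 11th) are weighted 3 and
    digits in even positions weighted 1; the final (12th) digit brings
    the weighted sum up to a multiple of 10.
    """
    if len(upc) != 12 or not upc.isdigit():
        return False
    digits = [int(c) for c in upc]
    total = 3 * sum(digits[0:11:2]) + sum(digits[1:11:2])
    return (10 - total % 10) % 10 == digits[11]

# Only a code that passes the checksum is worth a metadata lookup.
print(upc_a_is_valid("036000291452"))  # True  (a commonly cited valid UPC)
print(upc_a_is_valid("036000291453"))  # False (corrupted last digit)
```

<p>A scan that fails this check means the image detection misread the barcode, so the lookup can be skipped or the photo retried.</p>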
<h2>Breaking Down the Sale</h2>
<figure>
<img src="https://seanland.ca/img/2024/ebay-details_1200ma_.png">
<figcaption>The details of the eBay transaction.</figcaption>
</figure>

<p>Yay! $18.00 CAD, or is it?</p>
<table>
<thead>
<tr>
<th>Costs</th>
<th>Amount</th>
</tr>
</thead>
<tbody>
<tr>
<td>Product</td>
<td>$0.00</td>
</tr>
<tr>
<td>Thank You Card</td>
<td>$0.04</td>
</tr>
<tr>
<td>Envelope</td>
<td>$0.41</td>
</tr>
<tr>
<td>Shipping</td>
<td>$3.88</td>
</tr>
<tr>
<td>eBay Fees</td>
<td>$3.89</td>
</tr>
<tr>
<td><strong>Total Costs</strong></td>
<td><strong>$8.22</strong></td>
</tr>
</tbody>
</table>
<p>So, this was eye opening, but not overly unexpected.  It's not cheap to turn over inexpensive product.  I felt quality was important, so I increased expenses probably a little more than I really had to, but, long term, the above breakdown should decrease.  It will, however, decrease marginally and at a much greater upfront expense, i.e., buying in larger bulk quantities.  </p>
<table>
<thead>
<tr>
<th>Revenue</th>
<th>Amount</th>
</tr>
</thead>
<tbody>
<tr>
<td>Sale Price</td>
<td>$18.00</td>
</tr>
<tr>
<td><strong>Total Profit</strong></td>
<td><strong>$9.78</strong></td>
</tr>
</tbody>
</table>
<p>A profit (and not the biblical kind)!  Well, there is kind of an obvious asterisk here.  The product was "free".  Free for this project at least.  These are the DVDs I wish to offload in a fun fashion.  The rarity of, and desire for, this DVD has definitely made it profitable, but, unless I am selling all of my DVDs for $15+, I am predicting some losses in my future.</p>
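<p>For anyone following along at home, the arithmetic above checks out; a few lines of Python (using only the figures from the two tables, nothing new) confirm it:</p>

```python
# Cost figures from the breakdown table, in CAD.
costs = {
    "product": 0.00,
    "thank_you_card": 0.04,
    "envelope": 0.41,
    "shipping": 3.88,
    "ebay_fees": 3.89,
}
sale_price = 18.00

total_costs = round(sum(costs.values()), 2)
profit = round(sale_price - total_costs, 2)

print(total_costs)  # 8.22
print(profit)       # 9.78
```

<p>Shipping and eBay fees alone eat about 43% of the sale price, which is really the whole story of this post in one number.</p>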
<h2>Where do I Increase Margins?</h2>
<figure>
<img src="https://seanland.ca/img/2024/excel-breakdown_1200ma_.png">
<figcaption>The estimated costs based on what was online.</figcaption>
</figure>

<p>Looking at it long term, the easiest ways to increase margins is through volume:</p>
<ul>
<li>Increasing the number of sales will decrease the transaction fees</li>
<li>Buying media in bulk will decrease the cost of product</li>
<li>Buying shipping material in bulk will decrease the per item costs</li>
<li>Using one shipping company will, again, decrease the cost per unit as spend increases</li>
</ul>
<p>Increasing the revenue is a little trickier.  This is where the only real options are:</p>
<ul>
<li>Selecting more specific items, IE. Highly desired, limited edition, speciality items</li>
<li>Build a community, where repeat purchases are rewarded</li>
<li>both!</li>
</ul>
<h2>What's Next?</h2>
<figure>
<img src="https://seanland.ca/img/2024/excel-inventory_1200ma_.png">
<figcaption>The current state of inventory tracking.</figcaption>
</figure>

<p>Some easy checkboxes to hit off:</p>
<ul>
<li>Build out a custom store!  Leaning on the "community" piece.  If there is a place that customers can keep returning to, knocking out that transaction fee, prices can stay cheap and the experience can ideally be better</li>
<li>Keep up with the inventory.  There are at least 100 DVDs and CDs to add online. </li>
</ul>
<h2>The Reality</h2>
<p>There are two realities to face: one, this is a hobby and two, the margins are low.  At this stage, the biggest requirement is time.  Based on the current return on the time spent, it has to be considered a for-fun project and/or money pit.  And, it is.  </p>
<p>Though I say all of this, it is a hobby and I enjoy it.  Over time, there is potential for it to change.  If I am to knock off the majority of those check boxes, maybe, just maybe, this could become a realistic business opportunity.  I mean, people do it all the time.  </p>
<p>At the end of the day, my goal is just to expand my personal CD collection, if I can cover it with this hobby or make some money along the way to expand my collection, that sounds like a win! </p>]]></content:encoded>
  </item>
  <item>
      <title>The Closet: Not a Horror Movie</title>
      <link>https://seanland.ca/posts/2024-05-31-the-closet-not-a-horror-movie</link>
      <description>Well, it might be a horror movie to some.  This is how my closet renovation went down.  My first major self completed (ish) renovation.</description>
      <pubDate>Sat, 01 Jun 2024 00:00:00 GMT</pubDate>
      <guid>https://seanland.ca/posts/2024-05-31-the-closet-not-a-horror-movie</guid>
      <enclosure url="https://seanland.ca/img/default.png" type="image/png" />
      <category>personal</category>
      <category>projects</category>
      <content:encoded><![CDATA[<h1>The Closet: Not a Horror Movie</h1>
<p>Six days before baby Emilia was born, we encountered a sneaky, deadly, infectious nemesis lurking in our closet.  Mold.  It had been the result of a slow leak from a bathroom renovation the year prior; one drop at a time, every shower, for an entire year.  </p>
<p><em>This is going to be a picture heavy post, well, only five pictures.</em></p>
<figure>
<img src="https://seanland.ca/img/2024/IMG_20240324_190631_1200ma_.png">
<figcaption>The reveal, up close</figcaption>
</figure>

<figure>
<img src="https://seanland.ca/img/2024/IMG_20240329_183455_1200ma_.png">
<figcaption>The reveal, taking a few steps back</figcaption>
</figure>

<p>The mold was discovered March 24, 2024.  We immediately vacated the room and locked it down to prevent the circulation of spores.  The living room became our new home.  We reached out to our contractor who helped us with the bathroom, hoping he had experience addressing these types of issues.  He was coming from out of town and was available March 30th.  "Perfect, see you then!"</p>
<p>March 30th comes around, and here I am sitting in the hospital waiting for Emilia to enter the world, trying to chase down <a href="https://patrickclarke.ca">Patrick</a>, my brother, at 6am to meet the contractor, as we won't be leaving the hospital any time soon.  Patrick spends the better part of the day supporting the team to ensure Emilia has a mold-free home to come home to.</p>
<figure>
<img src="https://seanland.ca/img/2024/IMG_20240414_165020_1200ma_.png">
<figcaption>No more mold, but, a lot more work.</figcaption>
</figure>

<p>Now, the ball is in my court.  The mold is contained and removed, and there is a blank canvas of a closet in front of me.  The original plan was to replace the carpet; however, Patrick convinced me it might just be better long term to add a non-carpet flooring, so we did. </p>
<p>Not before I sand down the walls - something I always spend too much time on and make a huge mess doing - and give the closet a fresh coat of paint.  The colour of choice is whatever can is the most full in the basement.  I think it is the same colour as our bathroom, but I am not even sure.  Two and a bit coats later...      </p>
<p>Diana starts building out her vision of the closet using the IKEA closet builder.  I am then tasked with building out mine.  We submit our $3000, 800-pound order that requires one shipment and one trip to the closest IKEA warehouse.  Once it arrives, I spend any spare bits of time cleaning and building... </p>
<p>Two months from the original discovery date, here we are.  </p>
<figure>
<img src="https://seanland.ca/img/2024/IMG_20240525_105637_1200ma_.png">
<figcaption>Completed closet, empty</figcaption>
</figure>

<figure>
<img src="https://seanland.ca/img/2024/IMG_20240525_185842_1200ma_.png">
<figcaption>Completed closet, full</figcaption>
</figure>

<p>We are back in the room and living a normal, but different, life with a third human being in the house.  I have way more usable space than I ever had, as well as structured organization for all my different pieces and styles of clothing.  As an example, I have a specific section for my growing collection of Leafs jerseys (<a href="https://seanland.ca/posts/2024-05-07-flipping-leaf-tickets-for-additional-income-or-not">read about me trying to flip my tickets</a>)!</p>
<h2>Conclusion</h2>
<p>This project made me feel capable of doing anything.  It was a great experience and I do still have a lot to learn; however, it was quite an accomplishment.  Not that it was overly difficult or complicated, more so the scale, time to complete and finalization of the project (Right, the baseboards aren't done.  That wasn't in scope, <em>wink</em>).  This project will pave the path for <a href="https://seanland.ca/posts/2024-05-30-the-first-acre-is-the-hardest">the other projects planned for Seanland</a>.</p>]]></content:encoded>
  </item>
  <item>
      <title>The First Acre is the Hardest</title>
      <link>https://seanland.ca/posts/2024-05-30-the-first-acre-is-the-hardest</link>
      <description>For the first time in my life, we have an easily accessible large parcel of land.  Here are the initial plans.</description>
      <pubDate>Thu, 30 May 2024 00:00:00 GMT</pubDate>
      <guid>https://seanland.ca/posts/2024-05-30-the-first-acre-is-the-hardest</guid>
      <enclosure url="https://seanland.ca/img/default.png" type="image/png" />
      <category>personal</category>
      <category>projects</category>
      <category>seanland</category>
      <content:encoded><![CDATA[<h1>The First Acre is the Hardest</h1>
<p>I have finally been able to acquire a property that I can comfortably refer to as Seanland!  </p>
<p>It is a new house for my mother.  It is on just under an acre, but offers a close location with unlimited possibilities!  It contains two smallish outbuildings - think garden shed size - one is insulated, one IS a garden shed.  Beyond that, it has two larger structures.  One is - what I will call - a farmer's garage.  It is where the equipment is stored, such as tractors, mowers, ATVs, snowmobiles, etc.  The other larger structure is a two-bay garage with a hoist.  Yes, a hoist!  This is a handyman's dream (I have some skills to gain)!</p>
<p>Since this property is only approximately thirty minutes from the immediate family, I am hoping this can become the property the family will flock to.  We did like the property as it had a mini-esque basement apartment for us to use; a single bedroom, bathroom and living room/kitchenette.  This provides the three of us with more than enough to stay for a short or an extended period of time.  There are tons of spots to park a trailer and/or pitch a tent for out-of-towners.  There will also be a spare bedroom for an individual or couple.  </p>
<p>It's great!  It also provides the space to start building out some of those projects I have always wanted to!</p>
<h2>Greenhouse</h2>
<p>I have always wanted to have a full-grown greenhouse.  Somewhere we can grow a large amount of fruits and vegetables (and fish - yes, I have always wanted to do aquaponics) in a controlled environment, optimizing the space provided.  Beyond that, I was really hoping to automate it as well.  Long term goals included adding temperature sensors to open and close windows, adding solar power for some minor power needs and maintaining the systems, building out automated feeders for an aquaponics system, metrics around the temperature, moisture and water levels; the list goes on.  </p>
<h2>Mini Spa</h2>
<p>A basic area with a sauna, a firepit nearby (part of the outdoor area further down), a cold bath, an outdoor shower and a wood-fed hot bath/tub.  It should also include some aesthetics, such as fountains, rocks and foliage.  Just an area to chill out, have a beer, relax and promote healthy living!  This will be a lower priority item, but, I definitely think it would be awesome.</p>
<h2>Tiny Office</h2>
<p>The insulated shed is my future office space.  That is the one piece that I have mentally set in stone.  I suspect there will be days I have to drive up to the property and will be working remotely there.  I will need my own space for taking calls and just working in peace.  Apparently, that structure is a now 80-year-old chicken coop that was converted into the former owner's mini house and will become Sean's office.  I will definitely have to redo the flooring, which I have now done once with my brother, so I am almost an expert!</p>
<h2>Dog Run / Backyard</h2>
<p>This will be number one on the list.  The first plan for the property is to fence an area.  Beyond fencing the area, the second plan is to build a bathroom for the dogs.  At any given time, there will be two to four dogs on the property.  That's a lot of poop if unmanaged, heck, two dogs generate enough poop as it is.  This space will also double - to a degree (see next point) - as the backyard.  We should have a space for backyard games, such as spikeball, mini soccer, ring toss, etc.  I am sure some of this will overflow outside the fenced area, to be figured out.    </p>
<h2>Outdoor Entertainment Area (kitchen, firepit, bar)</h2>
<p>Last but not least, the outdoor entertainment area; this is basically the hang out space if I were to describe it in a different way.  The place you go to sit, hang out, have a beverage, chat, etc.  The pieces I want to add are an outdoor kitchen, firepit and bar area.  To me, the outdoor kitchen should have a grill and pizza oven; those are the criteria.  The firepit, a group of chairs around a firepit.  The bar area is the debatable content: does it have TVs, or taps, or a bar top, power?  This is still up in the air, however, the "area" is not.</p>
<p>Beyond all the major projects listed above, I want to enhance the orchard (maybe add some trees to it, or move it to the greenhouse, etc), well, the two big apple trees already in place.  For the winter, it would be awesome to have a rink of some kind; I recall doing that when we were really young.  There is a tiny deck that could be finished up and be a great place to have a coffee in the morning!  Anyways, I am sure this new property will help mix up the blog content a whole lot more!</p>]]></content:encoded>
  </item>
  <item>
      <title>Flipping Leaf Tickets for Additional Income; or not?</title>
      <link>https://seanland.ca/posts/2024-05-07-flipping-leaf-tickets-for-additional-income-or-not</link>
      <description>I have always wanted to flip Leafs tickets.  I had the opportunity to flip Playoff tickets.  This is my story.</description>
      <pubDate>Tue, 07 May 2024 00:00:00 GMT</pubDate>
      <guid>https://seanland.ca/posts/2024-05-07-flipping-leaf-tickets-for-additional-income-or-not</guid>
      <enclosure url="https://seanland.ca/img/default.png" type="image/png" />
      <category>business</category>
      <content:encoded><![CDATA[<h1>Flipping Leaf Tickets for Additional Income; or not?</h1>
<p>While on paternity leave, I saw an email come in stating "Leafs Playoff Presales start tomorrow at 2pm".  I thought, heck, I have never been able to get Leafs playoff tickets myself.  I am sure I could flip these; worst case, I will just go to a playoff game.  </p>
<h2>What Did I Buy and For How Much?</h2>
<p>I was able to secure two sets of tickets.  I was originally going to settle for one guaranteed set, then decided "hey, if I can get access to more of these tickets, that would be great!".  I was able to purchase tickets in the Gold section, on the corner where the Leafs shoot twice!</p>
<h3>First Set</h3>
<p>Sec 110, Row 23, Seat 3</p>
<p>Sec 110, Row 23, Seat 4</p>
<ul>
<li>$689 per ticket</li>
<li>$23.25 per ticket service fee</li>
<li><strong>Total Price</strong> $1424.50</li>
</ul>
<h3>Second Set</h3>
<p>Sec 117, Row 15, Seat 3</p>
<p>Sec 117, Row 15, Seat 4</p>
<ul>
<li>$815.00 per ticket</li>
<li>$23.25 per ticket service fee</li>
<li><strong>Total Price</strong> $1676.50</li>
</ul>
<p>With a cost of $3101 CAD, I now had 4 great tickets to Games 4 and (potentially) 6 of the first round of the playoffs in Leafs Nation!  I was super excited and was expecting to make a few extra bucks flipping these tickets!</p>
<h2>Posting the tickets!</h2>
<p>Game 4 was posted for $1000 per ticket; Game 6 was posted for $1100 per ticket.  The expected return after Ticketmaster fees was $3780, a smooth $679 for queuing up and finding some great seats to share with the public at a nominal finder's fee.  Or so I thought...</p>
<h2>Game 4</h2>
<p>Two days prior to the game, the tickets sold.  They flew off the shelf like hot cakes.  The Leafs were down 2-1; this was going to be the game where they tied it at two!  Highly desirable!  SOLD!</p>
<p>I am sitting here, up $375.50 on the first pair of tickets.  Nice!</p>
<h2>Game 6</h2>
<p>The return to Toronto, the Leafs down 3-1 in the series.  The season on the verge of another heartbreak.  Toronto fans are harsh and overly negative at these points.  I wait and wait, the tickets burning a hole in my metaphorical pocket.  What do I do?  Let's hedge my bets.  At noon, game day, I drop my prices to break even, taking into account the margin I have made on the first set.  I am baffled at the number of people dumping tickets below face value.  I am not the only one looking at taking a loss on the tickets.</p>
<p>Three hours before game time, the tickets still aren't selling.  Do I want to go negative on these in order to dump them?  Hell no.  I reach out to some friends and family: "who wants to come to the game for $300 and beer", a completely fair offer if I say so myself.  A number of people really did try to join, but I unreasonably didn't give enough notice for this grand opportunity.  No takers; at that price, can I even get someone to come for free? </p>
<h2>The Game</h2>
<p>Luckily, I had a friend downtown, not a die-hard hockey fan - actually - a person who had never been to a live hockey game, let alone a Boston - Toronto game, let alone a playoff game, let alone an elimination game; you get the point.  He was very gracious, and to his credit, I did not pay for a single thing while I was there.   </p>
<figure>
<img src="https://seanland.ca/img/2024/IMG_20240502_210500_1200ma_.png">
<figcaption>My future career: professional sports photographer, if I do say so myself.</figcaption>
</figure>

<p>The game ended in a 2-1 victory for the "good guys".  They had successfully pushed for a Game 7 back in Boston.  An event that had been a repeating nightmare for this group, but, this time will be different, right?</p>
<h2>The Result</h2>
<p>The game was great.  It did, however, cost me $1301, plus two Uber rides and one train ride; approximately $1500.  I understood the risk.  I did get to see a great hockey game, and it was a unique experience.  I did get to save some money on the ticket expenses by selling the first set.  I did "pick" the better of two games to go to.</p>
<p>However, the real winners here are the Boston Bruins, who advance to the next round, and Ticketmaster, who generated at least $600 off of me, which actually blows my mind.  I think this was fun to try at least once, but my advice is to just wait until moments before the game and try and pick something up.  Flipping long term probably does have a profitable path, but it comes at a cost.  </p>]]></content:encoded>
  </item>
  <item>
      <title>My First Inventory Score</title>
      <link>https://seanland.ca/posts/2024-04-25-my-first-inventory-score</link>
      <description>I have obtained some product to sell and it is not coming from my own possessions!</description>
      <pubDate>Fri, 26 Apr 2024 00:00:00 GMT</pubDate>
      <guid>https://seanland.ca/posts/2024-04-25-my-first-inventory-score</guid>
      <enclosure url="https://seanland.ca/img/default.png" type="image/png" />
      <category>business</category>
      <content:encoded><![CDATA[<h1>My First Inventory Score</h1>
<p>I started off with posting my own possessions; some old DVDs that have been sitting under the couch in a plastic bin.  This would be the most profitable strategy if I consider them to have zero monetary value being in my own house, however, it is not a sustainable one (nor, actually, profitable).  </p>
<h2>Let's Pivot to the Business Goal</h2>
<p>Easy, to make money.  I am not looking to become a millionaire.  I am looking for a sustainable way - that is fun - to make hobby money and expand my collection of media; specifically my CDs.  This will be a side hustle, a learning experience and a whole lot of fun.  So far, it has completely been a money suck with zero return on investment.  I can't wait for the first sale!  </p>
<h2>Back to the Goods</h2>
<h3>I will add a picture here!</h3>
<p>The first acquired inventory: two boxes amassing 152 individual discs at a cost of $75 CAD from a lovely lady named Karen.  The best part is, I am not even going to sell them all.  Karen's taste is very close to my own.  I will be expanding my collection, selling duplicates and hopefully even mitigating the cost through some sales.  What does that math look like?  </p>
<h2>Reaching a Profit</h2>
<p>Currently, I am posting my old DVDs on eBay to try and learn the process.  The goal with these CDs is to sell them via the store (that doesn't exist yet - a post will come soon).  In order to understand the cost, I am intentionally limiting the target market.  </p>
<ul>
<li>I will only be shipping to Canada, to start</li>
<li>I will only be shipping individual CDs</li>
<li>I will be starting out on eBay</li>
<li>I will only be using Canada Post to ship</li>
</ul>
<h2>The Cost Breakdown</h2>
<p>For this batch specifically, this is what the costs look like without any sort of platform or advertising cuts; hence the need for a personal store.  Here are the material costs: </p>
<ul>
<li>CD - $0.50 (actually $0.493, but you can't round down with cost. My opinion. Fractions of a cent; watch Office Space)</li>
<li>Thank you card (advertising for site) - $0.04</li>
<li>Padded Envelope - $0.45</li>
<li>Postage - $1.94 - $3.19 depending on weight. </li>
</ul>
<p>Therefore, each CD will have to be sold, online, through a market for at least $2.93 to $4.18, plus whatever market fees!</p>
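<p>For the curious, those break-even numbers are easy to sanity-check in code.  This is just a sketch using the per-unit costs above; the 13% marketplace fee shown is a made-up example rate, not a quoted eBay or Canada Post figure.</p>

```python
# Break-even price for a single shipped CD, using the material
# costs from this post (all figures in CAD).
MATERIALS = {
    "cd": 0.50,             # rounded up from $0.493
    "thank_you_card": 0.04,
    "padded_envelope": 0.45,
}

def break_even_price(postage: float, fee_rate: float = 0.0) -> float:
    """Minimum sale price covering materials, postage, and a
    percentage fee the marketplace takes off the sale price."""
    cost = sum(MATERIALS.values()) + postage
    return round(cost / (1 - fee_rate), 2)

print(break_even_price(1.94))  # lightest CD: 2.93
print(break_even_price(3.19))  # heaviest CD: 4.18
print(break_even_price(1.94, fee_rate=0.13))  # with a 13% cut: 3.37
```

<p>Note the fee divides rather than adds, because a percentage cut comes off the sale price, not the cost; at margins this small, that difference matters.</p>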
<h2>How to Lower Costs</h2>
<p>Long term, I will need to find a better solution to lower the costs here and maximize profit.  The margins are most likely going to be very small, so the focus has to be on experience and volume - or, well, targeting high-end pieces (i.e. demo tracks, limited edition releases, etc.).  How do I cut costs?</p>
<ul>
<li>Selling solely through the store</li>
<li>The cuts the marketplaces take are going to be high, so selling direct will be the best way to maximize profits long term</li>
<li>Order larger volumes of envelopes. </li>
<li>Order larger volumes of thank you cards.</li>
<li>Ship in larger volumes.  </li>
<li>Price per unit should - in theory - go down.  There is a lot more research to be done here, but shipping and packaging will both be recalculated. </li>
<li>Find free product?  Improve sourcing?  Use damaged goods for other products?  Zero waste? </li>
<li>Save time; time is money!</li>
</ul>
<h2>The Outlook</h2>
<p>The focus now is getting that first sale.  This will most likely be a money pit to start - not a crazy upfront cost - but it should bring in some small funds and help clean up the house a bit!</p>
<p>Not only that, it's a fun way to learn a bunch of new skills.  Online marketing?  SEO?  Online sales?  Auction sites?  Buying and selling online?  As someone who works in cyber security sales, this is a whole new ball game.  Do I go to flea markets?  Garage sales?  Other places?  The possibilities...</p>
  </item>
  <item>
      <title>So, I Started a Business!</title>
      <link>https://seanland.ca/posts/2024-04-15-so-I-started-business</link>
      <description>I have decided to take some time and focus on something I have wanted to do for a while.  Buy more CDs!  I mean, start my own business!</description>
      <pubDate>Mon, 22 Apr 2024 00:00:00 GMT</pubDate>
      <guid>https://seanland.ca/posts/2024-04-15-so-I-started-business</guid>
      <enclosure url="https://seanland.ca/img/default.png" type="image/png" />
      <category>business</category>
      <content:encoded><![CDATA[<h1>So, I Started a Business!</h1>
<p>As I began to clear out some of my old computer parts, I realized I have a love for trading off goods.  I already knew I had a love for collecting CDs.  Let's mix them together; to start.</p>
<h2>Seanland Entertainment</h2>
<p>I slowly went down a dark hole.  I can have a store front, an escape room, themed events, a hacker space, a Patreon for live streams; the list goes on.  I did stick with the name Seanland Entertainment; basic, functional, with room for expansion on activities. </p>
<figure>
<img src="https://seanland.ca/img/2024/IMG_0388_1200ma_.png">
<figcaption>The current state of the studio!</figcaption>
</figure>

<h2>Where did I start with the business?</h2>
<ul>
<li>I registered a business</li>
<li>I have a business mailbox through UPS (this is the "business address")</li>
<li>I registered with a small business account on Canada Post</li>
<li>I picked the products</li>
<li>I am in the process of opening an Amazon seller account</li>
<li>I am working on a logo, business cards and stickers!</li>
<li>I ordered packing material and a label printer</li>
<li>I refurbished a "work computer" with materials at home (a k8s node that I hadn't set up)</li>
</ul>
<p>Total cost, just under $1000 CAD. </p>
<h2>Where do I start with the product?</h2>
<p>Easy, my own collection.  I have a bunch of DVDs I want to get rid of.  I will then source lots of CDs from different channels (Facebook Marketplace, Kijiji, garage sales, Talize, etc.).  The products are going to be limited to two envelope sizes, the "CD size" and the "DVD size".  If it is a piece of entertainment that can fit into either size, we are golden to sell. </p>
<h2>What's next?!</h2>
<p>I want to build an inventory system.  This is where some of my other interests and skills come in.  Beyond that, it would be nice to have a standalone store front to direct customers to.  The fees on some of these platforms are fairly high considering the price of some used discs.  My goal is not necessarily to raise prices, but, lower costs while building a following of sorts.  Collectors are an interesting group of individuals and most definitely a community.</p>
<h3>Website coming soon!</h3>]]></content:encoded>
  </item>
  <item>
      <title>Revisiting a Vacationing Mind</title>
      <link>https://seanland.ca/posts/2024-04-11-revisting-a-vacationing-mind</link>
      <description>Analyzing the last time I felt inspired to do side work while on my first all inclusive vacation to Mexico.</description>
      <pubDate>Thu, 11 Apr 2024 00:00:00 GMT</pubDate>
      <guid>https://seanland.ca/posts/2024-04-11-revisting-a-vacationing-mind</guid>
      <enclosure url="https://seanland.ca/img/default.png" type="image/png" />
      <category>projects</category>
      <category>personal</category>
      <content:encoded><![CDATA[<h1>Revisiting a Vacationing Mind</h1>
<p>Wielding my trusty Rocketbook Mini while sipping on fruity beverages, reading through "The Art of Non-Conformity" and occasionally jumping in the water for a dip, I came up with five pages - small pages - of chicken scratch trying to figure out what I want to do next.  I had just finished reading "F**k It, Do What You Love" and was nearing completion of "The Art of Non-Conformity".  I needed to write down my brain.  </p>
<p>This is what I came up with...</p>
<figure>
<img src="https://seanland.ca/img/2024/vacation-mexico-notes-2023_1200ma_.png">
<figcaption>Five mini pages of raw notes!</figcaption>
</figure>

<h2>Let me throw this into a table...</h2>
<table>
<thead>
<tr>
<th>Page 1</th>
<th>Page 2</th>
<th>Page 3</th>
<th>Page 4</th>
<th>Page 5</th>
</tr>
</thead>
<tbody>
<tr>
<td>Make a Masters</td>
<td>Microbrewery</td>
<td>"Make a Masters"</td>
<td>"Living Canada by/for Canadians"</td>
<td>"Make a Masters"</td>
</tr>
<tr>
<td>Custom Watches</td>
<td>Custom Wine Cellar System</td>
<td>Objective Based Outcomes <br />- 3 Key Objectives<br />- Quarterly Objectives<br />- Weekly Objectives</td>
<td>Preface</td>
<td>Build MVP ---&gt; including:<br />- Description<br />- Requirements<br />- Links to people partaking (me)<br />- Donation<br />- Sample (Sean's)</td>
</tr>
<tr>
<td>Custom Hats</td>
<td>Music!</td>
<td>Major/Minors <br />-&gt; Lateral Helpful Skills <br />-&gt; General knowledge, languages, marketing</td>
<td>Intro -&gt; Not exclusively for Canadians</td>
<td>Buy Domain</td>
</tr>
<tr>
<td>Hostel</td>
<td>Radio Station Online</td>
<td>5-6 Daily Checklist</td>
<td>Children</td>
<td>Share Publicly<br />-&gt; Mastodon</td>
</tr>
<tr>
<td>Hostel Management System</td>
<td>Buying/Selling - eBay, Amazon, Kijiji, Facebook, etc.</td>
<td>1 Weekly Report</td>
<td>Teens</td>
<td></td>
</tr>
<tr>
<td>Open Source Development</td>
<td>OSAR, First Aid / Search and Rescue</td>
<td>1 Quarterly Assessment -&gt; Re-Assessment</td>
<td>As Adults</td>
<td></td>
</tr>
<tr>
<td>Blogging / Writing</td>
<td>Self Sufficiency</td>
<td>1 Outcome Based Thesis</td>
<td>As New Canadians</td>
<td></td>
</tr>
<tr>
<td>Automation of Systems</td>
<td>Re-evaluate pad.snld.ca</td>
<td>What is a "Masters"</td>
<td>"Mini-USA"</td>
<td></td>
</tr>
<tr>
<td>Traveling</td>
<td>Prioritize Use of Time</td>
<td>This is designed to be a lifestyle change</td>
<td>Family Trips</td>
<td></td>
</tr>
<tr>
<td>DIY</td>
<td>Minimize waste of time</td>
<td>Additional Key Requirements <br />-&gt; Travel component, language?, news?, networking?</td>
<td>Schools</td>
<td></td>
</tr>
<tr>
<td>Hostel Directory</td>
<td>Focus on low cost, high margin</td>
<td>Mission Statement, why?</td>
<td>Volunteering</td>
<td></td>
</tr>
<tr>
<td>Automated Gardening System</td>
<td>"Legacy" projects take time</td>
<td>Address Potential Objectives</td>
<td>How to see</td>
<td></td>
</tr>
<tr>
<td>Version Management System</td>
<td>Consistent output</td>
<td>Saca Sugar Wombat Cat? (I actually don't know what I wrote here it's smudged)</td>
<td>Environment</td>
<td></td>
</tr>
<tr>
<td>Canada by/for Canadians Book</td>
<td>Trust</td>
<td></td>
<td>Global Warming</td>
<td></td>
</tr>
<tr>
<td>Micro Startup AI Generator</td>
<td>Accountability</td>
<td></td>
<td>Where to go</td>
<td></td>
</tr>
<tr>
<td>Border Collie Adventure Bot</td>
<td>Balance, Healthy lifestyle</td>
<td></td>
<td>What to do</td>
<td></td>
</tr>
<tr>
<td>MinMax</td>
<td>Declutter -&gt; refine possessions</td>
<td></td>
<td>Sights</td>
<td></td>
</tr>
<tr>
<td>RFID Jukebox</td>
<td>Donate, sell, etc.</td>
<td></td>
<td>Thinking Canadian</td>
<td></td>
</tr>
<tr>
<td>Mame Arcade</td>
<td></td>
<td></td>
<td>Stats</td>
<td></td>
</tr>
<tr>
<td>WeWork Startup Hub</td>
<td></td>
<td></td>
<td>Where do I live?</td>
<td></td>
</tr>
<tr>
<td>Adult Entertainment, IE. Escape Room with Bar</td>
<td></td>
<td></td>
<td>What I don't know?</td>
<td></td>
</tr>
<tr>
<td>PERSONAL BRAND!!</td>
<td></td>
<td></td>
<td>Conclusion<br />- Will Canada always be home?<br />-What it takes to be Canadian</td>
<td></td>
</tr>
<tr>
<td>Automated Trading &lt;-- Statistics Betting Analyzer</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
<h2>Now, let's put these into categories.</h2>
<ul>
<li>Business</li>
<li>Travel</li>
<li>Technology</li>
<li>Lifestyle</li>
<li>Creativity (Art; music goes here and Writing)</li>
</ul>
<p>Let's start by setting some definitions.  We have the "things" and the leftover "traits".  "Things" are items that fall into the above categories.  For example, "Donate, sell" is a Business thing, "Mame Arcade" is a Technology thing and "Self Sufficiency" is a Lifestyle thing.  The remaining "traits" make up who I am (or who I will need to become) as a person. </p>
<p>If we remove all the "things" and focus on the traits listed in the notes (recognizing that the Page 4 topics are mostly chapters for a book): </p>
<ul>
<li>Consistent Output</li>
<li>Trust</li>
<li>Accountability</li>
<li>Declutter</li>
<li>Prioritize Use of Time</li>
<li>PERSONAL BRAND!! (let's keep this independent of business)</li>
</ul>
<h3>Consistent Output</h3>
<p>As a person who would much rather do twelve hours of work at once instead of one hour of work twelve times, this has to be a prime area of focus.  Most "things" don't happen overnight.  None of the "things" listed above happen overnight.  They are slow burns, "the long game", and will require long-term dedication.  Focus on small wins and break tasks up into small, achievable chunks.  Remain consistent.  </p>
<h3>Trust</h3>
<p>Have trust in yourself.  Understand your own decisions and make them.  If it doesn't feel right, don't do it.  I almost want to put honourable in here as well, or even faith.  It really comes down to believing to a degree. </p>
<h3>Accountability</h3>
<p>If you say you are going to deliver, do it.  There are excuses around every corner; don't let them get in the way.  Just because you are on vacation doesn't mean you can't write if you want to.  How long do you have for lunch versus how long does it take you to eat?  You are your own worst enemy, and you can be your own best friend.  Deliver.  </p>
<h3>Declutter</h3>
<p>Clean up possessions, clean up tasks, clean up everything.  I am notorious for collecting technology bits.  Do I have a need for them all?  Is it really a collection?  Or am I just making excuses?  Find a purpose for something, or get rid of it.  </p>
<h3>Prioritize Use of Time</h3>
<p>Complementing decluttering, declutter your calendar.  Prioritize your time.  People need breaks, so take them.  You also need to work to achieve your goals.  If you don't have time but have money, outsource.  If you have no money but have time, figure out a way to do the task at hand.  Time is literally money.  </p>
<h3>PERSONAL BRAND!!</h3>
<p>Who you are.  This is important for a million reasons.  You represent everything you touch.  Do you sell a product?  Is the business yours?  Is that your face on your social media account?  Do a good job, and if you don't, make sure you own it.  People who buy from small businesses buy from the people.  People like service and people like sticking with who they know.</p>
<h2>What Does This All Mean?</h2>
<p>We all have our "things"; heck, even call them goals.  It's what we are striving for.  To some people this may just be a job, or starting a new career, or changing careers; we all want some "thing".  The issue comes down to what we choose to do to achieve these goals.  </p>
<p>I am not a writer, but here I am writing.  I may not be great at business, but I sell software for a living.  What skills do I have, and what traits do I need to improve, to deliver on my "things"?  How do I take the next steps to achieve my goals?  By constantly improving and recognizing where I need to improve. </p>
<p>Thanks for reading! <a href="https://seanland.ca/contact">Would love to hear your thoughts.</a>   </p>
<p><em><a href="https://seanland.ca/100-days-to-offload">Day 2 of #100DaysToOffload</a></em></p>]]></content:encoded>
  </item>
</channel>
</rss>