<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
    <channel>
        <title><![CDATA[Damir's Corner]]></title>
        <description><![CDATA[Notes from Daily Encounters with Technology]]></description>
        <link>https://www.damirscorner.com</link>
        <generator>RSS for Node</generator>
        <lastBuildDate>Fri, 27 Feb 2026 06:20:11 GMT</lastBuildDate>
        <atom:link href="https://www.damirscorner.com/blog/posts/rss.xml" rel="self" type="application/rss+xml"/>
        <author><![CDATA[Damir Arh]]></author>
        <pubDate>Fri, 27 Feb 2026 06:18:10 GMT</pubDate>
        <item>
            <title><![CDATA[Logging into Teams without a license]]></title>
<description><![CDATA[<p>In the company I work for, we&#39;re not using Teams internally and our users don&#39;t have a license for it. But some of our customers do use Teams and pay for it. They also add users from our Entra ID, which gives those users access to Teams inside the customer&#39;s organization. This worked great until it suddenly didn&#39;t anymore.</p>
<p>When an existing user tried to log into Teams on a new device, they couldn&#39;t. However, Teams still worked fine on the old device where they had logged in at an earlier point in time. Further investigation showed that the issue wasn&#39;t user-specific: the same happened to all users, which meant that the problem was organization-wide.</p>
<p>The issue was even more difficult to troubleshoot because there was no useful error message accompanying the failed login in the Teams application. Fortunately, it could easily be reproduced by trying to log into <a href="https://teams.microsoft.com/v2/">Teams</a> from a private browser window. After logging in with a company Microsoft account, a page with a <strong>Sign in</strong> button showed up. Clicking the button didn&#39;t seem to do anything.</p>
<p><img src="img/20260227-TeamsLoginPage.png" alt="Teams login page in private browser window"></p>
<p>However, the browser developer tools provided more details. Each click resulted in a failed call to <code>https://login.microsoftonline.com/{tenantId}/oauth2/v2.0/token</code>. And the response body contained the following error message:</p>
<blockquote>
<p>AADSTS500014: The service principal for resource &#39;<a href="https://api.spaces.skype.com">https://api.spaces.skype.com</a>&#39; is disabled. This indicate that a subscription within the tenant has lapsed, or that the administrator for this tenant has disabled the application, preventing tokens from being issued for it.</p>
</blockquote>
<p>This helped us discover that <a href="https://learn.microsoft.com/en-us/answers/questions/980425/the-service-principal-for-resource-https-api-space">we weren&#39;t the only ones with this issue</a>. And it identified the root cause of the error: the Microsoft Teams application had become disabled. Could we somehow reenable it?</p>
<p>It was time to take a closer look in the <a href="https://entra.microsoft.com">Microsoft Entra admin center</a>. The Microsoft Teams app should be listed on the <strong>Enterprise apps</strong> page, but at first sight it didn&#39;t seem to be there:</p>
<p><img src="img/20260227-TeamsEnterpriseApplications.png" alt="Default enterprise applications filter in Microsoft Entra"></p>
<p>Removing the <strong>Application type == Enterprise Applications</strong> filter helped. Now, there were <em>a lot</em> of Microsoft Teams applications listed:</p>
<p><img src="img/20260227-TeamsMicrosoftTeamsApplications.png" alt="Microsoft Teams enterprise applications in Microsoft Entra"></p>
<p><strong>Microsoft Teams Services</strong> was the application I was looking for. On its <strong>Properties</strong> page I could see that it was <strong>Deactivated</strong>:</p>
<p><img src="img/20260227-TeamsMicrosoftTeamsProperties.png" alt="Microsoft Teams properties in Microsoft Entra"></p>
<p>Toggling the <strong>Enabled for users to sign-in</strong> switch to <strong>Yes</strong> changed its <strong>Activation status</strong> to <strong>Activated</strong>. It was time to try logging into Teams from a private browser window again. Unfortunately, it still didn&#39;t work. But there was now a different error in the browser developer tools:</p>
<blockquote>
<p>AADSTS500014: The service principal for resource &#39;00000003-0000-0ff1-ce00-000000000000&#39; is disabled. This indicate that a subscription within the tenant has lapsed, or that the administrator for this tenant has disabled the application, preventing tokens from being issued for it.</p>
</blockquote>
<p>Finding the right application in the Microsoft Entra Enterprise applications list was even easier this time. I could search by the <strong>Application ID</strong> value from the error message. It was the <strong>Office 365 SharePoint Online</strong> application and it was also <strong>Deactivated</strong>. I reactivated it the same way as <strong>Microsoft Teams Services</strong>. And I got yet another error when I tried to log in:</p>
<blockquote>
<p>AADSTS7000112: Application &#39;a164aee5-7d0a-46bb-9404-37421d58bdf7&#39;(Microsoft Teams AuthSvc) is disabled.</p>
</blockquote>
<p>This time both the application <strong>Name</strong> and <strong>Application ID</strong> were listed in the error message. As if they were trying to make it easier for me. I reactivated <strong>Microsoft Teams AuthSvc</strong> just like the other two applications.</p>
<p>This finally fixed the issue for good. The login in the private browser window succeeded. I got a list of organizations I was added to. After choosing one of them I was logged into their Teams tenant. It worked from the Teams application as well. Let&#39;s just hope it stays that way.</p>
<p>And if not, I know what to check first. In the <strong>Microsoft Entra admin center</strong> the following <strong>Enterprise applications</strong> must be <strong>Activated</strong> (i.e., the <strong>Enabled for users to sign-in</strong> switch on the application <strong>Properties</strong> page must be set to <strong>Yes</strong>):</p>
<ul>
<li>Microsoft Teams Services, Application ID: cc15fd57-2c6c-4117-a88c-83b1d56b4bbe</li>
<li>Office 365 SharePoint Online, Application ID: 00000003-0000-0ff1-ce00-000000000000</li>
<li>Microsoft Teams AuthSvc, Application ID: a164aee5-7d0a-46bb-9404-37421d58bdf7</li>
</ul>
]]></description>
            <link>https://www.damirscorner.com/blog/posts/20260227-LoggingIntoTeamsWithoutALicense.html</link>
            <guid isPermaLink="true">https://www.damirscorner.com/blog/posts/20260227-LoggingIntoTeamsWithoutALicense.html</guid>
            <dc:creator><![CDATA[Damir Arh]]></dc:creator>
            <pubDate>Fri, 27 Feb 2026 00:00:00 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Disable Quarkus endpoints with a request filter]]></title>
<description><![CDATA[<p>I needed a simple way to disable all endpoints in a Quarkus application based on a configuration property. A <a href="https://quarkus.io/guides/rest#request-or-response-filters">request filter</a> was the right tool for the job, but some care had to be taken not to also disable <a href="https://quarkus.io/guides/dev-ui">the Dev UI</a>.</p>
<p>A server request filter is a method annotated with <code>@ServerRequestFilter</code>. To access a configuration property, inject it into its parent class:</p>
<pre class="highlight"><code class="hljs java"><span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Filters</span> </span>{
    <span class="hljs-annotation">@ConfigProperty</span>(name=<span class="hljs-string">"features.endpoints-enabled"</span>)
    <span class="hljs-keyword">boolean</span> featureEndpointsEnabled;

    <span class="hljs-annotation">@ServerRequestFilter</span>
    <span class="hljs-keyword">public</span> RestResponse&lt;?&gt; featureFlagFilter(ContainerRequestContext requestContext) {
        <span class="hljs-keyword">if</span> (!featureEndpointsEnabled &amp;&amp;
                !requestContext.getUriInfo().getPath().startsWith(<span class="hljs-string">"/q/"</span>)) {
            <span class="hljs-keyword">return</span> RestResponse.ResponseBuilder.notFound().build();
        }
        <span class="hljs-keyword">return</span> <span class="hljs-keyword">null</span>;
    }
}
</code></pre>
<p>When the method returns <code>null</code>, the processing will proceed, making the endpoint accessible. When the method returns a <code>RestResponse</code>, the processing stops and that response will be returned to the caller.</p>
<p>If you still want the Dev UI to work when you disable the endpoints, you need to check the requested path and leave requests to paths starting with <code>/q/</code> unblocked, because the Dev UI is served under that path.</p>
<p>To test the filter, you would need to change the value of a configuration property for the particular test. One way to do this is by using <a href="https://quarkus.io/blog/overriding-configuration-from-test-code/#approach-1-quarkus-test-profiles">Quarkus test profiles</a>. The test method must be placed in a separate test class which also acts as a test profile:</p>
<pre class="highlight"><code class="hljs java"><span class="hljs-annotation">@QuarkusTest</span>
<span class="hljs-annotation">@TestProfile</span>(DisabledExampleResourceTest.class)
<span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">DisabledExampleResourceTest</span> <span class="hljs-keyword">implements</span> <span class="hljs-title">QuarkusTestProfile</span> </span>{

    <span class="hljs-annotation">@Override</span>
    <span class="hljs-keyword">public</span> Map&lt;String, String&gt; getConfigOverrides() {
        <span class="hljs-keyword">return</span> Map.of(
                <span class="hljs-string">"features.endpoints-enabled"</span>, <span class="hljs-string">"false"</span>
        );
    }

    <span class="hljs-annotation">@Test</span>
    <span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">testHelloEndpointDisabled</span><span class="hljs-params">()</span> </span>{
        given()
                .when().get(<span class="hljs-string">"/hello"</span>)
                .then()
                .statusCode(<span class="hljs-number">404</span>)
                .body(is(<span class="hljs-string">""</span>));
    }
}
</code></pre>
<p>The <code>getConfigOverrides</code> method returns the key-value pairs of configuration properties to override.</p>
<p>You can find a sample project in my <a href="https://github.com/DamirsCorner/20260102-quarkus-request-filter">GitHub repository</a>. It adds the filter to a new Quarkus project with a sample endpoint, and a test that validates its behavior.</p>
<p>Although request filters aren&#39;t all that flexible, they are a great fit for functionality that affects all or many endpoints in the application.</p>
]]></description>
            <link>https://www.damirscorner.com/blog/posts/20260102-DisableQuarkusEndpointsWithARequestFilter.html</link>
            <guid isPermaLink="true">https://www.damirscorner.com/blog/posts/20260102-DisableQuarkusEndpointsWithARequestFilter.html</guid>
            <dc:creator><![CDATA[Damir Arh]]></dc:creator>
            <pubDate>Fri, 02 Jan 2026 00:00:00 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Switching between JDKs in Windows]]></title>
<description><![CDATA[<p>IntelliJ IDEA does a great job at <a href="https://www.jetbrains.com/help/idea/sdk.html">managing multiple different JDK versions</a> and using the right one for each project. However, a different solution is needed for the command line. I tried <a href="https://vfox.dev">vfox</a>, but I couldn&#39;t get <a href="https://vfox.dev/usage/core-commands.html#use">version switching</a> to work. I could still use it as a glorified JDK <a href="https://vfox.dev/usage/core-commands.html#install">installer</a>, but IntelliJ IDEA already fulfills this role. I now use PowerShell to switch between the JDKs installed from IntelliJ IDEA on the command line as well.</p>
<p>To start with a clean slate, I first removed all the JDKs except the ones installed from IntelliJ IDEA. This included the ones listed in Windows <strong>Installed apps</strong> and those installed through vfox while I was testing it. I also made sure to delete the <code>JAVA_HOME</code> environment variable (at system and user level) and removed any Java-related entries from the <code>PATH</code> environment variable (again at system and user level).</p>
<p>I then checked the JDKs I had installed in IntelliJ IDEA. To do that, I opened a Java project in IntelliJ IDEA, opened <strong>File</strong> &gt; <strong>Project Structure...</strong> from the main menu and navigated to <strong>Platform Settings</strong> &gt; <strong>SDKs</strong>. I selected each JDK in the list and copied the JDK home paths for future reference:</p>
<ul>
<li><code>openjdk-25</code>: <code>C:\Users\damir\.jdks\openjdk-25.0.1</code></li>
<li><code>openjdk-23</code>: <code>C:\Users\damir\.jdks\openjdk-23.0.1</code></li>
</ul>
<p><img src="img/20251226-JdkIdea.png" alt="JDK management in IntelliJ IDEA"></p>
<p>I decided to use JDK 25 by default, so I made the following changes to my user environment variables:</p>
<ul>
<li>I set <code>JAVA_HOME</code> to the JDK path: <code>C:\Users\damir\.jdks\openjdk-25.0.1</code>.</li>
<li>I added the <code>bin</code> subfolder inside it to <code>PATH</code>: <code>C:\Users\damir\.jdks\openjdk-25.0.1\bin</code>.</li>
</ul>
<p>This was enough to get Java 25 working in a newly opened terminal:</p>
<pre class="highlight"><code class="hljs no-highlight">➜ java -version
openjdk version &quot;25.0.1&quot; 2025-10-21
OpenJDK Runtime Environment (build 25.0.1+8-27)
OpenJDK 64-Bit Server VM (build 25.0.1+8-27, mixed mode, sharing)
</code></pre>
<p>For switching to a different JDK version, I first created a generic PowerShell script that sets the <code>JAVA_HOME</code> and <code>PATH</code> variables correctly for a JDK in a given folder:</p>
<pre class="highlight"><code class="hljs powershell"><span class="hljs-keyword">param</span>(
  [Parameter(Mandatory)]
  [string]<span class="hljs-variable">$Path</span>
)

<span class="hljs-variable">$env:JAVA_HOME</span> = <span class="hljs-variable">$Path</span>
<span class="hljs-variable">$env:Path</span> = <span class="hljs-variable">$env:JAVA_HOME</span> + <span class="hljs-string">"\bin;"</span> + <span class="hljs-variable">$env:Path</span>;
</code></pre>
<p>Two details worth mentioning:</p>
<ul>
<li>The environment variables are only changed for the given session. This allows me to use a different JDK version in each session.</li>
<li>Having the <code>bin</code> subfolder added to the start of the <code>PATH</code> causes the files from this folder to be preferred over the ones from the default JDK, which is listed later in the path.</li>
</ul>
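<p>For readers who also work in a POSIX shell, the same session-local switch can be sketched like this (the function name and install path are hypothetical, not part of my actual setup):</p>

```shell
# Hypothetical POSIX analog of the PowerShell script:
# both variables change for the current session only.
set_jdk() {
  export JAVA_HOME="$1"
  export PATH="$JAVA_HOME/bin:$PATH"
}

set_jdk /opt/jdks/openjdk-25.0.1   # hypothetical install path
echo "$JAVA_HOME"                  # → /opt/jdks/openjdk-25.0.1
```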
<p>For convenience, I added the following two functions to <a href="https://learn.microsoft.com/en-us/powershell/scripting/learn/shell/creating-profiles#adding-customizations-to-your-profile">my <code>$PROFILE</code> file</a> (they invoke the script above by its file name, <code>Set-Jdk.ps1</code>, which needs to be in a folder on the <code>PATH</code>):</p>
<pre class="highlight"><code class="hljs powershell"><span class="hljs-keyword">Function</span> Set-Jdk25
{
    Set-Jdk <span class="hljs-string">"C:\Users\damir\.jdks\openjdk-25.0.1"</span>
}

<span class="hljs-keyword">Function</span> Set-Jdk23
{
    Set-Jdk <span class="hljs-string">"C:\Users\damir\.jdks\openjdk-23.0.1"</span>
}
</code></pre>
<p>Now I can use them to switch to a different JDK in the current session:</p>
<pre class="highlight"><code class="hljs no-highlight">➜ Set-Jdk23
➜ java -version
openjdk version &quot;23.0.1&quot; 2024-10-15
OpenJDK Runtime Environment (build 23.0.1+11-39)
OpenJDK 64-Bit Server VM (build 23.0.1+11-39, mixed mode, sharing)
</code></pre>
<p>When I change the JDKs installed through IntelliJ IDEA, I only need to update the functions in my <code>$PROFILE</code> to match the set of installed JDKs and their paths. If I want to change the default JDK as well, I also need to update the <code>JAVA_HOME</code> and <code>PATH</code> environment variables accordingly.</p>
]]></description>
            <link>https://www.damirscorner.com/blog/posts/20251226-SwitchingBetweenJdksInWindows.html</link>
            <guid isPermaLink="true">https://www.damirscorner.com/blog/posts/20251226-SwitchingBetweenJdksInWindows.html</guid>
            <dc:creator><![CDATA[Damir Arh]]></dc:creator>
            <pubDate>Fri, 26 Dec 2025 00:00:00 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Git repository on a network share]]></title>
<description><![CDATA[<p>I recently played around with having some version-controlled files on a remote Linux server with SSH access. I still wanted to edit them locally in Visual Studio Code, so the <a href="https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh">Remote - SSH extension</a> seemed like a perfect fit. Unfortunately, connecting to the server from Visual Studio Code failed with the following error:</p>
<blockquote>
<p>The remote host may not meet VS Code Server&#39;s prerequisites for glibc and libstdc++ (The remote host does not meet the prerequisites for running VS Code Server)</p>
</blockquote>
<p>The server did not meet <a href="https://code.visualstudio.com/docs/remote/faq#_can-i-run-vs-code-server-on-older-linux-distributions">the prerequisites</a>, so I had to start looking for other solutions.</p>
<p>I decided to share the directory with the files I wanted to edit on the server so that I could access them from my computer. Using this approach, I managed to initialize a Git repository, but as soon as I wanted to perform any Git operation inside it, it failed with the following error:</p>
<pre class="highlight"><code class="hljs no-highlight">fatal: detected dubious ownership in repository at &#39;//my-server/my-share/my-dir&#39;
&#39;//my-server/my-share/my-dir&#39; is owned by:
    (inconvertible) (S-1-5-21-577097838-3836064388-3576385918-3054)
but the current user is:
    MyWinPC/damir (S-1-5-21-804102101-2538954194-3365188178-1001)
To add an exception for this directory, call:

    git config --global --add safe.directory &#39;%(prefix)///my-server/my-share/my-dir&#39;
</code></pre>
<p>It was because of a security measure, <a href="https://github.blog/open-source/git/highlights-from-git-2-36/#stricter-repository-ownership-checks">introduced in Git 2.35.2</a>. Changing the ownership of the files on the share wasn&#39;t an option, so I executed the command suggested in the error message to bypass the check for that specific directory:</p>
<pre class="highlight"><code class="hljs bash">git config --global --add safe.directory <span class="hljs-string">'%(prefix)///my-server/my-share/my-dir'</span>
</code></pre>
<p>This indeed resolved the issue: I could perform all Git operations as usual, both from the command line and from GUI clients.</p>
<p>Before proceeding, I wanted to make sure that all the files would have Linux line endings even though I would be editing them from Windows. In Visual Studio Code, you can use the <strong>Change End of Line Sequence</strong> command to select the line endings for each file:</p>
<p><img src="img/20251219-GitShareEolFile.png" alt="Configuring line endings for a file"></p>
<p>This works great for existing files, but there&#39;s always a risk of forgetting to switch the line endings when creating a new file. To avoid this, it&#39;s best to set the default value in workspace settings:</p>
<p><img src="img/20251219-GitShareEolWorkspace.png" alt="Configuring line endings for a workspace">
This setting will be persisted in the <code>.vscode/settings.json</code> file, which can also be added to the Git repository:</p>
<pre class="highlight"><code class="hljs json">{
  "<span class="hljs-attribute">files.eol</span>": <span class="hljs-value"><span class="hljs-string">"\n"</span>
</span>}
</code></pre>
<p>Git has <a href="https://git-scm.com/book/en/v2/Customizing-Git-Git-Configuration#_formatting_and_whitespace">its own approach to handling end of line differences</a> between Windows and Linux in files committed to a Git repository. If you&#39;re working in Windows, you most likely have it enabled globally with the <code>core.autocrlf</code> setting set to <code>true</code>. In this particular scenario, this setting would interfere with using Linux line endings in Windows. To prevent that, the setting can be turned off locally for this repository by running the following command from any directory inside it:</p>
<pre class="highlight"><code class="hljs bash">git config core.autocrlf <span class="hljs-literal">false</span>
</code></pre>
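<p>If you want to convince yourself that the local value really wins over the global one (without touching your real configuration), here&#39;s a quick sketch against a throwaway <code>HOME</code> and repository:</p>

```shell
# Run against a throwaway HOME so the real global config stays untouched.
export HOME="$(mktemp -d)"
git config --global core.autocrlf true       # simulate a typical Windows-wide setup

repo="$(mktemp -d)"
git -C "$repo" init -q
git -C "$repo" config core.autocrlf false    # local override for this repository

git -C "$repo" config core.autocrlf          # → false
git config --global core.autocrlf            # → true (global value is unchanged)
```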
<p>With all this configured, I could edit the files on the remote Linux server from Visual Studio Code on Windows and keep them versioned in a Git repository, even though Visual Studio Code couldn&#39;t connect to the server over SSH. The experience was still inferior to a real remote session, but it worked well enough for my particular use case.</p>
]]></description>
            <link>https://www.damirscorner.com/blog/posts/20251219-GitRepositoryOnANetworkShare.html</link>
            <guid isPermaLink="true">https://www.damirscorner.com/blog/posts/20251219-GitRepositoryOnANetworkShare.html</guid>
            <dc:creator><![CDATA[Damir Arh]]></dc:creator>
            <pubDate>Fri, 19 Dec 2025 00:00:00 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Per-folder git configuration]]></title>
<description><![CDATA[<p>My work consists of writing code for multiple clients, and for some of them I need to use a different email in my git commits. I recently learned that there is a better approach to handling this than configuring the email per repository.</p>
<p>During <a href="https://git-scm.com/book/en/v2/Getting-Started-First-Time-Git-Setup">initial git setup</a> you need to configure the name and email address to be used in your git commits. Many graphical git clients provide a user interface for this, but you can also do it from the command line:</p>
<pre class="highlight"><code class="hljs bash">git config --global user.name <span class="hljs-string">"Me"</span>
git config --global user.email <span class="hljs-string">"email@primary.domain"</span>
</code></pre>
<p>This configuration is saved to <code>~/.gitconfig</code> in your home folder:</p>
<pre class="highlight"><code class="hljs no-highlight">[user]
    name = Me
    email = email@primary.domain
</code></pre>
<p>By default, these settings are going to be used for all repositories you work with on your machine. You can, however, override any git configuration value for a specific repository, including your name and email. Some graphical git clients support this as well, but you can also do it by running the following commands from a folder inside that repository:</p>
<pre class="highlight"><code class="hljs bash">git config user.name <span class="hljs-string">"Me"</span>
git config user.email <span class="hljs-string">"email@secondary.domain"</span>
</code></pre>
<p>This configuration is saved to <code>.git/config</code> in the repository&#39;s root folder:</p>
<pre class="highlight"><code class="hljs no-highlight">[user]
    name = Me
    email = email@secondary.domain
</code></pre>
<p>When an alternative configuration is set for a repository like this, its name and email are going to be used for all commits to that repository.</p>
<p>Although this configuration process is simple enough, it doesn&#39;t scale well if your client uses microservices. It&#39;s too easy to forget to change the configuration for every single microservice repository you clone to your machine and start contributing to. In that case, the wrong email from your default global configuration is going to be used.</p>
<p>Fortunately, you can use <a href="https://git-scm.com/docs/git-config#_conditional_includes">conditional includes</a> to set the same configuration for all git repositories inside a specific parent folder. On my work machine, I already have all repositories for a certain client inside the same folder, so this matches well with my existing organization.</p>
<p>To use conditional includes, you first need to create a file with git configuration settings in standard format so that you can include it. I put it in the folder containing all the repositories from a specific client, but you could put it anywhere you like:</p>
<pre class="highlight"><code class="hljs no-highlight">[user]
    name = Me
    email = email@secondary.domain
</code></pre>
<p>Then, you need to open your global git configuration file in <code>~/.gitconfig</code> and conditionally include the configuration file you just created:</p>
<pre class="highlight"><code class="hljs no-highlight">[includeIf &quot;gitdir:~/Git/MyClient/&quot;]
    path = ~/Git/MyClient/.gitconfig
</code></pre>
<p>If a repository matches the path after <code>gitdir</code>, the configuration file specified after <code>path</code> will be evaluated and will override any configuration values defined earlier in your global configuration file. If you&#39;re on a case-insensitive file system, you might want to use <code>gitdir/i</code> instead of <code>gitdir</code> to make the path comparison case-insensitive.</p>
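<p>The whole mechanism can be tried out end to end in a shell (all paths are hypothetical, and a throwaway <code>HOME</code> keeps your real configuration untouched):</p>

```shell
# Throwaway HOME so the real global config is not modified.
export HOME="$(mktemp -d)"
HOME="$(cd "$HOME" && pwd -P)"   # resolve symlinks so gitdir matching works everywhere

git config --global user.name "Me"
git config --global user.email email@primary.domain

# Per-client configuration file inside the client's folder.
mkdir -p "$HOME/Git/MyClient"
printf '[user]\n\temail = email@secondary.domain\n' > "$HOME/Git/MyClient/.gitconfig"

# Conditionally include it for all repositories under that folder.
cat >> "$HOME/.gitconfig" <<'EOF'
[includeIf "gitdir:~/Git/MyClient/"]
    path = ~/Git/MyClient/.gitconfig
EOF

git init -q "$HOME/Git/MyClient/repo"
git -C "$HOME/Git/MyClient/repo" config user.email   # → email@secondary.domain

git init -q "$HOME/elsewhere/repo"
git -C "$HOME/elsewhere/repo" config user.email      # → email@primary.domain
```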
<p>For now, I&#39;m only using conditional includes for user and email configuration when I need to change it for all repositories of a certain client. But it&#39;s a very flexible tool in the toolbelt, so I might find another use for it in the future.</p>
]]></description>
            <link>https://www.damirscorner.com/blog/posts/20251114-PerFolderGitConfiguration.html</link>
            <guid isPermaLink="true">https://www.damirscorner.com/blog/posts/20251114-PerFolderGitConfiguration.html</guid>
            <dc:creator><![CDATA[Damir Arh]]></dc:creator>
            <pubDate>Fri, 14 Nov 2025 00:00:00 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Adding a second NIC to Proxmox VM]]></title>
<description><![CDATA[<p>As I&#39;m adding more services to my home lab server, I&#39;ve reached a point where I want to expose some of them to the internet but keep others available only on the internal network. To do so securely from a single virtual machine, I want to serve the two groups of services through different network interfaces. But my virtual machine only had one. How difficult could it be to add a second one?</p>
<p>As the first step, I had to add a new network device to the virtual machine. I&#39;m hosting it inside <a href="https://www.proxmox.com/en/products/proxmox-virtual-environment/overview">Proxmox VE</a> which made this fairly easy:</p>
<ul>
<li>Select the right virtual machine in the <strong>Server View</strong> tree view.</li>
<li>Open its <strong>Hardware</strong> page.</li>
<li>Select <strong>Network Device</strong> from the <strong>Add</strong> dropdown button.
<img src="img/20251107-ProxmoxNicAdd.png" alt="Add a new network device in Proxmox"></li>
<li>Keep the default values in the dialog unchanged and click <strong>Add</strong>.
<img src="img/20251107-ProxmoxNicConfig.png" alt="Network device configuration dialog in Proxmox"></li>
</ul>
<p>This was enough for the new network interface to show up in my Ubuntu virtual machine, as I could see by listing the interfaces:</p>
<pre class="highlight"><code class="hljs bash">ip addr show
</code></pre>
<p>By comparing the output with the one from before I added the new network device, I could identify the new interface, named <code>ens19</code>:</p>
<pre class="highlight"><code class="hljs no-highlight">3: ens19: &lt;BROADCAST,MULTICAST&gt; mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether bc:24:11:2c:da:0f brd ff:ff:ff:ff:ff:ff
    altname enp0s19
</code></pre>
<p>Since its <code>state</code> was <code>DOWN</code>, I had to enable it first:</p>
<pre class="highlight"><code class="hljs bash"><span class="hljs-built_in">sudo</span> ip link <span class="hljs-keyword">set</span> ens19 up
</code></pre>
<p>This changed its <code>state</code> to <code>UP</code>:</p>
<pre class="highlight"><code class="hljs no-highlight">3: ens19: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether bc:24:11:2c:da:0f brd ff:ff:ff:ff:ff:ff
    altname enp0s19
    inet6 fe80::be24:11ff:fe2c:da0f/64 scope link
       valid_lft forever preferred_lft forever
</code></pre>
<p>But unlike the old network interface, it still didn&#39;t have an IPv4 (<code>inet</code>) address assigned.</p>
<p>To ensure that its IP address would be predictable, I added a new IP reservation to my DHCP server before continuing. The MAC address of the network interface is listed after <code>link/ether</code>, i.e., <code>bc:24:11:2c:da:0f</code> in my case.</p>
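<p>Instead of picking the MAC out of the <code>ip</code> output by eye, it can also be read directly from sysfs (a small sketch; <code>mac_of</code> is just a hypothetical helper name):</p>

```shell
# Read an interface's MAC address from sysfs (Linux only).
mac_of() { cat "/sys/class/net/$1/address"; }

# Usage, with the interface name from the earlier output:
#   mac_of ens19
```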
<p>My Ubuntu server uses <a href="https://netplan.io">Netplan</a> for network configuration. Its configuration files are located in <code>/etc/netplan</code>. I had to modify the only file there, named <code>50-cloud-init.yaml</code>, and enable DHCP for the new network interface with an entry identical to the one already there for the old network interface:</p>
<pre class="highlight"><code class="hljs less"><span class="hljs-attribute">network</span>:
  <span class="hljs-attribute">ethernets</span>:
    <span class="hljs-attribute">ens18</span>:
      <span class="hljs-attribute">dhcp4</span>: true
    <span class="hljs-attribute">ens19</span>:
      <span class="hljs-attribute">dhcp4</span>: true
  <span class="hljs-attribute">version</span>: <span class="hljs-number">2</span>
</code></pre>
<p>I saved the changes and applied them using:</p>
<pre class="highlight"><code class="hljs bash"><span class="hljs-built_in">sudo</span> netplan apply
</code></pre>
<p>This was enough for the new network interface to acquire an IPv4 address:</p>
<pre class="highlight"><code class="hljs no-highlight">3: ens19: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether bc:24:11:2c:da:0f brd ff:ff:ff:ff:ff:ff
    altname enp0s19
    inet 192.168.1.98/24 metric 100 brd 192.168.1.255 scope global dynamic ens19
       valid_lft 86399sec preferred_lft 86399sec
    inet6 fe80::be24:11ff:fe2c:da0f/64 scope link
       valid_lft forever preferred_lft forever
</code></pre>
<p>Still, I was a bit worried because of the following comment at the top of the <code>/etc/netplan/50-cloud-init.yaml</code> configuration file:</p>
<pre class="highlight"><code class="hljs vala"><span class="hljs-preprocessor"># This file is generated from information provided by the datasource.  Changes</span>
<span class="hljs-preprocessor"># to it will not persist across an instance reboot.  To disable cloud-init's</span>
<span class="hljs-preprocessor"># network configuration capabilities, write a file</span>
<span class="hljs-preprocessor"># /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:</span>
<span class="hljs-preprocessor"># network: {config: disabled}</span>
</code></pre>
<p>The contents of the <code>/etc/cloud/cloud-init.disabled</code> file put me somewhat at ease:</p>
<pre class="highlight"><code class="hljs no-highlight">Disabled by Ubuntu live installer after first boot.
To re-enable cloud-init on this image run:
  sudo cloud-init clean --machine-id
</code></pre>
<p>To be certain, I rebooted the virtual machine and checked that the network interface remained properly configured afterwards. It did.</p>
<p>With all this in place, I could now modify my <code>docker-compose.yml</code> file to publish the selected ports on a single IP instead of on all of them, using <a href="https://docs.docker.com/engine/network/port-publishing/#publishing-ports">the extended <code>ports</code> syntax</a>:</p>
<pre class="highlight"><code class="hljs haml">ports:
  -<span class="ruby"> <span class="hljs-string">"192.168.1.98:80:80"</span>
</span>  -<span class="ruby"> <span class="hljs-string">"192.168.1.98:443:443"</span>
</span></code></pre>
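<p>Extended to the full setup, the compose file can be sketched like this (a minimal sketch; the service names, images, and the second IP are hypothetical placeholders for my actual configuration):</p>
<pre class="highlight"><code class="hljs no-highlight">services:
  public-web:                    # reachable externally via router port forwarding
    image: nginx                 # placeholder image
    ports:
      - "192.168.1.98:80:80"
      - "192.168.1.98:443:443"
  internal-web:                  # reachable only from the internal network
    image: nginx                 # placeholder image
    ports:
      - "192.168.1.99:80:80"     # hypothetical IP of the other interface
      - "192.168.1.99:443:443"
</code></pre>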
<p>This allowed me to publish the same ports from two different containers, each on a different IP. Only the container with its ports published on the IP that has port forwarding set up on my router can be accessed externally. The other one is accessible only from the internal network.</p>
]]></description>
            <link>https://www.damirscorner.com/blog/posts/20251107-AddingASecondNicToProxmoxVm.html</link>
            <guid isPermaLink="true">https://www.damirscorner.com/blog/posts/20251107-AddingASecondNicToProxmoxVm.html</guid>
            <dc:creator><![CDATA[Damir Arh]]></dc:creator>
            <pubDate>Fri, 07 Nov 2025 00:00:00 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[PowerShell Tab Title in Windows Terminal]]></title>
            <description><![CDATA[<p>I often have several tabs open in my <a href="https://github.com/microsoft/terminal">Windows Terminal</a>. And I like how I can easily identify the right one by its title if it&#39;s a tab for WSL or an SSH connection.</p>
<p><img src="img/20251031-WinTermTitleWslSsh.png" alt="Windows Terminal tab title for WSL and SSH"></p>
<p>PowerShell tabs, on the other hand, have a fixed &quot;PowerShell&quot; title by default. Sure, Windows Terminal allows me to rename tabs, but I often don&#39;t bother, especially since a manually set title can easily become inaccurate.</p>
<p><img src="img/20251031-WinTermRenameTitle.png" alt="Rename tab title in Windows Terminal"></p>
<p>Fortunately, PowerShell can also auto-update the tab title; it&#39;s just not configured to do so by default. And <a href="https://ohmyposh.dev">Oh My Posh</a>, which <a href="20250502-OhMyPoshUpdateToNonModuleVersion.html">I am using</a>, has <a href="https://ohmyposh.dev/docs/configuration/title">even better support for that</a>. Again, it&#39;s not enabled by default, at least not in <a href="https://ohmyposh.dev/docs/themes#negligible">the theme I chose</a>.</p>
<p>To enable it, only a single line has to be added to your configuration file. If you don&#39;t know where that file is located, check your PowerShell <code>$PROFILE</code> file:</p>
<pre class="highlight"><code class="hljs powershell">code <span class="hljs-variable">$PROFILE</span>
</code></pre>
<p>Inside it, there should be a call to <code>oh-my-posh init pwsh</code>, similar to the following:</p>
<pre class="highlight"><code class="hljs powershell">oh-my-posh init pwsh --config <span class="hljs-string">"<span class="hljs-variable">$env:POSH_THEMES_PATH</span>\negligible.omp.json"</span> | <span class="hljs-built_in">Invoke-Expression</span>
</code></pre>
<p>The configuration file is passed as <a href="https://ohmyposh.dev/docs/installation/customize?shell=powershell">the value of the <code>--config</code> argument</a>. If that file is located in <code>$env:POSH_THEMES_PATH</code>, or if only the theme name without the extension is used (e.g., <code>--config negligible</code>), you&#39;re still using one of the provided themes without any modifications. In that case, copy the theme file from <code>$env:POSH_THEMES_PATH</code> somewhere else, e.g., to your home directory, before modifying it:</p>
<pre class="highlight"><code class="hljs powershell">cp <span class="hljs-variable">$env:POSH_THEMES_PATH</span>/negligible.omp.json ~/my.negligible.omp.json
</code></pre>
<p>If you do that, don&#39;t forget to modify the <code>oh-my-posh init pwsh</code> call in your <code>$PROFILE</code> file to point to the file in the new location, e.g.:</p>
<pre class="highlight"><code class="hljs powershell">oh-my-posh init pwsh --config <span class="hljs-string">"~\my.negligible.omp.json"</span> | <span class="hljs-built_in">Invoke-Expression</span>
</code></pre>
<p>Now you&#39;re ready to modify your configuration file, i.e., set the value of <code>console_title_template</code> to the template you want for the title. The following would set it to your current directory:</p>
<pre class="highlight"><code class="hljs json">{
  "<span class="hljs-attribute">console_title_template</span>": <span class="hljs-value"><span class="hljs-string">"{{.PWD}}"</span>
</span>}
</code></pre>
<p>This makes the tab title look like this:</p>
<p><img src="img/20251031-WinTermTitlePwsh.png" alt="Custom Windows Terminal tab title for PowerShell"></p>
<p>The setting value is formatted as an Oh My Posh template. You can learn about them <a href="https://ohmyposh.dev/docs/configuration/templates">in the documentation</a>, or simply choose one of the <a href="https://ohmyposh.dev/docs/configuration/title">common examples for tab title</a>.</p>
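<p>For example, a slightly richer template could combine the shell name with the current folder (a sketch based on the template properties listed in the documentation; adjust to taste):</p>
<pre class="highlight"><code class="hljs no-highlight">{
  "console_title_template": "{{ .Shell }} in {{ .Folder }}"
}
</code></pre>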
<p>The setting takes effect when you open a new PowerShell tab or reload the profile in an already open tab:</p>
<pre class="highlight"><code class="hljs powershell">. <span class="hljs-variable">$PROFILE</span>
</code></pre>
<p>If you&#39;re a regular user of Windows Terminal or PowerShell, as I am, you won&#39;t regret taking the time to configure automatic tab title updates. It doesn&#39;t take a lot of effort, and it makes switching between multiple tabs much more convenient because you can immediately recognize the one you&#39;re looking for just by its title.</p>
]]></description>
            <link>https://www.damirscorner.com/blog/posts/20251031-PowerShellTabTitleInWindowsTerminal.html</link>
            <guid isPermaLink="true">https://www.damirscorner.com/blog/posts/20251031-PowerShellTabTitleInWindowsTerminal.html</guid>
            <dc:creator><![CDATA[Damir Arh]]></dc:creator>
            <pubDate>Fri, 31 Oct 2025 00:00:00 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Installing 32-bit Xerox printer in Linux]]></title>
            <description><![CDATA[<p>With <a href="https://support.microsoft.com/en-gb/windows/windows-10-support-has-ended-on-october-14-2025-2ca8b313-1946-43d3-b55c-2b95b107f281">Windows 10 support officially ending</a>, I decided to install <a href="https://www.linuxmint.com">Linux Mint</a> on one of my older machines which isn&#39;t supported by Windows 11. The transition was really smooth for the most part with the exception of getting my <a href="https://www.support.xerox.com/en-us/product/phaser-6000/content/120854">Xerox Phaser 6000 printer</a> to work.</p>
<p>Linux Mint comes with a large collection of printer drivers, so <a href="https://www.reallinuxuser.com/how-to-setup-your-printer-in-linux-mint/">most printers should be recognized automatically and just work</a>. That was not the case for my printer. The add printer dialog suggested using the driver for the Xerox Phaser 6100 instead, but unfortunately my printer didn&#39;t like that: its error code indicated that I was using an incompatible printer driver.</p>
<p>On the bright side, Xerox offers <a href="https://www.support.xerox.com/en-us/product/phaser-6010/downloads?language=en&amp;platform=linux">a downloadable driver</a> for Debian-based Linux distributions like Linux Mint. However, it&#39;s a 32-bit driver and when I tried to install the DEB package, it failed due to missing dependencies. I tried to <a href="https://linux.die.net/man/8/apt-get">fix broken dependencies with <code>apt-get</code></a> as suggested in <a href="https://forums.linuxmint.com/viewtopic.php?p=694202&amp;sid=ad87bd366eb6e50e49dcd0aa1a198f31#p694202">a Linux Mint forum post</a> and it worked:</p>
<pre class="highlight"><code class="hljs bash"><span class="hljs-built_in">sudo</span> apt-get update
<span class="hljs-built_in">sudo</span> apt-get install <span class="hljs-operator">-f</span>
</code></pre>
<p>I tried to add the printer through the dialog again, and this time it was correctly recognized. But as soon as I tried to print a test page, it failed again, this time even before the print job was sent to the printer. That meant an error might be logged in <code>/var/log/cups</code>. And indeed it was:</p>
<pre class="highlight"><code class="hljs no-highlight">D [12/Oct/2025:11:06:29 +0200] [Job 10] Xerox-Phaser-6000B: error while loading shared libraries: libcupsimage.so.2: cannot open shared object file: No such file or directory
</code></pre>
<p>I followed the <a href="https://askubuntu.com/a/513944/512569">advice from Ask Ubuntu</a> and installed the missing 32-bit library:</p>
<pre class="highlight"><code class="hljs bash"><span class="hljs-built_in">sudo</span> apt-get install libcupsimage2:i386
</code></pre>
<p>That was enough to get my printer working. The test page printed successfully. And printing from apps worked as well.</p>
<p>Now it was time to share the printer so that I could use it from my Windows 11 machine. I spent several hours tinkering with <a href="https://www.samba.org">Samba</a> without success: I could get the file sharing to work, but not printing.</p>
<p>In the end, I got it working without Samba, <a href="https://www.zdnet.com/article/how-to-share-a-printer-on-linux-with-cups-and-samba/">using only CUPS and IPP</a>. I had to make the following changes in the CUPS configuration file, <code>/etc/cups/cupsd.conf</code>:</p>
<ul>
<li>To make CUPS accessible from the local network, I changed the line containing <code>Listen localhost:631</code> to <code>Listen 0.0.0.0:631</code>.</li>
<li>To allow access to the printers from the local network, I searched for the <code>&lt;Location /&gt;</code> block and added <code>Allow all</code> at the end to make it look like this:<pre class="highlight"><code class="hljs no-highlight">&lt;Location /&gt;
  Order allow,deny
  Allow all
&lt;/Location&gt;
</code></pre>
</li>
<li>To make the printer discoverable over the network, I changed <code>Browsing Off</code> to <code>Browsing On</code>.</li>
</ul>
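<p>Put together, the relevant parts of my <code>/etc/cups/cupsd.conf</code> ended up looking roughly like this (all other directives omitted):</p>
<pre class="highlight"><code class="hljs no-highlight"># Accept connections from the local network, not just localhost
Listen 0.0.0.0:631

# Advertise printers on the network
Browsing On

&lt;Location /&gt;
  Order allow,deny
  Allow all
&lt;/Location&gt;
</code></pre>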
<p>I restarted CUPS to apply the configuration changes:</p>
<pre class="highlight"><code class="hljs bash"><span class="hljs-built_in">sudo</span> systemctl restart cups
</code></pre>
<p>With these changes in place, it was time to connect to the printer from my Windows 11 machine. I opened <strong>Settings</strong> and navigated to <strong>Bluetooth &amp; Devices</strong>, and to <strong>Printers &amp; scanners</strong> from there. When I clicked on <strong>Add device</strong> next to <strong>Add a printer or scanner</strong>, the printer showed up:</p>
<p><img src="img/20251017-LinuxXeroxPrinterAdd.png" alt="Adding CUPS/IPP printer in Windows 11"></p>
<p>I clicked <strong>Add device</strong> and waited for a long time with the <strong>Connecting</strong> status shown. Then it was <strong>Installing...</strong> for a couple of seconds, and finally <strong>Ready</strong>. The printer was added to my list of installed printers. I could successfully print the test page and use it from applications.</p>
<p>The process of getting my printer working wasn&#39;t as straightforward as I expected, but in the end I got it to work. It would likely have taken me much less time if I had more Linux experience. But even without it, I succeeded thanks to the many resources available online, and I learned a few things in the process. Still better than replacing my old computer, printer, or both.</p>
]]></description>
            <link>https://www.damirscorner.com/blog/posts/20251017-Installing32bitXeroxPrinterInLinux.html</link>
            <guid isPermaLink="true">https://www.damirscorner.com/blog/posts/20251017-Installing32bitXeroxPrinterInLinux.html</guid>
            <dc:creator><![CDATA[Damir Arh]]></dc:creator>
            <pubDate>Fri, 17 Oct 2025 00:00:00 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Switching to NuGet trusted publishing]]></title>
            <description><![CDATA[<p>When the <a href="https://devblogs.microsoft.com/dotnet/enhanced-security-is-here-with-the-new-trust-publishing-on-nuget-org/">support for trusted publishing on NuGet was announced</a>, I decided to try switching to it for my side project as soon as possible. The process ended up being very smooth.</p>
<p>As described in <a href="20240927-NuGetPackagesAndReleasesInGitHubActions.html">a previous blog post</a>, I am publishing NuGet packages using GitHub Actions. Until now, I had a NuGet API key stored in secrets and used it to authenticate with NuGet:</p>
<pre class="highlight"><code class="hljs stylus">- name: Publish NuGet package  
  <span class="hljs-keyword">if</span>: <span class="hljs-function"><span class="hljs-title">startsWith</span><span class="hljs-params">(env.tag, <span class="hljs-string">'v'</span>)</span></span>  
  run: dotnet nuget push nupkg<span class="hljs-comment">/*.nupkg -k ${{ secrets.NUGET_API_KEY }} -s https://nuget.org
</span></code></pre>
<p>It worked perfectly fine for me. The only downside was having to renew the API key every year when the old one expired. Fortunately, NuGet reminds you about this in advance. It had only been a couple of weeks since I last had to do it, so I was very happy to learn that I might not have to do it anymore.</p>
<p>The necessary steps are well described in <a href="https://learn.microsoft.com/en-us/nuget/nuget-org/trusted-publishing">the documentation</a>.</p>
<p>First, a new policy has to be created in <a href="https://www.nuget.org/account/trustedpublishing">the Trusted Publishing section of your NuGet settings</a>. You need to provide the following data:</p>
<ul>
<li><strong>Policy Name:</strong> A name for the policy so that you will recognize it in the future. I used my project name.</li>
<li><strong>Package Owner:</strong> Your username or the name of the organization owning the package you will be publishing.</li>
<li><strong>Repository Owner:</strong> GitHub user or organization that owns the repository inside which the GitHub Actions workflow will be running.</li>
<li><strong>Repository:</strong> The name of the repository inside which the GitHub Actions workflow will be running.</li>
<li><strong>Workflow File:</strong> Name of the workflow (YAML) file inside <code>.github/workflows</code> directory for the workflow which will publish the package.</li>
<li><strong>Environment:</strong> Name of the GitHub Actions environment from which the package will be published. Can be empty if you&#39;re not using environments.</li>
</ul>
<p>Once you have created the policy, it&#39;s time to modify the workflow file. In my case I only had to:</p>
<ul>
<li><p>Add a new step before the package publishing step to retrieve a temporary NuGet API key using OpenID Connect (OIDC). <a href="https://github.com/marketplace/actions/nuget-login">The NuGet Login action</a> takes care of that. You only need to provide it with your NuGet username (not your email), preferably read from a secret (don&#39;t forget to add the secret to GitHub Actions if you use this approach):</p>
<pre class="highlight"><code class="hljs cs">- name: <span class="hljs-function">NuGet <span class="hljs-title">login</span> <span class="hljs-params">(<span class="hljs-keyword">get</span> temporary API key <span class="hljs-keyword">using</span> OIDC)</span>  
  uses: NuGet/login@v1  
  id: login  
  with:  
    user: $</span>{{secrets.NUGET_USER}}
</code></pre>
</li>
<li><p>Read the NuGet API key for the publishing step from the output of this new step instead of from a secret:</p>
<pre class="highlight"><code class="hljs stylus">- name: Publish NuGet package  
  <span class="hljs-keyword">if</span>: <span class="hljs-function"><span class="hljs-title">startsWith</span><span class="hljs-params">(env.tag, <span class="hljs-string">'v'</span>)</span></span>  
  run: dotnet nuget push nupkg<span class="hljs-comment">/*.nupkg -k ${{ steps.login.outputs.NUGET_API_KEY }} -s https://nuget.org
</span></code></pre>
</li>
</ul>
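<p>One more thing worth double-checking (an assumption based on how OIDC generally works in GitHub Actions, not a step from my workflow above): the job needs permission to request an OIDC token, e.g.:</p>
<pre class="highlight"><code class="hljs no-highlight">permissions:
  id-token: write   # allows the login step to request an OIDC token
  contents: read    # typically still needed by actions/checkout
</code></pre>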
<p>After you commit these changes to your repository, the workflow should successfully publish the NuGet package using the trusted publishing flow instead of the long-lived API key from the secrets. I tested the process by releasing a new patch version of my project.</p>
<p>When that succeeded, it was time for the final cleanup:</p>
<ul>
<li>I deleted the NuGet API key secret from GitHub Actions.</li>
<li>I deleted the API key from <a href="https://www.nuget.org/account/apikeys">the NuGet settings</a>.</li>
</ul>
<p>The trusted publishing flow was surprisingly simple to introduce into a working GitHub Actions workflow for NuGet package publishing. The few required changes are well documented. I would recommend that everyone publishing NuGet packages from GitHub Actions replace the persisted API key with this new flow.</p>
]]></description>
            <link>https://www.damirscorner.com/blog/posts/20251003-SwitchingToNuGetTrustedPublishing.html</link>
            <guid isPermaLink="true">https://www.damirscorner.com/blog/posts/20251003-SwitchingToNuGetTrustedPublishing.html</guid>
            <dc:creator><![CDATA[Damir Arh]]></dc:creator>
            <pubDate>Fri, 03 Oct 2025 00:00:00 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Selective VPN routing in Windows]]></title>
            <description><![CDATA[<p>I mostly work remotely and to gain access to many work resources, I need a VPN connection to my office. When you connect, all of your network traffic is routed through the VPN connection by default. This <a href="https://superuser.com/a/1698885/30513">can be reconfigured</a> so that only the traffic to your office network is routed through the VPN. If you also need to access other resources through your VPN, e.g., cloud-hosted services whitelisted for your office IP, you need to properly route traffic for those as well.</p>
<p>For me, it all started pretty simple, with one such service. I got its IP using <a href="https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/ping">the <code>ping</code> command</a> and added a new network route. I could have used <a href="https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/route_ws2008">the <code>route</code> command</a>, but I found it easier to use <a href="https://learn.microsoft.com/en-us/powershell/module/nettcpip/new-netroute">the <code>New-NetRoute</code> PowerShell cmdlet</a> because it can resolve the network interface by its alias and I don&#39;t have to search for the index of my VPN interface on the <code>route print</code> output:</p>
<pre class="highlight"><code class="hljs powershell">New-NetRoute -DestinationPrefix <span class="hljs-number">13.227</span>.<span class="hljs-number">180.4</span>/<span class="hljs-number">32</span> -InterfaceAlias MyVPN -NextHop <span class="hljs-number">192.168</span>.<span class="hljs-number">2.1</span>
</code></pre>
<p>With time, it got more complicated:</p>
<ul>
<li>There were domain names with more than one IP address. I could get all of those with a single call to <a href="https://learn.microsoft.com/en-us/powershell/module/dnsclient/resolve-dnsname">the <code>Resolve-DnsName</code> cmdlet</a> instead of multiple <code>ping</code> calls. But I still had to call <code>New-NetRoute</code> for each IP.</li>
<li>The IP addresses of some domains changed over time. Now I had to keep track of those, and every time it happened, I had to remove the routes for the old IPs using <a href="https://learn.microsoft.com/en-us/powershell/module/nettcpip/remove-netroute">the <code>Remove-NetRoute</code> cmdlet</a> before adding new ones.</li>
</ul>
<p>All of this required too much effort to maintain and became error-prone. It was time to write some scripts to simplify my life. After some thought and experimentation, I wrote the following PowerShell script and named it <code>New-NetRouteForHostname.ps1</code>:</p>
<pre class="highlight"><code class="hljs powershell"><span class="hljs-keyword">param</span>(
  [Parameter(Mandatory)]
  [string]<span class="hljs-variable">$Hostname</span>,
  [bool]<span class="hljs-variable">$Persist</span> = <span class="hljs-variable">$false</span>,
  [string]<span class="hljs-variable">$InterfaceAlias</span> = <span class="hljs-string">"MyVPN"</span>,
  [string]<span class="hljs-variable">$NextHop</span> = <span class="hljs-string">"192.168.2.1"</span>
)

<span class="hljs-variable">$addresses</span> = Resolve-DnsName <span class="hljs-variable">$Hostname</span> | <span class="hljs-built_in">Where-Object</span> {<span class="hljs-variable">$_</span>.Type <span class="hljs-operator">-eq</span> <span class="hljs-string">"A"</span>}
<span class="hljs-keyword">foreach</span> (<span class="hljs-variable">$address</span> <span class="hljs-keyword">in</span> <span class="hljs-variable">$addresses</span>) {
  <span class="hljs-keyword">if</span> (<span class="hljs-variable">$Persist</span>) {
    New-NetRoute -DestinationPrefix <span class="hljs-string">"$(<span class="hljs-variable">$address</span>.IPAddress)/32"</span> `
      -InterfaceAlias <span class="hljs-variable">$InterfaceAlias</span> -NextHop <span class="hljs-variable">$NextHop</span>
  } <span class="hljs-keyword">else</span> {
    New-NetRoute -DestinationPrefix <span class="hljs-string">"$(<span class="hljs-variable">$address</span>.IPAddress)/32"</span> `
      -InterfaceAlias <span class="hljs-variable">$InterfaceAlias</span> -NextHop <span class="hljs-variable">$NextHop</span> -PolicyStore ActiveStore
  }
}
</code></pre>
<p>It first calls <code>Resolve-DnsName</code> to get all addresses, i.e., <code>A</code> records, for a domain name. It then calls <code>New-NetRoute</code> for each one to add a new network route for it. The script outputs the routes it added.</p>
<p>The <code>$Persist</code> parameter allows different handling for domain names based on whether their IPs are static or not:</p>
<ul>
<li>For domain names with static IPs, I set it to <code>$true</code>, so that <code>PolicyStore</code> is not set and the route is persisted across reboots and VPN reconnections.</li>
<li>For domain names with IPs that change, I set it to <code>$false</code>. This sets the <code>PolicyStore</code> to <code>ActiveStore</code> so that the route isn&#39;t persisted.</li>
</ul>
<p>The <code>$InterfaceAlias</code> and <code>$NextHop</code> parameters allow me to call the script for a different VPN connection. Their default values match my office VPN, so I can usually simply omit them:</p>
<pre class="highlight"><code class="hljs powershell">New-NetRouteForHostname dynamic-ip.mydomain.net
</code></pre>
<p>Since <code>New-NetRoute</code> requires admin privileges, the script must be run in an elevated PowerShell prompt or by using <a href="https://learn.microsoft.com/en-us/windows/advanced-settings/sudo/">the <code>sudo</code> command</a>:</p>
<pre class="highlight"><code class="hljs powershell">sudo pwsh -Command New-NetRouteForHostname dynamic-ip.mydomain.net
</code></pre>
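<p>For the routes that aren&#39;t persisted, I sometimes need to clear out stale entries before re-adding the current ones. One blunt way to do that is to drop all non-persisted host routes on the VPN interface (a hedged sketch, not part of my actual script; the interface alias matches my setup and requires elevation as well):</p>
<pre class="highlight"><code class="hljs powershell"># Remove all active-store (non-persisted) host routes on the VPN interface,
# then re-run New-NetRouteForHostname for each hostname with changing IPs
Get-NetRoute -InterfaceAlias MyVPN -PolicyStore ActiveStore |
  Where-Object { $_.DestinationPrefix -like "*/32" } |
  Remove-NetRoute -Confirm:$false
</code></pre>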
<p>I know that the script is far from perfect, but it&#39;s good enough for now. If needed, I can always improve it further.</p>
]]></description>
            <link>https://www.damirscorner.com/blog/posts/20250926-SelectiveVpnRoutingInWindows.html</link>
            <guid isPermaLink="true">https://www.damirscorner.com/blog/posts/20250926-SelectiveVpnRoutingInWindows.html</guid>
            <dc:creator><![CDATA[Damir Arh]]></dc:creator>
            <pubDate>Fri, 26 Sep 2025 00:00:00 GMT</pubDate>
        </item>
    </channel>
</rss>