Create Isolated Public Facing Services in Incus + Netbird + PhoenixNAP

The purpose of this post is to support discussion for the ‘Create Isolated Public Facing Services in Incus + Netbird + PhoenixNAP’ blog post.

Let me know if you have comments, questions or concerns.

For chuck-stack priority support and training, Join Now. To learn more about the stack-academy, view the stack-academy page.

So Chuck, why doesn’t Incus ship with a ‘no traffic’ policy by default? While your prescription of blocking RFC1918 address space somewhat addresses the issue, wouldn’t we want to start with a deny-all-by-default policy and then only allow what we strictly need?

That is a great question! I am not close enough to the project to speak on their behalf. Here are my thoughts:

  • There is some inherent convenience in the current all-access default
  • It could be that most organizations deploy instances in a semi-protected environment (rather than on the open web, as I did)
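
That said, if you want a deny-by-default posture today, Incus network ACLs can approximate it. The following is only a sketch: the bridge name (incusbr0), ACL name, and allowed ports are assumptions, and ACL support varies by network type, so check the Incus documentation for your setup.

```bash
# Sketch: approximate deny-by-default on an Incus managed bridge using a
# network ACL. Names (incusbr0, deny-default) are assumptions.

# Create an ACL; the default actions set below decide what happens to
# traffic that matches no rule.
incus network acl create deny-default

# Allow only what you strictly need, e.g. DNS and HTTPS egress.
incus network acl rule add deny-default egress action=allow protocol=udp destination_port=53
incus network acl rule add deny-default egress action=allow protocol=tcp destination_port=443

# Attach the ACL to the bridge and reject everything that matches no rule.
incus network set incusbr0 security.acls=deny-default
incus network set incusbr0 security.acls.default.egress.action=reject
incus network set incusbr0 security.acls.default.ingress.action=reject
```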

In my experience, the incus discussion forum is welcoming of such questions.

I hope this helps!

Chuck

@bmullan and I were having a side-bar on how to use Incus for true multitenancy (legally distinct entities sharing the same hardware), and I posit that it’s really not set up for that. When I offer multitenancy to clients, I intend to dedicate the hardware, switch ports, and VLANs to each legal entity so I don’t have to worry about Incus limitations. Though if I really wanted to ‘risk’ it, I guess what I would do is bind the Incus bridges to specific VLAN interfaces, since as an MSP I control the VLAN assignment. I would just need to hamstring the client from being able to make network-layer changes.
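
For what it’s worth, here is a rough sketch of what binding a client’s Incus traffic to a dedicated VLAN could look like. The interface, VLAN ID, and network/profile names (eno1, VLAN 100, clientA-br) are assumptions, not a recommendation:

```bash
# Sketch: tie one client's Incus bridge to a dedicated VLAN sub-interface.

# Create the VLAN sub-interface on the host (iproute2 shown; netplan or
# systemd-networkd work just as well).
ip link add link eno1 name vlan100 type vlan id 100
ip link set vlan100 up

# Create a managed bridge for the client and attach the VLAN interface, so
# instance traffic on this bridge only exits the host via VLAN 100.
incus network create clientA-br bridge.external_interfaces=vlan100 \
    ipv4.address=none ipv6.address=none

# Put the bridge in a client-specific profile so their instances land on it.
incus profile create clientA
incus profile device add clientA eth0 nic network=clientA-br name=eth0
```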

I’ll check out the user forum.

What about the Incus server => project architecture prevents it from being ‘legally’ isolated?

Separation has to be provable. If you’re trying to do that via subnets, what do you do when N clients want to use the same IP ranges? I haven’t dug into how fine-grained Incus permission sets are. In VMware this is trivial to do, but most Linux tools assume everyone plays nice or that one party has full control.
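
On the permission question, Incus does have restricted projects, which fence a tenant into its own profiles and networks and cap what it can request. A sketch of what the knobs look like (project name, limits, and subnet are assumptions; it does not by itself solve overlapping IP ranges):

```bash
# Sketch: a restricted Incus project for one client. Values are assumptions.
incus project create client-a \
    -c features.images=true \
    -c features.profiles=true \
    -c features.networks=true \
    -c restricted=true \
    -c restricted.devices.nic=managed \
    -c limits.memory=32GiB \
    -c limits.cpu=8

# Optionally constrain which uplink subnets the project's networks may use
# (applies to OVN-style setups).
incus project set client-a restricted.networks.subnets=10.10.0.0/24
```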

Guys,

For about a year, I rented an incus ‘project’ space from Stéphane on the same cluster that he uses to host the incus images. The situation worked quite well. Here are the notable details:

  • I added the remote server to my local CLI client
  • I interacted with it just like any other incus server
  • He had Keycloak configured to authenticate me (which was a nice touch)
  • The only restriction he placed on me (other than hard limits on memory, CPU, and storage) was that I could only create VMs (not containers). He needed the ability to move my project from node to node on the cluster without downtime for me, because he has a practice of applying patches and restarting every node weekly.
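
Roughly, the client-side workflow looked like the sketch below. The remote name, URL, and image alias are assumptions for illustration:

```bash
# Sketch: add the hosted cluster as a remote, authenticating via OIDC
# (Keycloak sat behind this in my case), then use it like any other server.
incus remote add hosted https://cluster.example.com:8443 --auth-type oidc

# My project only permitted virtual machines, hence the --vm flag.
incus launch images:debian/12 hosted:my-vm --vm
incus list hosted:
```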

Does this help?

Regards,

I cannot speak to your details. What I can speak to is how I work with Stéphane. I have a couple of customers that use Incus. They do nothing heavy with it, and all networks are private. I prepay for his support hours, and I ask questions publicly when possible (for the benefit of everyone in the community). His support has been nothing short of amazing.

He did not ask for this arrangement. I volunteered because this is how I believe we can best support an open source project like incus. It ensures the right people are properly incentivized.

If you are considering building a business around incus, I feel a couple of hours of paid support would be beneficial. In my experience he is happy to jump on a video call and discuss all matters related to incus (small and large).

I hope this helps! Chuck