Helm Chart for Kubernetes Deployment #255
@briantopping, certainly, whatever we can do to show people reasonable, working, and secure deployments of containerized FreeIPA is helpful. The initial focus of this project and repo was on bare-bones docker-based setups; then we added support for Atomic host. In the last couple of months, I've been adding and hardening the tests for basic docker-based setups in Travis CI, so anything for which we could have automated tests / CI would be a plus. Is there anything that makes sense to add to this repository, or would a separate repo be easier? |
That's exciting, thanks @adelton. The primary Helm repo is at https://github.com/helm/charts, and that one comes pre-configured for the client to install from. Other repositories are not unheard of, but adding a repository is an additional step that must be taken before a chart can be used, which can be cumbersome if it is only for one product. I'm open to anything tho.

That sounds like great progress on the system, congrats! I've learned from your Docker code and also from the efforts to deploy it in Kubernetes. It certainly could not have happened without your work. I have seen that there were efforts for OpenShift, but admittedly the JSON format has caused me to avoid looking inside. It's something I need to do, as there is no sense having needlessly divergent deployment strategies.

I've most recently been looking at https://libreswan.org/wiki/IKEv1_XAUTH_with_FreeOTP_and_FreeIPA and it's struck me that it might be good to isolate the IPA client build from the FreeIPA deployment that uses it. There's a lot of momentum in Kubernetes service authentication (Istio, for instance). Knowing that this is interesting, I'll start putting something together and we'll revisit as it's working! |
For the IPA client container inspiration, look at the |
@adelton I was in airports the last few days so am just getting a chance to look over your generous notes. My first go at the container before writing above was to take the CentOS build for the server and strip out the server parts. It looks like they use different strategies to get the container booted.

What I guess I am wondering: is some "grand unification" possible? By that I mean creating a core container that the server build is based on and that is known to work for clients. Admittedly, this is a gratuitous optimization; things work as they are today. On the other hand, if there's a single core container that "just works" for a baseline deployment of server or client, a developer knows that if what they are doing doesn't work, it's probably their fault (which is actually a great confidence builder).

I don't know the ins and outs of the container and client. In non-container environments, everything just works. I'm kind of pushing the use cases in the effort of getting to that same level of reliability. Doing the bootstrap in Go is something that comes to mind so it can be more (shall we say) "deterministic". Indeed, this is all just speculation at this point, looking for ways to strengthen the foundation in the process of solving existing problems. I'm learning here, but also looking to contribute with patterns that are scalable to other projects over time. |
From my experience, the server and client are completely separate things and are best treated as separate. For the server, you don't assume any further modifications of that image. It's supposed to be an application which makes some services available on network ports; you don't care about the internals, and it just works. Think of a PostgreSQL container -- it's a single-purpose container. The server container is systemd-based, and that is not going to change any time soon, so the behaviour of the container is driven by systemd.

For the client, on the other hand, you assume that it will be a building block of some additional application. That application will likely not be running under systemd (in the container), so that already gives you completely different modes of operation. You likely expect fast startup time for the client container. |
That's all been very helpful for understanding the scripts that are there and consolidating them a bit for the needs I have. I'm in more of a Kubernetes environment.

I'm wondering if you could explain https://github.com/adelton/webauthinfra/blob/0f8604b4d71007f51888571a137521c5938c162c/src/init-data#L24 -- it seems that it is intended to run when the container is already set up and just restarting. If I read it correctly, it's copying files from the data volume.

EDIT: I see, it is copying the files back to the root volume. I got used to the pattern of the current server build where the data volume was symlinked from root. It'll be good to reconcile everything; leaving the rest of the comment as a status update, no response necessary. |
Is this about the IPA server container or about IPA client?
The main reason was that https://github.com/adelton/webauthinfra is a development / testing environment. So after the initial setup is done (and the data volume is populated), subsequent starts can just copy the files back rather than rerunning the setup. |
Thanks Jan, I did reason that out after staring at it some. I see now how it fits together. Very excited for this! Thanks again for your help! |
One thing I just considered: it might be very interesting if the IPA installers were able to check for an existing symlink before installing any given file, then put the file in the place the symlink pointed to in case one existed. In this manner, the installed files would land on whatever volume the symlinks point to. |
Are you talking about IPA server or client? |
Seems like that technique would work for either client or server, for both an initial server and a replica install. |
@briantopping I see that this issue was closed in November. Was any progress toward a Helm Chart made? I also need to deploy FreeIPA into a Kubernetes environment and would love to manage it with Helm. I'm happy to contribute back to the OSS effort if you have a starting point. |
Any update? |
@rojopolis @codejamninja Reopening this now that I have some mileage with the scripts. I'll start posting what I have in a separate repo that we can work on together until it's stable. The good news is I've been using it for a few months and don't feel like I'm going to be doing folks a disservice by sharing them. The bad news is I haven't started the templates at all yet; they are just manifests. Here's where I know they need work:

- Logging needs to be fixed. It's currently just filling up the PV with noise and they aren't propagating logs like good pods should.
Something that would be a "nice to have" is to build it with the Operator Pattern out of the box. If anyone else feels strongly about this and has done it before, I would love to help, but I'm not sure I'm ready to lead on that front. I'll add a followup as soon as I have a repo together! I think we can make pretty short work of this problem and bring the sources back to this repo. |
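As a point of reference for the templating work mentioned above, here is a rough sketch of what one of those raw manifests could look like once templated into a chart. Everything here is an illustrative assumption (the `freeipa.fullname` helper, the names, the port list), not the actual manifests:

```yaml
# templates/service.yaml -- hypothetical layout for such a chart
apiVersion: v1
kind: Service
metadata:
  name: {{ include "freeipa.fullname" . }}   # assumes a helper in _helpers.tpl
  labels:
    app.kubernetes.io/name: {{ .Chart.Name }}
spec:
  clusterIP: None                            # headless, to pair with a StatefulSet
  selector:
    app.kubernetes.io/name: {{ .Chart.Name }}
  ports:
  - name: https
    port: 443
  - name: ldaps
    port: 636
  - name: kerberos
    port: 88
```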
@briantopping can you please share your scripts? I have a lot of Helm experience and would find them extremely useful. I would be willing to contribute. |
Yes I’d like to take a cut at it first. I was impressed by the csi-ceph layout and will probably emulate that unless you have strong opinions about it. |
Of the issues listed in #255 (comment), are any of them generic FreeIPA containerization issues that should be handled in this repository? For example, you mention "Logging needs to be fixed. It's currently just filling up the PV with noise and they aren't propagating logs like good pods should." but I'm not sure if you're talking about your Kubernetes/Helm setup or what freeipa-container does in general. |
@adelton It may have to do with the specifics around how Kubernetes expects logs; I think everything just goes out to the console. I'm guessing the container may need to be parameterized if logs to the console are undesirable outside of a k8s environment. That's not difficult. I was mostly making mental notes; apologies if that was out of context. |
@briantopping do you have an ETA on when you'll post the scripts? I don't really care if they're nice, I just kinda want to dig through what you have. I'm OK with the csi-ceph layout, although I think it would be nice to add support for Rancher charts. I have a nice generator for Helm charts I use a lot: https://www.npmjs.com/package/generator-helm-chart. You can also take a look at my personal Helm charts if you need inspiration. |
I was thinking this weekend? Def not trying to be clingy with them. I'm thrilled you have interest, and I want to learn by seeing your impressions, rather than you not really engaging with it and only appreciating where things end up. I hope that makes sense. |
I'm sorry @briantopping I didn't get you. Looking forward to this weekend! |
@codejamninja I am moving with this as promised and it's at the top of my stack until it's done or client work starts, which I don't have scheduled at the moment. As I review my work from some months ago in the context of what I know now, a straight Helm template isn't going to cut it. The good news is I have a design that I am prototyping; the bad news is what I've done in the past isn't really usable without documentation that will be worthless in short order. So I don't have anything for you besides this right now. Will post here when I do! |
Update from here: spent most of this week in discovery mode on census-instrumentation/opencensus-proto#200. At this point, I'm going to get back into the actual controller as a Go/dep build. Hopefully there's some resolution there; I'd like to release this using Bazel. |
Been a couple of months here since the last update, and since I'm a glutton for punishment, I just deployed 1.15 onto a CentOS pacemaker, corosync, pcsd cluster with GFS2/DRBD underlying for |
Yah sorry about that. Summer came and I ended up with endless projects on the house. I have started and pushed the operator, it looks good so far, but it's not... quite... there... Hmm. |
Yeah you'll have that :). Nothing public to read short of this thread? I kinda have an itch to at least get something deployed. I don't know that I'd want to have a single container... Maybe a nice statefulset indexed like they do those in the prom operator with types for ldaps and dnses and certmongers and so on and flags for all of the relevant tunables :). Lots of work man. |
Please do feel free to help with the repo: https://github.com/briantopping/freeipa-operator. It has some whizzy Go in there, but the point is to make it so the declarative aspects of creating a stable FreeIPA deployment are separated from the programmatic ones of understanding (for instance) what's already been deployed. It's a pretty tricky situation; I certainly do not want all the fancy operator poo if it ends up wiping out my corporate identity system! Eventually, the operator should provide for:

- Scaling instances, starting with zero instances and protecting the last instance
- Managing intentional version migration rather than random upgrades because the cluster was rebooted
- Backups
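To make those goals concrete, a custom resource consumed by such an operator might look roughly like this. The API group, kind, and every field here are hypothetical illustrations, not the actual freeipa-operator types:

```yaml
apiVersion: freeipa.example.org/v1alpha1   # hypothetical API group
kind: FreeIPA
metadata:
  name: corp-idm
spec:
  replicas: 2              # scaling; the operator refuses to delete the last instance
  version: "4.8.4"         # explicit pin, so upgrades happen only when this changes
  backup:
    schedule: "0 2 * * *"  # nightly backups managed by the operator
```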
@ShuttR: I agree with all the other stuff you're proposing as well!! My manual deployment of FreeIPA is as a StatefulSet; I just need to get the YAMLs working in the operator context. |
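For readers following along, a minimal sketch of running the server as a StatefulSet might look like the following. This is not the deployment referenced above, just an illustration assuming the freeipa/freeipa-server image with its /data volume convention; the realm, hostname, and storage size are placeholders:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: freeipa
spec:
  serviceName: freeipa            # headless Service providing stable per-pod DNS names
  replicas: 1
  selector:
    matchLabels:
      app: freeipa
  template:
    metadata:
      labels:
        app: freeipa
    spec:
      containers:
      - name: server
        image: freeipa/freeipa-server:latest   # pin a specific tag in production
        args: ["ipa-server-install", "-U", "-r", "EXAMPLE.TEST", "--no-ntp"]
        env:
        - name: IPA_SERVER_HOSTNAME
          value: ipa-0.freeipa.default.svc.cluster.local
        volumeMounts:
        - name: data
          mountPath: /data        # all server state persists under /data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```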
I'll TAL man! Maybe this weekend!
|
@abbra Heya. Read through this today: https://www.freeipa.org/page/Troubleshooting/PrivilegeSeparation. Should httpd even be trying to open /etc/krb5.keytab? Something's horked, methinks :( . Probably hostname/DNS conjanglement. But this is what it is unless I hack some records into CoreDNS. It may simply be easier to use host networking... then most of this goes away, since DNS is what it is for the host. But it will eat up ALL of those ports as INADDR_ANY, though, if I'm not mistaken :( And that's a lot of ports!

EDIT - But oh wow, if I use host networking, we're going to be in a different situation. Then I believe I couldn't enroll the actual kubelet server, since it would be the same hostname and ergo identity as the master itself. Perhaps I could play games with keys and mounting them in both the kubelet and the container, but that sounds iffy too... |
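For reference, the host-networking trade-off being weighed here is a two-field change in the pod spec -- a sketch, with the caveat from above that every FreeIPA port then binds on the node itself:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: freeipa-server
spec:
  hostNetwork: true                     # pod shares the node's network namespace
  dnsPolicy: ClusterFirstWithHostNet    # keep cluster DNS resolution working
  containers:
  - name: freeipa-server
    image: freeipa/freeipa-server:latest
```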
@codejamninja Howdy. I have shared pretty much all I have so far above :) You'll have to ask @briantopping for his repo, wherein he has some skeletons of a controller using kubebuilder and some templates for the primitives. Hell, I'm just trying to find an architecture that works inside of Kubernetes and doesn't require so much fudging that it's basically a Rube Goldberg machine to begin with. Part of me is really starting to wonder how things are going with KubeVirt and if this isn't really the best possible test case in my lab for that tech :) |
@ShuttR I'm sure many have tried and failed to get FreeIPA working on Kubernetes. Honestly, I'm quite sure it's doable; it's just a matter of pushing through. I have no idea if this was encouraging or discouraging. Hopefully, it's encouraging. |
@codejamninja I have been using FreeIPA as a StatefulSet for about a year now. It's just creating the operator for it... |
I mean, all that's really left for me is the PTR record... and right now on the local IPA DNS, it's NXDOMAIN. I just straced and captured the entire thing from soup to nuts. (FYI, kubectl plugins are great. kubectl sniff actually spins up tcpdump in a remote container, and it's a plugin available with krew.)
Well there's some stuff to sift :P I have work work to do, so not tonight for me :P |
@briantopping Yeah, I have no idea how I'm on the failbus here... The latest: I got it to fwd/rev everything by dynamically adding an --ip-address argument in my shim script. So at this point the hostname is 100% copacetic (ish), but I still seem to have a broken GSS auth module in httpd :( . 100.64 is me using CGNAT for the pod network; 192.168.101 is my internal IPv4 :( [root@ipa01 /]# for ipaddr in |
Slooooowwww down... This is not working code yet! Y'all asked for a link to WIP, I provided it. It still needs work. If you need something that works out of the box, it's not there yet, at all... |
I'm good... just truckin' along :P I had it working as a StatefulSet, but there were still fwd/rev issues. I know I can get it there. Strange that there are no forward/reverse issues now and it's more not working :) . I'm definitely not seeing httpd interact with gssproxy when it starts or is hit; I think that's what is supposed to happen... I do see the keytab in the /var/.../gssproxy dir, so grr... After that, I'll switch over to making it stampable and getting into the controller. |
Getting into the code, it's not hitting gssproxy... Now we're cooking with gas I think! But it's way too late :) |
I think we get the 'SPNEGO cannot find mechanisms to negotiate' from mod_auth_gssapi when httpd cannot use a keytab. Now, in FreeIPA 4.5+ that should be correct -- the process running as the Apache user has no direct access to the keytab, and gssproxy is supposed to perform the GSSAPI operations on its behalf. A few assumptions here:
You are saying there is no keytab in |
Firstly, I really appreciate your assist on this. You obviously know every last bit of this code intimately, and you and the team ought to be quite proud. This is great stuff. It's too bad it got perfected when the cloud basically declared victory and nobody has accounts on anything anymore, but that's life :) . Thank you so much for your time with this; once I can get over whatever pain this is, making it work for others might not be so difficult :P So, the checklist...
[1]:
[2]:
[3]:
[4]:
|
Ok, thanks for the confirmation. Gssproxy wouldn't be triggered by your kinit/ipa ping runs from root. It doesn't support interposing root processes. It is also not triggered for any other access unless there is a variable GSS_USE_PROXY set to some value (yes) in the environment of that process. Could you please show the output of systemctl cat httpd?
It should have Environment=GSS_USE_PROXY=yes. |
It does, I checked httpd's environment:
```
[Service]
Environment=KRB5CCNAME=/tmp/krb5cc-httpd
Environment=GSS_USE_PROXY=yes
Environment=KDCPROXY_CONFIG=/etc/ipa/kdcproxy/kdcproxy.conf
ExecStartPre=/usr/libexec/ipa/ipa-httpd-kdcproxy
```
And double checked here:
```
[root@ipa01 /]# pgrep httpd
4536
<snip>
[root@ipa01 /]# cat -v /proc/4536/environ
LANG=C^@path=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin^@NOTIFY_SOCKET=/run/systemd/notify^@KRB5CCNAME=/tmp/krb5cc-httpd^@GSS_USE_PROXY=yes^@KDCPROXY_CONFIG=/etc/ipa/kdcproxy/kdcproxy.conf^@
```
|
I am having a hard time finding a repository with podman image-building scripts and Kubernetes YAMLs. Am I missing something? Looking to see how I can contribute to FreeIPA on a Helm chart. |
Building with podman is the same as building with docker -- the podman command line is compatible with docker's. A pod-defining YAML file to get you started on Kubernetes could be, for example, something like the sketch below. |
|
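A minimal sketch of such a pod definition -- illustrative only, assuming the freeipa/freeipa-server image with its /data convention, a placeholder realm and hostname, and a throwaway emptyDir volume (use a PersistentVolumeClaim for anything real):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: freeipa-server
spec:
  containers:
  - name: freeipa-server
    image: freeipa/freeipa-server:latest
    args: ["ipa-server-install", "-U", "-r", "EXAMPLE.TEST", "--no-ntp"]
    env:
    - name: IPA_SERVER_HOSTNAME
      value: ipa.example.test
    volumeMounts:
    - name: data
      mountPath: /data      # server state; emptyDir means it vanishes with the pod
  volumes:
  - name: data
    emptyDir: {}
```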
The whole point of diverging into an operator was the benefits that would come to managing FreeIPA clusters. It would be great if FreeIPA were simple and orthogonal enough that HPA was an option and things like that, but dependencies on legacies like dbus make that a difficult ask. They were good directions at the time that have led to difficult containerization options today.
I have no excuse for the delay on this except for the aforementioned summer projects, the last one being installing a six-person hot tub. Northern-hemisphere summertime has come to a close. I'm happy to work intensively with others who have competency in Golang to get this finished.
|
If I have your YAML configured into a Kubernetes StatefulSet, has there been any work on automatically joining the members? Any designs? I've seen the use of a Kubernetes Job to do this in cockroachdb (they have certificates). |
@fire I pushed the branch at https://github.com/briantopping/freeipa-operator/tree/WIP. This code is where I left it the last time I worked on it, which was not compiling. Apologies for the state. StatefulSets are just types managed by an internal operator, so to speak. So the goal here was to do the same thing: to get our operator code managing the lifecycle of FreeIPA pods. In turn, we'd be able to manage the storage and peering relationships more delicately, hopefully ensuring the process is flawless for the user and enabling higher-level capabilities like backups and version upgrades more closely tied to actual instance health. |
What is the problem with dbus specifically? I can imagine that the general fact that the FreeIPA container is multi-daemon and systemd-based is a problem, but as for dbus, I feel we have ironed out the long-standing issue with certmonger startup in #283, and that was not really an issue caused by dbus, it just manifested there. |
I'd like to see working and stable FreeIPA replica creation first, before attempting to do multiple replicas with StatefulSets. I've added support for that via 506f523, but I've only tested it on OpenShift, and we don't have any sort of CI around that -- we've recently lost even the FreeIPA master in OpenShift/Kubernetes automated testing that we had. If someone is willing to set up a testing environment where we could test FreeIPA servers and replicas in multiple versions of OpenShift and Kubernetes with the existing code, it would greatly boost our ability to add and sustain support for StatefulSets. |
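For anyone wanting to experiment with that replica path, here is a rough sketch of what a second pod might run. This assumes (it is not confirmed above) that the container accepts ipa-replica-install arguments the same way it accepts ipa-server-install ones; hostnames, realm, and password handling are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: freeipa-replica
spec:
  containers:
  - name: freeipa-server
    image: freeipa/freeipa-server:latest
    args:
    - ipa-replica-install
    - -U
    - --principal=admin              # enrolls against the existing master
    - --admin-password=Secret123     # placeholder; mount a Secret instead
    env:
    - name: IPA_SERVER_HOSTNAME
      value: replica.example.test
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: freeipa-replica-data
```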
Why does Quay not have versions of FreeIPA yet? My k8s cluster just died with expired certificates, and when I rebooted, it appears to have pulled a different image version with different filesystem expectations or something. I knew this was going to happen someday and have been noting it for at least a year. It doesn't matter much what we do here if a well-intended change to the only docker image people can reference blows up user deployments. I simply don't want the latest version if I am not expecting it. Most production environments are the same. |
The Quay plan is being discussed in #246. In general, though, if you want to use a specific version in production, push (and tag) that specific version to your own registry. Do not rely on external registries to be available or to have the image the next time you need it, no matter if it's Docker Hub or Quay. |
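In pod-spec terms, that advice amounts to never referencing a floating tag. A sketch of the relevant fragment, with a hypothetical internal registry and a placeholder digest:

```yaml
containers:
- name: freeipa-server
  # An immutable digest reference can never silently change under you,
  # unlike :latest, which may resolve differently after a node reboot.
  image: registry.internal.example.com/freeipa-server@sha256:0123456789abcdef...
```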
There are many arguments against that position, and they are relevant to this effort. If this operator is ever to be idempotent and the backing repository is not idempotent, then the operator will have to contain its own registry, since the operator can't depend on one in the environment. No operators do this -- if it were an expected requirement, it would be part of the operator toolkits.
The point is that when a cache is not reliable, the dependency for reliability gets pushed out to the clients. This increases complexity by an enormous amount. Does this operator now need to start including its own registry? Can you find me other operators that have registries built into them? |
Initially this issue was about a Helm chart, not a Helm operator. I'm not sure why the operator couldn't depend on the repository (registry) in the environment. In any case, this is not as much about where the registry lives as about what's in there and how it's tagged. It should be just as possible to manually push (and tag) the "good" images into Docker Hub, to a different namespace. The experience with Red Hat is that it provides curated and tagged images.

Working on freeipa-container outside of my work duties, I don't have the capacity to manually tag new things. I'm happy when a failed Travis CI cron job reminds me that new packages were pushed to yum repositories that broke the setup ... and even that does not always catch everything early enough, and then users experience build issues, as yesterday's case #280 (comment) shows. I'd expect members of the community to do the curating and tagging if they deem it important. I suggest using #246 for discussions around that. It has been silent for the past half a year. |
As documented above, a Helm chart is unfortunately not capable enough. I completely understand the problem around "spare time after work". :) I didn't know that there was a production repository; the full link for the browser is https://access.redhat.com/containers/?tab=tags#/registry.access.redhat.com/rhel7/ipa-server. That should be sufficient for the needs of either a Helm chart or an operator to provide a reliable experience for users. I'll close #246 with that info. |
Hello @briantopping, what is the story and status here? Does this issue need to stay open, or can we close it, potentially with a link somewhere where the work is being done? |
I'm interested in putting together a Helm chart for deploying FreeIPA in Kubernetes. Here are a few reasons why I think this would be a good idea for the community:
#154 (comment) is an example of the kind of knowledge that could be embodied in a Helm chart. I have been considering such a pattern for my own production deployment, but I have been concerned that it opens other challenges that I am not prepared for. I think this project would provide an open forum for debating such patterns, making sure that the investment by core team members in answering specialized questions gets rolled back into deployable artifacts.
Of course, this project would "stand on the shoulders of giants". None of it would be possible without the great work that has already gone into containerizing FreeIPA, so thanks again for that.
Thoughts? I would like to see the effort eventually under the umbrella of this GitHub organization, so it makes sense to open up discussion early.