
Outputs for Kubernetes Terraform provider #1165

Closed · liamwh opened this issue Jan 10, 2024 · 9 comments

@liamwh commented Jan 10, 2024

Description

I would love to be able to configure the kubernetes Terraform provider using outputs from kube-hetzner, something akin to the following for DigitalOcean:

provider "kubernetes" {
  host                   = digitalocean_kubernetes_cluster.veloxide-k8s-cluster.endpoint
  token                  = digitalocean_kubernetes_cluster.veloxide-k8s-cluster.kube_config[0].token
  cluster_ca_certificate = base64decode(digitalocean_kubernetes_cluster.veloxide-k8s-cluster.kube_config[0].cluster_ca_certificate)
}

Which comes from:

resource "digitalocean_kubernetes_cluster" "veloxide-k8s-cluster" {
  name                             = "k8s-1-28-2-do-0-ams3-1703291481360"
  region                           = "ams3"
  auto_upgrade                     = true
  version                          = data.digitalocean_kubernetes_versions.k8s.latest_version

  node_pool {
    name       = "veloxidenodepool"
    size       = "s-2vcpu-4gb"
    auto_scale = true
    min_nodes  = 1
    max_nodes  = 5
  }
}
@mysticaltech (Collaborator)

@liamwh That would be really nice indeed. @kube-hetzner/core FYI.

@valkenburg-prevue-ch (Contributor)

You can already do that:


provider "kubernetes" {
  host                   = module.kube_hetzner.kubeconfig_data.host
  client_certificate     = module.kube_hetzner.kubeconfig_data.client_certificate
  client_key             = module.kube_hetzner.kubeconfig_data.client_key
  cluster_ca_certificate = module.kube_hetzner.kubeconfig_data.cluster_ca_certificate
  ignore_annotations = [
    ".*cattle\\.io.*",
  ]
  ignore_labels = [
    ".*cattle\\.io.*",
  ]
}

And I did that for a while, before concluding that it was an unstable setup. Occasionally the provider would try to initialize before the cluster was built, and I would be stuck with a Terraform state that refused to build the cluster because not all providers were fully configured.

I ended up separating my cluster configuration into three "independent" Terraform folders, each with its own state, and I run them in sequence:

  1. (cluster) kube.tf
  2. (core infra on the cluster) with the kubeconfig.yaml from step 1, set up Longhorn, HashiCorp Vault, a service mesh, etc. (see the sketch below)
  3. (applications) with the configured Terraform Vault provider (only possible after finalizing step 2), set up all applications that need Vault, etc.
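
For illustration, a minimal sketch of how stage 2 consumes stage 1's output (hedged: it assumes stage 1 writes kubeconfig.yaml to a known path, e.g. via the module's local_sensitive_file resource; the relative path here is hypothetical):

# Stage 2 provider config: read the kubeconfig file written by stage 1.
# Because this folder has its own state, the file already exists on disk
# by the time these providers initialize, avoiding the chicken-and-egg
# problem described above.
provider "kubernetes" {
  config_path = "../cluster/kubeconfig.yaml" # hypothetical path from stage 1
}

provider "helm" {
  kubernetes {
    config_path = "../cluster/kubeconfig.yaml"
  }
}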

So you can do it, but I concluded it was not the best solution. Your mileage may vary...

@liamwh (Author) commented Jan 16, 2024

Amazing, will give this a go and report back, thank you very much!

@liamwh (Author) commented Jan 19, 2024

I am getting this error often; any idea what I can do about it?

module.kube-hetzner.data.remote_file.kubeconfig: Refreshing...
module.kube-hetzner.null_resource.kustomization: Refreshing state... [id=1680158679973432701]
module.kube-hetzner.null_resource.configure_autoscaler[0]: Refreshing state... [id=4638238659851963355]
module.kube-hetzner.null_resource.configure_floating_ip["3-0-egress"]: Refreshing state... [id=4273720719089111240]
module.kube-hetzner.data.remote_file.kubeconfig: Refresh complete after 4s [id=88.99.36.56:22:/etc/rancher/k3s/k3s.yaml]
module.kube-hetzner.local_sensitive_file.kubeconfig[0]: Refreshing state... [id=b6cb6f78b4f1a23598db3e2f8de60b983224b5c3]
╷
│ Error: Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
│ 
│ 
╵
Operation failed: failed running terraform plan (exit 1)

@mysticaltech (Collaborator)

@liamwh Did you sort this out? Either way, could you share the code you used to do that (without sensitive values, of course)? It would be greatly appreciated.

@andi0b (Contributor) commented Jan 27, 2024

It would be great to have access to many more things via outputs, the node pools as well. While experimenting to work around my Longhorn volumes issue (#1195), I added the following to the outputs, for example, so I can attach volumes to nodes in the custom way I want (see the usage sketch below the snippet).

# added at the end of /output.tf

output "control_planes" {
  value = module.control_planes
  description = "All control plane items"
}

output "agents" {
  value = module.agents
  description = "All agent items"
}

And yes, I understand that I can break a lot of stuff with that freedom ;)
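
To make the idea concrete, a hedged usage sketch from the root module (the keying of module.kube_hetzner.agents and the `id` field on the agent objects are assumptions about the submodule's output shape, not a confirmed API):

# Attach an extra volume to every agent node exposed by the new output.
# `each.value.id` assumes each agent object exposes its Hetzner server ID.
resource "hcloud_volume" "longhorn" {
  for_each  = module.kube_hetzner.agents
  name      = "longhorn-${each.key}"
  size      = 50
  server_id = each.value.id
  automount = false
}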

@mysticaltech (Collaborator)

@andi0b Looking good, PR most welcome!

@andi0b (Contributor) commented Feb 1, 2024

> @andi0b Looking good, PR most welcome!

I have to look into it again; this only exposes the values from the host tf-submodule. I think it should also expose more information, including things from the main TF module, perhaps merged with the node pool input variables (see the sketch below).

I've seen that a bigger refactor is planned for the next version (more submodules), so it might be better to wait for that to be completed rather than implement something now that will soon lead to a breaking change. Do you have an estimate of when this refactoring will be finished?
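
A rough sketch of that merging idea (hypothetical and not implemented; it assumes both var.agent_nodepools and module.agents are maps keyed by pool name, which may not match the module's actual shapes):

# Merge each node pool's input variables with the nodes the submodule
# created, so consumers get one complete object per pool.
output "agent_nodepools_full" {
  description = "Node pool inputs merged with the created node objects"
  value = {
    for name, pool in var.agent_nodepools :
    name => merge(pool, { nodes = module.agents[name] })
  }
}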

@mysticaltech (Collaborator)

@andi0b You are absolutely right, best to wait for v3. @aleksasiriski is leading the push to v3, we do not have a time estimate yet, but it will come soon enough. We will keep this FR in mind and slip it in if we can.

@mysticaltech mysticaltech changed the title [Feature Request]: Outputs for Kubernetes Terraform provider Outputs for Kubernetes Terraform provider May 23, 2024
@kube-hetzner kube-hetzner locked and limited conversation to collaborators May 23, 2024
@mysticaltech mysticaltech converted this issue into discussion #1357 May 23, 2024
