How to onboard a new project to a cluster on Operate First Cloud#
This guide explains how to onboard a new project to one of the Operate First clusters. The guide itself is a Jupyter notebook, so you can follow all the instructions by running its cells on JupyterHub.
Prerequisites#
Have a team name ready
Have an OpenShift namespace name ready
Have a list of GitHub user accounts that require access
If running this notebook locally, you need to have the following tools installed: git, kustomize, yq, opfcli, and Python with the PyYAML package (a quick sanity check is shown after the cluster list below)
Know which cluster you are looking to provision an OpenShift namespace to
Run the following command to see the list of available clusters:
!cd /tmp && kustomize build https://github.com/operate-first/apps/acm/overlays/moc/infra/managedclusters?ref=master | yq e -N '.metadata.name' -
balrog
curator
demo
ocp-prod
ocp-staging
osc-cl1
smaug
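If you are running this notebook locally, you can optionally confirm that the prerequisite tools are on your PATH before continuing (the exact flags and output vary between tool versions):
!git --version
!kustomize version
!yq --version
!opfcli --help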
Outcomes#
A pull request against the operate-first/apps repository. This pull request will:
Create a namespace on the desired Operate First cluster.
Create an OCP group with your team’s GH users as members.
Provide project admin level access to the newly created namespace for this group.
Steps#
1. Enter your info#
In this guide we will use a couple of facts about your team and project. To make it easier to follow this guide, let’s define these values beforehand.
import os
import json
import tempfile
import yaml
# User variables
GITHUB_USERNAME = "HumairAK" # If this notebook is executed within JupyterHub on Operate First, you can use the JUPYTERHUB_USER environment variable instead (see the sketch after this cell)
# Namespace specific variables
NAMESPACE_NAME="demo-projecx"
NAMESPACE_DISPLAY_NAME="Demo Project Namespace"
TEAM_NAME="demo-teamx"
# Pick a Quota from: x-small, small, medium, large.
# See details here: https://www.operate-first.cloud/apps/content/cluster-scope/quotas.html
QUOTA="small"
# If instead you want to use a custom quota, ignore QUOTA and set the following to True
CUSTOM_QUOTA=False
# Target cluster variables
TARGET_CLUSTER_NAME = "smaug"
TARGET_CLUSTER_REGION = "moc"
NAMESPACE_ADMINS_LST = [GITHUB_USERNAME,] # list of LOWERCASE github usernames of the namespace admins
TARGET_CLUSTER=TARGET_CLUSTER_REGION+"/"+TARGET_CLUSTER_NAME
NAMESPACE_ADMINS=json.dumps([u.lower() for u in NAMESPACE_ADMINS_LST]).replace("\"", "\\\"")
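If you are executing this notebook on the Operate First JupyterHub, here is a minimal sketch for deriving GITHUB_USERNAME from the environment instead of hard-coding it (this assumes your JupyterHub username matches your GitHub username); it also recomputes the dependent admin variables:
# Optional: derive the username from the JupyterHub environment;
# falls back to the value set above if the variable is not present.
GITHUB_USERNAME = os.environ.get("JUPYTERHUB_USER", GITHUB_USERNAME)
NAMESPACE_ADMINS_LST = [GITHUB_USERNAME]
NAMESPACE_ADMINS = json.dumps([u.lower() for u in NAMESPACE_ADMINS_LST]).replace("\"", "\\\"")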
2. Fork and clone the apps repository#
Please fork and clone the operate-first/apps repository. We’ll be working within this repository only.
Note: If you already have a forked and cloned repository, please ensure your master branch is up to date with the upstream master.
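If your fork already exists, here is a minimal sketch for bringing its master branch up to date with upstream (assuming an upstream remote is not yet configured and you have no local changes); run it inside your existing clone:
!git remote add upstream https://github.com/operate-first/apps.git
!git fetch upstream
!git checkout master
!git merge upstream/master
!git push origin master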
Go to operate-first/apps.
Click on the fork button; this will fork the repo to your GitHub account.
Run the commands below to clone the forked repo.
WORKDIR=tempfile.mkdtemp()
!echo Working in directory {WORKDIR}
!git clone https://github.com/{GITHUB_USERNAME}/apps.git {WORKDIR}
Working in directory /tmp/tmpjmgbmtb6
Cloning into '/tmp/tmpjmgbmtb6'...
remote: Enumerating objects: 16488, done.
remote: Counting objects: 100% (3586/3586), done.
remote: Compressing objects: 100% (1533/1533), done.
remote: Total 16488 (delta 2202), reused 3046 (delta 1937), pack-reused 12902
Receiving objects: 100% (16488/16488), 7.24 MiB | 18.29 MiB/s, done.
Resolving deltas: 100% (8219/8219), done.
3. Create Base resources#
We store all our generic configurations in a base location, from which we selectively choose and deploy assets to target clusters. In this case, we need to create a namespace, a group, and a project rolebinding for your team. To do this we’ll use the opfcli to help us out.
%cd {WORKDIR}
!opfcli create-project {NAMESPACE_NAME} {TEAM_NAME} -d "{NAMESPACE_DISPLAY_NAME}"
/tmp/tmpjmgbmtb6
INFO[0000] writing group definition to /tmp/tmpjmgbmtb6/cluster-scope/base/user.openshift.io/groups/demo-teamx
INFO[0000] writing rbac definition to /tmp/tmpjmgbmtb6/cluster-scope/components/project-admin-rolebindings/demo-teamx
INFO[0000] writing namespace definition to /tmp/tmpjmgbmtb6/cluster-scope/base/core/namespaces/demo-projecx
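To see exactly what opfcli generated, you can optionally list the new directories (paths taken from the output above):
!ls {WORKDIR}/cluster-scope/base/core/namespaces/{NAMESPACE_NAME}
!ls {WORKDIR}/cluster-scope/base/user.openshift.io/groups/{TEAM_NAME}
!ls {WORKDIR}/cluster-scope/components/project-admin-rolebindings/{TEAM_NAME}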
4. Add namespace resources to the target cluster#
Run the following code to ensure your namespace is created in the target cluster.
%cd {WORKDIR}/cluster-scope/overlays/prod/{TARGET_CLUSTER}/
!kustomize edit add resource ../../../../base/core/namespaces/{NAMESPACE_NAME}
/tmp/tmpjmgbmtb6/cluster-scope/overlays/prod/moc/smaug
The entry added by kustomize edit above is not inserted alphabetically, so we sort the resources list ourselves for human readability (you can also do this manually).
kustomization_path = WORKDIR + "/cluster-scope/overlays/prod/" + TARGET_CLUSTER + "/kustomization.yaml"
with open(kustomization_path, "r") as f:
kustomization = yaml.safe_load(f)
kustomization['resources'].sort()
with open(kustomization_path, 'w') as f:
yaml.dump(kustomization, f)
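Optionally, confirm that the namespace entry is present (and alphabetically placed) in the overlay’s kustomization.yaml:
!grep -n {NAMESPACE_NAME} {WORKDIR}/cluster-scope/overlays/prod/{TARGET_CLUSTER}/kustomization.yaml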
5. Add your quota to the namespace#
With the namespace manifest created, we now want to ensure the appropriate quota is added. If you picked the custom quota option, you can skip to the next code block (the one below does nothing when CUSTOM_QUOTA is True).
%cd {WORKDIR}/cluster-scope/base/core/namespaces/{NAMESPACE_NAME}
if not CUSTOM_QUOTA:
    !kustomize edit add component ../../../../components/resourcequotas/{QUOTA}
/tmp/tmpjmgbmtb6/cluster-scope/base/core/namespaces/demo-projecx
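If you are unsure which predefined quota sizes exist, you can list the available resource quota components in your clone:
!ls {WORKDIR}/cluster-scope/components/resourcequotas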
If you require a custom quota instead, enter the values for the resources you need:
CPU_LIMIT = "1"
CPU_REQUESTS = "1"
MEMORY_LIMIT = "4Gi"
MEMORY_REQUEST = "4Gi"
STORAGE = "10Gi"
NUMBER_OF_BUCKETS = 1
Now run the following code to create the custom quota:
%cd {WORKDIR}/cluster-scope
if CUSTOM_QUOTA:
    custom_quota_path = "base/core/namespaces/{0}/resourcequota.yaml".format(NAMESPACE_NAME)
    # The template lists limits before requests, so the format arguments must
    # follow the same order: CPU limit, memory limit, CPU request, memory request.
    custom_quota = yaml.safe_load(
        """
        apiVersion: v1
        kind: ResourceQuota
        metadata:
          name: {0}-custom
        spec:
          hard:
            limits.cpu: {1}
            limits.memory: {2}
            requests.cpu: {3}
            requests.memory: {4}
            requests.storage: {5}
            count/objectbucketclaims.objectbucket.io: {6}
        """.format(NAMESPACE_NAME, CPU_LIMIT, MEMORY_LIMIT, CPU_REQUESTS, MEMORY_REQUEST, STORAGE, NUMBER_OF_BUCKETS))
    with open(custom_quota_path, 'w') as f:
        yaml.dump(custom_quota, f)
/tmp/tmpjmgbmtb6/cluster-scope
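If you created a custom quota, you can optionally review the generated file before continuing:
!cat {WORKDIR}/cluster-scope/base/core/namespaces/{NAMESPACE_NAME}/resourcequota.yaml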
If you created a custom quota, we also need to include this ResourceQuota in the namespace’s kustomization build (skip this step if you used a predefined quota).
%cd {WORKDIR}/cluster-scope/base/core/namespaces/{NAMESPACE_NAME}
!kustomize edit add resource resourcequota.yaml
/tmp/tmpjmgbmtb6/cluster-scope/base/core/namespaces/demo-projecx
6. Add the group to Operate First clusters#
We have created the OCP group manifest. Now let’s ensure it is deployed to all our clusters by adding it to the common kustomization.yaml, which is applied to every cluster.
%cd {WORKDIR}/cluster-scope/overlays/prod/common
!kustomize edit add resource ../../../base/user.openshift.io/groups/{TEAM_NAME}
/tmp/tmpjmgbmtb6/cluster-scope/overlays/prod/common
Once again, we sort the resources in this file for human readability.
kustomization_path = WORKDIR + "/cluster-scope/overlays/prod/common/kustomization.yaml"
with open(kustomization_path, "r") as f:
kustomization = yaml.safe_load(f)
kustomization['resources'].sort()
with open(kustomization_path, 'w') as f:
yaml.dump(kustomization, f)
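As a quick check, you can print the resources list of the common overlay to confirm the group entry was added and sorted:
!yq e '.resources' {WORKDIR}/cluster-scope/overlays/prod/common/kustomization.yaml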
7. Populate your OCP Group#
Let’s now add all the users you specified earlier to the OpenShift group, making them project admins for the namespace we just created.
%cd {WORKDIR}
!yq e -i ".users = {NAMESPACE_ADMINS}" -P cluster-scope/base/user.openshift.io/groups/{TEAM_NAME}/group.yaml
/tmp/tmpjmgbmtb6
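You can optionally review the resulting group manifest to confirm the users list was populated:
!cat {WORKDIR}/cluster-scope/base/user.openshift.io/groups/{TEAM_NAME}/group.yaml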
Finalize#
Review your changes by running the following:
!git status
On branch master
Your branch is up to date with 'origin/master'.
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: cluster-scope/overlays/prod/common/kustomization.yaml
modified: cluster-scope/overlays/prod/moc/smaug/kustomization.yaml
Untracked files:
(use "git add <file>..." to include in what will be committed)
cluster-scope/base/core/namespaces/demo-projecx/
cluster-scope/base/user.openshift.io/groups/demo-teamx/
cluster-scope/components/project-admin-rolebindings/demo-teamx/
no changes added to commit (use "git add" and/or "git commit -a")
Now let’s stage, commit, and push your changes to your GitHub account.
!git add .
!git commit -m "feat(onboarding): Add team {TEAM_NAME}"
!git push
On branch master
Your branch is up to date with 'origin/master'.
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: cluster-scope/overlays/prod/emea/demo/kustomization.yaml
Untracked files:
(use "git add <file>..." to include in what will be committed)
cluster-scope/base/core/namespaces/demo-project/
cluster-scope/base/user.openshift.io/groups/demo-team/
cluster-scope/components/project-admin-rolebindings/demo-team/
no changes added to commit (use "git add" and/or "git commit -a")
[master bf5372e] feat(onboarding): Add team demo-team
7 files changed, 41 insertions(+)
create mode 100644 cluster-scope/base/core/namespaces/demo-project/kustomization.yaml
create mode 100644 cluster-scope/base/core/namespaces/demo-project/namespace.yaml
create mode 100644 cluster-scope/base/user.openshift.io/groups/demo-team/group.yaml
create mode 100644 cluster-scope/base/user.openshift.io/groups/demo-team/kustomization.yaml
create mode 100644 cluster-scope/components/project-admin-rolebindings/demo-team/kustomization.yaml
create mode 100644 cluster-scope/components/project-admin-rolebindings/demo-team/rbac.yaml
Enumerating objects: 37, done.
Counting objects: 100% (37/37), done.
Delta compression using up to 12 threads
Compressing objects: 100% (23/23), done.
Writing objects: 100% (24/24), 2.29 KiB | 2.29 MiB/s, done.
Total 24 (delta 10), reused 0 (delta 0), pack-reused 0
remote: Resolving deltas: 100% (10/10), completed with 9 local objects.
To https://github.com/4n4nd/apps.git
da4a111..bf5372e master -> master
Note: You should see a popup asking for the login credentials of your GitHub account. If you encounter any issues when attempting to push, a workaround is to run git push from the notebook terminal: in the notebook app, go to File > New > Terminal, navigate to the cloned directory, and run the git push command from there.
Once pushed, open a pull request against the operate-first/apps repository.
Once the pull request is merged, all the desired changes will be applied by our ArgoCD instance, and the listed users should have admin access to the new namespace in the target cluster.
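After the changes have been applied, team members can optionally verify their access, assuming the oc CLI is installed and they are logged in to the target cluster (this check is only a sketch and not part of the notebook flow above):
!oc auth can-i create pods -n {NAMESPACE_NAME}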