How to create a custom service account kubeconfig for EKS Kubernetes access

Sometimes, for automation that runs kubectl, you may need a service-account-based login. To do that you will need the required access and a corresponding ~/.kube/config file.

Here is how you can generate one. This example is for an AWS EKS cluster.

wget https://raw.githubusercontent.com/jayeshmahajan/k8s-utility/master/serviceaccount.sh

#!/bin/bash
#
# Create a service account, bind it to a cluster role, and generate a kubeconfig for it.
# Run it in the context of the target account, e.g. dev:
# ./serviceaccount.sh ClusterName CustomUser My_Env
_clustername=$1
_username_=$2
_env_=$3
export ROLE="cluster-admin"
export NS="kube-system"

echo "create service account ${_username_} for env ${_env_}"
kubectl create sa "$_username_" -n "$NS"

echo "Bind SA ${_username_} with ClusterRole ${ROLE} for environment ${_env_}"
kubectl create clusterrolebinding "$_username_" \
  --serviceaccount="$NS:$_username_" \
  --clusterrole="${ROLE}"

# Pull the service account token and CA certificate from the SA's secret,
# and the API server endpoint from EKS.
SECRET_NAME=$(kubectl get sa "$_username_" -n "$NS" -o json | jq -r '.secrets[0].name')
TOKEN=$(kubectl get secrets "$SECRET_NAME" -n "$NS" -o json | jq -r '.data.token' | base64 --decode)
CA=$(kubectl get secrets "$SECRET_NAME" -n "$NS" -o json | jq -r '.data | .["ca.crt"]')
SERVER=$(aws eks describe-cluster --name "$_clustername" | jq -r '.cluster.endpoint')

# Write a standalone kubeconfig for the service account.
cat <<-EOF > "$_username_-$_env_.yaml"
apiVersion: v1
kind: Config
users:
- name: $_username_
  user:
    token: $TOKEN
clusters:
- cluster:
    certificate-authority-data: $CA
    server: $SERVER
  name: $_username_
contexts:
- context:
    cluster: $_username_
    user: $_username_
  name: $_username_
current-context: $_username_
EOF
echo "Created kubeconfig $_username_-$_env_.yaml"

bash serviceaccount.sh ClusterName ServiceAccount Environment

kubectl get nodes --kubeconfig ServiceAccount-Environment.yaml # replace the yaml file with the one generated as part of the output.
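
To sanity-check what the new kubeconfig can do (the file name below assumes the ServiceAccount-Environment.yaml generated above), kubectl auth can-i can be used:

export KUBECONFIG=ServiceAccount-Environment.yaml
kubectl auth can-i '*' '*'        # should print "yes" for a cluster-admin binding
kubectl get pods -n kube-system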

CKA certification cluster troubleshooting questions

1. Very important things to remember.

The staticPodPath setting and the path it points to (shown below) are very important.
Always check the logs of the kubelet service.

If the API server connection times out, make sure that you are not missing anything in the firewall.
Check the logs to confirm the process is not complaining about it, make sure every path provided in the kubelet config.yaml is correct, and make sure there are no syntax errors. The logs will print details if there is a syntax error.

[root@master ~]# cat /var/lib/kubelet/config.yaml | grep static
staticPodPath: /etc/kubernetes/manifests
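
For example, on a node where the kubelet runs as a systemd unit (the usual kubeadm setup), its status and logs can be checked like this:

[root@master ~]# systemctl status kubelet
[root@master ~]# journalctl -u kubelet -f        # follow the kubelet logs for errors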

2. Pod and service DNS are not resolving in Kubernetes.

Make sure that the busybox pod you are trying to resolve from is the correct version. It should be the following, as per the Kubernetes docs.

kubectl apply -f https://k8s.io/examples/admin/dns/busybox.yaml

I was running an older version of busybox, which made pod DNS resolution fail.

The pod DNS name is (the pod IP with dots replaced by dashes):

10-3-3-3.<your_namespace>.pod.cluster.local

The service DNS name is:

servicename.<your_namespace>.svc.cluster.local
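
To verify resolution from inside that busybox pod (the service name below is a placeholder), the usual nslookup checks from the Kubernetes DNS debugging docs can be run:

kubectl exec -ti busybox -- nslookup kubernetes.default
kubectl exec -ti busybox -- nslookup servicename.<your_namespace>.svc.cluster.local
kubectl exec -ti busybox -- cat /etc/resolv.conf   # confirm the cluster DNS server is listed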

Raspberry Pi – how to set up Wi-Fi with WPA2-PSK

The configs below also set a static IP of 192.168.1.60.

/etc/network/interfaces:

auto lo

iface lo inet loopback
iface eth0 inet dhcp

allow-hotplug wlan0
iface wlan0 inet static
    address 192.168.1.60 # change it to the static IP that you want.
    netmask 255.255.255.0
    gateway 192.168.1.1
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

/etc/wpa_supplicant/wpa_supplicant.conf:

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="replace_with_your_ssid"
    psk="replace_with_your_password"
    proto=RSN
    key_mgmt=WPA-PSK
    pairwise=CCMP
    group=CCMP
    auth_alg=OPEN
}
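
Optionally, instead of keeping the plaintext password in wpa_supplicant.conf, a hashed psk can be generated with the wpa_passphrase tool that ships with the wpasupplicant package; paste the network block it prints into the file in place of the plaintext psk line:

wpa_passphrase "replace_with_your_ssid" "replace_with_your_password"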

Then I installed wicd and wicd-curses with the following commands:

sudo apt-get install wicd
sudo apt-get install wicd-curses

Run wicd-curses at the command line, set up your wireless network, and let it automatically connect to this network on startup.

wicd-curses

-> Select wifi

-> C for config

-> Select the option to use this wi-fi at boot time to connect

-> C (Connect)

-> Save

After a reboot, I was able to connect to my wireless network.
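
To confirm the interface actually came up with the static address and the right SSID (iwconfig comes from the wireless-tools package), you can check:

ifconfig wlan0     # should show 192.168.1.60
iwconfig wlan0     # should show the ESSID you configured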

Terraform: Error locking state: Error acquiring the state lock: ConditionalCheckFailedException: The conditional request failed

Error locking state: Error acquiring the state lock: ConditionalCheckFailedException: The conditional request failed
status code: 400, request id:

ID: 222Whatever-222Whatever-222Whatever-d86c-222Whatever
Path: terraform.tfstate
Operation: OperationTypePlan
Who: username@hostname
Version: 0.11.7
Created: 2018-09-27 15:02:22.226277904 +0000 UTC
Info:

Terraform acquires a state lock to protect the state from being written
by multiple users at the same time. Please resolve the issue above and try
again. For most commands, you can disable locking with the "-lock=false"
flag, but this is not recommended.

Fix:

terraform force-unlock 222Whatever-222Whatever-222Whatever-d86c-222Whatever  # this is the ID provided in the error message
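
If you want to confirm the stale lock before force-unlocking, the lock table can also be inspected directly; the table name below is a placeholder for whatever dynamodb_table your backend is configured with:

aws dynamodb scan --table-name terraform-locks --output table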

How to move Terraform state from one bucket to another?

From your existing config/S3 repo setup, download the state and push it to the new bucket with the following commands.

  1. terraform state pull > terraform.tfstate
  2. aws s3 cp --sse AES256 terraform.tfstate s3://Bucket_Name/Whatever_Path/terraform.tfstate  ## note: there are two dashes before sse
  3. Update your backend config with the new S3 location and change the profile for that account in your Terraform config or backend config.
  4. Run terraform init

It will throw an error such as

[code]

Error loading state:
state data in S3 does not have the expected content.

This may be caused by unusually long delays in S3 processing a previous state
update. Please wait for a minute or two and try again. If this problem
persists, and neither S3 nor DynamoDB are experiencing an outage, you may need
to manually verify the remote state and update the Digest value stored in the
DynamoDB table to the following value: fe1212121Blah_Blah_Blah_1mduynend

Terraform failed to load the default state from the "s3" backend.
State migration cannot occur unless the state can be loaded. Backend
modification and state migration has been aborted. The state in both the
source and the destination remain unmodified. Please resolve the
above error and try again.

[/code]

5. Go to the DynamoDB table that you have set up in your AWS console and find the item whose key is the LockID you provided. Change its Digest value to the fe1212121Blah_Blah_Blah_1mduynend value mentioned in the last error.

6. Run terraform init again

This should move your S3 state from one bucket to the new account's bucket.
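
As a sketch of step 3, the new bucket details can also be passed at init time instead of being hard-coded, using -backend-config overrides (the bucket, key, region, profile, and table names below are placeholders):

[code]

terraform init \
  -backend-config="bucket=New_Bucket_Name" \
  -backend-config="key=Whatever_Path/terraform.tfstate" \
  -backend-config="region=us-east-1" \
  -backend-config="profile=new-account-profile" \
  -backend-config="dynamodb_table=terraform-locks"

[/code]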

GPG decryption error

While trying to decrypt secrets with the single-line command below, I was getting an error.

[code]

cat file | base64 --decode | gpg -d   # or: echo "whatever" | base64 --decode | gpg -d

[/code]

[code]
gpg: public key decryption failed: Inappropriate ioctl for device
gpg: decryption failed: No secret key

[/code]

The reason is that the key you used is password protected. The pipe won't work with gpg if your key is password protected, because gpg cannot prompt for the passphrase when its input is coming from the pipe instead of a terminal (hence "Inappropriate ioctl for device").

[code]

gpg --export "Jayesh-key" | base64 # To get your key

gpg --list-keys

[/code]

To get this working, you can either split the pipe into two commands:

[code]

echo "whatever" | base64 –decode > file.gpg

gpg -d file.gpg

[/code]

or you can modify your key to have no password by providing a blank password, but that is not a recommended or ideal way:

[code]

gpg --edit-key YourKey

gpg prompt> passwd

# When prompted, enter the existing password to unlock the key.
# Once done, just press enter for a blank password.

gpg prompt> save

[/code]
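
Another workaround that often fixes the "Inappropriate ioctl for device" part, without touching the key, is telling gpg which terminal to use for the passphrase prompt so the pipe can stay in place:

[code]

export GPG_TTY=$(tty)

echo "whatever" | base64 --decode | gpg -d

[/code]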

Puppet Error 400 on SERVER: A duplicate resource was found while collecting exported resources, with the type and title Host

Even after clearing the certs from the Puppet master and the client, if you are still getting the error below on your Puppet client

Error 400 on SERVER: A duplicate resource was found while collecting exported resources, with the type and title Host

then it is because the certs have been badly messed up.

Here is what you need to do.

Check the cert names with ls -laR ${PUPPET_HOME}/ssl/

One of the certs listed above has multiple hostnames in it. Find all the hostnames that show up in the output of the above command and delete all of them from the Puppet master.

Once done, clean the ssl folder on the client:

rm -rf ${PUPPET_HOME}/ssl/

and run the puppet agent.
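
As a sketch (the hostname is a placeholder, and puppet cert applies to pre-Puppet-6 masters; newer versions use puppetserver ca clean), removing the offending certs and re-registering the client looks roughly like this:

# on the puppet master, for each hostname found above
puppet cert clean node1.example.com

# on the client
rm -rf ${PUPPET_HOME}/ssl/
puppet agent -t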

Linux: run a script/service a few minutes after reboot

You can use systemd timers to execute a script a minute after boot.

First, create the service file (/etc/systemd/system/myscript.service):

[Unit]
Description=MyScript

[Service]
Type=simple
ExecStart=/usr/local/bin/myscript

Then create the timer (/etc/systemd/system/myscript.timer):

[Unit]
Description=Runs myscript one minute after boot

[Timer]
# Time to wait after booting before activation
OnBootSec=1min
Unit=myscript.service

[Install]
WantedBy=timers.target

Now enable and start it:

# systemctl enable myscript.timer
# systemctl start myscript.timer
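
To confirm the timer is scheduled and see when it will next fire, the standard systemd inspection commands can be used:

# systemctl list-timers myscript.timer
# systemctl status myscript.service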