How Kubernetes Autoscaling Works: Step-by-Step Guide

Kubernetes autoscaling is a dynamic mechanism that adjusts the number of running Pods based on workload demand. This feature helps achieve efficient resource utilization, cost-effectiveness, and resilient application performance. In this article, we will dive deep into the workings of Horizontal Pod Autoscaler (HPA), the most commonly used autoscaling mechanism in Kubernetes. We’ll break down its components, metrics collection, pod readiness handling, and how it ultimately makes scaling decisions.

1. Autoscaling Components Overview

Kubernetes supports three main types of autoscaling:

  • Horizontal Pod Autoscaler (HPA): Scales the number of pod replicas.
  • Vertical Pod Autoscaler (VPA): Adjusts resource requests/limits for containers.
  • Cluster Autoscaler (CA): Adds/removes nodes based on pending pods.

This article focuses on HPA.

Key Components:

  • HorizontalPodAutoscaler (HPA) resource: Declares desired behavior for scaling.
  • Metrics Server: Collects CPU and memory usage from kubelets and exposes the metrics.k8s.io API.
  • Custom Metrics Adapter (optional): For custom or external metrics.
  • Controller Manager: Houses the HPA controller.
  • Kubelet: Reports node and pod metrics.
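
Before an HPA can act, the Metrics Server must be up and serving data. A quick way to verify this (standard kubectl commands; they require the Metrics Server add-on to be installed):

kubectl get apiservices | grep metrics.k8s.io
kubectl top nodes
kubectl top pods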

2. Setting up the HPA Resource

An HPA is defined using a Kubernetes manifest (YAML or JSON). For example:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60

This configuration aims to maintain 60% average CPU utilization across Pods.
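
To create the autoscaler and watch its status (assuming the manifest above is saved as web-hpa.yaml):

kubectl apply -f web-hpa.yaml
kubectl get hpa web-hpa --watch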

3. Pod Initialization and Readiness Considerations

Before metrics are collected from Pods, Kubernetes needs to account for startup behavior:

  • `horizontal-pod-autoscaler-initial-readiness-delay` : Default 30s. Time window after pod start during which rapid transitions between Ready/Unready are ignored.
  • `horizontal-pod-autoscaler-cpu-initialization-period`: Default 5m. Window after pod start during which CPU samples are used only if the pod is Ready and the sample was taken after it became Ready.
  • Readiness Probe: Used to determine if the pod is ready to receive traffic.

Best Practice: Delay readinessProbe success until the startup CPU/memory burst has subsided.
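
For example, a readiness probe that waits out the startup burst might look like this (a sketch; the endpoint, port, and delay values are illustrative, not from the original article):

readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 60
  periodSeconds: 10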

4. Metrics Collection

The HPA controller evaluates metrics every 15 seconds by default (set by --horizontal-pod-autoscaler-sync-period on the kube-controller-manager).

Metric Types Supported:

  • Resource Metrics: CPU, memory.
  • Custom Metrics: Provided by Prometheus adapter.
  • External Metrics: Cloud APIs, business KPIs, etc.

Metrics Flow:

  1. Kubelet exposes metrics to the Metrics Server.
  2. Metrics Server serves those metrics via metrics.k8s.io API.
  3. HPA controller fetches per-pod metrics from Metrics API.
  4. Average metrics (CPU/memory) are calculated.
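
You can inspect the same per-pod data the HPA controller consumes by querying the Metrics API directly (standard kubectl; the pipe to jq is optional, for readability):

kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods" | jq .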

5. Scaling Decision Logic

The HPA controller computes the desired number of replicas using this formula:

desiredReplicas = ceil[ currentReplicas × ( currentMetricValue / desiredMetricValue ) ]

Example:

  • Current CPU usage: 80%
  • Target usage: 60%
  • Current replicas: 5
  • Desired replicas: ceil(5 × 80 / 60) = ceil(6.67) = 7

Tolerance: No scaling occurs if the current-to-target ratio is within 10% of 1.0 (the default, controlled by --horizontal-pod-autoscaler-tolerance).

6. Handling Unready or Initializing Pods

Pods in the following states are excluded from metric calculations:

  • Still initializing.
  • Readiness probe failed.
  • Missing metrics.
  • Just restarted.

When metrics are missing, Kubernetes assumes:

  • 0% usage for scale up.
  • 100% usage for scale down.

This conservative approach avoids premature scaling decisions.

7. Stabilization and Scaling Limits

Stabilization Window:

  • Defined by --horizontal-pod-autoscaler-downscale-stabilization (default 5m).
  • Prevents frequent downscaling.

Scaling Policies (autoscaling/v2 only):

  • Limit how many pods can be added or removed within a configurable period (periodSeconds).
  • Can be a percentage or an absolute number.

behavior:
  scaleDown:
    stabilizationWindowSeconds: 300
    policies:
    - type: Pods
      value: 4
      periodSeconds: 60
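
A scaleUp block follows the same shape. For instance (an illustrative sketch, not from the original article), allowing at most a 100% increase every 15 seconds:

behavior:
  scaleUp:
    stabilizationWindowSeconds: 0
    policies:
    - type: Percent
      value: 100
      periodSeconds: 15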

8. Triggering the Scaling Action

Once the controller:

  1. Fetches metrics.
  2. Computes average.
  3. Applies tolerance.
  4. Filters out unready pods.
  5. Applies stabilization policy.

Then it triggers a PATCH request to the target’s scale subresource, updating .spec.replicas.

Example:

PATCH /apis/apps/v1/namespaces/default/deployments/web-app/scale
{
  "spec": {
    "replicas": 7
  }
}

9. Monitoring and Observability

Use the following tools:

  • kubectl get hpa
  • Metrics dashboards (Prometheus + Grafana)
  • Logs from kube-controller-manager
  • Events on the HPA object (kubectl describe hpa)
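
For a quick health check, kubectl get hpa shows current versus target metrics at a glance (illustrative output; your values will differ):

$ kubectl get hpa web-hpa
NAME      REFERENCE            TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
web-hpa   Deployment/web-app   48%/60%   2         10        5          3d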

Conclusion

Kubernetes Horizontal Pod Autoscaler is a powerful mechanism that intelligently scales your applications. It integrates with metric systems, considers pod lifecycle state, uses dynamic algorithms, and is highly configurable. By understanding how each component works — from metric collection to replica adjustment — you can optimize your scaling policies for performance and cost.

References:

https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics

Kubernetes: How traffic flows from internet to container via Istio

Let’s walk through the traffic flow from the internet to your application containers in GKE using Istio. This will include how the traffic passes through various components such as NodePort, Istio Gateway, VirtualService, kube-proxy, Kubernetes services, sidecar (Envoy proxy), and ultimately reaches the application.

Given the configuration provided for the Istio Gateway and VirtualServices, this flow applies to both frontend.mydomain.com and backend.mydomain.com.

Traffic Flow

  1. Client Request: The client sends an HTTPS request to either frontend.mydomain.com or backend.mydomain.com.
  2. WAF: The request passes through a Web Application Firewall, which filters out DDoS traffic, SQL injection, cross-site scripting, and similar attacks.
  3. Cloud Load Balancer: Routes the traffic to the appropriate GKE node via a NodePort.
  4. Istio Ingress Gateway: Handles mutual TLS (mTLS) authentication and decrypts the traffic.
  5. VirtualService: Based on the host (frontend.mydomain.com or backend.mydomain.com), the VirtualService routes the traffic to the corresponding Kubernetes service.
  6. Kube-proxy and Kubernetes Service: The kube-proxy forwards the traffic from the ClusterIP service to the appropriate application pod.
  7. Envoy Sidecar: The Envoy proxy in the pod processes the request and forwards it to the application container.
  8. Application: The application processes the request and sends a response back, following the same path in reverse.

1. Traffic from Internet to Google Cloud Load Balancer

  1. A client (user or service) on the internet makes an HTTPS request to either frontend.mydomain.com or backend.mydomain.com.
  2. The request first reaches the Google Cloud Load Balancer (GCLB) associated with your GKE cluster. This load balancer is automatically provisioned by GKE when you define an Ingress or Gateway resource.
  3. The GCLB forwards the request to a NodePort on one of the GKE cluster nodes.

2. NodePort and Istio Ingress Gateway

  1. The NodePort on the GKE node receives the traffic and forwards it to the Istio Ingress Gateway pod, which is part of the istio-ingressgateway service running on the cluster nodes.

The Istio Ingress Gateway is defined in the Gateway resource:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: foocorp-gateway
  namespace: default
spec:
  selector:
    istio: ingressgateway # Uses Istio's default ingress gateway
  servers:
  - port:
      number: 443
      name: https-frontend
      protocol: HTTPS
    tls:
      mode: MUTUAL
      credentialName: "frontend-credential"
    hosts:
    - "frontend.mydomain.com"
  - port:
      number: 443
      name: https-backend
      protocol: HTTPS
    tls:
      mode: MUTUAL
      credentialName: "backend-credential"
    hosts:
    - "backend.mydomain.com"

The Istio Gateway handles mutual TLS (mTLS) based on the tls.mode: MUTUAL configuration. The client and the server authenticate each other using the certificates referenced by credentialName, stored as secrets in the cluster. This ensures secure communication between the client and the cluster.

3. VirtualService Routing

  1. Once the Istio Gateway accepts the connection and decrypts the traffic, it uses the VirtualService configuration to route the request. The traffic is matched based on the host and URI.

For requests to frontend.mydomain.com, the VirtualService for the frontend service is:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend
spec:
  hosts:
  - "frontend.mydomain.com"
  gateways:
  - foocorp-gateway
  http:
  - match:
    - uri:
        exact: /
    route:
    - destination:
        host: frontend.org-namespace.svc.cluster.local
        port:
          number: 80

For requests to backend.mydomain.com, the VirtualService for the backend service is:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: backend
spec:
  hosts:
  - "backend.mydomain.com"
  gateways:
  - foocorp-gateway
  http:
  - match:
    - uri:
        exact: /
    route:
    - destination:
        host: backend.org-namespace.svc.cluster.local
        port:
          number: 80

  2. The VirtualService directs the traffic to the corresponding Kubernetes service within the cluster (e.g., frontend.org-namespace.svc.cluster.local or backend.org-namespace.svc.cluster.local), forwarding the request to port 80.

4. Kubernetes Service (ClusterIP) and kube-proxy

  1. After the traffic is routed to the appropriate Kubernetes Service, the kube-proxy component comes into play.
  2. The ClusterIP service acts as an internal load balancer and directs traffic to the appropriate pods running your application (frontend or backend) by forwarding requests to the pod IPs.
  3. The kube-proxy manages the routing rules and forwards the traffic to one of the available pod instances.
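
The ClusterIP Service behind that routing might look like this (a minimal sketch; the selector label and targetPort are assumptions, not from the original configuration):

apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: org-namespace
spec:
  type: ClusterIP
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 8080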

5. Envoy Sidecar Proxy

  1. The request reaches the destination pod, but before entering the application container it passes through the Envoy sidecar proxy. Istio injects this sidecar into every pod, and it is responsible for managing inbound and outbound traffic for the pod. The sidecar:
     • Enforces security policies (like mTLS).
     • Provides traffic observability.
     • Routes traffic internally between services.
  2. Envoy then forwards the request to the actual application container running within the pod.

6. Application Container

  1. Finally, the request is processed by the application container (either frontend or backend, depending on the route).
  2. The application responds, and the response travels back to the client along the reverse path: Application → Sidecar (Envoy) → kube-proxy → ClusterIP service → Istio VirtualService → Istio Gateway → Google Cloud Load Balancer → Client.

Benefits of Using Istio in This Setup

  • mTLS: Secure communication between clients and services via mutual TLS.
  • Routing Control: Fine-grained routing rules managed by Istio’s Gateway and VirtualService resources.
  • Service Discovery: Kubernetes services (frontend.org-namespace.svc.cluster.local, backend.org-namespace.svc.cluster.local) allow automatic service discovery and load balancing.
  • Sidecar Proxy: The Envoy sidecar provides enhanced observability, security, and control over traffic at the pod level.

This configuration ensures that traffic is securely and efficiently routed to the appropriate backend services in your GKE cluster.


Network connection check without SSH/kubectl access

Streamlining Network Connectivity Validation in Enterprise Environments

Pain Area

In many enterprises, SOC compliance dictates that the engineers who develop code should not be the ones who deploy it. This segregation of duties is essential for maintaining security and compliance, but it introduces significant inefficiencies, especially when it comes to network connectivity and firewall requests.

Typically, developers need to open network connectivity or firewall requests but lack the necessary permissions to validate these connections. Instead, they rely on DevOps, SRE, or operations teams to test and validate the connections once the firewall or network team has opened them. This process is fraught with delays and inefficiencies:

  • Coordination Challenges: Developers must coordinate with multiple teams, each with its own schedule and priorities.
  • Wasted Time: Operations teams often find themselves waiting idly for the firewall team to complete their tasks before they can perform their validations.
  • Productivity Loss: The constant back-and-forth reduces overall productivity, as both developers and operations teams spend more time on administrative coordination than on actual development and operations work.

Task

The goal was to create a solution that empowers developers to independently validate network connections without breaching SOC compliance guidelines or relying on operations teams. The solution needed to:

  • Reduce Dependency: Eliminate the need for operations teams to perform routine connection validations.
  • Increase Efficiency: Enable developers to quickly and independently verify network connectivity.
  • Enhance Productivity: Allow all teams to focus on their core responsibilities without unnecessary delays.

Action

To address this pain point, I developed a code-based solution that automates the network connectivity validation process. This solution utilizes the code available in the tcpcheck repository on GitHub and leverages a ready-to-use Docker image from Docker Hub. The key steps involved in the solution are:

  1. Deploy Docker Container: Use the tcpcheck Docker image to deploy a container within the enterprise network. This container provides an internal DNS URL that can be accessed by anyone within the network.
  2. HTTP Call Functionality: Allow any user within the network to make an HTTP call to the internal DNS URL, passing the desired DNS and port number to check TCP or HTTPS connections.
  3. Automated Validation: The script within the Docker container automatically performs the connection validation and returns the results, indicating whether the connection is successful or not.

The tcpcheck tool is designed to be straightforward and easy to use. By integrating it into the internal DNS URL, it enables seamless and efficient network connectivity checks. Here’s how to use the Docker image:


docker pull jayeshmahajan/tcpcheck:latest
docker run -d -p 8080:8080 jayeshmahajan/tcpcheck

Once the container is running, you can check a connection by making an HTTP request to the internal DNS URL. For example:

curl 'http://<internal_dns_url>:8080/check?host=<target_host>&port=<target_port>'

curl 'http://<internal_dns_url>/check_http_connection?domain=www.google.com'


"message": "HTTP connection successful",
"dns_result": {
"cname": null,
"ips": [
"142.250.81.238"
]
}
}=============================
http://<internal_dns_url/check_http_connection?protocol=https&domain=wrong.host.badssl.com
{
"message": "SSL connection failed",
"dns_result": {
"cname": null,
"ips": [
"104.154.89.105"
]
},
"error": "[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: Hostname mismatch, certificate is not valid for 'wrong.host.badssl.com'. (_ssl.c:1133)"
}=============================
http://<internal_dns_url/check_http_connection?protocol=https&domain=www.ge.com
{
"message": "HTTP connection successful",
"dns_result": {
"cname": "www.ge.com.cdn.cloudflare.net.",
"ips": [
"104.18.31.171",
"104.18.30.171"
]
}
}

Result

The implementation of this solution yielded significant benefits:

  • Reduced Dependency on Operations Teams: Developers can now independently validate network connections, eliminating the need for constant coordination with operations teams.
  • Faster Feature Delivery: By empowering developers to perform their own validations, the solution speeds up the development and deployment process, allowing features to be delivered more quickly.
  • Improved Productivity: Both developers and operations teams can now focus on their primary tasks, reducing downtime and increasing overall productivity.
  • Enhanced Responsiveness During Outages: In the event of a network outage, any team member, including leadership, can quickly validate network connections without waiting for the operations team, leading to faster issue resolution.

This innovative solution has proven to be particularly useful during network outages, where time is of the essence. By removing unnecessary dependencies and streamlining the validation process, it ensures that everyone in the organization can contribute to maintaining a robust and reliable network infrastructure.


This approach not only enhances operational efficiency but also aligns with SOC compliance requirements, demonstrating that security and productivity can go hand in hand with the right solutions in place. The tcpcheck tool and its Docker image exemplify how a simple yet effective solution can make a significant impact on enterprise operations.

Which database to choose from cloud?

Cloud is the Limit: Google Cloud Platform Database Services
GCP offers several database services that you can choose from.

Database decision tree


Cloud SQL:
A relational GCP database service that is fully managed and compatible with MySQL, PostgreSQL and SQL Server, Cloud SQL includes features like automated backups, data replication, and disaster recovery to ensure high availability and flexibility.

When to choose: Cloud SQL has many uses, from ‘lift and shift’ migration of on-premises SQL databases to the cloud, to large-scale SQL data analytics, CMS data storage, and microservice deployment. It is the better option when you need relational database capabilities but don’t need storage capacity over 10TB.
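
As a quick illustration (a sketch; the instance name, database version, tier, and region are assumed values, not from the original article), provisioning a Cloud SQL instance is a single command:

gcloud sql instances create my-instance \
  --database-version=POSTGRES_15 \
  --tier=db-custom-2-7680 \
  --region=us-east1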

Cloud Spanner:
Another fully managed, relational Google Cloud database service, Cloud Spanner differs from Cloud SQL by focusing on combining the benefits of relational structure and non-relational scalability. It provides consistency across rows and high-performance operations and includes features like built-in security, automatic replication, and multi-language support.

When to choose: Cloud Spanner should be your go-to option if you plan on using large amounts of data (more than 10TB) and need transactional consistency. It is also a perfect choice if you wish to use sharding for higher throughput and accessibility.

BigQuery:
With BigQuery you can perform data analyses via SQL and query streaming data. A fully managed, serverless data warehouse, BigQuery includes a built-in Data Transfer Service that helps you migrate data from on-premises resources, including Teradata.

It incorporates features for machine learning, business intelligence, and geospatial analysis that are provided through BigQuery ML, BI Engine, and GIS.

When to choose: Use cases for BigQuery involve process analytics and optimization, big data processing and analytics, data warehouse modernisation, machine learning-based behavioural analytics and predictions.

Cloud Bigtable:
It is a fully managed NoSQL Google Cloud database service designed for large operational and analytics workloads. Cloud Bigtable includes features for high availability and zero-downtime configuration changes, and it integrates with a variety of tools, including Apache tools and Google Cloud services.

Cloud Bigtable use cases cover financial analysis and prediction, IoT data ingestion, processing, and analytics, and hyper-personalised marketing applications.

When to choose: Cloud Bigtable is a good option if you are working with large amounts of single-keyed data, and it is preferable for low-latency, high-throughput workloads.

Cloud Firestore:
A fully managed, serverless NoSQL GCP database designed for the development of serverless apps, Cloud Firestore can be used to store, sync, and query data for web, mobile, and IoT applications. With critical features like offline support, live synchronization, and built-in security, you can even integrate Firestore with Firebase, GCP’s mobile development platform, for easier app creation.

Cloud Firestore use cases include mobile and web applications with both online and offline capabilities, multi-user, collaborative applications, real-time analytics, social media applications, and gaming forums and leaderboards.

When to choose: When your focus lies on app development and you need live synchronization and offline support.

Firebase Realtime Database:
This is a NoSQL Google Cloud database that is a part of the Firebase platform. It allows you to store and sync data in real-time and includes caching capabilities for offline use. It also enables you to implement declarative authentication, matching users by identity or pattern.

It includes mobile and web software development kits for easier app development.
Use cases for Firebase Realtime Database involve development of apps that work across devices, advertisement optimisation and personalisation, and third-party payment processing.

Cloud Memorystore:
Designed to be secure, highly available, and scalable, Cloud Memorystore is a fully managed, in-memory Google Cloud data store that enables you to create application caches with sub-millisecond latency for data access.

Use cases for Cloud Memorystore include ‘lift and shift’ migration of applications, machine learning applications, real-time analytics, low latency data caching and retrieval.

When to choose: If you are using key-value datasets and your main focus is transaction latency.

Choosing a database based on key questions
I also created this flowchart to give you a direction when selecting a database:

Docker: how to run a process as a different user from the VM

Problem statement:

When you run a Docker container, many enterprise organizations do not allow you to run the container as root or with sudo, because that compromises container file system isolation, among various other reasons.

I ran into this situation when I wanted to run a process as a non-root user, but my virtual machine's user UID did not match the UID of the container's user.

e.g.

$ id
uid=1000(circleci) gid=1000(circleci) groups=1000(circleci),4(adm),20(dialout),24(cdrom),25(floppy),29(audio),30(dip),44(video),46(plugdev),117(netdev),118(lxd),997(docker)
$ docker run -it --rm cimg/node:$CIRCLECI_NODE_TAG id
uid=3031(circleci) gid=3031(circleci) groups=3031(circleci),4(adm),20(dialout),24(cdrom),25(floppy),29(audio),30(dip),44(video),46(plugdev),117(netdev),118(lxd),997(docker)

So when you mount files from the VM (uid 1000) and run as the container's circleci user (uid 3031), file access fails because of the UID mismatch.

Resolution:

If you want the container process to run with the same UID as your user on the VM, override the user at run time, as shown below.
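
A minimal sketch (the image, mount path, and command are illustrative):

# Override the container user with the VM user's UID:GID so that
# files on the bind mount keep the VM user's ownership
docker run -it --rm \
  --user "$(id -u):$(id -g)" \
  -v "$PWD":/home/circleci/project \
  cimg/node:$CIRCLECI_NODE_TAG \
  id

Note that the overridden UID may not exist in the container's /etc/passwd, so tools that look up the username may complain; for most build and file-access tasks this is harmless.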

Python Handy notes

print("Hello, World!")

# Variables
x = 5
y = "John"
print(x)
print(y)
x = int(1)    # x will be 1
x = float(1)  # x will be 1.0

# Strings
a = " Hello, World! "
print(a.strip())
print(a[1])
print(a.split(","))
print(a.replace("a", "b"))

# Lists
thislist = ["apple", "banana", "cherry"]
print(thislist)
print(thislist[1])
thislist.append("orange")
thislist.insert(1, "orange")
thislist.remove("banana")
thislist.pop()
del thislist[0]
thislist.clear()
mylist = thislist.copy()

# Tuples
thistuple = ("apple", "banana", "cherry")
print(thistuple)
if "apple" in thistuple:
    print("Yes, 'apple' is in the fruits tuple")

# Sets
thisset = {"apple", "banana", "cherry"}
thisset.add("orange")
print(thisset)
thisset.update(["orange", "mango", "grapes"])
print(thisset)
thisset.discard("banana")

# Dictionaries
thisdict = {
    "brand": "Ford",
    "model": "Mustang",
    "year": 1964
}
print(thisdict)
x = thisdict["model"]
thisdict["year"] = 2018
for x in thisdict:
    print(x)
for x in thisdict:
    print(thisdict[x])
for x in thisdict.values():
    print(x)
for x, y in thisdict.items():
    print(x, y)
if "model" in thisdict:
    print("Yes, 'model' is one of the keys in the thisdict dictionary")
thisdict.pop("model")

# Conditionals
a = 33
b = 200
if b > a:
    print("b is greater than a")
elif a == b:
    print("a and b are equal")
else:
    print("a is greater than b")

# Loops
i = 1
while i < 6:
    print(i)
    i += 1
for x in range(2, 6):
    print(x)
for x in range(2, 30, 3):
    print(x)

# Functions
def my_function():
    print("Hello from a function")

# Exceptions
try:
    print(x)
except:
    print("An exception occurred")

try:
    print(x)
except NameError:
    print("Variable x is not defined")
except:
    print("Something else went wrong")

# Files
f = open("demofile.txt")
f = open("demofile.txt", "r")
print(f.read())
print(f.readline())
f.close()
f = open("demofile2.txt", "a")
f.write("Now the file has more content!")
f.close()
f = open("myfile.txt", "w")  # create new if it doesn't exist
f = open("myfile.txt", "x")  # create new; error if it exists
import os
os.remove("demofile.txt")


Kubernetes: how to debug a container

Most of the time the container you are running is very slim and locked down, so you can't easily troubleshoot the processes inside it.

e.g. you can't run kubectl exec to troubleshoot.

You can use kubectl debug to create a copy of the Pod with configuration values changed for debugging purposes.

Here is how you can do that:
1. Copy the pod, adding a new container that shares the process namespace with the existing container.

kubectl get pod my-app -n my-namespace

So your command to create a copy of my-app named my-app-debug, adding a new Ubuntu container for debugging, would be:

kubectl debug my-app -it --image=ubuntu --share-processes --copy-to=my-app-debug


Flags and values:

The -i flag causes kubectl debug to attach to the new container by default. You can prevent this by specifying --attach=false. If your session becomes disconnected you can reattach using kubectl attach.

The --share-processes flag allows the containers in this Pod to see processes from the other containers in the Pod.

kubectl debug automatically generates a container name if you don't choose one using the --container flag.
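
If your cluster has ephemeral containers enabled, you can also attach a debug container to the running Pod without copying it (a sketch; the busybox image and target container name are illustrative):

kubectl debug -it my-app --image=busybox --target=my-app-container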

SSL Error: LibreSSL SSL_connect: SSL_ERROR_SYSCALL or openssl s_client write:errno=54

*   Trying 1.1.1.1...
* TCP_NODELAY set
* Connected to example.com (1.1.1.1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to example.com:443
* Closing connection 0
curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to example.com:443

openssl s_client -connect example.com:443 -msg
CONNECTED(00000006)
>>> TLS 1.2 Handshake [length 00bf], ClientHello
*
*
write:errno=54
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 0 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated

If you are trying to connect to a site and it throws the above error, it is most probably an issue with your SSL certificate's private key. Sometimes the private key gets corrupted while being copy-pasted onto your proxy/web server, and the server can no longer respond with a "Server hello", as you can see above.

Double-check the private key (base64-decode it if necessary) to make sure it matches the certificate.
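
A quick way to confirm whether a certificate and its private key actually match (standard openssl commands; the file names are illustrative) is to compare their moduli:

# The two digests must be identical if the key matches the certificate
openssl x509 -noout -modulus -in certificate.pem | openssl md5
openssl rsa -noout -modulus -in privkey.pem | openssl md5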

Also, private keys can come in the following formats:

-----BEGIN PRIVATE KEY-----
MIIEv******************************************EQ
*
*
-----END PRIVATE KEY-----
-----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: DES-XXXX-CBC,11111111

mJiISQA***************************KJUH/ijPU
*
*
-----END RSA PRIVATE KEY-----

As you can see above, the first key does not have the RSA string in its header.

The second key has extra header lines (Proc-Type, DEK-Info) before the encoded string begins.

This causes SSL issues on the server side while it responds to requests. Depending on what kind of server you are running, you should convert your .pem/.pfx file into the correct private key format:


-----BEGIN RSA PRIVATE KEY-----
***
-----END RSA PRIVATE KEY-----

To fix this, get your private key into the correct format using the following command:

# RSA private key

openssl pkcs12 -in myfile.pfx -nocerts -nodes | openssl rsa -out privkey.pem

Some other handy commands:

openssl x509 -text -noout -in /tmp/cert.kn
#if your .pfx/.pem file is password protected.
echo "YOUR_PASSWORD" > passfile

# Public Key
openssl pkcs12 -in myfile.pfx -nokeys -out certificate.pem -passin file:passfile
# RSA private key
openssl pkcs12 -in myfile.pfx  -nocerts -nodes | openssl rsa -out privkey.pem
# Private key
openssl pkcs12 -in myfile.pfx -nocerts -out private-key.pem -nodes

## if you want to use on AWS Certificate Manager.
openssl pkcs12 -in $pfx_cert -nocerts -nodes -passin file:passfile | sed -ne '/-BEGIN PRIVATE KEY-/,/-END PRIVATE KEY-/p' > $certs_dir/server.key
openssl pkcs12 -in $pfx_cert -clcerts -nokeys -passin file:passfile -out $certs_dir/cert.pem
openssl pkcs12 -in $pfx_cert -cacerts -nokeys -passin file:passfile -out $certs_dir/chain.pem

Hope this is helpful!

Terraform plan and apply from a plan output file



# Initialize the backend (S3 state bucket)
terraform init -input=false -backend=true -backend-config="bucket=${WHATEVER_S3_BUCKET}" -backend-config="key=state/terraform.tfstate" -backend-config="region=us-east-1" -backend-config="profile=${WHATEVER_PROFILE}"

# Write the plan to a file so that exactly those changes are applied later
terraform plan -var-file=tfvars/${ENV}.tfvars -out tf.out

# Applying a saved plan file does not prompt for approval
terraform apply "tf.out"
terraform apply "tf.out" #  -auto-approve