Which cloud database should you choose?

Cloud is the Limit: Google Cloud Platform Database Services
GCP offers several database services that you can choose from.

Database decision tree


Cloud SQL:
A relational GCP database service that is fully managed and compatible with MySQL, PostgreSQL and SQL Server, Cloud SQL includes features like automated backups, data replication, and disaster recovery to ensure high availability and flexibility.

When to choose: Cloud SQL covers many uses, from ‘lift and shift’ migration of on-premises SQL databases to the cloud, to large-scale SQL data analytics, to CMS data storage and the scaling and deployment of microservices. It is the better option when you need relational database capabilities but your storage needs stay under 10 TB.

Cloud Spanner:
Another fully managed, relational Google Cloud database service, Cloud Spanner differs from Cloud SQL by focusing on combining the benefits of relational structure and non-relational scalability. It provides consistency across rows and high-performance operations and includes features like built-in security, automatic replication, and multi-language support.

When to choose: Cloud Spanner should be your go-to option if you plan on using large amounts of data (more than 10TB) and need transactional consistency. It is also a perfect choice if you wish to use sharding for higher throughput and accessibility.

BigQuery:
With BigQuery you can perform data analysis via SQL and query streaming data. BigQuery is a fully managed, serverless data warehouse, and its built-in Data Transfer Service helps you migrate data from on-premises resources, including Teradata.

It incorporates features for machine learning, business intelligence, and geospatial analysis that are provided through BigQuery ML, BI Engine, and GIS.

When to choose: Use cases for BigQuery include process analytics and optimization, big data processing and analytics, data warehouse modernisation, and machine learning-based behavioural analytics and predictions.
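For example, a quick ad-hoc analysis from the command line might look like the following (a rough sketch using the bq CLI against one of Google's public sample datasets; the dataset and column names are taken from the public usa_names table and are only illustrative):

bq query --use_legacy_sql=false \
'SELECT name, SUM(number) AS total
 FROM `bigquery-public-data.usa_names.usa_1910_2013`
 GROUP BY name
 ORDER BY total DESC
 LIMIT 10'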

Cloud Bigtable:
It is a fully managed NoSQL Google Cloud database service that is designed for large operational and analytics workloads. Cloud Bigtable includes features for high availability and zero-downtime configuration changes. You can integrate it with a variety of tools, including Apache ecosystem tools and Google Cloud services.

Cloud Bigtable use cases cover financial analysis and prediction, IoT data ingestion, processing, and analytics, and hyper-personalised marketing applications.

When to choose: Cloud Bigtable is a good option if you store large amounts of single-keyed data, and it is preferable for low-latency, high-throughput workloads.

Cloud Firestore:
A fully managed, serverless NoSQL GCP database designed for the development of serverless apps, Cloud Firestore can be used to store, sync, and query data for web, mobile, and IoT applications. With critical features like offline support, live synchronization, and built-in security, you can even integrate Firestore with Firebase, GCP’s mobile development platform, for easier app creation.

Cloud Firestore use cases include mobile and web applications with both online and offline capabilities, multi-user, collaborative applications, real-time analytics, social media applications, and gaming forums and leaderboards.

When to choose: When your focus lies on app development and you need live synchronization and offline support.

Firebase Realtime Database:
This is a NoSQL Google Cloud database that is a part of the Firebase platform. It allows you to store and sync data in real-time and includes caching capabilities for offline use. It also enables you to implement declarative authentication, matching users by identity or pattern.

It includes mobile and web software development kits for easier app development.
Use cases for Firebase Realtime Database involve development of apps that work across devices, advertisement optimisation and personalisation, and third-party payment processing.

Cloud Memorystore:
Designed to be secure, highly available, and scalable, Cloud Memorystore is a fully managed, in-memory Google Cloud data store that enables you to create application caches with sub-millisecond latency for data access.

Use cases for Cloud Memorystore include ‘lift and shift’ migration of applications, machine learning applications, real-time analytics, low latency data caching and retrieval.

When to choose: If you are working with key-value datasets and your main concern is transaction latency.

Choosing a database based on key questions
I also created this flowchart, which can point you in the right direction when selecting a database:

Docker: how to run a process as a different user from the VM

Problem statement:

When you run a Docker container, many enterprise organizations do not allow you to run it as root or via sudo, because that broadens access to the container file system and raises other security concerns.

I ran into this situation when I wanted to run a process as a non-root user, but the UID of my virtual machine's user did not match the UID of the container's user.

e.g.

$ id
uid=1000(circleci) gid=1000(circleci) groups=1000(circleci),4(adm),20(dialout),24(cdrom),25(floppy),29(audio),30(dip),44(video),46(plugdev),117(netdev),118(lxd),997(docker)
$ docker run -it --rm cimg/node:$CIRCLECI_NODE_TAG id
uid=3031(circleci) gid=3031(circleci) groups=3031(circleci),4(adm),20(dialout),24(cdrom),25(floppy),29(audio),30(dip),44(video),46(plugdev),117(netdev),118(lxd),997(docker)

So when you mount files from the VM and run as the circleci user inside the container, file access fails because the UIDs do not match.

Resolution:

If you want to run the container process with the same UID as the user on the VM:
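One common approach (a minimal sketch, assuming the cimg/node image from the example above; the mount path is illustrative) is to pass the VM user's UID and GID to docker run via the --user flag, so the process in the container runs with the same ownership as the files you mount:

# run the container as the VM user's UID:GID so mounted files remain readable/writable
docker run -it --rm --user "$(id -u):$(id -g)" -v "$PWD":/home/circleci/project cimg/node:$CIRCLECI_NODE_TAG id

The numeric UID may not exist in the container's /etc/passwd, so username lookups can fail, but for most build and file-access tasks a matching UID/GID is enough.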

Python Handy notes

# Hello world and variables
print("Hello, World!")
x = 5
y = "John"
print(x)
print(y)

# Casting
x = int(1)    # x will be 1
x = float(1)  # x will be 1.0

# Strings
a = " Hello, World! "
print(a.strip())
print(a[1])
print(a.split(","))
print(a.replace("a", "b"))

# Lists
thislist = ["apple", "banana", "cherry"]
print(thislist)
print(thislist[1])
thislist.append("orange")
thislist.insert(1, "orange")
thislist.remove("banana")
thislist.pop()
del thislist[0]
thislist.clear()
mylist = thislist.copy()

# Tuples
thistuple = ("apple", "banana", "cherry")
print(thistuple)
if "apple" in thistuple:
    print("Yes, 'apple' is in the fruits tuple")

# Sets
thisset = {"apple", "banana", "cherry"}
thisset.add("orange")
print(thisset)
thisset.update(["orange", "mango", "grapes"])
print(thisset)
thisset.discard("banana")

# Dictionaries
thisdict = {
    "brand": "Ford",
    "model": "Mustang",
    "year": 1964
}
print(thisdict)
x = thisdict["model"]
thisdict["year"] = 2018
for x in thisdict:          # iterate over keys
    print(x)
for x in thisdict:          # look up each value by key
    print(thisdict[x])
for x in thisdict.values():
    print(x)
for x, y in thisdict.items():
    print(x, y)
if "model" in thisdict:
    print("Yes, 'model' is one of the keys in the thisdict dictionary")
thisdict.pop("model")

# Conditions
a, b = 200, 33  # example values so the comparisons below run
if b > a:
    print("b is greater than a")
elif a == b:
    print("a and b are equal")
else:
    print("a is greater than b")

# Loops
i = 1
while i < 6:
    print(i)
    i += 1
for x in range(2, 6):
    print(x)
for x in range(2, 30, 3):
    print(x)

# Functions
def my_function():
    print("Hello from a function")

# Exceptions
try:
    print(x)  # if x were undefined, this would raise a NameError
except:
    print("An exception occurred")

try:
    print(x)
except NameError:
    print("Variable x is not defined")
except:
    print("Something else went wrong")

# File handling (assumes demofile.txt exists)
f = open("demofile.txt")
f = open("demofile.txt", "r")
print(f.read())
print(f.readline())
f.close()

f = open("demofile2.txt", "a")
f.write("Now the file has more content!")
f.close()

f = open("myfile.txt", "w")  # create the file if it doesn't exist, truncate it otherwise
f = open("myfile.txt", "x")  # create the file; raises an error if it already exists

import os
os.remove("demofile.txt")


Kubernetes: how to debug a container

Most of the time the container you are running is very slim and locked down, so you can't easily troubleshoot a process inside it.

e.g. you can't run kubectl exec to troubleshoot.

You can use kubectl debug to create a copy of the Pod with configuration values changed for debugging purposes.

Here is how you can do that:
1. Copy the Pod, adding a new container that shares the process namespace with the existing containers.
kubectl get pod my-app -n <namespace>

The following command creates a copy of my-app named my-app-debug and adds a new Ubuntu container for debugging:

kubectl debug my-app -it --image=ubuntu --share-processes --copy-to=my-app-debug


Flags and values:

The -i flag causes kubectl debug to attach to the new container by default. You can prevent this by specifying --attach=false. If your session becomes disconnected you can reattach using kubectl attach.

The --share-processes flag allows the containers in this Pod to see processes from the other containers in the Pod.

kubectl debug automatically generates a container name if you don't choose one using the --container flag.
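Once the debug copy is running, a rough workflow (Pod name taken from the example above; the debug container name is a placeholder you can find via kubectl describe) could be:

kubectl describe pod my-app-debug                               # find the auto-generated debug container name
kubectl exec -it my-app-debug -c <debug-container> -- ps aux    # app processes should be visible thanks to --share-processes (install procps if ps is missing)
kubectl delete pod my-app-debug                                 # clean up the copy when you are done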

SSL Error : LibreSSL SSL_connect: SSL_ERROR_SYSCALL or openssl s_client write:errno=54

   Trying 1.1.1.1....
* TCP_NODELAY set
* Connected to example.com (1.1.1.1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to example.com:443
* Closing connection 0
curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to example.com:443
openssl s_client -connect example.com:443 -msg 
CONNECTED(00000006)
>>> TLS 1.2 Handshake [length 00bf], ClientHello
*
*
write:errno=54
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 0 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated

If you are trying to connect to the site and it's throwing the above error, then:

It's most probably an issue with your SSL certificate's private key. Sometimes the private key placed on your proxy/web server gets corrupted while copy-pasting, so the server never sends the “Server hello” response, as you can see above.

Double-check the private key (base64-decode it if needed) to make sure it matches the certificate.
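A quick way to confirm that the certificate and private key actually belong together (a standard OpenSSL check; the file names are illustrative) is to compare their modulus hashes:

openssl x509 -noout -modulus -in certificate.pem | openssl md5
openssl rsa -noout -modulus -in privkey.pem | openssl md5
# the two hashes must match; if they differ, the key does not belong to the certificate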

Also, sometimes keys come in the following formats.

-----BEGIN PRIVATE KEY-----
MIIEv******************************************EQ
*
*
-----END PRIVATE KEY-----
-----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: DES-XXXX-CBC,11111111

mJiISQA***************************KJUH/ijPU
*
*
-----END RSA PRIVATE KEY-----⏎

As you can see above, the first key does not have the RSA string in its header.

The second key has extra header lines (Proc-Type and DEK-Info) before the encoded string starts.

This causes issues with the SSL certificate on the server side while responding to requests. Depending on what kind of server you are running, you may need to convert your .pem/.pfx file into the correct private key format, for example:


-----BEGIN RSA PRIVATE KEY-----
***
-----END RSA PRIVATE KEY-----⏎

To fix this, extract your private key in the correct format using the following command.

# RSA private key

openssl pkcs12 -in myfile.pfx -nocerts -nodes | openssl rsa -out privkey.pem

Some other handy commands:

openssl x509 -text -noout -in /tmp/cert.kn   # inspect a certificate's contents
# if your .pfx/.pem file is password protected, store the password in a file:
echo "YOUR_PASSWORD" > passfile

# Public Key
openssl pkcs12 -in myfile.pfx -nokeys -out certificate.pem -passin file:passfile
# RSA private key
openssl pkcs12 -in myfile.pfx  -nocerts -nodes | openssl rsa -out privkey.pem
# Private key
openssl pkcs12 -in myfile.pfx -nocerts -out private-key.pem -nodes

## if you want to use the certificate with AWS Certificate Manager
openssl pkcs12 -in $pfx_cert -nocerts -nodes -passin file:passfile | sed -ne '/-BEGIN PRIVATE KEY-/,/-END PRIVATE KEY-/p' > $certs_dir/server.key
openssl pkcs12 -in $pfx_cert -clcerts -nokeys -passin file:passfile -out $certs_dir/cert.pem
openssl pkcs12 -in $pfx_cert -cacerts -nokeys -passin file:passfile -out $certs_dir/chain.pem
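Once you have the key, certificate, and chain files, importing them into ACM could look roughly like this (a sketch using the AWS CLI; the region is an assumption):

aws acm import-certificate --certificate fileb://$certs_dir/cert.pem --private-key fileb://$certs_dir/server.key --certificate-chain fileb://$certs_dir/chain.pem --region us-east-1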

Hope this is helpful!

Terraform: plan and apply from a saved plan file



# initialize Terraform with an S3 remote backend
terraform init -input=false -backend=true -backend-config="bucket=${WHATEVER_S3_BUCKET}" -backend-config="key=state/terraform.tfstate" -backend-config="region=us-east-1" -backend-config="profile=${WHATEVER_PROFILE}"

# generate a plan for the target environment and save it to a file
terraform plan -var-file=tfvars/${ENV}.tfvars -out tf.out

# apply exactly what was planned (applying a saved plan does not prompt, so -auto-approve is not needed)
terraform apply "tf.out"
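If you want to review the saved plan before applying it, terraform show can render the plan file in human-readable form:

terraform show tf.out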

How to renew your GPG key

gpg --list-keys
This gives you a list of all the keys on your computer; you need it to find the name of the key you are trying to update.
## name_of_the_key=`gpg --list-keys | grep -i Jayesh | grep -i uid | awk '{print $4}'`
gpg --edit-key [name_of_the_key]
command> list
Lists the available subkeys.
command> key [subkey]
Choose the number of the subkey you want to edit, e.g. key 1.
command> expire
expire lets you set a new expiration date for the subkey.
command> save
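After saving, the key is updated locally, but you usually need to redistribute the public key so others pick up the new expiration date. A minimal sketch, reusing the key name from above (the keyserver is just an example):

gpg --armor --export [name_of_the_key] > pubkey.asc
gpg --keyserver keys.openpgp.org --send-keys [name_of_the_key]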