terraform : Error locking state: Error acquiring the state lock: ConditionalCheckFailedException: The conditional request failed

Error locking state: Error acquiring the state lock: ConditionalCheckFailedException: The conditional request failed
status code: 400, request id:

 

ID: 222Whatever-222Whatever-222Whatever-d86c-222Whatever
Path: terraform.tfstate
Operation: OperationTypePlan
Who: username@hostname
Version: 0.11.7
Created: 2018-09-27 15:02:22.226277904 +0000 UTC
Info:

 

Terraform acquires a state lock to protect the state from being written
by multiple users at the same time. Please resolve the issue above and try
again. For most commands, you can disable locking with the "-lock=false"
flag, but this is not recommended.

 

Fix:

terraform force-unlock 222Whatever-222Whatever-222Whatever-d86c-222Whatever .   # this is the ID provided in the error message
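
The lock itself lives in the DynamoDB table configured for the S3 backend, which is where the ConditionalCheckFailedException comes from. If you want to inspect the stale lock entry before force-unlocking it, something like the sketch below should work; the table name terraform-locks and the LockID value are placeholders for your own backend settings.

# Placeholder names: use your backend's dynamodb_table and "<bucket>/<key>" as the LockID
aws dynamodb get-item \
  --table-name terraform-locks \
  --key '{"LockID": {"S": "Bucket_Name/terraform.tfstate"}}'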

How to move terraform state from one bucket to another?

From your existing config/S3 repo setup, download and move the state with the following steps.

  1. terraform state pull > terraform.tfstate
  2. aws s3 cp --sse AES256 terraform.tfstate s3://Bucket_Name/Whatever_Path/terraform.tfstate
  3. Update your backend config with the new S3 location and change the profile for that account in your terraform config or backend config (an example backend block is shown at the end of this section).
  4. Run terraform init

It will throw an error such as


Error loading state:
state data in S3 does not have the expected content.

This may be caused by unusually long delays in S3 processing a previous state
update. Please wait for a minute or two and try again. If this problem
persists, and neither S3 nor DynamoDB are experiencing an outage, you may need
to manually verify the remote state and update the Digest value stored in the
DynamoDB table to the following value: fe1212121Blah_Blah_Blah_1mduynend

Terraform failed to load the default state from the "s3" backend.
State migration cannot occur unless the state can be loaded. Backend
modification and state migration has been aborted. The state in both the
source and the destination remain unmodified. Please resolve the
above error and try again.

5. Go to the DynamoDB table that you have set up in your AWS console for state locking and look up the LockID entries. Search for the key that you provided for LockID and change the Digest value there to the fe1212121Blah_Blah_Blah_1mduynend value from the last error (a CLI equivalent is sketched after the next step).

6. Run terraform init again
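
If you prefer the CLI to the AWS console for updating the Digest, a put-item along these lines should do the same thing. The table name terraform-locks is a placeholder, and as far as I know the S3 backend stores the digest under a LockID with an -md5 suffix:

# Placeholder table name; the Digest value is the one printed in the error above
aws dynamodb put-item \
  --table-name terraform-locks \
  --item '{"LockID": {"S": "Bucket_Name/Whatever_Path/terraform.tfstate-md5"}, "Digest": {"S": "fe1212121Blah_Blah_Blah_1mduynend"}}'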

 

This should move your S3 state from one bucket to the new account's bucket.
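
For reference, the backend block you update in step 3 typically looks something like this; the bucket, key, region, table, and profile names are placeholders for your own setup.

terraform {
  backend "s3" {
    bucket         = "Bucket_Name"
    key            = "Whatever_Path/terraform.tfstate"
    region         = "us-east-1"
    profile        = "new-account-profile"
    encrypt        = true
    dynamodb_table = "terraform-locks"
  }
}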

GPG decryption error

While trying to decrypt the secrets with the single-line command below, I was getting this error:


cat file | base64 --decode | gpg -d    # or: echo "whatever" | base64 --decode | gpg -d

gpg: public key decryption failed: Inappropriate ioctl for device
gpg: decryption failed: No secret key

The reason is that the key you used is passphrase protected. The pipe won't work with gpg when the key is passphrase protected, because gpg cannot prompt for the passphrase while its input is coming from a pipe.


gpg --export "Jayesh-key" | base64 # To get your key

gpg --list-keys # To list your available keys

To get it working, either split the pipe into two commands:


echo "whatever" | base64 --decode > file.gpg

gpg -d file.gpg

or you can modify your key to have no passphrase by setting a blank passphrase, but that is not a recommended or ideal approach:


gpg --edit-key YourKey

gpg prompt > passwd

When it prompts you, enter the existing passphrase to unlock the key, then just press Enter to set a blank passphrase.

gpg prompt > save
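
Another option, if you want to keep the passphrase and still use a single pipeline, is GnuPG's loopback pinentry mode (GnuPG 2.1+). This is an alternative to the steps above, and the passphrase file path is a placeholder:

# Supply the passphrase non-interactively; keep the passphrase file readable only by you
echo "whatever" | base64 --decode | gpg --batch --pinentry-mode loopback --passphrase-file /path/to/passphrase.txt -d

On some older 2.1.x versions you may also need allow-loopback-pinentry in ~/.gnupg/gpg-agent.conf.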

Puppet Error 400 on SERVER: A duplicate resource was found while collecting exported resources, with the type and title Host

Even after clearing the certs from the Puppet master and the client, if you are getting the error below on your Puppet client

Error 400 on SERVER: A duplicate resource was found while collecting exported resources, with the type and title Host

then it is because the certs have been badly messed up.

Here is what you need to do.

Check the cert names with ls -alR ${PUPPET_HOME}/ssl/

One of the certs above has multiple hostnames in it. Find all the hostnames that appear in the output of the above command and delete all of them from the Puppet master.

Once done, clean the ssl folder on the client:

rm -rf ${PUPPET_HOME}/ssl/

and run the puppet agent.
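
Roughly, the whole cleanup sequence looks like the sketch below. This assumes a pre-Puppet-6 setup where the puppet cert subcommand is still available, and the hostname is a placeholder:

# On the Puppet master: list the signed certs and clean every hostname found above
puppet cert list --all
puppet cert clean duplicate-host.example.com   # repeat for each offending hostname

# On the client: wipe the local ssl dir and re-run the agent to request a fresh cert
rm -rf ${PUPPET_HOME}/ssl/
puppet agent -t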

Linux: run a script/service a few minutes after reboot

You can use systemd timers to execute a script a minute after boot.

First, create a service file (/etc/systemd/system/myscript.service):

[Unit]
Description=MyScript

[Service]
Type=simple
ExecStart=/usr/local/bin/myscript

Then create a timer unit (/etc/systemd/system/myscript.timer):

[Unit]
Description=Runs myscript one minute after boot

[Timer]
# Time to wait after booting before activation
OnBootSec=1min
Unit=myscript.service

[Install]
WantedBy=timers.target

Now enable and start it:

# systemctl enable myscript.timer
# systemctl start myscript.timer
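
To confirm the timer is actually scheduled, you can check it with systemctl after a reboot (output will vary):

# Shows last/next trigger times for all timers, including myscript.timer
systemctl list-timers --all

# Shows whether the timer unit itself is loaded and active
systemctl status myscript.timer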

.so: cannot open shared object file: Permission denied

When I start the application in debug mode (sh -x), it works fine after taking a few seconds to load the libraries, but without debug mode it fails with ".so: cannot open shared object file: Permission denied", even though the .so file has proper read permissions, as do all of its parent folders.

Then I figured out that it was due to SELinux being in enforcing mode.

Change /etc/selinux/config from SELINUX=enforcing (or permissive) to SELINUX=disabled.
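
Editing /etc/selinux/config to disabled only takes full effect after a reboot; to confirm SELinux is the culprit right away, you can check and toggle the mode at runtime:

# Show the current SELinux mode (Enforcing / Permissive / Disabled)
getenforce

# Temporarily switch to permissive mode until the next reboot
setenforce 0

# Retry the application; if it now loads the .so, SELinux was blocking it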