Error locking state: Error acquiring the state lock: ConditionalCheckFailedException: The conditional request failed
status code: 400, request id:
Created: 2018-09-27 15:02:22.226277904 +0000 UTC
Terraform acquires a state lock to protect the state from being written
by multiple users at the same time. Please resolve the issue above and try
again. For most commands, you can disable locking with the "-lock=false"
flag, but this is not recommended.
terraform force-unlock 222Whatever-222Whatever-222Whatever-d86c-222Whatever # this is the lock ID provided in the error message
From your existing config/S3 repo setup, download the state with the following commands:
- terraform state pull > terraform.tfstate
- aws s3 cp --sse AES256 terraform.tfstate s3://Bucket_Name/Whatever_Path/terraform.tfstate # note: there are two dashes before sse
- Update your backend config with the new S3 location and change the profile for that account in your Terraform config or backend config.
- Run terraform init
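Taken together, the pull-and-copy steps above can be sketched as a short shell sequence. The bucket name and path below are placeholders taken from the example; substitute your own values.

```shell
#!/bin/sh
# Placeholders (assumptions) - substitute your own values.
BUCKET="Bucket_Name"
KEY_PATH="Whatever_Path"
STATE_URI="s3://${BUCKET}/${KEY_PATH}/terraform.tfstate"

# 1. Pull the state out of the old backend into a local file.
terraform state pull > terraform.tfstate

# 2. Copy it to the new account's bucket with server-side encryption.
#    Note: two dashes before sse.
aws s3 cp --sse AES256 terraform.tfstate "$STATE_URI"

# 3. After updating the backend config to point at the new bucket:
terraform init
```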
It will throw an error such as:
Error loading state:
state data in S3 does not have the expected content.
This may be caused by unusually long delays in S3 processing a previous state
update. Please wait for a minute or two and try again. If this problem
persists, and neither S3 nor DynamoDB are experiencing an outage, you may need
to manually verify the remote state and update the Digest value stored in the
DynamoDB table to the following value: fe1212121Blah_Blah_Blah_1mduynend
Terraform failed to load the default state from the "s3" backend.
State migration cannot occur unless the state can be loaded. Backend
modification and state migration has been aborted. The state in both the
source and the destination remain unmodified. Please resolve the
above error and try again.
4. In the AWS console, open the DynamoDB table you set up for state locking and search for the item whose LockID key matches your state. Change its Digest value to the one mentioned in the last error, fe1212121Blah_Blah_Blah_1mduynend.
5. Run terraform init again
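If you prefer the CLI over the console, step 4 can also be done with aws dynamodb update-item. This is only a sketch: the table name terraform-locks is an assumption, and the LockID for the S3 backend's digest entry is typically the state's bucket/key path with an -md5 suffix, so verify the actual item in your table first.

```shell
# Hypothetical table name and LockID; verify both against your own table.
aws dynamodb update-item \
  --table-name terraform-locks \
  --key '{"LockID": {"S": "Bucket_Name/Whatever_Path/terraform.tfstate-md5"}}' \
  --update-expression "SET Digest = :d" \
  --expression-attribute-values '{":d": {"S": "fe1212121Blah_Blah_Blah_1mduynend"}}'
```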
This should move your S3 state from one bucket to new account’s bucket.
While trying to decrypt secrets with the single-line command below, I was getting an error.
cat file | base64 --decode | gpg -d # (or echo "whatever" instead of cat file)
gpg: public key decryption failed: Inappropriate ioctl for device
gpg: decryption failed: No secret key
The reason is that the key you used is passphrase-protected. gpg needs to prompt for the passphrase, and when its input comes from a pipe it cannot open the terminal to ask, which produces the "Inappropriate ioctl for device" error.
gpg --export "Jayesh-key" | base64 # to export your public key, base64-encoded
To get this working, either split the pipe into two commands:
echo "whatever" | base64 --decode > file.gpg
gpg -d file.gpg
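Alternatively, the pipe can often be kept intact by telling gpg's pinentry which terminal to prompt on; the "Inappropriate ioctl for device" error usually means GPG_TTY is unset. A minimal sketch, assuming an interactive shell:

```shell
# Point pinentry at the current terminal so it can ask for the passphrase
# even when stdin is a pipe.
export GPG_TTY=$(tty)
echo "whatever" | base64 --decode | gpg -d
```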
Or you can modify your key to have no passphrase by setting a blank one, but that is not a recommended or ideal approach.
gpg --edit-key YourKey
gpg prompt > passwd
When prompted, enter the existing passphrase to unlock the key. Then just press Enter for a blank passphrase.
gpg prompt > save