You can use systemd timers to execute a script a minute after boot.
First, create a service file (/etc/systemd/system/myscript.service) that runs your script.
Then create a timer unit (/etc/systemd/system/myscript.timer) with a Description (e.g. "Runs myscript every hour") and an OnBootSec= setting, which is the time to wait after booting before activation.
Now enable and run it:
# systemctl enable myscript.timer
# systemctl start myscript.timer
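The two unit files referenced above might look like the following; this is a minimal sketch, assuming the script lives at /usr/local/bin/myscript.sh (the path, Description, and timings are placeholders to adjust):

```ini
# /etc/systemd/system/myscript.service  (hypothetical script path)
[Unit]
Description=Runs myscript

[Service]
Type=oneshot
ExecStart=/usr/local/bin/myscript.sh

# /etc/systemd/system/myscript.timer
[Unit]
Description=Runs myscript every hour

[Timer]
# Time to wait after booting before activation
OnBootSec=1min
# Re-run one hour after the last activation
OnUnitActiveSec=1h

[Install]
WantedBy=timers.target
```

Note that the timer, not the service, is what gets enabled and started.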
Add/append the following in httpd.conf
and remove the existing line
from the Apache config.
1. Close Google Chrome.
2. Go to this directory:
cd ~/Library/Application\ Support/Google/Chrome
3. Rename the Default folder.
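Step 3 is just a rename; here is the idea sketched on a throwaway directory (on a real machine you would run the mv inside the Chrome directory from step 2 instead). Chrome recreates a fresh Default profile on its next launch:

```shell
# Simulate renaming the Default profile folder in a temp directory.
demo=$(mktemp -d)
mkdir "$demo/Default"
mv "$demo/Default" "$demo/Default.bak"
ls "$demo"    # prints: Default.bak
```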
A lot changed in CentOS 7, and it no longer logs all details to /var/log/messages. You have to use the journal for Puppet logs.
Tail: journalctl -f -u pe-puppet
Full: journalctl -l -u pe-puppet
When I start the application in debug mode (sh -x) it works fine after spending a few seconds loading the libraries, but without debug mode it fails with ".so: cannot open shared object file: Permission denied", even though the file has proper read permission, as do all of its parent folders.
I then figured out that it was due to SELinux being in enforcing mode.
Change /etc/selinux/config from SELINUX=enforcing (or permissive) to SELINUX=disabled. To test the theory without a reboot, you can run setenforce 0 first to switch to permissive mode temporarily.
ls -alp /etc/ssh/ssh_host_dsa_key.pub | cut -d " " -f6-10
The keys are generated when you install the OS.
On a Red Hat system you can also check the following:
rpm -qi basesystem | grep "Install Date"
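An alternative to cutting up ls output is stat, which prints the timestamp directly. A sketch, demonstrated on a temp file since the key may not exist on every box; on a real host you would point it at /etc/ssh/ssh_host_dsa_key.pub:

```shell
# stat -c %y (GNU stat) prints the last-modification time in full.
f=$(mktemp)
stat -c '%y' "$f"
```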
Boston MBTA travel times: for those who want to plan a trip on the MBTA and know how long it takes to travel to and from different stations, this should be useful.
Install the rngd application:
apt-get install rng-tools
ls -l /dev/urandom
rngd -r /dev/urandom
Then create your GPG key.
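To see whether rngd actually helped, you can check the kernel's entropy pool before and after starting it; a low value (a few hundred bits) is why gpg key generation stalls:

```shell
# Current size of the kernel's entropy pool, in bits.
cat /proc/sys/kernel/random/entropy_avail
```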
You have pressed some keys that caused this behavior.
Fix: press Ctrl + Page Up/Down on your keyboard.
NIC Teaming and NIC bonding are two different things.
NIC teaming uses one of two methods: failover, or load balancing with failover. With a team you do not get a single 2 Gb connection (from two 1 Gb NICs). You get two pipes that act as one, but they merely load-balance traffic across each NIC, and each NIC acts as a failover for the other. If you transfer a 100 GB file, you are not going to get 2 Gb of throughput; you still only get 1 Gb, but you will not kill network performance, because the second NIC is still available to service other traffic.
True bonding would be taking two NICs and bonding them together into a single fat pipe. This requires the switch to support it as well. I have not seen much bonding in the server world; it is more often done at the network level.
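On Linux, the single-fat-pipe variant is bonding mode 802.3ad (LACP), which is the mode that needs switch support. A hypothetical RHEL/CentOS configuration sketch (the device names eth0/eth1 are assumptions; adjust for your hardware):

```ini
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=802.3ad miimon=100"
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
```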
VMware acts the same way: it is purely load balancing and failover. Since VMware teaming is done at the OS level, you can mix and match NICs from different vendors in a team. I have done this without issue; just make sure they are on the HCL.