How To Recover Lost LVM

I thought I had lost all my data. Luckily, the fault was my cheap desktop server's SATA connection, not the hard disk itself. After I unplugged the disk from the desktop server and moved it into another HP server, it seemed to be back to normal.

But the problem was that I had lost the LVM info. To recover my precious data, Google helped me again. A pvscan showed the physical volumes were still intact:

# pvscan
PV /dev/sdb1 VG data lvm2 [931.51 GiB / 220.51 GiB free]
PV /dev/sda5 VG core01 lvm2 [232.64 GiB / 63.35 GiB free]
PV /dev/sdc5 lvm2 [931.27 GiB]
Total: 3 [2.05 TiB] / in use: 2 [1.14 TiB] / in no VG: 1 [931.27 GiB]

With the data disk recovered, I quickly backed it up to another healthy drive.
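Had the volume group needed reactivating as well, the usual sequence would be to rescan, reactivate, and mount. A sketch only: the VG name data comes from the scan above, but the LV name datalv is hypothetical, so check lvs for the real name first.

```shell
# Rescan and reactivate the VG, then mount the logical volume.
# The LV name "datalv" is hypothetical; check `lvs` for the real name.
# vgscan
# vgchange -ay data
# lvs data
# mount /dev/data/datalv /mnt
```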

#linux #harddisk_recovery

How To Always Get Connected Without Being Disconnected on Mobile

Tired of being disconnected when switching between networks? Conventional SSH just can't cope with a switch from WiFi to a mobile network. Thanks to my friend who recommended mosh, all my headaches are solved. I can use JuiceSSH on Android and keep running commands even across a disconnect and reconnect.

Installing mosh on the latest Debian should be straightforward, unless, like me, you need to maintain some legacy stuff. To install on Debian 7.0:
# apt-get install mosh

At the firewall level, just allow inbound UDP on ports 60001-61000.
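On an iptables firewall that boils down to something like the following (a sketch; adjust the port range and fit it into your own ruleset):

```shell
# iptables -A INPUT -p udp --dport 60001:61000 -j ACCEPT
```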

#Mosh #Linux #SSH #mobileterminal #JuiceSSH

Unable to Compile Protocol Buffers For MOSH

Recently, my friend recommended that I try mosh. I installed it and found it quite good and useful. For Debian users, apt-get install mosh and that is it. My case was a bit more complicated because I needed to keep my original SSH server, so I decided to compile and install it from source.

Too bad, I bumped into this error:

...
checking for protoc... no
configure: error: cannot find protoc, the Protocol Buffers compiler
...

To solve this, I installed protobuf-compiler, which provides protoc (with libprotobuf-dev supplying the matching development headers):

# apt-get install protobuf-compiler libprotobuf-dev

MooseFS Implementation

Recently I have been trying to find the best solution for replicating my hard disks across all my servers, at home and externally, by keeping them all in sync.

I tested glusterfs and was not satisfied with the results. Thanks to Google again, I discovered MooseFS. After all the tests I ran, MooseFS seems to be the way to go. It does not depend on the OS version you use; as long as you can compile it from source, it will work all the way.

Below is an example of a basic MooseFS setup. We need at least three servers: one Master, which serves the clients' mount requests and provides some nice GUI web reports; a MetaLogger, which acts as a backup of the Master server; and lastly the chunk servers as the disk nodes. I am using my OpenVZ containers and it works great!

All the servers below will be installed with the prerequisite packages:
# apt-get install build-essential python pkg-config libghc6-zlib-dev

Master Server:
# groupadd mfs
# useradd -g mfs mfs
# cd /usr/src/
# tar -zxvf mfs-1.6.27-5.tar.gz
# cd mfs-1.6.27
# ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var/lib --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount --enable-mfscgiserv --enable-mfscgi
# make
# make install
# cd /etc/mfs
# cp mfsmaster.cfg.dist mfsmaster.cfg
# cp mfsmetalogger.cfg.dist mfsmetalogger.cfg
# cp mfsexports.cfg.dist mfsexports.cfg

Now, let's edit mfsexports.cfg. Change the range to your own network and comment out the original (*) entry.

192.168.22.0/24 / rw,alldirs,maproot=0

For a fresh install, start from the empty metadata template in /var/lib/mfs:

# cd /var/lib/mfs
# cp metadata.mfs.empty metadata.mfs

In /etc/hosts, add an mfsmaster entry that points to this machine:

192.168.22.1 mfsmaster

Finally, start up mfsmaster and the monitoring CGI server:

# /usr/sbin/mfsmaster start
# /usr/sbin/mfscgiserv

Once everything is confirmed working, remember to automate it so both start on every reboot.
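One simple way, assuming a Debian-style /etc/rc.local is in use, is to add both start commands before the final exit 0:

```shell
# /etc/rc.local (excerpt) -- start the MooseFS master and CGI monitor on boot
/usr/sbin/mfsmaster start
/usr/sbin/mfscgiserv
exit 0
```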

Backup (MetaLogger) Server:
This server should have the same specification as the Master server, and its setup is similar to the Master server setup.

# groupadd mfs
# useradd -g mfs mfs
# cd /usr/src
# tar -zxvf mfs-1.6.27-5.tar.gz
# cd mfs-1.6.27
# ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var/lib --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount --enable-mfscgiserv --enable-mfscgi
# make
# make install
# cd /etc/mfs
# cp mfsmetalogger.cfg.dist mfsmetalogger.cfg

Append the following to /etc/hosts:
192.168.22.1 mfsmaster

Finally start up the services:

# /usr/sbin/mfsmetalogger start

Chunk Server Installation:
The steps below are almost identical to the commands above:
# groupadd mfs
# useradd -g mfs mfs
# cd /usr/src
# tar -zxvf mfs-1.6.27-5.tar.gz
# cd mfs-1.6.27
# ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var/lib --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster --enable-mfscgiserv --enable-mfscgi
# make
# make install

Let's prepare the configuration files for the chunk server:
# cd /etc/mfs
# cp mfschunkserver.cfg.dist mfschunkserver.cfg
# cp mfshdd.cfg.dist mfshdd.cfg

In this example, we will use /mnt/testchunks/ as the storage mount point. Add it to mfshdd.cfg:

/mnt/testchunks

Also, let's make the directory owned by mfs:

# chown -R mfs:mfs /mnt/testchunks

Append the following to /etc/hosts:
192.168.22.1 mfsmaster

Let's start up the mfschunkserver:

# /usr/sbin/mfschunkserver start

Client’s Computers:
Finally, to mount from a client's machine, we need to compile the MooseFS packages there as well.

# apt-get install libfuse-dev fuse
# cd /usr/src
# tar -zxvf mfs-1.6.27-5.tar.gz
# cd mfs-1.6.27
# ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var/lib --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster --disable-mfschunkserver
# make
# make install

Also add the master ip to the /etc/hosts:

192.168.22.1 mfsmaster

Mount the filesystem at /mnt on the client's machine:

# /usr/bin/mfsmount /mnt -H mfsmaster
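With the mount in place, you can tell MooseFS how many copies of each file to keep across the chunk servers. A sketch, keeping two copies of everything under /mnt:

```shell
# mfssetgoal -r 2 /mnt
# mfsgetgoal /mnt
```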

Yeah, and that's all it takes to install one of the most advanced distributed file systems. It also has a nice web GUI: if the master IP is 192.168.22.1, the URL will be something like http://192.168.22.1:9425

#MooseFS #Distributed_File_System

TiNC – How To Setup

I have been using tinc for quite a while. What is tinc? tinc is a VPN daemon: it creates a secure private network between hosts over the internet, and it is supported on many operating systems.

Below is an example configuration between two machines over the internet, one as the server and the other as the client. I am using Debian 7.0 for this example.

On the server:

# apt-get install tinc
# cd /etc/tinc/
# mkdir mynetwork
# cd !$

A few files and a folder are needed for this example. Let's make the folder and create the files.

Create the hosts folder:

# mkdir hosts

Create the tinc.conf file, naming this host server:

# vim tinc.conf
Name = server
AddressFamily = ipv4
Interface = tun0

Next, create the tinc-down file, which tells tinc what to do when the interface goes down:

# cat > tinc-down << EOF
ifconfig \$INTERFACE down
EOF
# chmod 755 tinc-down

Once this is done, let's work on the tinc-up file. This determines the server's IP on the VPN network you wish to set up. I will pick the 192.168.111.0 network as my example.

# cat > tinc-up << EOF
ifconfig \$INTERFACE 192.168.111.1 netmask 255.255.255.0
EOF
# chmod 755 tinc-up

We are almost there. Let's create the server's host file:

# cd hosts
# cat > server << EOF
Address = www.bluebert.info
Port = 655
Subnet = 192.168.111.1/32
EOF

Let's generate the keys, and we are almost done setting up tinc on the server:

# tincd -n mynetwork -K4096
Generating 4096 bits keys:
....................................................................................++ p
...............++ q
Done.
Please enter a file to save private RSA key to [/etc/tinc/mynetwork/rsa_key.priv]:
Please enter a file to save public RSA key to [/etc/tinc/mynetwork/hosts/server]:

Start the tinc server:

# tincd -n mynetwork

You should be able to see something as below:

# ifconfig tun0
tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:192.168.111.1 P-t-P:192.168.111.1 Mask:255.255.255.0
UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

Now let's proceed to the client side and do the same apt-get install tinc. I will assume the installation is already done and jump straight to configuring the client, which is almost the same as the above. I will use 192.168.111.2 as the client IP.

Create the client's network directory, named the same as on the server:

# cd /etc/tinc/
# mkdir mynetwork
# cd !$

Create the hosts folder, as we did on the server:

# mkdir hosts

Create the tinc.conf file, naming this host client and telling it to connect to server:

# vim tinc.conf
Name = client
AddressFamily = ipv4
Interface = tun0
ConnectTo = server

Next, create the tinc-down file:

# cat > tinc-down << EOF
ifconfig \$INTERFACE down
EOF
# chmod 755 tinc-down

Create the client's tinc-up file, using 192.168.111.2 as the client IP:

# cat > tinc-up << EOF
ifconfig \$INTERFACE 192.168.111.2 netmask 255.255.255.0
EOF
# chmod 755 tinc-up

We are almost there. Here, we need to copy the server's host file (containing its public key) into the client's hosts folder, create the client's host file, and then copy the client's file back to the server's hosts folder.

# cd hosts
# scp root@server1:/etc/tinc/mynetwork/hosts/server .
# cat > client << EOF
Subnet = 192.168.111.2/32
Port = 655
EOF

Time to generate the keys and copy the client's host file back to the server's hosts folder.

# tincd -n mynetwork -K4096
Generating 4096 bits keys:
.................++ p
................................++ q
Done.
Please enter a file to save private RSA key to [/etc/tinc/mynetwork/rsa_key.priv]:
Please enter a file to save public RSA key to [/etc/tinc/mynetwork/hosts/client]:

# scp client root@server:/etc/tinc/mynetwork/hosts/

Lastly, start tinc on the client, then check the interface and ping the server:

# tincd -n mynetwork
# ifconfig tun0
tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:192.168.111.2 P-t-P:192.168.111.2 Mask:255.255.255.0
UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1
RX packets:3 errors:0 dropped:0 overruns:0 frame:0
TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:252 (252.0 B) TX bytes:252 (252.0 B)

# ping -c 1 192.168.111.1
PING 192.168.111.1 (192.168.111.1) 56(84) bytes of data.
64 bytes from 192.168.111.1: icmp_req=1 ttl=64 time=17.8 ms

--- 192.168.111.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 17.847/17.847/17.847/0.000 ms

Yeap, and that's it. The setup is done and ready to use. Remember to add the startup to rc.local or otherwise auto-start tinc on boot, and remember to allow it through the firewall.
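On Debian, the packaged init script starts every network listed in /etc/tinc/nets.boot, so auto-starting should be as simple as (assuming the stock package layout):

```shell
# echo mynetwork >> /etc/tinc/nets.boot
```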

#linux #tinc #vpn

Converting Physical Host to OpenVZ containers

Well, before revisiting OpenVZ I had all my servers under KVM (Kernel-based Virtual Machine). It was very cool at the time, but whenever the load spiked, the whole set of servers would be affected and slow down. Even the firewall router slowed down, taking network connectivity and everything else with it.

OpenVZ saved me again, and right now I am migrating the existing KVM guests into containers, keeping the same credentials and everything without redoing it all.

First of all, thanks to https://openvz.org/Physical_to_container I was able to work it out. I created the virtual container with my own script and stopped it. Since I am using ploop, I needed to manually mount the drive to a temporary mount point.

# ploop mount /vz/private/100/root.hdd/root.hdd
Adding delta dev=/dev/ploop54922 img=/vz/private/4008/root.hdd/root.hdd (rw)
# mount -o loop /dev/ploop54922p1 /mnt

Go to the original physical host and start rsyncing the data to the ploop device. First of all, mount the drive:

PhysicalHost:# cd /tmp && mkdir roothost && cd roothost
PhysicalHost:# mount /dev/mapper/PhysicalHost-root /tmp/roothost; cd /tmp/roothost
PhysicalHost:# rsync -av --progress . root@openvzhost:/mnt/

Back on the OpenVZ host, fix up the guest filesystem:
OpenVZHost:# cd /mnt
OpenVZHost:# sed -i -e 's/^[0-9].*getty.*tty/#&/g' etc/inittab
OpenVZHost:# ln -sf /proc/mounts etc/mtab
OpenVZHost:# cp -a /dev/ttyp* /dev/ptyp* dev/
OpenVZHost:# cd ~ && umount /mnt

Unmount the ploop device:

OpenVZHost:# ploop umount /vz/private/100/root.hdd/root.hdd

Remember to update the container's network configuration if the network range has changed.
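With vzctl this is a one-liner per setting. A sketch for container 100, with hypothetical new addresses:

```shell
# vzctl set 100 --ipdel all --save
# vzctl set 100 --ipadd 192.168.22.50 --save
# vzctl set 100 --nameserver 192.168.22.1 --save
```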

#openvz #openvz_migration

Office Door Alert Using Arduino

Another quick hack of mine using Arduino. I used my spare Arduino to take analog readings over a reed switch. It is so simple and easy: the power source is tapped from the magnetic bar, so the moment the magnet is engaged, the Arduino powers up and starts reading the analogue input. If the door stays open for more than 7 seconds, it gives a very noisy, annoying sound that will make you close the glass door. Below is the code:

// Date: 27/02/2014
// Description:
// Developed for Door Access Alert Used Only

int buzzer_pin = 8;
int tone_length = 150;
int freq_low = 2800;
int freq_high = 6300;
int limit = 4;
int voltage_pin = 0;
int voltage = 0;
int voltage_trigger = 500;
int counter;
int delaySecs = 6;
int warning_LED = 13;
unsigned long time;

void setup() {
  pinMode(buzzer_pin, OUTPUT);
  pinMode(warning_LED, OUTPUT);

  // Startup chirp so we know the board has powered up
  for(int k = 0; k < 1; k++) {
    for(int i = 0; i < 10; i++) {
      tone(buzzer_pin, 5000, 10);
      delay(2);
      tone(buzzer_pin, 5300, 10);
      delay(10);
      tone(buzzer_pin, 2500, 10);
      delay(5);
      tone(buzzer_pin, 4500, 5);
      delay(8);
    }
  }

  tone(buzzer_pin, 0, 0);
  delay(10);
  noTone(buzzer_pin);
  delay(10);
}

void loop() {
  if(analogRead(voltage_pin) < voltage_trigger) {
    // Before sounding the siren, wait delaySecs seconds and check the
    // voltage again. If the door is still open, sound the siren until
    // the voltage rises back above the trigger point.
    digitalWrite(warning_LED, HIGH);
    delay(delaySecs * 1000);
    if(analogRead(voltage_pin) < voltage_trigger) {
      onSiren();
    }
    digitalWrite(warning_LED, LOW);
  }
}

void onSiren() {
  int run = 1;
  while(run == 1) {
    counter++;
    // Rising tone sweep
    for(int i = freq_low; i <= freq_high; i += 2) {
      tone(buzzer_pin, i, tone_length);
    }
    // Falling tone sweep
    for(int i = freq_high; i >= freq_low; i -= 20) {
      tone(buzzer_pin, i, tone_length);
    }

    // Stop once the voltage rises back above the trigger point
    if(analogRead(voltage_pin) > voltage_trigger) {
      for(int i = 0; i < 10; i++) {
        tone(buzzer_pin, 3000, 50);
        delay(10);
        tone(buzzer_pin, 4000, 50);
        delay(10);
        tone(buzzer_pin, 2500, 50);
        delay(10);
      }
      tone(buzzer_pin, 0, 0);
      noTone(buzzer_pin);
      delay(10);
      run = 0;
    }
  }
}

#arduino #dooralertsystem #quickhack

Raspberry Pi Print Server

During the weekend, I discovered that the laptop I used as a print server was dead. It had too many power connections anyway, and some were loosely connected.

Conveniently, my Raspberry Pi, freshly decommissioned as a router, was lying there doing nothing. So I took up the effort of converting it into a print server, and later perhaps a backup station, since the printing won't happen that often anyway. My Raspberry Pi runs the Gentoo distro, with only 256MB RAM and a 16GB class 10 SD card. It is a first-generation Raspberry Pi.

Installing cupsd is a challenge because Gentoo really requires a lot of patience while everything compiles. The bright side is that once the binaries finish compiling, they run very light-weight.

Since I have some OpenVZ hosts, I also built a client that serves Google Cloud Print from my Android. Just go to Google, choose advanced settings, then set up the printer by adding it to the list.

#raspberrypi #cups #printserver

A Simple One Line Command Cron For Monitoring A Server

Well, I have been monitoring my home server from an external server for a while now. It is the only way I know when it is down, and I am informed through email. Just a simple one-line script with netcat, and it serves the purpose.

Below is the part of my cron syntax:

...
...
*/1 * * * * URL=www.bluebert.info; EMAIL=mail@example.com; FLAG=/tmp/informed; if nc -z -w2 ${URL} 80 2>/dev/null; then if [ -e ${FLAG} ]; then rm -f ${FLAG}; echo "Site is RECOVERED" | mail -s "${URL} is UP" ${EMAIL}; fi; else if [ ! -e ${FLAG} ]; then touch ${FLAG}; echo "Site is DOWN" | mail -s "${URL} is DOWN" ${EMAIL}; fi; fi
...
...

If there are more sites to monitor, they can all be gathered into a script and scanned in one go.
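For readability, the same logic can be unrolled into a small function that the cron job (or a wrapper script looping over several sites) calls. A sketch, assuming nc and mail are on the PATH:

```shell
# check_site: the cron one-liner above, unrolled.
# Usage: check_site <host> <flag-file> <email>
check_site() {
    url=$1; flag=$2; email=$3
    if nc -z -w2 "$url" 80 2>/dev/null; then
        # Reachable: if we previously flagged it down, announce recovery once.
        if [ -e "$flag" ]; then
            rm -f "$flag"
            echo "Site is RECOVERED" | mail -s "$url is UP" "$email"
        fi
    else
        # Unreachable: flag it so we do not mail on every cron run.
        if [ ! -e "$flag" ]; then
            touch "$flag"
            echo "Site is DOWN" | mail -s "$url is DOWN" "$email"
        fi
    fi
}
```

The flag file is what keeps the mailbox quiet: exactly one DOWN mail per outage and one UP mail per recovery.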