1. Go to /etc/apache2/sites-available.
2. Open your site file using vi.
3. Find the Options line and remove the word Indexes.
If you are in a load-balanced environment, make the change on both (or all) servers.
Restart Apache after changing the file:
/etc/init.d/apache2 restart
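As a sketch of what that edit does (the file name, path, and contents below are hypothetical), the same change can be made non-interactively with sed:

```shell
# Hypothetical vhost fragment -- /tmp/site.conf stands in for your real
# file under /etc/apache2/sites-available.
cat > /tmp/site.conf <<'EOF'
<Directory /var/www/>
    Options Indexes FollowSymLinks MultiViews
    AllowOverride None
</Directory>
EOF
# Strip the Indexes keyword from the Options line to disable
# directory listings.
sed -i 's/\(Options[^I]*\)Indexes */\1/' /tmp/site.conf
grep 'Options' /tmp/site.conf
```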
Monday, December 27, 2010
Wednesday, December 22, 2010
Amazon S3 mounting
http://code.google.com/p/s3fs/wiki/FuseOverAmazon
Installation Notes:
http://code.google.com/p/s3fs/wiki/InstallationNotes
In case the link for the installation notes fails, please refer to the bottom of this page.
To install FUSE 2.x:
wget http://sourceforge.net/projects/fuse/files/fuse-2.X/2.8.5/fuse-2.8.5.tar.gz
tar zxvf fuse-2.8.5.tar.gz
cd fuse-2.8.5
./configure
make
make install
Create a password file (e.g. .passwd-s3fs) containing your Amazon keys in /home/ubuntu.
The s3fs password file has this format (use this format if you have only one set of credentials):
accessKeyId:secretAccessKey
If you have more than one set of credentials, you can keep the default credentials as specified above, but this syntax is recognized as well:
bucketName:accessKeyId:secretAccessKey
The password file must not be readable by other users, so execute the following command:
chmod 600 .passwd-s3fs
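Putting both steps together (a sketch; the keys are placeholders, and /tmp stands in for /home/ubuntu):

```shell
# Placeholder credentials -- substitute your real access key and secret key.
cat > /tmp/.passwd-s3fs <<'EOF'
AKIAEXAMPLEKEY:exampleSecretAccessKey
EOF
# Drop group/other permissions; s3fs rejects a password file that
# other users can read.
chmod 600 /tmp/.passwd-s3fs
ls -l /tmp/.passwd-s3fs
```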
To mount an S3 bucket as a drive:
s3fs <s3 bucket name> <local Path> -ouse_cache=/tmp -opasswd_file=.passwd-s3fs -ouse_rrs=1 -o allow_other
To mount on boot:
echo 's3fs <bucketname> <local path> -ouse_cache=/tmp -opasswd_file=.passwd-s3fs -ouse_rrs=1 -o allow_other' > /etc/init.d/s3_byte
chmod 755 /etc/init.d/s3_byte
update-rc.d s3_byte defaults
To check which s3fs version is installed: s3fs --version
To check errors in log:
grep s3fs /var/log/syslog OR
grep s3fs /var/log/messages
Update:
Make sure updatedb is not indexing the mounted S3 bucket, otherwise your S3 costs will shoot up. To keep updatedb away from the mount, edit /etc/updatedb.conf:
1. Add the s3fs filesystem type (fuse.s3fs) to PRUNEFS.
2. Add your S3 bucket mount point to PRUNEPATHS.
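A sketch of the resulting lines (written to /tmp here for illustration; /mnt/s3bucket is a hypothetical mount point, and the stock entries shown may differ on your system):

```shell
# Sample updatedb.conf fragment -- in practice this is /etc/updatedb.conf.
cat > /tmp/updatedb.conf <<'EOF'
# Filesystem types updatedb must skip (the s3fs fuse type among them).
PRUNEFS="NFS nfs nfs4 proc smbfs autofs iso9660 cifs tmpfs udf fuse.s3fs"
# Paths updatedb must skip -- add your S3 mount point here.
PRUNEPATHS="/tmp /var/spool /media /mnt/s3bucket"
EOF
grep PRUNE /tmp/updatedb.conf
```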
http://manpages.ubuntu.com/manpages/natty/man8/updatedb.8.html
General Instructions
From released tarball
Download: http://s3fs.googlecode.com/files/s3fs-1.61.tar.gz
SHA1 checksum: 8f6561ce00b41c667b738595fdb7b42196c5eee6
Download size: 154904
- tar xvzf s3fs-1.61.tar.gz
- cd s3fs-1.61/
- ./configure --prefix=/usr
- make
- make install (as root)
From subversion repository
- svn checkout http://s3fs.googlecode.com/svn/trunk/ s3fs
- cd s3fs/
- autoreconf --install (or ./autogen.sh)
- ./configure --prefix=/usr
- make
- make install (as root)
Notes for Specific Operating Systems
Debian / Ubuntu
Tested on Ubuntu 10.10
Install prerequisites before compiling:
- apt-get install build-essential
- apt-get install libfuse-dev
- apt-get install fuse-utils
- apt-get install libcurl4-openssl-dev
- apt-get install libxml2-dev
- apt-get install mime-support
Fedora / CentOS
Tested on Fedora 14 Desktop Edition and CentOS 5.5 (note: tested on Nov 25, 2010 with s3fs version 1.16; newer versions of s3fs have not been formally tested on these platforms). See the comment below on how to get FUSE 2.8.4 installed on CentOS 5.5.
Install prerequisites before compiling:
- yum install gcc
- yum install libstdc++-devel
- yum install gcc-c++
- yum install fuse
- yum install fuse-devel
- yum install curl-devel
- yum install libxml2-devel
- yum install openssl-devel
- yum install mailcap
Tuesday, December 21, 2010
Monday, December 20, 2010
How to remotely manage an Amazon RDS instance with PHPMyAdmin
EDIT /etc/mysql/my.cnf: change bind-address to 0.0.0.0 (to accept all TCP/IP addresses).
Go to the mysql command line (type mysql -u root -p) - perform as root (sudo su):
GRANT ALL ON *.* TO username@IP-address IDENTIFIED BY 'password';
IP-address: IP address of the server where phpMyAdmin is installed.
username/password: the username/password phpMyAdmin uses to connect to the mysql server (i.e. the same username and password as on the mysql server).
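As a sketch with placeholder values (the host 192.0.2.10, user pmauser, and password changeme are all hypothetical), the GRANT can be built and fed to mysql like this:

```shell
# Placeholder values: the machine running phpMyAdmin and the account
# it will connect with.
PMA_HOST=192.0.2.10
PMA_USER=pmauser
PMA_PASS=changeme
SQL="GRANT ALL ON *.* TO '${PMA_USER}'@'${PMA_HOST}' IDENTIFIED BY '${PMA_PASS}';"
echo "$SQL"
# Then run it on the MySQL server as root, e.g.:
#   echo "$SQL" | mysql -u root -p
```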
NOW: go to the phpMyAdmin server and change /etc/phpmyadmin/config.inc.php as per the link:
http://www.linux.com/archive/feature/130016
references:
http://www.debianhelp.co.uk/remotemysql.htm
http://www.absolutelytech.com/2010/06/22/howto-use-local-phpmyadmin-with-remote-mysql/
How to remotely manage an Amazon RDS instance with PHPMyAdmin
The biggest thing I was considering with Amazon Relational Database
Service was how to manage it. A command line interface is NOT efficient
for database management so I needed to be sure that I would be able to
utilize software on my computer to manage my data. PHPMyAdmin was my
software of choice. It supports multiple servers and has pretty much
everything I need.
So what does it take to get everything up and running? First, sign up for Amazon RDS and get your instance up and running. Amazon's RDS Getting Started Guide is a great resource and I'd highly recommend it. The one thing I had trouble with was the DB Security Group setup. When you go to add access for a CIDR/IP, it provides a recommended value. It took some messing around to determine that this default value isn't actually what needed to be there. If you're not able to connect to your instance when it's all said and done, be sure to double-check this value. The IP they provided did not match the IP address that was provided to us by our ISP. Once you've created your DB Instance and set up the security group you're good to go.
I'm going to assume you've already got PHPMyAdmin up and running. What you need to do is modify config.inc.php to recognize the new server. Your config file should look something like this:
/* Configure according to dbconfig-common if enabled */
if (!empty($dbname)) {
    /* Authentication type */
    $cfg['Servers'][$i]['auth_type'] = 'config';
    $cfg['Servers'][$i]['user'] = 'root';
    $cfg['Servers'][$i]['password'] = 'changeme';
    $cfg['Servers'][$i]['hide_db'] = '(mysql|information_schema|phpmyadmin)';
    /* Server parameters */
    if (empty($dbserver)) $dbserver = 'localhost';
    $cfg['Servers'][$i]['host'] = $dbserver;
    if (!empty($dbport)) {
        $cfg['Servers'][$i]['connect_type'] = 'tcp';
        $cfg['Servers'][$i]['port'] = $dbport;
    }
    //$cfg['Servers'][$i]['compress'] = false;
    /* Select mysqli if your server has it */
    $cfg['Servers'][$i]['extension'] = 'mysqli';
    /* Optional: User for advanced features */
    //$cfg['Servers'][$i]['controluser'] = $dbuser;
    //$cfg['Servers'][$i]['controlpass'] = $dbpass;
    /* Optional: Advanced phpMyAdmin features */
    $cfg['Servers'][$i]['pmadb'] = $dbname;
    $cfg['Servers'][$i]['bookmarktable'] = 'pma_bookmark';
    $cfg['Servers'][$i]['relation'] = 'pma_relation';
    $cfg['Servers'][$i]['table_info'] = 'pma_table_info';
    $cfg['Servers'][$i]['table_coords'] = 'pma_table_coords';
    $cfg['Servers'][$i]['pdf_pages'] = 'pma_pdf_pages';
    $cfg['Servers'][$i]['column_info'] = 'pma_column_info';
    $cfg['Servers'][$i]['history'] = 'pma_history';
    $cfg['Servers'][$i]['designer_coords'] = 'pma_designer_coords';
    /* Uncomment the following to enable logging in to passwordless accounts,
     * after taking note of the associated security risks. */
    // $cfg['Servers'][$i]['AllowNoPassword'] = TRUE;
    /* Advance to next server for rest of config */
    $i++;
}
PHPMyAdmin uses $cfg['Servers'][$i] so that it can support multiple servers on one installation. Having more than 1 server will give you the option to select a server when you login. After that last } you'll want to add the following code, but of course with your own Amazon RDS instance URL.
$cfg['Servers'][$i]['auth_type'] = 'HTTP';
$cfg['Servers'][$i]['hide_db'] = '(mysql|information_schema|phpmyadmin)';
/* Server parameters */
$cfg['Servers'][$i]['host'] = 'xxxxx.l2kj35ncj3.us-east-1.rds.amazonaws.com';
You're ready to go, simply refresh your PHPMyAdmin page if you're already logged in and you'll see the new server.
Remember, if you have trouble connecting, your IP/Permissions must be wrong!
Labels:
amazon-rds,
phpmyadmin
Thursday, December 16, 2010
How to uninstall from ubuntu
How to uninstall from ubuntu:
apt-get --purge remove <package>
http://www.linuxquestions.org/questions/debian-26/how-do-i-get-apt-get-to-completely-uninstall-a-package-237772
Wednesday, December 15, 2010
Installing LAMP on ubuntu
Installing Apache + PHP
sudo apt-get install apache2 php5 libapache2-mod-php5
sudo /etc/init.d/apache2 restart
Installing MySQL Database Server
apt-get install mysql-server mysql-client php5-mysql
PhpMyAdmin Installation
apt-get install phpmyadmin
The phpmyadmin configuration file is located in the /etc/phpmyadmin folder. To set it up under Apache, all you need to do is include the following line in /etc/apache2/apache2.conf:
Include /etc/phpmyadmin/apache.conf
http://www.howtoforge.com/ubuntu_debian_lamp_server
ps aux | grep apache2 | grep -iv grep
If image functions are not working, check that the php5-gd package is installed; call the phpinfo() function in your PHP code to verify.
To install php5-gd: "sudo apt-get install php5-gd"
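A quick sketch of such a check page (gdtest.php is a hypothetical name; it is written to /tmp here, but in practice it would go into /var/www):

```shell
# Write a small page that reports whether the GD extension is loaded.
cat > /tmp/gdtest.php <<'EOF'
<?php
// Prints bool(true) only when the php5-gd extension is loaded.
var_dump(extension_loaded('gd'));
phpinfo();
EOF
grep extension_loaded /tmp/gdtest.php
```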
In case above link fails:
Build Your Own Debian/Ubuntu LAMP Server - Quick & Easy Do it Yourself Installation
- Apache 2 - Linux Web server
- MySQL 5 - MySQL Database Server
- PHP4/5 - PHP Scripting Language
- phpMyAdmin - Web-based database admin software.
First, let us prepare a system that has at least the minimum requirement of a Debian/Ubuntu version of Linux with at least 256MB of RAM available. Anything less than this minimum RAM will cause a lot of problems, since we are running a server, and MySQL and Webmin in particular require a lot of RAM to run properly. MySQL will give you the nasty error "cannot connect to mysql.sock" if you don't have enough memory in your server.
I love Debian/Ubuntu-based Linux because of my enormous affinity for the apt-get command. Knowing this one command as a starter, it is so easy to install packages, and you don't need to worry about package dependencies and configuration. You need to buy a dedicated server or a VPS package if you want to set up your own server. If you want to experiment with the server and installation, it is recommended to buy a VPS package from one of the various hosts. I prefer vpslink because of their pricing. Believe it or not, it is so easy to install and configure your server yourself, even though you are new to Linux and dedicated/VPS hosting.
First download PuTTY if you are accessing your server through SSH. Just enter the IP of your server with the root login to access your host. As you probably know, Webmin is a freely available server control panel and we will set this up once we have completed the LAMP server and mail server. Webmin makes it easier for us to fine-tune our Linux box.
Before proceeding to install, update the necessary packages with debian with this command.
apt-get update
1. Installing Apache + PHP
Apache is one of the most famous web servers and runs on most Linux-based servers. With just a few commands you can configure Apache to run with PHP 4 or PHP 5. If you want to install PHP 4, just run:
apt-get install apache2 php4 libapache2-mod-php4
To install PHP 5, just run the following on the Linux shell. Note that if you don't specify the packages with '4', PHP 5 will be installed automatically.
apt-get install apache2 php5 libapache2-mod-php5
The Apache configuration file is located at /etc/apache2/apache2.conf and your web folder is /var/www. To check whether PHP is installed and running properly, just create a test.php in your /var/www folder with the phpinfo() function exactly as shown below.
nano /var/www/test.php
# test.php
<?php
phpinfo();
?>
Point your browser to http://ip.address/test.php or http://domain/test.php and this should show all your php configuration and default settings.
You can edit necessary values or setup virtual domains using apache configuration file.
2. Installing MySQL Database Server
Installing the mysql database server is always necessary if you are running a database-driven ecommerce site. Remember, running a mysql server to a fair extent requires at least 256MB of RAM in your server, so unless you are running database-driven sites you don't absolutely need mysql. The following commands will install the mysql 5 server and mysql 5 client.
apt-get install mysql-server mysql-client php5-mysql
Note: If you have already installed php4, you should make a slight change like this:
apt-get install mysql-server mysql-client php4-mysql
The configuration file of mysql is located at /etc/mysql/my.cnf.
Creating users to use MySQL and changing the root password:
By default mysql creates the user root and runs with no password. You might need to change the root password. To change the root password:
mysql -u root
mysql> USE mysql;
mysql> UPDATE user SET Password=PASSWORD('new-password') WHERE user='root';
mysql> FLUSH PRIVILEGES;
You should never use the root password, so you might need to create a user to connect to the mysql database from a PHP script. Alternatively you can add users to the mysql database by using a control panel like Webmin or phpMyAdmin to easily create users or assign database permissions. We will install Webmin and phpmyadmin later, once we complete the basic installation.
3. PhpMyAdmin Installation
PhpMyAdmin is a nice web-based database management and administration tool that is easy to install and configure under Apache. Managing databases and tables couldn't be much simpler than with phpmyadmin. All you need to do is:
apt-get install phpmyadmin
The phpmyadmin configuration file is located in the /etc/phpmyadmin folder. To set it up under Apache, all you need to do is include the following line in /etc/apache2/apache2.conf:
Include /etc/phpmyadmin/apache.conf
Now restart Apache:
/etc/init.d/apache2 restart
Point your browser to: http://domain/phpmyadmin
That's it! MySQL and phpMyAdmin are ready. Log in with your mysql root password and create users to connect to the database from your php script.
This tutorial was written and contributed to HowToForge by Scott who currently runs MySQL-Apache-PHP.com. Permission is fully granted to copy/republish this tutorial in any form, provided a source is mentioned with a live link back to the authors site.
MySQL cluster on ubuntu/debian
http://www.howtoforge.com/loadbalanced_mysql_cluster_debian
To start the management node: ndb_mgmd -f /var/lib/config.ini
If you change config.ini, the command is:
ndb_mgmd -f /var/lib/config.ini --reload
If the IP address changes and you are unable to start the management server (ndb_mgmd), remove the file /usr/local/mysql/mysql-cluster/ndb_1_config.bin.1.
PAGE-5:
The solution is to have a load balancer in front of the MySQL cluster which (as its name suggests) balances the load between the MySQL cluster nodes. The load balancer configures a virtual IP address that is shared between the cluster nodes, and all your applications use this virtual IP address to access the cluster. If one of the nodes fails, then your applications will still work, because the load balancer redirects the requests to the working node.
Now in this scenario the load balancer becomes the bottleneck. What happens if the load balancer fails? Therefore we will configure two load balancers (loadb1.example.com and loadb2.example.com) in an active/passive setup, which means we have one active load balancer, and the other one is a hot-standby that becomes active if the active one fails. Both load balancers use heartbeat to check if the other load balancer is still alive, and both load balancers also use ldirectord, the actual load balancer that splits up the load onto the cluster nodes. heartbeat and ldirectord are provided by the Ultra Monkey package that we will install.
It is important that loadb1.example.com and loadb2.example.com have support for IPVS (IP Virtual Server) in their kernels. IPVS implements transport-layer load balancing inside the Linux kernel.
loadb1.example.com / loadb2.example.com:
In order to load the IPVS kernel modules at boot time, we list the modules in /etc/modules:
loadb1.example.com / loadb2.example.com:
Now we edit /etc/apt/sources.list and add the Ultra Monkey repositories (don't remove the other repositories), and then we install Ultra Monkey:
loadb1.example.com / loadb2.example.com:
you can ignore it.
Answer the following questions:
loadb1.example.com / loadb2.example.com:
loadb1.example.com / loadb2.example.com:
loadb1.example.com / loadb2.example.com:
Please note: you must list the node names (in this case loadb1 and loadb2) as shown by
You must list one of the load balancer node names (here: loadb1) and list the virtual IP address (192.168.0.105) together with the correct netmask (24) and broadcast address (192.168.0.255). If you are unsure about the correct settings, http://www.subnetmask.info/ might help you.
somerandomstring is a password which the two heartbeat daemons on loadb1 and loadb2 use to authenticate against each other. Use your own string here. You have the choice between three authentication mechanisms. I use md5 as it is the most secure one.
/etc/ha.d/authkeys should be readable by root only, therefore we do this:
loadb1.example.com / loadb2.example.com:
loadb1.example.com / loadb2.example.com:
Please fill in the correct virtual IP address (192.168.0.105) and the correct IP addresses of your MySQL cluster nodes (192.168.0.101 and 192.168.0.102). 3306 is the port that MySQL runs on by default. We also specify a MySQL user (ldirector) and password (ldirectorpassword), a database (ldirectordb) and an SQL query. ldirectord uses this information to make test requests to the MySQL cluster nodes to check if they are still available. We are going to create the ldirector database with the ldirector user in the next step.
Now we create the necessary system startup links for heartbeat and remove those of ldirectord (because ldirectord will be started by heartbeat):
loadb1.example.com / loadb2.example.com:
sql1.example.com:
sql1.example.com / sql2.example.com:
sql1.example.com / sql2.example.com:
Add this section for the virtual IP address to /etc/network/interfaces:
sql1.example.com / sql2.example.com:
loadb1.example.com / loadb2.example.com:
loadb1.example.com / loadb2.example.com:
The hot-standby should show this:
loadb1.example.com / loadb2.example.com:
Output on the hot-standby:
loadb1.example.com / loadb2.example.com:
Output on the hot-standby:
loadb1.example.com / loadb2.example.com:
Output on the hot-standby:
If your tests went fine, you can now try to access the MySQL database from a totally different server in the same network (192.168.0.x) using the virtual IP address 192.168.0.105:
You can now switch off one of the MySQL cluster nodes for test purposes; you should then still be able to connect to the MySQL database.
- All data is stored in RAM! Therefore you need lots of RAM on your cluster nodes. The formula for how much RAM you need on each node goes like this:
- The cluster management node listens on port 1186, and anyone can connect. So that's definitely not secure, and therefore you should run your cluster in an isolated private network!
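One way to act on the port-1186 warning (a sketch, assuming iptables is in use; the subnet matches the 192.168.0.x example network of this guide):

```shell
# Accept management-port traffic only from the cluster subnet,
# drop everything else. Adjust the subnet to your own network.
iptables -A INPUT -p tcp --dport 1186 -s 192.168.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 1186 -j DROP
```

Running the cluster on an isolated private network, as the text says, remains the safer option; the firewall rules are only a fallback.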
It's a good idea to have a look at the MySQL Cluster FAQ: http://dev.mysql.com/doc/refman/5.0/en/mysql-cluster-faq.html and also at the MySQL Cluster documentation: http://dev.mysql.com/doc/refman/5.0/en/ndbcluster.html
MySQL Cluster documentation: http://dev.mysql.com/doc/refman/5.0/en/ndbcluster.html
MySQL Cluster FAQ: http://dev.mysql.com/doc/refman/5.0/en/mysql-cluster-faq.html
Ultra Monkey: http://www.ultramonkey.org/
The High-Availability Linux Project: http://www.linux-ha.org/
Link to MySQL cluster software:
wget http://dev.mysql.com/get/Downloads/MySQL-Cluster-7.1/mysql-cluster-gpl-7.1.9a-linux-x86_64-glibc23.tar.gz/from/http://mysql.mirror.rafal.ca/
tar xvfz mysql-cluster-gpl-7.1.9a-linux-x86_64-glibc23.tar.gz
How To Set Up A Load-Balanced MySQL Cluster
Version 1.0 Author: Falko Timme <ft [at] falkotimme [dot] com>
Last edited 03/27/2006
This tutorial shows how to configure a MySQL 5 cluster with three nodes: two storage nodes and one management node. This cluster is load-balanced by a high-availability load balancer that in fact has two nodes that use the Ultra Monkey package which provides heartbeat (for checking if the other node is still alive) and ldirectord (to split up the requests to the nodes of the MySQL cluster).
In this document I use Debian Sarge for all nodes. Therefore the setup might differ a bit for other distributions. The MySQL version I use in this setup is 5.0.19. If you do not want to use MySQL 5, you can use MySQL 4.1 as well, although I haven't tested it.
This howto is meant as a practical guide; it does not cover the theoretical backgrounds. They are treated in a lot of other documents in the web.
This document comes without warranty of any kind! I want to say that this is not the only way of setting up such a system. There are many ways of achieving this goal but this is the way I take. I do not issue any guarantee that this will work for you!
1 My Servers
I use the following Debian servers that are all in the same network (192.168.0.x in this example):
- sql1.example.com: 192.168.0.101 MySQL cluster node 1
- sql2.example.com: 192.168.0.102 MySQL cluster node 2
- loadb1.example.com: 192.168.0.103 Load Balancer 1 / MySQL cluster management server
- loadb2.example.com: 192.168.0.104 Load Balancer 2
Although we want to have two MySQL cluster nodes in our MySQL cluster, we still need a third node, the MySQL cluster management server, for mainly one reason: if one of the two MySQL cluster nodes fails, and the management server is not running, then the data on the two cluster nodes will become inconsistent ("split brain"). We also need it for configuring the MySQL cluster.
So normally we would need five machines for our setup:
2 MySQL cluster nodes + 1 cluster management server + 2 Load Balancers = 5
As the MySQL cluster management server does not use many resources, and the system would just sit there doing nothing, we can put our first load balancer on the same machine, which saves us one machine, so we end up with four machines.
2 Set Up The MySQL Cluster Management Server
First we have to download MySQL 5.0.19 (the max version!) and install the cluster management server (ndb_mgmd) and the cluster management client (ndb_mgm - it can be used to monitor what's going on in the cluster). The following steps are carried out on loadb1.example.com (192.168.0.103):
loadb1.example.com:
mkdir /usr/src/mysql-mgm
cd /usr/src/mysql-mgm
wget http://dev.mysql.com/get/Downloads/MySQL-Cluster-7.1/mysql-cluster-gpl-7.1.9a-linux-x86_64-glibc23.tar.gz/from/http://mysql.mirror.rafal.ca/
tar xvfz mysql-cluster-gpl-7.1.9a-linux-x86_64-glibc23.tar.gz
cd mysql-max-5.0.19-linux-i686-glibc23
mv bin/ndb_mgm /usr/bin
mv bin/ndb_mgmd /usr/bin
chmod 755 /usr/bin/ndb_mg*
cd /usr/src
rm -rf /usr/src/mysql-mgm
loadb1.example.com:
mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
vi config.ini
[NDBD DEFAULT]
NoOfReplicas=2
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]
# Section for the cluster management node
[NDB_MGMD]
# IP address of the management node (this system)
HostName=192.168.0.103
# Section for the storage nodes
[NDBD]
# IP address of the first storage node
HostName=192.168.0.101
DataDir=/var/lib/mysql-cluster
[NDBD]
# IP address of the second storage node
HostName=192.168.0.102
DataDir=/var/lib/mysql-cluster
# one [MYSQLD] per storage node
[MYSQLD]
[MYSQLD]
Then we start the cluster management server:
loadb1.example.com:
ndb_mgmd -f /var/lib/mysql-cluster/config.ini
It makes sense to automatically start the management server at system boot time, so we create a very simple init script and the appropriate startup links:
loadb1.example.com:
echo 'ndb_mgmd -f /var/lib/mysql-cluster/config.ini' > /etc/init.d/ndb_mgmd
chmod 755 /etc/init.d/ndb_mgmd
update-rc.d ndb_mgmd defaults
PAGE-2
3 Set Up The MySQL Cluster Nodes (Storage Nodes)
Now we install mysql-max-5.0.19 on both sql1.example.com and sql2.example.com:
sql1.example.com / sql2.example.com:
groupadd mysql
useradd -g mysql mysql
cd /usr/local/
wget http://dev.mysql.com/get/Downloads/MySQL-5.0/mysql-max-5.0.19-linux-i686-\
glibc23.tar.gz/from/http://www.mirrorservice.org/sites/ftp.mysql.com/
tar xvfz mysql-max-5.0.19-linux-i686-glibc23.tar.gz
ln -s mysql-max-5.0.19-linux-i686-glibc23 mysql
cd mysql
scripts/mysql_install_db --user=mysql
chown -R root:mysql .
chown -R mysql data
cp support-files/mysql.server /etc/init.d/
chmod 755 /etc/init.d/mysql.server
update-rc.d mysql.server defaults
cd /usr/local/mysql/bin
mv * /usr/bin
cd ../
rm -fr /usr/local/mysql/bin
ln -s /usr/bin /usr/local/mysql/bin
Then we create the MySQL configuration file /etc/my.cnf on both nodes:
sql1.example.com / sql2.example.com:
vi /etc/my.cnf
[mysqld]
ndbcluster
# IP address of the cluster management node
ndb-connectstring=192.168.0.103
[mysql_cluster]
# IP address of the cluster management node
ndb-connectstring=192.168.0.103
Next we create the data directories and start the MySQL server on both cluster nodes:
sql1.example.com / sql2.example.com:
mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
ndbd --initial
/etc/init.d/mysql.server start
(Please note: we have to run ndbd --initial only when we start MySQL for the first time, or if /var/lib/mysql-cluster/config.ini on loadb1.example.com changes.)
Now is a good time to set a password for the MySQL root user:
sql1.example.com / sql2.example.com:
mysqladmin -u root password yourrootsqlpassword
We want to start the cluster nodes at boot time, so we create an ndbd init script and the appropriate system startup links:
sql1.example.com / sql2.example.com:
echo 'ndbd' > /etc/init.d/ndbd
chmod 755 /etc/init.d/ndbd
update-rc.d ndbd defaults
4 Test The MySQL Cluster
Our MySQL cluster configuration is already finished, so now it's time to test it. On the cluster management server (loadb1.example.com), run the cluster management client ndb_mgm to check if the cluster nodes are connected:
loadb1.example.com:
ndb_mgm
You should see this:
-- NDB Cluster -- Management Client --
ndb_mgm>
show;
The output should look like this:
ndb_mgm> show;
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.0.101  (Version: 5.0.19, Nodegroup: 0, Master)
id=3    @192.168.0.102  (Version: 5.0.19, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.0.103  (Version: 5.0.19)

[mysqld(API)]   2 node(s)
id=4    @192.168.0.101  (Version: 5.0.19)
id=5    @192.168.0.102  (Version: 5.0.19)

ndb_mgm>
Type
quit;
to leave the ndb_mgm client console. Now we create a test database with a test table and some data on sql1.example.com:
sql1.example.com:
mysql -u root -p
CREATE DATABASE mysqlclustertest;
USE mysqlclustertest;
CREATE TABLE testtable (i INT) ENGINE=NDBCLUSTER;
INSERT INTO testtable () VALUES (1);
SELECT * FROM testtable;
quit;
(Have a look at the CREATE statement: we must use ENGINE=NDBCLUSTER for all database tables that we want to have clustered! If you use another engine, clustering will not work!)
The result of the SELECT statement should be:
mysql> SELECT * FROM testtable;
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.03 sec)
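The interactive statements above can also be run non-interactively. A small sketch (the statements are exactly those from the test; the script only prints them, so you can pipe its output into `mysql -u root -p` on sql1.example.com):

```shell
# Print the test statements from the tutorial; pipe the output into
# `mysql -u root -p` to run the whole test in one go.
cat <<'EOF'
CREATE DATABASE mysqlclustertest;
USE mysqlclustertest;
CREATE TABLE testtable (i INT) ENGINE=NDBCLUSTER;
INSERT INTO testtable () VALUES (1);
SELECT * FROM testtable;
EOF
```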
sql2.example.com:
mysql -u root -p
CREATE DATABASE mysqlclustertest;
USE mysqlclustertest;
SELECT * FROM testtable;
The SELECT statement should deliver the same result as before on sql1.example.com:
mysql> SELECT * FROM testtable;
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.04 sec)
sql2.example.com:
INSERT INTO testtable () VALUES (2);
quit;
Now let's go back to sql1.example.com and check if we see the new row there:
sql1.example.com:
mysql -u root -p
USE mysqlclustertest;
SELECT * FROM testtable;
quit;
You should see something like this:
mysql> SELECT * FROM testtable;
+------+
| i    |
+------+
|    1 |
|    2 |
+------+
2 rows in set (0.05 sec)
Now let's see what happens if we stop node 1 (sql1.example.com): Run
sql1.example.com:
killall ndbd
and check with
ps aux | grep ndbd | grep -iv grep
that all ndbd processes have terminated. If you still see ndbd processes, run another
killall ndbd
until all ndbd processes are gone. Now let's check the cluster status on our management server (loadb1.example.com):
loadb1.example.com:
ndb_mgm
On the ndb_mgm console, issue
show;
and you should see this:
ndb_mgm> show;
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2 (not connected, accepting connect from 192.168.0.101)
id=3    @192.168.0.102  (Version: 5.0.19, Nodegroup: 0, Master)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.0.103  (Version: 5.0.19)

[mysqld(API)]   2 node(s)
id=4    @192.168.0.101  (Version: 5.0.19)
id=5    @192.168.0.102  (Version: 5.0.19)

ndb_mgm>
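The "(not connected, ...)" marker in this output is easy to check for from a script. A hedged monitoring sketch: it runs against a captured sample of the output rather than a live cluster; on loadb1.example.com you would pipe `ndb_mgm -e show` instead of the sample.

```shell
# Count data nodes reported as disconnected. Here applied to a captured
# sample of the `show` output; on the management server, replace the
# printf with: ndb_mgm -e show
sample='id=2 (not connected, accepting connect from 192.168.0.101)
id=3    @192.168.0.102  (Version: 5.0.19, Nodegroup: 0, Master)'
printf '%s\n' "$sample" | grep -c 'not connected'
```

A nonzero count means at least one data node is down.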
Type
quit;
to leave the ndb_mgm console. Let's check sql2.example.com:
sql2.example.com:
mysql -u root -p
USE mysqlclustertest;
SELECT * FROM testtable;
quit;
The result of the SELECT query should still be:
mysql> SELECT * FROM testtable;
+------+
| i    |
+------+
|    1 |
|    2 |
+------+
2 rows in set (0.17 sec)
Now we start ndbd again on sql1.example.com:
sql1.example.com:
ndbd
5 How To Restart The Cluster
Now let's assume you want to restart the MySQL cluster, for example because you have changed /var/lib/mysql-cluster/config.ini on loadb1.example.com, or for some other reason. To do this, use the ndb_mgm cluster management client on loadb1.example.com:
loadb1.example.com:
ndb_mgm
On the ndb_mgm console, type
shutdown;
You will then see something like this:
ndb_mgm> shutdown;
Node 3: Cluster shutdown initiated
Node 2: Node shutdown completed.
2 NDB Cluster node(s) have shutdown.
NDB Cluster management server shutdown.
ndb_mgm>
Run
quit;
to leave the ndb_mgm console. To start the cluster management server, do this on loadb1.example.com:
loadb1.example.com:
ndb_mgmd -f /var/lib/mysql-cluster/config.ini
and on sql1.example.com and sql2.example.com run
sql1.example.com / sql2.example.com:
ndbd
or, if you have changed /var/lib/mysql-cluster/config.ini on loadb1.example.com:
ndbd --initial
Afterwards, you can check on loadb1.example.com if the cluster has restarted:
loadb1.example.com:
ndb_mgm
On the ndb_mgm console, type show;
to see the current status of the cluster. It might take a few seconds after a restart until all nodes are reported as connected. Type
quit;
to leave the ndb_mgm console.
6 Configure The Load Balancers
Our MySQL cluster is now finished, and you could start using it. However, we don't have a single IP address for accessing the cluster, which means you must configure your applications so that part of them uses MySQL cluster node 1 (sql1.example.com) and the rest uses the other node (sql2.example.com). Of course, all your applications could use just one node, but then what is the point of having a cluster if you do not split the load between the nodes? Another problem: what happens if one of the cluster nodes fails? Then the applications that use it stop working. The solution is to put a load balancer in front of the MySQL cluster which (as its name suggests) balances the load between the MySQL cluster nodes. The load balancer configures a virtual IP address that is shared between the cluster nodes, and all your applications use this virtual IP address to access the cluster. If one of the nodes fails, your applications will still work, because the load balancer redirects requests to the working node.
In this scenario the load balancer itself becomes a single point of failure: what happens if it fails? Therefore we will configure two load balancers (loadb1.example.com and loadb2.example.com) in an active/passive setup: one load balancer is active, and the other is a hot standby that becomes active if the first one fails. Both load balancers use heartbeat to check if the other is still alive, and both also run ldirectord, the actual load balancer that splits up the load between the cluster nodes. heartbeat and ldirectord are provided by the Ultra Monkey package that we will install.
It is important that loadb1.example.com and loadb2.example.com have support for IPVS (IP Virtual Server) in their kernels. IPVS implements transport-layer load balancing inside the Linux kernel.
6.1 Install Ultra Monkey
OK, let's start: first we enable IPVS on loadb1.example.com and loadb2.example.com:
loadb1.example.com / loadb2.example.com:
modprobe ip_vs_dh
modprobe ip_vs_ftp
modprobe ip_vs
modprobe ip_vs_lblc
modprobe ip_vs_lblcr
modprobe ip_vs_lc
modprobe ip_vs_nq
modprobe ip_vs_rr
modprobe ip_vs_sed
modprobe ip_vs_sh
modprobe ip_vs_wlc
modprobe ip_vs_wrr
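The twelve modprobe calls above can also be generated with a loop. This sketch only prints the commands so they can be reviewed; pipe the output into `sh` as root on the load balancers to actually load the modules:

```shell
# Print the modprobe commands for the IPVS core and scheduler/helper
# modules listed in the tutorial; pipe into `sh` (as root) to run them.
for m in ip_vs_dh ip_vs_ftp ip_vs ip_vs_lblc ip_vs_lblcr ip_vs_lc \
         ip_vs_nq ip_vs_rr ip_vs_sed ip_vs_sh ip_vs_wlc ip_vs_wrr; do
  echo "modprobe $m"
done
```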
To load the IPVS kernel modules at boot time, we list them in /etc/modules:
loadb1.example.com / loadb2.example.com:
vi /etc/modules
ip_vs_dh
ip_vs_ftp
ip_vs
ip_vs_lblc
ip_vs_lblcr
ip_vs_lc
ip_vs_nq
ip_vs_rr
ip_vs_sed
ip_vs_sh
ip_vs_wlc
ip_vs_wrr
Now we add the Ultra Monkey repositories to /etc/apt/sources.list:
loadb1.example.com / loadb2.example.com:
vi /etc/apt/sources.list
deb http://www.ultramonkey.org/download/3/ sarge main
deb-src http://www.ultramonkey.org/download/3 sarge main
apt-get update
apt-get install ultramonkey libdbi-perl libdbd-mysql-perl libmysqlclient14-dev
Now Ultra Monkey is being installed. If you see this warning:
libsensors3 not functional

It appears that your kernel is not compiled with sensors support. As a
result, libsensors3 will not be functional on your system.

If you want to enable it, have a look at "I2C Hardware Sensors Chip
support" in your kernel configuration.

you can ignore it.
Answer the following questions:
Do you want to automatically load IPVS rules on boot?
<-- No
Select a daemon method.
<-- none
The libdbd-mysql-perl package we've just installed does not work with MySQL 5 (which we use on our MySQL cluster), so we install the newest DBD::mysql Perl package:
loadb1.example.com / loadb2.example.com:
cd /tmp
wget http://search.cpan.org/CPAN/authors/id/C/CA/CAPTTOFU/DBD-mysql-3.0002.tar.gz
tar xvfz DBD-mysql-3.0002.tar.gz
cd DBD-mysql-3.0002
perl Makefile.PL
make
make install
We must enable packet forwarding:
loadb1.example.com / loadb2.example.com:
vi /etc/sysctl.conf
# Enables packet forwarding
net.ipv4.ip_forward = 1
sysctl -p
6.2 Configure heartbeat
Next we configure heartbeat by creating three files (all three files must be identical on loadb1.example.com and loadb2.example.com):
loadb1.example.com / loadb2.example.com:
vi /etc/ha.d/ha.cf
logfacility     local0
bcast   eth0
mcast eth0 225.0.0.1 694 1 0
auto_failback off
node    loadb1
node    loadb2
respawn hacluster /usr/lib/heartbeat/ipfail
apiauth ipfail gid=haclient uid=hacluster
Important: as node names we must use the output of
uname -n
on loadb1.example.com and loadb2.example.com (here: loadb1 and loadb2). Other than that, you don't have to change anything in the file.
vi /etc/ha.d/haresources
loadb1 \
        ldirectord::ldirectord.cf \
        LVSSyncDaemonSwap::master \
        IPaddr2::192.168.0.105/24/eth0/192.168.0.255
vi /etc/ha.d/authkeys
auth 3
3 md5 somerandomstring
loadb1.example.com / loadb2.example.com:
chmod 600 /etc/ha.d/authkeys
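The "somerandomstring" in authkeys is a shared secret; any random string works. One common way to generate it (a sketch, assuming md5sum from coreutils is available, which it is on Ubuntu/Debian):

```shell
# Generate a random md5 secret for /etc/ha.d/authkeys; use the same
# value on both load balancers.
secret=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | md5sum | cut -d' ' -f1)
printf 'auth 3\n3 md5 %s\n' "$secret"
```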
6.3 Configure ldirectord
Now we create the configuration file for ldirectord, the load balancer:
loadb1.example.com / loadb2.example.com:
vi /etc/ha.d/ldirectord.cf
# Global Directives
checktimeout=10
checkinterval=2
autoreload=no
logfile="local0"
quiescent=yes

virtual = 192.168.0.105:3306
        service = mysql
        real = 192.168.0.101:3306 gate
        real = 192.168.0.102:3306 gate
        checktype = negotiate
        login = "ldirector"
        passwd = "ldirectorpassword"
        database = "ldirectordb"
        request = "SELECT * FROM connectioncheck"
        scheduler = wrr
Now we create the necessary system startup links for heartbeat and remove those of ldirectord (because ldirectord will be started by heartbeat):
loadb1.example.com / loadb2.example.com:
update-rc.d -f heartbeat remove
update-rc.d heartbeat start 75 2 3 4 5 . stop 05 0 1 6 .
update-rc.d -f ldirectord remove
6.4 Create A Database Called ldirector
Next we create the ldirector database on our MySQL cluster nodes sql1.example.com and sql2.example.com. This database will be used by our load balancers to check the availability of the MySQL cluster nodes.
sql1.example.com:
mysql -u root -p
GRANT ALL ON ldirectordb.* TO 'ldirector'@'%' IDENTIFIED BY 'ldirectorpassword';
FLUSH PRIVILEGES;
CREATE DATABASE ldirectordb;
USE ldirectordb;
CREATE TABLE connectioncheck (i INT) ENGINE=NDBCLUSTER;
INSERT INTO connectioncheck () VALUES (1);
quit;
sql2.example.com:
mysql -u root -p
GRANT ALL ON ldirectordb.* TO 'ldirector'@'%' IDENTIFIED BY 'ldirectorpassword';
FLUSH PRIVILEGES;
CREATE DATABASE ldirectordb;
quit;
6.5 Prepare The MySQL Cluster Nodes For Load Balancing
Finally we must configure our MySQL cluster nodes sql1.example.com and sql2.example.com to accept requests on the virtual IP address 192.168.0.105.
sql1.example.com / sql2.example.com:
apt-get install iproute
Add the following to /etc/sysctl.conf:
sql1.example.com / sql2.example.com:
vi /etc/sysctl.conf
# Enable configuration of arp_ignore option
net.ipv4.conf.all.arp_ignore = 1

# When an ARP request is received on eth0, only respond if that address is
# configured on eth0. In particular, do not respond if the address is
# configured on lo
net.ipv4.conf.eth0.arp_ignore = 1

# Ditto for eth1, add for all ARPing interfaces
#net.ipv4.conf.eth1.arp_ignore = 1

# Enable configuration of arp_announce option
net.ipv4.conf.all.arp_announce = 2

# When making an ARP request sent through eth0, always use an address that
# is configured on eth0 as the source address of the ARP request. If this
# is not set, and packets are being sent out eth0 for an address that is on
# lo, and an ARP request is required, then the address on lo will be used.
# As the source IP address of ARP requests is entered into the ARP cache on
# the destination, it has the effect of announcing this address. This is
# not desirable in this case as addresses on lo on the real servers should
# be announced only by the linux-director.
net.ipv4.conf.eth0.arp_announce = 2

# Ditto for eth1, add for all ARPing interfaces
#net.ipv4.conf.eth1.arp_announce = 2
sysctl -p
Then we add this section for the virtual IP address to /etc/network/interfaces:
sql1.example.com / sql2.example.com:
vi /etc/network/interfaces
auto lo:0
iface lo:0 inet static
  address 192.168.0.105
  netmask 255.255.255.255
  pre-up sysctl -p > /dev/null
ifup lo:0
7 Start The Load Balancer And Do Some Testing
Now we can start our two load balancers for the first time:
loadb1.example.com / loadb2.example.com:
/etc/init.d/ldirectord stop
/etc/init.d/heartbeat start
If you don't see errors, you should now reboot both load balancers:
loadb1.example.com / loadb2.example.com:
shutdown -r now
After the reboot we can check if both load balancers work as expected:
loadb1.example.com / loadb2.example.com:
ip addr sh eth0
The active load balancer should list the virtual IP address (192.168.0.105):

2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:16:3e:45:fc:f8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.103/24 brd 192.168.0.255 scope global eth0
    inet 192.168.0.105/24 brd 192.168.0.255 scope global secondary eth0

The hot-standby load balancer should show this:

2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:16:3e:16:c1:4e brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.104/24 brd 192.168.0.255 scope global eth0
ldirectord ldirectord.cf status
Output on the active load balancer:
ldirectord for /etc/ha.d/ldirectord.cf is running with pid: 1603
Output on the hot-standby load balancer:
ldirectord is stopped for /etc/ha.d/ldirectord.cf
ipvsadm -L -n
Output on the active load balancer:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.105:3306 wrr
  -> 192.168.0.101:3306           Route   1      0          0
  -> 192.168.0.102:3306           Route   1      0          0
Output on the hot-standby load balancer:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
/etc/ha.d/resource.d/LVSSyncDaemonSwap master status
Output on the active load balancer:
master running (ipvs_syncmaster pid: 1766)
Output on the hot-standby load balancer:
master stopped (ipvs_syncbackup pid: 1440)
You can now try to connect to the MySQL cluster through the virtual IP address:
mysql -h 192.168.0.105 -u ldirector -p
(Please note: your MySQL client must at least be of version 4.1; older versions do not work with MySQL 5.) You can now switch off one of the MySQL cluster nodes for test purposes; you should then still be able to connect to the MySQL database.
8 Annotations
There are some important things to keep in mind when running a MySQL cluster:
- All data is stored in RAM! Therefore you need lots of RAM on your cluster nodes. The formula for how much RAM you need on each node goes like this:
(SizeofDatabase × NumberOfReplicas × 1.1 ) / NumberOfDataNodes
So if you have a database that is 1 GB in size, you would need 1.1 GB of RAM on each node!
- The cluster management node listens on port 1186, and anyone can connect. That is definitely not secure, so you should run your cluster on an isolated private network!
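The RAM formula above, worked through with awk for the 1 GB example (two replicas, two data nodes, as in this tutorial):

```shell
# (SizeofDatabase * NumberOfReplicas * 1.1) / NumberOfDataNodes
awk 'BEGIN {
  size_gb = 1; replicas = 2; data_nodes = 2
  printf "%.2f GB of RAM per data node\n", (size_gb * replicas * 1.1) / data_nodes
}'
# prints: 1.10 GB of RAM per data node
```

Adjust size_gb, replicas, and data_nodes to match your own cluster.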
It's a good idea to have a look at the MySQL Cluster FAQ: http://dev.mysql.com/doc/refman/5.0/en/mysql-cluster-faq.html and also at the MySQL Cluster documentation: http://dev.mysql.com/doc/refman/5.0/en/ndbcluster.html
Links
MySQL: http://www.mysql.com/
MySQL Cluster documentation: http://dev.mysql.com/doc/refman/5.0/en/ndbcluster.html
MySQL Cluster FAQ: http://dev.mysql.com/doc/refman/5.0/en/mysql-cluster-faq.html
Ultra Monkey: http://www.ultramonkey.org/
The High-Availability Linux Project: http://www.linux-ha.org/
Monday, December 13, 2010
amazon webservice setup
1. create account with AWS (Amazon web service)
2. choose OS instance
ubuntu server images: http://uec-images.ubuntu.com/releases/10.04/release/
3. Open SSH port in AWS security group
4. use PuttyGEN to convert pem key to ppk.
4. configure putty with private key.
reference:
http://it.toolbox.com/blogs/managing-infosec/connecting-to-amazon-aws-from-windows-to-a-linux-ami-30656
AFTER the Linux box is ready:
1. apt-get upgrade
2. apt-get install build-essential
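As an aside: from a Linux or macOS client the PuTTY/PuTTYgen steps are unnecessary, since the downloaded .pem key works directly with OpenSSH (Ubuntu AMIs log in as the "ubuntu" user). A sketch with hypothetical file and host names:

```shell
# Hypothetical key file and EC2 hostname -- substitute your own.
key=mykey.pem
host=ec2-203-0-113-1.compute-1.amazonaws.com
printf 'ssh -i %s ubuntu@%s\n' "$key" "$host"
```

Remember to `chmod 600` the .pem file first, or OpenSSH will refuse to use it.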
Monday, October 18, 2010
fstab mount error at boot in Ubuntu
If Ubuntu is unable to boot because of a wrong fstab entry (e.g. a USB mount), press "S" while booting (after the boot gets stuck). This skips the failed mount and allows booting to continue. (This may work for other *nix flavors too, but I'm not sure.)
Tuesday, October 12, 2010
mounting freeNAS NFS on ubuntu
http://hardforum.com/showthread.php?t=1262438
apt-get install nfs-common on client system
mount -t nfs <192.168..>:/mnt/folder /mnt/folder
automatic mount on boot:
open /etc/fstab and entry:
<ip>:/mnt/<folder> <localfolder> nfs defaults 0 0
NOTE: in NFS server edit /etc/exports as below
/mnt/<folder name> -alldirs -mapall=root -network 192.168.5.0 -mask 255.255.255.0
make sure the authorised network is the same as your network range; e.g. 192.168.5.0
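A tiny sketch of the kind of check meant above, using a hypothetical client IP against the 192.168.5.0/255.255.255.0 example range (for a /24 mask a simple prefix comparison is enough):

```shell
# Hypothetical client IP; confirm it falls inside the exported /24 range
# before digging into NFS permission errors.
ip=192.168.5.42
case "$ip" in
  192.168.5.*) echo "client is in the exported network range" ;;
  *)           echo "client is NOT in the exported network range" ;;
esac
# prints: client is in the exported network range
```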
mounting SSH file system:
http://www.ubuntugeek.com/mount-a-remote-folder-using-ssh-on-ubuntu.html
http://www.neowin.net/forum/topic/574196-setup-a-simple-nas-device/
https://help.ubuntu.com/community/SettingUpNFSHowTo
Saturday, October 9, 2010
FreeNAS setup
http://dailycupoftech.com/howto-install-freenas/
http://www.dailycupoftech.com/freenas-system-and-skill-requirements/
http://dailycupoftech.com/freenas-basic-configuration/
http://www.dailycupoftech.com/2006/11/27/freenas-week-configuring-disks/
http://www.dailycupoftech.com/configuring-disks-in-freenas/
http://www.dailycupoftech.com/2006/11/28/freenas-week-windows-shares/
http://www.dailycupoftech.com/creating-windows-shares-on-freenas/
Mounting NAS on Linux:
http://forums.fedoraforum.org/showthread.php?t=191084
Virtual hosts with apache server- windows & linux
Linux (Ubuntu) :
http://ubuntu-tutorials.com/2008/01/09/setting-up-name-based-virtual-hosting/
https://help.ubuntu.com/8.04/serverguide/C/httpd.html
Windows:
Virtual host configuration with WAMP server
http://guides.jlbn.net/setvh/setvh3.html
http://www.thewebhostinghero.com/tutorials/apache-alias.html
Friday, October 8, 2010
pure-ftpd mysql on ubuntu
NOTE: make sure PHP5 and MySQL server are installed, and also php5-mysql.
1. apt-get install pure-ftpd-mysql
2. Install User Manager: This is Optional
- Download using command line in Linux: wget http://machiel.generaal.net/files/pureftpd/ftp_v2.1.tar.gz
- untar command : tar -xvf ftp_v2.1.tar.gz
- User Manager for Pure-ftpd - http://machiel.generaal.net/index.php?subject=user_manager_pureftpd
reference: Instructions in this link is simple and works perfectly.
http://www.howtoforge.com/virtual-hosting-with-pureftpd-mysql-on-ubuntu-8.10
e. Linux Commands reference: http://www.pixelbeat.org/cmdline.html
http://www.howtoforge.com/virtual-hosting-with-pureftpd-and-mysql-ubuntu-7.10
After installation:
1. Make sure to back up the existing mysql.conf to mysql_orig.conf.
2. Copy the content of pureftpd-mysql.conf to a file mysql.conf in /etc/pure-ftpd/db.
3. Restart pure-ftpd: /etc/init.d/pure-ftpd-mysql restart
4. Check that the FTP port (21) is open on the server.
It is OK to have the FTP server on one box and MySQL on another. In this scenario, the mysql.conf file needs to be updated with the IP address of the MySQL server.
to restrict user in the home directory:
echo "yes" > /etc/pure-ftpd/conf/ChrootEveryone
Monday, October 4, 2010
SMTP EMail for PHP On Ubuntu
Installing the PEAR packages for PHP:
sudo apt-get install php-pear
sudo pear install mail
sudo pear install Net_SMTP
sudo pear install Auth_SASL
sudo pear install mail_mime
start/stop/restart apache web server:
/etc/init.d/apache2 restart
/etc/init.d/apache2 start
/etc/init.d/apache2 stop