HostwareSupport - Linux Hosting technical support for cPanel, Plesk, DirectAdmin and no-control-panel servers
http://hostwaresupport.com/

Dec 9, 2011

phpBB SQL Error - 'phpbb_sessions' is marked as crashed and should be repaired

General error
SQL ERROR [ mysqli ]


Table './mysql/phpbb_login_attempts' is marked as crashed and should be repaired [145]
**********************************************************************

Recently, my phpBB forum encountered a SQL error, shown below:
'phpbb_sessions' is marked as crashed and should be repaired [145]
The first thing that I did was to access phpMyAdmin to try and repair the table with the SQL command:
REPAIR TABLE `phpbb_sessions`;
Unfortunately, that did not work. I did some research online and finally managed to resolve the issue. It takes a bit of tweaking, but it's a fairly simple procedure.
Solution to resolve the phpBB error
If you have access to the phpMyAdmin application under cPanel (from your web hosting service provider), it makes things a lot simpler.
  1. Under your control panel, click on the phpMyAdmin icon.
  2. Select your phpBB database, and then click on the "xxx_sessions" table, where "xxx" represents your phpBB table prefix (in my example, the prefix is phpbb).
php Sessions

  3. Next, click on SQL, located at the top of the phpMyAdmin application. This will open up the dialog box.
phpMyAdmin SQL

  4. In the dialog box (shown below), type in the following SQL command.
    SQL Dialog box
    SQL Code Below
    DROP TABLE IF EXISTS phpbb_sessions;
    CREATE TABLE phpbb_sessions (
    session_id binary(32) DEFAULT '' NOT NULL,
    session_user_id mediumint(8) UNSIGNED DEFAULT '0' NOT NULL,
    session_forum_id mediumint(8) UNSIGNED DEFAULT '0' NOT NULL,
    session_last_visit int(11) UNSIGNED DEFAULT '0' NOT NULL,
    session_start int(11) UNSIGNED DEFAULT '0' NOT NULL,
    session_time int(11) UNSIGNED DEFAULT '0' NOT NULL,
    session_ip varbinary(40) DEFAULT '' NOT NULL,
    session_browser varbinary(150) DEFAULT '' NOT NULL,
    session_forwarded_for varbinary(255) DEFAULT '' NOT NULL,
    session_page blob NOT NULL,
    session_viewonline tinyint(1) UNSIGNED DEFAULT '1' NOT NULL,
    session_autologin tinyint(1) UNSIGNED DEFAULT '0' NOT NULL,
    session_admin tinyint(1) UNSIGNED DEFAULT '0' NOT NULL,
    PRIMARY KEY (session_id),
    KEY session_time (session_time),
    KEY session_user_id (session_user_id),
    KEY session_fid (session_forum_id)
    );
  5. The above code deletes the 'phpbb_sessions' table and re-creates a fresh, empty one.
After you have completed the above procedure, you should no longer see the error on your phpBB forum.
Fixed..!
***************************************************************************************************************************

Sep 18, 2011

How to backup your Mysql database with phpMyAdmin



Introduction
It is very important to back up your MySQL database; you will probably only realize this when it is too late.
A lot of web applications use MySQL for storing their content, blogs among many others. When all your content is in HTML files on your web server, it is easy to keep it safe from crashes: you just keep a copy on your own PC and upload it again after the web server is restored. The content in the MySQL database must also be backed up. A lot of web service providers say they back up all the files, but you should never blindly trust them. If you have spent a lot of time creating content that is stored only in the MySQL server, you will feel very bad if it is lost forever. Backing it up once a month or so makes sure you never lose too much of your work in case of a server crash, and it will make you sleep better at night. It is easy and fast, so there is no reason not to do it.

Backup of Mysql database
It is assumed that you have phpMyAdmin installed since a lot of web service providers use it.

1. Open phpMyAdmin.
2. Click Export in the menu to get to where you can back up your MySQL database.

3. Make sure that you have selected to export your entire database, and not just one table. There should be as many tables in the export list as are shown under the database name.

4. Select "SQL" as the output format. Check "Structure" and "Add AUTO_INCREMENT value". Check "Enclose table and field names with backquotes". Check "Data", and check "Use hexadecimal for binary fields". Set export type to "INSERT".
5. Check "Save as file", do not change the file name, and use compression if you want. Then click "Go" to download the backup file.
========================================================================
Restoring a backup of a MySql database
1. To restore a database, click the SQL tab.
2. On the "SQL" page, uncheck the "Show this query here again" option.
3. Browse to your backup of the database.
4. Click Go.
========================================================================

Without phpMyAdmin

phpMyAdmin has some file size limits, so if you have large databases it may not be possible to back them up using phpMyAdmin. In that case you have to use the command line tools that come with MySQL. Please note that this method is untested.

MySQL backup without phpMyAdmin
1. Change your directory to the directory you want to dump things to:
root@tune:~> cd files/blog
2. Use mysqldump (man mysqldump is available):
root@tune:~/files/blog> mysqldump --add-drop-table -h mysqlhostserver -u mysqlusername -p databasename (tablename tablename tablename) | bzip2 -c > blog.bak.sql.bz2
Enter password: (enter your mysql password)

Example:
mysqldump --add-drop-table -h db01.example.net -u dbocodex -p dbwp | bzip2 -c > blog.bak.sql.bz2
Enter password: my-password
The bzip2 -c after the pipe | means the backup is compressed on the fly.
Mysql restore without phpMyAdmin
The restore process consists of unarchiving your archived database dump, and importing it into your Mysql database.
Assuming your backup is a .bz2 file, created using instructions similar to those given above for backing up your database using MySQL commands, the following steps will guide you through restoring your database:
1. Unzip your .bz2 file:
root@tune:~/files/blog> bzip2 -d blog.bak.sql.bz2
Note: If your database backup was a .tar.gz called blog.bak.sql.tar.gz file, then,
tar zxvf blog.bak.sql.tar.gz
is the command that should be used instead of the above.

2. Put the backed-up SQL back into MySQL:
root@tune:~/files/blog> mysql -h mysqlhostserver -u mysqlusername -p databasename < blog.bak.sql
Enter password: (enter your mysql password)
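As a sanity check of the compress-on-the-fly pipe and the restore redirect, here is a toy version of the same pipeline; it uses gzip and a dummy dump file in place of a live mysqldump (all file names here are made up for the demonstration):

```shell
# work in a scratch directory
cd "$(mktemp -d)"

# dummy "dump" standing in for the mysqldump output
printf 'CREATE TABLE t (id INT);\n' > blog.bak.sql

# compress on the fly, same shape as the bzip2 pipe above
cat blog.bak.sql | gzip -c > blog.bak.sql.gz

# restore path: decompress; on a real server this output
# would be piped into the mysql client instead of a file
gzip -dc blog.bak.sql.gz > restored.sql

# the restored dump matches the original
cmp blog.bak.sql restored.sql && echo "roundtrip OK"
```

The same roundtrip holds with bzip2/bunzip2; only the suffix changes.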
========================================================================

Table mysql.servers doesn’t exist: Problem adding a database user in plesk Or restarting mysql

You may receive a "Table 'mysql.servers' doesn't exist" error message while adding a database user in Plesk, or while restarting the MySQL service. The complete error message looks like:

Error: Connection to the database server has failed: Table 'mysql.servers' doesn't exist

OR

Can't open and lock privilege tables: Table 'mysql.servers' doesn't exist

The problem mostly occurs when the MySQL server is upgraded from an older to a newer version and the upgrade remains incomplete. Since MySQL often introduces new system tables in newer versions, you need to run the "mysql_fix_privilege_tables" script located in the "/usr/bin/" directory, so that the mysql database is updated with the latest contents; this fixes the privileges of the database users as well.

To fix the issue, ssh to your server as root and execute the command:

On a plain Linux OR Linux/cPanel server:

# mysql_fix_privilege_tables --user=root --password= --verbose

On a Linux/Plesk server:

# mysql_fix_privilege_tables --user=admin --password=`cat /etc/psa/.psa.shadow` --verbose

BTW, on a Linux/Plesk server, you may see the following error message sometimes:

Got a failure from command:
cat /usr/share/mysql/mysql_fix_privilege_tables.sql | /usr/bin/mysql --no-defaults --force --user=root --host=localhost --database=mysql
If you get an 'Access denied' error, you should run this script again and
give the MySQL root user password as an argument with the --password= option.

In such a situation, use user/password argument with privilege command as follows:

# mysql_fix_privilege_tables --user=admin --password=`cat /etc/psa/.psa.shadow`  --verbose

where --verbose displays the detailed output.
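The backticks around cat /etc/psa/.psa.shadow are ordinary shell command substitution: the file's contents become the value of --password=. A toy illustration using a dummy password file (the /tmp path and password are invented; on a real Plesk server the file is /etc/psa/.psa.shadow):

```shell
# dummy stand-in for /etc/psa/.psa.shadow
printf 'S3cretAdminPw' > /tmp/psa.shadow.demo

# the command substitution expands to the file contents,
# exactly as in --password=`cat /etc/psa/.psa.shadow`
opt="--password=$(cat /tmp/psa.shadow.demo)"
echo "$opt"
```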

---------------------------------------------------------------------------------------------------------------------------------------------------------------------



Sep 16, 2011

Different Qmail queue tricks in Linux plesk


Following are some of the qmail commands for Plesk server.

1) To check the mail queue in plesk from command line, you can use the command :

[root@vps init.d]# /var/qmail/bin/qmail-qstat
messages in queue: 1
messages in queue but not yet preprocessed: 0
[root@vps init.d]#

2) You can examine the queue with qmail-qread.

[root@vps init.d]# /var/qmail/bin/qmail-qread
16 Sep 2011 06:14:22 GMT #218926412 299 anonymous@vps.test.com
remote dave@davidlbird.com
[root@vps init.d]#
3) From the qmail-qread output you get the message's id. In the above example, the id is 218926412. Now you can find the files holding the email in /var/qmail/queue with the "find" command.


[root@vps init.d]# find /var/qmail/queue -iname 218926412

/var/qmail/queue/mess/15/218926412
/var/qmail/queue/remote/15/218926412
/var/qmail/queue/info/15/218926412

4) You can view the message itself with an editor:

[root@vps init.d]# vi /var/qmail/queue/mess/15/218926412

Received: (qmail 9264 invoked by uid 10000); 16 Sep 2011 06:14:22 -0700
Date: 16 Sep 2011 06:14:22 -0700
Message-ID: <20110916131422.9260.qmail@vps.pcbuniverse.com>
To: Dave@DavidLBird.com
Subject: the subject
From: tchase@pcbuniverse.com
Reply-To: tchase@pcbuniverse.com
X-Mailer: PHP/5.1.6
hello
~


5) If you wish to remove the emails with some patterns , you can use qmail-remove ( You can download it from http://www.linuxmagic.com/opensource/qmail/qmail-remove )

# /etc/init.d/qmail stop (Stop qmail before removing)
# /var/qmail/bin/qmail-remove -r -p “Time Passing”
(considering that “Time Passing” was the subject of the email )
The above steps can be used to track Spammers .
Do you wish to completely remove all the mails from the queue? Just run the commands below (again, with qmail stopped):
find /var/qmail/queue/mess -type f -exec rm {} \;
find /var/qmail/queue/info -type f -exec rm {} \;
find /var/qmail/queue/local -type f -exec rm {} \;
find /var/qmail/queue/intd -type f -exec rm {} \;
find /var/qmail/queue/todo -type f -exec rm {} \;
find /var/qmail/queue/remote -type f -exec rm {} \;
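Each of those find commands deletes every message file under one queue subdirectory, so make sure qmail is stopped first. A dry run against a mock queue tree shows the effect without touching a real server (the /tmp layout below is invented for the demonstration):

```shell
# build a mock qmail queue layout
mock=/tmp/mock-qmail-queue
mkdir -p "$mock"/mess/15 "$mock"/info/15 "$mock"/remote/15
touch "$mock"/mess/15/218926412 "$mock"/info/15/218926412 "$mock"/remote/15/218926412

# same pattern as the real purge, pointed at the mock tree
find "$mock"/mess   -type f -exec rm {} \;
find "$mock"/info   -type f -exec rm {} \;
find "$mock"/remote -type f -exec rm {} \;

# only the directories remain; no message files are left
find "$mock" -type f | wc -l
```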

Sep 12, 2011

Under a Plesk account, if a domain's webmail link shows a blue screen or the default Plesk page, how do you fix it?

Solution:
Log in to your Plesk server as root:

[master@tunevps ~]$ sudo su - root
[root@tunevps ~]# cd /etc/psa/webmail/horde/conf.d/
[root@tunevps conf.d]# ls -al | grep testvps.com
(no output - the domain's Horde config file is missing)
[root@tunevps conf.d]# cp -a /etc/psa-webmail/horde/conf.d/testvps.com /etc/psa/webmail/horde/conf.d/
[root@tunevps conf.d]# service httpd stop
Stopping httpd: [ OK ]
[root@tunevps conf.d]# service httpd start
Starting httpd: [ OK ]

This should fix your problem.

====================================================================

Sep 1, 2011

Easy 5 Ways to Increase the PHP Memory Limit in WordPress

  • If you are a blogger and you use WordPress, then on some hosts you'll notice a 'Fatal Error: Memory Size Exhausted' when you install a lot of plugins, upgrade to the latest WordPress version, or you may even find that the widgets in your Dashboard fail to load fully. If you want to reduce your plugins, you can take a look at the Top 10 WordPress Plugins I use on DailyBlogging. These problems arise because the PHP memory limit of your host is lower than what the process requires to perform its functions. In that case you need to follow these 5 tips to increase your host's PHP memory limit.
  • (I got the error when I upgraded from WordPress 3.0.3 to WordPress 3.0.4. You'll find that there is some problem with class-http.php on line 1408. It's not necessary that you'll get the same error, but as far as WordPress upgrades are concerned, you'll experience a similar error on your Dashboard too.)
  1. Increase the limit via PHP.ini file
You can directly increase the PHP memory limit if you have access to the PHP.ini file. Most small shared hosting servers won't give you access to the PHP.ini file, but some servers allow you to create a duplicate PHP.ini in your respective site directories whose values will override the default PHP.ini values.

To do that, just create a file named 'php.ini' in the directory where your WordPress site is installed, and add the line memory_limit = 64M to it to increase the memory limit to 64 MB.

2. Changing the Memory Limit via wp-config.php

If you don’t want to mess with the PHP.ini file, then you can go for this method. In this you won’t be needing to create any extra file in your Directory. Just Adding define('WP_MEMORY_LIMIT', '64M'); in your ‘wp-config.php’ file would increase your PHP Memory Limit to 64 MB.

3. Modifying the .htaccess file to increase the memory limit

A default WordPress installation won't have a .htaccess file, but you may already have one for purposes like '301 redirection', which is important for the SEO of any site. In that case, just add the line php_value memory_limit 64M to your '.htaccess' file and your memory limit will increase to 64 MB.

4. Changing the Memory Limit via install.php

This method is just an alternative to the php.ini method, because the code used here does the same thing as what we put in the php.ini file. Just place ini_set('memory_limit','32M'); in the 'install.php' file located in the wp-admin folder of your WordPress installation.

5. Have a talk with your host

If you are new to all these techie-sounding things, then it's better to have a live chat or a call with your host right away. It's your right to talk to them and get the necessary changes made, as you've paid for it. I would recommend using one of the quality WordPress hosts available.

====================================================================

Aug 20, 2011

How to : Avoid Dr.Web update notifications

In the latest Plesk version (Plesk 9.5), the following notifications are generated by Plesk and sent to the Plesk administrator, which can get irritating. You can stop such notifications by applying the following fix on the server.


Error:

Sample of email headers and notification.
Quote:
Return-Path:
Received: (qmail 28144 invoked by uid 111); 28 Apr 2010 12:36:06 +0000
Date: 28 Apr 2010 12:36:06 +0000
Message-ID: <20100428123606.27994.qmail[@]vps.tunevps.com>
From: root[@]vps.tunevps.com (Cron Daemon)
To: drweb[@]vps.tunevps.com
Subject: Cron /opt/drweb/update.pl
Content-Type: text/plain; charset=UTF-8
Auto-Submitted: auto-generated
X-Cron-Env:
X-Cron-Env:
X-Cron-Env:
X-Cron-Env:
X-Cron-Env:

Dr.Web (R) update details:
Update server: http://update.us.drweb.com/unix/500
Update has begun at Wed Apr 28 12:36:02 2010 Update has finished at Wed Apr 28 12:36:05 2010

Following files has been updated:
/var/drweb/bases/drwdaily.vdb
/var/drweb/bases/drwtoday.vdb
/var/drweb/bases/dwntoday.vdb
/var/drweb/bases/dwrtoday.vdb
/var/drweb/updates/timestamp


Generally, these notifications are generated by the cron daemon and contain the output of the command which updates the Dr.Web antivirus databases. The command is executed by the user 'drweb' from the configuration file /etc/cron.d/drweb-update:
Quote:
*/30 * * * * drweb /opt/drweb/update.pl
In older versions of Plesk such notifications were not sent, as the 'drweb' mail alias did not exist and these messages were discarded.
But in the latest Plesk versions, such as 9.5.1, the mail alias 'drweb' refers to the mail address of the Plesk control panel administrator, so emails sent to 'drweb@$HOSTNAME' are delivered to the Plesk control panel administrator's mailbox.

You can avoid such emails with a small change to the cron job.

Please open the file /etc/cron.d/drweb-update using any suitable editor such as vi or nano, and add '>/dev/null 2>&1' at the end of the line:
Quote:
*/30 * * * * drweb /opt/drweb/update.pl >/dev/null 2>&1
In this case no email will be generated.
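The '>/dev/null 2>&1' suffix discards both stdout and stderr, so cron has no output left to mail. A quick sketch with a stand-in command that writes to both streams:

```shell
# stand-in for the update script: writes to stdout and stderr
noisy() {
  echo "update finished"     # goes to stdout
  echo "some warning" >&2    # goes to stderr
}

# captured with the cron-style redirect: both streams are discarded
captured=$( { noisy >/dev/null 2>&1; } 2>&1 )
[ -z "$captured" ] && echo "nothing left for cron to mail"
```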

Superb:
====================================================================

Aug 11, 2011

Mysql database size shows 0MB in cPanel: how to solve it

Update the MySQL database map by running the following command as root:

# /usr/local/cpanel/bin/setupdbmap
OR
1) Edit /var/cpanel/cpanel.config

Change:

disk_usage_include_sqldbs=0

to

disk_usage_include_sqldbs=1

Then run the following:

/scripts/update_db_cache

This may take a few minutes if you have a ton of users with databases, but after this, you should see the database disk usage show up accurately in cPanel.
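The edit in step 1 is a one-character flip, so it can also be done non-interactively with sed; here is a sketch run against a dummy copy of the file (the real file is /var/cpanel/cpanel.config, the /tmp path is invented):

```shell
# dummy copy of the relevant cpanel.config line
printf 'disk_usage_include_sqldbs=0\n' > /tmp/cpanel.config.demo

# flip 0 to 1, as described in step 1 above
sed -i 's/^disk_usage_include_sqldbs=0$/disk_usage_include_sqldbs=1/' /tmp/cpanel.config.demo

cat /tmp/cpanel.config.demo
```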

Done...

Aug 8, 2011

Message not sent. Server replied: Requested action aborted: error in processing. 451 Temporary local problem - please try later

How to fix the above error (Exim):

First, go through the Exim mainlog.

(I had tried to fix this from cPanel using "fix mailbox" and the other options, but it stayed the same. Can someone tell me why this happens?)

Try editing your /etc/localdomains to your liking. In mine, I included every actual and parked domain on the server, as well as the hostname of the server.

Then try removing the file /etc/remotedomains:

# rm /etc/remotedomains

Then put an empty remotedomains back:

# touch /etc/remotedomains


You should now have a good localdomains, and an empty remotedomains.
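An equivalent to the rm + touch pair is truncating the file in place, which leaves its ownership and permissions untouched; a toy run on a stand-in file (the /tmp path is invented):

```shell
demo=/tmp/remotedomains.demo
printf 'olddomain.com\n' > "$demo"

# truncate in place - same end state as rm + touch,
# but ownership and permissions are preserved
: > "$demo"

# the file still exists and is now empty
ls -l "$demo"
```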

Try running /scripts/mailperm now... when I did this, it left the localdomains file alone. I'm guessing that the contents of the remotedomains file may have a bearing on the contents of the localdomains file.

Do this at your own risk, though... I don't know what other effects this could have. I just saw in the script that it looked at the remotedomains file. I also noticed there are two options available, neither of which I tried:

# /scripts/mailperm --skiplocaldomains
# /scripts/mailperm --skipserverperm


Everything should be fine now....
FIXED.... :)




How mail is restored from VDS accounts when it's archived

Simply fire a few commands in the client VDS...

[vds@root/]$ cd /var/spool/mail
[vds@root/]$ ls | grep test
test
test.20114506.gz
[vds@root/]$ zcat test.20114506.gz >> test

OR

(root)>su - tune.com
[vds@tune.com /]$ cd /var/spool/mail
[vds@tune.com mail]$ ll | grep andy
-rw-rw---- 1 tune.com vuser 0 june 11:34 andy
-rw-rw---- 1 tune.com vuser 45782 Aug 27 12:25 andy.20110827.gz
[vds@tune.com mail]$ zcat andy.20110827.gz >> andy
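The zcat >> mailbox pattern decompresses the archived mbox and appends it to the live one without overwriting the new mail. A toy reproduction with dummy mailbox files (all names invented; on the server the files live in /var/spool/mail):

```shell
cd "$(mktemp -d)"

# live mailbox and a compressed archive of older mail
printf 'new message\n' > andy
printf 'archived message\n' | gzip -c > andy.20110827.gz

# decompress the archive and append it to the live mailbox
zcat andy.20110827.gz >> andy

cat andy
```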


Done..

Aug 7, 2011

Horde mail - qmail returned error code 100

There was an error sending your message: sendmail returned error code 111

ARUN Post in PLESK BACKEND, PLESK FRONTEND, QMAIL

While trying to send mails using qmail and Plesk, I am getting the error below:

There was an error sending your message: sendmail returned error code 111

Solution :

If you get this error, try stopping qmail first.

cd /var/qmail/bin

cp -p qmail-local.moved qmail-local
cp -p qmail-remote.moved qmail-remote
cp -p qmail-queue.moved qmail-queue

killall qmail-remote qmail-queue qmail-local

ls -la qmail-queue qmail-local qmail-remote
-r-xr-xr-x 1 root   qmail 44060 Jun 13  2006 qmail-local
-r-s--x--x 1 qmailq qmail 15784 Jan 26 14:06 qmail-queue
-r-xr-xr-x 1 root   qmail 43364 Jun 13  2006 qmail-remote

Start qmail..

Fixed...

Jun 15, 2011

RPC: Program not registered

* When I used the command `showmount -e` on RHEL6, I got the above error. This error shows that "NFS Portmap: RPC: Program not registered".

So I went to /etc/hosts.allow on the FC4 box and entered my address:

# vim /etc/hosts.allow
i.e.
portmap : xxx.xxx.xxx.xxx/255.255.255.0 : allow
portmap : ALL

:wq!

After that, restart the services:

#/etc/init.d/nfs restart
#/etc/init.d/portmap restart



Jun 13, 2011

SFTP Error : No supported authentication methods available

While accessing FTP over SFTP you might face the following error:

No supported authentication methods available.

Try working with normal FTP and it might work fine, but SFTP will not.

To investigate, just check the file /var/log/secure on the server. You will find an error message like:

June 11 8:13:55 server sshd[88121]: Received disconnect from XX.XX.XX.XX: 14: No supported authentication methods available

The problem is caused by the PasswordAuthentication setting in /etc/ssh/sshd_config.

If PasswordAuthentication is disabled in the SSH configuration file, SFTP will not function. To fix this, set PasswordAuthentication to yes:

PasswordAuthentication yes

Once you edit the file save and restart the sshd service using the command

root@server [~]# /etc/init.d/sshd restart
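If you prefer to make the change non-interactively, a sed one-liner can do it; the sketch below runs against a dummy copy (the real file is /etc/ssh/sshd_config, and it's wise to keep an existing SSH session open until you've confirmed logins still work):

```shell
# dummy sshd_config with the problematic setting
cat > /tmp/sshd_config.demo <<'EOF'
Port 22
PasswordAuthentication no
EOF

# enable password authentication; the pattern also catches
# a commented-out "#PasswordAuthentication" line
sed -E -i 's/^#?PasswordAuthentication .*/PasswordAuthentication yes/' /tmp/sshd_config.demo

grep PasswordAuthentication /tmp/sshd_config.demo
```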

Now try to log in over SFTP. It should sort out your issue.

Mar 10, 2011

How to mount partition with ntfs file system

* The purpose of this title is a step-by-step guide on how to mount a partition with the NTFS file system on the Linux operating system. The guide has two parts:
* mount NTFS file system with read only access
* mount NTFS file system with read write access

Mount NTFS file system with read only access
2.1. NTFS kernel support

The majority of current Linux distributions support the NTFS file system out of the box. To be more specific, support for NTFS is a feature of Linux kernel modules rather than of Linux distributions. First verify that we have NTFS modules installed on our system.

# ls /lib/modules/2.6.18-5-686/kernel/fs/ | grep ntfs

check for NTFS kernel support


The NTFS module is present. Let's identify the NTFS partition.
2.2. Identifying partition with NTFS file system

One simple way to identify NTFS partition is:

fdisk -l | grep NTFS

Identifying partition with NTFS file system

There it is: /dev/sdbX
2.3. Mount NTFS partition

First create a mount point:

mkdir /mnt/windows

Then simply use mount command to mount it:

mount -t ntfs /dev/sdb1 /mnt/windows

Mount NTFS partition using linux
Now we can access the NTFS partition and its files with read only access.

Mount NTFS file system with read write access

Mounting an NTFS file system with read write access permissions is a bit more complicated. It involves the installation of additional software such as fuse and ntfs-3g. In both cases you probably need to use your package management tool, such as yum, apt-get or synaptic, and install the packages ntfs-3g and fuse from your standard distribution repository. Here we take the other path, which consists of manually compiling and installing fuse and ntfs-3g from source code.
3.1. Install additional software
3.1.1. Fuse Install

Download source code from: http://fuse.sourceforge.net/

# wget http://easynews.dl.sourceforge.net/sourceforge/fuse/fuse-2.7.1.tar.gz

Compile and install fuse source code:
Extract source file:

# tar xzf fuse-2.7.1.tar.gz

Compile and install

# cd fuse-2.7.1
./configure --exec-prefix=/; make; make install

Compile and install fuse source code
3.1.2. ntfs-3g install

Download source code from: http://www.ntfs-3g.org/index.html#download

# wget http://www.ntfs-3g.org/ntfs-3g-1.1120.tgz

Extract source file:

# tar xzf ntfs-3g-1.1120.tgz

Compile and install ntfs-3g source code
NOTE: Make sure that you have pkg-config package installed, otherwise you get this error message:

checking for pkg-config... no
checking for FUSE_MODULE... configure: error: FUSE >= 2.6.0 was not found. Either it's not fully
installed (e.g. fuse, fuse-utils, libfuse, libfuse2, libfuse-dev, etc packages) or files from an old
version are still present. See FUSE at http://fuse.sf.net/

# cd ntfs-3g-1.1120
./configure; make; make install

Compile and install ntfs-3g source code
3.2. Mount ntfs partition with read write access

# mount -t ntfs-3g /dev/sdb1 /mnt/windows

NOTE: ntfs-3g recommends kernel version 2.6.20 or higher.

# mount -t ntfs-3g /dev/sdb1 /mnt/windows
WARNING: Deficient Linux kernel detected. Some driver features are
not available (swap file on NTFS, boot from NTFS by LILO), and
unmount is not safe unless it's made sure the ntfs-3g process
naturally terminates after calling 'umount'. If you wish this
message to disappear then you should upgrade to at least kernel
version 2.6.20, or request help from your distribution to fix
the kernel problem. The below web page has more information:
http://ntfs-3g.org/support.html#fuse26

==================================================================================

Mar 4, 2011

How To recover deleted files using 'rm -rf' command in ext3fs

Let's jump into a mini user guide to recover a deleted file from ext3.
I have a file called 'giis.txt':
$ ls -il giis.txt
15 -rw-rw-r-- 2 root root 20 Apr 17 12:08 giis.txt

note: the "-il" option displays the file's inode number (which is 15).

And its contents are:
$ cat giis.txt
this is giis file
Now I'm going to delete that file:
$ rm giis.txt
rm: remove write-protected regular file `giis.txt'? y

Using Journal and Inode number

Remember: if the system is rebooted, the journal entries will be lost. So you can recover a file from the journal only as long as the system has NOT been shut down or restarted.

Recovering from journal using file Inode number

Since we know that the giis.txt file's inode number is 15, use it with debugfs (run against the filesystem device, e.g. debugfs /dev/sda5):

debugfs: logdump -i <15>
FS block 1006 logged at sequence 404351, journal block 7241
(inode block for inode 15):
Inode: 15 Type: regular Mode: 0664 Flags: 0x0 Generation: 0
User: 0 Group: 0 Size: 20
File ACL: 0 Directory ACL: 0
Links: 1 Blockcount: 8
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x48159f2d -- Mon Apr 28 15:25:57 2008
atime: 0x48159f27 -- Mon Apr 28 15:25:51 2008
mtime: 0x4806f070 -- Thu Apr 17 12:08:40 2008
Blocks: (0+1): 10234
No magic number at block 7247: end of journal.

Look carefully you can see the line
Blocks: (0+1): 10234
That's the address (data block) where the content of inode 15 is stored.
So now we know the block address too.
Then what are you waiting for? Go ahead and just use the dd command to extract the data from that address (block number):
dd if=/dev/sda5 of=/opt/giis_R/txt bs=4096 count=1 skip=10234
1+0 records in
1+0 records out
if refers to the input device.
of refers to the output file.
bs refers to the block size.
count indicates how many blocks you want to dump (we need only one block).
skip tells dd to skip 10234 blocks from the start and dump the next block.
Now let's check the content of the txt file
$cat txt
this is giis file

Yes....we got it :-)
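The skip/count arithmetic is easy to verify on a scratch file: write a few 4096-byte blocks with known contents, then pull out just one of them with dd, exactly as in the recovery above (the image file and block numbers below are invented for the demonstration; on the real system the input is the partition device, e.g. /dev/sda5):

```shell
cd "$(mktemp -d)"

# build a scratch image of three 4096-byte blocks with known contents
# (conv=sync pads each short write out to a full bs-sized block)
printf 'this is block 0'   | dd of=disk.img bs=4096 seek=0 conv=sync,notrunc 2>/dev/null
printf 'this is block 1'   | dd of=disk.img bs=4096 seek=1 conv=sync,notrunc 2>/dev/null
printf 'this is giis file' | dd of=disk.img bs=4096 seek=2 conv=sync,notrunc 2>/dev/null

# same shape as the recovery command: skip two blocks, dump exactly one
dd if=disk.img of=recovered.txt bs=4096 skip=2 count=1 2>/dev/null

# the block is NUL-padded; the text sits at the front
head -c 17 recovered.txt   # prints: this is giis file
```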

Ok, we recovered a file based on its inode number. I know what's on your mind:
"Is it possible to recover a file if we don't know its inode number?" Yes, that's a very good question. Actually you can recover it... but how?
Using Journal and Filename

So you want to know how to recover a file if you don't know its inode number????
Sadly, that's not possible. You should know the file's inode number. Yes, I can hear you saying "How do I remember a file's inode number?". It's simple: run the ls -i command and it will display inode numbers and filenames, something like the output below:
4243207 133708_id 3698526 inotify-tools-3.12
3698256 133708.tgz 4373821 ipc.pdf
3698265 16px-Feed-icon.svg.png 3767366 james
3698357 5-Oct-2001.ppt 4373815 journal-api.pdf
3762755 apache_sh 3698376 JSJr
3697893 arabia 3762748 kamal
3697869 bach 3700575 Kernel book-Daniel P. Bovet. Marco Cesati
3697836 bach.zip 3698675 lan.html
3697971 bb 3698427 lecture23.ppt
3764000 cartMail.php 3703871 Lin
2678038 childshell 3697802 link to payment
3701078 cms-fixes 3893379 linux-cmd.txt

Can you see the numbers in front of the file names? They are inode numbers :)
Read the list 10 times loudly and just memorize it ;-)
Just kidding ;-) No one can remember the inode numbers of all files... huhhh... relaxed now?
Then how do you recover a file if you don't know its inode number???? Sadly, that's not possible... :-) [back to line number 1. It's a recursive doc :-)]

Let's start exploring.
I'm deleting a file called exthide2.txt:
$ rm exthide2.txt
rm: remove write-protected regular file `exthide2.txt'? y
I don't know its inode number. Let's check with the debugfs command.
Use debugfs to list deleted files with the ls -d option:
debugfs: ls -d
2 (12) . 2 (12) .. 11 (20) lost+found 2347777 (20) oss
<2121567> (20) exthide2.txt
You can see the deleted file exthide2.txt; it's different from the others: its inode number
2121567 is surrounded by < and >. Deleted files' inodes are surrounded by < and >. Now we've got a chance to recover the file :-)

So we can use this inode number with logdump:
debugfs: logdump -i <2121567>
Inode 2121567 is at group 65, block 2129985, offset 3840
Journal starts at block 1, transaction 405642
FS block 2129985 logged at sequence 405644, journal block 9
(inode block for inode 2121567):
Inode: 2121567 Type: bad type Mode: 0000 Flags: 0x0 Generation: 0
User: 0 Group: 0 Size: 0
File ACL: 0 Directory ACL: 0
Links: 0 Blockcount: 0
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x00000000 -- Thu Jan 1 05:30:00 1970
atime: 0x00000000 -- Thu Jan 1 05:30:00 1970
mtime: 0x00000000 -- Thu Jan 1 05:30:00 1970
Blocks:
FS block 2129985 logged at sequence 405648, journal block 64
(inode block for inode 2121567):
Inode: 2121567 Type: regular Mode: 0664 Flags: 0x0 Generation: 913772093
User: 100 Group: 0 Size: 31
File ACL: 2130943 Directory ACL: 0
Links: 1 Blockcount: 16
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x4821d5d0 -- Wed May 7 21:46:16 2008
atime: 0x4821d8be -- Wed May 7 21:58:46 2008
mtime: 0x4821d5d0 -- Wed May 7 21:46:16 2008
Blocks: (0+1): 2142216

Let's explore this result carefully. You can see an entry like this:
FS block 2129985 logged at sequence 405644, journal block 9
and it displays the type as
Type: bad type
(inode block for inode 2121567):
Parse the entries carefully: you can see the timestamps, and then we reach
Blocks:
Nothing is specified against Blocks. Let's parse the next journal block:
FS block 2129985 logged at sequence 405648, journal block 64
(inode block for inode 2121567):
Yes, here you can see an entry like this:
Blocks: (0+1): 2142216
This is the data address of the deleted file. Let's try to dump the data using the dd command:
$ sudo dd if=/dev/sda5 of=/home/oss/exthide_recovered.txt bs=4096 skip=2142216 count=1
Now let's see what's in that file -- I'm curious:
$ cat exthide_recovered.txt
this is exthide file added now

Wow, shocking output, but that's what we wanted :-) ... now we have recovered a file along with its name.
To recap, the steps involved in this process:
1) Find the inode number of the file using debugfs: ls -d
2) Find the data block of the inode using debugfs: logdump
3) Finally, use the data block number with the dd command to extract the contents.

======================you are done==================================================