How To Patch Running Linux Kernel Source Tree

Some people like to know how to patch a running Linux kernel. Patching a production kernel is risky business, but the following procedure will help you fix the problem safely.

Step # 1: Make sure your product is affected

First, find out whether your product is affected by the reported exploit. For example, the vmsplice() bug affects only RHEL 5.x; RHEL 4.x, 3.x, and 2.1.x are not affected at all. You can always obtain this information from your vendor's bug-tracking system, such as Bugzilla. Also make sure the bug affects your architecture; for example, a bug may only affect 64-bit or 32-bit platforms.

Step # 2: Apply patch

It is best to apply and test the patch in a test environment first. Please note that some vendors, such as Red Hat and SUSE, modify or backport the kernel, so it is a good idea to apply the patch to their kernel source tree. Otherwise, you can always grab the latest kernel version and apply the patch to that.


Step # 3: How do I apply kernel patch?

WARNING! These instructions require sysadmin skills. Personally, I avoid recompiling any kernel unless absolutely necessary. Most of our production boxes (over 1400) are powered by a mix of RHEL 4 and 5. A wrong kernel option can disable hardware or leave the system unable to boot at all. If you don't understand the internal kernel dependencies, don't try this on a production box.

Change directory to your kernel source code:
# cd linux-2.6.xx.yy
Download and save the patch file as fix.vmsplice.exploit.patch, then view it:
# cat fix.vmsplice.exploit.patch
Output:

--- a/fs/splice.c
+++ b/fs/splice.c
@@ -1234,7 +1234,7 @@ static int get_iovec_page_array(const struct iovec __user *iov,
if (unlikely(!len))
break;
error = -EFAULT;
- if (unlikely(!base))
+ if (!access_ok(VERIFY_READ, base, len))
break;

/*

Now apply the patch using the patch command:
# patch -p1 < fix.vmsplice.exploit.patch
Now recompile and install the Linux kernel.
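A safer pattern is to dry-run the patch before touching any files. The sketch below wraps that in a small helper (the function name is mine, not from the article); --dry-run reports what would change without modifying the tree, and -p1 strips the leading a/ and b/ path components used in the diff header above.

```shell
# A sketch, not the article's exact commands: dry-run the patch first and only
# apply it if the dry run succeeds. -d runs patch inside the given source tree;
# -p1 strips the leading a/ and b/ from the paths in the diff.
apply_kernel_patch() {
    local tree="$1" patchfile="$2"
    patch -d "$tree" -p1 --dry-run < "$patchfile" &&
        patch -d "$tree" -p1 < "$patchfile"
}
# e.g.: apply_kernel_patch linux-2.6.xx.yy fix.vmsplice.exploit.patch
```

If the dry run fails with rejected hunks, the patch was written for a different kernel version and should not be forced onto the tree.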

I hope this quick and dirty guide will save someone's time. On a related note, Erek has unofficial patched RPMs for CentOS / RHEL distros.

 

How To Set Up Software RAID1 On A Running System (Incl. GRUB Configuration) (Fedora 8) - Page 4

9 Testing

Now let's simulate a hard drive failure. It doesn't matter if you select /dev/sda or /dev/sdb here. In this example I assume that /dev/sdb has failed.

To simulate the hard drive failure, you can either shut down the system and remove /dev/sdb from the system, or you (soft-)remove it like this:

mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md1 --fail /dev/sdb2
mdadm --manage /dev/md2 --fail /dev/sdb3

mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm --manage /dev/md1 --remove /dev/sdb2
mdadm --manage /dev/md2 --remove /dev/sdb3
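The six commands above repeat the same array/partition pairing three times. A small helper (my own sketch, not part of the original guide) generates the pairing in one place so the fail/remove loops, and later the re-add loop, all agree:

```shell
# The md<n> <-> sdb<n+1> pairing used throughout this guide, generated in one
# place. Sketch only; verify the pairing against your own /proc/mdstat before
# feeding it to mdadm.
md_pairs() {
    for n in 0 1 2; do
        echo "/dev/md$n /dev/sdb$((n + 1))"
    done
}
# md_pairs | while read md part; do
#     mdadm --manage "$md" --fail "$part" && mdadm --manage "$md" --remove "$part"
# done
```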

Shut down the system:

shutdown -h now

Then put in a new /dev/sdb drive (if you simulate a failure of /dev/sda, you should now put /dev/sdb in /dev/sda's place and connect the new HDD as /dev/sdb!) and boot the system. It should still start without problems.

Now run

cat /proc/mdstat

and you should see that we have a degraded array:

[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sda1[0]
104320 blocks [2/1] [U_]

md1 : active raid1 sda2[0]
513984 blocks [2/1] [U_]

md2 : active raid1 sda3[0]
4618560 blocks [2/1] [U_]

unused devices: <none>
[root@server1 ~]#

The output of

fdisk -l

should look as follows:

[root@server1 ~]# fdisk -l

Disk /dev/sda: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0007b217

Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 fd Linux raid autodetect
/dev/sda2 14 77 514080 fd Linux raid autodetect
/dev/sda3 78 652 4618687+ fd Linux raid autodetect

Disk /dev/sdb: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/md2: 4729 MB, 4729405440 bytes
2 heads, 4 sectors/track, 1154640 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md2 doesn't contain a valid partition table

Disk /dev/md1: 526 MB, 526319616 bytes
2 heads, 4 sectors/track, 128496 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/md0: 106 MB, 106823680 bytes
2 heads, 4 sectors/track, 26080 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table
[root@server1 ~]#

Now we copy the partition table of /dev/sda to /dev/sdb:

sfdisk -d /dev/sda | sfdisk /dev/sdb

(If you get an error, you can try the --force option:

sfdisk -d /dev/sda | sfdisk --force /dev/sdb

)

[root@server1 ~]# sfdisk -d /dev/sda | sfdisk /dev/sdb
Checking that no-one is using this disk right now ...
OK

Disk /dev/sdb: 652 cylinders, 255 heads, 63 sectors/track

sfdisk: ERROR: sector 0 does not have an msdos signature
/dev/sdb: unrecognized partition table type
Old situation:
No partitions found
New situation:
Units = sectors of 512 bytes, counting from 0

Device Boot Start End #sectors Id System
/dev/sdb1 * 63 208844 208782 fd Linux raid autodetect
/dev/sdb2 208845 1237004 1028160 fd Linux raid autodetect
/dev/sdb3 1237005 10474379 9237375 fd Linux raid autodetect
/dev/sdb4 0 - 0 0 Empty
Successfully wrote the new partition table

Re-reading the partition table ...

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
[root@server1 ~]#
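To confirm the copy worked, you can diff the two sfdisk dumps. They can't be compared directly because one names /dev/sda and the other /dev/sdb, so normalize the device name first (this helper is my own sketch, not part of the original guide):

```shell
# Replace the device name in an sfdisk dump so two dumps can be diffed.
normalize_ptable() {
    sed "s|$1|DISK|g"
}
# sfdisk -d /dev/sda | normalize_ptable /dev/sda > a.dump
# sfdisk -d /dev/sdb | normalize_ptable /dev/sdb > b.dump
# diff a.dump b.dump && echo "layouts match"
```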

Afterwards we remove any remains of a previous RAID array from /dev/sdb...

mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2
mdadm --zero-superblock /dev/sdb3

... and add /dev/sdb to the RAID array:

mdadm -a /dev/md0 /dev/sdb1
mdadm -a /dev/md1 /dev/sdb2
mdadm -a /dev/md2 /dev/sdb3

Now take a look at

cat /proc/mdstat

[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sdb1[1] sda1[0]
104320 blocks [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[0]
513984 blocks [2/2] [UU]

md2 : active raid1 sdb3[2] sda3[0]
4618560 blocks [2/1] [U_]
[===>.................] recovery = 15.4% (715584/4618560) finish=4.9min speed=13222K/sec

unused devices: <none>
[root@server1 ~]#

Wait until the synchronization has finished:

[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sdb1[1] sda1[0]
104320 blocks [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[0]
513984 blocks [2/2] [UU]

md2 : active raid1 sdb3[1] sda3[0]
4618560 blocks [2/2] [UU]

unused devices: <none>
[root@server1 ~]#

Then run

grub

and install the bootloader on both HDDs:

root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
quit
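The same five GRUB-legacy commands can be run non-interactively by piping them into `grub --batch`. The generator function below is my own sketch; the (hdN,0) root assumes /boot is the first partition on each disk, as it is throughout this guide.

```shell
# Emit the GRUB-legacy commands for installing the boot loader on each disk;
# pipe the output into `grub --batch` to run them without the interactive shell.
grub_install_script() {
    for d in "$@"; do
        printf 'root (%s,0)\nsetup (%s)\n' "$d" "$d"
    done
    echo quit
}
# grub_install_script hd0 hd1 | grub --batch
```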

That's it. You've just replaced a failed hard drive in your RAID1 array.

 

How To Set Up Software RAID1 On A Running System (Incl. GRUB Configuration) (Fedora 8) - Page 3

7 Preparing /dev/sda

If all goes well, you should now find /dev/md0 and /dev/md2 in the output of

df -h

[root@server1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md2 4.4G 2.4G 1.8G 58% /
/dev/md0 99M 15M 80M 16% /boot
tmpfs 185M 0 185M 0% /dev/shm
[root@server1 ~]#

The output of

cat /proc/mdstat

should be as follows:

[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sdb1[1]
104320 blocks [2/1] [_U]

md1 : active raid1 sdb2[1]
513984 blocks [2/1] [_U]

md2 : active raid1 sdb3[1]
4618560 blocks [2/1] [_U]

unused devices: <none>
[root@server1 ~]#

Now we must change the partition types of our three partitions on /dev/sda to Linux raid autodetect as well:

fdisk /dev/sda

[root@server1 ~]# fdisk /dev/sda

Command (m for help): <-- t
Partition number (1-4): <-- 1
Hex code (type L to list codes): <-- fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): <-- t
Partition number (1-4): <-- 2
Hex code (type L to list codes): <-- fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): <-- t
Partition number (1-4): <-- 3
Hex code (type L to list codes): <-- fd
Changed system type of partition 3 to fd (Linux raid autodetect)

Command (m for help): <-- w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@server1 ~]#

Now we can add /dev/sda1, /dev/sda2, and /dev/sda3 to the respective RAID arrays:

mdadm --add /dev/md0 /dev/sda1
mdadm --add /dev/md1 /dev/sda2
mdadm --add /dev/md2 /dev/sda3

Now take a look at

cat /proc/mdstat

... and you should see that the RAID arrays are being synchronized:

[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sda1[0] sdb1[1]
104320 blocks [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
513984 blocks [2/2] [UU]

md2 : active raid1 sda3[2] sdb3[1]
4618560 blocks [2/1] [_U]
[=====>...............] recovery = 29.9% (1384256/4618560) finish=2.3min speed=22626K/sec

unused devices: <none>
[root@server1 ~]#

(You can run

watch cat /proc/mdstat

to get an ongoing output of the process. To leave watch, press CTRL+C.)
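If you want a script to block until the rebuild is done rather than watching by eye, you can poll /proc/mdstat until it no longer reports a resync or recovery in progress. This helper is my own sketch (the path is a parameter only so the logic can be tested against a saved copy; normally it reads the real /proc/mdstat):

```shell
# Poll mdstat until no resync/recovery is in progress, then report.
wait_for_sync() {
    local mdstat="${1:-/proc/mdstat}"
    while grep -Eq 'resync|recovery' "$mdstat"; do
        sleep 10
    done
    echo "all arrays in sync"
}
```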

Wait until the synchronization has finished; the output should then look like this:

[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sda1[0] sdb1[1]
104320 blocks [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
513984 blocks [2/2] [UU]

md2 : active raid1 sda3[0] sdb3[1]
4618560 blocks [2/2] [UU]

unused devices: <none>
[root@server1 ~]#

Then adjust /etc/mdadm.conf to the new situation:

mdadm --examine --scan > /etc/mdadm.conf

/etc/mdadm.conf should now look something like this:

cat /etc/mdadm.conf

ARRAY /dev/md0 level=raid1 num-devices=2 UUID=2848a3f5:cd1c26b6:e762ed83:696752f9
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=8a004bac:92261691:227767de:4adf6592
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=939f1c71:be9c10fd:d9e5f8c6:a46bcd49

8 Preparing GRUB (Part 2)

We are almost done now. Now we must modify /boot/grub/menu.lst again. Right now it is configured to boot from /dev/sdb (hd1,0). Of course, we still want the system to be able to boot in case /dev/sdb fails. Therefore we copy the first kernel stanza (which contains hd1), paste it below and replace hd1 with hd0. Furthermore we comment out all other kernel stanzas so that it looks as follows:

vi /boot/grub/menu.lst

# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
# all kernel and initrd paths are relative to /boot/, eg.
# root (hd0,0)
# kernel /vmlinuz-version ro root=/dev/sda3
# initrd /initrd-version.img
#boot=/dev/sda
default=0
fallback=1
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Fedora (2.6.23.1-42.fc8)
root (hd1,0)
kernel /vmlinuz-2.6.23.1-42.fc8 ro root=/dev/md2 rhgb quiet
initrd /initrd-2.6.23.1-42.fc8.img

title Fedora (2.6.23.1-42.fc8)
root (hd0,0)
kernel /vmlinuz-2.6.23.1-42.fc8 ro root=/dev/md2 rhgb quiet
initrd /initrd-2.6.23.1-42.fc8.img

#title Fedora (2.6.23.1-42.fc8)
# root (hd0,0)
# kernel /vmlinuz-2.6.23.1-42.fc8 ro root=LABEL=/ rhgb quiet
# initrd /initrd-2.6.23.1-42.fc8.img

Afterwards, update your ramdisk:

mv /boot/initrd-`uname -r`.img /boot/initrd-`uname -r`.img_orig2
mkinitrd /boot/initrd-`uname -r`.img `uname -r`

... and reboot the system:

reboot

It should boot without problems.

That's it - you've successfully set up software RAID1 on your running Fedora 8 system!




 

How To Set Up Software RAID1 On A Running System (Incl. GRUB Configuration) (Fedora 8) - Page 2

4 Creating Our RAID Arrays

Now let's create our RAID arrays /dev/md0, /dev/md1, and /dev/md2. /dev/sdb1 will be added to /dev/md0, /dev/sdb2 to /dev/md1, and /dev/sdb3 to /dev/md2. /dev/sda1, /dev/sda2, and /dev/sda3 can't be added right now (because the system is currently running on them), therefore we use the placeholder missing in the following three commands:

mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-disks=2 missing /dev/sdb3

The command

cat /proc/mdstat

should now show that you have three degraded RAID arrays ([_U] or [U_] means that an array is degraded while [UU] means that the array is ok):

[root@server1 ~]# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb3[1]
4618560 blocks [2/1] [_U]

md1 : active raid1 sdb2[1]
513984 blocks [2/1] [_U]

md0 : active raid1 sdb1[1]
104320 blocks [2/1] [_U]

unused devices: <none>
[root@server1 ~]#


Next we create filesystems on our RAID arrays (ext3 on /dev/md0 and /dev/md2 and swap on /dev/md1):

mkfs.ext3 /dev/md0
mkswap /dev/md1
mkfs.ext3 /dev/md2

Next we create /etc/mdadm.conf as follows:

mdadm --examine --scan > /etc/mdadm.conf

Display the contents of the file:

cat /etc/mdadm.conf

In the file you should now see details about our three (degraded) RAID arrays:

ARRAY /dev/md0 level=raid1 num-devices=2 UUID=2848a3f5:cd1c26b6:e762ed83:696752f9
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=8a004bac:92261691:227767de:4adf6592
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=939f1c71:be9c10fd:d9e5f8c6:a46bcd49

5 Adjusting The System To RAID1

Now let's mount /dev/md0 and /dev/md2 (we don't need to mount the swap array /dev/md1):

mkdir /mnt/md0
mkdir /mnt/md2

mount /dev/md0 /mnt/md0
mount /dev/md2 /mnt/md2

You should now find both arrays in the output of

mount

[root@server1 ~]# mount
/dev/sda3 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/md0 on /mnt/md0 type ext3 (rw)
/dev/md2 on /mnt/md2 type ext3 (rw)
[root@server1 ~]#

Next we modify /etc/fstab. Replace LABEL=/boot with /dev/md0, LABEL=SWAP-sda2 with /dev/md1, and LABEL=/ with /dev/md2 so that the file looks as follows:

vi /etc/fstab

/dev/md2                 /                       ext3    defaults        1 1
/dev/md0 /boot ext3 defaults 1 2
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/dev/md1 swap swap defaults 0 0

Next replace LABEL=/boot with /dev/md0 and LABEL=/ with /dev/md2 in /etc/mtab:

vi /etc/mtab

/dev/md2 / ext3 rw 0 0
proc /proc proc rw 0 0
sysfs /sys sysfs rw 0 0
devpts /dev/pts devpts rw,gid=5,mode=620 0 0
/dev/md0 /boot ext3 rw 0 0
tmpfs /dev/shm tmpfs rw 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw 0 0
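If you prefer to script the label replacements in /etc/fstab and /etc/mtab rather than editing by hand, a few sed expressions do the job. This helper is my own sketch; the LABEL names match the example system in this guide, so check your own files first. The /boot label is replaced before the bare LABEL=/ rule so the two cannot collide:

```shell
# Replace this guide's LABEL= entries with the md devices, in place.
# Takes the file as an argument so it can be dry-run on a copy first.
raidify_labels() {
    sed -i -e 's|LABEL=/boot|/dev/md0|g' \
           -e 's|LABEL=SWAP-sda2|/dev/md1|g' \
           -e 's|LABEL=/\([[:space:]]\)|/dev/md2\1|g' "$1"
}
# cp /etc/fstab /etc/fstab.orig && raidify_labels /etc/fstab
# cp /etc/mtab  /etc/mtab.orig  && raidify_labels /etc/mtab
```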

Now on to the GRUB boot loader. Open /boot/grub/menu.lst and add fallback=1 right after default=0:

vi /boot/grub/menu.lst

[...]
default=0
fallback=1
[...]

This makes sure that if the first kernel stanza (counting starts with 0, so the first stanza is 0) fails to boot, the fallback stanza (#1, i.e. the second one) will be booted instead.

In the same file, go to the bottom where you should find some kernel stanzas. Copy the first of them and paste the stanza before the first existing stanza; replace root=LABEL=/ with root=/dev/md2 and root (hd0,0) with root (hd1,0):

[...]
title Fedora (2.6.23.1-42.fc8)
root (hd1,0)
kernel /vmlinuz-2.6.23.1-42.fc8 ro root=/dev/md2 rhgb quiet
initrd /initrd-2.6.23.1-42.fc8.img

title Fedora (2.6.23.1-42.fc8)
root (hd0,0)
kernel /vmlinuz-2.6.23.1-42.fc8 ro root=LABEL=/ rhgb quiet
initrd /initrd-2.6.23.1-42.fc8.img

The whole file should look something like this:

# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
# all kernel and initrd paths are relative to /boot/, eg.
# root (hd0,0)
# kernel /vmlinuz-version ro root=/dev/sda3
# initrd /initrd-version.img
#boot=/dev/sda
default=0
fallback=1
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Fedora (2.6.23.1-42.fc8)
root (hd1,0)
kernel /vmlinuz-2.6.23.1-42.fc8 ro root=/dev/md2 rhgb quiet
initrd /initrd-2.6.23.1-42.fc8.img

title Fedora (2.6.23.1-42.fc8)
root (hd0,0)
kernel /vmlinuz-2.6.23.1-42.fc8 ro root=LABEL=/ rhgb quiet
initrd /initrd-2.6.23.1-42.fc8.img

root (hd1,0) refers to /dev/sdb which is already part of our RAID arrays. We will reboot the system in a few moments; the system will then try to boot from our (still degraded) RAID arrays; if it fails, it will boot from /dev/sda (-> fallback 1).

Next we adjust our ramdisk to the new situation:

mv /boot/initrd-`uname -r`.img /boot/initrd-`uname -r`.img_orig
mkinitrd /boot/initrd-`uname -r`.img `uname -r`

Now we copy the contents of /dev/sda1 and /dev/sda3 to /dev/md0 and /dev/md2 (which are mounted on /mnt/md0 and /mnt/md2):

cp -dpRx / /mnt/md2

cd /boot
cp -dpRx . /mnt/md0

6 Preparing GRUB (Part 1)

Afterwards we must install the GRUB bootloader on the second hard drive /dev/sdb:

grub

On the GRUB shell, type in the following commands:

root (hd0,0)

grub> root (hd0,0)
Filesystem type is ext2fs, partition type 0x83

grub>

setup (hd0)

grub> setup (hd0)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd0)"... 16 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd0) (hd0)1+16 p (hd0,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.

grub>

root (hd1,0)

grub> root (hd1,0)
Filesystem type is ext2fs, partition type 0xfd

grub>

setup (hd1)

grub> setup (hd1)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd1)"... 16 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd1) (hd1)1+16 p (hd1,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.

grub>

quit

Now, back on the normal shell, we reboot the system and hope that it boots ok from our RAID arrays:

reboot


 

California firm buys Utah-based Linux Networx

A Silicon Valley company has bought the assets of Utah supercomputer maker Linux Networx Inc. for an undisclosed amount of stock.
Silicon Graphics Inc. acquired key Linux Networx software, patents, technology and expertise, Sunnyvale, Calif.-based SGI said Thursday.
It isn't clear what will happen to Linux Networx. David Morton, chief technology officer of the Bluffdale-based company, declined to comment.
SGI plans to keep an office at an undetermined location in the Salt Lake City area.
"We have made offers to a portion of Linux Networx employees, but they haven't been obligated to answer yet. So we can't say how many will be joining us," said Joan Roy, SGI senior director of marketing.
Linux Networx designs and makes clustered high-performance computers based on the Linux operating system. Its machines are used in scientific research, oil and gas exploration, and graphics rendering.
"SGI has a really nice fit with some of the things that Linux Networx did really well. It was a pioneer in cluster computing. They've made a lot of progress in the marketplace in creating really high-performance computing solutions," Roy said.
Customers have included defense contractors Lockheed Martin and Northrop Grumman, the Lawrence Livermore National Laboratory, Los Alamos National Laboratory, NASA, BMW, Toyota and Royal Dutch Shell.
Linux Networx is privately held. Investors have included the Canopy Group, Wasatch Venture Fund, Oak Investment Partners and Tudor Ventures.

Source: Salt Lake Tribune

 

'Linux Next' Begins To Take Shape

Make no mistake about it, the Linux 2.6.x kernel is a *large* undertaking that just keeps getting bigger and bigger. Apparently it's also getting harder to maintain, in terms of ensuring that regressions don't occur and that new code is fully tested.

That's where the new 'Linux Next' effort comes in.

Linux-next started off as a 'dream' of kernel maintainer Andrew Morton, who has noted that few kernel developers are testing other kernel developers' development code, which is leading to some problems.

Morton has proposed a "linux-next" tree that once per day would merge various Linux subsystem trees and then run compilation tests after applying each tree. While that may sound simple enough, in practice it's no small task.

Kernel developer Stephen Rothwell has stepped up to the plate and announced that he will help run the linux-next tree. While the effort could well make the Linux development process more complicated, its goal is clearly to ensure higher overall code quality by making sure code merges actually work before Linus Torvalds pushes out an RC (release candidate).

The way I see it, from my simple layperson's point of view, Linux-next forces code to be a whole lot cleaner before it gets submitted and forces more testing, earlier and more often - which ultimately is a great thing.

There has been some very 'healthy' discussion on the Linux Kernel Mailing List (LKML) about Linux-next, with perhaps the most colorful language coming from none other than Linus Torvalds himself:

If you're not confident enough about your work, don't push it out! It's
that simple. Pushing out to a public branch is a small "release".

Have the [EXPLETIVE DELETED] back-bone to be able to stand behind what you did!

It sure will be interesting to see how Linux-next plays out over time; I for one am very optimistic.

 

Red Hat Expands JBoss SOA, Community Efforts

Red Hat is out this week with a series of initiatives to further expand and develop its JBoss middleware platform. On the commercially available product side there is the JBoss SOA (define) platform and on the community side there are three separate projects including Black Tie (for BEA Tuxedo migration), RHQ (a management effort for middleware management) and SOA Governance.

All told the projects are part of Red Hat's effort to accelerate JBoss to take on 50 percent of the enterprise middleware market.

"As you look at these projects they first start off in the community but they will become products and part of our product portfolio," Craig Muzilla, VP Middleware Business at Red Hat explained. "We believe that they will all help to accelerate our open source middleware since they all relate to the challenges that IT has."

During a conference call with the media, JBoss CTO Sacha Labourey explained that the Black Tie effort came out of JBoss's acquisition of Arjuna in 2005. The goal of Black Tie is simple: to win over users of BEA's Tuxedo transaction server.

Black Tie, which is expected to have its first open source release in the next 60 days, will allow for interoperability with Tuxedo as a transaction server. Labourey claimed it could possibly serve as a replacement for Tuxedo in certain scenarios as well.

On the management side, in a joint open source effort with management vendor Hyperic, JBoss is developing an open middleware management project called RHQ.

RHQ is not necessarily intended to be a standalone effort, but rather to act as a framework on which complete management products can be based. JBoss Operations Network v2.0 (JON 2.0) from Red Hat will be one such product; JON 2.0 is set for a spring 2008 release.

In terms of governance, JBoss is kicking off a series of projects under the larger banner of SOA governance, all aimed at helping the adoption of JBoss's middleware. While Muzilla could not provide all the hard details on the SOA governance project, he did indicate that there will be at least three core areas: registry, repository, and policy management.

The registry effort will include a project called JBoss DNA, which Muzilla described as a metadata repository based on technology acquired from MetaMatrix. Red Hat acquired MetaMatrix for an undisclosed sum in April 2007.

In addition to the new open source efforts, JBoss also announced the general availability of its SOA platform. The JBoss SOA platform is an integrated set of JBoss technologies that have been combined to form a full SOA solution. Among the JBoss tools included in the SOA platform are JBoss ESB (for service integration), jBPM (for workflow), and JBoss Rules (for policy).

Summing up, Labourey said JBoss' announcements are about innovation of middleware and SOA.

"It's not innovation for the sake of innovation," he said. "It's about enterprise acceleration."

 

Making music with M-Audio on Linux

M-Audio has supplied hardware and software to computer-based musicians for 20 years. Its new "make-music-now" line of products, aimed at musicians just getting into computers or PC users with an interest in music, includes a microphone, speakers, drum machine, and DJ mixer deck. Unfortunately, its bundled software, called Session, is for Windows only. Our challenge was to try out this hardware -- specifically the KeyStudio MIDI keyboard and Fast Track audio interface -- with Linux applications. We were half successful.

The KeyStudio keyboard is well made, with 49 full-size, touch-sensitive keys. The action feels a little light, but the 'touch' is OK and you soon adjust to playing forte or pianissimo. Like Fast Track, it is USB-powered. There are Pitch Bend and Modulation wheels on the keyboard to tweak the sound as you play, and Octave buttons for when you want to emulate a piccolo or double bass.

The keyboard has no built-in sound capabilities of its own; it is intended to be used with the Session software, which has a good collection of MIDI samples and effects and outputs via Fast Track, or with M-Audio's Micro USB Audio Interface, which is supplied with Session. Our plan was to try it with LMMS, Rosegarden, Timidity, and any other Linux MIDI and audio software we find.

Whether you have basic on-board sound or a surround sound card for gaming, inputs are usually limited to line-in and microphone jacks. These are adequate for VoIP phone calls or recording from tape/record decks, but not much else. They are certainly not compatible with electric guitar jacks or XLR plugs for stage mics. According to M-Audio, the Fast Track USB interface is 'ideal for recording guitar and vocals,' but it could also be used to record any line level sound source, as long as you have the correct cable.

The design is nice and simple; the front panel has three knobs, one controlling the mic input level, another controlling the input/playback mix ratio, and the third controlling the main output level (this only affects the headphone and the RCA output volume). It also sports signal and peak indicator LEDs, 1/8-inch stereo headphone output, and a stereo/mono monitoring selection button.

The rear panel of the unit consists of a balanced XLR input socket, a quarter-inch jack line/instrument input, input level switch button (line/guitar), stereo RCA outputs, USB connection socket and also a Kensington lock connector. There's no power supply needed; Fast Track takes its power from the USB port. A possible 'gotcha' is that Fast Track replaces your sound card or on-board sound system, so you need to plug a couple of speakers into the interface.

ALSA and ASIO

Typical PC sound systems are created for playback, with inputs a poor second-level task they can just about cope with. In a Microsoft Direct Sound system the signal is transferred from the sound card via PCI to the CPU and back, and during the trip it may have to queue up while the CPU does other stuff for Windows. To avoid such latency Steinberg (of Cubase fame) developed the ASIO (Audio Stream Input/Output) protocol, which allows the audio interface to connect directly with the PC hardware, reducing latency.

Under Linux we have a similar latency problem with ALSA (Advanced Linux Sound Architecture), but there is no Linux implementation of ASIO. There are hacks involving compiling some Wine code with Steinberg drivers, but instead we turned to LMMS, a real-time kernel, JACK, and Ubuntu Studio.

KeyStudio and LMMS

Though work on LMMS (Linux MultiMedia Studio) began in 2003, the project is still in its infancy. We did our testing under KDE in Kubuntu 7.10; the version supplied by the Ubuntu repositories is 0.3.0, which is a bit unstable but usable. LMMS is a MIDI editor with built-in recording and playback which works with 'dumb' keyboards like this one with no sound system of their own, or with hardware synths.

When you install LMMS, plug in the KeyStudio keyboard, and run LMMS, the software automatically detects the keyboard. To get started, click the Samples tab and open the instruments folder. Double-click the instrument of your choice to open it in the Beat+Baseline Editor. Click the keyboard icon, select MIDI Input, and select your keyboard. Repeat the process for MIDI output and select your sound card. Now you're ready to start playing. To record, double-click on the track in Beat+Baseline editor to bring up a Piano Roll editor with a Record Button.

LMMS comes with a decent supply of virtual instruments and some beat and bass loops. It somehow manages to avoid latency problems, and it just works. However, it cannot save a MIDI file to send to a real muso to arrange properly, and it can't create a traditional score upon which you can enter the lyrics. However, Rosegarden with LilyPond can.

KeyStudio and Rosegarden

Rosegarden is, like LMMS, primarily a MIDI editor, but unlike LMMS it is aimed at professional users and follows the normal Linux practice of linking to existing applications rather than being a standalone application. Rosegarden can link to various software synths, effects, drum simulators, and audio applications via JACK (Jack Audio Connection Kit), a software version of the cat's cradle of cables you see in real studios. Rosegarden can also link to LilyPond, which is a conventional musical notation editor that lets you print 'real music.'

However, these applications don't work well under Kubuntu. Start Rosegarden and it will tell you the JACK server isn't running and you don't have a low-latency kernel. JACK can link every bit of audio or MIDI software and hardware with every other bit, but it won't play nice with aRts, the KDE sound system; run JACK and aRts dies, so you get no audio output.

To continue, we installed Ubuntu Studio, which is a distro in its own right, but you can also install it as a meta package on a conventional Ubuntu or Kubuntu installation. As a distro it comes with a patched kernel, Linux-RT (Real Time), which gives priority to media work and reduces latency. If you install Studio from a normal (K)Ubuntu system, install the RT kernel first. Reboot and press Escape to enter the GRUB boot menu, and choose the RT kernel. If the system loads correctly, you can install Studio.

Rosegarden is less intuitive than LMMS, but ultimately more versatile. Once you've discovered all the settings in JACK needed to make it work, then it works well with the KeyStudio keyboard. We also tested the KeyStudio with ZynAddSubFX, a virtual synth included in the Ubuntu Studio distribution. The setup configuration was simple -- just a case of creating a connection between the MIDI out device of the KeyStudio and the MIDI in device of the ZynAddSubFX using the Connection window in the MIDI settings section of JACK. Thus the MIDI from the keyboard is sent to Rosegarden, Rosegarden outputs to ZynAddSubFX, and in the Audio lists you connect ZynAddSubFX to the sound card. Performance was good, with no apparent delay in hearing a sound after striking a key.

Ubuntu Studio, JACK, and Fast Track

Buoyed by our success with the KeyStudio, we set up Fast Track USB, but this time we didn't get far. The unit's power LED lights up when plugged in, and the unit is correctly recognized by JACK and listed as Fast Track in the interface list. A problem with any audio recording on a computer is that the PC's other activities can interrupt the smooth flow of data, resulting in pops and pauses. On Linux systems these are known as Xruns, and JACK will let you know if you are suffering from them. Fast Track didn't cause any, which is good, but we were not able to record any audio in Audacity, Rosegarden, or Ardour. Of the three, Audacity was the only one that gave any indication that something was wrong, telling us to check our interface settings whenever we attempted recording. Rosegarden and Ardour didn't throw up any errors at all; they just failed to capture or transmit any audio to or from the Fast Track.

There is an open source driver for M-Audio USB interfaces, but unsurprisingly it hasn't been updated for the recently released Fast Track yet. Until it is, it doesn't look like we'll be using Fast Track on Linux.

However, KeyStudio's support under Linux is a triumph for open standards. The keyboard uses MIDI and works with MIDI software on any platform, much as you'd expect. Fast Track uses Steinberg's ASIO, and doesn't.

 

Zabbix 1.4.4 From Source On Debian Etch

Originally published as "Zabbix 1.4.4 from source on Debian Etch." This guide will walk you through installing Zabbix 1.4.4 from source on Debian Etch. 1.4.4 has many improvements over what is currently available in apt, and the build is not hard, so you might as well do it this way. *Note: this walkthrough assumes that you will be running the Zabbix database on the same machine as the frontend. You don't have to, obviously; just do the MySQL setup on whatever DB server you are using and point the necessary settings at it.

Required Packages: build-essential libmysqlclient-dev libssl-dev libsnmp-dev apache2 libapache2-mod-php5 php5-gd php5-mysql mysql-server

aptitude -y install build-essential libmysqlclient-dev libssl-dev libsnmp-dev apache2 libapache2-mod-php5 php5-gd php5-mysql mysql-server

Zabbix needs to have its own user and group so let's create them (you need to do this as root).

groupadd zabbix
useradd -c 'Zabbix' -d /home/zabbix -g zabbix -s /bin/bash zabbix
mkdir /home/zabbix
chown zabbix:zabbix /home/zabbix

Let's set up the MySQL database for zabbix.

mysql -p -u root
create database zabbix;
grant all on zabbix.* to 'zabbix'@'localhost' identified by 'PASSWORD';
quit;

Where PASSWORD is the password you want zabbix to connect to the database with.

Let's go ahead and grab the zabbix source.

su - zabbix
wget http://internap.dl.sourceforge.net/sourceforge/zabbix/zabbix-1.4.4.tar.gz
tar zxvf zabbix-1.4.4.tar.gz
cd zabbix-1.4.4

Now let's build the source as the zabbix user, then install zabbix_server and zabbix_agentd as root. Note that exit drops you back into your root shell in its previous working directory, so change into the source tree again before running make install.

./configure --prefix=/usr --with-mysql --with-net-snmp --enable-server --enable-agent
make
exit
cd /home/zabbix/zabbix-1.4.4
make install

We need to add the zabbix ports to /etc/services, and create some config files for zabbix.

echo "
zabbix_agent 10050/tcp # Zabbix ports
zabbix_trap 10051/tcp" >> /etc/services
mkdir -p /etc/zabbix
chown -R zabbix:zabbix /etc/zabbix
cp misc/conf/zabbix_* /etc/zabbix
vim /etc/zabbix/zabbix_agentd.conf

ensure Server=127.0.0.1

vim /etc/zabbix/zabbix_server.conf

ensure DBHost=localhost or your db host
ensure DBName=zabbix
ensure DBUser=zabbix
ensure DBPassword=ZABBIX_PASSWORD

where ZABBIX_PASSWORD is the password you set when creating db.

The zabbix package has init scripts for Debian and they only need minor modification to get them working so let's use them.

cp /home/zabbix/zabbix-1.4.4/misc/init.d/debian/* /etc/init.d/

Now modify both of those scripts changing

DAEMON=/home/zabbix/bin/${NAME}

to

DAEMON=/usr/sbin/${NAME}
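If you prefer not to open an editor, the same substitution can be scripted with sed. Here's a sketch, demonstrated on a scratch copy under /tmp (the demo path is illustrative; for real use, point sed at the scripts you copied into /etc/init.d):

```shell
# Demonstrate the DAEMON path fix on a scratch copy of the relevant line
printf 'DAEMON=/home/zabbix/bin/${NAME}\n' > /tmp/zabbix-init.demo
sed -i 's|DAEMON=/home/zabbix/bin/|DAEMON=/usr/sbin/|' /tmp/zabbix-init.demo
cat /tmp/zabbix-init.demo   # prints DAEMON=/usr/sbin/${NAME}
```

Run the same sed command against both init scripts and you're done.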

Great; now we just need to get the database schema loaded, and then we need to set up the frontend. Let's load the database schema first.

mysql -u root -p zabbix < /home/zabbix/zabbix-1.4.4/create/schema/mysql.sql
mysql -u root -p zabbix < /home/zabbix/zabbix-1.4.4/create/data/data.sql
mysql -u root -p zabbix < /home/zabbix/zabbix-1.4.4/create/data/images_mysql.sql

Great; now it's just the frontend left. I like to put all of my webapps down in /var/www.

mkdir -p /var/www/zabbix
cp -R /home/zabbix/zabbix-1.4.4/frontends/php/* /var/www/zabbix/
chown -R zabbix:zabbix /var/www/zabbix/*

Create /etc/apache2/sites-available/zabbix with the following content:

<VirtualHost *:80>
    ServerName zabbix.fqdn.tld
    DocumentRoot /var/www/zabbix
    <Directory /var/www/zabbix>
        Options FollowSymLinks
        AllowOverride None
    </Directory>
</VirtualHost>

I disable the default site, and enable the zabbix site with a2ensite:

a2ensite zabbix
a2dissite default

Just a few minor edits in /etc/php5/apache2/php.ini:

max_execution_time = 300
date.timezone = UTC

Restart apache, zabbix-server, and zabbix-agent and you should be ready to rock and roll. You will need to browse to your zabbix frontend and complete the web-driven install which should be easy enough.

/etc/init.d/apache2 restart
/etc/init.d/zabbix-server start
/etc/init.d/zabbix-agent start
update-rc.d zabbix-server defaults
update-rc.d zabbix-agent defaults

* adapted and updated from http://www.howtoforge.com/zabbix_network_monitoring

 

Scripting Scribus

Have you ever said, "This program is pretty nice, but I wish it would ..."? For applications that offer the capability, scripting gives users the ability to customize, extend, and tailor a program to meet their needs. Scribus, a free page layout program that runs on Linux (and Mac OS and Windows) uses the Python programming language for user scripting. Python scripting in Scribus can drastically improve your work flow, and it's relatively easy for beginners to not only use scripts, but also write them.

Scripts are useful for page layout in a few interrelated ways, including automating repetitive tasks and tasks that involve measuring, such as placing page elements and creating page guides.

Not much is required to use Python scripts in Scribus. If your distribution successfully runs Scribus, then you probably have the required Python components. For this evaluation, I downloaded, compiled, and installed the latest stable Scribus version (1.3.3.11). You can start a Python script from within Scribus by going to the Script menu and choosing either Execute Script..., which opens a dialog box for selecting a script to run, or Scribus Scripts, which lists the scripts in /usr/share/scribus/scripts, the directory that holds the official Scribus scripts. Placing additional scripts in that directory (as the root user or using sudo) makes those scripts available from the menu.

Two official scripts are provided: CalendarWizard.py and FontSample.py. Both feature dialog boxes with numerous options that showcase how Python scripts can extend the functionality of Scribus. The font sample script can take a long time to run, depending on your processor speed, memory, and number of fonts, but Scribus displays a handy progress bar showing the percentage of script completion.

Additionally, the /usr/share/scribus/samples directory contains 15 scripts intended not just for immediate use, but also as samples to study when creating your own scripts. The scripts in the samples directory are varied and range from a heavily commented boilerplate script (boilerplate.py) to functional scripts that, for example, set up a layout for CD pockets (pochette_CD.py), or add legends to images (legende.py). As the titles of some of the scripts indicate, many have comments and even dialog box text written in the native languages of the script authors, but the script description is usually in English.
More Scripts

More Scribus scripts are available online at the Scribus wiki's page on scripts and plugins. There I found a script to make empty space around an image inside an image frame -- something not yet possible in Scribus itself. The script works by drawing a second, empty frame 10 measurement units larger than the selected image or text frame. When I first ran the script, I had my default units set to inches, and the script created a 10-inch border around the image I selected. If you want to use this script without modification, be sure that your default units are set to points.

A more comprehensive approach to manipulating images uses a series of scripts for aligning an image inside its frame, scaling an image to fill a frame proportionally (i.e., scaling the image to the largest size possible within the frame while keeping its proportions intact), and scaling and aligning an image via a wizard and an advanced wizard that build upon the first two scripts. These scripts are great examples of how Python scripting extends Scribus's capabilities.

Using scripts that others have written is as simple as copying them from the Web page, pasting them into a text editor (preferably one that is aware of Python syntax, such as Emacs, Vim, Kate, or gedit), and then saving the script to a file ending in .py. You can then run the script from the Scribus script menu. The advantage of pasting the script into a syntax-aware text editor is that white space is important in Python, and a good editor will help you check that everything is aligned correctly.
Writing a script

Prior to doing the research for this article, I had not done any programming in Python. I did, however, have extensive experience using Adobe's scripting language for PageMaker, and I found that most of the principles carried over. A wonderful resource for beginners wanting to learn more about Python is the 110-page PDF tutorial A Byte of Python by Swaroop C H. It is well-written, easy to follow, and may be the best introduction to Python programming available.

Armed with a little bit of knowledge of Python, and having the scripting application programming interface (API) available online and from Scribus's help menu, I set out to write a couple of scripts. With all the sample scripts available, I did not have to start from scratch.

I began by modifying the script for making an empty space around an image so that the space would be 10 points regardless of the default measurement unit set by the user. To do that, I needed to get the current units setting, store it in a variable, temporarily set the units to points, and then reset the units to their original setting. To accomplish those tasks, I added the following commands to the script I downloaded:

* userUnit = getUnit() -- sets a variable storing the current units setting
* setUnit(0) -- sets the units to points (0); (1) is millimeters, (2) is inches, and (3) is picas
* setUnit(userUnit) -- resets the units to the original setting
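These unit codes map to standard typographic conversion factors. As a standalone sketch in plain Python (not the Scribus API) of why a "10-unit" border balloons when the default unit is inches:

```python
# Unit codes as the Scribus scripting API numbers them:
# 0 = points, 1 = millimeters, 2 = inches, 3 = picas.
# Factors are the standard typographic values relative to points.
UNIT_FACTORS = {
    0: 1.0,          # points
    1: 72.0 / 25.4,  # millimeters to points
    2: 72.0,         # inches to points
    3: 12.0,         # picas to points
}

def to_points(value, unit):
    """Convert a length in the given unit code to points."""
    return value * UNIT_FACTORS[unit]

print(to_points(10, 2))  # → 720.0: a "10-unit" border in inches is 720 points
```

This is why the downloaded script drew a 10-inch border: 10 points and 10 inches differ by a factor of 72.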


The script as modified appears below. Because of the way the original author set the script to make sure the script is run from within Scribus, the commands I added needed to be prefaced with scribus..

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import sys

try:
    import scribus
except ImportError:
    print "This script only works from within Scribus"
    sys.exit(1)

def makebox(x, y, w, h):
    a = scribus.createImage(x, y, w, h)
    scribus.textFlowsAroundFrame(a, 1)

def main():
    if scribus.haveDoc():
        scribus.setRedraw(1)
        userUnit = scribus.getUnit()
        scribus.setUnit(0)
        x, y = scribus.getPosition()
        w, h = scribus.getSize()
        x2 = x - border
        y2 = y - border
        w2 = w + border * 2
        h2 = h + border * 2
        makebox(x2, y2, w2, h2)
        scribus.redrawAll()
        scribus.setUnit(userUnit)
    else:
        result = scribus.messageBox('Error', 'You need a Document open, and a frame selected.')

# Change the 'border' value to change the size of space around your frame
border = 10

main()


When I first started working with Scribus, I missed having the ability to make a single underline of a text or image frame. This capability is particularly handy when setting up page headers. Looking at the sample scripts, I saw that legende.py did something similar to what I wanted to do. That script gets the size and location of an image frame and then places a text box a few millimeters below the lower right corner of the box. I needed to do something similar, except that I needed my script to draw the line from the lower left corner to the lower right corner without an offset. So I modified the legende.py script and saved it as underline_block.py.

The key to making the script work is realizing that the getPosition function gets the x and y page coordinates of the upper left corner of the frame. To get the positions of the other corners, I need the height and width of the frame. When that information is stored in variables, then drawing the line is a matter of specifying the x and y coordinates of the lower two corners in relation to the height and width. The command createLine(x, y+h, x+l, y+h) accomplishes drawing the line from the bottom left to the bottom right. The full script is below:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Draws a line below the selected frame.
"""

import sys

try:
    from scribus import *
except ImportError:
    print "This script only runs from within Scribus."
    sys.exit(1)

import os

def main():
    userUnit = getUnit()
    setUnit(1)
    sel_count = selectionCount()
    if sel_count == 0:
        messageBox("underline_block.py",
                   "Please select the object to add a line to before running this script.",
                   ICON_INFORMATION)
        sys.exit(1)
    x, y = getPosition()
    l, h = getSize()
    createLine(x, y+h, x+l, y+h)
    setUnit(userUnit)

if __name__ == '__main__':
    main()


Like any other type of programming, creating scripts is an iterative process. When writing or modifying a script, it is easy to work in a cycle of edit, save, run, check, and edit. The iterative approach also applies to improving scripts. After using the underline_block.py script for a while, I may want to modify it so that I can choose to add a line above the block rather than below it. To do that, I'll need to add a dialog box so I can choose the position. If I do that, I may want to add something to the dialog so I can choose the line style too. Each of the embellishments makes the script more general and more useful.

As the examples illustrate, Python scripts are a useful way to customize and extend Scribus regardless of your level of programming experience.

 

Linpus offers a Linux for newbies and experts alike

Linpus Technologies has long been known in Taiwan for its Linux distributions. Now, it wants to become a player in the global Linux market with its new Linux distribution Linpus Linux Lite, which features a dual-mode user interface. One mode is for people who may never have used a computer before; the other is for experienced Linux users.

According to the company, these two modes are Easy and Normal. Easy mode uses large, colorful icons, arranging software in terms of its use. So, for example, instead of offering users a choice of Web browser and e-mail programs, there's an icon for the Internet. Under this icon, there are other icons for Firefox, as well as links that use Firefox to automatically connect to Google Maps, Wikipedia and YouTube. If users want a more traditional PC interface, they merely need to tap an icon on the master tool bar and they'll switch to Normal mode, which is a KDE 3.5x desktop.

This functional approach to the desktop is quite similar to that of Good OS' gOS 2.0. With gOS, which is deployed on Everex's inexpensive gPC, both Internet and office applications are built around Google's online software stack. Linpus offers a middle-of-the-road approach with an easy-to-use, functional desktop interface, but with the more usual PC-based applications underneath it.

Linux Lite is also designed to run on minimal hardware. Linpus claims the product will run well on PCs with 366MHz processors, 128MB of DRAM (dynamic RAM) and 512MB of disk space. At the same time, Linux Lite comes with an assortment of open-source software staples, such as OpenOffice.org.

"Our objective with this product was to create an operating system that offered choice and addressed specifically the ease-of-use needs of end users of UMPC [Ultra-Mobile PC] devices," Warren Coles, Linpus' sales and marketing manager, said in a statement. "If you are using a small screen, if you are a child, older person or inexperienced user, you will find the icon interface particularly helpful."

While Linpus would be happy to see end users pick up Linux Lite, the company is really targeting hardware vendors. "Our company has always been committed to creating user-friendly, mass-market Linux," Coles said. "Because of this, we have invested our time into not just being another desktop distribution, but in resolving all the issues involved in getting desktop Linux to market.

"Specifically, we provide unprecedented levels of support for hardware vendors -- and we recently pioneered our own preload solution and have worked extremely hard to create stable sleep and suspend modes for notebooks," he said.

"By having operating system, application and driver teams working side by side, in close proximity to the hardware manufacturers, we offer tremendous quality, value and time-to-market strengths," Coles added. "Ultimately, both the consumer and Linux enthusiasts benefit from a smooth, stable, out-of-the-box Linux operating system at the best price."

Reading between the lines, Linpus is encouraging would-be North American resellers and systems integrators to work with Linpus and Taiwanese PC vendors to deliver inexpensive, small laptops to the American market. Asustek has already shown this approach can be successful, with its popular Eee Linux desktop and laptop PCs.

 

Government/corporate project declares plan to promote OSS within the EU

An ambitious initiative that aims to bring open source software to a new level in Europe hopes to make competition with US companies more interesting. QualiPSo is a four-year project partly funded by the EU. Its mission is to "bring together the major players of a new way to use and deploy open source software (OSS), fostering its quality and trust from corporations and governments."

QualiPSo members include corporations, universities, and public administrations (PA) of various kinds from 20 countries. The main industrial players are Mandriva, Atos Origin, Bull, and Engineering Ingegneria Informatica. While Qualipso founders include organizations from China and Brazil, the main project focus now is on Europe.

QualiPSo was officially launched a year ago. Last month, the group held its first international conference in Rome to present its mission and its initial results. On the first day of the two-day event, several speakers explained why their companies are promoting OSS on such a large scale. The second day was devoted to presenting the main Qualipso subprojects.

What does QualiPSo do?

E-government is the area where Qualipso members hope to make the most money, and where they think they can impact the most EU citizens. Many citizens couldn't care less whether their tax office runs closed or open source software, or if the provider of that software is an American or European corporation, but they do care if filing tax forms online is safe and cheap, and results in quick action. Thus current plans call for QualiPSo to work in 10 distinct areas toward interoperability, a word with many different meanings.

During a face-to-face talk, QualiPSo representatives explained to me exactly what they mean by interoperability. According to QualiPSo, large organizations spend about 40% of their IT budget on integration projects. OSS is not interoperable per se, but often the real obstacles are not in the code. Development and publication of proper design practices or open, fully compatible software interfaces are the first and simplest area in which QualiPSo will work to improve OSS interoperability.

On a different plane, metadata such as software categories, relevant technologies, or developer skills aren't stored or presented in a coherent way in SourceForge.net, BerliOS, or other repositories. Next generation software forges from QualiPSo would make it possible for an integrator to build complete OSS products combining (and maintaining) components stored in different repositories with the minimum possible effort.

The last type of interoperability addressed by QualiPSo -- and maybe the most important -- is the organizational and bureaucratic one. E-government and quick business decisions remain dreams if the three different departments that have to approve a budget change do it through three different procedures incompatible in terminology, security, and interfaces. Qualipso members will provide support to integrate all such procedures or guarantee that they really are interoperable.

Thorough interoperability testing at all these levels is costly, time-consuming, and boring enough to attract little volunteer work, if any. To make such testing easier, QualiPSo plans to create lightweight test suites to evaluate the actual interoperability of OSS components and their quality from this and other points of view. More details are in the Interoperability page on QualiPSo's Web site.

Another interesting item in the QualiPSo agenda is the legal subproject. A single programmer merrily hacking in his basement for personal fun may simply patch and recompile GPL code or stick a GPL or similar label on any source code he releases online and be done with it. A large corporation or PA cannot afford legal troubles, especially if it operates in countries whose legislation differs from that of the USA, the country where most current OSS licenses were designed. As SCO and others have demonstrated, even when it's certain that the bad guys are wrong, proving it wastes a lot of time and money that would have been better spent elsewhere (especially if it was public money). QualiPSo plans to provide a family of OSS licenses for both software and documentation guaranteed to be valid under European laws, together with methodologies to evaluate and properly manage any intellectual property issues.

Four QualiPSo Competence Centers are scheduled to open in Berlin, Madrid, Paris, and Rome during the fall of 2008. Their purpose will be to make all the QualiPSo resources, services, methods, and tools mentioned here and on the Web site available to potential OSS adopters, whether they be individuals, businesses, or PAs.

Critics from the trenches

Roberto Galoppini and other bloggers have noted that the Qualipso reports cost a lot of money and contain little new information, and asked questions such as: Is the amount of public money going into QualiPSo excessive? Will that public money benefit only the corporations that are members of the project? Will the Competence Centers and other initiatives be abandoned as soon as public funding isn't enough to sustain them? During conference breaks I heard or overheard comments along these lines by several attendees.

Jean-Pierre Laisné, leader of the Competence Center subproject, acknowledged that much of the information in the Qualipso reports isn't new. However, he says, it still is information that needed to be organized and declared officially, in a way that constitutes a formal commitment to support OSS from local businesses to states and other large organizations in Europe that, for any reason, cannot or will not listen to hackers in the street. This is more or less the same thing that blogger Dana Blankenhorn said about the cost and apparent obviousness of QualiPSo.

Right now, QualiPSo is a way for Europe-based corporations to get the biggest possible slice of OSS-related contracts from large private or public European organizations. The fact that the group already includes Brazilian and Chinese members -- that is, that software houses in those countries may join the group to apply the same strategy in their home markets -- makes Qualipso all the more interesting, especially for US observers. Qualipso may become a home for software vendors outside the USA that want to kick IBM, Microsoft, Sun, and Oracle out of their local markets -- something no non-US company can do today alone.

If European PAs are to cost less, be more transparent and efficient, and generally move away from manual, paper-based procedures that are expensive and slow, there must be clear rules, tools, and practices to build and recognize quality OSS -- that is, software that is solid, reliable, actually interoperable in the real world, and completely compatible at all levels with local laws. However, hammering out all the deadly boring details of how to implement interoperable bureaucratic procedures in software is something that no volunteer is ever going to do. This is an area where a bit of assistance from the private sector in the form of an organization like QualiPSo wouldn't hurt, at least in some EU countries.

So far, it's not clear how open QualiPSo's operations will be, or how much its activities will benefit all of the European OSS community, not just QualiPSo members. Besides these concerns, in this first year there has also been grumbling about the lack of a published work plan and, in general, of enough information and interaction between QualiPSo and the community. There is still time to fix this now that the project has officially gone public.

However, QualiPSo may make it harder for European PAs at all levels, from parliaments to the smallest city or school council, to ignore OSS, no matter who proposes it. QualiPSo may officially bring OSS, even in Europe, to a level where you cannot be fired for not choosing Microsoft or any other proprietary software. The goal of the "Exploitation and dissemination" subproject is to "promote OSS at a political level, as well as laws and regulations supporting OSS." Mentioning QualiPSo reports with their EU blessings could also be an excellent argument for all the European public employees who promote OSS, such as the ROSPA group in Italy, to convince their managers that it is safe, after all, to create local IT jobs by buying OSS products and services by (any) local businesses.

Even the fact that companies and PAs have already spent public money may make it easier for citizens to demand control and involvement from their representatives, both inside QualiPSo and in any other situation where OSS gets much public praise but much less public funding. If OSS is so good that even the EU partners with big corporations to spread it, why isn't it used more?

All in all, there are plenty of good reasons to follow QualiPSo with interest and see where it will go in the upcoming months.

 

Discover the possibilities of the /proc folder

The /proc directory is a strange beast. It doesn't really exist, yet you can explore it. Its zero-length files are neither binary nor text, yet you can examine and display them. This special directory holds all the details about your Linux system, including its kernel, processes, and configuration parameters. By studying the /proc directory, you can learn how Linux commands work, and you can even do some administrative tasks.

Under Linux, everything is managed as a file; even devices are accessed as files (in the /dev directory). Although you might think that "normal" files are either text or binary (or possibly device or pipe files), the /proc directory contains a stranger type: virtual files. These files are listed, but don't actually exist on disk; the operating system creates them on the fly if you try to read them.

Most virtual files always have a current timestamp, which indicates that they are constantly being kept up to date. The /proc directory itself is created every time you boot your box. You need to work as root to be able to examine the whole directory; some of the files (such as the process-related ones) are owned by the user who launched the process. Although almost all the files are read-only, a few writable ones (notably in /proc/sys) allow you to change kernel parameters. (Of course, you must be careful if you do this.)
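You can see the zero-length oddity from the shell. A quick sketch on any Linux box:

```shell
# /proc/version is listed with size 0, yet reading it returns the kernel banner
ls -l /proc/version
cat /proc/version
# uname reports the same kernel information through a system call
uname -srv
```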

/proc directory organization

The /proc directory is organized in virtual directories and subdirectories, and it groups files by similar topic. Working as root, an ls /proc command brings up something like this:


1 2432 3340 3715 3762 5441 815 devices modules
129 2474 3358 3716 3764 5445 acpi diskstats mounts
1290 248 3413 3717 3812 5459 asound dma mtrr
133 2486 3435 3718 3813 5479 bus execdomains partitions
1420 2489 3439 3728 3814 557 dri fb self
165 276 3450 3731 39 5842 driver filesystems slabinfo
166 280 36 3733 3973 5854 fs interrupts splash
2 2812 3602 3734 4 6 ide iomem stat
2267 3 3603 3735 40 6381 irq ioports swaps
2268 326 3614 3737 4083 6558 net kallsyms sysrq-trigger
2282 327 3696 3739 4868 6561 scsi kcore timer_list
2285 3284 3697 3742 4873 6961 sys keys timer_stats
2295 329 3700 3744 4878 7206 sysvipc key-users uptime
2335 3295 3701 3745 5 7207 tty kmsg version
2400 330 3706 3747 5109 7222 buddyinfo loadavg vmcore
2401 3318 3709 3749 5112 7225 cmdline locks vmstat
2427 3329 3710 3751 541 7244 config.gz meminfo zoneinfo
2428 3336 3714 3753 5440 752 cpuinfo misc

The numbered directories (more on them later) correspond to each running process; a special self symlink points to the current process. Some virtual files provide hardware information, such as /proc/cpuinfo, /proc/meminfo, and /proc/interrupts. Others give file-related info, such as /proc/filesystems or /proc/partitions. The files under /proc/sys are related to kernel configuration parameters, as we'll see.

The cat /proc/meminfo command might bring up something like this:

# cat /proc/meminfo
MemTotal: 483488 kB
MemFree: 9348 kB
Buffers: 6796 kB
Cached: 168292 kB
...several lines snipped...

If you try the top or free commands, you might recognize some of these numbers. In fact, several well-known utilities access the /proc directory to get their information. For example, if you want to know what kernel you're running, you might try uname -srv, or go to the source and type cat /proc/version. Some other interesting files include:

  • /proc/apm: Provides information on Advanced Power Management, if it's installed.
  • /proc/acpi: A similar directory that offers plenty of data on the more modern Advanced Configuration and Power Interface. For example, to see if your laptop is connected to the AC power, you can use cat /proc/acpi/ac_adapter/AC/state to get either "on line" or "off line."
  • /proc/cmdline: Shows the parameters that were passed to the kernel at boot time. In my case, it contains root=/dev/disk/by-id/scsi-SATA_FUJITSU_MHS2040_NLA5T3314DW3-part3 vga=0x317 resume=/dev/sda2 splash=silent PROFILE=QuintaWiFi, which tells me which partition is the root of the filesystem, which VGA mode to use, and more. The last parameter has to do with openSUSE's System Configuration Profile Management.
  • /proc/cpuinfo: Provides data on the processor of your box. For example, in my laptop, cat /proc/cpuinfo gets me a listing that starts with:
  • processor : 0
    vendor_id : AuthenticAMD
    cpu family : 6
    model : 8
    model name : Mobile AMD Athlon(tm) XP 2200+
    stepping : 1
    cpu MHz : 927.549
    cache size : 256 KB

    This shows that I have only one processor, numbered 0, of the 80686 family (the 6 in the cpu family line is the middle digit): an AMD Athlon XP, running at less than 1GHz.

  • /proc/loadavg: A related file that shows the average load on the processor; its information includes CPU usage in the last minute, last five minutes, and last 15 minutes, as well as the number of currently running processes.
  • /proc/stat: Also gives statistics, but goes back to the last boot.

  • /proc/uptime: A short file that has only two numbers: how many seconds your box has been up, and how many seconds it has been idle.
  • /proc/devices: Displays all currently configured and loaded character and block devices. /proc/ide and /proc/scsi provide data on IDE and SCSI devices.
  • /proc/ioports: Shows you information about the regions used for I/O communication with those devices.
  • /proc/dma: Shows the Direct Memory Access channels in use.
  • /proc/filesystems: Shows which filesystem types are supported by your kernel. A portion of this file might look like this:
  • nodev sysfs
    nodev rootfs
    nodev bdev
    nodev proc
    nodev cpuset
    ...some lines snipped...
    nodev ramfs
    nodev hugetlbfs
    nodev mqueue
    ext3
    nodev usbfs
    ext2
    nodev autofs

    The first column shows whether the filesystem is mounted on a block device. In my case, I have partitions configured with ext2 and ext3 mounted.

  • /proc/mounts: Shows all the mounts used by your machine (its output looks much like /etc/mtab). Similarly, /proc/partitions and /proc/swaps show all partitions and swap space.

  • /proc/fs: If you're exporting filesystems with NFS, this directory has among its many subdirectories and files /proc/fs/nfsd/exports, which shows the filesystems that are being shared and their permissions.
  • /proc/net: You can't beat this for network information. Describing each file in this directory would require too much space, but it includes /dev (each network device), several iptables (firewall) related files, net and socket statistics, wireless information, and more.
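Several of these files are trivial to post-process with standard tools. As a small sketch, here's a one-liner that turns /proc/uptime's two fields (seconds up and seconds idle) into something human-readable; note that on multi-processor boxes the idle figure is summed across CPUs, so it can exceed the uptime:

```shell
# Convert /proc/uptime's two fields (seconds up, seconds idle) into days.
awk '{ printf "up %.1f days, idle %.1f days\n", $1/86400, $2/86400 }' /proc/uptime
```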

There are also several RAM-related files. I've already mentioned /proc/meminfo, but you've also got /proc/iomem, which shows you how RAM memory is used in your box, and /proc/kcore, which represents the physical RAM of your box. Unlike most other virtual files, /proc/kcore shows a size that's equal to your RAM plus a small overhead. (Don't try to cat this file, because its contents are binary and will mess up your screen.) Finally, there are many hardware-related files and directories, such as /proc/interrupts and /proc/irq, /proc/pci (all PCI devices), /proc/bus, and so on, but they include very specific information, which most users won't need.
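As a sketch of how a script can consume these files, here's a one-liner that pulls the total and free RAM figures out of /proc/meminfo (the values there are reported in kB):

```shell
# Print MemTotal and MemFree from /proc/meminfo, converted to MB.
awk '/^MemTotal:|^MemFree:/ { printf "%s %.1f MB\n", $1, $2/1024 }' /proc/meminfo
```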

What's in a process?

As I said, the numerical named directories represent all running processes. When a process ends, its /proc directory disappears automatically. If you check any of these directories while they exist, you will find plenty of files, such as:

attr cpuset fdinfo mountstats stat
auxv cwd loginuid oom_adj statm
clear_refs environ maps oom_score status
cmdline exe mem root task
coredump_filter fd mounts smaps wchan

Let's take a look at the principal files:

  • cmdline: Contains the command that started the process, with all its parameters.
  • cwd: A symlink to the current working directory (CWD) for the process; exe links to the process executable, and root links to its root directory.
  • environ: Shows all environment variables for the process.
  • fd: Contains all file descriptors for a process, showing which files or devices it is using.
  • maps, statm, and mem: Deal with the memory in use by the process.
  • stat and status: Provide information about the status of the process, but the latter is far clearer than the former.

These files provide several script programming challenges. For example, if you want to hunt for zombie processes, you could scan all numbered directories and check whether the State: line of each status file reads "Z (zombie)". I once needed to check whether a certain program was running; I did a scan and looked at the cmdline files instead, searching for the desired string. (You can also do this by working with the output of the ps command, but that's not the point here.) And if you want to program a better-looking top, all the needed information is right at your fingertips.
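The zombie hunt described above can be sketched as a small shell loop over the numbered directories (a minimal version; a real script would probably also want to report the parent PID from the same status file):

```shell
#!/bin/sh
# Scan every numbered /proc directory and report processes whose
# State: line in the status file marks them as zombies.
for dir in /proc/[0-9]*; do
    if grep -q '^State:.*Z (zombie)' "$dir/status" 2>/dev/null; then
        echo "zombie: PID ${dir#/proc/}"
    fi
done
```

The 2>/dev/null matters: a process can exit between the glob expansion and the grep, making its directory vanish mid-loop.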

Tweaking the system: /proc/sys

/proc/sys not only provides information about the system, it also allows you to change kernel parameters on the fly, and enable or disable features. (Of course, this could prove harmful to your system -- consider yourself warned!)

To determine whether you can configure a file or if it's just read-only, use ls -l; if a file has the "w" permission bit set, you may use it to configure the kernel somehow. For example, ls -l /proc/sys/kernel starts like this:

dr-xr-xr-x 0 root root 0 2008-01-26 00:49 pty
dr-xr-xr-x 0 root root 0 2008-01-26 00:49 random
-rw-r--r-- 1 root root 0 2008-01-26 00:49 acct
-rw-r--r-- 1 root root 0 2008-01-26 00:49 acpi_video_flags
-rw-r--r-- 1 root root 0 2008-01-26 00:49 audit_argv_kb
-r--r--r-- 1 root root 0 2008-01-26 00:49 bootloader_type
-rw------- 1 root root 0 2008-01-26 00:49 cad_pid
-rw------- 1 root root 0 2008-01-26 00:49 cap-bound

You can see that bootloader_type isn't meant to be changed, but other files are. To change a file, use something like echo 10 >/proc/sys/vm/swappiness. This particular example would allow you to tune the virtual memory paging performance. By the way, these changes are only temporary, and their effects will disappear when you reboot your system; use sysctl and the /etc/sysctl.conf file to effect more permanent changes.
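The file paths map directly onto sysctl key names: dots in the key become slashes under /proc/sys. The swappiness example can therefore be spelled either way, as this small sketch shows (the actual writes require root, so they are shown as comments):

```shell
# Both names refer to the same kernel parameter: dots in the sysctl
# key become slashes under /proc/sys.
key=vm.swappiness
file="/proc/sys/$(echo "$key" | tr . /)"
echo "$key -> $file"

# To change it (as root):
#   echo 10 > /proc/sys/vm/swappiness              # temporary, gone after reboot
#   sysctl -w vm.swappiness=10                     # equivalent
#   echo 'vm.swappiness = 10' >> /etc/sysctl.conf  # reapplied at boot / sysctl -p
```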

Let's take a high-level look at the /proc/sys directories:

  • debug: Has (surprise!) debugging information. This is good if you're into kernel development.
  • dev: Provides parameters for specific devices on your system; for example, check the /dev/cdrom directory.
  • fs: Offers data on every possible aspect of the filesystem.
  • kernel: Lets you affect the kernel configuration and operation directly.
  • net: Lets you control network-related matters. Be careful, because messing with this can make you lose connectivity!
  • vm: Deals with the VM subsystem.

Conclusion

The /proc special directory provides detailed information about the inner workings of Linux and lets you fine-tune many aspects of its configuration. If you spend some time learning all the possibilities of this directory, you'll be able to build a better-tuned Linux box. And isn't that something we all want?

 

Your next phone could run Linux

Linux seems to have chosen the 2008 Mobile World Congress to quietly make its way onto the new consumer devices on show in abundance at the annual mobile Mecca.

Texas Instruments “G-Phone”

Stalwarts like Symbian and Microsoft have been somewhat upstaged by the rough-and-tough technology concept demonstrations of Google’s mobile platform, Android. Both Qualcomm and Texas Instruments showed impressive demos of the platform, with developer boards and some concept devices on show.

Texas Instruments showed off a development board, a development handset and – the show-stopper – Android running on a mobile form factor device. Qualcomm’s demo, on a development board, featured a touch screen and a custom-made mole-whacking game – apparently created in 60 minutes on Google’s Android software development kit (SDK).

Android offers cell phone manufacturers a “stack” of software for rolling out on their mobile phones. Manufacturers will be able to utilise the software to give them a firm base – including operating system, middleware and typical cell phone applications like SMS, contacts, voice and web browser.

Qualcomm’s Android offering

It is hoped that the Linux stack and good SDKs will encourage application developers to create more apps for the mobile platform. Vodafone’s CEO, Arun Sarin, stated that he believed there should be no more than four or five operating systems for mobile phones, compared to the 40 in the market currently. The proliferation of mobile platforms has severely hamstrung the roll-out of applications for the “fourth screen”.

While Android offers a full stack, the LiMo (Linux Mobile) Foundation delivers a unified middleware and OS layer – the manufacturers build all the applications on top of the platform. Some manufacturers clearly prefer this model, giving them the ability to completely customise the user experience on their platforms. LiMo is significantly more advanced than Android after its year in the market – the LiMo Foundation showed off phones from the likes of Motorola, LG, NEC, Panasonic and Samsung.

Motorola’s Motorokr E8 LiMo phone

Thanks to Google’s lead in the software, the Android phones have been dubbed “G-Phones”, in response to Apple’s iPhone nomenclature, although more than 30 technology companies are part of the Open Handset Alliance, the group backing Android.

LiMo representatives stated that Android would not compete directly with its foundation, although Android’s promoters told Tectonic that they believed that Android’s full solution will prove more popular over time.

Since Android has been demo’d running live on processors and chipsets from TI and Qualcomm, the platform is technically ready for manufacturers to develop and prototype the solution. We can expect some Android devices at next year’s MWC. Should Sarin’s vision of four or five operating systems come true, Linux is a safe bet as one of them.

Source : http://www.tectonic.co.za

 

KnowledgeTree Document Management System On Ubuntu 7.10 Server

This guide will walk you through installing the KnowledgeTree Document Management System on Ubuntu 7.10 Server. It does not include any pictures; I just felt that, with this type of install, they were not warranted.

Please note that this installation is performed on a base install of Ubuntu 7.10. Since the KnowledgeTree stack installer contains its own versions of Apache and MySQL, it will cause problems on an existing LAMP server.

With that warning out of the way, let's begin.

After you have installed Ubuntu 7.10 Server (remember - a base installation; do not install Apache or MySQL), we need to perform a few steps to get the system ready.

Edit sources.list

In this step, I will edit out the CD-ROM from the sources.list configuration. You do not have to perform this step, I just don't like using the CD for software installations.

sudo nano /etc/apt/sources.list

The section that we are looking for will read:

deb cdrom:[Ubuntu-Server 7.10 _Gutsy Gibbon_ - Release i386 (20071016)]/ gutsy main restricted

Add a "#" in front of that line so that it reads:

# deb cdrom:[Ubuntu-Server 7.10 _Gutsy Gibbon_ - Release i386 (20071016)]/ gutsy main restricted

Press "Ctrl o" to write out the changes and "Ctrl x" to exit nano.

With that done, you need to update sources.list. This can be done by typing:

sudo apt-get update

After the update has finished, you will want to ensure that you have the most recent updates for your server. Run the following command to perform the upgrade:

sudo apt-get upgrade

Finally, you can install openssh-server, so that the rest of the installation can be performed remotely.

sudo apt-get install openssh-server

The rest of this tutorial can be performed remotely using an SSH client, such as PuTTY.

In order to use the email functionality with KnowledgeTree, you will want to install an SMTP server. For this guide, I will use Sendmail.

sudo apt-get install sendmail

When prompted, type "y" to install Sendmail and its dependencies. There should be a total of nine packages installed, including:

liblockfile1 m4 make procmail sendmail sendmail-base sendmail-bin sendmail-cf sensible-mda

Obtaining And Installing KnowledgeTree

You are now ready to get the KnowledgeTree installer. I like to work out of the tmp directory for installations such as this. To get there, enter the following command:

cd /tmp/

Use the following command to get the KnowledgeTree installer. At the time of writing this article, the most recent version stood at 3.4.6.

sudo wget http://internap.dl.sourceforge.net/sourceforge/kt-dms/ktdms-oss-3.4.6-linux-installer.bin

Before the installation can begin, you need to change the permissions on the installer to allow it to run. Run the following command:

sudo chmod +x ktdms-oss-3.4.6-linux-installer.bin

With that done, it is time to begin installing KnowledgeTree. To do this, run:

sudo ./ktdms-oss-3.4.6-linux-installer.bin

The following text shows how the installation process will play out. You will be prompted during the installation to enter information pertinent to your environment; the responses appear after each prompt below.

Do you accept this license? [y/n]:
Enter y and press enter
Please specify the directory where KnowledgeTree Document Management System OSS will be installed
Installation directory [/opt/ktdms]: Enter for default
----------------------------------------------------------------------------
MySQL Root Password
Initial password for the DMS root user account
created during the MySQL database installation.
Password : Enter a password of your choosing
Re-enter : Re-enter the same password
----------------------------------------------------------------------------
MySQL User Password
Initial password for the DMS user account
created during the MySQL database installation.
Password : Enter a password of your choosing
Re-enter : Re-enter the same password
----------------------------------------------------------------------------
DB Port
Please enter the port for your MySQL database.
MySQL database Port [3306]: Enter for default
----------------------------------------------------------------------------
WebServer Port
Please enter the port that Apache will listen to by default.
Apache Web Server Port [8080]: Enter for default
----------------------------------------------------------------------------
SSL Support
Do you wish to install SSL support?
Install SSL support [y/n]: This is a personal choice, but since this setup is for home use, I will enter "n", so that SSL is not enabled.
----------------------------------------------------------------------------
Help us make KnowledgeTree a better product
Please help us improve KnowledgeTree by telling us a bit about yourself.We will use this information to more effectively tailor KnowledgeTree for your industry,organization size,and,if you agree,to notify you of news about KnowledgeTree and its family of products.
[1] Yes, I want to register with KnowledgeTree
[2] No, I prefer to skip registration
Please choose an option [1] : Again, this is a personal choice. For the tutorial, I will enter "2" for no.
Please Note: We will not share your information with 3rd parties without your consent nor will we send you information not directly related to KnowledgeTree products and services.Please see our Privacy and Data Retention Policies for more information
----------------------------------------------------------------------------
Setup is now ready to begin installing KnowledgeTree Document Management System OSS on your computer.
Do you want to continue? [Y/n]: "Y" to proceed
----------------------------------------------------------------------------
Setup has finished installing KnowledgeTree Document Management System OSS on your computer.
View Readme file? [Y/n]? Again, this is a personal choice. For the tutorial, I will enter "n" for no.
Open Online Release Notes [Y/n]: Again, this is a personal choice. For the tutorial, I will enter "n" for no.
Launch KnowledgeTree DMS now? [Y/n]: Enter "y" to launch KnowledgeTree

If you are using KnowledgeTree locally, you can open a browser and go to http://127.0.0.1:8080 to reach the dashboard. If you are using KnowledgeTree remotely, you can get to the dashboard by using the server's IP address, such as http://192.168.1.115:8080

The default login information is as follows:
Username: admin
Password: admin

You may also choose your language here, if English (which is the default) is not your native language.

Configure Email Functionality

In order to get the email functionality working, we need to edit the [email] section of the config.ini. Run the following command:

sudo nano /opt/ktdms/config.ini

Enter your password when prompted.

Once you are in the config.ini file, you can easily find the section to edit by using the search function. Press "CTRL W" and on the search line enter:

emailServer = none

Now change this to read:

emailServer = /usr/sbin/sendmail

Press "Ctrl o" to write out your changes and "Ctrl x" to exit nano.

Now, if you refresh the KnowledgeTree dashboard, the email warning will be gone.
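If you'd rather make that change non-interactively (from a provisioning script, say), the same edit can be done with sed; this is just a sketch, and the -i.bak flag keeps a backup copy of the original config.ini:

```shell
# Replace the emailServer setting in place, keeping config.ini.bak
# as a backup of the original file.
sudo sed -i.bak 's|^emailServer = none|emailServer = /usr/sbin/sendmail|' /opt/ktdms/config.ini
```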

Starting And Stopping KnowledgeTree

You can control KnowledgeTree with the following commands:

sudo /opt/ktdms/dmsctl.sh start
sudo /opt/ktdms/dmsctl.sh stop
sudo /opt/ktdms/dmsctl.sh restart

I prefer not to have to do this though, so you can add a command to your crontab and have it run automatically on boot. If you would prefer this, use the following command:

sudo crontab -e

Paste the following line into your crontab:

@reboot /opt/ktdms/dmsctl.sh start

One thing you will notice though, is that KnowledgeTree asks for your MySQL password when starting. To get around this (if security is not a concern), you can edit the dmsctl.sh file in your KnowledgeTree directory.

sudo nano /opt/ktdms/dmsctl.sh

In the dmsctl.sh file, find the following section:

MYSQL_PASSWORD=""

Now change it so that it reads along the following lines:

MYSQL_PASSWORD="MySQL root password you created during the installation"

Press "Ctrl o" to write out your changes and "Ctrl x" to exit nano.

To test and ensure that everything is working as it should be you can reboot and KnowledgeTree should start automatically.

If you want to test this, run the following command:

sudo shutdown -r now

Links For Administration Guides And Assistance

Administration and Configuration
KnowledgeTree Forums
Backing up and restoring KnowledgeTree