Meet the Anti-Nmap: PSAD (EnGarde Secure Linux)

The Port Scan Attack Detector (psad) is an excellent tool for detecting various types of suspicious traffic, including port scans from popular tools such as Nmap, DDoS attacks, and other efforts to brute force certain protocols on your system. By analyzing firewall logs, psad can not only pick up on certain attack patterns, but even manipulate firewall rules to properly respond to suspicious activity.

This article will walk the reader through an EnGarde Secure Linux implementation of psad, from the initial iptables rules setup to the deployment of psad on the server side. By the end of the article, the user will be able to detect certain Nmap scans and have psad respond to these scans by blocking the source.

Prerequisites

You will need:

- A machine with EnGarde Secure Community 3.0.18 or above installed to do your development on. These commands should NOT be run on a production server since psad will eventually deny any type of access from the remote scanning machine!

- A separate machine on the same network with Nmap installed on it. You will be running certain scans on the server from this machine.

Once you have all the above you may log in as root, transition over to sysadm_r, and disable SELinux:

newrole -r sysadm_r

[psad_server]# newrole -r sysadm_r
Authenticating root.
Password:

[psad_server]# setenforce 0


Throughout the HowTo, the server will be referred to as psad_server and the Nmap scanning machine as nmap_scanner.

Install psad

EnGarde Secure Linux makes the installation of psad a breeze due to its Guardian Digital Secure Network (GDSN). You can install the package through the command line:

apt-get install psad

...or log in to WebTool and download the package from the package manager interface.

We will get to the setup of psad itself after we configure the firewall on psad_server to log packets:

iptables Rules Setup

Since iptables is installed out of the box on EnGarde Secure Linux, you only have to run two simple commands to start logging packets with iptables:

iptables -A INPUT -j LOG
iptables -A FORWARD -j LOG
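
If you want to double-check that the rules are in place and are actually matching traffic, you can list them together with their packet counters. Where the LOG entries themselves end up depends on your syslog configuration, so treat this only as a quick sanity check:

iptables -nvL INPUT | grep LOG
iptables -nvL FORWARD | grep LOG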

From here on out, incoming packets (especially those from Nmap scans) will be logged. Let's see if we can start detecting such scans by setting up psad to do so.

psad Configuration

On psad_server, use your favorite editor to modify the /etc/psad/psad.conf file. We're interested in the following tunables:

EMAIL_ADDRESSES
HOSTNAME
SYSLOG_DAEMON
ETC_SYSLOGNG_CONF


- EMAIL_ADDRESSES should be set to whichever email addresses you wish psad to send feedback to. This feedback includes error messages and alerts of potentially dangerous scans, depending on danger levels that can be fine-tuned for your purposes.

- The HOSTNAME tunable will be the hostname of the psad_server machine.

- The SYSLOG_DAEMON refers to the logging daemon for the machine. For EnGarde Secure Linux, this should be set to 'syslog-ng'.

- The ETC_SYSLOGNG_CONF refers to the direct path of the syslog-ng daemon's configuration file. For EnGarde Secure Linux, this should be set to '/etc/syslog-ng.conf'.
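
For reference, the finished entries in /etc/psad/psad.conf could look something like the following (the address and hostname are only examples - match the format of the existing lines in the file):

EMAIL_ADDRESSES       root@localhost;
HOSTNAME              psad_server;
SYSLOG_DAEMON         syslog-ng;
ETC_SYSLOGNG_CONF     /etc/syslog-ng.conf;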

Once you've properly configured those tunables, you can start the psad daemon:

/etc/init.d/psad start

[psad_server]# /etc/init.d/psad start
[ SUCCESSFUL ] psad Daemons

Note:

As far as danger levels are concerned, these range from one to five and are assigned to the IP addresses from which an attack or scan is detected. They are assigned based on the number of packets sent, the port range, the time interval of the scan, whether or not the packets match psad's attack signatures, and the IP address the packets originated from. Depending on the number of such packets, a level is assigned as per the configuration file. For more information on danger levels and ideas for fine-tuning them, please refer to the resources at the end of the article.
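
As a rough illustration, the packet-count thresholds behind those levels are themselves tunables in /etc/psad/psad.conf; the values below are examples and may differ from the defaults shipped with your version:

DANGER_LEVEL1    5;
DANGER_LEVEL2    15;
DANGER_LEVEL3    150;
DANGER_LEVEL4    1500;
DANGER_LEVEL5    10000;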

psad - Active Detection

We will now use psad to detect certain Nmap scans. On the Nmap scanning machine, run a TCP connect() scan by executing the following:

nmap -sT 1.2.3.4

Replace 1.2.3.4 with the IP address of your psad_server.

If you check the /var/log/psad/fwdata file on the psad_server, you will find entries like the following:

Feb 2 11:58:11 psad_server kernel: IN=eth0 OUT=
MAC=00:0c:29:78:22:73:00:0c:76:4b:f6:3e:08:00 SRC=5.6.7.8
DST=1.2.3.4 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=23609 DF PROTO=TCP
SPT=49021 DPT=113 WINDOW=5840 RES=0x00 SYN URGP=0

We can see that SRC will have the IP address of the nmap_scanner machine, and DST will have the address of the psad_server. Also note that PROTO=TCP, showing that the attack was a TCP connect() scan.

If you had previously configured psad to send email alerts, you will begin receiving emails concerning this scan, containing far more detail than the raw log messages. There are configuration tunables in the /etc/psad/psad.conf file to limit and even disable email:

EMAIL_LIMIT
ALERTING_METHODS
EMAIL_ALERT_DANGER_LEVEL

EMAIL_LIMIT defines the maximum number of emails a configured user will receive for a given IP address.

ALERTING_METHODS can be set to noemail, nosyslog, and ALL, depending on whether you want only syslog-ng messages, email alerts, or both.

EMAIL_ALERT_DANGER_LEVEL is the minimum danger level that must be reached in order for psad to send email alerts concerning a detection. The default setting is one, so you can expect plenty of emails for the purposes of this tutorial.
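
Put together, the relevant section of /etc/psad/psad.conf might look like this (the values shown are illustrative only):

EMAIL_LIMIT                  50;
ALERTING_METHODS             ALL;
EMAIL_ALERT_DANGER_LEVEL     1;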

Here is an example email showing psad output of the previous Nmap scan:

Subject: [psad-alert] DL2 src: nmap_scanner.yournetwork.com dst:
psad_server.yournetwork.com

Danger level: [2] (out of 5)

Scanned UDP ports: [32772: 1 packets, Nmap: -sU]
iptables chain: INPUT, 1 packets

Source: 5.6.7.8
DNS: nmap_scanner.yournetwork.com
OS guess: Linux (2.4.x kernel)

Destination: 1.2.3.4
DNS: psad_server.yournetwork.com

Overall scan start: Mon Feb 2 11:57:19 2008
Total email alerts: 2
Complete TCP range: [64-49400]
Complete UDP range: [32772]
Syslog hostname: unknown

Global stats: chain: interface: TCP: UDP: ICMP:
INPUT eth0 40 1 0

[+] TCP scan signatures:

"P2P Napster Client Data communication attempt"
dst port: 5555 (no server bound to local port)
flags: SYN
sid: 564
chain: INPUT
packets: 1
classtype: policy-violation

As you can see, psad does a wonderful job of taking packet data from logs, analyzing it and producing useful information on the type of scans used.

psad - Active Defense

One of the more prominent features of psad is its active defense implementation - being able to detect Nmap scans is nice, but how do you respond? Let's configure psad to automatically block the source of such scans upon detection.

Before implementing this feature, be aware (as security veterans reading this article will already know) that there is a definite tradeoff to enforcing an active response policy. Although malicious traffic will be blocked, there is always the risk of blocking valid traffic. Attackers can exploit active defenses and turn them against the target by spoofing valid addresses, thus blocking out otherwise harmless traffic.

This only happens in cases where the active response system has been configured to respond to nearly ALL types of potentially harmful traffic, including port scans and port sweeps, and to traffic which does not require bidirectional communication with the target. A better strategy is to respond only to traffic where bidirectional communication is required, i.e. TCP connections. Even then, one must take care to tailor the active response to certain types of TCP connections, such as attempted SQL injection attacks. Please be sure you are absolutely positive of how your detection scheme is working before deploying an active defense.

Using your favorite editor, modify the /etc/psad/psad.conf file. We're interested in the following tunables:

ENABLE_AUTO_IDS
AUTO_IDS_DANGER_LEVEL

ENABLE_AUTO_IDS should be set to 'Y' to enable the automated IDS response.

AUTO_IDS_DANGER_LEVEL, for this HowTo's sake, will be set to '3'. This danger level is customizable and the setting we use in this HowTo is for demonstration purposes only.
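
In other words, the two entries in /etc/psad/psad.conf should end up looking roughly like this (match the format of the surrounding lines):

ENABLE_AUTO_IDS              Y;
AUTO_IDS_DANGER_LEVEL        3;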

Restart psad on the psad_server:

/etc/init.d/psad restart

[psad_server]# /etc/init.d/psad restart
[ SUCCESSFUL ] psadwatchd Daemon
[ SUCCESSFUL ] psad Daemon
[ SUCCESSFUL ] kmsgsd Daemon
[ SUCCESSFUL ] psad Daemons

From the nmap_scanner machine, we'll run an Nmap SYN scan along with the '-P0' switch - this type of scan skips the ping and does not complete the TCP three-way handshake, resulting in fast scans. It usually requires root privileges and is considered a more dangerous scan - just the type of scan that psad detects at a higher danger level.

nmap -sS -P0 -n 1.2.3.4

Replace the '1.2.3.4' with the IP address of your psad_server machine.

psad will detect the SYN scans, and since the danger level of this scan is 3, it manipulates the iptables rules to block the source of the scans. This can be verified on the psad_server by running the following command:

psad --fw-list

[psad_server]# psad --fw-list
[+] Listing chains from IPT_AUTO_CHAIN keywords...

Chain PSAD_BLOCK_INPUT (1 references)
pkts bytes target prot opt in out source destination
820 36080 DROP all -- * * 5.6.7.8 0.0.0.0/0

Chain PSAD_BLOCK_OUTPUT (1 references)
pkts bytes target prot opt in out source destination
0 0 DROP all -- * * 0.0.0.0/0 5.6.7.8

Chain PSAD_BLOCK_FORWARD (1 references)
pkts bytes target prot opt in out source destination
0 0 DROP all -- * * 0.0.0.0/0 5.6.7.8
0 0 DROP all -- * * 5.6.7.8 0.0.0.0/0

You will even receive email alerts informing you of the scan detection, as well as an email informing you that iptables rules have been added to auto-block the nmap_scanner!

Wrapping It All Up

Congratulations, you've successfully implemented psad to actively detect and respond to signature Nmap scans!

Keep in mind this is one of the more basic setups for psad. You can go even further and adjust danger levels to suit your degree of paranoia, put psad into forensics mode, incorporate the software with DShield, and even use psad to manipulate iptables rules manually. A great resource for psad research is 'Linux Firewalls' by Michael Rash. Rash includes several chapters on psad covering not only the theory but also advanced implementations of psad from start to finish. For suggestions on an advanced, finely tuned active defense setup with psad, be sure to check this book out!

Have fun implementing an active defense against those who try to scan your system!

(by Eckie S. from Linuxsecurity.com)

Resources

http://www.linuxsecurity.com

http://www.guardiandigital.com

"'Linux Firewalls' by Michael Rash"

'Knock, Knock, Knockin' on EnGarde's Door'

 

Acer to launch low-cost PCs this year - paper

Acer, the world's No. 3 computer vendor, plans to start selling low-cost laptop PCs this year, following a recent strong reception for similar models from competitors, media reported on Wednesday.

Acer, which previously said it had not planned to sell cheap notebook computers, has changed course to develop PCs to target a new customer base, the Chinese-language Commercial Times quoted company Chairman J.T. Wang as saying.

The company planned to launch the PCs in the second or third quarter of this year, the report said.

It said that Acer was still developing the new model, which could be 7-9 inches wide, and could cost around $470.

Acer declined to comment on the report.

On Wednesday, shares of Acer had risen 2.39 percent to T$49.25 by 0256 GMT, outperforming the benchmark TAIEX index which advanced 0.72 percent.

Taiwan's Asustek Computer Inc, a competitor to Acer, launched its line of low-cost Eee PC laptops last year, with a price tag of as little as $200.

Acer said the new computers would not cannibalise its current business, as such models were aimed at low penetration markets such as PCs for children and developing markets, according to the report, echoing similar previous comments from Asustek.

Asustek has so far been successful in marketing and selling its child-friendly Linux-based notebook globally, although profit margins for the products are thin, analysts have said.

Acer competes closely with China's Lenovo and larger rivals Hewlett-Packard and Dell.

The firm posted a 77 percent surge in its fourth quarter net profit earlier in the week and said it expects to ship 40 percent more notebook PCs this year from 2007, while its overall PC shipments would rise by 30 to 35 percent.

"Yeah more Linux-based notebook !!!"

 

Fedora + Eee PC = Eeedora

I am a fan of affordable technology. I like relatively cheap gadgets, and I like open source. When I heard about Asus’ Eee PC, I took it with a certain grain of salt. I thought that maybe it was just another company trying to take a piece of the pie from the One Laptop Per Child initiative.

Then the more I read about the OLPC, the more I realized that the two gadgets may have been created for different purposes. The OLPC is a non-profit, educational-social project, while the Eee PC is an affordable subnotebook being sold with the intent for profit.

The Eee PC’s price range varies from approximately $300 to $500; within that range you can get a configuration with a 2 GB, 4 GB, or 8 GB solid state drive, and for the 4 GB and 8 GB models, you can opt for an embedded webcam as well. All models come with 3 USB ports, 1 MMC/SD port, and a VGA port for an external display, which can display up to 1600×1280 resolution.

By default, the Eee PC comes with a slightly modified version of Xandros Linux with KDE as its window manager. The Linux layman will most likely not realize that it is indeed running KDE because of a feature called "Easy Mode" that hides the KDE desktop and gives the user only icons for the main apps in the system.

Note: The Xandros install uses unionfs for its filesystem, which is very common for Live CD installations. However, one of its features is that the space used by an application cannot be freed once that application is uninstalled. So, if you tried to uninstall OpenOffice to free up a few megabytes on your file system, unionfs would still report the same amount of used megabytes on your system.

Because the Eee PC is a full-blown Intel-based computer, there is absolutely nothing stopping us from installing other Linux distributions on it. At first glance, the only catch is the fact that the Eee PC doesn’t have a built-in CD/DVD-ROM, but by using open source tools like livecd-iso-to-disk from the Fedora distribution, we can install live images onto a USB thumb drive and boot the Eee PC from it. That’s where Eeedora comes in.
What’s Eeedora?

Eeedora is a Fedora-based live distribution created and maintained by Martin Andrews. Martin decided to create the distribution for power users who are more comfortable in the Red Hat-based environment rather than Xandros, which is Debian-based.

Eeedora is based on the most current version of Fedora (8); it uses XFCE as the window manager; the live image download is currently less than 350 MB; and it gives the user full access to the yum repos for the Fedora distribution, allowing you to install the larger packages like Gimp, OpenOffice, and Thunderbird.

Eeedora in its current state works flawlessly with most of the hardware available under the Eee PC, coming up a little short still with webcam support and resume issues after a suspend. Yet it has been my experience so far that it works very well on the Eee PC.

Also of note: Eeedora doesn't use ext3. It uses ext2 to minimize disk use, so you should be aware that if devices are not unmounted properly, suspend/resume issues and hard shutdowns could damage your install more frequently than if it were running ext3.

Installing Eeedora on the Eee PC

The following instructions will work on any of the models of the Eee PC:

1. Download the Eeedora ISO image file.

2. On your Fedora desktop (or laptop), install the livecd-iso-to-disk script.

# yum install livecd-tools

3. Plug your USB thumbdrive into the computer. The haldaemon should automatically mount it, and you will see an icon for the thumbdrive show up on your desktop.

4. Open Terminal and become root:

# su -

5. Find out which Linux device your USB thumbdrive is mapped as:

# mount

You will see a few lines on your terminal, and one of them will look like this:

/dev/sdb1 on /media/disk1 type vfat (rw)

Haldaemon will mount your USB thumbdrive using the same label it used to identify the device on your desktop when the icon showed up. In the case of this example, "/dev/sdb1" is my device.

6. Install the image onto your USB thumbdrive:

# livecd-iso-to-disk the-file-you-downloaded.iso /dev/sdb1

Note: You don’t need to format your USB thumbdrive; livecd-iso-to-disk will install the image without destroying your existing data (assuming it has enough space on the drive). But it never hurts to have a backup copy.

7. Unmount your USB thumbdrive and plug it into your Eee PC.

8. Boot up your Eee PC. Press F2 to go into the BIOS, and make sure you make your USB thumbdrive the first hard disk the BIOS sees. Press F10 to save, and the Eeedora grub screen should start up.

9. Once you are into the system, there will be an install icon on the desktop that you can use to install the OS on the actual SSD.

Known issues

As I’ve mentioned before, Eeedora is a work in progress, and Martin is always welcoming feedback from the community. I’ve had the chance to report a few bugs on it and got almost instant return from him.

Read about more of the outstanding issues in Eeedora.

Conclusion

You might be asking why anyone would be interested in getting a notebook like the Eee PC. The keyboard is small, the screen is small (7 inches at 800×480), and the storage is minimal. Personally, I see the Eee PC as a tool that makes me a bit more mobile than before. Its dimensions could be seen as a disadvantage, although for my purposes they are an advantage. I even sold my iPod, because now I use the Eee PC as my media player in the car while going back and forth from work. I don't necessarily recommend it to anyone who uses their MP3 player while exercising, but for a drive, it is pretty great.

The Eee PC has also become a tool with which I started discovering applications in the open source world that I've never had the chance or desire to try. Most of us have plenty of storage space to install everything from a Fedora DVD and use the "big apps" in our community like Gnome, KDE, Thunderbird, etc. Now, with a very limited amount of space (in my case 2 GB), I've started playing with XFCE, Wifi-radar, and Sylpheed, among others.

You get a chance to use Linux with a different mindset, from a different perspective.

 

NEC shows off Linux mobile phones

NEC has thrown its weight behind mobile Linux with the introduction of four handsets based on the LiMo specification.

LiMo is the result of a push towards a shared, hardware-independent mobile phone operating system by several handset manufacturers including Motorola, LG Electronics and Panasonic.

NEC describes its handsets as the world's first LiMo-compliant mobile phones, even though several of its partners in the LiMo Foundation have already released details of compatible handsets, including Motorola and Panasonic.

"The breadth of the initial generation of LiMo handsets consolidates LiMo's role as the unifying force within mobile Linux and highlights the strong momentum established in the 12 months since LiMo was launched," said Morgan Gillis, executive director of the LiMo Foundation.

Among NEC's new phones is the N905i, a 3G/GSM phone with HSDPA for data connectivity, mobile TV reception, GPS and support for wireless payment services.

 

Open source workers can earn more money !!!

IT workers who specialise in free and open source software are earning more than the national average for IT, according to the results of Australia's first open source census.

The average full time salary of respondents to the Australian Open Source Industry and Community Census was between $76,000 and $100,000, but the 10 percent working on open source full time were earning "a lot more" according to Pia Waugh of Waugh Partners consultancy, which conducted the survey.

“The people who were working on free software full time were earning more than the average for the general community,” she said.

When compared to Australian salaries across the board, salaries for full time open source workers were almost three times the national median.

Women IT workers didn't fare as well though – the full time women workers who responded were earning an average of $46,000 to $60,000, Waugh said.

Previewing the results of the census at Linux.conf.au on Friday, Pia and Jeff Waugh of Waugh Partners Consultancy said the online survey attracted 327 respondents who were working on open source software in either a personal or professional capacity. The majority of them (57 percent) were hobbyists who don't get paid to work on open source. Twenty-four percent were working on open source in their paid job some of the time, while the highest paid segment were the 10 percent working on open source full time.

Waugh Partners believed the sample size was greater than 5 percent of the total open source industry size, making it a credible representation of the whole industry.

“It suggests that people who work with open source are likely to have better skills and are likely to get better jobs,” Jeff Waugh said. “That is a really good message to take out to the education sector. We hope it will reinforce the decision by universities who do open source software, and the ones who aren't doing it will need to compete.”

While many of the respondents said their knowledge of open source was a self taught skill, Queensland universities led the field of institutions attended by the respondents.

The majority of respondents to the survey had completed some of their study at Queensland University of Technology (QUT), while the University of Sydney came second. Two of the top four unis nominated were in Queensland.

Source : itnews

 

Five must-have apps for a new Linux install

I tend to hammer my Ubuntu laptop. Running a website like Tectonic means I am constantly installing new applications to try them out. Many of these I later have to remove, or they lie forgotten on the hard disk until I start to wonder where the +40GB of free hard disk space went. And when that happens I tend to back up the essentials - email, documents and website backups - format my hard disk and install a clean version of Ubuntu. Doing this every few months means that a few times a year I get to really consider what the most important applications on my desktop are.

My most recent re-install was this weekend. I was running short of hard disk space and things were slowing down noticeably. I could have spent a good few hours cleaning out my hard disk but I didn't really want to. Sometimes a good clean install is what is required.

The essential tools
So, having re-installed a brand new copy of Ubuntu and the required updates, there are a few applications that I immediately download because, without them, I would not be able to do most of my day-to-day work. Here, in no particular order, are the five applications or tools I have to have but which aren't included in a default Ubuntu install. If you work in media or website development many of these might sound familiar.

gFTP

gFTP has been around since the early days of Linux and while not flashy and full of features it does the job at hand, which is uploading and downloading files for the sites I manage. gFTP's clear interface and simple navigation make it an essential part of my desktop arsenal. I know that Ubuntu has the ability to connect to FTP sites using the nautilus file manager but I still find the side-by-side arrangement of gFTP, and the ability to compare a local development site with a live hosted one, essential. gFTP is also lightweight and quick, which only adds to its appeal.
Install gFTP:
sudo apt-get install gftp

Inkscape

For most graphic and drawing needs Inkscape is the best possible application. I use it every day for simple logos, icons and pictures for the websites I manage. There are many other, sometimes more feature-full, graphics alternatives available but I find that Inkscape is straightforward to use and the many features it does have don't get in the way of doing simple graphics tasks. Combined with the Gimp, which is included in the Ubuntu default install, pretty much any graphics task is easy to do.
Install Inkscape:
sudo apt-get install inkscape

Apache, MySQL and PHP
I’ve put these together because there really is no point in having one but not the others. If you do any web development you’ll want to install the lot. Running a webserver on your own machine is the only way to develop and test websites. There was a time when installing these three and getting them to work together was something of a headache. In Ubuntu now it’s pretty much taken care of. To install MySQL you need to:
sudo apt-get install mysql-server-5.0
During the install you will be prompted for a root password. Make sure to give one so you can log into MySQL when you’re done.
Installing PHP and Apache next is equally simple:
sudo aptitude install apache2 php5 libapache2-mod-php5
Once you’ve done that restart the Apache server:
sudo /etc/init.d/apache2 restart
Point your browser to http://localhost to test if it works.
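
If you also want to confirm that PHP itself is being executed, and not just that Apache is running, a quick test is to drop a phpinfo() page into the document root (on a default Ubuntu install that is /var/www) and load it in the browser; remove the file again once you've seen the page:

sudo sh -c 'echo "<?php phpinfo(); ?>" > /var/www/info.php'
# browse to http://localhost/info.php, then clean up:
sudo rm /var/www/info.php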

Bluefish

This is another of those applications that have been around since the early days of Linux, and I have grown quite attached to it. Bluefish is a programming tool ideal for HTML and PHP work but equally at home with other languages. Syntax highlighting and a collection of pre-built HTML and PHP elements make Bluefish an everyday tool of mine. Like many of my other favourite and most-used applications, Bluefish hides a great number of features behind a seemingly simple interface. One of these is Bluefish's colour dropper feature, which picks colours from anywhere on your screen and converts them to HTML-friendly codes. It's ideal for colour-matching for website designs.
Install Bluefish:
sudo apt-get install bluefish

Firefox extensions
The only other thing I need to install on a clean install of Ubuntu is a handful of Firefox extensions: Firebug, TinyURL Creator and Web Developer. I find Firebug is fantastic at pinpointing weaknesses in the websites I am working on. It can isolate elements that are slowing down the site or just not working correctly. Web Developer does similar things but I find that it is better for collecting amazing amounts of information about any website, from the size of the website to embedded images and styles. On a daily basis I use both. The other extension I always have is the TinyURL Creator. I spend a lot of my day sending or storing links to information I want to share. 300-character URLs are ugly and cumbersome.

Got favourite applications you can’t live without? Tell us in the comments.

 

Prototype for a Fedora virtual machine appliance builder

For the oVirt project the end product distributed to users consists of a LiveCD image to serve as the 'managed node' for hosting guests, and a virtual machine appliance to serve as the 'admin node' for the web UI. The excellent Fedora LiveCD creator tools obviously already deal with the first use case. For the second, though, we don't currently have a solution. The way we build the admin node appliance is to boot a virtual machine and run anaconda with a kickstart, and then grab the resulting installed disk image. While this works, it involves a number of error-prone steps. Appliance images are not inherently different from LiveCDs - instead of an ext3 filesystem inside an ISO using syslinux, we want a number of filesystems inside a partitioned disk using grub. The overall OS installation method is the same in both use cases.

After a day's hacking I've managed to re-factor the internals of the LiveCD creator, and add a new installation class able to create virtual machine appliances. As its input it takes a kickstart file, and the names and sizes for one or more output files (which will act as the disks). It reads the 'part' entries from the kickstart file and uses parted to create suitable partitions across the disks. It then uses kpartx to map the partitions and mounts them all in the chroot. The regular LiveCD installation process then takes place. Once complete, it writes a grub config and installs the bootloader into the MBR. The result is one or more files representing the appliance's virtual disks which can be directly booted in KVM / Xen / VMware.
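
For anyone curious what that automates, the low-level steps look roughly like the following sketch (the filename and sizes are purely illustrative; the real code also handles formatting, mounting and the package installation on top of this):

# create a sparse 5 GB raw disk image
dd if=/dev/zero of=appliance.raw bs=1 count=0 seek=5G
# write a partition table and a single partition with parted
parted -s appliance.raw mklabel msdos
parted -s appliance.raw mkpart primary ext3 1 5119
# map the partitions as /dev/mapper/loop*p* devices so they can be formatted and mounted
kpartx -av appliance.raw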

The virt-image tool defines a simple XML format which can be used to describe a virtual appliance. It specifies things like minimum recommended RAM and VCPUs, the disks associated with the appliance, and the hypervisor requirements for booting it (eg Xen paravirt vs bare metal / fullvirt). Given one of these XML files, the virt-image tool can use libvirt to directly deploy a virtual machine without requiring any further user input. So an obvious extra feature for the virtual appliance creator is to output a virt-image XML description. With a demo kickstart file for the oVirt admin node, I end up with 2 disks:

-rwxr-xr-x 1 root root 5242880001 2008-02-17 14:48 ovirt-wui-os.raw
-rwxr-xr-x 1 root root 1048576001 2008-02-17 14:48 ovirt-wui-data.raw

And an associated XML file


[virt-image XML description omitted: it names the appliance ovirt-wui and describes an x86_64 guest with 1 VCPU and 262144 KB of memory, referencing the two disk files listed above]

To deploy the appliance under KVM I run

# virt-image --connect qemu:///system ovirt-wui.xml
# virsh --connect qemu:///system list
Id Name State
----------------------------------
1 ovirt-wui running

Now raw disk images are really quite large - in this example I have a 5 GB and a 1 GB image. The LiveCD creator saves space by using resize2fs to shrink the ext3 filesystem, but this won't help disk images since the partitions are a fixed size regardless of what the filesystem size is. So, to allow smaller images, the appliance creator is able to call out to qemu-img to convert the raw file into a qcow2 (QEMU/KVM) or vmdk (VMware) disk image, both of which are grow-on-demand formats. The qcow2 image can even be compressed. With the qcow2 format the disks for the oVirt WUI reduce to 600 KB and 1.9 GB.
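
Done by hand, the conversion boils down to qemu-img invocations along these lines (the -c flag asks for a compressed qcow2; the filenames are taken from the example above):

qemu-img convert -c -O qcow2 ovirt-wui-data.raw ovirt-wui-data.qcow2
qemu-img convert -O vmdk ovirt-wui-os.raw ovirt-wui-os.vmdk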

The LiveCD tools have already seen immense popularity in the Fedora community. Once I polish this new code up to production quality, it is my hope that we'll see similar uptake by people interested in creating and distributing appliances. The great thing about basing the appliance creator on the Live CD codebase and using kickstart files for both is that you can easily switch between doing regular anaconda installs, creating Live CDs and creating appliances at will, with a single kickstart file.

Source : Fedora

 

The £99 laptop: how can it be so cheap?

A new laptop computer for just £99 sounds like the kind of offer found in a spam e-mail or on a dodgy auction website. But the British company Elonex is launching the country’s first sub £100 computer later this month and hopes to be making 200,000 of them by the summer. It will be aimed at schoolchildren and teenagers, and looks set to throw the market for budget laptops wide open.

Called the One, it can be used as a traditional notebook computer or, with the screen detached from the keyboard, as a portable “tablet” – albeit without the planned touchscreen that Elonex had to abandon to hit its £99 price tag. Wi-fi technology lets users access the internet or swap music (and homework) files between computers wirelessly.

Personal files can be stored on the laptop’s 1GB of built-in memory or on a tough digital wristband (1-8GB, from £10) that children can plug into the USB socket of whichever computer they happen to be using, be it the One, a PC at school or their parents’ laptop.

So how can Elonex make a computer for so little? After all, UK consumers paid an average of £477 for a new laptop in 2007, according to the retail analyst GfK.

The secret is simple: open-source software. The One runs on Linux, which is a rival to Windows but completely free to use. Open-source software can be freely swapped or modified by anyone who wants it. In the past such operating systems (there are several of them) have been outgunned by the more sophisticated Windows programs. However, an open-source operating system is ideal for low-cost devices as it performs well on less powerful, cheaper hardware.

Naturally, the One is more basic than all-singing, all-dancing notebooks. Nonetheless, it includes a free word processor and spreadsheet, a free web browser and free e-mail software. It has a 7in screen, a rubbery little keyboard and no CD drive. And it all runs on an ageing chip that was designed before its target audience of seven-year-olds were even born.

InGear had an exclusive hands-on look at a preproduction One. The keyboard was slow and spongy and the built-in speakers could be louder but the screen was bright and the software package impressively varied (if rather sluggish) on this prototype.

Preloaded programs ranged from instant messaging software and a photo editor to games and an MP3 player. Moving files to and from the USB wristband was easy enough – and there’s a Bluetooth version with 2GB of memory (£120) that lets you swap files with mobile phones too.

Elonex will be launching the computer at the Education Show at the NEC in Birmingham at the end of this month, and is targeting schools as potential buyers.

The Elonex One isn’t the only low-cost educational laptop out there, however. Asus launched an open-source laptop in the run-up to Christmas last year. The Eee PC (about £200) has proved popular with adults as well as children, with its first shipment selling out nationwide within hours of its November release.

The One Laptop per Child initiative, which began in America, hopes to offer a “Give one, get one” event this year in Britain, where consumers can buy two computers – one for themselves and one for a child abroad – for about £200.

But open-source software has its problems. If no one owns it, there’s no one to complain to when things go wrong – and the One has no antivirus or firewall software built in. The old-fashioned feel of the One’s programs could also flummox modern cyber-kids used to the slick menus, wizards and plug-and-play simplicity of Windows.

Of course, in the context of laptops costing more than £1,000 – and even copies of Microsoft Office software retailing at as much as £120 – paying £99 for a fully functional, internet-ready laptop packed with software isn’t a huge risk to take.

And it’s this magic price that is the One’s biggest asset. The more that parents choose to buy Ones, the more music and games their kids will share, and the more sought after it will become. A laptop as the coolest thing in the playground? Stranger things have happened.

 

Red Hat unveils three new open-source projects

Red Hat has said its JBoss Enterprise SOA Platform will be available later this month and introduced three new open source projects designed to infuse transaction, management and other capabilities into its middleware.

The announcements come a day after the company laid out its seven-year goal to own 50% of middleware deployments using JBoss to anchor platforms for portals, SOA, and application servers and services. Red Hat said its open source SOA platform would incorporate innovation derived from an array of open source projects offered on JBoss.org, three of which have just been introduced.

The three projects are Black Tie, which will create a transaction engine to integrate or replace legacy transaction monitors (specifically Tuxedo) with the JBoss.org Transactions project; JBoss DNA, a registry for network services; and RHQ, an SOA management platform that will eventually support both JBoss and Fedora platforms.

The SOA Platform already incorporates components that started out as open source projects, including JBoss ESB, JBoss jBPM and JBoss Rules.

ESB provides application and service integration, mediation, transformation and registry technology; jBPM adds service orchestration and workflow; while Rules includes management and integration of business policy and rules, as well as, content-based routing that relies on rules.

Red Hat is following the same model it has used for its Linux OS development: court innovation among the vast Fedora open source project community, and then tap the results for inclusion in Red Hat Enterprise Linux, where they can be stabilised and supported.

"We want to be disruptive with our innovation, but not disruptive in production" environments, said Sacha Labourey, vice president of engineering middleware at Red Hat.

The SOA platform is designed to provide infrastructure to support SOAs, and application and business-process integration. The platform combines enterprise application integration, business process and rule management and event-driven architecture technologies. Red Hat officials say the platform is architected to support users involved in small-scale integration projects to full-blown SOA infrastructure deployments.

Red Hat has taken on a number of partners to complement its efforts, including Active Endpoints, Amberpoint, SeeWhy, SOA Software, Vitria Technology, Information Builders and iWay Software.

Red Hat said its Black Tie project would kick off in 60 days. The JBoss DNA project, the first in a series of SOA governance projects, is slated to begin in 30 days, with more projects to be announced in 60 days. The RHQ project is already up and running.

Craig Muzilla, vice president of the middleware business unit, said it was hard to say when commercial products would spring from the projects, but he said users could look for results by year-end.

BlackTie will add C, C++ and mainframe compatible transaction capabilities to the JBoss.org Transactions project. The project will focus on emulating transaction-processing monitor application programming interfaces, and providing open source based legacy services that include security, naming, clustering and transactions.
Red Hat said the project would support the ATMI programming interface to ease migrations. The Black Tie project is derived from technology from Ajuna, which JBoss acquired in 2005 before being bought by Red Hat.

With its governance project, Red Hat hopes to set the tone for open source SOA management. JBoss DNA, a metadata repository and UDDI registry, is the kick-off project for what will be a number of management components, according to Muzilla. The project is based on technology Red Hat acquired when it bought MetaMetrix in April 2007.

Red Hat also unveiled its RHQ management project, which it said would serve as the code base for the JBoss Operations Network v2.0, which is due to ship in the first half of this year. The Operations Network is the management foundation for Red Hat's middleware strategy. The RHQ project aims to develop a common services management platform.

 

Master-Master Replication With MySQL 5 On Fedora 8 - Page 3

3.4 Export MySQL Dump On System 1

Now we create a dump of the existing database and transfer it to system 2.

mysql -u root -p

USE exampledb;
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;

The output should look like this. Note down the file and the position - you'll need both later.

+------------------+----------+---------------------+------------------+
| File             | Position | Binlog_Do_DB        | Binlog_Ignore_DB |
+------------------+----------+---------------------+------------------+
| mysql-bin.000004 |       98 | exampledb,exampledb |                  |
+------------------+----------+---------------------+------------------+
1 row in set (0.00 sec)

Open a second terminal for system 1, create the dump and transfer it to system 2. Don't leave the MySQL-shell at this point - otherwise you'll lose the read-lock.

cd /tmp/
mysqldump -u root -p%mysql_root_password% --opt exampledb > sqldump.sql
scp sqldump.sql root@192.168.0.200:/tmp/

Afterwards close the second terminal and switch back to the first. Remove the read-lock and leave the MySQL-shell.

UNLOCK TABLES;
quit;

3.5 Import MySQL Dump On System 2

Time to import the database dump on system 2.

mysqladmin --user=root --password=%mysql_root_password% stop-slave
cd /tmp/
mysql -u root -p%mysql_root_password% exampledb < /tmp/sqldump.sql

3.6 System 2 As Master

Now we need information about the master status on system 2.

mysql -u root -p
USE exampledb;
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;

The output should look like this. Note down the file and the position - you'll need both later.

+------------------+----------+---------------------+------------------+
| File             | Position | Binlog_Do_DB        | Binlog_Ignore_DB |
+------------------+----------+---------------------+------------------+
| mysql-bin.000003 |      958 | exampledb,exampledb |                  |
+------------------+----------+---------------------+------------------+
1 row in set (0.00 sec)

Afterwards remove the read-lock.

UNLOCK TABLES;

At this point we're ready to become the master for system 1. Replace %mysql_slaveuser_password% with the password you chose, and be sure that you replace the values for MASTER_LOG_FILE and MASTER_LOG_POS with the values that you noted down in step 3.4!

CHANGE MASTER TO MASTER_HOST='192.168.0.100', MASTER_USER='slave2_user', MASTER_PASSWORD='%mysql_slaveuser_password%', MASTER_LOG_FILE='mysql-bin.000004', MASTER_LOG_POS=98;

Now start the slave ...

START SLAVE;

... and take a look at the slave status. It's very important that both Slave_IO_Running and Slave_SQL_Running are set to Yes. If they're not, something went wrong and you should take a look at the logs.

SHOW SLAVE STATUS;

+----------------------------------+---------------+-------------+-------------+---------------+------------------+---------------------+--------------------+---------------+-----------------------+------------------+-------------------+---------------------+---------------------+--------------------+------------------------+-------------------------+-----------------------------+------------+------------+--------------+---------------------+-----------------+-----------------+----------------+---------------+--------------------+--------------------+--------------------+-----------------+-------------------+----------------+-----------------------+
| Slave_IO_State | Master_Host | Master_User | Master_Port | Connect_Retry | Master_Log_File | Read_Master_Log_Pos | Relay_Log_File | Relay_Log_Pos | Relay_Master_Log_File | Slave_IO_Running | Slave_SQL_Running | Replicate_Do_DB | Replicate_Ignore_DB | Replicate_Do_Table | Replicate_Ignore_Table | Replicate_Wild_Do_Table | Replicate_Wild_Ignore_Table | Last_Errno | Last_Error | Skip_Counter | Exec_Master_Log_Pos | Relay_Log_Space | Until_Condition | Until_Log_File | Until_Log_Pos | Master_SSL_Allowed | Master_SSL_CA_File | Master_SSL_CA_Path | Master_SSL_Cert | Master_SSL_Cipher | Master_SSL_Key | Seconds_Behind_Master |
+----------------------------------+---------------+-------------+-------------+---------------+------------------+---------------------+--------------------+---------------+-----------------------+------------------+-------------------+---------------------+---------------------+--------------------+------------------------+-------------------------+-----------------------------+------------+------------+--------------+---------------------+-----------------+-----------------+----------------+---------------+--------------------+--------------------+--------------------+-----------------+-------------------+----------------+-----------------------+
| Waiting for master to send event | 192.168.0.100 | slave2_user | 3306 | 60 | mysql-bin.000004 | 98 | slave-relay.000002 | 235 | mysql-bin.000004 | Yes | Yes | exampledb,exampledb | | | | | | 0 | | 0 | 98 | 235 | None | | 0 | No | | | | | | 0 |
+----------------------------------+---------------+-------------+-------------+---------------+------------------+---------------------+--------------------+---------------+-----------------------+------------------+-------------------+---------------------+---------------------+--------------------+------------------------+-------------------------+-----------------------------+------------+------------+--------------+---------------------+-----------------+-----------------+----------------+---------------+--------------------+--------------------+--------------------+-----------------+-------------------+----------------+-----------------------+
1 row in set (0.00 sec)
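
Tip: the output is far easier to read if you terminate the statement with \G instead of a semicolon - the mysql client then prints each column on its own line:

SHOW SLAVE STATUS\G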

Afterwards leave the MySQL-shell.

quit;

3.7 System 1 As Master

Open a MySQL-shell on system 1 ...

mysql -u root -p

... and stop the slave.

STOP SLAVE;

At this point we're ready to become the master for system 2. Replace %mysql_slaveuser_password% with the password you chose, and be sure that you replace the values for MASTER_LOG_FILE and MASTER_LOG_POS with the values that you noted down in step 3.6!

CHANGE MASTER TO MASTER_HOST='192.168.0.200', MASTER_USER='slave1_user', MASTER_PASSWORD='%mysql_slaveuser_password%', MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=958;

Now start the slave ...

START SLAVE;

... and take a look at the slave status. It's very important that both Slave_IO_Running and Slave_SQL_Running are set to Yes. If they're not, something went wrong and you should take a look at the logs.

SHOW SLAVE STATUS;

+----------------------------------+---------------+-------------+-------------+---------------+------------------+---------------------+--------------------+---------------+-----------------------+------------------+-------------------+---------------------+---------------------+--------------------+------------------------+-------------------------+-----------------------------+------------+------------+--------------+---------------------+-----------------+-----------------+----------------+---------------+--------------------+--------------------+--------------------+-----------------+-------------------+----------------+-----------------------+
| Slave_IO_State | Master_Host | Master_User | Master_Port | Connect_Retry | Master_Log_File | Read_Master_Log_Pos | Relay_Log_File | Relay_Log_Pos | Relay_Master_Log_File | Slave_IO_Running | Slave_SQL_Running | Replicate_Do_DB | Replicate_Ignore_DB | Replicate_Do_Table | Replicate_Ignore_Table | Replicate_Wild_Do_Table | Replicate_Wild_Ignore_Table | Last_Errno | Last_Error | Skip_Counter | Exec_Master_Log_Pos | Relay_Log_Space | Until_Condition | Until_Log_File | Until_Log_Pos | Master_SSL_Allowed | Master_SSL_CA_File | Master_SSL_CA_Path | Master_SSL_Cert | Master_SSL_Cipher | Master_SSL_Key | Seconds_Behind_Master |
+----------------------------------+---------------+-------------+-------------+---------------+------------------+---------------------+--------------------+---------------+-----------------------+------------------+-------------------+---------------------+---------------------+--------------------+------------------------+-------------------------+-----------------------------+------------+------------+--------------+---------------------+-----------------+-----------------+----------------+---------------+--------------------+--------------------+--------------------+-----------------+-------------------+----------------+-----------------------+
| Waiting for master to send event | 192.168.0.200 | slave1_user | 3306 | 60 | mysql-bin.000003 | 958 | slave-relay.000002 | 235 | mysql-bin.000003 | Yes | Yes | exampledb,exampledb | | | | | | 0 | | 0 | 958 | 235 | None | | 0 | No | | | | | | 0 |
+----------------------------------+---------------+-------------+-------------+---------------+------------------+---------------------+--------------------+---------------+-----------------------+------------------+-------------------+---------------------+---------------------+--------------------+------------------------+-------------------------+-----------------------------+------------+------------+--------------+---------------------+-----------------+-----------------+----------------+---------------+--------------------+--------------------+--------------------+-----------------+-------------------+----------------+-----------------------+
1 row in set (0.00 sec)

Afterwards leave the MySQL shell.

quit;

If all went ok, the master-master replication is working now. Check your logs on both systems if you encounter problems.

4 Links

 

Master-Master Replication With MySQL 5 On Fedora 8 - Page 2

3 Replication

3.1 Firewall Configuration On Both Systems

Versions of system-config-firewall-tui before 1.0.12-4.x had a bug in conjunction with custom rules (they were not applied) - so check which version is installed on your system.

yum list installed | grep firewall

If the installed version is lower than 1.0.12-4.x you have to update to the new version. While I was writing this howto, the new version was only available in the updates-testing repository.

yum --enablerepo=updates-testing update system-config-firewall-tui

So that the MySQL servers are able to connect to each other, you have to open port 3306 (TCP) on both systems.

system-config-firewall

Click on "Customize"

Insert the MySQL port (3306, protocol tcp) into the "Other Ports" section and click on "OK" to save the settings.
Click on "OK".
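
If you prefer to skip the graphical tool, the port can also be opened from the shell - this is only a sketch, so adapt it to your own rule set and remember that the rule has to be saved to survive a reboot:

iptables -I INPUT -p tcp --dport 3306 -j ACCEPT
/etc/init.d/iptables save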

3.2 Log Directory On Both Systems

So that the MySQL server is able to create log files, we have to create a directory and give MySQL ownership of it.

mkdir /var/log/mysql/
chown mysql:mysql /var/log/mysql/

3.3 MySQL Configuration

In the next two steps we adjust the MySQL configuration on both systems for master-master replication.

3.3.1 System 1

vi /etc/my.cnf

Add the following lines to the section [mysqld]:

server-id = 1
replicate-same-server-id = 0
auto-increment-increment = 2
auto-increment-offset = 1

master-host = 192.168.0.200
master-user = slave1_user
master-password = %mysql_slaveuser_password%
master-connect-retry = 60
replicate-do-db = exampledb

log-bin = /var/log/mysql/mysql-bin.log
binlog-do-db = exampledb

relay-log = /var/lib/mysql/slave-relay.log
relay-log-index = /var/lib/mysql/slave-relay-log.index

expire_logs_days = 10
max_binlog_size = 500M

Afterwards restart the MySQL server.

/etc/init.d/mysqld restart

3.3.2 System 2

vi /etc/my.cnf

Add the following lines to the section [mysqld]:

server-id = 2
replicate-same-server-id = 0
auto-increment-increment = 2
auto-increment-offset = 2

master-host = 192.168.0.100
master-user = slave2_user
master-password = %mysql_slaveuser_password%
master-connect-retry = 60
replicate-do-db = exampledb

log-bin = /var/log/mysql/mysql-bin.log
binlog-do-db = exampledb

relay-log = /var/lib/mysql/slave-relay.log
relay-log-index = /var/lib/mysql/slave-relay-log.index

expire_logs_days = 10
max_binlog_size = 500M

Afterwards restart the MySQL server.

/etc/init.d/mysqld restart


 

Master-Master Replication With MySQL 5 On Fedora 8

This document describes how to set up master-master replication with MySQL 5 on Fedora 8. Since version 5, MySQL comes with built-in support for master-master replication, solving the problem that can happen with self-generated keys. In former MySQL versions, the problem with master-master replication was that conflicts arose immediately if node A and node B both inserted an auto-incrementing key on the same table. The advantages of master-master replication over the traditional master-slave replication are that you don't have to modify your applications to make write accesses only to the master, and that it is easier to provide high-availability because if the master fails, you still have the other master.
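
The key to this is the pair of settings auto-increment-increment and auto-increment-offset used later in this howto: with an increment of 2 and offsets of 1 and 2, System 1 only ever generates the keys 1, 3, 5, ... while System 2 generates 2, 4, 6, ..., so the two masters can never hand out the same auto-increment value.

# system 1 (see section 3.3.1)
auto-increment-increment = 2
auto-increment-offset = 1

# system 2 (see section 3.3.2)
auto-increment-increment = 2
auto-increment-offset = 2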

This howto is a practical guide without any warranty - it doesn't cover the theoretical backgrounds. There are many ways to set up such a system - this is the way I chose.

1 Preparation

For this howto I set up two Fedora 8 systems (minimal installation without gui etc.) with the following configuration.

1.1 System 1

Hostname: server1.example.com
IP: 192.168.0.100

1.2 System 2

Hostname: server2.example.com
IP: 192.168.0.200

2 MySQL

2.1 Needed Packages On Both Systems

If you haven't installed MySQL on both systems you can install it (client & server) via:

yum -y install mysql mysql-server

2.2 MySQL Server Initial Start On Both Systems

Start the MySQL server.

/etc/init.d/mysqld start

2.3 MySQL Root Password

2.3.1 Both Systems

Set a password for the MySQL root-user on localhost.

mysqladmin -u root password %mysql_root_password%

2.3.2 System 1

Set a password for the MySQL root-user on server1.example.com.

mysqladmin -u root -h server1.example.com password %mysql_root_password%

2.3.3 System 2

Set a password for the MySQL root-user on server2.example.com.

mysqladmin -u root -h server2.example.com password %mysql_root_password%

2.4 MySQL Replication User

2.4.1 System 1

Create the replication user that System 2 will use to access the MySQL database on System 1.

mysql -u root -p

GRANT REPLICATION SLAVE ON *.* TO 'slave2_user'@'%' IDENTIFIED BY '%mysql_slaveuser_password%';
FLUSH PRIVILEGES;
quit;

2.4.2 System 2

Create the replication user that System 1 will use to access the MySQL database on System 2.

mysql -u root -p

GRANT REPLICATION SLAVE ON *.* TO 'slave1_user'@'%' IDENTIFIED BY '%mysql_slaveuser_password%';
FLUSH PRIVILEGES;
quit;

2.5 Database On System 2

I proceed on the assumption that the database exampledb already exists on System 1 and contains tables with records. So we have to create an empty database on System 2 with the same name as the existing database on System 1.

mysql -u root -p

CREATE DATABASE exampledb;
quit;

 

Ubuntu Goes Commercial?

If you read a post by Bruce Byfield, you will see that he raises an interesting question: given that Canonical will try to offer commercial software from a specific repository, would anyone use it? And if not, could it alienate other users of Ubuntu from using the distribution at all?

He goes on to argue, for a whole two-page article, about something that I don't even think exists. His main point is that this idea of commercial repositories has been tried out before and it didn't work. Why try now? After all, it's just a matter of time until something else will replace our current software:

A download service might find a temporary niche in offering software for which no free equivalent exists. For instance, despite recent improvements in apps like Kooka and Tesseract, someone who regularly needed to convert scanned text to a usable format might welcome a GNU/Linux version of OmniPage. The trouble is, given the speed with which free software is developing, such a market would be temporary, lasting a year or two at most. A service specializing in these niches would continually lose out to maturing free software, with no prospect of replacement products.
But why doesn't he see that this service may be no different from other software distribution methods? He seems to be arguing more against the fact that there are proprietary and commercial application offerings on Linux than against the fact that they are provided in Ubuntu. But, as it seems to me, the main reason for Canonical to do this is not all Ubuntu Desktop users - it's business users and maybe even Ubuntu Server users, who may use those proprietary applications for their businesses and need a standard way of installing applications. Why should the way of installing Parallels be any different from the way of installing OpenOffice? It should not.

Sun has its own software distribution system, just as Apple's Mac OS X and MS Windows do. Why is it forbidden for Linux distributions to have one that includes commercial software?

I can provide an example of commercial software that I have used and had to install on Linux: IBM Rational ClearCase (and trust me, moving to other version management tools was much more expensive in human-hours because of the huge amount of code and fast workforce turnover). Yes, there are free/open source alternatives, but they were not viable for that specific case.

I see the offer by Canonical as very pragmatic, practical and not hurting Ubuntu in any way. Ubuntu is a Linux distribution. Canonical is the company behind it, whose goal is to make money. So what is the problem if they try to monetize the free infrastructure they helped to build? The infrastructure is and will remain free, and as there's no additional effort required (except maybe for a billing system), Canonical has nothing to lose - and much to gain.

Here's another question while we're at it: why doesn't the author criticize Red Hat's model, where you pay for the distribution first and then, if you use proprietary software, for the software once more? Is it that much better? I don't see users ditching Red Hat and its siblings (Fedora and CentOS) just because Red Hat has proprietary parts in it.

And I don't believe that Ubuntu users will drop using Ubuntu because Canonical has proprietary repositories.

I side with Canonical in this specific case not because I'm pro-Ubuntu. While I am pro-Ubuntu, I'm really a distribution-agnostic person (although I do have some emotional and personal allegiance to Gentoo). But I think the author is simply reacting emotionally to something proprietary being offered for Linux. While it is perfectly fine for some users to be upset, business people might actually be glad that they will be able to get the software they want or need anyway in a standard fashion.

 

SCO Lives! Aarrgh! Rawrr!

The more I watch SCO's progress -- from Unix vendor to patent-wielding lawsuit machine to bankrupt has-been, and now a privately funded corporate reboot -- the more I feel like I'm watching one of those cheesy 1960s Japanese monster movies with a nigh-unkillable creature from outer space. The super heat ray didn't work on the monster, the mysterious Element X that spews out Radiation Y didn't have any effect either, and now the scientists are falling back on the absolute last resort plan of them all: Awaken Godzilla! Would that we had Godzilla here, though.

Yes, SCO has lurched to life once more. The details of SCO's resurrection are still sketchy, but the plan seems plain to anyone who's followed the story so far. The way I see it, the "tremendous investment opportunity" that SCO's new investors are talking about in their statement is a) to drag out the court battle with Novell (NSDQ: NOVL) and IBM (NYSE: IBM) as long as humanly possible, b) score as many wild hits as possible in court to scare people away from Linux and open source, and c) Profit!

I do have to wonder how much SNCP, SCO's new investor, understands about what it's getting into. The one sentence from the release that hints at a business plan other than suing everything that moves is "SNCP has developed a business plan for SCO that includes unveiling new product lines aimed at global customers", which is as vague as trying to predict the weather a year from Monday. Do you know of anyone with even a kernel (pun intended) of technical savvy who would have anything to do with SCO at this point, either as an investor, a customer, or an employee?

My hope is that SNCP will pump a bunch of money into SCO, discover that there's no immediate benefit to doing so other than protracted legal struggles and, eventually needing to pay off the $25 million it owes Novell, give up and move on to another boondoggle. My nightmare, however, is a rejuvenated SCO that manages to continue being an indefinite irritant in the side of open source everywhere. Having Mothra nesting in the Tokyo Tower seems positively benign in comparison.

Full Article

 

The Demise Of Commercial Open Source

Steve Goodman, co-founder and CEO of network management startup PacketTrap Networks, is predicting that commercial open source companies are doomed to fail. Goodman's not railing against open source or commercial software, per se. It's converting the former into the latter that he sees as inherently flawed.

Goodman makes his argument in a blog posting published on PacketTrap's Web site. "The interest of a commercial vendor is opposite to that of an open source project," he writes. "Commercial vendors answer to road maps, salespeople, and shareholders."

A white paper lays out the argument in more detail. In it, PacketTrap refers to commercial open source as "proprietary open source" and identifies 21 startups--from ActiveGrid to Zimbra--that it puts into that bucket.

What does Goodman think is the right approach? His own, of course. PacketTrap is a commercial software company that integrates open source network monitoring and management tools into its own PT360 Tool Suite. Rather than trying to manage an open source project as, say, MySQL has done, PacketTrap leaves project management to the open source community and concentrates on developing a commercial platform that works with the code that community delivers, such as Nagios for network monitoring.

PT360 is in beta testing now. It's aimed at the mid-market, though large companies such as Boeing, Home Depot, Pfizer, and the U.S. Navy are early adopters.

A basic version of PT360 is free, while a professional version is due in the next few weeks. This so-called "freemium" model -- give a product away, then charge for a better version -- has its own critics.

Full Article

 

Going Mobile: The Year of the Smart Phone Startup

If you've always been itching to launch a startup but just couldn't come up with a killer idea, well, your ship is about to come in. No, it won't be quite as good as the Internet Bubble years, when any fool could raise a few million (hell, $30 or $40 million) to sell dog food online - no, really - but not bad, either.

When things are more or less steady state, you have to do something new and original to have a viable business plan in the tech space. But when times and technology really change (one of those paradigm shifty things), then you don't actually have to come up with something new to do at all - you just have to be the first to do something old in a new way. If you look back, that's what 95% of the Bubble companies tried.

True, 95% of those companies also failed. But that's not likely to happen this time around. This time, things will be a lot different, because while the platform is new, almost all of the trial and error on the business models has already occurred, the users are already trained to eat the dog food (as it were), the money is primed to flow, the standards are in place - and here's the really new twist - open source software has made the scene.


So where exactly is this grand opportunity to be found? I expounded on it in my monthly column for MHT (formerly MassHighTech), the New England high tech paper, last week in a piece that reads in part as follows:

The market segment in question is the mobile sector, where 2008 will usher in a multiyear period of opportunity for entrepreneurs and investors. The dynamics will echo two boom periods of the past -- the rapid expansion of the PC marketplace in the early 1980s, and the Internet explosion of the late 1990s. The device that will most robustly deliver on these antecedents is the smart phone, initially deployed (like the first personal computers) with many competing operating systems, and now able (like the PCs of the Internet boom) to satisfactorily access the Internet and the web.

In many ways, however, this boom will be better. Unlike the early, anemic, expensive PCs that people had never used before, a smart phone is simply a much more versatile telephone -- something a billion people already own. With a decade of Internet and web experience behind us, there will be far fewer failed efforts to determine what people really will and won't do online. And these mobile devices will be able to perform new tricks, using as many as nine separate on-board radios to interact with an ever-expanding "Internet of things," such as ATMs, film kiosks, movie posters and much more.

Best of all, the underlying technology is far less proprietary than it was during either the PC or the Internet boom. Various flavors of Linux now power the majority of mobile devices, and the Google Android project aims to provide developers with platform independence as well. The final part of the equation fell into place in just the last few months, as dominant telecommunications carriers grudgingly came to realize that they are better served (assuming they still have a choice) by opening their phones to independent software vendors than by shutting them out.

The result is a wonderful convergence of factors creating rapidly accessible opportunities for startups -- an abundance of empty open standards and open-source-based niches, alignment with the strategic direction of giants such as Google Inc. and Motorola Inc., and a coincident industry shift toward provisioning software as a service. Hundreds of opportunities -- many obvious -- offer all types of services to mobile, locationally aware platforms, from social networking, to push advertising to financial services.

Do I really believe all that? Yes I do. I don't expect that it will reach full flower in 2008, but I definitely expect the bus to leave the station and pick up real speed this year. It's going to be a very big bus, and most of the seats are still empty.

Just don't try and sell Kibble (R) to Smartphone users. We already know that dog won't hunt.

You can read the rest of the article here.

 

OpenOffice Cannot Open / Import Text Files Larger Than 65,536 Rows

This sucks: the OpenOffice.org 2.3 spreadsheet cannot open or import text files that are larger than 65,536 rows, and I basically need 100k rows. However, it is possible to recompile OpenOffice to extend the row limit. From the OO wiki hack page:

Well, it depends on what your goal is. For personal use you may set MAXROWCOUNT_DEFINE in sc/inc/address.hxx to a different value, multiple of 128, and recompile the application respectively the libsc680*.so and shove it under your existing installation. However, doing so implies that you don’t save to any binary file format like Excel or whatsoever, otherwise you risk loss of data. You’ll also encounter drawing layer misfits in higher row numbers, may experience performance problems, and there may be other quirks lurking. Note that it generally works for data and formulas, but otherwise is completely untested.

For the number of columns the same applies to the MAXCOLCOUNT_DEFINE in sc/inc/address.hxx, just that the value must be a multiple of 16 instead.
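
Purely as an illustration -- the exact lines differ between OpenOffice.org versions, and the values below are hypothetical examples rather than tested settings -- the edit in sc/inc/address.hxx amounts to something like this:

// sc/inc/address.hxx (sketch only; locate the existing declarations, which may differ slightly, and change their values)
#define MAXROWCOUNT_DEFINE 131072   // row limit, must stay a multiple of 128 (default 65536); enough for the 100k rows mentioned above
#define MAXCOLCOUNT_DEFINE 256      // column limit, must stay a multiple of 16 (default 256)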

My text file is truncated at 65,536 rows, and I was presented with an error message.

I hope this will be corrected soon; I'm quite desperate, since I will be using OpenOffice for my final exam.

 

Sr. Linux Administrator

Here you see the specifics of the job announcement.

Company:
Apex Systems

Country:
United States

Title of job:
Sr. Linux Administrator

Job Description:
Apex Systems Inc is a technical staffing firm that assists companies and organizations with information technology staffing needs in every industry. Our client has an excellent Linux Administrator position available for the right candidates. If you are looking for an exciting, innovative opportunity with the chance to excel, then this is the opportunity for you.

We are seeking a Sr. Linux Administrator!

Summary:
The ideal candidate will have a very strong background in Linux administration, with a minimum of 3 years devoted to Linux administration. The candidate must also have very strong experience with web servers as well as application servers, including installation, configuration, and administration. The following technologies are strongly desired, but not required: Windows 2000 & 2003, SQL Server, JBoss administration, and Java.


If you feel you are a qualified candidate please email your resume to bserra@apexsystemsinc.com with **Linux Admin** in the subject line.

Primary Skills:
Linux , Apache Webserver

Secondary Skills:
Java , JBoss

Salary Range:
70 K - 80 K USD

Telecommuting:
No

Industry Experience:


Education:
Technical Ability / Experience is all that counts (Any)

Permits:
GreenCard or other US Work permit needed
Work permit for United States needed

Company Information:
Name: Intervise, Inc.
Email: agibbs@intervise.com
Telephone: (240) 599-9326
Address: 10110 Molecular Dr.
City: Rockville
ZIP Code: 20850
Country: United States
Web:

 

Lead Systems Engineer

Here you see the specifics of the job announcement.

Company:
lastminute.com labs

Country:
United Kingdom

Title of job:
Lead Systems Engineer

Job Description:
We're an innovation team creating beta applications as part of lastminute.com / Travelocity Europe, based in central London, UK. This is the person who will create our environment for rapidly developing and launching beta applications - who can design it and make it all work at the push of a button. Will need some serious Linux skills, lots of database knowledge, power over networks, scripting and automation abilities, build management and source control mastery and the ability to deal with vendors for equipment and hosting. Most of our work is Ruby on Rails, with a variety of client technologies.


See complete Job description

Primary Skills:
Linux , Apache Webserver

Secondary Skills:
MySQL , Ruby on Rails

Salary Range:
Negotiable

Telecommuting:
No

Industry Experience:


Education:


Permits:
EU Work permit needed
Work permit for United Kingdom needed

Company Information:

Name: lastminute.com labs
Email: labsjobs@googlemail.com
Telephone: 1234
Address: 39 Victoria St
City: London
ZIP Code: SW1H0EE
Country: United Kingdom
Web:

 

How To Patch Running Linux Kernel Source Tree

Some people like to know how to patch a running Linux kernel. Patching a production kernel is risky business; the following procedure will help you fix the problem.

Step # 1: Make sure your product is affected

First, find out if your product is affected by the reported exploit. For example, the vmsplice() exploit only affects RHEL 5.x; RHEL 4.x, 3.x, and 2.1.x are not affected at all. You can always obtain this information from the vendor's bug reporting system, Bugzilla. Also make sure the bug affects your architecture; for example, a bug may only affect 64-bit or 32-bit platforms.
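
For a quick check of what you are actually running, the standard commands below print the kernel version and architecture, and (on RHEL / CentOS) the distribution release; compare these against the advisory:

# uname -rm
# cat /etc/redhat-release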

Step # 2: Apply patch

You are better off applying and testing the patch in a test environment first. Please note that some vendors, such as Red Hat and SUSE, modify or backport the kernel, so it is a good idea to apply the patch to their kernel source tree. Otherwise, you can always grab the latest kernel version and apply the patch to that.


Step # 3: How do I apply kernel patch?

WARNING! These instructions require the skills of a sysadmin. Personally, I avoid recompiling any kernel unless absolutely necessary. Most of our production boxes (over 1,400) are powered by a mix of RHEL 4 and 5. A wrong kernel option can disable hardware, or the system may not boot at all. If you don't understand the internal kernel dependencies, don't try this on a production box.

Change directory to your kernel source code:
# cd linux-2.6.xx.yy
Download and save the patch file as fix.vmsplice.exploit.patch. You can view its contents with the cat command:
# cat fix.vmsplice.exploit.patch
Output:

--- a/fs/splice.c
+++ b/fs/splice.c
@@ -1234,7 +1234,7 @@ static int get_iovec_page_array(const struct iovec __user *iov,
 		if (unlikely(!len))
 			break;
 		error = -EFAULT;
-		if (unlikely(!base))
+		if (!access_ok(VERIFY_READ, base, len))
 			break;
 
 		/*

Now apply the patch using the patch command, run from the top of the kernel source tree:
# patch -p1 < fix.vmsplice.exploit.patch
Now recompile and install the Linux kernel.
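
The exact rebuild procedure depends on your distribution and configuration, but on a plain kernel source tree it looks roughly like this (a sketch only, reusing the running kernel's config where available and assuming you know how to recover if the new kernel fails to boot):

# cp /boot/config-$(uname -r) .config   # start from the running kernel's configuration
# make oldconfig                        # answer prompts for any new options
# make                                  # build the kernel and modules
# make modules_install                  # install modules under /lib/modules
# make install                          # install the kernel image and update the boot loader
# reboot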

I hope this quick and dirty guide will save someone some time. On a related note, Erek has unofficial patched RPMs for CentOS / RHEL distros.