
Open source project: Func, the Fedora Unified Network Controller

Author: Michael DeHaan


Func had an interesting beginning. It began not in a whiteboard-lined conference room, but in a small coffeeshop in Chapel Hill, North Carolina. Greg DeKoenigsberg, Adrian Likins, Seth Vidal, and I were discussing how to make Linux easier to manage for large install bases. That’s when we came up with the idea for Func.

While Fedora contains excellent open source management applications for a variety of tasks, it still lacked a good remote scripting framework offering features roughly analogous to those the system-config-* applications provide locally. It turns out this was something many of us had wanted to write for a long time, but for some reason we never did. So, why not build it?

A fair amount of commercial management software seems to get built and sold without consulting the people who end up using it–systems administrators. While these applications may present extremely well-crafted graphical user interfaces with enterprise-grade reliability and scalability features, they often lack solid scripting ability or require development using complex SOAP APIs to get things done.

For managing very large install bases, these aspects impose barriers to automation. System administrators tend to prefer things written in Perl, Python, or bash. Automation is critical.

The most commonly used remote management tool for Linux is probably SSH. While it is a very useful tool for manipulating a single machine remotely, it is hard to integrate into an environment where machines are frequently reinstalled or where complex remote actions need to be scripted. SSH wasn’t meant to be a multi-system remote scripting tool, and it’s definitely not meant to be something you build other applications on top of. Furthermore, integrating SSH key deployment with kickstart (even with tools like Cobbler to help) can be difficult.

On the other end of the management spectrum, there are configuration management systems such as Puppet, cfengine, and bcfg2. These solutions are great for pushing configuration files around and describing the way infrastructure should look (or making it look that way), but are not as well-suited for remote scripting and one-off tasks.

We wanted to create a solution that filled this void–something absolutely simple, rapid to deploy, easy to use and easy to expand. This would become Func.

Furthermore, we wanted to challenge ourselves, so we decided to create the first release of Func in two weeks’ time. This was a goal we managed to beat, as we had it submitted to Fedora in about eight days.

Func works by having a very minimalistic daemon (funcd) installed on each managed machine, which we call a “minion.” Each minion, when it is first run, receives SSL certificates from a remote “certmaster,” which can either be automatically signed or manually approved by an administrator. Client software, whether the command line tool (“func”) or the Client API, can then address specific minions from the central server (called the “overlord”), or even address a large set of them at once. Communication is currently only from the overlord to the minion, but intra-minion communication is coming.

To help describe what Func can do, the following command shows the available system memory on all example.org machines being managed:

func "*.example.org" show hardware systemMemory

The above also illustrates Func’s globbing feature. Similar globs, such as “*” or “a*” work as expected–communicating with all servers, or all servers starting with “a”, respectively. Of course, addressing only a single system works as well.

The Func project page also lists example code for doing the same thing (for various func modules) in just a handful of lines of Python. This should be easily understandable even if you do not know Python. (And if you don’t, it’s easy to pick up.)

Here’s a quick Python example:

import func.overlord.client as fc

client = fc.Client("*.example.org;*.example.com")   # address two glob patterns at once
client.service.start("acme-server")                 # start the service on every matching minion

The initial Func release contained modules for remotely manipulating services, viewing hardware inventory (via Smolt), running remote commands, and many other tasks commonly found in systems management apps. More importantly though, it exposed a trivially simple pluggable model, allowing any application to drop in a module on a remote machine and instantly have it be accessible by the Func “overlord”, whether by command-line or Python scripting. Func is not strictly for systems management–Func is a truly pluggable framework for any application that needs two-way secure communication.

An example of Func’s power is shown by the func-inventory application. Func-inventory is an application that checks on all of the nodes in your infrastructure, and inventories all the Func modules they have running. The results are stored in git (a distributed version control system), and can be viewed with apps like “gitk,” “gitweb,” or “git log.” Func-inventory can therefore be used to see if drives disappear, or if new packages are installed. It is very easy to use Func-inventory to report on all types of changes throughout an organization.

While this is interesting, it is more impressive to note that Func-inventory is only about 200 lines of Python, and was written in only half of a work day. Func contains a very powerful scripting API. Func-inventory ships as part of Func and is installed into /usr/bin.

Other applications contained in Func’s source tree as examples include an exploding battery finder for laptops (which would have been very handy earlier this year) and a failed drive detector (that works by using SMART). Each of these applications is really only a handful of lines of Python. If you’re a Perl or bash hacker, Python is very easy to pick up, and Func may get you hooked.

Another useful feature of Func is newly added support for parallelism. Func operations running on remote machines may be slow to complete. They can now be executed in multiple processes, with Func handling the multi-process aspects and combining results as if things were executed in a single process. This is supported both via the Func command line and the Python API. More performance-related tweaks will go into Func as time goes on.
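
Here is a rough sketch (my own, not from the Func documentation) of what a parallel run might look like from the Python API. The nforks keyword and the command.run call are assumptions patterned on the service.start example above, so check the Func docs for the exact names:

import func.overlord.client as fc

# Hypothetical sketch: fan a command out to many minions in parallel.
# The nforks keyword (number of worker processes) is an assumption, not
# something confirmed by this article; consult the Func documentation.
client = fc.Client("*.example.org", nforks=10)
results = client.command.run("/bin/hostname")   # results come back merged, keyed by minion
for minion in sorted(results):
    print minion, results[minion]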

Func is still young. Since starting the project only a few months ago, interest in Func has grown rapidly. It has a IRC channel (#func) on irc.freenode.net, as well as a mailing list. We’ve received a wide variety of patches, and are happy to see the beginnings of support for other distributions, with contributions including both BSD and OpenSuSE. The great advantage to open source is in being able to collaborate with such a diverse user base. Whether you have an idea for a new module, need a secure network communication path for your new application, or just want to use existing Func modules to automate your environment, everyone is invited to stop by IRC and the mailing list.

Want to install Func and try it out? Func is available in Fedora and EPEL. See the Func project page for more details.

We would like to reiterate that Func is your application–by sharing ideas and features among its users, Func grows more powerful for everyone that uses it–the true beauty of Open Source. If you write an interesting Func module, we hope you’ll share it with us. Func modules are easy to write and we expect to amass a very large library of them.

If you have a need to manage a very large number of remote machines and are wish for something a bit more sophisticated than SSH for automation purposes–or just need a secure remote communications channel for a new project–Func is the application for you.

Resources

Source : Func

 

Acer to launch low-cost PCs this year - paper

Acer, the world's No. 3 computer vendor, plans to start selling low-cost laptop PCs this year, following a recent strong reception for similar models from competitors, media reported on Wednesday.

Acer, which previously said it had not planned to sell cheap notebook computers, has changed course to develop PCs to target a new customer base, the Chinese-language Commercial Times quoted company Chairman J.T. Wang as saying.

The company planned to launch the PCs in the second or third quarter of this year, the report said.

It said that Acer was still developing the new model, which could be 7-9 inches wide, and could cost around $470.

Acer declined to comment on the report.

On Wednesday, shares of Acer had risen 2.39 percent to T$49.25 by 0256 GMT, outperforming the benchmark TAIEX index which advanced 0.72 percent.

Taiwan's Asustek Computer Inc, a competitor to Acer, launched its line of low-cost Eee PC laptops last year, with a price tag of as little as $200.

Acer said the new computers would not cannibalise its current business, as such models were aimed at low penetration markets such as PCs for children and developing markets, according to the report, echoing similar previous comments from Asustek.

Asustek has so far been successful in marketing and selling its child-friendly Linux-based notebook globally, although profit margins for the products are thin, analysts have said.

Acer competes closely with China's Lenovo and larger rivals Hewlett-Packard and Dell.

The firm posted a 77 percent surge in its fourth quarter net profit earlier in the week and said it expects to ship 40 percent more notebook PCs this year from 2007, while its overall PC shipments would rise by 30 to 35 percent.

"Yeah more Linux-based notebook !!!"

 

NEC shows off Linux mobile phones

NEC has thrown its weight behind mobile Linux with the introduction of four handsets based on the LiMo specification.

LiMo is the result of a push towards a shared, hardware-independent mobile phone operating system by several handset manufacturers, including Motorola, LG Electronics and Panasonic.

NEC describes its handsets as the world's first LiMo-compliant mobile phones, even though several of its partners in the LiMo Foundation have already released details of compatible handsets, including Motorola and Panasonic.

"The breadth of the initial generation of LiMo handsets consolidates LiMo's role as the unifying force within mobile Linux and highlights the strong momentum established in the 12 months since LiMo was launched," said Morgan Gillis, executive director of the LiMo Foundation.

Among NEC's new phones is the N905i, a 3G/GSM phone with HSDPA for data connectivity, mobile TV reception, GPS and support for wireless payment services.

 

Open source workers can earn more money!!!

IT workers who specialise in free and open source software are earning more than the national average for IT, according to the results of Australia's first open source census.

The average full-time salary of respondents to the Australian Open Source Industry and Community Census was between $76,000 and $100,000, but the 10 percent working on open source full time were earning “a lot more,” according to Pia Waugh of Waugh Partners, the consultancy that conducted the survey.

“The people who were working on free software full time were earning more than the average for the general community,” she said.

When compared to Australian salaries across the board, salaries for full time open source workers were almost three times the national median.

Women IT workers didn't fare as well though – the full time women workers who responded were earning an average of $46,000 to $60,000, Waugh said.

Previewing the results of the census at Linux.conf.au on Friday, Pia and Jeff Waugh of Waugh Partners Consultancy said the online survey attracted 327 respondents who were working on open source software in either a personal or professional capacity. The majority of them (57 percent) were hobbyists who don't get paid to work on open source. Twenty-four percent were working on open source in their paid job some of the time, while the highest paid segment were the 10 percent working on open source full time.

Waugh Partners believed the sample size was greater than 5 percent of the total open source industry size, making it a credible representation of the whole industry.

“It suggests that people who work with open source are likely to have better skills and are likely to get better jobs,” Jeff Waugh said. “That is a really good message to take out to the education sector. We hope it will reinforce the decision by universities who do open source software, and the ones who aren't doing it will need to compete.”

While many of the respondents said their knowledge of open source was a self taught skill, Queensland universities led the field of institutions attended by the respondents.

The majority of respondents to the survey had completed some of their study at Queensland University of Technology (QUT), while the University of Sydney came second. Two of the top four unis nominated were in Queensland.

Source : itnews

 

Red Hat unveils three new open-source projects

Red Hat has said its JBoss Enterprise SOA Platform will be available later this month and introduced three new open source projects designed to infuse transaction, management and other capabilities into its middleware.

The announcements come a day after the company laid out its seven-year goal to own 50% of middleware deployments using JBoss to anchor platforms for portals, SOA, and application servers and services. Red Hat said its open source SOA platform would incorporate innovation derived from an array of open source projects offered on JBoss.org, three of which have just been introduced.

The three projects are Black Tie, which will create a transaction engine to integrate or replace legacy transaction monitors (specifically Tuxedo) with the JBoss.org Transactions project; JBoss DNA, a registry for network services; and RHQ, an SOA management platform that will eventually support both JBoss and Fedora platforms.

The SOA Platform already incorporates components that started out as open source projects, including JBoss ESB, JBoss jBPM and JBoss Rules.

ESB provides application and service integration, mediation, transformation and registry technology; jBPM adds service orchestration and workflow; while Rules includes management and integration of business policy and rules, as well as content-based routing that relies on rules.

Red Hat is following the same model it uses for its Linux OS development: court innovation among the vast Fedora open source project community, then tap the results for inclusion in Red Hat Enterprise Linux, where they can be stabilised and supported.

"We want to be disruptive with our innovation, but not disruptive in production" environments, said Sacha Labourey, vice president of engineering middleware at Red Hat.

The SOA platform is designed to provide infrastructure to support SOAs, and application and business-process integration. The platform combines enterprise application integration, business process and rule management and event-driven architecture technologies. Red Hat officials say the platform is architected to support users involved in small-scale integration projects to full-blown SOA infrastructure deployments.

Red Hat has taken on a number of partners to complement its efforts, including Active Endpoints, Amberpoint, SeeWhy, SOA Software, Vitria Technology, Information Builders and iWay Software.

Red Hat said its Black Tie project would kick off in 60 days. The JBoss DNA project, the first in a series of SOA governance projects, is slated to begin in 30 days, with more projects to be announced in 60 days. The RHQ project is already up and running.

Craig Muzilla, vice president of the middleware business unit, said it was hard to say when commercial products would spring from the projects, but he said users could look for results by year-end.

BlackTie will add C, C++ and mainframe compatible transaction capabilities to the JBoss.org Transactions project. The project will focus on emulating transaction-processing monitor application programming interfaces, and providing open source based legacy services that include security, naming, clustering and transactions.
Red Hat said the project would support the ATMI programming interface to ease migrations. The Black Tie project is derived from technology from Ajuna, which JBoss acquired in 2005 before being bought by Red Hat.

With its governance project, Red Hat hopes to set the tone for open source SOA management. JBoss DNA, a metadata repository and UDDI registry, is the kick-off project for what will be a number of management components, according to Muzilla. The project is based on technology Red Hat acquired when it bought MetaMetrix in April 2007.

Red Hat also unveiled its RHQ management project, which it said would serve as the code base for the JBoss Operations Network v2.0, which is due to ship in the first half of this year. The Operations Network is the management foundation for Red Hat's middleware strategy. The RHQ project aims to develop a common services management platform.

 

Ubuntu Goes Commercial?

Bruce Byfield recently wrote a post in which he raises an interesting question: given that Canonical will try to offer commercial software from a specific repository, would anyone use it? And if not, could it alienate other users of Ubuntu from using the distribution at all?

He goes on arguing, over a whole two-page article, about something that I don't even think exists. His main point is that this idea of commercial repositories has been tried before and it didn't work, so why try now? After all, it's just a matter of time until something else replaces our current software:

A download service might find a temporary niche in offering software for which no free equivalent exists. For instance, despite recent improvements in apps like Kooka and Tesseract, someone who regularly needed to convert scanned text to a usable format might welcome a GNU/Linux version of OmniPage. The trouble is, given the speed with which free software is developing, such a market would be temporary, lasting a year or two at most. A service specializing in these niches would continually lose out to maturing free software, with no prospect of replacement products.
But why doesn't he see that this service may be no different from other software distribution methods? He seems to be arguing more against the fact that proprietary and commercial applications are offered for Linux at all than against the fact that they are provided in Ubuntu. But, as it seems to me, Canonical's main reason for doing this is not the Ubuntu desktop user base; it's business users, and maybe even Ubuntu Server users, who may use those proprietary applications for their businesses and need a standard way of installing them. Why should the way of installing Parallels be different from the way of installing OpenOffice.org? It should not.

Sun has its own software distribution system, just as Apple's Mac OS X and MS Windows do. Why is it forbidden for Linux distributions to have one that includes commercial software?

I can provide an example of commercial software that I have used and had to install on Linux: IBM Rational ClearCase (and trust me, moving to other version management tools was much more expensive in human-hours because of the huge amount of code and fast workforce turnover). Yes, there are free/open source alternatives, but they were not viable in that specific case.

I see the offer by Canonical as very pragmatic and practical, and not hurting Ubuntu in any way. Ubuntu is a Linux distribution; Canonical is the company behind it, whose goal is to make money. So what is the problem if they try to monetize the free infrastructure they helped build? The infrastructure is and will remain free, and as there's no additional effort required (except maybe for a billing system), Canonical has nothing to lose and much to gain.

Here's another question while we're at it: why doesn't the author criticize Red Hat's model, where you pay for the distribution first and then, if you use proprietary software, pay for that software once more? Is that so much better? I don't see users ditching Red Hat and its siblings (Fedora and CentOS) just because Red Hat has proprietary parts in it.

And I don't believe that Ubuntu users will drop using Ubuntu because Canonical has proprietary repositories.

I side with Canonical in this specific case not because I'm pro-Ubuntu. While I am pro-Ubuntu, I'm really a distribution-agnostic person (although I do have some emotional and personal allegiance to Gentoo). But I think the author is just reacting emotionally to the offering of something proprietary for Linux. While it is perfectly fine for some users to be upset, business people might actually be glad that they will be able to get the software they want or need anyway in a standard fashion.

 

SCO Lives! Aarrgh! Rawrr!

The more I watch SCO's progress -- from Unix vendor to patent-wielding lawsuit machine to bankrupt has-been, and now a privately funded corporate reboot -- the more I feel like I'm watching one of those cheesy 1960s Japanese monster movies with a nigh-unkillable creature from outer space. The super heat ray didn't work on the monster, the mysterious Element X that spews out Radiation Y didn't have any effect either, and now the scientists are falling back on the absolute last resort plan of them all: Awaken Godzilla! Would that we had Godzilla here, though.

Yes, SCO has lurched to life once more. The details of SCO's resurrection are still sketchy, but the plan seems plain to anyone who's followed the story so far. The way I see it, the "tremendous investment opportunity" that SCO's new investors are talking about in their statement is a) to drag out the court battle with Novell (NSDQ: NOVL) and IBM (NYSE: IBM) as long as humanly possible, b) score as many wild hits as possible in court to scare people away from Linux and open source, and c) Profit!

I do have to wonder how much SNCP, SCO's new investor, understands about what it's getting into. The one sentence from the release that hints at a business plan other than suing everything that moves is "SNCP has developed a business plan for SCO that includes unveiling new product lines aimed at global customers", which is as vague as trying to predict the weather a year from Monday. Do you know of anyone with even a kernel (pun intended) of technical savvy who would have anything to do with SCO at this point, either as an investor, a customer, or an employee?

My hope is that SNCP will pump a bunch of money into SCO, discover that there's no immediate benefit to doing so other than protracted legal struggles and, eventually needing to pay off the $25 million it owes Novell, give up and move on to another boondoggle. My nightmare, however, is a rejuvenated SCO that manages to continue being an indefinite irritant in the side of open source everywhere. Having Mothra nesting in the Tokyo Tower seems positively benign in comparison.

Full Article

 

The Demise Of Commercial Open Source

Steve Goodman, co-founder and CEO of network management startup PacketTrap Networks, is predicting that commercial open source companies are doomed to fail. Goodman's not railing against open source or commercial software, per se. It's converting the former into the latter that he sees as inherently flawed.

Goodman makes his argument in a blog posting published on PacketTrap's Web site. "The interest of a commercial vendor is opposite to that of an open source project," he writes. "Commercial vendors answer to road maps, salespeople, and shareholders."

A white paper lays out the argument in more detail. In it, PacketTrap refers to commercial open source as "proprietary open source" and identifies 21 startups--from ActiveGrid to Zimbra--that it puts into that bucket.

What does Goodman think is the right approach? His own, of course. PacketTrap is a commercial software company that integrates open source network monitoring and management tools into its own PT360 Tool Suite. Rather than trying to manage an open source project as, say, MySQL has done, PacketTrap leaves project management to the open source community and concentrates on developing a commercial platform that works with the code that community delivers, such as Nagios for network monitoring.

PT360 is in beta testing now. It's aimed at the mid-market, though large companies such as Boeing, Home Depot, Pfizer, and the U.S. Navy are early adopters.

A basic version of PT360 is free, while a professional version is due in the next few weeks. This so-called "freemium" model -- give a product away, then charge for a better version -- has its own critics.

Full Article

 

OpenOffice Cannot Open or Import Text Files Larger Than 65,536 Rows

This sucks: the OpenOffice 2.3 spreadsheet cannot open or import text files that are larger than 65,536 rows. Basically, I need 100k rows. However, it is possible to recompile OO to extend the row limit. From the OO wiki hack page:

Well, it depends on what your goal is. For personal use you may set MAXROWCOUNT_DEFINE in sc/inc/address.hxx to a different value, multiple of 128, and recompile the application respectively the libsc680*.so and shove it under your existing installation. However, doing so implies that you don’t save to any binary file format like Excel or whatsoever, otherwise you risk loss of data. You’ll also encounter drawing layer misfits in higher row numbers, may experience performance problems, and there may be other quirks lurking. Note that it generally works for data and formulas, but otherwise is completely untested.

For the number of columns the same applies to the MAXCOLCOUNT_DEFINE in sc/inc/address.hxx, just that the value must be a multiple of 16 instead.

My text file was truncated at 65,536 rows, and I was greeted with the following error message:

I hope this will be corrected; I'm so desperate, since I will be using OpenOffice for my final exam.

 

California firm buys Utah-based Linux Networx

A Silicon Valley company has bought the assets of Utah supercomputer maker Linux Networx Inc. for an undisclosed amount of stock.
Silicon Graphics Inc. acquired key Linux Networx software, patents, technology and expertise, Sunnyvale, Calif.-based SGI said Thursday.
It isn't clear what will happen to Linux Networx. David Morton, chief technology officer of the Bluffdale-based company, declined to comment.
SGI plans to keep an office at an undetermined location in the Salt Lake City area.
"We have made offers to a portion of Linux Networx employees, but they haven't been obligated to answer yet. So we can't say how many will be joining us," said Joan Roy, SGI senior director of marketing.
Linux Networx designs and makes clustered high-performance computers based on the Linux operating system. Its machines are used in scientific research, oil and gas exploration, and graphics rendering.
"SGI has a really nice fit with some of the things that Linux Networx did really well. It was a pioneer in cluster computing. They've made a lot of progress in the marketplace in creating really high-performance computing solutions," Roy said.
Customers have included defense contractors Lockheed Martin and Northrop Grumman, the Lawrence Livermore National Laboratory, Los Alamos National Laboratory, NASA, BMW, Toyota and Royal Dutch Shell.
Linux Networx is privately held. Investors have included the Canopy Group, Wasatch Venture Fund, Oak Investment Partners and Tudor Ventures.

source : Salt Lake Tribune

 

'Linux Next' Begins To Take Shape

Make no mistake about it, the Linux 2.6.x kernel is a *large* undertaking that just keeps getting bigger and bigger. Apparently it's also getting harder to maintain, in terms of ensuring that regressions don't occur and that new code is fully tested.

That's where the new 'Linux Next' effort comes in.

Linux next started off as a 'dream' of kernel maintainer Andrew Morton, who has noted that few kernel developers are testing other kernel developers' development code, which is leading to some problems.

Morton has proposed a "linux-next" tree that once per day would merge various Linux subsystem trees and then run compilation tests after applying each tree. While that may sound simple enough, in practice it's no small task.

Kernel developer Stephen Rothwell has stepped up to the plate and announced that he will help run part of the Linux next tree. While the effort could well serve to make the Linux development process more complicated, its goal clearly is to ensure higher overall code quality by making sure code merges actually work before Linus Torvalds pushes out an RC (release candidate).

The way I see it, from my simple layperson's point of view, Linux next forces code to be a whole lot cleaner before it gets submitted and forces more testing, earlier and more often, which ultimately is a great thing.

There has been some very 'healthy' discussion on the Linux Kernel Mailing List (LKML) about Linux next, with perhaps the most colorful language coming from none other than Linus Torvalds himself:

If you're not confident enough about your work, don't push it out! It's that simple. Pushing out to a public branch is a small "release".

Have the [EXPLETIVE DELETED] back-bone to be able to stand behind what you did!

It sure will be interesting to see how Linux-next plays out over time; I for one am very optimistic.

 

Scripting Scribus

Have you ever said, "This program is pretty nice, but I wish it would ..."? For applications that offer the capability, scripting gives users the ability to customize, extend, and tailor a program to meet their needs. Scribus, a free page layout program that runs on Linux (and Mac OS and Windows) uses the Python programming language for user scripting. Python scripting in Scribus can drastically improve your work flow, and it's relatively easy for beginners to not only use scripts, but also write them.

Scripts are useful for page layout in a few interrelated ways, including automating repetitive tasks and tasks that involve measuring, such as placing page elements and creating page guides.

Not much is required to use Python scripts in Scribus. If your distribution successfully runs Scribus, then you probably have the required Python components. For this evaluation, I downloaded, compiled, and installed the latest stable Scribus version (1.3.3.11). You can start a Python script from within Scribus by going to the Script menu and choosing either Execute Script..., which opens a dialog box for selecting a script to run, or Scribus Scripts, which lists the scripts in /usr/share/scribus/scripts, the directory that contains the official Scribus scripts. Placing additional scripts in that directory (as the root user or using sudo) makes those scripts available from the menu.

Two official scripts are provided: CalendarWizard.py and FontSample.py. Both feature dialog boxes with numerous options that showcase how Python scripts can extend the functionality of Scribus. The font sample script can take a long time to run, depending on your processor speed, memory, and number of fonts, but Scribus displays a handy progress bar showing the percentage of script completion.

Additionally, the /usr/share/scribus/samples directory contains 15 scripts intended not just for immediate use, but also as samples to study when creating your own scripts. The scripts in the samples directory are varied and range from a heavily commented boilerplate script (boilerplate.py) to functional scripts that, for example, set up a layout for CD pockets (pochette_CD.py), or add legends to images (legende.py). As the titles of some of the scripts indicate, many have comments and even dialog box text written in the native languages of the script authors, but the script description is usually in English.
More Scripts

More Scribus scripts are available online at the Scribus wiki's page on scripts and plugins. I found here a script to make empty space around an image inside an image frame -- something not yet possible in Scribus. The script works by drawing a second, empty frame 10 measurement units larger than the selected image or text frame. When I first ran the script, I had my default units set to inches, and the script created a 10-inch border around the image I selected. If you want to use this script without modification, be sure that your default units are set for points.

A more comprehensive approach to manipulating images uses a series of scripts for aligning an image inside its frame, scaling an image to fill a frame proportionally (i.e., scaling the image to the largest size possible within the frame while keeping its proportions intact), and scaling and aligning an image via a wizard and an advanced wizard that build upon the first two scripts. These scripts are great examples of how Python scripting extends Scribus's capabilities.

Using scripts that others have written is as simple as copying them from the Web page, pasting them into a text editor (preferably one that is aware of Python syntax, such as Emacs, Vim, Kate, or gedit), and then saving the script to a file ending in .py. You can then run the script from the Scribus script menu. The advantage of pasting the script into a syntax-aware text editor is that white space is important in Python, and a good editor will help you check that everything is aligned correctly.
Writing a script

Prior to doing the research for this article, I had not done any programming in Python. I did, however, have extensive experience using Adobe's scripting language for PageMaker, and I found that most of the principles carried over. A wonderful resource for beginners wanting to learn more about Python is the 110-page PDF tutorial A Byte of Python by Swaroop C H. It is well-written, easy to follow, and may be the best introduction to Python programming available.

Armed with a little bit of knowledge of Python, and having the scripting application programming interface (API) available online and from Scribus's help menu, I set out to write a couple of scripts. With all the sample scripts available, I did not have to start from scratch.

I began by modifying the script for making an empty space around an image so that the space would be 10 points regardless of the default measurement unit set by the user. To do that, I needed to get the current units setting, store it in a variable, temporarily set the units to points, and then reset the units to their original setting. To accomplish those tasks, I added the following commands to the script I downloaded:

* userUnit = getUnit() -- sets a variable storing the current units setting
* setUnit(0) -- sets the units to points (0); (1) is millimeters, (2) is inches, and (3) is picas
* setUnit(userUnit) -- resets the units to the original setting


The script as modified appears below. Because of the way the original author set up the script to make sure it is run from within Scribus, the commands I added needed to be prefaced with the scribus. module prefix.

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import sys

try:
    import scribus
except ImportError:
    print "This script only works from within Scribus"
    sys.exit(1)

def makebox(x,y,w,h):
    a = scribus.createImage(x, y, w, h)
    scribus.textFlowsAroundFrame(a, 1)

def main():
    if scribus.haveDoc():
        scribus.setRedraw(1)
        userUnit = scribus.getUnit()
        scribus.setUnit(0)
        x,y = scribus.getPosition()
        w,h = scribus.getSize()
        x2 = x - border
        y2 = y - border
        w2 = w + border * 2
        h2 = h + border * 2
        makebox(x2,y2,w2,h2)
        scribus.redrawAll()
        scribus.setUnit(userUnit)
    else:
        result = scribus.messageBox('Error','You need a Document open, and a frame selected.')

# Change the 'border' value to change the size of space around your frame
border = 10
main()


When I first started working with Scribus, I missed having the ability to make a single underline of a text or image frame. This capability is particularly handy when setting up page headers. Looking at the sample scripts, I saw that legende.py did something similar to what I wanted to do. That script gets the size and location of an image frame and then places a text box a few millimeters below the lower right corner of the box. I needed to do something similar, except that I needed my script to draw the line from the lower left corner to the lower right corner without an offset. So I modified the legende.py script and saved it as underline_block.py.

The key to making the script work is realizing that the getPosition function gets the x and y page coordinates of the upper left corner of the frame. To get the positions of the other corners, I need the height and width of the frame. When that information is stored in variables, then drawing the line is a matter of specifying the x and y coordinates of the lower two corners in relation to the height and width. The command createLine(x, y+h, x+l, y+h) accomplishes drawing the line from the bottom left to the bottom right. The full script is below:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

""" Draws a line below the selected frame. """

import sys

try:
    from scribus import *
except ImportError:
    print "This script only runs from within Scribus."
    sys.exit(1)

import os

def main():
    userUnit = getUnit()
    setUnit(1)
    sel_count = selectionCount()
    if sel_count == 0:
        messageBox("underline_block.py",
                   "Please select the object to add a line to before running this script.",
                   ICON_INFORMATION)
        sys.exit(1)
    x,y = getPosition()
    l,h = getSize()
    createLine(x, y+h, x+l, y+h)
    setUnit(userUnit)

if __name__ == '__main__':
    main()


Like any other type of programming, creating scripts is an iterative process. When writing or modifying a script, it is easy to work in a cycle of edit, save, run, check, and edit. The iterative approach also applies to improving scripts. After using the underline_block.py script for a while, I may want to modify it so that I can choose to add a line above the block rather than below it. To do that, I'll need to add a dialog box so I can choose the position. If I do that, I may want to add something to the dialog so I can choose the line style too. Each of the embellishments makes the script more general and more useful.
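
As a first step toward that dialog, something like the following sketch could work. It reuses the calls already shown above (getPosition, getSize, createLine) and assumes the Scripter's valueDialog() prompt is available; treat that call, the script name, and the prompt text as my own placeholders and check the Scribus scripting API before relying on them:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Hypothetical sketch: draw a rule above or below the selected frame.
# valueDialog() is assumed to exist in the Scripter API; verify before use.

import sys

try:
    from scribus import *
except ImportError:
    print "This script only runs from within Scribus."
    sys.exit(1)

def main():
    userUnit = getUnit()
    setUnit(1)                                  # work in millimeters, as in underline_block.py
    if selectionCount() == 0:
        messageBox("rule_block.py", "Please select a frame first.", ICON_INFORMATION)
        sys.exit(1)
    where = valueDialog("Rule position", "Type 'above' or 'below':", "below")
    x,y = getPosition()
    l,h = getSize()
    if where.strip().lower() == "above":
        createLine(x, y, x+l, y)                # along the top edge of the frame
    else:
        createLine(x, y+h, x+l, y+h)            # along the bottom edge, as in the original script
    setUnit(userUnit)

if __name__ == '__main__':
    main()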

As the examples illustrate, Python scripts are a useful way to customize and extend Scribus regardless of your level of programming experience.

 

Linpus offers a Linux for newbies and experts alike

Linpus Technologies has long been known in Taiwan for its Linux distributions. Now, it wants to become a player in the global Linux market with its new Linux distribution Linpus Linux Lite, which features a dual-mode user interface. One mode is for people who may never have used a computer before; the other is for experienced Linux users.

According to the company, these two modes are Easy and Normal. Easy mode uses large, colorful icons, arranging software in terms of its use. So, for example, instead of offering users a choice of Web browser and e-mail programs, there's an icon for the Internet. Under this icon, there are other icons for Firefox, as well as links that use Firefox to automatically connect to Google Maps, Wikipedia and YouTube. If users want a more traditional PC interface, they merely need to tap an icon on the master tool bar and they'll switch to Normal mode, which is a KDE 3.5x desktop.

This functional approach to the desktop is quite similar to that of Good OS' gOS 2.0. With gOS, which is deployed on Everex's inexpensive gPC, both Internet and office applications are built around Google's online software stack. Linpus offers a middle-of-the-road approach with an easy-to-use, functional desktop interface, but with the more usual PC-based applications underneath it.

Linux Lite is also designed to run on minimal hardware. Linpus claims the product will run well on PCs with 366MHz processors, 128MB of DRAM (dynamic RAM) and 512MB of disk space. At the same time, Linux Lite comes with an assortment of open-source software staples, such as OpenOffice.org.

"Our objective with this product was to create an operating system that offered choice and addressed specifically the ease-of-use needs of end users of UMPC [Ultra-Mobile PC] devices," Warren Coles, Linpus' sales and marketing manager, said in a statement. "If you are using a small screen, if you are a child, older person or inexperienced user, you will find the icon interface particularly helpful."

While Linpus would be happy to see end users pick up Linux Lite, the company is really targeting hardware vendors. "Our company has always been committed to creating user-friendly, mass-market Linux," Coles said. "Because of this, we have invested our time into not just being another desktop distribution, but in resolving all the issues involved in getting desktop Linux to market.

"Specifically, we provide unprecedented levels of support for hardware vendors —- and we recently pioneered our own preload solution and have worked extremely hard to create stable sleep and suspend modes for notebooks," he said.

"By having operating system, application and driver teams working side by side, in close proximity to the hardware manufacturers, we offer tremendous quality, value and time-to-market strengths," Coles added. "Ultimately, both the consumer and Linux enthusiasts benefit from a smooth, stable, out-of-the-box Linux operating system at the best price."

Reading between the lines, Linpus is encouraging would-be North American resellers and systems integrators to work with Linpus and Taiwanese PC vendors to deliver inexpensive, small laptops to the American market. Asustek has already shown this approach can be successful, with its popular Eee Linux desktop and laptop PCs.

 

Government/corporate project declares plan to promote OSS within the EU

An ambitious initiative that aims to bring open source software to a new level in Europe hopes to make competition with US companies more interesting. QualiPSo is a four-year project partly funded by the EU. Its mission is to "bring together the major players of a new way to use and deploy open source software (OSS), fostering its quality and trust from corporations and governments."

QualiPSo members include corporations, universities, and public administrations (PA) of various kinds from 20 countries. The main industrial players are Mandriva, Atos Origin, Bull, and Engineering Ingegneria Informatica. While Qualipso founders include organizations from China and Brazil, the main project focus now is on Europe.

QualiPSo was officially launched a year ago. Last month, the group held its first international conference in Rome to present its mission and its initial results. On the first day of the two-day event, several speakers explained why their companies are promoting OSS on such a large scale. The second day was devoted to presenting the main Qualipso subprojects.

What does QualiPSo do?

E-government is the area where Qualipso members hope to make the most money, and where they think they can impact the most EU citizens. Many citizens couldn't care less whether their tax office runs closed or open source software, or if the provider of that software is an American or European corporation, but they do care if filing tax forms online is safe and cheap, and results in quick action. Thus current plans call for QualiPSo to work in 10 distinct areas toward interoperability, a word with many different meanings.

During a face-to-face talk, QualiPSo representatives explained to me what they exactly mean by interoperability. According to QualiPSo, large organizations spend about 40% of their IT budget on integration projects. OSS is not interoperable per se, but often the real obstacles are not in the code. Development and publication of proper design practices or open, fully compatible software interfaces are the first and simplest space in which QualiPSo will work to improve OSS interoperability.

On a different plane, metadata such as software categories, relevant technologies, or developer skills aren't stored or presented in a coherent way in SourceForge.net, BerliOS, or other repositories. Next generation software forges from QualiPSo would make it possible for an integrator to build complete OSS products combining (and maintaining) components stored in different repositories with the minimum possible effort.

The last type of interoperability addressed by QualiPSo -- and maybe the most important -- is the organizational and bureaucratic one. E-government and quick business decisions remain dreams if the three different departments that have to approve a budget change do it through three different procedures incompatible in terminology, security, and interfaces. Qualipso members will provide support to integrate all such procedures or guarantee that they really are interoperable.

Thorough interoperability testing at all these levels is costly, time-consuming, and boring enough to attract little volunteer work, if any. To make such testing easier, QualiPSo plans to create lightweight test suites to evaluate the actual interoperability of OSS components and their quality from this and other points of view. More details are in the Interoperability page on QualiPSo's Web site.

Another interesting item in the QualiPSo agenda is the legal subproject. A single programmer merrily hacking in his basement for personal fun may simply patch and recompile GPL code or stick a GPL or similar label on any source code he releases online and be done with it. A large corporation or PA cannot afford legal troubles, especially if it operates in countries with legislation different from that of the USA, the country where most current OSS licenses were designed. As SCO and others have demonstrated, even when it's certain that the bad guys are wrong, proving it wastes a lot of time and money that would have been spent better elsewhere (especially if it was public money). QualiPSo plans to provide a family of OSS licenses for both software and documentation guaranteed to be valid under European laws, together with methodologies to evaluate and properly manage any intellectual property issues.

Four QualiPSo Competence Centers are scheduled to open in Berlin, Madrid, Paris, and Rome during the fall of 2008. Their purpose will be to make all the QualiPSo resources, services, methods, and tools mentioned here and on the Web site available to potential OSS adopters, whether they be individuals, businesses, or PAs.

Critics from the trenches

Roberto Galoppini and other bloggers have noted that the Qualipso reports cost a lot of money and contain little new information, and asked questions such as: Is the amount of public money going into QualiPSo excessive? Will that public money benefit only the corporations that are members of the project? Will the Competence Centers and other initiatives be abandoned as soon as public funding isn't enough to sustain them? During conference breaks I heard or overheard comments along these lines by several attendees.

Jean-Pierre Laisné, leader of the Competence Center subproject, acknowledged that much of the information in the Qualipso reports isn't new. However, he says, it still is information that needed to be organized and declared officially, in a way that constitutes a formal commitment to support OSS from local businesses to states and other large organizations in Europe that, for any reason, cannot or will not listen to hackers in the street. This is more or less the same thing that blogger Dana Blankenhorn said about the cost and apparent obviousness of QualiPSo.

Right now, QualiPSo is a way for Europe-based corporations to get the biggest possible slice of OSS-related contracts from large private or public European organizations. The fact that the group already includes Brazilian and Chinese members -- that is, that software houses in those countries may join the group to apply the same strategy in their home markets -- makes Qualipso all the more interesting, especially for US observers. Qualipso may become a home for software vendors outside the USA that want to kick IBM, Microsoft, Sun, and Oracle out of their local markets -- something no non-US company can do today alone.

If European PAs are to cost less, be more transparent and efficient, and generally move away from manual, paper-based procedures that are expensive and slow, there must be clear rules, tools, and practices to build and recognize quality OSS -- that is, software that is solid, reliable, actually interoperable in the real world, and completely compatible at all levels with local laws. However, hammering out all the deadly boring details of how to implement interoperable bureaucratic procedures in software is something that no volunteer is ever going to do. This is an area where a bit of assistance from the private sector in the form of an organization like QualiPSo wouldn't hurt, at least in some EU countries.

So far, it's not clear how open QualiPSo's operations will be, or how much its activities will benefit all of the European OSS community, not just QualiPSo members. Besides these concerns, in this first year there has also been grumbling about the lack of a published work plan and, in general, of enough information and interaction between QualiPSo and the community. There is still time to fix this now that the project has officially gone public.

However, QualiPSo may make it harder for European PAs at all levels, from parliaments to the smallest city or school council, to ignore OSS, no matter who proposes it. QualiPSo may officially bring OSS, even in Europe, to a level where you cannot be fired for not choosing Microsoft or any other proprietary software. The goal of the "Exploitation and dissemination" subproject is to "promote OSS at a political level, as well as laws and regulations supporting OSS." Mentioning QualiPSo reports with their EU blessings could also be an excellent argument for all the European public employees who promote OSS, such as the ROSPA group in Italy, to convince their managers that it is safe, after all, to create local IT jobs by buying OSS products and services by (any) local businesses.

Even the fact that companies and PAs have already spent public money may make it easier for citizens to demand control and involvement from their representatives, both inside QualiPSo and in any other situation where OSS gets many public praises but much less public funds. If OSS is so good that even the EU partners with big corporations to spread it, why isn't it used more?

All in all, there are plenty of good reasons to follow QualiPSo with interest and see where it will go in the upcoming months.

 

Discover the possibilities of the /proc folder

The /proc directory is a strange beast. It doesn't really exist, yet you can explore it. Its zero-length files are neither binary nor text, yet you can examine and display them. This special directory holds all the details about your Linux system, including its kernel, processes, and configuration parameters. By studying the /proc directory, you can learn how Linux commands work, and you can even do some administrative tasks.

Under Linux, everything is managed as a file; even devices are accessed as files (in the /dev directory). Although you might think that "normal" files are either text or binary (or possibly device or pipe files), the /proc directory contains a stranger type: virtual files. These files are listed, but don't actually exist on disk; the operating system creates them on the fly if you try to read them.

Most virtual files always have a current timestamp, which indicates that they are constantly being kept up to date. The /proc directory itself is created every time you boot your box. You need to work as root to be able to examine the whole directory; some of the files (such as the process-related ones) are owned by the user who launched the corresponding process. Although almost all the files are read-only, a few writable ones (notably in /proc/sys) allow you to change kernel parameters. (Of course, you must be careful if you do this.)

/proc directory organization

The /proc directory is organized in virtual directories and subdirectories, and it groups files by similar topic. Working as root, the ls /proc command brings up something like this:


1 2432 3340 3715 3762 5441 815 devices modules
129 2474 3358 3716 3764 5445 acpi diskstats mounts
1290 248 3413 3717 3812 5459 asound dma mtrr
133 2486 3435 3718 3813 5479 bus execdomains partitions
1420 2489 3439 3728 3814 557 dri fb self
165 276 3450 3731 39 5842 driver filesystems slabinfo
166 280 36 3733 3973 5854 fs interrupts splash
2 2812 3602 3734 4 6 ide iomem stat
2267 3 3603 3735 40 6381 irq ioports swaps
2268 326 3614 3737 4083 6558 net kallsyms sysrq-trigger
2282 327 3696 3739 4868 6561 scsi kcore timer_list
2285 3284 3697 3742 4873 6961 sys keys timer_stats
2295 329 3700 3744 4878 7206 sysvipc key-users uptime
2335 3295 3701 3745 5 7207 tty kmsg version
2400 330 3706 3747 5109 7222 buddyinfo loadavg vmcore
2401 3318 3709 3749 5112 7225 cmdline locks vmstat
2427 3329 3710 3751 541 7244 config.gz meminfo zoneinfo
2428 3336 3714 3753 5440 752 cpuinfo misc

The numbered directories (more on them later) correspond to each running process; a special self symlink points to the current process. Some virtual files provide hardware information, such as /proc/cpuinfo, /proc/meminfo, and /proc/interrupts. Others give file-related info, such as /proc/filesystems or /proc/partitions. The files under /proc/sys are related to kernel configuration parameters, as we'll see.

The cat /proc/meminfo command might bring up something like this:

# cat /proc/meminfo
MemTotal: 483488 kB
MemFree: 9348 kB
Buffers: 6796 kB
Cached: 168292 kB
...several lines snipped...
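
Reading these virtual files from a script is as simple as opening them like regular files. Here is a minimal sketch (my own, not from the original article) that pulls a couple of fields out of the listing above:

#!/usr/bin/env python
# Minimal sketch: read a few fields from /proc/meminfo, much as free does.
# The exact set of fields varies between kernel versions.
meminfo = {}
for line in open("/proc/meminfo"):
    key, value = line.split(":", 1)
    meminfo[key] = value.strip()

print "Total memory:", meminfo["MemTotal"]
print "Free memory :", meminfo["MemFree"]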

If you try the top or free commands, you might recognize some of these numbers. In fact, several well-known utilities access the /proc directory to get their information. For example, if you want to know what kernel you're running, you might try uname -srv, or go to the source and type cat /proc/version. Some other interesting files include:

  • /proc/apm: Provides information on Advanced Power Management, if it's installed.
  • /proc/acpi: A similar directory that offers plenty of data on the more modern Advanced Configuration and Power Interface. For example, to see if your laptop is connected to the AC power, you can use cat /proc/acpi/ac_adapter/AC/state to get either "on line" or "off line."
  • /proc/cmdline: Shows the parameters that were passed to the kernel at boot time. In my case, it contains root=/dev/disk/by-id/scsi-SATA_FUJITSU_MHS2040_NLA5T3314DW3-part3 vga=0x317 resume=/dev/sda2 splash=silent PROFILE=QuintaWiFi, which tells me which partition is the root of the filesystem, which VGA mode to use, and more. The last parameter has to do with openSUSE's System Configuration Profile Management.
  • /proc/cpuinfo: Provides data on the processor of your box. For example, in my laptop, cat /proc/cpuinfo gets me a listing that starts with:
    processor : 0
    vendor_id : AuthenticAMD
    cpu family : 6
    model : 8
    model name : Mobile AMD Athlon(tm) XP 2200+
    stepping : 1
    cpu MHz : 927.549
    cache size : 256 KB

    This shows that I have only one processor, numbered 0, of the 80686 family (the 6 in cpu family goes as the middle digit): an AMD Athlon XP, running at less than 1GHz.

  • /proc/loadavg: A related file that shows the average load on the processor; its information includes the load average over the last minute, the last five minutes, and the last 15 minutes, as well as the number of currently running processes.
  • /proc/stat: Also gives statistics, but goes back to the last boot.

  • /proc/uptime: A short file that has only two numbers: how many seconds your box has been up, and how many seconds it has been idle.
  • /proc/devices: Displays all currently configured and loaded character and block devices. /proc/ide and /proc/scsi provide data on IDE and SCSI devices.
  • /proc/ioports: Shows you information about the regions used for I/O communication with those devices.
  • /proc/dma: Shows the Direct Memory Access channels in use.
  • /proc/filesystems: Shows which filesystem types are supported by your kernel. A portion of this file might look like this:
    nodev sysfs
    nodev rootfs
    nodev bdev
    nodev proc
    nodev cpuset
    ...some lines snipped...
    nodev ramfs
    nodev hugetlbfs
    nodev mqueue
    ext3
    nodev usbfs
    ext2
    nodev autofs

    The first column shows whether the filesystem is mounted on a block device. In my case, I have partitions configured with ext2 and ext3 mounted.

  • /proc/mounts: Shows all the mounts used by your machine (its output looks much like /etc/mtab). Similarly, /proc/partitions and /proc/swaps show all partitions and swap space.

  • /proc/fs: If you're exporting filesystems with NFS, this directory has among its many subdirectories and files /proc/fs/nfsd/exports, which shows the filesystems that are being shared and their permissions.
  • /proc/net: You can't beat this for network information. Describing each file in this directory would require too much space, but it includes /dev (each network device), several iptables (firewall) related files, net and socket statistics, wireless information, and more.
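
As a quick illustration of two of the simpler files, here is the general shape of their contents (the numbers below are sample values, not taken from any particular machine):

# cat /proc/loadavg
0.20 0.18 0.12 1/80 11206
# cat /proc/uptime
350735.47 234388.90

The five fields of /proc/loadavg are the 1-, 5-, and 15-minute load averages, the number of runnable processes over the total number of processes, and the most recently assigned PID; /proc/uptime holds the seconds since boot followed by the seconds spent idle.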

There are also several RAM-related files. I've already mentioned /proc/meminfo, but you've also got /proc/iomem, which shows you how RAM memory is used in your box, and /proc/kcore, which represents the physical RAM of your box. Unlike most other virtual files, /proc/kcore shows a size that's equal to your RAM plus a small overhead. (Don't try to cat this file, because its contents are binary and will mess up your screen.) Finally, there are many hardware-related files and directories, such as /proc/interrupts and /proc/irq, /proc/pci (all PCI devices), /proc/bus, and so on, but they include very specific information, which most users won't need.

What's in a process?

As I said, the numerically named directories represent all running processes. When a process ends, its /proc directory disappears automatically. If you check any of these directories while they exist, you will find plenty of files, such as:

attr cpuset fdinfo mountstats stat
auxv cwd loginuid oom_adj statm
clear_refs environ maps oom_score status
cmdline exe mem root task
coredump_filter fd mounts smaps wchan

Let's take a look at the principal files:

  • cmdline: Contains the command that started the process, with all its parameters.
  • cwd: A symlink to the current working directory (CWD) for the process; exe links to the process executable, and root links to its root directory (a quick look at these entries follows this list).
  • environ: Shows all environment variables for the process.
  • fd: Contains all file descriptors for a process, showing which files or devices it is using.
  • maps, statm, and mem: Deal with the memory in use by the process.
  • stat and status: Provide information about the status of the process, but the latter is far clearer than the former.
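
For a quick, safe look at several of these entries at once, you can inspect the shell you are typing in (using $$, the shell's own PID):

# ls -l /proc/$$/cwd /proc/$$/exe /proc/$$/root
# tr '\0' ' ' < /proc/$$/cmdline; echo
# ls /proc/$$/fd

The first command shows the three symlinks, the second prints the NUL-separated command line in readable form, and the third lists the file descriptors the shell currently has open.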

These files provide several script programming challenges. For example, if you want to hunt for zombie processes, you could scan all the numbered directories and check whether the State line in each status file shows "Z (zombie)". I once needed to check whether a certain program was running; I did a scan and looked at the cmdline files instead, searching for the desired string. (You can also do this by working with the output of the ps command, but that's not the point here.) And if you want to program a better-looking top, all the needed information is right at your fingertips.
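
Returning to the zombie hunt, here is a rough sketch (assuming bash; the Name fallback is there because zombies usually have an empty cmdline):

#!/bin/bash
# Minimal sketch: list zombie processes by scanning /proc/<pid>/status.
for dir in /proc/[0-9]*; do
    # For a zombie, the status file contains a line like "State:  Z (zombie)".
    if grep -q '^State:.*Z (zombie)' "$dir/status" 2>/dev/null; then
        pid=${dir#/proc/}
        name=$(tr '\0' ' ' < "$dir/cmdline")
        [ -n "$name" ] || name=$(awk '/^Name:/ {print $2}' "$dir/status")
        echo "Zombie: PID $pid ($name)"
    fi
done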

Tweaking the system: /proc/sys

/proc/sys not only provides information about the system, it also allows you to change kernel parameters on the fly, and enable or disable features. (Of course, this could prove harmful to your system -- consider yourself warned!)

To determine whether you can configure a file or whether it's read-only, use ls -ld; if a file shows the "w" (write) permission, you can use it to configure the kernel in some way. For example, ls -ld /proc/sys/kernel/* starts like this:

dr-xr-xr-x 0 root root 0 2008-01-26 00:49 pty
dr-xr-xr-x 0 root root 0 2008-01-26 00:49 random
-rw-r--r-- 1 root root 0 2008-01-26 00:49 acct
-rw-r--r-- 1 root root 0 2008-01-26 00:49 acpi_video_flags
-rw-r--r-- 1 root root 0 2008-01-26 00:49 audit_argv_kb
-r--r--r-- 1 root root 0 2008-01-26 00:49 bootloader_type
-rw------- 1 root root 0 2008-01-26 00:49 cad_pid
-rw------- 1 root root 0 2008-01-26 00:49 cap-bound

You can see that bootloader_type isn't meant to be changed, but other files are. To change a file, use something like echo 10 >/proc/sys/vm/swappiness. This particular example would allow you to tune the virtual memory paging performance. By the way, these changes are only temporary, and their effects will disappear when you reboot your system; use sysctl and the /etc/sysctl.conf file to effect more permanent changes.
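
As a minimal illustration of both routes, using vm.swappiness purely as an example:

echo 10 > /proc/sys/vm/swappiness    # temporary; lost at the next reboot
sysctl -w vm.swappiness=10           # the same temporary change through the sysctl interface
vm.swappiness = 10                   # permanent: a line in /etc/sysctl.conf, applied at boot or with sysctl -p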

Let's take a high-level look at the /proc/sys directories:

  • debug: Has (surprise!) debugging information. This is good if you're into kernel development.
  • dev: Provides parameters for specific devices on your system; for example, check the /proc/sys/dev/cdrom directory.
  • fs: Offers data on every possible aspect of the filesystem.
  • kernel: Lets you affect the kernel configuration and operation directly.
  • net: Lets you control network-related matters. Be careful, because messing with this can make you lose connectivity!
  • vm: Deals with the VM subsystem.

Conclusion

The /proc special directory provides detailed information about the inner workings of Linux and lets you fine-tune many aspects of its configuration. If you spend some time learning all the possibilities of this directory, you'll end up with a better-tuned Linux box. And isn't that something we all want?

 

Your next phone could run Linux

Linux seems to have chosen the 2008 Mobile World Congress to quietly make its way onto the new consumer devices on show in abundance at the annual mobile Mecca.

Texas Instruments “G-Phone”

Stalwarts like Symbian and Microsoft have been somewhat upstaged by the rough-and-tough technology concept demonstrations of Google’s mobile platform, Android. Both Qualcomm and Texas Instruments showed impressive demos of the platform, with developer boards and some concept devices on show.

Texas Instruments showed off a development board, a development handset and – the show-stopper – Android running on a mobile form factor device. Qualcomm’s demo, on a development board, featured a touch screen and a custom-made mole-whacking game – apparently created in 60 minutes on Google’s Android software development kit (SDK).

Android offers cell phone manufacturers a “stack” of software for rolling out on their mobile phones. Manufacturers will be able to utilise the software to give them a firm base – including operating system, middleware and typical cell phone applications like SMS, contacts, voice and web browser.

Qualcomm’s Android offering

It is hoped that the Linux stack and good SDKs will encourage application developers to create more apps for the mobile platform. Vodafone’s CEO, Arun Sarin, stated that he believed there should be no more than four or five operating systems for mobile phones, compared to the 40 in the market currently. The proliferation of mobile platforms has severely hamstrung the roll-out of applications for the “fourth screen”.

While Android offers a full stack, the LiMo (Linux Mobile) Foundation delivers a unified middleware and OS layer – the manufacturers build all the applications on top of the platform. Some manufacturers clearly prefer this model, giving them the ability to completely customise the user experience on their platforms. LiMo is significantly more advanced than Android after its year in the market – the LiMo Foundation showed off phones from the likes of Motorola, LG, NEC, Panasonic and Samsung.

Motorola’s Motorokr E8 LiMo phone

Thanks to Google’s lead in the software, the Android phones have been dubbed “G-Phones”, in a nod to Apple’s phone nomenclature, although more than 30 technology companies are part of the Open Handset Alliance, the group backing Android.

LiMo representatives stated that Android would not compete directly with the foundation’s platform, although Android’s promoters told Tectonic that they believed Android’s full-stack solution would prove more popular over time.

Since Android has been demo’d running live on processors and chipsets from TI and Qualcomm, the platform is technically ready for manufacturers to develop and prototype the solution. We can expect some Android devices at next year’s MWC. Should Sarin’s vision of four or five operating systems come true, Linux is a safe bet as one of them.

Source : http://www.tectonic.co.za

 

Take advantage of multiple CPU cores during file compression

With the number of CPU cores in desktop machines moving from two to four and soon eight, the ability to execute computationally expensive tasks in parallel is becoming more important. The mgzip tool can take advantage of multiple CPU cores during file compression, while pbzip2 uses multiple cores for both compression and decompression.

A file compressed with pbzip2 can be decompressed with the standard bzip2 program, while one compressed with mgzip can be decompressed with the standard gunzip utility. pbzip2 is packaged for Fedora 8 in the standard repositories; you can install it with yum install pbzip2. However, mgzip doesn't appear to be packaged for many distributions, so you must install it from source.

When building mgzip you might discover that it fails to compile because the zlib header already declares gz_header, while mgzip.c uses the same name for a variable holding the hex values of a valid gzip header. You can fix this easily by adding a prefix to the gz_header variable and to the few references to it in mgzip.c. It doesn't really matter what prefix you use, as long as the result is a valid C identifier different from gz_header; for example, mgzip_gz_header.

$ make
gcc -g -O2 -c -o mgzip.o mgzip.c
mgzip.c:40: error: 'gz_header' redeclared as different kind of symbol
/usr/include/zlib.h:124: error: previous declaration of 'gz_header' was here
mgzip.c: In function 'compress_infile_to_outfile':
mgzip.c:530: warning: cast to pointer from integer of different size
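
One way to apply the rename described above, sketched with GNU sed (this assumes the conflicting name is only ever used as that variable inside mgzip.c):

$ sed -i 's/\bgz_header\b/mgzip_gz_header/g' mgzip.c
$ make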

Running the utilities

Both tools offer similar command-line switches to the non-parallel versions that you're already familiar with. Some options are missing; for example, mgzip does not offer the --recursive option that gzip has.

One caveat with pbunzip2 is that it will only use multiple cores if the bzip2 compressed file was created with pbzip2. This is because pbzip2 compresses a file in pieces that can be decompressed in parallel. This means that if you download linux-2.6.23.tar.bz2 from a kernel source mirror, you will only be able to use a single CPU core to decompress it. Since the quoted size increase of using multiple pieces for pbzip2 is very small (less than 0.2%, from pbzip2's manual), it would be nice if the main bzip2 program would default to creating pieces as well in the future to produce bzip2 files that are more friendly to multicore downloaders.

Examples of simple use of both tools are shown below. They should be familiar to anyone who has used bzip2 and gzip before. As you can see from the size of the compressed files, pbzip2 comes much closer to producing a compressed file that is the same size as the non-parallel compression tool's output; the pbzip2 output is only 0.46% larger than the output of bzip2.

$ bunzip2 linux-2.6.23.tar.bz2
$ gzip -c linux-2.6.23.tar > linux-2.6.23.tar.gzip
$ mgzip -c linux-2.6.23.tar > linux-2.6.23.tar.mgzip
$ ls -lh
-rw-r----- 1 ben ben 253M 2008-01-19 18:55 linux-2.6.23.tar
-rw-rw-r-- 1 ben ben  56M 2008-01-19 18:57 linux-2.6.23.tar.gzip
-rw-rw-r-- 1 ben ben  67M 2008-01-19 18:57 linux-2.6.23.tar.mgzip
$ gunzip -c linux-2.6.23.tar.mgzip > linux-2.6.23.tar.mgzip-gunzip
$ md5sum linux-2.6.23.tar.mgzip-gunzip linux-2.6.23.tar
853c87de6fe51e57a0b10eb4dbb12113  linux-2.6.23.tar.mgzip-gunzip
853c87de6fe51e57a0b10eb4dbb12113  linux-2.6.23.tar
$ bzip2 -c -k -9 linux-2.6.23.tar > linux-2.6.23.tar.bzip2
$ pbzip2 -c -k -9 linux-2.6.23.tar > linux-2.6.23.tar.pbzip2
$ ls -lh
-rw-r----- 1 ben ben 253M 2008-01-19 18:55 linux-2.6.23.tar
-rw-rw-r-- 1 ben ben  56M 2008-01-19 18:57 linux-2.6.23.tar.gzip
-rw-rw-r-- 1 ben ben  67M 2008-01-19 18:57 linux-2.6.23.tar.mgzip
-rw-rw-r-- 1 ben ben  44M 2008-01-19 19:03 linux-2.6.23.tar.bzip2
-rw-rw-r-- 1 ben ben  44M 2008-01-19 19:01 linux-2.6.23.tar.pbzip2
$ ls -l
-rw-r----- 1 ben ben 264704000 2008-01-19 18:55 linux-2.6.23.tar
-rw-rw-r-- 1 ben ben  45488158 2008-01-19 19:03 linux-2.6.23.tar.bzip2
-rw-rw-r-- 1 ben ben  57928789 2008-01-19 18:57 linux-2.6.23.tar.gzip
-rw-rw-r-- 1 ben ben  69968799 2008-01-19 18:57 linux-2.6.23.tar.mgzip
-rw-rw-r-- 1 ben ben  45695449 2008-01-19 19:01 linux-2.6.23.tar.pbzip2


Because mgzip can read the data to be compressed from stdin, you can pipe an uncompressed tar file to it. A major drawback to the currently available version of pbzip2, however, is that input to the utility cannot come from stdin or a pipe. This means that you need to create a real tar file before you can compress it with pbzip2. Shown below are commands to extract a tarball compressed with pbzip2 using multiple CPU cores and a method of compressing a tar file with pbzip2.

$ pbunzip2 -c /tmp/test/linux-2.6.23.tar.pbzip2 | tar xvf -
$ tar cvf linux-2.6.23.tar linux-2.6.23
$ pbzip2 -9 linux-2.6.23.tar
$ tar cvO linux-2.6.23 | mgzip > linux-2.6.23.tar.gz

I ran some benchmarks to test the performance gain I could get with these parallel compression tools. I tested them on an Intel Q6600 2.4GHz quad core machine, using the kernel linux-2.6.23.tar file, which I picked for its availability and because source tar files are likely to be relevant to many Linux.com readers.

Comparisons of the compressed file sizes and times to compress are revealing. mgzip at its default compression level was substantially faster than gzip, but also produced an output file that was quite a bit larger than gzip's. With -9 compression, mgzip is only twice as fast as gzip on a quad core machine, but the compressed file is much closer in size to what gzip would produce. For both tests, pbzip2 produced an output file that was similar in size to what bzip2 would make. For bzip2 -9 level compression, pbzip2 took reasonable advantage of four cores, requiring only 31% of the time that bzip2 needed.

Decompression times are also interesting. These were both performed on the output of pbzip2 in order to ensure that multicore decompression was possible.

With mgzip and pbzip2 you can take advantage of all your CPU cores to shorten compression and decompression times. This obviously has the largest impact when you are waiting for an archive to decompress before you can proceed with another task. Using the normal bzip2 you would have effectively wasted three of four cores on a Q6600 quad core machine during (de)compression operations. You might also set up a cron job to recompress bzip2 files downloaded from the Internet to pbzip2 format so that when the time comes to expand one of them later, the work can be spread across your cores.
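
A rough sketch of such a cron job is shown below; the ~/downloads directory is an assumption, and a real script would also want some way to skip files it has already converted:

#!/bin/bash
# Recompress downloaded .bz2 files with pbzip2 so that later decompression can use every core.
for f in "$HOME/downloads"/*.bz2; do
    [ -e "$f" ] || continue
    plain="${f%.bz2}"
    bunzip2 -c "$f" > "$plain" || continue                  # decompress a copy, leaving the original in place
    pbzip2 -9 -c "$plain" > "$f.new" && mv "$f.new" "$f"    # rebuild the archive in parallel-friendly pieces
    rm -f "$plain"
done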

 

BenQ to launch Linux ultramobile device in Q2

Taiwan's BenQ is showing off a new user interface on an ultramobile PC that it plans to start marketing in the second quarter of this year, a spokeswoman for the company said Tuesday.

The device is being displayed at the Mobile World Congress in Barcelona as part of BenQ's new mobile offerings. It was first shown at the Consumer Electronics Show in Las Vegas earlier this year.

BenQ has taken on the new moniker coined by Intel Corp. -- mobile Internet device (MID) -- for its new gadget, a name that appears to be replacing the term ultramobile PC. Ultramobiles have so far not fared well in global markets, despite a much-hyped launch and backing by heavyweights such as Microsoft Corp. and Intel.

BenQ's MID sports a Linux operating system, but the company tweaked the user interface to work more closely with its functions. Although full details have not yet been released, the company has said the MID is equipped for wireless Internet use via Wi-Fi, or with third-generation telecommunications networks, which also enable voice phone calls.

China's Red Flag developed the Linux operating system, but BenQ customized the interface to make its MID unique, said Jean Hsu, a BenQ representative.

The MID also features a 4.8-in. touch-screen, a 0.3-megapixel Web cam, and on-board sensors that minimize and pop up all open windows when you shake the device, instead of making you touch each tab individually.

BenQ's MID uses Intel's Menlow set of chips, which includes a low-power microprocessor code-named Silverthorne and a chip set codenamed Poulsbo. Intel designed Menlow for ultramobile devices.

Companies are developing ultramobile PCs and MIDs in a bid to attract users to devices slightly smaller than notebook PCs, but with full PC functionality. Some analysts see the devices as the PC industry's answer to smart phones, but point out that many ultramobiles do not include telecommunications functions. BenQ's new MID does include telecommunications capabilities with its 3G support.

 

Free software goes Hollywood

As the Writers Guild of America's strike enters its fourth month, one of its key issues -- the sharing of profits from online distribution -- is encouraging the rise of new production companies that are exploring alternative methods of production and distribution. Along with Hollywood Disrupted and Founders Media Group, these new companies include Virtual Artists, whose goal is to bring free software developers and Hollywood writers together to experiment.

Virtual Artists started in early December, when Free Software Foundation director Henry Poole was attending a wedding in Los Angeles. At a party at the house of Poole's business partner Brad Burkhart, he started talking to Aaron Mendelsohn, whom Poole describes as "the second youngest member of the Writers Guild board and a member of the negotiating committee for this strike." Poole's comments about alternative online distributions led Mendelsohn to invite him to address a group of writers on the subject.

"A couple of weeks later," Poole says, Brian Behlendorf, best known as the founder of the Apache Project and Collabnet, "hosted a similar get-together in his home in San Francisco. We brought a few folks from the free software community and also some folk who understand online community building. At that point, we started to have discussions about disruptive technologies."

From these discussions, Virtual Artists was born. According to Poole, the new company has four interests: "distribution systems for distributing media through computers, televisions, and mobiles; software for collaboration; community-building software that gives power to the audiences for them to participate in a direct relationship with the creatives; and helping the creative process for traditionally produced materials.

"We're looking at building an alignment between writers and free software developers and new media workers," Poole says. "It's the alignment between the writers of code and the writers of content that we're focusing on."

Besides himself and Behlendorf, Poole declines to name any of the other members of the free software community involved with Virtual Artists, except to say that a member of the Miro project attended the meeting at Behlendorf's house.

However, the lineup of movie and TV writers interested in Virtual Artists represents what Mendelsohn refers to as an "A-list" of Hollywood writers. Besides Mendelsohn, they include such movie writers as Academy Award winner Ron Bass (Rain Man and The Joy Luck Club); Susannah Grant (Erin Brockovich and Pocahontas); and Terry George (Hotel Rwanda). TV writers involved include Neal Baer (Law and Order); Tom Fontana (Homicide and Oz); and Tony Award winner Warren Leight (Law and Order: Criminal Intent).

"Everybody has the opinion that this kind of thing would be a tough sell to the writers," Poole says. "Actually, it's been quite easy. The writers are completely excited by this. They're used to work-for-hire. They don't have any ownership at all. And with, for example, a Creative Commons license, they can own their own work."

Certainly, Virtual Artists has had no trouble raising the $200,000 in seed capital it sought from writers. In addition, Poole says, "We're having discussions with other folks to the tune of $30 million," including traditional content distributors -- although he concedes that "they're not going to drop their current business" quite yet.

Reactions from other players in the TV and film industry are not yet forthcoming, but Poole observes that "right now, everybody's close-lipped because of the writers' strike." However, he says, "The truth is, we're interested in partnering at such point with traditional industry."

Alternative models

Poole makes no secret of the fact that Virtual Artists is going to be experimenting. "There's going to be a lot of learning along the way. There's a lot of things we don't know yet." For instance, while he tends to advocate Creative Commons Licenses, he is still unsure of exactly which license might be used. He admits, too, that more traditional licensing might be necessary for projects that are distributed through the existing channels.

However, as an example of the sort of experiment that Virtual Artists might consider, Poole cites Robert Greenwald of Brave New Films, who distributes his work using Creative Commons licenses. When Greenwald produced The High Cost of Low Price, his documentary on Wal-Mart, he showed it in private homes across the United States, and built a list of interested viewers. When he came to produce Iraq for Sale: The War Profiteers, he emailed those on his list, "and within 10 days, he had raised $250,000, which was enough to finance the production and development of the movie," Poole says.

By contrast, Poole suggests that Google is "not experimenting that much right now" on YouTube and Google Video. Google records the number of viewers that videos have -- and, at least potentially, who those viewers are -- but "they're not really sharing that core asset they have with the creators." If Google chose, it could use that information as an alternative distribution channel for major productions, but, because it is hoarding that information, it would really be "taking the place of the studios ... becoming the intermediary between the creatives and the audiences" instead of letting the two interact directly.

"What we're looking at doing that's innovative," Poole says, "is that we're looking at sharing, and setting up a system where the artists know who their audiences are and where they have the power, the ability, the tools, and the knowledge in an environment that is consistent with respect and privacy."

The free software connection

At the same time that Virtual Artists assists creatives to gain control of their works, Poole envisions the company becoming a partner of free software projects and enabling them to bootstrap their development process.

"A whole lot of folk are building the components of [potential] distribution systems," says Poole, "and they're looking for the right partnerships to take those projects out far and wide. And these projects really need a lot of capital to keep up. You've got groups like Blowtorch that have inadequate funding. And for the free software community to keep up, they need to put quite a lot of engineering time into it."

Exactly which projects Virtual Artists will work with is still uncertain. Just now, Poole says, "We're reaching out to maintainers of free software projects and finding out what their road maps are. As we develop our own projects, we're going to look for an intersection between what we need to do and what they need to do, and we'll find ways to finance their initiatives to do what needs doing."

However, Poole is certain that free software projects will be essential to Virtual Artists' success. "There are certain things you can do through code," Poole says. "As [Lawrence] Lessig said in his book, [Code and Other Laws of Cyberspace], 'Code is law.' We can set the rules by developing the right work flows for what we want to do in having those relationships with the audience, and figuring out the right ways to monetize."

Common goals

Virtual Artists is still in its earliest stages. Besides the questions of whom to work with and the near certainty that some experiments will fail, other problems will undoubtedly arise.

For instance, one problem participants are already facing is a lack of common language. Poole cites the example of "development." "Development in the film business is basically the early work," he notes. "Development in software is actually production. There's a language that needs to be developed between free software and Hollywood, because, right now, terminology has different meanings."

Poole also suggests that, in the free software community, "There's just a real lack of trust" of the film industry.

All the same, he remains optimistic, believing that the differences between Hollywood creatives and free software developers are minor compared to what they have in common.

"The thing is, we're all struggling with the same epic story, all looking for the economic justice behind it all. The big corporations have a habit of control, whether they're doing it to software writers or creative writers, it's the same story. So they really have a lot in common. We're pulling together some really innovative and creative people who want to experiment, and we're going to look for ways that really resonate with everybody."