May 2008 Archives

xsm in captivity.

Got to put stuff about XSM in somewhere.

The problem is that, while it seems interesting and there's obviously some interest in getting it running (since it's been merged), there's no documentation.  I'm not inherently interested enough in security to wade through the snippets I can find and figure out what capabilities XSM buys me, and how to enable them.

Nonetheless, got to be done.  Probably in the increasingly mythical "tips" chapter.

a snippet for my todo list

"As of recent Xen versions, the dom0 administrator can use the ionice command to set i/o priorities for domUs."

Got to figure out where to put that.
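Wherever it lands, it'll want an example.  Here's a rough sketch of how that might look from dom0; the domain name "web1", the blkback thread-naming convention, and the choice of the idle class are assumptions about a typical phy-backed setup, not gospel:

```shell
# Sketch: drop a domU's backing I/O into the idle class.
# Assumes phy-backed storage served by a blkback kernel thread.
set_domu_io_idle() {
    domu=$1
    # xm list prints the domain ID in the second column.
    domid=$(xm list | awk -v d="$domu" '$1 == d { print $2 }')
    # blkback threads are named after the domain ID, e.g. "blkback.3.xvda".
    pid=$(ps -eo pid,comm | awk -v id="$domid" \
        '$2 ~ ("^blkback\\." id "\\.") { print $1; exit }')
    # Idle I/O class; -c2 -n7 would be the gentler best-effort alternative.
    ionice -c3 -p "$pid"
}
```

Then `set_domu_io_idle web1` as root.  Worth noting in the book: ionice only does anything under the CFQ I/O scheduler.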

aimed squarely at our audience.

Productive day, in a sense.  Not much editing got done, but we spent a lot of time testing stuff in the provisioning and profiling chapters.  I'm adding a section on using pypxeboot directly, which I'm. . .  kind of enthusiastic about.  Worked out some bugs in our explanation of multiple-domain profiling.

I don't think I ever feel as if I'm faking it more intensely than when I write about profiling.  I mean, this is Xen for sysadmins.  I'm a sysadmin.  If I need to use the profiler something has almost certainly gone wrong.

But we did hit a _perfect_ example, which I'm rewriting the section around.  (Mirrored LVM spends way too much time in IOwait.)  I've got to compose that email to the xen-devel list, see if there's something related to the pit_read_counter function that's causing this to happen.  That's what my oprofile runs suggest.  Now we just need some confirmation, maybe a happy ending to this story.
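For reference, the sort of run that produced those numbers, using oprofile's Xen support.  The image paths and version strings below are placeholders for whatever your system actually has installed:

```shell
# Profile dom0 plus the hypervisor; paths are examples only.
opcontrol --reset
opcontrol --start --xen=/boot/xen-syms-3.2.0 \
          --vmlinux=/boot/vmlinux-syms-2.6.18.8-xen \
          --active-domains=0
# ... run the mirrored-LVM workload here ...
opcontrol --stop
opcontrol --shutdown
opreport -l | head -20   # hot symbols by sample count
```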

Of course, the troubleshooting chapter isn't supposed to be an exhaustive compendium of errors.  That's what the Internet is for.  We've listed only error messages that lend themselves to easy solutions, or to illustrating some troubleshooting technique.

That means that messages like the following, which indicates a flat-out Xen / Linux bug (fixed in RedHat's .14, not sure about other distros), simply don't appear:

Bad pte = e5707067, process = ???, vm_flags = 100073, vaddr = 252000
 [<c0453809>] vm_normal_page+0xb7/0xd3
 [<c045454c>] unmap_vmas+0x3d1/0x761
 [<c0458f3c>] exit_mmap+0x6d/0xe4
 [<c041abd4>] mmput+0x25/0x69
 [<c047084f>] flush_old_exec+0x62c/0x8b2
 [<c046fcd7>] kernel_read+0x32/0x43
 [<c048d0f1>] load_elf_binary+0x494/0x15e4
 [<c0467043>] do_sync_read+0xb6/0xf1
 [<c044d4af>] __alloc_pages+0x57/0x282
 [<c04dcc45>] copy_from_user+0x31/0x5d
 [<c04dcc45>] copy_from_user+0x31/0x5d
 [<c046fa8a>] search_binary_handler+0x99/0x219
 [<c04713bf>] do_execve+0x158/0x1f5
 [<c040337d>] sys_execve+0x2a/0x4a
 [<c040534f>] syscall_call+0x7/0xb

It seems like it's dishonest to describe only problems that we've solved.  But what we don't know -- well, that would fill volumes and be dispiriting. 

not as dangerous as advertised.

Okay, so storage wasn't quite ready to go.

We had been a bit too trusting and not properly vetted certain whispered rumors of the dark and horrible consequences of letting your LVM snapshots fill up.  In our defense, the way that people generally use LVM snapshots makes it very unlikely that they'll fill, and apparently it's not a commonly-seen failure, disk being cheap. . .

Anyway.  No excuse.  Experimental verification is the cornerstone of science, so we tested it.

 # lvcreate -n origin -L 1G LogVol01
 # lvcreate -n snap -L 100M --snapshot LogVol01/origin

Now, once we've made 100M of changes to origin, snap should fill up.  If you've been reading the LVM snapshot warnings, the earth will then erupt in fire, pitch will rain from the sky, the crust will split and a cavernous maw with teeth the size of the Tokyo Tower will emerge to consume humanity.  The lucky portion of it, anyway.

This did not happen.  We filled it by making a filesystem and copying stuff in from /usr.  Turns out that the machine keeps running merrily.  There are still errors, of course:

device-mapper: snapshots: Invalidating snapshot: Unable to allocate exception

Followed by a bunch of errors of the form:

Buffered I/O error on device dm-3, logical block 585

We unmounted the snapshot and tried to remount it; no dice.  It was, as McCoy would say, dead.  The original LV was fine, however.
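For anyone repeating the experiment, the whole sequence can be sketched like this.  Volume group and mount point are the ones from the lvcreate lines above; the exact lvs column name varies a bit by LVM version:

```shell
# Create the origin and a deliberately undersized snapshot.
lvcreate -n origin -L 1G LogVol01
lvcreate -n snap -L 100M --snapshot LogVol01/origin

# Fill it: every write to the origin after the snapshot exists
# consumes snapshot space.
mkfs.ext3 /dev/LogVol01/origin
mount /dev/LogVol01/origin /mnt
cp -a /usr /mnt/        # comfortably more than 100M of changes

# Watch Snap% climb toward 100; then the snapshot is invalidated,
# and only the snapshot dies.
lvs --noheadings -o lv_name,snap_percent LogVol01
```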

actually i quite like redhat.

Today I worked on the chapter that purports to tell people how to create domU images from scratch.  It's got some minor technical issues -- I don't know what version of cobbler I used to test the stuff I wrote, but it's like nothing that I can find any reference for today.  Wrote maybe 400 words, tested some cobbler and pypxeboot related stuff.  I should also toss in some stuff about making a distro mirror.  Maybe.

We also need to test pypxeboot again.  I swear I've seen it work, but my memory has been. . .  less than reliable lately.  Luke claims that it doesn't work and never has.  It'll all end in tears, I know it.
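For the record, the kind of config we're arguing about looks roughly like this.  The name, MAC address, and disk path are invented for illustration; pypxeboot runs as the domain's bootloader and asks the PXE infrastructure (cobbler, in our case) what to boot:

```python
# /etc/xen/web1 -- hypothetical domU config using pypxeboot
name = "web1"
memory = 256

# pypxeboot impersonates the domU's MAC when it makes the PXE
# request, so the MAC here should match the vif line.
bootloader = "/usr/bin/pypxeboot"
bootargs = "mac=aa:00:00:00:00:11"

vif = ["mac=aa:00:00:00:00:11"]
disk = ["phy:/dev/LogVol01/web1,xvda,w"]
```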

Apart from that, the other stuff looks ready to go -- tar, using the distro package manager, installing via qemu, even systemimager.  It's mostly just cobbler that I'm worried about.  Damn redhatisms.  Oh, look, my spell check claims "redhatism" isn't even a word.  TAKE THAT, REDHAT.  BELIEF IN YOU IS TANTAMOUNT TO ERROR.

storage goes to layout.

Decided that a new approach was called for, and pared down the storage chapter.  I think the chapter, as we've got it now, does a good job of presenting the basics of storage with Xen.  Stuff related to copy-on-write has mostly been moved to the hosting chapter, while network storage mostly went to the migration chapter.

Casualties of the process included QCOW images, which worked at one point, then stopped working, and dmuserspace, which I get the strong impression no one actually uses.

It's a pity, because dmuserspace is actually a pretty cool technology -- but it's got too much bit rot.  At times I almost had it working, but there was no guarantee that those steps would continue working, and I couldn't in good conscience advise people to download patches from a mailing list's archive.

Ah well.  Here are the first few paragraphs of the section that I wrote about it, just to frustrate future web searchers coming and looking for useful information:


DmUserspace is an extension to the basic device-mapper concept that
allows the kernel to forward requests for blocks on pseudo-devices to
a userspace program, which responds with an appropriate destination
device and set of block addresses.  This allows you, the
administrator, to have block devices that dynamically resize and move
themselves as needed.

This facility works well with Copy-on-Write, because it avoids the
need to pre-allocate backing store that one finds with LVM, or the
simpler CoW scripts on the Xen liveCD.  Instead, DmUserspace works
with a userspace daemon that finds and allocates sectors to
pseudo-devices automatically as needed.  Because this is a userspace
program, it can be flexible -- for example, one suggestion on the
Xen-devel mailing list was to have a device that transparently fetched
and cached from Amazon's S3.

Although this sounds great (and is pretty great), it doesn't come with
Xen, and it's not trivial to set up.  You'll need a fully set-up
compilation environment, including a kernel source tree.  If you've
got your ducks in a row, read on.

About this Archive

This page is an archive of entries from May 2008 listed from newest to oldest.
