I've reviewed Ubuntu's Netbook Remix and then 10.10's Unity, and wasn't impressed with what appeared to be beta-ware at best. Now that 11.04 is out, I felt I should give it a spin before abandoning my preferred distribution of the last several years. I grabbed the amd64 11.04 live CD, used unetbootin to make a bootable USB key, and booted my x201 into the Natty Narwhal.
First, the good. The Unity interface has been polished up a bit; it's smoother and less clunky. Some small space-saving features are effective: the maximized window title bar in the panel and the mouse-over file menu in the panel (a la Mac OS). The narrow scrollbar with a mouse-over grab bar is actually particularly nice (or is this just GTK3?).
We are being harassed by collections agencies looking for individuals we do not know and who do not live here. We recently changed our number over a similar problem with agencies looking for individuals with the same initials (m. hart) as our previous public number. We now have a private number, but that number has been associated with persons with unpaid debts in the past. I am extremely frustrated that the primary use of my phone service is for collections agencies to try to reach people I do not know. Phone service providers need to provide their customers with a means to prevent this - for example, a blocked-number list that I can add numbers to from the web interface. The caller should just hear the phone ring and never be directed to voicemail or receive any kind of message. If something doesn't change soon, I am considering canceling my VoIP service, since I strongly object to paying for a service which provides more rights to collections agencies than to me.
For now, I've filed complaints against GE Money with anyone who will listen and will be canceling every account I have with them. Good-bye GAP card. If you object to being called repeatedly at inconvenient times, having your child woken up early, and being lied to over the phone - consider doing the same.
I recently returned from the Intel Open Source Technology Summit (OSTS). As with any conference worth attending, I returned inspired, and humbled. The attendees were top caliber people from engineering and management, along with special invitees, including Linus Torvalds. While my senior management's keynote was inspiring (truly, very well done), Linus's address was insightful in that signature why-didn't-I-see-that kind of way, and my team's technical leadership proved itself to be exceptionally competent once again, there is one concept I came away with that will have a larger impact than any other, and the individual who planted the seed likely doesn't even remember the conversation. Hacking away late into the evening in the lodge's "library" on too little sleep over a crippled network, my colleague said to me, regarding a problem I was having difficulty making progress on, "I always assume that the code was written the way it is because that was the easiest way for the person who wrote it." - or something to that effect. That thought has been percolating in the back of my mind for several days, slowly flagging memories of my programming history, building insight, and forming a new resolve.
I've done much of my best programming when the result wasn't critical, without regard for how it might be criticized, without much thought for how others might do it. No concern for why others wrote what I'm debugging the way they did. This same individual also noted that there is an optimal level of beer consumption for programming. That level being where inhibitions are diminished, but cognitive ability is still mostly intact. While it is always good to carefully consider how a thing might be done, obsessing over the perfect solution can hinder progress, and fear of rejection and criticism can halt it.
A colleague and good friend from my days at IBM was fond of the saying, "The perfect is the enemy of the good". A working solution is better than a plan for a perfect solution. In open source, we have the advantage of being able to expose our work to the critical eyes of a vast community of brilliant individuals with a wide range of experiences. This group is well known for harsh criticism. Criticism of a working solution eventually leads to something approaching the perfect solution.
I intend to take this insight to heart, to have more confidence in myself, despite my naturally self-critical nature. To obsess less over the perfect solution, despite my predisposition to obsessive behavior, and progress toward a working solution using my individual experience as my guide. Finally, to best my aversion to rejection, and proudly present my working solutions and leverage criticism to approach a perfect solution.
Work and family life were busy, so it was a few days before I could put the QNAP TS-419P+ to the test with some representative use cases. But before I did, I spent some time educating myself on RAID levels and came to the conclusion that until I am in desperate need of more storage space, RAID 5 just doesn't make sense. Here's why. RAID 5 distributes parity across all the drives in the array, and the parity calculation is both compute- and IO-intensive: every write requires a parity update, and a full-stripe write touches every drive. With the low-power CPU already the bottleneck for throughput, adding an additional load didn't seem like a good idea. More important is data integrity. RAID 5 allows a single drive to fail without any loss of data. However, if one of the remaining three drives were to fail while rebuilding the array, all the data is lost. Rebuilding a RAID 5 array after a single disk failure is also very compute- and IO-intensive, as every disk must be read in order to restore the blocks to the new drive.
A better option for a four-drive array is RAID 10, a stripe across two mirrored pairs. In this configuration, writes only affect the two drives in one mirror, and data integrity is much improved. After a single drive fails, it is restored by copying its sibling in the mirrored pair. If one of the three remaining drives were to fail during that rebuild, there is only a 33% chance the data will be unrecoverable, as either drive from the other mirrored pair could fail without a problem.
The cost for this is total volume size. RAID 5 provides SIZE*(N-1) while RAID 10 provides SIZE*(N/2). With four 1.5TB drives, RAID 5 yields a 4.5TB volume, while RAID 10 yields only 3TB. When drives were expensive, this 50% gain was significant, but when 2TB drives can be had for under $100, with larger drives becoming available every year (3TB drives now ship with consumer-level NAS products), the principal value of RAID 5 is not as convincing as it once was. With RAID 10 and current technology (3TB drives) I can still double my capacity, and that should only improve in the coming years.
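The tradeoff is easy to sanity-check with a bit of shell arithmetic (sizes in GB, treating a 1.5TB drive as 1500GB for simplicity):

```shell
# Usable capacity for N identical drives of size_gb gigabytes.
# RAID 5 spends one drive's worth of space on parity;
# RAID 10 spends half the drives on mirrors.
drives=4
size_gb=1500                              # 1.5TB drives, in GB

raid5_gb=$(( size_gb * (drives - 1) ))    # SIZE*(N-1)
raid10_gb=$(( size_gb * drives / 2 ))     # SIZE*(N/2)

echo "RAID 5:  ${raid5_gb} GB"            # 4500 GB = 4.5TB
echo "RAID 10: ${raid10_gb} GB"           # 3000 GB = 3TB
```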
OK, on to configuration. I took to configuring the NAS for use with my home Linux network. I should preface this by saying I am not a network file system expert, not even an experienced user. I have set up NFS enough times to know the homogeneous uid/gid thing is a pain and that there are plenty of failing corner cases with respect to dropped connections, file locking, etc. QNAP claims to support NFS, so I expected them to provide the necessary tooling in their oft-praised web interface. The sad truth is that NFS appears to be an afterthought, and the implementation barely merits an "[x] NFS" string on their marketing material. The UI allows you to add users, but not to specify (or modify) the uid or gid. This means that standard Unix file permissions simply do not work, and their solution appears to be to make the shares globally read-only or globally read-write. Two words: cop out. Fortunately, QNAP does provide a root-only ssh shell, and I was able to log in and manually edit /etc/passwd and /etc/group to make my users match the rest of the network. Some careful recursive 'chmod -R g+s,ug+rw,o-rwx' commands provided me with the permissions I wanted - but avoiding that sort of work is precisely why I opted for QNAP instead of building my own. In this regard, they failed miserably.
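A sketch of that manual fix-up, assuming the uids/gids in /etc/passwd and /etc/group have already been edited to match the clients. The path and group name here are illustrative, not QNAP's actual layout (on the QNAP the shares live somewhere under /share), and the exact modes shown assume the default 022 umask:

```shell
# Hypothetical fix-up for a share whose files should be group-writable and
# private to that group. On the NAS this would run as root over ssh.
share=/tmp/demo_share                  # stand-in for the real share path
mkdir -p "$share"
touch "$share/recording.mpg"

chgrp -R users "$share" 2>/dev/null || true  # match the gid used on the clients
chmod -R g+s,ug+rw,o-rwx "$share"            # setgid dirs, group r/w, no "other"

stat -c '%a' "$share"                  # directories end up 2770
stat -c '%a' "$share/recording.mpg"    # files end up 2660
```

The setgid bit on the directories is the useful part: new files created over NFS inherit the share's group instead of the creator's primary group.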
The SMB story is better. The QNAP UI supports per-volume user and group permissions. While the on-disk representation is still globally read-write, it's not a problem, as SMB performs its own user authentication and only the admin user has ssh access anyway. I tinkered with this enough to get it working with the GNOME desktop file manager and with autofs. This might be the best way to access shares on the QNAP, even from a Linux system. Still, something about running SMB makes me feel like I need to shower.
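For reference, an autofs setup along these lines is what I tinkered with. The hostname is my NAS (Toph); the share name, mount point, and credentials are examples. The real files belong in /etc; this sketch writes them to a scratch directory so it's self-contained:

```shell
# Hypothetical autofs maps for mounting a QNAP SMB share on demand.
etc=$(mktemp -d)   # stand-in for /etc

# Master map entry: mounts under /mnt/toph are handled by the auto.toph map.
cat > "$etc/auto.master" <<'EOF'
/mnt/toph  /etc/auto.toph  --timeout=60
EOF

# The map itself: "recordings" becomes /mnt/toph/recordings on first access.
cat > "$etc/auto.toph" <<'EOF'
recordings  -fstype=cifs,rw,username=alex,password=secret  ://toph/recordings
EOF

grep -q 'fstype=cifs' "$etc/auto.toph" && echo "map written"
```

The timeout means idle shares get unmounted automatically, which sidesteps some of the stale-connection grief that a static fstab mount invites.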
For throughput tests I used the rsync daemon to copy my MythTV recordings to the QNAP. This consisted of 300GB of mostly MPEG-2 files, 2 to 6GB each. I used the UI as well as top to monitor the system status periodically during the transfer. The CPU was pegged at 100% for the duration of the transfer, and it averaged just over 20MB/s. QNAP claims 45MB/s writes over SMB and 42MB/s over FTP. Rsync should be faster if anything. Throughput was a disappointment. Following the transfer the QNAP remained under heavy load (load average 7-8) and became fairly unresponsive. Watching the kernel logs I found a few Out of Memory errors, with apache and php being among the OOM Killer's victims. I raised the throughput and OOM issues with QNAP and my supplier. They weren't able to suggest any changes to improve throughput or identify why the OOM occurred. They did agree to allow me to return the TS-419P+ in exchange for a TS-459Pro+. The latter replaces the ARM CPU with a dual core Atom, doubles the RAM, and replaces the 16MB flash with a 512MB DOM. 20MB/s just wasn't cutting it, and a kernel OOM was just unacceptable. I shipped the QNAP TS-419P+ back and am impatiently awaiting a TS-459Pro+. Whether I keep the QNAP firmware or replace it with Ubuntu Server or perhaps a custom Yocto image is the subject for a future project.
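To put those numbers in perspective, here's the shape of the transfer and what the measured versus claimed rates work out to. The rsync module name is a hypothetical one that would be configured in rsyncd.conf on the NAS:

```shell
# Illustrative push to the NAS's rsync daemon ("recordings" is an example
# module name, not something QNAP ships):
#   rsync -av /var/lib/mythtv/recordings/ toph::recordings/

# 300GB at the measured ~20MB/s versus the claimed 45MB/s, in minutes:
gb=300
actual_min=$(( gb * 1024 / 20 / 60 ))    # 256 minutes - over four hours
claimed_min=$(( gb * 1024 / 45 / 60 ))   # 113 minutes - under two hours
echo "measured: ${actual_min}m, claimed: ${claimed_min}m"
```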
As if the arrival of the final components for Rage wasn't enough tech debauchery, the present trucks also delivered a shiny new QNAP TS-419P+ and 4 Samsung Spinpoint F2 1.5TB drives. Devon helped me unpack everything and carefully mount the drives in their trays. He even helped me plug it in and start the initial setup process.
The QNAP packaging and physical documentation is simple, nay, spartan. Which I like. The device itself is smaller than I expected (always nice), but I was disappointed to find a separate power brick instead of a built-in power supply - this reduced my excitement about the compact size of the unit somewhat. The recommended "Linux Setup" was to connect a PC directly to the NAS and configure your networks to talk to each other - this didn't appeal to me, so I just looked up the QNAP's IP on my dd-wrt router and followed the directions for Windows and Mac - just without installing a QNAP finder application.
The QNAP web interface is highly polished. Initial setup included setting the hostname (I selected Toph, in keeping with the heroine theme for my personal machines), installing the latest firmware, setting an initial password, choosing which network services to enable, and an initial RAID configuration. Perhaps this is obvious to everyone else, but be sure to unzip the firmware you download from the QNAP website, otherwise you'll just get an unhelpful error complaining the image is bad. I found the initial RAID selection odd, as it is very limited. I chose RAID 5, as that is probably what I want, but the device offers a lot more options than a single RAID array using all the disks. Given the amount of time it takes to resync a 4.5TB RAID 5 array, it seems like this step could be skipped and the user sent directly to the full-featured volume management admin screen at first login. Instead, after completing the initial setup, you are presented with this iTunes-wanna-be AJAX interface:
Here you can see the volume management screen - and an ascending time remaining field in the Status column. I really don't know how I'll partition things up, or if I even need to. The QNAP offers a _ton_ of flexibility in how you access your data. I'll need to spend a good deal of time considering them before I make a final decision. I'll reserve judgement on these features until then.
Out of the box, several network services are available for immediate configuration:
And finally, QNAP offers add-on packages in the form of QPKG, which oddly enough includes an IPKG application for even greater selection of packages. There are several media streaming servers available, including one that is pre-installed. The installation process appears a bit cumbersome, requiring the user to download the package to a PC and then upload it to the NAS for installation. I am looking forward to installing Python, possibly Twonky, and maybe MySQL and WordPress (I'm considering moving this blog away from Drupal and to something else).
So for now, my QNAP is resyncing its RAID 5 array. I hope to have the time to explore its many features soon, and I'll share my experience as I do. My initial impressions are good, and I'm optimistic that this will turn out to have been a good choice for our needs.
The present truck(s) were good to me today. I received the two Intel Xeon X5680 CPUs, the two Seagate Barracuda 1TB drives, the Intel 160GB G2 SSD, and the second heat sink. The SuperMicro hot-swap trays don't allow for mounting 2.5" drives, so I had to mount the SSD in a 3.5" bracket in a 5.25" bracket. Lame. As I mentioned in my last post, the first CPU cooler's fan conflicts with the rear chassis fan. Since I had to choose between the two, I chose to keep the larger (quieter) chassis fan, but I connected it to the CPU 1 FAN header instead of the FAN 5 header. This is a guess on my part, but I figure the CPU is the first thing to get hot, and the most valuable component in the system, so it makes sense to me to let its temperature determine the fan speed. This may cause problems, however, as the fan speed used by CPU 1 FAN is probably not appropriate for the larger fan, and I don't know how removing the FAN 5 connection will impact how the system decides to use the forward fan (which is smaller, and louder). Any insight readers may have here is very welcome.
Initial power-on is always exciting, this was no different, perhaps more so. After pressing the power button, Rage jumped to life like a wild beast startled from slumber. Her fans roared and her many bright beady eyes flickered their discontent. After familiarizing myself with her BIOS settings, I ran a quick Ubuntu 10 install off USB (it was absurdly fast). The BIOS RAID options were confusing at best, and I felt I just might get better results with software RAID via mdadm (at least more control). Rage is currently resyncing a RAID 1 array composed of the two 1TB SATA drives. I'm not sure quite how long this will take, and with her periodic snoring (loud fan bursts), I may just have to force her back into hibernation so my better half can sleep tonight.
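The mdadm setup was along these lines. The device names are examples (double-check yours with lsblk before running anything as root), and the resync-time estimate assumes a sustained ~100MB/s, which is a guess for these drives:

```shell
# Hypothetical software RAID 1 creation for the two 1TB drives:
#   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
#   cat /proc/mdstat     # watch the resync progress
#
# Rough resync estimate: the full 1TB must be copied, sequentially,
# at an assumed ~100MB/s.
tb_mb=$(( 1000 * 1000 ))         # 1TB expressed in MB
secs=$(( tb_mb / 100 ))          # seconds at ~100MB/s
echo "~$(( secs / 3600 ))h $(( secs % 3600 / 60 ))m"   # roughly 2h 46m
```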
Just as soon as I can I'll kick off a complete Yocto build and share the results. Following that, I'll run some burn-in tests to ensure the memory, CPUs, and HDDs are all functioning properly. I haven't tested IPMI 2.0 support yet (remote access, KVM, etc.); I'll get to that soon as well.
The first round of components arrived for my Yocto Project and Linux Kernel development system. I haven't built a system like this (piece by piece) since I started using laptops in 2002. I had to learn all the new terms for all the same architectural bits. Spec'ing out the system was an interesting experience, and I learned something about categories at Newegg. Finding quality components can be a real challenge, as you first have to sort through all the neon-lights-and-acrylic-chassis-viewing-window-crowd junk. But there is a shortcut - the term is "server". It's great: select "Server" to narrow the search for memory, CPUs, and especially cases and CPU coolers, and all the teenage-gamer-consumer crap goes away and you're left with no-nonsense computing hardware. The heatsinks were under "server accessories" and not "cpu fans".
So first, the specs:
- Supermicro SC733TQ-665B Chassis
- Supermicro MBD-X8DTL-iF-O Motherboard
- Supermicro SNK-P0040AP4 CPU Heatsink and Cooling Fan
- 2 x Intel Xeon X5680 Westmere 3.33GHz 12MB L3 Cache LGA 1366 130W Six-Core Server Processor
- 2 x Patriot Signature 8GB 240-Pin ECC Registered DDR3 SDRAM 1333 (PC3 10600)
- 2 x Seagate Barracuda 1TB SATA II HDD
- Intel 160 GB G2 SSD
The machine will be put to a variety of uses, but most of the time it will be used for two things. First, as a build system for the Yocto Project. We build for four architectures, a variety of machines, and several image types. A typical build takes two hours (we are working on reducing that), and as my primary area of focus is the kernel, I try to build as many architectures as possible as I change things. Once built, these images can be tested in qemu. Being able to build these quickly and keep all the build trees around to facilitate incremental builds is important to staying productive.
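The per-architecture builds boil down to setting MACHINE and rebuilding. A sketch, using Yocto's qemu machine names for the four architectures (the image name is one example of several; this only prints the commands - drop the echo and source oe-init-build-env first to actually build):

```shell
# Print the bitbake invocation for each qemu machine; one build directory
# per machine keeps the build trees around for fast incremental rebuilds.
for machine in qemux86 qemuarm qemumips qemuppc; do
  echo "MACHINE=${machine} bitbake core-image-sato"
done
# Each resulting image can then be smoke-tested with, e.g.:  runqemu qemuarm
```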
Secondly, I'll use this beast to continue to work on the futex subsystem, parallel locking constructs, and real-time. When it comes to catching locking bugs or identifying bottlenecks - there is simply no substitute for core count.
When it isn't busy with either of the above, I hope to use this system to build and test the mainline and tip Linux kernel trees.
Back to the assembly. For this stage, I only have the chassis, motherboard, and memory. I'm having to wait a bit on CPUs and disks. The assembly was straightforward, but I obsessed over airflow and cable management. Supermicro matches their chassis to their motherboards, so the usual time spent mapping and aligning every LED and switch connector was replaced with a single ribbon connector - very nice. I still read through the manuals to make sure I was getting everything right. Turns out the motherboard has a built-in speaker where the manual says the speaker header should be - fine. There is some ducting to keep air flowing from the front of the chassis, over the motherboard, and out the back. I made sure I routed the SATA cables clear of that. Finally, the 665W ultraquiet PSU is not modular, so I had to find a place for all the cables I didn't use while minimizing obstructions to airflow for the chassis and the PSU itself. Some careful bundling and a couple wire ties seems to have wrapped that up nicely.
I also discovered that CPU1's fan conflicts with the rear chassis fan. I have a choice: I can remove the rear chassis fan, or I can remove the fan from the CPU heatsink (which was made easy by Supermicro). I'm somewhat disappointed in Supermicro here. This is their motherboard, with their recommended CPU fans, in their recommended chassis. Fortunately, the rear fan is immediately behind CPU1, and likely moves as much, if not more, air with less noise. If I do remove the CPU fan, do I connect the chassis fan to the CPU 1 FAN header, or leave it connected to the generic FAN 5 header? I was pleased that both chassis fans and the CPU fans are four-wire fans, meaning their speed (and therefore noise level) can be controlled by the BIOS depending on temperature.
This motherboard supports IPMI 2.0, meaning it has a service processor and full graphical KVM. I'll be running this system headless, connected via two gigabit links to my home network. I was very pleased overall with the quality of the Supermicro components; they are a significant step up from what I'm used to seeing in consumer computing, and while not cheap, they were not particularly expensive either. Only time will tell, but I'm becoming a Supermicro fan... er.... enthusiast.
Next time: CPUs, HDDs, RAID setup and benchmarking!
Check out the full story with pictures at Devon's Bed on Lumberjocks.