failing like never before

26 May 2010

My Time with Arch

I've been a proud and content Arch Linux user for a little over a year and nine months now, far longer than I've spent with any other Linux distribution. Arch put a happy end to my constant distro-hopping lifestyle, and I've been so pleased with its simplicity and performance that I've spared nary a glance at any other distribution over these past 21 months. But recently, I've been having some disagreements with my Arch system, and so I've slowly started reverting to the old haunts of my distro-hopping days (i.e. distrowatch.com and other such distro news sites).

Stability has always been a much-touted feature of Linux in general, but some distributions lay a greater claim to that attribute than others. Arch in particular has tended to be more bleeding-edge than other distros, sacrificing stability for the newest features; packages land in the repository as soon as new versions are released, with a relatively minimal amount of time (extremely minimal compared to distros like Debian) spent in testing in an Arch environment, just enough to ensure that the packages don't completely break the system. While this strategy has its benefits, namely that users get the latest and greatest software right when it comes out, it comes at the cost of stability (and, to some degree, security). And the more packages I've added to my system, the more I've noticed just how unstable Arch can be.

I generally run "pacman -Syu" to do a full system update at least once a week, and I try not to let my system go without an update for more than three weeks, so in general I stay pretty well up-to-date. But it has not been uncommon that, after performing a full update, my system completely locks up or goes completely nuts. Take, for example, a few weeks ago, when a full system update left my Arch Linux partition completely unbootable and required me to boot a live CD and futz around in the configuration files. When my laptop was finally usable again, I had to mess around some more with my wireless drivers to get them working. And lately, after my most recent system update, my laptop will occasionally freeze up and become totally unresponsive to everything except a hard reboot, while the system logs show nothing out of the ordinary. Of course, not all of the breaks in Arch have been this bad. About four months ago, a system update left me unable to hibernate, a problem easily remedied by a quick visit to the Arch wiki and a few short commands. Sadly, the list of weird errors goes on (although it's not that long).

Two years ago, when I was moving off of Debian testing, Arch's bleeding-edge packages were quite welcome, but now that I've matured a little bit, I don't care as much about the latest features. (Let's face the facts: the programs I've been using the most these past few weeks have been vim, GCC, SVN, and Zoom.) My first priority these days is getting shit done. And if my laptop decides to go bat-shit crazy now and then, it seriously hampers my ability to work properly. I don't mind a few bugs now and then, and I could probably even live with a rare kernel panic, but sometimes I get the feeling that Arch is maybe just a little too bleeding edge for me.

I mentioned earlier that another cost of having the latest and greatest software is security. The newest software releases tend to be less well hammered out and are therefore slightly more prone to security holes. I'm only a slightly paranoid Linux user, so while the lack of security is a little worrying to me, it's not a huge deal breaker. Arch's lack of solid support for more powerful mandatory access control frameworks like SELinux or AppArmor has also been a little worrying to me. I would love to be able to slap some powerful security policies on my laptop for greater peace of mind, but Arch's normally awesome wiki is a little lacking in help here (although it seems that recently, the SELinux page has gotten a little more meat to it).

This next point is a rather silly and illogical thing to hold against a distribution, but I feel it needs to be said, because it's entered my thoughts a few times in the past year. Whenever I go in for an interview, I generally try to play up my Linux experience (which is not incredible, but still fairly impressive). The logical question for an interviewer to ask, of course, is "what distribution(s) do you use?" As soon as the words "Arch Linux" come out of my mouth, I can see the interviewers knocking some points off of my interview. People always assume that Arch is just another one of those random "edge" distros that is basically an Ubuntu/Fedora knockoff with some sparkles thrown in, and no matter how much I explain it to them, I know that they don't respect an Archer as much as they respect a Slacker. So yeah, I'm a little shallow, but I do care about what people think of me, especially in interviews. A part of me always wishes that Arch were just a little more mainstream and a little better known.

So I've taken some pretty mean shots at Arch, but my comments shouldn't be misconstrued to mean that I hate Arch. Quite the contrary, in fact: I've loved using Arch. My old Arch review enumerates more clearly the points of Arch that I really like, but I'll list them here quickly.

  • Fast! - Compiled for i686 and lightweight with no extra cruft thrown in
  • Clean - there is nothing on my Arch system that I didn't put there
  • Simple - things tend to be very straightforward and elegantly simple
  • Awesome documentation and user community - Arch's comprehensive wiki is, in my opinion, one of its strongest selling points, and the forums are quite helpful
  • Rolling updates - it's nice not to have to do some big update every six months...

All the reasons that I first came to love Arch still hold true; it's simply that as time has worn on, I've changed a bit: I don't care as much for bleeding-edge features, stability and security have become bigger issues, and I've started caring about what other people think of me. So I've been asking myself, "is Arch still for me?" And I think the answer might be no. I purchased a used IBM Thinkpad recently, and I don't think Arch is going to be my first choice for it.

It seems like it's time for Arch and me to "take a break" in our long relationship. But don't worry, Arch: it's not you, it's me.

8 May 2010

Smashing the Stack for Extra Credit

(So this one is a little old... I have a habit of writing up drafts, stashing them away to be uploaded later, and then completely forgetting about them.)

A quick intro to buffer overflow attacks for the unlearned (feel free to skip this bit).

I would highly recommend reading Aleph One's Smashing the Stack for Fun and Profit if you really want to learn about buffer-overflow attacks, but you can read my bit instead if you just want a quick idea of what it's all about.

Programs written in non-type-safe languages like C/C++ do not perform bounds checking on memory reads and writes, and are therefore often susceptible to what is known as a buffer-overflow attack. Basically, when a program allocates an array on the program stack, the possibility exists that, if the programmer is not careful, her program may accidentally write past the bounds of the array. One could imagine a situation where a program allocates an N-byte array on the stack and reads input from stdin into it using scanf's %s conversion (which stops reading at whitespace but performs no bounds checking). The user sitting at the terminal might decide to enter more than N bytes of data, causing the program to write past the end of its array. Other values on the stack could then be unintentionally altered, which could cause the program to execute erratically or even crash.

But a would-be attacker can do more than just crash a program with a buffer-overflow attack; she could potentially gain control of the program and cause it to execute arbitrary code. (By "executing arbitrary code" I mean that the attacker could make the program do anything.) An attacker takes control of a program by writing down the program stack past the array's bounds and changing the return address stored on the stack, so that when the currently executing function returns control to its caller, it actually ends up executing some completely different segment of code. At first glance, this seems rather useless. But the attacker can set the return address to anything; she could even make it point back to the start of the array that was just overwritten. A carefully crafted attack message could fill the array with bits of arbitrary assembly code (perhaps to fork a shell and then connect that shell to a remote machine) and then overwrite the return address to point to the top of the overwritten array.
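The description above is abstract, so here's a tiny, self-contained C sketch of the mechanism. The frame layout and all addresses are made up for illustration: a real attack overwrites an actual saved return address on the stack, which this toy program only models with a struct field.

```c
#include <string.h>

/* A toy model of a stack frame: buf is the undersized local array, and
 * ret_addr stands in for the saved return address that sits just past
 * it in memory. The address values are made up for illustration. */
struct frame {
    char buf[8];
    unsigned long ret_addr;
};

/* Simulate an unbounded copy (strcpy()/scanf("%s")-style): the whole
 * attack message is copied into the frame with no regard for the size
 * of buf, so the bytes past buf land on the "return address". */
unsigned long smash(void) {
    struct frame f;
    f.ret_addr = 0xAAAAAAAAUL;         /* the legitimate return address */

    unsigned char input[sizeof f];
    memset(input, 'A', sizeof f.buf);  /* filler bytes that fill up buf */

    unsigned long evil = 0xDEADBEEFUL; /* attacker-chosen address       */
    memcpy(input + sizeof f.buf, &evil, sizeof evil);

    /* The overflow: the copy is longer than buf. */
    memcpy((unsigned char *)&f, input, sizeof f);

    return f.ret_addr;                 /* now 0xDEADBEEF, not 0xAAAAAAAA */
}
```

Calling smash() returns 0xDEADBEEF: the bytes that spilled past buf silently replaced the "return address," which is exactly what lets the attacker redirect execution.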

A problem with the generic buffer overflow attack is that the starting location of the stack is determined at runtime and can therefore change slightly, making it difficult for an attacker to know exactly where the top of the array is every single time. A solution to this is to use a "NOP slide," where the attack message doesn't immediately begin with the assembly code but rather with a stream of NOPs. A NOP is an assembly language instruction that does nothing (I believe it was originally included in the x86 ISA to deal with pipeline hazards), so as long as the return address points somewhere into the NOP slide, the computer will "slide" (weeeeee!!!!) down into the rest of the injected assembly code.
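A sketch of how such an attack message might be assembled. The buffer size, the address guess, and the function name here are all invented for illustration; real shellcode would replace the dummy bytes.

```c
#include <string.h>

#define NOP         0x90          /* the one-byte x86 NOP opcode          */
#define PAYLOAD_LEN 64            /* size of the target buffer (made up)  */
#define ADDR_GUESS  0xbffff6c0UL  /* rough guess at the buffer's address  */

/* Build the classic attack message: [NOP sled][injected code][address].
 * The overwritten return address only has to land somewhere inside the
 * sled; execution then slides down the NOPs into the injected code. */
size_t build_payload(unsigned char *out, const unsigned char *code,
                     size_t code_len) {
    size_t sled_len = PAYLOAD_LEN - code_len - sizeof(unsigned long);

    memset(out, NOP, sled_len);              /* 1. the NOP sled      */
    memcpy(out + sled_len, code, code_len);  /* 2. the injected code */

    /* 3. aim the "return address" at the middle of the sled, so a
     * small error in ADDR_GUESS still hits a NOP. */
    unsigned long target = ADDR_GUESS + sled_len / 2;
    memcpy(out + sled_len + code_len, &target, sizeof target);

    return PAYLOAD_LEN;
}
```

The wider the sled, the sloppier the address guess is allowed to be, which is the whole point of the trick.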

Sounds simple so far? Just you wait....

The trials and tribulations of my feeble attempt to "smash the stack."

The professor for my security class threw in a nice little extra-credit problem on a homework assignment last quarter. One of the problems in the homework asked us to crash a flawed web server using a buffer overflow attack, but we could get extra credit if we managed to root the server with the overflow. Buffer overflow attacks on real software are nontrivial, something my professor made sure to emphasize when he told us that in his whole experience of teaching the class, only one student had ever successfully rooted the server. Now Scott Adams, author of Dilbert, mentioned in one of his books that the best way to motivate an engineer is to tell them that a task is nearly impossible, and that if they're not up for the challenge, it's no big deal because so-and-so could probably do it better. This must be true, because after discussion section I went straight to a computer and started reading Smashing the Stack for Fun and Profit, intent on being the second person at my school to "smash the stack."

I was able to crash the server software within less than a minute of swapping in the machine's disk image, as it was ridiculously simple to guess where an unbounded buffer was being used in the server code and force a segmentation fault. It took me a few more minutes to trace down the exact location of the overflow (there was a spot in the code where the program used strcat() to copy a message), but as soon as I did, I booted up GDB and started gearing up for some hard work.
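The server's actual code isn't reproduced here, but the vulnerable pattern looked something like the following sketch (the function name, buffer size, and prefix string are mine), shown next to the bounded fix:

```c
#include <string.h>

#define BUF 64

/* The vulnerable pattern: a fixed-size buffer plus an unchecked
 * strcat() of a message the client controls. */
size_t log_request_unsafe(const char *msg) {
    char line[BUF] = "request: ";
    strcat(line, msg);   /* overflows line once msg is long enough */
    return strlen(line);
}

/* The bounded fix: strncat() caps the copy at the space remaining
 * (leaving room for the terminating '\0'). */
size_t log_request_safe(const char *msg) {
    char line[BUF] = "request: ";
    strncat(line, msg, BUF - strlen(line) - 1);
    return strlen(line);
}
```

A message longer than the buffer's remaining space turns the unsafe version into exactly the kind of stack smash described above; the safe version just truncates.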

25 Apr 2010

The ipad… and Other Fail News

I stopped by Best Buy on my way to the grocery store to have a quick look at the much-lauded Apple ipad. My opinion, summed up in one word:

meh.

Pros:

  • shiny
  • long battery life
  • more portable than a typical laptop

Cons:

  • almost impossible to achieve a reasonably fast typing speed on it
  • shiny (which results in fingerprints and glare)
  • does nothing that my laptop can't do
  • can't do a lot of things my laptop can do
  • costs $150 more than the refurbished laptop I just bought
  • closed platform
  • wide-aspect movies look weird on a 4:3 screen
  • back-lit screens are not ideal for reading books
  • development work for the ipad must be done in Objective-C
  • less portable than a Motorola Droid or Nexus One (or even an iphone)

In other fail news:

A week ago, I got a big batch of images that needed to be resized and displayed on one of the websites I manage. This required that I crop and resize each photo to exactly the correct size for the site, a time-consuming and quite laborious task. So I figured I could whip up a script with Python and ImageMagick to help automate the process, the idea being that the program would let the user simply highlight the "relevant" area of an image and would then crop and resize it to the correct size.

I ended up having to use wxPython for all the GUI-type stuff, which meant spending some time learning the ins and outs of GUI programming, seeing as how my experience with that kind of stuff is fairly limited. So for the past week, I've been spending about an hour a day learning wxPython and knocking together a sort of program to make my life easier. Today, I looked at my image-resizer program and realized I had created some of the most god-awful code ever seen by mankind. It was basically 100+ lines of uncommented and completely unintelligible spaghetti code.
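The monster itself is (mercifully) gone, but the one piece of logic worth keeping can be sketched in a few lines of Python. The function name and interface here are hypothetical, and the actual cropping and resizing would be handed off to ImageMagick or a similar imaging library:

```python
def fit_crop_box(box, target_w, target_h):
    """Expand the user-highlighted box about its center so that it
    matches the target aspect ratio; the result can then be cropped
    and resized to (target_w, target_h) without distortion."""
    left, top, right, bottom = box        # the highlighted rectangle
    w, h = right - left, bottom - top
    target_ratio = target_w / target_h
    if w / h < target_ratio:
        # Highlight is too narrow: widen it about its center.
        new_w = h * target_ratio
        cx = (left + right) / 2
        left, right = cx - new_w / 2, cx + new_w / 2
    else:
        # Highlight is too wide (or exact): grow the height instead.
        new_h = w / target_ratio
        cy = (top + bottom) / 2
        top, bottom = cy - new_h / 2, cy + new_h / 2
    return (left, top, right, bottom)
```

With an imaging library such as Pillow (a stand-in here; I was shelling out to ImageMagick), the rest is just cropping to the returned box and resizing to the target dimensions.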

I threw my monster out and did the cropping and resizing by hand, which ended up taking me less than an hour.

18 Apr 2010

Things that Happened

What I did this weekend:

  • procrastinated
  • cooked enough food to last me to the middle of the week
  • ate all the food I cooked (I was hungry)
  • tried to sleep in but failed miserably (I ended up waking up at 7:50am)
  • rode my bike 25 miles, stopped and stared at the houses in Brentwood that probably cost more money than I'll ever make in ten lifetimes
  • procrastinated by looking at various electronic gadgets on-line that I have no need for and couldn't possibly afford
  • tried to work on my lab but was distracted by food
  • sat in the computer lab for about three hours, wrote two lines of code, and tried unsuccessfully to help someone with his Linux troubles
  • procrastinated by doing laundry and then sewing up the holes in my black jeans (there were a lot more holes than I realized)
  • tried to study but somehow ended up watching old Justice League Unlimited episodes on youtube
  • finally got my butt in gear around 7pm on Sunday night and hit the library

On another note, I added a basic captcha to the comment box on this blog in order to reduce the amount of spam Akismet had to handle (Akismet is great, but it does occasionally mark stuff incorrectly). Amazingly enough, a few spam bots are making their way past my captchas! Modern image processing is impressive stuff...

16 Apr 2010

OCaml Infix Notation and Parentheses

So when I first picked up LISP, I found myself hating everything about the language, from its distinctively un-C-like functional style to the inordinate number of parentheses required by the syntax. But before long, I found myself accidentally writing my math homework in prefix notation and putting parentheses around all of my sentences, just out of pure habit. With time, I began to find the LISP style more relaxing to develop in, and started to understand the stark beauty of the language. A few months after my first excursion into LISP, I was telling people that LISP was the most beautiful language ever created.

Now, enter OCaml, a functional language free from legacy cruft and inspired in some form by LISP. I thought I would love developing in a LISP-like language without having to end statements with a dozen closing parentheses (which I had thought to be LISP's only big syntax flaw), but I found the lack of required parentheses initially quite awkward. Yes, there are a lot of insipid little parentheses in LISP, but their point is to clarify the code, and they do! Parentheses are what allow LISP to have such simple and easy-to-understand syntax. (Let's face it, OCaml just doesn't have the "It's full of cars!" sort of easily understood syntax.) Of course, since OCaml makes parentheses optional in most cases, one could simply add parentheses to everything in OCaml, much as one would in LISP. I got over the parenthesis business in OCaml quite quickly, although I still miss them quite a bit.

The one thing I could never get over, however, was the odd way OCaml uses built-in operators (like '+', '-', '/', etc.) in infix notation while all other functions use the typical prefix notation. Enclosing an operator in parentheses lets it be used in prefix notation, but this is a little clunky. I would think it better to force all aspects of the language to follow the same common rules in order to reduce confusion, but the OCaml designers were apparently following a different line of reasoning from mine.
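To illustrate the asymmetry, here is a tiny OCaml sketch (variable names are mine):

```ocaml
(* Built-in operators are infix by default: *)
let three = 1 + 2

(* Wrapping an operator in parentheses turns it back into an ordinary
   prefix function, LISP-style: *)
let also_three = ( + ) 1 2

(* Clunky for plain arithmetic, but genuinely handy when passing an
   operator to a higher-order function: *)
let sum = List.fold_left ( + ) 0 [1; 2; 3]
```

So the prefix form does earn its keep with higher-order functions, even if it never stops feeling like a special case.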

Although OCaml's odd mix of infix and prefix notation has remained a thorn in my side whenever I lay hand to keyboard to bang out some OCaml code, I've nevertheless managed to gain a good understanding of the language. I've also started to appreciate the usefulness of the language, but my heart still pines for LISP...

30 Jan 2010

The ipad is Not the Kindle Killer

The inter-webs have been abuzz about the revelation of Apple's new ipad, and as always, I'm late to the blogging party. Now, I'm no Apple fanboy, and I'm not particularly impressed by Apple's new slate. Personally, I wouldn't buy one, but I'm sure there are tons of people out there who would love to own one. The one thing, however, that has really been bugging me lately is how many people are proclaiming the ipad to be the Kindle killer and saying that Amazon (and all other e-book reader makers) should just close up shop. Yes, the ipad is capable of providing many more services than the Kindle, such as full web browsing, office programs, and movie playback, things that the Kindle cannot possibly offer. But Apple fans are forgetting that the reason people buy Kindles and other e-book readers is so they can read books, not browse the web.

E-ink, the display technology behind most e-book readers, is an amazing technology, not just because of its superior battery life, but because reading it is like reading a newspaper. Anyone who has read plain text on a computer screen for hours at a time knows that it is not a particularly fun experience. Back-lit screens are stressful on the eyes over long periods, whereas reading a good old-fashioned paper book is much easier. I own several e-books, and I actually read Stephenie Meyer's God-awful Twilight on my computer screen; it was not an experience I want to repeat, not just because of the terribleness of the book but also because of how my eyes started to burn from staring at a back-lit screen for so long. Now, I have a friend who says he likes to read books on his iphone for extended periods of time, and I'm fairly certain that he's either a freak of nature or a bald-faced liar. But aside from the scarce masochistic few who enjoy burning their retinas out staring at glowing boxes, most everybody else would rather read paper books.

E-ink has allowed for electronic reading devices that are easy and comfortable to read on. This is something that the multi-use ipad does not offer, and it is the reason that dedicated e-book readers like Amazon's Kindle aren't going anywhere just yet. I don't mean to imply that the ipad is doomed to failure, but rather that the ipad is a device meant to do many things and cannot compete with dedicated e-book readers like the Kindle.

25 Jan 2010

Intel FDIV Bug

A few years back, I put up a bunch of my high school and early college papers on this blog (they're under the "literature" category). It's a sad state of affairs that, looking back at my high school papers, I realize my writing skills were significantly better back then. But anyways, here's a paper I wrote for my engineering ethics course. It's not my best work, and it certainly lacks the finish of my old high school stuff, but it's passable.

To the Intel Corp. Board of Directors: A Post Mortem Report of the Pentium Flaw
Abstract

The floating point division flaw in the original Intel Pentium CPU, which resulted in some floating point division operations being calculated improperly, was the result of a few poor engineering decisions and, while avoidable, was not condemnable. The subsequent decisions made by Intel executives, to keep the flaw hidden and then to downplay its importance, were, however, morally flawed. While Intel executives adhered to a utilitarian ethical framework, they failed to consider the impact their decisions would have on Intel’s public image. Had Intel executives followed a combination of rights and utilitarian ethics, where the rights of the customer are upheld while the company’s wellbeing is still valued, executives would have reached the correct decision: to offer a full “no questions asked” replacement policy at the very first discovery of the flaw.

The Pentium “FDIV Bug”

Given certain types of input data, the floating point division instructions on the original Intel Pentium CPU would generate slightly erroneous results. This result was dubbed by the public as the “FDIV Bug,” as one of the assembly language instructions affected by the bug was the FDIV instruction. Although Intel initially attempted to keep information regarding the flaw hidden, it eventually became public knowledge. The subsequent actions of Intel executives regarding their handling of the flaw were morally questionable and ultimately resulted in great damage being done to Intel’s public image. A different set of ethical frameworks would have allowed Intel executives to have reached the correct decision.

Using the basic Microsoft Windows calculator, a Pentium user could check for the presence of the flaw by performing the following calculation:

(4195835 * 3145727) / 3145727

Since dividing a number by itself yields one, the equation above should give exactly 4,195,835, but the flawed Pentium Floating Point Unit (FPU) produced a value of 4,195,579, an error of about 0.006%. Not all calculations performed by the FDIV instruction on a Pentium CPU were incorrect, however. The occurrence and degree of inaccuracy of the floating point division calculations were highly dependent on the input data and the specific divide instruction used, and in most cases the flaw was not apparent at all. According to Intel Corp., the flaw would only be encountered once every 27,000 years under normal use, although other groups have produced significantly different failure rates.
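(The arithmetic is easy to check on a modern machine; the following Python sketch compares correct IEEE double arithmetic with the value the flawed Pentium reportedly produced.)

```python
# Both the product and the quotient below are exactly representable as
# IEEE doubles, so any non-flawed FPU computes this with no rounding error.
x, y = 4195835.0, 3145727.0

correct = x * y / y      # 4195835.0 on a correct FPU
flawed = 4195579.0       # the value the flawed Pentium reportedly gave

error_pct = (correct - flawed) / correct * 100
print(f"relative error: {error_pct:.4f}%")   # about 0.0061%
```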

The “FDIV Bug” did not affect Intel CPUs predating the Pentium, as the flaw was a defect in a new algorithm that was intended to provide improved floating point performance over the Intel 486 (the predecessor to the Pentium). The Pentium used a new radix 4 SRT algorithm (named after its creators Sweeney, Robertson, and Tocher) in its floating point division operations, which required the use of a lookup table to improve calculation speed (Intel Corp. Section 4). This lookup table was generated prior to assembly and then loaded into a hardware Programmable Lookup Array (PLA) on the Pentium chip. However, the script which downloaded the lookup table into the PLAs had a bug in it that caused some lookup table entries to be omitted from the PLAs. Consequently, floating point division instructions that required the missing entries from the lookup table would produce erroneous values. This flaw has since been fixed and the “FDIV Bug” is no longer apparent in newer Intel CPUs.

The Pentium flaw should have been easily discoverable in early testing of the CPU, but there was also a mistake in Intel’s proofs for the Pentium FPU. Intel engineers attempted to simplify testing and assumed that the sign ("+" or "-") of a number doesn’t enter into division operations except in the last step. Thus, the proof for the Pentium only checked half of the PLA and assumed (incorrectly) that the other half of the PLA was simply the mirror image of the checked half (Price p. 2). Unfortunately, the untested half of the PLA contained the missing entries. The two easily discoverable flaws, one in the PLA loading script and the other in the PLA proof, conspired to hide each other from Intel engineers, so the Pentium’s flaw was not discovered until after production of the CPU began.

Events Surrounding the Flaw

Intel Corp. discovered the flaw in the Pentium’s floating point unit through testing, in June of 1994 (after production of the chip), but chose to keep the information private instead of disclosing it to their customers (Markoff). Although Intel modified the design of the Pentium, the modified chips did not begin to reach the market until November of 1994, and the sales of flawed chips were not halted. Dr. Thomas R. Nicely of Lynchburg College also independently discovered the “FDIV Bug” in June of 1994 and attempted to bring it to the attention of Intel Corp. in October of that year, whereupon an Intel representative confirmed the existence of the flaw and then ceased to provide Dr. Nicely with any more information (Nicely). Nicely then proceeded to make the Pentium floating point unit’s flaw known to the public via e-mail, causing news of the Pentium flaw to spread quickly. Concerned Pentium owners who learned of the flaw were told by Intel that the flaw was inconsequential and that no replacement policy was being offered.

23 Jan 2010

Miyata 914 – Acquisition and Review

About how I acquired a Miyata 914

For the past three months, on my way to class, I've been walking past a wheel-less bike chained to a bike rack underneath an overhang. The bike's distinctive bright green saddle was pretty much the only speck of color amidst a sea of dirty Huffys, so it was hard to miss. One day, I happened to catch a closer glimpse of the green-saddled bike and was surprised to see that it was a Miyata (I have a soft spot for Miyatas, since I already own one); closer inspection revealed it to be a Miyata 914. I spent several minutes examining the Miyata and noticed that aside from the thick layer of dust and grime coating it, and the lack of wheels, it was in surprisingly good condition. I started to wonder whether the owner of the Miyata had graduated and forgotten his bike, or had simply abandoned it after the wheels were stolen. On the off chance that the latter was true, and hoping that the Miyata's owner still walked the same route to class, I left a note asking him to contact me if he had any wish of selling.

My note was gone the next day, and by the end of the week I received an e-mail from the Miyata's owner, saying that he was considering selling his bike and would I make an offer? Betting that any man who puts a kickstand on a semi-pro bike (the atrocity!) and leaves it outside for three months probably doesn't realize the worth of a good, splined, triple-butted Miyata CrMo steel frame, I made a low-ball offer of $50; high enough to tempt him into selling, but still low enough to make it a bargain buy. We eventually settled on $75, which was higher than I would have liked, but still pretty decent. I've been told that the Miyata 914 has the same frame as the top-of-the-line Miyata Team, but with slightly inferior components, and I saw a NOS 1990 Miyata Team selling for $600 on ebay, which makes the $75 I paid seem like daylight robbery. I think that given the condition of the Miyata I purchased, it could have fetched close to $200 on craigslist.

My initial suspicions about the owner were confirmed when I met him: he did not appear to be a cyclist and didn't realize the full worth of the Miyata 914. Strangely enough, he was several inches shorter than me (I'd put him around 5 foot 7), which means that riding the 60 cm Miyata must have been extremely awkward for him.

As soon as money and bike exchanged hands, I raced home, threw some newspaper down, and set up my bike stand in the middle of the living room (thank God my roommates weren't home...). I started by just cleaning the bike off, and as soon as the dirt started to fall away, I realized that the 914 was actually in better condition than I had thought; the paint was only scratched in a few places, and the chainrings looked brand new.