failing like never before

10Aug/09

Day 1 of the Everyman

3am – 6am (Monday, August 10)

I went into this experiment already a little tired, so right now I am quite knackered. I have a feeling that I will be spending most of the next few days staring blankly at a computer screen watching TV reruns online (like I'm doing right now). Obviously, when I first woke up from my core nap I was extremely tired, but after a brief walk around the block in the cool air I'm feeling a bit better, albeit still slightly woozy.

One of the plus sides of going to sleep so tired, I suppose, is that I fell asleep right away without any problems.

This is extremely unscientific, but after every nap, I'll be taking a three-minute typing test (courtesy of typingtest.com) to maybe test my dexterity and "wakefulness." It's entirely possible that my wakefulness will not be reflected in my typing scores, but I'll try nonetheless. My typing scores for this waking period are:

Net Speed: 67 WPM
Accuracy: 95%
Gross Speed: 70 WPM

11am – 11:20am

I've always had trouble taking naps during the daytime, even when extremely tired, and this nap was no different. I probably didn't fall asleep until noon, and even then it was a bit of a light sleep and the alarm clock jolted me awake almost immediately. My awareness and reflexes were definitely not as good, and I continued to feel a bit woozy throughout the next few hours.

Net Speed: 73 WPM

Accuracy: 90%

Gross Speed: 81 WPM

4pm – 4:20pm

I tried quite hard to fall asleep but ultimately was unable to fall asleep at all, despite lying in bed for an hour. Eventually, I decided to just forgo this nap entirely. College has taught me how to function on extremely little sleep; on one occasion I managed to operate on three hours of sleep per day for almost four days. I felt like a sleepwalker during this period, and had a bit of a headache.

10pm – 10:20pm

It was easy to fall asleep for this period since this is when I usually go to bed. Unfortunately, my twenty minute nap turned into a seven hour snooze. My alarm must have gone off because when I woke up, I found that I had removed the battery from my cell phone.

10Aug/09

The Everyman Begins Now

(For those unfamiliar with polyphasic sleep and the everyman sleep schedule... or you could JFGI)

I've been interested in polyphasic sleep ever since I read about it while not-doing homework, but I have never managed to muster the courage to give it a go. However, my roommate from last year expressed some disbelief at the idea that a person can subsist on a measly four hours of sleep per day, and so I brashly vowed to prove him wrong over the summer. It occurred to me earlier tonight that the summer is nearing its end, so I had best start my polyphasic sleep experiment if I really want to try it.

I will be attempting to adopt a tried-and-tested sleep schedule, called the everyman sleep schedule, which consists of a three-hour core nap and three twenty-minute naps throughout the day. The everyman schedule is so named because it is supposed to be suitable for "everyman." And unlike the fabled uberman sleep schedule, the everyman is a little more forgiving towards late and missed naps, and since fewer naps are taken, the length of a waking period is longer.

I've read that some people have just used kitchen timers to wake themselves up from their naps, but I've decided to use the alarm clock on my cell phone instead, since it allows me to schedule multiple alarms to go off every day.

My proposed sleeping schedule is as follows:

3am – 6am

11am – 11:20am

4pm – 4:20pm

10pm – 10:20pm

I will attempt to regularly blog my progress in this experiment and note the effectiveness of the sleeping schedule. All my posts related to my attempts will be posted under the category "polyphasic sleep."

Currently, it is 2am, and so I will be taking my core nap in one hour. Sadly, I have barely begun my polyphasic sleep schedule and I am already rather tired.

9Jun/09

Network Timeouts in C

Recently, while coding up some P2P sharing software for class, I came across a problem that really got me stuck. (Note that I forked a separate process to handle each upload and download.) When reading data from another peer, my peer would generally have to block until the other peer responded, since with network programming we can never expect all of our requests to return immediately. The problem was that occasionally, the other peer would decide to die entirely, and I was left with a process that would block essentially forever, since the signal it was waiting for was never going to come.

Now, the great thing about blocking reads is that they don't burn CPU time spinning around in a circle waiting for data to arrive, but they do take up space in the process table and in memory. And of course, if my blocked reading processes stayed around forever, it would be very simple for a malicious peer to bring my OS to a grinding halt. Essentially, what I needed was a way to make read() time out. Anyone vaguely familiar with internet browsers and other such network-reliant programs is probably familiar with timeouts, but I had no idea how to implement one in C.

My first thought was to use setrlimit(), a Unix function that allows the programmer to set the maximum amount of system resources a process can consume (CPU time, VM size, created file sizes, etc.; for more, see "man 2 setrlimit"). When setrlimit() is used to set a maximum amount of CPU time, the process receives SIGXCPU when the CPU time soft limit is reached, and is killed when the hard limit is reached. At the time, I was a bit groggy, so setrlimit() seemed like a great solution, but of course anyone with half a brain (which I apparently didn't have at the time) will realize that setrlimit() is definitely not the solution to this problem. A blocking process doesn't run and therefore doesn't consume CPU time, so setting a maximum CPU time does pretty much nothing to the blocking process; it'll still keep blocking forever.
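
For what it's worth, the dead end looked something like this (a sketch; the ten and twenty second limits are arbitrary numbers picked for illustration):

#include <sys/resource.h>

/* The dead end: cap this process's CPU time at 10 seconds (soft limit)
 * and 20 seconds (hard limit). SIGXCPU arrives at the soft limit. But a
 * blocked read() consumes no CPU time at all, so these limits are never
 * reached and the process keeps blocking forever anyway. */
static void limit_cpu_time(void)
{
    struct rlimit lim = { .rlim_cur = 10, .rlim_max = 20 };
    setrlimit(RLIMIT_CPU, &lim);
}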

After a little bit of trawling the internet, I finally came upon the perfect solution: alarms! When alarm(unsigned int seconds) is called, it will raise SIGALRM after that many seconds (real-time seconds, mind you, not CPU seconds consumed by the process), even if the process that called alarm() is blocking. I set the alarm right before I began a read() and used signal() to bind a signal handler to SIGALRM, so that when the alarm went off my signal handler could gracefully kill the timed-out download process!
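
The pattern looks roughly like this (a minimal sketch of the technique, not my actual project code; the thirty-second timeout and the function names are made up for illustration):

#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

#define READ_TIMEOUT_SECS 30  /* arbitrary timeout for illustration */

/* Runs when SIGALRM fires, i.e. the peer never responded in time. */
static void handle_timeout(int sig)
{
    (void)sig;
    _exit(EXIT_FAILURE);  /* kill this download process; _exit() is async-signal-safe */
}

static ssize_t read_with_timeout(int fd, void *buf, size_t count)
{
    ssize_t n;

    signal(SIGALRM, handle_timeout);  /* bind the handler to SIGALRM */
    alarm(READ_TIMEOUT_SECS);         /* schedule the alarm... */
    n = read(fd, buf, count);         /* ...then block on the peer */
    alarm(0);                         /* data arrived in time: cancel the alarm */
    return n;
}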

31May/09

Three Weeks in Biking

Distance Traveled: 13.497 miles
Average Speed: 12.9 MPH
Maximum Speed: 31.2 MPH
Elapsed Time: 1 hour, 3 minutes, and 12 seconds

Distance Traveled: 20.253 miles
Average Speed: 11.8 MPH
Maximum Speed: 31.2 MPH
Elapsed Time: 1 hour, 43 minutes, and 10 seconds

Distance Traveled: 27.360 miles
Average Speed: 13.5 MPH
Maximum Speed: 31.2 MPH
Elapsed Time: 2 hours, 2 minutes, and 0 seconds

I really did a number on my bike two weeks ago. I took a shortcut through some bushes, which as it turned out were obscuring some rather large rocks, and somehow managed to derail my chain off both the front and back gears. After I reset the chain, I noticed that the right pedal was spinning extremely freely; closer inspection revealed that the pedal was wobbling around on the axle and that the pedal housing was dropping bearings everywhere. When I got my bike back to my room and started to disassemble the pedal in hopes of repairing it, I noticed something quite odd.

It appeared that somehow, in the process of traveling through the bushes, I had managed to slam my pedal straight down into the ground, pushing the plastic pedal housing towards the bicycle body and cracking it down the center. Of course, the pedal was entirely unrideable, since putting any real pressure on it would crack it completely in half.

So now I have an unrideable bicycle. Go me.

12May/09

The Evils of FAT

I think we're all familiar with the infamous FAT32 (File Allocation Table) file system, still in use after all these years despite the numerous superior alternatives available. I myself am guilty of helping to extend FAT's unnaturally long lifetime on this earth: in order to have an external hard drive that is easily mountable and readable by all the major operating systems (Linux, OS X, and Windows), I had to format the disk using a commonly supported file system. Unfortunately, FAT32 was the only option available. It would be nice if Microsoft and Apple started including, by default, drivers for some modern file system in their operating systems, so that all machines could easily share external media without suffering from the performance penalties inherent in FAT. But this is extremely unlikely, so I won't spend too much time hoping. But what exactly is it that makes FAT such a terrible file system?

Anyone using FAT32 on an external hard drive used to store media will probably be familiar with FAT32's maximum file size of 4GiB, which is actually larger than the maximum file size allowed in the original FAT file system. FAT also has linear seek time within files, since in order to find the ith block used to store data in a file, we must first follow the chain through all the blocks preceding it. And of course, FAT has some pretty serious fragmentation issues, especially once we start creating and deleting files on the drive. On a physical hard drive, heavy fragmentation of files means the hard drive head has to constantly seek around the drive in order to find the next 4KiB block. This is of course a bad thing, especially since the bottleneck on a physical hard drive is its seek time. Ideally, we would like contiguous blocks to be used for a single file.
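
To make the linear seek concrete, here is a hypothetical sketch of what locating the ith block of a file entails, assuming the table is already sitting in memory as an array of 32-bit entries:

#include <stdint.h>

/* Walk the FAT chain to find the cluster holding block i of a file.
 * There is no way to jump straight to it: we must follow the chain one
 * link at a time from the file's first cluster, so locating a single
 * block costs O(i) table lookups. */
static uint32_t nth_cluster(const uint32_t *fat, uint32_t first, uint32_t i)
{
    uint32_t cluster = first;

    while (i-- > 0)
        cluster = fat[cluster] & 0x0FFFFFFF;  /* FAT32 entries use only the low 28 bits */
    return cluster;
}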

Another serious performance issue with FAT that is not readily noticeable is the size of the File Allocation Table itself, and how large it grows with the drive. Ideally, we would like to be able to just stick the whole FAT into RAM in order to improve performance, since caching the FAT on disk would mean an extra read just to consult the table before we could find a file on the drive. Now, let's assume that you're a movie fanatic, and in order to store your increasingly large movie collection, you've bought a new 1TB (sorry, not 1TiB) hard drive. Of course, you want your drive to be able to function with multiple operating systems, so you format it with FAT32. FAT32 uses 4KiB blocks, and each entry in the File Allocation Table takes up 4 bytes. Because we have a 1TB drive, we have about 2^28 entries in the table, which means that the File Allocation Table on your 1TB drive is about 1GiB. Some people might consider this to be a problem.
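
Spelling out the back-of-the-envelope arithmetic (using the decimal 1TB from above):

#include <stdio.h>

int main(void)
{
    unsigned long long drive_bytes = 1000000000000ULL;  /* 1TB, decimal */
    unsigned long long block_size  = 4096;              /* 4KiB clusters */
    unsigned long long entry_size  = 4;                 /* bytes per FAT32 entry */

    unsigned long long entries = drive_bytes / block_size;  /* ~2.4e8, roughly 2^28 */
    unsigned long long table   = entries * entry_size;      /* ~931MiB, call it 1GiB */

    printf("entries: %llu, table: %llu MiB\n", entries, table >> 20);
    return 0;
}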

Of course, a fairly modern desktop computer will have 4GB of RAM, so it could theoretically store the whole FAT in memory without having to cache it on disk, but I doubt the operating system would like to dedicate a quarter of its memory to improving IO performance on one external hard disk.

So there you have it, another good reason to stay away from FAT.

7May/09

Something is Not Quite Right

Take a look at this C function, and see if you can catch what's wrong with it. (That is, what about this function could produce an error?)

void wait_for_ready(void)
{
    /* 0x1F7 is the primary ATA status port: spin until the BSY bit
       (0x80) clears and the DRDY bit (0x40) is set. */
    while ((inb(0x1F7) & 0x80) || !(inb(0x1F7) & 0x40))
        continue;
}

This question showed up on my midterm and stumped the hell out of me at the time. Now that I know the answer, I feel like a complete idiot for not spotting the problem initially, seeing as it's so amazingly obvious. When I took the exam, I spent way too much time on this problem, completely overthinking it.

In this function, we're using a machine-level instruction, inb, to read the value of a hardware port on an x86 machine. So far so good, or so I thought when I was taking my exam. But the problem is that the data we're reading can be a shared resource; that is, other processes could be reading and writing it at the same time, so a race condition ensues. Even on a machine with a single processor this is still a problem: wait_for_ready() could read the port's value into a register, then a context switch could occur, some other process could change that state, and then wait_for_ready() regains control but operates on a stale value.

So simple. And now I feel like an idiot.

1May/09

GNU Readline

I recently wrote a shell for a project in my CS class. One of the advanced features that my partner and I implemented in the shell was tab completion, and in order to implement this extremely useful shell feature, we used the GNU Readline library. The GNU Readline library is a beast, and not in the good way. It's a great hulking pile of code and documentation, intended to provide a ton of features for reading typed lines. Once you figure out what all the function pointers, bindings, and generators in the Readline library are supposed to do, things become much more straightforward, but that doesn't negate the fact that initially figuring out Readline is a bit of a pain in the butt.

The first thing I did, after we were told that the Readline library could make our project easier to design, was to pop open a terminal and type "man readline". What I got was a basic summary of Readline, so in order to get the full library manual I had to resort to Google. I did, however, happen to see this at the bottom of the manpage:

BUGS

It's too big and too slow.

Now if even the guys working on the Readline library think that it's too big and too slow, we may have a potential problem on our hands.

One of the plus sides of Readline's enormity, however, is that it offers a whole slew of features, like a kill ring and generators for tab completion of filenames and usernames. It would be very nice, though, if all these features could be implemented without the need for a manual that probably took me longer to read than it did for me to code up a generator for command line completion.
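
For the curious, the core of a Readline-based completer looks roughly like this (a minimal sketch, not our actual project code; the command list and function names here are hypothetical):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <readline/readline.h>
#include <readline/history.h>

/* Hypothetical list of shell builtins to complete against. */
static const char *commands[] = { "cd", "exit", "help", "jobs", NULL };

/* Generator: Readline calls this repeatedly; state is 0 on the first
 * call for a given word, letting us reset our position in the list. */
static char *command_generator(const char *text, int state)
{
    static int list_index, len;
    const char *name;

    if (state == 0) {
        list_index = 0;
        len = strlen(text);
    }
    while ((name = commands[list_index++]) != NULL) {
        if (strncmp(name, text, len) == 0)
            return strdup(name);  /* Readline frees the returned string */
    }
    return NULL;  /* no more matches */
}

/* Completion hook: complete command names at the start of the line,
 * and fall back to Readline's default filename completion elsewhere. */
static char **shell_completion(const char *text, int start, int end)
{
    (void)end;
    if (start == 0)
        return rl_completion_matches(text, command_generator);
    return NULL;
}

int main(void)
{
    char *line;

    rl_attempted_completion_function = shell_completion;
    while ((line = readline("myshell> ")) != NULL) {
        if (*line)
            add_history(line);
        /* ... parse and execute the line here ... */
        free(line);
    }
    return 0;
}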

1May/09

My Google Interview

It's been a while now, but here it is anyway...

About six or seven weeks ago, I scored a phone interview (technically two) with Google's IT department for a summer internship. Nobody was more surprised than I was that Google actually found my resume impressive enough to warrant a phone interview, especially considering my less-than-stellar GPA and the enormous number of super-intelligent applicants Google receives every day. The two interviews were each forty-five minutes long, and the interviewers (both intelligent IT guys, not technically incompetent manager types) took pretty much all of the allotted time.

These days, almost every CS guy dreams of working for Google, and so I had heard a few things about their interviews before, which I would like to mention before I get into my interview. A few years ago, when I was an intern at Intel, they had a lady come in to tell all the high school interns how to be successful in scoring future jobs. She spent a lot of time teaching us how to walk properly, shake hands, sit in a proper manner, dress, and answer generic interview questions. She told us that Google interviewers like to ask broad open-ended questions like "how would you sell ice to an Eskimo" and "why are manhole covers round," and promptly set us to answering similar questions. A few months later, I went to a Google Tech Talk at my university, where a Google software engineer was asked by someone in the audience whether Google did in fact like to ask interview questions like "why are manhole covers round." The Google rep responded with the following:

"In my time at Google I have interviewed several software engineers and I have never asked a question like that before. Google is not in the business of making manhole covers. If we did make manhole covers, we might ask those kinds of questions."

I think occasionally a Google interviewer might throw in a brain teaser if they just want to burn some time, but apparently they don't do it too often.

Anyways, going on to my interview... I was interviewing for an IT position, so unlike the software developer positions, where they barrage you with an endless stream of algorithm and programming questions (Why is quicksort O(n log n)? What's the best sorting algorithm to use in this scenario? What data structure would you use for this? etc.), there was almost no programming involved in my interview. And since the recruiter and HR person told me pretty much nothing about what I should expect, I went into the interview pretty much cold.