Answer by Drew Lanza:
I took the multi-quarter sequence from Professor Knuth in the 70s. It was a mind-blowing experience.
I still see him at Zott’s every now and then on a warm day. He is still as friendly, funny, gracious and, dare I say, goofy looking as ever.
Taking the class was an experience. Many of us were undergrads who were struggling to keep up. This class was HARD. Some in the class were PhD students. A really diverse mix. Knuth was awesome. He would give us undergrads something to chew on. Then he would say he was going off on a deep tangent for the PhDs in the audience and that the rest of us shouldn’t worry. Rather than finding that scary, we all found it inspiring.
Some funny stories:
Occasionally he would bring in PhD theses from other universities. He would tell us there was a math error in it. Extra credit if you could find the error. We left those for the PhDs in our cohort.
If you know his books, then you know that he’s got some really hard problems at the end of the sections. It was not unusual for him to turn to the undergrads and say, “If you find this area interesting and you’d like to solve this problem, come see me - there might be a PhD thesis in there somewhere.” His door was always open. He wasn’t kidding.
One of my proudest memories of Stanford was when I won one of Knuth’s weekly contests. Often the homework was a program to be written in MIX.
One week it was to read in an input deck that described a crossword puzzle and then to print it out. The challenge was to do it in the fewest instructions possible. I stayed up all night working on it.
I vividly remember that day. He would announce the winners at the end of class.
“This week was rather unusual”, he started out. “Third place goes to John Smith whose program was fifty instructions. Second place goes to Jane Doe with forty five.” John and Jane were PhD candidates - the winners usually were.
I was excited! I knew mine was shorter than that. Had an undergrad finally ‘scored one for the team’?
Knuth continued. “I am reluctant to award first place.” He looked at me and smiled. He sighed. “The winning program was half the length of the nearest runner up.”
So what’s the problem?
“When I first ran the program it seemed to be stuck in an infinite loop. The mainframe aborted the job after the default 1 second of CPU time. So I ran it again but gave it 3 seconds. It failed to complete. I looked at the code. I couldn’t make heads or tails of it. But the student has always turned in his work on time. So I took a chance on him and set the CPU time to 30 seconds.”
Oh God! I’d run it with a small input deck because CPU time was ridiculously expensive and we all had small budgets to use. Had that small deck failed to unearth an infinite loop bug? No, wait, Knuth had said first place. What’s up?
He sighed again. “It took over 20 seconds of CPU time, Mr. Lanza, more than $100. Please don’t do that again. But it did work and it was the shortest program so Mr. Lanza wins this week. But never again, ok?”
I was mortified. It was as if he was telling me I had cheated. Everyone certainly was looking at me that way. I slumped deep into my seat.
He grabbed me after class. I was terrified. Mortified. Petrified. Here was one of the smartest men in the world and a guy I really looked up to.
He laughed. “You took this assignment a little too literally. That program should’ve executed in linear time. Yours was exponential. Did you know that?”
“To be honest, Professor, you said the fewest instructions. I didn’t stop to think about run time.”
“Fair enough I suppose. But never again, ok? I couldn’t make heads or tails out of your code. How did your algorithm work?”
“Well, Professor, I took McCarthy’s LISP class last quarter. To be honest, I had a heck of a time with recursion. At about 2am, though, it struck me to treat this crossword problem like a homework assignment I’d had from McCarthy. It just starts by the first square of the crossword puzzle asking its neighbor squares if they know what to do. Then each neighbor square does the same thing recursively. Eventually the boundary squares figure out that they’re on the edge and they report that back.”
His eyes opened wide. He laughed and laughed. “But that meant you had to read in the entire deck that described the crossword puzzle for each and every one of those recursions looking for the boundary conditions. No wonder it took so long.”
“Well, Professor, it seemed to me that the best way to get to the fewest instructions was just to have all of the boxes in the crossword puzzle talk to each other and let them sort it out. That’s what the LISP problem we had to solve was all about.”
He looked at me in a fatherly way. “I’ll mention this to John (McCarthy) the next time I see him. I’m sure he’ll get a laugh. By the way, what grade did you get in his LISP class?”
“A C-. I never did get recursion.”
Knuth smiled at me and said, “Oh, I wouldn’t say that.” He walked away chuckling to himself.
It was a smooth journey solving simple variations of classical DP problems… Whenever I encountered a string, I made a DP table storing 1 if the string met the specifications of the problem and 0 otherwise. A few similar easy problems didn’t hurt much; all I had to do was figure out a recursive idea and code it up.
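As a minimal sketch of that kind of 0/1 table, here is one possible specification (palindromic substrings is just an assumed example, not the problem from the question): `dp[i][j]` stores 1 if `s[i..j]` meets the condition and 0 otherwise.

```python
def palindrome_table(s):
    """Build dp[i][j] = 1 if s[i..j] is a palindrome, else 0."""
    n = len(s)
    dp = [[0] * n for _ in range(n)]
    # Base case: every single character is a palindrome.
    for i in range(n):
        dp[i][i] = 1
    # Fill by increasing substring length; dp[i][j] depends on dp[i+1][j-1].
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            if s[i] == s[j] and (length == 2 or dp[i + 1][j - 1]):
                dp[i][j] = 1
    return dp

table = palindrome_table("abba")
print(table[0][3])  # 1: "abba" is a palindrome
```

The recursive idea is exactly what makes these problems feel mechanical: once the recurrence for `dp[i][j]` is written down, the code falls out of it.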
But soon I started encountering “weird” problems that literally blew my mind. One of them was minimizing (max − min) of elements of ‘n’ given sets by storing them in an array and doing complex operations on the array. It didn’t even involve recursion!
All this baffled me. I wish I could get a subjective enumeration and description of all the DP problems you have ever encountered in your programming career; that would help in better understanding the DP domain. Links to similar problems are welcome.
I’m hoping for an enthusiastic response covering almost all areas and variations of “D” “P”. Though infinite variations are possible in programming, there are certainly a few must-do problems in any domain.
Any voluntary enumeration of problems from other topics besides DP (greedy, amortized analysis, game theory, graph theory, probability, computational geometry, or max flow) is appreciated too!
Many I/O-related system calls, such as read(2), will block (that is, not return) until there is activity. A socket can be placed in “non-blocking mode”, which means these system calls return immediately if there is nothing pending.
A non-blocking web server is one that can have multiple requests in progress at the same time within a single process (or thread), because it uses non-blocking I/O.
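A minimal sketch of that behavior, using Python’s socket module (a socketpair stands in for a real client connection; this is illustrative, not a server):

```python
import socket

# Create a connected pair of sockets to stand in for a client/server link.
server_side, client_side = socket.socketpair()

# In blocking mode (the default), recv() would hang here until data arrives.
# In non-blocking mode, recv() raises BlockingIOError instead of waiting.
server_side.setblocking(False)

try:
    server_side.recv(1024)
    result = "data was ready"
except BlockingIOError:
    result = "nothing pending, returned immediately"

print(result)

client_side.close()
server_side.close()
```

The key point is that the non-blocking call gives control straight back to the process, which is then free to check other connections instead of sitting idle.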
A blocking process is like the line at the post office: customers are served one after another, and no one advances until the person at the head of the queue has been served.
A non-blocking process is more like a guy on a trading floor, trying to process the calls of a number of people at once, cycling through them and handling each at a different rate.
(this is not meant to be an observation or critique of the post office’s or a trading floor’s efficiency)
Now, obviously, web servers since time immemorial (the early 90s?) have been able to handle and process multiple requests at once. That is to say, the software/service itself has been able to (a network service that can handle only one request at a time isn’t a very robust service), but the architecture of how it handles multiple concurrent requests depends on whether it is blocking or non-blocking.
Let’s use the Apache “prefork” worker model as an example, as it is the simplest and most directly observable blocking method (I believe all the Apache worker MPMs are blocking, even the hybrid multi-process, multi-threaded MPMs). In the prefork model, Apache is configured to run a certain number of processes to handle requests. As requests come in, they get assigned to one of the processes, which does nothing else other than handle that request from start to finish. That process is not available to do anything else while it is handling a request. This is like a post office that opens multiple windows at a time so that multiple people at the front of the line can be served at once. When a window finishes with a customer, the next customer in line goes to that window, whichever it may be.
It’s called “prefork” (fork, i.e. create, processes before requests come in), but Apache has a number of options to control the spawning of the processes that actually do the work. As traffic increases, Apache will spawn more worker processes, and kill them off as traffic decreases. Tweaking Apache to handle a certain number of concurrent requests is often considered heavy wizardry ( http://catb.org/jargon/html/H/he… ), and depends on many variables: application design, module selection, workload, hardware profile, available memory, and some other more arcane things.
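A toy version of the prefork idea can be sketched in Python. This is a simplification, not Apache’s actual logic: os.fork is Unix-only, the worker count is arbitrary, and each worker here handles exactly one connection and exits.

```python
import os
import socket

# Parent creates the listening socket first, then forks workers that all
# accept() on it; the kernel hands each incoming connection to one worker.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen()
port = listener.getsockname()[1]

NUM_WORKERS = 2
children = []
for _ in range(NUM_WORKERS):
    pid = os.fork()
    if pid == 0:
        # Worker: handle one connection from start to finish, then exit.
        conn, _ = listener.accept()
        conn.sendall(b"handled by pid %d" % os.getpid())
        conn.close()
        os._exit(0)
    children.append(pid)

# Drive the workers with two client connections.
replies = []
for _ in range(NUM_WORKERS):
    client = socket.create_connection(("127.0.0.1", port))
    replies.append(client.recv(1024))
    client.close()

for pid in children:
    os.wait()
print(replies)
```

Notice that each worker is itself fully blocking; concurrency comes only from running several of them at once, which is exactly the trade-off described above.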
This is considered inefficient because, in the vast majority of cases, processes spend much of their time waiting for something else, such as reading the request from the client, waiting for files to be read from disk, or talking to a database or other service. During these times, the process is said to be “blocking”, but is effectively idle. The obvious way to exploit this otherwise idle time is to start more processes to handle other requests. Each one of these processes is, by itself, blocking.
In the non-blocking model, a single process is able to handle multiple concurrent requests by interleaving non-blocking IO calls for all the requests. A non-blocking read doesn’t make the process wait for the other end to send data, it immediately returns and indicates there is nothing to read yet. This frees up the process to look for work elsewhere, like accepting a new request, or reading from another client, or checking if the database has returned results yet.
Here’s an example run of a single process handling two requests in a blocking model. Note that the second client actually connects in step 5, but its connection cannot be accepted, much less parsed, until step 14, after the first request is completed.
Observe that the process only needs to keep track of a single connection at a time. When the first connection is complete, it can forget about it and reuse any storage it allocated for the first connection to track the second. The process’s attention cannot be diverted to the second connection until the first is complete.
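The blocking flow can be sketched like this: both clients have data pending, but the single process drains the connections strictly one at a time (socketpairs stand in for already-accepted connections; this is a hypothetical illustration, not a real server):

```python
import socket

# Two "clients" with requests already pending.
conn_first, client_first = socket.socketpair()
conn_second, client_second = socket.socketpair()

client_first.sendall(b"GET /first")
client_second.sendall(b"GET /second")

order = []
for conn in (conn_first, conn_second):
    # This recv() blocks until this connection's data arrives; while we
    # wait, the other connection is ignored entirely.
    order.append(conn.recv(1024))
    conn.close()

client_first.close()
client_second.close()
print(order)
```

Even though the second client’s request was ready the whole time, it is not read until the first connection has been fully handled.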
And here’s a hypothetical list of actions a non-blocking web server might take. Note that when the second client connects at step 5, it is immediately accepted in step 6 and the processing of it is interleaved with the processing of the first connection.
In this case, the single process had to allocate space for two connections and two requests. The attention of that process is split between handling two requests.
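The interleaving can be sketched with Python’s standard selectors module, which wraps select(2)/epoll(7). Again, socketpairs stand in for two accepted client connections; a real server would also register its listening socket in the same loop.

```python
import selectors
import socket

sel = selectors.DefaultSelector()

# Two connected socket pairs stand in for two concurrent client connections.
conn_a, client_a = socket.socketpair()
conn_b, client_b = socket.socketpair()

for conn in (conn_a, conn_b):
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ)

# The clients send independently; the single-threaded loop below services
# whichever connection is ready, interleaving the two requests.
client_b.sendall(b"request from client B")
client_a.sendall(b"request from client A")

served = []
while len(served) < 2:
    # select() returns only the sockets that are actually ready, so one
    # slow client never stalls the others.
    for key, _ in sel.select(timeout=1):
        data = key.fileobj.recv(1024)
        served.append(data)
        sel.unregister(key.fileobj)
        key.fileobj.close()

client_a.close()
client_b.close()
print(served)
```

This is the bookkeeping described below: the process must track the state of every registered connection itself, rather than letting one blocked system call per process encode that state.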
In both of the above cases, there is a single thread of execution doing the 24 steps outlined above. No threading is used (even though the order of execution with threads might look similar).
Here’s another way to think of blocking vs non-blocking. In the blocking model, the operating system’s scheduler tells processes when to run based on external events; the state of the entire system is encoded in which stage of request processing each process is in. In the non-blocking model, the process itself is completely in control of how it responds to external events, and the state of the entire system is encoded in the bookkeeping the non-blocking server does for each connection. The blocking server doesn’t need any such bookkeeping, since it only handles one request at a time per thread of execution.
Which one of these should you prefer? Code for blocking servers is usually easy to understand, mainly because it executes from start to finish and has very specific, obvious points where execution will pause; this pausing doesn’t hurt your ability to understand the state of the entire system at any one time, because each request still runs from start to finish. On the other hand, blocking servers, by needing a dedicated thread of execution for each request, can eat up a lot of memory, plus process-creation time if your fork or initialization is slow. There have been attempts to address this by running multiple threads, which are considered more lightweight, within multiple processes. This can often be a red herring: on modern operating systems, threads are scheduled the same way processes are (by the kernel), so you’ve gained complexity from using threads without a significant execution-time advantage. Your mileage may vary. Additionally, if you’re using a threaded execution model in a blocking server (rather than completely isolated processes), all the dependent libraries need to be thread-safe, which has traditionally been difficult to achieve.
Non-blocking servers, on the other hand, have the advantage of explicit shared storage between requests (up until the point you need to run more than one process on the same machine, or more than one machine), which is safer than threads sharing storage: no exclusive-access controls (like semaphores) need to be maintained, because there is only one thread of execution. This makes the stereotypical “chat server” straightforward to implement as a non-blocking server. While there is only a single thread of execution, the work for each request is interleaved with work for other requests. This can make debugging difficult, because there’s an increased chance of one request influencing the state of another.
Blocking servers are commonly either single-shot, meaning the process exits after the request is completed, or will exit after a certain amount of time or after a certain number of requests have been processed. This has the advantage of “cleaning things up” that might be leaked in the application or library code (the quality of which you can’t often control), such as memory or file descriptors. This is considered a brute force, but necessary, way of dealing with these things in long-running servers that handle multiple requests serially.
For an interesting critique of the perceived performance of non-blocking webserver design, see Ted Dziuba’s Node.js is Cancer ( http://teddziuba.com/2011/10/nod… ).
Answer by Neal Wu:
Make sure you realize it’s not a competition.
Success isn’t a zero-sum game. Instead of feeling jealous, try to change your attitude: if there are people around you who you think are way ahead of you, that’s awesome! Seek them out and learn as much as you can from them. In the process they’ll gain perspective as well, just from teaching you the concepts and having to explain things from the ground up. And if for some reason they don’t think you’re worth their time, don’t worry; keep looking and you’ll find someone who’s willing to help.
Another thing I’ve found is that a key ingredient for success at anything is raw excitement. Take any chess grandmaster or Wimbledon winner, and you can be sure that when they started, they had no idea what they were doing either. How did they reach the top? Yes, through lots of practice, but plenty of people can put in the hours. The common factor I’ve noticed among the most accomplished individuals is that the people who are really the best at something are also the ones with the most enthusiasm for it, the ones who put the most energy into it. And if you ask them, they’ll usually say they were happy to put in all the time anyway, win or lose.
So basically, do it if you like it. Don’t let your late start stop you! (And honestly, you’re not starting that late at all.) If you really enjoy it, you’ll quickly find that you’ve learned much more than you ever thought you would.
On a somewhat meta note:
>Over the last two and a half years, we’ve picked up a few tips and tools about scaling Postgres that we wanted to share—things we wish we knew when we first launched Instagram.
A common failure mode for myself and, I suspect, others, is thinking that we have to know every single thing before we start. Good old geek perfectionism of wanting an ideal, elegant setup. A sort of Platonic Ideal, if you will.
These guys went on to build one of the hottest web properties on earth, and they didn’t get it all right up front.
If you’re postponing something because you think you need to master all the intricacies of EC2, Postgres, Rails or $Technology_Name, pay close attention to this example. While they were launching, growing, and being acquired for a cool billion, were you agonizing over the perfect hba.conf?
More a note to myself than anything else :)
For various reasons, including performance and cost, Twitter has poured significant engineering effort into breaking down the site backend into smaller JVM-based services. As a nice side effect we’ve been able to open source several of the libraries and other useful tools that came out of this…
These are some good links on fork, exec, clone and other system calls for process and thread manipulation.
I’ll add some more later.
Whoa! Lots of videos from Microsoft Research Labs. I haven’t seen any of them yet; hopefully some will be accessible to undergrads.