
Online Programming Competitions are Overrated

The title is not merely clickbait; it is my current opinion, formed after attending a programming competition for the first time. This post expresses my opinions on the hiring processes of [some of] the new-age companies that recruit through programming competitions and algorithms-focused interviews.

I believe that a senior/architect-level programmer should be assessed by how cooperatively [s]he has worked with others to create interesting products, and by that history, rather than by how competitive [s]he is in a contest.

Algorithms

In my lone programming competition experience (on hackerrank), the focus of the challenges was on algorithms (discrete math, combinatorics, etc.).

Using standard, simple algorithms instead of fancy, non-standard algorithms is a better idea in real life, where products have to last a long time and survive changing programmers. Fancy algorithms are usually less well tested and harder for a maintenance programmer to understand.

Often, it is efficient to use the APIs provided by the standard library or by ubiquitously popular libraries (say jquery). Unless you are working in specific areas (say compilers, memory management, etc.), in-depth knowledge of a wide range of algorithms may not be very beneficial (imo) in day-to-day work, as elaborated in the next section.
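As a hypothetical Python sketch of that point (the function names below are mine, not from any particular codebase), the standard library usually already ships a well-tested version of the algorithm you would otherwise hand-roll:

import bisect

# A hand-rolled binary search is easy to get subtly wrong
# (off-by-one errors, wrong behaviour on duplicates, etc.).
def find_ge_handrolled(sorted_values, target):
    lo, hi = 0, len(sorted_values)
    while lo < hi:
        mid = (lo + hi) // 2
        if sorted_values[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo

# The standard-library version is shorter, widely exercised, and
# already familiar to the next maintenance programmer.
def find_ge_stdlib(sorted_values, target):
    return bisect.bisect_left(sorted_values, target)

values = [1, 3, 5, 7, 9]
assert find_ge_handrolled(values, 6) == find_ge_stdlib(values, 6) == 3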

Runtime Costs

There are various factors that decide the runtime performance, such as: disk accesses, caches, scalable designs, pluggable architectures, points of failure, etc.

Algorithms mostly optimize one aspect: CPU cycles. There are other aspects (say the choice of data structures, databases, frameworks, memory maps, indexes, how much to cache, etc.) which have a bigger impact on overall performance. CPU cycles are comparatively cheap, and we can afford to waste a few of them rather than do bad I/O or settle for a non-scalable design.
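A toy sketch (mine, not from the competition) of how the data-structure choice dwarfs micro-level cleverness: the two functions below use the same obvious algorithm, but one membership test is backed by a list and the other by a set.

import time

def common_elements_list(a, b):
    # O(len(a) * len(b)): every membership test scans the whole list.
    return [x for x in a if x in b]

def common_elements_set(a, b):
    # O(len(a) + len(b)): the same idea, but with hash-based lookups.
    b_set = set(b)
    return [x for x in a if x in b_set]

a = list(range(20_000))
b = list(range(10_000, 30_000))

start = time.perf_counter()
common_elements_list(a, b)
print("list-backed:", time.perf_counter() - start)

start = time.perf_counter()
common_elements_set(a, b)
print("set-backed: ", time.perf_counter() - start)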

Most of the time, if you choose proper data structures and get your API design right, you can plug in the most efficient algorithm later, without affecting other parts of the system, if (and only if) your algorithm proves to really be a bottleneck. A good example is the evolution of filesystems and schedulers in the Linux kernel. Remember that the Intelligent Design school of software development is a myth.
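To make the "get the skeleton right, swap the algorithm later" idea concrete, here is a hedged sketch (the class and function names are hypothetical): callers depend only on a small interface, so the implementation behind it can be replaced after profiling.

# The rest of the system depends only on this small interface.
class Sorter:
    def sort(self, items):
        raise NotImplementedError

class InsertionSorter(Sorter):
    # Simple and good enough for small inputs; ship this first.
    def sort(self, items):
        result = []
        for item in items:
            i = len(result)
            while i > 0 and result[i - 1] > item:
                i -= 1
            result.insert(i, item)
        return result

class BuiltinSorter(Sorter):
    # Dropped in later, once profiling shows sorting is the bottleneck.
    def sort(self, items):
        return sorted(items)

def top_ten(sorter, data):
    # Callers never change when the sorter implementation does.
    return sorter.sort(data)[:10]

print(top_ten(InsertionSorter(), [5, 3, 8, 1]))
print(top_ten(BuiltinSorter(), [5, 3, 8, 1]))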

In my decade of experience, I have seen more performance problems due to a poor choice of data structures or unnecessary I/O than due to a poor selection of algorithms. Remember what Ken Thompson said: "When in doubt, use brute force." It is not important to get the right algorithm on the first try. Getting the skeleton right is more important. The individual algorithms can be changed later, after profiling.
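A minimal sketch of that profile-first workflow, using Python's built-in profiler (the workload here is a made-up stand-in):

import cProfile

def slow_dedup(values):
    # Deliberately naive: quadratic duplicate removal, brute force first.
    unique = []
    for v in values:
        if v not in unique:
            unique.append(v)
    return unique

def pipeline(n):
    data = [i % 500 for i in range(n)]
    return slow_dedup(data)

# The profile output points at slow_dedup as the hot spot; only then
# is it worth replacing it with, say, list(dict.fromkeys(data)).
cProfile.run("pipeline(20_000)", sort="cumulative")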

At the same time, this should not be misconstrued as an argument to use bubble sort.

The 10,000 hour rule

Doing well in online programming competitions is mostly the 10,000 hour rule in action. If you spend time in enough competitions and solve enough problems, you will quickly know which algorithm or programming technique (say dynamic programming, or a greedy approach) to employ when you see a problem.

Being an expert at online programming competitions does not guarantee that [s]he can be trusted with building or maintaining a large-scale system, one that has to run for a long time and whose code lives for years (say on the scale of filesystems, databases, etc.). In a competition, you solve a small problem at a microscopic level. In a large-scale system, the effects of your code are systemic. Remember the fdisk, sqlite, firefox fiasco?!

In addition to programming skills, there are other skills needed, such as build systems, dependency management (unless you are working on the kernel), SCM, versioning, library design, automated testing, continuous integration, etc. These skills cannot be assessed in online programming competitions.

Hardware

In my competition, I was asked to solve problems on a machine constrained to run a single thread. I do not know whether this is a limitation of hackerrank, or whether all online competitions enforce it.

If this is the practice in all online programming competitions, then it is a very bad idea. Although I can understand the infrastructure constraints of these sites, with the availability of multi-core machines these days, your program is all but guaranteed to run on multiple cores. You miss out on a slew of evaluation options if the candidate is forced to think only in terms of a single-threaded design.

With the arrival of cloud VMs and the elasticity of platforms like Google App Engine, it is acceptable to throw more CPUs or machines at a program on demand, without incurring a high cost. It is okay to make use of a simpler, cleaner algorithm that is more readable and maintenance friendly (than a complex, performant algorithm), if it will scale better with an increased number of CPUs or machines. The whole map-reduce model is built around similar logic.
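A small, hedged sketch of that idea, in the spirit of map-reduce (the word-count workload and the chunking scheme are my own example): a trivially simple per-chunk algorithm scales by adding workers, with no single-threaded cleverness required.

from collections import Counter
from multiprocessing import Pool

def count_words(chunk_of_lines):
    # "map" step: a deliberately simple per-chunk algorithm.
    counter = Counter()
    for line in chunk_of_lines:
        counter.update(line.split())
    return counter

def merge(counters):
    # "reduce" step: combine the per-chunk results.
    total = Counter()
    for c in counters:
        total.update(c)
    return total

if __name__ == "__main__":
    lines = ["the quick brown fox", "jumps over the lazy dog"] * 10_000
    chunks = [lines[i::4] for i in range(4)]  # one chunk per worker
    with Pool(processes=4) as pool:
        print(merge(pool.map(count_words, chunks)).most_common(3))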

I don't claim that concurrency/parallelism/cloud is a panacea for all performance problems, but it is too big a thing to ignore while assessing a programmer. A somewhat detailed explanation is on the Redis creator's blog (strongly recommended; do subscribe).

AHA Algorithms

I first heard of the concept of AHA algorithms in the excellent book Programming Pearls by Jon Bentley. These are the algorithms that make very complicated, seemingly impossible problems look trivial, once you know the algorithm. It is impossible to solve such problems within the span of a competition/interview if the candidate is not already aware of the algorithm and/or does not get that AHA moment. Levenshtein distance, bitmap algorithms, etc. fall in this category. It may not be wise to evaluate a person based on such problems.
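For reference, here is the textbook dynamic-programming formulation of Levenshtein distance in Python; the point is not the code itself but how unlikely you are to derive it under contest pressure if you have never seen it before.

def levenshtein(a, b):
    # previous[j] holds the edit distance between a[:i-1] and b[:j].
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            current.append(min(
                previous[j] + 1,               # delete ca
                current[j - 1] + 1,            # insert cb
                previous[j - 1] + (ca != cb),  # substitute (or match)
            ))
        previous = current
    return previous[len(b)]

assert levenshtein("kitten", "sitting") == 3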

Conclusion

Candidizing (is that a word?) long-term FOSS contributors may be an interesting alternative to hiring via online programming competitions or technical interviews. Both interviews and contests suffer from extrapolation errors once the person starts working on the job, especially on large-scale systems.

I see that a lot of the new-age companies are asking for a github profile in resumes, which is good. But I would prefer a more thorough analysis of long-standing projects, not merely personal pet projects that may not be very large-scale or popular. The fact that not every person works on a FOSS project in their free time is also a deterrent to such an approach.

Online programming competition websites could limit the number of participants in advance and give them an infrastructure that matches real-world development, instead of an input-output comparison engine with a 4-second timeout.

Having said all this, these online programming contests are a nice way to improve one's skills and to think faster. I will be participating in a few more to make myself fitter. There may be other programming challenges which are better and test all aspects of an engineer. I should write about my view after a year or so.

One more thing: Zenefits & Riptide I/O

In other news, my classmates' companies Zenefits and Riptide I/O are hiring. Have a look if you are interested in a job change (or even otherwise). They are at an excellent (imo) stage where they are still a startup in engineering culture, but have enormous funding to work on the next level of products. It should be an exciting opportunity for any curious engineer. Zenefits works on web technologies and delivers its product as SaaS. I would have joined Zenefits if they had an office in Bangalore. Riptide IO works on IoT and has some high-profile customers.

Comments

I think competitive programming just shows that the candidate has the ability to understand complex concepts and the ability to practice and learn programming. They will probably learn the tricks of the trade once they become employees, and can then be efficient at work. It is very similar to degrees.
I read somewhere that even PhDs are mostly hired for showing that they can put up with that kind of work, rather than for the actual research work they did.
Vignesh Venkat said…
Whoa, by the time I finished reading this post, I could think of a counterpoint for pretty much every single point you made. I think this is the first time ever I'm in such disagreement with an opinion of yours. Many such arguments about "how to hire the best software engineer" come down to "there is no one best way - do what's best for what your company needs".

Anyway, here are some key points where I have a contrary opinion:

> It is okay to make use of a simpler, cleaner algorithm that is more readable and maintenance friendly (than a
> complex, performant algorithm),

Yes, absolutely. But at the same time, during the interview, you *must* evaluate whether the candidate is even capable of coming up with the complex, better-performing algorithm. It's one thing to use the simple algorithm *knowing* what other choices exist; it's totally another thing to use the simple algorithm because that's the only thing you can come up with.

> AHA Algorithms

Nothing much to disagree with there. But any company worth its name wouldn't encourage such questions during an interview.

> Re: Grading based on FOSS contributions and Github profile.

From my experience here (in the US), usually only new-grad candidates have an active Github profile. The signal-to-noise ratio on such Github profiles is appallingly bad. Most candidates don't have any actual FOSS contributions on Github. They merely use it as a dumping ground for the projects that they did over the course of their schooling. Most of those are similar to what we did for lab classes in PSG (half-assed and not worthy of judging a person on). It is definitely a good sign if a person's Github profile shows that he has actually contributed significant code to others' repositories. But very, very few candidates have such profiles, which renders the Github profile useless as a concrete measure of any given candidate's chances of succeeding.

> There are various factors that decide the runtime performance, such as: Disk accesses, Caches, Scalable
> designs, Pluggable architectures, Points of Failures, etc.

Most of these are not what you learn out of school. You learn these from experience (by making mistakes and breaking things, of course!). So what other strong signal can you expect out of a new grad or a fairly inexperienced engineer other than algorithm skills?

Abstraction (e.g. using standard libraries) is generally good when doing actual work, but is it good when evaluating someone? Would you hire someone if they didn't know what the running time of inserting an element into java.util.ArrayList vs java.util.LinkedList was (even if they could design a large system in a decent way)? I know for sure that I wouldn't.

My conclusion would be this:
I don't think any company makes offers based on a candidate's performance in an online programming contest alone. But it is *definitely* a very good sign that the candidate has a good understanding of the fundamentals of computer science. Once you have that, you can say with some degree of confidence that the person can learn good design techniques as (s)he grows in his/her career. You can always evaluate design and other skills in a face-to-face interview that follows the programming contest. The bottom line is, companies want a concrete way to decide whom to bring to the face-to-face interviews from the millions of applications they receive, and as of now one of the best solutions out there is online programming contests.
Sankar said…
I can understand that argument for junior programmers, but I am not convinced when people interview for "architect"-level positions (in India) and ask only algorithm questions.
Sankar said…
> Many such arguments about "how to hire the best software
engineer" comes down to "there is no one best way - do what's best for
what your company needs".

I totally agree with this.

> during the interview, you *must* evaluate if the candidate is even capable of coming up with the
complex better performant algorithm.

Agreed. But the focus need not be exclusively on algorithms alone. Nor does it need to carry more weight than some other aspects (say data structures, which I feel are more important than algorithms).

> . The signal to noise ratio on such Github profiles are appallingly bad. Most candidates don't have any actual FOSS contributions on Github. They merely use it as a dumping ground for the projects that they did over the course of their schooling.

Agreed. Although participation in FOSS projects is a good measure, not many good candidates have a good FOSS profile. Thankfully, times are changing. Many new technologies by Google, Facebook, etc. are now developed in the open. In fact, .NET itself has gone open source.

> Most of these are not what you learn out of school.

Absolutely. As I mentioned, this post is mainly about companies that interview for "architect"-level positions and ask only algorithm questions. I heard from many people (in mail responses to this post) that this is not the case in the US. So I believe this is just an outlier in some Indian companies.

> now one of the best solutions out there is online programming contests. :-)
Agreed. Nothing is lost for the companies by trying this, as long as the other ways of recruiting are still open.

To give some context, a few of my ex-colleagues who are solid engineers interviewed for some architect-level positions [in a few Silicon Valley startups and in a big modern-day company in Bangalore/Chennai] and were rejected based on algorithm questions alone. No questions were asked on other aspects, and this bothered me.
Vignesh Venkat said…
For your last paragraph: if that's the case, then I agree too. I think I took your words a little out of context.
I don't think asking "only" algorithm questions for an architect position ever happens.
