CS50x Helpers » Who are we?

CS50x Helpers is a website built by students of the CS50 course to help other students who are running into problems.

Ask your query

What is CS50x? » Harvard's Most Popular Course


CS50x is Harvard College's introduction to the intellectual enterprises of computer science and the art of programming for majors and non-majors alike, with or without prior programming experience. An entry-level course taught by David J. Malan, CS50x teaches students how to think algorithmically and solve problems efficiently.
Topics include abstraction, algorithms, data structures, encapsulation, resource management, security, software engineering, and web development.
Languages include C, PHP, and JavaScript plus SQL, CSS, and HTML. Problem sets are inspired by the real-world domains of biology, cryptography, finance, forensics, and gaming.

Enroll Now

From Wikipedia: "Computational complexity theory is a branch of the theory of computation in theoretical computer science and mathematics that focuses on classifying computational problems according to their inherent difficulty, and relating those classes to each other. A computational problem is understood to be a task that is in principle amenable to being solved by a computer, which is equivalent to stating that the problem may be solved by mechanical application of mathematical steps, such as an algorithm."

The term analysis of algorithms is used to describe approaches to the study of the performance of algorithms. A key distinction between analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem.

Big O notation is used in Computer Science to describe the performance or complexity of an algorithm. Big O specifically describes the worst-case scenario, and can be used to describe the execution time required or the space used (e.g. in memory or on disk) by an algorithm.
Anyone who has read Programming Pearls or other computer science books without a grounding in mathematics will have hit a wall upon reaching chapters that mention O(N log N) or other seemingly cryptic notation. Hopefully this article will help you gain an understanding of the basics of Big O and logarithms.
A nice guide to Big O notation is provided by Rob Bell in his article A beginner's guide to Big O notation
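To make this concrete, here is a small C sketch (an illustration of the idea, not code from the course): linear_search does at most n comparisons in the worst case, so we describe it as O(n), while has_duplicate compares every pair of elements, roughly n*(n-1)/2 comparisons, so we describe it as O(n^2).

    #include <stdbool.h>
    #include <stdio.h>

    // Worst case: the target is absent, so the loop runs n times -> O(n).
    bool linear_search(const int *a, int n, int target)
    {
        for (int i = 0; i < n; i++)
        {
            if (a[i] == target)
            {
                return true;
            }
        }
        return false;
    }

    // Worst case: no duplicates, so every pair is compared,
    // about n*(n-1)/2 comparisons -> O(n^2).
    bool has_duplicate(const int *a, int n)
    {
        for (int i = 0; i < n; i++)
        {
            for (int j = i + 1; j < n; j++)
            {
                if (a[i] == a[j])
                {
                    return true;
                }
            }
        }
        return false;
    }

    int main(void)
    {
        int a[] = {3, 1, 4, 1, 5};
        printf("%d %d\n", linear_search(a, 5, 9), has_duplicate(a, 5));
    }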

The result of the analysis of an algorithm is usually a formula giving the amount of resources the algorithm needs, measured in seconds, memory accesses, comparisons, or some other metric.

Upper bounds: Big-O

When comparing the running times of two algorithms, the lower order terms are unimportant when the higher order terms are different. Also unimportant are the constant coefficients of higher order terms; an algorithm that takes a time of 100n^2 will still be faster than an algorithm that takes n^3 for any value of n larger than 100. Since we're interested in the asymptotic behavior of the growth of the function, the constant factor can be ignored. The "big-Oh" notation tells us that a certain function will never exceed another, simpler function beyond a constant multiple and for large enough values of n. Big-Oh gives us a formal way of expressing asymptotic upper bounds, a way of bounding from above the growth of a function.
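To see why the constant coefficient 100 eventually stops mattering, this tiny sketch (purely illustrative) prints 100n^2 and n^3 for a few values of n around the crossover point n = 100; beyond that point the cubic term dominates.

    #include <stdio.h>

    int main(void)
    {
        // Compare 100*n^2 against n^3 near the crossover point n = 100.
        for (long n = 90; n <= 110; n += 5)
        {
            long quadratic = 100 * n * n; // 100n^2
            long cubic = n * n * n;       // n^3
            printf("n = %3ld: 100n^2 = %10ld, n^3 = %10ld\n", n, quadratic, cubic);
        }
    }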

Lower bounds: Omega

Another way of grouping functions, like big-Oh, is to give an asymptotic lower bound. Given a complicated function f, we find a simple function g that, within a constant multiple and for large enough n, bounds f from below. This gives us a somewhat different family of functions.

Tight bounds: Theta

Neither big-Oh nor Omega is completely satisfying; we would like a tight bound on how quickly our function grows. To say it doesn't grow any faster than something doesn't tell us how slowly it grows, and vice versa. So we need something that gives us a tighter bound; something that bounds a function from both above and below. We can combine big-Oh and Omega to give us a new set of functions, Theta. (source: Asymptotic notation)
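For reference, the usual textbook definitions of these three bounds (where c is a positive constant and n_0 is a threshold beyond which the inequality must hold) can be written as:

    f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 \ \text{such that}\ 0 \le f(n) \le c \cdot g(n) \ \text{for all}\ n \ge n_0
    f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 \ \text{such that}\ f(n) \ge c \cdot g(n) \ge 0 \ \text{for all}\ n \ge n_0
    f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and}\ f(n) = \Omega(g(n))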

In its simplest form, a logarithm answers the question: "How many of one number do we multiply to get another number?"

Example: How many 2s do we multiply to get 8?

Answer: 2 × 2 × 2 = 8, so we needed to multiply 3 of the 2s to get 8

So the logarithm is 3

We write "the number of 2s we need to multiply to get 8 is 3" as:

log2(8) = 3

The number we are multiplying is called the "base", so we can say:

"the logarithm of 8 with base 2 is 3"
or "log base 2 of 8 is 3"
or "the base-2 log of 8 is 3"

Notice we are dealing with three numbers:

the base: the number we are multiplying (a "2" in the example above)
how many times we use it in the multiplication (3 times, which is the logarithm)
the number we want to get (an "8")

A basic explanation can be found here.
A more detailed and rigorous description is here.
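Logarithms matter in algorithm analysis because they count how many times a quantity can be halved before it reaches 1. As a rough sketch (not course code), this C function computes floor(log2(n)) by repeated division, which is also the worst-case number of steps a binary search over n sorted items takes:

    #include <stdio.h>

    // Count how many times n can be halved before reaching 1: floor(log2(n)).
    int log2_floor(unsigned int n)
    {
        int count = 0;
        while (n > 1)
        {
            n /= 2;
            count++;
        }
        return count;
    }

    int main(void)
    {
        printf("log2(8)    = %d\n", log2_floor(8));    // 3, as in the example above
        printf("log2(1024) = %d\n", log2_floor(1024)); // 10
    }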

These are just words. Can you show me a table where these values are compared?
Sure! Note that the natural logarithm has been used for the example.

n    | log(n) | n    | n log(n) | n^2     | n^2 log(n) | n^3        | e^n        | n^n         | n!
1    | 0.0    | 1    | 0        | 1       | 0          | 1          | 2.7        | 1           | 1
5    | 1.6    | 5    | 8        | 25      | 40         | 125        | 148.4      | 3125        | 120
10   | 2.3    | 10   | 23       | 100     | 230        | 10000      | 22026.5    | 1.0x10^10   | 3.6x10^6
50   | 3.9    | 50   | 195      | 2500    | 9750       | 125000     | 5.2x10^21  | 8.9x10^84   | 3.0x10^64
100  | 4.6    | 100  | 460      | 10000   | 46000      | 1000000    | 2.7x10^43  | 1.0x10^200  | 9.3x10^157
200  | 5.3    | 200  | 1060     | 40000   | 212000     | 8000000    | 7.2x10^86  | 1.6x10^460  | 7.9x10^374
500  | 6.2    | 500  | 3100     | 250000  | 1550000    | 125000000  | 1.4x10^217 | 3.0x10^1349 | 1.2x10^1134
1000 | 6.9    | 1000 | 6900     | 1000000 | 6900000    | 1000000000 | 2.0x10^434 | 1.0x10^3000 | 4.0x10^2567
In other words, suppose that we have three distinct algorithms that perform the same task (let's say a sort, for simplicity) with O(n log(n)), O(n^2), and O(n^3) complexity respectively, and that each basic operation takes one millisecond. For an input of size n = 100, the first algorithm would need 460 milliseconds to complete, the second would need 10 seconds, and the third would need 16 minutes and 40 seconds.
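That arithmetic comes straight from the n = 100 row of the table, under the purely illustrative assumption of one millisecond per basic operation; a quick C sketch of the calculation:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double n = 100;
        double ms_per_op = 1.0; // assumed (illustrative) cost of one basic operation, in milliseconds

        double t_nlogn = n * log(n) * ms_per_op; // ~460 ms
        double t_n2 = n * n * ms_per_op;         // 10,000 ms = 10 s
        double t_n3 = n * n * n * ms_per_op;     // 1,000,000 ms = 16 min 40 s

        printf("n log n: %.0f ms\n", t_nlogn);
        printf("n^2:     %.0f ms (%.0f s)\n", t_n2, t_n2 / 1000);
        printf("n^3:     %.0f ms (%.0f s)\n", t_n3, t_n3 / 1000);
    }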

Latest CS50 News
We are working tirelessly to gather the most knowledgeable members of the community to help CS50 students who are having difficulty, live and online. The service will be available very soon. We look forward to your feedback.

CS50 is offered as CS50x through edX, a not-for-profit enterprise of its founding partners Harvard University and the Massachusetts Institute of Technology that features learning designed specifically for interactive study via the web. In other words, even if you're not a student at Harvard, you may take CS50 by registering for CS50x.

You may take CS50x at your own pace, starting and finishing anytime in 2015.

If you earn a passing grade on 9 problem sets, numbered from 0 to 8, and a final project in CS50x, you will receive an honor code certificate from HarvardX, which is the Harvard branch of edX.

The CS50 Appliance is a "virtual machine" that allows students easy access to the tools needed for the course. It is the equivalent of running a complete computer system inside your current one. Instructions for installing the Appliance can be found here.

No. Simply follow the instructions above to download and install the new appliance.

While not required, using the Appliance is highly recommended to make setup for the course easy and hassle-free.

Yes, but be aware that some problem sets are different this year.

Take some deep breaths and reach out to your classmates via CS50x Helpers, Facebook, Reddit, CS50 Discuss, or Twitter. Many people are willing to help you, and we don't want you to give up.

Latest CS50 News
Request form for subscription to the newsletter coming soon.


“Demanding, but definitely doable. Social, but educational. A focused topic, but broadly applicable skills. CS50 is the quintessential Harvard course.”

From the CS50 website

“CS50 is exceptional for its size, its resources and the cult of personality around its charismatic leader. It is more than just a class at Harvard; it is a cultural touchstone, a lifestyle, a spectacle. This is CS50, and it’s here to stay.”

From the Harvard Crimson