TFM:Computer Science Language


Computer Science Language

Thomas Given-Wilson

Not another language?!

If you have been around computers for any period of time you have no doubt run into a huge variety of languages from Java (how much class do you need?), to HTML (you know the L was for language already right?), to EN:US (I'm sure "humour" is the correct spelling...). So why would you want to take the time to learn some computer science language you have never heard of before? The short answer is that the language described in this chapter underlies all those programming languages, markup languages and even the business language around computers. Not only that, but it provides you with the tools to make learning all the others a lot easier!

This chapter lays out some of the foundational concepts of computer science as related to modern computing. As much as possible the "language" introduced will be a balance between formally correct (in the mathematical and logical sense) and the common usage in IT fields. Each of the sections is intended to be stand-alone, so you can skip any that seem too bizarre, or read in any order. That said, the topics are laid out in an order the author thinks will best build up to a broader understanding. The goal of this chapter is to provide you with the language to quickly pick up new computing-related knowledge and to recognise the similarities between the "different" parts of computing.

Computation and computability

Computing is what computers do, but defining exactly what they can (or can't) do has proven very hard. In the 1930s some very smart guys tried to formalise this and came up with a few ideas. The most well-known was Alan Turing and his model of computation: the Turing machine [turing36]. Ironically, his definition of computation is anything a human could do by following simple instructions. So computers can do anything a really stupid human, or perhaps a trained monkey, can do!

Luckily Turing wasn't the only person looking into the problem; two other well known (produced something useful) mathematicians were Stephen Kleene and Alonzo Church. Kleene believed that all computation was exactly what could be determined by recursive functions (if you don't know what a recursive function is then hold on for a little longer -- it's coming) applied to natural numbers (the numbers 0, 1, 2, 3, ...). Church had a similar approach using functions (that could be recursive) but decided numbers were too concrete and so abstracted everything away with lambda (λ) abstractions.

In a bid to create harmony between these models Turing showed that his Turing machines could express all of Kleene's recursive functions. Church also showed that his λ-calculus could express all the recursive functions. Then Gödel stepped in and showed that any program could be turned into a natural number through Gödelisation. Thus any program can be turned into a number that can be the input to any program: programs can run on programs.

Gödel went even further and showed that no consistent logical system can be complete, using his famous incompleteness theorem. In the meantime Turing and Church had shown that Turing machines could be emulated in λ-calculus and vice-versa. With Schönfinkel's combinatory logic being equivalent to λ-calculus and also incomplete (thanks to Gödel), relations soured, resulting in WWII.

After the war ended Kleene came out with the Church-Turing Thesis, claiming that every effectively calculable function is general recursive. Or in programmers' terms: computation is defined by recursive functions, and both Turing Machines and λ-calculus can compute these functions. With the horrific memories of previous disagreements fresh in the minds of mathematicians (and budding computer scientists), this was not disputed.

So what do all these special terms actually mean? The following is a short description of the key terms, with enough detail to get you started. (If you just want to program or talk to programmers and computer scientists, TFM may be all you need!)

Natural numbers These are the whole numbers starting from 0 and increasing forever. (Some heretics believe they start from 1, but they have obviously never programmed in C, and so shall be ignored.)

Function A function is a mapping from an "input" to an "output". A function is considered to have a domain (all the valid inputs) and a range (all the valid outputs). A function on the natural numbers usually refers to a function whose domain and range are (some of) the natural numbers. A partial function is a function that has no defined output for some inputs.

Recursion Something is recursive when parts of its definition refer back to itself. A recursive function is a function that uses its own definition to determine some of its behaviour. For example, the factorial function can be defined recursively:


fact(x) = \left\{
\begin{array}{ll}
1 & x < 2\\
x * fact(x - 1) & \mbox{otherwise}
\end{array}
\right. \; .

Here fact is a function that takes a natural number x as input. If x is less than 2 then the output is 1. Otherwise fact is recursive, multiplying x by the output of fact(x − 1).
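
For the programmers, the same definition translates directly into code. A minimal sketch in Python (the name fact mirrors the maths above):

    # A sketch of the recursive factorial definition above.
    def fact(x):
        if x < 2:
            return 1                # base case: x < 2 outputs 1
        return x * fact(x - 1)      # recursive case: uses fact's own definition

    print(fact(5))                  # 120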

Gödelisation A process that converts something (usually syntactically) into a natural number. Or more sneakily, a function from some syntactic input with an output in the range of natural numbers.
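
Any injective encoding of syntax into numbers will do. The classic construction uses prime powers, but as a hedged sketch, a lazy Python version (the name godelise is made up here) could be:

    # A sketch of one possible Gödelisation: map any string of
    # program text to a unique natural number via its bytes.
    def godelise(text):
        return int.from_bytes(text.encode('utf-8'), 'big')

    print(godelise('fact'))   # some (large) natural number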

Computable Something is computable (also Turing computable, effectively computable, effectively calculable) if it can be expressed as a recursive function on natural numbers. Note that Gödelisation allows for input to be converted into a natural number, so the limitation to natural numbers is usually not considered significant by programmers (but may be of great significance to theorists).

Turing Machine A model of computation created by Alan Turing based upon a collection of rules for manipulating symbols on a tape. This model has been the basis for imperative programming languages (e.g. C, Pascal) and through them most object-oriented languages (e.g. C#, Java).

Turing complete Traditionally and formally, a system is Turing complete if it can compute anything that can be computed on a Turing Machine. Corollary: something is Turing complete if it can compute any recursive function on natural numbers. More recently Turing complete refers to whether a programming language can do arbitrary computation. A programming language is not Turing complete if it can only compute some limited set of programs. For example, most Structured Query Languages (SQL) are NOT Turing complete as they do not allow recursion.

λ-calculus Another model of computation created by Alonzo Church that represents everything as functions. λ-calculus has formed the basis for functional programming languages and related paradigms such as declarative and logic programming.

Termination Termination is the property (of a function and/or system) that for all inputs it will eventually produce an output. Termination is a very useful property in mathematics and programming as it guarantees the program can never "get stuck" and fail to produce an output. Unfortunately Turing complete systems do not have termination as a property.

The halting problem The halting problem is a well known problem in computer science theory: to solve the halting problem is to have a program that, given another program, can always determine if the other program will terminate. The halting problem has been proven unsolvable, a result closely related to Gödel's incompleteness theorem. (There is a proof of the halting problem for Turing Machines below, although it uses some formalism not introduced here!)

Theorem: There is no Turing Machine H that can determine whether an arbitrary Turing Machine M will halt.

Proof: by contradiction, using Cantor diagonalisation. Suppose such an H exists. As Turing Machines are represented syntactically on a Universal Turing Machine (UTM) they can be enumerated: label these Turing Machines M_0, M_1, M_2, \ldots. The result of each of these TMs on the natural numbers is as follows:

\begin{array}{c|cccc}
 & 0 & 1 & 2 & \ldots\\
\hline
M_0 & 6 & ? & 2 & \ldots\\
M_1 & 3 & 5 & ? & \ldots\\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{array}

where ? denotes that the TM did not halt. Define the result of the nth TM on the ith input as M_n(i). Define the halting solving machine H with output:

H(i) = \left\{
\begin{array}{ll}
0 & \mbox{if } M_i(i) \mbox{ is } ?\\
M_i(i) + 1 & \mbox{if } M_i(i) \mbox{ halts}
\end{array}
\right.

Now this machine H must be a member of the enumeration of all TMs, i.e. H = M_x for some x. However, by definition H always gives a different answer from M_x(x). Contradiction!
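
The same contradiction can be sketched for programmers. Suppose (falsely!) that someone hands us a function halts(program, input) that always answers correctly; halts is hypothetical and, as the theorem shows, cannot actually be written. A Python sketch:

    # Sketch only: halts() is the assumed (impossible) halting oracle.
    def paradox(program):
        if halts(program, program):   # if program halts when run on itself...
            while True:               # ...then loop forever
                pass
        return                        # ...otherwise halt immediately

    # Running paradox on itself: halts(paradox, paradox) can be neither
    # True nor False without contradicting paradox's own behaviour.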

Comments

The aim of this section is to provide a brief, amusing and mostly correct overview of the problem of defining computation and computability. The descriptions of the terms above are a little sloppy and should not be used in a formal setting, but hopefully they have provided you with the knowledge and language to discuss this topic knowledgeably!

TODO: Above this comment is the first (rough) draft; below are notes on what to write.

Sets and structures

One of the inherent aspects of programming is dealing with "things". This leads to a core question in programming: How do I keep track of my things? Due to the abstract nature of computing and programs it is quite possible to have the same thing appear many times, in many places... in fact you can even put something inside itself in a computer program (try doing that with your desk)!

Keeping track of your things is complex and modern programming languages have a wide variety of data structures to help. The problem is knowing which is the right one for your things. Rather than attempt to survey all the possible structures in all languages and all paradigms, the remainder of this section will build up from simple to more complex structures from a theoretical perspective.

The usual starting point is with sets. Sets are popular in computer science both for their simplicity as a basis for data structures, and because you can do a lot with them (there is a whole field of mathematics and logic devoted to set theory).

The remainder of this section introduces sets and data structures. The first subsection introduces sets and their basic properties. The next introduces basic set operations used in a lot of computer science and algorithm theory. The following subsection then discusses some common data structures and their relation to sets.

Introduction to sets

Sets are usually denoted as follows:

{2,3,`a',"Hello World!"}

The curly braces { and } define the start and end respectively, with commas separating the entries. In this example the set contains the numbers 2 and 3, the character `a' and the string "Hello World!". Let us call this set S.

One of the useful properties of sets is being able to determine if something is in a set. For example we could ask if 3 is in S, denoted as


3\in S \; .

This expression would be true. Similarly, we could ask if 3 is not in S with


3\notin S

and the answer would be false.

Another property of a set that can be determined is its size. For example the size of S would be written as


|S| \; .

This concept of size is very primitive: each element of the set has a size of 1, so the following is true

| S | = 4

or the size of S is 4.
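
For the programmers: most modern languages provide sets directly. A minimal sketch in Python (s stands in for the set S above):

    s = {2, 3, 'a', "Hello World!"}
    print(3 in s)        # membership, 3 in S: True
    print(3 not in s)    # non-membership, 3 not in S: False
    print(len(s))        # size |S|: 4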

Although these properties alone allow a great deal of computation and logic (with some additional logical constructs), from the data structure perspective the focus is usually on set operators.[1]

Set operators

This section introduces the common operators upon sets from a purely theoretical perspective. Implementing them usually requires consideration of the process or algorithm involved and can be reviewed in light of the algorithms section.

Often it is useful to know if two sets are equal. As sets are not dependent upon the order of their elements, two sets are equal if they contain all of the same elements. For example the following is true

{3,4,5} = {4,5,3}

as both sets contain 3,4 and 5.

All of the set operations can be expressed using logic; for example two sets S1 and S2 are equal if the following is true


\forall x\in S_1,x \in S_2\wedge\forall y\in S_2,y\in S_1 \; .

That is, all the elements of S1 are in S2 and vice-versa.
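
In Python, set equality is the built-in ==; a sketch that also spells out the logical definition above:

    s1, s2 = {3, 4, 5}, {4, 5, 3}
    print(s1 == s2)   # True: order does not matter
    # the same test, written out as the logical definition:
    print(all(x in s2 for x in s1) and all(y in s1 for y in s2))   # True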

Frequently it is necessary to combine two sets to create a new set with the elements of both. This is the union of two sets; for example the union of S1 and S2 is denoted as


S_1 \cup S_2

let us call this set T.

The following are some examples of unions,


\begin{array}{rcl}
\{1,2\}\cup \{'a'\} &=& \{1,2,'a'\}\\
\{1,2\}\cup \{3,4\} &=& \{4,3,2,1\}\\
\{1,2\}\cup \{1,2\} &=& \{1,2\} \; .
\end{array}

Note that sets do not consider the order of elements (as in the second line) and do not repeat elements (as in the third line).

The logical expression for a set T that is the union of the sets S1 and S2 is


\forall x\in T, x\in S_1 \vee x\in S_2 \; .

Every element of T is either in S1 or S2.
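
A Python sketch of the same unions (| is Python's union operator):

    print({1, 2} | {'a'})    # {1, 2, 'a'}
    print({1, 2} | {3, 4})   # {1, 2, 3, 4}: element order is irrelevant
    print({1, 2} | {1, 2})   # {1, 2}: no repeated elements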

Another common operation is to find all the elements that are common to two sets; this is called the intersection. Some examples of intersections, denoted \cap, are


\begin{array}{rcll}
\{1,2,3\}\cap \{1,3,5\} &=& \{1,3\}\\
\{1,2,3\}\cap \{'a','b'\} &=& \{\}\\
\{'a','b'\}\cap\{'a','b'\} &=& \{'a','b'\} \; .
\end{array}

The intersection is the elements shared by both sets, and the empty set {} if they have no elements in common.

The logical expression for the intersection T of two sets S1 and S2 is


\forall x\in T, x\in S_1 \wedge x\in S_2 \; .
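
The matching Python sketch (& is Python's intersection operator):

    print({1, 2, 3} & {1, 3, 5})    # {1, 3}
    print({1, 2, 3} & {'a', 'b'})   # set(): Python's notation for the empty set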

So far all of the set operations have been symmetrical in nature, that is


\begin{array}{rcl}
S_1 = S_2 &\mbox{is the same as}& S_2 = S_1\\
S_1 \cup S_2 &\mbox{is the same as}& S_2 \cup S_1\\
S_1 \cap S_2 &\mbox{is the same as}& S_2 \cap S_1 \; .
\end{array}

The following set operations are asymmetric.

Often it is helpful to remove all the elements that are in one set from another. Removing the elements of one set from another is called the complement (also known as the set difference). Some examples are


\begin{array}{rcll}
\{1,2,3,4,5\}\backslash\{2,4,6\} &=& \{1,3,5\}\\
\{1,2,3\}\backslash\{'a','b'\} &=& \{1,2,3\}\\
\{1,2\}\backslash\{1,2,3,4,5\} &=& \{\} \; .
\end{array}

The general form is S_1\backslash S_2 = T where T is the complement of S2 in S1. The logic for this is


\forall x\in T, x\in S_1\wedge x\notin S_2 \; .
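
A Python sketch of the same complements (- is Python's set difference operator):

    print({1, 2, 3, 4, 5} - {2, 4, 6})   # {1, 3, 5}
    print({1, 2, 3} - {'a', 'b'})        # {1, 2, 3}
    print({1, 2} - {1, 2, 3, 4, 5})      # set(): the empty set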

The final operation presented is the cross product (also known as the Cartesian product). This is the combination of two sets where each element from the first set is paired with each element of the second set. Denoted as S_1\times S_2, it is perhaps best expressed through examples


\begin{array}{rcl}
\{1,2\}\times \{'a','b'\} &=& \{(1,'a'),(1,'b'),(2,'a'),(2,'b')\}\\
\{1,2,3\}\times \{'number'\} &=& \{(1,'number'),(2,'number'),(3,'number')\} \; .
\end{array}

Obviously the cross products of sets can become very large very quickly (|S_1\times S_2| = |S_1|\times |S_2|); however, this forms the basis of many proofs in computer science as well as properties of some languages such as SQL.

The logical expression for T=S_1\times S_2 is


\forall (x,y)\in T, x\in S_1 \wedge y\in S_2 \; .
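
In Python the cross product isn't a set operator, but the standard library's itertools.product does the pairing; a sketch:

    from itertools import product

    print(set(product({1, 2}, {'a', 'b'})))
    # {(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')}
    print(len(set(product({1, 2}, {'a', 'b'}))))   # 4 = |S1| x |S2|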

Although not strictly an operation, it is useful to have the concept of a subset. A subset, denoted \subseteq, is a set containing only elements of another set, but not necessarily all of them. For example


\begin{array}{rcl}
\{1\} &\subseteq& \{1,2,3\}\\
\{1,2,3\} &\subseteq& \{1,2,3\}\\
\{\} &\subseteq& \{1,2,3\}
\end{array}

are all true.
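
Python compares sets with <= for subset; a sketch of the three examples:

    print({1} <= {1, 2, 3})           # True
    print({1, 2, 3} <= {1, 2, 3})     # True: a set is a subset of itself
    print(set() <= {1, 2, 3})         # True: the empty set is a subset of any set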

TODO: Stub/comment, maybe here, maybe algorithms

Dijkstra commented that programming is a unique profession in which a gigantic ratio, such as 10^9, "has to be bridged by a single technology" [cruelty]. The typical approach has been to abstract away the details: find solutions or representations that do not require knowledge of the actual numbers.

Data structures

TODO

Algorithms and programs

For as long as there have been humans there have been problems that plagued them and made their lives difficult. Solving these problems has been one of the endless sources of innovation and experimentation for the human race. Of course the solutions vary in effectiveness - apparently sacrificing animals to please the gods is no longer considered a common solution. These days humans tend to focus on solutions that are a little more reliable; they want a solution they can use and know will work: an algorithm.

Algorithms gained their name due to poor translations from Arabic to Latin with some help from the Greeks. These days an algorithm is considered a process to solve a problem and also the basis of most programming. What sets an algorithm apart from a program is that an algorithm will eventually terminate and result in a solution (or at least an answer). Programs come with no such guarantees and may run forever. This distinction is often lost on programmers, as they consider an algorithm to be something a program does, and frequently program "algorithms" that do not meet the formal properties of an algorithm, usually due to programmer error, like not incrementing the counter in a loop (see the sketch below).
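
A hedged Python sketch of that classic error: a loop that was meant to be an algorithm but, thanks to a missing increment, fails the termination requirement:

    # Intended to print 0..9 and stop; the missing i += 1 means it
    # never terminates, so it is a program but not an algorithm.
    i = 0
    while i < 10:
        print(i)   # bug: forgot i += 1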

Algorithm Formally an algorithm is a process to achieve an outcome. Knuth [tAoCPvol1] identifies algorithms as having five important features:

  1. Finiteness: An algorithm will always terminate.
  2. Definiteness: Each step in the algorithm is clearly defined.
  3. Input: An algorithm has zero or more specified inputs.
  4. Output: An algorithm has one or more outputs.
  5. Effectiveness: An algorithm must be easy enough that a trained monkey can do it.

Of course not all algorithms are created equal; in fact there are huge areas of study devoted to them (even some subjects at UTS in 2012, such as Data Mining Algorithms, Data Structures and Algorithms, and Advanced Data Mining Algorithms). Apart from the simple monkey test ("Can a trained monkey follow this algorithm?"[2]), a more common concern is how efficient an algorithm is.

This being computers, such a simple question is non-trivial to answer. Does "efficient" mean: a small number of steps (it fits in three lines of Haskell), small memory usage (I can run this on my phone), small runtime (see how I save 3 machine instructions by using the bitwise OR)... or something else entirely? Seeing as many of the answers to these questions depend on the programming language you use to implement the algorithm, and of course the machine it runs on, a more general way of evaluating algorithms was found: big O notation.

Big O notation is a general way to describe the complexity of an algorithm, or how much work is done based upon the input. This makes it easy to compare how two algorithms that map the same inputs to the same outputs should perform against each other.
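
As a hedged sketch of what "work based upon the input" means, compare two ways of finding a value in a sorted list. Both map the same inputs to the same outputs, but linear search does O(n) comparisons while binary search does O(log n):

    # Sketch: same input/output behaviour, different complexity.
    def linear_search(items, target):       # O(n): may check every element
        for i, item in enumerate(items):
            if item == target:
                return i
        return -1

    def binary_search(items, target):       # O(log n): halves the range each step
        lo, hi = 0, len(items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if items[mid] == target:
                return mid
            elif items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

On a sorted list of a million items, linear search may need a million comparisons; binary search needs about twenty.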

Author's note: I lost motivation here; algorithms kind of drifted into big O notation and now I need a break. BBL.

Define an algorithm: procedure for solving a problem - guaranteed to terminate! Algorithms have complexity... a simple example, a complex example. Programs in a more general sense.

Big O(h no)

A formalism for expressing how easy/hard something is. Why you should be aware of the basics and what this means (roughly).

NP-hard, now you know why these problems suck to code.

Sets

How we store things.

sets \rightarrow multisets,\ graphs
multisets \rightarrow sequences,\ lists
graphs \rightarrow trees
trees \rightarrow binary\ trees,\ rose\ trees
\ldots

Graphs

How they underlie everything in computer science: functional programs, memory management, class hierarchies, networks ...


  1. The operators can (probably) all be defined using the properties above and logical constructs; however, this is outside the scope of TFM.
  2. Also known as the first year programmer test.