Concept Questions 160112
note: the questions are addressed in the order of the page numbers where
the term or phrase they are concerned with appears (not always the page
number the original asker cited).
- pp 5. translation is repeatedly used (first occurrence on page 5),
what are they referring to?
- Actually, the morpheme `translat' appears 299 times in the textbook.
One of the advantages of the pdf is that it is searchable.
- There were 18 usages in chapter 1 and 23 usages in chapter 2.
- Like many technical terms in computer science, translator has
a `colloquial' (common) usage and a `technical' (computer science specific)
usage. Here the intent is that you start by thinking of it colloquially,
but as you read more, you start to see how programming language people
use the term when they want to be technical. When they are not being
technical, they may be using it colloquially, so you could say the term
has to be disambiguated by determining whether the context is colloquial
or technical.
- As we slide through the usages, some carry more meaning than others.
page 4 (first usage in content of textbook, third usage in textbook),
we get the notion of translating from a language to machine code.
An example of something that does this is the gcc compiler. An
example of something that doesn't always do this but would still be
something intended would be the java compiler. Most people would also
think of the ruby compiler as doing translation, but its target is definitely
not machine code; rather, it is a higher level target like java's jvm
bytecode. I would
even view irb as doing translation although it is really just building
an internal data structure that it processes itself.
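To make the Ruby case concrete, the reference interpreter (MRI) will show
you its own translation step: source text is compiled to YARV bytecode,
not machine code. A minimal sketch (the RubyVM API is specific to MRI):

```ruby
# MRI translates Ruby source into YARV bytecode, an internal instruction
# sequence, rather than machine code.  We can ask to see the result:
def bytecode_for(source)
  RubyVM::InstructionSequence.compile(source).disasm
end

# Prints an instruction listing (putobject, opt_plus, leave, ...).
puts bytecode_for("1 + 2")
```

So even though `ruby' feels like a pure interpreter, a translation to a
lower (but still not machine-level) form happens first.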
- pp 9. What is the difference between explicit and implicit?
- See discussion near end of
Writing That Paragraph 01
- More generally, implicit and explicit are colloquial words defined
in any dictionary rather than technical terms.
- The google dictionary gives us for implicit:
implied though not plainly expressed.
"comments seen as implicit criticism of the policies"
synonyms: implied, hinted at, suggested, insinuated; unspoken, unexpressed, undeclared, unstated, tacit, unacknowledged, taken for granted; inherent, latent, underlying, inbuilt, incorporated; understood, inferred, deducible
- and for explicit: stated clearly and in detail, leaving no room for confusion or doubt.
"the speaker's intentions were not made explicit"
synonyms: clear, plain, straightforward, crystal clear, easily understandable; precise, exact, specific, unequivocal, unambiguous; detailed, comprehensive, exhaustive
- comparing these definitions, you might think that implicit meant a piece
of writing was poorly expressed and explicit meant that it was clearly
expressed. But rather than the writing itself, it is referring to the
intent of the speaker. So in both cases, we could be talking about
a well written piece of prose. But every statement carries with it a cloud
of implications, similarities to other statements, who the speaker is,
who the audience is, and how their shared culture governs their communications.
- See the notion of a `speech act' (it has a wikipedia entry)
- pp 11. What makes a language standardized?
- We skipped Section 1.4 in the text.
- The first five paragraphs of 1.4 seem to lay out the process.
What about it was unclear?
- pp 12. How do we prevent divergence in a computer language?
- We skipped Section 1.4 in the text.
- It is a natural process as described in the first paragraph
of 1.4.1. You can't prevent it any more than you can prevent people
making errors in their programs.
- You can slow down the rate of divergence if you make the
language little used or used primarily by people who all have
the same goals. Or, of course, if you make it proprietary so that
other people can't make their own versions.
- pp 18. If a program is a model of some process in the real world and we
have the example of the auditing program (pp 19), what are the `processes'?
- This is a nice example of the problems caused when you change terms in
the middle of a discussion thinking both are just different ways of saying
the same thing.
- The author is using `process' in the standard English way (as opposed
to as a technical term in Computer Science relating to parallel programming).
- The `Google' definition for `process' that they post on the search
page is `a series of actions or steps taken ...'. So the `actions' listed
in the textbook for that example are the `processes' referred to.
- pp 18. What is the difference between a high-level language and a low-level language?
- It depends on your viewpoint as to where you want to draw the distinction.
- The author would seem to think that most languages you know are high-level.
For example, page 18 says C is not low level because it has control
structures. The obvious exception is assembly language (see page 21, Exhibit 2.4).
- Note that Exhibit 2.4 also refers to Forth as being low-level. So,
what is Forth? The author will mention it from time to time, but the
easiest way to get a sense of it is to call up the Forth wikipedia
entry. If you look at the RC4 example near the
bottom of the wikipedia page, you see that Forth does have some sort of
control structures WHILE, LOOP, REPEAT, and function calls. It looks
like its main weakness is not having user defined types, but whether
or not this is the author's reasoning, I wouldn't know.
- pp 18. Can high level and low level be merged?
- From the author's usage, I would say no, because if you add high
level features to assembly language, it stops being assembly language,
which is pretty much the only low level language.
- If you raise your threshold high enough that C is a low level
language, then many people would say that C++ is an attempt to merge
high level concepts into it. Personally, I prefer to keep C separate
from the so called `high level concepts', and so would advocate using
Java rather than C++, relying on Java's native methods mechanism
(see the wikipedia entry at
https://en.wikipedia.org/wiki/Java_Native_Interface) to
incorporate C into parts of a Java application where low level issues
mattered. Of course, Java isn't the only higher level language that
lets you mix with C: Ruby, Scheme, Gnu Prolog, and Haskell all have
similar foreign function mechanisms.
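As a hedged sketch of what such mixing looks like from the high level
side, Ruby's standard Fiddle library can bind a C function directly
(shown here calling libc's strlen; this assumes a Unix-like system where
libc's symbols are visible in the running process):

```ruby
require "fiddle"

# Open the symbols already loaded into this process (including libc).
libc = Fiddle.dlopen(nil)

# Bind C's strlen(const char *) -> size_t as a callable Ruby object.
strlen = Fiddle::Function.new(libc["strlen"],
                              [Fiddle::TYPE_VOIDP],
                              Fiddle::TYPE_SIZE_T)

strlen.call("hello")   # => 5
```

The high level language handles the bookkeeping; the low level code does
the part where low level issues matter.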
- pp 18. What are the benefits of a builder's language over an
architect's language?
- See the last paragraph on pp 18 for one example of such.
- pp 25 claims this is the question that all of Section 2.3 is
answering. Interestingly enough, after they say this, they never use
the term ``builder's language'' again. However, you may recall that
they did earlier give C as an example of a builder's language, so
pretty much everything they say for or against C is for
or against builder's languages (at least as commonly implemented).
- pp 19. What is the construction architect?
- The `construction architect' is not a technical term in computer science.
Instead, it is part of the analogy that the author was developing.
- On page 18, we start by distinguishing the roles of a builder who
does construction from an architect who does design. This is trying to
explain the two types of programming languages identified at the bottom
of page 17 and top of page 18. Suddenly, at the bottom of page 19, the
analogy goes from talking about the program builder and the program
architect, to the `construction architect'. I think this is a mistake
and that instead of `construction architect' it should have said
`program builder'. This term is only used this once in the whole textbook.
- pp 21. Can we ever truly know a programmer's `semantic intent'?
- I think I would say that implicit here is an appeal to the `reasonable
person' construct (see its wikipedia entry), where what one is
interested in is what a reasonable programmer would mean to be doing
when they wrote this.
- alternatively, we could say that `semantic intent' is private
information and so only the programmer themself can confirm semantic
intent.
- pp 22. Exhibit 2.5. How does Pascal represent a sorted table?
- It doesn't. A Pascal programmer chooses a representation,
most likely an array of keys.
- It is also possible that a Pascal programmer might use
an ordered binary tree.
Of course Pascal also has no notion of an ordered binary tree, so really
the programmer is building something that is a collection of records
and pointers and we are just looking at it and saying that implicitly
it is an ordered binary tree.
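The records-and-pointers view can be sketched in Ruby (hypothetical
names; Pascal would use record types and explicit pointers, but the
shape is the same). Nothing in the language says `tree' -- we just agree
to read the links that way:

```ruby
# A record with two pointer-like fields; only our usage makes it a
# node of an ordered binary tree.
Node = Struct.new(:key, :left, :right)

def insert(node, key)
  return Node.new(key) if node.nil?
  if key < node.key
    node.left = insert(node.left, key)
  else
    node.right = insert(node.right, key)
  end
  node
end

# Walking left-root-right recovers the keys in sorted order, which is
# what makes the implicit claim "this is a sorted table" checkable.
def in_order(node, out = [])
  return out if node.nil?
  in_order(node.left, out)
  out << node.key
  in_order(node.right, out)
end

root = nil
[42, 7, 19, 3].each { |k| root = insert(root, k) }
in_order(root)   # => [3, 7, 19, 42]
```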
- pp 22. What is more important in a language, implicit or explicit?
- I don't know how `importance' figures in here.
- Explicit is better
for communicating to another human what the code is doing, but it takes
more time for the programmer to make everything explicit. Implicit
makes coding easier, but reading harder.
- Sometimes performance is a factor, but then it becomes a question
of what can the compiler figure out and what does the programmer have
to make explicit to guide the compiler to the best implementation.
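Ruby itself illustrates the reading/writing tradeoff in miniature (a
small sketch): the value of a method is its last expression unless you
say otherwise.

```ruby
# Implicit: the last expression's value is returned without ceremony.
def area_implicit(w, h)
  w * h
end

# Explicit: same meaning, but the writer spells the intent out.
def area_explicit(w, h)
  return w * h
end
```

Both behave identically; the explicit form costs a little typing and
buys a little readability.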
- pp 22. In giving examples of semantic basis, they seem to be using
syntactic constructs. How do they fit the definition of semantic basis?
- Each construct has two aspects, its syntactic aspect and its semantic
aspect. So here the reference is to the meaning of `type' rather than
to the form of `type', for example.
- pp 22. How does the translator work?
- It translates, see discussion of translation tagged to page 5 above.
- Usually, there would be a whole course on that called `Compiler Writing'.
However, there weren't enough people interested to sustain the course here.
In the textbook, we will get to some discussion of this. If you search
the text for references to `parsing', `program stack', `symbol table',
and of course `compiler' (274 references) and `interpreter' (42 references),
you will see that in the remaining material, there is some discussion
of what you would have to do to create your own language. While it is
unlikely that you will want to create Ruby 2, it is quite reasonable that
as you understand the usages of domain specific languages you will want to
incorporate them into your own applications and so how to do this is something
the course will speak to.
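As a toy flavor of what a translator's front end does (a sketch, not how
any production compiler is written): split the input into tokens, then
process them according to a grammar. Here the `translation' goes
straight to a value, the way irb's internal processing ultimately
yields one.

```ruby
# expr ::= term { "+" term } -- handles + at low precedence.
def eval_expr(tokens)
  value = eval_term(tokens)
  while tokens.first == "+"
    tokens.shift
    value += eval_term(tokens)
  end
  value
end

# term ::= number { "*" number } -- handles * at high precedence.
def eval_term(tokens)
  value = Integer(tokens.shift)
  while tokens.first == "*"
    tokens.shift
    value *= Integer(tokens.shift)
  end
  value
end

eval_expr("2 + 3 * 4".split)   # => 14, because * binds tighter than +
```

A real compiler would build a tree and emit code for another machine
rather than computing the answer itself, but parsing looks much the same.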
- pp 29, Exhibit 2.14 Does a meaningless operation affect the size
or speed of the program? For example, a meaningless operation that
caused an unnecessary memory allocation.
- In these days of fast cpus, memory often dominates runtime.
If something that could have fit on one chunk of memory requires
two chunks because something that was unused was allocated in the
middle of it, then this could completely change the behavior of
the caching system and result in a massive slow down of a loop
which might result in a massive slow down of a program. But most
of the time it doesn't matter.
- pp 29, Exhibit 2.14 How does Pascal deal with meaningless operations?
- In this particular case, it would depend on the actual code.
If it could, the Pascal compiler would generate a compile time
error message saying you can't do that (for example if the array
x was declared from 1 to 100 and you wrote a reference to x with a
constant index outside that range, then the compiler
can clearly see that won't work). If the compiler can't figure out
that there is a problem and also can't figure out that there isn't
a problem, then it would generate a test to check to see if you were
exceeding the array bound. Under this second situation, if you went
beyond the end of the array, it would immediately generate an error
message and exit. This is different from what C would do, which is to
let you change the memory outside the array and then give you a core
dump if at some later point that caused a problem such that you executed
an illegal machine code operation (usually by trying to reference a
`bad pointer', but possibly also something like dividing by zero).
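For comparison, Ruby takes a third position on the same meaningless
operation: an out-of-range read is simply defined to yield nil, though a
checked access in the Pascal style is available too.

```ruby
a = [10, 20, 30]          # valid indices 0..2

a[10]                     # => nil   (defined behavior: not a compile-time
                          #           or runtime error as in Pascal, and not
                          #           undefined behavior as in C)

# Array#fetch behaves like Pascal's checked access: it raises at once.
result = begin
  a.fetch(10)
rescue IndexError
  :caught
end
result                    # => :caught
```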
- pp 30. Is the difference between a frequent and infrequent
feature that an infrequent feature must be implemented in a library?
- No, infrequent features can be part of the core of a system;
for example, the ^ (bitwise exclusive or) operator is part of the basic
language of both C and Java, but when was the last time you used it?
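For the record, here is the operator in question (same meaning in C,
Java, and Ruby):

```ruby
# ^ is bitwise exclusive or: a result bit is set exactly where the two
# input bits differ.
0b1010 ^ 0b0110   # => 0b1100, i.e. 12
```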
- pp 31. How are global variables avoided with the principle of
locality in design?
- It is not so much that they are avoided as that the language
is designed so that they are not used, or more generally that they
are difficult to use. They are never really needed in serial
programming as one can always just pass around all the information
that anyone needs (the easiest way would be to put it all into a
larger record and then pass a pointer to that record).
- For an example of what happens when someone asks how to use
global variables in a language that is hostile to the notion, see the
discussions such questions generate on programming forums.
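The `larger record' idea mentioned above can be sketched as follows
(hypothetical names):

```ruby
# Bundle everything that would have been global into one record...
Context = Struct.new(:verbose, :max_retries, :log)

# ...and pass (a reference to) that record to whoever needs it.
def fetch(url, ctx)
  ctx.log << "fetching #{url}" if ctx.verbose
  ctx.max_retries
end

ctx = Context.new(true, 3, [])
fetch("http://example.invalid/", ctx)   # => 3
```

Every dependency is now visible in the parameter list, which is exactly
the locality that global variables destroy.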
- pp 32. Why does prohibiting global variables result in clean
semantics in a programming language?
- The short answer is that it is easier to describe the meaning
of something by describing the meaning of its pieces, which means
that very modular structures have cleaner semantics. Global variables
mess up modularity.
- The long answer is that we need to know what semantics means in
more detail. Section 4.3 will get us started, at which point it might
be worthwhile revisiting this question.
- pp 32. Would like to know more about lambda.
- There are 150 references to lambda in the textbook. We will see
it again :-)
- Although mostly associated with Lisp/Scheme, lambda appears in
various ways in many languages. Ruby has a lambda constructor demo'd
in Ruby Tricks 02, and lambda appears in the Ruby reference card version 2.
In the org file, there is a note about it being newly added to both
Java and C++. However, it plays a major role in functional programming
(Scheme and Haskell). Indeed, Scheme was introduced in the 70s with
papers like Scheme: An Interpreter for Extended Lambda Calculus;
Lambda: The Ultimate Imperative; and Lambda: The Ultimate Declarative
(the original `Lambda Papers' by Guy Steele and Gerald Sussman,
which include a lot of implementation details for both software
and hardware approaches).
- The term Lambda Calculus (see its wikipedia entry)
actually goes back to the work of Alonzo Church in the 1930s, before
computers, when computation was primarily a theoretical construct for
mathematicians arguing over what was possible in terms of solving
math problems. In 1960, Peter Landin explained the meaning of Algol 60 (a
predecessor of Pascal and C) by discussing how it was related to lambda
calculus; and ever since, it has played a major role in the study of
the semantics of programming languages. And in the design of languages
by people who care about their semantics being `clean' and don't want
to appeal directly to an axiomatic approach.
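A quick taste of Ruby's version (a sketch):

```ruby
# Two ways to build a lambda in Ruby.
add_one = lambda { |x| x + 1 }
double  = ->(x) { x * 2 }          # the arrow ("stabby lambda") form

# Because lambdas are values, they compose -- the core idea the
# Lambda Papers pushed to its limit.
compose = ->(f, g) { ->(x) { f.call(g.call(x)) } }

compose.call(double, add_one).call(5)   # => 12, i.e. double(add_one(5))
```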
- pp 37. The examples of `too much flexibility' seem dated. Could
this happen in a modern language like C or C++?
- Well, whether or not something is `too much' is always a judgement
call. However, for C, you might want to take a look at
The International Obfuscated C Code Contest. In some ways this
seems to be a contest to show that C is more flexible than necessary.
Consider for example the entry deak from the 2014
competition. Of course, with a little thought, it is probably obvious,
but if you don't see the semantic intent of the programmer, you
might want to look at the hints that accompany the entry.
- pp 49. Why don't people just use one language if they are all so
similar?
- This is answered in the last two sentences of the first paragraph
of section 2.4.2.
- Related to divergent language discussion pp 12
- While details may be minor, with computers they are often important.
- Human nature is also an issue. People don't like to change once they
know a language, but technology and applications change, making new languages
available that won't be adopted by people if they can avoid it (people using
Perl or Python when they could be using Ruby, for example).
- It would be a mistake to think that computer science courses teach
the whole range of programming. Most programming is application specific,
so mastering it would effectively require people to be double majors in
order to have the right background. The undergraduate program
(pretty much everywhere) focuses on issues where little domain specific
knowledge is necessary. The people who have domain specific knowledge
and do most of the world's programming often have little computer science
knowledge (which makes the world a scary place).
- For example, when computer science started, the main applications were
scientific computing and accounting. Given the compiler writing ability
of the time, Fortran and Cobol (and RPG) dominated the landscape and these were
what students learned. These are actually still the main usages of computers,
but current computer science students don't know enough science to be useful
with scientific computing (so scientists usually do their own programming,
sometimes using sophisticated support systems like Matlab or Mathematica/Maple
and sometimes rolling their own (shudder) in languages like C or Python).
Artificial Intelligence is pretty much the last part of scientific computing
left in computer science (but even there, it is hard to find computer science
students interested in the relevant domain knowledge in neuroscience,
philosophy, mathematics, linguistics, etc.).
Also, current computer science students don't know enough accounting to be
useful on accounting sorts of tasks, so accountants do most of their own
programming, again using sophisticated (sort of) support systems like spreadsheets.
This leaves relatively minor usages of computers, such as games and web
applications, as the current view of many people as to what computer science
does -- which is unfortunate, because nowadays, many people work in these
fields successfully (make money) without computer science training.
There are occasionally exceptions when available packages don't
offer enough support (such as financial people wanting to do high speed
stock trading), but these are relatively rare.
[Conclusion: balance your study of computer science with the study of something
else where computer science can be applied (or with abstract mathematics if you
want to do theory instead of applications).]
- If you look at
http://top500.org/lists/2015/11/, you will see that the things
that are really worth computing are research, primarily scientific
simulations of one kind or another (weapons, energy, weather, etc.),
causing the people who do this stuff to buy/build the fastest
computers in the world. And, perhaps, generating BitCoins :-)
(cf `Bitcoin's Creator Satoshi Nakamoto Is Probably This Unknown
Australian Genius', Wired Magazine; note the addendum at the
end of the article).