Tuesday, May 20, 2008

Linux History

Take some serious time to read through and understand the history lecture, it has been structured to give you a fuller understanding of the roots of the Unix and Linux operating systems.

Unix has managed to influence every operating system available today.

It seems that most of the people who want to work in, or who actually work in Linux do not know the history of the operating system and as you will see, it will give you

a greater understanding of the software.

In short, Linux is an operating system based on UNIX (developed by AT&T's Bell Labs division), which in turn is based on MULTICS.

The following timeline will explain the main events that have affected the UNIX family of operating systems, of which Linux is one.

We pick up our history in the 1950s, when the first important event that affected UNIX took place.

Figure 1.1. PDP 7 with teletypewriter

Note

TTYs and line-oriented text displays were the general input and output devices of the PDP-7.

The term "tty" stands for "teletypewriter", which was an early form of terminal.

Teletypewriters, such as the one shown in the picture of the PDP-7 in Figure 1.1, were merely automatic typewriters producing hard-copy, line-based output on continuous paper.

In these early days of computing, this kind of terminal output did not allow screen or cursor-based programs to function.

Hence the first text editors were "line-oriented", such as "ed" and later "ex". "Vi" was developed later, based on "ex", and was screen-oriented. It used the redrawable ability of cathode ray tube (CRT) displays to show text one screen at a time.

1955

The US government passed a decree imposing a restraint of trade against AT&T. The company was not permitted to make money from non-telecommunications business.

This is significant, because until 1982 (when the US Government finally broke up the AT&T telecommunications monopoly into smaller companies), AT&T could not sell operating systems, i.e. UNIX, for profit.

This had a great impact on the distribution of Unix, as you will see throughout the rest of the History section: AT&T chose to use the product internally first and then to distribute it to computer research institutions such as universities.

1966

The MULTICS project was a joint attempt by General Electric (GE), AT&T Bell Labs and the Massachusetts Institute of Technology (MIT) to develop a stable multiuser operating system.

The aim was to create an operating system that could support a large number of simultaneous users (thousands!).

MULTICS stands for Multiplexed Information and Computing Service.

The people involved in the project at this time were Ken Thompson, Dennis Ritchie, Joseph Ossanna, Stuart Feldman, Doug McIlroy and Bob Morris.

Although a very simple version of MULTICS could now run on a GE 645 computer, it could only support three users, so the original goals of the operating system had not been met. The research and development was proving very expensive, and Bell Labs withdrew its sponsorship. This meant that the other interested parties could not afford to carry the project on their own, and so they also withdrew.

Dennis Ritchie and Ken Thompson then decided to continue this project on their own.

1969 to 1970

Ken Thompson and Dennis Ritchie wrote a Space Travel game that was actually a serious scientific astronomical simulation program. However, the game was a disaster, as the spaceship was hard to maneuver and used a lot of resources to run.

Developing the Space Travel program had taught them a great deal. With Rudd Canaday involved as well, they were able to design a new file system, which they built on a PDP-7 and called UNICS (Uniplexed Information and Computing Service); this later became UNIX.

A note to UNIX traditionalists: We use the spelling "Unix" rather than "UNIX" in this course only for the sake of readability.

They attempted to use Fortran to further develop Unix, but found that it was not what they were looking for, and so they turned to BCPL (Basic Combined Programming Language).

B was developed from BCPL, and it was the first high-level language to be used on Unix, on a PDP-11/20.

Assembler/ compilers / hardware architecture

Let's draw a diagram of three different machines and then take a look at why developing in assembler is not always the best idea:

  1. Remember that all a computer actually does is mathematics.

  2. An operating system is a "resource allocator" and a "controller of operations" program.

  3. When computers first started becoming popular we had to use punch cards or load the programs directly into memory manually.

  4. Assembly language is a human-readable form of machine code and is specific to the machine type and hardware that you are working with. Instructions written for one machine will not work on another machine at this low level.

  5. A computer has registers and an instruction set, and the instructions are binary coded; an assembly program talks to the machine in that machine's assembly language, which is then translated into binary code.

Figure 1.2. Relationship between hardware, assembler and a compiler



So, if you wrote a program for a PDP-7 in assembler and then wanted to move it to a PDP-11, you would have to rewrite the entire program, this time in the assembly language of the PDP-11.

To remedy this, developers invented compilers for application programming languages. In other words, if you developed in Pascal, the Pascal compiler for a PDP-7 would translate your program into assembly and then into machine code for the PDP-7.

If you then wanted to port that program to a PDP-11, you would get the Pascal compiler for the PDP-11 and recompile the original program there; it would then work as above.

This explains why higher-level languages such as Pascal and Fortran started being used: they sit as a layer between the program and the machine's assembler. A compiler is still needed for each specific machine.

These days a compiler automatically generates the assembler code.
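To make this concrete, here is a small sketch (not from the original course): the same C source can be carried unchanged from machine to machine, because each machine's own C compiler produces the machine-specific assembler and machine code.

 /* portable.c - the same source text compiles on any machine with a C compiler */
 #include <stdio.h>

 int main(void)
 {
     printf("2 + 2 = %d\n", 2 + 2);
     return 0;
 }

 /* To "port" it, nothing in the source changes; you simply recompile it with
    the target machine's compiler, e.g.  cc portable.c -o portable  on each
    machine. The compiler, not the programmer, produces the machine-specific
    assembler and binary code. */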

Figure 1.3. Dennis Ritchie and Ken Thompson working on a PDP-11.


So, the first Unix was written in the assembly language of the PDP-7; as we have now discussed, this would not make it easily portable to another type of architecture.

At this stage, because of the success of Unix, Bell Labs chose to re-sponsor the project.

1971 - 1973

B was still considered too slow, so the team worked on developing Unix in a faster language called New B, or NB. They could now also afford to upgrade to a later model in the PDP range, the PDP-11/45.

The C programming language was developed in 1972 as a result of Ken Thompson and his team wanting a language in which to write Unix. Although Ken Thompson worked on it initially, they eventually needed more functionality, which Dennis Ritchie then added.

It is also at this time that Unix "pipes" were developed, and this is seen as a milestone because of the power they added to the system. [1]

Unix now had its own language and philosophy. Its power came from stringing programs together, not from any one individual program; a minimal sketch of this idea in C follows the quotation below.

A quote from "A Quarter Century of Unix" by Peter Salus states:

  • write programs that do one thing and do it well.

  • write programs that work together

  • write programs that handle text streams, because that is a universal interface.
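As an illustration only (not part of the original course), here is a minimal sketch in C of what a shell does for a pipeline such as "ls | wc -l": it creates a pipe, forks two children, and connects the output of one program to the input of the other. Error handling is kept to a bare minimum.

 /* sketch of "ls | wc -l" using the POSIX pipe(), fork() and exec() calls */
 #include <stdio.h>
 #include <unistd.h>
 #include <sys/wait.h>

 int main(void)
 {
     int fd[2];

     if (pipe(fd) == -1) { perror("pipe"); return 1; }

     if (fork() == 0) {                 /* first child runs "ls"            */
         dup2(fd[1], STDOUT_FILENO);    /* its stdout goes into the pipe    */
         close(fd[0]); close(fd[1]);
         execlp("ls", "ls", (char *)0);
         _exit(127);                    /* only reached if exec fails       */
     }

     if (fork() == 0) {                 /* second child runs "wc -l"        */
         dup2(fd[0], STDIN_FILENO);     /* its stdin comes from the pipe    */
         close(fd[0]); close(fd[1]);
         execlp("wc", "wc", "-l", (char *)0);
         _exit(127);
     }

     close(fd[0]); close(fd[1]);        /* parent keeps neither end open    */
     while (wait(NULL) > 0)             /* wait for both children to finish */
         ;
     return 0;
 }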

1973 - 1974

More and more requests were coming in to AT&T from other companies and users wanting to use the Unix system.

At this stage Unix was firmly entrenched at universities and colleges, and AT&T's refusal to supply bug fixes and support for the system forced users to work together (the start of Unix user groups).

Unix had been sold internally at AT&T as a text processing system, and here the developers and users were the same community, so they got direct feedback on new features and on bugs. Support was right there in the same company, maybe even on the same office floor.

By working through research organizations at universities, the bright students got drawn into this type of company after their studies; this was beneficial to the research organizations, and so the system continued to be given to students.

Unix is still used to teach students computer science today.

The US patent office held the rights at this stage.

1974 - 1975

There are now 500 installations throughout the United States, mainly at Universities.

After 1974 military and commercial enterprises started demanding licenses to use Unix and AT&T decided to close the source and supply only binary distributions.

Berkeley (UCB) did a lot of development work on TCP/IP for DARPA (bright brains for a good price), and the students also started adding various other utilities, ultimately deciding to write Unix from scratch (BSD Unix).

BSD3 utilities are available in System V Unix; when installing the operating system you should be asked whether you would like to install the BSD utilities, which will be placed in the /usr/ucb directory.

1976 - 1978

Unix was ported to an IBM 360, an Interdata 7/32 and an Interdata 8/32, proving that Unix was portable to systems other than those manufactured by DEC.

1978 "The C Programming Language" by Ritchie is published.

1978: Bill Joy creates the "vi" editor, a full-screen editor. At the same time he sees the need "to optimize the code for several different types of terminals, he decided to consolidate screen management by using an interpreter to redraw the screen. The interpreter was driven by the terminal's characteristics - termcap was born" (P. Salus).

1979

All other Unixes branch from these two variants of the Unix code: AT&T Unix and BSD Unix (see the timeline below).

The release of AT&T Version 7 was the start of many of the Unix ports, the 32 bit ports and a product called Xenix, (an SCO and Microsoft joint product, and the fist Unix port that could run on an 8086 chip).

1980

By 1980, AT&T found that the operating system was a viable option for commercial development. Microprocessors were becoming very popular, and many other companies were allowed to license UNIX from AT&T. These companies ported UNIX to their machines. The simplicity and clarity of UNIX tempted many developers to enhance the product with their own improvements, which resulted in several varieties of UNIX.

1977 to 1983

From 1977 to 1982, Bell Labs combined features from the AT&T versions of UNIX into a single system called UNIX System 3.

Bell Labs then enhanced System 3 into System 4, a system that was only used internally at Bell Labs.

After further enhancements, System V was released and in 1983, AT&T officially announced their support for System V.

1982: Sun Microsystems (which later developed the SPARC processor) licensed BSD Unix and called it SunOS.

1983/4 Then licensed AT&T System V, made their changes and called that version Solaris. There is a lot of cross coding and an interesting note is that if though if doing the "uname" (uname is a command that supplies details of the current operating system for your interest) command on Solaris the report says SunOS is the operating system.

1985 - Some quotable quotes - "Xenix is the operating system future" and "640 KB memory is enough for anyone"

1989

In 1989, AT&T organized that System V, SUNOS, XENIX, and Berkeley 4xBSD were combined into one product called System V Release 4.0. This new product was created to ensure that there was one version of UNIX that would work on any machine available at that time.

The different versions of UNIX prompted AT&T to form a UNIX International Consortium. The aim of this consortium was to improve the marketing of UNIX, since the market was starting to demand clarity on standardizing the product.

1992 to 1998

By 1992, UNIX was readily available on an Intel platform, providing mainframe-type processing power on a PC. This made UNIX appealing to the end-user market.

Vendor                 | Hardware             | Operating System (Unix based)
HP                     | PA-RISC              | HP-UX
IBM                    | RS6000 / PowerPC     | AIX
Digital / DEC / Compaq | Alpha                | Digital Unix
NCR                    |                      |
Data General           |                      | DG-UX
SCO                    | Intel PC compatible  | SCO Xenix, SCO Unix, SCO Open Server 5, UnixWare 7

Source code has changed hands a few times:

Year | Owner of source code
1969 | AT&T
1993 | Novell
1995 | SCO
2001 | Caldera, which started trading under the name "The SCO Group" in 2002

Note

  1. Besides licensing Unix System V to vendors, Novell marketed its own flavor of Unix to the consumer market, called UnixWare.

  2. When Novell sold the Unix business to SCO, it transferred the Unix trademark to X/Open Company Ltd., now the Open Group (www.opengroup.org).

  3. SCO inherited UnixWare 2 from Novell and continued selling it under the SCO brand.

This is the story of Linux

Figure 1.4. Professor Andy Tanenbaum

1985: Professor Andy Tanenbaum wrote a Unix-like operating system from scratch, based on the System V, POSIX and IEEE standards, called MINIX, for the Intel i386 PC, aimed at university computer science research students.

MINIX was also bundled with a popular computer science operating systems textbook by the same author. Although the operating system was free, the book had to be purchased.

A Finnish student called Linus Torvalds first came into contact with Unix-like systems through his use of MINIX while studying computer science at the University of Helsinki, Finland.

Linus Torvalds wanted to upgrade MINIX and add features and improvements, but Andrew Tanenbaum wanted MINIX kept the way it was, and so Linus decided to write his own kernel.

He released Linux on the Internet in 1991 as an open source product, first under his own license and later under the GPL.


If you want to travel around the world and be invited to speak at a lot of different places, just write a Unix operating system.


-- Linus Torvalds

Figure 1.5. Linus Torvalds

The FSF (Free Software Foundation) was started by Richard Stallman as a development effort to promote the use of free software. Stallman recognized the need to write a free and open source Unix-like operating system, so that people could have a Unix system under a non-proprietary, non-restrictive license.

The FSF started a project called GNU to fulfill this aim. GNU stands for "GNU's Not Unix" (a recursive acronym).

By 1991, GNU had already amassed a compiler (GCC, the GNU C Compiler) and a C library, both very critical components of an operating system, and all the associated generic Unix base programs (ls, cat, chmod etcetera).

They were still missing a kernel, which was going to be called the GNU HURD (as of April 2004, HURD is still not complete).

The FSF naturally adopted the Linux kernel to complete the GNU system, producing what is known as the GNU/Linux operating system; this is the correct term for all distributions of Linux, such as Red Hat Linux and SuSE Linux.

1994: Linux 1.0 is released.

Figure 1.6. Tux, the Linux mascot



Ref: http://learnlinux.tsf.org.za/courses/build/internals/internals-all.html#history

Monday, May 12, 2008

Null Pointer

Q: What is this infamous null pointer, anyway?


A: The language definition states that for each pointer type, there is a special value--the ``null pointer''--which is distinguishable from all other pointer values and which is ``guaranteed to compare unequal to a pointer to any object or function.'' That is, a null pointer points definitively nowhere; it is not the address of any object or function. The address-of operator & will never yield a null pointer, nor will a successful call to malloc. (malloc does return a null pointer when it fails, and this is a typical use of null pointers: as a ``special'' pointer value with some other meaning, usually ``not allocated'' or ``not pointing anywhere yet.'')

A null pointer is conceptually different from an uninitialized pointer. A null pointer is known not to point to any object or function; an uninitialized pointer might point anywhere. See also questions 1.30, 7.1, and 7.31.

As mentioned above, there is a null pointer for each pointer type, and the internal values of null pointers for different types may be different. Although programmers need not know the internal values, the compiler must always be informed which type of null pointer is required, so that it can make the distinction if necessary (see questions 5.2, 5.5, and 5.6).
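As a small illustration (a sketch, not part of the FAQ text): malloc returns a null pointer on failure, which is quite different from a pointer that was never initialized.

 #include <stdio.h>
 #include <stdlib.h>

 int main(void)
 {
     char *uninit;              /* uninitialized: could point anywhere; never use it */
     char *nowhere = NULL;      /* a null pointer: known to point to nothing         */
     char *buf = malloc(100);   /* malloc yields a null pointer if allocation fails  */

     if (buf == NULL) {
         fprintf(stderr, "out of memory\n");
         return 1;
     }

     free(buf);
     return 0;                  /* 'uninit' and 'nowhere' are deliberately left unused */
 }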



Q: How do I get a null pointer in my programs?


A: With a null pointer constant.

According to the language definition, an ``integral constant expression with the value 0'' in a pointer context is converted into a null pointer at compile time. That is, in an initialization, assignment, or comparison when one side is a variable or expression of pointer type, the compiler can tell that a constant 0 on the other side requests a null pointer, and generate the correctly-typed null pointer value. Therefore, the following fragments are perfectly legal:

 char *p = 0;
 if(p != 0)

(See also question 5.3.)

However, an argument being passed to a function is not necessarily recognizable as a pointer context, and the compiler may not be able to tell that an unadorned 0 ``means'' a null pointer. To generate a null pointer in a function call context, an explicit cast may be required, to force the 0 to be recognized as a pointer. For example, the Unix system call execl takes a variable-length, null-pointer-terminated list of character pointer arguments, and is correctly called like this:

 execl("/bin/sh", "sh", "-c", "date", (char *)0);
If the (char *) cast on the last argument were omitted, the compiler would not know to pass a null pointer, and would pass an integer 0 instead. (Note that many Unix manuals get this example wrong; see also question 5.11.)

When function prototypes are in scope, argument passing becomes an ``assignment context,'' and most casts may safely be omitted, since the prototype tells the compiler that a pointer is required, and of which type, enabling it to correctly convert an unadorned 0. Function prototypes cannot provide the types for variable arguments in variable-length argument lists however, so explicit casts are still required for those arguments. (See also question 15.3.) It is probably safest to properly cast all null pointer constants in function calls, to guard against varargs functions or those without prototypes.
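A short sketch of the difference (take_ptr is a made-up function used purely for illustration; execl is the POSIX call the FAQ itself uses):

 #include <stddef.h>
 #include <unistd.h>                       /* for execl (POSIX) */

 void take_ptr(char *p) { (void)p; }       /* hypothetical prototyped function */

 int main(void)
 {
     take_ptr(0);      /* fine: the prototype tells the compiler a char * is wanted */
     take_ptr(NULL);   /* fine for the same reason                                  */

     /* execl's trailing arguments are variadic, so the prototype cannot supply
        their types; the terminating null pointer therefore needs an explicit cast: */
     execl("/bin/sh", "sh", "-c", "date", (char *)0);
     return 1;         /* reached only if execl fails */
 }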


Q: Is the abbreviated pointer comparison ``if(p)'' to test for non-null pointers valid? What if the internal representation for null pointers is nonzero?


A: It is always valid.

When C requires the Boolean value of an expression, a false value is inferred when the expression compares equal to zero, and a true value otherwise. That is, whenever one writes

 if(expr)
where ``expr'' is any expression at all, the compiler essentially acts as if it had been written as
 if((expr) != 0)
Substituting the trivial pointer expression ``p'' for ``expr'', we have
 if(p) is equivalent to if(p != 0)
and this is a comparison context, so the compiler can tell that the (implicit) 0 is actually a null pointer constant, and use the correct null pointer value. There is no trickery involved here; compilers do work this way, and generate identical code for both constructs. The internal representation of a null pointer does not matter.

The boolean negation operator, !, can be described as follows:

 !expr is essentially equivalent to (expr)?0:1
or to ((expr) == 0)
which leads to the conclusion that
 if(!p) is equivalent to if(p == 0)

``Abbreviations'' such as if(p), though perfectly legal, are considered by some to be bad style (and by others to be good style; see question 17.10).

See also question 9.2.


Q: What is NULL and how is it defined?


A: As a matter of style, many programmers prefer not to have unadorned 0's scattered through their programs, some representing numbers and some representing pointers. Therefore, the preprocessor macro NULL is defined (by several headers, including <stdio.h> and <stddef.h>) as a null pointer constant, typically 0 or ((void *)0) (see also question 5.6). A programmer who wishes to make explicit the distinction between 0 the integer and 0 the null pointer constant can then use NULL whenever a null pointer is required.

Using NULL is a stylistic convention only; the preprocessor turns NULL back into 0 which is then recognized by the compiler, in pointer contexts, as before. In particular, a cast may still be necessary before NULL (as before 0) in a function call argument. The table under question 5.2 above applies for NULL as well as 0 (an unadorned NULL is equivalent to an unadorned 0).

NULL should be used only as a pointer constant; see question 5.9


Q: How should NULL be defined on a machine which uses a nonzero bit pattern as the internal representation of a null pointer?


A: The same as on any other machine: as 0 (or some version of 0; see question 5.4).

Whenever a programmer requests a null pointer, either by writing ``0'' or ``NULL'', it is the compiler's responsibility to generate whatever bit pattern the machine uses for that null pointer. (Again, the compiler can tell that an unadorned 0 requests a null pointer when the 0 is in a pointer context; see question 5.2.) Therefore, #defining NULL as 0 on a machine for which internal null pointers are nonzero is as valid as on any other: the compiler must always be able to generate the machine's correct null pointers in response to unadorned 0's seen in pointer contexts. A constant 0 is a null pointer constant; NULL is just a convenient name for it (see also question 5.13).

(Section 4.1.5 of the C Standard states that NULL ``expands to an implementation-defined null pointer constant,'' which means that the implementation gets to choose which form of 0 to use and whether to use a void * cast; see questions 5.6 and 5.7. ``Implementation-defined'' here does not mean that NULL might be #defined to match some implementation-specific nonzero internal null pointer value.)
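One practical consequence, sketched below under the assumption of such an exotic machine: zeroing a pointer's bytes with memset is not the same as assigning it 0 or NULL, because only the latter asks the compiler for the machine's real null pointer.

 #include <stddef.h>
 #include <string.h>

 int main(void)
 {
     char *a = 0;     /* the compiler emits this machine's real null pointer pattern */
     char *b;

     memset(&b, 0, sizeof b);    /* all-bits-zero: on such an exotic machine this is
                                    not necessarily the machine's null pointer       */

     return (a == NULL) ? 0 : 1; /* always 0; whether b == NULL would hold is
                                    implementation-specific                          */
 }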


Q: If NULL were defined as follows:

 #define NULL ((char *)0)
wouldn't that make function calls which pass an uncast NULL work?

A: Not in the most general case. The complication is that there are machines which use different internal representations for pointers to different types of data. The suggested definition would make uncast NULL arguments to functions expecting pointers to characters work correctly, but pointer arguments of other types could still (in the absence of prototypes) require explicit casts. Furthermore, legal constructions such as

 FILE *fp = NULL;
could fail.

Nevertheless, ANSI C allows the alternate definition

 #define NULL ((void *)0)
for NULL. Besides potentially helping incorrect programs to work (but only on machines with homogeneous pointers, thus questionably valid assistance), this definition may catch programs which use NULL incorrectly (e.g. when the ASCII NUL character was really intended; see question 5.9). See also question 5.7.

At any rate, ANSI function prototypes ensure that most (though not quite all; see question 5.2) pointer arguments are converted correctly when passed as function arguments, so the question is largely moot.

Programmers who are accustomed to modern, ``flat'' memory architectures may find the idea of ``different kinds of pointers'' very difficult to accept. See question 5.17 for some examples.


Q: My vendor provides header files that #define NULL as 0L. Why?


A: Some programs carelessly attempt to generate null pointers by using the NULL macro, without casts, in non-pointer contexts. (Doing so is not guaranteed to work; see questions 5.2 and 5.11.) On machines which have pointers larger than integers (such as PC compatibles in ``large'' model; see also question 5.17), a particular definition of NULL such as 0L can help these incorrect programs to work. (0L is a perfectly valid definition of NULL; it is an ``integral constant expression with value 0.'') Whether it is wise to coddle incorrect programs is debatable.



Q: Is NULL valid for pointers to functions?


A: Yes (but see question 4.13).


Q: If NULL and 0 are equivalent as null pointer constants, which should I use?


A: Many programmers believe that NULL should be used in all pointer contexts, as a reminder that the value is to be thought of as a pointer. Others feel that the confusion surrounding NULL and 0 is only compounded by hiding 0 behind a macro, and prefer to use unadorned 0 instead. There is no one right answer. (See also questions 9.4 and 17.10.) C programmers must understand that NULL and 0 are interchangeable in pointer contexts, and that an uncast 0 is perfectly acceptable. Any usage of NULL (as opposed to 0) should be considered a gentle reminder that a pointer is involved; programmers should not depend on it (either for their own understanding or the compiler's) for distinguishing pointer 0's from integer 0's.

It is only in pointer contexts that NULL and 0 are equivalent. NULL should not be used when another kind of 0 is required, even though it might work, because doing so sends the wrong stylistic message. (Furthermore, ANSI allows the definition of NULL to be ((void *)0), which will not work at all in non-pointer contexts.) In particular, do not use NULL when the ASCII null character (NUL) is desired. Provide your own definition

 #define NUL '\0'
if you must.
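A tiny sketch of the distinction (is_empty is an invented helper, used only to show the two kinds of ``null'' side by side):

 #include <stddef.h>

 #define NUL '\0'                /* the ASCII null character, not the null pointer */

 int is_empty(const char *s)
 {
     if (s == NULL)              /* null pointer: there is no string at all   */
         return 1;
     return s[0] == NUL;         /* null character: the string is just empty  */
 }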

Q: But wouldn't it be better to use NULL (rather than 0), in case the value of NULL changes, perhaps on a machine with nonzero internal null pointers?


A: No. (Using NULL may be preferable, but not for this reason.) Although symbolic constants are often used in place of numbers because the numbers might change, this is not the reason that NULL is used in place of 0. Once again, the language guarantees that source-code 0's (in pointer contexts) generate null pointers. NULL is used only as a stylistic convention.


Q: I once used a compiler that wouldn't work unless NULL was used.

A: Unless the code being compiled was nonportable, that compiler was probably broken.

Perhaps the code used something like this nonportable version of an example from question 5.2:

 execl("/bin/sh", "sh", "-c", "date", NULL); /* WRONG */
Under a compiler which defines NULL to ((void *)0) (see question 5.6), this code will happen to work. However, if pointers and integers have different sizes or representations, the (equally incorrect) code
 execl("/bin/sh", "sh", "-c", "date", 0); /* WRONG */
may not work.

Correct, portable code uses an explicit cast:

 execl("/bin/sh", "sh", "-c", "date", (char *)NULL);
With the cast, the code works correctly no matter what the machine's integer and pointer representations are, and no matter which form of null pointer constant the compiler has chosen as the definition of NULL. (The code fragment in question 5.2, which used 0 instead of NULL, is equally correct; see also question 5.9.) (In general, making decisions about a language based on the behavior of one particular compiler is likely to be counterproductive.)

Q: I use the preprocessor macro

#define Nullptr(type) (type *)0
to help me build null pointers of the correct type.
A: This trick, though popular and superficially attractive, does not buy much. It is not needed in assignments or comparisons; see question 5.2. (It does not even save keystrokes.) See also questions 9.1 and 10.2.
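For example (a sketch only), the ordinary assignments already do the right thing without the macro:

 #include <stdio.h>

 #define Nullptr(type) (type *)0

 int main(void)
 {
     char *p;
     FILE *fp;

     p  = Nullptr(char);   /* works, but buys nothing ...                             */
     p  = 0;               /* ... the assignment context already yields a null char * */
     fp = NULL;            /* likewise; no per-type macro is needed                   */

     return (p == 0 && fp == 0) ? 0 : 1;
 }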

Q: This is strange. NULL is guaranteed to be 0, but the null pointer is not?


A: When the term ``null'' or ``NULL'' is casually used, one of several things may be meant:

  1. The conceptual null pointer, the abstract language concept defined in question 5.1. It is implemented with...
  2. The internal (or run-time) representation of a null pointer, which may or may not be all-bits-0 and which may be different for different pointer types. The actual values should be of concern only to compiler writers. Authors of C programs never see them, since they use...
  3. The null pointer constant, which is a constant integer 0 (see question 5.2). It is often hidden behind...
  4. The NULL macro, which is #defined to be 0 (see question 5.4). Finally, as red herrings, we have...
  5. The ASCII null character (NUL), which does have all bits zero, but has no necessary relation to the null pointer except in name; and...
  6. The ``null string,'' which is another name for the empty string (""). Using the term ``null string'' can be confusing in C, because an empty string involves a null ('\0') character, but not a null pointer, which brings us full circle...

In other words, to paraphrase the White Knight's description of his song in Through the Looking-Glass, the name of the null pointer is ``0'', but the name of the null pointer is called ``NULL'' (and we're not sure what the null pointer is).

This document uses the phrase ``null pointer'' (in lower case) for sense 1, the token ``0'' or the phrase ``null pointer constant'' for sense 3, and the capitalized word ``NULL'' for sense 4.




Q: Why is there so much confusion surrounding null pointers? Why do these questions come up so often?


A: C programmers traditionally like to know a lot (perhaps more than they need to) about the underlying machine implementation. The fact that null pointers are represented both in source code, and internally to most machines, as zero invites unwarranted assumptions. The use of a preprocessor macro (NULL) may seem to suggest that the value could change some day, or on some weird machine. The construct ``if(p == 0)'' is easily misread as calling for conversion of p to an integral type, rather than 0 to a pointer type, before the comparison. Finally, the distinction between the several uses of the term ``null'' (listed in question 5.13) is often overlooked.

One good way to wade out of the confusion is to imagine that C used a keyword (perhaps nil, like Pascal) as a null pointer constant. The compiler could either turn nil into the appropriate type of null pointer when it could unambiguously determine that type from the source code, or complain when it could not. Now in fact, in C the keyword for a null pointer constant is not nil but 0, which works almost as well, except that an uncast 0 in a non-pointer context generates an integer zero instead of an error message, and if that uncast 0 was supposed to be a null pointer constant, the resulting program may not work.

Additional links: an article by Richard Stamp with another angle on the NULL/0 distinction



Q: I'm confused. I just can't understand all this null pointer stuff.


A: Here are two simple rules you can follow:

  1. When you want a null pointer constant in source code, use ``0'' or ``NULL''.
  2. If the usage of ``0'' or ``NULL'' is an argument in a function call, cast it to the pointer type expected by the function being called.

The rest of the discussion has to do with other people's misunderstandings, with the internal representation of null pointers (which you shouldn't need to know), and with the complexities of function prototypes. (Taking those complexities into account, we find that rule 2 is conservative, of course; but it doesn't hurt.) Understand questions 5.1, 5.2, and 5.4, and consider 5.3, 5.9, 5.13, and 5.14, and you'll do fine.
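Both rules in one small sketch (execl is the same POSIX call used earlier in this FAQ):

 #include <stddef.h>
 #include <unistd.h>                /* for execl (POSIX) */

 int main(void)
 {
     char *p = NULL;                /* rule 1: plain 0 or NULL in source code     */

     if (p == 0)                    /* rule 1 again: an unadorned 0 compares fine */
         p = NULL;

     /* rule 2: cast the 0/NULL when it is a function argument, here the
        variadic terminator that execl expects: */
     execl("/bin/sh", "sh", "-c", "date", (char *)NULL);
     return 1;                      /* reached only if execl fails */
 }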



Q: Is a run-time integral value of 0, cast to a pointer, guaranteed to be a null pointer?


A: No. Only constant integral expressions with value 0 are guaranteed to indicate null pointers. See also questions 4.14, 5.2, and 5.19.
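A short sketch of the distinction between a compile-time constant 0 and a run-time zero (the second printf's result is merely typical, not guaranteed by the language):

 #include <stddef.h>
 #include <stdio.h>

 int main(void)
 {
     int zero = 0;                /* a run-time value, not a constant expression   */
     char *p = (char *)zero;      /* implementation-defined; NOT guaranteed null   */
     char *q = (char *)0;         /* guaranteed null: this 0 is a constant         */
     char *r = NULL;              /* likewise guaranteed                           */

     printf("q == r: %d (always 1)\n", q == r);
     printf("p == q: %d (usually 1, but the language does not promise it)\n", p == q);
     return 0;
 }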



Ref: http://c-faq.com/null/