Friday, July 17, 2020

How the C Language Grew Up for Its Users: Development of the C Language

Growth


UNIX

The success of our portability experiment on the Interdata 8/32 soon led to another, by Tom London and John Reiser, on the DEC VAX 11/780. This machine became much more popular than the Interdata, and Unix and the C language began to spread rapidly, both inside and outside AT&T. Although by the mid-1970s Unix was in use by a variety of projects within the Bell System, as well as by a small group of research-oriented industrial, academic, and government organizations outside our company, its real growth began only after portability had been achieved. Of particular note were the System III and System V versions of the system from the emerging Computer Systems division of AT&T, based on work of the company's development and research groups, and the BSD series of releases from the University of California at Berkeley, derived from the research organizations at Bell Laboratories.

In the 1980s, the use of the C language spread widely, and compilers became available on almost every machine architecture and operating system; in particular, it became popular as a programming tool for personal computers, both among manufacturers of commercial software for such machines and among end-users interested in programming. At the beginning of the decade, almost every compiler was based on Johnson's PCC; by 1985, many independently produced compiler products were available.

 

Standardization

It was clear by 1982 that C required formal standardization. The best approximation to a standard, the first edition of K&R, no longer described the language in actual use; in particular, it mentioned neither the void nor the enum type. While the book foreshadowed the newer approach to structures, only after it was published did the language support assigning them, passing them to and from functions, and associating the names of members firmly with the structure or union containing them. Although the AT&T-distributed compilers incorporated these changes, and most non-PCC compiler vendors quickly picked them up, no complete, authoritative description of the language remained.

The first edition of K&R was also insufficiently precise on many language details, and it became increasingly impractical to regard PCC as a 'reference compiler'; it did not fully embody even the language described by K&R, let alone the subsequent extensions. Finally, the incipient use of C in commercial and government contract projects meant that the imprimatur of an official standard was important. Thus (at the urging of M. D. McIlroy), in the summer of 1983, ANSI established the X3J11 committee under the direction of CBEMA, with the goal of producing a C standard. At the end of 1989, X3J11 produced its report [ANSI 89], which was subsequently accepted by ISO as ISO/IEC 9899-1990.



From the outset, the X3J11 committee took a cautious, conservative view of the language extensions. To my great satisfaction, they took their objective seriously: 'to develop a clear, consistent and unmistakable C programming language standard that codifies the common, existing C definition and promotes the portability of user programs across C language environments.' [ANSI 89] The Committee realized that the mere promulgation of a standard does not change the world.

X3J11 introduced only one truly important change to the language itself: it incorporated the types of formal arguments into the type signature of a function, using syntax borrowed from C++ [Stroustrup 86]. In the old style, external functions were declared as follows:

double sin();

 

This says only that sin is a function returning a double (i.e. double-precision floating-point) value. In the new style, this is better rendered

double sin(double);

 

This makes the argument type explicit and thus encourages better type-checking and appropriate conversion. Even this addition, though it produced a considerably better language, caused difficulties. The committee rightly felt that it was not feasible simply to outlaw 'old-style' function definitions and declarations, but it also agreed that the new forms were better. The inevitable compromise was as good as it could have been, though the language definition is complicated by allowing both forms, and writers of portable software have to contend with compilers not yet up to standard.

X3J11 also introduced a host of smaller additions and adjustments, such as the const and volatile type qualifiers and slightly different type-promotion rules. Nevertheless, the standardization process did not change the character of the language. In particular, the C standard did not attempt to specify the language semantics formally, so there can be dispute over fine points; nevertheless, it successfully accounted for changes in usage since the original description and is sufficiently precise to base implementations on it.

The core C language escaped almost undisturbed from the standardization process, and the Standard emerged more as a better, more careful codification than a new invention. More important changes have taken place in the language environment: the preprocessor and the library. The preprocessor performs macro substitution through conventions distinct from the rest of the language. Its interaction with the compiler has never been well described, and X3J11 has tried to remedy the situation. The result is noticeably better than the explanation given in the first edition of K&R; besides being more comprehensive, it provides for operations, such as concatenation of tokens, which were previously only available through implementation accidents.

X3J11 correctly believed that the full and careful description of the standard C library was as important as the work on the language itself. The C language itself does not provide input-output or any other interaction with the outside world, and thus depends on a set of standard procedures. At the time of publication of K&R, C was primarily thought of as the Unix programming language of the system; although we provided examples of library routines intended to be easily transportable to other operating systems, the underlying Unix support was implicitly understood. As a result, the X3J11 committee spent much of its time designing and documenting a set of library routines required to be available for all conforming implementations.

The current activities of the X3J11 committee are limited by the rules of the standards process to the issuing of interpretations of the existing standard. However, an informal group originally convened by Rex Jaeschke as NCEG (Numerical C Extensions Group) was officially accepted as subgroup X3J11.1, and it continues to consider extensions to C. As the name implies, many of these possible extensions are intended to make the language more suitable for numerical use: for example, multi-dimensional arrays whose bounds are dynamically determined, incorporation of facilities for dealing with IEEE arithmetic, and making the language more effective on vector machines and other advanced architectures. Not all the possible extensions are specifically numerical; they also include a notation for structure literals.

 

Successors:


B

C and even B have several direct descendants, though they do not rival Pascal in generating progeny. One side branch developed early. When Steve Johnson visited the University of Waterloo on sabbatical in 1972, he brought B with him. It became popular on the Honeywell machines there, and later spawned Eh and Zed (the Canadian answers to 'What follows B?'). When Johnson returned to Bell Labs in 1973, he was disconcerted to find that the language whose seeds he had brought to Canada had evolved back home; even his own yacc program had been rewritten in C, by Alan Snyder. More recent descendants of C proper include Concurrent C [Gehani 89], Objective C [Cox 86], C* [Thinking 90] and, in particular, C++ [Stroustrup 86].

The language is also widely used as an intermediate representation (essentially, as a portable assembly language) for a wide variety of compilers, both for direct descendants such as C++ and for independent languages such as Modula-3 and Eiffel.


Thursday, July 16, 2020

The Problems with B and BCPL, and Why the C Language Was Needed

The Problems With B Language


C Language

The machines on which we first used BCPL and then B were word-addressed, and the single data type of these languages, the 'cell,' was comfortably matched to the hardware machine word. The advent of the PDP-11 exposed several inadequacies of the B semantic model. First, its character-handling mechanisms, inherited with few changes from BCPL, were clumsy: using library procedures to spread packed strings to individual cells and then repack them, or to access and replace individual characters, began to feel awkward, even stupid, on a byte-oriented machine.

Second, although the original PDP-11 did not provide for floating-point arithmetic, the manufacturer promised that it would soon be available. Floating-point operations were added to BCPL in our Multics and GCOS compilers by defining special operators, but this mechanism was possible only because, on the machines concerned, a single word was large enough to contain a floating-point number; this was not true for the 16-bit PDP-11. 

Finally, the B and BCPL models implied overhead when dealing with pointers: language rules, by defining a pointer as an index in an array of words, forced pointers to be represented as word indices. Each pointer reference generated a conversion of the run-time scale from the pointer to the byte address expected by the hardware.

For all these reasons, it seemed that a typing scheme was needed to deal with characters and byte addresses, and to prepare for the upcoming floating-point hardware. Other issues, particularly the type of security and interface checks, did not seem as important as they did later.

Apart from problems with the language itself, the threaded-code technique of the B compiler produced programs so much slower than their assembly-language counterparts that we discounted the possibility of re-coding the operating system or its central utilities in B.

In 1971, I began to extend the B language by adding a character type, and also rewrote its compiler to generate PDP-11 machine instructions instead of threaded code. Thus, the transition from B to C was contemporaneous with the creation of a compiler capable of producing programs fast and small enough to compete with assembly language. I called the slightly-extended language NB, for 'new B.'

Embryonic C

NB existed so briefly that no full description of it was written. It provided int and char types, arrays of them, and pointers to them, declared in a style typified by

int i, j;

char c, d;

int iarray[10];

int ipointer[];

char carray[10];

char cpointer[];

 

The semantics of arrays remained exactly as in B and BCPL: the declarations of iarray and carray create cells dynamically initialized with a value pointing to the first of a sequence of 10 integers and characters, respectively. The declarations for ipointer and cpointer omit the size, to assert that no storage should be allocated automatically. Within procedures, the language's interpretation of pointers was identical to that of array variables: a pointer declaration created a cell differing from an array declaration only in that the programmer was expected to assign a referent, instead of letting the compiler allocate the space and initialize the cell.

The values stored in the cells bound to array and pointer names were the machine addresses, measured in bytes, of the corresponding storage area. Therefore, indirection through a pointer implied no run-time overhead to scale the pointer from word to byte offset. On the other hand, the machine code for array subscripting and pointer arithmetic now depended on the type of the array or pointer: computing iarray[i] or ipointer+i implied scaling the addend i by the size of the object referred to.

These semantics represented an easy transition from B, and I experimented with them for a few months. Problems became evident when I tried to extend the type notation, especially to add structured (record) types. Structures, it seemed, should map intuitively onto memory in the machine, but in a structure containing an array, there was no good place to stash the pointer containing the base of the array, nor any convenient way to arrange that it be initialized. For example, the directory entries of early Unix systems might be described in C as

struct {

          int     inumber;

          char  name[14];

};

 

I wanted the structure not only to characterize an abstract object but also to describe a collection of bits that might be read from a directory. Where could the compiler hide the pointer to name demanded by the semantics? Even if structures were thought of more abstractly, and the space for the pointers could somehow be hidden, how could I handle the technical problem of properly initializing these pointers when allocating a complicated object, perhaps one that specified structures containing arrays containing structures to arbitrary depth?

The solution constituted the crucial jump in the evolutionary chain between typeless BCPL and typed C. It eliminated the materialization of the pointer in storage, and instead caused the creation of the pointer when the array name is mentioned in an expression. The rule, which survives in today's C, is that values of array type are converted, when they appear in expressions, into pointers to the first of the objects making up the array.

This invention enabled most existing B code to continue to work, despite the underlying shift in the language's semantics. The few programs that assigned new values to an array name to adjust its origin — possible in B and BCPL, meaningless in C — were easily repaired. More importantly, the new language retained a coherent and workable (if unusual) explanation of the semantics of arrays, while opening the way towards a more comprehensive type structure.

The second innovation that most clearly distinguishes C from its predecessors is the fuller type structure and, in particular, its expression in the syntax of declarations. NB offered basic types int and char, along with arrays of them, and pointed to them, but no further compositional methods. Generalization was required: given an object of any kind, it should be possible to describe a new object that gathers several objects into an array, generates them from a function, or is a pointer to it.

There was already a way for each object of such a composite type to mention the underlying object: index the array, call the function, use the indirect operator on the pointer. Analogical reasoning led to a declaration syntax for names that mirrored the syntax of the expression in which names typically appear. So,

int i, *pi, **ppi;


declare an integer, a pointer to an integer, and a pointer to a pointer to an integer. The syntax of these declarations reflects the observation that i, *pi, and **ppi all yield an int type when used in an expression. Similarly,

int f(), *f(), (*f)();

 

declare a function returning an integer, a function returning a pointer to an integer, and a pointer to a function returning an integer;

int *api[10], (*pai)[10];

 

declare an array of pointers to integers, and a pointer to an array of integers. In all these cases, the declaration of a variable resembles its usage in an expression whose type is the one named at the head of the declaration.

The type composition scheme adopted by C owes a considerable debt to Algol 68, although it may not have emerged in the form that Algol's adherents would approve. The central notion that I captured from Algol was a type structure based on atomic types (including structures), composed of arrays, pointers (references), and functions (procedures). The concept of unions and casts of Algol 68 also had an influence that appeared later.

After creating the type system, the associated syntax, and the compiler for the new language, I felt that it deserved a new name; NB seemed insufficiently distinctive. I decided to follow the single-letter style and called it C, leaving open the question of whether the name represented a progression through the alphabet or through the letters in BCPL.

Wednesday, July 15, 2020

What Are the Origins of the C Language?

Origins of the C Language


BCPL was designed by Martin Richards in the mid-1960s while he was visiting MIT, and was used during the early 1970s for several interesting projects, among them the OS6 operating system at Oxford [Stoy 72] and parts of the seminal Alto work at Xerox PARC [Thacker 79]. We became familiar with it because the MIT CTSS system [Corbato 62] on which Richards worked was used for Multics development. The original BCPL compiler was transported both to Multics and to the GE-635 GECOS system by Rudd Canaday and others at Bell Labs [Canaday 69]; during the final throes of Multics's life at Bell Labs, and immediately afterwards, it was the language of choice among the group of people who would later become involved with Unix.

 

BCPL, B, and C all fit firmly into the traditional procedural family typified by Fortran and Algol 60. They are particularly oriented towards system programming, are small and compactly described, and are amenable to translation by simple compilers. They are 'close to the machine' in that the abstractions they introduce are readily grounded in the concrete data types and operations supplied by conventional computers, and they rely on library routines for input-output and other interactions with an operating system. With less success, they also use library procedures to specify interesting control constructs such as coroutines and procedure closures. At the same time, their abstractions lie at a sufficiently high level that, with care, portability between machines can be achieved.

 

B Language


BCPL, B, and C differ syntactically in many details, but broadly they are similar. Programs consist of a sequence of global declarations and function (procedure) declarations. Procedures can be nested in BCPL, but may not refer to non-static objects defined in containing procedures. B and C avoid this restriction by imposing a more severe one: no nested procedures at all. Each of the languages (except for earliest versions of B) recognizes separate compilation and provides a means for including text from named files.

 

BCPL Language


Several of BCPL's syntactic and lexical mechanisms are more elegant and regular than those of B and C. For example, BCPL's procedure and data declarations have a more uniform structure, and it supplies a more complete set of looping constructs. Although BCPL programs are notionally supplied from an undelimited stream of characters, clever rules allow most semicolons to be elided after statements that end on a line boundary.

B and C omit this convenience, and end most of the statements with semicolons. Despite the differences, the majority of BCPL statements and operators map directly to corresponding B and C. 

Some of the structural differences between BCPL and B stemmed from the limitations of the intermediate memory. BCPL declarations, for example, may take the form 

let P1 be command

and P2 be command

and P3 be command

 ...


where the program text represented by the commands contains whole procedures. The subdeclarations connected by and occur simultaneously, so the name P3 is known inside procedure P1. Similarly, BCPL can package a group of declarations and statements into an expression that yields a value, for example

E1 := valof ( declarations ; commands ; resultis E2 ) + 1

 

The BCPL compiler can easily handle such constructs by storing and analyzing the parsed representation of the entire program in memory before the output is generated. The storage limitations of the B compiler required a one-pass technique in which output was generated as soon as possible, and the syntactic redesign that made this possible was forwarded to C.

Some of the less pleasant aspects of BCPL were due to its own technical problems and were consciously avoided in the design of B. For example, BCPL used a 'global vector' mechanism for communicating between separately compiled programs.

Other fiddles in the transition from BCPL to B were introduced as a matter of taste, and some remain controversial, such as the decision to use the single character = for assignment instead of :=. Similarly, B uses /* */ to enclose comments, where BCPL uses // to ignore text up to the end of the line.

Here, the legacy of PL/I is evident. (C++ has resurrected the BCPL comment convention.) Fortran influenced the syntax of declarations: B declarations begin with a specifier like auto or static, followed by a list of names, and C not only followed this style but decorated it by placing its type keywords at the start of declarations.

Not every difference between the BCPL language documented in Richards's book [Richards 79] and B was deliberate; we started from an earlier version of BCPL [Richards 67]. For example, the endcase statement that escapes from a BCPL switchon statement was not present in the language when we learned it in the 1960s, so the overloading of the break keyword to escape from the B and C switch statement owes to diverging evolution rather than conscious change.

In contrast to the widespread syntactic variation that occurred during the creation of B, the core semantic content of BCPL — its type structure and expression evaluation rules — remained intact. Both languages are typeless, or rather have a single data type, the 'word' or 'cell,' a fixed-length bit pattern. Memory in these languages consists of a linear array of such cells, and the meaning of the contents of a cell depends on the operation applied. The + operator, for example, simply adds its operands using the machine's integer add instruction, and the other arithmetic operations are equally unconscious of the actual meaning of their operands. Because memory is a linear array, the value in a cell can be interpreted as an index in this array, and BCPL supplies an operator for this purpose. In the original language it was spelled rv, and later !, while B uses the unary *. Thus, if p is a cell containing the index of (or address of, or pointer to) another cell, *p refers to the contents of the pointed-to cell, either as a value in an expression or as the target of an assignment.

 Because BCPL and B pointers are only integer indices in the memory array, their arithmetic is meaningful: if p is the address of the cell, then p+1 is the address of the next cell. This convention is the basis for array semantics in both languages. When one writes in the BCPL 

let V = vec 10

or in B,

auto V[10];

 

The effect is the same: a cell named V is allocated, then another group of 10 contiguous cells is set aside, and the memory index of the first of these is placed in V. By a general rule, in B the expression

 *(V+i) 


adds V and i, and refers to the i-th location after V. Both BCPL and B add special notation to sweeten such array accesses; in B an equivalent expression is

V[i]

and in BCPL

V!i

 

Even at that time, this approach to arrays was unusual; C would later assimilate it in an even less conventional manner.

None of BCPL, B, or C supports character data strongly in the language; each treats strings much like vectors of integers and supplements general rules by a few conventions. In both BCPL and B, a string literal denotes the address of a static area initialized with the characters of the string, packed into cells. In BCPL, the first packed byte contains the number of characters in the string; in B, there is no count and strings are terminated by a special character, which B spelled '*e.' This change was made partially to avoid the limitation on the length of a string caused by holding the count in an 8- or 9-bit slot, and partly because maintaining the count seemed, in our experience, less convenient than using a terminator.

In general, individual characters in the BCPL string were manipulated by spreading the string to another array, one character per cell, and then repackaging it later; B provided the corresponding routines, but people more often used other library functions that accessed or replaced individual characters in a string.

Where Does the Success of C Come From?

 

C has succeeded to a degree far surpassing any early expectations. What qualities have contributed to its widespread use?

C Language

Undoubtedly, the success of Unix itself was the most important factor; it made the language available to hundreds of thousands of people. Conversely, of course, Unix's use of C and its consequent portability to a wide variety of machines was important to the system's success. But the language's invasion of other environments suggests more fundamental merits.

Despite some aspects mysterious to the beginner and occasionally even to the adept, C remains a simple and small language, translatable with simple and small compilers. Its types and operations are well-grounded in those provided by real machines, and for people used to how computers work, learning the idioms for generating time- and space-efficient programs is not difficult. At the same time, the language is sufficiently abstracted from machine details that program portability can be achieved.

Equally important, C and its central library support have always remained in touch with the real environment. It was not designed in isolation to prove a point or to serve as an example, but as a tool for writing programs that did useful things; it was always meant to interact with a larger operating system, and was seen as a tool for constructing larger tools. The parsimonious, pragmatic approach has influenced the things that went into C: it covers the essential needs of many programmers, but it does not try to supply too much.

Finally, despite the changes it has undergone since its first published description, which was admittedly informal and incomplete, the actual C language as seen by millions of users using many different compilers has remained remarkably stable and unified compared to those of similarly widespread currency, such as Pascal and Fortran. There are differing dialects of C — most notably those described by the older K&R and the newer Standard C — but, on the whole, C has remained freer of proprietary extensions than other languages. Perhaps the most significant extensions are the 'far' and 'near' pointer qualifications intended to deal with the peculiarities of some Intel processors. Although C was not originally designed with portability as a prime goal, it succeeded in expressing programs, even including operating systems, on machines ranging from the smallest personal computers to the mightiest supercomputers.

C is quirky, flawed, and an enormous success. While accidents of history surely helped, it evidently satisfied a need for a system implementation language efficient enough to displace assembly language, yet sufficiently abstract and fluent to describe algorithms and interactions in a wide variety of environments.

Friday, June 26, 2020

Critiques on C Language

Critique
 

Two ideas are most characteristic of C among languages of its class: the relationship between arrays and pointers, and the way in which declaration syntax mimics expression syntax. They are also among its most frequently criticized features, and often serve as stumbling blocks to the beginner. In both cases, historical accidents or mistakes have exacerbated their difficulty. The most important of these has been the tolerance of C compilers to errors in type. As should be clear from the history above, C evolved from typeless languages. It did not suddenly appear to its earliest users and developers as an entirely new language with its own rules; instead, we continually had to adapt existing programs as the language developed, and to make allowance for an existing body of code. (Later, the ANSI X3J11 committee standardizing C would face the same problem.)


Compilers in 1977, and even well after, did not complain about usages such as assigning between integers and pointers, or using objects of the wrong type to refer to structure members. Although the language definition presented in the first edition of K&R was reasonably (though not entirely) consistent in its treatment of the type rules, that book admitted that existing compilers did not enforce them. Moreover, some rules designed to ease early transitions contributed to later confusion. For example, the empty square brackets in the function definition

int f(a) int a[]; { ... }

  

are a living fossil, a remnant of NB's way of declaring a pointer; a is, in this special case only, interpreted in C as a pointer. The notation survived in part for the sake of compatibility, in part under the rationalization that it would allow programmers to communicate to their readers an intent to pass f a pointer generated from an array, rather than a reference to a single integer. Unfortunately, it serves as much to confuse the learner as to alert the reader.

 

In K&R C, supplying arguments of the proper type to a function call was the responsibility of the programmer, and the extant compilers did not check for type agreement. The failure of the original language to include argument types in the type signature of a function was a significant weakness, indeed the one that required the X3J11 committee's boldest and most painful innovation to repair. The early design is explained (if not justified) by the avoidance of technological problems, especially cross-checking between separately compiled source files, and by my incomplete assimilation of the implications of moving between an untyped and a typed language. The lint program, mentioned above, tried to alleviate the problem: among its other functions, lint checks the consistency of an entire program by scanning a set of source files, comparing the types of function arguments used in calls with those used in their definitions.


C Language

 

An accident of syntax contributed to the perceived complexity of the language. The indirection operator, spelled * in C, is syntactically a unary prefix operator, just as in BCPL and B. This works well in simple expressions, but in more complex cases, parentheses are required to direct the parsing.

 

For example, to distinguish indirection through the value returned by a function from calling a function designated by a pointer, one writes *fp() and (*pf)() respectively. The style used in expressions carries through to declarations, so the names might be declared

int *fp();

int (*pf)();

 

In more ornate but still realistic cases, things get worse:

int *(*pfp)();

 

declares a pointer to a function returning a pointer to an integer. Two effects are occurring. Most important, C has a relatively rich set of ways of describing types (compared, say, with Pascal). Declarations in languages as expressive as C — Algol 68, for example — describe objects equally hard to understand, simply because the objects themselves are complex. A second effect owes to details of the syntax. Declarations in C must be read in an 'inside-out' style that many find difficult to grasp [Anderson 80]. Sethi [Sethi 81] observed that many of the nested declarations and expressions would become simpler if the indirection operator had been taken as a postfix operator instead of prefix, but by then it was too late to change. Despite its difficulties, I believe that the C approach to declarations remains plausible, and I am comfortable with it; it is a useful unifying principle.

 

The other characteristic feature of C, its treatment of arrays, is more suspect on practical grounds, though it too has real virtues. Although the relationship between pointers and arrays is unusual, it can be learned. Moreover, the language shows considerable power to describe important concepts, for example vectors whose length varies at run time, with only a few basic rules and conventions. In particular, character strings are handled by the same mechanisms as any other array, plus the convention that a null character terminates a string. It is interesting to compare C's approach with that of two nearly contemporaneous languages, Algol 68 and Pascal [Jensen 74].

 

Arrays in Algol 68 either have fixed bounds or are 'flexible': considerable mechanism is required both in the language definition and in compilers to accommodate flexible arrays (and not all compilers fully implement them). Original Pascal had only fixed-sized arrays and strings, and this proved confining [Kernighan 81]. Later, this was partially fixed, though the resulting language is not yet universally available.

 

C treats strings as arrays of characters conventionally terminated by a marker. Aside from one special rule about initialization by string literals, the semantics of strings are fully subsumed by the more general rules governing all arrays, and as a result the language is simpler to describe and to translate than one incorporating the string as a unique data type. Some costs accrue from this approach: certain string operations are more expensive than in other designs because application code or a library routine must occasionally search for the end of a string, because few built-in operations are available, and because the burden of storage management for strings falls more heavily on the user. Nevertheless, C's approach to strings works well.

 

Mistakes


On the other hand, C's treatment of arrays in general (not just strings) has unfortunate implications both for optimization and for future extensions. The prevalence of pointers in C programs, whether declared explicitly or arising from arrays, means that optimizers must be cautious and must use careful dataflow techniques to achieve good results. Sophisticated compilers can understand what most pointers can possibly change, but some important usages remain difficult to analyze. For example, functions with pointer arguments derived from arrays are hard to compile into efficient code on vector machines, because it is seldom possible to determine that one argument pointer does not overlap data also referred to by another argument, or accessible externally. More fundamentally, the definition of C so specifically describes the semantics of arrays that changes or extensions treating arrays as more primitive objects, and permitting operations on them as wholes, become hard to fit into the existing language. Even extensions to permit the declaration and use of multidimensional arrays whose size is determined dynamically are not entirely straightforward [MacDonald 89] [Ritchie 90], although they would make it much easier to write numerical libraries in C. Thus, C covers the most important uses of strings and arrays arising in practice by a uniform and simple mechanism, but leaves problems for highly efficient implementations and for extensions.

 

There are, of course, many smaller infelicities in the language and its description besides those discussed above, and there are general criticisms to be made that go beyond detailed points. Chief among these is that the language and its generally expected environment provide little help for writing very large systems. The naming structure provides only two main levels, 'external' (visible everywhere) and 'internal' (within a single procedure). An intermediate level of visibility (within a single file of data and procedures) is only weakly tied to the language definition. Thus, there is little direct support for modularization, and project designers are forced to invent their own conventions.

 

Similarly, C itself provides only two durations of storage: 'automatic' objects that exist while control resides in or below a procedure, and 'static' objects that exist throughout the execution of a program. Off-stack, dynamically allocated storage is provided only by a library routine, and the burden of managing it is placed on the programmer: C is hostile to automatic garbage collection.

Thursday, June 25, 2020

Short History of C Programming





HISTORY OF C

There are a lot of programming languages in use today, one of which is C. Several languages descend from or are named after C, including Objective-C, C++, and C#, but none of these is the same language as C. So, how did C get started?

Wednesday, June 24, 2020

WHAT IS THE HISTORY OF C PROGRAMMING LANGUAGE

AN OVERVIEW OF THE HISTORY OF THE C PROGRAMMING LANGUAGE


C Language

       C programming language is a structured programming language, developed by Dennis Ritchie at Bell Laboratories in 1972.

       C's features were derived from an earlier language called "B", which itself descended from BCPL (the Basic Combined Programming Language).

       In earlier days, programs were written in assembly language, so very large amounts of code had to be written to perform even specific tasks.

       But the "B" language could perform the same tasks in a few lines of code, and programs could be developed faster than with assembly code.

       However, the B language did not support certain features, such as data types and structures, which was a drawback. Dennis Ritchie developed C by keeping most of B and adding features that produced powerful and efficient programs.

       So, the C language was invented to implement the UNIX operating system. Most of the UNIX components have been rewritten in C.

       In 1978, Dennis Ritchie and Brian Kernighan published the first edition of The C Programming Language, commonly known as K&R C.

       In 1983, the American National Standards Institute (ANSI) set up a committee to provide a modern, comprehensive definition of C. The resulting definition, "ANSI C," was completed at the end of 1988.

       Standard C89/C90 – The first standardized C-language specification was ratified by ANSI in 1989 (C89) and adopted by ISO in 1990 (C90); the two standards describe the same programming language.

       Standard C99 – The next revision was published in 1999 and introduced new features such as additional data types and other changes.

       The C11 standard adds new features to the language and library, such as type-generic macros, anonymous structures, improved Unicode support, atomic operations, multi-threading, and bounds-checked functions. It also makes some parts of the existing C99 library optional and improves compatibility with C++.

Embedded C includes features not available in normal C, such as fixed-point arithmetic, named address spaces, and basic I/O hardware addressing.
