Wednesday, July 22, 2020

Google's Project Zero team will not apply for Apple's SRD program


Other security researchers have expressed similar intentions to skip the Apple SRD program after its rules gave Apple full control of the vulnerability disclosure process.

Some of the top names in iPhone vulnerability research announced plans today to skip Apple's new Security Research Device (SRD) program due to Apple's restrictive disclosure rules, which effectively muzzle security researchers.

The list includes Project Zero (Google's elite bug-hunting team), Will Strafach (CEO of Guardian Mobile Security), ZecOps (a mobile security firm that recently discovered a series of iOS attacks), and Axi0mX (an iOS vulnerability researcher and the author of the Checkm8 iOS exploit).

 

What's the Apple SRD program?




The Security Research Device (SRD) program is unique among smartphone makers. Through it, Apple has promised to provide security researchers with pre-sale iPhones.

These iPhones are modified to have fewer restrictions and to allow deeper access to the iOS operating system and the device's hardware, so security researchers can hunt for bugs they would not normally be able to find on standard iPhones, where the phone's default security features prevent security tools from probing deeper into the device.

Apple officially announced the SRD program in December 2019, when it also extended its bug bounty program to include more of its operating systems and platforms.

However, while the company teased the program last year, it wasn't until today that Apple launched it, publishing an official SRD website and emailing selected security researchers and bug hunters to invite them to apply for the vetting process required to receive one of the unlocked iPhones.

 

New Restrictive Rule

The new website also included the official rules of the SRD program, which security researchers had not had the chance to review in detail until today.

But while the security community welcomed Apple's SRD announcement last year as a first step in the right direction, it was far less happy with Apple today.

According to complaints shared on social media, one specific clause rubbed most security researchers the wrong way:

"If you report a vulnerability that affects Apple products, Apple will provide you with a release date (usually the date on which Apple releases the update to resolve the issue). Apple will work in good faith to resolve any vulnerabilities as soon as possible. You can not discuss the vulnerability with others until the release date."

The clause effectively allows Apple to muzzle security researchers: it gives the iPhone maker full control of the vulnerability disclosure process, letting it set the release date before which researchers in the SRD program may not talk about or publish anything on vulnerabilities found in iOS and the iPhone.

Many security researchers now fear that Apple will abuse this clause to delay major patches and drag its feet on delivering much-needed security updates by postponing the release date after which they are allowed to talk about iOS bugs. Others fear that Apple will use the clause to bury their findings and prevent them from publishing at all.

 

Project Zero and others decide not to apply

The first to notice and understand the implications of this clause was Ben Hawkes, the leader of Google's Project Zero team.

"It looks like we're not going to be able to use the Apple 'Security Research Device' because of the vulnerability restrictions that seem specifically designed to exclude Project Zero and other researchers using a 90-day policy," Hawkes said on Twitter today.

Hawkes' tweet received a lot of attention from the infosec community, and other security researchers soon followed the team's decision. Speaking to ZDNet's sister site, CNET, Will Strafach also said that he was not going to join the program because of the same clause.

On Twitter, the cybersecurity firm ZecOps also announced that it would skip the SRD program and continue to hack iPhones the old-fashioned way.

In a conversation with ZDNet, security researcher Axi0mX said they were thinking of not participating as well.

"Disclosure time limits are standard practice in the industry. They are necessary," said the researcher.

"Apple requires researchers to wait for an unlimited amount of time, at Apple's discretion, before any bugs found in the Security Research Device Program can be revealed. There is no time limit. This is a poison pill," he added.

Alex Stamos, Facebook's former Chief Information Security Officer, also criticized Apple's move, part of a larger set of decisions the company has taken in recent months against the cybersecurity and vulnerability research community, including a lawsuit against a mobile device virtualization company that helps security researchers track iOS bugs.

It's one thing to see no-name security researchers criticizing a security program, but it's another to see the industry's biggest names attacking one.

 

Apple's security programs are not well regarded

For those who have followed Apple's security programs, the fear that Apple might abuse the SRD rules to bury important iOS bugs and research is justified: the company has been accused of the same practice before.

In a series of tweets published in April, macOS and iOS developer Jeff Johnson attacked the company for not being serious enough about its security work.

"I 'm thinking about withdrawing from the Apple Security Bounty program," said Johnson. "I don't see any evidence that Apple is serious about the program. I've heard of just 1 bounty payment, and the bug wasn't Mac-specific. Also, Apple Product Security has ignored my last email to them for weeks.

"Apple announced the program in August, did not open it until a few days before Christmas and has not yet paid a single Mac security researcher to my knowledge. This is a joke. I think the goal is to keep researchers quiet about bugs for as long as possible, "Johnson said.


Self Driving Car Project For Computer Science



INTRODUCTION

The use and production of cars has become a leading industry in almost every part of the world. Over the years, this industry has gone through enormous development: the first vehicles were powered only by steam engines, then petrol and diesel engines captured the public's imagination, and currently it seems that electric propulsion will be the future. This development has brought faster and more useful vehicles, but in our accelerated world, with more and more cars on the road, the number of accidents has unfortunately increased.

In most cases these accidents are the driver's fault, so in theory the driver could be replaced with the help of self-driving cars. Human presence is still the most important part of transport at present, although there are already many tools and features that help people achieve greater efficiency: the autopilot on aircraft, cruise control in cars, and many other decision-support aids.

 

EVOLUTION OF SELF-DRIVING CARS

Autonomous cars are vehicles driven by digital technologies without any human intervention. They are capable of driving and navigating themselves on the roads by sensing their environment. They are designed to occupy less space on the road, to ease traffic jams and reduce the likelihood of accidents.

The dream of self-driving cars goes back centuries before the invention of the automobile; one piece of evidence comes from sketches by Leonardo da Vinci, in which he drew a rough plan for such a vehicle. Later, robots and the vehicles they controlled appeared in literature and in several science fiction novels. The first driverless cars were prototyped in the 1920s, but they looked very different from today's. Although nominally driverless, these vehicles relied heavily on specific external inputs; in one solution, the car was controlled by another car following behind it. A prototype of this kind, demonstrated in New York and Milwaukee, was known as the "American Wonder" or "Phantom Auto".

Most of the big names (Mercedes-Benz, Audi, BMW, Tesla, Hyundai, etc.) have begun developing or forming partnerships around autonomous technology. They have invested sizable resources, hoping to become leaders in the market for self-driving cars.

Up to this point, numerous aids, software systems, and sensors have been built into these cars, but we are still far from full autonomy.

They scan the environment with lasers using LIDAR (Light Detection and Ranging). This optical technology senses the shape and movement of objects around the car; combined with a digital GPS map of the area, it detects the white and yellow lines on the road, as well as all stationary and moving objects around the vehicle. Today's autonomous vehicles may only drive themselves if a human driver can take over control when needed.

 

These are the features that driverless cars already use:

•Collision avoidance

•Lane-drift warning

•Blind-spot detectors

•Enhanced cruise control

•Self-parking

 

Below we briefly present some companies that play the most important role in the innovation of this segment, to show how this industry has developed.

 

Tesla 

Elon Musk, the Chief Executive Officer of Tesla, claims that every Tesla car will be completely autonomous within two years. Tesla's Model S is a semi-self-driving car in which different cars learn from each other while working together: the signals processed by one car's sensors are shared with other cars, so they improve one another. This information teaches the cars about changing lanes and detecting obstacles, and they improve continually from day to day. Since October 2016, all Tesla vehicles have been built with Autopilot Hardware 2, a sensor and computing package that the company claims will allow complete self-driving without human interference.


Google 

The Google team has been working on driverless cars for years, and last year it presented a working prototype. Google has also fitted its self-driving technology to other manufacturers' cars, such as the Toyota Prius, Audi TT, and Lexus RX450h. Its autonomous vehicle uses Bosch sensors and other equipment manufactured by LG and Continental. In 2014, Google planned a driverless car without pedals or a steering wheel that would be available to the general public by 2020, but judging by current trends, that goal is still unlikely to be met.

 

nuTonomy 

A small group of graduates of the Massachusetts Institute of Technology (MIT) created nuTonomy's software and algorithms specifically for self-driving cars. In Singapore, nuTonomy has already fitted sensors to a Mitsubishi i-MiEV electric car prototype, so its algorithms can drive the car on complex urban roads using GPS and LiDAR sensors. In November 2016, the company announced that its self-driving cars would be tested in Boston as well.

The National Highway Traffic Safety Administration (NHTSA) adopted the Society of Automotive Engineers' levels for automated driving systems, which span a broad spectrum from total human participation to total autonomy. The NHTSA expects automobile manufacturers to classify each vehicle in the coming years using the SAE levels 0 to 5.




These are the SAE levels:


Level 0: No Automation

At this level there is 100% human presence: acceleration, braking, and steering are constantly controlled by a human driver, even if warning sounds or safety intervention systems assist them. This level also covers automated emergency braking.

 

Level 1: Driver Assistance 

The computer never controls steering and acceleration or braking simultaneously. In certain driving modes, the car can take control of either the steering wheel or the pedals. The best examples of this level are adaptive cruise control and parking assistance.

 

Level 2: Partial Automation 

The driver can take his hands off the steering wheel. At this level there are modes in which the car can control both the pedals and the steering wheel at the same time, but only under certain circumstances. During this time the driver has to pay attention and intervene if necessary. This is what Tesla's Autopilot has offered since 2014.

 

Level 3: Conditional Automation 

This level approaches full autonomy, but it is tricky in terms of liability, so driver attention remains a very important element. The car can take full responsibility for driving in certain circumstances, but the driver must take back control when the system asks. At this level, the car can decide when to change lanes and how to respond to dynamic events on the road, using the human driver as a backup system.

 

Level 4: High Automation 

It is similar to the previous level, but much safer. The vehicle can drive itself under suitable circumstances without human intervention. If the car encounters something it cannot handle, it will ask for human help, but it will not endanger passengers if there is no human response. Such cars are close to fully self-driving.

 

Level 5: Full Automation 

At this level, the car drives itself, and human presence is not a necessity, only an option. The front seats can turn backward so passengers can talk more easily with each other, because the car does not need help with driving. All driving tasks are performed by the computer on any road under any circumstances, whether there's a human on board or not.
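
To make the taxonomy concrete, here is a minimal, hypothetical C sketch of the SAE levels as a program might encode them; the enum names and the helper function are illustrative, not taken from any standard's code.

#include <stdio.h>

/* SAE automation levels, 0 through 5, as described above. */
enum sae_level {
    SAE_NO_AUTOMATION = 0,      /* human controls everything */
    SAE_DRIVER_ASSISTANCE,      /* steering OR speed, never both */
    SAE_PARTIAL_AUTOMATION,     /* steering AND speed, driver supervises */
    SAE_CONDITIONAL_AUTOMATION, /* car drives, driver is the fallback */
    SAE_HIGH_AUTOMATION,        /* no fallback needed in its design domain */
    SAE_FULL_AUTOMATION         /* any road, any conditions, no human */
};

/* Levels 0-3 still require a human ready to intervene. */
static int human_fallback_required(enum sae_level lv)
{
    return lv <= SAE_CONDITIONAL_AUTOMATION;
}

int main(void)
{
    printf("%d\n", human_fallback_required(SAE_PARTIAL_AUTOMATION)); /* 1 */
    printf("%d\n", human_fallback_required(SAE_HIGH_AUTOMATION));    /* 0 */
    return 0;
}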


These levels are useful because they let us keep track of what happens as we move from human-driven cars to fully automated ones. This transition will have enormous consequences for our lives, our work, and our future travel. As autonomous driving options spread, the most advanced sensing, vision, and control technologies allow cars to detect and monitor all objects around them, relying on real-time measurements.

In addition, the information technology built into the vehicle is fully capable of delivering both external (environmental) and internal (machine) information to the car.

 

DECISIONS

Self-driving cars may be the future of transportation, but we do not yet know whether they are safer than non-autonomous driving. There are unexpected events during driving that force us to decide; often these are tiny things, such as whether to pass through a yellow light, but sometimes situations arise where we have to decide between the lives of others and our own.

Trusting new technologies is expected to be a significant challenge for the public. Few people feel comfortable using a new and unproven transportation technology, as the history of aviation shows.

 

These problems may arise:

  • How should the car be programmed to act in the event of an unavoidable accident?
  • Should it minimize the loss of life even if it means sacrificing the occupants, or should it protect the occupants at all costs?
  • Should it choose between these extremes at random?

 

Answers to these ethical questions are important because they can have a great impact on society's willingness to accept autonomous cars. Who would buy a car that is programmed to sacrifice its owner?

 

Points to be careful about:

1. An Unregulated Industry

Information about the technology is limited, and although some 200 car companies are jumping into the self-driving car space, there are not enough solid facts to create a baseline for safety standards. As yet, the industry is unregulated, which is excellent for manufacturers but bad for consumers.


2. More Accidents Blending Self-Driving and Manual Cars

Self-driving cars sometimes give passengers a false sense of security when they should in fact be extra cautious and ready to take the wheel at any moment, should the need arise.


3. Vulnerability to Hacking & Remote Control

Any computer device connected to the internet is vulnerable to hacking. These cars also rely heavily on the software that runs their components, and if a hacker gets into the system, they can control every aspect of the car.

Other dangers to be aware of are the theft of private data and even gaining remote access to a cell phone connected to the car via Bluetooth. Self-driving vehicles may also be more susceptible to computer viruses.


4. Computer Malfunctions

Most self-driving cars are made up of not one but 30 to 100 computers. That is a lot of technology where things could go wrong. The software that runs self-driving cars is admittedly sophisticated. However, one of the more difficult challenges that engineers struggle to solve is how to operate smoothly in all weather conditions. Correctly controlling the sensors on the rear camera is also an issue. A particularly dangerous failure mode is knowing when to execute a quick stop when someone is in the crosswalk in front of the car. Other concerns that should be solved before these cars hit the road are freeze-ups during autopilot mode and accounting for the unpredictable behavior of other motorists.


5. Exposure to Radiation

With all the goodies on board, such as GPS, remote controls, power accessories, Bluetooth, Wi-Fi, music, and radio components, drivers will be increasingly exposed to higher levels of electromagnetic field radiation. Such exposure has been blamed for a myriad of health problems, among them high blood pressure, difficulty breathing, migraine headaches, eye issues, exhaustion, and sleeplessness.



CONCLUSIONS

This is quite a new topic, still closer to science fiction than to everyday reality, but several companies are trying to solve the task despite its many problems. We showed that the main problem is the fear of losing control: if a computer decides instead of us, we do not control the process. Every computer and program may have a back door, and the question is what can be done if someone breaks into the very computer that is supposed to keep us safe.

 

Friday, July 17, 2020

Where the C Language Grew Up: The Development of the C Language

The Growth of C


UNIX

The success of our portability experiment on the Interdata 8/32 soon led to another, by Tom London and John Reiser, on the DEC VAX 11/780. This machine became much more popular than the Interdata, and Unix and C began to spread rapidly, both inside and outside AT&T. Although by the mid-1970s Unix was in use by a variety of projects within the Bell System as well as a small group of research-oriented industrial, academic, and government organizations outside our company, its real growth began only after portability had been achieved. Of particular note were the System III and System V versions of the system from the emerging Computer Systems division of AT&T, based on the work of the company's development and research groups, and the BSD series of releases from the University of California at Berkeley, derived from the research organizations at Bell Laboratories.

In the 1980s, the use of the C language spread widely, and compilers became available on almost every machine architecture and operating system; in particular, it became popular as a programming tool for personal computers, both for manufacturers of commercial software for those machines and for end users interested in programming. At the beginning of the decade, nearly every compiler was based on Johnson's PCC; by 1985, many independently produced compiler products were available.

 

Standardization

It was clear by 1982 that C required formal standardization. The best approximation to a standard, the first edition of K&R, no longer described the language in actual use; in particular, it mentioned neither the void nor the enum type. While it foreshadowed the newer approach to structures, only after it was published did the language support assigning them, passing them to and from functions, and associating the names of members firmly with the structure or union containing them. Although the compilers distributed by AT&T incorporated these changes, and most non-PCC compiler vendors quickly picked them up, no complete, authoritative description of the language remained.

The first edition of K&R was also insufficiently precise on many language details, and it became increasingly impractical to regard PCC as a 'reference compiler'; it did not fully embody even the language described by K&R, let alone the subsequent extensions. Finally, the incipient use of C in commercial and government contract projects meant that the imprimatur of an official standard was important. Thus (at the urging of M. D. McIlroy), ANSI established the X3J11 committee under the direction of CBEMA in the summer of 1983, with the goal of producing a C standard. At the end of 1989, X3J11 produced its report [ANSI 89], which was subsequently accepted by ISO as ISO/IEC 9899-1990.



From the outset, the X3J11 committee took a cautious, conservative view of the language extensions. To my great satisfaction, they took their objective seriously: 'to develop a clear, consistent and unmistakable C programming language standard that codifies the common, existing C definition and promotes the portability of user programs across C language environments.' [ANSI 89] The Committee realized that the mere promulgation of a standard does not change the world.

X3J11 introduced only one truly important change to the language itself: it incorporated the types of formal arguments into the type signature of a function, using syntax borrowed from C++ [Stroustrup 86]. In the old style, external functions were declared like this:

double sin();

 

This says only that sin is a function returning a double (i.e., double-precision floating-point) value. In the new style, this was better rendered as

double sin(double);

 

to make the argument type explicit and thus encourage better type-checking and appropriate conversion. Even this addition, though it produced a considerably better language, caused difficulties. The committee rightly felt that it was not feasible simply to outlaw 'old-style' function definitions and declarations, but it also agreed that the new forms were better. The inevitable compromise was as good as it could have been, though the language definition is complicated by permitting both forms, and writers of portable software have to contend with compilers not yet up to standard.
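
A minimal sketch of the difference in practice (an illustrative example, not from the original paper): with the old-style declaration the compiler cannot check or convert arguments, while the prototype makes the call safe.

#include <stdio.h>

/* Old style: the compiler knows only the return type.
   A call such as f(2) would pass the int 2 unconverted. */
double f();

/* New style: the argument type is part of the signature,
   so the compiler checks calls and converts arguments. */
double g(double);

double g(double x) { return x * 2.0; }

int main(void)
{
    printf("%f\n", g(2));  /* the int 2 is quietly converted to 2.0 */
    return 0;
}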

X3J11 also introduced a host of smaller additions and adjustments, such as the const and volatile type qualifiers and slightly different type promotion rules. Nevertheless, the standardization process did not change the character of the language. In particular, the C standard did not attempt to specify the language's semantics formally, so there could be disputes over fine points; nevertheless, it successfully accounted for changes in usage since the original description and is sufficiently precise to base implementations on it.

The core C language escaped almost undisturbed from the standardization process, and the Standard emerged more as a better, more careful codification than a new invention. More important changes took place in the language's environment: the preprocessor and the library. The preprocessor performs macro substitution, using conventions distinct from the rest of the language. Its interaction with the compiler had never been well described, and X3J11 attempted to remedy the situation. The result is noticeably better than the explanation in the first edition of K&R; besides being more comprehensive, it provides operations, such as token concatenation, that were previously available only through accidents of implementation.
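
For illustration, a small sketch of the token concatenation the Standard's preprocessor guarantees (the ## operator; the macro name here is made up):

#include <stdio.h>

/* ## pastes two tokens into a single identifier at preprocessing time. */
#define MAKE_COUNTER(name) int counter_##name = 0

MAKE_COUNTER(errors);   /* expands to: int counter_errors = 0; */
MAKE_COUNTER(retries);  /* expands to: int counter_retries = 0; */

int main(void)
{
    counter_errors += 3;
    counter_retries += 1;
    printf("%d %d\n", counter_errors, counter_retries);  /* 3 1 */
    return 0;
}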

X3J11 correctly believed that a full and careful description of a standard C library was as important as its work on the language itself. The C language itself does not provide input-output or any other interaction with the outside world, and thus depends on a set of standard procedures. At the time K&R was published, C was thought of mainly as the system programming language of Unix; although we provided examples of library routines intended to be readily transportable to other operating systems, the underlying support from Unix was implicitly understood. Accordingly, the X3J11 committee spent much of its time designing and documenting a set of library routines required to be available in all conforming implementations.

The current activities of the X3J11 committee are limited by the rules of the standards process to issuing interpretations of the existing standard. However, an informal group originally convened by Rex Jaeschke as NCEG (Numerical C Extensions Group) was officially accepted as subgroup X3J11.1 and continues to consider extensions to C. As the name implies, many of these possible extensions are intended to make the language more suitable for numerical use: for example, multi-dimensional arrays whose bounds are dynamically determined, incorporation of facilities for IEEE arithmetic, and making the language more effective on vector machines or other advanced architectures. Not all the possible extensions are specifically numerical; they include a notation for structure literals.
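
One of those numerical extensions, dynamically-bounded multi-dimensional arrays, was later standardized in C99 as variable-length arrays; a minimal sketch of the idea in modern C (illustrative, not NCEG's original notation):

#include <stdio.h>

/* A two-dimensional array whose bounds are run-time values,
   not compile-time constants (a C99 variable-length array). */
static double sum(int rows, int cols, double a[rows][cols])
{
    double s = 0.0;
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < cols; j++)
            s += a[i][j];
    return s;
}

int main(void)
{
    double m[2][3] = { { 1, 2, 3 }, { 4, 5, 6 } };
    printf("%f\n", sum(2, 3, m));  /* 21.000000 */
    return 0;
}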

 

Successors:


B

C and even B have several direct descendants, though they do not rival Pascal in the generation of offspring. One side branch developed early. When Steve Johnson visited the University of Waterloo on sabbatical in 1972, he brought B with him. It became popular on Honeywell machines there, and later spawned Eh and Zed (the Canadian answers to 'what follows B?'). When Johnson returned to Bell Labs in 1973, he was disconcerted to find that the language whose seeds he had brought to Canada had evolved back home; even his own yacc program had been rewritten in C, by Alan Snyder. More recent descendants of C include Concurrent C [Gehani 89], Objective C [Cox 86], C* [Thinking 90] and, in particular, C++ [Stroustrup 86].

The language is also widely used as an intermediate representation (essentially as a portable assembly language) for a wide variety of compilers, both for direct descendants such as C++ and for independent languages such as Modula-3 and Eiffel.


Thursday, July 16, 2020

The problems with B and BCPL, and why the C language was needed

The Problems With B Language


C Language

The machines on which we first used BCPL and then B were word-addressed, and the single data type of these languages, the 'cell,' was comfortably matched to the hardware machine word. The advent of the PDP-11 exposed several inadequacies of the B semantic model. First, its character-handling mechanisms, inherited with few changes from BCPL, were clumsy: using library procedures to spread packed strings to individual cells and then repack them, or to access and replace individual characters, began to feel awkward, even stupid, on a byte-oriented machine.

Second, although the original PDP-11 did not provide for floating-point arithmetic, the manufacturer promised that it would soon be available. Floating-point operations were added to BCPL in our Multics and GCOS compilers by defining special operators, but this mechanism was possible only because, on the machines concerned, a single word was large enough to contain a floating-point number; this was not true for the 16-bit PDP-11. 

Finally, the B and BCPL models implied overhead in dealing with pointers: the language rules, by defining a pointer as an index in an array of words, forced pointers to be represented as word indices. Each pointer reference generated a run-time scale conversion from the pointer to the byte address expected by the hardware.

For all these reasons, it seemed that a typing scheme was necessary to cope with characters and byte addresses, and to prepare for the coming floating-point hardware. Other issues, particularly type safety and interface checking, did not seem as important then as they became later.

Apart from problems with the language itself, the threaded-code technique of the B compiler produced programs so much slower than their assembly-language counterparts that we discounted the possibility of re-coding the operating system or its central utilities in B.

In 1971, I began to extend the B language by adding a character type, and I also rewrote its compiler to generate PDP-11 machine instructions instead of threaded code. Thus, the transition from B to C was contemporaneous with the creation of a compiler capable of producing programs fast and small enough to compete with assembly language. I called the slightly-extended language NB, for 'new B.'

Embryonic C

NB existed so briefly that no full description of it was written. It provided the types int and char, arrays of them, and pointers to them, declared in a style typified by

int i, j;

char c, d;

int iarray[10];

int ipointer[];

char carray[10];

char cpointer[];

 

The semantics of the arrays remained exactly as in B and BCPL: the declarations of iarray and carray create cells dynamically initialized with a value pointing to the first of a sequence of 10 integers and characters, respectively. The declarations for ipointer and cpointer omit the size, to assert that no storage should be allocated automatically. Within procedures, the language's interpretation of the pointers was identical to that of the array variables: a pointer declaration created a cell differing from an array declaration only in that the programmer was expected to assign a referent, instead of letting the compiler allocate the space and initialize the cell.

The values stored in the cells bound to array and pointer names were the machine addresses, measured in bytes, of the corresponding storage area. Therefore, indirection through a pointer implied no run-time overhead to scale the pointer from word to byte offset. On the other hand, the machine code for array subscripting and pointer arithmetic now depended on the type of the array or the pointer: to compute iarray[i] or ipointer+i implied scaling the addend i by the size of the object referred to.
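
In today's C the same rule is easy to observe; a minimal illustrative sketch (modern C, not NB):

#include <stdio.h>

int main(void)
{
    int  iarray[10];
    char carray[10];
    int  *ip = iarray;
    char *cp = carray;

    /* Adding 3 to a pointer advances it by 3 whole objects, so the
       compiler scales the addend by the size of the pointed-to type. */
    printf("%ld\n", (long)((char *)(ip + 3) - (char *)ip));  /* 3 * sizeof(int), e.g. 12 */
    printf("%ld\n", (long)((char *)(cp + 3) - (char *)cp));  /* 3 * sizeof(char) = 3 */
    return 0;
}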

These semantics represented an easy transition from B, and I experimented with them for some months. Problems became evident when I tried to extend the type notation, especially to add structured (record) types. Structures, it seemed, should map in an intuitive way onto memory in the machine, but in a structure containing an array, there was no good place to stash the pointer containing the base of the array, nor any convenient way to arrange that it be initialized. For example, the directory entries of early Unix systems might be described in C as:

struct {

          int     inumber;

          char  name[14];

};

 

I wanted the structure not only to characterize an abstract object, but also to describe a collection of bits that could be read from a directory. Where could the compiler hide the name pointer requested by the semantics? Even if structures were thought more abstractly, and the pointer space could somehow be hidden, how could I deal with the technical problem of properly initializing these pointers when assigning a complicated object, perhaps one that specified structures containing arrays containing structures to arbitrary depth?

The solution constituted the crucial jump in the evolutionary chain between typeless BCPL and typed C. It eliminated the materialization of the pointer in storage, and instead caused the creation of the pointer when the array name is mentioned in an expression. The rule, which survives in today's C, is that values of array type are converted, when they appear in expressions, into pointers to the first of the objects making up the array.

This invention enabled most existing B code to continue to work, despite the underlying shift in the language's semantics. The few programs that assigned new values to an array name to adjust its origin — possible in B and BCPL, meaningless in C — were easily repaired. More important, the new language retained a coherent and workable (if unusual) explanation of the semantics of arrays, while opening the way towards a more comprehensive type structure.
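
A short illustrative sketch of the surviving rule in modern C (the struct mirrors the early Unix directory entry shown above; the variable names are made up): no pointer is stored anywhere, yet the array name behaves as one in expressions.

#include <stdio.h>

struct dirent14 {      /* the early Unix directory entry from above */
    int  inumber;
    char name[14];
};

int main(void)
{
    struct dirent14 d = { 42, "passwd" };

    /* d.name is an array: the struct stays a plain collection of bits
       that could be read straight from a directory. In an expression,
       d.name converts to a pointer to its first element. */
    char *p = d.name;
    printf("%d %s %c\n", d.inumber, p, *(d.name + 1));  /* 42 passwd a */
    return 0;
}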

The second innovation that most clearly distinguishes C from its predecessors is its fuller type structure and, in particular, its expression in the syntax of declarations. NB offered the basic types int and char, together with arrays of them and pointers to them, but no further ways of composition. Generalization was required: given an object of any type, it should be possible to describe a new object that gathers several into an array, yields it from a function, or is a pointer to it.

For each object of such a composed type, there was already a way to mention the underlying object: index the array, call the function, use the indirection operator on the pointer. Analogical reasoning led to a declaration syntax for names mirroring that of the expression syntax in which the names typically appear. Thus,

int i, *pi, **ppi;


declare an integer, a pointer to an integer, and a pointer to a pointer to an integer. The syntax of these declarations reflects the observation that i, *pi, and **ppi all yield an int type when used in an expression. Similarly,

int f(), *f(), (*f)();

 

declare a function returning an integer, a function returning a pointer to an integer, and a pointer to a function returning an integer;

int *api[10], (*pai)[10];

 

declare an array of pointers to integers, and a pointer to an array of integers. In all these cases, the declaration of a variable resembles its use in an expression whose type is the one named at the head of the declaration.
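
A brief illustrative sketch of the last pair in use (the variable names are made up), showing that each name, used as in its declaration, yields an int:

#include <stdio.h>

int main(void)
{
    int x = 5;
    int *api[10];     /* array of 10 pointers to int */
    int arr[10];
    int (*pai)[10];   /* pointer to an array of 10 ints */

    api[0] = &x;      /* index, then dereference: *api[0] is an int */
    pai = &arr;       /* dereference, then index: (*pai)[i] is an int */
    (*pai)[3] = *api[0];

    printf("%d\n", arr[3]);  /* 5 */
    return 0;
}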

The type composition scheme adopted by C owes a considerable debt to Algol 68, although it may not have emerged in the form that Algol's adherents would approve. The central notion that I captured from Algol was a type structure based on atomic types (including structures), composed of arrays, pointers (references), and functions (procedures). The concept of unions and casts of Algol 68 also had an influence that appeared later.

After creating the type system, the associated syntax, and the compiler for the new language, I felt that it deserved a new name; NB seemed insufficiently distinctive. I decided to follow the single-letter style and called it C, leaving open the question whether the name represented a progression through the alphabet or through the letters in BCPL.

Wednesday, July 15, 2020

What are the origins of the C language?

Origins of the C language


BCPL was designed by Martin Richards in the mid-1960s while he was visiting MIT, and was used during the early 1970s for several interesting projects, among them the OS6 operating system at Oxford [Stoy 72] and parts of the seminal Alto work at Xerox PARC [Thacker 79]. We became familiar with it because the MIT CTSS system [Corbato 62] on which Richards worked was used for Multics development. The original BCPL compiler was transported both to Multics and to the GE-635 GECOS system by Rudd Canaday and others at Bell Labs [Canaday 69]; during the final throes of Multics's life at Bell Labs and immediately after, it was the language of choice among the group of people who would later become involved with Unix.

 

BCPL, B, and C all fit firmly in the traditional procedural family typified by Fortran and Algol 60. They are particularly oriented towards system programming, are small and compactly described, and are amenable to translation by simple compilers. They are 'close to the machine' in that the abstractions they introduce are readily grounded in the concrete data types and operations supplied by conventional computers, and they rely on library routines for input-output and other interactions with an operating system. With less success, they also use library procedures to specify interesting control constructs such as coroutines and procedure closures. At the same time, their abstractions lie at a sufficiently high level that, with care, portability between machines can be achieved.

 

B Language


BCPL, B, and C differ syntactically in many details, but broadly they are similar. Programs consist of a sequence of global declarations and function (procedure) declarations. Procedures can be nested in BCPL, but may not refer to non-static objects defined in containing procedures. B and C avoid this restriction by imposing a more severe one: no nested procedures at all. Each of the languages (except for the earliest versions of B) recognizes separate compilation, and provides a means for including text from named files.

 

BCPL Language


Several of BCPL's syntactic and lexical mechanisms are more elegant and regular than those of B and C. For example, BCPL's procedure and data declarations have a more uniform structure, and it supplies a more complete set of looping constructs. Although BCPL programs are notionally supplied from an undelimited stream of characters, clever rules allow most semicolons to be elided after statements that end on a line boundary.

B and C omit this convenience, and end most statements with semicolons. Despite the differences, most of the statements and operators of BCPL map directly into corresponding B and C constructs.

Some of the structural differences between BCPL and B stemmed from the limitations of the intermediate memory. BCPL declarations, for example, may take the form 

let P1 be command

and P2 be command

and P3 be command

 ...


where the text of the program represented by the commands contains complete procedures. The subdeclarations connected by 'and' occur simultaneously, so the name P3 is known inside the procedure P1. Similarly, BCPL can package a group of declarations and statements into a value-yielding expression, for example

E1 := valof ( declarations ; commands ; resultis E2 ) + 1

 

The BCPL compiler can easily handle such constructs by storing and analyzing the parsed representation of the entire program in memory before the output is generated. The storage limitations of the B compiler required a one-pass technique in which output was generated as soon as possible, and the syntactic redesign that made this possible was forwarded to C.

Some less pleasant aspects of BCPL owed to its own technical problems and were consciously avoided in the design of B. For example, BCPL uses a 'global vector' mechanism for communicating between separately compiled programs.

Other fiddles in the transition from BCPL to B were introduced as a matter of taste, and some remain controversial, such as the decision to use a single character = for assignment instead of :=. Similarly, B uses /* */ to enclose comments, where BCPL uses // to ignore text up to the end of the line.

Here, the legacy of PL/I is evident. (C++ has resurrected the BCPL comment convention.) Fortran influenced the syntax of declarations: B declarations begin with a specifier like auto or static, followed by a list of names, and C not only followed this style but ornamented it by placing its type keywords at the start of declarations.

Not every difference between the BCPL language documented in Richards's book [Richards 79] and B was deliberate; we started from an earlier version of BCPL [Richards 67]. For example, the endcase construct that escapes from a BCPL switchon statement was not present in the language when we learned it in the 1960s, and so the overloading of the break keyword to escape from the B and C switch statement owes to divergent evolution rather than conscious change.

In contrast to the pervasive syntax variation that occurred during the creation of B, the core semantic content of BCPL — its type structure and expression evaluation rules — remained intact. Both languages are typeless, or rather have a single data type, the 'word' or 'cell,' a fixed-length bit pattern. Memory in these languages consists of a linear array of such cells, and the meaning of the contents of a cell depends on the operation applied. The + operator, for example, simply adds its operands using the machine's integer add instruction, and the other arithmetic operations are equally unconscious of the actual meaning of their operands. Because memory is a linear array, the value in a cell can be interpreted as an index in this array, and BCPL supplies an operator for this purpose. In the original language it was spelled rv, and later !, while B uses the unary *. Thus, if p is a cell containing the index of (or address of, or pointer to) another cell, *p refers to the contents of the pointed-to cell, either as a value in an expression or as the target of an assignment.

Because pointers in BCPL and B are merely integer indices in the memory array, arithmetic on them is meaningful: if p is the address of a cell, then p+1 is the address of the next cell. This convention is the basis for the semantics of arrays in both languages. When one writes in BCPL

let V = vec 10

or in B,

auto V[10];

 

The effect is the same: a cell named V is allocated, then another group of 10 contiguous cells is set aside, and the memory index of the first of these is placed into V. By a general rule, in B the expression

 *(V+i) 


adds V and i, and refers to the i-th location after V. Both BCPL and B add special notation to sweeten such array accesses; in B an equivalent expression is

V[i]

and in BCPL

V!i

 

Even at that time, this approach to arrays was unusual; C would later assimilate it in an even less conventional manner.

None of BCPL, B, or C supports character data strongly in the language; each treats strings much like vectors of integers and supplements general rules with a few conventions. In both BCPL and B, a string literal denotes the address of a static area initialized with the characters of the string, packed into cells. In BCPL, the first packed byte contains the number of characters in the string; in B, there is no count, and strings are terminated by a special character, which B spelled '*e.' This change was made partly to avoid the limitation on the length of a string caused by holding the count in an 8-bit or 9-bit slot, and partly because, in our experience, maintaining the count seemed less convenient than using a terminator.
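
The convention survives in C as the NUL terminator. A small illustrative sketch contrasting the two representations (the struct layout is made up for the example; it is not BCPL's actual format):

#include <stdio.h>

/* B/C style: no stored count; walk the string until the terminator. */
static int term_length(const char *s)
{
    int n = 0;
    while (s[n] != '\0')
        n++;
    return n;
}

/* BCPL style: the count lives in the first slot, so the maximum
   length is capped by what that slot can hold (255 for a byte). */
struct counted { unsigned char count; char chars[255]; };

int main(void)
{
    struct counted b = { 5, "hello" };
    printf("%d %d\n", term_length("hello"), b.count);  /* 5 5 */
    return 0;
}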

In general, individual characters in a BCPL string were manipulated by spreading the string out into another array, one character per cell, and then repacking it later; B provided corresponding routines, but people more often used other library functions that accessed or replaced individual characters in a string.

Where is the success of C?


 

C has succeeded to a far greater extent than any early expectations. What qualities contributed to its widespread use?

C Language

Undoubtedly, the success of Unix itself was the most important factor; it made the language available to hundreds of thousands of people. Conversely, of course, the use of C by Unix and its consequent portability to a wide range of machines was important to the success of the system. But the language invasion of other environments suggests more fundamental merits. 

Despite some aspects that are mysterious to the beginner and occasionally even to the adept, C remains a simple and small language, translatable with simple and small compilers. Its types and operations are well-grounded in those provided by real machines, and it is not difficult for people who are used to how computers work to learn to generate time- and space-efficient programs. At the same time, the language is sufficiently abstracted from machine details that program portability can be achieved.

Equally important, C and its central library support have always remained in touch with the real environment. It was not designed in isolation to prove a point or to serve as an example, but as a tool for writing programs that did useful things; it was always meant to interact with a larger operating system, and was seen as a tool for constructing larger tools. The parsimonious, pragmatic approach has influenced the things that went into C: it covers the essential needs of many programmers, but it does not try to supply too much.

Finally, despite the changes it has undergone since its first published description, which was admittedly informal and incomplete, the actual C language as seen by millions of users of many different compilers has remained remarkably stable and unified compared to those of similar currency, such as Pascal and Fortran. There are different dialects of C — most notably those described by the older K&R and the newer Standard C — but, on the whole, C has remained freer of proprietary extensions than other languages. Perhaps the most significant extensions are the 'far' and 'near' pointer qualifications intended to deal with the peculiarities of some Intel processors. Although C was not originally designed with portability as a prime goal, it succeeded in expressing programs, including operating systems, on machines ranging from the smallest personal computers to the mightiest supercomputers.

C is quirky, flawed, and an enormous success. While accidents of history surely helped, it evidently satisfied the need for a system implementation language efficient enough to displace assembly language, yet sufficiently abstract and fluent to describe algorithms and interactions in a wide variety of environments.
