Why Study Compiler Construction?

A compiler is a large, complex program. Compilers often comprise hundreds of thousands, if not millions, of lines of code, and their many parts interact in complex ways. Design decisions made for one part of the compiler have important ramifications for other parts. Thus, the design and implementation of a compiler is a substantial exercise in software engineering.
A good compiler contains a microcosm of Computer Science, as the following examples show (the last item is sketched in code after the list):

  • Greedy Algorithms (Register Allocation)
  • Heuristic Search Techniques (List Scheduling)
  • Graph Algorithms (Dead-Code Elimination)
  • Dynamic Programming (Instruction Selection)
  • Finite Automata and Push-Down Automata (Scanning and Parsing)
  • Fixed-Point Algorithms (Data-Flow Analysis)
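
As a concrete illustration of the last item, here is a minimal sketch of a fixed-point algorithm: iterative liveness (LIVE-OUT) analysis over a tiny hand-built control-flow graph. The block names, USE/DEF sets, and edges are illustrative assumptions, not an example taken from the book.

    # Control-flow graph: block -> list of successor blocks (assumed example)
    succ = {
        "entry": ["loop"],
        "loop":  ["body", "exit"],
        "body":  ["loop"],
        "exit":  [],
    }

    # Variables read (USE) and written (DEF) in each block (assumed example)
    use  = {"entry": set(),      "loop": {"i", "n"}, "body": {"i", "s"}, "exit": {"s"}}
    defs = {"entry": {"i", "s"}, "loop": set(),      "body": {"i", "s"}, "exit": set()}

    # LIVE-OUT(b) = union over successors s of  USE(s) | (LIVE-OUT(s) - DEF(s))
    live_out = {b: set() for b in succ}

    changed = True
    while changed:                      # iterate until the sets stop changing,
        changed = False                 # i.e. until the equations reach a fixed point
        for b in succ:
            new = set()
            for s in succ[b]:
                new |= use[s] | (live_out[s] - defs[s])
            if new != live_out[b]:
                live_out[b] = new
                changed = True

    for b in succ:
        print(b, sorted(live_out[b]))

The loop terminates because each LIVE-OUT set can only grow and is bounded by the finite set of variable names, which is exactly the property that makes such fixed-point computations safe.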

Compilers also deal with problems such as dynamic allocation, synchronization, naming, locality, memory hierarchy management, and pipeline scheduling.
Compilers play a fundamental role in the central activity of Computer Science: preparing problems for solution by computer. Most software is compiled, and the correctness of that process and the efficiency of the resulting code have a direct impact on our ability to build large systems.

A Quick Overview:

The compiler community has been building compilers since 1955, and over those years we have learned many lessons about how to structure them. Viewed from the outside, a compiler is a black box that translates a source program into a target program. Inside that box, the compiler has two main parts: a front end and a back end.

The front end focuses on understanding the source-language program, while the back end focuses on mapping programs to the target machine; the two halves communicate through an intermediate representation (IR). This separation of concerns has several important implications for the design and implementation of compilers.
The IR becomes the compiler's definitive representation for the code it is translating: at each point in compilation, the compiler has one definitive representation of the program.
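
To make the separation concrete, the sketch below shows a front end that produces nothing but IR and a back end that consumes nothing but IR. The toy "+"-only source language, the three-address tuple IR, and the assembly-like output are illustrative assumptions, not the book's ILOC or any real compiler's design.

    def front_end(source):
        """Understand the source program; emit IR as (op, arg1, arg2, result) tuples."""
        operands = [tok.strip() for tok in source.split("+")]
        ir, temps = [], 0
        result = operands[0]
        for operand in operands[1:]:
            temps += 1
            target = f"t{temps}"
            ir.append(("add", result, operand, target))
            result = target
        return ir

    def back_end(ir):
        """Map the IR onto a (hypothetical) target machine."""
        return "\n".join(f"    {op} {a}, {b} -> {dest}" for op, a, b, dest in ir)

    ir = front_end("a + b + c")   # the front end's only product is the IR
    print(back_end(ir))           # the back end sees only the IR, never the source

Because the two halves meet only at the IR, either one can be replaced or retargeted without touching the other, which is one of the important implications of this separation of concerns.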

References:

Keith D. Cooper and Linda Torczon, Engineering a Compiler.