This article is inspired by Jamis Buck’s presentation (Algorithm is not a 4-letter word). I’ve talked before about maze generation and its importance on the acmASCIS blog; it’s a great and interesting topic. In this post we’ll look at one technique for generating a perfect maze: recursive backtracking. Recursive backtracking is an algorithm for both solving and generating perfect mazes. It is fast, easy to understand, and straightforward to implement, but the mazes it produces are still biased (see Jamis Buck’s presentation on the biased-maze concept). The recursive backtracker performs a drunkard’s walk from the starting point, choosing a random unvisited neighbor at each step, with one modification: it pushes each visited cell onto a stack, and when it reaches a dead end it pops cells off the stack until it finds one it can continue walking from, instead of doing an O(n) search. Algorithm steps, from Jamis Buck’s blog:
- Choose a starting point in the field.
- Randomly choose a wall at that point and carve a passage through to the adjacent cell, but only if the adjacent cell has not been visited yet. This becomes the new current cell.
- If all adjacent cells have been visited, back up to the last cell that has uncarved walls and repeat.
- The algorithm ends when the process has backed all the way up to the starting point.
You can check the animation introduced by Jamis Buck (here).
Here’s my implementation in C#:
    public override void GenerateMaze() {
        while (NumOfVisitedCells < TotalCells) {
            // Unvisited cells are the neighbors that still have all their walls
            ArrayList adjacentCells = this.GetNeighborsWithWalls(currentCell);
            if (adjacentCells.Count > 0) {
                int randomCell = Cell.theRandom.Next(0, adjacentCells.Count);
                Cell aCell = (Cell)adjacentCells[randomCell];
                currentCell.KnockDownWall(aCell); // assumed helper: carve the shared wall
                cellStack.Push(currentCell);      // remember the path for backtracking
                currentCell = aCell;
                NumOfVisitedCells++;
            } else {
                currentCell = (Cell)cellStack.Pop(); // dead end: back up
            }
        }
    }
You can also try a simple executable: Maze Generation.exe.
A compiler is a large, complex program. Compilers often comprise hundreds of thousands, if not millions, of lines of code, and their many parts interact in complex ways. Design decisions made for one part of the compiler have important ramifications for other parts. Thus, the design and implementation of a compiler is a substantial exercise in software engineering.
A good compiler contains a microcosm of Computer Science, as a few examples show:
- Greedy Algorithms (Register Allocation)
- Heuristic Search Techniques (List Scheduling)
- Graph Algorithms (Dead-Code Elimination)
- Dynamic Programming (Instruction Selection)
- Finite Automata and Push-Down Automata (Scanning and Parsing)
- Fixed-Point Algorithms (Data-Flow Analysis)
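As a small illustration of the last automata entry, a scanner can be driven by a hand-coded deterministic finite automaton. The sketch below (in Java; the class and method names are my own, not from the book) recognizes unsigned integer literals with a two-state DFA:

```java
// Minimal DFA-based scanner sketch: recognizes unsigned integer literals.
// State 0 is the start state; state 1 (accepting) means "saw at least one digit".
public class IntScanner {
    public static boolean isIntegerLiteral(String s) {
        int state = 0;
        for (char c : s.toCharArray()) {
            if (Character.isDigit(c)) {
                state = 1;        // a digit moves/keeps us in the accepting state
            } else {
                return false;     // any non-digit sends us to a dead state
            }
        }
        return state == 1;        // accept only if at least one digit was seen
    }
}
```

A production scanner generalizes this idea to a larger transition table covering every token class of the language.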
Compiler construction also deals with practical problems such as dynamic allocation, synchronization, naming, locality, memory-hierarchy management, and pipeline scheduling.
Compilers play a fundamental role in the central activity of Computer Science: preparing problems for solution by computer. Most software is compiled, and the correctness of that process and the efficiency of the resulting code have a direct impact on our ability to build large systems.
The compiler community has been building compilers since 1955. Over those years we have learned many lessons about how to structure a compiler, and today we can view a compiler as a black box that translates a source program into a target program.
The Front-End focuses on understanding the source-language program, and the Back-End focuses on mapping programs to the target machine. This separation of concerns has several important implications for the design and implementation of compilers.
Between the front end and the back end, the compiler uses an intermediate representation (IR), which becomes its definitive representation for the code it is translating: at each point in compilation, the IR is the authoritative version of the program.
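The front-end/IR/back-end split can be sketched in a few lines. In this toy example (in Java; the string-based three-address code and the made-up target mnemonics are illustrative assumptions, not the book’s notation), the front end lowers one expression into a linear IR, and the back end maps each IR operation to a target instruction without ever seeing the source language:

```java
import java.util.ArrayList;
import java.util.List;

// Toy two-phase compiler: front end produces IR, back end consumes it.
public class TinyCompiler {
    // Front end: lower "a = b + c * d" into three-address code,
    // evaluating the higher-precedence multiply into a temporary first.
    public static List<String> frontEnd() {
        List<String> ir = new ArrayList<>();
        ir.add("t1 = c * d");
        ir.add("a = b + t1");
        return ir;
    }

    // Back end: map each IR operation to a (made-up) target instruction.
    // Only the front end knows the source; only the back end knows the target.
    public static List<String> backEnd(List<String> ir) {
        List<String> asm = new ArrayList<>();
        for (String instr : ir) {
            String[] p = instr.split(" ");               // dst = lhs op rhs
            String op = p[3].equals("*") ? "MUL" : "ADD";
            asm.add(op + " " + p[0] + ", " + p[2] + ", " + p[4]);
        }
        return asm;
    }
}
```

Because the two phases only communicate through the IR, either side can be replaced (a new source language, a new target machine) without touching the other — which is exactly the separation of concerns described above.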
Engineering a Compiler – Keith D. Cooper & Linda Torczon.