Maze Generation: Recursive Backtracker

This article is inspired by Jamis Buck’s presentation (Algorithm is not a 4-letter word). I’ve talked before about maze generation and its importance on the acmASCIS blog; it’s a great and interesting topic. In this post we’ll look at one technique for generating a perfect maze: recursive backtracking. Recursive backtracking is an algorithm for both solving and generating perfect mazes. It is fast, easy to understand, and straightforward to implement, although the mazes it produces are still biased (see Jamis Buck’s presentation on the biased-maze concept). The algorithm performs a drunkard’s walk from the starting point, choosing a random unvisited neighbor at each step, with one modification: it pushes each visited cell onto a stack, and when it reaches a dead end it pops cells off the stack until it finds one it can continue walking from, instead of doing an O(n) search. The algorithm steps, from Jamis Buck’s blog:

  1. Choose a starting point in the field.
  2. Randomly choose a wall at that point and carve a passage through to the adjacent cell, but only if the adjacent cell has not been visited yet. This becomes the new current cell.
  3. If all adjacent cells have been visited, back up to the last cell that has uncarved walls and repeat.
  4. The algorithm ends when the process has backed all the way up to the starting point.

You can check the animation introduced by Jamis Buck (here).

Implementation:

Here’s my implementation in C#.

public override void GenerateMaze()
{
    // Keep carving until every cell in the grid has been visited.
    while (NumOfVisitedCells < TotalCells)
    {
        // Neighbors of the current cell that still have all their walls
        // intact, i.e. cells that have not been visited yet.
        ArrayList adjacentCells = this.GetNeighborsWithWalls(currentCell);
        if (adjacentCells.Count > 0)
        {
            // Pick a random unvisited neighbor, knock down the wall between
            // it and the current cell, and move into it.
            int randomCell = Cell.theRandom.Next(0, adjacentCells.Count);
            Cell aCell = (Cell)adjacentCells[randomCell];
            currentCell.KnockDownWall(aCell);
            cellStack.Push(currentCell);
            currentCell = aCell;
            numOfVisitedCells++;
        }
        else
        {
            // Dead end: backtrack to the most recently visited cell that may
            // still have unvisited neighbors.
            currentCell = (Cell)cellStack.Pop();
        }
    }
}
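
The method above refers to members of its enclosing class (the grid, currentCell, cellStack, the visited-cell counters) and to a Cell type that are not shown in the post. Below is a minimal, self-contained sketch of what that context could look like; the Cell layout, wall encoding, constructor, and GetNeighborsWithWalls logic are assumptions based on how they are used above, not the original code.

using System;
using System.Collections;

// Sketch of the members GenerateMaze() relies on. The names mirror the
// snippet above; the grid layout and wall encoding are assumptions.
public class Cell
{
    public static Random theRandom = new Random();

    public int Row, Col;
    // Walls: 0 = north, 1 = east, 2 = south, 3 = west (row 0 is the north edge).
    public bool[] Walls = { true, true, true, true };

    // A cell with all four walls intact has not been visited yet.
    public bool HasAllWalls
    {
        get { return Walls[0] && Walls[1] && Walls[2] && Walls[3]; }
    }

    // Remove the wall between this cell and an adjacent cell, on both sides.
    public void KnockDownWall(Cell other)
    {
        if (other.Row < Row)      { Walls[0] = false; other.Walls[2] = false; } // other is north
        else if (other.Row > Row) { Walls[2] = false; other.Walls[0] = false; } // other is south
        else if (other.Col > Col) { Walls[1] = false; other.Walls[3] = false; } // other is east
        else                      { Walls[3] = false; other.Walls[1] = false; } // other is west
    }
}

public class RecursiveBacktrackerMaze
{
    private Cell[,] grid;
    private Cell currentCell;
    private Stack cellStack = new Stack();
    private int numOfVisitedCells;
    private int rows, cols;

    private int TotalCells { get { return rows * cols; } }
    private int NumOfVisitedCells { get { return numOfVisitedCells; } }

    public RecursiveBacktrackerMaze(int rows, int cols)
    {
        this.rows = rows;
        this.cols = cols;
        grid = new Cell[rows, cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                grid[r, c] = new Cell { Row = r, Col = c };

        // Step 1: choose a starting point and count it as visited.
        currentCell = grid[0, 0];
        numOfVisitedCells = 1;
    }

    // Unvisited neighbors are exactly the in-bounds neighbors with all walls intact.
    private ArrayList GetNeighborsWithWalls(Cell cell)
    {
        ArrayList neighbors = new ArrayList();
        int[,] offsets = { { -1, 0 }, { 1, 0 }, { 0, -1 }, { 0, 1 } };
        for (int i = 0; i < 4; i++)
        {
            int r = cell.Row + offsets[i, 0], c = cell.Col + offsets[i, 1];
            if (r >= 0 && r < rows && c >= 0 && c < cols && grid[r, c].HasAllWalls)
                neighbors.Add(grid[r, c]);
        }
        return neighbors;
    }

    // The same loop as the snippet above, minus the override keyword
    // since no base class is shown in this sketch.
    public void GenerateMaze()
    {
        while (NumOfVisitedCells < TotalCells)
        {
            ArrayList adjacentCells = GetNeighborsWithWalls(currentCell);
            if (adjacentCells.Count > 0)
            {
                Cell aCell = (Cell)adjacentCells[Cell.theRandom.Next(0, adjacentCells.Count)];
                currentCell.KnockDownWall(aCell);
                cellStack.Push(currentCell);
                currentCell = aCell;
                numOfVisitedCells++;
            }
            else
            {
                currentCell = (Cell)cellStack.Pop();
            }
        }
    }
}

With this sketch, generating a 20×20 maze is just new RecursiveBacktrackerMaze(20, 20).GenerateMaze(); each cell’s Walls array then describes the carved passages.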

You can try the simple demo executable, Maze Generation.exe.


Why Study Compiler Construction?

A compiler is a large, complex program. Compilers often comprise hundreds of thousands, if not millions, of lines of code, and their many parts interact in complex ways. Design decisions made for one part of the compiler have important ramifications for other parts. Thus, the design and implementation of a compiler is a substantial exercise in software engineering.
A good compiler contains a microcosm of Computer Science, as the following examples show:

  • Greedy Algorithms (Register Allocation)
  • Heuristic Search Techniques (List Scheduling)
  • Graph Algorithms (dead-code elimination)
  • Dynamic Programming (Instruction Selection)
  • Finite Automata and Push-Down Automata (Scanning and Parsing; see the scanner sketch after this list)
  • Fixed-Point Algorithms (Data-Flow Analysis)
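
As a tiny illustration of the finite-automata item, a scanner can be driven by a finite automaton encoded as a transition table. The sketch below is not from the book; the states, character classes, and token kinds are made up for this example.

using System;

// A toy table-driven scanner: a small DFA that classifies a string as an
// identifier ([A-Za-z][A-Za-z0-9]*), an integer literal ([0-9]+), or invalid.
// Real scanners are generated from regular expressions and handle many more
// token kinds; this only shows the table-driven idea.
static class ToyScanner
{
    // States: 0 = start, 1 = in identifier, 2 = in integer, 3 = error (trap).
    // Character classes: 0 = letter, 1 = digit, 2 = anything else.
    static readonly int[,] Transition =
    {
        // letter digit other
        {     1,    2,    3 }, // state 0: start
        {     1,    1,    3 }, // state 1: identifier
        {     3,    2,    3 }, // state 2: integer
        {     3,    3,    3 }, // state 3: error
    };

    static int CharClass(char c) =>
        char.IsLetter(c) ? 0 : char.IsDigit(c) ? 1 : 2;

    public static string Classify(string lexeme)
    {
        int state = 0;
        foreach (char c in lexeme)
            state = Transition[state, CharClass(c)];
        return state == 1 ? "identifier"
             : state == 2 ? "integer"
             : "invalid";
    }

    static void Main()
    {
        Console.WriteLine(Classify("count1")); // identifier
        Console.WriteLine(Classify("42"));     // integer
        Console.WriteLine(Classify("1abc"));   // invalid
    }
}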

Compiler construction also deals with practical problems such as dynamic allocation, synchronization, naming, locality, memory-hierarchy management, and pipeline scheduling.
Compilers play a fundamental role in the central activity of Computer Science: preparing problems for solution by computer. Most software is processed by a compiler before it runs, so the correctness of that translation and the efficiency of the resulting code have a direct impact on our ability to build large systems.

Speedy Overview:

The compiler community has been building compilers since 1955, and over those years we have learned many lessons about how to structure them. At the highest level, a compiler is a black box that translates a source program into a target program.

The front end focuses on understanding the source-language program, and the back end focuses on mapping the program to the target machine. This separation of concerns has several important implications for the design and implementation of compilers.
The two halves communicate through an intermediate representation (IR), which becomes the compiler’s definitive representation for the code it is translating; at each point in compilation, the compiler works from this definitive representation.
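
To make the notion of an IR concrete, here is a toy sketch of a three-address-code style representation; the instruction format, field names, and the lowering of the example expression are invented for illustration and are not the book’s notation.

using System;
using System.Collections.Generic;

// A toy three-address-code IR: each instruction has at most one operator and
// two operands, writing its result to a temporary or a variable. The source
// statement a = b + c * 4 might be lowered by a front end into the list built
// below, which a back end could then map to target-machine instructions.
class IRInstruction
{
    public string Result;   // destination temporary or variable
    public string Op;       // operator, e.g. "+", "*", "copy"
    public string Arg1;
    public string Arg2;     // null for unary operations

    public override string ToString() =>
        Arg2 == null ? $"{Result} <- {Op} {Arg1}"
                     : $"{Result} <- {Arg1} {Op} {Arg2}";
}

static class IRDemo
{
    static void Main()
    {
        var code = new List<IRInstruction>
        {
            new IRInstruction { Result = "t1", Op = "*",    Arg1 = "c",  Arg2 = "4"  },
            new IRInstruction { Result = "t2", Op = "+",    Arg1 = "b",  Arg2 = "t1" },
            new IRInstruction { Result = "a",  Op = "copy", Arg1 = "t2" },
        };

        foreach (var inst in code)
            Console.WriteLine(inst);
        // t1 <- c * 4
        // t2 <- b + t1
        // a <- copy t2
    }
}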

References:

Engineering a Compiler – Keith D. Cooper & Linda Torczon.