Introspection is the process of thinking about one's own thoughts and feelings. In this paper, I discuss recent attempts to make computational systems that exhibit introspective behavior: [Smith, 1982], [Weyhrauch, 1978], and [Doyle, 1980]. Each presents a system capable of manipulating representations of its own program and current context. I argue that introspective ability is crucial for intelligent systems: without it, an agent cannot represent certain problems that it must be able to solve. A theory of intelligent action would describe how and why certain actions intelligently achieve an agent's goals. The agent would both embody and represent this theory: it would be implemented as the program for the agent, and the importance of introspection suggests that the agent also represent its theory of action to itself.