**Overview**

Given a description of the possible initial states of the world, a description of the desired goals, and a description of a set of possible actions, the planning problem is to find a plan that, when executed from any of the initial states, is guaranteed to produce a sequence of actions leading to one of the goal states.

The difficulty of planning depends on the simplifying assumptions employed. Several classes of planning problems can be distinguished by their properties along the following dimensions.

- Are the actions deterministic or nondeterministic? For nondeterministic actions, are the associated probabilities available?
- Are the state variables discrete or continuous? If they are discrete, do they have only a finite number of possible values?
- Can the current state be observed unambiguously? Observability may be full or partial.
- How many initial states are there, one or arbitrarily many?
- Do actions have a duration?
- Can several actions be taken concurrently, or is only one action possible at a time?
- Is the objective of a plan to reach a designated goal state, or to maximize a reward function?
- Is there only one agent or are there several agents? Are the agents cooperative or selfish? Do all of the agents construct their own plans separately, or are the plans constructed centrally for all agents?

The simplest possible planning problem, known as the Classical Planning Problem, is determined by:

- a unique known initial state,
- durationless actions,
- deterministic actions,
- which can be taken only one at a time,
- and a single agent.

Since the initial state is known unambiguously, and all actions are deterministic, the state of the world after any sequence of actions can be accurately predicted, and the question of observability is irrelevant for classical planning.

Further, plans can be defined as sequences of actions, because it is always known in advance which actions will be needed.
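Under these assumptions a plan can be found by ordinary forward search through the state space. The sketch below is one minimal way to do this in Python: actions are encoded STRIPS-style as (name, preconditions, add effects, delete effects), and breadth-first search returns a shortest action sequence. The toy logistics domain is a hypothetical example, not part of the original text.

```python
from collections import deque

def apply_action(state, action):
    """STRIPS-style effect: delete some facts, then add others."""
    _name, _pre, add, delete = action
    return (state - frozenset(delete)) | frozenset(add)

def plan(initial, goal, actions):
    """Breadth-first forward search from the unique initial state.
    Returns a shortest sequence of action names reaching the goal, or None."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if frozenset(goal) <= state:        # all goal facts hold
            return path
        for action in actions:
            name, pre, _add, _delete = action
            if frozenset(pre) <= state:     # preconditions satisfied
                successor = apply_action(state, action)
                if successor not in seen:
                    seen.add(successor)
                    frontier.append((successor, path + [name]))
    return None

# Hypothetical toy domain: move a package from A to B by truck.
acts = [
    ("load",   ["pkg_at_A", "truck_at_A"],     ["pkg_in_truck"], ["pkg_at_A"]),
    ("drive",  ["truck_at_A"],                 ["truck_at_B"],   ["truck_at_A"]),
    ("unload", ["pkg_in_truck", "truck_at_B"], ["pkg_at_B"],     ["pkg_in_truck"]),
]
print(plan(["pkg_at_A", "truck_at_A"], ["pkg_at_B"], acts))
# -> ['load', 'drive', 'unload']
```

Because every action is deterministic and the initial state is unique, the returned plan is exactly the sequence of actions mentioned above; no branching is needed.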

With nondeterministic actions or other events outside the control of the agent, the possible executions form a tree, and plans have to determine the appropriate actions for every node of the tree.
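One way to make this concrete is to represent the plan as a table mapping states to actions and check it against the execution tree: the plan succeeds only if *every* possible outcome of every prescribed action eventually reaches a goal (a so-called strong plan). The domain and all names in this Python sketch are illustrative assumptions.

```python
def outcomes(state, action):
    """Nondeterministic transition model: the set of possible successors.
    (Hypothetical domain: drying a part may only partially succeed.)"""
    table = {
        ("wet", "dry_off"):  {"dry", "damp"},
        ("damp", "dry_off"): {"dry"},
        ("dry", "paint"):    {"painted"},
    }
    return table.get((state, action), set())

def handles(plan, state, goals, seen=frozenset()):
    """True iff following `plan` from `state` reaches a goal on *every*
    branch of the execution tree (cycles count as failure)."""
    if state in goals:
        return True
    if state in seen or state not in plan:
        return False
    return all(handles(plan, successor, goals, seen | {state})
               for successor in outcomes(state, plan[state]))

strong_plan = {"wet": "dry_off", "damp": "dry_off", "dry": "paint"}
print(handles(strong_plan, "wet", {"painted"}))   # -> True
```

Dropping the entry for "damp" makes the check fail, because one branch of the tree would then reach a state for which the plan prescribes no action.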

Discrete-time Markov decision processes (MDPs) are planning problems with:

- durationless actions,
- nondeterministic actions with probabilities,
- full observability,
- maximization of a reward function,
- and a single agent.
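A standard solution method for such problems is value iteration, which repeatedly applies the Bellman optimality update until the state values converge and then reads off a policy. A minimal Python sketch follows; the two-state model is a hypothetical example.

```python
def value_iteration(transitions, gamma, eps=1e-6):
    """Compute optimal state values for a discrete-time MDP.
    transitions[s][a] is a list of (probability, next_state, reward)."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, acts in transitions.items():
            if not acts:                      # terminal state
                continue
            best = max(sum(p * (r + gamma * V[s2]) for p, s2, r in outs)
                       for outs in acts.values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

# Hypothetical example: 'go' reaches the goal with probability 0.8,
# 'wait' stays put with no reward.
transitions = {
    "s0":   {"go":   [(0.8, "goal", 10.0), (0.2, "s0", 0.0)],
             "wait": [(1.0, "s0", 0.0)]},
    "goal": {},
}
gamma = 0.9  # discount factor
V = value_iteration(transitions, gamma)
policy = {s: max(acts, key=lambda a: sum(p * (r + gamma * V[s2])
                                         for p, s2, r in acts[a]))
          for s, acts in transitions.items() if acts}
print(policy)   # -> {'s0': 'go'}
```

Note that the output is a policy (a mapping from states to actions) rather than an action sequence: because the actions are nondeterministic, full observability is what lets the agent pick the right action in whichever state it actually lands.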

When full observability is replaced by partial observability, planning corresponds to a partially observable Markov decision process (POMDP).
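In a POMDP the agent cannot plan over states directly; instead it maintains a belief state, a probability distribution over the possible current states, and updates it by Bayes' rule after each action and observation. A minimal Python sketch of the belief update, assuming a hypothetical two-state model with a noisy sensor:

```python
def belief_update(belief, action, obs, T, O):
    """Bayes update: b'(s') ∝ O(obs | s') * sum_s T(s' | s, action) * b(s)."""
    predicted = {s2: sum(T[s][action].get(s2, 0.0) * belief[s] for s in belief)
                 for s2 in belief}
    unnorm = {s2: O[s2].get(obs, 0.0) * predicted[s2] for s2 in predicted}
    z = sum(unnorm.values())
    return {s2: v / z for s2, v in unnorm.items()}

# Hypothetical model: 'listen' leaves the state unchanged; the sensor
# reports the true state with probability 0.85.
T = {"left":  {"listen": {"left": 1.0}},
     "right": {"listen": {"right": 1.0}}}
O = {"left":  {"hear_left": 0.85, "hear_right": 0.15},
     "right": {"hear_left": 0.15, "hear_right": 0.85}}

b = {"left": 0.5, "right": 0.5}
b = belief_update(b, "listen", "hear_left", T, O)
print(b)   # belief shifts toward 'left': {'left': 0.85, 'right': 0.15}
```

A POMDP can thus be viewed as an MDP over belief states, which is why the planning problem becomes much harder: the belief space is continuous even when the underlying state space is finite.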

If there is more than one agent, we have multi-agent planning, which is closely related to game theory.

