| ►NArgumentHandlers | ArgumentHandlers contains functionality for parsing and handling command-line arguments |
| CArguments | Arguments contains all defined parameters to be set on the command line |
| ►Ncomment_cbonlp | |
| CCommentBlankorNewLineParser | |
| ►NDPOMDPFormatParsing | |
| ►CParserDPOMDPFormat_Spirit | ParserDPOMDPFormat_Spirit is a parser for DecPOMDPDiscrete |
| CAddAction | |
| CAddAgents | |
| CAddModels | |
| CAddObservation | |
| CAddStartState | |
| CAddState | |
| CDebugOutput | |
| CDebugOutputNoParsed | |
| ►CDecPOMDPFileParser | |
| Cdefinition | |
| Cdp_SetDiscountParam | |
| CInitialize | |
| CInitializeActions | |
| CInitializeDPOMDP | |
| CInitializeObservations | |
| CInitializeStates | |
| CNextFloatOfRow | |
| CNextRowOfMatrix | |
| CNextStringOfIdentList | |
| CNYI | |
| CProcessOMatrix | |
| CProcessOProb | |
| CProcessORow | |
| CProcessR | |
| CProcessRMatrix | |
| CProcessRRow | |
| CProcessStartStateList | |
| CProcessTMatrix | |
| CProcessTProb | |
| CProcessTRow | |
| CResetCurIdentList | |
| CResetCurMatrix | |
| CSetAgentIndex | |
| CSetLastParsedType | |
| CSetNextAgentIndex | |
| CSetNrActions | |
| CSetNrAgents | |
| CSetNrObservations | |
| CSetNrStates | |
| CStartStateExludes | |
| CStartStateRowProbs | |
| CStoreLastParsedElement | |
| CStoreLPAction | |
| CStoreLPFromState | |
| CStoreLPJointAction | |
| CStoreLPJointObservation | |
| CStoreLPObservation | |
| CStoreLPToState | |
| Cvt_COSTTOK | |
| Cvt_REWARDTOK | |
| CWildCardJointAction | Called before StoreLPJointAction in case of a wildcard '*' joint action |
| CWildCardJointObservation | Called before StoreLPJointObservation in case of a wildcard '*' joint observation |
| ►NPOMDPFormatParsing | |
| ►CParserPOMDPFormat_Spirit | ParserPOMDPFormat_Spirit is a parser for the .pomdp file format |
| CAddAction | |
| CAddModels | |
| CAddObservation | |
| CAddStartState | |
| CAddState | |
| CDebugOutput | |
| CDebugOutputNoParsed | |
| Cdp_SetDiscountParam | |
| CInitialize | |
| CInitializeActions | |
| CInitializeObservations | |
| CInitializePOMDP | |
| CInitializeStates | |
| CNextFloatOfRow | |
| CNextRowOfMatrix | |
| CNextStringOfIdentList | |
| CNYI | |
| ►CPOMDPFileParser | |
| Cdefinition | |
| CProcessOMatrix | |
| CProcessOProb | |
| CProcessORow | |
| CProcessR | |
| CProcessRMatrix | |
| CProcessRRow | |
| CProcessStartStateList | |
| CProcessTMatrix | |
| CProcessTProb | |
| CProcessTRow | |
| CResetCurIdentList | |
| CResetCurMatrix | |
| CSetAgentIndex | |
| CSetLastParsedType | |
| CSetNextAgentIndex | |
| CSetNrActions | |
| CSetNrAgents | |
| CSetNrObservations | |
| CSetNrStates | |
| CStartStateExludes | |
| CStartStateRowProbs | |
| CStoreLastParsedElement | |
| CStoreLPAction | Stores the last-parsed action |
| CStoreLPFromState | |
| CStoreLPJointAction | |
| CStoreLPJointObservation | |
| CStoreLPObservation | |
| CStoreLPToState | |
| Cvt_COSTTOK | |
| Cvt_REWARDTOK | |
| CWildCardJointAction | Called before StoreLPJointAction in case of a wildcard '*' joint action |
| CWildCardJointObservation | Called before StoreLPJointObservation in case of a wildcard '*' joint observation |
| ►Nstd | STL namespace |
| Cless< BGIP_BnB_Node * > | Overload the less<Type> template for BGIP_BnB_Node* (we want less to give an ordering according to values, not addresses...) |
| Cless< BGIP_BnB_NodePtr > | |
| Cless< boost::shared_ptr< JPPVValuePair > > | |
| Cless< boost::shared_ptr< PartialJointPolicyValuePair > > | |
| Cless< JointPolicyValuePair * > | Overload the less<Type> template for JointPolicyValuePair* (we want less to give an ordering according to values, not addresses...) |
| Cless< JointPolicyValuePair_sharedPtr > | |
| Cless< JPPVValuePair * > | Overload the less<Type> template for JPPVValuePair* (we want less to give an ordering according to values, not addresses...) |
| Cless< PartialJointPolicyValuePair * > | Overload the less<Type> template for PartialJointPolicyValuePair* (we want less to give an ordering according to values, not addresses...) |
| Cless< PartialJPDPValuePair * > | Overload the less<Type> template for PartialJPDPValuePair* (we want less to give an ordering according to values, not addresses...) |
| Cless< PartialJPDPValuePair_sharedPtr > | |
| CAction | Action is a class that represents actions |
| CActionDiscrete | ActionDiscrete represents discrete actions |
| CActionHistory | ActionHistory represents an action history of a single agent |
| CActionHistoryTree | ActionHistoryTree is a wrapper for ActionHistory |
| CActionObservationHistory | ActionObservationHistory represents an action-observation history of an agent |
| CActionObservationHistoryTree | ActionObservationHistoryTree is a wrapper for ActionObservationHistory |
| CAgent | Agent represents an agent |
| CAgentBG | AgentBG represents an agent which uses a BG-based policy |
| CAgentDecPOMDPDiscrete | AgentDecPOMDPDiscrete represents an agent in a discrete DecPOMDP setting |
| CAgentDelayedSharedObservations | AgentDelayedSharedObservations represents an agent that acts on local observations and the shared observation at the previous time step |
| CAgentFullyObservable | AgentFullyObservable represents an agent that receives the true state, the joint observation and also the reward signal |
| CAgentLocalObservations | AgentLocalObservations represents an agent that acts on local observations |
| CAgentMDP | AgentMDP represents an agent which uses an MDP-based policy |
| CAgentOnlinePlanningMDP | AgentOnlinePlanningMDP represents an agent with an online MDP policy |
| CAgentPOMDP | AgentPOMDP represents an agent which uses a POMDP-based policy |
| CAgentQLearner | AgentQLearner applies standard single-agent Q-learning in the joint action and state space |
| CAgentQMDP | AgentQMDP represents an agent which uses a QMDP-based policy |
| CAgentRandom | AgentRandom represents an agent which chooses action uniformly at random |
| CAgentSharedObservations | AgentSharedObservations represents an agent that benefits from free communication, i.e., it can share all its observations |
| CAgentTOIFullyObservableSynced | AgentTOIFullyObservableSynced represents an agent that receives the true state, the joint observation and also the reward signal |
| CAgentTOIFullyObservableSyncedSpecialReward | AgentTOIFullyObservableSyncedSpecialReward represents an AgentTOIFullyObservableSynced |
| CAlphaVector | AlphaVector represents an alpha vector used in POMDP solving |
| CAlphaVectorBG | AlphaVectorBG implements Bayesian Game specific functionality for alpha-vector based planning |
| CAlphaVectorConstrainedPOMDP | AlphaVectorConstrainedPOMDP implements Constrained POMDP specific functionality for alpha-vector based planning |
| CAlphaVectorPlanning | AlphaVectorPlanning provides base functionality for alpha-vector based POMDP or BG techniques |
| CAlphaVectorPOMDP | AlphaVectorPOMDP implements POMDP specific functionality for alpha-vector based planning |
| CAlphaVectorPruning | AlphaVectorPruning reduces sets of alpha vectors to their parsimonious representation via LP-based pruning |
| CAlphaVectorWeighted | AlphaVectorWeighted implements a weighted BG/POMDP backup |
| CBayesianGame | BayesianGame is a class that represents a general Bayesian game in which each agent has its own utility function |
| CBayesianGameBase | BayesianGameBase is a class that represents a Bayesian game |
| CBayesianGameCollaborativeGraphical | BayesianGameCollaborativeGraphical represents a collaborative graphical Bayesian game |
| CBayesianGameForDecPOMDPStage | BayesianGameForDecPOMDPStage represents a BG for a single stage |
| CBayesianGameForDecPOMDPStageInterface | BayesianGameForDecPOMDPStageInterface is a class that represents the base class for all Bayesian games that are used to represent a stage of a Dec-POMDP (e.g., in GMAA*) |
| CBayesianGameIdenticalPayoff | BayesianGameIdenticalPayoff is a class that represents a Bayesian game with identical payoffs |
| CBayesianGameIdenticalPayoffInterface | BayesianGameIdenticalPayoffInterface provides an interface for Bayesian Games with identical payoffs |
| CBayesianGameIdenticalPayoffSolver | BayesianGameIdenticalPayoffSolver is an interface for solvers for Bayesian games with identical payoff |
| CBayesianGameIdenticalPayoffSolver_T | BayesianGameIdenticalPayoffSolver_T is an interface for solvers for Bayesian games with identical payoff |
| CBayesianGameWithClusterInfo | BayesianGameWithClusterInfo represents an identical-payoff BG that can be clustered |
| CBelief | Belief represents a probability distribution over the state space |
| CBeliefInterface | BeliefInterface is an interface for beliefs, i.e., probability distributions over the state space |
| CBeliefIterator | BeliefIterator is an iterator for dense beliefs |
| CBeliefIteratorGeneric | BeliefIteratorGeneric is an iterator for beliefs |
| CBeliefIteratorInterface | BeliefIteratorInterface is an interface for iterators over beliefs |
| CBeliefIteratorSparse | BeliefIteratorSparse is an iterator for sparse beliefs |
| CBeliefSetNonStationary | BeliefSetNonStationary represents a non-stationary belief set |
| CBeliefSparse | BeliefSparse represents a probability distribution over the state space |
| CBG_FactorGraphCreator | BG_FactorGraphCreator will create a FG from a BG |
| CBGCG_Solver | BGCG_Solver is a base class for solvers of collaborative graphical Bayesian games |
| CBGCG_SolverCreator | BGCG_SolverCreator is a class that represents an object that can create a BGCG_Solver |
| CBGCG_SolverCreator_FG | BGCG_SolverCreator_FG creates factor-graph-based BGCG Solvers |
| CBGCG_SolverCreator_MP | BGCG_SolverCreator_MP creates BGCG Solvers with Max Plus |
| CBGCG_SolverCreator_NDP | BGCG_SolverCreator_NDP creates BGCG Solvers with Non-serial Dynamic Programming |
| CBGCG_SolverCreator_Random | BGCG_SolverCreator_Random creates a BGCG Solver that gives random solutions |
| CBGCG_SolverFG | |
| CBGCG_SolverMaxPlus | BGCG_SolverMaxPlus is a class that performs max plus for BGIPs with agent independence |
| CBGCG_SolverNonserialDynamicProgramming | BGCG_SolverNonserialDynamicProgramming implements non-serial dynamic programming for collaborative graphical Bayesian games |
| CBGCG_SolverRandom | BGCG_SolverRandom is a class that represents a BGCG solver that returns a random joint BG policy |
| CBGforStageCreation | BGforStageCreation is a class that provides some functions to aid the construction of Bayesian games for a stage of a Dec-POMDP |
| CBGIP_BnB_Node | BGIP_BnB_Node represents a node in the search tree of BGIP_SolverBranchAndBound |
| CBGIP_IncrementalSolverCreatorInterface_T | BGIP_IncrementalSolverCreatorInterface_T is an interface for classes that create BGIP solvers |
| CBGIP_IncrementalSolverInterface | BGIP_IncrementalSolverInterface is an interface for BGIP_Solvers that can incrementally return multiple solutions |
| CBGIP_IncrementalSolverInterface_T | BGIP_IncrementalSolverInterface_T is an interface for BGIP_Solvers that can incrementally return multiple solutions |
| CBGIP_SolverAlternatingMaximization | BGIP_SolverAlternatingMaximization implements an approximate solver for identical payoff Bayesian games, based on alternating maximization |
| CBGIP_SolverBFSNonIncremental | BGIP_SolverBFSNonIncremental is a class that performs Brute force search for identical payoff Bayesian Games |
| CBGIP_SolverBranchAndBound | BGIP_SolverBranchAndBound is a class that performs Branch-and-Bound search for identical payoff Bayesian Games |
| CBGIP_SolverBruteForceSearch | BGIP_SolverBruteForceSearch is a class that performs Brute force search for identical payoff Bayesian Games |
| CBGIP_SolverCE | BGIP_SolverCE is a class that performs Cross Entropy optimization for identical payoff Bayesian Games |
| CBGIP_SolverCreator_AM | BGIP_SolverCreator_AM creates BGIP Solvers with Alternating Maximization |
| CBGIP_SolverCreator_BFS | BGIP_SolverCreator_BFS creates BGIP Solvers with Brute Force Search |
| CBGIP_SolverCreator_BFSNonInc | BGIP_SolverCreator_BFSNonInc creates BGIP Solvers with Brute Force Search |
| CBGIP_SolverCreator_BnB | BGIP_SolverCreator_BnB creates BGIP Solvers with Branch-and-Bound search |
| CBGIP_SolverCreator_CE | BGIP_SolverCreator_CE creates BGIP Solvers with Cross Entropy |
| CBGIP_SolverCreator_MP | BGIP_SolverCreator_MP creates BGIP Solvers with Max Plus |
| CBGIP_SolverCreator_Random | BGIP_SolverCreator_Random creates a BGIP Solver that gives random solutions |
| CBGIP_SolverCreatorInterface | BGIP_SolverCreatorInterface is an interface for classes that create BGIP solvers |
| CBGIP_SolverCreatorInterface_T | BGIP_SolverCreatorInterface_T is an interface for classes that create BGIP solvers |
| CBGIP_SolverMaxPlus | BGIP_SolverMaxPlus is a class that performs max plus for BGIPs (without agent independence) |
| CBGIP_SolverRandom | BGIP_SolverRandom creates random solutions to Bayesian games for testing purposes |
| CBGIPSolution | BGIPSolution represents a solution for BayesianGameIdenticalPayoff |
| CBruteForceSearchPlanner | BruteForceSearchPlanner implements an exact solution algorithm |
| CCompareVec | |
| CCPDDiscreteInterface | CPDDiscreteInterface is an abstract base class that represents a conditional probability distribution |
| CCPDKroneckerDelta | CPDKroneckerDelta implements a Kronecker delta-style CPD |
| CCPT | CPT implements a conditional probability table |
| CDecPOMDP | DecPOMDP is a simple implementation of DecPOMDPInterface |
| CDecPOMDPDiscrete | DecPOMDPDiscrete represents a discrete DEC-POMDP model |
| CDecPOMDPDiscreteInterface | DecPOMDPDiscreteInterface is the interface for a discrete DEC-POMDP model: it defines the set/get reward functions |
| CDecPOMDPInterface | DecPOMDPInterface is an interface for DecPOMDPs |
| CDICEPSPlanner | DICEPSPlanner implements the Direct Cross-Entropy Policy Search method |
| CDiscreteEntity | DiscreteEntity is a general class for tracking discrete entities |
| CE | E is a class that represents a basic exception |
| CEDeadline | EDeadline represents a deadline exceeded exception |
| CEInvalidIndex | EInvalidIndex represents an invalid index exception |
| CENoSubScope | ENoSubScope represents a missing sub-scope exception |
| CENotCached | ENotCached represents an exception for requesting a quantity that was not cached |
| CEOverflow | EOverflow represents an integer overflow exception |
| CEParse | EParse represents a parser exception |
| CEventObservationModelMapping | EventObservationModelMapping implements an ObservationModelDiscrete which depends not only on the resulting state but also on the current state of the system, i.e. P(o(k+1) | s(k), ja(k), s(k+1)) |
| CEventObservationModelMappingSparse | EventObservationModelMappingSparse implements an ObservationModelDiscrete |
| CFactoredDecPOMDPDiscrete | FactoredDecPOMDPDiscrete implements a factored DecPOMDPDiscrete |
| CFactoredDecPOMDPDiscreteInterface | FactoredDecPOMDPDiscreteInterface is the interface for a Dec-POMDP with factored states |
| CFactoredMMDPDiscrete | |
| CFactoredQFunctionScopeForStage | FactoredQFunctionScopeForStage represents a Scope for one stage of a factored QFunction |
| CFactoredQFunctionStateJAOHInterface | FactoredQFunctionStateJAOHInterface represents Q-value functions for factored discrete Dec-POMDPs |
| CFactoredQLastTimeStepOrElse | FactoredQLastTimeStepOrElse is a class that represents a Q-function that is factored at the last stage, and non factored for earlier stages |
| CFactoredQLastTimeStepOrQBG | FactoredQLastTimeStepOrQBG is a class that represents a Q-Function that is factored for the last stage (i.e., the factored immediate reward function) and the (non-factored) QBG function for the earlier stages |
| CFactoredQLastTimeStepOrQMDP | FactoredQLastTimeStepOrQMDP is a class that represents a Q-Function that is factored for the last stage (i.e., the factored immediate reward function) and the (non-factored) QMDP function for the earlier stages |
| CFactoredQLastTimeStepOrQPOMDP | FactoredQLastTimeStepOrQPOMDP is a class that represents a Q-Function that is factored for the last stage (i.e., the factored immediate reward function) and the (non-factored) QPOMDP function for the earlier stages |
| CFactoredStateAOHDistribution | FactoredStateAOHDistribution is a class that represents a factored probability distribution over both states and action-observation histories |
| CFactoredStateDistribution | FactoredStateDistribution is a class that represents a base class for factored state distributions |
| CFG_Solver | |
| CFG_SolverMaxPlus | FG_SolverMaxPlus optimizes (maximizes) a factor graph using max plus |
| CFG_SolverNDP | FG_SolverNDP optimizes (maximizes) a factor graph using non-serial dynamic programming |
| CFixedCapacityPriorityQueue | FixedCapacityPriorityQueue is a class that represents a priority queue with a fixed size |
| CFSAOHDist_NECOF | FSAOHDist_NECOF is a class that represents a NEarly COmpletely Factored distribution over state factors and action-observation histories |
| CFSDist_COF | FSDist_COF is a class that represents a completely factored state distribution |
| CGeneralizedMAAStarPlanner | GeneralizedMAAStarPlanner is a class that represents the Generalized MAA* planner class |
| CGeneralizedMAAStarPlannerForDecPOMDPDiscrete | GeneralizedMAAStarPlannerForDecPOMDPDiscrete is a class that represents the Generalized MAA* planner |
| CGeneralizedMAAStarPlannerForFactoredDecPOMDPDiscrete | GeneralizedMAAStarPlannerForFactoredDecPOMDPDiscrete is a class that represents the Generalized MAA* planner |
| CGMAA_kGMAA | GMAA_kGMAA is a class that represents a GMAA planner that performs k-GMAA |
| CGMAA_kGMAACluster | GMAA_kGMAACluster is a class that represents a GMAA planner that performs k-GMAA with clustering |
| CGMAA_MAA_ELSI | GMAA_MAA_ELSI Generalized MAA* Exploiting Last-Stage Independence |
| CGMAA_MAAstar | GMAA_MAAstar is a class that represents a planner that performs MAA* as described by Szer et al |
| CGMAA_MAAstarClassic | GMAA_MAAstarClassic is a class that represents a planner that performs MAA* as described by Szer et al |
| CGMAA_MAAstarCluster | GMAA_MAAstarCluster is a class that represents a planner that performs MAA* as described by Szer et al |
| CHistory | History is a general class for histories |
| CIndividualBeliefJESP | IndividualBeliefJESP stores individual beliefs for the JESP algorithm |
| CIndividualHistory | IndividualHistory represents a history for a single agent |
| CInterface_ProblemToPolicyDiscrete | Interface_ProblemToPolicyDiscrete is an interface from discrete problems to policies |
| CInterface_ProblemToPolicyDiscretePure | Interface_ProblemToPolicyDiscretePure is an interface from discrete problems to pure policies |
| CJESPDynamicProgrammingPlanner | JESPDynamicProgrammingPlanner plans with the DP JESP algorithm |
| CJESPExhaustivePlanner | JESPExhaustivePlanner plans with the Exhaustive JESP algorithm |
| CJointAction | JointAction represents a joint action |
| CJointActionDiscrete | JointActionDiscrete represents discrete joint actions |
| CJointActionHistory | JointActionHistory represents a joint action history |
| CJointActionHistoryTree | JointActionHistoryTree is a wrapper for JointActionHistory |
| CJointActionObservationHistory | JointActionObservationHistory represents a joint action observation history |
| CJointActionObservationHistoryTree | JointActionObservationHistoryTree is derived from TreeNode, and similar to ObservationHistoryTree |
| CJointBelief | JointBelief stores a joint belief, represented as a regular (dense) vector of doubles |
| CJointBeliefEventDriven | JointBeliefEventDriven stores a joint belief, represented as a regular (dense) vector of doubles |
| CJointBeliefInterface | JointBeliefInterface represents an interface for joint beliefs |
| CJointBeliefSparse | JointBeliefSparse represents a sparse joint belief |
| CJointHistory | JointHistory represents a joint history, i.e., a history for each agent |
| CJointObservation | JointObservation represents joint observations |
| CJointObservationDiscrete | JointObservationDiscrete represents discrete joint observations |
| CJointObservationHistory | JointObservationHistory represents a joint observation history |
| CJointObservationHistoryTree | JointObservationHistoryTree is a class that represents a wrapper for the JointObservationHistory class |
| CJointPolicy | JointPolicy is a class that represents a joint policy |
| CJointPolicyDiscrete | JointPolicyDiscrete is a class that represents a discrete joint policy |
| CJointPolicyDiscretePure | JointPolicyDiscretePure represents a pure joint policy for a discrete MADP |
| CJointPolicyPureVector | JointPolicyPureVector represents a discrete pure joint policy |
| CJointPolicyPureVectorForClusteredBG | JointPolicyPureVectorForClusteredBG represents a joint policy for a clustered CBG |
| CJointPolicyValuePair | JointPolicyValuePair is a wrapper for a partial joint policy and its heuristic value |
| CJPolComponent_VectorImplementation | JPolComponent_VectorImplementation implements functionality common to several joint policy implementations |
| CJPPVIndexValuePair | JPPVIndexValuePair represents a (JointPolicyPureVector,Value) pair |
| CJPPVValuePair | JPPVValuePair represents a (JointPolicyPureVector,Value) pair, which stores the full JointPolicyPureVector |
| CLocalBGValueFunctionBGCGWrapper | LocalBGValueFunctionBGCGWrapper is a class that represents a wrapper for a BayesianGameCollaborativeGraphical such that it implements the LocalBGValueFunctionInterface |
| CLocalBGValueFunctionInterface | LocalBGValueFunctionInterface is a class that represents a local CGBG payoff function |
| CLocalBGValueFunctionVector | LocalBGValueFunctionVector is a vector implementation of LocalBGValueFunctionInterface to represent an |
| CMADPComponentDiscreteActions | MADPComponentDiscreteActions contains functionality for discrete action spaces |
| CMADPComponentDiscreteObservations | MADPComponentDiscreteObservations contains functionality for discrete observation spaces |
| CMADPComponentDiscreteStates | MADPComponentDiscreteStates is a class that represents a discrete state space |
| CMADPComponentFactoredStates | MADPComponentFactoredStates is a class that represents a factored states space |
| CMADPDiscreteStatistics | MADPDiscreteStatistics is a class that represents an object that can compute some statistics for a MADP Discrete |
| CMADPParser | MADPParser is a general class for parsers in MADP |
| CMaxPlusSolver | MaxPlusSolver is the base class for Max Plus methods, it stores the parameters |
| CMaxPlusSolverForBGs | MaxPlusSolverForBGs solves BG via Max Plus |
| CMDPPolicyIteration | MDPPolicyIteration implements policy iteration for MDPs |
| CMDPPolicyIterationGPU | MDPPolicyIterationGPU implements policy iteration for MDPs via GPU |
| CMDPSolver | MDPSolver is an interface for MDP solvers |
| CMDPValueIteration | MDPValueIteration implements value iteration for MDPs |
| CMonahanBGPlanner | MonahanBGPlanner is the Bayesian Game version of MonahanPOMDPPlanner |
| CMonahanPlanner | MonahanPlanner provides shared functionality for MonahanPOMDPPlanner and MonahanBGPlanner |
| CMonahanPOMDPPlanner | MonahanPOMDPPlanner implements Monahan's (1982) POMDP algorithm, which basically generates all possible next-step alpha vectors, followed by pruning |
| CMultiAgentDecisionProcess | MultiAgentDecisionProcess is a class that defines the primary properties of a decision process |
| CMultiAgentDecisionProcessDiscrete | MultiAgentDecisionProcessDiscrete defines the primary properties of a discrete decision process |
| ►CMultiAgentDecisionProcessDiscreteFactoredStates | MultiAgentDecisionProcessDiscreteFactoredStates is a class that represents the dynamics of a MAS with a factored state space |
| CBoundObservationProbFunctor | The BoundObservationProbFunctor class binds the "ComputeObservationProb" function to a templated object |
| CBoundScopeFunctor | The BoundScopeFunctor class binds the "SetScopes" function to a templated object |
| CBoundTransitionProbFunctor | The BoundTransitionProbFunctor class binds the "ComputeTransitionProb" function to a templated object |
| CEmptyObservationProbFunctor | Can be used by fully-observable subclasses of MultiAgentDecisionProcessDiscreteFactoredStates, in order to initialize the 2DBN without requiring an actual observation function |
| CObservationProbFunctor | This is the base class for functors that return the observation probability for a given (s,a,s',o) tuple |
| CScopeFunctor | This is the base class for functors that set the scopes of the 2-DBN |
| CTransitionProbFunctor | This is the base class for functors that return the transition probability for a given (s,a,s') tuple |
| CMultiAgentDecisionProcessDiscreteFactoredStatesInterface | MultiAgentDecisionProcessDiscreteFactoredStatesInterface is the interface for factored state problems |
| CMultiAgentDecisionProcessDiscreteInterface | MultiAgentDecisionProcessDiscreteInterface is an abstract base class that defines publicly accessible member functions that a discrete multiagent decision process must implement |
| CMultiAgentDecisionProcessInterface | MultiAgentDecisionProcessInterface is an abstract base class that declares the primary properties of a multiagent decision process |
| CNamedDescribedEntity | NamedDescribedEntity represents named entities |
| CNullPlanner | NullPlanner represents a planner which does nothing, but can be used to instantiate a PlanningUnitDecPOMDPDiscrete |
| CNullPlannerFactored | NullPlannerFactored represents a planner which does nothing, but can be used to instantiate a PlanningUnitDecFactoredPOMDPDiscrete |
| CNullPlannerTOI | NullPlannerTOI represents a planner which does nothing, but can be used to instantiate a PlanningUnitTOIDecPOMDPDiscrete |
| CObservation | Observation represents observations |
| CObservationDiscrete | ObservationDiscrete represents discrete observations |
| CObservationHistory | ObservationHistory represents an observation history of a single agent |
| CObservationHistoryTree | ObservationHistoryTree is a wrapper for the ObservationHistory class |
| CObservationModel | ObservationModel represents the observation model in a decision process |
| CObservationModelDiscrete | ObservationModelDiscrete represents a discrete observation model |
| CObservationModelDiscreteInterface | ObservationModelDiscreteInterface represents a discrete observation model |
| CObservationModelMapping | ObservationModelMapping implements an ObservationModelDiscrete |
| CObservationModelMappingSparse | ObservationModelMappingSparse implements an ObservationModelDiscrete |
| COGet | OGet can be used for direct access to the observation model |
| COGet_EventObservationModelMapping | |
| COGet_EventObservationModelMappingSparse | |
| COGet_ObservationModelMapping | OGet_ObservationModelMapping can be used for direct access to a ObservationModelMapping |
| COGet_ObservationModelMappingSparse | OGet_ObservationModelMappingSparse can be used for direct access to a ObservationModelMappingSparse |
| COnlineMDPPlanner | OnlineMDPPlanner provides an abstract base class for online MDP planners |
| COptimalValueDatabase | OptimalValueDatabase provides values of optimal policies for problems, to be used for verification purposes |
| CParserInterface | ParserInterface is an interface for parsers |
| CParserPOMDPDiscrete | |
| CParserProbModelXML | ParserProbModelXML is a parser for factored Dec-POMDP models written in ProbModelXML |
| CParserTOICompactRewardDecPOMDPDiscrete | ParserTOICompactRewardDecPOMDPDiscrete is a parser for TOICompactRewardDecPOMDPDiscrete |
| CParserTOIDecMDPDiscrete | ParserTOIDecMDPDiscrete is a parser for TOIDecMDPDiscrete |
| CParserTOIDecPOMDPDiscrete | ParserTOIDecPOMDPDiscrete is a parser for TOIDecPOMDPDiscrete |
| CParserTOIFactoredRewardDecPOMDPDiscrete | ParserTOIFactoredRewardDecPOMDPDiscrete is a parser for TransitionObservationIndependentFactoredRewardDecPOMDPDiscrete |
| CPartialJointPolicy | PartialJointPolicy represents a joint policy that is only specified for t time steps instead of for every time step |
| CPartialJointPolicyDiscretePure | PartialJointPolicyDiscretePure is a discrete and pure PartialJointPolicy |
| CPartialJointPolicyPureVector | PartialJointPolicyPureVector implements a PartialJointPolicy using a mapping of history indices to actions |
| CPartialJointPolicyValuePair | PartialJointPolicyValuePair is a wrapper for a partial joint policy and its heuristic value |
| CPartialJPDPValuePair | PartialJPDPValuePair represents a (PartialJointPolicyDiscretePure,Value) pair, which stores the full PartialJointPolicyDiscretePure |
| CPartialJPPVIndexValuePair | PartialJPPVIndexValuePair represents a (PartialJointPolicyPureVector,Value) pair |
| CPartialPolicyPoolInterface | PartialPolicyPoolInterface is an interface for PolicyPools containing Partial Joint Policies |
| CPartialPolicyPoolItemInterface | PartialPolicyPoolItemInterface is a class that gives the interface for a PolicyPoolItem |
| CPDDiscreteInterface | PDDiscreteInterface is an abstract base class that represents a joint probability distribution |
| CPerseus | Perseus contains basic functionality for the Perseus planner |
| CPerseusBGNSPlanner | PerseusBGNSPlanner implements the Perseus planning algorithm for BGs with non-stationary QFunctions |
| CPerseusBGPlanner | PerseusBGPlanner implements the Perseus planning algorithm for BGs |
| CPerseusBGPOMDPPlanner | PerseusBGPOMDPPlanner implements the Perseus planning algorithm for mixed BG/POMDP backups |
| CPerseusConstrainedPOMDPPlanner | The PerseusConstrainedPOMDPPlanner is a Perseus variant which skips action selection if the agent receives a "false-negative" observation, which in practice means that the agent cannot react to an event which it failed to detect |
| CPerseusNonStationary | PerseusNonStationary is Perseus for non-stationary policies |
| CPerseusNonStationaryQPlanner | PerseusNonStationaryQPlanner is a Perseus planner that uses non-stationary QFunctions |
| CPerseusPOMDPPlanner | PerseusPOMDPPlanner implements the Perseus planning algorithm for POMDPs |
| CPerseusQFunctionPlanner | PerseusQFunctionPlanner is a Perseus planner that uses QFunctions |
| CPerseusStationary | PerseusStationary is Perseus for stationary policies |
| CPerseusWeightedPlanner | PerseusWeightedPlanner implements the Perseus planning algorithm with a weighted BG/POMDP backup |
| CPlanningUnit | PlanningUnit represents a planning unit, i.e., a planning algorithm |
| CPlanningUnitDecPOMDPDiscrete | PlanningUnitDecPOMDPDiscrete represents a planning unit for discrete Dec-POMDPs |
| CPlanningUnitFactoredDecPOMDPDiscrete | PlanningUnitFactoredDecPOMDPDiscrete is a class that represents a planning unit for factored discrete Dec-POMDPs |
| CPlanningUnitMADPDiscrete | PlanningUnitMADPDiscrete represents a Planning unit for a discrete MADP (discrete actions, observations and states) |
| CPlanningUnitMADPDiscreteParameters | PlanningUnitMADPDiscreteParameters stores parameters of PlanningUnitMADPDiscrete |
| CPlanningUnitTOIDecPOMDPDiscrete | PlanningUnitTOIDecPOMDPDiscrete represents a planning unit for transition observation independent discrete Dec-POMDPs |
| CPolicy | Policy is a class that represents a policy for a single agent |
| CPolicyDiscrete | PolicyDiscrete is a class that represents a discrete policy |
| CPolicyDiscretePure | PolicyDiscretePure is an abstract class that represents a pure policy for a discrete MADP |
| CPolicyPoolInterface | PolicyPoolInterface is an interface for PolicyPools containing fully defined Joint Policies |
| CPolicyPoolItemInterface | PolicyPoolItemInterface is a class that gives the interface for a PolicyPoolItem |
| CPolicyPoolJPolValPair | PolicyPoolJPolValPair is a policy pool with joint policy - value pairs |
| CPolicyPoolPartialJPolValPair | PolicyPoolPartialJPolValPair is a policy pool with partial joint policy - value pairs |
| CPolicyPureVector | PolicyPureVector is a class that represents a pure (=deterministic) policy |
| CPOMDPDiscrete | POMDPDiscrete models discrete POMDPs |
| CPOSG | POSG is a simple implementation of POSGInterface |
| CPOSGDiscrete | POSGDiscrete represent a discrete POSG model |
| CPOSGDiscreteInterface | POSGDiscreteInterface is the interface for a discrete POSG model: it defines the set/get reward functions |
| CPOSGInterface | POSGInterface is an interface for POSGs |
| ►CProblem_CGBG_FF | Problem_CGBG_FF represents a generalized single-shot fire fighting problem |
| CdiPairComp | |
| CProblemAloha | |
| CProblemDecTiger | ProblemDecTiger implements the DecTiger problem |
| CProblemDecTigerWithCreaks | ProblemDecTigerWithCreaks implements a variation of the DecTiger problem, in which an agent can hear whether the other agent has tried to open a door |
| CProblemFireFighting | ProblemFireFighting is a class that represents the firefighting problem as described in refGMAA (DOC-references.h) |
| CProblemFireFightingFactored | ProblemFireFightingFactored is a factored implementation of the FireFighting problem introduced in (Oliehoek, Spaan, Vlassis, JAIR 32, 2008) |
| CProblemFireFightingGraph | ProblemFireFightingGraph is an implementation of the FactoredFireFighting problem introduced in (Oliehoek, Spaan, Whiteson, Vlassis, AAMAS 2008) |
| CProblemFOBSFireFightingFactored | This is a template of how to implement a fully-observable problem from scratch (by deriving from FactoredMMDPDiscrete) |
| CProblemFOBSFireFightingGraph | ProblemFOBSFireFightingGraph is an implementation of a fully observable FireFightingGraph problem |
| CQAlphaVector | QAlphaVector implements a QFunctionJointBelief using an alpha-vector based value function loaded from disk |
| CQAV | QAV implements a QFunctionJointBelief using a planner based on alpha functions, for instance the Perseus planners |
| CQAVParameters | |
| CQBG | QBG is a class that represents the QBG heuristic |
| CQBGPlanner_TreeIncPruneBnB | QBGPlanner_TreeIncPruneBnB computes vector-based QBG functions using tree-based incremental pruning |
| CQFunction | QFunction is an abstract base class containing nothing |
| CQFunctionForDecPOMDP | QFunctionForDecPOMDP is a class that represents a Q function for a Dec-POMDP |
| CQFunctionForDecPOMDPInterface | QFunctionForDecPOMDPInterface is a class that represents a Q function for a Dec-POMDP |
| CQFunctionForFactoredDecPOMDP | QFunctionForFactoredDecPOMDP is a base class for the implementation of a QFunction for a Factored DecPOMDP |
| CQFunctionForFactoredDecPOMDPInterface | QFunctionForFactoredDecPOMDPInterface is a class that represents the interface for a QFunction for a Factored DecPOMDP |
| CQFunctionInterface | QFunctionInterface is an abstract class for all Q-Functions |
| CQFunctionJAOH | QFunctionJAOH represents a Q-function that operates on joint action-observation histories |
| CQFunctionJAOHInterface | QFunctionJAOHInterface is a class that is an interface for heuristics of the shape Q(JointActionObservationHistory, JointAction) |
| CQFunctionJAOHTree | QFunctionJAOHTree represents a QFunctionJAOH which stores Q-values in a tree |
| CQFunctionJointBelief | QFunctionJointBelief represents a Q-function that operates on joint beliefs |
| CQFunctionJointBeliefInterface | QFunctionJointBeliefInterface is an interface for QFunctionJointBelief |
| CQHybrid | QHybrid is a class that represents the QHybrid heuristic |
| CQMDP | QMDP is a class that represents the QMDP heuristic |
| CQMonahanBG | QMonahanBG implements a QFunctionJAOH using MonahanBGPlanner |
| CQMonahanPOMDP | QMonahanPOMDP implements a QFunctionJAOH using MonahanPOMDPPlanner |
| CQPOMDP | QPOMDP is a class that represents the QPOMDP heuristic |
| CQTable | QTable implements QTableInterface using a full matrix |
| CQTableInterface | QTableInterface is the abstract base class for Q(., a) functions |
| CQTreeIncPruneBG | QTreeIncPruneBG implements a QFunctionJAOH using TreeIncPruneBGPlanner |
| CRewardModel | RewardModel represents the reward model in a decision process |
| CRewardModelDiscreteInterface | RewardModelDiscreteInterface is an interface for discrete reward models |
| CRewardModelMapping | RewardModelMapping represents a discrete reward model |
| CRewardModelMappingSparse | RewardModelMappingSparse represents a discrete reward model using sparse storage |
| CRewardModelMappingSparseMapped | RewardModelMappingSparseMapped represents a discrete reward model using sparse mapped storage |
| CRewardModelTOISparse | RewardModelTOISparse represents a discrete reward model based on vectors of states and actions |
| CRGet | RGet can be used for direct access to a reward model |
| CRGet_RewardModelMapping | RGet can be used for direct access to a RewardModelMapping |
| CRGet_RewardModelMappingSparse | RGet can be used for direct access to a RewardModelMappingSparse |
| CScope | |
| CSimulation | Simulation is a class that simulates policies in order to test their control quality |
| CSimulationAgent | SimulationAgent represents an agent for the Simulation class |
| CSimulationDecPOMDPDiscrete | SimulationDecPOMDPDiscrete simulates policies in DecPOMDPDiscrete's |
| CSimulationFactoredDecPOMDPDiscrete | SimulationFactoredDecPOMDPDiscrete simulates policies in FactoredDecPOMDPDiscrete's |
| CSimulationResult | SimulationResult stores the results from simulating a joint policy, the obtained rewards in particular |
| CSimulationTOIDecPOMDPDiscrete | SimulationTOIDecPOMDPDiscrete simulates policies in TOIDecPOMDPDiscrete's |
| CState | State is a class that represents states |
| CStateDiscrete | StateDiscrete represents discrete states |
| CStateDistribution | StateDistribution is an interface for probability distributions over states |
| CStateDistributionVector | StateDistributionVector represents a probability distribution over states as a vector of doubles |
| CStateFactorDiscrete | StateFactorDiscrete is a class that represents a state variable, or factor |
| CSystemOfLinearEquationsSolver | |
| CTGet | TGet can be used for direct access to the transition model |
| CTGet_TransitionModelMapping | TGet_TransitionModelMapping can be used for direct access to a TransitionModelMapping |
| CTGet_TransitionModelMappingSparse | TGet_TransitionModelMappingSparse can be used for direct access to a TransitionModelMappingSparse |
| CTimedAlgorithm | TimedAlgorithm allows for easy timekeeping of parts of an algorithm |
| ►CTiming | Timing provides a simple way of timing code |
| CTimes | Stores the start and end of a timespan, in clock cycles |
| CTOICompactRewardDecPOMDPDiscrete | TOICompactRewardDecPOMDPDiscrete is a class that represents a transition observation independent Dec-POMDP, in which the reward is the sum of each agent's individual reward plus some shared reward |
| CTOIDecMDPDiscrete | TOIDecMDPDiscrete is a class that represents a transition observation independent discrete DecMDP |
| CTOIDecPOMDPDiscrete | TOIDecPOMDPDiscrete is a class that represents a transition observation independent discrete DecPOMDP |
| CTOIFactoredRewardDecPOMDPDiscrete | TOIFactoredRewardDecPOMDPDiscrete is a class that represents a transition observation independent Dec-POMDP, in which the reward is the sum of each agent's individual reward plus some shared reward |
| CTransitionModel | TransitionModel represents the transition model in a decision process |
| CTransitionModelDiscrete | TransitionModelDiscrete represents a discrete transition model |
| CTransitionModelDiscreteInterface | TransitionModelDiscreteInterface represents a discrete transition model |
| CTransitionModelMapping | TransitionModelMapping implements a TransitionModelDiscrete |
| CTransitionModelMappingSparse | TransitionModelMappingSparse implements a TransitionModelDiscrete |
| CTransitionObservationIndependentMADPDiscrete | TransitionObservationIndependentMADPDiscrete is a base class that defines the primary properties of a transition and observation independent decision process |
| CTreeIncPruneBGPlanner | TreeIncPruneBGPlanner computes vector-based QBG functions using tree-based incremental pruning |
| CTreeNode | TreeNode represents a node in a tree of histories, for instance observation histories |
| CTwoStageDynamicBayesianNetwork | TwoStageDynamicBayesianNetwork (2DBN) is a class that represents the transition and observation model for a factored MADP |
| CType | Type is an abstract class that represents a Type (e.g |
| CType_AOHIndex | Type_AOHIndex is an implementation (extension) of Type and represents a type in e.g |
| CType_PointerTuple | Type_PointerTuple is an implementation (extension) of Type and represents a type in e.g |
| CTypeCluster | TypeCluster is a class that represents a cluster of Types |
| CValueFunction | ValueFunction is a class that represents a value function of a joint policy |
| CValueFunctionDecPOMDPDiscrete | ValueFunctionDecPOMDPDiscrete represents and calculates the value function of a (pure) joint policy for a discrete Dec-POMDP |