MultiAgentDecisionProcess
Class List
Here are the classes, structs, unions and interfaces with brief descriptions:
 NArgumentHandlersArgumentHandlers contains functionality for parsing and handling command-line arguments
 Ncomment_cbonlp
 NDPOMDPFormatParsing
 NPOMDPFormatParsing
 NstdSTL namespace
 CActionAction is a class that represents actions
 CActionDiscreteActionDiscrete represents discrete actions
 CActionHistoryActionHistory represents an action history of a single agent
 CActionHistoryTreeActionHistoryTree is a wrapper for ActionHistory
 CActionObservationHistoryActionObservationHistory represents an action-observation history of an agent
 CActionObservationHistoryTreeActionObservationHistoryTree is a wrapper for ActionObservationHistory
 CAgentAgent represents an agent
 CAgentBGAgentBG represents an agent which uses a BG-based policy
 CAgentDecPOMDPDiscreteAgentDecPOMDPDiscrete represents an agent in a discrete DecPOMDP setting
 CAgentDelayedSharedObservationsAgentDelayedSharedObservations represents an agent that acts on local observations and the shared observation at the previous time step
 CAgentFullyObservableAgentFullyObservable represents an agent that receives the true state, the joint observation and also the reward signal
 CAgentLocalObservationsAgentLocalObservations represents an agent that acts on local observations
 CAgentMDPAgentMDP represents an agent which uses a MDP-based policy
 CAgentOnlinePlanningMDPAgentOnlinePlanningMDP represents an agent with an online MDP policy
 CAgentPOMDPAgentPOMDP represents an agent which uses a POMDP-based policy
 CAgentQLearnerAgentQLearner applies standard single-agent Q-learning in the joint action and state space
 CAgentQMDPAgentQMDP represents an agent which uses a QMDP-based policy
 CAgentRandomAgentRandom represents an agent which chooses actions uniformly at random
 CAgentSharedObservationsAgentSharedObservations represents an agent that benefits from free communication, i.e., it can share all its observations
 CAgentTOIFullyObservableSyncedAgentTOIFullyObservableSynced represents an agent that receives the true state, the joint observation and also the reward signal
 CAgentTOIFullyObservableSyncedSpecialRewardAgentTOIFullyObservableSyncedSpecialReward represents an AgentTOIFullyObservableSynced
 CAlphaVectorAlphaVector represents an alpha vector used in POMDP solving
 CAlphaVectorBGAlphaVectorBG implements Bayesian Game specific functionality for alpha-vector based planning
 CAlphaVectorConstrainedPOMDPAlphaVectorConstrainedPOMDP implements Constrained POMDP specific functionality for alpha-vector based planning
 CAlphaVectorPlanningAlphaVectorPlanning provides base functionality for alpha-vector based POMDP or BG techniques
 CAlphaVectorPOMDPAlphaVectorPOMDP implements POMDP specific functionality for alpha-vector based planning
 CAlphaVectorPruningAlphaVectorPruning reduces sets of alpha vectors to their parsimonious representation via LP-based pruning
 CAlphaVectorWeightedAlphaVectorWeighted implements a weighted BG/POMDP backup
 CBayesianGameBayesianGame is a class that represents a general Bayesian game in which each agent has its own utility function
 CBayesianGameBaseBayesianGameBase is a class that represents a Bayesian game
 CBayesianGameCollaborativeGraphicalBayesianGameCollaborativeGraphical represents a collaborative graphical Bayesian game
 CBayesianGameForDecPOMDPStageBayesianGameForDecPOMDPStage represents a BG for a single stage
 CBayesianGameForDecPOMDPStageInterfaceBayesianGameForDecPOMDPStageInterface is a class that represents the base class for all Bayesian games that are used to represent a stage of a Dec-POMDP (e.g., in GMAA*)
 CBayesianGameIdenticalPayoffBayesianGameIdenticalPayoff is a class that represents a Bayesian game with identical payoffs
 CBayesianGameIdenticalPayoffInterfaceBayesianGameIdenticalPayoffInterface provides an interface for Bayesian Games with identical payoffs
 CBayesianGameIdenticalPayoffSolverBayesianGameIdenticalPayoffSolver is an interface for solvers for Bayesian games with identical payoff
 CBayesianGameIdenticalPayoffSolver_TBayesianGameIdenticalPayoffSolver_T is an interface for solvers for Bayesian games with identical payoff
 CBayesianGameWithClusterInfoBayesianGameWithClusterInfo represents an identical-payoff BG that can be clustered
 CBeliefBelief represents a probability distribution over the state space
 CBeliefInterfaceBeliefInterface is an interface for beliefs, i.e., probability distributions over the state space
 CBeliefIteratorBeliefIterator is an iterator for dense beliefs
 CBeliefIteratorGenericBeliefIteratorGeneric is an iterator for beliefs
 CBeliefIteratorInterfaceBeliefIteratorInterface is an interface for iterators over beliefs
 CBeliefIteratorSparseBeliefIteratorSparse is an iterator for sparse beliefs
 CBeliefSetNonStationaryBeliefSetNonStationary represents a non-stationary belief set
 CBeliefSparseBeliefSparse represents a probability distribution over the state space
 CBG_FactorGraphCreatorBG_FactorGraphCreator will create a FG from a BG
 CBGCG_SolverBGCG_Solver is a base class for solvers for collaborative graphical Bayesian games
 CBGCG_SolverCreatorBGCG_SolverCreator is a class that represents an object that can create a BGCG_Solver
 CBGCG_SolverCreator_FGBGCG_SolverCreator_FG creates factor-graph-based BGCG Solvers (BGCG_SolverFG)
 CBGCG_SolverCreator_MPBGCG_SolverCreator_MP creates BGCG Solvers with Max Plus
 CBGCG_SolverCreator_NDPBGCG_SolverCreator_NDP creates BGCG Solvers with Non-serial Dynamic Programming
 CBGCG_SolverCreator_RandomBGCG_SolverCreator_Random creates a BGCG Solver that gives random solutions
 CBGCG_SolverFG
 CBGCG_SolverMaxPlusBGCG_SolverMaxPlus is a class that performs max plus for BGIPs with agent independence
 CBGCG_SolverNonserialDynamicProgrammingBGCG_SolverNonserialDynamicProgramming implements non-serial dynamic programming for collaborative graphical Bayesian games
 CBGCG_SolverRandomBGCG_SolverRandom is a class that represents a BGCG solver that returns a random joint BG policy
 CBGforStageCreationBGforStageCreation is a class that provides some functions to aid the construction of Bayesian games for a stage of a Dec-POMDP
 CBGIP_BnB_NodeBGIP_BnB_Node represents a node in the search tree of BGIP_SolverBranchAndBound
 CBGIP_IncrementalSolverCreatorInterface_TBGIP_IncrementalSolverCreatorInterface_T is an interface for classes that create BGIP solvers
 CBGIP_IncrementalSolverInterfaceBGIP_IncrementalSolverInterface is an interface for BGIP_Solvers that can incrementally return multiple solutions
 CBGIP_IncrementalSolverInterface_TBGIP_IncrementalSolverInterface_T is an interface for BGIP_Solvers that can incrementally return multiple solutions
 CBGIP_SolverAlternatingMaximizationBGIP_SolverAlternatingMaximization implements an approximate solver for identical payoff Bayesian games, based on alternating maximization
 CBGIP_SolverBFSNonIncrementalBGIP_SolverBFSNonIncremental is a class that performs Brute force search for identical payoff Bayesian Games
 CBGIP_SolverBranchAndBoundBGIP_SolverBranchAndBound is a class that performs Branch-and-Bound search for identical payoff Bayesian Games
 CBGIP_SolverBruteForceSearchBGIP_SolverBruteForceSearch is a class that performs Brute force search for identical payoff Bayesian Games
 CBGIP_SolverCEBGIP_SolverCE is a class that performs Cross Entropy optimization for identical payoff Bayesian Games
 CBGIP_SolverCreator_AMBGIP_SolverCreator_AM creates BGIP Solvers with Alternating Maximization
 CBGIP_SolverCreator_BFSBGIP_SolverCreator_BFS creates BGIP Solvers with Brute Force Search
 CBGIP_SolverCreator_BFSNonIncBGIP_SolverCreator_BFSNonInc creates BGIP Solvers with Brute Force Search
 CBGIP_SolverCreator_BnBBGIP_SolverCreator_BnB creates BGIP Solvers with Branch-and-Bound search
 CBGIP_SolverCreator_CEBGIP_SolverCreator_CE creates BGIP Solvers with Cross Entropy
 CBGIP_SolverCreator_MPBGIP_SolverCreator_MP creates BGIP Solvers with Max Plus
 CBGIP_SolverCreator_RandomBGIP_SolverCreator_Random creates a BGIP Solver that gives random solutions
 CBGIP_SolverCreatorInterfaceBGIP_SolverCreatorInterface is an interface for classes that create BGIP solvers
 CBGIP_SolverCreatorInterface_TBGIP_SolverCreatorInterface_T is an interface for classes that create BGIP solvers
 CBGIP_SolverMaxPlusBGIP_SolverMaxPlus is a class that performs max plus for BGIPs (without agent independence)
 CBGIP_SolverRandomBGIP_SolverRandom creates random solutions to Bayesian games for testing purposes
 CBGIPSolutionBGIPSolution represents a solution for BayesianGameIdenticalPayoff
 CBruteForceSearchPlannerBruteForceSearchPlanner implements an exact solution algorithm
 CCompareVec
 CCPDDiscreteInterfaceCPDDiscreteInterface is an abstract base class that represents a conditional probability distribution $ Pr(x|y) $
 CCPDKroneckerDeltaCPDKroneckerDelta implements a Kronecker delta-style CPD
 CCPTCPT implements a conditional probability table
 CDecPOMDPDecPOMDP is a simple implementation of DecPOMDPInterface
 CDecPOMDPDiscreteDecPOMDPDiscrete represent a discrete DEC-POMDP model
 CDecPOMDPDiscreteInterfaceDecPOMDPDiscreteInterface is the interface for a discrete DEC-POMDP model: it defines the set/get reward functions
 CDecPOMDPInterfaceDecPOMDPInterface is an interface for DecPOMDPs
 CDICEPSPlannerDICEPSPlanner implements the Direct Cross-Entropy Policy Search method
 CDiscreteEntityDiscreteEntity is a general class for tracking discrete entities
 CEE is a class that represents a basic exception
 CEDeadlineEDeadline represents a deadline exceeded exception
 CEInvalidIndexEInvalidIndex represents an invalid index exception
 CENoSubScopeENoSubScope represents an exception thrown when a requested sub-scope cannot be found
 CENotCachedENotCached represents an exception thrown when requested data has not been cached
 CEOverflowEOverflow represents an integer overflow exception
 CEParseEParse represents a parser exception
 CEventObservationModelMappingEventObservationModelMapping implements an ObservationModelDiscrete which depends not only on the resulting state but also on the current state of the system, i.e. P(o(k+1) | s(k), ja(k), s(k+1))
 CEventObservationModelMappingSparseEventObservationModelMappingSparse implements an ObservationModelDiscrete
 CFactoredDecPOMDPDiscreteFactoredDecPOMDPDiscrete implements a factored DecPOMDPDiscrete
 CFactoredDecPOMDPDiscreteInterfaceFactoredDecPOMDPDiscreteInterface is the interface for a Dec-POMDP with factored states
 CFactoredMMDPDiscrete
 CFactoredQFunctionScopeForStageFactoredQFunctionScopeForStage represents a Scope for one stage of a factored QFunction
 CFactoredQFunctionStateJAOHInterfaceFactoredQFunctionStateJAOHInterface represents Q-value functions for factored discrete Dec-POMDPs
 CFactoredQLastTimeStepOrElseFactoredQLastTimeStepOrElse is a class that represents a Q-function that is factored at the last stage, and non factored for earlier stages
 CFactoredQLastTimeStepOrQBGFactoredQLastTimeStepOrQBG is a class that represents a Q-Function that is factored for the last stage (i.e., the factored immediate reward function) and the (non-factored) QBG function for the earlier stages
 CFactoredQLastTimeStepOrQMDPFactoredQLastTimeStepOrQMDP is a class that represents a Q-Function that is factored for the last stage (i.e., the factored immediate reward function) and the (non-factored) QMDP function for the earlier stages
 CFactoredQLastTimeStepOrQPOMDPFactoredQLastTimeStepOrQPOMDP is a class that represents a Q-Function that is factored for the last stage (i.e., the factored immediate reward function) and the (non-factored) QPOMDP function for the earlier stages
 CFactoredStateAOHDistributionFactoredStateAOHDistribution is a class that represents a factored probability distribution over both states and action-observation histories
 CFactoredStateDistributionFactoredStateDistribution is a class that represents a base class for factored state distributions
 CFG_Solver
 CFG_SolverMaxPlusFG_SolverMaxPlus optimizes (maximizes) a factor graph using max plus
 CFG_SolverNDPFG_SolverNDP optimizes (maximizes) a factor graph using non-serial dynamic programming
 CFixedCapacityPriorityQueueFixedCapacityPriorityQueue is a class that represents a priority queue with a fixed size
 CFSAOHDist_NECOFFSAOHDist_NECOF is a class that represents a NEarly COmpletely Factored distribution over state factors and action-observation histories
 CFSDist_COFFSDist_COF is a class that represents a completely factored state distribution
 CGeneralizedMAAStarPlannerGeneralizedMAAStarPlanner is a class that represents the Generalized MAA* planner class
 CGeneralizedMAAStarPlannerForDecPOMDPDiscreteGeneralizedMAAStarPlannerForDecPOMDPDiscrete is a class that represents the Generalized MAA* planner
 CGeneralizedMAAStarPlannerForFactoredDecPOMDPDiscreteGeneralizedMAAStarPlannerForFactoredDecPOMDPDiscrete is a class that represents the Generalized MAA* planner
 CGMAA_kGMAAGMAA_kGMAA is a class that represents a GMAA planner that performs k-GMAA, i.e., it retains only the k best-ranked policies at each stage
 CGMAA_kGMAAClusterGMAA_kGMAACluster is a class that represents a GMAA planner that performs k-GMAA on clustered Bayesian games
 CGMAA_MAA_ELSIGMAA_MAA_ELSI implements Generalized MAA* Exploiting Last-Stage Independence
 CGMAA_MAAstarGMAA_MAAstar is a class that represents a planner that performs MAA* as described by Szer et al.
 CGMAA_MAAstarClassicGMAA_MAAstarClassic is a class that represents a planner that performs MAA* as described by Szer et al.
 CGMAA_MAAstarClusterGMAA_MAAstarCluster is a class that represents a planner that performs MAA* as described by Szer et al.
 CHistoryHistory is a general class for histories
 CIndividualBeliefJESPIndividualBeliefJESP stores individual beliefs for the JESP algorithm
 CIndividualHistoryIndividualHistory represents a history for a single agent
 CInterface_ProblemToPolicyDiscreteInterface_ProblemToPolicyDiscrete is an interface from discrete problems to policies
 CInterface_ProblemToPolicyDiscretePureInterface_ProblemToPolicyDiscretePure is an interface from discrete problems to pure policies
 CJESPDynamicProgrammingPlannerJESPDynamicProgrammingPlanner plans with the DP JESP algorithm
 CJESPExhaustivePlannerJESPExhaustivePlanner plans with the Exhaustive JESP algorithm
 CJointActionJointAction represents a joint action
 CJointActionDiscreteJointActionDiscrete represents discrete joint actions
 CJointActionHistoryJointActionHistory represents a joint action history
 CJointActionHistoryTreeJointActionHistoryTree is a wrapper for JointActionHistory
 CJointActionObservationHistoryJointActionObservationHistory represents a joint action observation history
 CJointActionObservationHistoryTreeJointActionObservationHistoryTree is derived from TreeNode, and similar to ObservationHistoryTree
 CJointBeliefJointBelief stores a joint belief, represented as a regular (dense) vector of doubles
 CJointBeliefEventDrivenJointBeliefEventDriven stores a joint belief, represented as a regular (dense) vector of doubles
 CJointBeliefInterfaceJointBeliefInterface represents an interface for joint beliefs
 CJointBeliefSparseJointBeliefSparse represents a sparse joint belief
 CJointHistoryJointHistory represents a joint history, i.e., a history for each agent
 CJointObservationJointObservation represents joint observations
 CJointObservationDiscreteJointObservationDiscrete represents discrete joint observations
 CJointObservationHistoryJointObservationHistory represents a joint observation history
 CJointObservationHistoryTreeJointObservationHistoryTree is a class that represents a wrapper for the JointObservationHistory class
 CJointPolicyJointPolicy is a class that represents a joint policy
 CJointPolicyDiscreteJointPolicyDiscrete is a class that represents a discrete joint policy
 CJointPolicyDiscretePureJointPolicyDiscretePure represents a pure joint policy for a discrete MADP
 CJointPolicyPureVectorJointPolicyPureVector represents a discrete pure joint policy
 CJointPolicyPureVectorForClusteredBGJointPolicyPureVectorForClusteredBG represents a joint policy for a clustered CBG
 CJointPolicyValuePairJointPolicyValuePair is a wrapper for a partial joint policy and its heuristic value
 CJPolComponent_VectorImplementationJPolComponent_VectorImplementation implements functionality common to several joint policy implementations
 CJPPVIndexValuePairJPPVIndexValuePair represents a (JointPolicyPureVector,Value) pair
 CJPPVValuePairJPPVValuePair represents a (JointPolicyPureVector,Value) pair, which stores the full JointPolicyPureVector
 CLocalBGValueFunctionBGCGWrapperLocalBGValueFunctionBGCGWrapper is a class that represents a wrapper for a BayesianGameCollaborativeGraphical such that it implements the LocalBGValueFunctionInterface
 CLocalBGValueFunctionInterfaceLocalBGValueFunctionInterface is a class that represents a local CGBG payoff function
 CLocalBGValueFunctionVectorLocalBGValueFunctionVector is a vector implementation of LocalBGValueFunctionInterface to represent an $ u^e( \beta_e ) $
 CMADPComponentDiscreteActionsMADPComponentDiscreteActions contains functionality for discrete action spaces
 CMADPComponentDiscreteObservationsMADPComponentDiscreteObservations contains functionality for discrete observation spaces
 CMADPComponentDiscreteStatesMADPComponentDiscreteStates is a class that represents a discrete state space
 CMADPComponentFactoredStatesMADPComponentFactoredStates is a class that represents a factored states space
 CMADPDiscreteStatisticsMADPDiscreteStatistics is a class that represents an object that can compute some statistics for a discrete MADP
 CMADPParserMADPParser is a general class for parsers in MADP
 CMaxPlusSolverMaxPlusSolver is the base class for Max Plus methods, it stores the parameters
 CMaxPlusSolverForBGsMaxPlusSolverForBGs solves BG via Max Plus
 CMDPPolicyIterationMDPPolicyIteration implements policy iteration for MDPs
 CMDPPolicyIterationGPUMDPPolicyIterationGPU implements policy iteration for MDPs via GPU
 CMDPSolverMDPSolver is an interface for MDP solvers
 CMDPValueIterationMDPValueIteration implements value iteration for MDPs
 CMonahanBGPlannerMonahanBGPlanner is the Bayesian Game version of MonahanPOMDPPlanner
 CMonahanPlannerMonahanPlanner provides shared functionality for MonahanPOMDPPlanner and MonahanBGPlanner
 CMonahanPOMDPPlannerMonahanPOMDPPlanner implements Monahan's (1982) POMDP algorithm, which basically generates all possible next-step alpha vectors, followed by pruning
 CMultiAgentDecisionProcessMultiAgentDecisionProcess is a class that defines the primary properties of a decision process
 CMultiAgentDecisionProcessDiscreteMultiAgentDecisionProcessDiscrete defines the primary properties of a discrete decision process
 CMultiAgentDecisionProcessDiscreteFactoredStatesMultiAgentDecisionProcessDiscreteFactoredStates is a class that represents the dynamics of a MAS with a factored state space
 CMultiAgentDecisionProcessDiscreteFactoredStatesInterfaceMultiAgentDecisionProcessDiscreteFactoredStatesInterface is the interface for factored state problems
 CMultiAgentDecisionProcessDiscreteInterfaceMultiAgentDecisionProcessDiscreteInterface is an abstract base class that defines publicly accessible member functions that a discrete multiagent decision process must implement
 CMultiAgentDecisionProcessInterfaceMultiAgentDecisionProcessInterface is an abstract base class that declares the primary properties of a multiagent decision process
 CNamedDescribedEntityNamedDescribedEntity represents named entities
 CNullPlannerNullPlanner represents a planner which does nothing, but can be used to instantiate a PlanningUnitDecPOMDPDiscrete
 CNullPlannerFactoredNullPlannerFactored represents a planner which does nothing, but can be used to instantiate a PlanningUnitDecFactoredPOMDPDiscrete
 CNullPlannerTOINullPlannerTOI represents a planner which does nothing, but can be used to instantiate a PlanningUnitTOIDecPOMDPDiscrete
 CObservationObservation represents observations
 CObservationDiscreteObservationDiscrete represents discrete observations
 CObservationHistoryObservationHistory represents an observation history of a single agent
 CObservationHistoryTreeObservationHistoryTree is a wrapper for the ObservationHistory class
 CObservationModelObservationModel represents the observation model in a decision process
 CObservationModelDiscreteObservationModelDiscrete represents a discrete observation model
 CObservationModelDiscreteInterfaceObservationModelDiscreteInterface represents a discrete observation model
 CObservationModelMappingObservationModelMapping implements an ObservationModelDiscrete
 CObservationModelMappingSparseObservationModelMappingSparse implements an ObservationModelDiscrete
 COGetOGet can be used for direct access to the observation model
 COGet_EventObservationModelMapping
 COGet_EventObservationModelMappingSparse
 COGet_ObservationModelMappingOGet_ObservationModelMapping can be used for direct access to a ObservationModelMapping
 COGet_ObservationModelMappingSparseOGet_ObservationModelMappingSparse can be used for direct access to a ObservationModelMappingSparse
 COnlineMDPPlannerOnlineMDPPlanner provides an abstract base class for online MDP planners
 COptimalValueDatabaseOptimalValueDatabase provides values of optimal policies for problems, to be used for verification purposes
 CParserInterfaceParserInterface is an interface for parsers
 CParserPOMDPDiscrete
 CParserProbModelXMLParserProbModelXML is a parser for factored Dec-POMDP models written in ProbModelXML
 CParserTOICompactRewardDecPOMDPDiscreteParserTOICompactRewardDecPOMDPDiscrete is a parser for TOICompactRewardDecPOMDPDiscrete
 CParserTOIDecMDPDiscreteParserTOIDecMDPDiscrete is a parser for TOIDecMDPDiscrete
 CParserTOIDecPOMDPDiscreteParserTOIDecPOMDPDiscrete is a parser for TOIDecPOMDPDiscrete
 CParserTOIFactoredRewardDecPOMDPDiscreteParserTOIFactoredRewardDecPOMDPDiscrete is a parser for TransitionObservationIndependentFactoredRewardDecPOMDPDiscrete
 CPartialJointPolicyPartialJointPolicy represents a joint policy that is only specified for t time steps instead of for every time step
 CPartialJointPolicyDiscretePurePartialJointPolicyDiscretePure is a discrete and pure PartialJointPolicy
 CPartialJointPolicyPureVectorPartialJointPolicyPureVector implements a PartialJointPolicy using a mapping of history indices to actions
 CPartialJointPolicyValuePairPartialJointPolicyValuePair is a wrapper for a partial joint policy and its heuristic value
 CPartialJPDPValuePairPartialJPDPValuePair represents a (PartialJointPolicyDiscretePure,Value) pair, which stores the full PartialJointPolicyDiscretePure
 CPartialJPPVIndexValuePairPartialJPPVIndexValuePair represents a (PartialJointPolicyPureVector,Value) pair
 CPartialPolicyPoolInterfacePartialPolicyPoolInterface is an interface for PolicyPools containing Partial Joint Policies
 CPartialPolicyPoolItemInterfacePartialPolicyPoolItemInterface is a class that gives the interface for a PolicyPoolItem
 CPDDiscreteInterfacePDDiscreteInterface is an abstract base class that represents a joint probability distribution $ Pr(x_1,\dots,x_k ) $
 CPerseusPerseus contains basic functionality for the Perseus planner
 CPerseusBGNSPlannerPerseusBGNSPlanner implements the Perseus planning algorithm for BGs with non-stationary QFunctions
 CPerseusBGPlannerPerseusBGPlanner implements the Perseus planning algorithm for BGs
 CPerseusBGPOMDPPlannerPerseusBGPOMDPPlanner implements the Perseus planning algorithm for mixed BG/POMDP backups
 CPerseusConstrainedPOMDPPlannerThe PerseusConstrainedPOMDPPlanner is a Perseus variant which skips action selection if the agent receives a "false-negative" observation, which in practice means that the agent cannot react to an event which it failed to detect
 CPerseusNonStationaryPerseusNonStationary is Perseus for non-stationary policies
 CPerseusNonStationaryQPlannerPerseusNonStationaryQPlanner is a Perseus planner that uses non-stationary QFunctions
 CPerseusPOMDPPlannerPerseusPOMDPPlanner implements the Perseus planning algorithm for POMDPs
 CPerseusQFunctionPlannerPerseusQFunctionPlanner is a Perseus planner that uses QFunctions
 CPerseusStationaryPerseusStationary is Perseus for stationary policies
 CPerseusWeightedPlannerPerseusWeightedPlanner implements the Perseus planning algorithm with a weighted BG/POMDP backup
 CPlanningUnitPlanningUnit represents a planning unit, i.e., a planning algorithm
 CPlanningUnitDecPOMDPDiscretePlanningUnitDecPOMDPDiscrete represents a planning unit for discrete Dec-POMDPs
 CPlanningUnitFactoredDecPOMDPDiscretePlanningUnitFactoredDecPOMDPDiscrete is a class that represents a planning unit for factored discrete Dec-POMDPs
 CPlanningUnitMADPDiscretePlanningUnitMADPDiscrete represents a Planning unit for a discrete MADP (discrete actions, observations and states)
 CPlanningUnitMADPDiscreteParametersPlanningUnitMADPDiscreteParameters stores parameters of PlanningUnitMADPDiscrete
 CPlanningUnitTOIDecPOMDPDiscretePlanningUnitTOIDecPOMDPDiscrete represents a planning unit for transition observation independent discrete Dec-POMDPs
 CPolicyPolicy is a class that represents a policy for a single agent
 CPolicyDiscretePolicyDiscrete is a class that represents a discrete policy
 CPolicyDiscretePurePolicyDiscretePure is an abstract class that represents a pure policy for a discrete MADP
 CPolicyPoolInterfacePolicyPoolInterface is an interface for PolicyPools containing fully defined Joint Policies
 CPolicyPoolItemInterfacePolicyPoolItemInterface is a class that gives the interface for a PolicyPoolItem
 CPolicyPoolJPolValPairPolicyPoolJPolValPair is a policy pool with joint policy - value pairs
 CPolicyPoolPartialJPolValPairPolicyPoolPartialJPolValPair is a policy pool with partial joint policy - value pairs
 CPolicyPureVectorPolicyPureVector is a class that represents a pure (=deterministic) policy
 CPOMDPDiscretePOMDPDiscrete models discrete POMDPs
 CPOSGPOSG is a simple implementation of POSGInterface
 CPOSGDiscretePOSGDiscrete represent a discrete POSG model
 CPOSGDiscreteInterfacePOSGDiscreteInterface is the interface for a discrete POSG model: it defines the set/get reward functions
 CPOSGInterfacePOSGInterface is an interface for POSGs
 CProblem_CGBG_FFProblem_CGBG_FF represents a generalized single-shot fire fighting problem
 CProblemAloha
 CProblemDecTigerProblemDecTiger implements the DecTiger problem
 CProblemDecTigerWithCreaksProblemDecTigerWithCreaks implements a variation of the DecTiger problem, in which an agent can hear whether the other agent has tried to open a door
 CProblemFireFightingProblemFireFighting is a class that represents the firefighting problem as described in the GMAA reference (see DOC-references.h)
 CProblemFireFightingFactoredProblemFireFightingFactored is a factored implementation of the FireFighting problem introduced in (Oliehoek, Spaan, Vlassis, JAIR 32, 2008)
 CProblemFireFightingGraphProblemFireFightingGraph is an implementation of the FactoredFireFighting problem introduced in (Oliehoek, Spaan, Whiteson, Vlassis, AAMAS 2008)
 CProblemFOBSFireFightingFactoredThis is a template of how to implement a fully-observable problem from scratch (by deriving from FactoredMMDPDiscrete)
 CProblemFOBSFireFightingGraphProblemFOBSFireFightingGraph is an implementation of a fully observable FireFightingGraph problem
 CQAlphaVectorQAlphaVector implements a QFunctionJointBelief using an alpha-vector based value function loaded from disk
 CQAVQAV implements a QFunctionJointBelief using a planner based on alpha functions, for instance the Perseus planners
 CQAVParameters
 CQBGQBG is a class that represents the QBG heuristic
 CQBGPlanner_TreeIncPruneBnBQBGPlanner_TreeIncPruneBnB computes vector-based QBG functions using tree-based incremental pruning
 CQFunctionQFunction is an abstract base class containing nothing
 CQFunctionForDecPOMDPQFunctionForDecPOMDP is a class that represents a Q function for a Dec-POMDP
 CQFunctionForDecPOMDPInterfaceQFunctionForDecPOMDPInterface is a class that represents a Q function for a Dec-POMDP
 CQFunctionForFactoredDecPOMDPQFunctionForFactoredDecPOMDP is a base class for the implementation of a QFunction for a Factored DecPOMDP
 CQFunctionForFactoredDecPOMDPInterfaceQFunctionForFactoredDecPOMDPInterface is a class that represents the interface for a QFunction for a Factored DecPOMDP
 CQFunctionInterfaceQFunctionInterface is an abstract class for all Q-Functions
 CQFunctionJAOHQFunctionJAOH represents a Q-function that operates on joint action-observation histories
 CQFunctionJAOHInterfaceQFunctionJAOHInterface is a class that is an interface for heuristics of the shape Q(JointActionObservationHistory, JointAction)
 CQFunctionJAOHTreeQFunctionJAOHTree represents a QFunctionJAOH that stores Q-values in a tree
 CQFunctionJointBeliefQFunctionJointBelief represents a Q-function that operates on joint beliefs
 CQFunctionJointBeliefInterfaceQFunctionJointBeliefInterface is an interface for QFunctionJointBelief
 CQHybridQHybrid is a class that represents the QHybrid heuristic
 CQMDPQMDP is a class that represents the QMDP heuristic
 CQMonahanBGQMonahanBG implements a QFunctionJAOH using MonahanBGPlanner
 CQMonahanPOMDPQMonahanPOMDP implements a QFunctionJAOH using MonahanPOMDPPlanner
 CQPOMDPQPOMDP is a class that represents the QPOMDP heuristic
 CQTableQTable implements QTableInterface using a full matrix
 CQTableInterfaceQTableInterface is the abstract base class for Q(., a) functions
 CQTreeIncPruneBGQTreeIncPruneBG implements a QFunctionJAOH using TreeIncPruneBGPlanner
 CRewardModelRewardModel represents the reward model in a decision process
 CRewardModelDiscreteInterfaceRewardModelDiscreteInterface is an interface for discrete reward models
 CRewardModelMappingRewardModelMapping represents a discrete reward model
 CRewardModelMappingSparseRewardModelMappingSparse represents a discrete reward model
 CRewardModelMappingSparseMappedRewardModelMappingSparseMapped represents a discrete reward model
 CRewardModelTOISparseRewardModelTOISparse represents a discrete reward model based on vectors of states and actions
 CRGetRGet can be used for direct access to a reward model
 CRGet_RewardModelMappingRGet can be used for direct access to a RewardModelMapping
 CRGet_RewardModelMappingSparseRGet can be used for direct access to a RewardModelMappingSparse
 CScope
 CSimulationSimulation is a class that simulates policies in order to test their control quality
 CSimulationAgentSimulationAgent represents an agent for the Simulation class
 CSimulationDecPOMDPDiscreteSimulationDecPOMDPDiscrete simulates policies in DecPOMDPDiscrete problems
 CSimulationFactoredDecPOMDPDiscreteSimulationFactoredDecPOMDPDiscrete simulates policies in FactoredDecPOMDPDiscrete problems
 CSimulationResultSimulationResult stores the results from simulating a joint policy, the obtained rewards in particular
 CSimulationTOIDecPOMDPDiscreteSimulationTOIDecPOMDPDiscrete simulates policies in TOIDecPOMDPDiscrete problems
 CStateState is a class that represents states
 CStateDiscreteStateDiscrete represents discrete states
 CStateDistributionStateDistribution is an interface for probability distributions over states
 CStateDistributionVectorStateDistributionVector represents a probability distribution over states as a vector of doubles
 CStateFactorDiscreteStateFactorDiscrete is a class that represents a state variable, or factor
 CSystemOfLinearEquationsSolver
 CTGetTGet can be used for direct access to the transition model
 CTGet_TransitionModelMappingTGet_TransitionModelMapping can be used for direct access to a TransitionModelMapping
 CTGet_TransitionModelMappingSparseTGet_TransitionModelMappingSparse can be used for direct access to a TransitionModelMappingSparse
 CTimedAlgorithmTimedAlgorithm allows for easy timekeeping of parts of an algorithm
 CTimingTiming provides a simple way of timing code
 CTOICompactRewardDecPOMDPDiscreteTOICompactRewardDecPOMDPDiscrete is a class that represents a transition observation independent Dec-POMDP, in which the reward is the sum of each agent's individual reward plus some shared reward
 CTOIDecMDPDiscreteTOIDecMDPDiscrete is a class that represents a transition observation independent discrete DecMDP
 CTOIDecPOMDPDiscreteTOIDecPOMDPDiscrete is a class that represents a transition observation independent discrete DecPOMDP
 CTOIFactoredRewardDecPOMDPDiscreteTOIFactoredRewardDecPOMDPDiscrete is a class that represents a transition observation independent Dec-POMDP, in which the reward is the sum of each agent's individual reward plus some shared reward
 CTransitionModelTransitionModel represents the transition model in a decision process
 CTransitionModelDiscreteTransitionModelDiscrete represents a discrete transition model
 CTransitionModelDiscreteInterfaceTransitionModelDiscreteInterface represents a discrete transition model
 CTransitionModelMappingTransitionModelMapping implements a TransitionModelDiscrete
 CTransitionModelMappingSparseTransitionModelMappingSparse implements a TransitionModelDiscrete
 CTransitionObservationIndependentMADPDiscreteTransitionObservationIndependentMADPDiscrete is a base class that defines the primary properties of a Transition and Observation independent decision process
 CTreeIncPruneBGPlannerTreeIncPruneBGPlanner computes vector-based QBG functions using tree-based incremental pruning
 CTreeNodeTreeNode represents a node in a tree of histories, for instance observation histories
 CTwoStageDynamicBayesianNetworkTwoStageDynamicBayesianNetwork (2DBN) is a class that represents the transition and observation model for a factored MADP
 CTypeType is an abstract class that represents a Type
 CType_AOHIndexType_AOHIndex is an implementation (extension) of Type
 CType_PointerTupleType_PointerTuple is an implementation (extension) of Type
 CTypeClusterTypeCluster is a class that represents a cluster of Types
 CValueFunctionValueFunction is a class that represents a value function of a joint policy
 CValueFunctionDecPOMDPDiscreteValueFunctionDecPOMDPDiscrete represents and calculates the value function of a (pure) joint policy for a discrete Dec-POMDP