MultiAgentDecisionProcess
Globals Namespace Reference

Globals contains several definitions global to the MADP toolbox.

Typedefs

typedef unsigned int Index
 A general index.
 
typedef unsigned long long int LIndex
 A long long index, for index spaces that may exceed the range of Index.
 

Enumerations

enum reward_t { REWARD, COST }
 Inherited from Tony Cassandra's POMDP file format.
 

Functions

double CastLIndexToDouble (LIndex i)
 
Index CastLIndexToIndex (LIndex i)
 
bool EqualProbability (double p1, double p2)
 
bool EqualReward (double r1, double r2)
 

Variables

const size_t ALL_SOLUTIONS = 0
 Constant to denote all solutions (e.g., nrDesiredSolutions = ALL_SOLUTIONS).
 
const Index INITIAL_JAOHI = 0
 The initial (=empty) joint action-observation history index.
 
const Index INITIAL_JOHI = 0
 The initial (=empty) joint observation history index.
 
const unsigned int MAXHORIZON = 999999
 The highest horizon we will consider.
 
const double PROB_PRECISION = 1e-12
 The precision for probabilities.
 
const double REWARD_PRECISION = 1e-12
 Used to determine when two (immediate) rewards are considered equal.
 

Detailed Description

Globals contains several definitions global to the MADP toolbox.
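
The following is a minimal, self-contained sketch (not the MADP sources) of how a few of these globals fit together. The bodies of EqualProbability and EqualReward shown here are an assumption based on the descriptions of PROB_PRECISION and REWARD_PRECISION on this page, not the library's verbatim implementation.

    // Sketch: floating-point comparison up to the documented precisions.
    #include <cmath>
    #include <cstdio>

    namespace Globals {
        const double PROB_PRECISION   = 1e-12;
        const double REWARD_PRECISION = 1e-12;

        // Assumed semantics: equal up to the documented precision.
        bool EqualProbability(double p1, double p2)
        { return std::abs(p1 - p2) < PROB_PRECISION; }

        bool EqualReward(double r1, double r2)
        { return std::abs(r1 - r2) < REWARD_PRECISION; }
    }

    int main() {
        double p = 0.1 + 0.2; // 0.30000000000000004 in IEEE-754 doubles
        // Prints 1: the rounding error is far below PROB_PRECISION.
        std::printf("%d\n", Globals::EqualProbability(p, 0.3));
        return 0;
    }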

Typedef Documentation

typedef unsigned int Globals::Index

A general index.

typedef unsigned long long int Globals::LIndex

A long long index, for index spaces that may exceed the range of Index.
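
A short sketch of why the wider LIndex type exists. The problem sizes below are made up for illustration; only the two typedefs come from this page.

    // Sketch: joint history index spaces grow exponentially with the
    // horizon and can overflow a 32-bit Index, hence the 64-bit LIndex.
    #include <cstdio>
    #include <limits>

    typedef unsigned int Index;            // 32 bits on common platforms
    typedef unsigned long long int LIndex; // at least 64 bits

    int main() {
        // Illustrative only: with 4 joint actions and 4 joint observations,
        // the number of joint action-observation histories of length 10 is
        // (4*4)^10 = 2^40, which no longer fits in a 32-bit Index.
        LIndex nrJAOH = 1;
        for (int t = 0; t < 10; ++t)
            nrJAOH *= 4ULL * 4ULL;
        std::printf("histories: %llu, fits in Index: %s\n", nrJAOH,
                    nrJAOH <= std::numeric_limits<Index>::max() ? "yes" : "no");
        return 0;
    }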

Enumeration Type Documentation

Inherited from Tony's POMDP file format.

Enumerator
REWARD 
COST 
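
A sketch of typical reward_t use (an assumption, not MADP code): in Cassandra's .POMDP file format, a "values:" line declares whether the numbers in the file are rewards or costs, and a cost can be folded into a reward by negation. The AsReward helper is hypothetical.

    #include <cstdio>

    enum reward_t { REWARD, COST };

    // Hypothetical helper: normalize a parsed value to reward semantics.
    double AsReward(double value, reward_t type) {
        return (type == COST) ? -value : value; // a cost is a negative reward
    }

    int main() {
        std::printf("%g\n", AsReward(5.0, COST)); // prints -5
        return 0;
    }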

Function Documentation

Index Globals::CastLIndexToIndex (LIndex i)

Referenced by ActionObservationHistory::ActionObservationHistory(), BGIPSolution::AddSolution(), MonahanBGPlanner::BackProjectMonahanBG(), BayesianGameWithClusterInfo::BayesianGameWithClusterInfo(), AlphaVectorBG::BeliefBackupExhaustiveStoreAll(), QPOMDP::ComputeRecursively(), QBG::ComputeRecursively(), QHybrid::ComputeRecursively(), QBG::ComputeRecursivelyNoCache(), BG_FactorGraphCreator::Construct_AgentBGPolicy_Variables(), BG_FactorGraphCreator::Construct_LocalPayoff_Factors(), GMAA_MAAstarClassic::ConstructAndValuateNextPolicies(), GMAA_kGMAA::ConstructAndValuateNextPolicies(), GMAA_MAAstar::ConstructAndValuateNextPolicies(), PlanningUnitMADPDiscrete::CreateActionHistoryTree(), PlanningUnitMADPDiscrete::CreateActionObservationHistoryTree(), PlanningUnitMADPDiscrete::CreateObservationHistoryTree(), BGCG_SolverNonserialDynamicProgramming::EliminateAgent(), BayesianGameWithClusterInfo::Extend(), BGforStageCreation::Fill_FirstOHtsI(), BayesianGameForDecPOMDPStage::Fill_FirstOHtsI(), GMAA_MAA_ELSI::Fill_FirstOHtsI(), GMAA_MAA_ELSI::Fill_jaI_Array(), PlanningUnitMADPDiscrete::GetActionHistoryArrays(), PlanningUnitMADPDiscrete::GetActionObservationHistoryArrays(), PlanningUnitMADPDiscrete::GetActionObservationHistoryIndex(), IndividualBeliefJESP::GetAugmentedStateIndex(), PlanningUnitMADPDiscrete::GetJAOHProbGivenPred(), PlanningUnitMADPDiscrete::GetJAOHProbs(), PlanningUnitMADPDiscrete::GetJointActionHistoryIndex(), JPolComponent_VectorImplementation::GetJointActionIndex(), PlanningUnitMADPDiscrete::GetJointActionObservationHistoryArrays(), PlanningUnitMADPDiscrete::GetJointActionObservationHistoryIndex(), PlanningUnitMADPDiscrete::GetJointActionObservationHistoryTree(), PlanningUnitMADPDiscrete::GetJointBeliefInterface(), PlanningUnitMADPDiscrete::GetJointObservationHistoryArrays(), PlanningUnitMADPDiscrete::GetJointObservationHistoryIndex(), BGCG_SolverNonserialDynamicProgramming::GetJpolIndexForBestResponses(), PlanningUnitMADPDiscrete::GetNrPolicyDomainElements(), PlanningUnitMADPDiscrete::GetObservationHistoryArrays(), PlanningUnitMADPDiscrete::GetObservationHistoryIndex(), IndividualBeliefJESP::GetOthersObservationHistIndex(), PlanningUnitMADPDiscrete::GetSuccessorAHI(), PlanningUnitMADPDiscrete::GetSuccessorAOHI(), PlanningUnitMADPDiscrete::GetSuccessorJAHI(), PlanningUnitMADPDiscrete::GetSuccessorJOHI(), PlanningUnitMADPDiscrete::GetSuccessorOHI(), PlanningUnitMADPDiscrete::GetTimeStepForAOHI(), AlphaVectorPlanning::ImportValueFunction(), PlanningUnitMADPDiscrete::InitializeJointActionObservationHistories(), PlanningUnitMADPDiscrete::InitializeJointObservationHistories(), JointObservationHistory::JointObservationHistory(), PlanningUnitMADPDiscrete::JointToIndividualActionObservationHistoryIndicesRef(), LocalBGValueFunctionVector::LocalBGValueFunctionVector(), BayesianGameForDecPOMDPStage::ProbRewardForjoahI(), GMAA_MAA_ELSI::ProbRewardForjoahI(), PlanningUnitMADPDiscrete::RegisterJointActionObservationHistoryTree(), PolicyPureVector::SetIndex(), and FSAOHDist_NECOF::Update().
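
The implementation is not shown on this page; a plausible checked narrowing cast would look like the following sketch (an assumption, including the choice of exception type), which fails loudly instead of silently truncating.

    #include <limits>
    #include <stdexcept>

    typedef unsigned int Index;
    typedef unsigned long long int LIndex;

    // Sketch: narrow an LIndex to an Index, rejecting values that don't fit.
    Index CastLIndexToIndex(LIndex i) {
        if (i > std::numeric_limits<Index>::max())
            throw std::overflow_error("LIndex value does not fit in Index");
        return static_cast<Index>(i);
    }

    int main() {
        return static_cast<int>(CastLIndexToIndex(42ULL)); // ok: 42 fits
    }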

Variable Documentation

const size_t Globals::ALL_SOLUTIONS = 0

Constant to denote all solutions (e.g., nrDesiredSolutions = ALL_SOLUTIONS).
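
A sketch of the idiom the description suggests. Only the nrDesiredSolutions = ALL_SOLUTIONS usage comes from this page; RequestSolutions is a hypothetical stand-in for a solver parameter.

    #include <cstddef>

    const size_t ALL_SOLUTIONS = 0;

    // Hypothetical solver entry point: 0 is a sentinel meaning "enumerate
    // all solutions" rather than "keep the best k".
    void RequestSolutions(size_t nrDesiredSolutions) {
        if (nrDesiredSolutions == ALL_SOLUTIONS) { /* enumerate everything */ }
        else                                     { /* keep only the best k */ }
    }

    int main() {
        RequestSolutions(ALL_SOLUTIONS);
        return 0;
    }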

const Index Globals::INITIAL_JAOHI = 0

The initial (=empty) joint action-observation history index.

Referenced by QFunctionJAOHTree::ComputeQ(), QHybrid::ComputeQ(), and PlanningUnitMADPDiscrete::GetJAOHProbs().

const Index Globals::INITIAL_JOHI = 0