Cooperative agent systems are designed so that each agent works toward a common good. The problem is that software systems are complex and can be subverted by an adversary, either to break the system outright or, potentially worse, to create deceptive agents that cooperate when the stakes are low but take selfish, greedy actions when the rewards rise. This research focuses on the ability of a group of agents to reason about one another's trustworthiness and to make optimal decisions about whether to cooperate. We develop the TI-POMDP, a novel approach to multiagent cooperation that augments an Interactive POMDP (I-POMDP) with trust relationships, enabling agents to select the best course of action from the current state. Experiments demonstrate the TI-POMDP's ability to accurately track the trust levels of agents with hidden agendas, giving agents the information needed to make decisions based on their level of trust and their model of the environment. On average, agents achieved rewards 3.8 times higher using the TI-POMDP model than with a trust vector model.