Autonomous multi-agent systems that must coordinate should be designed according to models that accommodate such complex social behavior as compromise, negotiation, and altruism. In contrast to individually rational models, where each agent seeks to maximize its own welfare without regard for others, socially rational agents have interests beyond themselves. Such models require a new type of utility function—a social utility—to ensure four desirable properties: (a) conditional preferences—agents may adjust their preferences to account for the preferences of others; (b) endogeny—group preferences are determined internally by interactions among individual agents; (c) framing invariance—reformulations of the social model using exactly the same information should not alter the conclusions; and (d) social coherence—no individual’s welfare is categorically subjugated to the welfare of the group. Social utilities in turn require a compatible solution concept: optimal failure-avoidance. Satisficing game theory embodies both social rationality and optimal failure-avoidance, providing a formal mathematical framework in which to balance group and individual interests in mixed-motive societies. The satisficing approach is applied to two scenarios: the Ultimatum game and a random graph search problem. The Ultimatum game is one for which classical game-theoretic analysis corresponds poorly to empirical data on human behavior; it is thus an important test case for a new theory. The graph search scenario is an idealization of an important resource allocation problem in which the ability to compromise and negotiate can greatly facilitate the search for a solution.