Emergence of Cooperation Through Mutual Preference Revision
Pedro Santana (1)   Luís Moniz Pereira (2)
(1) IntRoSys, S.A.   (2) CENTRIA, Universidade Nova de Lisboa
19th Int. Conf. on Industrial, Engineering & Other Applications of Applied Intelligent Systems (IEA/AIE 06)
A Motivating Example
The mother is about to negotiate with the other family members which TV show will be watched next... But she wants to be fair in her position!
Son's preferences: cinema ≻ x; documentaries ≻ news
Mother's preferences: talkshow ≻ x; cinema ≻ news; documentaries ≻ news
Father's preferences: x ≻ talkshow; news ≻ x; documentaries ≻ news
(Here x ranges over all shows, and ≻ reads "is preferred to".)
Outline
1. Introduction
2. Fair Preference Revision
3. Experiments
4. Concluding Remarks
Abstracting from the Example
Goal
To devise a method, a cognitive process, that allows an agent to act in a fair way, or to coordinate others in doing so, by taking into account the other agents' estimated preferences.
Assumptions
- An agent has access to a relevant view of the involved agents and their preferences;
- Agents' preferences can be contradictory, singly or jointly;
- There need not exist a pre-defined priority amongst agents or their preferences;
- Memory is crucial for a proper, fair balance of preferences over time.
The Approach in a Nutshell
Based on Preference Revision via Declarative Diagnosis [Dell'Acqua and Pereira, 2005].
The mother has to:
1. Estimate the preferences of all other agents;
2. Aggregate her preferences with all other estimates into a single merged preference specification;
3. Compile the specification into a revisable program;
4. Revise the agents' preferences in order to remove all preference contradictions;
5. From the set of possible revisions, select the fairest;
6. Select a TV show that complies with the newly revised program;
7. Repeat the process as preferences are updated.
Specifying Preferences
x ≻ y means that x is preferred to y; ≻ must satisfy the constraints of a strict partial order (others could be selected):
- Irreflexivity: ∀x, x ⊁ x
- Asymmetry: ∀x, y, x ≻ y ⇒ y ⊁ x
- Transitivity: ∀x, y, z, (x ≻ y ∧ y ≻ z) ⇒ x ≻ z
Preferences Aggregation
p(x, y) ← p_son(x, y)
p(x, y) ← p_father(x, y)
p(x, y) ← p_mother(x, y)
p(x, y) represents the aggregation of all preferences;
Contradictions in p refer to colliding agent opinions, i.e. those that violate the constraints;
How to determine contradictions? Check the integrity constraints!
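As a concrete illustration, these aggregation rules translate directly into Prolog clauses. A minimal sketch in plain Prolog (not the authors' XSB implementation); the facts at the end are hypothetical examples, not from the slides:

    % Merged preference: p/2 holds whenever any single agent's preference holds.
    p(X, Y) :- p_son(X, Y).
    p(X, Y) :- p_father(X, Y).
    p(X, Y) :- p_mother(X, Y).

    % Hypothetical example facts, for illustration only:
    p_son(cinema, news).
    p_father(news, cinema).
    p_mother(talkshow, cinema).

Querying ?- p(X, Y). enumerates every opinion in the merged specification; note that the son's and father's facts above already collide (cinema over news versus news over cinema).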
Integrity Constraints
A strict partial order can be defined by integrity constraints in the form of denials:
⊥ ← p(x, x).
⊥ ← p(x, y), p(y, x).
⊥ ← p(x, y), p(y, z), not p(x, z).
where ⊥ represents falsity.
The goal now is to remove all contradictory opinions! Remove all contradictions by adding or retracting the adoption of agents' preferences. This has to be done carefully, in an evenhanded way, both on each occasion and over time!
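The denials can be run directly as a contradiction test, with falsity modelled by a query succeeding. A sketch in plain Prolog, reusing the p/2 clauses of the previous sketch and writing negation as failure as \+:

    % One clause per denial: contradiction/0 succeeds iff the merged
    % preference relation p/2 violates the strict partial order.
    contradiction :- p(X, X).                        % irreflexivity violated
    contradiction :- p(X, Y), p(Y, X).               % asymmetry violated
    contradiction :- p(X, Y), p(Y, Z), \+ p(X, Z).   % transitivity violated

With the example facts above, ?- contradiction. succeeds via the asymmetry denial, so some opinion must be revised away.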
What can be revised?
The preference sub-program to be revised is split into two:
- A stable part P_s, not subject to revisions: integrity constraints and non-negotiable preferences.
- A changeable part P_c, subject to revisions: negotiable preferences.

P_c = { p_1(a, b). p_2(b, a). p_2(c, b). }

P_s = { ⊥ ← p(x, x).
        ⊥ ← p(x, y), p(y, x).
        ⊥ ← p(x, y), p(y, z), not p(x, z).
        p(x, y) ← p_1(x, y).
        p(x, y) ← p_2(x, y). }
The Revisable Program
A semantics-preserving transformation:

Γ(P) = { p_1(a, b) ← not inc(p_1(a, b)).
         p_2(b, a) ← not inc(p_2(b, a)).
         p_2(c, b) ← not inc(p_2(c, b)).
         p(x, y) ← unc(p(x, y)).
         ⊥ ← p(x, x).
         ⊥ ← p(x, y), p(y, x).
         ⊥ ← p(x, y), p(y, z), not p(x, z).
         p(x, y) ← p_1(x, y).
         p(x, y) ← p_2(x, y). }

A diagnosis is a minimal set of inc (incorrect) and unc (uncovered) facts that, when added to the program, removes the contradictions.

Minimal diagnoses = {inc(p_2(b, a))}, {inc(p_1(a, b)), unc(p(c, a))}
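To make the diagnosis search concrete, here is a self-contained sketch in plain Prolog. The paper's implementation uses XSB/XASP and stable models; this naive generate-and-test version only mirrors the slide's example, threading a candidate diagnosis D through as a list, and its simple set-inclusion minimality test may admit revisions beyond the two diagnoses highlighted on the slide:

    :- use_module(library(lists)).   % member/2

    % Changeable part, guarded by 'incorrect' assumptions in D:
    p1(a, b, D) :- \+ member(inc(p1(a, b)), D).
    p2(b, a, D) :- \+ member(inc(p2(b, a)), D).
    p2(c, b, D) :- \+ member(inc(p2(c, b)), D).

    % Merged preference, extendable by 'uncovered' assumptions in D:
    p(X, Y, D) :- p1(X, Y, D).
    p(X, Y, D) :- p2(X, Y, D).
    p(X, Y, D) :- member(unc(p(X, Y)), D).

    % Integrity constraints, relative to the candidate diagnosis D:
    contradiction(D) :- p(X, X, D).
    contradiction(D) :- p(X, Y, D), p(Y, X, D).
    contradiction(D) :- p(X, Y, D), p(Y, Z, D), \+ p(X, Z, D).

    % Candidate assumptions (unc(p(c, a)) is the cover from the slide):
    candidate(inc(p1(a, b))).
    candidate(inc(p2(b, a))).
    candidate(inc(p2(c, b))).
    candidate(unc(p(c, a))).

    % D is a diagnosis if adopting it removes every contradiction;
    % it is minimal if no proper sub-diagnosis exists.
    diagnosis(D) :-
        findall(C, candidate(C), Cs),
        sublist_of(Cs, D),
        \+ contradiction(D).

    minimal_diagnosis(D) :-
        diagnosis(D),
        \+ (diagnosis(D2), D2 \== D, sublist_of(D, D2)).

    sublist_of([], []).
    sublist_of([C|Cs], [C|S]) :- sublist_of(Cs, S).
    sublist_of([_|Cs], S) :- sublist_of(Cs, S).

Querying ?- minimal_diagnosis(D). recovers, among its answers, the slide's two diagnoses [inc(p2(b, a))] and [inc(p1(a, b)), unc(p(c, a))].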
Fair
From all minimal diagnoses, select the fairest:
- uncovered facts refer to rules added to the merged predicate p, and so they affect all agents;
- incorrect facts refer to rules retracted from a specific agent's preference p_i, affecting solely agent i;
- The employed heuristic: adding a general preference is twice as costly as retracting a specific preference.
The Cost of a Diagnosis
Cost associated to agent a in diagnosis D:
ω_yield(a, D) = ω_y · n_y(a, D) + ω_ya · n_ya(a, D)
Average cost over all agents in D:
a(D) = (1/n) · Σ_{a ∈ A} ω_yield(a, D)
Dispersion cost over all agents in D:
d(D) = (1/n) · Σ_{a ∈ A} (ω_yield(a, D) − a(D))²
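A sketch of the three cost measures, continuing in plain Prolog (sum_list/2 as in SWI-Prolog). The pref(Agent, X, Y) encoding, the reading of n_y and n_ya as counts of retracted specific preferences and added general ones, and the concrete weights ω_y = 1, ω_ya = 2 (the "twice as costly" heuristic of the previous slide) are all assumptions for illustration:

    :- use_module(library(lists)).   % member/2, sum_list/2 (SWI-Prolog)

    % omega_yield(a, D): weighted count of what agent a gives up in D.
    yield_cost(Agent, D, Cost) :-
        findall(1, member(inc(pref(Agent, _, _)), D), Rs),  % retracted specific prefs
        findall(1, member(unc(_), D), As),                  % added general prefs (hit everyone)
        length(Rs, NR),
        length(As, NA),
        Cost is 1*NR + 2*NA.

    % a(D): average cost over all agents.
    average_cost(Agents, D, Avg) :-
        findall(C, (member(A, Agents), yield_cost(A, D, C)), Cs),
        sum_list(Cs, Sum),
        length(Agents, N),
        Avg is Sum / N.

    % d(D): dispersion of the per-agent costs around a(D).
    dispersion(Agents, D, Disp) :-
        average_cost(Agents, D, Avg),
        findall(Sq, ( member(A, Agents), yield_cost(A, D, C),
                      Sq is (C - Avg) * (C - Avg) ), Sqs),
        sum_list(Sqs, Sum),
        length(Agents, N),
        Disp is Sum / N.

For instance, ?- dispersion([mother, father, son], [inc(pref(mother, talkshow, cinema))], Disp). yields Disp ≈ 0.22: the whole cost falls on the mother, so the revision is cheap on average but unevenly spread.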
The Best Diagnosis
Select the best diagnosis, b_d, by minimising some cost function, e.g.:
b_d[n] = min_D ( (w_win / w_all) · (β_d · d(D) + β_a · a(D)) )
- w_win: accumulated number of occasions on which the agent least handicapped at occasion n was favoured prior to n;
- w_all: the greatest accumulated number of occasions any agent was favoured prior to n.
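A sketch of the selection step, reusing average_cost/3 and dispersion/3 from the previous sketch. β_d = β_a = 0.5 are placeholder weights, and since w_win depends on which agent a given diagnosis favours, the memory ratio w_win/w_all is assumed precomputed and passed in paired with each diagnosis:

    % Pick the diagnosis minimising Ratio * (beta_d*d(D) + beta_a*a(D)),
    % where each candidate arrives as a D-Ratio pair (Ratio = w_win/w_all).
    best_diagnosis(Agents, DiagnosisRatioPairs, Best) :-
        findall(Score-D,
                ( member(D-Ratio, DiagnosisRatioPairs),
                  dispersion(Agents, D, Disp),
                  average_cost(Agents, D, Avg),
                  Score is Ratio * (0.5*Disp + 0.5*Avg) ),
                Scored),
        keysort(Scored, [_-Best|_]).   % ascending by score: head is the minimum

On this reading, the memory ratio is what lets fairness emerge over time: a diagnosis favouring an agent who has already been favoured often gets a high ratio and is therefore penalised.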
Experiment (I)
Implemented in XSB-Prolog / the XSB-XASP package.
The mother is about to negotiate with the other family members which TV show will be watched next... But she wants to be fair in her position!
Son's preferences: cinema ≻ x; documentaries ≻ news
Mother's preferences: talkshow ≻ x; cinema ≻ news; documentaries ≻ news
Father's preferences: x ≻ talkshow; news ≻ x; documentaries ≻ news
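The family's preferences above can be written down directly as Prolog facts, here under the same hypothetical pref(Agent, X, Y) encoding (X preferred to Y) used in the cost sketch; the show/1 domain predicate plays the role of the universally quantified x:

    show(cinema). show(documentaries). show(news). show(talkshow).

    pref(son, cinema, X)      :- show(X).   % cinema over any x
    pref(son, documentaries, news).
    pref(mother, talkshow, X) :- show(X).   % talkshow over any x
    pref(mother, cinema, news).
    pref(mother, documentaries, news).
    pref(father, X, talkshow) :- show(X).   % any x over talkshow
    pref(father, news, X)     :- show(X).   % news over any x
    pref(father, documentaries, news).

Note how the quantified preferences immediately break irreflexivity, e.g. ?- pref(son, cinema, cinema). succeeds, which is why facts such as inc(p_son(cinema, cinema)) show up in every diagnosis on the next slides.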
Experiment (II)
D = [inc(p_son(cinema, cinema)), inc(p_father(news, news)), inc(p_mother(talkshow, news)), inc(p_mother(talkshow, talkshow)), inc(p_father(news, cinema)), inc(p_mother(talkshow, cinema)), inc(p_father(talkshow, talkshow))]
(uncovered facts not displayed for the sake of simplicity)
1. (mother, father, son) = (0, 0, 1): son, cinema
2. (mother, father, son) = (1, 0, 1): mother, talkshow
Experiment (II)
D = [inc(p_son(cinema, cinema)), inc(p_son(cinema, talkshow)), inc(p_father(news, news)), inc(p_father(talkshow, talkshow)), inc(p_father(cinema, talkshow)), inc(p_father(news, cinema)), inc(p_father(news, talkshow)), inc(p_mother(talkshow, talkshow))]
(uncovered facts not displayed for the sake of simplicity)
1. (mother, father, son) = (0, 0, 1): son, cinema
2. (mother, father, son) = (1, 0, 1): mother, talkshow
Experiment (III)
Added to Son's preferences: x ≻ talkshow
D = [inc(p_son(cinema, cinema)), inc(p_father(news, news)), inc(p_mother(talkshow, talkshow)), inc(p_son(talkshow, talkshow)), inc(p_father(news, cinema)), inc(p_mother(talkshow, cinema)), inc(p_father(talkshow, talkshow))]
(uncovered facts not displayed for the sake of simplicity)
1. (mother, father, son) = (0, 0, 1): son, cinema
2. (mother, father, son) = (1, 0, 1): mother, cinema
Experiment (III)
Added to Son's preferences: x ≻ talkshow
D = [inc(p_son(cinema, cinema)), inc(p_father(news, news)), inc(p_son(news, talkshow)), inc(p_father(talkshow, talkshow)), inc(p_mother(talkshow, talkshow)), inc(p_father(news, cinema)), inc(p_father(news, talkshow)), inc(p_mother(talkshow, cinema)), inc(p_son(talkshow, talkshow))]
(uncovered facts not displayed for the sake of simplicity)
1. (mother, father, son) = (0, 0, 1): son, cinema
2. (mother, father, son) = (1, 0, 1): mother, cinema
Concluding Remarks
1. Instead of considering fixed priorities among agents and/or preferences, the method proposes a dynamic approach;
2. A cost function considers generic features of the solution (e.g. the quantity of preferences yielded by agents);
3. Introducing memory into the revision process enables the emergence of fairness and persisting cooperation;
4. The three-valued Well-Founded Semantics [Gelder et al., 1991] could be applied for a more skeptical preferential reasoning;
5. Criteria other than strict partial order for preferences, as well as other objective functions, can be explored.
Any questions?
Further information at:
http://www.uninova.pt/~pfs/
http://centria.di.fct.unl.pt/~lmp/
Alferes, J. J. and Pereira, L. M. (1996). Reasoning with Logic Programming. Springer-Verlag, LNAI 1111, Berlin.
Dell'Acqua, P. and Pereira, L. M. (2005). Preference revision via declarative debugging. In Progress in Artificial Intelligence, Procs. 12th Portuguese Int. Conf. on Artificial Intelligence (EPIA 05), Covilhã, Portugal. Springer, LNAI 3808.
Gelder, A. V., Ross, K. A., and Schlipf, J. S. (1991). The well-founded semantics for general logic programs. J. ACM, 38(3):620-650.
Gelfond, M. and Lifschitz, V. (1988). The stable model semantics for logic programming. In Procs. of the 5th Int. Logic Programming Conf. MIT Press.