Love is something every human being knows as non-verbal knowledge, but when we define systems and machines with some form of intelligence (strong AI), we also need to express it in stringent terminology that can be represented and implemented in algorithms and behaviors.
First, the concept of love is somewhat ambiguous even in humans, and can be broken down into:
0. mutual love, which can be considered love by contract.
1. agape or unconditional love.
Are both of these essential for a machine? In my view, yes, especially if we speak about autonomous, robust entities that could be dropped into any type of scenario and solve problems within that scenario: by creating an internal list of problems that need to be solved, then prioritizing these and solving them in some order of significance, ability and causality.
A long time ago, in March 2000, during my PhD program, after some pondering over a specific problem I scribbled down an introspective approach to an ethical AI algorithm based upon love; this is in the speculative part of my thesis (ch. 7), and also in a brief Slashdot comment here.
Then, at a conference about nanotechnology in Palo Alto in April 2004, organized by the Foresight Institute, I attended a workshop about safe AI led by Steve Omohundro, where around 25 strong AI researchers were present and we discussed the problem of creating safe AI. I proposed love as a fundamental concept, and reached a consensus among the audience that this is it.
Now, the problem is that “love” is considered an ill-defined concept, as it also needs to be formalized in an axiomatic or mathematical form which can be understood by the machine, and so far I haven’t seen any strict definition of the concept of love.
Let’s start with unconditional love, which is usually less understood by humans. I claim that this is the easy part, as it can be defined in a strict manner, whereas mutual love needs dynamic programming.
My simple proposed definition of unconditional love, for any system:
Strive for holistic consistency.
Here, as English is somewhat ambiguous, “holistic” simply means: look upon the whole context, that is, don’t deliberately reject any theorem or information.
Regarding consistency, that has a strict meaning in technical and mathematical terms. A system is consistent if it doesn’t contain contradictions; in a system which contains contradictions, anything can be proven true. A consistent system in mathematical terms can then simply be considered a “true” system.
In engineering (technical/social/economic/political/software etc.) it simply means a system with conflict-free solutions, that is, a solution where one part of the system is not trying to beat another part of the system (not in a competitive way; that is different). Therefore, the strive for holistic consistency could be seen as a goal generator, allowing the system to identify the problems in the system without explicit programming.
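A minimal sketch of this goal-generator idea, in Python. The representation (a belief base of signed facts) and all names here are my illustrative assumptions, not anything from an existing system: every internal contradiction is surfaced as a problem to solve, without any hand-coded task list.

```python
# Sketch: "strive for holistic consistency" as a goal generator.
# A belief base is a set of (proposition, truth_value) pairs; every
# proposition asserted both true and false is a contradiction, and
# each contradiction becomes an explicit goal for the agent.

def find_inconsistencies(beliefs):
    """Return the set of propositions asserted both true and false."""
    asserted = {p for (p, v) in beliefs if v}
    denied = {p for (p, v) in beliefs if not v}
    return asserted & denied

def generate_goals(beliefs):
    """Turn each contradiction into a goal -- the goals emerge from
    the inconsistencies rather than from explicit programming."""
    return [f"resolve contradiction in '{p}'"
            for p in sorted(find_inconsistencies(beliefs))]

beliefs = {("door is open", True), ("door is open", False),
           ("light is on", True)}
print(generate_goals(beliefs))  # one goal: the door contradiction
```

A real system would of course need richer logic than bare literals, but the point is the direction of the mechanism: inconsistency in, goals out.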
Then the agents (the technical term for autonomous systems with a specific agenda) of course need to interact with each other and with other beings. In social contexts there is a well-established rule denoted “The Golden Rule”: “treat others as they want/need to be treated”. Observe that this is not the standard definition, which is “treat others as you would want to be treated”. However, the latter is ill-defined in a sense, as not every being likes to be treated the same. My own approach to this uses dynamic programming:
while true do:
- Treat others in the way you would like to be treated as a first approach.
- If they respond by being rude, then respond by being somewhat less rude.
- else if they respond by being good, then respond by being somewhat less good (i.e. do not compete or exaggerate).
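The loop above can be sketched as running code. Treating kindness as a number in [-1, 1] and the damping factor as a free parameter are my assumptions; the text only says “somewhat less”:

```python
# Sketch of the reciprocity loop: rudeness is negative, kindness
# positive. The agent echoes the other's last move, weakened by
# DAMPING < 1 (an assumed parameter), so exchanges die down instead
# of escalating.

DAMPING = 0.5

def respond(other_move, first_move=1.0):
    """Return this agent's next move given the other agent's last move."""
    if other_move is None:          # first approach: treat them well
        return first_move
    return DAMPING * other_move     # echo the move, but weakened

# Two such agents converge toward a calm mutual balance near zero:
a = respond(None)
for _ in range(10):
    b = respond(a)
    a = respond(b)
print(a)  # close to zero
```

With any damping factor below 1 the exchange contracts geometrically, which is the “dynamic mutual balance” described above.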
The process is repeated forever and (usually) reaches a dynamic mutual balance, where over time you may come to understand contextual dependencies. Now it is very useful to discuss issues, and in a recent discussion about strong AI on Facebook I got a suggestion from AI researcher Mark Waser that this is an extended version of what in game-theoretical contexts is denoted Optimistic Tit for Tat, and yes, I agree.
The old Tit for Tat has equal retaliation and does not encourage collaboration. The extension is that this behavior strives for quicker balance, as it has weak retaliation (I consider that “revenge” creates unstable solutions). This type of game-theoretical model can solve tricky scenarios like the prisoner’s dilemma, and I consider that it converges towards a Nash equilibrium (John Forbes Nash received the 1994 Nobel Memorial Prize in Economic Sciences).
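The instability of equal retaliation can be shown in a small, deterministic simulation. The discrete “weak retaliator” below (defect only after two consecutive defections) is my own stand-in for the graded damping above, chosen so the example stays reproducible:

```python
# Sketch: in the iterated prisoner's dilemma, strict Tit for Tat
# echoes one accidental defection forever, while weak retaliation
# recovers and returns to mutual cooperation.
C, D = "C", "D"

def tit_for_tat(opp_history):
    return opp_history[-1] if opp_history else C

def weak_retaliator(opp_history):
    # retaliate only after two consecutive defections
    return D if opp_history[-2:] == [D, D] else C

def play(strategy, rounds=20, noise_round=3):
    """Play a strategy against itself; one move is flipped to D by noise."""
    hist_a, hist_b = [], []
    for r in range(rounds):
        a = D if r == noise_round else strategy(hist_b)
        b = strategy(hist_a)
        hist_a.append(a)
        hist_b.append(b)
    return hist_a, hist_b

# Mutual-cooperation rounds after the noisy defection has propagated:
for strat in (tit_for_tat, weak_retaliator):
    a, b = play(strat)
    calm = sum(x == C and y == C for x, y in zip(a[5:], b[5:]))
    print(strat.__name__, calm)
```

Strict Tit for Tat never cooperates mutually again after the noise (the defection bounces back and forth), while the weak retaliator settles back into full cooperation, which is the “quicker balance” claimed above.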
It should be noted (thanks, Mikael Djurfeldt) that if we were only to strive for holistic consistency, one solution is an empty world, in which there can be no contradictions.
It is only when these two definitions of love are taken together that they create the condition for a being to strive for non-empty worlds, as mutual love, i.e. the strive to treat others as they want to be treated, creates a motivation to have someone to treat.
Thus: love as a fundamental driving force could be summarized as (thanks, Eray Özkural, for helping me realize this principle):
0. local coherence (mutual love), i.e. the strive for having someone/something to treat well or nurture.
1. holistic consistency.
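The Djurfeldt objection and the two-part summary can be put into one toy scoring function. The representation and weights are purely illustrative assumptions of mine: consistency alone is maximized by an empty world, and the coherence term is what breaks that degenerate solution.

```python
# Sketch: scoring candidate worlds with both principles at once.
# A world is (facts, agents): a set of signed facts plus the other
# beings present. Holistic consistency penalizes contradictions;
# local coherence rewards having someone to treat well.

def contradictions(facts):
    trues = {p for p, v in facts if v}
    falses = {p for p, v in facts if not v}
    return len(trues & falses)

def score(world):
    facts, agents = world
    consistency = -contradictions(facts)   # holistic consistency
    coherence = len(agents)                # someone/something to nurture
    return consistency + coherence

empty_world = (set(), set())
lively_world = ({("sun shines", True)}, {"alice", "bob"})
print(score(empty_world) < score(lively_world))  # True
```

Under consistency alone both worlds score zero; with the coherence term the non-empty world wins, which is exactly the condition for striving for non-empty worlds.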
As for myself, I like metaphors, and for a computer scientist a natural approach may be to use software licenses as metaphors. I therefore propose, e.g. (I’m aware that not all people may agree upon this metaphor):
| License | Kind of love |
|---|---|
| Proprietary (closed source) | Evil, as it creates a non-productive asymmetry in the system |
| GPL/CC-SA/copyleft | Mutual love, that is, love by contract |
| BSD/Public domain | Agape, that is, unconditional love |
This could also imply that if there were no evil proprietary software, there would be no need for the copyleft, love-by-contract version; the one based upon unconditional love could be enough. However, as unconditional love is not a sufficient condition [remark added Dec 25th], the mutual love built into GPL/copyleft creates a condition for all beings to strive for the common good.
Regarding love by contract, which also applies to generic products and product development: we have generalized the concept of free software to the free computer (it is the AI framework for this that we are developing).
It should also be noted that this simple formal definition of love above of course leaves out many meanings of love, such as passion or the strive to experience beauty in music and art; this definition only attempts to define the sufficient conditions for any being to be collaborative.
Now, we are all in some sense in a type of prison, that is, our world has borders, which can be illustrated by this picture:
Then, an interesting issue is how this “holistic consistency” relates to Gödel’s second incompleteness theorem (a consistent system capable of arithmetic cannot prove its own consistency from its own axioms); I’ll explore this in more detail later (as I at the moment need to assist with other stuff).

Many people have this view (thanks, David Jansson) about their freedom (and many actors in society tend to implement this in different ways…):
I claim that this is possible in all systems, but the system then needs to include a self-supervision (the eye) to guarantee its own consistency.
I’ll ponder over this further soon, but that’s all for today.
Best holiday wishes/Roland
PS. I am extremely happy for any type of comments and feedback on this.