Discussing with philosophers can be tricky. The other day a friend who is a philosopher asked me why this simple concept of love built into a machine could generate Friendly AI. The questions I got were either about things that were obvious to me (but hard to answer…) or obvious (well… because they were obvious…). I asked: did you read my description? OK, then I got one question that was, from my perspective, simple: “how could we implement it in AI?” I realized that this may not be obvious, so I scribbled this down.
(As the algorithm is not “evolutionary” per se, that is, there is no mutation step (it is only based upon correlations) and no random selection, just pure reasoning, it is more like a unifying algorithm [OK, I don’t know if there are such algorithms]. For lack of better words I called it “revolutionary” AI instead, or why not a hacker’s hacker child 🙂)
So the full, final question was “If you cannot make it concrete, how could we implement it in AI?“, and I said:
Simply put: let the AI mimic the principles of nature (physics is almost by definition consistent), but not evolution (as that could lead to inconsistent solutions and, e.g., create an AI liar).
Here is a simple attempt (not going into details about classification/segmentation etc., which are low-level problems); a code sketch of the loop follows the list.
- Collect information about your world.
- Try to make sense of this information.
- If the collected information is statistically significant (not according to e.g. Pearson, I’m thinking Bayesian…) then
  - search for inconsistencies (i.e. contradictions in the system, usually an indication of some type of problem to be solved).
  - propose some solution to the problem.
  - analyze what this solution would lead to:
    - a) fewer individuals? That would contradict your mutual love drive, reject!
    - b) increased inconsistency of the system? Then the solution contradicts its own purpose, reject!
    - c) fewer inconsistencies in the system? Propose this as a possible solution.
  - This proposal then implies considering some type of action.
- Do this forever.
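To make the “how could we implement it” part slightly more concrete, here is a minimal sketch of the loop in Python. Everything in it is a placeholder of my own invention (the function names, the Proposal fields, the toy stubs); it only shows the shape of the reasoning, assuming the hard parts (sensing, the Bayesian significance test, contradiction detection) would be filled in separately.

```python
# Minimal sketch of the reasoning loop above. All names and stubs are
# placeholders for illustration, not an existing library or a finished design.

import random
from dataclasses import dataclass


@dataclass
class Proposal:
    description: str
    individuals_delta: int      # change in number of individuals the solution implies
    inconsistency_delta: int    # change in number of contradictions it implies


def collect_information(world):
    """Collect information about your world (stub: random observations)."""
    return [random.gauss(0.0, 1.0) for _ in range(10)]


def is_significant(observations):
    """Placeholder for a Bayesian significance check, not a Pearson-style one."""
    return len(observations) >= 10


def find_inconsistencies(observations):
    """Search for contradictions in the system (stub)."""
    return ["example contradiction"] if any(x > 2.0 for x in observations) else []


def propose_solutions(problem):
    """Propose candidate solutions to the problem (stub)."""
    return [Proposal("toy fix for: " + problem,
                     individuals_delta=0, inconsistency_delta=-1)]


def evaluate(proposal):
    """The three checks a), b), c) from the list above."""
    if proposal.individuals_delta < 0:
        return "reject"   # fewer individuals contradicts the mutual love drive
    if proposal.inconsistency_delta > 0:
        return "reject"   # the solution contradicts its own purpose
    if proposal.inconsistency_delta < 0:
        return "accept"   # fewer inconsistencies: a possible solution
    return "defer"


def act_on(proposal):
    """An accepted proposal implies considering some type of action (stub)."""
    print("acting on:", proposal.description)


def reasoning_loop(world, max_iterations=None):
    """Do this forever (bounded here only so the sketch terminates when testing)."""
    i = 0
    while max_iterations is None or i < max_iterations:
        observations = collect_information(world)
        if is_significant(observations):
            for problem in find_inconsistencies(observations):
                for proposal in propose_solutions(problem):
                    if evaluate(proposal) == "accept":
                        act_on(proposal)
        i += 1


if __name__ == "__main__":
    reasoning_loop(world=None, max_iterations=100)
```

Note that the “love” here is not a separate rule bolted on afterwards: it sits inside the evaluate() step, deciding which solutions the loop is allowed to act on at all.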
Is this a reasonable first draft of an approach to explain why this generates Friendly AI?
Well… I didn’t get an answer to that either. How does one discuss with philosophers…? 😉
Now, of course I do not in any way consider this algorithm ready. But when nothing has become something, it’s actually easier to suggest improvements than to start from nothing. So, could there possibly be some AI developer (or other programmer, or non-programmer) out there who could suggest some improvement? (Unfortunately no big fund for prizes is available… but it’s a fun problem, isn’t it? 🙂)
So, if you ponder upon this and publish your algorithm where it’s easy to see and easy to make improvements from, then I’ll ponder over a way to find a winner.
Ahh yes, these complicated issues about licenses/copyright and such… well, they complicate life, don’t they? OK, let’s say this is GPL2; that is not the toughest form of mutual love, but then we have defined a kind of love contract… The big difference between GPL2 and GPL3 is that GPL3 wouldn’t touch any part that is patented. [Of course I consider patents evil, but it can also be about patent prophylaxis.]
PS. For those of you who didn’t read my previous attempt to define love in a form suitable for a machine: what you see above is an attempt to implement it. Love is not a rule or a condition; it is built into the reasoning process itself.
And… with this type of problem, we of course need to finish with a suitable illustration (picture borrowed from a Facebook friend, Tracy Love Lee).