Objections

(From an email exchange with Dave Methvin, CTO of PC Pitstop)

Boneheaded decision makers will ruin any useful results

Here I'm relying on "the Wikipedia effect." A study found that graffiti on Wikipedia remains for an average of only 5 minutes before it is corrected. Similarly, within my proposed system I'm hoping that boneheads will be quickly detected and marked as untrustworthy. So if the bonehead is (for example) 3 hops away, then I just need anyone within 2 hops of me to notice. (See Keeping your network clean.) And a social network has additional social pressure that Wikipedia doesn't: no one wants to be the guy who trusted the bonehead and messed things up for everyone downstream! (For example, imagine getting an email saying "Hey Dave, why is your friend Mary saying that Claria.com software is good stuff?!")
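
To make the hop arithmetic concrete, here is a minimal Python sketch. The adjacency-map format and the names in it are my own invention for illustration, not Outfoxed's actual data structures: marking one person untrustworthy prunes them and everyone reachable only through them.

    from collections import deque

    def reachable_informers(trust_edges, me, distrusted):
        # Walk outward hop by hop, but never through anyone
        # who has been marked untrustworthy.
        seen = {me}
        queue = deque([me])
        while queue:
            person = queue.popleft()
            for informer in trust_edges.get(person, []):
                if informer in distrusted or informer in seen:
                    continue  # pruned: their downstream subtree vanishes with them
                seen.add(informer)
                queue.append(informer)
        return seen - {me}

    # The bonehead is 3 hops away: me -> alice -> bob -> bonehead.
    edges = {"me": ["alice"], "alice": ["bob"], "bob": ["bonehead"]}
    print(sorted(reachable_informers(edges, "me", distrusted=set())))
    # ['alice', 'bob', 'bonehead']

    # Someone within 2 hops notices, and bob gets marked untrustworthy:
    print(sorted(reachable_informers(edges, "me", distrusted={"bob"})))
    # ['alice'] -- bob and the bonehead both drop out of my view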

The trust network will fall victim to googlebombing

Within a web of trust, googlebombing just doesn't work. If you are the would-be bomber, you have to convince a lot of people to add you as an informer. Then you have to hope that the people you have conned are themselves informers to many other people. And you must further hope that none of those people notices and reports the bogus links. That's too many things that all have to go right for googlebombing to be effective. (The same goes for straight-up hacking: even though most of the trust pages will presumably be stored on low-security web servers, you'd have to hack a ton of pages to have any effect. And as soon as anyone notices, it's all for nothing.)

The other way of googlebombing would be to create tons of dummy users who are all trusted by one "real" user. Once the real user is trusted, all the dummies get in and screw up the trust levels. However, this only works if the system uses some sort of Bayesian or other distributed trust calculation (see below) that takes account of the sheer number of people giving their opinion. Outfoxed doesn't care about the number of votes, only about the vote of the person who is closest.
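
Here is the closest-vote rule sketched in Python, reusing the toy data format from the sketch above: a thousand dummy accounts two hops away lose to one real friend one hop away, because the breadth-first search stops at the first rating it finds. (Ties at equal distance are broken arbitrarily here; how the real client breaks ties isn't specified.)

    from collections import deque

    def closest_verdict(trust_edges, ratings, me, site):
        # Breadth-first search outward from me; the first rating found
        # is the closest person's, and the search stops there. The
        # number of raters further out never matters.
        seen = {me}
        queue = deque([(me, 0)])
        while queue:
            person, hops = queue.popleft()
            if site in ratings.get(person, {}):
                return ratings[person][site], person, hops
            for informer in trust_edges.get(person, []):
                if informer not in seen:
                    seen.add(informer)
                    queue.append((informer, hops + 1))
        return None

    # One real friend says "Bad"; a conned contact lets in 1000 dummies
    # who all say "Good".
    edges = {"me": ["friend", "conned"],
             "conned": [f"dummy{i}" for i in range(1000)]}
    ratings = {"friend": {"evil.example.com": "Bad"}}
    ratings.update({f"dummy{i}": {"evil.example.com": "Good"}
                    for i in range(1000)})

    print(closest_verdict(edges, ratings, "me", "evil.example.com"))
    # ('Bad', 'friend', 1) -- 1000 dummies at 2 hops lose to 1 friend at 1 hop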

Trust can't be reduced to something binary

You're absolutely right that trust is not binary. In fact, the underlying RDF structure of the trust files allows for continuous values. It was primarily a user-interface decision to go binary; I wanted the system to be usable by novices, and that meant using nice simple categories like "Good," "Bad," and "Dangerous." But there is no reason why someone couldn't write another client that uses the same RDF files but provides finer-grained information.
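
To illustrate the split, a hypothetical finer-grained client could store a continuous value in the trust file and collapse it to the simple labels only at display time. A Python sketch; the value range and thresholds are invented for the example:

    def category(trust_value):
        # Collapse a continuous rating in [-1.0, 1.0] (as a trust file
        # could store it) into the simple labels shown to novices.
        # These cutoffs are illustrative, not Outfoxed's actual ones.
        if trust_value <= -0.8:
            return "Dangerous"
        if trust_value < 0:
            return "Bad"
        return "Good"

    for value in (-1.0, -0.3, 0.7):
        print(value, "->", category(value))
    # -1.0 -> Dangerous
    # -0.3 -> Bad
    # 0.7 -> Good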

A Bayesian approach (or something similar) should be used

There are two reasons why I went with a simple hop-counting system over anything more complicated. The first is transparency: for something as important as trust, people should be able to understand the system's reasoning completely. Of course it's going to be wrong at times. But my hunch is that users prefer a system which is sometimes wrong but always predictable over a system which is less wrong but not understandable. (This is also the reason why my system only gives commentary, and never prevents the user from performing any action.) The second reason was to provide a point of reference. When my system says a website is bad, it can tell you exactly who said it and how you know this person. It's important psychologically to be able to identify the source of trust, if only to have someone to blame if your trust turns out to be unwarranted.
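
The same closest-vote search can record how you know the rater, which is all the point of reference requires. A Python sketch, again reusing the toy data format from the earlier examples:

    from collections import deque

    def verdict_with_provenance(trust_edges, ratings, me, site):
        # Like the closest-vote lookup, but remember each person's
        # "introducer" so the warning can name the whole chain:
        # "Bad, according to Mary, whom you know through Dave."
        parent = {me: None}
        queue = deque([me])
        while queue:
            person = queue.popleft()
            if site in ratings.get(person, {}):
                rater = person
                chain = []  # walk back up the search tree to me
                while person is not None:
                    chain.append(person)
                    person = parent[person]
                return ratings[rater][site], list(reversed(chain))
            for informer in trust_edges.get(person, []):
                if informer not in parent:
                    parent[informer] = person
                    queue.append(informer)
        return None

    edges = {"me": ["dave"], "dave": ["mary"]}
    ratings = {"mary": {"claria.com": "Bad"}}
    print(verdict_with_provenance(edges, ratings, "me", "claria.com"))
    # ('Bad', ['me', 'dave', 'mary']) -- who said it, and how I know them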

A Bayesian system wouldn't meet either of these conditions. The math is hard enough that the trust outcomes would seem simply mysterious when correct, and when a wrong trust decision is made, people would never trust the system again. For example, imagine that someone's daughter has said a website is bad while tons of more distant people are unanimous in calling the website good. A Bayesian system might conclude mathematically that the site is in fact good. But this math won't mean a thing to the confused and angry father who can't understand why his computer just told him to trust some strangers more than his daughter. Second, a Bayesian system just doesn't give you anyone to blame. If a friend of a friend of yours recommended a company which turned out to be terrible, you know who you shouldn't trust in the future. If a similar failure of trust occurred in a Bayesian system, there would be no clear path to fixing the problem.
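
To see how the daughter scenario plays out numerically, here is a minimal naive-Bayes vote-pooling sketch in Python. The reliability figures are invented and the voters are assumed independent; this is the style of aggregation being argued against, not anything Outfoxed does:

    import math

    def bayesian_verdict(votes):
        # Each (says_bad, reliability) pair shifts the log-odds that the
        # site is bad; reliability is the chance that voter is right.
        log_odds_bad = 0.0
        for says_bad, reliability in votes:
            shift = math.log(reliability / (1 - reliability))
            log_odds_bad += shift if says_bad else -shift
        return ("Bad" if log_odds_bad > 0 else "Good"), log_odds_bad

    # The daughter (trusted 95%) says Bad; 20 strangers (60%) say Good.
    votes = [(True, 0.95)] + [(False, 0.6)] * 20
    print(bayesian_verdict(votes))
    # ('Good', -5.16...) -- the strangers' sheer numbers outweigh the daughter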