Why LinkedIn’s “recommend” system is broken, and how to fix it
LinkedIn recently added a new feature, skill recommendations. The feature is a one-trick pony; it allows any (registered) user to vouch for the skills of their friends. Once they’ve done so, their picture will appear next to that skill on their friend’s LinkedIn page, allowing others to see that they have confirmed that their friend is, indeed, an expert in that skill.
Unfortunately, their implementation has a glaring problem. Fortunately, it’s not that difficult to fix.
To understand the problem and its solution, we need to appreciate what LinkedIn was trying to accomplish with this feature. As anyone who has tried to hire an employee knows, the problem is actually quite simple: resumes are untrustworthy. In an attempt to secure an interview, job seekers regularly “embellish” their experience and expertise, oftentimes well beyond the point of absurdity, and there’s no easy way to verify that anything on a resume is true. Sure, you can interview the person, but interviews are time-consuming and difficult to do right (by “right” I mean “in a way that actually reveals how much the candidate knows within a very short time period”). Besides, the whole point of the resume is to help you decide whether you want to interview the person in the first place. Within that framework, LinkedIn’s new feature is a huge boon to employers and recruiters; it’s an attempt to fact-check the resume for you, saving you the worry that it’s all a huge pack of lies.
Against this backdrop, we can understand the problem with the current implementation of skill recommendations. Right now, when any user logs in, they are likely to be presented with a dialog box containing a random “friend”—let’s call him Bob—and a list of skills. Their task is to answer the question, “Does Bob know this stuff?” The rationale, I would guess, is that if the user doesn’t know the answer, they’d select “I don’t know”.
In practice, it seems that the average user doesn’t really know (or care) whether Bob is an expert in whatever skill. Their first thought seems to be much closer to, “hey, if I recommend Bob, maybe he’ll recommend me as a gesture of thanks.” Since this feature has been implemented, not only have I received recommendations from people who clearly don’t know whether I actually have the skill in question, I’ve also been recommended as an expert in a whole bunch of things I know nothing about.
The effect of these bogus recommendations is to strip the feature of all potential value. If I can’t trust that a recommendation is accurate, I’m back to where I started.
There are a few ways to solve this issue. The first solution is very simple: only show the “does this guy know these skills?” box when the two users’ skills are similar. Given that LinkedIn has access to each user’s information, including their work history, the people they have worked with, their self-identified skills, and their educational background, this shouldn’t be a terribly difficult problem. Yes, this will decrease the number of recommendations each person receives, since it shrinks the pool of people eligible to recommend any given user. But as noted above, allowing bogus recommendations destroys the value of skill recommendations for assessing a skill set, so the trade-off favors fewer, more trustworthy recommendations.
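To make the idea concrete, here’s a minimal sketch of how such a gate might work. Everything in it is an assumption on my part: I’m modeling skill sets as plain sets of strings, using Jaccard similarity as the similarity index, and picking an arbitrary threshold. LinkedIn’s actual data model and algorithms are, of course, unknown to me.

```python
def jaccard_similarity(skills_a, skills_b):
    """Jaccard similarity of two skill sets: |A & B| / |A | B|."""
    if not skills_a or not skills_b:
        return 0.0
    return len(skills_a & skills_b) / len(skills_a | skills_b)


def should_prompt_for_recommendation(user_skills, friend_skills, threshold=0.25):
    """Only show the "does Bob know this stuff?" box when the two
    users' skill sets overlap enough for an informed answer."""
    return jaccard_similarity(user_skills, friend_skills) >= threshold


# A Python developer gets asked about a fellow developer...
dev_a = {"python", "sql", "django"}
dev_b = {"python", "sql", "linux"}
# ...but not about a graphic designer with no overlapping skills.
designer = {"photoshop", "illustrator", "typography"}
```

With these toy profiles, `should_prompt_for_recommendation(dev_a, dev_b)` is true (similarity 0.5), while `should_prompt_for_recommendation(dev_a, designer)` is false (similarity 0.0).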
A second solution would utilize the same skill-similarity index as above, but use it to weight the recommendations received instead. If a recommender has similar skills, their recommendation would be given a high weight; if their skills are highly dissimilar, a low one. The sum (or whatever operator is deemed most appropriate) of these weighted recommendations would then determine the score displayed for a given skill. Currently, LinkedIn simply counts the recommendations and displays that number next to each skill. By applying this suggestion, LinkedIn could make the metric more sophisticated, and therefore more useful.
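A sketch of that weighting scheme, under the same illustrative assumptions as before (skill sets as sets of strings, Jaccard similarity as the similarity index; none of this reflects LinkedIn’s actual internals):

```python
def jaccard_similarity(skills_a, skills_b):
    """Jaccard similarity of two skill sets: |A & B| / |A | B|."""
    if not skills_a or not skills_b:
        return 0.0
    return len(skills_a & skills_b) / len(skills_a | skills_b)


def weighted_recommendation_score(candidate_skills, recommender_skill_sets):
    """Sum of recommendations, each weighted by the recommender's
    similarity to the candidate. A recommender with near-identical
    skills contributes close to 1; a dissimilar one close to 0."""
    return sum(jaccard_similarity(recommender, candidate_skills)
               for recommender in recommender_skill_sets)


candidate = {"python", "sql"}
recommenders = [
    {"python", "sql"},            # similar peer: weight 1.0
    {"photoshop", "typography"},  # unrelated contact: weight 0.0
]
```

Here the raw count LinkedIn currently displays would be 2, but the weighted score is 1.0; the recommendation from the unrelated contact simply stops counting.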