The hive mind of instant answers

8 June 2013

About a 3-minute read

New instant answer services like Siri and Google Now are awesome. They let curious but otherwise busy people look up that random fact. They let people on the road communicate safely. And perhaps most significantly, they present complicated information in the form of a single answer, usually displayed more prominently than the related search results, to make digesting the information easier.

However, I think that single-answer format also poses something of an ethical dilemma. Or, at the very least, an information security concern.

Think about it: as more and more people start using services like Siri and Google Now and the inevitable clones that will follow, more and more people will rely on these single-phrase answers for their research and day-to-day information. That in itself is not a problem, but with the scale comes the emergent effect of a homogeneity of knowledge. If we trust a given service to decide what the most valuable answer to our questions is, or to decide which answer we intended to receive, we implicitly agree that the service’s priorities are also our own priorities.

Of course, the counter-argument would be that the kinds of questions people ask of services like Siri and Google Now are trivial and usually have factual answers, such as what time a bus leaves or what the weather is doing. But to that I would say it seems inevitable, with new tech like knowledge graphs and ever-bigger Big Data, that the language recognition and the corresponding answers will only get more nuanced.

When you ask Google a question, you are doing something distinctly different from searching for related articles or videos or the like. You are combining “I’m feeling lucky” with a summary algorithm, a cross-referencer, and maybe even a translator. Of course I don’t know how exactly the algorithms work, but for once that isn’t my main concern. The algorithmic answers we trust from these machines are only as good as the data they mine. When you ask a question, who exactly is on the other end? Added to the already-difficult problem of authorship verification online is now the total lack of a source citation on the one-liner we receive for our query. How do you trust an author you can’t see? You trust the algorithm that chose to draw from that (or those) author(s).
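To illustrate what I mean (with a completely made-up pipeline, in Python; I have no idea what the real plumbing looks like), here is roughly the shape of a one-liner service. Note the step where the citation disappears:

```python
# Toy sketch of an instant-answer pipeline: retrieve, rank, summarize.
# Everything here is invented for illustration; no real service works
# this simply.

TINY_WEB = {
    "weather-blog.example": "Forecast for today: sunny, high of 75.",
    "random-forum.example": "It's probably going to rain all day.",
}

def search(question):
    # Pretend retrieval: every page is a candidate source.
    return list(TINY_WEB.items())

def rank(candidates):
    # Some rule has to pick a winner. Whose priorities? Here it's
    # simply alphabetical by URL, which is exactly as arbitrary as
    # it sounds.
    return sorted(candidates)

def summarize(text):
    # Compress to a single phrase for the answer card.
    return text.split(": ")[-1].strip()

def instant_answer(question):
    url, text = rank(search(question))[0]
    # The source URL is dropped right here; the user never sees it.
    return summarize(text)

print(instant_answer("what's the weather today?"))
# -> "It's probably going to rain all day."
```

The point isn’t the particular ranking rule; it’s that some rule is in there, chosen by the service, and by the time the answer reaches you its provenance is gone.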

This averaging effect on the underlying data corpus from which the algorithm draws its answers leads to the possibility of manipulating the answers people rely on. If people aren’t doing the careful research themselves, it may be less obvious if malicious data is injected in order to purposely skew certain answers.
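To make the injection scenario concrete, here is a toy majority-vote aggregator (again, pure invention; real services presumably weight their sources in far more sophisticated ways). No single fabricated record looks suspicious on its own, but the aggregate answer quietly flips:

```python
# Toy sketch of the "averaging" failure mode: the answer is whatever
# claim appears most often in an unvetted corpus.
from collections import Counter

def instant_answer(corpus):
    """Return the single most frequent claim in the corpus."""
    return Counter(corpus).most_common(1)[0][0]

# An honest corpus: most sources agree on one answer.
corpus = ["brand A"] * 60 + ["brand B"] * 40
print(instant_answer(corpus))  # -> brand A

# A malicious actor injects fabricated sources. Each record is
# plausible in isolation, but the aggregate answer changes.
corpus += ["brand B"] * 30
print(instant_answer(corpus))  # -> brand B
```

If nobody is reading the underlying sources, the manipulation only ever shows up at the level the user never sees.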

And if, as in the usual case, the answers we get from these instant services are accurate (for weather information, say, or stock quotes), what reason would we have to distrust the more complicated or sensitive information? I would say it is unlikely that the average user will investigate and verify the answers the service provides.

One-liner questions and answers inadvertently open the door to a sort of hive-mind effect, where the information people use on a daily basis is cross-linked to the prevailing buzz on the net, extremely condensed, and normalized. However, I think the less-often-discussed side of this new system is the potential for malicious actors to subtly affect the information people use on a daily basis, or for moneyed interests to skew common knowledge in a certain direction. For example, suppose I ask, “What is the best refrigerator?” and Google Now replies, “a Frigidaire.” Rather than getting a results page of various tech reviews, I get a single answer. How will I know the origin of that answer? If many people rely on this form of research, the common perception will come to depend on the service provider’s choice of ad influence, statistical weights, and so on.

These services are certainly helpful for quick information, but please don’t sacrifice your willingness to dig a little deeper for things that matter.
