Google’s machines have been learning about humans – and it seems we don’t like their findings.
Do we have a search crisis? I’ve just been reading this article by Search Engine Land which basically says that Google need to step in and ‘fix’ search. The argument is that search results should be changed so they don’t steer people towards content that’s derogatory, fake, or designed to inflame conflict between people of differing beliefs.
How far do we expect Google to go though? Sanitise the internet by hiding or banning fake news? Penalise pages that display incorrect answers to questions? Or filter hurtful, racist, or sexist posts?
Other than cases where governments, courts, or the DMCA have instructed them to change what shows in results, Google intend their ‘machines’, by design, to be essentially neutral. They are meant to sit back and learn about us from our search behaviour, so that they can a) sell ads and b) provide relevant results.
Personally, I don’t mind stumbling upon satire or fake news, but I want it to be clear. Because I also want real news, legitimate answers to questions, and impartial reviews.
I want to be able to find anything I want (if it exists) from search results, plus stumble upon the many things that make the internet great: the stuff I didn’t expect.
I don’t want censored results. I want to see what other people are thinking and doing, not have someone else decide what I shouldn’t see.
Why should a governing system decide that for me?
In terms of fake news appearing on Google News, improvements are needed to make sure the sources are credible. They have a responsibility to have actual news in… News, because that is what users expect when you call it “News”. It is about relevance and giving users what they expect.
However, who determines what’s good, what’s bad, who gets penalised in search results, and who has the most legitimate content? If we disagree as humans making those decisions, and we always will because of differing beliefs, how far are we from making those decisions with algorithms? Will it come down to the ‘most logical’ of views? Or the most emotional with the highest volume? That is what the AI is figuring out along the way, with the help of engineers. This is unfolding in real time, and there still aren’t answers.
The moment something becomes an automated process (e.g. a search engine algorithm) it’s only a matter of time before someone finds a way to take advantage of it, then it’s stacks-on. There’s only a small amount of manual intervention that can be done, and there’s a lot of questionable content posted every day that no manual process could keep up with.
Autocompletion of search terms is one of these. If you start typing something like ‘why are repub’, you get a list of mostly negative suggestions. These suggestions are generated to help you quickly search, and are generated from the most popular searches by other people. It could be bots that make those searches popular, but even then, at some point, a human coded the bot to do it.
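Google’s actual autocomplete pipeline is far more complex and not public, but the core idea described above, ranking completions by how often other people have searched them, can be sketched like this (the query log and queries here are made up, purely for illustration):

```python
from collections import Counter

def suggest(query_log, prefix, k=3):
    """Return the k most frequent logged queries starting with `prefix`."""
    counts = Counter(q for q in query_log if q.startswith(prefix))
    return [query for query, _ in counts.most_common(k)]

# Hypothetical query log: two users searched the first query, one each the rest.
log = [
    "why are republicans red",
    "why are republicans red",
    "why are republicans called the gop",
    "why are rainbows curved",
]

print(suggest(log, "why are repub"))
# The most-searched matching query ranks first.
```

Note that the counter has no notion of intent: a flood of bot-submitted queries would dominate the rankings just as easily as genuinely popular searches, which is exactly the gaming problem described above.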
How can we blame the algorithms if we are the ones making the negative searches? We make poor choices in content and post it; we crave conflict and drama and seek out content that feeds that craving. The algorithm simply aggregates the data from what we consume in the privacy of our own devices.
I can understand why search engines are copping heat from many directions over quality of results, and how they’re struggling to make it right. But are the search engines really the ones to blame here? Or are we simply seeing the ugly truth about ourselves and not liking it?
We keep looking to Google and Facebook to shelter us from ourselves – to fool ourselves into thinking we are better than we are. Maybe, the machines aren’t the problem.