Sometimes I think that Big Data has a branding problem.
You see, for data scientists to gain the trust and buy-in of their colleagues, they have to explain how their analysis can add value. They take a “data ocean” of information and distill it into highly specific, actionable insights for every internal customer, refining and refreshing it along the way to ensure that it stays as relevant as possible.
It is like they take the most powerful telescope imaginable and look for a speck of dust on the moon. “Here you go, this precise set of data will prove that you are right.”
The success of Big Data initiatives comes, to a large extent, from the ability to drill down from the planetary level to the sub-atomic level. It’s all about getting to those small insights that would never have appeared had you not started large and then refocused, refocused and refocused. Of course, this doesn’t mean that the bigger trends are not relevant, but we have a tendency to view anything “large” with a certain amount of mistrust.
Somehow we naturally assume that “big” things carry a bigger margin of error, even though the assumptions we made on the way to the smaller insights could equally have been flawed.
So, what do we trust more – “big” things or “small” things?
Something tells me that we actually trust small things more, but I am very happy to hear the inevitably differing views of readers.
I have a suspicion that when data science professionals talk about “Big Data” with their colleagues, those colleagues’ eyes glaze over and they automatically assume they won’t understand it. “It’s ‘big’, and it needs a team of PhDs to analyse it, so how could I possibly get my head around it?” If, on the other hand, there were a more accessible adjective such as “small” or “tiny” in front of the word “data”, then maybe the end-clients would feel happier exploring it. Maybe I am overthinking things, but for me, when something is described as small, it seems that bit more manageable.
Whatever the branding, Big Data is taking over our lives, both at work and at home, and the more we all seek to explore it, the more value we realise it holds. The distillation process from large to small is key in many decision-making processes. There are, of course, many potentially flawed assumptions that we can make along the way, but it is always possible to go back and correct them. Getting to “small data” from “big data” is the key for most of us – but we only do it if we are not daunted by the scope of the first few decisions.
We live in a data-rich society. If we think that the data is too big to mean anything for us, we are mistaken. It is remarkably easy to turn it into small, actionable data, despite the “Big Data” moniker that follows it around.
Co-Founder, Big Cloud