Who’s made the mistake of buying apps or sexy analytics software just based on appearance?
Go on, own up. I’m sure at one time or other, we have all succumbed to those impulse purchases.
It’s the same with book sales. Although it should make no difference to the reading experience, an attractive cover does increase sales.
But if you approach your IT spending based on attractiveness, you’re heading for trouble.
Now you may be thinking: hold on, that’s what my IT department is there to protect against. That may be the case in your business, but Gartner has predicted that by 2017 the majority of IT spending in companies will be made by the CMO, not the CIO.
There are advantages to that change. Software will need to be more accessible for business users and able to be configured without IT help, and the purchasers are likely to be closer to understanding the real business requirements. But, as insight teams increase their budgets, there are also risks.
This post explores some of the pitfalls I’ve seen business decision makers fall into. Given our focus as a blog, I’ll concentrate on the purchase of analytics software on the basis of appearance.
1. The lure of automation and de-skilling:
Ever since the rise of BI tools in the ’90s, vendors have looked for ways to differentiate their MI or analytics software from so many others on the market. Some concentrated on “drag and drop” front ends, some on the number of algorithms supported, some on their ease of connectivity to databases, and a number began to develop more and more automation. This led to a few products (I’ll avoid naming names) creating what were basically “black box” solutions that you were meant to trust to do all the statistics for you. They became a genre of “trust us, look the models work” solutions.
Such solutions can be very tempting for marketing or analytics leaders struggling to recruit or retain the analysts/data scientists they need. Automated model production seems like a real cost saving. But if you look more deeply, there are a number of problems. Firstly, auto-fitted models rarely last as long as “hand crafted” versions, and tend to degrade faster because it is much harder to avoid overfitting the data provided. Related to this, such an approach does not benefit from real understanding of the domain being modeled (which is also a pitfall of outsourced analysts). Robust models benefit from variable and algorithm selection informed both by the business problem and by the meaning of the data items, as well as any likely future changes. Lastly, automation almost always excludes meaningful “exploratory data analysis,” which is a huge missed opportunity, as that stage more often than not deepens knowledge of the data and provides insights in itself. There is not yet a real alternative to a trained statistical eye during the analytics and model-building process.
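To make the overfitting point concrete, here is a minimal sketch using NumPy, with invented numbers: an over-flexible “auto-fitted” polynomial hugs the training data more tightly than a simple hand-picked line, then falls apart when next quarter’s data drifts slightly beyond the training range.

```python
import numpy as np

rng = np.random.default_rng(42)

# Training data: a simple linear relationship plus noise.
x_train = np.sort(rng.uniform(0.0, 1.0, 20))
y_train = 3.0 * x_train + rng.normal(0.0, 0.3, 20)

# "Auto-fitted": a flexible degree-12 polynomial that chases the noise.
auto_model = np.polyfit(x_train, y_train, deg=12)
# "Hand crafted": an analyst who understands the domain picks a straight line.
hand_model = np.polyfit(x_train, y_train, deg=1)

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on data (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# Next quarter's data drifts slightly beyond the training range.
x_new = rng.uniform(0.0, 1.1, 200)
y_new = 3.0 * x_new + rng.normal(0.0, 0.3, 200)

print("train MSE  auto:", mse(auto_model, x_train, y_train))
print("train MSE  hand:", mse(hand_model, x_train, y_train))
print("future MSE auto:", mse(auto_model, x_new, y_new))
print("future MSE hand:", mse(hand_model, x_new, y_new))
```

The auto-fitted model “wins” on the data it was given and loses badly on the data that arrives later, which is exactly the degradation pattern described above.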
2. The quick fix of local installation:
Unlike all the work involved in designing a data architecture and an appropriate data warehouse/staging/connectivity solution, analytics software is too often portrayed as a simple matter of install and run. This, too, can be an illusion. It is not just the front end that matters with analytics software. Yes, you need that to be easy to navigate and intuitive to work with (but that is becoming a hygiene factor these days). But there is more to consider round the back end. Even if the supplier emphasizes its ease of connectivity with a wide range of powerful database platforms. Even if you know the investment has gone into making sure your data warehouse is powerful enough to handle all those queries. None of that will protect you from a lack of analytics grunt.
The problem, all too often, is that business users are initially offered a surprisingly cheap solution that will just run locally on their PCs or Macs. Now, that is very convenient and mobile if you simply want to crunch low volumes of data from spreadsheets or files on your laptop. But the problem comes when you want to use larger data sources and have a whole analytics team trying to do so with just local installations of the same analytics software (probably paid for per install/user). Too many of the current generation of cheaper analytics solutions will in that case be limited to the processing power of the PC or Mac. Business users are not warned of the need to consider client-server solutions, both for collaboration and for a performant analytics infrastructure (especially if you also want to score data for live systems). That can lead to wasted initial spending, as a costly server and reconfiguration, or even new software, is needed in the end.
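The underlying principle is “ship the question, not the data”: aggregation should run where the data lives, not on the laptop. A minimal sketch, using Python’s built-in sqlite3 as a stand-in for a shared warehouse, with invented table and column names:

```python
import sqlite3

# An in-memory SQLite database stands in for a shared analytics server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 120.0), ("north", 80.0), ("south", 200.0), ("south", 50.0)],
)

# Desktop-bound approach: drag every raw row across and aggregate locally.
rows = conn.execute("SELECT region, amount FROM sales").fetchall()
local_totals = {}
for region, amount in rows:
    local_totals[region] = local_totals.get(region, 0.0) + amount

# Server-side approach: the database does the aggregation and returns
# only the small answer set.
pushed_totals = dict(
    conn.execute("SELECT region, SUM(amount) FROM sales GROUP BY region")
)

print(local_totals)
print(pushed_totals)
```

Both routes give the same answer on four rows; at warehouse scale, only the second avoids being capped by the memory and CPU of a single PC or Mac.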
3. The drug of cloud-based solutions:
With any product, it’s a sound consumer maxim to beware of anything that looks too easy or too cheap. Surely such alarm bells should have rung earlier in the ears of many a marketing director who has ended up stung by a large final “cost of ownership” for a cloud-based CRM solution. Akin to the lure of the quick-fix local installation, cloud-based analytics solutions promise something even better: no installation at all. Barring any firewall changes needed to access the solution, it offers the business leader the ultimate way to avoid those pesky IT folk. No wonder licenses have sold.
But anyone familiar with the history of the market leaders in cloud-based solutions (and even the big boys who have jumped on the bandwagon in recent years) will know it’s not that easy. Like providing free or cheap drugs at first to create an addict, cloud-based analytics solutions have a sting in the tail. Check out the licensing agreement and what you will need to scale. As use of your solution becomes more embedded in an organization, especially if it becomes the de facto way to access a cloud-based data solution, your user numbers, and thus license costs, will gather momentum. Now, I’m not saying the cloud isn’t a viable solution for some businesses. It is. But beware of the stealth sales model that is implicit.
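The arithmetic of that momentum is worth sketching. All the figures below are invented assumptions (a per-seat price and a doubling of adoption each year), purely to show how a cheap-looking pilot compounds into a large cost of ownership:

```python
# Hypothetical per-seat cloud licence, with assumed figures throughout.
seat_price_per_year = 1200.0   # assumed annual cost per user
users = 5                      # the pilot team
growth_per_year = 2            # adoption doubles each year (assumption)

total = 0.0
for year in range(1, 5):
    annual_cost = users * seat_price_per_year
    total += annual_cost
    print(f"year {year}: {users:3d} users -> {annual_cost:>9,.0f}")
    users *= growth_per_year

print(f"four-year cost of ownership: {total:,.0f}")
```

Under these made-up numbers, a 6,000-a-year pilot becomes a 90,000 four-year bill; the per-seat model means the final year alone costs eight times the first.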
4. Oh, abstraction, where are you now I need you more than ever?
Back in the ’90s, the original BusinessObjects product created the idea of a “layer of abstraction,” or what was called a “universe.” This was configurable by the business (in practice by an experienced power user or insight analyst who knew the data), but more often than not it benefited from the involvement of a DBA from IT. It looked like a visual representation of a database schema diagram and defined not just all the data items the analytics software could use, but also the allowed joins between tables, etc. Beginning to sound rather too techie? Yes, evidently software vendors thought so, too. Such a definition has gone the way of metadata, perceived as a “nice to have” and in reality avoided via flashy-looking workarounds.
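The idea behind such a “universe” can be sketched in a few lines: the abstraction layer declares which joins are approved, and the tool refuses to invent a path that isn’t defined. Table and column names here are invented for illustration; a real semantic layer would, of course, be far richer.

```python
# A toy "universe": the abstraction layer whitelists the approved joins,
# so the front end cannot generate an expensive or meaningless join path.
ALLOWED_JOINS = {
    ("customers", "orders"): "customers.id = orders.customer_id",
    ("orders", "order_lines"): "orders.id = order_lines.order_id",
}

def join_condition(left, right):
    """Return the approved join clause, or refuse if the universe
    does not define a direct path between the two tables."""
    key = (left, right) if (left, right) in ALLOWED_JOINS else (right, left)
    if key not in ALLOWED_JOINS:
        raise ValueError(f"no approved join between {left} and {right}")
    return ALLOWED_JOINS[key]

print(join_condition("customers", "orders"))    # an approved path
try:
    join_condition("customers", "order_lines")  # no direct path defined
except ValueError as exc:
    print("blocked:", exc)
```

The point is not the ten lines of Python but the design choice: someone who knows the data decides once where the joins are, instead of every drag-and-drop query deciding for itself.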
The most worrying recent cases I have seen of this missing layer of abstraction are today’s most popular data visualization tools. These support a wide range of visualizations and appear to make it as easy as “drag and drop” to create any you want from the databases to which you point the software (using more mouse action). So far, so good. Regular readers will know I’m a data visualization evangelist. The problem is that without any defined (or controlled, to use that unpopular term) layer governing data access and optimal joins, the analytics queries can run amok. I’ve seen too many business users end up confused and facing very slow response times, basically because the software has abdicated this responsibility. Come on, vendors: in a day when Hadoop et al. are making data access ever more complex, there is a need for more protection, not less!
Well, I hope those observations have been useful. If they protect you from an impulse purchase made without a pre-planned analytics architecture, then my time was well spent.
If not, well, I’m old enough to enjoy a good grumble, anyway. Keep safe! 🙂