The Internet of Things (IoT) is inherently edge biased. The "things" are devices out in the world, not encased in a high-tech data center. Naturally, we expect initial data collection to happen on those devices, but what happens after that? Where does the data go? How is it processed, if it's processed at all? Is any value derived from it? These are the questions I think about every time I talk with customers looking for a solution for their IoT data.
That said, if we're being honest, most organizations, while hugely interested in IoT, are struggling with how to put it in place. They keep hitting the same roadblocks when it comes to gaining actionable insights from their IoT sensor data.
The easy way to solve a performance problem in technology seems to be to get a bigger, better, faster computer. That's what I told my mom when her Chromebook died. She hated that thing, so it was probably for the best. It's also what everyone seems to do when their cloud service isn't running fast enough: get a bigger server, pay for a larger instance, and so on. I get it; it's simple and low effort, but it's also expensive. Compute cost doesn't scale linearly with performance; it scales exponentially, so instead of buying one supercomputer you could buy tons of average servers. The solution is much cheaper when parallelization can be used to break large computing problems into smaller ones.
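To make the scale-out idea concrete, here's a minimal sketch of the split/process/combine pattern in Python. It uses a local process pool as a stand-in for a fleet of average servers; the function names (`process_chunk`, `parallel_average`) and the averaging workload are my own illustrative choices, not anything prescribed above.

```python
from concurrent.futures import ProcessPoolExecutor

def process_chunk(readings):
    # Each "server" computes a partial result over its slice of the data.
    # Here that's just the average of a batch of sensor readings.
    return sum(readings) / len(readings)

def parallel_average(readings, workers=4):
    # Split the data into roughly equal chunks, one per worker.
    size = max(1, len(readings) // workers)
    chunks = [readings[i:i + size] for i in range(0, len(readings), size)]
    # Fan the chunks out to parallel workers (stand-ins for cheap servers).
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(process_chunk, chunks))
    # Combine: weight each partial average by its chunk size.
    total = sum(avg * len(c) for avg, c in zip(partials, chunks))
    return total / len(readings)

if __name__ == "__main__":
    data = [float(i) for i in range(1000)]
    print(parallel_average(data))  # 499.5
```

The same pattern scales from one machine to many: nothing in the split or combine step cares whether the workers are local processes or a rack of commodity boxes.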