Many manufacturers have a propensity for going it alone with their own technology efforts in the belief that it will be faster and more cost-effective. They put data science models and platforms in place and concentrate on the speed of data preparation, often bringing vast amounts of data quickly into a usable format. That is frequently effective – but data preparation is the wrong bottleneck for businesses to focus on.
The time it takes to build, tune and deploy a model is the biggest challenge for many who pursue an in-house approach – and that is where those going it alone can run into serious roadblocks. Beyond the difficulty of developing and deploying a data science model in-house, scaling it is not easy. Moreover, implementations of this kind require data scientists to make them work. Organizations often have a small, focused group of data scientists in place but lack the critical mass of qualified users needed to implement a model quickly and easily and to scale it as part of a wider deployment.
Typically, that leads to disappointment. Companies often abandon in-house projects because of the skills, time and money it takes to develop a solution and the intense effort needed to scale, support and sustain it. Most businesses simply don’t have the capabilities, skills or resources to do this themselves cost-effectively and efficiently.
If manufacturers focus on in-house projects, they can solve specific problems, but it is very difficult for them to throw off the shackles, think big and quickly scale up an implementation. Even wrangling the data into a usable state can take a considerable amount of time. In-house data science models are often built around a single use case, and data scientists working on such one-off models will struggle to translate them into multiple iterations across a facility. Building just one model can take six to 12 months; scaled up to, say, 300-600 assets, that can start to feel like a never-ending journey. That is where a packaged solution from a third-party provider or partner can be advantageous, often bringing faster time to value through ease of use, scalability and speed of deployment.
Automated data science systems aimed at the broad engineering base of qualified users create an opportunity to prepare data and develop models far faster. Furthermore, these systems allow continuous monitoring, providing alerts to engineers who need to take prescribed actions to avoid business loss. By investing in these systems, manufacturers can speed deployment by 100 or even 1000 times, increasing value.
This means the organization can minimize its investment in project resources and infrastructure and achieve faster time to value. Moreover, scaling up the solution in this way can positively impact margin as well as market perception in areas such as safety, emissions control and overall equipment effectiveness (OEE). In other words, it can readily translate technology benefits into business ones.
Data Scientists Still Have a Key Role
There can still be an important role for in-house data scientists in this scenario. While they may feel defensive, or even at risk, when these kinds of implementations happen, that need not be the case. System usability is key here. In third-party implementations, qualified users – the mechanical, systems and process engineers already in the business – can address day-to-day problems and challenges with the system, while data scientists are reassigned to concentrate on a higher-level, more strategic range of challenges.
This gives those scientists the ability to work on projects where they are likely to have a wider and more profound impact, rather than dealing with granular day-to-day issues. The message for manufacturers, then, is that as they bring on more data scientists they can ensure those scientists are always doing higher-level data science and delivering major process improvement projects, often with the help of third-party technology or equipment.
All this also drives a mindset change in how businesses approach projects. If organizations carry out rollouts at proof-of-concept or single-project scale, they simply won’t get the same impact they would from scaled-up implementations. Larger rollouts of this kind often bring a faster return on investment, especially compared with a limited in-house data science rollout that may not deliver any value for many months.
Ultimately, it is time to think big and stop wasting time on small scale pilots. Don’t limit yourself by putting model development solely in the hands of data scientists. Enable them, and your established base of qualified users, to help you improve your margins. It is time to take the shackles off.