A global survey of 1,500 C-level executives across 16 industries conducted by Accenture revealed the majority of top managers strongly agree that leveraging artificial intelligence (AI) is necessary to achieve their growth objectives, while acknowledging that scaling AI at the enterprise level is a real challenge. Scaling AI means that diverse teams, departments and individuals across the enterprise realize the value of AI and utilize it in their work processes to achieve efficiency and business advantages.
As more devices become internet-enabled each year, both the volume of data and the rate at which it accumulates, particularly in capital-intensive industries, grow rapidly. As a result, demand for data storage and computing resources is rising across organizations. According to the Flexera 2021 State of the Cloud Report, enterprise cloud spend is significant and growing quickly year over year, and more organizations are leveraging public cloud services (e.g., AWS, Azure and Google Cloud), private clouds, or both (i.e., a hybrid model).
Key Existing Challenges
Despite the accelerating adoption rate of cloud services, many enterprises are still not willing (or not able) to run their applications in the cloud. To continue, we will review some of the existing challenges organizations face in this context.
The first critical decision is how to transfer data, and at what cost. For example, at network bandwidths between 100 Mbps and 1 Gbps, which are common in many parts of the world, transferring 10 GB of data takes anywhere from about two to 18 minutes. This transfer delay is a real constraint for a business that is performing critical operations and trying to run its assets efficiently and effectively. Any business must weigh whether the benefit of achieving lower latency outweighs the cost of acquiring the necessary network bandwidth, which in some cases is simply not available due to infrastructure constraints.
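As a sanity check on the figures above, transfer time can be estimated directly from file size and link bandwidth. A minimal sketch, assuming ideal sustained throughput (the quoted two-to-18-minute range also accounts for real-world protocol overhead):

```python
def transfer_seconds(size_gb: float, bandwidth_mbps: float) -> float:
    """Seconds to move size_gb gigabytes over a bandwidth_mbps link,
    assuming the link is fully utilized with no protocol overhead."""
    bits = size_gb * 8 * 1000**3           # decimal gigabytes -> bits
    return bits / (bandwidth_mbps * 1000**2)

# 10 GB over the bandwidth range discussed above:
print(f"{transfer_seconds(10, 100) / 60:.1f} min at 100 Mbps")   # ~13.3 min
print(f"{transfer_seconds(10, 1000) / 60:.1f} min at 1 Gbps")    # ~1.3 min
```

In practice, observed times land above these ideal figures, which is why provisioning bandwidth purely from back-of-the-envelope numbers tends to underestimate transfer windows.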
Source: Migration to Google Cloud: Transferring your large datasets
Another challenge is the increased cyberattack surface. More businesses are shifting their confidential information to the Cloud, and data breaches targeting cloud-based infrastructures increased by 50% in 2019 compared to 2018 (see the Verizon Business 2020 Data Breach Investigations Report). Moving data out of the plant increases the number of potential cyberattack vectors. Data breaches can be caused by a simple misconfiguration or by insider threats, and they can be hard to avoid when part of the IT infrastructure is outsourced to a third-party business. Therefore, ensuring data security in this dynamic environment is crucial for enterprises.
Digital sovereignty, which refers to the level of control over the data, hardware, and software that a company relies on to operate, is another challenge facing the enterprise. Operational sovereignty provides customers with assurances that those working for a cloud provider cannot compromise a customer’s workloads. Software sovereignty ensures that the customer can control the availability of its workloads and run them without being dependent on or locked into a single cloud provider. Moreover, data sovereignty provides customers with a mechanism to prevent the cloud provider from accessing their data, designating access only for specific purposes. The real challenge for organizations is trusting those managing their cloud services, especially when sensitive data could circulate in the hands of multiple third-party businesses.
Cyber Regulatory Compliance
Cyber regulatory compliance has its own complexity: compliance programs must evolve alongside cloud deployments, infrastructure, environments and applications, and the various cloud services and applications in use must be configured securely. Moving data from the plant to a cloud service, especially one owned and operated by a third-party business, may violate regulatory requirements. Because ensuring that many cloud services and applications are securely configured becomes difficult at scale, organizations with a multi-cloud strategy can benefit from what is called Cloud Security Posture Management (CSPM).
The next concern is the cost of cloud-centric implementations. According to an International Data Corporation (IDC) report, annual public cloud spending will hit $500 billion by 2023. The Cloud's value proposition (infrastructure available immediately, at exactly the scale needed by the business) has driven real efficiencies in both operations and economics, fueling adoption. At the same time, there is growing awareness of the long-term cost implications of the Cloud, and several companies are taking the dramatic step of repatriating parts of their workloads, or adopting a hybrid approach, to rein in cloud costs.
Retaining Employee Knowledge
One of the critical challenges for organizations is retaining their experienced employees' knowledge, a key strategic resource, before those employees retire or after a merger or acquisition occurs. For example, as baby boomers retire, transferring and distributing their knowledge to new employees, especially younger generations, will become a big concern. One solution is to automate workflows and processes at the edge. Such automation, combined with AI and Machine Learning (ML) techniques, can track and store the critical "know-how" of key employees at different levels of an organization, and retain, improve and share that knowledge with new recruits and the generations to come.
Inference at the Edge
Considering all the challenges discussed, a reasonable solution is, instead of streaming process data from the plant edge into the Cloud for running inference models, to ship the application (including the trained model) to an edge execution environment. Actionable responses and insights can then be quickly communicated to the human stakeholders. This mechanism reduces the high costs of centralized cloud storage and computing in terms of time, network bandwidth, storage capacity, loss of independence, security and privacy.
In the current state of IoT, edge computing means intelligently collecting, aggregating and analyzing IoT data via services deployed close to the devices (i.e., at the edge), based on the business needs of the application. The future of edge computing is complementary to cloud capabilities; the edge will not replace the Cloud. The duality of these two paradigms distributes infrastructure risk between the facility (e.g., an offshore or manufacturing plant) and its data center, providing uninterrupted real-time actionable responses at the edge, while the Cloud executes less critical tasks such as model training, retraining, sustainment and monitoring. This hybrid combination will optimize uptime while minimizing the risk of unforeseen issues.
Aspen AIoT Hub™ and the Intelligent Edge Vision
At AspenTech, our goal is to leverage today's edge computing technology in an optimal and scalable way to deliver our high-value IP in an intelligent edge solution. This includes leveraging the edge to provide real-time connectivity to sensors, devices and data sources. It also encompasses ensuring the integrity and quality of this data as it is delivered to a cloud or on-premises environment, where we leverage the computing power to analyze the data with our applications and then deliver the IP (models or algorithms) back to the edge, where we put it online to deliver value.
Aspen AIoT Hub: The Industrial AI Infrastructure
The Aspen AIoT Hub is the result of R&D investments to deliver Aspen Inference Models, the engines for Industrial AI applications and ultimately the Intelligent Edge. It provides access to data at scale, whether in the enterprise, the plant or at the edge. It also provides comprehensive AI pipeline workflows to embed AI in Aspen Models, for both engineers and data scientists. As shown in the diagram above, at the very top is our enterprise governance application, Aspen Enterprise Insights™, a hybrid-cloud software product that provides flexible enterprise visualization and workflow management, delivering real-time decision support across the enterprise. Below that level are the industrial AI apps used to create inference models and hybrid models. These include, for example, Aspen AI Model Builder™, Aspen Event Analytics™ and Aspen Data Science Studio™.
Next is our Aspen Cloud™, which consists of our industrial cloud data lake, Aspen Enterprise Historian™ (IP.21) and the future intelligent edge management service. Everything underneath can be located at the edge or on-premises. At the very bottom are the various data sources that may be present in an enterprise infrastructure, including but not limited to IP.21 or a third-party historian, Enterprise Resource Planning (ERP), Enterprise Asset Management (EAM), etc. Aspen Connect™ is our connectivity gateway that streams historical and live data from these sources up into the cloud or into the edge execution runtime. The edge management service will thus become our customers' gateway into the Intelligent Edge. Instead of moving data to the Cloud, AspenTech's intelligence at the edge enables our customers to move AI/ML models from the Cloud to the edge to drive critical business outcomes quickly, safely, efficiently and intelligently.
To learn more, visit the Aspen AIoT Hub solutions page.