4 Distributed AI Deployment Use Cases Supported by Infrastructure

As businesses ramp up artificial intelligence (AI) practices across functions, IT teams are realizing that production-grade AI infrastructure is necessary to run AI training models efficiently. Our previous blog noted that accelerating AI innovation requires engaging with tech partners in ecosystems and deploying advanced AI infrastructure.

The pace of AI innovation in business is accelerating, and the list of functions that can benefit is expanding. In the Equinix 2023 Global Tech Trends Survey (GTTS), we learned that the top two business functions for which companies already use or plan to use AI are IT operations (85%) and cybersecurity (81%). Customer experience, research & development and marketing follow close behind as other top priorities. IT teams have their work cut out for them as they respond to demand for AI technology and data from business functions across their companies.

Deploying hybrid multicloud architectures that include cloud, colocation and on-premises data centers is essential for running AI workloads at the core and edge and for moving enormous data sets through workflows. Distributed AI orchestrators that place and coordinate AI tasks across private and public clouds will be necessary. Once these orchestrators become mainstream, distributed AI architecture will be easier to deploy and manage to meet distributed AI use case requirements.
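To make the orchestration idea concrete, here's a minimal Python sketch of the placement decision such an orchestrator might make. The venue names, sensitivity labels and capacity figure are illustrative assumptions, not a real orchestrator API:

```python
from dataclasses import dataclass

@dataclass
class AITask:
    name: str
    data_sensitivity: str  # "confidential" or "public" (assumed labels)
    gpu_hours: float

# Hypothetical remaining capacity on private infrastructure
private_gpu_hours_available = 500.0

def route_task(task: AITask) -> str:
    """Pick an execution venue: confidential data stays on private
    infrastructure; other work bursts to public cloud once the
    private capacity budget is exhausted."""
    global private_gpu_hours_available
    if task.data_sensitivity == "confidential":
        return "private-colocation"
    if task.gpu_hours <= private_gpu_hours_available:
        private_gpu_hours_available -= task.gpu_hours
        return "private-colocation"
    return "public-cloud"

print(route_task(AITask("fine-tune-llm", "confidential", 200)))  # private-colocation
print(route_task(AITask("batch-inference", "public", 900)))      # public-cloud
```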

In this blog we’ll discuss these four distributed AI deployment use cases and the required infrastructure:

  • Private Generative AI
  • Scalable Generative AI
  • Smart Spaces Monitoring
  • Steady State AI

Infrastructure required for distributed AI

Increasingly, as more data is generated at the edge, it makes sense to perform AI processing at the edge as well, for performance, privacy and cost reasons (moving data to a central location is slow and expensive). Each use case below has specific requirements that determine the necessary infrastructure and its placement.

Use Case 1: Private Generative AI

Enterprises want to leverage the benefits of the various generative AI models created by different AI vendors and cloud providers. Sometimes they are comfortable uploading their data into a virtual private cloud (VPC) once they have legal agreements stating that the cloud provider will not use that data to train its global generative AI model. However, many companies won't be comfortable uploading confidential data sets into the cloud at all. Instead, these companies want to train their generative AI models in private colocation facilities.

Users can bring generative AI foundation models in and train them on private AI infrastructure. Many organizations already store their private data sets at colocation data centers in their own private cages, which are accessible only to their authorized employees (akin to a private cloud). Thus, enterprises can customize the foundation models using their private and confidential data. Interconnection-rich colocation data centers can also be a desirable place to host the AI models for inference when interconnection services and cloud on-ramps are accessible, because many organizations need to fuse data from external sources (e.g., weather data, traffic data) with the private model's results to provide complete answers to a user's query.
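As a rough illustration, here's what customizing an open foundation model on private infrastructure might look like using the open-source Hugging Face transformers library. The model name and the private_corpus.txt path are placeholders, and a real run would need GPU capacity in the private cage:

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

model_name = "gpt2"  # stand-in for any open foundation model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# The private corpus is read from local storage in the colocation cage;
# it is never uploaded to a cloud service. The path is a placeholder.
dataset = load_dataset("text", data_files={"train": "private_corpus.txt"})
train_set = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ckpt",
                           per_device_train_batch_size=2,
                           num_train_epochs=1),
    train_dataset=train_set,
    # Causal-LM collator pads batches and derives labels from inputs
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```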

Use Case 2: Scalable Generative AI

As the popularity of generative AI models increases, millions of users are simultaneously accessing these models via both portals and APIs. These centralized cloud locations often get overwhelmed, leading to increased query response times. This problem gets amplified as the size of the query payload increases (e.g., video workloads). Cloud providers are now exploring the use of hybrid architectures to deploy these models at multiple edge locations globally, reducing the load on central servers and, thus, scaling the solution for the growing number of end users.

Interconnection-rich data centers are often the aggregation point for many clouds and networks. Generative AI models often need low-latency access to data stored across multiple clouds and data brokers to satisfy a user query. Since the clouds and data brokers in many cases already have their network edges and caches at these multitenant data centers, they are a logical place to process AI inference workloads.
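Here's a simplified sketch of how an inference front end might steer each query to the best edge site, blending measured latency with current load. The site names, endpoints and latency figures are invented for illustration:

```python
# Illustrative edge deployments of the same generative model; the
# endpoints and round-trip times below are not real services.
EDGE_SITES = {
    "ny-metro":  {"endpoint": "https://ny.example-ai.net/v1",  "rtt_ms": 12},
    "fra-metro": {"endpoint": "https://fra.example-ai.net/v1", "rtt_ms": 18},
    "sgp-metro": {"endpoint": "https://sgp.example-ai.net/v1", "rtt_ms": 30},
}

def pick_edge(load: dict[str, float]) -> str:
    """Choose the edge site with the lowest score, where the score
    blends round-trip time with current load so saturated sites
    shed work to their neighbors."""
    return min(EDGE_SITES,
               key=lambda s: EDGE_SITES[s]["rtt_ms"] * (1 + load.get(s, 0.0)))

current_load = {"ny-metro": 0.9, "fra-metro": 0.2, "sgp-metro": 0.1}
print(pick_edge(current_load))  # "fra-metro": NY is closest but saturated
```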

Use Case 3: Smart Spaces Monitoring

One trending set of use cases is referred to as smart space management, or “smart spaces.” Smart spaces rely on video or vision AI workloads to provide insights, recommendations and actions. They include monitored locations with public access, such as campuses, concert venues, transportation centers and retail locations. For example, retail businesses rely on vision AI for several purposes, including monitoring store inventory on shelves, assisting with on-time supply chain delivery, observing customer foot traffic and identifying potential security threats. Solutions in these areas require video sensors such as cameras, plus AI-enabled infrastructure to process the video streams. A vision AI algorithm can detect security breaches, inventory shortages or traffic congestion and trigger a response.

In most cases, these use cases require coverage for a whole city, a chain of retail stores or a similar footprint. To improve ROI and efficiency at that scale, it makes sense to build AI clusters at the metro level and aggregate the video streams to that metro-level AI infrastructure, amortizing CAPEX and reducing maintenance-driven OPEX. This approach cuts the associated IT and maintenance costs because the AI infrastructure sits in a reliable, physically secure, well-maintained location rather than being distributed across dozens of retail stores, public spaces or transportation roadways. Most of these use cases tolerate the latency of transporting video to a regional location for processing; in some cases, however, that latency budget cannot be met from central cloud locations.
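A simplified sketch of that aggregation pattern appears below: store cameras stream to a metro-level cluster, where a worker pulls frames and runs detection. The feed URLs are placeholders, and the detector is a stub standing in for a real vision AI model:

```python
import cv2  # OpenCV, assumed available on the metro AI cluster

# Illustrative RTSP feeds aggregated from retail sites to one metro
# cluster; the URLs are placeholders.
STORE_FEEDS = [
    "rtsp://store-001.example.com/shelf-cam",
    "rtsp://store-002.example.com/entrance-cam",
]

def detect_events(frame) -> list[str]:
    """Stub standing in for a real vision AI model (e.g., shelf-gap or
    intrusion detection). A production system would run an accelerated
    model here on the metro cluster's GPUs."""
    return []

def process_feed(url: str) -> None:
    """Pull frames from one store's camera and raise alerts."""
    cap = cv2.VideoCapture(url)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        for event in detect_events(frame):
            print(f"{url}: {event}")  # alert routed back to the store
    cap.release()

for feed in STORE_FEEDS:
    process_feed(feed)
```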

Use Case 4: Steady State AI

As enterprises progress through their AI journey (awareness, active, operational, systemic/steady state, transformational), they find that once they reach a steady (systemic) state in their AI practices, cost, efficiency and performance become the key objectives. These objectives drive the AI practice to evaluate infrastructure, AI hardware, software, tools and internal processes. Mature AI practices realize the best approach to maximizing ROI at the steady state AI phase is to build a hybrid AI architecture. Here are several examples of steady state AI:

  • The transportation/automotive industry uses steady state AI to refine AI algorithms for advanced driver-assistance systems (ADAS).
  • Financial services companies use steady state AI approaches to manage model drift or new data updates and develop new algorithms to produce a specific business outcome.
  • Banks use vision AI for security and forecasting teller workloads.
  • The pharmaceutical industry uses steady state AI to complete various tasks, such as applying AI to new drug and vaccine outcomes or monitoring the spread of infectious diseases.

Cost-efficient hybrid AI architecture requires private AI infrastructure that accommodates the main day-to-day AI workload while enabling bursting to the cloud for periodic, very large model training tasks. In other words, buy the base and rent the peak. Hybrid AI architecture provides better performance for everyday data science and a predictable cost model for data science teams, increasing the ROI of AI practices and the business outcomes they address.
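The economics are easy to sketch. In the hypothetical scheduler below, the GPU counts and hourly rates are illustrative assumptions; the point is that owned capacity covers the base load and cloud capacity is rented only for the peaks:

```python
# "Buy the base, rent the peak" with assumed capacity and pricing
BASE_GPUS = 64                   # privately owned GPUs (the "base")
PRIVATE_COST_PER_GPU_HR = 1.10   # amortized CAPEX + OPEX, illustrative
CLOUD_COST_PER_GPU_HR = 3.50     # on-demand cloud rate, illustrative

def schedule(demand_gpus: int, hours: float) -> dict:
    """Fill demand from owned capacity first; burst the rest to cloud."""
    base = min(demand_gpus, BASE_GPUS)
    burst = max(demand_gpus - BASE_GPUS, 0)
    cost = hours * (base * PRIVATE_COST_PER_GPU_HR +
                    burst * CLOUD_COST_PER_GPU_HR)
    return {"base_gpus": base, "cloud_gpus": burst, "cost_usd": round(cost, 2)}

print(schedule(48, 24))   # everyday workload fits entirely on the base
print(schedule(256, 24))  # periodic large training job bursts to cloud
```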

Learn how to distribute your AI deployments at the core and edge

High-performance, secure hybrid infrastructure is essential to meeting the extensive and varied requirements of distributed AI use cases. Companies that deploy hybrid infrastructure on Platform Equinix® can run AI training workloads at the core and edge, achieving flexibility and cost-efficiency. They can access their private data in Equinix IBX® data centers in 70+ metros and 30+ countries, and use Equinix Fabric® virtual interconnection services for low-latency access, via cloud on-ramps, to data sources stored across multiple clouds and from data brokers in our digital ecosystems.

To learn more about how to build and deploy distributed AI, read our Leader’s Guide to Hybrid Infrastructure.
