Optimize Hyperconverged Workloads

Cisco Workload Optimization Manager works with many third-party solutions to ensure your applications get the resources they need. However, its deep integration with the entire Cisco environment greatly enhances your Cisco deployments to optimize your data centers. It helps you safely maximize cloud elasticity in Cisco UCS server environments and Cisco HyperFlex systems to gain better performance and efficiency. With Cisco Tetration network awareness, you can confidently re-platform to application architectures that have increased network complexity. Cisco CloudCenter can help you intelligently deploy new workloads anywhere, anytime. Cisco Workload Optimization Manager optimizes initial cloud placement for performance, cost, and compliance. Figure 3-18 illustrates CWOM meeting changing demands.


Figure 3-18 CWOM meeting changing demands

Ensure Application Performance

Application awareness with AppDynamics metrics complements Cisco Workload Optimization Manager and enables you to do the following:

• Continuously ensure application performance and eliminate application performance risk due to infrastructure

• Show your IT organization’s value to the business when infrastructure-resource decisions are directly tied to the performance of business-critical applications

• Bridge the application-infrastructure gap with full-stack control that elevates teams and provides a common understanding of application dependencies

• Accelerate and de-risk application migration with a holistic understanding of application topology, resource utilization, and the data center stack

Figure 3-19 illustrates CWOM meeting AppDynamics.


Figure 3-19 CWOM meeting AppDynamics

Cisco AppDynamics and Cisco Workload Optimization Manager provide complete visibility and insight into application and infrastructure interdependencies and business performance. The result is application-aware IT infrastructure that is continuously resourced to deliver business objectives. Figure 3-20 illustrates the CWOM and AppDynamics benefits.


Figure 3-20 CWOM and AppDynamics benefits

Infrastructure as a Service

Cisco UCS Director delivers Infrastructure as a Service (IaaS) for both virtual and physical infrastructure. With Cisco UCS Director, you can create an application container template that defines the infrastructure required for a specific application or how a customer or business unit is expected to use that application. Cisco UCS Director helps IT teams to define the rules for the business’s infrastructure services:

• Either you can first onboard tenants and then define the boundaries of the physical and virtual infrastructure that they can use, or you can allow your onboarded tenants to define the infrastructure boundaries.

• Create policies, orchestration workflows, and application container templates in Cisco UCS Director that define the requirements for a specific type of application that can be used by a tenant, such as a web server, database server, or generic virtual machine (VM).

• Publish these templates as a catalog in the End User Portal.

Users can go to the End User Portal, select the catalog that meets their needs, and make a service request for that particular application or VM. Their service request triggers the appropriate orchestration workflow to allocate the required infrastructure and provision the application or VM.

If the service request requires approvals, Cisco UCS Director sends emails to the specified approver(s). Once the service request is approved, Cisco UCS Director assigns the infrastructure to those users, creating a virtual machine if necessary and performing the base configuration, such as provisioning the operating system. You can also configure an orchestration workflow to ask questions before allowing a user to choose a catalog item. Here are some points to keep in mind:

• You can configure the workflow to ask the user what type of application they plan to run and automatically select a catalog for them based on the answers to those questions.

• The end user does not have to worry about whether to request a physical server or a VM, what kind of storage they require, or which operating system to install. Everything is predefined and prepackaged in the catalog.

For example, you can create policies, orchestration workflows, and an application container template for an SAP application that uses a minimum level of infrastructure, requires approvals from a director in the company, and has a chargeback to the department. When an end user makes a service request in the End User Portal for that catalog item, Cisco UCS Director does the following:

1. Sends an email to the director, who is the required approver.

2. When the approval is received, Cisco UCS Director creates a VM in the appropriate pod with four CPUs, 10GB of memory, and 1TB of storage.

3. Installs an operating system (OS) on the VM.

4. Notifies the end user that the VM is available for them to use.

5. Sets up the chargeback account for the cost of the VM.

With the available APIs from Cisco UCS Director, you can also script custom workflows to pre-install the SAP application in the VM after the OS is installed.
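
As a concrete illustration of this scripting model, the following minimal sketch submits a UCS Director orchestration workflow through its JSON REST API. It is a sketch under stated assumptions rather than a definitive implementation: the controller address, API key, workflow name ("Provision SAP VM"), and workflow inputs are hypothetical placeholders that you would replace with values from your own environment.

```python
# Sketch: submitting a UCS Director orchestration workflow via the JSON REST API.
# The host, API key, workflow name, and inputs below are hypothetical examples.
import json
import requests

UCSD_HOST = "ucsd.example.com"          # hypothetical UCS Director address
API_KEY = "REPLACE_WITH_USER_API_KEY"   # per-user REST API access key

def submit_workflow(workflow_name, inputs):
    """Submit a service request for the named workflow and return its SR ID."""
    op_data = {
        "param0": workflow_name,
        # Workflow user inputs are passed as a list of name/value pairs.
        "param1": {"list": [{"name": k, "value": v} for k, v in inputs.items()]},
        "param2": -1,  # -1 = not attached to a parent service request
    }
    resp = requests.get(
        f"https://{UCSD_HOST}/app/api/rest",
        params={
            "formatType": "json",
            "opName": "userAPISubmitWorkflowServiceRequest",
            "opData": json.dumps(op_data),
        },
        headers={"X-Cloupia-Request-Key": API_KEY},
        verify=False,  # lab only; use a trusted certificate in production
    )
    resp.raise_for_status()
    return resp.json()["serviceResult"]  # service request ID for status polling

if __name__ == "__main__":
    sr_id = submit_workflow("Provision SAP VM", {"vCPUs": "4", "MemoryGB": "10"})
    print(f"Submitted service request {sr_id}")
```

A post-provisioning task in the same workflow (or a configuration-management tool it triggers) could then install the SAP application once the operating system is in place.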

Cisco UCS Director enables you to automate a wide array of tasks and use cases across a wide variety of supported Cisco and non-Cisco hardware and software data center components, including physical infrastructure automation at the compute, network, and storage layers. A few examples of the use cases that you can automate include, but are not limited to, the following:

• VM provisioning and lifecycle management

• Network resource configuration and lifecycle management

• Storage resource configuration and lifecycle management

• Tenant onboarding and infrastructure configuration

• Application infrastructure provisioning

• Self-service catalogs and VM provisioning

• Bare-metal server provisioning, including installation of an operating system

For each of the processes that you decide to automate with orchestration workflows, you can choose to implement the processes in any of the following ways:

• Use the out-of-the-box workflows provided with Cisco UCS Director.

• Modify the out-of-the-box workflows with one or more of the tasks provided with Cisco UCS Director.

• Create your own custom tasks and use them to customize the out-of-the-box workflows.

• Create your own custom workflows with custom tasks and the out-of-the-box tasks.

Beginning with version 6.6, Cisco UCS Director can be claimed as a managed device in Intersight so that usage data, license usage, and similar information can be collected. UCS Director administrators can update the UCS Director southbound connectors used to communicate with supported devices, including networking and storage platforms, during a maintenance window, allowing rapid delivery of new features and functionality. Updating these device libraries enables users to leverage endpoint capabilities and APIs faster through UCS Director. Figure 3-12 illustrates Cisco UCS Director Intersight integration.


Figure 3-12 Cisco UCS Director Intersight integration

The benefits of SaaS and CI/CD (continuous integration/continuous delivery) can be achieved by claiming on-premises UCS Director instances in Intersight. Once these are claimed, the traditional on-premises software is transformed into a secure hybrid SaaS setup that delivers ongoing new capabilities:

• Automatic downloads of software enhancements, upgrades, bug fixes, and updates for the following:

  • UCS Director Base Platform Pack

  • System Update Manager

  • Infrastructure-specific Connector Packs (EMC storage, F5 load balancers, Red Hat KVM)

• Enhanced problem resolution with Cisco Support through Intersight

• Proactive notifications and streamlined “one-click” diagnostics collection

Figure 3-13 illustrates Cisco UCS Director Intersight integration benefits.


Figure 3-13 Cisco UCS Director Intersight integration benefits

UCS Director–specific dashboard widgets can be added to provide useful summary information for the following:

• Instance summary

• Service status summary

• Last backup status

• Trends for last 10 backups

Figure 3-14 shows the UCS Director dashboard widgets in Intersight.


Figure 3-14 UCS Director dashboard widgets in Intersight

It is possible for an Intersight workflow to call a UCSD workflow, if desired, which can allow an organization to gradually migrate to Intersight as the primary orchestrator. However, the UCS Director and Intersight workflows are not compatible, and they cannot be directly imported from UCS Director into Intersight.

With Cisco ACI, you can create application infrastructure containers that contain the appropriate network services as well as support infrastructure components for each respective application. Figure 3-15 illustrates UCS Director integration with ACI.


Figure 3-15 UCS Director integration with ACI

The following are the business benefits of Cisco UCS Director and Cisco ACI integration:

• Cisco UCS Director and Cisco ACI integrate through native tasks and prebuilt workflows.

• This integration supports IaaS with three main features:

  • Secure multitenancy

  • Rapid application deployment

  • Self-service portal

Secure Multitenancy

The integrated solution provides consistent delivery of infrastructure components that are ready to be consumed by clients in a secured fashion. Here are some key points concerning secure multitenancy:

• The solution optimizes resource sharing capabilities and provides secure isolation of clients without compromising quality of service (QoS) in a shared environment.

• To provide IaaS, secure multitenancy reserves resources for exclusive use and securely isolates them from other clients.

• Cisco ACI supports multitenancy by using Virtual Extensible LAN (VXLAN) tunnels internally within the fabric, inherently isolating tenant and application traffic.

• Cisco UCS Director manages the resource pools assigned to each container. Only Cisco supports secure multitenancy that incorporates both physical and virtual resources.

Rapid Application Deployment

The combination of Cisco UCS Director and Cisco ACI enhances your capability to rapidly deploy application infrastructure for you and your clients. With the increasing demands of new applications and the elastic nature of cloud environments, administrators need to be able to quickly design and build application profiles and publish them for use by clients. Cisco UCS Director, in conjunction with Cisco ACI, gives you the ability to quickly meet the needs of your clients. Here are some key points concerning rapid application deployment:

• Cisco UCS Director interacts with Cisco ACI to automatically implement the networking services that support applications. In Cisco UCS Director, you can specify a range of Layer 4 through Layer 7 networking services between application layers that are deployed with a zero-touch automated configuration model.

• You can dynamically place workloads based on current network conditions so that service levels are maintained at the appropriate level for the applications being supported by the client.

• You can use resource groups to establish tiers of resources based on application requirements, including computing, networking, and storage resources, with varying levels of performance. For example, a bronze level of service might be used for developers and include resources such as thin-provisioned storage and virtualized computing resources. In contrast, a gold level of service might be used for production environments and include thick-provisioned storage and bare-metal servers for performance without compromise.

• After your resources and services are deployed, you can monitor your application infrastructure with real-time health scores, dynamically reconfigure your network if necessary to meet your performance goals, and obtain resource consumption information that can be used for charging clients.

• Cisco UCS Director in conjunction with Cisco ACI also provides complete application infrastructure lifecycle management, returning resources to their respective free pools and eliminating stranded resources.

Self-Service Portal

After you have defined or adopted a set of application profiles, you can make them available to clients in a service catalog visible in the self-service portal. Your clients can log in to Cisco UCS Director’s self-service portal, view the service catalog published by your organization, and order the infrastructure as desired.

The application profiles you define can be parameterized so that clients can provide attributes during the ordering process to customize infrastructure to meet specific needs.

For example, clients can be allowed to specify the number of servers deployed in various application infrastructure tiers or the amount of storage allocated to each database server. After your clients have placed their orders, they can monitor the status of application infrastructure orders, view the progress of application infrastructure deployment, and perform lifecycle management tasks.

Cisco Cloud APIC on AWS

Cisco Cloud APIC is an important new solution component introduced in the architecture of Cisco Cloud ACI. It plays the role of the APIC for a cloud site. Like the APIC for on-premises Cisco ACI sites, Cloud APIC manages network policies for the cloud site it runs on, using the Cisco ACI network policy model to describe the policy intent.

Cloud APIC is a software-only solution that is deployed using cloud-native instruments such as AWS CloudFormation templates. Network and security policies can be defined locally on the Cloud APIC for the cloud site, or they can be defined globally on NDO and then distributed to the Cloud APIC. While the on-premises APIC renders the intended policies onto the Cisco ACI switches of the site, Cloud APIC renders the policies onto the AWS cloud network infrastructure.

It accomplishes this by translating the Cisco ACI network policies into AWS-native network policies and using the AWS-native policy API to automate the provisioning of the needed AWS-native cloud resources, such as VPCs, cloud routers, security groups, and security group rules.
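
To make that translation more tangible, the following sketch shows the kind of AWS-native API calls that Cloud APIC automates when it renders Cisco ACI policy: creating a VPC (the rendering of a VRF) and a security group with an ingress rule (the rendering of an EPG and a permitting contract). This is only an illustration using boto3 with hypothetical names, CIDR blocks, and region; it is not the Cloud APIC implementation itself.

```python
# Illustration of the AWS-native constructs that Cloud APIC provisions when it
# renders Cisco ACI policy. Names, CIDR blocks, and the region are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A VRF in the Cisco ACI model is rendered as a VPC in AWS.
vpc = ec2.create_vpc(CidrBlock="10.10.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# An EPG is rendered as a security group attached to that VPC.
sg = ec2.create_security_group(
    GroupName="web-epg",
    Description="Security group representing the Web EPG",
    VpcId=vpc_id,
)
sg_id = sg["GroupId"]

# A contract that permits HTTP into the Web EPG becomes a security group rule.
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "permit HTTP"}],
    }],
)
print(f"Created VPC {vpc_id} and security group {sg_id}")
```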

The key functionalities of Cloud APIC include the following:

• Providing a northbound REST interface to configure cloud deployments (see the API sketch after this list)

• Accepting Cisco ACI Policy Model and other cloud-specific policies directly or from MSO

• Performing endpoint discovery in the cloud site

• Performing Cisco ACI Cloud Policy translation

• Configuring the cloud router’s control plane

• Configuring the data-path between the Cisco ACI fabric and the cloud site
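
For a sense of what that northbound REST interface looks like in practice, the sketch below authenticates to a Cloud APIC and reads back the configured tenants. It assumes Cloud APIC exposes the same APIC-style REST API (token-based aaaLogin followed by class queries) as an on-premises APIC; the hostname and credentials are placeholders.

```python
# Minimal sketch of the Cloud APIC northbound REST interface, assuming the
# standard APIC-style API (aaaLogin token, then class queries).
import requests

CAPIC = "https://capic.example.com"   # hypothetical Cloud APIC address
CREDS = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

session = requests.Session()
session.verify = False  # lab only; use a trusted certificate in production

# Authenticate; the response carries a token that is returned as a cookie.
login = session.post(f"{CAPIC}/api/aaaLogin.json", json=CREDS)
login.raise_for_status()
token = login.json()["imdata"][0]["aaaLogin"]["attributes"]["token"]
session.cookies.set("APIC-cookie", token)

# Read all tenants currently configured on the Cloud APIC.
tenants = session.get(f"{CAPIC}/api/class/fvTenant.json").json()
for obj in tenants["imdata"]:
    print(obj["fvTenant"]["attributes"]["name"])
```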

Cisco Cloud APIC is a microservices-based software deployment of APIC. Cisco Cloud APIC on AWS is deployed and runs as an Amazon Elastic Compute Cloud (Amazon EC2) instance using persistent block storage volumes in Amazon Elastic Block Store (Amazon EBS). The Amazon Machine Image (AMI) for Cisco Cloud APIC is available in the AWS Marketplace and uses a bring-your-own-license (BYOL) model.

Just like the APIC in an on-premises ACI fabric, the Cloud APIC contains only policies and is not in the data-forwarding path. Any downtime of the Cloud APIC will not impact network forwarding functionality or performance in the cloud site. The Amazon EC2 instance of the Cloud APIC takes advantage of Amazon EBS built-in storage volume redundancy, high availability, and durability.

Upon a failure of the Amazon EC2 instance, Cloud APIC can always be relaunched or restored to its previous state by rebuilding the configuration and state from persistent storage, providing seamless Cloud APIC functionality. Therefore, for simplicity and cost effectiveness, Cloud APIC is deployed as a single Amazon EC2 instance in the initial release of Cisco Cloud ACI on AWS. In the future, clustering of multiple virtual instances will be introduced for Cloud APIC to achieve higher scalability and instance-level redundancy.

Both Cisco ACI and AWS use group-based network and security policy models. The logical network constructs of the Cisco ACI network policy model consist of tenants, bridge domains (BDs), bridge-domain subnets, endpoint groups (EPGs), and contracts. AWS uses slightly different constructs: user accounts, virtual private cloud (VPC), and security groups, plus security group rules and network access lists.

Cisco ACI classifies endpoints into EPGs and uses contracts to enforce communication policies between these EPGs. AWS uses security groups (SGs) and security group rules for classification and policy enforcement.
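
The rough correspondence between the two policy models can be summarized as follows. This is a simplification for orientation only; the exact AWS constructs that Cloud APIC creates depend on the policy being rendered.

```python
# Approximate mapping between Cisco ACI constructs and the AWS-native
# constructs they are rendered into (simplified for orientation).
ACI_TO_AWS = {
    "Tenant": "AWS user account",
    "VRF": "Virtual private cloud (VPC)",
    "Bridge-domain subnet": "Subnet in an availability zone inside the VPC",
    "Endpoint group (EPG)": "Security group (SG)",
    "Contract": "Security group rules",
}

for aci_construct, aws_construct in ACI_TO_AWS.items():
    print(f"{aci_construct:>22} -> {aws_construct}")
```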

Cisco Cloud APIC’s First Time Setup Wizard

The first time you connect to Cisco Cloud APIC UI, the First Time Setup Wizard automatically kicks off. This wizard helps you configure some of the Cisco Cloud APIC required settings, such as DNS, the TEP (Tunnel End Point) pool, the regions to be managed, and IPsec connectivity options.

At the end of the First Time Setup Wizard, Cisco Cloud APIC configures the AWS infrastructure needed to become fully operational, such as a pair of Cisco CSR 1000V Series routers. The provisioning of the AWS infrastructure is fully automated and carried out by Cisco Cloud APIC. After this step, you will be able to start deploying your Cisco ACI policy on AWS. Figure 3-4 shows the First Time Setup Wizard of Cisco Cloud APIC.


Figure 3-4 First Time Setup Wizard of Cisco Cloud APIC

Registering a Cisco ACI Cloud Site in NDO

Each Cisco Cloud APIC represents a Cisco ACI site. To extend policy across sites, Cisco ACI uses the Cisco ACI Nexus Dashboard Orchestrator (NDO). When you register a Cisco Cloud APIC in NDO, it will appear as a new site and will allow you to deploy existing or new schemas to AWS. NDO ensures that you specify the required site-specific options, such as subnets and EPG membership classification criteria, which are different for each site. Figure 3-5 shows how to register a Cisco ACI cloud site in NDO.


Figure 3-5 Registering a Cisco ACI cloud site in NDO

Cisco Cloud APIC also provides a view of the AWS-native constructs used to represent the Cisco ACI policy. This allows network administrators to gradually familiarize themselves with AWS networking constructs. Figure 3-6 shows the native cloud resources view on the Cloud APIC UI.


Figure 3-6 Native cloud resources view on the Cloud APIC UI

Deploying a Multitier Application in a Hybrid Scenario

To deploy a three-tier application, consisting of Database (DB), App, and Web tiers, across an on-premises data center and the AWS cloud using Cisco Cloud ACI integration, you will need to configure a schema on NDO that represents this policy. It should contain at least one VRF, one application profile, and three EPGs (one EPG for each tier of the application) as well as contracts between the tiers.

For example, the App and DB tiers can be deployed on the premises and the Web tier in AWS—or you can use any permutation of this set as you see fit. Figure 3-7 shows a three-tier application schema on NDO.


Figure 3-7 Three-tier application schema on NDO

The schema can then be associated with the on-premises site and the Cisco Cloud ACI site. Once the association is made, you then define the subnets to be used for the VRF on AWS. The Cisco Cloud APIC model associates subnets with the VRF because, in AWS, a VRF is mapped to a VPC and each subnet is mapped to an availability zone (AZ) inside that VPC. Figure 3-8 illustrates how to deploy an application to on-premises and cloud sites in AWS.


Figure 3-8 Deploying an application to on-premises and cloud sites in AWS

Cisco Cloud ACI ensures that the AWS cloud and on-premises ACI are configured appropriately to allow communication between the App EPG and the Web EPG residing on AWS. Figure 3-9 illustrates the three-tier application deployed across on-premises and cloud sites in AWS.


Figure 3-9 Three-tier application deployed across on-premises and cloud sites in AWS

You can now deploy new Web instances on AWS to accommodate your needs.

High-Level Architecture of Cisco Cloud ACI on AWS

An instance of MSO orchestrates multiple independent sites using a consistent policy model and provides a single pane of glass for centralized management and visibility. The sites can be either on-premises Cisco ACI fabric sites with their own site-local APIC clusters or cloud sites in AWS with Cloud APIC to manage them.

Just as with a normal Cisco ACI multisite architecture, all the sites are interconnected via a “plain” IP network. There’s no need for IP multicast or Dynamic Host Configuration Protocol (DHCP) relay. You provide IP connectivity, and MSO will be responsible for setting up the intersite overlay connectivity. Figure 3-3 illustrates Cisco Cloud ACI on AWS architecture.


Figure 3-3 Cisco Cloud ACI on AWS architecture

The following are the key building blocks of the Cisco Cloud ACI architecture:

• An on-premises Cisco ACI site running Cisco ACI software and equipped with at least one second-generation spine model (EX, FX, C or GX)

• Cisco ACI Nexus Dashboard Orchestrator (NDO)

• Cisco Cloud APIC

• Intersite connectivity between the on-premises and cloud sites

• Network policy mapping between the Cisco ACI on-premises and cloud sites

Cisco ACI Nexus Dashboard Orchestrator

In a Cisco ACI multisite architecture, the Cisco ACI Nexus Dashboard Orchestrator (NDO) is the single pane of glass for management of all the interconnected sites. It is a centralized place to define all the intersite policies that can then be published to the individual Cisco ACI sites where the site-local APICs render them on the physical switches that build those fabrics.

With Cisco Cloud ACI, NDO's orchestration functions expand to the cloud sites. It is responsible for site registration of both on-premises Cisco ACI data center sites and the cloud sites. It automates the creation of overlay connectivity between all the sites (on-premises and cloud). Continuing to be the central orchestrator of intersite policies, NDO publishes policies to on-premises Cisco ACI data center sites and pushes the same policies to cloud sites in AWS.

It is also capable of instrumenting the policy deployment among different sites by selectively distributing the policies to only the relevant sites. For instance, NDO can deploy the web front tier of an application into the cloud site in AWS while keeping its compute and database tiers in the on-premises site. Through the NDO interface, network administrators can also regulate the communication flow between the on-premises site and AWS as required by applications.

Challenges in Hybrid Cloud Environments

In a hybrid cloud environment, it is becoming more and more challenging to maintain a homogeneous enterprise operational model, comply with corporate security policies, and gain visibility across hybrid environments.

The following are the main challenges in building and operating a hybrid cloud environment:

• Automating the creation of secure interconnects between on-premises and public clouds

• Dealing with the diverse and disjoint capabilities across on-premises private cloud and public cloud

• Multiple panes of glass to manage, monitor, and operate hybrid cloud instances

• Inconsistent security segmentation capabilities between on-premises and public clouds

• Facing the learning curve associated with operating a public cloud environment

• Inability to leverage a consistent L4–L7 services integration in hybrid cloud deployments

Cisco Cloud Application Centric Infrastructure (Cisco Cloud ACI) is a comprehensive solution for simplified operations, automated network connectivity, consistent policy management, and visibility for multiple on-premises data centers and public clouds or multicloud environments.

The solution captures business and user intents and translates them into native policy constructs for applications deployed across various cloud environments. It uses a holistic approach to enable application availability and segmentation for bare-metal, virtualized, containerized, or microservices-based applications deployed across multiple cloud domains. The common policy and operating model will drastically reduce the cost and complexity of managing hybrid and multicloud deployments. It provides a single management console to configure, monitor, and operate multiple disjointed environments spread across multiple clouds.

The Cisco Cloud ACI solution extends the successful capabilities of Cisco ACI in private clouds into public cloud environments (AWS, Microsoft Azure, and now Google Cloud). This solution introduces Cisco Cloud APIC, which runs natively in public clouds to provide automated connectivity, policy translation, and enhanced visibility of workloads in the public cloud. This solution brings a suite of capabilities to extend your on-premises data center into true multicloud architectures, helping to drive policy and operational consistency regardless of where your applications or data reside. Figure 3-1 illustrates Cisco Cloud ACI.


Figure 3-1 Cisco Cloud ACI

Cisco Nexus Dashboard offers a centralized management console that allows network operators to easily access applications needed to perform the lifecycle management of their fabric for provisioning, troubleshooting, or simply gaining deeper visibility into their network. It’s a single launch point to monitor and scale across different fabric controllers, whether it is Cisco Application Policy Infrastructure Controller (APIC), Cisco Data Center Network Manager (DCNM), or Cisco Cloud APIC.

The Cisco Nexus Dashboard Orchestrator, which is hosted on the Cisco Nexus Dashboard, provides policy management, network policy configuration, and application segmentation definition and enforcement policies for multicloud deployments. Using the Cisco Nexus Dashboard Orchestrator, customers get a single view into the Cisco APIC, Cisco DCNM, and Cisco Cloud APIC policies across AWS, Microsoft Azure, and Google Cloud environments.

In an on-premises Cisco ACI data center, Cisco Application Policy Infrastructure Controller (APIC) is the single point of policy configuration and management for all the Cisco ACI switches deployed in the data center. When there is a need to seamlessly interconnect multiple Cisco ACI–powered data centers and selectively extend Cisco ACI constructs and policies across sites, Cisco Nexus Dashboard Orchestrator is the solution.

Cisco Nexus Dashboard Orchestrator can manage policies across multiple on-premises Cisco ACI data centers as well as public clouds. The policies configured from Orchestrator can be pushed to different on-premises Cisco ACI sites and cloud sites. The Cisco APICs running on premises receive this policy from Orchestrator and then render and enforce it locally.

When extending Cisco ACI to the public cloud, a similar model applies. However, public cloud vendors do not understand Cisco ACI concepts such as endpoint groups (EPGs) and contracts. Orchestrator policies therefore need to be translated into cloud-native policy constructs. For example, contracts between Cisco ACI EPGs need to be translated into security groups on AWS first and then applied to AWS cloud instances.

This policy translation and programming of the cloud environment is performed using a new component of the Cisco Cloud ACI solution called Cisco Cloud Application Policy Infrastructure Controller (Cisco Cloud APIC or Cloud APIC).

The Cisco Cloud ACI solution ensures a common security posture across all locations for application deployments. The Cisco Cloud APIC translates ACI policies into cloud-native policy constructs, thus enabling consistent application segmentation, access control, and isolation across varied deployment models.

Cisco Cloud APIC runs natively on supported public clouds to provide automated connectivity, policy translation, and enhanced visibility of workloads in the public cloud. Cisco Cloud APIC translates all the policies received from Multi-Site Orchestrator (MSO) and programs them into cloud-native constructs such as virtual private clouds (VPCs), security groups, and security group rules.

This new solution brings a suite of capabilities to extend your on-premises data center into true hybrid cloud architectures, helping drive policy and operational consistency regardless of where your applications reside. It provides a single point of policy orchestration across hybrid environments, operational consistency, and visibility across clouds. Figure 3-2 illustrates Cisco Cloud ACI capabilities.


Figure 3-2 Cisco Cloud ACI capabilities

Figure 3-2 shows the overall high-level architecture of Cisco Cloud ACI with Cisco ACI Multi-Site Orchestrator acting as a central policy controller, managing policies across multiple on-premises Cisco ACI data centers as well as hybrid environments, with each cloud site being abstracted by its own Cloud APICs.

Meraki Virtual MX Appliances for Public and Private Clouds

Virtual MX (vMX) is a virtual instance of a Meraki security and SD-WAN appliance dedicated specifically to providing the simple configuration benefits of site-to-site Auto VPN for organizations running or migrating IT services to public or private cloud environments. An Auto VPN tunnel to a vMX is like having a direct Ethernet connection to a private data center. Figure 2-27 illustrates an overview of Meraki vMX integration with cloud.


Figure 2-27 An overview of Meraki vMX integration with cloud

Features and Functionality of the vMX Appliance

vMX functions like a VPN concentrator and includes SD-WAN functionality like other MX devices. For public cloud environments, a vMX is added via the respective public cloud marketplace and, for private cloud environments, a vMX can be spun up on a Cisco UCS running NFVIS. Setup and management in the Meraki dashboard are just like for any other MX and include the following features:

• Seamless cloud migration. You can securely connect branch sites with a physical MX appliance to resources in public cloud environments in three clicks with Auto VPN.

• Secure virtual connections. You can extend SD-WAN to public cloud environments for optimized access to business-critical resources.

• Only a Meraki license is required.

• 500Mbps of VPN throughput. vMX is available in three VPN throughput-based sizes to suit a wide range of use cases: small, medium, and large.

• Easy deployments, with support for private cloud environments through Cisco NFVIS, all managed from the Meraki dashboard.

Figure 2-28 illustrates Meraki vMX functioning like a VPN concentrator.


Figure 2-28 Meraki vMX functioning like a VPN concentrator
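
Because a vMX is managed like any other MX in the dashboard, it can also be monitored programmatically through the Meraki Dashboard API. The sketch below lists an organization's networks and reads the site-to-site (Auto VPN) settings of the network that contains the vMX; the API key and the network name are placeholders, and the endpoints assume the public Dashboard API v1.

```python
# Sketch: reading a vMX network's Auto VPN (site-to-site VPN) settings through
# the Meraki Dashboard API v1. The API key and network name are placeholders.
import requests

BASE = "https://api.meraki.com/api/v1"
HEADERS = {"X-Cisco-Meraki-API-Key": "REPLACE_WITH_DASHBOARD_API_KEY"}
VMX_NETWORK_NAME = "aws-vmx-hub"   # hypothetical dashboard network for the vMX

# Find the organization and the network that contains the vMX.
org = requests.get(f"{BASE}/organizations", headers=HEADERS).json()[0]
networks = requests.get(
    f"{BASE}/organizations/{org['id']}/networks", headers=HEADERS
).json()
vmx_net = next(n for n in networks if n["name"] == VMX_NETWORK_NAME)

# Read the site-to-site (Auto VPN) configuration of that network.
vpn = requests.get(
    f"{BASE}/networks/{vmx_net['id']}/appliance/vpn/siteToSiteVpn",
    headers=HEADERS,
).json()
print(f"{VMX_NETWORK_NAME} VPN mode: {vpn['mode']}")   # 'none', 'spoke', or 'hub'
for subnet in vpn.get("subnets", []):
    print(f"  advertised subnet: {subnet['localSubnet']} (useVpn={subnet['useVpn']})")
```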

Cisco Meraki MX

The Cisco Meraki MX appliances are multifunctional security and SD-WAN enterprise appliances with a wide set of capabilities to address multiple use cases from a single all-in-one device. Organizations of all sizes and across all industries rely on the MX to deliver secure connectivity to hub locations or multicloud environments, as well as application quality of experience (QoE), through advanced analytics with machine learning.

The MX is 100% cloud-managed, so installation and remote management are truly zero touch, making it ideal for distributed branches, campuses, and data center locations. Natively integrated with a comprehensive suite of secure network and assurance capabilities, the MX eliminates the need for multiple appliances. These capabilities include application-based firewalling, content filtering, web search filtering, SNORT-based intrusion detection and prevention, Cisco Advanced Malware Protection (AMP), site-to-site Auto VPN, client VPN, WAN and cellular failover, dynamic path selection, web application health, VoIP health, and more.

SD-WAN can be easily extended to deliver optimized access to resources in public and private cloud environments with virtual MX appliances (vMX). Public clouds supported with vMX include Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform, and Alibaba Cloud; private clouds are supported through Cisco Network Function Virtualization Infrastructure Software (NFVIS).

Cisco Enterprise Network Function Virtualization Infrastructure Software (Cisco Enterprise NFVIS) is Linux-based infrastructure software designed to help service providers and enterprises dynamically deploy virtualized network functions, such as a virtual router, firewall, and WAN acceleration, on a supported Cisco device. There is no need to add a physical device for every network function, and you can use automated provisioning and centralized management to eliminate costly truck rolls.

Cisco Enterprise NFVIS provides a Linux-based virtualization layer to the Cisco Enterprise Network Functions Virtualization (ENFV) solution. Figure 2-26 illustrates the Cisco SD-WAN extensions.


Figure 2-26 Cisco SD-WAN extensions

Some of the many highlights of the Meraki MX are listed below:

• Advanced quality of experience (QoE) analytics

  • End-to-end health of web applications at a glance across the LAN, WAN, and application server.

  • Machine-learned smart application thresholds autonomously applied to identify true anomalies based on past behavioral patterns.

  • Monitoring of the health of all MX WAN links, including cellular, across an entire organization.

  • Detailed hop-by-hop VoIP performance analysis across all uplinks.

• Agile on-premises and cloud security capabilities informed by Cisco Talos

  • Next-gen Layer 7 firewall for identity-based security policies and application management.

  • Advanced Malware Protection with sandboxing; file reputation-based protection engine powered by Cisco AMP.

  • Intrusion prevention with PCI-compliant IPS sensor using industry-leading SNORT signature database from Cisco.

  • Granular and automatically updated category-based content filtering.

  • SSL decryption/inspection, data loss prevention (DLP), cloud access security broker (CASB), SaaS tenant restrictions, granular app control, and file type control.

• Branch gateway services

  • Built-in DHCP, NAT, QoS, and VLAN management services.

  • Web caching accelerates frequently accessed content.

  • Load balancing combines multiple WAN links into a single high-speed interface, with policies for QoS, traffic shaping, and failover.

  • Smart connection monitoring provides automatic detection of Layer 2 and Layer 3 outages and fast failover, including the option of integrated LTE Advanced or 3G/4G modems.

• Intelligent site-to-site VPN with Cisco SD-WAN powered by Meraki

  • Auto VPN allows automatic VPN route generation using IKE/IKEv2/IPsec setup; runs on physical MX appliances.

  • Virtual instance in public and private clouds.

  • SD-WAN with active-active VPN, policy-based routing, dynamic VPN path selection, and support for application-layer performance profiles to ensure prioritization of application types that matter.

  • Interoperation with all IPsec VPN devices and services.

  • Automated MPLS to VPN failover within seconds of a connection failure.

  • L2TP/IPsec remote client VPN included at no extra cost with support for native Windows, macOS, iPad, and Android clients.

  • Support for Cisco AnyConnect remote client VPN (AnyConnect license required).

• Industry-leading cloud management

  • Unified firewall, switching, wireless LAN, and mobile device management through an intuitive web-based dashboard.

  • Template-based settings scale easily from small deployments to tens of thousands of devices.

  • Role-based administration, configurable email alerts for a variety of important events, and easily auditable change logs.

  • Summary reports with user, device, and application usage details archived in the cloud.
