This reference architecture allows you to deploy MATLAB® Parallel Server™ on Kubernetes® on Amazon® Elastic Kubernetes Service (EKS). After deploying this solution, you can run large-scale parallel computations without having to manage your own Kubernetes control plane. EKS provides elastic, on-demand scaling for worker nodes, allowing MATLAB workloads to grow and shrink automatically based on demand.
- A MATLAB Parallel Server license. You can use either:
  - A MATLAB Parallel Server license configured to use online licensing for MATLAB. For information on how to configure your license for cloud use, see Configure MATLAB Parallel Server Licensing for Cloud Platforms.
  - A network license manager for MATLAB hosting sufficient MATLAB Parallel Server licenses for your cluster. MathWorks® provides a reference architecture to deploy a suitable Network License Manager for MATLAB on Amazon Web Services, or you can use an existing license manager.
- MATLAB and Parallel Computing Toolbox™ on your client machine.
- An AWS® account with required permissions.
- AWS Command Line Interface (AWS CLI) installed on your client machine. For help with installing AWS CLI, see Installing or updating to the latest version of the AWS CLI on the AWS website.
- An existing VPC with two subnets. For more details, refer to the AWS documentation on Amazon EKS networking requirements for VPC and subnets. If you need to create a new VPC, you can refer to the VPC IaC building blocks.
- AWS credentials configured on your client machine. For information on how to configure AWS credentials, see Configuration and credential file settings on the AWS website.
- Helm® package manager version 3.8.0 or later installed on your client machine. For help with installing Helm, see Quickstart Guide on the Helm website.
- `kubectl` command-line tool installed on your client machine and configured to access your Kubernetes cluster. For help with installing `kubectl`, see Install Tools on the Kubernetes website.
- Terraform™ or OpenTofu™. For help with installing Terraform, see Install Terraform in the HashiCorp documentation. For help with installing OpenTofu, see Installing OpenTofu in the OpenTofu documentation.
You are responsible for the cost of the AWS services you use when you create cloud resources using this repository. Resource settings, such as instance type, affect the cost of deployment. For cost estimates, see the pricing pages for each AWS service you use. Prices are subject to change.
These steps show you how to deploy MATLAB Parallel Server on an Amazon Elastic Kubernetes Service (EKS) cluster using Helm and either Terraform or OpenTofu.
- Clone this repository from GitHub® and navigate to the newly created folder.
  ```
  git clone https://github.com/mathworks-ref-arch/matlab-parallel-server-on-eks
  cd matlab-parallel-server-on-eks
  ```

- Install the Helm chart dependencies using this command.

  ```
  helm dependency update ./matlab-parallel-server-on-eks-chart
  ```

- Configure the Terraform variables file, `./terraform/terraform.tfvars`, for your MATLAB Parallel Server cluster. You must manually fill in the required parameters `vpc_id`, `subnet_ids`, and `public_access_cidr_blocks`. Other parameters are optional, and you can customize them based on your requirements. Note that the number of MATLAB workers is computed automatically based on the worker node EC2 instance type and the Terraform `max_worker_nodes` setting.
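As an illustration, a minimal `terraform.tfvars` might look like the sketch below. The VPC ID, subnet IDs, and CIDR block shown are placeholder values you must replace with your own, and the commented optional parameter is illustrative only; check `./terraform/variables.tf` in the repository for the authoritative list of parameters and their defaults.

```hcl
# Required parameters -- replace these placeholder values with your own.
vpc_id                    = "vpc-0123456789abcdef0"
subnet_ids                = ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]
public_access_cidr_blocks = ["203.0.113.0/24"]

# Optional parameters; see ./terraform/variables.tf for those your version supports.
# max_worker_nodes = 4
```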
- Initialize Terraform and deploy the EKS cluster in AWS using these commands.

  ```
  cd ./terraform
  terraform init   # or tofu init
  terraform apply  # or tofu apply
  ```

- After Terraform deploys the EKS cluster, it prints several outputs. In these outputs, note the name of the cluster (`<eks_cluster_name>`) and the name of the Helm values override file (`<helm_values_override_file>`). You need these values for the next steps.

- Configure your local Kubernetes configuration file so that `kubectl` can connect to your EKS cluster and run commands on it.

  ```
  aws eks update-kubeconfig --region <AWS_REGION> --name <eks_cluster_name>
  ```

- Create a namespace to isolate the MATLAB Job Scheduler from other resources on the Kubernetes cluster. Kubernetes uses namespaces to separate groups of resources. To learn more about namespaces, see the Kubernetes documentation for Namespaces.

  ```
  kubectl create namespace mjs
  ```
- (Optional) To customize your MATLAB Parallel Server cluster beyond the default settings, configure the Helm chart parameters in the `./matlab-parallel-server-on-eks-chart/values.yaml` file. For details about these parameters, see Helm Values for MATLAB Parallel Server in Kubernetes.
- Install the Helm chart using this command.

  ```
  cd ..
  helm install matlab-parallel-server-on-eks-chart ./matlab-parallel-server-on-eks-chart -n mjs -f terraform/<helm_values_override_file>
  ```

- Check the status of the MATLAB Job Scheduler pods. When all pods display 1/1 in the READY field, your MATLAB Parallel Server cluster is ready to use. The pods can take a few minutes to become ready.

  ```
  kubectl get pods -n mjs -w
  ```

  Once the mjs-job-manager pod is READY 1/1, press Ctrl+C to exit the watch command.
To connect to your cluster from MATLAB, you need the cluster profile. The cluster profile is a JSON-format file that allows the MATLAB client on your desktop to connect to your MATLAB Job Scheduler cluster. Download the cluster profile using this command.

```
kubectl get secrets mjs-cluster-profile --template="{{.data.profile | base64decode}}" --namespace mjs > profile.json
```

Import the cluster profile into MATLAB. For details, see Discover Clusters and Use Cluster Profiles. Your cluster is now ready to use from your MATLAB client. You can also share the cluster profile with other MATLAB users who want to connect to the cluster. When users first submit jobs or tasks to the cluster, they must create a username and password.
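Once the profile is imported, connecting from the MATLAB client follows the standard Parallel Computing Toolbox workflow. The sketch below assumes the profile was imported under the name "mjs-cluster"; substitute the actual name shown in your Cluster Profile Manager.

```matlab
% Assumes the downloaded profile was imported as "mjs-cluster" (placeholder name).
c = parcluster("mjs-cluster");

% Open a parallel pool on the cluster and run a simple parfor loop.
pool = parpool(c);
results = zeros(1, 100);
parfor i = 1:100
    results(i) = i^2;
end
delete(pool);
```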
If you are a cluster administrator, you can access all jobs and tasks using the administrator password. This reference architecture stores this password in a Kubernetes secret, mjs-admin-password. To retrieve the administrator password, use this command.

```
kubectl get secret -n mjs mjs-admin-password -o jsonpath='{.data.password}' | base64 --decode
```

Your cluster remains running after you close MATLAB. To delete your cluster, follow the instructions in the Delete Your Cloud Resources section.
You can remove the Terraform stack and all associated resources when you are done with them. Note that you cannot recover resources once they are deleted. After you delete the cloud resources, you cannot use the downloaded profile again. To delete all resources created by this reference architecture, run these commands. The commands can take up to 15 minutes to complete.

```
helm uninstall matlab-parallel-server-on-eks-chart -n mjs
cd ./terraform
terraform destroy  # or tofu destroy
```

This reference architecture contains two main components.
- Terraform or OpenTofu module: This module sets up all the required infrastructure in AWS, including EC2 instances, security groups, IAM roles and policies, networking components, autoscaling groups, and the EKS cluster itself.

- Helm chart: This chart deploys MATLAB Parallel Server on the EKS cluster, including the job manager, workers, and all necessary Kubernetes resources. The Helm chart bundles two primary sub-charts that you can configure using the Helm values file.

  - MATLAB Parallel Server in Kubernetes: Deploys the core MATLAB Parallel Server components. For more information, see the MATLAB Parallel Server on Kubernetes GitHub repository. For details about its architecture, see Architecture and Resources for MATLAB Parallel Server in Kubernetes.

  - Autoscaling: Configures autoscaling for worker nodes. For more information, see Cluster Autoscaler on AWS in the Kubernetes Autoscaler GitHub repository.
Parallel Computing Toolbox and MATLAB Parallel Server software let you solve computationally and data-intensive problems using MATLAB and Simulink on computer clusters, clouds, and grids. Parallel processing constructs such as parallel-for loops and code blocks, distributed arrays, parallel numerical algorithms, and message-passing functions let you implement task-parallel and data-parallel algorithms at a high level in MATLAB. To learn more, see the documentation: Parallel Computing Toolbox and MATLAB Parallel Server.
MATLAB Job Scheduler is a built-in scheduler that ships with MATLAB Parallel Server. The scheduler coordinates the execution of jobs and distributes the tasks for evaluation to the server’s individual MATLAB sessions called workers. For more details, see How Parallel Computing Toolbox Runs a Job.
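As a sketch of this job-and-task workflow, you can submit work to the MATLAB Job Scheduler asynchronously as a batch job rather than through an interactive pool. The cluster profile name "mjs-cluster" below is a placeholder for whatever name you gave your imported profile.

```matlab
% Assumes a cluster profile named "mjs-cluster" has been imported (placeholder).
c = parcluster("mjs-cluster");

% Submit a function for asynchronous evaluation on a worker:
% one output, with a 500x500 random matrix as input.
job = batch(c, @svd, 1, {rand(500)});

% Wait for the scheduler to run the job, then retrieve the output.
wait(job);
out = fetchOutputs(job);
delete(job);
```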
If you have already set up your AWS account, you can deploy this cluster in under 30 minutes. This time estimate applies only to the first deployment; subsequent deployments require less setup time.
To learn about setting quotas, see AWS Service Quotas.
To enable collaboration and avoid issues with local state files, you can store your Terraform state file remotely. Terraform supports several remote backends, including AWS S3. To use AWS S3, configure the settings in ./terraform/backend.tf before running terraform init.
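For example, an S3 backend block in `./terraform/backend.tf` might look like the following. The bucket name, key, and region are placeholder values for your own resources; the bucket must exist before you run `terraform init`.

```hcl
terraform {
  backend "s3" {
    # Placeholder values -- replace with your own S3 bucket and region.
    bucket = "my-terraform-state-bucket"
    key    = "matlab-parallel-server-on-eks/terraform.tfstate"
    region = "us-east-1"

    # Optional: enable state locking with a DynamoDB table.
    # dynamodb_table = "terraform-locks"
  }
}
```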
To manage multiple environments or deployments, you can use Terraform Workspaces. For details see Terraform Workspaces on the HashiCorp website.
You can copy the EBS snapshot for a specific MATLAB version to any AWS region using these steps.
- In the `terraform` folder of this repository, open the `./terraform/locals.tf` file.
- Find the MATLAB release you want to copy, and note the corresponding `snapshot_id`.
- In the AWS EC2 console, go to Snapshots, select Public Snapshots, and search for the `snapshot_id`.
- Select the snapshot checkbox, then from the Actions dropdown, choose Copy snapshot.
- In the window that appears, select your target region from the Destination region dropdown and click Copy.
For more details, see Copy an Amazon EBS snapshot.
To restrict connections to the Kubernetes control plane to a private subnet, modify the cluster section in the Terraform template. Update the VPC configuration of the cluster to enable only private access, as in this example.

```hcl
#--------------------------------------------
# Cluster
#--------------------------------------------
resource "aws_eks_cluster" "eks_cluster" {
  name     = var.cluster_name
  role_arn = aws_iam_role.eks_cluster_role.arn

  vpc_config {
    subnet_ids              = var.subnet_ids
    endpoint_private_access = true

    # Restrict access to the control plane to the private subnet
    endpoint_public_access = false
  }
}
```
Any host or service accessing the cluster must be in the VPC’s private subnet. This includes the environment used to deploy the Helm chart. To access the cluster, you can use a bastion host, an AWS Lambda function, or AWS CloudShell deployed in the VPC.
The host or service accessing the cluster must also belong to the security group that allows access to the control plane node. This ensures that kubectl and helm commands can run successfully.
If you require assistance or have a request for additional features or capabilities, contact MathWorks Technical Support.
Copyright 2026 The MathWorks, Inc.