Migrating from AWS EKS to Linode Kubernetes Engine (LKE)
This guide walks you through the process of migrating an application from Amazon Web Services (AWS) Elastic Kubernetes Service (EKS) to Linode Kubernetes Engine (LKE). An example REST API service is used to demonstrate the steps for migrating an application.
Before You Begin

- Follow our Getting Started guide, and create an Akamai Cloud account if you do not already have one.
- Create a personal access token using the instructions in our Manage personal access tokens guide.
- Install the Linode CLI using the instructions in the Install and configure the CLI guide.
- Follow the steps in the Install kubectl section of the Getting started with LKE guide to install and configure `kubectl`.
- Ensure that you have access to your AWS account with sufficient permissions to work with EKS clusters. The AWS CLI and `eksctl` must also be installed and configured.
- Install `jq`, a lightweight command line JSON processor.
- Install `yq`, a YAML processor for the command line.
- Install ripgrep (`rg`), an alternative to `grep` written in Rust.

Note: This guide is written for a non-root user. Commands that require elevated privileges are prefixed with `sudo`. If you're not familiar with the `sudo` command, see the Users and Groups guide.

Connect kubectl to Your EKS Cluster
Connect `kubectl` to the EKS cluster that you want to migrate. Skip this step if your local machine is already using a kubeconfig file with your EKS cluster information.

In the AWS console, navigate to the EKS service and find the name of your EKS cluster. In this example, the cluster name is `wonderful-hideout-1734286097`. You also need to know the AWS region where your cluster resides; for this example, the region is `us-west-1`.

Use the AWS CLI to update your local kubeconfig file, replacing AWS_REGION and EKS_CLUSTER_NAME with your actual EKS cluster information:

```
aws eks update-kubeconfig --region AWS_REGION --name EKS_CLUSTER_NAME
```

```
Added new context arn:aws:eks:AWS_REGION:AWS_ACCOUNT_ID:cluster/EKS_CLUSTER_NAME to /home/user/.kube/config
```

If your kubeconfig file includes multiple clusters, use the following command to list the available contexts:

```
kubectl config get-contexts
```

Identify the context name for your EKS cluster, and set it as the active context. Replace the value with that of your cluster:

```
kubectl config use-context EKS_CLUSTER_CONTEXT_NAME
```
Assess Your EKS Cluster

Verify the EKS cluster is operational with `kubectl`:

```
kubectl cluster-info
```

```
Kubernetes control plane is running at EKS_CONTROL_PLANE_URL
CoreDNS is running at EKS_DNS_URL

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```

If you wish to see more detailed cluster information, run the following command:

```
kubectl cluster-info dump
```
Review the Node Group

In AWS EKS, a node group defines the type of worker nodes in your cluster. Because a production cluster may have multiple node groups with different node types, the node group configuration can be a key factor in the migration process.

While Kubernetes does not have a native concept of a node group, all the nodes within a given EKS node group share the same configuration. Therefore, inspecting a single node provides all the information needed for migration.

List the nodes in your cluster:

```
kubectl get nodes
```

```
NAME              STATUS   ROLES    AGE   VERSION
EKS_NODE_1_NAME   Ready    <none>   24m   v1.31.5-eks-5d632ec
EKS_NODE_2_NAME   Ready    <none>   24m   v1.31.5-eks-5d632ec
```

Run the following command to retrieve detailed information about the first node in YAML format:

```
kubectl get node \
  $(kubectl get nodes -o jsonpath='{.items[0].metadata.name}') -o yaml
```

You can pipe the previous command's output through filters to extract specific fields (e.g. allocatable CPU and memory):

```
kubectl get node \
  $(kubectl get nodes -o jsonpath='{.items[0].metadata.name}') -o yaml \
  | yq '.status.allocatable | {"cpu": .cpu, "memory": .memory}' \
  | awk -F': ' ' /cpu/ {cpu=$2} /memory/ {mem=$2} \
    END {printf "cpu: %s\nmemory: %.2f Gi\n", cpu, mem / 1024 / 1024}'
```

```
cpu: 1930m
memory: 6.89 Gi
```
Verify the Application Is Running
To illustrate an application running in a production environment, a REST API service written in Go is deployed to the example EKS cluster. If you already have one or more applications running on your EKS cluster, you may skip this section.

The REST API service lets you add a quote (a string) to a stored list or retrieve that list. Deploying the application to the cluster creates a Kubernetes Deployment, Service, and HorizontalPodAutoscaler.

Follow the steps below to install, configure, and test the REST API service application on your EKS cluster.
Use a command line text editor such as `nano` to create a Kubernetes manifest file (`manifest.yaml`) that defines the application and its supporting resources:

```
nano manifest.yaml
```

Give the file the following contents:

- File: manifest.yaml

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-quote
  labels:
    app: go-quote
spec:
  replicas: 1
  selector:
    matchLabels:
      app: go-quote
  template:
    metadata:
      labels:
        app: go-quote
    spec:
      containers:
        - name: go-quote
          image: linodedocs/go-quote-service:latest
          ports:
            - containerPort: 7777
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "250m"
              memory: "256Mi"
---
apiVersion: v1
kind: Service
metadata:
  name: go-quote-service
  labels:
    app: go-quote
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 7777
  selector:
    app: go-quote
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: go-quote-hpa
  labels:
    app: go-quote
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: go-quote
  minReplicas: 1
  maxReplicas: 1
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```

When done, press CTRL+X, followed by Y then Enter to save the file and exit `nano`.

Apply the manifest to deploy the application on your EKS cluster:

```
kubectl apply -f manifest.yaml
```

```
deployment.apps/go-quote created
service/go-quote-service created
horizontalpodautoscaler.autoscaling/go-quote-hpa created
```
With the application deployed, run the following `kubectl` command to verify that the deployment is available:

```
kubectl get deploy
```

```
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
go-quote   1/1     1            1           5m7s
```

Run the following `kubectl` command to retrieve the external IP address assigned to the service:

```
kubectl get svc
```

The service is a LoadBalancer, which means it can be accessed from outside the cluster:

```
NAME               TYPE           CLUSTER-IP            EXTERNAL-IP                  PORT(S)        AGE
go-quote-service   LoadBalancer   GO_QUOTE_CLUSTER_IP   GO_QUOTE_EXTERNAL_HOSTNAME   80:30570/TCP   5m27s
kubernetes         ClusterIP      K8S_CLUSTER_IP        <none>
```

Test the service by adding a quote, replacing GO_QUOTE_EXTERNAL_HOSTNAME with the actual EXTERNAL-IP of your LoadBalancer:

```
curl -X POST \
  --data '{"quote":"This is my first quote."}' \
  GO_QUOTE_EXTERNAL_HOSTNAME/quotes
```

Add a second quote:

```
curl -X POST \
  --data '{"quote":"This is my second quote."}' \
  GO_QUOTE_EXTERNAL_HOSTNAME/quotes
```

Now retrieve the stored quotes:

```
curl GO_QUOTE_EXTERNAL_HOSTNAME/quotes
```

This should yield the following result:

```
["This is my first quote.","This is my second quote."]
```
After verifying that your EKS cluster is fully operational and running a live service, you are ready for migration.
Provision an LKE Cluster
When migrating from EKS to LKE, provision an LKE cluster with similar resources to run the same workloads. While there are several ways to create a Kubernetes cluster on Akamai Cloud, this guide uses the Linode CLI to provision resources.
See our LKE documentation for instructions on how to provision a cluster using Cloud Manager.
Use the Linode CLI (`linode-cli`) to see available Kubernetes versions:

```
linode-cli lke versions-list
```

```
┌──────┐
│ id   │
├──────┤
│ 1.32 │
├──────┤
│ 1.31 │
└──────┘
```
Unless specific requirements dictate otherwise, it’s generally recommended to provision the latest version of Kubernetes.
Determine the type of Linode to provision. The example EKS cluster uses nodes with two CPUs and 8 GB of memory. To find Linode types with a similar configuration, run the following command with the Linode CLI:

```
linode-cli linodes types --vcpus 2 --json --pretty \
  | jq '.[] | {class, id, vcpus, memory, price}'
```

```
{ "class": "standard", "id": "g6-standard-2", "vcpus": 2, "memory": 4096, "price": { ... } }
{ "class": "highmem", "id": "g7-highmem-1", "vcpus": 2, "memory": 24576, "price": { ... } }
{ "class": "highmem", "id": "g7-highmem-2", "vcpus": 2, "memory": 49152, "price": { ... } }
{ "class": "dedicated", "id": "g6-dedicated-2", "vcpus": 2, "memory": 4096, "price": { ... } }
{ "class": "premium", "id": "g7-premium-2", "vcpus": 2, "memory": 4096, "price": { ... } }
```
See Akamai Cloud: Pricing for more detailed pricing information.
The examples in this guide use the `g6-standard-2` Linode, which features two CPU cores and 4 GB of memory. Run the following command to display detailed information in JSON for this Linode plan:

```
linode-cli linodes types --label "Linode 4GB" --json --pretty
```

```
[
  {
    "addons": { ... },
    "class": "standard",
    "disk": 81920,
    "gpus": 0,
    "id": "g6-standard-2",
    "label": "Linode 4GB",
    "memory": 4096,
    "network_out": 4000,
    "price": { ... },
    "region_prices": [ ... ],
    "successor": null,
    "transfer": 4000,
    "vcpus": 2
  }
]
```
View available regions with the `regions list` command:

```
linode-cli regions list
```

After selecting a Kubernetes version and Linode type, use the following command to create a cluster named `eks-to-lke` in the `us-mia` (Miami, FL) region with a node pool that auto-scales between one and three nodes. Replace `eks-to-lke` and `us-mia` with a cluster label and region of your choosing, respectively:

```
linode-cli lke cluster-create \
  --label eks-to-lke \
  --k8s_version 1.32 \
  --region us-mia \
  --node_pools '[{
    "type": "g6-standard-2",
    "count": 1,
    "autoscaler": {
      "enabled": true,
      "min": 1,
      "max": 3
    }
  }]'
```

After creating your cluster successfully, you should see output similar to the following:

```
Using default values: {}; use the --no-defaults flag to disable defaults

┌────────┬────────────┬────────┬─────────────┬──────────────────────────┬──────┐
│ id     │ label      │ region │ k8s_version │ control_plane.high_avai… │ tier │
├────────┼────────────┼────────┼─────────────┼──────────────────────────┼──────┤
│ 343326 │ eks-to-lke │ us-mia │ 1.32        │ False                    │      │
└────────┴────────────┴────────┴─────────────┴──────────────────────────┴──────┘
```
Access the Kubernetes Cluster
To access your cluster, fetch the cluster credentials as a kubeconfig file. Your cluster's kubeconfig can also be downloaded via the Cloud Manager.

Use the following command to retrieve the cluster's ID:

```
CLUSTER_ID=$(linode-cli lke clusters-list --json | jq -r \
  '.[] | select(.label == "eks-to-lke") | .id')
```

Retrieve the kubeconfig file and save it to `~/.kube/lke-config`:

```
linode-cli lke kubeconfig-view --json "$CLUSTER_ID" | \
  jq -r '.[0].kubeconfig' | \
  base64 --decode > ~/.kube/lke-config
```

After saving the kubeconfig, access your cluster by using `kubectl` and specifying the file:

```
kubectl get nodes --kubeconfig ~/.kube/lke-config
```

```
NAME            STATUS   ROLES    AGE   VERSION
LKE_NODE_NAME   Ready    <none>   85s   v1.32.0
```
One node is ready, and it uses Kubernetes version 1.32.
Next, verify the cluster’s health and readiness for application deployment.
```
kubectl cluster-info --kubeconfig ~/.kube/lke-config
```

```
Kubernetes control plane is running at LKE_CONTROL_PLANE_URL
KubeDNS is running at LKE_DNS_URL

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
Migrate From AWS EKS to LKE
In some cases, migrating Kubernetes applications requires an incremental approach, as moving large interconnected systems all at once isn’t always practical.
For example, if Service A interacts with Services B, C, and D, you may be able to migrate Services A and B together to LKE, where they can communicate efficiently. However, Services C and D may still rely on AWS infrastructure or native services, making their migration more complex.
In this scenario, you may need to temporarily run Service A in both AWS EKS and LKE. Service A on LKE would interact with Service B on LKE, while the version of Service A on AWS EKS continues communicating with Services C and D. This setup minimizes disruptions while you work through the complexities of migrating the remaining services to LKE. Although cross-cloud communication may incur higher latency and costs, this approach helps maintain functionality during the transition.
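As one possible way to handle this interim state, the copy of Service A running on LKE can reach a dependency that still lives in AWS through a Service of type ExternalName that resolves to the AWS-hosted endpoint. The following is a hedged sketch, not part of the example application; the service name `service-c` and the external hostname are hypothetical placeholders:

```
# Hypothetical sketch: expose an AWS-hosted dependency inside the LKE cluster
# under a stable in-cluster name, so workloads on LKE can keep calling it
# until it is migrated.
kubectl apply --kubeconfig ~/.kube/lke-config -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: service-c                       # in-cluster name used by workloads on LKE
spec:
  type: ExternalName
  externalName: service-c.example.com   # public AWS endpoint (placeholder)
  ports:
    - port: 80
EOF
```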
This guide covers the key steps required to migrate the example application from EKS to LKE.
Assess Current Workloads and Dependencies in AWS EKS
Ensure that `kubectl` uses the original kubeconfig file with the EKS cluster context. You may need to re-set your EKS cluster's kubeconfig file path in your `$KUBECONFIG` environment variable.
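For example, if your EKS context lives in the default kubeconfig file, pointing `kubectl` back at it might look like the following; adjust the path to wherever your EKS kubeconfig is stored:

```
# Point kubectl at the kubeconfig that contains the EKS context, then select it.
export KUBECONFIG=~/.kube/config
kubectl config use-context EKS_CLUSTER_CONTEXT_NAME
```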
List the resources running in the EKS cluster, replacing EKS_CLUSTER_CONTEXT_NAME with your cluster's context name:

```
kubectl get all --context EKS_CLUSTER_CONTEXT_NAME
```
The output shows the running pod and the one active replica set created by the deployment:

```
NAME                      READY   STATUS    RESTARTS   AGE
pod/go-quote-POD_SUFFIX   1/1     Running   0          170m

NAME                       TYPE           CLUSTER-IP            EXTERNAL-IP                  PORT(S)        AGE
service/go-quote-service   LoadBalancer   GO_QUOTE_CLUSTER_IP   GO_QUOTE_EXTERNAL_HOSTNAME   80:30570/TCP   170m
service/kubernetes         ClusterIP      K8S_CLUSTER_IP        <none>                       443/TCP        3h30m

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/go-quote   1/1     1            1           170m

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/go-quote-REPLICASET_SUFFIX   1         1         1       170m

NAME                                               REFERENCE             TARGETS              MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/go-quote-hpa   Deployment/go-quote   cpu: <unknown>/50%   1         1         1          170m
```
By default, `kubectl get all` only displays resources in the `default` namespace. If your workloads are deployed in a different namespace (recommended for production clusters), use:

```
kubectl get all --namespace=YOUR_NAMESPACE
```
Export Kubernetes Manifests of AWS EKS
There are multiple ways to define the resources you want to deploy to Kubernetes, including YAML manifests, Kustomize configurations, and Helm charts. For consistency and version control, store these in a Git repository and deploy them via your CI/CD pipeline. This guide uses plain YAML manifests as the example.
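If your manifests are not already under version control, one way to reconstruct them is to export the live definitions from the EKS cluster with `kubectl` and strip the cluster-managed fields before committing them. The following is a minimal sketch for the example application; the list of fields removed is not exhaustive:

```
# Export the live Deployment and Service from EKS as YAML, removing
# cluster-managed metadata and status so the files can be re-applied elsewhere.
kubectl get deployment go-quote -o yaml --context EKS_CLUSTER_CONTEXT_NAME \
  | yq 'del(.status) | del(.metadata.uid) | del(.metadata.resourceVersion) | del(.metadata.creationTimestamp)' \
  > go-quote-deployment.yaml

kubectl get service go-quote-service -o yaml --context EKS_CLUSTER_CONTEXT_NAME \
  | yq 'del(.status) | del(.metadata.uid) | del(.metadata.resourceVersion) | del(.metadata.creationTimestamp) | del(.spec.clusterIP) | del(.spec.clusterIPs)' \
  > go-quote-service.yaml
```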
Update Manifests for Compatibility With LKE
You may need to update your manifests to account for differences between EKS and LKE. For example, your configuration on EKS may use the AWS Load Balancer Controller, which manages AWS Application Load Balancers (ALBs) as Kubernetes Ingress resources. As an alternative to AWS ALBs, you can deploy a dedicated NGINX Ingress on LKE.
The deployment image may point to AWS Elastic Container Registry (ECR). Modify this to point to an alternative registry. For example, the Deployment section of your application manifest may look like this:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      containers:
        - name: go-quote
          image: 123456789.dkr.ecr.us-west-2.amazonaws.com/go-quote:latest
          ...
```

The container image, pointing to AWS ECR, has the following format:

```
AWS_ACCOUNT_ID.dkr.ecr.REGION.amazonaws.com/REPOSITORY_NAME:TAG
```
To migrate away from AWS ECR, upload the container image to another registry service (e.g. Docker Hub) or Set Up a Docker Registry with LKE and Object Storage. Then, modify your Kubernetes manifest to point to the new location for your image.
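For example, re-homing the example application's image from ECR to Docker Hub might look like the following. The account ID, region, and Docker Hub username are placeholders, and this assumes you are already authenticated to both registries:

```
# Pull the image from ECR, retag it for Docker Hub, and push it.
docker pull AWS_ACCOUNT_ID.dkr.ecr.REGION.amazonaws.com/go-quote:latest
docker tag AWS_ACCOUNT_ID.dkr.ecr.REGION.amazonaws.com/go-quote:latest DOCKER_HUB_USERNAME/go-quote:latest
docker push DOCKER_HUB_USERNAME/go-quote:latest
```

Afterward, update the image field in your Deployment manifest to reference the new location (for example, DOCKER_HUB_USERNAME/go-quote:latest).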
Transfer Persistent Data
If the workload depends on persistent data in AWS S3 or a database, transfer that data to an equivalent service (such as Linode Object Storage or a managed database) or otherwise make it accessible to workloads running on LKE.
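For example, object data in S3 can be copied to a Linode Object Storage bucket with a tool such as rclone. This is a minimal sketch that assumes you have already configured rclone remotes named `aws` and `linode` and created the destination bucket; the bucket names are placeholders:

```
# Copy every object from the S3 bucket to the Linode Object Storage bucket.
rclone sync aws:SOURCE_BUCKET_NAME linode:DESTINATION_BUCKET_NAME --progress
```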
Deploy Workloads to LKE
Deploy your application to the newly created LKE cluster.
Verify the current `kubectl` context to ensure you are pointing to the kubeconfig file for the LKE cluster. This may require setting your LKE kubeconfig file's path in your `$KUBECONFIG` environment variable.

```
kubectl config current-context --kubeconfig ~/.kube/lke-config
```

```
LKE_CLUSTER_CONTEXT_NAME
```
Apply the same `manifest.yaml` file used to deploy your application to EKS, but this time on your LKE cluster:

```
kubectl apply --kubeconfig ~/.kube/lke-config -f manifest.yaml
```

```
deployment.apps/go-quote created
service/go-quote-service created
horizontalpodautoscaler.autoscaling/go-quote-hpa created
```
Validate Application Functionality
Verify that the deployment and the service were created successfully. The steps below validate and test the functionality of the example REST API service.
With the application deployed, run the following `kubectl` command to verify that the deployment is available:

```
kubectl get deploy --kubeconfig ~/.kube/lke-config
```

```
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
go-quote   1/1     1            1           108s
```

Run the following `kubectl` command to retrieve the external IP address assigned to the service:

```
kubectl get service --kubeconfig ~/.kube/lke-config
```

The service exposes a public IP address for the REST API service:

```
NAME               TYPE           CLUSTER-IP            EXTERNAL-IP            PORT(S)        AGE
go-quote-service   LoadBalancer   GO_QUOTE_CLUSTER_IP   GO_QUOTE_EXTERNAL_IP   80:30407/TCP   117s
kubernetes         ClusterIP      K8S_CLUSTER_IP        <none>                 443/TCP        157m
```

Test the service by adding a quote, replacing GO_QUOTE_EXTERNAL_IP with the actual external IP address of your load balancer:

```
curl -X POST \
  --data '{"quote":"This is my first quote for LKE."}' \
  GO_QUOTE_EXTERNAL_IP/quotes
```

Add a second quote:

```
curl -X POST \
  --data '{"quote":"This is my second quote for LKE."}' \
  GO_QUOTE_EXTERNAL_IP/quotes
```

Now retrieve the stored quotes:

```
curl GO_QUOTE_EXTERNAL_IP/quotes
```

```
["This is my first quote for LKE.","This is my second quote for LKE."]
```
The example REST API service is up and running on LKE.
Depending on your application, point any services dependent on the EKS cluster deployment to the LKE cluster deployment instead. After testing and verifying your application is running on LKE, you can terminate your EKS cluster.
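If the EKS cluster was originally created with eksctl, decommissioning it might look like the following; only run this once you are certain nothing else depends on the cluster. Clusters created another way can be deleted through the AWS console or the AWS CLI instead:

```
# Permanently delete the EKS cluster and its managed node groups.
eksctl delete cluster --name EKS_CLUSTER_NAME --region AWS_REGION
```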
Additional Considerations and Concerns
When migrating from AWS EKS to LKE, there are several important factors to keep in mind, including cost management, data persistence, networking, security, and alternative solutions for cloud-specific services.
Cost Management
Cost reduction is one reason an organization might migrate from AWS EKS to LKE, and the compute cost of running Kubernetes is often a primary driver. Use `kubectl` to find the instance type and capacity type of your AWS EKS nodes:
```
kubectl get node EKS_NODE_1_NAME -o yaml \
  | yq .metadata.labels \
  | rg 'node.kubernetes.io/instance-type|capacityType'
```

```
eks.amazonaws.com/capacityType: EKS_CAPACITY_TYPE
node.kubernetes.io/instance-type: EKS_INSTANCE_TYPE
```
Reference the AWS pricing page for EC2 On-Demand Instances to find the cost for your EKS instance. Compare this with the cost of a Linode instance with comparable resources by examining our pricing page.
Additionally, applications with substantial data egress can be significantly impacted by egress costs. Consider the typical networking usage of applications running on your EKS cluster, and determine your data transfer costs with AWS. Compare this with data transfer limits allocated to your LKE nodes.
Data Persistence and Storage
Cloud-native workloads are ephemeral. As a container orchestration platform, Kubernetes is designed to ensure your pods are up and running, with autoscaling to handle demand. However, it’s important to handle persistent data carefully. If you are in a position to impose a large maintenance window with system downtime, migrating workloads can be a simpler task.
Should you need to perform a live migration with minimal downtime, you must develop proper migration procedures and test them in a non-production environment. This may include:
- Parallel storage and databases on both clouds
- Cross-cloud replication between storage and databases
- Double writes at the application level
- Failover reads at the application level
- Switching the AWS storage and databases to read-only
- Storage and database indirection at the configuration or DNS level (see the sketch below)
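As a simple illustration of configuration-level indirection, the database endpoint can live in a ConfigMap that the application reads, so cutting over from the AWS-hosted database to its replacement becomes a configuration change rather than a code change. The ConfigMap name, key, and hostnames below are hypothetical:

```
# Hypothetical sketch: keep the database endpoint in a ConfigMap.
kubectl create configmap go-quote-config \
  --from-literal=DATABASE_HOST=db.aws-example.internal \
  --kubeconfig ~/.kube/lke-config

# At cutover time, update the value and restart the workload to pick it up.
kubectl create configmap go-quote-config \
  --from-literal=DATABASE_HOST=db.linode-example.internal \
  --dry-run=client -o yaml \
  | kubectl apply --kubeconfig ~/.kube/lke-config -f -
kubectl rollout restart deployment/go-quote --kubeconfig ~/.kube/lke-config
```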
Advanced Network Configuration
The AWS network model includes virtual private clouds (VPCs), virtual private networks (VPNs), and different types of load balancers. For LKE, Akamai Cloud provides NodeBalancers, which are equivalent to application load balancers. If you use advanced features of AWS networking, adapting them to Akamai Cloud networking may require significant configuration changes.
For network security, you may need to port AWS security group rules into Kubernetes Network Policies on LKE.
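For example, an inbound security group rule that only allows traffic to the application's port might translate into a NetworkPolicy along the following lines. This is a hedged sketch based on the example application's labels and port, not a drop-in equivalent of any specific security group, and it only takes effect if the cluster's network plugin enforces NetworkPolicies:

```
# Sketch: restrict inbound traffic to the go-quote pods to TCP port 7777.
kubectl apply --kubeconfig ~/.kube/lke-config -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: go-quote-allow-7777
spec:
  podSelector:
    matchLabels:
      app: go-quote
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 7777
EOF
```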
Security and Access Management
AWS EKS integrates AWS Identity and Access Management (IAM) with Kubernetes access. LKE uses standard Kubernetes user and service accounts, as well as Kubernetes role-based access control (RBAC).
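For example, a read-only operator role that might have been expressed through IAM on EKS can be approximated with a service account bound to the built-in `view` ClusterRole; the account and binding names below are placeholders:

```
# Create a service account and grant it read-only access to the default namespace.
kubectl create serviceaccount readonly-operator --kubeconfig ~/.kube/lke-config
kubectl create rolebinding readonly-operator-view \
  --clusterrole=view \
  --serviceaccount=default:readonly-operator \
  --namespace=default \
  --kubeconfig ~/.kube/lke-config
```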
DNS
If you use an independent DNS provider for your application, you must update various DNS records to point to LKE endpoints and NodeBalancers instead of AWS endpoints.
If you use Route53, the AWS DNS service, and plan to migrate away from it, our DNS Manager may be a migration option.
Alternative to AWS Elastic Container Registry (ECR)
LKE doesn’t have its own container registry. To migrate away from AWS ECR, set up a third-party private container registry, such as Docker Hub or GitHub Container Registry.
Alternatively, you can set up your own container registry; see How to Set Up a Docker Registry with LKE and Object Storage for instructions.
Alternative to AWS CloudWatch
AWS uses CloudWatch for Kubernetes cluster observability. With Akamai Cloud, you can install an alternative observability solution on LKE. One example of such a solution is The Observability Stack (TOBS), which includes:
- Kube-Prometheus
- Prometheus
- AlertManager
- Grafana
- Node-Exporter
- Kube-State-Metrics
- Prometheus-Operator
- Promscale
- TimescaleDB
- Postgres-Exporter
- OpenTelemetry-Operator
See the following guides for additional information:
- Migrating From AWS CloudWatch to Prometheus and Grafana on Akamai
- How to Deploy TOBS (The Observability Stack) on LKE
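As a rough idea of what getting started can look like, the following sketch installs the community kube-prometheus-stack Helm chart (Prometheus, Alertmanager, and Grafana) rather than TOBS itself, and assumes Helm is installed locally:

```
# Install a Prometheus + Grafana observability stack into its own namespace on LKE.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace \
  --kubeconfig ~/.kube/lke-config
```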
Alternative to AWS Secrets Manager
The AWS Secrets Manager can be leveraged to provide Kubernetes secrets on EKS. With LKE, you need an alternative solution, such as OpenBao on Akamai Cloud.
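In the interim, or for simpler workloads, secrets can also be created directly as standard Kubernetes Secret objects on LKE; the secret name and key below are placeholders:

```
# Create a generic Secret; values are stored base64-encoded in the cluster.
kubectl create secret generic go-quote-secrets \
  --from-literal=API_KEY=REPLACE_WITH_REAL_VALUE \
  --kubeconfig ~/.kube/lke-config
```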