EC2 commonly runs Linux. Amazon provides a specialized distribution, Amazon Linux, shipped as an AMI — an Amazon Machine Image.
EC2 instances have many purchasing options; I recommend you start with On-Demand Instances.
- Elastic Block Store, EBS = Storage
- Without an AWS EC2 Key Pair, you can still get a shell on your EC2 instance (e.g. via EC2 Instance Connect or SSM Session Manager)
- Reserved Instances
- Savings Plans
- Spot Instances: Amazon can reclaim your instance at any time, with only a short interruption notice to finish the work at hand. It takes extra engineering to use Spot Instances well. Read the Intercom post in the Further links section.
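One way to handle Spot reclaims is to poll the instance-metadata endpoint for an interruption notice from inside the instance. A minimal Python sketch, assuming the standard metadata path; `interruption_pending` and the injected `fetch` parameter are illustrative names, not an AWS API:

```python
# Sketch: AWS posts a Spot interruption notice to the instance
# metadata service shortly before reclaiming the instance. Polling
# this endpoint tells the workload when to checkpoint and shut down.
from urllib.request import urlopen

# Real instance-metadata path for Spot interruption notices.
SPOT_ACTION_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

def interruption_pending(fetch=urlopen) -> bool:
    """True if AWS has scheduled this Spot Instance for reclaim.

    `fetch` is injected so the check can be exercised off-instance.
    """
    try:
        with fetch(SPOT_ACTION_URL, timeout=1) as resp:
            return resp.status == 200  # notice present: wrap up now
    except OSError:
        return False  # 404 / endpoint unreachable: nothing scheduled
```

A CI or batch job could call this between work units and checkpoint as soon as it returns `True`.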
- Security of EC2: Security Group, VPC and Network ACL
- Use Security Group to protect your instances
- Use VPC & Network ACL to protect the network of your instances
- EC2 Auto Scaling can replace unhealthy instances
- EC2 Network throttling
- CloudWatch is where your Metrics are
- The Noisy Neighbors problem
- For a Ruby on Rails application, choose m5
- For a database, choose r6g
- Intel Xeon Processor x86_64
- Intel Xeon Platinum 8175M
- AMD EPYC Processor x86_64 arch (10% cost saving)
- AWS Graviton2 Processor (64-bit ARM arch) (45% cost saving)
NVIDIA GPUs: K80, M60, V100
Xilinx UltraScale+ VU9P FPGA
Fixed performance: M5, C5, R5.
Burstable performance: T3
// Example: t3.large, c5d.2xlarge — [type].[size], where [type] = [Family][Generation](Additional feature)
A larger generation number means newer: C5 is newer than C4.
Suffixes (as in R5a, M5a):
- a: AMD CPU
- n: enhanced networking
- e: extra capacity (memory or storage)
- g: Graviton2 processor
- d: directly attached instance storage (local NVMe)
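The naming scheme above can be captured in a small parser. A sketch in Python; the regex and function name are mine, not anything AWS ships:

```python
import re

# Splits an EC2 instance type like "c5d.2xlarge" into
# [Family][Generation][Additional features].[size], matching the
# pattern described above (a=AMD, n=networking, e=extra capacity,
# g=Graviton2, d=local instance storage).
TYPE_RE = re.compile(
    r"^(?P<family>[a-z]+)(?P<generation>\d+)(?P<features>[a-z-]*)"
    r"\.(?P<size>[a-z0-9.]+)$"
)

def parse_instance_type(name: str) -> dict:
    """Return family / generation / features / size for a type name."""
    m = TYPE_RE.match(name)
    if not m:
        raise ValueError(f"not an instance type: {name!r}")
    return m.groupdict()
```

For example, `parse_instance_type("c5d.2xlarge")` yields family `c`, generation `5`, features `d`, size `2xlarge`.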
- T, M, A: General usage (T3 T3a M5 M5a A1)
- C: For Computation (C5 C5n)
- R, X, Z: For Memory (R5, R5e, X1, X1e, Z1d)
- P, G, F: For GPU (P3 G3 F1)
- I, D, H: For high-speed storage (I3 I3en D2 H1)
T Turbo (Burstable)
M for Most Scenarios (General)
C for Compute
R for RAM
X for Extra RAM
H for HDD
D for Dense Storage
I for I/O
HS for High-speed Storage
G for GPU
P for Premium
F for FPGA
A for ARM
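The family-letter mnemonics above boil down to a lookup table. A sketch; the purpose strings paraphrase the notes and the helper name is mine:

```python
import re

# Family letter -> rough purpose, per the mnemonics in these notes.
FAMILY_PURPOSE = {
    "t": "burstable general purpose",
    "m": "general purpose",
    "a": "ARM general purpose",
    "c": "compute optimized",
    "r": "memory optimized",
    "x": "extra memory",
    "z": "high-frequency compute + memory",
    "p": "GPU",
    "g": "GPU",
    "f": "FPGA",
    "i": "high-speed (I/O) storage",
    "d": "dense storage",
    "h": "HDD storage",
}

def family_purpose(instance_type: str) -> str:
    """Map e.g. "r5.large" -> "memory optimized"."""
    family = re.match(r"[a-z]+", instance_type).group()
    return FAMILY_PURPOSE.get(family, "unknown")
```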
- Each type has SSD options
- For CI, use T3 — or Spot Instances, if your CI task can finish in 2-3 minutes.
- For ARM, use the m6g instances.
- P2 — GPU
A1 only has medium, large, xlarge, 2xlarge, and 4xlarge.
Sizes grow by a factor of 2 (vCPU, memory, price/hour).
vCPU usually ranges from 2 to 96.
Memory usually ranges from 8 GB to 384 GB.
Network bandwidth ranges up to 25 Gbps.
Not all instance types support all sizes.
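The factor-of-2 scaling can be illustrated with a few lines of Python. The 2 vCPU / 8 GB base matches m5.large, but the numbers here are illustrative, not a published spec sheet:

```python
# Within one family, each size step roughly doubles vCPU, memory,
# and hourly price. Base values assume a 2 vCPU / 8 GB "large".
SIZES = ["large", "xlarge", "2xlarge", "4xlarge", "8xlarge"]

def specs_for(size: str, base_vcpu: int = 2, base_mem_gb: int = 8):
    """Return (vCPU, memory in GB) for a size, doubling per step."""
    step = SIZES.index(size)
    return base_vcpu * 2**step, base_mem_gb * 2**step
```

So a `2xlarge` in such a family would come out at 8 vCPU / 32 GB.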
For real production use, memory >= 2 GB (a small) is a good start, and you want an instance type that can scale. In practice, large is a more reasonable starting point, given how bloated software is nowadays.
For using EC2 on CI, check this GitHub issue: "Which hardware are you using to run your CI?"
- T2 has a Free Tier option
- T3 costs more
- T3 uses a newer CPU (Intel Skylake)