Storage innovations at AWS re:Invent 2020: AWS announces first SAN for the cloud
At its customer conference AWS re:Invent 2020, AWS announced four storage innovations that offer customers additional storage performance, reliability, and value, among them the first SAN designed for the cloud.
At re:Invent 2020, AWS introduced a new storage service, Amazon EBS io2 Block Express volumes, billed as the first SAN built for the cloud. These virtual drives are designed to deliver four times the performance of standard io2 volumes across all metrics: up to 256,000 IOPS, 4,000 MBps of throughput, and 64 TB of capacity. This level of performance targets demanding workloads such as SAP HANA, Oracle, Microsoft SQL Server, and SAS Analytics databases, and the volumes guarantee a data durability of 99.999 percent. Customers should be able to achieve latencies below one millisecond. As is common with SANs, several io2 Block Express drives can be combined (striped) to achieve even higher performance.
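Provisioning such a volume can be sketched with the AWS SDK for Python (boto3). The helper below only assembles the request parameters for the real `create_volume` API; the Availability Zone, size, and IOPS figures are placeholder values, and the live call is shown commented out as an assumption about how it would be invoked.

```python
# Sketch: building an EC2 CreateVolume request for an io2 Block Express
# volume (Availability Zone and sizes below are placeholder values).

def io2_volume_params(az, size_gib, iops):
    """Assemble the parameter dict for ec2.create_volume()."""
    return {
        "AvailabilityZone": az,
        "VolumeType": "io2",   # Block Express engages on supported instance types
        "Size": size_gib,      # up to 64 TiB for io2 Block Express
        "Iops": iops,          # up to 256,000 IOPS
    }

params = io2_volume_params("eu-central-1a", 16384, 64000)
# With boto3 installed and AWS credentials configured, the call would be:
# import boto3
# volume = boto3.client("ec2").create_volume(**params)
```

Striping several such volumes together for higher aggregate performance would then happen at the operating-system level (e.g., with software RAID), as with a traditional SAN.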
This was made possible by decoupling the compute layer from the storage layer at the hardware level and by rewriting the software for compute, storage, and networking. The new network stack uses the Scalable Reliable Datagram (SRD) protocol, which drastically reduces latency with Block Express.
So far, io2 Block Express is available only as a preview. In the coming months, further SAN features will be added. These include Multi-Attach with I/O fencing, so that customers can securely attach a single drive to multiple instances at the same time. The performance features Fast Snapshot Restore and Elastic Volumes are also to follow, so that the size, type, and performance of an EBS volume can be changed during ongoing operation.
The new EBS gp3 SSD volumes are meant to let customers provision additional IOPS and throughput independently of capacity. These virtual drives deliver a baseline of 3,000 IOPS and 125 MBps, and can optionally be provisioned with up to 16,000 IOPS and 1,000 MBps. With gp2, performance (IOPS and throughput) scales with storage capacity; gp3 decouples the two. Instead of booking unnecessary extra storage capacity just to reach the required IOPS, as with gp2, gp3 puts customers in a position to book more IOPS at constant storage capacity.
“Whoever switches to the new gp3 volumes on EBS can ultimately reach two important goals: higher performance and significant cost savings,” says Michael Hanisch, Head of Technology at AWS in Germany. gp3 volumes are one fifth cheaper per GB than the gp2 drives of the previous generation. Migrating from gp2 to gp3 is easy with the EBS tool Elastic Volumes: customers can change the volume type, IOPS, storage capacity, and throughput of their existing EBS volumes without interrupting their EC2 instances. Users can also create new gp3 volumes themselves and scale their performance via the AWS Management Console, the AWS Command Line Interface (CLI), or the AWS SDKs. gp3 (gp = general purpose) is already available.
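Such an in-place migration can be sketched with boto3's `modify_volume` call. The helper below only assembles the request; the volume ID and performance figures are placeholders, and the live call (commented out) runs while the attached instance stays up.

```python
# Sketch: migrating an existing volume from gp2 to gp3 in place,
# raising IOPS and throughput without touching capacity.

def gp3_migration_params(volume_id, iops=3000, throughput_mbps=125):
    """Assemble the parameter dict for ec2.modify_volume()."""
    return {
        "VolumeId": volume_id,          # placeholder ID used below
        "VolumeType": "gp3",
        "Iops": iops,                   # 3,000 baseline, up to 16,000
        "Throughput": throughput_mbps,  # 125 baseline, up to 1,000 MBps
    }

params = gp3_migration_params("vol-0123456789abcdef0",
                              iops=10000, throughput_mbps=500)
# With boto3 installed and AWS credentials configured:
# import boto3
# boto3.client("ec2").modify_volume(**params)  # no instance interruption
```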
Amazon S3 Intelligent Tiering
Amazon S3 Intelligent-Tiering now includes two new tiers, already available: Archive Access (from 90 days without access) and Deep Archive Access (from 180 days). These allow customers to reduce their long-term storage costs by up to 95 percent, because rarely used objects are automatically moved from the Frequent Access tier into these archive tiers. This automation replaces the applications that customers previously had to develop and operate themselves to get this functionality. Hanisch commented: “Amazon S3 Intelligent-Tiering provides real relief on the customer's side by removing unnecessary work: it automatically finds the cheapest storage class for each object and can archive rarely used data on demand.” The new service supports features such as S3 Inventory and S3 Replication.
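Opting a bucket into the new archive tiers goes through the S3 Intelligent-Tiering configuration API. The sketch below assembles such a configuration as plain data (the bucket name in the commented-out call is a placeholder); the 90- and 180-day thresholds match the tiers described above.

```python
# Sketch: an Intelligent-Tiering configuration moving objects into the
# archive tiers after 90 and 180 days without access.

def archive_tiering_config(config_id="archive-after-90d"):
    """Assemble an IntelligentTieringConfiguration dict for S3."""
    return {
        "Id": config_id,
        "Status": "Enabled",
        "Tierings": [
            {"Days": 90,  "AccessTier": "ARCHIVE_ACCESS"},       # from 90 days
            {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},  # from 180 days
        ],
    }

cfg = archive_tiering_config()
# With boto3 installed and AWS credentials configured:
# import boto3
# boto3.client("s3").put_bucket_intelligent_tiering_configuration(
#     Bucket="example-bucket",  # placeholder bucket name
#     Id=cfg["Id"],
#     IntelligentTieringConfiguration=cfg)
```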
S3 Replication (multi-destination)
The new service Amazon S3 Replication (multi-destination) lets customers replicate data simultaneously to multiple S3 buckets, whether in the same AWS Region or across any number of AWS Regions. This serves the global distribution of (media) content, storage compliance, and data-sharing needs. This service, which replaces customers' own in-house developments, is also already available.
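A multi-destination setup is expressed as one replication rule per target bucket. The sketch below builds such a configuration as plain data; the IAM role ARN and bucket ARNs are placeholders, and the live `put_bucket_replication` call is shown commented out.

```python
# Sketch: an S3 replication configuration fanning out to two destination
# buckets (role ARN and bucket ARNs are placeholder values).

def multi_destination_config(role_arn, destination_arns):
    """Assemble a ReplicationConfiguration with one rule per destination."""
    rules = []
    for i, dest in enumerate(destination_arns):
        rules.append({
            "ID": f"to-destination-{i}",
            "Status": "Enabled",
            "Priority": i,                     # each rule needs a distinct priority
            "Filter": {},                      # empty filter = replicate everything
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": dest},
        })
    return {"Role": role_arn, "Rules": rules}

cfg = multi_destination_config(
    "arn:aws:iam::123456789012:role/replication-role",     # placeholder
    ["arn:aws:s3:::backup-eu", "arn:aws:s3:::backup-us"],  # placeholders
)
# With boto3 installed and AWS credentials configured:
# import boto3
# boto3.client("s3").put_bucket_replication(
#     Bucket="source-bucket", ReplicationConfiguration=cfg)
```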
“In the next three years, more data will be created than in the last 30 years,” said Mai-Lan Thomsen-Bukovec in her presentation of these new services. “Data storage urgently needs to be reinvented.” The new SAN in the cloud is part of this reinvention, because companies now produce and store most of their data in the cloud. The two new tiering services save customers costs automatically, and multi-destination replication facilitates the effective distribution of their data in the new normal.
New D3 instances for Storage
Thomsen-Bukovec also introduced two new EC2 instance families for storage purposes. The background: storage and compute tend to go hand in hand when processing workloads. The D3/D3en instances are based not on SSDs but on hard drives, which allows high sequential read and write performance at low cost. Use cases for D3/D3en include data warehouses, distributed file systems, network file systems, and streaming and data-processing applications. Customers had demanded higher performance: D3 instances deliver up to 30 percent higher processing power and up to 2.5 times better network performance than the D2 instances. D3 is based on Intel's Cascade Lake Xeon CPUs and offers up to 48 TB of storage capacity, 32 vCPUs, 256 GiB of RAM, and 25 Gbps of network bandwidth.
The D3en instances are even more powerful. They offer up to 336 TB of total storage (seven times more than D2), 75 Gbps of network bandwidth (7.5 times more than D2), and up to 6.2 GiB/s of data throughput per disk, twice as much as D2. D3en is said to reduce the cost per terabyte by up to 80 percent compared to D2 instances. This lets customers build petabyte-scale file storage clusters to consolidate their analytical big-data workloads.
New R5b Instances
Thomsen-Bukovec also presented the new R5b instances, which have been optimized for applications with high main-memory requirements and a high level of EBS performance; they deliver the highest EBS performance available on Amazon EC2. Up to 60 Gbps of dedicated bandwidth to EBS drives and up to 260,000 IOPS should make these instances suitable for the most demanding database workloads. R5 instances and EBS have long worked together in many use cases, such as the database workloads behind e-commerce platforms, ERP applications, and health records. Relational databases such as Oracle, SQL Server, or SAP HANA are, however, sometimes challenging with respect to storage: users have to scale up their license entitlements for databases and infrastructure, which is expensive, while compute and memory utilization stays low, which is hardly optimal. According to Bukovec, R5b offers three times the performance of an equally sized R5 instance.
R5b instances also support the new io2 Block Express drives on EBS (see above), with which storage-intensive workloads can be consolidated. Exactly with such volumes, they play to their strengths, according to Hanisch. And further: “Our ‘SAN for the cloud’ provides customers with up to 60 Gbps of dedicated storage bandwidth, high performance, and at the same time the flexibility of EBS.”
Of course, existing EC2 customers whose workloads depend on storage performance can downsize with R5b: to fewer or to smaller instances. This lets them reduce license and infrastructure costs. R5b instances are supported by Amazon RDS for Oracle and RDS for SQL Server; SAP HANA is still missing from this list, but the new instance types are to be certified for SAP HANA as well. These instances are now available in Frankfurt/Main.
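Launching an R5b instance together with an io2 drive can be sketched the same way. The helper below only assembles the `run_instances` request; the AMI ID, instance size, device name, and volume figures are all placeholder assumptions.

```python
# Sketch: request parameters for launching an EBS-optimized R5b instance
# with an attached io2 volume (AMI ID and sizes are placeholders).

def r5b_launch_params(ami_id, instance_type="r5b.4xlarge", iops=64000):
    """Assemble the parameter dict for ec2.run_instances()."""
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
        "EbsOptimized": True,  # dedicated bandwidth to EBS
        "BlockDeviceMappings": [{
            "DeviceName": "/dev/sdf",  # placeholder device name
            "Ebs": {"VolumeType": "io2", "VolumeSize": 4096, "Iops": iops},
        }],
    }

params = r5b_launch_params("ami-0123456789abcdef0")  # placeholder AMI
# With boto3 installed and AWS credentials configured:
# import boto3
# boto3.client("ec2").run_instances(**params)
```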