Agilisium Consulting, an AWS Advanced Consulting Partner with the Amazon Redshift Service Delivery designation, is excited to provide an early look at Amazon Redshift's ra3.4xlarge instance type (RA3). This post details the results of various tests comparing the performance and cost of the RA3 and DS2 instance types. It will help Amazon Web Services (AWS) customers make an informed decision on choosing the instance type best suited to their data storage and compute needs, and it can help them see the data-backed benefits offered by the RA3 instance type.

Amazon Redshift's ra3.16xlarge cluster type, released during re:Invent 2019, was the first AWS offering that separated compute and storage. Due to heavy demand for lower compute-intensive workloads, Amazon Redshift launched the ra3.4xlarge instance type in April 2020, and customers using the existing DS2 (dense storage) clusters are encouraged to upgrade to RA3 clusters. RA3 is based on AWS Nitro and includes support for Amazon Redshift managed storage, which automatically manages data placement across tiers of storage, caches the hottest data in high-performance local storage, and offloads colder data to Amazon Redshift managed Amazon Simple Storage Service (Amazon S3). The new RA3 instance type can scale data warehouse storage capacity automatically, without manual intervention and with no need to add additional compute resources. In the past, there was pressure to offload or archive historical data to other storage because of fixed storage limits; RA3 nodes with managed storage are an excellent fit for analytics workloads that require high storage capacity. The local storage used in the RA3 instance types is solid state drive (SSD), whereas DS2 instances use hard disk drive (HDD) local storage. With ample SSD storage, ra3.4xlarge has a higher provisioned I/O of 2 GB/sec, compared to 0.4 GB/sec for ds2.xlarge.

Compute scales independently as well. Because storage is separate from compute in RA3, customers can add or remove compute capacity independently: a cluster can be resized using elastic resize, and if elastic resize is unavailable for the chosen configuration, classic resize can be used. An ra3.4xlarge cluster can be created with up to 32 nodes but resized with elastic resize to a maximum of 64 nodes.
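As a minimal sketch of how such a resize can be requested programmatically (the cluster identifier, node count, and region below are placeholders, not values from this benchmark), the AWS SDK for Python exposes the same elastic resize operation used by the console:

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Request an elastic resize to six ra3.4xlarge nodes. Classic=False asks for
# elastic resize; if the target configuration is not supported, the call fails
# and a classic resize (Classic=True) can be requested instead.
response = redshift.resize_cluster(
    ClusterIdentifier="my-ra3-cluster",   # placeholder identifier
    NodeType="ra3.4xlarge",
    NumberOfNodes=6,
    Classic=False,
)
print(response["Cluster"]["ClusterStatus"])
```

Elastic resize is generally much faster than classic resize, which is why it is the preferred option whenever the target configuration supports it.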
For the benchmark itself, the test runs are based on the industry standard Transaction Processing Performance Council (TPC) benchmarking kit. For this test, we chose to use the TPC Benchmark DS (TPC-DS), which is intended for general performance benchmarking: we decided to use TPC-DS data as a baseline because it is the industry standard, and we decided the TPC-DS queries are the better fit for our benchmarking needs. Hence, we chose the TPC-DS kit for our study.

We imported the 3 TB dataset from the public S3 buckets available at the AWS Cloud DW Benchmark on GitHub for the test. The volume of uncompressed data was 3 TB; after ingestion into the Amazon Redshift database, the compressed data size was 1.5 TB. We carried out the test with RA3 and DS2 cluster setups sized to handle the load of 1.5 TB of data. Two Amazon Redshift clusters were chosen for this benchmarking exercise; the table below summarizes the infrastructure specifications used for the benchmarking and gives more details on the specification of DS2 vs. RA3 instances. Please note this setup would cost roughly the same to run for both RA3 and DS2 clusters. All testing was done with manual workload management (WLM) settings to baseline performance; in this setup, we decided to choose manual WLM configuration. The tests covered a single user as well as 5 and 15 concurrent users, and we also wanted to measure the impact the change in the storage layer has on CPU utilization.
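The load itself is a series of COPY commands against the public dataset. As an illustration only (the connection details, bucket path, table name, IAM role, and file format options below are placeholders rather than the benchmark's actual configuration), each TPC-DS table can be ingested from Python like this:

```python
import psycopg2

# Connection details are placeholders; in practice they come from the cluster
# endpoint and credentials (or IAM-based temporary credentials).
conn = psycopg2.connect(
    host="my-ra3-cluster.xxxxxxxx.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="tpcds",
    user="awsuser",
    password="********",
)
conn.autocommit = True

# COPY pulls the files directly from S3 and loads them in parallel across
# the slices of every compute node. Format options depend on how the source
# files were generated (delimiter, compression, and so on).
copy_sql = """
    COPY store_sales
    FROM 's3://tpcds-source-bucket/3TB/store_sales/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    GZIP DELIMITER '|' REGION 'us-east-1';
"""

with conn.cursor() as cur:
    cur.execute(copy_sql)

conn.close()
```

Because the load is split across node slices, load throughput scales with the cluster's slice count as well as its I/O bandwidth.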
The first comparison is query throughput. The graph below represents that RA3 consistently outperformed DS2 instances across all single and concurrent user querying: the overall query throughput to execute the queries was higher on RA3, although the difference was marginal for single-user tests, and in real-world scenarios single-user test results do not provide much value.

Figure 1 – Query performance metrics; throughput (higher the better).

From this benchmarking exercise, we also observe that RA3 delivers improved I/O throughput compared to DS2. The read and write IOPS of the ra3.4xlarge cluster performed between 140 to 150 percent and 220 to 250 percent better than ds2.xlarge instances across the concurrent user tests.

Figure 3 – I/O performance metrics: Read IOPS (higher the better); Write IOPS (higher the better).

Disk utilization tells a similar story. The average disk utilization for the RA3 instance type remained at less than 2 percent for all tests, and the peak utilization almost doubled for the concurrent users test, peaking at 2.5 percent. In comparison, DS2's average utilization remained at 10 percent for all tests, and the peak utilization almost doubled for the concurrent users test and peaked at 20 percent. Temp space growth almost doubled for both RA3 and DS2 during concurrent test execution.

Figure 4 – Disk utilization: RA3 (lower the better); DS2 (lower the better).

We also compared the read and write latency. The graph below shows the comparison of read and write latency for concurrent users: RA3's read and write latency is lower than the DS2 instance types across single and concurrent users. The read latency of ra3.4xlarge shows a 1,000 percent improvement over ds2.xlarge instance types, and write latency improved by 300 to 400 percent. This improved read and write latency results in improved query performance.

Figure 5 – Read and write latency: RA3 cluster type (lower is better).
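These latency numbers can be pulled straight from Amazon CloudWatch, which publishes ReadLatency and WriteLatency for every cluster. A minimal sketch, assuming a hypothetical cluster identifier and a one-hour window:

```python
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
end = datetime.utcnow()
start = end - timedelta(hours=1)

for metric in ("ReadLatency", "WriteLatency"):
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/Redshift",
        MetricName=metric,
        Dimensions=[{"Name": "ClusterIdentifier", "Value": "my-ra3-cluster"}],
        StartTime=start,
        EndTime=end,
        Period=300,                # 5-minute buckets
        Statistics=["Average"],
    )
    for point in sorted(stats["Datapoints"], key=lambda d: d["Timestamp"]):
        print(metric, point["Timestamp"], round(point["Average"], 4), "seconds")
```

Running the same loop against both clusters gives a side-by-side comparison similar to Figure 5.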
We wanted to measure the impact the change in the storage layer has on CPU utilization. The graph below designates the CPU utilization measured under three circumstances, and the observation from this graph is that the CPU utilization remained the same irrespective of the number of users. Considering the benchmark setup provides 25 percent less CPU, as depicted in Figure 3 above, this observation is not surprising.

Concurrency scaling behaved differently on the two clusters. This graph depicts the concurrency scaling for the test's two iterations in both RA3 and DS2 clusters. For the single-user test and the five concurrent users test, concurrency scaling did not kick off on either cluster; it kicked off in both RA3 and DS2 clusters for the 15 concurrent users test. We observed the scaling was stable and consistent for RA3 at one cluster, and total concurrency scaling time was 97.95 minutes for the two iterations. For DS2, however, it peaked to two clusters and there was frequent scaling in and out of the clusters (eager scaling); total concurrency scaling time was 121.44 minutes for the two iterations.

Figure 6 – Concurrency scaling active clusters (for two iterations) – RA3 cluster type.
Figure 7 – Concurrency scaling active clusters (for two iterations) – DS2 cluster type.

The workload concurrency test was executed with the manual WLM settings noted above. In RA3, we observed the number of concurrently running queries remained at 15 for most of the test execution. For DS2 clusters, the number of concurrently running queries moved between 10 and 15, and it spiked to 15 only for a minimal duration of the tests. This can be attributed to the intermittent concurrency scaling behavior we observed during the tests, as explained above.

Figure 8 – WLM running queries (for two iterations) – RA3 cluster type.
Figure 9 – WLM running queries (for two iterations) – DS2 cluster type.
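The total concurrency scaling time reported above corresponds to the ConcurrencyScalingSeconds CloudWatch metric, and the "active clusters" plots in Figures 6 and 7 likely correspond to ConcurrencyScalingActiveClusters. As a sketch, with a placeholder cluster identifier and time window, the per-cluster total can be derived by summing that metric:

```python
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
end = datetime.utcnow()
start = end - timedelta(days=1)     # window covering the benchmark run

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Redshift",
    MetricName="ConcurrencyScalingSeconds",
    Dimensions=[{"Name": "ClusterIdentifier", "Value": "my-ra3-cluster"}],
    StartTime=start,
    EndTime=end,
    Period=3600,                    # hourly buckets
    Statistics=["Sum"],
)

total_seconds = sum(point["Sum"] for point in stats["Datapoints"])
print(f"Concurrency scaling used: {total_seconds / 60:.2f} minutes")
```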
Based on Agilisium's observations of the test results, we conclude the newly introduced RA3 cluster type consistently outperforms DS2 in all test parameters and provides a better cost-to-performance ratio (2x performance improvement). We highly recommend customers running on DS2 instance types migrate to RA3 instances at the earliest for better performance and cost benefits. A benchmarking exercise like this can quantify the benefits offered by the RA3 cluster, and as a result of choosing the appropriate instance, your applications can perform better while also optimizing costs. To learn more, please refer to the RA3 documentation. Agilisium is an AWS Advanced Consulting Partner and big data and analytics company with a focus on helping organizations accelerate their "data-to-insights leap."

Some background on Amazon Redshift helps explain these results. Amazon Redshift is a PostgreSQL-based data warehouse platform that handles cluster and database software administration, and it is a database technology that is very useful for OLAP-type systems. It is very good with complex queries and reports meaningful results, it is fast with big datasets, it has very low latency that makes it a fast-performing tool, data management is easy and quick, and it integrates with all AWS products very well, as it is designed to endure very complex queries. Unlike OLTP databases, OLAP databases do not use an index; this is a result of the column-oriented data storage design of Amazon Redshift, which makes the trade-off to perform better for big data analytical workloads. The disk storage in Amazon Redshift for a compute node is divided into a number of slices, and the number of slices per node depends on the node size of the cluster. A Redshift compute node lives in private network space and can only be accessed from the data warehouse cluster's leader node. What the Amazon Redshift optimizer does is look for ways to minimize network latency between compute nodes and minimize file I/O latency when reading data. In case of node failures, Amazon Redshift automatically provisions new nodes and begins restoring data from other drives within the cluster or from Amazon S3; if a drive fails, your queries will continue with a slight latency increase while Redshift rebuilds your drive from replicas.

The challenge of using Redshift as an OLTP database is that queries can lack the low latency that exists on a traditional RDBMS. AWS is transparent that Redshift's distributed architecture entails a fixed cost every time a new query is issued (see "Measuring AWS Redshift Query Compile Latency"), and the documentation says the impact "might be especially noticeable when you run one-off (ad hoc) queries." When it comes to data manipulation such as INSERT, UPDATE, and DELETE queries, there are some Redshift-specific techniques that you should know, and the results of concurrent write operations depend on the specific commands that are being run concurrently. Heimdall's intelligent auto-caching and auto-invalidation work together with Amazon Redshift's query caching, but in the application tier, removing network latency; this distributed architecture allows caching to be scalable while bringing the data a hop closer to the user.

Amazon Redshift vs. DynamoDB – pricing: which one should you choose? Let me give you an analogy: which is better, a dishwasher or a fridge? Both are electric appliances, but they serve different purposes. The difference in structure and design of these database services extends to the pricing model also: Redshift pricing is defined in terms of instances and hourly usage, while DynamoDB pricing is defined in terms of requests and capacity units.

Amazon has announced that Amazon Redshift (a managed cloud data warehouse) is now accessible from the built-in Redshift Data API. Such access makes it easier for developers to build web services applications that include integrations with services such as AWS Lambda, AWS AppSync, and AWS Cloud9. In a federated query setup, the next steps are to configure an Amazon Virtual Private Cloud (Amazon VPC) endpoint for Amazon S3 to allow Lambda to write federated query results to Amazon S3: on the Amazon VPC console, choose Endpoints; for Subnetids, use the subnets where Amazon Redshift is running, with comma separation; select the I acknowledge check box; and choose Deploy.
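A minimal sketch of calling the Data API from Python (for example, inside a Lambda function), assuming a hypothetical cluster identifier, database, and database user; the Data API is asynchronous, so the statement is polled and the result fetched afterwards:

```python
import time
import boto3

client = boto3.client("redshift-data", region_name="us-east-1")

# Submit a query without managing JDBC/ODBC connections or drivers.
run = client.execute_statement(
    ClusterIdentifier="my-ra3-cluster",   # placeholder
    Database="tpcds",                     # placeholder
    DbUser="awsuser",                     # a Secrets Manager ARN can be used instead
    Sql="SELECT COUNT(*) FROM store_sales;",
)

# Poll until the statement reaches a terminal state.
while True:
    desc = client.describe_statement(Id=run["Id"])
    if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(1)

if desc["Status"] == "FINISHED" and desc.get("HasResultSet"):
    result = client.get_statement_result(Id=run["Id"])
    print(result["Records"][0][0]["longValue"])
```

Because no connection pooling or driver is involved, this pattern suits short-lived callers such as Lambda functions and AppSync resolvers.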
Beyond the benchmark, monitoring matters once a cluster is in production. How can we monitor the performance of a Redshift data warehouse cluster? Performance metrics like compute and storage utilization and read/write traffic can be monitored via the AWS Management Console or using Amazon CloudWatch; using CloudWatch metrics for Amazon Redshift, you can get information about your cluster's health and performance. The out-of-the-box Redshift dashboard provides you with a visualization of your most important metrics, along with load performance monitoring, but admins still need to monitor clusters with these AWS tools, and monitoring for both performance and security is top of mind for security analysts: out-of-the-box tools from cloud providers are hardly adequate to gain the level of visibility needed to make data-driven decisions. By using effective Redshift monitoring to optimize query speed, latency, and node health, you will achieve a better experience for your end users while also simplifying the management of your Redshift clusters for your IT team, and Redshift monitoring can also help to identify underperforming nodes that are dragging down your overall cluster. Customers check the CPU utilization metric from period to period as an indicator to resize their cluster: a CPU utilization hovering around 90 percent, for example, implies the cluster is processing at its peak compute capacity, and in this case a suitable action may be resizing the cluster to add more nodes to accommodate higher compute capacity.

Third-party tools build on the same metrics. Sumo Logic helps organizations gain better real-time visibility into their IT infrastructure; it integrates with Redshift as well as most cloud services and widely used cloud-based applications, making it simple and easy to aggregate data across different services and giving users a full view. A "Resource Utilization by NodeID" dashboard, for example, shows trends in CPU utilization by NodeID on a line chart for the last 24 hours, alongside node-level resource utilization metrics including CPU, disk, network, and read/write latency, throughput, and I/O operations per second. Datadog's Agent automatically collects metrics from each of your clusters, including database connections, health status, network throughput, read/write latency, read/write OPS, and disk space usage. A typical agent-based tool gathers the following hardware metrics on Redshift performance: a. CPU utilization, b. disk space utilization, c. read/write IOPS, d. read latency/throughput, e. write latency/throughput, f. network transmit/receive throughput. To configure such an integration, click Data Collection > AWS and click Add to integrate and collect data from your Amazon Web Services cloud instance; type a display name for the AWS instance and a description for your reference; use the AWS Configuration section to provide the details required to configure data collection from AWS; then choose Redshift Cluster (or) Redshift Node from the menu dropdown. Each Redshift cluster or compute node is considered a basic monitor.

The key metrics referenced in this post are:
- Write latency (WriteLatency): the average amount of time taken for disk write I/O operations; reported in seconds as an average.
- Write throughput: measures the number of bytes written to disk per second; average, MB/s; reported per cluster and node.
- Write IOPS: the average number of write operations per second.
- Read latency, read throughput, and read IOPS: the corresponding read-side measurements.
- Network receive throughput (bytes/second): the rate at which the node or cluster receives data; network transmit throughput is the outbound equivalent.
- CPU utilization: percent, 0 to 100.
- Health status: 1/0 (HEALTHY/UNHEALTHY in the Amazon Redshift console); indicates the health of the cluster.
- Maintenance mode: 1/0 (ON/OFF in the Amazon Redshift console); indicates whether the cluster is in maintenance mode.

Finally, a note on getting data into Redshift. One method makes use of DynamoDB, S3, or the EMR cluster to facilitate the data load process and works well with bulk data loads. For incremental synchronization, the approach described here currently handles only updates and new inserts in the source table; the sync latency is no more than a few seconds when the source table is getting updated continuously and no more than 5 minutes when the source gets updated infrequently. A Glue-based alternative has very high latency (it takes 10+ minutes to spin up and finish the Glue job), paired with a Lambda function that parses JSON and inserts into a Redshift landing table. For streaming sources, processing latency must be kept low. Which AWS services should be used for read/write of constantly changing data (choose two)? Since the solution should have minimal latency, that eliminates Kinesis Data Firehose (Options A and C), and based on calculations, a 60-shard Amazon Kinesis stream is more than sufficient to handle the maximum data throughput, even with traffic spikes. Since Kinesis Streams doesn't integrate directly with Redshift, it … For batch pipelines, we can write a script to schedule our workflow: set up an Amazon EMR cluster, run the Spark job for the new data, save the result into S3, then shut down the EMR cluster. Airflow will be the magic to orchestrate the big data pipeline, and I will write a post on it following our example here, on my blog where I write about software engineering.
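A minimal sketch of that workflow using boto3 (the cluster settings, bucket, and script path are placeholders; an actual pipeline would typically wrap each phase in an Airflow task rather than run it as one script):

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# 1. Start a transient EMR cluster with Spark installed.
cluster = emr.run_job_flow(
    Name="nightly-spark-load",
    ReleaseLabel="emr-6.2.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": False,   # terminate once the step finishes
    },
    Steps=[{
        "Name": "process-new-data",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            # 2. Run the Spark job; the job itself writes its output to S3.
            "Args": ["spark-submit", "--deploy-mode", "cluster",
                     "s3://my-pipeline-bucket/jobs/transform_new_data.py"],
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)

# 3. Wait for the cluster to finish; KeepJobFlowAliveWhenNoSteps=False shuts it
#    down automatically. Long jobs may need a custom WaiterConfig.
waiter = emr.get_waiter("cluster_terminated")
waiter.wait(ClusterId=cluster["JobFlowId"])
print("EMR cluster finished and terminated")
```

In Airflow, these three phases map naturally onto create-cluster, run-step, and terminate-cluster tasks, which is what the orchestration mentioned above would look like.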