Effective November 12, 2024, AWS will discontinue previous generation AWS Snowball devices and both Snowcone devices (HDD and SSD). We will continue to support existing customers using these end-of-life devices until November 12, 2025. The latest generation AWS Snowball devices remain available for all customers. For more information on the specific devices and alternatives, read the blog.

AWS Snowball FAQs

General

AWS Snowball is a service that provides secure, rugged devices, so you can bring AWS computing and storage capabilities to your edge environments and transfer data into and out of AWS. These rugged devices are commonly referred to as AWS Snowball or AWS Snowball Edge devices, and they provide on-board computing capabilities as well as storage.

AWS Snowball Edge is an edge computing and data transfer device provided by the AWS Snowball service. It has on-board storage and compute power that provides select AWS services for use in edge locations. Snowball Edge comes in two options, Storage Optimized and Compute Optimized, to support local data processing and collection in disconnected environments such as ships, windmills, and remote factories. Learn more about its features here.

The original Snowball devices have been transitioned out of service, and Snowball Edge Storage Optimized 210 TB devices are now the primary devices used for data transfer.

No. For current data transfer needs, please select the Snowball Edge Storage Optimized 210 TB devices.

You start by requesting one or more Snowball Edge Compute Optimized or Snowball Edge Storage Optimized devices in the AWS Management Console based on how much data you need to transfer and the compute needed for local processing. The buckets, data, Amazon EC2 AMIs, and Lambda functions you select are automatically configured, encrypted, and preinstalled on your devices before they are shipped to you. Once a device arrives, you connect it to your local network and set the IP address either manually or automatically with DHCP. Then use the Snowball Edge client software, job manifest, and unlock code to verify the integrity of the Snowball Edge device or cluster and unlock it for use. The manifest and unlock code are uniquely generated and cryptographically bound to your account and the Snowball Edge shipped to you, and cannot be used with any other devices. Data copied to Snowball Edge is automatically encrypted and stored in the buckets you specify.
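
As an illustration, the unlock step with the Snowball Edge client looks roughly like the following sketch; the endpoint IP address, manifest path, and unlock code are placeholders for the values generated for your specific job.

    # Unlock the device with the manifest file and unlock code from the console
    snowballEdge unlock-device \
        --endpoint https://192.0.2.10 \
        --manifest-file ./JID-example-manifest.bin \
        --unlock-code 12345-abcde-12345-abcde-12345

    # Inspect the device (for example, capacity and network interfaces)
    snowballEdge describe-device \
        --endpoint https://192.0.2.10 \
        --manifest-file ./JID-example-manifest.bin \
        --unlock-code 12345-abcde-12345-abcde-12345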

All logistics and shipping are handled by Amazon, so when copying is complete and the device is ready to be returned, the E Ink shipping label automatically updates the return address, ensuring that the Snowball Edge device is delivered to the correct AWS facility. Once the device ships, you can receive tracking status via messages sent by Amazon Simple Notification Service (Amazon SNS), generated texts and emails, or directly from the console.

All of the management for your Snowball Edge resources can be performed in the AWS Management Console, and these operations do not require dedicated system engineers.

AWS Snowball now refers to the service overall, and AWS Snowball Edge devices are the current type of device the service uses – sometimes referred to generically as AWS Snowball devices. Originally, early Snowball hardware designs were for data transport only. Snowball Edge has the additional capability to run computing locally, even when there is no network connection available.

AWS Snowball Edge Storage Optimized is the optimal choice if you need to securely and quickly transfer dozens of terabytes to petabytes of data to AWS. We recommend using Snowball Edge Compute Optimized for use cases that require access to powerful compute and high-speed storage for data processing before transferring it into AWS. It is a good fit for high-resolution video processing, advanced IoT data analytics, and real-time optimization of machine learning models in environments with limited connectivity. For more details, see the documentation.

Consider Snowball Edge if you need to run computing in rugged, austere, mobile, or disconnected (or intermittently connected) environments. Also consider it for large-scale data transfers and migrations when bandwidth is not available for use of a high-speed online transfer service, such as AWS DataSync.

AWS Snowball Edge Storage Optimized is the optimal data transfer choice if you need to securely and quickly transfer terabytes to petabytes of data to AWS. You can use Snowball Edge Storage Optimized if you have a large backlog of data to transfer or if you frequently collect data that needs to be transferred to AWS and your storage is in an area where high-bandwidth internet connections are not available or cost-prohibitive.

You can also use Snowball Edge to run edge computing workloads, such as performing local analysis of data on a Snowball Edge cluster and writing it to the Amazon S3 compatible endpoint. You can integrate it into existing workflows by leveraging built-in capabilities such as the NFS file interface, and migrate files to the device while maintaining file metadata.

AWS Snowball Edge can operate in remote locations or harsh operating environments, such as factory floors, oil and gas rigs, mining sites, hospitals, and on moving vehicles. Snowball Edge is pre-configured and does not have to be connected to the internet, so processing and data collection can take place within isolated operating environments. Snowball Edge allows you to run the same software at the edge and access select AWS capabilities as you would with full connectivity to AWS.

No. Snowball Edge is intended to serve as a data transport solution for moving high volumes of data into and out of a designated AWS Region. For use cases that require data transfer between AWS Regions, we recommend using S3 Cross-Region Replication as an alternative.

You can transfer virtually any amount of data with Snowball Edge, from a few terabytes to many petabytes. You can transfer up to approximately 210 TB with a single Snowball Edge Storage Optimized device and can transfer even larger data sets with multiple devices, either in parallel, or sequentially.

Data transfer speed is affected by a number of factors, including local network speed, file size, and the speed at which data can be read from your local servers. The total time to transfer 210 TB of data includes the time taken to load and unload data from the device and the time involved in shipping the device to and from your location. This timeline does not include the up to 4 weeks we may take to provision and prepare a Snowball device for your job.

For security purposes, jobs using an AWS Snowball Edge device must be completed within 360 days of being prepared. If you need to keep one or more devices for longer than 360 days, contact AWS Support. Otherwise, after 360 days, the device becomes locked, can no longer be accessed, and must be returned. If the AWS Snowball Edge device becomes locked during an import job, we can still transfer the existing data on the device into Amazon S3.

Please see the AWS Snowball Features page for feature details and the Snowball Edge documentation page for a complete list of hardware specs, including network connections, thermal and power requirements, decibel output, and dimensions.

Snowball Edge Storage Optimized for data transfer devices have two 10G RJ45 ports, one 10/25G SFP28 port, and one 40G/100G QSFP28 port.

Snowball Edge Storage Optimized for edge compute devices have one 10G RJ45 port, one 10/25G SFP28 port, and one 40G QSFP+ port.

The Snowball Edge Compute Optimized devices (including the GPU option) have two 10G RJ45 ports, one 10/25G SFP28 port, and one 40G/100G QSFP28 port.

By default, Snowball Edge uses two-day shipping by UPS. You can choose expedited shipping if your jobs are time-sensitive. This timeline does not include the up to 4 weeks we may take to provision and prepare a Snowball device for your job.

Once you place a job order for a Snowball device, we will provision, prepare, and ship the device. This provisioning process may take up to 4 weeks from the time of order placement. This timeline should be factored into your project plan to ensure a seamless transition.

Edge Computing Capabilities

Yes. The Snowball Edge Storage Optimized option supports SBE1 instances.

The Snowball Edge Compute Optimized option features more powerful and larger SBE-C instances for compute-intensive applications.

The support for EC2-compatible instances on Snowball Edge devices enables you to build and test on EC2, then operate your AMI on a Snowball Edge to address workloads that sit in remote or disconnected locations.

The GPU option on AWS Snowball Edge Compute Optimized comes with SBE-G instances that can take advantage of the on-board GPU to accelerate application performance. After receiving the device, select the option to use the SBE-G instance in order to use the on-board GPU with your application.

You should use the EC2 compatible instances when you have an application running on the edge that is managed and deployed as a virtual machine (an Amazon Machine Image, or AMI).

Yes, multiple Snowball Edge Compute and Storage Optimized devices can be clustered into a larger durable storage pool with a single Amazon S3 compatible endpoint. For example, if you have 16 Storage Optimized devices, they can be configured to be a single cluster that exposes S3 compatible endpoints with 2.6 PB of storage. Alternatively, they can be used individually without clustering, each hosting a separate S3 compatible endpoint with up to 190 TB of usable storage.

With an AWS Snowball Edge storage cluster, you increase local storage durability and scalability. Storage clustering creates durable, scalable, S3 compatible local storage. AWS Snowball Edge storage clusters allow you to scale your local storage capacity up or down depending on your requirements by adding or removing appliances, eliminating the need to buy expensive hardware.

You can enable EKS Anywhere (EKS-A) for modern distributed container-based applications, as well as Amazon EC2 AMIs or Lambda functions, during AWS Snowball Edge job creation using either the AWS Console, the AWS Snowball SDK, or the AWS CLI.

Yes. AWS Snowball Edge provides an Amazon EC2-compatible endpoint that can be used to start, stop, and manage your instances on AWS Snowball Edge. This endpoint is compatible with the AWS CLI and AWS SDK.
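
For example, you can retrieve the device-local credentials with the Snowball Edge client and then point the AWS CLI at the device. This is a minimal sketch; the IP address, port, and profile name are placeholders, and the HTTP EC2-compatible endpoint port (8008 here) should be confirmed against your device documentation.

    # Retrieve the device-local access key and secret key
    # (assumes the Snowball Edge client is already configured with the device
    #  endpoint, manifest, and unlock code)
    snowballEdge list-access-keys
    snowballEdge get-secret-access-key --access-key-id <access-key-id-from-previous-step>

    # Store the credentials in a CLI profile (named "snowballEdge" here), then
    # call the EC2-compatible endpoint on the device
    aws configure --profile snowballEdge
    aws ec2 describe-instances --profile snowballEdge \
        --endpoint http://192.0.2.10:8008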

The Amazon EC2 endpoint running on AWS Snowball Edge provides a set of EC2 features that customers find most useful for edge computing scenarios. This includes APIs to run, terminate, and describe your installed AMIs and running instances. Snowball Edge also supports block storage for EC2 instances, which is managed using a subset of the Amazon EBS API commands.

No. At this time, you cannot use an existing EBS volume with AWS Snowball Edge, however, Snowball Edge does offer block storage volumes, which are managed with an EBS-compatible API.

To run instances, provide the AMI IDs during job creation and the images come pre-installed when the device is shipped to you.

Yes. You can import or export your KVM/VMware images to AMIs using the EC2 VM Import/Export service. Refer to the VM Import/Export documentation for more details.

This is necessary in order to run licensed software, including operating systems other than those which the AWS Snowball service provides.

Amazon EC2 on AWS Snowball Edge provides default support for a variety of free-to-use operating systems (OS) like Ubuntu and CentOS. They appear as AMIs that can be loaded onto Snowball Edge without any modification. To run other OSes that require licenses on Snowball Edge EC2 instances, you must provide your own license and then export the AMI using Amazon EC2 VM Import/Export (VMIE).
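
For illustration, importing a VM image into EC2 as an AMI with VM Import/Export (as mentioned above) might look like the following sketch; the bucket name, object key, and description are placeholders.

    # Import a VMDK stored in Amazon S3 as an AMI using EC2 VM Import/Export
    aws ec2 import-image \
        --description "Licensed OS image for Snowball Edge" \
        --disk-containers "Format=vmdk,UserBucket={S3Bucket=my-import-bucket,S3Key=images/licensed-os.vmdk}"

    # Track the import task until it completes and returns an AMI ID
    aws ec2 describe-import-image-tasks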

SBE-C instances feature up to 104 vCPUs, ephemeral instance storage for root volumes, and 418GB of memory for running a wide variety of compute-intensive applications, such as advanced machine learning, full motion video analysis, LAMP stacks, and EC2-hosted containers, in environments with little or no internet connectivity. SBE-C instances can also use AWS Snowball Edge NVMe SSD and HDD block storage for persistent volumes.

AMIs that run on the C5 instance type in AWS are compatible with SBE1 instances available on AWS Snowball Edge Storage Optimized in the vast majority of cases. We recommend that you first test your applications in the C5 instance type to ensure they can be run on the Snowball Edge Storage Optimized device.

AMIs that run on the M5a instance type in AWS are compatible with SBE-C instances available on AWS Snowball Edge Compute Optimized in the vast majority of cases. For SBE-C instances running on the AWS Snowball Edge Compute Optimized device, we recommend you test your applications on the M5a instance type.

For SBE-G instance types for the Snowball Edge Compute Optimized with the GPU option, we recommend you first test your applications against the EC2 P3 instance types.

Yes. You can run multiple instances on a device as long as the total resources used across all instances are within the limits for your Snowball Edge device.

All the EC2 compatible instances can run on each node of an AWS Snowball Edge cluster. When you provision a Snowball Edge cluster using the AWS Console, you can provide details for instances to run on each node of the cluster, for example, the AMI you want to run and the instance type and size you want to use. Nodes can use the same or different AMIs across each node in a cluster.

Each AMI has an AMI ID associated with it. You can use the run-instances command to start an instance by providing this ID. Running this command returns an instance-id value that can be used to manage the instance.

You can check the status of all the images that are installed on the device using the describe-images command. To see the status of instances running on the device, you can use the describe-instance-status command.

You can terminate a running instance using the terminate-instances command.
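
Putting these commands together, a minimal instance lifecycle against the device's EC2-compatible endpoint might look like the following sketch; the AMI ID, instance ID, instance type, endpoint, and profile are placeholders.

    # Launch an instance from an AMI preloaded on the device
    aws ec2 run-instances --image-id s.ami-0123456789abcdef0 \
        --instance-type sbe-c.large \
        --profile snowballEdge --endpoint http://192.0.2.10:8008

    # Check the status of instances running on the device
    aws ec2 describe-instance-status \
        --profile snowballEdge --endpoint http://192.0.2.10:8008

    # Terminate the instance when it is no longer needed
    aws ec2 terminate-instances --instance-ids s.i-0123456789abcdef0 \
        --profile snowballEdge --endpoint http://192.0.2.10:8008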

AWS Snowball Edge encrypts all data, including AMIs, with 256-bit encryption. You manage your encryption keys by using the AWS Key Management Service (KMS). Your keys are never stored on the device and you need both the keys and an unlock code to use the device on-premises. In addition to using a tamper-evident enclosure, Snowball Edge uses industry-standard Trusted Platform Modules (TPM) designed to detect any unauthorized modifications to the hardware, firmware, or software. AWS visually and cryptographically inspects every device for any signs of tampering.

Yes. You can import virtual machine (VM) images as AMIs to your Snowball device while it is onsite. For more information about importing VM images into Snowball devices, see the Snowball Edge documentation.

You are responsible for licensing any software that you run on your instance. Specifically, for Windows operating systems, you can bring your existing license to the running instances on the device, by installing the licensed OS in your AMI in EC2, and then using VM Import/Export to load the AMI to your Snowball Edge device.

AWS IoT Greengrass is an IoT edge runtime (open source starting with version 2.0) and a cloud service that helps you build, deploy and manage IoT applications on your devices. AWS Snowball devices running AWS IoT Greengrass can operate as an IoT hub, data aggregation point, application monitor, or a lightweight analytics engine.

To get started with AWS IoT Greengrass on an AWS Snowball device follow the steps listed below:

  1. When you are ready to place your job order on the AWS Snowball console, you can opt in to install the AWS IoT Greengrass AMI, which uses the Amazon Linux 2 (AL2) AMI for AWS Snowball, on the Snowball device of your choice.
  2. Once you receive the device, you can use AWS OpsHub for Snowball to unlock the device with the credentials provided after the job is created.
  3. After the device is powered on and unlocked, you can launch the AL2 AMI for AWS Snowball on the applicable Snowball device and remotely log in to it using your existing SSH keys or by creating new SSH keys.
  4. Now you are ready to install the latest version of AWS IoT Greengrass on the Snowball device following the instructions here (see the sketch after this list).
  5. Once the installation is complete, you will be able to manage the AWS Snowball device and deploy IoT workloads from the AWS IoT console.
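
A rough sketch of steps 3 and 4 is shown below, following the publicly documented AWS IoT Greengrass (v2) installer flow; the IP address, key file, Region, thing name, and thing group name are placeholders, and the current Greengrass installation instructions should be treated as authoritative.

    # Step 3: log in to the AL2 instance launched on the Snowball device
    ssh -i my-key.pem ec2-user@192.0.2.20

    # Step 4: download and run the Greengrass nucleus installer
    curl -s https://d2s8p88vqu9w66.cloudfront.net/releases/greengrass-nucleus-latest.zip \
        -o greengrass-nucleus-latest.zip
    unzip greengrass-nucleus-latest.zip -d GreengrassInstaller
    sudo -E java -Droot="/greengrass/v2" -Dlog.store=FILE \
        -jar ./GreengrassInstaller/lib/Greengrass.jar \
        --aws-region us-west-2 \
        --thing-name MySnowballGreengrassCore \
        --thing-group-name MySnowballGreengrassGroup \
        --component-default-user ggc_user:ggc_group \
        --provision true \
        --setup-system-service true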

AWS Snowball Edge supports EKS Anywhere, which allows you to easily create and operate Kubernetes clusters on Snowball Edge devices.

EKS Anywhere on Snowball is supported on AWS Snowball Edge Compute Optimized devices.

Block Storage for Amazon EC2 on AWS Snowball Edge

You can run block storage on both Snowball Edge Compute Optimized and Snowball Edge Storage Optimized devices. You attach block storage volumes to EC2 instances using a subset of Amazon EBS capabilities that enable you to configure and manage volumes for EC2 instances on Snowball Edge devices.

AWS Snowball Edge block storage enables you to have multiple persistent block storage volumes – in addition to your root volume – for your Amazon EC2 based applications on the device. This can provide higher performance and more storage capacity for EC2 applications on Snowball Edge than you can achieve with only a root volume. You can now attach multiple volumes to your EC2 instances. Volumes that are attached to EC2 instances on Snowball Edge persist independently from the life of the instance, enabling you to deploy multiple applications on Snowball Edge, and to start and stop the applications as needed.

AWS Snowball Edge block storage provides performance-optimized SSD volumes (sbp1), and capacity-optimized HDD volumes (sbg1), to meet IOPS and throughput requirements for a wide-variety of data processing and data collection applications. Block storage volumes have a maximum size of 10 TB per volume, and you can attach up to 10 volumes to any EC2 instance on Snowball Edge.

On Snowball Edge Compute Optimized, you can use up to 7 TB of NVMe SSD for sbp1 volumes, which are good for latency-sensitive applications, such as machine learning. On Snowball Edge Storage Optimized, you can use up to 1 TB of SATA SSD for sbp1 volumes, which are good for pre-processing data.

On both the Snowball Edge Storage Optimized and Compute Optimized devices, you can use capacity-optimized volumes, sbg1, to store data on HDDs. This volume type is appropriate for data collection and less IOPS-intensive applications. It has a maximum volume size of 10 TB, and you can attach multiple volumes to any instance.

By default, all Snowball Edge devices are now shipped with the block storage feature. Once you unlock the device, you can use the AWS CLI or SDK to create volumes and attach them to an Amazon EC2 instance. You can attach multiple volumes to each EC2 instance; however, a single volume can only be attached to a single instance at any time.

AWS Snowball Edge block storage has different performance, availability, and durability characteristics than Amazon EBS volumes. Also, it provides only a subset of Amazon EBS capabilities. For example, snapshot functionality is not currently supported on Snowball Edge block storage. Please see Snowball Edge’s technical documentation for a complete list of supported APIs.

To interact with block storage on Snowball Edge, you can use the create-volume, delete-volume, attach-volume, detach-volume, and describe-volumes EBS APIs. Please see Snowball Edge’s technical documentation for a complete list of supported APIs.
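
For illustration, creating and attaching a capacity-optimized volume could look like this sketch; the size, IDs, device name, endpoint, and profile are placeholders, and the exact parameters accepted by your device should be confirmed in the documentation.

    # Create a 100 GB capacity-optimized (sbg1, HDD) volume on the device
    aws ec2 create-volume --size 100 --volume-type sbg1 \
        --profile snowballEdge --endpoint http://192.0.2.10:8008

    # Attach the volume to a running EC2 instance on the device
    aws ec2 attach-volume --volume-id s.vol-0123456789abcdef0 \
        --instance-id s.i-0123456789abcdef0 --device /dev/xvdf \
        --profile snowballEdge --endpoint http://192.0.2.10:8008

    # List volumes and their attachment state
    aws ec2 describe-volumes \
        --profile snowballEdge --endpoint http://192.0.2.10:8008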

Any Amazon Machine Image (AMI) running on Snowball Edge can access up to 10 block storage volumes at once. Generic AMIs provided by AWS and custom AMIs can access any block storage volume. There are no special requirements to make the block storage volumes work. However, certain operating systems perform better with specific drivers. Please see Snowball Edge’s technical documentation for details.

Volumes created on a single AWS Snowball Edge are only accessible to the EC2 instances running on that device.

You can use the describe-device command from the Snowball Edge client to monitor how much block storage is being used on the device. When you create a volume, all of the requested storage capacity is allocated to it from the available capacity.
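
For example, with the same placeholder endpoint and credentials used when unlocking the device, you can check capacity as follows:

    # Report device details, including capacity information
    snowballEdge describe-device \
        --endpoint https://192.0.2.10 \
        --manifest-file ./JID-example-manifest.bin \
        --unlock-code 12345-abcde-12345-abcde-12345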

Not directly, no. Data on block storage volumes on AWS Snowball Edge is deleted when the device returns to AWS. If you wish to preserve data in block storage volumes, you must copy the data to the Amazon S3 adapter in the case of data migration jobs, or to Amazon S3 compatible storage in the case of edge compute devices. The data in the Amazon S3 adapter is copied into your S3 bucket when the device returns to AWS as part of an import job. For edge compute, you can use AWS DataSync to transfer data online back and forth between S3 buckets in AWS Regions and S3 buckets on devices whenever there is connectivity to the AWS Region.

Yes, you can use Amazon S3 compatible storage on Snowball and block storage on the same Snowball Edge Compute Optimized device in a single-device deployment. The object storage and the block storage used for sbg1 volumes share the same data disk capacity, which is pre-provisioned at the time of placing an order. The underlying storage features work together so that an increase in I/O demand for block or object storage does not impede the availability and performance of the other. Note that this is not supported in cluster deployments, where all storage on data disks is utilized for object storage capacity.

No, you add volumes to your Amazon EC2 instances after you have received the device.

Yes. For Amazon S3 compatible storage on Snow, usable object storage capacity is selected at the time of placing the order for pre-provisioning. This step is not required for devices with the Amazon S3 adapter. You can dynamically add or remove volumes and objects based on your application needs.

AWS Snowball Edge is designed with security in mind for the most sensitive data. All data written into block volumes is encrypted by keys provided by you through AWS Key Management Service (KMS). All volumes are encrypted using the same keys selected during Snowball Edge job creation. The keys are not permanently stored on the device and are erased after loss of power.

Additional volumes attached using block storage offer up to 10 times higher performance compared to root volumes. We recommend you use relatively small root volumes and create additional block storage volumes for storing data for your Amazon EC2 applications. Please see Snowball Edge’s technical documentation for performance best practices and recommended drivers.

Regional Availability

Check the Regional Service Availability pages for the latest information. 

No. AWS Snowball Edge devices are designed to be requested and used within the same AWS Region where your S3 bucket is located. The device may not be requested from one Region and returned to another. Snowball Edge devices used for imports or exports from an AWS Region in the EU may be used with any of the other EU countries. Check the Regional Service Availability pages for the latest information.

Security

AWS Snowball Edge encrypts all data with 256-bit encryption. You manage your encryption keys by using the AWS Key Management Service (AWS KMS). Your keys are never stored on the device and all memory is erased when it is disconnected and returned to AWS.

In addition to using a tamper-resistant enclosure, Snowball Edge uses industry-standard Trusted Platform Modules (TPM) designed to detect any unauthorized modifications to the hardware, firmware, or software. AWS visually and cryptographically inspects every device for any signs of tampering and to verify that no changes were detected by the TPM.

AWS Snowball Edge is designed with security in mind for the most sensitive data. All data is encrypted by keys provided by you through AWS Key Management Service (KMS). The keys are not permanently stored on the device and are erased after loss of power. Applications and Lambda functions run in a physically isolated environment and do not have access to storage. Lastly, after your data has been transferred to AWS, your data is erased from the device using standards defined by National Institute of Standards and Technology. Snowball Edge devices are hardened against attack and all configuration files are encrypted and signed with keys that are never present on the device.

AWS Snowball Edge uses an innovative, E Ink shipping label designed to ensure the device is automatically sent to the correct AWS facility. When you have completed your data transfer job, you can track it by using Amazon SNS generated text messages or emails, and the console.

Yes. To receive a history of Snowball API calls made on your account, you simply turn on CloudTrail in the AWS Management Console. The following API calls in Snowball are not recorded or delivered: DescribeAddress (in response), CreateAddress (in request), DescribeAddresses (in response).

Import Data with AWS Snowball Edge

After you have connected and activated the Snowball Edge, you can transfer data from local sources to the device through the S3-compatible endpoint or the NFS file interface, both available on the device. You can also use the Snowball client to copy data. To learn more, please refer to the Snowball Edge documentation.
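
For example, copying a local directory to a preloaded bucket through the device's S3 endpoint might look like the following sketch; the paths, bucket name, IP address, port (8080 is assumed here for the HTTP S3 endpoint), and profile are placeholders.

    # Copy a local directory into a bucket on the Snowball Edge device
    aws s3 cp /data/exports s3://my-preloaded-bucket/exports --recursive \
        --profile snowballEdge --endpoint http://192.0.2.10:8080

    # List the bucket to confirm the objects landed on the device
    aws s3 ls s3://my-preloaded-bucket/exports/ \
        --profile snowballEdge --endpoint http://192.0.2.10:8080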

Wait to confirm that the Snowball Edge has been received by AWS and your data has been successfully transferred into the appropriate S3 buckets before deleting any data on your disk(s). While AWS verifies the integrity of files copied to Snowball Edge during the S3 transfer, it is your responsibility to verify the integrity of the data before deleting it from your disk(s). AWS is not liable for any data lost or corrupted during copy or transit.

When the data transfer job is complete, the E Ink display on the Snowball Edge automatically updates the return shipping label to indicate the correct AWS facility to ship to. Just drop off the Snowball Edge at the nearest UPS and you're all set. You can track the status of your transfer job through Amazon SNS generated text messages or emails, or directly in the AWS Management Console.

Yes. You can stop the data ingestion to an Amazon Simple Storage Service (Amazon S3) bucket by cancelling the job in the AWS Snowball Management Console or by contacting AWS support.

No. AWS uses an automated workflow to securely delete the ingested data and sanitize the returned device with a complete erasure of the Snowball device according to NIST 800-88 standards. Additionally, it is not possible to return the same device back to you after the data import is complete.

Export Data with AWS Snowball Edge

In addition to the Export job fees detailed on our pricing page, you will also be charged all fees incurred to retrieve your data from Amazon S3.

We typically start exporting your data within 24 hours of receiving your request, and exporting data can take as long as a week. Once the job is complete and the device is ready, we ship it to you using the shipping options you selected when you created the job.

Yes. AWS saves an import log report to your bucket. This report contains per-file information including the date and time of the upload, the Amazon S3 key, the MD5 checksum, and the number of bytes. For more details, see the documentation.

The Snowball export workflow does not have access to objects stored in the Amazon S3 Glacier or Amazon S3 Glacier Deep Archive storage classes. You must first restore these objects from S3 Glacier or S3 Glacier Deep Archive for a minimum of 10 days, or until the Snowball export job completes, to ensure that the restored objects are successfully copied to the Snowball device.

Large Data Migration Manager

Large Data Migration Manager helps you plan and monitor large data migrations, from a minimum of 500 TB to petabytes of data. First, Large Data Migration Manager enables you to create a plan for migration projects that use multiple AWS Snowball devices to complete your petabyte-scale data migration or data movement from the rugged, mobile edge. Creating a plan helps you and your partners onboard to Snowball and align on project goals such as the data size to be migrated and the project duration. Once a plan is in place, Large Data Migration Manager provides a central location in the AWS Snowball management console for you to stay updated on the progress of all your Snowball jobs (number of outstanding jobs, current data ingested, and so on) and view estimated schedules for placing the next job orders. Finally, you can control the project plan as you monitor the migration and can extend or end the migration when you deem appropriate.

Prior to the Large Data Migration Manager launch, you had to track all of this information yourself and spend time coordinating with your partners and AWS for tracking data ingestion progress and placing job orders. Large Data Migration Manager saves you time and effort by keeping track of all project details and allows you to focus on your overall goal of data migration. 

You start by creating a data migration or data movement project plan in the AWS Management Console. To create a plan, you are prompted for your import job specifics, including the plan name, service access roles, and notification preferences. Once a plan is created, you need to create a site where the Snowball devices will be shipped. Site information includes the name and shipping address for each site, data size, number of concurrent Snowball jobs, Snowball job type, Snowball device, fill rate (as per the last monitored data), and project start and end dates. After you create your site, you can review your automatically created Snowball job ordering schedule, which helps you know when to order your Snowball jobs. You can either clone prior existing jobs or add a job that was already created to the site.

You can update your project plan in one of two ways: (1) you modify your plan information by updating data size amount, number of concurrent Snowball jobs, or (2) Snowball’s Data Migration Manager calculates the average Snowball job duration (from order creation to completion) and average fill rate on a per site basis to automatically adjust your plan. Data Migration Manager then uses these plan updates to adjust your ordering schedule to inform you of your project status and if additional Snowball jobs are required. 

You monitor your Snowball jobs using the Data Migration Manager dashboard or by viewing your project plan summary. Using the Data Migration Manager dashboard, you can quickly monitor project status and identify issues at a glance. With this capability, you can track your overall migration or data movement progress, time remaining, Snowball job status, site status, average Snowball job fill rate, average Snowball job duration, and upcoming order schedule across your plan and sites.

No. Large Data Migration Manager is available to customers using AWS Snowball. 

Large Data Migration Manager is available in all commercial regions where AWS Snowball Edge devices are available.

AWS OpsHub

AWS OpsHub is an application that you can download from the Snowball resources page. It offers a graphical user interface for managing AWS Snowball devices. AWS OpsHub makes it easy to set up and manage AWS Snowball devices, enabling you to rapidly deploy edge computing workloads and simplify data migration to the cloud. With just a few clicks in AWS OpsHub, you can unlock and configure devices, drag and drop data to devices, launch and manage EC2 instances on devices, or monitor device metrics. AWS OpsHub is available globally at no extra charge.

AWS OpsHub is an application that you can download and install on any Windows or Mac client machine, such as a laptop. Once you have installed AWS OpsHub and have your AWS Snowball device on site, open AWS OpsHub and unlock the device. You will then be presented with a dashboard showing your device and its system metrics. You can then begin deploying your edge applications or migrating your data to the device with just a few clicks.

Yes. However, the task automation features are available for only Snowball devices ordered after AWS OpsHub launched on April 16, 2020. All other functionality will be available for all devices, including those ordered before AWS OpsHub launched.

You use AWS OpsHub to manage and operate your AWS Snowball devices and the AWS services that run on them. AWS OpsHub is an application that runs on a local client machine, such as a laptop, and can operate in disconnected or connected environments. In contrast, you use the AWS Management Console to manage and operate the AWS services running in the cloud. The AWS Management Console is a web-based application that operates when you have a connection to the internet.

AWS OpsHub will automatically check for AWS OpsHub software updates when the client machine that AWS OpsHub is running on is connected to the internet. When there is a software update, you will be notified on the application and will be given the option to download and update the latest software. Additionally, you can visit the Snowball resources page and check for the latest version of AWS OpsHub.

Yes. When you copy data to AWS Snowball devices using AWS OpsHub, checksums are used to ensure that the data you copy to the device is the same as the original. Also, all data written to AWS Snowball devices is encrypted by default.

Billing

Please see our AWS Snowball Edge pricing page for pricing details.

Snowball Edge transfers data on your behalf into AWS services such as Amazon S3. Standard AWS service charges apply. Data transferred IN to AWS does not incur any data transfer fees, and Standard Amazon S3 pricing fees apply for data stored in S3.

There is an additional charge for Amazon S3 compatible storage, which offers increased resiliency, scale, and an expanded API feature set. The pricing is based on the S3 capacity provisioned at the time of placing the order. Please see our pricing page for details.

No, there is no additional charge for this feature.

Devices are only available on a per-job pay-as-you-go basis, and are not available for purchase.

Workflow Integration Tools

Yes. The Snowball Job Management API provides programmatic access to the job creation and management features of a Snowball or Snowball Edge. It is a simple, standards-based REST web service interface, designed to work with any Internet development environment.

The AWS Snowball Job Management API allows partners and customers to build custom integrations to manage the process of requesting Snowballs and communicating job status. The API provides a simple web service interface that you can use to create, list, update, and cancel jobs from anywhere on the web. Using this web service, developers can easily build applications that manage Snowball jobs. To learn more, please refer to AWS Snowball documentation.
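
As a quick illustration, the same job information is reachable through the AWS CLI, which wraps the Job Management API; the job ID below is a placeholder.

    # List your Snowball jobs and inspect a specific one
    aws snowball list-jobs
    aws snowball describe-job --job-id JID123e4567-e89b-12d3-a456-426655440000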

The S3 SDK Adapter for Snowball provides an S3-compatible interface for reading and writing data on a Snowball or Snowball Edge for data migration use cases.

The S3 Adapter allows customer applications to write data from file and non-file sources to S3 buckets on the Snowball or Snowball Edge device. It also includes interfaces to copy data with the same encryption as is available through the Snowball client. To learn more, please refer to the AWS Snowball documentation.

The Snowball Client is a turnkey tool that makes it easier to copy file-based data to Snowball. Customers who prefer a tighter integration can use the S3 Adapter to easily extend their existing applications and workflows to seamlessly integrate with Snowball for data migration use cases.

The S3 Adapter writes data using the same advanced encryption mechanism that the Snowball Client provides.

The S3 Adapter communicates over REST, which is language-agnostic.

Amazon S3 compatible storage on AWS Snowball

You use Amazon S3 compatible storage on AWS Snowball to run applications that require S3 object storage in rugged, mobile edge or disconnected environments for local data processing or residency use cases. For example, applications that leverage Amazon S3 in Region for machine learning inference or data analytics can now be deployed on Snowball devices at the edge to make real-time decisions proximal to end users, or to meet data residency requirements.

Amazon S3 compatible storage on AWS Snowball supports bucket and object APIs such as GetObject, PutObject, DeleteObject(s), multipart uploads, object tagging, CreateBucket, DeleteBucket, BucketLifecycle, and List. Amazon S3 compatible storage on Snowball supports encryption using SSE-S3 or SSE-C, authentication and authorization using Snowball IAM, advanced features such as built-in resiliency, and flexible multi-node (3-16 nodes) deployment options. Amazon S3 compatible storage on Snowball is an enhancement to the earlier S3 Adapter implementation on Snowball, which has limited API support designed primarily for data transfer use cases.
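
For illustration, basic object operations against Amazon S3 compatible storage on the device might look like the following sketch; the bucket name, keys, endpoint address, and profile are placeholders, and the actual S3 endpoint for your device (or virtual network interface) may differ.

    # Create a bucket, then put and get an object on the device's S3 compatible storage
    aws s3api create-bucket --bucket sensor-archive \
        --profile snowballEdge --endpoint-url https://192.0.2.10
    aws s3api put-object --bucket sensor-archive --key readings/2024-06-01.json \
        --body ./readings.json \
        --profile snowballEdge --endpoint-url https://192.0.2.10
    aws s3api get-object --bucket sensor-archive --key readings/2024-06-01.json \
        --profile snowballEdge --endpoint-url https://192.0.2.10 \
        readings-copy.json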

Amazon S3 compatible storage is available on AWS Snowball Edge Compute Optimized and Storage Optimized devices for local edge compute and storage use cases.

Amazon S3 compatible storage on Snowball continuously monitors the health status of the device or cluster. Background processes respond to data inconsistencies and temporary failures to heal and recover data in order to ensure resiliency. In the case of non-recoverable hardware failures, Amazon S3 compatible storage on Snowball can continue operations and provides proactive notifications through emails, prompting customers to work with AWS to replace failed devices. While devices are connected, AWS also receives these notifications when the remote monitoring feature is turned on, for proactive customer engagement. Service and device health status are available to you locally in OpsHub at all times.

Yes. Data stored in Amazon S3 compatible storage on Snowball is encrypted using server-side encryption. Amazon S3 compatible storage on Snowball supports both the SSE-S3 and SSE-C encryption options. Server-side encryption encrypts only the object data, not object metadata.

S3 capacity for Amazon S3 compatible storage on Snow depends on the quantity and type of Snow devices. For single-node deployments, you can provision granular S3 compatible capacity from 2.5 TB on SBE-CO devices up to 190 TB on SBE-SO devices. For cluster deployments, you can provision up to a maximum of 2.6 PB of usable S3 compatible capacity in a 16-node cluster of Snowball Edge Storage Optimized 210 TB devices.