Is Synapse using Hyperscale under the hood? ** Edited question after reading the answers: edited to change Azure SQL DW Hyperscale to Azure SQL DB Hyperscale. **

Not at this time. There is no guarantee that Synapse will ever be enabled on this platform, and if it does come it is still a long way off. Migration of a dedicated SQL pool (formerly SQL DW) is, in relative terms, easy, and you will also see notes in many docs highlighting which Synapse implementation of dedicated SQL pools a given document is referencing.

Azure Synapse and Azure SQL Database are both powerful tools offered by Microsoft Azure to help businesses manage and process their data. Azure Synapse Analytics is an integrated analytics solution optimized for complex querying and analysis, which makes it ideal for advanced analytical workloads such as OLAP (Online Analytical Processing). Ultimately, the choice between Azure Synapse and Azure SQL Database will depend on the specific needs and goals of your business.

A Hyperscale database is a database in SQL Database that is backed by the Hyperscale scale-out storage technology. It provides fast database backups (based on file snapshots stored in Azure Blob storage) regardless of size, with no IO impact on compute resources; short-term backup retention of 1-35 days for Hyperscale databases is now in preview, and workloads that are mostly read-only may have smaller backup costs. You can create up to 30 named replicas for many read scale-out scenarios, and you can only connect to HA secondary replicas by specifying ApplicationIntent=ReadOnly. The tempdb database is located on local SSD storage and is sized proportionally to the compute size (the number of cores) that you provision; tempdb and the RBPEX cache on compute nodes scale up automatically as the number of cores is increased. When a database is copied or migrated, data files are copied in parallel, so the duration of the operation depends primarily on the size of the largest file in the database rather than on total database size. In serverless compute, automatic scaling typically does not result in dropping a connection, but it can occur occasionally. Before moving a database off Hyperscale, learn the limitations for reverse migration. Finally, auto sharding (data sharding) is needed when a dataset is too big to be stored in a single database.
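To make the ApplicationIntent=ReadOnly point above concrete, here is a minimal sketch of connecting to a read-only HA replica from Python. It assumes pyodbc and the Microsoft ODBC Driver 18 for SQL Server are installed; the server, database, and credential values are placeholders, not anything from the question.

```python
import pyodbc

# Placeholder connection details; replace with your own server, database, and credentials.
conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;"
    "Database=mydb;"
    "Uid=myuser;"
    "Pwd=<password>;"
    "Encrypt=yes;"
    "ApplicationIntent=ReadOnly;"  # routes the session to a read-only HA replica
)

with pyodbc.connect(conn_str) as conn:
    cursor = conn.cursor()
    # On a read-only replica this returns READ_ONLY; on the primary it returns READ_WRITE.
    cursor.execute("SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability')")
    print(cursor.fetchone()[0])
```

Sessions opened this way run against a secondary replica, so read-only reporting queries do not compete with write traffic on the primary.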
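The sharding remark is a general scaling pattern rather than a Hyperscale feature, but a small sketch shows the idea: the application keeps a shard map and routes each key to one of several databases. The shard names and connection strings below are invented for illustration; Azure's Elastic Database tools provide a more complete implementation of the same pattern.

```python
import hashlib

# Hypothetical shard map: each shard is its own Azure SQL database.
SHARD_CONNECTION_STRINGS = {
    0: "Server=tcp:shard0.database.windows.net;Database=sales_0;...",
    1: "Server=tcp:shard1.database.windows.net;Database=sales_1;...",
    2: "Server=tcp:shard2.database.windows.net;Database=sales_2;...",
}

def shard_for(customer_id: str) -> int:
    """Map a sharding key to a shard number deterministically."""
    digest = hashlib.sha256(customer_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % len(SHARD_CONNECTION_STRINGS)

def connection_string_for(customer_id: str) -> str:
    """Return the connection string of the shard that holds this customer's data."""
    return SHARD_CONNECTION_STRINGS[shard_for(customer_id)]

print(connection_string_for("customer-42"))
```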
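Because serverless auto-scaling can occasionally drop a connection, client code usually wraps its calls in a small retry. The sketch below assumes a simple policy (three attempts with linear backoff), chosen only for illustration, and reuses pyodbc from the earlier example.

```python
import time
import pyodbc

def execute_with_retry(conn_str: str, sql: str, attempts: int = 3, delay_s: float = 5.0):
    """Run a query, retrying if the connection fails (for example during a scaling event)."""
    last_error = None
    for attempt in range(attempts):
        try:
            with pyodbc.connect(conn_str) as conn:
                return conn.cursor().execute(sql).fetchall()
        except pyodbc.OperationalError as err:  # connection failed or was dropped
            last_error = err
            time.sleep(delay_s * (attempt + 1))  # linear backoff before the next attempt
    raise last_error
```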