Release Notes / Version 12.2404

Support for Blobs Larger Than 2GB

The CMS now supports blobs larger than 2GB (henceforth called “huge blobs”) on external media stores. Huge blobs can be uploaded and downloaded via UAPI clients, serverimport/serverexport, and Studio, and they can be delivered via the CAE and the Headless Server.

Huge blobs are only supported if external media stores are used (e.g., FileStore, S3Store, Blob Service); see Section 3.3, Configuring Blob Storage, in the Content Server Manual. For the Blob Service, refer to https://documentation.coremedia.com/discover/services/image-transformation/. Huge blobs cannot be stored in the database.

Internal tests with a blob of 5GB have been performed to validate the functionality. Depending on network performance and general system stability, larger blobs may be possible. Keep in mind that AWS S3 and file systems impose upper file size limits which must be adhered to; for S3Store and the Blob Service, 5GB is currently the upper limit. Also keep an eye on your service quotas when using S3 or a file system.

“Ranged requests” for large blobs are supported, e.g., to stream video data. Image transformation on huge blobs is not supported as this may easily exceed any reasonably available Java process memory.
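For illustration, the following sketch performs such a ranged download with the standard Java HTTP client. The URL is a placeholder; the Range header is plain HTTP, not a CoreMedia-specific API.

    import java.io.InputStream;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RangedBlobDownload {

      public static void main(String[] args) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://cae.example.com/resource/blob/video.mp4")) // placeholder URL
            .header("Range", "bytes=0-1048575") // request only the first mebibyte
            .build();
        HttpResponse<InputStream> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofInputStream());
        // status 206 Partial Content indicates the server honored the range request
        System.out.println("Status: " + response.statusCode());
        try (InputStream in = response.body()) {
          System.out.println("Received " + in.readAllBytes().length + " bytes");
        }
      }
    }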

API Changes

Unfortunately, supporting huge blobs implies that blob sizes can no longer be represented as int values, but had to be switched to long. This required some changes which affect the public API. You must at least recompile your project's Blueprint workspace against the new CMCC version. Depending on your customizations, you may have to adjust some source code, too.

To access a blob’s size, a new UAPI method com.coremedia.cap.common.Blob#getSizeLong() has been introduced which returns the blob size as a long value. The still existing method Blob#getSize():int has been deprecated and will be removed in a future release. Be aware that the deprecated method fails with an exception when called for a blob of size 2GB and beyond, since an int value cannot represent such sizes (it is limited to java.lang.Integer#MAX_VALUE). To be safe, only the new method Blob#getSizeLong() should be used in UAPI clients and FreeMarker templates. All usages of Blob#getSize() have been replaced by Blob#getSizeLong() in the Blueprint workspace. For backward compatibility, Blob#getSizeLong() is a default method that delegates to Blob#getSize(), so existing custom implementations of the Blob interface will continue to work until Blob#getSize() is finally removed.
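As a minimal sketch of the transition (the helper method and log output are illustrative; only Blob#getSizeLong() is the API described above):

    import com.coremedia.cap.common.Blob;

    public class BlobSizeExample {

      private static final long TWO_GB = 2L * 1024 * 1024 * 1024;

      static void reportSize(Blob blob) {
        long size = blob.getSizeLong(); // safe for blobs of any size
        if (size >= TWO_GB) {
          // blob.getSize() would throw here: an int cannot represent
          // sizes beyond java.lang.Integer#MAX_VALUE (2^31 - 1 bytes)
          System.out.println("Huge blob: " + size + " bytes");
        } else {
          System.out.println("Regular blob: " + size + " bytes");
        }
      }
    }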

The Headless Server API class com.coremedia.caas.model.adapter.ContentBlobAdapter also has a method getSize(), which has been deprecated in favor of a new method getSizeLong(), just like the UAPI Blob interface. The same change has been made in the GraphQL schema, where a new field sizeLong has been added to the Blob interface; the old field size is now deprecated.
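As an illustrative usage, the following sketch posts a GraphQL query for the new sizeLong field to a Headless Server endpoint. The endpoint URL, the content id, and the surrounding query shape (content/picture/data) are assumptions; only the sizeLong field name is taken from this note.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class BlobSizeQuery {

      public static void main(String[] args) throws Exception {
        // GraphQL request body; inner quotes are escaped for JSON
        String body = """
            {"query": "{ content { picture(id: \\"1234\\") { data { sizeLong } } } }"}""";
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://headless.example.com/graphql")) // placeholder endpoint
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
      }
    }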

Furthermore, avoid the use of method Blob#asBytes():byte[] if huge blobs are to be processed: a Java array can hold at most java.lang.Integer#MAX_VALUE bytes and thus cannot hold a huge blob, so calling this method on a huge blob results in an exception being thrown. Ideally, all blobs should be transferred via streams, as sketched below.
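A minimal sketch of such a stream-based transfer, assuming the Blob#getInputStream() accessor:

    import com.coremedia.cap.common.Blob;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class BlobStreaming {

      static void copyToFile(Blob blob, Path target) throws IOException {
        // transferTo copies in fixed-size chunks, so memory usage stays
        // constant regardless of the blob size
        try (InputStream in = blob.getInputStream();
             OutputStream out = Files.newOutputStream(target)) {
          in.transferTo(out);
        }
      }
    }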

The interface com.coremedia.cap.common.BlobService features various blob creation methods which expect a size argument. The type of the size argument has been changed from int to long for all those methods.
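The following stand-in interface is illustrative only; the method and parameter names are assumptions that merely show the shape of the change:

    import com.coremedia.cap.common.Blob;
    import java.io.InputStream;

    // Not the actual BlobService API: a stand-in showing the int -> long change.
    interface ExampleBlobFactory {

      // before: int size, limited to just under 2GB
      // Blob create(InputStream data, String contentType, int size);

      // after: long size, allowing huge blobs
      Blob create(InputStream data, String contentType, long size);
    }

Note that for callers such a change is source-compatible, since Java widens int arguments to long automatically, but it is binary-incompatible, which is one reason the Blueprint workspace must at least be recompiled. Custom implementations of BlobService, on the other hand, need their method signatures adjusted.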

In the interface com.coremedia.cap.common.CapConnectionManager, the methods setMaxCachedBlobSize and getMaxCachedBlobSize have likewise been changed from int to long.
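For example (the 4GB threshold and the assumption that the value is given in bytes are illustrative):

    import com.coremedia.cap.common.CapConnectionManager;

    public class CacheConfiguration {

      static void configure(CapConnectionManager manager) {
        // previously limited to an int; now accepts values beyond 2GB
        manager.setMaxCachedBlobSize(4L * 1024 * 1024 * 1024); // 4GB, assumed to be bytes
        System.out.println("Max cached blob size: " + manager.getMaxCachedBlobSize());
      }
    }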

Database Changes

To represent huge blob sizes, the database schema has been adjusted. Upon first startup, the database schema is migrated automatically. Since this process may take a long time, depending on the size of the existing blob data, it is recommended to record the migration times during a test run on a copy of production data in order to estimate the downtime for the production roll-out.

Additionally, keep in mind that the CMS server is not available for normal operation during such a migration, and take the following precautions:

  • Increase the CMS server's property cap.server.maximum-startup-delay to a value higher than the expected migration time (see the example after this list). This prevents the Spring Boot component of the server from switching to the 'running' state and, if the default Blueprint compose setup is used, prevents dependent containers from starting before the migration is completed.

  • Disable or adjust health check probes to prevent the server from being marked as "unhealthy" and automatically restarted while the migration is still running. Since many databases cannot execute multiple DDL statements in a transactional way, such a restart could leave the schema in an inconsistent upgrade state.

  • Check the connection settings of your databases/JDBC database connections. Since single DDL statements run for a very long time without returning data, socket timeouts for reading data must either be disabled or set large enough to cover the expected migration time.
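An example for the first precaution, as a Spring Boot property (the value is an assumption: six hours, assuming the property is given in seconds; check the Content Server Manual for the exact unit):

    # Must exceed the migration time measured in your test run.
    cap.server.maximum-startup-delay=21600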

Studio Configuration

The UploadSettings have been extended with a new property maxFileSizeMB to allow for the configuration of upload limits beyond 2GB. When present, this setting overrides the old value maxFileSize. The default value of 64MB is unchanged.

(CMS-24179)
