Yes. As defined by the industry, “each object includes the data itself, a variable amount of metadata, and a globally unique identifier.” However, DeepSpace is not a key-value store layered over SDS (software-defined storage) or a Swift-style implementation.

DeepSpace is based on the IEEE Mass Storage Reference Model for large-scale storage management. DeepSpace is a file repository that stores files as files, but with a unique global reference name. Inside DeepSpace, files are stored individually with user-selectable metadata formats (which we refer to as label types, e.g., ANSI, DSCM). The metadata for a file is stored redundantly just before the start of the file and just after its end, and volume metadata is written at the beginning and end of each volume.

There are multiple methods for placing and retrieving files in the DeepSpace archive, including a “C” API, the command line, and GUI-directed interactions with DeepSpace’s archive layers. In addition, DeepSpace integrates with native file systems using the fanotify kernel facility. This extends the primary file system by allowing DeepSpace to continuously archive data as it changes in the managed file systems and to effectuate tiering across storage systems in a manner fully transparent to the users and applications that read and write files through the native POSIX file system.

In many cases, no. DeepSpace can read ANSI and IBM standard tape labels natively and can recover all standard metadata, adding it to the catalog. Foreign tapes are added as new volumesets, with the operator providing an arbitrary volumeset name as well as the VSNs (Volume Serial Numbers). If these volumesets are spanned (up to 10,000 physical volumes may exist in any one volumeset), DeepSpace will handle all volume transitions between tapes, including drive allocation and mount processing between volumes. It also hot-swaps the I/O, so the last volume can be rewinding on one drive while a free drive is allocated for the next tape in the set.
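The hot-swap behavior at a volume transition can be illustrated with a small sketch. This is not DeepSpace source; the names (`read_spanned_volumeset`, the drive identifiers) are hypothetical, and the point is only the allocation pattern: each new volume takes a free drive while its predecessor rewinds asynchronously.

```python
# Illustrative sketch: hot-swapping drives across a spanned volumeset so
# the next tape mounts on a free drive while the last one is still rewinding.
# All names here are invented for illustration.
from collections import deque

def read_spanned_volumeset(volumes, free_drives):
    """Yield (volume, drive) pairs, swapping to a free drive at each transition."""
    free = deque(free_drives)
    rewinding = deque()          # drives whose previous tape is still rewinding
    for vol in volumes:
        if not free:             # reclaim the oldest rewinding drive if needed
            free.append(rewinding.popleft())
        drive = free.popleft()
        yield vol, drive         # I/O proceeds on this drive...
        rewinding.append(drive)  # ...then the volume rewinds asynchronously

plan = list(read_spanned_volumeset(["VOL001", "VOL002", "VOL003"],
                                   ["drive0", "drive1"]))
# With two drives, consecutive volumes alternate drives, so each tape can
# rewind while its successor is already being read.
```

With two free drives, the plan alternates `drive0`/`drive1`, which is exactly the overlap that keeps a spanned read streaming across end-of-media boundaries.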

Once the physical volumes are added, the catalog knows only about the volumeset and its constituent volumes, but technically the files can still be read by specifying them individually using file sequence numbers, which may in some cases be enough. However, our standard CONOP would be to run a volumeset report with a scan option. That process scans the tapes, reading just the four 80-byte HDR1–HDR4 labels the label standard places between tape files, and adds the full file metadata to the DeepSpace catalog. This in turn makes the individual files members of our catalog service, with the file name, generation, and all other standard label metadata extracted in the scan process.
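To show what the scan actually recovers, here is a minimal parse of an 80-byte HDR1 label using the fixed field offsets from the ANSI/ECMA-13 standard-label layout. The field positions are the standard ones, but this is an illustration of the technique, not DeepSpace's actual scanner.

```python
# Sketch: extract file metadata from an 80-byte ANSI HDR1 tape label.
# Offsets follow the ANSI X3.27 / ECMA-13 fixed-field layout.

def parse_hdr1(label: bytes) -> dict:
    s = label.decode("ascii")
    assert s[0:4] == "HDR1", "not an HDR1 label"
    return {
        "file_identifier": s[4:21].rstrip(),   # 17-char file name
        "file_set_id":     s[21:27].rstrip(),  # volumeset identifier
        "file_section":    int(s[27:31]),      # section of a spanned file
        "file_sequence":   int(s[31:35]),      # position of the file on tape
        "generation":      int(s[35:39]),
        "generation_ver":  int(s[39:41]),
        "creation_date":   s[41:47],           # " yyddd" Julian form
        "expiration_date": s[47:53],
    }

# A synthetic, correctly padded 80-byte HDR1 label for demonstration:
hdr1 = (b"HDR1" + b"PAYROLL.DAILY    " + b"VOLSET"
        + b"0001" + b"0042" + b"0001" + b"00"
        + b" 24032" + b" 99365" + b" " + b"000000"
        + b" " * 20)
meta = parse_hdr1(hdr1)
```

A scan like this touches only the labels, which is why the catalog can be populated with full file metadata without streaming any file data.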

So in this last scenario, the standard command-line process to retrieve a file, now that it is fully cataloged, is to request access to the volumeset. Where user and volumeset permissions allow, DeepSpace will perform a device allocation to a compatible drive and execute a library mount or send an operator mount request, depending on whether the tape is an automated library member or a shelf tape. Subsequent dsread commands are then executed using the extracted file ID. The data stream from the read can be passed using STDIO redirection, or it can be directed to output the file data into a file system location specified at the command line.

Yes. DeepSpace follows the mainframe convention of allowing the user to customize their operating environment by strategically placing user-coded site exits in the code path wherever site-specific customization may be required. In this case there are site exits both for bringing a volume online when it is accessed and for taking it offline when I/O processing is complete. With automated tape, a site exit is provided for each tape drive; it points to an executable that is run to perform the mount, called with a list of arguments specifying all the options the site exit may need to do its job.

So as long as you have some ability to drive the robot at the command line, all you have to do is code a site exit for the drive that parses the arguments passed to it and executes your mount command. We typically use the mtx media changer as the base executable and provide scripts that run this small FOSS utility with the correct arguments; mtx in turn executes the SCSI media changer commands to perform the mount/dismount operations. In practice it is quite simple to configure, but it is still a manual setup that has to be done once for each storage system.
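A site exit of this kind reduces to translating the arguments it receives into an mtx invocation. The sketch below is hypothetical — the argument names and ordering are invented, since the actual exit interface is site-configured — but the mtx syntax it builds (`mtx -f <changer> load <slot> <drive>`) is the real utility's.

```python
# Hypothetical site-exit helper: turn mount arguments into an mtx command
# line. In a real exit you would pass the result to subprocess.run().
import shlex

def build_mount_command(changer_dev: str, slot: int, drive_num: int,
                        operation: str = "load") -> list:
    """Build an mtx load/unload command for a SCSI media changer."""
    if operation not in ("load", "unload"):
        raise ValueError("operation must be 'load' or 'unload'")
    return ["mtx", "-f", changer_dev, operation, str(slot), str(drive_num)]

cmd = build_mount_command("/dev/sg3", slot=17, drive_num=0)
print(shlex.join(cmd))   # mtx -f /dev/sg3 load 17 0
```

A dismount exit is the same shape with `operation="unload"`, which is why one small wrapper script per drive is usually all the setup a SCSI-attached library needs.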

Other options for large enterprise libraries that use a network API rather than a SCSI media changer are also supported, using the library control platform software licensed from the library OEM. We have created the software for Oracle libraries using ACSLS™, but we cannot release the sources as they are licensed products; we can provide the binaries under a support contract. Other storage platforms, such as power-down disk storage systems, have also been integrated, so contact us if you have a specific application you need help with.

DeepSpace has a robust metadata structure and query engine. DeepSpace can report on and discover any file based on standard ACLs and other data recorded about the file on ingest. Query on as little or as much information as you have about a file and get back a list of files that meet your search criteria.

As big as you like. DeepSpace is not a block-mapped architecture like a file system; its unit of storage is a logical or physical volume, and a volumeset can have up to 10,000 volumes using the current volume label types. This can be extended even further than 10,000 volumes with another label version (it is just a field-size limit that is arbitrary at this point), but since current LTO-8 stores a minimum of 12 TB of data per volume (and potentially much more with compression), you can already store a single file of at least 12 TB × 10,000, or 120 petabytes. The number of files is limited only by your ability to scale out your RDBMS server implementation.
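The capacity figure quoted above is simple arithmetic, shown here as a back-of-envelope check using native (uncompressed) LTO-8 capacity:

```python
# Single-file capacity ceiling: 10,000 LTO-8 volumes at 12 TB native each.
TB = 10**12
volumes_per_set = 10_000
lto8_native_bytes = 12 * TB

max_single_file = volumes_per_set * lto8_native_bytes
print(max_single_file // 10**15, "PB")   # 120 PB
```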

No, but we have a better idea. DeepSpace is a repository with rich metadata and direct query capabilities. As files are created, they can be pushed into DeepSpace with very descriptive file attributes and delimited comments (64 KB worth). Analytic systems can be configured to query DeepSpace and seed each node with data to be worked on. When they are done, the results can be pushed back into DeepSpace for use at another time or for interaction with other systems in other clusters.

It depends on your SSO's requirements and directives. DeepSpace can report on everywhere that file is and was. You can decide to destroy the file, the volume, the volume group, or just the crypto key. We give you the reporting tools and utilities to do your job.

No. DeepSpace is intended to be an exascale centralized repository in the form of an archival namespace with rich attributes and customizable metadata. It is designed to let users and datacenter architects use different servers and hosts as they were intended to be used, with access to your data wherever and whenever you need it, without worrying about where it is. We ease the user experience by moving data back and forth between the file system and DeepSpace using Hierarchical Storage Management technology, so you can essentially extend your current favorite file system(s) to be DeepSpace clients; think of them as working-set caches that let you interact transparently with the archive namespace.

A minimal set of requirements must be met to use an existing file system type with our Hierarchical Storage Management daemons: fanotify kernel support, which comes with any reasonably recent upstream Linux kernel, and support for extended attributes in the file system. The latter point does make the HSM daemons file system specific, in that the user library for getting and setting extended attributes is particular to the file system implementation; at this time we have implemented only the XFS extended attribute library. But supporting any other file system's extended attribute user library is fairly trivial, as long as the file system allows some latitude in the amount of space available for these attributes. Just be advised that some older file systems, such as the ext variants, have extended attributes but very small permissible payload sizes; if the attributes also need to be shared with other storage services such as NFS, there may not always be enough room left over for our HSM implementation.

Absolutely not! DeepSpace includes a fully distributed I/O subsystem in its data mover services. To explain, let's assume you already have a Fibre Channel SAN architecture. You will want to configure some or all tape drives for exclusive use by DeepSpace (that isn't strictly the case with ACSLS-based libraries, but set that aside for now). Next you will configure your SAN with a wide-open zone allowing every tape host to see every tape drive. DeepSpace will control device allocation at the master server, but the I/O path from any host to any drive will be implemented through our data mover, so the path will be SCSI over the switched SAN architecture. The same data can move over a network socket if a SAN connection is not available on that host, and you can choose which SAN-connected host performs the channel I/O portion of the data transfer.

Yes. DeepSpace is location aware. Business-based rules define when and where files are copied or relocated.

DeepSpace can manage volume group rules that are granular enough to allow for this scenario. When and where files are replicated to, or migrated from, is configurable via rule-based policy.

Yes, in several ways. DeepSpace can be used by the guest OS as a source or destination for files, AND DeepSpace can be a target for Cinder backup images via NFS.

Yes. As soon as one of the federated systems closes a file on write, the other systems in the federated namespace are notified that their local copy (if they have one) is stale, causing the file contents to be purged and leaving the file in a non-resident state. A user accessing the same file on a federated peer will cause a re-stage of the file data from the new file generation, using the newly archived copy.
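The invalidate-then-restage cycle can be modeled in a few lines. This is a toy model, not DeepSpace code: the `Peer` class, its method names, and the dict-based "archive" are all invented to show the state transitions (close-on-write → peers purged → data fault → restage of the new generation).

```python
# Toy model of federated staleness handling. On close-on-write, every
# other peer purges its local copy (non-resident); the next open on a
# peer restages the new generation from the archive.

class Peer:
    def __init__(self, name):
        self.name = name
        self.resident = {}        # path -> (generation, data)

    def close_on_write(self, path, data, peers, generation):
        self.resident[path] = (generation, data)
        for p in peers:                          # notify federated peers
            if p is not self:
                p.resident.pop(path, None)       # purge stale local copy

    def open(self, path, archive):
        if path not in self.resident:            # data fault -> restage
            self.resident[path] = archive[path]
        return self.resident[path]

archive = {}
a, b = Peer("a"), Peer("b")
peers = [a, b]
a.close_on_write("/data/report", b"v2", peers, generation=2)
archive["/data/report"] = a.resident["/data/report"]   # archive copy made
gen, data = b.open("/data/report", archive)            # restages generation 2
```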

Yes. DeepSpace can utilize most tape, automated library, and block-based disk already on your datacenter floor.

It depends. DeepSpace manages a wide array of media types and tier types and can be configured to ingest on very fast or very slow media depending on the user, the server, the application, etc. So you can design a system that works in microseconds and streams data in at GB/s to TB/s rates, or you can have a system that takes minutes or hours to complete a write or read. Budget and SLA requirements are all taken into account when designing a DeepSpace implementation.

Yes, DeepSpace has a robust query engine. This allows you to build automated reports for user, group, hostname and application capacity usage.
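A capacity-usage report of this kind is ordinary aggregate SQL. The sketch below runs against an in-memory SQLite database with an invented `archive_files` schema — the table and column names are hypothetical, not DeepSpace's actual catalog layout — but the query shape (GROUP BY owner, SUM of bytes) is the pattern such reports use.

```python
# Illustrative per-user capacity report over a hypothetical catalog table.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE archive_files (
    owner TEXT, hostname TEXT, app TEXT, bytes INTEGER)""")
con.executemany("INSERT INTO archive_files VALUES (?, ?, ?, ?)", [
    ("alice", "hpc01", "genomics", 4 * 10**9),
    ("alice", "hpc02", "genomics", 6 * 10**9),
    ("bob",   "hpc01", "render",   2 * 10**9),
])

rows = con.execute("""
    SELECT owner, SUM(bytes) AS total_bytes
    FROM archive_files
    GROUP BY owner
    ORDER BY total_bytes DESC""").fetchall()
# rows == [('alice', 10000000000), ('bob', 2000000000)]
```

Grouping by `hostname` or `app` instead gives the other report dimensions mentioned above.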

Yes, the DeepSpace administrator can set up specific volume and volumegroup quotas or on-demand automated growth at a very granular level.

Client System:

Minimal requirements for clients are no greater than those of any database client application, plus a version 3.10 or later standard Linux kernel. Presently we develop and test only against RHEL/CentOS 7.x, but we will add and test other client OSes as needed by our customers.


The master DeepSpace server is where device allocation and mount request processing take place. This applies to any archive target; disk and tape targets follow exactly the same code path, as both are shared dynamically between systems. The master communicates with an external RDBMS server for its catalog implementation, but so does every client, so it is not a bottleneck. Still, it should be a robust machine with sufficient memory.

In practice, with a lot of activity and a heavy workload, seeing ~20 GB of memory in use is common in an environment where the master server and MariaDB/MySQL are cohosted on the same machine, the catalog size is ~5 million files or less, and the file change rate is up to ~50,000 per hour. If the site requirements are substantial and the file count needs to scale to 10 million files and beyond, we recommend a separate RDBMS implementation; at that point this becomes a database implementation question more than a discussion of anything unique to DeepSpace. You basically need to scale your RDBMS server to meet the needs of an OLTP environment that matches your enterprise size and workload, and it is primarily a matter of how often files are changing and being added in your enterprise.

As a rule of thumb, if you start to see any lag in file search capability, or the GUI starts to show a lack of responsiveness, have your database administrator look into the database server workload and scaling. In other words, check your slow query log, let us know what you are seeing for any queries that exceed 500 ms, and we'll work our way down from there.


Ideally, the enterprise tape implementation will be on a Fibre Channel SAN with a wide-open, tape-I/O-only zone that attaches every tape host requiring regular tape I/O to every tape drive. Our product direction is toward a converged network with RDMA bypass as supported by the latest converged network adapters. In particular, our System Monitoring Facility is implemented on an ultra-low-latency RDMA message queue that provides time-synchronized, query-based event logging across distributed systems. If you are planning a really large deployment, open a conversation with us early in the process.

All managed file systems will provide continuous protection for all file data once a data management policy is attached to the file system(s). The retention policy can be set to keep any number of prior file versions from 0 to 9,999. The creation of archive copies will happen asynchronously, starting within a second or two of the files being closed on write.

Any file, file system, or collection of files can be rolled back to any point in time near-instantly by any user with the applicable privileges, at the command line or through the GUI. This rollback speed is achieved without data movement; instead, we simply purge the data portion of the file and reset the file's extended-attribute pointer to a prior generation. Once the file is re-opened by a user, it is rehydrated at the prior generation automatically. We know of no failure mode that can cause data loss outside the very short window of vulnerability while the file data is being archived.
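The purge-and-repoint mechanism can be sketched as a small state machine. This is a simplified model, not DeepSpace's implementation: the extended attribute is modeled here as a plain dict field, whereas the real HSM daemons use file-system extended attributes, and all names are invented.

```python
# Simplified model of pointer-based rollback: purge the resident data,
# reset a generation pointer, rehydrate lazily on next open.

class ManagedFile:
    def __init__(self, archive):
        self.archive = archive                  # generation -> archived bytes
        self.xattr = {"generation": max(archive)}
        self.data = archive[self.xattr["generation"]]   # resident copy

    def rollback(self, generation):
        assert generation in self.archive
        self.data = None                        # purge: instant, no data moved
        self.xattr["generation"] = generation   # repoint to prior generation

    def open(self):
        if self.data is None:                   # data fault -> rehydrate
            self.data = self.archive[self.xattr["generation"]]
        return self.data

f = ManagedFile({1: b"draft", 2: b"final"})
f.rollback(1)                                   # instant, regardless of size
restored = f.open()                             # rehydrated at generation 1
```

Because `rollback` only touches metadata, its cost is independent of file size; the data movement is deferred to the first open afterward.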

Our first public release includes automated database dumps to tape, and we strongly suggest at least one tape drive be configured for this purpose. The drive won't be dedicated to this purpose unless desired (as with a dedicated audit-trail device). Backups to Ceph are also supported, but not recommended as the only copy if you are serious about data protection. DeepSpace also provides support for physical tape vaulting or S3 electronic vaulting of metadata.

In a catastrophic situation where no metadata copies have been made or are available, DeepSpace can reconstitute its entire metadata space from its archive copies without reading any data beyond one of the two copies of the 320-byte ANSI/IBM label, or 512-byte DSCM label, written before and after every file and volume as a separate file. Even in a tape library environment, DeepSpace exploits the standard tape label format to read just this label and “file skip forward” over the data, recovering the metadata as quickly as possible.

Exactly what happens on your file systems as they exist now, assuming you leave every file resident all the time. If you choose, as we expect, to take advantage of automated capacity management, it becomes more complicated: it is a matter of how you choose to implement your archive system. In practice, using a disk-based object store, data fault servicing (what happens when a non-resident file is opened) feels almost as fast as opening a resident file from the user's perspective, but there is a lag that will clearly be seen when thousands or millions of files need to be restaged in order to search the contents of each.

Sort of. We configure open-source MinIO S3 servers with our data management policies behind the MinIO target file system(s). In the future we plan to integrate MinIO server deployment into our GUI. Later versions will map data management into the available S3 command set to the degree possible, but this is a product future at this time.

Not yet, but we recognize the need to create aggregated data sets in the form of blobs and will support this in a future release.

Yes! We gave this subject a lot of attention and are very proud of the implementation. The reclamation/transcription/compaction process fully supports third-party copy directly from source to target, requires zero metadata updates (our Generation Data Group implementation makes this possible), is fully preemptible (data faults/restages will preempt a target being transcribed or compacted), and has built-in checkpoint restart for those situations.
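Checkpoint restart under preemption reduces to recording a durable restart point after each unit of work. The sketch below is entirely illustrative — the function name, the checkpoint dict, and the `preempt_after` knob (standing in for a real data-fault interrupt) are all invented — but it shows why a preempted transcription pass can resume without redoing completed copies.

```python
# Sketch of a preemptible, checkpointed transcription pass: record the
# index of the last file copied so a preempted run resumes where it left off.

def transcribe(files, copied, checkpoint, preempt_after=None):
    """Copy files in order, resuming from checkpoint['index']; return True if done."""
    start = checkpoint.get("index", 0)
    for i in range(start, len(files)):
        if preempt_after is not None and i - start >= preempt_after:
            return False                 # preempted (e.g. by a data fault)
        copied.append(files[i])          # third-party copy: source -> target
        checkpoint["index"] = i + 1      # durable restart point
    return True

files = ["f1", "f2", "f3", "f4"]
copied, ckpt = [], {}
done = transcribe(files, copied, ckpt, preempt_after=2)   # preempted mid-pass
done = transcribe(files, copied, ckpt)                    # restart completes
# copied ends up containing all four files exactly once
```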

Yes, but you will find everything you need either in the GUI today or via direct SQL query against our database tables.

We have everything you need to track space and movement, but you may have to build your own database queries depending on your specific chargeback needs. It is all standard SQL, though, and we can build your queries for you if you like, or help you integrate the requisite queries into your current chargeback system.

Yes! We love showing this off! Physical vaulting of bulk tapes by volumeset is supported, but what is really interesting is the way we create export volumes to take data offsite selectively, using export sets, for those times when you want to share or transfer custodial duties for select data sets with another site, not just cold-vault them.

Using the GUI, you can transcribe data archived on any storage platform to standard-label tape(s), spanning up to 10,000 physical volumes per volumeset (we handle all the end-of-media spanning issues transparently). DeepSpace will then eject the media from your library, remove the original file system copies if desired, and print QR labels for every tape. You can scan the QR code to view all files on the media from any remote location using your phone, tablet, or any computer with a camera, and, if you like, automatically copy the volume metadata to another DeepSpace server at another site using a secure S3 transfer between the local and remote DeepSpace master catalog servers.

Secure sites can alternatively create a metadata dump file for physical transfer if a secure network is not available between servers and S3 encryption isn’t “good enough” even for metadata only.

For tape transfer to a site not running DeepSpace, we provide an open source tape reader that works in either standalone or library environments to fully automate the data transfer process at the remote end.