Minimum client requirements are no greater than those of any database client application, plus a standard Linux kernel at version 3.10 or later. Presently we develop and test only against RHEL/CentOS 7.x, but we will add and test other client operating systems as our customers need them.
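As a quick pre-flight check, a client's kernel version can be confirmed before installation. The sketch below (Python, purely illustrative) just compares the running kernel against the 3.10 floor stated above; it assumes nothing about DeepSpace itself.

    # Client pre-flight check: confirm the running kernel is 3.10 or later.
    # The 3.10 floor comes from the requirements above; everything else here
    # is generic Python.
    import platform

    def kernel_at_least(major: int, minor: int) -> bool:
        # platform.release() returns e.g. "3.10.0-1160.el7.x86_64"
        parts = platform.release().split(".")
        return (int(parts[0]), int(parts[1])) >= (major, minor)

    if __name__ == "__main__":
        print("kernel OK" if kernel_at_least(3, 10) else "kernel too old")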
The master DeepSpace server is where device allocation and mount-request processing take place. This applies to any archive target: disk and tape targets follow the exact same code path, as both are shared dynamically between systems. The master communicates with an external RDBMS server for its catalog implementation, but so does every client, so the master is not a bottleneck; it should nevertheless be a robust machine with ample memory.
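To make that division of labor concrete, the sketch below shows how a client might read the catalog directly from the shared RDBMS while routing only mount requests to the master. The table name, port number, and wire format are hypothetical placeholders for illustration, not the actual DeepSpace API.

    # Illustrative sketch of the split described above: catalog reads go
    # straight to the shared RDBMS, and only mount/device requests touch
    # the master. catalog_files, port 9876, and the JSON framing are
    # invented for this example.
    import json
    import socket

    import pymysql  # any MySQL/MariaDB client library would do

    def find_file(db_conn, path):
        # Every client queries the catalog directly, which is why the
        # master is not a bottleneck for lookups.
        with db_conn.cursor() as cur:
            cur.execute(
                "SELECT id, media_id FROM catalog_files WHERE path = %s",
                (path,))
            return cur.fetchone()

    def request_mount(master_host, media_id):
        # Only device allocation / mount requests go through the master.
        with socket.create_connection((master_host, 9876)) as sock:
            sock.sendall(json.dumps({"op": "mount", "media": media_id}).encode())
            return json.loads(sock.recv(4096).decode())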
In practice, under a heavy workload it is common to see roughly 20 GB of memory in use when the master server and MariaDB/MySQL are co-hosted on the same machine, the catalog holds about 5 million files or fewer, and the file change rate is up to roughly 50,000 per hour. If site requirements are substantial and the file count needs to scale to 10 million files and beyond, we recommend a separate RDBMS implementation; at that point this becomes a database implementation question more than a discussion of anything unique to DeepSpace. You simply need to scale your RDBMS server to meet the needs of an OLTP environment that matches your enterprise size and workload, which is driven primarily by how often files are changing and being added in your enterprise.
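For a back-of-envelope feel for what those figures mean in OLTP terms, the arithmetic below converts them to a sustained write rate and a raw catalog size. The bytes-per-row value is an illustrative assumption, not a measured DeepSpace number.

    # Back-of-envelope translation of the sizing guidance above.
    # ASSUMED_BYTES_PER_ROW is a hypothetical average, not a measurement.
    CATALOG_FILES = 5_000_000        # catalog size from the guidance above
    CHANGES_PER_HOUR = 50_000        # stated file change rate
    ASSUMED_BYTES_PER_ROW = 1_024    # hypothetical average catalog row size

    writes_per_second = CHANGES_PER_HOUR / 3600            # ~14 writes/s
    catalog_bytes = CATALOG_FILES * ASSUMED_BYTES_PER_ROW  # ~5 GB of rows

    print(f"~{writes_per_second:.0f} sustained catalog writes/s")
    print(f"~{catalog_bytes / 2**30:.1f} GiB of raw catalog rows (before indexes)")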
As a rule of thumb, if file searches start to lag or the GUI shows a lack of responsiveness, have your database administrator look into the database server's workload and scaling. In other words, check your slow query log and let us know what you are seeing for any queries that exceed 500 ms, and we will work our way down from there.
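If the slow query log is not already enabled, the standard MariaDB/MySQL server variables slow_query_log and long_query_time cover it at the 500 ms threshold suggested above. The sketch below simply issues the SET GLOBAL statements over an ordinary client connection; the host and credentials are placeholders, the session needs the SUPER (or equivalent) privilege, and the settings do not persist across a server restart unless also placed in the server configuration file.

    # Turn on the MariaDB/MySQL slow query log at a 500 ms threshold.
    # slow_query_log and long_query_time are standard server variables;
    # the connection details below are placeholders.
    import pymysql

    conn = pymysql.connect(host="db.example.com", user="admin", password="...")
    with conn.cursor() as cur:
        cur.execute("SET GLOBAL slow_query_log = 'ON'")
        cur.execute("SET GLOBAL long_query_time = 0.5")  # seconds, i.e. 500 ms
    conn.close()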
Ideally, the enterprise tape implementation will be on a Fibre Channel SAN with a wide-open, tape-I/O-only zone that attaches every host requiring regular tape I/O to every tape drive. Our product direction is toward a converged network with RDMA bypass as supported by the latest CNE cards. In particular, our System Monitoring Facility is implemented on an ultra-low-latency RDMA message queue that provides time-synchronized, query-based event logging across distributed systems. If you are planning a very large deployment, open a conversation with us early in the process.