Video & Image Storage Solutions
Fast shared storage for active projects, long-term archive you can trust, and predictable costs for teams working with large video and image files.
Built for Teams Who Live in Large Files
If your business produces a steady stream of footage and large images, this setup is built for you.
Production & Post
Pre‑ and post‑production video workflows where projects generate terabytes per week.
In‑House Marketing
Growing libraries of campaign footage, product photography, and social content.
Large Format & Print
High‑resolution image files and artwork archives that need to be available years later.
Any Project‑Based Media
Any environment where client work arrives as jobs, is delivered, and must stay accessible.
The Problems We See
Running out of space, juggling multiple systems.
Editors working off a mix of NAS, workstations, and loose drives. Nothing central, nothing future‑proof.
Slow access when multiple people are working.
Peak throughput numbers look good on paper, but IOPS and real‑world workloads tell a different story.
Cloud bills creeping up every month.
Cloud storage seemed simple at first, but as your library grows the monthly number becomes hard to ignore.
“Clients are normally juggling various storage systems they have working together and running out of space.”
“Cloud services are killing them on storage.”
A Cost Story That Makes Sense for Media
Large media libraries and long retention are where on‑premise storage and tape really shine.
As an example, storing around 100 TB long‑term in a major cloud can easily land around a couple of thousand dollars per month on a multi‑year commit. With our tape‑backed archive, the underlying hardware lives in our racks and is bundled into a managed service rather than a pile of equipment your team has to look after.
For the infrastructure we run, the tape hardware typically pays for itself in roughly 5–8 months compared to equivalent cloud storage — and we are still running tape drives that are more than eight years old with nothing more than routine maintenance. The payback arithmetic is sketched below.
Real‑world numbers we see
- Cloud example: roughly $2,000/month over three years for 100 TB‑class workloads.
- Underlying tape hardware typically pays for itself within 5–8 months in our environment.
- Tape media costs stay flat, avoiding the recurring price increases common in cloud storage.
“In about 5–8 months the tape hardware has effectively paid for itself compared to cloud.”
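To make the payback concrete, here is a minimal break-even sketch using the cloud figure above. The hardware and ongoing costs are illustrative assumptions for this example, not a quote.

```python
# Back-of-the-envelope payback model using the cloud figure above.
# Hardware and ongoing costs are illustrative assumptions, not a quote.
CLOUD_MONTHLY = 2_000    # ~$2,000/month for a 100 TB-class cloud workload
TAPE_HARDWARE = 12_000   # assumed one-time cost: drive, autoloader, media
TAPE_MONTHLY = 150       # assumed ongoing cost: added media, maintenance

month, cloud_total, tape_total = 0, 0, TAPE_HARDWARE
while tape_total > cloud_total:
    month += 1
    cloud_total += CLOUD_MONTHLY
    tape_total += TAPE_MONTHLY

print(f"Tape pays for itself in month {month}")  # month 7 with these inputs
```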
How Projects Move Through the System
Everything is organized around jobs, so you can always find what you need later.
Ingest to fast ZFS storage
New footage and image assets land on the ZFS array. Editors and artists work directly from this central storage instead of passing drives around.
Shared access during production
Multiple editors and artists access the same project folders over the network with the performance profile matched to your workload.
Snapshots while work is active
Throughout the active phase, ZFS snapshots capture point‑in‑time copies so we can roll back if something is deleted, overwritten, or goes sideways mid‑edit.
Job closed and shuttled to tape
When a job is complete, it is marked read‑only and written to tape. The job database and indexing software know exactly which tapes hold which projects.
On‑demand restore for future work
When a client comes back years later, we look up the job, load the required tapes into the autoloader, and stream the project back to the array ready to use.
Why the job database matters
The key to making this work is organization. Everything is tracked as a job — open, in progress, or closed — with a predictable folder structure.
When a job is on tape, the software tells the operator exactly which tapes to load. There is no guesswork or manual searching involved.
“The key with any storage tiering is things must be organized. A job database is a must.”
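For the curious, the lookup at the heart of a job database has a very simple shape. This is an illustrative sketch with hypothetical table and column names, not our production software:

```python
import sqlite3

# Minimal sketch of a job-to-tape index. Table and column names are
# hypothetical; the real indexing software is more involved, but the
# lookup it performs has this shape.
db = sqlite3.connect("jobs.db")
db.executescript("""
CREATE TABLE IF NOT EXISTS jobs (
    job_id TEXT PRIMARY KEY,
    client TEXT,
    status TEXT CHECK (status IN ('open', 'in_progress', 'closed'))
);
CREATE TABLE IF NOT EXISTS job_tapes (
    job_id  TEXT REFERENCES jobs(job_id),
    barcode TEXT  -- physical label on the tape the operator loads
);
""")

def tapes_for_job(job_id: str) -> list[str]:
    """Return the tape barcodes the operator needs to pull for a job."""
    rows = db.execute(
        "SELECT barcode FROM job_tapes WHERE job_id = ?", (job_id,)
    ).fetchall()
    return [barcode for (barcode,) in rows]
```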
Reliability You Can Bet Client Work On
This whole setup is built around one simple requirement: don’t lose the footage.
Data integrity by design
- ZFS checksums every block of data to catch silent corruption.
- Weekly scrubs read all data and repair from redundant copies where needed.
- Snapshots give you a clean point‑in‑time to roll back to if something goes wrong.
“We have never had corrupted data. ZFS has built in protections for silent data loss.”
In practice the chance of corruption is extremely low because ZFS is designed to detect and correct it.
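Under the hood, the weekly scrub boils down to two standard ZFS commands. A sketch, with "tank" as a placeholder pool name:

```python
import subprocess

# Kick off a scrub: ZFS reads every block, verifies its checksum, and
# repairs from redundancy where needed. "tank" is a placeholder pool name.
subprocess.run(["zpool", "scrub", "tank"], check=True)

# Check progress and any repaired errors once the scrub is underway.
status = subprocess.run(
    ["zpool", "status", "tank"], capture_output=True, text=True, check=True
)
print(status.stdout)
```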
Long‑lived tape archive
- Inline verification as data is written — read head follows the write head.
- Tapes routinely read successfully after 10+ years in service.
- No observed bit rot on magnetic tape over decades of working with it.
“I pull from tapes over 10 years old without issue. I have never seen bit rot on a magnetic tape.”
Designed for failure scenarios
- RAIDZ2 on each VDEV so you can survive drive failures while staying online.
- Dual‑array replication options when you reach very large data sets.
- Tape layer is offline, encrypted, and naturally air‑gapped.
For larger environments we often deploy a pair of arrays with regular replication, and then offload completed work to tape. That way, recovery time after a major event stays realistic.
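A minimal sketch of what array-to-array replication looks like with standard ZFS send/receive. Pool, dataset, and host names ("tank", "backup/jobs", "array2") are placeholders, not our production layout:

```python
import subprocess

# Snapshot the jobs dataset, then replicate it to a second array over SSH.
SNAP = "tank/jobs@replica-2024-06-01"
subprocess.run(["zfs", "snapshot", "-r", SNAP], check=True)

send = subprocess.Popen(["zfs", "send", "-R", SNAP], stdout=subprocess.PIPE)
subprocess.run(
    ["ssh", "array2", "zfs", "receive", "-F", "backup/jobs"],
    stdin=send.stdout, check=True,
)
send.stdout.close()
send.wait()
# Subsequent runs would send only the changes with `zfs send -i`.
```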
How We Build Media Storage That Lasts
A simple idea underneath: fast working storage for active jobs, and a tape layer that lets you keep everything.
Fast working storage (ZFS arrays)
Live projects sit on ZFS storage — sized for your editors and built for big files. A configuration sketch follows this list.
- RAIDZ2 layouts so you can tolerate multiple drive failures in a VDEV without losing work.
- Mirrored SSD write cache for smooth ingest and responsive timelines.
- Large record sizes tuned for big video and image files.
- Systems that scale from 12 bays to well over 50 bays as your library grows.
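As a rough illustration of that layout, here is how such a pool might be created. Device names are placeholders for your actual disks and SSDs:

```python
import subprocess

# Sketch of a working-storage pool matching the list above. Device names
# are placeholders; vdev width is sized to the chassis.
subprocess.run([
    "zpool", "create", "tank",
    "raidz2", "sda", "sdb", "sdc", "sdd", "sde", "sdf",  # survives two failed drives
    "log", "mirror", "nvme0n1", "nvme1n1",               # mirrored SSD write log
], check=True)

# Large records suit big, mostly-sequential video and image files.
subprocess.run(["zfs", "set", "recordsize=1M", "tank"], check=True)
```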
Long‑term archive (LTO tape)
When a job is closed, it moves to tape — where you keep it, not prune it.
- LTO‑8 and LTO‑9 autoloaders for hands‑off operation and years of growth.
- Up to 400 MB/s per drive on LTO‑9 — ideal for large linear restores.
- The read head sits directly behind the write head, so every tape is verified as it is written.
- Encrypted, offline, and air‑gapped once they are off the array.
Snapshots for everyday safety
On the live array, ZFS snapshots give you “undo history” for real‑world mistakes. A snapshot-and-recovery sketch follows this list.
- Lightweight point‑in‑time copies taken on a regular schedule.
- Recover projects or folders if files are deleted or overwritten.
- Works alongside weekly scrubs to catch and correct silent corruption.
- For older jobs, tape becomes the second layer of recovery.
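A minimal sketch of the snapshot-and-recover routine, assuming a placeholder "tank/jobs" dataset and an example job name:

```python
import subprocess
from datetime import datetime

# Take a recursive, point-in-time snapshot of all job datasets.
# "tank/jobs" is a placeholder dataset name.
stamp = datetime.now().strftime("%Y-%m-%d-%H%M")
subprocess.run(["zfs", "snapshot", "-r", f"tank/jobs@auto-{stamp}"], check=True)

# Recovery rarely needs a full rollback: every snapshot is browsable
# read-only under the hidden .zfs directory, so a deleted file can
# simply be copied back, e.g. from:
#   /tank/jobs/.zfs/snapshot/auto-2024-06-01-0300/acme-spot/edit.prproj
```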
Grows with Your Team and Library
Start at a sensible size, then add capacity and performance as your workload grows.
Scaling the ZFS array
With ZFS we grow by adding VDEVs. Our systems can easily support 50+ bays, and new data is automatically distributed across the expanded pool. A growth sketch follows the list below.
- Add capacity in sensible building blocks rather than forklift upgrades.
- Option to rebalance for performance when needed, especially in heavy‑use environments.
- Job‑based organization makes it straightforward to move or balance projects.
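Growth itself is a single command per expansion. A minimal sketch, with placeholder device names standing in for the newly installed drives:

```python
import subprocess

# Add one more RAIDZ2 vdev to the existing pool, online, with no downtime.
# Device names are placeholders for the new drives.
subprocess.run([
    "zpool", "add", "tank",
    "raidz2", "sdg", "sdh", "sdi", "sdj", "sdk", "sdl",
], check=True)
# New writes now spread across the expanded pool automatically.
```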
Tape capacity that just keeps going
Tape does not need a forklift upgrade. You keep adding media as your archive grows, keeping cost per terabyte low and predictable.
- Autoloaders make day‑to‑day operation simple, even as the library grows.
- Indexing means you still find projects quickly years down the road.
- Ideal for media environments where “just keep everything” is the safest answer.
Why Work with Dataforge on Media Storage?
This is not a side offering — we run these systems ourselves and have been working with large data for decades.
30+ years with large data
Dataforge has been building and running large storage, tape, and management software for decades — long before “cold cloud storage” was a line item.
“Doing large storage, tape, management software takes years to master.”
We run this gear ourselves
We are not reselling someone else’s platform. You can come to our facility, see the arrays and tape systems, and talk through the architecture in person.
“You can come to our facility and see what we do — we run these systems ourselves.”
A niche most MSPs avoid
Very few MSPs in our region are comfortable with this kind of scale. We are one of the only ones in the Burlington/Hamilton/GTA area doing it day‑to‑day.
“Our competitors in the MSP area don't do this type of work.”
Frequently Asked Questions
Common questions we hear from studios and media teams.
Who is this built for?
This platform is designed for creative teams who live in large files every day:
- Production and post‑production shops handling regular shoots and edits
- In‑house marketing teams with a growing backlog of campaign assets
- Large format and print environments with huge image files and artwork
- Any business where client work arrives as projects and needs to stay accessible for years
If you are measuring storage in terabytes per week, you are exactly who this was built for.
How quickly can archived projects be restored?
For recent projects still on the ZFS working storage, restores are effectively instant — editors are simply pointed at the right folders.
For archived projects on tape, our indexing software tells us which tapes hold the job. We load them into the autoloader and stream the project back onto the array. With LTO‑9 running around 400 MB/s, even large jobs come back in a reasonable window as long as the array is sized to keep up.
In practice, restores are measured in minutes to a few hours depending on how large the project is.
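For a feel of the numbers, here is a quick estimate at LTO‑9's roughly 400 MB/s native drive speed. The 5 TB job size is just an example:

```python
# Restore-time estimate at LTO-9's ~400 MB/s native drive speed.
DRIVE_MBPS = 400          # MB/s for a single LTO-9 drive
job_tb = 5                # example project size in TB

hours = job_tb * 1_000_000 / DRIVE_MBPS / 3600
print(f"~{hours:.1f} hours")  # about 3.5 hours for a 5 TB job
```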
How long does LTO tape actually last?
With proper handling and storage, LTO tape is rated for very long retention periods — the official spec is on the order of 30 years in good conditions. In our own environment we routinely pull from tapes that are more than 10 years old without any issues.
For media teams that need to keep client work available years down the road, tape is a practical way to do that without betting everything on one cloud bill.
What happens if a drive fails?
The arrays we build for media use ZFS with RAIDZ2, which means each VDEV can tolerate the loss of any two drives without losing data. If a drive fails, the system stays online while we replace and rebuild the failed component.
Because ZFS verifies checksums on every read and during scheduled scrubs, it can detect and correct issues during that rebuild rather than only noticing them when you happen to read a file.
Can multiple editors work at the same time?
Yes. The working storage is built as a central array precisely so multiple editors and artists can work at the same time. We size the system around your real workloads instead of a lab benchmark, so you get reliable performance when sessions are busy.
That usually means saying goodbye to juggling external drives and “who has the latest copy?” conversations.
How does the system grow as our library expands?
Many of our media clients start with a modest array and a single tape drive or small autoloader. As their workload grows, we add VDEVs to the ZFS pool for more capacity and performance, and expand tape capacity by adding media.
Because everything is organized around jobs, we can grow in place rather than forcing a disruptive forklift upgrade.
Can deleted or overwritten files be recovered?
Yes. On the live storage we take regular ZFS snapshots. These are lightweight point‑in‑time copies that let us roll back to an earlier state if a project or folder is deleted, overwritten, or damaged.
For older projects that have already moved to tape, we retrieve them from the archive and place them back on the working array ready for a new round of work.
Let’s Talk Media Storage
If you are juggling multiple storage systems and watching costs climb, we can design a media‑first storage setup you can rely on.