Thanks for the great answers. You got me thinking: since we have loads and loads of pages, I think we really need to get rid of the unnecessary pages in EPi. So I guess hiding them or moving them to a (hidden) archive folder is not an option in the long run.
Maybe another approach could be to have a URL crawler (outside EPi) save all the HTML pages, or at least make images of them, like the Wayback Machine does. Is there anyone "out there" who faces the same problem?
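For what it's worth, the crawler idea doesn't need much: a minimal sketch in Python (stdlib only) could walk same-host links from a start URL and write each page to disk, Wayback-style. The start URL, output directory, and page limit below are all placeholders, and a real version would need robots.txt handling, throttling, and asset capture.

```python
# Hypothetical snapshot-crawler sketch: breadth-first walk from a start URL,
# saving each fetched HTML page as a local file. All names are placeholders.
import os
import urllib.parse
import urllib.request
from html.parser import HTMLParser

class LinkParser(HTMLParser):
    """Collects href targets from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html, base_url):
    """Return absolute links from the document that stay on the same host."""
    parser = LinkParser()
    parser.feed(html)
    host = urllib.parse.urlparse(base_url).netloc
    absolute = (urllib.parse.urljoin(base_url, href) for href in parser.links)
    return [url for url in absolute if urllib.parse.urlparse(url).netloc == host]

def snapshot_site(start_url, out_dir, max_pages=100):
    """Crawl breadth-first and write each page to out_dir, one file per URL."""
    os.makedirs(out_dir, exist_ok=True)
    seen, queue = set(), [start_url]
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except OSError:
            continue  # skip unreachable pages rather than abort the crawl
        safe_name = urllib.parse.quote(url, safe="") + ".html"
        with open(os.path.join(out_dir, safe_name), "w", encoding="utf-8") as f:
            f.write(html)
        queue.extend(extract_links(html, url))
```

Running this nightly against the public site would give you dated HTML snapshots without touching the EPi database at all.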
You could, but I guess it depends on how complex your data/block structure is and how easy it would be to restore. What I would probably do is create another Episerver site on a separate domain/server and take daily backups of the databases. If you then needed access to a specific version of the content, you could get it for any day, and you could use the admin import/export tool to export those pages and pull them, along with any associated media/blocks, back into the current solution.

If you wanted to go further, you could create an admin plugin that lists these backups and restores a specific one automatically, for example by executing a PowerShell script. If you're on the DXC this may be difficult, but if you control the servers it's something you can definitely do.
If you don't have the DXC, you could always programmatically hook into Episerver's import/export feature and generate an export of the site tree that could be re-imported later. I think it really depends on whether you want editors/admins to self-service this feature or whether it's something you want to do yourself.
Also, the import/export tool generates a very readable file, so building a simple viewer around it would let you programmatically export these files and import them as needed. Lots of ideas :-)
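To illustrate the "simple viewer" idea: assuming the export package is a ZIP archive containing XML (which is how .episerverdata files are typically laid out; verify against your own exports), even a few lines of Python can list and read what a given backup contains. The file names here are placeholders.

```python
# Hypothetical viewer sketch for an Episerver export package.
# Assumption: the package is a ZIP archive of XML and media entries.
import zipfile

def list_export_contents(package_path):
    """Return the entry names inside an export package."""
    with zipfile.ZipFile(package_path) as package:
        return package.namelist()

def read_entry(package_path, entry_name):
    """Return one entry's contents as text, e.g. the main content XML."""
    with zipfile.ZipFile(package_path) as package:
        return package.read(entry_name).decode("utf-8", "replace")
```

A small admin page built on top of this could show editors what each dated backup contains before anyone decides to re-import it.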
Many of our editors feel uneasy about deleting pages in EPi and choose to unpublish them instead. This is partly because they don't know whether they will reuse the content in the future, and partly because they want to be able to see what they have previously published. Over time this fills the tree with unnecessary old material that clutters it.
I would like the editors to feel completely confident when deleting pages, knowing they can still find their old stuff.
Any ideas how this can be achieved? I'm thinking of something like a connection to some kind of archive that regularly and automatically makes copies of the site.