I have two YaCy instances, both running in Docker containers: a junior peer running locally on my laptop, and a senior peer on a public remote server. I want to make sure the entire index crawled by the junior peer is used by the senior peer. I followed the Import/Export instructions:
- Internal Index Export → XML (Rich and full-text Solr data, one document per line in one large xml file, can be processed with shell tools, can be imported with DATA/SURROGATE/in/)
- Copy it to DATA/SURROGATES/in on the mounted volume of the remote server.
- Check that the md5sums are equal.
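The copy-and-verify steps above can be sketched roughly as follows. This is a local simulation only: the directory paths, the export filename, and the use of `cp` (standing in for an `scp`/`rsync` to the remote server) are all placeholders, not details from my actual setup.

```shell
#!/bin/sh
# Sketch of the copy-and-verify step; paths and filenames are hypothetical.
set -e
SRC_DIR=$(mktemp -d)   # stand-in for the junior peer's export location
DST_DIR=$(mktemp -d)   # stand-in for DATA/SURROGATES/in on the senior host

# stand-in for the XML file produced by the Internal Index Export
echo '<doc>example surrogate record</doc>' > "$SRC_DIR/export.xml"

# copy step (in reality: scp export.xml senior:/path/to/DATA/SURROGATES/in/)
cp "$SRC_DIR/export.xml" "$DST_DIR/export.xml"

# compare checksums on both sides to verify the transfer
SRC_SUM=$(md5sum "$SRC_DIR/export.xml" | awk '{print $1}')
DST_SUM=$(md5sum "$DST_DIR/export.xml" | awk '{print $1}')
if [ "$SRC_SUM" = "$DST_SUM" ]; then
  echo "checksums match"
else
  echo "checksum mismatch" >&2
  exit 1
fi
```

In my case the checksums did match, so the file itself arrived intact.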
Nothing happened. Nothing is shown on the Surrogate import page (CrawlResults.html?process=7).
My questions are:
- Am I doing it wrong?
- If it’s a bug, can you help me open a bug issue?
- Is the import/export procedure the only way to guarantee the index survives when the server is shut down? Is that true for junior peers only, or for both junior and senior?
P.S. Although the XML export description says DATA/SURROGATE/in (without the trailing S), I used DATA/SURROGATES/in, because of this log message:
SWITCHBOARD surrogates.in Path = /opt/yacy_search_server/DATA/SURROGATES/in