We’re using AWS Elastic Beanstalk to deploy Matomo from Docker Hub. This has been working well, but we’re having performance issues when launching a new instance.
Some details:
Using the latest matomo:fpm-alpine image (currently 4.2.1), though I’ve also pinned 4.2.0 and 4.1.1 with the same results
We have the following volumes defined in our docker-compose file. These are all Amazon Elastic File System (EFS) NFS shares mounted on the host, so that all containers share the same data and settings.
- /matomo_config:/var/www/html/config:rw
- /matomo_plugins:/var/www/html/plugins:rw
- /matomo_misc_user:/var/www/html/misc/user:rw
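For context, those volumes sit in a compose service roughly like this. The host paths are the EFS mounts listed above; the service name and the rest of the stanza are a hedged sketch, not our exact file:

```yaml
services:
  matomo:
    image: matomo:fpm-alpine
    volumes:
      # Host paths are EFS (NFS) mounts shared by every instance
      - /matomo_config:/var/www/html/config:rw
      - /matomo_plugins:/var/www/html/plugins:rw
      - /matomo_misc_user:/var/www/html/misc/user:rw
```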
We do not have ‘multi_server_environment’ set in the config, since we want to be able to change the plugins and configuration from the UI - hence the shared volumes.
This works, except that the entrypoint script (https://github.com/matomo-org/docker/blob/master/docker-entrypoint.sh) tars up /usr/src/matomo and extracts it into /var/www/html, and that step takes several minutes. I’ve monitored a container after launch, and the primary delay is the plugins directory, which is the largest of the shared network mounts.
I need to address the raw speed with AWS; it shouldn’t be that slow. But it also seems unnecessary to re-copy the shared plugins directory every time a container launches. I could exclude that folder from the tar the entrypoint script creates, but that means maintaining a custom entrypoint and missing future updates to the official one. So I was wondering if anyone had any better suggestions for me.
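If a custom entrypoint does turn out to be the answer, the exclusion itself is just a `--exclude` flag on the tar-pipe copy. The demo below shows the idea on throwaway directories rather than the image’s real /usr/src/matomo and /var/www/html paths, so it can be run anywhere:

```shell
# Demo of the exclusion idea on throwaway directories (not the image's
# real paths): build a small source tree, copy it with a tar pipe, and
# skip ./plugins, which in our case is already populated on the shared
# EFS mount.
set -eu

src=$(mktemp -d); dest=$(mktemp -d)
mkdir -p "$src/plugins" "$src/core"
echo app > "$src/index.php"
echo big > "$src/plugins/SomePlugin.php"

# --exclude keeps tar from re-streaming the large shared directory;
# everything else is copied as the stock entrypoint would copy it.
tar cf - --exclude='./plugins' -C "$src" . | tar xf - -C "$dest"

ls "$dest"
test ! -e "$dest/plugins" && echo "plugins skipped"
```

The trade-off stands, though: this diverges from the upstream docker-entrypoint.sh and has to be re-checked against it on every image update.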
Thanks!
Chuck