Apport core dump collection from containers requires many dependencies installed inside the container
We have a number of microservices, published as Docker images, which we run on-premise for local testing and in an AKS Kubernetes cluster.
We would like to automatically collect a core dump with the apport utility when a microservice crashes inside its Docker container.
This led us to the following Dockerfile:
FROM mcr.microsoft.
COPY dotnet.service /lib/systemd/
RUN apt-get update && apt-get install -y --no-install-
systemd \
init \
apport \
python3-systemd && rm -rf /var/lib/
RUN sed -i "s/enabled=
sed -i "s/'problem_
sed -i "s/ConditionVir
RUN systemctl enable apport-
systemctl enable dotnet.service
# our app
COPY . /opt/app
WORKDIR /opt/app
EXPOSE 5000
ENTRYPOINT [ "/sbin/init" ]
which adds ~65MB to the image and runs a bunch of processes needed to start the services and listen on the activation socket /run/apport.socket.
This seems like a lot of machinery just to get core dumps collected by Ubuntu's default core dump interceptor, and it breaks the Docker philosophy of one process per container.
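The reason the host's apport is involved at all is that kernel.core_pattern is a single, system-wide kernel setting that is not namespaced, so every container sees exactly what the host configured. A quick check (a sketch; nothing here is specific to the setup above) shows which handler is active:

```shell
#!/bin/sh
# kernel.core_pattern is global: a container sees the host's value.
# A leading '|' means the kernel pipes the core to a userspace handler
# (apport on a stock Ubuntu host) instead of writing a file.
pattern=$(cat /proc/sys/kernel/core_pattern 2>/dev/null || echo unknown)
echo "core_pattern: $pattern"
case "$pattern" in
  \|*) echo "cores are piped to a userspace handler" ;;
  *)   echo "cores are written to a file pattern" ;;
esac
```

Running this inside the container and on the host should print the same pattern, which is why a handler installed on the host can in principle see container crashes.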
We were able to comment out some of the container-detection code introduced in commit 869366238 in data/apport (Brian Murray, 2017-11-20 08:46:52 -0800), and with that change we successfully get the core dump via the host's apport (no apport was installed inside the container in this experiment).
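That experiment can be reproduced with a one-line crash: terminate a process with SIGSEGV and let the kernel hand the core to whatever handler kernel.core_pattern names. The report path in the comment below is illustrative, not taken from the question:

```shell
#!/bin/sh
# Allow core dumps, then force a segfault in a throwaway shell.
ulimit -c unlimited 2>/dev/null || true
sh -c 'kill -SEGV $$'
status=$?
echo "exit status: $status"   # 128 + 11 (SIGSEGV) = 139 on Linux
# With the host apport handling the crash (container detection disabled),
# a report should appear under /var/crash on the HOST, e.g.
# /var/crash/_opt_app_myservice.0.crash (illustrative name).
ls /var/crash 2>/dev/null || true
```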
So we are interested in simplifying the process of collecting core dumps inside containers.
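One possible simplification (our own assumption, not something proposed in the question): bypass apport entirely and have the kernel write plain core files into a directory that is bind-mounted into the container. Since kernel.core_pattern is global, it must be set on the host; the `myservice` image name and paths below are illustrative:

```shell
# On the HOST (core_pattern is not namespaced, so this affects everything):
sysctl -w kernel.core_pattern=/cores/core.%e.%p.%t
mkdir -p /var/lib/cores

# Run the container with core dumps enabled and /cores bind-mounted.
# A non-pipe core_pattern is resolved in the crashing process's mount
# namespace, so /cores inside the container maps to the host directory,
# and crashes land on the host with no in-container apport or systemd:
docker run --ulimit core=-1 -v /var/lib/cores:/cores myservice
```

The trade-off is that this disables apport's report generation for the whole host, not just for containers.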
Why does apport need to be running inside a container in order to collect core dumps from that container?
Question information
- Language: English
- Status: Expired
- For: Ubuntu apport
- Assignee: No assignee