I'm trying to learn docker at the moment and I'm getting confused about where data volumes actually exist.

I'm using Docker Desktop for Windows (Windows 10).

In the docs they say that running docker inspect on the object will give you the source: https://docs.docker.com/engine/tutorials/dockervolumes/#locating-a-volume

$ docker inspect web

"Mounts": [
    {
        "Name": "fac362...80535",
        "Source": "/var/lib/docker/volumes/fac362...80535/_data",
        "Destination": "/webapp",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }
]

However, I don't see this; I get the following instead:

$ docker inspect blog_postgres-data
[
    {
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/blog_postgres-data/_data",
        "Name": "blog_postgres-data",
        "Options": {},
        "Scope": "local"
    }
]

Can anyone help me? I just want to know where my data volume actually exists. Is it on my host machine? If so, how can I get the path to it?

Did you find a solution for seeing where they are "actually" stored? It is very easy to verify in Docker Toolbox: we can log in to the docker-machine and check. However, I haven't yet found a way to verify where those volumes exist in the case of Docker Desktop. – Nag Mar 24, 2020 at 9:06

Type this into the Windows File Explorer address bar:

  • For Docker version 20.10+: \\wsl$\docker-desktop-data\data\docker\volumes
  • For Docker Engine v19.03: \\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\
  • You will have one directory per volume; the sketch below shows how to match a directory to a volume name.
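
If you are not sure which of these directories belongs to which volume, one way to cross-check (a rough sketch, using the blog_postgres-data volume from the question) is to read the Linux-side mountpoint and map the /var/lib/docker prefix onto the \\wsl$ share:

    # prints something like /var/lib/docker/volumes/blog_postgres-data/_data
    docker volume inspect blog_postgres-data --format '{{ .Mountpoint }}'
    # on Docker 20.10+ this corresponds to
    # \\wsl$\docker-desktop-data\data\docker\volumes\blog_postgres-data\_data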

    Holy mother, I cannot believe how long it took before I found your answer and the solution. I looked in a ridiculous number of places. I guess this is a brand new thing. Thanks so much. – Sarel Botha Oct 24, 2020 at 17:44

    More generally, /var/lib/docker/ maps to \\wsl$\docker-desktop-data\version-pack-data\community\docker\ – David Dec 5, 2020 at 22:19

    This worked for me, but I am also wondering if this is explicitly documented anywhere in the official docs. – wlnirvana Apr 18, 2021 at 1:24

    For this to work I had to enable network discovery first on my Windows 10 machine, otherwise I got an error message after entering the \\wsl$ path in File Explorer. – Bigman74066 Aug 30, 2022 at 12:50

    Your volume directory is /var/lib/docker/volumes/blog_postgres-data/_data, and /var/lib/docker is usually mounted at C:\Users\Public\Documents\Hyper-V\Virtual hard disks. In any case, you can check it by looking in Docker settings.

    You can refer to these docs for info on how to share drives with Docker on Windows.

    BTW, Source is the location on the host and Destination is the location inside the container in the following output:

    "Mounts": [
        "Name": "fac362...80535",
        "Source": "/var/lib/docker/volumes/fac362...80535/_data",
        "Destination": "/webapp",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    

    Updated to answer the questions in the comments:

    My main curiosity here is that sharing images etc is great but how do I share my data?

    Actually, volumes are designed for exactly this purpose (managing data in a Docker container). The data in a volume is persisted on the host filesystem and isolated from the life cycle of a Docker container/image. You can share the data in a volume by:

  • Mount a Docker volume to the host and reuse it

    docker run -v /path/on/host:/path/inside/container image

    Then all your data will persist in /path/on/host; you can back it up, copy it to another machine, and re-run your container with the same volume (see the backup sketch after this list).

  • Create and mount a data container.

    Create a data container: docker create -v /dbdata --name dbstore training/postgres /bin/true

    Run other containers based on this container using --volumes-from: docker run -d --volumes-from dbstore --name db1 training/postgres, then all data generated by db1 will persist in the volume of container dbstore.
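
    To back up data from either setup, a common pattern (just a sketch; the second data container dbstore2, the ubuntu image and the archive name are arbitrary choices) is a throwaway container that mounts the same volume plus a host directory and tars the data across:

    # back up /dbdata from the dbstore data container into ./backup.tar on the host
    docker run --rm --volumes-from dbstore -v "$(pwd)":/backup ubuntu tar cvf /backup/backup.tar /dbdata
    # restore it into the volume of another data container, dbstore2
    docker run --rm --volumes-from dbstore2 -v "$(pwd)":/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"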

    For more information you could refer to the official Docker volumes docs.

    Simply speaking, a volume is just a directory on your host containing your container data, so you can use any method you previously used to back up/share your data.

    can I push a volume to docker-hub like I do with images?

    No. A Docker image is something you can push to a Docker registry (such as Docker Hub); data is not. You can back up/persist/share your data with any method you like, but pushing data to a Docker registry to share it does not make sense.

    can I make backups etc?

    Yes, as posted above :-)

    Ok so the source /var/lib/docker/volumes/blog_postgres-data/_data is on the Linux VM Docker is running on. My main curiosity here is that sharing images etc. is great, but how do I share my data? Can I push a volume to Docker Hub like I do with images? Can I make backups etc.? – Brad Apr 3, 2017 at 15:15

    @Brad, I updated my answer for your questions because a comment is not enough; hope it is helpful to you :-) – shizhz Apr 4, 2017 at 1:28

    Directory "C:\Users\Public\Documents\Hyper-V\Virtual hard disks" is empty (Windows 10 Enterprise) – Leos Literak Aug 14, 2020 at 15:45

    Your Docker is eventually managed by Hyper-V (unless you use WSL 2). It could be under ProgramData\DockerDesktop – hyamanieu Dec 15, 2020 at 9:24

    For Windows 10 + WSL 2 (Ubuntu 20.04), Docker version 20.10.2, build 2291f61

    Docker artifacts can be found in

    DOCKER_ARTIFACTS == \\wsl$\docker-desktop-data\version-pack-data\community\docker

    Data volumes can be found in

    DOCKER_ARTIFACTS\volumes\[VOLUME_ID]\_data
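
    The [VOLUME_ID] part is simply the volume name as Docker reports it, so listing the volumes first tells you which folder to look for; a small sketch:

    # each name printed here has a matching folder under DOCKER_ARTIFACTS\volumes
    docker volume ls --format '{{ .Name }}'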

    Thanks for this. I've been struggling to find the correct path. Where did you find out the correct folder? Most answers on the web are outdated and inaccurate for the latest version, pointing to now empty/nonexistent folders. – Leônidas Villeneuve Mar 2, 2021 at 17:58

    Awesome, thanks so much. For any Windows user who is still confused, simply copy-paste this path \\wsl$\docker-desktop-data\version-pack-data\community\docker directly into your file explorer and it will work :D – Tanzim Chowdhury Sep 28, 2021 at 12:53

    Hey everyone, don't type "Network" at the start of your file path, just type \\wsl$. This way you can navigate your way to the Docker files manually, for anyone who wants to better understand how to get to those files. Hope this helps :) This is the resource I found to access \\wsl$: ntweekly.com/2021/05/18/open-wsl-path-on-windows-explorer – Monttana16 Jul 27, 2022 at 8:01

    Using docker desktop v20. docker volume inspect my_vol shows a valid volume. \\wsl$ is empty. – dudeNumber4 Oct 4, 2022 at 15:17

    I have found that my setup of Docker with WSL 2 (Ubuntu 20.04) uses this location on Windows 10:

    C:\Users\Username\AppData\Local\Docker\wsl\data\ext4.vhdx
    

    Where Username is your username.

    Using docker desktop v20. docker volume inspect my_vol shows a valid volume. C:\Users\me\AppData\Local\Docker\wsl doesn't exist. – dudeNumber4 Oct 4, 2022 at 15:22

    When running Linux-based containers on a Windows host, the actual volumes are stored within the Linux VM and are not available on the host's filesystem. When running Windows containers on Windows, they live under C:\ProgramData\Docker\volumes\.

    Also, docker inspect <container_id> lists the container configuration; see the Mounts section for more details about the persistence layer.
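
    For example, to print only that section, the Mounts block can be pulled out with a format template (the container ID is a placeholder):

    docker inspect --format '{{ json .Mounts }}' <container_id>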

    Update: Not applicable for Docker running on WSL.

    Your answer is correct: "When running linux based containers on a windows host, the actual volumes will be stored within the linux VM and will not be available on the host's fs". I could not find any items in the "C:\Users\Public\Documents\Hyper-V\Virtual hard disks" folder. – theeranitp Jun 1, 2020 at 5:35

    And it appears that Docker Desktop is very willing to simply blow away that VM. For instance I tried to change something in the JSON file in Docker Desktop > Settings > Docker Engine and it was apparently invalid. After Docker Desktop tried to restart a few times, I happened to be watching in Hyper-V Manager and it just deleted the VM. – Aaron Axvig Jul 21, 2020 at 17:26

    @AaronAxvig wait so you just lost all of your data like that? How is that an okay thing? So they give us no ability to back it up (because the volume is apparently nowhere to be found) and it can just be gone like that? What's the point of Docker again if you can just lose all your data like that? – gargoylebident Apr 21, 2022 at 21:01

    In my case, I installed Docker Desktop on WSL 2, Windows 10 Home. I found my image files in:

    \\wsl$\docker-desktop-data\version-pack-data\community\docker\overlay2
    \\wsl$\docker-desktop-data\version-pack-data\community\docker
    

    Container, image and volume information is all there.

    All image files are stored there, separated into several folders with long string names. When I look into each folder, I can find the real image files in the "diff" folders.

    Although the terminal shows the path /var/lib/docker, that folder doesn't exist on the host and the actual files are not stored there. I don't think there is an error; /var/lib/docker is just linked or mapped to the real folder, something like that.

    Mounting any NTFS-based directories did not work for my purpose (MongoDB; as far as I'm aware it is also the case for at least Redis and CouchDB): NTFS permissions did not allow the necessary access for such DBs running in containers. The following is a setup with named volumes on Hyper-V.

    The following approach starts an ssh server within a service, set up with docker-compose such that it automatically starts up and uses public-key authentication between host and container. This way, data can be uploaded/downloaded via scp or sftp.

    The full docker-compose.yml for a webapp + mongodb is below, together with some documentation on how to use the ssh service:

    version: '3'
    services:
      webapp-foo:    # hypothetical service name
        build: .
        image: localhost.localdomain/${repository_name}:${tag}
        container_name: ${container_name}
        ports:
          - "3333:3333"
        links:
          - mongodb-foo
        depends_on:
          - mongodb-foo
          - sshd
        volumes:
          - "${host_log_directory}:/var/log/app"
      mongodb-foo:
        container_name: mongodb-${repository_name}
        image: "mongo:3.4-jessie"
        volumes:
          - mongodata-foo:/data/db
        expose:
          - '27017'
      #since mongo data on Windows only works within HyperV virtual disk (as of 2019-4-3), the following allows upload/download of mongo data
      #setup: you need to copy your ~/.ssh/id_rsa.pub into $DOCKER_DATA_DIR/.ssh/id_rsa.pub, then run this service again
      #download (all mongo data): scp -r -P 2222 user@localhost:/data/mongodb [target-dir within /c/]
      #upload (all mongo data): scp -r -P 2222 [source-dir within /c/] user@localhost:/data/mongodb
      sshd:
        image: maltyxx/sshd
        volumes:
            - mongodata-foo:/data/mongodb
            - $DOCKER_DATA_DIR/.ssh/id_rsa.pub:/home/user/.ssh/keys/id_rsa.pub:ro
        ports:
            - "2222:22"
        command: user::1001
    #please note: using a named volume like this for mongo is necessary on Windows rather than mounting an NTFS directory.
    #mongodb (and probably most other databases) are not compatible with windows native data directories due to permission issues.
    #this means that there is no direct access to this data, it needs to be dumped elsewhere if you want to reimport something.
    #it will however be persisted as long as you don't delete the HyperV virtual drive that docker host is using.
    #on Linux and Docker for Mac it is not an issue, named volumes are directly accessible from host.
    volumes:
      mongodata-foo:
    

    This is unrelated, but for a fully working example, the following script needs to be run before any docker-compose call:

    #!/usr/bin/env bash
    set -o errexit
    set -o pipefail
    set -o nounset
    working_directory="$(pwd)"
    host_repo_dir="${working_directory}"
    repository_name="$(basename ${working_directory})"
    branch_name="$(git rev-parse --abbrev-ref HEAD)"
    container_name="${repository_name}-${branch_name}"
    host_log_directory="${DOCKER_DATA_DIR}/log/${repository_name}"
    tag="${branch_name}"
    export host_repo_dir
    export repository_name
    export container_name
    export tag
    export host_log_directory
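
    One way to use it, assuming it is saved as set-env.sh (the file name is just an example), is to source it so the exported variables are visible to the subsequent docker-compose call:

    source ./set-env.sh
    docker-compose up -d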
    

    Update: Please note that you can also just use docker cp nowadays, so the sshd container outlined above is probably not necessary anymore, except if you need remote access to the file system running in a container under a Windows host.
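
    For instance, with the compose file above the Mongo data can be copied straight out of the running container (the container name follows container_name: mongodb-${repository_name}; the target directory is arbitrary):

    docker cp "mongodb-${repository_name}:/data/db" ./mongodb-data-dump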

    If you find \\wsl$ a pain to enter or remember, there's a more GUI-friendly method in Windows 10 release 2004 and onwards. With WSL 2, you can safely navigate to all the special WSL shares via the new Linux icon in File Explorer.

    From there you can drill down to (e.g.) \docker-desktop-data\data\docker\volumes, as mentioned in other answers.

    For more details, refer to Microsoft's official WSL filesystems documentation, which mentions these access methods. For the technically curious, Microsoft's deep dive video should answer a lot of questions.
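
    Alternatively, assuming you have a WSL 2 distro installed, File Explorer can be opened on a Linux path straight from a WSL shell:

    # run inside a WSL shell; opens the current Linux directory in File Explorer
    explorer.exe .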

    If you're searching for where the data is actually located when you use a volume that points to a path inside the Docker "VM", like here:

    version: '3.0'
    services:
      mysql-server:
        image: mysql:latest
        container_name: mysql-server
        restart: always
        ports:
          - 3306:3306
        volumes:
          - /opt/docker/mysql/data:/var/lib/mysql
    

    The "/opt/docker/mysql/data" or just the / is located in \\wsl$\docker-desktop\mnt\version-pack\containers\services\docker\rootfs

    Hope it helps :)

    If you're on Windows and use Docker for Windows, then Docker works via a VM (MobyLinuxVM). Your volumes (like everything else) are in this VM! Here is how to find them:

    # get a privileged container with access to Docker daemon
    docker run --privileged -it --rm -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker alpine sh
    # in a second PowerShell, run a container with full root access to MobyLinuxVM
    docker run --net=host --ipc=host --uts=host --pid=host -it --security-opt=seccomp=unconfined --privileged --rm -v /:/host alpine /bin/sh
    # switch to host FS
    chroot /host
    # and then go to the volume you asked for
    cd /var/lib/docker/volumes/YOUR_VOLUME_NAME/_data
    

    Each container has its own filesystem which is independent from the host filesystem. If you run your container with the -v flag you can mount volumes so that the host and container see the same data (as in docker run -v hostFolder:containerFolder).

    The first output you printed describes such a mounted volume (hence Mounts), where /var/lib/docker/volumes/fac362...80535/_data (host) is mounted to /webapp (container).

    I assume you did not use -v, hence the folder is not mounted and is only accessible in the container filesystem, which you can find in /var/lib/docker/volumes/blog_postgres-data/_data. This data can be deleted if you remove the container together with its volumes (docker rm -v), so it might be a good idea to mount the folder, as in the sketch below.
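
    A rough sketch of what that could look like for the Postgres volume from the question (the container name is a placeholder; /var/lib/postgresql/data is the data directory of the official postgres image):

    docker run -d --name blog-postgres -v blog_postgres-data:/var/lib/postgresql/data postgres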

    As to the question of where you can access this data from Windows: as far as I know, Docker for Windows uses the Bash subsystem in Windows 10. I would try running Bash for Windows 10 and going to that folder, or find out how to access the Linux folders from Windows 10. Check this page for an FAQ on the Linux subsystem in Windows 10.

    Update: You can also use docker cp to copy files between host and container.

    I realise this is a couple of years old, but it's probably worth pointing out that Docker for Windows does not presently use Windows Subsystem for Linux; rather, it runs Moby Linux inside HyperV. – al45tair May 24, 2019 at 11:04

    If you're using Windows, your Docker files (in this case your volumes) exist on a virtual machine that Docker uses on Windows, either Hyper-V or WSL. However, if you need to access those files, you can copy the container's files, store them locally on your machine, and access the data that way:

    docker cp container_Id_Here:/var/lib/mysql path_To_Your_Local_Machine_Here
            
