Live photos are downloaded with the wrong extension #1

Open
dvdkruk opened this issue Nov 13, 2024 · 4 comments
dvdkruk commented Nov 13, 2024

First, thanks for building this tool. I think many users would like to back up their Google Photos in original quality, and this might be the way to go.

I played a bit with gphotosdl and found that live photos, which the web UI normally downloads as a zip archive, are saved with a picture file extension such as .jpg or .heic instead. For example, photos.zip (containing img_123.jpg and img_123.mov) ends up named img_123.JPG.

A possible solution is to first check whether the file is a zip archive, and if so unzip it and serve the matching file (a sketch follows the snippet below).

gphotosdl/main.go

Lines 275 to 301 in 8d9e256

func (g *Gphotos) getID(w http.ResponseWriter, r *http.Request) {
    photoID := r.PathValue("photoID")
    slog.Info("got photo request", "id", photoID)
    path, err := g.Download(photoID)
    if err != nil {
        slog.Error("Download image failed", "id", photoID, "err", err)
        var h httpError
        if errors.As(err, &h) {
            w.WriteHeader(int(h))
        } else {
            w.WriteHeader(http.StatusInternalServerError)
        }
        return
    }
    slog.Info("Downloaded photo", "id", photoID, "path", path)
    // Remove the file after it has been served
    defer func() {
        err := os.Remove(path)
        if err == nil {
            slog.Debug("Removed downloaded photo", "id", photoID, "path", path)
        } else {
            slog.Error("Failed to remove download directory", "id", photoID, "path", path, "err", err)
        }
    }()
    http.ServeFile(w, r, path)
}
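
For illustration, a minimal sketch of that zip check, assuming it would run in getID just before http.ServeFile (isZipFile is a hypothetical helper, not existing gphotosdl code):

package main

import (
    "bytes"
    "fmt"
    "io"
    "os"
)

// isZipFile reports whether the file at path starts with the ZIP local
// file header magic "PK\x03\x04", which is how the live photo archives
// from the web UI begin.
func isZipFile(path string) (bool, error) {
    f, err := os.Open(path)
    if err != nil {
        return false, err
    }
    defer f.Close()

    var magic [4]byte
    if _, err := io.ReadFull(f, magic[:]); err == io.EOF || err == io.ErrUnexpectedEOF {
        return false, nil // shorter than 4 bytes: cannot be a zip
    } else if err != nil {
        return false, err
    }
    return bytes.Equal(magic[:], []byte("PK\x03\x04")), nil
}

func main() {
    zipped, err := isZipFile(os.Args[1])
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println(zipped)
}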

@eleazar123

I'm running into the same issue. Did you happen to work out a solution? 🤞

I have over 5TB of photos/videos in GP, and using Google Takeout is an absolute nightmare, so I'd really love to be able to use gphotosdl to back everything up.

@eleazar123

@ncw Apologies for the ping. ~98% of my pictures are live photos, and I can't wait to use gphotosdl to download my entire Google Photos library. Is it possible to add the ability to unzip the motion photo archive files? Or maybe just leave them as .zip instead of using .jpg/.heic?

Thank you for making this tool for us! I feel trapped with so much data in Google Photos, and Takeout is an absolute mess (it's almost impossible to get a full archive downloaded since it's so large).


ncw commented Feb 4, 2025

What is happening is that rclone uses the name that Google Photos supplies to save the image.

So if you look at one of your .heic images on the Google Photos website and click the info button, you'll see something like this:

[Screenshot of the Google Photos info panel showing the filename]

And I suspect the name shown in there ends in .jpg.

Unfortunately it is rclone that names the files, not gphotosdl.

I think the answer to this might be to have a script which detects the zip files and unpacks them afterwards.

Alternatively, gphotosdl could detect that the download is a zip file, unpack it, and return only the largest file - would that be acceptable?
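
For illustration, a rough sketch of that second option using the standard library's archive/zip (extractLargest is a hypothetical helper, not code that exists in gphotosdl today):

package main

import (
    "archive/zip"
    "fmt"
    "io"
    "os"
    "path/filepath"
)

// extractLargest unpacks the entry with the largest uncompressed size from
// the archive at zipPath into dir and returns the extracted file's path.
func extractLargest(zipPath, dir string) (string, error) {
    r, err := zip.OpenReader(zipPath)
    if err != nil {
        return "", err
    }
    defer r.Close()

    // Pick the biggest entry in the archive.
    var largest *zip.File
    for _, f := range r.File {
        if largest == nil || f.UncompressedSize64 > largest.UncompressedSize64 {
            largest = f
        }
    }
    if largest == nil {
        return "", fmt.Errorf("%s: empty archive", zipPath)
    }

    src, err := largest.Open()
    if err != nil {
        return "", err
    }
    defer src.Close()

    // filepath.Base strips any directories from the entry name.
    outPath := filepath.Join(dir, filepath.Base(largest.Name))
    out, err := os.Create(outPath)
    if err != nil {
        return "", err
    }
    defer out.Close()

    _, err = io.Copy(out, src)
    return outPath, err
}

func main() {
    path, err := extractLargest(os.Args[1], ".")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println("extracted", path)
}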

If someone could attach a live photo to this issue for me to experiment with I'd be grateful as I don't have any!

@eleazar123

Thank you for looking into this! It would be neat to have the script detect and unzip the files as it goes (keeping both the image and the video file), although if it doesn't keep the original file, I assume resuming would be a problem when I hit the rate limit and have to start the script from scratch? I have attached a sample file below.

Photos (1).zip

If it ends up breaking other things (i.e. the ability to have the script resume the transfers), it would probably be better to have a separate script that detects zip files and extracts them after all files are transferred.
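
Something like this standalone post-processing sketch, perhaps (assuming Go and only the standard library; it keeps the original files, so resuming an interrupted transfer should still work):

package main

import (
    "archive/zip"
    "io"
    "log"
    "os"
    "path/filepath"
)

// Walk a synced directory and unpack every file that is secretly a zip
// archive (e.g. a live photo saved as .jpg), extracting both the image
// and the video next to it. The original file is left in place so rclone
// can still match it when the transfer is resumed.
func main() {
    root := os.Args[1]
    err := filepath.WalkDir(root, func(path string, d os.DirEntry, err error) error {
        if err != nil || d.IsDir() {
            return err
        }
        r, err := zip.OpenReader(path)
        if err != nil {
            return nil // not a zip: a real image/video, leave it alone
        }
        defer r.Close()
        for _, f := range r.File {
            if f.FileInfo().IsDir() {
                continue // skip directory entries
            }
            if err := extract(f, filepath.Dir(path)); err != nil {
                return err
            }
        }
        log.Printf("unpacked %s", path)
        return nil
    })
    if err != nil {
        log.Fatal(err)
    }
}

// extract writes a single zip entry into dir, flattening any directories
// in the entry name.
func extract(f *zip.File, dir string) error {
    src, err := f.Open()
    if err != nil {
        return err
    }
    defer src.Close()
    out, err := os.Create(filepath.Join(dir, filepath.Base(f.Name)))
    if err != nil {
        return err
    }
    defer out.Close()
    _, err = io.Copy(out, src)
    return err
}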

Just wanted to say thank you again for investing the time to make this. I was so excited when I found it. I'm currently struggling with API limits, but I'm tweaking my rclone command to slow everything down and avoid them. I might have to run the script for a few hours, then pause for a few hours, and then run it again, etc.
