I (Finally) Ditched Google Photos for Self-Hosted; Here’s How It Went
The digital age has brought about an unprecedented era of convenience, but at a cost that is becoming increasingly difficult to ignore. For years, we have entrusted our most precious memories, professional assets, and personal data to the titans of the tech industry. Among these, Google Photos stood as a monolithic solution, offering a seamless, AI-powered experience backed by seemingly infinite storage. However, the tides are turning. A growing chorus of privacy advocates, data sovereignty enthusiasts, and cost-conscious individuals is beginning to question the long-term viability of this centralized model. We found ourselves at this very crossroads. The decision to migrate away from the familiar ecosystem of Google Photos to a self-hosted solution was not one taken lightly. It represented a fundamental shift in how we manage our digital lives. This is the story of that transition—a comprehensive deep dive into the motivations, the technical hurdles, the solutions we implemented, and the profound liberation that followed. We are here to document the journey, providing a detailed roadmap for anyone else ready to reclaim ownership of their digital footprint.
The Breaking Point: Why We Decided to Leave Google Photos
Before embarking on a journey of migration, one must have a compelling reason to leave a service that, for all its faults, works with remarkable efficiency. The decision to abandon Google Photos was not based on a single catastrophic event, but rather a culmination of steadily growing concerns that eventually reached a tipping point.
The Unsettling Reality of Privacy and Data Ownership
The fundamental trade-off of most free-tier online services is the exchange of personal data for access. With Google, this exchange is particularly stark. We realized that every photo and video we uploaded was being ingested by a vast machine learning engine. Google’s “AI-powered” features, such as object, face, and location recognition, are impressive, but they are powered by the constant analysis of our private lives. The question became: who truly owns these memories? Are they ours, or are they merely assets in a data portfolio used to refine advertising algorithms? The idea of our family photos, our travels, and our personal milestones being used to train a corporate entity’s models felt like a profound violation. The terms of service, often a labyrinth of legalese, grant them broad licenses to use, store, and manipulate our data. True ownership implies control, and with a third party, we relinquished that control by definition. The Edward Snowden revelations years ago were a wake-up call, but the day-to-day reality is that we are handing over the keys to our digital kingdom. We decided that we wanted our data to be just that: our data.
The Inevitable Financial Calculation: Costs of Cloud Storage
What begins as a generous offering of “free” storage inevitably evolves into a recurring monthly or annual expense. For years, we utilized the high-quality “storage saver” option, but as our libraries grew into the terabytes, we were nudged into the paid tier of Google One. While the cost per month seemed trivial, its cumulative effect over years is significant. We performed a simple calculation: the annual cost of our Google One subscription, projected over a five or ten-year period, amounted to a substantial sum. To put rough numbers on it (US pricing at the time; plans vary by region): the 2TB tier ran about $100 per year, which is roughly $500 over five years and $1,000 over ten. This was money being paid for the privilege of renting storage space on someone else’s hardware. When we compared this to the one-time cost of purchasing a Network Attached Storage (NAS) device with several terabytes of redundant storage, the economic argument for self-hosting became overwhelmingly clear. The long-term savings are not just a bonus; they are a core pillar of the financial justification for taking control of our own data infrastructure.
The Frustration of Artificial Limitations and Walled Gardens
The convenience of Google Photos is encased within a meticulously constructed walled garden. While you can download your data, seamless integration with competing ecosystems is non-existent. If you want to share a photo with someone not on Google, they receive a link, but the experience is clunky. If you want to edit a photo with a tool outside of the Google suite, you must first download it, breaking the workflow. Furthermore, the AI-driven organization, while powerful, is a black box. You cannot customize the algorithms or tweak the facial recognition parameters. You are a passive consumer of the features Google decides to build. This lack of flexibility became a source of growing frustration. We desired a solution that was platform-agnostic, scriptable, and offered granular control over every aspect of its operation.
The Search for the Perfect Google Photos Alternative
Our journey of self-hosting began with the most critical step: selecting the right software. The self-hosting landscape is rich with options, each with its own philosophy and feature set. We spent considerable time evaluating the leading candidates, weighing their strengths and weaknesses against our specific needs.
Immich: The Rising Star and Spiritual Successor to Google Photos
Without a doubt, the most compelling open-source project in this space today is Immich. From the moment we first deployed a test instance, it was clear that this was not just another photo gallery. Immich was built from the ground up with the explicit goal of being a near one-to-one replacement for Google Photos. Its feature list is staggering:
- Automatic Backup: A mobile app that seamlessly uploads photos and videos in the background.
- AI-Powered Machine Learning: Object, person, and facial recognition that is remarkably accurate, powered by tools like OpenVINO or CUDA for hardware acceleration.
- Intuitive Timeline Navigation: A buttery-smooth timeline for browsing by date, plus a map view for exploring photos by location.
- Shared Albums and Collaboration: The ability to create and share albums with other users, complete with permissions.
- Efficient Storage: Support for modern codecs and configurable storage management.
The development pace of Immich is frenetic, with the community actively contributing and the lead developer pushing updates at an incredible speed. For us, Immich was the clear front-runner, offering the polish and user experience we were accustomed to, without the privacy compromises.
PhotoPrism: The Mature and Feature-Rich Contender
Before the meteoric rise of Immich, PhotoPrism was the de facto standard for self-hosted photo management. It is a mature, stable, and incredibly powerful platform. Its core strength lies in its search capabilities, using local machine learning (via TensorFlow) to tag photos with an extensive vocabulary. You can search for “red car,” “beach sunset,” or “birthday cake” and get accurate results. It also offers features like RAW file processing, WebDAV support for importing from external sources, and robust user management. While its user interface is slightly less modern than Immich’s, its stability and extensive documentation make it an extremely reliable choice. For those who prioritize rock-solid stability over the bleeding edge of new features, PhotoPrism remains an excellent option.
Nextcloud Memories: The Integrated Ecosystem Play
For those already invested in the Nextcloud ecosystem, the Memories app is a formidable contender. Nextcloud is a sprawling platform offering file storage, collaboration tools, calendars, and more. The Memories app plugs directly into this, leveraging the existing user system and file structure. Its primary advantage is integration; you don’t need to manage a separate service for photos. It features a clean, modern UI, timeline view, facial recognition (via the companion Recognize app), and the ability to handle RAW files. The choice here is philosophical: do you want a dedicated, single-purpose photo management tool, or do you want your photo library to be just one component of a larger, all-in-one personal cloud? For us, the dedicated approach of Immich was more appealing, but Nextcloud Memories is a powerful choice for the right user.
The Migration: A Step-by-Step Guide to Liberation
Once we had settled on Immich as our platform of choice, the daunting task of migration began. This is the most critical and nerve-wracking part of the process. A single mistake could lead to data loss. Our strategy was built on meticulous planning and phased execution.
Phase 1: The Great Download
The first step was to retrieve our data from Google’s grasp. The irony is that the easiest way to get all your data out is to use Google Takeout. This service allows you to export a complete archive of your Google data, including all your photos and videos in their original quality. However, it is not without its quirks. The export is typically split into multiple 2GB ZIP files. This presents a logistical challenge. Our first task was to download all these archives. We used a stable, wired internet connection and set aside several hours for this process, as our library was substantial.
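Before extracting anything, it is worth testing every archive for corruption; re-downloading one part now is far cheaper than discovering a bad archive mid-import. A quick loop we found useful (filenames are illustrative; Takeout names the parts for you):

```bash
# Test each downloaded Takeout archive; unzip -t verifies the whole
# archive without extracting it.
for z in takeout-*.zip; do
  if unzip -tq "$z" > /dev/null; then
    echo "OK       $z"
  else
    echo "DAMAGED  $z"   # re-download this part from Takeout
  fi
done
```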
Phase 2: Data Aggregation and Verification
After downloading all the archives, we needed to consolidate them. This involved unzipping each archive and moving the contents into a single, master staging directory on our server. This process revealed another quirk of Takeout: the file structure can be a bit messy, with JSON metadata sidecar files scattered alongside the media. We had to write a simple script to scan the extracted archives, identify the actual image and video files (.jpg, .png, .heic, .mp4, .mov, etc.), and move them into a clean staging structure, organized by year, for the import process. Verification was key. We ran file counts to confirm that the number of media files in the extracted archives matched the number that landed in the staging directory. This gave us confidence that no data was lost in transit.
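For reference, here is a minimal sketch of the kind of script we used. It assumes the archives have already been extracted under ~/takeout, that the staging area is /srv/staging, and that your export (like ours) groups photos into per-year folders; the paths and the extension list are ours to adjust, not anything Immich requires.

```bash
#!/usr/bin/env bash
# Consolidate media files from the extracted Takeout archives into a
# staging area, preserving the per-year folder grouping, then verify
# that the file counts match. Paths are illustrative.
set -euo pipefail

SRC="$HOME/takeout"      # extracted Takeout archives
DEST="/srv/staging"      # staging area for the Immich import

# One place to define what counts as a media file.
media_find() {
  find "$1" -type f \( -iname '*.jpg' -o -iname '*.jpeg' \
    -o -iname '*.png' -o -iname '*.heic' \
    -o -iname '*.mp4' -o -iname '*.mov' \) "${@:2}"
}

src_count=$(media_find "$SRC" | wc -l)

# Move each file into DEST under its immediate parent folder name
# (e.g. "Photos from 2019"). mv -n refuses to overwrite, so filename
# collisions across archives surface as a count mismatch below.
media_find "$SRC" -print0 | while IFS= read -r -d '' f; do
  sub=$(basename "$(dirname "$f")")
  mkdir -p "$DEST/$sub"
  mv -n "$f" "$DEST/$sub/"
done

dest_count=$(media_find "$DEST" | wc -l)
echo "found: $src_count, staged: $dest_count"
if [ "$src_count" -ne "$dest_count" ]; then
  echo "WARNING: counts differ; investigate before deleting the archives"
fi
```

Filename collisions across archives are exactly the kind of silent data loss the final count check exists to catch.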
Phase 3: Preparing the Immich Environment
With our data prepared, we turned our attention to setting up the Immich server. We chose to deploy it using Docker Compose, as this is the recommended and most robust method. Our hardware consisted of a small, power-efficient server with an Intel CPU that supported Quick Sync Video (QSV), which is crucial for hardware-accelerated video transcoding. We meticulously configured our docker-compose.yml file, defining volumes that mapped our server’s directories to the containers. It is absolutely critical to get these volume mappings correct to prevent data loss. We mounted a dedicated directory for the original photos and another for the library folder that Immich manages. We also configured the database and Redis containers as specified in the official Immich documentation. Before starting the import, we launched the stack and logged into the web interface to ensure everything was operational.
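For context, the bones of the setup looked like the commands below. The download URLs and the UPLOAD_LOCATION variable are taken from the official Immich Docker Compose instructions as they stood when we migrated; verify them against the current docs, and note that /srv/immich and /tank are just our own layout.

```bash
# Fetch the official Compose file and example environment file.
mkdir -p /srv/immich && cd /srv/immich
wget -O docker-compose.yml \
  https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
wget -O .env \
  https://github.com/immich-app/immich/releases/latest/download/example.env

# Point Immich's managed library at a dedicated dataset on the ZFS pool.
# Getting this mapping right is what keeps originals on real storage
# instead of inside a disposable container filesystem.
sed -i 's|^UPLOAD_LOCATION=.*|UPLOAD_LOCATION=/tank/immich/library|' .env

docker compose up -d   # start server, machine learning, Postgres, Redis
docker compose ps      # confirm every container reports healthy
```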
Phase 4: The Phased Import Strategy
We did not simply point Immich at our entire multi-terabyte library and hope for the best. A bulk import can strain system resources and make it difficult to identify issues. Instead, we employed a phased approach. We started with a single, smaller sub-directory containing a few thousand photos from a specific year. We moved this sub-directory into the external library path that we had mounted in our Docker configuration. Within the Immich settings, we then initiated the “External Library” scan. This process tells Immich to “read” the files without moving or modifying them, which is a major benefit of its design.
We monitored the server during the initial scan. The machine learning service began its work, identifying faces, objects, and locations. The thumbnail generation service started creating the various sizes of previews needed for the web and mobile interfaces. This is a computationally intensive process, and our hardware was put to the test. After the initial scan of the test batch, we meticulously reviewed the results in the web interface. Did all the photos appear? Was the EXIF data (date, time, location) preserved correctly? Was the timeline accurate? Once we confirmed the test batch was a success, we began to scale the process, gradually moving more and more directories into the external library path and running scans until our entire library was ingested. This methodical approach took several days but ensured a flawless transition.
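Mechanically, the scaling-up phase was a loop like the sketch below: move one year's directory into the external library path, trigger a scan from the Immich web UI, review, repeat. The paths are the same illustrative ones used earlier.

```bash
#!/usr/bin/env bash
# Feed the staged library into Immich one batch (year) at a time.
set -euo pipefail

STAGING="/srv/staging"            # per-year staging directories
EXTERNAL="/tank/immich/external"  # external library path mounted into Immich

for dir in "$STAGING"/*/; do
  batch=$(basename "$dir")
  echo "Staging batch: $batch"
  mv "$dir" "$EXTERNAL/$batch"
  # Pause here: run the external library scan in the Immich web UI,
  # spot-check dates, locations, and thumbnails, then continue.
  read -rp "Batch '$batch' staged. Press Enter for the next one..."
done
```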
Rebuilding the Ecosystem: Features and Workflows
Migration is not just about moving files; it is about replicating the workflows that made the original service so valuable. This is where the true power of self-hosting begins to shine, as we are no longer constrained by a one-size-fits-all solution.
Replicating the AI Search and Object Recognition
One of Google Photos’ most beloved features was its ability to search for anything. “Show me pictures of cats,” “pictures from the mountains,” “pictures of my daughter with a dog.” Immich’s machine learning engine is designed to replicate this. By leveraging CLIP (Contrastive Language-Image Pre-training) models, Immich can understand the content of your images on a semantic level. We configured Immich’s machine-learning service to use OpenVINO on our Intel hardware (Quick Sync handles video transcoding, not ML), which dramatically sped up the initial indexing. The result was a search capability that felt just as powerful as Google’s. We could search for abstract concepts like “celebration” or “technology” and get relevant results. For us, the key difference was that this powerful AI was running entirely on our own hardware, with our data never leaving our local network.
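Enabling the OpenVINO acceleration mentioned above took a small Compose override for the machine-learning container. The sketch below reflects our reading of Immich's hardware-acceleration docs at the time (the project ships a fuller hwaccel.ml.yml for this); the -openvino image tag and the /dev/dri mapping are the essential parts, but treat it as a starting point, not gospel.

```bash
# docker-compose.override.yml: switch the ML container to the OpenVINO
# image and expose the Intel GPU. Check the current Immich docs; the
# official hwaccel.ml.yml includes additional device rules.
cat > docker-compose.override.yml <<'EOF'
services:
  immich-machine-learning:
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-openvino
    devices:
      - /dev/dri:/dev/dri
EOF

docker compose up -d immich-machine-learning
```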
Creating a Seamless Mobile Backup Experience
The automated background backup is the killer feature of the mobile experience. We installed the Immich mobile app on our iOS and Android devices. The configuration was straightforward. We entered our server’s address (using a secure reverse proxy with a proper SSL certificate), our credentials, and configured the backup settings. We could choose to back up only when on Wi-Fi, to preserve cellular data, and to only back up when the device is charging. We could select specific folders to back up (e.g., the Camera Roll) and set it to auto-start. The app does its job silently and efficiently, uploading new photos and videos to our server just as the Google Photos app did. The crucial difference is that the destination is our own server, not Google’s.
Advanced Features: Facial Groups, Maps, and Sharing
Immich does an excellent job of replicating the “magic” of Google Photos. The “People” tab automatically clusters faces, allowing us to view every photo of a specific person. The “Explore” tab provides a map view, plotting our photos geographically. Creating and sharing albums is simple and effective. We can generate a public link to an album, complete with a password and an expiration date, giving us fine-grained control over who can see our shared memories and for how long. For a period, we even ran PhotoPrism alongside Immich, feeding its more detailed reverse-geocoded place names into our map pins through Immich’s API, a small showcase of the power of an open and composable ecosystem.
Technical Deep Dive: Our Self-Hosted Infrastructure
The beauty of self-hosting is that you can tailor the infrastructure to your exact needs. Our stack is designed for redundancy, performance, and data integrity.
Hardware and Network Considerations
Our core server is a small form-factor machine built around an Intel Core i5 processor and 16GB of RAM, housed in a 4-bay NAS chassis. For storage, we deployed a ZFS pool in a RAID-Z1 configuration. This provides single-disk fault tolerance, which is non-negotiable for precious data. The server is connected to our network via a Gigabit Ethernet connection, which is sufficient for our streaming and backup needs. For remote access, we have a reverse proxy (we use Nginx Proxy Manager) that handles SSL/TLS termination, providing a secure HTTPS connection to our Immich instance from anywhere in the world.
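For anyone reproducing the storage layer: creating the pool is a one-liner. The pool name, dataset, and device paths below are ours; use /dev/disk/by-id/ names rather than sdX letters, which can shuffle between boots.

```bash
# Four disks in RAID-Z1: any single drive can fail without data loss.
zpool create tank raidz1 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

# A dedicated dataset for the photo library, with lightweight compression.
zfs create -o compression=lz4 tank/immich

zpool status tank   # verify layout and health before trusting it with data
```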
The Power of Docker Compose
Our entire stack is defined in a docker-compose.yml file. This is the blueprint of our service. It defines the Immich server, the microservices, the machine learning engine, the database (PostgreSQL), the cache (Redis), and the reverse proxy. This approach provides several benefits. First, it makes the setup completely reproducible. If our server fails, we can redeploy the entire stack on new hardware with a single command. Second, it isolates the services, preventing conflicts and making upgrades clean and predictable. We can update Immich to a new version by simply pulling the latest images and restarting the stack.
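In practice, an upgrade looks like this (run from the directory holding the Compose file, after skimming the release notes for breaking changes):

```bash
cd /srv/immich
docker compose pull     # fetch the new images
docker compose up -d    # recreate containers on the new version
docker image prune -f   # optionally reclaim space from superseded images
```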
Backup and Disaster Recovery Strategy
Our most important job is to protect the data we fought so hard to liberate. Our self-hosted backup strategy is multi-layered, following the industry-standard 3-2-1 rule:
- Three Copies of Data: The live data on our Immich server, a local backup, and an off-site backup.
- Two Different Devices: The rule traditionally calls for two media types; in our all-disk setup, we interpret it as the live ZFS pool plus a physically separate set of hard drives that receive a nightly local backup.
- One Off-site Copy: We use Rclone to perform encrypted, incremental backups of our entire photo library to a cloud storage provider (not Google). This protects us against physical disasters like fire or theft.
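The off-site leg runs as a nightly cron job. Here is a sketch, assuming an rclone crypt remote named offsite-crypt has already been created with rclone config; the remote name, paths, and tuning flags are our choices, not requirements.

```bash
#!/usr/bin/env bash
# Nightly encrypted, incremental off-site sync of the photo library.
# "offsite-crypt" is an rclone crypt remote wrapping our cloud provider,
# so files are encrypted client-side before they ever leave the house.
set -euo pipefail

rclone sync /tank/immich "offsite-crypt:photos" \
  --transfers 4 \
  --checksum \
  --log-file /var/log/rclone-photos.log \
  --log-level INFO
```

One design note: sync makes the destination mirror the source, so deletions propagate. If that worries you, rclone copy plus your provider's object versioning is the more conservative choice.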
The Verdict: Is Self-Hosting Worth the Effort?
After living with our self-hosted solution for months, the answer is an unequivocal, resounding yes. The initial investment in time and effort was significant. It required learning about Docker, networking, and Linux administration. However, the long-term benefits far outweigh this initial learning curve.
The Positives:
- Total Privacy: We are the sole custodians of our data.
- Cost Savings: The initial hardware cost has already paid for itself compared to ongoing subscription fees.
- Unparalleled Control: We can customize every aspect, from the hardware to the software features.
- No Artificial Limits: We can upload as much data as our hardware can hold.
The Challenges:
- Initial Setup Complexity: It is not a “plug and play” solution.
- Ongoing Maintenance: You are the system administrator. You are responsible for applying security updates and managing the hardware.
- Power Consumption: A server runs 24/7, adding to your electricity bill.
For us, the peace of mind that comes from knowing our memories are secure, private, and under our complete control is priceless. We have transitioned from being users of a service to being the owners of our own infrastructure. This journey was more than just a technical project; it was a statement of principle. It was a declaration that our digital lives are our own, and we will not cede control to a third party. For anyone with the technical aptitude and the desire to truly own their data, we can say without hesitation: the leap is worth taking.