This is great. I want this but for much more. I want it to also be a nextcloud and zotero replacement, storing all my documents and books and documenting when I added, opened, edited them. I want it to store all notes that I write. I want it to record and display all browser tabs I open, when I do so, everything I copy and paste, every key I press. I want a record of everything I do in the digital world that is searchable and that can answer the question: "what was I working on 2 weeks ago on this day?" and bring back all the context also.
For obvious reasons this has to be self hosted and managed. I'm not interested in creating surveillance software or technology.
It sounds extreme, but whenever I've seen people's Obsidian setups with heaps of manual and bidirectional linking, I've always thought that time is the one dimension we should look at. If I look up some concept on Wikipedia today, there is a higher chance of me looking up related concepts, or working on something related to it, around that time too.
willytobler 59 minutes ago [-]
This is absolutely crazy. Every secret service on this planet will be happy! They have been working so hard to get all this information about every person. With this software they get everything delivered FOB. The united spies will run a worldwide ad campaign for your software.
hn_acc1 22 hours ago [-]
I think Microsoft has some kind of product that can help you Recall what you were working on?
spencerflem 3 hours ago [-]
The problem with that, for me, wasn't the idea; it's that Microsoft was doing it, and they're a known bad actor.
endless-r0ad 5 hours ago [-]
"I'm not interested in creating surveillance software or technology."
That is exactly what you would be doing though
michaelterryio 17 hours ago [-]
I can never find it now, but someone had an idea for a computing system in which every object was organized purely temporally, and you'd only access things outside of temporal order by filtering.
I wish I could find it again.
perilunar 9 hours ago [-]
Might you be thinking of Lifestreams by Freeman and Gelernter?
This could even allow resetting the state of the computer back in time so you can pick up exactly where you left off, or undo away mistakes regardless of app, etc.
Definitely some privacy concerns, but for a self-run, open-source thing I think it could be really cool.
mholt 24 hours ago [-]
> I want it to also be a nextcloud and zotero replacement, storing all my documents and books and documenting when I added, opened, edited them. I want it to store all notes that I write.
Sounds in-scope so far. Long-term, perhaps, and maybe optional add-on features rather than built-in, but we'll see.
> I want it to record and display all browser tabs I open, when I do so, everything I copy and paste, every key I press.
That is possible in theory, but for me personally that's just too detailed. :D I wouldn't need all that granularity, myself.
But hey, the vision is pretty similar. We are generating all sorts of data to document and understand our lives -- we don't even have to deliberately write a journal -- but we have no way of comprehending it. This app is an attempt to solve that.
sureglymop 20 hours ago [-]
> That is possible in theory, but for me personally that's just too detailed. :D I wouldn't need all that granularity, myself.
I do see that. I think for that reason it would be cool to support a kind of extension system for arbitrary "collectors", and then solid filtering on top of the collected data.
The vision is definitely similar. I am very pleasantly surprised to see your project. And I also like your ideas/roadmap on the website. I know you are building this for yourself/your family but I certainly would be open to contribute to it.
mholt 16 hours ago [-]
Wonderful, I'd love to collaborate!
ramses0 15 hours ago [-]
Dig deep on "Dogsheep" from Mr. Willison. We're all circling in the same orbit here.
I feel like a Nextcloud replacement is out of scope?
Reading the website, I mostly took Timelinize to be about sitting *behind* data-generating applications and showing and cross-referencing their data.
I think the way to go is some sort of Nextcloud extension that puts data into Timelinize.
I also saw on the website that it tracks "documents", but I haven't tried that yet; I would hope it can use external document sources that already process documents, like Paperless for example (which I am already using and liking).
mholt 16 hours ago [-]
Well, I don't plan on 1:1 feature parity with Nextcloud or any comprehensive cloud suite. But in terms of what was mentioned -- "storing all my documents and books and documenting when I added, opened, edited them. I want it to store all notes that I write" -- I think that's in scope.
So yes, Timelinize sits behind your current workflows. It's more of a resting place for your data (which you can still organize and curate -- more features to come in this regard), but I can also see why it might make sense to be one's primary photo library application in the future, with more development.
As for document storage, this is still WIP, and the implementation could use more discussion to clarify the vision and specifics.
ignoramous 22 hours ago [-]
Timelinize looks rad. Congratulations.
> That is possible in theory, but for me personally that's just too detailed. :D I wouldn't need all that granularity, myself.
Think this can go quite far with just the browsing history & content of viewed webpages.
mholt 22 hours ago [-]
Thanks! Yes, I agree. Someone already implemented a Firefox history data source; I don't think it includes the _content_ of the pages, but that could be interesting.
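For anyone wanting to poke at that: Firefox keeps history in a `places.sqlite` file in the profile directory, and reading it is a small job. A hedged sketch in Python -- the table and column names below are Firefox's real schema, but the profile path varies per machine, and you should query a copy since Firefox locks the live file (and note the page *content* isn't in there, only URLs, titles, and visit times):

```python
import sqlite3
from datetime import datetime, timezone

def read_firefox_history(places_db: str, limit: int = 50):
    """Most-recent history entries from (a copy of) Firefox's
    places.sqlite; last_visit_date is microseconds since the epoch."""
    con = sqlite3.connect(places_db)
    rows = con.execute(
        """SELECT url, title, last_visit_date
           FROM moz_places
           WHERE last_visit_date IS NOT NULL
           ORDER BY last_visit_date DESC
           LIMIT ?""",
        (limit,),
    ).fetchall()
    con.close()
    return [(url, title, datetime.fromtimestamp(us / 1_000_000, tz=timezone.utc))
            for url, title, us in rows]
```

Fetching page content would mean re-downloading each URL separately; the history DB alone only gets you the "what and when."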
ketzu 33 minutes ago [-]
I wanted to give it a try; unfortunately I fail already at the first screen. On selecting a location, the backend fails at creating the timeline:
"error": "opening database: pinging database file: invalid uri authority: D%5CMy%20Timeline%5Ctimeline.db_foreign_keys=on&_journal_mode=wal&_txlock=immediate&mode=rwc"
It seems it is trying to parse Windows file paths as URIs, and is thrown off by the ":" in Windows file paths.
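For what it's worth, the usual fix is to build a proper file:// URI instead of splicing the raw path into the DSN, so the drive-letter colon and spaces get escaped correctly. Timelinize's backend is Go, so this is just the idea, not the actual fix, sketched in Python where `pathlib` does the conversion for you:

```python
from pathlib import PureWindowsPath

def windows_path_to_sqlite_uri(path: str, params: str = "mode=rwc") -> str:
    """Turn a Windows path like D:\\My Timeline\\timeline.db into a
    file URI, so the drive-letter ':' and spaces can't be misread as a
    URI scheme or authority, then append the driver's query params."""
    return PureWindowsPath(path).as_uri() + "?" + params
```

The equivalent in Go would be constructing a `url.URL` with the path component (prefixed with "/") rather than string-concatenating the DSN.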
chrisweekly 1 days ago [-]
Oh yeah, mholt is notable for having created Caddy (the web server). My interest in Timelinize just went up.
codethief 21 hours ago [-]
This looks really cool and like something I've been subconsciously looking for!
A couple thoughts & ideas:
- Given the sensitivity of the data, I would be rather scared to self-host this, unless it's a machine at home, behind a Wireguard/Tailscale setup. I would love to see this as an E2E-encrypted application, similarly to Ente.io.
- Could the index and storage backend be decoupled, so that I can host my photos etc. elsewhere and, in particular, prevent data duplication? (For instance, if you already self-host Immich or Ente.io and you also set up backups, it'd be a waste to have Timelinize store a separate copy of the photos IMO.) I know this is not entirely trivial to achieve, but for viewing & interacting with different types of data there are already tons of specialized applications out there. Timelinize can't possibly replace all of them.
The problem with any other model, AFAIK, is that someone else has access to your data, unless I implement an encrypted live database, e.g. with homomorphic encryption. But even then, I'm sure the data would have to be decrypted in memory in places (like transcoding videos or encoding images, for starters), and the physical owner of the machine will always have access to that.
I just don't think any other way of doing it is really feasible to truly preserve your privacy. I am likely wrong, but if so, I also imagine it's very tedious, nuanced, error-prone, and restrictive.
(Or maybe I'm just totally wrong!)
> - Could index and storage backend be decoupled, so that I can host my photos etc. elsewhere and, in particular, prevent data duplication?
I know this is contentious for some, but part of the point is to duplicate/copy your data into the timeline. It acts as a backup, and it ensures consistency, reliability, and availability.
Apps like PhotoStructure do what you describe -- and do a good job of indexing external content. I just think that's going to be hard to compel in Timelinize.
Agreed! I played with Signal exports for a while, but the format changed enough that it was difficult to rely on as a data source. Especially since it's not obvious what changed -- it's encrypted, so it's kind of a black box.
That said, anyone is welcome to contribute more data sources. I will even have an import API at some point, so the data sources don't have to be compiled in. Other scripts or programs could push data to Timelinize.
Just to reiterate, one of the main goals of Timelinize is to have your data. It may mean some duplication, but I'm OK with that. Storage is getting cheap enough, and even if it's expensive, it's worth it.
codethief 21 hours ago [-]
Thanks for your thoughtful response!
> I just don't think any other way of doing it is really feasible to truly preserve your privacy. I am likely wrong, but if so, I also imagine it's very tedious, nuanced, error-prone, and restrictive.
It's certainly not easy, but I wouldn't go as far as saying it requires homomorphic encryption. Have you had a look at what the Ente.io people do? Even though everything is E2E-encrypted, they have (purely local) facial recognition, which to me sounds an order of magnitude harder (and more compute-intensive) than building a chronological index/timeline. But maybe I'm missing something here, which isn't unlikely, given that I'm not the person who just spent a decade building this very cool tool.
> It acts as a backup, and it ensures consistency, reliability, and availability.
Hmmm, according to you[0],
> Timelinize is an archival tool, not a backup utility. Please back up your timeline(s) with a proper backup tool.
;)
I get your point, though, especially when it comes to reliability & availability. Maybe the deduplication needs to happen at a different level, e.g. at the level of the file system (ZFS etc.) or at least at the level of backups (i.e. have restic/borgbackup deduplicate identical files in the backed-up data).
Then again, I can't say I have not had wet dreams once or twice of a future where apps & their persistent data simply refer to user files through their content hashes, instead of hard-coding paths & URLs. (Prime example: Why don't m3u playlist files use hashes to become resistant against file renamings? Every music player already indexes all music files, anyway. Sigh.)
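That content-hash idea is easy to prototype: index files by a digest of their bytes, and references survive renames because you look a file up by hash, not by its (mutable) path. A rough sketch, assuming SHA-256 is an acceptable identity:

```python
import hashlib
from pathlib import Path

def index_by_content(root: Path) -> dict[str, Path]:
    """Map SHA-256 content hashes to current file paths. A playlist (or
    any app) storing hashes instead of paths can re-resolve its entries
    through this index after files are renamed or moved."""
    index = {}
    for f in root.rglob("*"):
        if f.is_file():
            digest = hashlib.sha256(f.read_bytes()).hexdigest()
            index[digest] = f
    return index
```

The obvious costs: hashing every byte on each scan (mitigated by caching on size+mtime), and identical files collapsing to one entry, which is either a feature (dedup) or a problem depending on your use case.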
> Especially since it's not just obvious what changes, it's encryption so it's kind of a black box.
Wouldn't you rather diff the data after decrypting the archive?
> Just to reiterate, one of the main goals of Timelinize is to have your data. It may mean some duplication, but I'm OK with that. Storage is getting cheap enough, and even if it's expensive, it's worth it.
I suspect it will lead to duplication of pretty much all user data (i.e. original storage requirements × 2), at least if you're serious about your timeline. However, I see your point, it might very well be a tradeoff that's worth it.
Correct, Timelinize is not a backup utility, but having the copy of your data in your timeline acts as a backup against losing access to your data sources, such as Google Photos, or your social media account(s), etc. As opposed to simply displaying data that is stored elsewhere.
But yes, I think the likes of what Ente is doing is interesting, though I don't know the technical details.
Data content hashes are pretty appealing too! But they have some drawbacks and I decided not to lean heavily on them in this application, at least for now.
fivestones 10 hours ago [-]
I'm not sure how exactly Timelinize stores photos, but you could sync photos to Timelinize as you take them and then, if they are accessible, point Immich to the Timelinize photos for its use. That would essentially deduplicate your photos.
mholt 4 hours ago [-]
They are just organized into folders on disk, so that could definitely work!
akersten 1 days ago [-]
This is an amazing idea, but do I have to run Google Takeout every time I want to update the data[0]? Unfortunately that's such a cumbersome process that I don't think I'd use this. But if my timeline could update in near real time, this would be a killer app.
Yeah. Major thorn in my side. I spent hours trying to automate that process using headless Chrome, and it kinda worked, until I realized that I needed to physically authenticate not just once, but every 10 minutes. So it basically can't be automated, since 2FA is needed so often.
In practice, I do a Takeout once or twice a year. (I recommend this even if not using Timelinize, so you can be sure to have your data.)
whistle650 1 days ago [-]
I thought you could set up an automatic Takeout export periodically, and choose the target to be your Google Drive. Then via a webapp oauth you could pull the data that way. Frequency was limited (looks like it says the auto export is “every 2 months for 1 year”). So hardly realtime, but seems useful and (relatively) easy? Does a method like that not work for your intentions?
mholt 1 days ago [-]
Will have to look into that. Sounds like it could be expensive but maybe worth it.
robinwassen 22 hours ago [-]
You can schedule the takeout to Drive, then use a tool such as rclone (amazing tool) to pull it down.
It should not add any costs except the storage for the takeout zip on drive.
Look at supported providers in rclone and you might find easy solutions for some hard sync problems: https://rclone.org/#providers
mholt 22 hours ago [-]
> except the storage for the takeout zip on drive.
Yeah, that's the cost I'm talking about. It essentially amounts to paying an extra subscription to be able to download your data [on a regular basis].
I'm a big rclone fan btw :) I'm sure there's some future where we do something like this to automate Takeouts.
alashow 5 hours ago [-]
Downloading from Google Drive doesn't cost anything, does it? Although I guess you would have to have enough empty space on your Google Drive to store the Takeout zip, which I think is an acceptable cost.
akersten 1 days ago [-]
Some kind of companion app that runs on my phone and streams the latest data (photos, location history, texts, etc.) back to the timeline would probably be more tractable for live updates. But that is probably a wildly different scope than the import-based workflow. This is very cool regardless.
mholt 1 days ago [-]
For sure.
About 5-6 years ago, Timelinize actually used only the Google Photos API. It didn't even support imports from Takeout yet. The problem is the API strips photos of crucial metadata including location, and gives you nerfed versions of your data. Plus the rate limits were so unbearable, I eventually ripped this out.
But yeah, an app that runs on your phone would be a nice QoL improvement.
apitman 17 hours ago [-]
Is takeout the only way to get the original photos out?
mholt 16 hours ago [-]
As far as I know, yes. ("Original" is a strong word, but it's pretty close if you don't have space saver enabled / pay for storage.)
_flux 1 days ago [-]
Syncthing from phone to a directory on PC?
That's what I do. Though I don't then put them into any system. Yet.
clueless 1 days ago [-]
How easy would it be to integrate this with Immich (instead of needing access to Google Photos)?
mholt 1 days ago [-]
Probably not hard. Timelinize's data sources have a standard API with just 2 methods [0], so it should be fairly trivial to implement, depending on how accessible Immich's data is.
To clarify, you don't grant access to Google Photos, you just do the Takeout from https://takeout.google.com to download your data first.
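For illustration only -- the actual two methods live in Timelinize's Go API, which isn't quoted here, so the names below are hypothetical -- a data source with a recognize/import split might look roughly like:

```python
from typing import Iterable, Protocol

class DataSource(Protocol):
    """Hypothetical two-method data-source interface in the spirit of
    what's described: one method says whether this source can handle
    some input, the other turns that input into timeline items."""
    def recognize(self, path: str) -> bool: ...
    def import_items(self, path: str) -> Iterable[dict]: ...

class ImmichSource:
    """Toy example of a source for an (assumed) Immich export folder."""
    def recognize(self, path: str) -> bool:
        return path.rstrip("/").endswith("immich-export")

    def import_items(self, path: str) -> Iterable[dict]:
        # A real implementation would walk the export and yield one
        # timestamped item per photo/video; this just yields a stub.
        yield {"source": "immich", "path": path}
```

The appeal of such a small surface is that adding a source is mostly parsing work, not plumbing.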
I did this by creating my own small password manager.
jeremie 4 hours ago [-]
It was a different era 15 years ago. We made some decent inroads on an API-based approach with the Locker Project https://en.wikipedia.org/wiki/Locker_(software), but the platforms quickly started gimping the APIs, and exports/takeouts were non-existent or too immature back then, so it ran out of steam.
Thanks for making this, def will be digging in!
mholt 3 hours ago [-]
Yep, I encountered the same problem. Initially it was solely API-based, but as APIs got locked down, went paid-only, or were removed entirely, I went the bulk-download route.
kylecazar 18 hours ago [-]
"Because Timelinize is entity-aware, it can project data points onto a map even without coordinate data. If a geolocated point is known for an entity around the same time of others of that entity's data points, it will appear on the map."
In the context of Timelinize, this is great! Outside of that, this sentence really drives home how much data Google could join on me -- a heavy Android/Chrome/Gmail/Maps w/ timeline user.
What are you planning with weather? Associating entities that have a known location with historical temp/forecast data?
mholt 17 hours ago [-]
Something like that, yeah. Augmenting public data sets like weather or news to add context to your timeline.
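The entity-aware projection quoted above boils down to a nearest-in-time lookup: given an entity's geolocated points, attach an item that lacks coordinates to the closest fix within some time window. A small sketch of the idea (the one-hour window is an arbitrary assumption, not Timelinize's actual heuristic):

```python
from bisect import bisect_left

def nearest_location(fixes: list[tuple[float, tuple[float, float]]],
                     t: float, window: float = 3600.0):
    """fixes: an entity's (timestamp, (lat, lon)) points, sorted by time.
    Return the coordinate nearest in time to t if it falls within
    `window` seconds, else None."""
    times = [ts for ts, _ in fixes]
    i = bisect_left(times, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(fixes)]
    if not candidates:
        return None
    best = min(candidates, key=lambda j: abs(times[j] - t))
    return fixes[best][1] if abs(times[best] - t) <= window else None
```

A fancier version would interpolate between the two neighboring fixes instead of snapping to one, but the principle is the same.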
novoreorx 7 hours ago [-]
This reminds me of a similar project called HPI [1] that I discovered in 2022; the idea of aggregating one's digital footprints into one single private place really inspires me. Timelinize is almost like a GUI version of HPI, a more sophisticated product that everyone can use. I will definitely try it. Congrats and respect!
P.S. HPI has many existing exporters, so it might be helpful to extend Timelinize's capacity.
Timelinize's data sources are easily extensible so adding more shouldn't be too hard, HPI included.
Thank you!
Tepix 24 hours ago [-]
Nice project! If you don't like "Timelinize" -- have you looked at Latin names? Perhaps something like Temperi?
In terms of features, I'd like to see support for FindPenguins. A lot of interesting data (photos, videos, GPS coordinates, text) is already there.
mholt 24 hours ago [-]
A few Latin names have been suggested, but nothing has stuck. The problem is they're usually difficult to spell and pronounce, which isn't really an improvement over the current situation :)
FindPenguins is cool! I don't use it myself, but anyone is welcome to implement a data source for it.
alex_duf 10 hours ago [-]
Thank you so much for this! I've also had the idea but, contrary to you, never had the courage or energy to actually do it. I have some of my old devices backed up, with the very first texts my (now) wife and I exchanged, and this feels like the perfect use case.
So thank you so much, I now have another project to host on my home server, and a meaningful one.
>(PS. I'm open to changing the name. Never really liked this one...)
Timelinize is great; it explains what the product does.
mholt 4 hours ago [-]
Thanks for the feedback; especially about the name. I'm glad you value the project's goals!
slightwinder 7 hours ago [-]
Interesting. But this seems to be limited to the supported data sources? No generic format to build your own sources? Also, why the focus on a timeline? Not all personal data or tasks are time-related. So is this more about personal data generally, or a time view of personal data?
If you want collaboration on this, maybe think about splitting it into a backend and frontends -- the backend being the storage and its import mechanism, and the frontend being open for anyone to put their own view onto the data, and maybe work with it.
There is demand for maintaining personal data, but so far there is no real unified project catering to a broad range of services and different types of data. So there is a space to fill and a community to build, IMHO.
mholt 4 hours ago [-]
> Interesting. But this seems to be limited to the supported data-sources?
I'm designing an import API that will let any script or program push data to the timeline.
> Also, why the focus on a timeline?
Seemed to make the most sense.
> Not all personal data or tasks are time-related.
Probably true. Do you have examples of what you're thinking of?
> If you want to have collaboration on this, maybe think about splitting it into backend and frontends. With the backend being the storage, and it's import-mechanism, and the frontend being open for anyone to put their own view onto the data, and maybe work with them.
That's how it is right now; it's a client-server architecture. It has a CLI and a JSON API so you can build your own front-ends.
> Maintaining personal data has demand here, but so far there is no real unified project catering to a broad range of services and different types of data. So there is a space to fill and a community to build IMHO.
I agree, this is a huge void that needs to be filled!
slightwinder 3 hours ago [-]
> Probably true. Do you have examples of what you're thinking of?
Bookmarks, contacts, and notes, for example: they are usually organized not by time but by context, category, or whatever organization their usage demands at the moment. They can be time-based in a journal, but I don't usually remember websites or people by when I encountered them, so I wouldn't search for them that way.
The other question is how personal is "personal" in this project? Are my IMDb ratings valid data for this? My ebook collection? My Steam account? Those are usually things I would manage not in a timeline or a map, but with different interfaces and features.
BubbleRings 15 hours ago [-]
Great work. Various ideas here:
You might suggest to users the following use case. “If you want to create a Timelinize data store, but don’t feel that your own local systems are secure enough to safely hold a basket with a copy of every egg in your life, you might consider the following use case, which some of our customers implement. Once or twice a year, update the data store, but store it on an external disk. When the update is done, take the disk offline and keep it in a drawer or safe.”
Also
I always wondered how cool it would be if I could tell some Spotify-like system, “I’m 20 miles away from the lake, we are going to stay in the cabins a week, just like we did 10 years ago. Play me the exact same songs now that played at every turn back then.”
Also
For a name, how about:
ChronEngine
That name seems pretty free and clear from previous use, if you like it grab ChronEngine.com before some squatter does and thank me with a phone call, I would enjoy a quick chat with you.
Also
Your web page might benefit from a grid that lists all the input sources you accept, with hotlinks next to the names that give a pop-up quick summary of what that source is about, and maybe some color coding that shows something like “green = rock solid lately”, “yellow = some users reporting problems”, “red = they just updated their format, it’s broken, we are working on it”. You will/are facing challenges similar to Trillian, a chat client from the early 2000s that would try to maintain ongoing connection compatibility with multiple other chat clients such as AIM/ICQ/MSN. Also, the grid could have a “suggested source sets” filter that helped people find what 5 (for example) input sources they might select for their use style.
Oh and make a list of anybody that says they have done an elaborate something similar with Excel (like me and at least one other person in this thread) and maybe have a discussion with them some time, we/they might have some useful insights.
Let’s hear it for people on the opposite side of the “go fast and break things” coin! My first project took 16 years. My current one I started 28 years ago!
BubbleRings 15 hours ago [-]
And one more idea from me. I could see your current system to be deliberately kept like it is, but offered as a two-part system, where the second part is, “once you get your timeline built and you have reviewed it carefully, then click here and the local-only LLM will have access to the whole thing.” The two part nature could be a big competitive advantage, helping people to carefully build a LLM system that, for instance, they could then offer to their whole family to peruse, without having to worry too much that it accidentally included information that it should not have.
mholt 4 hours ago [-]
Thank you for the great feedback!
> You might suggest to users the following use case. “If you want to create a Timelinize data store, but don’t feel that your own local systems are secure enough to safely hold a basket with a copy of every egg in your life, you might consider the following use case, which some of our customers implement. Once or twice a year, update the data store, but store it on an external disk. When the update is done, take the disk offline and keep it in a drawer or safe.”
Sure, I like that.
> I always wondered how cool it would be if I could tell some Spotify-like system, “I’m 20 miles away from the lake, we are going to stay in the cabins a week, just like we did 10 years ago. Play me the exact same songs now that played at every turn back then.”
It's inevitable for an LLM and other tooling like personal assistants to be integrated to this thing, so yeah that sounds like a great use case.
> For a name, how about: ChronEngine. That name seems pretty free and clear from previous use; if you like it, grab ChronEngine.com before some squatter does and thank me with a phone call, I would enjoy a quick chat with you. ...
> Oh and make a list of anybody that says they have done an elaborate something similar with Excel (like me and at least one other person in this thread) and maybe have a discussion with them some time, we/they might have some useful insights.
Not bad actually; and sure, we can chat either way. Feel free to book a time at https://matt.chat.
> Your web page might benefit from a grid that lists all the input sources you accept, with hotlinks next to the names that give a pop-up quick summary of what that source is about, and maybe some color coding that shows something like “green = rock solid lately”, “yellow = some users reporting problems”, “red = they just updated their format, it’s broken, we are working on it”. You will/are facing challenges similar to Trillian, a chat client from the early 2000s that would try to maintain ongoing connection compatibility with multiple other chat clients such as AIM/ICQ/MSN. Also, the grid could have a “suggested source sets” filter that helped people find what 5 (for example) input sources they might select for their use style.
I LOVED Trillian, thanks for the nostalgia. Oh man, they're still alive: https://trillian.im/ (I love the icon...)
bun_at_work 1 days ago [-]
Hey - this is awesome. I've been working on a small local app like this to import financial data and present a dashboard, for the family to use together (wife and I). So yeah - great work here, taking control of your data.
I'm curious about real-time data, or cron jobs, though. I love the idea of importing my data into this, but it would be nicer if I could set it up to automatically poll for new data somehow. Does Timelinize do something like that? I didn't see it on the page.
mholt 24 hours ago [-]
Cool, yeah, the finance use case seems very relevant. Someday it'd be cool to have a Finance exploration page, like we do for other kinds of data.
Real-time/polling imports aren't yet supported, but that's not too difficult once we land on the right design for that feature.
I tinkered with a "drop zone" where you could designate a folder that, when you add files to it, Timelinize immediately imports them (then deletes the files from the drop zone).
But putting imports on a timer would be trivial.
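The drop-zone idea is simple enough to sketch: sweep a designated folder, hand each file to the importer, then delete it (the imported copy now lives in the timeline). The import callback here is a stand-in, not Timelinize's actual API:

```python
from pathlib import Path

def sweep_drop_zone(drop: Path, import_file) -> int:
    """One pass over a drop-zone folder: import each file, then remove
    it. Run this on a timer or wire it to a filesystem watcher."""
    count = 0
    for f in sorted(drop.iterdir()):
        if f.is_file():
            import_file(f)   # hand the file to the import pipeline
            f.unlink()       # the drop zone is not the resting place
            count += 1
    return count
```

One subtlety a real version needs: skip files that are still being written (e.g. compare size across two polls) so a half-copied file isn't imported.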
ramses0 15 hours ago [-]
I have something set up at home: ~/Inbox and ~/Outbox. Anything dropped into ~/Outbox gets rsync'd to rsync.net and mv'd (locally) to ~/Inbox.
Anything in ~/Inbox is "safe to delete" because it's guaranteed to have an off-site backup.
Presumably a fancy management app would queue (or symlink) the full directory structure into ~/Inbox (which would then behave as a "streaming cache")
~/Inbox would effectively be available (read only) on "all machines" and "for free" with near zero disk space until you start accessing or pulling down files.
I use Dropbox to manage ~/Sync (aka: active, not "dead" files).
"Outbox", "Inbox", and "Sync" have been the "collaboration names" that resonated the most with me (along with ~/Public if you're old enough, and ~/Documents for stuff that's currently purely local)
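That Outbox/Inbox flow is easy to script. In this sketch a local folder stands in for the rsync-to-rsync.net step; the invariant is the one described above -- nothing lands in ~/Inbox until a second copy exists:

```python
import shutil
from pathlib import Path

def flush_outbox(outbox: Path, inbox: Path, backup: Path) -> list[str]:
    """Copy every file in outbox to the backup location, then move it
    to inbox, so inbox only ever holds files with an off-site copy
    (making everything in inbox safe to delete)."""
    moved = []
    inbox.mkdir(parents=True, exist_ok=True)
    backup.mkdir(parents=True, exist_ok=True)
    for f in sorted(outbox.iterdir()):
        if f.is_file():
            shutil.copy2(f, backup / f.name)  # stand-in for rsync off-site
            shutil.move(str(f), str(inbox / f.name))
            moved.append(f.name)
    return moved
```

A real setup would replace the `copy2` with an `rsync` invocation and only do the move after rsync exits successfully.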
sdotdev 21 hours ago [-]
Nice work. I’ve been frustrated with how closed off location history tools have become lately. This looks like a solid step toward giving people real ownership of their data again. Definitely checking this out.
mholt 16 hours ago [-]
Thank you. Yes, I feel the same!
mhamann 1 days ago [-]
Cool idea. Thanks for sharing. I was really annoyed by the way Google nerfed the Maps timeline stuff last year. Obviously this project is way more ambitious than that, but it just goes to show how little Google cares about the longevity of your data.
aetherspawn 19 hours ago [-]
This seems like the perfect thing to mix with financial records (ie bank feeds) and a local LLM.
I’m not sure what you’d use it for … exactly … but it could probably reconcile and figure out all your credit card charges based on your message history and location, allocate charges to budgets, and show more analytics than you’re probably interested in knowing.
People with a cloud connected car such as a Tesla could probably get some real “personal assistant” type use cases out of this, such as automatically sorting your personal and business travel kms, expenses and such for tax purposes.
There’s probably other use cases like suggesting local experiences you haven’t done before, and helping you with time management.
mholt 16 hours ago [-]
Yeah, I hear this a lot, I would love to have my financials and an LLM integrated as well!
Could make for a very interesting/useful/private personal assistant.
ramses0 15 hours ago [-]
ledger.txt (plaintextaccounting.org), g-cal integration, and Home Assistant are all so close to each other.
ObscureScience 18 hours ago [-]
This reminds me of Perkeep, but I understand this is more focused on the presentation of the data, while Perkeep was on the data storage.
But maybe they could be integrated, or at least support each other's formats.
TheTaytay 1 days ago [-]
I really like the local storage of this. Files and folders are the best!
(When noodling on this, I’ve also been wondering about putting metadata for files in sidecar files next to the files they describe, rather than a centralized SQLite database. Did you experiment with anything like that by any chance?)
mholt 24 hours ago [-]
Why sidecar metadata files? In general, I've tried to minimize the number of files on disk, since lots of small files makes copying slow and error-prone. (A future version of Timelinize will likely support storing ALL the data in a DB to make for faster, easier copying.) We'd still need a DB for the index anyway, which essentially becomes a copy of the metadata.
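The "ALL the data in a DB" direction is basically BLOB storage alongside the metadata, so copying the whole timeline is copying one file. A minimal sketch of the idea (not Timelinize's actual schema):

```python
import sqlite3

def store_file(con: sqlite3.Connection, name: str, data: bytes) -> None:
    """Keep file content as a BLOB keyed by a logical name, next to
    whatever metadata tables the index already has."""
    con.execute(
        "CREATE TABLE IF NOT EXISTS blob_store (name TEXT PRIMARY KEY, data BLOB)")
    con.execute("INSERT OR REPLACE INTO blob_store VALUES (?, ?)", (name, data))
    con.commit()

def load_file(con: sqlite3.Connection, name: str) -> bytes:
    (data,) = con.execute(
        "SELECT data FROM blob_store WHERE name = ?", (name,)).fetchone()
    return data
```

The tradeoff versus plain folders is losing direct access from other tools (like pointing Immich at the photo folders, mentioned upthread) in exchange for one-file portability.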
dav43 16 hours ago [-]
Don’t know if you have seen the work DuckDB is doing on ducklake. Maybe there is an overlap in vision for versioning data across multiple data sources - and similar to SQLite it’s not proprietary and easily drilled down on. I’m sorry, don’t have technical knowledge :/
mentalgear 9 hours ago [-]
This basically brings your data from the cloud to local-first! Kudos for your dedication, and especially for making this open source for the benefit of everyone!
steveharrison 9 hours ago [-]
This is really cool! I’ve always wanted to do something similar, but didn’t want to invest the time, so thanks for seeing this through!
Jarwain 15 hours ago [-]
Oh I love this oh so much. It lines up with a lot of things I've wanted or am planning. So I'm definitely going to take a dive into your source for inspiration
One thing I wish for encompasses the idea that I might want to share different slices of my life with different groups
mholt 14 hours ago [-]
Sharing is planned. Long term roadmap but definitely on my list!
Bradwheat 42 minutes ago [-]
I was looking for a "sharing idea" comment, so it's great to see something is planned.
Being able to share a time-based, geo-based, or X-based slice (all or some of) with someone would be a great way to learn about "close encounters" before or after we met, or even someone else's perspective of a shared trip or experience.
Just a general comment; you've made something truly wonderful (IMHO), and I look forward to seeing where you take this.
rixed 22 hours ago [-]
Like others I really like the idea and the realisation looks great too!
I might not be the typical user for this, because I'd prefer my data to actually stay in the cloud where it is, but I'd still like to have it indexed and timelined. Can Timelinize do this? Like, instead of downloading everything from gphoto, youtube, bluesky, wtv, just index what's there and offer the same interface? And only optionally download the actual data in addition to the metadata?
mholt 21 hours ago [-]
That's not really aligned with my vision/goals, which is to bring my data home; but to be clear, downloading your data doesn't mean it has to leave the cloud. You can have your cake and eat it too.
The debate between importing the data and indexing external data is a long, grueling one, but ultimately I cannot be satisfied without having my data guaranteed to be locally available.
I suppose in the future it's possible we could add the ability to index external data, but this likely wouldn't work well in practice since most data sources lock down real-time access via their API restrictions.
whacked_new 16 hours ago [-]
Super interested in this as well (and thank you for Caddy)
How does this handle data updating / fixing? My use case is importing data that's semi structured. Say you get data from a 3rd party provider from one dump, and it's for an event called "jog". Then they update their data dump format so "jog" becomes subdivided into "light run" vs "intense walk", and they also applied it retroactively. In this case you'd have to reimport a load of overlapping data.
I saw the FAQ, and it only mentions that imports are not strictly additive.
I am dealing with similar use cases of evolving data and don't want to deal with SQL updating, and end up working entirely in plain text. One advantage is that you can use git to enable time traveling (for a single user it still works reasonably).
mholt 16 hours ago [-]
Glad you like Caddy!
> How does this handle data updating / fixing?
In the advanced import settings, you can customize what makes an item unique or a duplicate. You can also configure how to handle duplicates. By default, duplicates are skipped. But they can also be updated, and you can customize what gets updated and which of the two values to keep.
But yes, updates do run an UPDATE query, so they're irreversible. I explored schemas that were purely additive, so that you could traverse through mutations of the timeline, but this got messy real fast, and made exploring (reading) the timeline more complex/slow/error-prone. I do think it would be cool though, and I may still revisit that, because I think it could be quite beneficial.
infogulch 23 minutes ago [-]
Have you heard of XTDB / bitemporality? The basic idea is to make time 2-dimensional, where each record has both a System Time range and a Valid Time range. It's designed as an append-only DB with full auditability for compliance purposes.
With 2D time you can ask complex questions about what you knew when, with simpler questions automatically extended into a question about the current time. Like:
"What is the price?" -> "What is the price today, as of today?"
"What was the price in 2022?" -> "What was the price in 2022, as of today?"
"What was the price in 2022, as of 2023?"
You probably don't want to just switch to XTDB, but if you pursue this idea I think you should look into 2D time as I think it is schematically the correct conceptualization for this problem.
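For the curious, the 2D-time idea can be sketched in a few lines of plain SQLite. The table and column names here are invented for illustration; this is neither XTDB's nor Timelinize's actual schema:

```python
import sqlite3

# Each fact carries two time ranges: when it was true in the world
# (valid_from/valid_to) and when the database believed it (sys_from/sys_to).
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE prices (
    item TEXT, price REAL,
    valid_from INT, valid_to INT,   -- when the price applied
    sys_from INT, sys_to INT        -- when we recorded / superseded it
)""")
# In 2022 we recorded a price of 10; in 2023 we learned it had really
# been 12 all along, so we close the old system-time range and append
# a correction instead of overwriting (the table stays append-only).
db.execute("INSERT INTO prices VALUES ('widget', 10, 2022, 9999, 2022, 2023)")
db.execute("INSERT INTO prices VALUES ('widget', 12, 2022, 9999, 2023, 9999)")

def price(item, valid_at, as_of):
    # "What was the price at `valid_at`, as of `as_of`?"
    row = db.execute("""SELECT price FROM prices
        WHERE item=? AND valid_from<=? AND ?<valid_to
          AND sys_from<=? AND ?<sys_to""",
        (item, valid_at, valid_at, as_of, as_of)).fetchone()
    return row[0] if row else None

print(price("widget", 2022, 2022))  # 10.0 -- what we believed back then
print(price("widget", 2022, 2024))  # 12.0 -- what we know as of today
```

Both questions hit the same table; the "as of" axis just selects which recorded belief to read.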
Thanks for the reply! I'll have to try this out... it almost looks like what perkeep was meant to become.
One interesting scenario re time traveling is if we use an LLM somewhere in data derivation. Say there's a secondary processor of e.g. journal notes that yield one kind of feature extraction, but the model gets updated at some point, then the output possibilities expand very quickly. We might also allow human intervention/correction, which should take priority and resist overwrites. Assuming we're caching these data then they'll also land somewhere in the database and unless provenance is first class, they'll appear just as ground truth as any other.
Bitemporal databases look interesting but the amount of scaffolding above sqlite makes the data harder to manage.
So if I keep ground truth data as text, looks like I'm going to have an import pipeline into timelinize, and basically ensure that there's a stable pkey (almost certainly timestamp + qualifier), and always overwrite. Seems feasible, pretty exciting!
zenmac 9 hours ago [-]
> Recommended: GPU (NVIDIA 20-series or newer; 40-series for faster thumbnails) or Apple Silicon (M2 or newer)
Very nice project; it's something we all need these days. Just noticed that you recommend Apple Silicon M2 or newer? Is there really a significant difference between M1 and M2? Is it running some LLM model in the background?
ChrisbyMe 1 days ago [-]
Very cool! I have a sketchy pipeline for exporting my data from Gmaps to my personal site and always thought about building something like this.
This could be a really interesting as a digital forensics thing.
coffeecoders 23 hours ago [-]
Love the grind! One suggestion would be to add a demo link with some test data so we can see it in action.
I am also slowly "offlining" my life. Currently, it is a mix of synology, hard drives and all.
I have always thought about building a little dashboard to access everything really. Build a financial dashboard[1] and now onto photos.
A live demo would be great, but I'm not sure how to generate the fake data in a way that imitates real data patterns. That's originally how I wanted to demo things, but the results weren't compelling. (It was a half-hearted effort, I admit.) So I switched to obfuscating real data.
FreeMyCash looks great! Yours is the second financial application I've heard of; maybe we need to look at adding some finance features soon.
mh- 17 hours ago [-]
Hmm, I bet you could find appropriately-licensed datasets on Kaggle or HuggingFace that you could repurpose for that.
junon 1 days ago [-]
I had this same idea for a long time. Even took github.com/center for it (I've since changed how it's being used). Cool to see someone actually achieve it, well done.
hexagonwin 17 hours ago [-]
Looks awesome! Btw, it would be nice if it could also support audio files; I frequently make multi-hour recordings (meetings, presentations, etc.) and it would be very useful to see pictures and other stuff from a specific time together with the audio from that moment.
mholt 16 hours ago [-]
Oh it actually does support audio files :) Haven't tested them in a couple years but a player should appear.
dav43 16 hours ago [-]
I have been recording my GPS location every few minutes for the past 10 years for this exact product.
Looks interesting.
zdc1 15 hours ago [-]
Out of curiosity, how have you been doing this?
wizzard0 23 hours ago [-]
Wow, that's great! I wonder if it's possible to use not just a folder but something like an S3-compatible backend for photos and for DB backups as well.
(I don't think all my photo/video archives would fit on my laptop, though the thumbnails definitely would, while minio or something replicated between my desktop plus a backup machine at Hetzner or something would definitely do the thing)
mholt 23 hours ago [-]
I don't think sqlite runs very well on S3 file systems. I think it would also be insufferably slow.
I even encountered crashes within sqlite when using ExFAT -- so file system choice is definitely important! (I've since implemented a workaround for this bug by detecting exfat and configuring sqlite to avoid WAL mode, so it's just... much slower.)
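For reference, the WAL fallback described here amounts to something like the following toy Python/SQLite sketch. The `fs_supports_wal` flag stands in for the actual ExFAT detection, which isn't shown (and Timelinize itself is written in Go, not Python):

```python
import os
import sqlite3
import tempfile

def open_timeline_db(path, fs_supports_wal=True):
    # WAL mode relies on shared-memory (-shm) sidecar files that some
    # file systems (ExFAT, many network mounts) handle badly, so on
    # those we fall back to the slower rollback journal.
    db = sqlite3.connect(path)
    mode = "WAL" if fs_supports_wal else "DELETE"
    # PRAGMA journal_mode returns the mode actually in effect.
    actual = db.execute(f"PRAGMA journal_mode={mode}").fetchone()[0]
    return db, actual

path = os.path.join(tempfile.mkdtemp(), "timeline.db")
db, mode = open_timeline_db(path, fs_supports_wal=False)
print(mode)  # delete
```

The same pragma round-trip also serves as a sanity check: if the file system silently refuses WAL, the returned mode tells you so.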
Definitely not sqlite-on-s3! Just for the photos and videos, and the periodic db backups
mholt 22 hours ago [-]
I see... that might make it hard to keep all the data together, which is one of the goals. But I will give it some thought.
mustafauysal 6 hours ago [-]
It reminded me of Camlistore (Perkeep). I've always liked that kind of project since I'm terrible at keeping my personal data organized.
mholt 4 hours ago [-]
Yeah, big fan of Perkeep. I think it's well-engineered. (Better than Timelinize probably! Just missing some important features for me.)
asciii 18 hours ago [-]
Oh man, this is awesome. I wanted an answer to Apple's Journal because I personally am using Obsidian (doing my best to tag and ref data). But this can be what I need to access that metadata layer and combine everything
kevinsync 22 hours ago [-]
As for branding, IMO you could go a bunch of directions:
Timelines
Tempor (temporal)
Chronos
Chronografik
Continuum
Momentum (moments, memory, momentum through time)
IdioSync (kinda hate this one tbh)
Who knows! Those are just the ones that fell out of my mouth while typing. It's just gotta have a memorable and easy-to-pronounce cadence. Even "Memorable" is a possibility LOL
-suggestions from some dude, not ChatGPT
kevinsync 22 hours ago [-]
Dateline (with the Dateline NBC theme song playing quietly in the background while you browse your history and achievements)
infogulch 22 hours ago [-]
Momenta
petepete 20 hours ago [-]
Scribe? As in the person who writes the timeline.
aboodman 10 hours ago [-]
Matt, I am so impressed by how far this has come. WRT naming, what about "Journey".
mholt 4 hours ago [-]
Thanks man! Good to see you here. Nice suggestion, I have a list on a text doc here, I'll add this one :)
aboodman 10 hours ago [-]
OTOH, I always thought just simplifying to "Timeline" was the obvious move?
diodoe 9 hours ago [-]
Great idea! Do you plan to integrate with other services? I'm looking for similar solutions to collect feedback and improve my product. Are you already getting feedback from the community?
mholt 4 hours ago [-]
Got a lot of feedback here so far, plenty to keep me busy. And there will be an import API to let scripts/programs push data to Timelinize, even if it doesn't support those data sources built-in.
TheTaytay 1 days ago [-]
Sounds really cool. I’ve been wanting something like this. Kudos for building it!
Interesting; how easy is it to back it up somewhere (yes, on a cloud, for example) and then restore/sync it on another machine? Is the data format portable and easy to move like this?
mholt 1 days ago [-]
Yep -- a timeline is just a folder with regular files and folders in it. They're portable across OSes. I've tried to account for differences in case-sensitive and -insensitive file systems as well. So you can copy/move them and back them up like you would any other directory.
jdthedisciple 5 hours ago [-]
ballast.
i want to forget the past except for keeping the warmest moments in my heart.
i know others will feel differently.
that said, beautiful presentation and website!
mholt 4 hours ago [-]
Thank you!
And yeah, I feel ya. This lets you delete things too.
totetsu 14 hours ago [-]
For many years I kept Tiny Travel Tracker running on my Android phone, and would periodically import it into os&m maps. It was nice to have a record of exactly all the places I had wandered to, and not also share that with Five Eyes.
https://f-droid.org/en/packages/com.rareventure.gps2/
Nice one, thanks for sharing. For sure I’ll give it a try.
Have you thought of creating a setup so as to package all libraries and dependencies needed? You have a very nice installation guide, but there are many users who just want the setup.exe :-)
mholt 21 hours ago [-]
Thank you! Not sure I can package everything because of license requirements. IANAL. I think the container image basically automates the various dependencies, but I didn't create it and I don't use it so I'm not 100% sure.
Basically, I would love to know how to do this correctly for each platform, but I don't know how. Help would be welcomed.
mtyurt 23 hours ago [-]
Very nice project! Curiosity question: since you are taking data dumps once or twice a year, and let's say you also copy the photos as well, do you do any updates incrementally, or just replace the old dump with the new one?
mholt 23 hours ago [-]
Timelinize doesn't import duplicates by default, so you can just import the whole new Takeout and it will only keep what is new.
But you have control over this:
- You can customize what makes an item a duplicate or unique
- You can choose whether to update existing items, and how to update them (what to keep of the incoming item versus the existing one)
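Those two knobs (what counts as a duplicate, and whether to skip or update) can be approximated in plain SQLite. The table and uniqueness key below are hypothetical, not Timelinize's real schema:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE items (
    ts INT, source TEXT, body TEXT,
    PRIMARY KEY (ts, source)   -- hypothetical "what makes an item unique"
)""")

def import_item(ts, source, body, on_duplicate="skip"):
    # "skip" keeps the existing row (INSERT OR IGNORE);
    # "update" keeps the incoming one (INSERT OR REPLACE).
    verb = "IGNORE" if on_duplicate == "skip" else "REPLACE"
    db.execute(f"INSERT OR {verb} INTO items VALUES (?,?,?)",
               (ts, source, body))

import_item(1, "sms", "hello")
import_item(1, "sms", "hello (edited)")            # duplicate: skipped
import_item(1, "sms", "hello (edited)", "update")  # duplicate: overwrites
print(db.execute("SELECT body FROM items").fetchone()[0])  # hello (edited)
```

Re-importing a whole Takeout is then idempotent: only rows whose key isn't already present actually land.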
LatticeAnimal 1 days ago [-]
Beautiful app. Surprised to see jQuery for your frontend; brings back good old memories.
mholt 1 days ago [-]
Ha, thanks! It’s actually AJQuery, just a two-line shim to gain the $ sugar. Otherwise vanilla JS.
LelouBil 19 hours ago [-]
This looks so cool ! I will try it ASAP !
iamleppert 4 hours ago [-]
This is just some feedback. But when including product shots on your web site, at least don't use real user data where you have to blur most of the content. It looks tacky and unprofessional.
renewiltord 1 days ago [-]
I've always wanted this but not enough to build it. I wonder if I can integrate this with my Monica instance. Thank you! I'm going to try it.
mholt 1 days ago [-]
Will be curious how you use it. I plan to integrate local LLM at some point but it’s still nebulous in my head.
endless-r0ad 5 hours ago [-]
Call it D-FUKT and the purpose would be to show people how absolutely fucked all of their data is. If you list every corp/org that has access to each piece of data, this could be cool
phito 1 days ago [-]
That's great, I've been running a timeline of my life in excel, I wonder if this could replace it.
john_minsk 20 hours ago [-]
Amazing project. In the era of AI I can see the software like this being used daily.
coolelectronics 24 hours ago [-]
I've been basically doing this for years via a private mastodon instance. Very nice to see!
willwade 21 hours ago [-]
Yeah I totally want this. How much data are we talking about on average?
throwrb 21 hours ago [-]
For a name, how about 'Rain Barrel' ?
Your own personal cloud
Congratulations, good work, and good luck. "TickTock" might be an appropriate name.
sneak 12 hours ago [-]
When announcing a product, the language, license, version number, amount of time under development since inception, and approximate number of contributors are all critical pieces of information to include in an announcement.
Ten years is a long time. Is it just you? What language is it in? Does it run on desktop or does it need a server? What is the license?
cawksuwcka 14 hours ago [-]
outside of pictures living in the past bro
Lumoscore 13 hours ago [-]
This is exactly the type of tool the world needs right now. The pitch—"Privately organize your own data from everywhere, locally"—is powerful and hits right at the heart of the current privacy and data ownership anxiety.
We've been conditioned to simply let our most precious memories and life records (photos, chats, locations) slowly dissolve into the opaque, proprietary silos of mega-corporations. The fact that Timelinize enables us to pull that data back down from Google Takeout, social media exports, and device backups, and then organize it cohesively and locally is a monumental step toward digital self-sovereignty.
What I love most:
Unified Context: Aggregating data across sources (like seeing SMS and Facebook messages with the same person in one unified thread) is genius. This is the crucial context that cloud platforms deliberately fragment.
Local Privacy: Storing everything in an easily navigable local SQLite database is the gold standard for personal data. It means the developer, or any third party, cannot monetize or surveil my history.
Data Longevity: Knowing that even if a service like Google Location History shuts down its export (as it did recently), my historical data is safe and searchable on my own hard drive is the only way to genuinely keep these records "alive forever."
This isn't just a useful app; it's a vital piece of infrastructure for anyone who cares about building a personal, permanent digital archive of their life outside of the corporate cloud. I hope this gains massive traction. Great work on addressing a foundational digital problem!
ValveFan6969 13 hours ago [-]
Em dash, "not just blank, it's blank". ChatGPT slop garbage. Calling on all ycombinator moderators to please nuke this post with a powerful ion cannon. Thank you.
anon7000 13 hours ago [-]
IMO, em dash isn’t a great signal — plenty of people used them already. I can type em-dash easily on my phone with double dashes.
the giveaway for me is the too-perfect summary list of things it loves the most
AppleBananaPie 13 hours ago [-]
It's sad that it's becoming even more common on HN :(
I wish I could find it again.
https://www.cs.yale.edu/homes/freeman/lifestreams.html
https://www.scribd.com/document/18361503/Eric-T-Freeman-The-...
https://en.wikipedia.org/wiki/The_Humane_Interface#:~:text=N...
https://www.usenix.org/conference/osdi14/technical-sessions/...
This could even allow resetting the state of the computer back in time so you can pick up exactly where you left off, or undo away mistakes regardless of app, etc.
Definitely some privacy concerns but for a self ran open source thing I think it could be really cool
Sounds in-scope so far. Long-term, perhaps, and maybe optional add-on features rather than built-in, but we'll see.
> I want it to record and display all browser tabs I open, when I do so, everything I copy and paste, every key I press.
That is possible in theory, but for me personally that's just too detailed. :D I wouldn't need all that granularity, myself.
But hey, the vision is pretty similar. We are generating all sorts of data to document and understand our lives -- we don't even have to deliberately write a journal -- but we have no way of comprehending it. This app is an attempt to solve that.
I do see that. I think for that reason it would be cool to support a kind of extension system for arbitrary "collectors", plus solid filtering of the collected data.
The vision is definitely similar. I am very pleasantly surprised to see your project. And I also like your ideas/roadmap on the website. I know you are building this for yourself/your family but I certainly would be open to contribute to it.
From reading the website, I mostly felt like Timelinize was about sitting *behind* data-generating applications and showing and cross-referencing their data.
I think the way to go is some sort of Nextcloud extension that puts data into Timelinize.
I also saw on the website that it tracks "documents", but I haven't tried that yet; I would hope it can use external document sources that are already processing documents, like Paperless for example (which I am already using and liking).
So yes, Timelinize sits behind your current workflows. It's more of a resting place for your data (which you can still organize and curate -- more features to come in this regard), but I can also see why it might make sense to be one's primary photo library application in the future, with more development.
As for document storage, this is still WIP, and the implementation could use more discussion to clarify the vision and specifics.
> That is possible in theory, but for me personally that's just too detailed. :D I wouldn't need all that granularity, myself.
Think this can go quite far with just the browsing history & content of viewed webpages.
A couple thoughts & ideas:
- Given the sensitivity of the data, I would be rather scared to self-host this, unless it's a machine at home, behind a Wireguard/Tailscale setup. I would love to see this as an E2E-encrypted application, similarly to Ente.io.
- Could index and storage backend be decoupled, so that I can host my photos etc. elsewhere and, in particular, prevent data duplication? (For instance, if you already self-host Immich or Ente.io and you also set up backups, it'd be a waste to have Timelinize store a separate copy of the photos IMO.) I know, this is not entirely trivial to achieve, but for viewing & interacting with different types of data there are already tons of specialized applications out there. Timelinize can't possibly replace all of them.
- Support for importing Polarsteps trips, and for importing Signal backups (e.g. via https://github.com/bepaald/signalbackup-tools ) would be nice!
> unless it's a machine at home,
This is, in fact, the intended model.
The problem with any other model, AFAIK, is that someone else has access to your data, unless I implement an encrypted live database, like with homomorphic encryption. But even then, I'm sure the data would have to be decrypted in memory in places (like transcoding videos or encoding images, for starters), and the physical owner of the machine will always have access to that.
I just don't think any other way of doing it is really feasible to truly preserve your privacy. I am likely wrong, but if so, I also imagine it's very tedious, nuanced, error-prone, and restrictive.
(Or maybe I'm just totally wrong!)
> - Could index and storage backend be decoupled, so that I can host my photos etc. elsewhere and, in particular, prevent data duplication?
I know this is contentious for some, but part of the point is to duplicate/copy your data into the timeline. It acts as a backup, and it ensures consistency, reliability, and availability.
Apps like PhotoStructure do what you describe -- and do a good job of indexing external content. I just think that's going to be hard to compel in Timelinize.
> Support for importing Polarsteps trips, and for importing Signal backups (e.g. via https://github.com/bepaald/signalbackup-tools ) would be nice!
Agreed! I played with Signal exports for a while, but the format changed enough that it was difficult to rely on this as a data source. Especially since it's not obvious what changed; it's encrypted, so it's kind of a black box.
That said, anyone is welcome to contribute more data sources. I will even have an import API at some point, so the data sources don't have to be compiled in. Other scripts or programs could push data to Timelinize.
Just to reiterate, one of the main goals of Timelinize is to have your data. It may mean some duplication, but I'm OK with that. Storage is getting cheap enough, and even if it's expensive, it's worth it.
> I just don't think any other way of doing it is really feasible to truly preserve your privacy. I am likely wrong, but if so, I also imagine it's very tedious, nuanced, error-prone, and restrictive.
It's certainly not easy but I wouldn't go as far as saying it requires homomorphic encryption. Have you had a look at what the Ente.io people do? Even though everything is E2E-encrypted, they have (purely local) facial recognition, which to me sounds an order of magnitude harder (compute-intensive) than building a chronological index/timeline. But maybe I'm missing something here, which isn't unlikely, given that I'm not the person who just spent a decade building this very cool tool.
> It acts as a backup, and it ensures consistency, reliability, and availability.
Hmmm, according to you[0],
> Timelinize is an archival tool, not a backup utility. Please back up your timeline(s) with a proper backup tool.
;)
I get your point, though, especially when it comes to reliability & availability. Maybe the deduplication needs to happen at a different level, e.g. at the level of the file system (ZFS etc.) or at least at the level of backups (i.e. have restic/borgbackup deduplicate identical files in the backed-up data).
Then again, I can't say I have not had wet dreams once or twice of a future where apps & their persistent data simply refer to user files through their content hashes, instead of hard-coding paths & URLs. (Prime example: why don't m3u playlist files use hashes to become resistant to file renames? Every music player already indexes all music files anyway. Sigh.)
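The content-addressing idea in that comment fits in a few lines; all names here are invented for illustration:

```python
import hashlib

def content_id(data: bytes) -> str:
    # Identify a file by what it contains, not where it lives,
    # so renames and moves don't break references to it.
    return "sha256-" + hashlib.sha256(data).hexdigest()

# A player's index maps content IDs to wherever the files currently are.
library = {content_id(b"song bytes..."): "/music/renamed/track01.flac"}

# A "playlist" entry stores the hash instead of a path...
ref = content_id(b"song bytes...")
print(library[ref])  # ...and still resolves after the file was renamed
```

The drawbacks mentioned downthread are real, though: hashes break the moment the bytes change (re-tagging an MP3, re-encoding a photo), which is presumably why Timelinize doesn't lean heavily on them.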
> Especially since it's not just obvious what changes, it's encryption so it's kind of a black box.
Wouldn't you rather diff the data after decrypting the archive?
> Just to reiterate, one of the main goals of Timelinize is to have your data. It may mean some duplication, but I'm OK with that. Storage is getting cheap enough, and even if it's expensive, it's worth it.
I suspect it will lead to duplication of pretty much all user data (i.e. original storage requirements × 2), at least if you're serious about your timeline. However, I see your point, it might very well be a tradeoff that's worth it.
[0]: https://timelinize.com/docs/importing-data
But yes, I think the likes of what Ente is doing is interesting, though I don't know the technical details.
Data content hashes are pretty appealing too! But they have some drawbacks and I decided not to lean heavily on them in this application, at least for now.
[0]: https://timelinize.com/docs/data-sources/google-photos
In practice, I do a Takeout once or twice a year. (I recommend this even if not using Timelinize, so you can be sure to have your data.)
It should not add any costs except the storage for the takeout zip on drive.
Look at supported providers in rclone and you might find easy solutions for some hard sync problems: https://rclone.org/#providers
Yeah, that's the cost I'm talking about. It essentially amounts to paying an extra subscription to be able to download your data [on a regular basis].
I'm a big rclone fan btw :) I'm sure there's some future where we do something like this to automate Takeouts.
About 5-6 years ago, Timelinize actually used only the Google Photos API. It didn't even support imports from Takeout yet. The problem is the API strips photos of crucial metadata including location, and gives you nerfed versions of your data. Plus the rate limits were so unbearable, I eventually ripped this out.
But yeah, an app that runs on your phone would be a nice QoL improvement.
That's what I do. Though I don't then put them into any system. Yet.
To clarify, you don't grant access to Google Photos, you just do the Takeout from https://takeout.google.com to download your data first.
[0]: https://pkg.go.dev/github.com/timelinize/timelinize@v0.0.23/...
Thanks for making this, def will be digging in!
In the context of Timelinize, this is great! Outside of that, this sentence really drives home how much data Google could join on me -- a heavy Android/Chrome/Gmail/Maps w/ timeline user.
What are you planning with weather? Associating entities that have a known location with historical temp/forecast data?
P.S. HPI has many existing exporters, so it might be helpful to extend Timelinize's capacity.
[1]: https://github.com/karlicoss/HPI
Timelinize's data sources are easily extensible so adding more shouldn't be too hard, HPI included.
Thank you!
In terms of features, I'd like to see support for FindPenguins. A lot of interesting data (photos, videos, GPS coordinates, text) is already there.
FindPenguins is cool! I don't use it myself, but anyone is welcome to implement a data source for it.
So thank you so much, I now have another project to host on my home server, and a meaningful one.
>(PS. I'm open to changing the name. Never really liked this one...)
Timelinize is great, it explains what the product does
If you want to have collaboration on this, maybe think about splitting it into a backend and frontends, with the backend being the storage and its import mechanism, and the frontend being open for anyone to put their own view onto the data, and maybe work with it.
Maintaining personal data has demand here, but so far there is no real unified project catering to a broad range of services and different types of data. So there is a space to fill and a community to build IMHO.
Data sources are extensible, they basically just need to implement 2 methods: https://pkg.go.dev/github.com/timelinize/timelinize@v0.0.24/...
> No generic format to build your own sources?
I'm designing an import API that will let any script or program push data to the timeline.
> Also, why the focus on a timeline?
Seemed to make the most sense.
> Not all personal data or tasks are time-related.
Probably true. Do you have examples of what you're thinking of?
> If you want to have collaboration on this, maybe think about splitting it into a backend and frontends, with the backend being the storage and its import mechanism, and the frontend being open for anyone to put their own view onto the data, and maybe work with it.
That's how it is right now; it's a client-server architecture. It has a CLI and a JSON API so you can build your own front-ends.
> Maintaining personal data has demand here, but so far there is no real unified project catering to a broad range of services and different types of data. So there is a space to fill and a community to build IMHO.
I agree, this is a huge void that needs to be filled!
Bookmarks, contacts, notes, for example: they are usually organized not by time but by context, category, or whatever their usage demands at the moment. They can be time-based in a journal, but usually I do not remember websites or people by when I encountered them, so I would not search for them that way.
The other question is: how personal is "personal" in this project? Are my IMDb ratings valid data for this? My ebook collection? My Steam account? Those are usually things I would not manage in a timeline or on a map, but with different interfaces and features.
You might suggest to users the following use case. “If you want to create a Timelinize data store, but don’t feel that your own local systems are secure enough to safely hold a basket with a copy of every egg in your life, you might consider the following use case, which some of our customers implement. Once or twice a year, update the data store, but store it on an external disk. When the update is done, take the disk offline and keep it in a drawer or safe.”
Also
I always wondered how cool it would be if I could tell some Spotify-like system, “I’m 20 miles away from the lake, we are going to stay in the cabins a week, just like we did 10 years ago. Play me the exact same songs now that played at every turn back then.”
Also
For a name, how about ChronEngine? That name seems pretty free and clear of previous use. If you like it, grab ChronEngine.com before some squatter does, and thank me with a phone call; I would enjoy a quick chat with you.
Also
Your web page might benefit from a grid that lists all the input sources you accept, with hotlinks next to the names that give a pop-up summary of what each source is about, and maybe some color coding along the lines of “green = rock solid lately”, “yellow = some users reporting problems”, “red = they just updated their format, it’s broken, we’re working on it”. You will face (or already are facing) challenges similar to Trillian’s, a chat client from the early 2000s that tried to maintain ongoing connection compatibility with multiple other chat networks such as AIM/ICQ/MSN. The grid could also have a “suggested source sets” filter that helps people find which five (for example) input sources suit their style of use.
Oh, and make a list of anybody who says they have done something elaborate and similar with Excel (like me and at least one other person in this thread), and maybe have a discussion with them some time; we/they might have some useful insights.
Let’s hear it for people on the opposite side of the “go fast and break things” coin! My first project took 16 years. My current one I started 28 years ago!
> You might suggest to users the following use case. “If you want to create a Timelinize data store, but don’t feel that your own local systems are secure enough to safely hold a basket with a copy of every egg in your life, you might consider the following use case, which some of our customers implement. Once or twice a year, update the data store, but keep it on an external disk. When the update is done, take the disk offline and keep it in a drawer or a safe.”
Sure, I like that.
> I always wondered how cool it would be if I could tell some Spotify-like system, “I’m 20 miles away from the lake, we are going to stay in the cabins a week, just like we did 10 years ago. Play me the exact same songs now that played at every turn back then.”
It's inevitable that an LLM and other tooling like personal assistants will be integrated into this thing, so yeah, that sounds like a great use case.
> For a name, how about ChronEngine? That name seems pretty free and clear of previous use. If you like it, grab ChronEngine.com before some squatter does, and thank me with a phone call; I would enjoy a quick chat with you. ... Oh, and make a list of anybody who says they have done something elaborate and similar with Excel (like me and at least one other person in this thread), and maybe have a discussion with them some time; we/they might have some useful insights.
Not bad actually; and sure, we can chat either way. Feel free to book a time at https://matt.chat.
> Your web page might benefit from a grid that lists all the input sources you accept, with hotlinks next to the names that give a pop-up summary of what each source is about, and maybe some color coding along the lines of “green = rock solid lately”, “yellow = some users reporting problems”, “red = they just updated their format, it’s broken, we’re working on it”. You will face (or already are facing) challenges similar to Trillian’s, a chat client from the early 2000s that tried to maintain ongoing connection compatibility with multiple other chat networks such as AIM/ICQ/MSN. The grid could also have a “suggested source sets” filter that helps people find which five (for example) input sources suit their style of use.
I LOVED Trillian, thanks for the nostalgia. Oh man, they're still alive: https://trillian.im/ (I love the icon...)
I'm curious about real-time data, or cron jobs, though. I love the idea of importing my data into this, but it would be nicer if I could set it up to automatically poll for new data somehow. Does Timelinize do something like that? I didn't see it on the page.
Real-time/polling imports aren't yet supported, but that's not too difficult once we land on the right design for that feature.
I tinkered with a "drop zone": you designate a folder and, when you add files to it, Timelinize immediately imports them (then deletes the files from the drop zone).
But putting imports on a timer would be trivial.
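A single pass over such a drop zone could look something like this (a sketch in Python for illustration; `import_fn` is a hypothetical stand-in for the real importer, and a real watcher would loop on a timer or use filesystem notifications):

```python
import os

def process_drop_zone(drop_dir, import_fn):
    """Scan the drop zone once: import each regular file, then delete it.

    Mirrors the described behavior: files placed in the folder are imported
    and then removed from the drop zone. Returns the imported filenames.
    """
    imported = []
    for entry in sorted(os.scandir(drop_dir), key=lambda e: e.name):
        if entry.is_file():
            import_fn(entry.path)   # hand the file to the importer
            os.remove(entry.path)   # safe to delete once imported
            imported.append(entry.name)
    return imported
```

Putting this on a timer is then just a loop with a sleep (or a cron entry) around the single-pass function.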
Anything in ~/Inbox is "safe to delete" because it's guaranteed to have an off-site backup.
Presumably a fancy management app would queue (or symlink) the full directory structure into ~/Inbox (which would then behave as a "streaming cache")
~/Inbox would effectively be available (read only) on "all machines" and "for free" with near zero disk space until you start accessing or pulling down files.
I use Dropbox to manage ~/Sync (aka: active, not "dead" files).
"Outbox", "Inbox", and "Sync" have been the "collaboration names" that resonated the most with me (along with ~/Public if you're old enough, and ~/Documents for stuff that's currently purely local)
I’m not sure what you’d use it for … exactly … but it could probably reconcile and figure out all your credit card charges based on your message history and location, allocate charges to budgets, and show more analytics than you’re probably interested in knowing.
People with a cloud connected car such as a Tesla could probably get some real “personal assistant” type use cases out of this, such as automatically sorting your personal and business travel kms, expenses and such for tax purposes.
There’s probably other use cases like suggesting local experiences you haven’t done before, and helping you with time management.
Could make for a very interesting/useful/private personal assistant.
(When noodling on this, I’ve also been wondering about putting metadata for files in sidecar files next to the files they describe, rather than a centralized SQLite database. Did you experiment with anything like that by any chance?)
One thing I wish for encompasses the idea that I might want to share different slices of my life with different groups.
Being able to share a time-based, geo-based, or X-based slice (all or some of) with someone would be a great way to learn about "close encounters" before or after we met, or even someone else's perspective of a shared trip or experience.
Just a general comment; you've made something truly wonderful (IMHO), and I look forward to seeing where you take this.
I might not be the typical user for this, because I'd prefer my data to actually stay in the cloud where it is, but I'd still like to have it indexed and timelined. Can Timelinize do this? Like, instead of downloading everything from gphoto, youtube, bluesky, wtv, just index what's there and offer the same interface? And only optionally download the actual data in addition to the metadata?
The debate between importing the data and indexing external data is a long, grueling one, but ultimately I can't be satisfied without having my data guaranteed to be locally available.
I suppose in the future it's possible we could add the ability to index external data, but this likely wouldn't work well in practice since most data sources lock down real-time access via their API restrictions.
How does this handle data updating / fixing? My use case is importing data that's semi-structured. Say you get data from a third-party provider in one dump, and it's for an event called "jog". Then they update their dump format so "jog" becomes subdivided into "light run" vs. "intense walk", and they also apply it retroactively. In that case you'd have to reimport a load of overlapping data.
I saw the FAQ, but it only mentions that imports are not strictly additive.
I am dealing with similar use cases of evolving data and don't want to deal with SQL updating, and end up working entirely in plain text. One advantage is that you can use git to enable time traveling (for a single user it still works reasonably).
> How does this handle data updating / fixing?
In the advanced import settings, you can customize what makes an item unique or a duplicate. You can also configure how to handle duplicates. By default, duplicates are skipped. But they can also be updated, and you can customize what gets updated and which of the two values to keep.
But yes, updates do run an UPDATE query, so they're irreversible. I explored schemas that were purely additive, so that you could traverse through mutations of the timeline, but this got messy real fast, and made exploring (reading) the timeline more complex/slow/error-prone. I do think it would be cool though, and I may still revisit that, because I think it could be quite beneficial.
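The skip-vs-update behavior described above can be illustrated with SQLite's upsert syntax (a sketch with a made-up `items` table, not Timelinize's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id TEXT PRIMARY KEY, class TEXT)")

def import_item(item_id, item_class, on_duplicate="skip"):
    """Import one item; `on_duplicate` mimics the advanced import settings."""
    if on_duplicate == "skip":
        # Default: items with the same unique key are ignored.
        conn.execute("INSERT OR IGNORE INTO items VALUES (?, ?)",
                     (item_id, item_class))
    else:
        # "update": overwrite the existing row. Note this runs an UPDATE
        # under the hood, so the old value is gone (irreversible).
        conn.execute("""INSERT INTO items VALUES (?, ?)
                        ON CONFLICT(id) DO UPDATE SET class = excluded.class""",
                     (item_id, item_class))
```

Reimporting the "jog" dump with the update policy would then retroactively rewrite the event as "light run"; with the skip policy it stays "jog".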
With 2D time you can ask complex questions about what you knew when, with simpler questions automatically extended into a question about the current time. Like:
You probably don't want to just switch to XTDB, but if you pursue this idea I think you should look into 2D time, as I think it is schematically the correct conceptualization for this problem. Docs: https://docs.xtdb.com/concepts/key-concepts.html#temporal-co... | 2025 blog: https://xtdb.com/blog/diy-bitemporality-challenge | Visualization tool: https://docs.xtdb.com/concepts/key-concepts.html#temporal-co...
One interesting scenario re time traveling is if we use an LLM somewhere in data derivation. Say there's a secondary processor of e.g. journal notes that yields one kind of feature extraction, but the model gets updated at some point; then the output possibilities expand very quickly. We might also allow human intervention/correction, which should take priority and resist overwrites. Assuming we're caching these data, they'll also land somewhere in the database, and unless provenance is first-class, they'll appear just as ground-truth as any other.
Bitemporal databases look interesting but the amount of scaffolding above sqlite makes the data harder to manage.
So if I keep ground truth data as text, looks like I'm going to have an import pipeline into timelinize, and basically ensure that there's a stable pkey (almost certainly timestamp + qualifier), and always overwrite. Seems feasible, pretty exciting!
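That pipeline could be sketched as an append-only version table keyed by (pkey, recorded_at), with "what did I believe at time T" answered by a correlated subquery. This is an assumption-laden toy over SQLite, not XTDB or Timelinize's schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE versions (
    pkey        TEXT,  -- stable key: timestamp + qualifier
    recorded_at TEXT,  -- system time: when this version was imported
    kind        TEXT,  -- the (possibly LLM- or human-derived) value
    PRIMARY KEY (pkey, recorded_at))""")

def record(pkey, recorded_at, kind):
    # Append-only: re-imports add new versions instead of running UPDATEs.
    conn.execute("INSERT INTO versions VALUES (?, ?, ?)",
                 (pkey, recorded_at, kind))

def as_of(system_time):
    """Latest version of each item as known at `system_time`."""
    return dict(conn.execute("""
        SELECT pkey, kind FROM versions v
        WHERE recorded_at = (SELECT MAX(recorded_at) FROM versions
                             WHERE pkey = v.pkey AND recorded_at <= ?)""",
        (system_time,)))
```

With that shape, a retroactive "jog" → "light run" fix is just a newer version, and the old belief remains queryable instead of being overwritten.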
Very nice project; it is something we all need these days. I just noticed that you recommend Apple Silicon M2 or newer. Is there really a significant difference between M1 and M2? Is it running some LLM in the background?
This could be a really interesting as a digital forensics thing.
I am also slowly "offlining" my life. Currently, it is a mix of Synology, hard drives, and all.
I have always thought about building a little dashboard to access everything, really. Built a financial dashboard[1] and now I'm onto photos.
[1] https://github.com/neberej/freemycash/
FreeMyCash looks great! Yours is the second financial application I've heard of; maybe we need to look at adding some finance features soon.
Looks interesting.
(I don't think all my photo/video archives would fit on my laptop, though the thumbnails definitely would, while MinIO or something replicated between my desktop and a backup machine at Hetzner would definitely do the trick)
I even encountered crashes within SQLite when using exFAT -- so file system choice is definitely important! (I've since implemented a workaround for this bug by detecting exFAT and configuring SQLite to avoid WAL mode, so it's just... much slower.)
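That kind of fallback can be illustrated with Python's sqlite3 module (the `on_exfat` flag is a hypothetical placeholder; detecting the filesystem is platform-specific and omitted here):

```python
import sqlite3

def open_db(path, on_exfat=False):
    """Open a SQLite database, preferring WAL mode but falling back to the
    default rollback journal on filesystems (like exFAT) where WAL has
    proven unreliable. The fallback is slower but avoids the crashes."""
    conn = sqlite3.connect(path)
    mode = "DELETE" if on_exfat else "WAL"
    # PRAGMA journal_mode returns the journaling mode actually in effect.
    actual = conn.execute(f"PRAGMA journal_mode={mode}").fetchone()[0]
    return conn, actual
```

The returned mode lets the caller verify which journal mode SQLite actually applied, since the pragma can silently keep a different mode.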
Timelines
Tempor (temporal)
Chronos
Chronografik
Continuum
Momentum (moments, memory, momentum through time)
IdioSync (kinda hate this one tbh)
Who knows! Those are just the ones that fell out of my mouth while typing. It's just gotta have a memorable and easy-to-pronounce cadence. Even "Memorable" is a possibility LOL
-suggestions from some dude, not ChatGPT
I don’t see the link to the repo at first glance on the linked site, so linking it here: https://github.com/timelinize/timelinize
i want to forget the past except for keeping the warmest moments in my heart.
i know others will feel differently.
that said, beautiful presentation and website!
And yeah, I feel ya. This lets you delete things too.
Have you thought of creating an installer that packages all the libraries and dependencies needed? You have a very nice installation guide, but many users just want the setup.exe :-)
Basically, I would love to know how to do this correctly for each platform, but I don't know how. Help would be welcomed.
But you have control over this:
- You can customize what makes an item a duplicate or unique
- You can choose whether to update existing items, and how to update them (what to keep of the incoming item versus the existing one)
Ten years is a long time. Is it just you? What language is it in? Does it run on desktop or does it need a server? What is the license?
We've been conditioned to simply let our most precious memories and life records (photos, chats, locations) slowly dissolve into the opaque, proprietary silos of mega-corporations. The fact that Timelinize enables us to pull that data back down from Google Takeout, social media exports, and device backups, and then organize it cohesively and locally is a monumental step toward digital self-sovereignty.
What I love most:
Unified Context: Aggregating data across sources (like seeing SMS and Facebook messages with the same person in one unified thread) is genius. This is the crucial context that cloud platforms deliberately fragment.
Local Privacy: Storing everything in an easily navigable local SQLite database is the gold standard for personal data. It means the developer, or any third party, cannot monetize or surveil my history.
Data Longevity: Knowing that even if a service like Google Location History shuts down its export (as it did recently), my historical data is safe and searchable on my own hard drive is the only way to genuinely keep these records "alive forever."
This isn't just a useful app; it's a vital piece of infrastructure for anyone who cares about building a personal, permanent digital archive of their life outside of the corporate cloud. I hope this gains massive traction. Great work on addressing a foundational digital problem!
the giveaway for me is the too-perfect summary list of things it loves the most