YouTube REALLY wants to shove other content into the “subscription box” because as-is it lets you avoid all the algorithmic clickbait.
Just got done setting up Pinchflat this morning, as I need Jellyfin and SponsorBlock integration, but it's always great to see a nice GUI around yt-dlp with some new niche features.
Personally I don't even use it to watch the videos, and instead open them in the browser, but it lets you monitor just the channels you want, with a 'feed' that consists of their videos in chronological order.
It doesn't require self-hosting or a YouTube account, can skip promotional segments, and has a setting to automatically replace clickbait thumbnails.
How does yt-dlp work with SponsorBlock? Does it download the video and snip out segments?
I wish Plex still had a YouTube plugin. Right now I have a Google Sheets script that adds the latest videos of channels into various playlists on my Premium account. Keeps things simple bouncing between devices / Chromecast.
Just recently I stumbled upon these yt-dlp options, but haven't had the chance to dig deeper:
SponsorBlock Options:
    Make chapter entries for, or remove, various segments (sponsor, introductions, etc.)
    from downloaded YouTube videos using the SponsorBlock API (https://sponsor.ajay.app)

    --sponsorblock-mark CATS
        SponsorBlock categories to create chapters for, separated by commas.
        Available categories are sponsor, intro, outro, selfpromo, preview,
        filler, interaction, music_offtopic, poi_highlight, chapter, all and
        default (=all). You can prefix the category with a "-" to exclude it.
        See [1] for descriptions of the categories.
        E.g. --sponsorblock-mark all,-preview
        [1] https://wiki.sponsor.ajay.app/w/Segment_Categories

    --sponsorblock-remove CATS
        SponsorBlock categories to be removed from the video file, separated
        by commas. If a category is present in both mark and remove, remove
        takes precedence. The syntax and available categories are the same as
        for --sponsorblock-mark, except that "default" refers to "all,-filler"
        and poi_highlight, chapter are not available.

    --sponsorblock-chapter-title TEMPLATE
        An output template for the title of the SponsorBlock chapters created
        by --sponsorblock-mark. The only available fields are start_time,
        end_time, category, categories, name, category_names.
        Defaults to "[SponsorBlock]: %(category_names)l"

    --no-sponsorblock
        Disable both --sponsorblock-mark and --sponsorblock-remove

    --sponsorblock-api URL
        SponsorBlock API location, defaults to https://sponsor.ajay.app
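For the curious, those options compose into a set-and-forget setup; a minimal sketch of a yt-dlp config file using only the flags documented above:

```
# yt-dlp config (e.g. ~/.config/yt-dlp/config)
# Cut sponsor and self-promo segments out of the file entirely
--sponsorblock-remove sponsor,selfpromo
# and mark the remaining known segments as chapters
--sponsorblock-mark all,-preview
```

Per the docs quoted above, if a category appears in both lists, remove wins.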
Potentially dumb question: if YouTube.js works in the browser, can/has someone made a YouTube player that's just a static page? Is there a need for a backend?
I basically have an even simpler version of something like this for my own personal use too. I found it pretty easy to write in Go and my area of expertise is decidedly not web frontend/backend. I’d recommend it as a fun little project if you’re looking for something to do.
For mine, I paste in a video or playlist URL and it downloads the video and creates a lower resolution transcoded version suitable for streaming to my phone. It also extracts an audio-only version in case that’s more appropriate.
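A rough sketch of that pipeline, assuming yt-dlp plus two ffmpeg passes; the paths, container choices, and the 720p target are my placeholders, not the parent's actual settings:

```python
import shlex

# One yt-dlp download plus two ffmpeg post-steps, built as argv lists:
# a phone-friendly transcode and an audio-only extract.
def build_commands(url: str, outdir: str = "media") -> list[list[str]]:
    src = f"{outdir}/source.mp4"
    return [
        # 1. fetch the original as a single mp4
        ["yt-dlp", "-f", "mp4", "-o", src, url],
        # 2. lower-resolution transcode suitable for streaming to a phone
        ["ffmpeg", "-i", src, "-vf", "scale=-2:720",
         "-c:v", "libx264", "-c:a", "aac", f"{outdir}/phone.mp4"],
        # 3. audio-only extract in case that's more appropriate
        ["ffmpeg", "-i", src, "-vn", "-c:a", "aac", f"{outdir}/audio.m4a"],
    ]

for cmd in build_commands("https://example.com/watch?v=xyz"):
    print(" ".join(shlex.quote(part) for part in cmd))
```

Gluing the three together with `subprocess.run` and a paste-a-URL prompt is essentially the whole tool.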
I have one too, it's honestly a very fun area to program around, and I'm not going to be surprised if this thread is full of me-toos.
Mine is specifically meant to help get videos onto Plex exactly the way we want: with particular emphasis on playlists, taking the numbering and putting it in Plex format, and transcoding any codecs (detected via ffprobe) I know certain shitty players (smart TVs) will have issues with. Along with putting it in the right spot on the filesystem, with the right permissions and user+group set, so it serves correctly over Samba too (for management from Windows / via GUI).
Would really appreciate it if you could add some options for download quality (with WebM merge for 4K support); gave it a go and by default it just downloads the 360p MP4.
Interesting project, and great to see other projects as well. Everyone has their own wants, wishes, and requirements for their YT feed, so it's awesome to see what people have come up with.
This post has actually inspired me to create something of my own because I am the worst YT addict of all time.
Hi Chris, do you know how to handle issues with cookies in production? It seems yt-dlp works fine, but once put in a cloud runner, it doesn't work. Coincidentally, I was also working with yt-dlp this week for another reason.
the project currently supports cookies (though never use the ones from your own Google profile); just place them in cookies.txt in the root of the project.
but it didn't seem to work well on my server; on a residential IP it works fine
it is awful that a paid subscription product like YouTube actually seems to aim to give its (paying) users the worst experience possible, only ever showing stuff I do NOT want to see and offering no way to disable or customize things. honestly, is there anyone happy with their offering?
but will this or anything similar ever run on FireTV / Samsung?
I've been getting "sign in to prove you're not a bot" all the time these last few months, especially on VPN. Too scared to use my home IP because I don't want my Gmail to get banned with it.
There's Piped, but that keeps running into an "IOS player response is not valid" error. (I don't know if my Invidious instance works either; I shut it down because of errors.)
As a user of a Firefox-based browser, YouTube's performance really is hit or miss. Sometimes it's ok, other times it's barely useable.
These days I simply queue up videos in mpv. It is much lighter on the resources, and also provides a nice cache that makes seeking through videos a breeze. I can open a link straight in mpv using a very nice system[1].
Once I have an mpv instance open I simply drag links on top of it to enqueue them. (shift+drag if you haven't set the following option in your config: drag-and-drop=append)
It works so well I find myself doing it for other online sources of videos too (e.g. Twitter/X, local TV websites, ...)
I use the h264ify plugin and didn't see performance issues for playback. The UI depends on the test group you land in, but only the first load is really terrible.
tip: Disable YouTube history and go to the Subscriptions page for chronologically ordered videos. No more "algorithmically curated" videos on the YouTube home page.
Thanks! It is definitely not the cleanest code I've written but I'm slowly making it cleaner and ready for OSS contributions. Learned a ton along the way too, which makes this all worth it nevertheless.
I'll use the common excuse: I jotted this project down for myself without the thought of publishing it ^^
Seems it needs Docker and/or Node.js and runs as a server, so it's not something most of the non-technical users out there would use. This makes widespread adoption unlikely.
If it was packaged as a single executable electron app on the other hand, that would be another story.
Hmm nice. I already have my own search frontend (SearXNG), my own chat frontend (Matrix+Element), LLM (OpenWebUI), and this would now be a good addition.
It's sad that it's necessary but the internet has become so enshittified.
This is awesome and it's one of the countermeasures that the book Chokepoint Capitalism proposed against enshittification.
Imagine seeing Twitch, Nebula, Youtube, etc all in one aggregator app, then the switching cost of leaving one platform to another goes way down. If a content creator wanted to move from one platform to another to get a better deal, the users would hardly notice.
Unfortunately I think DRM + DMCA makes this illegal, e.g. removing DRM from a Netflix stream to use a third-party app is illegal even if there is no copyright infringement. This needs to be fixed.
This already kinda exists in the TV sticks like Google TV, in that the recently watched / continue watching rows are a mix of different services that open the correct app when launched. It also has the "Live" tab, which shows a combined TV guide of all apps that share it (Pluto, Tubi, Prime, etc.).
Of course this still locks you into the end app for playback but the concepts are there.
Indeed, those with ridiculously slow broadband networks won't ever get 4k content again.
DRM was.. and still is dumb... as it collectively punishes paying customers. While ContentID is sometimes abused by brazen scammers, it is a better solution given the majority of content is still served off the YT platform. =3
(Also, to all the other posters who have done the same for themselves)
--
I have been mentally building a UX I want out of YT over the last few weeks.
What I want to do is have it go through all my history, categorize it, and give me a local page and an sqlite3 database of my browsing history with various metadata.
My YT experience has gotten so poor that even browsing which channels I'm sub'd to and finding newer vids in them is a nightmare of a dark pattern...
I thought I wouldn't be able to pull off my vision, but this gives me new hope; I had told myself that this week I would make an attempt.
One thing I want to do is include VoidTools 'Everything' Search into some MCP tools for Cursor -- and this inspiration ties it all into a more formulated vision for what I want out of a YT ux.
I look forward to trying this out and seeing if it fills the void - or still build my own thing.
(There was a Show HN some time ago along the lines of "what if YT channels were like a TV", and it always pops into my head.)
--
EDIT: With the postings of GH repos and such, and my comment on categorizing and searching history: I also want a dashboard of GH repos where, when I click one, that click is sent to my history categorizer automatically, which gives me a summary of the thing and its category, and maybe even the site where I found the repo. So, much like browsing a YT history of vids, I'd be able to see all the repos I have been interested in.
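The local-page-plus-sqlite3 idea could start very small; a toy sketch, where the categories and keyword rules are invented placeholders rather than a real taxonomy:

```python
import sqlite3

# Naive keyword categorizer over a browsing-history table.
RULES = {
    "conference talk": ("conf", "talk", "keynote"),
    "github repo": ("github.com",),
}

def categorize(title_or_url: str) -> str:
    t = title_or_url.lower()
    for category, needles in RULES.items():
        if any(n in t for n in needles):
            return category
    return "uncategorized"

db = sqlite3.connect(":memory:")  # on disk this would be e.g. history.db
db.execute("CREATE TABLE history (url TEXT, title TEXT, category TEXT)")
for url, title in [
    ("https://youtu.be/a", "PyCon keynote"),
    ("https://github.com/yt-dlp/yt-dlp", "yt-dlp"),
    ("https://youtu.be/b", "lofi beats"),
]:
    db.execute("INSERT INTO history VALUES (?, ?, ?)",
               (url, title, categorize(title + " " + url)))

for row in db.execute("SELECT title, category FROM history"):
    print(row)
```

Swapping the keyword rules for an LLM call is the obvious upgrade path, but the schema stays the same.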
I remember a project from some 20 years ago that acted as a proxy and kept a local copy of every single page you visited. I don't remember any details other that you could access that app and search through the history based on time, urls, page titles and content separately.
I believe this is one area where current AI could really shine.
For instance, I have a large collection of links about the stuff I care about, or ones I use as one-line answers to different questions (e.g. a friend taking part in a hackathon needs a color palette to display some statistical data; in my collection I have exactly that, along with a 20-page-long explanation of why these particular colors were chosen, if one wishes to know).
I keep them in a long markdown file I can somehow navigate using tags, hierarchy, and short descriptions, but it gets clunky. Having YouTube links doesn't help.
Would be nice to have a tool that could get the transcription, distil it into a short summary, and maybe even let you ask direct questions about the contents.
I have one for categorizing subscriptions/channels. I've been running it for 3 years, maybe more. SQLite too. No history integration. I have not open-sourced it because the code is thrown together, and there are some subscriptions the YouTube API doesn't return. I'm not certain what the commonality between them is, either. Possibly country of origin.
I kind of wish people would stop making yt-dlp more accessible and increasing Google's desire to shut it down.
Agreed. YouTube downloaders are essential for backup purposes and for getting clips to put in your own videos as fair use. But people turning them into fully user-facing, ad-free frontends are driving the crackdown on the tools, so we will end up with no way at all to download videos.
Would be nice if YouTube just let Premium users download the actual video files. What I find interesting is how so many of the Chinese social media platforms just let you download videos, while Western tech companies pretty much universally block it.
I'd say it's less people's fault and more Google's for driving people to want something like it.
so, essentially, what you are saying is that yt-dlp should never have been open-sourced/published or ever posted on HN (so that not even you would have found out about it)?
My take is: it's either there with all of its features and popularity, or it's not. The argument that it will be taken down if it's more popular seems fundamentally wrong to me.
I mean, the root of the problem is that there is essentially only one "YouTube", and it isn't a public service. Not sure if you make this better by leaning into it or not.
the desire is already there. they're testing DRM for videos as we speak. this cat-and-mouse game will never end until google creates some anti-cheat with kernel permissions for anti-tamper attestation
Not sure it was ever YouTube's desire to shut it down. Why would they, when there are a multitude of reasons why someone would want a video off a platform? It was the RIAA's, since they're the ones who sent the takedown.
do you feel the same about ad blockers?
gatekeeping is not the way.
I don't think they can ever kill it. Something else will rise. There is too much demand for it.
people are idiots... and trying to become famous by using the lowest hanging fruit, hence killing it in the process.
If they shut down yt-dlp for good, a lot of power users and creators would find the YouTube platform useless for themselves and abandon it en masse for its nearest competitor. A tool like yt-dlp is very much required if you want to engage professionally with that kind of community. Even something as trivial as making a well-produced "video reaction" relies on it.
Yes, YT has good monetization, but it still pays peanuts to the average creator. So the competitive threat is very real - superstars alone wouldn't be enough to make for a really compelling platform.
A question for the author or anyone else who has experience in similar solutions.
Is there any good solution for discovering new content? Much of the time, I want to stick to my subscriptions, but I do enjoy content surfaced by the algorithm at least once weekly, sometimes more often. My concern in taking my viewing off-platform is twofold: 1) going to YouTube will prompt me with all the stuff I've already watched off platform, and 2) any changes to my viewing habits won't be reflected in algorithmic suggestions.
Am I making any bad assumptions or missing anything that would be useful?
As an example, I usually get conference presentations surfaced for me, but I don't track conferences to know when I should go looking for presentations. YouTube is good at surfacing these for me.
I view discovery as a social problem, where the content you want is almost always clustered among a relatively small number of creators, regions, etc.
Technically it then becomes less of an "index everything" problem and more one of finding a few cornerstone creators, say Khan Academy, and occasionally branching out.
So to answer your question: I don't think the cost/benefit of automating discovery is much better than spending 20 minutes finding enough cornerstones to cover 100+ hours of content. Or, similarly, finding a social source like an RSS feed (in iOS development it would be fatbobman, say) and sourcing from there.
Time to source content isn't a bottleneck worthy of software solutions; yet, for monetization reasons, discovery is the vice grip of social media, made out to be the most important thing.
If you were to have something local build you an algorithm, what signal would you want it to consume and how far from the median would you want it to deviate? Would you want it to use signal from online socials?
I am almost a month into having a Perplexity subscription, and I am not sure I can go without a deep-research subscription at this point.
This month I have found YouTube videos, just from the sources deep research came back with, that I don't know how I would have found otherwise.
It has really created the opposite problem for me: I have so much good information I don't even know what to do with it right now. I am probably taking a month off just to sort through what I found this past month.
I've been using a third party app to watch the videos and the official app to discover content.
Instead of just clicking the video, I click share and watch it in the unofficial app with no ads.
I looked into this as well, since I find the YouTube algorithm terrible, but couldn't find any API for exploration. Which makes sense: they want to control what you watch and hence monetize it. In a perfect world you could just pick an open-source recommendation algorithm from a marketplace, and YouTube would just be a wrapper around S3 buckets and some index.
I've been using Unhook[0] for so long that it's almost a jumpscare for me to see a recommended video or the YouTube homepage. Your social circles and natural serendipity should be plenty for finding new creators. And in general, avoiding algorithmic feeds will help with ADHD and mindless scrolling.
[0] https://unhook.app/
I use a Firefox profile to watch specific videos while logged-out just for the focused recommendations.
I've also noticed that I get more recommendations for small creators with little to no views/subs when I'm browsing from a smaller, developing country.
I readily follow youtube links offered on HN discussions. If anything, I could use more of these.
But otherwise I agree with your concern. Video recommendations on YouTube were far from perfect (very repetitive in my experience), but they did uncover useful stuff.
good question. I don't think I have a definitive answer but I'll try:
- pure luck. sometimes I discover a channel/creator/blog by pure accident; I'm an avid RSS reader and HN adept, so content comes to me naturally, so to speak.
- following a feed (be it a website's RSS feed, reddit, or YouTube) has sometimes led me to related feeds, simply because someone wrote about a cool project a peer made and linked their YouTube/GitHub/blog
Check out the Vinegar extension if you use Safari. Same old YouTube but all the videos are replaced with HTML5 <video>s.
We built Videocrawl [1] to enhance the learning and watching experience using LLMs. It handles the usual tasks like clean transcript extraction, summarization, and chat-based interaction with videos. However, we go a step further by analyzing frames to extract code snippets, references, sources, and more.
You can try it out by watching a video on Videocrawl, such as the OpenAI Agent video, by following this link [2]. LLMs have the potential to significantly improve how we learn from and engage with videos.
1. https://www.videocrawl.dev/ 2. https://www.videocrawl.dev/studio?url=https%3A%2F%2Fwww.yout...
Can you make either a Docker Hub or ghcr.io premade image, so that people can just pull the image, run it, and automate the updates? It's pretty standard practice in the self-hosting world, and if you don't do it, a lot of people will not install it. People have 40-50 odd services installed; managing it via git updates just isn't going to happen.
Done
will do, thanks for the suggestion
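For reference, what self-hosters typically expect is something they can drop straight into a compose file; the image name and port below are placeholders until an image is actually published:

```yaml
services:
  app:
    # placeholder image name; substitute the real one once published
    image: ghcr.io/OWNER/REPO:latest
    ports:
      - "3000:3000"        # assumed web UI port
    volumes:
      - ./data:/app/data   # persist downloads/config across updates
    restart: unless-stopped
```

With that in place, updates are just `docker compose pull && docker compose up -d`, which is what tools like Watchtower automate.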
What I've wanted for a while now is a browser extension that adds a button on YouTube video pages: you click it, and it does the yt-dlp download but saves the result to something like IPFS and posts it to some free video site for indexing.
Basically, there should be a video indexing/search/discovery protocol (don't care if it's still HTTP) where random people can submit metadata and a link into a distributed content-addressable system like IPFS. Alternatives to YouTube, TikTok, etc., and even platforms like Bluesky, could make use of this. Popular videos get more "seeds"/"mirrors" this way. The biggest problem is getting enough interesting content, so the browser extension helps with that: you just click "share on <insert platform name>" and you have it locally available, as well as on any of your other devices, and now others can see the content without having to use YT.
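A sketch of the record such a protocol might pass around: a content hash (a stand-in for a real IPFS CID, which uses multihash) plus free-form metadata. All field names here are invented for illustration:

```python
import hashlib
import json

# Build a submittable metadata record keyed by the content's hash.
def make_record(video_bytes: bytes, title: str, source_url: str) -> str:
    record = {
        "cid": hashlib.sha256(video_bytes).hexdigest(),  # content address
        "title": title,
        "source": source_url,
        "mirrors": [],  # peers that announce they also pin this content
    }
    return json.dumps(record, sort_keys=True)

rec = json.loads(make_record(b"fake video data", "demo clip",
                             "https://example.com/watch?v=x"))
print(rec["cid"][:12], rec["title"])
```

Because the key is derived from the bytes, any index that receives the record can verify a mirror actually serves the same video before listing it.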
What you're describing is a piracy platform. That makes it pretty tricky to get off the ground, with regards to funding and outreach.
Write a script that calls the yt-dlp command with the URL from the clipboard, on an IPFS server.
What I'd like is essentially a user-controlled caching layer for everything. When you view a webpage or a video, you are already fully downloading all of that data, so you might as well optimistically write it to a local cache. Then a browser extension could add a "save this version" button, which tells the caching layer to tag all of the assets that were downloaded in this page view. The tag means those assets aren't garbage-collected from your local cache, and you retain your copy forever.
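A toy model of that "save this version" idea: a cache whose garbage collector skips anything carrying at least one tag. All names are invented; a real version would tag every asset of a page view:

```python
# Cache entries are (content, tags); GC drops only untagged entries.
class TaggedCache:
    def __init__(self):
        self.entries = {}  # url -> (content, set of tags)

    def put(self, url, content):
        self.entries[url] = (content, set())

    def pin(self, url, tag):
        # the "save this version" button: tag the cached copy
        self.entries[url][1].add(tag)

    def gc(self):
        # untagged entries are fair game; tagged ones are kept forever
        self.entries = {u: e for u, e in self.entries.items() if e[1]}

cache = TaggedCache()
cache.put("https://example.com/page", "<html>page</html>")
cache.put("https://example.com/scratch", "throwaway")
cache.pin("https://example.com/page", "saved-2024-01-01")
cache.gc()
print(sorted(cache.entries))  # only the pinned page survives GC
```

Using a dated tag per save also gets you the "every version of every page" behavior: each snapshot is pinned independently.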
Super-charging this idea with IPFS is even better. Essentially a collective Internet Archive will be created with every version of every page someone has decided they are interested in, for whatever reason.
This kind of thing would be perfectly feasible with the web as it was designed, which was designed with caching in mind.
But, of course, big corporations like Google will fight hard to stop such a thing happening because they don't want you in control. They want to be in control. They hate peer to peer technologies because they can't control them.
Ahaha, I love the "vi/vim" pronouns on Christian's GitHub profile[0]. How have I never seen this before?
[0]: https://github.com/christian-fei
I don't see it. Has it been removed?
Maybe mine are mg/emacs.
copied it from someone else, can't remember who :)
I built the same thing a few years back [0], and used the YouTube API for searching. The building part was fun.
For hosting, though, I picked Heroku, and they kept removing my deployment because I downloaded yt-dlp on it! I ended up deploying it on my own server to make it work.
[0]: https://github.com/huytd/xaudio
This is monetizable for parents (or at least, highly needed). YouTube is terrible for child behavior as there are so many pranks and people screaming etc (in kids content) but there are a select few YouTubers who are really good for kids. For example our 10yo does well with: ZebraGamer, Half Asleep Chris, Mark Rober, Brick Experiment Channel, Ants Canada, etc. We have it locked down via safe app but it would be great to have this for the full home network with channels buttoned down.
Monetizing this would put YT-DLP in danger of having legal action taken against it, or at least being shut down.
What is “safe app”? Too generic to be googleable.
"wanted to get back my chronological feed, instead of a "algorithmically curated" one"
The 'Subscriptions' link at the top left of the YouTube home page only shows the things you've subscribed to; just bookmark that.
Along with so many shorts. So many. Going from SmartTube back to the official app, it just plain sucks.
They are constantly testing pushing other things into the subscription box.
What I want is it to only show me videos. Now, it also shows shorts, and also now “community posts” which are frequently just self-promotion and useless polls that drive engagement. I’ve started unsubscribing from anyone that uses those features too much. I want videos not “check out my twitch channel” and “want more merch? Check out my merch! Also this is a poll so that you will click it”
One channel I follow got some new “comments from the community” kind of feature, and suddenly posts from anyone on YouTube were showing up in my sub box because they also subscribed to the same creator. All of the posts were image posts that were blatantly rule breaking spam, or comments like “why is this a feature”. None of them were from anyone I intentionally followed. Literally just random internet comments as a huge section in my sub-box. I instantly unsubscribed.
YouTube REALLY wants to shove other content into the “subscription box” because as-is it lets you avoid all the algorithmic clickbait.
On android, you can even force the app to open up to that page (long press the icon and you can place a shortcut to subscriptions).
Just got done setting up Pinchflat this morning, as I need Jellyfin and SponsorBlock integration, but it's always great to see a nice GUI around yt-dlp with some new niche features.
thanks!
That's pretty great, just tried it. I'd have a few feature requests:
- Make it possible to delete downloaded videos
- Show more than just a few weeks worth of videos per channel. For example, if I look at @AndrejKarpathy I only see his latest two videos.
- Have a way to view a video at a reasonable size in between the small preview and full screen
- Add a way to download a single video without subscribing to a channel
Thanks for making it a Docker image, it's super easy to get it working with Docker compose!
Thanks for the feedback! Some of the new additions are already wip :)
tried to find a solution for your 3rd point, would love to get your feedback
Ten years ago I wrote an alternative frontend for YouTube in C++ (Qt & VLC); it worked pretty well!
https://github.com/skhaz/qt-youtube
FreeTube is a great alternative.
Personally I don't even use it to watch the videos and instead open them in the browser, but it lets you monitor just the channels you want, with a 'feed' that consists of their videos in chronological order.
It doesn't require self-hosting or a YouTube account, has a feature to skip sponsored segments, and has a setting to automatically replace clickbait thumbnails.
How does yt-dlp work with SponsorBlock? Does it download the video and then snip out segments?
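For what it's worth, that's roughly it: yt-dlp fetches segment timestamps from the SponsorBlock API, then cuts (or, with `--sponsorblock-mark`, marks as chapters) those ranges in an ffmpeg post-processing pass after download. A minimal sketch using the documented `--sponsorblock-remove` flag (command construction only, nothing executed):

```python
# Build a yt-dlp invocation that removes SponsorBlock-reported segments.
# "sponsor" is one of the documented category names; a comma-separated
# list (or "all") is also accepted by the flag.
def sponsorblock_cmd(url: str, categories: str = "sponsor") -> list[str]:
    return ["yt-dlp", "--sponsorblock-remove", categories, url]
```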
I wish Plex still had its YouTube plugin. Right now I have a Google Sheets script that adds the latest videos of channels into various playlists on my premium account. Keeps things simple bouncing between devices / Chromecast.
I recently stumbled upon these yt-dlp options, but haven't had the chance to dig deeper (sorry in advance for the formatting):
> make youtube great again
I wish we would all just stop doing this. At least for a bit.
if you don't get the sarcasm, I can't help you.
Potentially dumb question: if YouTube.js works in browser - can/has someone made a YouTube player that’s just a static page? Is there a need for a backend?
I basically have an even simpler version of something like this for my own personal use too. I found it pretty easy to write in Go and my area of expertise is decidedly not web frontend/backend. I’d recommend it as a fun little project if you’re looking for something to do.
For mine, I paste in a video or playlist URL and it downloads the video and creates a lower resolution transcoded version suitable for streaming to my phone. It also extracts an audio-only version in case that’s more appropriate.
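The transcode/extract steps described above can be sketched as two ffmpeg invocations (the 480p/H.264 settings are my own illustrative choices, not the parent's actual code; commands are built but not run):

```python
# Two ffmpeg command lines: a low-resolution phone-friendly transcode,
# and an audio-only extraction that copies the audio stream untouched.
def transcode_cmds(src: str) -> list[list[str]]:
    return [
        # scale to 480p height (-2 keeps width even), re-encode to H.264/AAC
        ["ffmpeg", "-i", src, "-vf", "scale=-2:480",
         "-c:v", "libx264", "-c:a", "aac", "phone.mp4"],
        # drop the video (-vn) and copy the audio stream as-is
        ["ffmpeg", "-i", src, "-vn", "-c:a", "copy", "audio.m4a"],
    ]
```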
I have one too, it's honestly a very fun area to program around, and I'm not going to be surprised if this thread is full of me-toos.
Mine is specifically meant to help get videos onto Plex exactly the way we want: with particular emphasis on playlists, taking the numbering and putting it in Plex format, and transcoding any codecs (detected via ffprobe) I know certain shitty players (smart TVs) will have issues with. Along with putting it in the right spot on the filesystem with the right permissions and user+group set so it serves correctly over Samba too (for management from Windows / via GUI).
Interesting! How do you stream it to your phone? I imagine it's on the local network?
Freetube, invidious and newpipe are still the best frontends imo
Didn’t invidious stop working?
Would really appreciate it if you could add some options for download quality (with webm merge for 4K support); gave it a go and by default it just downloads the 360p MP4.
Absolutely
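The underlying yt-dlp flags for this already exist; a sketch of the invocation the request amounts to (`-f` format selectors and `--merge-output-format` are documented yt-dlp options; 4K streams are usually VP9/AV1 in webm, so they need a merge into mkv/webm rather than MP4):

```python
# Best video up to a height cap plus best audio, merged into mkv.
def quality_cmd(url: str, max_height: int = 2160) -> list[str]:
    fmt = f"bestvideo[height<={max_height}]+bestaudio/best"
    return ["yt-dlp", "-f", fmt, "--merge-output-format", "mkv", url]
```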
Interesting project and great to see other projects as well. Everyone has their own wants, wishes, and requirements for their YT feed so its awesome to see what people have come up with.
This post has actually inspired me to create something of my own because I am the worst YT addict of all time.
awesome, thanks!
Hi Chris, do you know how to handle issues with cookies in production? It seems yt-dlp works fine, but once put in a cloud runner, it doesn't work. Coincidentally, I was also working with yt-dlp this week for another reason.
the project currently supports cookies (though never use cookies from your own Google profile); just place them in cookies.txt in the root of the project. It didn't seem to work well on my server, but on a residential IP it works fine.
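For anyone driving yt-dlp from Python instead of the CLI, a sketch of how the cookies file is passed (`cookiefile` is yt-dlp's documented option key; the CLI equivalent is `--cookies cookies.txt`):

```python
# Options dict for yt_dlp.YoutubeDL: point it at a Netscape-format
# cookies file. Use throwaway-account cookies, never your main profile.
def ydl_opts(cookie_path: str = "cookies.txt") -> dict:
    return {
        "cookiefile": cookie_path,
    }
```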
Grayjay also has a desktop app that does this very well. https://grayjay.app/
does it work on iOS?
great to see this, absolutely need to try it out.
it is awful that a paid subscription product like YouTube actually aims to give its (paying) users the worst experience possible, only ever showing stuff I do NOT want to see and offering no way to disable or customize things. Honestly, is anyone happy with their offering?
but will this or anything similar ever run on FireTV / Samsung?
Awesome! Have some things cooking for the future, hope to see your feedback if you can
I get "sign in to prove you're not a bot" all the time since the last few months, especially on VPN. Too scared to use my home IP because I don't want my Gmail to get banned along with it.
I wonder how many frontends/proxies exist:
- invidious
what else ?
There's Piped but that keeps running into "IOS player response is not valid" error. (I don't know if my Invidious instance works either, I shut it down because of errors.)
a lot for sure.
but the more the better, right?
I use skipvids.com as my frontend
As a user of a Firefox-based browser, I find YouTube's performance really hit or miss. Sometimes it's OK, other times it's barely usable.
These days I simply queue up videos in mpv. It is much lighter on the resources, and also provides a nice cache that makes seeking through videos a breeze. I can open a link straight in mpv using a very nice system[1]. Once I have an mpv instance open I simply drag links on top of it to enqueue them. (shift+drag if you haven't set the following option in your config: drag-and-drop=append)
It works so well I find myself doing it for other online sources of videos too (e.g. Twitter/X, local TV websites, ...)
[1]: https://github.com/Baldomo/open-in-mpv
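The drag-to-enqueue trick boils down to mpv's JSON IPC: start mpv with `--input-ipc-server=/tmp/mpvsocket` and write a `loadfile ... append-play` command to the socket. A sketch that just builds the message (the socket path is an illustrative choice):

```python
import json

# Build the JSON IPC message that appends a URL to a running mpv's
# playlist without interrupting the current video.
def enqueue_msg(url: str) -> bytes:
    cmd = {"command": ["loadfile", url, "append-play"]}
    return (json.dumps(cmd) + "\n").encode()  # IPC messages are newline-terminated
```

On a real setup you would write these bytes to the socket, e.g. `echo '...' | socat - /tmp/mpvsocket`.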
I use the h264ify plugin and didn't see performance issues during playback. The UI depends on which test group you land in, but only the first load is really terrible.
I never have yt issues in FF. Do you have addons that are yt related?
Tip: disable YouTube history and go to the Subscriptions page for chronologically ordered videos. No more "algorithmically curated" videos on the YouTube home page.
Why disable youtube history? I have it enabled, and my subscription page works fine.
I wonder if it will be taken down.
eventually
So refreshing to see native web components and not some React monstrosity with 500 extra dependencies.
Thanks! It is definitely not the cleanest code I've written but I'm slowly making it cleaner and ready for OSS contributions. Learned a ton along the way too, which makes this all worth it nevertheless.
I'll use the common excuse: I threw this project together for myself without the thought of publishing it ^^
qq on tech choice - why lmstudio over ollama?
It should work out of the box by just changing the server port of the llm service you’re contacting
LM Studio defaults to port 1234; Ollama to 11434.
currently into lmstudio, but had it working before with ollama. it's compatible with both, since it uses the standard /chat/completions endpoint.
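Since both backends expose an OpenAI-compatible `/v1/chat/completions` endpoint on their default ports, swapping them really is just a base-URL change; a tiny sketch:

```python
# Default local ports: LM Studio serves on 1234, Ollama on 11434.
# Both speak the OpenAI-style chat completions API under /v1.
DEFAULT_PORTS = {"lmstudio": 1234, "ollama": 11434}

def chat_endpoint(backend: str, host: str = "localhost") -> str:
    return f"http://{host}:{DEFAULT_PORTS[backend]}/v1/chat/completions"
```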
Seems it needs Docker and/or Node.js and runs as a server, so it's not something most non-technical users out there would use. This makes widespread adoption unlikely.
If it was packaged as a single executable electron app on the other hand, that would be another story.
Missed opportunity. Should have called it MyTube?
haha who cares :) the missed opportunity would have been if I kept it for myself instead of releasing it
Hmm nice. I already have my own search frontend (SearXNG), my own chat frontend (Matrix+Element), LLM (OpenWebUI), and this would now be a good addition.
It's sad that it's necessary but the internet has become so enshittified.
thanks, and totally agree on the enshittification of the web (and not only).
this is the very reason why I wanted to dig deeper into my-yt and try to build a custom solution for my needs
This is awesome and it's one of the countermeasures that the book Chokepoint Capitalism proposed against enshittification.
Imagine seeing Twitch, Nebula, YouTube, etc. all in one aggregator app; then the switching cost of leaving one platform for another goes way down. If a content creator wanted to move from one platform to another to get a better deal, the users would hardly notice.
Unfortunately I think DRM + DMCA makes this illegal, e.g. removing DRM from a Netflix stream to use a third-party app is illegal even if there is no copyright infringement. This needs to be fixed.
This already kinda exists in TV sticks like Google TV, in that the recently watched / continue watching rows are a mix of different services that open the correct app when launched. There's also the "Live" tab, which shows a combined TV guide of all the apps that share it (Pluto, Tubi, Prime, etc.).
Of course this still locks you into the end app for playback, but the concepts are there.
Shhh! We are going to have to fork it again if too many people find out.
Indeed, those with ridiculously slow broadband networks won't ever get 4k content again.
DRM was... and still is... dumb, as it collectively punishes paying customers. While ContentID is sometimes abused by brazen scammers, it is a better solution, given the majority of content is still served off the YT platform. =3
Another thought: Can it do this:
"give me a list of the latest podcasts about/from [subject/channel] {{from the already subscribed channels}}"
--
Or a crontab-style schedule, "play the latest X at Y time" (so you can tell it to put on your bedtime playlist starting at 9pm), that sort of thing?
Wonderful!
(Also, to all the other posters who have done the same for themselves)
--
I have been mentally building a UX I want out of YT over the last few weeks. What I want to do is have it go through all my history and categorize it, and give me a local page and a sqlite3 database of my browsing history with various metadata.
My YT experience has gotten so poor that even browsing the channels I'm subscribed to and finding newer vids in them is a nightmare of dark patterns...
I thought I wouldn't be able to pull off my vision, but this gives me new hope; I had told myself that this week I would make an attempt.
One thing I want to do is include VoidTools 'Everything' Search into some MCP tools for Cursor -- and this inspiration ties it all into a more formulated vision for what I want out of a YT ux.
I look forward to trying this out and seeing if it fills the void - or still build my own thing.
(There was an HN Show: "what if YT channels were like a TV" some time ago, and that always pops into my head.)
--
EDIT: With the postings of GH repos and such, and my comment on categorizing and searching history: I also want a dashboard of GH repos where each click gets sent to my history categorizer automatically and gives me a summary of the thing and its category, maybe even which site I found the repo on. So, much like browsing a YT history of vids, being able to see all the repos I've been interested in.
Anyone build anything like that for themselves?
I remember a project from some 20 years ago that acted as a proxy and kept a local copy of every single page you visited. I don't remember any details, other than that you could access the app and search through the history by time, URLs, page titles and content separately.
I believe this is one area where current AI could really shine.
For instance, I have a large collection of links about stuff I care about, or ones I use as one-line answers to different questions (e.g. a friend taking part in a hackathon needs a color palette to display some statistical data; in my collection I have exactly that, along with a 20-page explanation of why these particular colors were chosen, if one wishes to know).
I keep them in a long markdown file I can somewhat navigate using tags, hierarchy and short descriptions, but it gets clunky. Having YouTube links doesn't help.
Would be nice to have a tool that could fetch the transcription, distil it into a short summary, and maybe even let you ask direct questions about the contents.
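The transcription half is already easy with yt-dlp, which can fetch YouTube's auto-generated captions without downloading the video; the summarizing/Q&A half is whatever LLM you like. A sketch of the options dict (these are documented yt-dlp option keys for `yt_dlp.YoutubeDL`):

```python
# Grab only the auto-generated subtitles for a video, skipping the
# video/audio download entirely.
def transcript_opts(lang: str = "en") -> dict:
    return {
        "skip_download": True,      # no media, just subtitle files
        "writeautomaticsub": True,  # YouTube's auto-generated captions
        "subtitleslangs": [lang],
    }
```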
I have one for categorizing subscriptions/channels. I've been running it for 3 years, maybe more. SQLite too. No history integration. I have not open sourced it because the code is thrown together and there are some subscriptions the YouTube API doesn't return. I'm not certain what the commonality between them is, either. Possibly country of origin.
thanks!