ffmpeg seeks accurately when transcoding. [1] Cutting on non-keyframes when stream copying results in broken video until the next keyframe.
Handbrake does have a CLI. [2] I haven't used it and I'm not sure what advantage it might have over ffmpeg. I personally use mkvmerge or ffmpeg for my muxing/cutting and VapourSynth for encoding.
Yeah, it's literally not possible to cut anywhere other than a keyframe if you're stream copying.
There is, however, plenty of great software that can cut at any frame and only re-encode the frames that fall outside a complete GOP. Most of it is commercial, though; I haven't found one that is free and good.
----
Also, seeking in FFmpeg is, in practice, more complicated than the guide [1] you linked suggests.
Below is a note I keep for my own reference on keyframe-copy cutting. Hope someone finds it useful.
How to keyframe-cut video properly with FFMPEG
FFmpeg supports "input seeking" and "output seeking". Output seeking is very slow (it has to decode the whole video up to the timestamp of your -ss), so you want to avoid it when possible.
However, while -ss (seek start) works fine with input seeking, -to/-t (seek end) is somehow vastly inaccurate with input seeking in FFmpeg. It can be off by a few seconds, or sometimes it straight up does not work (for some MPEG-TS files recorded from TV).
The best of both worlds is to use input seeking for -ss and then output seeking for -to.
However, this way the timestamps are reset to 0 after the input seek, so an output-side -to would refer to the wrong timeline. So instead of using -to, you should calculate the duration yourself by subtracting -ss from -to, and use `-t duration` instead. Below is a quick Python script to do so.
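A minimal sketch of that script (a reconstruction of the idea described above; the timestamp parser, function names, and CLI shape are my own, not necessarily the original's):

```python
# Convert -ss/-to timestamps into an ffmpeg command that uses input
# seeking for the start point and a computed -t duration for the end
# point, with stream copy. Filenames/CLI shape are hypothetical.
import subprocess
import sys

def to_seconds(ts: str) -> float:
    """Parse "HH:MM:SS.ms", "MM:SS", or plain seconds into seconds."""
    seconds = 0.0
    for part in ts.split(":"):
        seconds = seconds * 60 + float(part)
    return seconds

def cut_command(infile: str, outfile: str, ss: str, to: str) -> list:
    """Build the ffmpeg argv: -ss before -i (input seeking), -t after it."""
    duration = to_seconds(to) - to_seconds(ss)
    if duration <= 0:
        raise ValueError("-to must be later than -ss")
    return ["ffmpeg", "-ss", ss, "-i", infile,
            "-t", f"{duration:.3f}", "-c", "copy", outfile]

if __name__ == "__main__" and len(sys.argv) >= 5:
    # usage: cut.py input output start end, e.g. cut.py t.ts out.ts 1:00 2:30
    subprocess.run(cut_command(*sys.argv[1:5]), check=True)
```

The key point is only the argument ordering: `-ss` goes before `-i`, and the computed `-t` goes after it.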
>The best of both worlds is to use input seeking for -ss and then output seeking for -to.
Could you clarify what you mean by "use output seeking for -to"?
From your Python script it seems that you're just using input seeking and then specifying the duration in seconds with `-t`, which is actually the same as using `-to` when doing input seeking.
Also, input seeking should be inaccurate when doing stream copy, so I'm not sure your script actually works as expected?
(And unless I'm missing something, it seems that all of this is well-explained in the ffmpeg guide linked above.)
I think I know where the confusion is: from the guide it looks like the only thing that matters is where you put -ss; in reality, where you put -t/-to matters too.
In my Python script, I did input seeking for the -ss part (start point) and output seeking for the -t part (end point). As you can see, -ss comes before -i {inputfile}, and -t comes after.
-ss 1:00 -i file -t 5
is NOT the same as
-ss 1:00 -t 5 -i file
The latter triggers a bug that happens frequently when I'm trimming MPEG-TS files recorded from HDTV. It literally doesn't stop at the -t/-to timestamp, for reasons I don't know. And it only happens with stream copy.
Below is a quick showcase: t.ts is the source, and the filenames show how I generate them with FFMPEG (for example, ss_t_i means input seeking -ss first, then -i t.ts, then -t 1:00).
As you can see, if I use -t/-to before -i, it doesn't cut the file properly.
>Also, input seeking should be inaccurate when doing stream copy
Yeah, it's not frame accurate (it can only cut at keyframes), but that's enough for my application. By the way, the same inaccuracy exists for output seeking if you're doing stream copy.
What I don't understand is, how can professional video editing tools trim accurately (and very quickly)? What are they doing differently to ffmpeg?
If I do things the "fast way" with ffmpeg, the exported video has random black frames, which I think is related to the keyframe issue you mention. If I do things the "slow way" (i.e. accurately), it takes a huge amount of time (at least with large 4K videos). But I don't understand how I can drop that same 4K video into Screenflow, trim 1 second out of it, and export it in a matter of seconds.
All of the proprietary tools I know of for doing frame-perfect cuts (VideoRedo, TMPGEnc, SolveigMM) work by determining (guessing?) the original encoding parameters and then only reencoding the first and last GOP. The rest of the video is just remuxed.
Regardless, I don't think these tools are "matching" anything. TMPGEnc, for example, has settings to choose what quality you want for the re-encoded frames.
Ceasing to publish new copies is different from disallowing the exchange of existing copies. The owner of the IP chose to do the former; eBay chose to do the latter.
It's bizarre to me that he would describe heroin as a "social lubricant." I've had the misfortune of attempting to socialize with people who were under the influence of heavy opioids and "social lubricant" is not how I would describe their vacant stares or frequent nodding off.
> Wikipedia frequently asserts, in its own voice, that many of Trump’s statements are “false.” Well, perhaps they are. But even if they are, it is not exactly neutral for an encyclopedia article to say so, especially without attribution. You might approve of Wikipedia describing Trump’s incorrect statements as “false,” very well; but then you must admit that you no longer support a policy of neutrality on Wikipedia.
What? No. Objective truth exists and stating it is absolutely not an indication of bias. If Wikipedia is incorrectly referring to his statements as false then those cases should be fixed, but his argument is that it doesn't even matter if these assertions are correct or not.
His "Encyclosphere" is going to be an absolute disaster if it adopts his conception of neutrality as described here. It sounds like he rejects any evaluation of the factuality of the subjects at hand and his preferred standard for inclusion is "some people believe." That sounds like a great way to create a catalog of falsehoods and misinformation.
“Drooling scroll zombies” is a wildly and unnecessarily uncharitable description of people that visit websites and apps that you (apparently) don’t enjoy.
Disparaging entire mediums makes little sense. There is both enlightening and mind-numbing content on social networks, blogs, television, and books. Are people that read many books drooling page-turning zombies? Is scrolling through blogs inherently superior to Twitter scrolling?
I think it makes a lot of sense when the medium is purpose-designed and highly optimized to maximize addictive shallow interaction.
Sorry, I've had too many meetings and dinners disrupted by too many different people who couldn't stay off their phones to keep pretending it isn't a problem ... and I've seen far too many informative long-form works ignored by their target audience (members of which also just spent our last meeting glued to the Facebook scroll).
It is entirely possible to curate a Twitter feed of: experts in nearly any subject, gossip, sports, pornography, spiritual guidance, comedy, hate speech, or journalism. The same can be said of bookshelves. It is also possible to consume both excessively or in inappropriate situations. There are young members of my extended family who are regularly scolded for trying to read books during meals.
The rudeness of your guests does not justify passing judgment on millions of people with an almost infinitely broad spectrum of usage patterns, and makes about as much sense as criticizing those who read words on paper.
What? Not every person who uses X infinite scrolling app is a "scroll zombie". I took scroll zombie as that person you know who wakes up, starts scrolling, scrolls all day, then falls asleep scrolling at night but somehow has the balls to say they don't have time to do anything. Social media/phone addicts if you will.
I'm glad I'm not the only one. It's very common that I'll be searching for a keyword that only appears in the actual code a handful of times but hundreds of times in tests. GitHub's search is practically useless in those cases.
I almost always just resort to cloning and searching with ripgrep, which can be annoying if I have no other reason to have the codebase on my machine or it's just a one-off.
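For the one-off case, a throwaway script can stand in for the clone-and-rg workflow. This is a rough pure-Python equivalent, not ripgrep itself, and the skip patterns are just my own guesses; adjust them per project:

```python
# Walk a checked-out repo and print matches for a keyword, skipping
# test directories and test-named files so hits in tests don't drown
# out hits in real code. Skip patterns are examples, not a standard.
import os

SKIP_DIRS = {".git", "test", "tests", "__tests__", "testdata"}

def search(root: str, keyword: str):
    """Yield (path, line_number, line) for non-test files containing keyword."""
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune test directories in place so os.walk never descends into them.
        dirnames[:] = [d for d in dirnames if d not in SKIP_DIRS]
        for name in filenames:
            if "test" in name.lower():
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        if keyword in line:
                            yield path, lineno, line.rstrip("\n")
            except OSError:
                continue

if __name__ == "__main__":
    import sys
    if len(sys.argv) >= 3:
        for path, lineno, line in search(sys.argv[1], sys.argv[2]):
            print(f"{path}:{lineno}:{line}")
```

With ripgrep installed, `rg -g '!*test*' keyword` in the clone does the same filtering much faster; the script is only for machines where installing it isn't worth the trouble.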
Which RLDS church are you referring to? Because the one most commonly referred to as such (but now known as Community of Christ) has been ordaining women since 1984, and I believe has started recognizing and conducting same-sex marriages (at least in the US, Canada, and Australia) in recent years.
I don't have a technical explanation for how call misrouting can happen except to observe that the endpoints are entirely reliant on the network to get this right, they don't even tell each other "Hi I'm X trying to call Y" and "Hi I'm Y answering a call from X".
On Strowger electro-mechanical exchanges, one of the nice features is that a random piece of the exchange handles each dialed call. This means the intuitive human approach of "Huh, that didn't work, I'll hang up and try again" actually had a pretty good chance of success if the problem was an electrical fault or something, rather than you writing the number down incorrectly.
I did have a morning once where every call I received was for a business in a Welsh village (I live in England) and the callers were as confused as I was that they'd reached a personal mobile phone instead. The problem resolved itself before it made me annoyed rather than confused.
I have a friend who, for a few months, had a really strange thing going on with his phone. Any time his phone was off, calls to him would be redirected to someone else. It was another person from the same city, but using a different carrier.
My friend, other friends of his, and I called both companies to report the error multiple times. All we received was scorn and disbelief. We were told, repeatedly, that it simply couldn't happen. But it did, consistently.
It felt really silly when it got to the stage where I'd call my friend, someone else picked up and I'd go "Oh, hi, it's me again". One day, it just stopped happening.
According to customer service it wasn't, and they said they didn't allow it to that other company's network anyway. Hence, they didn't believe us when we said it was happening.
Back in the dim and distant past I worked Tech Support for Psion. One gloriously quiet morning our number was redirected to the house of some poor woman in South London who eventually just gave up and left her phone off the hook.
Sadly I couldn't enjoy it, as I was running late for work and mistimed my jump onto a bus, breaking my ankle ...
I don't have a technical explanation, but I had a few weirdly misconnected calls and even a call that changed to another person in the middle of a call in the analog cell phone days.
Way easier to happen on the old analog phone systems, where an unreliable relay might not react, or might flip at random. With digital networks and all their checksums and error detection it hasn't become impossible, but you're much more likely to just get a dropped connection or some other "unreliability" fault than to be connected to someone else.
Back when I grew up, there would sometimes be strange noises or an eerie silence or something when we were on the phone. We used to joke that "they are just listening in", but I guess no one believed that except possibly my mum. (There wasn't much interesting going on.)
What was the lighting setup like for the demo videos? Did you do any processing to reduce flicker?
I get a lot of flickering from CFL and incandescent bulbs in my 240 FPS videos, though I’ve had better results with some LEDs. Considering how much light I’ve needed for good videos at 240 FPS, I’d imagine the lighting requirements at 660-1000 are fairly immense.
Usually, I would try to wait until evening, when the sunlight shines directly into my apartment. Filming under a 60 watt lamp was still quite dark. I didn't do any experiments with post-processing to reduce flicker, but I know some advanced video editors provide such features.
The easiest way to avoid flicker is to light the scene with cheap 1000 lm or 5000 lm LEDs with an LED driver (it powers the 5000 lm LED with 38 V / 1.5 A DC). The total cost of LED + driver is less than $9 with free shipping on AliExpress.
I appreciate it too, unfortunately Apple is not always so transparent. One example that comes to mind is their 2010 promise to make FaceTime an open standard. It was Steve Jobs that made that promise in 2010 (one year before his death), and apparently a patent dispute with VirnetX may be to blame, but to my knowledge they haven't made any public statements about it since the initial promise.
If legalities are to blame it seems like that would be all the more reason to come out and say something along the lines of: "sorry everyone, we want to keep our promise but stupid software patents have made that impossible." Instead they've (to my knowledge—please correct me if I'm wrong) chosen to remain silent and hoped that people would forget.
I think, to your point, that was how Steve Jobs used to run the company. Under Tim Cook's leadership, things have changed. When Steve Jobs was still around, I attended WWDC under the student scholarship, but he never greeted any of the students or ran any events. On occasion, there would be the "job fair." These past few years, Tim Cook has taken the time to greet the students, chat with them, and take photos with them. There's a softer side to the company now, and it starts with admitting to failures and mistakes.
I don’t have the source to hand, but I’ve read (or heard on a podcast) that Jobs made that decision on stage. It surprised everybody, including the dev team behind FaceTime. So it’s less a case of Apple remaining silent and more of Steve being Steve.
I don't know who made the decision (or when), but he did have a slide prepared that said "OPEN" [1] in big letters next to a FaceTime screenshot, which I'd assume means the decision was made at least before taking the stage.
[1] https://trac.ffmpeg.org/wiki/Seeking
[2] https://handbrake.fr/docs/en/latest/cli/cli-options.html