All the floppy drives I've ever seen had a little mechanical arm attached to a microswitch that detected the presence or absence of the write protect notch. VHS cassettes worked similarly.
Audio cassettes had this also. A little tab you could break off that would prevent recording. If you later changed your mind and wanted to record over the tape, you would just tape over the hole.
I've had great success doing transcription on-prem with Speechmatics: https://www.speechmatics.com They offer both a VM and a Docker image that you can run behind your firewall, and even license offline if you really need to. I use it for generating closed captions of videos, but you could build a transcription tool out of it as well. Their engine's accuracy is the best I've found out there, too, which is a nice bonus.
The exact opposite for me -- I’ve been interested in using FastAPI with SQLAlchemy for some time, so this project is exactly what I’ve been looking for.
Also of note: HLS supports byte range addressing, so you can create the various stream files as single .ts files rather than a collection of segments per stream. A client can use the Range HTTP header to select the window of bytes it wants for the stream/bandwidth slot it wants. This mode is supported from something like iOS 5 and up, and ffmpeg has flags to produce such streams.
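To make the byte-range mode concrete, here's a minimal sketch of what a client has to do: translate the playlist's `#EXT-X-BYTERANGE:<n>[@<o>]` tag into an HTTP Range header (HTTP ranges are inclusive, and when the offset is omitted the segment starts where the previous one ended). The tag values are made up for illustration.

```python
def byterange_to_range_header(tag, prev_end=0):
    """Translate an HLS '#EXT-X-BYTERANGE:<n>[@<o>]' tag into an HTTP
    Range header value, returning (header, end_offset). When '@<o>' is
    omitted, the segment starts where the previous one ended."""
    value = tag.split(":", 1)[1]
    if "@" in value:
        length_s, offset_s = value.split("@")
        length, offset = int(length_s), int(offset_s)
    else:
        length, offset = int(value), prev_end
    # HTTP byte ranges are inclusive, hence the -1
    return "bytes={}-{}".format(offset, offset + length - 1), offset + length

header, next_offset = byterange_to_range_header("#EXT-X-BYTERANGE:75232@0")
print(header)  # bytes=0-75231
```

If memory serves, the ffmpeg flag that produces this kind of single-file stream is `-hls_flags single_file` on the hls muxer, which writes one .ts plus a playlist full of EXT-X-BYTERANGE tags.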
If you want good cache hit ratios, you probably want to use segments as files rather than segments as byte ranges. Off the shelf http caching software tends to have filesize limits and may not cache large files in ram or even on disk if they're large enough, and may do unexpected things with range requests like request the whole file, then serve the range to the client. CDNs may be running totally custom http stacks or off the shelf stacks with tuned configs or somewhere in between, but if you intend to use one, it makes sense to build your content so it'll be easily cached.
I have a 2015 MacBook Air and wanted the SD card reader to belong to a Linux guest inside VMware Fusion (for dd'ing custom RPi images). I just couldn't get it to work. Then I found this article https://kb.vmware.com/s/article/1035825 which says: "This issue occurs because the SDXC card slot included in newer Mac models is connected to the Mac using the PCIe bus and not using the USB bus that the original SD card slot uses."
Only if your installation is set up that way. Here's how I create a Jira ticket in my organisation: 1. click "Create Issue", 2. type in a summary (the headline), 3. type in the detail, 4. click "Create".
If you have more stuff to fill out, that's an issue with your local configuration. Blame your Jira master, not Jira or Atlassian :D
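The same two-fields-and-done flow shows up in Jira's REST API: by default only the project, issue type, and summary are required, and everything beyond that is local configuration. A hedged sketch, with a made-up host and project key:

```python
import json
import urllib.request

# Minimal issue-create payload for Jira's REST API v2. "PROJ" and the
# hostname are placeholders; required fields beyond these come from
# your installation's configuration, not from Jira itself.
payload = {
    "fields": {
        "project": {"key": "PROJ"},
        "issuetype": {"name": "Task"},
        "summary": "The headline",
        "description": "The detail",
    }
}

req = urllib.request.Request(
    "https://jira.example.com/rest/api/2/issue",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would fire the request (auth omitted here).
```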
You only have to enter two fields in your workflow and it feels clunky? This doesn't jibe with my experience at all, but perhaps you're on an old version or it's resource-starved. Once you start beating up Jira or Confluence, you have to start tweaking defaults and JVM settings to get the most out of it.
I assume a managed account means it's hosted by someone else. Unless it's Atlassian themselves, they're probably being cheap and giving it as few resources as possible, and it not being on-prem probably contributes to the feeling of slowness.
Jeez, so many replies to the Jira hate with "oh, but it's how you configure it", "oh, it's this", "it's that". Stop apologising for a shitty product. If a user finds it hard, it's failed. End of story.
There is no universal solution. Every product will have users who find it hard to use. Speaking out about the fact that you find it manageable is valuable information and may show that there are more users who find it easy to use than not.
I agree. Things that work for some people don't work for others. But you cannot blame users and say "oh, it's how you configured it", "it's good out of the box", etc.
If it's failed for 1 user, it's a shit product.
If it's a success for 1 user, it's a great product.
It's GOOD for a user to say they find it easy and why.
It's GOOD for a user to say they find it hard and why.
The first helps the user give a product a second chance by learning about its pitfalls.
The second helps the company improve its product to make it easier for more users.
My point is that apologists shouldn't come in here and blame users for a shitty product.
Let's say, for instance, you have kick-ass, seriously best-at-what-it-does software that's only localized in Japanese, and a monoglot USian attempts to use it. They're gonna have a hard time. The software will very likely fail them, because they will be unable to understand anything they're being told by the software.
Does that make the software shit? No. No, it does not. The software is great, it "just" needs to be localized in order to be great for a wider swath of humanity.
> From the perspective of the user, if the product fails for that user, it's a shit product.
Honestly, even this is a hazardous over-generalization.
I've looked at gobs of software that failed to do what I needed it to do. Much of that software was good-to-great at what it was written to do, but it, like a lot of software, wasn't written to do what I needed it to do.
That doesn't make the software shit; that makes it unsuited for my needs. I am, and many others are, clever enough to distinguish between the two categories. :)
Can you please elaborate on this configuration? What is the general nature of the configuration changes you're making? Or, how is your workflow affecting the product. I ask as I am in an organization right on the cusp of implementing JIRA, and I would love to steer the implementation team toward a more useful configuration than the clunky defaults, with which I've had prior experience.
* Setting up several statuses (more than in the default) to reflect backlog, selected, in progress, testing, deployed
* Creating an appropriate board which splits everything into columns by status and rows by user
* Setting up a few quick filters to find things like anything which has been in the backlog for more than 6 weeks or bugs which haven't seen activity in the last 48 hours
* Integrating GitHub. Being able to kick a ticket over to QA from your commit message is awesome.
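For what it's worth, the two quick filters above can be written in JQL roughly like this. The `Backlog` status name comes from the workflow described above, and `created <= -6w` is only an approximation of "sitting in the backlog for 6 weeks" (it assumes tickets start life in the backlog):

```
status = Backlog AND created <= -6w
issuetype = Bug AND updated <= -48h
```

The first finds stale backlog items; the second finds bugs with no activity in the last 48 hours.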
Beyond that, I mostly focus on stripping out default things to make the workflow simpler. I don't need my software to enforce that I can't move a ticket to deployed straight from in progress, and stripping out those extra rules makes it easier to deal with.
Yes, I regularly contemplate the possibility that they all secretly hate me for using software that you don't like. All their appreciation and thanks for moving us over to Jira is surely a facade for their secret plot of hate.
Issue trackers are like utilities, like electricity and water. If they're doing their job, they're invisible. You only notice them when they're broken.
As a result, all issue trackers have more detractors than promoters, with the vast majority of users being neutral.
That said, I've used RT (holy shit no), Pivotal Tracker (drinks too heavily from the dynamic DOM/JS koolaid, sucking up all RAM and CPU in the known universe), and JIRA. I'd take JIRA over the others.
I've used Mantis and Bugzilla and I'd also take JIRA over those.
JIRA is the best bug tracking software I've used since I left Microsoft, which was over a decade ago. Sometimes I still miss Raid, particularly its ability to build complex queries, but also the ability to connect to any team's project. At the time, it was supposedly the only internal tool used throughout the entire company.
I'm curious to learn how API response caching is affected by GraphQL. In a REST setup, there's a possibility that API responses can be cached. But if the response structure is dictated by the client, it seems like responses might differ and not be cacheable.
In practice it wouldn't make much sense to cache whole GraphQL query responses, because your hit rates would be too low due to the variability of the queries. You end up pushing a lot of the caching to the client. That's not really a big issue if you're writing something like an Android or iOS app, because you already need to be caching lots of data on the client side to make the app responsive.
On the server you end up caching at lower levels in the stack. For example, a query for user(id: 123456) {id, name} is going to need data from a key-value store containing user info. That access can easily be cached with something like memcache, saving load on the database. Cache-invalidation problems are also much easier to solve at these layers.
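A minimal sketch of that idea: cache at the resolver/data-fetch layer, so every query that touches a given user shares one cached record regardless of which fields it selects. Here a plain dict stands in for memcache, and the user table is made up.

```python
# Field-level caching for a GraphQL-style resolver. A plain dict
# stands in for memcache; USERS_DB is a made-up backing store.
USERS_DB = {123456: {"id": 123456, "name": "Alice", "email": "a@example.com"}}

cache = {}
db_hits = 0

def fetch_user(user_id):
    """Resolver-level fetch: every query needing this user, whatever
    fields it selects, shares one cached record."""
    global db_hits
    key = "user:{}".format(user_id)
    if key not in cache:
        db_hits += 1              # simulated database round trip
        cache[key] = USERS_DB[user_id]
    return cache[key]

def resolve(user_id, fields):
    # Different queries select different fields but hit the same entry.
    record = fetch_user(user_id)
    return {f: record[f] for f in fields}

resolve(123456, ["id", "name"])     # misses cache: one DB hit
resolve(123456, ["name", "email"])  # different query shape: cache hit
```

Invalidation at this layer is just deleting `user:<id>` when the user changes, which is far simpler than figuring out which of thousands of distinct query responses mentioned that user.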
Worth noting there's a massive performance penalty to pay when caching at the app level, depending on your stack. On hardware where Rails + memcached is struggling to handle 500 concurrent connections, Varnish or nginx will easily handle tens of thousands.