Hacker News | CyberRage's comments

Some information can be viable for longer than that; there are extreme cases where 2-3 minutes would not suffice.


IMO good algorithms can only get you so far. Even with a modern, highly resistant hashing algorithm, a user who picks 'password1234' will get cracked. As computing power grows we need stronger and stronger passwords, which is a losing game, especially for the average Joe.

Modern solutions move away from passwords to MFA and/or digital/physical tokens, where we can control the security level with high precision.

Users are the weakest link as the author stated.


Pair it with a solution like zxcvbn and you can protect your users from their worst impulses when it comes to passwords.


That's called a dictionary attack and it's not a function of computing power.


If you obtain a hash and want to brute-force it with a dictionary attack, the time it takes is still a function of computing power, since you have to hash each candidate string and compare. The success rate, of course, is not.


You don't understand how dictionary attacks work.

1) Dictionaries are TINY compared to the number of possible hashes. Something you could reasonably fit on a single hard drive in many cases. Humans really aren't that creative when it comes to choosing passwords.

2) You don't hash every entry in the dictionary on-the-fly. That's stupid. You store the hashes in a large lookup tree and compare hash-to-hash. There's basically no processing power required, especially compared to actual hashing work.


What is your point exactly?

Really bizarre response; if you don't have something good to say, don't.


Pointless, because these modern algorithms support an adaptable work factor, meaning you can tune the computational cost to your liking.

You can, for example, set the option to compute a billion rounds for each password.
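As an illustration, PBKDF2 in Python's standard library exposes exactly this knob through its `iterations` parameter (the timings printed below are machine-dependent; the iteration counts are arbitrary examples):

```python
import hashlib
import os
import time

password = b"password1234"
salt = os.urandom(16)

# Higher iteration counts make every guess proportionally more expensive
# for an attacker running a dictionary or brute-force attack.
for iterations in (1_000, 100_000, 1_000_000):
    start = time.perf_counter()
    derived = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    elapsed = time.perf_counter() - start
    print(f"{iterations:>9} rounds: {elapsed:.3f}s, {len(derived)}-byte key")
```

Memory-hard designs like scrypt and Argon2 extend the same idea with tunable memory cost as well.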


Passwords are simple to implement and easy for users to adopt.

For a long time, security wasn't a top priority for many companies, if we're frank about it.

Today we have standardized implementations and guidelines, and many additional options like TOTP or U2F, which are pretty robust.

Nowadays if someone wants to implement good user identity/authentication they can.


WebAuthn is a dumpster fire to implement compared to passwords or TOTP.


I specifically didn't mention WebAuthn...

U2F (Yubikeys utilize it) and TOTP are both great options.

Even push-to-login is quite common these days.

You have great options.


For Chrome, U2F was replaced by WebAuthn. Regardless, both are hard to implement compared to passwords.


There is one incorrect statement that really bothered me.

TOTP and passwords are not 'similar', because a TOTP code is a secret generated from the key and is only valid for a short time period.

This makes using that secret vastly more challenging for attackers, especially at scale.

I've seen real attacks where passwords were stolen; it is impossible to keep track of that. With TOTP, however, the attacker has to log in NOW, so you can very easily detect anomalies such as a new IP address with many successful logins across different accounts.

Or to put it differently: a captured password is always a valid credential, whereas TOTP requires more advanced infrastructure (automated logins, saved session tokens), and even then the token will eventually expire; with a password you can generate infinite tokens.


My understanding of the post was that the author means that, from the point of view of "authenticate this user", the TOTP _key/seed_ is quite similar to a password, which I would agree with. Especially if you're storing both in your password manager like the OP.


> the TOTP _key/seed_ is quite similar to a password

It's not though. The whole point is the secret moves once, from server to client (ignoring user exports and whatnot). Pass(word|key)s are basic authentication, and so have to be transmitted with every request. That's a huge difference in surface area. Also they are guaranteed to be high entropy, unlike user-picked passwords with no filter.


It is not because:

1. TOTP is time-based; after 30 seconds the code means absolutely nothing, and you cannot recreate the secret key from that number.

2. With TOTP everything is well-known. TOTP usually generates a 6-digit code, which makes managing it very predictable:

6 digits = 1,000,000 options (including all zeros), so we can easily calculate a good security margin,

e.g. 5 attempts: 5/1,000,000 = 0.0005% chance of success. A very predictable security margin.

For passwords it's a huge unknown; it depends entirely on the user's password quality.
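For reference, the TOTP scheme itself fits in a few lines of standard-library Python. This is a sketch of RFC 6238 (HMAC-SHA1 with dynamic truncation per RFC 4226), checked against the RFC's published test vector:

```python
import hashlib
import hmac
import struct
import time

def totp(key: bytes, at=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC over the current time-step counter, dynamically truncated."""
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII key "12345678901234567890", T=59s, 8 digits.
print(totp(b"12345678901234567890", at=59, digits=8))  # 94287082
```

The code is a pure function of the shared key and the clock, which is why a captured code is worthless once its window passes.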


While you are technically correct, you are missing the whole point of the blog post


Seems like a rant to me.

Passwords are used because they are convenient and intuitive.

Once you use a 'password manager' you basically have a glorified key generator/storer

We already have so many alternatives from GPG keys to FIDO/FIDO2 solutions

Security isn't always the first priority when running a website/app; it's the sad but honest truth (coming from a security expert with over a decade of experience).


I see a ton of Linux malware as part of my job, but it's a different kind from Windows malware (which I also reverse/research). On Linux the focus is on servers/enterprise, so things like webshells, miners, and data scraping are very common.


The vast majority of malware doesn't use any exploits, let alone something like a zero-day.

Most malware, logically, abuses common features of the operating system.


The same is true for Windows... zero-day exploits are hard and get patched rather quickly in all major operating systems.

It is always the abuse of legitimate features that is the problem.


Honestly, HPC moved to GPUs for most of the heavy FP compute.

For CPUs, INT performance is king, even in HPC/enterprise.


Unfortunately this is not true in numerics. Lots of stupidly heavy CFD/FEA-type workloads parallelize well but aren't GPU-accelerated. The reasons aren't clear to me, but a lot of the popular solvers are CPU-only and involve mostly FP calcs. There are a few solvers that use GPUs, but they tend to be less accurate in exchange.


Reasons: there is a significant amount of work needed to get codes to run in a distributed hybrid or GPU-only fashion. It's a completely different coding paradigm that needs significant study before commercial entities adopt GPU use at scale. All-GPU solvers are starting to be developed, such as FUN3D GPU[0], but features are very limited. GPU development is starting to catch up in the community, so it won't be long before a significant portion can operate heterogeneously or in GPU-only mode.

[0] https://fun3d.larc.nasa.gov/GPU_March_2021.pdf


'this is not true in numerics' - shows no evidence...

GPUs are gaining traction in FP workloads; it can be seen clearly in CPU/GPU data-center market share.

Moore's law is pretty much over; we can't simply print more performance these days. We are going to see a major shift to accelerators, which will require some rewrites, otherwise you're going to be stuck.


That would be ironic because Linus also predicted the death of discrete GPUs.


Heavily depends on the workload.

Some workloads can be accelerated via AVX-512, as shown here by AnandTech:

https://www.anandtech.com/show/17601/intel-core-i9-13900k-an...

See how AMD CPUs with AVX-512 enabled see a massive boost even with similar or fewer cores.

I would agree that most typical workloads don't benefit much from AVX-512; it requires software support and a good use case (wide, parallel SIMD).

