
While data is a major point here, in my opinion these are the reasons developers prefer apps:

1. Persistence: while websites are very easy to close, deleting an app is much more difficult and usually requires pressing some “red buttons” and clicking through scary dialogs. Installing also means the user now has a button for your app on their Home Screen, which makes it a lot more accessible.

2. Notifications: while they exist for websites too, they are much less popular there and turned off by default. Notifications are maybe the best way to get the user back into your app.

And while I hate the dark patterns some companies use (Meta, AliExpress, etc.), I do understand why getting the app installed is worth so much to them.


And why does a developer care about those things, if not for the fact that they can collect data even when the user isn’t actively using the service?


> And why does a developer care about those things...

I have several apps on my phone where I am interested in receiving notifications.

1. Airline app. While traveling I need to know about gate changes, flight time changes, etc.

2. Credit card app. I have turned on notifications for all charges above $10.

3. Bank app. I have turned on notifications for all transfers.

4. Moen water meter app. If there is a water leak at my house, I need to know.

5. Server monitor app. If my website goes down, I need to know right away.

6. Google smoke detector. If there is smoke in my house, I need to know right away.

7. Tesla app. If I didn't close the door properly and walked away, the app lets me know.

8. Security camera app. If there is unexpected movement at my home or office, I get an alert.

9. WhatsApp and other messaging apps. When someone sends me a message, I get an alert.

And those are only the things that immediately come to mind. If you were a developer of some of these apps, would you be able to provide these same functions in a user-friendly way with a web app? Genuinely curious.


I actually do not want your garbage persisting on my machine, and if you want to notify me, you can ask for my email and maintain the infrastructure required to send me notification emails.


The article is a bit strange. While GPS can be used to receive accurate timing (phase correction once per second), for GPS-less navigation even a picosecond-accurate atomic clock won't really give any additional benefit compared to a wrist watch.

Using an accurate clock, you might be able to detect spoofing (by detecting small “jumps” in time). However, the same should be possible even with a less accurate clock (a few ppm) by detecting conflicts between the different satellites’ timings (since the “fake” transmitter is on Earth, it will never be able to accurately simulate the real satellites’ propagation delays from space to your specific reception location).

On the other hand, if you pair a very accurate clock with a very accurate gyroscope, you might be able to replace GPS altogether (https://en.m.wikipedia.org/wiki/Inertial_navigation_system). But to my knowledge, these kinds of gyros are not really available for sale (though this is already outside of my expertise, so maybe something has changed).


> On the other hand, if you pair a very accurate clock with a very accurate gyroscope, you might be able to replace GPS altogether (https://en.m.wikipedia.org/wiki/Inertial_navigation_system). But to my knowledge, these kinds of gyros are not really available for sale (though this is already outside of my expertise, so maybe something has changed).

Dead reckoning systems are available with varying degrees of accuracy and drift depending on your budget. It's common to use them to guess location during GPS dropouts, such as driving through tunnels.

More accurate systems are available as budget allows and the military has a lot of research on this topic. Error accumulates over time, so the longer you go without a GPS reset, the worse the precision gets.

You can't fully eliminate the error accumulation over time, so they can't completely replace GPS. You still need some way to periodically refresh your ground truth position.
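
As a toy illustration of that error growth (a 1-D sketch with made-up numbers, not any real INS algorithm): a small constant accelerometer bias, integrated twice, turns into a position error that grows quadratically with time.

    #include <cstdio>

    // Toy 1-D dead reckoning: integrate a biased accelerometer twice.
    // Even standing perfectly still, a 0.001 m/s^2 bias grows into a
    // position error of ~0.5 * bias * t^2, hence the need for periodic
    // fixes from some external ground truth (GPS, landmarks, ...).
    int main() {
        const double dt = 0.01;     // 100 Hz IMU samples
        const double bias = 0.001;  // constant accelerometer bias, m/s^2
        double v = 0.0, x = 0.0;
        for (int step = 1; step <= 10 * 60 * 100; ++step) { // 10 minutes
            const double measured = 0.0 + bias; // true acceleration is zero
            v += measured * dt;
            x += v * dt;
            if (step % (60 * 100) == 0)         // print once per minute
                printf("t = %3d s, position error = %7.1f m\n", step / 100, x);
        }
    }

After ten minutes the "position" has drifted by about 180 m from a bias three orders of magnitude below 1 g, which is why cheap dead reckoning only bridges short GPS dropouts.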



Yeah, I don't get it either.

The clock is not the hard part of this. Oscillators doing 10 MHz or 1 PPS with nanosecond-accurate holdover for 24 hours are easily available (for like $3k for chip-scale atomic clocks, and less for rubidium or whatever).

Galileo et al also have publicly available cryptographic signatures so you can't actually spoof them, only jam them.

If you are trying to do navigation while jammed, the reckoning is the hard part of this, not the clock.

We solved the clock enough already.


> Galileo et al also have publicly available cryptographic signatures so you can't actually spoof them, only jam them.

Replay attacks still work, allowing one to spoof location.


The first thing I said makes this sort of irrelevant, but to go down this path:

The replay attacks I'm aware of fall into two categories: cold start and warm start (mostly from https://arxiv.org/html/2501.09246v1, which has been in progress for a while).

The cold start replay attacks are irrelevant here, unless you can force-restart the GPS receiver in cold-start mode during flight. If you can do that, you probably don't need to spoof the signal. Especially since it requires precise timing to forge the new signal to the receiver at the right time (otherwise it detects it), etc. Seems like there are easier ways.

The warm start replay attack: A. Requires you to replay valid, but out-of-date, signals in real time. This is non-trivial, and also limited in effect, as you can only arbitrarily spoof one receiver to a location of your choosing. Maybe you can get a few receivers with really good high-signal-strength directional beaming of different replays, but it requires real-time tracking and adjustment of the target's signal anyway to be able to spoof the location accurately.

Spoofing the location inaccurately is sort of pointless in most cases.

B. The attack has to change the time (and thus location) slowly enough not to trigger various protections, then keep changing it slowly enough to continue that.

C. The attack requires that your receiver is too stupid to notice that a forced revert to non-authenticated time occurred, doesn't notify you of this, and then doesn't notice that the time or location suddenly jumped by more than any reasonable amount. It also has to not notice that the SNR of everything suddenly changed, etc. Oh, and also, they have to spoof all other sources of time, including local oscillators, etc., for you not to notice.

Given we just talked about how cheap and easy it is to have a high quality oscillator disciplined to time before takeoff, this kind of replay attack seems "practical" only in the sense that it is possible.

Are you aware of other replay attacks? If so, that'd be cool :)

Otherwise, yes, I agree you can spoof location in theory. I can't imagine a practical application of it in the scenario we are talking about.


“Not Like Us” is pretty new (May '24); not sure any proper LLM could have been trained on it. All big OpenAI models know nothing about 2024.


Nothing a simple "Browse the Web" plugin/tool before replying can't fix ;)


Maybe I am missing something, but while it is interesting, I don't think it has any real security impact.

The threat model is that the attacker and the victim are connected to the same router via the same WiFi network, not isolated from each other. In that case, if you are using WiFi with a PSK for example, the attacker can already sniff everything from other clients.

Therefore, you can spoof packets by just responding to them directly. It is a lot simpler and takes a lot less time (since you just need to respond faster than the server, with the right seq and port numbers). Once you are in the same network you can do even crazier stuff, like ARP spoofing: let the victim think that you are the router and convince it to send all of its packets to you (https://en.m.wikipedia.org/wiki/ARP_spoofing).

Edit: on second thought, maybe in a use case where the victim and the attacker are on different WiFi networks (or just configured to be isolated), the attacker would be able to perform a denial of service for a specific ip:port by sending RST and then ACK with every possible source port.


Also, this only works with unencrypted connections (FTP, HTTP), which one should not be using anyway. And like you say, on open or PSK networks you can do worse stuff (if isolation is not enabled, ARP-spoofing the default gateway will be way worse than this).


wxHexEditor is great but not really maintained, and it sometimes crashes (it even has a built-in prayer to save you from crashing: https://github.com/EUA/wxHexEditor/blob/master/src/HexEditor...). A good replacement is ImHex (https://github.com/WerWolv/ImHex), which does the job really well.


ImHex looks amazing, but I couldn't get it to work on my system last time I tried. Not a pre-built version, and not by compiling it myself. So I wrote myself a simple hex viewer. Only a viewer, I don't need an editor.

All other hex editors that I could get to work on my system were really disappointing. Either they couldn't handle large files (>2GB), or they lacked features, like decoding the bytes at the current location as various integer types, had very cumbersome controls for navigation, or displayed important information like the current offset in uneditable labels (status bar) and didn't even give it enough room for large files, so it got cut off! Did they never use their own program?

Anyway, my viewer only has a terminal interface, so you can always select and copy any text it displays. It also has IMHO handy controls to jump around to absolute and relative offsets. See: https://github.com/panzi/rust-hox But don't look at the ugly code. I just cobbled it together somehow because I needed exactly that.
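
For anyone wondering what "decoding the bytes at the current location as various integer types" looks like, here is a minimal sketch (C++ for illustration, assuming a little-endian host; this is not code from rust-hox):

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <cstring>
    #include <vector>

    // Read the bytes at `offset` as a T, the way a hex viewer's data
    // inspector panel does. memcpy sidesteps alignment and strict-aliasing
    // problems; byte order is whatever the host uses (assumed little-endian).
    template <typename T>
    T read_at(const std::vector<uint8_t>& buf, std::size_t offset) {
        T value{};
        if (offset + sizeof(T) <= buf.size())
            std::memcpy(&value, buf.data() + offset, sizeof(T));
        return value;
    }

    int main() {
        std::vector<uint8_t> buf = {0xEF, 0xBE, 0xAD, 0xDE, 0x01, 0x00, 0x00, 0x00};
        std::size_t cursor = 0; // the viewer's "current location"
        printf("u16: %u\n",  (unsigned)read_at<uint16_t>(buf, cursor));
        printf("u32: %u\n",  (unsigned)read_at<uint32_t>(buf, cursor));
        printf("i32: %d\n",  (int)read_at<int32_t>(buf, cursor));
        printf("u64: %llu\n", (unsigned long long)read_at<uint64_t>(buf, cursor));
    }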


I noticed hexyl wasn't on your list: https://github.com/sharkdp/hexyl

Your software seems to be in the same vein as hexyl. I can't personally vouch for how well it handles large files cause it's been a while, but I suspect it'll do alright.


Is that an actual viewer with navigation and all, or is it just like xxd, but with Unicode? Dumping gigabytes to the terminal isn't what I want.


I've actually looked at hexyl and wxHexEditor now and added comments on those to the README of my own hex viewer.


Not sure I am following: what problem is your product trying to solve? Helping to write tests, run the tests, or just organizing tests as part of the CI pipeline? How is it different from just running tests? (Or is it the platform to run tests on?)

If you are trying to do CI for silicon, then what is your target market? From my experience, companies that design their own silicon are usually big enough to have their own custom pipeline for testing and verification, and it would be quite difficult to convince them to switch. Smaller companies get help from larger companies in development and verification.

Do you have any tooling that won’t require the developer to write tests? (E.g. something that will ‘work’ with no effort from the developer’s POV; kind of a SonarQube for VHDL/Verilog.)

In any case, good luck. Glad to see some HW-related startups.


Hey, thanks!

CI is one component of our platform. Most other CI tools are pretty agnostic about how tests are structured, though. We also integrate a way to structure your tests into groups so you can control when each test is called. For example, if one test out of 500 fails, it's super easy to rerun that one test with verbose logging and wave dumping enabled. We then also track test pass/fails over time, have tools to leave comments for coworkers on waveforms and logs in the browser like in Google Docs, etc.

Out of curiosity, what do you mean by "Smaller companies get help from larger companies in development and verification"?


In my experience at two HW companies that developed their own ASICs (one as a startup and one as a publicly traded company), we never developed any chip fully by ourselves. In all cases there was another large company who helped make the project work, so we would actually end up with wafers.

If you are not at the scale of NVIDIA/Intel, releasing new silicon every other month, it is not worth it to recruit so many people for a relatively short period. I am not fully sure how involved they were in the pre-silicon verification process, but at least in some cases they were very involved in the development.


That's not correct. I've worked at everything from start-ups to semiconductor giants. The first option is always to develop everything in house, if you can find the talent. This is pretty much industry standard.


What ASIC/semi start-up that you know of is developing everything in house? That is absurdly complex and costs hundreds of millions of dollars...


Pretty much most of them. They might buy a small IP or two here and there, but for the rest, everyone develops their design mostly in house. It's not hundreds of millions; that's a ridiculous amount of money unless you are designing a huge CPU or TPU or so. We design (can't give the company name) quite large chips with complex analog and digital in 7nm and 5nm as a start-up, and our seed funding was less than $20 million. This is kind of the bare minimum funding for a semi start-up anyhow.


From my experience, the biggest footgun with shared_ptr and multi threading is actually destruction.

It is very hard to understand which thread will call the destructor (which is by definition a non-thread-safe operation), and whether a lambda is currently holding a reference to the object, or its members. Different runs result in different threads calling the destructor, which is very painful to predict and debug.

I think that Rust suffers from the same issue, but maybe it is less relevant, as it is a lot harder to cause thread-safety issues there.
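
A minimal sketch of the race (hypothetical Widget type, not from any particular codebase): whichever thread happens to drop the last reference runs the destructor, and across runs that can be either thread.

    #include <cstdio>
    #include <memory>
    #include <thread>

    struct Widget {
        ~Widget() { std::puts("destroyed by whichever thread released last"); }
    };

    int main() {
        auto w = std::make_shared<Widget>();
        std::thread worker([w] {   // the lambda holds a second reference,
            /* ... use *w ... */   // released when it finishes running
        });
        w.reset();                 // main releases its reference here
        worker.join();             // by now ~Widget() has run, on main OR
    }                              // on the worker, depending on timing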


> which is by definition a non-thread-safe operation

yes, but at this point, since the reference count is reaching 0, there is supposed to be only that one thread accessing the object being destroyed, so the destruction not being thread-safe should not be a problem.

Otherwise, it means there was a prior memory error where a reference to the pointed-to object escaped the shared_ptr. From there, the code is busted anyway. By the way, it cannot happen in Rust.

> Different runs result in different threads calling the destructor

What adverse effects can happen there? I can think of performance impact, if a busy thread terminates the object, or if there is a pattern of always offloading termination to the same thread (or both of these situations happening at once). I can think of potential deadlocks, if a thread holding a lock must take the same lock to destroy the object (unlikely to happen in Rust where the Arc object would typically contain the object wrapped in its mutex and the mutex wouldn't be reused for locking other parts of the code). There isn't much else I can think of, what do you have in mind?
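
For concreteness, here is a sketch of that deadlock case with hypothetical names (std::mutex is not recursive, so re-locking it on the same thread is undefined behavior, in practice usually a hang):

    #include <memory>
    #include <mutex>

    std::mutex registry_mutex;   // guards some shared session registry

    struct Session {
        ~Session() {
            // Unregister on destruction: needs the registry lock.
            std::lock_guard<std::mutex> g(registry_mutex);
            /* ... erase this session from the registry ... */
        }
    };

    int main() {
        auto s = std::make_shared<Session>();
        std::lock_guard<std::mutex> g(registry_mutex);
        s.reset(); // last reference: ~Session() tries to re-take
                   // registry_mutex while we already hold it -> deadlock
    }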

> whether a lambda is currently holding a reference to the object, or its members

This cannot happen in Rust. If a lambda is holding a reference to the object, then it either has (a clone of) the Arc, or is a scoped lambda to a borrow of an Arc.


Looks like this is not the only problematic example. For example, https://demo.corgea.com/338 makes sure you don't try to get ctf.key (but not .env, for example). Another issue: https://demo.corgea.com/531#, where the LLM makes up a usage of shell=True despite the original “vulnerable” code not using it.

Well, at least they are showing a real demo and not some made up results.

I think that overall the idea has some potential, but not sure we are there yet.


Thanks for the feedback!

For the first one: the SAST scanner reports issues to us based on lines and issue type, so we generate fixes isolated to that issue. We do not generate fixes for other vulnerabilities in the same file because we want to have one fix per finding. There might be another fix reported for another issue, and we plan on allowing people to group fixes in the same file together.

Not sure if I'm missing something on the shell=True. It's in the vulnerable code, which is why it changed it. You have to scroll to the right in the code viewer. https://github.com/RhinoSecurityLabs/cloudgoat/blob/8ed1cf0e...

Is there something I'm missing?


For the first issue: I understand. Thanks.

As for the second, there is no shell=True for me in the demo, but it is present in the code you sent. So maybe it is just a bug in the presentation somewhere.


Scrolling to the right should work, but you'll need to do so on each code editor section. We should combine scrolling of these two windows to be in sync.

We'll also take a look at what's causing this. It might be a browser issue.


They scroll in sync for me, but long lines seem truncated in iOS 16.2 Safari. No visible code on that second linked page includes the string in question.


Thanks for sharing! Will look into it :)


Same here, must be a bug in the view; for me it's missing the closing parenthesis as well.


Actually, at its root it is based on SIMD and prefetching. In short, each part of the packet-processing graph is a node. It receives a vector of packets (represented as a vector of packet indexes), and the output is one or more vectors, each of which goes as input to the next node in the processing graph. This architecture maximizes cache hits and keeps the branch predictor warm (since we run the same small code for many packets instead of the whole graph for each packet).

You can read more about it here: https://s3-docs.fd.io/vpp/24.02/aboutvpp/scalar-vs-vector-pa...
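
A rough sketch of the idea (not VPP's actual API; the names here are made up): each node is one tight loop over the whole batch, so its code and branches stay hot across hundreds of packets.

    #include <cstdint>
    #include <vector>

    struct Packet { uint8_t ttl; /* headers, payload... */ };

    // One node = one small loop over the whole batch. The node's code
    // stays in the i-cache and its branches stay predicted across many
    // packets, instead of being evicted by the rest of the graph after
    // every single packet.
    using Batch = std::vector<uint32_t>; // packet indexes, as described above

    Batch decrement_ttl_node(std::vector<Packet>& pool, const Batch& in) {
        Batch out;
        out.reserve(in.size());
        for (uint32_t idx : in) {        // same small code, many packets
            if (--pool[idx].ttl > 0)
                out.push_back(idx);      // survivors feed the next node
        }
        return out;                      // e.g. goes on to a lookup node
    }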


I can certainly imagine some SIMD concepts in that, particularly stream compaction (or in the AVX-512 case, the VPCOMPRESSD and VPEXPANDD instructions).

EDIT: I guess from a SIMD perspective, I'd have expected an interleaved set of packets, a la struct-of-arrays rather than array-of-structs. But maybe that doesn't make sense for packet formats.


The NIC gives you an array (ring buffer) of pointers to structs (packets). Interleaving them into SOA format would probably cost more than any speedup from SIMD.
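
For illustration, roughly the two layouts being discussed (made-up types, not VPP's or any NIC's actual structures); note the transpose into SoA is itself a full pass over every packet:

    #include <cstddef>
    #include <cstdint>

    // Array-of-structs: roughly what the NIC ring hands you,
    // i.e. pointers to packets scattered across DMA buffers.
    struct Packet { uint32_t src_ip, dst_ip; uint16_t len; };

    // Struct-of-arrays: the layout SIMD code would prefer.
    struct BatchSoA { uint32_t src_ip[256], dst_ip[256]; uint16_t len[256]; };

    // The transpose touches every packet once just to rearrange it,
    // which is the cost being argued to outweigh the SIMD gain.
    void transpose(Packet* const ring[], std::size_t n, BatchSoA& out) {
        for (std::size_t i = 0; i < n; ++i) {
            out.src_ip[i] = ring[i]->src_ip;
            out.dst_ip[i] = ring[i]->dst_ip;
            out.len[i]    = ring[i]->len;
        }
    }

    int main() {
        Packet p{0x0a000001, 0x0a000002, 64};
        Packet* ring[1] = {&p};
        BatchSoA batch{};
        transpose(ring, 1, batch);
        return batch.len[0] == 64 ? 0 : 1;
    }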


Yeah, but it's difficult to write a SIMD / AVX-512 routine if things aren't in SoA format.

I can see how the approach described is "vector-like", even if the vector is this... imaginary unit that parallelizes over the branch predictor instead of explicit SIMD code.

This "vector" organization probably has 99.999%+ branch prediction or something, effectively parallelizing the concept. But not in the SIMD-way. So still useful, but not what I thought originally based on the title.


A ring buffer of pointers to structs is friendly to gather instructions. That said, the documentation shows a graph of operations applied to each packet. I'd expect that to lead to a lot of "divergence", and therefore to be non-SIMD-friendly.

(also, x86-64 CPUs with good gather instructions are rare, and sibling comments show that this is aimed at lower end CPUs. That makes SIMD even less relevant.)


Most packets follow the same nodes in the graph. You have some divergence (e.g. ARP packets vs IP packets to forward), but the bulk of the traffic does not. So typically the initial batch of packets might be split in two, with a small "control plane traffic" batch (e.g. ARP) and a big "dataplane traffic" batch (IP packets to forward). You won't do much SIMD on the small control-plane batch, which is branchy anyway, but you do on the big dataplane batch, which is the bulk of the traffic.

And VPP is targeting high-end systems too and uses plenty of AVX-512 (we demonstrated 1 Tbps of IPsec traffic on Intel Ice Lake, for example). It's just very scalable to both small and big systems.


I have been developing a product that uses VPP in production for a few years now. It is very cool to see how much you can squeeze out of cheap low-power CPUs. You can easily handle tens of gbits in iMIX with a few ARM Cortex-A72s.

VPP has very good documentation: https://s3-docs.fd.io/vpp/24.02/ A very cool unique feature is the graph representation for packet processing, and the ability to dynamically insert processing nodes into the graph, per interface, at some point in the processing, using "features" (https://s3-docs.fd.io/vpp/24.02/developer/corearchitecture/f...)


VPP has been shown to run at 22.1 Mpps on a single core of Gracemont (the efficient / Atom core in Alder Lake), and 42.3 Mpps on 2 cores (Intel E810 4x25 NIC, DPDK 22.0, VPP 22.06, GCC 9.4.0, RFC 2544 test with packet loss <= 0.1%).

The same core will do 14.99 Gbps of IPsec (AES-128-GCM, 1480-byte packets) using VPP, largely because it supports (VEX-encoded) VAES.

While these aren't ARM Cortex-A72s, they're quite close (cheap, low power) for Intel.

