Hacker News | new | past | comments | ask | show | jobs | submit | jjk7's comments

Assuming you're serious? Store passwords with salted one-way hashes.

(I will be copy/pasting this answer for the other comments)

My bad - I misread the post.

To clear things up: I am completely aware of how to store passwords in services that check against them. You are likely to have read some of my prose on that topic in OWASP or at a conference :)

My point, after misreading the article, was that in order to authenticate to a service (the one that holds the hashed version of that password), you need access to its cleartext version. That cleartext is VERY sensitive and should never be stored without special considerations, etc.

I read the article as if they had accessed the source of the passwords, the one used to access services (a vault, with its encryption, access restrictions, etc.). 5k was a lot, but those could have been bearer tokens or similar.

So my comment, and the comments to it, actually yelled at me (that's good!) the way I yell at actual implementations sometimes :)

In all seriousness - thanks for the reaction, we need more of these. My next obsession is services that require "only digits" or "strictly 8 to 11 chars" for credentials :)


Before SYSCALL there was INT

I remember SVC, and also things like BR14 and IEFBR14.

I had a similar experience, but with MSSQL: I was invited to join some meetings with MS sales folks. I quickly learned the project was never meant to succeed; it was simply leverage to negotiate a better contract.

A tool so good, the workers need to be forced to use it.

Workers are good at their job using tools they know because they had years/decades to hone that craft.

The new tool might make a lot of that experience obsolete. Also, some people who were good with the old tools might be great with the new tool. Some may not be.

Overall, I don't think it's a bad idea to burn some tokens (and money) to let people experiment.


Once you buy gold, you don't have to rely on other people continuing to burn resources indefinitely. You pay upfront for the cost of mining, and that's it.

They'll resent you insofar as it was confrontational rather than collaborative. If you can incept your conclusion into others, they will not resent you. That's the whole raison d'être of the Socratic method.

I had someone tell me, earnestly, that they hated me because it turned out that I was right. Not in the stubborn sense either.


>Reporting Mechanism: In countries with Intergovernmental Agreements (IGAs), such as Canada, financial institutions report to local tax authorities, which then share the information with the IRS.


Probably accessibility APIs


Which specific ones, though, allow you to send input to a window without raising it? People have been trying to do "focus follows mouse [without auto-raise]" for a long time on the Mac, and the synthetic-event equivalent of command+click is the only discovered method I'm aware of, e.g. as used in https://github.com/sbmpost/AutoRaise

There is also an old blog post by Yegge [1] which mentions `AXUIElementPostKeyboardEvent`, but there were plenty of bugs with that, and I haven't seen anyone else build on it. I guess the modern equivalent is `CGEventPostToPSN`/`CGEventPostToPid`. It's a good candidate, though; perhaps the Sky team they acquired knows the right private APIs to use to get this working.

Edit: The thread at [2] also has some interesting tidbits, such as Automator.app having "Watch Me Do", which can also do this, and a CLI tool that claims to use the CGEventPostToPid API [3]. Maybe there are more ways to do it than I realized.

[1] https://steve-yegge.blogspot.com/2008/04/settling-osx-focus-... [2] https://www.macscripter.net/t/keystroke-to-background-app-as... [3] https://github.com/socsieng/sendkeys


You don't actually need to send CGEvents to UI elements to make them do things ;)


Could you elaborate on what you mean? My understanding of the Cocoa event loop was that ultimately everything is received as an NSEvent at the application layer (maybe that's wrong though).

Do you mean that you can just AXUIElementPerformAction once you have a reference to it and the OS will internally synthesize the right type of event, even if it's not in the foreground?


Yes, you can do a lot of background UI interaction using the AX APIs. Displaying a second cursor is also simple: just a borderless, transparent window that moves around.

For the few things you cannot achieve with the Accessibility APIs, there are ways to post events directly to an app, even though CGEventPostToPid is mostly broken when used on its own. These require a combination of CGEventPostToPid and CGEventTapCreateForPid. (I have done a lot of this stuff in my BetterTouchTool app.)


Neat, good to know! And it does seem my mental model of the event loop was broken: accessibility-related interactions don't have any corresponding NSEvent.

They are handled as part of the "conceptual" run loop, but they seem to be dispatched internally by the AXRuntime library from a callback off some Mach port. Because of this, the call to nextEventMatchingEventMask in the main -[NSApplication run] loop never even sees any such NSEvent.

    -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:]  (in AppKit)
        _DPSNextEvent  (in AppKit)
          _BlockUntilNextEventMatchingListInModeWithFilter  (in HIToolbox)
            ReceiveNextEventCommon  (in HIToolbox)
              RunCurrentEventLoopInMode  (in HIToolbox)
                CFRunLoopRunSpecific  (in CoreFoundation)
                  __CFRunLoopRun  (in CoreFoundation)
                    __CFRunLoopDoSource1  (in CoreFoundation)
                      __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__  (in CoreFoundation)
                        mshMIGPerform  (in HIServices)
                          _XPerformAction  (in HIServices)
                            _AXXMIGPerformAction  (in HIServices)

In some sense this is similar to Apple events, which are also "hidden" from the caller of nextEventMatchingEventMask. From what I can see, those are handled by _DPSNextEvent, which sorts based on the raw Carbon EventRef: aevt types have `AEProcessAppleEvent` called on them, and then the event is consumed silently; others get converted to a CGEvent and returned to the caller for it to handle. But of course accessibility events didn't exist on classic Mac OS, so they couldn't be handled at this layer and were pushed further down. You can almost see the historical legacy here.



Maybe they used Claude to come up with a good method to do this. /s

But I was also wondering how this even works. The AI agent can have its own cursor, and none of its actions interrupt my own workflow at all? Maybe I need to try this.

Also, this sounds like it would be very expensive, since from my understanding each app frame needs to be analysed as an image first, which is pretty token-intensive.


There are better ways to analyze on-screen content than images.


Allbirds ran on Shopify... was the CTO a shoe engineer?


At least tokens are a rough measure of 'thinking'... I wouldn't mind if it burned 100k tokens to output a one-line change that fixes a bug.

The problem is maximizing code generated per token spent. This model of "efficiency" is fundamentally broken.

