Hacker News | kimi's comments

The point is not launching companies, it's making money with them.

Agree 100%. We are in revenue and I have operating partners in all of these.

Not sure I understand the point... any p2p WebRTC call is encrypted e2e.
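
For the curious, the browser API doesn't even offer an unencrypted mode; here's a minimal sketch, where sendToPeer and the STUN server are placeholders for whatever signaling you use:

    // Encryption is not something you turn on: DTLS-SRTP is the only mode offered.
    declare function sendToPeer(sdp?: string): void; // your signaling channel, not part of WebRTC

    const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.example.org" }] });

    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    stream.getTracks().forEach((track) => pc.addTrack(track, stream));

    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    sendToPeer(offer.sdp); // the SDP already carries a DTLS fingerprint; media goes out as SRTP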

This is now. But the seeds were sown much earlier.

The anti-nuclear movement, at least in Western Europe, historically correlated very strongly with sympathy for a certain very large nuclear power, one that would have strategically benefited from an anti-nuke sentiment preventing another nuclear power's weapons from being deployed at EU bases. But I'm sure it is a coincidence.

Archive returns 503 ATM....


Imagine when you receive millions of events per day -- the ones that arrive out of order, the days when delivery time goes up and up, the days when OAuth fails to renew keys... it's a lot of fun.

Compare that to a sad websocket that just stays connected: you receive everything in order, and you don't need a harness with tunnels every time you want to test something in dev.
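
To give an idea of what the webhook side forces on you, here's a toy sketch of the reordering buffer (the "seq" field is an assumption; real providers each have their own ordering key, or none at all):

    // Webhook events may arrive out of order, so buffer them and release in sequence.
    type WebhookEvent = { seq: number; payload: unknown };

    const pending = new Map<number, WebhookEvent>();
    let nextSeq = 0;

    function handleEvent(ev: WebhookEvent): void {
      console.log("processing", ev.seq); // business logic goes here
    }

    function onWebhook(ev: WebhookEvent): void {
      pending.set(ev.seq, ev);
      // Flush everything that is now contiguous with what we already processed.
      while (pending.has(nextSeq)) {
        handleEvent(pending.get(nextSeq)!);
        pending.delete(nextSeq);
        nextSeq++;
      }
    }

    onWebhook({ seq: 1, payload: "b" }); // arrives first...
    onWebhook({ seq: 0, payload: "a" }); // ...but both get processed as 0, then 1

    // The websocket version of the same consumer is just:
    //   ws.onmessage = (msg) => handleEvent(JSON.parse(msg.data));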


Interesting to see the SDK making chat/bot integration simpler, but there's a whole other dimension to Teams integration that this doesn't touch: telephony.

The company I'm at has been building call analytics for Teams since 2021 (QueueMetrics Live). A long-time customer migrated their entire phone system to Teams during the pandemic and asked my boss if we could follow them there. We said "in principle, yes": the Graph API exposes telephony events, so it seemed doable.

It was doable. It was also far harder than anybody expected. The Graph API gives you raw call records, but it doesn't model concepts like "wait time in queue" or "lost call" or "failed agent attempt", and for what it does model, nowhere is it written HOW. You have to reconstruct those from sequences of low-level events. We ended up building the whole processing pipeline in Elixir because we needed to handle the real-time stream reliably at scale. After a long beta and a few million calls processed, we got to GA. I was dabbling in Elixir at the time; I put it on my CV, and it was noted. In a sense, I owe Teams my current job (though I ended up on a different team, so I'm mostly repeating coffee-mug lore here).
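
To give an idea of what "reconstruct from low-level events" means, here's a toy version of one metric (the field names are simplified and partly made up; the real callRecord resource is much richer and messier):

    // Toy reconstruction of "wait time in queue" from a call record's segments.
    type Segment = {
      startDateTime: string;     // ISO 8601 timestamps, as Graph returns them
      endDateTime: string;
      answeredByAgent?: string;  // assumption: set once an agent actually picked up
    };

    function waitTime(segments: Segment[]): { seconds: number } | "lost call" {
      const byStart = [...segments].sort(
        (a, b) => Date.parse(a.startDateTime) - Date.parse(b.startDateTime)
      );
      const answered = byStart.find((s) => s.answeredByAgent !== undefined);
      if (!answered) return "lost call"; // nobody picked up before the caller gave up
      const enteredQueue = Date.parse(byStart[0].startDateTime);
      return { seconds: (Date.parse(answered.startDateTime) - enteredQueue) / 1000 };
    }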

We set out to track queues and auto-attendants (that's our bread and butter from the Asterisk world), but we discovered we were seeing everything — inbound, outbound, Teams-to-Teams, even calls with other companies, with their internal IDs. You can get a complete picture of someone's telephony activity regardless of whether they work in a contact center or just use Teams as their phone. Most of the boring config (names, groups, codes) comes straight from Graph, which is nice.

Like it or not, a lot of enterprises are quietly moving their entire communications (including telephony) to Teams. When they do, they lose the monitoring and analytics they had with their old PBX. That's a real gap, and the Graph API — despite its limitations — gives you something to fill it. But I have a feeling that "3 lines of code" won't cut it. :-)


I was bitten by the same issue. Maybe add a line to the docs.


Added to the README, apologies for that.


I have a pet peeve with this. As a non-native English speaker, I find it very useful to dictate multiple notes, in different languages, and have the LLM produce clear English prose out of it. The prose may be LLM-generated, but I edit it when needed to make sure the content is 100% mine.

It's like dictating to a typist in the '60s - he will make sure your letter looks professional and fix your grammar, but you will sign the letter. This is totally different from LLM spam, the kind that inflates a sentence into a three-page article full of nothing.

So - is it a problem if the language reverts to a mean? That is the point of a shared language, right?


It's not just the language that reverts to a mean, it's the knowledge embedded in the model. If you're interested in discussing niche topics with ChatGPT, the further the model collapses, the less likely you are to get meaningful results from the "tail" - the areas of knowledge that fall at the far ends of the model's bell curve.


Actually, both will, as they are not separate within the LLM. The thing is, one is a style issue, the other content. You can express original ideas and still use a lot of em dashes, or produce slop with a lot of typos in it.


Wish I had something like that when I was coding demos around 1988... :-)

(Edit): you kids have it easy.


I grew up in the 80's programming my C64, assuming that this would be more or less the sort of thing I'd be doing for a living. The reality is actually pretty disappointing. I _wish_ anybody still did this stuff.


I wrote something similar in the 1990s. Except, I had to write the TUI library myself (in x86 assembler) since nothing like what I needed existed at the time.

I was working on a project to do ECU performance curve remapping for a rally driver friend, so mine had additional features like the ability to export memory segments as .m files for plotting curves in Matlab.

I watched a video about ECU remapping (on a modern processor) yesterday, and the guy started by using the OBD port to get access to the system. I had to physically desolder the EPROM from the board to dump it back in the day.

The kids today have it far too easy.

