Hacker News | Sytten's comments

Great terminal, but it's annoying that every time it updates I have to go back to the settings to disable new AI features or layout changes.

Respectfully, I think your argument defeats itself. If you can only speak to your users once every 10-12 months, it means your process doesn't scale by definition. Good analytics (not useless vanity metrics) should let you spot a problem days after a feature launches, not three quarters later when a user finally airs their grievances.

Microsoft has a horde of developers that fit the entire breadth of gh usage. They could fix issues prior to a release if they wished, without opt-out client-side telemetry.

You're describing a different problem.

Bug fixing absolutely gets taken care of immediately, and our customers are very active in telling us about them through these strange new feedback mechanisms known as "e-mail" and "a telephone."

But we don't spy on people to fix bugs.

Nothing that the big tech "telemetry" is doing is about bug fixes. In the article we're all talking about, the spying that Microsoft proposes isn't to fix bugs. Re-read what they wrote. It's all for things that may not appear for weeks, months, or years.

And to think that a trillion-dollar company like Microsoft can't figure out how, or doesn't have the money available to scale real customer feedback is just sticking your head in the sand and making excuses.

Microsoft doesn't need people to apologize for its failure.


Ah yes, all the spyware on Windows 11 really helped Microsoft scale up development and make it the best Windows version ever.

Now, let's replicate this with GitHub. What can go wrong?


That is just poor analytics IMO. If you have a good harness you can definitely tell if a feature is not well designed. You have to optimize for things like the number of clicks to perform an operation, not time spent in the app.

I used to believe that it was not necessary until I started building my own startup. If you don't have analytics you are flying blind. You don't know what your users actually care about or how to optimize a successful user journey. The difference between what people tell you when asked directly and how they actually use your software is actually shocking.

You're only flying blind if you make decisions without looking and thinking. Analytics isn't the only way to figure out "what your users actually care about"; you can also try the old-school way, commonly referred to as "talking with people". Then, after taking notes, you think about it, maybe discuss it with others. Don't take what people say at face value, but think about it together with your knowledge and experience, and you'll make even better product decisions than the people who are only making "data driven decisions" all the time.

Sure, you can spend the weeks to months of expensive and time consuming work it takes to get a fuzzy, half accurate and biased picture of what your users workflows look like through user interviews and surveys. Or you can look at the analytics, which tell you everything you need to know immediately, always up to date, with perfect precision.

Sometimes HN drives me crazy. From this thread you’d think telemetry is screen recording your every move and facial expression and sending it to the government. I’ve worked at places that had telemetry and it’s more along the granularity of “how many people clicked the secondary button on the third tab?” This is a far cry from “spying on users”.


Many products would be much better if they listened to what people are saying on public forums instead of using telemetry. For example, Google Maps has a longstanding bug where it auto-translates all reviews even if they are in a language you speak. If Google cared about user feedback, they could’ve easily fixed it, but no amount of telemetry will tell them this.

The reality is that most product leaders only care about the feedback that has visible consequences. If users aren't performing some action like quitting the app that shows in the telemetry, then they aren't going to pay attention.

They'd probably call the issue you see a "craft" issue. Some PM is likely raising it. What happens is that leaders in big companies want perspectives backed by data. You can go in with issues like yours, but if you don't have clear data showing significant numbers of users leaving, or users piling in, you might as well not show up. People who care primarily about craft will really struggle in these large organizations. That's not a good thing, but it's how it is.

In large organizations, you'll see a lot of A/B testing and experimentation. Some of the worst decisions from a craft perspective are the ones where the only question asked is "did this cause some kind of negative impact on the numbers?" If your feature is neutral (on abandons, uninstalls, or whatever negative outcome), then it can get shipped, which overrides any qualitative question like "should we ship this in this state?" It doesn't matter much to these folks, because it's not making things worse (in terms of numbers).

There is probably more to explore in modern "product management" that's at the root of many of these problems. HN tends to focus on engineering, but within large companies there is now a bifurcation and the development of a field that forgets that lots of PM practice was already invented.


These kinds of issues cause a negative feeling towards the product in the user. They keep using the product even after having seen a badly auto translated review from a language they speak or all these other things, but they now have a little bit more resentment towards the product. It makes them a bit more likely, over time, to switch to a competitor. Maybe they vent to a friend a month later and the friend suggests giving Apple Maps a try.

How do the metrics you speak of capture these subtle, delayed effects?


My point is that they don't capture the effects you describe - unless designed in. There is little motivation to do that though because they can track larger effects which are aligned with current leadership priorities. That's why I included the part about the PM that has recognized the problem.

I can guarantee you that the class of problem you describe has been discussed at the individual contributor level, so is known to some extent. Getting it from recognition to action is the problem. It is a huge lift to get some of these small things through the gauntlet to execution. Meanwhile, as you say, competitors with taste and attention to detail are building a better product.

This is very much a problem of large organizations, not of those same PMs at a small company. If Google Maps were an independent company, the impediments would be fewer and the priorities more aligned with building the best Google Maps.


I hate this feature. Google knows the languages I speak because I added them in my account, even with all the tracking they obviously know, but they keep messing it up in all their products, Google Search, YouTube (they add machine audio translations to videos and translate the thumbnails).

They even do it on Google Play. No, I don't want to buy books in a language I can't read; suggest ones that I can. It's been like that for a decade now, I think. I guess it doesn't lose them a noticeable amount of money.

What you're saying is exactly wrong. What people say on public forums is a very biased sample, the proverbial vocal minority

> Sure, you can spend the weeks to months of expensive and time consuming work it takes to get a fuzzy, half accurate and biased picture of what your users workflows look like through user interviews and surveys. Or you can look at the analytics, which tell you everything you need to know immediately, always up to date, with perfect precision.

Yes, admittedly, the first time you do these things they're difficult and you have lots to learn. But as you do this more often, build up a knowledge base, and learn about your users, you'll gain knowledge and experience you can reuse, and it'll no longer take you weeks or months of investigation to answer "Where should this button go?"; you'll base it on what you already know.


You seem to be interpreting my position as saying that one should only use telemetry to make decisions. Of course, no one reasonable would hold that position! What I’m saying is that only relying on user interviews without supplementing them with analytics would be knowingly introducing a blind spot into how you understand user behavior.

Yes, probably because someone else said "If you don't have analytics you are flying blind", which I initially replied to. Then when you replied to my reply, I took that as agreeing with the parent, which isn't necessarily true.

> What I’m saying is that only relying on user interviews without supplementing them

I also took your "spend the weeks to months of expensive and time consuming work [...] Or you can look at the analytics" as a "either this or that proposition", where if we're making that choice, I'd go with qualitative data rather than quantitative, regardless of time taken. But probably it comes down to what tradeoffs we're willing to accept.


Maybe it just comes down to how you interpret "flying blind", because I do tend to agree with that statement. Telemetry is one half of the puzzle, user interviews are the other. Without either I would argue you are flying blind; I think you agree here though.

In enterprise, you have little chance of getting the real story from end users in many cases. IT will also tell you that things are used one way, only for analytics to tell you it's the opposite. If you spend some of your UX research budget to deep dive on the area you can then finally get to the bottom of it.

I think the root of the complaints here is prioritization. The things they care about are prioritized. Qualitative feedback is likely already telling PMs that something is wrong and really should be fixed, but other feedback has more data supporting it.


So either you don't want to spend the time doing that, or, as is more often the case in corporate settings, the general turnover of the team is high enough that no one is around long enough to build that deep foundational product knowledge. And, to be frank, most people do not care enough.

This is why telemetry happens: it's faster, easier, and more resilient to organizational turmoil.


> This is why telemetry happens: it's faster, easier, and more resilient to organizational turmoil.

I don't disagree with that, I was mainly talking about trying to deliver an experience that makes sense, is intuitive and as helpful and useful as possible, even in exchange for it taking longer time.

Of course this isn't applicable in every case, sometimes you need different tradeoffs, that's OK too. But that some favor quality over shorter implementation time shouldn't drive people crazy, it's just making different tradeoffs.


> even in exchange for it taking longer time.

I think in terms of corporate teams this is the issue a lot of the time: people just are not on the team long enough to build that knowledge. Between the constant reorgs, and these days layoffs and other churn, no one puts in the years required to gain the implicit knowledge. So orgs reach for the "tenure-independent knowledge base".


Asking users isn't a substitute for usage data.

Usage data is the ground truth.

Soliciting user feedback is invasive, and it's only possible for some questions.

The HN response to this is "too bad" but it's a thought-terminating response.


The ground truth that I never click on Stargate on Netflix is completely at odds with the actual truth that I love Stargate and want more of it and things like it.

What the ground truth usage data is completely ignorant of is that Netflix's copy is a crappy blurry transfer, and so I got dvds instead.


Telemetry doesn’t tell you the “why”. You never clicking on Stargate on Netflix is apparently true, so the telemetry isn’t wrong. It just doesn’t answer why.

Le duh. The whole point is that the perfectly true data is misleading and uninformative, and the "ground truth usage data" argument has a plot hole.

It's not that it has no value at all, it's just that it's stupid to know one thing (how to collect usage data) and think that is all you need to know and that that obviates all other sources of understanding.

If you wish to collect money from human customers, you have to be some minimum level of human yourself. Talking to your customers is not some icky hardship to be avoided and replaced with nice bash script.


Sure, but Netflix is not interested in whether you love Stargate or not. Telemetry says that you never click it, so it's ok to remove it from their catalogue (which is correct).

Now, they could've done a better job by increasing the quality, but that's a further (and costly) optimisation.


Netflix should be very interested in a fact like "Netflix has show X, but Netflix subscribers who love show X choose to watch it someplace else due to issues with Netflix".

It's not correct. I paid someone else for dvds. A little more of that and I may consciously question why I pay both netflix and ebay.

It goes the other way as well. Usage data isn't equivalent to asking users either. A solid percentage of bad decisions in tech can be traced to someone, somewhere forgetting that distinction and trusting usage data that says it's okay to remove <very important feature> because it's infrequently used.

This. If I'm forced to use a feature I hate because it's the only way to do something, the "ground truth" reflects that I like that feature. It doesn't tell the whole story.

Most metrics teams are reasonably competent and are aware of that. Excepting "growth hackers"

I haven't been in a single metrics discussion where we didn't talk about what we're actually measuring, if it reflects what we want to measure, and how to counterbalance metrics sufficiently so we don't build yet another growthhacking disaster.

Doesn't mean that metrics are perfect - they are in fact aggravatingly imprecise - but the ground truth is usually somewhat better than "you clicked it, musta liked it!"


And yet, the observable evidence of changes in software that collect metrics directly contradict this.

Eh, there are a lot of cases where teams A/B test their way into a product that sucks.

Yeah, it's not a good discussion without concrete examples.

One: Building a good UX involves guesswork and experiments. You don't know what will be best for most users until you try something. You will often be wrong, and you rarely find the global maximum on the first try.

This applies to major features but also the most trivial UI details like whether users understand that this label can be clicked or that this button exists.

Two: Like all software, you're in a constant battle to avoid encumbering the system with things you don't actually need, like leaving around UI components that people don't use. Yet you don't want to become so terse with the UI that people find it confusing.

Three: I ran a popular cryptocurrency-related service where people constantly complained about there being no 2FA. I built it and polished a UX flow to both hint at the feature and make it easy to set up. A few months later I saw that only a few people enabled it.

Was it broken? No. It just turns out that people didn't really want to use 2FA.

The point being that you can be super wrong about usage patterns even after talking to users.

Finally: It's easy to think about companies we don't like and telemetry that's too snitchy. I don't want Microslop phoning home each app I open.

But if we only focus on the worst cases, we miss out on the more reasonable cases where thoughtful developers collect minimal data in an earnest effort to make the UX better for everyone.


> You don't know what will be best for most users until you try something.

That's because you don't understand your users. If you did, you wouldn't need to spy on them.

> you rarely find the global maximum on the first try

One never finds the "global maximum" with telemetry, at best a local sort-of maximum. To find what's best, you need understanding, which you never get from telemetry. Telemetry tells you what was done, not why or what was in the people's mind when it was done.


> Asking users isn't a substitute for usage data.

Sure.

> Usage data is the ground truth.

Absolutely not. That's how you get "we buried this feature and nobody used it, so clearly nobody wants it".


> Usage data is the ground truth.

For what, precisely? As far as I know, you can use it to know "how much is X used" but not more than that, and it's not a "ground truth" for anything besides that.


Then pay for the data if you need it so bad.

"You’d think telemetry is screen recording your every move" - that's literally what tracing and telemetry is about.

"Sure, you can spend the weeks to months of expensive and time consuming work it takes to get a fuzzy, half accurate and biased picture of what your users workflows look like through user interviews and surveys. Or you can look at the analytics, which tell you everything you need to know immediately, always up to date, with perfect precision." -> your analytics will never show what you didn't measure - it will only show what you already worked on - at best, it's some kind of validator mechanism - not a driver for feature exploration.

This kind of monitoring needs to go through the documented data exposure, and it's a sufficient argument for a company to stop using GitHub immediately if it takes security seriously.

But I'd add that if you take security seriously you are not on Github anyway.


No, telemetry is not "literally" about screen recording. Telemetry is metrics. That is why they invented a new word for it rather than calling it "screen recording".

Why do you need to collect hardware fingerprints, IMEI, phone number, geolocation, a list of nearby wifi access points, a list of installed applications, a selfie, and a passport photo when you can simply count how many times a server route was called?
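For what it's worth, the route counting I have in mind is roughly this much machinery (a sketch; the route names and the `record_request` helper are made up for illustration, and a real server would increment the counter inside its request handler):

```python
from collections import Counter

# Sketch of route-call counting: one integer per route.
# No IP, no user ID, no device fingerprint is stored.
route_counts = Counter()

def record_request(route: str) -> None:
    """Increment the hit count for a route, and nothing else."""
    route_counts[route] += 1

# Simulated traffic a server might see.
for route in ["/export", "/export", "/settings/2fa", "/export"]:
    record_request(route)

print(route_counts["/export"])        # 3
print(route_counts["/settings/2fa"])  # 1
```

That already answers "is this route used at all, and how much?" without touching anything on the client.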

My comment explicitly uses "how many people clicked the secondary button on the third tab" as an example, not any of that nonsense -- you are not responding in good faith.

That's a slippery slope and we both know it. Telemetry does not automatically include those things.

Indeed it's not fair in the context of this discussion, so I wonder if it was meant as a statement on the ills of telemetry as a whole.

> and sending it to the government

It literally is. The network itself is always listening: https://en.wikipedia.org/wiki/Room_641A

The mere act of making a network connection leaks my physical location, the time I'm using my computer, and the fact that I use a particular piece of software. Enough telemetry endpoints together create a fingerprint unique to me, because it is very unlikely that any other person at the same physical location uses the exact same set of software that I do, almost all of which wants to phone home all the goddamn time. It's the metadata that's important here, so payload contents (including encryption) don't even matter.


There are two aspects of that:

1) Metrics lead to wrong conclusions. There is software with extremely rarely used features; I need them once or twice a year only, but that ability is why I use the software to begin with. If metrics get too much attention, such things are removed as being unimportant...

2) A lot of the tracking happening is way too intrusive and opaque. There are valid use cases; however, some large corporations especially have in the past collected way too much, including private information, without really disclosing it. That overshadows the good cases.


> Or you can look at the analytics, which tell you everything you need to know immediately, always up to date, with perfect precision.

Analytics do not tell you everything you need to know immediately. The analytics may say that no one is using a given feature, but they don't necessarily tell you why. Maybe they don't use it because they're not aware of it, marketing is presenting it wrong, or sales isn't selling against it. Maybe they've tried to use it and it doesn't work for them and they never tried it again. Maybe the call to action to bring them to it doesn't work or directs them wrong. Maybe it gets used by 1% of the users who happen to be power users. You might look at that 1% and conclude that it's not getting enough use to warrant supporting it or keeping it around.


The problem is that, without the context of actually talking to and observing users in the real world, software teams have repeatedly misinterpreted telemetry. Even the description of how they use telemetry to decide which features need investment or improvement shows this. In the face of huge data with no context, they make bad assumptions rather than talking to actual users. Over and over again.

Telemetry is the obvious step before surveillance. Not the telemetry you implement in your own small business, but at the scale of Microsoft, Apple, Meta... yeah.

Yes, but the answer to "how many people clicked that button" is irrelevant without the context of the outside world. This is like concluding something is wrong with umbrellas because none of the users in the desert opened them.

If the questions you have can be answered by simple telemetry, you are likely asking the wrong questions. E.g. a confused user will click all the buttons, while one that efficiently uses your software to solve a very specific problem may only ever press the same ones.

The actually interesting questions are all about how your software empowers users to deal with the things they have to deal with, ideally with as few buttons as possible. And if once a year they need that other button, it will be there.

It is very easy to draw the wrong conclusions from telemetry.


> with perfect precision.

Precision isn't accuracy and all that.


> From this thread you’d think telemetry is screen recording your every move

> it’s more along the granularity of “how many people clicked the secondary button on the third tab?”

You don't see the contradiction here?


> Sometimes HN drives me crazy.

You can tell the difference between those who build businesses and those who simply use them.


I was gonna say it's a difference between producing something to make money, and producing something meant for people to like and perhaps love, but same thing :)

You're never going to win this argument. Most of the people who post here have never actually shipped a product themselves; they only work on isolated features while others handle and manage all of this for them, so they have no real understanding of what it takes to do it.

The other crowd that pretends otherwise are LARPing, or only have some generic open source project that a handful of people use and that they update every 6 years.


> You're never going to win this argument

Probably because there is no "truth" here, only subjective opinion, there is no "winning", only "learning" and "sharing".

I could ramble the same about how "people relying on data never shipped an enjoyable thing that people ended up loving, they only care about shipping as fast as possible" and yadda yadda, or I can actually make my points for why I believe what I believe. I do know what I prefer to read, so that's what I try to contribute back.


Nobody actually cares "what it takes to do it", that's not our problem. You're not entitled to knowing even a single bit of information about us without our consent. Try innovating a way to do it without spying on people.

You could hire people to be testers and pay them for the analytics; I think they would even allow you to record the screen if you paid well enough. The problem is that you do not want to pay or get consent; you want to grab the data for free, without permission, and without people realizing what you do. And that kind of people deserve much worse treatment than they get today.

> You're never going to win this argument. Most of the people who post here have never actually shipped a product themselves; they only work on isolated features while others handle and manage all of this for them, so they have no real understanding of what it takes to do it.

Have you asked those users or did you conclude this from (likely unrelated) "telemetry" you have gathered?


Exactly. Purely "data driven" decisions are how we end up with ads really close to (or overlapping with) some button you want to press, because the data says that increases click-through rates! But it's actually a user-hostile feature that everyone hates.

The reason that feature gets implemented is not because the devs think users will like it ... they know users don't want it, but it drives revenue and pays salaries.

But collecting data and looking for insights doesn't mean you mechanically optimize features, especially user-hostile ones? This is just as, if not more, likely to happen when basing your decisions on what people say they want over what they actually do.

If we were perfectly rational, then yeah, more data should never lead to worse decisions. However, it's easy to fall into the trap where being data-driven makes you only work on those things that you know how to measure.

Or worse, it makes you go look for data backing up the assumptions you already made.

It's easier to not care about users if they are just numbers and you can tell yourself to ignore what they actually say.

We do both and they yield different learnings. They are complementary. We also have an issue tracking board with upvotes. I would say to your point that you can't improve what you don't measure.

I would say to your point that you can't not spy on me while also spying on me. Maybe just don't?

If I was running a physical business and I wrote down each person’s name and credit card number and the exact time and order they placed, that would be pretty invasive and “spying”. If I write down how many units I sold of each item per day, and the volume of transactions by credit card vs cash, it’s anonymized and I don’t think this would generally be considered “spying”, just normal business metrics. How’s the latter much different than anonymized product analytics?
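To make the analogy concrete, here's a minimal sketch of that second kind of bookkeeping (the items, dates, and `record_sale` helper are invented for illustration); the point is that nothing identifying a customer is ever stored:

```python
from collections import defaultdict

# Aggregate-only "business metrics": per-day totals, nothing per-customer.
daily_units = defaultdict(int)      # (day, item) -> units sold
payment_counts = defaultdict(int)   # (day, method) -> number of transactions

def record_sale(day: str, item: str, method: str, units: int) -> None:
    # Note what is NOT recorded: name, card number, exact time, basket contents.
    daily_units[(day, item)] += units
    payment_counts[(day, method)] += 1

record_sale("2024-05-01", "coffee", "card", 2)
record_sale("2024-05-01", "coffee", "cash", 1)
record_sale("2024-05-01", "bagel", "card", 1)

print(daily_units[("2024-05-01", "coffee")])   # 3
print(payment_counts[("2024-05-01", "card")])  # 2
```

You can answer "what sells, and how do people pay?" from this, but you can't reconstruct any individual's purchases from it.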

Watching me use my computer in my house or office is spying.

Aggregating request statistics server-side, unless you're only generating those requests to spy on what I'm doing on my computer, is more like the not-spying you're talking about.


Most telemetry is more along the lines of "user spent N minutes on platform, clicked on these things, looked at these other things" etc etc. And the primary way devs use this data is by aggregating across all users and running a/b tests or viewing longer term trends.

Are some companies spying on you the way you say? Yea, probably. Most of us just want data to know what's working and what's not.


The logical conclusion is you’re asking for no local products and everything to run server side. It’s kind of a ridiculous position that doesn’t change the spying being done other than it’s on the other side of a browser.

I accounted for this in my post. Obviously if you’re making requests just so you can spy, that’s spying.

No you didn’t. If I build you a web video editor, is that because I want to spy on you or because I want to make deployment easier and reduce install friction?

You’re making a distinction that puts you in the privileged judge position of evaluating whether a service is making requests just so it can spy, versus what the app authors might believe is a critical design feature in how they want the product to operate.


> Watching me use my computer in my house or office is spying.

I agree, but once you cross the borders out to the internet, I'd say you need to stop seeing that as "Me sitting at my computer at home", because you're actually "on someone else's property" at that point essentially. And I say this as someone who care greatly about preserving personal privacy.


I deeply hate that this attitude took over even among “hackers”.

Watching people move their mouse and click stuff on “your webpage” is fucking spying. It’s in my browser. On my machine. Not running on your hardware.

Tracking what I do on my own computer doesn’t stop being spying because the program I’m doing stuff in can make network requests. WTF.


> Watching people move their mouse and click stuff on “your webpage” is fucking spying. It’s in my browser. On my machine. Not running on your hardware.

Well, I was mainly talking about network requests, which are quite literally served by "my hardware" when your client reaches out to my servers, and they agree to serve your client. I do agree that it sucks that browser viewports now also are considered "mine" from the perspective of servers, but you do have a choice to execute that code or not, you can always say no.

I don't think it's as much "this attitude took over", people saying that the internet is the wild west and warning you "browse at your own peril" has been around for as long as I can remember.


Yeah server logs don’t bother me. I’m requesting a resource, you unavoidably see that happen.

The attitude that’s changed is that in the 90s and 00s a program that sent information about what you’re doing that wasn’t necessary and expected for how it operates would have been instantly, popularly, and unequivocally labeled spyware by a programmer crowd. Now it’s normal and you get a bunch of folks claiming it’s ok.


Worse, you get a bunch of programmers claiming it's impossible to write software without that level of spying.

You can write down your business transactions, sure.

Telemetry would be more akin to looking through your customers windows each day and noting down how often they use the tools you sold them. Not cool.


It's sort of hilarious to compare "talking to people" with analytics. I'm not defending Github here, but you can't possibly think that "talking to 1M customers" is viable.

You could survey a representative sample

Not really. (a) People hate responding to surveys and hate emails; you're more likely to lose users than to get data. (b) There's no way you're surveying people in a way that gets you information like "time spent on a page" or "time between commits" or whatever.

This is just nonsense tbh. Surveys and customer outreach solve completely different problems from analytics.


If people feel like their feedback is valued you don't even need to ask them for it, they will come to you.

I agree you can't practically get the same information as you could with telemetry.

Survey data is still real data that can be used for "analytics".

Some people also hate telemetry. It feels invasive. I have a guess about which direction the percentage of consumers who hate telemetry is moving.


You can hire people to test your product and provide analytics. Just don't try to siphon the data for free.

I'm not taking a side on whether a product should add telemetry. I'm rejecting the absurd notion that these suggestions are at all giving the same information.

No one claimed that they give the same information, only that it's viable to produce a good product that solves your user's needs without using telemetry. The whole point is that you don't get the same information, e.g. no private data that the users haven't provided informed consent for upload to your servers.

That’s what user forums are for.

You can set up a user forum if you'd like. If you think it will get you the same information that analytics will, you're obviously wrong.

Kagi has a user forum (as well as listening to comments on other sites like Hacker News) and does not (at least supposedly) collect telemetry. They seem to be doing fine when it comes to feedback.

Okay?

> Don't take what people say at face value, but think about it together with your knowledge and experience

While you’re comparing different information sources, you might even want to consider telemetry, too.


What people say, and what people do are different things. Especially when the people who agree to talk to you aren't representative of the whole user base.

And what people do and the reasons they do it are even more different.

> The difference between what people tell you when asked directly and how they actually use your software is actually shocking.

And the difference between what they do and what they want is equally shocking. If what they want isn’t in your app, they can’t do it and it won’t show up in your data.

Quantitative data doesn’t tell you what your users want or care about. It tells you only what they are doing. You can get similar data without spying on your users.

I don’t necessarily think all data gathering is equivalent to spying, but if it’s not entirely opt-in, I think it is effectively spying no matter what you’re collecting, varying only along a dimension of invasiveness.


> If what they want isn’t in your app, they can’t do it and it won’t show up in your data.

Excellent point.

> but if it’s not entirely opt-in, I think it is effectively spying no matter what you’re collecting, varying only along a dimension of invasiveness.

Every web page visit is logged on the http server, and that's been the default since the mid 1990's. Is that spying?


In principle, yes, I believe it is a form of spying. Not particularly invasive nor harmful, but spying nonetheless.

Logging every page visited is not a technical requirement of serving the requested resource.


But it's just tracking something the server was asked to do - I'd say it's legitimate logging.

If you buy something at the supermarket, the supermarket keeps a record of the transaction - it's part of the process.

However if you try and link that to entities and build a pattern behaviour across multiple websites then I think you stray into spying.

Also if the tin of beans I bought at the supermarket records audio at home and uploads to the cloud - that's spying.


> Logging every page visited is not a technical requirement of serving the requested resource.

How will you know which page is having problems being served or is having performance problems?


You won’t, but that’s not what was asked.

Logging the requested resource is not a technical requirement of serving that resource.


Depends how you define “technical requirement” but I’d say 404 for example is an indication of a failure to serve a given resource. If you don’t have logging you won’t know unless someone complains which means you’ll only catch the most visible issues. Same goes for performance - everywhere I’ve ever worked serving a resource was tightly coupled to “how fast can the user retrieve that resource”.

No; since the GDPR, logging page views is not something you can do without thinking about what metadata you store with them and for how long.

> If you dont have analytics you are flying blind.

We... we are talking about a CLI tool. A CLI tool that directly uses the API. A tool which already identifies itself with a User-Agent[0].

A tool which obviously knows who is using it. What information are you gathering by running telemetry on my machine that couldn't.. just. be. a. database. query?

Reading the justification the main thing they seem to want to know is if gh is being driven by a human or an agent... Which, F off with your creepy nonsense.

Please don't just use generic "but ma analytics!" when this obviously doesn't apply here?

[0]: https://github.com/cli/cli/blob/3ad29588b8bf9f2390be652f46ee...
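A hedged sketch of that idea: the server already sees the CLI's User-Agent on every request, so usage counts can come from a pass over access logs instead of client-side telemetry. The log layout and version string below are made up for illustration, not GitHub's actual schema.

```rust
// Extract the User-Agent from a combined-log-style line, where the UA is
// the final quoted field. Returns None if the line has no quoted fields.
fn user_agent(log_line: &str) -> Option<&str> {
    let mut parts = log_line.rsplit('"');
    parts.next()?; // trailing empty segment after the final quote
    parts.next()   // the User-Agent field itself
}

// Count requests whose User-Agent identifies the gh CLI.
fn count_gh_cli(lines: &[&str]) -> usize {
    lines
        .iter()
        .filter_map(|l| user_agent(l))
        .filter(|ua| ua.starts_with("GitHub CLI"))
        .count()
}

fn main() {
    // Hypothetical access-log lines, for illustration only.
    let logs = [
        r#"10.0.0.1 - - [01/Jan/2025] "GET /repos HTTP/1.1" 200 "-" "GitHub CLI 2.40.0""#,
        r#"10.0.0.2 - - [01/Jan/2025] "GET /repos HTTP/1.1" 200 "-" "Mozilla/5.0""#,
    ];
    println!("gh requests: {}", count_gh_cli(&logs));
}
```

The same aggregation is equally doable as a query against whatever store the logs already land in; the point is that no new client-side collection is required.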


Wow, it really is sad how literally unthinkable it is to you and so much of the industry that you could actually talk to your users and customers like human beings instead of just data points.

And you know what happens when you reach out to talk to your customers like human beings instead of spying on them like animals? They like you more and they raise issues that your telemetry would never even think to measure.

It's called user research and client relationship management.


I think you’re overlooking that they were talking about stated and revealed preferences, a well known economic challenge where what people say is important to them and what shows up in the data is a gap. Of course you talk to users and do relationship management. That doesn’t negate the need to understand revealed preferences.

In the OSS world this is not a huge deal. You get some community that’s underserved by the product (ie software package) and they fork, modify, or build something else. If it turned out to be valuable, then you get the old solution complemented or replaced. In the business world this is an existential threat to the business - you want to make sure your users aren’t better served by a competitor who’s focusing on your blindspot.


Customer interviews are an indispensable, high-value activity for all businesses. They are a permanent, ongoing capability that the organization must have. A conversation will surface things that analytics will not catch. People will describe their experiences in a qualitative manner that can inspire product improvements that analytics never will.

However, the plural of "anecdote" is not "data". People are unreliable narrators, and you can only ask them so many questions in a limited time amid their busy lives. Also, there are trends which appear sooner in automated analytics by days, weeks, or even months than they would appear in data gathered by the most ambitious interview schedule.

There is a third, middle-ground option as well: surveys. They don't require as much time commitment from the user or the company as a sit-down interview. A larger number of people are willing to engage with them than are willing to schedule a call.

In my experience, all three are indispensable tools.


You are inferring your own perception based on my comment; no need to be an asshole here. Like I said elsewhere, we do both, and they serve different purposes. We also make it very clear and easy to disable in the onboarding. I hope you try to build a business sometime and open up your perspective: maybe, just maybe, you don't have all the answers.

You stated that you are blind without analytics, which heavily implies other forms of user research are useless and don’t provide meaningful signal. I don’t think an assumption that you’re not using other methods is that outrageous.

> We also make is very clear and easy to disable in the onboarding.

Yeah, sure. How long is that policy gonna last? How does a user even know that that checkbox does anything?

Once you’ve decided to break a social contract it’s not like you can slap a bandaid on it and it’s all okay now.

> I hope you try to build a business sometimes and open up your perspectives that maybe just maybe you don't have all the answers.

People were building successful businesses long before the Internet.


> You are inferring your own perception based on my comment, no need to be an asshole here.

People in this case are likely extrapolating based on how user data is harvested in the industry at large. So there is bound to be (very likely) some characterization that is unfair to you.

Given modern data aggregation, really data vacuuming, and that software is opaque, it can be really hard to trust anyone with any aggregation of data. They say that they pseudonymize properly. The proof? Trust them bro. Then read yet another news article about how some data aggregation was either sloppily leaked or just a front for selling data.

A natural response to opaque practices by people you don’t trust is a hardline no.


The problem they're trying to solve is to find out what functions of their software are most useful for people and what to invest in, and to make directions on product direction.

Yes, vendors can, do, and should talk to users, but then a lot of users don't like receiving cold messages from vendors (and some users go so far as to say that cold messages should _never_ be sent).

So, the alternative is to collect some soft telemetry to get usage metrics. As long as a company is upfront about it and provides an opt-out mechanism, I don't see a problem with it. Software projects (and the businesses around them) die if they don't make the right decisions.

As an open source author and maintainer, I very rarely hear from my users unless I put in the legwork to reach out to them so I completely identify with this.


If you have an existing financial relationship with someone it is by definition not a "cold message." People who think they should never, ever be contacted by a company they are paying to use a service of are in the extreme minority. That's "cabin in the woods with no electricity" territory.

Marketing came to the conclusion that people don't know what they actually want. They decided to lump in engineers and programmers as well, since they started abusing their goodwill.

Apple and Microsoft reached their peak usability when they employed teams of people to literally sit and watch what users did in real life (and listen to them narrating what they want to do), take notes, and ask followup questions.

Everything went to crap in the metric-based era that followed.


Get off your high horse.

Talking to users when you have hundreds of customers does no more than give you an idea of what those specific people need. If you have hundreds of users or more, then data is the only thing that reliably tells you these things.


Clearly the data doesn't tell you that. Either that, or most companies gobbling up ungodly amounts of private data don't actually care what it tells them about the user's needs.

The totality of Microsoft's products is proof that this is false. If telemetry and analytics actually mattered for usability, every product Microsoft puts out would be good instead of garbage.

There are far too many factors to assign the quality of microsoft's products to telemetry.

Having the data doesn't mean you will act on it. And doesn't mean microsofts interests are aligned with the users.


Be very careful with that.

Analytics driven development easily leads to bad outcomes. 1. Important, but less frequently used feature gets moved to a hidden spot leading to even less usage leading to eventual removal. 2. Poorly functioning features not getting the improvement they need because few use them due to how poorly they function.

I have seen these patterns a lot in software where decisions are based on analytics, and I usually stop using that software when I find a replacement.


> If you dont have analytics you are flying blind

More like flying based on your knowledge as a pilot and not by the whims of your passengers.

For many CLIs and developer tooling, principled decisions need to reign. Accepting the unquantifiability of usage in a principled product is often difficult for those that are not the target demographic, but for developer tools specifically (be they programming languages, CLIs, APIs, SDKs, etc), cohesion and common sense are usually enough. It also seems real hard for product teams to accept the value of the status quo with these existing, heavily used tools.


Actually it's more like flying in the clouds with no instruments which can lead to spatial disorientation when you exit the cloud cover and realize you're nosediving towards the earth. https://en.wikipedia.org/wiki/Spatial_disorientation

Flying based on the whims of your passengers would be user testing/interviewing, which is a complementary, and IMO necessary, strategy alongside analytics.


Analytics is wrong. I never click any ads, but they keep showing it. I avoid registering or enter fake emails, but they keep showing full screen popups asking for email. I always reject cookies but they still ask me to accept them. And youtube keeps pushing those vertical videos for alternately gifted kids despite me never watching them. What's the point of this garbage analytics. It seems that their only goal is to annoy people.

All of those are affected by analytics.

Ad slots will be filled whether or not you click. If you never click, you'll tend to match with either very low quality ads or ads that pay per impression (display ads).

Email registration is highly valuable for a business, so analytics won't be used to decide whether to show the modal but rather test different versions of it.

Cookies are too valuable to not push on users, because without them only the previously mentioned low quality ads can be shown. High quality and display ads match on interest or demographic labels.

The business decision to keep vertical videos is highly likely to be affected by analytics, and of course the choice of which videos to show is based on recommendation models trained on interaction logs.

The priority isn't making your experience better, though that is often an incidental result -- it's driving the business.


It makes me wonder: which `gh` features don't generate some activity in the GitHub API that could just as easily guide feature development without adding extra telemetry?

Yeah. Unless they plan to move more local git operations in the tool and blur the line between git and gh.

You can "optimize a successful user journey" by making the software easy to use, making it load so fast people are surprised by it, and talking to your customers. Telemetry doesn't help you do any of that, but it does help you squeeze more money out of them, or find out where you can pop an interstitial ad to goose your ad revenue, and what features you can move up a tier level to increase revenue without providing any additional value.

I'm pretty ok with the github cli tool team flying blind. The tool isn't exactly a necessary part of any workflow. You don't need telemetry to glean that

that's akin to saying "i do not need their product therefore i don't care"... so what's your point? someone may have made it part of their workflow!

True. Some people shouldn't use git if their workflow doesn't beg it.

> If you dont have analytics you are flying blind

If you have too much emphasis on (invasive) analytics you might end up flying empty i.e. without customers.


Game developers benefit tremendously from streams where they get to see peoples webcams _and_ screens as they use their software.

This would be _absolutely insane_ telemetry to request from a user for any other piece of software, but it would be fantastically useful in identifying where people get frustrated and why.

That said, I do not trust Microsoft with any telemetry, I am not invested in helping them improve their product, and I am happy not to rely on the GitHub CLI.


You can get this level of observation by paying testers (both professions as well as existing users who provide informed consent). You don't get to just spy on random users, yes.

I agree with you in that regard. That said, knowing that this is Microsoft, the data will be used to extract value from the customers, not provide them with one.

You could, I don't know, do user interviews with the various customer segments that use your product.

Teams that do this need to just dogfood internally. Once you start collecting telemetry on external users defaulted to opt-in you're not a good faith actor in the ecosystem.

This got me thinking: Are there prominent examples of open source projects that 1. collect telemetry, 2. without a way to opt-out (or obfuscating / making it difficult to opt-out)? This practice seems to be specific to corporate software development.

Why is it that startups and commercial software developers seem to be the only ones obsessed with telemetry? Why do they need it to "optimize user journeys" but open source projects do just fine while flying blind?


open source projects are usually creating something for themselves so it's much easier to know what to build when you are the user

whereas commercial software has a disconnect between who the users are and who the developers are


Bats are mostly blind and manage to fly just fine.

They could well use the data from their own developers, couldn't that be enough?

I think there's room for a distinction between "not using metrics" and "not using data".

Unthinkingly leaning on metrics is likely to help you build a faster, stronger horse, while at the same time avoiding building a car, a bus or a tractor.


It's not like they don't own the APIs that those CLIs are hitting. They have all the stats they need.

You have all the info you need on the server side; I don't believe that you're totally blind without client tracking.

How did GitHub ever survive without this telemetry? Was it a web application buried in obscurity?

Interesting product (Caido co-founder here). It is very hard to nail auth, probably the most underlooked aspect by end users. We are working on something similar for PoC reproduction of vulnerabilities.

Fingerprinting is also a hard thing to match perfectly, I would be curious to know what your strategy is on that. My experience has been that unless you bundle multiple TLS lib it is almost impossible to do at 100% because none of the lib cover all the TLS extensions.


We’re currently running a variety of stuff for TLS/HTTP2. If you download you can see the full trace of the connection. We dump the TLS connection byte for byte with the different structured subsections. With tls.peet.ws and bogdann finn’s tls-client (which we use parts of with some modifications) I would say that http3/tcp fingerprinting is probably the remaining issue. We currently don’t support http3 connections (they’re niche + apple system proxy doesn’t support them well), and TCP fingerprinting is a bit too low level to build out tooling in GO currently. Possibly for a later release. Curious if you’ve tried bogdann finn/the existing tooling?


We are in Rust so our options are more limited. Make sense the golang ecosystem is pretty good for that.
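To illustrate why matching is hard: a JA3-style fingerprint flattens the whole ClientHello (version, cipher suites, extensions, curves, point formats) into one string, so a TLS library that can't emit even one extension produces a different fingerprint. This is a stdlib-only Rust sketch of the string assembly; the real JA3 value is the MD5 of this string (hashing omitted here), and all numeric values below are illustrative, not taken from a real client.

```rust
// Build a JA3-style string: "version,ciphers,extensions,curves,point_formats",
// with the values inside each field joined by '-'.
fn ja3_string(
    version: u16,
    ciphers: &[u16],
    extensions: &[u16],
    curves: &[u16],
    point_formats: &[u8],
) -> String {
    let join = |v: &[u16]| {
        v.iter().map(|n| n.to_string()).collect::<Vec<_>>().join("-")
    };
    let pf = point_formats
        .iter()
        .map(|n| n.to_string())
        .collect::<Vec<_>>()
        .join("-");
    format!(
        "{},{},{},{},{}",
        version, join(ciphers), join(extensions), join(curves), pf
    )
}

fn main() {
    // Illustrative values only: TLS 1.2 on the wire, two ciphers, a few
    // extensions. Dropping any single number changes the fingerprint.
    println!("{}", ja3_string(771, &[4865, 4866], &[0, 11, 10], &[29, 23], &[0]));
}
```

Since every field feeds the hash, "almost complete" extension coverage still yields a fingerprint that matches nothing, which is why bundling multiple TLS libraries only gets you partway there.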

Joke's on them, I am still running Windows 10 and not planning to move my gaming PC to Windows 11 (hardware not supported).


My hardware is supported and I'm sticking with 10 until the last ATM finally updates


I really wish people would stop using the language as an argument and that commenter would also move on to a more interesting debate.

In your discussion the first comment from an ex kuzu dev made an excellent point that rust for databases in an excellent language to ship faster with confidence while reducing real problems of concurrency and corruption.

At some point it becomes intellectual dishonesty to dismiss a language because of vibes instead of merit.


I didn't dismiss the language. I called it a north star. Rust is still the best option if you desire memory safety.

But rewriting a complex working piece of software in Rust is not trivial. Having an incremental path (where only parts are rewritten in Rust and compatible with C++ code) would be a good path to get there.

Also open to new code and extensions getting written in Rust.


Knowing if it is embeddable or server would be nice in that table


Yes, I have the "embedded" kind in there but a dedicated column would be nice. Thanks!


Trademarks are always scoped to particular domains; they are not universal. If you look up Mouser you will see they listed the usage for electronic components, distribution, and related areas. No mention of software. They might fight you, since you do have to protect your trademark, but in theory you could open a Mouser restaurant and trademark that name for food distribution. As long as customers are able to tell the difference, it is fine.


Indeed, I think the main issue with the name is searchability.


Also note that the inline directive is only a hint; the compiler can decide to ignore it (even if you specify `always`, if I remember correctly).
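A minimal Rust illustration of that point: `#[inline(always)]` is a request, not a guarantee. The compiler may decline it (for example, for recursive functions), and may just as freely inline unannotated functions on its own.

```rust
// The attribute below only suggests inlining to the compiler; the observable
// behavior of the function is identical whether or not it is inlined.
#[inline(always)]
fn square(x: u64) -> u64 {
    x * x
}

fn main() {
    assert_eq!(square(12), 144);
    println!("{}", square(12));
}
```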

