I have. I've used all of the git GUIs. None are as good as p4v as far as a GUI goes, and none are particularly usable by artists. Especially powerful in p4v is the timelapse view, which nothing in any git client comes close to approaching.
I recently switched to GitKraken from SourceTree. SourceTree has better features and workflow, but its performance is truly terrible: I'm talking 2+ minute waits for some operations to complete.
Not sure why GitKraken makes it so hard to add remote repositories, doesn't allow auto-push, and lacks other niceties. But I use it for normal check-in / branch-switching workflows and it's great.
Aah, sweet, someone else who uses Perforce! I've been wondering, do you use version control as:
1. a way to record logical changes to files (e.g. implement two features without making any commits, then when you're done, pick out the files/chunks that encode each feature and create commits out of them),
2. a record of history (e.g. just start writing code and make a commit every time you compile/run tests without unexpected failures),
3. something else?
I've found it very painful to apply my git-adapted workflow to Perforce: I _want_ to just start coding, testing out various possible design choices and implementations instead of only theorizing about them, but can't (e.g. "I wonder if I could factor out these methods + fields into a different class?" Perforce: oh well, I guess I should write it down somewhere to remember for later. Git: branch my current work, spend five minutes sketching an extraction, then realize it's insane and continue working). Am I crazy and just don't realize how much better the Perforce model is?
I actually came to quite like it, but the workflow was sometimes a bit tiresome :( When I found myself in the situation you describe, potentially wanting to make an additional change while already working on another, I did exactly what you suggest: add a todo item and carry on until I'm done with the task at hand. Then start my second change with nothing checked out, so I can undo the checkout on everything should I make a mess.
This works, and you get used to it, and of course many would say that it's a better approach - but it would be nice if the Perforce client tools could be a bit more imaginative.
As for how I think of version control, if it's git, #1. If it's Perforce, #2, plus #3 - backup and distribution.
I don't know how much value I get from being careful about my commits with git, but it does make me feel better (which I suppose could be reason enough). On the large-scale, goal-oriented projects that I've used Perforce for, worrying about logical changes has never felt very important. Does it make a big difference if you have one commit that implements 3 features, or 3 commits? When you're trying to fix a bug, you don't really mind either way, because (a) bugs don't respect feature boundaries, (b) all the features are non-negotiable, so it's not like you can back one out and carry on anyway, and (c) the project is large and fast-moving enough that even if you could, there'd be a good chance that actually, you couldn't.
Where history tends to come in useful on this sort of project: finding the code that corresponds to a particular build that you have a bug report for, and finding who made a particular change so that you can ask them about it (using the timelapse view, say). These both work fine whether you make granular changes or not.
You might want to be a bit more careful about things if you're working on changes that might want to be merged into another branch on an individual basis. (Suppose you're moving spot fixes from release branch to main branch - some you'll want, some you won't.) But you usually know when you're making this sort of change, and so you work that way anyway.
Thus the reason I know carpentry :3 But, having programmed for a while, it seems like the hardest problem is converting plain English requirements to source code. At the point that that becomes trivial (and doesn't fall into WYSIWYG editors or graphical languages, which already exist and haven't brought about the apocalypse), wouldn't you just ask the computer to make itself smarter over and over? Or ask it to automate any other profession with stupendous ease?
I think specialisation in software will be the first to go, not programming itself.
I expect machine learning software to get very good at finding and fixing software exploits in the next few years. Google used DeepMind's machine learning to optimise their data centre energy use. How long before they put it to work (maybe they are doing it already) on optimising their software, like Chrome and Android, to speed it up or find exploits?
Isn't cyclomatic complexity just about the number of paths through code? It seems like (statically resolved) nested calls, as long as they don't loop or branch themselves, shouldn't increase cyclomatic complexity.
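That matches the definition: McCabe's metric counts linearly independent paths through a function (roughly, decision points plus one), so straight-line calls into helpers add nothing. A made-up Java illustration (all the names here are hypothetical):

```java
public class Cyclomatic {
    // One decision point (the if), so cyclomatic complexity 2,
    // no matter how deeply the straight-line helpers nest.
    static int classify(int n) {
        if (n < 0) return -step1(n);
        return step1(n);
    }

    static int step1(int n) { return step2(n) + 1; } // no branches: adds no paths
    static int step2(int n) { return n * 2; }        // no branches: adds no paths

    public static void main(String[] args) {
        System.out.println(classify(3));  // 7
        System.out.println(classify(-2)); // 3
    }
}
```

If `step2` itself branched or looped, that would add paths wherever it is counted, but plain extraction of straight-line code leaves the number alone.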
I was thinking about the issue of nested code the other day -- isn't it (mostly) an editing environment problem? I mean, if my IDE can detect that a method is pure, couldn't it do a little magic (e.g. provide a different coloured background, some dotted boxes, etc.) and show me the inlined code right there? It doesn't seem wise to architect your codebase around the deficiencies in your tools, since one is (hopefully) going to long outlive the other.
Oh, I like this idea. Imagine your IDE being able to substitute calls to single-line methods with their bodies, for viewing only. Or even being able to refactor them inline, but keep them as private methods?
With Visual Studio, you can visually inline the definition of a function below its call by selecting "Peek Definition", which is close to what you're saying.
You can only do it one at a time, and it's for quick scanning of what the function does, though.
Hmm, wonder if I can coax Sublime into doing that. It's a little closer to IDE-type functionality, which it's pretty weak on (refactoring is terrible for Ruby with it).
That's just the editor doing the compiler's work at edit time, though. I know that Chuck Moore has always advocated extreme early binding, but that really is usually associated with Forth, not with mainstream languages.
I didn't mean that the editor would change what was presented to the compiler, just that it would simplify for the user when expanding the use of helper functions.
> By making your methods shorter, you’re just trading one kind of complexity for another.
Oh, I want to frame this and put it on the wall.
I can't stand code where even the simplest thing is implemented as a giant tree of sub-5-line functions nested 15 deep (and probably, for bonus points, scattered across half a dozen files).
Just to be clear, wouldn't the alternative with a monorepo still require that you go back and forth with multiple teams if the commit is not backwards compatible? It seems like the main complaint you have is that it's difficult to wrangle a number of related pull requests, so perhaps switching to something like gitcolony [1] for code reviews would help.
> Just to be clear, wouldn't the alternative with a monorepo still require that you go back and forth with multiple teams if the commit is not backwards compatible?
No. You just send one diff that changes all the team's code and update everything in lockstep, so at any point in your history, everything is compatible.
Instead of you going back and forth with multiple teams, you're bringing them together to comment on a change in one place. You synchronize on version control history instead of needing to wrangle multiple teams across multiple repositories, and you no longer need to deal with fallout for code change compatibility. You just make the change. Everywhere. In one shot.
You may have to get multiple teams to review your change before being allowed to commit it. And you have to run all their tests. If there is a problem the whole thing will typically get rolled back, which is a drag because then you have to fix the issue, run tests again and get approvals again.
So, in practice, for large-scale changes that affect many teams, we still try to break up large patches into multiple smaller steps, even working in a monorepo.
A single commit is nice for small patches that only affect a handful of teams, though.
Never heard of gitcolony before. Looks interesting, thanks! That would probably solve one issue. Then there is another one.
I try my best to keep the head of the master branch in such a state that it can always be taken into use in all projects. Just last week one branch of one embedded device had a slight API change. Nothing big, but a backwards-incompatible change.
I branched test automation core and made it work with the new API. All tests looked green and things looked nice. We agreed with the embedded device owner that we'll merge that change upwards soon. Soon like in one hour or so. I rushed and merged the test automation core changes to master.
At the same time I was working with another API change with one PC app. That branch was also looking good and those changes were merged upwards in both the test automation core and in the PC app.
Now my test automation core master head was compatible with everything else but one embedded device, the one with the small API change that looked good in all tests. For some reason business did not want that change to go live yet with the device, so now I had changes in my test automation core that made it incompatible with the released embedded device.
Yes, it was my mistake to rush the merge. But because getting those changes upwards took two merges, one in the product itself and one in the test automation core, it was possible for them to get out of sync. If we had used a monolithic repository, it would have been just one merge and such a thing would not have been possible.
Sure, not a huge thing but still an annoyance I could live without.
Tests belong in the project they're testing. Test tooling belongs elsewhere. It's not clear what your setup is, but it sounds like the tests themselves live in another repo. That's bad. It's not bad to have a separate repo for your tooling/test runners. As you've just seen, new tests need to go out simultaneously with new code.
Yes, tests are with the code as are test resources that are product specific. In addition to those there are a lot of resources that are shared across all or most products that sit in the test automation core.
It's more like the pipe operator in ocaml (http://blog.shaynefletcher.org/2013/12/pipelining-with-opera...). The lisp version has the extra advantage that you don't have to repeat it between all the intermediate functions. ((->> 2 (* 100) str count) vs 2 |> (* 100) |> str |> count).
I understand how a lisp implementation would work here to require only the single operator (I'm assuming a fairly simple macro).
Would it not be possible to do something similar in another functional language to take a <pipe function> and apply it sequentially to a list of function calls?
There are no semantic problems with this, but typing will get in the way: you can express it fairly easily if all the functions have the same type (such as Int -> Int); in fact it's just 'foldr ($)'. But it is difficult to type a list of functions such that each member's return value has the same type as the next one's parameter (symbolically, [an-1 -> an, ..., a1 -> a2, a0 -> a1]). It's easier to refer to the composition of such functions, which is why you would see it as 'h . g . f'.
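The same distinction shows up in a mainstream typed language. A sketch in Java using only java.util.function.Function (nothing here is from the thread, just an illustration): same-typed stages can sit in a list and be folded with composition, while a mixed-type pipeline like the Lisp example (->> 2 (* 100) str count) has to be composed stage by stage, because each stage's type differs from the next.

```java
import java.util.List;
import java.util.function.Function;

public class Compose {
    // Homogeneous case: every stage is Function<Integer, Integer>, so the
    // stages can share a list and be reduced into one function. This is the
    // Java analogue of 'foldr ($)' over same-typed functions.
    static final Function<Integer, Integer> PIPELINE =
            List.<Function<Integer, Integer>>of(n -> n + 1, n -> n * 2, n -> n - 3)
                .stream()
                .reduce(Function.identity(), Function::andThen);

    // Heterogeneous case: Integer -> Integer -> String -> Integer. The types
    // change at each step, so no single list type fits; compose pairwise
    // instead, like 'h . g . f'.
    static final Function<Integer, Integer> LISP_PIPE =
            ((Function<Integer, Integer>) n -> n * 100)
                    .andThen(String::valueOf)  // str
                    .andThen(String::length);  // count

    public static void main(String[] args) {
        System.out.println(PIPELINE.apply(10)); // ((10 + 1) * 2) - 3 = 19
        System.out.println(LISP_PIPE.apply(2)); // (count (str (* 100 2))) = 3
    }
}
```

Languages with heterogeneous lists (or GADTs, in Haskell's case) can type the list-of-stages directly, but in plain Java the pairwise composition is the natural encoding.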
From the perspective of a younger dev, this seems like excellent reading for older devs, who tend to enforce their personal style without deferring to the de facto language standards (or PEP 8 for those working in Python). In fact, perhaps everyone could do with downing a half dozen humble pills and optimizing their code for harmonious human interoperability rather than for anything to do with machines :)
As someone who could very easily be on the other side of that code review (and I'm preeeettty sure I'm not in this case?) I feel obliged to at least try to provide a counterpoint :).
So I agree that enormous blobs of unreadable crap are indeed unreadable, and that regardless of how neat and functional your code is, it can still be complete gibberish to most people. That being said, long chains of streams can be broken out quite nicely by using intermediate names:
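Something along these lines (a reconstruction; the Transaction class, its accessors, and the GROCERY constant are guessed from the linked article, not quoted from it):

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Stream;
import static java.util.Comparator.comparing;
import static java.util.stream.Collectors.toList;

public class StreamNaming {
    // Stand-in for the article's Transaction class (its exact shape is a guess).
    record Transaction(int id, int type, int value) {
        static final int GROCERY = 0;
        int getId() { return id; }
        int getType() { return type; }
        Integer getValue() { return value; }
    }

    static List<Integer> groceryIdsByValueDesc(List<Transaction> transactions) {
        // Each stage gets an intermediate name instead of living in one long chain.
        Comparator<Transaction> byValueDescending = comparing(Transaction::getValue).reversed();
        Stream<Transaction> groceries =
                transactions.stream().filter(t -> t.getType() == Transaction.GROCERY);
        Stream<Transaction> sortedGroceries = groceries.sorted(byValueDescending);
        return sortedGroceries.map(Transaction::getId).collect(toList());
    }

    public static void main(String[] args) {
        List<Transaction> sample = List.of(
                new Transaction(1, Transaction.GROCERY, 300),
                new Transaction(2, Transaction.GROCERY, 1000),
                new Transaction(3, 1, 500)); // not a grocery
        System.out.println(groceryIdsByValueDesc(sample)); // [2, 1]
    }
}
```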
vs. the first code block under "Figure 1" in the linked article.
For me at least, it helps keep code from getting too unruly: you only have one 'thing' you can do in a filter, map or sorted call, unlike in a foreach loop where anything can go. So my thesis is this: using streams I can quickly scan over the function/stream names to get the gist of what it's doing, but using foreach loops I need to closely examine each line to have any idea of what's happening :3 (e.g. people love abusing labeled breaks [1] in our codebase, as well as excessively modifying input parameters w/o documentation, so I might be a bit biased against for loops)
I find the code in the linked article much more readable than that. You've introduced a huge amount of noise that makes it hard to see what the actual operations being performed are.
I agree to an extent ... this example is pretty contrived, but when you start getting around ten filter/map/groupby operations in, it gets a little difficult to follow what's supposed to be happening. So typically, my first step towards breaking it out into a method is separating out the individual streams like above. As is mentioned in a cousin comment, it also looks a lot nicer with type inferencing, but alas we are stuck with the verbosity of standard Java 8 for now.
This is a place where local variable type inference really comes in handy for cutting down the noise of the type declarations.
var descendingTransactionsByValue = comparing(Transaction::getValue).reversed();
var groceries = transactions.stream().filter(t -> t.getType() == Transaction.GROCERY);
var sortedGroceries = groceries.sorted(descendingTransactionsByValue);
var transactionIds = sortedGroceries.map(Transaction::getId).collect(toList());
[1]: https://www.gitkraken.com/