Don't see a really important one in my opinion:
Refactor legacy code, don't rewrite it. All that cruft you see is bug fixes.
Because rewriting old complex code is way more time consuming than you think it'll be. You have to reimplement not only the same features, but also all the corner cases your system ran into in the past.
Have seen this myself. A large team spent an entire year of wasted effort on a clean rewrite of a key system (the shopping cart at a high-volume website) that never worked...
...although, in the age of AI, wonder if a rewrite would be easier than in the past. Still, guessing even then it'd be better to have the AI refactor the code first as a basis for reworking it, as opposed to the AI doing a clean rewrite from scratch.
Ah, think there is overlap, but still not the same in my opinion.
Having read this just now, the second-system effect seems to be more about not getting overly ambitious in the redesign. What the guideline I mentioned is saying is "don't rewrite, refactor."
As you probably know, there's a tendency for new developers joining a team to hate the old legacy code - one of the toughest skills is being able to read someone else's code - so they ask their managers to throw it away and rewrite it. This is rarely worth it and often results in a lot of time spent recreating fixes for old bugs and corner cases. It's a much better use of time to try refactoring the existing code first.
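To make the point concrete, here's a tiny made-up Java sketch of what that "cruft" often is - an odd-looking line that's actually a hard-won corner-case fix a clean rewrite would silently drop (the shopping-cart scenario and the business rule are hypothetical):

```java
// Hypothetical legacy cart code: the "weird" floor-at-zero line below looks
// like cruft, but it encodes a corner case learned from production.
public class CartTotals {

    static long totalCents(long[] lineItemCents) {
        long total = 0;
        for (long cents : lineItemCents) {
            total += cents;
        }
        // Corner case learned the hard way: historical orders can carry
        // negative refund/adjustment line items that push the sum below
        // zero; the (assumed) business rule is to floor the total at zero.
        // A naive rewrite that just sums prices regresses on those orders.
        return Math.max(0, total);
    }

    public static void main(String[] args) {
        System.out.println(totalCents(new long[] {1000, 250, -2000})); // floored at 0
        System.out.println(totalCents(new long[] {1000, 250}));        // 1250
    }
}
```

Multiply that one line by a decade of incidents and you get the real cost of a clean rewrite.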
Although, I can see why you mentioned it, given the initial example I gave (the rewrite of the shopping cart), which is also covered by the second-system effect. Yeah, thinking back, have seen this too. Overdesign can get really out of hand, and it becomes really annoying to wade through all that unnecessary complexity whenever you need to make a change.
Hmm, it seems pretty clear that the climate is getting hotter, so it seems natural for some people to be worried about what will happen to the planet in a few decades (me, for one).
And you may be right, it may not be that big a deal and we may just be alarmists, but it seems like we currently have the tools to slow it down greatly. Why not be on the safe side and use them?
... but to be honest, guessing my opinion won't sway you in any way, still thought I'd try. Thanks!
The value of plowing ahead and using more energy is worth far more than making sure Florida doesn’t lose some coastline.
The presumptions that annoy me with the alarmists are that they completely negate human agency and ingenuity, and that they ignore the economic cost of many of the proposed plans.
Natural gas is far better than coal and should be encouraged rather than condemned. Nuclear power is best of all, is the cleanest and safest energy, and yet is hardly ever the first choice of the alarmists.
I’d rather spend double the energy unlocking breakthroughs in science with the help of AI, and address the problems when they come. I don’t go out of my way to lower my “carbon footprint”, but I also don’t just do things that are wasteful and deliberately harmful to the environment.
AI making us forget how to think for ourselves is a far bigger risk to mankind than climate change. Thanks.
Agree that you need to balance costs with benefits, but nowadays, solar and wind are often the cheapest options (southern states or states with lots of wind). And nuclear is an option that even some staunch environmentalists support these days.
Yeah, don't think most people who support battling climate change are extremists. We just believe it's a big problem, and, to put it in monetary terms, dealing with major changes in climate could cost the world tens of trillions of dollars by some scientists' predictions. Yeah, it's like any problem: doing relatively small fixes now could save enormous amounts of time and money later down the line. Seems like it would probably be a good use of our efforts.
I probably just overreact and judge too quickly certain statements from my experiences of people who act like I’m destroying the earth because I have more than 3 kids.
I appreciate reasonable people though, and I should not assume everyone is a crazy alarmist just because they have a concern, so I apologize.
... and not just giving you lip service, but I do find the far left to have gone too far themselves (am a moderate independent myself). Their assuredness that everything they believe is the only correct way to think is frustrating (they are often the least understanding). Yeah, it seems if you step out of line and say anything against their beliefs, you're part of the far right.
But, feels like things are shifting back to the middle for various reasons. Think this is a good trend.
Actually, really like Maven; its focus on building in a standard way is fantastic (but agreed, it looks messy, with all its XML and necessary versioning).
Just wrote a comment about how I've always liked Maven. It's perfect for small and medium-sized projects, and for service-oriented architectures/microservices - it seems like it was designed for this! Its main goal is to help you figure out the libraries you're using and build them in a standard way.
It isn't great for really strange and odd builds, but in that case, you should probably be breaking your project down into smaller components (each with its own Maven file) anyway.
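For what it's worth, a rough sketch of what that "smaller components, each with its own Maven file" setup looks like - a parent POM with hypothetical module names, pinning shared library versions in one place (the group/artifact names here are made up for illustration):

```xml
<!-- Minimal multi-module parent POM sketch; module and group names are
     hypothetical. Each child module gets its own pom.xml and builds the
     same standard way. -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>shop-parent</artifactId>
  <version>1.0.0</version>
  <packaging>pom</packaging>

  <modules>
    <module>cart-service</module>
    <module>pricing-service</module>
  </modules>

  <dependencyManagement>
    <dependencies>
      <!-- Pin library versions once here so child modules stay consistent -->
      <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
        <version>2.17.1</version>
      </dependency>
    </dependencies>
  </dependencyManagement>
</project>
```

This is also where the "necessary versioning" messiness mentioned above gets at least centralized, if not eliminated.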
Actually, I like Maven. It's perfect for code that is broken into medium-sized projects, which makes it great for service-oriented architectures (would have said microservices here instead, but think we're learning that breaking our services down too finely is generally not a good idea).
Yeah, it seems like Maven is designed to build just one project with relatively little build code (although figuring out the versioning of the libs used in your build can get tricky, but guessing this is how it is in most languages). It's still one of my favorite build tools for many situations.
Have always really liked Java, but yeah, Spring overall has been terrible for the language. Autowiring is against the principles of a typesafe programming language - don't make me guess what object is going to be attached to a reference. And if you do, at least figure out what the linked object is at compile time, not at run time.
Spring autowiring makes Java as a whole seem unnecessarily complex. Think it should be highly discouraged in the language (unless it is revamped and made part of the compiler).
... not sure how this applies to the ObjectMapper, as I haven't programmed in Java in a while.
... and my gripe doesn't apply to Spring Boot though :)
> Autowiring is against the principles of a typesafe programming language
Constructor autowiring is an application of the inversion of control and dependency injection patterns. If there were no autowiring, you could wire the components together just the same with normal code calling constructors in the correct order. Spring just finds the components and does the construction for you.
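To illustrate (with made-up class names), here is that same wiring written as plain constructor calls in dependency order - exactly what Spring would otherwise do for you at startup:

```java
// Sketch of manual constructor wiring; class names are hypothetical.
interface PriceSource {
    long priceCents(String sku);
}

class FixedPriceSource implements PriceSource {
    public long priceCents(String sku) { return 999; }
}

class CartService {
    private final PriceSource prices;

    // With Spring, this constructor would be the autowired injection point;
    // here we simply call it ourselves.
    CartService(PriceSource prices) { this.prices = prices; }

    long total(String sku, int qty) { return prices.priceCents(sku) * qty; }
}

public class ManualWiring {
    public static void main(String[] args) {
        // main() acts as the "composition root": construct the leaves first,
        // then the components that depend on them, in order.
        PriceSource prices = new FixedPriceSource();
        CartService cart = new CartService(prices);
        System.out.println(cart.total("SKU-1", 3)); // 999 * 3 = 2997
    }
}
```

The type of every dependency is visible at the call site and checked at compile time, which is the property the comment above argues autowiring gives up.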
Yeah, for me at least, personally believe inversion of control should be used more surgically instead of blanketing the system with it. On the one hand, freeing your application layer from direct dependencies on lower-level objects conceptually seems like a good idea, but think in practice this is hardly ever helpful, especially when used for every dependency.
At least from my experience, it seems like we don't change the objects we use that often - once an object is set on a reference, the very large majority of them never change.
And because of this, seems like for most object dependencies we should just new them directly. If later on we do need to change one, then at that time we can refactor the code to use more abstraction (like inversion of control) to break the direct dependency, but only for the code that needs it (or if there is an important situation where having a direct dependency could be highly problematic in the future, like to a DB provider).
It's like the performance optimization problem. One guideline that is often quoted is that it's best not to over-optimize the performance of your code until you can actually test it in real-world cases, because you very often optimize things that aren't the bottleneck. Same with the overuse of inversion of control. Spring makes it so we're using IoC everywhere, but it's just adding unnecessary complexity.
Think that if inversion of control is used, it should be used mainly at a higher level, for components, instead of on every class like often happens. But even for components, think you should be careful when deciding to do so.
... and agreed, you could just use the factory pattern instead of Spring.
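A rough sketch of that factory-pattern alternative (class and channel names are hypothetical): one hand-written factory is the only place that knows the concrete types, so call sites stay decoupled without a framework:

```java
// Hand-rolled factory as a lightweight alternative to a DI container.
// All names here are made up for illustration.
interface Notifier {
    String send(String msg);
}

class EmailNotifier implements Notifier {
    public String send(String msg) { return "email:" + msg; }
}

class SmsNotifier implements Notifier {
    public String send(String msg) { return "sms:" + msg; }
}

class NotifierFactory {
    // The single place that knows about concrete implementations;
    // swap or add one here without touching any call site.
    static Notifier create(String channel) {
        return channel.equals("sms") ? new SmsNotifier() : new EmailNotifier();
    }
}

public class FactoryDemo {
    public static void main(String[] args) {
        Notifier n = NotifierFactory.create("sms");
        System.out.println(n.send("hi")); // sms:hi
    }
}
```

This buys the decoupling where you actually want it, while keeping construction explicit and debuggable.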
For large applications, having the implementation (or multiple implementations) of certain functionality decoupled from the code using it improves the maintainability and configurability of the application. That is where inversion of control helps. And the instantiation, scoping, dependency-ordering and cleanup code to manage all of that is not useful to write yourself. Any dependency injection framework will work, although Spring is widely used and has many integrations.
Yeah, I get the idea: abstractions allow decoupling. But think it should be used in a thoughtful way - there is a quote from the original Design Patterns book that said something like a carefully considered use of design patterns should make the system easier to work with, or something like that (sorry, don't have it on hand).
We can go back and forth on this, so will just say, in my opinion, Spring autowiring overall doesn't provide enough benefit versus its downsides, which to me are: increased complexity, and it doesn't work well enough (it should be easier to debug autowiring problems, for one).
You seem very knowledgeable about design, and, of course, you're entitled to your opinion, so seems like we'll probably just have to disagree on this:)
For me at least, being statically typed is overall a strength. Yeah, it's not that much work to include types when declaring vars, but the benefit is you don't have the problems with types in expressions that you do with dynamically typed languages (JavaScript for example, which is one of the reasons why TypeScript was created).
... although, Java does support some dynamic typing now, as you no longer need to write the type when instantiating objects: use the "var" keyword and the compiler will infer the type from the object being assigned to it (although this is just syntactic sugar).
`var` has nothing to do with dynamic typing. It is still statically (compile time) typed, so the type can not change at runtime. Compare that to JavaScript where you could easily switch the type of a variable from Number to String.
Agreed, it's not (as mentioned, it's just syntactic sugar). Still, how often is changing the type of a var needed? (Besides minor casting issues.)
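A small sketch of the distinction being discussed: with `var`, the type is inferred once at compile time and the reference can't change type afterwards, unlike a JavaScript variable:

```java
import java.util.ArrayList;

// `var` in Java is compile-time type inference, not dynamic typing:
// the inferred type is fixed and enforced by the compiler.
public class VarDemo {

    static String demo() {
        var count = 42;                      // inferred as int
        var names = new ArrayList<String>(); // inferred as ArrayList<String>
        names.add("maven");
        // count = "forty-two";   // would NOT compile: incompatible types
        // names = 7;             // would NOT compile either
        return count + ":" + names.get(0);
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 42:maven
    }
}
```

Contrast with JavaScript, where `let x = 42; x = "forty-two";` is perfectly legal, since the type lives with the value, not the variable.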
And not saying that dynamic typing doesn't have a place - I really like working in Python - it's just that for more complicated code, prefer statically typed, as it leads to fewer problems with your expressions. To each their own.
Am not 100% sure what's going on and why everyone is ragging on it, but to me, DLSS 5 clearly improves the graphics most of the time. Yeah, almost all the faces look more real, with more realistic skin and shadows, instead of looking like those CGI faces with poor detail from 12 years ago.
Personally, think it's just people freaking out that it's being improved by AI, making it part of the "AI slop" trend. Think if they had done this all with no AI and just polygons, it'd be hailed as a large step forward in graphics.
... and btw, am just as freaked out about AI taking over the creative fields as a lot of others (am a musician myself), but have to try to be objective, and in my opinion, DLSS 5 is impressive.
They don't look real; the lighting is terrible. There is lighting that would suggest two light sources on one part and lighting that would suggest one light source on other parts. It's jarring.
Took another look. You're entitled to your opinion, but, yeah, am not seeing the two-light-source problem you mentioned, at least not in the screenshots I looked at. And for me at least, they look more realistic than with DLSS 5 turned off. But maybe I'm not seeing something you're seeing.
The specular highlights on faces definitely look wrong to me, though I struggle to describe why. Shadows and diffuse lighting are a totally different story, though. Look at how it completely deletes the shadow of the steeple on the right-hand side[1], or how it completely eliminates the shadows on this guy's face and jacket. Overcast lighting is an easy cheat for hyper-realism[3], and almost every single scene shown has softened or absent shadows and more diffuse light.
As an aside, I'm starting to wonder if they are modifying engine settings when switching it on and off. There's clearly some amount of accumulation it has to do, and it's impossible to frame-by-frame a video of a monitor, but in [1] the first frame snaps from a dynamic shadow of the steeple to a generic small blob shadow, which then gets entirely eliminated on the next frame.
Hmm, I do see the shadows being removed in the links you have, and have noticed that the backgrounds do look like they're lit differently from the original, but was wondering if that is just because the AI lights things differently? They did say that these AI effects are done with the actual 3D assets themselves and are not just some type of filter run over the existing images, so could see how the lighting could change quite a bit.
Yeah, maybe the fact that they are lit differently from the original is turning people off. Understandable. For me, still find it impressive, and think the level of detail in the faces and clothing is a full step up in capability.
> they did say that these AI effects are done with the actual 3D assets themselves and are not just some type of filter run over the existing images
That was essentially just Jensen Huang lying during his Q&A. DLSS 5 uses the same input data as DLSS <5, which is just screen-space color data and motion vectors. From NVIDIA's announcement: "DLSS 5 takes a game's color and motion vectors for each frame as input, and uses an AI model to infuse the scene with photoreal lighting and materials that are anchored to source 3D content and consistent from frame to frame."
I agree, every shot has something to like, especially in fine details, but I question the feasibility of fixing the issues while running the model on a consumer GPU in realtime. Getting similar improvements without falling back to diffuse lighting would require the model to infer a huge amount of information about off-screen light sources and objects. I'm much more excited about putting my tensor cores and vram towards neural textures since they can actually add detail at the geometry level.
Hmm, actually heard on a podcast that this is working with the 3D assets, and even the statement you quoted says "anchored to source 3D content." Although that could mean a lot of things and it's still early on, so it could still just be a pass at the end by an AI model. Yeah, I'll stay on the fence until more details are released - and should mention, am no graphics expert, and am only giving my opinion as a fan of good graphics on what the results look like :)
You have a point - cheap drones have changed warfare - but you might be simplifying the issue. As some warfare experts online have discussed, it isn't that cheap drones are the only weapon used in Ukraine (or warfare in general); they are one option in a vast array of options based on the situation (although, agreed, they are taking on a much bigger significance). Look at the war in Iran. They ran a pretty standard playbook and used stealth jets and cruise missiles to surgically take out air defenses in order to gain air dominance. This would be very difficult with just cheap drones.
... but do agree that cheap weapons are still becoming extremely important. Iran is terrorizing the Middle East and the Strait of Hormuz with cheap drones, so they are definitely important. Yeah, in a war of attrition, low-cost, high-volume options are clearly very important.
It's fairly important to distinguish what kind of drones we are talking about [1]. Iran is using Group 3 drones.
The GP is confusing Iran's neighbours not being ready to counter Group 3 drones with the drones being inevitably effective. These drones are by necessity large and slow, because they need a lot of energy and aerodynamic efficiency to get their range. That means they are vulnerable to cheap counters, which Ukraine is demonstrating very convincingly: even though Russia is now launching 800+-drone raids, the vast majority are shot down.
Even when those drones do get through, they are extremely inefficient. It's not just that they can't carry a heavy or sophisticated payload (more complex warheads are more effective, but way more expensive), the extremely high attrition ratio forces the enemy to try to target way too many drones per aimpoint. Instead of serving a few hundred aimpoints, the 800-strong raid is forced to concentrate on just a few, otherwise most aimpoints will get no hits whatsoever.
But also the only reason 800-strong raids can even be launched is Ukraine lacking the capability to interdict the launches. 800 group 3 drones have an enormous logistics and manufacturing tail, which a Western force would have no problem destroying way before the raid can be launched. For example, Iran in its current state can't launch such raids. So in practice Iran's neighbours would need to intercept only a handful of drones, which is hardly an insurmountable challenge.
GPS denial is a mixed bag. After about two years of efforts and counter-efforts, the Russians seemingly managed to build GPS receivers that are pretty resistant to jamming.