Something able to learn or to discover without relying on a 'brute force' (exhaustive exploration of the problem space) approach.
Neural networks are pertinent in a way, I reckon, but I'm not enthusiastic about the fact that using them prevents us from 'explaining how they solve the problem'. The same applies, albeit perhaps to a lesser extent, to Bayesian methods and the like.
In Bayesian optimization, the next guess is chosen according to a non-brute-force equation that takes all previous information into account.
Does that appear more intelligent than repeatedly taking small steps along (roughly) the first-order gradient?
The second seems more similar to how humans learn, while the former is more similar to how we rationalize after learning. Still, the former is just repeated, relatively simple applied math. I'm not sure either case jumps out as qualitatively different in a meaningful way when it comes to intelligence.
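To make the contrast concrete, here is a minimal sketch of both loops on a toy objective (the function f(x) = (x-2)^2, the initial points, and the confidence-bound rule are all my own illustrative assumptions, not anything from the discussion above): gradient descent repeats a small local step along the gradient, while the Bayesian-optimization-style loop refits a Gaussian-process surrogate to every evaluation so far and picks the next guess from that global model.

```python
import numpy as np

# Toy objective we want to minimize (assumed for illustration).
def f(x):
    return (x - 2.0) ** 2

# --- "Repeated slight shifts toward the first-order gradient" ---
def gradient_descent(x0=0.0, lr=0.1, steps=50):
    x = x0
    for _ in range(steps):
        grad = 2.0 * (x - 2.0)   # analytic first-order gradient of f
        x -= lr * grad           # small local step downhill
    return x

# --- Bayesian-optimization-style loop with a GP surrogate ---
def rbf(a, b, ls=1.0):
    # Squared-exponential kernel between two 1-D point sets.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def bayes_opt(steps=15, noise=1e-4):
    xs = np.array([0.0, 4.0])           # two initial evaluations
    ys = f(xs)
    grid = np.linspace(-1.0, 5.0, 200)  # candidate next guesses
    for _ in range(steps):
        # GP posterior conditioned on ALL evaluations so far.
        K = rbf(xs, xs) + noise * np.eye(len(xs))
        Ks = rbf(grid, xs)
        mu = Ks @ np.linalg.solve(K, ys)                       # posterior mean
        var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
        lcb = mu - 2.0 * np.sqrt(np.maximum(var, 0.0))         # lower confidence bound
        x_next = grid[np.argmin(lcb)]   # next guess uses the whole history
        xs = np.append(xs, x_next)
        ys = np.append(ys, f(x_next))
    return xs[np.argmin(ys)]            # best point actually evaluated

print(gradient_descent())  # converges near 2.0
print(bayes_opt())         # also lands near 2.0, via very different reasoning
```

Both loops end up near the minimum at x = 2, which is rather the point: one gets there by many tiny local shifts, the other by repeatedly consulting a model of everything it has seen, yet each is still just iterated, fairly simple applied math.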