Google no longer understands how its “deep learning” decision-making computer systems have made themselves so good at recognizing things in photos
The claims were made at the Machine Learning Conference in San Francisco on Friday by Google software engineer Quoc V. Le in a talk in which he outlined some of the ways the content-slurper is putting “deep learning” systems to work.
In practice, this means that for some tasks Google's researchers can no longer explain exactly how the system has learned to spot certain objects: the software appears to develop its own internal representations independently of its creators, and its decision-making process is inscrutable to them. This "thinking" operates within an extremely narrow remit, but it is demonstrably effective and independently verifiable.
The earlier concept of a universe made up of physical particles interacting according to fixed laws is no longer tenable. It is implicit in present findings that action rather than matter is basic. This is good news, for it is no longer appropriate to think of the universe as a gradually subsiding agitation of billiard balls. The universe, far from being a desert of inert particles, is a theatre of increasingly complex organization, a stage for development in which man has a definite place, without any upper limit to his evolution.
Arthur M. Young (via inthenoosphere)