Resources on the true progress, current state and future of AI, Machine Learning and Deep Learning

Is Artificial Intelligence (AI) over-hyped?

Is Deep Learning over-hyped?

Is Machine Learning over-hyped?

Deep learning should be treated as just another tool in the Machine Learning and AI toolbox. Below is a compilation of resources intended to give a truer and fairer picture of the progress, current state and future of Artificial Intelligence, Machine Learning and Deep Learning:

Papers

  • Anh Nguyen, Jason Yosinski and Jeff Clune, Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images, 2015
  • Jason Jo and Yoshua Bengio, Measuring the tendency of CNNs to Learn Surface Statistical Regularities, 2017
  • Hossein Hosseini, Baicen Xiao, Mayoore Jaiswal, Radha Poovendran, On the Limitation of Convolutional Neural Networks in Recognizing Negative Images, 2017
  • Gary Marcus, Deep Learning: A Critical Appraisal, 2018
  • Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt and Vaishaal Shankar, Do CIFAR-10 Classifiers Generalize to CIFAR-10?, 2018
    • [quote] Machine learning is currently dominated by largely experimental work focused on improvements in a few key tasks. However, the impressive accuracy numbers of the best performing models are questionable because the same test sets have been used to select these models for multiple years now. To understand the danger of overfitting, we measure the accuracy of CIFAR-10 classifiers by creating a new test set of truly unseen images. Although we ensure that the new test set is as close to the original data distribution as possible, we find a large drop in accuracy (4% to 10%) for a broad range of deep learning models. Yet more recent models with higher original accuracy show a smaller drop and better overall performance, indicating that this drop is likely not due to overfitting based on adaptivity. Instead, we view our results as evidence that current accuracy numbers are brittle and susceptible to even minute natural variations in the data distribution. [unquote]
  • Hossein Hosseini, Radha Poovendran, Semantic Adversarial Examples, 2018 (a minimal color-shift sketch follows this list)
    • [quote] Deep neural networks are known to be vulnerable to adversarial examples, i.e., images that are maliciously perturbed to fool the model. …. In this paper, we introduce a new class of adversarial examples, namely “Semantic Adversarial Examples,” as images that are arbitrarily perturbed to fool the model, but in such a way that the modified image semantically represents the same object as the original image. … Our experimental results on CIFAR10 dataset show that the accuracy of VGG16 network on adversarial color-shifted images is 5.7%.[unquote]
  • Logan Engstrom, Brandon Tran, Dimitris Tsipras, Ludwig Schmidt and Aleksander Madry, A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations, 2018 (a black-box rotation/translation search is sketched after this list)
    • [quote] We show that simple transformations, namely translations and rotations alone, are sufficient to fool neural network-based vision models on a significant fraction of inputs. This is in sharp contrast to previous work that relied on more complicated optimization approaches that are unlikely to appear outside of a truly adversarial setting. Moreover, fooling rotations and translations are easy to find and require only a few black-box queries to the target model. Overall, our findings emphasize the need for designing robust classifiers even in natural, benign contexts. [unquote]
  • Rosanne Liu, Joel Lehman, Piero Molino, Felipe Petroski Such, Eric Frank, Alex Sergeev, Jason Yosinski, An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution, 2018 (a CoordConv layer is sketched after this list)
    • [quote] Few ideas have enjoyed as large an impact on deep learning as convolution. For any problem involving pixels or spatial representations, common intuition holds that convolutional neural networks may be appropriate. In this paper we show a striking counterexample to this intuition via the seemingly trivial coordinate transform problem, which simply requires learning a mapping between coordinates in (x,y) Cartesian space and one-hot pixel space. Although convolutional networks would seem appropriate for this task, we show that they fail spectacularly. We demonstrate and carefully analyze the failure first on a toy problem, at which point a simple fix becomes obvious. We call this solution CoordConv, which works by giving convolution access to its own input coordinates through the use of extra coordinate channels. Without sacrificing the computational and parametric efficiency of ordinary convolution, CoordConv allows networks to learn either perfect translation invariance or varying degrees of translation dependence, as required by the task. CoordConv solves the coordinate transform problem with perfect generalization and 150 times faster with 10–100 times fewer parameters than convolution. This stark contrast raises the question: to what extent has this inability of convolution persisted insidiously inside other tasks, subtly hampering performance from within? A complete answer to this question will require further investigation, but we show preliminary evidence that swapping convolution for CoordConv can improve models on a diverse set of tasks. Using CoordConv in a GAN produced less mode collapse as the transform between high-level spatial latents and pixels becomes easier to learn. A Faster R-CNN detection model trained on MNIST detection showed 24% better IOU when using CoordConv, and in the RL domain agents playing Atari games benefit significantly from the use of CoordConv layers. [unquote]
  • Jiawei Su, Danilo Vasconcellos Vargas and Sakurai Kouichi, One pixel attack for fooling deep neural networks, 2018 (a differential-evolution sketch follows this list)
    • [quote] Recent research has revealed that the output of Deep Neural Networks (DNN) can be easily altered by adding relatively small perturbations to the input vector. In this paper, we analyze an attack in an extremely limited scenario where only one pixel can be modified. For that we propose a novel method for generating one-pixel adversarial perturbations based on differential evolution. It requires less adversarial information and can fool more types of networks. The results show that 70.97% of the natural images can be perturbed to at least one target class by modifying just one pixel with 97.47% confidence on average. Thus, the proposed attack explores a different take on adversarial machine learning in an extreme limited scenario, showing that current DNNs are also vulnerable to such low dimension attacks. [unquote]
  • Aharon Azulay and Yair Weiss, Why do deep convolutional networks generalize so poorly to small image transformations?, 2018 (the rotation/translation sketch after this list also applies here)
    • [quote] Deep convolutional network architectures are often assumed to guarantee generalization for small image translations and deformations. In this paper we show that modern CNNs (VGG16, ResNet50, and InceptionResNetV2) can drastically change their output when an image is translated in the image plane by a few pixels, and that this failure of generalization also happens with other realistic small image transformations. Furthermore, the deeper the network the more we see these failures to generalize. We show that these failures are related to the fact that the architecture of modern CNNs ignores the classical sampling theorem so that generalization is not guaranteed. We also show that biases in the statistics of commonly used image datasets makes it unlikely that CNNs will learn to be invariant to these transformations. Taken together our results suggest that the performance of CNNs in object recognition falls far short of the generalization capabilities of humans. [unquote]
  • Matthew Ricci, Junkyung Kim, Thomas Serre, Same-different problems strain convolutional neural networks, 2018.
    • [quote] The robust and efficient recognition of visual relations in images is a hallmark of biological vision. We argue that, despite recent progress in visual recognition, modern machine vision algorithms are severely limited in their ability to learn visual relations. Through controlled experiments, we demonstrate that visual-relation problems strain convolutional neural networks (CNNs). The networks eventually break altogether when rote memorization becomes impossible, as when intra-class variability exceeds network capacity. Motivated by the comparable success of biological vision, we argue that feedback mechanisms including attention and perceptual grouping may be the key computational components underlying abstract visual reasoning. [unquote]
  • Yuezun Li, Xian Bian, Siwei Lyu, Attacking Object Detectors via Imperceptible Patches on Background, 2018
    • [quote] Deep neural networks have been proven vulnerable against adversarial perturbations. Recent works succeeded to generate adversarial perturbations on either the entire image or on the target of interests to corrupt object detectors. In this paper, we investigate the vulnerability of object detectors from a new perspective — adding minimal perturbations on small background patches outside of targets to fail the detection results. Our work focuses on attacking the common component in the state-of-the-art detectors (e.g. Faster R-CNN), Region Proposal Networks (RPNs). As the receptive fields generated by RPN is often larger than the proposals themselves, we propose a novel method to generate background perturbation patches, and show that the perturbations solely outside of the targets can severely damage the performance of multiple types of detectors by simultaneously decreasing the true positives and increasing the false positives. We demonstrate the efficacy of our method on 5 different state-of-the-art object detectors on MS COCO 2014 dataset. [unquote]
  • Amir Rosenfeld, Richard Zemel, John K. Tsotsos, The Elephant in the Room, 2018
    • [quote] We showcase a family of common failures of state-of-the art object detectors. These are obtained by replacing image sub-regions by another sub-image that contains a trained object. We call this “object transplanting”. Modifying an image in this manner is shown to have a non-local impact on object detection. Slight changes in object position can affect its identity according to an object detector as well as that of other objects in the image. We provide some analysis and suggest possible reasons for the reported phenomena. [unquote]
  • Gary Marcus, Innateness, AlphaZero, and Artificial Intelligence, 2018
    • [quote] The concept of innateness is rarely discussed in the context of artificial intelligence. When it is discussed, or hinted at, it is often the context of trying to reduce the amount of innate machinery in a given system. In this paper, I consider as a test case a recent series of papers by Silver et al (Silver et al., 2017a) on AlphaGo and its successors that have been presented as an argument that a “even in the most challenging of domains: it is possible to train to superhuman level, without human examples or guidance”, “starting tabula rasa.” I argue that these claims are overstated, for multiple reasons. I close by arguing that artificial intelligence needs greater attention to innateness, and I point to some proposals about what that innateness might look like. [unquote]
  • Michael A. Alcorn, Qi Li, Zhitao Gong, Chengfei Wang, Long Mai, Wei-Shinn Ku, Anh Nguyen, Strike (with) a Pose: Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects, 2018
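
The semantic color-shift attack of Hosseini and Poovendran is simple enough to sketch directly. The snippet below is a minimal illustration, not the authors' code: it rotates the hue channel of an image and asks a classifier for its label after each rotation. The classifier is assumed to be wrapped as a function predict(image) that returns a label (in the paper it is a CIFAR-10 VGG16, which is not reproduced here), so the demo at the bottom plugs in a trivial stand-in just so the sketch runs end to end.

    import numpy as np
    from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

    def hue_shift(image, shift):
        """Rotate the hue channel of an RGB image (values in [0, 1]) by `shift` in [0, 1)."""
        hsv = rgb_to_hsv(image)
        hsv[..., 0] = (hsv[..., 0] + shift) % 1.0
        return hsv_to_rgb(hsv)

    def find_color_shifted_adversarial(image, label, predict, n_shifts=36):
        """Search over hue rotations for one that changes the classifier's prediction
        while leaving the depicted object unchanged to a human observer."""
        for shift in np.linspace(0.0, 1.0, n_shifts, endpoint=False)[1:]:
            candidate = hue_shift(image, shift)
            if predict(candidate) != label:
                return candidate, shift
        return None, None

    if __name__ == "__main__":
        # Stand-in image and "classifier" so the sketch is runnable;
        # replace both with a real image and a trained network.
        rng = np.random.default_rng(0)
        image = rng.random((32, 32, 3))
        predict = lambda x: int(np.argmax(x.mean(axis=(0, 1))))  # placeholder, not a real model
        adversarial, shift = find_color_shifted_adversarial(image, predict(image), predict)
        print("hue shift that changed the label:", shift)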
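
The rotation-and-translation result of Engstrom et al. (and, in the same spirit, the small-translation sensitivity reported by Azulay and Weiss) can be probed with nothing more than a random black-box search. The sketch below is an illustration under assumptions rather than the papers' experimental protocol: predict stands for any image classifier that returns a label, and the degree/pixel ranges are placeholders.

    import numpy as np
    from scipy.ndimage import rotate, shift

    def spatial_attack(image, label, predict, max_degrees=30, max_pixels=3, trials=200, seed=0):
        """Randomly search small rotations and translations for one that flips the
        classifier's prediction; only black-box queries to `predict` are needed."""
        rng = np.random.default_rng(seed)
        for _ in range(trials):
            angle = rng.uniform(-max_degrees, max_degrees)
            dx, dy = rng.integers(-max_pixels, max_pixels + 1, size=2)
            candidate = rotate(image, angle, axes=(0, 1), reshape=False, mode="nearest")
            candidate = shift(candidate, (dy, dx, 0), mode="nearest")
            if predict(candidate) != label:
                return candidate, angle, (int(dx), int(dy))
        return None, None, None

    if __name__ == "__main__":
        # Toy image and placeholder classifier; swap in a real model to reproduce the effect.
        image = np.zeros((32, 32, 3))
        image[8:24, 8:24, :] = 1.0
        predict = lambda x: int(x[:16].sum() > x[16:].sum())  # placeholder, not a real model
        _, angle, offset = spatial_attack(image, predict(image), predict)
        print("fooling rotation/translation:", angle, offset)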
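
The CoordConv fix quoted above can also be shown directly: concatenate two extra channels holding each pixel's normalized x and y coordinates before an ordinary convolution, and let the network decide whether to use them. The PyTorch sketch below is an illustrative re-implementation of that idea, not the authors' released code; the layer sizes in the demo are arbitrary.

    import torch
    import torch.nn as nn

    class CoordConv2d(nn.Module):
        """An ordinary 2-D convolution that also sees two extra input channels
        containing the x and y coordinates of every pixel, scaled to [-1, 1]."""

        def __init__(self, in_channels, out_channels, **conv_kwargs):
            super().__init__()
            self.conv = nn.Conv2d(in_channels + 2, out_channels, **conv_kwargs)

        def forward(self, x):
            b, _, h, w = x.shape
            ys = torch.linspace(-1.0, 1.0, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
            xs = torch.linspace(-1.0, 1.0, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
            return self.conv(torch.cat([x, xs, ys], dim=1))

    if __name__ == "__main__":
        layer = CoordConv2d(3, 8, kernel_size=3, padding=1)
        out = layer(torch.randn(2, 3, 32, 32))
        print(out.shape)  # torch.Size([2, 8, 32, 32])

Because the coordinate channels are just extra inputs, the layer can learn either to ignore them (recovering ordinary translation-invariant convolution) or to exploit them, which is how the paper's coordinate-transform task becomes learnable.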
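
Finally, the one-pixel attack of Su et al. can be sketched with SciPy's stock differential-evolution optimizer. The code below is a simplified illustration, not the authors' implementation: a candidate solution is a single pixel's position and RGB color, and the objective drives down the model's confidence in the true class. predict_proba, a function returning a probability vector, is an assumed stand-in for the trained network, and the demo's linear-softmax "model" exists only so the sketch runs end to end.

    import numpy as np
    from scipy.optimize import differential_evolution

    def one_pixel_attack(image, label, predict_proba, maxiter=30, popsize=20, seed=0):
        """Use differential evolution to find one pixel (position + RGB color) whose
        modification minimizes the classifier's confidence in the true label."""
        h, w, _ = image.shape

        def apply_pixel(z):
            x, y, r, g, b = z
            perturbed = image.copy()
            perturbed[int(y), int(x)] = (r, g, b)
            return perturbed

        def confidence_in_true_label(z):
            return predict_proba(apply_pixel(z))[label]

        bounds = [(0, w - 1), (0, h - 1), (0, 1), (0, 1), (0, 1)]
        result = differential_evolution(confidence_in_true_label, bounds,
                                        maxiter=maxiter, popsize=popsize, seed=seed)
        adversarial = apply_pixel(result.x)
        return adversarial, int(np.argmax(predict_proba(adversarial)))

    if __name__ == "__main__":
        # Placeholder linear-softmax "model" so the sketch runs; replace with a real DNN.
        rng = np.random.default_rng(0)
        image = rng.random((32, 32, 3))
        weights = rng.normal(size=(image.size, 10))

        def predict_proba(x):
            logits = x.ravel() @ weights / 5.0
            exp = np.exp(logits - logits.max())
            return exp / exp.sum()

        label = int(np.argmax(predict_proba(image)))
        adversarial, new_label = one_pixel_attack(image, label, predict_proba)
        print("label before/after one-pixel change:", label, new_label)
        print("true-label confidence before/after:",
              round(float(predict_proba(image)[label]), 4),
              round(float(predict_proba(adversarial)[label]), 4))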

News, reports, etc.

  • AI Company Accused of Using Humans to Fake Its AI
    • [quote] On Friday, iFlytek was hit with accusations that it hired humans to fake its simultaneous interpretation tools, which are supposedly powered by AI. … In an open letter posted on Quora-like Q&A platform Zhihu, interpreter Bell Wang claimed he was one of a team of simultaneous interpreters who helped translate the 2018 International Forum on Innovation and Emerging Industries Development on Thursday. The forum claimed to use iFlytek’s automated interpretation service. … While a Japanese professor spoke in English at the conference on Thursday morning, a screen behind him showed both an English transcription of what he was saying, and what appeared to be a simultaneous translation into Chinese which was credited to iFlytek. Wang claims that the Chinese wasn’t a simultaneous translation, but was instead a transcription of an interpretation by himself and a fellow interpreter. “I was deeply disgusted,” Wang wrote in the letter. … This is not the first time iFlytek has been accused of disguising works done by flesh-and-blood interpreters as the work of their AI-powered product. Last year, another simultaneous interpreter accused iFlytek of hiding their existence from the keynote speakers while providing interpretation services that appeared to come from the AI product. [unquote]
  • Tesla Enthusiast’s European Model 3 Tour Ends When Autopilot Crashes Into Median
    • [quote] Two Americans are dead and one is injured as a result of Tesla deceiving and misleading consumers into believing that the Autopilot feature of its vehicles is safer and more capable than it actually is. After studying the first of these fatal accidents, the National Transportation Safety Board (NTSB) determined that over-reliance on and a lack of understanding of the Autopilot feature can lead to death. The marketing and advertising practices of Tesla, combined with Elon Musk’s public statements, have made it reasonable for Tesla owners to believe, and act on that belief, that a Tesla with Autopilot is an autonomous vehicle capable of “self-driving”. [unquote]
    • [quote] Consumers in the market for a new Tesla see advertisements proclaiming, “Full Self Driving Hardware on All Cars.” They are directed to videos of Tesla vehicles driving themselves through busy public roads, with no human operation whatsoever. They see press releases alleging that Autopilot reduces the likelihood of an accident by 40%. They also hear statements like “the probability of an accident with Autopilot is just less” from Tesla’s CEO, Elon Musk. Or they hear him relate Autopilot in a Tesla to autopilot systems in an aircraft. Such advertisements and statements mislead and deceive consumers into believing that Autopilot is safer and more capable than it is known to be.” [unquote]
    • [quote] “Judging by You You’s statements about his recent experience, it doesn’t just seem that Autopilot is being marketed incorrectly; this “amenity” may in fact represent an extra liability to the driver given Elon’s comments. In fact, why doesn’t Tesla just completely deactivate the erroneously named “autopilot” feature indefinitely until the company stops, in You You’s words, letting humans risk their lives as beta testers to experiment with what is clearly faulty software.” [unquote]
  • Tesla in Autopilot mode crashes into parked Laguna Beach police cruiser
    • [quote] A Tesla sedan in Autopilot mode crashed into a parked Laguna Beach Police Department vehicle Tuesday morning, authorities said… “Why do these vehicles keep doing that?” Cota said. “We’re just lucky that people aren’t getting injured.” Tesla’s Autopilot driver-assist feature has come under scrutiny following other collisions. [unquote]
  • Tesla car mangled in fatal crash was on Autopilot and speeding, NTSB says
    • [quote] The Tesla car involved in a fatal crash in Florida this spring was in Autopilot mode and going about 10 miles faster than the speed limit, according to safety regulators, who also released a picture of the mangled vehicle. … Earlier reports had stated the Tesla Model S struck a big rig while traveling on a divided highway in central Florida, and speculated that the Tesla Autopilot system had failed to intervene in time to prevent the collision. … The crash killed 40-year-old Ohio resident Joshua Brown, who was behind the wheel of the Tesla. It is the first known fatality in a Tesla using Autopilot. The driver of the truck was not injured. [unquote]
  • Tesla says driver’s hands weren’t on wheel at time of accident
    • [quote] The collision occurred days after an Uber Technologies Inc. self-driving test vehicle killed a pedestrian in Arizona, the most significant incident involving autonomous-driving technology since a Tesla driver’s death in May 2016 touched off months of finger-pointing and set back the company’s Autopilot program. A U.S. transportation safety regulator said Tuesday it would investigate the Model X crash, contributing to Tesla’s loss of more than $5 billion in market value this week. …. “This is another potential illustration of the mushy middle of automation,” Bryant Walker Smith, a University of South Carolina law professor who studies self-driving cars, said in an email. Partial automation systems such as Tesla’s Autopilot “work unless and until they don’t,” and there will be speculation and research about their safety, he said. [unquote]
  • Tesla driver in Utah crash says Autopilot was on and she was looking at her phone
    • [quote] The driver of a Tesla electric car had the vehicle’s semi-autonomous Autopilot mode engaged when she slammed into the back of a Utah fire truck Friday, in the latest crash involving a car with self-driving features. [unquote]
  • Artificial Intelligence Pioneer Says We Need to Start Over
    • [quote] “Geoffrey Hinton, a professor emeritus at the University of Toronto and a Google researcher, says he is now “deeply suspicious” of back-propagation, which underlies many advances in the artificial intelligence field today.” In 1986, Geoffrey Hinton co-authored a paper that, three decades later, is central to the explosion of artificial intelligence (AI). But Hinton says his breakthrough method should be dispensed with, and a new path to AI found. Speaking with Axios on the sidelines of an AI conference in Toronto on Wednesday, Hinton, a professor emeritus at the University of Toronto and a Google researcher, said he is now “deeply suspicious” of back-propagation, the workhorse method that underlies most of the advances we are seeing in the AI field today, including the capacity to sort through photos and talk to Siri. “My view is throw it all away and start again,” he said. The bottom line: Other scientists at the conference said back-propagation still has a core role in AI’s future. But Hinton said that, to push materially ahead, entirely new methods will probably have to be invented. “Max Planck said, ‘Science progresses one funeral at a time.’ The future depends on some graduate student who is deeply suspicious of everything I have said.” [unquote]
  • Gary Marcus and Ernest Davis, A.I. Is Harder Than You Think, 2018
    • [quote] The field of artificial intelligence doesn’t lack for ambition. In January, Google’s chief executive, Sundar Pichai, claimed in an interview that A.I. “is more profound than, I dunno, electricity or fire.” Day-to-day developments, though, are more mundane. Last week, Mr. Pichai stood onstage in front of a cheering audience and proudly showed a video in which a new Google program, Google Duplex, made a phone call and scheduled a hair salon appointment. The program performed those tasks well enough that a human at the other end of the call didn’t suspect she was talking to a computer. Assuming the demonstration is legitimate, that’s an impressive (if somewhat creepy) accomplishment. But Google Duplex is not the advance toward meaningful A.I. that many people seem to think. If you read Google’s public statement about Google Duplex, you’ll discover that the initial scope of the project is surprisingly limited. It encompasses just three tasks: helping users “make restaurant reservations, schedule hair salon appointments, and get holiday hours.” Schedule hair salon appointments? The dream of artificial intelligence was supposed to be grander than this — to help revolutionize medicine, say, or to produce trustworthy robot helpers for the home. The reason Google Duplex is so narrow in scope isn’t that it represents a small but important first step toward such goals. The reason is that the field of A.I. doesn’t yet have a clue how to do any better. As Google concedes, the trick to making Google Duplex work was to limit it to “closed domains,” or highly constrained types of data (like conversations about making hair salon appointments), “which are narrow enough to explore extensively.” Google Duplex can have a human-sounding conversation only “after being deeply trained in such domains.” Open-ended conversation on a wide range of topics is nowhere in sight. The limitations of Google Duplex are not just a result of its being announced prematurely and with too much fanfare; they are also a vivid reminder that genuine A.I. is far beyond the field’s current capabilities, even at a company with perhaps the largest collection of A.I. researchers in the world, vast amounts of computing power and enormous quantities of data. It uses “machine learning” techniques to extract a range of possible phrases drawn from an enormous data set of recordings of human conversations. But the basic problem remains the same: No matter how much data you have and how many patterns you discern, your data will never match the creativity of human beings or the fluidity of the real world. The universe of possible sentences is too complex. There is no end to the variety of life — or to the ways in which we can talk about that variety. Today’s dominant approach to A.I. has not worked out. Yes, some remarkable applications have been built from it, including Google Translate and Google Duplex. But the limitations of these applications as a form of intelligence should be a wake-up call. If machine learning and big data can’t get us any further than a restaurant reservation, even in the hands of the world’s most capable A.I. company, it is time to reconsider that strategy. [unquote]
  • Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots
  • UK police use of facial recognition technology a failure
    • “wrong nine times out of 10”, i.e. roughly 90% of the system’s reported matches were false alarms.
  • UK police say 92% false positive facial recognition is no big deal
    • South Wales Police: “No facial recognition system is 100% accurate under all conditions.”
  • AI fails: why AI still isn’t ready to take your job
  • Lessons from Fabio: why you can’t robotise your customer service
    • [selected quotes] We all love a good tech novelty, be it an AI toothbrush or a new egg whisk you can have Alexa operate. But as fun as they are, it only takes time for the novelty to wear off before you find that fun new gadget isn’t as great as you thought. … Chatbots and automated assistants are one such tech novelty looking for their place in the world. And it’s exactly this disillusionment that happened to Fabio the ShopBot. Fabio managed to hold his job at Scottish supermarket Margiotta for all of one week before being fired for confusing the customers. … Fabio the Pepper robot was the UK’s first automated shop assistant, and with that title came buckets of novelty-born interest. For a short time, Fabio seemed to succeed at his new retail job, offering high-fives to customers, and even hugs. … But when the novelty wore off and regular customers grew used to him, the illusion shattered. It became clear that Fabio added very little to the customer experience. In fact, he detracted from it, repelling customers who appeared to be actively avoiding him. … And so it was that poor little Fabio found himself fired and packed off back to the factory.
  • Funniest Chatbot Fails

Blog posts

  • Filip Piekniewski, AI And The Ludic Fallacy, 2016
  • Filip Piekniewski, Just How Close Are We To Solving Vision?, 2016
  • Filip Piekniewski, Outside The Box, 2017
  • Filip Piekniewski, AI Winter Is Well On Its Way, 2018
  • Filip Piekniewski, AI Winter – Addendum, 2018
  • Filip Piekniewski, Autopsy Of A Deep Learning Paper, 2018
  • Eric Holloway, Artificial Intelligence is Impossible, Sept 2018
    • [quote] … All forms of artificial intelligence can be reduced to a Turing machine, that is, a system of rules, states, and transitions that can determine a result using a set of rules. All Turing machines operate entirely according to randomness and determinism. … Because the law of independence conservation states that no combination of randomness and determinism can create mutual information, then likewise no Turing machine nor artificial intelligence can create mutual information. Thus, the goal of artificial intelligence researchers to reproduce human intelligence with a computer program is impossible to achieve. … [unquote]
  • Eric Holloway, Could one single machine invent everything?, Aug 2018
    • [quote] … And in lay terms?: Computers can never originate, they only regurgitate. Humans, on the other hand, can come up with original ideas, i.e. they write the programs in the first place. So, my proof shows that the human mind cannot be a computer program. … This is why no one has yet invented an invention algorithm, that will come up with great inventions without any human input. It is also why AI systems only turn out to be useful in very narrow problems and cannot be generalized to cover many problems. … [unquote]
  • Eric Holloway, Why machines can’t think as we do, Aug 2018
    • [quote] … Recently, we looked at Moravec’s Paradox, the fact that it is hard to teach machines to do things that are easy for most humans (walking, for example) but comparatively easy to teach them things that are challenging for most humans (chess comes to mind). … Here’s the Paradox, as formulated by law professor John Danaher, who studies emerging technologies, at his blog Philosophical Disquisitions: We can know more than we can tell, i.e. many of the tasks we perform rely on tacit, intuitive knowledge that is difficult to codify and automate. … We have all encountered that problem. It’s common in healthcare and personal counseling. Some knowledge simply cannot be conveyed—or understood or accepted—in a propositional form. For example, a nurse counselor may see clearly that her elderly post-operative patient would thrive better in a retirement home than in his rundown private home with several staircases. The analysis, as such, is straightforward. But that is not the challenge the nurse faces. Her challenge is to convey to the patient, not the information itself, but her tacit knowledge that the proposed move would liberate, rather than restrict him. She may face powerful cultural and psychological barriers in communicating that knowledge to him if he perceives the move as a loss of independence, pure and simple. … [unquote]
  • Eric Holloway, AI that can read minds? Deconstructing AI hype, Aug 2018
    • [quote] … Fake and misleading AI news is everywhere today. Here’s an example I ran across recently: A headline from a large-circulation daily’s web page screams: No more secrets! New mind-reading machine can translate your thoughts and display them as text INSTANTLY! Not just “instantly,” notice, but “INSTANTLY!” The Daily Mail is the United Kingdom’s second biggest-selling daily newspaper. … [unquote]
  • Eric Holloway, Boy loses a large chunk of his brain and doing just fine, Aug 2018
    • [quote] If the mind is just what the brain does, as materialists claim, how do we explain this? The boy, identified as UD in the case study, was a healthy, normal kid—up until he suddenly suffered a seizure at age four. He subsequently developed intractable epilepsy due to the tumor. When he was nearly seven years old, his parents and doctors made the tough decision to surgically remove the mass. That also meant removing the entire right side of his occipital lobe and part of his temporal lobe on his right side. Together, the extracted sections accounted for a third of the right hemisphere of UD’s brain. … In the end, the only permanent injury appears to have been a blind spot on his left side. … Modern medical diagnostics, far from definitively showing that the mind is just what the brain does, challenges that notion. Earlier this year, Ars Technica also reported on the case of an otherwise healthy 84-year-old man who had a 9cm (~3.5 inch) pressurized pocket of air in place of much of his right frontal lobe. He had come to medical attention because of routine complaints for an elderly person. … Neuroscience tried wholly embracing naturalism, but then the brain got away … [unquote]
  • William Dembski, How humans can thrive in a world of increasing automation, July 2018.
    • [quote] … Zero evidence supports the view that machines will attain and ultimately exceed human intelligence. And absent such evidence, there is zero reason to worry that they will. So how do we see that clearly, despite the hype? We see it by understanding the nature of true intelligence, as exhibited in a fully robust human intelligence, so that we do not confuse it with artificial intelligence. Unfortunately, rather than use AI to enhance our humanity, computational reductionists use it as a club to beat our humanity, suggesting that we are well on our way to being replaced by machines. Such predictions are sheer hype. Machines have come nowhere near attaining human intelligence, and show zero prospects of ever doing so. I want to linger on this dim view of their grand pretensions because it flies in the face of the propaganda about an AI takeover that constantly bombards us. Zero evidence supports the view that machines will attain and ultimately exceed human intelligence. And absent such evidence, there is zero reason to worry that they will. So how do we see that clearly, despite the hype? We see it by understanding the nature of true intelligence, as exhibited in a fully robust human intelligence so that we do not confuse it with artificial intelligence. What has artificial intelligence accomplished to date? AI has, no doubt, an impressive string of accomplishments: chess playing programs, Go-playing programs, and Jeopardy playing programs just scratch the surface. Consider Google’s search business, Facebook’s tracking and filtering technology, and the robotics industry. Automated cars seem just around the corner. In every case, however, one finds a specifically adapted algorithmic solution applied to a well-defined and narrowly conceived problem. The engineers and programmers who produce these AI systems are to be commended for their insight and creativity. They are building a library of AI applications. But all such applications, even when considered collectively and extrapolated in the light of an ever-increasing army of programmers equipped with ever more powerful computers, get us no closer to computers that achieve, much less exceed, human intelligence. For a full-fledged AI takeover (think Skynet or HAL 9000) to become a reality, AI needs more than a library of algorithms that solve specific problems. An AI takeover needs a higher-order master algorithm with a general-purpose problem-solving capability, able to harness the first-order problem-solving capabilities of the specific algorithms in this library and adapt them to the widely varying contingent circumstances of life. Building such a master algorithm is a task on which AI’s practitioners have made zero headway. The library of search algorithms is a kludge — it simply brings together all existing AI algorithms, each narrowly focused on solving specific problems. What’s needed is not a kludge but a coordination of all these algorithms, appropriately matching algorithm to problem across a vast array of situations. A master algorithm that achieves such coordination is the holy grail of AI. But there’s no reason to think it exists. Certainly, work on AI to date provides no evidence for it. AI, even at its current outer reaches (automated vehicles?), still focuses on narrow, well-defined problems. Absence of evidence for such a master algorithm might prompt defenders of strong AI to dig in their heels: They say, give us more time, effort, and computational power to find such a master algorithm and we’ll solve it! 
But why should we take their protestations seriously? We simply have no precedent or idea of what such a master algorithm would look like. Essentially, to resolve AI’s master algorithm problem, supporters of strong AI must come up with a radically new approach to programming, perhaps building machines by analogy with humans in some form of machine embryological development. Such possibilities remain pure speculation for now. The computational literature on No Free Lunch theorems and Conservation of Information (see the work of David Wolpert and Bill Macready on the former as well as that of Robert J. Marks and myself on the latter) imply that all problem-solving algorithms, including such a master algorithm, must be adapted to specific problems. Yet a master algorithm must also be perfectly general, transforming AI into a universal problem solver. The No Free Lunch theorem and Conservation of Information demonstrate that such universal problem solvers do not exist. Yet what algorithms can’t do, humans can. True intelligence, as exhibited by humans, is a general faculty for taking wide-ranging, diverse abilities for solving specific problems and matching them to the actual and multifarious problems that arise in practice. Such a distinction between true intelligence and machine intelligence is nothing new. Descartes and Leibniz understood it in the seventeenth century: ‘While intelligence is a universal instrument that can serve for all contingencies, [machines] have need of some special adaptation for every particular action. From this, it follows that it is impossible that there should be sufficient diversity in any machine to allow it to act in all the events of life in the same way as our intelligence causes us to act.’ … Indeed, it is perhaps the best and most concise statement of what may be called AI’s master algorithm problem, namely, the total lack of insight and progress on the part of the computer science community to construct a master algorithm (which Descartes calls a “universal instrument”) that can harness the algorithms AI is able to produce and match them with the problem situations to which those algorithms apply.
      [unquote]
  • Bob Wachter, How Medical Tech Gave a Patient a Massive Overdose, 2015
  • Randy Gallistel, Not hard enough on DLNs, 2016
    • [quote] Although, DLN image-recognition achieves state-of-the-art image recognition on web-based images, the papers from Google — Szegedy et al (http://arxiv.org/abs/1312.6199) & Nguyen, et al (http://arxiv.org/pdf/1412.1897v1.pdf) — make it clear that the DLN in fact has no idea what the things it labels look like. [unquote]
    • [quote] Again, what better proof could one ask for that what the DLN is doing is not what animal visual systems do. [unquote]
    • [quote] It may “recognize” them when it sees them on the net, but it has no clue what they actually look like. And this is the area — image labeling — in which DLNs have enjoyed their greatest triumph. To paraphrase the unfortunate King Pyrrhus: Another such victory and [the proponents of DLNs as models of brain function] will be completely undone. The DLNs do not recognize these images; they just paste labels on them. You would never know that from listening to the hype from Hinton or Bengio or LeCun. [unquote]
    • [quote] What is most deeply dishonest about the claims that DLNs point the way toward an understanding of how brains work is that conceptually DLNs have no addressable read-write memory (ARWM). In practice, however, they are implemented on machines that do have such a memory, and their implementation makes unfettered use of the ARWMs in those machines. This point applies to DLNs of every kind.  [unquote]