Resources on the true progress, current state and future of AI, Machine Learning and Deep Learning

Is Artificial Intelligence (AI) over-hyped?

Is Deep Learning over-hyped?

Is Machine Learning over-hyped?

Deep learning should be treated as one tool among many in the Machine Learning and AI toolbox. This is a compilation of resources intended to give a truer and fairer picture of the progress, current state and future of Artificial Intelligence, Machine Learning and Deep Learning:

Papers

  • Anh Nguyen, Jason Yosinski and Jeff Clune, Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images, 2015
  • Jason Jo and Yoshua Bengio, Measuring the tendency of CNNs to Learn Surface Statistical Regularities, 2017
  • Gary Marcus, Deep Learning: A Critical Appraisal, 2018
  • Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt and Vaishaal Shankar, Do CIFAR-10 Classifiers Generalize to CIFAR-10?, 2018
    • [quote] Machine learning is currently dominated by largely experimental work focused on improvements in a few key tasks. However, the impressive accuracy numbers of the best performing models are questionable because the same test sets have been used to select these models for multiple years now. To understand the danger of overfitting, we measure the accuracy of CIFAR-10 classifiers by creating a new test set of truly unseen images. Although we ensure that the new test set is as close to the original data distribution as possible, we find a large drop in accuracy (4% to 10%) for a broad range of deep learning models. Yet more recent models with higher original accuracy show a smaller drop and better overall performance, indicating that this drop is likely not due to overfitting based on adaptivity. Instead, we view our results as evidence that current accuracy numbers are brittle and susceptible to even minute natural variations in the data distribution. [unquote]
  • Hossein Hosseini and Radha Poovendran, Semantic Adversarial Examples, 2018 (see the hue-shift sketch after this list)
    • [quote] Deep neural networks are known to be vulnerable to adversarial examples, i.e., images that are maliciously perturbed to fool the model. … In this paper, we introduce a new class of adversarial examples, namely “Semantic Adversarial Examples,” as images that are arbitrarily perturbed to fool the model, but in such a way that the modified image semantically represents the same object as the original image. … Our experimental results on CIFAR10 dataset show that the accuracy of VGG16 network on adversarial color-shifted images is 5.7%. [unquote]
  • Logan Engstrom, Brandon Tran, Dimitris Tsipras, Ludwig Schmidt and Aleksander Madry, A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations, 2018 (see the rotation-and-translation sketch after this list)
    • [quote] We show that simple transformations, namely translations and rotations alone, are sufficient to fool neural network-based vision models on a significant fraction of inputs. This is in sharp contrast to previous work that relied on more complicated optimization approaches that are unlikely to appear outside of a truly adversarial setting. Moreover, fooling rotations and translations are easy to find and require only a few black-box queries to the target model. Overall, our findings emphasize the need for designing robust classifiers even in natural, benign contexts. [unquote]
  • Rosanne Liu, Joel Lehman, Piero Molino, Felipe Petroski Such, Eric Frank, Alex Sergeev and Jason Yosinski, An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution, 2018 (see the CoordConv sketch after this list)
    • [quote] Few ideas have enjoyed as large an impact on deep learning as convolution. For any problem involving pixels or spatial representations, common intuition holds that convolutional neural networks may be appropriate. In this paper we show a striking counterexample to this intuition via the seemingly trivial coordinate transform problem, which simply requires learning a mapping between coordinates in (x,y) Cartesian space and one-hot pixel space. Although convolutional networks would seem appropriate for this task, we show that they fail spectacularly. We demonstrate and carefully analyze the failure first on a toy problem, at which point a simple fix becomes obvious. We call this solution CoordConv, which works by giving convolution access to its own input coordinates through the use of extra coordinate channels. Without sacrificing the computational and parametric efficiency of ordinary convolution, CoordConv allows networks to learn either perfect translation invariance or varying degrees of translation dependence, as required by the task. CoordConv solves the coordinate transform problem with perfect generalization and 150 times faster with 10–100 times fewer parameters than convolution. This stark contrast raises the question: to what extent has this inability of convolution persisted insidiously inside other tasks, subtly hampering performance from within? A complete answer to this question will require further investigation, but we show preliminary evidence that swapping convolution for CoordConv can improve models on a diverse set of tasks. Using CoordConv in a GAN produced less mode collapse as the transform between high-level spatial latents and pixels becomes easier to learn. A Faster R-CNN detection model trained on MNIST detection showed 24% better IOU when using CoordConv, and in the RL domain agents playing Atari games benefit significantly from the use of CoordConv layers. [unquote]
  • Jiawei Su, Danilo Vasconcellos Vargas and Sakurai Kouichi, One pixel attack for fooling deep neural networks, 2018 (see the one-pixel attack sketch after this list)
    • [quote] Recent research has revealed that the output of Deep Neural Networks (DNN) can be easily altered by adding relatively small perturbations to the input vector. In this paper, we analyze an attack in an extremely limited scenario where only one pixel can be modified. For that we propose a novel method for generating one-pixel adversarial perturbations based on differential evolution. It requires less adversarial information and can fool more types of networks. The results show that 70.97% of the natural images can be perturbed to at least one target class by modifying just one pixel with 97.47% confidence on average. Thus, the proposed attack explores a different take on adversarial machine learning in an extreme limited scenario, showing that current DNNs are also vulnerable to such low dimension attacks. [unquote]
  • Aharon Azulay and Yair Weiss, Why do deep convolutional networks generalize so poorly to small image transformations?, 2018
    • [quote] Deep convolutional network architectures are often assumed to guarantee generalization for small image translations and deformations. In this paper we show that modern CNNs (VGG16, ResNet50, and InceptionResNetV2) can drastically change their output when an image is translated in the image plane by a few pixels, and that this failure of generalization also happens with other realistic small image transformations. Furthermore, the deeper the network the more we see these failures to generalize. We show that these failures are related to the fact that the architecture of modern CNNs ignores the classical sampling theorem so that generalization is not guaranteed. We also show that biases in the statistics of commonly used image datasets makes it unlikely that CNNs will learn to be invariant to these transformations. Taken together our results suggest that the performance of CNNs in object recognition falls far short of the generalization capabilities of humans. [unquote]
  • Matthew Ricci, Junkyung Kim and Thomas Serre, Same-different problems strain convolutional neural networks, 2018
    • [quote] The robust and efficient recognition of visual relations in images is a hallmark of biological vision. We argue that, despite recent progress in visual recognition, modern machine vision algorithms are severely limited in their ability to learn visual relations. Through controlled experiments, we demonstrate that visual-relation problems strain convolutional neural networks (CNNs). The networks eventually break altogether when rote memorization becomes impossible, as when intra-class variability exceeds network capacity. Motivated by the comparable success of biological vision, we argue that feedback mechanisms including attention and perceptual grouping may be the key computational components underlying abstract visual reasoning. [unquote]
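
The sketches below are not taken from the papers above; they are minimal, illustrative re-creations of the ideas being described, with placeholder hooks for anything the papers do not specify. The first mirrors the hue-shift “Semantic Adversarial Examples” of Hosseini and Poovendran: re-colour an image without changing what it depicts and check whether the classifier’s prediction flips. Here `classify` is an assumed stand-in for any function mapping an HxWx3 float image in [0, 1] to a label.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def hue_shift(image_rgb, shift):
    """Rotate the hue channel by `shift` (fraction of a full turn); saturation and value are untouched."""
    hsv = rgb_to_hsv(image_rgb)
    hsv[..., 0] = (hsv[..., 0] + shift) % 1.0
    return hsv_to_rgb(hsv)

def find_hue_adversarial(image_rgb, label, classify, steps=20):
    """Scan hue shifts and return the first one that changes the predicted label."""
    for shift in np.linspace(0.0, 1.0, steps, endpoint=False):
        shifted = hue_shift(image_rgb, shift)
        if classify(shifted) != label:
            return shift, shifted  # semantics preserved, prediction flipped
    return None  # the model appears hue-robust on this image
```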
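
In the same spirit, the rotation-and-translation attack of Engstrom et al. can be approximated by a black-box grid search over a handful of small spatial transforms; the grid values and the `classify` hook below are illustrative assumptions, not the authors' exact setup.

```python
import itertools
from scipy.ndimage import rotate, shift

def spatial_attack(image, label, classify,
                   angles=(-30, -15, 0, 15, 30), offsets=(-3, 0, 3)):
    """Return the first (angle, dx, dy) that flips the model's prediction, if any."""
    for angle, dx, dy in itertools.product(angles, offsets, offsets):
        candidate = rotate(image, angle, axes=(0, 1), reshape=False, order=1)
        candidate = shift(candidate, (dy, dx, 0), order=1)  # image is H x W x C
        if classify(candidate) != label:
            return angle, dx, dy  # a "natural" transform fooled the model
    return None
```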
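
The CoordConv fix described by Liu et al. is also straightforward to sketch: concatenate two normalized coordinate channels to the input before an ordinary convolution. The PyTorch module below is an illustrative re-implementation based on that description, not the authors' released code.

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """A Conv2d that also sees its own (x, y) input coordinates as two extra channels."""
    def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_channels + 2, out_channels, kernel_size, **kwargs)

    def forward(self, x):
        n, _, h, w = x.shape
        # Coordinate channels normalized to [-1, 1], broadcast over the batch.
        ys = torch.linspace(-1.0, 1.0, h, device=x.device).view(1, 1, h, 1).expand(n, 1, h, w)
        xs = torch.linspace(-1.0, 1.0, w, device=x.device).view(1, 1, 1, w).expand(n, 1, h, w)
        return self.conv(torch.cat([x, xs, ys], dim=1))

# Drop-in usage: CoordConv2d(3, 16, kernel_size=3, padding=1) instead of nn.Conv2d(3, 16, 3, padding=1).
```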
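
Finally, the one-pixel attack of Su et al. searches for a single (x, y, r, g, b) perturbation with differential evolution. The sketch below uses SciPy's general-purpose optimizer and a hypothetical `predict_proba` hook that returns class probabilities for one image; the population size and iteration budget are arbitrary choices, not the paper's settings.

```python
from scipy.optimize import differential_evolution

def one_pixel_attack(image, true_label, predict_proba, maxiter=30, popsize=10):
    """Search for one pixel whose modification minimizes confidence in the true class."""
    h, w, _ = image.shape

    def perturb(z):
        x, y, r, g, b = z
        adv = image.copy()
        adv[int(y), int(x)] = (r, g, b)  # overwrite exactly one pixel
        return adv

    def objective(z):
        return predict_proba(perturb(z))[true_label]  # lower is better for the attacker

    bounds = [(0, w - 1), (0, h - 1), (0.0, 1.0), (0.0, 1.0), (0.0, 1.0)]
    result = differential_evolution(objective, bounds, maxiter=maxiter,
                                    popsize=popsize, seed=0)
    return perturb(result.x)  # the attack succeeds if the predicted label changes
```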

News, reports, etc.

  • Tesla Enthusiast’s European Model 3 Tour Ends When Autopilot Crashes Into Median
    • [quote] Two Americans are dead and one is injured as a result of Tesla deceiving and misleading consumers into believing that the Autopilot feature of its vehicles is safer and more capable than it actually is. After studying the first of these fatal accidents, the National Transportation Safety Board (NTSB) determined that over-reliance on and a lack of understanding of the Autopilot feature can lead to death. The marketing and advertising practices of Tesla, combined with Elon Musk’s public statements, have made it reasonable for Tesla owners to believe, and act on that belief, that a Tesla with Autopilot is an autonomous vehicle capable of “self-driving”. [unquote]
    • [quote] Consumers in the market for a new Tesla see advertisements proclaiming, “Full Self Driving Hardware on All Cars.” They are directed to videos of Tesla vehicles driving themselves through busy public roads, with no human operation whatsoever. They see press releases alleging that Autopilot reduces the likelihood of an accident by 40%. They also hear statements like “the probability of an accident with Autopilot is just less” from Tesla’s CEO, Elon Musk. Or they hear him relate Autopilot in a Tesla to autopilot systems in an aircraft. Such advertisements and statements mislead and deceive consumers into believing that Autopilot is safer and more capable than it is known to be. [unquote]
    • [quote] Judging by You You’s statements about his recent experience, it doesn’t just seem that Autopilot is being marketed incorrectly; this “amenity” may in fact represent an extra liability to the driver given Elon’s comments. In fact, why doesn’t Tesla just completely deactivate the erroneously named “autopilot” feature indefinitely until the company stops, in You You’s words, letting humans risk their lives as beta testers to experiment with what is clearly faulty software. [unquote]
  • Tesla in Autopilot mode crashes into parked Laguna Beach police cruiser
    • [quote] A Tesla sedan in Autopilot mode crashed into a parked Laguna Beach Police Department vehicle Tuesday morning, authorities said… “Why do these vehicles keep doing that?” Cota said. “We’re just lucky that people aren’t getting injured.” Tesla’s Autopilot driver-assist feature has come under scrutiny following other collisions. [unquote]
  • Tesla car mangled in fatal crash was on Autopilot and speeding, NTSB says
    • [quote] The Tesla car involved in a fatal crash in Florida this spring was in Autopilot mode and going about 10 miles faster than the speed limit, according to safety regulators, who also released a picture of the mangled vehicle. … Earlier reports had stated the Tesla Model S struck a big rig while traveling on a divided highway in central Florida, and speculated that the Tesla Autopilot system had failed to intervene in time to prevent the collision. … The crash killed 40-year-old Ohio resident Joshua Brown, who was behind the wheel of the Tesla. It is the first known fatality in a Tesla using Autopilot. The driver of the truck was not injured. [unquote]
  • Tesla says driver’s hands weren’t on wheel at time of accident
    • [quote] The collision occurred days after an Uber Technologies Inc. self-driving test vehicle killed a pedestrian in Arizona, the most significant incident involving autonomous-driving technology since a Tesla driver’s death in May 2016 touched off months of finger-pointing and set back the company’s Autopilot program. A U.S. transportation safety regulator said Tuesday it would investigate the Model X crash, contributing to Tesla’s loss of more than $5 billion in market value this week. …. “This is another potential illustration of the mushy middle of automation,” Bryant Walker Smith, a University of South Carolina law professor who studies self-driving cars, said in an email. Partial automation systems such as Tesla’s Autopilot “work unless and until they don’t,” and there will be speculation and research about their safety, he said. [unquote]
  • Tesla driver in Utah crash says Autopilot was on and she was looking at her phone
    • [quote] The driver of a Tesla electric car had the vehicle’s semi-autonomous Autopilot mode engaged when she slammed into the back of a Utah fire truck Friday, in the latest crash involving a car with self-driving features. [unquote]
  • Artificial Intelligence Pioneer Says We Need to Start Over
    • [quote] In 1986, Geoffrey Hinton co-authored a paper that, three decades later, is central to the explosion of artificial intelligence (AI). But Hinton says his breakthrough method should be dispensed with, and a new path to AI found. Speaking with Axios on the sidelines of an AI conference in Toronto on Wednesday, Hinton, a professor emeritus at the University of Toronto and a Google researcher, said he is now “deeply suspicious” of back-propagation, the workhorse method that underlies most of the advances we are seeing in the AI field today, including the capacity to sort through photos and talk to Siri. “My view is throw it all away and start again,” he said. The bottom line: Other scientists at the conference said back-propagation still has a core role in AI’s future. But Hinton said that, to push materially ahead, entirely new methods will probably have to be invented. “Max Planck said, ‘Science progresses one funeral at a time.’ The future depends on some graduate student who is deeply suspicious of everything I have said.” [unquote]
  • Gary Marcus and Ernest Davis, A.I. Is Harder Than You Think, 2018
    • [quote] The field of artificial intelligence doesn’t lack for ambition. In January, Google’s chief executive, Sundar Pichai, claimed in an interview that A.I. “is more profound than, I dunno, electricity or fire.” Day-to-day developments, though, are more mundane. Last week, Mr. Pichai stood onstage in front of a cheering audience and proudly showed a video in which a new Google program, Google Duplex, made a phone call and scheduled a hair salon appointment. The program performed those tasks well enough that a human at the other end of the call didn’t suspect she was talking to a computer. Assuming the demonstration is legitimate, that’s an impressive (if somewhat creepy) accomplishment. But Google Duplex is not the advance toward meaningful A.I. that many people seem to think. If you read Google’s public statement about Google Duplex, you’ll discover that the initial scope of the project is surprisingly limited. It encompasses just three tasks: helping users “make restaurant reservations, schedule hair salon appointments, and get holiday hours.” Schedule hair salon appointments? The dream of artificial intelligence was supposed to be grander than this — to help revolutionize medicine, say, or to produce trustworthy robot helpers for the home. The reason Google Duplex is so narrow in scope isn’t that it represents a small but important first step toward such goals. The reason is that the field of A.I. doesn’t yet have a clue how to do any better. As Google concedes, the trick to making Google Duplex work was to limit it to “closed domains,” or highly constrained types of data (like conversations about making hair salon appointments), “which are narrow enough to explore extensively.” Google Duplex can have a human-sounding conversation only “after being deeply trained in such domains.” Open-ended conversation on a wide range of topics is nowhere in sight. The limitations of Google Duplex are not just a result of its being announced prematurely and with too much fanfare; they are also a vivid reminder that genuine A.I. is far beyond the field’s current capabilities, even at a company with perhaps the largest collection of A.I. researchers in the world, vast amounts of computing power and enormous quantities of data. … It uses “machine learning” techniques to extract a range of possible phrases drawn from an enormous data set of recordings of human conversations. But the basic problem remains the same: No matter how much data you have and how many patterns you discern, your data will never match the creativity of human beings or the fluidity of the real world. The universe of possible sentences is too complex. There is no end to the variety of life — or to the ways in which we can talk about that variety. Today’s dominant approach to A.I. has not worked out. Yes, some remarkable applications have been built from it, including Google Translate and Google Duplex. But the limitations of these applications as a form of intelligence should be a wake-up call. If machine learning and big data can’t get us any further than a restaurant reservation, even in the hands of the world’s most capable A.I. company, it is time to reconsider that strategy. [unquote]
  • Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots
  • UK police use of facial recognition technology a failure
    • “wrong nine times out of 10”, i.e. roughly 90% of the matches it flagged were false positives.
  • UK police say 92% false positive facial recognition is no big deal
    • South Wales Police: “No facial recognition system is 100% accurate under all conditions.”

Blog posts