The MNIST database (Modified National Institute of Standards and Technology database[1]) is a large database of handwritten digits that is commonly used for training various image processing systems.[2][3] The database is also widely used for training and testing in the field of machine learning.[4][5] It was created by "re-mixing" the samples from NIST's original datasets.[6] The creators felt that since NIST's training dataset was taken from American Census Bureau employees, while the testing dataset was taken from American high school students, it was not well-suited for machine learning experiments.[7] Furthermore, the black and white images from NIST were normalized to fit into a 28x28 pixel bounding box and anti-aliased, which introduced grayscale levels.[7]
The MNIST database contains 60,000 training images and 10,000 testing images.[8] Half of the training set and half of the test set were taken from NIST's training dataset, while the other half of the training set and the other half of the test set were taken from NIST's testing dataset.[9] The original creators of the database keep a list of some of the methods tested on it.[7] In their original paper, they used a support-vector machine to achieve an error rate of 0.8%.[10]
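The 60,000/10,000 split and the 28x28 grayscale image format can be inspected directly with common machine learning libraries. The following minimal sketch assumes the torchvision package is installed; other loaders (for example keras.datasets.mnist) expose the same splits.

```python
# Minimal sketch of loading the MNIST splits, assuming torchvision is available.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()  # converts each 28x28 image to a [0, 1] float tensor

train_set = datasets.MNIST(root="./data", train=True, download=True, transform=to_tensor)
test_set = datasets.MNIST(root="./data", train=False, download=True, transform=to_tensor)

print(len(train_set), len(test_set))   # 60000 10000
image, label = train_set[0]
print(image.shape, label)              # torch.Size([1, 28, 28]) and a digit label 0-9
```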
The original MNIST dataset contains at least 4 wrong labels.[11]
The set of images in the MNIST database was created in 1994. Previously, NIST had released two datasets: Special Database 1 (NIST Test Data I, or SD-1) and Special Database 3 (SD-3), distributed on two CD-ROMs.
SD-1 was the test set and contained 58,646 images of digits written by 500 different high school students; each image is accompanied by the identity of its writer. SD-3 was the training set and contained digits written by 2,000 employees of the United States Census Bureau. Its images were much cleaner and easier to recognize than those in SD-1.[7] Machine learning systems trained and validated on SD-3 were found to suffer significant drops in performance when evaluated on the SD-1 test data.[12]
The original images from NIST were 128x128 binary images. Each was size-normalized to fit in a 20x20 pixel box while preserving its aspect ratio, and anti-aliased to grayscale. The digit was then placed in a 28x28 image and translated so that the center of mass of its pixels lies at the center of the image. The details of this downsampling procedure were later reconstructed.[13]
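The following is a rough sketch of that preprocessing pipeline, not the exact code used to build MNIST; the library choices (Pillow, NumPy, SciPy) and the implementation details are illustrative assumptions.

```python
# Rough sketch of MNIST-style preprocessing: crop the digit, resize it to fit a
# 20x20 box with anti-aliasing (which introduces grayscale), place it on a 28x28
# canvas, and translate it so the pixel center of mass sits at the image center.
import numpy as np
from PIL import Image
from scipy.ndimage import center_of_mass, shift

def mnist_style_preprocess(binary_img: np.ndarray) -> np.ndarray:
    """binary_img: 2D array (e.g. 128x128) with the digit as nonzero pixels."""
    # 1. Crop to the bounding box of the digit.
    rows = np.any(binary_img, axis=1)
    cols = np.any(binary_img, axis=0)
    digit = binary_img[rows.argmax():len(rows) - rows[::-1].argmax(),
                       cols.argmax():len(cols) - cols[::-1].argmax()]

    # 2. Resize so the longer side is 20 pixels, preserving the aspect ratio;
    #    anti-aliased resampling yields grayscale values.
    h, w = digit.shape
    scale = 20.0 / max(h, w)
    new_size = (max(1, int(round(w * scale))), max(1, int(round(h * scale))))
    resized = np.array(
        Image.fromarray((digit * 255).astype(np.uint8)).resize(new_size, Image.LANCZOS),
        dtype=np.float32) / 255.0

    # 3. Paste onto a 28x28 canvas, roughly centered.
    canvas = np.zeros((28, 28), dtype=np.float32)
    top = (28 - resized.shape[0]) // 2
    left = (28 - resized.shape[1]) // 2
    canvas[top:top + resized.shape[0], left:left + resized.shape[1]] = resized

    # 4. Translate so the center of mass of the pixels is at the image center.
    cy, cx = center_of_mass(canvas)
    return shift(canvas, (13.5 - cy, 13.5 - cx))
```

Applied to a 128x128 binary original, this produces a 28x28 grayscale image comparable in format to MNIST, though not pixel-identical to the published dataset.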
The training set and the test set both originally had 60,000 samples, but 50,000 of the test set samples were discarded. These were later restored to construct QMNIST, which has 60,000 images in the training set and 60,000 in the test set.[14][13]
Extended MNIST (EMNIST) is a newer dataset developed and released by NIST to be the (final) successor to MNIST.[15][16] Whereas MNIST included images only of handwritten digits, EMNIST includes all the images from NIST Special Database 19 (SD 19), a large database of 814,255 handwritten uppercase and lowercase letters and digits.[17][18] The images in EMNIST were converted to the same 28x28 pixel format, by the same process, as the MNIST images. Accordingly, tools that work with the older, smaller MNIST dataset will likely work unmodified with EMNIST.
Fashion MNIST was created in 2017 as a more challenging replacement for MNIST. The dataset consists of 70,000 28x28 grayscale images of fashion products from 10 categories.[19]
Some researchers have achieved "near-human performance" on the MNIST database, using a committee of neural networks; in the same paper, the authors achieved performance double that of humans on other recognition tasks.[20] The highest error rate listed[7] on the original website of the database is 12 percent, obtained using a simple linear classifier with no preprocessing.[10]
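As a point of reference for that baseline, a linear classifier on raw pixels can be trained in a few lines. The sketch below uses scikit-learn and the OpenML copy of MNIST ("mnist_784") as illustrative assumptions; the resulting error depends on the model and solver settings and is not expected to reproduce the cited 12 percent figure.

```python
# Minimal sketch of a linear-classifier baseline on raw MNIST pixels.
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression

X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X / 255.0                           # scale pixels to [0, 1]
X_train, X_test = X[:60000], X[60000:]  # the standard 60,000/10,000 split
y_train, y_test = y[:60000], y[60000:]

clf = LogisticRegression(max_iter=100)  # softmax regression on raw pixels
clf.fit(X_train, y_train)
print("test error: %.2f%%" % (100 * (1 - clf.score(X_test, y_test))))
```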
In 2004, a best-case error rate of 0.42 percent was achieved on the database by researchers using a new classifier called LIRA, a neural classifier with three neuron layers based on Rosenblatt's perceptron principles.[21]
Some researchers have tested artificial intelligence systems on versions of the database subjected to random distortions. The systems in these cases are usually neural networks, and the distortions used tend to be either affine distortions or elastic distortions.[7] Sometimes, these systems can be very successful; one such system achieved an error rate on the database of 0.39 percent.[22]
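A commonly used form of elastic distortion, following the general approach popularized by Simard et al., can be sketched as below; the smoothing and scaling parameters and the SciPy-based implementation are illustrative assumptions rather than the exact augmentation used in any particular result.

```python
# Sketch of an elastic distortion applied to a single 28x28 image.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_distort(image, alpha=36.0, sigma=6.0, rng=None):
    """Apply a random elastic deformation to a 2D image."""
    rng = rng or np.random.default_rng()
    # Random displacement fields, smoothed with a Gaussian and scaled by alpha.
    dx = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    # Resample the image at the displaced coordinates (bilinear interpolation).
    y, x = np.meshgrid(np.arange(image.shape[0]), np.arange(image.shape[1]), indexing="ij")
    coords = np.vstack([(y + dy).ravel(), (x + dx).ravel()])
    return map_coordinates(image, coords, order=1, mode="constant").reshape(image.shape)
```

Applying such distortions to each training image effectively expands the training set, which is the role they play in several of the results in the table below.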
In 2011, an error rate of 0.27 percent, improving on the previous best result, was reported by researchers using a similar system of neural networks.[23] In 2013, an approach based on regularization of neural networks using DropConnect was claimed to achieve a 0.21 percent error rate.[24] In 2016, the best performance of a single convolutional neural network was a 0.25 percent error rate, and as of August 2018 this remained the best result for a single convolutional neural network trained on MNIST with no data augmentation.[25][26] The Parallel Computing Center (Khmelnytskyi, Ukraine) also obtained an ensemble of only five convolutional neural networks that performs on MNIST at a 0.21 percent error rate.[27][28]
This is a table of some of the machine learning methods used on the dataset and their error rates, by type of classifier; an illustrative sketch of a small convolutional network follows the table:
Type | Classifier | Distortion | Preprocessing | Error rate (%) |
---|---|---|---|---|
Neural Network | Gradient Descent Tunneling | None | None | 0[29] |
Linear classifier | Pairwise linear classifier | None | Deskewing | 7.6[10] |
K-Nearest Neighbors | K-NN with rigid transformations | None | None | 0.96[30] |
K-Nearest Neighbors | K-NN with non-linear deformation (P2DHMDM) | None | Shiftable edges | 0.52[31] |
Boosted Stumps | Product of stumps on Haar features | None | Haar features | 0.87[32] |
Non-linear classifier | 40 PCA + quadratic classifier | None | None | 3.3[10] |
Random Forest | Fast Unified Random Forests for Survival, Regression, and Classification (RF-SRC)[33] | None | Simple statistical pixel importance | 2.8[34] |
Support-vector machine (SVM) | Virtual SVM, deg-9 poly, 2-pixel jittered | None | Deskewing | 0.56[35] |
Neural network | 2-layer 784-800-10 | None | None | 1.6[36] |
Neural network | 2-layer 784-800-10 | Elastic distortions | None | 0.7[36] |
Deep neural network (DNN) | 6-layer 784-2500-2000-1500-1000-500-10 | Elastic distortions | None | 0.35[37] |
Convolutional neural network (CNN) | 6-layer 784-40-80-500-1000-2000-10 | None | Expansion of the training data | 0.31[38] |
Convolutional neural network | 6-layer 784-50-100-500-1000-10-10 | None | Expansion of the training data | 0.27[39] |
Convolutional neural network (CNN) | 13-layer 64-128(5x)-256(3x)-512-2048-256-256-10 | None | None | 0.25[25] |
Convolutional neural network | Committee of 35 CNNs, 1-20-P-40-P-150-10 | Elastic distortions | Width normalizations | 0.23[20] |
Convolutional neural network | Committee of 5 CNNs, 6-layer 784-50-100-500-1000-10-10 | None | Expansion of the training data | 0.21[27][28] |
Convolutional neural network | Committee of 20 CNNS with Squeeze-and-Excitation Networks[40] | None | Data augmentation | 0.17[41] |
Convolutional neural network | Ensemble of 3 CNNs with varying kernel sizes | None | Data augmentation consisting of rotation and translation | 0.09[42] |
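As an illustration of the convolutional networks appearing in the table, the following is a minimal sketch of a small single CNN in PyTorch; the layer sizes, optimizer, and training schedule are arbitrary choices and are not expected to reproduce the cited error rates.

```python
# Minimal sketch of training a small CNN on MNIST (PyTorch assumed).
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 7 * 7, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

train_loader = DataLoader(
    datasets.MNIST("./data", train=True, download=True, transform=transforms.ToTensor()),
    batch_size=128, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                      # a few epochs as a demonstration
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```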