# S9 assignment

This repository contains Assignment 9, which is part of the ERA1 program at The School of AI.

In this assignment we have to achieve the following:
1. Train the network to achieve **99.4% validation accuracy** on the MNIST dataset.
2. The model should have **fewer than 20k parameters**.
3. Training should take **fewer than 20 epochs**.
4. Use **Batch Normalization**, **Dropout**, a **Fully Connected layer**, and **Global Average Pooling (GAP)**.

## Solution

1. We have used the **MNIST** dataset, which has **60,000** training images and **10,000** test images; each image is **28×28** grayscale (a loading sketch follows this list).
2. We have used **Batch Normalization** and **Dropout** to regularize the model.
3. We have used **Global Average Pooling (GAP)** to reduce the number of parameters.
4. We have used the **OneCycleLR** scheduler to train the model faster (see the training-loop sketch at the end of this README).
5. The model has **7,226** parameters, which is less than 20k.
6. The model achieves **99.42% validation accuracy** in **15 epochs**.
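
Below is a minimal sketch of how the MNIST data could be loaded with `torchvision`. The batch size and the normalization constants (the commonly used MNIST mean/std) are assumptions, not values taken from the repository.

```
import torch
from torchvision import datasets, transforms

# Commonly used MNIST mean/std; assumed here, not read from the repo.
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,)),
])

# Batch size of 128 is an illustrative choice.
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST("./data", train=True, download=True, transform=transform),
    batch_size=128, shuffle=True,
)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST("./data", train=False, download=True, transform=transform),
    batch_size=128, shuffle=False,
)
```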

## Model Architecture

The model architecture is as follows:
1. **Input** -> 28x28x1
2. **Convolution Block 1** -> Conv2d(1, 8, 3, padding=1) -> BatchNorm2d(8) -> ReLU -> Dropout(0.1)
3. **Transition Block 1** -> Conv2d(8, 8, 3, padding=1, stride=2) -> BatchNorm2d(8) -> ReLU -> Dropout(0.1)
4. **Convolution Block 2** -> Conv2d(8, 12, 3, padding=1) -> BatchNorm2d(12) -> ReLU -> Dropout(0.1)
5. **Transition Block 2** -> Conv2d(12, 12, 3, padding=1, stride=2) -> BatchNorm2d(12) -> ReLU -> Dropout(0.1)
6. **Convolution Block 3** -> Conv2d(12, 16, 3, padding=1) -> BatchNorm2d(16) -> ReLU -> Dropout(0.1)
7. **Transition Block 3** -> Conv2d(16, 16, 3, padding=1) -> BatchNorm2d(16) -> ReLU -> Dropout(0.1)
8. **Global Average Pooling** -> AvgPool2d(7)
9. **Output** -> Linear(16, 10)
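
A minimal PyTorch sketch of this architecture is shown below. Block names are illustrative, and the `log_softmax` output is an assumption consistent with the NLL-style losses in the training logs; the repository's actual module may differ in detail.

```
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    """Sketch of the listed architecture; the layers sum to 7,226 parameters."""
    def __init__(self, drop=0.1):
        super().__init__()
        def block(cin, cout, stride=1):
            # Conv -> BatchNorm -> ReLU -> Dropout, as in each block above.
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1, stride=stride),
                nn.BatchNorm2d(cout),
                nn.ReLU(),
                nn.Dropout(drop),
            )
        self.conv1 = block(1, 8)              # 28x28 -> 28x28,  80 + 16 params
        self.trans1 = block(8, 8, stride=2)   # 28x28 -> 14x14,  584 + 16
        self.conv2 = block(8, 12)             # 14x14 -> 14x14,  876 + 24
        self.trans2 = block(12, 12, stride=2) # 14x14 -> 7x7,    1,308 + 24
        self.conv3 = block(12, 16)            # 7x7 -> 7x7,      1,744 + 32
        self.trans3 = block(16, 16)           # 7x7 -> 7x7,      2,320 + 32
        self.gap = nn.AvgPool2d(7)            # 7x7 -> 1x1 (GAP)
        self.fc = nn.Linear(16, 10)           # 170 params

    def forward(self, x):
        x = self.conv1(x)
        x = self.trans1(x)
        x = self.conv2(x)
        x = self.trans2(x)
        x = self.conv3(x)
        x = self.trans3(x)
        x = self.gap(x).flatten(1)
        return F.log_softmax(self.fc(x), dim=1)
```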

## Model Summary

```
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1            [-1, 8, 28, 28]              80
       BatchNorm2d-2            [-1, 8, 28, 28]              16
              ReLU-3            [-1, 8, 28, 28]               0
           Dropout-4            [-1, 8, 28, 28]               0
            Conv2d-5            [-1, 8, 14, 14]             584
       BatchNorm2d-6            [-1, 8, 14, 14]              16
              ReLU-7            [-1, 8, 14, 14]               0
           Dropout-8            [-1, 8, 14, 14]               0
            Conv2d-9           [-1, 12, 14, 14]             876
      BatchNorm2d-10           [-1, 12, 14, 14]              24
             ReLU-11           [-1, 12, 14, 14]               0
          Dropout-12           [-1, 12, 14, 14]               0
           Conv2d-13             [-1, 12, 7, 7]           1,308
      BatchNorm2d-14             [-1, 12, 7, 7]              24
             ReLU-15             [-1, 12, 7, 7]               0
          Dropout-16             [-1, 12, 7, 7]               0
           Conv2d-17             [-1, 16, 7, 7]           1,744
      BatchNorm2d-18             [-1, 16, 7, 7]              32
             ReLU-19             [-1, 16, 7, 7]               0
          Dropout-20             [-1, 16, 7, 7]               0
           Conv2d-21             [-1, 16, 7, 7]           2,320
      BatchNorm2d-22             [-1, 16, 7, 7]              32
             ReLU-23             [-1, 16, 7, 7]               0
          Dropout-24             [-1, 16, 7, 7]               0
        AvgPool2d-25             [-1, 16, 1, 1]               0
           Linear-26                   [-1, 10]             170
================================================================
Total params: 7,226
Trainable params: 7,226
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.38
Params size (MB): 0.03
Estimated Total Size (MB): 0.41
----------------------------------------------------------------
```
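
The table above follows the output format of the `torchsummary` package; assuming the `Net` sketch from the Model Architecture section, a snippet like the following would reproduce it:

```
from torchsummary import summary  # pip install torchsummary

model = Net()  # Net from the sketch above
summary(model, input_size=(1, 28, 28), device="cpu")
```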

## Training Logs

```
Epoch 1:
Train set: Average loss: 0.1215, Accuracy: 96.25%
Test set: Average loss: 0.0535, Accuracy: 98.31%

Epoch 2:
Train set: Average loss: 0.0560, Accuracy: 98.18%
Test set: Average loss: 0.0385, Accuracy: 98.75%

Epoch 3:
Train set: Average loss: 0.0415, Accuracy: 98.68%
Test set: Average loss: 0.0316, Accuracy: 98.91%

Epoch 4:
Train set: Average loss: 0.0341, Accuracy: 98.92%
Test set: Average loss: 0.0268, Accuracy: 99.07%

Epoch 5:
Train set: Average loss: 0.0292, Accuracy: 99.07%
Test set: Average loss: 0.0230, Accuracy: 99.23%

Epoch 6:
Train set: Average loss: 0.0257, Accuracy: 99.18%
Test set: Average loss: 0.0205, Accuracy: 99.32%

Epoch 7:
Train set: Average loss: 0.0228, Accuracy: 99.29%
Test set: Average loss: 0.0189, Accuracy: 99.37%

Epoch 8:
Train set: Average loss: 0.0206, Accuracy: 99.36%
Test set: Average loss: 0.0175, Accuracy: 99.39%

Epoch 9:
Train set: Average loss: 0.0189, Accuracy: 99.41%
Test set: Average loss: 0.0163, Accuracy: 99.41%

Epoch 10:
Train set: Average loss: 0.0175, Accuracy: 99.46%
Test set: Average loss: 0.0155, Accuracy: 99.42%

Epoch 11:
Train set: Average loss: 0.0165, Accuracy: 99.49%
Test set: Average loss: 0.0148, Accuracy: 99.42%

...
```
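
Below is a minimal sketch of a train/test loop, wired to the OneCycleLR scheduler mentioned under Solution, that would produce logs in this format. It assumes the `Net`, `train_loader`, and `test_loader` sketches from earlier sections; the learning rates and momentum are illustrative assumptions, not the repository's settings.

```
import torch
import torch.nn.functional as F
from torch.optim.lr_scheduler import OneCycleLR

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = Net().to(device)  # Net from the Model Architecture sketch
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = OneCycleLR(optimizer, max_lr=0.1, epochs=15,
                       steps_per_epoch=len(train_loader))

def run_epoch(loader, train=True):
    model.train(train)
    total_loss, correct = 0.0, 0
    with torch.set_grad_enabled(train):
        for data, target in loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            loss = F.nll_loss(output, target)
            if train:
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
                scheduler.step()  # OneCycleLR steps once per batch
            total_loss += loss.item() * len(data)
            correct += (output.argmax(dim=1) == target).sum().item()
    n = len(loader.dataset)
    return total_loss / n, 100.0 * correct / n

for epoch in range(1, 16):
    train_loss, train_acc = run_epoch(train_loader, train=True)
    test_loss, test_acc = run_epoch(test_loader, train=False)
    print(f"Epoch {epoch}:")
    print(f"Train set: Average loss: {train_loss:.4f}, Accuracy: {train_acc:.2f}%")
    print(f"Test set: Average loss: {test_loss:.4f}, Accuracy: {test_acc:.2f}%\n")
```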
