I was running a script over 30k images and it was suddenly killed. What could have caused this?
mona@pascal:~/computer_vision/deep_learning/darknet$ ./darknet coco test cfg/yolo-coco.cfg yolo-coco.weights images
0: Convolutional Layer: 448 x 448 x 3 image, 64 filters -> 224 x 224 x 64 image
1: Maxpool Layer: 224 x 224 x 64 image, 2 size, 2 stride
2: Convolutional Layer: 112 x 112 x 64 image, 192 filters -> 112 x 112 x 192 image
3: Maxpool Layer: 112 x 112 x 192 image, 2 size, 2 stride
4: Convolutional Layer: 56 x 56 x 192 image, 128 filters -> 56 x 56 x 128 image
5: Convolutional Layer: 56 x 56 x 128 image, 256 filters -> 56 x 56 x 256 image
6: Convolutional Layer: 56 x 56 x 256 image, 256 filters -> 56 x 56 x 256 image
7: Convolutional Layer: 56 x 56 x 256 image, 512 filters -> 56 x 56 x 512 image
8: Maxpool Layer: 56 x 56 x 512 image, 2 size, 2 stride
9: Convolutional Layer: 28 x 28 x 512 image, 256 filters -> 28 x 28 x 256 image
10: Convolutional Layer: 28 x 28 x 256 image, 512 filters -> 28 x 28 x 512 image
11: Convolutional Layer: 28 x 28 x 512 image, 256 filters -> 28 x 28 x 256 image
12: Convolutional Layer: 28 x 28 x 256 image, 512 filters -> 28 x 28 x 512 image
13: Convolutional Layer: 28 x 28 x 512 image, 256 filters -> 28 x 28 x 256 image
14: Convolutional Layer: 28 x 28 x 256 image, 512 filters -> 28 x 28 x 512 image
15: Convolutional Layer: 28 x 28 x 512 image, 256 filters -> 28 x 28 x 256 image
16: Convolutional Layer: 28 x 28 x 256 image, 512 filters -> 28 x 28 x 512 image
17: Convolutional Layer: 28 x 28 x 512 image, 512 filters -> 28 x 28 x 512 image
18: Convolutional Layer: 28 x 28 x 512 image, 1024 filters -> 28 x 28 x 1024 image
19: Maxpool Layer: 28 x 28 x 1024 image, 2 size, 2 stride
20: Convolutional Layer: 14 x 14 x 1024 image, 512 filters -> 14 x 14 x 512 image
21: Convolutional Layer: 14 x 14 x 512 image, 1024 filters -> 14 x 14 x 1024 image
22: Convolutional Layer: 14 x 14 x 1024 image, 512 filters -> 14 x 14 x 512 image
23: Convolutional Layer: 14 x 14 x 512 image, 1024 filters -> 14 x 14 x 1024 image
24: Convolutional Layer: 14 x 14 x 1024 image, 1024 filters -> 14 x 14 x 1024 image
25: Convolutional Layer: 14 x 14 x 1024 image, 1024 filters -> 7 x 7 x 1024 image
26: Convolutional Layer: 7 x 7 x 1024 image, 1024 filters -> 7 x 7 x 1024 image
27: Convolutional Layer: 7 x 7 x 1024 image, 1024 filters -> 7 x 7 x 1024 image
28: Local Layer: 7 x 7 x 1024 image, 256 filters -> 7 x 7 x 256 image
29: Connected Layer: 12544 inputs, 4655 outputs
30: Detection Layer
forced: Using default '0'
Loading weights from yolo-coco.weights...Done!
Killed
mona@pascal:~/computer_vision/deep_learning/darknet/src$ dmesg | tail -5
[2265064.961124] [28256] 1007 28256 27449 11 55 271 0 sshd
[2265064.961126] [28257] 1007 28257 6906 11 19 888 0 bash
[2265064.961128] [32519] 1007 32519 57295584 16122050 62725 15112867 0 darknet
[2265064.961130] Out of memory: Kill process 32519 (darknet) score 941 or sacrifice child
[2265064.961385] Killed process 32519 (darknet) total-vm:229182336kB, anon-rss:64415788kB, file-rss:72412kB
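For reference, the kernel's OOM-killer line above already contains the key numbers. A minimal sketch (plain Python, with the sample line copied verbatim from the log above; nothing here is darknet-specific) that converts the kB figures to GiB:

```python
# Parse the kernel OOM-killer line from the dmesg output above and
# convert the kB figures to GiB, to see how far past physical RAM
# (64 GiB) plus swap (64 GiB) the process went.
import re

oom_line = ("Killed process 32519 (darknet) total-vm:229182336kB, "
            "anon-rss:64415788kB, file-rss:72412kB")

# Extract each "name:NNNkB" field into a dict of strings.
fields = dict(re.findall(r"(total-vm|anon-rss|file-rss):(\d+)kB", oom_line))
for name, kb in fields.items():
    gib = int(kb) / (1024 * 1024)  # kB -> GiB
    print(f"{name}: {gib:.1f} GiB")
# total-vm comes out around 218 GiB and anon-rss around 61 GiB,
# which matches the ~61G RES figure seen in htop.
```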
After the process was killed, I have:
$ top | grep -i mem
KiB Mem: 65942576 total, 8932112 used, 57010464 free, 50440 buffers
KiB Swap: 67071996 total, 6666296 used, 60405700 free. 7794708 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
KiB Mem: 65942576 total, 8932484 used, 57010092 free, 50440 buffers
KiB Swap: 67071996 total, 6666296 used, 60405700 free. 7794736 cached Mem
KiB Mem: 65942576 total, 8932608 used, 57009968 free, 50448 buffers
KiB Mem: 65942576 total, 8932480 used, 57010096 free, 50448 buffers
My vmstat output is:
$ vmstat -s -SM
64397 M total memory
8722 M used memory
305 M active memory
7566 M inactive memory
55674 M free memory
49 M buffer memory
7612 M swap cache
65499 M total swap
6510 M used swap
58989 M free swap
930702519 non-nice user cpu ticks
33069 nice user cpu ticks
121205290 system cpu ticks
4327558564 idle cpu ticks
4518820 IO-wait cpu ticks
148 IRQ cpu ticks
260645 softirq cpu ticks
0 stolen cpu ticks
315976129 pages paged in
829418865 pages paged out
38599842 pages swapped in
46593418 pages swapped out
2984775555 interrupts
3388511507 CPU context switches
1475266463 boot time
162071 forks
Another time, when I ran this script with only 3,000 images instead of 30k, I got this error:
28: Local Layer: 7 x 7 x 1024 image, 256 filters -> 7 x 7 x 256 image
29: Connected Layer: 12544 inputs, 4655 outputs
30: Detection Layer
forced: Using default '0'
Loading weights from yolo-coco.weights...Done!
OpenCV Error: Insufficient memory (Failed to allocate 23970816 bytes) in OutOfMemoryError, file /build/buildd/opencv-2.4.8+dfsg1/modules/core/src/alloc.cpp, line 52
terminate called after throwing an instance of 'cv::Exception'
what(): /build/buildd/opencv-2.4.8+dfsg1/modules/core/src/alloc.cpp:52: error: (-4) Failed to allocate 23970816 bytes in function OutOfMemoryError
Aborted (core dumped)
It used 61G of my 64G of RES memory, as shown in htop.
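Since the kernel is killing the process for exhausting RAM plus swap, one common workaround is to split the 30k image list into fixed-size chunks and run the detector on one chunk at a time, so memory is released between batches. A minimal sketch of the chunking, where `run_detector_on` is a hypothetical placeholder for whatever invokes darknet on one batch (not darknet's actual API):

```python
# Hypothetical batching sketch: bound the number of images any single
# detector invocation ever sees.
from pathlib import Path

def chunks(items, size):
    """Yield successive fixed-size slices of a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def run_detector_on(batch):
    # Placeholder: e.g. write `batch` to a temporary list file and
    # invoke ./darknet on it via subprocess, one chunk at a time.
    print(f"processing {len(batch)} images")

image_paths = sorted(Path("images").glob("*.jpg"))  # assumed layout
for batch in chunks(image_paths, 500):
    run_detector_on(batch)
```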