Roadmap 2.60:
=============
afl-fuzz:
- radamsa mutator (via dlopen()); see the loading sketch below
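
A minimal sketch, assuming the mutator is shipped as a shared object: load it
with dlopen()/dlsym() and call it through a function pointer. The library name
("libradamsa.so"), the exported symbol ("radamsa"), and its signature are
assumptions for illustration, not a confirmed interface. Build with -ldl.

  #include <dlfcn.h>
  #include <stdint.h>
  #include <stdio.h>

  /* assumed mutator signature: mutate `data` into `out`, return new length */
  typedef size_t (*mutate_fn)(uint8_t *data, size_t len, uint8_t *out,
                              size_t max_out, unsigned int seed);

  int main(void) {
    void *handle = dlopen("./libradamsa.so", RTLD_NOW);
    if (!handle) { fprintf(stderr, "dlopen: %s\n", dlerror()); return 1; }

    mutate_fn mutate = (mutate_fn)dlsym(handle, "radamsa");
    if (!mutate) { fprintf(stderr, "dlsym: %s\n", dlerror()); return 1; }

    uint8_t in[] = "hello", out[4096];
    size_t n = mutate(in, sizeof(in) - 1, out, sizeof(out), 42);
    printf("mutated to %zu bytes\n", n);

    dlclose(handle);
    return 0;
  }
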
gcc_plugin:
- laf-intel
libdislocator:
- add a wrapper for posix_memalign (see the sketch below)
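
A minimal, self-contained sketch of the posix_memalign() contract: argument
checking plus an over-allocate-and-round-up strategy. libdislocator would
instead route this through its own page-granular mmap() allocator; this only
demonstrates what the wrapper has to guarantee.

  #include <errno.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  static int my_posix_memalign(void **memptr, size_t alignment, size_t size) {
    /* POSIX: alignment must be a power of two, multiple of sizeof(void *) */
    if (!alignment || alignment % sizeof(void *) ||
        (alignment & (alignment - 1)))
      return EINVAL;

    void *raw = malloc(size + alignment + sizeof(void *));
    if (!raw) return ENOMEM;

    /* round up past a stashed copy of the raw pointer, so a matching
       free wrapper can recover it */
    uintptr_t addr = ((uintptr_t)raw + sizeof(void *) + alignment - 1) &
                     ~(uintptr_t)(alignment - 1);
    ((void **)addr)[-1] = raw;
    *memptr = (void *)addr;
    return 0;
  }

  static void my_aligned_free(void *p) { free(((void **)p)[-1]); }

  int main(void) {
    void *p;
    if (!my_posix_memalign(&p, 64, 100))
      printf("aligned: %d\n", (int)((uintptr_t)p % 64 == 0));
    my_aligned_free(p);
    return 0;
  }
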
qemu_mode:
- update to 4.x (probably this will be skipped :( )
- instrim for QEMU mode via static analysis (with r2pipe? or angr?)
Idea: the static analyzer outputs a map in which each edge that must be
skipped is marked with a 1. QEMU loads this map at startup in the parent
process (see the sketch below).
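
A hedged sketch of the skip-map idea: one byte per potential edge, 1 meaning
"do not instrument". The file name ("skipmap.bin"), the byte-per-edge format,
and the fixed size are assumptions for illustration, not an existing
qemu_mode interface.

  #include <stdint.h>
  #include <stdio.h>

  #define MAP_SIZE (1 << 16)

  static uint8_t skip_map[MAP_SIZE]; /* filled by the static analyzer */

  static int load_skip_map(const char *path) {
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    size_t n = fread(skip_map, 1, MAP_SIZE, f);
    fclose(f);
    return n == MAP_SIZE ? 0 : -1;
  }

  int main(void) {
    if (load_skip_map("skipmap.bin")) return 1;
    /* in the QEMU translation path, an edge would only be instrumented
       if skip_map[edge_id] == 0 */
    printf("skip edge 42? %d\n", skip_map[42]);
    return 0;
  }
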
custom_mutators:
- rip what Superion is doing into custom mutators for js, php, etc.
enhance the test/test.sh script to check whether the compcov features are
working correctly (especially float splitting)
The far away future:
====================
Problem: Average targets (tiff, jpeg, unrar) go through 1500 edges.
At afl's default map size (2^16) that means ~16 collisions and ~3 wrappings.
Solution #1: increase map size.
every +1 in the map size exponent decreases fuzzing speed by ~10% and halves
the collisions
the birthday paradox predicts that a first collision becomes likely (p ~ 0.5)
at this # of edges (see the bound worked out below the table):
mapsize => edges until a collision is likely
2^16 = 302
2^17 = 427
2^18 = 603
2^19 = 853
2^20 = 1207
2^21 = 1706
2^22 = 2412
2^23 = 3411
2^24 = 4823
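
These figures match the standard birthday bound: with a map of m slots, the
probability of at least one collision reaches 1/2 at roughly sqrt(2 m ln 2)
edges. Worked out in LaTeX:

  % probability of no collision among n edges in a map of size m:
  %   P(no collision) \approx e^{-n^2 / (2m)}
  % setting this to 1/2 and solving for n reproduces the table:
  \[
    n \approx \sqrt{2m \ln 2}, \qquad
    m = 2^{16}: \; n \approx \sqrt{2 \cdot 65536 \cdot \ln 2} \approx 302
  \]
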
Increasing the map size is an easy solution, but not a good one.
Solution #2: use a dynamic map size and collision-free basic block IDs
This only works in llvm_mode with llvm >= 9, though.
A potentially good future solution; Heiko/hexcoder is following this up
(see the conceptual sketch below).
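
A conceptual sketch, not AFL++'s real compiler pass: classic AFL hashes two
compile-time-random block IDs into a fixed map, so distinct edges can land in
the same slot, while a collision-free scheme hands every edge its own
sequential slot at instrumentation time, so the map can be sized exactly.

  #include <stdint.h>
  #include <stdio.h>

  #define MAP_SIZE (1 << 16)
  static uint8_t classic_map[MAP_SIZE];

  /* classic AFL edge encoding: XOR of two random IDs can collide */
  static void classic_edge(uint32_t cur_loc, uint32_t *prev_loc) {
    classic_map[(cur_loc ^ *prev_loc) % MAP_SIZE]++;
    *prev_loc = cur_loc >> 1;
  }

  /* collision-free: the pass increments a global counter per edge, so
     every edge gets a unique slot and the map size equals the edge count */
  static uint32_t next_edge_id;
  static uint32_t assign_edge_id(void) { return next_edge_id++; }

  int main(void) {
    uint32_t prev = 0;
    classic_edge(0x1234, &prev);
    printf("edge ids: %u %u (map size needed = %u)\n",
           assign_edge_id(), assign_edge_id(), next_edge_id);
    return 0;
  }
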
Solution #3: write instruction pointers to a big shared map
A 512 KB/1 MB shared map into which the instrumented code writes the
instruction pointer. The map must be big enough, but its size could be
controlled on the command line.
Good: complete coverage information, nothing is lost. The choice of analysis
impacts speed, but this can be decided via user options.
Neutral: a little bit slower, but no loss of coverage.
Bad: completely changes how afl uses the map and the scheduling.
Overall another very good solution; Marc Heuse/vanHauser is following this up
(see the sketch below).
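
A hedged sketch of solution #3: each instrumented location appends its raw
instruction pointer to a large trace buffer instead of hashing into a fixed
bitmap. The buffer would live in shared memory; its size and the overflow
policy are assumptions (the text suggests making the size command-line
controlled).

  #include <stdint.h>
  #include <stdio.h>

  #define TRACE_BYTES (512 * 1024) /* 512 KB, per the proposal */
  #define TRACE_WORDS (TRACE_BYTES / sizeof(uintptr_t))

  static uintptr_t trace_buf[TRACE_WORDS]; /* would live in SHM */
  static size_t trace_pos;

  static inline void log_ip(uintptr_t ip) {
    if (trace_pos < TRACE_WORDS) trace_buf[trace_pos++] = ip;
    /* on overflow: nothing is lost up to this point; the fuzzer could
       grow the buffer or flag the input as exceeding the trace */
  }

  int main(void) {
    log_ip((uintptr_t)&main); /* each basic block would emit such a call */
    printf("%zu IPs logged, first = %p\n", trace_pos, (void *)trace_buf[0]);
    return 0;
  }
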