Dynamic crawling is the automated interaction with a web page's interface elements using a headless browser: the crawler simulates user actions and observes the requests sent to the server.
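As a minimal sketch of this idea, the following TypeScript snippet uses the Playwright library to drive a headless browser, click interface elements, and log the outgoing requests. The target URL and the choice of clicking only buttons are illustrative assumptions, not part of any particular scanner.

```typescript
import { chromium } from 'playwright';

(async () => {
  const browser = await chromium.launch({ headless: true });
  const page = await browser.newPage();

  // Record every request the page issues while we interact with it.
  const observed: string[] = [];
  page.on('request', req => observed.push(`${req.method()} ${req.url()}`));

  await page.goto('https://app.example.com'); // illustrative target

  // Simulate user actions: click each button and watch the resulting traffic.
  for (const button of await page.locator('button').all()) {
    await button.click({ timeout: 1000 }).catch(() => {}); // element may be hidden or detached
  }

  await browser.close();
  console.log(observed.join('\n')); // endpoints discovered through interaction
})();
```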
Although dynamic crawling often works well, in some cases it fails to discover certain endpoints. The user interface may be too complex to crawl exhaustively, and performing every possible user action may take too much time. In such cases the crawler stops before completing and is likely to miss some endpoints.
Furthermore, the JS code that accesses an endpoint is sometimes impossible to trigger from the user interface at all; essentially, it is dead code. Such code is still of interest to the scanner, because the endpoints it references may still be handled by working parts of the server. We call such endpoints hidden endpoints.
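For illustration, consider a hypothetical fragment of a client-side bundle in which one fetch call is wired to the UI and another is not; all endpoint paths and element IDs here are invented. A dynamic crawler only ever observes the first request, while the second endpoint remains hidden even though the server may still serve it.

```typescript
// Reachable from the UI: a dynamic crawler observes this request when it clicks the button.
async function listReports(): Promise<Response> {
  return fetch('/api/v1/reports');
}

document.querySelector('#reports-btn')?.addEventListener('click', () => {
  void listReports();
});

// Dead code: no UI element ever calls exportReport(), so no simulated user action
// triggers this request. The endpoint it references is a hidden endpoint.
async function exportReport(): Promise<void> {
  await fetch('/api/v1/reports/export', { method: 'POST' });
}
```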