Ignoring robots.txt

What would be the best way to ignore robots.txt of a website?

I unchecked
Obey html-robots-noindex
Obey html-robots-nofollow
in the Advanced Crawler settings, but the website is still not crawlable when its robots.txt looks like this:
User-agent: *
Disallow: /
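For reference, this is the strictest possible policy: `Disallow: /` under `User-agent: *` forbids every path for every crawler, which is why even the front page is excluded. A quick stdlib check (the `MyCrawler` name is just a placeholder):

```python
# Parse the robots.txt shown above and test a few URLs against it.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse(["User-agent: *", "Disallow: /"])

# Every path is disallowed for every user agent, including the front page.
print(rp.can_fetch("MyCrawler", "https://example.com/"))      # False
print(rp.can_fetch("MyCrawler", "https://example.com/page"))  # False
```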

Also, pages from a site with a Disallow-all robots.txt are not included at all, not even the front page.
This is a big issue right now; being able to configure the crawler's robots.txt handling and to change the user agent name (more choices) should be added.
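While the product lacks a user-agent option, a homemade crawler can at least pick its own agent name when fetching. A minimal stdlib sketch (`MyCrawler/1.0` is a made-up name, not an option the crawler exposes):

```python
# Build a request that identifies itself with a custom User-Agent header.
import urllib.request

req = urllib.request.Request(
    "https://example.com/",
    headers={"User-Agent": "MyCrawler/1.0"},
)
# urllib stores header keys capitalized, hence "User-agent" here.
print(req.get_header("User-agent"))  # MyCrawler/1.0
```

Note that some sites filter on the User-Agent server-side regardless of robots.txt, so a recognizable browser-like string may behave differently from a bot-like one.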
I’m just gonna have to make my own search engine from scratch now

I think it’s more complicated than just ignoring it, as my homemade search engine has the same issue even without specifying anything related to robots.txt.
Someone suggested using a MITM proxy, or maybe running a headless browser on top of it; it’s a more complicated issue than I first thought.
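The MITM-proxy idea boils down to: sit between the crawler and the web and answer every request for robots.txt with a permissive policy, so the crawler believes it is allowed to crawl. A rough stdlib sketch of just the substituted response, using a localhost server as a stand-in for the intercepting proxy (this is not a full proxy, only an illustration of the override):

```python
# Serve a permissive robots.txt, as an intercepting proxy would do in place
# of the site's real Disallow-all file.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

PERMISSIVE = b"User-agent: *\nAllow: /\n"

class RobotsOverrideHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/robots.txt":
            # Substitute the permissive policy for the site's own file.
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(PERMISSIVE)))
            self.end_headers()
            self.wfile.write(PERMISSIVE)
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), RobotsOverrideHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/robots.txt"
overridden = urllib.request.urlopen(url).read()
server.shutdown()
print(overridden.decode())
```

If the crawler itself fetches pages through such a proxy, robots.txt-based blocking disappears; anything still failing after that (as reported above) points to server-side filtering, e.g. on the User-Agent, which is where a headless browser comes in.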