There are a lot of advantages to using a parser generator like bison or antlr, particularly while you're developing a language. You'll undoubtedly end up making changes to the grammar as you go, and you'll want to end up with documentation of the final grammar. Tools which produce a parser automatically from that documented grammar are really useful. They can also help give you confidence that the grammar of the language is (a) what you think it is and (b) not ambiguous.
If your language (unlike C++) is actually LALR(1), or even better, LL(1), and you're using LLVM tools to build the AST and IR, then it's unlikely that you will need to do much more than write down the grammar and provide a few simple actions to build the AST. That will keep you going for a while.
The usual reason that people eventually choose to build their own parsers, other than the "real programmers don't use parser generators" prejudice, is that it's not easy to provide good diagnostics for syntax errors, particularly with LR(1) parsing. If that's one of your goals, you should try to make your grammar LL(k) parseable (it's still not easy to provide good diagnostics with LL(k), but it seems to be a little easier) and use an LL(k) framework like Antlr.
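To make the "write down the grammar, add a few actions, get an AST" workflow concrete, here is a minimal sketch of what an LL(1) recursive-descent parser looks like by hand, for a toy expression grammar (`expr -> term (('+'|'-') term)*`, `term -> NUMBER | '(' expr ')'`). It also shows why top-down parsing lends itself to decent diagnostics: each rule knows exactly what token it expects next. All names here are illustrative, not from any real framework.

```python
import re

TOKEN = re.compile(r"\s*(?:(\d+)|(.))")

def tokenize(src):
    """Yield (kind, text, pos) triples; kind is 'num' or the literal char."""
    tokens = []
    for m in TOKEN.finditer(src):
        if m.group(1):
            tokens.append(("num", m.group(1), m.start(1)))
        else:
            tokens.append((m.group(2), m.group(2), m.start(2)))
    tokens.append(("eof", "", len(src)))
    return tokens

class Parser:
    def __init__(self, src):
        self.toks = tokenize(src)
        self.i = 0

    def peek(self):
        return self.toks[self.i][0]

    def expect(self, kind):
        k, text, pos = self.toks[self.i]
        if k != kind:
            # Each rule knows what it needs next, so the diagnostic can
            # name the expected token and point at the source column.
            raise SyntaxError(
                f"expected {kind!r} but found {text or 'end of input'!r} at column {pos}")
        self.i += 1
        return text

    def expr(self):
        node = self.term()
        while self.peek() in ("+", "-"):
            op = self.expect(self.peek())
            node = (op, node, self.term())   # simple action: build an AST tuple
        return node

    def term(self):
        if self.peek() == "num":
            return int(self.expect("num"))
        self.expect("(")
        node = self.expr()
        self.expect(")")
        return node

print(Parser("1 + (2 - 3)").expr())   # ('+', 1, ('-', 2, 3))
```

A parser generator writes the equivalent of this for you from the grammar file; the point of the sketch is only that in an LL(1) grammar, one token of lookahead (`peek`) is always enough to decide which rule applies.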
There is another strategy, which is to first parse the program text in the simplest possible way using an LALR(1) parser, which is more flexible than LL(1), without even trying to provide diagnostics. If the parse fails, you can then parse it again using a slower, possibly even backtracking parser, which doesn't know how to generate ASTs, but does keep track of source location and try to recover from syntax errors. Recovering from syntax errors without invalidating the AST is even more difficult than just continuing to parse, so there's a lot to be said for not trying. Also, keeping track of source location is really slow, and it's not very useful if you don't have to produce diagnostics (unless you need it for adding debugging annotations), so you can speed the parse up quite a bit by not bothering with location tracking.
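The shape of that two-pass strategy can be sketched as follows. The two "parsers" here are deliberately trivial stand-ins (they just check balanced parentheses); what matters is the structure: a fast first pass with no location tracking, and a slow diagnostic pass that only runs after a failure.

```python
def fast_parse(src):
    """First pass: no location tracking, just succeed or fail quickly."""
    depth = 0
    for ch in src:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False
    return depth == 0

def diagnostic_parse(src):
    """Second pass: slower, tracks line/column so errors can be located."""
    line, col = 1, 1
    opens = []   # stack of (line, col) for each unmatched '('
    for ch in src:
        if ch == "(":
            opens.append((line, col))
        elif ch == ")":
            if not opens:
                return f"unmatched ')' at line {line}, column {col}"
            opens.pop()
        if ch == "\n":
            line, col = line + 1, 1
        else:
            col += 1
    if opens:
        l, c = opens[-1]
        return f"unclosed '(' opened at line {l}, column {c}"
    return None

def parse(src):
    # The common case pays only for the fast pass; the diagnostic
    # machinery costs nothing until a parse actually fails.
    if fast_parse(src):
        return "ok"
    return diagnostic_parse(src)

print(parse("(a (b) c)"))   # ok
print(parse("(a (b c)"))    # unclosed '(' opened at line 1, column 1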
Personally, I'm biased against packrat parsing, because it's not clear what the actual language parsed by a PEG is. Other people don't mind this so much, and YMMV.
Why is it "not clear" what the actual language is? PEG is well-defined, even with all the cool hacks that packrat parsing allows (higher-order parsing and such).
@SK-logic: well-defined is not the same as clear. A hand-crafted parser written in C++ is well-defined. A Turing machine is well-defined. Yes, PEG is well-defined. But for all of them, the only way to see if a given string is in the language is to execute the code. (Of those three alternatives, PEG is the least bad, imo. But I still prefer formal context-free grammars. However, as I said, other people like PEG, and whatever works for you is cool with me.)
From my practical experience, PEGs are the clearest and easiest grammars to read. I can translate a language spec straight into a PEG with very few modifications. It is possible to obfuscate one, of course, but I have not seen a really bad PEG grammar yet, whereas there are many Yacc grammars that are unreadable beyond any hope.