flop count stopped having much relationship to reality in the early 1990s, when MIPS brought out the R8000, with its deeply pipelined, superscalar floating-point unit. You do still see terms like "petaflops" thrown around for supercomputers, but the figures quoted are usually absolute best-case numbers for hand-tuned assembly code, and real programs seldom achieve even half of the quoted figure.
flop counts do not take into account that addition and subtraction are very fast, multiplication somewhat slower, and division typically one of the slowest operations. Compilers are free to recode (A*2) as (A+A), and doing so can speed the code up a fair bit. Machines commonly have a fused multiply-add instruction that runs at much the same speed as a multiply alone: should that count as one flop or as two?