Everything can be done in constant time, at least during runtime, with a sufficiently large look-up table. It’s easy! If you want to simulate the universe exactly, you just need a table with n × m entries, where n is the number of Planck volumes in the universe and m is the number of quantum fields. Then you just compute all of them at compile time, and you have O(1) time complexity during runtime.
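Toy version of the trick, for anyone who wants to see it actually compile: a C++17 sketch where a (hypothetical, much-smaller-than-the-universe) table is filled in entirely by the compiler via constexpr, so every runtime query is a single array index. The table size and the "physics" function are made up for illustration.

```cpp
#include <array>
#include <cstdio>

// Stand-in "universe": 64 cells instead of ~10^185 Planck volumes.
constexpr std::size_t N = 64;

// Made-up state function; pretend this is the expensive simulation.
constexpr unsigned long long simulate_cell(std::size_t i) {
    return i * i * i + 7;
}

// Filled in entirely at compile time (needs C++17 for constexpr array indexing).
constexpr std::array<unsigned long long, N> make_table() {
    std::array<unsigned long long, N> t{};
    for (std::size_t i = 0; i < N; ++i)
        t[i] = simulate_cell(i);
    return t;
}

constexpr auto TABLE = make_table();

int main() {
    // Runtime cost: one index operation, i.e. O(1).
    std::printf("cell 42 -> %llu\n", TABLE[42]);
}
```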
I don’t think that’s quite how quantum physics works, but the gist probably* is right. Any finite system can have its states enumerated in a table. I’d think you’d need to calculate configurations over all of space-time at compile time, and then look up answers by boundary condition.
*
Interestingly, MIP*=RE from quantum complexity theory implies that genuinely infinite-dimensional quantum states could exist, with correlations that no finite-dimensional system can even approximate. It’s not clear whether real physics has that particular feature, though. If it does, no finite lookup table will do the job, even approximately.
O((n!)!)
I know I shouldn’t do it here, but let me ask a serious question: does the square in O(n!²) really matter? I have a confused intuition that the factorial grows so much faster than the square that it kind of disappears asymptotically.
No, that’s the joke. In O-notation you always keep only the dominant term, so O(n!²) doesn’t exist; it’s only O(n!).
Otherwise there would never be O(1) or O(n), because O(1) would imply that the algorithm has only a single line of instructions, and the same goes for O(n).
T(n) = O(n) means that there exists a single constant k such that T(n) < k·n for all sufficiently large n. By that definition, O((n!)²) is not the same as O(n!): if (n!)² were O(n!), we’d need (n!)² < k·n! for all large n, i.e. n! < k, which no constant satisfies. On the other hand, 10·n!, 10000·n!, and n! + n² (note the plus) are all O(n!).
Another way to think about this: suppose you believe that O(n) and O(n²) are distinct. Now plug in only numbers that are factorials (2, 6, 24, …): the gap between m and m² doesn’t go away on that subsequence, and that gap is exactly the gap between O(n!) and O((n!)²).
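For anyone who prefers numbers to definitions, here’s a quick (and deliberately naive) C++ sketch that prints n!, (n!)², and their ratio using doubles. The ratio is n! itself, so the square is anything but a constant factor.

```cpp
#include <cstdio>

int main() {
    double fact = 1.0;  // n!, kept in a double to avoid integer overflow
    for (int n = 1; n <= 15; ++n) {
        fact *= n;                 // update n! incrementally
        double sq = fact * fact;   // (n!)^2
        // (n!)^2 / n! = n!, which itself diverges, so O((n!)^2) is a
        // strictly bigger class than O(n!): no constant k can bridge it.
        std::printf("n=%2d  n! = %.3e  (n!)^2 = %.3e  ratio = %.3e\n",
                    n, fact, sq, sq / fact);
    }
}
```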