
Not everyone can be a Gershwinnar

A mellow afternoon in the Té Café - some productivity with work things, while Gershwin plays in the background.

Thoughts recycle ideas of people not eating too much 「foreign food」 because they see it as alien and don't want to dilute their identity too much, then frustration with anti-intellectualism and people suggesting that scientific institutions need a "man of action" to "straighten out" the academes - a sensed fear of truth that splits truth into two frameworks, the "theoretical actual truth" and the "public truth", the latter being one in which people allow for the things we want or need to be true, and decry the academes for bringing the two truths too close together.

Musings on factorising numbers... If we were given the task of finding all the factors of a large number, and were doing it by hand, we would probably do well to take advantage of certain tricks to radically prune our search space - for example, if the number in question is odd, we could rule out every even number as a possible factor. We learn a number of these tricks in maths-by-hand. We might not be inclined towards such ideas when writing software, for three related reasons:

  • The numerous tests we might run to disqualify a candidate are themselves slow
  • Modern CPUs are heavily pipelined and speculative, and data-dependent conditionals screw that up
  • The way ALUs are designed, the hard arithmetic is already done largely in parallel, in a very small number of clock cycles
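
To make the by-hand pruning concrete, here is a minimal sketch in Python (nothing from the original musing - the function name is made up and the choice of tricks is arbitrary) of trial division that uses a few familiar hand rules (parity, digit sum for 3, last digit for 5) to disqualify candidates before dividing. On a modern CPU the extra tests generally cost more than the divisions they avoid, which is roughly what the list above is getting at.

    import math

    def factors_with_hand_pruning(n):
        """Toy trial division that applies a few 'by hand' divisibility tricks
        to throw out candidate factors before ever dividing. Illustrative only:
        on a modern CPU the plain division is usually cheaper than the tests."""
        n_is_odd = n % 2 == 1
        three_divides_n = sum(int(d) for d in str(n)) % 3 == 0   # digit-sum rule
        five_divides_n = n % 10 in (0, 5)                        # last-digit rule

        found = set()
        for cand in range(1, math.isqrt(n) + 1):
            # Hand tricks: a candidate with a prime factor that n lacks can't divide n.
            if n_is_odd and cand % 2 == 0:
                continue
            if not three_divides_n and cand % 3 == 0:
                continue
            if not five_divides_n and cand % 5 == 0:
                continue
            if n % cand == 0:            # the division the tests tried to avoid
                found.update((cand, n // cand))
        return sorted(found)

    print(factors_with_hand_pruning(3 * 7 * 11 * 13))   # [1, 3, 7, 11, 13, 21, ...]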
Could we imagine, given an architecture where one could use microcode to tweak the ALUs, efficiently implementing these tricks in a way that would let us factor more quickly? Probably not, but if we could, how would it work? Presumably we'd try to reduce the hand-tricks to things that could be done within a single bitframe (or a small set of them) without carry, and parallelise each such frame as a separate disqualification test against a separate potential factor, using the result as a mask to prune numbers from our search space. That's probably about as far as we can take the idea - in fact it wouldn't give us any benefit on a real computer, because all the legwork would be much slower than a hardware division operation (and we'd want the suboperations done in far less than a clock tick if we were even to try to be competitive). Conceivably, if we were using exact-precision software libraries, and if we imagine that the state of the art there is for them not to be particularly fast, these tricks might be worthwhile.
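
Sketching the mask idea a little further (again just illustrative Python with made-up names - nothing here models real microcode or ALU lanes): evaluate the cheap tests for a whole block of candidates at once, pack the results into a bitmask, and spend real divisions only on the survivors. Even organised this way, the bookkeeping dwarfs the cost of the divisions it saves, which is why the idea probably stops here.

    import math

    def survivor_mask(n, candidates):
        """Pack one 'might still divide n' bit per candidate into an integer.
        A microcoded/hardware version would evaluate the cheap tests for every
        lane at once; this loop only sketches the data flow, not the speed."""
        n_odd, n_mod3, n_mod5 = n & 1, n % 3, n % 5
        mask = 0
        for i, c in enumerate(candidates):
            dead = (n_odd and c % 2 == 0) \
                or (n_mod3 and c % 3 == 0) \
                or (n_mod5 and c % 5 == 0)
            if not dead:
                mask |= 1 << i
        return mask

    def factors_via_mask(n, block=64):
        """Process candidates in blocks: build a survivor mask per block, then
        divide only by the candidates whose bit survived."""
        found, limit = set(), math.isqrt(n)
        for start in range(1, limit + 1, block):
            cands = range(start, min(start + block, limit + 1))
            mask = survivor_mask(n, cands)
            for i, c in enumerate(cands):
                if (mask >> i) & 1 and n % c == 0:
                    found.update((c, n // c))
        return sorted(found)

    print(factors_via_mask(3 * 7 * 11 * 13))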

The relevant big differences between how we do maths and how computers do it are that we think of numbers the same way we think of linguistic units - when we solve a formula, anything more than one or two digits is broken down into its individual glyphs and handled in those smaller chunks (two digits are an exception - we might remember that 12² is 144 and have a few other slightly wider-than-single-glyph operations, but our general way of working with numbers is on a glyph-by-glyph basis. This is probably due to teaching and memory limitations - some people might be fine memorising all the two-digit multiplication (and other) tables and using two-digit chunking for everything, but most people would have difficulty). When we break a big number down into chunks, the experienced human will note any special characteristics of the number and use those to accelerate operations on it where possible (e.g. 200 is easier to work with than 704 for practically all operations). Even given these tricks, we're not going to be able to compete with an ALU using our styles of thought. I wonder whether silicon using such a pruning/disqualification model would be faster than a general-purpose, really fast ALU. In a broader sense, I wonder whether the tricks we have for maths are ever useful for computation on computers. Also, are there ALUs designed for arbitrary-precision (bignum) maths?
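
On the bignum question: the glyph-by-glyph style maps fairly directly onto how arbitrary-precision libraries tend to work, except that their 'glyphs' are machine words rather than decimal digits (and they usually switch to faster algorithms for very large operands). A toy schoolbook multiplication over base-10 chunks, purely to illustrate the correspondence - the names are invented and nothing below reflects any particular library:

    def to_digits(n):
        """Little-endian list of decimal digits - the 'glyphs' a person works with."""
        digits = []
        while True:
            digits.append(n % 10)
            n //= 10
            if n == 0:
                return digits

    def schoolbook_multiply(a, b):
        """Multiply one digit at a time with explicit carries - essentially what a
        person does on paper, and structurally what a bignum library does with
        machine-word limbs in place of decimal digits."""
        da, db = to_digits(a), to_digits(b)
        result = [0] * (len(da) + len(db))
        for i, x in enumerate(da):
            carry = 0
            for j, y in enumerate(db):
                total = result[i + j] + x * y + carry
                result[i + j] = total % 10
                carry = total // 10
            result[i + len(db)] += carry
        # Reassemble the digit list into an integer.
        return sum(d * 10 ** k for k, d in enumerate(result))

    assert schoolbook_multiply(704, 200) == 704 * 200
    print(schoolbook_multiply(12345, 6789))   # 83810205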

The existence of this plugin is highly amusing.
