A mellow afternoon in the Té Café - some productivity with work things, while Gershwin plays in the background.
Thoughts recycle ideas of people not eating too much 「foreign food」 because they see it as alien and don't want to dilute their identity too much. Then frustration with anti-intellectualism and people suggesting that scientific institutions need a "man of action" to "straighten out" the academes - a sensed fear of truth that divides it into two frameworks: the "theoretical actual truth" and the "public truth". In the latter, people allow for things that we want or need to be true, and decry the academes for bringing the two truths too close together.
Musings on factorising numbers...If we were given the task of finding all the factors of a large number, and were doing it by hand, we would probably do well to take advantage of certain tricks to radically prune our search space - for example, if a number in question is odd, we could prune any numbers that are even from being factors. We learn a number of these tricks in maths-by-hand. We might not be inclined towards such ideas when writing software, for three related reasons:
- The numerous tests we might run to disqualify a candidate are themselves slow
- Modern CPUs are heavily speculative, and hard-to-predict conditionals stall their pipelines
- The way ALUs are designed, the difficult work is done mostly in parallel, and mostly in a very small number of clocks anyway
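To make the trade-off concrete, here is a minimal sketch (in Python, my choice - the entry names no language) of trial-division factoring with one by-hand pruning trick applied: when the number is odd, even candidates are skipped entirely, halving the search space. The function name and structure are my own illustration, not anything from the entry.

```python
def factors(n):
    """Find all factors of n by trial division up to sqrt(n).

    By-hand pruning trick: if n is odd, no even number can divide it,
    so we only test odd candidates (step 2 instead of 1).
    """
    step = 2 if n % 2 else 1  # n odd -> only odd candidates
    found = set()
    d = 1
    while d * d <= n:
        if n % d == 0:
            found.update((d, n // d))  # each hit yields a cofactor too
        d += step
    return sorted(found)
```

Note that the pruning here is free of per-candidate branching - it is baked into the loop stride - which is the sort of form such tricks need to take before they help rather than hurt on speculative hardware.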
The relevant big difference in how we do maths and how computers do it is that we think of numbers the same way we think of linguistic units - when we solve a formula, anything more than one or two digits is broken down into its individual glyphs and handled in those smaller chunks (two digits are an exception - we might remember that 15² is 225 and have a few other slightly wider-than-single-glyph operations, but our general way of working with numbers is on a glyph-by-glyph basis. This is probably due to teaching and memory limitations - some people might be fine memorising all the two-digit multiplication (and other) tables and using 2-unit chunking for everything, but most people would have difficulty). When we break down a big number into chunks, the experienced human will note any special characteristics of the number and use those to accelerate operations on it where possible (e.g. 200 is easier to work with than 704 for practically all operations). Even given these tricks, we're not going to be able to compete with an ALU using our styles of thought. I wonder if silicon using such a pruning/disqualification model would be faster than a general-purpose really fast ALU. In a broader sense, I wonder if the tricks we have for maths are ever useful for computation on computers. Also, are there ALUs designed for arbitrary precision (bignum) maths?
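Interestingly, glyph-by-glyph chunking is exactly how arbitrary-precision software works, just with bigger "glyphs": bignum libraries such as GMP store a number as a list of machine-word-sized limbs and apply the schoolbook algorithms over them. A sketch of schoolbook multiplication over digit lists (base 10 for readability; the function and representation are my own illustration):

```python
def bignum_mul(a, b, base=10):
    """Schoolbook multiplication of two numbers stored as digit lists,
    least-significant digit first - the same glyph-by-glyph procedure
    we use on paper, with an explicit carry.
    """
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(a):
        carry = 0
        for j, db in enumerate(b):
            total = result[i + j] + da * db + carry
            result[i + j] = total % base   # digit stays in range
            carry = total // base          # overflow moves left
        result[i + len(b)] += carry
    while len(result) > 1 and result[-1] == 0:
        result.pop()  # trim leading zeros (end of little-endian list)
    return result
```

For example, 12 × 34 with `a = [2, 1]` and `b = [4, 3]` yields `[8, 0, 4]`, i.e. 408. Real libraries use base 2⁶⁴ limbs and switch to cleverer algorithms (Karatsuba and beyond) for large operands, but the chunked structure is the same.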
The existence of this plugin is highly amusing.