Creating a Measure for Primality or Compositeness

The search for prime numbers has fascinated mathematicians for centuries. Various approaches have been taken, but no perfect method has yet been discovered. Perhaps one of the problems is the definition of prime itself. Prime suffers from having a negative definition: a prime number is a number having NO factors other than one and itself. Prime numbers are all the whole numbers which are NOT composite. Composite numbers are any whole numbers which have factors other than one and themselves. This is also a weak definition. This problem might be addressed by one of two approaches.

Written 2001

Formatted 2010

 

[1] Create a new definition for Prime:

A prime number could be defined as any odd number that can be written as the difference of squares in one and only one way. [e.g.: Prime: 3 = 2² - 1² only, 11 = 6² - 5² only; vs. Not Prime: 9 = 5² - 4² = 3² - 0².] Another way to state this is any number that appears only in the difference of one diagonal. This definition has two problems: it misses 2, and it's still a weak definition. But it might offer new insights.
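Because an odd number n = a² - b² factors as (a - b)(a + b), each difference-of-squares form corresponds to a factor pair of n. Here is a small Python sketch of the definition (the function names are my own, and this is just one way to count the forms):

    def diff_of_squares_reps(n):
        """All pairs (a, b) with a*a - b*b == n and b >= 0, for odd n > 0.

        Every factor pair n = d * e with d <= e gives a = (d + e) / 2 and
        b = (e - d) / 2; for odd n both are whole numbers.
        """
        reps = []
        d = 1
        while d * d <= n:
            if n % d == 0:
                e = n // d
                reps.append(((d + e) // 2, (e - d) // 2))
            d += 1
        return reps

    def is_prime_by_this_definition(n):
        # An odd number is "prime" when it has exactly one such form.
        return n % 2 == 1 and n > 1 and len(diff_of_squares_reps(n)) == 1

    for n in (3, 9, 11):
        print(n, diff_of_squares_reps(n), is_prime_by_this_definition(n))
    # 3 [(2, 1)] True
    # 9 [(5, 4), (3, 0)] False
    # 11 [(6, 5)] True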

A generated prime number might be definable as any number that may be found using the generalization of Euclid's proof. This has the classic problem of missing some primes while finding some composites, but it gives a new approach.
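Here is one reading of that idea as a rough sketch, assuming the generalization means multiplying the first few primes together and adding one; the seed list and function name are mine:

    from math import prod

    def euclid_numbers(k):
        """The numbers p1*p2*...*pi + 1 for the first i primes, i = 1..k."""
        primes = [2, 3, 5, 7, 11, 13]   # seed primes, enough for this demo
        return [prod(primes[:i]) + 1 for i in range(1, k + 1)]

    print(euclid_numbers(6))
    # [3, 7, 31, 211, 2311, 30031] -- the first five are prime, but
    # 30031 = 59 * 509 is composite, and primes like 5, 11, 13 never appear.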

Try generating your own definitions for primes, or for numbers with qualities similar to those of primes.

 

 

 

[2] Create a new measure for primality or compositeness:

Rather than defining numbers to be either prime or composite, we could create a measure to determine how composite any number actually is. This might create new patterns to study or new ways to view compositeness.

Here are a few simple attempts to measure compositeness:

[A] Factor Count Approaches

Counting the factors (other than one and the number itself, so that a prime counts zero) obviously has merit, but it creates a bias towards large numbers: e.g., 1440 can have far more factors than 12 can. One solution would be to normalize each count by dividing by the number itself.

Another approach would be to recognize that for each factor larger than the square root of a number there is a corresponding factor smaller than the square root. That means the square root of a number limits the number of factors it may have. So we might divide the factor count by the square root.

[graph: factor count divided by the square root, for the first 36 counting numbers]

By this method a prime will have a value of zero, and the more composite a number is, the larger this value will be. In the graph, we can see that out of the first 36 counting numbers, 24, 36, and 12 are the most composite.
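A minimal sketch of measure [A], assuming the factor count excludes 1 and the number itself (that is what makes a prime score zero):

    def factors(n):
        """Factors of n other than 1 and n itself, so a prime has none."""
        return [d for d in range(2, n) if n % d == 0]

    def compositeness_by_count(n):
        # Normalize the factor count by the square root of n.
        return len(factors(n)) / n ** 0.5

    scores = {n: compositeness_by_count(n) for n in range(1, 37)}
    print(sorted(scores, key=scores.get, reverse=True)[:3])
    # [24, 36, 12]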

[B] Factors vs. Nonfactors

Instead of normalizing to the number or its square root, we might normalize the factor count to the count of numbers which are not factors, or, similarly, to the square root of the count of nonfactors.

[graph: factor count normalized by the nonfactors, for the first 36 counting numbers]

By this method, we can see that out of the first 36 counting numbers, 24 and 12 are the most composite.
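A sketch of measure [B], assuming the square-root variant (factor count divided by the square root of the nonfactor count); that choice is mine, made because the plain variant ranks small numbers like 4 and 6 highest, while the square-root variant reproduces 12 and 24 as the leaders:

    def compositeness_vs_nonfactors(n):
        """Factor count over the square root of the nonfactor count."""
        count = len([d for d in range(2, n) if n % d == 0])
        nonfactors = n - (count + 2)   # 1 and n always divide n
        if nonfactors <= 0:            # n = 1 and n = 2 have no nonfactors
            return 0.0
        return count / nonfactors ** 0.5

    scores = {n: compositeness_vs_nonfactors(n) for n in range(1, 37)}
    print(sorted(scores, key=scores.get, reverse=True)[:2])
    # [12, 24]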

[C] Factor Sum Approaches

Instead we might look at the sum of the factors. A prime has no factors (other than 1 and itself, which to avoid redundancy we don't add), so it sums to zero. 12 has factors 2, 3, 4, and 6, which sum to 2+3+4+6=15. Again, to avoid bias towards high numbers, we normalize by dividing by the number itself.

[graph: factor sum divided by the number, for the first 36 counting numbers]

By this method, we see that out of the first 36 counting numbers the most composite are: 36, 24, and 30.
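A sketch of measure [C]; as above, the factor list excludes 1 and n, so primes score zero:

    def compositeness_by_factor_sum(n):
        """Sum of the factors (excluding 1 and n), divided by n."""
        return sum(d for d in range(2, n) if n % d == 0) / n

    scores = {n: compositeness_by_factor_sum(n) for n in range(1, 37)}
    print(sorted(scores, key=scores.get, reverse=True)[:3])
    # [36, 24, 30]

Adding the excluded divisors 1 and n back into the sum gives σ(n), so this measure is a shifted form of the classical abundancy index σ(n)/n.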

[D] Factors Within a Given Region

In the leading triangle each factor appears exactly once. Two possible ways of defining the leading triangle are shown for 10.

[figure: two possible leading triangles for 10]

In the trailing triangle each factor appears exactly once. The trailing triangle is shown for 13.

[figure: trailing triangle for 13]

By defining leading and trailing triangles, will we be able to create operations that determine the number of factors within a region?

Here we see the leading triangle (12) intersect the trailing triangle (7), which creates a common-factors triangle containing 5, 6, and 7.

[figure: the intersection of leading triangle (12) and trailing triangle (7)]
 
 
