Simplifying Boolean functions before implementation has many advantages: it makes the production of hardware cheaper and faster while improving the response time of a hardware design. However, few algorithms are capable of solving Boolean functions of more than 6 variables in practical time. This paper elaborates a new heuristic method to simplify Boolean functions in less time. The abstract idea is to design a machine-learning-style Boolean binary tree, overfit it with all the minterm data, represent the tree in the form of a Boolean function, and use that function as our solution.
The complexity of a design is directly affected by the complexity of its Boolean functions: as the number of terms or literals increases, so does the complexity of the problem. This method uses a Boolean Binary Tree. The results are quite satisfactory, and expressions of more than 15 variables can be handled even for worst-case minterm sets. The method is partly inspired by the Shannon–Fano lossless compression method.
Our model is a form of the machine-learning concept known as decision trees, which work by splitting the input data according to a chosen parameter until a final outcome is reached. The Espresso algorithm can also be used for two-level logic minimization problems; it works by manipulating cubes that represent the minterms. The Quine–McCluskey algorithm is another method for minimizing Boolean functions: it finds all prime implicants of a function and then selects the essential prime implicants.
A Boolean Binary Tree is a data structure in which each node has two children, divided on the basis of a single bit.
Every node checks the condition of one bit and returns the literal X or !X depending on the condition. Once a leaf of the tree is reached, the reverse traversal starts and the result is passed back up the tree.
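As an illustration, a minimal Python sketch of such a node (the class and field names are ours, not the paper's code):

class Node:
    """One node of the Boolean Binary Tree: it tests a single bit
    (one variable) and routes minterms to the true or false child."""
    def __init__(self, var):
        self.var = var          # index of the variable tested at this node
        self.true = None        # subtree for minterms where the bit is 1
        self.false = None       # subtree for minterms where the bit is 0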
We use a greedy algorithm to simplify the problem. First, the minterms are converted to their binary equivalents. Then the entropy of every variable is calculated to determine which variable is the best possible starting point: a variable capable of separating as many minterms as possible has the lowest entropy. The goal is to separate the maximum number of minterms at the earliest stage.
Entropy of a variable, in the context of Boolean minterms, is the amount of switching, i.e. the number of transitions between 0s and 1s, that the variable goes through across the minterms of the function.
If we set the literal with the minimum entropy as our first node, most of our data is sorted immediately. For example, in the minterms Σ(1,3,5,7), C has the least entropy (its bit is 1 in every minterm), so dividing the data on C results in a single True node, and the tree does not grow in width.
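A minimal sketch of this entropy measure in Python (assuming entropy is the count of 0↔1 transitions in a variable's bit column across the sorted minterms; the helper name is ours):

def entropy(minterms, n_vars):
    # For each variable, count 0<->1 transitions down its bit column
    # across the sorted minterms; fewer transitions = lower entropy.
    out = []
    for i in range(n_vars):
        col = [(m >> (n_vars - 1 - i)) & 1 for m in sorted(minterms)]
        out.append(sum(a != b for a, b in zip(col, col[1:])))
    return out

# Sigma(1,3,5,7) over A, B, C (A is the most significant bit):
print(entropy([1, 3, 5, 7], 3))   # -> [1, 3, 0], so C has the least entropy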
The next step is to set aside the chosen variable and recursively call the same function on the remaining variables to further extend the tree. Once we hit a leaf, we return the literal; if the leaf is reached through the right (false) branch, the literal is negated.
If there is only one minterm, it is converted to its literal equivalent and used as is.
If the left and right groups of minterms, after masking out the minimum-entropy variable, are equal, the node is considered a leaf and the tree stops growing.
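Putting the splitting rule and both stopping rules together, a self-contained sketch of the recursion (our names and output format, not the paper's exact code):

def build(minterms, remaining, n_vars, names="ABCDEFGH"):
    # `remaining` holds the indices of variables not yet split on;
    # index 0 is the most significant bit.
    bit = lambda m, i: (m >> (n_vars - 1 - i)) & 1
    if len(minterms) == 1:                      # rule: a single minterm
        m = minterms[0]                         # becomes its product term
        return "".join(names[i] if bit(m, i) else "!" + names[i]
                       for i in remaining)
    # Greedy choice: the remaining variable with the fewest transitions.
    def transitions(i):
        col = [bit(m, i) for m in sorted(minterms)]
        return sum(a != b for a, b in zip(col, col[1:]))
    var = min(remaining, key=transitions)
    rest = [i for i in remaining if i != var]
    ones = [m for m in minterms if bit(m, var)]
    zeros = [m for m in minterms if not bit(m, var)]
    mask = ~(1 << (n_vars - 1 - var))
    if {m & mask for m in ones} == {m & mask for m in zeros}:
        # Rule: equal left/right groups -> the function does not depend
        # on var, so drop the variable and stop the tree from growing.
        return build(sorted({m & mask for m in zeros}), rest, n_vars, names)
    wrap = lambda lit, sub: lit if sub == "" else lit + "(" + sub + ")"
    if not zeros:                               # every minterm has var = 1
        return wrap(names[var], build(ones, rest, n_vars, names))
    if not ones:                                # every minterm has var = 0
        return wrap("!" + names[var], build(zeros, rest, n_vars, names))
    # Otherwise expand on var: f = var*f(var=1) + !var*f(var=0).
    return (wrap(names[var], build(ones, rest, n_vars, names)) + " + " +
            wrap("!" + names[var], build(zeros, rest, n_vars, names)))

print(build([1, 3, 5, 7], [0, 1, 2], 3))        # -> C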
We have implemented a memory for our binary tree that stores minterms–solution pairs; if the tree encounters the same minterms again, it returns the stored solution without recalculating.
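A minimal sketch of this memory, assuming a plain dictionary keyed on the minterm set (our construction, reusing build() from the sketch above):

cache = {}

def build_memo(minterms, remaining, n_vars):
    # Same minterms (and same remaining variables) -> same answer:
    # look it up instead of rebuilding the subtree.
    key = (frozenset(minterms), tuple(remaining))
    if key not in cache:
        cache[key] = build(minterms, remaining, n_vars)
    return cache[key]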
(Not implemented) We could use a sequence classification model to check whether the given minterms can be grouped at all; if they cannot, we can directly return the literal form of the minterms. This would drastically decrease the complexity. Even if the model predicts the wrong value, the solution remains correct: the tree construction still produces a logically equivalent expression, only less simplified.
We can build an intuition for Boolean minimization using the lattice of a hypercube. For the sake of explanation we use a 3D lattice (3 variables), but the idea extends to higher dimensions: each minterm is a vertex of the cube, and two minterms can be merged, dropping one literal, exactly when they are adjacent on the cube.
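For instance, treating 3-bit minterms as cube vertices, adjacency is Hamming distance 1 (a small illustration, not the paper's code):

from itertools import combinations

def adjacent(a, b):
    # Two vertices of the cube merge (dropping one literal) when they
    # differ in exactly one bit.
    return bin(a ^ b).count("1") == 1

print([(a, b) for a, b in combinations([1, 3, 5, 7], 2) if adjacent(a, b)])
# -> [(1, 3), (1, 5), (3, 7), (5, 7)]: the four edges of the C = 1 face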
To calculate a better, more simplified answer, we can apply a random-forest-style model to our Boolean binary tree: different trees choose different cuts on the minterms, giving multiple angles on the same question. Once we have all the resulting implicants, we can find the essential implicants from the full list.
As the problem is NP-hard, the number of possible problems grows rapidly: even with just 6 variables there are 2^(2^6) = 2^64 ≈ 1.84×10^19 possible functions. Our algorithm has O(2^n) time complexity, where n is the number of minterms rather than the number of variables, and O(n) space complexity; due to the safety nets and the nature of our algorithm, however, the average solution time is much smaller. The complexity with respect to the number of variables is O(n): for every new variable introduced to the system, the code runs one more time.
The performance is evaluated by increasing the number of variables and doubling the number of minterms at each step; for each variable count we took 50 random minterm cases drawn from a uniform distribution using NumPy.
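A sketch of how such test cases can be generated with NumPy (the exact sizes and the seed are our assumptions):

import numpy as np

rng = np.random.default_rng(0)

def random_cases(n_vars, n_minterms, n_cases=50):
    # n_cases minterm sets drawn uniformly, without repeats, from the
    # 2**n_vars possible minterms.
    return [sorted(rng.choice(2 ** n_vars, size=n_minterms, replace=False))
            for _ in range(n_cases)]

cases = random_cases(n_vars=6, n_minterms=16)   # e.g. 6 variables, 16 minterms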
The algorithm is capable of solving a function of any number of variables that present-day computers can handle. In the worst case, where the minterms are impossible to group, the complexity would be 2^n; however, the algorithm checks for the worst case first, and if it is detected the input is returned as the output so that the problem does not scale exponentially.
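One plausible reading of this check (an assumption on our part, not the paper's stated test): if no two minterms are Hamming-adjacent, nothing can be grouped, so the canonical sum of minterms is already the answer:

def ungroupable(minterms):
    # Assumption: if no pair of minterms differs in exactly one bit,
    # no grouping is possible and the input is its own best output.
    return all(bin(a ^ b).count("1") != 1
               for i, a in enumerate(minterms) for b in minterms[i + 1:])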
Quine–McCluskey is more accurate in terms of output, as it generates an expression with the minimum number of terms required to keep the input and output logically equivalent.
Although Quine–McCluskey is fast for a larger number of variables, the Boolean Tree is almost as quick, and it trades perfect minimization of the Boolean function for heuristic speed. By implementing ideas inspired by random forests and sequence classification models, we can simplify the problem further and obtain more accurate results.
An implementation of the model has been coded in Python and can be found on GitHub.