This article was originally published on May 18, 2020.
The popular representation of computer processes as endless strings of ones and zeros is so enduring, it may be hard to conceive of them any other way. But computers that only think in absolutes — one and zero, true and false — have real limitations. This is especially the case with the vague concepts of human experience and language, of which there are endless examples. The woman is tall. It’s cold today. The stock market is good right now. “Tall,” “cold,” and “good” all express values not of absolutes, but of subjective ranges with mushy edges that are nonetheless distinguishable to most people. After all, when you tell me someone is “tall,” I know more or less what you mean, even if you haven’t told me an exact value for the person’s height.
Fuzzy logic, a branch of computer science that sprang from a single research paper in 1965, is the computer scientist’s tool for capturing this vagueness in our linguistic concepts. Instead of measuring variables against a binary system of true and false, fuzzy logic measures them along a continuous spectrum between true and false. To take one of the above examples, when I say it’s cold, what I’m indicating to you is that the temperature is somewhere between two extremes — really hot and really cold — and more toward the colder end of the spectrum. Conceived of in this way, the idea is easy for computers to model: they simply assign each case a numerical value between 0 and 1. In other words, the relevant question with fuzzy logic isn’t whether something is true or false. It’s how true or false it is. And because of this, it’s perfect for modeling human experiences that operate not in the realm of black and white, but shades of gray.
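To make the “how true is it?” idea concrete, here is a minimal sketch of a fuzzy membership function for “cold.” The breakpoints (30°F as fully cold, 70°F as not cold at all) are my own illustrative assumptions, not values from the article:

```python
def cold_membership(temp_f: float) -> float:
    """Degree to which temp_f counts as 'cold', from 0.0 (not at all) to 1.0 (fully).

    The breakpoints below are illustrative assumptions, not standard values.
    """
    if temp_f <= 30:
        return 1.0   # 30°F or below: fully cold
    if temp_f >= 70:
        return 0.0   # 70°F or above: not cold at all
    # In between, truth fades linearly as the temperature warms up.
    return (70 - temp_f) / 40

print(cold_membership(30))  # 1.0 -- definitely cold
print(cold_membership(50))  # 0.5 -- somewhat cold
print(cold_membership(70))  # 0.0 -- not cold
```

A binary system would have to draw one hard line (“below 45°F is cold”); the fuzzy version instead answers with a degree, matching how a person would actually use the word.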
When computers operate according to these fuzzy principles, they can do some pretty cool things. Professor of Electrical and Computer Engineering Adnan Shaout, who’s been working in fuzzy logic since its initial heyday in the mid-1980s, says “fuzzy” (for short) first took hold when consumer electronics started incorporating more intelligent sensors. Things like the “popcorn” button on your microwave and the “delicates” setting on your washing machine owe their power to fuzzy logic. Shaout says with technologies like this, sensors are taking continuous readings of the environment and responding with constant adjustments to, say, cooking or drying time and intensity. The result is not only popcorn that’s not burned and clothes that don’t get fried, but appliances that are more energy efficient thanks to their optimized processes.
Despite that initial burst of interest, Shaout says fuzzy fell on hard times by the early 2000s. “We got very good at using it in these kinds of applications, but people thought, ‘Well, that’s it. That’s the limit.’ There simply weren’t any new advancements in the theory.” But Shaout says attitudes have shifted more recently. On the theoretical side, computer scientists like Shaout are discovering that fuzzy can enhance the power and efficiency of neural networks and deep learning systems. And it continues to be the engine that powers today’s sensor-dependent consumer devices. Smart thermostats; the image stabilization in our cameras that corrects for shaky hands; image processing that can turn a keyword into a list of all the cat photos you have on your phone; and new automatic braking systems in cars all wouldn’t be possible without fuzzy logic.
In fact, Shaout says our demand for ever-more intuitive technologies that think and act more like us may mean the best days are still ahead for this “old” branch of computer science.
“My feeling is that fuzzy is coming back even stronger than before,” Shaout says. “As humans, our needs are becoming more ‘greedy.’ We need more intelligence, more features, more speed, more security. We are demanding more of these intuitive conveniences in our technology all the time, and that’s where fuzzy will play a large role. It’s still one of our most powerful tools for mimicking human thinking.”