
Few Shot Learning AI accurately “knows” home appliances


NIALM (Non-Intrusive Appliance Load Monitoring) can “detect” individual appliances from their electrical power signatures. NIALM is used in homes and small buildings. Traditionally, training NIALM may require hundreds of labeled power-signal images for each type of device. But there is a much faster and more cost-effective approach than “traditional” machine learning.

Researchers from the University of Johannesburg applied Few Shot Learning (FSL) to NIALM. “Classic” FSL needs as few as 10 classified and labeled images per class to recognize devices with very high accuracy.

They adapted the process so that the AI (an artificial neural network) could choose the best training images on its own, making training even faster. After they also tuned some hyperparameters, seven test images were enough for the FSL model to recognize devices with 97.83% accuracy. A single training image (one-shot learning) gave an accuracy of 88.17% to 91.343%, depending on the number of classes during testing.

FSL – When machines learn to learn

Teaching an AI (an artificial neural network) to recognize a device’s power signal usually requires a lot of data. Typically, an AI needs hundreds of images tagged by humans to recognize each type of device, across different power ratings and operating states. An example of two operating states for an appliance is the wash and spin cycles of a washing machine.

All of this training data for an AI has to be created and labeled by humans, which becomes slow and expensive very quickly.

But there is another way for an AI to learn, one that requires very little labeled data. As few as 10 labeled training images can be enough for extremely accurate image classification.

For example, let’s say an AI is trained this way with ten images each of elephants, tigers, and bears. When the AI is “tested” with an unlabeled image of a large male lion, it must recognize that the lion is similar to a tiger, but not the same. The AI should then decide on its own to create a new object class for the lion.

Also, when this AI is faced with an unlabeled image of a lion cub, it should be able to put the cub in the same class as the male lion.

This type of AI Machine Learning (ML) algorithm is called Few Shot Learning (FSL). It is a form of Meta-Learning, or ‘learning to learn’.

FSL is already powering gigantic language models at dominant global tech companies. Computer vision systems that check passports against travelers’ faces at some airports also use FSL.


Few Shot Learning is really about training an AI neural network with data, even incomplete data about a class of objects, says Professor Yanxia Sun of UJ’s Department of Electrical and Electronic Engineering Sciences. Sun is the lead author of the study.

“As we train a neural network with training images, the AI learns the characteristics of each animal or object on its own.”

In the tiger vs. lion example, an FSL AI learns cat whiskers, cat eyes, fur, and cat tails from tiger images. It has never seen a lion before. But when the AI is tested with a lion image, it should recognize the lion as a similar, but not identical, object class to the tiger.

NIALM – A power consumption meter for many devices

NIALM is used in small commercial buildings or in homes, to measure the amount of energy consumed by each appliance or piece of equipment.

NIALM uses “power disaggregation” to separate the combined power consumption signal of many devices turned on at the same time, on the same electrical phase, using a single measuring device. That is much easier and faster than physically connecting a power meter to each device in turn.
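The study does not detail its disaggregation step, but the basic idea behind power disaggregation can be sketched with a classic combinatorial approach: given known steady-state power draws for each appliance (the wattages and appliance names below are made-up examples), find the on/off combination that best explains a single aggregate meter reading.

```python
from itertools import product

# Hypothetical steady-state power draws (watts) for three appliances.
SIGNATURES = {"kettle": 2000.0, "fridge": 150.0, "laptop": 60.0}

def disaggregate(total_watts):
    """Brute-force combinatorial disaggregation: try every on/off
    combination of the known appliances and keep the one whose summed
    draw best matches the single aggregate reading."""
    names = list(SIGNATURES)
    best_states, best_err = None, float("inf")
    for states in product([0, 1], repeat=len(names)):
        predicted = sum(s * SIGNATURES[n] for s, n in zip(states, names))
        err = abs(total_watts - predicted)
        if err < best_err:
            best_states, best_err = states, err
    return {n: bool(s) for n, s in zip(names, best_states)}

print(disaggregate(2150.0))  # kettle and fridge on, laptop off
```

Real NILM systems use richer features (transients, reactive power) because brute force scales poorly, but this illustrates why one meter can stand in for many.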

In some countries, home smart meters store data on each device’s power consumption and send it to power utilities. In other countries, smart meters also make energy consumption data available to homeowners.

Power consumption signal to digital image

In this study, the researchers trained their FSL AI on NIALM images of electrical load signals from various household appliances.

They obtained the power consumption signals by plugging a power analyzer (Tektronix PA1000) and each device in turn into a power strip extension. With the analyzer running, each device was cycled on and off while the analyzer recorded power consumption over time. For the laptop and the desktop computer, entire boot sequences were recorded.

The power analyzer converts the device’s analog power consumption signal into digital data. This data was then converted into Gramian Angular Summation Fields (GASF), which look like brightly colored tiles.

The 400 × 400 pixel color GASF images were then converted to grayscale and downscaled to 28 × 28 pixels. This reduced the complexity of the algorithms and the computational resources required.
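The GASF transform itself is compact enough to sketch: rescale the 1-D power signal to [-1, 1], map each sample to an angle, and form the matrix of summed-angle cosines. This is a generic minimal implementation, not the paper's exact pipeline (which also handles color rendering and resizing).

```python
import numpy as np

def gasf(series):
    """Gramian Angular Summation Field of a 1-D signal:
    rescale to [-1, 1], take phi_i = arccos(x_i), and return
    the matrix cos(phi_i + phi_j)."""
    x = np.asarray(series, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1  # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))           # polar encoding
    return np.cos(phi[:, None] + phi[None, :])       # pairwise angular sums

signal = np.sin(np.linspace(0, 2 * np.pi, 28))  # stand-in for a power trace
img = gasf(signal)
print(img.shape)  # (28, 28)
```

The resulting matrix is symmetric with entries in [-1, 1], which is why GASF images render as the tile-like patterns the article describes.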

Acceleration of training

“Classic” FSL is a two-step process: training and testing. In this study, the researchers added a step to speed up the selection of highly suitable training data, which in turn speeds up the training process itself.

“We have increased the accuracy of the FSL algorithm by implementing an initial assessment of how easy or appropriate our data is for metric learning. We call this a similarity test,” says Dr Liston Matindife, the first author of the study.

During the study, Matindife was a PhD student at the University of Johannesburg. He is currently teaching in Zimbabwe, at the National University of Science and Technology.

“If a GASF image fails the similarity test, it means that the data needs more pre-processing, especially in time series or waveform format, before being converted into a GASF image. Images that passed the similarity test allowed our model to learn faster,” adds Matindife.

FSL AI training and testing

To train the FSL AI, the researchers provided it with GASF images from 10 of the 14 device classes in the study, 10 images per class: a laptop booting, a laptop running MS Word, a desktop computer, a refrigerator, a two-plate stove, and a variety of low-energy lamps.

Next, they tested the FSL AI to see how well it had learned to recognize or classify devices, and to create new classes for devices it had never seen before.

For the test, they fed the FSL AI images of four new classes, 10 images per class: a laptop playing video, a microwave, a kettle, and a compact fluorescent lamp (CFL).

High accuracy with few images

In the case of the laptop test images, the FSL algorithm was 97.83% accurate in classifying (recognizing) the test images as coming from the laptop’s power consumption signal, but in a new operating state: playing video, rather than booting or running MS Word as in the training images.

The FSL AI achieved this accuracy with only seven test images for booting and another seven for running MS Word.

From few-shot to one-shot learning

The researchers also varied the number of training images and measured the classification accuracy of the algorithm. The tests show that as the number of training images per class increases, the average accuracy rises from a minimum of 91.343% to a maximum of 97.83%. This shows that FSL can be applied to NIALM recognition.

“The development of NIALM algorithms requires a lot of data. For devices that have varying activation periods, we would generate a different number of dataset images per device. This imbalance in the number of training images for different devices would normally affect the training algorithm,” says Matindife.

“Our algorithm reduces the need for expensive acquisition of device-specific data. Using the prototypical network FSL algorithm makes it easy to work with datasets from devices that have different numbers of sample images,” he adds.

This study shows that it is possible to achieve an average accuracy of 90% with a single training image per class, using Siamese and prototypical FSL algorithms based on computer vision applied to GASF graphs. When a single training image gives sufficient accuracy, it is called one-shot learning. One-shot learning could solve a huge challenge in NIALM: the large number of training images required, Matindife says.
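The inference rule of a prototypical network is simple enough to sketch. Each class prototype is the mean of that class's support embeddings (with one-shot learning, the prototype is just the single support example), and a query is assigned to the nearest prototype. In this sketch, flattened GASF-like vectors stand in for the learned embeddings a real network would produce; the appliance names and data are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def prototype(embeddings):
    """Class prototype: the mean of the support embeddings for a class."""
    return np.mean(embeddings, axis=0)

def classify(prototypes, query):
    """Assign the query to the class with the nearest prototype
    (Euclidean distance) - the core inference rule of a prototypical
    network."""
    return min(prototypes, key=lambda c: np.linalg.norm(query - prototypes[c]))

# Flattened 28x28 images (784 values) stand in for learned embeddings.
centers = {"kettle": rng.random(784), "fridge": rng.random(784)}
# 3-shot support set: three noisy samples per class.
support = {c: [v + 0.01 * rng.random(784) for _ in range(3)]
           for c, v in centers.items()}
prototypes = {c: prototype(e) for c, e in support.items()}

query = centers["kettle"] + 0.01 * rng.random(784)  # unseen kettle sample
print(classify(prototypes, query))  # kettle
```

Because classification only compares distances to prototypes, classes with different numbers of support images pose no problem, which is the imbalance advantage Matindife describes.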

See failure before it happens

According to Sun, this FSL NIALM AI can be used to identify high-value devices that aren’t working properly, such as computers or refrigerators whose power supplies or compressor motors are slowly starting to fail.

An FSL AI could be trained on a few images of the power signal from a properly functioning household refrigerator, say of brand A and capacity B.

Then, in a house or small commercial building, if this AI “sees” the power signal of a refrigerator of brand C and capacity D with compressor motor problems, it should be able to report that the second device has a power signal problem, she says.

/Public release. This material from the original organization/authors may be ad hoc in nature, edited for clarity, style and length. The views and opinions expressed are those of the author or authors.