techminis

A naukri.com initiative

Deep Learning News

Hackernoon · 20h

ClassBD Outperforms Competitors in Real-World Bearing Fault Diagnosis Using PU Dataset

  • The article highlights the ClassBD approach for real-world bearing fault diagnosis on the PU dataset, where it overcomes limitations of existing approaches.
  • The PU dataset was collected by the Paderborn University (PU) Bearing Data Center and covers 32 bearings with both vibration and current signals. The bearings fall into three groups: healthy, artificially damaged, and bearings with real damage produced by accelerated lifetime tests.
  • In this study, the authors use only the bearings with real damage, to validate classification in real-world scenarios; the dataset comprises 5776 training, 1444 validation, and 380 test samples.
  • The ClassBD model outperforms its competitors across various noise levels and operating conditions, remaining competitive even in diverse high-noise scenarios.
  • In the challenging scenario of extremely limited sample availability, the ClassBD, EWSNet, and DRSN models all perform commendably, each achieving over 90% F1 score on the small dataset.


TechCrunch · 1d

A popular technique to make AI more efficient has drawbacks

  • Quantization, a widely used technique for making AI models more efficient, has limits, and the industry may soon be approaching them. Quantization lowers the number of bits needed to represent a model's parameters, but researchers have found that quantized models perform worse than their unquantized originals when the original was trained for a long time on lots of data, which spells bad news for AI firms training very large models. Scaling up models eventually yields diminishing returns, and data curation and filtering may affect efficacy.
  • Because labs are reluctant to train models on smaller datasets, the researchers suggest training models in low precision to make them more robust. The optimal balance has yet to be discovered, but they argue there are no shortcuts: bit precision matters, and future innovations will likely focus on architectures designed to make low-precision training stable.
  • The performance of quantized models depends on how they were trained and on the precision of the data types used. Most models today are trained at 16-bit ("half") precision and then post-train quantized to 8-bit precision. Low precision is attractive for inference costs, but it has its limits.
  • Contrary to popular belief, AI model inference is often more expensive in aggregate than model training. Google spent an estimated $191m training one of its Gemini models, but if the company used that model to generate 50-word answers to half of all Google Search queries, it would spend around $6bn a year.
  • Quantized models, with fewer bits representing their parameters, are less demanding mathematically and computationally, but quantization may carry more trade-offs than previously assumed.
  • The researchers argue the industry cannot simply keep scaling up models and training on ever-larger datasets, because some of these limitations cannot be engineered away. In their view, low quantization precision causes a noticeable step down in quality unless the original model has an extremely large parameter count.
  • AI models are not fully understood, and shortcuts that work in many other kinds of computation do not necessarily work in AI.
  • Kumar and his colleagues' study was small in scale, and they plan to test more models, but he believes there is no free lunch when it comes to reducing inference costs: since reducing bit precision is not sustainable indefinitely, effort will shift toward meticulous data curation and filtering, so that only the highest-quality data goes into smaller models.
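The core mechanics the article describes can be sketched in a few lines: mapping floating-point weights onto a small integer range and back, and watching the rounding error grow as the bit width shrinks. This is a minimal symmetric-quantization sketch with illustrative helper names, not the workflow of any particular toolchain.

```python
def quantize(weights, bits=8):
    """Map float weights onto signed integers with `bits` of precision."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for 8-bit
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate floats from the integer representation."""
    return [qi * scale for qi in q]

weights = [0.31, -0.62, 0.05, 0.98, -0.44]
q8, s8 = quantize(weights, bits=8)
q4, s4 = quantize(weights, bits=4)

# Rounding error grows as bit width shrinks: compare the worst-case
# reconstruction error at 8-bit versus 4-bit precision.
err8 = max(abs(a - b) for a, b in zip(weights, dequantize(q8, s8)))
err4 = max(abs(a - b) for a, b in zip(weights, dequantize(q4, s4)))
```

The same scale/round/rescale idea underlies real post-training quantization schemes, which add refinements such as per-channel scales and calibration data.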


Medium · 2h

What Goes Beyond the Prompt?

  • Tokenization gives the AI a structured representation of the input to process.
  • Transformers use self-attention to weigh how relevant each token is to every other token in the input.
  • Transformers process tokens in parallel, enabling faster computation and better context analysis.
  • The Transformer architecture consists of two main parts: an encoder and a decoder.
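The parallel, all-pairs nature of self-attention described above can be seen in a toy single-head pass: every token's output row mixes information from every other token at once. Weight matrices here are random stand-ins for trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                  # 4 tokens, 8-dim embeddings
x = rng.normal(size=(seq_len, d_model))  # token embeddings

# Project embeddings into query, key, and value spaces.
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv

scores = Q @ K.T / np.sqrt(d_model)      # pairwise token relevance
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
out = weights @ V                        # each row mixes all value vectors
```

Because `scores` is computed for all token pairs in one matrix product, no sequential loop over positions is needed, which is the parallelism the bullet points refer to.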


Medium · 9h

The Whispers in the Walls: The Invisible Terror

  • Emma moves into her green flat to escape her past.
  • Whispers start coming from the walls, initially dismissed as settling noises.
  • Whispers become louder and more distinct, mentioning secrets and observing Emma.
  • Emma discovers an old journal hidden in the floorboards, revealing information about the previous tenant, Vincent.


Medium · 16h

How I Made $500 a Week with This Simple System

  • The Revolutionary AI Bot System promises to help users earn up to $500 a week effortlessly.
  • The system is 100% done-for-you and operates with zero errors, making it reliable and suitable for anyone.
  • Users have reported earning passive income and substantial profits within a few weeks of using the AI Bot System.
  • The user-friendly interface and positive reviews make it a worthwhile tool for those looking to supplement their income.


Hackernoon · 20h

Study Finds ClassBD Outperforms Top Fault Diagnosis Methods in Noisy Scenarios

  • A recent study has found that ClassBD, a fault diagnosis method, outperforms top methods in noisy scenarios.
  • The study employed time-domain quadratic convolutional filters, frequency-domain linear filters, and integrated optimization with an uncertainty-aware weighting scheme.
  • Computational experiments were conducted on various noise conditions, and ClassBD demonstrated superior classification results.
  • The study also examined the feature extraction ability of quadratic and conventional networks.


Hackernoon · 20h

New AI System Enhances Fault Detection with Smarter Optimization Techniques

  • A new AI system has been developed to enhance fault detection using smarter optimization techniques.
  • The system utilizes quadratic neural networks and convolutional filters to extract cyclic features and improve fault diagnosis.
  • The method incorporates a joint loss function that combines the objective functions of fault detection and downstream classification tasks.
  • The system demonstrates flexibility and can be easily integrated with various 1D classifiers for improved performance.
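A joint loss of the kind described, combining a fault-detection objective with a classification objective, is often built with a learned per-task log-variance that scales each term (the uncertainty-aware weighting idea). The sketch below shows that general form; the paper's exact formulation may differ, and all names here are illustrative.

```python
import math

def joint_loss(bd_loss, cls_loss, s_bd, s_cls):
    """Uncertainty-weighted sum of two task losses.

    Each loss L_i is scaled by exp(-s_i), where s_i is a learned
    log-variance; adding s_i back penalizes ignoring a task entirely.
    """
    return (math.exp(-s_bd) * bd_loss + s_bd
            + math.exp(-s_cls) * cls_loss + s_cls)

# With both log-variances at zero, the scheme reduces to a plain sum
# of the fault-detection and classification objectives.
total = joint_loss(bd_loss=0.8, cls_loss=1.2, s_bd=0.0, s_cls=0.0)
```

During training, `s_bd` and `s_cls` would be optimized alongside the network weights, letting the model balance the two objectives automatically instead of hand-tuning a fixed weight.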


Hackernoon · 20h

How New Neural Networks Are Improving Signal Processing in Fault Detection

  • New neural networks are improving signal processing in fault detection.
  • The frequency-domain filter employs a neural network to manipulate the signal's frequency-domain representation.
  • This approach is referred to as a Fourier neural network.
  • The frequency filter is implemented as a fully connected neural network.
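The filtering pattern described, transform, reweight frequency bins, transform back, can be sketched as follows. The per-bin weights here are random placeholders standing in for the output of a learned fully connected layer; this is an illustration of the mechanism, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=256)                  # 1-D input signal

X = np.fft.rfft(x)                        # to the frequency domain
w = rng.uniform(0.0, 1.0, size=X.shape)   # per-bin filter weights
y = np.fft.irfft(X * w, n=x.size)         # filtered signal, back in time
```

Because the weights act multiplicatively on frequency bins, the network can learn to suppress noise bands and emphasize fault-related frequencies directly, which is the appeal of operating in the Fourier domain.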


Hackernoon · 20h

How Advanced Neural Networks Improve Signal Clarity and Fault Detection

  • The paper discusses the benefits of quadratic convolutional networks (QCNN) for the extraction of features from periodic and non-stationary signals.
  • QCNN employs a convolution kernel to convolve over a signal segment, performing cross-correlation and autocorrelation operations crucial for noise cancellation in bearing fault vibration signals.
  • Bearing fault signals are considered second-order cyclostationary signals with periodicity in their autocorrelation function.
  • The quadratic neuron in QCNN enhances fault-related signals from noise by combining cross-correlation and autocorrelation functions.
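A single quadratic neuron of the general form attributed to Fan et al. can be sketched in plain Python: two linear responses multiplied together (the interaction term that gives the correlation-like behavior) plus a weighted squared-input term. The weights below are arbitrary illustrative values.

```python
def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def quadratic_neuron(x, w1, b1, w2, b2, w3, b3):
    """General quadratic neuron: (w1.x + b1)(w2.x + b2) + w3.(x*x) + b3."""
    inner = (dot(x, w1) + b1) * (dot(x, w2) + b2)   # interaction term
    power = dot([xi * xi for xi in x], w3)          # squared-input term
    return inner + power + b3

x = [1.0, -2.0, 0.5]
y = quadratic_neuron(x, [0.2, 0.1, 0.0], 0.1,
                        [0.3, -0.1, 0.2], 0.0,
                        [0.05, 0.05, 0.05], -0.2)
```

When such neurons slide over a signal segment as a convolution kernel, the product of the two linear terms computes correlation-like interactions between shifted copies of the input, which is what lets a QCNN pick periodic fault impulses out of noise.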


Hackernoon · 20h

Quadratic Neural Networks Show Promise in Handling Noise and Data Imbalances

  • Quadratic neural networks (QCNN) have shown promise in handling noise and data imbalances.
  • Quadratic networks possess advantages in efficiency and feature representation compared to conventional neural networks.
  • Previous studies have successfully incorporated quadratic neural networks in bearing fault diagnosis, demonstrating superior performance under challenging conditions.
  • A dedicated strategy for initializing quadratic networks has been developed to improve stability and avoid gradient explosion during training.


Hackernoon · 20h

Researchers Propose Novel Framework Combining Time and Frequency Domain Filters

  • Researchers propose a novel framework that combines time and frequency domain filters for blind deconvolution.
  • The framework consists of a time domain quadratic convolutional filter and a frequency domain linear filter.
  • The time domain filter employs quadratic convolutional neural networks (QCNN) and an inverse QCNN layer for filtering and recovering the input signal.
  • The frequency domain filter uses fast Fourier transform (FFT) and an objective function in the envelope spectrum (ES) for optimization.
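The envelope spectrum (ES) mentioned as the optimization objective can be sketched with an FFT-based analytic signal: take the envelope as the magnitude of the Hilbert-transformed signal, then inspect the envelope's own spectrum. This is a generic textbook construction, not the paper's exact objective function.

```python
import numpy as np

def envelope_spectrum(x):
    """Envelope spectrum via an FFT-based Hilbert transform."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0           # double positive frequencies
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(X * h)   # analytic signal
    envelope = np.abs(analytic)     # instantaneous amplitude
    return np.abs(np.fft.rfft(envelope - envelope.mean()))

# An amplitude-modulated tone: the ES should peak at the modulation rate
# (8 cycles per window here), not at the 100-cycle carrier.
t = np.arange(1024) / 1024.0
x = (1.0 + 0.5 * np.cos(2 * np.pi * 8 * t)) * np.sin(2 * np.pi * 100 * t)
es = envelope_spectrum(x)
```

This is why the ES makes a natural objective for bearing faults: periodic impulses modulate the vibration signal, so the fault repetition rate shows up directly as a peak in the envelope spectrum.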


Hackernoon · 20h

Study Shows Advances in High-Order Neural Networks for Industrial Applications

  • A recent study has demonstrated advancements in high-order neural networks for industrial applications.
  • The study focuses on quadratic neural networks, which restrict the polynomial function to the second order to ensure stable training.
  • Various versions of quadratic neurons are discussed, with the approach proposed by Fan et al. considered the general form.
  • The researchers from multiple institutions collaborated on the study, providing valuable insights and findings for industrial applications.


Hackernoon · 20h

Researchers Develop Advanced Methods for Fault Diagnosis Using Blind Deconvolution

  • Researchers have developed advanced methods for fault diagnosis using blind deconvolution.
  • Blind deconvolution is considered an ill-posed problem in the absence of prior information.
  • Kurtosis is utilized as an optimization objective function in blind deconvolution.
  • Several optimization methods have been developed for blind deconvolution.
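Kurtosis works as a blind-deconvolution objective because impulsive, fault-like signals have much heavier tails than smooth ones, so maximizing kurtosis steers the deconvolution filter toward impulse-like outputs. A minimal sketch (test signals are synthetic illustrations):

```python
import math

def kurtosis(x):
    """Fourth standardized moment: E[(x - mean)^4] / var^2."""
    n = len(x)
    mean = sum(v for v in x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    return sum((v - mean) ** 4 for v in x) / (n * var ** 2)

# A smooth tone (kurtosis ~1.5) versus a sparse periodic impulse train
# (very high kurtosis), mimicking repeated bearing-fault impacts.
smooth = [math.sin(2 * math.pi * k / 64) for k in range(256)]
impulsive = [0.0] * 256
for k in range(0, 256, 64):
    impulsive[k] = 1.0
```

Because kurtosis rewards any spikiness, not specifically periodic fault impulses, it can latch onto isolated outliers, which is one reason later BD work (including the cyclostationarity-based objectives discussed in these articles) explores alternative criteria.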


Hackernoon · 20h

AI and Signal Processing Unite to Diagnose Machine Faults Faster

  • Researchers have combined artificial intelligence (AI) with signal processing to diagnose machine faults faster and more effectively. The classifier-guided blind deconvolution (ClassBD) approach co-optimizes blind deconvolution (BD)-based signal feature extraction and deep learning-based fault classification. The system uses two filters: a time-domain quadratic convolutional filter (QCNN) that extracts periodic impulses, and a second filter in the frequency domain. The ClassBD framework seamlessly integrates BD with deep learning classifiers by co-optimizing the parameters of both. The method was tested on three datasets, and the results show that ClassBD outperforms other methods in noisy conditions while providing better interpretability.
  • BD has been a successful approach for extracting bearing-fault-specific features from vibration signals under strong background noise. A major challenge, however, is integrating BD with fault-diagnosing classifiers, since the two have differing learning objectives and occupy separate optimization spaces when combined. Naive integration can cause BD problems such as enhancing the cyclic impulses of the fault signal while blurring the differences between fault severities. The system instead uses the classifier's information to instruct BD to extract the features needed to distinguish classes amid strong noise.
  • The ClassBD system comprises two neural network modules: a time-domain QCNN module and a module of linear filters for signals in the frequency domain. ClassBD integrates BD and deep learning classifiers by employing the classifier to guide the BD filters: the fault labels provide useful information for steering BD toward discriminative features. The authors state that ClassBD is the first method to diagnose bearing faults under heavy noise while providing good interpretability.
  • The quadratic neural filter strengthens the system's capacity to extract periodic impulses in the time domain, while the linear neural filter shapes signals in the frequency domain and improves BD performance. The entire ClassBD system is plug-and-play and can serve as the first layer of a deep learning classifier, while a physics-informed loss and an uncertainty-aware loss-weighting strategy jointly optimize the classifier and the BD filters.
  • The research team conducted computational experiments on three datasets, two public and one private. ClassBD outperformed the other methods in noisy conditions on all of them, delivering more accurate results and better interpretability.
  • In conclusion, combining AI and signal processing makes fault diagnosis in machinery faster and more effective, with better interpretability, high accuracy, and efficiency. It is an essential step toward ensuring the reliable operation of rotating machinery.


Medium · 22h

Building AI systems is fun.

  • Building production-grade AI requires a combination of engineering knowledge and practical AI.
  • It’s easy to get overwhelmed by the velocity of new developments in the AI ecosystem.
  • To avoid becoming less effective, pick one or two core domains to focus on deeply.
  • Create focused “no-noise” slots, plan learning in sprints, and implement structured breaks.
  • Understand the core principles of classical algorithms and foundational NLP methods.
  • Grasp the principles of Convolutional Neural Networks and deep learning frameworks.
  • Understand the principles of generative models like GANs, VAEs, and large language models.
  • Incremental updates can be exciting but rarely represent a radical leap.
  • Focus on underlying mechanisms and evidence of actual impact to cut through the noise.
  • Critical thinking saves you from chasing every buzzword-laden release.

