techminis
A naukri.com initiative

ML News

Image Credit: Arxiv

Learning Sign Language Representation using CNN LSTM, 3DCNN, CNN RNN LSTM and CNN TD

  • Existing sign language learning applications focus on demonstrating a sign in the hope that the student will copy it correctly.
  • This paper explores algorithms for real-time video sign translation and for grading the sign accuracy of new users.
  • The study compares popular algorithms, including CNN and 3DCNN, on Trinidad and Tobago Sign Language (TTSL) and American Sign Language (ASL) datasets.
  • The 3DCNN algorithm was the best-performing neural network, achieving 91% accuracy on the TTSL dataset and 83% on the ASL dataset.
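
The comparison above centers on the 3DCNN, which convolves over the time, height, and width axes of a video clip. A minimal single-channel 3D convolution (valid padding, stride 1) can be sketched in pure Python; real models use optimized library kernels, many channels, and learned weights:

```python
# Naive single-channel 3D convolution -- the core operation a 3DCNN
# applies to a sign-language video volume indexed [time][row][col].
# Illustrative sketch only.

def conv3d(volume, kernel):
    """volume, kernel: nested lists indexed [t][y][x]."""
    T, H, W = len(volume), len(volume[0]), len(volume[0][0])
    t_k, h_k, w_k = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for t in range(T - t_k + 1):
        plane = []
        for y in range(H - h_k + 1):
            row = []
            for x in range(W - w_k + 1):
                s = 0.0
                for dt in range(t_k):
                    for dy in range(h_k):
                        for dx in range(w_k):
                            s += volume[t + dt][y + dy][x + dx] * kernel[dt][dy][dx]
                row.append(s)
            plane.append(row)
        out.append(plane)
    return out
```

A 2x2x2 kernel over a 3x3x3 input yields a 2x2x2 output, shrinking each axis by kernel size minus one.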


Developing Cryptocurrency Trading Strategy Based on Autoencoder-CNN-GANs Algorithms

  • This paper applies machine learning algorithms to forecast and analyze financial time series.
  • It employs a denoising autoencoder to filter random noise fluctuations from the main contract price data.
  • The filtered data is then processed with one-dimensional convolution to extract key information.
  • A generative adversarial network (GAN) further analyzes the filtered, dimensionality-reduced price data to predict significant price changes in real time.


Sharper Error Bounds in Late Fusion Multi-view Clustering Using Eigenvalue Proportion

  • Late Fusion Multi-View Clustering (LFMVC) aims to integrate complementary information from multiple views to enhance clustering performance.
  • Current LFMVC methods struggle with noisy and redundant partitions and often fail to capture high-order correlations across views.
  • A novel theoretical framework is presented for analyzing the generalization error bounds of multiple kernel k-means, leveraging local Rademacher complexity and principal eigenvalue proportions.
  • Experimental results on benchmark datasets confirm that the proposed approach outperforms state-of-the-art methods in clustering performance and robustness.
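
The principal eigenvalue proportion used in the analysis measures how much of a kernel matrix's spectral mass (its trace) is captured by the leading eigenvalues; a large proportion indicates concentrated, low-noise structure. A pure-Python sketch for the top eigenvalue via power iteration, illustrative only:

```python
# Principal eigenvalue proportion of a symmetric kernel matrix:
# (top eigenvalue) / (trace). Power iteration with a Rayleigh quotient.

def top_eigenvalue(K, iters=200):
    n = len(K)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(K[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient at the converged vector
    Kv = [sum(K[i][j] * v[j] for j in range(n)) for i in range(n)]
    return sum(v[i] * Kv[i] for i in range(n))

def eigenvalue_proportion(K):
    trace = sum(K[i][i] for i in range(len(K)))
    return top_eigenvalue(K) / trace

K = [[2.0, 1.0], [1.0, 2.0]]  # eigenvalues 3 and 1, trace 4
```

For this K the proportion is 3/4: three quarters of the spectral mass lies in the leading direction.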


Accelerating AIGC Services with Latent Action Diffusion Scheduling in Edge Networks

  • AIGC services in edge networks face challenges due to high service delay and resource demands.
  • A novel method called LAD-TS is proposed for task scheduling in edge networks.
  • LAD-TS minimizes service delays by leveraging diffusion models and reinforcement learning.
  • DEdgeAI, a prototype edge system, demonstrates shorter service delays compared to existing AIGC platforms.


On the Effectiveness of Adversarial Training on Malware Classifiers

  • Adversarial Training (AT) has been widely applied to harden malware classifiers against adversarial evasive attacks.
  • The effectiveness of AT in identifying and strengthening vulnerable areas of the model's decision space while maintaining high performance on clean data remains underexplored.
  • Robustness achieved by AT has often been assessed against unrealistic or weak adversarial attacks, negatively affecting performance on clean data.
  • Factors such as data, feature representations, classifiers, and robust optimization settings influence the effectiveness of AT.


Towards Macro-AUC oriented Imbalanced Multi-Label Continual Learning

  • Multi-Label Learning (MLL) has received limited attention in Continual Learning (CL) research.
  • MLL datasets are often class-imbalanced, which makes them challenging to handle in CL.
  • To optimize Macro-AUC in Multi-Label Continual Learning (MLCL), a new memory replay-based method called RLDAM is proposed.
  • Experimental results demonstrate the effectiveness of the proposed method over baselines.
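
Macro-AUC, the metric the method optimizes, is the unweighted mean of per-label AUCs, which makes it sensitive to performance on rare labels. A pairwise-comparison sketch in pure Python (illustrative; not the paper's surrogate loss):

```python
# Macro-AUC: average the AUC of each label column independently, so
# every label counts equally regardless of how rare it is.

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        return float("nan")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_auc(score_matrix, label_matrix):
    # Columns are labels; rows are instances.
    n_labels = len(score_matrix[0])
    per_label = [auc([row[j] for row in score_matrix],
                     [row[j] for row in label_matrix])
                 for j in range(n_labels)]
    return sum(per_label) / n_labels
```

Under class imbalance, a rare label's AUC pulls the macro average down just as much as a common label's would, which is why optimizing it directly matters.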


Robust Semi-Supervised Learning in Open Environments

  • Semi-supervised learning (SSL) aims to improve performance by exploiting unlabeled data when labels are scarce.
  • Conventional SSL studies assume that important factors remain consistent between labeled and unlabeled data in closed environments.
  • This paper discusses robust SSL in open environments, where those important factors may be inconsistent.
  • Advances in techniques addressing label, feature, and data distribution inconsistency in SSL are introduced, along with evaluation benchmarks.


Free the Design Space of Equivariant Graph Neural Networks: High-Rank Irreducible Cartesian Tensor Decomposition and Bases of Equivariant Spaces

  • Irreducible Cartesian tensors (ICTs) are important in the design of equivariant graph neural networks and theoretical chemistry.
  • The ICT decomposition and basis for equivariant spaces are challenging to obtain for high-order tensors.
  • Researchers have achieved an explicit ICT decomposition for n=5 with factorial complexity and obtained decomposition matrices for ICTs up to rank n=9 with reduced complexity.
  • They used path matrices obtained through chain-like contraction with Clebsch-Gordan matrices to establish an orthonormal change-of-basis matrix and a complete orthogonal basis for the equivariant space.


NoiseHGNN: Synthesized Similarity Graph-Based Neural Network For Noised Heterogeneous Graph Representation Learning

  • Real-world graph data environments often contain noise that affects the effectiveness of graph representation and downstream learning tasks.
  • Existing methods for homogeneous graphs synthesize a similarity graph based on original node features to correct the structure of the noisy graph.
  • However, similar nodes in heterogeneous graphs do not have direct links, posing a challenge for noise correction in heterogeneous graph learning.
  • This paper proposes a novel synthesized similarity-based graph neural network for learning from noisy heterogeneous graphs, achieving state-of-the-art results on various real-world datasets.
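
The general idea behind similarity-based structure correction is to synthesize a graph directly from raw node features rather than trust the noisy observed edges. A minimal sketch using cosine similarity with a threshold (both the threshold and features are illustrative choices):

```python
# Synthesize a similarity graph: connect node pairs whose feature
# vectors are nearly parallel, ignoring the noisy observed structure.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def similarity_graph(features, threshold=0.9):
    n = len(features)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if cosine(features[i], features[j]) >= threshold]

feats = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
```

Here only the two nearly aligned feature vectors are linked; in a heterogeneous graph this simple construction breaks down, since similar nodes of the same type are typically not directly linked, which is the challenge the paper addresses.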


On the Local Complexity of Linear Regions in Deep ReLU Networks

  • Researchers have defined the local complexity of a neural network with continuous piecewise linear activations as a measure of the density of linear regions over an input data distribution.
  • They have theoretically shown that ReLU networks learning low-dimensional feature representations have a lower local complexity.
  • This connects recent empirical observations on feature learning with concrete properties of the learned functions.
  • The local complexity also serves as an upper bound on the total variation of the function over the input data distribution, linking feature learning to adversarial robustness.
  • The researchers also consider how optimization drives ReLU networks towards solutions with lower local complexity, contributing a theoretical framework for understanding geometric properties of ReLU networks in relation to learning.
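
The density of linear regions can be made concrete: each input to a ReLU network induces an activation pattern (which units fire), and each distinct pattern corresponds to one linear region. Counting distinct patterns over samples from the data distribution gives a simple empirical proxy. A tiny one-layer sketch with hand-picked weights:

```python
# Local complexity proxy: count the distinct ReLU activation patterns
# that data samples fall into -- one pattern per linear region.

def activation_pattern(x, weights, biases):
    """Pattern of a single ReLU layer: 1 if the unit fires, else 0."""
    return tuple(
        1 if sum(w * xi for w, xi in zip(row, x)) + b > 0 else 0
        for row, b in zip(weights, biases)
    )

def count_linear_regions(samples, weights, biases):
    return len({activation_pattern(x, weights, biases) for x in samples})

W = [[1.0, 0.0], [0.0, 1.0]]  # two units whose hyperplanes split the plane
b = [0.0, 0.0]
samples = [(1, 1), (1, -1), (-1, 1), (-1, -1), (2, 3)]
```

Two units split the plane into four quadrants, so the five samples land in four regions; a network with low local complexity would concentrate the data into fewer patterns.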


Semi-supervised Credit Card Fraud Detection via Attribute-Driven Graph Representation

  • Researchers propose a semi-supervised graph neural network for credit card fraud detection.
  • The method utilizes transaction records to construct a temporal transaction graph.
  • A Gated Temporal Attention Network (GTAN) is used to learn transaction representations.
  • The proposed method outperforms other baselines on fraud detection datasets.
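
One common way to build a temporal transaction graph from raw records is to link each of a card's transactions to that card's next transaction in time. The field names and linking rule below are illustrative assumptions, not necessarily the paper's exact construction:

```python
# Build a temporal transaction graph: directed edges between
# consecutive transactions of the same card, ordered by timestamp.

def temporal_transaction_graph(records):
    """records: list of (txn_id, card_id, timestamp)."""
    by_card = {}
    for txn_id, card_id, ts in sorted(records, key=lambda r: r[2]):
        by_card.setdefault(card_id, []).append(txn_id)
    edges = []
    for txns in by_card.values():
        edges.extend(zip(txns, txns[1:]))
    return edges

records = [("t1", "cardA", 1), ("t2", "cardB", 2),
           ("t3", "cardA", 3), ("t4", "cardA", 5)]
```

A graph neural network such as the GTAN can then propagate information along these edges so that each transaction's representation reflects the card's recent history.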


Navigating Data Corruption in Machine Learning: Balancing Quality, Quantity, and Imputation Strategies

  • Data corruption, including missing and noisy data, poses challenges in machine learning.
  • This study investigates the effects of data corruption on model performance and explores strategies to mitigate them.
  • Results show that noisy data has a more severe impact than missing data, especially in sequential decision-making tasks.
  • Increasing dataset size helps mitigate but cannot fully overcome the effects of data corruption.
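
A toy illustration of the two corruption types the study compares: missing entries can often be repaired by mean imputation, while additive noise corrupts every entry and cannot be localized. The numbers below are hand-picked to show the contrast, not results from the study:

```python
# Mean imputation for missing entries vs. untreatable additive noise.

def mean_impute(values):
    """Replace None entries with the mean of the observed ones."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

clean = [1.0, 2.0, 3.0, 4.0]
missing = [1.0, None, 3.0, None]  # two entries dropped
noisy = [1.5, 1.2, 3.9, 4.6]      # every entry perturbed

imputed = mean_impute(missing)
err_missing = sum(abs(a - b) for a, b in zip(imputed, clean)) / 4
err_noisy = sum(abs(a - b) for a, b in zip(noisy, clean)) / 4
```

In this toy case the mean absolute error after imputation (0.5) is smaller than under noise (0.7), mirroring the study's finding that noisy data is the more damaging corruption.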


Data-Driven Self-Supervised Graph Representation Learning

  • Self-supervised graph representation learning (SSGRL) is a representation learning paradigm used to reduce or avoid manual labeling.
  • Existing methods of graph data augmentation rely on heuristics and are effective only within specific application domains.
  • This study proposes a data-driven SSGRL approach that automatically learns graph augmentation from the graph's signal.
  • The proposed method outperforms baselines and performs similarly to semi-supervised methods in various experiments.


Point-DeepONet: A Deep Operator Network Integrating PointNet for Nonlinear Analysis of Non-Parametric 3D Geometries and Load Conditions

  • Nonlinear structural analyses in engineering often require extensive finite element simulations.
  • Point-DeepONet is an operator-learning-based surrogate that integrates PointNet into the DeepONet framework.
  • It accurately predicts three-dimensional displacement and von Mises stress fields without mesh parameterization or retraining.
  • Point-DeepONet provides predictions in mere seconds, approximately 400 times faster than nonlinear finite element analyses.


Hypergraph Attacks via Injecting Homogeneous Nodes into Elite Hyperedges

  • Recent studies have shown that Hypergraph Neural Networks (HGNNs) are vulnerable to adversarial attacks.
  • A novel framework called Hypergraph Attacks via Injecting Homogeneous Nodes into Elite Hyperedges (IE-Attack) is proposed to tackle these challenges.
  • IE-Attack utilizes node spanning in the hypergraph to identify hyperedges to be injected and generates a homogeneous node with the group identity of hyperedges using Kernel Density Estimation (KDE).
  • By injecting the homogeneous node into elite hyperedges, IE-Attack improves attack performance and enhances the imperceptibility of attacks.
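
Kernel density estimation is the piece of IE-Attack that gives the injected node its group identity: placing the node where the density of the hyperedge's node features peaks makes it blend in. A one-dimensional Gaussian KDE sketch; the bandwidth and grid search are illustrative choices, not the paper's settings:

```python
# Gaussian KDE over (here one-dimensional) node features of a
# hyperedge; the injected node is placed at the density peak.

import math

def gaussian_kde(points, bandwidth):
    def density(x):
        return sum(
            math.exp(-((x - p) / bandwidth) ** 2 / 2) for p in points
        ) / (len(points) * bandwidth * math.sqrt(2 * math.pi))
    return density

def densest_point(points, bandwidth=0.5):
    density = gaussian_kde(points, bandwidth)
    grid = [min(points) + i * (max(points) - min(points)) / 100
            for i in range(101)]
    return max(grid, key=density)

features = [0.9, 1.0, 1.1, 3.0]  # three clustered nodes and an outlier
```

The density peak sits near the cluster at 1.0 rather than the outlier, so the synthesized node resembles the majority of the hyperedge and the injection stays imperceptible.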
