techminis

A naukri.com initiative

ML News

Arxiv · 4h

Conservative approximation-based feedforward neural network for WENO schemes

  • Researchers have developed a feedforward neural network based on conservative approximation for WENO schemes in solving hyperbolic conservation laws.
  • The neural network replaces the classical WENO weighting procedure by taking point values as inputs and two nonlinear weights as outputs from a three-point stencil.
  • Supervised learning is used with a new labeled dataset for conservative approximation, incorporating a symmetric-balancing term in the loss function to ensure high-order accuracy and match the conservative approximation to the derivative.
  • The resulting WENO schemes, WENO3-CADNNs, exhibit robust generalization and outperform WENO3-Z while achieving accuracy comparable to WENO5-JS across different benchmark scenarios and resolutions.
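
A minimal sketch of the idea (not the authors' trained model): a tiny feedforward network maps the three stencil values to two logits, and a softmax turns them into nonnegative weights summing to one, as the WENO convex combination requires. The layer sizes and random parameters here are placeholders for a trained network.

```python
import numpy as np

def weno_weight_net(stencil, params):
    """Map a three-point stencil (u_{i-1}, u_i, u_{i+1}) to two
    nonlinear weights that are nonnegative and sum to one.
    A hypothetical stand-in for the paper's trained network."""
    W1, b1, W2, b2 = params
    h = np.tanh(W1 @ stencil + b1)        # hidden layer
    logits = W2 @ h + b2                  # two raw outputs
    e = np.exp(logits - logits.max())     # softmax => convex weights
    return e / e.sum()

rng = np.random.default_rng(0)
params = (rng.normal(size=(8, 3)), rng.normal(size=8),
          rng.normal(size=(2, 8)), rng.normal(size=2))
w = weno_weight_net(np.array([1.0, 1.2, 0.9]), params)
```

The softmax output is what lets the network drop into the classical scheme: the two weights can replace the WENO3 nonlinear weights without breaking the convex-combination structure.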

Arxiv · 4h

SQLBarber: A System Leveraging Large Language Models to Generate Customized and Realistic SQL Workloads

  • SQLBarber is a system leveraging Large Language Models (LLMs) to generate customized and realistic SQL workloads for database research and development.
  • It eliminates manual crafting of SQL templates by providing a declarative interface that accepts natural-language specifications to constrain the generated SQL templates.
  • SQLBarber scales efficiently to generate large volumes of queries matching user-defined cost distributions and uses execution statistics from Amazon Redshift and Snowflake for real-world query characteristics.
  • The system introduces a self-correction module, a Bayesian Optimizer, and open-sourced benchmarks to generate customized SQL templates, reduce query generation time significantly, and improve alignment with target cost distributions.
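
The cost-distribution matching can be pictured with a toy sketch, assuming hypothetical candidate queries with optimizer-estimated costs; SQLBarber's actual pipeline uses LLM-generated templates and a Bayesian Optimizer rather than this naive bucketed sampling.

```python
import random

# Hypothetical candidates as (sql, estimated_cost); in SQLBarber the
# templates would come from an LLM and the costs from the optimizer.
candidates = [(f"SELECT 1 /* q{i} */", random.Random(i).uniform(0, 100))
              for i in range(500)]

def sample_matching_costs(candidates, target_hist, bucket_edges, seed=0):
    """Pick queries so their cost histogram matches target_hist
    (a desired count per bucket). A simplistic stand-in for
    SQLBarber's Bayesian-optimized matching."""
    rng = random.Random(seed)
    buckets = {i: [] for i in range(len(bucket_edges) - 1)}
    for q, c in candidates:
        for i in range(len(bucket_edges) - 1):
            if bucket_edges[i] <= c < bucket_edges[i + 1]:
                buckets[i].append((q, c))
                break
    chosen = []
    for i, want in enumerate(target_hist):
        chosen += rng.sample(buckets[i], min(want, len(buckets[i])))
    return chosen

workload = sample_matching_costs(candidates, target_hist=[5, 10, 5],
                                 bucket_edges=[0, 30, 70, 100])
```

The point of the sketch is only the contract: the user declares a target cost distribution, and the system returns a workload whose per-bucket counts match it.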

Arxiv · 4h

Is Diversity All You Need for Scalable Robotic Manipulation?

  • Data scaling has been successful in NLP and CV, but its effectiveness in robotic manipulation needs further exploration.
  • Task diversity is more critical than the quantity of demonstrations, aiding transfer learning to new scenarios.
  • Multi-embodiment pre-training data is not necessary for cross-embodiment transfer; models trained on single-embodiment data can efficiently transfer to different platforms.
  • Expert diversity arising from individual operator preferences across human demonstrations can hinder policy learning; a debiasing method, GO-1-Pro, addresses the resulting velocity ambiguity and yields significant performance gains.

Arxiv · 4h

Efficiency-Effectiveness Reranking FLOPs for LLM-based Rerankers

  • Large Language Models (LLMs) are being used for reranking tasks in information retrieval with high performance but face deployment challenges due to computational demands.
  • Existing studies on LLM-based rerankers' efficiency use metrics like latency and token count, but they do not adequately consider model size and hardware variations.
  • A new metric suite called E^2R-FLOPs is proposed to evaluate LLM-based rerankers: relevance per PetaFLOP (RPP) for effectiveness and queries per PetaFLOP (QPP) for hardware-agnostic throughput.
  • Comprehensive experiments were conducted using the new metrics to assess the efficiency-effectiveness trade-off of various LLM-based rerankers, shedding light on this issue in the research community.
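
Under the stated intent, the two metrics reduce to simple ratios of quality and throughput to compute; the formulas below are a hedged reading (the paper's exact definitions may differ), using nDCG as the relevance score.

```python
def e2r_metrics(ndcg, num_queries, total_flops):
    """Hedged sketch of the E^2R-FLOPs idea: relevance per PetaFLOP
    (RPP) and queries per PetaFLOP (QPP), normalizing ranking quality
    and throughput by compute rather than by wall-clock latency."""
    petaflops = total_flops / 1e15
    rpp = ndcg / petaflops
    qpp = num_queries / petaflops
    return rpp, qpp

# Hypothetical reranker run: nDCG 0.72 over 1000 queries at 2 PFLOPs.
rpp, qpp = e2r_metrics(ndcg=0.72, num_queries=1000, total_flops=2e15)
```

Because the denominator is FLOPs rather than seconds, two rerankers can be compared without rerunning them on identical hardware.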

Arxiv · 4h

Deep neural networks have an inbuilt Occam's razor

  • Researchers introduce a Bayesian approach to understand the success of overparameterized deep neural networks (DNNs) by considering network architecture, training algorithms, and data structure.
  • They show that DNNs exhibit an Occam's-razor-like inductive bias towards simple functions, which counteracts the exponential growth in the number of complex functions and underpins their remarkable performance.
  • By analyzing Boolean function classification and utilizing a prior over functions determined by the network, researchers accurately predict the posterior for DNNs trained with stochastic gradient descent.
  • This study demonstrates that structured data and the intrinsic Occam's razor principle play a significant role in the success of deep neural networks.
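
The Occam's-razor effect can be illustrated with a toy Bayesian calculation over Boolean functions; the flip-count complexity measure and the 2^(-complexity) prior below are stand-ins for the network-induced prior the paper estimates.

```python
from itertools import product

def complexity(table):
    """Crude proxy for descriptional complexity: the number of places
    where the output flips along the truth table."""
    return sum(table[i] != table[i + 1] for i in range(len(table) - 1))

# All 256 Boolean functions on 3 inputs, as output tuples over 8 rows.
funcs = list(product([0, 1], repeat=8))

# Simplicity prior P(f) proportional to 2^(-complexity) -- a stand-in
# for the prior a network architecture induces over functions.
weights = [2.0 ** -complexity(f) for f in funcs]
Z = sum(weights)
prior = [w / Z for w in weights]

# Condition on data: we observe f = 0 on the first two truth-table rows.
post_unnorm = [p if f[0] == 0 and f[1] == 0 else 0.0
               for f, p in zip(funcs, prior)]
Zp = sum(post_unnorm)
posterior = [p / Zp for p in post_unnorm]

# The posterior mode is the simplest function consistent with the data:
# the all-zeros function, with complexity 0.
map_f = funcs[max(range(len(funcs)), key=lambda i: posterior[i])]
```

The qualitative behavior is the point: among all functions that fit the observations, the simplicity-biased prior concentrates posterior mass on the simplest one, mirroring the paper's claim about DNN posteriors.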

Arxiv · 4h

Learning Federated Neural Graph Databases for Answering Complex Queries from Distributed Knowledge Graphs

  • Neural graph databases (NGDBs) give deep learning models an efficient retrieval mechanism for accessing precise graph-structured information.
  • Current NGDBs are limited to single-graph operation, hindering reasoning across multiple distributed graphs.
  • The lack of support for multi-source graph data in existing NGDBs affects reasoning across distributed sources, impacting decision-making.
  • Proposed solution, Federated Neural Graph DataBase (FedNGDB), uses federated learning for privacy-preserving reasoning over multi-source graph data, improving data quality.
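
The privacy-preserving aggregation can be sketched with plain federated averaging over locally trained parameters; only parameters leave each site, never the raw graph data. FedNGDB's actual aggregation scheme may differ from this baseline.

```python
import numpy as np

def fed_avg(client_params, client_sizes):
    """One round of federated averaging: each site trains its graph
    model locally and shares only parameters, weighted by local data
    size. Illustrative baseline, not FedNGDB's exact scheme."""
    total = sum(client_sizes)
    return sum((n / total) * p for p, n in zip(client_params, client_sizes))

# Three hypothetical sites holding local embedding matrices.
rng = np.random.default_rng(1)
locals_ = [rng.normal(size=(4, 2)) for _ in range(3)]
global_params = fed_avg(locals_, client_sizes=[100, 50, 50])
```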

Arxiv · 4h

Optimal Transport for Domain Adaptation through Gaussian Mixture Models

  • Machine learning systems often assume training and test data come from the same distribution, but this is rarely the case in real-world scenarios where data conditions may change.
  • Adapting unsupervised domains with minimal access to new data is crucial for building models robust to distribution changes.
  • This study explores optimal transport between Gaussian Mixture Models (GMMs) for analyzing distribution changes efficiently, showing promising results in various benchmarks.
  • The proposed method is more efficient and scalable compared to previous shallow domain adaptation methods, performing well with varying sample sizes and dimensions.
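
For one-dimensional components, the 2-Wasserstein distance between Gaussians is closed-form, so mixture-level OT for equal-weight GMMs reduces to a component matching; the brute-force assignment below is a sketch of that reduction, not the paper's algorithm.

```python
from itertools import permutations

def w2_gauss_1d(m1, s1, m2, s2):
    """Closed-form squared 2-Wasserstein distance between 1D Gaussians."""
    return (m1 - m2) ** 2 + (s1 - s2) ** 2

def gmm_ot(comps_a, comps_b):
    """Discrete OT between two equal-weight GMMs: match components to
    minimize the total pairwise Gaussian W2^2 cost (brute-force over
    assignments; fine for a handful of components)."""
    k = len(comps_a)
    return min(
        (sum(w2_gauss_1d(*comps_a[i], *comps_b[p[i]]) for i in range(k)), p)
        for p in permutations(range(k))
    )

source = [(0.0, 1.0), (5.0, 0.5)]   # (mean, std) per component
target = [(4.8, 0.6), (0.2, 1.1)]
cost, matching = gmm_ot(source, target)
```

The efficiency claim follows from this structure: the transport problem lives on the handful of mixture components rather than on the raw samples.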

Arxiv · 4h

Policy Verification in Stochastic Dynamical Systems Using Logarithmic Neural Certificates

  • Researchers propose a method for verifying neural network policies in stochastic systems for reach-avoid specifications.
  • They introduce logarithmic Reach-Avoid Supermartingales (logRASMs) to achieve smaller Lipschitz constants than existing approaches.
  • A faster method to compute tighter upper bounds on Lipschitz constants based on weighted norms is presented in the study.
  • Empirical evaluation demonstrates successful verification of reach-avoid specifications with probabilities as high as 99.9999%.
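
For context, the classical Lipschitz upper bound that such methods improve on is the product of the layers' spectral norms (ReLU being 1-Lipschitz); the paper's weighted-norm bound is tighter than this baseline.

```python
import numpy as np

def lipschitz_upper_bound(weight_matrices):
    """Naive Lipschitz upper bound for a ReLU network: the product of
    the layers' spectral norms. The paper's weighted-norm bounds are
    tighter; this is the classical baseline they improve on."""
    return float(np.prod([np.linalg.norm(W, 2) for W in weight_matrices]))

rng = np.random.default_rng(2)
layers = [rng.normal(size=(16, 4)), rng.normal(size=(16, 16)),
          rng.normal(size=(1, 16))]
L = lipschitz_upper_bound(layers)
```

Smaller certified Lipschitz constants matter because they directly tighten the verified probability bounds one can extract from a supermartingale certificate.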

Arxiv · 4h

The Perils of Optimizing Learned Reward Functions: Low Training Error Does Not Guarantee Low Regret

  • Reward learning in reinforcement learning aims to address the challenge of specifying reward functions accurately for a given task.
  • A learned reward model can have low error on the data distribution yet induce a policy with large regret, a failure termed an error-regret mismatch, caused mainly by distributional shift during policy optimization.
  • The study shows mathematically that while a sufficiently low expected test error guarantees low worst-case regret, for any fixed expected test error there are realistic data distributions under which an error-regret mismatch can still occur.
  • Even with policy regularization techniques like RLHF, similar issues persist, highlighting the need for improved methods in learning reward models and assessing their quality accurately.
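
A two-armed bandit makes the mismatch concrete: a reward model with tiny expected error on the data distribution can still steer the optimized policy to the worst arm. The values below are illustrative, not taken from the paper.

```python
# Two-armed bandit: true rewards vs a learned reward model that is
# accurate on the data distribution but wrong off-distribution.
true_reward = {"a": 1.0, "b": 0.0}
model_reward = {"a": 1.0, "b": 2.0}   # wrong only on rarely seen "b"

data_dist = {"a": 0.99, "b": 0.01}    # training data mostly covers "a"

# Expected squared test error under the data distribution is tiny...
test_error = sum(p * (true_reward[x] - model_reward[x]) ** 2
                 for x, p in data_dist.items())

# ...but the policy optimizing the learned reward picks "b" and
# incurs maximal regret under the true reward: an error-regret mismatch.
policy_choice = max(model_reward, key=model_reward.get)
regret = max(true_reward.values()) - true_reward[policy_choice]
```

Policy optimization induces exactly this distributional shift: it seeks out the states the reward model overvalues, which are often the ones the data covered least.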

Arxiv · 4h

Multi-Channel Hypergraph Contrastive Learning for Matrix Completion

  • The paper discusses the challenges in matrix completion for recommender systems due to data sparsity and long-tail distribution in real-world scenarios.
  • A new framework called Multi-Channel Hypergraph Contrastive Learning (MHCL) is proposed to address these challenges by adaptively learning hypergraph structures and capturing high-order correlations between nodes.
  • MHCL utilizes attention-based cross-view aggregation to jointly capture local and global collaborative relationships and encourages alignment between adjacent ratings through multi-channel cross-rating contrastive learning.
  • Extensive experiments on five public datasets show that MHCL outperforms current state-of-the-art approaches in rating prediction and matrix completion for recommender systems.
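
The contrastive component can be grounded in a generic InfoNCE-style loss between two views with aligned positives; MHCL's multi-channel, hypergraph-specific variant builds on this kind of objective rather than matching it exactly.

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """InfoNCE-style contrastive loss between two views whose rows are
    aligned positives. Generic sketch of the objective contrastive
    frameworks like MHCL build on."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                       # pairwise similarities
    sim -= sim.max(axis=1, keepdims=True)       # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-np.diag(log_prob).mean())     # positives on diagonal

rng = np.random.default_rng(3)
z = rng.normal(size=(8, 16))
loss_aligned = info_nce(z, z + 0.01 * rng.normal(size=(8, 16)))
loss_random = info_nce(z, rng.normal(size=(8, 16)))
```

As expected, the loss is small when the two views agree and large when they are unrelated, which is the signal that drives the alignment between channels.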

Arxiv · 4h

Longitudinal Ensemble Integration for sequential classification with multimodal data

  • This study focuses on effectively modeling multimodal longitudinal data, particularly in biomedicine, where such approaches remain scarce in the literature.
  • The study introduces Longitudinal Ensemble Integration (LEI), a novel framework for sequential classification that outperformed existing methods in the early detection of dementia.
  • LEI's superiority is attributed to its utilization of intermediate base predictions from individual data modalities, leading to better integration over time and consistent identification of important features for dementia prediction.
  • The research highlights the potential of LEI for sequential classification tasks involving longitudinal multimodal data.
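
The use of intermediate base predictions can be sketched as late fusion: per-modality, per-time-point base predictions are stacked into features for a final classifier. The logistic model and synthetic data below are placeholders, not the authors' architecture.

```python
import numpy as np

def stack_longitudinal(base_preds):
    """LEI-style late fusion (sketch): base-model predictions from each
    modality at each time point become features for a final model."""
    return np.concatenate(base_preds, axis=1)

def fit_logistic(X, y, lr=0.3, steps=300):
    # Plain gradient-descent logistic regression with a bias term.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(Xb @ w)))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict(X, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(float)

rng = np.random.default_rng(4)
y = rng.integers(0, 2, size=100).astype(float)
# Two modalities, three time points of noisy base predictions each.
base_preds = [y[:, None] + 0.3 * rng.normal(size=(100, 3)) for _ in range(2)]
X = stack_longitudinal(base_preds)
w = fit_logistic(X, y)
acc = float((predict(X, w) == y).mean())
```

The design choice this illustrates: the final model sees low-dimensional, already-calibrated evidence from each modality at each visit, rather than the raw high-dimensional inputs.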

Arxiv · 4h

Regression for the Mean: Auto-Evaluation and Inference with Few Labels through Post-hoc Regression

  • The availability of machine learning systems has led to the use of synthetic labels in statistical inference applications.
  • The Prediction Powered Inference (PPI) framework aims to combine pseudo-labelled data with a small sample of real high-quality labels for efficient evaluation.
  • When labelled data is scarce, the PPI++ method may perform poorly compared to traditional inference methods like ordinary least squares regression.
  • The study relates PPI++ to regression techniques and introduces new PPI-based approaches that utilize robust regressors for improved estimation in scenarios with few labels.
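
The classical prediction-powered mean estimator underlying PPI/PPI++ is short enough to state in code: the model's mean prediction on the large unlabeled pool, debiased by residuals on the small labeled set, with a tuning parameter lam interpolating back to the labeled-only estimate.

```python
import numpy as np

def ppi_mean(yhat_unlabeled, y_labeled, yhat_labeled, lam=1.0):
    """Prediction-powered mean estimate. lam=1 is classical PPI;
    PPI++ tunes lam (lam=0 recovers the labeled-only mean)."""
    return float(lam * np.mean(yhat_unlabeled)
                 + np.mean(y_labeled) - lam * np.mean(yhat_labeled))

rng = np.random.default_rng(5)
# True mean 5.0; the model's predictions are systematically biased +1.
y_unlab_true = rng.normal(5.0, 1.0, size=10_000)
yhat_unlab = y_unlab_true + 1.0
y_lab = rng.normal(5.0, 1.0, size=50)
yhat_lab = y_lab + 1.0

est = ppi_mean(yhat_unlab, y_lab, yhat_lab, lam=1.0)
naive = float(np.mean(yhat_unlab))   # biased by the model's +1 error
```

The rectifier term cancels the model's bias exactly, which is why the debiased estimate lands near the true mean while the naive pseudo-label mean does not; the paper's robust-regression variants replace this simple mean-based rectifier.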

Arxiv · 4h

Mind the Cost of Scaffold! Benign Clients May Even Become Accomplices of Backdoor Attack

  • Researchers have identified a new backdoor attack, BadSFL, targeting the Scaffold framework used in Federated Learning to address data heterogeneity issues.
  • BadSFL manipulates the control variate in Scaffold to steer benign clients' local gradient updates, turning them into unwitting accomplices of the attacker.
  • This attack enhances the backdoor persistence and leverages a GAN-enhanced poisoning strategy to maintain high accuracy while remaining stealthy.
  • Experiments show that BadSFL is highly durable: the backdoor remains effective for more than 60 global rounds after malicious injections stop, outperforming existing baselines.

Arxiv · 4h

Pre-Training Graph Contrastive Masked Autoencoders are Strong Distillers for EEG

  • Researchers have introduced a Unified Pre-trained Graph Contrastive Masked Autoencoder Distiller, EEG-DisGCMAE, to enhance performance by leveraging unlabeled high-density EEG data to aid limited labeled low-density EEG data.
  • The approach integrates graph contrastive pre-training with graph masked autoencoder pre-training and introduces a graph topology distillation loss function to facilitate knowledge transfer from teacher models trained on high-density data to lightweight student models trained on low-density data.
  • The method effectively addresses missing electrodes through contrastive distillation, and it has been validated across four classification tasks using clinical EEG datasets.
  • The research paper and source code can be accessed at arXiv:2411.19230v2 and https://github.com/weixinxu666/EEG_DisGCMAE, respectively.
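
The distillation component can be grounded in the standard knowledge-distillation objective (cross-entropy to the labels plus temperature-scaled KL to the teacher's softened outputs); EEG-DisGCMAE adds a graph-topology distillation term that this sketch omits.

```python
import numpy as np

def softmax(x, t=1.0):
    e = np.exp(x / t - (x / t).max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, labels, t=2.0, alpha=0.5):
    """Generic knowledge-distillation loss: cross-entropy on the true
    labels plus KL to the teacher's temperature-softened outputs.
    The graph-topology term from the paper is omitted here."""
    p_s = softmax(student_logits)
    ce = float(-np.log(p_s[np.arange(len(labels)), labels]).mean())
    ps_t = softmax(student_logits, t)
    pt_t = softmax(teacher_logits, t)
    kl = float((pt_t * (np.log(pt_t) - np.log(ps_t))).sum(axis=1).mean())
    return alpha * ce + (1 - alpha) * (t ** 2) * kl

rng = np.random.default_rng(6)
teacher = rng.normal(size=(4, 3)) * 3   # high-density teacher logits
labels = np.array([0, 1, 2, 0])
loss_far = distill_loss(rng.normal(size=(4, 3)), teacher, labels)
```

The KL term vanishes when the lightweight student reproduces the teacher exactly, which is what lets supervision from the high-density model flow to the low-density one.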

Arxiv · 4h

Balancing Efficiency and Expressiveness: Subgraph GNNs with Walk-Based Centrality

  • Subgraph GNNs have emerged to enhance the expressiveness of Graph Neural Networks (GNNs) by processing bags of subgraphs.
  • A new approach called HyMN is proposed to reduce the computational cost of Subgraph GNNs by leveraging walk-based centrality measures.
  • HyMN samples a small number of relevant subgraphs to reduce bag size, increasing efficiency without sacrificing performance.
  • Experimental results show that HyMN effectively balances expressiveness, efficiency, and downstream performance, making Subgraph GNNs applicable to larger graphs.
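
Walk-based centrality can be illustrated with Katz centrality, which scores nodes by damped counts of walks of all lengths; selecting only the top-k central nodes as subgraph roots shrinks the bag of subgraphs from n to k. The exact centrality measure HyMN uses may differ.

```python
import numpy as np

def katz_centrality(A, alpha=0.1, iters=50):
    """Walk-based centrality: c = sum_k alpha^k (A^k 1) counts walks of
    every length from each node, damped by alpha (alpha must be below
    1/spectral-radius for the series to converge)."""
    c = np.ones(A.shape[0])
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = alpha * (A @ x)
        c = c + x
    return c

def top_k_roots(A, k):
    """Keep only the k most walk-central nodes as subgraph roots,
    shrinking the bag of subgraphs from n to k."""
    c = katz_centrality(A)
    return sorted(np.argsort(c)[::-1][:k].tolist())

# A 5-node path graph 0-1-2-3-4: the middle node is the most central.
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0
roots = top_k_roots(A, k=2)
```

Sampling rooted subgraphs only at these central nodes is what trades a little expressiveness for a large reduction in bag size.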
