ML News

Source: Arxiv

A machine learning platform for development of low flammability polymers

  • A machine learning platform has been developed to predict flammability metrics of polymers.
  • Using synthetic data, the models demonstrated the potential to accurately predict the flammability index and cone calorimetry outcomes (an illustrative property-regression sketch follows this list).
  • The platform, called POLYCOMPRED, is integrated into the cloud-based MatVerse platform and provides an accessible web-based interface for flammability prediction.
  • The development of the platform offers a tool for designing new materials with tailored fire-resistant properties.
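
The bullets describe a property-prediction workflow, so here is a minimal, purely illustrative sketch of that kind of setup: a regressor mapping polymer descriptors to a flammability metric. The descriptor features, the random-forest model, and the synthetic data are assumptions for illustration and are not POLYCOMPRED's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Illustrative stand-ins: descriptors and a synthetic flammability target.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                         # e.g., composition/structure descriptors
y = 0.5 * X[:, 0] + rng.normal(scale=0.1, size=200)    # stand-in flammability index

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[:150], y[:150])
print(model.score(X[150:], y[150:]))                   # held-out R^2 on the toy data
```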


Source: Arxiv

Over-the-Air Edge Inference via End-to-End Metasurfaces-Integrated Artificial Neural Networks

  • Researchers propose a Metasurfaces-Integrated Neural Networks (MINNs) framework for the Edge Inference (EI) paradigm.
  • Metasurfaces offer programmable propagation of wireless signals, enabling over-the-air computing.
  • MINNs optimize Reconfigurable Intelligent Surfaces (RISs) and Stacked Intelligent Metasurfaces (SIM) for DNN operations (see the sketch after this list).
  • The MINN framework simplifies Edge Inference requirements and achieves near-optimal performance.
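
As rough intuition for the over-the-air idea (not the paper's MINN formulation), the sketch below treats a programmable metasurface as a tunable diagonal phase-shift layer sandwiched between two wireless channel matrices, so propagation itself applies a configurable linear transform. All dimensions and channel statistics are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tx, n_elements, n_rx = 16, 64, 8   # illustrative dimensions

# Random wireless channels: transmitter -> metasurface and metasurface -> receiver.
H1 = (rng.normal(size=(n_elements, n_tx)) + 1j * rng.normal(size=(n_elements, n_tx))) / np.sqrt(2)
H2 = (rng.normal(size=(n_rx, n_elements)) + 1j * rng.normal(size=(n_rx, n_elements))) / np.sqrt(2)

# Programmable phase shifts act as the trainable "weights" of the over-the-air layer.
theta = rng.uniform(0, 2 * np.pi, size=n_elements)
Phi = np.diag(np.exp(1j * theta))

def over_the_air_layer(x):
    """One metasurface-programmed linear transform applied by propagation itself."""
    return H2 @ Phi @ H1 @ x

x = rng.normal(size=n_tx)            # transmitted feature vector
y = np.abs(over_the_air_layer(x))    # receiver applies a simple nonlinearity (magnitude)
print(y.shape)                       # (8,)
```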


Source: Arxiv

ElaLoRA: Elastic & Learnable Low-Rank Adaptation for Efficient Model Fine-Tuning

  • ElaLoRA is an adaptive low-rank adaptation framework for efficient model fine-tuning.
  • It dynamically prunes and expands ranks based on gradient-derived importance scores (a toy sketch of this idea follows the list).
  • ElaLoRA outperforms existing PEFT methods across different parameter budgets in experiments.
  • It offers a scalable and efficient fine-tuning solution, especially for resource-constrained environments.
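
The toy sketch below shows the general mechanism of scoring LoRA ranks with a gradient-derived importance measure and pruning the least useful ones; the scoring rule, shapes, and the placeholder loss are assumptions for illustration, not ElaLoRA's exact algorithm.

```python
import torch

# Toy LoRA factors for one layer: W ~ W0 + B @ A, rank r.
d_out, d_in, r = 64, 64, 8
A = torch.randn(r, d_in, requires_grad=True)
B = torch.randn(d_out, r, requires_grad=True)

loss = ((B @ A) ** 2).sum()   # placeholder loss, only to produce gradients
loss.backward()

# One plausible gradient-derived importance per rank: |param * grad| summed per rank.
importance = (A * A.grad).abs().sum(dim=1) + (B * B.grad).abs().sum(dim=0)

# Prune the least important ranks; the freed budget can be reallocated elsewhere.
keep = importance.topk(k=6).indices.sort().values
A_pruned, B_pruned = A[keep].detach(), B[:, keep].detach()
print(A_pruned.shape, B_pruned.shape)   # torch.Size([6, 64]) torch.Size([64, 6])
```

In an elastic scheme of this kind, the rank budget released by pruning one layer would be used to expand ranks in layers whose scores indicate they matter more.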


Source: Arxiv

Federated Learning for Cross-Domain Data Privacy: A Distributed Approach to Secure Collaboration

  • This paper presents a data privacy protection framework based on federated learning.
  • Federated learning reduces the risk of privacy breaches by training the model locally on each client and sharing only model parameters (see the minimal sketch after this list).
  • The experiment demonstrates the efficiency and privacy protection ability of federated learning for medical, financial, and user data.
  • Federated learning enables effective cross-domain data collaboration while ensuring data privacy.
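
For context, here is a minimal FedAvg-style round in PyTorch showing the core privacy mechanism the bullets describe: clients train locally and only parameter tensors are shared and averaged. The model, data, and aggregation rule are generic stand-ins, not the paper's specific framework.

```python
import copy
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def local_update(global_model, loader, epochs=1, lr=0.01):
    """Client-side step: train a local copy; only the resulting parameters leave the client."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def fed_avg(client_states):
    """Server-side step: average parameter tensors; raw client data is never seen."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in client_states]).mean(dim=0)
    return avg

# Toy setup with two synthetic clients (illustrative only).
global_model = nn.Linear(10, 2)
clients = [DataLoader(TensorDataset(torch.randn(32, 10), torch.randint(0, 2, (32,))),
                      batch_size=8) for _ in range(2)]
global_model.load_state_dict(fed_avg([local_update(global_model, dl) for dl in clients]))
```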


Source: Arxiv

A Deep Learning Approach to Anomaly Detection in High-Frequency Trading Data

  • This paper presents a deep learning algorithm for anomaly detection in high-frequency trading data.
  • The algorithm utilizes a staged sliding window Transformer architecture to capture multi-scale temporal features (a simplified sketch follows this list).
  • Experimental results show that the proposed method outperforms traditional and deep learning approaches in terms of accuracy, F1-Score, and AUC-ROC.
  • The model provides important support for market supervision but suffers from false positives.
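
A heavily simplified sketch of a windowed Transformer encoder that scores each time step is shown below; the window size, feature count, and single-stage design are placeholders and do not reproduce the paper's staged, multi-scale architecture.

```python
import torch
import torch.nn as nn

class WindowedAnomalyTransformer(nn.Module):
    """Score each tick in a sliding window with a Transformer encoder (illustrative sizes)."""
    def __init__(self, n_features=8, d_model=64):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.score = nn.Linear(d_model, 1)              # per-time-step anomaly score

    def forward(self, x):                               # x: (batch, time, features)
        h = self.encoder(self.embed(x))
        return torch.sigmoid(self.score(h)).squeeze(-1)

ticks = torch.randn(4, 128, 8)                          # 4 sliding windows of 128 ticks each
print(WindowedAnomalyTransformer()(ticks).shape)        # torch.Size([4, 128])
```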


Source: Arxiv

Inference-Time Scaling for Complex Tasks: Where We Stand and What Lies Ahead

  • Inference-time scaling can enhance the reasoning capabilities of large language models (LLMs) on complex problems.
  • This research investigates the benefits and limitations of scaling methods across nine state-of-the-art models and eight challenging tasks.
  • The advantages of inference-time scaling vary across tasks and diminish as problem complexity increases.
  • Results show that, for some tasks, conventional models can achieve performance close to advanced reasoning models, but for other tasks, a performance gap remains.


Source: Arxiv

LOCO-EPI: Leave-one-chromosome-out (LOCO) as a benchmarking paradigm for deep learning based prediction of enhancer-promoter interactions

  • A new benchmarking paradigm called Leave-one-chromosome-out (LOCO) has been proposed for deep-learning-based prediction of enhancer-promoter interactions (EPI); a sketch of the splitting scheme follows this list.
  • Traditional methods randomly split the dataset into training and testing subsets, leading to performance overestimation due to information leakage.
  • The LOCO cross-validation approach demonstrates that a deep learning algorithm's performance drops drastically, highlighting the overestimation of performance in random-splitting settings.
  • A novel hybrid deep neural network that incorporates k-mer features of the nucleotide sequence is proposed, showing significantly better performance in the LOCO setting.
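
The LOCO protocol itself is simple to state in code: hold out one chromosome entirely for testing and train on the rest, repeating for every chromosome. The sketch below assumes a (chromosome, features, label) tuple format, which is an illustrative choice rather than the paper's data layout.

```python
def loco_splits(samples):
    """Leave-one-chromosome-out: train on all chromosomes except one, test on the held-out one."""
    chromosomes = sorted({chrom for chrom, *_ in samples})
    for held_out in chromosomes:
        train = [s for s in samples if s[0] != held_out]
        test = [s for s in samples if s[0] == held_out]
        yield held_out, train, test

# Toy enhancer-promoter samples on two chromosomes (illustrative format).
data = [("chr1", [0.1], 1), ("chr1", [0.4], 0), ("chr2", [0.9], 1)]
for chrom, train, test in loco_splits(data):
    print(chrom, len(train), len(test))   # chr1 1 2 / chr2 2 1
```

Because pairs from the same chromosome never appear in both splits, the sequence-level information leakage that inflates random-split results cannot occur.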


Source: Arxiv

Diffusion models for probabilistic precipitation generation from atmospheric variables

  • Improving the representation of precipitation in Earth system models (ESMs) is critical for assessing the impacts of climate change and extreme events.
  • A novel approach based on generative machine learning is presented, which integrates a conditional diffusion model with a UNet architecture to generate accurate, high-resolution global daily precipitation fields (a heavily simplified sampling sketch follows this list).
  • The model provides ensemble predictions, capturing uncertainties in precipitation, and does not require manual fine-tuning.
  • By leveraging interactions between global prognostic variables, the approach offers a computationally efficient alternative to conventional schemes for modeling complex precipitation patterns.
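
The sketch below is a heavily reduced picture of conditional diffusion sampling: start from noise and repeatedly denoise a precipitation field conditioned on atmospheric variables, with different seeds yielding ensemble members. The update rule, schedule, and stand-in denoiser are illustrative assumptions, not the paper's sampler or UNet.

```python
import torch

def sample_precipitation(denoiser, atmos_vars, steps=50):
    """Reduced conditional-diffusion sampling loop (illustrative update rule, not the paper's)."""
    x = torch.randn_like(atmos_vars[:, :1])            # start from pure noise
    for t in reversed(range(steps)):
        t_frac = torch.full((x.shape[0],), t / steps)
        eps_hat = denoiser(x, atmos_vars, t_frac)      # UNet-style model would predict the noise
        x = x - eps_hat / steps                        # crude denoising step
    return x

# Different noise seeds give different plausible fields, i.e., an ensemble.
dummy = lambda x, cond, t: torch.zeros_like(x)         # stand-in denoiser so the sketch runs
ensemble = torch.stack([sample_precipitation(dummy, torch.randn(2, 4, 32, 32)) for _ in range(4)])
print(ensemble.shape)                                  # torch.Size([4, 2, 1, 32, 32])
```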


Source: Arxiv

FedPaI: Achieving Extreme Sparsity in Federated Learning via Pruning at Initialization

  • FedPaI is a novel, efficient federated learning (FL) framework that achieves extreme sparsity by leveraging Pruning at Initialization (PaI); a generic PaI sketch follows this list.
  • FedPaI maximizes model capacity and reduces communication and computation overhead by fixing sparsity patterns at the start of training.
  • The framework supports both structured and unstructured pruning, personalized client-side pruning mechanisms, and sparsity-aware server-side aggregation.
  • Experimental results show that FedPaI achieves an extreme sparsity level of up to 98% without compromising model accuracy compared to unpruned baselines.
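
As background on the PaI ingredient, the sketch below fixes a sparsity mask before any training using a simple magnitude criterion; FedPaI's actual scoring rule, its client-personalized pruning, and its sparsity-aware aggregation are described in the paper and are not reproduced here.

```python
import torch
import torch.nn as nn

def pai_mask(model, sparsity=0.98):
    """Pruning at Initialization sketch: choose a fixed mask before training starts.
    Magnitude-based scoring is used purely for illustration."""
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:                                    # prune weight matrices only
            k = int(p.numel() * sparsity)
            threshold = p.abs().flatten().kthvalue(k).values
            masks[name] = (p.abs() > threshold).float()    # 1 = keep, 0 = pruned
    return masks

def apply_masks(model, masks):
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])                        # sparsity pattern stays fixed throughout training

model = nn.Sequential(nn.Linear(100, 50), nn.ReLU(), nn.Linear(50, 10))
masks = pai_mask(model, sparsity=0.98)
apply_masks(model, masks)
```

Fixing the mask up front is what lets both communication and computation shrink from the first round onward, instead of only after iterative pruning converges.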


Source: Arxiv

Simple yet Effective Node Property Prediction on Edge Streams under Distribution Shifts

  • The problem of predicting node properties in graphs has gained attention for its practical applications.
  • Temporal graph neural networks (TGNNs) have been developed to handle dynamic node properties.
  • SPLASH is a simple yet powerful method proposed to predict node properties on edge streams under distribution shifts.
  • SPLASH improves the effectiveness of TGNNs with feature augmentation methods and an automatic feature selection method.


Source: Arxiv

SeizureTransformer: Scaling U-Net with Transformer for Simultaneous Time-Step Level Seizure Detection from Long EEG Recordings

  • SeizureTransformer is a deep learning-based model for simultaneous time-step level seizure detection from long EEG recordings.
  • The model consists of a deep encoder with 1D convolutions, a residual CNN stack, and a transformer encoder (a rough sketch of this pipeline follows the list).
  • SeizureTransformer effectively handles long-range patterns in EEG data and outperforms existing approaches in seizure detection.
  • The model ranked first in the 2025 "seizure detection challenge" and shows potential for real-time, precise seizure detection.
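
The pipeline described in the bullets can be sketched roughly as below; channel counts, depths, and kernel sizes are placeholders rather than the published SeizureTransformer configuration.

```python
import torch
import torch.nn as nn

class SeizureNetSketch(nn.Module):
    """1D conv encoder -> residual CNN stack -> Transformer encoder -> per-time-step logits."""
    def __init__(self, n_channels=19, d_model=64):
        super().__init__()
        self.encoder = nn.Conv1d(n_channels, d_model, kernel_size=7, padding=3)
        self.res_block = nn.Sequential(
            nn.Conv1d(d_model, d_model, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=3, padding=1),
        )
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)

    def forward(self, eeg):                                 # eeg: (batch, channels, time)
        h = torch.relu(self.encoder(eeg))
        h = torch.relu(h + self.res_block(h))               # residual CNN stack
        h = self.transformer(h.transpose(1, 2))             # (batch, time, d_model)
        return torch.sigmoid(self.head(h)).squeeze(-1)      # seizure probability per time step

print(SeizureNetSketch()(torch.randn(2, 19, 1024)).shape)   # torch.Size([2, 1024])
```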


Source: Arxiv

Agentic Multimodal AI for Hyperpersonalized B2B and B2C Advertising in Competitive Markets: An AI-Driven Competitive Advertising Framework

  • Researchers have developed a multilingual, multimodal AI framework for hyper-personalized advertising in B2B and B2C markets.
  • The framework integrates retrieval-augmented generation (RAG), multimodal reasoning, and adaptive persona-based targeting.
  • It generates culturally relevant, market-aware ads tailored to shifting consumer behaviors and competition.
  • The framework combines real-world product experiments and synthetic experiments to optimize strategies at scale and maximize return on advertising spend (ROAS).


Source: Arxiv

Reducing Smoothness with Expressive Memory Enhanced Hierarchical Graph Neural Networks

  • Graphical forecasting models learn the structure of time series data by projecting it onto a graph.
  • Hierarchical Graph Flow (HiGFlow) network introduces a memory buffer variable to store previously seen information across variable resolutions.
  • HiGFlow reduces smoothness when mapping onto new feature spaces in the hierarchy.
  • Empirical results show that HiGFlow outperforms state-of-the-art baselines, including transformer models, in MAE and RMSE.


Source: Arxiv

Deep learning for state estimation of commercial sodium-ion batteries using partial charging profiles: validation with a multi-temperature ageing dataset

  • Accurately predicting the state of health for sodium-ion batteries is crucial for managing battery modules and ensuring operational safety.
  • A new framework was designed that integrates a neural ordinary differential equation with 2D convolutional neural networks to predict the state of charge (SOC), capacity, and state of health (SOH) of batteries using partial charging profiles as input (a simplified sketch follows this list).
  • The model demonstrated high accuracy, with an R^2 of 0.998 for SOC and 0.997 for SOH across various temperatures.
  • The trained model can be used to predict single cells at temperatures outside the training set and battery modules with different capacity and current levels.
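
A simplified sketch of the described combination is given below: a 2D CNN encodes a partial charging profile and a learned dynamics function is integrated with a plain Euler loop as a stand-in for the neural ODE. The input format, network sizes, and output heads are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class BatteryStateSketch(nn.Module):
    """2D CNN over a partial charging profile + Euler-integrated learned dynamics -> [SOC, SOH]."""
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(8 * 16, 32))
        self.dynamics = nn.Sequential(nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 32))
        self.head = nn.Linear(32, 2)                          # [SOC, SOH]

    def forward(self, profile, steps=10, dt=0.1):
        h = self.cnn(profile)                                 # encode the partial charging profile
        for _ in range(steps):                                # Euler stand-in for a neural ODE solver
            h = h + dt * self.dynamics(h)
        return torch.sigmoid(self.head(h))

print(BatteryStateSketch()(torch.randn(4, 1, 32, 32)).shape)  # torch.Size([4, 2])
```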


Source: Arxiv

Minimum Description Length of a Spectrum Variational Autoencoder: A Theory

  • Deep neural networks (DNNs) trained through end-to-end learning have achieved remarkable success across diverse machine learning tasks, but they are not designed to adhere to the Minimum Description Length (MDL) principle (the classical two-part criterion is recalled after this list).
  • A novel theoretical framework for designing and evaluating deep Variational Autoencoders (VAEs) based on MDL is introduced.
  • The Spectrum VAE, a specific VAE architecture, is designed and its MDL can be rigorously evaluated under given conditions.
  • This work lays the foundation for future research on designing deep learning systems that explicitly adhere to information-theoretic principles.
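
For reference, the classical two-part MDL criterion that motivates this line of work is shown below; the paper's specific description-length analysis of the Spectrum VAE is more involved and is not reproduced here.

```latex
% Two-part MDL: prefer the model M that minimizes the bits needed to describe the model
% plus the bits needed to describe the data given the model.
L(D) \;=\; \min_{M \in \mathcal{M}} \Big[ L(M) + L(D \mid M) \Big]
```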

