techminis

A naukri.com initiative

ML News

Source: Arxiv | 4d

LayerAct: Advanced Activation Mechanism for Robust Inference of CNNs

  • Researchers have proposed a new activation mechanism called LayerAct for Convolutional Neural Networks (CNNs).
  • LayerAct addresses a key limitation of existing activation functions: their lack of robustness to noise.
  • The method reduces the influence of input shifts without compromising the homogeneity of activation outputs.
  • Experimental results demonstrate that LayerAct functions outperform other activation functions in handling noisy datasets while achieving superior performance on clean datasets.
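The layer-level mechanism can be sketched in a few lines. The form below (an element scaled by a sigmoid of its layer-normalized value) is a simplified reading of the paper's LA-SiLU variant; the function name and test values are illustrative:

```python
import numpy as np

def layer_act_silu(x, eps=1e-5):
    """LayerAct-style activation (sketch): each element is scaled by a
    sigmoid of its layer-normalized value, so the activation scale
    depends on layer-level statistics rather than the raw element alone."""
    mu = x.mean(axis=-1, keepdims=True)      # layer mean
    var = x.var(axis=-1, keepdims=True)      # layer variance
    xn = (x - mu) / np.sqrt(var + eps)       # layer-normalized input
    return x * (1.0 / (1.0 + np.exp(-xn)))   # x * sigmoid(LN(x))

# A constant shift of the whole layer moves the mean with it, so the
# gating factor sigmoid(LN(x)) is unchanged -- the shift's influence
# on the activation pattern is reduced.
x = np.array([[-1.0, 0.5, 2.0]])
y1 = layer_act_silu(x)
y2 = layer_act_silu(x + 5.0)
```

Because layer normalization cancels a constant input shift, the per-element gates for `x` and `x + 5.0` are identical, which is the noise-robustness intuition in the summary.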


Source: Arxiv | 4d

Augment then Smooth: Reconciling Differential Privacy with Certified Robustness

  • Machine learning models are vulnerable to attacks on privacy and model accuracy.
  • Standard differentially private model training is inadequate for strong certified robustness guarantees.
  • DP-CERT is a simple and effective method that combines differential privacy and robustness guarantees.
  • DP-CERT reduces Lipschitz constants and improves certified accuracy on CIFAR-10.


Source: Arxiv | 4d

Detecting Throat Cancer from Speech Signals using Machine Learning: A Scoping Literature Review

  • Cases of throat cancer are increasing globally, emphasizing the importance of early detection.
  • Artificial intelligence (AI) and machine learning (ML) can help in detecting throat cancer from speech signals.
  • A scoping literature review evaluated the performance of AI and ML in classifying speech signals of throat cancer patients.
  • Open-source code and standardization of methodologies are needed for further development in this field.


Source: Arxiv | 4d

Residual Multi-Fidelity Neural Network Computing

  • This work proposes a method for constructing a neural network surrogate model using multi-fidelity information.
  • The method formulates the correlation between a low-fidelity model and a high-fidelity model as a residual function.
  • Two neural networks are trained to work together, with the first network learning the residual function and the second network serving as the surrogate for the high-fidelity quantity of interest.
  • Numerical examples demonstrate the effectiveness of this framework in achieving accurate output predictions with significant computational cost savings.
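The two-model setup can be illustrated with cheap polynomial regressors standing in for the two neural networks; the low- and high-fidelity functions here are invented for the sketch, not taken from the paper:

```python
import numpy as np

# Illustrative low- and high-fidelity models of the same quantity:
# the high-fidelity one equals the low-fidelity one plus a smooth residual.
f_lo = lambda x: np.sin(x)                  # cheap, plentiful evaluations
f_hi = lambda x: np.sin(x) + 0.3 * x**2     # expensive, scarce evaluations

# Step 1: fit a surrogate for the low-fidelity model on many samples.
x_lo = np.linspace(0, 1, 200)
lo_surrogate = np.polynomial.Polynomial.fit(x_lo, f_lo(x_lo), deg=5)

# Step 2: fit a second model to the *residual* f_hi - f_lo using only
# a handful of expensive high-fidelity evaluations.
x_hi = np.linspace(0, 1, 8)
residual_model = np.polynomial.Polynomial.fit(
    x_hi, f_hi(x_hi) - f_lo(x_hi), deg=2)

# Surrogate for the high-fidelity quantity = low-fidelity surrogate + residual.
predict = lambda x: lo_surrogate(x) + residual_model(x)

x_test = np.linspace(0, 1, 50)
err = np.max(np.abs(predict(x_test) - f_hi(x_test)))
```

The residual is smoother and simpler than the high-fidelity function itself, so it can be learned from far fewer expensive samples, which is the source of the claimed cost savings.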


Source: Arxiv | 4d

Little is Enough: Boosting Privacy by Sharing Only Hard Labels in Federated Semi-Supervised Learning

  • Researchers propose a federated co-training approach called FedCT to boost privacy in federated semi-supervised learning.
  • FedCT shares only definitive (hard) labels on a public unlabeled dataset, improving privacy and allowing the use of local models that are not suitable for parameter aggregation.
  • Clients use a consensus of the shared labels as pseudo-labels for local training, enhancing privacy without compromising model quality.
  • Empirical evaluations and theoretical analyses suggest the applicability of FedCT in various federated learning scenarios, including fine-tuning of large language models.
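The consensus step can be sketched as a simple majority vote over the hard labels each client shares (a minimal sketch of the idea, with invented client predictions):

```python
from collections import Counter

def consensus_pseudo_labels(client_predictions):
    """FedCT-style consensus (sketch): each client shares only hard labels
    for a public unlabeled dataset; the majority vote becomes the
    pseudo-label used for local training. No parameters or soft scores
    leave a client, which is what improves privacy and lets clients run
    heterogeneous local models."""
    n_examples = len(client_predictions[0])
    consensus = []
    for i in range(n_examples):
        votes = Counter(preds[i] for preds in client_predictions)
        consensus.append(votes.most_common(1)[0][0])
    return consensus

# Three clients label a public unlabeled set of four examples.
preds = [
    ["cat", "dog", "dog", "cat"],   # client A
    ["cat", "dog", "cat", "cat"],   # client B
    ["dog", "dog", "dog", "cat"],   # client C
]
labels = consensus_pseudo_labels(preds)  # -> ["cat", "dog", "dog", "cat"]
```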


Source: Arxiv | 4d

Boosting, Voting Classifiers and Randomized Sample Compression Schemes

  • Boosting is a popular technique in machine learning that aims to combine multiple weak learners into a strong learner.
  • Traditionally, boosting algorithms have produced voting classifiers, but their theoretical performance guarantees have been suboptimal.
  • However, a new randomized boosting algorithm has been proposed that achieves improved performance.
  • This algorithm creates voting classifiers with a single logarithmic dependency on the sample size.
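For context, the classic voting construction the paper improves on looks like plain AdaBoost over decision stumps (this sketch shows the traditional scheme, not the paper's new randomized algorithm; the data is invented):

```python
import numpy as np

def adaboost_stumps(X, y, rounds=3):
    """Classic boosting (sketch): combine weak threshold 'stumps' on 1-D
    inputs into a weighted voting classifier. Labels y must be in {-1, +1}."""
    n = len(X)
    w = np.full(n, 1.0 / n)              # example weights
    ensemble = []                        # list of (alpha, threshold, sign)
    for _ in range(rounds):
        # Weak learner: exhaustively pick the best single-threshold rule.
        best = None
        for thr in np.unique(X):
            for sign in (+1, -1):
                pred = sign * np.where(X >= thr, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, thr, sign)
        err, thr, sign = best
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)   # vote weight of this stump
        pred = sign * np.where(X >= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)          # upweight mistakes
        w /= w.sum()
        ensemble.append((alpha, thr, sign))

    def vote(x):
        s = sum(a * sg * np.where(x >= t, 1, -1) for a, t, sg in ensemble)
        return np.sign(s)
    return vote

X = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([-1, -1, 1, 1, -1, -1])    # an interval: no single stump fits it
clf = adaboost_stumps(X, y, rounds=3)
acc = np.mean(clf(X) == y)              # three weighted votes fit the interval
```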


Source: Arxiv | 4d

Generalizing Denoising to Non-Equilibrium Structures Improves Equivariant Force Fields

  • Understanding the interactions of atoms such as forces in 3D atomistic systems is important for applications like molecular dynamics and catalyst design.
  • Denoising non-equilibrium structures (DeNS) is proposed as an auxiliary task to improve the performance of training neural networks.
  • DeNS generalizes denoising to a larger set of non-equilibrium structures, which have non-zero forces and multiple possible atomic positions.
  • By encoding the forces of the original non-equilibrium structure, DeNS can specify the target structure for denoising.


Source: Arxiv | 4d

Towards Adversarially Robust Dataset Distillation by Curvature Regularization

  • Dataset distillation (DD) allows datasets to be distilled to fractions of their original size while preserving the rich distributional information.
  • Recent research has been focusing on improving the accuracy of models trained on distilled datasets.
  • This paper introduces a new perspective of DD by studying how to embed adversarial robustness in distilled datasets.
  • The proposed method incorporates curvature regularization into the distillation process to achieve better adversarial robustness.
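Curvature terms of this kind are commonly approximated with finite differences of loss gradients. A toy sketch on an analytic loss (a stand-in for the network loss, not the paper's distillation pipeline; names are illustrative):

```python
import numpy as np

def grad_loss(x):
    """Gradient of an illustrative loss L(x) = ||x||^2, standing in for
    the gradient of a network's loss w.r.t. its input."""
    return 2.0 * x

def curvature_penalty(x, h=1e-3, seed=0):
    """Finite-difference curvature proxy: the norm of a Hessian-vector
    product along a random unit direction,
    ||(g(x + h v) - g(x - h v)) / (2 h)||.
    Penalizing a term like this flattens the loss surface around the
    data, which is the mechanism behind curvature-based robustness."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(x.shape)
    v /= np.linalg.norm(v)                 # unit direction
    hvp = (grad_loss(x + h * v) - grad_loss(x - h * v)) / (2.0 * h)
    return np.linalg.norm(hvp)

x = np.zeros(4)
penalty = curvature_penalty(x)   # for L = ||x||^2 the Hessian is 2I,
                                 # so the penalty is exactly 2.0
```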


Source: Arxiv | 4d

Reviewing AI's Role in Non-Muscle-Invasive Bladder Cancer Recurrence Prediction

  • Non-muscle-invasive Bladder Cancer (NMIBC) has a high recurrence rate and poor prediction tools.
  • Machine learning (ML) techniques offer promising solutions for predicting NMIBC recurrence.
  • ML algorithms leveraging various data modalities show significant promise.
  • Challenges remain in the generalizability and interpretability of AI models.


Source: Arxiv | 4d

A First Introduction to Cooperative Multi-Agent Reinforcement Learning

  • Multi-agent reinforcement learning (MARL) has exploded in popularity in recent years.
  • MARL can be categorized into three main types: centralized training and execution (CTE), centralized training for decentralized execution (CTDE), and decentralized training and execution (DTE).
  • This text is an introduction to cooperative MARL where all agents share a single, joint reward.
  • The introduction covers basic concepts and common methods for CTE, CTDE, and DTE settings in cooperative MARL.


Source: Arxiv | 4d

Unlearning Concepts in Diffusion Model via Concept Domain Correction and Concept Preserving Gradient

  • Text-to-image diffusion models have achieved remarkable success in generating photorealistic images.
  • Machine Unlearning (MU) offers a promising solution to eliminate sensitive concepts from these models.
  • A concept domain correction framework named DoCo is proposed to address the challenges of limited generalization and utility degradation in existing MU methods.
  • Experimental results show the effectiveness of DoCo in unlearning targeted concepts with minimal impact on related concepts, outperforming previous approaches even for out-of-distribution prompts.


Source: Arxiv | 4d

Fairness-Accuracy Trade-Offs: A Causal Perspective

  • Systems based on machine learning may exhibit discriminatory behavior based on sensitive characteristics.
  • Various notions of fairness and methods to quantify discrimination have been proposed.
  • There is a tension between fairness and utility, as imposing fairness constraints may decrease the utility of the decision-maker.
  • A new approach called causal fairness/utility ratio is introduced to summarize the fairness-utility trade-off across causal pathways.


Source: Arxiv | 4d

DiveR-CT: Diversity-enhanced Red Teaming Large Language Model Assistants with Relaxing Constraints

  • Recent advances in large language model assistants have raised concerns about their safety.
  • Automated red teaming offers a scalable safety evaluation method.
  • Existing approaches compromise diversity by focusing on maximizing attack success rate.
  • DiveR-CT introduces a method that relaxes constraints to enhance diversity in red teaming.


Source: Arxiv | 4d

CLIPLoss and Norm-Based Data Selection Methods for Multimodal Contrastive Learning

  • Data selection has emerged as a core issue for large-scale visual-language model pretraining.
  • The three main data selection approaches are leveraging external non-CLIP models, training new CLIP-style embedding models, and designing better metrics or strategies.
  • This paper proposes two new methods: surrogate-CLIPLoss (s-CLIPLoss) and NormSim.
  • The methods achieve improvements on ImageNet-1k and downstream evaluation tasks, and can be combined with existing techniques.


Source: Arxiv | 4d

BMRS: Bayesian Model Reduction for Structured Pruning

  • BMRS (Bayesian Model Reduction for Structured Pruning) is a fully end-to-end Bayesian method of structured pruning.
  • It is based on Bayesian structured pruning with multiplicative noise and Bayesian model reduction (BMR), allowing efficient comparison of Bayesian models under a change in prior.
  • Two realizations of BMRS derived from different priors are presented: BMRS_N offers reliable compression rates and accuracy without the need to tune thresholds, while BMRS_U achieves aggressive compression based on truncation boundaries.
  • Experiments on multiple datasets and neural networks showed that BMRS provides a competitive performance-efficiency trade-off compared to other pruning methods.

