techminis

A naukri.com initiative


AR News

VentureBeat

Nvidia CEO takes a shot at U.S. policy cutting off AI chip sales to China

  • Nvidia CEO Jensen Huang criticized the U.S. policy cutting off AI chip sales to China.
  • Nvidia had to take a $4.5 billion charge against its Q1 earnings as sales to China were halted.
  • Huang said China already has the computing capability it needs for AI, with or without U.S. chips.
  • While critical of the export policy, Huang welcomed the rescission of the Biden administration's AI Diffusion Rule.


TechCrunch

Nvidia expects to lose billions in revenue due to H20 chip licensing requirements

  • Nvidia faces a $4.5 billion charge in Q1 due to licensing requirements that block sales of its H20 AI chips to Chinese companies.
  • An additional $2.5 billion worth of H20 orders could not be shipped in the quarter because of the restrictions.
  • The H20 licensing requirements are expected to result in an $8 billion hit to Nvidia's revenue in Q2.
  • Even though the Biden-era chip export rules never took effect, Nvidia is still caught up in U.S. efforts to stifle China's AI market.


VentureBeat

Nvidia beats estimates for Q1 results as revenues rise 69% from a year ago

  • Nvidia reported fiscal Q1 2026 revenue of $44.1 billion, up 69% from a year ago.
  • A $4.5 billion charge in Q1 was attributed to export requirements for Nvidia's H20 products in China.
  • Nvidia's stock rose 4.4% in after-hours trading following the earnings announcement.
  • Excluding the charge, non-GAAP gross margin for Q1 would have been 71.3%.
  • Q1 earnings per share were $0.76 (GAAP) and $0.81 (non-GAAP).
  • Nvidia forecasts Q2 revenue of $45 billion with gross margins around 72.0%.
  • The company announced new products such as the Blackwell NVL72 AI supercomputer and expects gross margins in the mid-70% range later this year.
  • An increase in gaming revenue was noted, with the introduction of new GeForce RTX products.
  • Nvidia is expanding operations and partnerships globally, notably in the U.S., Middle East, and Asia.
  • DLSS 4 is now available in over 125 games, reflecting Nvidia's continued investment in AI-driven gaming technology.


PYMNTS

Saudi Arabia Unveils $10 Billion VC Fund in Race for Middle East’s AI Crown

  • Saudi Arabia has unveiled a $10 billion venture capital fund through its state-backed AI company Humain in a bid to become the AI leader in the Middle East.
  • Humain plans to launch Humain Ventures, a new venture fund targeting startups globally, and build significant data center capacity by 2030.
  • The company aims to handle 7% of the world's AI training and inferencing workloads by 2030 and has secured deals with major tech firms like Nvidia, AMD, Amazon, and Qualcomm.
  • Crown Prince Mohammed bin Salman chairs Humain as part of Saudi Arabia's strategy to diversify the economy with a focus on AI and collaboration with U.S. tech companies.


PCGamesN

Acer just accidentally shared some Nvidia GeForce RTX 5050 gaming GPU specs

  • Acer accidentally shared some specifications of the rumored Nvidia GeForce RTX 5050 gaming GPU, specifically focusing on the laptop version.
  • The laptop version of the GPU is expected to have higher clock speeds compared to the RTX 5060 and 5070 models.
  • Rumors also suggest the existence of a desktop Nvidia GeForce RTX 5050 model with 2,560 CUDA cores and 8GB of GDDR6 VRAM.
  • While details on the laptop version are limited, Nvidia appears to be compensating for the lower-tier spec by raising clock speeds to boost performance.


TechCrunch

Why export restrictions aren’t the only thing to pay attention to in Nvidia’s earnings

  • Nvidia is set to report Q1 earnings for fiscal year 2026, with focus likely on the impact of U.S. chip export controls on its international chip business and future guidance.
  • Zacks Investment Research suggests that Nvidia's rollout of the new GB200 NVL72 hardware, a single-rack exascale computer, is a crucial area for shareholders to monitor; delivery estimates for the machine were adjusted after the market turmoil around DeepSeek in January.
  • The company's ability to deliver a significant number of the GB200 NVL72 units in Q2 could positively influence investor sentiment regarding enterprise adoption of the latest AI tech.
  • While U.S. export controls may weigh on Nvidia's stock in the short term, long-term valuation is believed to hinge more on demand for GB200 NVL72 units, as Nvidia has proven resilient to market fluctuations.


Insider

Nvidia earnings updates: Wall Street watching for tariff impact, China market share

  • Nvidia is set to release its first-quarter earnings after the closing bell on Wednesday, with Wall Street monitoring the potential impact of tariffs and the company's market share in China.
  • There are concerns about how President Donald Trump's tariffs will affect Nvidia's business, but Wall Street remains broadly optimistic about the report.
  • Bank of America analysts expect Nvidia to post a modest sales beat in the first quarter but anticipate 'messy' guidance for the current quarter due to the potential impact of China tariffs.
  • Analysts will also be watching segment-level revenue (data center, compute, networking, gaming, professional visualization, and automotive) along with adjusted gross margin, expenses, and earnings.


PCGamesN

The Nvidia GeForce RTX 5090 price just nosedived, if you live in the right place

  • The Nvidia GeForce RTX 5090's price has dropped significantly in Europe, making it more affordable for potential buyers.
  • Initially hard to buy due to limited supply and a high MSRP, the card is now seeing stock stabilize in Europe, letting buyers pick up the fastest consumer GPU at a reduced price.
  • In the UK specifically, the RTX 5090 is available for less than its launch price, a good opportunity for interested customers.
  • With its 32GB frame buffer and 21,760 CUDA cores, the RTX 5090 remains the top choice for those prioritizing performance.


HackerNoon

Achieve 100x Speedups in Graph Analytics Using Nx-cugraph

  • Nx-cugraph is a RAPIDS backend that accelerates NetworkX graph analytics by leveraging NVIDIA GPUs for massive speedups.
  • NetworkX, a popular Python graph analytics library, struggles with performance for large datasets due to its pure-Python implementation.
  • NetworkX 3.0 introduced the ability to dispatch algorithms to accelerated backends, such as nx-cugraph, without abandoning existing code.
  • Setting the NX_CUGRAPH_AUTOCONFIG environment variable to True enables NetworkX to use the 'cugraph' backend by default for GPU acceleration (see the sketch after this list).
  • Nx-cugraph significantly accelerates common graph algorithms like Betweenness Centrality and PageRank, showcasing speedups for both small and large datasets.
  • For small graphs, CPU may be faster due to GPU kernel launch overhead, but for larger datasets, nx-cugraph demonstrates its power.
  • Nx-cugraph provides over 100x speedups for algorithms like Betweenness Centrality on large graphs, making larger k values, and thus higher approximation accuracy, practical.
  • Compared to default NetworkX implementations on CPU, nx-cugraph consistently delivers faster results, making it a valuable tool for graph analytics.
  • Migrating NetworkX workflows to GPU acceleration with nx-cugraph yields substantial benefits, including dramatic performance improvements, minimal code changes, enhanced scalability, simple setup, and a familiar API.
  • Nx-cugraph is recommended for handling real-world graph problems that exceed the capabilities of traditional CPU-only NetworkX, unlocking new possibilities in graph analytics.
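To make the dispatch mechanism concrete, here is a minimal sketch of the pattern the article describes. It assumes an NVIDIA GPU with the nx-cugraph package installed; the synthetic graph and parameter values are illustrative only.

```python
import os

# Must be set before importing networkx for automatic dispatch to take effect.
os.environ["NX_CUGRAPH_AUTOCONFIG"] = "True"

import networkx as nx

# Synthetic random graph standing in for a real dataset.
G = nx.erdos_renyi_graph(10_000, 0.001, seed=42)

# With autoconfig enabled, these calls dispatch to the cugraph backend.
pagerank_scores = nx.pagerank(G)
bc_scores = nx.betweenness_centrality(G, k=100)  # larger k = better approximation

# The backend can also be requested explicitly on a per-call basis.
pagerank_gpu = nx.pagerank(G, backend="cugraph")
```

Because dispatching happens inside NetworkX, existing analysis code keeps its familiar API and only the backend changes.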


HackerNoon

Supercharge ML: Your Guide to GPU-Accelerated cuML and XGBoost

  • The article provides insights on leveraging GPU acceleration for fast machine learning using cuML and XGBoost.
  • cuML, part of the RAPIDS™ suite, offers GPU-accelerated machine learning algorithms similar to Scikit-Learn but optimized for NVIDIA GPUs.
  • XGBoost, known for its performance, can be GPU-accelerated by setting parameters like tree_method to gpu_hist (spelled tree_method='hist' with device='cuda' in XGBoost 2.0 and later) for faster training on large datasets; see the sketch after this list.
  • Dimensionality reduction techniques like PCA, Truncated SVD, and UMAP are essential for managing high-dimensional data and improving model performance.
  • Scaling features before applying techniques like PCA is crucial to avoid misleading components due to varying feature scales.
  • The article includes code examples for CPU (Scikit-Learn) and GPU (cuML) implementations of PCA and Truncated SVD, showcasing the speedup with GPU acceleration.
  • UMAP, a non-linear reduction technique, can reveal structures in data that linear methods like PCA might overlook.
  • Key takeaways emphasize the accessibility of GPU acceleration, API familiarity between cuML and Scikit-Learn, the importance of speed in large datasets, and the significance of dimensionality reduction.
  • The article provides Google Colab notebooks for running the code snippets on cuML, XGBoost on GPU, and dimensionality reduction techniques.
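As a rough illustration of the near drop-in API the article highlights, the sketch below contrasts Scikit-Learn and cuML PCA and trains a GPU-accelerated XGBoost model. It assumes a CUDA-capable machine with RAPIDS cuML and XGBoost installed; the random data is purely illustrative.

```python
import numpy as np
import xgboost as xgb
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA as CpuPCA
from cuml.decomposition import PCA as GpuPCA

X = np.random.rand(100_000, 50).astype(np.float32)
y = np.random.randint(0, 2, size=X.shape[0])

# Scale first: PCA on unscaled features produces misleading components.
X_scaled = StandardScaler().fit_transform(X)

X_cpu = CpuPCA(n_components=10).fit_transform(X_scaled)  # Scikit-Learn on CPU
X_gpu = GpuPCA(n_components=10).fit_transform(X_scaled)  # cuML: same call shape

# GPU-accelerated XGBoost as described in the article; XGBoost >= 2.0 spells
# this tree_method="hist" with device="cuda".
model = xgb.XGBClassifier(tree_method="gpu_hist", n_estimators=100)
model.fit(X_scaled, y)
```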


HackerNoon

Dask & cuDF: Key to Distributed Computing in Data Science

  • This article discusses the significance of Dask and cuDF in distributed computing and data processing for data science professionals.
  • Dask, a library for parallelized computing in Python, allows complex workflows using data structures like NumPy arrays and Pandas DataFrames with parallel execution.
  • Dask's client/worker architecture has a client that schedules tasks and workers that execute the computations in parallel.
  • Dask's delayed operations defer computation until results are requested, building a task graph that then executes in parallel (see the sketch after this list).
  • The integration of cuDF with Dask enables GPU acceleration for high-performance data processing, especially in multi-GPU scenarios.
  • Dask-cudf offers advantages in distributed computing, including automatic data shuffling across GPUs and parallel group operations.
  • Key performance benefits of Dask include parallel execution speedup, GPU acceleration, memory efficiency, and automatic task scheduling.
  • For the NVIDIA Data Science Professional Certification, mastering concepts like lazy evaluation, futures patterns, and cluster management is crucial.
  • Best practices mentioned include choosing the right tool, optimizing partition size, monitoring GPU memory usage, and understanding graph optimization.
  • The article emphasizes the importance of understanding when to use Dask, cuDF, or dask-cudf based on the computational requirements and dataset sizes.
  • In the next post, the focus will shift to machine learning workflows with RAPIDS, covering cuML and distributed training scenarios.
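The sketch below illustrates the deferred-execution pattern described above. It assumes dask and pandas are installed; the partition loader is a stand-in, and the dask-cudf lines are commented out since they additionally require RAPIDS and an NVIDIA GPU.

```python
import dask
import dask.dataframe as dd
import pandas as pd

@dask.delayed
def load_partition(i):
    # Stand-in for reading one file or shard; deferred until compute().
    return pd.DataFrame({"key": list("ab") * 500, "value": range(1000)})

# Building the graph runs nothing yet: these are lazy task descriptions.
parts = [load_partition(i) for i in range(4)]
ddf = dd.from_delayed(parts)

# compute() triggers the whole graph; partitions load and aggregate in parallel.
result = ddf.groupby("key")["value"].mean().compute()
print(result)

# GPU variant (requires dask-cudf): same API over cuDF-backed partitions.
# gddf = ddf.to_backend("cudf")
# result = gddf.groupby("key")["value"].mean().compute()
```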


VoIP

Swedish Industry Giants Unite to Build Nation’s Largest AI Supercomputer

  • Swedish corporations Ericsson, AstraZeneca, Saab, SEB, and Wallenberg Investments AB have joined forces to create a national AI facility with a focus on high-performance computing.
  • The initiative will feature two Nvidia DGX SuperPODs equipped with Grace Blackwell GB300 systems, making it Sweden's largest AI supercomputer upon completion.
  • The facility will support various AI tasks, such as training industry-specific models and large-scale inference, while Nvidia plans to establish its first AI Technology Centre in Sweden to bolster local research endeavors.
  • The collaboration aims to leverage AI for innovation and commercial growth, with specific applications in areas like network technology, drug discovery, defense systems, and customer services, highlighting the interplay between AI, 5G, and future technological advancements.


HackerNoon

Achieve 400x Performance Boost with NVIDIA RAPIDS cuDF: A Guide

  • The article discusses leveraging NVIDIA RAPIDS cuDF for significant performance gains in data processing, achieving up to 400x speed improvements over pandas with minimal code changes.
  • Key topics covered include performance benchmarks, easy migration from pandas to cuDF, exploratory data analysis using the NYC Taxi dataset, and using pandas syntax with cuDF backend acceleration.
  • Setting up RAPIDS cuDF is straightforward, and it offers a pandas-like API, allowing for easy integration and immediate benefits of GPU acceleration by replacing pd.DataFrame() with cudf.DataFrame().
  • Performance benchmarks using the NYC Taxi dataset show data loading running over 22x faster and sorting operations up to ~29x faster with cuDF compared to pandas.
  • Other operations like groupby operations and complex filtering show notable speedups of approximately 20x to 123x, showcasing the efficiency of cuDF for data processing tasks.
  • cuDF seamlessly integrates into existing analysis workflows, offering features like data filtering with complex conditions, feature engineering, and visualization-ready aggregations.
  • The article introduces the cudf.pandas extension, which lets existing pandas code benefit from GPU acceleration without any code changes (see the sketch after this list).
  • Key takeaways for certification include significant performance improvements, seamless integration with existing pandas code, single GPU focus, and considerations regarding GPU memory, SQL syntax, and dependencies when using cuDF.
  • Readers are encouraged to try cuDF in Google Colab or install it locally; beyond raw speed, RAPIDS cuDF simplifies GPU computing for data scientists, making it a valuable tool for accelerating data-processing workflows.
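A minimal sketch of the two adoption paths described above follows; it assumes RAPIDS cuDF on a CUDA machine, and the file name and column names are hypothetical.

```python
import cudf

# Path 1: use cuDF directly -- a pandas-like API with data on the GPU.
gdf = cudf.read_csv("nyc_taxi_sample.csv")
fares = (
    gdf[gdf["trip_distance"] > 1.0]   # filtering works as it does in pandas
    .groupby("passenger_count")["fare_amount"]
    .mean()
    .sort_values(ascending=False)
)
print(fares.to_pandas())  # move the small result back to the CPU

# Path 2: cudf.pandas -- accelerate unmodified pandas code.
#   In a notebook:  %load_ext cudf.pandas   (before `import pandas as pd`)
#   For a script:   python -m cudf.pandas my_script.py
```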


VoIP

Trend Micro and Nvidia Unite for Secure AI Revolution

  • Trend Micro and Nvidia collaborate to address the need for secure AI systems in industries demanding private AI infrastructure.
  • The partnership integrates Trend Micro’s cybersecurity solutions with Nvidia’s Enterprise AI Factory framework to safeguard the AI lifecycle for various sectors like government, healthcare, and finance.
  • Nvidia's GPU-accelerated security technologies enhance threat detection, data loss prevention performance, and cost efficiency for private AI applications.
  • The collaboration aims to accelerate AI adoption for enterprises while ensuring governance and compliance standards are met, with a focus on safeguarding sensitive information in diverse AI deployments.


SiliconANGLE

Dell’s storage updates highlight growing influence of AI data platform

  • Dell Technologies Inc. announced updates to PowerScale and ObjectScale that enhance its storage architecture for AI, incorporating features like Project Lightning, PowerEdge XE servers, and Nvidia's KV cache technology.
  • Travis Vigil, chief product officer of IT infrastructure at Dell, highlighted the importance of fast, scalable storage in AI deployments, noting that Project Lightning acts as an accelerator for the KV cache, improving efficiency and reducing latency.
  • Dell's collaboration with Nvidia includes the AI data platform, an appliance form factor combining compute, storage, and networking to cater to large-scale AI workloads, emphasizing cyber resilience and ransomware detection.
  • Additionally, Dell introduced the PowerScale Cybersecurity Suite, an AI-driven solution offering features like ransomware detection, near-instant recovery, airgap vault for backups, and disaster recovery software.

