techminis

A naukri.com initiative


Open Source News

Ars Technica

Go Module Mirror served backdoor to devs for 3+ years

  • A mirror proxy Google runs on behalf of developers of the Go programming language pushed a backdoored package for more than three years until Monday.
  • The Go Module Mirror caches open source packages available on GitHub and elsewhere to ensure compatibility and faster downloads.
  • Since November 2021, the Go Module Mirror has hosted a backdoored version of a widely used module.
  • The backdoored file used typosquatting, a technique that redirects users to a malicious file when they mistype or slightly vary the correct name.
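Typosquatting works because a near-miss module path looks legitimate at a glance. A minimal sketch of how a tool might flag suspicious import paths by string similarity (the module paths and the 0.9 threshold are illustrative, not taken from any real scanner):

```python
from difflib import SequenceMatcher

# Illustrative list of well-known Go module paths (not exhaustive).
KNOWN_MODULES = [
    "github.com/boltdb/bolt",
    "github.com/gorilla/mux",
]

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1]; 1.0 means identical strings."""
    return SequenceMatcher(None, a, b).ratio()

def flag_typosquats(candidate: str, threshold: float = 0.9):
    """Return known modules that candidate closely resembles but does not match."""
    return [
        known
        for known in KNOWN_MODULES
        if known != candidate and similarity(candidate, known) >= threshold
    ]

# A small insertion in the org name still scores very close to the original.
print(flag_typosquats("github.com/boltdb-go/bolt"))  # ['github.com/boltdb/bolt']
```

A real defense would combine name similarity with signals like download counts and publish dates rather than relying on string distance alone.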

TechCrunch

Hugging Face researchers aim to build an ‘open’ version of OpenAI’s deep research tool

  • Hugging Face developers have built an 'open' version of OpenAI's deep research tool called Open Deep Research.
  • Open Deep Research pairs an AI model (OpenAI's o1) with an open-source framework for planning and analyzing research tasks.
  • Open Deep Research achieved a GAIA score of 54%, compared to OpenAI deep research's score of 67.36%.
  • While alternative deep research reproductions exist, they lack the proprietary o3 model that powers OpenAI's deep research.

Marktechpost

Deep Agent Released R1-V: Reinforcing Super Generalization in Vision-Language Models with Cost-Effective Reinforcement Learning to Outperform Larger Models

  • Deep Agent released R1-V, a reinforcement learning approach that enhances the generalization ability of vision-language models (VLMs) while being cost-effective.
  • The R1-V approach employs reinforcement learning techniques to teach VLMs to develop robust visual counting abilities, enhancing their performance in various AI applications.
  • Despite having only 2 billion parameters, R1-V outperforms a significantly larger model in out-of-distribution (OOD) tests, demonstrating the importance of the training methodology and reinforcement learning strategies.
  • R1-V's training efficiency and relatively low computational cost of $2.62 make it an attractive choice for researchers and developers seeking high performance without extensive computational resources.

Kaspersky

The biggest supply chain attacks in 2024 | Kaspersky official blog

  • Supply chain attacks are among the most dangerous threats to any firm's security because they occur in infrastructure outside the security team's control.
  • Major supply chain attacks of 2024 include: malicious npm packages that stole SSH keys from hundreds of developers on GitHub; Trojanized jQuery versions on jsDelivr, npm, and GitHub; and an attack on the cdn.polyfill.io domain that redirected users to a Vietnamese sports betting site via a fake domain impersonating Google Analytics.
  • The backdoor implanted in the XZ Utils project could have become the biggest supply chain attack of 2024, with devastating consequences, but it was caught in test versions of several Linux distributions, and most Linux users remained safe.
  • In an era of increasing supply chain attacks, businesses should carefully review any code used in their projects, maintain a Software Bill of Materials (SBOM) to track dependencies and components, and deploy an XDR-class security solution on the corporate network.
  • Researchers also urge monitoring the network for suspicious activity and engaging an external service for timely threat detection and response.
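The SBOM and code-review advice can be made concrete with integrity pinning: a stdlib-only sketch (the artifact name and pin source are hypothetical) that rejects any dependency whose hash drifts from a vetted record, the same idea lockfiles implement:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical SBOM-style pin list: artifact name -> expected digest.
# In practice these digests come from a vetted lockfile or SBOM entry,
# never from the downloaded artifact itself.
PINNED = {
    "example-lib-1.2.0.tgz": sha256_hex(b"vetted artifact bytes"),
}

def verify_artifact(name: str, data: bytes) -> bool:
    """Reject artifacts that are unknown or whose digest drifted from the pin."""
    expected = PINNED.get(name)
    return expected is not None and sha256_hex(data) == expected

print(verify_artifact("example-lib-1.2.0.tgz", b"vetted artifact bytes"))  # True
print(verify_artifact("example-lib-1.2.0.tgz", b"tampered bytes"))         # False
```

This catches post-publication tampering (as in the jQuery and polyfill.io cases) but not a backdoor that was present when the pin was first recorded, which is why code review still matters.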

HackerNoon

Open Source Solution Makes Kill-port Implementation a Breeze

  • A developer created an open-source solution called port-client to address the issue of slow port management.
  • The developer found that the existing solution, kill-port, was slow and decided to build a faster and more efficient alternative.
  • Port-client is 11 times faster than kill-port, allowing developers to free up ports almost instantly.
  • Port-client has gained significant popularity, with nearly 80,000 downloads and growing.
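Freeing a port starts with detecting whether anything is bound to it. The summary doesn't show port-client's internals, so this is only a stdlib sketch of the detection half:

```python
import socket

def is_port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """True if something is accepting TCP connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0

# Demo: occupy an ephemeral port, then check it.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0 -> the OS picks a free port
server.listen(1)
port = server.getsockname()[1]
print(is_port_in_use(port))     # True while the listener is alive
server.close()
print(is_port_in_use(port))     # False once the port is released
```

The speed difference the article describes likely comes from how the occupying process is found and killed, which is platform-specific and not shown here.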

MIT News

Introducing the MIT Generative AI Impact Consortium

  • The MIT Generative AI Impact Consortium brings together industry leaders and MIT's top minds to address the societal impacts of generative AI and large language models (LLMs).
  • The consortium's vision is rooted in MIT's core mission: pushing forward newer and more efficient models while guiding their development and their impact on the world.
  • The consortium is founded on three pivotal questions: How can AI-human collaboration create outcomes that neither could achieve alone? What is the dynamic between AI systems and human behavior? How can interdisciplinary research guide the development of better, safer AI technologies that improve human life?
  • The six founding members (Analog Devices, The Coca-Cola Co., OpenAI, Tata Group, SK Telecom, and TWG Global) will work hand-in-hand with MIT researchers to accelerate breakthroughs and address industry-shaping problems.
  • The core of the consortium's mission is collaboration: bringing MIT researchers and industry partners together to unlock generative AI's potential while ensuring its benefits are felt across society.
  • One core goal is to guide this change in a way that benefits both businesses and society by educating global business leaders and employees on the evolving uses and applications of AI.
  • Participants share a common goal of advancing generative AI for broad societal benefit; success within the initiative is defined by shared progress, open innovation, and mutual growth.
  • The consortium also aims to prepare the workforce of tomorrow. Just as the first commercial digital computers stoked fears of job losses, generative AI unsettles today's leaders, and the consortium aims to reduce their fear of missing out (FOMO).
  • Generative AI is no longer confined to isolated research labs; it is driving innovation across industries and disciplines, and MIT promotes the technology by connecting researchers, students, and industry leaders to solve complex challenges.
  • The consortium is one of many efforts to change perceptions of AI; in a world of robots and new technology, a platform to discuss and learn is critical to leveraging AI for the greatest possible benefit.

Medium

Weekly AI Update: Innovations, Competitions, and the Open-Source Revolution

  • OpenAI has secured a $500 billion investment, highlighting the confidence in AI's potential.
  • Anticipation is building for GPT-5, which promises significant improvements and a seamless user experience.
  • DeepSeek's R1, an open-source AI model, challenges traditional paid models, offering accessibility and competition.
  • Innovations like Gemini 2.0, Perplexity's enhanced API, and advanced content creation tools are reshaping the AI landscape.

SpicyIP

Call for Applications: SpicyIP Tech Innovation Policy Fellowship 2025 (Apply by February 23)

  • SpicyIP Tech Innovation Policy Fellowship 2025 is now open for applications.
  • The fellowship aims to contribute to the analysis of IP-related law and policy around new and emerging technologies in India.
  • Selected fellows will be required to write and publish at least one rigorously researched blog post of 1500 words each month.
  • The fellowship offers a stipend of INR 4,000 per month for the 12-month duration.

Medium

The DeepSeek Divide: A Defining Moment in AI’s Global Power Struggle

  • DeepSeek has sparked a global AI race that is shaping government policies, market strategies, and industry alliances.
  • The debate over DeepSeek has created distinct factions, each with its own perspectives, motivations, and ambitions.
  • Skeptics call for more transparency, while validators engage in direct experimentation to provide empirical validation or refute claims.
  • For US lawmakers and China hawks, DeepSeek’s rise represents a geopolitical wake-up call and reinforces calls for stronger restrictions on AI hardware and technology sharing.
  • Investors and market analysts see DeepSeek’s efficiency breakthrough as a potential market disruptor that could reshape AI economics.
  • National pride and selective memory of AI progress have led some to dismiss DeepSeek’s efficiency gains, underestimating global competitors.
  • Major Western AI firms have downplayed DeepSeek’s success, framing it as either overhyped or dependent on unfair advantages.
  • The backlash against DeepSeek is more about competitive positioning than AI ethics.
  • Some companies are adapting quickly, incorporating DeepSeek-inspired efficiencies into their own AI strategies.
  • Regulators are focusing on legal and ethical concerns that could determine DeepSeek’s access to global markets.
  • The emergence of DeepSeek has the potential to drive down the cost of AI training and deployment, making advanced AI more accessible to a broader range of players.
  • DeepSeek is a defining moment in the global AI power struggle, accelerating changes in AI business models, regulation, and international competition.
  • AI is no longer just about technological advancement—it is about who controls the future of intelligence itself.

PYMNTS

OpenAI CEO Sam Altman: Company Considering ‘Different Open-Source Strategy’

  • OpenAI is reconsidering its closed-source development approach and exploring a different open-source strategy.
  • This decision comes after DeepSeek's release of a lower cost open-source AI model.
  • OpenAI CEO Sam Altman acknowledged the need for a different approach but stated it is not the highest priority at the moment.
  • OpenAI Chief Product Officer Kevin Weil mentioned the possibility of open sourcing older AI models.

HackerNoon

An Open Source Exploit Last Year Changed How Professionals Think of Security

  • Open source software, maintained largely by volunteers, is a major security risk for corporations and governments. Vulnerabilities in such code now make it a prime target for cyberattacks by both malicious hackers and state actors. Reports highlight the risks: 82% of open source components are considered risky due to poor maintenance, outdated code, or security flaws. Many of these projects are run by small teams or individual volunteers with limited resources, leaving them vulnerable to attacks.
  • The xz Utils incident was a major example of just how vulnerable open source security is. Andres Freund, a software engineer at Microsoft, “inadvertently found a backdoor hidden in a piece of software that is part of the Linux operating system.” This backdoor came from the release tarballs for xz Utils, which were tampered with and allowed unauthorised access to systems using affected versions. The source code that was compromised was of the xz Utils open source data compression utility in Linux systems. The engineer prevented a “potentially historic cyberattack.”
  • Adding to the risks in open source security is the rise of large language models (LLMs), which attackers can misuse. Yet LLMs also offer opportunities to improve open source security by flagging suspicious changes and detecting unusual patterns in contributor behaviour. Deploying an open source LLM on a server or in a cloud environment, however, introduces the risk of unauthorized access to the model.
  • Supply chain attacks on open source software are increasing due to the growing reliance on open-source libraries and the rise of sophisticated attack methods like phishing and social engineering. According to Synopsys, vulnerabilities in open source software are increasing. The federal government itself is one of the largest consumers of open source software and will continue to increase its involvement in the space.
  • Furthermore, state actors remain one of the biggest threats. Open source software offers them a low-cost, high-reward target for espionage, sabotage, and disruption. Governments are likely to get more involved, helping promote public-private partnerships to improve security across the wider ecosystem.
  • Phishing attacks are dangerous precisely because they exploit trust rather than breaching technical defenses, tricking individuals into executing malicious code in a trusted environment. Open source thrives on contributions from faceless developers working in good faith, often without direct interaction or identity verification; GenAI undermines this foundation by making it feasible for many of those faceless contributors to be entirely fabricated.
  • As we enter 2025, open source software is at a critical point. The threats are becoming more sophisticated, driven by state actors, the misuse of AI tools like LLMs, and a focus on supply chain interference. However, with proactive measures, greater investment, and shared responsibility, it’s entirely possible to create a future where open source continues to thrive as a force for innovation and progress.
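The "unusual patterns in contributor behaviour" that LLMs or simpler tools might flag can be illustrated with a crude non-LLM heuristic (the field names, path list, and 10-commit threshold are invented for illustration): treat build- and release-tooling changes from low-history accounts as review-worthy, which is roughly where the xz Utils backdoor hid.

```python
# Paths whose modification warrants extra scrutiny (illustrative list).
SENSITIVE_PATHS = ("build/", "m4/", "configure.ac", "Makefile.am")

def is_suspicious(commit: dict) -> bool:
    """Flag commits from low-history authors that touch build/release tooling."""
    new_author = commit["author_prior_commits"] < 10
    touches_build = any(
        path.startswith(SENSITIVE_PATHS) for path in commit["files"]
    )
    return new_author and touches_build

# A build-script change from a near-unknown account gets flagged.
commit = {"author_prior_commits": 2, "files": ["m4/build-to-host.m4"]}
print(is_suspicious(commit))  # True
```

A heuristic this blunt would drown reviewers in false positives on a busy project; the article's point is that LLMs could supply a far richer version of the same signal.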

Unite.AI

Allen AI’s Tülu 3 Just Became DeepSeek’s Unexpected Rival

  • Allen AI has released the new Tülu 3 family of models that are matching or even beating DeepSeek on key benchmarks.
  • Tülu 3 is open-source, with Allen AI releasing the complete training pipeline, code, and even their reinforcement learning method called Reinforcement Learning with Verifiable Rewards (RLVR) that made this possible.
  • Tülu 3 is built using a unique four-stage training process, which involves strategic data selection, building better responses, learning from comparisons, and Reinforcement Learning with Verifiable Rewards.
  • RLVR replaces subjective reward models with concrete verification, a technical breakthrough that deserves attention: it trains Tülu 3 against the verifiable correctness of its answers, yielding binary feedback with no room for partial credit or fuzzy evaluation.
  • Tülu 3's 405B-parameter version competes directly with top models in math, coding, and instruction following.
  • Allen AI has released complete documentation of the development process including complete training pipelines, data processing tools, evaluation frameworks and implementation specifications.
  • This open approach accelerates innovation across the field, enabling developers to build on proven approaches, and sparking a new wave of AI development.
  • The success of Tülu 3 is a big moment for open AI development, which changes the industry when open source models match or exceed private alternatives.
  • Allen AI's verifiable rewards and multi-stage training techniques pave the way for future AI development, providing a foundation for teams to build upon and push performance even higher.
  • Tülu 3's breakthroughs in multi-stage training and verifiable rewards hint at what is coming; a new wave of AI development has just begun.
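The binary, no-partial-credit feedback that RLVR relies on can be sketched as a reward function (a toy exact-match checker; Allen AI's actual verifiers are more elaborate):

```python
def verifiable_reward(model_answer: str, ground_truth: str) -> float:
    """Binary reward: 1.0 only for a verifiably correct answer, no partial credit."""
    return 1.0 if model_answer.strip() == ground_truth.strip() else 0.0

# Toy rollout scoring: each completion earns exactly 0 or 1,
# and these rewards would then drive a standard RL policy update.
completions = ["42", "41", " 42 "]
rewards = [verifiable_reward(c, "42") for c in completions]
print(rewards)  # [1.0, 0.0, 1.0]
```

The design choice is that the reward comes from a program, not a learned model, so it cannot be gamed by answers that merely *look* plausible to a reward model.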

HackerNoon

The HackerNoon Newsletter: Can the Blockchain Make Traditional Courts Redundant? (2/1/2025)

  • Can the Blockchain Make Traditional Courts Redundant? - Can multisig transactions serve as legally recognized arbitral awards under international law?
  • Scientists Hack Pac-Man to Make Physical Therapy Less of a Chore - Video games can help people recover their motor skills through repetitive exercises, but in a way that still feels fun and engaging.
  • Successful Entrepreneurs Recommend These Books to Those Looking to Change Their Mindset - What We Owe the Future and other titles that are a must-read for those looking to change their habits.
  • Web3 Promised to Decentralize the Internet—AI Might Actually Make It Happen - Artificial Intelligence and Web3 are coming together to transform how we transact globally and reshape our world.

Hackaday

Time vs Money, 3D Printer Style

  • A Hackaday writer shares their experience of buying untested, returned-to-manufacturer 3D printers and how it turned out.
  • The first printer they bought was a success, while the second one had a defective bed-touch sensor.
  • They managed to fix the issue by tweaking the firmware and using a cheap knock-off touch probe.
  • Despite the extra effort, they consider the purchase worth it, as they now have three printers running at a significantly lower cost.

Medium

DeepSeek-R1 Explained: The AI Model That’s Smarter, Faster, and More Efficient

  • DeepSeek-R1 is an AI model that uses Chain of Thought (CoT) prompting to boost its reasoning skills and explain its thought process step-by-step.
  • Unlike traditional AI models, DeepSeek-R1 learns through trial and error, similar to a baby learning to walk.
  • Model distillation allows DeepSeek-R1 to deliver much of the power of larger models in a smaller, cheaper form.
  • DeepSeek-R1 represents a blueprint for building smarter and more accessible AI systems that can be used for faster app development and as educational tutors.
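Chain of Thought prompting amounts to asking the model to emit its intermediate steps before the final answer. A minimal sketch of such a prompt wrapper (the template wording is illustrative, not DeepSeek's actual format):

```python
def cot_prompt(question: str) -> str:
    """Wrap a question so the model reasons step by step before answering."""
    return (
        "Answer the question below. First think through the problem "
        "step by step, then give the final answer on its own line "
        "prefixed with 'Answer:'.\n\n"
        f"Question: {question}"
    )

prompt = cot_prompt("A train travels 60 km in 40 minutes. What is its speed in km/h?")
print(prompt)
```

Models like DeepSeek-R1 go further by baking this behavior into training rather than relying on the prompt, but the prompt-level version shows the core idea: elicit the reasoning trace, then read off the answer.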
