techminis

A naukri.com initiative

Databases

Amazon | 1w

Image Credit: Amazon

FundApps’s journey from SQL Server to Amazon Aurora Serverless v2 with Babelfish

  • FundApps is a compliance-as-a-service platform that enables compliance teams to respond efficiently to regulatory change, reduce complexity, and increase certainty in the compliance process.
  • FundApps combined cloud computing with regulatory expertise to launch the first compliance-as-a-service (CaaS) product.
  • FundApps migrated its service from a SQL Server database to Amazon Aurora Serverless v2 with Babelfish for Aurora PostgreSQL and Amazon S3 to improve scalability and reduce costs.
  • FundApps combines its extensive regulatory dataset with its clients' positions in the Rule Engine to generate actionable results.
  • The Regulatory Data service ingests thousands of files daily from various regulatory sources and processes the data for use by the FundApps Rule Engine.
  • Previously, FundApps overprovisioned resources to handle peak load, which was cost-inefficient. Aurora Serverless v2 provides an auto-scaling configuration in which the database scales capacity up or down based on the application's needs (see the sketch after this list).
  • FundApps used Babelfish for Aurora PostgreSQL, a feature of Amazon Aurora PostgreSQL-Compatible Edition, so that Aurora understands commands from applications written for SQL Server.
  • The migration from a traditional, proprietary database to Aurora PostgreSQL with Babelfish, combined with the strategic use of complementary AWS services and tools, delivered significant cost optimization, scalability, and performance improvements, along with tangible operational excellence for FundApps.
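A minimal boto3 sketch of the kind of Aurora Serverless v2 auto-scaling configuration described above; the cluster identifier and capacity range are illustrative assumptions, not FundApps's actual settings.

```python
import boto3

rds = boto3.client("rds")

# Hypothetical cluster identifier; the ACU range is illustrative only.
rds.modify_db_cluster(
    DBClusterIdentifier="regulatory-data-cluster",
    ServerlessV2ScalingConfiguration={
        "MinCapacity": 0.5,   # capacity held at idle
        "MaxCapacity": 16.0,  # ceiling for peak ingestion windows
    },
    ApplyImmediately=True,
)
```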

Amazon | 1w

Image Credit: Amazon

Shrink storage volumes for your RDS databases and optimize your infrastructure costs

  • Amazon RDS has launched the ability to shrink storage volumes using Amazon RDS Blue/Green Deployments - a nice addition to the list of new use cases that Blue/Green Deployments now supports.
  • Blue/Green Deployments create a fully managed staging environment, or Green databases, with your specified storage size, and keep the Blue and Green databases in sync. When you are ready, you can promote the Green databases to be the new production system in as fast as a minute.
  • Previously, reducing Amazon RDS storage involved manually migrating data to a new database instance with a smaller storage configuration. Some of the common approaches to this migration included logical backup and restore, AWS Database Migration Service (AWS DMS), and native database replication. These methods required manual setup and coordination of the cutover procedures, which was complex and error-prone.
  • Blue/Green Deployments storage volume shrink is available for Amazon RDS for PostgreSQL major versions 12 and later, RDS for MySQL major versions 5.7 and later, and Amazon RDS for MariaDB major versions 10.4 and later.
  • To reduce your RDS instance storage using Amazon RDS Blue/Green Deployments, you create a fully managed staging environment with the smaller storage size and keep the Blue and Green databases in sync (see the sketch after this list). This simplified approach offers more predictable downtime and lets you increase or decrease your storage volume size based on anticipated application demand.
  • It is important to consider both storage space and performance requirements, including IOPS and throughput, when rightsizing Amazon RDS database instance storage.
  • To improve the performance of the storage volume shrink operation, consider using io2 Block Express, the best-performing storage type, when creating the green instance. The baseline IOPS and throughput for your storage volume type and allocated storage are essential for optimal performance.
  • PostgreSQL vacuum or MySQL purge cleans up space occupied by obsolete data and makes the space available for subsequent operations, minimizing data bloat.
  • RDS Blue/Green Deployments use PostgreSQL physical streaming or logical replication technology depending on specifications of the green environment.
  • With Amazon RDS Blue/Green Deployments, you can now scale your storage back down after seasonal peaks, such as the holiday period, if needed.
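A minimal boto3 sketch of creating a Blue/Green deployment with a smaller green volume, assuming the storage-shrink parameters (such as TargetAllocatedStorage) described in the post; the source ARN, target size, and storage type are placeholders.

```python
import boto3

rds = boto3.client("rds")

# Source ARN and target sizes are placeholders; TargetAllocatedStorage and
# TargetStorageType are assumed to match the storage-shrink options in the post.
bg = rds.create_blue_green_deployment(
    BlueGreenDeploymentName="shrink-storage-demo",
    Source="arn:aws:rds:us-east-1:123456789012:db:prod-mysql",
    TargetAllocatedStorage=200,   # smaller than the blue instance's allocation
    TargetStorageType="io2",      # io2 Block Express to speed up the volume operation
)
print(bg["BlueGreenDeployment"]["Status"])
```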

Amazon | 1w

Image Credit: Amazon

Best practices for creating a VPC for Amazon RDS for Db2

  • To create an RDS for Db2 instance, one of the prerequisites is to configure the virtual private cloud (VPC) appropriately.
  • Amazon RDS for Db2 is a fully managed solution on scalable hardware designed to deliver optimal performance in minutes.
  • With Amazon Virtual Private Cloud (Amazon VPC), you can create a logically isolated network dedicated to your AWS account.
  • Amazon VPC includes subnets, route tables, and network access control lists (ACLs) to control how network traffic flows into and out of AWS resources.
  • Amazon VPC provides complete network isolation and security, and it enables you to control who and what can access your resources.
  • Best practices for creating an Amazon VPC for use by Amazon RDS for Db2 include using multiple Availability Zones, separating public and private subnets, using private subnets for sensitive data, and limiting the scope of security groups (a minimal sketch follows this list).
  • Selection criteria for choosing a VPC and other AWS features depends on your organization’s scale and security needs.
  • By following best practices such as isolating public and private subnets, using security groups effectively, and using VPC endpoints, you can build secure and scalable network architectures that protect your cloud resources.
  • Through automation using tools like Lambda and AWS CloudFormation, you can further simplify the process of creating and managing VPCs.
  • The authors of this post, Vikram S Khatri and Sumit Kumar, provide a detailed guide for creating an Amazon VPC for use by Amazon RDS for Db2 and highlight best practices and important networking aspects.
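A minimal boto3 sketch of the subnet and security-group layout these best practices suggest; the CIDR ranges, Availability Zones, and the source range allowed on the Db2 port are illustrative assumptions.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# CIDR ranges and AZs are illustrative; adapt them to your addressing plan.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# Two private subnets in different Availability Zones for the RDS for Db2 subnet group.
ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1a")
ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24", AvailabilityZone="us-east-1b")

# Narrowly scoped security group: only the Db2 listener port from the app-tier subnet.
sg_id = ec2.create_security_group(
    GroupName="rds-db2-sg", Description="Db2 access from app tier", VpcId=vpc_id
)["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=sg_id, IpProtocol="tcp", FromPort=50000, ToPort=50000, CidrIp="10.0.3.0/24"
)
```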

Dbi-Services | 1w

Image Credit: Dbi-Services

Microsoft Foundry in Action: Building Smarter AI Solutions with Your Data

  • Microsoft unveiled Microsoft Foundry, a unified platform designed to simplify the design, customization, and management of AI solutions.
  • Microsoft Foundry is a comprehensive platform for developing and deploying generative AI apps responsibly.
  • It is broadly accessible and offers 1,812 models in its library.
  • Microsoft integrates Responsible AI, Microsoft Purview for data governance, and content safety protocols into its solutions.
  • Microsoft Foundry democratizes AI by enabling users—regardless of technical expertise—to leverage an extensive catalog of pre-trained models.
  • It enables users to tailor applications using custom datasets.
  • Microsoft Foundry facilitates seamless integration of AI capabilities into enterprise solutions.
  • Microsoft emphasizes fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability in Responsible AI.
  • Microsoft Purview provides a comprehensive tool to enforce governance policies that dictate what AI can return to end-users.
  • Microsoft Foundry is designed for developers to explore, build, test, and deploy using cutting-edge AI tools and ML models, grounded in responsible AI practices.

Amazon | 1w

Image Credit: Amazon

How the Amazon TimeHub team designed a recovery and validation framework for their data replication framework: Part 4

  • The Amazon TimeHub team built a custom data validation framework on top of AWS DMS validation tasks to maintain data integrity during data replication between source and target databases.
  • AWS DMS provides two data validation options: validation with ongoing replication, and standalone validation-only tasks that validate data independently of replication tasks. The Amazon TimeHub team chose the latter to keep validation isolated from replication (see the task-settings sketch after this list).
  • The team encountered limitations with AWS DMS data validation, such as a high number of false positives and the inability to revalidate logged errors. As a solution, the team built a custom revalidation framework to eliminate false positives and adopted a manual correction approach to work around the limitations.
  • Amazon TimeHub's team also explored the possibility of validating partial data using table-level filters in cases when replication is disrupted and tasks need to be restarted.
  • Through the custom framework built on top of AWS DMS validation tasks, operational teams can maintain data integrity during ongoing data replication between source and target databases, avoiding data integrity issues due to unplanned failures at the source database, AWS DMS, or target database.
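A hedged sketch of a standalone, validation-only AWS DMS task of the kind described above; the ARNs, schema name, and table mappings are placeholders, and the ValidationOnly setting name should be confirmed against current AWS DMS task-settings documentation.

```python
import json

import boto3

dms = boto3.client("dms")

# Validation runs in its own task, isolated from the replication task.
task_settings = {
    "ValidationSettings": {
        "EnableValidation": True,
        "ValidationOnly": True,   # assumed setting name for standalone validation
    }
}

table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-all",
        "object-locator": {"schema-name": "TIMEHUB", "table-name": "%"},  # placeholder schema
        "rule-action": "include",
    }]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="timehub-validation-only",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    MigrationType="full-load",
    TableMappings=json.dumps(table_mappings),
    ReplicationTaskSettings=json.dumps(task_settings),
)
```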

Amazon | 1w

Image Credit: Amazon

How the Amazon TimeHub team handled disruption in AWS DMS CDC task caused by Oracle RESETLOGS: Part 3

  • The article describes how the Amazon TimeHub team handled a disruption in the AWS DMS CDC task caused by an Oracle RESETLOGS scenario.
  • The RESETLOGS scenario in Oracle resets the log sequence number to 1, causing AWS DMS to fail when looking for the next LSN.
  • The article details three options to recover a failed task with limitations and how Amazon TimeHub team chose the third option to minimize the risk of potential data loss.
  • The article then describes how the team built an operational framework to detect the RESETLOGS operation and validate data discrepancies caused by failover scenarios (a detection sketch follows this list).
  • The RESETLOGS data validation operates independently from AWS DMS and uses a custom validation framework that functions independent of redo logs.
  • The data validation framework queries data from both the source and target environments based on audit columns, keeping a buffer in terms of the window of the source failure.
  • The article includes a high-level operational workflow diagram covering the steps to take in both the RESETLOGS-failure and non-failure cases.
  • The next part of this series discusses how the team developed a data validation framework to recover from disaster and disruption scenarios while maintaining data integrity between source and target.
  • The article concludes by listing the authors and their roles in the Amazon TimeHub team.
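A minimal sketch of how a monitoring job might detect a RESETLOGS event by polling V$DATABASE; the connection details are placeholders, and comparing the value against the previously recorded one is left to the surrounding framework.

```python
import oracledb

# Placeholder credentials/DSN; the monitoring user needs SELECT on V$DATABASE.
conn = oracledb.connect(user="monitor", password="change_me", dsn="source-db:1521/ORCLPDB1")

with conn.cursor() as cur:
    cur.execute('SELECT resetlogs_change#, resetlogs_time FROM v$database')
    resetlogs_change, resetlogs_time = cur.fetchone()

# A change versus the last recorded value indicates a RESETLOGS operation,
# so the DMS CDC task needs the recovery procedure described in the post.
print(resetlogs_change, resetlogs_time)
```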

Amazon | 1w

Image Credit: Amazon

How the Amazon TimeHub team designed resiliency and high availability for their data replication framework: Part 2

  • The Amazon TimeHub team built a data replication framework using AWS Database Migration Service (AWS DMS) to replicate data from an Oracle database to Amazon Aurora PostgreSQL-Compatible Edition. This post explains the team's approach to making the ongoing replication resilient: a reliability benchmark testing and data validation framework that exercises resilience mechanisms such as failover scenarios, scaling, and monitoring, to make sure the solution can handle disruptions and raise accurate alerts.
  • As part of the vision of a resilient data replication framework, the team focused on resilient architecture, monitoring, and data validation. Resiliency had to be addressed across the source system, AWS DMS, the network, and Aurora PostgreSQL.
  • The failure scenarios tested were failures at the source, failures in AWS DMS processing, and failures at the target, to check that AWS DMS could handle them gracefully without manual intervention.
  • AWS DMS publishes metrics to Amazon CloudWatch that help measure propagation delay from source to target (see the monitoring sketch after this list). In addition, Aurora offers monitoring of system metrics, with alarms set on 12 such metrics covering CPU utilization, read and write IOPS, and disk queue depth. AWS DMS performance degrades when records are fetched from disk rather than from memory, so monitoring is key.
  • The team built a custom monitoring framework to publish custom metrics to CloudWatch that aren't available out of the box, helping engineers analyze and identify underlying issues related to replication lag.
  • In conclusion, the fault-resilient framework built for data replication using AWS DMS and Aurora PostgreSQL-Compatible avoids data integrity issues and impacts to downstream systems; the key metrics monitored detect issues early so the team can react in a controlled manner.
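A sketch of the kind of CloudWatch monitoring described above: reading a built-in AWS DMS latency metric and publishing a custom metric. The identifiers, dimension values, and custom namespace are assumptions.

```python
from datetime import datetime, timedelta, timezone

import boto3

cw = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)

# Built-in AWS DMS metric for target-side propagation delay; dimension values are placeholders.
latency = cw.get_metric_statistics(
    Namespace="AWS/DMS",
    MetricName="CDCLatencyTarget",
    Dimensions=[
        {"Name": "ReplicationInstanceIdentifier", "Value": "timehub-dms-instance"},
        {"Name": "ReplicationTaskIdentifier", "Value": "timehub-cdc-task"},
    ],
    StartTime=end - timedelta(minutes=15),
    EndTime=end,
    Period=60,
    Statistics=["Maximum"],
)

# Custom metric (for example, row-count drift found by the validation framework),
# published so it can be alarmed on alongside the built-in metrics.
cw.put_metric_data(
    Namespace="TimeHub/Replication",
    MetricData=[{"MetricName": "RowCountDrift", "Value": 0, "Unit": "Count"}],
)
```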

Medium | 1w

Image Credit: Medium

Message Operations with Oracle Database Transactional Event Queues

  • To write data to a queue, the queue must be both created and started; the article's SQL script creates a queue with a JSON payload type (a Python-based sketch follows this list).
  • The article then dequeues the message and prints it to the console.
  • A further snippet creates a topic using the Kafka API for Transactional Event Queues.
  • A Java snippet creates an org.oracle.okafka.clients.producer.KafkaProducer instance capable of producing data to Transactional Event Queue topics.
  • Another Java snippet creates an org.oracle.okafka.clients.consumer.KafkaConsumer instance capable of consuming records from Transactional Event Queue topics.
  • JMS (Java Message Service) provides a standard way to enqueue and dequeue messages. This section shows how to use plain Java JMS APIs and Spring JMS integration.
  • When enqueuing a message, you can specify an expiration time using the expiration attribute of the message_properties object.
  • When enqueuing a message, you can specify a delay (in seconds) before the message becomes available for dequeuing.
  • When enqueuing a message, you can specify its priority using the priority attribute of the message_properties object.
  • Messages that exceed their expiration time are automatically moved to an exception queue for further processing or inspection.
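The article's SQL and Java snippets are not reproduced here; the following python-oracledb sketch shows an equivalent flow, assuming the driver's AQ API and JSON payload support apply to Transactional Event Queues as documented. The queue name, credentials, and payload are illustrative.

```python
import oracledb

# Placeholder credentials and service name.
conn = oracledb.connect(user="app", password="change_me", dsn="db:1521/FREEPDB1")

with conn.cursor() as cur:
    # Create and start a Transactional Event Queue with a JSON payload type.
    cur.callproc("DBMS_AQADM.CREATE_TRANSACTIONAL_EVENT_QUEUE",
                 keyword_parameters={"queue_name": "json_queue",
                                     "queue_payload_type": "JSON"})
    cur.callproc("DBMS_AQADM.START_QUEUE",
                 keyword_parameters={"queue_name": "json_queue"})

# Enqueue one JSON message with the priority, delay, and expiration attributes
# described above (delay and expiration are in seconds).
queue = conn.queue("json_queue", "JSON")  # payload type "JSON" assumed
props = conn.msgproperties(payload={"orderid": 1, "status": "NEW"},
                           priority=1, delay=0, expiration=60)
queue.enqone(props)
conn.commit()
```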

Analyticsindiamag | 1w

Image Credit: Analyticsindiamag

Why Traditional SaaS is Under Threat 

  • Klarna, a fintech unicorn valued at $46bn, is ending its service provider relationships with Salesforce and Workday as part of a major internal overhaul driven by AI initiatives.
  • A number of firms are opting to move away from their previous SaaS providers in favour of building in-house solutions.
  • Many feel that with the advent of GenAI coding tools like GitHub Copilot and Anthropic’s Claude, software development will become cheaper and the job market for coders will evolve, creating a more accessible environment for talent, although at lower price points.
  • Developing in-house AI involves higher initial costs due to infrastructure, talent acquisition, and maintenance expenses.
  • However, in the long run, SaaS costs can add up as subscription fees scale with use.
  • “The choice between in-house AI development and SaaS adoption depends heavily on an organisation’s specific situation, goals, and resources,” said Pradeep Sanyal, AI and data leader at a global tech consulting company. 
  • Notably, all SaaS companies, including Salesforce and Oracle, are stepping up efforts to integrate AI solutions into their offerings.
  • On the other hand, NVIDIA chief Jensen Huang believes that SaaS is sitting on a goldmine.
  • A hybrid approach is becoming popular: companies use SaaS to start quickly and fill gaps while building in-house capabilities for their most critical, differentiating AI needs.
  • B2B SaaS companies, which take in large amounts of data from businesses, may face a gloomy future.

Soais | 1w

Benefits of Using ORC Global Templates

  • Emails play a key role in keeping candidates informed during the hiring process.
  • Oracle Recruiting Cloud (ORC) offers Global Templates to simplify email management.
  • Global Templates allow for consistent branding across multiple email templates.
  • Benefits include time savings, improved consistency, reduced errors, and scalability.

Soais | 1w

Overview of Impact Analysis with Tricentis SeaLights

  • Impact analysis is the process of identifying the potential consequences and implications of changes to a software application or system.
  • Tricentis SeaLights provides metrics, traceability, and insights for organizations to confidently meet quality gates and deliver software quickly.
  • Using Tricentis SeaLights, organizations can minimize production defects and ensure high quality by providing advanced code coverage for various types of testing.
  • Tricentis SeaLights uses machine learning to identify quality risks during software releases and allows software testing teams to focus on the minimum number of functional tests, thus saving time and speeding up release.

Pymnts | 1w

Image Credit: Pymnts

AI Deals and Earnings Paint Mixed Picture of Tech Transformation

  • Oracle reported a surge in cloud revenue, driven by AI demand and a partnership with Meta.
  • C3 AI reported strong growth and expanded its Microsoft partnership, aiming to penetrate the market further.
  • ImmunoPrecise Antibodies showcased its AI drug discovery platform, despite flat quarterly revenues.
  • The tech industry is experiencing a mixed picture of AI deals and earnings, highlighting the ongoing tech transformation.

Amazon | 1w

Image Credit: Amazon

Understand the benefits of physical replication in Amazon RDS for PostgreSQL Blue/Green Deployments

  • Amazon RDS introduced Blue/Green Deployments for PostgreSQL in 2023, designed to simplify major version upgrades using community logical replication.
  • Physical replication is now an option for RDS Blue/Green Deployments, which overcomes limitations of logical replication.
  • Physical replication offers enhanced flexibility, scalability, and performance for databases.
  • Physical replication is supported across all PostgreSQL engine versions in RDS, making it accessible for every database.
  • Physical replication eliminates dependency on static parameter configurations and avoids additional downtime.
  • Physical replication enables seamless scaling for multi-tenant environments.
  • Physical replication propagates schema changes made in the blue environment to the green environment, avoiding inconsistencies.
  • Physical replication enables faster and more predictable switchovers, by eliminating sequence synchronization.
  • Physical replication provides seamless handling of large objects without extra configuration.
  • Adopting physical replication for Amazon RDS for PostgreSQL blue/green deployments is straightforward (a minimal switchover sketch follows this list).
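A minimal boto3 sketch of promoting a green environment once it is in sync; the deployment identifier and the status string checked are placeholders/assumptions.

```python
import boto3

rds = boto3.client("rds")
bgd_id = "bgd-0123456789abcdef"  # placeholder Blue/Green deployment identifier

# Check that the green environment is ready before switching over.
deployment = rds.describe_blue_green_deployments(
    BlueGreenDeploymentIdentifier=bgd_id
)["BlueGreenDeployments"][0]

if deployment["Status"] == "AVAILABLE":  # status value assumed; confirm in the RDS docs
    rds.switchover_blue_green_deployment(
        BlueGreenDeploymentIdentifier=bgd_id,
        SwitchoverTimeout=300,  # seconds
    )
```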

Cloudblog | 1w

Image Credit: Cloudblog

How Ford Pro uses Bigtable to harness connected vehicle telemetry data

  • Ford Pro Intelligence is a cloud-based platform that streamlines fleet operations.
  • The platform collects data from vehicles to improve performance and reduce cost of ownership.
  • Ford Pro needed a platform that could handle complex and variable connected vehicle data.
  • They chose Bigtable, Google Cloud’s NoSQL database for high throughput and low-latency applications at scale.
  • Bigtable unlocked real-time vehicle health notifications, predictive maintenance, and fleet monitoring dashboards.
  • Ford Pro telemetry data is streamed from vehicles, processed by Dataflow, and stored in Bigtable (see the sketch after this list).
  • Bigtable supports low-latency access to both real-time and historical data with a flexible schema.
  • The platform provides interactive dashboards showing real-time vehicle locations and tracking.
  • They leverage BigQuery alongside Bigtable to generate reports for fleet managers.
  • Ford Pro plans to use machine learning to predict vehicle maintenance and improve efficiency.
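A small sketch of writing and reading vehicle telemetry with the Bigtable Python client, assuming a hypothetical project, instance, table, and column family, and a row key of vehicle ID plus reversed timestamp (a common pattern for newest-first reads).

```python
import time

from google.cloud import bigtable
from google.cloud.bigtable import row_filters

# Project, instance, table, and column family names are illustrative.
client = bigtable.Client(project="fleet-demo")
table = client.instance("telemetry-instance").table("vehicle-telemetry")

vehicle_id = "VIN123"
row_key = f"{vehicle_id}#{2**63 - time.time_ns()}".encode()  # reversed timestamp: newest first

row = table.direct_row(row_key)
row.set_cell("signals", b"speed_kph", b"87")
row.set_cell("signals", b"battery_pct", b"64")
row.commit()

# Low-latency scan of the most recent telemetry rows for one vehicle.
for r in table.read_rows(
    start_key=f"{vehicle_id}#".encode(),
    end_key=f"{vehicle_id}$".encode(),
    limit=10,
    filter_=row_filters.CellsColumnLimitFilter(1),  # latest cell per column only
):
    print(r.row_key, {col: cells[0].value for col, cells in r.cells["signals"].items()})
```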

Dev | 1w

Image Credit: Dev

Rust-Based Reimagining of SQLite: Leading Asynchronous I/O and Contemporary Improvements

  • SQLite, the world’s most deployed database engine, is being re-implemented in Rust.
  • Rust offers memory safety, concurrency features, and zero-cost abstractions.
  • The re-implementation introduces asynchronous I/O, WASM integration, eBPF for performance optimization, and modern API design.
  • The shift to Rust addresses scalability, security, and flexibility while reflecting the modernization trend in legacy systems.
