Databases

Source: Siliconangle

Couchbase shares rise despite mixed quarterly results and lower outlook

  • Couchbase reported mixed results in its fiscal 2025 fourth quarter, but shares rose in late trading.
  • The company's adjusted net loss per share was 30 cents, while revenue reached $54.9 million.
  • Couchbase's full-year revenue was $209.5 million, with total annual recurring revenue of $237.9 million.
  • For fiscal 2026, the company expects first quarter revenue of $55.1 million to $55.9 million.


Source: Dev

Using LAST_INSERT_ID() in MySQL to Insert Related Data

  • When using AUTO_INCREMENT in MySQL, you often need to retrieve the ID of a newly inserted record. By using LAST_INSERT_ID(), you can easily obtain the ID generated by the most recent INSERT statement, making it easier to insert related data.
  • Inserting Related Data Using LAST_INSERT_ID(): Let's take the following two tables as an example...
  • SQL Example: Use LAST_INSERT_ID() immediately after INSERT to retrieve the newly generated id, then use the obtained id to insert related records into order_items (see the sketch after this list).
  • What Happens When ROLLBACK is Used? If ROLLBACK is executed, the AUTO_INCREMENT value is not reset, even though the insert operation is undone.
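The post's actual tables are not reproduced here, so the sketch below assumes a minimal orders/order_items pair (order_items is named in the summary; orders is a stand-in). It captures the parent id with LAST_INSERT_ID() immediately after the INSERT; the value is tracked per connection, so concurrent sessions do not interfere with each other.

```sql
-- Hypothetical schema standing in for the article's two example tables.
CREATE TABLE orders (
  id          INT AUTO_INCREMENT PRIMARY KEY,
  customer_id INT NOT NULL,
  ordered_at  DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE order_items (
  id         INT AUTO_INCREMENT PRIMARY KEY,
  order_id   INT NOT NULL,
  product_id INT NOT NULL,
  quantity   INT NOT NULL,
  FOREIGN KEY (order_id) REFERENCES orders (id)
);

-- Insert the parent row, capture its generated id, then insert the children.
START TRANSACTION;

INSERT INTO orders (customer_id) VALUES (42);

SET @new_order_id = LAST_INSERT_ID();  -- id generated by the INSERT above, per connection

INSERT INTO order_items (order_id, product_id, quantity)
VALUES (@new_order_id, 101, 2),
       (@new_order_id, 205, 1);

COMMIT;
-- If ROLLBACK were issued instead, the consumed AUTO_INCREMENT value would not be reused.
```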


Source: Amazon

Perform minor/major version upgrades for Amazon Aurora MySQL Global Database with minimum downtime

  • Amazon Aurora Global Database allows for globally distributed applications with replication for performance and disaster recovery. Upgrading Aurora MySQL Global Database may involve removing secondary Regions, upgrading the primary Region, and recreating secondary Regions.
  • The blue/green deployment approach involves creating a mirror (green) environment for upgrades while the original (blue) environment runs production. Data changes from blue are replicated to green. Major upgrades require thorough testing before switchover.
  • Steps include fast cloning for green environment setup, in-place version upgrades, binary log replication between blue and green clusters, and environment adjustments. Rollback capability via reverse replication can be implemented with caution.
  • Prerequisites include verifying upgrade paths, disabling Auto Minor Version Upgrade, enabling binary logging, setting binlog format, and adjusting binary log retention period.
  • After setting up replication, database cache prewarming, post-switchover testing, and health checks are recommended. The switchover process involves stopping write traffic in blue, halting replication, and reconfiguring applications to green.
  • Backup considerations, cleanup steps, and detailed testing post-switchover are crucial. Remember to monitor replication status and lag before performing the blue/green switchover for minimal downtime during upgrades (a monitoring sketch follows this list).
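As a rough illustration of the prerequisite checks and lag monitoring, the statements below are a minimal sketch assuming Aurora MySQL 3 (MySQL 8.0 syntax); mysql.rds_set_configuration is the RDS/Aurora stored procedure for binary log retention, and the retention value shown is illustrative rather than a recommendation from the post.

```sql
-- On the blue (source) cluster: confirm binary logging is enabled and in the
-- expected format before configuring blue -> green binlog replication.
SHOW VARIABLES LIKE 'log_bin';
SHOW VARIABLES LIKE 'binlog_format';

-- Keep binary logs long enough for the green cluster to catch up
-- (RDS/Aurora stored procedure; value illustrative).
CALL mysql.rds_set_configuration('binlog retention hours', 24);

-- On the green (replica) cluster: check replication health and lag; switch
-- over only once Seconds_Behind_Source is at or near zero.
-- (Use SHOW SLAVE STATUS on Aurora MySQL 2 / MySQL 5.7.)
SHOW REPLICA STATUS;
```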


Source: Cloudblog

Enhancing AlloyDB vector search with inline filtering and enterprise observability

  • AlloyDB for PostgreSQL offers the ScaNN index for efficient data retrieval with a single SQL statement.
  • New enhancements in AlloyDB include inline filtering for improved performance and accuracy in vector searches.
  • Enterprise observability tools have been introduced to ensure stable performance and high-quality search results.
  • These tools include a recall evaluator to measure search quality and vector index distribution statistics for consistent performance.
  • AlloyDB's PostgreSQL interface enables developers to easily query and combine structured and unstructured data.
  • Inline filtering optimizes query execution by leveraging vector indexes and metadata column indexes together (see the sketch after this list).
  • The new recall evaluator helps measure and improve search quality in similarity search or AI workloads.
  • Vector index distribution statistics aid in monitoring data changes and maintaining performance.
  • AlloyDB offers performant, flexible, and high-quality solutions for vector search.
  • ScaNN for AlloyDB is generally available and can be accessed via a quickstart guide and documentation.
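A minimal sketch of what that looks like in SQL, assuming a hypothetical products table, pgvector's vector type, and the ScaNN index syntax from the AlloyDB documentation; the extension name, operator class, and num_leaves value are assumptions used to illustrate the idea, not details taken from the post.

```sql
CREATE EXTENSION IF NOT EXISTS vector;          -- pgvector type and distance operators
CREATE EXTENSION IF NOT EXISTS alloydb_scann;   -- ScaNN index support (assumed extension name)

-- Hypothetical table; 3 dimensions only to keep the example readable.
CREATE TABLE products (
  id        BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  category  TEXT NOT NULL,
  embedding vector(3) NOT NULL
);

-- ScaNN vector index plus a B-tree index on the metadata column used for filtering.
CREATE INDEX products_embedding_scann
  ON products USING scann (embedding cosine)
  WITH (num_leaves = 100);
CREATE INDEX products_category_idx ON products (category);

-- Filtered similarity search in a single SQL statement: inline filtering lets the
-- planner evaluate the category predicate while scanning the vector index,
-- rather than filtering candidates afterwards.
SELECT id, category
FROM products
WHERE category = 'shoes'
ORDER BY embedding <=> '[0.12, 0.03, 0.91]'::vector
LIMIT 10;
```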


Source: MySQL

Galera Cluster for MySQL 8.0.41-26.22 and 8.4.4-26.22 released

  • Codership has released Galera Cluster for MySQL 8.0.41-26.22 and 8.4.4-26.22.
  • The release includes MySQL-wsrep 8.0.41-26.22 and Galera replication library 4.22.
  • The update incorporates changes to MySQL 8.0.41 and MySQL 8.4.4.
  • New features include defining sequential consistency parameters for the certification process.


Source: Dev

How We’re Making MySQL Performance Monitoring Easier & More Efficient - Releem’s February Update

  • Releem's February update brings key improvements in monitoring and query analytics for database servers.
  • Automatic SQL Query Analytics simplifies the workflow by automatically collecting updated data.
  • Extended monitoring periods allow analysis over periods of up to three months.
  • ARM64 support added to Releem Agent to enable deployment on ARM-based servers.


Source: Dbi-Services

pg_mooncake: (another) Columnar storage for PostgreSQL

  • pg_mooncake is a columnar storage extension for PostgreSQL designed to optimize it for analytics by storing tables in Iceberg or Delta Lake format.
  • To install pg_mooncake, one must clone the GitHub repository, compile it using Cargo, and then install the extension on a PostgreSQL database (a usage sketch follows this list).
  • The tables using pg_mooncake's column store format show significant differences in storage size and organization compared to standard PostgreSQL tables.
  • Rather than storing data in PostgreSQL's regular format, pg_mooncake stores tables as Parquet files, leading to efficient storage and data retrieval.
  • Queries on tables utilizing pg_mooncake's columnar storage demonstrate improved performance compared to standard PostgreSQL tables, especially for compressed data.
  • Inserting and querying data on tables with pg_mooncake extension show faster execution times, making it suitable for use cases requiring efficient analytics.
  • Overall, pg_mooncake offers PostgreSQL users a versatile option for implementing columnar storage and optimizing performance for analytical workloads.
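A minimal usage sketch, under the assumption that pg_mooncake exposes its column store as a table access method named columnstore; table and column names here are illustrative.

```sql
-- After the Cargo build and installation described in the post:
CREATE EXTENSION IF NOT EXISTS pg_mooncake;

-- Regular heap table for comparison.
CREATE TABLE events_row (
  id         BIGINT,
  user_id    BIGINT,
  event_type TEXT,
  created_at TIMESTAMPTZ
);

-- Same schema stored in pg_mooncake's columnar format (Parquet files on disk).
CREATE TABLE events_col (
  id         BIGINT,
  user_id    BIGINT,
  event_type TEXT,
  created_at TIMESTAMPTZ
) USING columnstore;

-- The kind of aggregate-heavy analytical query that benefits from columnar storage.
SELECT event_type, count(*) AS events_last_week
FROM events_col
WHERE created_at >= now() - interval '7 days'
GROUP BY event_type
ORDER BY events_last_week DESC;
```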


Source: Dev

[SQL] Essential Terms in the Relational Database Model and Characteristics of a relation

  • A relational database model is a way of organizing and managing data in a database using a structure based on relations and their relationships.
  • Essential terms in the relational database model include relationship, relation schema, relation instance, attributes, constraints, domain, data type, degree, tuple, and cardinality.
  • The characteristics of a relation include unique attribute names, unique tuples, a consistent domain for each column, no inherent order of rows and columns, atomic values in each column, a defined domain for each attribute, and allowance of NULL values (illustrated in the sketch after this list).
  • Understanding these terms and characteristics is crucial for developing a deeper understanding of relational database management systems and effectively working with data.
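A small illustrative DDL sketch (table and attribute names invented here) mapping those terms onto concrete SQL:

```sql
-- A relation schema: attribute names, domains (data types), and constraints.
CREATE TABLE employee (
  emp_id     INT            PRIMARY KEY,           -- unique tuples: no duplicate key values
  full_name  VARCHAR(100)   NOT NULL,              -- attribute holding an atomic value
  department VARCHAR(50)    NOT NULL,
  salary     DECIMAL(10, 2) CHECK (salary >= 0),   -- domain constraint on the attribute
  manager_id INT            NULL                   -- NULL allowed where a value may be unknown
);

-- A relation instance: the set of tuples currently stored.
INSERT INTO employee VALUES
  (1, 'Asha Rao', 'Engineering', 120000.00, NULL),
  (2, 'Dan Kim',  'Engineering',  95000.00, 1);

-- Degree = number of attributes (5 here); cardinality = number of tuples (2 here).
```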


Source: Dev

Some Random SQL Error Handling Techniques for Reliability and Transactional Integrity

  • SQL error handling is crucial for reliability and transactional integrity, especially in stored procedures, triggers, and transactions.
  • Key techniques include TRY...CATCH blocks, logging, transaction management, savepoints, XACT_ABORT, and stored procedure return codes (a combined sketch follows this list).
  • RAISERROR and THROW are used for generating custom error messages and re-throwing captured errors to maintain error context.
  • Using THROW in CATCH allows for error propagation up the call stack without manually passing error details.
  • Savepoints (SAVE TRANSACTION) in SQL Server enable partial transaction rollbacks to preserve successful operations amidst errors.
  • XACT_ABORT setting ensures automatic full rollback of a transaction upon runtime errors, ensuring data integrity.
  • Error logging to tables and using output parameters in stored procedures aid in effective error reporting and system resilience.
  • Transactions guarantee atomicity, either all operations succeed or fail as a group, facilitating data consistency and integrity.
  • Implementing these error handling techniques in SQL code enhances system reliability and maintainability, ensuring smooth database operations.
  • Proper error handling mechanisms like TRY...CATCH blocks, transaction management, and logging contribute to handling unexpected scenarios effectively.
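A minimal T-SQL sketch of the core pattern, with illustrative object names (dbo.Orders, dbo.OrderItems, and dbo.ErrorLog are assumptions); SET XACT_ABORT ON and SAVE TRANSACTION savepoints from the summary can be layered onto the same skeleton.

```sql
BEGIN TRY
    BEGIN TRANSACTION;

    INSERT INTO dbo.Orders (CustomerId, OrderedAt)
    VALUES (42, SYSUTCDATETIME());

    INSERT INTO dbo.OrderItems (OrderId, ProductId, Quantity)
    VALUES (SCOPE_IDENTITY(), 101, 2);

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;   -- keep the two inserts atomic

    -- Log the failure, then propagate the original error (number, severity,
    -- state, and message) up the call stack.
    INSERT INTO dbo.ErrorLog (ErrorNumber, ErrorMessage, LoggedAt)
    VALUES (ERROR_NUMBER(), ERROR_MESSAGE(), SYSUTCDATETIME());

    THROW;
END CATCH;
```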


Source: Siliconangle

Microsoft scraps some data center leases as Apple, Alibaba double down on AI

  • Microsoft has canceled multiple data center leases, indicating lower AI demand than expected.
  • Apple and Alibaba are investing heavily in AI infrastructure.
  • Microsoft cited delays in facility construction and power delivery as reasons for canceling the leases.
  • Despite the cancellations, Microsoft plans to spend $80 billion on AI-enabled data centers.


Source: Dbi-Services

Updating SUSE/SLES the right way

  • To upgrade a SUSE system, preparation is key and can be simplified using YaST.
  • Clean up unused modules and add-ons before migration to avoid activation issues.
  • Activating and preparing the SLES subscription can be done via YaST or shell commands.
  • Use SUSEConnect commands to register and activate necessary modules for the base system.
  • In case of connectivity issues, check SUSEConnect file and add proxy settings.
  • Online migration involves upgrading from one SLES version to another, ensuring compatibility.
  • Use 'zypper migration' command or YaST for online migration, updating repositories and applying patches.
  • For offline migration, boot the server using the desired SLES ISO version and follow the step-by-step upgrade process.
  • Check and adjust repositories during the upgrade, ensuring all necessary products are included.
  • Finalize the process by reviewing the summary and confirming the correct settings before system reboot.


Source: Dbi-Services

OpenStack – Deploying the first compute instance

  • To create the first compute instance in OpenStack, preparation is needed, including defining flavors for instance metrics like CPU, memory, and disk.
  • Flavors in OpenStack define base metrics for instances to be launched, similar to AWS EC2 instance types.
  • You can create flavors either through the command line or the Horizon dashboard.
  • Creating a small flavor 'm1.nano' for the 'CirrOS' image involves setting vCPUs, RAM, and disk size.
  • Setting up an SSH keypair and security groups to allow access over SSH and ping is essential.
  • Deploying the first instance in OpenStack involves specifying flavor, image, network, SSH key, security group, and instance name.
  • You can monitor the instance creation event and check its status through commands like 'openstack server event list/show'.
  • Accessing the deployed instance using SSH and testing connectivity to the internet are crucial steps.
  • The article concludes by emphasizing readiness to utilize the instance for various purposes and hints at possible next steps like adding storage components.
  • Overall, the article provides a detailed guide on deploying the first compute instance in OpenStack with necessary configurations and steps involved.


Source: MySQL

Rancher and Galera Cluster in the partner repository

  • Codership's Galera Cluster is now available as a partner chart in the Rancher by SUSE repository.
  • Rancher is a Kubernetes management tool that allows deployment and management of clusters on any provider.
  • Galera Cluster can be deployed on Rancher by following the provided documentation and utilizing the partner charts.
  • The latest release of Galera Cluster is 8.0.40, with further releases expected in the near future.


Source: Dev

Building a Banking Intelligence System: Transforming Customer Data for Insights

  • In today's digital banking era, financial institutions need intelligent systems to analyze customer behavior and make data-driven decisions.
  • To analyze customer behavior effectively, we need to process multiple types of banking data such as transactional data, account data, and customer data.
  • A denormalized feature table aggregates data from multiple sources into a structured format, making it easier for machine learning models and business intelligence tools to extract insights quickly.
  • The steps to create a denormalized feature table involve extracting data from MySQL, transforming the data for feature engineering using Python, and storing the transformed dataset back in MySQL (a simplified aggregation sketch follows this list).
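As a rough sketch of the denormalization step, the MySQL statement below aggregates hypothetical customers, accounts, and transactions tables into one feature row per customer; all table names, columns, and the 90-day window are illustrative, and the post itself performs the feature engineering in Python.

```sql
-- Build a denormalized feature table: one row per customer, aggregating
-- account and transactional data over the last 90 days.
CREATE TABLE customer_features AS
SELECT
    c.customer_id,
    c.segment,
    COUNT(DISTINCT a.account_id)  AS num_accounts,
    COUNT(t.transaction_id)       AS txn_count_90d,
    COALESCE(SUM(t.amount), 0)    AS txn_total_90d,
    COALESCE(AVG(t.amount), 0)    AS txn_avg_90d,
    MAX(t.transaction_date)       AS last_txn_date
FROM customers c
LEFT JOIN accounts a     ON a.customer_id = c.customer_id
LEFT JOIN transactions t ON t.account_id  = a.account_id
                        AND t.transaction_date >= CURRENT_DATE - INTERVAL 90 DAY
GROUP BY c.customer_id, c.segment;
```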


Source: Dev

Choosing Integer Types in MySQL & PostgreSQL

  • Choosing the right integer type in MySQL and PostgreSQL depends on storage size, range, and performance.
  • MySQL offers TINYINT, SMALLINT, MEDIUMINT, INT/INTEGER, and BIGINT, while PostgreSQL has similar options with some differences.
  • TINYINT is suitable for boolean values and small counters, SMALLINT for small IDs and counts, and MEDIUMINT for larger counters in MySQL.
  • INT/INTEGER is commonly used for general-purpose IDs and counters, while BIGINT is ideal for large IDs and timestamps.
  • When deciding between SIGNED and UNSIGNED, choose UNSIGNED for only positive numbers and SIGNED for both positive and negative values.
  • For PostgreSQL, use SMALLINT or INTEGER in place of TINYINT and MEDIUMINT, and use SERIAL or identity columns for auto-incrementing IDs (compare the DDL sketch after this list).
  • Real-world examples include using TINYINT for boolean flags, SMALLINT for small counters, MEDIUMINT for large counters, etc.
  • MySQL's TINYINT(1) can be used for storing Boolean values (0 or 1) and small flags or statuses (1-100).
  • PostgreSQL's BOOLEAN type can be used for similar purposes, and SERIAL types can be used for auto-incrementing IDs.
  • In conclusion, choosing the right integer type involves considering the specific use case and requirements of the data being stored.
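A side-by-side DDL sketch (table and column names invented here) contrasting the MySQL and PostgreSQL choices described above:

```sql
-- MySQL: pick the smallest type that safely covers the expected range.
CREATE TABLE page_stats_mysql (
  id          BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,  -- large surrogate key
  is_active   TINYINT(1) NOT NULL DEFAULT 1,               -- boolean-style flag (0/1)
  retry_count SMALLINT UNSIGNED NOT NULL DEFAULT 0,        -- small counter, 0..65535
  view_count  MEDIUMINT UNSIGNED NOT NULL DEFAULT 0        -- larger counter, up to ~16.7M
);

-- PostgreSQL: no TINYINT, MEDIUMINT, or UNSIGNED; use BOOLEAN, SMALLINT, INTEGER,
-- BIGINT, and an identity (or SERIAL) column for auto-increment.
CREATE TABLE page_stats_pg (
  id          BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  is_active   BOOLEAN NOT NULL DEFAULT TRUE,
  retry_count SMALLINT NOT NULL DEFAULT 0,
  view_count  INTEGER NOT NULL DEFAULT 0
);
```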
