techminis

A naukri.com initiative

Databases

Medium · 3w · Image Credit: Medium

Struggling with Slow SQL Queries? Master the Execution Order for Cleaner, Faster Results

  • Ignoring the actual execution order can lead to inefficient queries, wrong results, and a whole lot of wasted time.
  • In this article, the author explains the SQL execution order and emphasizes the importance of getting it right for optimized and performant queries.
  • The SQL engine processes a query in a specific order, which can help in debugging errors, improving performance, and ensuring correct data retrieval.
  • The execution order of SQL queries starts with the FROM clause, followed by other steps involving selection, filtering, grouping, and sorting.
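
The ordering described above can be observed in any SQL engine. Below is a minimal sketch using Python's sqlite3 (the sales table and its data are invented for illustration): because WHERE is evaluated before GROUP BY, an aggregate cannot be filtered in WHERE, which is exactly what HAVING, evaluated after grouping, is for.

```python
import sqlite3

# Logical order: FROM -> WHERE -> GROUP BY -> HAVING -> SELECT -> ORDER BY.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 100.0), ("east", 250.0), ("west", 80.0)])

# WHERE runs before GROUP BY, so an aggregate is rejected there.
try:
    conn.execute("SELECT region, SUM(amount) FROM sales "
                 "WHERE SUM(amount) > 200 GROUP BY region")
except sqlite3.OperationalError as e:
    print("rejected:", e)

# HAVING runs after grouping, so filtering the aggregate works here.
rows = conn.execute(
    "SELECT region, SUM(amount) AS total FROM sales "
    "GROUP BY region HAVING total > 200 ORDER BY total"
).fetchall()
print(rows)  # [('east', 350.0)]
```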

15 Likes

Dev · 3w · Image Credit: Dev

EXECUTE IMMEDIATE with BULK COLLECT

  • Oracle PL/SQL provides the EXECUTE IMMEDIATE statement along with BULK COLLECT to dynamically execute SQL statements and fetch multiple rows into PL/SQL collections.
  • The EXECUTE IMMEDIATE command is used when the SQL statement cannot be determined until runtime, while BULK COLLECT is used to efficiently retrieve multiple rows into collections in a single operation.
  • Examples provided include fetching multiple rows into a collection, fetching multiple columns into a collection of records, and using bind variables for security and performance.
  • Using bind variables is recommended to prevent SQL injection and optimize performance.
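
The bind-variable point is not Oracle-specific. As an illustration of the same principle in Python's sqlite3 (a stand-in for Oracle's EXECUTE IMMEDIATE ... USING syntax, not the real thing; the employees table and the hostile input are invented), a bound parameter is treated purely as data and can never change the SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER, name TEXT, dept TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                 [(1, "Ada", "eng"), (2, "Grace", "eng"), (3, "Mary", "hr")])

user_input = "eng' OR '1'='1"  # hostile value

# Unsafe string concatenation: the input is parsed as SQL and leaks every row.
unsafe = conn.execute(
    "SELECT name FROM employees WHERE dept = '" + user_input + "'"
).fetchall()

# Bound parameter: the statement text stays constant, the value travels
# separately, and fetchall() retrieves all matching rows in one call
# (the same motivation as BULK COLLECT).
safe = conn.execute(
    "SELECT name FROM employees WHERE dept = ?", (user_input,)
).fetchall()

print(unsafe)  # [('Ada',), ('Grace',), ('Mary',)]
print(safe)    # []
```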

6 Likes

Amazon · 3w · Image Credit: Amazon

Automatic reconnection to Amazon RDS for Oracle read replicas during disaster recovery

  • Amazon RDS for Oracle offers ease of operability, maintenance, scalability, and reliability while migrating Oracle workloads to AWS.
  • Automatic switchover described in this post can reduce the overall RTO by automatically redirecting the client connection to the newly promoted or switched-over database.
  • The reason for using the automatic switchover is that it saves time and effort required to update the application tnsnames.ora file manually.
  • This post provides guidance to configure the automatic reconnection of the application to a standby database during a disaster recovery incident.
  • Key concepts related to the Oracle Data Guard replication status are defined.
  • The solution involves creating and starting the database service ...
  • During a switchover, the primary database transitions to a standby role, and the standby database transitions to the primary role.
  • Advantages of using Amazon RDS for Oracle for automatic switchover include straightforward implementation and minimizing the RTO and RPO.
  • This post demonstrated the steps to configure automatic reconnection of an application to a standby database during a disaster recovery incident using Oracle Data Guard.
  • The authors have deep expertise in database migration and administration on Amazon RDS for Oracle, Aurora PostgreSQL, and RDS SQL Server databases.

19 Likes

Siliconangle · 3w · Image Credit: Siliconangle

Teradata highlights scalable AI and open data integration as critical drivers for enterprise growth

  • According to a survey by Teradata, 40% of C-suite and AI decision-makers do not believe their company’s data is accurate enough to produce reliable outcomes. Teradata’s Trusted AI initiative ensures companies have reliable, scalable and actionable data to drive more intelligent business decisions.
  • Significant product enhancements have been made to VantageCloud Lake and ClearScape Analytics, offering AI-driven insights that are more accessible to businesses. Teradata has also formed strategic partnerships with major entities such as Amazon Web Services, Google Cloud, and Microsoft Azure to deliver scalable, cloud-native solutions.
  • Most recently, Teradata announced a collaboration with Nvidia Corp. to integrate Nvidia’s graphics processing unit technology, accelerating AI and machine learning workloads.
  • Trusted AI requires reliable, integrated and harmonized data across various systems, ensuring that the insights generated can guide meaningful business outcomes. Teradata believes that data quality is non-negotiable and AI models function optimally only when the data they work with is accurate and reliable.
  • Teradata’s open environment has developed an ecosystem that integrates with industry-standard tools such as Anaconda Inc., DataRobot Inc. and Hugging Face Inc. This ensures that businesses can analyze data without vendor lock-in.
  • Scaling AI projects poses significant challenges as organizations grow and deal with increasingly complex workflows. Teradata addresses these scaling challenges through ClearScape Analytics and GPU integration, which enable efficient training and inference, and help in deploying AI projects faster and more accurately.
  • Enterprises can use Teradata’s BYOM capability to integrate pre-built models without rebuilding them, reducing time-to-market and supporting quicker AI deployment.
  • Teradata’s scalable AI approach is vital for enterprises looking to stay competitive in an AI-driven world, offering the flexibility and power needed to support advanced AI-based insights and initiatives.
  • Teradata’s mission to build AI models that drive confident decision-making across organizations is in line with its overarching commitment to maintaining an open and connected ecosystem.
  • Teradata’s infrastructure supports different enterprise personas, from data scientists to engineers, letting them use their preferred coding environments such as SQL, R, and Visual Studio Code.

6 Likes

Dev · 3w · Image Credit: Dev

CAP | ACID Properties | BASE in Database Systems

  • The CAP theorem states that a distributed database can provide only two out of the following three guarantees simultaneously: Consistency (C), Availability (A), and Partition Tolerance (P).
  • ACID properties (Atomicity, Consistency, Isolation, Durability) ensure reliable processing of database transactions, primarily found in traditional relational databases.
  • BASE properties (Basically Available, Soft State, Eventual Consistency) prioritize availability and scalability over immediate consistency, suitable for distributed and NoSQL databases.
  • The choice between these models depends on the application's requirements: high availability and scalability (BASE) or strict reliability and accuracy (ACID).
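
A small sketch of the ACID side, using Python's sqlite3 as the traditional relational store (the accounts schema and the transfer scenario are invented for illustration): atomicity means a failed transfer leaves no partial update behind.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts ("
             "name TEXT PRIMARY KEY, "
             "balance INTEGER NOT NULL CHECK (balance >= 0))")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 200 "
                     "WHERE name = 'alice'")
        conn.execute("UPDATE accounts SET balance = balance + 200 "
                     "WHERE name = 'bob'")
except sqlite3.IntegrityError:
    pass  # the CHECK constraint fired; the whole transfer is undone

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100, 'bob': 50} -- no partial transfer
```

BASE systems would instead accept the write on some replicas and converge later; which trade-off is right depends, as the article says, on the application's requirements.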

16 Likes

Dev · 3w · Image Credit: Dev

Oracle to Postgres: Happily Ever After Edition

  • The article discusses some challenges faced during migrations from Oracle to Postgres.
  • One of the major issues discussed is the lack of an equivalent Mview rewrite feature in Postgres in comparison to the magical query rewrite feature for Oracle.
  • Another issue is that migration tooling and planning often overlook the difference in architecture between multi-node RAC in Oracle and Patroni clusters in Postgres.
  • Postgres does not lock down a specific execution plan for a SQL statement similar to Oracle’s dbms_spm. Developers can consider pg_hint_plan as an alternative for migrating SQLs with spm baselines.
  • Oracle and PostgreSQL behave differently in their approach to exceptions. Avoid using savepoints to mimic exception handling code.
  • The article also mentions ExtensionMigrationAssistant, an open-source tool designed to detect cloud vendor-specific objects and functions in a migrated Postgres schema.
  • The Postgres ecosystem is a bubbling cauldron of innovation, with contributions flowing in from passionate developers around the globe.
  • It's always better to identify query rewrites while still on Oracle, before the migration, rather than scrambling to optimize a heavy batch process over the weekend post-migration to Postgres.
  • When converting a multi-node RAC to a Postgres HA cluster, it's crucial to anticipate the additional load on the primary node and upgrade the hardware accordingly.
  • Developers can use ON CONFLICT ... DO NOTHING rather than a primary-key-violation exception block, avoiding exceptions altogether.
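
The last bullet's idiom is easy to try outside Postgres as well, since SQLite adopted the same ON CONFLICT syntax (in version 3.24+, assumed here). A sketch via Python's sqlite3, with an invented users table, shows the duplicate insert degrading to a silent no-op instead of an exception:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users VALUES ('a@example.com', 'Ada')")

# Without the ON CONFLICT clause this would raise sqlite3.IntegrityError.
conn.execute(
    "INSERT INTO users VALUES ('a@example.com', 'Imposter') "
    "ON CONFLICT (email) DO NOTHING"
)

rows = conn.execute("SELECT * FROM users").fetchall()
print(rows)  # [('a@example.com', 'Ada')] -- original row untouched
```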

7 Likes

Dev · 3w · Image Credit: Dev

Indexing for a Scalable Serialization Isolation Level

  • YugabyteDB uses wait-on-conflict behavior to prevent serializable errors, allowing transactions to commit without an index.
  • Serializable isolation can be scalable, but requires optimal schema and indexes with a good knowledge of how the database works.
  • Indexes define the predicate locks for PostgreSQL Serializable Snapshot Isolation and the primary key defines the range locks used by YugabyteDB's Two-Phase Commit.
  • The primary key should be defined in YugabyteDB instead of an additional index to minimize latency.
  • In PostgreSQL, even if read intents are defined by scan predicates, they are stored in memory.
  • YugabyteDB uses a primary key LSM tree to store rows, so intent and table row locks are stored together.
  • If the table has a primary key defined as a unique constraint in YugabyteDB, both transactions can be committed without waiting for the other.
  • There are no such limitations in YugabyteDB because all secondary indexes are global and sharded on their key.
  • Primary key and secondary indexes perform a crucial role in using serializable isolation level.
  • In PostgreSQL, indexes define the predicate locks, and in YugabyteDB, indexes define the range locks.

20 Likes

Dev · 3w · Image Credit: Dev

Upsert a row in a DB that doesn't use primary keys or unique constraints

  • To upsert a row in a DB that doesn't use primary keys or unique constraints, the process can be broken down into two parts.
  • First, update any existing rows with a matching slug using an UPDATE statement.
  • Then, create a new row if no rows with matching slugs exist using an INSERT statement with a WHERE NOT EXISTS clause.
  • This approach allows for updating or inserting a row without the need for stored procedures or application logic.
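
The two-step approach can be sketched as follows with Python's sqlite3 (the pages table and the upsert helper are invented for illustration); note the table deliberately has no primary key or unique constraint:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (slug TEXT, title TEXT)")

def upsert(slug, title):
    # Step 1: update any existing rows with a matching slug.
    conn.execute("UPDATE pages SET title = ? WHERE slug = ?", (title, slug))
    # Step 2: insert only if no row with that slug exists.
    conn.execute(
        "INSERT INTO pages (slug, title) "
        "SELECT ?, ? WHERE NOT EXISTS (SELECT 1 FROM pages WHERE slug = ?)",
        (slug, title, slug),
    )

upsert("home", "Welcome")        # no match -> INSERT fires
upsert("home", "Welcome back")   # match -> UPDATE fires, INSERT skipped

rows = conn.execute("SELECT slug, title FROM pages").fetchall()
print(rows)  # [('home', 'Welcome back')]
```

One caveat worth noting: without a unique constraint the database cannot arbitrate concurrent upserts, so two sessions racing on the same slug can still produce duplicates; that trade-off comes with the schema, not with this pattern.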

16 Likes

Dev · 3w · Image Credit: Dev

The Doctor's On-Call Shift example: a Normalized Relational Schema to Avoid Write Skew

  • Martin Kleppmann explains how write skew can occur in Read Committed isolation level in his book by using the example of multiple doctors being on-call for a shift in the hospital.
  • The default isolation level is Read Committed, which allows for write skew.
  • One way to address write skew in databases that do not support the ANSI/ISO Serializable isolation level is by using explicit locking to serialize transactions that work on the same data.
  • The same example is demonstrated on the new schema in Read Committed isolation level with explicit locking.
  • YugabyteDB bases this optimization on the SQL schema by locking a subset of a primary key.
  • YugabyteDB can display row locks in the pg_locks table.
  • YugabyteDB implements all SQL isolation levels according to the ANSI/ISO definition and the PostgreSQL runtime behavior.
  • It also supports PostgreSQL’s explicit locking clauses SELECT FOR SHARE and SELECT FOR UPDATE.
  • If your database does not guarantee serializability with isolation levels, you can still use explicit locking.
  • This solution with SELECT FOR UPDATE achieves the same business logic as serializable transactions but requires more code and tests due to data consistency relying on application design.
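
The locking discipline can be sketched without a database at all. The following Python analogy (doctor names invented; a threading.Lock plays the role SELECT FOR UPDATE plays in SQL) makes the check and the write happen atomically, so the "both doctors go off call at once" outcome becomes impossible:

```python
import threading

on_call = {"alice": True, "bob": True}
lock = threading.Lock()

def go_off_call(doctor):
    # Without this lock, both threads could pass the check before either
    # writes -- the write-skew anomaly from the book's example.
    with lock:
        others = [d for d, active in on_call.items() if active and d != doctor]
        if others:                 # invariant: at least one doctor stays on call
            on_call[doctor] = False
            return True
        return False

t1 = threading.Thread(target=go_off_call, args=("alice",))
t2 = threading.Thread(target=go_off_call, args=("bob",))
t1.start(); t2.start(); t1.join(); t2.join()

print(on_call)  # exactly one doctor remains on call
```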

24 Likes

Dev · 3w · Image Credit: Dev

MySQL Terminal: Create Databases, Tables and more.

  • The CREATE DATABASE command in MySQL is used to create a new database.
  • The SHOW DATABASES command in MySQL is used to list all the databases available on the MySQL server instance.
  • The USE database_name command in MySQL is used to select a specific database and set it as the active database for the current session.
  • The CREATE TABLE command is used in MySQL to create a new table within a database.
  • The DESC command in MySQL is used to display the structure of a table.
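
For a quick way to try the table side of this without a MySQL server, here is a rough analogue in Python's embedded sqlite3 (an analogy, not MySQL: SQLite has no CREATE DATABASE/USE step, since connecting to a file serves that purpose, and PRAGMA table_info stands in for DESC; the students table is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # connecting plays the CREATE DATABASE/USE role
conn.execute("""
    CREATE TABLE students (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        age  INTEGER
    )
""")

# One (cid, name, type, notnull, default, pk) tuple per column,
# much like the columns DESC prints in MySQL.
for col in conn.execute("PRAGMA table_info(students)"):
    print(col)
```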

11 Likes

Dev · 3w · Image Credit: Dev

Unlock the Power of Stored Procedures and Cursors: A Project-Based Course on Database Retrieval

  • Dive into the world of database management and unlock the full potential of your data with the 'Stored Procedure Cursor Database Retrieval' course.
  • Learn how to use stored procedures and cursors to efficiently retrieve and manipulate data from your database.
  • Gain expertise in creating stored procedures using cursors and execute them to display course details.
  • LabEx offers an immersive learning experience with hands-on practice and automated verification.

18 Likes

Amazon · 3w · Image Credit: Amazon

Troubleshoot INCOMPATIBLE_CREATE issues due to missing instance profile permissions during Amazon RDS Custom instance creation

  • RDS Custom instances require an instance profile whose permissions are evaluated by built-in automation; those permissions can be impacted by IAM policies, resource-based policies, VPC endpoint policies, and SCPs.
  • If any of these permissions have explicit deny policies or incorrect permissions, an instance may go into an INCOMPATIBLE_CREATE state.
  • To troubleshoot this error, you can review the Amazon RDS events and use them to identify the resource-specific permissions or SCP permissions blocking instance creation.
  • This post provides information to identify the causes of the INCOMPATIBLE_CREATE state of RDS Custom instances due to incorrect instance profile permissions.
  • Furthermore, possible reasons for this state include missing required IAM policy permissions, IAM permissions boundary, resource-based policies, VPC endpoint policy, and SCPs.
  • This blog post also demonstrates a scenario where RDS Custom for SQL Server instance creation fails due to resource-based policy in AWS KMS.
  • To prevent or fix this error, grant all the required permissions to your IAM principal and make sure that your IAM instance profile role has all the necessary permissions.
  • You can avoid ongoing charges by deleting the relevant resources such as IAM roles, AWS KMS keys, and RDS Custom SQL Server instances.
  • The authors of this post are cloud support professionals working with AWS for several years and subject matter experts in Amazon RDS.
  • They provide technical assistance to customers on database migration, RDS infrastructure, monitoring, and security related scenarios.

10 Likes

Cloudblog · 3w · Image Credit: Cloudblog

From Cassandra to Bigtable: Database migration tips from Palo Alto Networks

  • Palo Alto Networks migrated their Advanced WildFire product from Apache Cassandra to Bigtable, Google Cloud’s enterprise-grade, low-latency NoSQL database service.
  • Palo Alto Networks' Global Verdict Service (GVS), a core component of WildFire, relies on a NoSQL database to process vast amounts of data that needs to be highly available for service uptime.
  • Initially, when developing Wildfire, Apache Cassandra seemed like a suitable choice.
  • However, as data volumes and performance demands grew, several limitations emerged.
  • The migration to Bigtable yielded remarkable results for Palo Alto Networks: 5x lower latency, 50% lower cost, increased availability, simplified architecture, fewer production issues, and enhanced scalability.
  • During the migration phase, continue writing new data to the Cassandra cluster.
  • Implement dual writes to Bigtable (along with Cassandra) for all tables.
  • Gradually shift read operations from Cassandra to Bigtable by updating the existing application code and/or load balancers.
  • To migrate existing data, Palo Alto Networks used the dsbulk tool for unloading from Cassandra and created Dataflow pipelines for loading into Bigtable.
  • Adapt your application code to interact with Bigtable's API and leverage its unique features.
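
The dual-write phase described above can be sketched as a thin wrapper, with plain dicts standing in for the Cassandra and Bigtable clients (DualWriteStore and its methods are invented for illustration, not a real client API):

```python
class DualWriteStore:
    """Writes go to both stores; reads are shifted to the new one via a flag."""

    def __init__(self, old, new):
        self.old, self.new = old, new
        self.read_from_new = False  # flip once backfill and verification pass

    def put(self, key, value):
        self.old[key] = value       # keep the legacy store authoritative
        self.new[key] = value       # ...while the new store catches up

    def get(self, key):
        source = self.new if self.read_from_new else self.old
        return source.get(key)

cassandra, bigtable = {}, {}
store = DualWriteStore(cassandra, bigtable)
store.put("verdict:42", "malicious")

assert store.get("verdict:42") == "malicious"  # served by the old store
store.read_from_new = True                     # gradual read cutover
assert store.get("verdict:42") == "malicious"  # now served by the new store
```

Once reads are fully shifted and verified, the old store can be decommissioned, completing the cutover.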

3 Likes

Dbi-Services · 3w · Image Credit: Dbi-Services

PostgreSQL 17: Enhancing JSON Support for Web Developers

  • PostgreSQL 17 introduces new features for working with JSON data.
  • JSON_TABLE (converting JSON into tabular format) allows JSONB data to be transformed into rows and columns.
  • SQL/JSON Query Functions simplify JSON queries.
  • Enhanced jsonpath Expressions improve support for advanced queries.
  • PostgreSQL allows developers to create JSON directly from SQL expressions, using the function jsonb_build_object.
  • PostgreSQL 17 aligns more closely with the SQL/JSON standard and improves the developer experience when dealing with semi-structured data.
  • Upgrading to version 17 will streamline JSON workflows and expand capabilities.
  • These features make it easier for web developers to query and manipulate JSON in the database and to build more efficient, flexible web applications.
  • PostgreSQL 17 release notes: https://www.postgresql.org/about/news/postgresql-17-released-2936/
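
Without a PostgreSQL 17 server at hand, the row-generating idea behind JSON_TABLE can still be previewed with SQLite's json_each via Python's sqlite3 (a scaled-down analogue, not the Postgres function; assumes a SQLite build with JSON support, which ships with modern Python):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
doc = '[{"name": "Ada", "lang": "SQL"}, {"name": "Grace", "lang": "COBOL"}]'

# json_each expands the JSON array into one row per element; json_extract
# then pulls fields out of each element -- JSON turned into rows and columns.
rows = conn.execute(
    """
    SELECT json_extract(value, '$.name') AS name,
           json_extract(value, '$.lang') AS lang
    FROM json_each(?)
    """,
    (doc,),
).fetchall()
print(rows)  # [('Ada', 'SQL'), ('Grace', 'COBOL')]
```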

21 Likes

Dbi-Services · 3w

Improved not null/null planning in PostgreSQL 17

  • PostgreSQL 17 introduces improved handling of 'is null' and 'is not null' queries when columns have a 'not null' constraint.
  • Before PostgreSQL 17, there was overhead when evaluating 'is not null' queries even when it was redundant due to the 'not null' constraint.
  • PostgreSQL 17's optimizer is smarter and eliminates the need to evaluate 'is not null' queries when the 'not null' constraint is already in place.
  • Additionally, PostgreSQL 17 also improves handling of 'is null' queries by avoiding unnecessary parallel sequential scans.

8 Likes
