techminis

A naukri.com initiative


Databases

Medium · 2w

Understanding SQL HAVING Clause: Advanced Filtering After Aggregation

  • The HAVING clause is used to filter records that meet specific conditions after an aggregation is applied.
  • It is often used with the GROUP BY clause to narrow down the results based on the results of aggregate functions.
  • By understanding how to apply the HAVING clause, you can enhance your querying capabilities.
  • Happy querying!
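The filter-after-aggregation behaviour described above can be shown in a minimal, self-contained sketch using Python's built-in sqlite3 module (the orders table and its values are invented for illustration):

```python
import sqlite3

# In-memory table of orders; customer totals will be aggregated.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("alice", 120.0), ("alice", 80.0), ("bob", 40.0), ("carol", 300.0)],
)

# WHERE filters rows before grouping; HAVING filters the groups that
# GROUP BY produces, so it can reference aggregate results like SUM().
rows = conn.execute(
    """
    SELECT customer, SUM(amount) AS total
    FROM orders
    GROUP BY customer
    HAVING SUM(amount) > 100
    ORDER BY customer
    """
).fetchall()
print(rows)  # [('alice', 200.0), ('carol', 300.0)]
```

A plain WHERE clause could not express `SUM(amount) > 100`, because the sum does not exist until the groups have been formed.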

Dbi-Services · 2w

Oracle Database on Apple Silicon with VirtualBox 7.1

  • Consultants and DBAs can now run Oracle Database test environments on Apple Silicon using VirtualBox 7.1.
  • The steps to set up and run an Oracle DB on ARM are the same as on x86-64.
  • There are some limitations on ARM, such as Oracle Enterprise Linux 9 (OEL9) not being supported.
  • This development opens up new possibilities for running Oracle databases on Apple Silicon devices.

Cloudblog · 2w

How Current leveraged Spanner to build a resilient platform for banking services

  • Current, a fintech start-up, utilized Spanner, Google’s globally distributed database, to create a scalable and robust technological foundation for its financial services.
  • Central to the fintech’s success was a cloud-native infrastructure built on Google Cloud; its previous self-hosted graph database had lacked cloud-native capabilities and horizontal scalability.
  • The cloud-native platform required a database solution with consistent writes, horizontal scalability, low read latency under load, and multi-region failover.
  • Spanner emerged as the ideal solution for the fintech start-up, fulfilling all its requirements.
  • Current’s original challenge was making financial services available to low-income Americans without a bank account; to solve it, the company developed a modern core banking system to power its platform.
  • Their Spanner-powered user graph service delivered the consistency, reliability, and scalability essential for their financial platform.
  • The migration dramatically improved resilience, reduced RTO and RPO by more than 10x, and cut data restoration times to just one hour through a streamlined restoration process.
  • The company has reduced its availability-related incidents to zero since migrating to Spanner, eliminating a potential source of customer dissatisfaction and churn.
  • Current’s Spanner-powered infrastructure empowers it to deliver reliable and efficient financial services, critical for building and maintaining member trust.
  • Their strategic migration process to Spanner resulted in a zero-downtime, zero-loss cutover, minimizing risks and maximizing efficiency.

Dev · 2w

Day 4 of SQL Series || SELECT Statement Use cases

  • The SELECT statement is used to retrieve data from the database, or more specifically from a table.
  • Use Case 1: Selecting all rows of a table.
  • Use Case 2: Selecting specific columns from a table.
  • Use Case 3: Selecting distinct items from a table.
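The three use cases can be sketched with Python's built-in sqlite3 module (the employees table and its rows are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?)",
    [("ana", "hr"), ("ben", "it"), ("cat", "it")],
)

# Use case 1: all rows and all columns.
all_rows = conn.execute("SELECT * FROM employees").fetchall()

# Use case 2: a specific column only.
names = [r[0] for r in conn.execute("SELECT name FROM employees ORDER BY name")]

# Use case 3: distinct values, duplicates collapsed.
depts = [r[0] for r in conn.execute("SELECT DISTINCT dept FROM employees ORDER BY dept")]

print(len(all_rows))  # 3
print(names)          # ['ana', 'ben', 'cat']
print(depts)          # ['hr', 'it']
```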

Dbi-Services · 2w

FreeBSD basics – 5 – Providing services using Jails

  • This post demonstrates how to create a simple PostgreSQL-as-a-service platform using FreeBSD Jails.
  • The post shows how to install PostgreSQL into the base, clone the ZFS snapshot, and create a new Jail configuration based on a few parameters.
  • A script handles creating and initializing the PostgreSQL instance and starting it up.
  • This method makes it easy to add additional PostgreSQL jails.
  • The on-disk size of the PostgreSQL Jail is impressive: a bit more than a GB for a complete PostgreSQL service.

Medium · 2w

Use JMS for asynchronous messaging in Spring Boot

  • This article discusses the implementation of producer/consumer messaging using Spring JMS with Oracle Database Transactional Event Queues.
  • Transactional Event Queues are high-throughput messaging systems capable of multiple producers/consumers and exactly-once messaging.
  • The necessary dependencies for using Spring JMS with Oracle Database Transactional Event Queues are spring-boot-starter-data-jdbc and oracle-spring-boot-starter-aqjms.
  • A comprehensive test suite using Testcontainers is also provided to demonstrate real-time message processing.

VentureBeat · 2w

AWS debuts advanced RAG features for structured, unstructured data

  • AWS has launched a series of services, including structured data retrieval support and extract, transform and load (ETL) for unstructured data, aimed at easing enterprise data access in retrieval-augmented generation (RAG) pipelines.
  • To be accessible for RAG, structured data requires more than merely looking up a single row in a table: natural language queries must be translated into complex SQL queries that filter, join and aggregate tables. Unstructured data is difficult for the opposite reason, as it has no pre-defined format.
  • AWS's ETL for unstructured data, structured data retrieval support, data automation and knowledge base services aid this process.
  • Amazon Bedrock Knowledge Bases, a fully managed RAG facility that supplies contextual, relevant data for customised responses, has also been launched.
  • Structured data retrieval support in Amazon Bedrock Knowledge Bases provides a fully managed RAG solution for querying structured data to enrich model responses and improve accuracy, learning from query patterns to tailor customisations.
  • GraphRAG addresses the explainability challenge in RAG systems: it pieces together distinct items of data and connects them using knowledge graphs (relationships within and across multiple data sources) and graph embeddings, building a foundation for more accurate gen AI applications.
  • Unstructured data is challenging to extract, transform and load because it must first be processed and restructured; Amazon Bedrock Data Automation provides gen-AI-powered ETL for unstructured data.
  • Amazon Bedrock Data Automation handles enterprise content extraction, transformation and parsing of multimodal content for gen AI applications, with both automatic data transformation and industry-aligned customisation available.
  • The new structured data retrieval and GraphRAG capabilities in Amazon Bedrock Knowledge Bases enable more complex gen AI applications without requiring graph expertise to retrieve and connect data sources.
  • Together, these services and updates address the challenges of accessing structured and unstructured data, helping enterprises build more contextually relevant gen AI applications.

Cloudblog · 2w

Build agentic RAG on Google Cloud databases with LlamaIndex

  • Agentic RAG is a type of AI agent that combines the power of information retrieval with advanced action-planning capabilities, executing multi-step tasks that reason, plan, make decisions, and work toward goals.
  • LlamaIndex is a framework for building knowledge-driven and agentic systems, providing pre-built agent architectures and customizable workflows for developers to build sophisticated AI agents.
  • LlamaIndex has collaborated with Google Cloud databases including AlloyDB for PostgreSQL and Cloud SQL for PostgreSQL, offering integrations for LlamaIndex Vector Store, Document Store, and Index Store.
  • LlamaIndex supports various industry use cases, including report generation, agentic RAG, customer support, SQL agents, and productivity assistants.
  • Joint customers of LlamaIndex and Google Cloud databases can expect a streamlined knowledge retrieval process, efficient complex document parsing, and secure authentication and authorization.
  • LlamaIndex Workflows provide the flexibility to build and deploy complex agentic systems with granular control of conditional execution, as well as powerful state management.
  • Report generation spans many industries, and LlamaIndex provides all the core components for generating reports such as structured output definitions, intelligent document parsing, knowledge base storage, and agentic workflows.
  • Developers can follow the provided tutorial to get started with LlamaIndex integrations for AlloyDB and Cloud SQL for PostgreSQL.
  • Overall, the LlamaIndex and Google Cloud collaboration opens up new possibilities for developers to build cutting-edge, knowledge-driven AI agents.

Javacodegeeks · 2w

Fixing MysqlDataTruncation: Data Too Long for Column Error

  • MysqlDataTruncation: Data too long for column is a common error when data length exceeds the defined column length in MySQL.
  • This error occurs when the length of the string exceeds the maximum length allowed for the target column in the database schema.
  • Two strategies to fix the error are: alter column definition and adjust data size to fit the column constraints.
  • To alter the column definition, increase the column size to accommodate longer data values.
  • To adjust the data size, either validate the data before inserting it or truncate it to the maximum size.
  • In Java, when the error occurs, the application must catch the exception and log or notify the user.
  • The article includes Java code examples to reproduce and fix the error using both strategies.
  • By applying these techniques, data truncation issues can be resolved while maintaining data integrity and complying with database constraints.
  • The full source code of the Java examples is available for download at the end of the article.
  • The article covers MySQL, MysqlDataTruncation, SQL, and error handling perspectives.

Dev · 2w

ChatWithSQL — Secure, Schema-Validated Text-to-SQL Python Library, Eliminating Arbitrary Query Risks from LLMs

  • ChatWithSQL is a Python library that revolutionizes how developers interact with SQL databases.
  • It combines Natural Language Processing (NLP) with schema-validated SQL generation and data retrieval.
  • The library ensures secure database interaction by validating queries against strict schema definitions.
  • ChatWithSQL offers an intuitive natural language interface, robust schema validation, and seamless integration with leading Large Language Models (LLMs).
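The core idea behind schema-validated generation can be illustrated with a small sketch: any query coming back from an LLM is checked against an allowlist of tables and columns before it is ever executed. The schema and helper below are hypothetical and not ChatWithSQL's actual API:

```python
import re

# Hypothetical allowlist: table name -> permitted columns.
SCHEMA = {"users": {"id", "name", "email"}}

def validate_query(sql: str, schema: dict) -> bool:
    """Accept only simple SELECTs over known tables and columns."""
    m = re.match(r"SELECT\s+(.+?)\s+FROM\s+(\w+)\s*;?$", sql.strip(), re.I)
    if not m:
        return False  # anything that isn't a plain SELECT is rejected
    cols, table = m.group(1), m.group(2).lower()
    if table not in schema:
        return False
    if cols.strip() == "*":
        return True
    return all(c.strip().lower() in schema[table] for c in cols.split(","))

print(validate_query("SELECT name, email FROM users", SCHEMA))  # True
print(validate_query("DROP TABLE users", SCHEMA))               # False
print(validate_query("SELECT ssn FROM users", SCHEMA))          # False
```

A real implementation would use a proper SQL parser rather than a regex, but the principle is the same: arbitrary generated SQL is never trusted.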

Mysql · 2w

MySQL High Availability: Techniques to Ensure Uptime in Critical Systems

  • Ensuring the continuous availability of MySQL databases is essential, especially when handling high transaction volumes.
  • ProxySQL, a powerful middleware, provides advanced support for techniques ranging from replication to clustering that underpin MySQL high availability.
  • Essential components that contribute to achieving high availability in MySQL include redundancy, automatic failover, and load balancing.
  • Common MySQL HA techniques are source-replica replication, source-source replication, MySQL Group Replication, and Galera Cluster.
  • Monitoring the health and performance of MySQL replication, configuring automated failover mechanisms, and load balancing are necessary for ensuring MySQL high availability.
  • Regular backups, disaster recovery plans, testing failover scenarios under actual conditions, enhancing security measures, and effective monitoring and alerting systems can maximize MySQL high availability with ProxySQL.
  • The article highlights the importance of MySQL high availability in mission-critical environments such as e-commerce platforms, financial systems, and SaaS applications.
  • MySQL High Availability minimizes downtime and ensures continuous access to critical database systems during failures.
  • High Availability is the ability of a system to remain operational without significant interruptions or downtime caused by hardware failures, network issues, or application errors.
  • Galera Cluster guarantees strong data consistency, but the real-time synchronization introduces performance overhead, especially in high-latency networks.

Dev · 2w

DBA Considered Harmful

  • Having a separate team of DBAs leads to siloing and disengagement of developers from learning about databases.
  • Organizations waste time on communication between DBAs and developers leading to lower performance.
  • Database guardrails can automate DBAs' tasks and allow them to focus on architecture and long-term organization direction.
  • DBAs can solve intricate issues while database guardrails minimize risks of performance degradation and data loss.
  • This approach helps developers feel more responsible and accountable for their work and learn about databases.

Sanjeevvellaisamydba · 2w

Top 10 Linux Commands for Monitoring and Troubleshooting Your System

  • Monitor disk usage and verify file system configurations using 'df -h' command
  • Validate DNS configurations and network routes using 'ls -ld /etc/resolv.conf' and 'ip route' commands
  • Gather system details and verify date/time settings using 'uname -a', 'uname -r', and 'date' commands
  • Ensure proper permissions and verify user/group details using 'ls -ld /etc/passwd /etc/group /etc/hosts', 'cat /etc/passwd', 'cat /etc/group', and 'cat /etc/hosts' commands
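When shelling out is inconvenient, two of the checks above ('df -h' and 'uname -a') have rough standard-library analogues in Python; the report format here is invented for illustration:

```python
import platform
import shutil

def disk_usage_report(path: str = "/") -> str:
    """Summarize one mount point, like 'df -h' for a single path."""
    usage = shutil.disk_usage(path)
    gib = 1024 ** 3
    return (f"{path}: {usage.used / gib:.1f} GiB used / "
            f"{usage.total / gib:.1f} GiB total "
            f"({usage.used / usage.total:.0%})")

def system_report() -> str:
    """Kernel name, release, and architecture, like 'uname -a' (abridged)."""
    u = platform.uname()
    return f"{u.system} {u.release} ({u.machine})"

print(disk_usage_report())
print(system_report())
```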

TechCrunch · 3w

AWS announces Aurora DSQL, a new distributed SQL database that promises virtually unlimited scalability

  • Amazon's AWS announces Aurora DSQL, a new serverless, distributed SQL database.
  • Aurora DSQL offers high availability, strong consistency, and PostgreSQL compatibility.
  • It promises 4x faster reads and writes compared to other distributed SQL databases.
  • AWS uses an active-active architecture to ensure resiliency and handles all security updates.

Amazon · 3w

Introducing Amazon Aurora DSQL

  • Amazon has introduced its fastest serverless distributed SQL database, Aurora DSQL, to reduce the complexity of legacy processes, make the build and deployment of apps quicker and simpler, and provide an alternative to commercial databases. Aurora can handle single-region or multi-region configurations with 99.99% and 99.999% availability, respectively. It utilizes two linked cluster regions with a third as a witness for data replication and durability and can scale up or down to accommodate businesses of any size. The serverless design eliminates the need for patching and upgrades, architecture management and maintenance downtime.
  • Aurora DSQL is PostgreSQL-compatible, allowing developers to use familiar concepts and improve productivity without sacrificing protocol compatibility. Its optimistic concurrency control (OCC) ensures that long-running transactions won't slow down parallel processes. In both single- and multi-region configurations it is designed so that component and AZ (Availability Zone) failures don't disrupt service, and in a multi-region setup it keeps read consistency strong across locations.
  • Aurora DSQL offers single-region clusters that are active-active across three AZs, ensuring minimal replication lag, and multi-region clusters that let users balance applications across locations for performance or resiliency needs. Highly resilient and highly available, it can cope with growing demands and meet any workload challenge. It comes with simple declarative privacy and security controls and full integration with IAM (Identity and Access Management) and AWS CloudTrail. It blocks password-based authentication in favour of token-based authentication through IAM, with minimal protocol changes required.
  • Aurora DSQL can be made operational in a few steps. It provides key capabilities that reduce downtime, increase performance and make real-time data accessible to applications and business processes, giving developers a flexible framework that couples high availability with scalability while reducing complexity.
  • Aurora DSQL is available in preview on the Aurora DSQL console; for further information, visit the Aurora DSQL overview page or refer to its comprehensive user guide.
  • Raluca Constantin is a Senior Database Engineer in the AWS Distributed SQL database team and Arun Sankaranarayanan is a Database Specialist Solution Architect based in London, UK.
