techminis

A naukri.com initiative


Databases

Image Credit: Cloudblog

Unity Ads uses Memorystore to power up to 10 million operations per second

  • Unity Ads, a mobile advertising platform, migrated its workloads to Memorystore for Redis Cluster, a fully managed service designed for high-performance workloads (a minimal client-connection sketch follows this list).
  • Unity's infrastructure now handles up to 10 million Redis operations per second on a single instance.
  • The move delivered a more reliable and scalable infrastructure, reduced costs, and freed up time to focus on high-value activities.
  • The Memorystore for Redis Cluster service offers significantly higher throughput and data capacity while maintaining microsecond latencies, consolidating larger workloads into a single, high-performance cluster.
  • Memorystore for Redis Cluster can scale with zero downtime, and users can expand their clusters to handle terabytes of keyspace with a single click or command.
  • Memorystore offers automatic replica node management, placing them in different zones from their primaries to protect against outages and thus enhance reliability.
  • Unity Ads can now scale efficiently in production, allowing them to adapt to changing demands without disrupting their services.
  • With Memorystore, there is none of the erratic behavior that plagued their previous DIY Redis cluster setup.
  • Moving to Memorystore also brought operational improvements, including cost efficiencies and the simplicity of enabling persistence.
  • Memorystore has opened up new possibilities for how Unity Ads can architect their systems, allowing them to confidently expand their use of Redis across more of their infrastructure to lower costs and improve performance.
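
For context, here is a minimal redis-py sketch (not Unity's code; the endpoint name is a placeholder) of how a client talks to a Redis Cluster endpoint such as a Memorystore for Redis Cluster discovery endpoint:

```python
# redis-py >= 4.1 ships a cluster-aware client that discovers the shard topology itself.
from redis.cluster import RedisCluster

rc = RedisCluster(host="my-memorystore-cluster.example.internal", port=6379)

# Counters are a typical high-throughput ad-serving workload.
rc.set("campaign:123:impressions", 0)
rc.incrby("campaign:123:impressions", 42)
print(rc.get("campaign:123:impressions"))  # b'42'
```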

Image Credit: Dbi-Services

Mastering SSIS with Excel – Resolving the ‘Microsoft.ACE.OLEDB.16.0 Provider is Not Registered’ Error

  • The author encountered the 'Microsoft.ACE.OLEDB.16.0 Provider is Not Registered' error while working on a data extraction task using SSIS.
  • The post provides a guide on migrating SSIS packages from SQL Server 2017 to SQL Server 2022 using Visual Studio 2022.
  • It shares the steps for deploying SSIS packages to the SSIS Catalog.
  • It offers solutions to resolve the 'Microsoft.ACE.OLEDB.16.0 Provider is Not Registered' error.

Image Credit: Dev

These 10+ comparisons cover entire SQL concepts, Is it?

  • The article highlights some important SQL concepts and key terminologies that are frequently misunderstood or confusing.
  • 10+ comparisons are made in a tabular form in the article to clarify the differences between these concepts.
  • A primary key uniquely identifies every record in a table and cannot contain null values, while a foreign key is used to link two tables together.
  • A clustered index stores the table's data rows ordered by the key values, while a non-clustered index is a separate structure that holds pointers back to the data rows.
  • An inner join returns only the rows with matching values in both tables, while an outer join returns all rows from the left, right, or both tables, with NULLs where there is no match.
  • WHERE filters rows before any grouping is applied, while HAVING filters groups after the GROUP BY clause is applied (illustrated in the sketch after this list).
  • Union combines the result sets of two queries and removes duplicate records, whereas union all combines the result sets of two queries but doesn't remove duplicates.
  • Delete removes rows one at a time and logs each row deletion, while truncate removes all rows from a table without logging individual row deletions.
  • Drop removes a table entirely, while delete removes rows based on the condition but keeps table structure intact.
  • Char is a fixed-length data type that always occupies fixed storage space, while varchar is a variable-length data type that occupies space based on the actual length of the data.
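
To make the WHERE/HAVING row concrete, here is a small runnable sketch (my own example, not from the article) using the standard-library sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount INT);
    INSERT INTO sales VALUES ('north', 100), ('north', 5), ('south', 30), ('south', 30);
""")

# WHERE filters rows *before* grouping: the 5-unit sale never reaches SUM().
print(conn.execute("""
    SELECT region, SUM(amount) FROM sales
    WHERE amount > 20
    GROUP BY region
""").fetchall())   # [('north', 100), ('south', 60)]

# HAVING filters groups *after* aggregation: only regions whose total exceeds 80.
print(conn.execute("""
    SELECT region, SUM(amount) FROM sales
    GROUP BY region
    HAVING SUM(amount) > 80
""").fetchall())   # [('north', 105)]
```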

Image Credit: Mysql

MySQL: information_schema.tables performance regression

  • MySQL 8.x stores the data dictionary in hidden InnoDB tables, and queries against it do not scale well when a single schema holds a very large number of tables (313k tables in the author's case); queries against information_schema.tables should be fast and scale to a million tables.
  • Explicit queries against I_S.TABLES and SHOW TABLES LIKE statements are slow, and a DROP DATABASE on a schema with 313k tables would never terminate (a small timing sketch follows this list).
  • The slowdown even affects queries that ask for the tables of a different schema containing only 6 tables.
  • This is a serious concern because data dictionary queries run constantly and often implicitly rather than being sent to the server by clients, e.g. during events processing and distributed transactions.
  • Several queries against I_S.TABLES are simpler than their query plans suggest and perform worse than anticipated.
  • Queries against the data dictionary were mostly CPU-bound in MySQL 5.7 but became more I/O-bound in MySQL 8.0; the resulting heavy disk activity with long latencies is what causes the regression once a schema holds more than a few thousand tables.
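
As a rough illustration (connection details and schema name are placeholders, not the author's setup), these are the kinds of metadata lookups the post measures, issued here through mysql-connector-python:

```python
import time
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root", password="secret")
cur = conn.cursor()

t0 = time.monotonic()
cur.execute(
    "SELECT table_name FROM information_schema.tables WHERE table_schema = %s",
    ("many_tables_schema",),  # stand-in for the 313k-table schema from the post
)
cur.fetchall()
print("information_schema.tables lookup:", time.monotonic() - t0, "s")

t0 = time.monotonic()
cur.execute("SHOW TABLES FROM many_tables_schema LIKE 't0000%'")
cur.fetchall()
print("SHOW TABLES LIKE:", time.monotonic() - t0, "s")
```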

Image Credit: Mysql

MySQL: dual passwords

  • MySQL allows multiple passwords on a single account (dual passwords), e.g. to meet regulatory password-rotation requirements.
  • There is no native view or reporting query that can give an overview of all accounts and their password usage.
  • Log analysis with elevated log levels is required to track password usage.
  • Rotating machine accounts and distributing login information through mechanisms like Vault is recommended (the rotation statements are sketched below).
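
A minimal sketch (not from the post; account name and secrets are placeholders) of the MySQL 8.0 dual-password rotation statements, sent via mysql-connector-python:

```python
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root", password="secret")
cur = conn.cursor()

# Add a second password while keeping the current one valid, so machine
# accounts can switch over gradually.
cur.execute("ALTER USER 'app'@'%' IDENTIFIED BY 'new-secret' RETAIN CURRENT PASSWORD")

# Once every client has picked up the new secret (e.g. from Vault), drop the old one.
cur.execute("ALTER USER 'app'@'%' DISCARD OLD PASSWORD")
```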

Image Credit: Dbi-Services

PostgreSQL Conference Europe 2024 – Last day

  • PGConf.EU 2024, the PostgreSQL Conference in Athens, was a hub of creativity, expertise sharing, and community spirit surrounding PostgreSQL.
  • Automated Database Change Management with Flyway and CI/CD Pipelines was a talk of practical insights into automating database migrations, ensuring consistency and reducing deployment risks.
  • Euler Taveira, developer of the pg_createsubscriber patch, held a session on Speeding Up Logical Replication Setup, addressing initial data synchronization in logical replication and suggesting optimization strategies to expedite replication setup.
  • PostgreSQL Executor: Executing Your Execution Plan was a talk that explained how PostgreSQL executes query plans and provided a clearer understanding for those looking to optimize queries.
  • HELP!!1! The Database is Slow!! was a treasure trove of actionable advice for database administrators faced with performance challenges.
  • Postgres Platform “Best Practices” for the Modern DBA was the talk that explored the intersection of traditional DBA roles and platform engineering and highlighted how modern DBAs can leverage platform engineering principles to enhance efficiency and adaptability.
  • The conference ended with a closing talk that encapsulated the spirit of the event, highlighting the incredible work done by the community and the organizers.
  • The article also includes personal reflections from the writer and encourages readers to support the PostgreSQL project regardless of their expertise level.

Image Credit: Dbi-Services

2024.pgconf.eu – Bigger than ever, the family is growing fast

  • The PostgreSQL community grows every year, and so does pgconf.eu, which took place in Athens, Greece, this year.
  • Attending this event is one of the best opportunities to get in touch with the people behind PostgreSQL.
  • The conference is about people, with no marketing, and a focus on making PostgreSQL the best open source database.
  • The conference offers a mix of technical sessions, discussions, and opportunities to connect with experts in PostgreSQL.

Image Credit: Dev

How to implement CREATE TABLE IF NOT EXISTS in Oracle

  • Unlike PostgreSQL and MySQL, Oracle doesn't provide a direct CREATE TABLE IF NOT EXISTS syntax.
  • The most common and reliable approach is to use a PL/SQL block with exception handling (sketched below).
  • A more verbose but explicit approach is to check the data dictionary first.
  • For simple scripts where PL/SQL might be overkill, you can use a single-line alternative.
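
A minimal sketch of the exception-handling approach (table name and connection details are placeholders), sent from Python with the python-oracledb driver: the PL/SQL block ignores ORA-00955 ('name is already used by an existing object') and re-raises everything else.

```python
import oracledb

conn = oracledb.connect(user="scott", password="tiger", dsn="localhost/XEPDB1")
cur = conn.cursor()

cur.execute("""
    BEGIN
        EXECUTE IMMEDIATE 'CREATE TABLE demo_tab (id NUMBER PRIMARY KEY)';
    EXCEPTION
        WHEN OTHERS THEN
            IF SQLCODE != -955 THEN  -- anything other than "table already exists"
                RAISE;
            END IF;
    END;
""")
conn.commit()
```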

Medium

Spatie Backup Mysqldump Not Recognized: Troubleshooting on Windows for Laravel Projects

  • Building powerful and scalable applications requires a well-chosen technology stack.
  • PHP, Python, and JavaScript are popular programming languages for web development.
  • MySQL is a reliable and flexible database for storing structured data.
  • Laravel, a PHP framework, promotes rapid development and code reusability.
  • Windows offers a familiar environment and tools for developers.
  • The integration of Programming, MySQL, Laravel, and Windows creates a seamless workflow.
  • Challenges include performance optimization, security measures, and database scaling.
  • The 'mysqldump is not recognized' error from Spatie Backup is a common issue for Laravel projects on Windows (a quick PATH check is sketched after this list).
  • Combining Programming, MySQL, Laravel, and Windows provides a powerful platform for building applications.
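
Not from the article, but a quick way to confirm the usual cause on Windows (mysqldump.exe missing from the PATH of the user running the Laravel jobs) is a short standard-library check:

```python
import shutil

path = shutil.which("mysqldump")
if path:
    print("mysqldump found at:", path)
else:
    print(r"mysqldump is not on PATH; add MySQL's bin directory (e.g. C:\xampp\mysql\bin)"
          " to PATH or point spatie/laravel-backup at the full path of the binary.")
```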

Image Credit: Dev

SQL-92 in TPC Benchmarks: Are They Still Relevant?

  • The author explores whether queries from analytics benchmarks written in SQL-92 are irrelevant and inefficient in 2021, given the window functions introduced in SQL:2003 (a before/after sketch follows this list).
  • The original benchmark comparison between PostgreSQL and DuckDB was based on an execution time that exceeded one minute, caused by an inefficient SQL query.
  • PostgreSQL-compatible databases perform well with efficiently written SQL, while inefficient queries skew the comparison.
  • The author writes an SQL query to address the business question, which uses standard SQL features established 29 years ago and implemented in all databases.
  • The author finds it easier to reason through each step and obtain accurate results using structured SQL queries with a WITH clause to define common table expressions.
  • The author recommends focusing on queries that avoid inefficient code to effectively compare performance with PostgreSQL-compatible databases.
  • PostgreSQL has prioritized simplicity in its code, which means it may perform poorly with poorly designed queries.
  • Indexing tables properly could further optimize query performance.
  • The author concludes that running benchmarks with queries that ignore the past 30 years of SQL is not recommended.
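
Purely to illustrate the gap the author describes (these are not the benchmark queries), here is the same question answered SQL-92 style with a correlated subquery and then with a SQL:2003 window function, runnable with the sqlite3 module (window functions need SQLite 3.25+):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount INT);
    INSERT INTO orders VALUES ('a', 10), ('a', 30), ('b', 20), ('b', 5);
""")

# SQL-92: largest order per customer via a correlated subquery (re-scans per row).
sql92 = """
    SELECT customer, amount FROM orders o
    WHERE amount = (SELECT MAX(amount) FROM orders i WHERE i.customer = o.customer)
"""

# SQL:2003: the same answer with a window function, computed in a single pass.
windowed = """
    SELECT customer, amount FROM (
        SELECT customer, amount,
               RANK() OVER (PARTITION BY customer ORDER BY amount DESC) AS rnk
        FROM orders) AS ranked
    WHERE rnk = 1
"""

print(conn.execute(sql92).fetchall())     # [('a', 30), ('b', 20)]
print(conn.execute(windowed).fetchall())  # [('a', 30), ('b', 20)]
```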

Image Credit: Amazon

Unlock cost savings using compression with Amazon DocumentDB

  • Amazon DocumentDB now supports document compression using the LZ4 algorithm; compressed documents can be up to seven times smaller, reducing storage usage and I/O costs.
  • You can use the Amazon DocumentDB Compression Review Tool to get a sense of how compressible your data is, before enabling compression.
  • Using existing APIs, you can monitor compression status and collection size after compression.
  • Compression and decompression add CPU overhead and can increase read/write latency, but the benefits outweigh the overhead for collections with compressible data.
  • You can configure document compression for individual Amazon DocumentDB collections based on collection access patterns and storage requirements.
  • Document compression is only supported on Amazon DocumentDB version 5.0, and only collection data is compressed.
  • You can always enable compression on a collection later as your workload changes.
  • Amazon DocumentDB Compression can be a great way to reduce costs and improve performance for your DocumentDB workloads with compressible data.

Image Credit: Amazon

Achieve a high-speed InnoDB purge on Amazon RDS for MySQL and Amazon Aurora MySQL

  • Purge is essential for a MySQL database to clean up delete-marked table records and undo logs that are no longer needed for multiversion concurrency control (MVCC) or rollback operations.
  • When background purge advances more slowly than the rate of incoming data changes, database performance suffers.
  • InnoDB uses undo logs as the key data structure to support MVCC and rollback operations, and purge to clean them up.
  • The most heavy-duty part of the purge operation is identifying delete-marked table records, including those in clustered indexes, secondary indexes, and BLOB columns.
  • The number of purge threads, the workload characteristics, and database resource utilization can significantly impact the performance of the purge operation.
  • Choosing the right instance class and appropriate partitioning and sharding strategies can improve the efficiency of the purge operation.
  • Using the schema_unused_indexes view in the SYS schema to identify and drop unused secondary indexes is a recommended optimization.
  • Monitoring the rollback segment history list length metric and setting up alarms on it warns when database capacity and workload characteristics need attention (see the monitoring sketch after this list).
  • Keeping purge healthy is essential to a MySQL database's performance, and optimizing it requires careful consideration of workload optimization, database capacity planning, and configuration.
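
A small monitoring sketch along these lines (not from the post; connection details are assumed), reading the history list length from information_schema and listing drop candidates from sys.schema_unused_indexes via mysql-connector-python:

```python
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root", password="secret")
cur = conn.cursor()

# Rollback segment history list length: a steadily growing value means purge
# is falling behind the write workload.
cur.execute("""
    SELECT `count` FROM information_schema.innodb_metrics
    WHERE name = 'trx_rseg_history_len'
""")
print("history list length:", cur.fetchone()[0])

# How many background purge threads are configured.
cur.execute("SHOW GLOBAL VARIABLES LIKE 'innodb_purge_threads'")
print(cur.fetchone())

# Unused secondary indexes add purge (and write) overhead; candidates to drop.
cur.execute("SELECT object_schema, object_name, index_name FROM sys.schema_unused_indexes")
for row in cur.fetchall():
    print(row)
```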

Image Credit: The New Stack

How to Run Databases on Kubernetes: An 8-Step Guide

  • Running databases on Kubernetes requires persistent data storage and consistency.
  • Key concepts to consider when running databases in Kubernetes include: database storage, scaling databases, data consistency and backups, and StatefulSets.
  • StatefulSets are a Kubernetes resource for managing stateful applications that ensures each pod keeps its persistent storage.
  • To run a database as a StatefulSet on Kubernetes, you create a StorageClass, a PersistentVolume (PV), a PersistentVolumeClaim (PVC), and the MySQL StatefulSet itself (a minimal sketch follows this list).
  • To enable the pods to communicate with each other, a headless Service is created for the MySQL StatefulSet.
  • Performing regular backups and routine restore of databases is important to ensure availability of your Kubernetes workloads and maintain database integrity.
  • Monitoring tools such as Prometheus, Grafana and Nagios can be used to monitor database health and notify the engineer if there is a degradation in service or issue with the database.
  • As Kubernetes evolves, the support for StatefulSets will increase, making running databases on Kubernetes a powerful solution for modern infrastructures.
  • To learn more about what Kubernetes can do for your business, you can read more about it on Andela.
  • Using Kubernetes to run databases can be achieved with proper planning and attention to StatefulSets, PersistentVolumes, and persistent storage needs.
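
To make the headless Service and StatefulSet steps concrete, here is a compressed sketch (my own minimal example, not the guide's manifests; names, image, and sizes are assumptions, and the volumeClaimTemplate relies on a default StorageClass to provision the PV) using the official kubernetes Python client:

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster

# Headless Service: gives each StatefulSet pod a stable DNS name for peer communication.
headless_service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "mysql"},
    "spec": {
        "clusterIP": "None",
        "selector": {"app": "mysql"},
        "ports": [{"port": 3306}],
    },
}

# MySQL StatefulSet: one replica with a volumeClaimTemplate, so the pod keeps
# its data across rescheduling.
statefulset = {
    "apiVersion": "apps/v1",
    "kind": "StatefulSet",
    "metadata": {"name": "mysql"},
    "spec": {
        "serviceName": "mysql",  # must match the headless Service above
        "replicas": 1,
        "selector": {"matchLabels": {"app": "mysql"}},
        "template": {
            "metadata": {"labels": {"app": "mysql"}},
            "spec": {
                "containers": [{
                    "name": "mysql",
                    "image": "mysql:8.0",
                    "ports": [{"containerPort": 3306}],
                    "env": [{"name": "MYSQL_ROOT_PASSWORD", "value": "change-me"}],
                    "volumeMounts": [{"name": "data", "mountPath": "/var/lib/mysql"}],
                }],
            },
        },
        "volumeClaimTemplates": [{
            "metadata": {"name": "data"},
            "spec": {
                "accessModes": ["ReadWriteOnce"],
                "resources": {"requests": {"storage": "10Gi"}},
            },
        }],
    },
}

client.CoreV1Api().create_namespaced_service(namespace="default", body=headless_service)
client.AppsV1Api().create_namespaced_stateful_set(namespace="default", body=statefulset)
```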

Image Credit: Dbi-Services

PostgreSQL Conference Europe 2024 – Day 2

  • The second day of the PostgreSQL Conference Europe 2024 in Athens covered a range of technical sessions on topics like upgrading PostgreSQL, handling statistics, and using DBA skills in the cloud.
  • One session focused on upgrading PostgreSQL installations and highlighted the importance of logical replication and careful schema design for smooth upgrades.
  • Another session explored the inner workings of PostgreSQL's statistics collection, providing insights on optimizing query performance.
  • A talk from Microsoft announced the first iteration of PostgreSQL Query Store in Azure, offering performance tuning tools for PostgreSQL users.
  • The final session explained how PostgreSQL operators in Kubernetes can simplify database management in containerized environments, bridging the gap between traditional administration and cloud-native practices.

Medium

The Comprehensive Guide to SQL: Unlocking the Power of Data

  • SQL is a standardized language used to manage and manipulate relational databases.
  • The SELECT statement allows you to retrieve data from one or more tables.
  • The WHERE clause filters records based on specific conditions.
  • JOINs allow you to combine rows from multiple tables based on related columns (see the example after this list).
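
As a minimal runnable illustration of those three building blocks (my own example, not taken from the guide), using the standard-library sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total INT);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Linus');
    INSERT INTO orders VALUES (10, 1, 250), (11, 1, 40), (12, 2, 90);
""")

rows = conn.execute("""
    SELECT c.name, o.total                     -- SELECT: the columns to return
    FROM customers c
    JOIN orders o ON o.customer_id = c.id      -- JOIN: combine rows on a related column
    WHERE o.total > 50                         -- WHERE: filter rows by a condition
""").fetchall()

print(rows)  # [('Ada', 250), ('Linus', 90)]
```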
