techminis

A naukri.com initiative

Databases
VentureBeat · 3w

Breaking the cloud backup ‘black box’ with intelligent data mapping and retrieval

  • Eon, an Israel and New York-based start-up founded by a team of ex-AWS engineers, has come up with a new cloud-native backup solution.
  • The service continuously maps and backs up resources for enterprises, depending on the type of data involved.
  • It makes these backups usable by letting users retrieve specific files or records according to their needs and is challenging the status quo in the cloud backup domain.
  • Until now, the industry's approach to setting up these backups has largely remained the same: static and error-prone.
  • Eon creates snapshots by automating resource mapping, classification, and policy association.
  • After creating the snapshots, the company makes them accessible to users, with global search across all the backed-up data.
  • This gives users the ability to locate and restore relevant data down to specific files.
  • Eon is essentially making cloud backups smart and immediately usable, a departure from how backups have worked so far.
  • As of now, Eon is working to scale its offering and is engaging with dozens of companies across industries.
  • It will be interesting to see how the company continues to differentiate in the highly competitive cloud backup space.


Dbi-Services · 4w

FreeBSD basics – 1 – The first steps

  • FreeBSD is a complete operating system (including packages and ports), while Linux usually refers to the kernel only.
  • FreeBSD comes with almost everything you need by default.
  • Additional programs in FreeBSD do not go to “/bin” or “/usr/bin”; they go to “/usr/local/bin”.
  • Additional configuration files in FreeBSD do not go to “/etc” but into “/usr/local/etc”.
  • You can add permanent network configurations in FreeBSD using the “sysrc” utility.
  • Name resolution is configured in “/etc/resolv.conf“ in FreeBSD.
  • System updates in FreeBSD can be done using “freebsd-update”.
  • Updating or managing packages in FreeBSD is done with “pkg”.
  • Next step includes adding users and groups in FreeBSD.
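The utilities mentioned above can be sketched as follows (a minimal sketch; the interface name em0 and the addresses are assumptions):

```shell
# Persist network settings in /etc/rc.conf via sysrc (interface name is an assumption)
sysrc ifconfig_em0="inet 192.168.1.50 netmask 255.255.255.0"
sysrc defaultrouter="192.168.1.1"

# Fetch and apply binary updates for the base system
freebsd-update fetch install

# Refresh the package catalogue and upgrade installed third-party packages
pkg update && pkg upgrade
```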


TheNewsCrypto · 4w

Oracle Corp Resumes its Legal Battle Against Crypto Oracle LLC

  • Oracle Corporation has resumed its legal battle against Crypto Oracle LLC.
  • The lawsuit accuses Crypto Oracle of trademark infringement and dilution.
  • Oracle Corporation is seeking injunctive relief and damages.
  • Several other crypto firms have faced similar trademark infringement lawsuits.


Dev · 4w

Things Platform Engineers Lack When Dealing With Databases

  • Platform engineers lack the tools, processes, and mindset to unleash the power of their teams when dealing with databases.
  • They lack tools that enable developers to work with databases independently and seamlessly integrate with their environments, CI/CD pipelines, IDEs, and other development tools.
  • Monitoring solutions overload developers with unnecessary data, making it harder for them to gain meaningful insights. They need solutions that can provide comprehensive understanding and context when issues arise.
  • Platform engineers should focus on automation and self-service platforms, allowing developers to resolve issues independently and boost development speed.


Dev · 4w

Working with PostgreSQL Date and Time Functions

  • PostgreSQL provides four main data types for handling dates and times: DATE, TIME, TIMESTAMP, and INTERVAL.
  • PostgreSQL offers functions to perform calculations or extract components from date and time values.
  • The date_part() and EXTRACT() functions can be used to extract specific components from a timestamp.
  • PostgreSQL also provides functions to create datetime values from components.
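As a quick illustration of the functions named above (the literal values are arbitrary):

```sql
-- EXTRACT and date_part() pull components out of a timestamp
SELECT EXTRACT(YEAR FROM TIMESTAMP '2024-06-15 10:30:00');    -- 2024
SELECT date_part('month', TIMESTAMP '2024-06-15 10:30:00');   -- 6

-- make_timestamp() builds a datetime value from components
SELECT make_timestamp(2024, 6, 15, 10, 30, 0);                -- 2024-06-15 10:30:00

-- INTERVAL supports date arithmetic
SELECT DATE '2024-06-15' + INTERVAL '3 days';                 -- 2024-06-18 00:00:00
```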


Dev · 4w

SQL: Multi field mixed deduplication followed by numbering #eg81

  • A SQL Server database table stores personnel records from multiple sources.
  • Requirement: Add personnel number 'no' as a calculation column, find duplicate records, and assign an independent 'no' to each set of duplicate records.
  • SPL code provided to query the database through JDBC, add a new 'no' column, and use an infinite loop to traverse and adjust the 'no' column.
  • The algorithm compares records and synchronizes the 'no' if they are considered duplicates, taking the smaller one between them.


Sdtimes · 4w

Petition filed to cancel Oracle’s trademark for JavaScript

  • Deno, a company that provides a JavaScript runtime, has filed a petition with the U.S. Patent and Trademark Office to cancel Oracle’s trademark for JavaScript.
  • Deno claims that Oracle has abandoned the JavaScript trademark and it should be free for use by the developer community.
  • Deno argues that Oracle has offered no significant products or services under the name 'JavaScript' in years, which it says meets the legal threshold for abandonment.
  • If Oracle does not respond by January 4th, the trademark cancellation case will proceed by default.


Amazon · 4w

Migrate time series data to Amazon Timestream for LiveAnalytics using AWS DMS

  • Amazon Timestream for LiveAnalytics is now supported by AWS Database Migration Service (AWS DMS).
  • Timestream is a serverless time series database service that stores and analyzes trillions of events per day.
  • AWS DMS allows you to migrate time-series data from an AWS DMS supported source database to Timestream with reduced downtime.
  • Timestream supports AWS DMS Parallel Load and Parallel Apply features for a faster data transfer.
  • This post provides a guide for migrating data from an example PostgreSQL source endpoint in AWS DMS to Timestream.
  • The guide includes setting up a Timestream database, creating a target Timestream endpoint, and creating a migration task.
  • Configuration of the AWS DMS migration task with a Timestream target endpoint is detailed, including table mappings and parallel load and apply settings.
  • Error handling and instructions for how to clean up resources once migration is complete are also included.
  • The guide enables running a full end-to-end migration from a relational database to Timestream.
  • The new migration capability paves the way for a range of use cases to derive real-time insights, monitor critical business applications, and analyze millions of real-time events across websites and applications.
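Creating the Timestream target endpoint described above might look like the following AWS CLI call (the identifier and settings values are illustrative assumptions; the full article details the exact configuration):

```shell
# Create an AWS DMS target endpoint for Timestream (values are illustrative)
aws dms create-endpoint \
  --endpoint-identifier timestream-target \
  --endpoint-type target \
  --engine-name timestream \
  --timestream-settings '{"DatabaseName":"migration_db","MemoryDuration":12,"MagneticDuration":365,"EnableMagneticStoreWrites":true}'
```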


Dev · 4w

Packages in Oracle SQL | Package specification | Package body

  • A package in Oracle SQL consists of a package specification and a package body.
  • The package specification declares what is available to be called by other programs.
  • The package body contains the actual implementation of the functions and procedures declared in the specification.
  • A simple example of a package is created, which includes a function to say hello with a personalized greeting and a procedure to show a message.
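A sketch of what such a package might look like (the package, identifier, and message names are assumptions, not the article's exact code):

```sql
-- Specification: declares what other programs may call
CREATE OR REPLACE PACKAGE greeting_pkg AS
  FUNCTION say_hello(p_name IN VARCHAR2) RETURN VARCHAR2;
  PROCEDURE show_message(p_msg IN VARCHAR2);
END greeting_pkg;
/

-- Body: implements the declared function and procedure
CREATE OR REPLACE PACKAGE BODY greeting_pkg AS
  FUNCTION say_hello(p_name IN VARCHAR2) RETURN VARCHAR2 IS
  BEGIN
    RETURN 'Hello, ' || p_name || '!';
  END say_hello;

  PROCEDURE show_message(p_msg IN VARCHAR2) IS
  BEGIN
    DBMS_OUTPUT.PUT_LINE(p_msg);
  END show_message;
END greeting_pkg;
/
```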


Dbi-Services · 4w

PostgreSQL: Maybe we should give ZFS a chance (3) – testing with an aligned record size

  • In this post, the author explores whether changing the ZFS record size would affect the speed of PostgreSQL tests as the previous tests showed that ZFS was slower for the standard tests.
  • By testing with a ZFS having an aligned record size to match the PostgreSQL block size, it was found that aligning the PostgreSQL block size and the ZFS record size mattered a lot.
  • The results showed that using an aligned record size improved the speed considerably, which means that ZFS today is a valid option to run PostgreSQL workloads.
  • The overall tests included running standard and simple-update tests with 8KB and 32KB PostgreSQL block sizes and ZFS record sizes.
  • Using a 32KB block size and record size gave better results than using an aligned 8KB block size and record size.
  • The author recommends testing your own workload before making any decisions; other factors such as shared_buffers and checkpointing also need to be adapted.
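Aligning the record size as described can be sketched as follows (the dataset name tank/pgdata is an assumption):

```shell
# Match the ZFS record size to the PostgreSQL block size (8KB by default)
zfs set recordsize=8k tank/pgdata

# Verify the setting
zfs get recordsize tank/pgdata
```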


Dev · 4w

🗂️ Master SQL Review with GUI 🎛️, GitOps 🖇️ and API 🔌

  • Bytebase offers SQL Review to evaluate SQL changes before applying to database through GUI, GitOps, or API.
  • The SQL Review policies for PostgreSQL in Bytebase and adding rules to avoid errors are explained.
  • Steps to trigger SQL Review through GUI and GitOps workflows are illustrated with examples.
  • Starting ngrok and configuring GitOps followed by creating repository and new token in GitHub are important prerequisites.
  • Bytebase APIs can be utilized to trigger SQL Review through internal portal or GitHub Actions.
  • This tutorial provides a comprehensive understanding of SQL Review feature, involving different aspects of accessing Bytebase and using its workflows effectively.


Dev · 4w

Crossing the Line before the Finish Line. Also the line before that.

  • Starting small and taking the first step is crucial in overcoming the overwhelming dread of coding boot camps.
  • Persistence is key in facing the challenges and failures inherent in coding.
  • Reaching the point where your code is actually working is a mystifying and rewarding experience.
  • Once you've achieved success, the possibilities for future projects become endless, making the finish line just the beginning.


Amazon · 4w

Run event-driven stored procedures with AWS Lambda for Amazon Aurora PostgreSQL and Amazon RDS for PostgreSQL

  • AWS Lambda can be used to securely connect to any Amazon Aurora PostgreSQL-Compatible or Amazon RDS for PostgreSQL database to run stored procedures. This approach provides the flexibility, security and scalability to manage stored procedures in the cloud seamlessly.
  • In the solution overview, the process is initiated through the AWS Command Line Interface (CLI), invoking the Lambda function, which performs the necessary operations and sends notifications to DBAs through Amazon SNS.
  • Creating a Secrets Manager secret, storing a PostgreSQL database user credential, and configuring Amazon SNS to send notifications to designated DBAs users upon successful completion of stored procedures are some prerequisites.
  • Lambda layer for psycopg3 is used to run the Lambda function which connects to the PostgreSQL database, runs a stored procedure, and sends a success message to an SNS topic. The function retrieves the database credentials from Secrets Manager and other connection parameters from environment variables.
  • The Lambda function is customized by setting environment variables and running the desired stored procedures using the configuration of Lambda function's execution role with the necessary permissions to execute.
  • Lambda functions have a default timeout of 3 seconds, and you can configure the timeout to up to 15 minutes to accommodate long-running stored procedures. You can break down the stored procedure into smaller, more manageable tasks that can be run sequentially, or implement an asynchronous processing pattern.
  • Make sure to implement proper logging and error handling in your Lambda function to monitor and troubleshoot any issues that might arise during database connectivity. Handling long-running stored procedures can be performed using various options.
  • Delete the resources created to avoid incurring future charges.
  • This approach offers several advantages, including improved security, scalability, reduced infrastructure overhead, seamless integration, and cost optimization.
  • This solution was developed by Ramesh Pathuri, Senior Database Consultant at AWS, and Gautam Bhaghavatula, Senior Partner Solutions Architect at AWS.
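For the timeout point above, raising the Lambda limit is a one-line CLI change (the function name is an assumption):

```shell
# Raise the timeout from the 3-second default to the 15-minute maximum (900 seconds)
aws lambda update-function-configuration \
  --function-name run-stored-procedure \
  --timeout 900
```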


Siliconangle · 4w

Grading our 2024 enterprise technology predictions

  • The article grades the 2024 predictions made alongside Enterprise Technology Research's Erik Bradley against the data so far.
  • On sustained budget pressures for global tech spending: midsize firms have landed within the 4-5% growth forecast, smaller firms appear to be facing greater budget pressures, and the Global 2000 is tracking at 2.7% growth for the year.
  • AI has not been a tide that lifts all ships; firms seeing ROI on AI projects are experiencing small 'wins' that are not self-funding.
  • Visualization of code output is helping to lower the barrier for shipping generated code in real product use cases.
  • The prediction that cybersecurity would remain a top investment priority proved accurate, as did the focus on addressing sophisticated threats and regulatory compliance.
  • Hybrid cloud models are gaining traction over aggressive public cloud migrations.
  • Organizations across industries are prioritizing data literacy and addressing challenges such as AI hallucinations to ensure meaningful, actionable gen AI outputs, with a notable increase in demand for roles such as 'gen AI prompt engineers' and for ethical AI use.
  • The gen AI trend is bifurcated: 'me too' AI lags behind the leaders, while hyperscalers benefit from the AI wave.
  • Networking challenges in AI environments, particularly around latency and bandwidth, are becoming more prominent.
  • Legacy players such as Dell, IBM, Oracle and HPE have leveraged AI and hybrid cloud offerings to remain competitive, and gen AI models are creating a renewed focus on governance and data quality, which benefits trusted ecosystems such as cloud.
  • Private-market shifts, M&As and IPOs are slowly picking up, but economic constraints haven't provided second lives for companies struggling to gain market share.
  • Overall, the predictions balance well-established trends with forward-looking insights; for the year ahead, experts suggest that hybrid cloud strategies and the rise of gen AI-driven skills and tools will continue to gain traction.


Amazon · 4w

Understanding how ACU minimum and maximum range impacts scaling in Amazon Aurora Serverless v2

  • Part 2 of the two-part blog post series explains how the minimum and maximum configuration of ACUs impact scaling behavior in Aurora Serverless v2 and the speed at which scaling occurs after it starts.
  • Aurora Serverless v2 provides an on-demand, auto scaling configuration for Amazon Aurora with the ACU as the unit of measure.
  • Each Aurora Serverless v2 workload requires unique minimum and maximum ACU requirements. Finding the right ACU configuration is essential.
  • Aurora Serverless v2 automatically scales the capacity of your database up and down in fine-grained increments called ACUs.
  • The scaling of Aurora Serverless v2 DB clusters works based on the workload on your database.
  • The scaling process of Aurora Serverless v2 is transparent and seamless and does not disrupt database operations or connections.
  • Observations revealed that Aurora Serverless v2 was responsive to scaling and had a more gradual scaling down process when higher ACU limits were set.
  • Based on the authors' findings, they recommend setting a balanced minimum ACU, setting a scalable maximum ACU, optimizing queries, and performing regular load testing to verify that the ACU settings can handle peak loads.
  • Aurora Serverless v2 offers a robust solution for businesses seeking flexible and cost-effective database management.
  • The authors of this article are Priyanka, Database Specialist Solutions Architect at AWS, and Venu Koneru, also a Database Specialist Solutions Architect at AWS.
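The ACU range discussed above is set through the cluster's Serverless v2 scaling configuration; a minimal sketch (the cluster identifier and capacity limits are assumptions):

```shell
# Set minimum and maximum ACUs for an Aurora Serverless v2 cluster
aws rds modify-db-cluster \
  --db-cluster-identifier my-aurora-cluster \
  --serverless-v2-scaling-configuration MinCapacity=0.5,MaxCapacity=16 \
  --apply-immediately
```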
