
Planet DB2 is an aggregator of blogs about the IBM DB2 database server. We combine and republish posts by bloggers around the world. Email us to have your blog included.

 

March 26, 2017


Robert Catterall

DB2 for z/OS: Running REORG to Reclaim Disk Space

Think of why you run the DB2 for z/OS REORG utility, and a number of reasons are likely to come quickly to mind: to restore row order per a table's clustering key; to reestablish free space (for inserts and/or for updates); to remove the AREO* status set for a table space following (for example) an ALTER TABLE ADD COLUMN operation; or to materialize a pending DDL change such as an enlargement of a table space's DSSIZE. How about disk space reclamation? If that REORG motivation has not...

(Read more)

DB2Night Replays

The DB2Night Show #191: DB2 LUW V11 Certification Training - Part 2

Follow @mohankumarsp Follow @rcollins963 Special Guests: Mohan Saraswatipura and Kent Collins, Authors DB2 LUW V11 Certification Training - Part 2 100% of our audience learned something! If you want to earn DB2 LUW V11 IBM Certifications, there are no better teachers than the professionals that are writing the exams and Certification Study Guides! Watch this replay and learn about DB2 LUW V11, BLU, DPF/MPP, pureScale, and more! Sample...

(Read more)
 

March 24, 2017


Triton Consulting

DBA’s Holiday Checklist – DB2 cover

We’re so over winter….bring on spring! We all look forward to the lighter, brighter evenings and warm spring sunshine and not forgetting the all-important time off work, but what if your organisation has just one or two DBAs? Managing holiday … Continue reading →

(Read more)

DB2 Guys

Just one month remains until the IIUG Event in Raleigh, NC. Register today!

by Rajesh Govindan, Portfolio Marketing Manager, IBM Informix If you’ve got the time, we’ve got the space! But you’re going to want to register now — before the conference is 100% booked. Already, we’ve had lots of Informix developers from all over the world sign up to attend this year’s IIUG conference. They’re not only […]
 

March 22, 2017


Data and Technology

News from IBM InterConnect 2017

This week I am in Las Vegas for the annual IBM InterConnect conference. IBM touts the event as a way to tap into the most advanced cloud technology in the market today. And that has merit, but there...

(Read more)
 

March 21, 2017

Jack Vamvas

3 signs of low Linux memory

Question: I’ve received some monitoring alerts about low memory availability on a Linux server hosting a DB2 LUW installation.

I’d like to be able to check myself. What are some indicators I can report on – giving me more detail on the low memory usage?

Also, what are some of the signals that Linux memory health is good?

 

Answer: Linux offers some commands that allow you to assess memory usage. These suggestions are for general use, and it's important to interpret the figures in the context of the server; a deeper understanding is required for capacity planning and troubleshooting.

When I get a low-memory alert, I make a few checks to verify that the alert is valid:

  • Swap usage increases (or pages move in and out frequently). Read more on Linux swap space and DB2
  • Free memory and buffers/cache at 0 or very close to 0
  • dmesg | grep oom-killer, which displays OutOfMemory-killer activity. If no value is returned, the oom-killer has not run. The oom-killer sacrifices processes to free memory; it is a last-resort scenario

 

Any one of these three indicators, and certainly all three together, is a strong argument for immediate action. It may require some troubleshooting to identify the root cause.
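These checks can be scripted. Below is a minimal Python sketch (mine, not from the original post) that parses /proc/meminfo-style text for the free/cache and swap figures used in the checks above; what counts as "low" still depends on your server.

```python
def memory_signals(meminfo_text):
    """Parse /proc/meminfo-style text and report the two figures above."""
    fields = {}
    for line in meminfo_text.splitlines():
        name, _, rest = line.partition(":")
        if rest:
            fields[name.strip()] = int(rest.split()[0])  # values are in kB

    # Signal: free memory plus buffers/cache at or near zero is a red flag.
    free_and_cache = (fields.get("MemFree", 0)
                      + fields.get("Buffers", 0)
                      + fields.get("Cached", 0))
    # Signal: growing swap usage suggests memory pressure.
    swap_used = fields.get("SwapTotal", 0) - fields.get("SwapFree", 0)
    return {"free_and_cache_kb": free_and_cache, "swap_used_kb": swap_used}
```

On a live server you would feed it `open("/proc/meminfo").read()` and compare successive samples rather than a single reading.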

Read More

DB2 Instance Memory and dbptnmem (DBA DB2)

 

 

Big Data University

This Week in Data Science (March 21, 2017)

Here’s this week’s news in Data Science and Big Data.

Don’t forget to subscribe if you find this useful!

Interesting Data Science Articles and News

Featured Courses From BDU

  • Big Data 101 – What Is Big Data? Take Our Free Big Data Course to Find Out.
  • Predictive Modeling Fundamentals I
    – Take this free course and learn the different mathematical algorithms used to detect patterns hidden in data.
  • Using R with Databases
    – Learn how to unleash the power of R when working with relational databases in our newest free course.
  • Deep Learning with TensorFlow – Take this free TensorFlow course and learn how to use Google’s library to apply deep learning to different data types in order to solve real world problems.

Cool Data Science Videos

The post This Week in Data Science (March 21, 2017) appeared first on BDU.


Adam Gartenberg

All you need to know is…

A while back, Seth Godin introduced me to a story shared by Larry and Ann McDuff about the lessons they learned long-distance hiking, and it immediately resonated with me. I went to share it...

(Read more)

DB2utor

Using Installation SYSOPR Authority

These days, most enterprises limit access to sensitive data to only those users who need to see this information to do their jobs. In the meantime, security features have evolved over the past few DB2 releases.
 

March 20, 2017


Henrik Loeser

IBM Bluemix in Germany, includes dashDB and Cloudant

IBM Bluemix in Germany, in German Today, I wanted to share some exciting news with you. Most of you know that I am German. Thus, it is terrific to have IBM Bluemix available from Frankfurt, Germany,...

(Read more)

Adam Gartenberg

Recent Announcements (Cisco, Feature Pack 8, Connections 6)

I wanted to make sure that I posted on some recent announcements, as they do (and will) offer some significant improvements in our offerings. Cisco Integration: Chapter 2 At the Connect conference,...

(Read more)

Craig Mullins

The DB2 12 for z/OS Blog Series – Part 7: Relative Page Number Table Spaces

One of the most significant new features for supporting big data in a DB2 12 environment is relative page numbering (or RPN) for range-partitioned table spaces. You can either create a new RPN range-partitioned table space, or an existing range-partitioned table space can be changed to RPN via an ALTER TABLESPACE with PAGENUM RELATIVE, followed by an online REORG of the entire table space. But...

(Read more)
Big Data University

Learn TensorFlow and Deep Learning Together and Now!

I get a lot of questions about how to learn TensorFlow and Deep Learning. I’ll often hear, “How do I start learning TensorFlow?” or “How do I start learning Deep Learning?”. My answer is, “Learn Deep Learning and TensorFlow at the same time!”. See, it’s not easy to learn one without the other. Of course, you can use other libraries like Keras or Theano, but TensorFlow is a clear favorite when it comes to libraries for deep learning. And now is the best time to start. If you haven’t noticed, there’s a huge wave of new startups and big companies adopting deep learning. Deep Learning is the hottest skill to have right now.

So let’s start from the basics. What actually is “Deep Learning” and why is it so hot in data science right now? What’s the difference between Deep Learning and traditional machine learning? Why TensorFlow? And where can you start learning?

What is Deep Learning?

Inspired by the brain, deep learning is a type of machine learning that uses neural networks to model high-level abstractions in data. The major difference between Deep Learning and Neural Networks is that Deep Learning has multiple hidden layers, which allows deep learning models (or deep neural networks) to extract complex patterns from data.

How is Deep Learning different from traditional machine learning algorithms, such as Neural Networks?

Under the umbrella of Artificial Intelligence (AI), machine learning is the sub-field covering algorithms that can learn on their own, including Decision Trees, Linear Regression, K-means clustering, Neural Networks, and so on. Deep Neural Networks, in particular, are super-powered Neural Networks that contain several hidden layers. With the right configuration/hyper-parameters, deep learning can achieve impressively accurate results compared to shallow Neural Networks with the same computational power.

Why is Deep Learning such a hot topic in the Data Science community?

Simply put, across many domains, such as image classification, object recognition, sequence modeling, and speech recognition, deep learning can attain much faster and more accurate results than ever before. And it all started recently, around 2015. Three key catalysts came together, resulting in the popularity of deep learning:

  1. Big Data: the presence of extremely large and complex datasets;
  2. GPUs: the low cost and wide availability of GPUs made parallel processing faster and cheaper than ever;
  3. Advances in deep learning algorithms, especially for complex pattern recognition.

These three factors resulted in the deep learning boom that we see today. Self-driving cars and drones, chat bots, translations, AI playing games. You can now see a tremendous surge in the demand for data scientists and cognitive developers. Big companies are recognizing this evolution in data-driven insights, which is why you now see IBM, Google, Apple, Tesla, and Microsoft investing a lot of money in deep learning.

What are the applications of Deep Learning?

Historically, the goal of machine learning was to move humanity towards the singularity of “General Artificial Intelligence”. But not surprisingly, this goal has been tremendously difficult to attain. So instead of trying to develop generalized AI, scientists started to develop a series of models and algorithms that excelled in specific tasks.

So, to understand the main applications of Deep Learning, it helps to briefly look at each of the different types of Deep Neural Networks, their main applications, and how they work.

What are the different types of Deep Neural Networks?

Convolutional Neural Networks (CNNs)

Assume that you have a dataset of images of cats and dogs, and you want to build a model that can recognize and differentiate them. Traditionally, your first step would be “feature selection”, that is, choosing the best features from your images and then using those features in a classification algorithm (e.g., Logistic Regression or Decision Tree), resulting in a model that could predict “cat” or “dog” given an image. These chosen features could simply be the color, object edges, pixel location, or countless other features that could be extracted from the images.

Of course, the better and more effective the feature sets you find, the more accurate and efficient the image classification you can obtain. In fact, over the last two decades there has been a lot of scientific research in image processing on just how to find the best feature sets for classification. However, as you can imagine, selecting and using the best features is a tremendously time-consuming task and is often ineffective. Further, extending the features to other types of images is an even greater problem: the features you used to discriminate cats from dogs cannot be generalized to, for example, recognizing hand-written digits. Therefore, the importance of feature selection can’t be overstated.

Enter convolutional neural networks (CNNs). Suddenly, without having to find or select features, CNNs find the best features for you, automatically and effectively. So instead of you choosing which image features to use to classify dogs vs. cats, CNNs can automatically find those features and classify the images for you.

Convolutional Neural Network (Wikipedia)

What are the CNN applications?

CNNs have gained a lot of attention in the machine learning community over the last few years. This is due to the wide range of applications where CNNs excel, especially machine vision projects: image recognition/classification, object detection/recognition in images, digit recognition, coloring black-and-white images, translating text in images, and creating art images.

Let’s look closer at a simple problem to see how CNNs work. Consider the digit recognition problem: we would like to classify images of handwritten numbers, where the target is the digit (0,1,2,3,4,5,6,7,8,9) and the observations are the intensity and relative position of the pixels. After some training, it’s possible to generate a “function” that maps inputs (a digit image) to desired outputs (the digit class). The only question is how well this mapping operates; the training process continues until the model achieves a desired level of accuracy on the training data. You can learn more about this problem and its solution through our convolution network hands-on notebooks.

How does it work?

A convolutional neural network (CNN) is a type of feed-forward neural network consisting of multiple layers of neurons that have learnable weights and biases. Each neuron receives some input, processes it, and optionally follows it with a non-linearity. The network has multiple layers, such as convolution, max-pool, dropout, and fully connected layers. In each layer, small groups of neurons process portions of the input image. The outputs of these groups are then tiled so that their input regions overlap, to obtain a higher-resolution representation of the original image; this is repeated for every such layer. The important point: CNNs break complex patterns down into a series of simpler patterns, through multiple layers.
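The convolution step itself is simple to sketch. Here is a minimal, illustrative 2D convolution in NumPy (no padding, stride 1, and a fixed kernel rather than the learned weights a real CNN would use):

```python
import numpy as np

def conv2d(image, kernel):
    """Slide the kernel over the image; each output cell is a weighted sum."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 2x2 "sum" kernel applied to a 3x3 image yields a 2x2 feature map.
feature_map = conv2d(np.arange(9.0).reshape(3, 3), np.ones((2, 2)))
```

In a trained CNN, many such kernels are learned per layer, each responding to a different simple pattern.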

Recurrent Neural Network (RNN)

A Recurrent Neural Network tries to solve the problem of modeling temporal data. You feed the network sequential data; it maintains the context of the data and learns the patterns in it.

What are the applications of RNN?

Yes, you can use it to model time-series data such as weather or stocks, or sequential data such as genes. But you can also tackle other projects, for example text processing tasks like sentiment analysis and parsing, and more generally any language model that operates at the word or character level. Here are some interesting projects done with RNNs: speech recognition, adding sound to silent movies, text translation, chat bots, handwriting generation, language modeling (automatic text generation), and image captioning.

How does it work?

The Recurrent Neural Network is a specialized type of Neural Network that solves the issue of maintaining context for sequential data. RNNs are models with a simple structure and a feedback mechanism built-in. The output of a layer is added to the next input and fed back to the same layer. At each iterative step, the processing unit takes in an input and the current state of the network and produces an output and a new state that is re-fed into the network.

However, this model has some problems. It is very computationally expensive to maintain state for a large number of units, even more so over a long span of time. Additionally, recurrent networks are very sensitive to changes in their parameters. A way was needed to keep information over long periods of time while also solving the oversensitivity to parameter changes, that is, to make backpropagation through recurrent networks more viable. The answer: Long Short-Term Memory (LSTM).

LSTM is an abstraction of how computer memory works: you have a linear unit, the information cell itself, surrounded by three logistic gates responsible for maintaining the data. One gate inputs data into the information cell, one outputs data from the cell, and the last keeps or forgets data depending on the needs of the network.
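The three gates can be written down directly. The following NumPy sketch of a single LSTM step is illustrative only; the convention of stacking the four gate pre-activations into one weight matrix is an assumption made here for brevity:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step; W maps [h_prev; x] to 4 stacked gate pre-activations."""
    n = h_prev.shape[0]
    z = W @ np.concatenate([h_prev, x]) + b
    i = sigmoid(z[0:n])          # input gate: what to write into the cell
    f = sigmoid(z[n:2 * n])      # forget gate: what to keep from the old cell
    o = sigmoid(z[2 * n:3 * n])  # output gate: what to expose
    g = np.tanh(z[3 * n:4 * n])  # candidate cell values
    c = f * c_prev + i * g       # keep/forget, then write
    h = o * np.tanh(c)           # gated output of the information cell
    return h, c
```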

If you want to practice the basics of RNN/LSTM with TensorFlow, or language modeling, you can practice here.

Restricted Boltzmann Machine (RBM)

RBMs are used to find patterns in data in an unsupervised fashion. They are shallow neural nets that learn to reconstruct data by themselves. They are very important models because they can automatically extract meaningful features from a given input without the need to label it. RBMs may not look outstanding as independent networks, but they are significant as building blocks of other networks, such as Deep Belief Networks.

What are the applications of RBM?

RBM is useful for unsupervised tasks such as feature extraction/learning, dimensionality reduction, pattern recognition, recommender systems (Collaborative Filtering), classification, regression, and topic modeling.

To understand the theory of RBM and application of RBM in Recommender Systems you can run these notebooks.

How does it work?

It only possesses two layers: a visible input layer and a hidden layer where the features are learned. Simply put, RBM takes the inputs and translates them into a set of numbers that represents them. Then, these numbers can be translated back to reconstruct the inputs. Through several forward and backward passes, the RBM will be trained. Now we have a trained RBM model that can reveal two things: first, what is the interrelationship among the input features; second, which features are the most important ones when detecting patterns.
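The forward and backward passes described here are usually trained with contrastive divergence. A rough, illustrative NumPy sketch of one CD-1 update follows (unregularized, single sample, not a production implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(v, W, b_hidden, b_visible, lr=0.1):
    """One contrastive-divergence (CD-1) update: forward, reconstruct, adjust."""
    p_h = sigmoid(v @ W + b_hidden)                   # hidden units given input
    h = (rng.random(p_h.shape) < p_h).astype(float)   # sample hidden states
    v_recon = sigmoid(h @ W.T + b_visible)            # backward pass: reconstruct
    p_h_recon = sigmoid(v_recon @ W + b_hidden)
    W += lr * (np.outer(v, p_h) - np.outer(v_recon, p_h_recon))
    return v_recon, W
```

Repeating this over many inputs nudges W so reconstructions match the data, which is exactly the "several forward and backward passes" described above.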

Deep Belief Networks (DBN)

A Deep Belief Network is an advanced Multi-Layer Perceptron (MLP). It was invented to solve an old problem in traditional artificial neural networks: backpropagation can often lead to “local minima” or “vanishing gradients”. This happens when your “error surface” contains multiple grooves and, as you perform gradient descent, you fall into a groove that is not the lowest one.

What are the applications of DBN?

DBNs are generally used for classification (same as traditional MLPs). One of the most important applications of DBNs is image recognition. The important part: a DBN is a very accurate discriminative classifier, and we don’t need a big set of labeled data to train it; a small set works fine because feature extraction is unsupervised, handled by a stack of RBMs.

How does it work?

A DBN is similar to an MLP in terms of architecture but different in its training approach. DBNs can be divided into two major parts: the first is a stack of RBMs that pre-trains the network; the second is a feed-forward backpropagation network that further refines the results from the RBM stack. In the training process, each RBM learns the entire input; the stacked RBMs can then detect inherent patterns in the inputs.

DBN solves the “vanishing problem” by using this extra step, so-called pre-training. Pre-training is done before backpropagation and can lead to an error rate not far from optimal. This puts us in the “neighborhood” of the final solution. Then we use backpropagation to slowly reduce the error rate from there.
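The two training phases can be sketched as a greedy layer-wise loop. This is an illustrative skeleton only; the layer widths are made up and the per-layer RBM training itself is elided:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

sizes = [784, 256, 64, 10]      # example layer widths (an assumption)
data = rng.random((32, 784))    # a batch of unlabeled inputs

weights = []
x = data
for n_in, n_out in zip(sizes, sizes[1:]):
    W = rng.normal(0, 0.01, (n_in, n_out))
    # ...pre-train this layer as an RBM on x (e.g. CD-1 updates to W)...
    weights.append(W)
    x = sigmoid(x @ W)          # this layer's output feeds the next RBM

# `weights` now initializes a feed-forward net; backpropagation fine-tunes it.
```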

Autoencoder

An autoencoder is an artificial neural network employed to recreate a given input. It takes a set of unlabeled inputs, encodes them, and then tries to extract the most valuable information from them. Autoencoders are used for feature extraction, learning generative models of data, dimensionality reduction, and compression. They are very similar to RBMs but can have more than two layers.

What are the applications of Autoencoders?

Autoencoders are employed in some of the largest deep learning applications, especially for unsupervised tasks such as feature extraction, pattern recognition, and dimensionality reduction. For another example, say you want to extract what emotion the person in a photograph is feeling; Nikhil Buduma explains the utility of this type of neural network for that task with excellence.

How does it work?

An RBM is an example of an autoencoder, just with fewer layers. An autoencoder can be divided into two parts: the encoder and the decoder.

Let’s say we want to classify some facial images, and each image is very high-dimensional (e.g., 50×40 pixels, i.e., 2,000 values). The encoder compresses the representation of the input: in this case, it compresses the 2,000-dimensional face data down to only 30 dimensions, taking several steps along the way. The decoder is a reflection of the encoder network; it works to recreate the input as closely as possible. During training it plays an important role, forcing the autoencoder to select the most important features for the compressed representation. After training, you can apply your algorithms to the 30 dimensions.
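A shape-level sketch of that 2,000-to-30 compression, with untrained tied weights (purely illustrative; a real autoencoder learns these weights during training, and the intermediate layer sizes here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# A 50x40 face image flattened to 2000 values, compressed in stages to 30.
sizes = [2000, 500, 100, 30]
enc = [rng.normal(0, 0.01, (m, n)) for m, n in zip(sizes, sizes[1:])]

def encode(x):
    for W in enc:
        x = np.tanh(x @ W)
    return x

def decode(code):
    # The decoder mirrors the encoder (here via tied, transposed weights).
    for W in reversed(enc):
        code = np.tanh(code @ W.T)
    return code

x = rng.random(2000)
code = encode(x)       # the 30-dimensional representation
x_hat = decode(code)   # a reconstruction with the input's original shape
```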

Why TensorFlow? How does it work?

TensorFlow is just a library, but an excellent one. I believe TensorFlow’s capability to execute code on different devices, such as CPUs and GPUs, is its superpower. This is a consequence of its specific structure: TensorFlow defines computations as graphs, and these are made of operations (also known as “ops”). So when we work with TensorFlow, we are really defining a series of operations in a graph.

To execute these operations as computations, we must launch the Graph into a Session. The session translates and passes the operations represented in the graphs to the device you want to execute them on, be it a GPU or CPU.

For example, the image below represents a graph in TensorFlow. W, x, and b are tensors over the edges of this graph. MatMul is an operation over the tensors W and x; after that, Add is called to add the result of the previous operation to b. The resultant tensor of each operation flows to the next one, until the end, where the desired result is obtained.

TensorFlow is really an extremely versatile library that was originally created for tasks that require heavy numerical computations. For this reason, TensorFlow is a great library for the problem of machine learning and deep neural networks.
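The graph-then-session idea can be mimicked in plain Python. This toy sketch is not the TensorFlow API; it only illustrates deferred execution, where MatMul and Add nodes are declared first and computed later:

```python
import numpy as np

class Op:
    """A graph node: holds an operation and its input nodes, computes lazily."""
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs
    def run(self):
        return self.fn(*[node.run() for node in self.inputs])

class Const(Op):
    def __init__(self, value):
        self.value = np.asarray(value)
    def run(self):
        return self.value

W = Const([[1.0, 2.0], [3.0, 4.0]])
x = Const([[1.0], [1.0]])
b = Const([[10.0], [10.0]])

matmul = Op(np.matmul, W, x)     # the MatMul node
result = Op(np.add, matmul, b)   # the Add node fed by MatMul and b

output = result.run()            # only now does any computation happen
```

In TensorFlow proper, `run` is what the Session does, and the Session can hand the graph to a CPU or a GPU.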

Where should I start learning?

Again, as I mentioned first, it does not matter where to start, but I strongly suggest that you learn TensorFlow and Deep Learning together. Deep Learning with TensorFlow is a course that we created to put them together. Check it out and please let us know what you think of it.

Good luck on your journey into one of the most exciting technologies to surface in our field over the past few years.

The post Learn TensorFlow and Deep Learning Together and Now! appeared first on BDU.


DB2 Guys

Join fellow DB2 Professionals at the 2017 IDUG DB2 Tech Conference in Anaheim

by Michael Roecken, DB2 Linux, UNIX and Windows Development Join fellow DB2 professionals at the 2017 IDUG DB2 Tech Conference, April 30 – May 4 in Anaheim for a comprehensive 5-day event, featuring a wide variety of user-focused technical education and an exceptional mix of DB2 thought leaders. At the IDUG DB2 Tech Conference, you […]
 

March 18, 2017


DB2Night Replays

The DB2Night Show #190: DB2 LUW V11 Certification Training - Part 1

Follow @mohankumarsp Follow @rcollins963 Special Guests: Mohan Saraswatipura and Kent Collins, Authors DB2 LUW V11 Certification Training - Part 1 100% of our audience learned something! If you want to earn DB2 LUW V11 IBM Certifications, there are no better teachers than the professionals that are writing the exams and Certification Study Guides! Watch this replay and learn about DB2 LUW V11, BLU, DPF/MPP, pureScale, and more! Sample...

(Read more)
 

March 16, 2017


DB2 Guys

Winning with Machine Learning

by Sajan Kuttappa – Content Marketing Manager IBM is hosting 3 no-cost, face-to-face events in April designed to show how your current IBM investments can help you to easily embrace private, public, and hybrid cloud, and be ready for Machine Learning. These events feature an executive keynote, “Winning with Machine Learning”, followed by sessions across […]
 

March 14, 2017

Big Data University

This Week in Data Science (March 14, 2017)

Here’s this week’s news in Data Science and Big Data.

Don’t forget to subscribe if you find this useful!

Interesting Data Science Articles and News

Upcoming Data Science Events

Featured Courses From BDU

  • Big Data 101 – What Is Big Data? Take Our Free Big Data Course to Find Out.
  • Predictive Modeling Fundamentals I
    – Take this free course and learn the different mathematical algorithms used to detect patterns hidden in data.
  • Using R with Databases
    – Learn how to unleash the power of R when working with relational databases in our newest free course.
  • Deep Learning with TensorFlow – Take this free TensorFlow course and learn how to use Google’s library to apply deep learning to different data types in order to solve real world problems.

Cool Data Science Videos

The post This Week in Data Science (March 14, 2017) appeared first on BDU.


DB2utor

Converting DBRMs to Packages in DB2 12

Prior to DB2 12, the DB2 bind command allowed you to specify either a list of database request modules (DBRM) using the member keyword or a list of packages using the PKLIST (package list) keyword. However, IBM, which deprecated the member keyword option in DB2 10, has removed it entirely from DB2 12. So any migration to DB2 12 should include taking the steps needed to rebind all plans without the member keyword.
 

March 13, 2017

Jack Vamvas

Powershell grep and awk

I use grep for exploring data sets and awk for text processing on Linux. They are very powerful utilities that simplify extracting and processing text.

For example, I have a CRON job running on a Linux server which does something like:

Example 1

db2pd -d mydb -logs |grep 'Method 1 Archive Status' | awk '{print " JackV@myserver - MYDB "$3" "$4":  "$5}'

This command uses the db2pd utility to extract information from the db2pd logs output and processes it to return specific fields from the extracted data.

The output is :

JackV@myserver MYDB Archive Status:   Success

 

A question that regularly appears is how to achieve the same result in Powershell. In this case I’m referring to executing the db2pd utility on a DB2 LUW database on a Windows platform.

This is an example of using the PowerShell command line to extract and present the text in the same output format as the Linux grep and awk combination above. The example uses the select-string cmdlet, similar to the grep step, and then creates an array using the split function.

Example 2

db2pd -d MYDB -logs | select-string "Method 1 Archive Status" |%{$a=$_.ToString().split(" ");Write-Host "JackV@myserver" $a[0] $a[2] $a[3] $a[10]}

 

One difference to notice is the final column. In awk Example 1, the final column is in the 5th position; in the PowerShell Example 2 version, the position is 10. That’s because split(" ") produces an empty array element for every extra white space, while awk treats runs of whitespace as a single field separator.
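For readers who script in Python, the same whitespace pitfall is easy to demonstrate. The db2pd output line below is a shortened, hypothetical sample, so the field positions apply to it only, not to the exact output in the post:

```python
import re

line = "Method 1 Archive Status:   Success"   # hypothetical sample line

naive = line.split(" ")           # every extra space yields an empty field
fields = re.split(r"\s+", line)   # collapse whitespace runs, like awk does

# naive  -> ['Method', '1', 'Archive', 'Status:', '', '', 'Success']
# fields -> ['Method', '1', 'Archive', 'Status:', 'Success']
```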

Read More

The great Windows or Linux debate for DB2 LUW

db2pd troubleshooting guide (DBA DB2)

 

 

 


Kim May

Webinar: Oracle Renewal Looming? Consider DB2!

The Fillmore Group is inviting organizations frustrated with rising Oracle costs to join us for a webinar focusing on the two primary motivations behind DB2 migrations: cost and functionality....

(Read more)
 

March 08, 2017


Dave Beulke

IBM Announces Machine Learning on System z

Last month IBM provided more details about their new Watson Machine Learning Initiatives starting with System z (WMLz). Beginning a new initiative with the mainframe platform is an interesting move for IBM. The new WMLz offerings: Introduce an end-to-end enterprise machine-learning platform....

(Read more)
 

March 07, 2017


Data and Technology

Inside the Data Reading Room – Analytics Edition

If you are a regular reader of this blog you know that, from time-to-time, I review data-related books. Of course, it has been over a year since the last book review post, so this post is long...

(Read more)
Jack Vamvas

DB2 performance monitoring with db2pd

For ease of use and accessibility, db2pd in DB2 LUW is my favourite tool for DB2 performance troubleshooting.

Here are some approaches to gathering data from DB2 LUW to assist root cause analysis. This is the first post; I’ll keep adding more information over the next few days.

If you want to read on a wider range of db2pd capabilities read db2pd troubleshooting guide (DBA DB2)

A typical situation is sustained high CPU – and query response times begin to degrade.

Check vmstat to get an overall picture of the CPU, focusing on the difference between sys and usr CPU. What exactly counts as high CPU can be arbitrary and depends on an understanding of the system’s resource usage patterns.

To keep it simple – let’s say sys is double the usr CPU.

 For some vmstat details read Linux swap space and DB2

If CPU sys is high, check I/O stalls, network, and latches.

For latch investigation, use the -latches switch. This example runs every 2 seconds, 20 times:

db2pd -latches -rep 2 20

 

Investigate the “Waiter” column.

 

If CPU usr is high, it means application code is the main contributor to high CPU. You’ll need to list the EDUs consuming the most CPU:

db2pd -db <dbname> -edus interval=5 top=5

The next piece of the puzzle is to identify which query is generating the bottleneck. Use the -apinfo switch to correlate:

db2pd -db <dbname> -apinfo -rep 2 5

 

Read More

How to monitor table reorgs with db2pd -reorgs (DBA DB2)

DB2 Instance Memory and dbptnmem

Check DB2 instance status

Big Data University

This Week in Data Science (March 7, 2017)

Here’s this week’s news in Data Science and Big Data.

Don’t forget to subscribe if you find this useful!

Interesting Data Science Articles and News

Featured Courses From BDU

  • Big Data 101 – What Is Big Data? Take Our Free Big Data Course to Find Out.
  • Predictive Modeling Fundamentals I
    – Take this free course and learn the different mathematical algorithms used to detect patterns hidden in data.
  • Using R with Databases
    – Learn how to unleash the power of R when working with relational databases in our newest free course.
  • Deep Learning with TensorFlow – Take this free TensorFlow course and learn how to use Google’s library to apply deep learning to different data types in order to solve real world problems.

The post This Week in Data Science (March 7, 2017) appeared first on BDU.


DB2utor

DB2 12 Enhancements to System Profiles

DB2 11 introduced a powerful feature that provides monitoring capabilities using DB2 profile tables. With these tables, you can monitor the use of system resources by distributed applications as well as set special registers values for the remote application without having to change application code.
 

March 06, 2017


DB2 Guys

Why is it a great time to migrate to DB2 11.1?

by Sajan Kuttappa, Content Marketing Manager, IBM DB2 IBM released the latest version of its DB2 software –  DB2 11.1 for Linux, UNIX and Windows in 2016.  With DB2 11.1, several improvements were introduced including simplification of the upgrade process. Recognizing that many organizations still have databases on older releases, DB2 11.1 increased the number […]
 

March 03, 2017


DB2 Guys

Renowned speakers, educational sessions…this is an event no developer will want to miss!

by Rajesh Govindan, Portfolio Marketing Manager, IBM Informix Informix developers from all over the world are signing up to attend this year’s IIUG conference in Raleigh, North Carolina. They’re coming for the popular speakers – like John Cohn, esteemed IBM engineer, inventor and author will be the keynote speaker — and for the deep-dive […]
 

March 02, 2017


Craig Mullins

The DB2 12 for z/OS Blog Series – Part 6: Transferring Ownership of Database Objects

When a database object is created it is given a qualified two-part name. This applies to tables, indexes, table spaces, distinct types, functions, stored procedures, and triggers. The first part is the schema name (or the qualifier), which is either implicitly or explicitly specified. The default schema is the authorization ID of the owner of the plan or package. The second part is the name of...

(Read more)
 

February 28, 2017

Jack Vamvas

How to troubleshoot DB2 lock waits

Monitoring lock waits is a great way of measuring query performance slowdown.

Lock waits are a normal part of query processing, but when lock waits start taking longer than normal, it’s a sign of trouble.

According to the DB2 LUW documentation, a lock wait is defined as: “A lock wait occurs when one transaction (composed of one or more SQL statements) tries to acquire a lock whose mode conflicts with a lock held by another transaction.”

Common symptoms of lock waits taking longer than normal are:

  1. Applications are not completing tasks
  2. SQL query performance slowdown
  3. Lock escalations. A small amount is OK, but excessive counts are an issue

The ideal is to monitor lock waits, lock timeouts, and deadlocks continuously.

To report on lock wait chains:

db2pd -locks wait -alldbs

Use this query, which joins the views sysibmadm.snapappl_info (information about applications from an application snapshot) and sysibmadm.snapappl (cumulative counts and the latest SQL statement executed). It reports useful lock wait statistics:

 

db2 "SELECT ai.appl_name AS app_name , \
ai.primary_auth_id AS auth_id , ap.agent_id AS app_handle,\
ap.lock_waits AS lock_waits, ap.lock_wait_time / 1000 AS Total_Wait_S, \
(ap.lock_wait_time / ap.lock_waits ) AS Avg_Wait_ms \
FROM sysibmadm.snapappl_info ai, sysibmadm.snapappl ap \
WHERE ai.agent_id = ap.agent_id AND ap.lock_waits > 0"


 

Read More

Database Tuning for complex sql queries (DBA DB2)

Trace sql statements in DB2 database (DBA DB2)

DB2 Query Tuning – db2expln (DBA DB2)

Big Data University

This Week in Data Science (February 28, 2017)

Here’s this week’s news in Data Science and Big Data.

Don’t forget to subscribe if you find this useful!

Interesting Data Science Articles and News

Upcoming Data Science Events

Featured Courses From BDU

  • Big Data 101 – What Is Big Data? Take Our Free Big Data Course to Find Out.
  • Predictive Modeling Fundamentals I
    – Take this free course and learn the different mathematical algorithms used to detect patterns hidden in data.
  • Using R with Databases
    – Learn how to unleash the power of R when working with relational databases in our newest free course.

Cool Data Science Videos


DB2utor

DB2 12 Active Logs Greater than 4 GB

Recently while presenting a DB2 12 technical workshop, I discussed pre-migration requirements. One important consideration is the need to ensure that the current DB2 11 active logs are not greater than 4 GB. When the active logs are greater than 4 GB, DB2 11 will startup without a problem and ignore the portion of the log greater than 4GB; however, DB2 12 will not start when in V12R1M100 function mode.
 

February 27, 2017


ChannelDB2 Videos

Learn How to Use R with Databases



Whether you are a database professional or a data scientist, you can easily learn how to use R with Databases.
