
Planet DB2 is an aggregator of blogs about the IBM DB2 database server. We combine and republish posts by bloggers around the world. Email us to have your blog included.

 

February 10, 2016


Dave Beulke

3 DB2 Critical Design Factors for Big Data Analytics Scalability

Faster, bigger, and better analytical insights. These are the goals that management lays out as the new big data analytics process is talked about during the startup meetings. Looking around the room and listening to the CIO’s comments it becomes abundantly clear that they only know the big data...

(Read more)

Craig Mullins

The Most Misunderstood Features of DB2 – Part 3: Nulls

Welcome to Part 3 in my on-going blog series on The Most Misunderstood Features of DB2. You can find the first two parts here: Part 1 (on locking) and Part 2 (OPTIMIZE FOR v. FETCH FIRST). Today’s topic is one that confuses many SQL developers: Nulls. What is a Null? A null represents missing or unknown information at the column level. When a column is set as null, it can mean one of two things:...

(Read more)
Jack Vamvas

How to view harvested user defined storage resources for TSAMP

When troubleshooting a db2haicu operation that adds a mountpoint, it was necessary to review the resources that had been harvested. This is the first step in figuring out how TSAMP views the online resources.

To view the online user-defined storage resources that have been harvested, use this command:


lsrsrc -Ab IBM.AgFileSystem


When reviewing the output you'll notice the mount point information. This information is harvested by default from the /etc/fstab details.

If a new resource has been added to the system and you want to refresh the IBM.AgFileSystem output, use the force method to refresh the list:

refrsrc IBM.Disk

The alternative to using refrsrc IBM.Disk is to restart the domain. Read TSAMP Cheat Sheet for DBA managing DB2 clustering to view the restart domain command.

Although by default the output is taken from the /etc/fstab file, it is possible to amend the MountPoint attribute.
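The full -Ab listing can be verbose. As a minimal sketch (the resource name myfs_data is just a placeholder, and the attribute names are the ones reported in the -Ab output), you can restrict the listing to the attributes you care about, or select a single resource:

lsrsrc -Ab IBM.AgFileSystem Name MountPoint
lsrsrc -s 'Name == "myfs_data"' -Ab IBM.AgFileSystem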

Using the methods listed above, it is possible to view the mounted file systems as represented by the IBM.AgFileSystem resource class.

Read more on TSAMP management:

How to backup the TSA SAMP policy details

 

 

February 09, 2016


Leons Petrazickis

Spark Summit East 2016

Next week I'll be demoing Data Scientist Workbench at Spark Summit East (official site) in New York. Polong Lin will be there with me. Come by the expo floor next Wednesday and Thursday and chat with...

(Read more)

DB2utor

Options for Storing ARRAY Data in a DB2 Table

I’ve heard some confusion around ARRAY, the new user-defined data type (UDT) introduced with DB2 11, and the preferred (read: easiest) way to store an ARRAY of data in a column of a DB2 table.
 

February 08, 2016

Big Data University

Wow!! Amazing Introduction to “R” session.


Were you one of the 120 people who joined us for the Introduction to “R” hands-on session on February 2, 2016 at the Launch Zone?   If you were, I am sure you can attest to the great session we had.

Polong Lin did a great job of delivering the course materials as can be seen by some of the comments that we had on our meetup page.

“Thanks Raul for organizing this, Polong’s presentation was awesome.” (Anirban)

“Excellent intro to R” (Paul Yip)

“Thanks to the whole BDU team. It was a tremendous meeting and Polong made the understanding of R easy and smooth.” (Rafael Gomes)

As with all our Introductory sessions, we try to gear the materials towards a true beginner.   However, everyone regardless of level is invited to attend.

As with all our meetups we started the event with a fun and informative poll, delivered by one of our community members Janice, while enjoying pizza and pop.   It was an opportunity for our team to gauge the level of expertise of the attendees so that the materials presented would be well geared towards them.   It was also an opportunity for us to have a bit of fun and break the ice.

Social media is a large part of our meetup experience; all attendees are encouraged to tweet at us @BigDataU. Doing so may get you a small token. We recognize the most tweets, the most retweeted tweet, and the most innovative/funny tweet.

We recognized the recipients of the winning tweets from our first meetup at this meetup. Congrats once again to Mahesh, Supriya and Neo!!

Thank you to all who attended!!

For future meetups, we welcome your input on topics. Also, if you are proficient in a specific topic that you think would be of interest to this community and you can deliver it, please be sure to suggest it. We are always looking for new ideas and community presenters.

If you are in the Toronto area and want to learn about Python join us at our next meetup on February 16th at The Launch Zone at Ryerson http://meetu.ps/2V8p9t .

We look forward to seeing you!

The post Wow!! Amazing Introduction to “R” session. appeared first on Big Data University.

 

February 04, 2016


Xtivia

DB2 LUW Error Message: SQL1024N

Error Message

SQL1024N  A database connection does not exist.  SQLSTATE=08003

or sometimes:

DB21034E  The command was processed as an SQL statement because it was not a 
valid Command Line Processor command.  During SQL processing it returned:
SQL1024N  A database connection does not exist.  SQLSTATE=08003

What is Really Happening?

This error message occurs when you attempt to perform an action that requires a database connection, but no database connection has been established. Actions that require a database connection include issuing SQL statements (including those that access monitoring table functions), creating or altering objects (DDL), and querying or updating the database configuration without specifying a database in the command, among others.

How to Resolve

To resolve this, simply establish a database connection using the CONNECT statement. For more information on how to connect to a DB2 LUW database, see How to Connect to a DB2 Database.
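For example, from the DB2 command line processor (the database name SAMPLE and the user db2inst1 below are placeholders):

db2 connect to SAMPLE
db2 connect to SAMPLE user db2inst1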

Databases must be cataloged on the server you are on before you can connect to them. Creating a database automatically catalogs it, or you can explicitly catalog a remote database. For more information on cataloging a database, see How to Catalog A DB2 Database.
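As a sketch, cataloging a remote database typically looks like the following (the node name, host name, port, and database name are all placeholders):

db2 catalog tcpip node mynode remote dbserver.example.com server 50000
db2 catalog database sample at node mynode
db2 terminate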

If you have only one database on a DB2 instance and want DB2 to implicitly connect to it for any command requiring a connection, you can set the DB2 registry variable DB2DBDFT. This causes DB2 to implicitly establish a connection to the specified database for most commands that require a connection. If there is more than one database on an instance, you may want to avoid setting it, so that people do not think they are working with one database and accidentally work with another. I prefer not to set it in most situations.
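A minimal sketch, again assuming a database named SAMPLE; once DB2DBDFT is set, the first command that needs a connection establishes one implicitly:

db2set DB2DBDFT=SAMPLE
db2set -all
db2 "select current timestamp from sysibm.sysdummy1"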

The post DB2 LUW Error Message: SQL1024N appeared first on Xtivia.


Scott Hayes

Announcement: DB2 User Group Sydney (DUGS) Free Seminar 9th FEB

The next meeting of the Sydney DB2 User Group will be 9th February 2016 at 2pm at the historic Custom House, 2nd Floor Library. Afterward (5pm), the plan is to spend DUGS funds networking at http://www.tapavino.com.au/. During this DUGS meeting, Frank McEwen has a short presentation on the DB2 LUW package cache; then Scott Hayes, President & Founder of DBI, IBM GOLD Consultant and Champion, will present "Fearless Tuning" which covers...

(Read more)
 

February 03, 2016

Jack Vamvas

How to remove a TSA peer domain

I was installing TSA for a shared-storage DB2 clustering setup, and there were some disk error issues. Rather than spending time troubleshooting a half-completed installation, it was easier to drop the peer domain and start the TSA install again.

The decision on whether to reinstall or troubleshoot will depend on your circumstances. It's worth highlighting that the TSA uninstall procedure is easy. It then gives you the opportunity to complete any due diligence, bug fixing or other preparatory steps.

Once the disk error was fixed, these were the steps I completed to delete the TSA domain.


Step 1 - Check if the peer domain exists on the machine by executing the following command:

 root@myserver# lsrpdomain

Results:
Name        OpState RSCTActiveVersion MixedVersions TSPort GSPort
mydb2domain Online  2.5.3.2           No            12347  12348

If you execute lsrpdomain and there is no result, it means no peer domain is installed, and therefore there is no TSA install. You cannot have multiple domains installed.
 
Step 2 - Use the lsrpnode command to list all the nodes that are part of the domain. This is a typical result set:

[root@myserver.com ~]# lsrpnode
Name         OpState RSCTVersion
server1 Online  3.1.5.5
server2  Online  3.1.5.5

 

Step 3 - If you'd like to proceed in a logical progression, you can remove the node membership from the domain, as shown in the sketch after this list.

a) Identify every node using lsrpnode
b) Execute stoprpnode my_node_name to stop the node
c) Execute rmrpnode my_node_name to remove the node
d) Repeat until all nodes are removed
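As a minimal sketch for one node, using a node name from the Step 2 output above (repeat for the remaining node, substituting your own node names):

stoprpnode server1
rmrpnode server1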
Once you've confirmed all nodes are removed correctly, use the rmrpdomain command. rmrpdomain will attempt to remove the domain; there is also a force option:


rmrpdomain mydomain
rmrpdomain -f mydomain

You can use the lsrpdomain and lsrpnode commands to confirm that the actions are completed.
If you only needed to remove a node, check with lsrpnode.
If you need to check whether the whole domain is deleted, use lsrpdomain.


If you are attempting to remove a DB2 TSA installation that required you to use db2haicu as the interface to manage and complete the TSA install, then I'd recommend you complete the following step before you start the node and domain uninstall:


db2haicu -delete

 

 

Read More on TSA and TSA management

TSAMP maintenance and diagnostics - DBA DB2

TSAMP Cheat Sheet for DBA managing DB2 clustering - DBA DB2

How to backup the TSA SAMP policy details - DBA DB2

 

February 02, 2016


Leons Petrazickis

Datapalooza Seattle on Feb 9-11

On February 9 through 11, I'll be mentoring hackers and budding data scientists at Galvanize during Datapalooza Seattle. It should be a great conference covering topics like machine...

(Read more)

Frank Fillmore

Lunch Event March 3rd: #zIBM (Mainframes!) for Data Architects and Data Scientists

Kim has just returned from IBM’s annual “z Bootcamp” where they prepare their System z sellers for the upcoming year with a series of product updates, announcements and education.  Her...

(Read more)
Big Data University

Making Machine Intelligence Available To All

Alex Kern and Nikhil Srinivasan believe machine intelligence will shape the future of humanity. Advances in data science will soon begin to reshape existing industries, while paving the way for entirely new ones. Their thesis: rather than focus on model development, the two would rather create the supporting infrastructure, tooling, and marketplace to make these technologies more broadly accessible to the wider developer community.

 

By striving to democratize access to the tools of data science, individuals can focus on deriving insights from their data instead of building and operating complex machine learning pipelines. With Pavlov, Alex and Nikhil plan to reduce the operational complexity associated with data science, by providing a framework to design and develop the next generation of intelligent applications.

 

Pavlov was started out of their frustrations building applications using modern advances in deep learning. Nikhil spent five years at Harvard Medical School, where he recognized the challenges researchers faced in operating data processing pipelines. Alex spent time at NASA’s JPL and Apple building distributed machine learning infrastructure and realized these pain points personally.

 

I had an opportunity to sit down with the two to talk more about what they’re building. In the article, we explore questions ranging from what they’ve built to advice they have for others with an interest in this space.

 

What they’re building.

Pavlov is a framework that lets developers focus on the “magic” of machine learning: implementation and predictions rather than infrastructure. Rather than managing ad-hoc data pipelines, Pavlov provides the infrastructure layer that enables teams to quickly build, deploy, and scale models. The company hopes to tackle every layer in the digital intelligence stack, from proprietary hardware to human intelligence for supervised learning, allowing users to focus on application development and model optimization.

 

One of the bigger challenges with the evolving landscape is that it’s deeply rooted in academia: most of the innovation is being driven by researchers, creating a huge barrier for anyone interested in learning more. Industry jargon such as “regional convolutional neural networks” and the general lack of enterprise support for available software make data science more of a craft than a hard skill.

 

After going through the first YC Fellowship batch, the team has been laser-focused on delivering their machine intelligence offering to early enterprise customers, with an initial emphasis on computer vision solutions.

 

The problem they’re trying to solve.

Both commercial and academic research efforts are producing enormous amounts of data, yet people are still making critical decisions driven by intuition rather than data-driven insight. While significant efforts have been made building systems that can find the signal in the noise, very few have delivered robust solutions for computer vision problems; this is where Pavlov comes in. The team is building a toolkit that will allow users to build robust applications that leverage the latest advances, namely convolutional neural networks, to make sense of everything from satellite to medical imagery.

 

Hardest decision they’ve had to make.

While it’s clear machine intelligence is transforming the enterprise, one of the bigger challenges in this space is establishing product-market fit that supports clear business value despite how quickly the field is evolving. Their hardest decisions were ones where they turned down opportunities: as with any startup, your most valuable resources are your time and focus. This is especially critical for companies in this space, given an innovation cycle driven by research rather than by product.

 

Why they think this is the right time.

This is the right time for them because there are a lot of exciting developments in the field. With Google opening up TensorFlow, an open source software library for machine learning, and computer vision delivering performance improvements, Alex & Nikhil believe that this is a renaissance period for all things machine learning. There are a ton of companies that want to get into the consumer space; however, Alex and Nikhil see the opportunity within the enterprise space.

 

Their vision is to build the infrastructure and tooling that makes it easier for enterprise customers to become heavily invested in this space, and they hope to serve as an abstraction for the infrastructure and human intelligence layers. While they do not have any direct competitors, companies like Palantir and Orbital Insights come to mind when trying to find analogous data solutions; that said, Pavlov is unique in its primary focus as a full-stack computer vision solution.

 

You were part of the inaugural YC Fellowship class. What was the best question a Y Combinator Partner asked you?

When pitching their product to a technical partner during Y Combinator’s office hours, being asked to simplify was a great turning point for the two founders. Realizing that they had been embedded within the data science/machine learning community for so long that not even a technical partner could understand what they were saying allowed them to take a step back and pitch more simply, so they could adequately explain their target market and how they are positioning their business.

 

What keeps you up at night?

The number one thing that keeps the founders up at night is the fact that training these algorithms is quite expensive. To compound the issue, the current capabilities of cloud computing are not quite where they need to be for Pavlov to be successful. As a result, the co-founders will need to build their own computers and server farms, something they will have to learn along the way. Having their own hardware will allow for much quicker turnaround times for clients, which will prove essential in the long run when working on more time-sensitive contracts.

 

What will you be celebrating 1 year from now?

One year from now, the team hopes to have multiple paying customers with recurring revenue. By then they also hope to have been surprised by the different use cases customers have found for their toolkit. Most importantly, though, they want to deliver on what they’ve both envisioned for the product, and would like to feel confident that what they’re working on has the potential to make an impact in the field.

 

Any advice for aspiring data scientists?

The general advice that Alex & Nikhil have offered is that you should simply jump in and start getting involved with what’s happening in the data science space. People need to be aware that there’s not a lot of conventional wisdom right now, which means there is a lot of opportunity to learn and break things.

 


For those looking to get early access to the beta test, feel free to check out Pavlov. To further expand your knowledge of the data science world, our courses at Big Data University, as always, offer you free access to learn more about these concepts.

 

The post Making Machine Intelligence Available To All appeared first on Big Data University.


Craig Mullins

The Most Misunderstood Features of DB2 – Part 2: Optimize vs. Limited Fetch

Welcome to Part 2 in my on-going blog series on The Most Misunderstood Features of DB2. In Part 1 of the series we tackled the topic of locking, which IMHO is easily the most misunderstood feature of DB2 (probably of most DBMSes). Today's topic is a brief one, but one that I've found folks to be confused about. Namely, the difference between the OPTIMIZE FOR and FETCH x ROWS ONLY clauses. The...

(Read more)

Henrik Loeser

Parse shutting down, move your data

Parse shutting down This week Parse.com, Facebook’s Mobile Backend as a Service offering, surprised their users. The service will shut down next year and all users are asked to move on. The Parse...

(Read more)

DB2utor

DB2 Certification Preparation for IDUG NA 2016

It's not too early to plan on attending IDUG DB2 Tech Conference in Austin, Texas, May 23-26. If you are going to the conference, you should consider taking a DB2 certification test. One of the great benefits of attending IDUG is that IBM sponsors up to two certifications to help attendees get certified for DB2.
 

January 30, 2016


DB2Night Show News

DB2's GOT TALENT 2016 is calling YOU! We need contestants!

Based on feedback from prior years, we've made the contest simpler, less time consuming, and more fun. Instead of possible multiple callbacks, contestants only present ONE time, and every contestant...

...

DB2Night Replays

The DB2Night Show #170: DB2 LUW Data Security 102

@idbjorh Special Guest: Ian Bjorhovde, Principal Consultant at DataProxy LLC. DB2 LUW Data Security 102. 90% of our audience learned something! Security isn't sexy, but it is important, especially if you've been a victim of identity theft or endured a data breach. During this show, Ian Bjorhovde "The Master of DB2 Security" shares valuable information and advice on how to achieve a more secure DB2 database environment. Please learn from, and...

(Read more)

Robert Catterall

DB2 for z/OS: Thoughts on History and Archive Tables

I'll state up front that I'm using the terms "history table" and "archive" table not in a generic sense, but as they have technical meaning in a DB2 for z/OS context. A history table is paired with a "base" table that has been enabled for system-time temporal support (introduced with DB2 10 for z/OS), and an archive table goes with a base table that has been enabled for DB2-managed archiving (also known as "transparent archiving" -- a feature delivered with DB2 11). Last week, I posted to this...

(Read more)
 

January 29, 2016


Data and Technology

Inside the Data Reading Room – New Year 2016 Edition

Regular readers of my blog know that I periodically take the time to review recent data-related books that I have been reading. This post is one of those blogs! Today, I will take a quick look at...

(Read more)
Jack Vamvas

How to customise TSAMP start and stop scripts to make monitoring agents cluster aware

I’ve integrated Tivoli System Automation (TSA) and DB2. The solution is based around a two-node TSAMP setup, which is the “automation software”, running as part of a clustered Reliable Scalable Cluster Technology (RSCT) environment (RSCT is the “cluster software”).

Read more on Tivoli System Automation for Multiplatforms (SA MP) and the shared disk approach

TSAMP uses a set of automation scripts to manage DB2 resources, that is, to start, stop and monitor them. TSAMP responds to an unexpected change in state of individual resources (e.g. a DB2 instance failure), a server failure or crash, or a network/NIC-related outage.

To confirm the TSA integration with DB2, you can check the DB2 dbm configuration. If you’re using Linux, check:

db2 get dbm cfg | grep 'Cluster manager'

One of the challenges in integrating TSAMP with DB2 is that supporting services need to be made cluster-aware. Some examples are monitoring and backups.

In a non-clustered environment, there may be a monitoring agent installed that monitors DB2 as a single unit. In the cluster scenario, there is the added complication of correlating the state of one DB2 instance (Node 1) with the state of another DB2 instance (Node 2).

There are multiple approaches to this problem, and the approach depends on the monitoring platform. You may have an agentless remote monitoring system in place. The solution discussed here is for situations where a monitoring agent is installed on every OS.

For monitoring, the solution applied is to customise the TSA start and stop scripts. The TSA scripts are located in /usr/sbin/rsct/sapolicies/db2. They are refreshed at every DB2 version upgrade, so tight management is required.

As well as managing the TSA scripts at every DB2 version upgrade, there’ll be a requirement to manage monitoring agent upgrades. An agent upgrade will probably reset the defaults, including automatic startup at OS boot.

The basic principle is to take the monitoring agent out of the standard OS start/stop process, i.e. don't automatically start the DB2 monitoring agent when the OS starts. Instead, use the clustering software to manage the stop/start of the DB2 monitoring agent.

The files customised are db2V10_start.ksh and db2V10_stop.ksh. When the TSA cluster software is running and attempts to initiate db2start on Node 1, one of the scripts it will execute is db2V10_start.ksh. When there is a failover, one of the scripts TSA will use is db2V10_stop.ksh. These scripts can be exploited to stop and start various supporting services, such as monitoring.
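As a minimal sketch of the idea (the agent control script /opt/monitoring/bin/agent_ctl is a placeholder for whatever start/stop interface your monitoring product provides; the exact hook point within the TSA scripts will vary):

# added near the end of db2V10_start.ksh, after DB2 has been started on this node
/opt/monitoring/bin/agent_ctl start

# added near the top of db2V10_stop.ksh, before DB2 is stopped on this node
/opt/monitoring/bin/agent_ctl stop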

In the case of a server crash, i.e. db2V10_stop.ksh doesn't run, the server won't start the monitoring agent on startup, as you've already taken it out of the /etc/init.d section.

The next step is to look at correlating the different states of the Cluster Nodes and avoid false alerts. I’ll discuss this process in a future post.

I’ll also write a post about the monitoring which comes with RSCT. The RSCT monitoring supplies loads of detail on every aspect of the cluster state.

Read More on DB2 clustering and high availability

TSAMP maintenance and diagnostics

TSAMP Cheat Sheet for DBA managing DB2 clustering

 


Henrik Loeser

Combining Bluemix, Open Data on Tourism and Watson Analytics for some Friday Insight

Inbound and Outbound Tourism, Watson Analytics Yup, it is Friday again and the weekend is coming closer and closer. The Carnival or Fasnet/Fastnacht season is close to its peak and some school...

(Read more)
 

January 27, 2016


Dave Beulke

3 Critical Programming Performance Criteria

The new year always presents great opportunities after the holiday rush and code freeze-thaw of the holiday year-end. Unfortunately the code freeze-thaw can present wonderful opportunities for growth. Recently these three critical factors came to light during the analysis of performance aspects of...

(Read more)

Xtivia

Allocation of primary logs in db2

Recently, we noticed on a staging environment that the LOGPRIMARY setting was set to 35, yet the number of logs on the log path was only 21.

Number of primary log files                (LOGPRIMARY) = 35                         35
 Number of secondary log files               (LOGSECOND) = 45                         45


db2inst1# ls -ltr
total 13440704

-rw-------   1 db2inst1   db2iadm1       512 Dec  1 19:10 SQLLPATH.TAG
-rw-------   1 db2inst1   db2iadm1   327688192 Jan  6 12:18 S0063132.LOG
-rw-------   1 db2inst1   db2iadm1   327688192 Jan  6 13:32 S0063133.LOG
-rw-------   1 db2inst1   db2iadm1   327688192 Jan  6 13:36 S0063134.LOG
-rw-------   1 db2inst1   db2iadm1   327688192 Jan  6 14:36 S0063135.LOG
-rw-------   1 db2inst1   db2iadm1   327688192 Jan  6 14:55 S0063136.LOG
-rw-------   1 db2inst1   db2iadm1   327688192 Jan  6 15:51 S0063137.LOG
-rw-------   1 db2inst1   db2iadm1   327688192 Jan  6 16:13 S0063138.LOG
-rw-------   1 db2inst1   db2iadm1   327688192 Jan  6 18:02 S0063139.LOG
-rw-------   1 db2inst1   db2iadm1   327688192 Jan  7 11:28 S0063140.LOG
-rw-------   1 db2inst1   db2iadm1   327688192 Jan  7 14:17 S0063141.LOG
-rw-------   1 db2inst1   db2iadm1   327688192 Jan  8 13:19 S0063142.LOG
-rw-------   1 db2inst1   db2iadm1   327688192 Jan  9 05:06 S0063143.LOG
-rw-------   1 db2inst1   db2iadm1   327688192 Jan 11 09:26 S0063145.LOG
-rw-------   1 db2inst1   db2iadm1   327688192 Jan 11 09:26 S0063144.LOG
-rw-------   1 db2inst1   db2iadm1   327688192 Jan 11 10:25 S0063146.LOG
-rw-------   1 db2inst1   db2iadm1   327688192 Jan 11 10:51 S0063147.LOG
-rw-------   1 db2inst1   db2iadm1   327688192 Jan 11 11:17 S0063148.LOG
-rw-------   1 db2inst1   db2iadm1   327688192 Jan 11 11:39 S0063149.LOG
-rw-------   1 db2inst1   db2iadm1   327688192 Jan 16 17:37 S0063129.LOG
-rw-------   1 db2inst1   db2iadm1   327688192 Jan 18 06:59 S0063130.LOG
-rw-------   1 db2inst1   db2iadm1   327688192 Jan 21 13:06 S0063131.LOG

How was anyone able to connect to the database if DB2 could not allocate the number of logs required? Well, here’s what actually happened. When DB2 was initially started, there was plenty of space on the filesystem where the log path is located, so DB2 started fine and had 35 logs allocated. Eventually the space ran out (it is, after all, a staging environment), so DB2 stopped allocating more logs than the available space allowed. To understand this better, here’s what happens behind the scenes: when a log file fills up, DB2 has to copy that log file to the archive log directory and then rename it in order to allocate a new log. In this case, because there was no space to do that, DB2 simply continued functioning with the available resources. DB2 ignores that kind of error.

2015-10-21-16.33.04.258307-240 E1738A463          LEVEL: Event
/primary
  Block size        = 8192 bytes
  Total size        = 18169724928 bytes
  Free size         = 0 bytes
  Total # of inodes = 5664
  FS name           = 
  Mount point       = 
  FSID              = 18446744071563116545
  FS type name      = vxfs
  DIO/CIO mount opt = None
  Device type       = N/A
  FS type           = 0x2
CALLSTCK: (Static functions may not be resolved correctly, as they are resolved to the nearest symbol)
  [0] 0xC00000001444B740
2015-10-22-09.35.16.499947-240 I172033A432        LEVEL: Info
PID     : 27322                TID  : 94445       PROC : db2sysc 0
INSTANCE: db2inst1             NODE : 000         DB   : SAMPLE
EDUID   : 94445                EDUNAME: db2loggr (SAMPLE) 0
FUNCTION: DB2 UDB, data protection services, sqlpgInitRecoverable, probe:8210
MESSAGE : Not able to allocate all primary log files due to DISK FULL. This
          error is ignored.

So, please make sure that you have plenty of space allocated to the filesystem where the logs reside. If DB2 is already up and running, it won’t complain. But if you restart the DB2 engine without first freeing up space, you will run into the same problem until space is made available. Planning is key.
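As a quick check (a sketch; SAMPLE and /db2logs/primary are placeholders for your own database name and active log path):

db2 get db cfg for SAMPLE | grep -E 'LOGPRIMARY|LOGSECOND|Path to log files'
df -k /db2logs/primary
ls /db2logs/primary | grep -c '\.LOG$'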

The post Allocation of primary logs in db2 appeared first on Xtivia.


Willie Favero

What are your plans for DB2's Synonyms?

(Posted Monday, January 27, 2016) If you look up synonyms in IBM’s Knowledge Center, the very first thing you’ll see, the heading for the entry, should grab your attention. In very bold print, it states…

(Read more)
 

January 26, 2016


Craig Mullins

The Most Misunderstood Features of DB2 – Part 1: Locking

Today I am introducing a new series of blog posts here on misunderstood DB2 features and functions. But before I start this blog post I want to emphasize that this is just my opinion. I’m sure many of you have your own ideas of the DB2 features that are most misunderstood. But please, take a moment to consider my thoughts here… and then share your own in the comments section below! Locking! One...

(Read more)

DB2utor

Recent XML Enhancements

In general, XML enhancements in DB2 11 involve improving native XML language support using XQuery. This is designed to save developers the effort of converting to SQL/XML syntax. While differences remain between DB2 for LUW and DB2 for z/OS, improvements continue on the mainframe side.
 

January 25, 2016


Henrik Loeser

MySQL-Style LIMIT and OFFSET in DB2 Queries

I was recently asked whether DB2 supports MySQL-style syntax to page through query result sets. The good news is that DB2 supports LIMIT and OFFSET in addition to its own syntax. The only drawback is...

(Read more)

Henrik Loeser

The Cloud, Mood-Enhancing Substances, World Economic Forum, and More

DataWorks and Connect & Compose Right now, the Winter sky is mostly covered by some low hanging clouds, giving way only for some random rays of sun. The past weeks I have been plagued by a cold...

(Read more)

Henrik Loeser

A Cache of Identities and a Sequence of Events

Bits and Bytes of Sequences Recently I received an interesting question: DB2 and other database systems have a feature to automatically generate numbers in a specified order. For DB2 this generator...

(Read more)
 

January 22, 2016


DB2Night Replays

The DB2Night Show #169: DB2 LUW WLM (Workload Manager) Monitoring Features

@globomike Special Guest: Michael Tiefenbacher, Manager Business Services & Data Management Specialist at ids-System. Facilitate WLM Functions and Tools to Improve Your DB2 LUW Monitoring. 100% of our audience learned something! If you don't have tools, you need to be clever about monitoring DB2 with built-in monitoring capabilities. During this show, Michael does a great job of explaining how DB2 Workload Manager (WLM) features and...

(Read more)
Jack Vamvas

TSAMP maintenance and diagnostics

You’ve set up your TSAMP environment on Linux with DB2, tested a few failover scenarios, and everything is looking positive.

One of the key things about TSAMP and DB2 is maintaining the environment: finding any small problems before they become big problems.

For example, monitoring message logs and backing up copies of configuration and state files can save you lots of work when troubleshooting.

These are some guidelines to follow, which will give you a firm basis. There are different ways of using the information, but I like to create reports, review them, and apply any fixes.

1. Before making any SAMP change, save a backup copy of the current SAMP policy. Don't forget to also back up the SAMP policy after you've made the change. Compare the current SAMP policy to the backup SAMP policy every time there is an HA incident. The command to save the SAMP policy is on How to backup the TSA SAMP policy details.
2. Investigate every process id which blocks or interferes with the TSAMP commands.
3. Further to point 1, maintain backup copies of db2nodes.cfg and db2ha.sys. db2nodes.cfg is normally found in the $INSTHOME/sqllib folder; db2ha.sys is normally found in $INSTHOME/sqllib/cfg/db2ha.sys.
4. Save backup copies of db2pd -ha output before and after every SAMP change (see the sketch after this list). Compare the current db2pd outputs to the backup db2pd outputs every time there is an HA incident. db2pd troubleshooting guide - DBA DB2
5. Save backup copies of the samdiag outputs. samdiag gives detailed information about the resources and requires root authority.
6. TSA actively writes to log files. Regularly monitor the History file, which logs commands sent to TSA: /var/ct/IBM.RecoveryRM.log
7. The Linux error logs display the output of the start, stop and monitor scripts. When diagnosing an outage or similar, use the Linux error logs to gather information. They are normally found in /var/log/messages.
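As a minimal sketch of the before/after snapshots mentioned in points 3 and 4 (the /backup/tsamp directory is just a placeholder):

mkdir -p /backup/tsamp
cp $INSTHOME/sqllib/db2nodes.cfg /backup/tsamp/
cp $INSTHOME/sqllib/cfg/db2ha.sys /backup/tsamp/
db2pd -ha > /backup/tsamp/db2pd_ha_before.out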

For more detailed information on TSAMP commands read TSAMP Cheat Sheet for DBA managing DB2 clustering - DBA DB2
