planetDB2

  planetDB2 is an aggregator of blogs about the IBM DB2 database server. We combine and republish posts by bloggers around the world. Email us to have your blog included.
 

April 18, 2014


Willie Favero

APAR Friday: Today it's about stats, WLM_REFRESH, and storage management

(Posted on Friday, April 18, 2014) Last year, APAR PM88804 changed the behavior of REALSTORAGE_MANAGEMENT to solve a CPU usage issue. APAR PM99575 now reverses that change, restoring the original behavior. PM99575: CHANGE THE DISCARDDATA LOGIC ...

(Read more)

DB2Night Replays

The DB2Night Show #133: Why Low Cardinality Indexes (STINK), Ember Crooks

Special Guest: Ember Crooks, Rosetta. Topic: Why Low Cardinality Indexes Negatively Impact Performance. 100% of our studio audience learned something! Based on her popular IBM developerWorks article and blogs, IBM DB2 GOLD Consultant Ember Crooks brings her articles on index cardinality to life via her excellent IDUG presentation! Watch our replay for details...

(Read more)
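For a flavor of the argument, here is a minimal sketch; the table, column, and index names are hypothetical and not taken from Ember's presentation. An index on a column with only a few distinct values filters out very few rows, so the optimizer will often ignore it in favor of a scan, while the index must still be maintained on every insert, update, and delete.

    -- Hypothetical example: STATUS holds only a handful of distinct
    -- values, so this index provides little filtering power and
    -- DB2's optimizer will frequently prefer a table scan over it.
    CREATE INDEX ix_orders_status ON orders (status);

    -- A low-cardinality column usually earns its keep only as a
    -- trailing key behind a selective leading column:
    CREATE INDEX ix_orders_cust_status ON orders (customer_id, status);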
 

April 17, 2014


Ember Crooks

Ember on DB2Night Show on April 18th!

Thanks to a late cancellation on the DB2Night Show, I’ll be presenting on Friday, April 18. I’ll be talking about why low-cardinality indexes negatively impact performance. It is the same...

...

Willie Favero

DB2 for z/OS is not affected by Heartbleed bug

(Posted Friday, April 17, 2014) As if there could be any doubt, here's the official word.... IBM DB2 for z/OS is not affected by the OpenSSL Heartbleed vulnerability (CVE-2014-0160) The flash states that "DB2 for z/OS in all editions and all platforms is NOT vulnerable to the...

(Read more)

Data and Technology

The Problem with Prediction

Predicting the future is a messy business. I try to avoid making predictions about the future of technology for many reasons. First off, nobody can see into the future, no matter what some fortune...

(Read more)

Dave Beulke

Three Ways to Avoid Big Data Chaos

Companies are always trying everything to gain a competitive edge, but with dysfunctional data procedures, management/CIO turnover, and lean business profits, new data projects face unprecedented difficulties. With the plethora of IT trends, directions and technologies, cloud platforms, big data,...

(Read more)

DB2 Guys

Achieving High Availability with PureData System for Transactions


Kelly Schlamb, DB2 pureScale and PureData Systems Specialist, IBM

A short time ago, I wrote about improving IT productivity with IBM PureData System for Transactions and I mentioned a couple of new white papers and solution briefs on that topic.  Today, I’d like to highlight another one of these new papers: Achieving high availability with PureData System for Transactions.

I’ve recently been meeting with a lot of different companies and organizations to talk about DB2 pureScale and PureData System for Transactions, and while there’s a lot of interest and discussion around performance and scalability, the primary reason that I’m usually there is to talk about high availability and how they can achieve higher levels than what they’re seeing today. One thing I’m finding is that there are a lot of different interpretations of what high availability means (and I’m not going to argue here over what the correct definition is). To some, it’s simply a matter of what happens when some sort of localized unplanned outage occurs, like a failure of their production server or a component of that server. How can downtime be minimized in that case?  Others extend this discussion out to include planned outages, such as maintenance operations or adding more capacity into the system. And others will include disaster recovery under the high availability umbrella as well (while many keep them as distinctly separate topics — but that’s just semantics). It’s not enough that they’re protected in the event of some sort of hardware component failure for their production system, but what would happen if the entire data center was to experience an outage? Finally (and I don’t mean to imply that this is an exhaustive list — when it comes to keeping the business available and running, there may be other things that come into the equation as well), availability could also include a discussion on performance. There is typically an expectation of performance and response time associated with transactions, especially those that are being executed on behalf of customers, users, and business processes. If a customer clicks a button on a website and it doesn’t come back quickly, it may not be distinguishable from an outage and the customer may leave that site, choosing to go to a competitor instead.

It should be pointed out that not every database requires the highest levels of availability. It might not be a big deal to an organization if a particular departmental database is offline for 20 minutes, or an hour, or even the entire day. But there are certainly some business-critical databases that are considered “tier 1” that do require the highest availability possible. Therefore, it is important to understand the availability requirements that your organization has.  But I’m likely already preaching to the choir here and you’re reading this because you do have a need and you understand the ramifications to your business if these needs aren’t met. With respect to the companies I’ve been meeting with, just hearing about what kinds of systems they depend on from both an internal and external perspective, and what it means to them if there’s an interruption in service, has been fascinating.  Of course, I’m sympathetic to their plight, but as a consumer and a user I still have very high expectations around service. I get pretty mad when I can’t make an online trade, check the status of my travel reward accounts, or even order a pizza online, especially when I know what those companies could be doing to provide better availability to their users.  :-)

Those things I mentioned above — high availability, disaster recovery, and performance (through autonomics) — are all discussed as part of the paper in the context of PureData System for Transactions. PureData System for Transactions is a reliable and resilient expert integrated system designed for high availability, high throughput online transaction processing (OLTP). It has built-in redundancies to continue operating in the event of a component failure, disaster recovery capabilities to handle complete system unavailability, and autonomic features to dynamically manage utilization and performance of the system. Redundancies include power, compute nodes, storage, and networking (including the switches and adapters). In the case of a component failure, a redundant component keeps the system available. And if there is some sort of data center outage (planned or unplanned), a standby system at another site can take over for the downed system. This can be accomplished via DB2's HADR feature (remember that DB2 pureScale is the database environment within the system) or through replication technology such as Q Replication or Change Data Capture (CDC), part of IBM InfoSphere Data Replication (IIDR).
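For readers who haven't used DB2's HADR feature, here is a minimal sketch of the commands involved. The database name, hostnames, and ports below are placeholders, and a real setup (particularly with pureScale) has additional prerequisites not shown, such as HADR_REMOTE_INST and log archiving configuration.

    -- On the standby server, after restoring a backup of the database:
    UPDATE DB CFG FOR sample USING HADR_LOCAL_HOST standby.example.com
    UPDATE DB CFG FOR sample USING HADR_REMOTE_HOST primary.example.com
    UPDATE DB CFG FOR sample USING HADR_LOCAL_SVC 4001
    UPDATE DB CFG FOR sample USING HADR_REMOTE_SVC 4000
    START HADR ON DB sample AS STANDBY

    -- On the primary server (mirror-image configuration):
    UPDATE DB CFG FOR sample USING HADR_LOCAL_HOST primary.example.com
    UPDATE DB CFG FOR sample USING HADR_REMOTE_HOST standby.example.com
    UPDATE DB CFG FOR sample USING HADR_LOCAL_SVC 4000
    UPDATE DB CFG FOR sample USING HADR_REMOTE_SVC 4001
    START HADR ON DB sample AS PRIMARY

Once both ends are started, the standby continuously replays the primary's logs and a TAKEOVER HADR command (or the cluster manager) switches roles during an outage.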

Just a reminder that the IDUG North America 2014 conference will be taking place in Phoenix next month from May 12-16. Being in a city that just got snowed on this morning, I’m very much looking forward to some hot weather for a change. Various DB2, pureScale, and PureData topics are on the agenda. And since I’m not above giving myself a shameless plug, come by and see me at my session: A DB2 DBA’s Guide to pureScale (session G05). Click here for more details on the conference. Also, check out Melanie Stopfer’s article on IDUG.  Hope to see you there!


 

April 16, 2014


DB2Night Show News

18 APR 10a CDT: Update - New Topic on DB2 LUW Indexes with Ember Crooks

Attend Episode #133 of The DB2Night Show™ to learn "Why Low Cardinality Indexes Negatively Impact Performance". Ember Crooks, IBM DB2 GOLD Consultant and Sr. Director at Rosetta, replaces Gopi...

...

Willie Favero

Survey: Understanding the business applications running on System z (Mainframe)

(Posted Friday, April 16, 2014) Help us (IBM) to better understand the business applications you are (or have been) running on System z, your IBM Mainframe, by completing a short 5-minute survey! This survey has been designed to help IBM better understand how your IBM System z inv...

(Read more)

DB2 Guys

Fraud detection? Not so elementary, my dear.


Radha Gowda, Product Marketing Manager, DB2 and related offerings

Did you know that fraud and financial crime has been estimated at over $3.5 trillion annually [1]? That identity theft alone cost Americans over $24 billion, i.e., $10 billion more than all other property crimes [2]? And that 70% of all companies have experienced some type of fraud [3]?

While monetary loss due to fraud is significant, the loss of reputation and trust can be even more devastating. In fact, according to a 2011 study by Ponemon Institute, organizations lose an average of $332 million in brand value in the year following a data breach. Unfortunately, fraud continues to accelerate due to advances in technology, organizational silos, lower risks of getting caught, weak penalties, and economic conditions. In this era of big data, fraud detection needs to go beyond traditional data sources, i.e., not just transaction and application data, but also machine, social, and geospatial data for greater correlation and actionable insights. The only way you can sift through vast amounts of structured and unstructured data and keep up with the evolving complexity of fraud is through smarter application of analytics to identify patterns, construct fraud models, and conduct real-time detection of fraudulent activity.

 IBM Watson Foundation portfolio for end-to-end big data and analytics needs


While IBM has an impressive array of offerings addressing all your big data and analytical needs, our focus here is on how DB2 solutions can help you develop and test fraud models, score customers for fraud risk, and conduct rapid, near-real-time analytics to detect potential fraud.  You have the flexibility to choose the type of solution that best fits your needs – select software solutions to take advantage of your existing infrastructure or choose expert-integrated appliance-based solutions for simplified experience and fast time to value.

Highly available and scalable operational systems for reliable transaction data

DB2 for Linux, UNIX and Windows software is optimized to deliver industry-leading performance across multiple workloads – transactional, analytic and operational analytic – while lowering administration, storage, development, and server costs. DB2 pureScale, with its cluster-based, shared-disk architecture, provides application-transparent scalability beyond 100 nodes, helps achieve failover between two nodes in seconds, and offers business continuity with built-in disaster recovery over distances of a thousand kilometers.

IBM PureData System for Transactions, powered by DB2, is an expert integrated system of server, storage, network, and tools selected and tuned specifically for the demands of high-availability, high-throughput transaction processing, so you do not have to research, purchase, install, configure, and tune the different pieces to work together. With its pre-configured topology and database patterns, you can set up high availability cluster instances and database nodes to meet your specific needs and deploy the same day rather than spending weeks or months. As your business grows, you can add new databases in minutes and manage the whole system using its intuitive system management console.

Analytics for fraud detection

DB2 Warehouse Analytics

DB2 advanced editions offer capabilities such as online analytical processing (OLAP), continuous data ingest, data mining, and text analytics that are well-suited for real-time enterprise analytics and can help you extract structured information out of previously untapped business text. Their business value in enabling fraud detection is immense.

IBM PureData System for Operational Analytics, powered by DB2, helps you deliver near-real-time insights with continuous data ingest and immediate data analysis. It is reliable, scalable, and optimized to handle thousands of concurrent operational queries with outstanding performance. You can apply fraud models to identify suspicious transactions while they are in progress, not hours later. This can apply across any industry segment, including financial services, health care, insurance, retail, manufacturing, and government services. PureData System for Operational Analytics helps not just with real-time fraud detection, but also with cross-sell and up-sell, identifying customer preferences, anticipating their behavior, and predicting the optimum offer or service in real time.

DB2 with BLU Acceleration, available in advanced DB2 editions, uses advanced in-memory columnar technologies to help you analyze data and generate new insights in seconds instead of days. It can provide performance improvements ranging from 10x to 25x and beyond, with some queries achieving 1,000 times improvement [4], for analytical queries with minimal tuning. DB2 with BLU Acceleration is extremely simple to deploy and provides good out-of-the-box performance for analytic workloads. From a DBA's perspective, you simply create a table, load, and go. There are no secondary objects, such as indexes or MQTs, that need to be created to improve query performance.
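To make "create a table, load, and go" concrete, here is a minimal sketch; the table and file names are illustrative. DB2_WORKLOAD=ANALYTICS is the registry setting that makes column organization the default and auto-configures the instance for analytics.

    -- Run from the shell before creating the database:
    db2set DB2_WORKLOAD=ANALYTICS

    -- Create a column-organized table; no indexes or MQTs needed:
    CREATE TABLE sales_fact (
        sale_date DATE,
        store_id  INTEGER,
        amount    DECIMAL(12,2)
    ) ORGANIZE BY COLUMN;

    -- Load and go:
    LOAD FROM sales.csv OF DEL INSERT INTO sales_fact;

    SELECT store_id, SUM(amount) AS total
    FROM sales_fact
    GROUP BY store_id;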

DB2 with BLU Acceleration can handle terabytes of data to help you conduct customer scoring across your entire customer data set and develop and test fraud models that explore a full range of variables based on all available data. Sometimes creating a fraud model may involve looking at hundreds of terabytes of data, where IBM® PureData™ System for Analytics would fare better. Once a fraud model is created, you can use DB2 with BLU Acceleration to apply it to every incoming transaction for speed-of-thought insight.

IBM Cognos® BI

DB2 advanced editions come with five user licenses for Cognos BI, which enable information consumers to access and analyze the data they need to make decisions that lead to better business outcomes. Cognos BI with Dynamic Cubes, an in-memory accelerator for dimensional analysis, enables high-speed interactive analysis and reporting over terabytes of data. DB2 with BLU Acceleration integrated with Cognos BI Dynamic Cubes offers fast-on-fast performance for all your BI needs.

With the array of critical challenges facing financial institutions today, the smarter ones are those that successfully protect their core asset: data. IBM data management solutions help you integrate information and generate new insights to detect and mitigate fraud. We invite you to explore and experience DB2 and the rest of the Watson Foundation offerings made with IBM.

Stay tuned for the second part of this blog that will explore the product features in detail.

[1] ACFE 2012 Report to the Nations
[2] BJS 2013 report on identity theft
[3] Kroll 2013/2014 Global Fraud Report

[4] Based on internal IBM tests of analytic workloads comparing queries accessing row-based tables on DB2 10.1 vs. columnar tables on DB2 10.5. Results not typical. Individual results will vary depending on individual workloads, configurations and conditions, including size and content of the table, and number of elements being queried from a given table.

Follow Radha on Twitter @rgowda

 


 

April 15, 2014


Ember Crooks

DB2 Basics: Capitalization

When does case matter in DB2? Well, it doesn't unless it does. Nice and clear, huh? When Text Must Be in the Correct Case: text must be in the correct case whenever it is part of a literal...

...
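As a quick illustration of the rule the excerpt is getting at (the table and column names here are hypothetical): DB2 folds ordinary, unquoted identifiers to uppercase, but the contents of a string literal are compared exactly as written.

    -- Identifiers: these are equivalent, because unquoted names are
    -- folded to uppercase before being resolved.
    SELECT lastname FROM employee;
    SELECT LASTNAME FROM EMPLOYEE;

    -- Literals: these can return different rows, because the string
    -- comparison is case-sensitive.
    SELECT * FROM employee WHERE lastname = 'Smith';
    SELECT * FROM employee WHERE lastname = 'SMITH';

    -- Delimited identifiers are the exception: "employee" (quoted,
    -- lowercase) and EMPLOYEE name two different objects.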

Chris Eaton

How Data Skipping works in BLU Acceleration - Part 2

In my first part on data skipping I gave you the reasons for, and a short example of, the benefits of data skipping. In this blog posting I will describe how the synopsis table works and how it is used by DB2.

 

Data skipping in BLU is made possible by storing "metadata" about the various values in a given column so that at runtime we can skip over portions of the
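The excerpt is cut off here, but for the curious, BLU's synopsis tables are real catalog objects you can inspect yourself. A small sketch (this only lists them; the names combine SYN, a timestamp, and the base table name):

    -- Synopsis tables live in the SYSIBM schema; each row summarizes
    -- a range of rows in the corresponding column-organized user
    -- table (for example, minimum and maximum column values), which
    -- is what lets DB2 skip extents that cannot match a predicate.
    SELECT tabschema, tabname
    FROM syscat.tables
    WHERE tabschema = 'SYSIBM'
      AND tabname LIKE 'SYN%';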


DB2Night Show News

18 APR 10a CDT: Make DB2 LUW Tuning Less Taxing

Attend Episode #133 of The DB2Night Show™ to learn about DB2 V10.1 performance enhancements with guest Gopi Attaluri from the IBM Silicon Valley Lab. Mr. Attaluri will focus on Index Jump...

...
 

April 14, 2014


Craig Mullins

Aggregating Aggregates Using Nested Table Expressions

Sometimes when you are writing your SQL to access data you come across the need to work with aggregates. Fortunately, SQL offers many simple ways of aggregating data. But what happens when you uncover the need to perform aggregations of aggregates? What does that mean? Well, consider an example. Let's assume that you want to compute the average of a sum. This is a reasonably common...

(Read more)
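A minimal sketch of the technique Craig is describing (the table and column names are illustrative, not taken from his post): compute the inner aggregate in a nested table expression, then aggregate over its result in the outer query.

    -- Average departmental payroll: the nested table expression sums
    -- salaries per department; the outer query averages those sums.
    SELECT AVG(dept_total) AS avg_dept_payroll
    FROM (SELECT workdept, SUM(salary) AS dept_total
          FROM emp
          GROUP BY workdept) AS dept_sums;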

Frank Fillmore

IBM DB2 Analytics Accelerator (#IDAA) Workshop a Success!

Last week in Baltimore The Fillmore Group conducted a deep-dive technical workshop on IBM DB2 Analytics Accelerator.  We covered Basic and Advanced hands-on labs with the support of the IBM...

(Read more)

Ember Crooks

Journey of a DB2's Got Talent Winner

I've been encouraged by a few to tell my story. How I was encouraged into DB2's Got Talent 2014, what it was like, decisions I made on the fly, any advice, and what I learned from my...

...
 

April 11, 2014


Data and Technology

The Ethical DBA?

Today’s posting is a re-blog of a post I wrote several years ago for another blog (that has since been discontinued). But I think the subject matter is important enough that it warrants...

(Read more)

DB2Night Replays

The DB2Night Show #132: All About IDUG & DB2's GOT TALENT Winners!

Hover your mouse over finalist photos for more information and links to LinkedIn profiles! All About IDUG: Phoenix, AZ, May 12-16, 2014. DB2's GOT TALENT Winners Announced! 100% of our audience learned something! IDUG volunteers Bob Vargo and Terry Johnson shared with us important news, updates, and tips for the upcoming IDUG Conference. There's a new process this year to sign up for the Thursday Dine-Arounds with your favorite speakers! ...

(Read more)

Henrik Loeser

DB2 Quiz: Find the website for this screenshot

Today's DB2 quiz is not that technical, but it requires that you are up-to-date on IBM's offerings for DB2. What is the context for this screenshot? On which website did I take it? Probably easy...

(Read more)
 

April 10, 2014


Willie Favero

APAR Friday: LOB insert performance issue resolved

(Posted Friday, April 10, 2014) This post is going to be pretty short this afternoon. I've been talking with a lot more people lately that are messing around with LOBs. So when I saw this APAR show up that takes care of the reuse of free space, I thought ...

(Read more)

DB2 Guys

Improve IT Productivity with IBM PureData System for Transactions


Kelly Schlamb, DB2 pureScale and PureData Systems Specialist, IBM

I'm a command line kind of guy, always have been. When I'm loading a presentation or a spreadsheet on my laptop, I don't open the application or the file explorer and work my way through it to find the file in question and double-click the icon to open it. Instead, I open a command line window (one of the few icons on my desktop), navigate to the directory where I know the file is (or will do a command line file search to find it) and I'll execute/open the file directly from there. When up in front of a crowd, I can see the occasional look of wonder at that, and while I'd like to think it's them thinking "wow, he's really going deep there… very impressive skills", in reality it's probably more like "what is this caveman thinking… doesn't he know there are easier, more intuitive ways of accomplishing that?!?"

The same goes for managing and monitoring the systems I've been responsible for in the past. Where possible, I've used command line interfaces, I've written scripts, and I've visually pored through raw data to investigate problems. But inevitably I'd end up doing something wrong, like miss a step, do something out of order, or miss some important output, leaving things not working or not performing as expected. Over the years, I've considered that part of the fun and challenge of the job. How do I fix this problem? But nowadays, I don't find it so fun. In fact, I find it extremely frustrating. Things have gotten more complex and there are more demands on my time. I have much more important things to do than figure out why the latest piece of software isn't interacting with the hardware or other software on my system in the way it is supposed to. When I try to do things on my own now, any problem is immediately met with an "argh!" followed by a Google search, hoping to find others who are trying to do what I'm doing and have a solution for it.

When I look at enterprise-class systems today, there’s just no way that some of the old techniques of implementation, configuration, tuning, and maintenance are going to be effective. Systems are getting larger and more complex. Can anybody tell me that they enjoy installing fix packs from a command line or ensuring that all of the software levels are at exactly the right level before proceeding with an installation of some modern piece of software (or multiple pieces that all need to work together, which is fairly typical today)? Or feel extremely confident in getting it all right? And you’ve all heard about the demands placed on IT today by “Big Data”. Most DBAs, system administrators, and other IT staff are just struggling to keep the current systems functioning, not able to give much thought to implementing new projects to handle the onslaught of all this new information. The thought of bringing a new application and database up, especially one that requires high availability and/or scalability, is pretty daunting. As is the work to grow out such a system when more demands are placed on it.

It’s for these reasons and others that IBM introduced PureSystems. Specifically, I’d like to talk here about IBM PureData System for Transactions. It’s an Expert Integrated System that is designed to ensure that the database environment is highly available, scalable, and flexible to meet today’s and tomorrow’s online transaction processing demands. These systems are a complete package and they include the hardware, storage, networking, operating system, database management software, cluster management software, and the tools. It is all pre-integrated, pre-configured, and pre-tested. If you’ve ever tried to manually stand up a new system, including all of the networking stuff that goes into a clustered database environment, you’ll greatly appreciate the simplicity that this brings.

The system is also optimized for transaction processing workloads, having been built to capture and automate what experts do when deploying, managing, monitoring, and maintaining these types of systems. System administration and maintenance is all done through an integrated systems console, which simplifies a lot of the operational work that system administrators and database administrators need to do on a day-to-day basis. What? Didn’t I just say above that I don’t like GUIs? No, I didn’t quite say that. Yeah, I still like those opportunities for hands-on, low-level interactions with a system, but it’s hard not to appreciate something that is going to streamline everything I need to do to manage a system and at the same time keep my “argh” moments down to a minimum. The fact that I can deploy a DB2 pureScale cluster within the system in about an hour and deploy a database in minutes (which, by the way, also automatically sets it up for performance monitoring) with just a few clicks is enough to make me love my mouse.

IBM has recently released some white papers and solution briefs around this system, and a couple of them speak to the same points I mentioned above. To see how the system can improve your productivity and efficiency, allowing your organization to focus on the more important matters at hand, I suggest you give them a read:

Improve IT productivity with IBM PureData System for Transactions solution brief
Four strategies to improve IT staff productivity white paper

The four strategies described in these papers, which speak to the capabilities of PureData System for Transactions, are:

  • Simplify and accelerate deployment of high availability clusters and databases
  • Streamline systems management
  • Reduce maintenance time and risk
  • Scale capacity without incurring downtime

I suspect that I won’t be changing my command line and management/maintenance habits on my laptop and PCs any time soon, but when it comes to this system, I’m very happy to come out of my cave.



Ember Crooks

Last Chance to Vote in DB2's Got Talent!

Have you enjoyed posts from this blog? Have they helped you? Now is your chance to pay it forward. Go vote in DB2's Got Talent and help YOUR favorite competitor win. I'm voting for...

...

Willie Favero

April 2014 (RSU1403) service package has been tested and is now available

(Posted Friday, April 10, 2014) Testing for RSU service package RSU1403 is now complete. This RSU closes out 2013 and starts with January 2014. The April 2014 (1st Quarter) "Quarterly Report" (118 KB PDF file) contains ALL service through the end of December 2013 not already marked RSU. This...

(Read more)
 

April 08, 2014


Scott Hayes

FREE IDUG Phoenix AZ Exhibit Hall Pass

Will you be in the Phoenix AZ area during May 13-15? Are you a DB2 professional? DBI Software invites you to join us, free of charge, in the IDUG Exhibit Hall, booth #105, with this FREE PASS! Check out the latest DB2 solutions and enjoy magic performed by professional magician Frank Velasco at DBI's Booth #105!

(Read more)

Susan Visser

The Rise of the Information Producer

 

As part of a continuing series, Claudia Imhoff has created a paper that describes the impact the emerging Information Producer role can have on warehouse environments and IT organizations. 

In a recent blog, Rachel Bland of IBM sees that the Information Producer role in organizations has the potential to cause angst for IT. The Information Producer's need for self-service must be balanced against other needs such as governance, system stability and performance, and the need to sustain services for other end users.

To address competing needs such as these, IT can look to innovative technologies that speed deployment, improve query performance, simplify maintenance, and offer affordability.

One IBM advantage is having a broad perspective and a portfolio of products that includes the IBM Business Intelligence Pattern with BLU Acceleration. Aligning with these objectives, this pattern offering can help to reduce the burden on IT and provide every user who interacts with the system with a "speed of thought" response.

Read more of Rachel's blog: http://www.ibmbluhub.com/info-producer/

 

Thanks to Cindy for sharing!

 

Susan


DB2Night Show News

Fri 11 APR 10am CDT: ALL ABOUT IDUG and DB2's GOT TALENT 2014 Winners!

Incredible! Once again, we are reminded that EVERY VOTE COUNTS! As of 1pm CDT 8 April 2014, over 1,054 votes have been cast and there is ONLY ONE vote difference between 1st place and 2nd place! ...

...

Ember Crooks

IBM DB2 Certification – A Comprehensive Guide as of Today

Nearly every DB2 conference I go to has me thinking about DB2 certification. That is probably because most conferences include free or reduced cost certification testing. I have covered this topic...

...

DB2utor

DB2 Subsystem Tuning Tips

Many, many years ago, I was a DB2 systems programmer. As part of my training, I was sent to an IBM DB2 systems tuning course....
