 


  planetDB2 is an aggregator of blogs about the IBM DB2 database server. We combine and republish posts by bloggers around the world. Email us to have your blog included.
 

April 23, 2014


Data and Technology

Surrogate Keys or Natural Keys?

If you’ve worked with relational database systems for any length of time, you’ve probably participated in a discussion (argument?) about the topic of today’s blog post… surrogate keys. A...

(Read more)

Willie Favero

Please attend my DB2Night Show webinar this Friday (Apr 25)

(Posted on Wednesday, April 23, 2014) Today’s post is a short announcement (reminder, plea) that Martin Hubel and Scott Hayes have been kind enough to invite me to speak at the next DB2Night Show webinar, scheduled for Friday, April 25 at 10:00 AM Central (8:00 AM Pacific, 11:0...

(Read more)
 

April 22, 2014


Susan Visser

Data Management at IBM Impact

IBM Impact 2014 is the place to learn how to use disruptive technologies like Cloud, Big Data, and Mobile to create faster, more adaptive and secure solutions to overcome challenges and thrive in the digital economy.

 

The conference takes place at the Venetian Hotel in Las Vegas, April 27 - May 1.

There is much taking place at Impact 2014 for those of you in the Data Management business.  Attend these sessions to help your organization get faster answers and breakthrough performance, with exceptional time-to-value.

 

 

From our Executives:

On Monday, 4:00-5:15 pm, join the keynote "Give Your Application an Unfair Advantage" featuring Beth Smith and Leon Katsnelson in Palazzo L.  You’ll get a practical perspective on how to turn the world of insight into a weapon that gives your organization and applications an unfair advantage.

 

 

On Tuesday, 8:30am - 10:00am, the General Session presentation is “Made with IBM” and features Bob Picciano.  It takes place in Level 2, Hall B.

You will learn how to drive better engagement through deeper insight and smarter applications, and how to embrace open innovation to put data to work for your organization.

 

At the Expo Theater:

 

We’ve carefully chosen two of our most exciting topics to share with you:

 

Internet of Things: Choose an Intelligent Database by Fred Ho

 

Expo Theater, Session #1, Sunday Apr 27, 7:00 pm

The "Internet of Things" refers to the growing number of devices and sensors that communicate and interact via the Internet, offering businesses new customers and revenue opportunities. Harnessing data from billions of connected devices lies in the ability to capture, store, access and query multiple data types seamlessly and use that data in meaningful ways.  Attend this session to see the Internet of Things in action with a demo of this end-to-end solution from IBM and Shaspa.

 

Data Warehousing for everyone with BLU Acceleration by Adam Ronthal

 

Expo Theater, Session #2, Monday Apr 28, 7:00 pm

IBM’s BLU Acceleration is a game-changer for data warehousing and analytics. When paired with cloud infrastructure and business models, BLU Acceleration opens up the world of analytics to clients looking to benefit from business intelligence (BI) technology without lengthy project approval times.

 

For Fun:

 

Stop by the BLU Experience, located in the Social Impact Lounge, Sands Foyer.  Get a BLU tattoo to show your colors!  You can also take a "selfie" and upload it to Facebook or Twitter so it "doesn't have to stay in Vegas!"  Meet with our BLU Ambassadors onsite to get further details.

For Sessions:

I’ve pulled out these two sessions related to big data.  Unfortunately, they occur at the same time, so you have to choose which one you want to attend.

Getting Started with Big Data - 5 Game Changing Use Cases

with Rick Clements

Tuesday, April 29 5:00 - 6:00 pm

Session 3145: Lando 4201 A

IBM has seen a pattern emerge within and across its clients’ organizations and has identified the top five high-value applications of big data technology that can be the first step into big data. During this session, hear about each of these use cases - data warehouse modernization, enhanced 360-degree view of the customer, security/intelligence extension, big data exploration, and operations analysis - and how clients are identifying and tackling big data projects today. In addition, presenters explore the IBM big data and analytics platform - Watson Foundations - and how it sets the standard in the market with its breadth and depth of capabilities and is packaged so clients can address their immediate need, build on what they have, and realize value at every step of their journey.

Taming Big Data Derived from the Internet of Things with Big SQL

with Berni Schiefer

Tuesday April 29, 5:00 - 6:00 pm

Session 3477: Delfino 4103

Big data means many things. But with the many trillions of objects in the Internet of Things, each with many attributes, including geospatial and temporal, big data takes on new meaning. Gaining insight from all this data effectively and efficiently is a daunting challenge. SQL access over Hadoop data is an ideal and highly productive interface to extract value from all this data. In this talk, presenters describe why SQL is the right interface and how IBM InfoSphere BigInsights, with its next generation of Big SQL processing, opens new frontiers for exploring big data. Presenters provide performance-oriented best practices for storing, searching, and analyzing big data with Big SQL.

 

In the Expo:

 

We have three booths staffed with experts to answer your questions related to these areas:


InfoSphere - Information Integration & Governance


Information Integration and Governance (IIG), a critical element of Watson Foundations, increases trust in your information, makes business operations more efficient, and mitigates risk. Learn how IIG brings together a unified set of capabilities, including data integration, master data management, data security and lifecycle management.

Think BIG: Big Data, BigInsights and Big SQL


Big SQL 3.0 is the next generation of IBM’s SQL on Hadoop offering in InfoSphere BigInsights. Big SQL 3.0 delivers full, rich SQL language support; industry-leading performance; open integration with analytics and reporting tools; and built-in security. Learn how Big SQL can give you a single point of access to your data within Hadoop.

Data Management for the Era of Big Data


Business and IT leaders in forward-thinking organizations are taking an integrated approach to unlocking value from all available data by exploiting a new generation of data management solutions. Learn how the next generation of in-memory computing can help deliver greater scale and efficiency in the era of big data.

 

As always, the Expo area is huge, so here’s how to find us: head to the Big Data area and go to Booth BD-7.  Besides expert advice, visit for a few trinkets, books, and other surprises.

 

 

Social, Bookstore, Certification

 

Three of my favourite things… and yes, they’ll be well represented at the Impact conference.  Join the social lounges to meet the faces behind the social messaging; drop by the bookstore to browse through the great selection of books, and pick one or two up at a discounted price; take a certification exam to prove to the world that you have the skills to perform your job.

 

 

This will be my first time attending Impact.  I’m looking forward to learning as much as I can while at the conference and networking with many people.  I’ll be taking photos and live tweeting as much as I can.  If you’re at the conference, come say hi and tell me you’ve read my blog!

 

 


Susan


DB2Night Show News

25 APR 10am: "Willie's Winging It!" w/ IBM DB2 z/OS Rockstar Willie Favero

There are new shows scheduled on www.DB2NightShow.com that you won't want to miss! Check out the show schedule and register! Here are some highlights...

...

Ember Crooks

IDUG NA Technical Conference in Phoenix – May 2014

I’m so excited! The IDUG North American Technical Conference is less than a month away. Much like Melanie Stopfer mentioned in her blog entry on IDUG, this is one of MY favorite weeks of the...

...

DB2utor

The DB2 for z/OS Performance Handbook

I’m very excited to announce that CA Technologies (my employer) commissioned Dan Luksetich to author a document on DB2 for z/OS performance. Dan is an...

Vincent McBurney

IBM free Connect Events coming to Australia and New Zealand this May

IBM will be touring Australia and New Zealand with a set of free conferences called BusinessConnect and SolutionConnect in Sydney, Melbourne, Perth, Brisbane and Auckland in May.
 

April 21, 2014


Robert Catterall

DB2 for z/OS: the Importance of Historical Performance Data

Just within the past couple of weeks, I've received questions of a similar nature from two different DB2 for z/OS people. In each message, a DBA described a change that he had made in a production DB2 environment (in one case a size increase for several buffer pools, and in the other a change from an external stored procedure to a native SQL procedure). The questions asked pertained to assessing the impact of the change on system and/or application performance. I've been fielding questions of...

(Read more)
BLU for Cloud

BLU Acceleration for Cloud will be at IBM Impact 2014

The BLU Acceleration for Cloud Beta is in full swing, and we’ve got some cool stuff planned for the next drop.  Come find out more at my session at IBM Impact 2014.

Why move analytics to the cloud?  Well, for starters, Cloud is the new normal!  Analytics, long recognized as a competitive differentiator, has traditionally required significant resources — both skills and capital investment — to enter the game.  Most on-premise data warehouses have at least a six-figure price tag associated with them, with many implementations costing millions.  And while you do get significant value and performance with an on-premise implementation, that capital investment means longer procurement lead times, and longer lead times in general to ramp up an analytics project.

Cloud computing represents a paradigm shift… now even small organizations with limited budgets and resources can access the same powerful analytic technology leveraged in the most advanced analytic environments.  BLU for Cloud is a columnar, in-memory solution that brings appliance simplicity and ease of use for data warehousing and analytics to everyone — all for less than the price of a cup of coffee per hour.

BLU for Cloud is perfect for:

  • Pop-up Analytics Environments – Need a quick, agile data warehouse for a temporary project?  Put it in the cloud!
  • Dev/Test Environments – Yes, it’s compatible with the enterprise databases already in use within your organization because it’s based on DB2, an industry standard!
  • Analytic Marts – Augment and modernize your existing data warehouse infrastructure by leveraging cloud flexibility.
  • Self-Contained Agile Data Warehousing – Leverage BLU for Cloud for almost any analytics application.

Come find out more at my Impact session in Las Vegas next week: 

Session 3442A, Monday, April 28, 7:00pm at the Venetian Expo Theater

Or check out the BLU for Cloud website at http://www.bluforcloud.com for more details.

 



Willie Favero

WLM assisted DB2 buffer pool sizing: The story continues... Part 4

(Posted on Monday, April 21, 2014) This is Part 4 of 4 of a multi-part post covering details on the APARs, other blog posts, and the product manuals and Redbooks used as references.

(Read more)

Willie Favero

WLM assisted DB2 buffer pool sizing: The story continues... Part 3

(Posted on Monday, April 21, 2014) This is Part 3 of 4 of a multi-part post that goes into the -DISPLAY BUFFERPOOL command and some implementation warnings. How do you verify what’s happening with AUTOSIZE and your buffer pools? Is AUTOSIZE enabled, disabled, and what are the thresholds set...

(Read more)

Willie Favero

WLM assisted DB2 buffer pool sizing: The story continues... Part 2

(Posted on Monday, April 21, 2014) This is Part 2 of 4 of a multi-part post covering how WLM buffer pool management works.

(Read more)

Willie Favero

WLM assisted DB2 buffer pool sizing: The story continues... Part 1

(Posted on Monday, April 21, 2014) This is Part 1 of 4 of a multi-part post covering the background and setup for Parts 2-4.

(Read more)

Craig Mullins

A Little Bit About LOBs

In today's blog post we will take a brief look at LOBs, or Large OBjects, in DB2. I have been prepping for my webinar later this week, titled Bringing Big Data to DB2 for z/OS with LOBs: Understanding, Using, and Managing DB2 LOBs. Be sure to click on the link for that and join me on April 24, 2014 for the webinar! But back to the topic du jour... LOBs. Let's start with a bit of...

(Read more)

ChannelDB2 Videos

DB2 Tech Talk: SQL Tips and Techniques, Leverage the Power of SQL

SQL is a powerful language that you will learn to use to a greater extent in this talk! Explore some of the less well-known features of SQL and see how they can help in some practical situations.

We also look at some common mistakes and misconceptions and discuss ways of avoiding them. This on demand Tech Talk is an excellent way to deepen your DB2 SQL skills, with a focus on DB2 for Linux, UNIX and Windows.
 

April 20, 2014


Willie Favero

How a DB2 getpage compares to an I/O

(Posted on Sunday, April 20, 2014) They would seem like simple concepts: a getpage and an I/O. However, it would surprise you how many times the two terms get confused and misused. Now I’m not saying that folks don’t know the difference. I think in most cases they do. They still on oc...

(Read more)

ChannelDB2 Videos

DB2 Tips n Tricks Part 31 - Find Which Application is consuming most log space



How to find which application is consuming the most transaction log space. Solution: db2pd -db dbname -transactions, then db2 get snapshot for applications on test | grep...
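A hypothetical sketch of how the db2pd half of that tip can be scripted. The canned sample below mimics a typical `db2pd -db dbname -transactions` report layout; column positions vary by DB2 version, so verify them against your own output before relying on this.

```shell
# Hypothetical sketch: find the application handle holding the most
# transaction log space. On a real system you would capture the report with:
#   db2pd -db dbname -transactions > transactions.txt
# Here a canned sample stands in so the pipeline itself can be shown
# (AppHandl is field 2, LogSpace field 11 in this assumed layout).
cat > transactions.txt <<'EOF'
Address    AppHandl [nod-index] TranHdl Locks State Tflag Tflag2 Firstlsn Lastlsn LogSpace SpaceReserved
0x070000   27       [000-00027] 3       4     WRITE 0x00  0x00   0x1A2B   0x1A2C  1048576  2097152
0x070001   31       [000-00031] 5       2     WRITE 0x00  0x00   0x1B00   0x1B01  524288   1048576
EOF

# Skip the header row, keep AppHandl and LogSpace, sort by LogSpace
# descending, and print the biggest consumer.
awk 'NR > 1 { print $2, $11 }' transactions.txt | sort -k2,2nr | head -n 1
```

The application handle printed on top can then be examined further, for example with an application snapshot, to see which program and user it belongs to.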
 

April 18, 2014


Willie Favero

APAR Friday: Today it's about stats, WLM_REFRESH, and storage management

(Posted on Friday, April 18, 2014) Last year, APAR PM88804 changed the behavior of REALSTORAGE_MANAGEMENT to solve a CPU usage issue.  That behavior is being reversed and changed back to how it originally acted by APAR PM99575. PM99575: CHANGE THE DISCARDDATA LOGIC ...

(Read more)

DB2Night Replays

The DB2Night Show #133: Why Low Cardinality Indexes (STINK), Ember Crooks

Special Guest: Ember Crooks, Rosetta, on "Why Low Cardinality Indexes Negatively Impact Performance." 100% of our studio audience learned something! Based on her popular IBM developerWorks article and blogs, IBM DB2 GOLD Consultant Ember Crooks brings her articles on index cardinality to life via her excellent IDUG presentation! Watch our replay for details...

(Read more)
 

April 17, 2014


Ember Crooks

Ember on DB2Night Show on April 18th!

Thanks to a late cancellation on the DB2Night Show, I’ll be presenting on Friday, April 18. I’ll be talking about why low-cardinality indexes negatively impact performance. It is the same...

...

Willie Favero

DB2 for z/OS is not affected by Heartbleed bug

(Posted Friday, April 17, 2014) As if there could be any doubt, here’s the official word.... IBM DB2 for z/OS is not affected by the OpenSSL Heartbleed vulnerability (CVE-2014-0160). The flash states that “DB2 for z/OS in all editions and all platforms is NOT vulnerable to the...

(Read more)

Data and Technology

The Problem with Prediction

Predicting the future is a messy business. I try to avoid making predictions about the future of technology for many reasons. First off, nobody can see into the future, no matter what some fortune...

(Read more)

Dave Beulke

Three Ways to Avoid Big Data Chaos

Companies are always trying everything to gain a competitive edge, but with dysfunctional data procedures, management/CIO turnover, and lean business profits, new data projects face unprecedented difficulties. With the plethora of IT trends, directions and technologies, cloud platforms, big data,...

(Read more)

DB2 Guys

Achieving High Availability with PureData System for Transactions

KellySchlamb

Kelly Schlamb , DB2 pureScale and PureData Systems Specialist, IBM

A short time ago, I wrote about improving IT productivity with IBM PureData System for Transactions and I mentioned a couple of new white papers and solution briefs on that topic.  Today, I’d like to highlight another one of these new papers: Achieving high availability with PureData System for Transactions.

I’ve recently been meeting with a lot of different companies and organizations to talk about DB2 pureScale and PureData System for Transactions, and while there’s a lot of interest and discussion around performance and scalability, the primary reason that I’m usually there is to talk about high availability and how they can achieve higher levels than what they’re seeing today. One thing I’m finding is that there are a lot of different interpretations of what high availability means (and I’m not going to argue here over what the correct definition is). To some, it’s simply a matter of what happens when some sort of localized unplanned outage occurs, like a failure of their production server or a component of that server. How can downtime be minimized in that case?  Others extend this discussion out to include planned outages, such as maintenance operations or adding more capacity into the system. And others will include disaster recovery under the high availability umbrella as well (while many keep them as distinctly separate topics — but that’s just semantics). It’s not enough that they’re protected in the event of some sort of hardware component failure for their production system, but what would happen if the entire data center were to experience an outage? Finally (and I don’t mean to imply that this is an exhaustive list — when it comes to keeping the business available and running, there may be other things that come into the equation as well), availability could also include a discussion on performance. There is typically an expectation of performance and response time associated with transactions, especially those that are being executed on behalf of customers, users, and business processes. If a customer clicks a button on a website and it doesn’t come back quickly, it may not be distinguishable from an outage and the customer may leave that site, choosing to go to a competitor instead.

It should be pointed out that not every database requires the highest levels of availability. It might not be a big deal to an organization if a particular departmental database is offline for 20 minutes, or an hour, or even the entire day. But there are certainly some business-critical databases that are considered “tier 1” that do require the highest availability possible. Therefore, it is important to understand the availability requirements that your organization has.  But I’m likely already preaching to the choir here and you’re reading this because you do have a need and you understand the ramifications to your business if these needs aren’t met. With respect to the companies I’ve been meeting with, just hearing about what kinds of systems they depend on from both an internal and external perspective - and what it means to them if there’s an interruption in service - has been fascinating.  Of course, I’m sympathetic to their plight, but as a consumer and a user I still have very high expectations around service. I get pretty mad when I can’t make an online trade, check the status of my travel reward accounts, or even order a pizza online; especially when I know what those companies could be doing to provide better availability to their users.  :-)

Those things I mentioned above — high availability, disaster recovery, and performance (through autonomics) — are all discussed as part of the paper in the context of PureData System for Transactions. PureData System for Transactions is a reliable and resilient expert integrated system designed for high availability, high throughput online transaction processing (OLTP). It has built-in redundancies to continue operating in the event of a component failure, disaster recovery capabilities to handle complete system unavailability, and autonomic features to dynamically manage utilization and performance of the system. Redundancies include power, compute nodes, storage, and networking (including the switches and adapters). In the case of a component failure, a redundant component keeps the system available. And if there is some sort of data center outage (planned or unplanned), a standby system at another site can take over for the downed system. This can be accomplished via DB2’s HADR feature (remember that DB2 pureScale is the database environment within the system) or through replication technology such as Q Replication or Change Data Capture (CDC), part of IBM InfoSphere Data Replication (IIDR).

Just a reminder that the IDUG North America 2014 conference will be taking place in Phoenix next month from May 12-16. Being in a city that just got snowed on this morning, I’m very much looking forward to some hot weather for a change. Various DB2, pureScale, and PureData topics are on the agenda. And since I’m not above giving myself a shameless plug, come by and see me at my session: A DB2 DBA’s Guide to pureScale (session G05). Click here for more details on the conference. Also, check out Melanie Stopfer’s article on IDUG.  Hope to see you there!


 

April 16, 2014


DB2Night Show News

18 APR 10a CDT: Update - New Topic on DB2 LUW Indexes with Ember Crooks

Attend Episode #133 of The DB2Night Show™ to learn "Why Low Cardinality Indexes Negatively Impact Performance". Ember Crooks, IBM DB2 GOLD Consultant and Sr. Director at Rosetta, replaces Gopi...

...

Willie Favero

Survey: Understanding the business applications running on System z (Mainframe)

(Posted Friday, April 16, 2014) Help us (IBM) to better understand the business applications you are (or have been) running on System z, your IBM Mainframe, by completing a short 5 minute survey! This survey has been designed to help IBM better understand how your IBM System z inv...

(Read more)

DB2 Guys

Fraud detection? Not so elementary, my dear.

Radha

Radha Gowda, Product Marketing Manager, DB2 and related offerings

Did you know that fraud and financial crime has been estimated at over $3.5 trillion annually [1]?  Identity theft alone cost Americans over $24 billion, i.e. $10 billion more than all other property crimes [2]?  And 70% of all companies have experienced some type of fraud [3]?

While monetary loss due to fraud is significant, the loss of reputation and trust can be even more devastating.  In fact, according to a 2011 study by Ponemon Institute, organizations lose an average of $332 million in brand value in the year following a data breach. Unfortunately, fraud continues to accelerate due to advances in technology, organizational silos, lower risks of getting caught, weak penalties, and economic conditions.  In this era of big data, fraud detection needs to go beyond traditional data sources, i.e. not just transaction and application data, but also machine, social, and geospatial data, for greater correlation and actionable insights. The only way you can sift through vast amounts of structured and unstructured data and keep up with the evolving complexity of fraud is through smarter application of analytics to identify patterns, construct fraud models, and conduct real-time detection of fraudulent activity.

IBM Watson Foundations portfolio for end-to-end big data and analytics needs

While IBM has an impressive array of offerings addressing all your big data and analytical needs, our focus here is on how DB2 solutions can help you develop and test fraud models, score customers for fraud risk, and conduct rapid, near-real-time analytics to detect potential fraud.  You have the flexibility to choose the type of solution that best fits your needs – select software solutions to take advantage of your existing infrastructure or choose expert-integrated appliance-based solutions for simplified experience and fast time to value.

Highly available and scalable operational systems for reliable transaction data

DB2 for Linux, UNIX and Windows software is optimized to deliver industry-leading performance across multiple workloads – transactional, analytic and operational analytic – while lowering administration, storage, development, and server costs.  DB2 pureScale, with its cluster-based, shared-disk architecture, provides application-transparent scalability beyond 100 nodes, helps achieve failover between two nodes in seconds, and offers business continuity with built-in disaster recovery over distances of a thousand kilometers.

IBM PureData System for Transactions, powered by DB2, is an expert integrated server, storage, network, and tools selected and tuned specifically for the demands of high-availability, high-throughput transactional processing—so you do not have to research, purchase, install, configure and tune the different pieces to work together. With its pre-configured topology and database patterns, you can set up high availability cluster instances and database nodes to meet your specific needs and deploy the same day rather than spend weeks or months. As your business grows, you can add new databases in minutes and manage the whole system using its intuitive system management console.

Analytics for fraud detection

DB2 Warehouse Analytics: DB2 advanced editions offer capabilities such as online analytical processing (OLAP), continuous data ingest, data mining, and text analytics that are well-suited for real-time enterprise analytics and can help you extract structured information out of previously untapped business text.  Its business value in enabling fraud detection is immense.

IBM PureData System for Operational Analytics, powered by DB2, helps you deliver near-real-time insights with continuous data ingest and immediate data analysis.  It is reliable, scalable, and optimized to handle thousands of concurrent operational queries with outstanding performance. You can apply fraud models to identify suspicious transactions while they are in progress, not hours later. This can apply across any industry segment, including financial services, health care, insurance, retail, manufacturing, and government services.  PureData System for Operational Analytics helps with not just real-time fraud detection, but also cross-sell or up-sell offers/services by identifying customer preferences, anticipating their behavior, and predicting the optimum offer/service in real time.

DB2 with BLU Acceleration, available in advanced DB2 editions, uses advanced in-memory columnar technologies to help you analyze data and generate new insights in seconds instead of days.  It can provide performance improvements ranging from 10x to 25x and beyond, with some queries achieving 1,000 times improvement [4], for analytical queries with minimal tuning.  DB2 with BLU Acceleration is extremely simple to deploy and provides good out-of-the-box performance for analytic workloads. From a DBA’s perspective, you simply create table, load and go. There are no secondary objects, such as indexes or MQTs, that need to be created to improve query performance.

DB2 with BLU Acceleration can handle terabytes of data to help you conduct customer scoring across your entire customer data set, develop and test fraud models that explore a full range of variables based on all available data.  Sometimes creating a fraud model may involve looking at 100s of terabytes of data, where IBM® PureData™ System for Analytics would fare better.  Once a fraud model is created, you can use DB2 with BLU Acceleration to apply fraud model to every transaction that comes in for speed of thought insight.

IBM Cognos® BI: DB2 advanced editions come with 5 user licenses for Cognos BI, which enable users to access and analyze the information consumers need to make the decisions that lead to better business outcomes.  Cognos BI with Dynamic Cubes, an in-memory accelerator for dimensional analysis, enables high-speed interactive analysis and reporting over terabytes of data.  DB2 with BLU Acceleration integrated with Cognos BI with Dynamic Cubes offers you fast-on-fast performance for all your BI needs.

With the array of critical challenges facing financial institutions today, the smarter ones are those that successfully protect their core asset: data. IBM data management solutions help you integrate information and generate new insights to detect and mitigate fraud. We invite you to explore and experience DB2 and the rest of the Watson Foundations offerings made with IBM.

Stay tuned for the second part of this blog that will explore the product features in detail.

[1] ACFE 2012 report to the nations
[2] BJS 2013 report on identity theft
[3] Kroll 2013/2014 global fraud report
[4] Based on internal IBM tests of analytic workloads comparing queries accessing row-based tables on DB2 10.1 vs. columnar tables on DB2 10.5. Results not typical. Individual results will vary depending on individual workloads, configurations and conditions, including size and content of the table, and number of elements being queried from a given table.

Follow Radha on Twitter @rgowda

 


 

April 15, 2014


Ember Crooks

DB2 Basics: Capitalization

When does case matter in DB2? Well, it doesn’t unless it does. Nice and clear, huh? When Text Must Be in the Correct Case: text must be in the correct case whenever it is part of a literal...

...
