Planet DB2 is an aggregator of blogs about the IBM DB2 database server. We combine and republish posts by bloggers around the world. Email us to have your blog included.

 

September 29, 2016

Big Data University

Mexico: Amazing interest in Data Science


When I was asked by Raul F. Chong, BDU WW leader, about the opportunity to travel to Mexico to deliver the bootcamp I had been running in China, I was hooked immediately. Tequila was of course my main motivator (just kidding!). I was curious about the interest in Data Science in countries other than North America and China.

From September 19 to 22, I had the honor of delivering the Data Science bootcamp in collaboration with IBM to 80 participants (40 university professors, 15 researchers and 15 IBM employees) in Guadalajara, Mexico. This city is an IT hub in Mexico, home to big companies like IBM (5,000 employees), Intel, Oracle and HP. It has a large population, and it has the second biggest university in the country, with 17 campuses!

In this bootcamp, I gave an introduction to Data Science and Big Data University, followed by hands-on workshops on the R language, data analysis, visualization, machine learning and working with Spark. The bootcamp was very successful, and I believe it was academically valuable for the participants. It was offered at no charge, with the motivation of getting professors to teach this material to their students. The ultimate results of this initiative won’t be known for at least 6 months, but the interest it generated amongst attendees was incredible.

I modified the delivery of this bootcamp slightly from the way I ran it in China. I included many data science case studies, as the organizer wanted the bootcamp to cover not just the R language but also research topics in Big Data and Data Science. I have personally received a lot of positive feedback, but the most important piece is that we must launch BDU in Spanish!

BDU, being a community initiative, relies heavily on the passion of its members. The people from Mexico are super excited and will be helping to translate BDU material, so we should soon see courses in Spanish popping up on our website.

Participants loved the Data Scientist Workbench (DSWB) which ran smoothly for the entire 4 days. I’m happy with the stability the team has been able to achieve with DSWB.

I’m looking forward to returning to Mexico in 2017. The University of Guadalajara has invited me to be a keynote speaker at their annual conference in February!

The post Mexico: Amazing interest in Data Science appeared first on Big Data University.

 

September 28, 2016

Big Data University

BDU China initiatives


Every time I travel to China, I can’t stop thinking that the entire population of Canada probably fits in a single Chinese city! To serve such a large population, Chinese officials, professionals, and workers are used to doing things fast. Really fast. From the moment I applied for my Chinese visa to checking in at airports in China, I’m often amazed at the speed and efficiency with which they operate. Technological advances and adoption have followed the same pace. AliPay and WeChat Pay have been mainstream in China for some time, while Apple Pay in the West has yet to take off.

With such a great and vibrant population, we could not do something small to relaunch BDU in China. On September 3rd and 4th, BDU sponsored the CDA (Certified Data Analyst) Summit in Beijing, China, attended by more than 3,000 data analysts, data scientists, data engineers, students and academics. The event put BDU in the minds and hearts of the community we want to reach. There were 6,382 people registered for the event, of which 3,221 checked in, and others watched online. We participated in the keynote, a panel, and 4 breakout sessions.

Keynote:

  • “Great opportunities ahead for Data Scientists” by Yan Yong Ji (Y.Y.), Director, Analytics Platform Services on Cloud, IBM China Development Lab (CDL).
  • “BDU initiatives in China” by me (Raul F. Chong)

Panel:

This was a mixed panel (not just mixed backgrounds but also languages!), with the theme “Getting started with Data Science”. I was honored to be the moderator and to have my colleagues Saeed Aghabozorgi (BDU Chief Data Scientist) and Henry Zeng (IBM China Senior Data and Solutions Architect) with me on stage, along with four other panelists representing industry, academia, and startup companies. We managed to cover interesting questions in English and Chinese (with live translation), from comparing the Chinese and US data science outlooks to clarifying the distinction between the terms “Data Analyst”, “Data Scientist”, and “Data Engineer”.

Breakout sessions:

  • Smarter Traffic (Henry Zeng)
  • Data science: Competition to beat humans (Saeed Aghabozorgi)
  • Data science: Methodology, tools and skills (Saeed Aghabozorgi)
  • Data science: From university to Big Data University (Saeed Aghabozorgi)

The team also participated in media interviews, and had a booth where flyers and small gifts were provided.


Announcements:

At the event, the following announcements were made:

With WeChat dominating social media and communication in China, we focused on launching and promoting our official BDU WeChat account. While at the conference, this newly created account grew to more than 1,000 subscribers! If you have not yet done so, please subscribe:

BDU WeChat QRCode
Video recording is available here:
http://e.vhall.com/395340718
 
I’m looking forward to continued collaboration with our existing BDU Ambassadors, and new partnerships for the rest of the year and 2017!

The post BDU China initiatives appeared first on Big Data University.


Dave Beulke

Process to Justify an IBM DB2 Analytics Accelerator (IDAA) Part 2

The benefits of having an IDAA appliance are something that every mainframe DB2 shop should investigate. Investigating and justifying an IDAA appliance can be done at no cost. In the previous blog, I talked about the setup steps for getting an IDAA Virtual Server configured and deployed within your...

(Read more)
 

September 27, 2016

Big Data University

This Week in Data Science (September 27, 2016)

Here’s this week’s news in Data Science and Big Data.

Don’t forget to subscribe if you find this useful!

Interesting Data Science Articles and News

Upcoming Data Science Events

New in Big Data University

  • Text Analytics – This course introduces the field of Information Extraction and how to use a specific system, SystemT, to solve your Information Extraction problem.
  • Advanced Text Analytics – This course goes into details about the SystemT optimizer and how it addresses the limitations of previous IE technologies.

The post This Week in Data Science (September 27, 2016) appeared first on Big Data University.


DB2 Guys

IBM DB2 – the database for the cognitive era at IBM World of Watson 2016

IBM Insight, the premier data, analytics and cognitive IBM conference, is now part of IBM World of Watson 2016, to be held in Las Vegas from October 24-27. This year attendees will be able to experience first-hand a world of cognitive capabilities that IBM has been at the forefront of. World of Watson incorporates the […]

Triton Consulting

DB2 12 Latest News – Join the IBM Webcast featuring Jeff Josten and Julian Stuhler

We are delighted to let you know that Julian Stuhler, Solutions Delivery Director at Triton Consulting and IBM Gold Consultant will be joining the panel on IBM’s next DB2 12 webcast on Tuesday 4th October. Register here Julian will be speaking … Continue reading →

(Read more)

DB2utor

DB2 12 In-Memory Index Optimization

Last month I wrote about the trend of increasing the amount of real storage when configuring new mainframe systems. DB2 12 (which is set for delivery in the fourth quarter of 2016) features many new performance improvements that are designed to take advantage of available real storage. One of these enhancements is called In-Memory Index Optimization -- aka, index fast traversal blocks (FTBs).
 

September 26, 2016

Big Data University

Introducing Two New SystemT Information Extraction Courses

This article on information extraction is authored by Laura Chiticariu and Yunyao Li.

We are all hungry to extract more insight from data. Unfortunately, most of the world’s data is not stored in neat rows and columns. Much of the world’s information is hidden in plain sight in text. As humans, we can read and understand the text. The challenge is to teach machines how to understand text and further draw insights from the wealth of information present in text. This problem is known as Text Analytics.

An important component of Text Analytics is Information Extraction. Information extraction (IE) refers to the task of extracting structured information from unstructured or semi-structured machine-readable documents. It has been a well-known task in the Natural Language Processing (NLP) community for a few decades.
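To make “structured information from unstructured text” concrete, here is a toy Python sketch using plain regular expressions. This illustrates the task only, not how SystemT works; the sample text and field names are invented:

```python
import re

# Toy information extraction: turn free text into structured (field, value) rows.
text = "Contact Jane Doe at jane.doe@example.com or call 555-867-5309."

extractors = {
    "email": r"[\w.]+@[\w.]+\.\w+",   # crude email pattern
    "phone": r"\d{3}-\d{3}-\d{4}",    # crude US phone pattern
}

records = [
    {"field": field, "value": match}
    for field, pattern in extractors.items()
    for match in re.findall(pattern, text)
]
print(records)
# [{'field': 'email', 'value': 'jane.doe@example.com'},
#  {'field': 'phone', 'value': '555-867-5309'}]
```

Real IE systems go far beyond single regexes (tokenization, dictionaries, relational operations over matches), which is exactly the gap SystemT's AQL primitives are designed to fill.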

Two New Information Extraction Courses

We just released two courses on Big Data University that get you up and running with Information Extraction in no time.

The first one, Text Analytics – Getting Results with System T introduces the field of Information Extraction and how to use a specific system, SystemT, to solve your Information Extraction problem. At the end of this class, you will know how to write your own extractor using the SystemT visual development environment.

The second one, Advanced Text Analytics – Getting Results with System T goes into details about the SystemT optimizer and how it addresses the limitations of previous IE technologies. For a brief introduction to how SystemT will solve your Information Extraction problems, read on.

Common Applications of Information Extraction

The recent rise of Big Data analytics has led to reignited interest in IE, a foundational technology for a wide range of emerging enterprise applications. Here are a few examples.

Financial Analytics. For regulatory compliance, companies submit periodic reports about their quarterly and yearly accounting and financial metrics to regulatory authorities such as the Securities and Exchange Commission. Unfortunately, the reports are in textual format, with most of the data reported in tables with complex structures. In order to automate the task of analyzing companies’ financial health and regulatory compliance, Information Extraction is used to extract the relevant financial metrics from the textual reports and make them available in structured form to downstream analytics.

Data-Driven Customer Relationship Management (CRM). The ubiquity of user-created content, particularly on social media, has opened up new possibilities for a wide range of CRM applications. IE over such content, in combination with internal enterprise data (such as product catalogs and customer call logs), enables enterprises to understand their customers to an extent never possible before. Besides the demographic information of individual customers, IE can extract important information from user-created content that allows enterprises to build detailed customer profiles, capturing their opinions towards a brand/product/service, their product interests (e.g. “Buying a new car tomorrow!” indicates an intent to buy a car), and their travel plans (“Looking forward to our vacation in Hawaii” implies an intent to travel), among many other things.

Such comprehensive customer profiles allow the enterprise to tailor customer relationship management to different demographics at fine granularity, and even to individual customers. For example, a credit card company can offer special incentives to customers who have indicated plans to travel abroad in the near future and encourage them to use the company’s credit cards while overseas.
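A pattern-based intent extractor along the lines of the examples above could be sketched like this. The patterns and intent labels are invented for illustration; a production system would use far richer linguistic machinery:

```python
import re

# Hypothetical intent patterns; each maps a phrase shape to a coarse intent label.
INTENT_PATTERNS = [
    (re.compile(r"\bbuying a new (\w+)", re.I), "purchase-intent"),
    (re.compile(r"\bvacation in (\w+)", re.I), "travel-intent"),
]

def extract_intents(post):
    """Return (intent, detail) pairs found in a single social-media post."""
    return [
        (intent, match.group(1))
        for pattern, intent in INTENT_PATTERNS
        for match in pattern.finditer(post)
    ]

print(extract_intents("Buying a new car tomorrow!"))
# [('purchase-intent', 'car')]
print(extract_intents("Looking forward to our vacation in Hawaii"))
# [('travel-intent', 'Hawaii')]
```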

Machine Data Analytics. Modern production facilities consist of many computerized machines performing specialized tasks. All of these machines produce a constant stream of system log data. Using IE over the machine-generated log data, it is possible to automatically extract individual pieces of information from each log record and piece them together into information about individual production sessions. Such session information permits advanced analytics over machine data, such as root cause analysis and machine failure prediction.
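The sessionization step described above can be sketched as follows. The log format, field names, and event values are all invented for illustration:

```python
from collections import defaultdict

# Hypothetical machine logs: one key=value record per line.
log_lines = [
    "session=42 event=start machine=press-1",
    "session=42 event=error code=E17",
    "session=42 event=end",
    "session=43 event=start machine=press-2",
    "session=43 event=end",
]

# Extract fields from each record and piece them together by session id.
sessions = defaultdict(list)
for line in log_lines:
    fields = dict(kv.split("=") for kv in line.split())
    sessions[fields["session"]].append(fields["event"])

# Sessions containing an error event become candidates for root cause analysis.
failed = [sid for sid, events in sessions.items() if "error" in events]
print(failed)
# ['42']
```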

A Brief Introduction to SystemT

SystemT is a state-of-the-art Information Extraction system. It allows developers to express a variety of information extraction algorithms, and automatically optimizes them for efficient runtime execution. SystemT started as a research project at IBM Research – Almaden in 2006 and is now commercially available as IBM BigInsights Text Analytics.

At a high level, SystemT consists of the following three major parts:

1. Language for expressing NLP algorithms. AQL (Annotation Query Language) is a declarative language that provides powerful primitives needed in IE tasks, including:

  • Morphological Processing including tokenization, part of speech detection, and finding matches of dictionaries of terms;
  • Other Core primitives such as finding matches of regular expressions, performing span operations (e.g., checking if a span is followed by another span) and relational operations (unioning, subtracting, filtering sets of extraction results);
  • Semantic Role Labeling primitives providing information at the level of each sentence, of who did what to whom, where and in what manner;
  • Machine Learning Primitives to embed a machine learning algorithm for training and scoring.

2. Development Environment. The development environment provides facilities for users to construct and refine information extraction programs (i.e., extractors). The development environment supports two kinds of users:

  • Data scientists who may not wish to learn how to code can develop their extractors in a visual drag-and-drop environment loaded with a variety of prebuilt extractors that they can adapt to a new domain and build on top of. The visual extractor is converted behind the scenes into AQL code.


  • NLP engineers can write extractors directly using AQL. An example simple statement in AQL is shown below. The language itself looks a lot like SQL, the language for querying relational databases. The familiarity of many software developers with SQL helps them in learning and using AQL.

AQL Information Extraction
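The AQL statement from the original screenshot is not reproduced here; as a stand-in, a minimal AQL extractor might look like the following (the view name and regular expression are illustrative, not taken from the course):

```
-- Illustrative AQL: extract US-style phone numbers from each document
create view PhoneNumber as
extract regex /\d{3}-\d{3}-\d{4}/
    on D.text as number
from Document D;

output view PhoneNumber;
```

Like SQL, the statement declares what to produce (a view of matches over each document) rather than how to compute it; the Optimizer described below decides the execution strategy.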

3. Optimizer and Runtime Environment. AQL is a declarative language: the developer declares the semantics of the extractor in AQL in a logical way, without specifying how the AQL program should be executed. During compilation, the SystemT Optimizer analyzes the AQL program and breaks it down into specialized individual operations that are necessary to produce the output.

The Optimizer then enumerates many different plans, or ways in which individual operators can be combined together to compute the output, estimates the cost of these plans, and chooses one plan that looks most efficient.

This process is very similar to how SQL queries are optimized in relational database systems, but the optimizations are geared towards text operations, which are CPU-intensive, as opposed to the I/O-intensive operations of relational databases. This improves developer productivity, since developers only need to focus on “what” to extract, leaving the question of “how” to do it efficiently to the Optimizer.

Given a compiled extractor, the Runtime Environment instantiates and executes the corresponding physical operators. The runtime engine is highly optimized and memory efficient, allowing it to be easily embedded inside the processing pipeline of a larger application. The Runtime has a document-at-a-time execution model: it receives a continuous stream of documents, annotates each document, and outputs the annotations for further application-specific processing. The source of the document stream depends on the overall application.

Advantages of SystemT

SystemT gracefully handles the requirements dictated by modern applications such as those described above. Specifically:

  • Scalability. The SystemT Optimizer and Runtime engine ensure high-performance execution of extractors over individual documents. In our tests across many different scenarios, SystemT extractors run extremely fast on a variety of documents, ranging from very small documents, such as 140-byte Twitter messages, to very large documents of tens of megabytes.
  • Expressivity. AQL enables developers to write extractors in a compact manner, and provides a rich set of primitives to handle both natural language text (in many different languages) and other kinds of text, such as machine-generated data or tables. A few AQL statements may express complex extraction semantics that would otherwise require hundreds or thousands of lines of code. Furthermore, functionality not yet natively available in AQL can be implemented via User Defined Functions (UDFs). For instance, developers can leverage AQL to extract complex features for statistical machine learning algorithms, and in turn embed the learned models back into AQL.
  • Transparency. As a declarative language, AQL allows developers to focus on what to extract rather than how to extract when developing extractors. It enables developers to write extractors in a much more compact manner, with better readability and maintainability. Since all operations are declared explicitly, it is possible to trace a particular result and understand exactly why and how it is produced, and thus to correct a mistake at its source. Thus, AQL extractors are easy to comprehend, debug and adapt to a new domain.

If you’d like to learn more about how SystemT handles these requirements and how to create your own extractors, enroll today in Text Analytics – Getting Results with System T and then Advanced Text Analytics – Getting Results with System T.

The post Introducing Two New SystemT Information Extraction Courses appeared first on Big Data University.

 

September 23, 2016


Robert Catterall

DB2 for z/OS: Using PGFIX(YES) Buffer Pools? Don't Forget About Large Page Frames

Not long ago, I was reviewing an organization's production DB2 for z/OS environment, and I saw something I very much like to see: a REALLY BIG buffer pool configuration. In fact, it was the biggest buffer pool configuration I'd ever seen for a single DB2 subsystem: 162 GB (that's the combined size of all the buffer pools allocated for the subsystem). Is that irresponsibly large -- so large as to negatively impact other work in the system by putting undue pressure on the z/OS LPAR's central...

(Read more)

DB2Night Replays

The DB2Night Show #184: DB2 - The Corner Stone of IBM Analytics

Follow @LesKing00. Special Guest: Les King, Director of Big Things, IBM. DB2 - The Corner Stone of the IBM Analytics Platform Strategy. 100% of our audience learned...

(Read more)
 

September 22, 2016


Kim May

TFG Website and Blog – We’re Back!

Many thanks to our colleagues at Substance151, particularly Ida Cheinman, for their rapid response to a website error blocking access to the many documents and presentations attached to both our...

(Read more)
 

September 21, 2016

Jack Vamvas

HOW TO clear inactive DB2 LUW transaction log files

Clearing inactive DB2 LUW transaction log files is a common task. Before we discuss how to prune the inactive transaction logs, we’ll need to establish which log files are inactive.

It is important to identify which logs to delete: if you delete an active transaction log file, you will cause an outage on the database.

Before you start any activity, speak to the owners and users of the database. It is safer to stop the applications, but that is not always possible, as it may be a live online database. Also consider creating a backup for recovery; completing a DB2 online backup won't force applications off the database.

Follow the steps below to clear the DB2 LUW transaction log files.

>>Connect to the database using the commands

su - db2usr1

$db2 connect to myDB

>>Check the database configuration using either of these methods

$db2 get db config

$db2 get db config | grep "log"

>>Search for the First active log file database parameter. For example:

First active log file = S0130573.LOG

>>Once you have identified the active log file, execute

$db2 prune logfile prior to <activeLogFileName>

>>Applying this example to our command:

$db2 prune logfile prior to S0130573.LOG

>>On execution of this command, DB2 will clear all inactive transaction logs prior to S0130573.LOG
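The steps above can be strung together in a small script. This is a hedged sketch: the parsing assumes the `First active log file = ...` output format shown above, and the final `db2 prune logfile` command is only echoed so you can review it before running it against a real database:

```shell
#!/bin/sh
# Stand-in for: db2 get db cfg for myDB | grep "First active log file"
cfg_line='First active log file                       = S0130573.LOG'

# Pull out the log file name after the "=" sign
active_log=$(echo "$cfg_line" | awk -F'= *' '{print $2}')

# Review before executing: prunes all inactive logs prior to the active one
echo "db2 prune logfile prior to $active_log"
```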

 

Read More

DB2 – Restore database from a ONLINE backup (DBA DB2)

DB2 SQL2413N - Online backup is not allowed (DBA DB2)

 

 

September 20, 2016

Big Data University

This Week in Data Science (September 20, 2016)

Here’s this week’s news in Data Science and Big Data.

Don’t forget to subscribe if you find this useful!

Interesting Data Science Articles and News

Upcoming Data Science Events

New in Big Data University

  • Data Science Fundamentals Learning Path – When a butterfly flaps its wings, what happens? Does it fly away and move on to another flower, or is there a spike in the rotation of wind turbines in the British Isles? Come be exposed to the world of data science, where we are working to create order out of chaos that will blow you away!

The post This Week in Data Science (September 20, 2016) appeared first on Big Data University.


DB2utor

Open Source Tools and Languages for z/OS

The application development landscape on the mainframe -- which for me is really z/OS -- continues to evolve. Now businesses that are moving to cloud and mobile applications use not only Java, but also Perl, PHP, Python, R and TCL. These open source languages are very powerful because they can facilitate certain types of processing through a minimal amount of coding.
 

September 14, 2016


ChannelDB2 Videos

DB2 Tips n Tricks Part 95 - How To Protect Backup Image using DB2 Native Encryption



How To Protect Backup Image using DB2 Native Encryption. Implement the DB2 Encryption Feature. Happy Learning & Sharing.

Dave Beulke

Process to Justify an IBM DB2 Analytics Accelerator (IDAA) Part 1

Unbelievably, the IBM DB2 Analytics Accelerator (IDAA) has been available for many years, helping all types of customers improve overall processing, especially their analytical processing. Many companies do not have an IDAA appliance helping their DB2 for z/OS environments. Since most every shop’s...

(Read more)
 

September 13, 2016


Craig Mullins

The Tao of DB2 - Part 7: Dealing with Performance Issues

The last time we checked in on our DBAs (the soon-to-retire mentor and his intern) the mentor was schooling his young intern on storage and data retention issues. But there is still much to learn, as our intern will soon find out! "Soon you will get the chance to learn about performance tuning," said the mentor, as he nodded solemnly in his chair. As if on cue, one of the programmers came...

(Read more)

DB2utor

Become a Part of Generation z

In my job with IBM, I spend a lot of time at customer sites. During these visits, I’m seeing an increasing number of young IT professionals who are just starting their careers on z/OS.

Big Data University

This Week in Data Science (September 13, 2016)

Here’s this week’s news in Data Science and Big Data.

Don’t forget to subscribe if you find this useful!

Interesting Data Science Articles and News

Upcoming Data Science Events

Cool New Courses

The post This Week in Data Science (September 13, 2016) appeared first on Big Data University.

 

September 12, 2016


Data and Technology

A Dozen SQL Rules of Thumb, Part 3

Today we pick up our three-part series of SQL rules of thumb (ROTs) with the third and final installment… You can think of these rules as general guiding principles you should follow as your...

(Read more)
 

September 06, 2016

Big Data University

This Week in Data Science (September 06, 2016)

Here’s this week’s news in Data Science and Big Data.

Don’t forget to subscribe if you find this useful!

Interesting Data Science Articles and News

Upcoming Data Science Events

The post This Week in Data Science (September 06, 2016) appeared first on Big Data University.


DB2utor

DB2 12 Continuous Delivery Model Webinar

Software development on z/OS has traditionally adhered to tried and true practices to ensure that poorly constructed application code does not make it into production. In many shops we call these change control people the “gatekeepers.” Nothing gets past these individuals. Developers must rigorously test their code before an application or update goes live. Beyond that, fallback procedures must be devised in case there's a problem once the code is moved into production. On top of that, changes to production systems are subject to a strict schedule to ensure that no critical business events are interrupted.
 

September 03, 2016


DB2Night Replays

The DB2Night Show #183: DB2 LUW Security from the Data Center to the Cloud!

Follow @Roger_E_Sanders. Special Guest: Roger Sanders, Author, Teacher, and Security Guru, IBM. DB2 LUW Security: From the Data Center to the Cloud! 100% of our...

(Read more)
 

August 30, 2016

Jack Vamvas

Troubleshoot Long running sql statements with LONG_RUNNING_SQL view

A customer called me yesterday and complained about slow response times on a DB2 database. I asked a few questions and they mentioned a few ad-hoc queries were executing. If I’m doing production troubleshooting, running a SQL trace is a powerful method to retrieve detailed information.

Prior to getting the detailed information, I’ll use the SYSIBMADM.LONG_RUNNING_SQL view. This is a very useful DB2 administrative view, presenting long-running queries.

Under the hood, the LONG_RUNNING_SQL view joins some system snapshots.

 

 SELECT APPL_NAME, AUTHID, INBOUND_COMM_ADDRESS, STMT_TEXT, AGENT_ID,
        ELAPSED_TIME_MIN, APPL_STATUS, DBPARTITIONNUM
 FROM SYSIBMADM.LONG_RUNNING_SQL
 ORDER BY APPL_NAME

 

Check the APPL_STATUS value on the query. Some troubleshooting scenarios which I use are:

a) Finding LOCKWAIT – use the lock snapshots to dig deeper into the source of the issue

b) Finding UOWWAIT – check the requesting application

Read More

How to read db2detaileventlock event monitor trace file (DBA DB2)

Database Tuning for complex sql queries (DBA DB2)

Database Tuning – Five Basic Principles according to Shasha (DBA ...

Big Data University

This Week in Data Science (August 30, 2016)

Here’s this week’s news in Data Science and Big Data.

Don’t forget to subscribe if you find this useful!

Interesting Data Science Articles and News

Upcoming Data Science Events

The post This Week in Data Science (August 30, 2016) appeared first on Big Data University.


DB2utor

Greater Memory Already Making a Great Impact

In November 2015 I wrote about the launch of DB2 12 for z/OS ESP, citing all the various enhancements DB2 has made by exploiting significantly greater amounts of available memory. Well, it isn't just DB2 12 that will benefit from additional memory to reduce CPU cost.

Robert Catterall

DB2 for z/OS: Clearing Up Some Matters Pertaining to Database Access Threads

I have recently received a number of questions pertaining to DB2 for z/OS database access threads, or DBATs. DBATs are threads used in the execution of SQL statements that are sent to DB2 from network-attached applications (i.e., from DRDA requesters that access DB2 for z/OS by way of DB2's distributed data facility, also known as DDF). Thinking that these questions (and associated answers) might be of interest to a good many people in the DB2 for z/OS community, I'm packaging them in this blog...

(Read more)
 

August 26, 2016


ChannelDB2 Videos

DB2 Tips n Tricks Part 94 - How To Find Tablespaces included inside Tablespace Level Backup Image



How To Find Tablespaces included inside a Tablespace Level Backup Image: db2ckbkp -T imgname. Happy Learning & Sharing.

ChannelDB2 Videos

DB2 Tips n Tricks Part 93 - How LOGARCHMETH2 is not alternative for LOGARCHMETH1



How LOGARCHMETH2 is not an alternative or backup for LOGARCHMETH1. Configure the failarchlog DB CFG parameter. Happy Learning & Sharing.

Jack Vamvas

How to write a DB2 loop with INSERT

Question: I’d like to write a SQL statement that loops through an INSERT statement and increments a count. The purpose is to create some test tables for load testing.

Answer: It is possible to create a loop in DB2 which runs an incremental INSERT. This is a basic example, which can be customised for your purposes. Note the use of ATOMIC: its purpose is to roll back before the call is passed back to the requestor if there is a problem.

In this example the CNT variable increments with every INSERT, while it is under 100000.

 

db2 "CREATE TABLE mytbl (ID INT)"
db2 "BEGIN ATOMIC DECLARE CNT INT DEFAULT 5; WHILE CNT < 100000 DO INSERT INTO mytbl (ID) VALUES (CNT); SET CNT = CNT + 1; END WHILE; END"

 Read More

Software unit testing and DB2 sql loop test code (DBA DB2)

DB2 Tuning Toolkit – DB2 Design advisor - Ddb2advis


August 25, 2016


Data and Technology

A Dozen SQL Rules of Thumb, Part 2

Today’s blog post picks up where we left off in our three-part series of rules of thumb (ROTs) that apply generally to SQL development regardless of the underlying DBMS. These are the general guiding...

(Read more)
 

August 24, 2016


Dave Beulke

3 Consideration for Enjoying the Data Lake

With all the outside activities with friends and family, summer vacations are always wonderful. Being outside at the lake, enjoying the warm weather and cooling off in the water, is a wonderfully relaxing time. This is the safe, content image that everyone thinks about when discussing the new...

(Read more)
