
Planet DB2 is an aggregator of blogs about the IBM DB2 database server. We combine and republish posts by bloggers around the world. Email us to have your blog included.




July 19, 2017

Jack Vamvas

How to block database connections

During a recent consultation there was a requirement to block DB2 LUW database connections. The purpose was to test monitoring alerts and assess the correct response when a database connection could not be established.

The requirement was to keep the DB2 instance active and all other databases online.

An easy way to block database connections is to use the MAXAPPLS database configuration parameter. This parameter specifies the maximum number of concurrent applications (both local and remote) that can be connected to a database.

The trick is to set the DB2 LUW database MAXAPPLS parameter to 1 and ensure that you hold the single database connection yourself. Any other connection attempt will then fail, whether remote or local.

You may run into issues when setting MAXAPPLS to 1: MAXAPPLS is closely associated with MAXLOCKS, and whether MAXLOCKS also needs adjusting depends on how the database is currently configured.

db2 update db cfg for MTTESTDB using MAXAPPLS 1




July 18, 2017

Data and Technology

What the Null? Handling Missing Information in Database Systems

In relational database systems, a null represents missing or unknown information at the column level. A null is not the same as 0 (zero) or blank. Null means no entry has been made for the column and...

(Read more)


A Happy 10 Years

Ten years ago this week--July 16, 2007, to be specific--this blog published its first post, on Universal Table Space. IBM Systems Magazine had asked me if I would consider writing a blog about Db2 for z/OS. I'm not exaggerating when I say I felt like the luckiest guy in the world.

July 17, 2017

Craig Mullins

The IBM z14: There's a New Mainframe Coming!

Today, July 17, 2017, IBM announced the next iteration of its stalwart mainframe hardware, the IBM z14. The news comes a little more than two years after the previous version, the IBM z13. The hardware will be generally available later this year. The big news is that IBM is delivering pervasive encryption as part of the IBM z14. With pervasive encryption you can encrypt and protect all...

(Read more)

Henrik Loeser

Best practices for lively chatbots

TJBot as a lively chatbot. More and more chatbots are being developed, and there are good reasons for it. Not all chatbot projects succeed; missing user acceptance is often cited. The dialog system...

(Read more)

July 13, 2017

Henrik Loeser

Chatbots: Testing Contexts

Watson Conversation Tool in action. Some weeks ago I blogged about a tool I wrote for the IBM Watson Conversation Service (WCS). It allows you to manage your conversation workspaces from the command...

(Read more)

July 12, 2017

Craig Mullins

The DB2 12 for z/OS Blog Series - Part 13: DRDA Fast Load

Have you ever had a situation where you needed to load data into a DB2 table, but the file with the data was not on the mainframe? So you had to FTP that data to the mainframe and then load it. Well, with DB2 12 for z/OS you get a new capability to load the data without moving the file. The DRDA fast load feature provides you with an efficient way to load data to DB2 for z/OS...

(Read more)

July 11, 2017


Db2: A Small but Significant Change

If you scan IBM websites, or you're on Twitter looking for news about Db2, or you're following those who have a hand in its direction, you may have noticed something: DB2 is now Db2. The b is lower-case.

July 10, 2017

Kim May

Tealeaf Fatigue

I have been reviewing our first half of 2017 and trying to get a handle on where our DB2 world is headed in the second half of this confusing year.  Frankly, I’m baffled!  While the year...

(Read more)

Leons Petrazickis

Command line client for Sentry (Bash)

Sentry is a great error aggregation service. We use it for every service we deploy at work. A command line client for Sentry lets you log Bash errors. The post Command line client for Sentry (Bash)...

(Read more)

Triton Consulting

Three is the Magic Number – Triton Trio to Present at IDUG EMEA 2017

We are delighted to announce that three members of the Triton Consulting team will be heading to Lisbon this year to present at Europe’s biggest and most prestigious DB2 technical conference, IDUG EMEA*. This year’s conference will take place at … Continue reading →

(Read more)



July 05, 2017

Triton Consulting

Black Friday – surviving the inevitable peaks

IT teams working for large retailers, logistics and fulfilment organisations are under immense pressure in the lead up to Black Friday and Cyber Monday with record numbers of online purchases taking place last year. According to IMRG UK online spending … Continue reading →

(Read more)

July 03, 2017

Henrik Loeser

IBM Marketing and DB2, Db2 and dashDB

IBM Cloud with Bluemix: DB2, Db2, dashDB. This week I am going to acknowledge that I have been with IBM for 16 years. Looking back, everything might seem brighter and better than it was. However, I...

(Read more)

June 30, 2017

Robert Catterall

DB2 12 for z/OS SQL Enhancements: Piece-Wise DELETE

DB2 12 for z/OS, which became generally available in October of 2016, delivered a number of key enhancements pertaining to SQL statements. Over the course of a few blog entries, I'll describe some of my favorites among these enhancements. This first post in the series will cover piece-wise DELETE. First, the backdrop: suppose you have a need to remove a large number of rows from a table, using SQL (I say, "using SQL," because a utility-based alternative, REORG with DISCARD, is not always...

(Read more)

June 29, 2017

Craig Mullins

The DB2 12 for z/OS Blog Series - Part 12: New Built-in Functions

As with most new releases of DB2 for z/OS, at least lately, there are several new built-in functions (or BIFs) that have been added. DB2's BIFs are used to translate data from one form or state to another. They can be used to overcome data format, integrity and transformation issues when you are reading data from DB2 tables.  So what new things can we do with functions in DB2 12 for...

(Read more)

June 27, 2017

Big Data University

Cognitive Class Uses Machine Learning to Help SETI Find Little Green Men

This month the Cognitive Class team was at Galvanize San Francisco with Adam Cox and Patrick Titzler, running a code challenge that will help SETI (the Search for Extraterrestrial Intelligence) look for aliens.

The goal of the event was to help the SETI Institute develop new signal classification algorithms and models that can aid in the radio signal detection efforts at the SETI Institute’s Allen Telescope Array (ATA) in Northern California.

Our chief data scientist Saeed Aghabozorgi developed several Jupyter notebooks, including one that transforms the signals into spectrograms using a Spark cluster. In addition, Saeed provided several TensorFlow notebooks, one of which used a convolutional neural network [1] to classify the spectrograms. Check out the GitHub page to see all the scripts from Saeed Aghabozorgi, Adam Cox, and Patrick Titzler.

Our developer Daniel Rudnitski developed a scoreboard that evaluates everyone’s algorithms. The scoreboard works by comparing the predicted results and the true labels in a holdout set for which the participants did not know the labels (shown in Figure 1). I gave a tutorial on Neural Networks and Tensorflow, helped the participants debug their code, and enjoyed the free food. 😄😁😀😁

Figure 1: Cognitive Class’ leaderboard used to assess results of Hackathons


SETI searches for E.T. by scanning star systems with known exoplanets. The idea is that nature does not produce narrow-band carrier waves such as sine waves, so the system looks for them as a signature of technology. The detection system sometimes triggers on signals that are not narrow-band signals. The goal of the event was to classify these signals accurately in real time, allowing the signal detection system to make better informed observational decisions. [2]

We transformed the observed time-series radio signals into a spectrogram. A spectrogram is a 2-dimensional chart that represents how the power of the signal is distributed over time and frequency [3]. An example is shown in Figure 2. The top chart is a spectrogram in which the bright green represents higher intensity values, and the blue represents low intensity values. The bottom chart contains two amplitude modulated signals labeled A and B. The two brightly colored patches in the spectrogram directly above the signal represent the distribution of the signal energy in time and frequency. The horizontal axis represents time, while the vertical axis represents frequency. If we examine signal A we see that it oscillates at a much lower rate than signal B, meaning that it has a much lower frequency. This is exhibited by a much lower location of the energy on the vertical axis of the Spectrogram.
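The transformation described above can be sketched in a few lines of pure Python. This is a hypothetical toy, not the Spark pipeline used at the event: a naive DFT applied to successive frames of the signal yields the time/frequency power grid, and a pure tone's energy lands in a single frequency column, exactly the bright patch described for Figure 2.

```python
import cmath
import math

def dft_power(frame):
    """Power of each frequency bin of one frame (naive DFT, O(n^2))."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2
            for k in range(n // 2)]

def spectrogram(signal, frame_len=64):
    """One row of bin powers per non-overlapping frame (a time slice)."""
    return [dft_power(signal[i:i + frame_len])
            for i in range(0, len(signal) - frame_len + 1, frame_len)]

# A pure tone completing 5 cycles per frame: all energy should land in bin 5.
sig = [math.sin(2 * math.pi * 5 * t / 64) for t in range(256)]
spec = spectrogram(sig)
peak_bins = [row.index(max(row)) for row in spec]
```

A real pipeline would use an FFT and overlapping windows; the point here is only the shape of the output: rows are time, columns are frequency.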


Fig 2: Spectrogram (top) of two amplitude modulated Gaussian signals (bottom)


The 2D representation provided by the spectrogram allows us to turn the problem into a visual recognition problem, letting us apply methods such as convolutional neural networks. Individuals without expertise in designing and implementing deep neural networks could focus on the signal processing problem and let the IBM Watson Visual Recognition tool handle the complex problem of image classification. The process is demonstrated in Figure 3 with a chirp signal (a signal whose frequency increases or decreases over time). After the spectrogram is computed, several convolutional layers are applied to extract features from the image; the output is then flattened and fed into a fully connected neural network. To learn more about deep learning check out our Deep Learning 101 and Deep Learning with TensorFlow courses.
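The convolutional-layer step can be illustrated with a hypothetical pure-Python toy (not the Watson or TensorFlow code from the event): a vertical-edge kernel slid over a small spectrogram-like image responds strongly on either side of a bright vertical stripe, which is the kind of feature extraction the layers perform.

```python
def conv2d(image, kernel):
    """Valid-mode 2-D convolution (really cross-correlation, as in CNNs)."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(image[r + i][c + j] * kernel[i][j]
                 for i in range(kh) for j in range(kw))
             for c in range(len(image[0]) - kw + 1)]
            for r in range(len(image) - kh + 1)]

# A 5x5 "spectrogram" with a bright vertical stripe in column 2...
img = [[1.0 if c == 2 else 0.0 for c in range(5)] for _ in range(5)]
# ...and a Sobel-style vertical-edge kernel.
kernel = [[-1, 0, 1],
          [-2, 0, 2],
          [-1, 0, 1]]
feat = conv2d(img, kernel)   # strong +/- responses flank the stripe
```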

Figure 3: Example architecture used in the event. (Source: Wikipedia)


To speed up the process of developing and testing these neural networks, participants were given access to GPUs on IBM PowerAI Deep Learning. PowerAI speeds up deep learning and AI using GPUs. Built on IBM's Power Systems, PowerAI is a scalable software platform that accelerates deep learning and AI with blazing performance for individual users or enterprises. The PowerAI platform supports popular machine learning libraries and was provided through the public cloud provider NIMBIX. Participants used libraries such as Caffe, Theano, Torch, and TensorFlow. In addition, given the vast amounts of data involved in signal processing, participants were also given access to an IBM Apache Spark Enterprise cluster. For example, the spectrograms were calculated on several nodes, as shown in Figure 4.

Figure 4: Example architecture used in the event.


The top team was Magic AI. This team used a wide neural net, a network that has fewer layers than a deep network but more neurons per layer. According to Jerry Zhang, a graduate researcher at the UC Berkeley Radio Astronomy Lab, the spectrograms exhibited less complex shapes than a standard image set like the Modified National Institute of Standards and Technology database (MNIST); as a result, fewer convolutional layers were required to encode features like edges. We can see this by examining Figure 5: the left image shows 5 spectrograms and the right image shows 5 images from MNIST. The spectrograms are colored using the standard gray scale, where white represents the largest values and black the lowest. The edges of the spectrograms are predominantly vertical and straight, while the digits exhibit horizontal lines, parallel lines, arches, and circles.


Figure 5: Spectrograms (left) and 5 images from MNIST (right)


The Best Signal Processing Solution was by the Benders. They applied a method for detecting earthquakes to improve signal processing. Arun Kumar Ramamoorthy, one of the members, also made an interesting discovery while plotting out some of the data points. Check out their blog post here.

The prize for Best Non-Neural Network/Watson Solution went to team Explorers, and Most Interesting went to team Signy McSigFace. The trophies are shown in Figure 6.

Figure 6: Custom trophies designed for winners of this hackathon.


The weekend was quite interesting, with talks from Dr. Jill Tarter, Dr. Gerry Harp, and Jon Richards about SETI, radio data processing, and operations. They were also available to answer questions from participants. Kyle Buckingham gave a talk about the radio telescope he built in his backyard! Everyone who participated is shown in the image below.

Figure 7: SETI Hackathon participants


Check out the event GitHub page:

For more information on SETI, please check out:

To donate to SETI:

Would you like to make your own predictions? Learn about Deep Learning with our Deep Learning 101 and Deep Learning with TensorFlow courses.


[1] Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. “ImageNet classification with deep convolutional neural networks.” Advances in Neural Information Processing Systems. 2012.

[2] Aghabozorgi, Saeed, Adam Cox, and Patrick Titzler. ML4SETI.

[3] Cohen, Leon. “Time-frequency distributions-a review.” Proceedings of the IEEE 77.7 (1989): 941-981.

The post Cognitive Class Uses Machine Learning to Help SETI Find Little Green Men appeared first on Cognitive Class.


Subsystem Parameter Changes in DB2 12

Following up on last week's post about new and changed DB2 11 for z/OS subsystem parameters, here's an update about DB2 12 zparms. It's important to note that, starting with DB2 12, zparms become available at the function level at which they're introduced (e.g., function level V12R1M500, V12R1M501, etc.).

June 26, 2017

Henrik Loeser

More Tips and Tricks for Building Chatbots

Chatbot Architecture. You built your first chatbot and it is working OK. Did you know that you can make chatbots even more interactive? That you can access conversation metadata and application...

(Read more)

Craig Mullins

BMC and CA: Impending Nuptials?

Have you heard the one about BMC Software acquiring CA, Inc.? At first glance, to those of us who have been in the business for a while, that sentence may look odd. Those two have been slugging it out in the mainframe software world for decades now. And to be fair, not just the mainframe world, but also in the distributed world. But the chatter is out there that this could happen. Read...

(Read more)

June 23, 2017

Data and Technology

Who Owns Data?

Who owns data? This is a complex question that can’t be answered quickly or easily. It requires some thought and you have to break things down in order to even attempt to answer it. First of...

(Read more)

DB2Night Replays

The DB2Night Show #195: Breaking News from IBM Canada Db2 Lab!

Follow @Roger_E_Sanders Follow @jrockwood Special Guests: Roger Sanders and Jessica Rockwood, IBM Breaking Db2 (LUW) News from the IBM Canada Lab 100% of our audience learned something! Shocker! "DB2 for LUW" is now just "Db2". Mainframe folks will still have to type out "Db2 for z/OS". Some IBM products, or solution offerings, have been renamed. What's new? Well, V11.1.2.2 is new, and there are plenty of goodies and enhancements in...

(Read more)

June 22, 2017


June 21, 2017

Dave Beulke

DB2 LUW Security: More Ways to Secure your System

Over the years, I have developed many of my security practices and used many of the government security guidelines to perform security audits in client environments. The combination of the government guidelines and my experience helps me uncover and secure the desired client’s systems and...

(Read more)

June 20, 2017

Triton Consulting

Time is running out! Plan your move

For organisations currently running DB2 LUW versions 9.7 or 10.1 time is running out. IBM will cease to support customers running either version from 30th September 2017, just 3 months away. For those organisations that haven’t begun their migration planning, … Continue reading →

(Read more)


New and Changed Subsystem Parameters in DB2 11

Even though DB2 12 is now available, a fair number of customers are now migrating, or have recently migrated, from DB2 10 to DB2 11. I know this because I've been getting queries about subsystem parameter (zparm) default changes in DB2 11. So I'll share with you what I've been telling them: IBM, thankfully, documents this information in the Knowledge Center's What’s New guide.
Jack Vamvas

Cannot drop database SQL1035N The operation failed because the specified database cannot be connected to in the mode requested. SQLSTATE=57019

I cannot drop the database and am getting the error message SQL1035N: The operation failed because the specified database cannot be connected to in the mode requested. SQLSTATE=57019

What is the safest way to work around this problem?

Answer: The DB2 error message appears because the database is not in a valid state for the DROP DATABASE command you’re attempting to execute.

From the IBM DB2 LUW documentation: “The database must not be in use; all users must be disconnected from the database before the database can be dropped.”

Before you deactivate the database, speak with the application owner and negotiate a timeslot.


To fulfil the task, first run db2 force applications all to disconnect existing users, then deactivate the database, and finally run the DROP DATABASE command.


The DROP DATABASE command deletes all data and log files, so make sure you first have a valid backup, just in case.



June 18, 2017

ChannelDB2 Videos

DB2 HADR Part 2 - How Log Chain Id change in Primary is replayed to Standby on the fly


Happy Learning & Sharing

June 16, 2017

DB2Night Replays

The DB2Night Show #Z79: zIIP Experiences: Exploitation Not Abuse

Presented by: Adrian Burke, DB2 for z/OS SWAT team. "The DB2Night Show #Z79: zIIP Experiences: Exploitation Not Abuse" Replays available in WMV and M4V formats! 100% of our studio audience learned something! Adrian gave many tips for the proper use and exploitation of zIIP processors. Watch the replay...

(Read more)

June 15, 2017

Modern SQL

What's New in SQL:2016

In December 2016, ISO released a new version of the international SQL standard (ISO/IEC 9075:2016). It supersedes the previous version from 2011.

This article is a brief overview of the new features being introduced into the SQL language. Strictly speaking, this article covers the additions to part 2 of the standard (SQL/Foundation), i.e. the most commonly used part.

This article also shows the availability of these features among six major databases. Note that the respective figures—shown below—only reflect whether or not the databases support the features in the way described by the standard. For example, an X in the JSON row does not mean the database has no JSON support—it simply means the database doesn't support the JSON features described by the SQL standard. As a matter of fact, all tested databases support JSON in some way—but not necessarily in the way described by the standard.

Overview of Availability of new SQL:2016 features

Before going through all the shiny new features, let's look at some trivia: part 2 of the SQL:2016 standard has 1732 pages—that's 260 pages more (~18%) than the 2011 edition. It introduces 44 new optional features (+14%). Let's take a look at them…

Row Pattern Recognition

Row pattern recognition captures groups of rows that follow a pattern. Rows in matched groups can be filtered, grouped, and aggregated. The pattern itself is described with a simple regular expression syntax.

The main use of row pattern recognition is to check time series for patterns. However, the match_recognize clause combines aspects of the where, group by, having and over clauses (window functions) so it is also useful in many other cases.

I have given a presentation about row pattern recognition. It discusses several examples in two implementation variants: with and without the new match_recognize clause. The examples cover some typical use cases, and also some atypical use cases for row pattern recognition:

  • Consecutive events: identifying sessions in a web-log; tolerating gaps in sequences

  • Top-N per group (might be faster than window functions!)

  • Time intervals: finding gaps (e.g. for scheduling)

  • Time intervals: closing gaps (creating new rows)
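The first use case, identifying sessions in a web log, can be emulated without match_recognize. Here is a hypothetical pure-Python sketch in which a new session starts whenever the gap between consecutive hits exceeds 30 minutes (an arbitrary threshold chosen for illustration):

```python
def sessionize(timestamps, gap=30 * 60):
    """Assign a session id to each hit.

    timestamps: sorted UNIX epoch seconds; a gap larger than `gap`
    between consecutive hits starts a new session.
    """
    sessions, sid = [], 0
    for i, t in enumerate(timestamps):
        if i > 0 and t - timestamps[i - 1] > gap:
            sid += 1
        sessions.append(sid)
    return sessions

hits = [0, 60, 120, 4000, 4100, 9000]   # seconds
```

In SQL, the same logic is a match_recognize pattern over rows ordered by timestamp; the Python version makes the "consecutive events with tolerated gaps" idea concrete.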

If you understand German, you can watch the video recording here.

Readers interested in more details may refer to the technical report “Row Pattern Recognition in SQL” (ISO/IEC TR 19075-5:2016) available for free at ISO.

Availability of “Row Pattern Recognition”


SQL/JSON

SQL has been supporting arrays, multisets (nested tables) and composite types for 18 years. In 2003, SQL was given a native XML type. Now it has JSON support.

The following sections briefly describe the key parts of the new standard JSON functionality. A more detailed discussion is available in the technical report by ISO.

No Native JSON Type

Even though XML and JSON are somewhat similar—they are documents with nested structures—their integration into SQL is quite different. The most striking difference is that the standard does not define a native JSON type like it does for XML. Instead, the standard uses strings to store JSON data.

Note that this does not prevent vendors from providing a JSON type. The standard just defines functions that interpret strings as JSON data. To claim conformity to the standard's JSON features, a database must support these string-based functions. Vendors are still free to add a JSON type and related functions. They are even allowed to provide the standard's JSON functions for their native JSON type—a very reasonable option in my opinion.

The only annoying consequence of using strings for JSON data is the error handling. If there were a JSON type, parsing errors could only happen when casting a string to the JSON type. Instead, the standard defines an on error clause for all functions that interpret strings as JSON data.

json_value(json, '$.id' null on error)

The default is null on error. The string '$.id' is a JSON path (described below).

Note that is still possible to declare table columns that only accept valid JSON data:

   jcol CLOB CHECK (jcol IS JSON)

The check constraint uses the new is json predicate to test whether the string contains valid JSON data. The is json predicate can even test for a specific JSON type and is thus more flexible than a single JSON type:

<expr> is [not] json [ value | array | object | scalar ]
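None of the tested databases accepts the standard predicate verbatim, but the same kind of constraint is easy to try out. For example, SQLite (which has no is json predicate) can enforce well-formedness with its json_valid() function. A sketch via Python's built-in sqlite3 module, assuming the underlying SQLite build includes the JSON functions:

```python
# A check constraint in the spirit of "jcol CLOB CHECK (jcol IS JSON)",
# using SQLite's json_valid() as a stand-in for the standard's IS JSON
# predicate (json_valid checks well-formedness only; it has no
# value/array/object/scalar refinements).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (jcol TEXT CHECK (json_valid(jcol)))")
con.execute("INSERT INTO t VALUES ('{\"id\": 1}')")   # valid JSON: accepted
try:
    con.execute("INSERT INTO t VALUES ('not json')")  # malformed: rejected
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```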

Availability of SQL/JSON

JSON Formats

The SQL standard allows database vendors to support different JSON formats. The one described by RFC 7159 is the mandatory default.

Functions that generate or parse JSON data accept the optional format clause to specify which format to use (format json is default). The alternatives offered might have a very different syntax (like BSON).

Creating JSON Data

The standard defines the following functions to create JSON strings:

json_object([key] <expr> value <expression> [,…])

Creates a JSON object. The keywords key (optional) and value introduce the attribute name and value:

json_object( key 'id'   value 1234
           ,     'name' value 'Markus')

The standard also accepts a colon (:) between key and value:

json_object( 'id': 1234
           , 'name': 'Markus')

A comma is only used to list multiple key/value pairs. This is a noteworthy difference to json_object (MySQL, SQLite) and json_build_object (PostgreSQL): they use the comma for both separating keys from values and listing multiple key/value pairs.
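For a concrete, runnable look at the comma form: SQLite's json_object() follows exactly the convention just described. A sketch using Python's built-in sqlite3 module, assuming the SQLite build includes the JSON functions:

```python
# SQLite's json_object() alternates keys and values, all separated by
# commas -- the style the text contrasts with the standard's key/value
# and colon syntaxes (which SQLite does not accept).
import json
import sqlite3

con = sqlite3.connect(":memory:")
row = con.execute(
    "SELECT json_object('id', 1234, 'name', 'Markus')").fetchone()
obj = json.loads(row[0])   # parse the returned JSON text
```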


json_array(<expr> [,…])

Creates a JSON array from the values provided.


json_array(<query>)

Creates a JSON array from the values returned by <query>. The query must return exactly one column.

json_arrayagg(<expr> [order by …])

Creates a JSON array from the values of a group (like array_agg):

SELECT json_arrayagg(col [order by seq])
  FROM …

Note that the optional order by clause is inside the parentheses—not in a within group clause as for listagg.
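SQLite's closest relative to json_arrayagg is json_group_array() (without the standard's order by support in older builds). A sketch via Python's sqlite3 module, assuming the JSON functions are compiled in:

```python
# json_group_array() aggregates one column of each group into a JSON array,
# analogous to the standard's json_arrayagg.
import json
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE s (grp TEXT, col INTEGER)")
con.executemany("INSERT INTO s VALUES (?, ?)",
                [("a", 1), ("a", 2), ("b", 3)])
rows = con.execute(
    "SELECT grp, json_group_array(col) FROM s GROUP BY grp ORDER BY grp"
).fetchall()
arrays = {g: json.loads(a) for g, a in rows}
```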

json_objectagg([key] <expr> value <expression>)

Creates a JSON object from the key/value pairs of a group. It uses the same syntax to denote the key and value as json_object:

SELECT json_objectagg(k value v)
  FROM …

An order by clause is not allowed because JSON objects are unordered key/value pairs.

Availability of SQL/JSON

Accessing JSON Items

The following functions use the so-called SQL/JSON path language (described below) to access parts of a JSON document. They interpret strings as JSON data and thus accept the on error clause.

json_exists(<json>, <path>)

Tests whether a specific path exists in a JSON document. It evaluates to true, false or unknown and can be used in the where clause directly:

WHERE json_exists(jcol, '$.name')

json_value(<json>, <path> [returning <type>])

Extracts a scalar JSON value—everything except object and array—and returns it as a native SQL type. The optional returning clause performs a typecast. Without a returning clause, json_value returns a string.
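As a runnable stand-in, SQLite's json_extract() covers the scalar-extraction part of json_value, with an ordinary CAST approximating the returning clause. A sketch, assuming SQLite's JSON functions are available:

```python
# json_extract() pulls a scalar out of a JSON document; the explicit CAST
# plays the role of json_value's RETURNING clause.
import sqlite3

con = sqlite3.connect(":memory:")
doc = '{"id": "1234", "name": "Markus"}'
(idval,) = con.execute(
    "SELECT CAST(json_extract(?, '$.id') AS INTEGER)", (doc,)).fetchone()
```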

json_query(<json>, <path> …)

Extracts a part of a JSON document and returns it as a JSON string. The main differences to json_value are: (1) it can extract any JSON type; (2) its result is always JSON text; (3) it can extract multiple elements from a JSON document.

Due to these differences, there are two special cases:

Multiple hits

By default, json_query raises an exception if the JSON path matches more than one element (like json_value). The optional with [ conditional | unconditional ] [array] wrapper clause wraps the result into an array and returns all hits.

JSON path denotes a single string

Json_query generally returns JSON data. If the JSON path matches a single JSON string, the result is still enclosed in double quotes with JSON special characters escaped. The omit quotes [on scalar string] clause returns the raw data instead (like json_value).

Finally, there is the json_table function: it is basically a table function—i.e. it is used in the from clause.

json_table(<json>, <path> columns …)

Transforms JSON data into a table. Json_table does not introduce any new functionality but is often simpler (and faster) than using multiple json_value calls.

Please refer to ISO’s technical report (paragraph 5.3.4) for the full syntax. As a teaser, consider the following example. It transforms a JSON document (an array containing two objects) into a table:

[{"id": 1, "name": "Marvin"},
 {"id": 2, "name": "Arthur"}]

SELECT jt.*
  FROM t
     , JSON_TABLE
       ( jcol
       , '$[*]'
         COLUMNS (id   NUMERIC      PATH '$.id',
                  name VARCHAR(255) PATH '$.name')
       ) jt

The json_table function produces one row for each element matched by the JSON path $[*]. The columns clause declares the names and types of the result columns and the JSON paths to the actual values (relative to the main JSON path).
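SQLite offers a table-valued function in the same spirit, json_each(): one row per array element, from which per-row columns can be extracted. A sketch, assuming SQLite's JSON functions are available:

```python
# json_each() generates one row per element of the JSON array; json_extract
# then plays the role of json_table's COLUMNS ... PATH clauses.
import sqlite3

con = sqlite3.connect(":memory:")
doc = '[{"id": 1, "name": "Marvin"}, {"id": 2, "name": "Arthur"}]'
rows = con.execute(
    """SELECT json_extract(value, '$.id'),
              json_extract(value, '$.name')
         FROM json_each(?)""", (doc,)).fetchall()
```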


Availability of SQL/JSON


SQL/JSON Path Language

The SQL standard specifies a path language for JSON. It “adopts many features of ECMAscript, though it is neither a subset nor a superset of ECMAscript.”

In the JSON path language, the dollar sign ($) represents the current context element, the period (.) an object member, and the brackets ([]) an array element. The following examples illustrate this:


$.name

Denotes the value of the name attribute of the current JSON object.


$[0]

Denotes the first element of the current JSON array.


$.events[last]

Denotes the last element of the array stored in the attribute events of the current JSON object.

$.events[0 to 4, last]

Denotes the first five and the last array element of the array stored in the attribute events of the current JSON object.


$.*

Denotes the values of all attributes of the current JSON object.

JSON path supports filter expressions in the form ?(<expression>). In filter expressions, the at sign (@) denotes the current context.

$.* ?(@.type()=="number")

Denotes all attribute values of type number.

$.events[*] ?(exists(@.name))

Denotes all array elements of the attribute events that contain a JSON object with the attribute name.

$?(@.name starts with "Lorem")

Denotes the full JSON document if it is a JSON object that has an attribute name that contains a string that starts with Lorem.

Other functions available in filter expressions include size() (array size), ceiling(), floor(), abs(), and datetime() (for parsing, see below).

The SQL/JSON path language defines two modes: lax, which is the default, and strict. The mode can be specified by adding a prefix to the JSON path expression (e.g. 'strict $.name'). The strict mode triggers error handling (subject to the on error clause) for all errors. This includes accessing non-existing object members and using an array accessor on a scalar value or JSON object.

The lax mode suppresses these errors. If required, it unwraps arrays or wraps scalar values so that the document structure and JSON path expression fit to each other. The lax mode allows working with variable document structures without adjusting the JSON path to each document.

Availability of SQL/JSON Path

What's missing in SQL/JSON

The SQL standard does not provide functions to update parts of JSON documents (like json_set in MySQL, PostgreSQL, and SQLite or json_modify in SQL Server).

On my Own Behalf

If you like this article, you might also like my book SQL Performance Explained or my training.

Date and Time Formatting and Parsing

Formatting dates and times (temporal data) is one of the gaps in the SQL standard that was filled in pretty much every database—of course, every vendor filled it differently. SQL:2016 finally added this functionality to the standard.

The SQL standard uses a format template such as 'YYYY-MM-DD' to describe how to format or parse temporal data. The following table summarizes the available mnemonics—notice the similarities with the “datetime format models” used by the Oracle database.

Mnemonic         Meaning            extract field
RRRR | RR        Rounded year
DD               Day of month       DAY
DDD              Day of year
HH | HH12        12 hour
HH24             24 hour            HOUR
SS               Second of minute   SECOND
SSSSS            Second of day
FF1 | … | FF9    Fraction           (in SECOND)
A.M. | P.M.      AM or PM

Remember that the extract expression (good old SQL-92) can access the individual components of temporal types. I've added the respective extract field names for reference.

Format templates can be used in two ways: (1) in the JSON path method datetime (see above); (2) in a cast specification:

CAST(<datetime> AS <char string type> [FORMAT <template>])
CAST(<char string> AS <datetime type> [FORMAT <template>])
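There is no direct Python analog to CAST ... FORMAT, but a hypothetical translator from the standard's mnemonics to strftime/strptime codes shows how such templates work (the mnemonic-to-code mapping below covers only a few entries from the table above):

```python
# Translate a SQL-style format template into a strptime/strftime template.
from datetime import datetime

SQL_TO_STRFTIME = {"YYYY": "%Y", "MM": "%m", "DD": "%d"}

def to_strftime(sql_fmt):
    # Longer mnemonics first, so e.g. a hypothetical HH24 entry would not
    # be partially consumed by a shorter one.
    for sql, py in sorted(SQL_TO_STRFTIME.items(),
                          key=lambda kv: -len(kv[0])):
        sql_fmt = sql_fmt.replace(sql, py)
    return sql_fmt

fmt = to_strftime("YYYY-MM-DD")
parsed = datetime.strptime("2016-12-14", fmt)   # the parsing direction
shown = parsed.strftime(fmt)                    # the formatting direction
```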

Availability of CAST FORMAT


LISTAGG

Listagg is a new ordered set function that resembles the group_concat and string_agg functions offered by some databases. It transforms values from a group of rows into a delimited string.

The minimal syntax is:

LISTAGG(<expr>, <separator>) WITHIN GROUP(ORDER BY …)

Listagg accepts the optional on overflow clause to define the behavior if the result becomes too long:

LISTAGG(<expr>, <separator> ON OVERFLOW …)

The default is on overflow error. The on overflow truncate clause prevents the overflow by only concatenating as many elements as the result type can accommodate. Furthermore, the on overflow truncate clause allows you to specify how to terminate the result:


LISTAGG(<expr>, <separator> ON OVERFLOW TRUNCATE [<filler>] WITH | WITHOUT COUNT)

The optional <filler> defaults to three periods (...) and is added as the last element if truncation happens. If with count is specified, the number of omitted values is put in parentheses and appended to the result.
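The truncation behavior can be sketched in Python. This is a simplified emulation: unlike the standard, it does not guarantee that the filler and count themselves fit within the limit.

```python
# Emulate ON OVERFLOW TRUNCATE ... WITH COUNT: keep whole elements while
# the result fits, then append the filler and the omitted-value count.
def listagg_truncate(values, sep, max_len, filler="...", with_count=True):
    out, kept = "", 0
    for v in values:
        candidate = out + (sep if out else "") + v
        if len(candidate) > max_len:
            break
        out, kept = candidate, kept + 1
    omitted = len(values) - kept
    if omitted:
        out += sep + filler
        if with_count:
            out += f"({omitted})"
    return out

result = listagg_truncate(["ab", "cd", "ef", "gh"], ",", 8)
```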

I have written a more detailed article about listagg. Please keep the type-safe alternatives in mind, which are a better choice in many cases.

Availability of LISTAGG

Trigonometric and Logarithmic Functions

SQL:2016 introduces trigonometric functions (sin, cos, tan, asin, acos, atan, sinh, cosh, tanh), a general logarithm function (log(<base>, <value>)) and a shortcut for the logarithm with base 10 (log10(<value>)).

Note that the natural logarithm ln(<value>) was introduced with SQL:2003.
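A minimal sketch of the new functions (some databases require a from clause, omitted here):

```sql
SELECT SIN(0),        -- 0
       COS(0),        -- 1
       LOG(2, 8),     -- 3 (logarithm of 8 to base 2)
       LOG10(1000),   -- 3
       LN(1)          -- 0 (natural logarithm, SQL:2003)
```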

Availability of New Mathematical Functions of SQL:2016

Polymorphic Table Functions

SQL table functions—standardized with SQL:2003—are functions that return tables. Table functions have to specify the names and types of the columns they return—the so-called row-type—at the time of creation:

CREATE FUNCTION <name> (<parameters>)
       RETURNS TABLE (c1 <type 1>, c2 <type 2>)

This function returns a table with two columns: c1 and c2 with the respective types.

Once declared, table functions can be used in the from and join clauses similarly to regular tables.9
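For example, the standard TABLE() syntax puts a table function invocation into the from clause (series_between is a hypothetical function declared with the two-column row-type above):

```sql
SELECT t.c1, t.c2
  FROM TABLE(series_between(1, 10)) AS t
```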

Prominent examples of table functions are unnest, which turns an array into a table, and PostgreSQL's generate_series.

SQL:2016 introduces polymorphic table functions (PTF) that don't need to specify the result type upfront. Instead, they can provide a describe component procedure that determines the return type at run time. Neither the author of the PTF nor the user of the PTF need to declare the returned columns in advance.

PTFs as described by SQL:2016 are not yet available in any tested database.10 Interested readers may refer to the free technical report “Polymorphic table functions in SQL” released by ISO. The following are some of the examples discussed in the report:

  • CSVreader, which reads the header line of a CSV file to determine the number and names of the return columns

  • Pivot (actually unpivot), which turns column groups into rows (example: phonetype, phonenumber)

  • TopNplus, which passes through N rows per partition and one extra row with the totals of the remaining rows

Personally, I think many of the examples from the technical report could be solved using other SQL features.11 Implementing the json_table function as a PTF might have been an interesting example.

Miscellaneous Features

Join … using now accepts as (F404):

A JOIN B USING (…) AS correlation_name
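A hedged sketch with two hypothetical tables t1 and t2: the correlation name names the join result, so the coalesced common column is reachable through it.

```sql
-- "j" names the result of the USING join; the common column "id"
-- can be qualified with it (t1 and t2 are made-up tables)
SELECT j.id
  FROM t1 JOIN t2 USING (id) AS j
```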

There is a new type decfloat[(<precision>)] (T076).

Named arguments in function and procedure invocations (name => value) are no longer limited to call statements (T524).

The default clause is allowed for parameters of functions (T525) and for inout parameters of procedures (T523).
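For illustration, a sketch with a hypothetical function compute_price(amount, currency, discount) whose discount parameter has a declared default:

```sql
-- Named arguments (T524); the omitted "discount" argument falls
-- back to its declared default (T525)
SELECT compute_price(amount => 100.00, currency => 'EUR')
```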

Availability of Miscellaneous SQL:2016 Features

Feature Taxonomy: Obsolete and New Features

For the sake of completeness: part 2 of SQL:2016 introduces no new mandatory features.12

The feature T581, “Regular expression substring function”, has been deprecated (“The functionality is essentially subsumed by Feature F844, ‘SUBSTRING_REGEX’”13). Note that feature T141, “SIMILAR predicate”, was already deprecated in SQL:2011.14

The following table lists the new optional features in ISO/IEC 9075-2:2016 (compared to 2011).

Feature ID  Feature Name
B200        Polymorphic table functions
B201        More than one PTF generic table parameter
B202        PTF Copartitioning
B203        More than one copartition specification
B205        Pass-through columns
B206        PTF descriptor parameters
B207        Cross products of partitionings
B208        PTF component procedure interface
B209        PTF extended names
F404        Range variable for common column names
R010        Row pattern recognition: FROM clause
R020        Row pattern recognition: WINDOW clause
R030        Row pattern recognition: full aggregate support
T076        DECFLOAT data type
T523        Default values for INOUT parameters of SQL-invoked procedures
T524        Named arguments in routine invocations other than a CALL statement
T525        Default values for parameters of SQL-invoked functions
T622        Trigonometric functions
T623        General logarithm functions
T624        Common logarithm functions
T811        Basic SQL/JSON constructor functions
T821        Basic SQL/JSON query operators
T824        JSON_TABLE: specific PLAN clause
T826        General value expression in ON ERROR or ON EMPTY clauses
T827        JSON_TABLE: sibling NESTED COLUMNS clauses
T830        Enforcing unique keys in SQL/JSON constructor functions
T831        SQL/JSON path language: strict mode
T832        SQL/JSON path language: item method
T833        SQL/JSON path language: multiple subscripts
T834        SQL/JSON path language: wildcard member accessor
T835        SQL/JSON path language: filter expressions
T836        SQL/JSON path language: starts with predicate
T837        SQL/JSON path language: regex_like predicate
T839        Formatted cast of datetimes to/from character strings

“What's New in SQL:2016” by Markus Winand was originally published at modern SQL.
