 


Planet DB2 is an aggregator of blogs about the IBM DB2 database server. We combine and republish posts by bloggers around the world. Email us to have your blog included.

 

June 23, 2017


Data and Technology

Who Owns Data?

Who owns data? This is a complex question that can’t be answered quickly or easily. It requires some thought and you have to break things down in order to even attempt to answer it. First of...

(Read more)

DB2Night Replays

The DB2Night Show #195: Breaking News from IBM Canada Db2 Lab!

Follow @Roger_E_Sanders Follow @jrockwood Special Guests: Roger Sanders and Jessica Rockwood, IBM Breaking Db2 (LUW) News from the IBM Canada Lab 100% of our audience learned something! Shocker! "DB2 for LUW" is now just "Db2". Mainframe folks will still have to type out "Db2 for z/OS". Some IBM products, or solution offerings, have been renamed. What's new? Well, V11.1.2.2 is new, and there are plenty of goodies and enhancements in...

(Read more)
 

June 21, 2017


Dave Beulke

DB2 LUW Security: More Ways to Secure your System

Over the years, I have developed many of my security practices and used many of the government security guidelines to perform security audits in client environments. The combination of the government guidelines and my experience helps me uncover and secure the desired client’s systems and...

(Read more)
 

June 20, 2017


Triton Consulting

Time is running out! Plan your move

For organisations currently running DB2 LUW version 9.7 or 10.1, time is running out. IBM will cease to support customers running either version from 30th September 2017, just 3 months away. For those organisations that haven’t begun their migration planning, … Continue reading →

(Read more)

DB2utor

New and Changed Subsystem Parameters in DB2 11

Even though DB2 12 is now available, a fair number of customers are now or have recently migrated from DB2 10 to DB2 11. I know this because I've been getting queries about subsystem parameters (zparms) default changes in DB2 11. So I'll share with you what I've been telling them: IBM, thankfully, documents this information in the Knowledge Center's What’s New guide.
Jack Vamvas

Cannot drop database SQL1035N The operation failed because the specified database cannot be connected to in the mode requested. SQLSTATE=57019

I cannot drop the database and am getting the error message SQL1035N The operation failed because the specified database cannot be connected to in the mode requested. SQLSTATE=57019

What is the safest way to work around this problem?

Answer: The DB2 error message appears because the database is not in a valid state for the DROP DATABASE command you’re attempting to run.

From the IBM DB2 LUW documentation: “The database must not be in use; all users must be disconnected from the database before the database can be dropped.”

Before you deactivate the database, speak with the application owner and negotiate a timeslot.

 

To complete the task, deactivate the database and then run the DROP DATABASE command. Before you deactivate the database, run db2 force applications all to disconnect any remaining applications.

 

The DROP DATABASE command deletes all data and log files, so make sure you have a valid backup first, just in case.
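Putting the answer together, a minimal sketch of the full sequence from the DB2 command line (the database alias MYDB and the backup path are placeholders; adjust them to your environment):

-- take a backup first, just in case
db2 backup database MYDB to /db2backup
-- disconnect any remaining applications
db2 force applications all
-- take the database out of use
db2 deactivate database MYDB
-- drop the database (this deletes all data and log files)
db2 drop database MYDB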

 

 

June 16, 2017


DB2Night Replays

The DB2Night Show #Z79: zIIP Experiences: Exploitation Not Abuse

Presented by: Adrian Burke DB2 for z/OS SWAT team "The DB2Night Show #Z79: zIIP Experiences: Exploitation Not Abuse" Replays available in WMV and M4V formats! 100% of our studio audience learned something!Adrian gave many tips for the proper use and exploitation of zIIP processors. Watch the replay...

(Read more)
 

June 15, 2017

Modern SQL

What's New in SQL:2016

In December 2016, ISO released a new version of the international SQL standard (ISO/IEC 9075:2016). It supersedes the previous version from 2011.

This article is a brief overview of the new features being introduced into the SQL language. Strictly speaking, this article covers the additions to part 2 of the standard (SQL/Foundation), i.e. the most commonly used part.

This article also shows the availability of these features among six major databases. Note that the respective figures—shown below—only reflect whether or not the databases support the features in the way described by the standard. For example, an X in the JSON row does not mean the database has no JSON support—it simply means the database doesn't support the JSON features described by the SQL standard. As a matter of fact, all tested databases support JSON in some way—but not necessarily in the way described by the standard.

Overview of Availability of new SQL:2016 features

Before going through all the shiny new features, let's look at some trivia: part 2 of the SQL:2016 standard has 1732 pages—that's 260 pages more (~18%) than the 2011 edition. It introduces 44 new optional features (+14%). Let's take a look at them…

Row Pattern Recognition

Row pattern recognition captures groups of rows that follow a pattern. Rows in matched groups can be filtered, grouped, and aggregated. The pattern itself is described with a simple regular expression syntax.

The main use of row pattern recognition is to check time series for patterns. However, the match_recognize clause combines aspects of the where, group by, having and over clauses (window functions) so it is also useful in many other cases.
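As a rough illustration of the syntax (the stock_prices table with columns symbol, trade_date and price is hypothetical), the following query matches a falling price streak followed by a rising one within each symbol:

SELECT *
  FROM stock_prices
       MATCH_RECOGNIZE (
         PARTITION BY symbol
         ORDER BY trade_date
         MEASURES FIRST(down.trade_date) AS decline_start,
                  LAST(up.trade_date)    AS recovery_end
         ONE ROW PER MATCH
         PATTERN (strt down+ up+)
         DEFINE down AS down.price < PREV(down.price),
                up   AS up.price   > PREV(up.price)
       ) mr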

I have given a presentation about row pattern recognition. It discusses several examples in two implementation variants: with and without the new match_recognize clause. The examples cover some typical use cases, and also some atypical use cases for row pattern recognition:

  • Consecutive events: identifying sessions in a web-log; tolerating gaps in sequences

  • Top-N per group (might be faster than window functions!)

  • Time intervals: finding gaps (e.g. for scheduling)

  • Time intervals: closing gaps (creating new rows)

If you understand German, you can watch the video recording here.

Readers interested in more details may refer to the technical report “Row Pattern Recognition in SQL” (ISO/IEC TR 19075-5:2016) available for free at ISO.

Availability of “Row Pattern Recognition”

JSON

SQL has been supporting arrays, multisets (nested tables) and composite types for 18 years. In 2003, SQL was given a native XML type. Now it has JSON support.

The following sections briefly describe the key parts of the new standard JSON functionality. A more detailed discussion is available in the technical report by ISO.

No Native JSON Type

Even though XML and JSON are somewhat similar—they are documents with nested structures—their integration into SQL is quite different. The most striking difference is that the standard does not define a native JSON type like it does for XML. Instead, the standard uses strings to store JSON data.

Note that this does not prevent vendors from providing a JSON type. The standard just defines functions that interpret strings as JSON data. To claim conformity to the standard's JSON features, a database must support these string-based functions. Vendors are still free to add a JSON type and related functions. They are even allowed to provide the standard's JSON functions for their native JSON type—a very reasonable option in my opinion.

The only annoying consequence of using strings for JSON data is the error handling. If there were a JSON type, parsing errors could only happen when casting a string into the JSON type. Instead, the standard defines an on error clause for all functions that interpret strings as JSON data.

json_value(json, '$.id' null on error)

The default is null on error. The string '$.id' is a JSON path (described below).

Note that it is still possible to declare table columns that only accept valid JSON data:

CREATE TABLE … (
   jcol CLOB CHECK (jcol IS JSON)
)

The check constraint uses the new is json predicate to test whether the string contains valid JSON data or not. The is json predicate can even test for a specific JSON type and is thus more flexible than a single JSON type:

<expr> is [not] json [ value | array | object | scalar ]

Availability of SQL/JSON

JSON Formats

The SQL standard allows database vendors to support different JSON formats. The one described by RFC 7159 is the mandatory default.

Functions that generate or parse JSON data accept the optional format clause to specify which format to use (format json is the default). The alternatives offered might have a very different syntax (like BSON).

Creating JSON Data

The standard defines the following functions to create JSON strings:

json_object([key] <expr> value <expression> [,…])

Creates a JSON object. The keywords key (optional) and value introduce the attribute name and value:

json_object( key 'id'   value 1234
           ,     'name' value 'Markus')

The standard also accepts a colon (:) between key and value:

json_object( 'id': 1234
           , 'name': 'Markus')

A comma is only used to list multiple key/value pairs. This is a noteworthy difference to json_object (MySQL, SQLite) and json_build_object (PostgreSQL): they use the comma for both separating keys from values and listing multiple key/value pairs.

json_array([<expr>[,…]])

Creates a JSON array from the values provided.

json_array(<query>)

Creates a JSON array from the values returned by <query>. The query must return exactly one column.

json_arrayagg(<expr> [order by …])

Creates a JSON array from the values of a group (like array_agg):

SELECT json_arrayagg(col [order by seq])
  FROM …
 GROUP BY x

Note that the optional order by clause is inside the parentheses—not in a within group clause as for listagg.

json_objectagg([key] <expr> value <expression>)

Creates a JSON object from the key/value pairs of a group. It uses the same syntax to denote the key and value as json_object:

SELECT json_objectagg(k value v)
  FROM …
 GROUP BY x

An order by clause is not allowed because JSON objects are unordered key/value pairs.
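To see how these constructors compose, here is a sketch that builds one nested JSON document per department (the departments and employees tables are hypothetical):

SELECT json_object(
         'dept'      value d.name,
         'employees' value json_arrayagg(
                             json_object('id' value e.id, 'name' value e.name)
                             order by e.name))
  FROM departments d
  JOIN employees   e ON e.dept_id = d.id
 GROUP BY d.name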

Availability of SQL/JSON

Accessing JSON Items

The following functions use the so-called SQL/JSON path language (described below) to access parts of a JSON document. They interpret strings as JSON data and thus accept the on error clause.

json_exists(<json>, <path>)

Tests whether a specific path exists in a JSON document. It evaluates to true, false or unknown and can be used in the where clause directly:

WHERE json_exists(jcol, '$.name')

json_value(<json>, <path> [returning <type>])

Extracts a scalar JSON value—everything except object and array—and returns it as a native SQL type. The optional returning clause performs a typecast. Without a returning clause, json_value returns a string.
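For example, assuming a table t with a JSON-bearing column jcol (as in the json_table example further below), the following sketch extracts the name attribute as a varchar and skips rows that don't have one:

SELECT json_value(jcol, '$.name' returning varchar(255)) AS name
  FROM t
 WHERE json_exists(jcol, '$.name')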

json_query(<json>, <path> …)

Extracts a part of a JSON document and returns it as a JSON string. The main differences to json_value are: (1) it can extract any JSON type; (2) it always returns a JSON string; (3) it can extract multiple elements from a JSON document.

Due to these differences, there are two special cases:

Multiple hits

By default, json_query raises an exception if the JSON path matches more than one element (like json_value). The optional with [ conditional | unconditional ] [array] wrapper clause wraps the result into an array and returns all hits.

JSON path denotes a single string

Json_query generally returns JSON data. If the JSON path matches a single JSON string, the result is still enclosed in double quotes with JSON special characters escaped. The omit quotes [on scalar string] clause returns the raw data instead (like json_value).

Finally, there is the json_table function: it is basically a table function—i.e. it is used in the from clause.

json_table(<json>, <path> columns …)

Transforms JSON data into a table. Json_table does not introduce any new functionality but is often simpler (and faster) than using multiple json_value calls.

Please refer to ISO’s technical report (paragraph 5.3.4) for the full syntax. As a teaser, consider the following example. It transforms a JSON document (an array containing two objects) into a table:

[{"id": 1, "name": "Marvin"},
 {"id": 2, "name": "Arthur"}
]
SELECT jt.*
  FROM t
     , JSON_TABLE
       ( jcol
       , '$[*]'
         COLUMNS (id   NUMERIC      PATH '$.id',
                  name VARCHAR(255) PATH '$.name'
                 )
       ) jt

The json_table function produces one row for each element matched by the JSON path $[*]. The columns clause declares the names and types of the result columns and the JSON paths to the actual values (relative to the main JSON path).

id  name
1   Marvin
2   Arthur

Availability of SQL/JSON

JSON Path

The SQL standard specifies a path language for JSON. It “adopts many features of ECMAscript, though it is neither a subset nor a superset of ECMAscript.”

In the JSON path language, the dollar sign ($) represents the current context element, the period (.) an object member, and the brackets ([]) an array element. The following examples illustrate this:

$.name

Denotes the value of the name attribute of the current JSON object.

$[0]

Denotes the first element of the current JSON array.

$.events[last]

Denotes the last element of the array stored in the attribute events of the current JSON object.

$.events[0 to 4, last]

Denotes the first five and the last array element of the array stored in the attribute events of the current JSON object.

$.*

Denotes the values of all attributes of the current JSON object.

JSON path supports filter expressions in the form ?(<expression>). In filter expressions, the at sign (@) denotes the current context.

$.* ?(@.type()=="number")

Denotes all attribute values of type number.

$.events[*] ?(exists(@.name))

Denotes all array elements of the attribute events that contain a JSON object with the attribute name.

$?(@.name starts with "Lorem")

Denotes the full JSON document if it is a JSON object that has an attribute name that contains a string that starts with Lorem.

Other functions available in filter expressions include size() (array size), ceiling(), floor(), abs(), and datetime() (for parsing, see below).
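Filter expressions become useful in combination with the query functions above; a sketch, again assuming a table t with a JSON column jcol whose documents carry an events array:

SELECT *
  FROM t
 WHERE json_exists(jcol, '$.events[*] ?(@.type == "login")')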

The SQL/JSON path language defines two modes: lax, which is the default, and strict. The mode can be specified by adding a prefix to the JSON path expression (e.g. 'strict $.name'). The strict mode triggers error handling (subject to the on error clause) for all errors. This includes accessing non-existing object members and using an array accessor on a scalar value or JSON object.

The lax mode suppresses these errors. If required, it unwraps arrays or wraps scalar values so that the document structure and JSON path expression fit to each other. The lax mode allows working with variable document structures without adjusting the JSON path to each document.

Availability of SQL/JSON Path

What's missing in SQL/JSON

The SQL standard does not provide functions to update parts of JSON documents (like json_set in MySQL, PostgreSQL, and SQLite or json_modify in SQL Server).

On my Own Behalf

If you like this article, you might also like my book SQL Performance Explained or my training.

Date and Time Formatting and Parsing

Formatting dates and times (temporal data) is one of the gaps in the SQL standard that was filled in pretty much every database—of course, every vendor filled it differently. SQL:2016 finally added this functionality to the standard.

The SQL standard uses a format template such as 'YYYY-MM-DD' to describe how to format or parse temporal data. The following table summarizes the available mnemonics—notice the similarities with the “datetime format models” used by the Oracle database.

Mnemonic              Meaning             extract field
YYYY | YYY | YY | Y   Year                YEAR
RRRR | RR             Rounded year
MM                    Month               MONTH
DD                    Day of month        DAY
DDD                   Day of year
HH | HH12             12 hour
HH24                  24 hour             HOUR
MI                    Minute              MINUTE
SS                    Second of minute    SECOND
SSSSS                 Second of day
FF1 | … | FF9         Fraction            (in SECOND)
A.M. | P.M.           AM or PM
TZH                   Time zone hour      TIMEZONE_HOUR
TZM                   Time zone minute    TIMEZONE_MINUTE

Remember that the extract expression (good old SQL-92) can access the individual components of temporal types. I've added the respective extract field names for reference.

Format templates can be used in two ways: (1) in the JSON path method datetime (see above); (2) in a cast specification:

CAST(<datetime> AS <char string type> [FORMAT <template>])
CAST(<char string> AS <datetime type> [FORMAT <template>])
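As a concrete sketch using the mnemonics above (the order_date column is hypothetical), both directions look like this:

CAST(order_date AS VARCHAR(10) FORMAT 'YYYY-MM-DD')
CAST('20:15:30' AS TIME FORMAT 'HH24:MI:SS')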

Availability of CAST FORMAT

Listagg

Listagg is a new ordered set function that resembles the group_concat and string_agg functions offered by some databases. It transforms values from a group of rows into a delimited string.

The minimal syntax is:

LISTAGG(<expr>, <separator>) WITHIN GROUP(ORDER BY …)

Listagg accepts the optional on overflow clause to define the behavior if the result becomes too long:

LISTAGG(<expr>, <separator> ON OVERFLOW …)

The default is on overflow error. The on overflow truncate clause prevents the overflow by only concatenating as many elements as the result type can accommodate. Furthermore, the on overflow truncate clause allows you to specify how to terminate the result:

ON OVERFLOW TRUNCATE [<filler>] WITH[OUT] COUNT

The optional <filler> defaults to three periods (...) and is added as the last element if truncation happens. If with count is specified, the number of omitted values is put in parentheses and appended to the result.
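Putting the pieces together, a sketch that lists employee names per department (the employees table is hypothetical):

SELECT dept_id,
       listagg(emp_name, ', ' ON OVERFLOW TRUNCATE '...' WITH COUNT)
         WITHIN GROUP (ORDER BY emp_name) AS employee_list
  FROM employees
 GROUP BY dept_id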

I have written a more detailed article about listagg. Please keep the type-safe alternatives in mind, which are a better choice in many cases.

Availability of LISTAGG

Trigonometric and Logarithmic Functions

SQL:2016 introduces trigonometric functions (sin, cos, tan, asin, acos, atan, sinh, cosh, tanh), a general logarithm function (log(<base>, <value>)) and a shortcut for the logarithm with base 10 (log10(<value>)).

Note that the natural logarithm ln(<value>) was introduced with SQL:2003.
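A quick sketch of the new functions (the values clause just provides a single row to select from):

SELECT log(2, 64)  AS log2_64,    -- 6
       log10(1000) AS log10_1000, -- 3
       sin(0)      AS sin_0       -- 0
  FROM (VALUES (1)) AS t(x)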

Availability of New Mathematical Functions of SQL:2016

Polymorphic Table Functions

SQL table functions—standardized with SQL:2003—are functions that return tables. Table functions have to specify the names and types of the columns they return—the so-called row-type—at the time of creation:

CREATE FUNCTION <name> (<parameters>)
       RETURNS TABLE (c1 NUMERIC(10,2), c2 TIMESTAMP)
   ...

This function returns a table with two columns: c1 and c2 with the respective types.

Once declared, table functions can be used in the from and join clauses similarly to regular tables.
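For illustration, a function declared like the one above would be invoked in the from clause with the table operator (the function name monthly_totals and its argument are hypothetical):

SELECT t.c1, t.c2
  FROM TABLE(monthly_totals(2017)) AS t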


SQL:2016 introduces polymorphic table functions (PTFs) that don't need to specify the result type upfront. Instead, they can provide a describe component procedure that determines the return type at run time. Neither the author of the PTF nor the user of the PTF needs to declare the returned columns in advance.

PTFs as described by SQL:2016 are not yet available in any tested database. Interested readers may refer to the free technical report “Polymorphic table functions in SQL” released by ISO. The following are some of the examples discussed in the report:

  • CSVreader, which reads the header line of a CSV file to determine the number and names of the return columns

  • Pivot (actually unpivot), which turns column groups into rows (example: phonetype, phonenumber)

  • TopNplus, which passes through N rows per partition and one extra row with the totals of the remaining rows

Personally, I think many of the examples from the technical report could be solved using other SQL features. Implementing the json_table function as a PTF might have been an interesting example.

Miscellaneous Features

Join … using now accepts as (F404):

FROM A
JOIN B USING (…) AS correlation_name

There is a new type decfloat[(<precision>)] (T076).

Named arguments in function and procedure invocations (name => value) are no longer limited to call statements (T524).

The default clause is now allowed for parameters of functions (T525) and for inout parameters of procedures (T523).
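A sketch of both features together (the function order_total and its parameters are hypothetical; the body is elided as in the earlier example):

CREATE FUNCTION order_total(order_id INTEGER, tax_rate NUMERIC DEFAULT 0.19)
       RETURNS NUMERIC
   ...

-- named argument outside a CALL statement; tax_rate falls back to its default
SELECT order_total(order_id => 1234) AS total
  FROM (VALUES (1)) AS t(x)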

Availability of Miscellaneous SQL:2016 Features

Feature Taxonomy: Obsolete and New Features

For the sake of completeness: part 2 of SQL:2016 introduces no new mandatory features.

The feature T581, “Regular expression substring function”, has been deprecated (“The functionality is essentially subsumed by Feature F844, SUBSTRING_REGEX”). Note that feature T141, “SIMILAR predicate”, was already deprecated in 2011.

The following table lists the new optional features in ISO/IEC 9075-2:2016 (compared to 2011).

Feature ID   Feature Name
B200         Polymorphic table functions
B201         More than one PTF generic table parameter
B202         PTF Copartitioning
B203         More than one copartition specification
B204         PRUNE WHEN EMPTY
B205         Pass-through columns
B206         PTF descriptor parameters
B207         Cross products of partitionings
B208         PTF component procedure interface
B209         PTF extended names
F404         Range variable for common column names
R010         Row pattern recognition: FROM clause
R020         Row pattern recognition: WINDOW clause
R030         Row pattern recognition: full aggregate support
T076         DECFLOAT data type
T523         Default values for INOUT parameters of SQL-invoked procedures
T524         Named arguments in routine invocations other than a CALL statement
T525         Default values for parameters of SQL-invoked functions
T622         Trigonometric functions
T623         General logarithm functions
T624         Common logarithm functions
T625         LISTAGG
T811         Basic SQL/JSON constructor functions
T812         SQL/JSON: JSON_OBJECTAGG
T813         SQL/JSON: JSON_ARRAYAGG with ORDER BY
T814         Colon in JSON_OBJECT or JSON_OBJECTAGG
T821         Basic SQL/JSON query operators
T822         SQL/JSON: IS JSON WITH UNIQUE KEYS predicate
T823         SQL/JSON: PASSING clause
T824         JSON_TABLE: specific PLAN clause
T825         SQL/JSON: ON EMPTY and ON ERROR clauses
T826         General value expression in ON ERROR or ON EMPTY clauses
T827         JSON_TABLE: sibling NESTED COLUMNS clauses
T828         JSON_QUERY
T830         Enforcing unique keys in SQL/JSON constructor functions
T831         SQL/JSON path language: strict mode
T832         SQL/JSON path language: item method
T833         SQL/JSON path language: multiple subscripts
T834         SQL/JSON path language: wildcard member accessor
T835         SQL/JSON path language: filter expressions
T836         SQL/JSON path language: starts with predicate
T837         SQL/JSON path language: regex_like predicate
T838         JSON_TABLE: PLAN DEFAULT clause
T839         Formatted cast of datetimes to/from character strings

“What's New in SQL:2016” by Markus Winand was originally published at Modern SQL.

 

June 14, 2017


Henrik Loeser

DB2 Security: Privilege(d) Insight

DB2 Security Today's blog entry is about an old topic: Who I am, why I am and what I am (allowed) to do? Users and their privileges are at the core of database security. Information about them are...

(Read more)
 

June 13, 2017


DB2utor

MERGE Statement Enhancements in DB2 12

The MERGE statement has been around awhile. It was actually among the first topics I covered upon launching DB2utor in 2007.
 

June 12, 2017


Kim May

The Fillmore Group – Another Migration Success!

I am pleased to report a team of four consultants from The Fillmore Group, a team with over 100 years of combined DB2 experience, recently completed a successful migration of the courts system...

(Read more)
 

June 11, 2017


DB2Night Replays

The DB2Night Show #194: Interviewing the DB2 LUW Optimizer w/ Ember Crooks

@ember_crooks Special Guest: Ember Crooks Director, Xtivia IBM DB2 GOLD Consultant Interviewing the DB2 LUW Optimizer 100% of our audience learned something! Ember took us on a deep dive journey through DB2 LUW Explain information including how to get it, how to interpret it, how to make the most of it, and much more. You will learn about db2exfmt, DB2 Explain from Actuals, IBM DSM, dsmtop, db2top, other tools, and just what the heck all...

(Read more)
 

June 08, 2017


Craig Mullins

The DB2 12 for z/OS Blog Series - Part 11: Enhanced Support for Arrays

The ARRAY data type was added to DB2 in the last release (Version 11) with the ability to define both ordinary and associative arrays. An ordinary array has a user-defined number of elements that are referenced by their ordinal position in the array. An associative array has no user-defined number of elements, and elements are referenced by the array index value. An associative...

(Read more)
 

June 07, 2017


Dave Beulke

Five Ways to Make your DB2 Data Delicious

Recently on the REDDIT DB2 forum someone raised a question (found here) about DB2’s future in application development. The question and answers talked about the database industry and the reason DB2 is positioned in the upper right quadrant in the latest Gartner Magic Quadrants found here. To...

(Read more)
 

June 06, 2017


DB2utor

DB2 12 Technical Workshops

Many customers have asked when IBM will offer DB2 12 for z/OS education and training. The answer is "very soon." IBM is hosting six one-day workshops around the U.S. in June and July.
 

June 02, 2017


DB2Night Replays

The DB2Night Show z78: IBM DB2 Analytics Accelerator for z/OS Update

Presented by: Peter Bendel IBM STSM, Germany "The DB2Night Show z78: IBM DB2 Analytics Accelerator for z/OS Update" Replays available in WMV and M4V formats! 100% of our studio audience learned something!Peter gave insight into the many new enhancements of the IBM DB2 Analytics Accelerator. Watch the replay...

(Read more)

Henrik Loeser

EgoBot: Fun with a Slightly Mutating ChatBot

Fun with the Bluemix EgoBot Over the past day and evening I had some fun with a slightly mutating chatbot. The API for the IBM Watson Conversation service offers REST calls to query and change the...

(Read more)
 

June 01, 2017

Big Data University

We’re Now Cognitive Class

Today, we’re renaming our site to Cognitive Class in order to better embody our focus on Data Science and Cognitive Computing.

One has been called “the sexiest profession of the 21st century” and the other is the biggest revolution in the world of computing since the invention of the Turing Machine.

A brief history of Cognitive Class

When we – a small group of IBMers – first launched this volunteer initiative back in 2010, we chose to call the site DB2 University. Our goal at the time was to provide online data education and literacy free of charge. Our goal hasn’t changed; our name, implementation, and the industry itself have.

Most of our team had worked on DB2 at IBM and so we launched the site with a handful of courses related to SQL and specifically IBM DB2. Despite the limited amount of content at the time, we quickly managed to attract the attention of many professionals and students.

As the number of registrants grew, we quickly realized that our mission for the site would be to democratize learning beyond the scope of a single product.

Seven years ago our students were expressing interest in learning not just about databases but also about this hot new thing called Big Data. Our own team’s scope at IBM expanded to include emerging technologies in the cloud and analytics space. Big Data very much included. So in 2011, we renamed DB2 University to Big Data University.

We kept that name for six years and it has served us well. Our Big Data courses are extremely popular and the site has grown to just shy of 700,000 learners worldwide.

Our catalog expanded considerably and, despite our name, the site became an education and skill building tool for more than just Big Data. For Data Science at first, and in more recent times, for Cognitive Computing as well. (By the way, the latter is IBM’s favorite terminology for technologies that include, but are not limited to, Artificial Intelligence.)

Data Science and Cognitive Computing are the future

Our industry never sits still. Teaching data analytics today is no longer just about relational databases. And it’s no longer about Big Data alone either.

Those two topics remain foundational, but the future of data insight that serves human and business needs lies in Data Science and Cognitive Computing.

As we go about creating more and more content that satisfies your demands for future-proof skills, we decided to double down on them. And with this new focus in mind, we had to pick a better name to represent our aim. We chose Cognitive Class.

The domain is a .ai to further reinforce what we are all about.

What it means for you

Realistically, not much should change for you in terms of logistics. Our emails will come from Cognitive Class rather than BDU. Your account will still work. The courses you’re enrolled in and completed will still be there. You keep your certificates and badges. (However, if you run into any issue, please use our Support tab to open a ticket with us.)

You can expect us to release many new courses on hot topics within our industry. Think machine learning, deep learning, AI, NLP, chatbots, blockchain, and so on.

We look forward to continuing to provide you with the indispensable skills you need to tackle the problems of the future. Please stay tuned. 🙂

PS: If you’re interested in becoming a course author, feel free to contact us. If you are a business or academic institution interested in deploying your own Cognitive Class, reach us via the Business page.

The post We’re Now Cognitive Class appeared first on Cognitive Class.


Henrik Loeser

How to Manage Bluemix Service Keys via CLI

You probably know that CLI stands for Command Line Interface. And you are aware that IBM Bluemix and Cloud Foundry offer a CLIs. Did you know that you can manage service keys from the command line?...

(Read more)
 

May 31, 2017


Kim May

$5!! – June 14th Balto/Wash DB2 Users Group Meeting

The Baltimore/Washington DB2 Users Group has been supporting the local DB2 community for 25 successful years.  Our quarterly meetings are packed with interesting, relevant information about how to...

(Read more)

Henrik Loeser

DB2 Security Mysteries, Surrogates and Trusted Contexts

DB2 Security Question Recently, I was contacted regarding an older blog entry discussing the DB2 security feature of surrogates. During an audit a strange entry was found in the catalog table...

(Read more)

Triton Consulting

Flexible DB2 Resourcing – Don’t take work on holiday with you this summer

Taking a break from DB2? With summer just around the corner many of us will be looking forward to a well-earned break. However, we all know how stressful the workplace can be in the lead up to a holiday. Managing … Continue reading →

(Read more)
 

May 30, 2017

Big Data University

This Week in Data Science (May 30, 2017)

Here’s this week’s news in Data Science and Big Data.

Don’t forget to subscribe if you find this useful!

Interesting Data Science Articles and News

Upcoming Data Science Events

  • Big Data Toronto 2017 –June 20, 2017 – June 21, 2017 all-day Metro Toronto Convention Center, 255 Front Street West Toronto, Ontario M5V 2W6 Canada

Featured Courses From Cognitive Class


The post This Week in Data Science (May 30, 2017) appeared first on Cognitive Class.


DB2utor

DB2 Tools Fix Lists

The IBM Support page lists all fixes for DB2 and IMS tools products. You'll find summaries of problem descriptions (APARs) and fixes (PTFs), by product and version. The list of APARs can be downloaded as a CSV file and imported into a spreadsheet or database for sorting and filtering.
 

May 25, 2017

Jack Vamvas

MongoDB cheat sheet

MongoDB is an open-source NoSQL document-oriented database program. I’ve recently started supporting MongoDB. These are my quick notes for supporting MongoDB.

Mongo

How to install MongoDB Enterprise on Red Hat Linux

--create a repo file and insert the text below

vi /etc/yum.repos.d/mongodb-enterprise.repo

[mongodb-enterprise]

name=MongoDB Enterprise Repository

baseurl=https://repo.mongodb.com/yum/redhat/$releasever/mongodb-enterprise/3.4/$basearch/

gpgcheck=1

enabled=1

gpgkey=https://www.mongodb.org/static/pgp/server-3.4.asc

--install the package

sudo yum install -y mongodb-enterprise

 

Create the admin user

Note: once you've created the admin user, you'll need to disconnect from the mongo shell and restart the MongoDB server

use admin
db.createUser(
  {
    user: "myUserAdmin",
    pwd: "abc123",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
  }
)

 

 

Start the MongoDB service

sudo service mongod start

 

Enable the service to start at the next system reboot

sudo chkconfig mongod on

 

To allow remote connections

Step 1: create a user in a database

db.createUser({

    user: 'robert',

    pwd: 'myM0ng0_pw',

    roles: [{ role: 'readWrite', db:'testdb'}]

})

Step 2: get the private IP address

/sbin/ifconfig | grep "inet addr" | awk -F: '{print $2}' | awk '{print $1}'

 

Step 3: edit the mongod configuration file

--add the private address from step 2 to the net.bindIp setting

vim /etc/mongod.conf

--uncomment the #security section and change it to:

security:

   authorization: enabled

 

Step 4: connect from the terminal

First make sure the mongo CLI is installed on your system (https://docs.mongodb.org/manual/installation/), then run:

mongo someurl.mongodomain.com:45475/database_name -u username -p password

List all databases on a Mongo Server

show dbs

Show current Mongo db

db

db.getName()


Robert Catterall

DB2 for z/OS and Data-as-a-Service and Database-as-a-Service

______-as-a-service is all the rage in IT these days, and understandably so -- the "service-ization" of information technology capabilities and interfaces will be transformational; indeed, transformations have already occurred or are underway at many organizations and within many application development and deployment environments. In this blog entry I want to highlight a couple of ______-as-a-service concepts, data-as-a-service (aka DaaS) and database-as-a-service (aka DBaaS), and examine...

(Read more)
 

May 24, 2017


Craig Mullins

The DB2 12 for z/OS Blog Series - Part 10: Improved EDM Memory Management

EDM pool management is improved in DB2 12 for z/OS, which is especially beneficial to organizations running many concurrent threads with large EDM pools specified. Prior to DB2 12, the EDMDBDC, EDMSTMTC and EDM_SKELETON_POOL DSNZPARM values were used to allocate DBD cache, dynamic statement cache and EDM skeleton pool memory and the values could not be lowered past the specified...

(Read more)

Henrik Loeser

Updates to Chatbot API and Tooling

Build Chatbots with IBM Bluemix If you have been coding up your own chatbot using the IBM Watson Conversation service on IBM Bluemix, then you might be aware of the new features in Conversation API....

(Read more)
 

May 23, 2017

Jack Vamvas

How to check DB2 LUW database write suspend state with db2pd

The DB2 LUW write suspend state is commonly checked while splitting a mirrored database. I’d like the flexibility of checking via the db2pd monitoring tool. How can I get the output in db2pd to check whether a DB2 database is in the write suspend state?

Answer: There are a couple of different things you can check in this situation. The db2pd tool allows you to check and report on the database configuration and to identify whether the database is in the write suspend state.

This command will return the relevant information. Check for the item “database is in write suspend state”.

db2pd -db MYDB -dbcfg
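For scripting, the relevant line can be filtered directly (a sketch assuming a Unix shell and a database named MYDB):

db2pd -db MYDB -dbcfg | grep -i suspend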

Another thing you may want to check is whether the database is active. Essentially you’re checking whether there are any connections. Use:

db2 list active databases

 

If you'd like to read more about using db2pd, here are some posts which will help explain the db2pd options:

db2pd troubleshooting guide (DBA DB2)

DB2 performance monitoring with db2pd (DBA DB2)

 

 

 


Henrik Loeser

Chatbots: Manage Your Watson Conversations from the Command Line or App

Manage Watson Conversation Workspaces I am a big fan of using the command line as most of you know by now. This applies to interacting with IBM Bluemix cloud and its services and for database...

(Read more)
Big Data University

This Week in Data Science (May 23, 2017)

Here’s this week’s news in Data Science and Big Data.

Don’t forget to subscribe if you find this useful!

Interesting Data Science Articles and News

Upcoming Data Science Events

Featured Courses From Cognitive Class

The post This Week in Data Science (May 23, 2017) appeared first on Cognitive Class.
