Mohankumar Saraswatipura

Dallas-Fort Worth Metroplex
3K followers 500+ connections

About

I am an IBM Gold Consultant with over 19 years of experience in database analysis…

Experience

  • DBA Automation

    Dallas/Fort Worth Area

  • -

    Dallas/Fort Worth Area

  • -

    Washington D.C. Metro Area

  • -

    Manchester, United Kingdom

  • -

    Manchester, United Kingdom

  • -

    Bangalore

Education

  • Indian Institute of Management, Calcutta

    -

    Activities and Societies: Specialization: IT Project Management, People Management and Marketing Strategies.

  • -

    Activities and Societies: Topped the class with a rating of 4.98 out of 5. Specialization: Database Management Systems, Object Oriented Analysis and Design, Algorithms, Advanced Data Structures, Advanced Operating Systems and Distributed Systems

  • -

    Activities and Societies: Graduated with an aggregate of 78% over 4 years (8 semesters). Major Subjects: Digital Electronics, Analog Electronics, Medical Electronics, Microprocessors, Digital Image Processing, Control Systems, Digital Designs, and System Programming (C, C++ and Linux Internals).

Licenses & Certifications

Publications

  • Database Concurrency in PostgreSQL

    Red Gate

    Isolation levels in databases dictate how transactions interact with each other in terms of visibility of data changes and locking behavior. Two-Phase Locking (2PL) and Multi-Version Concurrency Control (MVCC) are the mechanisms generally used to control isolation in a database management system, but the two take different approaches and have different characteristics.

    Here is my recent article, "Database Concurrency in PostgreSQL". It provides details on isolation levels, visibility map rules, and more within PostgreSQL.

    https://www.red-gate.com/simple-talk/databases/postgresql/database-concurrency-in-postgresql/
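
    To make the MVCC point concrete, here is a minimal sketch (not from the article) of snapshot behavior under REPEATABLE READ, assuming a local database testdb, a table t(id int, val int) with one row, and the psycopg2 driver:

    # Sketch: observing MVCC snapshot isolation in PostgreSQL.
    import psycopg2

    reader = psycopg2.connect("dbname=testdb")
    writer = psycopg2.connect("dbname=testdb")

    with reader.cursor() as rc, writer.cursor() as wc:
        # Declare the level; the snapshot is pinned at the first query.
        rc.execute("SET TRANSACTION ISOLATION LEVEL REPEATABLE READ")
        rc.execute("SELECT val FROM t WHERE id = 1")
        before = rc.fetchone()[0]

        # A concurrent writer updates and commits the same row. Under 2PL
        # this write would block on the reader's lock; under MVCC it proceeds.
        wc.execute("UPDATE t SET val = val + 1 WHERE id = 1")
        writer.commit()

        # The reader still sees its snapshot, not the committed change.
        rc.execute("SELECT val FROM t WHERE id = 1")
        assert rc.fetchone()[0] == before

    reader.rollback()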

  • EDS1: Db2 v11.5 DBA Certification Crammer Course

    IDUG

    EDS1: Db2 v11.5 DBA Certification Crammer Course
    Mohankumar Saraswatipura
    This certification crammer course is designed for IT professionals who plan to take the IBM Certified Database Administrator–Db2 11.5 for Linux, Unix, and Windows exam. In this course, you will learn how to do the following:

    Configure and manage Db2 v11.5 servers, instances, and databases
    Implement Db2 BLU Acceleration databases
    Create and implement database objects
    Implement business rules using constraints
    Implement high availability and disaster recovery solutions
    Monitor and troubleshoot at instance and database levels
    Implement security at instance, database, and objects levels
    This study course is designed to provide the Db2 professional with the information required to successfully obtain the certification. Each chapter contains topics covered in the exam, plus valuable insights into each topic, along with sample exam questions and detailed answers.

  • Which replication method is best for you - SQL, CDC and Q Replication?

    IDUG

    This presentation is all about data replication from a transactional system to an operational data store or data mart for real-time reporting purposes with limited impact to the transactional workload. It provides insight into the replication architecture for SQL Replication, Q Replication, and Change Data Capture. It explains the business drivers behind data replication and the challenges in replicating data across locally or geographically dispersed locations. We also provide pointers to understand the replication workload statistics and the SLA requirements to propose a precise replication strategy.
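
    As a companion to the SLA point, here is a minimal, hypothetical sketch of the heartbeat-table technique often used to measure end-to-end replication latency; the HEARTBEAT table, connection strings, and ibm_db driver usage are illustrative assumptions, not part of the presentation:

    # Sketch: approximate end-to-end replication latency via a heartbeat row.
    import time
    import ibm_db

    source = ibm_db.connect("DATABASE=OLTP;HOSTNAME=src;PORT=50000;"
                            "PROTOCOL=TCPIP;UID=repl;PWD=secret", "", "")
    target = ibm_db.connect("DATABASE=ODS;HOSTNAME=tgt;PORT=50000;"
                            "PROTOCOL=TCPIP;UID=repl;PWD=secret", "", "")

    # Stamp a token on the source; the change flows through capture/apply
    # like any other replicated update (single-row HEARTBEAT table assumed).
    token = str(int(time.time()))
    ibm_db.exec_immediate(source, f"UPDATE HEARTBEAT SET TOKEN = '{token}'")

    # Poll the target until the token arrives; elapsed wall time approximates
    # capture + queue + apply latency under the current workload.
    start = time.time()
    while True:
        stmt = ibm_db.exec_immediate(
            target, f"SELECT 1 FROM HEARTBEAT WHERE TOKEN = '{token}'")
        if ibm_db.fetch_tuple(stmt):
            break
        time.sleep(1)
    print(f"approximate replication latency: {time.time() - start:.1f}s")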

  • Which Replication Method is Best for You - SQL Replication vs Q Replication vs CDC?

    IDUG

    IDUG 2021 EMEA Presentation
    This discussion is all about data replication from a transactional system to an operational data store or data mart for real-time reporting purposes with limited impact to the transactional workload. It provides insight into the replication architecture for SQL Replication, Q Replication, and Change Data Capture. We explain the business drivers behind data replication and the challenges in replicating data across locally or geographically dispersed locations. It also provides pointers to understand the replication workload statistics and the SLA requirements to propose a precise replication strategy.

  • Db2 v11.5 pureScale and Implementation Principles

    IDUG North America

    Monday, June 14, 2:15 - 5:30 p.m. EST

    Instructor: Mohankumar Saraswatipura, Kronsys Inc.

    This course is designed for experienced Db2 database administrators (Linux, UNIX, and Windows). The goal is to prepare DBAs on the intermediate to advanced topics listed below.

    Db2 v11.5 pureScale:

    1. What’s new in Db2 v11.5?

    2. Introduction to Db2 pureScale

    3. Internals of Db2 pureScale

    The use of local and global locking to support application concurrency
    The use of page level locks
    Monitoring lock related statistics
    Explicit Hierarchical Locking (EHL) to remove data sharing overhead
    The components that interact with Db2 Cluster Services to manage cluster availability
    The storage architecture and Spectrum file system for pureScale

    4. Installation and management of pureScale cluster

    How to install Db2 pureScale feature
    Start and stop of the cluster
    Configure database manager and database members
    Adding or removing members and cluster caching facility servers
    Performing database fix pack upgrade and release upgrade

    5. Backup and Recovery in a pureScale environment

    6. Monitoring and problem determination in a pureScale environment

  • How to Implement Unidirectional Q Replication Setup on Four Node HADR Clusters?

    IDUG North America

    Requirement:

    The client was looking for a replication solution to fulfil the audit requirements and offload the reporting workload to a sidecar system.
    The source system is IBM Financial Transaction Manager (FTM) running on Db2 11.1 for Linux, UNIX and Windows
    The target system is an Operational Data Store running on Db2 11.1 for Linux, UNIX and Windows

    Agenda:

    Introduction
    Replication requirements
    Replication solution preferences
    Understanding the audit requirements and implementation methods
    Database system architecture/infrastructure
    Q Replication components
    Unidirectional replication implementation steps
    HADR and Q Replication Failover and Failback Automation

  • Replication Made Simple Using InfoSphere Data Replication 11.4

    IDUG North America

    Monday, June 14, 10:30 a.m. - 2 p.m. EST

    Instructor: Mohankumar Saraswatipura, Kronsys Inc.

    This course is designed for experienced Db2 database administrators (Linux, UNIX, and Windows). The goal is to prepare DBAs on the intermediate to advanced topics listed below.


    1. Introduction to InfoSphere Data Replication 11.4

    2. Implementing Change Data Capture (CDC) Replication

    CDC Architecture and Components
    Understand the CDC Capabilities
    Understand CDC capture and apply methods
    How to set up CDC Replication
    Information about CDC Replication scenarios
    Monitoring CDC
    CDC best practices
    3. Implementing Q Replication

    Q replication architecture and components
    Different types of replication
    Db2 database objects for replication
    Quick introduction about IBM WebSphere MQ
    ASNCLP interface and examples
    Administration and monitoring tasks

  • F11 - Db2 Database Migration Using InfoSphere Change Data Capture v11.4

    IDUG

    This presentation covered the topics listed below:

    Objective 1: Database Migration Techniques and Procedures.
    Objective 2: InfoSphere Change Data Capture architecture, replication methods and features available in the product.
    Objective 3: InfoSphere Change Data Capture replication scenarios and understanding of capture and apply processes.
    Objective 4: Step-by-step procedure illustrating how to install, configure, and use InfoSphere Change Data Capture to migrate a Db2 10.1 database hosted on the AIX operating system to a Db2 11.1 database hosted on a Linux operating system.
    Objective 5: This unit provides information about monitoring utilities and best practices.

  • Db2 11.1 DBA for Linux, UNIX and Windows Certification (C2090-600) Crammer Course

    IDUG

    This certification crammer course is designed for IT professionals who plan to take the IBM Certified Database Administrator–Db2 11.1 for Linux, Unix, and Windows exam C2090-600. In this course, you will learn how to do the following:
    1. Configure and manage Db2 v11.1 servers, instances, and databases
    2. Implement Db2 BLU Acceleration databases
    3. Create and implement database objects
    4. Implement business rules using constraints
    5. Implement high availability and disaster recovery solutions
    6. Monitor and troubleshoot at instance and database levels
    7. Implement security at instance, database, and objects levels
    This certification study course is designed to provide the Db2 professional with the information required to successfully obtain C2090-600 certification. Each chapter contains topics covered in the exam, plus valuable insights into each topic, along with sample exam questions and detailed answers.

  • DB2 v11.1 Certification Study Guide

    Packt Publishing

    Key Features
    A comprehensive certification preparation guide for exam C2090-600
    Provides complete coverage of IBM DB2 v11.1, including step-by-step DB2 implementation procedures
    A practical, easily accessible guide with over 100 practice questions and answers
    Book Description
    Much more than a simple certification study aid, this comprehensive book is designed to help you master all aspects of IBM DB2 database administration and prepare you to take and pass IBM's Certification Exam C2090-600. Building on years of extensive hands-on experience, the authors step you through all the areas covered on the test. The book dives deep inside each certification topic: DB2 server management, physical design, business rules implementation, activity monitoring, utilities, high availability, security, and connectivity and networking. There is even a "crash course" chapter on DB2 v11.1 features. Each chapter includes an extensive set of practice questions along with carefully explained answers. This book provides more than 400 practice questions and answers, more than 120 "flash cards" to help you study for the exam, and 50 step-by-step DB2 feature implementation procedures.

    What you will learn
    Configure and manage DB2 servers, instances, and databases
    Describe the partitioning capabilities available
    Enforce constraint checking with the SET INTEGRITY command
    Use the DB2 Problem Determination tool (db2pd)
    Configure and manage HADR
    Understand how to encrypt data in transit and data at rest
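
    Two of the bullets above lend themselves to a quick illustration. A minimal sketch, assuming a Db2 CLP environment on the server, a database named SAMPLE, and a SALES table left in SET INTEGRITY PENDING state by a LOAD (all names illustrative):

    # Sketch: SET INTEGRITY and db2pd, driven from Python via the Db2 CLP.
    import subprocess

    def clp(command: str) -> None:
        # Run connect + command + terminate in one shell invocation so the
        # CLP back-end keeps the database connection for the command.
        subprocess.run(
            f'db2 connect to SAMPLE && db2 "{command}" && db2 terminate',
            shell=True, check=True)

    # Enforce deferred constraint checking after the LOAD.
    clp("SET INTEGRITY FOR SALES IMMEDIATE CHECKED")

    # db2pd attaches directly to the instance (no SQL connection needed):
    # list current lock holders and waiters for the database.
    subprocess.run(["db2pd", "-db", "SAMPLE", "-locks"], check=True)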

    Other authors
  • C16 - Deep Dive into Locks and Logs of Db2 v11.1 pureScale

    IDUG

    This session was delivered at IDUG 2018 NA, covering the topics listed below.

    1: Understanding lock management in pureScale, such as the local lock manager and global lock manager
    2: Explains lock negotiation and the optimized way of accessing data in a pureScale environment
    3: Explains page reclaims, how to identify them, and best practices to control them
    4: Explains the concept of log streams, log stream merges, and log sequence numbers in a DB2 pureScale system and demonstrates their use during database restart or crash recovery
    5: Describes all of the locks and logs configuration parameters that affect DB2 pureScale cluster functionality and also explains the Explicit Hierarchical Locking (EHL) behavior
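
    As a rough illustration of item 5, here is a sketch (my own, not from the session) of opting into EHL and checking table data-sharing states, assuming the ibm_db driver, a pureScale database named PSDB, and the MON_GET_TABLE monitoring interface:

    # Sketch: EHL configuration and data-sharing state check in pureScale.
    import ibm_db

    conn = ibm_db.connect("DATABASE=PSDB;HOSTNAME=member0;PORT=50000;"
                          "PROTOCOL=TCPIP;UID=dbadm;PWD=secret", "", "")

    # EHL is driven by the OPT_DIRECT_WRKLD database configuration parameter.
    ibm_db.exec_immediate(
        conn,
        "CALL SYSPROC.ADMIN_CMD('UPDATE DB CFG USING OPT_DIRECT_WRKLD YES')")

    # Tables that EHL has taken out of data sharing report NOT_SHARED.
    stmt = ibm_db.exec_immediate(conn, """
        SELECT TABNAME, DATA_SHARING_STATE
        FROM TABLE(MON_GET_TABLE(NULL, NULL, -2))
        WHERE DATA_SHARING_STATE <> 'FULLY_SHARED'
    """)
    row = ibm_db.fetch_assoc(stmt)
    while row:
        print(row["TABNAME"], row["DATA_SHARING_STATE"])
        row = ibm_db.fetch_assoc(stmt)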

  • Db2 LUW v11.1 DBA Certification Crash Course - Exam C2090-600

    IDUG

    • This course helped the audience understand the DB2 v11.1 DBA certification topics.
    • It covered all the topics in great detail, with sample questions and answers at the end of each chapter
    • It covered all of the DB2 v11.1 new features and their benefits in the real world
    • Covered more than 50 step-by-step procedures for DBAs to use in their day-to-day work

    1. Introduction
    2. DB2 Server Management
    3. Physical Design
    4. Business Rules Implementation
    5. Monitoring DB2 Activity
    6. Utilities
    7. High Availability
    8. Security

  • EDS1 - DB2 11.1 for Linux, UNIX, and Windows Database Administration Certification Exam (C2090-600) Crammer Course

    IDUG

    This course describes the major technical features and enhancements provided by DB2 v11.1 for Linux, UNIX, and Windows. It is also designed to introduce students to the DB2 11.1 for Linux, UNIX, and Windows Database Administration certification exam (C2090-600). The course material is aligned with the exam objectives and was created by individuals who helped develop the exam.

  • DB2 10.1/10.5 for Linux, UNIX and Windows Database Administration: Certification Study Guide

    MC Press Online



    This book covers everything a reader needs to know to pass the exam to become an IBM certified Database Administrator (DB2) for DB2 10.1/DB2 10.5 for Linux, UNIX and Windows (LUW). This comprehensive study guide steps you through all of the topics covered on the test. Each chapter contains an extensive set of practice questions along with carefully explained answers. The book also provides a complete practice test of questions that closely models the actual exam, along with an answer…



    This book covers everything a reader needs to know to pass the exam to become an IBM certified Database Administrator (DB2) for DB2 10.1/DB2 10.5 for Linux, UNIX and Windows (LUW). This comprehensive study guide steps you through all of the topics covered on the test. Each chapter contains an extensive set of practice questions along with carefully explained answers. The book also provides a complete practice test of questions that closely models the actual exam, along with an answer key with a full description of why the answer is the correct one. No other source gives you this much help in passing the exam.

    Release date: 03/01/2015
    Chapter 1: DB2 UDB Certification
    Chapter 2: DB2 Server Management
    Chapter 3: Physical Design
    Chapter 4: Business Rules Implementation
    Chapter 5: Monitoring DB2 Activity
    Chapter 6: Utilities
    Chapter 7: High Availability
    Chapter 8: Security
    Chapter 9: Connectivity and Networking
    Chapter 10: DB2 10.5 Exam Crash Course
    Chapter 11: Question Bank

    Other authors
  • A step-by-step procedure to migrate terabytes of database from IBM Balanced Warehouse to IBM Smart Analytics System

    IBM developerWorks

    This article describes how to use DB2's database restore and DPF scale-out techniques to migrate a multi-terabyte database from BCU to ISAS while meeting business requirements. Truly a simple guide to accomplish a complex task.

  • 5 Steps for Migrating Data to IBM DB2 with BLU Acceleration

    IBM Data Magazine

    In the dynamics of today’s business world the success of any organization depends largely on organizational decisions made at the appropriate times. And a reliable data warehouse environment plays a pivotal role in helping drive the business decisions that can move an organization toward successful outcomes.

    IBM DB2 10.5 with BLU Acceleration offers an innovative database and highly advanced technology for business intelligence (BI) applications that is designed to provide cost-effective, optimized performance and ease of use.

    Migrating data from an existing warehouse to DB2 10.5 with BLU Acceleration can be handled with ease and provide several benefits at the same time.

  • How to Improve Performance in Warehouse Systems Using Replicated MQTs

    IBM Data Magazine

    In any Database Partitioning Feature (DPF) warehouse environment, replicated Materialized Query Tables (MQTs) play an important role in improving database performance. The improvement can be significant for SQL statements that frequently join bulky fact tables with tiny dimension tables: the replicated MQT assists those joins by presenting the dimension data locally instead of broadcasting it across all the data partitions.
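
    For illustration, a minimal sketch of creating such a replicated MQT for a small dimension table in a DPF database (table, tablespace, and connection details are assumptions; the driver is ibm_db):

    # Sketch: replicated MQT so fact/dimension joins stay collocated.
    import ibm_db

    conn = ibm_db.connect("DATABASE=EDW;HOSTNAME=admin;PORT=50000;"
                          "PROTOCOL=TCPIP;UID=dbadm;PWD=secret", "", "")

    # REPLICATED keeps a full copy of the dimension on every database
    # partition, avoiding a broadcast of the dimension rows at join time.
    ibm_db.exec_immediate(conn, """
        CREATE TABLE DIM_STORE_REP AS (SELECT * FROM DIM_STORE)
            DATA INITIALLY DEFERRED REFRESH DEFERRED
            IN TS_PD_DATA REPLICATED
    """)
    ibm_db.exec_immediate(conn, "REFRESH TABLE DIM_STORE_REP")

    # Fresh statistics let the optimizer pick the replicated copy.
    ibm_db.exec_immediate(
        conn,
        "CALL SYSPROC.ADMIN_CMD('RUNSTATS ON TABLE DBADM.DIM_STORE_REP')")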

  • IBM DB2 9.7 Performance Tuning for Siebel 8.1 Applications

    IBM Data Management Magazine

    This article focuses on three database tuning steps developed using real-time performance tuning experiences on IBM DB2 9.7 for the Siebel 8.1 global sales application. Most of the time, the performance of a Siebel application depends largely on the performance of the underlying database. This article will walk through the steps we followed during the performance tuning exercise to achieve an 83 percent performance improvement for Enterprise Integration Manager (EIM) jobs and an approximately 17 percent improvement for user interface (UI) components.

  • DB2 9.7 Advanced Application Developer Cookbook

    Packt Publishing

    Learn new features in DB2 9.7, such as performance enhancements, pureXML enhancements, high availability, backup, logging, resiliency, and recovery enhancements, for application development
    Master different security aspects to design highly secure applications
    Get step-by-step instructions to write advanced Java applications
    Work with DB2 routines and modules
    Master the tips and tricks for optimized application performance
    Learn to enable Oracle applications on DB2, migrate Oracle database objects to DB2 9.7, and more

    Other authors
  • Effectively use DB2 data movement utilities in a data warehouse environment

    IBM

    Choosing proper data movement utilities and methodologies is key to efficiently moving data between different systems in a large data warehouse environment. To help you with your data movement tasks, this article provides insight on the pros and cons of each method with IBM® InfoSphere® Warehouse, and includes a comparative study of the various methods using actual DB2® code for the data movement.

  • Understanding the advantages of DB2 9 autonomic computing features

    IBM developerWorks

    The self tuning memory manager (STMM) is a revolutionary memory tuning feature that was first introduced in IBM ® DB2 ® 9. The STMM eases the task of memory configuration by automatically setting optimal values for most memory configuration parameters, including buffer pools, package cache, locking memory, sort heap, and total database shared memory. When STMM is enabled, the memory tuner dynamically distributes the available memory among the various memory consumers. This article explains the function of the STMM, teaches you to enable the feature, and also discusses how STMM can bring real benefits to your business environment.
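
    A minimal sketch of enabling STMM as the article describes, assuming the ibm_db driver and a database named SAMPLE; the ADMIN_CMD procedure is used so everything runs as plain SQL:

    # Sketch: turning on STMM and handing the main consumers to the tuner.
    import ibm_db

    conn = ibm_db.connect("DATABASE=SAMPLE;HOSTNAME=db2host;PORT=50000;"
                          "PROTOCOL=TCPIP;UID=dbadm;PWD=secret", "", "")

    for cfg in (
        "SELF_TUNING_MEM ON",                           # master switch
        "DATABASE_MEMORY AUTOMATIC",                    # total memory floats
        "PCKCACHESZ AUTOMATIC",                         # package cache
        "LOCKLIST AUTOMATIC MAXLOCKS AUTOMATIC",        # locking memory
        "SHEAPTHRES_SHR AUTOMATIC SORTHEAP AUTOMATIC",  # sort memory
    ):
        ibm_db.exec_immediate(
            conn, f"CALL SYSPROC.ADMIN_CMD('UPDATE DB CFG USING {cfg}')")

    # Buffer pools join STMM when their size is AUTOMATIC (plain DDL).
    ibm_db.exec_immediate(conn, "ALTER BUFFERPOOL IBMDEFAULTBP SIZE AUTOMATIC")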

  • What's new in IDS 11?

    IBM developerWorks

    IBM® Informix® Dynamic Server (IDS) is a fast and scalable database server that manages traditional relational, object-relational, and web-based databases. IDS supports alphanumeric and rich data, such as graphics, multimedia, geospatial, HTML, and user-defined types. You can use IDS on UNIX®, Linux®, or Windows® with online transaction processing (OLTP), data marts, data warehouses, and e-business applications.

  • How to go hand-in-hand with DB2 and Informix

    IBM developerWorks

    Database technology is a constantly growing field of knowledge. Leveraging your current knowledge on one product and applying it to another similar product is one way to keep up with the constant change. This article demonstrates how you can leverage skills acquired in either Informix or DB2 to learn the other, and compares the technologies and terminologies used in IBM® Informix® Dynamic Server (IDS) 10 with IBM DB2® 9.

  • DB2 performance tuning using the DB2 Configuration Advisor

    IBM developerWorks

    Tuning a database to get optimal performance can be an overwhelming task. DB2® configuration parameters play an important role in performance, as they affect the operating characteristics of a database or database manager. When you are tuning your database for performance, the worst possible approach is to change the value of many performance tuning parameters without having any idea of what is causing the performance problem. In such cases, DB2 Configuration Advisor wizard gives DBAs a good starting point with initial configuration parameter settings upon which they could make improvements if they want. This article familiarizes you with various database configuration parameters and the use of the Configuration Advisor wizard in performance tuning. To see how these principles apply, you examine two business case scenarios for online transaction processing (OLTP) and online analytical processing (OLAP) systems.
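
    The wizard has a command-line counterpart, AUTOCONFIGURE, which accepts the same inputs the wizard collects. A minimal sketch, with values illustrative for a small OLTP system and the ibm_db driver assumed:

    # Sketch: Configuration Advisor recommendations via AUTOCONFIGURE.
    import ibm_db

    conn = ibm_db.connect("DATABASE=SAMPLE;HOSTNAME=db2host;PORT=50000;"
                          "PROTOCOL=TCPIP;UID=dbadm;PWD=secret", "", "")

    # APPLY NONE would only report recommendations; APPLY DB AND DBM sets
    # both database and database manager configuration parameters.
    ibm_db.exec_immediate(conn, """
        CALL SYSPROC.ADMIN_CMD(
          'AUTOCONFIGURE USING
             MEM_PERCENT 80
             WORKLOAD_TYPE SIMPLE
             TPM 500
             NUM_LOCAL_APPS 50
             ISOLATION CS
           APPLY DB AND DBM')
    """)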


Courses

  • AWS Certified Solutions Architect - Associate

    -

  • AWS Certified Solutions Architect - Professional

    -

  • Advanced Algorithms

    -

  • Advanced Compilers

    -

  • Advanced Data Structures

    -

  • Advanced Database Management Systems

    -

  • Apache Kafka Fundamentals by Confluent

    -

  • DataStax Enterprise 6 Foundations of Apache Cassandra

    DS201

  • DataStax Enterprise 6 Graph

    DS330

  • DataStax Enterprise 6 Operations of Apache Cassandra

    DS210

  • DataStax Enterprise 6 Practical Application Data Modeling with Apache Cassandra

    DS220

  • Database Migration Using DMO - SAP HANA 2.0 SPS04

    HA250

  • Design and Implement A Data Science Solution On Azure

    DP100

  • Distributed Computing

    -

  • EDB PostgreSQL Advanced Server DBA Essentials v11

    EnterpriseDB

  • IBM DB2 pureScale

    -

  • IBM MQ V9 Advanced System Administration

    ZM213

  • IBM MQ V9 System Administration (Using Windows for Labs)

    ZM153

  • IBM Netezza 6.0

    -

  • Implementing Microsoft Azure Infrastructure Solutions

    70-533

  • Introduction to Apache Cassandra

    DS101

  • Microsoft SQL Server 2012 Internals

    -

  • Microsoft SQL Server 2014 Internals: In-Memory

    Hekaton Engine

  • SAP BASIS ECC 6.0

    -

  • SAP HANA - Installation & Operation

    HA200

  • SAP HANA - Introduction

    HA100

  • SAP HANA 2.0 SPS04 - High Availability and Disaster Tolerance Administration

    HA201

  • SAP HANA 2.0 SPS04 - Installation and Administration

    HA200

  • SAP HANA 2.0 SPS04 - Using Monitoring and Performance Tools

    HA215

  • SQL Server 2008 DBA

    -

Projects

  • Db2 Universal (Db2 U) Container Book Publication

    Db2 U Book

  • Azure High Availability Implementation

    -

    Implementation of High Availability (HA) for IBM Db2 using Pacemaker and Corosync

    1. Custom HADR for Unsupported DBs
    2. SQL Replication Latency Monitoring using a Custom Heartbeat Table and Replication Subscription
    3. Custom Load Balancer/Probe Resources for Unsupported DBs
    4. Custom HA and Automation for Informatica
    5. Additional Voting Device to Protect the Systems from Split-Brain Race Conditions
    6. Custom DR for Unsupported Databases

  • Db2 11.5.8 HADR Integrated solution using Pacemaker

    -

    The goal was to set up Pacemaker with 2 nodes in the primary DC and 2 nodes in the secondary DC, and to provide the setup steps along with a prototype/codification of failover and failback within a data center and across data centers.

    Avoided the requirement for passwordless root SSH by using the db2locssh binary, and documented the differences between Tivoli System Automation (TSA) and Pacemaker setups.

  • Db2 Database Backup Optimizations

    -

    Understanding the recovery requirement for customers and building the following:

    1. Db2 recovery log management
    2. Dropped table recovery (native and also using Db2 Recovery Expert)
    3. Database rebuild support for bigger databases
    4. Tablespace recovery for tables affected by non-recoverable LOAD operations
    5. Database and Tablespace Relocation Techniques
    6. Db2 Split Mirror Support for Faster Recovery
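
    As an illustration of item 2, a hedged sketch of the native dropped-table recovery flow (database, tablespace, table ID, and export directory are placeholders; assumes a recoverable database with DROPPED TABLE RECOVERY enabled on the tablespace and a Db2 CLP environment):

    # Sketch: native dropped-table recovery driven from Python via the CLP.
    import subprocess

    db, ts, export_dir = "PRODDB", "TS_DATA", "/db2/export/ddts"

    # The recovery history file lists dropped tables and their IDs.
    subprocess.run(f'db2 "LIST HISTORY DROPPED TABLE ALL FOR {db}"',
                   shell=True, check=True)
    table_id = "000000000000d15a00010004"  # placeholder, copied from history

    for cmd in (
        f'RESTORE DATABASE {db} TABLESPACE ({ts}) ONLINE',
        f'ROLLFORWARD DATABASE {db} TO END OF LOGS AND COMPLETE '
        f'TABLESPACE ({ts}) ONLINE '
        f'RECOVER DROPPED TABLE {table_id} TO {export_dir}',
    ):
        subprocess.run(f'db2 "{cmd}"', shell=True, check=True)

    # The exported rows land under export_dir; re-create the table from its
    # DDL and reload the rows with IMPORT or LOAD.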

  • IBM Db2ReadLog () API Interface with Kafka

    -

    1. Reverse engineering the Db2 transaction log to identify the byte-array representation of Db2 data types (INT, SMALLINT, BIGINT, DECIMAL, CHAR, VARCHAR, LONG VARCHAR, FOR BIT DATA, VARBINARY, LOB, FLOAT, DATE, TIME, and TIMESTAMP, with and without NULLs)
    2. Rebuilding the transaction based on the byte array in memory and discarding the memory buffer in case of a ROLLBACK
    3. Ordering of transactions based on LFS and LSN (pureScale and non-pureScale deployments)
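
    Item 3 is essentially a k-way merge problem. A self-contained sketch (field names and records are illustrative; the real records come from the db2ReadLog buffer):

    # Sketch: merging member log streams into one replay order by (LFS, LSN).
    from dataclasses import dataclass
    from heapq import merge

    @dataclass(frozen=True)
    class LogRecord:
        lfs: int        # log file sequence number
        lsn: int        # log sequence number within the stream
        member: int     # originating member (0 on non-pureScale)
        payload: bytes  # raw log record body

    def replay_order(*streams):
        # Each member stream arrives LSN-ordered, so a k-way merge keyed on
        # (LFS, LSN) yields the global order without buffering everything.
        yield from merge(*streams, key=lambda r: (r.lfs, r.lsn))

    m0 = [LogRecord(1, 10, 0, b"ins"), LogRecord(1, 30, 0, b"upd")]
    m1 = [LogRecord(1, 20, 1, b"ins"), LogRecord(2, 5, 1, b"del")]
    for rec in replay_order(m0, m1):
        print(rec.lfs, rec.lsn, rec.member)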

  • High Availability and Disaster Recovery Implementation in Azure

    -

    Implement Db2 HADR solution between primary and disaster sites in Azure with the following characteristics:

    1. Database performance benchmark to find the current workload statistics
    Normal business hours 500~600 transactions per second
    Peak business hours 800~1200 transactions per second

    2. Copy recoverable LOAD operation files between data centers using Azure native Blob storage
    3. Enforce recoverable operations on primary data center
    4. SQL replication between OLTP and EDW for reporting purposes
    5. Document the complete implementation plan
    6. Perform HADR test cases
    (a) Failover and Failback
    (b) Optimize Db2 parameters for HADR performance without using express route
    (c) Perform LOAD tests and benchmark the file transfer
    (d) SQL Replication restart procedure

    Architected the HA solution using Corosync/Pacemaker with Azure Load Balancer.
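
    The failover tests in item 6 rely on a health probe of the HADR pair. A minimal sketch using the MON_GET_HADR table function (connection details illustrative; ibm_db assumed):

    # Sketch: HADR health probe ahead of failover/failback testing.
    import ibm_db

    conn = ibm_db.connect("DATABASE=PRODDB;HOSTNAME=primary;PORT=50000;"
                          "PROTOCOL=TCPIP;UID=dbadm;PWD=secret", "", "")

    stmt = ibm_db.exec_immediate(conn, """
        SELECT HADR_ROLE, STANDBY_ID, HADR_STATE, HADR_SYNCMODE,
               HADR_CONNECT_STATUS
        FROM TABLE(MON_GET_HADR(-2))
    """)
    row = ibm_db.fetch_assoc(stmt)
    while row:
        # A healthy pair reports PEER state with CONNECTED status; anything
        # else should block a planned TAKEOVER HADR from proceeding.
        print(row["HADR_ROLE"], row["HADR_STATE"], row["HADR_CONNECT_STATUS"])
        row = ibm_db.fetch_assoc(stmt)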

  • Db2 High Availability and Disaster Recovery Solution Design and Implementation in Azure Cloud

    -

    1. Understand HA and DR requirements in terms of Infrastructure, Recovery Point Objective, Recovery Time Objective, Workload Characteristics
    2. Design HA and DR Solution for the entire Db2 (LUW) Landscape
    3. Implement HADR Db2 Native Solution using TSA and Corosync/Pacemaker (Db2 11.5.7.0)
    4. Document the HA and DR Invocation Methods
    5. Automate the DR Invocation using Shell and Python Scripts

  • Oracle 10g, 11g, 12c and 19c Migration from On-Premise to Azure Cloud (AIX to Linux)

    -

    This project was to migrate business critical databases from on-prem to Azure and convert data from Big Endian to Little Endian (AIX to RHEL).

    1. Discover the inventory
    2. Database to Application Mapping
    3. Database Object Validations
    4. Understanding the workload
    5. Propose an efficient method of migration
    - Used XTTS for 11g, 12c, and 19c databases that could afford an outage of 60 minutes.
    - Used GoldenGate for databases requiring near-zero downtime
    6. Prepared the migration plan checklist and executed the migration with zero defects

  • Db2 Q Replication Performance Optimization

    -

    Improve Q replication performance for a workload running 144,000,000 transactions per day, with average Db2ReadLog() call times of 11~240 ms.
    -------------------------------------------------------------------------------------------------------
    Problem Statement: Q Replication Latency Increase from 1 Second to x Hours
    -------------------------------------------------------------------------------------------------------
    Optimization Steps:

    1. Analyzed the performance at Source and Target Systems
    2. Captured MQ and Db2ReadLog () System Call trace to understand the bottleneck
    3. Made MQ Configuration Changes to Improve the Channel Performance
    4. Implemented Parallel Send Queue to Improve the Performance by 4x
    -------------------------------------------------------------------------------------------------------
    End Result:
    -------------------------------------------------------------------------------------------------------
    Performance improvement by 3.8x (x Hours back to 1 Second latency (this latency is generally network F5 latency with Q latency of 12 ms) for an increased workload of 3x). With this optimization, Q replication performance will be intact even for an increased workload up to 280,000,000 transactions per day.

  • DevOps for Db2 Databases

    -

    Implemented DevOps Concepts for Db2 Databases.

    1. Db2 Fix Pack/Mod Pack Update Automation for 4 node HADR and Single Node Databases
    2. Db2 Version Upgrade Automation for 4 Node and Single Node Databases
    3. Db2 Performance Triage Script Execution via RunDeck
    4. Integration of Service Now Incident Management with RunDeck for Db2 Problems
    5. Q Replication End to End Monitoring for 4 Node HADR (across data centers) and Multi Instance Q Managers

  • Db2 (LUW) Advanced Support

    -

    Provide Db2 Advanced Support for Clients
    1. Solutions for Data Recovery using Db2 Advanced Recovery Options
    2. Solutions for HA and DR
    3. Solutions for Db2 Product Defects
    4. Solutions for TSA Problems
    5. Solutions for Performance Problems within Transactional and Analytics (Db2 BLU Acceleration) Systems
    6. Fix Data Replication Problems
    7. Fix Database Page Corruptions
    8. Fix Data Center Outage Database Issues

  • Advisory for Db2 Data Management Console UX for IBM

    -

    Helping IBM UX Design Team to Develop an Excellent User Interface for Db2 Data Management Console (DMC)
    1. Monitoring Console Design
    2. Q Replication Monitoring Console Design

  • PostgreSQL 13.1 and 14.1 RDS - Data Purge Automation Framework

    -

    Data Purge Automation Service -
    The data purge service is created to offer data owners a simple way to purge data from large PostgreSQL application tables. During the build process, all the SQL statements required to find the rows qualified for the purge will be identified and added to the control tables. Going forward, data owners can add or remove tables and their associated SQL statements, and modify the order of delete execution, in the control tables as necessary.

    Project Scope:
    Client and Service Provider agree that Service Provider shall assist in the Archival of PostgreSQL Database as per ongoing direction from the Client and in accordance with the tasks identified under this Statement of Work:
    1. Analyze the Postgres database and document the database model and relationships and review with the application team and get sign off
    2. Define the e2e archival process from the source database and keep the data in the archive database (Snowflake)
    3. Build a scalable service which will orchestrate the e2e archival process with proper error handling and restart capabilities
    4. Build audit trail reports which will show what data has been archived from the source on the previous runs
    5. Perform unit testing, integration testing and UAT (User Acceptance Testing) with key business users and get sign off
    6. Document the project and create production run book and handover to application owners
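
    A minimal sketch of the control-table-driven delete loop at the heart of such a service (table and column names are assumptions; psycopg2 is the driver):

    # Sketch: batched purge driven by a purge_control table.
    import psycopg2

    conn = psycopg2.connect("dbname=appdb")

    with conn.cursor() as cur:
        # Control rows define what to purge and in which order, so data
        # owners can change targets without touching the service code.
        cur.execute("""SELECT table_name, predicate_sql
                       FROM purge_control
                       WHERE enabled
                       ORDER BY exec_order""")
        targets = cur.fetchall()

    for table, predicate in targets:
        while True:
            with conn.cursor() as cur:
                # Bounded batches keep locks short and WAL volume steady;
                # ctid-based deletes avoid re-scanning already-purged rows.
                cur.execute(
                    f"DELETE FROM {table} WHERE ctid IN "
                    f"(SELECT ctid FROM {table} WHERE {predicate} LIMIT 5000)")
                deleted = cur.rowcount
            conn.commit()
            if deleted == 0:
                break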

  • FiveTran (HVR) Data Replication for Db2

    -

    Replicate the data from Db2 11.5.0.6 to Snowflake for near-real-time Operational Data Store (ODS) reporting.

  • DB Discover - A Complete Monitoring Solutions for SQL Server

    -

    This solution is a one-stop shop for database health checks & database portfolio management.

    Application team benefits
    - Application team can directly check basic database information such as the following:
    o Availability of database (online/offline)
    o Database information (data files, log files, file size)
    o Job status (jobs, job schedule, last job run status)
    o Data space utilization
    o Current log file size

    DBA benefits
    o One-stop shop for performing daily database health checks (connectivity check, job status, alerts)
    o Central place for any database-related information
    o DBAs need not be contacted for basic information regarding databases; this information is available in the respective application web page.

    Overall Business Benefits
    o Time spent by DBAs on daily checks is reduced drastically, enabling them to use that time on other improvement areas.
    o Dependency on the DBA team for basic database information is removed; the application team can check the data anytime at will.

    Other creators
    • R Karthik
  • IBM FTM Upgrade from FTM 3.0.6.10 to FTM 3.2.5 (Zelle)

    -

    - Migrate Db2 database from Db2 10.1 to Db2 11.1 from AIX to Linux using Q Replication (2 TB in size)
    - Upgrade FTM Zelle Application from 3.0.6.10 to 3.2.5 (latest mandates)
    - Application performance tuning for PEP+ and ACH processes
    - Improve SQL statement performance and reduce the CPU utilization by 60%
    - Set up HADR and TSA across data centers
    - Set up Q replication between the 4-node HADR FTM and a 4-node HADR ODS
    - Automate the Q replication and MQ monitoring process
    - Automate the HADR and Q replication failover between 4 nodes

  • Db2 Q Replication Implementation

    -

    1. Determine which replication method is best for the workload – SQL Replication vs Q Replication vs CDC?
    2. Collecting replication data volume and implementing the right replication solution
    3. Designing Q replication components, filtering, transformation and conflict detection
    4. Installing and configuring multi-instance Q manager (IBM MQ v9.1)
    5. Creating control tables and designing the naming standards for Queues, Q Managers, Listeners, Channels, Q Subscriptions, and Replication Q Maps
    6. Integrating the Q replication capture and apply processes with HADR primary, principal standby, auxiliary standbys across the data centers
    7. Design and implement automatic fail-over and fail-back of Q Capture, Q Apply, MQ (multi-instance Q manager). HADR primary is the driving factor for the Q component fail-over/fail-back
    8. Implementing a Q replication monitoring dashboard and scripts for monitoring/alerting, and integrating the solution with Remedy/ServiceNow
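
    For step 5, the control tables and queue maps are typically scripted through ASNCLP. A hedged sketch of generating and running such a script (object names are placeholders, and the asnclp "-f" launcher flag is an assumption, not a confirmed interface):

    # Sketch: driving ASNCLP from Python for unidirectional Q replication.
    import subprocess
    import tempfile

    script = """\
    ASNCLP SESSION SET TO Q REPLICATION;
    SET SERVER CAPTURE TO DB OLTP;
    SET SERVER TARGET TO DB ODS;
    CREATE CONTROL TABLES FOR CAPTURE SERVER
        USING RESTARTQ "ASN.RESTARTQ" ADMINQ "ASN.ADMINQ";
    CREATE CONTROL TABLES FOR APPLY SERVER;
    CREATE REPLQMAP OLTP_TO_ODS_MAP
        USING ADMINQ "ASN.ADMINQ" RECVQ "ASN.RECVQ" SENDQ "ASN.SENDQ";
    CREATE QSUB USING REPLQMAP OLTP_TO_ODS_MAP (SUBNAME PAY0001 FTM.PAYMENTS);
    """

    with tempfile.NamedTemporaryFile("w", suffix=".asnclp",
                                     delete=False) as fh:
        fh.write(script)
        path = fh.name

    # Run the generated script through the ASNCLP command-line processor.
    subprocess.run(["asnclp", "-f", path], check=True)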

  • Db2 11.5 pureScale Implementation and Support

    -

    - Implement a Db2 11.5 pureScale Linux cluster (5 members and 2 CFs)
    - Complete the Initial Verification Process (extensive testing of the cluster, obtaining fixes from the IBM Toronto Lab for the issues we found)
    - Performance tuning for application workload running 30,000 transactions per second
    - Implementing Explicit Hierarchical Locking (EHL) to eliminate data sharing performance issues
    - Optimize Local Lock Manager (LLM) and Global Lock Manager (GLM)
    - Optimizing TSA and Cluster Services including Spectrum File System Mounts during CF recycle
    - Estimate the Cluster Caching Facility (CF) CPU and memory resources required to run multiple databases
    - Optimize DBM and database configuration parameters and Db2 registry variables based on the resource and the application workload
    - Implement HADR to replicate the data from production into DR (5 members and 2 CFs)
    - Implement Monitoring Solutions

  • Implementing SAP HANA XS Advanced and Smart Data Access (SDA)

    -

    This project is all about enhancing the application development life cycle using cutting edge SAP HANA features.

    Responsibilities:

    1. Install and configure SAP HANA XS Advanced
    2. XS Advanced Organizations and Spaces
    3. Development tools provisioning including Web IDE, Explorer, Advanced Cockpit
    4. Development workflow for multi-target applications
    5. Data model design and HDI deployment
    6. Building Smart Data Access objects between SAP HANA and Db2 for Linux, UNIX and Windows to provision real-time data access between the heterogeneous databases

  • SAP HANA Migration from HANA 1.0 to 2.0

    -

    This project was to understand the current application workload and the upcoming deployment, size the appliance, and migrate the database from HANA 1.0 SPS 12 to HANA 2.0 SPS 3.

    Responsibilities:

    1. SAP HANA Sizing
    2. Appliance Selection
    3. Installation of HANA 2.0 and Create Databases for both ECC and BW for N and N+1 Landscape
    4. Installation and Configuration of SAP HANA SR and HAWK Cluster using SLES 12 pacemaker and corosync
    5. Implement SAP HANA System Replication for HA and DR (SYNCMEM and ASYNC)
    6. Optimize the Fail-over and Fail-back Timings
    7. Implement Active-Active and Reroute Read Only Workload to Secondary node via an Additional Virtual IP instead of using HINT based reroute mechanism (RESULT_LAG('hana_sr'))
    8. Building Overall Deployment Plan
    9. Application cutover from HANA 1.0 to HANA 2.0 with a very minimal outage (under 10 minutes including ABAP application configuration changes)
    10. Implementation of SAP HANA Encryption at Rest and In-flight

  • Operational Data Store (ODS) Migration

    -

    Project Description: Migration of a 1.3 TB Operational Data Store (ODS) database from DB2 9.7 to DB2 10.5 on a new server with additional capacity.

    Project Members:
    Robert Kent Collins
    Charles Ulary
    Angela Jones
    Janel Randall
    Mohankumar Saraswatipura

    Setup Details:
    1. DB2 10.5 Enterprise Edition
    2. TSA enabled HADR Implementation

    Challenges:
    1. Very minimal downtime, i.e., a < 60-minute window
    2. No replication technology toolkit to replicate the data before the cutover
    3. Two different versions of DB2

    Solution:
    1. Home-grown replication scripts to copy the data from one server to another, in full and in delta
    2. Identify the deleted data from the DB2 9.7 database and replay the deletions on the new server

    Result:
    1. Completed the move in less than 60 minutes

    Other creators
  • PostgreSQL and Pgpool Upgrade

    -

    - Implementation of PostgreSQL 11.1 and 12.6
    - Implementation of Pgpool 4.1 with active-active load-balancer between primary and secondary
    - Migrate Db2 database into PostgreSQL for Digital Safe Application
    - Implementation of MySQL database primary and replica for Non-SMTP Application
    - Implementation of Monitoring Solutions

  • SQL Server 2005 Enterprise Cluster Migration

    -

    This project was about migrating databases from a SQL Server 2005 cluster to a SQL Server 2012 cluster. We had 322 databases on the SQL Server 2005 cluster. The challenges were as listed below:
    1. Identify the unused databases
    2. Identify the application compatibility with SQL Server 2012
    3. Understand the multi-subnet cluster (multi-site) design and implementation plans
    4. Understand the implications around the settings RegisterAllProvidersIP and HostRecordTTL on the Listener connections

    The overall responsibilities consist of:
    • Creating a migration plan along with project manager (Mark Cashen)
    • Writing scripts using DMVs to identify the unused databases (see the sketch after this project)
    • Doing the pre-work on a test environment to check the incompatibilities using SQL Server Migration Assistant v5.3 and SQL Server Upgrade Advisor and recommending appropriate changes in the application code
    • Design the cluster set along with IBM DBA (James Brophy, Kristian Skotzen) and IBM Wintel (Akash Jain) teams
    • Assist in SQL Server installation and Surface Area Security Policies
    • Setting up the Availability Groups and Availability Group Listeners
    • Making corrections to eliminate connection timeout issues in the listeners by appropriately using the MultiSubnetFailover parameter in the connection string, or by using shorter DNS refresh times along with registering only active IP addresses in DNS
    • Migrate the databases from 2005 to 2012
    • Make use of 2012 data and index compression features to help improve performance and reduce storage requirements
    • Liaise with application teams and help testing the applications with the new connection strings and troubleshoot the SQL Server 2012 problems
    • Perform HA and DR tests to make sure all is fine with the setup

    Highlights:
    • Successfully completed the migration project with no delay
    • Efficiently used the available resources, making it a very cost-effective project (saving about 100K GBP)
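
    The DMV check mentioned above can be sketched as follows (connection string and the "no recorded access" heuristic are illustrative; note that the usage counters in sys.dm_db_index_usage_stats reset at instance restart, so the observation window matters):

    # Sketch: flag databases with no recorded user access since restart.
    import pyodbc

    conn = pyodbc.connect("DRIVER={SQL Server};SERVER=sql2005clus;"
                          "Trusted_Connection=yes")

    rows = conn.execute("""
        SELECT d.name
        FROM sys.databases d
        LEFT JOIN sys.dm_db_index_usage_stats u
               ON u.database_id = d.database_id
              AND u.user_seeks + u.user_scans
                + u.user_lookups + u.user_updates > 0
        WHERE d.database_id > 4          -- skip system databases
        GROUP BY d.name
        HAVING COUNT(u.database_id) = 0  -- no user activity recorded
    """).fetchall()

    for (name,) in rows:
        print("candidate unused database:", name)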

    Other creators
  • EDW Disaster Recovery Implementation and Test

    -

    The project delivery includes
    1. Identifying the ways to implement a cost-effective DR solution between a DB2 9.7 FP7 DPF environment with 65 logical partitions and a 33 logical partition database environment.
    2. Understanding the overall Enterprise Stage implementation and the DR invocation with very minimal ETL downtime
    3. Configuring Cognos 10.1 reporting on the DR site
    4. Implementing the file system level changes to map 65 to 33 on SLES
    5. Create shell scripts to map APT configurations changes in IBM Information Server in the DR site
    6. Documenting the complete DR plan
    7. Invoking the DR with the current production performance baseline statistics in place
    8. Finishing all the DB2 database partition restores on time in 2 parallel threads (3.0 hours)
    9. Completing the roll-forward recovery with over 650 transaction logs
    10. Achieving a best RTO and RPO overall
    11. Creating the performance baseline for DR and publishing it with the business stake holders (expectation management)
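
    Progress of the parallel restores and the roll-forward in items 8 and 9 can be watched per partition. A minimal sketch of the kind of monitoring query available on DB2 9.7 (assuming the SYSIBMADM administrative views are enabled), not the project's actual tooling:

        -- Watch RESTORE / ROLLFORWARD utilities across the DPF partitions
        -- during DR invocation.
        SELECT dbpartitionnum,
               utility_type,
               utility_state,
               utility_start_time
        FROM sysibmadm.snaputil
        ORDER BY dbpartitionnum;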

  • Rapid Incident Miner (RIM) - An easy way to get hold of recurring problems

    -

    RIM helps identify pain points in each vertical by categorizing incidents into “Primary Prevention Focus Areas” (PPFA).

    • What benefits do we anticipate?

    o In each vertical, identify the focus areas that generate the most tickets.
    o Automatically categorize tickets into their focus areas (a hypothetical sketch follows this list).
    o This categorization helps the teams resolve issues faster and supports later analysis.
    o Identifying pain points helps the team build more efficient ways to solve them.
    o RIM helps each vertical train any new resource on its focus areas.

    Overall Business Benefits
    o The time a support resource takes to resolve tickets affects the business in many ways; RIM enables efficient and effective issue resolution.
    o RIM also enables resource-planning capabilities based on the PPFA in each vertical.
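
    As a hypothetical illustration of the auto-categorization bullet above (the table, columns, and keyword rules are invented for this sketch, not RIM's actual implementation), a keyword-based SQL pass might look like:

        -- Assign each incident to a PPFA bucket by keyword; the rules and
        -- the tickets table are illustrative only.
        SELECT t.ticket_id,
               t.short_description,
               CASE
                   WHEN t.short_description LIKE '%disk%'
                     OR t.short_description LIKE '%space%'    THEN 'Disk Space'
                   WHEN t.short_description LIKE '%backup%'   THEN 'Backup Failures'
                   WHEN t.short_description LIKE '%deadlock%'
                     OR t.short_description LIKE '%blocking%' THEN 'Concurrency'
                   ELSE 'Uncategorized'
               END AS ppfa
        FROM tickets AS t;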

    Other creators
    • R Karthik
    • Immanuel Rajkumar
  • DB Doctor – An Innovative Way of Solving Space Issues

    -

    This tool visualizes the growth of all database files on SQL Server instances. It helps identify the peak hours in a day when a particular database grows rapidly, which in turn helps in analyzing job failures and disk space issues.

    Over time, the tool should bring down the number of space-related tickets.
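
    A minimal sketch of the kind of snapshot DB Doctor could chart (the history table name is hypothetical; sys.master_files reports size in 8 KB pages): schedule the INSERT at a regular interval, then plot size_mb over captured_at per file to spot the peak growth hours.

        -- Hypothetical history table for trending database file growth.
        CREATE TABLE dbo.file_growth_history (
            captured_at       datetime     NOT NULL,
            database_name     sysname      NOT NULL,
            logical_file_name sysname      NOT NULL,
            file_type         nvarchar(60) NOT NULL,
            size_mb           int          NOT NULL
        );

        -- Capture one snapshot of every file on the instance.
        INSERT INTO dbo.file_growth_history
        SELECT GETDATE(),
               DB_NAME(mf.database_id),
               mf.name,
               mf.type_desc,
               mf.size * 8 / 1024          -- 8 KB pages -> MB
        FROM sys.master_files AS mf;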

    Other creators
    • Karthik R
    • Immanuel Rajkumar

Honors & Awards

  • Bravo Award for EDI Problem Troubleshooting

    BNSF Railway

  • You Nailed It!!

    BNSF Railway

    The "You Nailed It" award at BNSF refers to a recognition or appreciation award given to employees who have demonstrated exceptional accomplishment, or contribution in their work at BNSF Railway Company.

    This was awarded to Mohan for the pureScale production recovery of a highly critical system at BNSF.

  • IBM Gold Consultant - 2018 to 2024

    IBM

    IBM Gold Consultants are independent consultants (non-IBMers) who deliver superior consulting services to IBM clients, with demonstrated technical leadership, vast industry experience, market knowledge, and a strong digital presence. IBM and the Gold Consultants align on strategy, priorities, and initiatives for the collective benefit of IBM clients.

  • IBM Champion - 2010 to 2024

    IBM, United States

    The IBM Champion program recognizes innovative thought leaders in the technical community — and rewards these contributors by amplifying their voice and increasing their sphere of influence. An IBM Champion is an IT professional, business leader, developer, or educator who influences and mentors others to help them make best use of IBM software, solutions, and services.

    IBM Champions are not employees of IBM.

    http://www.ibm.com/developerworks/champion/

  • IBM Champion 2018 - Analytics

    IBM

    This individual is recognized as an innovative thought leader in the technical community who has demonstrated exceptional expertise and contribution in helping others derive greater value from IBM software, solutions, and services. An IBM Champion is a non-IBM IT professional, business leader, developer, or educator who influences and mentors others through blogging, speaking at conferences, moderating forums, leading user groups, and authoring books or articles.

  • IBM Champion 2017 - Analytics

    IBM

    This individual is recognized as an innovative thought leader in the technical community who has demonstrated exceptional expertise and contribution in helping others derive greater value from IBM software, solutions, and services. An IBM Champion is a non-IBM IT professional, business leader, developer, or educator who influences and mentors others through blogging, speaking at conferences, moderating forums, leading user groups, and authoring books or articles.

    WHAT IT TAKES TO EARN THIS BADGE
    To earn this badge, the individual must be nominated and selected as an IBM Champion. Nomination criteria include the following:
    Evangelize and advocate for IBM
    Share knowledge and expertise
    Help grow and nurture the community
    Expand reach across the IBM portfolio
    Share feedback with IBM teams

  • Most Creative Player

    BNSF - Infrastructure Technology Group

    Mohan was recognized as this season's "Most Creative Player" for phenomenal participation in the Operational Data Store (ODS) migration.

  • DBA of the Month - March 2015

    HP

  • DB2's Got Talent 2013 - Winner

    DBI Software, United States

    DB2’s Got Talent is a globally recognized DB2 talent program run by DBI. Mohan presented the topics listed below before emerging as the winner:

    1. Effective Data Movement Utilities in Data Warehouse Environment
    2. Simple steps for substantially reducing report execution time in Data Warehouse environment
    3. How DB2 10.1 Changed the Data Warehouse Terminology ETL to ETI?
    4. How to Migrate the DB2 9.7 database from BCU D5100 stack to ISAS 5600 V2 stack?
    5. DB2 9.7 Performance Tuning for Siebel 8.1 Application – A complete success story

  • Outstanding Achiever of the Month award for June 2008, October 2008, and November 2008, McAfee Inc.

    McAfee

    This was a departmental-level achievement award for development, code optimization, and database performance tuning. The contestants were Siebel developers, Siebel Operations, Application Support, and DBAs.

Languages

  • English

    -
