Published May 19, 2017 by

5 Ways To Improve Collaboration Between Testers And Developers

Teamwork between testers and developers is crucial for successfully deploying and maintaining a high-quality software product. Here are a few ways to reduce friction and increase collaboration between the testing team and developers:

1) Encourage Collaboration and Communication

Promote a collaborative work culture where quality is everyone's responsibility. Get testers and developers to work together while researching customers, building user stories, and creating unit tests. Inculcate a team culture of collaboration where testers are more than mere gatekeepers who approve or reject features before delivery.
Encourage both QA and developers to share their thoughts and ideas. Make everyone feel part of a single integrated team that is focused on the product's quality. Apart from arranging meetings, encourage both testers and developers to take the initiative to interact more often through informal channels. Relying too much on formal channels like e-mail not only wastes time but also discourages open communication and delays the resolution of conflicts. Better communication invites more collaboration.
Read More
Published May 12, 2017 by

Dynamic Pages, Window Alerts, Pop-Ups

  1. Is Selenium able to handle dynamic AJAX elements?
    1. Yes
    2. No
    3. Can't Say
    Ans: 1
  2. How to use xpath for dynamic elements?
    1. Identify the pattern and modify xpath pattern
    2. Directly use the xpath
    3. Both 1 and 2
    4. None of the above
    Ans: 1
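As a minimal sketch of answer 1 above, a dynamically generated id can be matched by building the XPath locator around the stable part of the pattern. The element ids below are hypothetical examples, not from any real page:

```python
# Sketch: build XPath locators around the stable part of a dynamic id.
# Ids like "submit_btn_8241" are hypothetical illustrations.

def dynamic_xpath(stable_prefix):
    """Match any element whose id starts with a stable prefix,
    ignoring the dynamically generated suffix."""
    return f"//*[starts-with(@id, '{stable_prefix}')]"

def contains_xpath(stable_fragment):
    """Match any element whose id merely contains a stable fragment."""
    return f"//*[contains(@id, '{stable_fragment}')]"

print(dynamic_xpath("submit_btn_"))   # //*[starts-with(@id, 'submit_btn_')]
print(contains_xpath("btn"))          # //*[contains(@id, 'btn')]
```

In Selenium the resulting string would be passed to a locator lookup such as `driver.find_element(By.XPATH, dynamic_xpath("submit_btn_"))`.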
Read More
Published May 12, 2017 by

Maven Questions & Answers

  1. What is Maven?
    1. software tool for project management
    2. build automation
    3. building and managing Java projects
    4. All of the above
    Ans: 4
  2. How to compile application sources using Maven?
    1. change to the directory where pom.xml is created by archetype:generate
    2. "execute the following command:mvn compile"
    3. Both 1 & 2
    4. None of the above
    Ans: 3
Read More
Published May 12, 2017 by

Selenium Grid, TestNG Questions & Answers

  1. What is Selenium Grid?
    1. Scale by distributing tests on several machines
    2. manage multiple environments from a central point, making it easy to run the tests against a vast combination of browsers / OS.
    3. minimize the maintenance time for the grid by allowing you to implement custom hooks to leverage virtual infrastructure for instance
    4. All of the above
    Ans: 4
  2. The selenium-server-standalone package includes the ____.
    1. Hub
    2. Webdriver
    3. legacy RC needed to run the grid
    4. All of the above
    Ans: 4
Read More
Published May 12, 2017 by

Webdriver Questions & Answers

    1. Which of these WebDriver interface methods is used to open a URL in the browser?
      1. get
      2. navigate().to
      3. Any of the above
      4. None of the above
      Ans: 3
    2. What is the difference between WebDriver close and quit methods?
      1. Nothing, these methods are interchangeable.
      2. close method clears the browser memory and the quit method closes the browser window.
      3. close method closes the browser but the quit method additionally removes the connection to the server.
      4. close method closes the main browser window but the quit method closes all the browser windows including popups.
      Ans: 4
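The distinction in answer 4 can be illustrated with a toy model of a driver session. This is not the real Selenium API, just a sketch of the close-versus-quit semantics:

```python
# Toy model of WebDriver close() vs quit() semantics (not the real API).

class ToyDriver:
    def __init__(self):
        self.windows = ["main"]     # open browser windows
        self.current = "main"
        self.session_alive = True   # connection to the driver server

    def open_popup(self, name):
        self.windows.append(name)
        self.current = name

    def close(self):
        """Close only the current window; the session stays alive."""
        self.windows.remove(self.current)
        self.current = self.windows[0] if self.windows else None

    def quit(self):
        """Close every window and end the driver session."""
        self.windows.clear()
        self.current = None
        self.session_alive = False

driver = ToyDriver()
driver.open_popup("popup1")
driver.close()                                # only "popup1" goes away
print(driver.windows)                         # ['main']
driver.quit()                                 # everything goes away
print(driver.windows, driver.session_alive)   # [] False
```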
Read More
Published May 05, 2017 by

ETL Tool Selection Process

Selecting a tool is an organization-level decision, since the tool will be the base for all future data integration work. Hence, a significant amount of analysis and time needs to be invested before selecting one.
Below are a few factors to consider when comparing tools:
Read More
Published May 05, 2017 by

ETL tools in data warehouse



There are multiple tools available in the market for the ETL process. The tools are built on different technologies and offer features for smooth end-to-end data integration. Here are a few ETL tools:
Read More
Published May 05, 2017 by

ETL job development steps in Informatica

  1. Open Informatica power center designer
  2. Add a repository
    • Click on “Repository -> Add”
    • Enter a repository name and username
    • Click on ok
  3. Connect to a repository
    • Right click on created repository
    • Click on connect
    • Click on Add under “Connection Settings” section
    • Enter value for “Domain name”, “Gateway Host” and “Gateway port”
    • Click on Ok
    • Select the security domain
    • Enter password
    • Click on connect
  4. Connect to a folder
    • Right click on the folder you want to work in
    • Click on Open
  5. Creating “Source Analyzer”
    • Click on “Source Analyzer” icon
    • Click on Sources from toolbar and select “Import from Database” or “Import from file”
    • Select “data source”
    • Enter username, schema, and password
    • Click on connect
    • You can see the list of tables under “Select Tables” section
    • Select a table
    • Click on ok
  6. Creating “Target Designer”
    • Click on “Target Designer” icon
    • Click on Targets from toolbar and select “Import from Database”
    • Select “data source”
    • Enter username, schema, and password
    • Click on connect
    • You can see the list of tables under “Select Tables” section
    • Select a table
    • Click on ok
  7. Creating Mappings
    • Click on “Mapping Designer” icon
    • Click on Mappings from toolbar and select “Create”
    • Enter mapping name and click ok
    • Drag and drop the created Source Analyzer and Target Designer
    • Click on Transformation from toolbar and select “Create”
    • Select type of transformation
    • Enter the name for transformation
    • Select the columns from source and drag into transformation component
    • Double click on transformation
  8. Creating Workflow
    • Click on “Workflow Manager” icon
    • Connect to the repository
    • Right click on folder and select Open
    • Click on “task” from toolbar and select create
    • Select the task type as “Session”
    • Enter session name
    • Click on Ok
    • Select the created mapping and click ok
    • Double click on Session
    • Click on Mapping tab
    • Select the source table
    • Set source database connections under “connections” section
    • Select the target table
    • Set target database connections under “connections” section
    • Link the Start task and session task
  9. Running Workflow
    • In the Workflow Designer section, right click on the workflow and select “Start workflow”
  10. Monitoring Workflow
    • “Workflow monitor” will be opened automatically when we start a workflow
    • The execution status will be displayed
    • Double click on the session
    • Session information including Source/Target statistics will be displayed
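The source → transformation → target flow that the mapping and session above wire up in the GUI can be sketched in plain Python. The table and column names are made up purely for illustration:

```python
# Sketch of a source -> expression transformation -> target mapping,
# mirroring the GUI flow above. Table/column names are hypothetical.

source_rows = [
    {"id": 1, "name": "Asha"},
    {"id": 2, "name": "Ravi"},
    {"id": 3, "name": "Meena"},
]

def expression_transformation(rows):
    """Like an Expression transformation: derive a column per row."""
    return [{**row, "name": row["name"].upper()} for row in rows]

def run_session(source, target):
    """Like the Session task: run the mapping and load the target."""
    transformed = expression_transformation(source)
    target.extend(transformed)
    return {"source_rows": len(source), "target_rows": len(transformed)}

target_table = []
stats = run_session(source_rows, target_table)
print(stats)   # source/target statistics, as shown in the Workflow Monitor
```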
Read More
Published May 05, 2017 by

Partitioning in Informatica

Partitioning is a technique of creating parallel threads that each process a share of the distributed data. It is used when the data volume is huge, since volume directly affects data load and transformation times.
Databases and ETL tools offer partitioning to improve job execution time for high-volume tables.
Below are the types of partitioning available in the Informatica PowerCenter tool:
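Whatever the partition type, the core idea can be sketched in a few lines: split the rows into partitions and process each partition in its own thread. This is a simplified illustration, not how PowerCenter is implemented internally:

```python
# Sketch of partitioning: round-robin the rows into N partitions and
# process each partition in its own thread.
from concurrent.futures import ThreadPoolExecutor

def partition(rows, n):
    """Distribute rows round-robin into n partitions."""
    parts = [[] for _ in range(n)]
    for i, row in enumerate(rows):
        parts[i % n].append(row)
    return parts

def load_partition(rows):
    """Stand-in for the per-partition transform/load work."""
    return sum(rows)  # pretend each partition is aggregated

rows = list(range(1, 101))
parts = partition(rows, 4)
with ThreadPoolExecutor(max_workers=4) as pool:
    partial_results = list(pool.map(load_partition, parts))
print(sum(partial_results))  # 5050, same result as a single-threaded load
```

The point is that the partitions are processed concurrently, yet the combined result matches the serial run.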
Read More
Published May 05, 2017 by

Informatica Tasks

Below are the major workflow tasks available in the Informatica PowerCenter tool.
  1. Session
    A session lets you select mappings and set their order of execution; the instructions for running the mappings are also captured here. It is a reusable task within a workflow.
Read More
Published May 05, 2017 by

Transformation Rules

The term transformation describes the processing required to modify the extracted source data into the required format.
  1. Active Transformation
    The output record count of the transformation may or may not equal the input record count.
    For example, apply a filter transformation on an age column with the condition age between 25 and 40. Only the rows that satisfy the condition come out, so the output count cannot be predicted from the input count.
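The filter example above can be shown directly: the number of rows coming out of an active transformation depends on the data, not on the input count. Names and ages here are made up:

```python
# An active transformation: output row count cannot be predicted
# from input row count. Sample rows are hypothetical.
people = [{"name": "A", "age": 19}, {"name": "B", "age": 28},
          {"name": "C", "age": 33}, {"name": "D", "age": 52}]

# Filter transformation: keep rows where age is between 25 and 40.
filtered = [p for p in people if 25 <= p["age"] <= 40]
print(len(people), len(filtered))  # 4 rows in, 2 rows out
```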
Read More
Published May 05, 2017 by

Types of data load in ETL

There are two major types of data load, based on the load process.
1. Full Load (Bulk Load)
A full load is the loading process performed the very first time. It is also referred to as a bulk load or fresh load.
The job extracts the entire volume of data from the source table or file and loads it into the truncated target table after applying the transformation logic.
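A minimal sketch of that truncate-and-reload behavior, using in-memory lists to stand in for the source and target tables (the row shapes are hypothetical):

```python
# Sketch of a full (bulk) load: truncate the target, then reload the
# entire source after applying the transformation logic.
def full_load(source_rows, target_table, transform):
    target_table.clear()                                    # truncate target
    target_table.extend(transform(r) for r in source_rows)  # reload everything
    return len(target_table)

source = [{"id": 1, "amt": 10.04}, {"id": 2, "amt": 20.55}]
target = [{"id": 99, "amt": 0.0}]               # stale rows from a prior run
loaded = full_load(source, target, lambda r: {**r, "amt": round(r["amt"], 1)})
print(loaded, target[0]["id"])  # 2 1  -- the stale rows are gone
```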
Read More
Published May 05, 2017 by

Extract Transform Load

ETL (extract, transform, and load) is the process of extracting data from the source, transforming it into the required format, and loading it into the target database.
Extract
Extraction is the process of selecting/fetching data from a source table or source file. Data comes to the data warehouse database from different types of sources (heterogeneous sources), so extraction must cover all source formats. Common source formats include relational databases, flat files, Excel files, and XML files.
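A small sketch of extracting from two of those heterogeneous formats, a flat (CSV) file and an XML fragment, and normalizing both into the same row shape. The sample data is invented for illustration:

```python
# Sketch: extract rows from two heterogeneous sources (CSV and XML)
# and normalize them into one common row format.
import csv
import io
import xml.etree.ElementTree as ET

csv_source = "id,name\n1,Asha\n2,Ravi\n"
xml_source = "<rows><row id='3' name='Meena'/></rows>"

def extract_csv(text):
    """Flat-file extraction via the csv module."""
    return [dict(r) for r in csv.DictReader(io.StringIO(text))]

def extract_xml(text):
    """XML extraction: one row per element, attributes as columns."""
    return [dict(el.attrib) for el in ET.fromstring(text)]

rows = extract_csv(csv_source) + extract_xml(xml_source)
print(rows)  # three rows in one common shape, ready for transform
```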
Read More
Published May 05, 2017 by

Data masking

What does data masking mean?

Organizations never want to disclose highly confidential information to all users. Access to sensitive data is restricted in all environments other than production. The process of masking/hiding/encrypting sensitive data is called data masking.
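A minimal sketch of one common masking strategy, substitution with a fixed fill character. Real masking tools offer many more strategies (shuffling, encryption, format-preserving tokens); the sample values below are made up:

```python
# Sketch of substitution-style masking: keep the last few characters
# visible and replace the rest with a fill character.
def mask(value, visible=4, fill="*"):
    """Mask all but the last `visible` characters of a string."""
    if len(value) <= visible:
        return fill * len(value)
    return fill * (len(value) - visible) + value[len(value) - visible:]

print(mask("4111111111111111"))              # ************1111
print(mask("john.doe@example.com", visible=0))  # fully masked
```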
Read More
Published May 05, 2017 by

OLAP vs OLTP


Transaction Database | Data Warehouse Database
Dedicated database for a specific subject area or business application | Integrated from different business applications
Does not keep history | Keeps history of data for analyzing past performance
Allows users to perform DML operations (Select/Insert/Update/Delete) | Allows only Select for end users
Main purpose is day-to-day transactions | Purpose is analysis and reporting
Data volume is low | Data volume is huge
Data is stored in normalized format | Data is stored in de-normalized format
Read More
Published May 05, 2017 by

Data Cleansing (data scrubbing)

Data cleansing is the process of removing irrelevant and redundant data, and correcting incorrect and incomplete data. It is also called data cleaning or data scrubbing.
Organizations growing amid heavy competition take business decisions based on their past performance data and future projections, and better decisions can only be made from correct, consistent data.
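Two of the most basic cleansing steps, dropping exact duplicates and rejecting rows with missing mandatory fields, can be sketched like this (the field names are hypothetical):

```python
# Sketch of basic cleansing: drop exact duplicates and reject rows
# with missing mandatory fields. Field names are hypothetical.
def cleanse(rows, required=("id", "email")):
    seen, clean = set(), []
    for row in rows:
        key = tuple(sorted(row.items()))
        if key in seen:
            continue                           # redundant (duplicate) row
        if any(not row.get(f) for f in required):
            continue                           # incomplete row
        seen.add(key)
        clean.append(row)
    return clean

dirty = [{"id": 1, "email": "a@x.com"},
         {"id": 1, "email": "a@x.com"},        # duplicate
         {"id": 2, "email": ""}]               # incomplete
print(cleanse(dirty))  # only the first row survives
```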
Read More
Published May 05, 2017 by

Data Retention – Archiving or Purging

What does data retention mean?

Data retention means defining how long data needs to be available in the database.
Why is a data retention policy required?
  1. Performance is impacted as the volume of data grows
  2. Cost rises as the database accumulates data
  3. Old records may no longer be useful to end users after a certain period of time
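A purge step enforcing such a policy can be sketched as follows; rows older than the cutoff are removed (in an archiving setup they would be copied elsewhere first). The dates and retention period are illustrative:

```python
# Sketch of a purge enforcing a retention policy: rows older than
# the cutoff date are removed (or archived first, then removed).
from datetime import date, timedelta

def purge(rows, retention_days, today):
    cutoff = today - timedelta(days=retention_days)
    return [r for r in rows if r["created"] >= cutoff]

rows = [{"id": 1, "created": date(2017, 1, 2)},
        {"id": 2, "created": date(2017, 5, 1)}]
kept = purge(rows, retention_days=90, today=date(2017, 5, 5))
print([r["id"] for r in kept])  # [2] -- the January row is past retention
```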
Read More
Published May 05, 2017 by

Operational data store (ODS)

What is an Operational data store (ODS)?

It is a database that integrates data from different businesses or sources, each with different rules, and the data cleansing process is applied as well. It gets data from the transactional database either directly or through a staging area.
It holds only a limited period of history, typically just 30 to 90 days of data.
Read More
Published May 05, 2017 by

Factless fact table

What is a factless fact table?

A fact table that does not contain any facts is called a factless fact table. It holds only the foreign keys of dimension tables.
How will it be useful?
It can be used to report whether an event has occurred, rather than to aggregate any data.
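A small sketch of a factless fact table, using class attendance as the classic example. Each row is nothing but foreign keys recording that an event occurred; the ids are invented:

```python
# Sketch of a factless fact table: each row is just foreign keys
# recording that an event (a student attending a class) occurred.
attendance = [  # (student_id, class_id, date_id) -- no measures at all
    (101, 7, 20170501),
    (102, 7, 20170501),
    (101, 7, 20170502),
]

# "Did student 102 attend class 7 on 2017-05-02?" -- an event-occurrence
# question, answered by row existence rather than aggregation:
attended = (102, 7, 20170502) in attendance
print(attended)  # False

# Event counts still work by counting rows:
print(sum(1 for s, c, d in attendance if c == 7))  # 3 attendance events
```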
Read More
Published May 05, 2017 by

Fact Table Types in Data warehousing

As we know, a fact table stores facts or measures. Based on the type of data, the level of rollup/granularity, and the frequency of data loading, there are four types of fact tables. The type of fact table is selected based on the business need.
Read More
Published May 05, 2017 by

Different types of fact in data warehousing

A fact table can have multiple facts and can reference multiple dimensions. Ultimately, the objective of a fact is to support aggregations that let the data be viewed across different dimensions.
There are three fact types, categorized by how far each fact can be summed up/rolled up across the dimensions of its fact table.
Read More
Published May 05, 2017 by

Conformed dimension in data warehouse

What is a Conformed dimension?

As we know, a dimension table stores non-quantifying data. Dimensions are used as viewpoints in the analysis process. A data warehouse database contains integrated fact and dimension tables for different business lines.
In some cases, based on business need, a dimension is referenced by the fact tables of multiple business lines. This type of dimension is called a conformed dimension.
Read More
Published May 05, 2017 by

Degenerate dimension with example in data warehouse


What is a Degenerate dimension?

A degenerate dimension is a dimension attribute that is stored in the fact table itself and has no dimension table of its own.
A typical example is a transaction or invoice number: it is useful for grouping and drill-through, but it carries no other descriptive attributes that would justify a separate dimension table.
Read More
Published May 05, 2017 by

Junk Dimension in data warehouse

What is a Junk dimension?

A dimension that contains only Boolean/flag columns, apart from its primary or surrogate key, is called a junk dimension. It can be derived from dimension or fact table columns.
Consider a fact table that contains more than one Boolean column: all the Boolean columns are grouped into a single table along with a surrogate key.
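Building such a junk dimension can be sketched as follows: collect the distinct combinations of the flag columns, assign each combination a surrogate key, and replace the flags on the fact rows with that key. The column names are invented for illustration:

```python
# Sketch of building a junk dimension: collect distinct combinations
# of flag columns and assign each a surrogate key.
def build_junk_dimension(rows, flag_cols):
    junk, key_of = [], {}
    for row in rows:
        combo = tuple(row[c] for c in flag_cols)
        if combo not in key_of:
            key_of[combo] = len(junk) + 1           # surrogate key
            junk.append({"junk_key": key_of[combo],
                         **dict(zip(flag_cols, combo))})
        row["junk_key"] = key_of[combo]             # key stored on the fact
    return junk

facts = [{"order_id": 1, "is_gift": True,  "is_rush": False},
         {"order_id": 2, "is_gift": True,  "is_rush": False},
         {"order_id": 3, "is_gift": False, "is_rush": True}]
dim = build_junk_dimension(facts, ["is_gift", "is_rush"])
print(len(dim))                                    # 2 distinct combinations
print(facts[0]["junk_key"], facts[2]["junk_key"])  # 1 2
```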
Read More
Published May 05, 2017 by

Rapidly Changing Dimension (RCD)

What is a Rapidly changing dimension (RCD) or Fast growing dimension?

A rapidly changing dimension (RCD) is a dimension with attributes whose values change often.

How to handle the RCD type table?

There are three major types of solutions for handling slowly changing dimensions (SCD); a rapidly changing dimension, however, is handled by creating a mini-dimension.
Read More