
Monday, July 29, 2013

Metrics in Automation


                                                            AUTOMATION METRICS

 



“When you can measure what you are speaking about, and can express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind.”

-- Lord Kelvin, a physicist.

 

 

A successful automated testing program starts with clearly defined goals and strategies. Once implementation begins, progress against the goals and strategies set out at the onset of the program needs to be tracked and measured continuously. This article discusses various automated and general testing metrics that can be used to measure and track that progress.

Based on the outcome of these metrics, the defects remaining to be fixed in a testing cycle can be assessed, and schedules or goals can be adjusted accordingly. For example, if a feature still has too many high-priority defects, a decision can be made to move the ship date, or to ship or go live without that specific feature.

Success is measured based on the goal we set out to accomplish relative to the expectations of our stakeholders and customers.

If you can measure something, you can quantify it. If you can quantify something, you can explain it in more detail and know something more about it. If you can explain it, you have a better chance of improving it, and so on.

Metrics can provide insight into the status of automated testing efforts.

Automation efforts can provide larger test coverage and increase the overall quality of a product, and can also reduce the time of testing and the cost of delivery. This benefit is typically realized over multiple test and project cycles. Automated testing metrics can aid in assessing whether progress, productivity, and quality goals are being met.

What is a Metric?

The basic definition of a metric is a standard of measurement. It also can be described as a system of related measures that facilitates the quantification of some particular characteristic. For our purposes, a metric can be looked at as a measure which can be utilized to display past and present performance and/or used for predicting future performance.

What Are Automated Testing Metrics?

Automated testing metrics are metrics used to measure the performance (past, present, and future) of the implemented automated testing process.

What Makes A Good Automated Testing Metric?

As with any metric, an automated testing metric should tie back to the clearly defined goals of the automation effort. It serves no purpose to measure something for the sake of measuring; to be meaningful, a metric should relate directly to the performance of the effort.

Prior to defining the automated testing metrics, there are some metric-setting fundamentals worth reviewing. Before measuring anything, set goals: what is it you are trying to accomplish? Goals are important; if you do not have goals, what is it that you are measuring? It is also important to track and measure on an ongoing basis. Based on the metrics' outcomes, you can then decide whether deadlines, feature lists, process strategies, and so on need to be adjusted. As a step toward goal setting, ask questions about the current state of affairs, and decide which questions will determine whether you are tracking toward the defined goals. For example:

  • How much time does it take to run the test plan?
  • How is test coverage defined (KLOC, FP, etc.)?
  • How much time does it take to do data analysis?
  • How long does it take to build a scenario/driver?
  • How often do we run the test(s) selected?
  • How many permutations of the test(s) selected do we run?
  • How many people do we require to run the test(s) selected?
  • How much system time/lab time is required to run the test(s) selected?
  • and so on

In essence, a good automated testing metric has the following characteristics:

  • is objective
  • is measurable
  • is meaningful
  • has data that is easily gathered
  • can help identify areas of test automation improvement
  • is simple

 

A good metric is clear and objective: it can be measured, it has meaning to the project, it does not take enormous effort or resources to gather its data, and it is simple to understand. A few more words about metrics being simple: Albert Einstein once said,



“Make everything as simple as possible, but not simpler.”

When applying this wisdom to software testing, you will see that:

  • Simple reduces errors
  • Simple is more effective
  • Simple is elegant
  • Simple brings focus

 

Percent Automatable

At the beginning of an automated testing effort, the project is either automating existing manual test procedures, starting a new automation effort from scratch, or some combination of both. Whichever the case, a percent automatable metric can be determined.

Percent automatable can be defined as: of a set of given test cases, how many are automatable? This could be represented in the following equation:

PA (%) = ATC / TC = ( # of test cases automatable / # of total test cases )

PA = Percent Automatable

ATC = # of test cases automatable

TC = # of total test cases
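To make the ratio concrete, here is a minimal Python sketch; the function name and sample counts are illustrative, not from the article:

```python
def percent_automatable(automatable_cases, total_cases):
    """PA(%) = ATC / TC, expressed as a percentage."""
    if total_cases <= 0:
        raise ValueError("total_cases must be positive")
    return 100.0 * automatable_cases / total_cases

# Hypothetical backlog: 360 of 480 test cases judged automatable
print(percent_automatable(360, 480))  # prints 75.0
```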

In evaluating test cases to be developed, what should be considered automatable and what should not? Given enough ingenuity and resources, one can argue that almost anything can be automated. So where do you draw the line? An example of something reasonably considered ‘not automatable’ is an application area that is still under design, not very stable, and largely in flux. In cases such as this, we should:

“evaluate whether it makes sense to automate”

 

We would evaluate, for example, given the set of automatable test cases, which ones would provide the biggest return on investment:

“just because a test is automatable doesn’t necessarily mean it should be automated”

When going through the test case development process, determine which tests can be and make sense to automate, and prioritize your automation effort based on the outcome. This metric can be used, for example, to summarize the percent automatable of various projects or components within a project, and to set the automation goal.

 

Automation Progress

Automation Progress refers to the number of automatable test cases that have actually been automated at a given point in time. Basically, how well are you doing against the goal of automated testing? The goal is to automate 100% of the “automatable” test cases. This metric is useful to track during the various stages of automated testing development.

AP (%) = AA / ATC = ( # of actual test cases automated / # of test cases automatable )

AP = Automation Progress

AA = # of actual test cases automated

ATC = # of test cases automatable
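The same kind of sketch applies here; the function name and counts are illustrative:

```python
def automation_progress(automated_cases, automatable_cases):
    """AP(%) = AA / ATC: share of the automatable cases automated so far."""
    if automatable_cases <= 0:
        raise ValueError("automatable_cases must be positive")
    return 100.0 * automated_cases / automatable_cases

# Hypothetical status: 90 of 360 automatable test cases automated so far
print(automation_progress(90, 360))  # prints 25.0
```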

The Automation Progress metric is typically tracked over time, for example week by week.

A common metric closely associated with progress of automation, yet not exclusive to automation is Test Progress. Test progress can simply be defined as the number of test cases attempted (or completed) over time.

TP = TC / T = ( # of test cases attempted or completed / time in days, weeks, months, etc. )

TP = Test Progress

TC = # of test cases (either attempted or completed)

T = some unit of time (days / weeks / months, etc)

The purpose of this metric is to track test progress and compare it to the plan. It can be used to show where testing is tracking against the overall project plan. Test Progress over the period of a project usually follows an “S” shape, which mirrors the testing activity during the project lifecycle: little initial testing, followed by an increased amount of testing through the various development phases and into quality assurance, prior to release or delivery.

This is a metric to show progress over time. A more detailed analysis is needed to determine pass/fail, which can be represented in other metrics.

Percent of Automated Testing Test Coverage

Another automated software metric we want to consider is Percent of Automated Testing Test Coverage. That is a long title for a metric that determines what test coverage the automated testing is actually achieving, and it indicates the completeness of the testing. This metric measures not so much how much automation is being executed, but rather how much of the product’s functionality is being covered. For example, 2,000 test cases executing the same or similar data paths may take a lot of time and effort to execute, but do not equate to a large percentage of test coverage. Percent of automated testing coverage does not say anything about the effectiveness of the testing taking place; it is a metric that measures its dimension.

PTC (%) = AC / C = ( automation coverage / total coverage )

PTC = Percent of Automatable testing coverage

AC = Automation coverage

C = Total Coverage (KLOC, FP, etc)

Size of system is usually counted as lines of code (KLOC) or function points (FP). KLOC is a common method of sizing a system; however, FP has also gained acceptance, and some argue that FPs can size software applications more accurately. Function Point Analysis was developed in an attempt to overcome difficulties associated with KLOC (or just LOC) sizing. Function Points measure software size by quantifying the functionality provided to the user based on logical design and functional specifications. There is a wealth of material available regarding the sizing or coverage of systems. A useful resource is Stephen H. Kan’s book entitled “Metrics and Models in Software Quality Engineering” (Addison Wesley, 2003).

The Percent Automated Test Coverage metric can be used in conjunction with the standard software testing metric called Test Coverage.

TC (%) = TTP / TTR = ( total # of Test Procedures developed / total # of defined Test Requirements )

TC = Percent of Testing Coverage

TTP = Total # of Test Procedures developed

TTR = Total # of defined Test Requirements

This measurement of test coverage divides the total number of test procedures developed by the total number of defined test requirements. It provides the test team with a barometer to gauge the depth of test coverage. The depth of test coverage is usually based on the defined acceptance criteria. When testing a mission-critical system, such as an operational medical system, the test coverage indicator needs to be high relative to that of non-mission-critical systems. The depth of test coverage for a commercial software product that will be used by millions of end users may also be high relative to a government information system with a couple of hundred end users.

Defect Density

Measuring defects is a discipline to be implemented regardless of whether the testing effort is automated or not. Josh Bloch, chief Java architect at Google, stated:



“Regardless of how talented and meticulous a developer is, bugs and security vulnerabilities will be found in any body of code – open source or commercial. Given this inevitability, it’s critical that all developers take the time and measures to find and fix these errors.”

Defect density is another well-known metric, not specific to automation. It is a measure of the total known defects divided by the size of the software entity being measured. For example, if there is a high defect density in a specific piece of functionality, it is important to conduct a causal analysis. Is this functionality very complex, so a high defect density is to be expected? Is there a problem with the design or implementation of the functionality? Were the wrong (or not enough) resources assigned to the functionality because an inaccurate risk level had been assigned to it? It could also be inferred that the developer responsible for this specific functionality needs more training.

DD = D / SS = ( # of known defects / total size of system )

DD = Defect Density

D = # of known defects

SS = Total Size of system
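A minimal sketch, computing defect density per component; the component names, defect counts, and sizes are hypothetical:

```python
def defect_density(known_defects, size_kloc):
    """DD = D / SS: known defects per KLOC (or per function point)."""
    if size_kloc <= 0:
        raise ValueError("size_kloc must be positive")
    return known_defects / size_kloc

# Hypothetical components: (name, known defects, size in KLOC)
components = [("login", 14, 2.0), ("reports", 45, 30.0), ("billing", 9, 12.0)]
for name, defects, kloc in components:
    print(name, defect_density(defects, kloc))
```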

One use of defect density is to map it against software component size. A typical defect density curve that we have experienced shows small and larger components having a higher defect density ratio. Additionally, when evaluating defect density, the priority of the defects should be considered. For example, one application requirement may have as many as 50 low-priority defects and still pass because the acceptance criteria have been satisfied, while another requirement might have only one open defect that prevents the acceptance criteria from being satisfied because it is high priority. Higher-priority requirements are generally weighted more heavily.

The graph below shows one approach to utilizing the defect density metric. Projects can be tracked over time (for example, stages in the development cycle).

Another closely related metric to Defect Density is Defect Trend Analysis. Defect Trend Analysis is calculated as:

(Graph adapted from: http://www.teknologika.com/blog/SoftwareDevelopmentMetricsDefectTracking.aspx)

DTA = D / TPE = ( # of known defects / # of test procedures executed )

DTA = Defect Trend Analysis

D = # of known Defects

TPE = # of Test Procedures Executed over time
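A sketch of the trend over successive snapshots; the cumulative counts are hypothetical:

```python
def defect_trend(known_defects, procedures_executed):
    """DTA = D / TPE: defects found per test procedure executed."""
    if procedures_executed <= 0:
        raise ValueError("procedures_executed must be positive")
    return known_defects / procedures_executed

# Cumulative (defects, test procedures executed) snapshots over time;
# a falling ratio suggests the trend is improving as testing winds down
snapshots = [(30, 100), (45, 250), (50, 500)]
print([round(defect_trend(d, tpe), 2) for d, tpe in snapshots])  # prints [0.3, 0.18, 0.1]
```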

Defect Trend Analysis can help determine the trend of defects found: is the trend improving as the testing phase winds down, or is it worsening? Defects that test automation uncovered which manual testing did not, or could not, have found are an additional way to demonstrate ROI. During the testing process, we have found defect trend analysis to be one of the more useful metrics for showing the health of a project. One approach to showing the trend is to plot the total number of defects along with the number of open Software Problem Reports, as shown in the graph below.


Effective Defect Tracking Analysis can present a clear view of the status of testing throughout the project. A few additional common metrics sometimes used related to defects are as follows:

 

  • Cost to locate defect = cost of testing / number of defects located

  • Defects detected in testing = defects detected in testing / total system defects

  • Defects detected in production = defects detected in production / system size
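Two of these ratios sketched in Python; the dollar figure and defect counts are hypothetical:

```python
def cost_to_locate_defect(cost_of_testing, defects_located):
    """Cost to locate defect = cost of testing / number of defects located."""
    if defects_located <= 0:
        raise ValueError("defects_located must be positive")
    return cost_of_testing / defects_located

def defects_detected_in_testing_ratio(found_in_testing, total_system_defects):
    """Share of all known system defects that testing caught."""
    return found_in_testing / total_system_defects

# Hypothetical cycle: $50,000 of testing found 125 defects; 150 known overall
print(cost_to_locate_defect(50_000, 125))  # prints 400.0
print(defects_detected_in_testing_ratio(125, 150))
```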

 

Some of these metrics can be combined and used to enhance quality measurements as shown in the next section.

Actual Impact on Quality

One of the more popular metrics for tracking quality through testing (if defect count is used as a measure of quality) is Defect Removal Efficiency (DRE). It is not specific to automation, but very useful when used in conjunction with automation efforts. DRE is used to determine the effectiveness of your defect removal efforts and is also an indirect measurement of the quality of the product. The value of DRE is calculated as a percentage; the higher the percentage, the higher the positive impact on the quality of the product, because it represents the timely identification and removal of defects at any particular phase.

DRE (%) = DT / (DT + DA) = ( # of defects found during testing / (# of defects found during testing + # of defects found after delivery) )

DRE = Defect Removal Efficiency

DT = # of defects found during testing

DA = # of defects found after delivery

The highest attainable value of DRE is “1”, which equates to 100%. In practice, we have found that an efficiency rating of 100% is not likely. DRE should be measured during the different development phases; if DRE is low during analysis and design, it may indicate that more time should be spent improving the way formal technical reviews are conducted, and so on.

This calculation can be extended for released products as a measure of the number of defects in the product that were not caught during the product development or testing phase.
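A minimal sketch of the DRE calculation; the defect counts are hypothetical:

```python
def defect_removal_efficiency(found_in_testing, found_after_delivery):
    """DRE(%) = DT / (DT + DA); higher means more defects caught before release."""
    total = found_in_testing + found_after_delivery
    if total == 0:
        raise ValueError("no defects recorded")
    return 100.0 * found_in_testing / total

# Hypothetical release: 270 defects found in testing, 30 reported after delivery
print(defect_removal_efficiency(270, 30))  # prints 90.0
```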

Other Software Testing Metrics

Along with the metrics mentioned in the previous sections, here are a few more common test metrics. These metrics do not necessarily just apply to automation, but could be, and most often are, associated with software testing in general. These metrics are broken up into three categories:

  • Coverage: meaningful parameters for measuring test scope and success.

  • Progress: parameters that help identify test progress, to be matched against success criteria. Progress metrics are collected iteratively over time and can be used to graph the process itself (e.g. time to fix defects, time to test).

  • Quality: meaningful measures of the excellence, worth, value, etc. of the testing product. It is difficult to measure quality directly; however, measuring the effects of quality is easier and possible.

 

(Adapted from “Automated Software Testing”, Addison Wesley, 1999, Dustin et al.)



Metric Name (Category): Description

Test Coverage (Coverage): Total number of test procedures / total number of test requirements. Indicates planned test coverage.

System Coverage Analysis (Coverage): Measures the amount of coverage at the system interface level.

Test Procedure Execution Status (Progress): Executed number of test procedures / total number of test procedures. Indicates the extent of the testing effort still outstanding.

Error Discovery Rate (Progress): Total number of defects found / number of test procedures executed. Uses the same calculation as the defect density metric; used to analyze and support a rational product release decision.

Defect Aging (Progress): Date the defect was opened versus date the defect was fixed. Provides an indication of defect turnaround time.

Defect Fix Retest (Progress): Date the defect was fixed and released in a new build versus date the defect was retested. Indicates whether the testing team is retesting fixes fast enough to keep the progress metric accurate.

Current Quality Ratio (Quality): Number of test procedures successfully executed (without defects) versus total number of test procedures. Indicates the amount of functionality that has been successfully demonstrated.

Quality of Fixes (Quality): Total number of defects reopened / total number of defects fixed. Provides indications of development issues. The related ratio of previously working functionality versus new errors introduced tracks how often previously working functionality was adversely affected by software fixes.

Problem Reports (Quality): Number of Software Problem Reports, broken down by priority.

Test Effectiveness (Quality): Assessed statistically to determine how well the test data has exposed the defects contained in the product.

Test Efficiency (Quality): Number of tests required / number of system errors.

 

Types of Licenses in QTP


Types of UFT (QTP) licenses

There are two main types of UFT licenses:

  1. Seat License: This license is tied to the computer on which it is installed. The trial or demo license of UFT is a seat license with a validity of 30 days; you don’t require any keys for the trial license.
  2. Concurrent License: Also known as a floating license. This license type requires a concurrent license server to be installed in your office/local network. With concurrent licensing, a pool of licenses is assigned to the concurrent license server, and anybody on the local network can take a license from this pool as long as at least one is available. For example, say your company has purchased 50 concurrent licenses of UFT. All 50 licenses are assigned to the license server, so at any point in time a maximum of 50 people on your company’s local network can work on UFT.
    1. Commuter License: A special type of concurrent license that can be used when you don’t have access to the license server. In this case, you check out a license from the concurrent license server for ‘n’ days, where n <= 180. A use case: say you need to travel for work somewhere you can’t connect to your company’s concurrent license server. You can check out a license from the server before you leave, do your work, and check the license back in to the pool. For the duration the license is checked out, it behaves like a seat license on your machine, and the number of licenses on the license server is reduced by one.
    2. Remote Commuter License: Used when you want a license for a particular machine (say John’s machine) but John’s machine is not able to connect to the license server for checkout purposes. In that case, you take the help of a machine (say Mike’s machine) which IS able to connect to the license server and check out a license for John’s machine.

Page Load Time Using VBScript

url = "http://www.anyurl.com/"
Set dom = WScript.CreateObject("InternetExplorer.Application") ' Create an IE automation object
dom.Visible = True              ' Make the browser window visible
time_start = Now()              ' Wall-clock time when navigation starts
timer_start = Timer()           ' Seconds since midnight (fractional), for elapsed-time math
dom.Navigate url                ' Open the specified URL

' Poll while IE is busy or the page is not fully loaded (ReadyState 4 = complete),
' recording the time at which each loading state is first observed.
While dom.Busy Or (dom.ReadyState <> 4)
    WScript.Sleep 1             ' 1 ms interval; a longer interval may miss short-lived states
    Select Case dom.ReadyState
        Case 1                  ' Sending request
            time1 = Now() : timer1 = Timer()
        Case 2                  ' Request has been sent
            time2 = Now() : timer2 = Timer()
        Case 3                  ' Receiving partial response data
            time3 = Now() : timer3 = Timer()
        Case 4                  ' Page is loaded
            time4 = Now() : timer4 = Timer()
    End Select
Wend
time_end = Now()                ' Wall-clock time when loading finished

timeCount = "Start time: " & time_start & vbCrLf & _
            "time1 (request sent): " & time1 & vbCrLf & _
            "time2 (request completed): " & time2 & vbCrLf & _
            "time3 (partial response received): " & time3 & vbCrLf & _
            "time4 (page loaded): " & time4 & vbCrLf & _
            "Initialize IE and send the request: " & (timer1 - timer_start) & " seconds" & vbCrLf & _
            "Send completed to partial response received: " & (timer3 - timer1) & " seconds" & vbCrLf & _
            "Receive and parse the full HTML content: " & (timer4 - timer3) & " seconds" & vbCrLf & _
            "Total: " & (timer4 - timer_start) & " seconds"
MsgBox timeCount


 

Tuesday, July 9, 2013

Interview Tips for Experienced Professionals

Job Interviewing for the Experienced Professional
An interview is an exchange of information. It is important to remember to leave the interview with as much information as possible in order to make an informed decision when the job offer is made.

BASIC GUIDELINES

  • Be prepared! Review information on the organization and the position well in advance of the interview. Be prepared to talk about your assets and how they relate to the organization and position.
  • Be comfortable discussing everything on your resume; some interviewers will use it as their only guide for the interview.
  • Practice! Have a friend ask you common interview questions.
  • Dress appropriately. A positive first impression gets the interview off to a good start. If you do not know what is appropriate dress, ask the employer what is appropriate dress for an interview with their organization. Many employers now have a business casual work environment; however, most prefer professional dress for interviews.
  • Utilize nonverbal communication to show your interest.
  • Be positive. Keep answers to questions positive and upbeat; do not dwell on negatives.
  • Use examples from professional work experience, projects, achievements, and community involvement. Interviewers often hear the same answers from several candidates, but the stories you tell are unique to you.
  • Listen attentively to the interviewer. If you do not understand a question, ask to have it restated.
  • Let the interviewer control the questions while you control the answers. Controlling the answers means that you will be deciding what to say and what examples to give as a result of your interview preparation.
  • If you do not know the answer to the question, don't be afraid to admit it.
  • If you think your answer may have been too short, ask the interviewer if you answered the question or if he or she would like additional details. If you think your answers are too long and the interviewer does not maintain eye contact with you, stop and ask if you are answering the question.
  • Be honest. Any information you give is subject to verification.
  • Being nervous is normal; denying it will make you more anxious. If you are interested in the position whether it is a promotional opportunity with your current employer or a new employer, you will be nervous. You will be making an important decision based on the interview. The interviewer is interested in getting to know you and as a rule, will try to relieve your anxiety.
  • At the conclusion of the interview, if you are still interested, politely reaffirm your interest in the position.
 

RESEARCH THE EMPLOYER

  • Don't expect the employer to educate you about what they do! Identify the organization's products or services, investigate its history and growth, and learn what you can about the positions for which you are applying. If you cannot find any specific information about the organization, then learn something about the industry or field.
  • Request a copy of the job description for the position you are considering. It will help you identify your strengths as they relate to the position.

 

QUESTIONS YOU SHOULD BE PREPARED TO ANSWER

Many of the questions interviewers ask are included in this section. No two interviews or interviewers will be alike. Questions generally take three forms: situational, where an applicant is asked to respond to a given situation; observational, where an applicant is asked to reflect upon the actions of a third party; or conceptual, where an applicant is asked about their personal philosophy or future goals. However, you should be prepared to answer the following questions in any interview, including the behavioral interview questions that follow in the next section.

 

  • Please give me an overview of your qualifications. This is the most frequently asked question in interviews. Always be prepared to summarize your background as it relates to the position for which you are interviewing. It is a wonderful opportunity to sell yourself and you should look forward to this question. Tell the interviewer where you plan to start. You may want to go back to high school if you feel it is relevant, or start with college. Briefly comment on items highlighted on your resume.
  • What are your career goals? This question tests whether you've determined your career goals, and whether your goals match what the organization has to offer. Sound clear and definite about your goals and demonstrate your knowledge of the organization.
  • Employers are concerned about loyalty and staff turnover. Emphasize the fact that you are being very thorough with your job search to assure that you find the right match.
  • Why do you want to work for our organization? This is your opportunity to demonstrate what you know about the organization from your research. Reasons might include the reputation of the organization or department in terms of products or service; the company's rapid growth, or positive information you have received from employees of the organization.
  • Why are you specifically interested in this position? Comment on the skills and experiences you possess that relate to the position. If it is a promotional opportunity, discuss why you are interested in the challenge and how you have prepared yourself for the additional responsibilities.
  • What are your strengths? Your strengths may be your leadership experience, your academic achievement, your career commitment, your relevant experience, or personal traits such as motivation and dependability. Don't be afraid to repeat or emphasize items on your resume or items that may have already been discussed in the interview.
  • What are some areas of expertise you feel you still need to develop as a professional? Comment on areas that you continue to improve upon such as your computer knowledge or your time management. If you obviously don't meet one of the qualifications for the position, address that issue and discuss how you will acquire that knowledge or skill.
  • Tell me what you learned from your previous work experiences. Be prepared to spend the majority of the interview on this topic. Be ready to give more detail on your responsibilities. Discuss what you learned and observed, and how you grew professionally. Give examples of what you accomplished. Relay positive feedback given to you by co-workers and supervisors.
  • Please discuss your personality strengths as they relate to this position. Make a list of 6-8 of your personality traits that you believe are assets. Write down experiences and examples that demonstrate these traits and be prepared to relay them in the interview.
  • What additional comments do you wish to make regarding your application? This question usually comes at the end of the interview. If there are important experiences or skills and abilities that you have not had the opportunity to discuss, mention them now. Encourage them to contact your references. Tell them how interested you are in the position.
 

BEHAVIOR-BASED/TARGETED INTERVIEWS

Some employers believe that the best predictor of future success is past success. In behavior-based interviews, you will constantly be asked to give examples or stories, to provide evidence that you have the skills required for the position. In fact, the interviewer will not continue until you have provided a specific example. Success in behavior-based interviews requires preparation and practice. You must be able to recall many experiences quickly, select the most appropriate one, and then describe it effectively. Create a list of 15-20 experiences that demonstrate a variety of your skills and abilities. Draw upon your college experiences, academic and extracurricular; volunteer and work experiences, and when appropriate, personal experiences. Practice telling about these experiences. When answering behavior-based questions, be certain to answer the question completely. One way to do this is to follow the STAR acronym in planning and presenting your answers. 
    
Situation or Task: Describe the situation that you were in or the task that you needed to accomplish. You must describe a specific event or situation, not a generalized description of what you have done in the past. Be sure to give enough detail for the interviewer to understand.
Action you took: Keep the focus on you. Even if you are discussing a group project or effort, describe what you did, not the efforts of the team. Don't tell what you might do; tell what you did.
Results you achieved: What happened? How did the event end? What did you accomplish?

Here is a list of sample behavior-based interview questions that may help you practice:

Teamwork/Cooperation

  • Please give me your best example of working cooperatively as a team member to accomplish an important goal. What was the goal or objective? What was your role in achieving this objective? To what extent did you interact with others on this project?
  • Describe a project you were responsible for that required interaction with people over a long period of time.
  • Describe a time when you contributed to a team's achievements.
  • Give me an example of a time when you motivated others.

Customer Orientation

  • Give me a specific example of a time when you had to address an angry customer. What was the problem and what was the outcome? How would you assess your role in defusing the situation?
  • Describe a service that you have provided or experienced that you believe represents a concern for the customer.

Creativity/Innovation

  • Describe the most significant or creative presentation/idea that you developed/implemented.
  • Can you give me an example of how you have been creative in completing your responsibilities?

Flexibility/Adaptability to Change/Continuous Learning/Development

  • Tell me about a decision you made while under pressure.
  • Give me an example of how you react in a pressure situation. How did the situation come about? How did you react? What made you decide to handle it that way? What effect, if any, did this have on your other responsibilities?
  • Describe a decision you made or a situation that you would have handled differently if you had to do it over again.
  • Tell me about a time when your supervisor/co-workers gave you feedback about your work/actions. What did you learn about yourself?
  • Give me an example of something you have done that was unique to further your own professional development in college.
  • Tell me about a time when you were asked to complete a difficult assignment even though the odds were against you. What did you learn from that experience?

Leadership/Initiative

  • Give me an example of a time when you went beyond the call of duty in order to get the job done.
  • Describe a situation in which you were able to use persuasion to successfully convince someone to approach things your way. At what level in the organization was the person you had to persuade?
  • Describe a leadership situation that you would handle differently if you had to do it over again.
  • Tell me about a time when you reached out for additional responsibility.
  • Tell me about a project/suggestion that you initiated. Explain how you communicated the project/suggestion.
  • Give me an example of what you have done in your present/previous job that goes beyond what was required.
  • Give me an example of when you showed initiative and took the lead.
  • Give me an example of something you've done in previous jobs that demonstrates your willingness to work.

Supports Diversity and Understands Related Issues

  • Tell me about a time when you had to adapt to a wide variety of people by accepting/understanding their perspective.
  • Give me an example of something you have done to further your knowledge/understanding of diversity.
  • Tell me about a time that you successfully adapted to a culturally different environment.
  • Tell me about a time that you evaluated your own beliefs or opinions around issues of difference.

Honesty/Fairness/Integrity/Trust

  • Tell me about a specific time when you had to handle a tough problem that challenged your sense of fairness or raised ethical issues.
  • Give me examples of how you have acted with integrity (walked your talk) in your job/work relationships.
  • Can you tell me about a time when you chose to trust someone? What was the outcome?

Planning/Organization/Goal Setting

  • Describe a time when you set high standards for the quality of your work.
  • Give me an example of a time when you set a goal and were able to meet or achieve it.
  • Tell me about a time when you had too many things to do and you were required to prioritize your tasks.
  • Are you better at working on many things at a time, or are you better at working on and getting results from a few specific things? Please give me two examples that illustrate this.
  • Describe one of your best accomplishments, including where the assignment came from, your plans in carrying it out, how you eventually did carry it out, and any obstacles you overcame.

Problem Solving/Judgment/Stress Management

  • Describe an instance when you had to think quickly to free yourself from a difficult situation.
  • Describe a time when you were faced with a stressful situation that demonstrated your coping skills.
  • Give an example of a challenging problem that you are proud you solved.
  • What is your typical way of dealing with conflict? Give me an example.
  • Give me an example of a time when you used your fact-finding skills to solve a problem.
  • Give me a specific example of a time when you used good judgment and logic in solving a problem.

Making Effective Decisions

  • Tell me about an experience in which you had a limited amount of time to make a difficult decision.
  • Tell me about a difficult decision you've made in the last year.
  • Tell me about a decision that you've made in the past that if you had it to do over, you would do differently.

Communicate Effectively

  • Describe a situation in which you were able to use persuasion to successfully convince someone to see things your way.
  • Tell me about a time in which you had to use your written communication skills in order to get an important point across.
  • Tell me about a time when you had to use your presentation skills to influence someone's opinion.

 

  

POSSIBLE QUESTIONS TO ASK

At some point in the interview, usually at the end, the interviewer will ask if you have any questions. You should plan your questions in advance of the interview and perhaps write them down on index cards or a note pad to take with you. Prepare more questions than you will be able to ask, assuming that some of them will be answered during the interview. Do not ask about salary in an initial interview. Wait for an employment offer to ask about salary and benefits. The following is a list of questions you may want to consider asking:
  • What would be the scope of my job responsibilities?
  • What major challenges and opportunities are facing this organization?
  • What do you believe are the major challenges of this job?
  • How are employees evaluated?
  • What forms of communication exist within the organization?
  • How would you describe the organizational structure?
  • Could you give me some additional information about your training programs/support of continuing education?
  • What skills do you think are important for your employees?
  • If I do my job well, where should I be after a few years with this organization?
  • How do you feel about community involvement?
  • Why have you chosen to pursue a career with this organization?
  • When do you expect to make a hiring decision?
It is possible that the interviewer will answer all of your questions through the course of the interview. If that happens, tell the interviewer that you came prepared with questions, but that he or she has done a wonderful job of providing information and your questions have now been answered.
At the conclusion of the interview, thank the interviewer and, if you still wish to be considered, sincerely reaffirm your interest in the position.