ALM – Test Runs stuck in ‘Pending Creating Analysis Data’

While using Performance Center 12.53 I encountered an issue where my Test Runs were becoming stuck in ‘Pending Creating Analysis Data’ whenever the scenario was configured to Collate (vs. Collate and Analyze). The Data Processor queue in PC was stuck, and the issue only got worse as more and more runs became stuck in this status, effectively creating a log-jam.

To remedy the issue, the following query was executed in Site Admin against the PC LAB project. This effectively removed the stuck ‘TASKS’ so that the bad runs could be skipped. Be sure to back up the ALM DBs / ALM Repository before running any destructive statements in Site Admin.

-- Run a SELECT with the same WHERE clause first to confirm exactly which rows match.
DELETE FROM DP_TASKS
WHERE DP_Progress_Status = 'Pending'
  AND DP_Operation_Type = 'Analyzing'
  AND QC_Project = '<ProjectNameHere>'
  AND QC_RUN_ID = '<QCRunIDHere>'

The statement above was used and fixed the issue. I recommend being very specific and testing it against one Test Run before running a mass change on the DP_TASKS table.


Fun with UAC – ‘Change Machine Identity Failed’ Error

Change Machine Identity failed: Reason: RunProcessWithLogon: Failed to create process [D:\PCHost\al_agent\bin\alagentservice.exe] with user <My PC User> windows error code [183].

My team recently encountered the error above on multiple VMs after server patching was completed. The maintenance re-enabled UAC, which blocked communication between the PC Hosts and ALM and left the systems Non-Operational.

After a little digging, UAC was determined to be the culprit. For Server 2012, update the registry value below to disable UAC.

  1. Browse to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System
  2. In the right pane, double-click EnableLUA
  3. Change Value Data to 0
  4. Reboot the host, then attempt to re-configure it in the LAB
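The same change can be scripted from an elevated command prompt (equivalent to steps 1–3 above; the reboot is still required), which is handy when many hosts were hit by the same patching cycle:

```shell
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableLUA /t REG_DWORD /d 0 /f
```

Note that disabling UAC fleet-wide has security implications; coordinate with your Windows team so the next maintenance window doesn’t simply re-enable it.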

Spectre & Meltdown

Epic Games blames Meltdown CPU performance issues for Fortnite downtime – The Verge

How slow will your computer be once Intel fixes the ‘meltdown’ security flaw? – Clotheshorse

Christmas comes twice this year for performance test engineers. The exposure of flaws like Spectre and Meltdown stresses the need for consistent, thoughtful performance testing. The impact of the mitigations should be taken seriously and validated, to ensure every reasonable measure against degradation is taken.

While Intel has dominated data center CPU market share, it will be interesting to see whether there is a drastic shift to diversify CPU types to minimize risk. The expanding field of players in the data center space (Nvidia, ARM) continues to provide opportunities for performance testing.

Performance Testing & Cloud

Performance testing by its very nature can be expensive. With the popularity of the cloud and increased pressure to reduce costs, testing teams are increasingly being asked to be more efficient. Leveraging the cloud for load generation does provide greater scale at a reduced total cost of ownership. Still, there are a few things that should be considered before selling all your servers…

  • Network Capacity. Do the applications/servers you plan on testing have sufficient network capacity? If you normally execute performance tests at large loads (5k+ users) locally (servers generating load are local to servers under test), you need to consider this impact. Cloud load generation could easily have the effect of a DDoS attack on your network if you don’t have sufficient capacity.
  • Cloud Providers. AWS and Azure are the top dogs, but Google and others are joining the party and rapidly gaining market share. Identify which provider is the best fit for your organization. Compare the security, options, and features each provider offers before choosing one. I also recommend thinking about long-term support: which organization/tool is most likely to retain support long term? Depending on a cloud provider increases risk – choose wisely.
  • Script / Protocol Type. HTTP scripts will be supported without question, but more exotic protocols may not be. Be sure to complete a POC before jumping off the deep end.
  • Security. Cloud is a double-edged sword. Scalability comes at a price, and putting servers off-prem does increase the risk of someone accessing them. Take extra caution to lock down anything you spin up in the cloud.
  • Pricing. Cloud pricing is anything but simple. Traffic, storage, sizing, and numerous options complicate it. Do your homework to ensure the ROI for cloud is actually worth the effort.
  • Oversight. With all the added complexity of the cloud, proper oversight is required. Set a budget and ensure you have controls in place so you don’t overspend.
  • Organizational buy-in. While evaluating providers, don’t forget to check whether the organization already has a ‘preferred’ provider.
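As a back-of-the-envelope check for the network-capacity point above, a quick calculation like the sketch below can flag trouble early. The per-user throughput figure is a placeholder; replace it with averages measured from your own scripts:

```python
def required_bandwidth_mbps(virtual_users, avg_kbps_per_user):
    """Estimate aggregate load-generator traffic arriving at your network edge, in Mbps."""
    return virtual_users * avg_kbps_per_user / 1000.0

# 5,000 virtual users at a hypothetical 50 kbps each:
print(required_bandwidth_mbps(5000, 50))  # -> 250.0 Mbps
```

Compare the result against your actual internet circuit and firewall throughput before pointing cloud load generators at on-prem systems.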

HP SiteScope Points Conundrum

HPE has made some pretty drastic changes to the SiteScope licensing model in the last two years. The model takes what was once effectively free to a very expensive paid model. The current supported version of SS (11.3X) only allows 25 systems to be monitored without additional paid licensing. To be fair, you get unlimited monitoring of counters within those 25 systems, but you are restricted to a total of 25 systems. HPE has conveniently made it very painful to stay on 11.2X by ending support for it later this calendar year. Since major corporations will upgrade out of fear of losing patch support alone, everyone will shift to 11.3X unless they are willing to pay for extended support.

With that in mind there are a few options:

  1. I don’t see anything in the license agreement that prevents you from running multiple instances of SS under the LoadRunner / Performance Center license. While this does cost more in server hardware, it avoids the license costs.
  2. Maximize internal monitoring within LoadRunner / Performance Center when possible, especially for Windows-based systems.
  3. Engage server and support teams within your organization to pull monitoring data via other tools already supported in the organization. Most large organizations have a set or sets of additional tools that collect data similar to SS.
  4. Evaluate options that exist with the competition. If HPE has a tool out there, someone else more than likely has something similar at a cheaper rate. The trick is finding whether it fits well with your organization and your implementation of HP LoadRunner / Performance Center.
  5. Get creative with the monitoring… While a 25-OSI license won’t cover large platforms, it might be enough to focus on a smattering of key systems on the platform. Less than ideal, but in a pinch it gives you a higher level of confidence than going without.
  6. Engage each team and ask them to monitor their system(s) while the performance test executes. This makes each performance test cost a lot more due to the effort involved, but it could be a short-term work-around.
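In the spirit of options 3 and 6, a DIY poller can fill small gaps between tools. The sketch below is plain Python with a hypothetical counter source – you would wrap whatever your organization supports (psutil, WMI, a vendor CLI) in the `read_counters` callable – and it writes timestamped rows to CSV for correlation with the test timeline afterwards:

```python
import csv
import time

def sample_counters(read_counters, interval_s, samples, out_path):
    """Poll a counter source at a fixed interval during a test run and write CSV rows.

    read_counters: callable returning a dict of counter name -> value.
    """
    with open(out_path, "w", newline="") as f:
        writer = None
        for _ in range(samples):
            row = {"timestamp": round(time.time(), 3), **read_counters()}
            if writer is None:
                # Build the header from the first sample's counter names.
                writer = csv.DictWriter(f, fieldnames=list(row))
                writer.writeheader()
            writer.writerow(row)
            time.sleep(interval_s)
    return out_path
```

This is a stopgap sketch, not a monitoring platform – it has no error handling, remote collection, or alerting – but it beats going into a test blind.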

Let me know if you have exposure to other monitoring solutions that work well with HP LoadRunner / Performance Center. I’ll be researching this independently and will report back in my next post.


Checking for Blocked Ports (NetStat vs TelNet)

When debugging HP Performance Center connectivity, determining whether ports are open is a key step. In the past I’ve used netstat -a on the target host to identify which ports it is listening on. While useful, that only proves the host is listening; it doesn’t indicate whether the port is truly open through the firewall. Using Telnet is much more effective, as it shows whether a port is open from source to destination. Enjoy.
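On machines where Telnet isn’t available (or where you want to check many host/port pairs), the same source-to-destination check can be scripted. A minimal Python sketch using only the standard library:

```python
import socket

def port_open(host, port, timeout_s=3):
    """Return True if a TCP connection to host:port succeeds end-to-end."""
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:
        # Covers refusal, timeout, and DNS failure alike.
        return False
```

A timeout usually suggests a firewall silently dropping packets, while an immediate refusal means the packet arrived but nothing is listening – a useful distinction when arguing with the network team.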

How to use:  
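Usage is a single command; the host name and port below are placeholders for your own target:

```shell
telnet <HostNameHere> 443
```

If the port is open you’ll get a blank screen or a connected session; if it’s blocked, the connection will time out or be refused.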

Enabling TelNet:
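The Telnet client is not installed by default on recent Windows versions; one way to enable it is from an elevated command prompt:

```shell
dism /online /Enable-Feature /FeatureName:TelnetClient
```

On server editions it can also be added via Server Manager under Features.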