Epic Games blames Meltdown CPU performance issues for Fortnite downtime – The Verge
How slow will your computer be once Intel fixes the ‘meltdown’ security flaw? – Clotheshorse
Christmas comes twice this year for performance test engineers. The exposure of flaws like these stresses the need for consistent, thoughtful performance testing. The impact of defects like these should be taken seriously and validated, so that any and all measures to prevent degradation are taken.
While Intel has dominated data center CPU market share, it will be interesting to see whether there is a drastic shift toward diversifying CPU types to minimize risk. The expansion of players in the data center space (Nvidia, ARM) continues to provide opportunities for performance testing.
Performance testing by its very nature can be expensive. With the popularity of the cloud and increased pressure to reduce costs, testing teams are increasingly being asked to become more efficient. Leveraging the cloud for load generation does provide greater scale at a reduced total cost of ownership. Still, there are a few things to consider before selling all your servers…
- Network Capacity. Do the applications/servers you plan on testing have sufficient network capacity? If you normally execute performance tests at large loads (5k+ users) locally – that is, the servers generating load are local to the servers under test – you need to consider this impact. Cloud load generation can easily have the effect of a DDoS attack on your network if you don’t have sufficient capacity.
- Cloud Providers. AWS and Azure are the top dogs, but Google and others are joining the party and rapidly gaining market share. Identify which provider is the best fit for your organization. Look at the security, options, and features each provider offers before choosing one. I also recommend thinking about long-term support – which organization/tool is most likely to retain support long term? Depending on a cloud provider increases risk – choose wisely.
- Script / Protocol Type. While HTTP scripts will be supported without question, more exotic protocols may not be. Be sure to complete a proof of concept (POC) before jumping off the deep end.
- Security. Cloud is a double-edged sword. Scalability comes at a price, and putting servers off-prem does increase the risk of someone accessing them. Extra caution should be taken to lock down anything you spin up in the cloud.
- Pricing. Cloud provider pricing is anything but simple. Traffic, storage, instance sizing, and numerous other options complicate it. Do your homework to ensure the ROI for cloud is actually worth the effort.
- Oversight. With all the added complexity of the cloud, proper oversight is required. Set a budget and put controls in place to ensure you don’t overspend.
- Organizational buy-in. While looking at providers don’t forget to check to see if the organization has a ‘preferred’ provider.
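A quick way to sanity-check the network-capacity point above is a back-of-envelope bandwidth estimate. A minimal sketch in Python; the user count, page weight, and pacing below are illustrative assumptions, not measurements:

```python
# Rough estimate of the sustained bandwidth cloud load generators will
# drive at your network edge. All inputs are illustrative assumptions.

def required_mbps(virtual_users, page_kb, pages_per_user_per_min):
    """Sustained megabits per second the test will push at the edge."""
    kb_per_sec = virtual_users * page_kb * pages_per_user_per_min / 60.0
    return kb_per_sec * 8 / 1000.0  # kilobytes/sec -> megabits/sec

# Example: 5,000 users, 500 KB average page, 6 pages per minute each.
print(required_mbps(5000, 500, 6))  # -> 2000.0 Mbps
```

Two gigabits per second of test traffic arriving from outside the data center looks a lot like an attack – exactly the scenario the bullet above warns about.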
HPE has made some pretty drastic changes to the SiteScope (SS) licensing model in the last two years, taking what was once effectively free to a very expensive paid model. The new, currently supported version of SS (11.3X) only allows 25 systems to be monitored without additional paid licensing. To be fair, you get unlimited monitoring of counters within those 25 systems, but you are restricted to a total of 25 systems. HPE has conveniently made it very painful to stay on 11.2X by ending support for it later this calendar year. Since most major corporations will upgrade out of fear alone to keep patch support, everyone will shift to 11.3X unless they are willing to pay for extended support.
With that in mind there are a few options:
- I don’t see anything in the license agreement that prevents you from running multiple instances of SS using the LoadRunner / Performance Center license. While this does cost more in server hardware, it avoids the license costs.
- Maximize internal monitoring within LoadRunner / Performance Center when possible, especially for Windows based systems.
- Engage Server and Support Teams within your organization to pull monitoring data via other tools that are supported in the organization. Most large organizations have a set or sets of additional tools that pull similar data to SS.
- Evaluate options from the competition. If HPE has a tool out there, someone else more than likely has something similar at a cheaper rate. The trick is determining whether it will fit well with your organization and your implementation of HP LoadRunner / Performance Center.
- Get creative with the monitoring… While a 25-OSI license won’t cover large platforms, it might be enough to focus on a smattering of key systems on the platform. Less than ideal, but in a pinch it would give you a higher level of confidence than going without.
- Engage each team and ask them to monitor their own system(s) while the performance test executes. This makes each performance test cost a lot more due to the effort involved, but it could be a short-term work-around.
Let me know if you have exposure to other monitoring solutions that work well within HP LoadRunner / Performance Center. I’ll be researching this independently and will report back on my next post.
It never ceases to amaze… retailers continue to struggle with performance and availability. In most cases these types of issues could be averted (or at least minimized) with thorough performance testing.
Nordstrom website crashes during anniversary sale – USA TODAY
When debugging HP Performance Center connectivity, determining whether ports are open is a key step. In the past I’ve used netstat -a to identify which ports are listening. While useful, it only shows the ports the local host is listening on – it doesn’t indicate whether the port is truly open through the firewall. Using telnet is much more effective, as it lets you see whether a port is open from source to destination. Enjoy.
How to use: from the source machine, run telnet <hostname> <port>. If the connection succeeds (the screen clears or a banner appears), the port is open end-to-end; if it times out or is refused, something between the two hosts is blocking it.
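If telnet isn’t available (the Telnet client is an optional feature, disabled by default, on modern Windows), the same source-to-destination check can be scripted. A minimal sketch in Python using a plain TCP connect; the host name shown is a placeholder:

```python
import socket

def port_open(host, port, timeout=3):
    """True if a TCP connection from this machine to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, etc.
        return False

# Hypothetical target -- substitute your PC server and port:
# port_open("pcserver.example.com", 8080)
```

Run it from the load generator toward the server under test, not just locally, so the firewall path is actually exercised.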
My team has recently been through a large PC Upgrade, where we moved our servers and updated to the latest version of HP PC. Below is a checklist that might help you avoid a few snags:
- Schedule the upgrade well in-advance and communicate the impact to all stakeholders.
- Notify your team via Calendar Invite during the maintenance period
- Prevent miscommunication by adding all hosts to a ‘Maintenance’ Timeslot within the Lab
- Ask all team members to check-in items before the upgrade.
- Scale up any PC systems as needed. (CPU/RAM/Disk)
- Examine OS choices and see if it’s a good time to move to a new version.
- Examine IE requirements and ensure compatibility if you plan on moving to a newer version of IE.
- Examine backup schedule for key-systems.
- Does it make sense to increase or reduce backup frequency post migration?
- Examine service account usage.
- Does it make sense to create new accounts along with the upgrade?
- Clean up ALM. (Old scripts, test runs, monitor profiles, etc.)
- Stage all the install files on the PC Hosts so the installs are efficient
- Build consistency and use the same install method for all VMs
- Build an install guide for the team to follow
- Verify you have also downloaded all the patches for each component
- PC Server
- PC Host
- Network Virtualization
- Ensure the team has Admin rights to all impacted systems
- Be sure to update/remove permissions once complete
- Ensure your main service account has Admin access to the SQL server, PC, and ALM servers
- Test! Yes, build a sample project and verify the new instance works as designed. Check monitoring, script creation, check-in process, general GUI functionality, test execution, etc.
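For the “stage the install files / verify the patches” items above, a small script can confirm every staged installer is present and intact before upgrade day. A sketch, assuming you keep a manifest of file names and known-good SHA-256 hashes (the installer names shown are hypothetical):

```python
import hashlib
from pathlib import Path

def sha256sum(path):
    """SHA-256 of a file, read in chunks so large installers fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_staged(manifest):
    """manifest maps file path -> expected SHA-256; returns the paths that fail."""
    return [name for name, expected in manifest.items()
            if not Path(name).is_file() or sha256sum(name) != expected]

# Hypothetical manifest -- use the real installer names and published hashes:
# verify_staged({"PCServer.exe": "ab12...", "PCHost.exe": "cd34..."})
```

An empty result means everything is staged and uncorrupted; anything returned needs to be re-downloaded before the maintenance window.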
On the day of conversion/upgrade:
- Ensure all the systems you plan on modifying are backed up in case of catastrophe
- Ensure you have lined up all the support you may need:
- Server, Network, SQL, etc.
- To avoid issues with the upgrade, disable versioning before you move project DBs.
- (Ensure all version-controlled items are checked in first)
- Use RoboCopy to move the file repository
- This will take hours unless your team is very small.
- Test all critical ALM projects!
- Verify connectivity to any cloud providers that existed prior to the upgrade.
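One way to confirm the RoboCopy step above actually moved everything is to compare file counts and total bytes between the old and new repositories. A minimal sketch; the UNC paths in the comment are placeholders:

```python
from pathlib import Path

def repo_summary(root):
    """(file count, total bytes) for every file under root."""
    files = [p for p in Path(root).rglob("*") if p.is_file()]
    return len(files), sum(p.stat().st_size for p in files)

# After the copy completes, the two summaries should match exactly:
# repo_summary(r"\\oldserver\ProjRep") == repo_summary(r"\\newserver\ProjRep")
```

It’s a coarse check (it won’t catch a corrupted file of the same size), but it flags the common failure mode of an interrupted copy in seconds.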
Google has released a new JPG encoder that reduces file sizes by up to 35%. This could mean slimmer pages and faster response times. Yay.
Check it out here!!!