Archive for the ‘Vmware’ Category

VMware View Usage Dashboard

October 4, 2011

I’ve had the new version of my app running for a few months. This has given me quite a bit of raw data, but no really nice method for perusing it (aside from just raw SQL on the DB).

So I took some time yesterday to write a sort of dashboard based on the data I’ve collected and based on questions I get from the “higher-ups.”

This is version number 1. The dashboard is web-based, using a combination of whatever I felt like writing at the time: some .NET, some classic ASP, JavaScript, and AJAX. All of the graphs/charts on the page come from Google Charts.

The top of the page shows current usage in each of my three pools. The gauges are scaled to the number of machines in each pool: pool 1 has 20 machines, pools 2 and 3 each have 100. CPU utilization is averaged across all users, using the percentage reported by Windows itself (as you would see in Task Manager).

Next to those is a term cloud showing the top 10 currently running apps in the pools. Being a cloud, the more instances there are of a given app, the larger its font relative to the other apps listed.
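The scaling itself is simple; here's a rough Python sketch of the idea (the real page is classic ASP/JavaScript, and the function name, app names, and pixel sizes below are just illustrative):

```python
# Sketch of the term-cloud font scaling: each app's font size is scaled
# linearly between min_px and max_px based on its instance count relative
# to the other apps in the top 10. Counts here are made up.
def cloud_sizes(app_counts, min_px=12, max_px=36):
    """Map each app's instance count to a font size in pixels."""
    top = sorted(app_counts.items(), key=lambda kv: kv[1], reverse=True)[:10]
    lo = min(c for _, c in top)
    hi = max(c for _, c in top)
    span = (hi - lo) or 1  # avoid divide-by-zero when all counts match
    return {app: round(min_px + (count - lo) / span * (max_px - min_px))
            for app, count in top}

sizes = cloud_sizes({"iexplore": 40, "winword": 25, "firefox": 10})
# iexplore gets the largest font, firefox the smallest
```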

The Start and Stop buttons control the AJAX that refreshes the gauges and term cloud on a specific interval.

Under that is a graph showing logins and average CPU for the month of September. Right now it looks at just one specific pool (Pool 2 from the gauges). Below that are two pie charts showing top apps for the month by frequency and by time-in-use.
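For what it's worth, the aggregation behind those two pie charts boils down to something like this Python sketch (the tuple layout and app names are my own illustration, not the actual schema):

```python
from datetime import datetime

# From per-app (app, start, stop) records, tally both launch count
# (the "frequency" pie) and total seconds in use (the "time-in-use" pie).
def app_totals(records):
    """records: (app, start, stop) tuples -> {app: (launch_count, seconds_in_use)}"""
    totals = {}
    for app, start, stop in records:
        count, secs = totals.get(app, (0, 0.0))
        totals[app] = (count + 1, secs + (stop - start).total_seconds())
    return totals

rows = [
    ("winword", datetime(2011, 9, 1, 9, 0), datetime(2011, 9, 1, 9, 30)),
    ("winword", datetime(2011, 9, 1, 13, 0), datetime(2011, 9, 1, 13, 10)),
    ("excel",   datetime(2011, 9, 1, 9, 0), datetime(2011, 9, 1, 10, 0)),
]
totals = app_totals(rows)
# winword: 2 launches, 40 minutes; excel: 1 launch, 60 minutes
```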

I think I will continue to tweak this based on what I’d like to see along with any other requests I get from ‘higher up.’

Comments appreciated.


Desktop Usage Stats

June 15, 2011

I’ve completely re-written my application to gather more data on usage. Now I have access to real-time data on which apps users are running along with CPU utilization. Kind of interesting to see the numbers emerging from the data.

As an example, the top apps are Internet Explorer, Word, and Firefox, with Excel trailing quite a bit behind (Access was 6th, Visual Studio 9th). One of the surprising things that came up is idle time. Seeing scrnsave.scr show up in the list of applications got me looking at how often the machines are sitting idle. A quick bit of T-SQL showed that scrnsave.scr was active for about 11% of the logged-in time. I need to let the new version run a bit longer to get a better idea of the average CPU utilization.
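That 11% figure is just the ratio of screensaver-active time to total logged-in time. A trivial restatement of the math (the real query was T-SQL against the app tables; the numbers below are made up):

```python
# Idle percentage = time scrnsave.scr was running / total logged-in time.
def idle_fraction(screensaver_seconds, logged_in_seconds):
    """Fraction of logged-in time the screensaver was active."""
    return screensaver_seconds / logged_in_seconds

# hypothetical totals summed from the app start/stop records
pct = round(idle_fraction(screensaver_seconds=1100, logged_in_seconds=10000) * 100)
```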

Categories: Usage Stats, View, Vmware

VMware View usage statistics re-visited

June 1, 2011

Well, that didn’t take long. I’ve re-written my program that collects usage statistics on my View pools (any physical machines too). First thing I did was to generalize the program a little and set it up for command-line options. I can override the default database and catalog settings on the command line if I wish (it currently uses a DSN-less connection to Microsoft SQL Server).

I set up a command-line switch for data collection level, which can be set to 1, 2, or 3. At level 1, the program only records login, logout, View Broker URL, View Client IP, and client OS type. Level 2 is level 1 plus the list of applications the user opened. Level 3 is level 2 plus average CPU utilization as a percent and, eventually, average memory used.
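Since each level is a superset of the one below it, the scheme could be sketched like this in Python (the real program is .NET, and the field names here are illustrative, not the actual schema):

```python
# Each collection level records everything the levels below it do.
LEVEL_FIELDS = {
    1: ["login", "logout", "broker_url", "client_ip", "client_os"],
    2: ["applications"],
    3: ["avg_cpu_percent", "avg_memory"],
}

def fields_for(level):
    """Return every field collected at a given level (1-3), cumulatively."""
    return [f for lvl in sorted(LEVEL_FIELDS) if lvl <= level
            for f in LEVEL_FIELDS[lvl]]
```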

The View information comes out of the Volatile Environment registry key; this will be used to differentiate users connecting from on- or off-site. The CPU figure comes from System.Diagnostics.PerformanceCounter. CPU is accumulated every 3 seconds, and at logout or shutdown the accumulator is divided by (logged-in time / 3). I’m not sure yet how frequently to sample CPU; I will probably add another command-line switch to change the frequency so I can test what the best sample rate is. The minimum is 1 second per MSDN, but that seems excessive: average logged-in time for my users is 55 minutes, which would be 3,300 samples. One thing I don’t want is for my app to skew the data it collects, at least not too badly.
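The averaging works out to mean CPU = accumulator / number of samples, where the sample count is logged-in seconds divided by the sample interval. A quick Python sketch with made-up numbers:

```python
# Mean CPU %: the accumulator holds the sum of every sampled reading, so
# dividing by the number of samples (logged-in seconds / interval) gives
# the average utilization over the session.
def average_cpu(cpu_accumulator, seconds_logged_in, sample_interval=3):
    samples = seconds_logged_in / sample_interval
    return cpu_accumulator / samples

# A 55-minute session sampled every 3 seconds is 1,100 samples
# (it would be 3,300 at the 1-second minimum).
avg = average_cpu(cpu_accumulator=27500.0, seconds_logged_in=55 * 60)
```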

So, as it stands, I can run the program from an AD GPO and use command-line switches to control the functionality. It would look something like: datacol.exe -ds=mySQLserver -ic=mylabs -L=3

The exe is 24 KB for the release build, and on my Win7 test pool it seems to start at somewhere around 20 MB of RAM used. Over time this creeps up to about 30 MB but then drops back down to 27.

I also have the program set to minimize to the system tray, ignore mouse clicks, and not allow the user to terminate it. It almost seems like it’s getting to the point where re-writing it as a service would make more sense.

Comments/suggestions appreciated.

Categories: View, Vmware

VMware View Usage statistics

May 31, 2011

My boss kept asking me for usage stats for our View installation. I looked at Stratusphere UX, which looks to be a great product, but at $15,600 for 400 licenses it is out of our budget at this time. What to do, what to do? Write my own, of course!

While I’m not saying that my homegrown app comes close to the functionality of Stratusphere, it does a decent job of tracking usage: which apps are used and for how long. One other tidbit that comes out of the data is a list of applications that users leave open when they log off or disconnect.

It is written in Visual Studio 2008 (.NET) and uses Process.GetProcesses to enumerate any process that has a window title set. These are stored in an array, and a timer then checks every so often to see if each process is still running; if not, the stop time is recorded in the array. Once the user logs out, the app calls some stored procs on a database server to dump all of the data.
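The polling loop boils down to something like this Python sketch (the real app is .NET using Process.GetProcesses and a timer; the names and data structure here are just illustrative):

```python
import time

# On each timer tick: mark newly seen processes with a start time, and
# stamp a stop time on any tracked process that has disappeared.
# "snapshot" stands in for the set of titled processes currently running.
def update_tracking(tracked, snapshot, now=None):
    """tracked: {name: [start, stop-or-None]} updated in place."""
    now = now if now is not None else time.time()
    for name in snapshot:
        if name not in tracked:
            tracked[name] = [now, None]
    for name, times in tracked.items():
        if times[1] is None and name not in snapshot:
            times[1] = now
    return tracked

state = {}
update_tracking(state, {"winword", "iexplore"}, now=0)   # both apps start
update_tracking(state, {"iexplore"}, now=60)             # winword has exited
```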

The data includes the machine name, the userid of the person, and date/time stamps for login and logout; a separate table holds all of the applications the user ran, along with date/time stamps for the start and stop of each app.

With this data I can use Excel to analyze what’s going on: how long users stay connected (55 minutes on average) and which applications they run and for how long (IE, then Excel, then Word are the most used). Based on the login/logout timestamps I can also plot usage by day and time, so in the graph you can see that usage ramps up just after 8 am and winds down again just after 8 pm.

Not the best program I’ve ever written, but it sure has been handy for showing the higher-ups some graphs and pictures to help justify View. Having the application usage data helps too, in that we can better leverage licensing (I know that SPSS was only used 20 times during the data-gathering period, and I can track that back to specific users). NOTE: this was run on one small pool, hence the max concurrent of just over 100 users.
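Deriving that by-hour usage curve from the login/logout stamps is straightforward. A Python sketch (the original analysis was done in Excel, and the sessions below are invented; for simplicity it assumes same-day sessions):

```python
from datetime import datetime

# Count how many sessions overlap each hour of the day.
def concurrent_by_hour(sessions):
    """sessions: (login, logout) datetime pairs -> {hour: session count}."""
    counts = {}
    for login, logout in sessions:
        for hour in range(login.hour, logout.hour + 1):
            counts[hour] = counts.get(hour, 0) + 1
    return counts

sessions = [
    (datetime(2011, 5, 2, 8, 10), datetime(2011, 5, 2, 9, 5)),
    (datetime(2011, 5, 2, 8, 30), datetime(2011, 5, 2, 8, 55)),
]
per_hour = concurrent_by_hour(sessions)
# two sessions touch the 8 o'clock hour, one runs into the 9 o'clock hour
```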

While not a direct replacement for something like Stratusphere, I think others can use in-house-developed apps to get much of the same data in their own environments.

Up next: I will be re-writing the app to include connection broker URL, client IP, and OS to help differentiate locally connected users from those connecting through a security gateway from off-site.

Categories: View, Vmware

VMware View 4.5 real-time usage statistics pt2

January 5, 2011

I have redone my trigger for pool-based usage stats. Looking through the event log table, I found that there was not always an AGENT_DISCONNECTED record for every user logoff event. I don’t know if it depends on whether the user logs off or just disconnects, but it seems like there are always BROKER_DESKTOP_REQUEST and AGENT_ENDED records, both of which contain the pool in a field called DESKTOPID.

So now I am basing my logic on those two types. When I get BROKER_DESKTOP_REQUEST, I get the count from the table for that pool and increment it. Same goes for the AGENT_ENDED event: I get the count from the table (which is pre-populated with pool names, of course) and decrement it. Works great so far, but my users are not back in full swing yet, so we’ll see how things go when there’s some real load on the View infrastructure.
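The counter logic is simple enough to sketch in a few lines of Python (the real implementation is a T-SQL trigger; the event names are as described above, and the pool name is illustrative):

```python
# Per-pool counter: increment on BROKER_DESKTOP_REQUEST, decrement
# (never below zero) on AGENT_ENDED. The counts table is pre-populated
# with pool names, mirrored here as a dict.
def apply_event(counts, event_type, pool):
    if event_type == "BROKER_DESKTOP_REQUEST":
        counts[pool] += 1
    elif event_type == "AGENT_ENDED" and counts[pool] > 0:
        counts[pool] -= 1
    return counts

counts = {"Pool2": 0}
apply_event(counts, "BROKER_DESKTOP_REQUEST", "Pool2")  # user requests desktop
apply_event(counts, "BROKER_DESKTOP_REQUEST", "Pool2")  # second user
apply_event(counts, "AGENT_ENDED", "Pool2")             # one session ends
```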

So, to recap it looks like a View client login/logout shows up in the event table as follows:


This is on a floating pool with View Composer set to refresh the machine at logoff. I guess only time will tell how well this works. The database server shouldn’t have a problem with the extra bit of load from the trigger. I’m sure I’ll find out soon enough if it doesn’t work. Everyone knows where my office is 😉

Categories: View, Vmware

VMware View 4.5 real-time usage statistics

January 4, 2011

I was somewhat disappointed by the lack of usage information in View 4.5. Specifically, I had to log in to the admin side of a connection server to find out how many machines were in use in a given pool.

The data that View stores is in ADAM and in searching the net, I’ve not found a good way to query that data. What I’d like to do is have a webpage that shows how many desktops are in use or available for a given pool.

I had already started parsing the View Event database for historical statistics (users per day, average length of time logged on, busiest days, etc.), but nowhere in there was an easy way to just say, “hey, there are 75 people on right now.”

What I came up with (this may or may not work in your environment) was to put an AFTER INSERT trigger on the viewevent table. The trigger either increments or decrements a counter in a table on a remote linked server that is accessible to my web server.

At this point I am not looking at specific pools, only overall usage. There are a few event types in the log table that you can use and it all depends on which one you want to pick as the definitive “a user logged on” event for the trigger:


For me, the most obvious choice seemed to be to base counts on the BROKER_USERLOGGEDIN and BROKER_USERLOGGEDOUT event types. The issue with these is that the record lists neither the pool nor the desktop, only the userid of the person logging in and the name of the View Connection Server. If you base your trigger on the AGENT_CONNECTED event type, you have the name of the user and the name of the responding desktop. If you use BROKER_DESKTOP_REQUEST, you have the name of the pool.

I didn’t want to make my trigger complex at this point so I am just getting an overall/aggregate usage for all pools.

Basically my trigger is as follows (note: I removed the linked server name and some other specific data; this code is NOT guaranteed to run as-is or at all):
--written by Paul Dunn
--(C) 2011
--will increment/decrement a counter for view usage based on event type
--in the viewevent table
ALTER TRIGGER [test1] ON [dbo].[viewevent]
AFTER INSERT
AS
BEGIN
  -- SET NOCOUNT ON added to prevent extra result sets from
  -- interfering with SELECT statements.
  SET NOCOUNT ON;
  DECLARE @etype varchar(50), @count int;
  -- assumes single-row inserts into viewevent
  SELECT @etype = [EventType] FROM inserted;
  SELECT @count = num FROM view_usage WHERE poolid = 1;
  IF @etype = 'BROKER_USERLOGGEDIN'
    UPDATE view_usage SET num = @count + 1 WHERE poolid = 1;
  ELSE IF @etype = 'BROKER_USERLOGGEDOUT' AND @count > 0
    UPDATE view_usage SET num = @count - 1 WHERE poolid = 1;
END

Now all I have to do is query that value in the table and display it on the page. I have the tables set up so that I can track usage by pool, which makes more sense than all users, but this is a start at least. This should also help get others off my case: I run View 4.5 pools for 3 other departments besides my own, and now I don’t have to give them access to the admin side for them to see the usage on their pools.

I am going to re-write this eventually so that it is pool-based. It shouldn’t be that big of a deal; it just means using a different event type from the table and parsing out the name of the pool. I could even go so far as to grab the name of the logged-in user, although at this point I don’t see much use for that.

Categories: View, Vmware

VMware View 4.5 pool re-compose storage throughput

October 1, 2010

I just upgraded my VMware View 4.5 RC to GA. After upgrading the Connection servers, and then composer, I updated the agent on the master image for the pool.

Keep in mind this cluster is only running on 4 servers. Somehow the bid/purchase process got screwed up and my 10 new servers never showed. Of course that pricing is no longer valid... waiting on 6 HP DL387s. Oh well.

So, I told View to recompose the pool at 17:53. The pool consists of 100 Windows XP SP3 machines as linked clones. The ESX servers are connected to a Sun 7410: a 2 Gb aggregate from the servers to Cisco switches, with a 4 Gb aggregate to the 7410 head.

According to View Manager the complete pool recompose took 37 minutes. The Sun shows a peak of 9,898 NFS ops/sec. I didn’t add latency to the Analytics page until partway through, but latencies stayed well below the 15 ms mark. In the screenshot that highlights the NFS ops peak you can see that most of the latencies are at the 0 µs mark, so they are between 0 and 1 ms.

There is still a bit of NFS traffic on there; it didn’t drop off really low because that was vCenter migrating the desktops around to re-load-balance.

Now at 18:33 the Sun shows NFS down to < 100 ops/sec, so all in all I’d say 45 minutes total for a 100-desktop pool recompose. I exported the data from the Sun, and a quick look shows an average of 2,849.3 ops/sec over the time the recompose was running.

Categories: Storage, View, Vmware