Transcript

Whether you are standing up a new SQL Server environment or migrating to a new on-premises or cloud environment, SQL Diagnostic Manager can help a DBA understand how to size that environment in terms of disk storage and capacity. You also need to consider processing power and memory usage to ensure optimal performance of the new SQL Server environment. So where do I go within SQL Diagnostic Manager to find this important information?

Well, the first place I would start is the Summary view on the Databases tab, and I am already there for my SQL 2012 named instance. After I select a few of my databases, the chart below shows my log usage in megabytes. I can see the space used for my Finance database, and I can also see the unused space for my Finance database, as well as for my SQLQT [Query Tuner] database. You can see this size information not only for your log files but also for your data files: the amount of space taken up by tables, the space taken up by indexes, and the unused space. If we had any text data here, we would also see that space usage. And any of the charts that you see within SQL Diagnostic Manager can be saved as an image or exported to Excel.
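
If you want to cross-check the same numbers outside the tool, here is a minimal T-SQL sketch, run in the context of the database you are sizing, that reports file size versus space used per data and log file:

```sql
-- A minimal sketch: file size vs. space used for the current database, in MB.
-- Run in the context of the database you are sizing.
SELECT
    name AS logical_file_name,
    type_desc AS file_type,                                           -- ROWS (data) or LOG
    size * 8.0 / 1024 AS file_size_mb,                                -- size is in 8 KB pages
    FILEPROPERTY(name, 'SpaceUsed') * 8.0 / 1024 AS space_used_mb,
    (size - FILEPROPERTY(name, 'SpaceUsed')) * 8.0 / 1024 AS unused_space_mb
FROM sys.database_files;
```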

Alright, next let us talk about reports. Here is where you can find valuable information on how healthy your SQL Server environment is, along with historical storage usage and the forecasted footprint. If I go to the Reports section, under the Analysis reports in the middle, there is one server-level report in particular that is compelling, and that is the Baseline Statistics report, found in the Server section. The Baseline Statistics report analyzes and compares baselines within a single SQL Server instance and across two instances. When viewing baseline statistics for a monitored SQL Server instance, you can compare the same baseline metric at two different times or two different metrics at the same time. If you include another instance to compare to, you can compare the same baseline metric occurring at the same time or at different times. Baselines are calculated for particular metrics in SQL Diagnostic Manager based on a rolling seven days of history, and you can configure multiple baselines to measure expected behavior for SQL Servers that experience a lot of off-hours processing, for example reporting and ETL [extract, transform, and load] processes or offshore development groups accessing that particular database.

So I am going to run this report for my 2012 named instance. I am going to select my page life expectancy metric and select the last 30 days. All of these time frames are standard across all of your reports, including any custom reports you create, and you can also define a custom range. I always select the last 30 days, just to be consistent. You can optionally compare to another server or another metric at a different time frame; I am just going to leave this as is. You can see how my page life expectancy has gone up and down over time, in both actual values and baseline values. The blue line is my baseline. It went up a little bit and then came down, which is not good; page life expectancy is a measure of how long SQL Server keeps pages in the buffer cache before swapping them out, so higher is better. Then it trends upward toward the end, which is a nice upward trend for page life expectancy. We can see the detail down below, and you can export to Excel or create different charts as well.
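
The page life expectancy value this report baselines comes from the standard Buffer Manager performance counter, which you can also read directly with a quick query; this is a sketch for spot checks, not the product's own collection method:

```sql
-- A sketch: read the current page life expectancy (seconds) from the
-- standard Buffer Manager counter that this report baselines over time.
SELECT object_name, counter_name, cntr_value AS page_life_expectancy_sec
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Page life expectancy'
  AND object_name LIKE N'%Buffer Manager%';
```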

In the Databases section there are also some very important reports, as there are in the Resources section. In the Databases section, we will look at the Database Statistics report. This report is used to analyze and compare performance trends across one database, or two databases if you so choose. Again, I will choose my 2012 named instance, select my SQLdm repository database, which is the most active on my instance, and the last 30 days. I will keep the chart type at data growth percentage, but note that you can change it to data or log file size, reads and writes per second, or transactions per second. So let us see what this report gives us. We can see the data growth over the last 30 days goes up and down, and that is because SQL Diagnostic Manager has aggregation and grooming (purge) processes to keep that repository at a reasonable size.
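
If you want to sanity-check the read and write activity the report charts, the cumulative file-level I/O counters are exposed by a DMV; this sketch assumes the repository database is named SQLdmRepository, so substitute your own database name:

```sql
-- A sketch of cumulative file-level I/O for one database; SQL Diagnostic Manager
-- derives per-second rates from repeated samples of counters like these.
-- 'SQLdmRepository' is an assumed name for the repository database.
SELECT
    DB_NAME(vfs.database_id) AS database_name,
    mf.name AS logical_file_name,
    vfs.num_of_reads,
    vfs.num_of_writes,
    vfs.num_of_bytes_read / 1048576.0 AS mb_read,
    vfs.num_of_bytes_written / 1048576.0 AS mb_written
FROM sys.dm_io_virtual_file_stats(DB_ID(N'SQLdmRepository'), NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id;
```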

The next report I will show is Top Tables by Growth. Again, I am selecting my 2012 named instance, and this report identifies the fastest-growing tables on my instance. We can identify those fastest-growing tables by a few different metrics: number of rows is a good one, along with data size, text size, and index size. I am going to account for all three of those sizes with total size. I can also optionally apply additional filters to restrict this to a particular database; I will select my SQLdm repository again, just to be consistent, and we can apply size and row count filters as well. Within my SQLdm repository database, my server activity table seems to be the highest; over the last 30 days it has grown quite a bit, and to a lesser extent my other tables seem to be more stable. We also get a breakdown by the other metrics in charts: top tables by size growth percentage, by row growth percentage, and by number of rows. You will see some different tables for each of these other metrics.
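
A rough equivalent of the total-size metric, run inside the database being examined, can be pulled from the partition stats DMV; this is an illustrative query, not the report's own logic:

```sql
-- A sketch: top 10 tables by total reserved size (data + indexes) in the current
-- database, roughly analogous to the "total size" metric used in this report.
SELECT TOP (10)
    s.name + N'.' + t.name AS table_name,
    SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS row_count,
    SUM(ps.reserved_page_count) * 8.0 / 1024 AS reserved_mb,
    SUM(ps.used_page_count) * 8.0 / 1024 AS used_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables AS t ON t.object_id = ps.object_id
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
GROUP BY s.name, t.name
ORDER BY SUM(ps.reserved_page_count) DESC;
```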

We have just gone over some reports of interest in the Databases section, but there are also some applicable reports in the Resources section. The first one I want to point out is the CPU Statistics report, which tracks key CPU performance metrics. I will select the last 30 days. This report is important to run so that you can understand the CPU usage over the last 30 days, the last year, or however long that server has been active. I choose to show the baselines when I can, because I like to understand the expected level of performance of our SQL Server as far as CPU goes. We can see our baseline aligns very closely with our actual CPU usage. The green line is the OS [operating system] CPU usage, and the blue is the SQL Server CPU usage. We can also see our processor queue length as well as our compilations. These are important metrics to track to decide whether you need to increase the processing power in your new environment.
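
The SQL Server versus OS CPU split that the report charts can also be sampled ad hoc from the scheduler monitor ring buffer; this is a common DMV technique, shown here as a sketch rather than SQL Diagnostic Manager's actual collector:

```sql
-- A sketch using the scheduler monitor ring buffer (roughly one sample per minute)
-- to see the SQL Server vs. idle CPU split; not SQL Diagnostic Manager's collector.
SELECT TOP (30)
    DATEADD(ms, rb.[timestamp] - si.ms_ticks, GETDATE()) AS sample_time,
    rb.record.value('(//SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_pct,
    rb.record.value('(//SystemHealth/SystemIdle)[1]', 'int') AS idle_pct
FROM (
    SELECT [timestamp], CONVERT(xml, record) AS record
    FROM sys.dm_os_ring_buffers
    WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
      AND record LIKE N'%<SystemHealth>%'
) AS rb
CROSS JOIN sys.dm_os_sys_info AS si
ORDER BY rb.[timestamp] DESC;
```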

We also have a Disk Statistics report that you can run, again over the last 30 days, on a particular instance, which is what I am going to choose. And again, I am going to show the baseline just to understand the expected level of performance. My I/O [input and output] activity, charted with my baseline, is pretty low overall because I have a lot of memory. The same goes for the work files per minute: you can see the baseline aligns pretty closely with the actual values. For tempdb size, you can see there are some spikes at the beginning, but what I did to cut down on some of that I/O was to adjust the tempdb sizing, and that seems to have been working; usage has stayed pretty low. There is a little spike up and down in activity, but overall the baseline remains constant and actually seems to be trending down currently. You can see the other metrics charted down below.
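
To check the current tempdb footprint behind the tempdb chart, a quick point-in-time query against the file space usage DMV works; again, this is a sketch, not the report's data source:

```sql
-- A sketch: current tempdb space usage by category, in MB.
SELECT
    SUM(user_object_reserved_page_count) * 8.0 / 1024 AS user_objects_mb,
    SUM(internal_object_reserved_page_count) * 8.0 / 1024 AS internal_objects_mb,
    SUM(version_store_reserved_page_count) * 8.0 / 1024 AS version_store_mb,
    SUM(unallocated_extent_page_count) * 8.0 / 1024 AS free_space_mb
FROM tempdb.sys.dm_db_file_space_usage;
```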

We also have a Disk Space History report where you can view the history of activity for a specific disk. I only have one disk in my virtual environment, so this report is pretty easy. You can chart different metrics here: data used, free log file space, free disk space, and disk reads and writes per second. I always like to report on SQL data used, since I am trying to forecast space usage for my new environment. I enter the number of days, choose my disk, and run the report to see what we get. You can see my SQL data used on my C drive, the only drive I have, seems to have started out 30 days ago at about 7 GB. It then went up to close to 9.4 GB and leveled off at around 9.3 GB, showing more constant growth after a bit of a spike over the last 30 days, so I may want to extend this back even further.
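
The free-disk numbers behind this report can be spot-checked per volume with a DMV-based query like the following sketch:

```sql
-- A sketch: total and free space on each volume that hosts a database file.
SELECT DISTINCT
    vs.volume_mount_point,
    vs.total_bytes / 1073741824.0 AS total_gb,
    vs.available_bytes / 1073741824.0 AS free_gb
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;
```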

Some of the most important reports we have are the ones that forecast the growth of your disk usage, your databases, and your tables. You can choose different forecast types, linear or exponential, when running these planning reports. I will select my normal parameters, the last 30 days, and I always like to keep my forecast units in alignment with the time period I am selecting, in this case 30. What we can do here is select a different forecast type, so what do the linear and exponential forecast types mean? A linear forecast trends the available data and extends that trend out in a straight line for the number of forecast units. If we run this report, over the next 30 days, based on our last 30 days of data, disk usage looks pretty constant. There may be a slight downward trend, but that is hard to tell, so I will chalk this up to it being fairly constant. What happens if we change this to exponential? An exponential forecast exaggerates the trend that exists in the data, to see whether the trend is moving toward or away from a critical threshold. If the rate of growth is not constant, the exponential forecast type usually gives a more accurate forecast of future growth. Since the data looks pretty constant, we get essentially the same graph, a constant leveling off at around 90 GB or whatever the value is.
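
In rough terms, the two forecast types correspond to fitting a trend to the observed usage either directly or on a log scale; the product's exact fitting algorithm is not documented here, so treat the following as an assumption about the general idea rather than the report's actual math:

```latex
% Linear forecast: extend a straight-line fit of the observed usage y_t,
% adding a constant amount b per forecast unit.
\hat{y}_{t} = a + b\,t
% Exponential forecast: fit the trend to \ln y_t, so the projection grows
% (or shrinks) by a constant factor per forecast unit.
\ln \hat{y}_{t} = a + b\,t \;\Longleftrightarrow\; \hat{y}_{t} = e^{a}\,e^{b\,t}
```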

Okay, we can also see this in a Database Growth Forecast. I find this particularly useful when looking at particular databases that are migrating or consolidating over to another virtual environment. I will choose just my repository database and the last 30 days, and again I will choose linear and set my forecast units to 30. Let us see what we get for my SQLdm repository. We are showing a pretty sharp upward trend, with data growing from just over 1.1 GB to about 1.3 GB. Again, this is just working off the upward trend of the data, and that looks to be fairly accurate. What happens if we choose exponential? This is going to exaggerate the up-and-down pattern we see over the last 30 days. Since it goes up and down, the forecast goes up during the middle of June and then is expected to come back down and level off at about where it started. That is probably more accurate, since SQL Diagnostic Manager has grooming (purging) and aggregation procedures to keep that database in check. We can also do this at the table level.
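
If you want to reconstruct a database-size history outside of SQL Diagnostic Manager to compare against this forecast, full-backup sizes recorded in msdb are a common approximation; the repository database name below is an assumption:

```sql
-- A sketch: approximate a database-size-over-time series from full backup sizes
-- recorded in msdb. 'SQLdmRepository' is an assumed repository database name.
SELECT
    bs.database_name,
    CAST(bs.backup_finish_date AS date) AS backup_date,
    MAX(bs.backup_size) / 1048576.0 AS backup_size_mb
FROM msdb.dbo.backupset AS bs
WHERE bs.type = 'D'                          -- full backups only
  AND bs.database_name = N'SQLdmRepository'
GROUP BY bs.database_name, CAST(bs.backup_finish_date AS date)
ORDER BY backup_date;
```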

We can select multiple tables in a particular database. Again I will select my SQLdm repository as my database, and I will select a few tables: alerts, and the database statistics table we saw in one of our earlier reports. I will just select those two to keep it easy, with the last 30 days and forecast units set to 30. Again, I will leave this as the linear forecast type. We can see that these two tables are growing at a constant rate based on their past growth, and that looks to be pretty accurate: one going from about 140 MB to close to 250 MB, and my database statistics table starting out at about 50 MB and going to the 65 to 70 MB range. So again, let us try exponential to see the difference. We can see that these are expected to go up a little more sharply. You will have to judge that for yourself, but these tables do collect data quite frequently, alerts especially, which appear to be captured with every snapshot SQL Diagnostic Manager takes, so that may be somewhat realistic.

Hopefully you were able to gain some insight into how SQL Diagnostic Manager can help you as a DBA forecast and capacity plan for the future growth of environments, databases, tables, and other resources such as CPU and memory consumption. I appreciate your time, and have a great day.

 

Topics: Database Backup, Database Diagnostics, Database Monitoring

Products: SQL Diagnostic Manager for SQL Server

How To Plan Capacity with SQL Diagnostic Manager for SQL Server

Capacity planning for SQL Server is not for the faint of heart. Storage and disk capacity are the obvious concerns, but when trying to right-size databases for a new SQL Server implementation or migration, there is a lot more to think about than just space. Beyond sizing and future growth, also consider design, configuration, and other essential resources to ensure that the new SQL Server performs optimally. SQL Diagnostic Manager can help gather data about the current usage and growth trends of databases for capacity planning.
