
Friday, March 30, 2012

Merge Join's poor performance

Hello All,

I'm experiencing performance problems with the Merge Join transformation.
Every time I build a nice package using it, I end up deleting it and using a SQL statement in the OLE DB Source to accomplish the join, since the Merge Join takes forever to run and crushes my computer in the process.
It makes me feel I'm not using the abilities SSIS has to offer compared to DTS.
Of course, for several thousand records it works fine, but in a production environment with hundreds of thousands of rows it seems to be futile.

Maybe someone had a little more luck with it?

Liran R wrote:

Hello All,

I'm experiencing performance problems with the Merge Join transformation.
Every time I build a nice package using it, I end up deleting it and using a SQL statement in the OLE DB Source to accomplish the join, since the Merge Join takes forever to run and crushes my computer in the process.
It makes me feel I'm not using the abilities SSIS has to offer compared to DTS.
Of course, for several thousand records it works fine, but in a production environment with hundreds of thousands of rows it seems to be futile.

Maybe someone had a little more luck with it?

If I were you I would use the OLE DB Source component to do the join - there is absolutely nothing wrong with doing that. If you have a super-performant relational database engine at your disposal, why not use it?
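As a sketch of that approach: the join goes into the SQL command of the OLE DB Source instead of a Merge Join downstream. The table and column names below are made up purely for illustration.

```sql
-- Hypothetical example: push the join down to the database engine.
-- Paste a statement like this into the OLE DB Source's SQL command,
-- substituting your own tables and columns.
SELECT o.OrderID,
       o.OrderDate,
       c.CustomerName
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c
    ON c.CustomerID = o.CustomerID;
```

The database engine can then use its indexes and statistics to pick a join strategy, rather than SSIS sorting and buffering both inputs itself.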

Donald Farmer talks around this a little in his OVAL webcast. If you only watch one SSIS webcast in your life then it should be this one.

Donald Farmer's Technet webcast
(http://blogs.conchango.com/jamiethomson/archive/2006/06/14/4076.aspx)

-Jamie

|||Thanks, I'll take a look. Does the merge join take place in the cache?|||

Can you elaborate as to what you mean by "the cache"?

-Jamie

|||

Sure. When I'm using a Lookup component to do the join, the lookup table is cached by default. When I want to join to a large table (with several million rows) I understood it's best practice to use the Merge Join component, but as far as I can see this component also caches the records in the machine's memory, so I don't understand the benefit...

|||

There's a different sort of memory usage going on here.

Yes, the LOOKUP has an area of memory that we call the cache, and it is used for storing the lookup set.

MERGE JOIN stores data in memory (as do all asynchronous components), but it's a different kind of memory and we don't refer to it as a cache. It is just the component's working area - more commonly termed a buffer. Also, this working area changes as MERGE JOIN does its work, whereas the LOOKUP cache is static.

It is also worth saying that MERGE JOIN can spool data to disk if it is running out of memory. LOOKUP cannot do that with its cache.

-Jamie

|||

Hi SSIS friend,

I remember asking a similar question a couple of months back. I think I had 15+ Merge Join components in a test package and the performance was awful. Jamie's advice then and now is pretty sound.

During my short period experimenting with SSIS, I came to realise that in order to create efficient packages, I had to utilise the power of both the SQL Server and SSIS engines. Each one is good at performing certain tasks better than the other. It takes time and I'm still learning, but the more you play around with it, the easier it gets to choose which engine should be used.

|||

Thanks guys.



Monday, March 26, 2012

Merge Agent System Stored Procedures and sp_Recompile

Hello,

We are trying to be proactive and stop a potential performance issue by reducing the number of recompiles in our SQL 2000 database application. This database is replicated. After viewing output from Profiler and PerfMon, it seems that over 90% of the recompiles are due to system stored procedures generated by the replication merge agents. Can anything be done about this?

Thanks

Can you be more specific - which procs, and under what conditions?

|||

The databases are set up to use push merge replication with the polling interval set to every 15 seconds. It is replicating 4 databases to one subscriber. The procs involved are the ones generated by the merge agent when you set up replication. Hope this helps!

|||

Thanks Candice, but merge replication involves many, many procs, including triggers. Can you please be more specific about which ones? And is this occurring at the publisher, or the subscriber?

merge agent slowing performance

Hi,
I have set up merge replication with a push subscription that runs every 10 minutes. I have changed the agent profile to 'High Volume Server-to-Server' but the duration and delivery rate remain almost the same. How can I increase the delivery rate, and what should the suggested agent profile be? Please help, it is urgent.
Thanks in advance,
SVur
The history of the merge agent looks like this for the last run:
# Actions: 32
Last action message: merged 3252 data changes (3209 inserts, 16 updates, 27 deletes)
Duration (secs): 00:00:43
Delivery rate (rows per sec): 73.0000
Inserts at publisher: 2912
Updates at publisher: 16
Deletes at publisher: 27
Conflicts at publisher: 0
Inserts at subscriber: 297
Updates at subscriber: 0
Deletes at subscriber: 0
Conflicts at subscriber: 0
Hi,
This is my merge agent history. The agent took approximately 9 minutes to merge about 6500 changes in the last run. How can I improve the performance of the merge agent? What are the implications of increasing the upload and download batch sizes?
Thanks,
SVur
[Merge agent history log, 8/17/2004, 8:20:18 AM to 8:24:49 AM (257 seconds). After connecting to Publisher '01AL10015000025' and Subscriber '01AL10015000026', the agent uploaded roughly 900 data changes and then downloaded the remaining ~5,500 in batches of about 100 rows each. Final entry: "Merged 6379 data changes (6073 inserts, 160 updates, 146 deletes, 0 resolved conflicts)."]
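For reference on the batch-size question: the merge agent's batch sizing is controlled by command-line parameters such as -UploadGenerationsPerBatch and -DownloadGenerationsPerBatch. The invocation below is only an illustration - server, database, publication names and the batch values are placeholders, not recommendations.

```
REM Illustrative merge agent invocation; names and values are placeholders.
REM Larger batches reduce per-batch overhead, but each failed batch has to
REM be retried as a whole, so failures become more expensive.
replmerg.exe -Publisher MYPUBLISHER -PublisherDB MyDB ^
    -Publication MyPublication ^
    -Subscriber MYSUBSCRIBER -SubscriberDB MyDB ^
    -UploadGenerationsPerBatch 500 -DownloadGenerationsPerBatch 500
```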

Wednesday, March 21, 2012

Memory, Available Bytes

I'm using the Performance console's System Monitor to observe memory performance of a Windows 2000 server running SQL Server 2000.
Available Bytes indicates less than half of the system's total physical memory of 4 GB. Could this be a possible indication of a memory leak? What, if any, possible solutions are available? Thanks to everyone for being there to help.

Hi
If the server has 4GB RAM and is the Enterprise Edition of Windows 2000, is the /3GB flag set in boot.ini?
If not, only 2GB of the possible 3GB is presented to applications. The other 1GB is always reserved for the OS.
If it is the Standard Edition of Windows 2000, only 2GB of the 4GB is supported.
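For illustration, the /3GB switch is appended to the relevant entry in boot.ini. The ARC path and OS description below are just an example - yours may differ.

```
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Windows 2000 Advanced Server" /3GB
```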
Regards
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"coenzyme" wrote:

> I'm using a performance console system monitor to observe memory performance
> of a Win 2K server, running 2K SQL server.
> The available bytes indicates less than half of the system's total physical
> memory of
> 4 gig. Could this be a possible indication of a memory leak. What, if any
> possible
> solutions are available. Thanks to everyone for being there to help.


memory used by sqlservr

Hi,
My SQL Server is running on a Windows 2000 server with 2GB of physical memory, and I'm looking at the Private Bytes performance counter for the sqlservr process: it's taking 1.8GB. Is that something I should be concerned about? I assume SQL Server can be smart about taking most of the available memory - that would explain it. But how do I know when I should be concerned - i.e. whether 1.8GB is the actual minimum it needs to run efficiently (constantly swapping memory with the page file is not efficient, of course)?
In other words, how do I know when to start investing in more hardware for my SQL server?
thanks!

Have a look at
INF: SQL Server Memory Usage
http://support.microsoft.com/default.aspx?scid=kb;en-us;q321363
http://www.mssqlserver.com/faq/troubleshooting-memoryleak.asp
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
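A related knob worth knowing, sketched here with an illustrative value: if SQL Server's appetite needs capping so the OS and other processes keep some headroom, the 'max server memory' setting (in MB) can be used.

```sql
-- Illustrative only: cap SQL Server's memory at 1.5GB so the OS keeps
-- headroom. Choose a value appropriate to your own workload.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory', 1536;
RECONFIGURE;
```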
"Zen" <zen@.nononospam.com> wrote in message news:uolOKk9YGHA.4688@.TK2MSFTNGP04.phx.gbl...
> Hi,
> My sql server is running on a Windows 2000 server with 2G physical memory
> machine and I'm looking at the private-bytes performance counter for
> sqlservr process and it's taking 1.8G. Is that something I should be
> concerned about? I assume that sqlserver can be smart in taking most
> possible available memory - so that would explain it. But how do I know when
> I should be concerned - in the case 1.8 is the actual minimum it needs to
> run efficiently (constantly swapping memory with page file is not efficient
> of course).
> In another word, how do I know when to start investing in more hardware for
> my sql server?
> thanks!
>


Monday, March 19, 2012

Memory usage\Performance problem

I am experiencing the following problem:
I boot my Windows 2000 SP4 machine and check sqlservr.exe's memory usage in Task Manager. It is ca. 20 MB; everything is OK. Then I run this query:
select postoffice, count(*) as counter from tblTable
where postoffice is not null
group by postoffice
order by counter DESC
There's no index or constraint on the field "postoffice", and tblTable contains ca. 916,000 rows. I thought this query would still execute in only a few seconds, but it takes minutes. And the worst thing is that sqlservr.exe's memory usage grows to about 300-400 MB while the query runs. What am I doing wrong?
Tnx in advance!
/Magnus
Hi
916,000 rows and no index? It is best to have an index on larger tables, else SQL Server has to load each row and look at it.
Have a look at http://www.sql-server-performance.com/ for some ideas.
INF: SQL Server Memory Usage:
http://support.microsoft.com/default.aspx?scid=kb;en-us;q321363
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"Magnus Österberg" wrote:

> I am experiencing the following problem;
> I boot my Windows 2000 sp#4 machine. I check sqlservr.exe's memory usage in
> Task manager. It is ca 20 mb, everything is OK. Then I run this query;
> select postoffice, count(*) as counter from tblTable
> where postoffice is not null
> group by postoffice
> order by counter DESC
> There's no index or constraint on field "postoffice" and tblTable contains
> ca 916.000 rows. I thought this query would still execute in only a few
> seconds, but it takes minutes. And the worst thing is that sqlserver.exe's
> memory usage grows to about 300-400 mb when the query runs. What am I doing
> wrong?
> Tnx in advance!
> /Magnus
>
>
|||On Mon, 29 Aug 2005 12:31:12 +0300, "Magnus Österberg"
<magnus.osterberg@.abo.fi> wrote:

>I am experiencing the following problem;
>I boot my Windows 2000 sp#4 machine. I check sqlservr.exe's memory usage in
>Task manager. It is ca 20 mb, everything is OK. Then I run this query;
>select postoffice, count(*) as counter from tblTable
>where postoffice is not null
>group by postoffice
>order by counter DESC
>There's no index or constraint on field "postoffice" and tblTable contains
>ca 916.000 rows. I thought this query would still execute in only a few
>seconds, but it takes minutes. And the worst thing is that sqlserver.exe's
>memory usage grows to about 300-400 mb when the query runs. What am I doing
>wrong?
That's the entire query? Nobody else on the machine? How much RAM does the machine have - 512MB? 1GB? More? Hmm. I'm going to guess your machine has only 512MB, that the table totals more than 512MB, and that on a cold run none of the data is cached in RAM - and yes, SQL Server can be quite slow when it has to do physical IO, far slower than when the data is in cache. SQL Server is designed to grab all available memory if it has even the least interest in doing so, so the growth to 400MB doesn't surprise me at all.
You may also be getting an "autostats" run on the postoffice column, which takes some time; you can turn on the Profiler, make sure you capture the object/autostats event, and get some idea.
How long does it take if you immediately rerun the same query? A lot faster?
J.
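To make Mike's index suggestion concrete, a minimal sketch (the index name is made up): a nonclustered index on postoffice lets the GROUP BY read narrow index pages instead of scanning the whole table.

```sql
-- Hypothetical index; after this, the aggregate over postoffice can be
-- satisfied from the much smaller index rather than a full table scan.
CREATE NONCLUSTERED INDEX IX_tblTable_postoffice
    ON tblTable (postoffice);
```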

Memory usage\Performance problem

I am experiencing the following problem;
I boot my Windows 2000 sp#4 machine. I check sqlservr.exe's memory usage in
Task manager. It is ca 20 mb, everything is OK. Then I run this query;
select postoffice, count(*) as counter from tblTable
where postoffice is not null
group by postoffice
order by counter DESC
There's no index or constraint on field "postoffice" and tblTable contains
ca 916.000 rows. I thought this query would still execute in only a few
seconds, but it takes minutes. And the worst thing is that sqlserver.exe's
memory usage grows to about 300-400 mb when the query runs. What am I doing
wrong'
Tnx in advance!
/MagnusHi
916'000 rows and no index? It is best to have an index for larger tables
else it has to load each row and look at it.
Have a look at http://www.sql-server-performance.com/ for some ideas.
INF: SQL Server Memory Usage:
http://support.microsoft.com/default.aspx?scid=kb;en-us;q321363
--
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"Magnus ?sterberg" wrote:

> I am experiencing the following problem;
> I boot my Windows 2000 sp#4 machine. I check sqlservr.exe's memory usage in
> Task manager. It is ca 20 mb, everything is OK. Then I run this query;
> select postoffice, count(*) as counter from tblTable
> where postoffice is not null
> group by postoffice
> order by counter DESC
> There's no index or constraint on field "postoffice" and tblTable contains
> ca 916.000 rows. I thought this query would still execute in only a few
> seconds, but it takes minutes. And the worst thing is that sqlserver.exe's
> memory usage grows to about 300-400 mb when the query runs. What am I doing
> wrong?
> Tnx in advance!
> /Magnus
>
>|||On Mon, 29 Aug 2005 12:31:12 +0300, "Magnus Österberg"
<magnus.osterberg@.abo.fi> wrote:

>I am experiencing the following problem;
>I boot my Windows 2000 sp#4 machine. I check sqlservr.exe's memory usage in
>Task manager. It is ca 20 mb, everything is OK. Then I run this query;
>select postoffice, count(*) as counter from tblTable
>where postoffice is not null
>group by postoffice
>order by counter DESC
>There's no index or constraint on field "postoffice" and tblTable contains
>ca 916.000 rows. I thought this query would still execute in only a few
>seconds, but it takes minutes. And the worst thing is that sqlserver.exe's
>memory usage grows to about 300-400 mb when the query runs. What am I doing
>wrong?
That's the entire query? Nobody else on the machine? Machine has how
much ram, 512mb? 1gb? More? Hmm. I'm going to guess your machine
is only 512mb, that that table totals more than 512mb, on a cold run
of course none of the data will be cached in RAM, and yes, SQLServer
can be quite slow when it has to do physical IO, that is, far slower
than when the data is in cache. SQLServer is designed to grab all
available memory if it has even the least interest in doing so, so the
growth to 400mb doesn't surprise me at all.
You may also be getting an "autostats" on the postoffice column which
will take some time, you can turn on the profiler, make sure you
capture the object/autostats event, and get some idea.
How long does it take if you immediately rerun the same query? Lot
faster?
J.
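J.'s cold-versus-warm-cache test can be made explicit. A sketch, for a disposable test server only (DBCC DROPCLEANBUFFERS flushes the buffer cache for every connection, and the table/column names are taken from the question above):

```sql
-- Flush dirty pages, then empty the buffer cache so the next run is "cold".
-- Test servers only: this evicts cached data for all databases.
CHECKPOINT;
DBCC DROPCLEANBUFFERS;

SET STATISTICS IO ON;   -- shows physical vs. logical reads
SET STATISTICS TIME ON; -- shows CPU and elapsed time

SELECT postoffice, COUNT(*) AS counter
FROM tblTable
WHERE postoffice IS NOT NULL
GROUP BY postoffice
ORDER BY counter DESC;

-- Rerun the same SELECT immediately: the warm run should report far fewer
-- physical reads and a much shorter elapsed time.
```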

|||Magnus Österberg (magnus.osterberg@.abo.fi) writes:
> I am experiencing the following problem;
> I boot my Windows 2000 sp#4 machine. I check sqlservr.exe's memory usage
> in Task manager. It is ca 20 mb, everything is OK. Then I run this
> query;
> select postoffice, count(*) as counter from tblTable
> where postoffice is not null
> group by postoffice
> order by counter DESC
> There's no index or constraint on field "postoffice" and tblTable
> contains ca 916.000 rows. I thought this query would still execute in
> only a few seconds, but it takes minutes. And the worst thing is that
> sqlserver.exe's memory usage grows to about 300-400 mb when the query
> runs. What am I doing wrong??

So what is the average row size of this table? Say that is 300 bytes,
then that is 300 MB of data to read. That is not very likely to be done
instantly.

SQL Server's memory consumption will increase, as it will read the entire
table into cache, and the table will stay in the cache as long as no
other data competes about the space. This means that if you resubmit the
query, the response time will be significantly shorter.

SQL Server is designed to grab as much memory as it can, as the more data
it can have in cache, the better the response times. If there are other
applications competing for memory on the machine, SQL Server will yield,
but in this case it may be better to configure how much memory you want
SQL Server to use.

Note also that fragmentation could cause extra delay. Use DBCC SHOWCONTIG
to see what shape the table is in. To defragment it, you would have to
create a clustered index on the table, and then drop that index.

The query itself would benefit enormously by a non-clustered index on
postoffice.
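Both of these suggestions can be sketched in T-SQL for SQL Server 2000 (the index name below is made up for illustration):

```sql
-- How fragmented is the table? (SQL 2000-era command, per the reply above)
DBCC SHOWCONTIG ('tblTable');

-- The index that makes this particular query fast; the name is hypothetical
CREATE NONCLUSTERED INDEX IX_tblTable_postoffice
    ON tblTable (postoffice);
```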

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||"Erland Sommarskog" <esquel@.sommarskog.se> wrote in message
news:Xns96C183A0F9A0DYazorman@.127.0.0.1...
> Magnus Österberg (magnus.osterberg@.abo.fi) writes:
>> I am experiencing the following problem;
>>
>> I boot my Windows 2000 sp#4 machine. I check sqlservr.exe's memory usage
>> in Task manager. It is ca 20 mb, everything is OK. Then I run this
>> query;
>>
>> select postoffice, count(*) as counter from tblTable
>> where postoffice is not null
>> group by postoffice
>> order by counter DESC
>>
>> There's no index or constraint on field "postoffice" and tblTable
>> contains ca 916.000 rows. I thought this query would still execute in
>> only a few seconds, but it takes minutes. And the worst thing is that
>> sqlserver.exe's memory usage grows to about 300-400 mb when the query
>> runs. What am I doing wrong??
> So what is the average row size of this table? Say that is 300 bytes,
> then that is 300 MB of data to read. That is not very likely to be done
> instantly.
> SQL Server's memory consumption will increase, as it will read the entire
> table into cache, and the table will stay in the cache as long as no
> other data competes about the space. This means that if you resubmit the
> query, the response time will be significantly shorter.
> SQL Server is designed to grab as much memory as it can, as the more data
> it can have in cache, the better the response times. If there are other
> applications competing for memory on the machine, SQL Server will yield,
> but in this case it may be better to configure how much memory you want
> SQL Server to use.
> Note also that fragmentation could cause extra delay. Use DBCC SHOWCONTIG
> to see what shape the table is in. To defragment it, you would have to
> create a clustered index on the table, and then drop that index.
> The query itself would benefit enormously by a non-clustered index on
> postoffice.
>
> --
> Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
> Books Online for SQL Server SP3 at
> http://www.microsoft.com/sql/techin.../2000/books.asp

Thanks for a well-written answer!

I guess my expectations on selecting based on a non-indexed column were
somewhat high. When I added an index, my query executes in seconds.
One index solved this problem, but my table contains 30-40 similar columns,
and I don't think indexing every column is a good idea.

Isn't there any other ways of speeding up selects? Well, I guess not...

/Magnus|||Magnus Österberg (magnus.osterberg@.abo.fi) writes:
> I guess my expectations on selecting based on a non-indexed column were
> somewhat high. When I added an index, my query executes in seconds. One
> index solved this problem, but my table contains 30-40 similar columns,
> and I don't think indexing every column is a good idea.
> Isn't there any other ways of speeding up selects? Well, I guess not...

Well, once data is in cache it will be faster. Or at least less slow.

But if you need to do this on every column, it sounds to me like one
of those things Analysis Services is good for.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||Magnus Österberg wrote:
> I guess my expectations on selecting based on a non-indexed column
> were somewhat high. When I added an index, my query executes in
> seconds.
> One index solved this problem, but my table contains 30-40 similar
> columns, and I don't think indexing every column is a good idea.

Two things come to mind:

1. rethink your table design. If these columns are so similar you might
be better off with a second table which is joined. But that of course
depends on your data - just an idea.

2. Create indexes (possibly composite indexes) for most used queries. You
might even get away with a single (or few) composite index if your queries
only use a leading portion of this index's fields.

> Isn't there any other ways of speeding up selects? Well, I guess
> not...

Well, there are tons of other options, like having data files on several
disks, putting tx log on a separate disk, adding memory, ... It all
depends. :-)

Kind regards

robert|||Thank you Robert, these all are things that I'll keep in mind.

One more thing though; why is the sqlservr.exe process consuming 395 mb RAM
when the entire server is idle?
My "cron job" is inserting a few 1000 rows of data at night, but now it is
late morning here in Finland, and the entire machine is more or less idle.
On my other servers, the RAM consumption on idle is about 20-30 mb only.
Anyone got any ideas?

/Magnus

"Robert Klemme" <bob.news@.gmx.net> wrote in message
news:3ngkc3F1dnm7U1@.individual.net...
> Magnus Österberg wrote:
>> I guess my expectations on selecting based on a non-indexed column
>> were somewhat high. When I added an index, my query executes in
>> seconds.
>> One index solved this problem, but my table contains 30-40 similar
>> columns, and I don't think indexing every column is a good idea.
> Two things come to mind:
> 1. rethink your table design. If these columns are so similar you might
> be better off with a second table which is joined. But that of course
> depends on your data - just an idea.
> 2. Create indexes (possibly composite indexes) for most used queries. You
> might even get away with a single (or few) composite index if your queries
> only use a leading portion of this index's fields.
>> Isn't there any other ways of speeding up selects? Well, I guess
>> not...
> Well, there are tons of other options, like having data files on several
> disks, putting tx log on a separate disk, adding memory, ... It all
> depends. :-)
> Kind regards
robert|||Magnus Österberg (magnus.osterberg@.abo.fi) writes:
> One more thing though; why is the sqlservr.exe process consuming 395 mb
> RAM when the entire server is idle?

I believe that was in my first reply.

SQL Server is designed to get as much memory as it can, and only yield
if an another application needs it. This is because it keeps data in
cache so that future requests for the same data can be answered without
reading from disk.

Thus, this is perfectly normal behaviour.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||Erland Sommarskog wrote:
> Magnus Österberg (magnus.osterberg@.abo.fi) writes:
>> One more thing though; why is the sqlservr.exe process consuming 395
>> mb RAM when the entire server is idle?
> I believe that was in my first reply.
> SQL Server is designed to get as much memory as it can, and only yield
> if an another application needs it. This is because it keeps data in
> cache so that future requests for the same data can be answered
> without reading from disk.
> Thus, this is perfectly normal behaviour.

Adding to that, max memory consumption is easily configurable, so if 400MB
is too much for you then simply turn that down.

Kind regards

robert


Friday, March 9, 2012

memory setting in SQL 2005 in Windows 2003

We have SQL 2005 installed on a Windows 2003 64-bit system with 16 CPUs and 32 GB RAM, but performance is poor. SQL Server is AWE-enabled and the SQL startup account has the "Lock Pages In Memory" privilege. I checked Task Manager and it looks like SQL Server used only about 300 MB of memory. Here is what I found:

http://blogs.msdn.com/psssql/archive/2006/11/28/sql-server-becomes-sluggish-or-appears-to-stall-on-64-bit-installations.aspx?CommentPosted=true#commentmessage

What do I need to do?

Thanks

Sounds like you have installed the 32 bit version of SQL Server, not the 64bit version of SQL Server.

Can you verify which version of SQL Server was installed?

|||

If I recall, AWE has no effect in 64 bit OS, and extended memory usage is not properly captured with Perf Monitor.

Please refer to this article:


Configuration -Memory, 32 bit SQL 2005 on 64 Bit Windows 2003
http://msdn2.microsoft.com/en-us/library/ms187499.aspx

And for reference on memory, this is a good starting point:

http://msdn2.microsoft.com/en-us/library/ms187499.aspx

|||

Hi Sandlu,

AWE will have no effect on 64 bit servers.

Yes you still need "LOCK PAGES IN MEMORY".

To check exactly how much memory is being used by SQL Server, use Perfmon to check the SQLServer:Memory Manager object:

Total Server Memory (KB)

Also check if you have capped the memory by using Max Server Memory under sp_configure.
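As a sketch, both checks can also be run from T-SQL on SQL 2005 (the DMV below exposes the same counter Perfmon shows; `show advanced options` must be enabled before advanced settings are visible):

```sql
-- Actual buffer pool usage, in KB
SELECT cntr_value AS total_server_memory_kb
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Total Server Memory (KB)';

-- Is a memory cap configured? Check run_value in the output
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';
```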

With regards to slow performance, it could be any number of problems: disks, memory, missing or out-of-date statistics, or index fragmentation.

Jag

|||Thank you, everyone, will try.

Wednesday, March 7, 2012

Memory problems & package performance

Hi,

I know the SSIS memory problem has probably been covered quite a bit, but being a newbie in using the SSIS, I'm not quite sure how to improved the performance of my SSIS package.

Basically, I've a package that loops through all the subdirectories within a specified directory, and it then loops through each file in the subdirectory and with the use of the Data Flow, process each file (according to their filenames) with a Script Component to insert data into a SQL DB.

Each subdirectory has up to 15 different csv files, but each is less than 5kB. I probably have about 100 subdirectories.

When I run the package, it functions properly at first, but after a while the package stalled (no error, just stuck in one Data Flow), and when I checked Task Manager, the CPU was running at 100%.

I'm not sure how I could fix it or improved the memory allocation. I was not expecting to have any memory problems as the file size is small and the number of rows of data going into and out of the Script Component is no more than 20.

Any advice? Thanks.

Just an update, I observed that running my package from the Visual Studio (development environment) takes up a large chunk of memory. So, I close VS and run the package through DTEXEC.

From the Windows Task Manager, I observe that the memory usage for DTEXEC.exe keeps increasing, so I'm guessing that data from previously processed files is still being held in memory. Is there a way I could flush the memory each time I process a new file, as I don't need the memory to keep data that has been processed.

I'm not keen to increase the PC memory size as a solution as I don't think the SSIS memory should keep increasing. Any suggestions?

Friday, February 24, 2012

Memory Performance counter?

Can anyone tell me what is considered a bad number for the "Memory:
Pages/sec" counter? Microsoft indicates a number greater than zero by a small
amount is acceptable, but what number is a "small amount"?
Thanks Rich
DBA wrote:
> Can anyone tell me what is considered a bad number for the "Memory:
> Pages/sec" counter? Microsoft indicates a number greater than zero by a small
> amount is acceptable, but what number is a "small amount"?
> Thanks Rich
It depends - you first need to determine what is "normal" for your
server. A lot of paging activity usually, but not always, indicates
that something other than SQL is demanding memory. For instance, on my
servers, I have no page swapping during the majority of the day. During
the period of time when my tape backup agent is backing up the SQL
backups, then I'll get a flurry of paging activity. When backups
complete, the paging stops.
Tracy McKibben
MCDBA
http://www.realsqlguy.com|||The Memory:pages/sec counter indicates hard paging activities. On a box
dedicated to SQL Server, if its value is greater than zero for any sustained
period of time, you should look into it.
Linchi
"DBA" wrote:
> Can anyone tell me what is considered a bad number for the "Memory:
> Pages/sec" counter? Microsoft indicates a number greater than zero by a small
> amount is acceptable, but what number is a "small amount"?
> Thanks Rich|||Those pages are actually the backup files being written. The write
mechanism used by SQL Backup triggers this counter. This happens both for
local file or tape writes and network file writes. It isn't caused by
memory pressure, it is just an artifact of writing backup files. As such,
you have to disregard the non-zero counters when a SQL is running an active
backup. Yet another operating complication with SQL.
--
Geoff N. Hiten
Senior Database Administrator
Microsoft SQL Server MVP
"Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
news:enjwmcBsGHA.2240@.TK2MSFTNGP04.phx.gbl...
> DBA wrote:
>> Can anyone tell me what is considered a bad number for the "Memory:
>> Pages/sec" counter? Microsoft indicates a number greater than zero by a
>> small amount is acceptable, but what number is a "small amount"? Thanks
>> Rich
> It depends - you first need to determine what is "normal" for your server.
> A lot of paging activity usually, but not always, indicates that something
> other than SQL is demanding memory. For instance, on my servers, I have
> no page swapping during the majority of the day. During the period of
> time when my tape backup agent is backing up the SQL backups, then I'll
> get a flurry of paging activity. When backups complete, the paging stops.
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com|||Geoff N. Hiten wrote:
> Those pages are actually the backup files being written. The write
> mechanism used by SQL Backup triggers this counter. This happens both for
> local file or tape writes and network file writes. It isn't caused by
> memory pressure, it is just an artifact of writing backup files. As such,
> you have to disregard the non-zero counters when a SQL is running an active
> backup. Yet another operating complication with SQL.
>
That was basically the point I was trying to make. Also, I don't see
excessive paging when *SQL* is running a backup, I see paging when my
tape backup agent is backing up the SQL backups. I dump SQL backups to
disk, then back those dumps up to tape.
Tracy McKibben
MCDBA
http://www.realsqlguy.com|||We're running sql 2000 on 64bit itanium server with 12gig ram. We have batch
processes that run all hours of the day as well internet users hitting the db
via a web application. The memory Pages/Sec are consistently between 3000 and
6000. Is this reasonable or excessive?
Thanks Rich
"Linchi Shea" wrote:
> The Memory:pages/sec counter indicates hard paging activities. On a box
> dedicated to SQL Server, if its value is greater than zero for any sustained
> period of time, you should look into it.
> Linchi
> "DBA" wrote:
> > Can anyone tell me what is considered a bad number for the "Memory:
> > Pages/sec" counter? Microsoft indicates a number greater than zero by a small
> > amount is acceptable, but what number is a "small amount"?
> >
> > Thanks Rich

memory performance

The server is a SQL2k standard edition SP3, on Win2k server.
I have 4GB physical memory and 5GB page file.
I know that SQL2k standard edition can only utilize 2GB memory.
Running perfmon caught Process\SQLserver\pagefaults/sec and working set.
The working set maximum is 1779MB and pagefults/sec maximum is 34.
Do I have memory problem?
Thanks in advance.
Looks fine to me.
Andrew J. Kelly SQL MVP
"mike" <mike@.discussions.microsoft.com> wrote in message
news:BC14859F-F926-484A-9DFD-569C301A43E6@.microsoft.com...
> The server is a SQL2k standard edition SP3, on Win2k server.
> I have 4GB physical memory and 5GB page file.
> I know that SQL2k standard edition can only utilize 2GB memory.
> Running perfmon caught Process\SQLserver\pagefaults/sec and working set.
> The working set maximum is 1779MB and pagefults/sec maximum is 34.
> Do I have memory problem?
> Thanks in advance.


Monday, February 20, 2012

Memory Mgt in SQL SERVER

I have performance and memory management issues with SQL Server. I
feel SQL Server does not release unused memory back to the OS. I
have been monitoring this, and at the end of the day SQL Server
performance gets really bad. At this point it has acquired all the
cache it could. I have about 4 GB of RAM on my server, and SQL
Server uses almost 2.7 GB of it whether there is load on the machine or not.
Is there any way or tool to monitor this? Also, is there any way to free
memory from SQL Server back to the OS?
Hi

If you have SQL Server configured to use memory dynamically, (Which is
the default and usually best left unless you have a speciffic reason to
change it) SQL Server will take as much memory as it needs. It then
does not release that memory unless another process needs it. It is
designed to work that way. I believe the only way to reset the memory
usage is to stop and start the SQL Server service. If it is not causing
a problem don't worry about it.

If it does cause a problem, set a limit on the amount of memory that
SQL Server can use.
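Setting that limit can be sketched as follows (the 2048 MB value is only an example; pick a cap that leaves enough memory for the OS and other processes):

```sql
-- Cap SQL Server's buffer pool at roughly 2 GB (illustrative value)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 2048;
RECONFIGURE;
```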

Regards

John|||Take a look at:

http://support.microsoft.com/default.aspx?scid=kb;en-us;q321363

-Andy

"smileydip" <dipeshn.shah@.gmail.com> wrote in message
news:1112793252.056543.6430@.f14g2000cwb.googlegroups.com...
>I have a performance and memory management issues with SQL SERVER. I
> feel SQL server does not releases the unused memory back to the OS. I
> have been monitoring that at the end of the day the SQL SERVER
> performance gets really bad. And it this point it has acquired all the
> Cache it could. I have abt 4 GB of RAM on my server and out of it SQL
> SERVER uses almost 2.7 GB whether there is load on the machine or not.
> If there any way or tool to monitor this. Also is there any way to Free
> memory from SQL server back to OS.|||"smileydip" <dipeshn.shah@.gmail.com> wrote in message
news:1112793252.056543.6430@.f14g2000cwb.googlegroups.com...
> I have a performance and memory management issues with SQL SERVER. I
> feel SQL server does not releases the unused memory back to the OS.

Unless you're using AWE, it does release memory to the OS if the OS requests it.

However, as SQL is faster with as much memory as possible, it normally will
grab as much as it can and keep it as long as it can.

> I
> have been monitoring that at the end of the day the SQL SERVER
> performance gets really bad. And it this point it has acquired all the
> Cache it could. I have abt 4 GB of RAM on my server and out of it SQL
> SERVER uses almost 2.7 GB whether there is load on the machine or not.

Sounds about right.

> If there any way or tool to monitor this. Also is there any way to Free
> memory from SQL server back to OS.

Why do you think this will help your performance issue? Almost certainly
the performance issue is related to something else.