
Friday, March 30, 2012

Merge join: number of output rows unchanged when the amount of input changes

Dear all,

I created a package that seems to work fine with a small amount of data. When I run the package with more data (as in production), however, the merge join output is limited to 9,963 rows, no matter how I change the number of input rows.

Situation as follows.

The package has 2 OLE DB Sources, in which SQL-statements have been defined in order to retrieve the data.

The flow of source 1 is: retrieving source data -> trimming (non-key) columns -> sorting on the key-columns.

The flow of source 2 is: retrieving source data -> deriving 2 new columns -> aggregating the data to the level of source 1 -> sorting on the key columns.

Then both flows are merged and other steps are performed.

If I test with just a couple of rows it works fine. But when I change the WHERE clause in the source query so that the number of rows is, for instance, 15,000 or 150,000, the number of rows after the merge join is still 9,963.
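For reference, a merge join only works when both inputs arrive sorted on the join keys, which is why both flows above end in a sort step. This is a minimal Python sketch of the two-pointer algorithm (purely illustrative, not SSIS code); if either input is out of order, rows are silently skipped, producing exactly the kind of truncated output described here:

```python
def merge_join(left, right):
    """Two-pointer sort-merge inner join over (key, value) tuples.

    Both inputs must be sorted ascending on the key; otherwise
    matching rows are silently skipped.
    """
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        lk, rk = left[i][0], right[j][0]
        if lk < rk:
            i += 1
        elif lk > rk:
            j += 1
        else:
            # Emit every right-side pairing for this key, then advance left.
            j2 = j
            while j2 < len(right) and right[j2][0] == lk:
                out.append((lk, left[i][1], right[j2][1]))
                j2 += 1
            i += 1
    return out

joined = merge_join([(1, "a"), (2, "b")], [(1, "x"), (2, "y"), (2, "z")])
```

Feeding the same lists unsorted would drop matches without raising any error, which mirrors how a merge join with a wrong or missing sort quietly under-produces rows.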

When I run the package in debug mode the step is colored green; nevertheless, an error is displayed:

Error: 0xC0047022 at Data Flow Task, DTS.Pipeline: SSIS Error Code DTS_E_PROCESSINPUTFAILED. The ProcessInput method on component "Merge Join" (4703) failed with error code 0xC0047020. The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running. There may be error messages posted before this with more information about the failure.

To be honest, a few more error messages appear, but they don't seem related to this issue. The package stops running after some 6,000 rows have been written to the destination.

Any help will be greatly appreciated.

Kind regards,

Albert.

If you could post the full error output here, that would probably be helpful. Sometimes it is the "big view" that helps point you at the cause of the problem, especially since many of SSIS's error messages are not particularly transparent. The earlier errors often show what triggered the later errors, even if they do not appear directly related.

|||Can you also include details of the next task in the pipeline, the one that accepts the ~9000 rows. It sounds like it's failing on the first buffer it receives as input.|||

Ok, for the big picture: I feel like a fool.

Solving one of the other errors solved the reported issue as well.

Apologies for bothering you.

Wednesday, March 7, 2012

Memory Problems in SQL2000 SP3A

Hi,
We get memory problems every so often on our Production SQL2000 SP3A server.
They are described in KB http://support.microsoft.com/kb/903002/en-us. We
have made a number of our systems reduce the packet size, as the article
suggests. Normally the "WARNING: Failed to reserve contiguous memory" message
leads to further problems, and only a reboot of the server fixes the problem.
Is there something that could be done to fix the situation without shutting
down SQL and rebooting the server? Can we clear some of the buffer cache
with a DBCC FREEPROCCACHE or DBCC DROPCLEANBUFFERS?
Thanks
Chris|||Chris,
That KB only covers one of the many reasons why you can get that error.
This is what was called the MemToLeave area of memory, and it is used for many
things that require contiguous memory space. This memory can get fragmented
by poorly behaved components such as XPs (extended stored procedures). One example: an earlier
version of SQL LiteSpeed would slowly fragment the MemToLeave area by
leaving blocks allocated, until eventually there was not enough contiguous
space for normal operation. You need to find out what the culprit is, of course,
but these KBs might get you started. Look for things like heavy linked
server access, sp_OACreate, XPs, etc. You can also increase the size of
MemToLeave with the -g startup parameter, which will give you more time between
restarts but won't solve the issue.
http://msdn2.microsoft.com/en-us/library/aa175282(SQL.80).aspx
http://support.microsoft.com/kb/316749/
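The failure mode described above can be counter-intuitive: total free memory can be plentiful while no single contiguous block is large enough. A toy Python simulation (illustrative only, not how SQL Server actually tracks MemToLeave) makes the point, with leaked blocks pinned at regular intervals the way a leaky XP might leave them:

```python
def largest_contiguous_free(bitmap):
    """Length of the longest run of free (False) slots in an allocation bitmap."""
    best = run = 0
    for used in bitmap:
        run = 0 if used else run + 1
        best = max(best, run)
    return best

# 256 allocation slots; one leaked (pinned) block at every 8th slot.
heap = [i % 8 == 0 for i in range(256)]

free_total = heap.count(False)            # lots of free memory in total...
largest = largest_contiguous_free(heap)   # ...but only tiny contiguous runs
```

Here 224 of 256 slots are free, yet no contiguous run exceeds 7 slots, so any request for 8 or more contiguous slots fails. That is why freeing caches doesn't help: the leaked allocations stay pinned, and only a restart resets the address space.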
--
Andrew J. Kelly SQL MVP
"Chris Wood" <anonymous@.discussions.microsoft.com> wrote in message
news:uBdKFLyOHHA.4104@.TK2MSFTNGP06.phx.gbl...
> Hi,
> We get memory problems every so often on our Production SQL2000 SP3A
> server. They are described in KB
> http://support.microsoft.com/kb/903002/en-us We have made a number of our
> systems reduce the packet size as the article suggests. Normally the
> WARNING: Failed to reserve contiguous memory message leads to further
> problems and only a reboot of the server fixes the problem. Is there
> something that could be done to fix the situation without shutting down
> SQL and rebooting the server? Can we clear some of the buffer cache with a
> DBCC FREEPROCCACHE or DBCC DROPCLEANBUFFERS?
> Thanks
> Chris
>|||Andrew,
Thanks for the info. From what I have read, the MemToLeave -g switch appears
to be the only solution. We will be moving from SQL2000 to SQL2005
once SP2 appears, so we may not have this situation much longer. What I
would like confirmed is whether a reboot is the only solution once the messages
start to appear, or whether there is anything else that can be done, even telling
everyone to sign off for a few minutes. Would that allow the memory to be
de-allocated so the fragmentation problem would disappear?
Chris
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:ecB78a3OHHA.1252@.TK2MSFTNGP02.phx.gbl...
> Chris,
> That KB only talks about one of the many reasons why you can get that
> error. This is what was called MemToLeave area of memory and is used for
> many things that require contiguous memory space. This memory can get
> fragmented from poor behaving things such as XP's and such. One example
> was earlier version of SQL LiteSpeed would slowly fragment the MemToLeave
> area by leaving blocks allocated and eventually there would not be enough
> contiguous space for normal operation. You need to find what the culprit
> is of coarse but these kbs might get you started. Look for things like
> heavy linked server access, sp_oacreate, xps etc. You can also increase
> the size of MemToLeave with the -g startup param which will give you more
> time between restarts but wont solve the issue.
>
> http://msdn2.microsoft.com/en-us/library/aa175282(SQL.80).aspx
> http://support.microsoft.com/kb/316749/
> --
> Andrew J. Kelly SQL MVP
> "Chris Wood" <anonymous@.discussions.microsoft.com> wrote in message
> news:uBdKFLyOHHA.4104@.TK2MSFTNGP06.phx.gbl...
>> Hi,
>> We get memory problems every so often on our Production SQL2000 SP3A
>> server. They are described in KB
>> http://support.microsoft.com/kb/903002/en-us We have made a number of our
>> systems reduce the packet size as the article suggests. Normally the
>> WARNING: Failed to reserve contiguous memory message leads to further
>> problems and only a reboot of the server fixes the problem. Is there
>> something that could be done to fix the situation without shutting down
>> SQL and rebooting the server? Can we clear some of the buffer cache with
>> a DBCC FREEPROCCACHE or DBCC DROPCLEANBUFFERS?
>> Thanks
>> Chris
I guess it depends on what is causing the fragmentation, but most likely no.
You need to find out what is causing it and fix that.
--
Andrew J. Kelly SQL MVP
"Chris Wood" <anonymous@.discussions.microsoft.com> wrote in message
news:ec47k$9OHHA.4368@.TK2MSFTNGP03.phx.gbl...
> Andrew,
> Thanks for the info. From what I had read the MemToLeave -g switch appears
> as the only solution I had seen. We will be moving from SQL2000 to SQL2005
> once SP2 appears so we may not have this situation too much longer. What I
> would like confirmed is that a reboot is the only solution once the
> messages start to appear or is there anything else that can be done, even
> tell everyone to signoff for a few minutes. Would this allow the memory to
> be de-allocated so the fragmentation problem would disappear?
> Chris
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:ecB78a3OHHA.1252@.TK2MSFTNGP02.phx.gbl...
>> Chris,
>> That KB only talks about one of the many reasons why you can get that
>> error. This is what was called MemToLeave area of memory and is used for
>> many things that require contiguous memory space. This memory can get
>> fragmented from poor behaving things such as XP's and such. One example
>> was earlier version of SQL LiteSpeed would slowly fragment the MemToLeave
>> area by leaving blocks allocated and eventually there would not be enough
>> contiguous space for normal operation. You need to find what the culprit
>> is of coarse but these kbs might get you started. Look for things like
>> heavy linked server access, sp_oacreate, xps etc. You can also increase
>> the size of MemToLeave with the -g startup param which will give you more
>> time between restarts but wont solve the issue.
>>
>> http://msdn2.microsoft.com/en-us/library/aa175282(SQL.80).aspx
>> http://support.microsoft.com/kb/316749/
>> --
>> Andrew J. Kelly SQL MVP
>> "Chris Wood" <anonymous@.discussions.microsoft.com> wrote in message
>> news:uBdKFLyOHHA.4104@.TK2MSFTNGP06.phx.gbl...
>> Hi,
>> We get memory problems every so often on our Production SQL2000 SP3A
>> server. They are described in KB
>> http://support.microsoft.com/kb/903002/en-us We have made a number of
>> our systems reduce the packet size as the article suggests. Normally the
>> WARNING: Failed to reserve contiguous memory message leads to further
>> problems and only a reboot of the server fixes the problem. Is there
>> something that could be done to fix the situation without shutting down
>> SQL and rebooting the server? Can we clear some of the buffer cache with
>> a DBCC FREEPROCCACHE or DBCC DROPCLEANBUFFERS?
>> Thanks
>> Chris
>>
>|||Thanks Andrew.
Chris
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:emNMuO$OHHA.404@.TK2MSFTNGP02.phx.gbl...
>I guess it depends on what is causing the fragmentation but most likely no.
>You need to find out what is causing it and fix that.
> --
> Andrew J. Kelly SQL MVP
> "Chris Wood" <anonymous@.discussions.microsoft.com> wrote in message
> news:ec47k$9OHHA.4368@.TK2MSFTNGP03.phx.gbl...
>> Andrew,
>> Thanks for the info. From what I had read the MemToLeave -g switch
>> appears as the only solution I had seen. We will be moving from SQL2000
>> to SQL2005 once SP2 appears so we may not have this situation too much
>> longer. What I would like confirmed is that a reboot is the only solution
>> once the messages start to appear or is there anything else that can be
>> done, even tell everyone to signoff for a few minutes. Would this allow
>> the memory to be de-allocated so the fragmentation problem would
>> disappear?
>> Chris
>> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
>> news:ecB78a3OHHA.1252@.TK2MSFTNGP02.phx.gbl...
>> Chris,
>> That KB only talks about one of the many reasons why you can get that
>> error. This is what was called MemToLeave area of memory and is used for
>> many things that require contiguous memory space. This memory can get
>> fragmented from poor behaving things such as XP's and such. One example
>> was earlier version of SQL LiteSpeed would slowly fragment the
>> MemToLeave area by leaving blocks allocated and eventually there would
>> not be enough contiguous space for normal operation. You need to find
>> what the culprit is of coarse but these kbs might get you started. Look
>> for things like heavy linked server access, sp_oacreate, xps etc. You
>> can also increase the size of MemToLeave with the -g startup param which
>> will give you more time between restarts but wont solve the issue.
>>
>> http://msdn2.microsoft.com/en-us/library/aa175282(SQL.80).aspx
>> http://support.microsoft.com/kb/316749/
>> --
>> Andrew J. Kelly SQL MVP
>> "Chris Wood" <anonymous@.discussions.microsoft.com> wrote in message
>> news:uBdKFLyOHHA.4104@.TK2MSFTNGP06.phx.gbl...
>> Hi,
>> We get memory problems every so often on our Production SQL2000 SP3A
>> server. They are described in KB
>> http://support.microsoft.com/kb/903002/en-us We have made a number of
>> our systems reduce the packet size as the article suggests. Normally
>> the WARNING: Failed to reserve contiguous memory message leads to
>> further problems and only a reboot of the server fixes the problem. Is
>> there something that could be done to fix the situation without
>> shutting down SQL and rebooting the server? Can we clear some of the
>> buffer cache with a DBCC FREEPROCCACHE or DBCC DROPCLEANBUFFERS?
>> Thanks
>> Chris
>>
>>
>


memory problem

Hi,
I have a production SQL server with 4 CPUs and 8GB of RAM, running SQL Server
Enterprise Edition on Windows 2000 Advanced Server.
AWE is enabled.
Max memory set to 7GB.
When I look at Task Manager, the 'Mem usage' is only 110,632 K.
Some SQL job run times went up from 1:30 min. to 6 hrs.
I am thinking there may be a memory leak on the server.
My questions are:
1. How can I find out if there is a memory leak?
2. How do I identify the process causing it?
Thanks for the help.
-sarav
Task Manager is not the correct tool for monitoring memory usage with SQL
Server. Use perfmon counters instead. But when you use AWE memory, SQL
Server will grab all 7GB right off the bat and will no longer dynamically
manage it.
Andrew J. Kelly SQL MVP
"sarav" <orumurai@.yahoo.cmom> wrote in message
news:%23XFFSaAAFHA.612@.TK2MSFTNGP09.phx.gbl...
> Hi,
> I have a production SQL server with 4-cpu, 8GB runs under SQL Server
> Enterprise Edition & Windows 2000 Advance server.
> AWE is enabled.
> Max memory set to 7GB.
> When I look at the Task Manager the 'Mem usage' is only 110,632 k.
> Some of the SQL jobs run time went up from 1:30 min. to 6Hrs.
> I am thinking there may be memory leak in the server.
> My question is,
> 1. How can I find out if it is memory leak?
> 2. How to identify the process causing it?
> Thanks for the help.
> -sarav
>
>
>
>
>
|||Moreover, make sure you set BOTH the /3GB and /PAE switches in the BOOT.ini
file.
The /PAE switch is required by the OS in order to manage memory allocations
above 4GB. It does this by replacing the normal 32-bit address register
with a make-shift 36-bit register; thus, allowing applications written on
the extended API calls to address up to 64 GB of memory.
The /3GB switch forces the OS to reserve only 1GB for private (kernel)
addresses and 3GB for user addresses. Doing this will give you up to a 3GB
Buffer/Proc Cache instead of the default 2GB. This will give you an AWE
address space of only 4GB instead of 5GB, which would be better optimized.
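The address-space arithmetic behind these two switches can be sketched in a few lines of Python (illustrative only; the GB figures are the standard 32-bit Windows splits described above, and PAE extends *physical*, not virtual, addressing):

```python
GB = 1024 ** 3

# A 32-bit process sees a 4GB virtual address space.
vas_32bit = 4 * GB

# Default split: 2GB kernel / 2GB user. /3GB shrinks kernel to 1GB.
user_default = vas_32bit - 2 * GB   # 2GB of user addresses by default
user_3gb = vas_32bit - 1 * GB       # 3GB of user addresses with /3GB

# /PAE widens physical addressing to 36 bits, which AWE then maps
# through windows inside the user-mode address range.
physical_pae = 2 ** 36              # 64GB of addressable physical memory
```

The trade-off the post describes follows directly: /3GB buys a larger buffer/proc cache in the user range, while leaving less kernel address space for the structures that manage memory above 4GB.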
Sincerely,
Anthony Thomas

"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:uoL31iAAFHA.1400@.TK2MSFTNGP11.phx.gbl...
Task manager is not the correct tool for monitoring memory usage with sql
server. Use perfmon counters instead. But when you use AWE memory Sql
Server will grab all 7GB right off the bat and will no longer dynamically
manage it.
Andrew J. Kelly SQL MVP
"sarav" <orumurai@.yahoo.cmom> wrote in message
news:%23XFFSaAAFHA.612@.TK2MSFTNGP09.phx.gbl...
> Hi,
> I have a production SQL server with 4-cpu, 8GB runs under SQL Server
> Enterprise Edition & Windows 2000 Advance server.
> AWE is enabled.
> Max memory set to 7GB.
> When I look at the Task Manager the 'Mem usage' is only 110,632 k.
> Some of the SQL jobs run time went up from 1:30 min. to 6Hrs.
> I am thinking there may be memory leak in the server.
> My question is,
> 1. How can I find out if it is memory leak?
> 2. How to identify the process causing it?
> Thanks for the help.
> -sarav
>
>
>
>
>


Monday, February 20, 2012

Memory Manager

The CPU pegged on a production server this morning. There is a warning
message in the log. I'm not sure what it means. Or what to do to prevent it
in the future.
Date 1/23/2008 4:11:36 AM
Log SQL Server (Archive #1 - 1/23/2008 6:41:00 AM)
Source spid3s
Message
Memory Manager
VM Reserved = 8521512 KB
VM Committed = 8504536 KB
AWE Allocated = 0 KB
Reserved Memory = 1024 KB
Reserved Memory In Use = 0 KB
Then later on
Date 1/23/2008 4:11:36 AM
Log SQL Server (Archive #1 - 1/23/2008 6:41:00 AM)
Source spid3s
Message
MEMORYCLERK_SQLGENERAL (Total)
VM Reserved = 0 KB
VM Committed = 0 KB
AWE Allocated = 0 KB
SM Reserved = 0 KB
SM Committed = 0 KB
SinglePage Allocator = 28920 KB
MultiPage Allocator = 4888 KB
Still later
Date 1/23/2008 4:11:36 AM
Log SQL Server (Archive #1 - 1/23/2008 6:41:00 AM)
Source spid3s
Message
Buffer Counts: Committed=1032192 Target=1047941 Hashed=608946
Internal Reservation=209588 External Reservation=1297
Stolen Potential=571000
Min Free=128 Visible=1047941
Available Paging File=858726400
Finally:
Date 1/23/2008 4:12:19 AM
Log SQL Server (Archive #1 - 1/23/2008 6:41:00 AM)
Source Server
Message
***Stack Dump being sent to D:\Program Files\Microsoft SQL
Server\MSSQL.1\MSSQL\LOG\SQLDump0008.txt
--
Regards,
JamieProbably time to call Microsoft tech support. Or scan the bug fix lists for
all patches you don't have on your server to see if one of them addresses
the symptoms you are seeing.
--
Kevin G. Boles
Indicium Resources, Inc.
SQL Server MVP
kgboles a earthlink dt net
"thejamie" <thejamie@.discussions.microsoft.com> wrote in message
news:B2F823A0-5AB3-4EF6-8F16-D515052F7DE4@.microsoft.com...
> The CPU pegged on a production server this morning. There is a warning
> message in the log. I'm not sure what it means. Or what to do to prevent
> it
> in the future.
> Date 1/23/2008 4:11:36 AM
> Log SQL Server (Archive #1 - 1/23/2008 6:41:00 AM)
> Source spid3s
> Message
> Memory Manager
> VM Reserved = 8521512 KB
> VM Committed = 8504536 KB
> AWE Allocated = 0 KB
> Reserved Memory = 1024 KB
> Reserved Memory In Use = 0 KB
>
> Then later on
> Date 1/23/2008 4:11:36 AM
> Log SQL Server (Archive #1 - 1/23/2008 6:41:00 AM)
> Source spid3s
> Message
> MEMORYCLERK_SQLGENERAL (Total)
> VM Reserved = 0 KB
> VM Committed = 0 KB
> AWE Allocated = 0 KB
> SM Reserved = 0 KB
> SM Committed = 0 KB
> SinglePage Allocator = 28920 KB
> MultiPage Allocator = 4888 KB
> Still later
> Date 1/23/2008 4:11:36 AM
> Log SQL Server (Archive #1 - 1/23/2008 6:41:00 AM)
> Source spid3s
> Message
> Buffer Counts: Committed=1032192 Target=1047941 Hashed=608946
> Internal Reservation=209588 External Reservation=1297
> Stolen Potential=571000
> Min Free=128 Visible=1047941
> Available Paging File=858726400
>
> Finally:
> Date 1/23/2008 4:12:19 AM
> Log SQL Server (Archive #1 - 1/23/2008 6:41:00 AM)
> Source Server
> Message
> ***Stack Dump being sent to D:\Program Files\Microsoft SQL
> Server\MSSQL.1\MSSQL\LOG\SQLDump0008.txt
>
> --
> Regards,
> Jamie

memory management

Hi all,

I'm probably missing something here but I need to know why AS2005 is not using the maximum memory available on our production server.

I have a 3.5 GB cube on our server and the process msmdsrv.exe is only using around 100 mb currently. When multiple users are hitting the server at the same time, msmdsrv.exe goes up to around 650 mb but will eventually return to 100mb.

There is actually 4 GB of memory available on the server, is there any way to maximize the memory used by AS2005 on this box and cach almost the whole cube ? AS2005 is the only thing running on this server.

Currently, we see a lot of disk access when users are querying the cubes.

Thanks,

By default Analysis Server allowed to use 80% of the physical memory avaliable on your machine. This controlled by TotalMemoryLimit server property ( set to 80).

For your users to see improved query performance you need to try and define aggregations. See this post: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=348980&SiteID=1 for discussion about cube performance.

Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

|||

This is the reason why I'm asking this question...

If I have 3 GB of memory and AS2005 is allowed to use 80% of it... I would expect AS to be using around 2.4 GB of memory.

In our environment it is using 100 mb when idle and 650 mb when multiple users are querying the cubes...

This is what I need to understand.

BTW i really think that our aggregations are defined properly

Thanks,

Eric

|||

Analysis Server will go to the disk and will try to retrieve results if it cannot find any answers in a cache already. Analysis Server will always cache results from queries. But whether new query can be answered from the cache very much depends on several facts:

1. Queries are similar to the previous queries.
2. Calculation involved in a queries do not require going to the lowest level of granularity.
3. Attribute relationships defined correctly and new query can benefit from not exactly matching cache. ( for example you get answers from Months cache if you asked for Years )

You ultimate goal is not increase Analysis Server's memory footprint, but make your caches more effcient.

Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

|||Dear All,

I'm having the same problem. When querying the OLAP by using ProClarity or SSMS (Management Studio), the response time is very low the CPU Utilization is below 10% and Memory Usage not more than 50%. When I check in the performance monitor, the I/O access is very high continuously 100% utilization when I are running query from ProClarity Analytics Server. Is there anything I can do to reduce I/O disk time ?

Thanks in advance.

memory management

Hi all,

I'm probably missing something here but I need to know why AS2005 is not using the maximum memory available on our production server.

I have a 3.5 GB cube on our server and the process msmdsrv.exe is only using around 100 mb currently. When multiple users are hitting the server at the same time, msmdsrv.exe goes up to around 650 mb but will eventually return to 100mb.

There is actually 4 GB of memory available on the server, is there any way to maximize the memory used by AS2005 on this box and cach almost the whole cube ? AS2005 is the only thing running on this server.

Currently, we see a lot of disk access when users are querying the cubes.

Thanks,

By default Analysis Server allowed to use 80% of the physical memory avaliable on your machine. This controlled by TotalMemoryLimit server property ( set to 80).

For your users to see improved query performance you need to try and define aggregations. See this post: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=348980&SiteID=1 for discussion about cube performance.

Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

|||

This is the reason why I'm asking this question...

If I have 3 GB of memory and AS2005 is allowed to use 80% of it... I would expect AS to be using around 2.4 GB of memory.

In our environment it is using 100 mb when idle and 650 mb when multiple users are querying the cubes...

This is what I need to understand.

BTW i really think that our aggregations are defined properly

Thanks,

Eric

|||

Analysis Server will go to the disk and will try to retrieve results if it cannot find any answers in a cache already. Analysis Server will always cache results from queries. But whether new query can be answered from the cache very much depends on several facts:

1. Queries are similar to the previous queries.
2. Calculation involved in a queries do not require going to the lowest level of granularity.
3. Attribute relationships defined correctly and new query can benefit from not exactly matching cache. ( for example you get answers from Months cache if you asked for Years )

You ultimate goal is not increase Analysis Server's memory footprint, but make your caches more effcient.

Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

|||Dear All,

I'm having the same problem. When querying the OLAP by using ProClarity or SSMS (Management Studio), the response time is very low the CPU Utilization is below 10% and Memory Usage not more than 50%. When I check in the performance monitor, the I/O access is very high continuously 100% utilization when I are running query from ProClarity Analytics Server. Is there anything I can do to reduce I/O disk time ?

Thanks in advance.

Memory Leak problem... in SQL Server 2K

Hello,

I am having trouble with a production db server that likes to gobble
up memory. It seems to be a slow burn (maxing out over about an 18
hour time frame, before pegging both procs on the server and bringing
everything to a standstill). After viewing the trace logs, it appears
that all the SPIDs are being recycled - does this assert that
connections are being properly closed when the need for them has
ended? The code base is huge and quite messy, so it's difficult to
discern where the problem is just by looking at code, and we can't
seem to nail it down by looking at it, and I'm not sure what to look
for in the trace logs or perfmon.

Does anyone have any suggestions about what else might cause such a
problem?

RyanRyan (ryan3677@.excite.com) writes:
> I am having trouble with a production db server that likes to gobble
> up memory. It seems to be a slow burn (maxing out over about an 18
> hour time frame, before pegging both procs on the server and bringing
> everything to a standstill). After viewing the trace logs, it appears
> that all the SPIDs are being recycled - does this assert that
> connections are being properly closed when the need for them has
> ended? The code base is huge and quite messy, so it's difficult to
> discern where the problem is just by looking at code, and we can't
> seem to nail it down by looking at it, and I'm not sure what to look
> for in the trace logs or perfmon.

SQL Server likes to gobble up memory. In fact this is by design. The
more data SQL Server can hold in cache, the more queries it can
respond to without disk access. So normally SQL Server expands to
get all avilable memory. But if there are other processes in need of
memory, SQL Server will yield. It may not yield fast enough, though,
and you can configure SQL Server to use only part of the memory.

So the perceived memory leak is not a problem, but since you talk about
standstill, it seems that you have a problem. And since you talk about
pegging the processors on the server, it seems that you have a query in
need of rewrite somewhere. Or a in need of a better index. So while that
code base may be big and ugly, and you prefer not to look at it, it is
most likely there you find the solution.

The Profiler is a good tool. Filter for Duration greeater than, say,
1000 ms. Then again, if you start tracing when that bad query starts
running, you will not see the query until it is completed. One
alternative is aba_lockinfo, which is on my home page,
http://www.sommarskog.se/sqlutil/aba_lockinfo.html. That procedure
is really intended for lock monitoring, but you get all active processes
and what they are doing. And since "standstill" often includes blocking
as well, it may be interesting. aba_lockinfo gives you a snapshot, but
can still reveal something about what is going on. One word of caution
though: aba_lockinfo can take some time to return on a busy system. I
have identiefied a few weaknesses in terms of performance, but I have
not came around to fix them yet.

--
Erland Sommarskog, SQL Server MVP, sommar@.algonet.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||"Erland Sommarskog" <sommar@.algonet.se> wrote in message
news:Xns947AB6167287AYazorman@.127.0.0.1...
> Ryan (ryan3677@.excite.com) writes:

<snip
> SQL Server likes to gobble up memory. In fact this is by design. The
> more data SQL Server can hold in cache, the more queries it can
> respond to without disk access. So normally SQL Server expands to
> get all avilable memory. But if there are other processes in need of
> memory, SQL Server will yield. It may not yield fast enough, though,
> and you can configure SQL Server to use only part of the memory.
> So the perceived memory leak is not a problem, but since you talk about
> standstill, it seems that you have a problem. And since you talk about
> pegging the processors on the server, it seems that you have a query in
> need of rewrite somewhere. Or a in need of a better index. So while that
> code base may be big and ugly, and you prefer not to look at it, it is
> most likely there you find the solution.
> The Profiler is a good tool. Filter for Duration greeater than, say,
> 1000 ms. Then again, if you start tracing when that bad query starts
> running, you will not see the query until it is completed. One
> alternative is aba_lockinfo, which is on my home page,
> http://www.sommarskog.se/sqlutil/aba_lockinfo.html. That procedure
> is really intended for lock monitoring, but you get all active processes
> and what they are doing. And since "standstill" often includes blocking
> as well, it may be interesting. aba_lockinfo gives you a snapshot, but
> can still reveal something about what is going on. One word of caution
> though: aba_lockinfo can take some time to return on a busy system. I
> have identiefied a few weaknesses in terms of performance, but I have
> not came around to fix them yet.
>
> --
> Erland Sommarskog, SQL Server MVP, sommar@.algonet.se
> Books Online for SQL Server SP3 at
> http://www.microsoft.com/sql/techin.../2000/books.asp

I agree with every thing above... in the intrim you may want to limit the
amout no memory that MS SQL can have. This helps especially if you are
running other programs on the machine that compete for memory. The downside
is that MSSQL has less to work with and will possibly take longer. Upside is
that MSSQL will not gobble up all the memory bringing everything to a halt.

-p|||Absolutely. You should limit the database memory even if you have no other
applications running on the machine. Strangely enough, SQL Server can choke
the operating system by leaving too little memory for the OS to function at
optimum.
Hope this helps,
Chuck Conover
www.TechnicalVideos.net

>"Pippen" <123@.hotmail.com> wrote in message
news:bw0Rb.152962$na.259030@.attbi_s04...
> I agree with every thing below... in the intrim you may want to limit the
> amout no memory that MS SQL can have. This helps especially if you are
> running other programs on the machine that compete for memory. The
downside
> is that MSSQL has less to work with and will possibly take longer. Upside
is
> that MSSQL will not gobble up all the memory bringing everything to a
halt.
> -p
>
> "Erland Sommarskog" <sommar@.algonet.se> wrote in message
> news:Xns947AB6167287AYazorman@.127.0.0.1...
> > Ryan (ryan3677@.excite.com) writes:
> <snip>
> > SQL Server likes to gobble up memory. In fact this is by design. The
> > more data SQL Server can hold in cache, the more queries it can
> > respond to without disk access. So normally SQL Server expands to
> > get all avilable memory. But if there are other processes in need of
> > memory, SQL Server will yield. It may not yield fast enough, though,
> > and you can configure SQL Server to use only part of the memory.
> > So the perceived memory leak is not a problem, but since you talk about
> > standstill, it seems that you have a problem. And since you talk about
> > pegging the processors on the server, it seems that you have a query in
> > need of rewrite somewhere. Or a in need of a better index. So while that
> > code base may be big and ugly, and you prefer not to look at it, it is
> > most likely there you find the solution.
> > The Profiler is a good tool. Filter for Duration greeater than, say,
> > 1000 ms. Then again, if you start tracing when that bad query starts
> > running, you will not see the query until it is completed. One
> > alternative is aba_lockinfo, which is on my home page,
> > http://www.sommarskog.se/sqlutil/aba_lockinfo.html. That procedure
> > is really intended for lock monitoring, but you get all active processes
> > and what they are doing. And since "standstill" often includes blocking
> > as well, it may be interesting. aba_lockinfo gives you a snapshot, but
> > can still reveal something about what is going on. One word of caution
> > though: aba_lockinfo can take some time to return on a busy system. I
> > have identiefied a few weaknesses in terms of performance, but I have
> > not came around to fix them yet.
> > --
> > Erland Sommarskog, SQL Server MVP, sommar@.algonet.se
> > Books Online for SQL Server SP3 at
> > http://www.microsoft.com/sql/techin.../2000/books.asp|||Chuck Conover (cconover@.commspeed.net) writes:
> Absolutely. You should limit the database memory even if you have no
> other applications running on the machine. Strangely enough, SQL Server
> can choke the operating system by leaving too little memory for the OS
> to function at optimum.

No, for a machine that only runs SQL Server, there is no reason to configure
the memory. The most likely result is that when you buy more memory, you
cannot understand why it does not pay off, because you had forgotten that
you had constrained the memory.

--
Erland Sommarskog, SQL Server MVP, sommar@.algonet.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||"Chuck Conover" <cconover@.commspeed.net> wrote in message
news:1075132853.366896@.news.commspeed.net...
> Absolutely. You should limit the database memory even if you have no
other
> applications running on the machine. Strangely enough, SQL Server can
choke
> the operating system by leaving too little memory for the OS to function
at
> optimum.

I'd have to disagree. I've never seen this be an issue.|||Greg,
No problem to disagree. It is possible that I came to the wrong
conclusion. However, I did see a situation just recently whereby there were
several views (written very badly) that required the database to bring back
several million rows. The I/O was astronomical. It appeared that even
after the view had come back, the whole machine was incredibly slow, and our
diagnostics showed that the database had eaten almost all 2GB of memory on
the machine. So our assumption was that the OS did not have enough memory
to function optimally. Rebooting the machine was the short-term fix. Being
a production server, we made 3 fixes simultaneously to get the machine
working properly as quickly as possible. Correcting the views, adding
another 2GB of memory, and limiting the DB memory fixed the problem, but we
aren't sure which one of our fixes corrected the problem.

Thanks for the input. It is possible that the views did not ever finish
completely considering the I/O required. That could have been the reason
for the slowdown of the machine.

Best regards,
Chuck Conover
www.TechnicalVideos.net

"Greg D. Moore (Strider)" <mooregr_deleteth1s@.greenms.com> wrote in message
news:SgmRb.18$pE.4@.twister.nyroc.rr.com...
> "Chuck Conover" <cconover@.commspeed.net> wrote in message
> news:1075132853.366896@.news.commspeed.net...
> > Absolutely. You should limit the database memory even if you have no
> other
> > applications running on the machine. Strangely enough, SQL Server can
> choke
> > the operating system by leaving too little memory for the OS to function
> at
> > optimum.
> I'd have to disagree. I've never seen this be an issue.