Monday, March 26, 2012
Merge Agents Don't "Retry" After SQL Server Error 8645?
During a memory- and CPU-intensive nightly database update process, the merge agents (running on the server) for two subscribers failed with
SQL Server Error 8645 (A time out occurred while waiting for memory
resources to execute the query). It's not surprising to see this error,
but the fact that the agents did not retry as configured is surprising.
I discovered that this message is listed as a "severity" of 17 - is
there some threshold that prevents a retry by the agent? A restart of
the agent was successful.
Justin H.
How many merge agents do you have? Are you limiting the number of concurrent
merge agents?
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Justin H." <jhenry@.gmail.com> wrote in message
news:1134585801.416828.82300@.g44g2000cwa.googlegro ups.com...
> During a memory- and CPU-intensive nightly database update process, the
> merge agents (running on the server) for two subscribers failed with
> SQL Server Error 8645 (A time out occurred while waiting for memory
> resources to execute the query). It's not surprising to see this error,
> but the fact that the agents did not retry as configured is surprising.
>
> I discovered that this message is listed as a "severity" of 17 - is
> there some threshold that prevents a retry by the agent? A restart of
> the agent was successful.
> Justin H.
>
|||There are only two subscribers, so two merge agents with no concurrency
limit configured. I'm not so concerned about the error as long as a
retry happens as configured.
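For what it's worth, merge agent retry behaviour is governed by the retry settings on the agent's SQL Server Agent job step, which can be inspected and raised in msdb. A sketch, assuming default replication job categories; the job name below is a placeholder, not one from this thread:

```sql
USE msdb;
GO
-- List merge agent jobs and their current retry settings
SELECT j.name, s.step_id, s.retry_attempts, s.retry_interval
FROM dbo.sysjobs AS j
JOIN dbo.sysjobsteps AS s ON s.job_id = j.job_id
WHERE j.category_id IN (SELECT category_id
                        FROM dbo.syscategories
                        WHERE name = N'REPL-Merge');
GO
-- Raise the retry count on the "Run agent." step of one job
-- (job name and step_id are hypothetical; look yours up first)
EXEC dbo.sp_update_jobstep
    @job_name = N'MyPublisher-MyDB-MyPublication-MySubscriber-1',
    @step_id = 2,
    @retry_attempts = 10,
    @retry_interval = 5;  -- minutes between retries
```

Note that the job step retries only fire when the step itself reports failure; an error raised mid-run inside the agent may behave differently, which may explain the behaviour described above.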
Merge Agent Error
Hi
We are using HTTPS merge replication.
One of my subscribers is getting this error:
Error messages:
The Merge Agent failed after detecting that retention-based metadata cleanup has deleted metadata at the Publisher for changes not yet sent to the Subscriber. You must reinitialize the subscription (without upload). (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147199402)
This is a bit surprising - it had been working fine, and there were no changes at the publisher (this publication has only one article, a stored proc).
Why would this have happened? The retention period is 45 days, and this subscriber synchronised successfully only a few days prior.
thanks
1. How frequently does your app run a merge sync between the publisher and this subscriber? If you have other subscribers that have done a merge sync, did the same issue arise for them?
2. What do you get if you select last_sync_date and metadatacleanuptime from sysmergesubscriptions in the publication database?
Thanks.
This posting is provided "AS IS" with no warranties, and confers no rights
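The diagnostic query suggested above can be written as follows; run it in the publication database. If metadatacleanuptime is later than last_sync_date for a subscriber, cleanup has removed metadata that subscriber still needed. (Column selection is a sketch; verify column names against your build.)

```sql
-- Compare each subscriber's last successful sync against the
-- most recent retention-based metadata cleanup.
SELECT subscriber_server,
       db_name AS subscriber_db,
       last_sync_date,
       metadatacleanuptime
FROM dbo.sysmergesubscriptions;
```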
|||Thanks for replying.
1. It syncs up every day
2. last_sync_date = 2006-04-28 21:28:58.803
metadatacleanuptime = 2006-05-02 17:37:25.420
We have a number of clients for whom we set up merge replication. For each database we create two publications:
1) The first publication has all the data and procs. It is set up with syncType set to None, so when we initialise the subscription the subscriber already has the data and schema.
2) The second publication initially has only one proc. It is set up with syncType set to Automatic; we use this publication if we need to add any tables or procs to the database, and it obviously has a much smaller snapshot.
We have noticed that it is the second publication which often gets this error, even though the clients are synchronising regularly.
Regards
Bruce
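For reference, the two-publication pattern described above can be sketched in T-SQL. All names here are invented for illustration (the poster is actually using RMO, not these procedures), and a real setup needs more parameters:

```sql
USE MyClientDB;
GO
-- Publication 1: all tables and procs. The subscriber already has the
-- data and schema, so the subscription uses sync_type = 'none'.
EXEC sp_addmergepublication @publication = N'Pub_FullData', @retention = 45;
EXEC sp_addmergesubscription
    @publication   = N'Pub_FullData',
    @subscriber    = N'SUBSCRIBER1',
    @subscriber_db = N'MyClientDB',
    @sync_type     = N'none';       -- skip snapshot initialisation
GO
-- Publication 2: starts with a single proc and uses a normal (small)
-- snapshot, so sync_type = 'automatic' (the default).
EXEC sp_addmergepublication @publication = N'Pub_SchemaChanges', @retention = 45;
EXEC sp_addmergesubscription
    @publication   = N'Pub_SchemaChanges',
    @subscriber    = N'SUBSCRIBER1',
    @subscriber_db = N'MyClientDB',
    @sync_type     = N'automatic';
```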
Hi
Any ideas on this ?
Thanks
Bruce
Hi Bruce, this looks like a known bug in SQL 2005 that is slated to be fixed in SP2. Can you confirm that "select use_partition_groups from sysmergepublications" returns 0 for both your publications?
Also, how many articles are in the other publication?
|||Thanks for the response - I'm away for the next 5 days so can't get the answer on use_partition_groups yet...
In the other publication - there are about 600 I s'pose - 100 old tables and 500 procs or more.
Is there any way to stop this happening until sp2 ? We are rolling this out at the moment...
Thanks
|||One last question - you mentioned the subscribers sync daily, yet there's a four-day gap between the last sync time and the metadata cleanup time. Do you know what happened between 4/28 and 5/2? Did the agent not sync?
|||Hi
It syncs every day if the PC is on! This client turns it off at the weekend.
Yes - use_partition_groups is set to 0 for both publications
In case it matters, the subscriptions are set up as follows (synchronisation is driven by a Windows service I wrote, rather than a sync agent job):
subscription.CreateSyncAgentByDefault = False
subscription.UseWebSynchronization = True
subscription.InternetSecurityMode = AuthenticationMethod.BasicAuthentication
subscription.SubscriberType = MergeSubscriberType.Anonymous
As there is only one proc in the second publication at the moment (and it is a trivial proc), I can reinitialise it, and I guess this will be a workaround for now - but as soon as I need to add tables to it for future releases, I will not want to be doing this. Is it a matter of holding out until SP2 for a fix? I guess that will be 3-4 months away or worse.
One more question about reinitialising subscriptions. When marking the publication for reinitialisation from the publisher, I right-click the publication name and have to select 'reinitialize all subscriptions' (I'm guessing that because the subscriber type is anonymous, I can't select an individual subscriber when there are multiple subscribers to the publication). I get a message box asking me to confirm and whether to use the current snapshot or create a new one. What is most disconcerting is that it doesn't specify which publication it applies to. Just for my own sanity, can you confirm it only does this for the publication selected, and not all of them? Paranoid, I know, but it's always better to ask.
Thanks
Bruce
hi Greg
Any further update on a fix for this ?
Thanks
Bruce
Bruce,
You can wait for Service Pack 2 for the fix.
As a workaround, you can increase the retention period of the publications; that way, the chance of hitting the bug is lower.
As for your question about reinitializing the publication: all subscriptions to the publication you right-clicked (and chose reinitialize for) will be reinitialized. No other publication will be affected.
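The retention workaround suggested above can be applied with sp_changemergepublication, run in the publication database. The publication name and new value here are placeholders:

```sql
-- Raise the retention period so retention-based metadata cleanup
-- runs less aggressively (value is in days by default).
EXEC sp_changemergepublication
    @publication = N'Pub_SchemaChanges',
    @property    = N'retention',
    @value       = N'90';
```

The trade-off is that merge metadata is kept longer, so the metadata tables grow larger, as a later post in this thread worries about.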
|||FYI,
We are having the same issue and are eagerly awaiting SP2 or a hotfix.
|||We are having a similar issue, but the other way round:
The Merge Agent failed after detecting that retention-based metadata cleanup has deleted metadata at the Subscriber for changes not yet sent to the Publisher. You must reinitialize the subscription (without upload).
This started happening only after we set up a second publication. We have set the retention period to 5 days on both publications. We replicate data every 3 minutes, so 5 days is a long time for data not to replicate.
Any ideas as to why we are getting this problem? Is this a known issue?
|||
I posted a visual guide to addressing this issue at: http://www.vsteamsystemcentral.com/cs/blogs/applied_team_system/archive/2006/08/13/128.aspx.
:{> Andy
|||We have also run into this error and have used the workaround of extending our expiration out to 999 days. I am wondering if we will run into performance issues soon because the metadata will not be cleaned up often enough. Will the tables get extremely large? Is there a way to do the cleanup manually if performance suffers enough?
Is SP2 close to being released?
thanks,
jg
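On the manual-cleanup question above: SQL Server 2005 exposes sp_mergemetadataretentioncleanup, which runs retention-based cleanup of the merge metadata tables on demand. A hedged sketch; verify the procedure and its parameters against your build before relying on it:

```sql
-- Run retention-based merge metadata cleanup manually, in the
-- publication (or subscription) database, and report rows removed
-- from MSmerge_genhistory, MSmerge_contents and MSmerge_tombstone.
DECLARE @rows_genhistory int, @rows_contents int, @rows_tombstone int;
EXEC sp_mergemetadataretentioncleanup
    @num_genhistory_rows = @rows_genhistory OUTPUT,
    @num_contents_rows   = @rows_contents   OUTPUT,
    @num_tombstone_rows  = @rows_tombstone  OUTPUT;
SELECT @rows_genhistory AS genhistory_deleted,
       @rows_contents   AS contents_deleted,
       @rows_tombstone  AS tombstone_deleted;
```

Note this still honours the retention period, so with retention at 999 days it would remove very little; it is a way to schedule cleanup yourself, not to bypass retention.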
|||
Do subscribers and publishers both need to be upgraded to SP2a for this to be fixed?
We upgraded the publisher, and this is still happening.
I'll see if it happens to any of the subscribers after we upgrade them to SP2a.
Bruce
merge agent error
Hi
We are using HTTPS merge replication.
One of my subscribers is getting this error:
Error messages:
The Merge Agent failed after detecting that retention-based metadata cleanup has deleted metadata at the Publisher for changes not yet sent to the Subscriber. You must reinitialize the subscription (without upload). (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147199402)
This is a bit surprising - it has been working fine - and also there were no changes at the publisher (it only has one article in this publication - a stored proc)
Why would this have happened ? The retention period is 45 days and they synchronised successfully only a few days prior.
thanks
1. How often (frequently) does your app do merge between the publisher and this subscriber? If you had other subscribers and have done merge sync, did the same issue raise?
2. What if you select last_sync_date and metadatacleanuptime from sysmergesubscriptions on the publication database?
Thanks.
This posting is provided "AS IS" with no warranties, and confers no rights
|||Thanks for replying.
1. It syncs up every day
2. last_sync_date = 2006-04-28 21:28:58.803
metadatacleanuptime = 2006-05-02 17:37:25.420
We have a number of clients for whom we setup merge replication. For each database we create two publications
1) the first publication has all the data and procs - it is setup with the syncType set to None. Therefore when we initialise the subscription the subscriber already has the data and schema.
2) the second publication only has one proc initially - it is setup with the syncType set to Automatic - we use this publication if we need to add any tables or procs to the database - it obviously has a much smaller snapshot
We have noticed that it is the second publication which is often getting this error - even though the clients are regularly synchronising....
Regards
Bruce
|||
Hi
Any ideas on this ?
Thanks
Bruce
Hi Bruce, it looks like an existing bug in SQL 2005 which is slated to be fixed in SP2. Can you confirm that "select use_partition_groups from sysmergepublications" has value of 0 for both your publications?
Also, how many articles are in the other publication?
|||Thanks for the response - I'm away for the next 5 days so can't get the answer on use_partition_groups yet...
In the other publication - there are about 600 I s'pose - 100 old tables and 500 procs or more.
Is there any way to stop this happening until sp2 ? We are rolling this out at the moment...
Thanks
|||One last question - you mentioned the subscribers sync daily, yet there's four days difference between last sync time and the metadata cleanup time. Do you know what happened between 4/28 and 5/2? Did the agent not sync?|||Hi
It syncs every day if the pc is on ! This client turns it off on the weekend.
Yes - use_partition_groups is set to 0 for both publications
In case it matters the subscriptions are setup as follows:
subscription.CreateSyncAgentByDefault = False -- I wrote a windows service to synchronise
subscription.UseWebSynchronization = True
subscription.InternetSecurityMode = AuthenticationMethod.BasicAuthentication
subscription.SubscriberType = MergeSubscriberType.Anonymous
As there is only one proc in the second publication at the moment (and it is a trivial proc) I can reinitialise it and I guess this will be a workaround for now - but as soon as I need to add tables to this for future releases I will not want to be doing this. Is it a matter of hanging out until SP2 for a fix for this ? I guess that will be 3/4 months away or worse.
One more question about reinitialising subscriptions. When marking the publication for reinitialization from the publisher, I right-click on the publication name and have to select 'reinitialize all subscriptions' (I'm guessing that because the subscriber type is anonymous I can't select the individual subscriber if there are multiple subscribers to this publication), I get a message box asking me to confirm and whether to use the current snapshot or to create a new one. What is most disconcerting is that it doesn't specify which publication it is doing this for. Just for my own sanity, can you confirm it is only doing this for the publication you are selecting, and not all of them !!!! Paranoid I know but it's always better to ask I think..
Thanks
Bruce
hi Greg
Any further update on a fix for this ?
Thanks
Bruce
Bruce,
You can wait for Service pack 2 for the fix.
As a workaround, what you can do is increase the retention period of the publications. That way, the possibility of hitting the bug is lesser.
As per your question of reintializing the publication, all subscriptions to the publication you right clicked (and chose reinitialize) will be renitialized. No other publication will be affected.
|||FYI,
We are having the same issue and are eagerly awaiting SP2 or a hotfix.
|||We are having a similar issue but it is the other way round.
The Merge Agent failed after detecting that retention-based metadata cleanup has deleted metadata at the Subscriber for changes not yet sent to the Publisher. You must reinitialize the subscription (without upload).
This has started to happen only after we have set-up a second publication. We have set the retention period to 5 days on both the publications. We replicate data every 3 minutes and hence 5 days is a long time for data not to replicate.
Any ideas as to why we are getting this problem? Is this a known issue?
|||
I posted a visual guide to addressing this issue at: http://www.vsteamsystemcentral.com/cs/blogs/applied_team_system/archive/2006/08/13/128.aspx.
:{> Andy
|||We have also run into this error, and have used the work around of extending our expiration out to 999 days.I am wondering if we will be running into performance issues soon because the metadata will not be cleaned up often enough. Will the tables get extrememly large? Is there a way to do the cleanup manually if performance suffers enough?
Is SP2 close to being released?
thanks,
jg|||
Do subscribers and publishers both need to be upgraded to SP2a for this to be fixed
We upgraded the publisher, and this is still happening.
I'll see if it happens to any of the subscribers after we upgrade them to sp2a.
Bruce
merge agent error
Hi
We are using HTTPS merge replication.
One of my subscribers is getting this error:
Error messages:
The Merge Agent failed after detecting that retention-based metadata cleanup has deleted metadata at the Publisher for changes not yet sent to the Subscriber. You must reinitialize the subscription (without upload). (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147199402)
This is a bit surprising - it has been working fine - and also there were no changes at the publisher (it only has one article in this publication - a stored proc)
Why would this have happened ? The retention period is 45 days and they synchronised successfully only a few days prior.
thanks
1. How often (frequently) does your app do merge between the publisher and this subscriber? If you had other subscribers and have done merge sync, did the same issue raise?
2. What if you select last_sync_date and metadatacleanuptime from sysmergesubscriptions on the publication database?
Thanks.
This posting is provided "AS IS" with no warranties, and confers no rights
|||Thanks for replying.
1. It syncs up every day
2. last_sync_date = 2006-04-28 21:28:58.803
metadatacleanuptime = 2006-05-02 17:37:25.420
We have a number of clients for whom we setup merge replication. For each database we create two publications
1) the first publication has all the data and procs - it is setup with the syncType set to None. Therefore when we initialise the subscription the subscriber already has the data and schema.
2) the second publication only has one proc initially - it is setup with the syncType set to Automatic - we use this publication if we need to add any tables or procs to the database - it obviously has a much smaller snapshot
We have noticed that it is the second publication which is often getting this error - even though the clients are regularly synchronising....
Regards
Bruce
|||
Hi
Any ideas on this ?
Thanks
Bruce
Hi Bruce, it looks like an existing bug in SQL 2005 which is slated to be fixed in SP2. Can you confirm that "select use_partition_groups from sysmergepublications" has value of 0 for both your publications?
Also, how many articles are in the other publication?
|||Thanks for the response - I'm away for the next 5 days so can't get the answer on use_partition_groups yet...
In the other publication - there are about 600 I s'pose - 100 old tables and 500 procs or more.
Is there any way to stop this happening until sp2 ? We are rolling this out at the moment...
Thanks
|||One last question - you mentioned the subscribers sync daily, yet there's four days difference between last sync time and the metadata cleanup time. Do you know what happened between 4/28 and 5/2? Did the agent not sync?|||Hi
It syncs every day if the pc is on ! This client turns it off on the weekend.
Yes - use_partition_groups is set to 0 for both publications
In case it matters the subscriptions are setup as follows:
subscription.CreateSyncAgentByDefault = False -- I wrote a windows service to synchronise
subscription.UseWebSynchronization = True
subscription.InternetSecurityMode = AuthenticationMethod.BasicAuthentication
subscription.SubscriberType = MergeSubscriberType.Anonymous
As there is only one proc in the second publication at the moment (and it is a trivial proc) I can reinitialise it and I guess this will be a workaround for now - but as soon as I need to add tables to this for future releases I will not want to be doing this. Is it a matter of hanging out until SP2 for a fix for this ? I guess that will be 3/4 months away or worse.
One more question about reinitialising subscriptions. When marking the publication for reinitialization from the publisher, I right-click on the publication name and have to select 'reinitialize all subscriptions' (I'm guessing that because the subscriber type is anonymous I can't select the individual subscriber if there are multiple subscribers to this publication), I get a message box asking me to confirm and whether to use the current snapshot or to create a new one. What is most disconcerting is that it doesn't specify which publication it is doing this for. Just for my own sanity, can you confirm it is only doing this for the publication you are selecting, and not all of them !!!! Paranoid I know but it's always better to ask I think..
Thanks
Bruce
hi Greg
Any further update on a fix for this ?
Thanks
Bruce
Bruce,
You can wait for Service pack 2 for the fix.
As a workaround, what you can do is increase the retention period of the publications. That way, the possibility of hitting the bug is lesser.
As per your question of reintializing the publication, all subscriptions to the publication you right clicked (and chose reinitialize) will be renitialized. No other publication will be affected.
|||FYI,
We are having the same issue and are eagerly awaiting SP2 or a hotfix.
|||We are having a similar issue but it is the other way round.
The Merge Agent failed after detecting that retention-based metadata cleanup has deleted metadata at the Subscriber for changes not yet sent to the Publisher. You must reinitialize the subscription (without upload).
This has started to happen only after we have set-up a second publication. We have set the retention period to 5 days on both the publications. We replicate data every 3 minutes and hence 5 days is a long time for data not to replicate.
Any ideas as to why we are getting this problem? Is this a known issue?
|||
I posted a visual guide to addressing this issue at: http://www.vsteamsystemcentral.com/cs/blogs/applied_team_system/archive/2006/08/13/128.aspx.
:{> Andy
|||We have also run into this error, and have used the work around of extending our expiration out to 999 days.I am wondering if we will be running into performance issues soon because the metadata will not be cleaned up often enough. Will the tables get extrememly large? Is there a way to do the cleanup manually if performance suffers enough?
Is SP2 close to being released?
thanks,
jg|||
Do subscribers and publishers both need to be upgraded to SP2a for this to be fixed
We upgraded the publisher, and this is still happening.
I'll see if it happens to any of the subscribers after we upgrade them to sp2a.
Bruce
merge agent error
Hi
We are using HTTPS merge replication.
One of my subscribers is getting this error:
Error messages:
The Merge Agent failed after detecting that retention-based metadata cleanup has deleted metadata at the Publisher for changes not yet sent to the Subscriber. You must reinitialize the subscription (without upload). (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147199402)
This is a bit surprising - it has been working fine - and also there were no changes at the publisher (it only has one article in this publication - a stored proc)
Why would this have happened ? The retention period is 45 days and they synchronised successfully only a few days prior.
thanks
1. How often (frequently) does your app do merge between the publisher and this subscriber? If you had other subscribers and have done merge sync, did the same issue raise?
2. What if you select last_sync_date and metadatacleanuptime from sysmergesubscriptions on the publication database?
Thanks.
This posting is provided "AS IS" with no warranties, and confers no rights
|||Thanks for replying.
1. It syncs up every day
2. last_sync_date = 2006-04-28 21:28:58.803
metadatacleanuptime = 2006-05-02 17:37:25.420
We have a number of clients for whom we setup merge replication. For each database we create two publications
1) the first publication has all the data and procs - it is setup with the syncType set to None. Therefore when we initialise the subscription the subscriber already has the data and schema.
2) the second publication only has one proc initially - it is setup with the syncType set to Automatic - we use this publication if we need to add any tables or procs to the database - it obviously has a much smaller snapshot
We have noticed that it is the second publication which is often getting this error - even though the clients are regularly synchronising....
Regards
Bruce
Hi
Any ideas on this ?
Thanks
Bruce
Hi Bruce, it looks like an existing bug in SQL 2005 which is slated to be fixed in SP2. Can you confirm that "select use_partition_groups from sysmergepublications" has value of 0 for both your publications?
Also, how many articles are in the other publication?
|||Thanks for the response - I'm away for the next 5 days so can't get the answer on use_partition_groups yet...
In the other publication - there are about 600 I s'pose - 100 old tables and 500 procs or more.
Is there any way to stop this happening until sp2 ? We are rolling this out at the moment...
Thanks
|||One last question - you mentioned the subscribers sync daily, yet there's four days difference between last sync time and the metadata cleanup time. Do you know what happened between 4/28 and 5/2? Did the agent not sync?|||Hi
It syncs every day if the pc is on ! This client turns it off on the weekend.
Yes - use_partition_groups is set to 0 for both publications
In case it matters the subscriptions are setup as follows:
subscription.CreateSyncAgentByDefault = False -- I wrote a windows service to synchronise
subscription.UseWebSynchronization = True
subscription.InternetSecurityMode = AuthenticationMethod.BasicAuthentication
subscription.SubscriberType = MergeSubscriberType.Anonymous
As there is only one proc in the second publication at the moment (and it is a trivial proc) I can reinitialise it and I guess this will be a workaround for now - but as soon as I need to add tables to this for future releases I will not want to be doing this. Is it a matter of hanging out until SP2 for a fix for this ? I guess that will be 3/4 months away or worse.
One more question about reinitialising subscriptions. When marking the publication for reinitialization from the publisher, I right-click on the publication name and have to select 'reinitialize all subscriptions' (I'm guessing that because the subscriber type is anonymous I can't select the individual subscriber if there are multiple subscribers to this publication), I get a message box asking me to confirm and whether to use the current snapshot or to create a new one. What is most disconcerting is that it doesn't specify which publication it is doing this for. Just for my own sanity, can you confirm it is only doing this for the publication you are selecting, and not all of them !!!! Paranoid I know but it's always better to ask I think..
Thanks
Bruce
hi Greg
Any further update on a fix for this ?
Thanks
Bruce
Bruce,
You can wait for Service pack 2 for the fix.
As a workaround, what you can do is increase the retention period of the publications. That way, the possibility of hitting the bug is lesser.
As per your question of reintializing the publication, all subscriptions to the publication you right clicked (and chose reinitialize) will be renitialized. No other publication will be affected.
|||FYI,
We are having the same issue and are eagerly awaiting SP2 or a hotfix.
|||We are having a similar issue but it is the other way round.
The Merge Agent failed after detecting that retention-based metadata cleanup has deleted metadata at the Subscriber for changes not yet sent to the Publisher. You must reinitialize the subscription (without upload).
This has started to happen only after we have set-up a second publication. We have set the retention period to 5 days on both the publications. We replicate data every 3 minutes and hence 5 days is a long time for data not to replicate.
Any ideas as to why we are getting this problem? Is this a known issue?
|||
I posted a visual guide to addressing this issue at: http://www.vsteamsystemcentral.com/cs/blogs/applied_team_system/archive/2006/08/13/128.aspx.
:{> Andy
|||We have also run into this error, and have used the work around of extending our expiration out to 999 days.I am wondering if we will be running into performance issues soon because the metadata will not be cleaned up often enough. Will the tables get extrememly large? Is there a way to do the cleanup manually if performance suffers enough?
Is SP2 close to being released?
thanks,
jg
|||
Do subscribers and publishers both need to be upgraded to SP2a for this to be fixed
We upgraded the publisher, and this is still happening.
I'll see if it happens to any of the subscribers after we upgrade them to sp2a.
Bruce
merge agent error
Hi
We are using HTTPS merge replication.
One of my subscribers is getting this error:
Error messages:
The Merge Agent failed after detecting that retention-based metadata cleanup has deleted metadata at the Publisher for changes not yet sent to the Subscriber. You must reinitialize the subscription (without upload). (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147199402)
This is a bit surprising - it has been working fine - and also there were no changes at the publisher (it only has one article in this publication - a stored proc)
Why would this have happened ? The retention period is 45 days and they synchronised successfully only a few days prior.
thanks
1. How often (frequently) does your app do merge between the publisher and this subscriber? If you had other subscribers and have done merge sync, did the same issue raise?
2. What if you select last_sync_date and metadatacleanuptime from sysmergesubscriptions on the publication database?
Thanks.
This posting is provided "AS IS" with no warranties, and confers no rights
|||Thanks for replying.
1. It syncs up every day
2. last_sync_date = 2006-04-28 21:28:58.803
metadatacleanuptime = 2006-05-02 17:37:25.420
We have a number of clients for whom we setup merge replication. For each database we create two publications
1) the first publication has all the data and procs - it is setup with the syncType set to None. Therefore when we initialise the subscription the subscriber already has the data and schema.
2) the second publication only has one proc initially - it is setup with the syncType set to Automatic - we use this publication if we need to add any tables or procs to the database - it obviously has a much smaller snapshot
We have noticed that it is the second publication which is often getting this error - even though the clients are regularly synchronising....
Regards
Bruce
|||
Hi
Any ideas on this ?
Thanks
Bruce
Hi Bruce, it looks like an existing bug in SQL 2005 which is slated to be fixed in SP2. Can you confirm that "select use_partition_groups from sysmergepublications" has value of 0 for both your publications?
Also, how many articles are in the other publication?
|||Thanks for the response - I'm away for the next 5 days so can't get the answer on use_partition_groups yet...
In the other publication - there are about 600 I s'pose - 100 old tables and 500 procs or more.
Is there any way to stop this happening until sp2 ? We are rolling this out at the moment...
Thanks
|||One last question - you mentioned the subscribers sync daily, yet there's four days difference between last sync time and the metadata cleanup time. Do you know what happened between 4/28 and 5/2? Did the agent not sync?|||Hi
It syncs every day if the pc is on ! This client turns it off on the weekend.
Yes - use_partition_groups is set to 0 for both publications
In case it matters the subscriptions are setup as follows:
subscription.CreateSyncAgentByDefault = False -- I wrote a windows service to synchronise
subscription.UseWebSynchronization = True
subscription.InternetSecurityMode = AuthenticationMethod.BasicAuthentication
subscription.SubscriberType = MergeSubscriberType.Anonymous
As there is only one proc in the second publication at the moment (and it is a trivial proc) I can reinitialise it and I guess this will be a workaround for now - but as soon as I need to add tables to this for future releases I will not want to be doing this. Is it a matter of hanging out until SP2 for a fix for this ? I guess that will be 3/4 months away or worse.
This is a bit surprising - it has been working fine, and there were no changes at the publisher (this publication has only one article, a stored proc).
Why would this have happened? The retention period is 45 days, and they synchronised successfully only a few days prior.
thanks
1. How often does your app run a merge between the publisher and this subscriber? If you have other subscribers that have done a merge sync, did the same issue arise?
2. What do you get if you select last_sync_date and metadatacleanuptime from sysmergesubscriptions on the publication database?
Thanks.
This posting is provided "AS IS" with no warranties, and confers no rights
|||Thanks for replying.
1. It syncs up every day
2. last_sync_date = 2006-04-28 21:28:58.803
metadatacleanuptime = 2006-05-02 17:37:25.420
We have a number of clients for whom we set up merge replication. For each database we create two publications:
1) The first publication has all the data and procs. It is set up with syncType set to None, so when we initialise the subscription the subscriber already has the data and schema.
2) The second publication has only one proc initially and is set up with syncType set to Automatic. We use this publication if we need to add any tables or procs to the database, so it has a much smaller snapshot.
We have noticed that it is the second publication which often gets this error - even though the clients are synchronising regularly...
Regards
Bruce
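As an aside for anyone hitting the same error: the comparison discussed above - last_sync_date versus metadatacleanuptime - can be run directly on the publication database. Only the two column names mentioned in the thread are assumed here:

```sql
-- Run on the publication database at the publisher.
-- If metadatacleanuptime is later than a subscription's last_sync_date,
-- cleanup may have purged metadata that subscriber still needed -- the
-- condition behind error MSSQL_REPL-2147199402.
SELECT last_sync_date, metadatacleanuptime
FROM dbo.sysmergesubscriptions;
```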
Hi
Any ideas on this ?
Thanks
Bruce
Hi Bruce, it looks like an existing bug in SQL 2005 which is slated to be fixed in SP2. Can you confirm that "select use_partition_groups from sysmergepublications" has value of 0 for both your publications?
Also, how many articles are in the other publication?
|||Thanks for the response - I'm away for the next 5 days so can't get the answer on use_partition_groups yet...
In the other publication - there are about 600 I s'pose - 100 old tables and 500 procs or more.
Is there any way to stop this happening until SP2? We are rolling this out at the moment...
Thanks
|||One last question - you mentioned the subscribers sync daily, yet there's a four-day difference between the last sync time and the metadata cleanup time. Do you know what happened between 4/28 and 5/2? Did the agent not sync?|||Hi
It syncs every day if the PC is on! This client turns it off on the weekend.
Yes - use_partition_groups is set to 0 for both publications
In case it matters the subscriptions are setup as follows:
subscription.CreateSyncAgentByDefault = False -- I wrote a windows service to synchronise
subscription.UseWebSynchronization = True
subscription.InternetSecurityMode = AuthenticationMethod.BasicAuthentication
subscription.SubscriberType = MergeSubscriberType.Anonymous
As there is only one proc in the second publication at the moment (and it is a trivial proc), I can reinitialise it, which I guess is a workaround for now - but as soon as I need to add tables for future releases, I won't want to be doing this. Is it a matter of hanging on until SP2 for a fix? I guess that is 3-4 months away, or worse.
One more question about reinitialising subscriptions. When marking the publication for reinitialization from the publisher, I right-click on the publication name and have to select 'Reinitialize all subscriptions' (I'm guessing that because the subscriber type is anonymous, I can't select an individual subscriber when there are multiple subscribers to this publication). I then get a message box asking me to confirm, and whether to use the current snapshot or to create a new one. What is disconcerting is that it doesn't specify which publication it is doing this for. Just for my own sanity, can you confirm it only does this for the publication you selected, and not all of them? Paranoid, I know, but it's always better to ask.
Thanks
Bruce
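For what it's worth, the reinitialisation Bruce describes can also be scripted at the publisher instead of using the UI. This is a sketch using sp_reinitmergesubscription; the three names are placeholders, and @upload_first = 'false' corresponds to the "without upload" wording in the error message:

```sql
-- Run on the publication database at the publisher.
-- Marks the subscription for reinitialisation, discarding any pending
-- subscriber changes (the "without upload" option).
EXEC sp_reinitmergesubscription
    @publication   = N'MyPublication',    -- placeholder
    @subscriber    = N'MySubscriber',     -- placeholder
    @subscriber_db = N'MySubscriberDb',   -- placeholder
    @upload_first  = 'false';
```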
hi Greg
Any further update on a fix for this ?
Thanks
Bruce
Bruce,
You can wait for Service Pack 2 for the fix.
As a workaround, you can increase the retention period of the publications; that lowers the chance of hitting the bug.
As for your question about reinitializing the publication: all subscriptions to the publication you right-clicked (and chose reinitialize) will be reinitialized. No other publication will be affected.
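The retention workaround above can be applied with sp_changemergepublication. The publication name and the 60-day value are illustrative only; depending on the publication's settings, SQL Server may also require the @force_invalidate_snapshot / @force_reinit_subscription parameters:

```sql
-- Run on the publication database at the publisher.
-- Widens the retention window so metadata survives longer between syncs.
EXEC sp_changemergepublication
    @publication = N'MyPublication',   -- placeholder
    @property    = N'retention',
    @value       = N'60';              -- days; pick a value well beyond the
                                       -- longest expected gap between syncs
```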
|||FYI,
We are having the same issue and are eagerly awaiting SP2 or a hotfix.
|||We are having a similar issue but it is the other way round.
The Merge Agent failed after detecting that retention-based metadata cleanup has deleted metadata at the Subscriber for changes not yet sent to the Publisher. You must reinitialize the subscription (without upload).
This started happening only after we set up a second publication. We have set the retention period to 5 days on both publications. We replicate data every 3 minutes, so 5 days is a long time for data not to replicate.
Any ideas as to why we are getting this problem? Is this a known issue?
|||
I posted a visual guide to addressing this issue at: http://www.vsteamsystemcentral.com/cs/blogs/applied_team_system/archive/2006/08/13/128.aspx.
:{> Andy
|||We have also run into this error, and have used the workaround of extending our expiration out to 999 days. I am wondering if we will run into performance issues soon because the metadata will not be cleaned up often enough. Will the tables get extremely large? Is there a way to do the cleanup manually if performance suffers enough?
Is SP2 close to being released?
thanks,
jg
|||
Do subscribers and publishers both need to be upgraded to SP2a for this to be fixed?
We upgraded the publisher, and this is still happening.
I'll see if it happens to any of the subscribers after we upgrade them to sp2a.
Bruce
Friday, March 23, 2012
merge agent displaying never started
Recently I configured merge replication with a pull subscriber. The snapshot agent ran successfully, but the merge agent failed (never started). Any help would be appreciated.
Message posted via http://www.droptable.com
One posibility is that the job owner is invalid - if the job owner is a
domain login, try changing it to sa and restart the job.
Rgds,
Paul Ibison
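Paul's suggestion can be applied in T-SQL via msdb's sp_update_job; the job name below is a placeholder for the merge agent's job:

```sql
USE msdb;
-- Change the job owner to sa, then restart the job, as suggested above.
EXEC dbo.sp_update_job
    @job_name         = N'<merge agent job name>',  -- placeholder
    @owner_login_name = N'sa';
```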
|||Thanks for the reply, but that didn't work.
When we look at the merge agent history, it shows nothing.
|||SQL Server could not start - error 22022.
Can anyone suggest a solution, please?
|||OK - I've found this article which hopefully explains what is happening:
http://support.microsoft.com/?kbid=870674
HTH,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
Merge Replication Failed
Hi guys, I'm a newbie but need to learn more of this, so please bear with me and help me solve my problem.
My SQL Server starts merging and begins dumping the schema and data; after some time the status shows failed, without giving any error message. Please let me know what to check and how to resolve this.
The merge agent must be saying something. Are you looking at the SQL Server Agent job history?
Also you can try running the merge agent from command line with increased verbosity. -OutputVerboseLevel 2
|||Yeah, there is an error saying:
"The process could not deliver the snapshot to the subscriber. Note: The step was retried the requested number of times (10) without succeeding. The step failed."
So please let me know what to do about this. Thanks and praise in advance.
|||Try one or all of the following:
1. Can you expand all the + in the job history view and see if there is any relevant information there.
2. Run the merge agent from the command-line tool D:\Program Files\Microsoft SQL Server\90\COM\replmerg.exe with all the relevant parameters, and also add -OutputVerboseLevel 2.
3. Look in distribution..MSmerge_history for the error message for this session.
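For reference, a replmerg.exe invocation along the lines of step 2 might look like the following; every angle-bracketed value is a placeholder for your own topology:

```
"D:\Program Files\Microsoft SQL Server\90\COM\replmerg.exe" ^
  -Publisher <publisher server> -PublisherDB <publication db> ^
  -Publication <publication name> ^
  -Subscriber <subscriber server> -SubscriberDB <subscription db> ^
  -Distributor <distributor server> ^
  -OutputVerboseLevel 2 -Output C:\replmerg.log
```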
Wednesday, March 21, 2012
MemToLeave area! How to monitor?
Occasionally I get the error "WARNING: Failed to reserve contiguous memory of Size= 65536."
What is the unit for this size - bytes or KB? Additionally, I have set up 511 MB for the MemToLeave area by adding -g384 for SQL 2000 SP3a on an AWE-enabled system. I have a total of 16 GB of RAM, of which 14 GB is set as the maximum for SQL Server.
If I look at the Perfmon counter Process -> Private Bytes for the SQL Server process, it gives me 224 MB.
If I look at the sqlservr.exe process in Task Manager, it gives me 215 MB.
If I look at DBCC MEMORYSTATUS, OS In Use, it gives me 12 MB.
I would like to know: how can I calculate exactly how much memory is assigned, and how much is in use, from the MemToLeave area by looking at the above counters or any other Perfmon or DBCC counters?
I really appreciate any input on this matter.
|||Please take a look at "Inside SQL Server 2000's Memory Management Facilities" by Ken Henderson:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsqldev/html/sqldev_01262004.asp
"... None of the tools you typically use to inspect application memory use (Task Manager, Perfmon/Sysmon, etc.) show the amount of AWE memory used by individual processes. There's no indication of the amount of AWE memory used by each process, nor is this memory included in the working set size reported for a given process ..."
Thanks,
-Ivan
|||Thanks, Ivan. I had already checked that article and a few other Google results, but didn't find the answer.
|||James,
The unit of measure in the message you list is bytes, not KB. Basically,
the memory mgr is failing to reserve 64KB of memory.
The numbers you report from the various monitoring tools don't sound
surprising -- they all measure different things. For example,
Process:Private Bytes is a measure of _committed_ virtual memory, not
reserved. -G controls the region set aside for MTL, not committed or
reserved -- free. That virtual memory is reserved and committed as needed
by the various memory consumers running inside the SQL Server process.
Also, on SS2K, AWE can't be used for anything except caching data and index
pages. Regular MTL allocations never come from AWE.
Keep in mind that the MTL region is really not a region at all but just
refers to the memory left over once the BPool takes what it needs. It's the
unused virtual memory in the process's virtual address space. -G can grow
or shrink this area, but it basically only amounts to unused memory within
the process.
Allocations by external consumers (COM objects, xprocs (usually), OLEDB
providers, etc.) come from MTL. Also, allocations by the server itself that
are >8KB are serviced from MTL rather than the BPool. This just means that,
at some level, they call VirtualAlloc to allocate VM directly from Windows
rather than using pages already allocated to the BPool.
Accompanying the error message you list should be the equivalent of DBCC
MEMORYSTATUS output. This is more relevant than running the command
yourself because it's taken at the exact moment the error occurred. If I
were you, I'd have a look at the various buckets listed in that report to
see if any of them seem high. Keep in mind that many of them are page
counts, not byte counts, so you need to multiply them by 8KB to get the
exact byte count in use.
Also keep in mind that this error can be caused by extreme fragmentation as
well as over-allocation of memory. IOW, you could have well more than 64KB
available within the process, but no single contiguous block of that size or
larger.
If, after you've worked through the above, you still can't figure out why
the reservation is failing, you might want to contact PSS to help you
troubleshoot it further. They deal with these all the time and should be
able to get you fixed up in no time.
HTH,
-kh
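As a back-of-the-envelope check on the figures in this thread: the MemToLeave reservation on SQL Server 2000 is commonly estimated as the -g value plus one 0.5 MB stack per worker thread (the 0.5 MB stack size and the default of 255 worker threads are assumptions from memory, not stated in the thread itself):

```sql
-- MemToLeave (MB) ~= g_value + max_worker_threads * stack_mb
SELECT 384 + 255 * 0.5 AS approx_memtoleave_mb;
-- ~511.5 MB, which lines up with the 511 MB quoted in the question above
```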
"james" <kush@.brandes.com> wrote in message
news:%23hTPNHLPGHA.720@.TK2MSFTNGP14.phx.gbl...
> Thanks Ivan. I had already checked that article and also few other google
> search but didn't get the answer.
> <ivanpe@.online.microsoft.com> wrote in message
> news:OenNh9IPGHA.3840@.TK2MSFTNGP14.phx.gbl...
>> Please, take a look at "Inside SQL Server 2000's Memory Management
>> Facilities" by Ken Henderson:
>> http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsqldev/html/sqldev_01262004.asp
>> "... None of the tools you typically use to inspect application memory
>> use (Task Manager, Perfmon/Sysmon, etc.) show the amount of AWE memory
>> used by individual processes. There's no indication of the amount of AWE
>> memory used by each process, nor is this memory included in the working
>> set size reported for a given process ..."
>> Thanks,
>> -Ivan
>> --Original Message--
>> From: james
>> Posted At: Tuesday, February 28, 2006 6:52 AM
>> Posted To: microsoft.public.sqlserver.server
>> Conversation: MemToLeave area! How to monitor?
>> Subject: MemToLeave area! How to monitor?
>>
>> Gurus,
>> Occasionally I get the error of "WARNING: Failed to reserve contiguous
>> memory of Size= 65536."
>> What is the unit for this size, is it bytes or KB? Additionally,
>> I have set up 511 MB for MemToLeave area by adding -g384 for Sql2k SP3a
>> on AWE enabled system. I have total of 16 GB of RAM, of which 14 GB is
>> set up maximum for Sql server.
>> If I look at Perfmon counter of Process->Private bytes for sql server
>> process it gives me 224 MB.
>> If I look at sqlserver.exe process on task manager it gives me 215MB.
>> If I look at DBCC memorystatus, OS in use, it gives me 12 MB.
>> I would like to know, How can I calculate exactly how much memory is
>> being assigned and how much in use from MemToLeave area by looking at
>> above mentioned counters or any other perfmon or dbcc counters?
>> I really appreciate any input on this matter.
>
Memory Warning
I have the following entries in my SQL error log:
WARNING: Failed to reserve contiguous memory of Size= 65536.
Buffer Distribution: Stolen=4294940950 Free=112 Procedures=109559 Inram=0 Dirty=1167 Kept=0 I/O=0 Latched=206 Other=123974
Buffer Counts: Commited=208672 Target=208672 Hashed=125347 InternalReservation=346 ExternalReservation=58 Min Free=128
Procedure Cache: TotalProcs=10484 TotalPages=109559 InUsePages=14527
Dynamic Memory Manager: Stolen=83213 OS Reserved=40456 OS Committed=36671 OS In Use=29121 Query Plan=108697 Optimizer=0 General=3420 Utilities=6 Connection=68
Global Memory Objects: Resource=1808 Locks=77 SQLCache=653 Replication=2 LockBytes=2 ServerGlobal=46 Xact=18
Query Memory Manager: Grants=1 Waiting=0 Maximum=165378 Available=165320
Can anyone help diagnose the problem?
Thanks,
Hillaire
|||There are certain operations that can only happen in a contiguous block of
memory and this is done in what is called the MemToLeave area. By default
you only have about 256MB allocated for this area and you must have run low.
If this repeats you might considering adding the -g parameter to SQL Server
upon startup and reserve more than 256MB. Have a look at "startup options"
in BooksOnLine for more details.
--
Andrew J. Kelly SQL MVP
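Concretely, the -g option Andrew mentions is a SQL Server startup parameter, added via the service's startup options; the value 384 here is just the figure used elsewhere in this document:

```
sqlservr.exe -g384
```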
"kelly" <anonymous@.discussions.microsoft.com> wrote in message
news:08d501c46e65$82f1e660$a601280a@.phx.gbl...
> I have the following entries in my sql error log:
> [memory warning details snipped; same as quoted in full above]
> Hillaire
|||What service pack are you on for SQL Server? See if this applies to you:
http://support.microsoft.com/default.aspx?scid=kb;en-us;818095
--
HTH,
Vyas, MVP (SQL Server)
http://vyaskn.tripod.com/
Is .NET important for a database professional?
http://vyaskn.tripod.com/poll.htm
"kelly" <anonymous@.discussions.microsoft.com> wrote in message
news:08d501c46e65$82f1e660$a601280a@.phx.gbl...
I have the following entries in my sql error log:
[memory warning details snipped; same as quoted in full above]