Showing posts with label application. Show all posts

Wednesday, March 28, 2012

Merge Application Conflict Log

Is it possible to write conflict SQL to some persistent table?
I know that conflicts are written to the merge replication tables and can be viewed via the
Conflict Viewer,
but they are lost after the snapshot runs.
I need to keep them forever.
I could probably write a custom resolver that would do that - but is there any
'out of the box' solution?
Grigoris - as far as I know there isn't any out-of-the-box solution but
perhaps the easiest solution would be to regularly poll the conflict table
and to write the values into an audit table.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
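Paul's polling approach could be sketched roughly like this. This is an illustrative T-SQL job step only: the audit table and its columns are hypothetical, and the real conflict table name for each article can be read from sysmergearticles.conflict_table.

```sql
-- Illustrative job step: copy any new conflict rows into a permanent audit
-- table before the next snapshot clears them. Table and column names here
-- are hypothetical; look up the real conflict table per article in
-- sysmergearticles.conflict_table.
INSERT INTO dbo.ConflictAudit (origin_datasource, conflict_type, reason_text, logged_at)
SELECT c.origin_datasource, c.conflict_type, c.reason_text, GETDATE()
FROM dbo.conflict_MyPub_MyArticle AS c
WHERE NOT EXISTS (SELECT 1 FROM dbo.ConflictAudit AS a
                  WHERE a.origin_datasource = c.origin_datasource
                    AND a.reason_text = c.reason_text)
```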
|||Use perfmon to watch the conflicts per second counter of SQL
Server:Replication Merge and run a job which reads the conflict tables when
conflicts occur.
You can also use sp_helpmergearticleconflicts and run it on the publisher
and subscriber. It will return a list of articles with conflicts and you can
then manually or programmatically retrieve them.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Grigoris Tsolakidis" <gcholakidis@.spam_remove.hotmail.com> wrote in message
news:%237acphUZHHA.588@.TK2MSFTNGP06.phx.gbl...
> Is it possible to write conflict SQL to some persistent table?
> I know that conflicts are written to the merge replication tables and can be viewed
> via the Conflict Viewer,
> but they are lost after the snapshot runs.
> I need to keep them forever.
> I could probably write a custom resolver that would do that - but is there any
> 'out of the box' solution?
>
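A sketch of the sp_helpmergearticleconflicts call Hilary mentions; the publication name is a placeholder:

```sql
-- List the articles that currently have conflicts for a publication
-- (run in the publication or subscription database).
EXEC sp_helpmergearticleconflicts @publication = N'MyPublication'
```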

MERGE AND TRANSACTIONAL REPLICATION SUPPORT

We currently have a web application (using ASP for the front end and SQL 2000
for the back-end DB). The site is accessed from several locations around the
world. Response is very slow for people in Europe when the server is located
in the US. We have now been tasked to deploy servers at each
location and implement SQL replication (merge or transactional?). I am not sure how
much of that is done out in the industry, and there is just not a lot of
documentation, either from MS or elsewhere, about how to do it. We have some
of the books that show you how to set up the servers, but we are looking for
more detailed instructions and case studies on what the best practices are.
Before we get started we would like to learn from people who have actually
done this and find documentation to help us implement it. I have set up a lab
with two Windows 2003 servers with SQL on each. Thank you.
Edgar
saic_edgar@.yahoo.com
Paul,
Thank you for your suggestions, including the books that will soon become
available. I plan to buy them as soon as they are available. I was
wondering why there is so little material on this subject when there are
hundreds of books on using Oracle for replication. I know MS is newer at this,
but we are hoping it will work well.
I am not sure I can answer your questions, but here are more details on what
we need.
We expect to have 4 servers located in 4 geographical areas. Each will
start with the same snapshot and each will be used to make updates. All
updates need to replicate to keep all servers in sync. Transactional may not
work since the WAN links connecting the servers may be too slow at times. I
have to do some reading on the republisher replication model and merge replication
with alternate synchronization partners, which you suggested, to understand how
exactly they would work. My function on the team is as SA/DBA and I do not do
any programming. That will be done by 3 other SQL programmers. The ideal
solution would be to have people in each of the 4 locations use the
application on their local server and periodically, maybe every two hours or
so, trigger a merge replication so that all servers are kept in sync. All
servers must be available at all times if possible.
We have been using SQL 2000 for a long time but at a very low level, so we
will need a lot of guidance in the area of getting replication going and
maintaining it, including conflict resolution. What methods of conflict
resolution do you suggest?
Thanks for all your help.
Edgar
"Edgar" wrote:

> We currently have a web application (using ASP for the front end and SQL 2000
> for the back-end DB). The site is accessed from several locations around the
> world. Response is very slow for people in Europe when the server is located
> in the US. We have now been tasked to deploy servers at each
> location and implement SQL replication (merge or transactional?). I am not sure how
> much of that is done out in the industry, and there is just not a lot of
> documentation, either from MS or elsewhere, about how to do it. We have some
> of the books that show you how to set up the servers, but we are looking for
> more detailed instructions and case studies on what the best practices are.
> Before we get started we would like to learn from people who have actually
> done this and find documentation to help us implement it. I have set up a lab
> with two Windows 2003 servers with SQL on each. Thank you.
> --
> Edgar
> saic_edgar@.yahoo.com
|||Edgar,
Almost always the default has been suitable for me. This
is where the publisher wins against a subscriber, and if 2
subscribers conflict, the first subscriber to synchronize gets the
priority of the publisher. I've used global priority once, where
the subscribers have priorities, but you only need one
person with a high priority who has a big gap before he
synchronizes for this to become a problem. Of the other
ones, I've looked at the 'Subscriber Always Wins Conflict
Resolver' and the custom stored procedure resolver. For the
main ones there's plenty of info in BOL. For the others,
my impression from this newsgroup is that they have been
used only occasionally and in specialized cases.
Regards,
Paul Ibison

Monday, March 26, 2012

Merge Agent System Stored Procedures and sp_Recompile

Hello,

We are trying to be proactive and stop a potential performance issue by reducing the number of recompiles in our SQL 2000 database application. This database is replicated. After viewing output from Profiler and PerfMon it seems that over 90% of the recompiles are due to system stored procedures generated by replication merge agents. Can anything be done about this?

Thanks

Can you be more specific - which procs, and under what conditions?|||The databases are set up to use push merge replication with the polling interval set to every 15 seconds. It is replicating 4 databases to one subscriber. The procs that are involved are the ones generated by the merge agent when you set up replication. Hope this helps!|||Thanks Candice, but merge replication involves many, many procs, including triggers. Can you please be more specific about which ones? And is this occurring at the publisher, or the subscriber?
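One way to pin down which procedures are recompiling, as an alternative to running Profiler interactively, is a server-side trace on the SP:Recompile event (event id 37). A hedged sketch; the file path is illustrative:

```sql
-- Minimal server-side trace capturing SP:Recompile (trace event 37).
DECLARE @traceid int, @maxsize bigint, @on bit
SET @maxsize = 50          -- max trace file size in MB
SET @on = 1
EXEC sp_trace_create @traceid OUTPUT, 0, N'C:\traces\recompiles', @maxsize
EXEC sp_trace_setevent @traceid, 37, 1,  @on   -- column 1 = TextData
EXEC sp_trace_setevent @traceid, 37, 34, @on   -- column 34 = ObjectName
EXEC sp_trace_setstatus @traceid, 1            -- start the trace
```

Stop and close it later with sp_trace_setstatus @traceid, 0 and then @traceid, 2, and open the .trc file in Profiler to see which procs fired the event.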

Friday, March 23, 2012

Merge Agent and ActiveX Parameters for Replication

Hi All,
I currently use the SQL Merge ActiveX control to perform synchronization
from my application. In SQL Enterprise Manager, you can configure the Merge
Agent Parameters (-KeepMessageAliveInterval, -QueryTimeout, etc.), but you
can also do this from the ActiveX control. My question is, which one takes
precedence? If I configure values in the Merge Agent's Parameters, then
configure different values in the ActiveX control, which are used? Does the
server take precedence? I would think the ActiveX control's parameters
would be used, but I just want to make sure.
Regards,
Brad
You can specify which profile you want to use. If you try to right-click on
your agent when it is created by an ActiveX control, you will find that you
are unable to set these properties.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Brad M." <anonymous@.discussions.microsoft.com> wrote in message
news:OEZ57eopFHA.2504@.tk2msftngp13.phx.gbl...
> Hi All,
> I currently use the SQL Merge ActiveX control to perform synchronization
> from my application. In SQL Enterprise Manager, you can configure the Merge
> Agent Parameters (-KeepMessageAliveInterval, -QueryTimeout, etc.), but you
> can also do this from the ActiveX control. My question is, which one takes
> precedence? If I configure values in the Merge Agent's Parameters, then
> configure different values in the ActiveX control, which are used? Does the
> server take precedence? I would think the ActiveX control's parameters
> would be used, but I just want to make sure.
> Regards,
> Brad
>
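The profiles Hilary refers to can be listed on the distributor; agent type 4 is the Merge Agent. A sketch, assuming the standard system procedure:

```sql
-- List the predefined (and any custom) merge agent profiles
-- (run on the distributor).
EXEC sp_help_agent_profile @agent_type = 4
```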

Monday, March 19, 2012

Memory Use

Hi there.
I have an application that connects to a SQL Server using ODBC (and
CRecordSets) to peruse and retrieve data from a number of large database
tables. As a result, SQL Server seems to be constantly using up a very large
amount of memory - even when my application doesn't seem to be doing anything.
Q: Is this behaviour expected? The memory settings on the server are
configured to allow SQL Server to allocate memory dynamically, using the
total physical RAM.
Does SQL Server try to keep copies of database tables in memory if they've
been queried and the memory is available (in case they're queried again)? -
or should it relinquish that memory once the query has returned (implying
that my application must still have some connections/queries open)?
Any help would be appreciated!
> I have an application that connects to a SQL Server using ODBC (and
> CRecordSets) to peruse and retrieve data from a number of large database
> tables. As a result, SQL Server seems to be constantly using up a very
> large
> amount of memory - even when my application doesn't seem to be doing
> anything.

> Q: Is this behaviour expected?
Yes. This behavior is expected, and is in fact, by design, on purpose:
Anything you SELECT is handed back to your application, but then SQL Server
holds that data in a cache, because it thinks that if you SELECT'ed it once,
you might do it again.
There are a couple things I know that you can do to reduce it:
1. Configure the server to use a specified maximum amount of RAM
2. Execute DBCC DROPCLEANBUFFERS to release some? most? of the cache
But be very careful about doing that on a production server. When you're a
developer (like me), you typically have sql server, IIS, outlook, visual
studio, enterprise manager, query analyser, outlook express, windows
explorer, and internet explorer all running at the same time, so this
becomes useful. If you fold back the memory on a production server, you
could very likely cripple the whole network, and wind up in the boss's
office for a meeting.
Guess how I know. <g>
Peace & happy computing,
Mike Labosh, MCSD MCT
Owner, vbSensei.Com
"Escriba coda ergo sum." -- vbSensei|||Great stuff thanks - that's just what I needed!
Actually, one more question... If SQL server is taking up a huge amount of
memory with this cached data, would that affect the performance of other
applications that may start up and need to use some of that memory? Will it
relinquish some of the memory without a struggle?
"Mike Labosh" wrote:

>
> Yes. This behavior is expected, and is in fact, by design, on purpose:
> Anything you SELECT is handed back to your application, but then SQL Server
> holds that data in a cache, because it thinks that if you SELECT'ed it once,
> you might do it again.
> There are a couple things I know that you can do to reduce it:
> 1. Configure the server to use a specified maximum amount of RAM
> 2. Execute DBCC DROPCLEANBUFFERS to release some? most? of the cache
> But be very careful about doing that on a production server. When you're a
> developer (like me), you typically have sql server, IIS, outlook, visual
> studio, enterprise manager, query analyser, outlook express, windows
> explorer, and internet explorer all running at the same time, so this
> becomes useful. If you fold back the memory on a production server, you
> could very likely cripple the whole network, and wind up in the bosses
> office for a meeting.
> Guess how I know. <g>
> --
>
> Peace & happy computing,
> Mike Labosh, MCSD MCT
> Owner, vbSensei.Com
> "Escriba coda ergo sum." -- vbSensei
>
>|||SQL Server is very greedy about the memory it eats. ...Just imagine that
Tiger you saw on the Discovery Channel last week.
If you have other things that need to run on the same server where SQL is
installed, you need to administratively tell the different packages of
software how much memory, threads, etc. they are allowed to consume.
Example:
Server abc has SQL Server 2000 and IIS 4. I can set these up to behave
nicely by setting the specific maximum amount of RAM consumed by SQL, and I
can configure the RAM and Thread Pool of IIS within the IIS MMC console and
also the COM+ Console.
But I warn you again, Be VERY careful about doing that on a production
server. On a dev-box, nobody cares.
My laptop has 1 GB of memory, so I have SQL & IIS throttled WAY BACK. That
way, I have enough memory to develop applications in Visual Studio.NET,
check my mail, read these damn groups :) and still run Linux RedHat on a
virtual PC inside a window.
But nobody (hopefully) does that kind of stuff on a production server. If
you are talking about a production SQL Server, it should be setup to have
Dynamic Memory Allocation and eat all the RAM and CPU that it needs.
Peace & happy computing,
Mike Labosh, MCSD MCT
Owner, vbSensei.Com
"Escriba coda ergo sum." -- vbSensei
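The two mitigations Mike lists earlier in the thread look roughly like this; the 512MB cap is purely illustrative, and as he warns, try it on a dev box before touching production:

```sql
-- 1. Cap the memory SQL Server may use (value in MB).
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory', 512
RECONFIGURE
-- 2. Release clean pages from the buffer cache (avoid on production).
DBCC DROPCLEANBUFFERS
```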

Monday, March 12, 2012

memory usage and configuration...

Hi,
I have a server to set up.
This server is not dedicated solely to SQL Server, but SQL Server is the main
application on the machine (data warehouse system).
I just want to know how to configure my Windows swap file.
I know that SQL Server is optimized not to use the Windows swap file.
I have 6 disks available:
4 * 64GB for my SQL Server databases (RAID 5)
On the 2 other disks I have to install Win 2003 + applications + swap file.
How do I optimize this?
Where should I put the tempdb database?
Currently I plan to use 10GB on 1 disk for the OS + applications
and set up a striped volume across the 2 disks for the swap file and the
tempdb file (2 * 40GB).
Do you think it's a good config?
Thanks for your feedback.
Jerome.
How much memory is in your system in total, and what is
your drive configuration beyond the 6 disk RAID 5 set for
SQL Server? That may not be enough spindles, but I can't
say for sure. If you stick all of your SQL Server files
(system DBs, data/index, logs, tempdb) all on the same
stripe set, you are not buying yourself anything.
So if your goal as I saw is to put everything on the one
R5 stripe, I'd say wholeheartedly no, it is not an optimal
setup at all based on my experience.
However, you need to have done some system profiling to
tell you definitively.|||I've 4GB of RAM.
I want to control the usage of my hard drives to keep performance good;
that's why I want to separate the tempdb database onto 1 disk or 2 striped
disks.
And I'll keep 4 disks in RAID 5 for my data, for safety reasons.
Maybe I can put my indexes on a separate disk.
Config 1:
Disk0: OS + Applications (10Gb only)
Disk0,1: Striped - 22GB + 32GB - tempdb - user DB indexes - swap file
Disk3,4,5,6: Raid 5 for system DB and user DB
Config 2:
Disk0: OS + Swap + Applications (10Gb only)
Disk0: tempdb (22gb)
Disk1: indexes (32gb)
Disk3,4,5,6: Raid 5 for system DB and user DB
each disk is a 32GB disk
what do you think about this?
which config is better?
or what is your recommendation?
Also, what is the swap file size required?
"Allan Hirt" <anonymous@.discussions.microsoft.com> a écrit dans le message
de news:5ab501c474a8$a67500b0$a601280a@.phx.gbl...
> How much memory is in your system in total, and what is
> your drive configuration beyond the 6 disk RAID 5 set for
> SQL Server? That may not be enough spindles, but I can't
> say for sure. If you stick all of your SQL Server files
> (system DBs, data/index, logs, tempdb) all on the same
> stripe set, you are not buying yourself anything.
> So if your goal as I saw is to put everything on the one
> R5 stripe, I'd say wholeheartedly no, it is not an optimal
> setup at all based on my experience.
> However, you need to have done some system profiling to
> tell you definitively.
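For what it's worth, moving tempdb onto the dedicated disk discussed above is done with ALTER DATABASE; the logical file names below are the SQL Server 2000 defaults and the paths are illustrative. The move only takes effect after the service restarts.

```sql
-- Point tempdb's data and log files at the striped volume (restart required).
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'S:\tempdb\tempdb.mdf')
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'S:\tempdb\templog.ldf')
```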

Friday, March 9, 2012

Memory required for SQL Server 2000

We have been running an application on SQL Server 2000 on a Windows 2000
Server with 2GB of memory (as a virtual machine on VMware ESX Server).
From the statistics, we found that the memory utilization of that Windows
2000 Server was around 200MB, so we attempted to reduce
the RAM from 2GB to 1GB. However, we then found that the memory utilization was as
high as around 1GB, so we changed the RAM back from 1GB to 2GB.
Is the rise in memory required only for the first 1 to 2
hours, or not? Is there any reason for SQL Server 2000 to grab more
memory when we start it up? Will the memory stabilize at around 200MB
again?
Your advice is sought.|||In most situations, SQL Server's memory allocation will be lowest when
you first start the service. SQL Server will cache each data page as it
is read from disk until a minimum free memory threshold is reached (from
memory the threshold is 10MB by default). SQL Server then removes least
recently used pages from its cache, and releases memory, as necessary in
order to maintain the free memory threshold. This means that the longer
the service runs the more memory it will use (up to a limit). This is
Microsoft's dynamic memory management model.
One exception to this "increasing memory use" model is when you
configure SQL Server to grab its entire memory allocation on service
start-up, in which case all the memory, up to the configured maximum
limit, is reserved up front. However, there are no situations (that I
can think of) where SQL Server initially uses more memory and then
releases a substantial portion of it to "stabilise" at a certain level
(unless you start messing around with DBCC to flush the data buffer &
procedure cache).
--
*mike hodgson*
http://sqlnerd.blogspot.com
Robert wrote:
>We have been running an application on SQL Server 2000 on a Windows 2000
>Server with 2GB of memory (As a Virtual Machine on VMWare ESX Server).
>From the statistics, we find that the memory utilization of that Windows
>2000 Server is around 200MB of memory. In this way, we attempt to change
>the RAM from 2GB to 1GB. However, we find that the memory utilization is as
>high as around 1GB. In this way, we change the RAM from 2GB to 1GB.
>We just thinking the rise of memory required is only for the first 1 to 2
>hours OR not ? If there any reason for a SQL Server 2000 to grab more
>memory when we start it up ? Will the memory be stabilized to become 200MB
>again ?
>
>Your advice is sought.
>
>
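To watch the "increasing memory use" behaviour Mike describes, the perfmon counters are also exposed inside the server via master.dbo.sysperfinfo in SQL 2000. A sketch:

```sql
-- Current memory the buffer manager has actually acquired.
SELECT cntr_value AS total_server_memory_kb
FROM master.dbo.sysperfinfo
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name = 'Total Server Memory (KB)'
```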
|||Dear Mike,
Thank you for your advice.
If the physical RAM is 2GB and I change the maximum RAM setting for the SQL Server 2000 to 1.5GB, does it mean that SQL Server 2000 will use at most 1.5GB of RAM?
Thanks
"Mike Hodgson" <e1minst3r@.gmail.com> wrote in message news:OYx8BvknGHA.3340@.TK2MSFTNGP02.phx.gbl...
In most situations, SQL Server's memory allocation will be lowest when you first start the service. SQL Server will cache each data page as it is read from disk until a minimum free memory threshold is reached (from memory the threshold is 10MB by default). SQL Server then removes least recently used pages from its cache, and releases memory, as necessary in order to maintain the free memory threshold. This means that the longer the service runs the more memory it will use (up to a limit). This is Microsoft's dynamic memory management model.
One exception to this "increasing memory use" model is when you configure SQL Server to grab its entire memory allocation on service start-up, in which case all the memory, up to the configured maximum limit, is reserved up front. However, there are no situations (that I can think of) where SQL Server initially uses more memory and then releases a substantial portion of it to "stabilise" at a certain level (unless you start messing around with DBCC to flush the data buffer & procedure cache).
--
mike hodgson
http://sqlnerd.blogspot.com
Robert wrote: We have been running an application on SQL Server 2000 on a Windows 2000
Server with 2GB of memory (As a Virtual Machine on VMWare ESX Server).
From the statistics, we find that the memory utilization of that Windows
2000 Server is around 200MB of memory. In this way, we attempt to change the RAM from 2GB to 1GB. However, we find that the memory utilization is as high as around 1GB. In this way, we change the RAM from 2GB to 1GB.
We just thinking the rise of memory required is only for the first 1 to 2 hours OR not ? If there any reason for a SQL Server 2000 to grab more memory when we start it up ? Will the memory be stabilized to become 200MB again ?
Your advice is sought.
|||It means that SQL Server will be allowed, at most, a 1.5GB address
space. It does not necessarily mean this will be backed by physical
memory. If there is not enough physical memory to support SQL Server's
requests, in addition to the other applications requesting memory on the
box (including the OS itself), then part of that memory space will be
backed by virtual memory, which, needless to say, is not a
high-performance situation. The exception to this, which does not apply
to you because you are playing with less than 4GB of physical memory, is
AWE memory, which is always satisfied from physical memory (it is never
swapped to disk).
--
*mike hodgson*
http://sqlnerd.blogspot.com
Robert wrote:
> Dear Mike,
> Thank you for your advice.
> If the physical RAM is 2GB and I change the maximum RAM setting for
> the SQL Server 2000 to 1.5GB. Does it mean that the SQL Server 2000
> will use most up to 1.5GB RAM ?
> Thanks
>
> "Mike Hodgson" <e1minst3r@.gmail.com <mailto:e1minst3r@.gmail.com>>
> wrote in message news:OYx8BvknGHA.3340@.TK2MSFTNGP02.phx.gbl...
> In most situations, SQL Server's memory allocation will be lowest
> when you first start the service. SQL Server will cache each data
> page as it is read from disk until a minimum free memory threshold
> is reached (from memory the threshold is 10MB by default). SQL
> Server then removes least recently used pages from its cache, and
> releases memory, as necessary in order to maintain the free memory
> threshold. This means that the longer the service runs the more
> memory it will use (up to a limit). This is Microsoft's dynamic
> memory management model.
> One exception to this "increasing memory use" model is when you
> configure SQL Server to grab its entire memory allocation on
> service start-up, in which case all the memory, up to the
> configured maximum limit, is reserved up front. However, there
> are no situations (that I can think of) where SQL Server initially
> uses more memory and then releases a substantial portion of it to
> "stabilise" at a certain level (unless you start messing around
> with DBCC to flush the data buffer & procedure cache).
> --
> *mike hodgson*
> http://sqlnerd.blogspot.com
>
> Robert wrote:
>>We have been running an application on SQL Server 2000 on a Windows 2000
>>Server with 2GB of memory (As a Virtual Machine on VMWare ESX Server).
>>From the statistics, we find that the memory utilization of that Windows
>>2000 Server is around 200MB of memory. In this way, we attempt to change
>>the RAM from 2GB to 1GB. However, we find that the memory utilization is as
>>high as around 1GB. In this way, we change the RAM from 2GB to 1GB.
>>We just thinking the rise of memory required is only for the first 1 to 2
>>hours OR not ? If there any reason for a SQL Server 2000 to grab more
>>memory when we start it up ? Will the memory be stabilized to become 200MB
>>again ?
>>
>>Your advice is sought.
>>
>>
We just thinking the rise of memory required is only for the first 1 to 2
hours OR not ? If there any reason for a SQL Server 2000 to grab more
memory when we start it up ? Will the memory be stabilized to become 200MB
again ?
Your advice is sought.
</pre>
</blockquote>
</blockquote>
</blockquote>
</body>
</html>
--060007000708090601070304--|||This is a multi-part message in MIME format.
--070103030000040705020106
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
For a little more background info about it, see the blog entry I just wrote:
http://sqlnerd.blogspot.com/2006/07/memory-use-in-sql-server.html
Plus there's a stack of info about it in BOL and several quality
publications including The Guru's Guide to SQL Server Architecture and
Internals
<http://www.amazon.com/gp/product/0201700476/sr=8-2/qid=1152005971/ref=sr_1_2/104-4550407-8498336?ie=UTF8>
by Ken Henderson and Inside SQL Server 2000
<http://www.amazon.com/gp/product/0735609985/sr=8-2/qid=1152006007/ref=pd_bbs_2/104-4550407-8498336?ie=UTF8>
by Kalen Delaney.
--
*mike hodgson*
http://sqlnerd.blogspot.com
Mike Hodgson wrote:
> It means that SQL Server will be allowed, at most, a 1.5GB address
> space. It does not necessarily mean this will be backed by physical
> memory. If there is not enough physical memory to support SQL
> Server's requests, in addition to the other applications requesting
> memory on the box (including the OS itself), then part of that memory
> space will be backed by virtual memory, which, needless-to-say, is not
> a high-performance situation. The exception to this, which does not
> apply to you because you are playing with less than 4GB of physical
> memory, is AWE memory, which is always satisfied from physical memory
> (it is never swapped to disk).
> --
> *mike hodgson*
> http://sqlnerd.blogspot.com
>
> Robert wrote:
>> Dear Mike,
>> Thank you for your advice.
>> If the physical RAM is 2GB and I change the maximum RAM setting for
>> the SQL Server 2000 to 1.5GB. Does it mean that the SQL Server 2000
>> will use most up to 1.5GB RAM ?
>> Thanks
>>
>> "Mike Hodgson" <e1minst3r@.gmail.com <mailto:e1minst3r@.gmail.com>>
>> wrote in message news:OYx8BvknGHA.3340@.TK2MSFTNGP02.phx.gbl...
>> In most situations, SQL Server's memory allocation will be lowest
>> when you first start the service. SQL Server will cache each
>> data page as it is read from disk until a minimum free memory
>> threshold is reached (from memory the threshold is 10MB by
>> default). SQL Server then removes least recently used pages from
>> its cache, and releases memory, as necessary in order to maintain
>> the free memory threshold. This means that the longer the
>> service runs the more memory it will use (up to a limit). This
>> is Microsoft's dynamic memory management model.
>> One exception to this "increasing memory use" model is when you
>> configure SQL Server to grab its entire memory allocation on
>> service start-up, in which case all the memory, up to the
>> configured maximum limit, is reserved up front. However, there
>> are no situations (that I can think of) where SQL Server
>> initially uses more memory and then releases a substantial
>> portion of it to "stabilise" at a certain level (unless you start
>> messing around with DBCC to flush the data buffer & procedure cache).
>> --
>> *mike hodgson*
>> http://sqlnerd.blogspot.com
>>
>> Robert wrote:
>>We have been running an application on SQL Server 2000 on a Windows 2000
>>Server with 2GB of memory (As a Virtual Machine on VMWare ESX Server).
>>From the statistics, we find that the memory utilization of that Windows
>>2000 Server is around 200MB of memory. In this way, we attempt to change
>>the RAM from 2GB to 1GB. However, we find that the memory utilization is as
>>high as around 1GB. In this way, we change the RAM from 2GB to 1GB.
>>We just thinking the rise of memory required is only for the first 1 to 2
>>hours OR not ? If there any reason for a SQL Server 2000 to grab more
>>memory when we start it up ? Will the memory be stabilized to become 200MB
>>again ?
>>
>>Your advice is sought.
>>
>>
--070103030000040705020106
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
<title></title>
</head>
<body bgcolor="#ffffff" text="#000000">
<tt>For a little more background info about it, see the blog entry I
just wrote:<br>
<a class="moz-txt-link-freetext" href="http://links.10026.com/?link=http://sqlnerd.blogspot.com/2006/07/memory-use-in-sql-server.html</a><br>">http://sqlnerd.blogspot.com/2006/07/memory-use-in-sql-server.html">http://sqlnerd.blogspot.com/2006/07/memory-use-in-sql-server.html</a><br>
<br>
Plus there's a stack of info about it in BOL and several quality
publications including <a
href="http://links.10026.com/?link=The">http://www.amazon.com/gp/product/0201700476/sr=8-2/qid=1152005971/ref=sr_1_2/104-4550407-8498336?ie=UTF8">The
Guru's Guide to SQL Server Architecture and Internals</a> by Ken
Henderson and <a
href="http://links.10026.com/?link=Inside">http://www.amazon.com/gp/product/0735609985/sr=8-2/qid=1152006007/ref=pd_bbs_2/104-4550407-8498336?ie=UTF8">Inside
SQL Server 2000</a> by Kalen Delaney.<br>
</tt>
<div class="moz-signature">
<title></title>
<meta http-equiv="Content-Type" content="text/html; ">
<p><span lang="en-au"><font face="Tahoma" size="2">--<br>
</font></span> <b><span lang="en-au"><font face="Tahoma" size="2">mike
hodgson</font></span></b><span lang="en-au"><br>
<font face="Tahoma" size="2"><a href="http://links.10026.com/?link=http://sqlnerd.blogspot.com</a></font></span>">http://sqlnerd.blogspot.com">http://sqlnerd.blogspot.com</a></font></span>
</p>
</div>
<br>
<br>
Mike Hodgson wrote:
<blockquote cite="midef076WpnGHA.3340@.TK2MSFTNGP02.phx.gbl" type="cite">
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
<title></title>
<tt>It means that SQL Server will be allowed, at most, a 1.5GB
address
space. It does not necessarily mean this will be backed by physical
memory. If there is not enough physical memory to support SQL Server's
requests, in addition to the other applications requesting memory on
the box (including the OS itself), then part of that memory space will
be backed by virtual memory, which, needless-to-say, is not a
high-performance situation. The exception to this, which does not
apply to you because you are playing with less than 4GB of physical
memory, is AWE memory, which is always satisfied from physical memory
(it is never swapped to disk).</tt><br>
<div class="moz-signature">
<title></title>
<meta http-equiv="Content-Type" content="text/html; ">
<p><span lang="en-au"><font face="Tahoma" size="2">--<br>
</font></span> <b><span lang="en-au"><font face="Tahoma" size="2">mike
hodgson</font></span></b><span lang="en-au"><br>
<font face="Tahoma" size="2"><a href="http://links.10026.com/?link=http://sqlnerd.blogspot.com</a></font></span>">http://sqlnerd.blogspot.com">http://sqlnerd.blogspot.com</a></font></span>
</p>
</div>
<br>
<br>
Robert wrote:
<blockquote cite="miducrwUrnnGHA.4728@.TK2MSFTNGP03.phx.gbl"
type="cite">
<title></title>
<meta http-equiv="Content-Type" content="text/html;">
<meta content="MSHTML 6.00.2900.2912" name="GENERATOR">
<style></style>
<div><font face="Arial" size="2">Dear Mike,</font></div>
<div> </div>
<div><font face="Arial" size="2">Thank you for your advice.</font></div>
<div> </div>
<div><font face="Arial" size="2">If the physical RAM is 2GB and I
change the maximum RAM setting for the SQL Server 2000 to 1.5GB. Does
it mean that the SQL Server 2000 will use most up to 1.5GB RAM ?</font></div>
<div> </div>
<div><font face="Arial" size="2">Thanks</font></div>
<div> </div>
<blockquote
style="border-left: 2px solid rgb(0, 0, 0); padding-right: 0px; padding-left: 5px; margin-left: 5px; margin-right: 0px;"
dir="ltr">
<div>"Mike Hodgson" <<a href="http://links.10026.com/?link=mailto:e1minst3r@.gmail.com">e1minst3r@.gmail.com</a>>
wrote in message <a href="http://links.10026.com/?link=news:OYx8BvknGHA.3340@.TK2MSFTNGP02.phx.gbl">news:OYx8BvknGHA.3340@.TK2MSFTNGP02.phx.gbl</a>...</div>
<tt>In most situations, SQL Server's memory allocation will be
lowest when you first start the service. SQL Server will cache each
data page as it is read from disk until a minimum free memory threshold
is reached (from memory the threshold is 10MB by default). </tt><tt>SQL
Server then removes least recently used pages from its cache, and
releases memory, as necessary in order to maintain the free memory
threshold. </tt><tt>This means that the longer the service runs the
more memory it will use (up to a limit). This is Microsoft's dynamic
memory management model.<br>
<br>
One exception to this "increasing memory use" model is when you
configure SQL Server to grab its entire memory allocation on service
start-up, in which case all the memory, up to the configured maximum
limit, is reserved up front. However, there are no situations (that I
can think of) where SQL Server initially uses more memory and then
releases a substantial portion of it to "stabilise" at a certain level
(unless you start messing around with DBCC to flush the data buffer
& procedure cache).</tt><br>
<div class="moz-signature">
<p><span lang="en-au"><font face="Tahoma" size="2">--<br>
</font></span><b><span lang="en-au"><font face="Tahoma" size="2">mike
hodgson</font></span></b><span lang="en-au"><br>
<font face="Tahoma" size="2"><a href="http://links.10026.com/?link=http://sqlnerd.blogspot.com</a></font></span>">http://sqlnerd.blogspot.com">http://sqlnerd.blogspot.com</a></font></span>
</p>
</div>
<br>
<br>
Robert wrote:
<blockquote cite="midOeJwQEbnGHA.4868@.TK2MSFTNGP02.phx.gbl"
type="cite">
<pre wrap="">We have been running an application on SQL Server 2000 on a Windows 2000
Server with 2GB of memory (As a Virtual Machine on VMWare ESX Server).
From the statistics, we find that the memory utilization of that Windows
2000 Server is around 200MB of memory. In this way, we attempt to change
the RAM from 2GB to 1GB. However, we find that the memory utilization is as
high as around 1GB. In this way, we change the RAM from 2GB to 1GB.
We just thinking the rise of memory required is only for the first 1 to 2
hours OR not ? If there any reason for a SQL Server 2000 to grab more
memory when we start it up ? Will the memory be stabilized to become 200MB
again ?
Your advice is sought.
</pre>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</body>
</html>
--070103030000040705020106--

Memory required for SQL Server 2000

We have been running an application on SQL Server 2000 on a Windows 2000
Server with 2GB of memory (as a virtual machine on VMware ESX Server).
From the statistics, we find that the memory utilization of that Windows
2000 Server is around 200MB. Because of this, we attempted to change
the RAM from 2GB to 1GB. However, after the change we find that the memory
utilization is as high as around 1GB.
Is this rise in required memory only for the first 1 to 2 hours, or not?
Is there any reason for SQL Server 2000 to grab more memory when we start
it up? Will the memory stabilize back to around 200MB again?
Your advice is sought.|||In most situations, SQL Server's memory allocation will be lowest when
you first start the service. SQL Server will cache each data page as it
is read from disk until a minimum free memory threshold is reached (from
memory the threshold is 10MB by default). SQL Server then removes least
recently used pages from its cache, and releases memory, as necessary in
order to maintain the free memory threshold. This means that the longer
the service runs the more memory it will use (up to a limit). This is
Microsoft's dynamic memory management model.
One exception to this "increasing memory use" model is when you
configure SQL Server to grab its entire memory allocation on service
start-up, in which case all the memory, up to the configured maximum
limit, is reserved up front. However, there are no situations (that I
can think of) where SQL Server initially uses more memory and then
releases a substantial portion of it to "stabilise" at a certain level
(unless you start messing around with DBCC to flush the data buffer &
procedure cache).
*mike hodgson*
http://sqlnerd.blogspot.com
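The dynamic growth described above can be watched against the server's own counters. The following is a rough diagnostic sketch, not from the original thread, assuming a SQL Server 2000 instance (where performance counters are exposed through the master.dbo.sysperfinfo view):

```sql
-- 'Total Server Memory (KB)' is what SQL Server currently holds;
-- 'Target Server Memory (KB)' is what the dynamic memory model is
-- aiming for. Total creeping up toward Target as pages are cached
-- is the expected behaviour described above, not a leak.
SELECT counter_name, cntr_value AS kb
FROM master.dbo.sysperfinfo
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name LIKE '%Server Memory%'
```

On SQL Server 2005 and later, the equivalent counters are exposed through sys.dm_os_performance_counters instead.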
Robert wrote:

>We have been running an application on SQL Server 2000 on a Windows 2000
>Server with 2GB of memory (As a Virtual Machine on VMWare ESX Server).
>From the statistics, we find that the memory utilization of that Windows
>2000 Server is around 200MB of memory. In this way, we attempt to change
>the RAM from 2GB to 1GB. However, we find that the memory utilization is as
>high as around 1GB. In this way, we change the RAM from 2GB to 1GB.
>We just thinking the rise of memory required is only for the first 1 to 2
>hours OR not ? If there any reason for a SQL Server 2000 to grab more
>memory when we start it up ? Will the memory be stabilized to become 200MB
>again ?
>
>Your advice is sought.
>
>|||Dear Mike,
Thank you for your advice.
If the physical RAM is 2GB and I change the maximum RAM setting for the SQL
Server 2000 to 1.5GB, does it mean that SQL Server 2000 will use at most
1.5GB of RAM?
Thanks
"Mike Hodgson" <e1minst3r@.gmail.com> wrote in message
news:OYx8BvknGHA.3340@.TK2MSFTNGP02.phx.gbl...
In most situations, SQL Server's memory allocation will be lowest when you
first start the service. SQL Server will cache each data page as it is read
from disk until a minimum free memory threshold is reached (from memory the
threshold is 10MB by default). SQL Server then removes least recently used
pages from its cache, and releases memory, as necessary in order to maintain
the free memory threshold. This means that the longer the service runs the
more memory it will use (up to a limit). This is Microsoft's dynamic memory
management model.
One exception to this "increasing memory use" model is when you configure SQL
Server to grab its entire memory allocation on service start-up, in which
case all the memory, up to the configured maximum limit, is reserved up
front. However, there are no situations (that I can think of) where SQL Server
initially uses more memory and then releases a substantial portion of it to
"stabilise" at a certain level (unless you start messing around with DBCC to
flush the data buffer & procedure cache).
mike hodgson
http://sqlnerd.blogspot.com
Robert wrote:
We have been running an application on SQL Server 2000 on a Windows 2000
Server with 2GB of memory (As a Virtual Machine on VMWare ESX Server).
From the statistics, we find that the memory utilization of that Windows
2000 Server is around 200MB of memory. In this way, we attempt to change
the RAM from 2GB to 1GB. However, we find that the memory utilization is as
high as around 1GB. In this way, we change the RAM from 2GB to 1GB.
We just thinking the rise of memory required is only for the first 1 to 2
hours OR not ? If there any reason for a SQL Server 2000 to grab more
memory when we start it up ? Will the memory be stabilized to become 200MB
again ?
Your advice is sought.|||It means that SQL Server will be allowed, at most, a 1.5GB address
space. It does not necessarily mean this will be backed by physical
memory. If there is not enough physical memory to support SQL Server's
requests, in addition to the other applications requesting memory on the
box (including the OS itself), then part of that memory space will be
backed by virtual memory, which, needless to say, is not a
high-performance situation. The exception to this, which does not apply
to you because you are playing with less than 4GB of physical memory, is
AWE memory, which is always satisfied from physical memory (it is never
swapped to disk).
*mike hodgson*
http://sqlnerd.blogspot.com
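The 1.5GB cap discussed above maps onto the "max server memory" configuration option. A minimal sketch (assuming sysadmin rights on a SQL Server 2000 instance; the 1536 MB figure is just the value from this thread):

```sql
-- Cap SQL Server's buffer pool at 1.5GB (1536 MB). Per the answer
-- above, this limits the address space SQL Server may request; it
-- does not guarantee the memory will be backed by physical RAM.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory (MB)', 1536
RECONFIGURE
```

Setting the option back to its default (2147483647) restores fully dynamic allocation.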
Robert wrote:
> Dear Mike,
> Thank you for your advice.
> If the physical RAM is 2GB and I change the maximum RAM setting for
> the SQL Server 2000 to 1.5GB. Does it mean that the SQL Server 2000
> will use most up to 1.5GB RAM ?
> Thanks
>
> "Mike Hodgson" <e1minst3r@.gmail.com <mailto:e1minst3r@.gmail.com>>
> wrote in message news:OYx8BvknGHA.3340@.TK2MSFTNGP02.phx.gbl...
> In most situations, SQL Server's memory allocation will be lowest
> when you first start the service. SQL Server will cache each data
> page as it is read from disk until a minimum free memory threshold
> is reached (from memory the threshold is 10MB by default). SQL
> Server then removes least recently used pages from its cache, and
> releases memory, as necessary in order to maintain the free memory
> threshold. This means that the longer the service runs the more
> memory it will use (up to a limit). This is Microsoft's dynamic
> memory management model.
> One exception to this "increasing memory use" model is when you
> configure SQL Server to grab its entire memory allocation on
> service start-up, in which case all the memory, up to the
> configured maximum limit, is reserved up front. However, there
> are no situations (that I can think of) where SQL Server initially
> uses more memory and then releases a substantial portion of it to
> "stabilise" at a certain level (unless you start messing around
> with DBCC to flush the data buffer & procedure cache).
> --
> *mike hodgson*
> http://sqlnerd.blogspot.com
>
> Robert wrote:
>|||For a little more background info about it, see the blog entry I just wrote:
http://sqlnerd.blogspot.com/2006/07...sql-server.html
Plus there's a stack of info about it in BOL and several quality
publications including The Guru's Guide to SQL Server Architecture and
Internals
<http://www.amazon.com/gp/product/02...8498336?ie=UTF8>
by Ken Henderson and Inside SQL Server 2000
<http://www.amazon.com/gp/product/07...8498336?ie=UTF8>
by Kalen Delaney.
*mike hodgson*
http://sqlnerd.blogspot.com
Mike Hodgson wrote:
> It means that SQL Server will be allowed, at most, a 1.5GB address
> space. It does not necessarily mean this will be backed by physical
> memory. If there is not enough physical memory to support SQL
> Server's requests, in addition to the other applications requesting
> memory on the box (including the OS itself), then part of that memory
> space will be backed by virtual memory, which, needless-to-say, is not
> a high-performance situation. The exception to this, which does not
> apply to you because you are playing with less than 4GB of physical
> memory, is AWE memory, which is always satisfied from physical memory
> (it is never swapped to disk).
> --
> *mike hodgson*
> http://sqlnerd.blogspot.com
>
> Robert wrote:
>

Memory release

I am using SQL Server 2000 SE. When I use my application, the SQL Server memory keeps increasing and never comes down. The application runs very slowly. Can anybody suggest how to release memory from SQL Server?

Thanks in advance.
SQL Server will release memory when something else on the machine requires the memory. If it is allocating more memory, then the SQL Server needs that memory to process requests. Just because the application is "running slowly" doesn't mean it is caused by SQL Server memory utilization.|||

Hi Michael, thank you very much.

I'm a new SQL programmer. I want to ask more.

Suppose some users use my application concurrently. They run different reports, and these reports use almost all the server memory. I saw this on the server. After that, these users close the reports. I watch the memory on the server; it isn't released. If a new report runs, will SQL Server/OS automatically reallocate memory for the new report task? SQL Server/OS works this way, doesn't it?

Thanks.
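The caching behaviour described in this thread, memory held and reused for later reports rather than released, can be checked from the server itself. A rough diagnostic sketch, assuming SQL Server 2000 (exact counter names may vary slightly between builds):

```sql
-- A hit ratio near 100% means the reports are being served from the
-- memory SQL Server is holding, rather than from disk. The actual
-- percentage is the 'Buffer cache hit ratio' value divided by its
-- 'base' counter.
SELECT counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name LIKE 'Buffer cache hit ratio%'

-- For a detailed (and verbose) breakdown of where the memory went:
DBCC MEMORYSTATUS
```

If the hit ratio stays high, the "unreleased" memory is doing useful work and is not the cause of the slowdown.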

Memory question for the gurus

Hi Guys, first time posting here. My boss has a Lotus Notes application accessing SQL server with about 60 users. I have a custom VB app with 40+ users. Our SQL Server has
2 gigabytes of memory on it. Is this too low? My boss is expecting to have 300+ users on his Notes app when he rolls it out to our other branches. What would be the ideal amount of memory for 300+ users?|||Depends: will the 300+ users access the data simultaneously? 24/7? How much data will they transfer? How big is the database, and how is it used by the application? Are there any agreements with users concerning availability/performance? What's acceptable to them?|||My boss has a Lotus Notes application accessing SQL server with about 60 users.

Nooooooooooooooooooooooooooooooooooooooo

AHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH

In the name of GOD WHY?

Got a lot of left over Notus Lotes developers who haven't made the jump?

Notestrix? Notespump? How are they talking to SQL Server?

OH, the volume of users and the amount of memory isn't the issue.

How big is the database?|||Hi Guys,

I'm not sure if you guys are referring to the size of the database file. Anyway, the database that Notes is accessing is 13 gigabytes as of today and the database my VB app is using is 200 MB. I just ran Performance Monitor on the server and it says 90 MB of memory available and over 440 page faults per second. It looks like SQL is taking up over 1.6 gig of memory. Yikes!|||You can perform an assessment using PERFMON while SQL Server is in a working state, which gives you full information.|||Lotus Notes and a 13gig database...hmmmm

Is he calling stored procedures or is everything in the application layer?

I'd be curious as to how it performs...

Oh, and SQL will grab as much memory as it needs...that's a good thing...

This is a dedicated SQL Server box...right?|||Yeah, it is a dedicated server box. We are using Lotus Notes through Citrix and I believe we have about 4 servers dedicated to Lotus Notes. His application is running fine so far but I'm concerned about the available memory on our SQL Server (especially since it started crashing and rebooting about once a week). The reason the SQL database Notes is accessing is fairly big is because of document archiving. I have a custom app that produces customer statements and invoices in PostScript format that we send to our customers. These documents then get archived each night in SQL Server. The Notes app allows our customer reps to quickly find and view these documents. We send thousands and thousands of documents each month, so this database is going to grow quickly. What do you guys think? Throw more memory in there? I'm not a SQL DBA, so I have no idea if 2 gig of memory is enough to handle 100+ users and, like I said, it will be 300+ soon.|||There is no such thing as too much memory for MS-SQL! You can safely get that notion out of your head ;)

A lot depends on the architecture, how the Notes users are accessing the database, how your VB app works, etc.

Being the wild man that I am, I usually start my MS-SQL boxes at 8 Gb, then let somebody try to talk me down. I almost always manage to convince them there is no point in saving a few hundred dollars on RAM that would save them at least 10 hours of overtime each month.

-PatP|||Thanks Pat. I've convinced my boss to order more memory for the server. I'll see if I can get 8 gigs like you mentioned :D|||See...the thing of it is, is that Pat didn't ask you what version you're running or what the OS is...

This is kind of important|||See...the thing of it is, is that Pat didn't ask you what version you're running or what the OS is...

This is kind of important
Is that because Pat has a business on the side building bargain basement desktops?
;)|||See...the thing of it is, is that Pat didn't ask you what version you're running or what the OS is...

This is kind of important|||And not only that, it also depends on whether the box will take 8GB or not. It's nice to sit there and say: "Yeah, memory is cheap!" Sure, what about a box itself? Maybe you can afford only the one expandable to 6GB? Where are you gonna put the other 2? In your ear?|||Maybe you can afford only the one expandable to 6GB? Where are you gonna put the other 2? In your ear?

Dude! That is so funny! "In his ear"?! .. I almost made a mess with the cup of coffee on my desk.|||See...the thing of it is, is that Pat didn't ask you what version you're running or what the OS is...

This is kind of important|||Good point... Sometimes I miss details like that.

If jmondia is running NT 3.51 or earlier, then 8 Gb is a problem, since the OS has problems addressing that much memory. The same is true if they are running SQL 6.5 or earlier, although there used to be work-arounds for those problems from Microsoft Professional Support Services.

I assumed that anyone planning to run 300+ simultaneous users would be running on server grade hardware (which by my definition has to support at least 8 Gb of RAM), with at least Windows 2000 and SQL 7. I shouldn't have taken those things for granted. Based on jmondia's response, it looks like I was safe making those assumptions though.

-PatP|||Man, I wish you were around when we were running our HMO on a 4-way with 4GB maxed out with 6.5 and NT (4.0 though, 3.51 wouldn't have taken it) PSS participated in setting up this server, all the specs were met...where did you get this idea that PSS would come up with a workaround for 6.5 to recognize even 2/3 of 8GB of RAM? Man, my veins (as Lindman once noticed) are about to pop even imagining this! We would have been all set with 8GB! Dreaming again?|||Dreaming again?A good TAM helps a lot.

-PatP

Wednesday, March 7, 2012

Memory problems using Sql Server 2000

I have an application which is inserting/updating upwards of hundreds of
thousands of records into a SqlServer table. I am using ADO, make my ADO
connection, then use ADO commands to insert/update each record as
appropriate. At the end of the insert/update I release the connection. I
then proceed to repeat the processing using a different table of the
same database.
The memory usage of Sql Server 2000 running under Win2k SP4 shows
approximately 675 MB after inserting a few hundred thousand records,
while the memory of the application stays very reasonable. This seems
excessive even under today's systems and my 1 GB of real memory. Why is
the Sql Server 2000 memory consumption building up so greatly ? I can
expect a certain hit as a connection to a SqlServer database is
established, and as each command is issued I can expect a certain hit,
but I would think that memory is being returned after each command is
executed, and after the connection is closed. Yet the memory keeps
building to a great amount. Even after my application is finished, and
exits, and all connections are closed, Task Manager shows the 675 MB of
memory still being used by Sql Server 2000. Any ideas on keeping this
memory to a reasonable amount, or why such a huge amount of memory is
being used by Sql Server 2000, would be appreciated.|||SQL Server consumes memory as required, but doesn't give it back unless the
OS or other applications need it. The assumption is, I suppose, that if you
need the memory now, you'll probably need it again in five minutes, so why
go through the hassle of releasing it just to grab it again, if no other
apps are demanding it.
Are you seeing performance issues? Do you have other apps on the box that
need the memory? Or are you just watching task manager? Typically, in this
kind of scenario, you wouldn't have other apps or services on the box that
require memory at levels that compete with SQL Server.
As for "keeping this memory to a reasonable amount" yes it is possible in
the server properties to limit the upper bound or to even fix the amount of
memory used by SQL Server. Of course, this will affect the performance of
the server when it actually would have used more than what *you* consider to
be a "reasonable amount" so I don't recommend this unless you absolutely
need to reserve the memory for other processes.
A
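"Fixing" the amount, as mentioned in the reply above, means giving the lower and upper bounds the same value so the server neither grows nor shrinks. A minimal sketch, assuming sysadmin rights; 512 MB is an arbitrary illustrative figure, and, as the reply notes, doing this is rarely advisable:

```sql
-- Fix SQL Server at a constant 512 MB by making min = max.
-- Both settings are advanced options on SQL Server 2000.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'min server memory (MB)', 512
RECONFIGURE
EXEC sp_configure 'max server memory (MB)', 512
RECONFIGURE
```

With min below max, allocation stays dynamic within that band; only equal values pin it.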
"Edward Diener" <eddielee_no_spam_here@.tropicsoft.com> wrote in message
news:ueU2WCqEGHA.1028@.TK2MSFTNGP11.phx.gbl...
>I have an application which is inserting/updating upwards of hundreds of
>thousands of records into a SqlServer table. I am using ADO, make my ADO
>connection, then use ADO commands to insert/update each record as
>appropriate. At the end of the insert/update I release the connection. I
>then proceed to repeat the processing using a different table of the same
>database.
> The memory usage of Sql Server 2000 running under Win2k SP4 shows
> approximately 675 MB after inserting a few hundred thousand records, while
> the memory of the application stays very reasonable. This seems excessive
> even under today's systems and my 1 GB of real memory. Why is the Sql
> Server 2000 memory consumption building up so greatly ? I can expect a
> certain hit as a connection to a SqlServer database is established, and as
> each command is issued I can expect a certain hit, but I would think that
> memory is being returned after each command is executed, and after the
> connection is closed. Yet the memory keeps building to a great amount.
> Even after my application is finished, and exits, and all connections are
> closed, Task Manager shows the 675 MB of memory still being used by Sql
> Server 2000. Any ideas on keeping this memory to a reasonable amount, or
> why such a huge amount of memory is being used by Sql Server 2000, would
> be appreciated.|||By default SQL memory allocation is set to dynamic. This means SQL Server
will take as much memory as necessary. It's more efficient for SQL to read
data from memory than disk.
If other applications request more memory SQL will relinquish what it is
holding.
You can change this dynamic memory setting if you wish and "cap" the amount
of memory SQL consumes.
HTH. Ryan
"Edward Diener" <eddielee_no_spam_here@.tropicsoft.com> wrote in message
news:ueU2WCqEGHA.1028@.TK2MSFTNGP11.phx.gbl...|||Aaron Bertrand [SQL Server MVP] wrote:
> SQL Server consumes memory as required, but doesn't give it back unless the
> os or other applications need it.
That is an odd use of memory. Most modules return dynamic memory after
it is no longer being used. This makes it much easier for the Windows
memory management to find needed memory for other modules.

> The assumption is, I suppose, that if you
> need the memory now, you'll probably need it again in five minutes, so why
> go through the hassle of releasing it just to grab it again, if no other
> apps are demanding it.
When it builds up to 675 MB it definitely is slowing anything else that
is running, including the module(s) which are using Sql Server for
database access.

> Are you seeing performance issues? Do you have other apps on the box that
> need the memory? Or are you just watching task manager?
See comment above.

> Typically, in this
> kind of scenario, you wouldn't have other apps or services on the box that
> require memory at levels that compete with SQL Server.
No, there are no other modules which consume 675 MB of memory.

> As for "keeping this memory to a reasonable amount" yes it is possible in
> the server properties to limit the upper bound or to even fix the amount of
> memory used by SQL Server. Of course, this will affect the performance of
> the server when it actually would have used more than what *you* consider to
> be a "reasonable amount" so I don't recommend this unless you absolutely
> need to reserve the memory for other processes.
I need for Sql Server to not slow down the system when it gets up to
using 675 MB. How does one limit the maximum amount of dynamic memory
Sql Server uses? Is there really no way to just tell it,
programmatically, to release the dynamic memory it has accumulated for a
given connection when that connection closes? Frankly I am surprised at
this "hoggish" behavior of Sql Server to just hold on to all the memory
it has accumulated for a given connection even when that connection has
closed. This appears to mean to me that as long as Sql Server is running
in a particular Windows session on a machine it will never use less than
the maximum amount of memory it has allocated for that session.

> A
>
>
>
> "Edward Diener" <eddielee_no_spam_here@.tropicsoft.com> wrote in message
> news:ueU2WCqEGHA.1028@.TK2MSFTNGP11.phx.gbl...
>
>
>|||Ryan wrote:
> By default SQL memory allocation is set to dynamic. This means SQL Server
> will take as much memory as necessary. It's more efficient for SQL to read
> data from memory than disk.
> If other applications request more memory SQL will relinquish what it is
> holding.
> You can change this dynamic memory setting if you wish and "cap" the amount
> of memory SQL consumes.
How does one cap the amount of memory which it consumes, programmatically
or otherwise? Is there really no way to tell Sql Server,
programmatically, to release whatever memory has been allocated for a
given connection when that connection has closed? I find it very odd
that Sql Server simply holds on to dynamic memory allocated for a
particular connection, after that connection is closed.|||> That is an odd use of memory.
Maybe to you... it sounds like you have much more experience in runtime
applications than with services.

> Most modules return dynamic memory after it is no longer being used. This
> makes it much easier for the Windows memory management to find needed
> memory for other modules.
SQL Server expects to be the only major application/service on a server.
That is why we have terms like "database server" and "application server."
I understand that budgets sometimes dictate they are one and the same, but
when you have fewer servers, expect tradeoffs.

> When it builds up to 675 MB it definitely is slowing anything else that is
> running, including the module(s) which are using Sql Server for database
> access.
What else is running *on the same box*? I can't imagine a scenario where
some program needs to be running on the same box, and the only time it needs
675 MB to operate just happens to coincide with the times when SQL Server
*does not* need 675 MB. Does this magical app have anything to do with the
database at all? Have you tried running the app on a different server?

> I need for Sql Server to not slow down the system when it gets up to using
> 675 MB.
How have you proven that it is SQL Server that is slowing down the system?
Do you have any actual metrics? Unless your app is using the other 500MB and
then some and is paging to disk... in which case I would look at the app
before looking at the database. SQL Server is in use in thousands upon
thousands of environments, do you think you are the only one witnessing this
"bizarre phenomenon" of SQL Server "stealing" memory and not giving it back?
Besides, what kind of server, that you expect good performance from, has 1GB
of memory these days? Maybe you should consider adding more memory instead
of intentionally crippling the database.

> How does one limit the maximum amount of dynamic memory Sql Server uses ?
Do you have Enterprise Manager? Have you tried right-clicking on the server
name, selecting Properties, and moving to the Memory tab? Now, you can
change these settings and restart SQL Server, but let me be the first to
predict that you will be back here complaining that it made matters worse.
SQL Server is using 675 MB because it NEEDS it. Why should it give it back?
If you run a query that returns a resultset that is sized 500 MB, SQL Server
may be able to keep that in memory so that the *next* time you call the same
query, it will be faster. If you go and do something foolish like limit the
amount of memory SQL Server can use, to something less than this query
requires, (a) it is going to be slower because it may have to page some of
the results to disk, and (b) successive invocations of the same query will
have to do the same thing, so you lose economies of scale, too. Unless you
have SQL Server for no reason and can eliminate it from the equation, I
don't see how this can possibly make the system faster.
Maybe you could focus on tuning your queries and limiting the amount of
sorting and data retrieval that is going on so that SQL Server uses less
memory. How much memory do you think is appropriate? What kind of testing
and benchmarking have you done? What kind of testing and benchmarking are
you going to do after you make the change? Or are you just going to walk
away, happy knowing that your app will always have free rein over the
memory on the system?

> Frankly I am surprised at this "hoggish" behavior of Sql Server to just
> hold on to all the memory it has accumulated for a given connection even
> when that connection has closed.
Yes, this surprises a lot of people. Unfortunately for you, this is how it
works. And the only solution you have come up with so far is *not* going to
make things better.

> This appears to mean to me that as long as Sql Server is running in a
> particular Windows session on a machine it will never use less than the
> maximum amount of memory it has allocated for that session.
That is correct. Next you'll be asking how to limit the disk space and
number of CPUs SQL Server uses, because all of your other apps need that
stuff sometimes too. :-(
A|||In Enterprise Manager, right-click the server name:
Properties
Memory Tab
You will see two radio buttons, dynamic and fixed memory, with a slider.
OR in TSQL the same can be achieved via sp_configure. The setting is the
value in MB, so 2048 for 2 GB etc.
You will need to restart the SQL Service for this to take effect if you are
on SQL2000 or earlier; don't think that's the case for 2005 but I'm not
sure.
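For reference, a minimal T-SQL sketch of the sp_configure route just described (2048 is only an example value, in MB):

```sql
-- Cap SQL Server's memory usage at 2 GB (example value).
-- 'max server memory' is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory', 2048;  -- value in MB
RECONFIGURE;
```

Running sp_configure with no parameters afterwards will show the configured and running values so you can confirm the change took.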
HTH. Ryan
"Edward Diener" <eddielee_no_spam_here@.tropicsoft.com> wrote in message
news:OGl$XZsEGHA.2380@.TK2MSFTNGP12.phx.gbl...
> Ryan wrote:
> How does one cap the amount of memory which it consumes, programatically
> or otherwise ? is there really no way to tell Sql Server, programatically,
> to release whatever memory has been allocated for a given connection when
> that conncection has closed ? I find it very odd that Sql Server simply
> holds on to dynamic memory allocated for a particular connection, after
> that connection is closed.
>|||Ryan wrote:
> In Enterprise Manager, right-click the server name:
> Properties
> Memory Tab
> You will see two radio buttons, dynamic and fixed memory, with a slider.
> OR in TSQL the same can be achieved via sp_configure. The setting is the
> value in MB, so 2048 for 2 GB etc.
> You will need to restart the SQL Service for this to take effect if you are
> on SQL2000 or earlier; don't think that's the case for 2005 but I'm not
> sure.
Thanks for the information.|||> How does one cap the amount of memory which it consumes, programmatically
> or otherwise ?
Egads, have you even OPENED Enterprise Manager? Maybe a picture would help?
http://www.aspfaq.com/img/memory_tab.gif
Do you know what Books Online is?
http://www.aspfaq.com/2229
http://msdn.microsoft.com/library/e...tun_1a_6rn7.asp
http://msdn.microsoft.com/library/e...config_9zfy.asp
How about Google?
http://tinyurl.com/dzl96 (find the third instance of "release" on the
page)
http://www.sql-server-performance.com/q&a115.asp
And Google Groups?
http://tinyurl.com/bjttb
And the Knowledge Base?
http://support.microsoft.com/kb/321363/
http://support.microsoft.com/kb/319942/

> I find it very odd that Sql Server simply holds on to dynamic memory
> allocated for a particular connection, after that connection is closed.
Why on earth do you think that the memory is used only for that connection?
Isn't it even remotely possible that five minutes (or five seconds) from
now, some other connection might query the same data pages and require the
same amount of memory? Why make SQL Server start from scratch, instead of
allowing it to keep the most frequently accessed data pages in memory,
speeding up the majority of your queries? Limiting SQL Server's memory is
like cutting off your left foot while lying on the couch. You don't need it
right now, so what good is it, right?|||> Limiting SQL Server's memory is
> like cutting off your left foot while lying on the couch. You don't need it
> right now, so what good is it, right?
This belongs on a T-shirt. :)
ML
http://milambda.blogspot.com/

Memory Problem, please help

My SQL server has 3.68 GB memory in total. It has no other application on it; the server is not busy.

The current memory usage of SQL server stays at 1.8 G; in sys.dm_os_memory_clerks, I see

MEMORYCLERK_SQLBUFFERPOOL Default 0 0 400 1636152 1636152
MEMORYCLERK_SQLCLR Default 0 720 1088 101888 22512
MEMORYCLERK_SQLSTORENG Default 0 1480 104 4480 4480
MEMORYCLERK_SQLCLRASSEMBLY Default 0 0 0 4208 4208
OBJECTSTORE_LOCK_MANAGER Lock Manager : Node 0 0 1912 0 4096 4096

The problem is I can not run a CLR function which reads an xml file from the hard disk; it gives the error

.NET Framework execution was aborted by escalation policy because of out of memory.

I tried before, if I restart my SQL server, the function works, but what is the problem? It doesn't look wrong that SQL server allocates 1.8 G memory, and SQLCLR is only using 24M. It is said that if there is not enough memory, SQLCLR will unload appdomains to release memory, so even if SQLCLR can only reuse the 24M, it should be able to load my file, which is only 1M.

Any idea?

thanks

It looks like you are running 32-bit SQL Server 2005. If so, you need to make sure that the /3GB switch is set in your boot.ini file (at the root of your C: drive). Then you need to enable AWE in SQL Server 2005. I would also suggest that you give the SQL Server Service Account the "Lock Pages in Memory" right.

This will allow SQL Server to use more than 2GB of memory and will allow SQL to control paging itself.
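A sketch of the steps in this advice (the boot.ini line is illustrative only — the exact ARC path and OS label vary by machine, and these switches apply to 32-bit builds):

```sql
-- boot.ini (32-bit Windows): append /3GB to the existing OS load line, e.g.
--   multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /3GB
-- Then enable AWE in SQL Server (takes effect after a service restart):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;
RECONFIGURE;
```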

|||

Nice, really helpful, I will try it

thanks!

|||

Hi David,

Not trying to be contentious here but AWE is only useful if your server has more than 4GB of RAM which is not the case here.

Do you see any errors in the SQL Server error log?

regards

Jag

|||

The problem you describe is a known bug in SQL Server 2005 SP1 with a hot fix available:

http://support.microsoft.com/kb/928083

However the fix list for SQL Server 2005 SP2 indicates that the fix is included in the new service pack.

-

The KB article suggests that a workaround is to not use the context connection in your CLR code but rather create a standard connection object.

|||

thanks for your suggestions.

My server doesn't have 4G memory, I did think about it when I applied the changes.

My SQL server looks ok now, after the reboot that day when I applied the changes. It consumes less than 150M now.

I do see a lot of message in SQL log, like

Date 09/03/2007 4:03:41 PM
Log SQL Server (Current - 09/03/2007 8:44:00 PM)

Source spid23s

Message
The query notification dialog on conversation handle '{C7311ACC-9ACE-DB11-9441-000D5670268E}.' closed due to the following error: '<?xml version="1.0"?><Error xmlns="http://schemas.microsoft.com/SQL/ServiceBroker/Error"><Code>-8494</Code><Description>You do not have permission to access the service &apos;SqlQueryNotificationService-a1e3d318-87b4-4ae6-84e7-8ef2f04241c1&apos;.</Description></Error>'.

not sure what it affects, but it is not good since it logs a dozen entries every minute, any suggestion what causes that?

thanks

Friday, February 24, 2012

memory on server

I have an application that is used 85 % of the time by 25 % of the people in
the organization.
we have three servers, all having 2000 server and sql 2000 server the main
one is on SP2 and the two terminal servers have sp4
main server has 2048 meg of ram and ts1 has 3840 meg and ts2 has 2048 meg of
ram in it.
the main server utilization runs anywhere from 20% to 100%. I am proposing
adding another 2048 in the main server, is that enough?
Kevin
kevins (kevins@.discussions.microsoft.com) writes:
> I have an application that is used 85 % of the time by 25 % of the
> people in the organization.
> we have three servers, all having 2000 server and sql 2000 server the
> main one is on SP2 and the two terminal servers have sp4 main server has
> 2048 meg of ram and ts1 has 3840 meg and ts2 has 2048 meg of ram in it.
> the main server utilization runs anywhere from 20% to 100%. I am
> proposing adding another 2048 in the main server, is that enough?
When it comes to memory for SQL Server, too much is not enough! :-)
Here is the deal: SQL Server loves cache. The more data it can hold in
cache, the less often it has to read from disk. Thus, once SQL Server
has maxed out on memory, it will stay there (unless there is pressure
from other applications.)
Thus, if you get another 2 GB of memory, you may see that SQL Server
still maxes out. This all depends on your databases and your application.
If you have a 100 GB database of which all data is accessed at some
point during the day, you will still have disk accesses. On the
other hand, if your only database is 2GB, those extra 2GB will not have
that much effect, as you already have the database in memory.
Note also that you need Enterprise Edition to be able to make use of
more than 2GB memory. There are also switches in Windows you need to set,
to be able to use this memory.
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinfo/productdoc/2000/books.asp


Monday, February 20, 2012

Memory Leaks using MSDE 2000

Hello.
Our company is running an application that uses MSDE 2000 (SP4) to load
large amounts of data using bulk inserts.
The sqlservr.exe process increases from the 4-10MB that it starts with to
80-100MB after all transactions have been completed.
When starting a new transaction session, the process just continues to grow,
causing the OS to perform poorly.
Ok, I understand that while on an insert transaction, the memory should grow,
but when we are done, why doesn't it release the resources?
You can see that this is a bad behavior if you just close the process
manually and start it again, the memory will start with 4-10MB, and the
application still works, meaning that the exe have lots of unused resources.
What can I do about that (other than limiting the memory size to xMB)?
Hi
Not leaking, it is by design
INF: SQL Server Memory Usage
http://support.microsoft.com/default...b;en-us;321363
Regards
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
IM: mike@.epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"Eli" <Eli@.discussions.microsoft.com> wrote in message
news:3444B8D1-3E64-4EF3-B457-2EFB5543C0C7@.microsoft.com...
> Hello.
> Our company is running an application that uses the MSDE 2000 (SP4), to
> load
> large amount of data using bulk inserts.
> The sqlservr.exe process increases from the 4-10MB that it starts with to
> 80-100MB after all transactions has been completed.
> When starting a new transaction session, the process just continue to grow
> causing the OS to perform poorly.
> Ok, I understand that while on an insert transaction, the memory should
> grow
> but when we are done, why doesn't it release the resources?
> You can see that this is a bad behavior if you just close the process
> manually and start it again, the memory will start with 4-10MB, and the
> application still works, meaning that the exe have lots of unused
> resources.
> What can I do about that (other than limiting the memory size to xMB)?
>
>
|||Ok, it's by design, pretty weird design, but ok.
Now, how can I handle this, design or not, it is a problem.
Limiting the max memory? Stopping and restarting the process of the SQL?
What is the best practice for it?
Eli.
"Mike Epprecht (SQL MVP)" wrote:

> Hi
> Not leaking, it is by design
> INF: SQL Server Memory Usage
> http://support.microsoft.com/default...b;en-us;321363
> Regards
> --
> Mike Epprecht, Microsoft SQL Server MVP
> Zurich, Switzerland
> IM: mike@.epprecht.net
> MVP Program: http://www.microsoft.com/mvp
> Blog: http://www.msmvps.com/epprecht/
> "Eli" <Eli@.discussions.microsoft.com> wrote in message
> news:3444B8D1-3E64-4EF3-B457-2EFB5543C0C7@.microsoft.com...
>
>
|||Hi
Set the MAX memory.
Regards
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
IM: mike@.epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"Eli" <Eli@.discussions.microsoft.com> wrote in message
news:5D23EFC5-6240-47E3-AF83-B8B0CB0DC5D7@.microsoft.com...[vbcol=seagreen]
> Ok, it's by design, pretty weird design, but ok.
> Now, how can I handle this, design or not, it is a problem.
> Limiting the max memory? Stopping and restarting the process of the SQL?
> What is the best practice for it?
> Eli.
>
> "Mike Epprecht (SQL MVP)" wrote:

Memory Leak Reading Appointment from MS Exchange Mailbox

Hi,

When using the following code to read an appointment from a MS Exchange mailbox the application experiences a memory leak.

I can provide a test application that easily demonstrates the issue.

ADODB.Connection _Conn = new ADODB.Connection();
_Conn.Provider = "exoledb.datasource";
try
{
    this.Cursor = Cursors.WaitCursor;
    _Conn.Open("http://" + Environment.UserDomainName + "." + Environment.MachineName + "/Exchange/" + this.txtMailbox.Text + "/Calendar", "", "", 0);
    CDO.Appointment cdoAppointment = new CDO.Appointment();
    try
    {
        for (int i = 1; i < int.Parse(this.txtNoOfTestReads.Text); i++)
        {
            cdoAppointment.DataSource.Open("http://" + Environment.UserDomainName + "." + Environment.MachineName + "/Exchange/" + this.txtMailbox.Text + "/Calendar/" + this.txtApptFileName.Text + ".eml", _Conn, ADODB.ConnectModeEnum.adModeReadWrite, ADODB.RecordCreateOptionsEnum.adFailIfNotExists, ADODB.RecordOpenOptionsEnum.adOpenSource, "", "");
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
    finally
    {
        cdoAppointment = null;
    }
}
catch (Exception ex)
{
    MessageBox.Show(ex.Message);
}
finally
{
    if (_Conn.State == (int)ADODB.ObjectStateEnum.adStateOpen)
        _Conn.Close();
    _Conn = null;
    this.Cursor = Cursors.Default;
}

Regards,

Craig

I would post this to the exchange dl ->

http://www.microsoft.com/communities/newsgroups/en-us/default.aspx?dg=microsoft.public.exchange.applications

This could just be a GC issue; try adding lines in the loop like below:

System.GC.Collect();
System.GC.WaitForPendingFinalizers();

Matt