
Friday, March 30, 2012

Merge Join's poor performance

Hello All,

I'm experiencing performance problems with the Merge Join task.
Every time I build a nice package using this task, I end up deleting it and using a SQL statement in the OLE DB Source to accomplish the join, since it takes forever to run and crushes my computer in the process.
It makes me feel I'm not using the abilities SSIS has to offer compared to DTS.
Of course, for several thousand records it works fine, but in a production environment with hundreds of thousands of rows, it seems to be futile.

Maybe someone had a little more luck with it?

Liran R wrote:

Maybe someone had a little more luck with it?

If I were you I would use the OLE DB Source component to do the join - there is absolutely nothing wrong with doing that. If you have a super-performant relational database engine at your disposal, why not use it?

Donald Farmer talks around this a little in his OVAL webcast. If you only watch one SSIS webcast in your life then it should be this one.

Donald Farmer's Technet webcast
(http://blogs.conchango.com/jamiethomson/archive/2006/06/14/4076.aspx)
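Jamie's suggestion amounts to pushing the join into the source query itself. A minimal sketch of what that looks like (the table and column names here are hypothetical, purely for illustration):

```sql
-- Let the relational engine do the join inside the OLE DB Source query,
-- instead of a Merge Join transformation downstream in the pipeline.
-- Table and column names are hypothetical.
SELECT o.OrderID,
       o.OrderDate,
       c.CustomerName
FROM   dbo.Orders AS o
       INNER JOIN dbo.Customers AS c
           ON c.CustomerID = o.CustomerID;
```

The engine can use its indexes and statistics for the join, which is usually far cheaper than sorting and joining two streams inside the SSIS pipeline.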

-Jamie

|||Thanks, I'll take a look. Does the merge join take place in the cache?|||

Can you elaborate as to what you mean by "the cache"?

-Jamie

|||

Sure. When I'm using a Lookup component to make the join, the lookup table is cached by default. When I want to join to a large table (with several million rows) I understood it's best practice to use the Merge Join component, but as I see it, this component also caches the records in the machine's memory, so I don't understand the benefit...

|||

There's a different sort of memory usage going on here.

Yes, the LOOKUP has an area of memory that we call the cache, which is used for storing the lookup set.

MERGE JOIN stores data in memory (as do all asynchronous components) but it's a different kind of memory and we don't refer to it as a cache; it is just the component's working area - more commonly termed a buffer. Also, this working area changes as MERGE JOIN does its work, whereas the LOOKUP cache is static.

It is also worth saying that MERGE JOIN can spool data to disk if it is running out of memory. LOOKUP cannot do that with its cache.

-Jamie

|||

Hi SSIS friend,

I remember asking a similar question a couple of months back. I think I had 15+ Merge Join components in a test package and the performance was awful. Jamie's advice then and now is pretty sound.

During my short period experimenting with SSIS, I came to realise that in order to create efficient packages, I had to utilise the power of both the SQL Server and SSIS engines. Each one performs certain tasks better than the other. It takes time and I'm still learning, but the more you play around with it, the easier it gets to choose which engine to use.

|||

Thanks guys.


Wednesday, March 28, 2012

Merge Control Synchronization Problem

Hi Guys - I have been using the Merge Control ActiveX successfully for
some time. However, one client has found a problem synchronizing. Here is
the relevant piece of code:
agent.Publisher = "MyServer\InstanceName";
agent.Distributor = agent.Publisher;
agent.DistributorNetwork = TCPIP_SOCKETS;
agent.PublisherNetwork = TCPIP_SOCKETS;
agent.PublisherAddress = "192.168.1.100\InstanceName,2763";
agent.DistributorAddress = "192.168.1.100\InstanceName,2763";
This is a Merge replication with pull anonymous subscriptions. The
Publisher and Distributor are on the same server.
I am using the PublisherAddress because I need to specify the port.
The error the client gets is the following:
If we try anything else for the publisher name such as
"192.168.1.100\InstanceName" or even the fully qualified name of the server,
it says:
valid Publisher'
Now, we checked and double-checked the settings. We can log in fine to
the publisher and subscribers. As a matter of fact, we were able to
subscribe just fine.
I tested the same scenario with my computers and it works fine for me.
What could cause it to say that the process could not connect to the
Distributor?
Is the Publisher name always "ServerName\InstanceName"?
Any help will be appreciated,
Thanks,
Maer
I think you are having name resolution problems from this client. Can this
client ping the publisher/distributor by IP?
Hilary Cotter
Director of Text Mining and Database Strategy
RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
This posting is my own and doesn't necessarily represent RelevantNoise's
positions, strategies or opinions.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Maer" <maer@.auditleverage.com> wrote in message
news:%23EO95jlPGHA.2080@.TK2MSFTNGP09.phx.gbl...
> Hi Guys - I have been using the Merge Control ActiveX successfully for
> some time. However, one client has found a problem synchronizing. Here is
> the relavent piece of code:
> agent.Publisher = "MyServer\InstanceName";
> agent.Distributor = agent.Publisher;
> agent.DistributorNetwork = TCPIP_SOCKETS;
> agent.PublisherNetwork = TCPIP_SOCKETS;
> agent.PublisherAddress = "192.168.1.100\InstanceName,2763";
> agent.DistributorAddress =
> "192.168.1.100\InstanceName,2763";
> This is a Merge replication with pull anonymous subscriptions. The
> Publisher and Distributor are on the same server.
> I am using the PublisherAddress because I need to specify the port.
> The error the client gets is the following:
>
> If we try anything else for the publisher name such as
> "192.168.1.100\InstanceName" or even the fully qualified name of the
> server, it says:
> valid Publisher'
> Now, we checked and double-checked the settings. We can log in fine to
> the publisher and subscribers. As a matter of fact, we were able to
> subscribe just fine.
> I tested the same scenario with my computers and it works fine for me.
> What could cause it to say that The process could not connect to
> Distributer?
> Is the Publisher name always "ServerName\InstanceName"?
> Any help will be apreciated,
> Thanks,
> Maer
>
>
>
|||Yes, he has pinged the IP and it responds ok. Also, we can connect fine
to SQL Server (through our application) on the publisher/distributor using
"MyServer\InstanceName" or "IP\InstanceName".
BTW, the publisher/distributor is running on a Windows 2003 Server.
Maer
"Hilary Cotter" <hilary.cotter@.gmail.com> wrote in message
news:%23gnkoQmPGHA.2912@.tk2msftngp13.phx.gbl...
>I think you are having name resolution problems from this client. Can this
>client ping the publisher/distributor by IP?
> --
> Hilary Cotter
> Director of Text Mining and Database Strategy
> RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
> This posting is my own and doesn't necessarily represent RelevantNoise's
> positions, strategies or opinions.
> Looking for a SQL Server replication book?
> http://www.nwsu.com/0974973602.html
> Looking for a FAQ on Indexing Services/SQL FTS
> http://www.indexserverfaq.com
>
> "Maer" <maer@.auditleverage.com> wrote in message
> news:%23EO95jlPGHA.2080@.TK2MSFTNGP09.phx.gbl...
>

Monday, March 26, 2012

Merge Agent takes all the CPU time

The Merge Agent takes all the CPU resources and leaves no room for other
applications. When I stop the Server Agent the server becomes available to
other applications. I start the Merge Agent again and everything works fine.
How can I handle this so I won't have to stop the Merge Agent each time the
users start complaining?
Thanks a lot,
Lina

How often is your merge agent running..? Hourly ..?
Ideally you want to run the merge agent frequently, that way it has a small
amount of data to process and will have less impact on resources
HTH. Ryan
"Lina Manjarres" <LinaManjarres@.discussions.microsoft.com> wrote in message
news:956B2254-A621-4752-BFCD-1DF47151A269@.microsoft.com...
> The merge Agent takes all the CPU resources and makes no space for other
> aplications. When I stope the Server Agent the Server becomes available to
> other applications. I start the merge agent again and every thing works
> fine.
> How can I handle this so I wont have to stop the merge agent each time the
> usrs start complaining?
> Thanks a lot,
> Lina

|||

Hi Ryan
I have two schedules:
one continuous
and the other one every 5 minutes.
Thank you, Lina
"Ryan" wrote:

> How often is your merge agent running..? Hourly ..?
> Ideally you want to run the merge agent frequently, that way it has a small
> amount of data to process and will have less impact on resources
> --
> HTH. Ryan
> "Lina Manjarres" <LinaManjarres@.discussions.microsoft.com> wrote in message
> news:956B2254-A621-4752-BFCD-1DF47151A269@.microsoft.com...
>
|||In that case I would suggest running a profiler trace to capture what stored
procedure the merge agent is running when the CPU spike occurs. You may find
it's related to the size of the metadata tables "MSmerge_contents,
MSmerge_genhistory" or the filtering conditions within the publication.
How to troubleshoot SQL Server merge replication problems :-
http://support.microsoft.com/?id=315521
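As a rough first check of this suggestion, the size of those metadata tables can be inspected directly (a sketch; run it in the published database):

```sql
-- Row counts of the merge metadata tables mentioned above.
SELECT COUNT(*) AS contents_rows   FROM dbo.MSmerge_contents;
SELECT COUNT(*) AS genhistory_rows FROM dbo.MSmerge_genhistory;
```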
HTH. Ryan
"Lina Manjarres" <LinaManjarres@.discussions.microsoft.com> wrote in message
news:6FD33C8B-468A-4DC4-860C-9E27ED644743@.microsoft.com...
> Hi Ryan
> I have to schedules:
> One continuos
> and the other one each 5 minutes.
> Thank you, Lina
> "Ryan" wrote:
>|||Thanks a lot
Lina
"Ryan" wrote:

> In that case i would suggest running a profiler trace to capture what stored
> procedure the merge agent is running when the CPU spike occurs. You may find
> it's related to the size of the metadata tables "MSmerge_contents,
> MSmerge_genhistory" or the filtering conditions within the publication.
>
> How to troubleshoot SQL Server merge replication problems :-
> http://support.microsoft.com/?id=315521
>
> --
> HTH. Ryan
> "Lina Manjarres" <LinaManjarres@.discussions.microsoft.com> wrote in message
> news:6FD33C8B-468A-4DC4-860C-9E27ED644743@.microsoft.com...
>
>

Friday, March 23, 2012

Merge Agent

Hello,
I have a merge replication scenario where occasionally the
subscription server will go offline for a period of time.
It seems that when this is the case the merge agent stops
and when the subscription server becomes available I have
to manually restart the merge agent for synchronization to
continue. Does this seem right or should the merge agent
continue to run even while the subscription server is
unavailable so that when the subscription server becomes
available synchronization will continue automatically?
Any help would be appreciated!
Thanks in advance.
This behavior is by design. To fix it you should schedule the merge agent to
run every 10 minutes. This way it will continually retry until it succeeds.
You might also want to recreate the subscription as a pull subscription.
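If the merge agent runs as a SQL Server Agent job, such a schedule can be attached with sp_add_jobschedule; a sketch (the job name is hypothetical - use the actual merge agent job name on your distributor):

```sql
-- Run the merge agent job every 10 minutes so it keeps retrying until
-- the subscriber is reachable again. The job name is hypothetical.
EXEC msdb.dbo.sp_add_jobschedule
    @job_name = N'MyMergeAgentJob',
    @name = N'Every 10 minutes',
    @freq_type = 4,              -- daily
    @freq_interval = 1,          -- every day
    @freq_subday_type = 4,       -- minute units
    @freq_subday_interval = 10;  -- every 10 minutes
```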
Hilary Cotter
Looking for a book on SQL Server replication?
http://www.nwsu.com/0974973602.html
"Jerry G." <anonymous@.discussions.microsoft.com> wrote in message
news:1c4e701c45257$6cd0c470$a101280a@.phx.gbl...
> Hello,
> I have a merge replication scenario where occasionally the
> subscription server will go offline for a period of time.
> It seems that when this is the case the merge agent stops
> and when the subscription server becomes available I have
> to manually restart the merge agent for syncronization to
> continue. Does this seem right or should the merge agent
> continue to run even while the subscription server is
> unavailable so that when the subscription server becomes
> available syncronization will continue automatically?
> Any help would be appreciated!
> Thanks in advance.

Mentored Learning

I discovered that many of my very busy colleagues are having an
extremely difficult time pulling themselves away to take vital
training when it requires being away for consecutive days. I just
completed training for .NET in Chicago through a mentored learning
program that personally helped me to tackle that same issue. Though
the mentored learning training allows complete interaction with
classroom version labs and interaction with an on staff expert on the
topic, it does require some discipline in that I was responsible for
the pace I was moving. The benefit for me is that I did not have to
commit myself to a set schedule of consecutive days being
inaccessible. I had the flexibility to engage in the mentored
learning program once a week until I completed the course. Plus the
mentored learning lab was very comfortable and for some of my
colleagues who are able to travel, it is near O'Hare Airport and
there is a hotel next door.

Once I get more help in our data center, I do plan on spending some
time, and some of our firm's money on some instructor-led classes that
will require me to be away for consecutive days and I can get done a
lot quicker. This particular training center's line-up of classes and
its personnel impressed me. I thought I would alert the rest of you
to this attractive alternative and would be happy to provide you the
details on a need to know basis either through this discussion group
or you can send me an email to Robert@.vcmnetwork.com.

Please post the additional information to the NG...the company's
website and things like that would help everyone.

Thank you, Tom

On Mar 21, 9:10 am, "whosesocks" <Robert-Mar...@.comcast.net> wrote:

Quote:

Originally Posted by

I discovered that many of my very busy colleagues are having an
extremely difficult time pulling themselves away to take vital
training when it requires being away for consecutive days. I just
completed training for .NET in Chicago through a mentored learning
program that personally helped me to tackle that same issue. Though
the mentored learning training allows complete interaction with
classroom version labs and interaction with an on staff expert on the
topic, it does require some discipline in that I was responsible for
the pace I was moving. The benefit for me is that I did not have to
commit myself to a set schedule of consecutive days being
inaccessible. I had the flexibility to engage in the mentored
learning program once a week until I completed the course. Plus the
mentored learning lab was very comfortable and for some of my
colleagues who would are able to travel, it is near O'Hare Airport and
there is a hotel next door.
>
Once I get more help in our data center, I do plan on spending some
time, and some of our firm's money on some instructor-led classes that
will require me to be away for consecutive days and I can get done a
lot quicker. This particular training centers line-up of classes and
its personnel impressed me. I thought I would alert the rest of you
to this attractive alternative and would be happy to provide you the
details on a need to know basis either through this discussion group
or you can send me an email to Rob...@.vcmnetwork.com.

Wednesday, March 21, 2012

Memtoleave and -g-switch

hi out there
On our Windows 2003 servers w. SP1 running MS SQL Server 2000 w. SP4 we
see from time to time the error "cannot allocate 64k contiguous memory"
or "SQL Server could not spawn process_loginread thread", which could be
caused by nothing being left in the "MemToLeave" pool. I have searched
for advice on how to determine the value for the -g switch, but without
much success, and if I just go for the trial-and-error approach my SQL
Server just allocates less and less. In which units are the parameters
for the -g option specified - bytes, kilobytes, megabytes, 4k blocks?
Any suggestions for measuring the actual running value of this pool -
MemToLeave?

best regards /ti

The units for -g are MB. The default is 256.
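For reference, -g is supplied as a startup parameter of the SQL Server service; for example, to reserve 384 MB (a purely illustrative value) you would add the following to the service's startup parameters:

```
-g384
```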

Is this a 32 or 64 bit system?

How much memory is on the system?

Are you running SQL Server with AWE enabled?

|||

Could you run the following query (when you are having this problem) and give us the results.

Code Snippet

SELECT type, multi_pages_kb
FROM sys.dm_os_memory_clerks
WHERE multi_pages_kb > 0
ORDER BY multi_pages_kb DESC

WesleyB

Visit my SQL Server weblog @. http://dis4ea.blogspot.com

|||

SQL 2000 does not have DMVs. There's no easy way to profile memory usage in SQL 2000. See http://msdn2.microsoft.com/en-US/library/aa175282(sql.80).aspx for more information
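On SQL Server 2000, one commonly used (though undocumented) way to snapshot memory usage is DBCC MEMORYSTATUS; a sketch:

```sql
-- Dumps internal memory manager counters on SQL Server 2000,
-- useful for eyeballing how memory is being consumed.
DBCC MEMORYSTATUS;
```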

Thanks, Ron D.

|||Oops, I missed the 2000 part :-)


Monday, March 19, 2012

Memory Usage Grows over time

I am very new to TSQL and the program that follows loops many times over several hours. In task manager I have noticed that the PF usage has grown from 1.08 to 2.05 over several hours and results in a virtual memory shortage error being displayed. I am inclined to believe I am doing something wrong and accumulating some stack space. Can anyone tell me what I am doing wrong? PF does drop back to normal when I drop out of SQL.

thanks in advance

-Soup-

use sm

declare @test_date smalldatetime
declare @ticker char(6)
declare @buy_date smalldatetime
declare @buy_price smallmoney
declare @sell_date_5d smalldatetime
declare @sell_price_5d smallmoney
declare @sell_date_10d smalldatetime
declare @sell_price_10d smallmoney
declare @sell_date_20d smalldatetime
declare @sell_price_20d smallmoney

-- Recreate the results table and make sure the scratch table exists
if object_id('sm.dbo.Open_High_v2') is not null
    drop table sm.dbo.Open_High_v2

create table sm.dbo.Open_High_v2
(
    ticker char(6),
    detect_date smalldatetime,
    buy_date smalldatetime,
    buy_price smallmoney,
    sell_date_5d smalldatetime,
    sell_price_5d smallmoney,
    sell_date_10d smalldatetime,
    sell_price_10d smallmoney,
    sell_date_20d smalldatetime,
    sell_price_20d smallmoney
)

if object_id('sm.dbo.Temp_Price') is null
    create table sm.dbo.Temp_Price
    (
        ticker char(6),
        Trade_Date smalldatetime,
        [Open] smallmoney,
        High smallmoney,
        Low smallmoney,
        [Close] smallmoney,
        Volume int
    )

-- Clean up the cursor if a previous run left it behind
if cursor_status('global', 'date_list') >= -1
begin
    close date_list
    deallocate date_list
end

declare date_list cursor for
    select distinct ticker, trade_date
    from price
    order by ticker asc, trade_date asc

open date_list
fetch next from date_list into @ticker, @test_date

while @@fetch_status = 0
begin
    set @buy_date = NULL
    set @buy_price = NULL
    set @sell_date_5d = NULL
    set @sell_price_5d = NULL
    set @sell_date_10d = NULL
    set @sell_price_10d = NULL
    set @sell_date_20d = NULL
    set @sell_price_20d = NULL

    -- 5-day window
    insert into sm.dbo.Temp_Price
    select top 5 ticker, trade_date, [open], high, low, [close], volume
    from price as p
    where p.ticker = @ticker and p.trade_date > @test_date
    order by p.trade_date asc

    set @buy_date = (select min(trade_date) from sm.dbo.Temp_Price)
    set @buy_price = (select [open] from sm.dbo.Temp_Price where trade_date = @buy_date)
    set @sell_price_5d = (select max(high) from sm.dbo.Temp_Price)
    set @sell_date_5d = (select top 1 trade_date from sm.dbo.Temp_Price where high = @sell_price_5d order by trade_date asc)

    truncate table sm.dbo.Temp_Price

    -- 10-day window
    insert into sm.dbo.Temp_Price
    select top 10 ticker, trade_date, [open], high, low, [close], volume
    from price as p
    where p.ticker = @ticker and p.trade_date > @test_date
    order by p.trade_date asc

    set @sell_price_10d = (select max(high) from sm.dbo.Temp_Price)
    set @sell_date_10d = (select top 1 trade_date from sm.dbo.Temp_Price where high = @sell_price_10d order by trade_date asc)

    truncate table sm.dbo.Temp_Price

    -- 20-day window
    insert into sm.dbo.Temp_Price
    select top 20 ticker, trade_date, [open], high, low, [close], volume
    from price as p
    where p.ticker = @ticker and p.trade_date > @test_date
    order by p.trade_date asc

    set @sell_price_20d = (select max(high) from sm.dbo.Temp_Price)
    set @sell_date_20d = (select top 1 trade_date from sm.dbo.Temp_Price where high = @sell_price_20d order by trade_date asc)

    truncate table sm.dbo.Temp_Price

    begin transaction

    insert into sm.dbo.Open_High_v2
    values
    (
        @ticker,
        @test_date,
        @buy_date,
        @buy_price,
        @sell_date_5d,
        @sell_price_5d,
        @sell_date_10d,
        @sell_price_10d,
        @sell_date_20d,
        @sell_price_20d
    )

    commit transaction

    fetch next from date_list into @ticker, @test_date
end

close date_list
deallocate date_list

select * from sm.dbo.Open_High_v2

One cause may be that you're using transactions without error checking, which reserve memory. If you're just inserting without error checking, you don't need to use transactions.

Adamus

|||

Transactions were a desperate attempt to stop PF growth. PF growth was present before the transaction begin and commit were added.

Thanks for the idea.

-Soup-

|||

After looking at the code further, it appears the overhead you're concerned with is a necessary evil in order to accomplish your task.

Although performance should always be a concern, sometimes the elephant in the living room serves a productive and inevitable purpose.

Adamus

|||

Without further information on what I am doing incorrectly in TSQL, I guess it is time to turn in a bug report to the SQL development team. Repeated loops of a program appear to uncover a problem with TSQL or SQL Server. Is this an acceptable trouble report process, or do I need to go somewhere else to enter this as a formal trouble report?

-Soup-

|||

I didn't understand your trouble.

There are other ways to achieve your desired output. Why not try to rewrite your query in a proper, better way, as others have said?

|||I have to withdraw my previous post about memory growing over time. Something other than this program must have been the problem. Regarding the proper and best way: as I said, I am new to TSQL and have no idea how this program differs from the proper and best way. I do know that this program may have been a learning experience for me, but I allowed it to execute for 11 days, yes days, and it did not complete. After only 8 hours a C++ program did the job. I suspect my skill at TSQL is at fault.|||

To improve the performance of the above SQL queries, you can use the "table" data type instead of the physical temp tables Temp_Price and Open_High_v2. Another problem is the cursor. First, take all the data from the "price" table into another temp table, also declared with the "table" data type, and apply the cursor to that table. Next, you can remove the "Begin Transaction" and "Commit Transaction" statements...
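The first suggestion - a "table" data type in place of the physical scratch table - looks roughly like this (a sketch; the column list is copied from the original script):

```sql
-- A table variable replacing the physical sm.dbo.Temp_Price scratch table.
DECLARE @Temp_Price TABLE
(
    ticker     char(6),
    Trade_Date smalldatetime,
    [Open]     smallmoney,
    High       smallmoney,
    Low        smallmoney,
    [Close]    smallmoney,
    Volume     int
);

-- The per-window inserts then target @Temp_Price, and DELETE replaces
-- TRUNCATE TABLE (table variables cannot be truncated).
DELETE FROM @Temp_Price;
```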

Next check performance of TSQL...

Jefy

|||Lay off the crack pipe Jefy

Friday, March 9, 2012

Memory question for the gurus

Hi Guys, first time posting here. My boss has a Lotus Notes application accessing SQL Server with about 60 users. I have a custom VB app with 40+ users. Our SQL Server has 2 gigabytes of memory on it. Is this too low? My boss is expecting to have 300+ users on his Notes app when he rolls it out to our other branches. What would be the ideal amount of memory for 300+ users?

|||

Depends: will the 300+ users access the data simultaneously? 24/7? How much data will they transfer? How big is the database, and how is it used by the application? Are there any agreements with users concerning availability/performance? What's acceptable to them?

|||

My boss has a Lotus Notes application accessing SQL server with about 60 users.

Nooooooooooooooooooooooooooooooooooooooo

AHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH

In the name of GOD WHY?

Got a lot of left over Notus Lotes developers who haven't made the jump?

Notestrix? Notespump? How are they talking to SQL Server?

OH, the volume of users and the amount of memory isn't the issue.

How big is the database?|||Hi Guys,

I'm not sure if you guys are referring to the size of the database file. Anyway, the database that Notes is accessing is 13 gigabytes as of today and the database my VB app is using is 200 MB. I just ran Performance Monitor on the server and it says 90 MB of memory available and over 440 page faults per second. It looks like SQL is taking up over 1.6 GB of memory. Yikes!

|||

You can perform an assessment using PERFMON while SQL Server is in a working state, which gives you full information.

|||

Lotus Notes and a 13 GB database...hmmmm

Is he calling stored procedures or is everything in the application layer?

I'd be curious as to how it performs...

Oh, and SQL will grab as much memory as it needs...that's a good thing...

This is a dedicated SQL Server box...right?

|||

Yeah, it is a dedicated server box. We are using Lotus Notes through Citrix and I believe we have about 4 servers dedicated to Lotus Notes. His application is running fine so far but I'm concerned about the available memory on our SQL Server (especially since it started crashing and rebooting about once a week). The reason the SQL database Notes is accessing is fairly big is document archiving. I have a custom app that produces customer statements and invoices in PostScript format that we send to our customers. These documents then get archived each night in SQL Server. The Notes app allows our customer reps to quickly find and view these documents. We send thousands and thousands of documents each month, so this database is going to grow quickly. What do you guys think? Throw more memory in there? I'm not a SQL DBA so I have no idea if 2 GB of memory is enough to handle 100+ users and, like I said, it will be 300+ soon.

|||

There is no such thing as too much memory for MS-SQL! You can safely get that notion out of your head ;)

A lot depends on the architecture, how the Notes users are accessing the database, how your VB app works, etc.

Being the wild man that I am, I usually start my MS-SQL boxes at 8 Gb, then let somebody try to talk me down. I almost always manage to convince them there is no point in saving a few hundred dollars on RAM that would save them at least 10 hours of overtime each month.

-PatP|||Thanks Pat. I've convinced my boss to order more memory for the server. I'll see if I can get 8 gigs like you mentioned :D|||See...the thing of it is, is that Pat didn't ask you what version you're running or what the OS is...

This is kind of important|||See...the thing of it is, is that Pat didn't ask you what version you're running or what the OS is...

This is kind of important
Is that because Pat has a business on the side building bargain basement desktops?
;)|||See...the thing of it is, is that Pat didn't ask you what version you're running or what the OS is...

This is kind of importantAnd not only that, it also depends on whether the box will take 8GB or not. It's nice to sit there and say: "Yeah, memory is cheap!" Sure, what about a box itself? Maybe you can afford only the one expandable to 6GB? Where are you gonna put the other 2? In your ear?|||Maybe you can afford only the one expandable to 6GB? Where are you gonna put the other 2? In your ear?

Dude! That is so funny! "In his ear"?! .. I almost made a mess with the cup of coffee on my desk.|||See...the thing of it is, is that Pat didn't ask you what version you're running or what the OS is...

This is kind of importantGood point... Sometimes I miss details like that.

If jmondia is running NT 3.51 or earlier, then 8 Gb is a problem, since the OS has problems addressing that much memory. The same is true if they are running SQL 6.5 or earlier, although there used to be work-arounds for those problems from Microsoft Professional Support Services.

I assumed that anyone planning to run 300+ simultaneous users would be running on server grade hardware (which by my definition has to support at least 8 Gb of RAM), with at least Windows 2000 and SQL 7. I shouldn't have taken those things for granted. Based on jmondia's response, it looks like I was safe making those assumptions though.

-PatP|||Man, I wish you were around when we were running our HMO on a 4-way with 4GB maxed out with 6.5 and NT (4.0 though, 3.51 wouldn't have taken it) PSS participated in setting up this server, all the specs were met...where did you get this idea that PSS would come up with a workaround for 6.5 to recognize even 2/3 of 8GB of RAM? Man, my veins (as Lindman once noticed) are about to pop even imagining this! We would have been all set with 8GB! Dreaming again?|||Dreaming again?A good TAM helps a lot.

-PatP
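As the exchange above notes, whether 8 GB is even usable depends on the OS and on the SQL Server version and edition. A quick way to check those points from a query window, sketched here for SQL Server 2000 or later (SERVERPROPERTY does not exist on SQL 6.5/7.0):

```sql
-- Check version, service pack level, and edition before planning a
-- memory upgrade; Enterprise Edition is needed to address more than
-- the standard user address space.
SELECT SERVERPROPERTY('ProductVersion') AS product_version,
       SERVERPROPERTY('ProductLevel')   AS service_pack,
       SERVERPROPERTY('Edition')        AS edition;
```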

Friday, February 24, 2012

memory on server

I have an application that is used 85 % of the time by 25 % of the people in
the organization.
we have three servers, all having 2000 server and sql 2000 server the main
one is on SP2 and the two terminal servers have sp4
main server has 2048 meg of ram and ts1 has 3840 meg and ts2 has 2048 meg of
ram in it.
the main server utilization runs anywhere from 20% to 100%. I am proposing
adding another 2048 in the main server, is that enough?
Kevin
kevins (kevins@.discussions.microsoft.com) writes:
> I have an application that is used 85 % of the time by 25 % of the
> people in the organization.
> we have three servers, all having 2000 server and sql 2000 server the
> main one is on SP2 and the two terminal servers have sp4 main server has
> 2048 meg of ram and ts1 has 3840 meg and ts2 has 2048 meg of ram in it.
> the main server utilization runs anywhere from 20% to 100%. I am
> proposing adding another 2048 in the main server, is that enough?
When it comes to memory for SQL Server, too much is not enough! :-)
Here is the deal: SQL Server loves cache. The more data it can hold in
cache, the less often it has to read from disk. Thus, once SQL Server
has maxed out on memory, it will stay there (unless there is pressure
from other applications.)
Thus, if you get another 2 GB of memory, you may see that SQL Server
still maxes out. This all depends on your databases and your application.
If you have a 100 GB database of which all data is accessed at some
point during the day, you will still have to accept disk accesses. On the
other hand, if your only database is 2 GB, those extra 2 GB will not have
that much effect, as you already have the database in memory.
Note also that you need Enterprise Edition to be able to make use of
more than 2 GB of memory. There are also switches in Windows you need to
set to be able to use this memory.
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinfo/productdoc/2000/books.asp
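Erland's point about Enterprise Edition and "switches in Windows" refers to AWE. A hedged sketch of the server-side part on SQL Server 2000 Enterprise Edition (the /PAE, and possibly /3GB, switches must also be set in boot.ini on the Windows side, and the service account needs the "Lock Pages in Memory" right):

```sql
-- Enable AWE so the buffer pool can use memory above the normal
-- 2 GB user address space (takes effect after a service restart).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;
RECONFIGURE;
```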


Monday, February 20, 2012

Memory Leak.. Performance?

Hi All,
I'm a first time poster in here. Hopefully I'm in the right place for this
type of question. I'm having a problem with one of my SQL Servers and
maybe someone here can shed some light.
I'm running SQL Server version 8.00.760 which I think is SP3a even though
I'm not sure about the "a" part because it doesn't show in the versioning.
The operating system is Windows 2000 Advanced Server running SP4. There are
65 workstations in the company and approximately 57-59 of them running
applications against the SQL Server at any one time.
There is one major database with about 75 tables in the SQL Server. The
biggest application that is installed on all workstations uses this database
primarily and has direct drivers that connect to the database. It does not
use ODBC. It is written in C but I can't tell you what kind of connections
are being made programmatically. There is a smaller application that is
written in FoxPro that also reads this database using ODBC. This
application is programmed with a timer and is never turned off. It runs
every 20 minutes or so. It does a read of the main table however I can't be
sure if it's using an index or doing a table scan. Also there are about 4
employees that use ODBC and Access or Excel to query these databases for
custom reporting that they do.
The problem is that the server has to be rebooted every night for it to work
properly the next day. If they go more than one day without rebooting they
start seeing slowdowns in the applications. I am leaning towards a memory
leak either in the SQL Server or being caused by one of the applications.
The primary app they are running runs on other SQL Servers in other companies
and there are no complaints of memory leaks with that app. The FoxPro app
using ODBC was explained to be a pretty simple program that shouldn't be
causing this.
I read up on SP3 and how it fixes a memory leak to do with the ODBC drivers.
My question is, does it sound like I have everything installed that is
necessary from Microsoft so I can say that it's probably not SQL Server that
has the memory leak? Also what's the best way to actually determine there
is a true memory leak? I can't use the Total Memory used because according
to Microsoft this is naturally always increasing. Are there newer ODBC
drivers that I should be using other than those that come with Windows 2000?
Any help, suggestions, or comments on this problem are more than welcome.
Thanks for any information.
Best Regards,
Henry Sheldon
South Florida, US|||If you have a SQL Server memory leak,
watch the perfmon counter:
Object: Process
Counter: Virtual Bytes
Instance: sqlservr
It should get real close to 2*1024*1024*1024 (2 GB), or 3 GB
in /3GB mode for AS/EE, when you start having
performance problems.
It is also possible the SQL Server: Memory Manager SQL
Cache Memory counter will start to drop if ODBC is leaking address
space, leaving less address space for the buffer cache.
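The Memory Manager counters mentioned above can also be sampled from inside SQL Server 2000 itself, via the sysperfinfo system table, which exposes the same values perfmon does (a sketch; counter names vary slightly between builds):

```sql
-- Snapshot the SQL Server: Memory Manager counters that perfmon exposes.
SELECT object_name, counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE object_name LIKE '%Memory Manager%';
```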
>--Original Message--
>Hi All,
>I'm a first time poster in here. Hopefully I'm in the right place for this
>type of question. I'm having a problem with one of my SQL Servers and
>maybe someone here can shed some light.
>I'm running SQL Server version 8.00.760 which I think is SP3a even though
>I'm not sure about the "a" part because it doesn't show in the versioning.
>The operating system is Windows 2000 Advanced Server running SP4. There are
>65 workstations in the company and approximately 57-59 of them running
>applications against the SQL Server at any one time.
>There is one major database with about 75 tables in the SQL Server. The
>biggest application that is installed on all workstations uses this database
>primarily and has direct drivers that connect to the database. It does not
>use ODBC. It is written in C but I can't tell you what kind of connections
>are being made programmatically. There is a smaller application that is
>written in FoxPro that also reads this database using ODBC. This
>application is programmed with a timer and is never turned off. It runs
>every 20 minutes or so. It does a read of the main table however I can't be
>sure if it's using an index or doing a table scan. Also there are about 4
>employees that use ODBC and Access or Excel to query these databases for
>custom reporting that they do.
>The problem is that the server has to be rebooted every night for it to work
>properly the next day. If they go more than one day without rebooting they
>start seeing slowdowns in the applications. I am leaning towards a memory
>leak either in the SQL Server or being caused by one of the applications.
>The primary app they are running runs on other SQL Servers in other companies
>and there are no complaints of memory leaks with that app. The FoxPro app
>using ODBC was explained to be a pretty simple program that shouldn't be
>causing this.
>I read up on SP3 and how it fixes a memory leak to do with the ODBC drivers.
>My question is, does it sound like I have everything installed that is
>necessary from Microsoft so I can say that it's probably not SQL Server that
>has the memory leak? Also what's the best way to actually determine there
>is a true memory leak? I can't use the Total Memory used because according
>to Microsoft this is naturally always increasing. Are there newer ODBC
>drivers that I should be using other than those that come with Windows 2000?
>Any help, suggestions, or comments on this problem are more than welcome.
>Thanks for any information.
>Best Regards,
>Henry Sheldon
>South Florida, US
>
>.
>

Memory Leak problem... in SQL Server 2K

Hello,

I am having trouble with a production db server that likes to gobble
up memory. It seems to be a slow burn (maxing out over about an 18
hour time frame, before pegging both procs on the server and bringing
everything to a standstill). After viewing the trace logs, it appears
that all the SPIDs are being recycled - does this assert that
connections are being properly closed when the need for them has
ended? The code base is huge and quite messy, so it's difficult to
discern where the problem is just by looking at code, and we can't
seem to nail it down by looking at it, and I'm not sure what to look
for in the trace logs or perfmon.

Does anyone have any suggestions about what else might cause such a
problem?

Ryan|||Ryan (ryan3677@.excite.com) writes:
> I am having trouble with a production db server that likes to gobble
> up memory. It seems to be a slow burn (maxing out over about an 18
> hour time frame, before pegging both procs on the server and bringing
> everything to a standstill). After viewing the trace logs, it appears
> that all the SPIDs are being recycled - does this assert that
> connections are being properly closed when the need for them has
> ended? The code base is huge and quite messy, so it's difficult to
> discern where the problem is just by looking at code, and we can't
> seem to nail it down by looking at it, and I'm not sure what to look
> for in the trace logs or perfmon.

SQL Server likes to gobble up memory. In fact this is by design. The
more data SQL Server can hold in cache, the more queries it can
respond to without disk access. So normally SQL Server expands to
get all available memory. But if there are other processes in need of
memory, SQL Server will yield. It may not yield fast enough, though,
and you can configure SQL Server to use only part of the memory.

So the perceived memory leak is not a problem, but since you talk about
standstill, it seems that you have a problem. And since you talk about
pegging the processors on the server, it seems that you have a query in
need of a rewrite somewhere. Or in need of a better index. So while that
code base may be big and ugly, and you prefer not to look at it, it is
most likely there you find the solution.

The Profiler is a good tool. Filter for Duration greater than, say,
1000 ms. Then again, if you start tracing when that bad query starts
running, you will not see the query until it is completed. One
alternative is aba_lockinfo, which is on my home page,
http://www.sommarskog.se/sqlutil/aba_lockinfo.html. That procedure
is really intended for lock monitoring, but you get all active processes
and what they are doing. And since "standstill" often includes blocking
as well, it may be interesting. aba_lockinfo gives you a snapshot, but
can still reveal something about what is going on. One word of caution
though: aba_lockinfo can take some time to return on a busy system. I
have identified a few weaknesses in terms of performance, but I have
not gotten around to fixing them yet.
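The Profiler filter Erland describes can also be set up as a server-side trace, which keeps running even when Profiler is closed. A minimal sketch for SQL Server 2000 (the file path is a hypothetical example, and Duration is in milliseconds on this version):

```sql
-- Trace batches running longer than 1000 ms to a trace file.
DECLARE @TraceID int, @maxsize bigint, @dur bigint;
SET @maxsize = 50;      -- MB per trace file
SET @dur = 1000;        -- duration threshold in ms
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\traces\slow_queries', @maxsize;
EXEC sp_trace_setevent  @TraceID, 12, 1, 1;        -- SQL:BatchCompleted, TextData
EXEC sp_trace_setevent  @TraceID, 12, 13, 1;       -- SQL:BatchCompleted, Duration
EXEC sp_trace_setfilter @TraceID, 13, 0, 4, @dur;  -- Duration >= @dur
EXEC sp_trace_setstatus @TraceID, 1;               -- start the trace
```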

--
Erland Sommarskog, SQL Server MVP, sommar@.algonet.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||"Erland Sommarskog" <sommar@.algonet.se> wrote in message
news:Xns947AB6167287AYazorman@.127.0.0.1...
> Ryan (ryan3677@.excite.com) writes:

<snip
> SQL Server likes to gobble up memory. In fact this is by design. The
> more data SQL Server can hold in cache, the more queries it can
> respond to without disk access. So normally SQL Server expands to
> get all available memory. But if there are other processes in need of
> memory, SQL Server will yield. It may not yield fast enough, though,
> and you can configure SQL Server to use only part of the memory.
> So the perceived memory leak is not a problem, but since you talk about
> standstill, it seems that you have a problem. And since you talk about
> pegging the processors on the server, it seems that you have a query in
> need of a rewrite somewhere. Or in need of a better index. So while that
> code base may be big and ugly, and you prefer not to look at it, it is
> most likely there you find the solution.
> The Profiler is a good tool. Filter for Duration greater than, say,
> 1000 ms. Then again, if you start tracing when that bad query starts
> running, you will not see the query until it is completed. One
> alternative is aba_lockinfo, which is on my home page,
> http://www.sommarskog.se/sqlutil/aba_lockinfo.html. That procedure
> is really intended for lock monitoring, but you get all active processes
> and what they are doing. And since "standstill" often includes blocking
> as well, it may be interesting. aba_lockinfo gives you a snapshot, but
> can still reveal something about what is going on. One word of caution
> though: aba_lockinfo can take some time to return on a busy system. I
> have identified a few weaknesses in terms of performance, but I have
> not gotten around to fixing them yet.
>
> --
> Erland Sommarskog, SQL Server MVP, sommar@.algonet.se
> Books Online for SQL Server SP3 at
> http://www.microsoft.com/sql/techin.../2000/books.asp

I agree with everything above... in the interim you may want to limit the
amount of memory that MS SQL can have. This helps especially if you are
running other programs on the machine that compete for memory. The downside
is that MSSQL has less to work with and will possibly take longer. The upside is
that MSSQL will not gobble up all the memory, bringing everything to a halt.

-p|||Absolutely. You should limit the database memory even if you have no other
applications running on the machine. Strangely enough, SQL Server can choke
the operating system by leaving too little memory for the OS to function at
optimum.
Hope this helps,
Chuck Conover
www.TechnicalVideos.net
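For reference, the cap Pippen and Chuck describe is set with sp_configure; a sketch (4096 MB is an example value, not a recommendation, and Erland argues below that no cap is needed on a dedicated box):

```sql
-- Leave headroom for the OS and other processes by capping the amount
-- of memory SQL Server's buffer pool will take.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;
```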

>"Pippen" <123@.hotmail.com> wrote in message
news:bw0Rb.152962$na.259030@.attbi_s04...
> I agree with everything below... in the interim you may want to limit the
> amount of memory that MS SQL can have. This helps especially if you are
> running other programs on the machine that compete for memory. The
downside
> is that MSSQL has less to work with and will possibly take longer. Upside
is
> that MSSQL will not gobble up all the memory bringing everything to a
halt.
> -p
>
> "Erland Sommarskog" <sommar@.algonet.se> wrote in message
> news:Xns947AB6167287AYazorman@.127.0.0.1...
> > Ryan (ryan3677@.excite.com) writes:
> <snip>
> > SQL Server likes to gobble up memory. In fact this is by design. The
> > more data SQL Server can hold in cache, the more queries it can
> > respond to without disk access. So normally SQL Server expands to
> > get all avilable memory. But if there are other processes in need of
> > memory, SQL Server will yield. It may not yield fast enough, though,
> > and you can configure SQL Server to use only part of the memory.
> > So the perceived memory leak is not a problem, but since you talk about
> > standstill, it seems that you have a problem. And since you talk about
> > pegging the processors on the server, it seems that you have a query in
> > need of rewrite somewhere. Or a in need of a better index. So while that
> > code base may be big and ugly, and you prefer not to look at it, it is
> > most likely there you find the solution.
> > The Profiler is a good tool. Filter for Duration greeater than, say,
> > 1000 ms. Then again, if you start tracing when that bad query starts
> > running, you will not see the query until it is completed. One
> > alternative is aba_lockinfo, which is on my home page,
> > http://www.sommarskog.se/sqlutil/aba_lockinfo.html. That procedure
> > is really intended for lock monitoring, but you get all active processes
> > and what they are doing. And since "standstill" often includes blocking
> > as well, it may be interesting. aba_lockinfo gives you a snapshot, but
> > can still reveal something about what is going on. One word of caution
> > though: aba_lockinfo can take some time to return on a busy system. I
> > have identiefied a few weaknesses in terms of performance, but I have
> > not came around to fix them yet.
> > --
> > Erland Sommarskog, SQL Server MVP, sommar@.algonet.se
> > Books Online for SQL Server SP3 at
> > http://www.microsoft.com/sql/techin.../2000/books.asp|||Chuck Conover (cconover@.commspeed.net) writes:
> Absolutely. You should limit the database memory even if you have no
> other applications running on the machine. Strangely enough, SQL Server
> can choke the operating system by leaving too little memory for the OS
> to function at optimum.

No, for a machine that only runs SQL Server, there is no reason to configure
the memory. The most likely result is that when you buy more memory, you
cannot understand why it does not pay off, because you had forgotten that
you had constrained the memory.

--
Erland Sommarskog, SQL Server MVP, sommar@.algonet.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||"Chuck Conover" <cconover@.commspeed.net> wrote in message
news:1075132853.366896@.news.commspeed.net...
> Absolutely. You should limit the database memory even if you have no
other
> applications running on the machine. Strangely enough, SQL Server can
choke
> the operating system by leaving too little memory for the OS to function
at
> optimum.

I'd have to disagree. I've never seen this be an issue.|||Greg,
No problem to disagree. It is possible that I came to the wrong
conclusion. However, I did see a situation just recently whereby there were
several views (written very badly) that required the database to bring back
several million rows. The I/O was astronomical. It appeared that even
after the view had come back, the whole machine was incredibly slow, and our
diagnostics showed that the database had eaten almost all 2GB of memory on
the machine. So our assumption was that the OS did not have enough memory
to function optimally. Rebooting the machine was the short-term fix. Since it was
a production server, we made 3 fixes simultaneously to get the machine
working properly as quickly as possible. Correcting the views, adding
another 2GB of memory, and limiting the DB memory fixed the problem, but we
aren't sure which one of our fixes corrected the problem.

Thanks for the input. It is possible that the views did not ever finish
completely considering the I/O required. That could have been the reason
for the slowdown of the machine.

Best regards,
Chuck Conover
www.TechnicalVideos.net

"Greg D. Moore (Strider)" <mooregr_deleteth1s@.greenms.com> wrote in message
news:SgmRb.18$pE.4@.twister.nyroc.rr.com...
> "Chuck Conover" <cconover@.commspeed.net> wrote in message
> news:1075132853.366896@.news.commspeed.net...
> > Absolutely. You should limit the database memory even if you have no
> other
> > applications running on the machine. Strangely enough, SQL Server can
> choke
> > the operating system by leaving too little memory for the OS to function
> at
> > optimum.
> I'd have to disagree. I've never seen this be an issue.

Memory leak or performance issue?

Dear all
Some of my users are complaining that when they restart their SQL Server
it works well for some time, and then as time passes performance degrades.
They restart again and it works fine, and as time goes on it becomes slow.
Why does it work fine when I restart, and then become slow as time goes on?
Is there a memory leak or some other problem causing this?
Regards
Amish|||Hi,
What is the database server configuration, hardware and software?
- Windows version?
- SQL Server version and SP?
- How much memory does it have?
- How many processors does it have?
*** Was this message helpful to you? Then mark it as such. ***
Regards,
Rodrigo Fernandes
"amish" wrote:
> Dear all
> Some of my users are complaining that when they restart their SQL Server
> it works well for some time, and then as time passes performance degrades.
> They restart again and it works fine, and as time goes on it becomes slow.
> Why does it work fine when I restart, and then become slow as time goes on?
> Is there a memory leak or some other problem causing this?
>
> Regards
> Amish
>
