
Wednesday, March 28, 2012

Merge Dynamic filter problem

Hi!

I'm a merge newbie and have a couple of questions. I'm about to set up merge replication with SQL Server 2005 and SQL Server CE as a subscriber. The situation is this: we have 10 service technicians using PDAs.

I want each PDA user to have their own data. From what I understand, I need to use a dynamic filter and SUSER_NAME()? Do I need to create a "translation" table to map my system's UserId against SUSER_NAME? How have you solved this problem?

/Magnus

Hello Magnus,

One easy approach is to have a column in the table for filtering purposes.

Please take a look at sp_addmergearticle (Transact-SQL): http://msdn2.microsoft.com/en-us/library/ms174329.aspx.


[ @subset_filterclause = ] 'subset_filterclause'
Is a WHERE clause specifying the horizontal filtering of a table article, without the word WHERE included. subset_filterclause is nvarchar(1000), with a default of an empty string.

Important:
For performance reasons, we recommended that you not apply functions to column names in parameterized row filter clauses, such as LEFT([MyColumn]) = SUSER_SNAME(). If you use HOST_NAME in a filter clause and override the HOST_NAME value, you might have to convert data types by using CONVERT. For more information about best practices for this case, see the section "Overriding the HOST_NAME() Value" in Parameterized Row Filters.

Parameterized Row Filters - http://msdn2.microsoft.com/en-us/library/ms152478.aspx.
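To answer the "translation" table question above: one common pattern is a small mapping table that links the replication login name to the application's UserId, and a filter clause that joins through it. A minimal sketch (table and column names here are hypothetical, not from the original post):

```sql
-- Hypothetical mapping table: one row per technician, linking the
-- SQL Server login (as returned by SUSER_SNAME()) to the app's UserId.
CREATE TABLE dbo.UserMapping (
    LoginName  nvarchar(128) NOT NULL PRIMARY KEY,
    UserId     int           NOT NULL
);

-- The article's filter then selects only the connecting subscriber's
-- rows. Per the docs quoted above, the WHERE keyword itself is omitted:
--   UserId IN (SELECT UserId FROM dbo.UserMapping
--              WHERE LoginName = SUSER_SNAME())
```

Note the docs use SUSER_SNAME() rather than SUSER_NAME(); keeping the function out of the filtered column itself (it appears only on the subquery side) also follows the performance advice quoted above.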

and

[ @partition_options = ] partition_options
Defines the way in which data in the article is partitioned, which enables performance optimizations when all rows belong in only one partition or in only one subscription. partition_options is tinyint, and can be one of the following values.

Value        Description
0 (default)  The filtering for the article either is static or does not yield a unique subset of data for each partition, that is, an "overlapping" partition.
1            The partitions are overlapping, and data manipulation language (DML) updates made at the Subscriber cannot change the partition to which a row belongs.
2            The filtering for the article yields non-overlapping partitions, but multiple Subscribers can receive the same partition.
3            The filtering for the article yields non-overlapping partitions that are unique for each subscription.
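Putting the two parameters together, a call might look like the following sketch. The publication, article, and table names are illustrative only, and dbo.UserMapping stands for a hypothetical login-to-UserId mapping table:

```sql
-- Illustrative only: add a merge article filtered per subscriber.
-- @partition_options = 3 declares that each subscription receives its
-- own non-overlapping partition (one PDA user = one partition).
EXEC sp_addmergearticle
    @publication         = N'TechnicianPub',
    @article             = N'WorkOrders',
    @source_object       = N'WorkOrders',
    @subset_filterclause = N'UserId IN (SELECT UserId FROM dbo.UserMapping
                                        WHERE LoginName = SUSER_SNAME())',
    @partition_options   = 3;
```

Option 3 is the one that matches the scenario in the question (each technician sees only their own rows), and it is also the setting that lets the merge agent use precomputed partitions for better performance.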

Monday, March 19, 2012

Memory usage in RS(ASP.NET worker process)

Hi all,

We have been experiencing a problem with a couple of our reports that a
number of people seem to have hit previously. Basically, the issue is that if
the report is over a certain size, the ASP.NET worker process hits its
memory threshold (60%) and the process gets shut down. We have followed the
advice listed in some of the posts about increasing this threshold, which
manages to get the report through, but we still have a few concerns.

1. Why won't the process use virtual memory? It seems to be limited to
the physical memory available, and when we increased the threshold, if that
physical memory runs out we get an out-of-memory exception.

2. The memory doesn't appear to be being released. I would have
thought the garbage collector would kick in pretty soon after the
report was finished, but watching the process, the memory stays in use
for a long time after the report has finished rendering, with
no new activity on the server.

3. How does this scale at all? I have seen the argument that reports of
this size are unfeasible, and agree to an extent... unfortunately our
clients don't, and they need a system capable of delivering them all the
data, regardless of the size of the report. I also read a suggestion to
use DTS to deliver a CSV file to the client, but this sounds like a one-off
workaround, more than an ongoing process that, say, an end user could
initiate once a month (for a thousand or so different companies).


This leads me to my final concern: we have observed that the memory
will pile up, i.e. if one user kicks off a report it uses x amount;
if another user kicks off another report, that uses an additional
amount of memory. So even if each report isn't too big, it would only
take ten users running medium-size reports to run the server out of
memory. Does anyone have any suggestions as to how we should cater
for this?

Thanks in advance
Greg

We are still having issues in this regard, does anyone have any insights?
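For reference, the 60% threshold mentioned above is the ASP.NET 1.x worker-process recycling limit, configured in machine.config. A sketch of raising it (the exact value to use is a judgment call for your hardware, and this only delays recycling rather than fixing the underlying memory growth):

```xml
<!-- machine.config: memoryLimit is the percentage of physical memory
     the aspnet_wp worker process may consume before it is recycled.
     The default is 60; raising it trades headroom for recycle safety. -->
<system.web>
    <processModel enable="true" memoryLimit="80" />
</system.web>
```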

Friday, March 9, 2012

Memory Upgrade and SQL 2000

We have a SQL server with a failover cluster on our network. We are looking to upgrade the memory by adding a couple of gigs of RAM.
We don't need to upgrade the failover cluster to have the exact same amount of memory as the primary, do we?
DotNetJunkie

How much RAM do you have currently? Do you want to or need to change the memory configuration after the upgrade? For example, turning AWE on.

Joe,
Yes, we will need to turn on AWE since we are currently only at 1.5 gigs.

If we break the 2-gig barrier, would we then need to have > 2 gigs on the failover server?

DotNetJunkie

Yes, it would have to be the same for the failover. The configuration has to be the same. I don't think you would have problems installing it if it's not, but when the primary fails, the failover has to take over whatever the primary was doing. That's when you have problems if you don't have the same memory settings.
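As a sketch of what turning AWE on involves on each node: besides the T-SQL below, AWE on SQL Server 2000 also requires the Windows /PAE boot switch and the "Lock Pages in Memory" right for the service account, and it is usually only relevant above 4 GB of RAM (between 2 GB and 4 GB the /3GB boot switch is the more common route). Treat the max server memory value as an example, not a recommendation:

```sql
-- Run on each cluster node; 'awe enabled' is an advanced option.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;
-- With AWE on, SQL Server 2000 no longer manages memory dynamically,
-- so cap it explicitly (value in MB; example only):
EXEC sp_configure 'max server memory', 3072;
RECONFIGURE;
-- A restart of the SQL Server service is required for AWE to take effect.
```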

Wednesday, March 7, 2012

Memory problems

Hi
I've been having a problem with SQL Server for the last couple of
weeks. I've been unable to resolve it. Or, indeed, work out exactly
what the problem is.
We're running MS SQL Server 2000, Service Pack 3; it's running on
Windows Server 2003, on a Dell PowerEdge 2500 with 2 gigs of RAM.
There are two problems that we're getting; I'm assuming they're
related, but I've not got enough SQL Server experience to be sure.
The first is that, from time to time, we'll stop being able to access
the server - in Enterprise Manager, when we open the server, we get an
SSL error (something about SECDoClientHandShake()).
The second is a CryptoAPI error - I've not been able to get the
details for this, because it's only happening when our developer
accesses the machine remotely; we can still use the server locally
with no problems.
The information I've found online says that there was a known problem
with SQL Server 2000 and Certificate Server, which caused a similar
problem, but that that problem was resolved in Service Pack 1. I've
tried what I can - we've doubled the memory from 1 gig to 2, and I've
tweaked the options as much as I dare - but nothing seems to have
helped.
If anyone has any suggestions as to what might be causing this, I'd be
very grateful to hear them.
Finally, we're also, from time to time, getting errors in SQL Server's
log along the lines of:
2003-11-13 09:15:24.48 spid53 WARNING: Failed to reserve contiguous
memory of Size= 65536.
2003-11-13 09:15:24.48 spid53 Query Memory Manager: Grants=0
Waiting=0 Maximum=103895 Available=103895
2003-11-13 09:15:24.48 spid53 Procedure Cache: TotalProcs=217
TotalPages=778 InUsePages=449
2003-11-13 09:15:24.48 spid53 Global Memory Objects: Resource=925
Locks=18 ...
2003-11-13 09:15:24.48 spid53 Dynamic Memory Manager: Stolen=1156 OS
Reserved=992 ...
2003-11-13 09:15:24.48 spid53 Buffer Distribution: Stolen=378
Free=10 Procedures=778...
2003-11-13 09:15:24.48 spid53 Buffer Counts: Commited=1810
Target=139175 Hashed=644...
though it doesn't seem to be happening at the same time as we're
getting the problems with accessing the database.
If there's any information I've missed, please let me know. I really
would appreciate any help on this.
Andrew

Andrew,
See Microsoft KB article 818095.
PH
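As a side note on the log entries quoted above: "Failed to reserve contiguous memory of Size= 65536" on SQL Server 2000 usually points to pressure in the MemToLeave region (the virtual address space reserved outside the buffer pool), which is a separate issue from the SSL/CryptoAPI errors that KB 818095 covers. A first diagnostic step when the warning appears:

```sql
-- Dump SQL Server's internal memory state for comparison with the
-- Query Memory Manager / Procedure Cache figures in the error log.
DBCC MEMORYSTATUS;
```

If MemToLeave exhaustion is confirmed, the commonly cited workaround is the -g startup parameter (default reservation is 256 MB; e.g. -g384 enlarges it), added to the SQL Server service's startup parameters. Offered as a suggestion to test, not a confirmed fix.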