Showing posts with label reporting. Show all posts

Wednesday, March 28, 2012

Merge Data

Hi,

I have an analysis server which gets its data from an AIX system. We use a reporting server to get some reports. Unfortunately the data on the AIX is incorrect, so I created a table called "corrections". The corrections should be merged with the data.

Solution 1 : modify the data source view in SSAS and integrate the corrections

Solution 2: make a stored procedure that gets the data from SSAS and merges it with the corrections. Call the SP from the report.

Which one is better?

And how do you implement solution 1?

And where can I find books about creating MDX queries?

With regards,

Constantijn Enders

Hello.

Solution 1 will not be good enough. I recommend you build a data warehouse and make your corrections in the ETL process, that is, when you load data from your source system into the data warehouse. For data warehouse pros and cons I recommend you have a look here (www.kimballgroup.com).
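Applying a corrections table during the ETL load usually comes down to a LEFT JOIN plus COALESCE, preferring the corrected value when one exists. A minimal sketch of that pattern (sqlite3 stands in for the real source and warehouse; the schema and names are invented, not the poster's actual tables):

```python
import sqlite3

# Hypothetical schema: base facts from the source system, plus a
# "corrections" table keyed on the same id that overrides the amount.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE source_facts (id INTEGER PRIMARY KEY, amount REAL);
    CREATE TABLE corrections  (id INTEGER PRIMARY KEY, amount REAL);
    INSERT INTO source_facts VALUES (1, 100.0), (2, 250.0), (3, 40.0);
    INSERT INTO corrections  VALUES (2, 200.0);  -- row 2 is wrong at the source
""")

# During the load, take the corrected value when a correction row exists,
# otherwise keep the source value.
rows = con.execute("""
    SELECT f.id, COALESCE(c.amount, f.amount) AS amount
    FROM source_facts AS f
    LEFT JOIN corrections AS c ON c.id = f.id
    ORDER BY f.id
""").fetchall()
print(rows)
```

Row 2 comes out with the corrected amount while the untouched rows pass through unchanged; the same join works whether the correction step lives in an ETL package or a stored procedure.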

For MDX books I recommend starting with "Analysis Services 2005, Step by Step", which is a good first book for learning Analysis Services 2005 and some MDX.

A respected MDX expert who writes articles in this area can be found here: http://www.databasejournal.com/features/article.php/3593466

Look for William E. Pearson.

HTH

Thomas Ivarsson

|||

Thank you for the excellent information. It will surely help me

Constantijn

Monday, March 26, 2012

Merge Agent is reporting deadlocks

Can anyone tell me what could be causing the following deadlock issues? I
have done a SQL Profiler trace which isn't picking up the Deadlock. The
subscription type is pull and there are more than 12 merge agents on the
Publisher
The process could not enumerate changes at the 'Publisher'.
Percent Complete: 0
The process could not enumerate changes at the 'Publisher'.
Repl Agent Status: 5
Percent Complete: 0
Category:NULL
Source: Merge Replication Provider
Number: -2147200999
Message: The process could not enumerate changes at the 'Publisher'.
Repl Agent Status: 3
Percent Complete: 0
Category:SQLSERVER
Source: SQLBDG01PA
Number: 1205
Message: Transaction (Process ID 94) was deadlocked on lock resources with
another process and has been chosen as the deadlock victim. Rerun the
transaction.
Repl Agent Status: 3
Disconnecting from Publisher 'SQLBDG01PA'
Disconnecting from Subscriber 'EBIVCSA'
Disconnecting from Publisher 'SQLBDG01PA'
Disconnecting from Publisher 'SQLBDG01PA'
Disconnecting from Publisher 'SQLBDG01PA'
Disconnecting from Publisher 'SQLBDG01PA'
Disconnecting from Subscriber 'EBIVCSA'
Disconnecting from Subscriber 'EBIVCSA'
Disconnecting from Subscriber 'EBIVCSA'
Disconnecting from Subscriber 'EBIVCSA'
Disconnecting from Publisher 'SQLBDG01PA'
Disconnecting from Distributor 'SQLBDG01PA'
Hello,
You may want to enable trace flags 1204, 3605 and 1205 to see more information
about this deadlock:
The following article is for your reference:
832524 SQL Server technical bulletin - How to resolve a deadlock
http://support.microsoft.com/?id=832524
Best Regards,
Peter Yang
MCSE2000/2003, MCSA, MCDBA
Microsoft Online Partner Support
When responding to posts, please "Reply to Group" via your newsreader so
that others may learn and benefit from your issue.
=====================================================
This posting is provided "AS IS" with no warranties, and confers no rights.
|||Thank you for that information Peter,
It helped me isolate where the deadlock is. Below is the output from the log
which shows that the deadlock is occurring in some SQL stored procedures. An
update is being done on one of our tables and this is where it occurs. This
is a major problem for us as the site we are supporting cannot synchronise
their servers at all. There are 12 servers to synchronise but only 8 can be
done and then they all start to fail one at a time. If anyone has any other
information about how to get around this problem it would be greatly
appreciated.
Node:1
KEY: 9:1977058079:7 (ee0128400385) CleanCnt:1 Mode: U Flags: 0x0
Grant List 0::
Owner:0x42bdf5c0 Mode: S Flg:0x0 Ref:0 Life:00000001 SPID:147 ECID:0
SPID: 147 ECID: 0 Statement Type: SELECT Line #: 23
Input Buf: RPC Event: sp_MSenumchanges;1
Requested By:
ResType:LockOwner Stype:'OR' Mode: X SPID:101 ECID:0 Ec0x5D4DB530)
Value:0x51ad0560 Cost0/1E8)
Node:2
KEY: 9:1977058079:1 (95002952f11f) CleanCnt:1 Mode: X Flags: 0x0
Grant List 0::
Owner:0x5a33bdc0 Mode: X Flg:0x0 Ref:2 Life:02000000 SPID:101 ECID:0
SPID: 101 ECID: 0 Statement Type: UPDATE Line #: 935
Input Buf: RPC Event: sp_upd_BE8D5D46BE0642C56A093B6C6808492A;1
Requested By:
ResType:LockOwner Stype:'OR' Mode: S SPID:147 ECID:0 Ec0x5E7E1530)
Value:0x42bdc420 Cost0/0)
Victim Resource Owner:
ResType:LockOwner Stype:'OR' Mode: S SPID:147 ECID:0 Ec0x5E7E1530)
Value:0x42bdc420 Cost0/0)
End deadlock search 485 ... a deadlock was found.
|||Hello,
Based on my research, the new SQL Server build 818 changed how the Merge
Agent locks records on the Publisher while it is synchronizing changes to
the subscriber. This new design change might cause the Merge Agent to
Deadlock with the Update from the application. We discovered we can "tune"
the Merge agent to lock a smaller number of records and hopefully avoid the
deadlocking.
The SQL Server help topics below describe how to create a new Merge Agent
profile. In the new profile, change the DownloadReadChangesPerBatch setting
from the default of 100 to 25. This will lock a smaller number of records per
batch while synchronizing. We believe 25 is a good compromise but it may need
to be adjusted.
See SQL Server Help Topics:
- Merge Agent Profile
- How to create a replication agent profile (Enterprise Manager)
Another workaround is to run the Merge Agent every minute instead of
continuously, so that when it fails with a deadlock the agent will
automatically restart.
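The restart-every-minute workaround is essentially a scheduled retry loop around a job that may be chosen as a deadlock victim (error 1205, as in the log above). A generic sketch of that pattern (the exception class and helper names are hypothetical, not the Merge Agent's actual implementation):

```python
import time

class DeadlockError(Exception):
    """Stand-in for SQL Server error 1205 (chosen as deadlock victim)."""

def run_with_retry(work, attempts=5, delay=0.0):
    """Re-run `work` until it succeeds or `attempts` are exhausted,
    mirroring an agent scheduled to restart after a deadlock failure."""
    for attempt in range(1, attempts + 1):
        try:
            return work()
        except DeadlockError:
            if attempt == attempts:
                raise            # give up after the last scheduled run
            time.sleep(delay)    # wait until the next scheduled run

# Demo: a job that is picked as deadlock victim twice, then succeeds.
failures = [DeadlockError, DeadlockError, None]
def job():
    err = failures.pop(0)
    if err:
        raise err()
    return "synchronized"

print(run_with_retry(job))
```

The point of the scheduled-restart approach is exactly this: a deadlock victim error is transient, so simply rerunning the transaction (as the 1205 message itself suggests) usually succeeds.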
Please rest assured this issue has been routed to the proper channel. If
there is any update on this, we will let you know. However, it may take
some time and we appreciate your patience.
Best Regards,
Peter Yang
MCSE2000/2003, MCSA, MCDBA
Microsoft Online Partner Support

Merge Agent Hangs (reporting no errors)

Hi,
In my merge replication scenario I use push subscriptions. The
distributor is SQL Server 2000 SP3a, and the subscribers are MSDE SP3a.
Everything works fine but recently merge agents started to "hang"
reporting no errors at random phases of replication session. They stop
sending or receiving any data between distributor and subscriber
(there is no network traffic). SQL Profiles show no activity on a
subscriber server. Last message generated by agent is "The process is
running and is waiting for a response from one of the backend
connections".
If there were a problem with communication between the servers, one would
expect an error code to be returned and the operation retried. In my case the
merge agent "hangs" and waits for nothing. After 1 or 2 hours it
reports a communication link failure (it waits for a random period of
time).
To continue a Merge process I have to Stop and Start the merge agent
manually.
Since my replication sends a lot of data from subscribers
(MSMerge_genhistory has approx 50000 entries when the retention period is
set to 2 days), after uploading data changes to the publisher (the
procedure sp_MScheckexistsgeneration is executed for every one of the
50000 generations - am I right about this?) it takes a long time (40
minutes) to complete. If the merge agent hangs, the process must be run
from the beginning. In this situation it is almost impossible to
complete the replication session.
What can be the cause of such behavior of merge agents?
Last lines of output (level 3) generated by agent are as follows:
{call sp_MScheckexistsgeneration (?, ?) }
{call sp_MScheckexistsgeneration (?, ?) }
Percent Complete: 0
The process is running and is waiting for a response from one of the
backend connections.
Repl Agent Status: 3
Tomek
Tomek,
it's difficult to say, but in some sense this looks like a connectivity
issue. However, when I've seen such a situation before, the error was
'General network error'. You might like to verify whether this is the
case by using a Network Monitor tool:
http://support.microsoft.com/default...48942&sd=tech.
Alternatively you could open a window in QA on the publisher, with the
window connected to the subscriber SQL Server. In the window do a simple
select command in a loop and return the date. This should tell you if/when
the connectivity had problems.
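That select-in-a-loop check can be sketched generically: run a probe repeatedly and timestamp each attempt, so any gap or failure shows roughly when connectivity dropped. In the sketch below, `probe` is a placeholder for whatever query you run against the subscriber (e.g. selecting the date, as suggested above):

```python
import time
from datetime import datetime

def watch_connection(probe, rounds=5, interval=0.0):
    """Run `probe` repeatedly, timestamping each attempt, so gaps or
    failures show roughly when connectivity was lost."""
    log = []
    for _ in range(rounds):
        stamp = datetime.now().isoformat(timespec="seconds")
        try:
            probe()                      # e.g. a SELECT against the subscriber
            log.append((stamp, "ok"))
        except Exception as exc:
            log.append((stamp, f"failed: {exc}"))
        time.sleep(interval)
    return log

# Demo with a probe that fails on its third call.
calls = {"n": 0}
def flaky_probe():
    calls["n"] += 1
    if calls["n"] == 3:
        raise ConnectionError("link dropped")

for stamp, status in watch_connection(flaky_probe):
    print(stamp, status)
```

Comparing the failure timestamps in this log against the merge agent's output is what tells you whether the agent hangs at the same moment the link drops, or while the link is still healthy.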
Can you check in current activity or sp_who2/sp_lock/sp_blocker to identify
if there is any blocking involved. If the problem is not connectivity then
this would be my next test.
I'd also like to know what happens when you restart the agent - does it work
quickly or still take a long time but is successful.
No doubt you have considered this, but if possible can you synchronize more
often to shorten the batchsize?
You might also optimize performance by running
sp_mergemetadataretentioncleanup manually and tuning
the -DownloadGenerationsPerBatch parameter.
HTH,
Paul Ibison
|||Paul,
Thank you for your response. I hope my answers to your questions will
help us to solve the problem.
"Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message news:<#EzzqyDQEHA.1312@.TK2MSFTNGP12.phx.gbl>...
> it's difficult to say, but in some sense this looks like a connectivity
> issue. However, when I've seen such a situation before, the error was
> 'General network error'. You might like to verify is this is/is not the
> case, by using a Network Monitor tool:
> http://support.microsoft.com/default...48942&sd=tech.
It can be related to connectivity because the merge agents connect to
subscribers that do not always have a reliable network connection. But the
problem is that instead of reporting a general network error and
retrying the operation, the agent waits (hangs) doing nothing.

> Alternatively you could open a window in QA on the publisher, with the
> window connected to the subscriber SQL Server. In the window do a simple
> select command in a loop and return the date. This should tell you if/when
> the connectivity had problems.
The QA loop worked fine (no connectivity problems), but at the same
time the agent assigned to that subscriber hung.

> Can you check in current activity or sp_who2/sp_lock/sp_blocker to identify
> if there is any blocking involved. If the problem is not connectivity then
> this would be my next test.
I've checked this one too. Blocking is not the reason.

> I'd also like to know what happens when you restart the agent - does it work
> quickly or still take a long time but is successful.
I will describe how my agent works. Please tell me whether this is correct
behavior.
1) After connecting to the subscriber, the agent uploads all the data changes
made by the subscriber to the publisher. This works very fast and I do not
see any problem here (the agent has never stopped at this phase).
2) Merge agent executes sp_MScheckexistsgeneration for each of 50000
generations stored in msmerge_genhistory at subscriber side. If there
is some data to download, it is downloaded to subscriber, and then
merge agent checks next generation.
Now let's assume that the merge agent "hanged" at generation 39000. After
restarting the agent it starts the process from the beginning, checking
each of the 39000 generations that were checked and merged in the previous
session.
The merge agent is successful only if it is able to go through all 50000
generations without losing the connection to the subscriber. That is why it is
so difficult to complete a merge session.
IMHO the merge agent should start from generation 39000, making it
possible to complete the session even on unreliable connections. Is this
"by design" behaviour?

> No doubt you have considered this, but if possible can you synchronize more
> often to shorten the batchsize?
It is not possible. Subscribers cannot synchronize on Saturday and
Sunday, so a 2-day retention period is the minimum.

> You might also optimize performance by running
> sp_mergemetadataretentioncleanup manually and optimize
> he -DownloadGenerationsPerBatch parameter.
I think the problem is not metadata at the publisher but metadata at the
subscribers. It is not deleted after a successful replication session (I
think it must work this way).
DownloadGenerationsPerBatch does not help because most of the time the
merge agent executes sp_MScheckexistsgeneration without downloading
any data changes. Or maybe I am misunderstanding this parameter?
Tomek
|||Tomek,
Slow and unreliable connections can result in more retries by the Merge
Agent, none of which you've seen. Your QA test also indicates the same
thing, so we can cross this off the list.
Answers inline...

> IMHO the merge agent should start from generation 39000 making
> completing session possible even on not reliable connections. Is it
> "by design" behaviour?
>
I guess it is not processed this way to try to avoid potential
non-convergence as inserts and child records could be in separate batches
(http://support.microsoft.com/default...b;enus;Q308266)

>
Having -MetadataRetentionCleanup 1 should sort this out, but you could run
sp_mergemetadataretentioncleanup manually on the publisher and subscriber to
see if things are improved. It won't remove the 50000 entries, but what I am
hoping is that the search for new generations will be noticeably quicker
after the old ones are removed.

> DownloadGenerationsPerBatch does not help because most of the time
> merge agent executes sp_MScheckexistsgeneration without downloading
> any data changes. Or maybe I am misunderstanding this parameter?.
Increasing the -DownloadGenerationsPerBatch Merge Agent parameter (even to
its maximum value of 2000) is a recommendation where a Subscriber has not
merged data with the Publisher for an extended period of time, during which the
Publisher or other Subscribers have made numerous changes
(http://www.microsoft.com/technet/pro...n/mergperf.mspx).
So, I'd definitely test altering this parameter's value.
(Powodzenia - good luck!)
Paul

Friday, March 23, 2012

Merge a PDF File and SQL Reporting Services Output

Does anyone know how I can go about merging preexisting PDF files and SQL Server Reporting Services output? Can this be done in Reporting Services? For example, I have 5 pages from a PDF file created by another 3rd-party software provider, and I have output from SQL Reporting Services. How can I merge these two outputs and deliver the result over the .NET/ASP framework?

thanks

You could print the PDF to an image writer and use the images as background images in your Reporting Services report.

Or, if the Reporting Services output is mainly data, you could look at using the output data as input into your third-party application (or vice versa).

hth

-- Leah

|||

I've tried what Leah suggested and the creation of the gif file was identical to the pdf (same width and height). However, when I used it as a background in the VS2005 report designer, the image became larger. How do you maintain the same dimensions of the original image when it is used as a background image?

Thanks

|||I came across a post from a different forum that was the closest to solving this issue. The solution was to scan a document at 96dpi prior to using it as the background. It wasn't perfect but it got me into the ball park where I could adjust positioning of the controls on a case by case basis.
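The 96dpi trick makes sense once you see the arithmetic: a renderer that assumes 96 DPI (an assumption based on the behavior described above) divides pixel dimensions by 96 to get physical size, so a scan made at a higher DPI comes out oversized. A quick sketch:

```python
def rendered_size_inches(width_px, height_px, dpi):
    """Physical size a renderer assigns to an image treated as `dpi`."""
    return width_px / dpi, height_px / dpi

# A US Letter page scanned at 300 DPI is 2550x3300 px; rendered as if it
# were 96 DPI it comes out roughly 3x too large.
oversized = rendered_size_inches(2550, 3300, 96)  # ~26.6 x 34.4 inches
# The same page scanned at 96 DPI is 816x1056 px and renders true to size.
true_size = rendered_size_inches(816, 1056, 96)   # 8.5 x 11 inches
print(true_size)
```

In other words, matching the scan DPI to whatever DPI the report designer assumes keeps the background image at its original dimensions.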


merge 2 tables

Hi,
I need to merge 2 tables for the purpose of reporting on them.
1 table has actuals data that includes date, account type, branch location &
totals.
The other has budget figures which also includes business type, sub
catagory, date & budget amount.
Can someone explain how I could merge the 2 together?
I have attempted a UNION but get an error message that the operator must have
an equal number of expressions.
here is the 2 queries
SELECT CONVERT(char(10), date, 103) AS Date, entity, entity_type,
sub_catagory, amount, comments, branch
FROM csd_budgets.dbo.tbl_budgets
WHERE (branch = 53) AND (sub_catagory = 2) AND (entity_type = 7) AND
(entity = 2) AND (date BETWEEN CONVERT(DATETIME, '2005-11-13 00:00:00', 102)
AND
GETDATE())
ORDER BY date
Thank you in advance
Todd
SELECT CONVERT(char(10), date, 103) AS Date, Account_Code, Branch_Code,
Total_Balance, No_of_Accounts
FROM tbl_account_balances
WHERE (Branch_Code <= 56) AND (Branch_Code = 53) AND (NOT (Account_Code
IN (0001, 0004, 0006, 0085, 0808))) AND (CONVERT(char(10), date, 103)
= CONVERT(char(10), GETDATE(), 103))
and 2nd query is
|||Not really clear on what you're trying to do here - if you use UNION
you basically stack a set of rows on top of another - column number
therefore must be the same...thus your error.
It sounds like you want to join the tables.
http://www.w3schools.com/sql/sql_join.asp
|||In order to UNION 2 or more queries, the fields must match. UNION ALL will
give you the result you're looking for...I think...or just JOIN the tables on
a foreign key.
Just my twist on it,
Adam Turner
|||Tango-
I didn't get what your exact requirement is, but while using UNION
you are getting the error because the columns you are selecting while merging
are not equal.
To get around this:
- Maintain the same columns in both SELECT statements; if you don't want to get
a column from one of the tables, place NULL in that column's place.
- Column datatypes should be the same in both tables while using UNION/UNION ALL.
Hoping this will help you
Kumar
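The NULL-placeholder advice can be seen in a small, runnable form: both SELECT lists get the same number of columns, with NULL padding the table that lacks a column and a literal tagging each row's origin. A sketch using sqlite3 (table and column names are invented for illustration, not Todd's actual schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE budgets (date TEXT, branch INTEGER, amount REAL, comments TEXT);
    CREATE TABLE actuals (date TEXT, branch INTEGER, total REAL);
    INSERT INTO budgets VALUES ('2005-11-13', 53, 500.0, 'Q4 budget');
    INSERT INTO actuals VALUES ('2005-11-13', 53, 480.0);
""")

# Both SELECT lists must have the same number of columns: pad the table
# that lacks a column with NULL, and tag each row with its origin.
rows = con.execute("""
    SELECT date, branch, amount, comments, 'budget' AS src FROM budgets
    UNION ALL
    SELECT date, branch, total,  NULL,     'actual' AS src FROM actuals
    ORDER BY src
""").fetchall()
print(rows)
```

The `src` tag is what lets a report tell budget rows from actual rows after they are stacked; if the goal is instead to line budget and actual figures up side by side on one row, a JOIN on the shared keys (date, branch) is the better tool, as the other replies note.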

Wednesday, March 21, 2012

MemToLeave and ODBC

We are running SQL Server 2005, SP2, Standard Edition, 64bit.
We are considering an ODBC solution for client reporting, and have been
reading that the ODBC driver uses MemToLeave memory, up to 7MB per connection.
I was wondering if the queries sent via this ODBC connection, also utilize
MemToLeave, or if those queries use the Buffer Pool?
And then a separate question, not involving ODBC...does an adhoc query
executed from Management Studio utilize Buffer Pool memory, if it needs less
than 8K contiguous memory, or do all adhoc queries in general go directly
against MemToLeave?
As a side note, I have read some from Ken Hendersons blog, threads in this
discussion group, and other posts on the web. I cannot seem to find the
answer to the above questions. Thx.
--
Message posted via SQLMonster.com
http://www.sqlmonster.com/Uwe/Forums.aspx/sql-server/200710/1
|||Still seeking answers. Please help.
--
Message posted via SQLMonster.com
http://www.sqlmonster.com/Uwe/Forums.aspx/sql-server/200710/1
|||On a 64bit SQL instance, you should have little to worry about MemToLeave
because you have a huge virtual address space to work with. I might be wrong,
but my understanding is that whether MemToLeave is used depends on factors
such as memory allocation size, not on what database API is being used on the
client side.
Linchi
|||So if I am running SQL Server 2005, SP2, Standard Edition, 64bit, with 16GB
RAM (14GB set as Max Server Memory), are you saying that just because I am
running 64bit, that I have a huge virtual address space? Where does this huge
virtual address space come from if 14GB is dedicated to SQL Server? The disks?
--
Message posted via http://www.sqlmonster.com
|||Yes, SQL Server isn't even aware of what API is used by the client application.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
"Linchi Shea" <LinchiShea@.discussions.microsoft.com> wrote in message
news:76F3D546-8C00-4E8A-81A6-CDA22E1E0193@.microsoft.com...
> On a 64bit SQL instance, you should have little to worry about MemToLeave
> because you have a huge virtual address space to go about. I might be wrong,
> but my understanding is that whether MemToLeave is used depends on factors
> such as memory allocation size, not on what database API is being used on the
> client side.
> Linchi
> "cbrichards via SQLMonster.com" wrote:
>> We are running SQL Server 2005, SP2, Standard Edition, 64bit.
>> We are considering an ODBC solution for client reporting, and have been
>> reading that the ODBC driver uses MemToLeave memory, up to 7MB per connection.
>> I was wondering if the queries sent via this ODBC connection, also utilize
>> MemToLeave, or if those queries use the Buffer Pool?
>> And then a separate question, not involving ODBC...does an adhoc query
>> executed from Management Studio utilize Buffer Pool memory, if it needs less
>> than 8K contiguous memory, or do all adhoc queries in general go directly
>> against MemToLeave?
>> As a side note, I have read some from Ken Hendersons blog, threads in this
>> discussion group, and other posts on the web. I cannot seem to find the
>> answer to the above questions. Thx.
>> --
>> Message posted via SQLMonster.com
>> http://www.sqlmonster.com/Uwe/Forums.aspx/sql-server/200710/1
>>|||MemToLeave is a virtual memory address space concept. It has nothing to do
with how much physical memory you have.
Let's forget about performance and physical memory for now. When you write a
program in a high-level programming language, you may feel like you can just
keep allocating data structures. But eventually you'll run into an out-of-memory
error, because the compiler/OS knows that there is a limit to the number of
unique virtual memory addresses for a process. If those addresses are all
used up in the process, you are not allowed to allocate any more, and it's the
OS's job to keep track of the usage of the virtual memory addresses.
On a 32-bit system, the number of virtual memory addresses is limited.
Basically there are 2GB for a program to allocate its data structures (the
other 2GB are taken by the kernel). Since the SQL process' virtual address
space may get fragmented after it has been allocating data structures for a
while, and fragmentation can get so bad that it may not be able to find
contiguous virtual addresses for larger data structures (i.e. larger than
8K), the strategy that SQL Server uses is to pre-allocate a chunk of virtual
addresses when it starts, so that this chunk is available regardless of its
normal allocation activities. This pre-allocated chunk of virtual memory
addresses is MemToLeave.
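The fragmentation scenario described above can be illustrated with a toy simulation (the page counts, names, and the reservation size here are invented for illustration; the real SQL Server reserves actual virtual addresses at startup, it does not work with a Python list):

```python
import random

SPACE = 2048             # toy address space: 2048 eight-KB "pages"

def find_contiguous(space, npages):
    """Return the start index of a run of npages free pages, or None."""
    run = 0
    for i, used in enumerate(space):
        run = 0 if used else run + 1
        if run == npages:
            return i - npages + 1
    return None

random.seed(42)
space = [False] * SPACE  # False = free, True = allocated

# Reserve a chunk up front, as SQL Server does with MemToLeave: mark it
# allocated so normal activity cannot fragment it.
RESERVED = 64
space[:RESERVED] = [True] * RESERVED

# Simulate lots of small scattered allocations outside the reservation.
for _ in range(5000):
    space[random.randrange(RESERVED, SPACE)] = True

free_pages = space[RESERVED:].count(False)
big_run = find_contiguous(space[RESERVED:], 32)  # want 32 contiguous pages

print("free pages outside reservation:", free_pages)
print("found 32 contiguous free pages:", big_run is not None)
```

Typically many individual pages remain free, yet no large contiguous run survives the scattered allocations; the reserved chunk, however, is still available as one piece. That is the whole point of pre-allocating MemToLeave at startup.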
Now with a 64-bit system, the virtual address space is so large that there
is little to no danger that a SQL instance (primarily its buffer pool) will
ever use up so much of the virtual addresses that its process will have no
contiguous virtual addresses left for larger data structures.
Again, this is all related to virtual memory addresses, and has nothing to do
with how much physical memory you may have on a particular system. In
reality, of course, if you don't have enough physical memory, you risk
swapping things to paging files and your performance will suffer.
Linchi
"cbrichards via SQLMonster.com" wrote:
> So if I am running SQL Server 2005, SP2, Standard Edition, 64bit, with 16GB
> RAM (14GB set as Max Server Memory), are you saying that just because I am
> running 64bit, that I have a huge virtual address space? Where does this huge
> virtual address space come from if 14Gb is dedicated to SQL Server? The disks?
>
> Linchi Shea wrote:
> >On a 64bit SQL instance, you should have little to worry about MemToLeave
> >because you have a huge virtual address space to go about. I might be wrong,
> >but my understanding is that whether MemToLeave is used depends on factors
> >such as memory allocation size, not on what database API is being used on the
> >client side.
> >
> >Linchi
> >
> >> We are running SQL Server 2005, SP2, Standard Edition, 64bit.
> >>
> >[quoted text clipped - 11 lines]
> >> discussion group, and other posts on the web. I cannot seem to find the
> >> answer to the above questions. Thx.
>|||I am really perplexed by this whole virtual address space with 64-bit and I do
not seem to be getting a straight answer. It is probably the way I am
phrasing my questions. If so, I apologize. If I have 16GB RAM, of which 14GB
is configured as MaxServerMemory, and I have 100GB carved out on the SAN for
my disks, then when the 14GB is being used by SQL Server and another
query comes along after the 14GB is consumed, is the virtual address space the
2GB left for the operating system, plus the 100GB carved out on the SAN?
Tibor Karaszi wrote:
>Yes, SQL Server isn't even aware of what API is used by the client application.
>> On a 64bit SQL instance, you should have little to worry about MemToLeave
>> because you have a huge virtual address space to go about. I might be wrong,
>[quoted text clipped - 19 lines]
>> discussion group, and other posts on the web. I cannot seem to find the
>> answer to the above questions. Thx.
|||First of all, virtual address space of a process has nothing to do with how
much physical memory or disk storage you may have on a system. On 32-bit
Windows, the user-mode virtual address space is 2GB (or 3GB with the /3GB switch
in boot.ini). On 64-bit Windows, the virtual address space of a process
is 8TB. These numbers don't change regardless of how much RAM or disk storage
you have.
Secondly, the size of the virtual address space is a function of the width
of the address, or the number of unique pointer values. So in theory, on a
32-bit system, the virtual address space is 2^32, and if each address points
to a byte, that's 4GB. Also in theory, on a 64-bit system, the virtual
address space is 2^64, and if each address points to a byte, that's more than 16
exabytes. But actual implementations often impose additional restrictions. For
instance, 32-bit Windows gives 2GB to the kernel and leaves 2GB to your app to
use. And since 16 exabytes are excessive, currently beyond reach anyway, and
costly to support, only 43 bits are actually used for user-mode virtual
addresses on current x64, and that's ~8TB.
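The arithmetic here is easy to check; a quick sketch (byte-addressed, sizes in powers of two):

```python
def space_bytes(address_bits):
    """Bytes addressable with the given pointer width, byte-addressed."""
    return 2 ** address_bits

GB = 2 ** 30
TB = 2 ** 40
EB = 2 ** 60

print(space_bytes(32) // GB, "GB")  # full 32-bit space: 4 GB
print(space_bytes(43) // TB, "TB")  # 43 usable bits: the ~8 TB user-mode space
print(space_bytes(64) // EB, "EB")  # full 64-bit space: 16 exabytes
```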
Now, if you really end up using this much virtual address space, it (i.e. the
virtual memory allocated to represent your data structures) has to be
backed/supported with real stores, which can be physical memory or paging
files.
Linchi
"cbrichards via SQLMonster.com" wrote:
> I am really perplexed at this whole virtual address space with 64bit and I do
> not seem to be getting a straight answer. It is probably the way I am
> phrasing my questions. If so I apologize. If I have 16GB RAM, of which 14GB
> is configured as MaxServerMemory, and I have 100GB carved out on the SAN for
> my disks, then if the 14GB is being used by SQL Server, and then another
> query comes along, and the 14GB is consumed, is the virtual address space the
> 2GB left for the operating system, plus the 100GB carved out on the SAN?
> Tibor Karaszi wrote:
> >Yes, SQL Server isn't even aware of what API is used by the client application.
> >
> >> On a 64bit SQL instance, you should have little to worry about MemToLeave
> >> because you have a huge virtual address space to go about. I might be wrong,
> >[quoted text clipped - 19 lines]
> >> discussion group, and other posts on the web. I cannot seem to find the
> >> answer to the above questions. Thx.
>|||On Thu, 04 Oct 2007 01:49:25 GMT, "cbrichards via SQLMonster.com"
<u3288@.uwe> wrote:
>If I have 16GB RAM, of which 14GB
>is configured as MaxServerMemory, and I have 100GB carved out on the SAN for
>my disks, then if the 14GB is being used by SQL Server, and then another
>query comes along, and the 14GB is consumed, is the virtual address space the
>2GB left for the operating system, plus the 100GB carved out on the SAN?
There are, or so I heard at some time way in the past, computers with an
architecture where memory and disk are mapped into a common
address space, as if each were an extension of the other. That sounds
like what you are asking about. Microsoft SQL Server does not run on
any such architecture.
To oversimplify a bit... Most of SQL Server's memory is used for
caching database pages. If SQL Server needs a page that is already in
cache, it is just referenced from that memory location. If it needs a
page that is not in cache, it is read into cache. When there is no
room in memory for a new page, a page that hasn't been used recently is
overwritten. When a page is updated it will be written to disk -
eventually. (Logs are handled a bit differently and are always written
immediately.)
Sorry if I have misunderstood your statement.
Roy Harvey
Beacon Falls, CT
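Roy's oversimplification maps neatly onto a read-through, write-back LRU cache. A minimal sketch (the class and its names are invented for illustration; this is not SQL Server's buffer pool implementation):

```python
from collections import OrderedDict

class BufferCache:
    """Toy page cache: read-through on miss, LRU eviction, lazy write-back."""

    def __init__(self, capacity, disk):
        self.capacity = capacity
        self.disk = disk              # stands in for the data files: page_id -> contents
        self.pages = OrderedDict()    # page_id -> (contents, dirty), in LRU order

    def read(self, page_id):
        if page_id in self.pages:                    # hit: just reference it
            self.pages.move_to_end(page_id)
            return self.pages[page_id][0]
        contents = self.disk[page_id]                # miss: read into cache
        self._insert(page_id, contents, dirty=False)
        return contents

    def write(self, page_id, contents):
        self._insert(page_id, contents, dirty=True)  # flushed to disk later

    def _insert(self, page_id, contents, dirty):
        self.pages[page_id] = (contents, dirty)
        self.pages.move_to_end(page_id)
        while len(self.pages) > self.capacity:       # no room: evict LRU page
            victim, (data, was_dirty) = self.pages.popitem(last=False)
            if was_dirty:
                self.disk[victim] = data             # written back "eventually"

disk = {1: "a", 2: "b", 3: "c"}
cache = BufferCache(capacity=2, disk=disk)
cache.read(1)
cache.write(1, "a2")     # dirty in cache; disk still holds the old value
cache.read(2)
cache.read(3)            # evicts page 1 (least recently used), flushing it
print(disk[1])           # -> a2
```

Note the update to page 1 only reaches "disk" when the page is evicted, which mirrors the "written to disk - eventually" behavior described above.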

Monday, March 12, 2012

Memory Usage

Is there a way to configure the amount of memory that Reporting
Services uses? And if so, can someone point me in the right direction in
regards to the documentation... I appreciate your assistance.
-W|||Hi Wayne,
See the file RSReportServer.config; it is located in the \Reporting
Services\ReportServer folder. There is a MemoryLimit setting. See also
the article "RSReportServer Configuration File" in Reporting Services Books
Online.
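For reference, the memory settings in RSReportServer.config for Reporting Services 2000/2005 look roughly like this (the values shown are examples expressed as percentages of physical memory; later versions replaced these settings, so check Books Online for your release):

```xml
<!-- fragment of RSReportServer.config (SSRS 2000/2005) -->
<MemoryLimit>60</MemoryLimit>
<MaximumMemoryLimit>80</MaximumMemoryLimit>
```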
Regards,
Armand
<wayne.small@.mbna.com> escreveu na mensagem
news:1130421493.518665.293030@.g14g2000cwa.googlegroups.com...
> Is there a way to configure the amount of memory that reporting
> services uses? and if so can someone point me in the right direction in
> regards to the documentation... I apperciate your assistance.
> -W
>