Wednesday, March 28, 2012
Merge doesn't re-propagate changes arriving from transactional replication
The setup is as follows:
- A transactional publication "TRN2000" (on server SrvSQLServer2000)
- A push subscription to the "TRN2000" publication (on server SrvSQLServer2005)
- A merge publication "Merge2005" defined on SrvSQLServer2005 over the same database/tables that the transactional subscription replicates
- Subscriptions to the "Merge2005" publication
Then, this is the behavior:
- Data changes on SrvSQLServer2000 are replicated to SrvSQLServer2005 through the TRN2000 subscription (OK)
- When I modify data directly (update, insert) on SrvSQLServer2005, the merge subscriptions receive the data changes (OK)
But the problem is that data changes received in the SrvSQLServer2005 database through the "TRN2000" replication are not propagated to the "Merge2005" subscriptions.
I suspect that the problem is that the transactional push subscription uses bulk operations on SrvSQLServer2005, and these are being ignored by the "Merge2005" replication triggers.
Any ideas? Workarounds?
thanks in advance,
Alexander.
If the problem occurs during initialization, have a look at the
@fire_triggers_on_snapshot parameter of sp_addarticle.
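For reference, a minimal sketch of setting that parameter when adding the article (the publication, schema, and table names below are placeholders, not taken from your setup):

```sql
-- Add the article with user triggers included in the snapshot and
-- fired when the snapshot is applied at the subscriber.
EXEC sp_addarticle
    @publication = N'TRN2000',            -- placeholder publication name
    @article = N'T',                      -- placeholder article/table name
    @source_owner = N'dbo',
    @source_object = N'T',
    @fire_triggers_on_snapshot = N'true';
```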
HTH,
Paul Ibison
|||Thanks Paul for the answer.
Unfortunately, the problem happens not only at initialization; it happens when propagating any change to the data.
Any ideas?
"Paul Ibison" wrote:
> If the problem occurs during initialization, have a look at the
> @.fire_triggers_on_snapshot parameter in sp_addarticle.
> HTH,
> Paul Ibison
|||In that case I'm confused. The commands replicated are sent down as calls to insert/update/delete stored procedures on the subscriber. This isn't like a bulk insert or a fast load, which themselves don't fire triggers. Is your transactional setup one that involves updating subscribers, i.e. has triggers of its own? Are there any other triggers on the table? Are any/all triggers marked NOT FOR REPLICATION?
Rgds,
Paul Ibison
|||Table “T”, on which the transactional replication writes the data (the same table “T” that is also published through the merge replication), doesn't have any other triggers.
OK, the use of “bulk insert” by the transactional replication was only a supposition.
I thought the insert could be doing a “bulk insert” because if I execute:
sp_addtabletocontents @table_name = 'T'
the merge replication replicates the pending changes (changes that arrived on table “T” through the transactional replication) to its subscribers.
(I could send you my replication scripts by mail if you think they might clarify something.)
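To make the workaround concrete, here is a minimal sketch of the call above (run in the publication database; the table name 'T' comes from the thread, and the owner parameter is optional):

```sql
-- Re-enumerate the rows of table T into the merge change-tracking tables,
-- so the merge agent treats them as pending changes for its subscribers.
-- Run in the publication database on SrvSQLServer2005.
EXEC sp_addtabletocontents
    @table_name = N'T',
    @owner_name = N'dbo';  -- optional owner/schema of the table
```

Note that this re-enumerates every row in the table, so on large tables it can be expensive.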
"Paul Ibison" wrote:
> In that case I'm confused. The commands replicated are sent down as calls to
> insert/update/delete stored procedures on the subscriber. This isn't like a
> bulk insert or a fast load which themselves don't fire triggers. Is your
> transactional setup one that involves updating subscribers ie has triggers of
> its own? Are there any other triggers on the table? Are any/all triggers
> marked as NOT FOR REPLICATION?
> Rgds,
> Paul Ibison
>
|||I'll have a go at reproducing and then post back... Meanwhile, can you confirm that the changes are definitely coming from transactional replication and not being bulk-loaded into the republished table directly?
Paul Ibison
|||Sure.
I have verified that when making data modifications (through standard SQL statements: insert, update, delete) on the database that has the transactional publication, the changes arrive correctly at table T, and these same changes are not re-propagated to the merge subscriptions.
Thank you for your dedication to this subject.
Alexander Wolff
|||Hello Alexander & Paul,
I'm having exactly the same problem,
I wonder if you've found a solution (or explanation) for this issue.
Thanks,
Vladimir Kofman
> Surely,
> I have verified that making data modifications (through sql standard
> sentences insert, update, delete)
> on the database that has the transaccional publication, the changes
> arrive
> correctly at table T and these
> same changes are not re-propagated to merge subscriptions.
> I thank for your dedication in this subject
> Alexander Wolff
>
Merge and Transactional from the same server
I need to come up with a solution to my replication issue. The solution that I come up with involves using one publisher to push transactional and merge replication. The database that is published is the same.
Would this cause any issues? Can this be done? I would think that if a table is needed for both, I would have to create a separate publication for each type right?
Is there any documentation anywhere to help prove if this is possible?
Hi, William,
It can be done if you use read-only transactional replication and merge replication on the same DB (same tables). You can't enable updatable transactional replication and merge replication at the same time on the same table in the same DB (it's not allowed), since they both try to add a GUID column to the published table. Yes, you need to create a separate (read-only) transactional publication and a merge publication on the same set of tables. And you will have problems if the transactional and merge subscriptions are both on the same DB (I guess you will not do this, right?). The other restriction is that for transactional replication you need a primary key on the published table, but I guess you know that already.
Not sure if we have any documentation specifically for this particular configuration but I just tried to play with this configuration and it worked just fine for me.
Thanks,
Zhiqiang Feng
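A minimal sketch of what this describes: a read-only transactional publication and a merge publication over the same table in the same database (all names are examples; snapshot agents, subscriptions, and many optional parameters are omitted):

```sql
-- Read-only transactional publication over table T (T needs a primary key).
EXEC sp_addpublication
    @publication = N'TranPub',
    @repl_freq = N'continuous',
    @status = N'active';
EXEC sp_addarticle
    @publication = N'TranPub',
    @article = N'T',
    @source_owner = N'dbo',
    @source_object = N'T';

-- Merge publication over the same table; this is the step that adds
-- the rowguid column to T.
EXEC sp_addmergepublication
    @publication = N'MergePub';
EXEC sp_addmergearticle
    @publication = N'MergePub',
    @article = N'T',
    @source_owner = N'dbo',
    @source_object = N'T';
```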
|||So read-only for both transactional and merge? The tables I want to add read-only transactional to already have updatable merge (if I understand this correctly, that is two-way replication, right?)... The transactional will go downstream only, to one server, and the merge can go downstream (not to the same server as the transactional) and upstream...
It doesn't sound like this is possible, does it? If not, I have a backup plan.
|||William,
To avoid confusion, I should say "it can be done if you use merge replication and read-only transactional replication on the same DB (same tables)". For merge there is no concept of read-only; it has only one type, with which you can always download and upload changes between publisher and subscriber.
Thanks,
Zhiqiang Feng
|||OK... So as long as the transactional only goes to the subscriber, and not from the subscriber back to the publisher, it should work with no issues? Is that what you are saying?
Sorry if I sound ignorant, but I have usually had only one or the other, and now I need both.
|||Yes, your understanding is correct.
Thanks,
Zhiqiang Feng
Monday, March 26, 2012
merge agent status
I have a push merge subscription using Sql Server 2000. The publisher &
distributor reside on the same host.
I would like clients to be able to determine the status of their merge agent at the publisher. This would be used as verification at the client that its merge agent is running properly and there is no error condition.
In the ReplicationMonitor/Merge Agent display, the status "No data needs to
be merged" displays when everything is functioning properly. I figure this
status must be available in some system table for the given merge agent.
Anyone know of the field / table to retrieve, in order to determine the
status of a merge agent? Is there some system sp to run?
Thanks in advance.
Regards,
bruce
It's in the distribution database; query the run_status column of MSmerge_history.
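A sketch of such a query (the agent name is a placeholder; note that in the distribution database the column appears to be spelled runstatus, without the underscore):

```sql
-- Latest status row for a given merge agent, from the distribution database.
-- runstatus: 1=Start, 2=Succeed, 3=In progress, 4=Idle, 5=Retry, 6=Fail.
SELECT TOP 1 h.runstatus, h.comments, h.[time]
FROM distribution.dbo.MSmerge_history AS h
JOIN distribution.dbo.MSmerge_agents AS a
    ON a.id = h.agent_id
WHERE a.name = N'MyMergeAgent'  -- placeholder agent name
ORDER BY h.[time] DESC;
```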
"rr" <bruceradtke@.REMOVEspamREMOVE.earthlink.net> wrote in message
news:V8ebc.3251$WA4.2734@.twister.nyc.rr.com...
> Hello All -
> I have a push merge subscription using Sql Server 2000. The publisher &
> distributor reside on the same host.
> I would like to be able for clients to determine the status of their merge
> agent at the publisher. This would be used as a verification at the
client,
> that their merge agent is properly running and there is no error
condition.
> In the ReplicationMonitor/Merge Agent display, the status "No data needs
to
> be merged" displays when everything is functioning properly. I figure
this
> status must be available in some system table for the given merge agent.
> Anyone know of the field / table to retrieve, in order to determine the
> status of a merge agent? Is there some system sp to run?
>
> Thanks in advance.
> Regards,
> bruce
>
>
Merge agent start failure
I have a problem with a push merge subscription. The publisher/distributor reside on the same host, with several clients connecting for merge replication.
The startup of one of the merge agents fails with the error: "Login failed for user 'dbApp'. Reason: not associated with a trusted SQL Server connection. Error 18452".
I am able to connect to the client via Query Analyzer using the 'dbApp' login. I've done A/B checking of the security info for the bad client against a good client and cannot see any diffs.
I don't know what has happened. This client machine worked several weeks ago. I suspect maybe some software installation munged the settings somewhere, but am at a loss as to where to check.
Any clues?
Thanks,
bruce
Can you check the SQL error log for any information?
I have encountered intermittent memory problems of the form "could not reserve contiguous memory blocks" around the time I experienced these errors.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
"rr" <bruceradtke@.REMOVEspamREMOVE.earthlink.net> wrote in message
news:N4fDd.37555$kq2.26384@.twister.nyc.rr.com...
> Hello All -
> I have a problem with push merge subscription. The publisher/distributer
> reside on same host with several clients connecting for merge replication.
> The startup of one of the merge agents fails. The error message "Login
> failed for user 'dbApp'. Reason: not associated with a trusted SQL server
> connection. Error 18452".
> I am able to connect to the client via QueryAnalyer using the 'dbApp'
login.
> I've done A/B checking with the security info for the bad client with a
good
> client and can not see any diffs.
> I don't know what has happened. This client machine did work several
weeks
> ago. I suspect maybe some software installation munged the settings
> somewhere but am at a loss as to where to check for this.
> Any clues?
> Thanks,
> bruce
>
>
merge agent slowing performance
I have set up merge replication with a push subscription running every 10 minutes. I have changed the agent profile to 'High Volume Server-to-Server' but the duration and delivery rate remain almost the same. How can I increase the delivery rate, and what should the suggested agent profile be? Please help, it is urgent.
Thanks in Advance,
SVur
The history of the merge agent looks like this for the last run:
# Actions : 32
Last action message: merged 3252 data changes (3209 inserts, 16 updates, 27 deletes)
Duration(secs): 00.00.43
Delivery rate (rows per sec): 73.0000
Inserts at Publisher:2912
Updates at Publisher:16
Deletes at publisher:27
conflicts at publisher:0
Inserts at subscriber:297
Updates at subscriber:0
Deletes at subscriber:0
conflicts at subscriber:0
Hi,
This is my merge agent history. The agent took approximately 9 minutes to merge about 6,500 changes in the last run. How can I improve the performance of the merge agent? What are the implications of increasing the upload and download batch sizes?
Thanks,
SVur
[Merge agent session history, condensed from the raw log (8/17/2004, 8:20:18 AM - 8:24:49 AM, 257 seconds):]
8:20:18  Starting agent.
8:20:31  Initializing.
8:20:31  Connecting to Publisher '01AL10015000025'.
8:20:36  Connecting to Subscriber '01AL10015000026'.
8:21:24 - 8:22:11  Nine upload batches: 900 data changes uploaded (892 inserts, 4 updates, 4 deletes, 0 conflicts).
8:22:14 - 8:24:48  Roughly fifty download batches of ~100 rows each: 5,479 data changes downloaded (mostly inserts, plus 142 deletes and ~156 updates); the reported delivery rate climbed from about 10 to about 25 rows/sec.
8:24:49  No data needed to be merged.
8:24:49  Merged 6379 data changes (6073 inserts, 160 updates, 146 deletes, 0 resolved conflicts).
"SVur" wrote:
> Hi,
> I have setup merge replication with push subcription for every 10 min. I
> have changed the agent profile to 'high Volume Server to server' but still
> the duration and delivery rate remain almost the same. How can i increase the
> delivery rate, or what should be the suggested agent profile be. Please help,
> it is urget.
> Thanks in Advance,
> SVur
> The history of the merge agent looks like this for hte last run:
> # Actions : 32
> Last action message: merged 3252 data changes(3209 inserts, 16 updates, 27
> deletes)
> Duration(secs): 00.00.43
> Delivery rate (rows per sec): 73.0000
> Inserts at Publisher:2912
> Updates at Publisher:16
> Deletes at publisher:27
> conflicts at publisher:0
> Inserts at subscriber:297
> Updates at subscriber:0
> Deletes at subscriber:0
> conflicts at subscriber:0
>
Merge agent login problem
I have a push merge replication up and working in development SQL Server 2005 (publisher) and SQL Express (subscriber). Because the upgrade from 2000 to 2005 for our production servers has been delayed for who knows how long, I am working to set the same replication up with 2000 and MSDE.
The merge agent is failing in 2000 with the error:
Login failed for user '(null)'. Reason: Not associated with a trusted SQL Server connection. (Source: MSSQLServer, Error number: 18452)
The 2005 merge agent runs fine. I think it is because the 2005 subscription wizard allowed me to set the 'Run as: SQL Agent Service Account' option, instead of a Windows authentication UID. There is no opportunity to set this in the 2000 subscription wizard, nor on the merge agent job step itself. I didn't take a screen dump, but am confident that I used the SECURITYMODE=SQL parameter on the MSDE client install.
Any help on whether I'm correct about what the problem is, and on what to change to get the 2000 merge to log in with the SQL Agent service account, would be great. Thanks.
Is the SQL Agent service account a Windows account? If so, then you have to change the subscriber machine to allow Windows authentication. SECURITYMODE=SQL means only SQL Server logins can connect. You should be able to use Enterprise Manager to change the subscriber's server properties to allow both SQL and Windows authentication modes.|||The install document with MSDE 2000 says that SECURITYMODE=SQL enables mixed mode. The server shows Windows and SQL authentication enabled.
I've looked at the replication jobs with both Enterprise Manager and Management Studio and cannot see how, and with what credentials, the merge is trying to log in to the client, except for the difference noted in my original post.
I also noticed that the program sqlmangr.exe fails when the MSDE PC is started. It seems to operate OK except for the merge connection, though.
|||The difference appears to be that 2005 can impersonate the service account and 2000 cannot. I added the publication server's service account as a user with the same password on the subscription machine, and the merge agent was able to log in.
Merge Agent Hangs (reporting no errors)
In my merge replication scenario I use push subscriptions. The distributor is SQL Server 2000 SP3a, and the subscribers are MSDE SP3a.
Everything worked fine, but recently the merge agents started to "hang", reporting no errors, at random phases of a replication session. They stop sending or receiving any data between distributor and subscriber (there is no network traffic). SQL Profiler shows no activity on the subscriber server. The last message generated by the agent is "The process is running and is waiting for a response from one of the backend connections".
If there were a problem with communication between the servers, one would expect it to return an error code and retry the operation. In my case the merge agent "hangs" and waits for nothing. After 1 or 2 hours (a random period of time) it reports a communication link failure.
To continue a merge process I have to stop and start the merge agent manually.
Since my replication sends a lot of data from subscribers (MSmerge_genhistory has approx. 50,000 entries when the retention period is set to 2 days), after uploading data changes to the publisher (the procedure sp_MScheckexistsgeneration is executed for every one of the 50,000 generations - am I right about this?) it takes a long time (40 minutes) to complete. If the merge agent hangs, the process must be run from the beginning. In this situation it is almost impossible to complete a replication session.
What can be the cause of such behavior of merge agents?
Last lines of output (level 3) generated by agent are as follows:
{call sp_MScheckexistsgeneration (?, ?) }
{call sp_MScheckexistsgeneration (?, ?) }
Percent Complete: 0
The process is running and is waiting for a response from one of the
backend connections.
Repl Agent Status: 3
Tomek
Tomek,
it's difficult to say, but in some sense this looks like a connectivity
issue. However, when I've seen such a situation before, the error was
'General network error'. You might like to verify whether this is or is not
the case by using a Network Monitor tool:
http://support.microsoft.com/default...48942&sd=tech.
Alternatively, you could open a window in QA on the publisher, with the window connected to the subscriber SQL Server. In the window, run a simple SELECT command in a loop and return the date. This should tell you if/when the connectivity had problems.
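For example, something like this in a Query Analyzer window connected to the subscriber (a throwaway probe; cancel the query to stop it):

```sql
-- Connectivity probe: print the time every few seconds; a gap or an error
-- in the output marks the moment connectivity was lost.
WHILE 1 = 1
BEGIN
    SELECT GETDATE() AS probe_time;
    WAITFOR DELAY '00:00:05';  -- wait 5 seconds between probes
END
```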
Can you check Current Activity, or sp_who2/sp_lock/sp_blocker, to identify whether there is any blocking involved? If the problem is not connectivity then this would be my next test.
I'd also like to know what happens when you restart the agent - does it work quickly, or does it still take a long time but succeed?
No doubt you have considered this, but if possible can you synchronize more often to shorten the batch size?
You might also optimize performance by running sp_mergemetadataretentioncleanup manually and by tuning the -DownloadGenerationsPerBatch parameter.
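A sketch of the cleanup call (run it in the publication database, and again at each subscriber; the output-parameter names follow the documented signature, to the best of my recollection):

```sql
-- Remove merge metadata that is past the retention period and report
-- how many rows were cleaned from each metadata table.
DECLARE @num_genhistory_rows int,
        @num_contents_rows int,
        @num_tombstone_rows int;
EXEC sp_mergemetadataretentioncleanup
    @num_genhistory_rows = @num_genhistory_rows OUTPUT,
    @num_contents_rows  = @num_contents_rows  OUTPUT,
    @num_tombstone_rows = @num_tombstone_rows OUTPUT;
SELECT @num_genhistory_rows AS genhistory_rows_removed,
       @num_contents_rows  AS contents_rows_removed,
       @num_tombstone_rows AS tombstone_rows_removed;
```

-DownloadGenerationsPerBatch, by contrast, is set on the Merge Agent's command line or in its agent profile, not in T-SQL.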
HTH,
Paul Ibison
|||Paul,
Thank you for your response. I hope my answers to your questions will help us solve the problem.
"Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message news:<#EzzqyDQEHA.1312@.TK2MSFTNGP12.phx.gbl>...
> it's difficult to say, but in some sense this looks like a connectivity
> issue. However, when I've seen such a situation before, the error was
> 'General network error'. You might like to verify is this is/is not the
> case, by using a Network Monitor tool:
> http://support.microsoft.com/default...48942&sd=tech.
It may be related to connectivity, because the merge agents connect to subscribers that do not always use a reliable network connection. But the problem is that instead of reporting a general network error and retrying the operation, the agent waits (hangs), doing nothing.
> Alternatively you could open a window in QA on the publisher, with the
> window connected to the subscriber SQL Server. In the window do a simple
> select command in a loop and return the date. This should tell you if/when
> the connectivity had problems.
The QA loop worked fine (no connectivity problems), but at the same time the agent assigned to the same subscriber hung.
> Can you check in current activity or sp_who2/sp_lock/sp_blocker to identify
> if there is any blocking involved. If the problem is not connectivity then
> this would be my next test.
I've checked this one too. Blocking is not the reason.
> I'd also like to know what happens when you restart the agent - does it work
> quickly or still take a long time but is successful.
I will describe how my agent works. Please tell me whether this is the
correct behavior.
1) After connecting to the subscriber, the agent uploads all the data changes
made by the subscriber to the publisher. This works very fast and I do not
see any problem here (the agent has never stopped at this phase).
2) The Merge Agent executes sp_MScheckexistsgeneration for each of the 50000
generations stored in MSmerge_genhistory on the subscriber side. If there
is some data to download, it is downloaded to the subscriber, and then the
Merge Agent checks the next generation.
Now let's assume that the Merge Agent "hanged" at generation 39000. After
restarting, the agent starts the process from the beginning, checking
each of the 39000 generations that were already checked and merged in the
previous session.
The Merge Agent succeeds only if it is able to go through all 50000
generations without losing the connection to the subscriber. That is why it
is so difficult to complete a merge session.
IMHO the Merge Agent should resume from generation 39000, which would make
completing a session possible even over unreliable connections. Is this
"by design" behavior?
> No doubt you have considered this, but if possible can you synchronize more
> often to shorten the batchsize?
It is not possible: subscribers cannot synchronize on Saturday and
Sunday, so a two-day retention period is the minimum.
> You might also optimize performance by running
> sp_mergemetadataretentioncleanup manually and optimize
> he -DownloadGenerationsPerBatch parameter.
I think the problem is not metadata at the publisher but metadata at the
subscribers. It is not deleted after a successful replication session (I
assume it is supposed to work this way).
-DownloadGenerationsPerBatch does not help, because most of the time the
Merge Agent executes sp_MScheckexistsgeneration without downloading
any data changes. Or maybe I am misunderstanding this parameter?
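For reference, the per-subscriber generation metadata described above can be inspected directly. A hedged sketch (the system table name is per SQL Server 2000 merge replication; run it in the subscription database):

```sql
-- Rough gauge of how much generation metadata has accumulated
-- at the subscriber.
SELECT COUNT(*) AS generation_count
FROM dbo.MSmerge_genhistory;
```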
Tomek
|||Tomek,
Slow and unreliable connections can result in more retries by the Merge
Agent, none of which you've seen. Your QA test also indicates the same
thing, so we can cross this off the list.
Answers inline...
> IMHO the merge agent should start from generation 39000 making
> completing session possible even on not reliable connections. Is it
> "by design" behaviour?
>
I guess it is not processed this way to try to avoid potential
non-convergence as inserts and child records could be in separate batches
(http://support.microsoft.com/default...b;enus;Q308266)
>
Having -MetadataRetentionCleanup 1 set should sort this out, but you could run
sp_mergemetadataretentioncleanup manually on the publisher and subscriber to
see if things improve. It won't remove the 50000 entries, but what I am
hoping is that the search for new generations will be noticeably quicker
after old ones are removed.
> DownloadGenerationsPerBatch does not help because most of the time
> merge agent executes sp_MScheckexistsgeneration without downloading
> any data changes. Or maybe I am misunderstanding this parameter?.
Increasing the -DownloadGenerationsPerBatch Merge Agent parameter (even to
its maximum value of 2000) is a recommendation where a Subscriber has not
merged data with the Publisher for an extended period of time, during which
the Publisher or other Subscribers have made numerous changes
(http://www.microsoft.com/technet/pro...n/mergperf.mspx).
So, I'd definitely test altering this parameter's value.
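For illustration, both agent parameters mentioned in this thread are set on the Merge Agent's command line (e.g. in the agent's job step). Server, database, and publication names below are placeholders:

```shell
REM Hypothetical Merge Agent job-step command line; names are placeholders.
replmerg.exe -Publisher PUBSRV -PublisherDB PubDB -Publication MergePub ^
    -Subscriber SUBSRV -SubscriberDB SubDB ^
    -DownloadGenerationsPerBatch 2000 -MetadataRetentionCleanup 1
```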
(Powodzenia - good luck)
Paul
Merge Agent fails to connect to mdb
Here is a challenge - and I'm losing ...
I'm trying to get a push subscription to work to an Access mdb file from
SQLServer.
I get the following message when running the Merge Agent:
'S:\network\share\access.mdb' is not a valid path. Make sure that the path
name is spelled correctly and that you are connected to the server on which
the file resides.
(Source: MS.Jet.4.0 (Agent); Error number: -1023)
I'm successfully using the subscription's Linked Server definition via QA.
So I conclude that the .mdb path is valid.
I've successfully tested the push replication with the .mdb file as a local
file.
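Such a linked-server check can be sketched as follows (the linked server name `MDB_LINK` and the table name are placeholders):

```sql
-- Hypothetical verification that the linked server pointing at the
-- .mdb responds when queried from QA.
SELECT TOP 1 *
FROM OPENQUERY(MDB_LINK, 'SELECT * FROM SomeTable');
```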
My guess is that it is a security context issue for the Merge Agent -
however I don't know how to correct it!!!
I've checked the Agent definition - and it is running with the owner set to
the local admin account.
Both my SQLServer Instance and Agent service use the same local
administrator account (not LocalSystem). This account has got access to the
share and can open the mdb file via Access).
Environment:
SQLServer PE V7.0 with SP3a
Windows 2003 Server with SP1
Hope that someone has been here, and got the t-shirt...
Peter
Is S a network drive? If so you have to access it as a UNC path. Secondly,
IIRC every time the agent runs it creates a new copy of the database.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Peter" <Peter@.discussions.microsoft.com> wrote in message
news:D872C520-1D22-4397-8A55-7E0A02DA4D26@.microsoft.com...
> Hi all,
> Here is a challenge - and I'm loosing ...
> I'm trying to get a push subscription to work to an Access mdb file from
> SQLServer.
> I get the following message when running the Merge Agent:
> 'S:\network\share\access.mdb' is not a valid path. Make sure that the
> path
> name is spelled correctly and that you are connected to the server on
> which
> the file resides.
> (Source: MS.Jet.4.0 (Agent); Error number: -1023)
> I'm successfully using the subscription's Linked Server definition via QA.
> So I conclude that the .mdb path is valid.
> I've successfully test the push replication with the .mdb file as a local
> file.
> My guess is that it is a security context issue for the Merge Agent -
> however I don't know how to correct it!!!
> I've checked the Agent definition - and it is running with the owner set
> to
> the local admin account.
> Both my SQLServer Instance and Agent service use the same local
> administrator account (not LocalSystem). This account has got access to
> the
> share and can open the mdb file via Access).
> Environment:
> SQLServer PE V7.0 with SP3a
> Windows 2003 Server with SP1
> Hope that someone has been here, and got the t-shirt...
> --
> Peter
|||Hi Hilary,
Yes S is a mapped network drive.
I have created a new Linked Server def using the UNC reference to the mdb.
Again the linked server def works fine when used in Query Analyser to select
data from the mdb.
However, I again get the msg below in my Merge Agent when using the Linked
Server def in a push subscription:
'\\server\path1\path2\access.mdb' is not a valid path. Make sure that the
path name is spelled correctly and that you are connected to the server on
which the file resides.
(Source: MS.Jet.4.0 (Agent); Error number: -1023)
------
Any ideas?
Curious about your second comment about creating a new copy of the database
every time the agent runs - I did not see this when running the agent against
my local test copy of the mdb.
Regards,
Peter
"Hilary Cotter" wrote:
> Is S a network drive? If so you have to access it as a UNC. Secondly, IIRC
> everytime the agent runs it creates a new copy of the database,
> --
> Hilary Cotter
> Looking for a SQL Server replication book?
> http://www.nwsu.com/0974973602.html
> Looking for a FAQ on Indexing Services/SQL FTS
> http://www.indexserverfaq.com
> "Peter" <Peter@.discussions.microsoft.com> wrote in message
> news:D872C520-1D22-4397-8A55-7E0A02DA4D26@.microsoft.com...
>
>
Merge Agent Error: invalid column 'rowguid'
schema and data.
The wizard dialog is: Initial Subscription, and I am choosing: No, the
subscriber already has schema and data.
Does the subscriber have an identical table there already (including the guid
column)?
Regards,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
|||No. I just have four tables in my publication and checked for that first.
"Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
news:%23W53vgb%23EHA.2608@.TK2MSFTNGP10.phx.gbl...
> Does the subscriber have an identical table ther already (including the
> guid
> column)?
> Regards,
> Paul Ibison SQL Server MVP, www.replicationanswers.com
> (recommended sql server 2000 replication book:
> http://www.nwsu.com/0974973602p.html)
>
|||Robert,
the table must already exist for a nosync initialization, in a format
identical to the publisher's.
If it isn't there, you'll need to use backup/restore or
DTS or some other means to transfer it.
Rgds,
Paul Ibison
"Robert A. DiFrancesco" <bob.difrancesco@.comcash.com> wrote in message
news:OoAgSmb%23EHA.2032@.tk2msftngp13.phx.gbl...
> No. I just have four tables in my publication and checked for that first.
>
> "Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
> news:%23W53vgb%23EHA.2608@.TK2MSFTNGP10.phx.gbl...
>
|||Thank you Paul. I feel special today with all your attention.
I meant the column rowguid; I checked that it was not present, as I assume
the Push New Subscription process will create the column in the table at the
subscriber.
What I really am trying to accomplish, and forgive me, is how to "turn on
replication" at the publisher machine when my subscriber machine is already
in a working production environment.
My thinking is that I start with a blank database at the publisher, its schema
matching the subscriber's, and enable publishing and push a new subscription
WITHOUT initializing schema and data, because I'm guessing that a blank
database at the publisher would destroy all records at the live subscriber,
making it blank also. Am I correct, and what is the solution?
Thank you very, very much.
Bob
"Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
news:%23dAW%23ub%23EHA.2804@.TK2MSFTNGP15.phx.gbl.. .
> Robert,
> the table must already be there in a nosync initialization, in identical
> format to the publisher one.
> If you are saying it isn't there, then you'll need to use backup/restore
> or
> DTS or some other means to transfer it.
> Rgds,
> Paul Ibison
> "Robert A. DiFrancesco" <bob.difrancesco@.comcash.com> wrote in message
> news:OoAgSmb%23EHA.2032@.tk2msftngp13.phx.gbl...
>
|||Robert,
please keep the questions coming - this is my great
hobby
The initialization process can do one of two things:
(a) If you choose a sync initialization, then the
complete table will be BCPd out into the repldata share.
In that share there will be several text files - for
creating the schema, the indexes, one containing the data -
but the initial text file contains simply a 'drop table'
statement. That means it really doesn't matter what
is on the subscriber: there can be a table of the same
name or not; if there is, it'll be dropped anyway.
(b) If you select a nosync initialization, the
exact same table (including the guid column) must exist on the
subscriber. The initialization process will create
metadata tables on the subscriber, but it won't touch the
replicated table at all.
There are further variations if we consider other options
for @.pre_creation_cmd, where you can choose to leave the
table in place if it already exists, and the like, but these
are for specialist scenarios.
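As a hedged sketch of the two merge cases above (procedure and parameter names per SQL Server 2000 Books Online; publication, server, and table names are placeholders):

```sql
-- Case (b): nosync initialization -- the subscriber table must
-- already exist in identical format; no snapshot is applied.
EXEC sp_addmergesubscription
    @publication   = N'MergePub',
    @subscriber    = N'SUBSRV',
    @subscriber_db = N'SubDB',
    @sync_type     = N'none';

-- Article-level control over an existing subscriber table:
-- 'none' leaves it in place instead of dropping it.
EXEC sp_addmergearticle
    @publication      = N'MergePub',
    @article          = N'T',
    @source_object    = N'T',
    @pre_creation_cmd = N'none';
```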
HTH,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)