In the example, the report uses the connection defined at design time; in my case, a report defined in the Designer is given a MySQL connection at run time. It connects to the data set defined in that connection at run time, but it repeats the content of the first item for all rows.
You are mixing up the two ways of supplying a data source:
1. Push data from the main app with report.RegisterData. In this case, do not call report.Dictionary.Connections.Add(conn).
2. Pull data: define the DB connection and table in the report template; you can then alter the connection string from the main app (see the sketch right after this list).
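A minimal sketch of the pull approach, assuming the template test.frx already contains the MySQL connection and table created in the Designer. The file path and connection string are placeholders from this thread, and indexing Dictionary.Connections[0] assumes the template's first connection is the one to retarget; treat this as a sketch, not a definitive implementation.

using FastReport;
using FastReport.Data;           // MySqlDataConnection comes from the FastReport.Data.MySql package
using FastReport.Utils;

class PullDataDemo
{
    static void Main()
    {
        // Register the MySQL connector once at application start.
        RegisteredObjects.AddConnection(typeof(MySqlDataConnection));

        // Hypothetical connection string; replace with your own server settings.
        string connStr = "Server=localhost;Database=test;Uid=user;Pwd=pass;";

        using (Report report = new Report())
        {
            report.Load(@"..\test.frx");   // the template already defines the connection and table

            // Reuse the connection created in the Designer instead of adding a second one.
            DataConnectionBase conn = report.Dictionary.Connections[0];
            conn.ConnectionString = connStr;

            report.Show();                 // no RegisterData call: the report pulls its own data
        }
    }
}

The data band in the template stays bound to the table defined in the Designer, so nothing is registered from the application side; only the connection string changes.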
Comments
When invoking it from a C# application it shows 16 rows, but all with the information of the first article. The code is:

FastReport.Utils.RegisteredObjects.AddConnection(typeof(MySqlDataConnection));

using (Report report = new Report())
{
    report.Load(@"..\test.frx");

    MySqlDataConnection conn = new MySqlDataConnection();
    conn.ConnectionString = connStr;
    conn.CreateAllTables(true);
    report.Dictionary.Connections.Add(conn);

    report.RegisterData(bindingSource1, "table1");
    report.GetDataSource("table1").Enabled = true;

    report.Show();
}
Thanks for your reply. It connects to the data set defined in the connection at run time, but repeats the content of the first item for all rows.
From: https://www.nuget.org/packages/FastReport.Data.MySql/#

Example of use. Execute the following code once at application start:

FastReport.Utils.RegisteredObjects.AddConnection(typeof(MySqlDataConnection));

Now you should be able to create a new MySQL data source from the Designer (.NET 4) or from code:
Report report = new Report();
report.Load(@"YourReport.frx");
//...
MySqlDataConnection conn = new MySqlDataConnection();
conn.ConnectionString = "your connection string";
conn.CreateAllTables();
report.Dictionary.Connections.Add(conn);
To repeat, there are two methods:
1. Push data from the main app with report.RegisterData; in that case, do not also call report.Dictionary.Connections.Add(conn) (a sketch of the push approach follows this list).
2. Pull data: define the DB connection and table in the report template; you can alter the connection string from the main app.
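For completeness, a minimal sketch of the push approach under the same assumptions as above. The test.frx path and the "table1" data source name come from this thread; the DataTable columns and rows are made up for illustration.

using System.Data;
using FastReport;

class PushDataDemo
{
    static void Main()
    {
        // Build (or query) the data in the application itself.
        // "table1" must match the data source name the data band in test.frx is bound to.
        DataTable table1 = new DataTable("table1");
        table1.Columns.Add("name", typeof(string));
        table1.Rows.Add("first article");
        table1.Rows.Add("second article");

        using (Report report = new Report())
        {
            report.Load(@"..\test.frx");

            // Push the data in; no MySqlDataConnection and no Dictionary.Connections.Add here.
            report.RegisterData(table1, "table1");
            report.GetDataSource("table1").Enabled = true;

            report.Show();
        }
    }
}

Whichever approach you choose, the advice in this thread is to use one of them, not both at once.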
Find the duplicate documents with an aggregation on the duplicate key; you can then delete them in one of two ways.

a. Collect all duplicate _ids and remove them in one go:

var duplicates = [];

db.collectionName.aggregate([
    { $match: {
        name: { "$ne": '' }              // selection criteria; drop this $match if you don't need it
    }},
    { $group: {
        _id: { name: "$name" },          // can be grouped on multiple properties
        dups: { "$addToSet": "$_id" },
        count: { "$sum": 1 }
    }},
    { $match: {
        count: { "$gt": 1 }              // keep only groups with more than one document, i.e. duplicates
    }}
],
{ allowDiskUse: true }                   // lets the aggregation spill to disk if the set is large
)                                        // you can print the result up to this point to inspect the duplicates
.forEach(function(doc) {
    doc.dups.shift();                    // keep the first _id and mark the rest for deletion
    doc.dups.forEach(function(dupId) {
        duplicates.push(dupId);          // collect all duplicate _ids
    });
});

// Optional: print the _ids that are about to be deleted
printjson(duplicates);

// Remove all duplicates in one go
db.collectionName.remove({ _id: { $in: duplicates } });
b. Or delete the duplicates one document at a time:
db.collectionName.aggregate([
    { $match: {
        "source_references.key": { "$ne": '' }       // selection criteria; you can remove this $match stage if you want
    }},
    { $group: {
        _id: { key: "$source_references.key" },      // alias the dotted path with a plain field name; can be grouped on multiple properties
        dups: { "$addToSet": "$_id" },
        count: { "$sum": 1 }
    }},
    { $match: {
        count: { "$gt": 1 }                          // keep only groups with more than one document, i.e. duplicates
    }}
],
{ allowDiskUse: true }                               // lets the aggregation spill to disk if the set is large
)                                                    // you can print the result up to this point to inspect the duplicates
.forEach(function(doc) {
    doc.dups.shift();                                // keep the first _id and mark the rest for deletion
    db.collectionName.remove({ _id: { $in: doc.dups } });   // delete the remaining duplicates
});
Reference: Stack Overflow