My Suggestions / Bugs Thread

Well, yes, you are putting all the data into an array, which is, quite literally, all in the same document. What you might want to consider is using an aggregation to unwind each line of the array into its own document, and then using that aggregation as the source for the CSV export.
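For illustration, here is a minimal plain-JavaScript sketch of what $unwind does (the collection shape and field names are made up); in Studio 3T you would run the $unwind stage itself in the Aggregation Editor or IntelliShell:

```javascript
// Sketch of $unwind semantics, no server needed: each element of the
// "lines" array becomes its own document, with the other fields copied.
const docs = [
  { _id: 1, order: "A", lines: [{ sku: "x", qty: 2 }, { sku: "y", qty: 1 }] },
  { _id: 2, order: "B", lines: [{ sku: "z", qty: 5 }] },
];

function unwind(documents, field) {
  return documents.flatMap(d =>
    (d[field] || []).map(el => ({ ...d, [field]: el }))
  );
}

const flat = unwind(docs, "lines");
console.log(flat.length);        // 3 documents, one per array element
console.log(flat[0].lines.sku);  // "x"

// The real pipeline in IntelliShell would be something like:
// db.orders.aggregate([ { $unwind: "$lines" } ])
// and that aggregation result is what you'd feed the CSV export.
```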

Would an option just to disable the default query being inserted also work or do you rely on it as a starting point?

I’m considering a couple of other scenarios of bookmark insertion, not just with the default query.

Yes, that would do. I’m never using the default query anyways.

Sorry, I only noticed these new replies just now but I thought I’d chime in anyway. :slight_smile:

New suggestion: If I open IntelliShell, there is a default .find("") statement and the cursor places itself inside the quotes. If I then select a bookmark to load, the code gets loaded within the quotes: the default query ends up "around" my imported bookmark code and therefore produces an error. I would suggest: if the default query was not changed and a bookmark is loaded → delete the query before pasting in the bookmark code.

You can open the IntelliShell on the collection’s database instead of on the collection itself. That way, no default query should be there. :slight_smile:

In addition to that, we also changed how bookmarks behave in the IntelliShell starting with 2023.1. Bookmarks now aren’t pasted but rather replace the script content (much like files do).

1 Like

Thanks for these additions. In the newest version there is also a new IntelliShell option for that: Edit → Preferences → IntelliShell → [ ] Automatically execute the default query when opening a new IntelliShell tab
Just leave it unchecked and voila! :partying_face:

1 Like

Suggestion: Please let us submit the "Follow reference" popup by pressing [Enter] in the "Select target" input field. Currently [Tab] and [Space] are the fastest way, which is not very intuitive.

Can you give Ctrl + Enter a shot? A lot of our dialogs (if not all) ignore Enter as we have many multi-line-editor dialogs. :slight_smile:

Thanks rico, Ctrl + Enter is working just fine. I will get used to that :+1:

1 Like

Suggestion: Imagine you are in a collection view and you have put together a nice query with filters, projection, sort and so on. Then you see that you will need data from different collections, and therefore an aggregation based on that query, but with some $lookup additions.
It would be awesome to open the aggregation builder (while in the collection view) and see my query converted into aggregation stages ($match, $project and $sort), so that I just need to insert the new stuff.
Maybe that's achievable with manageable dev effort.
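A rough sketch of why this feels mechanically feasible (the filter, projection, and sort values here are hypothetical): the three parts of a collection-view query translate one-to-one into pipeline stages.

```javascript
// Hypothetical collection-view query parts:
const filter = { status: "open" };
const projection = { name: 1, created: 1 };
const sort = { created: -1 };

// The mechanical translation into aggregation stages:
const pipeline = [
  { $match: filter },
  { $project: projection },
  { $sort: sort },
];

console.log(JSON.stringify(pipeline[0])); // {"$match":{"status":"open"}}
// In IntelliShell this would then run as: db.collection.aggregate(pipeline)
// with new $lookup stages inserted wherever needed.
```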

1 Like

Hi Hannes, this is a great idea, thank you! I’ve made a note of it for the relevant team who work with this feature.

1 Like

Suggestion: It would be awesome, really awesome, to have some kind of progress indicator when importing/exporting collections (especially imports of big collections). I never know whether the import crashed (within one collection) and is running endlessly, or maybe just needs more time.
image

1 Like

We attempt to show progress for mongodump/mongorestore, but sometimes we can't parse the output properly. This is why your top Import is showing the "in progress and error" icon: we couldn't parse one of the lines and we're not sure if it's working or not. This is obviously a bug and we try to squash all of these parsing errors. :slight_smile:

Could you send us the status log (minus any sensitive information, of course)? Then we can figure out which line caused the issue. You can get to it like this:
image

The status log will look something like this:

The status log is also available for in-progress operations, so you could also use this as a workaround for seeing if the operation is stuck or not. :wink:

1 Like
  1. You kind of can check the progress of a single collection import if you go into the collection that is currently importing and simply count the documents (view panel, bottom right: "Count documents"). This number grows while importing and you can refresh it whenever you want. (At least with a mongodump archive restore this seems to work.) We discovered that our collection import hangs after 1-2 million restored documents: the import still shows as in progress, but no new documents are coming in.

  2. Status output of the cancelled import:

Thu Apr 27 10:16:01 CEST 2023: BSON Import - 2023-04-27T10:16:01.595+0200    The --db and --collection flags are deprecated for this use-case; please use --nsInclude instead, i.e. with --nsInclude=${DATABASE}.${COLLECTION}
Thu Apr 27 10:16:01 CEST 2023: BSON Import - Preparing collections to restore from
Thu Apr 27 10:16:01 CEST 2023: BSON Import - schulcloud.files - Reading metadata from archive 'C:\Users\Administrator\Desktop\Backups u Doku\JHD-45913\mongodb-schulcloud-20230406.gz'
Thu Apr 27 10:16:01 CEST 2023: BSON Import - schulcloud.files - Restoring from archive 'C:\Users\Administrator\Desktop\Backups u Doku\JHD-45913\mongodb-schulcloud-20230406.gz'
Thu Apr 27 10:21:08 CEST 2023: BSON Import - schulcloud.files - Cancelled
Thu Apr 27 10:21:08 CEST 2023: BSON Import - Done
Thu Apr 27 10:21:08 CEST 2023: Done
Thu Apr 27 10:21:08 CEST 2023: BSON Import - schulcloud.files - Done
Thu Apr 27 10:21:08 CEST 2023: BSON Import - Cancelled

2 Likes

Suggestion: On import it would be nice to specify a query so that only specific content of that backup needs to be restored. Then I would not need to import collections with double-digit millions of documents.
But I assume mongorestore doesn't support that with archive files? :thinking:

1 Like

Addition to my previous post: A colleague let the import run and after 27 minutes it finished restoring. So maybe it didn't crash and my "Count documents" trick isn't so reliable.

1 Like

Unfortunately mongorestore doesn’t let you run queries on the to-be-restored documents, indeed. The best it can do is include/exclude specified databases and collections.

mongodump, however, does let you run queries - maybe you could use that to narrow down which documents are stored in the dump? :slight_smile:
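For example (host, database, collection, filter, and file names below are all placeholders), a query-restricted dump could look like this; note that --query requires a single --db/--collection pair:

```shell
# Dump only the matching documents, so the later restore stays small.
# --query takes an extended-JSON filter; --queryFile is an alternative
# for longer filters.
mongodump \
  --uri="mongodb://localhost:27017" \
  --db=schulcloud --collection=files \
  --query='{ "createdAt": { "$gte": { "$date": "2023-01-01T00:00:00Z" } } }' \
  --gzip --archive=files-subset.gz
```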

1 Like

Not an option, no. Our tech team backs up entire product databases with all data (nothing else would be reasonable), but we in support only need parts of the data to respond to support cases.
My import has been running for 3 hours now and has not completed successfully. And that's my… 10th try, trying different combinations of import options (no validation, no indexes etc.). So the dependability of that import feature is… problematic. I'm currently not able to respond to that support case properly.
Any ideas what I can do to finally succeed?

1 Like

Unfortunately, in the case of mongodump and mongorestore, there's not much that Studio 3T itself does; it's really mostly handled by those two tools.

You could try running the command manually on the command line (the command can be copied from the status log, you’ll have to add the needed passwords though) and see if that’s any better. If not, this may just be a mongorestore issue. If it is better, there may be a bug in Studio 3T.

You could also try configuring a different/newer version in Preferences - MongoDB Tools. It could be that this is fixed in a newer release of mongorestore which we aren’t bundling yet (we currently bundle 100.5.4, the newest release seems to be 100.7.0).

1 Like

It's definitely not a Studio 3T problem, I know that by now.
I've tried the Studio 3T (mongo-tools 100.5) restore against both MongoDB v5 and v6 servers, as well as mongo-tools 100.7 (mongorestore.exe via CMD with parameters) against v5 and v6 databases.
Command for example: mongorestore --verbose --nsInclude=schulcloud.files --drop --noIndexRestore --stopOnError --bypassDocumentValidation --archive="c:\Users\Administrator\Desktop\Backups u Doku\JHD-45913\mongodb-schulcloud-20230413.gz" --gzip --numInsertionWorkersPerCollection=1
The restore starts but halts somewhere in the process, always at a different point:


So sometimes it restores almost 3 GB (of 8 GB in this collection), sometimes just 64 MB; everything in between has happened already. But at some point CMD just goes quiet, Ctrl+C can't interrupt the process anymore, and I need to forcefully close CMD.
mongod.log just prints {"t":{"$date":"2023-05-12T15:02:08.560+02:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":"[1683896528:559770][7284:140716900309808], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 72935, snapshot max: 72935 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 689261"}}
once every minute, but nothing else is happening; the local collection gets no new entries.
One of our techies restored this backup with mongorestore successfully (but with an entirely different setup, Linux and so on).
So it has to be something in my system (1 TB SSD, 64 GB RAM, so the machine is most probably not too weak) that causes this problem :man_shrugging: :disappointed_relieved:

2 Likes

Different story, back to thread topic:
I really like the new Collection History feature because it takes a step in the right direction (in terms of what I desire).
Suggestions:

  1. The Collection History is kind of very hard to find. It truly is. After reading the newsletter, turning it on in Preferences and deleting one item, I searched for the UI/button for maybe 5 full minutes. I had to go to the knowledge base and read the new "Restore deleted MongoDB documents" section to find it. Why isn't it in the program menu (File / Edit / Database / … / View / …)? Why isn't there a button in the top button bar (Connect / Collection / IntelliShell …)? Or within any right-click context menu, for example on a collection right-click? Please make it more visible :wink:
  2. You have a broken link in this knowledge base article: How to Insert & Update MongoDB Documents | Studio 3T → Article navigation → both “Managing the Collection History” and “Restore deleted MongoDB documents” are not yet linked.
  3. Currently the history only receives items that were deleted manually, by hand, right? It would be awesome if deletions via IntelliShell (a script with deleteMany or so) and aggregations (an aggregate with $unset or so) would also result in a local backup. That would make it a real deletion history; currently it's a bit limited. Most of our deletions are done by scripts or aggregations, "one does not simply delete items by hand in production" (<- possible meme… yeah, meme time!)
    (yeah, it’s Friday, there’s a minute of free time for bullshitting around :wink: )
  4. Please expand this feature to hold edits as well. 90% of the stuff we do is edits, not deletes. It would be nice to be able to undo these as well, as in 3.), including the script stuff.

Keep up the good work!

2 Likes