Support » Plugin: Import any XML or CSV File to WordPress » speed up imports with multiple similar IDs

  • Resolved mmtomm

    (@mmtomm)


    Hi,

    I'll try my best to explain:
    I have about 5,700 records in my CSV.
    There are three key IDs: a colorID, a sizeID and a SKU.
    All three IDs appear multiple times in the CSV – even the SKU appears multiple times, since it is used to create variations – and the final 'general ID' of a product is the combination of all three.
    I'm also using Reusable Content & Text Blocks by Loomisoft to create several reusable blocks, e.g. for detailed color descriptions, so that the long descriptions live in one block instead of being repeated in every product description.
    Since those blocks are created by looping through the CSV and finding the same colorID or sizeID over and over, the import creates one block and then, for every further occurrence of the same ID, I see lots of statements in the log like (translated from the German log):
    combine data for…
    duplicate post detected for…
    updating…
    and so on…
    until it creates the next block.
    In one case, for example, only 134 blocks are generated out of 5,700 records (which is correct), but the import has to keep looping and updating for every repeated occurrence it finds, which of course slows it down.

    Is there a way to tell the import to ignore those duplicates?

    Using a different unique ID is not possible, because the raw data simply doesn't contain one – or, put another way, the import would then no longer create all the blocks that are needed.

    I hope I could explain the question and clarify the issue despite my not-so-good English. It is not a bug or an error – everything works fine – it just takes a lot of time finding and updating content that was only just created in the previous pass through the CSV.
    Thank you for your assistance
    Tom
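
    One way to stop the repeated "combine data / duplicate post detected / updating" passes would be to prepare a second, de-duplicated CSV that only the reusable-block import reads, with just one row per colorID. This is a preprocessing workaround rather than a WP All Import option; the sketch below assumes Python is available, and the file names and the colorID column header are placeholders for whatever the real file uses.

    import csv

    # Keep only the first row per colorID and write it to a slimmed-down CSV for
    # the block import; file names and the "colorID" header are placeholders.
    seen = set()

    with open("products.csv", newline="", encoding="utf-8") as src, \
         open("color-blocks.csv", "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            if row["colorID"] in seen:   # this colorID was already written once
                continue                 # skip the duplicate row
            seen.add(row["colorID"])
            writer.writerow(row)

    Running the same script with sizeID as the key would produce the file for the size blocks, while the product import itself keeps using the full 5,700-row CSV.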

  • Plugin Author WP All Import

    (@wpallimport)

    Hi @mmtomm

    Based on your description, I think you can speed things up by using an XPath filter to get rid of the duplicates – see: http://www.wpallimport.com/documentation/advanced/filtering-with-xpath/.
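
    For illustration, a de-duplicating filter of this kind could keep only the first row per colorID via the preceding-sibling axis, e.g. /node[not(column_2 = preceding-sibling::node/column_2)] – here column_2 is only a placeholder for the colorID column, and whether the plugin's filter field supports this axis (and how fast it is on 5,700 rows, since every row is compared against all earlier ones) hasn't been verified. A quick local check of the idea with Python and lxml, using the <node>-per-row structure the CSV is converted to (wrapped in a <data> root just for this test):

    from lxml import etree

    # Tiny sample in the <node>-per-row shape built from a CSV; column names are placeholders.
    sample = b"""
    <data>
      <node><column_2>RED01</column_2><column_6>L</column_6></node>
      <node><column_2>RED01</column_2><column_6>M</column_6></node>
      <node><column_2>BLU07</column_2><column_6>L</column_6></node>
    </data>
    """

    tree = etree.fromstring(sample)

    # Keep a <node> only if no earlier sibling already carries the same column_2
    # value, i.e. only the first occurrence of each colorID survives.
    kept = tree.xpath("/data/node[not(column_2 = preceding-sibling::node/column_2)]")

    print(len(kept))                               # 2 instead of 3
    print([n.findtext("column_2") for n in kept])  # ['RED01', 'BLU07']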

    Thread Starter mmtomm

    (@mmtomm)

    Hi,

    I tried an XPath filter as well:
    /node[column_17[1][not(contains(.,"Pants"))] or column_17[1][not(contains(.,"Accessoire"))] and column_6[1] = "L"]

    In this example it is meant to keep only garments in size L.
    When I apply the same filter in Excel, only about 850 records are left, but in WP All Import the next step still loads all 5,700 records, and it seems to loop through all of them when processing the import.

    Plugin Author WP All Import

    (@wpallimport)

    Hi @mmtomm

    Please try this XPath:

    /node[(column_17[1][not(contains(.,"Pants"))] or column_17[1][not(contains(.,"Accessoire"))]) and column_6[1] = "L"]

    If it doesn’t work, we’ll need to see your data file – in this case, open a ticket at http://www.wpallimport.com/support/ with details on the issue and a link to this ticket.
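
    The parentheses are the important change: in XPath, as in Python, and binds more tightly than or, so the unparenthesised filter from the previous reply is read as A or (B and C) and the size check is skipped whenever the first contains() test passes. A minimal illustration of the precedence, using Python booleans as stand-ins for the three conditions:

    # a, b, c stand for the three conditions in the filter above
    a, b, c = True, False, False   # e.g. an "Accessoire" row (not "Pants") whose size is not "L"
    print(a or b and c)            # True  -> the row passes even though the size is wrong
    print((a or b) and c)          # False -> the row is filtered out, as intended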

    Thread Starter mmtomm

    (@mmtomm)

    Thank you, now it is filtering correctly.
    By the way: I had used the drag-and-drop interface, which built the XPath I posted above.

  • The topic ‘speed up imports with multiple similar IDs’ is closed to new replies.