We maintain the user database in our environment. Every week, a CSV export is placed automatically in a specific folder. A scheduled script then reads the file and uploads the user information into the database. The problem is that every once in a while, duplicate rows end up in the CSV file, and each duplicate creates an extra entry for the same user in the database.

To handle this scenario, we decided to add a step that removes duplicate rows (rows identical to every other value in another row) before the upload runs. This can be achieved with the Import-Csv and Sort-Object cmdlets.

After the contents of the CSV file are sorted with Sort-Object, you can add the -Unique switch so that only unique rows are returned.
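A minimal sketch of the deduplication step is shown below. The folder and file names (C:\Imports\users.csv and users-deduplicated.csv) are placeholders; substitute the actual paths used by the upload script.

    # Read the weekly export into objects, sort on every property,
    # keep only unique rows, and write the result to a new file.
    $rows = Import-Csv -Path 'C:\Imports\users.csv'

    $unique = $rows | Sort-Object -Property * -Unique

    $unique | Export-Csv -Path 'C:\Imports\users-deduplicated.csv' -NoTypeInformation

Note that Sort-Object -Unique compares values case-insensitively by default, and the rows in the output file will be in sorted order rather than the original order.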

Example

Input CSV:
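For illustration, a hypothetical weekly export might look like this, where one user row appears twice:

    UserName,Department,Email
    jsmith,Finance,jsmith@example.com
    mjones,HR,mjones@example.com
    akhan,IT,akhan@example.com
    mjones,HR,mjones@example.com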

Output CSV:
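After sorting and applying -Unique, the duplicate row is dropped and each user appears only once; the rows are also reordered by the sort:

    UserName,Department,Email
    akhan,IT,akhan@example.com
    jsmith,Finance,jsmith@example.com
    mjones,HR,mjones@example.com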