I want to report an issue with exporting tables to Excel. Exporting tables has always been a very useful feature for data analysis, and when viewing tables inside Tulip, words with accented characters display correctly.
However, those of us working with accented text (for example, many Spanish words) are unable to export table data correctly. When the exported file is opened in Excel, the accented characters are corrupted or transformed into incorrect symbols.
Accented vowels and characters such as á, é, í, ó, ú, and ñ get transformed after export.
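For anyone who wants to reproduce the pattern, decoding UTF-8 bytes as Windows-1252 produces exactly this kind of garbling; a minimal Python sketch, assuming that mismatch:

```python
# Sketch: reproduce the corruption by decoding UTF-8 bytes as Windows-1252,
# which is what Excel tends to do when a CSV carries no byte-order mark.
for ch in ["á", "é", "í", "ó", "ú", "ñ"]:
    garbled = ch.encode("utf-8").decode("cp1252")
    print(f"{ch} -> {garbled}")  # e.g. á -> Ã¡, ñ -> Ã±
```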
Is there any difference when you view the CSV content first in a text editor (e.g. Notepad or Notepad++)? Or when you import the CSV into Excel without any data transformation?
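One more quick check, if it helps: whether the file starts with a UTF-8 byte-order mark, since that is what lets Excel autodetect UTF-8 on double-click. A sketch, where "export.csv" stands in for the exported file:

```python
# Sketch: look for the UTF-8 byte-order mark (BOM) at the start of the CSV.
# Without it, Excel typically falls back to the system ANSI code page.
with open("export.csv", "rb") as f:
    head = f.read(3)
print("UTF-8 BOM present" if head == b"\xef\xbb\xbf" else "no BOM; Excel may assume ANSI")
```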
We have issues with trailing zeros being dropped when Excel “tries to help”.
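In case it helps when generating a CSV yourself, a common workaround is to wrap such values as Excel text formulas so the zeros survive. A sketch; the column names here are hypothetical:

```python
import csv

# Sketch: writing a value as ="00451" makes Excel treat it as text,
# preserving leading/trailing zeros instead of coercing to a number.
with open("zeros_protected.csv", "w", newline="", encoding="utf-8-sig") as f:
    writer = csv.writer(f)
    writer.writerow(["part_no", "tolerance"])
    writer.writerow(['="00451"', '="1.50"'])
```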
Using “REPLACE” functions could be a temporary workaround, but it’s not a scalable or reliable solution, especially for teams handling large volumes of multilingual data. It would also require us to manually identify and clean up each character every time we export a file, which adds unnecessary overhead.
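That said, if a file is already corrupted, a single encode/decode round-trip repairs every character at once instead of needing a REPLACE per symbol. A sketch, assuming the usual UTF-8-read-as-Windows-1252 corruption:

```python
# Sketch: undo mojibake in one pass, assuming UTF-8 text was mistakenly
# decoded as Windows-1252 somewhere along the way.
def fix_mojibake(text: str) -> str:
    try:
        return text.encode("cp1252").decode("utf-8")
    except (UnicodeEncodeError, UnicodeDecodeError):
        return text  # already clean, or corrupted differently

print(fix_mojibake("InstalaciÃ³n"))  # -> Instalación
```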
I understand that Excel sometimes alters content when opening CSVs (such as trimming leading/trailing zeros), but in this case the problem seems to be character encoding: likely a mismatch between the UTF-8 the file is written in and the encoding Excel expects by default (often ANSI/Windows-1252).
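If whoever generates the export can change it, writing the file with a UTF-8 byte-order mark is usually enough for Excel to pick the right encoding on open. A minimal sketch; the file and column names are invented:

```python
import csv

# Sketch: "utf-8-sig" prepends a BOM so Excel autodetects UTF-8 on open.
with open("tabla.csv", "w", newline="", encoding="utf-8-sig") as f:
    writer = csv.writer(f)
    writer.writerow(["estación", "operación", "señal"])
    writer.writerow(["Línea 1", "Montaje", "OK"])
```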
Hi @Jhondy, is this the same issue as here? I acknowledge that CSV encodings with Excel can be tricky. Did the Excel autodetect suggestion from the last thread work?
Yes, this method works. It’s a good solution if you don’t export frequently. However, it’s a bit tedious if several people need to export multiple tables daily.
We can use this solution, but keep in mind that the Tulip platform generates its CSV exports as Unicode (UTF-8).
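Given that, one way to sidestep Excel's encoding detection entirely is to convert the UTF-8 CSV to .xlsx in a small script before anyone opens it. A sketch using pandas; it assumes openpyxl is installed, and "export.csv" stands in for the file Tulip produced:

```python
import pandas as pd

# Sketch: read the CSV with an explicit UTF-8 encoding, then save as .xlsx
# so Excel never has to guess an encoding at all.
df = pd.read_csv("export.csv", encoding="utf-8")
df.to_excel("export.xlsx", index=False)
```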