mdc101 Posted June 26, 2011

Hi Folks,

I am scraping results and keep getting the same URLs from the website that I don't want. The first 27 rows are always the same, and I want to delete them from the text file before I use the file to get the information I need. What is the best approach to deleting the top 27 rows and removing any blank rows?

Process I want to achieve for the clean-up:
- delete the top 27 rows
- delete any blank rows
- save the file

Thanks,
Matt
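The three clean-up steps above can be sketched in plain Python, assuming the results live in an ordinary text file (the path name and the `skip=27` parameter here are placeholders based on the question, not from any specific tool):

```python
def clean_file(path, skip=27):
    """Drop the first `skip` rows and any blank rows, then save in place."""
    with open(path) as f:
        lines = f.readlines()
    # Keep only the rows after the first `skip`, skipping blank ones.
    kept = [line for line in lines[skip:] if line.strip()]
    with open(path, "w") as f:
        f.writelines(kept)

# clean_file("results.txt")  # hypothetical path to the scraped file
```

Rewriting the file in place matches the "save the file" step; if you want to keep the original for debugging, write `kept` to a second path instead.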
rumen Posted June 26, 2011

loop
    if evaluate list position < 26
        replace with nothing
    next

To remove the blank rows, mark them and remove duplicates.
mdc101 Posted June 26, 2011

Thanks, that did the trick.