Bash read & parse file - loop performance
I'm trying to read and parse a file in bash. I need dd to convert it from EBCDIC to ASCII, then loop, reading x bytes at a time and writing each x-byte chunk as a row in a new file:
#!/bin/bash
# $1 = input file in EBCDIC
# $2 = row length
# $3 = output file

# convert to ASCII and replace NUL (^@) with ' '
dd conv=ascii if=$1 | sed 's/\x0/ /g' > $3.tmp

file=$(cat "$3.tmp")
sindex=0   # substring offsets are zero-based
findex=$2

# remove any previous output file
rm -f $3

echo "filesize: ${#file}"

# loop, retrieving each fixed-size record and appending it to the file
while true; do
    # append one record to the output file
    echo "${file:sindex:findex}" >> $3

    # advance to the next record
    sindex=$((sindex+findex))

    # break at end of file (test the offset, not the record length)
    if [ $sindex -ge ${#file} ]; then
        break
    fi
done

# remove the tmp file
rm $3.tmp
Any way to make the whole process faster?
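(To compare candidates, each run can be timed with the shell's time built-in; the script and file names below are only placeholders:)

# time a run on a sample EBCDIC file with 100-byte records
time ./convert.sh input.ebc 100 output.txt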
Answering my own question. The answer is a simple use of fold!
# $1 = input file in EBCDIC
# $2 = file record length (e.g. 100)
# $3 = output file (non-delimited, row-separated file)
# dd   : convert EBCDIC to ASCII
# sed  : replace NUL (^@) with space ' '
# fold : wrap input to the specified width (the record length)
dd conv=ascii if=$1 | sed 's/\x0/ /g' | fold -$2 > $3
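For completeness, a usage sketch, assuming the pipeline above is saved as a script (the script and file names here are hypothetical):

# split input.ebc (100-byte EBCDIC records) into rows in output.txt
./unpack.sh input.ebc 100 output.txt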
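A possible one-step alternative, assuming GNU dd, where conv=ascii implies conv=unblock: setting cbs to the record length makes dd itself split the converted stream into newline-terminated rows. Note the output is not byte-identical to the fold version, since trailing spaces are trimmed from each record and embedded NULs are not replaced (there is no sed step):

# GNU dd: convert EBCDIC to ASCII and unblock fixed-length
# records of $2 bytes into newline-terminated rows
# (trailing spaces are removed from each record)
dd conv=ascii cbs=$2 if=$1 of=$3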