Package csv reads and writes comma-separated values (CSV) files. There are many kinds of CSV files; this package supports the format described in RFC 4180.
A csv file contains zero or more records of one or more fields per record. Each record is separated by the newline character. The final record may optionally be followed by a newline character.
var (
    ErrBareQuote  = errors.New("bare \" in non-quoted-field")
    ErrQuote      = errors.New("extraneous or missing \" in quoted-field")
    ErrFieldCount = errors.New("wrong number of fields")

    // Deprecated: ErrTrailingComma is no longer used.
    ErrTrailingComma = errors.New("extra delimiter at end of line")
)
These are the errors that can be returned in ParseError.Err.
Functions
This section is empty.
Types
type ParseError
type ParseError struct {
    StartLine int   // Line where the record starts
    Line      int   // Line where the error occurred
    Column    int   // Column (1-based byte index) where the error occurred
    Err       error // The actual error
}
A ParseError is returned for parsing errors. Line and column numbers are 1-indexed.
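As a minimal sketch (the input below is made up for illustration), errors.As can recover the *ParseError returned by ReadAll when a bare quote appears in an unquoted field:

package main

import (
    "encoding/csv"
    "errors"
    "fmt"
    "strings"
)

func main() {
    // The stray quote in the second record triggers ErrBareQuote.
    in := "a,b,c\nd,e\"f,g\n"
    r := csv.NewReader(strings.NewReader(in))

    _, err := r.ReadAll()
    var pe *csv.ParseError
    if errors.As(err, &pe) {
        fmt.Println(pe.StartLine, pe.Line, pe.Column, pe.Err)
    }
}

Because ParseError wraps its underlying error, errors.Is(err, csv.ErrBareQuote) also reports true here.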
type Reader struct {
    // Comma is the field delimiter.
    // It is set to comma (',') by NewReader.
    // Comma must be a valid rune and must not be \r, \n,
    // or the Unicode replacement character (0xFFFD).
    Comma rune

    // Comment, if not 0, is the comment character. Lines beginning with the
    // Comment character without preceding whitespace are ignored.
    // With leading whitespace the Comment character becomes part of the
    // field, even if TrimLeadingSpace is true.
    // Comment must be a valid rune and must not be \r, \n,
    // or the Unicode replacement character (0xFFFD).
    // It must also not be equal to Comma.
    Comment rune

    // FieldsPerRecord is the number of expected fields per record.
    // If FieldsPerRecord is positive, Read requires each record to
    // have the given number of fields. If FieldsPerRecord is 0, Read sets it to
    // the number of fields in the first record, so that future records must
    // have the same field count. If FieldsPerRecord is negative, no check is
    // made and records may have a variable number of fields.
    FieldsPerRecord int

    // If LazyQuotes is true, a quote may appear in an unquoted field and a
    // non-doubled quote may appear in a quoted field.
    LazyQuotes bool

    // If TrimLeadingSpace is true, leading white space in a field is ignored.
    // This is done even if the field delimiter, Comma, is white space.
    TrimLeadingSpace bool

    // ReuseRecord controls whether calls to Read may return a slice sharing
    // the backing array of the previous call's returned slice for performance.
    // By default, each call to Read returns newly allocated memory owned by the caller.
    ReuseRecord bool

    // Deprecated: TrailingComma is no longer used.
    TrailingComma bool
    // contains filtered or unexported fields
}
A Reader reads records from a CSV-encoded file.
As returned by NewReader, a Reader expects input conforming to RFC 4180. The exported fields can be changed to customize the details before the first call to Read or ReadAll.
The Reader converts all \r\n sequences in its input to plain \n, including in multiline field values, so that the returned data does not depend on which line-ending convention an input file uses.
package main

import (
    "encoding/csv"
    "fmt"
    "log"
    "strings"
)

func main() {
    in := `first_name;last_name;username
"Rob";"Pike";rob
# lines beginning with a # character are ignored
Ken;Thompson;ken
"Robert";"Griesemer";"gri"
`
    r := csv.NewReader(strings.NewReader(in))
    r.Comma = ';'
    r.Comment = '#'

    records, err := r.ReadAll()
    if err != nil {
        log.Fatal(err)
    }

    fmt.Print(records)
}
Output:

[[first_name last_name username] [Rob Pike rob] [Ken Thompson ken] [Robert Griesemer gri]]
func NewReader
func NewReader(r io.Reader) *Reader
NewReader returns a new Reader that reads from r.
(*Reader) FieldPos <- go1.17
func (r *Reader) FieldPos(field int) (line, column int)
FieldPos returns the line and column corresponding to the start of the field with the given index in the slice most recently returned by Read. Numbering of lines and columns starts at 1; columns are counted in bytes, not runes.
If this is called with an out-of-bounds index, it panics.
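As a sketch of one possible use (the input and the choice of field index are invented for illustration), FieldPos can map a field of the most recently read record back to its position in the input:

package main

import (
    "encoding/csv"
    "fmt"
    "io"
    "log"
    "strings"
)

func main() {
    in := "name,age\nKen,abc\n"
    r := csv.NewReader(strings.NewReader(in))
    for {
        record, err := r.Read()
        if err == io.EOF {
            break
        }
        if err != nil {
            log.Fatal(err)
        }
        // FieldPos refers to the slice most recently returned by Read.
        line, col := r.FieldPos(1)
        fmt.Printf("field %q starts at line %d, column %d\n", record[1], line, col)
    }
}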
(*Reader) InputOffset <- go1.19
func (r *Reader) InputOffset() int64
InputOffset returns the input stream byte offset of the current reader position. The offset gives the location of the end of the most recently read row and the beginning of the next row.
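For example, InputOffset can be sampled after each Read to see how many input bytes each record, including its line ending, consumed; a minimal sketch:

package main

import (
    "encoding/csv"
    "fmt"
    "io"
    "log"
    "strings"
)

func main() {
    in := "a,b,c\nd,e,f\n"
    r := csv.NewReader(strings.NewReader(in))
    prev := r.InputOffset() // 0 before anything has been read
    for {
        record, err := r.Read()
        if err == io.EOF {
            break
        }
        if err != nil {
            log.Fatal(err)
        }
        off := r.InputOffset()
        fmt.Printf("%v spans bytes [%d, %d)\n", record, prev, off)
        prev = off
    }
}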
(*Reader) Read
func (r *Reader) Read() (record []string, err error)
Read reads one record (a slice of fields) from r. If the record has an unexpected number of fields, Read returns the record along with the error ErrFieldCount. Except for that case, Read always returns either a non-nil record or a non-nil error, but not both. If there is no data left to be read, Read returns nil, io.EOF. If ReuseRecord is true, the returned slice may be shared between multiple calls to Read.
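The sketch below (with a deliberately short second record) shows the one case where Read returns both a record and an error, detected here with errors.Is:

package main

import (
    "encoding/csv"
    "errors"
    "fmt"
    "io"
    "log"
    "strings"
)

func main() {
    // The second record has two fields instead of three.
    in := "a,b,c\nd,e\n"
    r := csv.NewReader(strings.NewReader(in))
    for {
        record, err := r.Read()
        if err == io.EOF {
            break
        }
        if errors.Is(err, csv.ErrFieldCount) {
            fmt.Println("short record, keeping it anyway:", record)
            continue
        }
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(record)
    }
}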
(*Reader) ReadAll
func (r *Reader) ReadAll() (records [][]string, err error)
ReadAll reads all the remaining records from r. Each record is a slice of fields. A successful call returns err == nil, not err == io.EOF. Because ReadAll is defined to read until EOF, it does not treat end of file as an error to be reported.
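For example, a whole file can be read in a single call; "data.csv" below is a hypothetical path used only for illustration:

package main

import (
    "encoding/csv"
    "fmt"
    "log"
    "os"
)

func main() {
    f, err := os.Open("data.csv") // hypothetical input file
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    records, err := csv.NewReader(f).ReadAll()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("read %d records\n", len(records))
}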
type Writer struct {
    Comma   rune // Field delimiter (set to ',' by NewWriter)
    UseCRLF bool // True to use \r\n as the line terminator
    // contains filtered or unexported fields
}
A Writer writes records using CSV encoding.
As returned by NewWriter, a Writer writes records terminated by a newline and uses ',' as the field delimiter. The exported fields can be changed to customize the details before the first call to Write or WriteAll.
If UseCRLF is true, the Writer ends each output line with \r\n instead of \n.
The writes of individual records are buffered. After all data has been written, the client should call the Flush method to guarantee all data has been forwarded to the underlying io.Writer. Any errors that occurred should be checked by calling the Error method.
package main

import (
    "encoding/csv"
    "log"
    "os"
)

func main() {
    records := [][]string{
        {"first_name", "last_name", "username"},
        {"Rob", "Pike", "rob"},
        {"Ken", "Thompson", "ken"},
        {"Robert", "Griesemer", "gri"},
    }

    w := csv.NewWriter(os.Stdout)

    for _, record := range records {
        if err := w.Write(record); err != nil {
            log.Fatalln("error writing record to csv:", err)
        }
    }

    // Write any buffered data to the underlying writer (standard output).
    w.Flush()

    if err := w.Error(); err != nil {
        log.Fatal(err)
    }
}
Output:

first_name,last_name,username
Rob,Pike,rob
Ken,Thompson,ken
Robert,Griesemer,gri
func NewWriter
func NewWriter(w io.Writer) *Writer
NewWriter returns a new Writer that writes to w.
(*Writer) Error <- go1.1
func (w *Writer) Error() error
Error reports any error that has occurred during a previous Write or Flush.
(*Writer) Flush
func (w *Writer) Flush()
Flush writes any buffered data to the underlying io.Writer. To check if an error occurred during the Flush, call Error.
(*Writer) Write
func (w *Writer) Write(record []string) error
Write writes a single CSV record to w along with any necessary quoting. A record is a slice of strings with each string being one field. Writes are buffered, so Flush must eventually be called to ensure that the record is written to the underlying io.Writer.
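As a small sketch of the quoting behavior (the record contents are made up for illustration), fields containing the delimiter, a quote, or a newline are quoted automatically:

package main

import (
    "encoding/csv"
    "log"
    "os"
)

func main() {
    w := csv.NewWriter(os.Stdout)
    // The second and third fields are quoted because they contain
    // a comma and double quotes, respectively.
    if err := w.Write([]string{"a", "b,c", `say "hi"`}); err != nil {
        log.Fatalln("error writing record to csv:", err)
    }
    w.Flush()
    if err := w.Error(); err != nil {
        log.Fatal(err)
    }
}

This prints a,"b,c","say ""hi""" followed by a newline; quotes inside a quoted field are doubled, as RFC 4180 requires.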