
Implement compression #1484


Closed

wants to merge 40 commits into from

Conversation

@joe-mann (Contributor) commented Oct 2, 2023

Description

Implemented the MySQL compression protocol. The new feature is enabled by setting "compress=1" in the DSN.

A new PR to revive, rebase, and complete #649.
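
For illustration, a minimal sketch of enabling the option from application code, assuming the "compress=1" DSN parameter introduced here (host, credentials, and database name are placeholders):

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	// "compress=1" in the DSN enables the compressed protocol per this PR.
	dsn := "user:password@tcp(127.0.0.1:3306)/dbname?compress=1"
	db, err := sql.Open("mysql", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// sql.Open does not connect; Ping forces a handshake, so this is
	// where compression would actually be negotiated.
	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
	log.Println("connected with compression enabled")
}
```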

Checklist

  • Code compiles correctly
  • Created tests which fail without the change (if possible)
  • All tests passing
  • Extended the README / documentation, if necessary
  • Added myself / the copyright holder to the AUTHORS file

Brigitte Lamarche and others added 30 commits August 11, 2017 15:38
* rows: implement driver.RowsColumnTypeScanType

Implementation for time.Time not yet complete!

* rows: implement driver.RowsColumnTypeNullable

* rows: move fields related code to fields.go

* fields: use NullTime for nullable datetime fields

* fields: make fieldType its own type

* rows: implement driver.RowsColumnTypeDatabaseTypeName

* fields: fix copyright year

* rows: compile time interface implementation checks

* rows: move tests to versioned driver test files

* rows: cache parseTime in resultSet instead of mysqlConn

* fields: fix string and time types

* rows: implement ColumnTypeLength

* rows: implement ColumnTypePrecisionScale

* rows: fix ColumnTypeNullable

* rows: ColumnTypes tests part1

* rows: use keyed composite literals in ColumnTypes tests

* rows: ColumnTypes tests part2

* rows: always use NullTime as ScanType for datetime

* rows: avoid errors through rounding of time values

* rows: remove parseTime cache

* fields: remove unused scanTypes

* rows: fix ColumnTypePrecisionScale implementation

* fields: sort types alphabetical

* rows: remove ColumnTypeLength implementation for now

* README: document ColumnType Support
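
The ColumnType support listed above surfaces through the standard database/sql API. A short sketch of consuming it (the query and table are hypothetical):

```go
package dbinfo

import (
	"database/sql"
	"log"
)

// inspectColumns logs the column metadata exposed by the ColumnType work above.
func inspectColumns(db *sql.DB) error {
	rows, err := db.Query("SELECT id, name, created_at FROM users") // hypothetical table
	if err != nil {
		return err
	}
	defer rows.Close()

	cols, err := rows.ColumnTypes()
	if err != nil {
		return err
	}
	for _, col := range cols {
		nullable, ok := col.Nullable() // backed by driver.RowsColumnTypeNullable
		log.Printf("%s: db type %s, scan type %v, nullable %v (known: %v)",
			col.Name(), col.DatabaseTypeName(), col.ScanType(), nullable, ok)
	}
	return rows.Err()
}
```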

AWS Aurora returns error 1290 after failing over, requiring the
connection to be closed and reopened before writes can be performed.
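
A sketch of how a caller might detect that condition, using the driver's exported MySQLError type; error 1290 is ER_OPTION_PREVENTS_STATEMENT, and the helper name is ours:

```go
package dbutil

import (
	"errors"

	"github.com/go-sql-driver/mysql"
)

// isReadOnlyAfterFailover reports whether err is MySQL error 1290
// (ER_OPTION_PREVENTS_STATEMENT), which Aurora returns once a node has
// become read-only after a failover. The helper name is illustrative.
func isReadOnlyAfterFailover(err error) bool {
	var myErr *mysql.MySQLError
	return errors.As(err, &myErr) && myErr.Number == 1290
}
```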

Most forks won't be in goveralls, so this command in travis.yml was
previously failing and causing the build to fail.

Now, it doesn't!
* Drop support for Go 1.6 and lower

* Remove cloneTLSConfig for legacy Go versions
…#623)

* Added support for custom string types.

* Add author name

* Added license header

* Added a newline to force a commit.

* Remove newline.
* Also add conversions for additional types in ConvertValue
  ref golang/go@d7c0de9
* Fixed broken import for appengine/cloudsql

appengine.go
The import path of appengine/cloudsql has changed to google.golang.org/appengine/cloudsql; fixed.

* Added my name to the AUTHORS
* Differentiate between BINARY and CHAR

When looking up the database type name, we now check the character set
for the following field types:
 * CHAR
 * VARCHAR
 * BLOB
 * TINYBLOB
 * MEDIUMBLOB
 * LONGBLOB

If the character set is 63 (which is the binary pseudo character set),
we return the binary names, which are (respectively):
 * BINARY
 * VARBINARY
 * BLOB
 * TINYBLOB
 * MEDIUMBLOB
 * LONGBLOB

If any other character set is in use, we return the text names, which
are (again, respectively):
 * CHAR
 * VARCHAR
 * TEXT
 * TINYTEXT
 * MEDIUMTEXT
 * LONGTEXT

To facilitate this, mysqlField has been extended to include a uint8
field for character set, which is read from the appropriate packet.

Column type tests have been updated to ensure coverage of binary and
text types.
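
A simplified sketch of that lookup, with illustrative names and raw protocol type codes rather than the driver's internal constants:

```go
package fields

// binaryCharset is MySQL's binary pseudo character set.
const binaryCharset = 63

// typeName is an illustrative reduction of the lookup: the same wire type
// maps to a binary or a text name depending on the column's character set.
func typeName(fieldType byte, charset uint8) string {
	binary := charset == binaryCharset
	switch fieldType {
	case 0xfe: // MYSQL_TYPE_STRING
		if binary {
			return "BINARY"
		}
		return "CHAR"
	case 0xfd: // MYSQL_TYPE_VAR_STRING
		if binary {
			return "VARBINARY"
		}
		return "VARCHAR"
	case 0xfc: // MYSQL_TYPE_BLOB
		if binary {
			return "BLOB"
		}
		return "TEXT"
	default:
		return "UNKNOWN"
	}
}
```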

* Increase test coverage for column types
* Fix prepared statement

When there are many args and maxAllowedPacket is not enough,
writeExecutePacket() attempted to use STMT_LONG_DATA even for a
zero-byte string. But writeCommandLongData() doesn't support
zero-byte data, so it sent a malformed packet.

This commit loosens the threshold for using STMT_LONG_DATA.

* Change minimum size of LONG_DATA to 64 bytes

* Add test which reproduces issue 730

* TestPreparedManyCols: test only the numParams = 65535 case

* s/as possible//
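
A hypothetical sketch of the loosened threshold described above (names and shape are ours, not the driver's):

```go
package stmt

// minLongDataSize is the minimum parameter size for the LONG_DATA path
// per this series; smaller values (including zero-byte strings, which
// writeCommandLongData cannot encode) stay inline in the execute packet.
const minLongDataSize = 64

// useLongData reports whether an argument should be sent via
// COM_STMT_SEND_LONG_DATA instead of inline in the execute packet.
func useLongData(argLen, spaceLeft int) bool {
	return argLen >= minLongDataSize && argLen > spaceLeft
}
```
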
@joe-mann joe-mann marked this pull request as draft October 2, 2023 14:46
@joe-mann joe-mann closed this Oct 4, 2023