update influxdb to v1.1.1 and change client to v2

Author: Karol Kraśkiewicz
Date: 2017-03-10 23:09:30 +01:00
Parent: 0ea3e9a2c1
Commit: 2a270e8f18
16 changed files with 1527 additions and 301 deletions

@@ -26,7 +26,7 @@ Though not necessary for experimentation, you may want to create a new user
and authenticate the connection to your database.
For more information please check out the
-[Admin Docs](https://docs.influxdata.com/influxdb/v0.10/administration).
+[Admin Docs](https://docs.influxdata.com/influxdb/latest/administration/).
For the impatient, you can create a new admin user _bubba_ by firing off the
[InfluxDB CLI](https://github.com/influxdata/influxdb/blob/master/cmd/influx/main.go).
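The CLI is the quickest route, but the same statement is plain InfluxQL and can also be issued through the v2 client's query API shown later in this README. The sketch below is illustrative only: it assumes a local InfluxDB at http://localhost:8086 with authentication not yet enabled, and the password is a placeholder.

```go
package main

import (
	"log"

	"github.com/influxdata/influxdb/client/v2"
)

func main() {
	// Connect without credentials; auth is assumed to be off at this point.
	c, err := client.NewHTTPClient(client.HTTPConfig{
		Addr: "http://localhost:8086",
	})
	if err != nil {
		log.Fatalln("Error: ", err)
	}
	defer c.Close()

	// CREATE USER is ordinary InfluxQL, so it can go through the query API.
	// "bubba" matches the text above; the password is a placeholder.
	q := client.NewQuery("CREATE USER bubba WITH PASSWORD 'changeme' WITH ALL PRIVILEGES", "", "")
	resp, err := c.Query(q)
	if err != nil {
		log.Fatalln("Error: ", err)
	}
	if resp.Error() != nil {
		log.Fatalln("Error: ", resp.Error())
	}
}
```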
@@ -49,10 +49,8 @@ the configuration below.
package main
import (
"net/url"
"fmt"
"log"
"os"
"time"
"github.com/influxdata/influxdb/client/v2"
)
@@ -70,17 +68,17 @@ func main() {
Username: username,
Password: password,
})
if err != nil {
log.Fatalln("Error: ", err)
}
// Create a new point batch
bp, err := client.NewBatchPoints(client.BatchPointsConfig{
Database: MyDB,
Precision: "s",
})
if err != nil {
log.Fatalln("Error: ", err)
}
@@ -93,11 +91,11 @@ func main() {
"user": 46.6,
}
pt, err := client.NewPoint("cpu_usage", tags, fields, time.Now())
if err != nil {
log.Fatalln("Error: ", err)
}
bp.AddPoint(pt)
// Write the batch
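For completeness, a rough sketch of how the example finishes: writing the batch and closing the client, reusing the c and bp values built in the fragments above.

```go
// Write the batch to the server.
if err := c.Write(bp); err != nil {
	log.Fatalln("Error: ", err)
}

// Close client resources when done.
if err := c.Close(); err != nil {
	log.Fatalln("Error: ", err)
}
```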
@@ -257,6 +255,28 @@ func WriteUDP() {
}
```
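The hunk shows only the end of WriteUDP; below is a minimal sketch of what such a function can look like with the v2 UDP client. It assumes the imports from the example above (log, time, and the client/v2 package) and an InfluxDB UDP listener on localhost:8089; for UDP, the target database is configured on the server side rather than by the client.

```go
// WriteUDP sends a small batch of points over UDP (illustrative sketch).
func WriteUDP() {
	// Make a UDP client; the listener address is an assumption.
	c, err := client.NewUDPClient(client.UDPConfig{
		Addr: "localhost:8089",
	})
	if err != nil {
		log.Fatalln("Error: ", err)
	}
	defer c.Close()

	// Create a new point batch; no Database is needed for UDP.
	bp, err := client.NewBatchPoints(client.BatchPointsConfig{
		Precision: "s",
	})
	if err != nil {
		log.Fatalln("Error: ", err)
	}

	// Create a point and add it to the batch.
	tags := map[string]string{"cpu": "cpu-total"}
	fields := map[string]interface{}{
		"idle":   10.1,
		"system": 53.3,
		"user":   46.6,
	}
	pt, err := client.NewPoint("cpu_usage", tags, fields, time.Now())
	if err != nil {
		log.Fatalln("Error: ", err)
	}
	bp.AddPoint(pt)

	// Write the batch.
	if err := c.Write(bp); err != nil {
		log.Fatalln("Error: ", err)
	}
}
```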
### Point Splitting
The UDP client now supports splitting single points that exceed the configured
payload size. The logic for processing each point is listed here, starting with
an empty payload.
1. If adding the point to the current (non-empty) payload would exceed the
configured size, send the current payload. Otherwise, add it to the current
payload.
1. If the point is smaller than the configured size, add it to the payload.
1. If the point has no timestamp, just try to send the entire point as a single
UDP payload, and process the next point.
1. Since the point has a timestamp, re-use the existing measurement name,
tagset, and timestamp and create multiple new points by splitting up the
fields. The per-point length will be kept close to the configured size,
staying under it if possible. This does mean that one large field, maybe a
long string, could be sent as a larger-than-configured payload.
The above logic attempts to respect the configured payload size without sacrificing any data integrity. Points without a timestamp can't be split, as that may
cause fields to have differing timestamps when processed by the server.
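The configured size referenced above corresponds to the PayloadSize field of UDPConfig. A brief configuration sketch, with placeholder values for the listener address and the size:

```go
// Cap UDP payloads at roughly 512 bytes; points that would exceed this are
// split according to the rules above. Addr and PayloadSize are placeholders.
c, err := client.NewUDPClient(client.UDPConfig{
	Addr:        "localhost:8089",
	PayloadSize: 512,
})
if err != nil {
	log.Fatalln("Error: ", err)
}
defer c.Close()
```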
## Go Docs
Please refer to