Drop viper stuff in test/
Signed-off-by: Davanum Srinivas <davanum@gmail.com>

LICENSES/vendor/github.com/hashicorp/hcl/LICENSE (generated, vendored, 358 lines deleted)
@@ -1,358 +0,0 @@
-= vendor/github.com/hashicorp/hcl licensed under: =
-
-Mozilla Public License, version 2.0
-
-1. Definitions
-
-1.1. “Contributor”
-
-     means each individual or legal entity that creates, contributes to the
-     creation of, or owns Covered Software.
-
-1.2. “Contributor Version”
-
-     means the combination of the Contributions of others (if any) used by a
-     Contributor and that particular Contributor’s Contribution.
-
-1.3. “Contribution”
-
-     means Covered Software of a particular Contributor.
-
-1.4. “Covered Software”
-
-     means Source Code Form to which the initial Contributor has attached the
-     notice in Exhibit A, the Executable Form of such Source Code Form, and
-     Modifications of such Source Code Form, in each case including portions
-     thereof.
-
-1.5. “Incompatible With Secondary Licenses”
-     means
-
-     a. that the initial Contributor has attached the notice described in
-        Exhibit B to the Covered Software; or
-
-     b. that the Covered Software was made available under the terms of version
-        1.1 or earlier of the License, but not also under the terms of a
-        Secondary License.
-
-1.6. “Executable Form”
-
-     means any form of the work other than Source Code Form.
-
-1.7. “Larger Work”
-
-     means a work that combines Covered Software with other material, in a separate
-     file or files, that is not Covered Software.
-
-1.8. “License”
-
-     means this document.
-
-1.9. “Licensable”
-
-     means having the right to grant, to the maximum extent possible, whether at the
-     time of the initial grant or subsequently, any and all of the rights conveyed by
-     this License.
-
-1.10. “Modifications”
-
-     means any of the following:
-
-     a. any file in Source Code Form that results from an addition to, deletion
-        from, or modification of the contents of Covered Software; or
-
-     b. any new file in Source Code Form that contains any Covered Software.
-
-1.11. “Patent Claims” of a Contributor
-
-      means any patent claim(s), including without limitation, method, process,
-      and apparatus claims, in any patent Licensable by such Contributor that
-      would be infringed, but for the grant of the License, by the making,
-      using, selling, offering for sale, having made, import, or transfer of
-      either its Contributions or its Contributor Version.
-
-1.12. “Secondary License”
-
-      means either the GNU General Public License, Version 2.0, the GNU Lesser
-      General Public License, Version 2.1, the GNU Affero General Public
-      License, Version 3.0, or any later versions of those licenses.
-
-1.13. “Source Code Form”
-
-      means the form of the work preferred for making modifications.
-
-1.14. “You” (or “Your”)
-
-      means an individual or a legal entity exercising rights under this
-      License. For legal entities, “You” includes any entity that controls, is
-      controlled by, or is under common control with You. For purposes of this
-      definition, “control” means (a) the power, direct or indirect, to cause
-      the direction or management of such entity, whether by contract or
-      otherwise, or (b) ownership of more than fifty percent (50%) of the
-      outstanding shares or beneficial ownership of such entity.
-
-
-2. License Grants and Conditions
-
-2.1. Grants
-
-     Each Contributor hereby grants You a world-wide, royalty-free,
-     non-exclusive license:
-
-     a. under intellectual property rights (other than patent or trademark)
-        Licensable by such Contributor to use, reproduce, make available,
-        modify, display, perform, distribute, and otherwise exploit its
-        Contributions, either on an unmodified basis, with Modifications, or as
-        part of a Larger Work; and
-
-     b. under Patent Claims of such Contributor to make, use, sell, offer for
-        sale, have made, import, and otherwise transfer either its Contributions
-        or its Contributor Version.
-
-2.2. Effective Date
-
-     The licenses granted in Section 2.1 with respect to any Contribution become
-     effective for each Contribution on the date the Contributor first distributes
-     such Contribution.
-
-2.3. Limitations on Grant Scope
-
-     The licenses granted in this Section 2 are the only rights granted under this
-     License. No additional rights or licenses will be implied from the distribution
-     or licensing of Covered Software under this License. Notwithstanding Section
-     2.1(b) above, no patent license is granted by a Contributor:
-
-     a. for any code that a Contributor has removed from Covered Software; or
-
-     b. for infringements caused by: (i) Your and any other third party’s
-        modifications of Covered Software, or (ii) the combination of its
-        Contributions with other software (except as part of its Contributor
-        Version); or
-
-     c. under Patent Claims infringed by Covered Software in the absence of its
-        Contributions.
-
-     This License does not grant any rights in the trademarks, service marks, or
-     logos of any Contributor (except as may be necessary to comply with the
-     notice requirements in Section 3.4).
-
-2.4. Subsequent Licenses
-
-     No Contributor makes additional grants as a result of Your choice to
-     distribute the Covered Software under a subsequent version of this License
-     (see Section 10.2) or under the terms of a Secondary License (if permitted
-     under the terms of Section 3.3).
-
-2.5. Representation
-
-     Each Contributor represents that the Contributor believes its Contributions
-     are its original creation(s) or it has sufficient rights to grant the
-     rights to its Contributions conveyed by this License.
-
-2.6. Fair Use
-
-     This License is not intended to limit any rights You have under applicable
-     copyright doctrines of fair use, fair dealing, or other equivalents.
-
-2.7. Conditions
-
-     Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
-     Section 2.1.
-
-
-3. Responsibilities
-
-3.1. Distribution of Source Form
-
-     All distribution of Covered Software in Source Code Form, including any
-     Modifications that You create or to which You contribute, must be under the
-     terms of this License. You must inform recipients that the Source Code Form
-     of the Covered Software is governed by the terms of this License, and how
-     they can obtain a copy of this License. You may not attempt to alter or
-     restrict the recipients’ rights in the Source Code Form.
-
-3.2. Distribution of Executable Form
-
-     If You distribute Covered Software in Executable Form then:
-
-     a. such Covered Software must also be made available in Source Code Form,
-        as described in Section 3.1, and You must inform recipients of the
-        Executable Form how they can obtain a copy of such Source Code Form by
-        reasonable means in a timely manner, at a charge no more than the cost
-        of distribution to the recipient; and
-
-     b. You may distribute such Executable Form under the terms of this License,
-        or sublicense it under different terms, provided that the license for
-        the Executable Form does not attempt to limit or alter the recipients’
-        rights in the Source Code Form under this License.
-
-3.3. Distribution of a Larger Work
-
-     You may create and distribute a Larger Work under terms of Your choice,
-     provided that You also comply with the requirements of this License for the
-     Covered Software. If the Larger Work is a combination of Covered Software
-     with a work governed by one or more Secondary Licenses, and the Covered
-     Software is not Incompatible With Secondary Licenses, this License permits
-     You to additionally distribute such Covered Software under the terms of
-     such Secondary License(s), so that the recipient of the Larger Work may, at
-     their option, further distribute the Covered Software under the terms of
-     either this License or such Secondary License(s).
-
-3.4. Notices
-
-     You may not remove or alter the substance of any license notices (including
-     copyright notices, patent notices, disclaimers of warranty, or limitations
-     of liability) contained within the Source Code Form of the Covered
-     Software, except that You may alter any license notices to the extent
-     required to remedy known factual inaccuracies.
-
-3.5. Application of Additional Terms
-
-     You may choose to offer, and to charge a fee for, warranty, support,
-     indemnity or liability obligations to one or more recipients of Covered
-     Software. However, You may do so only on Your own behalf, and not on behalf
-     of any Contributor. You must make it absolutely clear that any such
-     warranty, support, indemnity, or liability obligation is offered by You
-     alone, and You hereby agree to indemnify every Contributor for any
-     liability incurred by such Contributor as a result of warranty, support,
-     indemnity or liability terms You offer. You may include additional
-     disclaimers of warranty and limitations of liability specific to any
-     jurisdiction.
-
-4. Inability to Comply Due to Statute or Regulation
-
-   If it is impossible for You to comply with any of the terms of this License
-   with respect to some or all of the Covered Software due to statute, judicial
-   order, or regulation then You must: (a) comply with the terms of this License
-   to the maximum extent possible; and (b) describe the limitations and the code
-   they affect. Such description must be placed in a text file included with all
-   distributions of the Covered Software under this License. Except to the
-   extent prohibited by statute or regulation, such description must be
-   sufficiently detailed for a recipient of ordinary skill to be able to
-   understand it.
-
-5. Termination
-
-5.1. The rights granted under this License will terminate automatically if You
-     fail to comply with any of its terms. However, if You become compliant,
-     then the rights granted under this License from a particular Contributor
-     are reinstated (a) provisionally, unless and until such Contributor
-     explicitly and finally terminates Your grants, and (b) on an ongoing basis,
-     if such Contributor fails to notify You of the non-compliance by some
-     reasonable means prior to 60 days after You have come back into compliance.
-     Moreover, Your grants from a particular Contributor are reinstated on an
-     ongoing basis if such Contributor notifies You of the non-compliance by
-     some reasonable means, this is the first time You have received notice of
-     non-compliance with this License from such Contributor, and You become
-     compliant prior to 30 days after Your receipt of the notice.
-
-5.2. If You initiate litigation against any entity by asserting a patent
-     infringement claim (excluding declaratory judgment actions, counter-claims,
-     and cross-claims) alleging that a Contributor Version directly or
-     indirectly infringes any patent, then the rights granted to You by any and
-     all Contributors for the Covered Software under Section 2.1 of this License
-     shall terminate.
-
-5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
-     license agreements (excluding distributors and resellers) which have been
-     validly granted by You or Your distributors under this License prior to
-     termination shall survive termination.
-
-6. Disclaimer of Warranty
-
-   Covered Software is provided under this License on an “as is” basis, without
-   warranty of any kind, either expressed, implied, or statutory, including,
-   without limitation, warranties that the Covered Software is free of defects,
-   merchantable, fit for a particular purpose or non-infringing. The entire
-   risk as to the quality and performance of the Covered Software is with You.
-   Should any Covered Software prove defective in any respect, You (not any
-   Contributor) assume the cost of any necessary servicing, repair, or
-   correction. This disclaimer of warranty constitutes an essential part of this
-   License. No use of  any Covered Software is authorized under this License
-   except under this disclaimer.
-
-7. Limitation of Liability
-
-   Under no circumstances and under no legal theory, whether tort (including
-   negligence), contract, or otherwise, shall any Contributor, or anyone who
-   distributes Covered Software as permitted above, be liable to You for any
-   direct, indirect, special, incidental, or consequential damages of any
-   character including, without limitation, damages for lost profits, loss of
-   goodwill, work stoppage, computer failure or malfunction, or any and all
-   other commercial damages or losses, even if such party shall have been
-   informed of the possibility of such damages. This limitation of liability
-   shall not apply to liability for death or personal injury resulting from such
-   party’s negligence to the extent applicable law prohibits such limitation.
-   Some jurisdictions do not allow the exclusion or limitation of incidental or
-   consequential damages, so this exclusion and limitation may not apply to You.
-
-8. Litigation
-
-   Any litigation relating to this License may be brought only in the courts of
-   a jurisdiction where the defendant maintains its principal place of business
-   and such litigation shall be governed by laws of that jurisdiction, without
-   reference to its conflict-of-law provisions. Nothing in this Section shall
-   prevent a party’s ability to bring cross-claims or counter-claims.
-
-9. Miscellaneous
-
-   This License represents the complete agreement concerning the subject matter
-   hereof. If any provision of this License is held to be unenforceable, such
-   provision shall be reformed only to the extent necessary to make it
-   enforceable. Any law or regulation which provides that the language of a
-   contract shall be construed against the drafter shall not be used to construe
-   this License against a Contributor.
-
-
-10. Versions of the License
-
-10.1. New Versions
-
-      Mozilla Foundation is the license steward. Except as provided in Section
-      10.3, no one other than the license steward has the right to modify or
-      publish new versions of this License. Each version will be given a
-      distinguishing version number.
-
-10.2. Effect of New Versions
-
-      You may distribute the Covered Software under the terms of the version of
-      the License under which You originally received the Covered Software, or
-      under the terms of any subsequent version published by the license
-      steward.
-
-10.3. Modified Versions
-
-      If you create software not governed by this License, and you want to
-      create a new license for such software, you may create and use a modified
-      version of this License if you rename the license and remove any
-      references to the name of the license steward (except to note that such
-      modified license differs from this License).
-
-10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses
-      If You choose to distribute Source Code Form that is Incompatible With
-      Secondary Licenses under the terms of this version of the License, the
-      notice described in Exhibit B of this License must be attached.
-
-Exhibit A - Source Code Form License Notice
-
-      This Source Code Form is subject to the
-      terms of the Mozilla Public License, v.
-      2.0. If a copy of the MPL was not
-      distributed with this file, You can
-      obtain one at
-      http://mozilla.org/MPL/2.0/.
-
-If it is not possible or desirable to put the notice in a particular file, then
-You may include the notice in a location (such as a LICENSE file in a relevant
-directory) where a recipient would be likely to look for such a notice.
-
-You may add additional accurate notices of copyright ownership.
-
-Exhibit B - “Incompatible With Secondary Licenses” Notice
-
-      This Source Code Form is “Incompatible
-      With Secondary Licenses”, as defined by
-      the Mozilla Public License, v. 2.0.
-
-
-= vendor/github.com/hashicorp/hcl/LICENSE b278a92d2c1509760384428817710378

LICENSES/vendor/github.com/magiconair/properties/LICENSE (generated, vendored, 29 lines deleted)
@@ -1,29 +0,0 @@
-= vendor/github.com/magiconair/properties licensed under: =
-
-goproperties - properties file decoder for Go
-
-Copyright (c) 2013-2018 - Frank Schroeder
-
-All rights reserved.
-
-Redistribution and use in source and binary forms, with or without
-modification, are permitted provided that the following conditions are met:
-
-1. Redistributions of source code must retain the above copyright notice, this
-   list of conditions and the following disclaimer.
-2. Redistributions in binary form must reproduce the above copyright notice,
-   this list of conditions and the following disclaimer in the documentation
-   and/or other materials provided with the distribution.
-
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
-ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
-ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
-ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
-SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-= vendor/github.com/magiconair/properties/LICENSE 1cb2e5b7268c1e1e630f6d0dafebfee8

LICENSES/vendor/github.com/pelletier/go-toml/LICENSE (generated, vendored, 25 lines deleted)
@@ -1,25 +0,0 @@
-= vendor/github.com/pelletier/go-toml licensed under: =
-
-The MIT License (MIT)
-
-Copyright (c) 2013 - 2017 Thomas Pelletier, Eric Anderton
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
-
-= vendor/github.com/pelletier/go-toml/LICENSE dc9ea87a81f62b8871b2a4158edbfde6

LICENSES/vendor/github.com/spf13/cast/LICENSE (generated, vendored, 24 lines deleted)
@@ -1,24 +0,0 @@
-= vendor/github.com/spf13/cast licensed under: =
-
-The MIT License (MIT)
-
-Copyright (c) 2014 Steve Francia
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
-= vendor/github.com/spf13/cast/LICENSE 67fac7567cbf6ba946e5576d590b1ed4

LICENSES/vendor/github.com/spf13/jwalterweatherman/LICENSE (generated, vendored, 24 lines deleted)
@@ -1,24 +0,0 @@
-= vendor/github.com/spf13/jwalterweatherman licensed under: =
-
-The MIT License (MIT)
-
-Copyright (c) 2014 Steve Francia
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
-= vendor/github.com/spf13/jwalterweatherman/LICENSE 67fac7567cbf6ba946e5576d590b1ed4

LICENSES/vendor/github.com/spf13/viper/LICENSE (generated, vendored, 24 lines deleted)
@@ -1,24 +0,0 @@
-= vendor/github.com/spf13/viper licensed under: =
-
-The MIT License (MIT)
-
-Copyright (c) 2014 Steve Francia
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
-= vendor/github.com/spf13/viper/LICENSE 67fac7567cbf6ba946e5576d590b1ed4

LICENSES/vendor/github.com/subosito/gotenv/LICENSE (generated, vendored, 25 lines deleted)
@@ -1,25 +0,0 @@
-= vendor/github.com/subosito/gotenv licensed under: =
-
-The MIT License (MIT)
-
-Copyright (c) 2013 Alif Rachmawadi
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in
-all copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
-THE SOFTWARE.
-
-= vendor/github.com/subosito/gotenv/LICENSE 0873257f40b8747d351ccc4288d06a40

LICENSES/vendor/gopkg.in/ini.v1/LICENSE (generated, vendored, 195 lines deleted)
@@ -1,195 +0,0 @@
-= vendor/gopkg.in/ini.v1 licensed under: =
-
-Apache License
-Version 2.0, January 2004
-http://www.apache.org/licenses/
-
-TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-1. Definitions.
-
-"License" shall mean the terms and conditions for use, reproduction, and
-distribution as defined by Sections 1 through 9 of this document.
-
-"Licensor" shall mean the copyright owner or entity authorized by the copyright
-owner that is granting the License.
-
-"Legal Entity" shall mean the union of the acting entity and all other entities
-that control, are controlled by, or are under common control with that entity.
-For the purposes of this definition, "control" means (i) the power, direct or
-indirect, to cause the direction or management of such entity, whether by
-contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the
-outstanding shares, or (iii) beneficial ownership of such entity.
-
-"You" (or "Your") shall mean an individual or Legal Entity exercising
-permissions granted by this License.
-
-"Source" form shall mean the preferred form for making modifications, including
-but not limited to software source code, documentation source, and configuration
-files.
-
-"Object" form shall mean any form resulting from mechanical transformation or
-translation of a Source form, including but not limited to compiled object code,
-generated documentation, and conversions to other media types.
-
-"Work" shall mean the work of authorship, whether in Source or Object form, made
-available under the License, as indicated by a copyright notice that is included
-in or attached to the work (an example is provided in the Appendix below).
-
-"Derivative Works" shall mean any work, whether in Source or Object form, that
-is based on (or derived from) the Work and for which the editorial revisions,
-annotations, elaborations, or other modifications represent, as a whole, an
-original work of authorship. For the purposes of this License, Derivative Works
-shall not include works that remain separable from, or merely link (or bind by
-name) to the interfaces of, the Work and Derivative Works thereof.
-
-"Contribution" shall mean any work of authorship, including the original version
-of the Work and any modifications or additions to that Work or Derivative Works
-thereof, that is intentionally submitted to Licensor for inclusion in the Work
-by the copyright owner or by an individual or Legal Entity authorized to submit
-on behalf of the copyright owner. For the purposes of this definition,
-"submitted" means any form of electronic, verbal, or written communication sent
-to the Licensor or its representatives, including but not limited to
-communication on electronic mailing lists, source code control systems, and
-issue tracking systems that are managed by, or on behalf of, the Licensor for
-the purpose of discussing and improving the Work, but excluding communication
-that is conspicuously marked or otherwise designated in writing by the copyright
-owner as "Not a Contribution."
-
-"Contributor" shall mean Licensor and any individual or Legal Entity on behalf
-of whom a Contribution has been received by Licensor and subsequently
-incorporated within the Work.
-
-2. Grant of Copyright License.
-
-Subject to the terms and conditions of this License, each Contributor hereby
-grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
-irrevocable copyright license to reproduce, prepare Derivative Works of,
-publicly display, publicly perform, sublicense, and distribute the Work and such
-Derivative Works in Source or Object form.
-
-3. Grant of Patent License.
-
-Subject to the terms and conditions of this License, each Contributor hereby
-grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
-irrevocable (except as stated in this section) patent license to make, have
-made, use, offer to sell, sell, import, and otherwise transfer the Work, where
-such license applies only to those patent claims licensable by such Contributor
-that are necessarily infringed by their Contribution(s) alone or by combination
-of their Contribution(s) with the Work to which such Contribution(s) was
-submitted. If You institute patent litigation against any entity (including a
-cross-claim or counterclaim in a lawsuit) alleging that the Work or a
-Contribution incorporated within the Work constitutes direct or contributory
-patent infringement, then any patent licenses granted to You under this License
-for that Work shall terminate as of the date such litigation is filed.
-
-4. Redistribution.
-
-You may reproduce and distribute copies of the Work or Derivative Works thereof
-in any medium, with or without modifications, and in Source or Object form,
-provided that You meet the following conditions:
-
-You must give any other recipients of the Work or Derivative Works a copy of
-this License; and
-You must cause any modified files to carry prominent notices stating that You
-changed the files; and
-You must retain, in the Source form of any Derivative Works that You distribute,
-all copyright, patent, trademark, and attribution notices from the Source form
-of the Work, excluding those notices that do not pertain to any part of the
-Derivative Works; and
-If the Work includes a "NOTICE" text file as part of its distribution, then any
-Derivative Works that You distribute must include a readable copy of the
-attribution notices contained within such NOTICE file, excluding those notices
-that do not pertain to any part of the Derivative Works, in at least one of the
-following places: within a NOTICE text file distributed as part of the
-Derivative Works; within the Source form or documentation, if provided along
-with the Derivative Works; or, within a display generated by the Derivative
-Works, if and wherever such third-party notices normally appear. The contents of
-the NOTICE file are for informational purposes only and do not modify the
-License. You may add Your own attribution notices within Derivative Works that
-You distribute, alongside or as an addendum to the NOTICE text from the Work,
-provided that such additional attribution notices cannot be construed as
-modifying the License.
-You may add Your own copyright statement to Your modifications and may provide
-additional or different license terms and conditions for use, reproduction, or
-distribution of Your modifications, or for any such Derivative Works as a whole,
-provided Your use, reproduction, and distribution of the Work otherwise complies
-with the conditions stated in this License.
-
-5. Submission of Contributions.
-
-Unless You explicitly state otherwise, any Contribution intentionally submitted
-for inclusion in the Work by You to the Licensor shall be under the terms and
-conditions of this License, without any additional terms or conditions.
-Notwithstanding the above, nothing herein shall supersede or modify the terms of
-any separate license agreement you may have executed with Licensor regarding
-such Contributions.
-
-6. Trademarks.
-
-This License does not grant permission to use the trade names, trademarks,
-service marks, or product names of the Licensor, except as required for
-reasonable and customary use in describing the origin of the Work and
-reproducing the content of the NOTICE file.
-
-7. Disclaimer of Warranty.
-
-Unless required by applicable law or agreed to in writing, Licensor provides the
-Work (and each Contributor provides its Contributions) on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied,
-including, without limitation, any warranties or conditions of TITLE,
-NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are
-solely responsible for determining the appropriateness of using or
-redistributing the Work and assume any risks associated with Your exercise of
-permissions under this License.
-
-8. Limitation of Liability.
-
-In no event and under no legal theory, whether in tort (including negligence),
-contract, or otherwise, unless required by applicable law (such as deliberate
-and grossly negligent acts) or agreed to in writing, shall any Contributor be
-liable to You for damages, including any direct, indirect, special, incidental,
-or consequential damages of any character arising as a result of this License or
-out of the use or inability to use the Work (including but not limited to
-damages for loss of goodwill, work stoppage, computer failure or malfunction, or
-any and all other commercial damages or losses), even if such Contributor has
-been advised of the possibility of such damages.
-
-9. Accepting Warranty or Additional Liability.
-
-While redistributing the Work or Derivative Works thereof, You may choose to
-offer, and charge a fee for, acceptance of support, warranty, indemnity, or
-other liability obligations and/or rights consistent with this License. However,
-in accepting such obligations, You may act only on Your own behalf and on Your
-sole responsibility, not on behalf of any other Contributor, and only if You
-agree to indemnify, defend, and hold each Contributor harmless for any liability
-incurred by, or claims asserted against, such Contributor by reason of your
-accepting any such warranty or additional liability.
-
-END OF TERMS AND CONDITIONS
-
-APPENDIX: How to apply the Apache License to your work
-
-To apply the Apache License to your work, attach the following boilerplate
-notice, with the fields enclosed by brackets "[]" replaced with your own
-identifying information. (Don't include the brackets!) The text should be
-enclosed in the appropriate comment syntax for the file format. We also
-recommend that a file or class name and description of purpose be included on
-the same "printed page" as the copyright notice for easier identification within
-third-party archives.
-
-   Copyright 2014 Unknwon
-
-   Licensed under the Apache License, Version 2.0 (the "License");
-   you may not use this file except in compliance with the License.
-   You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-= vendor/gopkg.in/ini.v1/LICENSE 4e2a8d8f9935d6a766a5879a77ddc24d

go.mod (4 changed lines)
@@ -81,9 +81,7 @@ require (
 	github.com/robfig/cron v1.1.0
 	github.com/spf13/afero v1.2.2
 	github.com/spf13/cobra v1.1.1
-	github.com/spf13/jwalterweatherman v1.1.0 // indirect
 	github.com/spf13/pflag v1.0.5
-	github.com/spf13/viper v1.7.0
 	github.com/storageos/go-api v2.2.0+incompatible
 	github.com/stretchr/testify v1.7.0
 	github.com/urfave/negroni v1.0.0 // indirect
@@ -409,7 +407,7 @@ replace (
 	github.com/spf13/afero => github.com/spf13/afero v1.2.2
 	github.com/spf13/cast => github.com/spf13/cast v1.3.0
 	github.com/spf13/cobra => github.com/spf13/cobra v1.1.1
-	github.com/spf13/jwalterweatherman => github.com/spf13/jwalterweatherman v1.1.0
+	github.com/spf13/jwalterweatherman => github.com/spf13/jwalterweatherman v1.0.0
 	github.com/spf13/pflag => github.com/spf13/pflag v1.0.5
 	github.com/spf13/viper => github.com/spf13/viper v1.7.0
 	github.com/stoewer/go-strcase => github.com/stoewer/go-strcase v1.2.0

go.sum (14 changed lines)
@@ -229,7 +229,6 @@ github.com/googleapis/gnostic v0.5.1 h1:A8Yhf6EtqTv9RMsU6MQTyrtV1TjWlR6xU9BsZIwu
 github.com/googleapis/gnostic v0.5.1/go.mod h1:6U4PtQXGIEt/Z3h5MAT7FNofLnw9vXk2cUuW7uA/OeU=
 github.com/gophercloud/gophercloud v0.1.0 h1:P/nh25+rzXouhytV2pUHBb65fnds26Ghl8/391+sT5o=
 github.com/gophercloud/gophercloud v0.1.0/go.mod h1:vxM41WHh5uqHVBMZHzuwNOHh8XEoIEcSTewFxm1c5g8=
-github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 h1:EGx4pi6eqNxGaHF6qqu48+N2wcFQ5qg5FXgOdqsJ5d8=
 github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
 github.com/gorilla/mux v1.8.0 h1:i40aqfkR1h2SlN9hojwV5ZA91wcXFOvkdNIeFDP5koI=
 github.com/gorilla/mux v1.8.0/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So=
@@ -257,7 +256,6 @@ github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/b
 github.com/hashicorp/go.net v0.0.1/go.mod h1:hjKkEWcCURg++eb33jQU7oqQcI9XDCnUzHA0oac0k90=
 github.com/hashicorp/golang-lru v0.5.1 h1:0hERBMJE1eitiLkihrMvRVBYAkpHzc/J3QdDN+dAcgU=
 github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
-github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
 github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
 github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64=
 github.com/hashicorp/mdns v1.0.0/go.mod h1:tL+uN++7HEJ6SQLQ2/p+z2pH24WQKWjBPkE0mNTz8vQ=
@@ -285,7 +283,6 @@ github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX
 github.com/json-iterator/go v1.1.10 h1:Kz6Cvnvv2wGdaG/V8yMvfkmNiXq9Ya2KUv4rouJJr68=
 github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
 github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
-github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
 github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
 github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
 github.com/jung-kurt/gofpdf v1.0.3-0.20190309125859-24315acbbda5/go.mod h1:7Id9E/uU8ce6rXgefFLlgrJj/GYY22cpxn+r32jIOes=
@@ -312,7 +309,6 @@ github.com/lucas-clemente/aes12 v0.0.0-20171027163421-cd47fb39b79f/go.mod h1:JpH
 github.com/lucas-clemente/quic-clients v0.1.0/go.mod h1:y5xVIEoObKqULIKivu+gD/LU90pL73bTdtQjPBvtCBk=
 github.com/lucas-clemente/quic-go v0.10.2/go.mod h1:hvaRS9IHjFLMq76puFJeWNfmn+H70QZ/CXoxqw9bzao=
 github.com/lucas-clemente/quic-go-certificates v0.0.0-20160823095156-d2f86524cced/go.mod h1:NCcRLrOTZbzhZvixZLlERbJtDtYsmMw8Jc4vS8Z0g58=
-github.com/magiconair/properties v1.8.1 h1:ZC2Vc7/ZFkGmsVC9KvOjumD+G5lXy2RtTKyzRKO2BQ4=
 github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
 github.com/mailru/easyjson v0.7.0 h1:aizVhC/NAAcKWb+5QsU1iNOZb4Yws5UO2I+aIprQITM=
 github.com/mailru/easyjson v0.7.0/go.mod h1:KAzv3t3aY1NaHWoQz1+4F1ccyAH66Jk7yos7ldAVICs=
@@ -387,7 +383,6 @@ github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417/go.m
 github.com/opencontainers/selinux v1.8.0 h1:+77ba4ar4jsCbL1GLbFL8fFM57w6suPfSS9PDLDY7KM=
 github.com/opencontainers/selinux v1.8.0/go.mod h1:RScLhm78qiWa2gbVCcGkC7tCGdgk3ogry1nUQF8Evvo=
 github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
-github.com/pelletier/go-toml v1.2.0 h1:T5zMGML61Wp+FlcbWjRDT7yAxhJNAiPPLOFECq181zc=
 github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
 github.com/peterbourgon/diskv v2.0.1+incompatible h1:UBdAOUP5p4RWqPBg048CAvpKN+vxiaj6gdUUzhl4XmI=
 github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
@@ -429,23 +424,18 @@ github.com/shurcooL/sanitized_anchor_name v1.0.0 h1:PdmoCO6wvbs+7yrJyMORt4/BmY5I
 github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
 github.com/sirupsen/logrus v1.7.0 h1:ShrD1U9pZB12TX0cVy0DtePoCH97K8EtX+mg7ZARUtM=
 github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
-github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d h1:zE9ykElWQ6/NYmHa3jpm/yHnI4xSofP+UP6SpjHcSeM=
 github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
-github.com/smartystreets/goconvey v1.6.4 h1:fv0U8FUIMPNf1L9lnHLvLhgicrIVChEkdzIKYqbNC9s=
 github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
 github.com/soheilhy/cmux v0.1.4 h1:0HKaf1o97UwFjHH9o5XsHUOF+tqmdA7KEzXLpiyaw0E=
 github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
 github.com/spf13/afero v1.2.2 h1:5jhuqJyZCZf2JRofRvN/nIFgIWNzPa3/Vz8mYylgbWc=
 github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
-github.com/spf13/cast v1.3.0 h1:oget//CVOEoFewqQxwr0Ej5yjygnqGkvggSE/gB35Q8=
 github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
 github.com/spf13/cobra v1.1.1 h1:KfztREH0tPxJJ+geloSLaAkaPkr4ki2Er5quFV1TDo4=
 github.com/spf13/cobra v1.1.1/go.mod h1:WnodtKOvamDL/PwE2M4iKs8aMDBZ5Q5klgD3qfVJQMI=
-github.com/spf13/jwalterweatherman v1.1.0 h1:ue6voC5bR5F8YxI5S67j9i582FU4Qvo2bmqnqMYADFk=
+github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
| github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo= |  | ||||||
| github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA= | github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA= | ||||||
| github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= | github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= | ||||||
| github.com/spf13/viper v1.7.0 h1:xVKxvI7ouOI5I+U9s2eeiUfMaWBVoXA3AWskkrqK0VM= |  | ||||||
| github.com/spf13/viper v1.7.0/go.mod h1:8WkrPz2fc9jxqZNCJI/76HCieCp4Q8HaLFoCha5qpdg= | github.com/spf13/viper v1.7.0/go.mod h1:8WkrPz2fc9jxqZNCJI/76HCieCp4Q8HaLFoCha5qpdg= | ||||||
| github.com/stoewer/go-strcase v1.2.0/go.mod h1:IBiWB2sKIp3wVVQ3Y035++gc+knqhUQag1KpM8ahLw8= | github.com/stoewer/go-strcase v1.2.0/go.mod h1:IBiWB2sKIp3wVVQ3Y035++gc+knqhUQag1KpM8ahLw8= | ||||||
| github.com/storageos/go-api v2.2.0+incompatible h1:U0SablXoZIg06gvSlg8BCdzq1C/SkHVygOVX95Z2MU0= | github.com/storageos/go-api v2.2.0+incompatible h1:U0SablXoZIg06gvSlg8BCdzq1C/SkHVygOVX95Z2MU0= | ||||||
| @@ -454,7 +444,6 @@ github.com/stretchr/objx v0.2.0 h1:Hbg2NidpLE8veEBkEZTL3CvlkUIVzuU9jDplZO54c48= | |||||||
| github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE= | github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE= | ||||||
| github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY= | github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY= | ||||||
| github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= | github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= | ||||||
| github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s= |  | ||||||
| github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw= | github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw= | ||||||
| github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635 h1:kdXcSzyDtseVEc4yCz2qF8ZrQvIDBJLl4S1c3GCXmoI= | github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635 h1:kdXcSzyDtseVEc4yCz2qF8ZrQvIDBJLl4S1c3GCXmoI= | ||||||
| github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww= | github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww= | ||||||
| @@ -544,7 +533,6 @@ gopkg.in/gcfg.v1 v1.2.0 h1:0HIbH907iBTAntm+88IJV2qmJALDAh8sPekI9Vc1fm0= | |||||||
| gopkg.in/gcfg.v1 v1.2.0/go.mod h1:yesOnuUOFQAhST5vPY4nbZsb/huCgGGXlipJsBn0b3o= | gopkg.in/gcfg.v1 v1.2.0/go.mod h1:yesOnuUOFQAhST5vPY4nbZsb/huCgGGXlipJsBn0b3o= | ||||||
| gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc= | gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc= | ||||||
| gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw= | gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw= | ||||||
| gopkg.in/ini.v1 v1.51.0 h1:AQvPpx3LzTDM0AjnIRlVFwFFGC+npRopjZxLJj6gdno= |  | ||||||
| gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k= | gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k= | ||||||
| gopkg.in/mcuadros/go-syslog.v2 v2.2.1/go.mod h1:l5LPIyOOyIdQquNg+oU6Z3524YwrcqEm0aKH+5zpt2U= | gopkg.in/mcuadros/go-syslog.v2 v2.2.1/go.mod h1:l5LPIyOOyIdQquNg+oU6Z3524YwrcqEm0aKH+5zpt2U= | ||||||
| gopkg.in/natefinch/lumberjack.v2 v2.0.0 h1:1Lc07Kr7qY4U2YPouBjpCLxpiyxIVoxqXgkXLknAOE8= | gopkg.in/natefinch/lumberjack.v2 v2.0.0 h1:1Lc07Kr7qY4U2YPouBjpCLxpiyxIVoxqXgkXLknAOE8= | ||||||
|   | |||||||
| @@ -59,8 +59,6 @@ import ( | |||||||
| 	_ "k8s.io/kubernetes/test/e2e/windows" | 	_ "k8s.io/kubernetes/test/e2e/windows" | ||||||
| ) | ) | ||||||
|  |  | ||||||
| var viperConfig = flag.String("viper-config", "", "The name of a viper config file (https://github.com/spf13/viper#what-is-viper). All e2e command line parameters can also be configured in such a file. May contain a path and may or may not contain the file suffix. The default is to look for an optional file with `e2e` as base name. If a file is specified explicitly, it must be present.") |  | ||||||
|  |  | ||||||
| // handleFlags sets up all flags and parses the command line. | // handleFlags sets up all flags and parses the command line. | ||||||
| func handleFlags() { | func handleFlags() { | ||||||
| 	config.CopyFlags(config.Flags, flag.CommandLine) | 	config.CopyFlags(config.Flags, flag.CommandLine) | ||||||
| @@ -76,14 +74,6 @@ func TestMain(m *testing.M) { | |||||||
| 	// Register test flags, then parse flags. | 	// Register test flags, then parse flags. | ||||||
| 	handleFlags() | 	handleFlags() | ||||||
|  |  | ||||||
| 	// Now that we know which Viper config (if any) was chosen, |  | ||||||
| 	// parse it and update those options which weren't already set via command line flags |  | ||||||
| 	// (which have higher priority). |  | ||||||
| 	if err := viperizeFlags(*viperConfig, "e2e", flag.CommandLine); err != nil { |  | ||||||
| 		fmt.Fprintln(os.Stderr, err) |  | ||||||
| 		os.Exit(1) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	if framework.TestContext.ListImages { | 	if framework.TestContext.ListImages { | ||||||
| 		for _, v := range image.GetImageConfigs() { | 		for _, v := range image.GetImageConfigs() { | ||||||
| 			fmt.Println(v.GetE2EImage()) | 			fmt.Println(v.GetE2EImage()) | ||||||
|   | |||||||
| @@ -1,146 +0,0 @@ | |||||||
| /* |  | ||||||
| Copyright 2018 The Kubernetes Authors. |  | ||||||
|  |  | ||||||
| Licensed under the Apache License, Version 2.0 (the "License"); |  | ||||||
| you may not use this file except in compliance with the License. |  | ||||||
| You may obtain a copy of the License at |  | ||||||
|  |  | ||||||
|     http://www.apache.org/licenses/LICENSE-2.0 |  | ||||||
|  |  | ||||||
| Unless required by applicable law or agreed to in writing, software |  | ||||||
| distributed under the License is distributed on an "AS IS" BASIS, |  | ||||||
| WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |  | ||||||
| See the License for the specific language governing permissions and |  | ||||||
| limitations under the License. |  | ||||||
| */ |  | ||||||
|  |  | ||||||
| package e2e |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"flag" |  | ||||||
| 	"fmt" |  | ||||||
| 	"path/filepath" |  | ||||||
|  |  | ||||||
| 	"github.com/pkg/errors" |  | ||||||
|  |  | ||||||
| 	"github.com/spf13/viper" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| // viperizeFlags checks whether a configuration file was specified, |  | ||||||
| // reads it, and updates the configuration variables in the specified |  | ||||||
| // flag set accordingly. Must be called after framework.HandleFlags() |  | ||||||
| // and before framework.AfterReadingAllFlags(). |  | ||||||
| // |  | ||||||
| // The logic is so that a required configuration file must be present. If empty, |  | ||||||
| // the optional configuration file is used instead, unless also empty. |  | ||||||
| // |  | ||||||
| // Files can be specified with just a base name ("e2e", matches "e2e.json/yaml/..." in |  | ||||||
| // the current directory) or with path and suffix. |  | ||||||
| func viperizeFlags(requiredConfig, optionalConfig string, flags *flag.FlagSet) error { |  | ||||||
| 	viperConfig := optionalConfig |  | ||||||
| 	required := false |  | ||||||
| 	if requiredConfig != "" { |  | ||||||
| 		viperConfig = requiredConfig |  | ||||||
| 		required = true |  | ||||||
| 	} |  | ||||||
| 	if viperConfig == "" { |  | ||||||
| 		return nil |  | ||||||
| 	} |  | ||||||
| 	viper.SetConfigName(filepath.Base(viperConfig)) |  | ||||||
| 	viper.AddConfigPath(filepath.Dir(viperConfig)) |  | ||||||
| 	wrapError := func(err error) error { |  | ||||||
| 		if err == nil { |  | ||||||
| 			return nil |  | ||||||
| 		} |  | ||||||
| 		errorPrefix := fmt.Sprintf("viper config %q", viperConfig) |  | ||||||
| 		actualFile := viper.ConfigFileUsed() |  | ||||||
| 		if actualFile != "" && actualFile != viperConfig { |  | ||||||
| 			errorPrefix = fmt.Sprintf("%s = %q", errorPrefix, actualFile) |  | ||||||
| 		} |  | ||||||
| 		return errors.Wrap(err, errorPrefix) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	if err := viper.ReadInConfig(); err != nil { |  | ||||||
| 		// If the user specified a file suffix, the Viper won't |  | ||||||
| 		// find the file because it always appends its known set |  | ||||||
| 		// of file suffices. Therefore try once more without |  | ||||||
| 		// suffix. |  | ||||||
| 		ext := filepath.Ext(viperConfig) |  | ||||||
| 		if _, ok := err.(viper.ConfigFileNotFoundError); ok && ext != "" { |  | ||||||
| 			viper.SetConfigName(filepath.Base(viperConfig[0 : len(viperConfig)-len(ext)])) |  | ||||||
| 			err = viper.ReadInConfig() |  | ||||||
| 		} |  | ||||||
| 		if err != nil { |  | ||||||
| 			// If a config was required, then parsing must |  | ||||||
| 			// succeed. This catches syntax errors and |  | ||||||
| 			// "file not found". Unfortunately error |  | ||||||
| 			// messages are sometimes hard to understand, |  | ||||||
| 			// so try to help the user a bit. |  | ||||||
| 			switch err.(type) { |  | ||||||
| 			case viper.ConfigFileNotFoundError: |  | ||||||
| 				if required { |  | ||||||
| 					return wrapError(errors.New("not found")) |  | ||||||
| 				} |  | ||||||
| 				// Proceed without config. |  | ||||||
| 				return nil |  | ||||||
| 			case viper.UnsupportedConfigError: |  | ||||||
| 				if required { |  | ||||||
| 					return wrapError(errors.New("not using a supported file format")) |  | ||||||
| 				} |  | ||||||
| 				// Proceed without config. |  | ||||||
| 				return nil |  | ||||||
| 			default: |  | ||||||
| 				// Something isn't right in the file. |  | ||||||
| 				return wrapError(err) |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// Update all flag values not already set with values found |  | ||||||
| 	// via Viper. We do this ourselves instead of calling |  | ||||||
| 	// something like viper.Unmarshal(&TestContext) because we |  | ||||||
| 	// want to support all values, regardless where they are |  | ||||||
| 	// stored. |  | ||||||
| 	return wrapError(viperUnmarshal(flags)) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // viperUnmarshal updates all flags with the corresponding values found |  |
| // via Viper, regardless whether the flag value is stored in TestContext, some other |  | ||||||
| // context or a local variable. |  | ||||||
| func viperUnmarshal(flags *flag.FlagSet) error { |  | ||||||
| 	var result error |  | ||||||
| 	set := make(map[string]bool) |  | ||||||
|  |  | ||||||
| 	// Determine which values were already set explicitly via |  | ||||||
| 	// flags. Those we don't overwrite because command line |  | ||||||
| 	// flags have a higher priority. |  | ||||||
| 	flags.Visit(func(f *flag.Flag) { |  | ||||||
| 		set[f.Name] = true |  | ||||||
| 	}) |  | ||||||
|  |  | ||||||
| 	flags.VisitAll(func(f *flag.Flag) { |  | ||||||
| 		if result != nil || |  | ||||||
| 			set[f.Name] || |  | ||||||
| 			!viper.IsSet(f.Name) { |  | ||||||
| 			return |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// In contrast to viper.Unmarshal(), values |  | ||||||
| 		// that have the wrong type (for example, a |  | ||||||
| 		// list instead of a plain string) will not |  | ||||||
| 		// trigger an error here. This could be fixed |  | ||||||
| 		// by checking the type ourselves, but |  | ||||||
| 		// probably isn't worth the effort. |  | ||||||
| 		// |  | ||||||
| 		// "%v" correctly turns bool, int, strings into |  | ||||||
| 		// the representation expected by flag, so those |  | ||||||
| 		// can be used in config files. Plain strings |  | ||||||
| 		// always work there, just as on the command line. |  | ||||||
| 		str := fmt.Sprintf("%v", viper.Get(f.Name)) |  | ||||||
| 		if err := f.Value.Set(str); err != nil { |  | ||||||
| 			result = fmt.Errorf("setting option %q from config file value: %s", f.Name, err) |  | ||||||
| 		} |  | ||||||
| 	}) |  | ||||||
|  |  | ||||||
| 	return result |  | ||||||
| } |  | ||||||
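The viperUnmarshal logic removed above hinges on a standard flag-precedence pattern from Go's flag package: flag.FlagSet.Visit walks only the flags that were set explicitly on the command line, while VisitAll walks every registered flag, so values from a config source are applied only where the command line stayed silent. A minimal, self-contained sketch of that pattern, with a plain map standing in for Viper (the map and the applyDefaults helper are illustrative assumptions, not part of this change):

```go
package main

import (
	"flag"
	"fmt"
)

// applyDefaults fills in flag values from cfg, but only for flags the user
// did not set explicitly on the command line. This mirrors the precedence
// rule of the removed viperUnmarshal: command-line flags win.
func applyDefaults(fs *flag.FlagSet, cfg map[string]string) error {
	set := make(map[string]bool)
	// Visit walks only the flags that were explicitly set.
	fs.Visit(func(f *flag.Flag) { set[f.Name] = true })

	var err error
	// VisitAll walks every registered flag, set or not.
	fs.VisitAll(func(f *flag.Flag) {
		if err != nil || set[f.Name] {
			return
		}
		if v, ok := cfg[f.Name]; ok {
			if setErr := f.Value.Set(v); setErr != nil {
				err = fmt.Errorf("setting option %q from config value: %v", f.Name, setErr)
			}
		}
	})
	return err
}

func main() {
	fs := flag.NewFlagSet("example", flag.ContinueOnError)
	host := fs.String("host", "localhost", "target host")
	port := fs.Int("port", 80, "target port")

	// --port is given explicitly, so the config value for it is ignored.
	_ = fs.Parse([]string{"--port", "8080"})
	_ = applyDefaults(fs, map[string]string{"host": "example.com", "port": "9999"})

	fmt.Println(*host, *port) // example.com 8080
}
```

Running this prints `example.com 8080`: the config supplies the host, but the explicitly passed port wins, which is exactly the priority the deleted code documented.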
| @@ -1,72 +0,0 @@ | |||||||
| /* |  | ||||||
| Copyright 2019 The Kubernetes Authors. |  | ||||||
|  |  | ||||||
| Licensed under the Apache License, Version 2.0 (the "License"); |  | ||||||
| you may not use this file except in compliance with the License. |  | ||||||
| You may obtain a copy of the License at |  | ||||||
|  |  | ||||||
|     http://www.apache.org/licenses/LICENSE-2.0 |  | ||||||
|  |  | ||||||
| Unless required by applicable law or agreed to in writing, software |  | ||||||
| distributed under the License is distributed on an "AS IS" BASIS, |  | ||||||
| WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |  | ||||||
| See the License for the specific language governing permissions and |  | ||||||
| limitations under the License. |  | ||||||
| */ |  | ||||||
|  |  | ||||||
| package e2e |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"flag" |  | ||||||
| 	"io/ioutil" |  | ||||||
| 	"os" |  | ||||||
| 	"testing" |  | ||||||
| 	"time" |  | ||||||
|  |  | ||||||
| 	"github.com/stretchr/testify/require" |  | ||||||
| 	"k8s.io/kubernetes/test/e2e/framework/config" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| func TestViperConfig(t *testing.T) { |  | ||||||
| 	flags := flag.NewFlagSet("test", 0) |  | ||||||
| 	type Context struct { |  | ||||||
| 		Bool     bool          `default:"true"` |  | ||||||
| 		Duration time.Duration `default:"1ms"` |  | ||||||
| 		Float64  float64       `default:"1.23456789"` |  | ||||||
| 		String   string        `default:"hello world"` |  | ||||||
| 		Int      int           `default:"-1" usage:"some number"` |  | ||||||
| 		Int64    int64         `default:"-1234567890123456789"` |  | ||||||
| 		Uint     uint          `default:"1"` |  | ||||||
| 		Uint64   uint64        `default:"1234567890123456789"` |  | ||||||
| 	} |  | ||||||
| 	var context Context |  | ||||||
| 	require.NotPanics(t, func() { |  | ||||||
| 		config.AddOptionsToSet(flags, &context, "") |  | ||||||
| 	}) |  | ||||||
|  |  | ||||||
| 	viperConfig := ` |  | ||||||
| bool: false |  | ||||||
| duration: 1s |  | ||||||
| float64: -1.23456789 |  | ||||||
| string: pong |  | ||||||
| int: -2 |  | ||||||
| int64: -9123456789012345678 |  | ||||||
| uint: 2 |  | ||||||
| uint64: 9123456789012345678 |  | ||||||
| ` |  | ||||||
| 	tmpfile, err := ioutil.TempFile("", "viperconfig-*.yaml") |  | ||||||
| 	require.NoError(t, err, "temp file") |  | ||||||
| 	defer os.Remove(tmpfile.Name()) |  | ||||||
| 	if _, err := tmpfile.Write([]byte(viperConfig)); err != nil { |  | ||||||
| 		require.NoError(t, err, "write config") |  | ||||||
| 	} |  | ||||||
| 	require.NoError(t, tmpfile.Close(), "close temp file") |  | ||||||
|  |  | ||||||
| 	require.NoError(t, viperizeFlags(tmpfile.Name(), "", flags), "read config file") |  | ||||||
| 	require.Equal(t, |  | ||||||
| 		Context{false, time.Second, -1.23456789, "pong", |  | ||||||
| 			-2, -9123456789012345678, 2, 9123456789012345678, |  | ||||||
| 		}, |  | ||||||
| 		context, |  | ||||||
| 		"values from viper must match") |  | ||||||
| } |  | ||||||
							
								
								
									
9 vendor/github.com/hashicorp/hcl/.gitignore generated vendored
							| @@ -1,9 +0,0 @@ | |||||||
| y.output |  | ||||||
|  |  | ||||||
| # ignore intellij files |  | ||||||
| .idea |  | ||||||
| *.iml |  | ||||||
| *.ipr |  | ||||||
| *.iws |  | ||||||
|  |  | ||||||
| *.test |  | ||||||
							
								
								
									
13 vendor/github.com/hashicorp/hcl/.travis.yml generated vendored
							| @@ -1,13 +0,0 @@ | |||||||
| sudo: false |  | ||||||
|  |  | ||||||
| language: go |  | ||||||
|  |  | ||||||
| go: |  | ||||||
|   - 1.x |  | ||||||
|   - tip |  | ||||||
|  |  | ||||||
| branches: |  | ||||||
|   only: |  | ||||||
|     - master |  | ||||||
|  |  | ||||||
| script: make test |  | ||||||
							
								
								
									
354 vendor/github.com/hashicorp/hcl/LICENSE generated vendored
							| @@ -1,354 +0,0 @@ | |||||||
| Mozilla Public License, version 2.0 |  | ||||||
|  |  | ||||||
| 1. Definitions |  | ||||||
|  |  | ||||||
| 1.1. “Contributor” |  | ||||||
|  |  | ||||||
|      means each individual or legal entity that creates, contributes to the |  | ||||||
|      creation of, or owns Covered Software. |  | ||||||
|  |  | ||||||
| 1.2. “Contributor Version” |  | ||||||
|  |  | ||||||
|      means the combination of the Contributions of others (if any) used by a |  | ||||||
|      Contributor and that particular Contributor’s Contribution. |  | ||||||
|  |  | ||||||
| 1.3. “Contribution” |  | ||||||
|  |  | ||||||
|      means Covered Software of a particular Contributor. |  | ||||||
|  |  | ||||||
| 1.4. “Covered Software” |  | ||||||
|  |  | ||||||
|      means Source Code Form to which the initial Contributor has attached the |  | ||||||
|      notice in Exhibit A, the Executable Form of such Source Code Form, and |  | ||||||
|      Modifications of such Source Code Form, in each case including portions |  | ||||||
|      thereof. |  | ||||||
|  |  | ||||||
| 1.5. “Incompatible With Secondary Licenses” |  | ||||||
|      means |  | ||||||
|  |  | ||||||
|      a. that the initial Contributor has attached the notice described in |  | ||||||
|         Exhibit B to the Covered Software; or |  | ||||||
|  |  | ||||||
|      b. that the Covered Software was made available under the terms of version |  | ||||||
|         1.1 or earlier of the License, but not also under the terms of a |  | ||||||
|         Secondary License. |  | ||||||
|  |  | ||||||
| 1.6. “Executable Form” |  | ||||||
|  |  | ||||||
|      means any form of the work other than Source Code Form. |  | ||||||
|  |  | ||||||
| 1.7. “Larger Work” |  | ||||||
|  |  | ||||||
|      means a work that combines Covered Software with other material, in a separate |  | ||||||
|      file or files, that is not Covered Software. |  | ||||||
|  |  | ||||||
| 1.8. “License” |  | ||||||
|  |  | ||||||
|      means this document. |  | ||||||
|  |  | ||||||
| 1.9. “Licensable” |  | ||||||
|  |  | ||||||
|      means having the right to grant, to the maximum extent possible, whether at the |  | ||||||
|      time of the initial grant or subsequently, any and all of the rights conveyed by |  | ||||||
|      this License. |  | ||||||
|  |  | ||||||
| 1.10. “Modifications” |  | ||||||
|  |  | ||||||
|      means any of the following: |  | ||||||
|  |  | ||||||
|      a. any file in Source Code Form that results from an addition to, deletion |  | ||||||
|         from, or modification of the contents of Covered Software; or |  | ||||||
|  |  | ||||||
|      b. any new file in Source Code Form that contains any Covered Software. |  | ||||||
|  |  | ||||||
| 1.11. “Patent Claims” of a Contributor |  | ||||||
|  |  | ||||||
|       means any patent claim(s), including without limitation, method, process, |  | ||||||
|       and apparatus claims, in any patent Licensable by such Contributor that |  | ||||||
|       would be infringed, but for the grant of the License, by the making, |  | ||||||
|       using, selling, offering for sale, having made, import, or transfer of |  | ||||||
|       either its Contributions or its Contributor Version. |  | ||||||
|  |  | ||||||
| 1.12. “Secondary License” |  | ||||||
|  |  | ||||||
|       means either the GNU General Public License, Version 2.0, the GNU Lesser |  | ||||||
|       General Public License, Version 2.1, the GNU Affero General Public |  | ||||||
|       License, Version 3.0, or any later versions of those licenses. |  | ||||||
|  |  | ||||||
| 1.13. “Source Code Form” |  | ||||||
|  |  | ||||||
|       means the form of the work preferred for making modifications. |  | ||||||
|  |  | ||||||
| 1.14. “You” (or “Your”) |  | ||||||
|  |  | ||||||
|       means an individual or a legal entity exercising rights under this |  | ||||||
|       License. For legal entities, “You” includes any entity that controls, is |  | ||||||
|       controlled by, or is under common control with You. For purposes of this |  | ||||||
|       definition, “control” means (a) the power, direct or indirect, to cause |  | ||||||
|       the direction or management of such entity, whether by contract or |  | ||||||
|       otherwise, or (b) ownership of more than fifty percent (50%) of the |  | ||||||
|       outstanding shares or beneficial ownership of such entity. |  | ||||||
|  |  | ||||||
|  |  | ||||||
| 2. License Grants and Conditions |  | ||||||
|  |  | ||||||
| 2.1. Grants |  | ||||||
|  |  | ||||||
|      Each Contributor hereby grants You a world-wide, royalty-free, |  | ||||||
|      non-exclusive license: |  | ||||||
|  |  | ||||||
|      a. under intellectual property rights (other than patent or trademark) |  | ||||||
|         Licensable by such Contributor to use, reproduce, make available, |  | ||||||
|         modify, display, perform, distribute, and otherwise exploit its |  | ||||||
|         Contributions, either on an unmodified basis, with Modifications, or as |  | ||||||
|         part of a Larger Work; and |  | ||||||
|  |  | ||||||
|      b. under Patent Claims of such Contributor to make, use, sell, offer for |  | ||||||
|         sale, have made, import, and otherwise transfer either its Contributions |  | ||||||
|         or its Contributor Version. |  | ||||||
|  |  | ||||||
| 2.2. Effective Date |  | ||||||
|  |  | ||||||
|      The licenses granted in Section 2.1 with respect to any Contribution become |  | ||||||
|      effective for each Contribution on the date the Contributor first distributes |  | ||||||
|      such Contribution. |  | ||||||
|  |  | ||||||
| 2.3. Limitations on Grant Scope |  | ||||||
|  |  | ||||||
|      The licenses granted in this Section 2 are the only rights granted under this |  | ||||||
|      License. No additional rights or licenses will be implied from the distribution |  | ||||||
|      or licensing of Covered Software under this License. Notwithstanding Section |  | ||||||
|      2.1(b) above, no patent license is granted by a Contributor: |  | ||||||
|  |  | ||||||
|      a. for any code that a Contributor has removed from Covered Software; or |  | ||||||
|  |  | ||||||
|      b. for infringements caused by: (i) Your and any other third party’s |  | ||||||
|         modifications of Covered Software, or (ii) the combination of its |  | ||||||
|         Contributions with other software (except as part of its Contributor |  | ||||||
|         Version); or |  | ||||||
|  |  | ||||||
|      c. under Patent Claims infringed by Covered Software in the absence of its |  | ||||||
|         Contributions. |  | ||||||
|  |  | ||||||
|      This License does not grant any rights in the trademarks, service marks, or |  | ||||||
|      logos of any Contributor (except as may be necessary to comply with the |  | ||||||
|      notice requirements in Section 3.4). |  | ||||||
|  |  | ||||||
| 2.4. Subsequent Licenses |  | ||||||
|  |  | ||||||
|      No Contributor makes additional grants as a result of Your choice to |  | ||||||
|      distribute the Covered Software under a subsequent version of this License |  | ||||||
|      (see Section 10.2) or under the terms of a Secondary License (if permitted |  | ||||||
|      under the terms of Section 3.3). |  | ||||||
|  |  | ||||||
| 2.5. Representation |  | ||||||
|  |  | ||||||
|      Each Contributor represents that the Contributor believes its Contributions |  | ||||||
|      are its original creation(s) or it has sufficient rights to grant the |  | ||||||
|      rights to its Contributions conveyed by this License. |  | ||||||
|  |  | ||||||
| 2.6. Fair Use |  | ||||||
|  |  | ||||||
|      This License is not intended to limit any rights You have under applicable |  | ||||||
|      copyright doctrines of fair use, fair dealing, or other equivalents. |  | ||||||
|  |  | ||||||
| 2.7. Conditions |  | ||||||
|  |  | ||||||
|      Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in |  | ||||||
|      Section 2.1. |  | ||||||
|  |  | ||||||
|  |  | ||||||
| 3. Responsibilities |  | ||||||
|  |  | ||||||
| 3.1. Distribution of Source Form |  | ||||||
|  |  | ||||||
|      All distribution of Covered Software in Source Code Form, including any |  | ||||||
|      Modifications that You create or to which You contribute, must be under the |  | ||||||
|      terms of this License. You must inform recipients that the Source Code Form |  | ||||||
|      of the Covered Software is governed by the terms of this License, and how |  | ||||||
|      they can obtain a copy of this License. You may not attempt to alter or |  | ||||||
|      restrict the recipients’ rights in the Source Code Form. |  | ||||||
|  |  | ||||||
| 3.2. Distribution of Executable Form |  | ||||||
|  |  | ||||||
|      If You distribute Covered Software in Executable Form then: |  | ||||||
|  |  | ||||||
|      a. such Covered Software must also be made available in Source Code Form, |  | ||||||
|         as described in Section 3.1, and You must inform recipients of the |  | ||||||
|         Executable Form how they can obtain a copy of such Source Code Form by |  | ||||||
|         reasonable means in a timely manner, at a charge no more than the cost |  | ||||||
|         of distribution to the recipient; and |  | ||||||
|  |  | ||||||
|      b. You may distribute such Executable Form under the terms of this License, |  | ||||||
|         or sublicense it under different terms, provided that the license for |  | ||||||
|         the Executable Form does not attempt to limit or alter the recipients’ |  | ||||||
|         rights in the Source Code Form under this License. |  | ||||||
|  |  | ||||||
| 3.3. Distribution of a Larger Work |  | ||||||
|  |  | ||||||
|      You may create and distribute a Larger Work under terms of Your choice, |  | ||||||
|      provided that You also comply with the requirements of this License for the |  | ||||||
|      Covered Software. If the Larger Work is a combination of Covered Software |  | ||||||
|      with a work governed by one or more Secondary Licenses, and the Covered |  | ||||||
|      Software is not Incompatible With Secondary Licenses, this License permits |  | ||||||
|      You to additionally distribute such Covered Software under the terms of |  | ||||||
|      such Secondary License(s), so that the recipient of the Larger Work may, at |  | ||||||
|      their option, further distribute the Covered Software under the terms of |  | ||||||
|      either this License or such Secondary License(s). |  | ||||||
|  |  | ||||||
| 3.4. Notices |  | ||||||
|  |  | ||||||
|      You may not remove or alter the substance of any license notices (including |  | ||||||
|      copyright notices, patent notices, disclaimers of warranty, or limitations |  | ||||||
|      of liability) contained within the Source Code Form of the Covered |  | ||||||
|      Software, except that You may alter any license notices to the extent |  | ||||||
|      required to remedy known factual inaccuracies. |  | ||||||
|  |  | ||||||
| 3.5. Application of Additional Terms |  | ||||||
|  |  | ||||||
|      You may choose to offer, and to charge a fee for, warranty, support, |  | ||||||
|      indemnity or liability obligations to one or more recipients of Covered |  | ||||||
|      Software. However, You may do so only on Your own behalf, and not on behalf |  | ||||||
|      of any Contributor. You must make it absolutely clear that any such |  | ||||||
|      warranty, support, indemnity, or liability obligation is offered by You |  | ||||||
|      alone, and You hereby agree to indemnify every Contributor for any |  | ||||||
|      liability incurred by such Contributor as a result of warranty, support, |  | ||||||
|      indemnity or liability terms You offer. You may include additional |  | ||||||
|      disclaimers of warranty and limitations of liability specific to any |  | ||||||
|      jurisdiction. |  | ||||||
|  |  | ||||||
| 4. Inability to Comply Due to Statute or Regulation |  | ||||||
|  |  | ||||||
|    If it is impossible for You to comply with any of the terms of this License |  | ||||||
|    with respect to some or all of the Covered Software due to statute, judicial |  | ||||||
|    order, or regulation then You must: (a) comply with the terms of this License |  | ||||||
|    to the maximum extent possible; and (b) describe the limitations and the code |  | ||||||
|    they affect. Such description must be placed in a text file included with all |  | ||||||
|    distributions of the Covered Software under this License. Except to the |  | ||||||
|    extent prohibited by statute or regulation, such description must be |  | ||||||
|    sufficiently detailed for a recipient of ordinary skill to be able to |  | ||||||
|    understand it. |  | ||||||
|  |  | ||||||
| 5. Termination |  | ||||||
|  |  | ||||||
| 5.1. The rights granted under this License will terminate automatically if You |  | ||||||
|      fail to comply with any of its terms. However, if You become compliant, |  | ||||||
|      then the rights granted under this License from a particular Contributor |  | ||||||
|      are reinstated (a) provisionally, unless and until such Contributor |  | ||||||
|      explicitly and finally terminates Your grants, and (b) on an ongoing basis, |  | ||||||
|      if such Contributor fails to notify You of the non-compliance by some |  | ||||||
|      reasonable means prior to 60 days after You have come back into compliance. |  | ||||||
|      Moreover, Your grants from a particular Contributor are reinstated on an |  | ||||||
|      ongoing basis if such Contributor notifies You of the non-compliance by |  | ||||||
|      some reasonable means, this is the first time You have received notice of |  | ||||||
|      non-compliance with this License from such Contributor, and You become |  | ||||||
|      compliant prior to 30 days after Your receipt of the notice. |  | ||||||
|  |  | ||||||
| 5.2. If You initiate litigation against any entity by asserting a patent |  | ||||||
|      infringement claim (excluding declaratory judgment actions, counter-claims, |  | ||||||
|      and cross-claims) alleging that a Contributor Version directly or |  | ||||||
|      indirectly infringes any patent, then the rights granted to You by any and |  | ||||||
|      all Contributors for the Covered Software under Section 2.1 of this License |  | ||||||
|      shall terminate. |  | ||||||
|  |  | ||||||
| 5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user |  | ||||||
|      license agreements (excluding distributors and resellers) which have been |  | ||||||
|      validly granted by You or Your distributors under this License prior to |  | ||||||
|      termination shall survive termination. |  | ||||||
|  |  | ||||||
| 6. Disclaimer of Warranty |  | ||||||
|  |  | ||||||
|    Covered Software is provided under this License on an “as is” basis, without |  | ||||||
|    warranty of any kind, either expressed, implied, or statutory, including, |  | ||||||
|    without limitation, warranties that the Covered Software is free of defects, |  | ||||||
|    merchantable, fit for a particular purpose or non-infringing. The entire |  | ||||||
|    risk as to the quality and performance of the Covered Software is with You. |  | ||||||
|    Should any Covered Software prove defective in any respect, You (not any |  | ||||||
|    Contributor) assume the cost of any necessary servicing, repair, or |  | ||||||
|    correction. This disclaimer of warranty constitutes an essential part of this |  | ||||||
|    License. No use of  any Covered Software is authorized under this License |  | ||||||
|    except under this disclaimer. |  | ||||||
|  |  | ||||||
| 7. Limitation of Liability |  | ||||||
|  |  | ||||||
|    Under no circumstances and under no legal theory, whether tort (including |  | ||||||
|    negligence), contract, or otherwise, shall any Contributor, or anyone who |  | ||||||
|    distributes Covered Software as permitted above, be liable to You for any |  | ||||||
|    direct, indirect, special, incidental, or consequential damages of any |  | ||||||
|    character including, without limitation, damages for lost profits, loss of |  | ||||||
|    goodwill, work stoppage, computer failure or malfunction, or any and all |  | ||||||
|    other commercial damages or losses, even if such party shall have been |  | ||||||
|    informed of the possibility of such damages. This limitation of liability |  | ||||||
|    shall not apply to liability for death or personal injury resulting from such |  | ||||||
|    party’s negligence to the extent applicable law prohibits such limitation. |  | ||||||
|    Some jurisdictions do not allow the exclusion or limitation of incidental or |  | ||||||
|    consequential damages, so this exclusion and limitation may not apply to You. |  | ||||||
|  |  | ||||||
| 8. Litigation |  | ||||||
|  |  | ||||||
|    Any litigation relating to this License may be brought only in the courts of |  | ||||||
|    a jurisdiction where the defendant maintains its principal place of business |  | ||||||
|    and such litigation shall be governed by laws of that jurisdiction, without |  | ||||||
|    reference to its conflict-of-law provisions. Nothing in this Section shall |  | ||||||
|    prevent a party’s ability to bring cross-claims or counter-claims. |  | ||||||
|  |  | ||||||
| 9. Miscellaneous |  | ||||||
|  |  | ||||||
|    This License represents the complete agreement concerning the subject matter |  | ||||||
|    hereof. If any provision of this License is held to be unenforceable, such |  | ||||||
|    provision shall be reformed only to the extent necessary to make it |  | ||||||
|    enforceable. Any law or regulation which provides that the language of a |  | ||||||
|    contract shall be construed against the drafter shall not be used to construe |  | ||||||
|    this License against a Contributor. |  | ||||||
|  |  | ||||||
|  |  | ||||||
| 10. Versions of the License |  | ||||||
|  |  | ||||||
| 10.1. New Versions |  | ||||||
|  |  | ||||||
|       Mozilla Foundation is the license steward. Except as provided in Section |  | ||||||
|       10.3, no one other than the license steward has the right to modify or |  | ||||||
|       publish new versions of this License. Each version will be given a |  | ||||||
|       distinguishing version number. |  | ||||||
|  |  | ||||||
| 10.2. Effect of New Versions |  | ||||||
|  |  | ||||||
|       You may distribute the Covered Software under the terms of the version of |  | ||||||
|       the License under which You originally received the Covered Software, or |  | ||||||
|       under the terms of any subsequent version published by the license |  | ||||||
|       steward. |  | ||||||
|  |  | ||||||
| 10.3. Modified Versions |  | ||||||
|  |  | ||||||
|       If you create software not governed by this License, and you want to |  | ||||||
|       create a new license for such software, you may create and use a modified |  | ||||||
|       version of this License if you rename the license and remove any |  | ||||||
|       references to the name of the license steward (except to note that such |  | ||||||
|       modified license differs from this License). |  | ||||||
|  |  | ||||||
| 10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses |  | ||||||
|       If You choose to distribute Source Code Form that is Incompatible With |  | ||||||
|       Secondary Licenses under the terms of this version of the License, the |  | ||||||
|       notice described in Exhibit B of this License must be attached. |  | ||||||
|  |  | ||||||
| Exhibit A - Source Code Form License Notice |  | ||||||
|  |  | ||||||
|       This Source Code Form is subject to the |  | ||||||
|       terms of the Mozilla Public License, v. |  | ||||||
|       2.0. If a copy of the MPL was not |  | ||||||
|       distributed with this file, You can |  | ||||||
|       obtain one at |  | ||||||
|       http://mozilla.org/MPL/2.0/. |  | ||||||
|  |  | ||||||
| If it is not possible or desirable to put the notice in a particular file, then |  | ||||||
| You may include the notice in a location (such as a LICENSE file in a relevant |  | ||||||
| directory) where a recipient would be likely to look for such a notice. |  | ||||||
|  |  | ||||||
| You may add additional accurate notices of copyright ownership. |  | ||||||
|  |  | ||||||
| Exhibit B - “Incompatible With Secondary Licenses” Notice |  | ||||||
|  |  | ||||||
|       This Source Code Form is “Incompatible |  | ||||||
|       With Secondary Licenses”, as defined by |  | ||||||
|       the Mozilla Public License, v. 2.0. |  | ||||||
|  |  | ||||||
							
								
								
									
18 vendor/github.com/hashicorp/hcl/Makefile generated vendored
							| @@ -1,18 +0,0 @@ | |||||||
| TEST?=./... |  | ||||||
|  |  | ||||||
| default: test |  | ||||||
|  |  | ||||||
| fmt: generate |  | ||||||
| 	go fmt ./... |  | ||||||
|  |  | ||||||
| test: generate |  | ||||||
| 	go get -t ./... |  | ||||||
| 	go test $(TEST) $(TESTARGS) |  | ||||||
|  |  | ||||||
| generate: |  | ||||||
| 	go generate ./... |  | ||||||
|  |  | ||||||
| updatedeps: |  | ||||||
| 	go get -u golang.org/x/tools/cmd/stringer |  | ||||||
|  |  | ||||||
| .PHONY: default generate test updatedeps |  | ||||||
							
								
								
									
125 vendor/github.com/hashicorp/hcl/README.md generated vendored
							| @@ -1,125 +0,0 @@ | |||||||
| # HCL |  | ||||||
|  |  | ||||||
| [](https://godoc.org/github.com/hashicorp/hcl) [](https://travis-ci.org/hashicorp/hcl) |  | ||||||
|  |  | ||||||
| HCL (HashiCorp Configuration Language) is a configuration language built |  | ||||||
| by HashiCorp. The goal of HCL is to build a structured configuration language |  | ||||||
| that is both human and machine friendly for use with command-line tools, but |  | ||||||
| specifically targeted towards DevOps tools, servers, etc. |  | ||||||
|  |  | ||||||
| HCL is also fully JSON compatible. That is, JSON can be used as completely |  | ||||||
| valid input to a system expecting HCL. This helps makes systems |  | ||||||
| interoperable with other systems. |  | ||||||
|  |  | ||||||
| HCL is heavily inspired by |  | ||||||
| [libucl](https://github.com/vstakhov/libucl), |  | ||||||
| nginx configuration, and others similar. |  | ||||||
|  |  | ||||||
| ## Why? |  | ||||||
|  |  | ||||||
| A common question when viewing HCL is to ask the question: why not |  | ||||||
| JSON, YAML, etc.? |  | ||||||
|  |  | ||||||
| Prior to HCL, the tools we built at [HashiCorp](http://www.hashicorp.com) |  | ||||||
| used a variety of configuration languages from full programming languages |  | ||||||
| such as Ruby to complete data structure languages such as JSON. What we |  | ||||||
| learned is that some people wanted human-friendly configuration languages |  | ||||||
| and some people wanted machine-friendly languages. |  | ||||||
|  |  | ||||||
| JSON fits a nice balance in this, but is fairly verbose and most |  | ||||||
| importantly doesn't support comments. With YAML, we found that beginners |  | ||||||
| had a really hard time determining what the actual structure was, and |  | ||||||
| ended up guessing more often than not whether to use a hyphen, colon, etc. |  | ||||||
| in order to represent some configuration key. |  | ||||||
|  |  | ||||||
| Full programming languages such as Ruby enable complex behavior |  | ||||||
| a configuration language shouldn't usually allow, and also forces |  | ||||||
| people to learn some set of Ruby. |  | ||||||
|  |  | ||||||
| Because of this, we decided to create our own configuration language |  | ||||||
| that is JSON-compatible. Our configuration language (HCL) is designed |  | ||||||
| to be written and modified by humans. The API for HCL allows JSON |  | ||||||
| as an input so that it is also machine-friendly (machines can generate |  | ||||||
| JSON instead of trying to generate HCL). |  | ||||||
|  |  | ||||||
| Our goal with HCL is not to alienate other configuration languages. |  | ||||||
| It is instead to provide HCL as a specialized language for our tools, |  | ||||||
| and JSON as the interoperability layer. |  | ||||||
|  |  | ||||||
| ## Syntax |  | ||||||
|  |  | ||||||
| For a complete grammar, please see the parser itself. A high-level overview |  | ||||||
| of the syntax and grammar is listed here. |  | ||||||
|  |  | ||||||
|   * Single line comments start with `#` or `//` |  | ||||||
|  |  | ||||||
|   * Multi-line comments are wrapped in `/*` and `*/`. Nested block comments |  | ||||||
|     are not allowed. A multi-line comment (also known as a block comment) |  | ||||||
|     terminates at the first `*/` found. |  | ||||||
|  |  | ||||||
|   * Values are assigned with the syntax `key = value` (whitespace doesn't |  | ||||||
|     matter). The value can be any primitive: a string, number, boolean, |  | ||||||
|     object, or list. |  | ||||||
|  |  | ||||||
|   * Strings are double-quoted and can contain any UTF-8 characters. |  | ||||||
|     Example: `"Hello, World"` |  | ||||||
|  |  | ||||||
|   * Multi-line strings start with `<<EOF` at the end of a line, and end |  | ||||||
|     with `EOF` on its own line ([here documents](https://en.wikipedia.org/wiki/Here_document)). |  | ||||||
|     Any text may be used in place of `EOF`. Example: |  | ||||||
| ``` |  | ||||||
| <<FOO |  | ||||||
| hello |  | ||||||
| world |  | ||||||
| FOO |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
|   * Numbers are assumed to be base 10. If you prefix a number with 0x, |  | ||||||
|     it is treated as a hexadecimal. If it is prefixed with 0, it is |  | ||||||
|     treated as an octal. Numbers can be in scientific notation: "1e10". |  | ||||||
|  |  | ||||||
|   * Boolean values: `true`, `false` |  | ||||||
|  |  | ||||||
|   * Arrays can be made by wrapping it in `[]`. Example: |  | ||||||
|     `["foo", "bar", 42]`. Arrays can contain primitives, |  | ||||||
|     other arrays, and objects. As an alternative, lists |  | ||||||
|     of objects can be created with repeated blocks, using |  | ||||||
|     this structure: |  | ||||||
|  |  | ||||||
|     ```hcl |  | ||||||
|     service { |  | ||||||
|         key = "value" |  | ||||||
|     } |  | ||||||
|  |  | ||||||
|     service { |  | ||||||
|         key = "value" |  | ||||||
|     } |  | ||||||
|     ``` |  | ||||||
|  |  | ||||||
| Objects and nested objects are created using the structure shown below: |  | ||||||
|  |  | ||||||
| ``` |  | ||||||
| variable "ami" { |  | ||||||
|     description = "the AMI to use" |  | ||||||
| } |  | ||||||
| ``` |  | ||||||
| This would be equivalent to the following json: |  | ||||||
| ``` json |  | ||||||
| { |  | ||||||
|   "variable": { |  | ||||||
|       "ami": { |  | ||||||
|           "description": "the AMI to use" |  | ||||||
|         } |  | ||||||
|     } |  | ||||||
| } |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| ## Thanks |  | ||||||
|  |  | ||||||
| Thanks to: |  | ||||||
|  |  | ||||||
|   * [@vstakhov](https://github.com/vstakhov) - The original libucl parser |  | ||||||
|     and syntax that HCL was based off of. |  | ||||||
|  |  | ||||||
|   * [@fatih](https://github.com/fatih) - The rewritten HCL parser |  | ||||||
|     in pure Go (no goyacc) and support for a printer. |  | ||||||
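The vendored README removed above illustrates HCL's JSON equivalence with the `variable "ami"` block. For context, a sketch of decoding that same snippet through this package's v1 entry point `hcl.Decode` (whose signature appears in decoder.go further down); the Config/Variable struct layout and tags here are illustrative assumptions, not a schema mandated by HCL:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl"
)

// Variable models the body of a `variable "<name>" { ... }` block; the field
// layout is an illustrative assumption for this sketch.
type Variable struct {
	Description string `hcl:"description"`
}

// Config collects labeled variable blocks into a map keyed by their label.
type Config struct {
	Variable map[string]*Variable `hcl:"variable"`
}

func main() {
	input := `
variable "ami" {
    description = "the AMI to use"
}
`
	var cfg Config
	// hcl.Decode parses the input and decodes it into cfg.
	if err := hcl.Decode(&cfg, input); err != nil {
		panic(err)
	}
	fmt.Println(cfg.Variable["ami"].Description) // the AMI to use
}
```

Per the README above, the equivalent JSON form is also valid input to a system expecting HCL, which is the interoperability point that document makes.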
							
								
								
									
19 vendor/github.com/hashicorp/hcl/appveyor.yml generated vendored
							| @@ -1,19 +0,0 @@ | |||||||
| version: "build-{branch}-{build}" |  | ||||||
| image: Visual Studio 2015 |  | ||||||
| clone_folder: c:\gopath\src\github.com\hashicorp\hcl |  | ||||||
| environment: |  | ||||||
|   GOPATH: c:\gopath |  | ||||||
| init: |  | ||||||
|   - git config --global core.autocrlf false |  | ||||||
| install: |  | ||||||
| - cmd: >- |  | ||||||
|     echo %Path% |  | ||||||
|  |  | ||||||
|     go version |  | ||||||
|  |  | ||||||
|     go env |  | ||||||
|  |  | ||||||
|     go get -t ./... |  | ||||||
|  |  | ||||||
| build_script: |  | ||||||
| - cmd: go test -v ./... |  | ||||||
							
								
								
									
729 vendor/github.com/hashicorp/hcl/decoder.go generated vendored
							| @@ -1,729 +0,0 @@ | |||||||
| package hcl |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"errors" |  | ||||||
| 	"fmt" |  | ||||||
| 	"reflect" |  | ||||||
| 	"sort" |  | ||||||
| 	"strconv" |  | ||||||
| 	"strings" |  | ||||||
|  |  | ||||||
| 	"github.com/hashicorp/hcl/hcl/ast" |  | ||||||
| 	"github.com/hashicorp/hcl/hcl/parser" |  | ||||||
| 	"github.com/hashicorp/hcl/hcl/token" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| // This is the tag to use with structures to have settings for HCL |  | ||||||
| const tagName = "hcl" |  | ||||||
|  |  | ||||||
| var ( |  | ||||||
| 	// nodeType holds a reference to the type of ast.Node |  | ||||||
| 	nodeType reflect.Type = findNodeType() |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| // Unmarshal accepts a byte slice as input and writes the |  | ||||||
| // data to the value pointed to by v. |  | ||||||
| func Unmarshal(bs []byte, v interface{}) error { |  | ||||||
| 	root, err := parse(bs) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return err |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return DecodeObject(v, root) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Decode reads the given input and decodes it into the structure |  | ||||||
| // given by `out`. |  | ||||||
| func Decode(out interface{}, in string) error { |  | ||||||
| 	obj, err := Parse(in) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return err |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return DecodeObject(out, obj) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // DecodeObject is a lower-level version of Decode. It decodes a |  | ||||||
| // raw Object into the given output. |  | ||||||
| func DecodeObject(out interface{}, n ast.Node) error { |  | ||||||
| 	val := reflect.ValueOf(out) |  | ||||||
| 	if val.Kind() != reflect.Ptr { |  | ||||||
| 		return errors.New("result must be a pointer") |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// If we have the file, we really decode the root node |  | ||||||
| 	if f, ok := n.(*ast.File); ok { |  | ||||||
| 		n = f.Node |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	var d decoder |  | ||||||
| 	return d.decode("root", n, val.Elem()) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| type decoder struct { |  | ||||||
| 	stack []reflect.Kind |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (d *decoder) decode(name string, node ast.Node, result reflect.Value) error { |  | ||||||
| 	k := result |  | ||||||
|  |  | ||||||
| 	// If we have an interface with a valid value, we use that |  | ||||||
| 	// for the check. |  | ||||||
| 	if result.Kind() == reflect.Interface { |  | ||||||
| 		elem := result.Elem() |  | ||||||
| 		if elem.IsValid() { |  | ||||||
| 			k = elem |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// Push current onto stack unless it is an interface. |  | ||||||
| 	if k.Kind() != reflect.Interface { |  | ||||||
| 		d.stack = append(d.stack, k.Kind()) |  | ||||||
|  |  | ||||||
| 		// Schedule a pop |  | ||||||
| 		defer func() { |  | ||||||
| 			d.stack = d.stack[:len(d.stack)-1] |  | ||||||
| 		}() |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	switch k.Kind() { |  | ||||||
| 	case reflect.Bool: |  | ||||||
| 		return d.decodeBool(name, node, result) |  | ||||||
| 	case reflect.Float32, reflect.Float64: |  | ||||||
| 		return d.decodeFloat(name, node, result) |  | ||||||
| 	case reflect.Int, reflect.Int32, reflect.Int64: |  | ||||||
| 		return d.decodeInt(name, node, result) |  | ||||||
| 	case reflect.Interface: |  | ||||||
| 		// When we see an interface, we make our own thing |  | ||||||
| 		return d.decodeInterface(name, node, result) |  | ||||||
| 	case reflect.Map: |  | ||||||
| 		return d.decodeMap(name, node, result) |  | ||||||
| 	case reflect.Ptr: |  | ||||||
| 		return d.decodePtr(name, node, result) |  | ||||||
| 	case reflect.Slice: |  | ||||||
| 		return d.decodeSlice(name, node, result) |  | ||||||
| 	case reflect.String: |  | ||||||
| 		return d.decodeString(name, node, result) |  | ||||||
| 	case reflect.Struct: |  | ||||||
| 		return d.decodeStruct(name, node, result) |  | ||||||
| 	default: |  | ||||||
| 		return &parser.PosError{ |  | ||||||
| 			Pos: node.Pos(), |  | ||||||
| 			Err: fmt.Errorf("%s: unknown kind to decode into: %s", name, k.Kind()), |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (d *decoder) decodeBool(name string, node ast.Node, result reflect.Value) error { |  | ||||||
| 	switch n := node.(type) { |  | ||||||
| 	case *ast.LiteralType: |  | ||||||
| 		if n.Token.Type == token.BOOL { |  | ||||||
| 			v, err := strconv.ParseBool(n.Token.Text) |  | ||||||
| 			if err != nil { |  | ||||||
| 				return err |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			result.Set(reflect.ValueOf(v)) |  | ||||||
| 			return nil |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return &parser.PosError{ |  | ||||||
| 		Pos: node.Pos(), |  | ||||||
| 		Err: fmt.Errorf("%s: unknown type %T", name, node), |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (d *decoder) decodeFloat(name string, node ast.Node, result reflect.Value) error { |  | ||||||
| 	switch n := node.(type) { |  | ||||||
| 	case *ast.LiteralType: |  | ||||||
| 		if n.Token.Type == token.FLOAT || n.Token.Type == token.NUMBER { |  | ||||||
| 			v, err := strconv.ParseFloat(n.Token.Text, 64) |  | ||||||
| 			if err != nil { |  | ||||||
| 				return err |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			result.Set(reflect.ValueOf(v).Convert(result.Type())) |  | ||||||
| 			return nil |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return &parser.PosError{ |  | ||||||
| 		Pos: node.Pos(), |  | ||||||
| 		Err: fmt.Errorf("%s: unknown type %T", name, node), |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (d *decoder) decodeInt(name string, node ast.Node, result reflect.Value) error { |  | ||||||
| 	switch n := node.(type) { |  | ||||||
| 	case *ast.LiteralType: |  | ||||||
| 		switch n.Token.Type { |  | ||||||
| 		case token.NUMBER: |  | ||||||
| 			v, err := strconv.ParseInt(n.Token.Text, 0, 0) |  | ||||||
| 			if err != nil { |  | ||||||
| 				return err |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			if result.Kind() == reflect.Interface { |  | ||||||
| 				result.Set(reflect.ValueOf(int(v))) |  | ||||||
| 			} else { |  | ||||||
| 				result.SetInt(v) |  | ||||||
| 			} |  | ||||||
| 			return nil |  | ||||||
| 		case token.STRING: |  | ||||||
| 			v, err := strconv.ParseInt(n.Token.Value().(string), 0, 0) |  | ||||||
| 			if err != nil { |  | ||||||
| 				return err |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			if result.Kind() == reflect.Interface { |  | ||||||
| 				result.Set(reflect.ValueOf(int(v))) |  | ||||||
| 			} else { |  | ||||||
| 				result.SetInt(v) |  | ||||||
| 			} |  | ||||||
| 			return nil |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return &parser.PosError{ |  | ||||||
| 		Pos: node.Pos(), |  | ||||||
| 		Err: fmt.Errorf("%s: unknown type %T", name, node), |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
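Note that the STRING case above means a quoted number is accepted wherever an int is expected. A minimal sketch of that behaviour, assuming the package's public hcl.Decode helper (defined outside this hunk):

package main

import (
	"fmt"

	"github.com/hashicorp/hcl"
)

type Limits struct {
	MaxConns int `hcl:"max_conns"`
	Port     int `hcl:"port"`
}

func main() {
	// Both a bare NUMBER token and a quoted STRING token land in the int fields.
	src := "max_conns = 10\nport = \"8080\"\n"

	var l Limits
	if err := hcl.Decode(&l, src); err != nil {
		panic(err)
	}
	fmt.Println(l.MaxConns, l.Port) // 10 8080
}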
|  |  | ||||||
| func (d *decoder) decodeInterface(name string, node ast.Node, result reflect.Value) error { |  | ||||||
| 	// When we see an ast.Node, we retain the value to enable deferred decoding. |  | ||||||
| 	// Very useful in situations where we want to preserve ast.Node information |  | ||||||
| 	// like Pos |  | ||||||
| 	if result.Type() == nodeType && result.CanSet() { |  | ||||||
| 		result.Set(reflect.ValueOf(node)) |  | ||||||
| 		return nil |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	var set reflect.Value |  | ||||||
| 	redecode := true |  | ||||||
|  |  | ||||||
| 	// For the type check below, an ObjectType should just be treated as a list. |  | ||||||
| 	// We set this to a temporary var because we want to pass in the real node. |  | ||||||
| 	testNode := node |  | ||||||
| 	if ot, ok := node.(*ast.ObjectType); ok { |  | ||||||
| 		testNode = ot.List |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	switch n := testNode.(type) { |  | ||||||
| 	case *ast.ObjectList: |  | ||||||
| 		// If we're at the root or we're directly within a slice, then we |  | ||||||
| 		// decode objects into map[string]interface{}, otherwise we decode |  | ||||||
| 		// them into lists. |  | ||||||
| 		if len(d.stack) == 0 || d.stack[len(d.stack)-1] == reflect.Slice { |  | ||||||
| 			var temp map[string]interface{} |  | ||||||
| 			tempVal := reflect.ValueOf(temp) |  | ||||||
| 			result := reflect.MakeMap( |  | ||||||
| 				reflect.MapOf( |  | ||||||
| 					reflect.TypeOf(""), |  | ||||||
| 					tempVal.Type().Elem())) |  | ||||||
|  |  | ||||||
| 			set = result |  | ||||||
| 		} else { |  | ||||||
| 			var temp []map[string]interface{} |  | ||||||
| 			tempVal := reflect.ValueOf(temp) |  | ||||||
| 			result := reflect.MakeSlice( |  | ||||||
| 				reflect.SliceOf(tempVal.Type().Elem()), 0, len(n.Items)) |  | ||||||
| 			set = result |  | ||||||
| 		} |  | ||||||
| 	case *ast.ObjectType: |  | ||||||
| 		// If we're at the root or we're directly within a slice, then we |  | ||||||
| 		// decode objects into map[string]interface{}, otherwise we decode |  | ||||||
| 		// them into lists. |  | ||||||
| 		if len(d.stack) == 0 || d.stack[len(d.stack)-1] == reflect.Slice { |  | ||||||
| 			var temp map[string]interface{} |  | ||||||
| 			tempVal := reflect.ValueOf(temp) |  | ||||||
| 			result := reflect.MakeMap( |  | ||||||
| 				reflect.MapOf( |  | ||||||
| 					reflect.TypeOf(""), |  | ||||||
| 					tempVal.Type().Elem())) |  | ||||||
|  |  | ||||||
| 			set = result |  | ||||||
| 		} else { |  | ||||||
| 			var temp []map[string]interface{} |  | ||||||
| 			tempVal := reflect.ValueOf(temp) |  | ||||||
| 			result := reflect.MakeSlice( |  | ||||||
| 				reflect.SliceOf(tempVal.Type().Elem()), 0, 1) |  | ||||||
| 			set = result |  | ||||||
| 		} |  | ||||||
| 	case *ast.ListType: |  | ||||||
| 		var temp []interface{} |  | ||||||
| 		tempVal := reflect.ValueOf(temp) |  | ||||||
| 		result := reflect.MakeSlice( |  | ||||||
| 			reflect.SliceOf(tempVal.Type().Elem()), 0, 0) |  | ||||||
| 		set = result |  | ||||||
| 	case *ast.LiteralType: |  | ||||||
| 		switch n.Token.Type { |  | ||||||
| 		case token.BOOL: |  | ||||||
| 			var result bool |  | ||||||
| 			set = reflect.Indirect(reflect.New(reflect.TypeOf(result))) |  | ||||||
| 		case token.FLOAT: |  | ||||||
| 			var result float64 |  | ||||||
| 			set = reflect.Indirect(reflect.New(reflect.TypeOf(result))) |  | ||||||
| 		case token.NUMBER: |  | ||||||
| 			var result int |  | ||||||
| 			set = reflect.Indirect(reflect.New(reflect.TypeOf(result))) |  | ||||||
| 		case token.STRING, token.HEREDOC: |  | ||||||
| 			set = reflect.Indirect(reflect.New(reflect.TypeOf(""))) |  | ||||||
| 		default: |  | ||||||
| 			return &parser.PosError{ |  | ||||||
| 				Pos: node.Pos(), |  | ||||||
| 				Err: fmt.Errorf("%s: cannot decode into interface: %T", name, node), |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
| 	default: |  | ||||||
| 		return fmt.Errorf( |  | ||||||
| 			"%s: cannot decode into interface: %T", |  | ||||||
| 			name, node) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// Set the result to what it's supposed to be, then reset |  | ||||||
| 	// result so we don't reflect into this method anymore. |  | ||||||
| 	result.Set(set) |  | ||||||
|  |  | ||||||
| 	if redecode { |  | ||||||
| 		// Revisit the node so that we can use the newly instantiated |  | ||||||
| 		// thing and populate it. |  | ||||||
| 		if err := d.decode(name, node, result); err != nil { |  | ||||||
| 			return err |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return nil |  | ||||||
| } |  | ||||||
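As a rough illustration of the stack check above: when the target is a bare interface{}, the decoder picks the container itself, so a top-level document generally comes back as a map[string]interface{} while nested blocks become lists of maps. A sketch, again assuming the public hcl.Decode helper:

package main

import (
	"fmt"

	"github.com/hashicorp/hcl"
)

func main() {
	src := "region = \"us-east-1\"\nserver \"a\" {\n  port = 80\n}\n"

	// Decoding into a plain interface{} routes through decodeInterface.
	var out interface{}
	if err := hcl.Decode(&out, src); err != nil {
		panic(err)
	}
	fmt.Printf("%T\n", out) // map[string]interface {} at the root
}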
|  |  | ||||||
| func (d *decoder) decodeMap(name string, node ast.Node, result reflect.Value) error { |  | ||||||
| 	if item, ok := node.(*ast.ObjectItem); ok { |  | ||||||
| 		node = &ast.ObjectList{Items: []*ast.ObjectItem{item}} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	if ot, ok := node.(*ast.ObjectType); ok { |  | ||||||
| 		node = ot.List |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	n, ok := node.(*ast.ObjectList) |  | ||||||
| 	if !ok { |  | ||||||
| 		return &parser.PosError{ |  | ||||||
| 			Pos: node.Pos(), |  | ||||||
| 			Err: fmt.Errorf("%s: not an object type for map (%T)", name, node), |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// If we have an interface, then we can address the interface, |  | ||||||
| 	// but not the map itself, so get the element but set the interface |  | ||||||
| 	set := result |  | ||||||
| 	if result.Kind() == reflect.Interface { |  | ||||||
| 		result = result.Elem() |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	resultType := result.Type() |  | ||||||
| 	resultElemType := resultType.Elem() |  | ||||||
| 	resultKeyType := resultType.Key() |  | ||||||
| 	if resultKeyType.Kind() != reflect.String { |  | ||||||
| 		return &parser.PosError{ |  | ||||||
| 			Pos: node.Pos(), |  | ||||||
| 			Err: fmt.Errorf("%s: map must have string keys", name), |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// Make a map if it is nil |  | ||||||
| 	resultMap := result |  | ||||||
| 	if result.IsNil() { |  | ||||||
| 		resultMap = reflect.MakeMap( |  | ||||||
| 			reflect.MapOf(resultKeyType, resultElemType)) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// Go through each element and decode it. |  | ||||||
| 	done := make(map[string]struct{}) |  | ||||||
| 	for _, item := range n.Items { |  | ||||||
| 		if item.Val == nil { |  | ||||||
| 			continue |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// github.com/hashicorp/terraform/issue/5740 |  | ||||||
| 		if len(item.Keys) == 0 { |  | ||||||
| 			return &parser.PosError{ |  | ||||||
| 				Pos: node.Pos(), |  | ||||||
| 				Err: fmt.Errorf("%s: map must have string keys", name), |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// Get the key we're dealing with, which is the first item |  | ||||||
| 		keyStr := item.Keys[0].Token.Value().(string) |  | ||||||
|  |  | ||||||
| 		// If we've already processed this key, then ignore it |  | ||||||
| 		if _, ok := done[keyStr]; ok { |  | ||||||
| 			continue |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// Determine the value. If we have more than one key, then we |  | ||||||
| 		// get the objectlist of only these keys. |  | ||||||
| 		itemVal := item.Val |  | ||||||
| 		if len(item.Keys) > 1 { |  | ||||||
| 			itemVal = n.Filter(keyStr) |  | ||||||
| 			done[keyStr] = struct{}{} |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// Make the field name |  | ||||||
| 		fieldName := fmt.Sprintf("%s.%s", name, keyStr) |  | ||||||
|  |  | ||||||
| 		// Get the key/value as reflection values |  | ||||||
| 		key := reflect.ValueOf(keyStr) |  | ||||||
| 		val := reflect.Indirect(reflect.New(resultElemType)) |  | ||||||
|  |  | ||||||
| 		// If we have a pre-existing value in the map, use that |  | ||||||
| 		oldVal := resultMap.MapIndex(key) |  | ||||||
| 		if oldVal.IsValid() { |  | ||||||
| 			val.Set(oldVal) |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// Decode! |  | ||||||
| 		if err := d.decode(fieldName, itemVal, val); err != nil { |  | ||||||
| 			return err |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// Set the value on the map |  | ||||||
| 		resultMap.SetMapIndex(key, val) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// Set the final map if we can |  | ||||||
| 	set.Set(resultMap) |  | ||||||
| 	return nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (d *decoder) decodePtr(name string, node ast.Node, result reflect.Value) error { |  | ||||||
| 	// Create an element of the concrete (non pointer) type and decode |  | ||||||
| 	// into that. Then set the value of the pointer to this type. |  | ||||||
| 	resultType := result.Type() |  | ||||||
| 	resultElemType := resultType.Elem() |  | ||||||
| 	val := reflect.New(resultElemType) |  | ||||||
| 	if err := d.decode(name, node, reflect.Indirect(val)); err != nil { |  | ||||||
| 		return err |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	result.Set(val) |  | ||||||
| 	return nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (d *decoder) decodeSlice(name string, node ast.Node, result reflect.Value) error { |  | ||||||
| 	// If we have an interface, then we can address the interface, |  | ||||||
| 	// but not the slice itself, so get the element but set the interface |  | ||||||
| 	set := result |  | ||||||
| 	if result.Kind() == reflect.Interface { |  | ||||||
| 		result = result.Elem() |  | ||||||
| 	} |  | ||||||
| 	// Create the slice if it is nil |  | ||||||
| 	resultType := result.Type() |  | ||||||
| 	resultElemType := resultType.Elem() |  | ||||||
| 	if result.IsNil() { |  | ||||||
| 		resultSliceType := reflect.SliceOf(resultElemType) |  | ||||||
| 		result = reflect.MakeSlice( |  | ||||||
| 			resultSliceType, 0, 0) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// Figure out the items we'll be copying into the slice |  | ||||||
| 	var items []ast.Node |  | ||||||
| 	switch n := node.(type) { |  | ||||||
| 	case *ast.ObjectList: |  | ||||||
| 		items = make([]ast.Node, len(n.Items)) |  | ||||||
| 		for i, item := range n.Items { |  | ||||||
| 			items[i] = item |  | ||||||
| 		} |  | ||||||
| 	case *ast.ObjectType: |  | ||||||
| 		items = []ast.Node{n} |  | ||||||
| 	case *ast.ListType: |  | ||||||
| 		items = n.List |  | ||||||
| 	default: |  | ||||||
| 		return &parser.PosError{ |  | ||||||
| 			Pos: node.Pos(), |  | ||||||
| 			Err: fmt.Errorf("unknown slice type: %T", node), |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	for i, item := range items { |  | ||||||
| 		fieldName := fmt.Sprintf("%s[%d]", name, i) |  | ||||||
|  |  | ||||||
| 		// Decode |  | ||||||
| 		val := reflect.Indirect(reflect.New(resultElemType)) |  | ||||||
|  |  | ||||||
| 		// if item is an object that was decoded from ambiguous JSON and |  | ||||||
| 		// flattened, make sure it's expanded if it needs to decode into a |  | ||||||
| 		// defined structure. |  | ||||||
| 		item := expandObject(item, val) |  | ||||||
|  |  | ||||||
| 		if err := d.decode(fieldName, item, val); err != nil { |  | ||||||
| 			return err |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// Append it onto the slice |  | ||||||
| 		result = reflect.Append(result, val) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	set.Set(result) |  | ||||||
| 	return nil |  | ||||||
| } |  | ||||||
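Because both a single *ast.ObjectType and an *ast.ObjectList feed the same item loop above, one repeated block or several decode into the same slice field. A small sketch (public hcl.Decode assumed):

package main

import (
	"fmt"

	"github.com/hashicorp/hcl"
)

type Rule struct {
	Action string `hcl:"action"`
}

type Config struct {
	Rule []Rule `hcl:"rule"`
}

func main() {
	src := "rule { action = \"allow\" }\nrule { action = \"deny\" }\n"

	var c Config
	if err := hcl.Decode(&c, src); err != nil {
		panic(err)
	}
	fmt.Println(len(c.Rule), c.Rule[0].Action) // 2 allow
}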
|  |  | ||||||
| // expandObject detects if an ambiguous JSON object was flattened to a List which |  | ||||||
| // should be decoded into a struct, and expands the ast to properly decode. |  | ||||||
| func expandObject(node ast.Node, result reflect.Value) ast.Node { |  | ||||||
| 	item, ok := node.(*ast.ObjectItem) |  | ||||||
| 	if !ok { |  | ||||||
| 		return node |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	elemType := result.Type() |  | ||||||
|  |  | ||||||
| 	// our target type must be a struct |  | ||||||
| 	switch elemType.Kind() { |  | ||||||
| 	case reflect.Ptr: |  | ||||||
| 		switch elemType.Elem().Kind() { |  | ||||||
| 		case reflect.Struct: |  | ||||||
| 			//OK |  | ||||||
| 		default: |  | ||||||
| 			return node |  | ||||||
| 		} |  | ||||||
| 	case reflect.Struct: |  | ||||||
| 		//OK |  | ||||||
| 	default: |  | ||||||
| 		return node |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// A list value will have a key and field name. If it had more fields, |  | ||||||
| 	// it wouldn't have been flattened. |  | ||||||
| 	if len(item.Keys) != 2 { |  | ||||||
| 		return node |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	keyToken := item.Keys[0].Token |  | ||||||
| 	item.Keys = item.Keys[1:] |  | ||||||
|  |  | ||||||
| 	// we need to un-flatten the ast enough to decode |  | ||||||
| 	newNode := &ast.ObjectItem{ |  | ||||||
| 		Keys: []*ast.ObjectKey{ |  | ||||||
| 			&ast.ObjectKey{ |  | ||||||
| 				Token: keyToken, |  | ||||||
| 			}, |  | ||||||
| 		}, |  | ||||||
| 		Val: &ast.ObjectType{ |  | ||||||
| 			List: &ast.ObjectList{ |  | ||||||
| 				Items: []*ast.ObjectItem{item}, |  | ||||||
| 			}, |  | ||||||
| 		}, |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return newNode |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (d *decoder) decodeString(name string, node ast.Node, result reflect.Value) error { |  | ||||||
| 	switch n := node.(type) { |  | ||||||
| 	case *ast.LiteralType: |  | ||||||
| 		switch n.Token.Type { |  | ||||||
| 		case token.NUMBER: |  | ||||||
| 			result.Set(reflect.ValueOf(n.Token.Text).Convert(result.Type())) |  | ||||||
| 			return nil |  | ||||||
| 		case token.STRING, token.HEREDOC: |  | ||||||
| 			result.Set(reflect.ValueOf(n.Token.Value()).Convert(result.Type())) |  | ||||||
| 			return nil |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return &parser.PosError{ |  | ||||||
| 		Pos: node.Pos(), |  | ||||||
| 		Err: fmt.Errorf("%s: unknown type for string %T", name, node), |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (d *decoder) decodeStruct(name string, node ast.Node, result reflect.Value) error { |  | ||||||
| 	var item *ast.ObjectItem |  | ||||||
| 	if it, ok := node.(*ast.ObjectItem); ok { |  | ||||||
| 		item = it |  | ||||||
| 		node = it.Val |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	if ot, ok := node.(*ast.ObjectType); ok { |  | ||||||
| 		node = ot.List |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// Handle the special case where the object itself is a literal. Previously |  | ||||||
| 	// the yacc parser would always ensure top-level elements were arrays. The new |  | ||||||
| 	// parser does not make the same guarantees, thus we need to convert any |  | ||||||
| 	// top-level literal elements into a list. |  | ||||||
| 	if _, ok := node.(*ast.LiteralType); ok && item != nil { |  | ||||||
| 		node = &ast.ObjectList{Items: []*ast.ObjectItem{item}} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	list, ok := node.(*ast.ObjectList) |  | ||||||
| 	if !ok { |  | ||||||
| 		return &parser.PosError{ |  | ||||||
| 			Pos: node.Pos(), |  | ||||||
| 			Err: fmt.Errorf("%s: not an object type for struct (%T)", name, node), |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// This slice will keep track of all the structs we'll be decoding. |  | ||||||
| 	// There can be more than one struct if there are embedded structs |  | ||||||
| 	// that are squashed. |  | ||||||
| 	structs := make([]reflect.Value, 1, 5) |  | ||||||
| 	structs[0] = result |  | ||||||
|  |  | ||||||
| 	// Compile the list of all the fields that we're going to be decoding |  | ||||||
| 	// from all the structs. |  | ||||||
| 	type field struct { |  | ||||||
| 		field reflect.StructField |  | ||||||
| 		val   reflect.Value |  | ||||||
| 	} |  | ||||||
| 	fields := []field{} |  | ||||||
| 	for len(structs) > 0 { |  | ||||||
| 		structVal := structs[0] |  | ||||||
| 		structs = structs[1:] |  | ||||||
|  |  | ||||||
| 		structType := structVal.Type() |  | ||||||
| 		for i := 0; i < structType.NumField(); i++ { |  | ||||||
| 			fieldType := structType.Field(i) |  | ||||||
| 			tagParts := strings.Split(fieldType.Tag.Get(tagName), ",") |  | ||||||
|  |  | ||||||
| 			// Ignore fields with tag name "-" |  | ||||||
| 			if tagParts[0] == "-" { |  | ||||||
| 				continue |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			if fieldType.Anonymous { |  | ||||||
| 				fieldKind := fieldType.Type.Kind() |  | ||||||
| 				if fieldKind != reflect.Struct { |  | ||||||
| 					return &parser.PosError{ |  | ||||||
| 						Pos: node.Pos(), |  | ||||||
| 						Err: fmt.Errorf("%s: unsupported type to struct: %s", |  | ||||||
| 							fieldType.Name, fieldKind), |  | ||||||
| 					} |  | ||||||
| 				} |  | ||||||
|  |  | ||||||
| 				// We have an embedded field. We "squash" the fields down |  | ||||||
| 				// if specified in the tag. |  | ||||||
| 				squash := false |  | ||||||
| 				for _, tag := range tagParts[1:] { |  | ||||||
| 					if tag == "squash" { |  | ||||||
| 						squash = true |  | ||||||
| 						break |  | ||||||
| 					} |  | ||||||
| 				} |  | ||||||
|  |  | ||||||
| 				if squash { |  | ||||||
| 					structs = append( |  | ||||||
| 						structs, result.FieldByName(fieldType.Name)) |  | ||||||
| 					continue |  | ||||||
| 				} |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			// Normal struct field, store it away |  | ||||||
| 			fields = append(fields, field{fieldType, structVal.Field(i)}) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	usedKeys := make(map[string]struct{}) |  | ||||||
| 	decodedFields := make([]string, 0, len(fields)) |  | ||||||
| 	decodedFieldsVal := make([]reflect.Value, 0) |  | ||||||
| 	unusedKeysVal := make([]reflect.Value, 0) |  | ||||||
| 	for _, f := range fields { |  | ||||||
| 		field, fieldValue := f.field, f.val |  | ||||||
| 		if !fieldValue.IsValid() { |  | ||||||
| 			// This should never happen |  | ||||||
| 			panic("field is not valid") |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// If we can't set the field, then it is unexported or something, |  | ||||||
| 		// and we just continue onwards. |  | ||||||
| 		if !fieldValue.CanSet() { |  | ||||||
| 			continue |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		fieldName := field.Name |  | ||||||
|  |  | ||||||
| 		tagValue := field.Tag.Get(tagName) |  | ||||||
| 		tagParts := strings.SplitN(tagValue, ",", 2) |  | ||||||
| 		if len(tagParts) >= 2 { |  | ||||||
| 			switch tagParts[1] { |  | ||||||
| 			case "decodedFields": |  | ||||||
| 				decodedFieldsVal = append(decodedFieldsVal, fieldValue) |  | ||||||
| 				continue |  | ||||||
| 			case "key": |  | ||||||
| 				if item == nil { |  | ||||||
| 					return &parser.PosError{ |  | ||||||
| 						Pos: node.Pos(), |  | ||||||
| 						Err: fmt.Errorf("%s: %s asked for 'key', impossible", |  | ||||||
| 							name, fieldName), |  | ||||||
| 					} |  | ||||||
| 				} |  | ||||||
|  |  | ||||||
| 				fieldValue.SetString(item.Keys[0].Token.Value().(string)) |  | ||||||
| 				continue |  | ||||||
| 			case "unusedKeys": |  | ||||||
| 				unusedKeysVal = append(unusedKeysVal, fieldValue) |  | ||||||
| 				continue |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if tagParts[0] != "" { |  | ||||||
| 			fieldName = tagParts[0] |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// Determine the element we'll use to decode. If it is a single |  | ||||||
| 		// match (only object with the field), then we decode it exactly. |  | ||||||
| 		// If it is a prefix match, then we decode the matches. |  | ||||||
| 		filter := list.Filter(fieldName) |  | ||||||
|  |  | ||||||
| 		prefixMatches := filter.Children() |  | ||||||
| 		matches := filter.Elem() |  | ||||||
| 		if len(matches.Items) == 0 && len(prefixMatches.Items) == 0 { |  | ||||||
| 			continue |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// Track the used key |  | ||||||
| 		usedKeys[fieldName] = struct{}{} |  | ||||||
|  |  | ||||||
| 		// Create the field name and decode. We range over the elements |  | ||||||
| 		// because we actually want the value. |  | ||||||
| 		fieldName = fmt.Sprintf("%s.%s", name, fieldName) |  | ||||||
| 		if len(prefixMatches.Items) > 0 { |  | ||||||
| 			if err := d.decode(fieldName, prefixMatches, fieldValue); err != nil { |  | ||||||
| 				return err |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
| 		for _, match := range matches.Items { |  | ||||||
| 			var decodeNode ast.Node = match.Val |  | ||||||
| 			if ot, ok := decodeNode.(*ast.ObjectType); ok { |  | ||||||
| 				decodeNode = &ast.ObjectList{Items: ot.List.Items} |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			if err := d.decode(fieldName, decodeNode, fieldValue); err != nil { |  | ||||||
| 				return err |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		decodedFields = append(decodedFields, field.Name) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	if len(decodedFieldsVal) > 0 { |  | ||||||
| 		// Sort it so that it is deterministic |  | ||||||
| 		sort.Strings(decodedFields) |  | ||||||
|  |  | ||||||
| 		for _, v := range decodedFieldsVal { |  | ||||||
| 			v.Set(reflect.ValueOf(decodedFields)) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return nil |  | ||||||
| } |  | ||||||
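The tag suffixes handled above (",key", ",squash", ",decodedFields", ",unusedKeys") drive most of the struct behaviour. A hedged sketch of the first two, assuming the public hcl.Decode helper:

package main

import (
	"fmt"

	"github.com/hashicorp/hcl"
)

type Base struct {
	Region string `hcl:"region"`
}

type Service struct {
	Name string `hcl:",key"` // filled from the block label
	Addr string `hcl:"addr"`
}

type Config struct {
	Base    `hcl:",squash"` // embedded fields are flattened into Config
	Service []Service `hcl:"service"`
}

func main() {
	src := "region = \"eu-west-1\"\nservice \"web\" { addr = \"10.0.0.1:80\" }\n"

	var c Config
	if err := hcl.Decode(&c, src); err != nil {
		panic(err)
	}
	fmt.Println(c.Region, c.Service[0].Name, c.Service[0].Addr) // eu-west-1 web 10.0.0.1:80
}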
|  |  | ||||||
| // findNodeType returns the type of ast.Node |  | ||||||
| func findNodeType() reflect.Type { |  | ||||||
| 	var nodeContainer struct { |  | ||||||
| 		Node ast.Node |  | ||||||
| 	} |  | ||||||
| 	value := reflect.ValueOf(nodeContainer).FieldByName("Node") |  | ||||||
| 	return value.Type() |  | ||||||
| } |  | ||||||
							
								
								
									
3  vendor/github.com/hashicorp/hcl/go.mod  generated  vendored
| @@ -1,3 +0,0 @@ |
| module github.com/hashicorp/hcl |  | ||||||
|  |  | ||||||
| require github.com/davecgh/go-spew v1.1.1 |  | ||||||
							
								
								
									
2  vendor/github.com/hashicorp/hcl/go.sum  generated  vendored
| @@ -1,2 +0,0 @@ |
| github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= |  | ||||||
| github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= |  | ||||||
							
								
								
									
11  vendor/github.com/hashicorp/hcl/hcl.go  generated  vendored
| @@ -1,11 +0,0 @@ |
| // Package hcl decodes HCL into usable Go structures. |  | ||||||
| // |  | ||||||
| // hcl input can come in either pure HCL format or JSON format. |  | ||||||
| // It can be parsed into an AST, and then decoded into a structure, |  | ||||||
| // or it can be decoded directly from a string into a structure. |  | ||||||
| // |  | ||||||
| // If you choose to parse HCL into a raw AST, the benefit is that you |  | ||||||
| // can write custom visitor implementations to implement custom |  | ||||||
| // semantic checks. By default, HCL does not perform any semantic |  | ||||||
| // checks. |  | ||||||
| package hcl |  | ||||||
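The two routes described in the package comment look roughly like this; hcl.Decode, hcl.Parse and hcl.DecodeObject are root-package helpers that sit outside this hunk, so treat the exact names as an assumption:

package main

import (
	"fmt"

	"github.com/hashicorp/hcl"
)

type App struct {
	Name string `hcl:"name"`
}

func main() {
	src := "name = \"demo\"\n"

	// Route 1: decode a string straight into a structure.
	var a App
	if err := hcl.Decode(&a, src); err != nil {
		panic(err)
	}

	// Route 2: parse to an AST first (room for custom semantic checks),
	// then decode the tree.
	f, err := hcl.Parse(src)
	if err != nil {
		panic(err)
	}
	var b App
	if err := hcl.DecodeObject(&b, f.Node); err != nil {
		panic(err)
	}

	fmt.Println(a.Name, b.Name) // demo demo
}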
							
								
								
									
219  vendor/github.com/hashicorp/hcl/hcl/ast/ast.go  generated  vendored
| @@ -1,219 +0,0 @@ |
| // Package ast declares the types used to represent syntax trees for HCL |  | ||||||
| // (HashiCorp Configuration Language) |  | ||||||
| package ast |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"fmt" |  | ||||||
| 	"strings" |  | ||||||
|  |  | ||||||
| 	"github.com/hashicorp/hcl/hcl/token" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| // Node is an element in the abstract syntax tree. |  | ||||||
| type Node interface { |  | ||||||
| 	node() |  | ||||||
| 	Pos() token.Pos |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (File) node()         {} |  | ||||||
| func (ObjectList) node()   {} |  | ||||||
| func (ObjectKey) node()    {} |  | ||||||
| func (ObjectItem) node()   {} |  | ||||||
| func (Comment) node()      {} |  | ||||||
| func (CommentGroup) node() {} |  | ||||||
| func (ObjectType) node()   {} |  | ||||||
| func (LiteralType) node()  {} |  | ||||||
| func (ListType) node()     {} |  | ||||||
|  |  | ||||||
| // File represents a single HCL file |  | ||||||
| type File struct { |  | ||||||
| 	Node     Node            // usually a *ObjectList |  | ||||||
| 	Comments []*CommentGroup // list of all comments in the source |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (f *File) Pos() token.Pos { |  | ||||||
| 	return f.Node.Pos() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ObjectList represents a list of ObjectItems. An HCL file itself is an |  | ||||||
| // ObjectList. |  | ||||||
| type ObjectList struct { |  | ||||||
| 	Items []*ObjectItem |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (o *ObjectList) Add(item *ObjectItem) { |  | ||||||
| 	o.Items = append(o.Items, item) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Filter filters out the objects with the given key list as a prefix. |  | ||||||
| // |  | ||||||
| // The returned list of objects contains ObjectItems where the keys have |  | ||||||
| // this prefix already stripped off. This might result in objects with |  | ||||||
| // zero-length key lists if they have no children. |  | ||||||
| // |  | ||||||
| // If no matches are found, an empty ObjectList (non-nil) is returned. |  | ||||||
| func (o *ObjectList) Filter(keys ...string) *ObjectList { |  | ||||||
| 	var result ObjectList |  | ||||||
| 	for _, item := range o.Items { |  | ||||||
| 		// If there aren't enough keys, then ignore this |  | ||||||
| 		if len(item.Keys) < len(keys) { |  | ||||||
| 			continue |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		match := true |  | ||||||
| 		for i, key := range item.Keys[:len(keys)] { |  | ||||||
| 			key := key.Token.Value().(string) |  | ||||||
| 			if key != keys[i] && !strings.EqualFold(key, keys[i]) { |  | ||||||
| 				match = false |  | ||||||
| 				break |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
| 		if !match { |  | ||||||
| 			continue |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// Strip off the prefix from the children |  | ||||||
| 		newItem := *item |  | ||||||
| 		newItem.Keys = newItem.Keys[len(keys):] |  | ||||||
| 		result.Add(&newItem) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return &result |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Children returns further nested objects (key length > 0) within this |  | ||||||
| // ObjectList. This should be used with Filter to get at child items. |  | ||||||
| func (o *ObjectList) Children() *ObjectList { |  | ||||||
| 	var result ObjectList |  | ||||||
| 	for _, item := range o.Items { |  | ||||||
| 		if len(item.Keys) > 0 { |  | ||||||
| 			result.Add(item) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return &result |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Elem returns items in the list that are direct element assignments |  | ||||||
| // (key length == 0). This should be used with Filter to get at elements. |  | ||||||
| func (o *ObjectList) Elem() *ObjectList { |  | ||||||
| 	var result ObjectList |  | ||||||
| 	for _, item := range o.Items { |  | ||||||
| 		if len(item.Keys) == 0 { |  | ||||||
| 			result.Add(item) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return &result |  | ||||||
| } |  | ||||||
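A minimal sketch of Filter, Children and Elem working together, using the parser package that appears later in this diff:

package main

import (
	"fmt"

	"github.com/hashicorp/hcl/hcl/ast"
	"github.com/hashicorp/hcl/hcl/parser"
)

func main() {
	src := "service \"web\" { addr = \"10.0.0.1\" }\n" +
		"service \"db\" { addr = \"10.0.0.2\" }\n" +
		"region = \"eu-west-1\"\n"

	f, err := parser.Parse([]byte(src))
	if err != nil {
		panic(err)
	}
	list := f.Node.(*ast.ObjectList)

	// Filter strips the matched prefix; the leftover labels keep the service
	// blocks in Children(), while the bare assignment shows up via Elem().
	services := list.Filter("service").Children()
	regions := list.Filter("region").Elem()
	fmt.Println(len(services.Items), len(regions.Items)) // 2 1
}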
|  |  | ||||||
| func (o *ObjectList) Pos() token.Pos { |  | ||||||
| 	// returns the position of the first item |  | ||||||
| 	return o.Items[0].Pos() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ObjectItem represents an HCL Object Item. An item is represented with a key |  | ||||||
| // (or keys). It can be an assignment or an object (both normal and nested). |  | ||||||
| type ObjectItem struct { |  | ||||||
| 	// Keys is only one element long if it's an assignment. If it's a |  | ||||||
| 	// nested object it can be longer than one. In that case "assign" is |  | ||||||
| 	// invalid as there are no assignments for a nested object. |  | ||||||
| 	Keys []*ObjectKey |  | ||||||
|  |  | ||||||
| 	// assign contains the position of "=", if any |  | ||||||
| 	Assign token.Pos |  | ||||||
|  |  | ||||||
| 	// Val is the item itself. It can be an object, list, number, bool or a |  | ||||||
| 	// string. If the key length is larger than one, Val can only be of type |  | ||||||
| 	// Object. |  | ||||||
| 	Val Node |  | ||||||
|  |  | ||||||
| 	LeadComment *CommentGroup // associated lead comment |  | ||||||
| 	LineComment *CommentGroup // associated line comment |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (o *ObjectItem) Pos() token.Pos { |  | ||||||
| 	// I'm not entirely sure what causes this, but removing this causes |  | ||||||
| 	// a test failure. We should investigate at some point. |  | ||||||
| 	if len(o.Keys) == 0 { |  | ||||||
| 		return token.Pos{} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return o.Keys[0].Pos() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ObjectKey is either an identifier or a string. |  | ||||||
| type ObjectKey struct { |  | ||||||
| 	Token token.Token |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (o *ObjectKey) Pos() token.Pos { |  | ||||||
| 	return o.Token.Pos |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // LiteralType represents a literal of basic type. Valid types are: |  | ||||||
| // token.NUMBER, token.FLOAT, token.BOOL and token.STRING |  | ||||||
| type LiteralType struct { |  | ||||||
| 	Token token.Token |  | ||||||
|  |  | ||||||
| 	// comment types, only used when in a list |  | ||||||
| 	LeadComment *CommentGroup |  | ||||||
| 	LineComment *CommentGroup |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *LiteralType) Pos() token.Pos { |  | ||||||
| 	return l.Token.Pos |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ListType represents an HCL list type |  | ||||||
| type ListType struct { |  | ||||||
| 	Lbrack token.Pos // position of "[" |  | ||||||
| 	Rbrack token.Pos // position of "]" |  | ||||||
| 	List   []Node    // the elements in lexical order |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *ListType) Pos() token.Pos { |  | ||||||
| 	return l.Lbrack |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *ListType) Add(node Node) { |  | ||||||
| 	l.List = append(l.List, node) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ObjectType represents an HCL Object Type |  | ||||||
| type ObjectType struct { |  | ||||||
| 	Lbrace token.Pos   // position of "{" |  | ||||||
| 	Rbrace token.Pos   // position of "}" |  | ||||||
| 	List   *ObjectList // the nodes in lexical order |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (o *ObjectType) Pos() token.Pos { |  | ||||||
| 	return o.Lbrace |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Comment node represents a single //, # style or /* style comment |  | ||||||
| type Comment struct { |  | ||||||
| 	Start token.Pos // position of / or # |  | ||||||
| 	Text  string |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (c *Comment) Pos() token.Pos { |  | ||||||
| 	return c.Start |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // CommentGroup node represents a sequence of comments with no other tokens and |  | ||||||
| // no empty lines between them. |  | ||||||
| type CommentGroup struct { |  | ||||||
| 	List []*Comment // len(List) > 0 |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (c *CommentGroup) Pos() token.Pos { |  | ||||||
| 	return c.List[0].Pos() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| //------------------------------------------------------------------- |  | ||||||
| // GoStringer |  | ||||||
| //------------------------------------------------------------------- |  | ||||||
|  |  | ||||||
| func (o *ObjectKey) GoString() string  { return fmt.Sprintf("*%#v", *o) } |  | ||||||
| func (o *ObjectList) GoString() string { return fmt.Sprintf("*%#v", *o) } |  | ||||||
							
								
								
									
52  vendor/github.com/hashicorp/hcl/hcl/ast/walk.go  generated  vendored
| @@ -1,52 +0,0 @@ |
| package ast |  | ||||||
|  |  | ||||||
| import "fmt" |  | ||||||
|  |  | ||||||
| // WalkFunc describes a function to be called for each node during a Walk. The |  | ||||||
| // returned node can be used to rewrite the AST. Walking stops if the returned |  | ||||||
| // bool is false. |  | ||||||
| type WalkFunc func(Node) (Node, bool) |  | ||||||
|  |  | ||||||
| // Walk traverses an AST in depth-first order: It starts by calling fn(node); |  | ||||||
| // node must not be nil. If fn returns true, Walk invokes fn recursively for |  | ||||||
| // each of the non-nil children of node, followed by a call of fn(nil). The |  | ||||||
| // node returned by fn can be used to rewrite the node passed to fn. |  | ||||||
| func Walk(node Node, fn WalkFunc) Node { |  | ||||||
| 	rewritten, ok := fn(node) |  | ||||||
| 	if !ok { |  | ||||||
| 		return rewritten |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	switch n := node.(type) { |  | ||||||
| 	case *File: |  | ||||||
| 		n.Node = Walk(n.Node, fn) |  | ||||||
| 	case *ObjectList: |  | ||||||
| 		for i, item := range n.Items { |  | ||||||
| 			n.Items[i] = Walk(item, fn).(*ObjectItem) |  | ||||||
| 		} |  | ||||||
| 	case *ObjectKey: |  | ||||||
| 		// nothing to do |  | ||||||
| 	case *ObjectItem: |  | ||||||
| 		for i, k := range n.Keys { |  | ||||||
| 			n.Keys[i] = Walk(k, fn).(*ObjectKey) |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if n.Val != nil { |  | ||||||
| 			n.Val = Walk(n.Val, fn) |  | ||||||
| 		} |  | ||||||
| 	case *LiteralType: |  | ||||||
| 		// nothing to do |  | ||||||
| 	case *ListType: |  | ||||||
| 		for i, l := range n.List { |  | ||||||
| 			n.List[i] = Walk(l, fn) |  | ||||||
| 		} |  | ||||||
| 	case *ObjectType: |  | ||||||
| 		n.List = Walk(n.List, fn).(*ObjectList) |  | ||||||
| 	default: |  | ||||||
| 		// should we panic here? |  | ||||||
| 		fmt.Printf("unknown type: %T\n", n) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	fn(nil) |  | ||||||
| 	return rewritten |  | ||||||
| } |  | ||||||
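A short sketch of Walk in use: the WalkFunc returns the (possibly rewritten) node plus a bool that says whether to keep descending, and it is also invoked once with nil after each node's children.

package main

import (
	"fmt"

	"github.com/hashicorp/hcl/hcl/ast"
	"github.com/hashicorp/hcl/hcl/parser"
)

func main() {
	f, err := parser.Parse([]byte("a = 1\nb = \"two\"\n"))
	if err != nil {
		panic(err)
	}

	literals := 0
	ast.Walk(f.Node, func(n ast.Node) (ast.Node, bool) {
		if _, ok := n.(*ast.LiteralType); ok {
			literals++
		}
		return n, true // keep the node unchanged and keep walking
	})
	fmt.Println(literals) // 2
}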
							
								
								
									
17  vendor/github.com/hashicorp/hcl/hcl/parser/error.go  generated  vendored
| @@ -1,17 +0,0 @@ |
| package parser |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"fmt" |  | ||||||
|  |  | ||||||
| 	"github.com/hashicorp/hcl/hcl/token" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| // PosError is a parse error that contains a position. |  | ||||||
| type PosError struct { |  | ||||||
| 	Pos token.Pos |  | ||||||
| 	Err error |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (e *PosError) Error() string { |  | ||||||
| 	return fmt.Sprintf("At %s: %s", e.Pos, e.Err) |  | ||||||
| } |  | ||||||
							
								
								
									
532  vendor/github.com/hashicorp/hcl/hcl/parser/parser.go  generated  vendored
| @@ -1,532 +0,0 @@ |
| // Package parser implements a parser for HCL (HashiCorp Configuration |  | ||||||
| // Language) |  | ||||||
| package parser |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"bytes" |  | ||||||
| 	"errors" |  | ||||||
| 	"fmt" |  | ||||||
| 	"strings" |  | ||||||
|  |  | ||||||
| 	"github.com/hashicorp/hcl/hcl/ast" |  | ||||||
| 	"github.com/hashicorp/hcl/hcl/scanner" |  | ||||||
| 	"github.com/hashicorp/hcl/hcl/token" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| type Parser struct { |  | ||||||
| 	sc *scanner.Scanner |  | ||||||
|  |  | ||||||
| 	// Last read token |  | ||||||
| 	tok       token.Token |  | ||||||
| 	commaPrev token.Token |  | ||||||
|  |  | ||||||
| 	comments    []*ast.CommentGroup |  | ||||||
| 	leadComment *ast.CommentGroup // last lead comment |  | ||||||
| 	lineComment *ast.CommentGroup // last line comment |  | ||||||
|  |  | ||||||
| 	enableTrace bool |  | ||||||
| 	indent      int |  | ||||||
| 	n           int // buffer size (max = 1) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func newParser(src []byte) *Parser { |  | ||||||
| 	return &Parser{ |  | ||||||
| 		sc: scanner.New(src), |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Parse parses the given source and returns the abstract syntax tree. |  | ||||||
| func Parse(src []byte) (*ast.File, error) { |  | ||||||
| 	// normalize all line endings |  | ||||||
| 	// since the scanner and output only work with "\n" line endings, we may |  | ||||||
| 	// end up with dangling "\r" characters in the parsed data. |  | ||||||
| 	src = bytes.Replace(src, []byte("\r\n"), []byte("\n"), -1) |  | ||||||
|  |  | ||||||
| 	p := newParser(src) |  | ||||||
| 	return p.Parse() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| var errEofToken = errors.New("EOF token found") |  | ||||||
|  |  | ||||||
| // Parse parses the source and returns the abstract syntax tree. |  | ||||||
| func (p *Parser) Parse() (*ast.File, error) { |  | ||||||
| 	f := &ast.File{} |  | ||||||
| 	var err, scerr error |  | ||||||
| 	p.sc.Error = func(pos token.Pos, msg string) { |  | ||||||
| 		scerr = &PosError{Pos: pos, Err: errors.New(msg)} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	f.Node, err = p.objectList(false) |  | ||||||
| 	if scerr != nil { |  | ||||||
| 		return nil, scerr |  | ||||||
| 	} |  | ||||||
| 	if err != nil { |  | ||||||
| 		return nil, err |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	f.Comments = p.comments |  | ||||||
| 	return f, nil |  | ||||||
| } |  | ||||||
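A quick sketch of the entry point above; parse failures come back as positioned errors rather than panics:

package main

import (
	"fmt"

	"github.com/hashicorp/hcl/hcl/parser"
)

func main() {
	// Two keys with no '=' or '{' is rejected by objectItem below.
	if _, err := parser.Parse([]byte("foo bar")); err != nil {
		// a *PosError, e.g. key 'foo bar' expected start of object ('{') or assignment ('=')
		fmt.Println(err)
	}
}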
|  |  | ||||||
| // objectList parses a list of items within an object (generally k/v pairs). |  | ||||||
| // The parameter "obj" tells us whether we are within an object (braces: |  | ||||||
| // '{', '}') or just at the top level. If we're within an object, we end |  | ||||||
| // at an RBRACE. |  | ||||||
| func (p *Parser) objectList(obj bool) (*ast.ObjectList, error) { |  | ||||||
| 	defer un(trace(p, "ParseObjectList")) |  | ||||||
| 	node := &ast.ObjectList{} |  | ||||||
|  |  | ||||||
| 	for { |  | ||||||
| 		if obj { |  | ||||||
| 			tok := p.scan() |  | ||||||
| 			p.unscan() |  | ||||||
| 			if tok.Type == token.RBRACE { |  | ||||||
| 				break |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		n, err := p.objectItem() |  | ||||||
| 		if err == errEofToken { |  | ||||||
| 			break // we are finished |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// we don't return a nil node, because we might want to use the already |  | ||||||
| 		// collected items. |  | ||||||
| 		if err != nil { |  | ||||||
| 			return node, err |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		node.Add(n) |  | ||||||
|  |  | ||||||
| 		// object lists can be optionally comma-delimited e.g. when a list of maps |  | ||||||
| 		// is being expressed, so a comma is allowed here - it's simply consumed |  | ||||||
| 		tok := p.scan() |  | ||||||
| 		if tok.Type != token.COMMA { |  | ||||||
| 			p.unscan() |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| 	return node, nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (p *Parser) consumeComment() (comment *ast.Comment, endline int) { |  | ||||||
| 	endline = p.tok.Pos.Line |  | ||||||
|  |  | ||||||
| 	// count the endline if it's a multiline comment, i.e. starting with /* |  | ||||||
| 	if len(p.tok.Text) > 1 && p.tok.Text[1] == '*' { |  | ||||||
| 		// don't use range here - no need to decode Unicode code points |  | ||||||
| 		for i := 0; i < len(p.tok.Text); i++ { |  | ||||||
| 			if p.tok.Text[i] == '\n' { |  | ||||||
| 				endline++ |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	comment = &ast.Comment{Start: p.tok.Pos, Text: p.tok.Text} |  | ||||||
| 	p.tok = p.sc.Scan() |  | ||||||
| 	return |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (p *Parser) consumeCommentGroup(n int) (comments *ast.CommentGroup, endline int) { |  | ||||||
| 	var list []*ast.Comment |  | ||||||
| 	endline = p.tok.Pos.Line |  | ||||||
|  |  | ||||||
| 	for p.tok.Type == token.COMMENT && p.tok.Pos.Line <= endline+n { |  | ||||||
| 		var comment *ast.Comment |  | ||||||
| 		comment, endline = p.consumeComment() |  | ||||||
| 		list = append(list, comment) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// add comment group to the comments list |  | ||||||
| 	comments = &ast.CommentGroup{List: list} |  | ||||||
| 	p.comments = append(p.comments, comments) |  | ||||||
|  |  | ||||||
| 	return |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // objectItem parses a single object item |  | ||||||
| func (p *Parser) objectItem() (*ast.ObjectItem, error) { |  | ||||||
| 	defer un(trace(p, "ParseObjectItem")) |  | ||||||
|  |  | ||||||
| 	keys, err := p.objectKey() |  | ||||||
| 	if len(keys) > 0 && err == errEofToken { |  | ||||||
| 		// We ignore eof token here since it is an error if we didn't |  | ||||||
| 		// receive a value (but we did receive a key) for the item. |  | ||||||
| 		err = nil |  | ||||||
| 	} |  | ||||||
| 	if len(keys) > 0 && err != nil && p.tok.Type == token.RBRACE { |  | ||||||
| 		// This is a strange boolean statement, but what it means is: |  | ||||||
| 		// We have keys with no value, and we're likely in an object |  | ||||||
| 		// (since RBrace ends an object). For this, we set err to nil so |  | ||||||
| 		// we continue and get the error below of having the wrong value |  | ||||||
| 		// type. |  | ||||||
| 		err = nil |  | ||||||
|  |  | ||||||
| 		// Reset the token type so we don't think it completed fine. See |  | ||||||
| 		// objectType which uses p.tok.Type to check if we're done with |  | ||||||
| 		// the object. |  | ||||||
| 		p.tok.Type = token.EOF |  | ||||||
| 	} |  | ||||||
| 	if err != nil { |  | ||||||
| 		return nil, err |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	o := &ast.ObjectItem{ |  | ||||||
| 		Keys: keys, |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	if p.leadComment != nil { |  | ||||||
| 		o.LeadComment = p.leadComment |  | ||||||
| 		p.leadComment = nil |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	switch p.tok.Type { |  | ||||||
| 	case token.ASSIGN: |  | ||||||
| 		o.Assign = p.tok.Pos |  | ||||||
| 		o.Val, err = p.object() |  | ||||||
| 		if err != nil { |  | ||||||
| 			return nil, err |  | ||||||
| 		} |  | ||||||
| 	case token.LBRACE: |  | ||||||
| 		o.Val, err = p.objectType() |  | ||||||
| 		if err != nil { |  | ||||||
| 			return nil, err |  | ||||||
| 		} |  | ||||||
| 	default: |  | ||||||
| 		keyStr := make([]string, 0, len(keys)) |  | ||||||
| 		for _, k := range keys { |  | ||||||
| 			keyStr = append(keyStr, k.Token.Text) |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		return nil, &PosError{ |  | ||||||
| 			Pos: p.tok.Pos, |  | ||||||
| 			Err: fmt.Errorf( |  | ||||||
| 				"key '%s' expected start of object ('{') or assignment ('=')", |  | ||||||
| 				strings.Join(keyStr, " ")), |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// key=#comment |  | ||||||
| 	// val |  | ||||||
| 	if p.lineComment != nil { |  | ||||||
| 		o.LineComment, p.lineComment = p.lineComment, nil |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// do a look-ahead for line comment |  | ||||||
| 	p.scan() |  | ||||||
| 	if len(keys) > 0 && o.Val.Pos().Line == keys[0].Pos().Line && p.lineComment != nil { |  | ||||||
| 		o.LineComment = p.lineComment |  | ||||||
| 		p.lineComment = nil |  | ||||||
| 	} |  | ||||||
| 	p.unscan() |  | ||||||
| 	return o, nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // objectKey parses an object key and returns an ObjectKey AST |  | ||||||
| func (p *Parser) objectKey() ([]*ast.ObjectKey, error) { |  | ||||||
| 	keyCount := 0 |  | ||||||
| 	keys := make([]*ast.ObjectKey, 0) |  | ||||||
|  |  | ||||||
| 	for { |  | ||||||
| 		tok := p.scan() |  | ||||||
| 		switch tok.Type { |  | ||||||
| 		case token.EOF: |  | ||||||
| 			// It is very important to also return the keys here as well as |  | ||||||
| 			// the error. This is because we need to be able to tell if we |  | ||||||
| 			// did parse keys prior to finding the EOF, or if we just found |  | ||||||
| 			// a bare EOF. |  | ||||||
| 			return keys, errEofToken |  | ||||||
| 		case token.ASSIGN: |  | ||||||
| 			// assignment or object only, but not nested objects. this is not |  | ||||||
| 			// allowed: `foo bar = {}` |  | ||||||
| 			if keyCount > 1 { |  | ||||||
| 				return nil, &PosError{ |  | ||||||
| 					Pos: p.tok.Pos, |  | ||||||
| 					Err: fmt.Errorf("nested object expected: LBRACE got: %s", p.tok.Type), |  | ||||||
| 				} |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			if keyCount == 0 { |  | ||||||
| 				return nil, &PosError{ |  | ||||||
| 					Pos: p.tok.Pos, |  | ||||||
| 					Err: errors.New("no object keys found!"), |  | ||||||
| 				} |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			return keys, nil |  | ||||||
| 		case token.LBRACE: |  | ||||||
| 			var err error |  | ||||||
|  |  | ||||||
| 			// If we have no keys, then it is a syntax error. i.e. {{}} is not |  | ||||||
| 			// allowed. |  | ||||||
| 			if len(keys) == 0 { |  | ||||||
| 				err = &PosError{ |  | ||||||
| 					Pos: p.tok.Pos, |  | ||||||
| 					Err: fmt.Errorf("expected: IDENT | STRING got: %s", p.tok.Type), |  | ||||||
| 				} |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			// object |  | ||||||
| 			return keys, err |  | ||||||
| 		case token.IDENT, token.STRING: |  | ||||||
| 			keyCount++ |  | ||||||
| 			keys = append(keys, &ast.ObjectKey{Token: p.tok}) |  | ||||||
| 		case token.ILLEGAL: |  | ||||||
| 			return keys, &PosError{ |  | ||||||
| 				Pos: p.tok.Pos, |  | ||||||
| 				Err: fmt.Errorf("illegal character"), |  | ||||||
| 			} |  | ||||||
| 		default: |  | ||||||
| 			return keys, &PosError{ |  | ||||||
| 				Pos: p.tok.Pos, |  | ||||||
| 				Err: fmt.Errorf("expected: IDENT | STRING | ASSIGN | LBRACE got: %s", p.tok.Type), |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // object parses any type of object, such as number, bool, string, object or |  | ||||||
| // list. |  | ||||||
| func (p *Parser) object() (ast.Node, error) { |  | ||||||
| 	defer un(trace(p, "ParseType")) |  | ||||||
| 	tok := p.scan() |  | ||||||
|  |  | ||||||
| 	switch tok.Type { |  | ||||||
| 	case token.NUMBER, token.FLOAT, token.BOOL, token.STRING, token.HEREDOC: |  | ||||||
| 		return p.literalType() |  | ||||||
| 	case token.LBRACE: |  | ||||||
| 		return p.objectType() |  | ||||||
| 	case token.LBRACK: |  | ||||||
| 		return p.listType() |  | ||||||
| 	case token.COMMENT: |  | ||||||
| 		// implement comment |  | ||||||
| 	case token.EOF: |  | ||||||
| 		return nil, errEofToken |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return nil, &PosError{ |  | ||||||
| 		Pos: tok.Pos, |  | ||||||
| 		Err: fmt.Errorf("Unknown token: %+v", tok), |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // objectType parses an object type and returns an ObjectType AST |  | ||||||
| func (p *Parser) objectType() (*ast.ObjectType, error) { |  | ||||||
| 	defer un(trace(p, "ParseObjectType")) |  | ||||||
|  |  | ||||||
| 	// we assume that the currently scanned token is a LBRACE |  | ||||||
| 	o := &ast.ObjectType{ |  | ||||||
| 		Lbrace: p.tok.Pos, |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	l, err := p.objectList(true) |  | ||||||
|  |  | ||||||
| 	// if we hit RBRACE, we are good to go (it means we parsed all items); if it's |  | ||||||
| 	// not an RBRACE, it's a syntax error and we just return it. |  | ||||||
| 	if err != nil && p.tok.Type != token.RBRACE { |  | ||||||
| 		return nil, err |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// No error, scan and expect the ending to be a brace |  | ||||||
| 	if tok := p.scan(); tok.Type != token.RBRACE { |  | ||||||
| 		return nil, &PosError{ |  | ||||||
| 			Pos: tok.Pos, |  | ||||||
| 			Err: fmt.Errorf("object expected closing RBRACE got: %s", tok.Type), |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	o.List = l |  | ||||||
| 	o.Rbrace = p.tok.Pos // advanced via parseObjectList |  | ||||||
| 	return o, nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // listType parses a list type and returns a ListType AST |  | ||||||
| func (p *Parser) listType() (*ast.ListType, error) { |  | ||||||
| 	defer un(trace(p, "ParseListType")) |  | ||||||
|  |  | ||||||
| 	// we assume that the currently scanned token is a LBRACK |  | ||||||
| 	l := &ast.ListType{ |  | ||||||
| 		Lbrack: p.tok.Pos, |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	needComma := false |  | ||||||
| 	for { |  | ||||||
| 		tok := p.scan() |  | ||||||
| 		if needComma { |  | ||||||
| 			switch tok.Type { |  | ||||||
| 			case token.COMMA, token.RBRACK: |  | ||||||
| 			default: |  | ||||||
| 				return nil, &PosError{ |  | ||||||
| 					Pos: tok.Pos, |  | ||||||
| 					Err: fmt.Errorf( |  | ||||||
| 						"error parsing list, expected comma or list end, got: %s", |  | ||||||
| 						tok.Type), |  | ||||||
| 				} |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
| 		switch tok.Type { |  | ||||||
| 		case token.BOOL, token.NUMBER, token.FLOAT, token.STRING, token.HEREDOC: |  | ||||||
| 			node, err := p.literalType() |  | ||||||
| 			if err != nil { |  | ||||||
| 				return nil, err |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			// If there is a lead comment, apply it |  | ||||||
| 			if p.leadComment != nil { |  | ||||||
| 				node.LeadComment = p.leadComment |  | ||||||
| 				p.leadComment = nil |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			l.Add(node) |  | ||||||
| 			needComma = true |  | ||||||
| 		case token.COMMA: |  | ||||||
| 			// get next list item or we are at the end |  | ||||||
| 			// do a look-ahead for line comment |  | ||||||
| 			p.scan() |  | ||||||
| 			if p.lineComment != nil && len(l.List) > 0 { |  | ||||||
| 				lit, ok := l.List[len(l.List)-1].(*ast.LiteralType) |  | ||||||
| 				if ok { |  | ||||||
| 					lit.LineComment = p.lineComment |  | ||||||
| 					l.List[len(l.List)-1] = lit |  | ||||||
| 					p.lineComment = nil |  | ||||||
| 				} |  | ||||||
| 			} |  | ||||||
| 			p.unscan() |  | ||||||
|  |  | ||||||
| 			needComma = false |  | ||||||
| 			continue |  | ||||||
| 		case token.LBRACE: |  | ||||||
| 			// Looks like a nested object, so parse it out |  | ||||||
| 			node, err := p.objectType() |  | ||||||
| 			if err != nil { |  | ||||||
| 				return nil, &PosError{ |  | ||||||
| 					Pos: tok.Pos, |  | ||||||
| 					Err: fmt.Errorf( |  | ||||||
| 						"error while trying to parse object within list: %s", err), |  | ||||||
| 				} |  | ||||||
| 			} |  | ||||||
| 			l.Add(node) |  | ||||||
| 			needComma = true |  | ||||||
| 		case token.LBRACK: |  | ||||||
| 			node, err := p.listType() |  | ||||||
| 			if err != nil { |  | ||||||
| 				return nil, &PosError{ |  | ||||||
| 					Pos: tok.Pos, |  | ||||||
| 					Err: fmt.Errorf( |  | ||||||
| 						"error while trying to parse list within list: %s", err), |  | ||||||
| 				} |  | ||||||
| 			} |  | ||||||
| 			l.Add(node) |  | ||||||
| 		case token.RBRACK: |  | ||||||
| 			// finished |  | ||||||
| 			l.Rbrack = p.tok.Pos |  | ||||||
| 			return l, nil |  | ||||||
| 		default: |  | ||||||
| 			return nil, &PosError{ |  | ||||||
| 				Pos: tok.Pos, |  | ||||||
| 				Err: fmt.Errorf("unexpected token while parsing list: %s", tok.Type), |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // literalType parses a literal type and returns a LiteralType AST |  | ||||||
| func (p *Parser) literalType() (*ast.LiteralType, error) { |  | ||||||
| 	defer un(trace(p, "ParseLiteral")) |  | ||||||
|  |  | ||||||
| 	return &ast.LiteralType{ |  | ||||||
| 		Token: p.tok, |  | ||||||
| 	}, nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // scan returns the next token from the underlying scanner. If a token has |  | ||||||
| // been unscanned then read that instead. In the process, it collects any |  | ||||||
| // comment groups encountered, and remembers the last lead and line comments. |  | ||||||
| func (p *Parser) scan() token.Token { |  | ||||||
| 	// If we have a token on the buffer, then return it. |  | ||||||
| 	if p.n != 0 { |  | ||||||
| 		p.n = 0 |  | ||||||
| 		return p.tok |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// Otherwise read the next token from the scanner and Save it to the buffer |  | ||||||
| 	// in case we unscan later. |  | ||||||
| 	prev := p.tok |  | ||||||
| 	p.tok = p.sc.Scan() |  | ||||||
|  |  | ||||||
| 	if p.tok.Type == token.COMMENT { |  | ||||||
| 		var comment *ast.CommentGroup |  | ||||||
| 		var endline int |  | ||||||
|  |  | ||||||
| 		// fmt.Printf("p.tok.Pos.Line = %+v prev: %d endline %d \n", |  | ||||||
| 		// p.tok.Pos.Line, prev.Pos.Line, endline) |  | ||||||
| 		if p.tok.Pos.Line == prev.Pos.Line { |  | ||||||
| 			// The comment is on same line as the previous token; it |  | ||||||
| 			// cannot be a lead comment but may be a line comment. |  | ||||||
| 			comment, endline = p.consumeCommentGroup(0) |  | ||||||
| 			if p.tok.Pos.Line != endline { |  | ||||||
| 				// The next token is on a different line, thus |  | ||||||
| 				// the last comment group is a line comment. |  | ||||||
| 				p.lineComment = comment |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// consume successor comments, if any |  | ||||||
| 		endline = -1 |  | ||||||
| 		for p.tok.Type == token.COMMENT { |  | ||||||
| 			comment, endline = p.consumeCommentGroup(1) |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if endline+1 == p.tok.Pos.Line && p.tok.Type != token.RBRACE { |  | ||||||
| 			switch p.tok.Type { |  | ||||||
| 			case token.RBRACE, token.RBRACK: |  | ||||||
| 				// Do not count for these cases |  | ||||||
| 			default: |  | ||||||
| 				// The next token is following on the line immediately after the |  | ||||||
| 				// comment group, thus the last comment group is a lead comment. |  | ||||||
| 				p.leadComment = comment |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return p.tok |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // unscan pushes the previously read token back onto the buffer. |  | ||||||
| func (p *Parser) unscan() { |  | ||||||
| 	p.n = 1 |  | ||||||
| } |  | ||||||
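The scan/unscan pair above gives the parser a one-token lookahead: unscan sets p.n = 1, and the next call to scan returns the buffered p.tok instead of pulling a new token from the scanner. A minimal, self-contained sketch of the same pattern follows; the names (lookaheadScanner and its fields) are illustrative only and are not part of the hcl packages.

package main

import "fmt"

// lookaheadScanner sketches the one-token buffer used by scan/unscan above.
type lookaheadScanner struct {
	src      []string // pre-tokenized input, standing in for the real scanner
	pos      int
	tok      string // last token handed out
	buffered bool   // true if tok should be replayed by the next scan
}

func (l *lookaheadScanner) scan() string {
	if l.buffered {
		// replay the token that was pushed back by unscan
		l.buffered = false
		return l.tok
	}
	if l.pos < len(l.src) {
		l.tok = l.src[l.pos]
		l.pos++
	} else {
		l.tok = "EOF"
	}
	return l.tok
}

// unscan pushes the previously read token back onto the buffer.
func (l *lookaheadScanner) unscan() { l.buffered = true }

func main() {
	l := &lookaheadScanner{src: []string{"key", "=", "value"}}
	fmt.Println(l.scan()) // key
	l.unscan()
	fmt.Println(l.scan()) // key again, replayed from the buffer
	fmt.Println(l.scan()) // =
}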
|  |  | ||||||
| // ---------------------------------------------------------------------------- |  | ||||||
| // Parsing support |  | ||||||
|  |  | ||||||
| func (p *Parser) printTrace(a ...interface{}) { |  | ||||||
| 	if !p.enableTrace { |  | ||||||
| 		return |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	const dots = ". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . " |  | ||||||
| 	const n = len(dots) |  | ||||||
| 	fmt.Printf("%5d:%3d: ", p.tok.Pos.Line, p.tok.Pos.Column) |  | ||||||
|  |  | ||||||
| 	i := 2 * p.indent |  | ||||||
| 	for i > n { |  | ||||||
| 		fmt.Print(dots) |  | ||||||
| 		i -= n |  | ||||||
| 	} |  | ||||||
| 	// i <= n |  | ||||||
| 	fmt.Print(dots[0:i]) |  | ||||||
| 	fmt.Println(a...) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func trace(p *Parser, msg string) *Parser { |  | ||||||
| 	p.printTrace(msg, "(") |  | ||||||
| 	p.indent++ |  | ||||||
| 	return p |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Usage pattern: defer un(trace(p, "...")) |  | ||||||
| func un(p *Parser) { |  | ||||||
| 	p.indent-- |  | ||||||
| 	p.printTrace(")") |  | ||||||
| } |  | ||||||
							
								
								
									
								789	vendor/github.com/hashicorp/hcl/hcl/printer/nodes.go	(generated, vendored)
							| @@ -1,789 +0,0 @@ | |||||||
| package printer |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"bytes" |  | ||||||
| 	"fmt" |  | ||||||
| 	"sort" |  | ||||||
|  |  | ||||||
| 	"github.com/hashicorp/hcl/hcl/ast" |  | ||||||
| 	"github.com/hashicorp/hcl/hcl/token" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| const ( |  | ||||||
| 	blank    = byte(' ') |  | ||||||
| 	newline  = byte('\n') |  | ||||||
| 	tab      = byte('\t') |  | ||||||
| 	infinity = 1 << 30 // offset or line |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| var ( |  | ||||||
| 	unindent = []byte("\uE123") // in the private use space |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| type printer struct { |  | ||||||
| 	cfg  Config |  | ||||||
| 	prev token.Pos |  | ||||||
|  |  | ||||||
| 	comments           []*ast.CommentGroup // may be nil, contains all comments |  | ||||||
| 	standaloneComments []*ast.CommentGroup // contains all standalone comments (not assigned to any node) |  | ||||||
|  |  | ||||||
| 	enableTrace bool |  | ||||||
| 	indentTrace int |  | ||||||
| } |  | ||||||
|  |  | ||||||
| type ByPosition []*ast.CommentGroup |  | ||||||
|  |  | ||||||
| func (b ByPosition) Len() int           { return len(b) } |  | ||||||
| func (b ByPosition) Swap(i, j int)      { b[i], b[j] = b[j], b[i] } |  | ||||||
| func (b ByPosition) Less(i, j int) bool { return b[i].Pos().Before(b[j].Pos()) } |  | ||||||
|  |  | ||||||
| // collectComments collects all standalone comments, that is, comments which |  | ||||||
| // are not attached to any node as lead or line comments. |  | ||||||
| func (p *printer) collectComments(node ast.Node) { |  | ||||||
| 	// first collect all comments. This is already stored in |  | ||||||
| 	// ast.File.(comments) |  | ||||||
| 	ast.Walk(node, func(nn ast.Node) (ast.Node, bool) { |  | ||||||
| 		switch t := nn.(type) { |  | ||||||
| 		case *ast.File: |  | ||||||
| 			p.comments = t.Comments |  | ||||||
| 			return nn, false |  | ||||||
| 		} |  | ||||||
| 		return nn, true |  | ||||||
| 	}) |  | ||||||
|  |  | ||||||
| 	standaloneComments := make(map[token.Pos]*ast.CommentGroup, 0) |  | ||||||
| 	for _, c := range p.comments { |  | ||||||
| 		standaloneComments[c.Pos()] = c |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// next remove all lead and line comments from the overall comment map. |  | ||||||
| 	// This will give us comments which are standalone, comments which are not |  | ||||||
| 	// assigned to any kind of node. |  | ||||||
| 	ast.Walk(node, func(nn ast.Node) (ast.Node, bool) { |  | ||||||
| 		switch t := nn.(type) { |  | ||||||
| 		case *ast.LiteralType: |  | ||||||
| 			if t.LeadComment != nil { |  | ||||||
| 				for _, comment := range t.LeadComment.List { |  | ||||||
| 					if _, ok := standaloneComments[comment.Pos()]; ok { |  | ||||||
| 						delete(standaloneComments, comment.Pos()) |  | ||||||
| 					} |  | ||||||
| 				} |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			if t.LineComment != nil { |  | ||||||
| 				for _, comment := range t.LineComment.List { |  | ||||||
| 					if _, ok := standaloneComments[comment.Pos()]; ok { |  | ||||||
| 						delete(standaloneComments, comment.Pos()) |  | ||||||
| 					} |  | ||||||
| 				} |  | ||||||
| 			} |  | ||||||
| 		case *ast.ObjectItem: |  | ||||||
| 			if t.LeadComment != nil { |  | ||||||
| 				for _, comment := range t.LeadComment.List { |  | ||||||
| 					if _, ok := standaloneComments[comment.Pos()]; ok { |  | ||||||
| 						delete(standaloneComments, comment.Pos()) |  | ||||||
| 					} |  | ||||||
| 				} |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			if t.LineComment != nil { |  | ||||||
| 				for _, comment := range t.LineComment.List { |  | ||||||
| 					if _, ok := standaloneComments[comment.Pos()]; ok { |  | ||||||
| 						delete(standaloneComments, comment.Pos()) |  | ||||||
| 					} |  | ||||||
| 				} |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		return nn, true |  | ||||||
| 	}) |  | ||||||
|  |  | ||||||
| 	for _, c := range standaloneComments { |  | ||||||
| 		p.standaloneComments = append(p.standaloneComments, c) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	sort.Sort(ByPosition(p.standaloneComments)) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // output creates printable HCL output for the given node and returns it. |  | ||||||
| func (p *printer) output(n interface{}) []byte { |  | ||||||
| 	var buf bytes.Buffer |  | ||||||
|  |  | ||||||
| 	switch t := n.(type) { |  | ||||||
| 	case *ast.File: |  | ||||||
| 		// File doesn't trace so we add the tracing here |  | ||||||
| 		defer un(trace(p, "File")) |  | ||||||
| 		return p.output(t.Node) |  | ||||||
| 	case *ast.ObjectList: |  | ||||||
| 		defer un(trace(p, "ObjectList")) |  | ||||||
|  |  | ||||||
| 		var index int |  | ||||||
| 		for { |  | ||||||
| 			// Determine the location of the next actual non-comment |  | ||||||
| 			// item. If we're at the end, the next item is at "infinity" |  | ||||||
| 			var nextItem token.Pos |  | ||||||
| 			if index != len(t.Items) { |  | ||||||
| 				nextItem = t.Items[index].Pos() |  | ||||||
| 			} else { |  | ||||||
| 				nextItem = token.Pos{Offset: infinity, Line: infinity} |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			// Go through the standalone comments in the file and print out |  | ||||||
| 			// the comments that belong before this object item. |  | ||||||
| 			for _, c := range p.standaloneComments { |  | ||||||
| 				// Go through all the comments in the group. The group |  | ||||||
| 				// should be printed together, not separated by double newlines. |  | ||||||
| 				printed := false |  | ||||||
| 				newlinePrinted := false |  | ||||||
| 				for _, comment := range c.List { |  | ||||||
| 					// We only care about comments after the previous item |  | ||||||
| 					// we've printed so that comments are printed in the |  | ||||||
| 					// correct locations (between two objects for example). |  | ||||||
| 					// And before the next item. |  | ||||||
| 					if comment.Pos().After(p.prev) && comment.Pos().Before(nextItem) { |  | ||||||
| 						// If we have reached the end of the items, add newlines so we |  | ||||||
| 						// can print the comment. We skip this when prev is invalid, |  | ||||||
| 						// i.e. at the beginning of the file, since the first comment |  | ||||||
| 						// should stay on the first line. |  | ||||||
| 						if !newlinePrinted && p.prev.IsValid() && index == len(t.Items) { |  | ||||||
| 							buf.Write([]byte{newline, newline}) |  | ||||||
| 							newlinePrinted = true |  | ||||||
| 						} |  | ||||||
|  |  | ||||||
| 						// Write the actual comment. |  | ||||||
| 						buf.WriteString(comment.Text) |  | ||||||
| 						buf.WriteByte(newline) |  | ||||||
|  |  | ||||||
| 						// Set printed to true to note that we printed something |  | ||||||
| 						printed = true |  | ||||||
| 					} |  | ||||||
| 				} |  | ||||||
|  |  | ||||||
| 				// If we're not at the last item, write a new line so |  | ||||||
| 				// that there is a newline separating this comment from |  | ||||||
| 				// the next object. |  | ||||||
| 				if printed && index != len(t.Items) { |  | ||||||
| 					buf.WriteByte(newline) |  | ||||||
| 				} |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			if index == len(t.Items) { |  | ||||||
| 				break |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			buf.Write(p.output(t.Items[index])) |  | ||||||
| 			if index != len(t.Items)-1 { |  | ||||||
| 				// Always write a newline to separate us from the next item |  | ||||||
| 				buf.WriteByte(newline) |  | ||||||
|  |  | ||||||
| 				// Need to determine if we're going to separate the next item |  | ||||||
| 				// with a blank line. The logic here is simple, though there |  | ||||||
| 				// are a few conditions: |  | ||||||
| 				// |  | ||||||
| 				//   1. The next object is more than one line away anyways, |  | ||||||
| 				//      so we need an empty line. |  | ||||||
| 				// |  | ||||||
| 				//   2. The next object is not a "single line" object, so |  | ||||||
| 				//      we need an empty line. |  | ||||||
| 				// |  | ||||||
| 				//   3. This current object is not a single line object, |  | ||||||
| 				//      so we need an empty line. |  | ||||||
| 				current := t.Items[index] |  | ||||||
| 				next := t.Items[index+1] |  | ||||||
| 				if next.Pos().Line != t.Items[index].Pos().Line+1 || |  | ||||||
| 					!p.isSingleLineObject(next) || |  | ||||||
| 					!p.isSingleLineObject(current) { |  | ||||||
| 					buf.WriteByte(newline) |  | ||||||
| 				} |  | ||||||
| 			} |  | ||||||
| 			index++ |  | ||||||
| 		} |  | ||||||
| 	case *ast.ObjectKey: |  | ||||||
| 		buf.WriteString(t.Token.Text) |  | ||||||
| 	case *ast.ObjectItem: |  | ||||||
| 		p.prev = t.Pos() |  | ||||||
| 		buf.Write(p.objectItem(t)) |  | ||||||
| 	case *ast.LiteralType: |  | ||||||
| 		buf.Write(p.literalType(t)) |  | ||||||
| 	case *ast.ListType: |  | ||||||
| 		buf.Write(p.list(t)) |  | ||||||
| 	case *ast.ObjectType: |  | ||||||
| 		buf.Write(p.objectType(t)) |  | ||||||
| 	default: |  | ||||||
| 		fmt.Printf(" unknown type: %T\n", n) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return buf.Bytes() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (p *printer) literalType(lit *ast.LiteralType) []byte { |  | ||||||
| 	result := []byte(lit.Token.Text) |  | ||||||
| 	switch lit.Token.Type { |  | ||||||
| 	case token.HEREDOC: |  | ||||||
| 		// Clear the trailing newline from heredocs |  | ||||||
| 		if result[len(result)-1] == '\n' { |  | ||||||
| 			result = result[:len(result)-1] |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// Poison lines 2+ so that we don't indent them |  | ||||||
| 		result = p.heredocIndent(result) |  | ||||||
| 	case token.STRING: |  | ||||||
| 		// If this is a multiline string, poison lines 2+ so we don't |  | ||||||
| 		// indent them. |  | ||||||
| 		if bytes.IndexRune(result, '\n') >= 0 { |  | ||||||
| 			result = p.heredocIndent(result) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return result |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // objectItem returns the printable HCL form of an object item. An object item |  | ||||||
| // starts with one or multiple keys and has a value. The value might be of any |  | ||||||
| // type. |  | ||||||
| func (p *printer) objectItem(o *ast.ObjectItem) []byte { |  | ||||||
| 	defer un(trace(p, fmt.Sprintf("ObjectItem: %s", o.Keys[0].Token.Text))) |  | ||||||
| 	var buf bytes.Buffer |  | ||||||
|  |  | ||||||
| 	if o.LeadComment != nil { |  | ||||||
| 		for _, comment := range o.LeadComment.List { |  | ||||||
| 			buf.WriteString(comment.Text) |  | ||||||
| 			buf.WriteByte(newline) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// If key and val are on different lines, treat line comments like lead comments. |  | ||||||
| 	if o.LineComment != nil && o.Val.Pos().Line != o.Keys[0].Pos().Line { |  | ||||||
| 		for _, comment := range o.LineComment.List { |  | ||||||
| 			buf.WriteString(comment.Text) |  | ||||||
| 			buf.WriteByte(newline) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	for i, k := range o.Keys { |  | ||||||
| 		buf.WriteString(k.Token.Text) |  | ||||||
| 		buf.WriteByte(blank) |  | ||||||
|  |  | ||||||
| 		// reach end of key |  | ||||||
| 		if o.Assign.IsValid() && i == len(o.Keys)-1 && len(o.Keys) == 1 { |  | ||||||
| 			buf.WriteString("=") |  | ||||||
| 			buf.WriteByte(blank) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	buf.Write(p.output(o.Val)) |  | ||||||
|  |  | ||||||
| 	if o.LineComment != nil && o.Val.Pos().Line == o.Keys[0].Pos().Line { |  | ||||||
| 		buf.WriteByte(blank) |  | ||||||
| 		for _, comment := range o.LineComment.List { |  | ||||||
| 			buf.WriteString(comment.Text) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return buf.Bytes() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // objectType returns the printable HCL form of an object type. An object type |  | ||||||
| // begins with a brace and ends with a brace. |  | ||||||
| func (p *printer) objectType(o *ast.ObjectType) []byte { |  | ||||||
| 	defer un(trace(p, "ObjectType")) |  | ||||||
| 	var buf bytes.Buffer |  | ||||||
| 	buf.WriteString("{") |  | ||||||
|  |  | ||||||
| 	var index int |  | ||||||
| 	var nextItem token.Pos |  | ||||||
| 	var commented, newlinePrinted bool |  | ||||||
| 	for { |  | ||||||
| 		// Determine the location of the next actual non-comment |  | ||||||
| 		// item. If we're at the end, the next item is the closing brace |  | ||||||
| 		if index != len(o.List.Items) { |  | ||||||
| 			nextItem = o.List.Items[index].Pos() |  | ||||||
| 		} else { |  | ||||||
| 			nextItem = o.Rbrace |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// Go through the standalone comments in the file and print out |  | ||||||
| 		// the comments that belong before this object item. |  | ||||||
| 		for _, c := range p.standaloneComments { |  | ||||||
| 			printed := false |  | ||||||
| 			var lastCommentPos token.Pos |  | ||||||
| 			for _, comment := range c.List { |  | ||||||
| 				// We only care about comments after the previous item |  | ||||||
| 				// we've printed so that comments are printed in the |  | ||||||
| 				// correct locations (between two objects for example). |  | ||||||
| 				// And before the next item. |  | ||||||
| 				if comment.Pos().After(p.prev) && comment.Pos().Before(nextItem) { |  | ||||||
| 					// If there are standalone comments and the initial newline has not |  | ||||||
| 					// been printed yet, do it now. |  | ||||||
| 					if !newlinePrinted { |  | ||||||
| 						newlinePrinted = true |  | ||||||
| 						buf.WriteByte(newline) |  | ||||||
| 					} |  | ||||||
|  |  | ||||||
| 					// add newline if it's between other printed nodes |  | ||||||
| 					if index > 0 { |  | ||||||
| 						commented = true |  | ||||||
| 						buf.WriteByte(newline) |  | ||||||
| 					} |  | ||||||
|  |  | ||||||
| 					// Store this position |  | ||||||
| 					lastCommentPos = comment.Pos() |  | ||||||
|  |  | ||||||
| 					// output the comment itself |  | ||||||
| 					buf.Write(p.indent(p.heredocIndent([]byte(comment.Text)))) |  | ||||||
|  |  | ||||||
| 					// Set printed to true to note that we printed something |  | ||||||
| 					printed = true |  | ||||||
|  |  | ||||||
| 					/* |  | ||||||
| 						if index != len(o.List.Items) { |  | ||||||
| 							buf.WriteByte(newline) // do not print on the end |  | ||||||
| 						} |  | ||||||
| 					*/ |  | ||||||
| 				} |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			// Stuff to do if we had comments |  | ||||||
| 			if printed { |  | ||||||
| 				// Always write a newline |  | ||||||
| 				buf.WriteByte(newline) |  | ||||||
|  |  | ||||||
| 				// If there is another item in the object and our comment |  | ||||||
| 				// didn't hug it directly, then make sure there is a blank |  | ||||||
| 				// line separating them. |  | ||||||
| 				if nextItem != o.Rbrace && nextItem.Line != lastCommentPos.Line+1 { |  | ||||||
| 					buf.WriteByte(newline) |  | ||||||
| 				} |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if index == len(o.List.Items) { |  | ||||||
| 			p.prev = o.Rbrace |  | ||||||
| 			break |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// At this point we are sure that it's not a totally empty block: print |  | ||||||
| 		// the initial newline if it hasn't been printed yet by the previous |  | ||||||
| 		// block about standalone comments. |  | ||||||
| 		if !newlinePrinted { |  | ||||||
| 			buf.WriteByte(newline) |  | ||||||
| 			newlinePrinted = true |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// check if we have adjacent one-liner items. If so, we are going to align |  | ||||||
| 		// their comments. |  | ||||||
| 		var aligned []*ast.ObjectItem |  | ||||||
| 		for _, item := range o.List.Items[index:] { |  | ||||||
| 			// we don't group one line lists |  | ||||||
| 			if len(o.List.Items) == 1 { |  | ||||||
| 				break |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			// one line means a one-liner without any lead comment, |  | ||||||
| 			// two lines means a one-liner with a lead comment, |  | ||||||
| 			// anything longer is not a one-liner. |  | ||||||
| 			cur := lines(string(p.objectItem(item))) |  | ||||||
| 			if cur > 2 { |  | ||||||
| 				break |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			curPos := item.Pos() |  | ||||||
|  |  | ||||||
| 			nextPos := token.Pos{} |  | ||||||
| 			if index != len(o.List.Items)-1 { |  | ||||||
| 				nextPos = o.List.Items[index+1].Pos() |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			prevPos := token.Pos{} |  | ||||||
| 			if index != 0 { |  | ||||||
| 				prevPos = o.List.Items[index-1].Pos() |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			// fmt.Println("DEBUG ----------------") |  | ||||||
| 			// fmt.Printf("prev = %+v prevPos: %s\n", prev, prevPos) |  | ||||||
| 			// fmt.Printf("cur = %+v curPos: %s\n", cur, curPos) |  | ||||||
| 			// fmt.Printf("next = %+v nextPos: %s\n", next, nextPos) |  | ||||||
|  |  | ||||||
| 			if curPos.Line+1 == nextPos.Line { |  | ||||||
| 				aligned = append(aligned, item) |  | ||||||
| 				index++ |  | ||||||
| 				continue |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			if curPos.Line-1 == prevPos.Line { |  | ||||||
| 				aligned = append(aligned, item) |  | ||||||
| 				index++ |  | ||||||
|  |  | ||||||
| 				// finish if we have a new line or comment next. This happens |  | ||||||
| 				// if the next item is not adjacent |  | ||||||
| 				if curPos.Line+1 != nextPos.Line { |  | ||||||
| 					break |  | ||||||
| 				} |  | ||||||
| 				continue |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			break |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// put newlines if the items are between other non-aligned items. |  | ||||||
| 		// newlines are also added if there is a standalone comment already, so |  | ||||||
| 		// check it too |  | ||||||
| 		if !commented && index != len(aligned) { |  | ||||||
| 			buf.WriteByte(newline) |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if len(aligned) >= 1 { |  | ||||||
| 			p.prev = aligned[len(aligned)-1].Pos() |  | ||||||
|  |  | ||||||
| 			items := p.alignedItems(aligned) |  | ||||||
| 			buf.Write(p.indent(items)) |  | ||||||
| 		} else { |  | ||||||
| 			p.prev = o.List.Items[index].Pos() |  | ||||||
|  |  | ||||||
| 			buf.Write(p.indent(p.objectItem(o.List.Items[index]))) |  | ||||||
| 			index++ |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		buf.WriteByte(newline) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	buf.WriteString("}") |  | ||||||
| 	return buf.Bytes() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (p *printer) alignedItems(items []*ast.ObjectItem) []byte { |  | ||||||
| 	var buf bytes.Buffer |  | ||||||
|  |  | ||||||
| 	// find the longest key and value length, needed for alignment |  | ||||||
| 	var longestKeyLen int // longest key length |  | ||||||
| 	var longestValLen int // longest value length |  | ||||||
| 	for _, item := range items { |  | ||||||
| 		key := len(item.Keys[0].Token.Text) |  | ||||||
| 		val := len(p.output(item.Val)) |  | ||||||
|  |  | ||||||
| 		if key > longestKeyLen { |  | ||||||
| 			longestKeyLen = key |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if val > longestValLen { |  | ||||||
| 			longestValLen = val |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	for i, item := range items { |  | ||||||
| 		if item.LeadComment != nil { |  | ||||||
| 			for _, comment := range item.LeadComment.List { |  | ||||||
| 				buf.WriteString(comment.Text) |  | ||||||
| 				buf.WriteByte(newline) |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		for i, k := range item.Keys { |  | ||||||
| 			keyLen := len(k.Token.Text) |  | ||||||
| 			buf.WriteString(k.Token.Text) |  | ||||||
| 			for i := 0; i < longestKeyLen-keyLen+1; i++ { |  | ||||||
| 				buf.WriteByte(blank) |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			// reach end of key |  | ||||||
| 			if i == len(item.Keys)-1 && len(item.Keys) == 1 { |  | ||||||
| 				buf.WriteString("=") |  | ||||||
| 				buf.WriteByte(blank) |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		val := p.output(item.Val) |  | ||||||
| 		valLen := len(val) |  | ||||||
| 		buf.Write(val) |  | ||||||
|  |  | ||||||
| 		if item.Val.Pos().Line == item.Keys[0].Pos().Line && item.LineComment != nil { |  | ||||||
| 			for i := 0; i < longestValLen-valLen+1; i++ { |  | ||||||
| 				buf.WriteByte(blank) |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			for _, comment := range item.LineComment.List { |  | ||||||
| 				buf.WriteString(comment.Text) |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// do not print for the last item |  | ||||||
| 		if i != len(items)-1 { |  | ||||||
| 			buf.WriteByte(newline) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return buf.Bytes() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // list returns the printable HCL form of a list type. |  | ||||||
| func (p *printer) list(l *ast.ListType) []byte { |  | ||||||
| 	if p.isSingleLineList(l) { |  | ||||||
| 		return p.singleLineList(l) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	var buf bytes.Buffer |  | ||||||
| 	buf.WriteString("[") |  | ||||||
| 	buf.WriteByte(newline) |  | ||||||
|  |  | ||||||
| 	var longestLine int |  | ||||||
| 	for _, item := range l.List { |  | ||||||
| 		// for now we assume that the list only contains literal types |  | ||||||
| 		if lit, ok := item.(*ast.LiteralType); ok { |  | ||||||
| 			lineLen := len(lit.Token.Text) |  | ||||||
| 			if lineLen > longestLine { |  | ||||||
| 				longestLine = lineLen |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	haveEmptyLine := false |  | ||||||
| 	for i, item := range l.List { |  | ||||||
| 		// If we have a lead comment, then we want to write that first |  | ||||||
| 		leadComment := false |  | ||||||
| 		if lit, ok := item.(*ast.LiteralType); ok && lit.LeadComment != nil { |  | ||||||
| 			leadComment = true |  | ||||||
|  |  | ||||||
| 			// Ensure an empty line before every element with a |  | ||||||
| 			// lead comment (except the first item in a list). |  | ||||||
| 			if !haveEmptyLine && i != 0 { |  | ||||||
| 				buf.WriteByte(newline) |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			for _, comment := range lit.LeadComment.List { |  | ||||||
| 				buf.Write(p.indent([]byte(comment.Text))) |  | ||||||
| 				buf.WriteByte(newline) |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// also indent each line |  | ||||||
| 		val := p.output(item) |  | ||||||
| 		curLen := len(val) |  | ||||||
| 		buf.Write(p.indent(val)) |  | ||||||
|  |  | ||||||
| 		// if this item is a heredoc, then we output the comma on |  | ||||||
| 		// the next line. This is the only case this happens. |  | ||||||
| 		comma := []byte{','} |  | ||||||
| 		if lit, ok := item.(*ast.LiteralType); ok && lit.Token.Type == token.HEREDOC { |  | ||||||
| 			buf.WriteByte(newline) |  | ||||||
| 			comma = p.indent(comma) |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		buf.Write(comma) |  | ||||||
|  |  | ||||||
| 		if lit, ok := item.(*ast.LiteralType); ok && lit.LineComment != nil { |  | ||||||
| 			// if the next item doesn't have any comments, do not align |  | ||||||
| 			buf.WriteByte(blank) // align one space |  | ||||||
| 			for i := 0; i < longestLine-curLen; i++ { |  | ||||||
| 				buf.WriteByte(blank) |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			for _, comment := range lit.LineComment.List { |  | ||||||
| 				buf.WriteString(comment.Text) |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		buf.WriteByte(newline) |  | ||||||
|  |  | ||||||
| 		// Ensure an empty line after every element with a |  | ||||||
| 		// lead comment (except the first item in a list). |  | ||||||
| 		haveEmptyLine = leadComment && i != len(l.List)-1 |  | ||||||
| 		if haveEmptyLine { |  | ||||||
| 			buf.WriteByte(newline) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	buf.WriteString("]") |  | ||||||
| 	return buf.Bytes() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // isSingleLineList returns true if the list: |  | ||||||
| // * was previously formatted entirely on one line |  | ||||||
| // * consists entirely of literals |  | ||||||
| // * contains either no heredoc strings or exactly one element |  | ||||||
| // * has no line comments |  | ||||||
| func (printer) isSingleLineList(l *ast.ListType) bool { |  | ||||||
| 	for _, item := range l.List { |  | ||||||
| 		if item.Pos().Line != l.Lbrack.Line { |  | ||||||
| 			return false |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		lit, ok := item.(*ast.LiteralType) |  | ||||||
| 		if !ok { |  | ||||||
| 			return false |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if lit.Token.Type == token.HEREDOC && len(l.List) != 1 { |  | ||||||
| 			return false |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if lit.LineComment != nil { |  | ||||||
| 			return false |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return true |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // singleLineList prints a simple single line list. |  | ||||||
| // For a definition of "simple", see isSingleLineList above. |  | ||||||
| func (p *printer) singleLineList(l *ast.ListType) []byte { |  | ||||||
| 	buf := &bytes.Buffer{} |  | ||||||
|  |  | ||||||
| 	buf.WriteString("[") |  | ||||||
| 	for i, item := range l.List { |  | ||||||
| 		if i != 0 { |  | ||||||
| 			buf.WriteString(", ") |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// Output the item itself |  | ||||||
| 		buf.Write(p.output(item)) |  | ||||||
|  |  | ||||||
| 		// The heredoc marker needs to be at the end of line. |  | ||||||
| 		if lit, ok := item.(*ast.LiteralType); ok && lit.Token.Type == token.HEREDOC { |  | ||||||
| 			buf.WriteByte(newline) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	buf.WriteString("]") |  | ||||||
| 	return buf.Bytes() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // indent indents the lines of the given buffer for each non-empty line |  | ||||||
| func (p *printer) indent(buf []byte) []byte { |  | ||||||
| 	var prefix []byte |  | ||||||
| 	if p.cfg.SpacesWidth != 0 { |  | ||||||
| 		for i := 0; i < p.cfg.SpacesWidth; i++ { |  | ||||||
| 			prefix = append(prefix, blank) |  | ||||||
| 		} |  | ||||||
| 	} else { |  | ||||||
| 		prefix = []byte{tab} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	var res []byte |  | ||||||
| 	bol := true |  | ||||||
| 	for _, c := range buf { |  | ||||||
| 		if bol && c != '\n' { |  | ||||||
| 			res = append(res, prefix...) |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		res = append(res, c) |  | ||||||
| 		bol = c == '\n' |  | ||||||
| 	} |  | ||||||
| 	return res |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // unindent removes all the indentation from the tombstoned lines |  | ||||||
| func (p *printer) unindent(buf []byte) []byte { |  | ||||||
| 	var res []byte |  | ||||||
| 	for i := 0; i < len(buf); i++ { |  | ||||||
| 		skip := len(buf)-i <= len(unindent) |  | ||||||
| 		if !skip { |  | ||||||
| 			skip = !bytes.Equal(unindent, buf[i:i+len(unindent)]) |  | ||||||
| 		} |  | ||||||
| 		if skip { |  | ||||||
| 			res = append(res, buf[i]) |  | ||||||
| 			continue |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// We have a marker. We have to backtrack here and clean out |  | ||||||
| 		// any whitespace ahead of our tombstone up to a \n. |  | ||||||
| 		for j := len(res) - 1; j >= 0; j-- { |  | ||||||
| 			if res[j] == '\n' { |  | ||||||
| 				break |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			res = res[:j] |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// Skip the entire unindent marker |  | ||||||
| 		i += len(unindent) - 1 |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return res |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // heredocIndent marks all the 2nd and further lines as unindentable |  | ||||||
| func (p *printer) heredocIndent(buf []byte) []byte { |  | ||||||
| 	var res []byte |  | ||||||
| 	bol := false |  | ||||||
| 	for _, c := range buf { |  | ||||||
| 		if bol && c != '\n' { |  | ||||||
| 			res = append(res, unindent...) |  | ||||||
| 		} |  | ||||||
| 		res = append(res, c) |  | ||||||
| 		bol = c == '\n' |  | ||||||
| 	} |  | ||||||
| 	return res |  | ||||||
| } |  | ||||||
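heredocIndent and unindent above cooperate through the U+E123 tombstone: heredocIndent prefixes every line after the first with the marker, indent later prepends a tab (or spaces) to every line, and unindent removes each marker together with whatever indentation was inserted in front of it on that line, so heredoc bodies and multiline strings come out unchanged. A standalone sketch of that round trip, assuming a plain tab indent (the real functions are unexported methods on printer):

package main

import (
	"bytes"
	"fmt"
)

// tombstone mirrors the unindent marker used above (U+E123, private use area).
var tombstone = []byte("\uE123")

// markHeredoc mimics heredocIndent: every line after the first gets a marker.
func markHeredoc(buf []byte) []byte {
	var res []byte
	bol := false
	for _, c := range buf {
		if bol && c != '\n' {
			res = append(res, tombstone...)
		}
		res = append(res, c)
		bol = c == '\n'
	}
	return res
}

// indent mimics printer.indent with a single tab prefix per non-empty line.
func indent(buf []byte) []byte {
	var res []byte
	bol := true
	for _, c := range buf {
		if bol && c != '\n' {
			res = append(res, '\t')
		}
		res = append(res, c)
		bol = c == '\n'
	}
	return res
}

// strip mimics printer.unindent: each marker is removed together with the
// indentation that was inserted in front of it on the same line.
func strip(buf []byte) []byte {
	var res []byte
	for i := 0; i < len(buf); i++ {
		if len(buf)-i < len(tombstone) || !bytes.Equal(tombstone, buf[i:i+len(tombstone)]) {
			res = append(res, buf[i])
			continue
		}
		for j := len(res) - 1; j >= 0; j-- {
			if res[j] == '\n' {
				break
			}
			res = res[:j]
		}
		i += len(tombstone) - 1 // skip over the marker itself
	}
	return res
}

func main() {
	heredoc := []byte("<<EOF\nline one\nline two\nEOF")
	out := strip(indent(markHeredoc(heredoc)))
	// Only the first line ends up indented; the heredoc body is untouched.
	fmt.Printf("%q\n", out)
}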
|  |  | ||||||
| // isSingleLineObject tells whether the given object item is a single |  | ||||||
| // line object such as "obj {}". |  | ||||||
| // |  | ||||||
| // A single line object: |  | ||||||
| // |  | ||||||
| //   * has no lead comments (they would force it onto multiple lines) |  | ||||||
| //   * has no assignment |  | ||||||
| //   * has no values in the stanza (within {}) |  | ||||||
| // |  | ||||||
| func (p *printer) isSingleLineObject(val *ast.ObjectItem) bool { |  | ||||||
| 	// If there is a lead comment, can't be one line |  | ||||||
| 	if val.LeadComment != nil { |  | ||||||
| 		return false |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// If there is assignment, we always break by line |  | ||||||
| 	if val.Assign.IsValid() { |  | ||||||
| 		return false |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// If it isn't an object type, then its not a single line object |  | ||||||
| 	ot, ok := val.Val.(*ast.ObjectType) |  | ||||||
| 	if !ok { |  | ||||||
| 		return false |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// If the object has no items, it is single line! |  | ||||||
| 	return len(ot.List.Items) == 0 |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func lines(txt string) int { |  | ||||||
| 	endline := 1 |  | ||||||
| 	for i := 0; i < len(txt); i++ { |  | ||||||
| 		if txt[i] == '\n' { |  | ||||||
| 			endline++ |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| 	return endline |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ---------------------------------------------------------------------------- |  | ||||||
| // Tracing support |  | ||||||
|  |  | ||||||
| func (p *printer) printTrace(a ...interface{}) { |  | ||||||
| 	if !p.enableTrace { |  | ||||||
| 		return |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	const dots = ". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . " |  | ||||||
| 	const n = len(dots) |  | ||||||
| 	i := 2 * p.indentTrace |  | ||||||
| 	for i > n { |  | ||||||
| 		fmt.Print(dots) |  | ||||||
| 		i -= n |  | ||||||
| 	} |  | ||||||
| 	// i <= n |  | ||||||
| 	fmt.Print(dots[0:i]) |  | ||||||
| 	fmt.Println(a...) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func trace(p *printer, msg string) *printer { |  | ||||||
| 	p.printTrace(msg, "(") |  | ||||||
| 	p.indentTrace++ |  | ||||||
| 	return p |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Usage pattern: defer un(trace(p, "...")) |  | ||||||
| func un(p *printer) { |  | ||||||
| 	p.indentTrace-- |  | ||||||
| 	p.printTrace(")") |  | ||||||
| } |  | ||||||
							
								
								
									
								66	vendor/github.com/hashicorp/hcl/hcl/printer/printer.go	(generated, vendored)
							| @@ -1,66 +0,0 @@ | |||||||
| // Package printer implements printing of AST nodes to HCL format. |  | ||||||
| package printer |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"bytes" |  | ||||||
| 	"io" |  | ||||||
| 	"text/tabwriter" |  | ||||||
|  |  | ||||||
| 	"github.com/hashicorp/hcl/hcl/ast" |  | ||||||
| 	"github.com/hashicorp/hcl/hcl/parser" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| var DefaultConfig = Config{ |  | ||||||
| 	SpacesWidth: 2, |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // A Config node controls the output of Fprint. |  | ||||||
| type Config struct { |  | ||||||
| 	SpacesWidth int // if set, it will use spaces instead of tabs for alignment |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (c *Config) Fprint(output io.Writer, node ast.Node) error { |  | ||||||
| 	p := &printer{ |  | ||||||
| 		cfg:                *c, |  | ||||||
| 		comments:           make([]*ast.CommentGroup, 0), |  | ||||||
| 		standaloneComments: make([]*ast.CommentGroup, 0), |  | ||||||
| 		// enableTrace:        true, |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	p.collectComments(node) |  | ||||||
|  |  | ||||||
| 	if _, err := output.Write(p.unindent(p.output(node))); err != nil { |  | ||||||
| 		return err |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// flush tabwriter, if any |  | ||||||
| 	var err error |  | ||||||
| 	if tw, _ := output.(*tabwriter.Writer); tw != nil { |  | ||||||
| 		err = tw.Flush() |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return err |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Fprint "pretty-prints" an HCL node to output. |  | ||||||
| // It calls Config.Fprint with default settings. |  | ||||||
| func Fprint(output io.Writer, node ast.Node) error { |  | ||||||
| 	return DefaultConfig.Fprint(output, node) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Format formats src HCL and returns the result. |  | ||||||
| func Format(src []byte) ([]byte, error) { |  | ||||||
| 	node, err := parser.Parse(src) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return nil, err |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	var buf bytes.Buffer |  | ||||||
| 	if err := DefaultConfig.Fprint(&buf, node); err != nil { |  | ||||||
| 		return nil, err |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// Add trailing newline to result |  | ||||||
| 	buf.WriteString("\n") |  | ||||||
| 	return buf.Bytes(), nil |  | ||||||
| } |  | ||||||
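A small usage sketch of the package removed above: Format parses the source and pretty-prints it with DefaultConfig (two-space indentation), and adjacent one-line items are aligned by alignedItems. The input below is made up for illustration; the exact spacing of the output depends on the alignment rules shown earlier.

package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/hcl/hcl/printer"
)

func main() {
	// Deliberately messy HCL input, invented for this example.
	src := []byte(`variable "name" {
default = "value"     // trailing comment
description    = "example"
}`)

	out, err := printer.Format(src)
	if err != nil {
		log.Fatal(err)
	}
	// Format normalizes indentation to two spaces and aligns the "=" signs
	// and trailing comments of adjacent one-line items.
	fmt.Println(string(out))
}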
							
								
								
									
								652	vendor/github.com/hashicorp/hcl/hcl/scanner/scanner.go	(generated, vendored)
							| @@ -1,652 +0,0 @@ | |||||||
| // Package scanner implements a scanner for HCL (HashiCorp Configuration |  | ||||||
| // Language) source text. |  | ||||||
| package scanner |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"bytes" |  | ||||||
| 	"fmt" |  | ||||||
| 	"os" |  | ||||||
| 	"regexp" |  | ||||||
| 	"unicode" |  | ||||||
| 	"unicode/utf8" |  | ||||||
|  |  | ||||||
| 	"github.com/hashicorp/hcl/hcl/token" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| // eof represents a marker rune for the end of the reader. |  | ||||||
| const eof = rune(0) |  | ||||||
|  |  | ||||||
| // Scanner defines a lexical scanner |  | ||||||
| type Scanner struct { |  | ||||||
| 	buf *bytes.Buffer // Source buffer for advancing and scanning |  | ||||||
| 	src []byte        // Source buffer for immutable access |  | ||||||
|  |  | ||||||
| 	// Source Position |  | ||||||
| 	srcPos  token.Pos // current position |  | ||||||
| 	prevPos token.Pos // previous position, used for peek() method |  | ||||||
|  |  | ||||||
| 	lastCharLen int // length of last character in bytes |  | ||||||
| 	lastLineLen int // length of last line in characters (for correct column reporting) |  | ||||||
|  |  | ||||||
| 	tokStart int // token text start position |  | ||||||
| 	tokEnd   int // token text end  position |  | ||||||
|  |  | ||||||
| 	// Error is called for each error encountered. If no Error |  | ||||||
| 	// function is set, the error is reported to os.Stderr. |  | ||||||
| 	Error func(pos token.Pos, msg string) |  | ||||||
|  |  | ||||||
| 	// ErrorCount is incremented by one for each error encountered. |  | ||||||
| 	ErrorCount int |  | ||||||
|  |  | ||||||
| 	// tokPos is the start position of most recently scanned token; set by |  | ||||||
| 	// Scan. The Filename field is always left untouched by the Scanner.  If |  | ||||||
| 	// an error is reported (via Error) and Position is invalid, the scanner is |  | ||||||
| 	// not inside a token. |  | ||||||
| 	tokPos token.Pos |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // New creates and initializes a new instance of Scanner using src as |  | ||||||
| // its source content. |  | ||||||
| func New(src []byte) *Scanner { |  | ||||||
| 	// Even though we accept a src slice, we read from an io.Reader-compatible |  | ||||||
| 	// type (*bytes.Buffer), so in the future we could easily switch to a |  | ||||||
| 	// streaming read. |  | ||||||
| 	b := bytes.NewBuffer(src) |  | ||||||
| 	s := &Scanner{ |  | ||||||
| 		buf: b, |  | ||||||
| 		src: src, |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// the source position always starts at line 1 |  | ||||||
| 	s.srcPos.Line = 1 |  | ||||||
| 	return s |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // next reads the next rune from the buffered reader. Returns the rune(0) if |  | ||||||
| // an error occurs (or io.EOF is returned). |  | ||||||
| func (s *Scanner) next() rune { |  | ||||||
| 	ch, size, err := s.buf.ReadRune() |  | ||||||
| 	if err != nil { |  | ||||||
| 		// advance for error reporting |  | ||||||
| 		s.srcPos.Column++ |  | ||||||
| 		s.srcPos.Offset += size |  | ||||||
| 		s.lastCharLen = size |  | ||||||
| 		return eof |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// remember last position |  | ||||||
| 	s.prevPos = s.srcPos |  | ||||||
|  |  | ||||||
| 	s.srcPos.Column++ |  | ||||||
| 	s.lastCharLen = size |  | ||||||
| 	s.srcPos.Offset += size |  | ||||||
|  |  | ||||||
| 	if ch == utf8.RuneError && size == 1 { |  | ||||||
| 		s.err("illegal UTF-8 encoding") |  | ||||||
| 		return ch |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	if ch == '\n' { |  | ||||||
| 		s.srcPos.Line++ |  | ||||||
| 		s.lastLineLen = s.srcPos.Column |  | ||||||
| 		s.srcPos.Column = 0 |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	if ch == '\x00' { |  | ||||||
| 		s.err("unexpected null character (0x00)") |  | ||||||
| 		return eof |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	if ch == '\uE123' { |  | ||||||
| 		s.err("unicode code point U+E123 reserved for internal use") |  | ||||||
| 		return utf8.RuneError |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// debug |  | ||||||
| 	// fmt.Printf("ch: %q, offset:column: %d:%d\n", ch, s.srcPos.Offset, s.srcPos.Column) |  | ||||||
| 	return ch |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // unread unreads the previous read Rune and updates the source position |  | ||||||
| func (s *Scanner) unread() { |  | ||||||
| 	if err := s.buf.UnreadRune(); err != nil { |  | ||||||
| 		panic(err) // this is user fault, we should catch it |  | ||||||
| 	} |  | ||||||
| 	s.srcPos = s.prevPos // put back last position |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // peek returns the next rune without advancing the reader. |  | ||||||
| func (s *Scanner) peek() rune { |  | ||||||
| 	peek, _, err := s.buf.ReadRune() |  | ||||||
| 	if err != nil { |  | ||||||
| 		return eof |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	s.buf.UnreadRune() |  | ||||||
| 	return peek |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Scan scans the next token and returns the token. |  | ||||||
| func (s *Scanner) Scan() token.Token { |  | ||||||
| 	ch := s.next() |  | ||||||
|  |  | ||||||
| 	// skip white space |  | ||||||
| 	for isWhitespace(ch) { |  | ||||||
| 		ch = s.next() |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	var tok token.Type |  | ||||||
|  |  | ||||||
| 	// token text markings |  | ||||||
| 	s.tokStart = s.srcPos.Offset - s.lastCharLen |  | ||||||
|  |  | ||||||
| 	// token position: the initial next() has already moved the offset forward by |  | ||||||
| 	// one rune (its size in bytes), but we are interested in the starting point |  | ||||||
| 	s.tokPos.Offset = s.srcPos.Offset - s.lastCharLen |  | ||||||
| 	if s.srcPos.Column > 0 { |  | ||||||
| 		// common case: last character was not a '\n' |  | ||||||
| 		s.tokPos.Line = s.srcPos.Line |  | ||||||
| 		s.tokPos.Column = s.srcPos.Column |  | ||||||
| 	} else { |  | ||||||
| 		// last character was a '\n' |  | ||||||
| 		// (we cannot be at the beginning of the source |  | ||||||
| 		// since we have called next() at least once) |  | ||||||
| 		s.tokPos.Line = s.srcPos.Line - 1 |  | ||||||
| 		s.tokPos.Column = s.lastLineLen |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	switch { |  | ||||||
| 	case isLetter(ch): |  | ||||||
| 		tok = token.IDENT |  | ||||||
| 		lit := s.scanIdentifier() |  | ||||||
| 		if lit == "true" || lit == "false" { |  | ||||||
| 			tok = token.BOOL |  | ||||||
| 		} |  | ||||||
| 	case isDecimal(ch): |  | ||||||
| 		tok = s.scanNumber(ch) |  | ||||||
| 	default: |  | ||||||
| 		switch ch { |  | ||||||
| 		case eof: |  | ||||||
| 			tok = token.EOF |  | ||||||
| 		case '"': |  | ||||||
| 			tok = token.STRING |  | ||||||
| 			s.scanString() |  | ||||||
| 		case '#', '/': |  | ||||||
| 			tok = token.COMMENT |  | ||||||
| 			s.scanComment(ch) |  | ||||||
| 		case '.': |  | ||||||
| 			tok = token.PERIOD |  | ||||||
| 			ch = s.peek() |  | ||||||
| 			if isDecimal(ch) { |  | ||||||
| 				tok = token.FLOAT |  | ||||||
| 				ch = s.scanMantissa(ch) |  | ||||||
| 				ch = s.scanExponent(ch) |  | ||||||
| 			} |  | ||||||
| 		case '<': |  | ||||||
| 			tok = token.HEREDOC |  | ||||||
| 			s.scanHeredoc() |  | ||||||
| 		case '[': |  | ||||||
| 			tok = token.LBRACK |  | ||||||
| 		case ']': |  | ||||||
| 			tok = token.RBRACK |  | ||||||
| 		case '{': |  | ||||||
| 			tok = token.LBRACE |  | ||||||
| 		case '}': |  | ||||||
| 			tok = token.RBRACE |  | ||||||
| 		case ',': |  | ||||||
| 			tok = token.COMMA |  | ||||||
| 		case '=': |  | ||||||
| 			tok = token.ASSIGN |  | ||||||
| 		case '+': |  | ||||||
| 			tok = token.ADD |  | ||||||
| 		case '-': |  | ||||||
| 			if isDecimal(s.peek()) { |  | ||||||
| 				ch := s.next() |  | ||||||
| 				tok = s.scanNumber(ch) |  | ||||||
| 			} else { |  | ||||||
| 				tok = token.SUB |  | ||||||
| 			} |  | ||||||
| 		default: |  | ||||||
| 			s.err("illegal char") |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// finish token ending |  | ||||||
| 	s.tokEnd = s.srcPos.Offset |  | ||||||
|  |  | ||||||
| 	// create token literal |  | ||||||
| 	var tokenText string |  | ||||||
| 	if s.tokStart >= 0 { |  | ||||||
| 		tokenText = string(s.src[s.tokStart:s.tokEnd]) |  | ||||||
| 	} |  | ||||||
| 	s.tokStart = s.tokEnd // ensure idempotency of tokenText() call |  | ||||||
|  |  | ||||||
| 	return token.Token{ |  | ||||||
| 		Type: tok, |  | ||||||
| 		Pos:  s.tokPos, |  | ||||||
| 		Text: tokenText, |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (s *Scanner) scanComment(ch rune) { |  | ||||||
| 	// single line comments |  | ||||||
| 	if ch == '#' || (ch == '/' && s.peek() != '*') { |  | ||||||
| 		if ch == '/' && s.peek() != '/' { |  | ||||||
| 			s.err("expected '/' for comment") |  | ||||||
| 			return |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		ch = s.next() |  | ||||||
| 		for ch != '\n' && ch >= 0 && ch != eof { |  | ||||||
| 			ch = s.next() |  | ||||||
| 		} |  | ||||||
| 		if ch != eof && ch >= 0 { |  | ||||||
| 			s.unread() |  | ||||||
| 		} |  | ||||||
| 		return |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// be sure we get the character after /*. This allows us to find comments |  | ||||||
| 	// that are not terminated |  | ||||||
| 	if ch == '/' { |  | ||||||
| 		s.next() |  | ||||||
| 		ch = s.next() // read character after "/*" |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// look for /* - style comments |  | ||||||
| 	for { |  | ||||||
| 		if ch < 0 || ch == eof { |  | ||||||
| 			s.err("comment not terminated") |  | ||||||
| 			break |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		ch0 := ch |  | ||||||
| 		ch = s.next() |  | ||||||
| 		if ch0 == '*' && ch == '/' { |  | ||||||
| 			break |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // scanNumber scans an HCL number definition starting with the given rune |  | ||||||
| func (s *Scanner) scanNumber(ch rune) token.Type { |  | ||||||
| 	if ch == '0' { |  | ||||||
| 		// check for hexadecimal, octal or float |  | ||||||
| 		ch = s.next() |  | ||||||
| 		if ch == 'x' || ch == 'X' { |  | ||||||
| 			// hexadecimal |  | ||||||
| 			ch = s.next() |  | ||||||
| 			found := false |  | ||||||
| 			for isHexadecimal(ch) { |  | ||||||
| 				ch = s.next() |  | ||||||
| 				found = true |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			if !found { |  | ||||||
| 				s.err("illegal hexadecimal number") |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			if ch != eof { |  | ||||||
| 				s.unread() |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			return token.NUMBER |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// now it's either something like: 0421(octal) or 0.1231(float) |  | ||||||
| 		illegalOctal := false |  | ||||||
| 		for isDecimal(ch) { |  | ||||||
| 			ch = s.next() |  | ||||||
| 			if ch == '8' || ch == '9' { |  | ||||||
| 				// this is just a possibility. For example 0159 is illegal, but |  | ||||||
| 				// 0159.23 is valid. So we mark a possible illegal octal. If |  | ||||||
| 				// the next character is not a period, we'll print the error. |  | ||||||
| 				illegalOctal = true |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if ch == 'e' || ch == 'E' { |  | ||||||
| 			ch = s.scanExponent(ch) |  | ||||||
| 			return token.FLOAT |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if ch == '.' { |  | ||||||
| 			ch = s.scanFraction(ch) |  | ||||||
|  |  | ||||||
| 			if ch == 'e' || ch == 'E' { |  | ||||||
| 				ch = s.next() |  | ||||||
| 				ch = s.scanExponent(ch) |  | ||||||
| 			} |  | ||||||
| 			return token.FLOAT |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if illegalOctal { |  | ||||||
| 			s.err("illegal octal number") |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if ch != eof { |  | ||||||
| 			s.unread() |  | ||||||
| 		} |  | ||||||
| 		return token.NUMBER |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	s.scanMantissa(ch) |  | ||||||
| 	ch = s.next() // seek forward |  | ||||||
| 	if ch == 'e' || ch == 'E' { |  | ||||||
| 		ch = s.scanExponent(ch) |  | ||||||
| 		return token.FLOAT |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	if ch == '.' { |  | ||||||
| 		ch = s.scanFraction(ch) |  | ||||||
| 		if ch == 'e' || ch == 'E' { |  | ||||||
| 			ch = s.next() |  | ||||||
| 			ch = s.scanExponent(ch) |  | ||||||
| 		} |  | ||||||
| 		return token.FLOAT |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	if ch != eof { |  | ||||||
| 		s.unread() |  | ||||||
| 	} |  | ||||||
| 	return token.NUMBER |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // scanMantissa scans the mantissa beginning from the rune. It returns the next |  | ||||||
| // non-decimal rune. It's used to determine whether it's a fraction or exponent. |  | ||||||
| func (s *Scanner) scanMantissa(ch rune) rune { |  | ||||||
| 	scanned := false |  | ||||||
| 	for isDecimal(ch) { |  | ||||||
| 		ch = s.next() |  | ||||||
| 		scanned = true |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	if scanned && ch != eof { |  | ||||||
| 		s.unread() |  | ||||||
| 	} |  | ||||||
| 	return ch |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // scanFraction scans the fraction after the '.' rune |  | ||||||
| func (s *Scanner) scanFraction(ch rune) rune { |  | ||||||
| 	if ch == '.' { |  | ||||||
| 		ch = s.peek() // we peek just to see if we can move forward |  | ||||||
| 		ch = s.scanMantissa(ch) |  | ||||||
| 	} |  | ||||||
| 	return ch |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // scanExponent scans the remaining parts of an exponent after the 'e' or 'E' |  | ||||||
| // rune. |  | ||||||
| func (s *Scanner) scanExponent(ch rune) rune { |  | ||||||
| 	if ch == 'e' || ch == 'E' { |  | ||||||
| 		ch = s.next() |  | ||||||
| 		if ch == '-' || ch == '+' { |  | ||||||
| 			ch = s.next() |  | ||||||
| 		} |  | ||||||
| 		ch = s.scanMantissa(ch) |  | ||||||
| 	} |  | ||||||
| 	return ch |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // scanHeredoc scans a heredoc string |  | ||||||
| func (s *Scanner) scanHeredoc() { |  | ||||||
| 	// Scan the second '<' in example: '<<EOF' |  | ||||||
| 	if s.next() != '<' { |  | ||||||
| 		s.err("heredoc expected second '<', didn't see it") |  | ||||||
| 		return |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// Get the original offset so we can read just the heredoc ident |  | ||||||
| 	offs := s.srcPos.Offset |  | ||||||
|  |  | ||||||
| 	// Scan the identifier |  | ||||||
| 	ch := s.next() |  | ||||||
|  |  | ||||||
| 	// Indented heredoc syntax |  | ||||||
| 	if ch == '-' { |  | ||||||
| 		ch = s.next() |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	for isLetter(ch) || isDigit(ch) { |  | ||||||
| 		ch = s.next() |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// If we reached an EOF then that is not good |  | ||||||
| 	if ch == eof { |  | ||||||
| 		s.err("heredoc not terminated") |  | ||||||
| 		return |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// Ignore the '\r' in Windows line endings |  | ||||||
| 	if ch == '\r' { |  | ||||||
| 		if s.peek() == '\n' { |  | ||||||
| 			ch = s.next() |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// If we didn't reach a newline then that is also not good |  | ||||||
| 	if ch != '\n' { |  | ||||||
| 		s.err("invalid characters in heredoc anchor") |  | ||||||
| 		return |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// Read the identifier |  | ||||||
| 	identBytes := s.src[offs : s.srcPos.Offset-s.lastCharLen] |  | ||||||
| 	if len(identBytes) == 0 || (len(identBytes) == 1 && identBytes[0] == '-') { |  | ||||||
| 		s.err("zero-length heredoc anchor") |  | ||||||
| 		return |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	var identRegexp *regexp.Regexp |  | ||||||
| 	if identBytes[0] == '-' { |  | ||||||
| 		identRegexp = regexp.MustCompile(fmt.Sprintf(`^[[:space:]]*%s\r*\z`, identBytes[1:])) |  | ||||||
| 	} else { |  | ||||||
| 		identRegexp = regexp.MustCompile(fmt.Sprintf(`^[[:space:]]*%s\r*\z`, identBytes)) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// Read the actual string value |  | ||||||
| 	lineStart := s.srcPos.Offset |  | ||||||
| 	for { |  | ||||||
| 		ch := s.next() |  | ||||||
|  |  | ||||||
| 		// Special newline handling. |  | ||||||
| 		if ch == '\n' { |  | ||||||
| 			// Math is fast, so we first compare the byte counts to see if we have a chance |  | ||||||
| 			// of seeing the same identifier - if the length is less than the number of bytes |  | ||||||
| 			// in the identifier, this cannot be a valid terminator. |  | ||||||
| 			lineBytesLen := s.srcPos.Offset - s.lastCharLen - lineStart |  | ||||||
| 			if lineBytesLen >= len(identBytes) && identRegexp.Match(s.src[lineStart:s.srcPos.Offset-s.lastCharLen]) { |  | ||||||
| 				break |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			// Not an anchor match, record the start of a new line |  | ||||||
| 			lineStart = s.srcPos.Offset |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if ch == eof { |  | ||||||
| 			s.err("heredoc not terminated") |  | ||||||
| 			return |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return |  | ||||||
| } |  | ||||||
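For reference, the two heredoc forms accepted by scanHeredoc above, written as Go raw strings (the anchors EOF and EOT are arbitrary): the plain <<ANCHOR form, and the <<-ANCHOR form whose terminating anchor may itself be indented. In this version of the scanner the terminator line is matched as optional whitespace followed by the anchor in both cases.

package main

import "fmt"

func main() {
	// Plain heredoc: content runs until a line containing the anchor.
	plain := `value = <<EOF
line one
line two
EOF
`

	// Indented heredoc: the terminating anchor may be preceded by whitespace.
	indented := `value = <<-EOT
    line one
    line two
    EOT
`

	fmt.Print(plain, indented)
}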
|  |  | ||||||
| // scanString scans a quoted string |  | ||||||
| func (s *Scanner) scanString() { |  | ||||||
| 	braces := 0 |  | ||||||
| 	for { |  | ||||||
| 		// '"' opening already consumed |  | ||||||
| 		// read character after quote |  | ||||||
| 		ch := s.next() |  | ||||||
|  |  | ||||||
| 		if (ch == '\n' && braces == 0) || ch < 0 || ch == eof { |  | ||||||
| 			s.err("literal not terminated") |  | ||||||
| 			return |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if ch == '"' && braces == 0 { |  | ||||||
| 			break |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// If we're going into a ${} then we can ignore quotes for a while |  | ||||||
| 		if braces == 0 && ch == '$' && s.peek() == '{' { |  | ||||||
| 			braces++ |  | ||||||
| 			s.next() |  | ||||||
| 		} else if braces > 0 && ch == '{' { |  | ||||||
| 			braces++ |  | ||||||
| 		} |  | ||||||
| 		if braces > 0 && ch == '}' { |  | ||||||
| 			braces-- |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if ch == '\\' { |  | ||||||
| 			s.scanEscape() |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // scanEscape scans an escape sequence |  | ||||||
| func (s *Scanner) scanEscape() rune { |  | ||||||
| 	// http://en.cppreference.com/w/cpp/language/escape |  | ||||||
| 	ch := s.next() // read character after '/' |  | ||||||
| 	switch ch { |  | ||||||
| 	case 'a', 'b', 'f', 'n', 'r', 't', 'v', '\\', '"': |  | ||||||
| 		// nothing to do |  | ||||||
| 	case '0', '1', '2', '3', '4', '5', '6', '7': |  | ||||||
| 		// octal notation |  | ||||||
| 		ch = s.scanDigits(ch, 8, 3) |  | ||||||
| 	case 'x': |  | ||||||
| 		// hexadecimal notation |  | ||||||
| 		ch = s.scanDigits(s.next(), 16, 2) |  | ||||||
| 	case 'u': |  | ||||||
| 		// universal character name |  | ||||||
| 		ch = s.scanDigits(s.next(), 16, 4) |  | ||||||
| 	case 'U': |  | ||||||
| 		// universal character name |  | ||||||
| 		ch = s.scanDigits(s.next(), 16, 8) |  | ||||||
| 	default: |  | ||||||
| 		s.err("illegal char escape") |  | ||||||
| 	} |  | ||||||
| 	return ch |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // scanDigits scans up to n digits in the given base. For example the octal |  | ||||||
| // escape \123 results in a call to scanDigits(ch, 8, 3) |  | ||||||
| func (s *Scanner) scanDigits(ch rune, base, n int) rune { |  | ||||||
| 	start := n |  | ||||||
| 	for n > 0 && digitVal(ch) < base { |  | ||||||
| 		ch = s.next() |  | ||||||
| 		if ch == eof { |  | ||||||
| 			// If we see an EOF, we halt any more scanning of digits |  | ||||||
| 			// immediately. |  | ||||||
| 			break |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		n-- |  | ||||||
| 	} |  | ||||||
| 	if n > 0 { |  | ||||||
| 		s.err("illegal char escape") |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	if n != start && ch != eof { |  | ||||||
| 		// we scanned all digits, put the last non digit char back, |  | ||||||
| 		// only if we read anything at all |  | ||||||
| 		s.unread() |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return ch |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // scanIdentifier scans an identifier and returns the literal string |  | ||||||
| func (s *Scanner) scanIdentifier() string { |  | ||||||
| 	offs := s.srcPos.Offset - s.lastCharLen |  | ||||||
| 	ch := s.next() |  | ||||||
| 	for isLetter(ch) || isDigit(ch) || ch == '-' || ch == '.' { |  | ||||||
| 		ch = s.next() |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	if ch != eof { |  | ||||||
| 		s.unread() // we got identifier, put back latest char |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return string(s.src[offs:s.srcPos.Offset]) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // recentPosition returns the position of the character immediately after the |  | ||||||
| // character or token returned by the last call to Scan. |  | ||||||
| func (s *Scanner) recentPosition() (pos token.Pos) { |  | ||||||
| 	pos.Offset = s.srcPos.Offset - s.lastCharLen |  | ||||||
| 	switch { |  | ||||||
| 	case s.srcPos.Column > 0: |  | ||||||
| 		// common case: last character was not a '\n' |  | ||||||
| 		pos.Line = s.srcPos.Line |  | ||||||
| 		pos.Column = s.srcPos.Column |  | ||||||
| 	case s.lastLineLen > 0: |  | ||||||
| 		// last character was a '\n' |  | ||||||
| 		// (we cannot be at the beginning of the source |  | ||||||
| 		// since we have called next() at least once) |  | ||||||
| 		pos.Line = s.srcPos.Line - 1 |  | ||||||
| 		pos.Column = s.lastLineLen |  | ||||||
| 	default: |  | ||||||
| 		// at the beginning of the source |  | ||||||
| 		pos.Line = 1 |  | ||||||
| 		pos.Column = 1 |  | ||||||
| 	} |  | ||||||
| 	return |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // err prints the error of any scanning to s.Error function. If the function is |  | ||||||
| // not defined, by default it prints them to os.Stderr |  | ||||||
| func (s *Scanner) err(msg string) { |  | ||||||
| 	s.ErrorCount++ |  | ||||||
| 	pos := s.recentPosition() |  | ||||||
|  |  | ||||||
| 	if s.Error != nil { |  | ||||||
| 		s.Error(pos, msg) |  | ||||||
| 		return |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	fmt.Fprintf(os.Stderr, "%s: %s\n", pos, msg) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // isLetter returns true if the given rune is a letter |  | ||||||
| func isLetter(ch rune) bool { |  | ||||||
| 	return 'a' <= ch && ch <= 'z' || 'A' <= ch && ch <= 'Z' || ch == '_' || ch >= 0x80 && unicode.IsLetter(ch) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // isDigit returns true if the given rune is a decimal digit |  | ||||||
| func isDigit(ch rune) bool { |  | ||||||
| 	return '0' <= ch && ch <= '9' || ch >= 0x80 && unicode.IsDigit(ch) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // isDecimal returns true if the given rune is a decimal digit |  | ||||||
| func isDecimal(ch rune) bool { |  | ||||||
| 	return '0' <= ch && ch <= '9' |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // isHexadecimal returns true if the given rune is a hexadecimal digit |  | ||||||
| func isHexadecimal(ch rune) bool { |  | ||||||
| 	return '0' <= ch && ch <= '9' || 'a' <= ch && ch <= 'f' || 'A' <= ch && ch <= 'F' |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // isWhitespace returns true if the rune is a space, tab, newline or carriage return |  | ||||||
| func isWhitespace(ch rune) bool { |  | ||||||
| 	return ch == ' ' || ch == '\t' || ch == '\n' || ch == '\r' |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // digitVal returns the integer value of a given octal, decimal, or hexadecimal rune |  | ||||||
| func digitVal(ch rune) int { |  | ||||||
| 	switch { |  | ||||||
| 	case '0' <= ch && ch <= '9': |  | ||||||
| 		return int(ch - '0') |  | ||||||
| 	case 'a' <= ch && ch <= 'f': |  | ||||||
| 		return int(ch - 'a' + 10) |  | ||||||
| 	case 'A' <= ch && ch <= 'F': |  | ||||||
| 		return int(ch - 'A' + 10) |  | ||||||
| 	} |  | ||||||
| 	return 16 // larger than any legal digit val |  | ||||||
| } |  | ||||||
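
A small, self-contained sketch to make the scanDigits/digitVal contract above concrete. It re-states the digitVal logic locally (the helper above is unexported) and shows how the base-bounded comparison decides where an octal or hexadecimal escape stops:

	package main

	import "fmt"

	// digitVal mirrors the helper above: it maps a rune to its numeric value and
	// returns 16 so that anything that is not a digit fails a "< base" test.
	func digitVal(ch rune) int {
		switch {
		case '0' <= ch && ch <= '9':
			return int(ch - '0')
		case 'a' <= ch && ch <= 'f':
			return int(ch - 'a' + 10)
		case 'A' <= ch && ch <= 'F':
			return int(ch - 'A' + 10)
		}
		return 16 // larger than any legal digit value
	}

	func main() {
		// An octal escape such as \123 may only use digits 0-7; a hex escape such
		// as \x4F may use 0-9, a-f, A-F. The scanner keeps consuming digits while
		// digitVal(ch) < base holds, which is exactly what these checks show.
		fmt.Println(digitVal('7') < 8)  // true:  '7' is a valid octal digit
		fmt.Println(digitVal('8') < 8)  // false: '8' terminates an octal escape
		fmt.Println(digitVal('F') < 16) // true:  'F' is a valid hex digit
		fmt.Println(digitVal('G') < 16) // false: 'G' terminates a hex escape
	}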
							
								
								
									
241 vendor/github.com/hashicorp/hcl/hcl/strconv/quote.go generated vendored
| @@ -1,241 +0,0 @@ |
| package strconv |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"errors" |  | ||||||
| 	"unicode/utf8" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| // ErrSyntax indicates that a value does not have the right syntax for the target type. |  | ||||||
| var ErrSyntax = errors.New("invalid syntax") |  | ||||||
|  |  | ||||||
| // Unquote interprets s as a single-quoted, double-quoted, |  | ||||||
| // or backquoted Go string literal, returning the string value |  | ||||||
| // that s quotes.  (If s is single-quoted, it would be a Go |  | ||||||
| // character literal; Unquote returns the corresponding |  | ||||||
| // one-character string.) |  | ||||||
| func Unquote(s string) (t string, err error) { |  | ||||||
| 	n := len(s) |  | ||||||
| 	if n < 2 { |  | ||||||
| 		return "", ErrSyntax |  | ||||||
| 	} |  | ||||||
| 	quote := s[0] |  | ||||||
| 	if quote != s[n-1] { |  | ||||||
| 		return "", ErrSyntax |  | ||||||
| 	} |  | ||||||
| 	s = s[1 : n-1] |  | ||||||
|  |  | ||||||
| 	if quote != '"' { |  | ||||||
| 		return "", ErrSyntax |  | ||||||
| 	} |  | ||||||
| 	if !contains(s, '$') && !contains(s, '{') && contains(s, '\n') { |  | ||||||
| 		return "", ErrSyntax |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// Is it trivial?  Avoid allocation. |  | ||||||
| 	if !contains(s, '\\') && !contains(s, quote) && !contains(s, '$') { |  | ||||||
| 		switch quote { |  | ||||||
| 		case '"': |  | ||||||
| 			return s, nil |  | ||||||
| 		case '\'': |  | ||||||
| 			r, size := utf8.DecodeRuneInString(s) |  | ||||||
| 			if size == len(s) && (r != utf8.RuneError || size != 1) { |  | ||||||
| 				return s, nil |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	var runeTmp [utf8.UTFMax]byte |  | ||||||
| 	buf := make([]byte, 0, 3*len(s)/2) // Try to avoid more allocations. |  | ||||||
| 	for len(s) > 0 { |  | ||||||
| 		// If we're starting a '${}' then let it through un-unquoted. |  | ||||||
| 		// Specifically: we don't unquote any characters within the `${}` |  | ||||||
| 		// section. |  | ||||||
| 		if s[0] == '$' && len(s) > 1 && s[1] == '{' { |  | ||||||
| 			buf = append(buf, '$', '{') |  | ||||||
| 			s = s[2:] |  | ||||||
|  |  | ||||||
| 			// Continue reading until we find the closing brace, copying as-is |  | ||||||
| 			braces := 1 |  | ||||||
| 			for len(s) > 0 && braces > 0 { |  | ||||||
| 				r, size := utf8.DecodeRuneInString(s) |  | ||||||
| 				if r == utf8.RuneError { |  | ||||||
| 					return "", ErrSyntax |  | ||||||
| 				} |  | ||||||
|  |  | ||||||
| 				s = s[size:] |  | ||||||
|  |  | ||||||
| 				n := utf8.EncodeRune(runeTmp[:], r) |  | ||||||
| 				buf = append(buf, runeTmp[:n]...) |  | ||||||
|  |  | ||||||
| 				switch r { |  | ||||||
| 				case '{': |  | ||||||
| 					braces++ |  | ||||||
| 				case '}': |  | ||||||
| 					braces-- |  | ||||||
| 				} |  | ||||||
| 			} |  | ||||||
| 			if braces != 0 { |  | ||||||
| 				return "", ErrSyntax |  | ||||||
| 			} |  | ||||||
| 			if len(s) == 0 { |  | ||||||
| 				// If there's no string left, we're done! |  | ||||||
| 				break |  | ||||||
| 			} else { |  | ||||||
| 				// If there's more left, we need to pop back up to the top of the loop |  | ||||||
| 				// in case there's another interpolation in this string. |  | ||||||
| 				continue |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if s[0] == '\n' { |  | ||||||
| 			return "", ErrSyntax |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		c, multibyte, ss, err := unquoteChar(s, quote) |  | ||||||
| 		if err != nil { |  | ||||||
| 			return "", err |  | ||||||
| 		} |  | ||||||
| 		s = ss |  | ||||||
| 		if c < utf8.RuneSelf || !multibyte { |  | ||||||
| 			buf = append(buf, byte(c)) |  | ||||||
| 		} else { |  | ||||||
| 			n := utf8.EncodeRune(runeTmp[:], c) |  | ||||||
| 			buf = append(buf, runeTmp[:n]...) |  | ||||||
| 		} |  | ||||||
| 		if quote == '\'' && len(s) != 0 { |  | ||||||
| 			// single-quoted must be single character |  | ||||||
| 			return "", ErrSyntax |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| 	return string(buf), nil |  | ||||||
| } |  | ||||||
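
As a quick illustration of the Unquote behavior documented above (a minimal sketch, not part of the vendored package): escape sequences outside a ${...} interpolation are decoded, while everything inside the braces, including nested quotes, is copied through verbatim. The upper(...) call is arbitrary interpolation content chosen for the example, not a function this package defines.

	package main

	import (
		"fmt"

		hclstrconv "github.com/hashicorp/hcl/hcl/strconv"
	)

	func main() {
		// The \n outside the interpolation is decoded to a newline; the inner
		// quotes around "world" sit inside ${...} and pass through untouched.
		s, err := hclstrconv.Unquote(`"hello ${upper("world")}\n"`)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%q\n", s) // "hello ${upper(\"world\")}\n"
	}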
|  |  | ||||||
| // contains reports whether the string contains the byte c. |  | ||||||
| func contains(s string, c byte) bool { |  | ||||||
| 	for i := 0; i < len(s); i++ { |  | ||||||
| 		if s[i] == c { |  | ||||||
| 			return true |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| 	return false |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func unhex(b byte) (v rune, ok bool) { |  | ||||||
| 	c := rune(b) |  | ||||||
| 	switch { |  | ||||||
| 	case '0' <= c && c <= '9': |  | ||||||
| 		return c - '0', true |  | ||||||
| 	case 'a' <= c && c <= 'f': |  | ||||||
| 		return c - 'a' + 10, true |  | ||||||
| 	case 'A' <= c && c <= 'F': |  | ||||||
| 		return c - 'A' + 10, true |  | ||||||
| 	} |  | ||||||
| 	return |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func unquoteChar(s string, quote byte) (value rune, multibyte bool, tail string, err error) { |  | ||||||
| 	// easy cases |  | ||||||
| 	switch c := s[0]; { |  | ||||||
| 	case c == quote && (quote == '\'' || quote == '"'): |  | ||||||
| 		err = ErrSyntax |  | ||||||
| 		return |  | ||||||
| 	case c >= utf8.RuneSelf: |  | ||||||
| 		r, size := utf8.DecodeRuneInString(s) |  | ||||||
| 		return r, true, s[size:], nil |  | ||||||
| 	case c != '\\': |  | ||||||
| 		return rune(s[0]), false, s[1:], nil |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// hard case: c is backslash |  | ||||||
| 	if len(s) <= 1 { |  | ||||||
| 		err = ErrSyntax |  | ||||||
| 		return |  | ||||||
| 	} |  | ||||||
| 	c := s[1] |  | ||||||
| 	s = s[2:] |  | ||||||
|  |  | ||||||
| 	switch c { |  | ||||||
| 	case 'a': |  | ||||||
| 		value = '\a' |  | ||||||
| 	case 'b': |  | ||||||
| 		value = '\b' |  | ||||||
| 	case 'f': |  | ||||||
| 		value = '\f' |  | ||||||
| 	case 'n': |  | ||||||
| 		value = '\n' |  | ||||||
| 	case 'r': |  | ||||||
| 		value = '\r' |  | ||||||
| 	case 't': |  | ||||||
| 		value = '\t' |  | ||||||
| 	case 'v': |  | ||||||
| 		value = '\v' |  | ||||||
| 	case 'x', 'u', 'U': |  | ||||||
| 		n := 0 |  | ||||||
| 		switch c { |  | ||||||
| 		case 'x': |  | ||||||
| 			n = 2 |  | ||||||
| 		case 'u': |  | ||||||
| 			n = 4 |  | ||||||
| 		case 'U': |  | ||||||
| 			n = 8 |  | ||||||
| 		} |  | ||||||
| 		var v rune |  | ||||||
| 		if len(s) < n { |  | ||||||
| 			err = ErrSyntax |  | ||||||
| 			return |  | ||||||
| 		} |  | ||||||
| 		for j := 0; j < n; j++ { |  | ||||||
| 			x, ok := unhex(s[j]) |  | ||||||
| 			if !ok { |  | ||||||
| 				err = ErrSyntax |  | ||||||
| 				return |  | ||||||
| 			} |  | ||||||
| 			v = v<<4 | x |  | ||||||
| 		} |  | ||||||
| 		s = s[n:] |  | ||||||
| 		if c == 'x' { |  | ||||||
| 			// single-byte string, possibly not UTF-8 |  | ||||||
| 			value = v |  | ||||||
| 			break |  | ||||||
| 		} |  | ||||||
| 		if v > utf8.MaxRune { |  | ||||||
| 			err = ErrSyntax |  | ||||||
| 			return |  | ||||||
| 		} |  | ||||||
| 		value = v |  | ||||||
| 		multibyte = true |  | ||||||
| 	case '0', '1', '2', '3', '4', '5', '6', '7': |  | ||||||
| 		v := rune(c) - '0' |  | ||||||
| 		if len(s) < 2 { |  | ||||||
| 			err = ErrSyntax |  | ||||||
| 			return |  | ||||||
| 		} |  | ||||||
| 		for j := 0; j < 2; j++ { // one digit already; two more |  | ||||||
| 			x := rune(s[j]) - '0' |  | ||||||
| 			if x < 0 || x > 7 { |  | ||||||
| 				err = ErrSyntax |  | ||||||
| 				return |  | ||||||
| 			} |  | ||||||
| 			v = (v << 3) | x |  | ||||||
| 		} |  | ||||||
| 		s = s[2:] |  | ||||||
| 		if v > 255 { |  | ||||||
| 			err = ErrSyntax |  | ||||||
| 			return |  | ||||||
| 		} |  | ||||||
| 		value = v |  | ||||||
| 	case '\\': |  | ||||||
| 		value = '\\' |  | ||||||
| 	case '\'', '"': |  | ||||||
| 		if c != quote { |  | ||||||
| 			err = ErrSyntax |  | ||||||
| 			return |  | ||||||
| 		} |  | ||||||
| 		value = rune(c) |  | ||||||
| 	default: |  | ||||||
| 		err = ErrSyntax |  | ||||||
| 		return |  | ||||||
| 	} |  | ||||||
| 	tail = s |  | ||||||
| 	return |  | ||||||
| } |  | ||||||
							
								
								
									
46 vendor/github.com/hashicorp/hcl/hcl/token/position.go generated vendored
| @@ -1,46 +0,0 @@ |
| package token |  | ||||||
|  |  | ||||||
| import "fmt" |  | ||||||
|  |  | ||||||
| // Pos describes an arbitrary source position |  | ||||||
| // including the file, line, and column location. |  | ||||||
| // A Position is valid if the line number is > 0. |  | ||||||
| type Pos struct { |  | ||||||
| 	Filename string // filename, if any |  | ||||||
| 	Offset   int    // offset, starting at 0 |  | ||||||
| 	Line     int    // line number, starting at 1 |  | ||||||
| 	Column   int    // column number, starting at 1 (character count) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // IsValid returns true if the position is valid. |  | ||||||
| func (p *Pos) IsValid() bool { return p.Line > 0 } |  | ||||||
|  |  | ||||||
| // String returns a string in one of several forms: |  | ||||||
| // |  | ||||||
| //	file:line:column    valid position with file name |  | ||||||
| //	line:column         valid position without file name |  | ||||||
| //	file                invalid position with file name |  | ||||||
| //	-                   invalid position without file name |  | ||||||
| func (p Pos) String() string { |  | ||||||
| 	s := p.Filename |  | ||||||
| 	if p.IsValid() { |  | ||||||
| 		if s != "" { |  | ||||||
| 			s += ":" |  | ||||||
| 		} |  | ||||||
| 		s += fmt.Sprintf("%d:%d", p.Line, p.Column) |  | ||||||
| 	} |  | ||||||
| 	if s == "" { |  | ||||||
| 		s = "-" |  | ||||||
| 	} |  | ||||||
| 	return s |  | ||||||
| } |  | ||||||
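
A short usage sketch of the four output forms listed in the comment above; the file name and positions are arbitrary example values:

	package main

	import (
		"fmt"

		"github.com/hashicorp/hcl/hcl/token"
	)

	func main() {
		with := token.Pos{Filename: "main.hcl", Offset: 42, Line: 3, Column: 7}
		without := token.Pos{Line: 3, Column: 7}
		invalid := token.Pos{Filename: "main.hcl"} // Line == 0, so not valid

		fmt.Println(with)        // main.hcl:3:7
		fmt.Println(without)     // 3:7
		fmt.Println(invalid)     // main.hcl
		fmt.Println(token.Pos{}) // -
	}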
|  |  | ||||||
| // Before reports whether the position p is before u. |  | ||||||
| func (p Pos) Before(u Pos) bool { |  | ||||||
| 	return u.Offset > p.Offset || u.Line > p.Line |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // After reports whether the position p is after u. |  | ||||||
| func (p Pos) After(u Pos) bool { |  | ||||||
| 	return u.Offset < p.Offset || u.Line < p.Line |  | ||||||
| } |  | ||||||
							
								
								
									
219 vendor/github.com/hashicorp/hcl/hcl/token/token.go generated vendored
| @@ -1,219 +0,0 @@ |
| // Package token defines constants representing the lexical tokens for HCL |  | ||||||
| // (HashiCorp Configuration Language) |  | ||||||
| package token |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"fmt" |  | ||||||
| 	"strconv" |  | ||||||
| 	"strings" |  | ||||||
|  |  | ||||||
| 	hclstrconv "github.com/hashicorp/hcl/hcl/strconv" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| // Token defines a single HCL token which can be obtained via the Scanner |  | ||||||
| type Token struct { |  | ||||||
| 	Type Type |  | ||||||
| 	Pos  Pos |  | ||||||
| 	Text string |  | ||||||
| 	JSON bool |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Type is the set of lexical tokens of the HCL (HashiCorp Configuration Language) |  | ||||||
| type Type int |  | ||||||
|  |  | ||||||
| const ( |  | ||||||
| 	// Special tokens |  | ||||||
| 	ILLEGAL Type = iota |  | ||||||
| 	EOF |  | ||||||
| 	COMMENT |  | ||||||
|  |  | ||||||
| 	identifier_beg |  | ||||||
| 	IDENT // literals |  | ||||||
| 	literal_beg |  | ||||||
| 	NUMBER  // 12345 |  | ||||||
| 	FLOAT   // 123.45 |  | ||||||
| 	BOOL    // true,false |  | ||||||
| 	STRING  // "abc" |  | ||||||
| 	HEREDOC // <<FOO\nbar\nFOO |  | ||||||
| 	literal_end |  | ||||||
| 	identifier_end |  | ||||||
|  |  | ||||||
| 	operator_beg |  | ||||||
| 	LBRACK // [ |  | ||||||
| 	LBRACE // { |  | ||||||
| 	COMMA  // , |  | ||||||
| 	PERIOD // . |  | ||||||
|  |  | ||||||
| 	RBRACK // ] |  | ||||||
| 	RBRACE // } |  | ||||||
|  |  | ||||||
| 	ASSIGN // = |  | ||||||
| 	ADD    // + |  | ||||||
| 	SUB    // - |  | ||||||
| 	operator_end |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| var tokens = [...]string{ |  | ||||||
| 	ILLEGAL: "ILLEGAL", |  | ||||||
|  |  | ||||||
| 	EOF:     "EOF", |  | ||||||
| 	COMMENT: "COMMENT", |  | ||||||
|  |  | ||||||
| 	IDENT:  "IDENT", |  | ||||||
| 	NUMBER: "NUMBER", |  | ||||||
| 	FLOAT:  "FLOAT", |  | ||||||
| 	BOOL:   "BOOL", |  | ||||||
| 	STRING: "STRING", |  | ||||||
|  |  | ||||||
| 	LBRACK:  "LBRACK", |  | ||||||
| 	LBRACE:  "LBRACE", |  | ||||||
| 	COMMA:   "COMMA", |  | ||||||
| 	PERIOD:  "PERIOD", |  | ||||||
| 	HEREDOC: "HEREDOC", |  | ||||||
|  |  | ||||||
| 	RBRACK: "RBRACK", |  | ||||||
| 	RBRACE: "RBRACE", |  | ||||||
|  |  | ||||||
| 	ASSIGN: "ASSIGN", |  | ||||||
| 	ADD:    "ADD", |  | ||||||
| 	SUB:    "SUB", |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // String returns the string corresponding to the token tok. |  | ||||||
| func (t Type) String() string { |  | ||||||
| 	s := "" |  | ||||||
| 	if 0 <= t && t < Type(len(tokens)) { |  | ||||||
| 		s = tokens[t] |  | ||||||
| 	} |  | ||||||
| 	if s == "" { |  | ||||||
| 		s = "token(" + strconv.Itoa(int(t)) + ")" |  | ||||||
| 	} |  | ||||||
| 	return s |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // IsIdentifier returns true for tokens corresponding to identifiers and basic |  | ||||||
| // type literals; it returns false otherwise. |  | ||||||
| func (t Type) IsIdentifier() bool { return identifier_beg < t && t < identifier_end } |  | ||||||
|  |  | ||||||
| // IsLiteral returns true for tokens corresponding to basic type literals; it |  | ||||||
| // returns false otherwise. |  | ||||||
| func (t Type) IsLiteral() bool { return literal_beg < t && t < literal_end } |  | ||||||
|  |  | ||||||
| // IsOperator returns true for tokens corresponding to operators and |  | ||||||
| // delimiters; it returns false otherwise. |  | ||||||
| func (t Type) IsOperator() bool { return operator_beg < t && t < operator_end } |  | ||||||
|  |  | ||||||
| // String returns a human-readable representation of the token, including |  | ||||||
| // its position, type, and text. |  | ||||||
| func (t Token) String() string { |  | ||||||
| 	return fmt.Sprintf("%s %s %s", t.Pos.String(), t.Type.String(), t.Text) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Value returns the properly typed value for this token. The type of |  | ||||||
| // the returned interface{} is guaranteed based on the Type field. |  | ||||||
| // |  | ||||||
| // This can only be called for literal types. If it is called for any other |  | ||||||
| // type, this will panic. |  | ||||||
| func (t Token) Value() interface{} { |  | ||||||
| 	switch t.Type { |  | ||||||
| 	case BOOL: |  | ||||||
| 		if t.Text == "true" { |  | ||||||
| 			return true |  | ||||||
| 		} else if t.Text == "false" { |  | ||||||
| 			return false |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		panic("unknown bool value: " + t.Text) |  | ||||||
| 	case FLOAT: |  | ||||||
| 		v, err := strconv.ParseFloat(t.Text, 64) |  | ||||||
| 		if err != nil { |  | ||||||
| 			panic(err) |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		return float64(v) |  | ||||||
| 	case NUMBER: |  | ||||||
| 		v, err := strconv.ParseInt(t.Text, 0, 64) |  | ||||||
| 		if err != nil { |  | ||||||
| 			panic(err) |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		return int64(v) |  | ||||||
| 	case IDENT: |  | ||||||
| 		return t.Text |  | ||||||
| 	case HEREDOC: |  | ||||||
| 		return unindentHeredoc(t.Text) |  | ||||||
| 	case STRING: |  | ||||||
| 		// Determine the Unquote method to use. If it came from JSON, |  | ||||||
| 		// then we need to use the built-in unquote since we have to |  | ||||||
| 		// escape interpolations there. |  | ||||||
| 		f := hclstrconv.Unquote |  | ||||||
| 		if t.JSON { |  | ||||||
| 			f = strconv.Unquote |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// This case occurs if json null is used |  | ||||||
| 		if t.Text == "" { |  | ||||||
| 			return "" |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		v, err := f(t.Text) |  | ||||||
| 		if err != nil { |  | ||||||
| 			panic(fmt.Sprintf("unquote %s err: %s", t.Text, err)) |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		return v |  | ||||||
| 	default: |  | ||||||
| 		panic(fmt.Sprintf("unimplemented Value for type: %s", t.Type)) |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
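
A brief, illustrative sketch of Value for a few literal token types (the Text values are arbitrary examples). NUMBER goes through strconv.ParseInt with base 0, so prefixed literals work, and STRING goes through the HCL-aware Unquote, so interpolations survive unescaped:

	package main

	import (
		"fmt"

		"github.com/hashicorp/hcl/hcl/token"
	)

	func main() {
		n := token.Token{Type: token.NUMBER, Text: "0x1F"}
		fmt.Println(n.Value()) // 31 (as an int64)

		s := token.Token{Type: token.STRING, Text: `"port ${var.port}"`}
		fmt.Println(s.Value()) // port ${var.port}

		b := token.Token{Type: token.BOOL, Text: "true"}
		fmt.Println(b.Value()) // true
	}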
|  |  | ||||||
| // unindentHeredoc returns the content of a heredoc. If the heredoc was opened |  | ||||||
| // with <<, the content is returned as is; if it was opened with <<-, the |  | ||||||
| // hanging indent is removed, provided every line is at least as indented as |  | ||||||
| // the terminating marker. |  | ||||||
| func unindentHeredoc(heredoc string) string { |  | ||||||
| 	// We need to find the end of the marker |  | ||||||
| 	idx := strings.IndexByte(heredoc, '\n') |  | ||||||
| 	if idx == -1 { |  | ||||||
| 		panic("heredoc doesn't contain newline") |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	unindent := heredoc[2] == '-' |  | ||||||
|  |  | ||||||
| 	// We can optimize if the heredoc isn't marked for indentation |  | ||||||
| 	if !unindent { |  | ||||||
| 		return string(heredoc[idx+1 : len(heredoc)-idx+1]) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// We need to unindent each line based on the indentation level of the marker |  | ||||||
| 	lines := strings.Split(string(heredoc[idx+1:len(heredoc)-idx+2]), "\n") |  | ||||||
| 	whitespacePrefix := lines[len(lines)-1] |  | ||||||
|  |  | ||||||
| 	isIndented := true |  | ||||||
| 	for _, v := range lines { |  | ||||||
| 		if strings.HasPrefix(v, whitespacePrefix) { |  | ||||||
| 			continue |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		isIndented = false |  | ||||||
| 		break |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// If all lines are not at least as indented as the terminating mark, return the |  | ||||||
| 	// heredoc as is, but trim the leading space from the marker on the final line. |  | ||||||
| 	if !isIndented { |  | ||||||
| 		return strings.TrimRight(string(heredoc[idx+1:len(heredoc)-idx+1]), " \t") |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	unindentedLines := make([]string, len(lines)) |  | ||||||
| 	for k, v := range lines { |  | ||||||
| 		if k == len(lines)-1 { |  | ||||||
| 			unindentedLines[k] = "" |  | ||||||
| 			break |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		unindentedLines[k] = strings.TrimPrefix(v, whitespacePrefix) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return strings.Join(unindentedLines, "\n") |  | ||||||
| } |  | ||||||
							
								
								
									
117 vendor/github.com/hashicorp/hcl/json/parser/flatten.go generated vendored
| @@ -1,117 +0,0 @@ |
| package parser |  | ||||||
|  |  | ||||||
| import "github.com/hashicorp/hcl/hcl/ast" |  | ||||||
|  |  | ||||||
| // flattenObjects takes an AST node, walks it, and flattens nested objects |  | ||||||
| // and lists into HCL-style multi-key object items |  | ||||||
| func flattenObjects(node ast.Node) { |  | ||||||
| 	ast.Walk(node, func(n ast.Node) (ast.Node, bool) { |  | ||||||
| 		// We only care about lists, because this is what we modify |  | ||||||
| 		list, ok := n.(*ast.ObjectList) |  | ||||||
| 		if !ok { |  | ||||||
| 			return n, true |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// Rebuild the item list |  | ||||||
| 		items := make([]*ast.ObjectItem, 0, len(list.Items)) |  | ||||||
| 		frontier := make([]*ast.ObjectItem, len(list.Items)) |  | ||||||
| 		copy(frontier, list.Items) |  | ||||||
| 		for len(frontier) > 0 { |  | ||||||
| 			// Pop the current item |  | ||||||
| 			n := len(frontier) |  | ||||||
| 			item := frontier[n-1] |  | ||||||
| 			frontier = frontier[:n-1] |  | ||||||
|  |  | ||||||
| 			switch v := item.Val.(type) { |  | ||||||
| 			case *ast.ObjectType: |  | ||||||
| 				items, frontier = flattenObjectType(v, item, items, frontier) |  | ||||||
| 			case *ast.ListType: |  | ||||||
| 				items, frontier = flattenListType(v, item, items, frontier) |  | ||||||
| 			default: |  | ||||||
| 				items = append(items, item) |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// Reverse the list since the frontier model runs things backwards |  | ||||||
| 		for i := len(items)/2 - 1; i >= 0; i-- { |  | ||||||
| 			opp := len(items) - 1 - i |  | ||||||
| 			items[i], items[opp] = items[opp], items[i] |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// Done! Set the original items |  | ||||||
| 		list.Items = items |  | ||||||
| 		return n, true |  | ||||||
| 	}) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func flattenListType( |  | ||||||
| 	ot *ast.ListType, |  | ||||||
| 	item *ast.ObjectItem, |  | ||||||
| 	items []*ast.ObjectItem, |  | ||||||
| 	frontier []*ast.ObjectItem) ([]*ast.ObjectItem, []*ast.ObjectItem) { |  | ||||||
| 	// If the list is empty, keep the original list |  | ||||||
| 	if len(ot.List) == 0 { |  | ||||||
| 		items = append(items, item) |  | ||||||
| 		return items, frontier |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// All the elements of this object must also be objects! |  | ||||||
| 	for _, subitem := range ot.List { |  | ||||||
| 		if _, ok := subitem.(*ast.ObjectType); !ok { |  | ||||||
| 			items = append(items, item) |  | ||||||
| 			return items, frontier |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// Great! We have a match go through all the items and flatten |  | ||||||
| 	for _, elem := range ot.List { |  | ||||||
| 		// Add it to the frontier so that we can recurse |  | ||||||
| 		frontier = append(frontier, &ast.ObjectItem{ |  | ||||||
| 			Keys:        item.Keys, |  | ||||||
| 			Assign:      item.Assign, |  | ||||||
| 			Val:         elem, |  | ||||||
| 			LeadComment: item.LeadComment, |  | ||||||
| 			LineComment: item.LineComment, |  | ||||||
| 		}) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return items, frontier |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func flattenObjectType( |  | ||||||
| 	ot *ast.ObjectType, |  | ||||||
| 	item *ast.ObjectItem, |  | ||||||
| 	items []*ast.ObjectItem, |  | ||||||
| 	frontier []*ast.ObjectItem) ([]*ast.ObjectItem, []*ast.ObjectItem) { |  | ||||||
| 	// If the list has no items we do not have to flatten anything |  | ||||||
| 	if ot.List.Items == nil { |  | ||||||
| 		items = append(items, item) |  | ||||||
| 		return items, frontier |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// All the elements of this object must also be objects! |  | ||||||
| 	for _, subitem := range ot.List.Items { |  | ||||||
| 		if _, ok := subitem.Val.(*ast.ObjectType); !ok { |  | ||||||
| 			items = append(items, item) |  | ||||||
| 			return items, frontier |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// Great! We have a match go through all the items and flatten |  | ||||||
| 	for _, subitem := range ot.List.Items { |  | ||||||
| 		// Copy the new key |  | ||||||
| 		keys := make([]*ast.ObjectKey, len(item.Keys)+len(subitem.Keys)) |  | ||||||
| 		copy(keys, item.Keys) |  | ||||||
| 		copy(keys[len(item.Keys):], subitem.Keys) |  | ||||||
|  |  | ||||||
| 		// Add it to the frontier so that we can recurse |  | ||||||
| 		frontier = append(frontier, &ast.ObjectItem{ |  | ||||||
| 			Keys:        keys, |  | ||||||
| 			Assign:      item.Assign, |  | ||||||
| 			Val:         subitem.Val, |  | ||||||
| 			LeadComment: item.LeadComment, |  | ||||||
| 			LineComment: item.LineComment, |  | ||||||
| 		}) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return items, frontier |  | ||||||
| } |  | ||||||
							
								
								
									
313 vendor/github.com/hashicorp/hcl/json/parser/parser.go generated vendored
| @@ -1,313 +0,0 @@ |
| package parser |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"errors" |  | ||||||
| 	"fmt" |  | ||||||
|  |  | ||||||
| 	"github.com/hashicorp/hcl/hcl/ast" |  | ||||||
| 	hcltoken "github.com/hashicorp/hcl/hcl/token" |  | ||||||
| 	"github.com/hashicorp/hcl/json/scanner" |  | ||||||
| 	"github.com/hashicorp/hcl/json/token" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| type Parser struct { |  | ||||||
| 	sc *scanner.Scanner |  | ||||||
|  |  | ||||||
| 	// Last read token |  | ||||||
| 	tok       token.Token |  | ||||||
| 	commaPrev token.Token |  | ||||||
|  |  | ||||||
| 	enableTrace bool |  | ||||||
| 	indent      int |  | ||||||
| 	n           int // buffer size (max = 1) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func newParser(src []byte) *Parser { |  | ||||||
| 	return &Parser{ |  | ||||||
| 		sc: scanner.New(src), |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Parse parses the source and returns the abstract syntax tree. |  | ||||||
| func Parse(src []byte) (*ast.File, error) { |  | ||||||
| 	p := newParser(src) |  | ||||||
| 	return p.Parse() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| var errEofToken = errors.New("EOF token found") |  | ||||||
|  |  | ||||||
| // Parse parses the source and returns the abstract syntax tree. |  | ||||||
| func (p *Parser) Parse() (*ast.File, error) { |  | ||||||
| 	f := &ast.File{} |  | ||||||
| 	var err, scerr error |  | ||||||
| 	p.sc.Error = func(pos token.Pos, msg string) { |  | ||||||
| 		scerr = fmt.Errorf("%s: %s", pos, msg) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// The root must be an object in JSON |  | ||||||
| 	object, err := p.object() |  | ||||||
| 	if scerr != nil { |  | ||||||
| 		return nil, scerr |  | ||||||
| 	} |  | ||||||
| 	if err != nil { |  | ||||||
| 		return nil, err |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// We make our final node an object list so it is more HCL compatible |  | ||||||
| 	f.Node = object.List |  | ||||||
|  |  | ||||||
| 	// Flatten it, which finds patterns and turns them into more HCL-like |  | ||||||
| 	// AST trees. |  | ||||||
| 	flattenObjects(f.Node) |  | ||||||
|  |  | ||||||
| 	return f, nil |  | ||||||
| } |  | ||||||
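
A hedged usage sketch of Parse together with the flattening step above: nested JSON objects whose children are all objects collapse into a single item with multiple keys, roughly mirroring an HCL block such as variable "region" { ... }. The key and value names in the JSON are arbitrary example data.

	package main

	import (
		"fmt"

		"github.com/hashicorp/hcl/hcl/ast"
		"github.com/hashicorp/hcl/json/parser"
	)

	func main() {
		src := []byte(`{"variable": {"region": {"default": "us-east-1"}}}`)

		f, err := parser.Parse(src)
		if err != nil {
			panic(err)
		}

		// After flattenObjects the root list should hold a single item whose
		// Keys cover both "variable" and "region" (the key text keeps its
		// JSON quoting).
		list := f.Node.(*ast.ObjectList)
		for _, item := range list.Items {
			for _, k := range item.Keys {
				fmt.Printf("%s ", k.Token.Text)
			}
			fmt.Println()
		}
	}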
|  |  | ||||||
| func (p *Parser) objectList() (*ast.ObjectList, error) { |  | ||||||
| 	defer un(trace(p, "ParseObjectList")) |  | ||||||
| 	node := &ast.ObjectList{} |  | ||||||
|  |  | ||||||
| 	for { |  | ||||||
| 		n, err := p.objectItem() |  | ||||||
| 		if err == errEofToken { |  | ||||||
| 			break // we are finished |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// we don't return a nil node, because the caller might want to use the |  | ||||||
| 		// already collected items. |  | ||||||
| 		if err != nil { |  | ||||||
| 			return node, err |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		node.Add(n) |  | ||||||
|  |  | ||||||
| 		// Check for a followup comma. If it isn't a comma, then we're done |  | ||||||
| 		if tok := p.scan(); tok.Type != token.COMMA { |  | ||||||
| 			break |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return node, nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // objectItem parses a single object item |  | ||||||
| func (p *Parser) objectItem() (*ast.ObjectItem, error) { |  | ||||||
| 	defer un(trace(p, "ParseObjectItem")) |  | ||||||
|  |  | ||||||
| 	keys, err := p.objectKey() |  | ||||||
| 	if err != nil { |  | ||||||
| 		return nil, err |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	o := &ast.ObjectItem{ |  | ||||||
| 		Keys: keys, |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	switch p.tok.Type { |  | ||||||
| 	case token.COLON: |  | ||||||
| 		pos := p.tok.Pos |  | ||||||
| 		o.Assign = hcltoken.Pos{ |  | ||||||
| 			Filename: pos.Filename, |  | ||||||
| 			Offset:   pos.Offset, |  | ||||||
| 			Line:     pos.Line, |  | ||||||
| 			Column:   pos.Column, |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		o.Val, err = p.objectValue() |  | ||||||
| 		if err != nil { |  | ||||||
| 			return nil, err |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return o, nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // objectKey parses an object key and returns a ObjectKey AST |  | ||||||
| func (p *Parser) objectKey() ([]*ast.ObjectKey, error) { |  | ||||||
| 	keyCount := 0 |  | ||||||
| 	keys := make([]*ast.ObjectKey, 0) |  | ||||||
|  |  | ||||||
| 	for { |  | ||||||
| 		tok := p.scan() |  | ||||||
| 		switch tok.Type { |  | ||||||
| 		case token.EOF: |  | ||||||
| 			return nil, errEofToken |  | ||||||
| 		case token.STRING: |  | ||||||
| 			keyCount++ |  | ||||||
| 			keys = append(keys, &ast.ObjectKey{ |  | ||||||
| 				Token: p.tok.HCLToken(), |  | ||||||
| 			}) |  | ||||||
| 		case token.COLON: |  | ||||||
| 			// If we have a zero keycount it means that we never got |  | ||||||
| 			// an object key, i.e. `{ :`. This is a syntax error. |  | ||||||
| 			if keyCount == 0 { |  | ||||||
| 				return nil, fmt.Errorf("expected: STRING got: %s", p.tok.Type) |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			// Done |  | ||||||
| 			return keys, nil |  | ||||||
| 		case token.ILLEGAL: |  | ||||||
| 			return nil, errors.New("illegal") |  | ||||||
| 		default: |  | ||||||
| 			return nil, fmt.Errorf("expected: STRING got: %s", p.tok.Type) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // objectValue parses any type of object value, such as a number, bool, |  | ||||||
| // string, object, or list. |  | ||||||
| func (p *Parser) objectValue() (ast.Node, error) { |  | ||||||
| 	defer un(trace(p, "ParseObjectValue")) |  | ||||||
| 	tok := p.scan() |  | ||||||
|  |  | ||||||
| 	switch tok.Type { |  | ||||||
| 	case token.NUMBER, token.FLOAT, token.BOOL, token.NULL, token.STRING: |  | ||||||
| 		return p.literalType() |  | ||||||
| 	case token.LBRACE: |  | ||||||
| 		return p.objectType() |  | ||||||
| 	case token.LBRACK: |  | ||||||
| 		return p.listType() |  | ||||||
| 	case token.EOF: |  | ||||||
| 		return nil, errEofToken |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return nil, fmt.Errorf("Expected object value, got unknown token: %+v", tok) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // object parses the root JSON value, which must be an object delimited by |  | ||||||
| // braces. |  | ||||||
| func (p *Parser) object() (*ast.ObjectType, error) { |  | ||||||
| 	defer un(trace(p, "ParseType")) |  | ||||||
| 	tok := p.scan() |  | ||||||
|  |  | ||||||
| 	switch tok.Type { |  | ||||||
| 	case token.LBRACE: |  | ||||||
| 		return p.objectType() |  | ||||||
| 	case token.EOF: |  | ||||||
| 		return nil, errEofToken |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return nil, fmt.Errorf("Expected object, got unknown token: %+v", tok) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // objectType parses an object type and returns a ObjectType AST |  | ||||||
| func (p *Parser) objectType() (*ast.ObjectType, error) { |  | ||||||
| 	defer un(trace(p, "ParseObjectType")) |  | ||||||
|  |  | ||||||
| 	// we assume that the currently scanned token is a LBRACE |  | ||||||
| 	o := &ast.ObjectType{} |  | ||||||
|  |  | ||||||
| 	l, err := p.objectList() |  | ||||||
|  |  | ||||||
| 	// if we hit an RBRACE we are good to go (it means we parsed all items); if |  | ||||||
| 	// it's not an RBRACE, it's a syntax error and we just return it. |  | ||||||
| 	if err != nil && p.tok.Type != token.RBRACE { |  | ||||||
| 		return nil, err |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	o.List = l |  | ||||||
| 	return o, nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // listType parses a list type and returns a ListType AST |  | ||||||
| func (p *Parser) listType() (*ast.ListType, error) { |  | ||||||
| 	defer un(trace(p, "ParseListType")) |  | ||||||
|  |  | ||||||
| 	// we assume that the currently scanned token is a LBRACK |  | ||||||
| 	l := &ast.ListType{} |  | ||||||
|  |  | ||||||
| 	for { |  | ||||||
| 		tok := p.scan() |  | ||||||
| 		switch tok.Type { |  | ||||||
| 		case token.NUMBER, token.FLOAT, token.STRING: |  | ||||||
| 			node, err := p.literalType() |  | ||||||
| 			if err != nil { |  | ||||||
| 				return nil, err |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			l.Add(node) |  | ||||||
| 		case token.COMMA: |  | ||||||
| 			continue |  | ||||||
| 		case token.LBRACE: |  | ||||||
| 			node, err := p.objectType() |  | ||||||
| 			if err != nil { |  | ||||||
| 				return nil, err |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			l.Add(node) |  | ||||||
| 		case token.BOOL: |  | ||||||
| 			// TODO(arslan) should we support? not supported by HCL yet |  | ||||||
| 		case token.LBRACK: |  | ||||||
| 			// TODO(arslan) should we support nested lists? Even though it's |  | ||||||
| 			// written in README of HCL, it's not a part of the grammar |  | ||||||
| 			// (not defined in parse.y) |  | ||||||
| 		case token.RBRACK: |  | ||||||
| 			// finished |  | ||||||
| 			return l, nil |  | ||||||
| 		default: |  | ||||||
| 			return nil, fmt.Errorf("unexpected token while parsing list: %s", tok.Type) |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // literalType parses a literal type and returns a LiteralType AST |  | ||||||
| func (p *Parser) literalType() (*ast.LiteralType, error) { |  | ||||||
| 	defer un(trace(p, "ParseLiteral")) |  | ||||||
|  |  | ||||||
| 	return &ast.LiteralType{ |  | ||||||
| 		Token: p.tok.HCLToken(), |  | ||||||
| 	}, nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // scan returns the next token from the underlying scanner. If a token has |  | ||||||
| // been unscanned then read that instead. |  | ||||||
| func (p *Parser) scan() token.Token { |  | ||||||
| 	// If we have a token on the buffer, then return it. |  | ||||||
| 	if p.n != 0 { |  | ||||||
| 		p.n = 0 |  | ||||||
| 		return p.tok |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	p.tok = p.sc.Scan() |  | ||||||
| 	return p.tok |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // unscan pushes the previously read token back onto the buffer. |  | ||||||
| func (p *Parser) unscan() { |  | ||||||
| 	p.n = 1 |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ---------------------------------------------------------------------------- |  | ||||||
| // Parsing support |  | ||||||
|  |  | ||||||
| func (p *Parser) printTrace(a ...interface{}) { |  | ||||||
| 	if !p.enableTrace { |  | ||||||
| 		return |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	const dots = ". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . " |  | ||||||
| 	const n = len(dots) |  | ||||||
| 	fmt.Printf("%5d:%3d: ", p.tok.Pos.Line, p.tok.Pos.Column) |  | ||||||
|  |  | ||||||
| 	i := 2 * p.indent |  | ||||||
| 	for i > n { |  | ||||||
| 		fmt.Print(dots) |  | ||||||
| 		i -= n |  | ||||||
| 	} |  | ||||||
| 	// i <= n |  | ||||||
| 	fmt.Print(dots[0:i]) |  | ||||||
| 	fmt.Println(a...) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func trace(p *Parser, msg string) *Parser { |  | ||||||
| 	p.printTrace(msg, "(") |  | ||||||
| 	p.indent++ |  | ||||||
| 	return p |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Usage pattern: defer un(trace(p, "...")) |  | ||||||
| func un(p *Parser) { |  | ||||||
| 	p.indent-- |  | ||||||
| 	p.printTrace(")") |  | ||||||
| } |  | ||||||
							
								
								
									
451 vendor/github.com/hashicorp/hcl/json/scanner/scanner.go generated vendored
| @@ -1,451 +0,0 @@ |
| package scanner |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"bytes" |  | ||||||
| 	"fmt" |  | ||||||
| 	"os" |  | ||||||
| 	"unicode" |  | ||||||
| 	"unicode/utf8" |  | ||||||
|  |  | ||||||
| 	"github.com/hashicorp/hcl/json/token" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| // eof represents a marker rune for the end of the reader. |  | ||||||
| const eof = rune(0) |  | ||||||
|  |  | ||||||
| // Scanner defines a lexical scanner |  | ||||||
| type Scanner struct { |  | ||||||
| 	buf *bytes.Buffer // Source buffer for advancing and scanning |  | ||||||
| 	src []byte        // Source buffer for immutable access |  | ||||||
|  |  | ||||||
| 	// Source Position |  | ||||||
| 	srcPos  token.Pos // current position |  | ||||||
| 	prevPos token.Pos // previous position, used for peek() method |  | ||||||
|  |  | ||||||
| 	lastCharLen int // length of last character in bytes |  | ||||||
| 	lastLineLen int // length of last line in characters (for correct column reporting) |  | ||||||
|  |  | ||||||
| 	tokStart int // token text start position |  | ||||||
| 	tokEnd   int // token text end  position |  | ||||||
|  |  | ||||||
| 	// Error is called for each error encountered. If no Error |  | ||||||
| 	// function is set, the error is reported to os.Stderr. |  | ||||||
| 	Error func(pos token.Pos, msg string) |  | ||||||
|  |  | ||||||
| 	// ErrorCount is incremented by one for each error encountered. |  | ||||||
| 	ErrorCount int |  | ||||||
|  |  | ||||||
| 	// tokPos is the start position of the most recently scanned token; set by |  | ||||||
| 	// Scan. The Filename field is always left untouched by the Scanner. If |  | ||||||
| 	// an error is reported (via Error) and Position is invalid, the scanner is |  | ||||||
| 	// not inside a token. |  | ||||||
| 	tokPos token.Pos |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // New creates and initializes a new instance of Scanner using src as |  | ||||||
| // its source content. |  | ||||||
| func New(src []byte) *Scanner { |  | ||||||
| 	// even though we accept src as a byte slice, we read from an io.Reader- |  | ||||||
| 	// compatible type (*bytes.Buffer), so in the future we could easily switch |  | ||||||
| 	// to a streaming read. |  | ||||||
| 	b := bytes.NewBuffer(src) |  | ||||||
| 	s := &Scanner{ |  | ||||||
| 		buf: b, |  | ||||||
| 		src: src, |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// srcPosition always starts with 1 |  | ||||||
| 	s.srcPos.Line = 1 |  | ||||||
| 	return s |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // next reads the next rune from the buffered reader. It returns rune(0) if |  | ||||||
| // an error occurs (or io.EOF is returned). |  | ||||||
| func (s *Scanner) next() rune { |  | ||||||
| 	ch, size, err := s.buf.ReadRune() |  | ||||||
| 	if err != nil { |  | ||||||
| 		// advance for error reporting |  | ||||||
| 		s.srcPos.Column++ |  | ||||||
| 		s.srcPos.Offset += size |  | ||||||
| 		s.lastCharLen = size |  | ||||||
| 		return eof |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	if ch == utf8.RuneError && size == 1 { |  | ||||||
| 		s.srcPos.Column++ |  | ||||||
| 		s.srcPos.Offset += size |  | ||||||
| 		s.lastCharLen = size |  | ||||||
| 		s.err("illegal UTF-8 encoding") |  | ||||||
| 		return ch |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// remember last position |  | ||||||
| 	s.prevPos = s.srcPos |  | ||||||
|  |  | ||||||
| 	s.srcPos.Column++ |  | ||||||
| 	s.lastCharLen = size |  | ||||||
| 	s.srcPos.Offset += size |  | ||||||
|  |  | ||||||
| 	if ch == '\n' { |  | ||||||
| 		s.srcPos.Line++ |  | ||||||
| 		s.lastLineLen = s.srcPos.Column |  | ||||||
| 		s.srcPos.Column = 0 |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// debug |  | ||||||
| 	// fmt.Printf("ch: %q, offset:column: %d:%d\n", ch, s.srcPos.Offset, s.srcPos.Column) |  | ||||||
| 	return ch |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // unread unreads the previous read Rune and updates the source position |  | ||||||
| func (s *Scanner) unread() { |  | ||||||
| 	if err := s.buf.UnreadRune(); err != nil { |  | ||||||
| 		panic(err) // this is user fault, we should catch it |  | ||||||
| 	} |  | ||||||
| 	s.srcPos = s.prevPos // put back last position |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // peek returns the next rune without advancing the reader. |  | ||||||
| func (s *Scanner) peek() rune { |  | ||||||
| 	peek, _, err := s.buf.ReadRune() |  | ||||||
| 	if err != nil { |  | ||||||
| 		return eof |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	s.buf.UnreadRune() |  | ||||||
| 	return peek |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Scan scans the next token and returns the token. |  | ||||||
| func (s *Scanner) Scan() token.Token { |  | ||||||
| 	ch := s.next() |  | ||||||
|  |  | ||||||
| 	// skip white space |  | ||||||
| 	for isWhitespace(ch) { |  | ||||||
| 		ch = s.next() |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	var tok token.Type |  | ||||||
|  |  | ||||||
| 	// token text markings |  | ||||||
| 	s.tokStart = s.srcPos.Offset - s.lastCharLen |  | ||||||
|  |  | ||||||
| 	// token position: the initial next() has already moved the offset by the |  | ||||||
| 	// size of one rune, but we are interested in the starting point |  | ||||||
| 	s.tokPos.Offset = s.srcPos.Offset - s.lastCharLen |  | ||||||
| 	if s.srcPos.Column > 0 { |  | ||||||
| 		// common case: last character was not a '\n' |  | ||||||
| 		s.tokPos.Line = s.srcPos.Line |  | ||||||
| 		s.tokPos.Column = s.srcPos.Column |  | ||||||
| 	} else { |  | ||||||
| 		// last character was a '\n' |  | ||||||
| 		// (we cannot be at the beginning of the source |  | ||||||
| 		// since we have called next() at least once) |  | ||||||
| 		s.tokPos.Line = s.srcPos.Line - 1 |  | ||||||
| 		s.tokPos.Column = s.lastLineLen |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	switch { |  | ||||||
| 	case isLetter(ch): |  | ||||||
| 		lit := s.scanIdentifier() |  | ||||||
| 		if lit == "true" || lit == "false" { |  | ||||||
| 			tok = token.BOOL |  | ||||||
| 		} else if lit == "null" { |  | ||||||
| 			tok = token.NULL |  | ||||||
| 		} else { |  | ||||||
| 			s.err("illegal char") |  | ||||||
| 		} |  | ||||||
| 	case isDecimal(ch): |  | ||||||
| 		tok = s.scanNumber(ch) |  | ||||||
| 	default: |  | ||||||
| 		switch ch { |  | ||||||
| 		case eof: |  | ||||||
| 			tok = token.EOF |  | ||||||
| 		case '"': |  | ||||||
| 			tok = token.STRING |  | ||||||
| 			s.scanString() |  | ||||||
| 		case '.': |  | ||||||
| 			tok = token.PERIOD |  | ||||||
| 			ch = s.peek() |  | ||||||
| 			if isDecimal(ch) { |  | ||||||
| 				tok = token.FLOAT |  | ||||||
| 				ch = s.scanMantissa(ch) |  | ||||||
| 				ch = s.scanExponent(ch) |  | ||||||
| 			} |  | ||||||
| 		case '[': |  | ||||||
| 			tok = token.LBRACK |  | ||||||
| 		case ']': |  | ||||||
| 			tok = token.RBRACK |  | ||||||
| 		case '{': |  | ||||||
| 			tok = token.LBRACE |  | ||||||
| 		case '}': |  | ||||||
| 			tok = token.RBRACE |  | ||||||
| 		case ',': |  | ||||||
| 			tok = token.COMMA |  | ||||||
| 		case ':': |  | ||||||
| 			tok = token.COLON |  | ||||||
| 		case '-': |  | ||||||
| 			if isDecimal(s.peek()) { |  | ||||||
| 				ch := s.next() |  | ||||||
| 				tok = s.scanNumber(ch) |  | ||||||
| 			} else { |  | ||||||
| 				s.err("illegal char") |  | ||||||
| 			} |  | ||||||
| 		default: |  | ||||||
| 			s.err("illegal char: " + string(ch)) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// finish token ending |  | ||||||
| 	s.tokEnd = s.srcPos.Offset |  | ||||||
|  |  | ||||||
| 	// create token literal |  | ||||||
| 	var tokenText string |  | ||||||
| 	if s.tokStart >= 0 { |  | ||||||
| 		tokenText = string(s.src[s.tokStart:s.tokEnd]) |  | ||||||
| 	} |  | ||||||
| 	s.tokStart = s.tokEnd // ensure idempotency of tokenText() call |  | ||||||
|  |  | ||||||
| 	return token.Token{ |  | ||||||
| 		Type: tok, |  | ||||||
| 		Pos:  s.tokPos, |  | ||||||
| 		Text: tokenText, |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
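
A minimal sketch of driving the scanner from outside the package via New and Scan; the JSON input is arbitrary example data. Each token carries its position, type, and the raw source text, so string tokens keep their surrounding quotes:

	package main

	import (
		"fmt"

		"github.com/hashicorp/hcl/json/scanner"
		"github.com/hashicorp/hcl/json/token"
	)

	func main() {
		src := []byte(`{"port": 8080, "tags": ["web", "prod"]}`)

		s := scanner.New(src)
		for {
			tok := s.Scan()
			if tok.Type == token.EOF {
				break
			}
			// prints each token's type and raw text, e.g. "8080" or "\"port\""
			fmt.Printf("%-7v %q\n", tok.Type, tok.Text)
		}
	}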
|  |  | ||||||
| // scanNumber scans a HCL number definition starting with the given rune |  | ||||||
| func (s *Scanner) scanNumber(ch rune) token.Type { |  | ||||||
| 	zero := ch == '0' |  | ||||||
| 	pos := s.srcPos |  | ||||||
|  |  | ||||||
| 	s.scanMantissa(ch) |  | ||||||
| 	ch = s.next() // seek forward |  | ||||||
| 	if ch == 'e' || ch == 'E' { |  | ||||||
| 		ch = s.scanExponent(ch) |  | ||||||
| 		return token.FLOAT |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	if ch == '.' { |  | ||||||
| 		ch = s.scanFraction(ch) |  | ||||||
| 		if ch == 'e' || ch == 'E' { |  | ||||||
| 			ch = s.next() |  | ||||||
| 			ch = s.scanExponent(ch) |  | ||||||
| 		} |  | ||||||
| 		return token.FLOAT |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	if ch != eof { |  | ||||||
| 		s.unread() |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// A number with more than one digit must not start with a zero |  | ||||||
| 	if zero && pos != s.srcPos { |  | ||||||
| 		s.err("numbers cannot start with 0") |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return token.NUMBER |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // scanMantissa scans the mantissa beginning from the rune. It returns the next |  | ||||||
| // non-decimal rune. It's used to determine whether it's a fraction or exponent. |  | ||||||
| func (s *Scanner) scanMantissa(ch rune) rune { |  | ||||||
| 	scanned := false |  | ||||||
| 	for isDecimal(ch) { |  | ||||||
| 		ch = s.next() |  | ||||||
| 		scanned = true |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	if scanned && ch != eof { |  | ||||||
| 		s.unread() |  | ||||||
| 	} |  | ||||||
| 	return ch |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // scanFraction scans the fraction after the '.' rune |  | ||||||
| func (s *Scanner) scanFraction(ch rune) rune { |  | ||||||
| 	if ch == '.' { |  | ||||||
| 		ch = s.peek() // we peek just to see if we can move forward |  | ||||||
| 		ch = s.scanMantissa(ch) |  | ||||||
| 	} |  | ||||||
| 	return ch |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // scanExponent scans the remaining parts of an exponent after the 'e' or 'E' |  | ||||||
| // rune. |  | ||||||
| func (s *Scanner) scanExponent(ch rune) rune { |  | ||||||
| 	if ch == 'e' || ch == 'E' { |  | ||||||
| 		ch = s.next() |  | ||||||
| 		if ch == '-' || ch == '+' { |  | ||||||
| 			ch = s.next() |  | ||||||
| 		} |  | ||||||
| 		ch = s.scanMantissa(ch) |  | ||||||
| 	} |  | ||||||
| 	return ch |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // scanString scans a quoted string |  | ||||||
| func (s *Scanner) scanString() { |  | ||||||
| 	braces := 0 |  | ||||||
| 	for { |  | ||||||
| 		// '"' opening already consumed |  | ||||||
| 		// read character after quote |  | ||||||
| 		ch := s.next() |  | ||||||
|  |  | ||||||
| 		if ch == '\n' || ch < 0 || ch == eof { |  | ||||||
| 			s.err("literal not terminated") |  | ||||||
| 			return |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if ch == '"' { |  | ||||||
| 			break |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		// If we're going into a ${} then we can ignore quotes for a while |  | ||||||
| 		if braces == 0 && ch == '$' && s.peek() == '{' { |  | ||||||
| 			braces++ |  | ||||||
| 			s.next() |  | ||||||
| 		} else if braces > 0 && ch == '{' { |  | ||||||
| 			braces++ |  | ||||||
| 		} |  | ||||||
| 		if braces > 0 && ch == '}' { |  | ||||||
| 			braces-- |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if ch == '\\' { |  | ||||||
| 			s.scanEscape() |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // scanEscape scans an escape sequence |  | ||||||
| func (s *Scanner) scanEscape() rune { |  | ||||||
| 	// http://en.cppreference.com/w/cpp/language/escape |  | ||||||
| 	ch := s.next() // read character after '\\' |  | ||||||
| 	switch ch { |  | ||||||
| 	case 'a', 'b', 'f', 'n', 'r', 't', 'v', '\\', '"': |  | ||||||
| 		// nothing to do |  | ||||||
| 	case '0', '1', '2', '3', '4', '5', '6', '7': |  | ||||||
| 		// octal notation |  | ||||||
| 		ch = s.scanDigits(ch, 8, 3) |  | ||||||
| 	case 'x': |  | ||||||
| 		// hexadecimal notation |  | ||||||
| 		ch = s.scanDigits(s.next(), 16, 2) |  | ||||||
| 	case 'u': |  | ||||||
| 		// universal character name |  | ||||||
| 		ch = s.scanDigits(s.next(), 16, 4) |  | ||||||
| 	case 'U': |  | ||||||
| 		// universal character name |  | ||||||
| 		ch = s.scanDigits(s.next(), 16, 8) |  | ||||||
| 	default: |  | ||||||
| 		s.err("illegal char escape") |  | ||||||
| 	} |  | ||||||
| 	return ch |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // scanDigits scans up to n digits in the given base, starting with ch. For |  | ||||||
| // example, the octal escape \123 would be scanned with scanDigits(ch, 8, 3) |  | ||||||
| func (s *Scanner) scanDigits(ch rune, base, n int) rune { |  | ||||||
| 	for n > 0 && digitVal(ch) < base { |  | ||||||
| 		ch = s.next() |  | ||||||
| 		n-- |  | ||||||
| 	} |  | ||||||
| 	if n > 0 { |  | ||||||
| 		s.err("illegal char escape") |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// we scanned all digits; put the last non-digit char back |  | ||||||
| 	s.unread() |  | ||||||
| 	return ch |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // scanIdentifier scans an identifier and returns the literal string |  | ||||||
| func (s *Scanner) scanIdentifier() string { |  | ||||||
| 	offs := s.srcPos.Offset - s.lastCharLen |  | ||||||
| 	ch := s.next() |  | ||||||
| 	for isLetter(ch) || isDigit(ch) || ch == '-' { |  | ||||||
| 		ch = s.next() |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	if ch != eof { |  | ||||||
| 		s.unread() // we got identifier, put back latest char |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return string(s.src[offs:s.srcPos.Offset]) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // recentPosition returns the position of the character immediately after the |  | ||||||
| // character or token returned by the last call to Scan. |  | ||||||
| func (s *Scanner) recentPosition() (pos token.Pos) { |  | ||||||
| 	pos.Offset = s.srcPos.Offset - s.lastCharLen |  | ||||||
| 	switch { |  | ||||||
| 	case s.srcPos.Column > 0: |  | ||||||
| 		// common case: last character was not a '\n' |  | ||||||
| 		pos.Line = s.srcPos.Line |  | ||||||
| 		pos.Column = s.srcPos.Column |  | ||||||
| 	case s.lastLineLen > 0: |  | ||||||
| 		// last character was a '\n' |  | ||||||
| 		// (we cannot be at the beginning of the source |  | ||||||
| 		// since we have called next() at least once) |  | ||||||
| 		pos.Line = s.srcPos.Line - 1 |  | ||||||
| 		pos.Column = s.lastLineLen |  | ||||||
| 	default: |  | ||||||
| 		// at the beginning of the source |  | ||||||
| 		pos.Line = 1 |  | ||||||
| 		pos.Column = 1 |  | ||||||
| 	} |  | ||||||
| 	return |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // err prints the error of any scanning to s.Error function. If the function is |  | ||||||
| // not defined, by default it prints them to os.Stderr |  | ||||||
| func (s *Scanner) err(msg string) { |  | ||||||
| 	s.ErrorCount++ |  | ||||||
| 	pos := s.recentPosition() |  | ||||||
|  |  | ||||||
| 	if s.Error != nil { |  | ||||||
| 		s.Error(pos, msg) |  | ||||||
| 		return |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	fmt.Fprintf(os.Stderr, "%s: %s\n", pos, msg) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // isLetter returns true if the given rune is a letter |  | ||||||
| func isLetter(ch rune) bool { |  | ||||||
| 	return 'a' <= ch && ch <= 'z' || 'A' <= ch && ch <= 'Z' || ch == '_' || ch >= 0x80 && unicode.IsLetter(ch) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // isDigit returns true if the given rune is a decimal digit |  | ||||||
| func isDigit(ch rune) bool { |  | ||||||
| 	return '0' <= ch && ch <= '9' || ch >= 0x80 && unicode.IsDigit(ch) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // isDecimal returns true if the given rune is a decimal digit |  | ||||||
| func isDecimal(ch rune) bool { |  | ||||||
| 	return '0' <= ch && ch <= '9' |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // isHexadecimal returns true if the given rune is a hexadecimal digit |  | ||||||
| func isHexadecimal(ch rune) bool { |  | ||||||
| 	return '0' <= ch && ch <= '9' || 'a' <= ch && ch <= 'f' || 'A' <= ch && ch <= 'F' |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // isWhitespace returns true if the rune is a space, tab, newline or carriage return |  | ||||||
| func isWhitespace(ch rune) bool { |  | ||||||
| 	return ch == ' ' || ch == '\t' || ch == '\n' || ch == '\r' |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // digitVal returns the integer value of a given octal,decimal or hexadecimal rune |  | ||||||
| func digitVal(ch rune) int { |  | ||||||
| 	switch { |  | ||||||
| 	case '0' <= ch && ch <= '9': |  | ||||||
| 		return int(ch - '0') |  | ||||||
| 	case 'a' <= ch && ch <= 'f': |  | ||||||
| 		return int(ch - 'a' + 10) |  | ||||||
| 	case 'A' <= ch && ch <= 'F': |  | ||||||
| 		return int(ch - 'A' + 10) |  | ||||||
| 	} |  | ||||||
| 	return 16 // larger than any legal digit val |  | ||||||
| } |  | ||||||
vendor/github.com/hashicorp/hcl/json/token/position.go (generated, vendored, 46 lines removed)
@@ -1,46 +0,0 @@
| package token |  | ||||||
|  |  | ||||||
| import "fmt" |  | ||||||
|  |  | ||||||
| // Pos describes an arbitrary source position |  | ||||||
| // including the file, line, and column location. |  | ||||||
| // A Position is valid if the line number is > 0. |  | ||||||
| type Pos struct { |  | ||||||
| 	Filename string // filename, if any |  | ||||||
| 	Offset   int    // offset, starting at 0 |  | ||||||
| 	Line     int    // line number, starting at 1 |  | ||||||
| 	Column   int    // column number, starting at 1 (character count) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // IsValid returns true if the position is valid. |  | ||||||
| func (p *Pos) IsValid() bool { return p.Line > 0 } |  | ||||||
|  |  | ||||||
| // String returns a string in one of several forms: |  | ||||||
| // |  | ||||||
| //	file:line:column    valid position with file name |  | ||||||
| //	line:column         valid position without file name |  | ||||||
| //	file                invalid position with file name |  | ||||||
| //	-                   invalid position without file name |  | ||||||
| func (p Pos) String() string { |  | ||||||
| 	s := p.Filename |  | ||||||
| 	if p.IsValid() { |  | ||||||
| 		if s != "" { |  | ||||||
| 			s += ":" |  | ||||||
| 		} |  | ||||||
| 		s += fmt.Sprintf("%d:%d", p.Line, p.Column) |  | ||||||
| 	} |  | ||||||
| 	if s == "" { |  | ||||||
| 		s = "-" |  | ||||||
| 	} |  | ||||||
| 	return s |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Before reports whether the position p is before u. |  | ||||||
| func (p Pos) Before(u Pos) bool { |  | ||||||
| 	return u.Offset > p.Offset || u.Line > p.Line |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // After reports whether the position p is after u. |  | ||||||
| func (p Pos) After(u Pos) bool { |  | ||||||
| 	return u.Offset < p.Offset || u.Line < p.Line |  | ||||||
| } |  | ||||||
vendor/github.com/hashicorp/hcl/json/token/token.go (generated, vendored, 118 lines removed)
@@ -1,118 +0,0 @@
| package token |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"fmt" |  | ||||||
| 	"strconv" |  | ||||||
|  |  | ||||||
| 	hcltoken "github.com/hashicorp/hcl/hcl/token" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| // Token defines a single HCL token which can be obtained via the Scanner |  | ||||||
| type Token struct { |  | ||||||
| 	Type Type |  | ||||||
| 	Pos  Pos |  | ||||||
| 	Text string |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Type is the set of lexical tokens of the HCL (HashiCorp Configuration Language) |  | ||||||
| type Type int |  | ||||||
|  |  | ||||||
| const ( |  | ||||||
| 	// Special tokens |  | ||||||
| 	ILLEGAL Type = iota |  | ||||||
| 	EOF |  | ||||||
|  |  | ||||||
| 	identifier_beg |  | ||||||
| 	literal_beg |  | ||||||
| 	NUMBER // 12345 |  | ||||||
| 	FLOAT  // 123.45 |  | ||||||
| 	BOOL   // true,false |  | ||||||
| 	STRING // "abc" |  | ||||||
| 	NULL   // null |  | ||||||
| 	literal_end |  | ||||||
| 	identifier_end |  | ||||||
|  |  | ||||||
| 	operator_beg |  | ||||||
| 	LBRACK // [ |  | ||||||
| 	LBRACE // { |  | ||||||
| 	COMMA  // , |  | ||||||
| 	PERIOD // . |  | ||||||
| 	COLON  // : |  | ||||||
|  |  | ||||||
| 	RBRACK // ] |  | ||||||
| 	RBRACE // } |  | ||||||
|  |  | ||||||
| 	operator_end |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| var tokens = [...]string{ |  | ||||||
| 	ILLEGAL: "ILLEGAL", |  | ||||||
|  |  | ||||||
| 	EOF: "EOF", |  | ||||||
|  |  | ||||||
| 	NUMBER: "NUMBER", |  | ||||||
| 	FLOAT:  "FLOAT", |  | ||||||
| 	BOOL:   "BOOL", |  | ||||||
| 	STRING: "STRING", |  | ||||||
| 	NULL:   "NULL", |  | ||||||
|  |  | ||||||
| 	LBRACK: "LBRACK", |  | ||||||
| 	LBRACE: "LBRACE", |  | ||||||
| 	COMMA:  "COMMA", |  | ||||||
| 	PERIOD: "PERIOD", |  | ||||||
| 	COLON:  "COLON", |  | ||||||
|  |  | ||||||
| 	RBRACK: "RBRACK", |  | ||||||
| 	RBRACE: "RBRACE", |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // String returns the string corresponding to the token type t. |  | ||||||
| func (t Type) String() string { |  | ||||||
| 	s := "" |  | ||||||
| 	if 0 <= t && t < Type(len(tokens)) { |  | ||||||
| 		s = tokens[t] |  | ||||||
| 	} |  | ||||||
| 	if s == "" { |  | ||||||
| 		s = "token(" + strconv.Itoa(int(t)) + ")" |  | ||||||
| 	} |  | ||||||
| 	return s |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // IsIdentifier returns true for tokens corresponding to identifiers and basic |  | ||||||
| // type literals; it returns false otherwise. |  | ||||||
| func (t Type) IsIdentifier() bool { return identifier_beg < t && t < identifier_end } |  | ||||||
|  |  | ||||||
| // IsLiteral returns true for tokens corresponding to basic type literals; it |  | ||||||
| // returns false otherwise. |  | ||||||
| func (t Type) IsLiteral() bool { return literal_beg < t && t < literal_end } |  | ||||||
|  |  | ||||||
| // IsOperator returns true for tokens corresponding to operators and |  | ||||||
| // delimiters; it returns false otherwise. |  | ||||||
| func (t Type) IsOperator() bool { return operator_beg < t && t < operator_end } |  | ||||||
|  |  | ||||||
| // String returns a human-readable representation of the token, |  | ||||||
| // combining its position, type and literal text. |  | ||||||
| func (t Token) String() string { |  | ||||||
| 	return fmt.Sprintf("%s %s %s", t.Pos.String(), t.Type.String(), t.Text) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // HCLToken converts this token to an HCL token. |  | ||||||
| // |  | ||||||
| // The token type must be a literal type or this will panic. |  | ||||||
| func (t Token) HCLToken() hcltoken.Token { |  | ||||||
| 	switch t.Type { |  | ||||||
| 	case BOOL: |  | ||||||
| 		return hcltoken.Token{Type: hcltoken.BOOL, Text: t.Text} |  | ||||||
| 	case FLOAT: |  | ||||||
| 		return hcltoken.Token{Type: hcltoken.FLOAT, Text: t.Text} |  | ||||||
| 	case NULL: |  | ||||||
| 		return hcltoken.Token{Type: hcltoken.STRING, Text: ""} |  | ||||||
| 	case NUMBER: |  | ||||||
| 		return hcltoken.Token{Type: hcltoken.NUMBER, Text: t.Text} |  | ||||||
| 	case STRING: |  | ||||||
| 		return hcltoken.Token{Type: hcltoken.STRING, Text: t.Text, JSON: true} |  | ||||||
| 	default: |  | ||||||
| 		panic(fmt.Sprintf("unimplemented HCLToken for type: %s", t.Type)) |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
vendor/github.com/hashicorp/hcl/lex.go (generated, vendored, 38 lines removed)
@@ -1,38 +0,0 @@
| package hcl |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"unicode" |  | ||||||
| 	"unicode/utf8" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| type lexModeValue byte |  | ||||||
|  |  | ||||||
| const ( |  | ||||||
| 	lexModeUnknown lexModeValue = iota |  | ||||||
| 	lexModeHcl |  | ||||||
| 	lexModeJson |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| // lexMode determines whether the input should be parsed |  | ||||||
| // in JSON mode or in HCL mode. |  | ||||||
| func lexMode(v []byte) lexModeValue { |  | ||||||
| 	var ( |  | ||||||
| 		r      rune |  | ||||||
| 		w      int |  | ||||||
| 		offset int |  | ||||||
| 	) |  | ||||||
|  |  | ||||||
| 	for { |  | ||||||
| 		r, w = utf8.DecodeRune(v[offset:]) |  | ||||||
| 		offset += w |  | ||||||
| 		if unicode.IsSpace(r) { |  | ||||||
| 			continue |  | ||||||
| 		} |  | ||||||
| 		if r == '{' { |  | ||||||
| 			return lexModeJson |  | ||||||
| 		} |  | ||||||
| 		break |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return lexModeHcl |  | ||||||
| } |  | ||||||
vendor/github.com/hashicorp/hcl/parse.go (generated, vendored, 39 lines removed)
@@ -1,39 +0,0 @@
| package hcl |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"fmt" |  | ||||||
|  |  | ||||||
| 	"github.com/hashicorp/hcl/hcl/ast" |  | ||||||
| 	hclParser "github.com/hashicorp/hcl/hcl/parser" |  | ||||||
| 	jsonParser "github.com/hashicorp/hcl/json/parser" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| // ParseBytes accepts a byte slice as input and returns the AST tree. |  | ||||||
| // |  | ||||||
| // Input can be either JSON or HCL |  | ||||||
| func ParseBytes(in []byte) (*ast.File, error) { |  | ||||||
| 	return parse(in) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ParseString accepts a string as input and returns the AST tree. |  | ||||||
| func ParseString(input string) (*ast.File, error) { |  | ||||||
| 	return parse([]byte(input)) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func parse(in []byte) (*ast.File, error) { |  | ||||||
| 	switch lexMode(in) { |  | ||||||
| 	case lexModeHcl: |  | ||||||
| 		return hclParser.Parse(in) |  | ||||||
| 	case lexModeJson: |  | ||||||
| 		return jsonParser.Parse(in) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return nil, fmt.Errorf("unknown config format") |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Parse parses the given input and returns the root object. |  | ||||||
| // |  | ||||||
| // The input format can be either HCL or JSON. |  | ||||||
| func Parse(input string) (*ast.File, error) { |  | ||||||
| 	return parse([]byte(input)) |  | ||||||
| } |  | ||||||
vendor/github.com/magiconair/properties/.gitignore (generated, vendored, 6 lines removed)
@@ -1,6 +0,0 @@
| *.sublime-project |  | ||||||
| *.sublime-workspace |  | ||||||
| *.un~ |  | ||||||
| *.swp |  | ||||||
| .idea/ |  | ||||||
| *.iml |  | ||||||
vendor/github.com/magiconair/properties/.travis.yml (generated, vendored, 12 lines removed)
@@ -1,12 +0,0 @@
| language: go |  | ||||||
| go: |  | ||||||
|     - 1.4.x |  | ||||||
|     - 1.5.x |  | ||||||
|     - 1.6.x |  | ||||||
|     - 1.7.x |  | ||||||
|     - 1.8.x |  | ||||||
|     - 1.9.x |  | ||||||
|     - "1.10.x" |  | ||||||
|     - "1.11.x" |  | ||||||
|     - "1.12.x" |  | ||||||
|     - tip |  | ||||||
vendor/github.com/magiconair/properties/CHANGELOG.md (generated, vendored, 139 lines removed)
@@ -1,139 +0,0 @@
| ## Changelog |  | ||||||
|  |  | ||||||
| ### [1.8.1](https://github.com/magiconair/properties/tree/v1.8.1) - 10 May 2019 |  | ||||||
|  |  | ||||||
|  * [PR #35](https://github.com/magiconair/properties/pull/35): Close body always after request |  | ||||||
|  |  | ||||||
|    This patch ensures that in `LoadURL` the response body is always closed. |  | ||||||
|  |  | ||||||
|    Thanks to [@liubog2008](https://github.com/liubog2008) for the patch. |  | ||||||
|  |  | ||||||
| ### [1.8](https://github.com/magiconair/properties/tree/v1.8) - 15 May 2018 |  | ||||||
|  |  | ||||||
|  * [PR #26](https://github.com/magiconair/properties/pull/26): Disable expansion during loading |  | ||||||
|  |  | ||||||
|    This adds the option to disable property expansion during loading. |  | ||||||
|  |  | ||||||
|    Thanks to [@kmala](https://github.com/kmala) for the patch. |  | ||||||
|  |  | ||||||
| ### [1.7.6](https://github.com/magiconair/properties/tree/v1.7.6) - 14 Feb 2018 |  | ||||||
|  |  | ||||||
|  * [PR #29](https://github.com/magiconair/properties/pull/29): Reworked expansion logic to handle more complex cases. |  | ||||||
|  |  | ||||||
|    See PR for an example. |  | ||||||
|  |  | ||||||
|    Thanks to [@yobert](https://github.com/yobert) for the fix. |  | ||||||
|  |  | ||||||
| ### [1.7.5](https://github.com/magiconair/properties/tree/v1.7.5) - 13 Feb 2018 |  | ||||||
|  |  | ||||||
|  * [PR #28](https://github.com/magiconair/properties/pull/28): Support duplicate expansions in the same value |  | ||||||
|  |  | ||||||
|    Values which expand the same key multiple times (e.g. `key=${a} ${a}`) will no longer fail |  | ||||||
|    with a `circular reference error`. |  | ||||||
|  |  | ||||||
|    Thanks to [@yobert](https://github.com/yobert) for the fix. |  | ||||||
|  |  | ||||||
| ### [1.7.4](https://github.com/magiconair/properties/tree/v1.7.4) - 31 Oct 2017 |  | ||||||
|  |  | ||||||
|  * [Issue #23](https://github.com/magiconair/properties/issues/23): Ignore blank lines with whitespaces |  | ||||||
|  |  | ||||||
|  * [PR #24](https://github.com/magiconair/properties/pull/24): Update keys when DisableExpansion is enabled |  | ||||||
|  |  | ||||||
|    Thanks to [@mgurov](https://github.com/mgurov) for the fix. |  | ||||||
|  |  | ||||||
| ### [1.7.3](https://github.com/magiconair/properties/tree/v1.7.3) - 10 Jul 2017 |  | ||||||
|  |  | ||||||
|  * [Issue #17](https://github.com/magiconair/properties/issues/17): Add [SetValue()](http://godoc.org/github.com/magiconair/properties#Properties.SetValue) method to set values generically |  | ||||||
|  * [Issue #22](https://github.com/magiconair/properties/issues/22): Add [LoadMap()](http://godoc.org/github.com/magiconair/properties#LoadMap) function to load properties from a string map |  | ||||||
|  |  | ||||||
| ### [1.7.2](https://github.com/magiconair/properties/tree/v1.7.2) - 20 Mar 2017 |  | ||||||
|  |  | ||||||
|  * [Issue #15](https://github.com/magiconair/properties/issues/15): Drop gocheck dependency |  | ||||||
|  * [PR #21](https://github.com/magiconair/properties/pull/21): Add [Map()](http://godoc.org/github.com/magiconair/properties#Properties.Map) and [FilterFunc()](http://godoc.org/github.com/magiconair/properties#Properties.FilterFunc) |  | ||||||
|  |  | ||||||
| ### [1.7.1](https://github.com/magiconair/properties/tree/v1.7.1) - 13 Jan 2017 |  | ||||||
|  |  | ||||||
|  * [Issue #14](https://github.com/magiconair/properties/issues/14): Decouple TestLoadExpandedFile from `$USER` |  | ||||||
|  * [PR #12](https://github.com/magiconair/properties/pull/12): Load from files and URLs |  | ||||||
|  * [PR #16](https://github.com/magiconair/properties/pull/16): Keep gofmt happy |  | ||||||
|  * [PR #18](https://github.com/magiconair/properties/pull/18): Fix Delete() function |  | ||||||
|  |  | ||||||
| ### [1.7.0](https://github.com/magiconair/properties/tree/v1.7.0) - 20 Mar 2016 |  | ||||||
|  |  | ||||||
|  * [Issue #10](https://github.com/magiconair/properties/issues/10): Add [LoadURL,LoadURLs,MustLoadURL,MustLoadURLs](http://godoc.org/github.com/magiconair/properties#LoadURL) method to load properties from a URL. |  | ||||||
|  * [Issue #11](https://github.com/magiconair/properties/issues/11): Add [LoadString,MustLoadString](http://godoc.org/github.com/magiconair/properties#LoadString) method to load properties from an UTF8 string. |  | ||||||
|  * [PR #8](https://github.com/magiconair/properties/pull/8): Add [MustFlag](http://godoc.org/github.com/magiconair/properties#Properties.MustFlag) method to provide overrides via command line flags. (@pascaldekloe) |  | ||||||
|  |  | ||||||
| ### [1.6.0](https://github.com/magiconair/properties/tree/v1.6.0) - 11 Dec 2015 |  | ||||||
|  |  | ||||||
|  * Add [Decode](http://godoc.org/github.com/magiconair/properties#Properties.Decode) method to populate struct from properties via tags. |  | ||||||
|  |  | ||||||
| ### [1.5.6](https://github.com/magiconair/properties/tree/v1.5.6) - 18 Oct 2015 |  | ||||||
|  |  | ||||||
|  * Vendored in gopkg.in/check.v1 |  | ||||||
|  |  | ||||||
| ### [1.5.5](https://github.com/magiconair/properties/tree/v1.5.5) - 31 Jul 2015 |  | ||||||
|  |  | ||||||
|  * [PR #6](https://github.com/magiconair/properties/pull/6): Add [Delete](http://godoc.org/github.com/magiconair/properties#Properties.Delete) method to remove keys including comments. (@gerbenjacobs) |  | ||||||
|  |  | ||||||
| ### [1.5.4](https://github.com/magiconair/properties/tree/v1.5.4) - 23 Jun 2015 |  | ||||||
|  |  | ||||||
|  * [Issue #5](https://github.com/magiconair/properties/issues/5): Allow disabling of property expansion [DisableExpansion](http://godoc.org/github.com/magiconair/properties#Properties.DisableExpansion). When property expansion is disabled Properties become a simple key/value store and don't check for circular references. |  | ||||||
|  |  | ||||||
| ### [1.5.3](https://github.com/magiconair/properties/tree/v1.5.3) - 02 Jun 2015 |  | ||||||
|  |  | ||||||
|  * [Issue #4](https://github.com/magiconair/properties/issues/4): Maintain key order in [Filter()](http://godoc.org/github.com/magiconair/properties#Properties.Filter), [FilterPrefix()](http://godoc.org/github.com/magiconair/properties#Properties.FilterPrefix) and [FilterRegexp()](http://godoc.org/github.com/magiconair/properties#Properties.FilterRegexp) |  | ||||||
|  |  | ||||||
| ### [1.5.2](https://github.com/magiconair/properties/tree/v1.5.2) - 10 Apr 2015 |  | ||||||
|  |  | ||||||
|  * [Issue #3](https://github.com/magiconair/properties/issues/3): Don't print comments in [WriteComment()](http://godoc.org/github.com/magiconair/properties#Properties.WriteComment) if they are all empty |  | ||||||
|  * Add clickable links to README |  | ||||||
|  |  | ||||||
| ### [1.5.1](https://github.com/magiconair/properties/tree/v1.5.1) - 08 Dec 2014 |  | ||||||
|  |  | ||||||
|  * Added [GetParsedDuration()](http://godoc.org/github.com/magiconair/properties#Properties.GetParsedDuration) and [MustGetParsedDuration()](http://godoc.org/github.com/magiconair/properties#Properties.MustGetParsedDuration) for values specified in a format compatible with |  | ||||||
|    [time.ParseDuration()](http://golang.org/pkg/time/#ParseDuration). |  | ||||||
|  |  | ||||||
| ### [1.5.0](https://github.com/magiconair/properties/tree/v1.5.0) - 18 Nov 2014 |  | ||||||
|  |  | ||||||
|  * Added support for single and multi-line comments (reading, writing and updating) |  | ||||||
|  * The order of keys is now preserved |  | ||||||
|  * Calling [Set()](http://godoc.org/github.com/magiconair/properties#Properties.Set) with an empty key now silently ignores the call and does not create a new entry |  | ||||||
|  * Added a [MustSet()](http://godoc.org/github.com/magiconair/properties#Properties.MustSet) method |  | ||||||
|  * Migrated test library from launchpad.net/gocheck to [gopkg.in/check.v1](http://gopkg.in/check.v1) |  | ||||||
|  |  | ||||||
| ### [1.4.2](https://github.com/magiconair/properties/tree/v1.4.2) - 15 Nov 2014 |  | ||||||
|  |  | ||||||
|  * [Issue #2](https://github.com/magiconair/properties/issues/2): Fixed goroutine leak in parser which created two lexers but cleaned up only one |  | ||||||
|  |  | ||||||
| ### [1.4.1](https://github.com/magiconair/properties/tree/v1.4.1) - 13 Nov 2014 |  | ||||||
|  |  | ||||||
|  * [Issue #1](https://github.com/magiconair/properties/issues/1): Fixed bug in Keys() method which returned an empty string |  | ||||||
|  |  | ||||||
| ### [1.4.0](https://github.com/magiconair/properties/tree/v1.4.0) - 23 Sep 2014 |  | ||||||
|  |  | ||||||
|  * Added [Keys()](http://godoc.org/github.com/magiconair/properties#Properties.Keys) to get the keys |  | ||||||
|  * Added [Filter()](http://godoc.org/github.com/magiconair/properties#Properties.Filter), [FilterRegexp()](http://godoc.org/github.com/magiconair/properties#Properties.FilterRegexp) and [FilterPrefix()](http://godoc.org/github.com/magiconair/properties#Properties.FilterPrefix) to get a subset of the properties |  | ||||||
|  |  | ||||||
| ### [1.3.0](https://github.com/magiconair/properties/tree/v1.3.0) - 18 Mar 2014 |  | ||||||
|  |  | ||||||
| * Added support for time.Duration |  | ||||||
| * Made MustXXX() failure behavior configurable (log.Fatal, panic, custom) |  | ||||||
| * Changed default of MustXXX() failure from panic to log.Fatal |  | ||||||
|  |  | ||||||
| ### [1.2.0](https://github.com/magiconair/properties/tree/v1.2.0) - 05 Mar 2014 |  | ||||||
|  |  | ||||||
| * Added MustGet... functions |  | ||||||
| * Added support for int and uint with range checks on 32 bit platforms |  | ||||||
|  |  | ||||||
| ### [1.1.0](https://github.com/magiconair/properties/tree/v1.1.0) - 20 Jan 2014 |  | ||||||
|  |  | ||||||
| * Renamed from goproperties to properties |  | ||||||
| * Added support for expansion of environment vars in |  | ||||||
|   filenames and value expressions |  | ||||||
| * Fixed bug where value expressions were not at the |  | ||||||
|   start of the string |  | ||||||
|  |  | ||||||
| ### [1.0.0](https://github.com/magiconair/properties/tree/v1.0.0) - 7 Jan 2014 |  | ||||||
|  |  | ||||||
| * Initial release |  | ||||||
vendor/github.com/magiconair/properties/LICENSE (generated, vendored, 25 lines removed)
@@ -1,25 +0,0 @@
| goproperties - properties file decoder for Go |  | ||||||
|  |  | ||||||
| Copyright (c) 2013-2018 - Frank Schroeder |  | ||||||
|  |  | ||||||
| All rights reserved. |  | ||||||
|  |  | ||||||
| Redistribution and use in source and binary forms, with or without |  | ||||||
| modification, are permitted provided that the following conditions are met: |  | ||||||
|  |  | ||||||
| 1. Redistributions of source code must retain the above copyright notice, this |  | ||||||
|    list of conditions and the following disclaimer. |  | ||||||
| 2. Redistributions in binary form must reproduce the above copyright notice, |  | ||||||
|    this list of conditions and the following disclaimer in the documentation |  | ||||||
|    and/or other materials provided with the distribution. |  | ||||||
|  |  | ||||||
| THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND |  | ||||||
| ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED |  | ||||||
| WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE |  | ||||||
| DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR |  | ||||||
| ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES |  | ||||||
| (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; |  | ||||||
| LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND |  | ||||||
| ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT |  | ||||||
| (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS |  | ||||||
| SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. |  | ||||||
vendor/github.com/magiconair/properties/README.md (generated, vendored, 129 lines removed)
@@ -1,129 +0,0 @@
| [Latest Release](https://github.com/magiconair/properties/releases) |  | ||||||
| [Travis CI](https://travis-ci.org/magiconair/properties) |  | ||||||
| [CircleCI](https://circleci.com/gh/magiconair/properties) |  | ||||||
| [License](https://raw.githubusercontent.com/magiconair/properties/master/LICENSE) |  | ||||||
| [GoDoc](http://godoc.org/github.com/magiconair/properties) |  | ||||||
|  |  | ||||||
| # Overview |  | ||||||
|  |  | ||||||
| #### Please run `git pull --tags` to update the tags. See [below](#updated-git-tags) for why. |  | ||||||
|  |  | ||||||
| properties is a Go library for reading and writing properties files. |  | ||||||
|  |  | ||||||
| It supports reading from multiple files or URLs and Spring style recursive |  | ||||||
| property expansion of expressions like `${key}` to their corresponding value. |  | ||||||
| Value expressions can refer to other keys like in `${key}` or to environment |  | ||||||
| variables like in `${USER}`.  Filenames can also contain environment variables |  | ||||||
| like in `/home/${USER}/myapp.properties`. |  | ||||||
|  |  | ||||||
| Properties can be decoded into structs, maps, arrays and values through |  | ||||||
| struct tags. |  | ||||||
|  |  | ||||||
| Comments and the order of keys are preserved. Comments can be modified |  | ||||||
| and can be written to the output. |  | ||||||
|  |  | ||||||
| The properties library supports both ISO-8859-1 and UTF-8 encoded data. |  | ||||||
|  |  | ||||||
| Starting from version 1.3.0 the behavior of the MustXXX() functions is |  | ||||||
| configurable by providing a custom `ErrorHandler` function. The default has |  | ||||||
| changed from `panic` to `log.Fatal` but this is configurable and custom |  | ||||||
| error handling functions can be provided. See the package documentation for |  | ||||||
| details. |  | ||||||
|  |  | ||||||
| Read the full documentation on [GoDoc](http://godoc.org/github.com/magiconair/properties) |  | ||||||
|  |  | ||||||
| ## Getting Started |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| import ( |  | ||||||
| 	"flag" |  | ||||||
| 	"log" |  | ||||||
| 	"time" |  | ||||||
|  |  | ||||||
| 	"github.com/magiconair/properties" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| func main() { |  | ||||||
| 	// init from a file |  | ||||||
| 	p := properties.MustLoadFile("${HOME}/config.properties", properties.UTF8) |  | ||||||
|  |  | ||||||
| 	// or multiple files |  | ||||||
| 	p = properties.MustLoadFiles([]string{ |  | ||||||
| 			"${HOME}/config.properties", |  | ||||||
| 			"${HOME}/config-${USER}.properties", |  | ||||||
| 		}, properties.UTF8, true) |  | ||||||
|  |  | ||||||
| 	// or from a map |  | ||||||
| 	p = properties.LoadMap(map[string]string{"key": "value", "abc": "def"}) |  | ||||||
|  |  | ||||||
| 	// or from a string |  | ||||||
| 	p = properties.MustLoadString("key=value\nabc=def") |  | ||||||
|  |  | ||||||
| 	// or from a URL |  | ||||||
| 	p = properties.MustLoadURL("http://host/path") |  | ||||||
|  |  | ||||||
| 	// or from multiple URLs |  | ||||||
| 	p = properties.MustLoadURLs([]string{ |  | ||||||
| 			"http://host/config", |  | ||||||
| 			"http://host/config-${USER}", |  | ||||||
| 		}, true) |  | ||||||
|  |  | ||||||
| 	// or from flags |  | ||||||
| 	p.MustFlag(flag.CommandLine) |  | ||||||
|  |  | ||||||
| 	// get values through getters |  | ||||||
| 	host := p.MustGetString("host") |  | ||||||
| 	port := p.GetInt("port", 8080) |  | ||||||
|  |  | ||||||
| 	// or through Decode |  | ||||||
| 	type Config struct { |  | ||||||
| 		Host    string        `properties:"host"` |  | ||||||
| 		Port    int           `properties:"port,default=9000"` |  | ||||||
| 		Accept  []string      `properties:"accept,default=image/png;image;gif"` |  | ||||||
| 		Timeout time.Duration `properties:"timeout,default=5s"` |  | ||||||
| 	} |  | ||||||
| 	var cfg Config |  | ||||||
| 	if err := p.Decode(&cfg); err != nil { |  | ||||||
| 		log.Fatal(err) |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| ## Installation and Upgrade |  | ||||||
|  |  | ||||||
| ``` |  | ||||||
| $ go get -u github.com/magiconair/properties |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| ## License |  | ||||||
|  |  | ||||||
| 2 clause BSD license. See [LICENSE](https://github.com/magiconair/properties/blob/master/LICENSE) file for details. |  | ||||||
|  |  | ||||||
| ## ToDo |  | ||||||
|  |  | ||||||
| * Dump contents with passwords and secrets obscured |  | ||||||
|  |  | ||||||
| ## Updated Git tags |  | ||||||
|  |  | ||||||
| #### 13 Feb 2018 |  | ||||||
|  |  | ||||||
| I realized that all of the git tags I had pushed before v1.7.5 were lightweight tags |  | ||||||
| and I've only recently learned that this doesn't play well with `git describe` 😞 |  | ||||||
|  |  | ||||||
| I have replaced all lightweight tags with signed tags using this script which should |  | ||||||
| retain the commit date, name and email address. Please run `git pull --tags` to update them. |  | ||||||
|  |  | ||||||
| Worst case you have to reclone the repo. |  | ||||||
|  |  | ||||||
| ```shell |  | ||||||
| #!/bin/bash |  | ||||||
| tag=$1 |  | ||||||
| echo "Updating $tag" |  | ||||||
| date=$(git show ${tag}^0 --format=%aD | head -1) |  | ||||||
| email=$(git show ${tag}^0 --format=%aE | head -1) |  | ||||||
| name=$(git show ${tag}^0 --format=%aN | head -1) |  | ||||||
| GIT_COMMITTER_DATE="$date" GIT_COMMITTER_NAME="$name" GIT_COMMITTER_EMAIL="$email" git tag -s -f ${tag} ${tag}^0 -m ${tag} |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| I apologize for the inconvenience. |  | ||||||
|  |  | ||||||
| Frank |  | ||||||
|  |  | ||||||
vendor/github.com/magiconair/properties/decode.go (generated, vendored, 289 lines removed)
@@ -1,289 +0,0 @@
| // Copyright 2018 Frank Schroeder. All rights reserved. |  | ||||||
| // Use of this source code is governed by a BSD-style |  | ||||||
| // license that can be found in the LICENSE file. |  | ||||||
|  |  | ||||||
| package properties |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"fmt" |  | ||||||
| 	"reflect" |  | ||||||
| 	"strconv" |  | ||||||
| 	"strings" |  | ||||||
| 	"time" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| // Decode assigns property values to exported fields of a struct. |  | ||||||
| // |  | ||||||
| // Decode traverses v recursively and returns an error if a value cannot be |  | ||||||
| // converted to the field type or a required value is missing for a field. |  | ||||||
| // |  | ||||||
| // The following type dependent decodings are used: |  | ||||||
| // |  | ||||||
| // String, boolean, numeric fields have the value of the property key assigned. |  | ||||||
| // The property key name is the name of the field. A different key and a default |  | ||||||
| // value can be set in the field's tag. Fields without default value are |  | ||||||
| // required. If the value cannot be converted to the field type an error is |  | ||||||
| // returned. |  | ||||||
| // |  | ||||||
| // time.Duration fields have the result of time.ParseDuration() assigned. |  | ||||||
| // |  | ||||||
| // time.Time fields have the value of time.Parse() assigned. The default layout |  | ||||||
| // is time.RFC3339 but can be set in the field's tag. |  | ||||||
| // |  | ||||||
| // Arrays and slices of string, boolean, numeric, time.Duration and time.Time |  | ||||||
| // fields have the value interpreted as a comma separated list of values. The |  | ||||||
| // individual values are trimmed of whitespace and empty values are ignored. A |  | ||||||
| // default value can be provided as a semicolon separated list in the field's |  | ||||||
| // tag. |  | ||||||
| // |  | ||||||
| // Struct fields are decoded recursively using the field name plus "." as |  | ||||||
| // prefix. The prefix (without dot) can be overridden in the field's tag. |  | ||||||
| // Default values are not supported in the field's tag. Specify them on the |  | ||||||
| // fields of the inner struct instead. |  | ||||||
| // |  | ||||||
| // Map fields must have a key of type string and are decoded recursively by |  | ||||||
| // using the field's name plus "." as prefix and the next element of the key |  | ||||||
| // name as map key. The prefix (without dot) can be overridden in the field's |  | ||||||
| // tag. Default values are not supported. |  | ||||||
| // |  | ||||||
| // Examples: |  | ||||||
| // |  | ||||||
| //     // Field is ignored. |  | ||||||
| //     Field int `properties:"-"` |  | ||||||
| // |  | ||||||
| //     // Field is assigned value of 'Field'. |  | ||||||
| //     Field int |  | ||||||
| // |  | ||||||
| //     // Field is assigned value of 'myName'. |  | ||||||
| //     Field int `properties:"myName"` |  | ||||||
| // |  | ||||||
| //     // Field is assigned value of key 'myName' and has a default |  | ||||||
| //     // value 15 if the key does not exist. |  | ||||||
| //     Field int `properties:"myName,default=15"` |  | ||||||
| // |  | ||||||
| //     // Field is assigned value of key 'Field' and has a default |  | ||||||
| //     // value 15 if the key does not exist. |  | ||||||
| //     Field int `properties:",default=15"` |  | ||||||
| // |  | ||||||
| //     // Field is assigned value of key 'date' and the date |  | ||||||
| //     // is in format 2006-01-02 |  | ||||||
| //     Field time.Time `properties:"date,layout=2006-01-02"` |  | ||||||
| // |  | ||||||
| //     // Field is assigned the non-empty and whitespace trimmed |  | ||||||
| //     // values of key 'Field' split by commas. |  | ||||||
| //     Field []string |  | ||||||
| // |  | ||||||
| //     // Field is assigned the non-empty and whitespace trimmed |  | ||||||
| //     // values of key 'Field' split by commas and has a default |  | ||||||
| //     // value ["a", "b", "c"] if the key does not exist. |  | ||||||
| //     Field []string `properties:",default=a;b;c"` |  | ||||||
| // |  | ||||||
| //     // Field is decoded recursively with "Field." as key prefix. |  | ||||||
| //     Field SomeStruct |  | ||||||
| // |  | ||||||
| //     // Field is decoded recursively with "myName." as key prefix. |  | ||||||
| //     Field SomeStruct `properties:"myName"` |  | ||||||
| // |  | ||||||
| //     // Field is decoded recursively with "Field." as key prefix |  | ||||||
| //     // and the next dotted element of the key as map key. |  | ||||||
| //     Field map[string]string |  | ||||||
| // |  | ||||||
| //     // Field is decoded recursively with "myName." as key prefix |  | ||||||
| //     // and the next dotted element of the key as map key. |  | ||||||
| //     Field map[string]string `properties:"myName"` |  | ||||||
| func (p *Properties) Decode(x interface{}) error { |  | ||||||
| 	t, v := reflect.TypeOf(x), reflect.ValueOf(x) |  | ||||||
| 	if t.Kind() != reflect.Ptr || v.Elem().Type().Kind() != reflect.Struct { |  | ||||||
| 		return fmt.Errorf("not a pointer to struct: %s", t) |  | ||||||
| 	} |  | ||||||
| 	if err := dec(p, "", nil, nil, v); err != nil { |  | ||||||
| 		return err |  | ||||||
| 	} |  | ||||||
| 	return nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func dec(p *Properties, key string, def *string, opts map[string]string, v reflect.Value) error { |  | ||||||
| 	t := v.Type() |  | ||||||
|  |  | ||||||
| 	// value returns the property value for key or the default if provided. |  | ||||||
| 	value := func() (string, error) { |  | ||||||
| 		if val, ok := p.Get(key); ok { |  | ||||||
| 			return val, nil |  | ||||||
| 		} |  | ||||||
| 		if def != nil { |  | ||||||
| 			return *def, nil |  | ||||||
| 		} |  | ||||||
| 		return "", fmt.Errorf("missing required key %s", key) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// conv converts a string to a value of the given type. |  | ||||||
| 	conv := func(s string, t reflect.Type) (val reflect.Value, err error) { |  | ||||||
| 		var v interface{} |  | ||||||
|  |  | ||||||
| 		switch { |  | ||||||
| 		case isDuration(t): |  | ||||||
| 			v, err = time.ParseDuration(s) |  | ||||||
|  |  | ||||||
| 		case isTime(t): |  | ||||||
| 			layout := opts["layout"] |  | ||||||
| 			if layout == "" { |  | ||||||
| 				layout = time.RFC3339 |  | ||||||
| 			} |  | ||||||
| 			v, err = time.Parse(layout, s) |  | ||||||
|  |  | ||||||
| 		case isBool(t): |  | ||||||
| 			v, err = boolVal(s), nil |  | ||||||
|  |  | ||||||
| 		case isString(t): |  | ||||||
| 			v, err = s, nil |  | ||||||
|  |  | ||||||
| 		case isFloat(t): |  | ||||||
| 			v, err = strconv.ParseFloat(s, 64) |  | ||||||
|  |  | ||||||
| 		case isInt(t): |  | ||||||
| 			v, err = strconv.ParseInt(s, 10, 64) |  | ||||||
|  |  | ||||||
| 		case isUint(t): |  | ||||||
| 			v, err = strconv.ParseUint(s, 10, 64) |  | ||||||
|  |  | ||||||
| 		default: |  | ||||||
| 			return reflect.Zero(t), fmt.Errorf("unsupported type %s", t) |  | ||||||
| 		} |  | ||||||
| 		if err != nil { |  | ||||||
| 			return reflect.Zero(t), err |  | ||||||
| 		} |  | ||||||
| 		return reflect.ValueOf(v).Convert(t), nil |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// keydef returns the property key and the default value based on the |  | ||||||
| 	// name of the struct field and the options in the tag. |  | ||||||
| 	keydef := func(f reflect.StructField) (string, *string, map[string]string) { |  | ||||||
| 		_key, _opts := parseTag(f.Tag.Get("properties")) |  | ||||||
|  |  | ||||||
| 		var _def *string |  | ||||||
| 		if d, ok := _opts["default"]; ok { |  | ||||||
| 			_def = &d |  | ||||||
| 		} |  | ||||||
| 		if _key != "" { |  | ||||||
| 			return _key, _def, _opts |  | ||||||
| 		} |  | ||||||
| 		return f.Name, _def, _opts |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	switch { |  | ||||||
| 	case isDuration(t) || isTime(t) || isBool(t) || isString(t) || isFloat(t) || isInt(t) || isUint(t): |  | ||||||
| 		s, err := value() |  | ||||||
| 		if err != nil { |  | ||||||
| 			return err |  | ||||||
| 		} |  | ||||||
| 		val, err := conv(s, t) |  | ||||||
| 		if err != nil { |  | ||||||
| 			return err |  | ||||||
| 		} |  | ||||||
| 		v.Set(val) |  | ||||||
|  |  | ||||||
| 	case isPtr(t): |  | ||||||
| 		return dec(p, key, def, opts, v.Elem()) |  | ||||||
|  |  | ||||||
| 	case isStruct(t): |  | ||||||
| 		for i := 0; i < v.NumField(); i++ { |  | ||||||
| 			fv := v.Field(i) |  | ||||||
| 			fk, def, opts := keydef(t.Field(i)) |  | ||||||
| 			if !fv.CanSet() { |  | ||||||
| 				return fmt.Errorf("cannot set %s", t.Field(i).Name) |  | ||||||
| 			} |  | ||||||
| 			if fk == "-" { |  | ||||||
| 				continue |  | ||||||
| 			} |  | ||||||
| 			if key != "" { |  | ||||||
| 				fk = key + "." + fk |  | ||||||
| 			} |  | ||||||
| 			if err := dec(p, fk, def, opts, fv); err != nil { |  | ||||||
| 				return err |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
| 		return nil |  | ||||||
|  |  | ||||||
| 	case isArray(t): |  | ||||||
| 		val, err := value() |  | ||||||
| 		if err != nil { |  | ||||||
| 			return err |  | ||||||
| 		} |  | ||||||
| 		vals := split(val, ";") |  | ||||||
| 		a := reflect.MakeSlice(t, 0, len(vals)) |  | ||||||
| 		for _, s := range vals { |  | ||||||
| 			val, err := conv(s, t.Elem()) |  | ||||||
| 			if err != nil { |  | ||||||
| 				return err |  | ||||||
| 			} |  | ||||||
| 			a = reflect.Append(a, val) |  | ||||||
| 		} |  | ||||||
| 		v.Set(a) |  | ||||||
|  |  | ||||||
| 	case isMap(t): |  | ||||||
| 		valT := t.Elem() |  | ||||||
| 		m := reflect.MakeMap(t) |  | ||||||
| 		for postfix := range p.FilterStripPrefix(key + ".").m { |  | ||||||
| 			pp := strings.SplitN(postfix, ".", 2) |  | ||||||
| 			mk, mv := pp[0], reflect.New(valT) |  | ||||||
| 			if err := dec(p, key+"."+mk, nil, nil, mv); err != nil { |  | ||||||
| 				return err |  | ||||||
| 			} |  | ||||||
| 			m.SetMapIndex(reflect.ValueOf(mk), mv.Elem()) |  | ||||||
| 		} |  | ||||||
| 		v.Set(m) |  | ||||||
|  |  | ||||||
| 	default: |  | ||||||
| 		return fmt.Errorf("unsupported type %s", t) |  | ||||||
| 	} |  | ||||||
| 	return nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // split splits a string on sep, trims whitespace of elements |  | ||||||
| // and omits empty elements |  | ||||||
| func split(s string, sep string) []string { |  | ||||||
| 	var a []string |  | ||||||
| 	for _, v := range strings.Split(s, sep) { |  | ||||||
| 		if v = strings.TrimSpace(v); v != "" { |  | ||||||
| 			a = append(a, v) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| 	return a |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // parseTag parses a "key,k=v,k=v,..." |  | ||||||
| func parseTag(tag string) (key string, opts map[string]string) { |  | ||||||
| 	opts = map[string]string{} |  | ||||||
| 	for i, s := range strings.Split(tag, ",") { |  | ||||||
| 		if i == 0 { |  | ||||||
| 			key = s |  | ||||||
| 			continue |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		pp := strings.SplitN(s, "=", 2) |  | ||||||
| 		if len(pp) == 1 { |  | ||||||
| 			opts[pp[0]] = "" |  | ||||||
| 		} else { |  | ||||||
| 			opts[pp[0]] = pp[1] |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| 	return key, opts |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func isArray(t reflect.Type) bool    { return t.Kind() == reflect.Array || t.Kind() == reflect.Slice } |  | ||||||
| func isBool(t reflect.Type) bool     { return t.Kind() == reflect.Bool } |  | ||||||
| func isDuration(t reflect.Type) bool { return t == reflect.TypeOf(time.Second) } |  | ||||||
| func isMap(t reflect.Type) bool      { return t.Kind() == reflect.Map } |  | ||||||
| func isPtr(t reflect.Type) bool      { return t.Kind() == reflect.Ptr } |  | ||||||
| func isString(t reflect.Type) bool   { return t.Kind() == reflect.String } |  | ||||||
| func isStruct(t reflect.Type) bool   { return t.Kind() == reflect.Struct } |  | ||||||
| func isTime(t reflect.Type) bool     { return t == reflect.TypeOf(time.Time{}) } |  | ||||||
| func isFloat(t reflect.Type) bool { |  | ||||||
| 	return t.Kind() == reflect.Float32 || t.Kind() == reflect.Float64 |  | ||||||
| } |  | ||||||
| func isInt(t reflect.Type) bool { |  | ||||||
| 	return t.Kind() == reflect.Int || t.Kind() == reflect.Int8 || t.Kind() == reflect.Int16 || t.Kind() == reflect.Int32 || t.Kind() == reflect.Int64 |  | ||||||
| } |  | ||||||
| func isUint(t reflect.Type) bool { |  | ||||||
| 	return t.Kind() == reflect.Uint || t.Kind() == reflect.Uint8 || t.Kind() == reflect.Uint16 || t.Kind() == reflect.Uint32 || t.Kind() == reflect.Uint64 |  | ||||||
| } |  | ||||||
vendor/github.com/magiconair/properties/doc.go (generated, vendored, 156 lines removed)
@@ -1,156 +0,0 @@
| // Copyright 2018 Frank Schroeder. All rights reserved. |  | ||||||
| // Use of this source code is governed by a BSD-style |  | ||||||
| // license that can be found in the LICENSE file. |  | ||||||
|  |  | ||||||
| // Package properties provides functions for reading and writing |  | ||||||
| // ISO-8859-1 and UTF-8 encoded .properties files and has |  | ||||||
| // support for recursive property expansion. |  | ||||||
| // |  | ||||||
| // Java properties files are ISO-8859-1 encoded and use Unicode |  | ||||||
| // literals for characters outside the ISO character set. Unicode |  | ||||||
| // literals can be used in UTF-8 encoded properties files but |  | ||||||
| // aren't necessary. |  | ||||||
| // |  | ||||||
| // To load a single properties file use MustLoadFile(): |  | ||||||
| // |  | ||||||
| //   p := properties.MustLoadFile(filename, properties.UTF8) |  | ||||||
| // |  | ||||||
| // To load multiple properties files use MustLoadFiles() |  | ||||||
| // which loads the files in the given order and merges the |  | ||||||
| // result. Missing properties files can be ignored if the |  | ||||||
| // 'ignoreMissing' flag is set to true. |  | ||||||
| // |  | ||||||
| // Filenames can contain environment variables which are expanded |  | ||||||
| // before loading. |  | ||||||
| // |  | ||||||
| //   f1 := "/etc/myapp/myapp.conf" |  | ||||||
| //   f2 := "/home/${USER}/myapp.conf" |  | ||||||
| //   p := MustLoadFiles([]string{f1, f2}, properties.UTF8, true) |  | ||||||
| // |  | ||||||
| // All of the different key/value delimiters ' ', ':' and '=' are |  | ||||||
| // supported as well as the comment characters '!' and '#' and |  | ||||||
| // multi-line values. |  | ||||||
| // |  | ||||||
| //   ! this is a comment |  | ||||||
| //   # and so is this |  | ||||||
| // |  | ||||||
| //   # the following expressions are equal |  | ||||||
| //   key value |  | ||||||
| //   key=value |  | ||||||
| //   key:value |  | ||||||
| //   key = value |  | ||||||
| //   key : value |  | ||||||
| //   key = val\ |  | ||||||
| //         ue |  | ||||||
| // |  | ||||||
| // Properties stores all comments preceding a key and provides |  | ||||||
| // GetComments() and SetComments() methods to retrieve and |  | ||||||
| // update them. The convenience functions GetComment() and |  | ||||||
| // SetComment() allow access to the last comment. The |  | ||||||
| // WriteComment() method writes properties files including |  | ||||||
| // the comments and with the keys in the original order. |  | ||||||
| // This can be used for sanitizing properties files. |  | ||||||
| // |  | ||||||
| // Property expansion is recursive and circular references |  | ||||||
| // and malformed expressions are not allowed and cause an |  | ||||||
| // error. Expansion of environment variables is supported. |  | ||||||
| // |  | ||||||
| //   # standard property |  | ||||||
| //   key = value |  | ||||||
| // |  | ||||||
| //   # property expansion: key2 = value |  | ||||||
| //   key2 = ${key} |  | ||||||
| // |  | ||||||
| //   # recursive expansion: key3 = value |  | ||||||
| //   key3 = ${key2} |  | ||||||
| // |  | ||||||
| //   # circular reference (error) |  | ||||||
| //   key = ${key} |  | ||||||
| // |  | ||||||
| //   # malformed expression (error) |  | ||||||
| //   key = ${ke |  | ||||||
| // |  | ||||||
| //   # refers to the user's home dir |  | ||||||
| //   home = ${HOME} |  | ||||||
| // |  | ||||||
| //   # local key takes precedence over env var: u = foo |  | ||||||
| //   USER = foo |  | ||||||
| //   u = ${USER} |  | ||||||
| // |  | ||||||
| // The default property expansion format is ${key} but can be |  | ||||||
| // changed by setting different pre- and postfix values on the |  | ||||||
| // Properties object. |  | ||||||
| // |  | ||||||
| //   p := properties.NewProperties() |  | ||||||
| //   p.Prefix = "#[" |  | ||||||
| //   p.Postfix = "]#" |  | ||||||
| // |  | ||||||
| // Properties provides convenience functions for getting typed |  | ||||||
| // values with default values if the key does not exist or the |  | ||||||
| // type conversion failed. |  | ||||||
| // |  | ||||||
| //   # Returns true if the value is either "1", "on", "yes" or "true" |  | ||||||
| //   # Returns false for every other value and the default value if |  | ||||||
| //   # the key does not exist. |  | ||||||
| //   v = p.GetBool("key", false) |  | ||||||
| // |  | ||||||
| //   # Returns the value if the key exists and the format conversion |  | ||||||
| //   # was successful. Otherwise, the default value is returned. |  | ||||||
| //   v = p.GetInt64("key", 999) |  | ||||||
| //   v = p.GetUint64("key", 999) |  | ||||||
| //   v = p.GetFloat64("key", 123.0) |  | ||||||
| //   v = p.GetString("key", "def") |  | ||||||
| //   v = p.GetDuration("key", 999) |  | ||||||
| // |  | ||||||
| // As an alternative properties may be applied with the standard |  | ||||||
| // library's flag implementation at any time. |  | ||||||
| // |  | ||||||
| //   # Standard configuration |  | ||||||
| //   v = flag.Int("key", 999, "help message") |  | ||||||
| //   flag.Parse() |  | ||||||
| // |  | ||||||
| //   # Merge p into the flag set |  | ||||||
| //   p.MustFlag(flag.CommandLine) |  | ||||||
| // |  | ||||||
| // Properties provides several MustXXX() convenience functions |  | ||||||
| // which will terminate the app if an error occurs. The behavior |  | ||||||
| // of the failure is configurable and the default is to call |  | ||||||
| // log.Fatal(err). To have the MustXXX() functions panic instead |  | ||||||
| // of logging the error set a different ErrorHandler before |  | ||||||
| // you use the Properties package. |  | ||||||
| // |  | ||||||
| //   properties.ErrorHandler = properties.PanicHandler |  | ||||||
| // |  | ||||||
| //   # Will panic instead of logging an error |  | ||||||
| //   p := properties.MustLoadFile("config.properties") |  | ||||||
| // |  | ||||||
| // You can also provide your own ErrorHandler function. The only requirement |  | ||||||
| // is that the error handler function must exit after handling the error. |  | ||||||
| // |  | ||||||
| //   properties.ErrorHandler = func(err error) { |  | ||||||
| //	     fmt.Println(err) |  | ||||||
| //       os.Exit(1) |  | ||||||
| //   } |  | ||||||
| // |  | ||||||
| //   # Will write to stdout and then exit |  | ||||||
| //   p := properties.MustLoadFile("config.properties") |  | ||||||
| // |  | ||||||
| // Properties can also be loaded into a struct via the `Decode` |  | ||||||
| // method, e.g. |  | ||||||
| // |  | ||||||
| //   type S struct { |  | ||||||
| //       A string        `properties:"a,default=foo"` |  | ||||||
| //       D time.Duration `properties:"timeout,default=5s"` |  | ||||||
| //       E time.Time     `properties:"expires,layout=2006-01-02,default=2015-01-01"` |  | ||||||
| //   } |  | ||||||
| // |  | ||||||
| // See `Decode()` method for the full documentation. |  | ||||||
| // |  | ||||||
| // The following documents provide a description of the properties |  | ||||||
| // file format. |  | ||||||
| // |  | ||||||
| // http://en.wikipedia.org/wiki/.properties |  | ||||||
| // |  | ||||||
| // http://docs.oracle.com/javase/7/docs/api/java/util/Properties.html#load%28java.io.Reader%29 |  | ||||||
| // |  | ||||||
| package properties |  | ||||||
vendor/github.com/magiconair/properties/go.mod (generated, vendored, 1 line removed)
@@ -1 +0,0 @@
| module github.com/magiconair/properties |  | ||||||
vendor/github.com/magiconair/properties/integrate.go (generated, vendored, 34 lines removed)
@@ -1,34 +0,0 @@
| // Copyright 2018 Frank Schroeder. All rights reserved. |  | ||||||
| // Use of this source code is governed by a BSD-style |  | ||||||
| // license that can be found in the LICENSE file. |  | ||||||
|  |  | ||||||
| package properties |  | ||||||
|  |  | ||||||
| import "flag" |  | ||||||
|  |  | ||||||
| // MustFlag sets flags that are skipped by dst.Parse when p contains |  | ||||||
| // the respective key for flag.Flag.Name. |  | ||||||
| // |  | ||||||
| // Its use is recommended with command line arguments, as in: |  | ||||||
| // 	flag.Parse() |  | ||||||
| // 	p.MustFlag(flag.CommandLine) |  | ||||||
| func (p *Properties) MustFlag(dst *flag.FlagSet) { |  | ||||||
| 	m := make(map[string]*flag.Flag) |  | ||||||
| 	dst.VisitAll(func(f *flag.Flag) { |  | ||||||
| 		m[f.Name] = f |  | ||||||
| 	}) |  | ||||||
| 	dst.Visit(func(f *flag.Flag) { |  | ||||||
| 		delete(m, f.Name) // overridden |  | ||||||
| 	}) |  | ||||||
|  |  | ||||||
| 	for name, f := range m { |  | ||||||
| 		v, ok := p.Get(name) |  | ||||||
| 		if !ok { |  | ||||||
| 			continue |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if err := f.Value.Set(v); err != nil { |  | ||||||
| 			ErrorHandler(err) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
vendor/github.com/magiconair/properties/lex.go (generated, vendored, 407 lines removed)
@@ -1,407 +0,0 @@
| // Copyright 2018 Frank Schroeder. All rights reserved. |  | ||||||
| // Use of this source code is governed by a BSD-style |  | ||||||
| // license that can be found in the LICENSE file. |  | ||||||
| // |  | ||||||
| // Parts of the lexer are from the template/text/parser package |  | ||||||
| // For these parts the following applies: |  | ||||||
| // |  | ||||||
| // Copyright 2011 The Go Authors. All rights reserved. |  | ||||||
| // Use of this source code is governed by a BSD-style |  | ||||||
| // license that can be found in the LICENSE file of the go 1.2 |  | ||||||
| // distribution. |  | ||||||
|  |  | ||||||
| package properties |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"fmt" |  | ||||||
| 	"strconv" |  | ||||||
| 	"strings" |  | ||||||
| 	"unicode/utf8" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| // item represents a token or text string returned from the scanner. |  | ||||||
| type item struct { |  | ||||||
| 	typ itemType // The type of this item. |  | ||||||
| 	pos int      // The starting position, in bytes, of this item in the input string. |  | ||||||
| 	val string   // The value of this item. |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (i item) String() string { |  | ||||||
| 	switch { |  | ||||||
| 	case i.typ == itemEOF: |  | ||||||
| 		return "EOF" |  | ||||||
| 	case i.typ == itemError: |  | ||||||
| 		return i.val |  | ||||||
| 	case len(i.val) > 10: |  | ||||||
| 		return fmt.Sprintf("%.10q...", i.val) |  | ||||||
| 	} |  | ||||||
| 	return fmt.Sprintf("%q", i.val) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // itemType identifies the type of lex items. |  | ||||||
| type itemType int |  | ||||||
|  |  | ||||||
| const ( |  | ||||||
| 	itemError itemType = iota // error occurred; value is text of error |  | ||||||
| 	itemEOF |  | ||||||
| 	itemKey     // a key |  | ||||||
| 	itemValue   // a value |  | ||||||
| 	itemComment // a comment |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| // defines a constant for EOF |  | ||||||
| const eof = -1 |  | ||||||
|  |  | ||||||
| // permitted whitespace characters space, FF and TAB |  | ||||||
| const whitespace = " \f\t" |  | ||||||
|  |  | ||||||
| // stateFn represents the state of the scanner as a function that returns the next state. |  | ||||||
| type stateFn func(*lexer) stateFn |  | ||||||
|  |  | ||||||
| // lexer holds the state of the scanner. |  | ||||||
| type lexer struct { |  | ||||||
| 	input   string    // the string being scanned |  | ||||||
| 	state   stateFn   // the next lexing function to enter |  | ||||||
| 	pos     int       // current position in the input |  | ||||||
| 	start   int       // start position of this item |  | ||||||
| 	width   int       // width of last rune read from input |  | ||||||
| 	lastPos int       // position of most recent item returned by nextItem |  | ||||||
| 	runes   []rune    // scanned runes for this item |  | ||||||
| 	items   chan item // channel of scanned items |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // next returns the next rune in the input. |  | ||||||
| func (l *lexer) next() rune { |  | ||||||
| 	if l.pos >= len(l.input) { |  | ||||||
| 		l.width = 0 |  | ||||||
| 		return eof |  | ||||||
| 	} |  | ||||||
| 	r, w := utf8.DecodeRuneInString(l.input[l.pos:]) |  | ||||||
| 	l.width = w |  | ||||||
| 	l.pos += l.width |  | ||||||
| 	return r |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // peek returns but does not consume the next rune in the input. |  | ||||||
| func (l *lexer) peek() rune { |  | ||||||
| 	r := l.next() |  | ||||||
| 	l.backup() |  | ||||||
| 	return r |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // backup steps back one rune. Can only be called once per call of next. |  | ||||||
| func (l *lexer) backup() { |  | ||||||
| 	l.pos -= l.width |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // emit passes an item back to the client. |  | ||||||
| func (l *lexer) emit(t itemType) { |  | ||||||
| 	i := item{t, l.start, string(l.runes)} |  | ||||||
| 	l.items <- i |  | ||||||
| 	l.start = l.pos |  | ||||||
| 	l.runes = l.runes[:0] |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ignore skips over the pending input before this point. |  | ||||||
| func (l *lexer) ignore() { |  | ||||||
| 	l.start = l.pos |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // appends the rune to the current value |  | ||||||
| func (l *lexer) appendRune(r rune) { |  | ||||||
| 	l.runes = append(l.runes, r) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // accept consumes the next rune if it's from the valid set. |  | ||||||
| func (l *lexer) accept(valid string) bool { |  | ||||||
| 	if strings.ContainsRune(valid, l.next()) { |  | ||||||
| 		return true |  | ||||||
| 	} |  | ||||||
| 	l.backup() |  | ||||||
| 	return false |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // acceptRun consumes a run of runes from the valid set. |  | ||||||
| func (l *lexer) acceptRun(valid string) { |  | ||||||
| 	for strings.ContainsRune(valid, l.next()) { |  | ||||||
| 	} |  | ||||||
| 	l.backup() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // acceptRunUntil consumes a run of runes up to a terminator. |  | ||||||
| func (l *lexer) acceptRunUntil(term rune) { |  | ||||||
| 	for term != l.next() { |  | ||||||
| 	} |  | ||||||
| 	l.backup() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // isNotEmpty returns true if the current parsed text is not empty. |  | ||||||
| func (l *lexer) isNotEmpty() bool { |  | ||||||
| 	return l.pos > l.start |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // lineNumber reports which line we're on, based on the position of |  | ||||||
| // the previous item returned by nextItem. Doing it this way |  | ||||||
| // means we don't have to worry about peek double counting. |  | ||||||
| func (l *lexer) lineNumber() int { |  | ||||||
| 	return 1 + strings.Count(l.input[:l.lastPos], "\n") |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // errorf returns an error token and terminates the scan by passing |  | ||||||
| // back a nil pointer that will be the next state, terminating l.nextItem. |  | ||||||
| func (l *lexer) errorf(format string, args ...interface{}) stateFn { |  | ||||||
| 	l.items <- item{itemError, l.start, fmt.Sprintf(format, args...)} |  | ||||||
| 	return nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // nextItem returns the next item from the input. |  | ||||||
| func (l *lexer) nextItem() item { |  | ||||||
| 	i := <-l.items |  | ||||||
| 	l.lastPos = i.pos |  | ||||||
| 	return i |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // lex creates a new scanner for the input string. |  | ||||||
| func lex(input string) *lexer { |  | ||||||
| 	l := &lexer{ |  | ||||||
| 		input: input, |  | ||||||
| 		items: make(chan item), |  | ||||||
| 		runes: make([]rune, 0, 32), |  | ||||||
| 	} |  | ||||||
| 	go l.run() |  | ||||||
| 	return l |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // run runs the state machine for the lexer. |  | ||||||
| func (l *lexer) run() { |  | ||||||
| 	for l.state = lexBeforeKey(l); l.state != nil; { |  | ||||||
| 		l.state = l.state(l) |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // state functions |  | ||||||
|  |  | ||||||
| // lexBeforeKey scans until a key begins. |  | ||||||
| func lexBeforeKey(l *lexer) stateFn { |  | ||||||
| 	switch r := l.next(); { |  | ||||||
| 	case isEOF(r): |  | ||||||
| 		l.emit(itemEOF) |  | ||||||
| 		return nil |  | ||||||
|  |  | ||||||
| 	case isEOL(r): |  | ||||||
| 		l.ignore() |  | ||||||
| 		return lexBeforeKey |  | ||||||
|  |  | ||||||
| 	case isComment(r): |  | ||||||
| 		return lexComment |  | ||||||
|  |  | ||||||
| 	case isWhitespace(r): |  | ||||||
| 		l.ignore() |  | ||||||
| 		return lexBeforeKey |  | ||||||
|  |  | ||||||
| 	default: |  | ||||||
| 		l.backup() |  | ||||||
| 		return lexKey |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // lexComment scans a comment line. The comment character has already been scanned. |  | ||||||
| func lexComment(l *lexer) stateFn { |  | ||||||
| 	l.acceptRun(whitespace) |  | ||||||
| 	l.ignore() |  | ||||||
| 	for { |  | ||||||
| 		switch r := l.next(); { |  | ||||||
| 		case isEOF(r): |  | ||||||
| 			l.ignore() |  | ||||||
| 			l.emit(itemEOF) |  | ||||||
| 			return nil |  | ||||||
| 		case isEOL(r): |  | ||||||
| 			l.emit(itemComment) |  | ||||||
| 			return lexBeforeKey |  | ||||||
| 		default: |  | ||||||
| 			l.appendRune(r) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // lexKey scans the key up to a delimiter |  | ||||||
| func lexKey(l *lexer) stateFn { |  | ||||||
| 	var r rune |  | ||||||
|  |  | ||||||
| Loop: |  | ||||||
| 	for { |  | ||||||
| 		switch r = l.next(); { |  | ||||||
|  |  | ||||||
| 		case isEscape(r): |  | ||||||
| 			err := l.scanEscapeSequence() |  | ||||||
| 			if err != nil { |  | ||||||
| 				return l.errorf(err.Error()) |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 		case isEndOfKey(r): |  | ||||||
| 			l.backup() |  | ||||||
| 			break Loop |  | ||||||
|  |  | ||||||
| 		case isEOF(r): |  | ||||||
| 			break Loop |  | ||||||
|  |  | ||||||
| 		default: |  | ||||||
| 			l.appendRune(r) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	if len(l.runes) > 0 { |  | ||||||
| 		l.emit(itemKey) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	if isEOF(r) { |  | ||||||
| 		l.emit(itemEOF) |  | ||||||
| 		return nil |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return lexBeforeValue |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // lexBeforeValue scans the delimiter between key and value. |  | ||||||
| // Leading and trailing whitespace is ignored. |  | ||||||
| // We expect to be just after the key. |  | ||||||
| func lexBeforeValue(l *lexer) stateFn { |  | ||||||
| 	l.acceptRun(whitespace) |  | ||||||
| 	l.accept(":=") |  | ||||||
| 	l.acceptRun(whitespace) |  | ||||||
| 	l.ignore() |  | ||||||
| 	return lexValue |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // lexValue scans text until the end of the line. We expect to be just after the delimiter. |  | ||||||
| func lexValue(l *lexer) stateFn { |  | ||||||
| 	for { |  | ||||||
| 		switch r := l.next(); { |  | ||||||
| 		case isEscape(r): |  | ||||||
| 			if isEOL(l.peek()) { |  | ||||||
| 				l.next() |  | ||||||
| 				l.acceptRun(whitespace) |  | ||||||
| 			} else { |  | ||||||
| 				err := l.scanEscapeSequence() |  | ||||||
| 				if err != nil { |  | ||||||
| 					return l.errorf(err.Error()) |  | ||||||
| 				} |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 		case isEOL(r): |  | ||||||
| 			l.emit(itemValue) |  | ||||||
| 			l.ignore() |  | ||||||
| 			return lexBeforeKey |  | ||||||
|  |  | ||||||
| 		case isEOF(r): |  | ||||||
| 			l.emit(itemValue) |  | ||||||
| 			l.emit(itemEOF) |  | ||||||
| 			return nil |  | ||||||
|  |  | ||||||
| 		default: |  | ||||||
| 			l.appendRune(r) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // scanEscapeSequence scans either one of the escaped characters |  | ||||||
| // or a unicode literal. We expect to be after the escape character. |  | ||||||
| func (l *lexer) scanEscapeSequence() error { |  | ||||||
| 	switch r := l.next(); { |  | ||||||
|  |  | ||||||
| 	case isEscapedCharacter(r): |  | ||||||
| 		l.appendRune(decodeEscapedCharacter(r)) |  | ||||||
| 		return nil |  | ||||||
|  |  | ||||||
| 	case atUnicodeLiteral(r): |  | ||||||
| 		return l.scanUnicodeLiteral() |  | ||||||
|  |  | ||||||
| 	case isEOF(r): |  | ||||||
| 		return fmt.Errorf("premature EOF") |  | ||||||
|  |  | ||||||
| 	// silently drop the escape character and append the rune as is |  | ||||||
| 	default: |  | ||||||
| 		l.appendRune(r) |  | ||||||
| 		return nil |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // scans a unicode literal in the form \uXXXX. We expect to be after the \u. |  | ||||||
| func (l *lexer) scanUnicodeLiteral() error { |  | ||||||
| 	// scan the digits |  | ||||||
| 	d := make([]rune, 4) |  | ||||||
| 	for i := 0; i < 4; i++ { |  | ||||||
| 		d[i] = l.next() |  | ||||||
| 		if d[i] == eof || !strings.ContainsRune("0123456789abcdefABCDEF", d[i]) { |  | ||||||
| 			return fmt.Errorf("invalid unicode literal") |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// decode the digits into a rune |  | ||||||
| 	r, err := strconv.ParseInt(string(d), 16, 0) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return err |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	l.appendRune(rune(r)) |  | ||||||
| 	return nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // decodeEscapedCharacter returns the unescaped rune. We expect to be after the escape character. |  | ||||||
| func decodeEscapedCharacter(r rune) rune { |  | ||||||
| 	switch r { |  | ||||||
| 	case 'f': |  | ||||||
| 		return '\f' |  | ||||||
| 	case 'n': |  | ||||||
| 		return '\n' |  | ||||||
| 	case 'r': |  | ||||||
| 		return '\r' |  | ||||||
| 	case 't': |  | ||||||
| 		return '\t' |  | ||||||
| 	default: |  | ||||||
| 		return r |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // atUnicodeLiteral reports whether we are at a unicode literal. |  | ||||||
| // The escape character has already been consumed. |  | ||||||
| func atUnicodeLiteral(r rune) bool { |  | ||||||
| 	return r == 'u' |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // isComment reports whether we are at the start of a comment. |  | ||||||
| func isComment(r rune) bool { |  | ||||||
| 	return r == '#' || r == '!' |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // isEndOfKey reports whether the rune terminates the current key. |  | ||||||
| func isEndOfKey(r rune) bool { |  | ||||||
| 	return strings.ContainsRune(" \f\t\r\n:=", r) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // isEOF reports whether we are at EOF. |  | ||||||
| func isEOF(r rune) bool { |  | ||||||
| 	return r == eof |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // isEOL reports whether we are at a new line character. |  | ||||||
| func isEOL(r rune) bool { |  | ||||||
| 	return r == '\n' || r == '\r' |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // isEscape reports whether the rune is the escape character which |  | ||||||
| // prefixes unicode literals and other escaped characters. |  | ||||||
| func isEscape(r rune) bool { |  | ||||||
| 	return r == '\\' |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // isEscapedCharacter reports whether we are at one of the characters that need escaping. |  | ||||||
| // The escape character has already been consumed. |  | ||||||
| func isEscapedCharacter(r rune) bool { |  | ||||||
| 	return strings.ContainsRune(" :=fnrt", r) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // isWhitespace reports whether the rune is a whitespace character. |  | ||||||
| func isWhitespace(r rune) bool { |  | ||||||
| 	return strings.ContainsRune(whitespace, r) |  | ||||||
| } |  | ||||||
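lex.go follows the text/template-style state-function lexer: each state is a function that consumes input, emits items on a channel, and returns the next state, and run() loops until a state returns nil. A small self-contained sketch of the same pattern (the scanner and token names below are illustrative, not part of this package):

package main

import "fmt"

type tokenType int

const (
	tokWord tokenType = iota
	tokEOF
)

type token struct {
	typ tokenType
	val string
}

// state consumes some input and returns the next state, or nil to stop.
type state func(*scanner) state

type scanner struct {
	input  string
	pos    int
	tokens chan token
}

// scanWords emits one space-separated word per call.
func scanWords(s *scanner) state {
	start := s.pos
	for s.pos < len(s.input) && s.input[s.pos] != ' ' {
		s.pos++
	}
	if s.pos > start {
		s.tokens <- token{tokWord, s.input[start:s.pos]}
	}
	if s.pos == len(s.input) {
		s.tokens <- token{tokEOF, ""}
		return nil // terminates the state machine
	}
	s.pos++ // skip the separator
	return scanWords
}

// run drives the state machine in a goroutine, like lexer.run above.
func run(input string) <-chan token {
	s := &scanner{input: input, tokens: make(chan token)}
	go func() {
		for st := state(scanWords); st != nil; {
			st = st(s)
		}
		close(s.tokens)
	}()
	return s.tokens
}

func main() {
	for t := range run("key = value") {
		fmt.Printf("%d %q\n", t.typ, t.val)
	}
}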
							
								
								
									
292  vendor/github.com/magiconair/properties/load.go  generated  vendored
							| @@ -1,292 +0,0 @@ | |||||||
| // Copyright 2018 Frank Schroeder. All rights reserved. |  | ||||||
| // Use of this source code is governed by a BSD-style |  | ||||||
| // license that can be found in the LICENSE file. |  | ||||||
|  |  | ||||||
| package properties |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"fmt" |  | ||||||
| 	"io/ioutil" |  | ||||||
| 	"net/http" |  | ||||||
| 	"os" |  | ||||||
| 	"strings" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| // Encoding specifies encoding of the input data. |  | ||||||
| type Encoding uint |  | ||||||
|  |  | ||||||
| const ( |  | ||||||
| 	// utf8Default is a private placeholder for the zero value of Encoding to |  | ||||||
| 	// ensure that it has the correct meaning. UTF8 is the default encoding but |  | ||||||
| 	// was assigned a non-zero value which cannot be changed without breaking |  | ||||||
| 	// existing code. Clients should continue to use the public constants. |  | ||||||
| 	utf8Default Encoding = iota |  | ||||||
|  |  | ||||||
| 	// UTF8 interprets the input data as UTF-8. |  | ||||||
| 	UTF8 |  | ||||||
|  |  | ||||||
| 	// ISO_8859_1 interprets the input data as ISO-8859-1. |  | ||||||
| 	ISO_8859_1 |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| type Loader struct { |  | ||||||
| 	// Encoding determines how the data from files and byte buffers |  | ||||||
| 	// is interpreted. For URLs the Content-Type header is used |  | ||||||
| 	// to determine the encoding of the data. |  | ||||||
| 	Encoding Encoding |  | ||||||
|  |  | ||||||
| 	// DisableExpansion configures the property expansion of the |  | ||||||
| 	// returned property object. When set to true, the property values |  | ||||||
| 	// will not be expanded and the Property object will not be checked |  | ||||||
| 	// for invalid expansion expressions. |  | ||||||
| 	DisableExpansion bool |  | ||||||
|  |  | ||||||
| 	// IgnoreMissing configures whether missing files or URLs which return |  | ||||||
| 	// 404 are reported as errors. When set to true, missing files and 404 |  | ||||||
| 	// status codes are not reported as errors. |  | ||||||
| 	IgnoreMissing bool |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Load reads a buffer into a Properties struct. |  | ||||||
| func (l *Loader) LoadBytes(buf []byte) (*Properties, error) { |  | ||||||
| 	return l.loadBytes(buf, l.Encoding) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // LoadAll reads the content of multiple URLs or files in the given order into |  | ||||||
| // a Properties struct. If IgnoreMissing is true then a 404 status code or |  | ||||||
| // missing file will not be reported as error. Encoding sets the encoding for |  | ||||||
| // files. For the URLs see LoadURL for the Content-Type header and the |  | ||||||
| // encoding. |  | ||||||
| func (l *Loader) LoadAll(names []string) (*Properties, error) { |  | ||||||
| 	all := NewProperties() |  | ||||||
| 	for _, name := range names { |  | ||||||
| 		n, err := expandName(name) |  | ||||||
| 		if err != nil { |  | ||||||
| 			return nil, err |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		var p *Properties |  | ||||||
| 		switch { |  | ||||||
| 		case strings.HasPrefix(n, "http://"): |  | ||||||
| 			p, err = l.LoadURL(n) |  | ||||||
| 		case strings.HasPrefix(n, "https://"): |  | ||||||
| 			p, err = l.LoadURL(n) |  | ||||||
| 		default: |  | ||||||
| 			p, err = l.LoadFile(n) |  | ||||||
| 		} |  | ||||||
| 		if err != nil { |  | ||||||
| 			return nil, err |  | ||||||
| 		} |  | ||||||
| 		all.Merge(p) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	all.DisableExpansion = l.DisableExpansion |  | ||||||
| 	if all.DisableExpansion { |  | ||||||
| 		return all, nil |  | ||||||
| 	} |  | ||||||
| 	return all, all.check() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // LoadFile reads a file into a Properties struct. |  | ||||||
| // If IgnoreMissing is true then a missing file will not be |  | ||||||
| // reported as error. |  | ||||||
| func (l *Loader) LoadFile(filename string) (*Properties, error) { |  | ||||||
| 	data, err := ioutil.ReadFile(filename) |  | ||||||
| 	if err != nil { |  | ||||||
| 		if l.IgnoreMissing && os.IsNotExist(err) { |  | ||||||
| 			LogPrintf("properties: %s not found. skipping", filename) |  | ||||||
| 			return NewProperties(), nil |  | ||||||
| 		} |  | ||||||
| 		return nil, err |  | ||||||
| 	} |  | ||||||
| 	return l.loadBytes(data, l.Encoding) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // LoadURL reads the content of the URL into a Properties struct. |  | ||||||
| // |  | ||||||
| // The encoding is determined via the Content-Type header which |  | ||||||
| // should be set to 'text/plain'. If the 'charset' parameter is |  | ||||||
| // missing, 'iso-8859-1' or 'latin1' the encoding is set to |  | ||||||
| // ISO-8859-1. If the 'charset' parameter is set to 'utf-8' the |  | ||||||
| // encoding is set to UTF-8. A missing content type header is |  | ||||||
| // interpreted as 'text/plain; charset=utf-8'. |  | ||||||
| func (l *Loader) LoadURL(url string) (*Properties, error) { |  | ||||||
| 	resp, err := http.Get(url) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return nil, fmt.Errorf("properties: error fetching %q. %s", url, err) |  | ||||||
| 	} |  | ||||||
| 	defer resp.Body.Close() |  | ||||||
|  |  | ||||||
| 	if resp.StatusCode == 404 && l.IgnoreMissing { |  | ||||||
| 		LogPrintf("properties: %s returned %d. skipping", url, resp.StatusCode) |  | ||||||
| 		return NewProperties(), nil |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	if resp.StatusCode != 200 { |  | ||||||
| 		return nil, fmt.Errorf("properties: %s returned %d", url, resp.StatusCode) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	body, err := ioutil.ReadAll(resp.Body) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return nil, fmt.Errorf("properties: %s error reading response. %s", url, err) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	ct := resp.Header.Get("Content-Type") |  | ||||||
| 	var enc Encoding |  | ||||||
| 	switch strings.ToLower(ct) { |  | ||||||
| 	case "text/plain", "text/plain; charset=iso-8859-1", "text/plain; charset=latin1": |  | ||||||
| 		enc = ISO_8859_1 |  | ||||||
| 	case "", "text/plain; charset=utf-8": |  | ||||||
| 		enc = UTF8 |  | ||||||
| 	default: |  | ||||||
| 		return nil, fmt.Errorf("properties: invalid content type %s", ct) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return l.loadBytes(body, enc) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *Loader) loadBytes(buf []byte, enc Encoding) (*Properties, error) { |  | ||||||
| 	p, err := parse(convert(buf, enc)) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return nil, err |  | ||||||
| 	} |  | ||||||
| 	p.DisableExpansion = l.DisableExpansion |  | ||||||
| 	if p.DisableExpansion { |  | ||||||
| 		return p, nil |  | ||||||
| 	} |  | ||||||
| 	return p, p.check() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Load reads a buffer into a Properties struct. |  | ||||||
| func Load(buf []byte, enc Encoding) (*Properties, error) { |  | ||||||
| 	l := &Loader{Encoding: enc} |  | ||||||
| 	return l.LoadBytes(buf) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // LoadString reads a UTF8 string into a Properties struct. |  | ||||||
| func LoadString(s string) (*Properties, error) { |  | ||||||
| 	l := &Loader{Encoding: UTF8} |  | ||||||
| 	return l.LoadBytes([]byte(s)) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // LoadMap creates a new Properties struct from a string map. |  | ||||||
| func LoadMap(m map[string]string) *Properties { |  | ||||||
| 	p := NewProperties() |  | ||||||
| 	for k, v := range m { |  | ||||||
| 		p.Set(k, v) |  | ||||||
| 	} |  | ||||||
| 	return p |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // LoadFile reads a file into a Properties struct. |  | ||||||
| func LoadFile(filename string, enc Encoding) (*Properties, error) { |  | ||||||
| 	l := &Loader{Encoding: enc} |  | ||||||
| 	return l.LoadAll([]string{filename}) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // LoadFiles reads multiple files in the given order into |  | ||||||
| // a Properties struct. If 'ignoreMissing' is true then |  | ||||||
| // non-existent files will not be reported as error. |  | ||||||
| func LoadFiles(filenames []string, enc Encoding, ignoreMissing bool) (*Properties, error) { |  | ||||||
| 	l := &Loader{Encoding: enc, IgnoreMissing: ignoreMissing} |  | ||||||
| 	return l.LoadAll(filenames) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // LoadURL reads the content of the URL into a Properties struct. |  | ||||||
| // See Loader#LoadURL for details. |  | ||||||
| func LoadURL(url string) (*Properties, error) { |  | ||||||
| 	l := &Loader{Encoding: UTF8} |  | ||||||
| 	return l.LoadAll([]string{url}) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // LoadURLs reads the content of multiple URLs in the given order into a |  | ||||||
| // Properties struct. If IgnoreMissing is true then a 404 status code will |  | ||||||
| // not be reported as error. See Loader#LoadURL for the Content-Type header |  | ||||||
| // and the encoding. |  | ||||||
| func LoadURLs(urls []string, ignoreMissing bool) (*Properties, error) { |  | ||||||
| 	l := &Loader{Encoding: UTF8, IgnoreMissing: ignoreMissing} |  | ||||||
| 	return l.LoadAll(urls) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // LoadAll reads the content of multiple URLs or files in the given order into a |  | ||||||
| // Properties struct. If 'ignoreMissing' is true then a 404 status code or missing file will |  | ||||||
| // not be reported as error. Encoding sets the encoding for files. For the URLs please see |  | ||||||
| // LoadURL for the Content-Type header and the encoding. |  | ||||||
| func LoadAll(names []string, enc Encoding, ignoreMissing bool) (*Properties, error) { |  | ||||||
| 	l := &Loader{Encoding: enc, IgnoreMissing: ignoreMissing} |  | ||||||
| 	return l.LoadAll(names) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // MustLoadString reads a UTF8 string into a Properties struct and |  | ||||||
| // panics on error. |  | ||||||
| func MustLoadString(s string) *Properties { |  | ||||||
| 	return must(LoadString(s)) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // MustLoadFile reads a file into a Properties struct and |  | ||||||
| // panics on error. |  | ||||||
| func MustLoadFile(filename string, enc Encoding) *Properties { |  | ||||||
| 	return must(LoadFile(filename, enc)) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // MustLoadFiles reads multiple files in the given order into |  | ||||||
| // a Properties struct and panics on error. If 'ignoreMissing' |  | ||||||
| // is true then non-existent files will not be reported as error. |  | ||||||
| func MustLoadFiles(filenames []string, enc Encoding, ignoreMissing bool) *Properties { |  | ||||||
| 	return must(LoadFiles(filenames, enc, ignoreMissing)) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // MustLoadURL reads the content of a URL into a Properties struct and |  | ||||||
| // panics on error. |  | ||||||
| func MustLoadURL(url string) *Properties { |  | ||||||
| 	return must(LoadURL(url)) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // MustLoadURLs reads the content of multiple URLs in the given order into a |  | ||||||
| // Properties struct and panics on error. If 'ignoreMissing' is true then a 404 |  | ||||||
| // status code will not be reported as error. |  | ||||||
| func MustLoadURLs(urls []string, ignoreMissing bool) *Properties { |  | ||||||
| 	return must(LoadURLs(urls, ignoreMissing)) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // MustLoadAll reads the content of multiple URLs or files in the given order into a |  | ||||||
| // Properties struct. If 'ignoreMissing' is true then a 404 status code or missing file will |  | ||||||
| // not be reported as error. Encoding sets the encoding for files. For the URLs please see |  | ||||||
| // LoadURL for the Content-Type header and the encoding. It panics on error. |  | ||||||
| func MustLoadAll(names []string, enc Encoding, ignoreMissing bool) *Properties { |  | ||||||
| 	return must(LoadAll(names, enc, ignoreMissing)) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func must(p *Properties, err error) *Properties { |  | ||||||
| 	if err != nil { |  | ||||||
| 		ErrorHandler(err) |  | ||||||
| 	} |  | ||||||
| 	return p |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // expandName expands ${ENV_VAR} expressions in a name. |  | ||||||
| // If the environment variable does not exist then it will be replaced |  | ||||||
| // with an empty string. Malformed expressions like "${ENV_VAR" will |  | ||||||
| // be reported as error. |  | ||||||
| func expandName(name string) (string, error) { |  | ||||||
| 	return expand(name, []string{}, "${", "}", make(map[string]string)) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Interprets a byte buffer either as an ISO-8859-1 or UTF-8 encoded string. |  | ||||||
| // For ISO-8859-1 we can convert each byte straight into a rune since the |  | ||||||
| // first 256 unicode code points cover ISO-8859-1. |  | ||||||
| func convert(buf []byte, enc Encoding) string { |  | ||||||
| 	switch enc { |  | ||||||
| 	case utf8Default, UTF8: |  | ||||||
| 		return string(buf) |  | ||||||
| 	case ISO_8859_1: |  | ||||||
| 		runes := make([]rune, len(buf)) |  | ||||||
| 		for i, b := range buf { |  | ||||||
| 			runes[i] = rune(b) |  | ||||||
| 		} |  | ||||||
| 		return string(runes) |  | ||||||
| 	default: |  | ||||||
| 		ErrorHandler(fmt.Errorf("unsupported encoding %v", enc)) |  | ||||||
| 	} |  | ||||||
| 	panic("ErrorHandler should exit") |  | ||||||
| } |  | ||||||
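The package-level Load* functions above are thin wrappers around Loader, which controls the encoding used for files, whether ${...} expansion runs, and whether missing files or 404 responses are fatal. A short usage sketch; the file name, URL, and key below are placeholders invented for this example:

package main

import (
	"fmt"

	"github.com/magiconair/properties"
)

func main() {
	l := &properties.Loader{
		// Encoding applies to files and byte buffers; URLs are decoded
		// according to their Content-Type header instead.
		Encoding: properties.ISO_8859_1,
		// Missing files and 404 responses are skipped rather than fatal.
		IgnoreMissing: true,
	}

	// Sources are merged in order, so later sources take precedence.
	p, err := l.LoadAll([]string{
		"defaults.properties",                      // placeholder file name
		"http://config.example.com/app.properties", // placeholder URL
	})
	if err != nil {
		fmt.Println("load failed:", err)
		return
	}

	fmt.Println(p.GetString("greeting", "hello"))
}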
							
								
								
									
95  vendor/github.com/magiconair/properties/parser.go  generated  vendored
							| @@ -1,95 +0,0 @@ | |||||||
| // Copyright 2018 Frank Schroeder. All rights reserved. |  | ||||||
| // Use of this source code is governed by a BSD-style |  | ||||||
| // license that can be found in the LICENSE file. |  | ||||||
|  |  | ||||||
| package properties |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"fmt" |  | ||||||
| 	"runtime" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| type parser struct { |  | ||||||
| 	lex *lexer |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func parse(input string) (properties *Properties, err error) { |  | ||||||
| 	p := &parser{lex: lex(input)} |  | ||||||
| 	defer p.recover(&err) |  | ||||||
|  |  | ||||||
| 	properties = NewProperties() |  | ||||||
| 	key := "" |  | ||||||
| 	comments := []string{} |  | ||||||
|  |  | ||||||
| 	for { |  | ||||||
| 		token := p.expectOneOf(itemComment, itemKey, itemEOF) |  | ||||||
| 		switch token.typ { |  | ||||||
| 		case itemEOF: |  | ||||||
| 			goto done |  | ||||||
| 		case itemComment: |  | ||||||
| 			comments = append(comments, token.val) |  | ||||||
| 			continue |  | ||||||
| 		case itemKey: |  | ||||||
| 			key = token.val |  | ||||||
| 			if _, ok := properties.m[key]; !ok { |  | ||||||
| 				properties.k = append(properties.k, key) |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		token = p.expectOneOf(itemValue, itemEOF) |  | ||||||
| 		if len(comments) > 0 { |  | ||||||
| 			properties.c[key] = comments |  | ||||||
| 			comments = []string{} |  | ||||||
| 		} |  | ||||||
| 		switch token.typ { |  | ||||||
| 		case itemEOF: |  | ||||||
| 			properties.m[key] = "" |  | ||||||
| 			goto done |  | ||||||
| 		case itemValue: |  | ||||||
| 			properties.m[key] = token.val |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| done: |  | ||||||
| 	return properties, nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (p *parser) errorf(format string, args ...interface{}) { |  | ||||||
| 	format = fmt.Sprintf("properties: Line %d: %s", p.lex.lineNumber(), format) |  | ||||||
| 	panic(fmt.Errorf(format, args...)) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (p *parser) expect(expected itemType) (token item) { |  | ||||||
| 	token = p.lex.nextItem() |  | ||||||
| 	if token.typ != expected { |  | ||||||
| 		p.unexpected(token) |  | ||||||
| 	} |  | ||||||
| 	return token |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (p *parser) expectOneOf(expected ...itemType) (token item) { |  | ||||||
| 	token = p.lex.nextItem() |  | ||||||
| 	for _, v := range expected { |  | ||||||
| 		if token.typ == v { |  | ||||||
| 			return token |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| 	p.unexpected(token) |  | ||||||
| 	panic("unexpected token") |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (p *parser) unexpected(token item) { |  | ||||||
| 	p.errorf(token.String()) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // recover is the handler that turns panics into returns from the top level of Parse. |  | ||||||
| func (p *parser) recover(errp *error) { |  | ||||||
| 	e := recover() |  | ||||||
| 	if e != nil { |  | ||||||
| 		if _, ok := e.(runtime.Error); ok { |  | ||||||
| 			panic(e) |  | ||||||
| 		} |  | ||||||
| 		*errp = e.(error) |  | ||||||
| 	} |  | ||||||
| 	return |  | ||||||
| } |  | ||||||
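parse() relies on the classic panic/recover idiom: errorf panics with an error value and the deferred recover at the top level converts it back into an ordinary error return, while genuine runtime errors are re-panicked so real bugs still crash. A stripped-down sketch of that idiom on its own (parseDigits is an invented example, not part of this package):

package main

import (
	"errors"
	"fmt"
	"runtime"
)

// parseDigits panics with an error value on bad input; the deferred
// recover turns that panic into a normal error return. Runtime errors
// (nil dereferences, index errors, ...) are re-panicked.
func parseDigits(s string) (n int, err error) {
	defer func() {
		if e := recover(); e != nil {
			if _, ok := e.(runtime.Error); ok {
				panic(e)
			}
			err = e.(error)
		}
	}()

	if s == "" {
		panic(errors.New("empty input"))
	}
	for _, r := range s {
		if r < '0' || r > '9' {
			panic(fmt.Errorf("unexpected character %q", r))
		}
		n = n*10 + int(r-'0')
	}
	return n, nil
}

func main() {
	fmt.Println(parseDigits("123")) // 123 <nil>
	fmt.Println(parseDigits("12x")) // error: unexpected character 'x'
}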
							
								
								
									
833  vendor/github.com/magiconair/properties/properties.go  generated  vendored
							| @@ -1,833 +0,0 @@ | |||||||
| // Copyright 2018 Frank Schroeder. All rights reserved. |  | ||||||
| // Use of this source code is governed by a BSD-style |  | ||||||
| // license that can be found in the LICENSE file. |  | ||||||
|  |  | ||||||
| package properties |  | ||||||
|  |  | ||||||
| // BUG(frank): Set() does not check for invalid unicode literals since this is currently handled by the lexer. |  | ||||||
| // BUG(frank): Write() does not allow to configure the newline character. Therefore, on Windows LF is used. |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"fmt" |  | ||||||
| 	"io" |  | ||||||
| 	"log" |  | ||||||
| 	"os" |  | ||||||
| 	"regexp" |  | ||||||
| 	"strconv" |  | ||||||
| 	"strings" |  | ||||||
| 	"time" |  | ||||||
| 	"unicode/utf8" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| const maxExpansionDepth = 64 |  | ||||||
|  |  | ||||||
| // ErrorHandlerFunc defines the type of function which handles failures |  | ||||||
| // of the MustXXX() functions. An error handler function must exit |  | ||||||
| // the application after handling the error. |  | ||||||
| type ErrorHandlerFunc func(error) |  | ||||||
|  |  | ||||||
| // ErrorHandler is the function which handles failures of the MustXXX() |  | ||||||
| // functions. The default is LogFatalHandler. |  | ||||||
| var ErrorHandler ErrorHandlerFunc = LogFatalHandler |  | ||||||
|  |  | ||||||
| // LogHandlerFunc defines the function prototype for logging errors. |  | ||||||
| type LogHandlerFunc func(fmt string, args ...interface{}) |  | ||||||
|  |  | ||||||
| // LogPrintf defines a log handler which uses log.Printf. |  | ||||||
| var LogPrintf LogHandlerFunc = log.Printf |  | ||||||
|  |  | ||||||
| // LogFatalHandler handles the error by logging a fatal error and exiting. |  | ||||||
| func LogFatalHandler(err error) { |  | ||||||
| 	log.Fatal(err) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // PanicHandler handles the error by panicking. |  | ||||||
| func PanicHandler(err error) { |  | ||||||
| 	panic(err) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ----------------------------------------------------------------------------- |  | ||||||
|  |  | ||||||
| // A Properties contains the key/value pairs from the properties input. |  | ||||||
| // All values are stored in unexpanded form and are expanded at runtime |  | ||||||
| type Properties struct { |  | ||||||
| 	// Pre-/Postfix for property expansion. |  | ||||||
| 	Prefix  string |  | ||||||
| 	Postfix string |  | ||||||
|  |  | ||||||
| 	// DisableExpansion controls the expansion of properties on Get() |  | ||||||
| 	// and the check for circular references on Set(). When set to |  | ||||||
| 	// true Properties behaves like a simple key/value store and does |  | ||||||
| 	// not check for circular references on Get() or on Set(). |  | ||||||
| 	DisableExpansion bool |  | ||||||
|  |  | ||||||
| 	// Stores the key/value pairs |  | ||||||
| 	m map[string]string |  | ||||||
|  |  | ||||||
| 	// Stores the comments per key. |  | ||||||
| 	c map[string][]string |  | ||||||
|  |  | ||||||
| 	// Stores the keys in order of appearance. |  | ||||||
| 	k []string |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // NewProperties creates a new Properties struct with the default |  | ||||||
| // configuration for "${key}" expressions. |  | ||||||
| func NewProperties() *Properties { |  | ||||||
| 	return &Properties{ |  | ||||||
| 		Prefix:  "${", |  | ||||||
| 		Postfix: "}", |  | ||||||
| 		m:       map[string]string{}, |  | ||||||
| 		c:       map[string][]string{}, |  | ||||||
| 		k:       []string{}, |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Load reads a buffer into the given Properties struct. |  | ||||||
| func (p *Properties) Load(buf []byte, enc Encoding) error { |  | ||||||
| 	l := &Loader{Encoding: enc, DisableExpansion: p.DisableExpansion} |  | ||||||
| 	newProperties, err := l.LoadBytes(buf) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return err |  | ||||||
| 	} |  | ||||||
| 	p.Merge(newProperties) |  | ||||||
| 	return nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Get returns the expanded value for the given key if exists. |  | ||||||
| // Otherwise, ok is false. |  | ||||||
| func (p *Properties) Get(key string) (value string, ok bool) { |  | ||||||
| 	v, ok := p.m[key] |  | ||||||
| 	if p.DisableExpansion { |  | ||||||
| 		return v, ok |  | ||||||
| 	} |  | ||||||
| 	if !ok { |  | ||||||
| 		return "", false |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	expanded, err := p.expand(key, v) |  | ||||||
|  |  | ||||||
| 	// we guarantee that the expanded value is free of |  | ||||||
| 	// circular references and malformed expressions |  | ||||||
| 	// so we panic if we still get an error here. |  | ||||||
| 	if err != nil { |  | ||||||
| 		ErrorHandler(fmt.Errorf("%s in %q", err, key+" = "+v)) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return expanded, true |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // MustGet returns the expanded value for the given key if exists. |  | ||||||
| // Otherwise, it panics. |  | ||||||
| func (p *Properties) MustGet(key string) string { |  | ||||||
| 	if v, ok := p.Get(key); ok { |  | ||||||
| 		return v |  | ||||||
| 	} |  | ||||||
| 	ErrorHandler(invalidKeyError(key)) |  | ||||||
| 	panic("ErrorHandler should exit") |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ---------------------------------------------------------------------------- |  | ||||||
|  |  | ||||||
| // ClearComments removes the comments for all keys. |  | ||||||
| func (p *Properties) ClearComments() { |  | ||||||
| 	p.c = map[string][]string{} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ---------------------------------------------------------------------------- |  | ||||||
|  |  | ||||||
| // GetComment returns the last comment before the given key or an empty string. |  | ||||||
| func (p *Properties) GetComment(key string) string { |  | ||||||
| 	comments, ok := p.c[key] |  | ||||||
| 	if !ok || len(comments) == 0 { |  | ||||||
| 		return "" |  | ||||||
| 	} |  | ||||||
| 	return comments[len(comments)-1] |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ---------------------------------------------------------------------------- |  | ||||||
|  |  | ||||||
| // GetComments returns all comments that appeared before the given key or nil. |  | ||||||
| func (p *Properties) GetComments(key string) []string { |  | ||||||
| 	if comments, ok := p.c[key]; ok { |  | ||||||
| 		return comments |  | ||||||
| 	} |  | ||||||
| 	return nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ---------------------------------------------------------------------------- |  | ||||||
|  |  | ||||||
| // SetComment sets the comment for the key. |  | ||||||
| func (p *Properties) SetComment(key, comment string) { |  | ||||||
| 	p.c[key] = []string{comment} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ---------------------------------------------------------------------------- |  | ||||||
|  |  | ||||||
| // SetComments sets the comments for the key. If the comments are nil then |  | ||||||
| // all comments for this key are deleted. |  | ||||||
| func (p *Properties) SetComments(key string, comments []string) { |  | ||||||
| 	if comments == nil { |  | ||||||
| 		delete(p.c, key) |  | ||||||
| 		return |  | ||||||
| 	} |  | ||||||
| 	p.c[key] = comments |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ---------------------------------------------------------------------------- |  | ||||||
|  |  | ||||||
| // GetBool checks if the expanded value is one of '1', 'yes', |  | ||||||
| // 'true' or 'on' if the key exists. The comparison is case-insensitive. |  | ||||||
| // If the key does not exist the default value is returned. |  | ||||||
| func (p *Properties) GetBool(key string, def bool) bool { |  | ||||||
| 	v, err := p.getBool(key) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return def |  | ||||||
| 	} |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // MustGetBool checks if the expanded value is one of '1', 'yes', |  | ||||||
| // 'true' or 'on' if the key exists. The comparison is case-insensitive. |  | ||||||
| // If the key does not exist the function panics. |  | ||||||
| func (p *Properties) MustGetBool(key string) bool { |  | ||||||
| 	v, err := p.getBool(key) |  | ||||||
| 	if err != nil { |  | ||||||
| 		ErrorHandler(err) |  | ||||||
| 	} |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (p *Properties) getBool(key string) (value bool, err error) { |  | ||||||
| 	if v, ok := p.Get(key); ok { |  | ||||||
| 		return boolVal(v), nil |  | ||||||
| 	} |  | ||||||
| 	return false, invalidKeyError(key) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func boolVal(v string) bool { |  | ||||||
| 	v = strings.ToLower(v) |  | ||||||
| 	return v == "1" || v == "true" || v == "yes" || v == "on" |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ---------------------------------------------------------------------------- |  | ||||||
|  |  | ||||||
| // GetDuration parses the expanded value as a time.Duration (in ns) if the |  | ||||||
| // key exists. If key does not exist or the value cannot be parsed the default |  | ||||||
| // value is returned. In almost all cases you want to use GetParsedDuration(). |  | ||||||
| func (p *Properties) GetDuration(key string, def time.Duration) time.Duration { |  | ||||||
| 	v, err := p.getInt64(key) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return def |  | ||||||
| 	} |  | ||||||
| 	return time.Duration(v) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // MustGetDuration parses the expanded value as a time.Duration (in ns) if |  | ||||||
| // the key exists. If key does not exist or the value cannot be parsed the |  | ||||||
| // function panics. In almost all cases you want to use MustGetParsedDuration(). |  | ||||||
| func (p *Properties) MustGetDuration(key string) time.Duration { |  | ||||||
| 	v, err := p.getInt64(key) |  | ||||||
| 	if err != nil { |  | ||||||
| 		ErrorHandler(err) |  | ||||||
| 	} |  | ||||||
| 	return time.Duration(v) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ---------------------------------------------------------------------------- |  | ||||||
|  |  | ||||||
| // GetParsedDuration parses the expanded value with time.ParseDuration() if the key exists. |  | ||||||
| // If key does not exist or the value cannot be parsed the default |  | ||||||
| // value is returned. |  | ||||||
| func (p *Properties) GetParsedDuration(key string, def time.Duration) time.Duration { |  | ||||||
| 	s, ok := p.Get(key) |  | ||||||
| 	if !ok { |  | ||||||
| 		return def |  | ||||||
| 	} |  | ||||||
| 	v, err := time.ParseDuration(s) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return def |  | ||||||
| 	} |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // MustGetParsedDuration parses the expanded value with time.ParseDuration() if the key exists. |  | ||||||
| // If key does not exist or the value cannot be parsed the function panics. |  | ||||||
| func (p *Properties) MustGetParsedDuration(key string) time.Duration { |  | ||||||
| 	s, ok := p.Get(key) |  | ||||||
| 	if !ok { |  | ||||||
| 		ErrorHandler(invalidKeyError(key)) |  | ||||||
| 	} |  | ||||||
| 	v, err := time.ParseDuration(s) |  | ||||||
| 	if err != nil { |  | ||||||
| 		ErrorHandler(err) |  | ||||||
| 	} |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ---------------------------------------------------------------------------- |  | ||||||
|  |  | ||||||
| // GetFloat64 parses the expanded value as a float64 if the key exists. |  | ||||||
| // If key does not exist or the value cannot be parsed the default |  | ||||||
| // value is returned. |  | ||||||
| func (p *Properties) GetFloat64(key string, def float64) float64 { |  | ||||||
| 	v, err := p.getFloat64(key) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return def |  | ||||||
| 	} |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // MustGetFloat64 parses the expanded value as a float64 if the key exists. |  | ||||||
| // If key does not exist or the value cannot be parsed the function panics. |  | ||||||
| func (p *Properties) MustGetFloat64(key string) float64 { |  | ||||||
| 	v, err := p.getFloat64(key) |  | ||||||
| 	if err != nil { |  | ||||||
| 		ErrorHandler(err) |  | ||||||
| 	} |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (p *Properties) getFloat64(key string) (value float64, err error) { |  | ||||||
| 	if v, ok := p.Get(key); ok { |  | ||||||
| 		value, err = strconv.ParseFloat(v, 64) |  | ||||||
| 		if err != nil { |  | ||||||
| 			return 0, err |  | ||||||
| 		} |  | ||||||
| 		return value, nil |  | ||||||
| 	} |  | ||||||
| 	return 0, invalidKeyError(key) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ---------------------------------------------------------------------------- |  | ||||||
|  |  | ||||||
| // GetInt parses the expanded value as an int if the key exists. |  | ||||||
| // If key does not exist or the value cannot be parsed the default |  | ||||||
| // value is returned. If the value does not fit into an int the |  | ||||||
| // function panics with an out of range error. |  | ||||||
| func (p *Properties) GetInt(key string, def int) int { |  | ||||||
| 	v, err := p.getInt64(key) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return def |  | ||||||
| 	} |  | ||||||
| 	return intRangeCheck(key, v) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // MustGetInt parses the expanded value as an int if the key exists. |  | ||||||
| // If key does not exist or the value cannot be parsed the function panics. |  | ||||||
| // If the value does not fit into an int the function panics with |  | ||||||
| // an out of range error. |  | ||||||
| func (p *Properties) MustGetInt(key string) int { |  | ||||||
| 	v, err := p.getInt64(key) |  | ||||||
| 	if err != nil { |  | ||||||
| 		ErrorHandler(err) |  | ||||||
| 	} |  | ||||||
| 	return intRangeCheck(key, v) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ---------------------------------------------------------------------------- |  | ||||||
|  |  | ||||||
| // GetInt64 parses the expanded value as an int64 if the key exists. |  | ||||||
| // If key does not exist or the value cannot be parsed the default |  | ||||||
| // value is returned. |  | ||||||
| func (p *Properties) GetInt64(key string, def int64) int64 { |  | ||||||
| 	v, err := p.getInt64(key) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return def |  | ||||||
| 	} |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // MustGetInt64 parses the expanded value as an int64 if the key exists. |  | ||||||
| // If key does not exist or the value cannot be parsed the function panics. |  | ||||||
| func (p *Properties) MustGetInt64(key string) int64 { |  | ||||||
| 	v, err := p.getInt64(key) |  | ||||||
| 	if err != nil { |  | ||||||
| 		ErrorHandler(err) |  | ||||||
| 	} |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (p *Properties) getInt64(key string) (value int64, err error) { |  | ||||||
| 	if v, ok := p.Get(key); ok { |  | ||||||
| 		value, err = strconv.ParseInt(v, 10, 64) |  | ||||||
| 		if err != nil { |  | ||||||
| 			return 0, err |  | ||||||
| 		} |  | ||||||
| 		return value, nil |  | ||||||
| 	} |  | ||||||
| 	return 0, invalidKeyError(key) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ---------------------------------------------------------------------------- |  | ||||||
|  |  | ||||||
| // GetUint parses the expanded value as an uint if the key exists. |  | ||||||
| // If key does not exist or the value cannot be parsed the default |  | ||||||
| // value is returned. If the value does not fit into an int the |  | ||||||
| // function panics with an out of range error. |  | ||||||
| func (p *Properties) GetUint(key string, def uint) uint { |  | ||||||
| 	v, err := p.getUint64(key) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return def |  | ||||||
| 	} |  | ||||||
| 	return uintRangeCheck(key, v) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // MustGetUint parses the expanded value as an int if the key exists. |  | ||||||
| // If key does not exist or the value cannot be parsed the function panics. |  | ||||||
| // If the value does not fit into an int the function panics with |  | ||||||
| // an out of range error. |  | ||||||
| func (p *Properties) MustGetUint(key string) uint { |  | ||||||
| 	v, err := p.getUint64(key) |  | ||||||
| 	if err != nil { |  | ||||||
| 		ErrorHandler(err) |  | ||||||
| 	} |  | ||||||
| 	return uintRangeCheck(key, v) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ---------------------------------------------------------------------------- |  | ||||||
|  |  | ||||||
| // GetUint64 parses the expanded value as an uint64 if the key exists. |  | ||||||
| // If key does not exist or the value cannot be parsed the default |  | ||||||
| // value is returned. |  | ||||||
| func (p *Properties) GetUint64(key string, def uint64) uint64 { |  | ||||||
| 	v, err := p.getUint64(key) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return def |  | ||||||
| 	} |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // MustGetUint64 parses the expanded value as a uint64 if the key exists. |  | ||||||
| // If key does not exist or the value cannot be parsed the function panics. |  | ||||||
| func (p *Properties) MustGetUint64(key string) uint64 { |  | ||||||
| 	v, err := p.getUint64(key) |  | ||||||
| 	if err != nil { |  | ||||||
| 		ErrorHandler(err) |  | ||||||
| 	} |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (p *Properties) getUint64(key string) (value uint64, err error) { |  | ||||||
| 	if v, ok := p.Get(key); ok { |  | ||||||
| 		value, err = strconv.ParseUint(v, 10, 64) |  | ||||||
| 		if err != nil { |  | ||||||
| 			return 0, err |  | ||||||
| 		} |  | ||||||
| 		return value, nil |  | ||||||
| 	} |  | ||||||
| 	return 0, invalidKeyError(key) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ---------------------------------------------------------------------------- |  | ||||||
|  |  | ||||||
| // GetString returns the expanded value for the given key if exists or |  | ||||||
| // the default value otherwise. |  | ||||||
| func (p *Properties) GetString(key, def string) string { |  | ||||||
| 	if v, ok := p.Get(key); ok { |  | ||||||
| 		return v |  | ||||||
| 	} |  | ||||||
| 	return def |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // MustGetString returns the expanded value for the given key if exists or |  | ||||||
| // panics otherwise. |  | ||||||
| func (p *Properties) MustGetString(key string) string { |  | ||||||
| 	if v, ok := p.Get(key); ok { |  | ||||||
| 		return v |  | ||||||
| 	} |  | ||||||
| 	ErrorHandler(invalidKeyError(key)) |  | ||||||
| 	panic("ErrorHandler should exit") |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ---------------------------------------------------------------------------- |  | ||||||
|  |  | ||||||
| // Filter returns a new properties object which contains all properties |  | ||||||
| // for which the key matches the pattern. |  | ||||||
| func (p *Properties) Filter(pattern string) (*Properties, error) { |  | ||||||
| 	re, err := regexp.Compile(pattern) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return nil, err |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return p.FilterRegexp(re), nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // FilterRegexp returns a new properties object which contains all properties |  | ||||||
| // for which the key matches the regular expression. |  | ||||||
| func (p *Properties) FilterRegexp(re *regexp.Regexp) *Properties { |  | ||||||
| 	pp := NewProperties() |  | ||||||
| 	for _, k := range p.k { |  | ||||||
| 		if re.MatchString(k) { |  | ||||||
| 			// TODO(fs): we are ignoring the error which flags a circular reference. |  | ||||||
| 			// TODO(fs): since we are just copying a subset of keys this cannot happen (fingers crossed) |  | ||||||
| 			pp.Set(k, p.m[k]) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| 	return pp |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // FilterPrefix returns a new properties object with a subset of all keys |  | ||||||
| // with the given prefix. |  | ||||||
| func (p *Properties) FilterPrefix(prefix string) *Properties { |  | ||||||
| 	pp := NewProperties() |  | ||||||
| 	for _, k := range p.k { |  | ||||||
| 		if strings.HasPrefix(k, prefix) { |  | ||||||
| 			// TODO(fs): we are ignoring the error which flags a circular reference. |  | ||||||
| 			// TODO(fs): since we are just copying a subset of keys this cannot happen (fingers crossed) |  | ||||||
| 			pp.Set(k, p.m[k]) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| 	return pp |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // FilterStripPrefix returns a new properties object with a subset of all keys |  | ||||||
| // with the given prefix and the prefix removed from the keys. |  | ||||||
| func (p *Properties) FilterStripPrefix(prefix string) *Properties { |  | ||||||
| 	pp := NewProperties() |  | ||||||
| 	n := len(prefix) |  | ||||||
| 	for _, k := range p.k { |  | ||||||
| 		if len(k) > len(prefix) && strings.HasPrefix(k, prefix) { |  | ||||||
| 			// TODO(fs): we are ignoring the error which flags a circular reference. |  | ||||||
| 			// TODO(fs): since we are modifying keys I am not entirely sure whether we can create a circular reference |  | ||||||
| 			// TODO(fs): this function should probably return an error but the signature is fixed |  | ||||||
| 			pp.Set(k[n:], p.m[k]) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| 	return pp |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Len returns the number of keys. |  | ||||||
| func (p *Properties) Len() int { |  | ||||||
| 	return len(p.m) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Keys returns all keys in the same order as in the input. |  | ||||||
| func (p *Properties) Keys() []string { |  | ||||||
| 	keys := make([]string, len(p.k)) |  | ||||||
| 	copy(keys, p.k) |  | ||||||
| 	return keys |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Set sets the property key to the corresponding value. |  | ||||||
| // If a value for key existed before then ok is true and prev |  | ||||||
| // contains the previous value. If the value contains a |  | ||||||
| // circular reference or a malformed expression then |  | ||||||
| // an error is returned. |  | ||||||
| // An empty key is silently ignored. |  | ||||||
| func (p *Properties) Set(key, value string) (prev string, ok bool, err error) { |  | ||||||
| 	if key == "" { |  | ||||||
| 		return "", false, nil |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// if expansion is disabled we allow circular references |  | ||||||
| 	if p.DisableExpansion { |  | ||||||
| 		prev, ok = p.Get(key) |  | ||||||
| 		p.m[key] = value |  | ||||||
| 		if !ok { |  | ||||||
| 			p.k = append(p.k, key) |  | ||||||
| 		} |  | ||||||
| 		return prev, ok, nil |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// to check for a circular reference we temporarily need |  | ||||||
| 	// to set the new value. If there is an error then revert |  | ||||||
| 	// to the previous state. Only if all tests are successful |  | ||||||
| 	// then we add the key to the p.k list. |  | ||||||
| 	prev, ok = p.Get(key) |  | ||||||
| 	p.m[key] = value |  | ||||||
|  |  | ||||||
| 	// now check for a circular reference |  | ||||||
| 	_, err = p.expand(key, value) |  | ||||||
| 	if err != nil { |  | ||||||
|  |  | ||||||
| 		// revert to the previous state |  | ||||||
| 		if ok { |  | ||||||
| 			p.m[key] = prev |  | ||||||
| 		} else { |  | ||||||
| 			delete(p.m, key) |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		return "", false, err |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	if !ok { |  | ||||||
| 		p.k = append(p.k, key) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return prev, ok, nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // SetValue sets property key to the default string value |  | ||||||
| // as defined by fmt.Sprintf("%v"). |  | ||||||
| func (p *Properties) SetValue(key string, value interface{}) error { |  | ||||||
| 	_, _, err := p.Set(key, fmt.Sprintf("%v", value)) |  | ||||||
| 	return err |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // MustSet sets the property key to the corresponding value. |  | ||||||
| // If a value for key existed before then ok is true and prev |  | ||||||
| // contains the previous value. An empty key is silently ignored. |  | ||||||
| func (p *Properties) MustSet(key, value string) (prev string, ok bool) { |  | ||||||
| 	prev, ok, err := p.Set(key, value) |  | ||||||
| 	if err != nil { |  | ||||||
| 		ErrorHandler(err) |  | ||||||
| 	} |  | ||||||
| 	return prev, ok |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // String returns a string of all expanded 'key = value' pairs. |  | ||||||
| func (p *Properties) String() string { |  | ||||||
| 	var s string |  | ||||||
| 	for _, key := range p.k { |  | ||||||
| 		value, _ := p.Get(key) |  | ||||||
| 		s = fmt.Sprintf("%s%s = %s\n", s, key, value) |  | ||||||
| 	} |  | ||||||
| 	return s |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Write writes all unexpanded 'key = value' pairs to the given writer. |  | ||||||
| // Write returns the number of bytes written and any write error encountered. |  | ||||||
| func (p *Properties) Write(w io.Writer, enc Encoding) (n int, err error) { |  | ||||||
| 	return p.WriteComment(w, "", enc) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // WriteComment writes all unexpanded 'key = value' pairs to the given writer. |  | 
| // If prefix is not empty then comments are written with a blank line and the |  | ||||||
| // given prefix. The prefix should be either "# " or "! " to be compatible with |  | ||||||
| // the properties file format. Otherwise, the properties parser will not be |  | ||||||
| // able to read the file back in. It returns the number of bytes written and |  | ||||||
| // any write error encountered. |  | ||||||
| func (p *Properties) WriteComment(w io.Writer, prefix string, enc Encoding) (n int, err error) { |  | ||||||
| 	var x int |  | ||||||
|  |  | ||||||
| 	for _, key := range p.k { |  | ||||||
| 		value := p.m[key] |  | ||||||
|  |  | ||||||
| 		if prefix != "" { |  | ||||||
| 			if comments, ok := p.c[key]; ok { |  | ||||||
| 				// don't print comments if they are all empty |  | ||||||
| 				allEmpty := true |  | ||||||
| 				for _, c := range comments { |  | ||||||
| 					if c != "" { |  | ||||||
| 						allEmpty = false |  | ||||||
| 						break |  | ||||||
| 					} |  | ||||||
| 				} |  | ||||||
|  |  | ||||||
| 				if !allEmpty { |  | ||||||
| 					// add a blank line between entries but not at the top |  | ||||||
| 					if len(comments) > 0 && n > 0 { |  | ||||||
| 						x, err = fmt.Fprintln(w) |  | ||||||
| 						if err != nil { |  | ||||||
| 							return |  | ||||||
| 						} |  | ||||||
| 						n += x |  | ||||||
| 					} |  | ||||||
|  |  | ||||||
| 					for _, c := range comments { |  | ||||||
| 						x, err = fmt.Fprintf(w, "%s%s\n", prefix, encode(c, "", enc)) |  | ||||||
| 						if err != nil { |  | ||||||
| 							return |  | ||||||
| 						} |  | ||||||
| 						n += x |  | ||||||
| 					} |  | ||||||
| 				} |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		x, err = fmt.Fprintf(w, "%s = %s\n", encode(key, " :", enc), encode(value, "", enc)) |  | ||||||
| 		if err != nil { |  | ||||||
| 			return |  | ||||||
| 		} |  | ||||||
| 		n += x |  | ||||||
| 	} |  | ||||||
| 	return |  | ||||||
| } |  | ||||||
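A short sketch of the Write/WriteComment contract described above, assuming the package's SetComment helper and UTF8 encoding constant (illustrative only, not part of the vendored file):

```go
package main

import (
	"log"
	"os"

	"github.com/magiconair/properties"
)

func main() {
	p := properties.NewProperties()
	p.MustSet("key", "value")
	p.SetComment("key", "this is a comment")

	// Use "# " (or "! ") so the output stays parseable as a properties file.
	if _, err := p.WriteComment(os.Stdout, "# ", properties.UTF8); err != nil {
		log.Fatal(err)
	}
	// Output:
	// # this is a comment
	// key = value
}
```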
|  |  | ||||||
| // Map returns a copy of the properties as a map. |  | ||||||
| func (p *Properties) Map() map[string]string { |  | ||||||
| 	m := make(map[string]string) |  | ||||||
| 	for k, v := range p.m { |  | ||||||
| 		m[k] = v |  | ||||||
| 	} |  | ||||||
| 	return m |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // FilterFunc returns a copy of the properties that includes only the key/value pairs which pass all filters. |  | 
| func (p *Properties) FilterFunc(filters ...func(k, v string) bool) *Properties { |  | ||||||
| 	pp := NewProperties() |  | ||||||
| outer: |  | ||||||
| 	for k, v := range p.m { |  | 
| 		for _, f := range filters { |  | 
| 			if !f(k, v) { |  | 
| 				continue outer |  | 
| 			} |  | 
| 		} |  | 
| 		// only add the pair once it has passed every filter |  | 
| 		pp.Set(k, v) |  | 
| 	} |  | ||||||
| 	return pp |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ---------------------------------------------------------------------------- |  | ||||||
|  |  | ||||||
| // Delete removes the key and its comments. |  | ||||||
| func (p *Properties) Delete(key string) { |  | ||||||
| 	delete(p.m, key) |  | ||||||
| 	delete(p.c, key) |  | ||||||
| 	newKeys := []string{} |  | ||||||
| 	for _, k := range p.k { |  | ||||||
| 		if k != key { |  | ||||||
| 			newKeys = append(newKeys, k) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| 	p.k = newKeys |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Merge merges properties, comments and keys from other *Properties into p. |  | 
| func (p *Properties) Merge(other *Properties) { |  | ||||||
| 	for k, v := range other.m { |  | ||||||
| 		p.m[k] = v |  | ||||||
| 	} |  | ||||||
| 	for k, v := range other.c { |  | ||||||
| 		p.c[k] = v |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| outer: |  | ||||||
| 	for _, otherKey := range other.k { |  | ||||||
| 		for _, key := range p.k { |  | ||||||
| 			if otherKey == key { |  | ||||||
| 				continue outer |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
| 		p.k = append(p.k, otherKey) |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
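Merge semantics in a nutshell, as a hedged sketch: values and comments from other overwrite those in p, p's key order is preserved, and keys new to p are appended.

```go
package main

import (
	"fmt"

	"github.com/magiconair/properties"
)

func main() {
	a := properties.NewProperties()
	a.MustSet("k1", "v1")

	b := properties.NewProperties()
	b.MustSet("k1", "override")
	b.MustSet("k2", "v2")

	a.Merge(b)
	fmt.Println(a.MustGet("k1")) // override
	fmt.Println(a.Keys())        // [k1 k2]
}
```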
|  |  | ||||||
| // ---------------------------------------------------------------------------- |  | ||||||
|  |  | ||||||
| // check expands all values and returns an error if a circular reference or |  | ||||||
| // a malformed expression was found. |  | ||||||
| func (p *Properties) check() error { |  | ||||||
| 	for key, value := range p.m { |  | ||||||
| 		if _, err := p.expand(key, value); err != nil { |  | ||||||
| 			return err |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| 	return nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (p *Properties) expand(key, input string) (string, error) { |  | ||||||
| 	// no pre/postfix -> nothing to expand |  | ||||||
| 	if p.Prefix == "" && p.Postfix == "" { |  | ||||||
| 		return input, nil |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return expand(input, []string{key}, p.Prefix, p.Postfix, p.m) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // expand recursively expands expressions of '(prefix)key(postfix)' to their corresponding values. |  | ||||||
| // The function keeps track of the keys that were already expanded and stops if it |  | ||||||
| // detects a circular reference or a malformed expression of the form '(prefix)key'. |  | ||||||
| func expand(s string, keys []string, prefix, postfix string, values map[string]string) (string, error) { |  | ||||||
| 	if len(keys) > maxExpansionDepth { |  | ||||||
| 		return "", fmt.Errorf("expansion too deep") |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	for { |  | ||||||
| 		start := strings.Index(s, prefix) |  | ||||||
| 		if start == -1 { |  | ||||||
| 			return s, nil |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		keyStart := start + len(prefix) |  | ||||||
| 		keyLen := strings.Index(s[keyStart:], postfix) |  | ||||||
| 		if keyLen == -1 { |  | ||||||
| 			return "", fmt.Errorf("malformed expression") |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		end := keyStart + keyLen + len(postfix) - 1 |  | ||||||
| 		key := s[keyStart : keyStart+keyLen] |  | ||||||
|  |  | ||||||
| 		// fmt.Printf("s:%q pp:%q start:%d end:%d keyStart:%d keyLen:%d key:%q\n", s, prefix + "..." + postfix, start, end, keyStart, keyLen, key) |  | ||||||
|  |  | ||||||
| 		for _, k := range keys { |  | ||||||
| 			if key == k { |  | ||||||
| 				return "", fmt.Errorf("circular reference") |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		val, ok := values[key] |  | ||||||
| 		if !ok { |  | ||||||
| 			val = os.Getenv(key) |  | ||||||
| 		} |  | ||||||
| 		newVal, err := expand(val, append(keys, key), prefix, postfix, values) |  | 
| 		if err != nil { |  | 
| 			return "", err |  | 
| 		} |  | 
| 		s = s[:start] + newVal + s[end+1:] |  | 
| 	} |  | ||||||
| 	return s, nil |  | ||||||
| } |  | ||||||
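Observable behaviour of the expansion logic above, sketched with the package's defaults (NewProperties sets Prefix/Postfix to "${" and "}"; unknown keys fall back to environment variables):

```go
package main

import (
	"fmt"

	"github.com/magiconair/properties"
)

func main() {
	p := properties.NewProperties() // Prefix "${", Postfix "}"
	p.MustSet("base", "/opt/app")
	p.MustSet("bin", "${base}/bin")

	v, _ := p.Get("bin") // Get returns the expanded value
	fmt.Println(v)       // /opt/app/bin

	// A self-reference is rejected with a "circular reference" error.
	if _, _, err := p.Set("loop", "${loop}"); err != nil {
		fmt.Println(err)
	}
}
```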
|  |  | ||||||
| // encode encodes a UTF-8 string to ISO-8859-1 and escapes some characters. |  | ||||||
| func encode(s string, special string, enc Encoding) string { |  | ||||||
| 	switch enc { |  | ||||||
| 	case UTF8: |  | ||||||
| 		return encodeUtf8(s, special) |  | ||||||
| 	case ISO_8859_1: |  | ||||||
| 		return encodeIso(s, special) |  | ||||||
| 	default: |  | ||||||
| 		panic(fmt.Sprintf("unsupported encoding %v", enc)) |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func encodeUtf8(s string, special string) string { |  | ||||||
| 	v := "" |  | ||||||
| 	for pos := 0; pos < len(s); { |  | ||||||
| 		r, w := utf8.DecodeRuneInString(s[pos:]) |  | ||||||
| 		pos += w |  | ||||||
| 		v += escape(r, special) |  | ||||||
| 	} |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func encodeIso(s string, special string) string { |  | ||||||
| 	var r rune |  | ||||||
| 	var w int |  | ||||||
| 	var v string |  | ||||||
| 	for pos := 0; pos < len(s); { |  | ||||||
| 		switch r, w = utf8.DecodeRuneInString(s[pos:]); { |  | ||||||
| 		case r < 1<<8: // single byte rune -> escape special chars only |  | ||||||
| 			v += escape(r, special) |  | ||||||
| 		case r < 1<<16: // two byte rune -> unicode literal |  | ||||||
| 			v += fmt.Sprintf("\\u%04x", r) |  | ||||||
| 		default: // more than two bytes per rune -> can't encode |  | ||||||
| 			v += "?" |  | ||||||
| 		} |  | ||||||
| 		pos += w |  | ||||||
| 	} |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func escape(r rune, special string) string { |  | ||||||
| 	switch r { |  | ||||||
| 	case '\f': |  | ||||||
| 		return "\\f" |  | ||||||
| 	case '\n': |  | ||||||
| 		return "\\n" |  | ||||||
| 	case '\r': |  | ||||||
| 		return "\\r" |  | ||||||
| 	case '\t': |  | ||||||
| 		return "\\t" |  | ||||||
| 	default: |  | ||||||
| 		if strings.ContainsRune(special, r) { |  | ||||||
| 			return "\\" + string(r) |  | ||||||
| 		} |  | ||||||
| 		return string(r) |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func invalidKeyError(key string) error { |  | ||||||
| 	return fmt.Errorf("unknown property: %s", key) |  | ||||||
| } |  | ||||||
							
								
								
									
31 vendor/github.com/magiconair/properties/rangecheck.go generated vendored
| @@ -1,31 +0,0 @@ |
| // Copyright 2018 Frank Schroeder. All rights reserved. |  | ||||||
| // Use of this source code is governed by a BSD-style |  | ||||||
| // license that can be found in the LICENSE file. |  | ||||||
|  |  | ||||||
| package properties |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"fmt" |  | ||||||
| 	"math" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| // make this a var to overwrite it in a test |  | ||||||
| var is32Bit = ^uint(0) == math.MaxUint32 |  | ||||||
|  |  | ||||||
| // intRangeCheck checks if the value fits into the int type and |  | ||||||
| // panics if it does not. |  | ||||||
| func intRangeCheck(key string, v int64) int { |  | ||||||
| 	if is32Bit && (v < math.MinInt32 || v > math.MaxInt32) { |  | ||||||
| 		panic(fmt.Sprintf("Value %d for key %s out of range", v, key)) |  | ||||||
| 	} |  | ||||||
| 	return int(v) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // uintRangeCheck checks if the value fits into the uint type and |  | ||||||
| // panics if it does not. |  | ||||||
| func uintRangeCheck(key string, v uint64) uint { |  | ||||||
| 	if is32Bit && v > math.MaxUint32 { |  | ||||||
| 		panic(fmt.Sprintf("Value %d for key %s out of range", v, key)) |  | ||||||
| 	} |  | ||||||
| 	return uint(v) |  | ||||||
| } |  | ||||||
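These range checks only matter on 32-bit builds, where int and uint are 32 bits wide. A hedged sketch of where they surface, assuming the package's GetInt accessor (defined elsewhere in the vendored library):

```go
package main

import (
	"fmt"

	"github.com/magiconair/properties"
)

func main() {
	p := properties.NewProperties()
	p.MustSet("answer", "42")

	// GetInt parses the value as int64 and runs it through intRangeCheck;
	// on a 32-bit platform a value outside the int32 range would panic.
	fmt.Println(p.GetInt("answer", 0)) // 42
}
```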
							
								
								
									
2 vendor/github.com/pelletier/go-toml/.gitignore generated vendored
| @@ -1,2 +0,0 @@ |
| test_program/test_program_bin |  | ||||||
| fuzz/ |  | ||||||
							
								
								
									
23 vendor/github.com/pelletier/go-toml/.travis.yml generated vendored
| @@ -1,23 +0,0 @@ |
| sudo: false |  | ||||||
| language: go |  | ||||||
| go: |  | ||||||
|   - 1.8.x |  | ||||||
|   - 1.9.x |  | ||||||
|   - 1.10.x |  | ||||||
|   - tip |  | ||||||
| matrix: |  | ||||||
|   allow_failures: |  | ||||||
|     - go: tip |  | ||||||
|   fast_finish: true |  | ||||||
| script: |  | ||||||
|   - if [ -n "$(go fmt ./...)" ]; then exit 1; fi |  | ||||||
|   - ./test.sh |  | ||||||
|   - ./benchmark.sh $TRAVIS_BRANCH https://github.com/$TRAVIS_REPO_SLUG.git |  | ||||||
| before_install: |  | ||||||
|   - go get github.com/axw/gocov/gocov |  | ||||||
|   - go get github.com/mattn/goveralls |  | ||||||
|   - if ! go get code.google.com/p/go.tools/cmd/cover; then go get golang.org/x/tools/cmd/cover; fi |  | ||||||
| branches: |  | ||||||
|   only: [master] |  | ||||||
| after_success: |  | ||||||
|   - $HOME/gopath/bin/goveralls -service=travis-ci -coverprofile=coverage.out -repotoken $COVERALLS_TOKEN |  | ||||||
							
								
								
									
21 vendor/github.com/pelletier/go-toml/LICENSE generated vendored
| @@ -1,21 +0,0 @@ |
| The MIT License (MIT) |  | ||||||
|  |  | ||||||
| Copyright (c) 2013 - 2017 Thomas Pelletier, Eric Anderton |  | ||||||
|  |  | ||||||
| Permission is hereby granted, free of charge, to any person obtaining a copy |  | ||||||
| of this software and associated documentation files (the "Software"), to deal |  | ||||||
| in the Software without restriction, including without limitation the rights |  | ||||||
| to use, copy, modify, merge, publish, distribute, sublicense, and/or sell |  | ||||||
| copies of the Software, and to permit persons to whom the Software is |  | ||||||
| furnished to do so, subject to the following conditions: |  | ||||||
|  |  | ||||||
| The above copyright notice and this permission notice shall be included in all |  | ||||||
| copies or substantial portions of the Software. |  | ||||||
|  |  | ||||||
| THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR |  | ||||||
| IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, |  | ||||||
| FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE |  | ||||||
| AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER |  | ||||||
| LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, |  | ||||||
| OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE |  | ||||||
| SOFTWARE. |  | ||||||
							
								
								
									
131 vendor/github.com/pelletier/go-toml/README.md generated vendored
| @@ -1,131 +0,0 @@ |
| # go-toml |  | ||||||
|  |  | ||||||
| Go library for the [TOML](https://github.com/mojombo/toml) format. |  | ||||||
|  |  | ||||||
| This library supports TOML version |  | ||||||
| [v0.4.0](https://github.com/toml-lang/toml/blob/master/versions/en/toml-v0.4.0.md) |  | ||||||
|  |  | ||||||
| [GoDoc](http://godoc.org/github.com/pelletier/go-toml) |  | 
| [License](https://github.com/pelletier/go-toml/blob/master/LICENSE) |  | 
| [Build Status](https://travis-ci.org/pelletier/go-toml) |  | 
| [Coverage Status](https://coveralls.io/github/pelletier/go-toml?branch=master) |  | 
| [Go Report Card](https://goreportcard.com/report/github.com/pelletier/go-toml) |  | 
|  |  | ||||||
| ## Features |  | ||||||
|  |  | ||||||
| Go-toml provides the following features for using data parsed from TOML documents: |  | ||||||
|  |  | ||||||
| * Load TOML documents from files and string data |  | ||||||
| * Easily navigate TOML structure using Tree |  | ||||||
| * Marshaling and unmarshaling to and from data structures |  | 
| * Line & column position data for all parsed elements |  | ||||||
| * [Query support similar to JSON-Path](query/) |  | ||||||
| * Syntax errors contain line and column numbers |  | ||||||
|  |  | ||||||
| ## Import |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| import "github.com/pelletier/go-toml" |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| ## Usage example |  | ||||||
|  |  | ||||||
| Read a TOML document: |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| config, _ := toml.Load(` |  | ||||||
| [postgres] |  | ||||||
| user = "pelletier" |  | ||||||
| password = "mypassword"`) |  | ||||||
| // retrieve data directly |  | ||||||
| user := config.Get("postgres.user").(string) |  | ||||||
|  |  | ||||||
| // or using an intermediate object |  | ||||||
| postgresConfig := config.Get("postgres").(*toml.Tree) |  | ||||||
| password := postgresConfig.Get("password").(string) |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| Or use Unmarshal: |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| type Postgres struct { |  | ||||||
|     User     string |  | ||||||
|     Password string |  | ||||||
| } |  | ||||||
| type Config struct { |  | ||||||
|     Postgres Postgres |  | ||||||
| } |  | ||||||
|  |  | ||||||
| doc := []byte(` |  | ||||||
| [Postgres] |  | ||||||
| User = "pelletier" |  | ||||||
| Password = "mypassword"`) |  | ||||||
|  |  | ||||||
| config := Config{} |  | ||||||
| toml.Unmarshal(doc, &config) |  | ||||||
| fmt.Println("user=", config.Postgres.User) |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| Or use a query: |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| // use a query to gather elements without walking the tree |  | ||||||
| q, _ := query.Compile("$..[user,password]") |  | ||||||
| results := q.Execute(config) |  | ||||||
| for ii, item := range results.Values() { |  | ||||||
|     fmt.Println("Query result %d: %v", ii, item) |  | ||||||
| } |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| ## Documentation |  | ||||||
|  |  | ||||||
| The documentation and additional examples are available at |  | ||||||
| [godoc.org](http://godoc.org/github.com/pelletier/go-toml). |  | ||||||
|  |  | ||||||
| ## Tools |  | ||||||
|  |  | ||||||
| Go-toml provides two handy command line tools: |  | ||||||
|  |  | ||||||
| * `tomll`: Reads TOML files and lints them. |  | 
|  |  | ||||||
|     ``` |  | ||||||
|     go install github.com/pelletier/go-toml/cmd/tomll |  | ||||||
|     tomll --help |  | ||||||
|     ``` |  | ||||||
| * `tomljson`: Reads a TOML file and outputs its JSON representation. |  | ||||||
|  |  | ||||||
|     ``` |  | ||||||
|     go install github.com/pelletier/go-toml/cmd/tomljson |  | ||||||
|     tomljson --help |  | ||||||
|     ``` |  | ||||||
|  |  | ||||||
| ## Contribute |  | ||||||
|  |  | ||||||
| Feel free to report bugs and submit patches using GitHub's issues and pull |  | 
| request system on [pelletier/go-toml](https://github.com/pelletier/go-toml). |  | 
| Any feedback would be much appreciated! |  | 
|  |  | ||||||
| ### Run tests |  | ||||||
|  |  | ||||||
| You have to make sure two kinds of tests run: |  | 
|  |  | ||||||
| 1. The Go unit tests |  | ||||||
| 2. The TOML examples base |  | ||||||
|  |  | ||||||
| You can run both of them using `./test.sh`. |  | ||||||
|  |  | ||||||
| ### Fuzzing |  | ||||||
|  |  | ||||||
| The script `./fuzz.sh` is available to |  | ||||||
| run [go-fuzz](https://github.com/dvyukov/go-fuzz) on go-toml. |  | ||||||
|  |  | ||||||
| ## Versioning |  | ||||||
|  |  | ||||||
| Go-toml follows [Semantic Versioning](http://semver.org/). The supported version |  | ||||||
| of [TOML](https://github.com/toml-lang/toml) is indicated at the beginning of |  | ||||||
| this document. The last two major versions of Go are supported |  | ||||||
| (see [Go Release Policy](https://golang.org/doc/devel/release.html#policy)). |  | ||||||
|  |  | ||||||
| ## License |  | ||||||
|  |  | ||||||
| The MIT License (MIT). Read [LICENSE](LICENSE). |  | ||||||
							
								
								
									
164 vendor/github.com/pelletier/go-toml/benchmark.json generated vendored
| @@ -1,164 +0,0 @@ |
| { |  | ||||||
|     "array": { |  | ||||||
|         "key1": [ |  | ||||||
|             1, |  | ||||||
|             2, |  | ||||||
|             3 |  | ||||||
|         ], |  | ||||||
|         "key2": [ |  | ||||||
|             "red", |  | ||||||
|             "yellow", |  | ||||||
|             "green" |  | ||||||
|         ], |  | ||||||
|         "key3": [ |  | ||||||
|             [ |  | ||||||
|                 1, |  | ||||||
|                 2 |  | ||||||
|             ], |  | ||||||
|             [ |  | ||||||
|                 3, |  | ||||||
|                 4, |  | ||||||
|                 5 |  | ||||||
|             ] |  | ||||||
|         ], |  | ||||||
|         "key4": [ |  | ||||||
|             [ |  | ||||||
|                 1, |  | ||||||
|                 2 |  | ||||||
|             ], |  | ||||||
|             [ |  | ||||||
|                 "a", |  | ||||||
|                 "b", |  | ||||||
|                 "c" |  | ||||||
|             ] |  | ||||||
|         ], |  | ||||||
|         "key5": [ |  | ||||||
|             1, |  | ||||||
|             2, |  | ||||||
|             3 |  | ||||||
|         ], |  | ||||||
|         "key6": [ |  | ||||||
|             1, |  | ||||||
|             2 |  | ||||||
|         ] |  | ||||||
|     }, |  | ||||||
|     "boolean": { |  | ||||||
|         "False": false, |  | ||||||
|         "True": true |  | ||||||
|     }, |  | ||||||
|     "datetime": { |  | ||||||
|         "key1": "1979-05-27T07:32:00Z", |  | ||||||
|         "key2": "1979-05-27T00:32:00-07:00", |  | ||||||
|         "key3": "1979-05-27T00:32:00.999999-07:00" |  | ||||||
|     }, |  | ||||||
|     "float": { |  | ||||||
|         "both": { |  | ||||||
|             "key": 6.626e-34 |  | ||||||
|         }, |  | ||||||
|         "exponent": { |  | ||||||
|             "key1": 5e+22, |  | ||||||
|             "key2": 1000000, |  | ||||||
|             "key3": -0.02 |  | ||||||
|         }, |  | ||||||
|         "fractional": { |  | ||||||
|             "key1": 1, |  | ||||||
|             "key2": 3.1415, |  | ||||||
|             "key3": -0.01 |  | ||||||
|         }, |  | ||||||
|         "underscores": { |  | ||||||
|             "key1": 9224617.445991227, |  | ||||||
|             "key2": 1e+100 |  | ||||||
|         } |  | ||||||
|     }, |  | ||||||
|     "fruit": [{ |  | ||||||
|             "name": "apple", |  | ||||||
|             "physical": { |  | ||||||
|                 "color": "red", |  | ||||||
|                 "shape": "round" |  | ||||||
|             }, |  | ||||||
|             "variety": [{ |  | ||||||
|                     "name": "red delicious" |  | ||||||
|                 }, |  | ||||||
|                 { |  | ||||||
|                     "name": "granny smith" |  | ||||||
|                 } |  | ||||||
|             ] |  | ||||||
|         }, |  | ||||||
|         { |  | ||||||
|             "name": "banana", |  | ||||||
|             "variety": [{ |  | ||||||
|                 "name": "plantain" |  | ||||||
|             }] |  | ||||||
|         } |  | ||||||
|     ], |  | ||||||
|     "integer": { |  | ||||||
|         "key1": 99, |  | ||||||
|         "key2": 42, |  | ||||||
|         "key3": 0, |  | ||||||
|         "key4": -17, |  | ||||||
|         "underscores": { |  | ||||||
|             "key1": 1000, |  | ||||||
|             "key2": 5349221, |  | ||||||
|             "key3": 12345 |  | ||||||
|         } |  | ||||||
|     }, |  | ||||||
|     "products": [{ |  | ||||||
|             "name": "Hammer", |  | ||||||
|             "sku": 738594937 |  | ||||||
|         }, |  | ||||||
|         {}, |  | ||||||
|         { |  | ||||||
|             "color": "gray", |  | ||||||
|             "name": "Nail", |  | ||||||
|             "sku": 284758393 |  | ||||||
|         } |  | ||||||
|     ], |  | ||||||
|     "string": { |  | ||||||
|         "basic": { |  | ||||||
|             "basic": "I'm a string. \"You can quote me\". Name\tJosé\nLocation\tSF." |  | ||||||
|         }, |  | ||||||
|         "literal": { |  | ||||||
|             "multiline": { |  | ||||||
|                 "lines": "The first newline is\ntrimmed in raw strings.\n   All other whitespace\n   is preserved.\n", |  | ||||||
|                 "regex2": "I [dw]on't need \\d{2} apples" |  | ||||||
|             }, |  | ||||||
|             "quoted": "Tom \"Dubs\" Preston-Werner", |  | ||||||
|             "regex": "\u003c\\i\\c*\\s*\u003e", |  | ||||||
|             "winpath": "C:\\Users\\nodejs\\templates", |  | ||||||
|             "winpath2": "\\\\ServerX\\admin$\\system32\\" |  | ||||||
|         }, |  | ||||||
|         "multiline": { |  | ||||||
|             "continued": { |  | ||||||
|                 "key1": "The quick brown fox jumps over the lazy dog.", |  | ||||||
|                 "key2": "The quick brown fox jumps over the lazy dog.", |  | ||||||
|                 "key3": "The quick brown fox jumps over the lazy dog." |  | ||||||
|             }, |  | ||||||
|             "key1": "One\nTwo", |  | ||||||
|             "key2": "One\nTwo", |  | ||||||
|             "key3": "One\nTwo" |  | ||||||
|         } |  | ||||||
|     }, |  | ||||||
|     "table": { |  | ||||||
|         "inline": { |  | ||||||
|             "name": { |  | ||||||
|                 "first": "Tom", |  | ||||||
|                 "last": "Preston-Werner" |  | ||||||
|             }, |  | ||||||
|             "point": { |  | ||||||
|                 "x": 1, |  | ||||||
|                 "y": 2 |  | ||||||
|             } |  | ||||||
|         }, |  | ||||||
|         "key": "value", |  | ||||||
|         "subtable": { |  | ||||||
|             "key": "another value" |  | ||||||
|         } |  | ||||||
|     }, |  | ||||||
|     "x": { |  | ||||||
|         "y": { |  | ||||||
|             "z": { |  | ||||||
|                 "w": {} |  | ||||||
|             } |  | ||||||
|         } |  | ||||||
|     } |  | ||||||
| } |  | ||||||
							
								
								
									
32 vendor/github.com/pelletier/go-toml/benchmark.sh generated vendored
| @@ -1,32 +0,0 @@ |
| #!/bin/bash |  | ||||||
|  |  | ||||||
| set -e |  | ||||||
|  |  | ||||||
| reference_ref=${1:-master} |  | ||||||
| reference_git=${2:-.} |  | ||||||
|  |  | ||||||
| if ! `hash benchstat 2>/dev/null`; then |  | ||||||
|     echo "Installing benchstat" |  | ||||||
|     go get golang.org/x/perf/cmd/benchstat |  | ||||||
|     go install golang.org/x/perf/cmd/benchstat |  | ||||||
| fi |  | ||||||
|  |  | ||||||
| tempdir=`mktemp -d /tmp/go-toml-benchmark-XXXXXX` |  | ||||||
| ref_tempdir="${tempdir}/ref" |  | ||||||
| ref_benchmark="${ref_tempdir}/benchmark-`echo -n ${reference_ref}|tr -s '/' '-'`.txt" |  | ||||||
| local_benchmark="`pwd`/benchmark-local.txt" |  | ||||||
|  |  | ||||||
| echo "=== ${reference_ref} (${ref_tempdir})" |  | ||||||
| git clone ${reference_git} ${ref_tempdir} >/dev/null 2>/dev/null |  | ||||||
| pushd ${ref_tempdir} >/dev/null |  | ||||||
| git checkout ${reference_ref} >/dev/null 2>/dev/null |  | ||||||
| go test -bench=. -benchmem | tee ${ref_benchmark} |  | ||||||
| popd >/dev/null |  | ||||||
|  |  | ||||||
| echo "" |  | ||||||
| echo "=== local" |  | ||||||
| go test -bench=. -benchmem  | tee ${local_benchmark} |  | ||||||
|  |  | ||||||
| echo "" |  | ||||||
| echo "=== diff" |  | ||||||
| benchstat -delta-test=none ${ref_benchmark} ${local_benchmark} |  | ||||||
							
								
								
									
244 vendor/github.com/pelletier/go-toml/benchmark.toml generated vendored
| @@ -1,244 +0,0 @@ |
| ################################################################################ |  | ||||||
| ## Comment |  | ||||||
|  |  | ||||||
| # Speak your mind with the hash symbol. They go from the symbol to the end of |  | ||||||
| # the line. |  | ||||||
|  |  | ||||||
|  |  | ||||||
| ################################################################################ |  | ||||||
| ## Table |  | ||||||
|  |  | ||||||
| # Tables (also known as hash tables or dictionaries) are collections of |  | ||||||
| # key/value pairs. They appear in square brackets on a line by themselves. |  | ||||||
|  |  | ||||||
| [table] |  | ||||||
|  |  | ||||||
| key = "value" # Yeah, you can do this. |  | ||||||
|  |  | ||||||
| # Nested tables are denoted by table names with dots in them. Name your tables |  | ||||||
| # whatever crap you please, just don't use #, ., [ or ]. |  | ||||||
|  |  | ||||||
| [table.subtable] |  | ||||||
|  |  | ||||||
| key = "another value" |  | ||||||
|  |  | ||||||
| # You don't need to specify all the super-tables if you don't want to. TOML |  | ||||||
| # knows how to do it for you. |  | ||||||
|  |  | ||||||
| # [x] you |  | ||||||
| # [x.y] don't |  | ||||||
| # [x.y.z] need these |  | ||||||
| [x.y.z.w] # for this to work |  | ||||||
|  |  | ||||||
|  |  | ||||||
| ################################################################################ |  | ||||||
| ## Inline Table |  | ||||||
|  |  | ||||||
| # Inline tables provide a more compact syntax for expressing tables. They are |  | ||||||
| # especially useful for grouped data that can otherwise quickly become verbose. |  | ||||||
| # Inline tables are enclosed in curly braces `{` and `}`. No newlines are |  | ||||||
| # allowed between the curly braces unless they are valid within a value. |  | ||||||
|  |  | ||||||
| [table.inline] |  | ||||||
|  |  | ||||||
| name = { first = "Tom", last = "Preston-Werner" } |  | ||||||
| point = { x = 1, y = 2 } |  | ||||||
|  |  | ||||||
|  |  | ||||||
| ################################################################################ |  | ||||||
| ## String |  | ||||||
|  |  | ||||||
| # There are four ways to express strings: basic, multi-line basic, literal, and |  | ||||||
| # multi-line literal. All strings must contain only valid UTF-8 characters. |  | ||||||
|  |  | ||||||
| [string.basic] |  | ||||||
|  |  | ||||||
| basic = "I'm a string. \"You can quote me\". Name\tJos\u00E9\nLocation\tSF." |  | ||||||
|  |  | ||||||
| [string.multiline] |  | ||||||
|  |  | ||||||
| # The following strings are byte-for-byte equivalent: |  | ||||||
| key1 = "One\nTwo" |  | ||||||
| key2 = """One\nTwo""" |  | ||||||
| key3 = """ |  | ||||||
| One |  | ||||||
| Two""" |  | ||||||
|  |  | ||||||
| [string.multiline.continued] |  | ||||||
|  |  | ||||||
| # The following strings are byte-for-byte equivalent: |  | ||||||
| key1 = "The quick brown fox jumps over the lazy dog." |  | ||||||
|  |  | ||||||
| key2 = """ |  | ||||||
| The quick brown \ |  | ||||||
|  |  | ||||||
|  |  | ||||||
|   fox jumps over \ |  | ||||||
|     the lazy dog.""" |  | ||||||
|  |  | ||||||
| key3 = """\ |  | ||||||
|        The quick brown \ |  | ||||||
|        fox jumps over \ |  | ||||||
|        the lazy dog.\ |  | ||||||
|        """ |  | ||||||
|  |  | ||||||
| [string.literal] |  | ||||||
|  |  | ||||||
| # What you see is what you get. |  | ||||||
| winpath  = 'C:\Users\nodejs\templates' |  | ||||||
| winpath2 = '\\ServerX\admin$\system32\' |  | ||||||
| quoted   = 'Tom "Dubs" Preston-Werner' |  | ||||||
| regex    = '<\i\c*\s*>' |  | ||||||
|  |  | ||||||
|  |  | ||||||
| [string.literal.multiline] |  | ||||||
|  |  | ||||||
| regex2 = '''I [dw]on't need \d{2} apples''' |  | ||||||
| lines  = ''' |  | ||||||
| The first newline is |  | ||||||
| trimmed in raw strings. |  | ||||||
|    All other whitespace |  | ||||||
|    is preserved. |  | ||||||
| ''' |  | ||||||
|  |  | ||||||
|  |  | ||||||
| ################################################################################ |  | ||||||
| ## Integer |  | ||||||
|  |  | ||||||
| # Integers are whole numbers. Positive numbers may be prefixed with a plus sign. |  | ||||||
| # Negative numbers are prefixed with a minus sign. |  | ||||||
|  |  | ||||||
| [integer] |  | ||||||
|  |  | ||||||
| key1 = +99 |  | ||||||
| key2 = 42 |  | ||||||
| key3 = 0 |  | ||||||
| key4 = -17 |  | ||||||
|  |  | ||||||
| [integer.underscores] |  | ||||||
|  |  | ||||||
| # For large numbers, you may use underscores to enhance readability. Each |  | ||||||
| # underscore must be surrounded by at least one digit. |  | ||||||
| key1 = 1_000 |  | ||||||
| key2 = 5_349_221 |  | ||||||
| key3 = 1_2_3_4_5     # valid but inadvisable |  | ||||||
|  |  | ||||||
|  |  | ||||||
| ################################################################################ |  | ||||||
| ## Float |  | ||||||
|  |  | ||||||
| # A float consists of an integer part (which may be prefixed with a plus or |  | ||||||
| # minus sign) followed by a fractional part and/or an exponent part. |  | ||||||
|  |  | ||||||
| [float.fractional] |  | ||||||
|  |  | ||||||
| key1 = +1.0 |  | ||||||
| key2 = 3.1415 |  | ||||||
| key3 = -0.01 |  | ||||||
|  |  | ||||||
| [float.exponent] |  | ||||||
|  |  | ||||||
| key1 = 5e+22 |  | ||||||
| key2 = 1e6 |  | ||||||
| key3 = -2E-2 |  | ||||||
|  |  | ||||||
| [float.both] |  | ||||||
|  |  | ||||||
| key = 6.626e-34 |  | ||||||
|  |  | ||||||
| [float.underscores] |  | ||||||
|  |  | ||||||
| key1 = 9_224_617.445_991_228_313 |  | ||||||
| key2 = 1e1_00 |  | ||||||
|  |  | ||||||
|  |  | ||||||
| ################################################################################ |  | ||||||
| ## Boolean |  | ||||||
|  |  | ||||||
| # Booleans are just the tokens you're used to. Always lowercase. |  | ||||||
|  |  | ||||||
| [boolean] |  | ||||||
|  |  | ||||||
| True = true |  | ||||||
| False = false |  | ||||||
|  |  | ||||||
|  |  | ||||||
| ################################################################################ |  | ||||||
| ## Datetime |  | ||||||
|  |  | ||||||
| # Datetimes are RFC 3339 dates. |  | ||||||
|  |  | ||||||
| [datetime] |  | ||||||
|  |  | ||||||
| key1 = 1979-05-27T07:32:00Z |  | ||||||
| key2 = 1979-05-27T00:32:00-07:00 |  | ||||||
| key3 = 1979-05-27T00:32:00.999999-07:00 |  | ||||||
|  |  | ||||||
|  |  | ||||||
| ################################################################################ |  | ||||||
| ## Array |  | ||||||
|  |  | ||||||
| # Arrays are square brackets with other primitives inside. Whitespace is |  | ||||||
| # ignored. Elements are separated by commas. Data types may not be mixed. |  | ||||||
|  |  | ||||||
| [array] |  | ||||||
|  |  | ||||||
| key1 = [ 1, 2, 3 ] |  | ||||||
| key2 = [ "red", "yellow", "green" ] |  | ||||||
| key3 = [ [ 1, 2 ], [3, 4, 5] ] |  | ||||||
| #key4 = [ [ 1, 2 ], ["a", "b", "c"] ] # this is ok |  | ||||||
|  |  | ||||||
| # Arrays can also be multiline. So in addition to ignoring whitespace, arrays |  | ||||||
| # also ignore newlines between the brackets.  Terminating commas are ok before |  | ||||||
| # the closing bracket. |  | ||||||
|  |  | ||||||
| key5 = [ |  | ||||||
|   1, 2, 3 |  | ||||||
| ] |  | ||||||
| key6 = [ |  | ||||||
|   1, |  | ||||||
|   2, # this is ok |  | ||||||
| ] |  | ||||||
|  |  | ||||||
|  |  | ||||||
| ################################################################################ |  | ||||||
| ## Array of Tables |  | ||||||
|  |  | ||||||
| # These can be expressed by using a table name in double brackets. Each table |  | ||||||
| # with the same double bracketed name will be an element in the array. The |  | ||||||
| # tables are inserted in the order encountered. |  | ||||||
|  |  | ||||||
| [[products]] |  | ||||||
|  |  | ||||||
| name = "Hammer" |  | ||||||
| sku = 738594937 |  | ||||||
|  |  | ||||||
| [[products]] |  | ||||||
|  |  | ||||||
| [[products]] |  | ||||||
|  |  | ||||||
| name = "Nail" |  | ||||||
| sku = 284758393 |  | ||||||
| color = "gray" |  | ||||||
|  |  | ||||||
|  |  | ||||||
| # You can create nested arrays of tables as well. |  | ||||||
|  |  | ||||||
| [[fruit]] |  | ||||||
|   name = "apple" |  | ||||||
|  |  | ||||||
|   [fruit.physical] |  | ||||||
|     color = "red" |  | ||||||
|     shape = "round" |  | ||||||
|  |  | ||||||
|   [[fruit.variety]] |  | ||||||
|     name = "red delicious" |  | ||||||
|  |  | ||||||
|   [[fruit.variety]] |  | ||||||
|     name = "granny smith" |  | ||||||
|  |  | ||||||
| [[fruit]] |  | ||||||
|   name = "banana" |  | ||||||
|  |  | ||||||
|   [[fruit.variety]] |  | ||||||
|     name = "plantain" |  | ||||||
							
								
								
									
121 vendor/github.com/pelletier/go-toml/benchmark.yml generated vendored
| @@ -1,121 +0,0 @@ |
| --- |  | ||||||
| array: |  | ||||||
|   key1: |  | ||||||
|   - 1 |  | ||||||
|   - 2 |  | ||||||
|   - 3 |  | ||||||
|   key2: |  | ||||||
|   - red |  | ||||||
|   - yellow |  | ||||||
|   - green |  | ||||||
|   key3: |  | ||||||
|   - - 1 |  | ||||||
|     - 2 |  | ||||||
|   - - 3 |  | ||||||
|     - 4 |  | ||||||
|     - 5 |  | ||||||
|   key4: |  | ||||||
|   - - 1 |  | ||||||
|     - 2 |  | ||||||
|   - - a |  | ||||||
|     - b |  | ||||||
|     - c |  | ||||||
|   key5: |  | ||||||
|   - 1 |  | ||||||
|   - 2 |  | ||||||
|   - 3 |  | ||||||
|   key6: |  | ||||||
|   - 1 |  | ||||||
|   - 2 |  | ||||||
| boolean: |  | ||||||
|   'False': false |  | ||||||
|   'True': true |  | ||||||
| datetime: |  | ||||||
|   key1: '1979-05-27T07:32:00Z' |  | ||||||
|   key2: '1979-05-27T00:32:00-07:00' |  | ||||||
|   key3: '1979-05-27T00:32:00.999999-07:00' |  | ||||||
| float: |  | ||||||
|   both: |  | ||||||
|     key: 6.626e-34 |  | ||||||
|   exponent: |  | ||||||
|     key1: 5.0e+22 |  | ||||||
|     key2: 1000000 |  | ||||||
|     key3: -0.02 |  | ||||||
|   fractional: |  | ||||||
|     key1: 1 |  | ||||||
|     key2: 3.1415 |  | ||||||
|     key3: -0.01 |  | ||||||
|   underscores: |  | ||||||
|     key1: 9224617.445991227 |  | ||||||
|     key2: 1.0e+100 |  | ||||||
| fruit: |  | ||||||
| - name: apple |  | ||||||
|   physical: |  | ||||||
|     color: red |  | ||||||
|     shape: round |  | ||||||
|   variety: |  | ||||||
|   - name: red delicious |  | ||||||
|   - name: granny smith |  | ||||||
| - name: banana |  | ||||||
|   variety: |  | ||||||
|   - name: plantain |  | ||||||
| integer: |  | ||||||
|   key1: 99 |  | ||||||
|   key2: 42 |  | ||||||
|   key3: 0 |  | ||||||
|   key4: -17 |  | ||||||
|   underscores: |  | ||||||
|     key1: 1000 |  | ||||||
|     key2: 5349221 |  | ||||||
|     key3: 12345 |  | ||||||
| products: |  | ||||||
| - name: Hammer |  | ||||||
|   sku: 738594937 |  | ||||||
| - {} |  | ||||||
| - color: gray |  | ||||||
|   name: Nail |  | ||||||
|   sku: 284758393 |  | ||||||
| string: |  | ||||||
|   basic: |  | ||||||
|     basic: "I'm a string. \"You can quote me\". Name\tJosé\nLocation\tSF." |  | ||||||
|   literal: |  | ||||||
|     multiline: |  | ||||||
|       lines: | |  | ||||||
|         The first newline is |  | ||||||
|         trimmed in raw strings. |  | ||||||
|            All other whitespace |  | ||||||
|            is preserved. |  | ||||||
|       regex2: I [dw]on't need \d{2} apples |  | ||||||
|     quoted: Tom "Dubs" Preston-Werner |  | ||||||
|     regex: "<\\i\\c*\\s*>" |  | ||||||
|     winpath: C:\Users\nodejs\templates |  | ||||||
|     winpath2: "\\\\ServerX\\admin$\\system32\\" |  | ||||||
|   multiline: |  | ||||||
|     continued: |  | ||||||
|       key1: The quick brown fox jumps over the lazy dog. |  | ||||||
|       key2: The quick brown fox jumps over the lazy dog. |  | ||||||
|       key3: The quick brown fox jumps over the lazy dog. |  | ||||||
|     key1: |- |  | ||||||
|       One |  | ||||||
|       Two |  | ||||||
|     key2: |- |  | ||||||
|       One |  | ||||||
|       Two |  | ||||||
|     key3: |- |  | ||||||
|       One |  | ||||||
|       Two |  | ||||||
| table: |  | ||||||
|   inline: |  | ||||||
|     name: |  | ||||||
|       first: Tom |  | ||||||
|       last: Preston-Werner |  | ||||||
|     point: |  | ||||||
|       x: 1 |  | ||||||
|       y: 2 |  | ||||||
|   key: value |  | ||||||
|   subtable: |  | ||||||
|     key: another value |  | ||||||
| x: |  | ||||||
|   y: |  | ||||||
|     z: |  | ||||||
|       w: {} |  | ||||||
							
								
								
									
23 vendor/github.com/pelletier/go-toml/doc.go generated vendored
| @@ -1,23 +0,0 @@ |
| // Package toml is a TOML parser and manipulation library. |  | ||||||
| // |  | ||||||
| // This version supports the specification as described in |  | ||||||
| // https://github.com/toml-lang/toml/blob/master/versions/en/toml-v0.4.0.md |  | ||||||
| // |  | ||||||
| // Marshaling |  | ||||||
| // |  | ||||||
| // Go-toml can marshal and unmarshal TOML documents from and to data |  | ||||||
| // structures. |  | ||||||
| // |  | ||||||
| // TOML document as a tree |  | ||||||
| // |  | ||||||
| // Go-toml can operate on a TOML document as a tree. Use one of the Load* |  | ||||||
| // functions to parse TOML data and obtain a Tree instance, then one of its |  | ||||||
| // methods to manipulate the tree. |  | ||||||
| // |  | ||||||
| // JSONPath-like queries |  | ||||||
| // |  | ||||||
| // The package github.com/pelletier/go-toml/query implements a system |  | ||||||
| // similar to JSONPath to quickly retrieve elements of a TOML document using a |  | ||||||
| // single expression. See the package documentation for more information. |  | ||||||
| // |  | ||||||
| package toml |  | ||||||
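A brief sketch of the tree-based workflow the package comment describes, mirroring the README example (illustrative only; the int64 type for integers is assumed from go-toml's documented behaviour):

```go
package main

import (
	"fmt"

	toml "github.com/pelletier/go-toml"
)

func main() {
	tree, err := toml.Load(`
[server]
host = "127.0.0.1"
port = 8080
`)
	if err != nil {
		panic(err)
	}

	// Navigate the document as a tree.
	host := tree.Get("server.host").(string)
	port := tree.Get("server.port").(int64)
	fmt.Printf("%s:%d\n", host, port)

	// Mutate and serialize back to TOML.
	tree.Set("server.port", int64(9090))
	out, _ := tree.ToTomlString()
	fmt.Println(out)
}
```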
							
								
								
									
29 vendor/github.com/pelletier/go-toml/example-crlf.toml generated vendored
| @@ -1,29 +0,0 @@ |
| # This is a TOML document. Boom. |  | ||||||
|  |  | ||||||
| title = "TOML Example" |  | ||||||
|  |  | ||||||
| [owner] |  | ||||||
| name = "Tom Preston-Werner" |  | ||||||
| organization = "GitHub" |  | ||||||
| bio = "GitHub Cofounder & CEO\nLikes tater tots and beer." |  | ||||||
| dob = 1979-05-27T07:32:00Z # First class dates? Why not? |  | ||||||
|  |  | ||||||
| [database] |  | ||||||
| server = "192.168.1.1" |  | ||||||
| ports = [ 8001, 8001, 8002 ] |  | ||||||
| connection_max = 5000 |  | ||||||
| enabled = true |  | ||||||
|  |  | ||||||
| [servers] |  | ||||||
|  |  | ||||||
|   # You can indent as you please. Tabs or spaces. TOML don't care. |  | ||||||
|   [servers.alpha] |  | ||||||
|   ip = "10.0.0.1" |  | ||||||
|   dc = "eqdc10" |  | ||||||
|  |  | ||||||
|   [servers.beta] |  | ||||||
|   ip = "10.0.0.2" |  | ||||||
|   dc = "eqdc10" |  | ||||||
|  |  | ||||||
| [clients] |  | ||||||
| data = [ ["gamma", "delta"], [1, 2] ] # just an update to make sure parsers support it |  | ||||||
							
								
								
									
29 vendor/github.com/pelletier/go-toml/example.toml generated vendored
| @@ -1,29 +0,0 @@ |
| # This is a TOML document. Boom. |  | ||||||
|  |  | ||||||
| title = "TOML Example" |  | ||||||
|  |  | ||||||
| [owner] |  | ||||||
| name = "Tom Preston-Werner" |  | ||||||
| organization = "GitHub" |  | ||||||
| bio = "GitHub Cofounder & CEO\nLikes tater tots and beer." |  | ||||||
| dob = 1979-05-27T07:32:00Z # First class dates? Why not? |  | ||||||
|  |  | ||||||
| [database] |  | ||||||
| server = "192.168.1.1" |  | ||||||
| ports = [ 8001, 8001, 8002 ] |  | ||||||
| connection_max = 5000 |  | ||||||
| enabled = true |  | ||||||
|  |  | ||||||
| [servers] |  | ||||||
|  |  | ||||||
|   # You can indent as you please. Tabs or spaces. TOML don't care. |  | ||||||
|   [servers.alpha] |  | ||||||
|   ip = "10.0.0.1" |  | ||||||
|   dc = "eqdc10" |  | ||||||
|  |  | ||||||
|   [servers.beta] |  | ||||||
|   ip = "10.0.0.2" |  | ||||||
|   dc = "eqdc10" |  | ||||||
|  |  | ||||||
| [clients] |  | ||||||
| data = [ ["gamma", "delta"], [1, 2] ] # just an update to make sure parsers support it |  | ||||||
							
								
								
									
31 vendor/github.com/pelletier/go-toml/fuzz.go generated vendored
| @@ -1,31 +0,0 @@ |
| // +build gofuzz |  | ||||||
|  |  | ||||||
| package toml |  | ||||||
|  |  | ||||||
| func Fuzz(data []byte) int { |  | ||||||
| 	tree, err := LoadBytes(data) |  | ||||||
| 	if err != nil { |  | ||||||
| 		if tree != nil { |  | ||||||
| 			panic("tree must be nil if there is an error") |  | ||||||
| 		} |  | ||||||
| 		return 0 |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	str, err := tree.ToTomlString() |  | ||||||
| 	if err != nil { |  | ||||||
| 		if str != "" { |  | ||||||
| 			panic(`str must be "" if there is an error`) |  | ||||||
| 		} |  | ||||||
| 		panic(err) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	tree, err = Load(str) |  | ||||||
| 	if err != nil { |  | ||||||
| 		if tree != nil { |  | ||||||
| 			panic("tree must be nil if there is an error") |  | ||||||
| 		} |  | ||||||
| 		return 0 |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return 1 |  | ||||||
| } |  | ||||||
							
								
								
									
15 vendor/github.com/pelletier/go-toml/fuzz.sh generated vendored
| @@ -1,15 +0,0 @@ |
| #! /bin/sh |  | ||||||
| set -eu |  | ||||||
|  |  | ||||||
| go get github.com/dvyukov/go-fuzz/go-fuzz |  | ||||||
| go get github.com/dvyukov/go-fuzz/go-fuzz-build |  | ||||||
|  |  | ||||||
| if [ ! -e toml-fuzz.zip ]; then |  | ||||||
|     go-fuzz-build github.com/pelletier/go-toml |  | ||||||
| fi |  | ||||||
|  |  | ||||||
| rm -fr fuzz |  | ||||||
| mkdir -p fuzz/corpus |  | ||||||
| cp *.toml fuzz/corpus |  | ||||||
|  |  | ||||||
| go-fuzz -bin=toml-fuzz.zip -workdir=fuzz |  | ||||||
							
								
								
									
85 vendor/github.com/pelletier/go-toml/keysparsing.go generated vendored
| @@ -1,85 +0,0 @@ |
| // Parsing keys handling both bare and quoted keys. |  | ||||||
|  |  | ||||||
| package toml |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"bytes" |  | ||||||
| 	"errors" |  | ||||||
| 	"fmt" |  | ||||||
| 	"unicode" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| // Convert the bare key group string to an array. |  | ||||||
| // The input supports double quotation to allow "." inside the key name, |  | ||||||
| // but escape sequences are not supported. Lexers must unescape them beforehand. |  | ||||||
| func parseKey(key string) ([]string, error) { |  | ||||||
| 	groups := []string{} |  | ||||||
| 	var buffer bytes.Buffer |  | ||||||
| 	inQuotes := false |  | ||||||
| 	wasInQuotes := false |  | ||||||
| 	ignoreSpace := true |  | ||||||
| 	expectDot := false |  | ||||||
|  |  | ||||||
| 	for _, char := range key { |  | ||||||
| 		if ignoreSpace { |  | ||||||
| 			if char == ' ' { |  | ||||||
| 				continue |  | ||||||
| 			} |  | ||||||
| 			ignoreSpace = false |  | ||||||
| 		} |  | ||||||
| 		switch char { |  | ||||||
| 		case '"': |  | ||||||
| 			if inQuotes { |  | ||||||
| 				groups = append(groups, buffer.String()) |  | ||||||
| 				buffer.Reset() |  | ||||||
| 				wasInQuotes = true |  | ||||||
| 			} |  | ||||||
| 			inQuotes = !inQuotes |  | ||||||
| 			expectDot = false |  | ||||||
| 		case '.': |  | ||||||
| 			if inQuotes { |  | ||||||
| 				buffer.WriteRune(char) |  | ||||||
| 			} else { |  | ||||||
| 				if !wasInQuotes { |  | ||||||
| 					if buffer.Len() == 0 { |  | ||||||
| 						return nil, errors.New("empty table key") |  | ||||||
| 					} |  | ||||||
| 					groups = append(groups, buffer.String()) |  | ||||||
| 					buffer.Reset() |  | ||||||
| 				} |  | ||||||
| 				ignoreSpace = true |  | ||||||
| 				expectDot = false |  | ||||||
| 				wasInQuotes = false |  | ||||||
| 			} |  | ||||||
| 		case ' ': |  | ||||||
| 			if inQuotes { |  | ||||||
| 				buffer.WriteRune(char) |  | ||||||
| 			} else { |  | ||||||
| 				expectDot = true |  | ||||||
| 			} |  | ||||||
| 		default: |  | ||||||
| 			if !inQuotes && !isValidBareChar(char) { |  | ||||||
| 				return nil, fmt.Errorf("invalid bare character: %c", char) |  | ||||||
| 			} |  | ||||||
| 			if !inQuotes && expectDot { |  | ||||||
| 				return nil, errors.New("what?") |  | ||||||
| 			} |  | ||||||
| 			buffer.WriteRune(char) |  | ||||||
| 			expectDot = false |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| 	if inQuotes { |  | ||||||
| 		return nil, errors.New("mismatched quotes") |  | ||||||
| 	} |  | ||||||
| 	if buffer.Len() > 0 { |  | ||||||
| 		groups = append(groups, buffer.String()) |  | ||||||
| 	} |  | ||||||
| 	if len(groups) == 0 { |  | ||||||
| 		return nil, errors.New("empty key") |  | ||||||
| 	} |  | ||||||
| 	return groups, nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func isValidBareChar(r rune) bool { |  | ||||||
| 	return isAlphanumeric(r) || r == '-' || unicode.IsNumber(r) |  | ||||||
| } |  | ||||||
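Roughly what parseKey accepts, as a hypothetical illustration (parseKey is unexported, so this is a behavioural sketch rather than a public API):

```go
// Hypothetical calls from inside package toml:
//
//   parseKey(`a.b.c`)       // -> []string{"a", "b", "c"}, nil
//   parseKey(`a."b.c".d`)   // -> []string{"a", "b.c", "d"}, nil  (quotes keep the dot)
//   parseKey(`"unbalanced`) // -> nil, error: mismatched quotes
```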
							
								
								
									
750 vendor/github.com/pelletier/go-toml/lexer.go generated vendored
| @@ -1,750 +0,0 @@ |
| // TOML lexer. |  | ||||||
| // |  | ||||||
| // Written using the principles developed by Rob Pike in |  | ||||||
| // http://www.youtube.com/watch?v=HxaD_trXwRE |  | ||||||
|  |  | ||||||
| package toml |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"bytes" |  | ||||||
| 	"errors" |  | ||||||
| 	"fmt" |  | ||||||
| 	"regexp" |  | ||||||
| 	"strconv" |  | ||||||
| 	"strings" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| var dateRegexp *regexp.Regexp |  | ||||||
|  |  | ||||||
| // Define state functions |  | ||||||
| type tomlLexStateFn func() tomlLexStateFn |  | ||||||
|  |  | ||||||
| // Define lexer |  | ||||||
| type tomlLexer struct { |  | ||||||
| 	inputIdx          int |  | ||||||
| 	input             []rune // Textual source |  | ||||||
| 	currentTokenStart int |  | ||||||
| 	currentTokenStop  int |  | ||||||
| 	tokens            []token |  | ||||||
| 	depth             int |  | ||||||
| 	line              int |  | ||||||
| 	col               int |  | ||||||
| 	endbufferLine     int |  | ||||||
| 	endbufferCol      int |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Basic read operations on input |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) read() rune { |  | ||||||
| 	r := l.peek() |  | ||||||
| 	if r == '\n' { |  | ||||||
| 		l.endbufferLine++ |  | ||||||
| 		l.endbufferCol = 1 |  | ||||||
| 	} else { |  | ||||||
| 		l.endbufferCol++ |  | ||||||
| 	} |  | ||||||
| 	l.inputIdx++ |  | ||||||
| 	return r |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) next() rune { |  | ||||||
| 	r := l.read() |  | ||||||
|  |  | ||||||
| 	if r != eof { |  | ||||||
| 		l.currentTokenStop++ |  | ||||||
| 	} |  | ||||||
| 	return r |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) ignore() { |  | ||||||
| 	l.currentTokenStart = l.currentTokenStop |  | ||||||
| 	l.line = l.endbufferLine |  | ||||||
| 	l.col = l.endbufferCol |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) skip() { |  | ||||||
| 	l.next() |  | ||||||
| 	l.ignore() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) fastForward(n int) { |  | ||||||
| 	for i := 0; i < n; i++ { |  | ||||||
| 		l.next() |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) emitWithValue(t tokenType, value string) { |  | ||||||
| 	l.tokens = append(l.tokens, token{ |  | ||||||
| 		Position: Position{l.line, l.col}, |  | ||||||
| 		typ:      t, |  | ||||||
| 		val:      value, |  | ||||||
| 	}) |  | ||||||
| 	l.ignore() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) emit(t tokenType) { |  | ||||||
| 	l.emitWithValue(t, string(l.input[l.currentTokenStart:l.currentTokenStop])) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) peek() rune { |  | ||||||
| 	if l.inputIdx >= len(l.input) { |  | ||||||
| 		return eof |  | ||||||
| 	} |  | ||||||
| 	return l.input[l.inputIdx] |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) peekString(size int) string { |  | ||||||
| 	maxIdx := len(l.input) |  | ||||||
| 	upperIdx := l.inputIdx + size // FIXME: potential overflow |  | ||||||
| 	if upperIdx > maxIdx { |  | ||||||
| 		upperIdx = maxIdx |  | ||||||
| 	} |  | ||||||
| 	return string(l.input[l.inputIdx:upperIdx]) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) follow(next string) bool { |  | ||||||
| 	return next == l.peekString(len(next)) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Error management |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) errorf(format string, args ...interface{}) tomlLexStateFn { |  | ||||||
| 	l.tokens = append(l.tokens, token{ |  | ||||||
| 		Position: Position{l.line, l.col}, |  | ||||||
| 		typ:      tokenError, |  | ||||||
| 		val:      fmt.Sprintf(format, args...), |  | ||||||
| 	}) |  | ||||||
| 	return nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // State functions |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) lexVoid() tomlLexStateFn { |  | ||||||
| 	for { |  | ||||||
| 		next := l.peek() |  | ||||||
| 		switch next { |  | ||||||
| 		case '[': |  | ||||||
| 			return l.lexTableKey |  | ||||||
| 		case '#': |  | ||||||
| 			return l.lexComment(l.lexVoid) |  | ||||||
| 		case '=': |  | ||||||
| 			return l.lexEqual |  | ||||||
| 		case '\r': |  | ||||||
| 			fallthrough |  | ||||||
| 		case '\n': |  | ||||||
| 			l.skip() |  | ||||||
| 			continue |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if isSpace(next) { |  | ||||||
| 			l.skip() |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if l.depth > 0 { |  | ||||||
| 			return l.lexRvalue |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if isKeyStartChar(next) { |  | ||||||
| 			return l.lexKey |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if next == eof { |  | ||||||
| 			l.next() |  | ||||||
| 			break |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	l.emit(tokenEOF) |  | ||||||
| 	return nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) lexRvalue() tomlLexStateFn { |  | ||||||
| 	for { |  | ||||||
| 		next := l.peek() |  | ||||||
| 		switch next { |  | ||||||
| 		case '.': |  | ||||||
| 			return l.errorf("cannot start float with a dot") |  | ||||||
| 		case '=': |  | ||||||
| 			return l.lexEqual |  | ||||||
| 		case '[': |  | ||||||
| 			l.depth++ |  | ||||||
| 			return l.lexLeftBracket |  | ||||||
| 		case ']': |  | ||||||
| 			l.depth-- |  | ||||||
| 			return l.lexRightBracket |  | ||||||
| 		case '{': |  | ||||||
| 			return l.lexLeftCurlyBrace |  | ||||||
| 		case '}': |  | ||||||
| 			return l.lexRightCurlyBrace |  | ||||||
| 		case '#': |  | ||||||
| 			return l.lexComment(l.lexRvalue) |  | ||||||
| 		case '"': |  | ||||||
| 			return l.lexString |  | ||||||
| 		case '\'': |  | ||||||
| 			return l.lexLiteralString |  | ||||||
| 		case ',': |  | ||||||
| 			return l.lexComma |  | ||||||
| 		case '\r': |  | ||||||
| 			fallthrough |  | ||||||
| 		case '\n': |  | ||||||
| 			l.skip() |  | ||||||
| 			if l.depth == 0 { |  | ||||||
| 				return l.lexVoid |  | ||||||
| 			} |  | ||||||
| 			return l.lexRvalue |  | ||||||
| 		case '_': |  | ||||||
| 			return l.errorf("cannot start number with underscore") |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if l.follow("true") { |  | ||||||
| 			return l.lexTrue |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if l.follow("false") { |  | ||||||
| 			return l.lexFalse |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if l.follow("inf") { |  | ||||||
| 			return l.lexInf |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if l.follow("nan") { |  | ||||||
| 			return l.lexNan |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if isSpace(next) { |  | ||||||
| 			l.skip() |  | ||||||
| 			continue |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if next == eof { |  | ||||||
| 			l.next() |  | ||||||
| 			break |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		possibleDate := l.peekString(35) |  | ||||||
| 		dateMatch := dateRegexp.FindString(possibleDate) |  | ||||||
| 		if dateMatch != "" { |  | ||||||
| 			l.fastForward(len(dateMatch)) |  | ||||||
| 			return l.lexDate |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if next == '+' || next == '-' || isDigit(next) { |  | ||||||
| 			return l.lexNumber |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if isAlphanumeric(next) { |  | ||||||
| 			return l.lexKey |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		return l.errorf("no value can start with %c", next) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	l.emit(tokenEOF) |  | ||||||
| 	return nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) lexLeftCurlyBrace() tomlLexStateFn { |  | ||||||
| 	l.next() |  | ||||||
| 	l.emit(tokenLeftCurlyBrace) |  | ||||||
| 	return l.lexRvalue |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) lexRightCurlyBrace() tomlLexStateFn { |  | ||||||
| 	l.next() |  | ||||||
| 	l.emit(tokenRightCurlyBrace) |  | ||||||
| 	return l.lexRvalue |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) lexDate() tomlLexStateFn { |  | ||||||
| 	l.emit(tokenDate) |  | ||||||
| 	return l.lexRvalue |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) lexTrue() tomlLexStateFn { |  | ||||||
| 	l.fastForward(4) |  | ||||||
| 	l.emit(tokenTrue) |  | ||||||
| 	return l.lexRvalue |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) lexFalse() tomlLexStateFn { |  | ||||||
| 	l.fastForward(5) |  | ||||||
| 	l.emit(tokenFalse) |  | ||||||
| 	return l.lexRvalue |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) lexInf() tomlLexStateFn { |  | ||||||
| 	l.fastForward(3) |  | ||||||
| 	l.emit(tokenInf) |  | ||||||
| 	return l.lexRvalue |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) lexNan() tomlLexStateFn { |  | ||||||
| 	l.fastForward(3) |  | ||||||
| 	l.emit(tokenNan) |  | ||||||
| 	return l.lexRvalue |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) lexEqual() tomlLexStateFn { |  | ||||||
| 	l.next() |  | ||||||
| 	l.emit(tokenEqual) |  | ||||||
| 	return l.lexRvalue |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) lexComma() tomlLexStateFn { |  | ||||||
| 	l.next() |  | ||||||
| 	l.emit(tokenComma) |  | ||||||
| 	return l.lexRvalue |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Parses the key and emits its value without escape sequences. |  | ||||||
| // Bare keys, basic string keys, and literal string keys are supported. |  | ||||||
| func (l *tomlLexer) lexKey() tomlLexStateFn { |  | ||||||
| 	growingString := "" |  | ||||||
|  |  | ||||||
| 	for r := l.peek(); isKeyChar(r) || r == '\n' || r == '\r'; r = l.peek() { |  | ||||||
| 		if r == '"' { |  | ||||||
| 			l.next() |  | ||||||
| 			str, err := l.lexStringAsString(`"`, false, true) |  | ||||||
| 			if err != nil { |  | ||||||
| 				return l.errorf(err.Error()) |  | ||||||
| 			} |  | ||||||
| 			growingString += str |  | ||||||
| 			l.next() |  | ||||||
| 			continue |  | ||||||
| 		} else if r == '\'' { |  | ||||||
| 			l.next() |  | ||||||
| 			str, err := l.lexLiteralStringAsString(`'`, false) |  | ||||||
| 			if err != nil { |  | ||||||
| 				return l.errorf(err.Error()) |  | ||||||
| 			} |  | ||||||
| 			growingString += str |  | ||||||
| 			l.next() |  | ||||||
| 			continue |  | ||||||
| 		} else if r == '\n' { |  | ||||||
| 			return l.errorf("keys cannot contain new lines") |  | ||||||
| 		} else if isSpace(r) { |  | ||||||
| 			break |  | ||||||
| 		} else if !isValidBareChar(r) { |  | ||||||
| 			return l.errorf("keys cannot contain %c character", r) |  | ||||||
| 		} |  | ||||||
| 		growingString += string(r) |  | ||||||
| 		l.next() |  | ||||||
| 	} |  | ||||||
| 	l.emitWithValue(tokenKey, growingString) |  | ||||||
| 	return l.lexVoid |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) lexComment(previousState tomlLexStateFn) tomlLexStateFn { |  | ||||||
| 	return func() tomlLexStateFn { |  | ||||||
| 		for next := l.peek(); next != '\n' && next != eof; next = l.peek() { |  | ||||||
| 			if next == '\r' && l.follow("\r\n") { |  | ||||||
| 				break |  | ||||||
| 			} |  | ||||||
| 			l.next() |  | ||||||
| 		} |  | ||||||
| 		l.ignore() |  | ||||||
| 		return previousState |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) lexLeftBracket() tomlLexStateFn { |  | ||||||
| 	l.next() |  | ||||||
| 	l.emit(tokenLeftBracket) |  | ||||||
| 	return l.lexRvalue |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) lexLiteralStringAsString(terminator string, discardLeadingNewLine bool) (string, error) { |  | ||||||
| 	growingString := "" |  | ||||||
|  |  | ||||||
| 	if discardLeadingNewLine { |  | ||||||
| 		if l.follow("\r\n") { |  | ||||||
| 			l.skip() |  | ||||||
| 			l.skip() |  | ||||||
| 		} else if l.peek() == '\n' { |  | ||||||
| 			l.skip() |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// find end of string |  | ||||||
| 	for { |  | ||||||
| 		if l.follow(terminator) { |  | ||||||
| 			return growingString, nil |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		next := l.peek() |  | ||||||
| 		if next == eof { |  | ||||||
| 			break |  | ||||||
| 		} |  | ||||||
| 		growingString += string(l.next()) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return "", errors.New("unclosed string") |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) lexLiteralString() tomlLexStateFn { |  | ||||||
| 	l.skip() |  | ||||||
|  |  | ||||||
| 	// handle special case for triple-quote |  | ||||||
| 	terminator := "'" |  | ||||||
| 	discardLeadingNewLine := false |  | ||||||
| 	if l.follow("''") { |  | ||||||
| 		l.skip() |  | ||||||
| 		l.skip() |  | ||||||
| 		terminator = "'''" |  | ||||||
| 		discardLeadingNewLine = true |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	str, err := l.lexLiteralStringAsString(terminator, discardLeadingNewLine) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return l.errorf(err.Error()) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	l.emitWithValue(tokenString, str) |  | ||||||
| 	l.fastForward(len(terminator)) |  | ||||||
| 	l.ignore() |  | ||||||
| 	return l.lexRvalue |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Lex a string and return the results as a string. |  | ||||||
| // Terminator is the substring indicating the end of the token. |  | ||||||
| // The resulting string does not include the terminator. |  | ||||||
| func (l *tomlLexer) lexStringAsString(terminator string, discardLeadingNewLine, acceptNewLines bool) (string, error) { |  | ||||||
| 	growingString := "" |  | ||||||
|  |  | ||||||
| 	if discardLeadingNewLine { |  | ||||||
| 		if l.follow("\r\n") { |  | ||||||
| 			l.skip() |  | ||||||
| 			l.skip() |  | ||||||
| 		} else if l.peek() == '\n' { |  | ||||||
| 			l.skip() |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	for { |  | ||||||
| 		if l.follow(terminator) { |  | ||||||
| 			return growingString, nil |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if l.follow("\\") { |  | ||||||
| 			l.next() |  | ||||||
| 			switch l.peek() { |  | ||||||
| 			case '\r': |  | ||||||
| 				fallthrough |  | ||||||
| 			case '\n': |  | ||||||
| 				fallthrough |  | ||||||
| 			case '\t': |  | ||||||
| 				fallthrough |  | ||||||
| 			case ' ': |  | ||||||
| 				// skip all whitespace chars following backslash |  | ||||||
| 				for strings.ContainsRune("\r\n\t ", l.peek()) { |  | ||||||
| 					l.next() |  | ||||||
| 				} |  | ||||||
| 			case '"': |  | ||||||
| 				growingString += "\"" |  | ||||||
| 				l.next() |  | ||||||
| 			case 'n': |  | ||||||
| 				growingString += "\n" |  | ||||||
| 				l.next() |  | ||||||
| 			case 'b': |  | ||||||
| 				growingString += "\b" |  | ||||||
| 				l.next() |  | ||||||
| 			case 'f': |  | ||||||
| 				growingString += "\f" |  | ||||||
| 				l.next() |  | ||||||
| 			case '/': |  | ||||||
| 				growingString += "/" |  | ||||||
| 				l.next() |  | ||||||
| 			case 't': |  | ||||||
| 				growingString += "\t" |  | ||||||
| 				l.next() |  | ||||||
| 			case 'r': |  | ||||||
| 				growingString += "\r" |  | ||||||
| 				l.next() |  | ||||||
| 			case '\\': |  | ||||||
| 				growingString += "\\" |  | ||||||
| 				l.next() |  | ||||||
| 			case 'u': |  | ||||||
| 				l.next() |  | ||||||
| 				code := "" |  | ||||||
| 				for i := 0; i < 4; i++ { |  | ||||||
| 					c := l.peek() |  | ||||||
| 					if !isHexDigit(c) { |  | ||||||
| 						return "", errors.New("unfinished unicode escape") |  | ||||||
| 					} |  | ||||||
| 					l.next() |  | ||||||
| 					code = code + string(c) |  | ||||||
| 				} |  | ||||||
| 				intcode, err := strconv.ParseInt(code, 16, 32) |  | ||||||
| 				if err != nil { |  | ||||||
| 					return "", errors.New("invalid unicode escape: \\u" + code) |  | ||||||
| 				} |  | ||||||
| 				growingString += string(rune(intcode)) |  | ||||||
| 			case 'U': |  | ||||||
| 				l.next() |  | ||||||
| 				code := "" |  | ||||||
| 				for i := 0; i < 8; i++ { |  | ||||||
| 					c := l.peek() |  | ||||||
| 					if !isHexDigit(c) { |  | ||||||
| 						return "", errors.New("unfinished unicode escape") |  | ||||||
| 					} |  | ||||||
| 					l.next() |  | ||||||
| 					code = code + string(c) |  | ||||||
| 				} |  | ||||||
| 				intcode, err := strconv.ParseInt(code, 16, 64) |  | ||||||
| 				if err != nil { |  | ||||||
| 					return "", errors.New("invalid unicode escape: \\U" + code) |  | ||||||
| 				} |  | ||||||
| 				growingString += string(rune(intcode)) |  | ||||||
| 			default: |  | ||||||
| 				return "", errors.New("invalid escape sequence: \\" + string(l.peek())) |  | ||||||
| 			} |  | ||||||
| 		} else { |  | ||||||
| 			r := l.peek() |  | ||||||
|  |  | ||||||
| 			if 0x00 <= r && r <= 0x1F && !(acceptNewLines && (r == '\n' || r == '\r')) { |  | ||||||
| 				return "", fmt.Errorf("unescaped control character %U", r) |  | ||||||
| 			} |  | ||||||
| 			l.next() |  | ||||||
| 			growingString += string(r) |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if l.peek() == eof { |  | ||||||
| 			break |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return "", errors.New("unclosed string") |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) lexString() tomlLexStateFn { |  | ||||||
| 	l.skip() |  | ||||||
|  |  | ||||||
| 	// handle special case for triple-quote |  | ||||||
| 	terminator := `"` |  | ||||||
| 	discardLeadingNewLine := false |  | ||||||
| 	acceptNewLines := false |  | ||||||
| 	if l.follow(`""`) { |  | ||||||
| 		l.skip() |  | ||||||
| 		l.skip() |  | ||||||
| 		terminator = `"""` |  | ||||||
| 		discardLeadingNewLine = true |  | ||||||
| 		acceptNewLines = true |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	str, err := l.lexStringAsString(terminator, discardLeadingNewLine, acceptNewLines) |  | ||||||
|  |  | ||||||
| 	if err != nil { |  | ||||||
| 		return l.errorf(err.Error()) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	l.emitWithValue(tokenString, str) |  | ||||||
| 	l.fastForward(len(terminator)) |  | ||||||
| 	l.ignore() |  | ||||||
| 	return l.lexRvalue |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) lexTableKey() tomlLexStateFn { |  | ||||||
| 	l.next() |  | ||||||
|  |  | ||||||
| 	if l.peek() == '[' { |  | ||||||
| 		// token '[[' signifies an array of tables |  | ||||||
| 		l.next() |  | ||||||
| 		l.emit(tokenDoubleLeftBracket) |  | ||||||
| 		return l.lexInsideTableArrayKey |  | ||||||
| 	} |  | ||||||
| 	// vanilla table key |  | ||||||
| 	l.emit(tokenLeftBracket) |  | ||||||
| 	return l.lexInsideTableKey |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Parse the key till "]]", but only bare keys are supported |  | ||||||
| func (l *tomlLexer) lexInsideTableArrayKey() tomlLexStateFn { |  | ||||||
| 	for r := l.peek(); r != eof; r = l.peek() { |  | ||||||
| 		switch r { |  | ||||||
| 		case ']': |  | ||||||
| 			if l.currentTokenStop > l.currentTokenStart { |  | ||||||
| 				l.emit(tokenKeyGroupArray) |  | ||||||
| 			} |  | ||||||
| 			l.next() |  | ||||||
| 			if l.peek() != ']' { |  | ||||||
| 				break |  | ||||||
| 			} |  | ||||||
| 			l.next() |  | ||||||
| 			l.emit(tokenDoubleRightBracket) |  | ||||||
| 			return l.lexVoid |  | ||||||
| 		case '[': |  | ||||||
| 			return l.errorf("table array key cannot contain '['") |  | ||||||
| 		default: |  | ||||||
| 			l.next() |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| 	return l.errorf("unclosed table array key") |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Parse the key till "]" but only bare keys are supported |  | ||||||
| func (l *tomlLexer) lexInsideTableKey() tomlLexStateFn { |  | ||||||
| 	for r := l.peek(); r != eof; r = l.peek() { |  | ||||||
| 		switch r { |  | ||||||
| 		case ']': |  | ||||||
| 			if l.currentTokenStop > l.currentTokenStart { |  | ||||||
| 				l.emit(tokenKeyGroup) |  | ||||||
| 			} |  | ||||||
| 			l.next() |  | ||||||
| 			l.emit(tokenRightBracket) |  | ||||||
| 			return l.lexVoid |  | ||||||
| 		case '[': |  | ||||||
| 			return l.errorf("table key cannot contain '['") |  | ||||||
| 		default: |  | ||||||
| 			l.next() |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| 	return l.errorf("unclosed table key") |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) lexRightBracket() tomlLexStateFn { |  | ||||||
| 	l.next() |  | ||||||
| 	l.emit(tokenRightBracket) |  | ||||||
| 	return l.lexRvalue |  | ||||||
| } |  | ||||||
|  |  | ||||||
| type validRuneFn func(r rune) bool |  | ||||||
|  |  | ||||||
| func isValidHexRune(r rune) bool { |  | ||||||
| 	return r >= 'a' && r <= 'f' || |  | ||||||
| 		r >= 'A' && r <= 'F' || |  | ||||||
| 		r >= '0' && r <= '9' || |  | ||||||
| 		r == '_' |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func isValidOctalRune(r rune) bool { |  | ||||||
| 	return r >= '0' && r <= '7' || r == '_' |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func isValidBinaryRune(r rune) bool { |  | ||||||
| 	return r == '0' || r == '1' || r == '_' |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) lexNumber() tomlLexStateFn { |  | ||||||
| 	r := l.peek() |  | ||||||
|  |  | ||||||
| 	if r == '0' { |  | ||||||
| 		follow := l.peekString(2) |  | ||||||
| 		if len(follow) == 2 { |  | ||||||
| 			var isValidRune validRuneFn |  | ||||||
| 			switch follow[1] { |  | ||||||
| 			case 'x': |  | ||||||
| 				isValidRune = isValidHexRune |  | ||||||
| 			case 'o': |  | ||||||
| 				isValidRune = isValidOctalRune |  | ||||||
| 			case 'b': |  | ||||||
| 				isValidRune = isValidBinaryRune |  | ||||||
| 			default: |  | ||||||
| 				if follow[1] >= 'a' && follow[1] <= 'z' || follow[1] >= 'A' && follow[1] <= 'Z' { |  | ||||||
| 					return l.errorf("unknown number base: %s. possible options are x (hex) o (octal) b (binary)", string(follow[1])) |  | ||||||
| 				} |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			if isValidRune != nil { |  | ||||||
| 				l.next() |  | ||||||
| 				l.next() |  | ||||||
| 				digitSeen := false |  | ||||||
| 				for { |  | ||||||
| 					next := l.peek() |  | ||||||
| 					if !isValidRune(next) { |  | ||||||
| 						break |  | ||||||
| 					} |  | ||||||
| 					digitSeen = true |  | ||||||
| 					l.next() |  | ||||||
| 				} |  | ||||||
|  |  | ||||||
| 				if !digitSeen { |  | ||||||
| 					return l.errorf("number needs at least one digit") |  | ||||||
| 				} |  | ||||||
|  |  | ||||||
| 				l.emit(tokenInteger) |  | ||||||
|  |  | ||||||
| 				return l.lexRvalue |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	if r == '+' || r == '-' { |  | ||||||
| 		l.next() |  | ||||||
| 		if l.follow("inf") { |  | ||||||
| 			return l.lexInf |  | ||||||
| 		} |  | ||||||
| 		if l.follow("nan") { |  | ||||||
| 			return l.lexNan |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	pointSeen := false |  | ||||||
| 	expSeen := false |  | ||||||
| 	digitSeen := false |  | ||||||
| 	for { |  | ||||||
| 		next := l.peek() |  | ||||||
| 		if next == '.' { |  | ||||||
| 			if pointSeen { |  | ||||||
| 				return l.errorf("cannot have two dots in one float") |  | ||||||
| 			} |  | ||||||
| 			l.next() |  | ||||||
| 			if !isDigit(l.peek()) { |  | ||||||
| 				return l.errorf("float cannot end with a dot") |  | ||||||
| 			} |  | ||||||
| 			pointSeen = true |  | ||||||
| 		} else if next == 'e' || next == 'E' { |  | ||||||
| 			expSeen = true |  | ||||||
| 			l.next() |  | ||||||
| 			r := l.peek() |  | ||||||
| 			if r == '+' || r == '-' { |  | ||||||
| 				l.next() |  | ||||||
| 			} |  | ||||||
| 		} else if isDigit(next) { |  | ||||||
| 			digitSeen = true |  | ||||||
| 			l.next() |  | ||||||
| 		} else if next == '_' { |  | ||||||
| 			l.next() |  | ||||||
| 		} else { |  | ||||||
| 			break |  | ||||||
| 		} |  | ||||||
| 		if pointSeen && !digitSeen { |  | ||||||
| 			return l.errorf("cannot start float with a dot") |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	if !digitSeen { |  | ||||||
| 		return l.errorf("no digit in that number") |  | ||||||
| 	} |  | ||||||
| 	if pointSeen || expSeen { |  | ||||||
| 		l.emit(tokenFloat) |  | ||||||
| 	} else { |  | ||||||
| 		l.emit(tokenInteger) |  | ||||||
| 	} |  | ||||||
| 	return l.lexRvalue |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (l *tomlLexer) run() { |  | ||||||
| 	for state := l.lexVoid; state != nil; { |  | ||||||
| 		state = state() |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func init() { |  | ||||||
| 	dateRegexp = regexp.MustCompile(`^\d{1,4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d{1,9})?(Z|[+-]\d{2}:\d{2})`) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Entry point |  | ||||||
| func lexToml(inputBytes []byte) []token { |  | ||||||
| 	runes := bytes.Runes(inputBytes) |  | ||||||
| 	l := &tomlLexer{ |  | ||||||
| 		input:         runes, |  | ||||||
| 		tokens:        make([]token, 0, 256), |  | ||||||
| 		line:          1, |  | ||||||
| 		col:           1, |  | ||||||
| 		endbufferLine: 1, |  | ||||||
| 		endbufferCol:  1, |  | ||||||
| 	} |  | ||||||
| 	l.run() |  | ||||||
| 	return l.tokens |  | ||||||
| } |  | ||||||
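For orientation, the lexer above follows the state-function pattern referenced in the file header comment: run() loops until a state function returns nil, and lexToml hands back the flat token slice. The following is a minimal in-package sketch of walking that slice, using only identifiers visible in this file (token, tokenError, lexToml); the dumpTokens helper itself is invented for illustration.

// dumpTokens prints every token produced by the lexer, stopping at the
// first error token. Sketch only; not part of the original file.
func dumpTokens(doc []byte) {
	for _, tok := range lexToml(doc) {
		if tok.typ == tokenError {
			// tokenError carries the formatted message in val.
			fmt.Printf("lex error at %s: %s\n", tok.Position, tok.val)
			return
		}
		fmt.Printf("%s %q\n", tok.Position, tok.val)
	}
}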
							
								
								
									
609 vendor/github.com/pelletier/go-toml/marshal.go (generated, vendored)
| @@ -1,609 +0,0 @@ | |||||||
| package toml |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"bytes" |  | ||||||
| 	"errors" |  | ||||||
| 	"fmt" |  | ||||||
| 	"io" |  | ||||||
| 	"reflect" |  | ||||||
| 	"strconv" |  | ||||||
| 	"strings" |  | ||||||
| 	"time" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| const tagKeyMultiline = "multiline" |  | ||||||
|  |  | ||||||
| type tomlOpts struct { |  | ||||||
| 	name      string |  | ||||||
| 	comment   string |  | ||||||
| 	commented bool |  | ||||||
| 	multiline bool |  | ||||||
| 	include   bool |  | ||||||
| 	omitempty bool |  | ||||||
| } |  | ||||||
|  |  | ||||||
| type encOpts struct { |  | ||||||
| 	quoteMapKeys            bool |  | ||||||
| 	arraysOneElementPerLine bool |  | ||||||
| } |  | ||||||
|  |  | ||||||
| var encOptsDefaults = encOpts{ |  | ||||||
| 	quoteMapKeys: false, |  | ||||||
| } |  | ||||||
|  |  | ||||||
| var timeType = reflect.TypeOf(time.Time{}) |  | ||||||
| var marshalerType = reflect.TypeOf(new(Marshaler)).Elem() |  | ||||||
|  |  | ||||||
| // Check if the given marshal type maps to a Tree primitive |  | ||||||
| func isPrimitive(mtype reflect.Type) bool { |  | ||||||
| 	switch mtype.Kind() { |  | ||||||
| 	case reflect.Ptr: |  | ||||||
| 		return isPrimitive(mtype.Elem()) |  | ||||||
| 	case reflect.Bool: |  | ||||||
| 		return true |  | ||||||
| 	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: |  | ||||||
| 		return true |  | ||||||
| 	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64: |  | ||||||
| 		return true |  | ||||||
| 	case reflect.Float32, reflect.Float64: |  | ||||||
| 		return true |  | ||||||
| 	case reflect.String: |  | ||||||
| 		return true |  | ||||||
| 	case reflect.Struct: |  | ||||||
| 		return mtype == timeType || isCustomMarshaler(mtype) |  | ||||||
| 	default: |  | ||||||
| 		return false |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Check if the given marshal type maps to a Tree slice |  | ||||||
| func isTreeSlice(mtype reflect.Type) bool { |  | ||||||
| 	switch mtype.Kind() { |  | ||||||
| 	case reflect.Slice: |  | ||||||
| 		return !isOtherSlice(mtype) |  | ||||||
| 	default: |  | ||||||
| 		return false |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Check if the given marshal type maps to a non-Tree slice |  | ||||||
| func isOtherSlice(mtype reflect.Type) bool { |  | ||||||
| 	switch mtype.Kind() { |  | ||||||
| 	case reflect.Ptr: |  | ||||||
| 		return isOtherSlice(mtype.Elem()) |  | ||||||
| 	case reflect.Slice: |  | ||||||
| 		return isPrimitive(mtype.Elem()) || isOtherSlice(mtype.Elem()) |  | ||||||
| 	default: |  | ||||||
| 		return false |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Check if the given marshal type maps to a Tree |  | ||||||
| func isTree(mtype reflect.Type) bool { |  | ||||||
| 	switch mtype.Kind() { |  | ||||||
| 	case reflect.Map: |  | ||||||
| 		return true |  | ||||||
| 	case reflect.Struct: |  | ||||||
| 		return !isPrimitive(mtype) |  | ||||||
| 	default: |  | ||||||
| 		return false |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func isCustomMarshaler(mtype reflect.Type) bool { |  | ||||||
| 	return mtype.Implements(marshalerType) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func callCustomMarshaler(mval reflect.Value) ([]byte, error) { |  | ||||||
| 	return mval.Interface().(Marshaler).MarshalTOML() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Marshaler is the interface implemented by types that |  | ||||||
| // can marshal themselves into valid TOML. |  | ||||||
| type Marshaler interface { |  | ||||||
| 	MarshalTOML() ([]byte, error) |  | ||||||
| } |  | ||||||
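As a hedged sketch of the interface above: a type takes over its own encoding by implementing MarshalTOML and returning the TOML fragment for its value as bytes. The Version type and its "major.minor" rendering are invented here; only the method shape comes from the interface definition.

// Version encodes itself as a quoted "major.minor" TOML string.
// Illustration only; not part of the original file.
type Version struct{ Major, Minor int }

func (v Version) MarshalTOML() ([]byte, error) {
	return []byte(fmt.Sprintf("%q", fmt.Sprintf("%d.%d", v.Major, v.Minor))), nil
}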
|  |  | ||||||
| /* |  | ||||||
| Marshal returns the TOML encoding of v.  Behavior is similar to the Go json |  | ||||||
| encoder, except that there is no concept of a Marshaler interface or MarshalTOML |  | ||||||
| function for sub-structs, and currently only definite types can be marshaled |  | ||||||
| (i.e. no `interface{}`). |  | ||||||
|  |  | ||||||
| The following struct annotations are supported: |  | ||||||
|  |  | ||||||
|   toml:"Field"      Overrides the field's name to output. |  | ||||||
|   omitempty         When set, empty values and groups are not emitted. |  | ||||||
|   comment:"comment" Emits a # comment on the same line. This supports new lines. |  | ||||||
|   commented:"true"  Emits the value as commented. |  | ||||||
|  |  | ||||||
| Note that pointers are automatically assigned the "omitempty" option, as TOML |  | ||||||
| explicitly does not handle null values (saying instead the label should be |  | ||||||
| dropped). |  | ||||||
|  |  | ||||||
| Tree structural types and corresponding marshal types: |  | ||||||
|  |  | ||||||
|   *Tree                            (*)struct, (*)map[string]interface{} |  | ||||||
|   []*Tree                          (*)[](*)struct, (*)[](*)map[string]interface{} |  | ||||||
|   []interface{} (as interface{})   (*)[]primitive, (*)[]([]interface{}) |  | ||||||
|   interface{}                      (*)primitive |  | ||||||
|  |  | ||||||
| Tree primitive types and corresponding marshal types: |  | ||||||
|  |  | ||||||
|   uint64     uint, uint8-uint64, pointers to same |  | ||||||
|   int64      int, int8-int64, pointers to same |  | ||||||
|   float64    float32, float64, pointers to same |  | ||||||
|   string     string, pointers to same |  | ||||||
|   bool       bool, pointers to same |  | ||||||
|   time.Time  time.Time{}, pointers to same |  | ||||||
| */ |  | ||||||
| func Marshal(v interface{}) ([]byte, error) { |  | ||||||
| 	return NewEncoder(nil).marshal(v) |  | ||||||
| } |  | ||||||
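A minimal usage sketch of Marshal and the annotations listed in the comment above; the Config type, its fields, and the tag values are invented for illustration, while the import path matches the vendored location of this package.

package main

import (
	"fmt"

	toml "github.com/pelletier/go-toml"
)

type Config struct {
	Name    string `toml:"name" comment:"service name"` // emitted with a # comment
	Port    int    `toml:"port"`
	Debug   bool   `toml:"debug,omitempty"` // dropped while it holds the zero value
	Ignored string `toml:"-"`               // never emitted
}

func main() {
	out, err := toml.Marshal(Config{Name: "api", Port: 8080})
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}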
|  |  | ||||||
| // Encoder writes TOML values to an output stream. |  | ||||||
| type Encoder struct { |  | ||||||
| 	w io.Writer |  | ||||||
| 	encOpts |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // NewEncoder returns a new encoder that writes to w. |  | ||||||
| func NewEncoder(w io.Writer) *Encoder { |  | ||||||
| 	return &Encoder{ |  | ||||||
| 		w:       w, |  | ||||||
| 		encOpts: encOptsDefaults, |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Encode writes the TOML encoding of v to the stream. |  | ||||||
| // |  | ||||||
| // See the documentation for Marshal for details. |  | ||||||
| func (e *Encoder) Encode(v interface{}) error { |  | ||||||
| 	b, err := e.marshal(v) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return err |  | ||||||
| 	} |  | ||||||
| 	if _, err := e.w.Write(b); err != nil { |  | ||||||
| 		return err |  | ||||||
| 	} |  | ||||||
| 	return nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // QuoteMapKeys sets up the encoder to encode |  | ||||||
| // maps with string type keys with quoted TOML keys. |  | ||||||
| // |  | ||||||
| // This relieves the character limitations on map keys. |  | ||||||
| func (e *Encoder) QuoteMapKeys(v bool) *Encoder { |  | ||||||
| 	e.quoteMapKeys = v |  | ||||||
| 	return e |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ArraysWithOneElementPerLine sets up the encoder to encode arrays |  | ||||||
| // with more than one element on multiple lines instead of one. |  | ||||||
| // |  | ||||||
| // For example: |  | ||||||
| // |  | ||||||
| //   A = [1,2,3] |  | ||||||
| // |  | ||||||
| // Becomes |  | ||||||
| // |  | ||||||
| //   A = [ |  | ||||||
| //     1, |  | ||||||
| //     2, |  | ||||||
| //     3, |  | ||||||
| //   ] |  | ||||||
| func (e *Encoder) ArraysWithOneElementPerLine(v bool) *Encoder { |  | ||||||
| 	e.arraysOneElementPerLine = v |  | ||||||
| 	return e |  | ||||||
| } |  | ||||||
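Both encoder options above chain off NewEncoder. A short sketch writing to stdout, with an invented anonymous struct standing in for a real document:

package main

import (
	"log"
	"os"

	toml "github.com/pelletier/go-toml"
)

func main() {
	doc := struct {
		Tags []string `toml:"tags"`
	}{Tags: []string{"a", "b", "c"}}

	// Options return *Encoder, so they can be chained before Encode.
	enc := toml.NewEncoder(os.Stdout).
		QuoteMapKeys(true).
		ArraysWithOneElementPerLine(true)
	if err := enc.Encode(doc); err != nil {
		log.Fatal(err)
	}
}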
|  |  | ||||||
| func (e *Encoder) marshal(v interface{}) ([]byte, error) { |  | ||||||
| 	mtype := reflect.TypeOf(v) |  | ||||||
| 	if mtype.Kind() != reflect.Struct { |  | ||||||
| 		return []byte{}, errors.New("Only a struct can be marshaled to TOML") |  | ||||||
| 	} |  | ||||||
| 	sval := reflect.ValueOf(v) |  | ||||||
| 	if isCustomMarshaler(mtype) { |  | ||||||
| 		return callCustomMarshaler(sval) |  | ||||||
| 	} |  | ||||||
| 	t, err := e.valueToTree(mtype, sval) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return []byte{}, err |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	var buf bytes.Buffer |  | ||||||
| 	_, err = t.writeTo(&buf, "", "", 0, e.arraysOneElementPerLine) |  | ||||||
|  |  | ||||||
| 	return buf.Bytes(), err |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Convert given marshal struct or map value to toml tree |  | ||||||
| func (e *Encoder) valueToTree(mtype reflect.Type, mval reflect.Value) (*Tree, error) { |  | ||||||
| 	if mtype.Kind() == reflect.Ptr { |  | ||||||
| 		return e.valueToTree(mtype.Elem(), mval.Elem()) |  | ||||||
| 	} |  | ||||||
| 	tval := newTree() |  | ||||||
| 	switch mtype.Kind() { |  | ||||||
| 	case reflect.Struct: |  | ||||||
| 		for i := 0; i < mtype.NumField(); i++ { |  | ||||||
| 			mtypef, mvalf := mtype.Field(i), mval.Field(i) |  | ||||||
| 			opts := tomlOptions(mtypef) |  | ||||||
| 			if opts.include && (!opts.omitempty || !isZero(mvalf)) { |  | ||||||
| 				val, err := e.valueToToml(mtypef.Type, mvalf) |  | ||||||
| 				if err != nil { |  | ||||||
| 					return nil, err |  | ||||||
| 				} |  | ||||||
|  |  | ||||||
| 				tval.SetWithOptions(opts.name, SetOptions{ |  | ||||||
| 					Comment:   opts.comment, |  | ||||||
| 					Commented: opts.commented, |  | ||||||
| 					Multiline: opts.multiline, |  | ||||||
| 				}, val) |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
| 	case reflect.Map: |  | ||||||
| 		for _, key := range mval.MapKeys() { |  | ||||||
| 			mvalf := mval.MapIndex(key) |  | ||||||
| 			val, err := e.valueToToml(mtype.Elem(), mvalf) |  | ||||||
| 			if err != nil { |  | ||||||
| 				return nil, err |  | ||||||
| 			} |  | ||||||
| 			if e.quoteMapKeys { |  | ||||||
| 				keyStr, err := tomlValueStringRepresentation(key.String(), "", e.arraysOneElementPerLine) |  | ||||||
| 				if err != nil { |  | ||||||
| 					return nil, err |  | ||||||
| 				} |  | ||||||
| 				tval.SetPath([]string{keyStr}, val) |  | ||||||
| 			} else { |  | ||||||
| 				tval.Set(key.String(), val) |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| 	return tval, nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Convert given marshal slice to slice of Toml trees |  | ||||||
| func (e *Encoder) valueToTreeSlice(mtype reflect.Type, mval reflect.Value) ([]*Tree, error) { |  | ||||||
| 	tval := make([]*Tree, mval.Len(), mval.Len()) |  | ||||||
| 	for i := 0; i < mval.Len(); i++ { |  | ||||||
| 		val, err := e.valueToTree(mtype.Elem(), mval.Index(i)) |  | ||||||
| 		if err != nil { |  | ||||||
| 			return nil, err |  | ||||||
| 		} |  | ||||||
| 		tval[i] = val |  | ||||||
| 	} |  | ||||||
| 	return tval, nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Convert given marshal slice to slice of toml values |  | ||||||
| func (e *Encoder) valueToOtherSlice(mtype reflect.Type, mval reflect.Value) (interface{}, error) { |  | ||||||
| 	tval := make([]interface{}, mval.Len(), mval.Len()) |  | ||||||
| 	for i := 0; i < mval.Len(); i++ { |  | ||||||
| 		val, err := e.valueToToml(mtype.Elem(), mval.Index(i)) |  | ||||||
| 		if err != nil { |  | ||||||
| 			return nil, err |  | ||||||
| 		} |  | ||||||
| 		tval[i] = val |  | ||||||
| 	} |  | ||||||
| 	return tval, nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Convert given marshal value to toml value |  | ||||||
| func (e *Encoder) valueToToml(mtype reflect.Type, mval reflect.Value) (interface{}, error) { |  | ||||||
| 	if mtype.Kind() == reflect.Ptr { |  | ||||||
| 		return e.valueToToml(mtype.Elem(), mval.Elem()) |  | ||||||
| 	} |  | ||||||
| 	switch { |  | ||||||
| 	case isCustomMarshaler(mtype): |  | ||||||
| 		return callCustomMarshaler(mval) |  | ||||||
| 	case isTree(mtype): |  | ||||||
| 		return e.valueToTree(mtype, mval) |  | ||||||
| 	case isTreeSlice(mtype): |  | ||||||
| 		return e.valueToTreeSlice(mtype, mval) |  | ||||||
| 	case isOtherSlice(mtype): |  | ||||||
| 		return e.valueToOtherSlice(mtype, mval) |  | ||||||
| 	default: |  | ||||||
| 		switch mtype.Kind() { |  | ||||||
| 		case reflect.Bool: |  | ||||||
| 			return mval.Bool(), nil |  | ||||||
| 		case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: |  | ||||||
| 			return mval.Int(), nil |  | ||||||
| 		case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64: |  | ||||||
| 			return mval.Uint(), nil |  | ||||||
| 		case reflect.Float32, reflect.Float64: |  | ||||||
| 			return mval.Float(), nil |  | ||||||
| 		case reflect.String: |  | ||||||
| 			return mval.String(), nil |  | ||||||
| 		case reflect.Struct: |  | ||||||
| 			return mval.Interface().(time.Time), nil |  | ||||||
| 		default: |  | ||||||
| 			return nil, fmt.Errorf("Marshal can't handle %v(%v)", mtype, mtype.Kind()) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Unmarshal attempts to unmarshal the Tree into a Go struct pointed to by v. |  | ||||||
| // Neither Unmarshaler interfaces nor UnmarshalTOML functions are supported for |  | ||||||
| // sub-structs, and only definite types can be unmarshaled. |  | ||||||
| func (t *Tree) Unmarshal(v interface{}) error { |  | ||||||
| 	d := Decoder{tval: t} |  | ||||||
| 	return d.unmarshal(v) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Marshal returns the TOML encoding of Tree. |  | ||||||
| // See Marshal() documentation for types mapping table. |  | ||||||
| func (t *Tree) Marshal() ([]byte, error) { |  | ||||||
| 	var buf bytes.Buffer |  | ||||||
| 	err := NewEncoder(&buf).Encode(t) |  | ||||||
| 	return buf.Bytes(), err |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Unmarshal parses the TOML-encoded data and stores the result in the value |  | ||||||
| // pointed to by v. Behavior is similar to the Go json encoder, except that there |  | ||||||
| // is no concept of an Unmarshaler interface or UnmarshalTOML function for |  | ||||||
| // sub-structs, and currently only definite types can be unmarshaled to (i.e. no |  | ||||||
| // `interface{}`). |  | ||||||
| // |  | ||||||
| // The following struct annotations are supported: |  | ||||||
| // |  | ||||||
| //   toml:"Field" Overrides the field's name to map to. |  | ||||||
| // |  | ||||||
| // See Marshal() documentation for types mapping table. |  | ||||||
| func Unmarshal(data []byte, v interface{}) error { |  | ||||||
| 	t, err := LoadReader(bytes.NewReader(data)) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return err |  | ||||||
| 	} |  | ||||||
| 	return t.Unmarshal(v) |  | ||||||
| } |  | ||||||
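The reverse direction, matching the Unmarshal documentation above; the Server type, its field names, and the inline TOML document are invented for illustration.

package main

import (
	"fmt"

	toml "github.com/pelletier/go-toml"
)

type Server struct {
	Host string `toml:"host"`
	Port int    `toml:"port"`
}

func main() {
	data := []byte("host = \"localhost\"\nport = 8080\n")

	var s Server
	if err := toml.Unmarshal(data, &s); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", s) // expected: {Host:localhost Port:8080}
}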
|  |  | ||||||
| // Decoder reads and decodes TOML values from an input stream. |  | ||||||
| type Decoder struct { |  | ||||||
| 	r    io.Reader |  | ||||||
| 	tval *Tree |  | ||||||
| 	encOpts |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // NewDecoder returns a new decoder that reads from r. |  | ||||||
| func NewDecoder(r io.Reader) *Decoder { |  | ||||||
| 	return &Decoder{ |  | ||||||
| 		r:       r, |  | ||||||
| 		encOpts: encOptsDefaults, |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Decode reads a TOML-encoded value from its input |  | ||||||
| // and unmarshals it in the value pointed at by v. |  | ||||||
| // |  | ||||||
| // See the documentation for Marshal for details. |  | ||||||
| func (d *Decoder) Decode(v interface{}) error { |  | ||||||
| 	var err error |  | ||||||
| 	d.tval, err = LoadReader(d.r) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return err |  | ||||||
| 	} |  | ||||||
| 	return d.unmarshal(v) |  | ||||||
| } |  | ||||||
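When the TOML arrives as a stream rather than a byte slice, Decoder wraps the same unmarshal path shown above. A sketch reading a hypothetical config.toml from disk (path and struct invented):

package main

import (
	"fmt"
	"log"
	"os"

	toml "github.com/pelletier/go-toml"
)

func main() {
	f, err := os.Open("config.toml") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var cfg struct {
		Host string `toml:"host"`
	}
	if err := toml.NewDecoder(f).Decode(&cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println(cfg.Host)
}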
|  |  | ||||||
| func (d *Decoder) unmarshal(v interface{}) error { |  | ||||||
| 	mtype := reflect.TypeOf(v) |  | ||||||
| 	if mtype.Kind() != reflect.Ptr || mtype.Elem().Kind() != reflect.Struct { |  | ||||||
| 		return errors.New("Only a pointer to struct can be unmarshaled from TOML") |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	sval, err := d.valueFromTree(mtype.Elem(), d.tval) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return err |  | ||||||
| 	} |  | ||||||
| 	reflect.ValueOf(v).Elem().Set(sval) |  | ||||||
| 	return nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Convert toml tree to marshal struct or map, using marshal type |  | ||||||
| func (d *Decoder) valueFromTree(mtype reflect.Type, tval *Tree) (reflect.Value, error) { |  | ||||||
| 	if mtype.Kind() == reflect.Ptr { |  | ||||||
| 		return d.unwrapPointer(mtype, tval) |  | ||||||
| 	} |  | ||||||
| 	var mval reflect.Value |  | ||||||
| 	switch mtype.Kind() { |  | ||||||
| 	case reflect.Struct: |  | ||||||
| 		mval = reflect.New(mtype).Elem() |  | ||||||
| 		for i := 0; i < mtype.NumField(); i++ { |  | ||||||
| 			mtypef := mtype.Field(i) |  | ||||||
| 			opts := tomlOptions(mtypef) |  | ||||||
| 			if opts.include { |  | ||||||
| 				baseKey := opts.name |  | ||||||
| 				keysToTry := []string{baseKey, strings.ToLower(baseKey), strings.ToTitle(baseKey)} |  | ||||||
| 				for _, key := range keysToTry { |  | ||||||
| 					exists := tval.Has(key) |  | ||||||
| 					if !exists { |  | ||||||
| 						continue |  | ||||||
| 					} |  | ||||||
| 					val := tval.Get(key) |  | ||||||
| 					mvalf, err := d.valueFromToml(mtypef.Type, val) |  | ||||||
| 					if err != nil { |  | ||||||
| 						return mval, formatError(err, tval.GetPosition(key)) |  | ||||||
| 					} |  | ||||||
| 					mval.Field(i).Set(mvalf) |  | ||||||
| 					break |  | ||||||
| 				} |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
| 	case reflect.Map: |  | ||||||
| 		mval = reflect.MakeMap(mtype) |  | ||||||
| 		for _, key := range tval.Keys() { |  | ||||||
| 			// TODO: path splits key |  | ||||||
| 			val := tval.GetPath([]string{key}) |  | ||||||
| 			mvalf, err := d.valueFromToml(mtype.Elem(), val) |  | ||||||
| 			if err != nil { |  | ||||||
| 				return mval, formatError(err, tval.GetPosition(key)) |  | ||||||
| 			} |  | ||||||
| 			mval.SetMapIndex(reflect.ValueOf(key), mvalf) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| 	return mval, nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Convert toml value to marshal struct/map slice, using marshal type |  | ||||||
| func (d *Decoder) valueFromTreeSlice(mtype reflect.Type, tval []*Tree) (reflect.Value, error) { |  | ||||||
| 	mval := reflect.MakeSlice(mtype, len(tval), len(tval)) |  | ||||||
| 	for i := 0; i < len(tval); i++ { |  | ||||||
| 		val, err := d.valueFromTree(mtype.Elem(), tval[i]) |  | ||||||
| 		if err != nil { |  | ||||||
| 			return mval, err |  | ||||||
| 		} |  | ||||||
| 		mval.Index(i).Set(val) |  | ||||||
| 	} |  | ||||||
| 	return mval, nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Convert toml value to marshal primitive slice, using marshal type |  | ||||||
| func (d *Decoder) valueFromOtherSlice(mtype reflect.Type, tval []interface{}) (reflect.Value, error) { |  | ||||||
| 	mval := reflect.MakeSlice(mtype, len(tval), len(tval)) |  | ||||||
| 	for i := 0; i < len(tval); i++ { |  | ||||||
| 		val, err := d.valueFromToml(mtype.Elem(), tval[i]) |  | ||||||
| 		if err != nil { |  | ||||||
| 			return mval, err |  | ||||||
| 		} |  | ||||||
| 		mval.Index(i).Set(val) |  | ||||||
| 	} |  | ||||||
| 	return mval, nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Convert toml value to marshal value, using marshal type |  | ||||||
| func (d *Decoder) valueFromToml(mtype reflect.Type, tval interface{}) (reflect.Value, error) { |  | ||||||
| 	if mtype.Kind() == reflect.Ptr { |  | ||||||
| 		return d.unwrapPointer(mtype, tval) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	switch tval.(type) { |  | ||||||
| 	case *Tree: |  | ||||||
| 		if isTree(mtype) { |  | ||||||
| 			return d.valueFromTree(mtype, tval.(*Tree)) |  | ||||||
| 		} |  | ||||||
| 		return reflect.ValueOf(nil), fmt.Errorf("Can't convert %v(%T) to a tree", tval, tval) |  | ||||||
| 	case []*Tree: |  | ||||||
| 		if isTreeSlice(mtype) { |  | ||||||
| 			return d.valueFromTreeSlice(mtype, tval.([]*Tree)) |  | ||||||
| 		} |  | ||||||
| 		return reflect.ValueOf(nil), fmt.Errorf("Can't convert %v(%T) to trees", tval, tval) |  | ||||||
| 	case []interface{}: |  | ||||||
| 		if isOtherSlice(mtype) { |  | ||||||
| 			return d.valueFromOtherSlice(mtype, tval.([]interface{})) |  | ||||||
| 		} |  | ||||||
| 		return reflect.ValueOf(nil), fmt.Errorf("Can't convert %v(%T) to a slice", tval, tval) |  | ||||||
| 	default: |  | ||||||
| 		switch mtype.Kind() { |  | ||||||
| 		case reflect.Bool, reflect.Struct: |  | ||||||
| 			val := reflect.ValueOf(tval) |  | ||||||
| 			// if this passes for when mtype is reflect.Struct, tval is a time.Time |  | ||||||
| 			if !val.Type().ConvertibleTo(mtype) { |  | ||||||
| 				return reflect.ValueOf(nil), fmt.Errorf("Can't convert %v(%T) to %v", tval, tval, mtype.String()) |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			return val.Convert(mtype), nil |  | ||||||
| 		case reflect.String: |  | ||||||
| 			val := reflect.ValueOf(tval) |  | ||||||
| 			// stupidly, int64 is convertible to string. So special case this. |  | ||||||
| 			if !val.Type().ConvertibleTo(mtype) || val.Kind() == reflect.Int64 { |  | ||||||
| 				return reflect.ValueOf(nil), fmt.Errorf("Can't convert %v(%T) to %v", tval, tval, mtype.String()) |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			return val.Convert(mtype), nil |  | ||||||
| 		case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: |  | ||||||
| 			val := reflect.ValueOf(tval) |  | ||||||
| 			if !val.Type().ConvertibleTo(mtype) { |  | ||||||
| 				return reflect.ValueOf(nil), fmt.Errorf("Can't convert %v(%T) to %v", tval, tval, mtype.String()) |  | ||||||
| 			} |  | ||||||
| 			if reflect.Indirect(reflect.New(mtype)).OverflowInt(val.Int()) { |  | ||||||
| 				return reflect.ValueOf(nil), fmt.Errorf("%v(%T) would overflow %v", tval, tval, mtype.String()) |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			return val.Convert(mtype), nil |  | ||||||
| 		case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: |  | ||||||
| 			val := reflect.ValueOf(tval) |  | ||||||
| 			if !val.Type().ConvertibleTo(mtype) { |  | ||||||
| 				return reflect.ValueOf(nil), fmt.Errorf("Can't convert %v(%T) to %v", tval, tval, mtype.String()) |  | ||||||
| 			} |  | ||||||
| 			if val.Int() < 0 { |  | ||||||
| 				return reflect.ValueOf(nil), fmt.Errorf("%v(%T) is negative so does not fit in %v", tval, tval, mtype.String()) |  | ||||||
| 			} |  | ||||||
| 			if reflect.Indirect(reflect.New(mtype)).OverflowUint(uint64(val.Int())) { |  | ||||||
| 				return reflect.ValueOf(nil), fmt.Errorf("%v(%T) would overflow %v", tval, tval, mtype.String()) |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			return val.Convert(mtype), nil |  | ||||||
| 		case reflect.Float32, reflect.Float64: |  | ||||||
| 			val := reflect.ValueOf(tval) |  | ||||||
| 			if !val.Type().ConvertibleTo(mtype) { |  | ||||||
| 				return reflect.ValueOf(nil), fmt.Errorf("Can't convert %v(%T) to %v", tval, tval, mtype.String()) |  | ||||||
| 			} |  | ||||||
| 			if reflect.Indirect(reflect.New(mtype)).OverflowFloat(val.Float()) { |  | ||||||
| 				return reflect.ValueOf(nil), fmt.Errorf("%v(%T) would overflow %v", tval, tval, mtype.String()) |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			return val.Convert(mtype), nil |  | ||||||
| 		default: |  | ||||||
| 			return reflect.ValueOf(nil), fmt.Errorf("Can't convert %v(%T) to %v(%v)", tval, tval, mtype, mtype.Kind()) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (d *Decoder) unwrapPointer(mtype reflect.Type, tval interface{}) (reflect.Value, error) { |  | ||||||
| 	val, err := d.valueFromToml(mtype.Elem(), tval) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return reflect.ValueOf(nil), err |  | ||||||
| 	} |  | ||||||
| 	mval := reflect.New(mtype.Elem()) |  | ||||||
| 	mval.Elem().Set(val) |  | ||||||
| 	return mval, nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func tomlOptions(vf reflect.StructField) tomlOpts { |  | ||||||
| 	tag := vf.Tag.Get("toml") |  | ||||||
| 	parse := strings.Split(tag, ",") |  | ||||||
| 	var comment string |  | ||||||
| 	if c := vf.Tag.Get("comment"); c != "" { |  | ||||||
| 		comment = c |  | ||||||
| 	} |  | ||||||
| 	commented, _ := strconv.ParseBool(vf.Tag.Get("commented")) |  | ||||||
| 	multiline, _ := strconv.ParseBool(vf.Tag.Get(tagKeyMultiline)) |  | ||||||
| 	result := tomlOpts{name: vf.Name, comment: comment, commented: commented, multiline: multiline, include: true, omitempty: false} |  | ||||||
| 	if parse[0] != "" { |  | ||||||
| 		if parse[0] == "-" && len(parse) == 1 { |  | ||||||
| 			result.include = false |  | ||||||
| 		} else { |  | ||||||
| 			result.name = strings.Trim(parse[0], " ") |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| 	if vf.PkgPath != "" { |  | ||||||
| 		result.include = false |  | ||||||
| 	} |  | ||||||
| 	if len(parse) > 1 && strings.Trim(parse[1], " ") == "omitempty" { |  | ||||||
| 		result.omitempty = true |  | ||||||
| 	} |  | ||||||
| 	if vf.Type.Kind() == reflect.Ptr { |  | ||||||
| 		result.omitempty = true |  | ||||||
| 	} |  | ||||||
| 	return result |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func isZero(val reflect.Value) bool { |  | ||||||
| 	switch val.Type().Kind() { |  | ||||||
| 	case reflect.Map: |  | ||||||
| 		fallthrough |  | ||||||
| 	case reflect.Array: |  | ||||||
| 		fallthrough |  | ||||||
| 	case reflect.Slice: |  | ||||||
| 		return val.Len() == 0 |  | ||||||
| 	default: |  | ||||||
| 		return reflect.DeepEqual(val.Interface(), reflect.Zero(val.Type()).Interface()) |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func formatError(err error, pos Position) error { |  | ||||||
| 	if err.Error()[0] == '(' { // Error already contains position information |  | ||||||
| 		return err |  | ||||||
| 	} |  | ||||||
| 	return fmt.Errorf("%s: %s", pos, err) |  | ||||||
| } |  | ||||||
							
								
								
									
38 vendor/github.com/pelletier/go-toml/marshal_test.toml (generated, vendored)
| @@ -1,38 +0,0 @@ | |||||||
| title = "TOML Marshal Testing" |  | ||||||
|  |  | ||||||
| [basic] |  | ||||||
|   bool = true |  | ||||||
|   date = 1979-05-27T07:32:00Z |  | ||||||
|   float = 123.4 |  | ||||||
|   int = 5000 |  | ||||||
|   string = "Bite me" |  | ||||||
|   uint = 5001 |  | ||||||
|  |  | ||||||
| [basic_lists] |  | ||||||
|   bools = [true,false,true] |  | ||||||
|   dates = [1979-05-27T07:32:00Z,1980-05-27T07:32:00Z] |  | ||||||
|   floats = [12.3,45.6,78.9] |  | ||||||
|   ints = [8001,8001,8002] |  | ||||||
|   strings = ["One","Two","Three"] |  | ||||||
|   uints = [5002,5003] |  | ||||||
|  |  | ||||||
| [basic_map] |  | ||||||
|   one = "one" |  | ||||||
|   two = "two" |  | ||||||
|  |  | ||||||
| [subdoc] |  | ||||||
|  |  | ||||||
|   [subdoc.first] |  | ||||||
|     name = "First" |  | ||||||
|  |  | ||||||
|   [subdoc.second] |  | ||||||
|     name = "Second" |  | ||||||
|  |  | ||||||
| [[subdoclist]] |  | ||||||
|   name = "List.First" |  | ||||||
|  |  | ||||||
| [[subdoclist]] |  | ||||||
|   name = "List.Second" |  | ||||||
|  |  | ||||||
| [[subdocptrs]] |  | ||||||
|   name = "Second" |  | ||||||
							
								
								
									
430 vendor/github.com/pelletier/go-toml/parser.go (generated, vendored)
| @@ -1,430 +0,0 @@ | |||||||
| // TOML Parser. |  | ||||||
|  |  | ||||||
| package toml |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"errors" |  | ||||||
| 	"fmt" |  | ||||||
| 	"math" |  | ||||||
| 	"reflect" |  | ||||||
| 	"regexp" |  | ||||||
| 	"strconv" |  | ||||||
| 	"strings" |  | ||||||
| 	"time" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| type tomlParser struct { |  | ||||||
| 	flowIdx       int |  | ||||||
| 	flow          []token |  | ||||||
| 	tree          *Tree |  | ||||||
| 	currentTable  []string |  | ||||||
| 	seenTableKeys []string |  | ||||||
| } |  | ||||||
|  |  | ||||||
| type tomlParserStateFn func() tomlParserStateFn |  | ||||||
|  |  | ||||||
| // Formats an error message based on a token and panics |  | ||||||
| func (p *tomlParser) raiseError(tok *token, msg string, args ...interface{}) { |  | ||||||
| 	panic(tok.Position.String() + ": " + fmt.Sprintf(msg, args...)) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (p *tomlParser) run() { |  | ||||||
| 	for state := p.parseStart; state != nil; { |  | ||||||
| 		state = state() |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (p *tomlParser) peek() *token { |  | ||||||
| 	if p.flowIdx >= len(p.flow) { |  | ||||||
| 		return nil |  | ||||||
| 	} |  | ||||||
| 	return &p.flow[p.flowIdx] |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (p *tomlParser) assume(typ tokenType) { |  | ||||||
| 	tok := p.getToken() |  | ||||||
| 	if tok == nil { |  | ||||||
| 		p.raiseError(tok, "was expecting token %s, but token stream is empty", tok) |  | ||||||
| 	} |  | ||||||
| 	if tok.typ != typ { |  | ||||||
| 		p.raiseError(tok, "was expecting token %s, but got %s instead", typ, tok) |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (p *tomlParser) getToken() *token { |  | ||||||
| 	tok := p.peek() |  | ||||||
| 	if tok == nil { |  | ||||||
| 		return nil |  | ||||||
| 	} |  | ||||||
| 	p.flowIdx++ |  | ||||||
| 	return tok |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (p *tomlParser) parseStart() tomlParserStateFn { |  | ||||||
| 	tok := p.peek() |  | ||||||
|  |  | ||||||
| 	// end of stream, parsing is finished |  | ||||||
| 	if tok == nil { |  | ||||||
| 		return nil |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	switch tok.typ { |  | ||||||
| 	case tokenDoubleLeftBracket: |  | ||||||
| 		return p.parseGroupArray |  | ||||||
| 	case tokenLeftBracket: |  | ||||||
| 		return p.parseGroup |  | ||||||
| 	case tokenKey: |  | ||||||
| 		return p.parseAssign |  | ||||||
| 	case tokenEOF: |  | ||||||
| 		return nil |  | ||||||
| 	default: |  | ||||||
| 		p.raiseError(tok, "unexpected token") |  | ||||||
| 	} |  | ||||||
| 	return nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (p *tomlParser) parseGroupArray() tomlParserStateFn { |  | ||||||
| 	startToken := p.getToken() // discard the [[ |  | ||||||
| 	key := p.getToken() |  | ||||||
| 	if key.typ != tokenKeyGroupArray { |  | ||||||
| 		p.raiseError(key, "unexpected token %s, was expecting a table array key", key) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// get or create table array element at the indicated part in the path |  | ||||||
| 	keys, err := parseKey(key.val) |  | ||||||
| 	if err != nil { |  | ||||||
| 		p.raiseError(key, "invalid table array key: %s", err) |  | ||||||
| 	} |  | ||||||
| 	p.tree.createSubTree(keys[:len(keys)-1], startToken.Position) // create parent entries |  | ||||||
| 	destTree := p.tree.GetPath(keys) |  | ||||||
| 	var array []*Tree |  | ||||||
| 	if destTree == nil { |  | ||||||
| 		array = make([]*Tree, 0) |  | ||||||
| 	} else if target, ok := destTree.([]*Tree); ok && target != nil { |  | ||||||
| 		array = destTree.([]*Tree) |  | ||||||
| 	} else { |  | ||||||
| 		p.raiseError(key, "key %s is already assigned and not of type table array", key) |  | ||||||
| 	} |  | ||||||
| 	p.currentTable = keys |  | ||||||
|  |  | ||||||
| 	// add a new tree to the end of the table array |  | ||||||
| 	newTree := newTree() |  | ||||||
| 	newTree.position = startToken.Position |  | ||||||
| 	array = append(array, newTree) |  | ||||||
| 	p.tree.SetPath(p.currentTable, array) |  | ||||||
|  |  | ||||||
| 	// remove all keys that were children of this table array |  | ||||||
| 	prefix := key.val + "." |  | ||||||
| 	found := false |  | ||||||
| 	for ii := 0; ii < len(p.seenTableKeys); { |  | ||||||
| 		tableKey := p.seenTableKeys[ii] |  | ||||||
| 		if strings.HasPrefix(tableKey, prefix) { |  | ||||||
| 			p.seenTableKeys = append(p.seenTableKeys[:ii], p.seenTableKeys[ii+1:]...) |  | ||||||
| 		} else { |  | ||||||
| 			found = (tableKey == key.val) |  | ||||||
| 			ii++ |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// keep this key name from use by other kinds of assignments |  | ||||||
| 	if !found { |  | ||||||
| 		p.seenTableKeys = append(p.seenTableKeys, key.val) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// move to next parser state |  | ||||||
| 	p.assume(tokenDoubleRightBracket) |  | ||||||
| 	return p.parseStart |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (p *tomlParser) parseGroup() tomlParserStateFn { |  | ||||||
| 	startToken := p.getToken() // discard the [ |  | ||||||
| 	key := p.getToken() |  | ||||||
| 	if key.typ != tokenKeyGroup { |  | ||||||
| 		p.raiseError(key, "unexpected token %s, was expecting a table key", key) |  | ||||||
| 	} |  | ||||||
| 	for _, item := range p.seenTableKeys { |  | ||||||
| 		if item == key.val { |  | ||||||
| 			p.raiseError(key, "duplicated tables") |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	p.seenTableKeys = append(p.seenTableKeys, key.val) |  | ||||||
| 	keys, err := parseKey(key.val) |  | ||||||
| 	if err != nil { |  | ||||||
| 		p.raiseError(key, "invalid table array key: %s", err) |  | ||||||
| 	} |  | ||||||
| 	if err := p.tree.createSubTree(keys, startToken.Position); err != nil { |  | ||||||
| 		p.raiseError(key, "%s", err) |  | ||||||
| 	} |  | ||||||
| 	p.assume(tokenRightBracket) |  | ||||||
| 	p.currentTable = keys |  | ||||||
| 	return p.parseStart |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (p *tomlParser) parseAssign() tomlParserStateFn { |  | ||||||
| 	key := p.getToken() |  | ||||||
| 	p.assume(tokenEqual) |  | ||||||
|  |  | ||||||
| 	value := p.parseRvalue() |  | ||||||
| 	var tableKey []string |  | ||||||
| 	if len(p.currentTable) > 0 { |  | ||||||
| 		tableKey = p.currentTable |  | ||||||
| 	} else { |  | ||||||
| 		tableKey = []string{} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// find the table to assign, looking out for arrays of tables |  | ||||||
| 	var targetNode *Tree |  | ||||||
| 	switch node := p.tree.GetPath(tableKey).(type) { |  | ||||||
| 	case []*Tree: |  | ||||||
| 		targetNode = node[len(node)-1] |  | ||||||
| 	case *Tree: |  | ||||||
| 		targetNode = node |  | ||||||
| 	default: |  | ||||||
| 		p.raiseError(key, "Unknown table type for path: %s", |  | ||||||
| 			strings.Join(tableKey, ".")) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	// assign value to the found table |  | ||||||
| 	keyVals := []string{key.val} |  | ||||||
| 	if len(keyVals) != 1 { |  | ||||||
| 		p.raiseError(key, "Invalid key") |  | ||||||
| 	} |  | ||||||
| 	keyVal := keyVals[0] |  | ||||||
| 	localKey := []string{keyVal} |  | ||||||
| 	finalKey := append(tableKey, keyVal) |  | ||||||
| 	if targetNode.GetPath(localKey) != nil { |  | ||||||
| 		p.raiseError(key, "The following key was defined twice: %s", |  | ||||||
| 			strings.Join(finalKey, ".")) |  | ||||||
| 	} |  | ||||||
| 	var toInsert interface{} |  | ||||||
|  |  | ||||||
| 	switch value.(type) { |  | ||||||
| 	case *Tree, []*Tree: |  | ||||||
| 		toInsert = value |  | ||||||
| 	default: |  | ||||||
| 		toInsert = &tomlValue{value: value, position: key.Position} |  | ||||||
| 	} |  | ||||||
| 	targetNode.values[keyVal] = toInsert |  | ||||||
| 	return p.parseStart |  | ||||||
| } |  | ||||||
|  |  | ||||||
| var numberUnderscoreInvalidRegexp *regexp.Regexp |  | ||||||
| var hexNumberUnderscoreInvalidRegexp *regexp.Regexp |  | ||||||
|  |  | ||||||
| func numberContainsInvalidUnderscore(value string) error { |  | ||||||
| 	if numberUnderscoreInvalidRegexp.MatchString(value) { |  | ||||||
| 		return errors.New("invalid use of _ in number") |  | ||||||
| 	} |  | ||||||
| 	return nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func hexNumberContainsInvalidUnderscore(value string) error { |  | ||||||
| 	if hexNumberUnderscoreInvalidRegexp.MatchString(value) { |  | ||||||
| 		return errors.New("invalid use of _ in hex number") |  | ||||||
| 	} |  | ||||||
| 	return nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func cleanupNumberToken(value string) string { |  | ||||||
| 	cleanedVal := strings.Replace(value, "_", "", -1) |  | ||||||
| 	return cleanedVal |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (p *tomlParser) parseRvalue() interface{} { |  | ||||||
| 	tok := p.getToken() |  | ||||||
| 	if tok == nil || tok.typ == tokenEOF { |  | ||||||
| 		p.raiseError(tok, "expecting a value") |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	switch tok.typ { |  | ||||||
| 	case tokenString: |  | ||||||
| 		return tok.val |  | ||||||
| 	case tokenTrue: |  | ||||||
| 		return true |  | ||||||
| 	case tokenFalse: |  | ||||||
| 		return false |  | ||||||
| 	case tokenInf: |  | ||||||
| 		if tok.val[0] == '-' { |  | ||||||
| 			return math.Inf(-1) |  | ||||||
| 		} |  | ||||||
| 		return math.Inf(1) |  | ||||||
| 	case tokenNan: |  | ||||||
| 		return math.NaN() |  | ||||||
| 	case tokenInteger: |  | ||||||
| 		cleanedVal := cleanupNumberToken(tok.val) |  | ||||||
| 		var err error |  | ||||||
| 		var val int64 |  | ||||||
| 		if len(cleanedVal) >= 3 && cleanedVal[0] == '0' { |  | ||||||
| 			switch cleanedVal[1] { |  | ||||||
| 			case 'x': |  | ||||||
| 				err = hexNumberContainsInvalidUnderscore(tok.val) |  | ||||||
| 				if err != nil { |  | ||||||
| 					p.raiseError(tok, "%s", err) |  | ||||||
| 				} |  | ||||||
| 				val, err = strconv.ParseInt(cleanedVal[2:], 16, 64) |  | ||||||
| 			case 'o': |  | ||||||
| 				err = numberContainsInvalidUnderscore(tok.val) |  | ||||||
| 				if err != nil { |  | ||||||
| 					p.raiseError(tok, "%s", err) |  | ||||||
| 				} |  | ||||||
| 				val, err = strconv.ParseInt(cleanedVal[2:], 8, 64) |  | ||||||
| 			case 'b': |  | ||||||
| 				err = numberContainsInvalidUnderscore(tok.val) |  | ||||||
| 				if err != nil { |  | ||||||
| 					p.raiseError(tok, "%s", err) |  | ||||||
| 				} |  | ||||||
| 				val, err = strconv.ParseInt(cleanedVal[2:], 2, 64) |  | ||||||
| 			default: |  | ||||||
| 				panic("invalid base") // the lexer should catch this first |  | ||||||
| 			} |  | ||||||
| 		} else { |  | ||||||
| 			err = numberContainsInvalidUnderscore(tok.val) |  | ||||||
| 			if err != nil { |  | ||||||
| 				p.raiseError(tok, "%s", err) |  | ||||||
| 			} |  | ||||||
| 			val, err = strconv.ParseInt(cleanedVal, 10, 64) |  | ||||||
| 		} |  | ||||||
| 		if err != nil { |  | ||||||
| 			p.raiseError(tok, "%s", err) |  | ||||||
| 		} |  | ||||||
| 		return val |  | ||||||
| 	case tokenFloat: |  | ||||||
| 		err := numberContainsInvalidUnderscore(tok.val) |  | ||||||
| 		if err != nil { |  | ||||||
| 			p.raiseError(tok, "%s", err) |  | ||||||
| 		} |  | ||||||
| 		cleanedVal := cleanupNumberToken(tok.val) |  | ||||||
| 		val, err := strconv.ParseFloat(cleanedVal, 64) |  | ||||||
| 		if err != nil { |  | ||||||
| 			p.raiseError(tok, "%s", err) |  | ||||||
| 		} |  | ||||||
| 		return val |  | ||||||
| 	case tokenDate: |  | ||||||
| 		val, err := time.ParseInLocation(time.RFC3339Nano, tok.val, time.UTC) |  | ||||||
| 		if err != nil { |  | ||||||
| 			p.raiseError(tok, "%s", err) |  | ||||||
| 		} |  | ||||||
| 		return val |  | ||||||
| 	case tokenLeftBracket: |  | ||||||
| 		return p.parseArray() |  | ||||||
| 	case tokenLeftCurlyBrace: |  | ||||||
| 		return p.parseInlineTable() |  | ||||||
| 	case tokenEqual: |  | ||||||
| 		p.raiseError(tok, "cannot have multiple equals for the same key") |  | ||||||
| 	case tokenError: |  | ||||||
| 		p.raiseError(tok, "%s", tok) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	p.raiseError(tok, "never reached") |  | ||||||
|  |  | ||||||
| 	return nil |  | ||||||
| } |  | ||||||
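For reference, a minimal sketch of the integer branches above (hex, octal, binary, and decimal with underscore validation), driven through the exported Load API that appears in toml.go further down this diff. The keys and literals are invented, and it assumes the lexer, which is not part of this section, emits the 0x/0o/0b prefixes as integer tokens as TOML 0.5 specifies.

    package main

    import (
    	"fmt"

    	toml "github.com/pelletier/go-toml"
    )

    func main() {
    	// Each literal exercises one branch of the tokenInteger case above;
    	// underscores are stripped by cleanupNumberToken before strconv.ParseInt.
    	tree, err := toml.Load(`
    hex = 0xdead_beef
    oct = 0o755
    bin = 0b1101
    dec = 1_000_000
    big = inf
    `)
    	if err != nil {
    		panic(err)
    	}
    	// Integers come back as int64; inf comes back as a float64 +Inf.
    	fmt.Println(tree.Get("hex"), tree.Get("oct"), tree.Get("bin"), tree.Get("dec"), tree.Get("big"))
    }

Note that hexNumberUnderscoreInvalidRegexp (see init below) only allows lowercase hex digits next to an underscore, so 0xDEAD_BEEF would be rejected while 0xdead_beef is accepted.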
|  |  | ||||||
| func tokenIsComma(t *token) bool { |  | ||||||
| 	return t != nil && t.typ == tokenComma |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (p *tomlParser) parseInlineTable() *Tree { |  | ||||||
| 	tree := newTree() |  | ||||||
| 	var previous *token |  | ||||||
| Loop: |  | ||||||
| 	for { |  | ||||||
| 		follow := p.peek() |  | ||||||
| 		if follow == nil || follow.typ == tokenEOF { |  | ||||||
| 			p.raiseError(follow, "unterminated inline table") |  | ||||||
| 		} |  | ||||||
| 		switch follow.typ { |  | ||||||
| 		case tokenRightCurlyBrace: |  | ||||||
| 			p.getToken() |  | ||||||
| 			break Loop |  | ||||||
| 		case tokenKey: |  | ||||||
| 			if !tokenIsComma(previous) && previous != nil { |  | ||||||
| 				p.raiseError(follow, "comma expected between fields in inline table") |  | ||||||
| 			} |  | ||||||
| 			key := p.getToken() |  | ||||||
| 			p.assume(tokenEqual) |  | ||||||
| 			value := p.parseRvalue() |  | ||||||
| 			tree.Set(key.val, value) |  | ||||||
| 		case tokenComma: |  | ||||||
| 			if previous == nil { |  | ||||||
| 				p.raiseError(follow, "inline table cannot start with a comma") |  | ||||||
| 			} |  | ||||||
| 			if tokenIsComma(previous) { |  | ||||||
| 				p.raiseError(follow, "need field between two commas in inline table") |  | ||||||
| 			} |  | ||||||
| 			p.getToken() |  | ||||||
| 		default: |  | ||||||
| 			p.raiseError(follow, "unexpected token type in inline table: %s", follow.String()) |  | ||||||
| 		} |  | ||||||
| 		previous = follow |  | ||||||
| 	} |  | ||||||
| 	if tokenIsComma(previous) { |  | ||||||
| 		p.raiseError(previous, "trailing comma at the end of inline table") |  | ||||||
| 	} |  | ||||||
| 	return tree |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (p *tomlParser) parseArray() interface{} { |  | ||||||
| 	var array []interface{} |  | ||||||
| 	arrayType := reflect.TypeOf(nil) |  | ||||||
| 	for { |  | ||||||
| 		follow := p.peek() |  | ||||||
| 		if follow == nil || follow.typ == tokenEOF { |  | ||||||
| 			p.raiseError(follow, "unterminated array") |  | ||||||
| 		} |  | ||||||
| 		if follow.typ == tokenRightBracket { |  | ||||||
| 			p.getToken() |  | ||||||
| 			break |  | ||||||
| 		} |  | ||||||
| 		val := p.parseRvalue() |  | ||||||
| 		if arrayType == nil { |  | ||||||
| 			arrayType = reflect.TypeOf(val) |  | ||||||
| 		} |  | ||||||
| 		if reflect.TypeOf(val) != arrayType { |  | ||||||
| 			p.raiseError(follow, "mixed types in array") |  | ||||||
| 		} |  | ||||||
| 		array = append(array, val) |  | ||||||
| 		follow = p.peek() |  | ||||||
| 		if follow == nil || follow.typ == tokenEOF { |  | ||||||
| 			p.raiseError(follow, "unterminated array") |  | ||||||
| 		} |  | ||||||
| 		if follow.typ != tokenRightBracket && follow.typ != tokenComma { |  | ||||||
| 			p.raiseError(follow, "missing comma") |  | ||||||
| 		} |  | ||||||
| 		if follow.typ == tokenComma { |  | ||||||
| 			p.getToken() |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| 	// An array of Trees is actually an array of inline |  | ||||||
| 	// tables, which is a shorthand for a table array. If the |  | ||||||
| 	// array was not converted from []interface{} to []*Tree, |  | ||||||
| 	// the two notations would not be equivalent. |  | ||||||
| 	if arrayType == reflect.TypeOf(newTree()) { |  | ||||||
| 		tomlArray := make([]*Tree, len(array)) |  | ||||||
| 		for i, v := range array { |  | ||||||
| 			tomlArray[i] = v.(*Tree) |  | ||||||
| 		} |  | ||||||
| 		return tomlArray |  | ||||||
| 	} |  | ||||||
| 	return array |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func parseToml(flow []token) *Tree { |  | ||||||
| 	result := newTree() |  | ||||||
| 	result.position = Position{1, 1} |  | ||||||
| 	parser := &tomlParser{ |  | ||||||
| 		flowIdx:       0, |  | ||||||
| 		flow:          flow, |  | ||||||
| 		tree:          result, |  | ||||||
| 		currentTable:  make([]string, 0), |  | ||||||
| 		seenTableKeys: make([]string, 0), |  | ||||||
| 	} |  | ||||||
| 	parser.run() |  | ||||||
| 	return result |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func init() { |  | ||||||
| 	numberUnderscoreInvalidRegexp = regexp.MustCompile(`([^\d]_|_[^\d])|_$|^_`) |  | ||||||
| 	hexNumberUnderscoreInvalidRegexp = regexp.MustCompile(`(^0x_)|([^\da-f]_|_[^\da-f])|_$|^_`) |  | ||||||
| } |  | ||||||
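To tie the parser states above together, here is a hedged usage sketch built only on the exported API from toml.go later in this diff; the table and key names are invented. The inline table drives parseInlineTable, [server] drives parseGroup, and the repeated [[account]] headers drive parseGroupArray.

    package main

    import (
    	"fmt"

    	toml "github.com/pelletier/go-toml"
    )

    func main() {
    	tree, err := toml.Load(`
    point = { x = 1, y = 2 }

    [server]
    host = "localhost"
    port = 8080

    [[account]]
    name = "a"

    [[account]]
    name = "b"
    `)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(tree.Get("point.x"), tree.Get("server.port")) // int64 values
    	accounts := tree.Get("account").([]*toml.Tree)            // a table array is stored as []*Tree
    	fmt.Println(len(accounts), accounts[1].Get("name"))
    }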
							
								
								
									
vendor/github.com/pelletier/go-toml/position.go  (29 lines removed; generated, vendored)
| @@ -1,29 +0,0 @@ |
| // Position support for go-toml |  | ||||||
|  |  | ||||||
| package toml |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"fmt" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| // Position of a document element within a TOML document. |  | ||||||
| // |  | ||||||
| // Line and Col are both 1-indexed positions for the element's line number and |  | ||||||
| // column number, respectively. Values of zero or less will cause Invalid() |  | ||||||
| // to return true. |  | ||||||
| type Position struct { |  | ||||||
| 	Line int // line within the document |  | ||||||
| 	Col  int // column within the line |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // String representation of the position. |  | ||||||
| // Displays 1-indexed line and column numbers. |  | ||||||
| func (p Position) String() string { |  | ||||||
| 	return fmt.Sprintf("(%d, %d)", p.Line, p.Col) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Invalid reports whether the position is invalid, i.e. whether Line or Col |  | ||||||
| // is zero or negative. |  | ||||||
| func (p Position) Invalid() bool { |  | ||||||
| 	return p.Line <= 0 || p.Col <= 0 |  | ||||||
| } |  | ||||||
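A brief, illustrative sketch of how these positions surface through the exported GetPosition accessor defined in toml.go; the key names are invented and the exact coordinates depend on the input layout.

    package main

    import (
    	"fmt"

    	toml "github.com/pelletier/go-toml"
    )

    func main() {
    	tree, err := toml.Load("answer = 42\n\n[table]\nkey = \"v\"\n")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(tree.GetPosition("table.key"))         // 1-indexed, e.g. (4, 1)
    	fmt.Println(tree.GetPosition("missing").Invalid()) // true: zero-value Position
    }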
							
								
								
									
vendor/github.com/pelletier/go-toml/test.sh  (88 lines removed; generated, vendored)
| @@ -1,88 +0,0 @@ |
| #!/bin/bash |  | ||||||
| # fail out of the script if anything here fails |  | ||||||
| set -e |  | ||||||
| set -o pipefail |  | ||||||
|  |  | ||||||
| # set the path to the present working directory |  | ||||||
| export GOPATH=`pwd` |  | ||||||
|  |  | ||||||
| function git_clone() { |  | ||||||
|   path=$1 |  | ||||||
|   branch=$2 |  | ||||||
|   version=$3 |  | ||||||
|   if [ ! -d "src/$path" ]; then |  | ||||||
|     mkdir -p src/$path |  | ||||||
|     git clone https://$path.git src/$path |  | ||||||
|   fi |  | ||||||
|   pushd src/$path |  | ||||||
|   git checkout "$branch" |  | ||||||
|   git reset --hard "$version" |  | ||||||
|   popd |  | ||||||
| } |  | ||||||
|  |  | ||||||
| # Remove potential previous runs |  | ||||||
| rm -rf src test_program_bin toml-test |  | ||||||
|  |  | ||||||
| go get github.com/pelletier/go-buffruneio |  | ||||||
| go get github.com/davecgh/go-spew/spew |  | ||||||
| go get gopkg.in/yaml.v2 |  | ||||||
| go get github.com/BurntSushi/toml |  | ||||||
|  |  | ||||||
| # get code for BurntSushi TOML validation |  | ||||||
| # pinning all to 'HEAD' for version 0.3.x work (TODO: pin to commit hash when tests stabilize) |  | ||||||
| git_clone github.com/BurntSushi/toml master HEAD |  | ||||||
| git_clone github.com/BurntSushi/toml-test master HEAD #was: 0.2.0 HEAD |  | ||||||
|  |  | ||||||
| # build the BurntSushi test application |  | ||||||
| go build -o toml-test github.com/BurntSushi/toml-test |  | ||||||
|  |  | ||||||
| # vendorize the current lib for testing |  | ||||||
| # NOTE: this basically mocks an install without having to go back out to github for code |  | ||||||
| mkdir -p src/github.com/pelletier/go-toml/cmd |  | ||||||
| mkdir -p src/github.com/pelletier/go-toml/query |  | ||||||
| cp *.go *.toml src/github.com/pelletier/go-toml |  | ||||||
| cp -R cmd/* src/github.com/pelletier/go-toml/cmd |  | ||||||
| cp -R query/* src/github.com/pelletier/go-toml/query |  | ||||||
| go build -o test_program_bin src/github.com/pelletier/go-toml/cmd/test_program.go |  | ||||||
|  |  | ||||||
| # Run basic unit tests |  | ||||||
| go test github.com/pelletier/go-toml -covermode=count -coverprofile=coverage.out |  | ||||||
| go test github.com/pelletier/go-toml/cmd/tomljson |  | ||||||
| go test github.com/pelletier/go-toml/query |  | ||||||
|  |  | ||||||
| # run the entire BurntSushi test suite |  | ||||||
| if [[ $# -eq 0 ]] ; then |  | ||||||
|   echo "Running all BurntSushi tests" |  | ||||||
|   ./toml-test ./test_program_bin | tee test_out |  | ||||||
| else |  | ||||||
|   # run a specific test |  | ||||||
|   test=$1 |  | ||||||
|   test_path='src/github.com/BurntSushi/toml-test/tests' |  | ||||||
|   valid_test="$test_path/valid/$test" |  | ||||||
|   invalid_test="$test_path/invalid/$test" |  | ||||||
|  |  | ||||||
|   if [ -e "$valid_test.toml" ]; then |  | ||||||
|     echo "Valid Test TOML for $test:" |  | ||||||
|     echo "====" |  | ||||||
|     cat "$valid_test.toml" |  | ||||||
|  |  | ||||||
|     echo "Valid Test JSON for $test:" |  | ||||||
|     echo "====" |  | ||||||
|     cat "$valid_test.json" |  | ||||||
|  |  | ||||||
|     echo "Go-TOML Output for $test:" |  | ||||||
|     echo "====" |  | ||||||
|     cat "$valid_test.toml" | ./test_program_bin |  | ||||||
|   fi |  | ||||||
|  |  | ||||||
|   if [ -e "$invalid_test.toml" ]; then |  | ||||||
|     echo "Invalid Test TOML for $test:" |  | ||||||
|     echo "====" |  | ||||||
|     cat "$invalid_test.toml" |  | ||||||
|  |  | ||||||
|     echo "Go-TOML Output for $test:" |  | ||||||
|     echo "====" |  | ||||||
|     echo "go-toml Output:" |  | ||||||
|     cat "$invalid_test.toml" | ./test_program_bin |  | ||||||
|   fi |  | ||||||
| fi |  | ||||||
							
								
								
									
vendor/github.com/pelletier/go-toml/token.go  (144 lines removed; generated, vendored)
| @@ -1,144 +0,0 @@ |
| package toml |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"fmt" |  | ||||||
| 	"strconv" |  | ||||||
| 	"unicode" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| // Define tokens |  | ||||||
| type tokenType int |  | ||||||
|  |  | ||||||
| const ( |  | ||||||
| 	eof = -(iota + 1) |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| const ( |  | ||||||
| 	tokenError tokenType = iota |  | ||||||
| 	tokenEOF |  | ||||||
| 	tokenComment |  | ||||||
| 	tokenKey |  | ||||||
| 	tokenString |  | ||||||
| 	tokenInteger |  | ||||||
| 	tokenTrue |  | ||||||
| 	tokenFalse |  | ||||||
| 	tokenFloat |  | ||||||
| 	tokenInf |  | ||||||
| 	tokenNan |  | ||||||
| 	tokenEqual |  | ||||||
| 	tokenLeftBracket |  | ||||||
| 	tokenRightBracket |  | ||||||
| 	tokenLeftCurlyBrace |  | ||||||
| 	tokenRightCurlyBrace |  | ||||||
| 	tokenLeftParen |  | ||||||
| 	tokenRightParen |  | ||||||
| 	tokenDoubleLeftBracket |  | ||||||
| 	tokenDoubleRightBracket |  | ||||||
| 	tokenDate |  | ||||||
| 	tokenKeyGroup |  | ||||||
| 	tokenKeyGroupArray |  | ||||||
| 	tokenComma |  | ||||||
| 	tokenColon |  | ||||||
| 	tokenDollar |  | ||||||
| 	tokenStar |  | ||||||
| 	tokenQuestion |  | ||||||
| 	tokenDot |  | ||||||
| 	tokenDotDot |  | ||||||
| 	tokenEOL |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| var tokenTypeNames = []string{ |  | ||||||
| 	"Error", |  | ||||||
| 	"EOF", |  | ||||||
| 	"Comment", |  | ||||||
| 	"Key", |  | ||||||
| 	"String", |  | ||||||
| 	"Integer", |  | ||||||
| 	"True", |  | ||||||
| 	"False", |  | ||||||
| 	"Float", |  | ||||||
| 	"Inf", |  | ||||||
| 	"NaN", |  | ||||||
| 	"=", |  | ||||||
| 	"[", |  | ||||||
| 	"]", |  | ||||||
| 	"{", |  | ||||||
| 	"}", |  | ||||||
| 	"(", |  | ||||||
| 	")", |  | ||||||
| 	"]]", |  | ||||||
| 	"[[", |  | ||||||
| 	"Date", |  | ||||||
| 	"KeyGroup", |  | ||||||
| 	"KeyGroupArray", |  | ||||||
| 	",", |  | ||||||
| 	":", |  | ||||||
| 	"$", |  | ||||||
| 	"*", |  | ||||||
| 	"?", |  | ||||||
| 	".", |  | ||||||
| 	"..", |  | ||||||
| 	"EOL", |  | ||||||
| } |  | ||||||
|  |  | ||||||
| type token struct { |  | ||||||
| 	Position |  | ||||||
| 	typ tokenType |  | ||||||
| 	val string |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (tt tokenType) String() string { |  | ||||||
| 	idx := int(tt) |  | ||||||
| 	if idx < len(tokenTypeNames) { |  | ||||||
| 		return tokenTypeNames[idx] |  | ||||||
| 	} |  | ||||||
| 	return "Unknown" |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (t token) Int() int { |  | ||||||
| 	if result, err := strconv.Atoi(t.val); err != nil { |  | ||||||
| 		panic(err) |  | ||||||
| 	} else { |  | ||||||
| 		return result |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (t token) String() string { |  | ||||||
| 	switch t.typ { |  | ||||||
| 	case tokenEOF: |  | ||||||
| 		return "EOF" |  | ||||||
| 	case tokenError: |  | ||||||
| 		return t.val |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return fmt.Sprintf("%q", t.val) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func isSpace(r rune) bool { |  | ||||||
| 	return r == ' ' || r == '\t' |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func isAlphanumeric(r rune) bool { |  | ||||||
| 	return unicode.IsLetter(r) || r == '_' |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func isKeyChar(r rune) bool { |  | ||||||
| 	// Keys start with the first character that isn't whitespace or [ and end |  | ||||||
| 	// with the last non-whitespace character before the equals sign. Keys |  | ||||||
| 	// cannot contain a # character. |  | ||||||
| 	return !(r == '\r' || r == '\n' || r == eof || r == '=') |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func isKeyStartChar(r rune) bool { |  | ||||||
| 	return !(isSpace(r) || r == '\r' || r == '\n' || r == eof || r == '[') |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func isDigit(r rune) bool { |  | ||||||
| 	return unicode.IsNumber(r) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func isHexDigit(r rune) bool { |  | ||||||
| 	return isDigit(r) || |  | ||||||
| 		(r >= 'a' && r <= 'f') || |  | ||||||
| 		(r >= 'A' && r <= 'F') |  | ||||||
| } |  | ||||||
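Because token and tokenType are unexported, the following sketch is package-internal only (it would have to live inside package toml, for example in a test file); it simply shows the String methods above at work.

    package toml

    import "fmt"

    func tokenSketch() {
    	t := token{Position: Position{Line: 1, Col: 5}, typ: tokenKeyGroupArray, val: "fruit"}
    	fmt.Println(t.typ)      // "KeyGroupArray", via tokenTypeNames
    	fmt.Println(t)          // `"fruit"`, quoted by token.String
    	fmt.Println(t.Position) // "(1, 5)", via Position.String
    }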
							
								
								
									
vendor/github.com/pelletier/go-toml/toml.go  (367 lines removed; generated, vendored)
| @@ -1,367 +0,0 @@ |
| package toml |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"errors" |  | ||||||
| 	"fmt" |  | ||||||
| 	"io" |  | ||||||
| 	"io/ioutil" |  | ||||||
| 	"os" |  | ||||||
| 	"runtime" |  | ||||||
| 	"strings" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| type tomlValue struct { |  | ||||||
| 	value     interface{} // string, int64, uint64, float64, bool, time.Time, [] of any of this list |  | ||||||
| 	comment   string |  | ||||||
| 	commented bool |  | ||||||
| 	multiline bool |  | ||||||
| 	position  Position |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Tree is the result of the parsing of a TOML file. |  | ||||||
| type Tree struct { |  | ||||||
| 	values    map[string]interface{} // string -> *tomlValue, *Tree, []*Tree |  | ||||||
| 	comment   string |  | ||||||
| 	commented bool |  | ||||||
| 	position  Position |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func newTree() *Tree { |  | ||||||
| 	return &Tree{ |  | ||||||
| 		values:   make(map[string]interface{}), |  | ||||||
| 		position: Position{}, |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // TreeFromMap initializes a new Tree object using the given map. |  | ||||||
| func TreeFromMap(m map[string]interface{}) (*Tree, error) { |  | ||||||
| 	result, err := toTree(m) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return nil, err |  | ||||||
| 	} |  | ||||||
| 	return result.(*Tree), nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Position returns the position of the tree. |  | ||||||
| func (t *Tree) Position() Position { |  | ||||||
| 	return t.position |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Has returns a boolean indicating if the given key exists. |  | ||||||
| func (t *Tree) Has(key string) bool { |  | ||||||
| 	if key == "" { |  | ||||||
| 		return false |  | ||||||
| 	} |  | ||||||
| 	return t.HasPath(strings.Split(key, ".")) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // HasPath returns true if the given path of keys exists, false otherwise. |  | ||||||
| func (t *Tree) HasPath(keys []string) bool { |  | ||||||
| 	return t.GetPath(keys) != nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Keys returns the keys of the toplevel tree (does not recurse). |  | ||||||
| func (t *Tree) Keys() []string { |  | ||||||
| 	keys := make([]string, len(t.values)) |  | ||||||
| 	i := 0 |  | ||||||
| 	for k := range t.values { |  | ||||||
| 		keys[i] = k |  | ||||||
| 		i++ |  | ||||||
| 	} |  | ||||||
| 	return keys |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Get the value at key in the Tree. |  | ||||||
| // Key is a dot-separated path (e.g. a.b.c) without single/double quoted strings. |  | ||||||
| // If you need to retrieve non-bare keys, use GetPath. |  | ||||||
| // Returns nil if the path does not exist in the tree. |  | ||||||
| // If key is empty, the current tree is returned. |  | ||||||
| func (t *Tree) Get(key string) interface{} { |  | ||||||
| 	if key == "" { |  | ||||||
| 		return t |  | ||||||
| 	} |  | ||||||
| 	return t.GetPath(strings.Split(key, ".")) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // GetPath returns the element in the tree indicated by 'keys'. |  | ||||||
| // If keys is of length zero, the current tree is returned. |  | ||||||
| func (t *Tree) GetPath(keys []string) interface{} { |  | ||||||
| 	if len(keys) == 0 { |  | ||||||
| 		return t |  | ||||||
| 	} |  | ||||||
| 	subtree := t |  | ||||||
| 	for _, intermediateKey := range keys[:len(keys)-1] { |  | ||||||
| 		value, exists := subtree.values[intermediateKey] |  | ||||||
| 		if !exists { |  | ||||||
| 			return nil |  | ||||||
| 		} |  | ||||||
| 		switch node := value.(type) { |  | ||||||
| 		case *Tree: |  | ||||||
| 			subtree = node |  | ||||||
| 		case []*Tree: |  | ||||||
| 			// go to most recent element |  | ||||||
| 			if len(node) == 0 { |  | ||||||
| 				return nil |  | ||||||
| 			} |  | ||||||
| 			subtree = node[len(node)-1] |  | ||||||
| 		default: |  | ||||||
| 			return nil // cannot navigate through other node types |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| 	// branch based on final node type |  | ||||||
| 	switch node := subtree.values[keys[len(keys)-1]].(type) { |  | ||||||
| 	case *tomlValue: |  | ||||||
| 		return node.value |  | ||||||
| 	default: |  | ||||||
| 		return node |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // GetPosition returns the position of the given key. |  | ||||||
| func (t *Tree) GetPosition(key string) Position { |  | ||||||
| 	if key == "" { |  | ||||||
| 		return t.position |  | ||||||
| 	} |  | ||||||
| 	return t.GetPositionPath(strings.Split(key, ".")) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // GetPositionPath returns the position of the element in the tree indicated by 'keys'. |  | ||||||
| // If keys is of length zero, the position of the current tree is returned. |  | ||||||
| func (t *Tree) GetPositionPath(keys []string) Position { |  | ||||||
| 	if len(keys) == 0 { |  | ||||||
| 		return t.position |  | ||||||
| 	} |  | ||||||
| 	subtree := t |  | ||||||
| 	for _, intermediateKey := range keys[:len(keys)-1] { |  | ||||||
| 		value, exists := subtree.values[intermediateKey] |  | ||||||
| 		if !exists { |  | ||||||
| 			return Position{0, 0} |  | ||||||
| 		} |  | ||||||
| 		switch node := value.(type) { |  | ||||||
| 		case *Tree: |  | ||||||
| 			subtree = node |  | ||||||
| 		case []*Tree: |  | ||||||
| 			// go to most recent element |  | ||||||
| 			if len(node) == 0 { |  | ||||||
| 				return Position{0, 0} |  | ||||||
| 			} |  | ||||||
| 			subtree = node[len(node)-1] |  | ||||||
| 		default: |  | ||||||
| 			return Position{0, 0} |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| 	// branch based on final node type |  | ||||||
| 	switch node := subtree.values[keys[len(keys)-1]].(type) { |  | ||||||
| 	case *tomlValue: |  | ||||||
| 		return node.position |  | ||||||
| 	case *Tree: |  | ||||||
| 		return node.position |  | ||||||
| 	case []*Tree: |  | ||||||
| 		// go to most recent element |  | ||||||
| 		if len(node) == 0 { |  | ||||||
| 			return Position{0, 0} |  | ||||||
| 		} |  | ||||||
| 		return node[len(node)-1].position |  | ||||||
| 	default: |  | ||||||
| 		return Position{0, 0} |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // GetDefault works like Get but with a default value |  | ||||||
| func (t *Tree) GetDefault(key string, def interface{}) interface{} { |  | ||||||
| 	val := t.Get(key) |  | ||||||
| 	if val == nil { |  | ||||||
| 		return def |  | ||||||
| 	} |  | ||||||
| 	return val |  | ||||||
| } |  | ||||||
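A short, illustrative sketch of the read accessors above (Has, Get, GetPath, GetDefault); the keys and values are invented.

    package main

    import (
    	"fmt"

    	toml "github.com/pelletier/go-toml"
    )

    func main() {
    	tree, err := toml.Load("name = \"demo\"\ntimeout = 30\n")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(tree.Has("name"))                     // true
    	fmt.Println(tree.Get("timeout"))                  // int64(30)
    	fmt.Println(tree.GetDefault("retries", int64(3))) // 3: key absent, so the default is returned
    	fmt.Println(tree.GetPath([]string{"name"}))       // "demo"
    }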
|  |  | ||||||
| // SetOptions arguments are supplied to the SetWithOptions and SetPathWithOptions functions to modify marshalling behaviour. |  | ||||||
| // The default values within the struct are valid default options. |  | ||||||
| type SetOptions struct { |  | ||||||
| 	Comment   string |  | ||||||
| 	Commented bool |  | ||||||
| 	Multiline bool |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // SetWithOptions is the same as Set, but allows you to provide formatting |  | ||||||
| // instructions to the key, that will be used by Marshal(). |  | ||||||
| func (t *Tree) SetWithOptions(key string, opts SetOptions, value interface{}) { |  | ||||||
| 	t.SetPathWithOptions(strings.Split(key, "."), opts, value) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // SetPathWithOptions is the same as SetPath, but allows you to provide |  | ||||||
| // formatting instructions to the key, that will be reused by Marshal(). |  | ||||||
| func (t *Tree) SetPathWithOptions(keys []string, opts SetOptions, value interface{}) { |  | ||||||
| 	subtree := t |  | ||||||
| 	for _, intermediateKey := range keys[:len(keys)-1] { |  | ||||||
| 		nextTree, exists := subtree.values[intermediateKey] |  | ||||||
| 		if !exists { |  | ||||||
| 			nextTree = newTree() |  | ||||||
| 			subtree.values[intermediateKey] = nextTree // add new element here |  | ||||||
| 		} |  | ||||||
| 		switch node := nextTree.(type) { |  | ||||||
| 		case *Tree: |  | ||||||
| 			subtree = node |  | ||||||
| 		case []*Tree: |  | ||||||
| 			// go to most recent element |  | ||||||
| 			if len(node) == 0 { |  | ||||||
| 				// create element if it does not exist |  | ||||||
| 				subtree.values[intermediateKey] = append(node, newTree()) |  | ||||||
| 			} |  | ||||||
| 			subtree = node[len(node)-1] |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	var toInsert interface{} |  | ||||||
|  |  | ||||||
| 	switch value.(type) { |  | ||||||
| 	case *Tree: |  | ||||||
| 		tt := value.(*Tree) |  | ||||||
| 		tt.comment = opts.Comment |  | ||||||
| 		toInsert = value |  | ||||||
| 	case []*Tree: |  | ||||||
| 		toInsert = value |  | ||||||
| 	case *tomlValue: |  | ||||||
| 		tt := value.(*tomlValue) |  | ||||||
| 		tt.comment = opts.Comment |  | ||||||
| 		toInsert = tt |  | ||||||
| 	default: |  | ||||||
| 		toInsert = &tomlValue{value: value, comment: opts.Comment, commented: opts.Commented, multiline: opts.Multiline} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	subtree.values[keys[len(keys)-1]] = toInsert |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Set an element in the tree. |  | ||||||
| // Key is a dot-separated path (e.g. a.b.c). |  | ||||||
| // Creates all necessary intermediate trees, if needed. |  | ||||||
| func (t *Tree) Set(key string, value interface{}) { |  | ||||||
| 	t.SetWithComment(key, "", false, value) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // SetWithComment is the same as Set, but allows you to provide comment |  | ||||||
| // information to the key, that will be reused by Marshal(). |  | ||||||
| func (t *Tree) SetWithComment(key string, comment string, commented bool, value interface{}) { |  | ||||||
| 	t.SetPathWithComment(strings.Split(key, "."), comment, commented, value) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // SetPath sets an element in the tree. |  | ||||||
| // Keys is an array of path elements (e.g. {"a","b","c"}). |  | ||||||
| // Creates all necessary intermediate trees, if needed. |  | ||||||
| func (t *Tree) SetPath(keys []string, value interface{}) { |  | ||||||
| 	t.SetPathWithComment(keys, "", false, value) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // SetPathWithComment is the same as SetPath, but allows you to provide comment |  | ||||||
| // information to the key, that will be reused by Marshal(). |  | ||||||
| func (t *Tree) SetPathWithComment(keys []string, comment string, commented bool, value interface{}) { |  | ||||||
| 	subtree := t |  | ||||||
| 	for _, intermediateKey := range keys[:len(keys)-1] { |  | ||||||
| 		nextTree, exists := subtree.values[intermediateKey] |  | ||||||
| 		if !exists { |  | ||||||
| 			nextTree = newTree() |  | ||||||
| 			subtree.values[intermediateKey] = nextTree // add new element here |  | ||||||
| 		} |  | ||||||
| 		switch node := nextTree.(type) { |  | ||||||
| 		case *Tree: |  | ||||||
| 			subtree = node |  | ||||||
| 		case []*Tree: |  | ||||||
| 			// go to most recent element |  | ||||||
| 			if len(node) == 0 { |  | ||||||
| 				// create element if it does not exist |  | ||||||
| 				subtree.values[intermediateKey] = append(node, newTree()) |  | ||||||
| 			} |  | ||||||
| 			subtree = node[len(node)-1] |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	var toInsert interface{} |  | ||||||
|  |  | ||||||
| 	switch value.(type) { |  | ||||||
| 	case *Tree: |  | ||||||
| 		tt := value.(*Tree) |  | ||||||
| 		tt.comment = comment |  | ||||||
| 		toInsert = value |  | ||||||
| 	case []*Tree: |  | ||||||
| 		toInsert = value |  | ||||||
| 	case *tomlValue: |  | ||||||
| 		tt := value.(*tomlValue) |  | ||||||
| 		tt.comment = comment |  | ||||||
| 		toInsert = tt |  | ||||||
| 	default: |  | ||||||
| 		toInsert = &tomlValue{value: value, comment: comment, commented: commented} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	subtree.values[keys[len(keys)-1]] = toInsert |  | ||||||
| } |  | ||||||
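A hedged sketch of the write accessors above; it starts from an empty tree via TreeFromMap (declared earlier in this file) and relies on Set/SetPath creating the intermediate "server" table on demand. The keys, values, and comment text are invented.

    package main

    import (
    	"fmt"

    	toml "github.com/pelletier/go-toml"
    )

    func main() {
    	tree, _ := toml.TreeFromMap(map[string]interface{}{})
    	tree.Set("server.host", "localhost")
    	tree.SetPath([]string{"server", "port"}, int64(8080))
    	// SetWithOptions records comment/commented/multiline hints that the writer uses later.
    	tree.SetWithOptions("motd", toml.SetOptions{Comment: "shown at login", Multiline: true},
    		"first line\nsecond line")
    	out, err := tree.ToTomlString()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(out)
    }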
|  |  | ||||||
| // createSubTree takes a tree and a key and creates the necessary intermediate |  | ||||||
| // subtrees so that a subtree exists at that point. In-place. |  | ||||||
| // |  | ||||||
| // e.g. passing a.b.c will create (assuming tree is empty) tree[a], tree[a][b] |  | ||||||
| // and tree[a][b][c] |  | ||||||
| // |  | ||||||
| // Returns nil on success, error object on failure |  | ||||||
| func (t *Tree) createSubTree(keys []string, pos Position) error { |  | ||||||
| 	subtree := t |  | ||||||
| 	for _, intermediateKey := range keys { |  | ||||||
| 		nextTree, exists := subtree.values[intermediateKey] |  | ||||||
| 		if !exists { |  | ||||||
| 			tree := newTree() |  | ||||||
| 			tree.position = pos |  | ||||||
| 			subtree.values[intermediateKey] = tree |  | ||||||
| 			nextTree = tree |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		switch node := nextTree.(type) { |  | ||||||
| 		case []*Tree: |  | ||||||
| 			subtree = node[len(node)-1] |  | ||||||
| 		case *Tree: |  | ||||||
| 			subtree = node |  | ||||||
| 		default: |  | ||||||
| 			return fmt.Errorf("unknown type for path %s (%s): %T (%#v)", |  | ||||||
| 				strings.Join(keys, "."), intermediateKey, nextTree, nextTree) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| 	return nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // LoadBytes creates a Tree from a []byte. |  | ||||||
| func LoadBytes(b []byte) (tree *Tree, err error) { |  | ||||||
| 	defer func() { |  | ||||||
| 		if r := recover(); r != nil { |  | ||||||
| 			if _, ok := r.(runtime.Error); ok { |  | ||||||
| 				panic(r) |  | ||||||
| 			} |  | ||||||
| 			err = errors.New(r.(string)) |  | ||||||
| 		} |  | ||||||
| 	}() |  | ||||||
| 	tree = parseToml(lexToml(b)) |  | ||||||
| 	return |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // LoadReader creates a Tree from any io.Reader. |  | ||||||
| func LoadReader(reader io.Reader) (tree *Tree, err error) { |  | ||||||
| 	inputBytes, err := ioutil.ReadAll(reader) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return |  | ||||||
| 	} |  | ||||||
| 	tree, err = LoadBytes(inputBytes) |  | ||||||
| 	return |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Load creates a Tree from a string. |  | ||||||
| func Load(content string) (tree *Tree, err error) { |  | ||||||
| 	return LoadBytes([]byte(content)) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // LoadFile creates a Tree from a file. |  | ||||||
| func LoadFile(path string) (tree *Tree, err error) { |  | ||||||
| 	file, err := os.Open(path) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return nil, err |  | ||||||
| 	} |  | ||||||
| 	defer file.Close() |  | ||||||
| 	return LoadReader(file) |  | ||||||
| } |  | ||||||
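An illustrative sketch of the loading entry points above. Parse failures are panics that LoadBytes recovers and returns as ordinary errors; the input strings here are invented.

    package main

    import (
    	"fmt"
    	"strings"

    	toml "github.com/pelletier/go-toml"
    )

    func main() {
    	// A value is missing after "=", so the parser raises an error.
    	if _, err := toml.Load("key = "); err != nil {
    		fmt.Println("parse error:", err)
    	}

    	// LoadReader accepts any io.Reader; LoadFile wraps os.Open the same way.
    	tree, err := toml.LoadReader(strings.NewReader(`title = "hello"`))
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(tree.Get("title")) // "hello"
    }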
							
								
								
									
vendor/github.com/pelletier/go-toml/tomltree_create.go  (142 lines removed; generated, vendored)
| @@ -1,142 +0,0 @@ |
| package toml |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"fmt" |  | ||||||
| 	"reflect" |  | ||||||
| 	"time" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| var kindToType = [reflect.String + 1]reflect.Type{ |  | ||||||
| 	reflect.Bool:    reflect.TypeOf(true), |  | ||||||
| 	reflect.String:  reflect.TypeOf(""), |  | ||||||
| 	reflect.Float32: reflect.TypeOf(float64(1)), |  | ||||||
| 	reflect.Float64: reflect.TypeOf(float64(1)), |  | ||||||
| 	reflect.Int:     reflect.TypeOf(int64(1)), |  | ||||||
| 	reflect.Int8:    reflect.TypeOf(int64(1)), |  | ||||||
| 	reflect.Int16:   reflect.TypeOf(int64(1)), |  | ||||||
| 	reflect.Int32:   reflect.TypeOf(int64(1)), |  | ||||||
| 	reflect.Int64:   reflect.TypeOf(int64(1)), |  | ||||||
| 	reflect.Uint:    reflect.TypeOf(uint64(1)), |  | ||||||
| 	reflect.Uint8:   reflect.TypeOf(uint64(1)), |  | ||||||
| 	reflect.Uint16:  reflect.TypeOf(uint64(1)), |  | ||||||
| 	reflect.Uint32:  reflect.TypeOf(uint64(1)), |  | ||||||
| 	reflect.Uint64:  reflect.TypeOf(uint64(1)), |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // typeFor returns a reflect.Type for a reflect.Kind, or nil if none is found. |  | ||||||
| // supported kinds and their target types: |  | ||||||
| // bool, string, float32/float64 -> float64, int/int8/int16/int32/int64 -> int64, uint/uint8/uint16/uint32/uint64 -> uint64 |  | ||||||
| func typeFor(k reflect.Kind) reflect.Type { |  | ||||||
| 	if k > 0 && int(k) < len(kindToType) { |  | ||||||
| 		return kindToType[k] |  | ||||||
| 	} |  | ||||||
| 	return nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func simpleValueCoercion(object interface{}) (interface{}, error) { |  | ||||||
| 	switch original := object.(type) { |  | ||||||
| 	case string, bool, int64, uint64, float64, time.Time: |  | ||||||
| 		return original, nil |  | ||||||
| 	case int: |  | ||||||
| 		return int64(original), nil |  | ||||||
| 	case int8: |  | ||||||
| 		return int64(original), nil |  | ||||||
| 	case int16: |  | ||||||
| 		return int64(original), nil |  | ||||||
| 	case int32: |  | ||||||
| 		return int64(original), nil |  | ||||||
| 	case uint: |  | ||||||
| 		return uint64(original), nil |  | ||||||
| 	case uint8: |  | ||||||
| 		return uint64(original), nil |  | ||||||
| 	case uint16: |  | ||||||
| 		return uint64(original), nil |  | ||||||
| 	case uint32: |  | ||||||
| 		return uint64(original), nil |  | ||||||
| 	case float32: |  | ||||||
| 		return float64(original), nil |  | ||||||
| 	case fmt.Stringer: |  | ||||||
| 		return original.String(), nil |  | ||||||
| 	default: |  | ||||||
| 		return nil, fmt.Errorf("cannot convert type %T to Tree", object) |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func sliceToTree(object interface{}) (interface{}, error) { |  | ||||||
| 	// arrays are a bit tricky, since they can represent either a |  | ||||||
| 	// collection of simple values, which is represented by one |  | ||||||
| 	// *tomlValue, or an array of tables, which is represented by an |  | ||||||
| 	// array of *Tree. |  | ||||||
|  |  | ||||||
| 	// holding the assumption that this function is called from toTree only when value.Kind() is Array or Slice |  | ||||||
| 	value := reflect.ValueOf(object) |  | ||||||
| 	insideType := value.Type().Elem() |  | ||||||
| 	length := value.Len() |  | ||||||
| 	if length > 0 { |  | ||||||
| 		insideType = reflect.ValueOf(value.Index(0).Interface()).Type() |  | ||||||
| 	} |  | ||||||
| 	if insideType.Kind() == reflect.Map { |  | ||||||
| 		// this is considered as an array of tables |  | ||||||
| 		tablesArray := make([]*Tree, 0, length) |  | ||||||
| 		for i := 0; i < length; i++ { |  | ||||||
| 			table := value.Index(i) |  | ||||||
| 			tree, err := toTree(table.Interface()) |  | ||||||
| 			if err != nil { |  | ||||||
| 				return nil, err |  | ||||||
| 			} |  | ||||||
| 			tablesArray = append(tablesArray, tree.(*Tree)) |  | ||||||
| 		} |  | ||||||
| 		return tablesArray, nil |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	sliceType := typeFor(insideType.Kind()) |  | ||||||
| 	if sliceType == nil { |  | ||||||
| 		sliceType = insideType |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	arrayValue := reflect.MakeSlice(reflect.SliceOf(sliceType), 0, length) |  | ||||||
|  |  | ||||||
| 	for i := 0; i < length; i++ { |  | ||||||
| 		val := value.Index(i).Interface() |  | ||||||
| 		simpleValue, err := simpleValueCoercion(val) |  | ||||||
| 		if err != nil { |  | ||||||
| 			return nil, err |  | ||||||
| 		} |  | ||||||
| 		arrayValue = reflect.Append(arrayValue, reflect.ValueOf(simpleValue)) |  | ||||||
| 	} |  | ||||||
| 	return &tomlValue{value: arrayValue.Interface(), position: Position{}}, nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func toTree(object interface{}) (interface{}, error) { |  | ||||||
| 	value := reflect.ValueOf(object) |  | ||||||
|  |  | ||||||
| 	if value.Kind() == reflect.Map { |  | ||||||
| 		values := map[string]interface{}{} |  | ||||||
| 		keys := value.MapKeys() |  | ||||||
| 		for _, key := range keys { |  | ||||||
| 			if key.Kind() != reflect.String { |  | ||||||
| 				if _, ok := key.Interface().(string); !ok { |  | ||||||
| 					return nil, fmt.Errorf("map key needs to be a string, not %T (%v)", key.Interface(), key.Kind()) |  | ||||||
| 				} |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			v := value.MapIndex(key) |  | ||||||
| 			newValue, err := toTree(v.Interface()) |  | ||||||
| 			if err != nil { |  | ||||||
| 				return nil, err |  | ||||||
| 			} |  | ||||||
| 			values[key.String()] = newValue |  | ||||||
| 		} |  | ||||||
| 		return &Tree{values: values, position: Position{}}, nil |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	if value.Kind() == reflect.Array || value.Kind() == reflect.Slice { |  | ||||||
| 		return sliceToTree(object) |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	simpleValue, err := simpleValueCoercion(object) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return nil, err |  | ||||||
| 	} |  | ||||||
| 	return &tomlValue{value: simpleValue, position: Position{}}, nil |  | ||||||
| } |  | ||||||
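A small sketch of TreeFromMap (declared in toml.go above) flowing through toTree and simpleValueCoercion; the map contents are invented. Nested maps become sub-tables, slices of scalars become arrays, and scalars are coerced (int to int64, float32 to float64, and so on).

    package main

    import (
    	"fmt"

    	toml "github.com/pelletier/go-toml"
    )

    func main() {
    	m := map[string]interface{}{
    		"name": "demo",
    		"port": 8080, // coerced to int64 by simpleValueCoercion
    		"owner": map[string]interface{}{
    			"email": "nobody@example.com",
    		},
    		"tags": []string{"alpha", "beta"},
    	}
    	tree, err := toml.TreeFromMap(m)
    	if err != nil {
    		panic(err)
    	}
    	s, _ := tree.ToTomlString()
    	fmt.Println(s)
    }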
							
								
								
									
vendor/github.com/pelletier/go-toml/tomltree_write.go  (333 lines removed; generated, vendored)
| @@ -1,333 +0,0 @@ |
| package toml |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"bytes" |  | ||||||
| 	"fmt" |  | ||||||
| 	"io" |  | ||||||
| 	"math" |  | ||||||
| 	"reflect" |  | ||||||
| 	"sort" |  | ||||||
| 	"strconv" |  | ||||||
| 	"strings" |  | ||||||
| 	"time" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| // Encodes a string to a TOML-compliant multi-line string value |  | ||||||
| // This function is a clone of the existing encodeTomlString function, except that whitespace characters |  | ||||||
| // are preserved. Quotation marks and backslashes are also not escaped. |  | ||||||
| func encodeMultilineTomlString(value string) string { |  | ||||||
| 	var b bytes.Buffer |  | ||||||
|  |  | ||||||
| 	for _, rr := range value { |  | ||||||
| 		switch rr { |  | ||||||
| 		case '\b': |  | ||||||
| 			b.WriteString(`\b`) |  | ||||||
| 		case '\t': |  | ||||||
| 			b.WriteString("\t") |  | ||||||
| 		case '\n': |  | ||||||
| 			b.WriteString("\n") |  | ||||||
| 		case '\f': |  | ||||||
| 			b.WriteString(`\f`) |  | ||||||
| 		case '\r': |  | ||||||
| 			b.WriteString("\r") |  | ||||||
| 		case '"': |  | ||||||
| 			b.WriteString(`"`) |  | ||||||
| 		case '\\': |  | ||||||
| 			b.WriteString(`\`) |  | ||||||
| 		default: |  | ||||||
| 			intRr := uint16(rr) |  | ||||||
| 			if intRr < 0x001F { |  | ||||||
| 				b.WriteString(fmt.Sprintf("\\u%0.4X", intRr)) |  | ||||||
| 			} else { |  | ||||||
| 				b.WriteRune(rr) |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| 	return b.String() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Encodes a string to a TOML-compliant string value |  | ||||||
| func encodeTomlString(value string) string { |  | ||||||
| 	var b bytes.Buffer |  | ||||||
|  |  | ||||||
| 	for _, rr := range value { |  | ||||||
| 		switch rr { |  | ||||||
| 		case '\b': |  | ||||||
| 			b.WriteString(`\b`) |  | ||||||
| 		case '\t': |  | ||||||
| 			b.WriteString(`\t`) |  | ||||||
| 		case '\n': |  | ||||||
| 			b.WriteString(`\n`) |  | ||||||
| 		case '\f': |  | ||||||
| 			b.WriteString(`\f`) |  | ||||||
| 		case '\r': |  | ||||||
| 			b.WriteString(`\r`) |  | ||||||
| 		case '"': |  | ||||||
| 			b.WriteString(`\"`) |  | ||||||
| 		case '\\': |  | ||||||
| 			b.WriteString(`\\`) |  | ||||||
| 		default: |  | ||||||
| 			intRr := uint16(rr) |  | ||||||
| 			if intRr < 0x001F { |  | ||||||
| 				b.WriteString(fmt.Sprintf("\\u%0.4X", intRr)) |  | ||||||
| 			} else { |  | ||||||
| 				b.WriteRune(rr) |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| 	return b.String() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func tomlValueStringRepresentation(v interface{}, indent string, arraysOneElementPerLine bool) (string, error) { |  | ||||||
| 	// accept either a raw value or a *tomlValue wrapper: writeTo passes the |  | ||||||
| 	// wrapper so that formatting options (e.g. multiline) are visible here. |  | ||||||
| 	tv, ok := v.(*tomlValue) |  | ||||||
| 	if ok { |  | ||||||
| 		v = tv.value |  | ||||||
| 	} else { |  | ||||||
| 		tv = &tomlValue{} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	switch value := v.(type) { |  | ||||||
| 	case uint64: |  | ||||||
| 		return strconv.FormatUint(value, 10), nil |  | ||||||
| 	case int64: |  | ||||||
| 		return strconv.FormatInt(value, 10), nil |  | ||||||
| 	case float64: |  | ||||||
| 		// Ensure a round float does contain a decimal point. Otherwise feeding |  | ||||||
| 		// the output back to the parser would convert to an integer. |  | ||||||
| 		if math.Trunc(value) == value { |  | ||||||
| 			return strings.ToLower(strconv.FormatFloat(value, 'f', 1, 32)), nil |  | ||||||
| 		} |  | ||||||
| 		return strings.ToLower(strconv.FormatFloat(value, 'f', -1, 32)), nil |  | ||||||
| 	case string: |  | ||||||
| 		if tv.multiline { |  | ||||||
| 			return "\"\"\"\n" + encodeMultilineTomlString(value) + "\"\"\"", nil |  | ||||||
| 		} |  | ||||||
| 		return "\"" + encodeTomlString(value) + "\"", nil |  | ||||||
| 	case []byte: |  | ||||||
| 		b, _ := v.([]byte) |  | ||||||
| 		return tomlValueStringRepresentation(string(b), indent, arraysOneElementPerLine) |  | ||||||
| 	case bool: |  | ||||||
| 		if value { |  | ||||||
| 			return "true", nil |  | ||||||
| 		} |  | ||||||
| 		return "false", nil |  | ||||||
| 	case time.Time: |  | ||||||
| 		return value.Format(time.RFC3339), nil |  | ||||||
| 	case nil: |  | ||||||
| 		return "", nil |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	rv := reflect.ValueOf(v) |  | ||||||
|  |  | ||||||
| 	if rv.Kind() == reflect.Slice { |  | ||||||
| 		var values []string |  | ||||||
| 		for i := 0; i < rv.Len(); i++ { |  | ||||||
| 			item := rv.Index(i).Interface() |  | ||||||
| 			itemRepr, err := tomlValueStringRepresentation(item, indent, arraysOneElementPerLine) |  | ||||||
| 			if err != nil { |  | ||||||
| 				return "", err |  | ||||||
| 			} |  | ||||||
| 			values = append(values, itemRepr) |  | ||||||
| 		} |  | ||||||
| 		if arraysOneElementPerLine && len(values) > 1 { |  | ||||||
| 			stringBuffer := bytes.Buffer{} |  | ||||||
| 			valueIndent := indent + `  ` // TODO: move that to a shared encoder state |  | ||||||
|  |  | ||||||
| 			stringBuffer.WriteString("[\n") |  | ||||||
|  |  | ||||||
| 			for _, value := range values { |  | ||||||
| 				stringBuffer.WriteString(valueIndent) |  | ||||||
| 				stringBuffer.WriteString(value) |  | ||||||
| 				stringBuffer.WriteString(`,`) |  | ||||||
| 				stringBuffer.WriteString("\n") |  | ||||||
| 			} |  | ||||||
|  |  | ||||||
| 			stringBuffer.WriteString(indent + "]") |  | ||||||
|  |  | ||||||
| 			return stringBuffer.String(), nil |  | ||||||
| 		} |  | ||||||
| 		return "[" + strings.Join(values, ",") + "]", nil |  | ||||||
| 	} |  | ||||||
| 	return "", fmt.Errorf("unsupported value type %T: %v", v, v) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (t *Tree) writeTo(w io.Writer, indent, keyspace string, bytesCount int64, arraysOneElementPerLine bool) (int64, error) { |  | ||||||
| 	simpleValuesKeys := make([]string, 0) |  | ||||||
| 	complexValuesKeys := make([]string, 0) |  | ||||||
|  |  | ||||||
| 	for k := range t.values { |  | ||||||
| 		v := t.values[k] |  | ||||||
| 		switch v.(type) { |  | ||||||
| 		case *Tree, []*Tree: |  | ||||||
| 			complexValuesKeys = append(complexValuesKeys, k) |  | ||||||
| 		default: |  | ||||||
| 			simpleValuesKeys = append(simpleValuesKeys, k) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	sort.Strings(simpleValuesKeys) |  | ||||||
| 	sort.Strings(complexValuesKeys) |  | ||||||
|  |  | ||||||
| 	for _, k := range simpleValuesKeys { |  | ||||||
| 		v, ok := t.values[k].(*tomlValue) |  | ||||||
| 		if !ok { |  | ||||||
| 			return bytesCount, fmt.Errorf("invalid value type at %s: %T", k, t.values[k]) |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		repr, err := tomlValueStringRepresentation(v, indent, arraysOneElementPerLine) |  | ||||||
| 		if err != nil { |  | ||||||
| 			return bytesCount, err |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		if v.comment != "" { |  | ||||||
| 			comment := strings.Replace(v.comment, "\n", "\n"+indent+"#", -1) |  | ||||||
| 			start := "# " |  | ||||||
| 			if strings.HasPrefix(comment, "#") { |  | ||||||
| 				start = "" |  | ||||||
| 			} |  | ||||||
| 			writtenBytesCountComment, errc := writeStrings(w, "\n", indent, start, comment, "\n") |  | ||||||
| 			bytesCount += int64(writtenBytesCountComment) |  | ||||||
| 			if errc != nil { |  | ||||||
| 				return bytesCount, errc |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		var commented string |  | ||||||
| 		if v.commented { |  | ||||||
| 			commented = "# " |  | ||||||
| 		} |  | ||||||
| 		writtenBytesCount, err := writeStrings(w, indent, commented, k, " = ", repr, "\n") |  | ||||||
| 		bytesCount += int64(writtenBytesCount) |  | ||||||
| 		if err != nil { |  | ||||||
| 			return bytesCount, err |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	for _, k := range complexValuesKeys { |  | ||||||
| 		v := t.values[k] |  | ||||||
|  |  | ||||||
| 		combinedKey := k |  | ||||||
| 		if keyspace != "" { |  | ||||||
| 			combinedKey = keyspace + "." + combinedKey |  | ||||||
| 		} |  | ||||||
| 		var commented string |  | ||||||
| 		if t.commented { |  | ||||||
| 			commented = "# " |  | ||||||
| 		} |  | ||||||
|  |  | ||||||
| 		switch node := v.(type) { |  | ||||||
| 		// node has to be of those two types given how keys are sorted above |  | ||||||
| 		case *Tree: |  | ||||||
| 			tv, ok := t.values[k].(*Tree) |  | ||||||
| 			if !ok { |  | ||||||
| 				return bytesCount, fmt.Errorf("invalid value type at %s: %T", k, t.values[k]) |  | ||||||
| 			} |  | ||||||
| 			if tv.comment != "" { |  | ||||||
| 				comment := strings.Replace(tv.comment, "\n", "\n"+indent+"#", -1) |  | ||||||
| 				start := "# " |  | ||||||
| 				if strings.HasPrefix(comment, "#") { |  | ||||||
| 					start = "" |  | ||||||
| 				} |  | ||||||
| 				writtenBytesCountComment, errc := writeStrings(w, "\n", indent, start, comment) |  | ||||||
| 				bytesCount += int64(writtenBytesCountComment) |  | ||||||
| 				if errc != nil { |  | ||||||
| 					return bytesCount, errc |  | ||||||
| 				} |  | ||||||
| 			} |  | ||||||
| 			writtenBytesCount, err := writeStrings(w, "\n", indent, commented, "[", combinedKey, "]\n") |  | ||||||
| 			bytesCount += int64(writtenBytesCount) |  | ||||||
| 			if err != nil { |  | ||||||
| 				return bytesCount, err |  | ||||||
| 			} |  | ||||||
| 			bytesCount, err = node.writeTo(w, indent+"  ", combinedKey, bytesCount, arraysOneElementPerLine) |  | ||||||
| 			if err != nil { |  | ||||||
| 				return bytesCount, err |  | ||||||
| 			} |  | ||||||
| 		case []*Tree: |  | ||||||
| 			for _, subTree := range node { |  | ||||||
| 				writtenBytesCount, err := writeStrings(w, "\n", indent, commented, "[[", combinedKey, "]]\n") |  | ||||||
| 				bytesCount += int64(writtenBytesCount) |  | ||||||
| 				if err != nil { |  | ||||||
| 					return bytesCount, err |  | ||||||
| 				} |  | ||||||
|  |  | ||||||
| 				bytesCount, err = subTree.writeTo(w, indent+"  ", combinedKey, bytesCount, arraysOneElementPerLine) |  | ||||||
| 				if err != nil { |  | ||||||
| 					return bytesCount, err |  | ||||||
| 				} |  | ||||||
| 			} |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return bytesCount, nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func writeStrings(w io.Writer, s ...string) (int, error) { |  | ||||||
| 	var n int |  | ||||||
| 	for i := range s { |  | ||||||
| 		b, err := io.WriteString(w, s[i]) |  | ||||||
| 		n += b |  | ||||||
| 		if err != nil { |  | ||||||
| 			return n, err |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| 	return n, nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // WriteTo encodes the Tree as TOML and writes it to the writer w. |  | ||||||
| // Returns the number of bytes written in case of success, or an error if anything happened. |  | ||||||
| func (t *Tree) WriteTo(w io.Writer) (int64, error) { |  | ||||||
| 	return t.writeTo(w, "", "", 0, false) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ToTomlString generates a human-readable representation of the current tree. |  | ||||||
| // Output spans multiple lines, and is suitable for ingest by a TOML parser. |  | ||||||
| // If the conversion cannot be performed, ToTomlString returns a non-nil error. |  | ||||||
| func (t *Tree) ToTomlString() (string, error) { |  | ||||||
| 	var buf bytes.Buffer |  | ||||||
| 	_, err := t.WriteTo(&buf) |  | ||||||
| 	if err != nil { |  | ||||||
| 		return "", err |  | ||||||
| 	} |  | ||||||
| 	return buf.String(), nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // String generates a human-readable representation of the current tree. |  | ||||||
| // Alias of ToTomlString with the error discarded. Present to implement the fmt.Stringer interface. |  | ||||||
| func (t *Tree) String() string { |  | ||||||
| 	result, _ := t.ToTomlString() |  | ||||||
| 	return result |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ToMap recursively generates a representation of the tree using Go built-in structures. |  | ||||||
| // The following types are used: |  | ||||||
| // |  | ||||||
| //	* bool |  | ||||||
| //	* float64 |  | ||||||
| //	* int64 |  | ||||||
| //	* string |  | ||||||
| //	* uint64 |  | ||||||
| //	* time.Time |  | ||||||
| //	* map[string]interface{} (where interface{} is any of this list) |  | ||||||
| //	* []interface{} (where interface{} is any of this list) |  | ||||||
| func (t *Tree) ToMap() map[string]interface{} { |  | ||||||
| 	result := map[string]interface{}{} |  | ||||||
|  |  | ||||||
| 	for k, v := range t.values { |  | ||||||
| 		switch node := v.(type) { |  | ||||||
| 		case []*Tree: |  | ||||||
| 			var array []interface{} |  | ||||||
| 			for _, item := range node { |  | ||||||
| 				array = append(array, item.ToMap()) |  | ||||||
| 			} |  | ||||||
| 			result[k] = array |  | ||||||
| 		case *Tree: |  | ||||||
| 			result[k] = node.ToMap() |  | ||||||
| 		case *tomlValue: |  | ||||||
| 			result[k] = node.value |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| 	return result |  | ||||||
| } |  | ||||||
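Finally, an illustrative round-trip sketch combining the writer above with ToMap; the TOML document is invented.

    package main

    import (
    	"fmt"
    	"os"

    	toml "github.com/pelletier/go-toml"
    )

    func main() {
    	tree, err := toml.Load(`
    [database]
    server = "192.168.1.1"
    ports  = [8001, 8001, 8002]
    `)
    	if err != nil {
    		panic(err)
    	}

    	// ToMap flattens the tree into plain Go types (maps, slices, int64, string, ...).
    	db := tree.ToMap()["database"].(map[string]interface{})
    	fmt.Println(db["server"], db["ports"])

    	// WriteTo re-encodes the tree as TOML; String and ToTomlString wrap the same writer.
    	if _, err := tree.WriteTo(os.Stdout); err != nil {
    		panic(err)
    	}
    }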
							
								
								
									
vendor/github.com/spf13/cast/.gitignore  (25 lines removed; generated, vendored)
| @@ -1,25 +0,0 @@ |
| # Compiled Object files, Static and Dynamic libs (Shared Objects) |  | ||||||
| *.o |  | ||||||
| *.a |  | ||||||
| *.so |  | ||||||
|  |  | ||||||
| # Folders |  | ||||||
| _obj |  | ||||||
| _test |  | ||||||
|  |  | ||||||
| # Architecture specific extensions/prefixes |  | ||||||
| *.[568vq] |  | ||||||
| [568vq].out |  | ||||||
|  |  | ||||||
| *.cgo1.go |  | ||||||
| *.cgo2.c |  | ||||||
| _cgo_defun.c |  | ||||||
| _cgo_gotypes.go |  | ||||||
| _cgo_export.* |  | ||||||
|  |  | ||||||
| _testmain.go |  | ||||||
|  |  | ||||||
| *.exe |  | ||||||
| *.test |  | ||||||
|  |  | ||||||
| *.bench |  | ||||||
							
								
								
									
vendor/github.com/spf13/cast/.travis.yml  (15 lines, generated, vendored)
									
									
								
							| @@ -1,15 +0,0 @@ | |||||||
| language: go |  | ||||||
| env: |  | ||||||
|   -  GO111MODULE=on |  | ||||||
| sudo: required |  | ||||||
| go: |  | ||||||
|   - "1.11.x" |  | ||||||
|   - tip |  | ||||||
| os: |  | ||||||
|   - linux |  | ||||||
| matrix: |  | ||||||
|   allow_failures: |  | ||||||
|     - go: tip |  | ||||||
|   fast_finish: true |  | ||||||
| script: |  | ||||||
|   - make check |  | ||||||
							
								
								
									
vendor/github.com/spf13/cast/LICENSE  (21 lines, generated, vendored)
									
									
								
							| @@ -1,21 +0,0 @@ | |||||||
| The MIT License (MIT) |  | ||||||
|  |  | ||||||
| Copyright (c) 2014 Steve Francia |  | ||||||
|  |  | ||||||
| Permission is hereby granted, free of charge, to any person obtaining a copy |  | ||||||
| of this software and associated documentation files (the "Software"), to deal |  | ||||||
| in the Software without restriction, including without limitation the rights |  | ||||||
| to use, copy, modify, merge, publish, distribute, sublicense, and/or sell |  | ||||||
| copies of the Software, and to permit persons to whom the Software is |  | ||||||
| furnished to do so, subject to the following conditions: |  | ||||||
|  |  | ||||||
| The above copyright notice and this permission notice shall be included in all |  | ||||||
| copies or substantial portions of the Software. |  | ||||||
|  |  | ||||||
| THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR |  | ||||||
| IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, |  | ||||||
| FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE |  | ||||||
| AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER |  | ||||||
| LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, |  | ||||||
| OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE |  | ||||||
| SOFTWARE. |  | ||||||
							
								
								
									
vendor/github.com/spf13/cast/Makefile  (38 lines, generated, vendored)
									
									
								
							| @@ -1,38 +0,0 @@ | |||||||
| # A Self-Documenting Makefile: http://marmelab.com/blog/2016/02/29/auto-documented-makefile.html |  | ||||||
|  |  | ||||||
| .PHONY: check fmt lint test test-race vet test-cover-html help |  | ||||||
| .DEFAULT_GOAL := help |  | ||||||
|  |  | ||||||
| check: test-race fmt vet lint ## Run tests and linters |  | ||||||
|  |  | ||||||
| test: ## Run tests |  | ||||||
| 	go test ./... |  | ||||||
|  |  | ||||||
| test-race: ## Run tests with race detector |  | ||||||
| 	go test -race ./... |  | ||||||
|  |  | ||||||
| fmt: ## Run gofmt linter |  | ||||||
| 	@for d in `go list` ; do \ |  | ||||||
| 		if [ "`gofmt -l -s $$GOPATH/src/$$d | tee /dev/stderr`" ]; then \ |  | ||||||
| 			echo "^ improperly formatted go files" && echo && exit 1; \ |  | ||||||
| 		fi \ |  | ||||||
| 	done |  | ||||||
|  |  | ||||||
| lint: ## Run golint linter |  | ||||||
| 	@for d in `go list` ; do \ |  | ||||||
| 		if [ "`golint $$d | tee /dev/stderr`" ]; then \ |  | ||||||
| 			echo "^ golint errors!" && echo && exit 1; \ |  | ||||||
| 		fi \ |  | ||||||
| 	done |  | ||||||
|  |  | ||||||
| vet: ## Run go vet linter |  | ||||||
| 	@if [ "`go vet | tee /dev/stderr`" ]; then \ |  | ||||||
| 		echo "^ go vet errors!" && echo && exit 1; \ |  | ||||||
| 	fi |  | ||||||
|  |  | ||||||
| test-cover-html: ## Generate test coverage report |  | ||||||
| 	go test -coverprofile=coverage.out -covermode=count |  | ||||||
| 	go tool cover -func=coverage.out |  | ||||||
|  |  | ||||||
| help: |  | ||||||
| 	@grep -E '^[a-zA-Z0-9_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' |  | ||||||
							
								
								
									
vendor/github.com/spf13/cast/README.md  (75 lines, generated, vendored)
									
									
								
							| @@ -1,75 +0,0 @@ | |||||||
| cast |  | ||||||
| ==== |  | ||||||
| [GoDoc](https://godoc.org/github.com/spf13/cast) |  | ||||||
| [Build Status](https://travis-ci.org/spf13/cast) |  | ||||||
| [Go Report Card](https://goreportcard.com/report/github.com/spf13/cast) |  | ||||||
|  |  | ||||||
| Easy and safe casting from one type to another in Go |  | ||||||
|  |  | ||||||
| Don’t Panic! ... Cast |  | ||||||
|  |  | ||||||
| ## What is Cast? |  | ||||||
|  |  | ||||||
| Cast is a library to convert between different go types in a consistent and easy way. |  | ||||||
|  |  | ||||||
| Cast provides simple functions to easily convert a number to a string, an |  | ||||||
| interface into a bool, etc. Cast does this intelligently when an obvious |  | ||||||
| conversion is possible. It doesn’t make any attempts to guess what you meant; |  | ||||||
| for example, you can only convert a string to an int when it is a string |  | ||||||
| representation of an int such as “8”. Cast was developed for use in |  | ||||||
| [Hugo](http://hugo.spf13.com), a website engine which uses YAML, TOML or JSON |  | ||||||
| for meta data. |  | ||||||
|  |  | ||||||
| ## Why use Cast? |  | ||||||
|  |  | ||||||
| When working with dynamic data in Go you often need to cast or convert the data |  | ||||||
| from one type into another. Cast goes beyond just using type assertion (though |  | ||||||
| it uses that when possible) to provide a very straightforward and convenient |  | ||||||
| library. |  | ||||||
|  |  | ||||||
| If you are working with interfaces to handle things like dynamic content |  | ||||||
| you’ll need an easy way to convert an interface into a given type. This |  | ||||||
| is the library for you. |  | ||||||
|  |  | ||||||
| If you are taking in data from YAML, TOML or JSON or other formats which lack |  | ||||||
| full types, then Cast is the library for you. |  | ||||||
|  |  | ||||||
| ## Usage |  | ||||||
|  |  | ||||||
| Cast provides a handful of To_____ methods. These methods will always return |  | ||||||
| the desired type. **If input is provided that will not convert to that type, the |  | ||||||
| 0 or nil value for that type will be returned**. |  | ||||||
|  |  | ||||||
| Cast also provides identical methods To_____E. These return the same result as |  | ||||||
| the To_____ methods, plus an additional error which tells you if it successfully |  | ||||||
| converted. Using these methods you can tell the difference between when the |  | ||||||
| input matched the zero value and when the conversion failed and the zero value |  | ||||||
| was returned. |  | ||||||
|  |  | ||||||
| The following examples are merely a sample of what is available. Please review |  | ||||||
| the code for a complete set. |  | ||||||
|  |  | ||||||
| ### Example ‘ToString’: |  | ||||||
|  |  | ||||||
|     cast.ToString("mayonegg")         // "mayonegg" |  | ||||||
|     cast.ToString(8)                  // "8" |  | ||||||
|     cast.ToString(8.31)               // "8.31" |  | ||||||
|     cast.ToString([]byte("one time")) // "one time" |  | ||||||
|     cast.ToString(nil)                // "" |  | ||||||
|  |  | ||||||
| 	var foo interface{} = "one more time" |  | ||||||
|     cast.ToString(foo)                // "one more time" |  | ||||||
|  |  | ||||||
|  |  | ||||||
| ### Example ‘ToInt’: |  | ||||||
|  |  | ||||||
|     cast.ToInt(8)                  // 8 |  | ||||||
|     cast.ToInt(8.31)               // 8 |  | ||||||
|     cast.ToInt("8")                // 8 |  | ||||||
|     cast.ToInt(true)               // 1 |  | ||||||
|     cast.ToInt(false)              // 0 |  | ||||||
|  |  | ||||||
| 	var eight interface{} = 8 |  | ||||||
|     cast.ToInt(eight)              // 8 |  | ||||||
|     cast.ToInt(nil)                // 0 |  | ||||||
|  |  | ||||||
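
The To_____E variants described above are what you reach for when you need to distinguish a failed conversion from a genuine zero value. A minimal sketch (ToIntE belongs to the same package; the ToInt wrapper in cast.go below calls it):

```go
package main

import (
	"fmt"

	"github.com/spf13/cast"
)

func main() {
	// The E variant reports whether the conversion actually succeeded.
	n, err := cast.ToIntE("8")
	fmt.Println(n, err) // 8 <nil>

	n, err = cast.ToIntE("not a number")
	fmt.Println(n, err) // 0 and a non-nil error

	// The plain variant hides the failure behind the zero value.
	fmt.Println(cast.ToInt("not a number")) // 0
}
```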
							
								
								
									
vendor/github.com/spf13/cast/cast.go  (171 lines, generated, vendored)
									
									
								
							| @@ -1,171 +0,0 @@ | |||||||
| // Copyright © 2014 Steve Francia <spf@spf13.com>. |  | ||||||
| // |  | ||||||
| // Use of this source code is governed by an MIT-style |  | ||||||
| // license that can be found in the LICENSE file. |  | ||||||
|  |  | ||||||
| // Package cast provides easy and safe casting in Go. |  | ||||||
| package cast |  | ||||||
|  |  | ||||||
| import "time" |  | ||||||
|  |  | ||||||
| // ToBool casts an interface to a bool type. |  | ||||||
| func ToBool(i interface{}) bool { |  | ||||||
| 	v, _ := ToBoolE(i) |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ToTime casts an interface to a time.Time type. |  | ||||||
| func ToTime(i interface{}) time.Time { |  | ||||||
| 	v, _ := ToTimeE(i) |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ToDuration casts an interface to a time.Duration type. |  | ||||||
| func ToDuration(i interface{}) time.Duration { |  | ||||||
| 	v, _ := ToDurationE(i) |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ToFloat64 casts an interface to a float64 type. |  | ||||||
| func ToFloat64(i interface{}) float64 { |  | ||||||
| 	v, _ := ToFloat64E(i) |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ToFloat32 casts an interface to a float32 type. |  | ||||||
| func ToFloat32(i interface{}) float32 { |  | ||||||
| 	v, _ := ToFloat32E(i) |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ToInt64 casts an interface to an int64 type. |  | ||||||
| func ToInt64(i interface{}) int64 { |  | ||||||
| 	v, _ := ToInt64E(i) |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ToInt32 casts an interface to an int32 type. |  | ||||||
| func ToInt32(i interface{}) int32 { |  | ||||||
| 	v, _ := ToInt32E(i) |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ToInt16 casts an interface to an int16 type. |  | ||||||
| func ToInt16(i interface{}) int16 { |  | ||||||
| 	v, _ := ToInt16E(i) |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ToInt8 casts an interface to an int8 type. |  | ||||||
| func ToInt8(i interface{}) int8 { |  | ||||||
| 	v, _ := ToInt8E(i) |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ToInt casts an interface to an int type. |  | ||||||
| func ToInt(i interface{}) int { |  | ||||||
| 	v, _ := ToIntE(i) |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ToUint casts an interface to a uint type. |  | ||||||
| func ToUint(i interface{}) uint { |  | ||||||
| 	v, _ := ToUintE(i) |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ToUint64 casts an interface to a uint64 type. |  | ||||||
| func ToUint64(i interface{}) uint64 { |  | ||||||
| 	v, _ := ToUint64E(i) |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ToUint32 casts an interface to a uint32 type. |  | ||||||
| func ToUint32(i interface{}) uint32 { |  | ||||||
| 	v, _ := ToUint32E(i) |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ToUint16 casts an interface to a uint16 type. |  | ||||||
| func ToUint16(i interface{}) uint16 { |  | ||||||
| 	v, _ := ToUint16E(i) |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ToUint8 casts an interface to a uint8 type. |  | ||||||
| func ToUint8(i interface{}) uint8 { |  | ||||||
| 	v, _ := ToUint8E(i) |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ToString casts an interface to a string type. |  | ||||||
| func ToString(i interface{}) string { |  | ||||||
| 	v, _ := ToStringE(i) |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ToStringMapString casts an interface to a map[string]string type. |  | ||||||
| func ToStringMapString(i interface{}) map[string]string { |  | ||||||
| 	v, _ := ToStringMapStringE(i) |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ToStringMapStringSlice casts an interface to a map[string][]string type. |  | ||||||
| func ToStringMapStringSlice(i interface{}) map[string][]string { |  | ||||||
| 	v, _ := ToStringMapStringSliceE(i) |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ToStringMapBool casts an interface to a map[string]bool type. |  | ||||||
| func ToStringMapBool(i interface{}) map[string]bool { |  | ||||||
| 	v, _ := ToStringMapBoolE(i) |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ToStringMapInt casts an interface to a map[string]int type. |  | ||||||
| func ToStringMapInt(i interface{}) map[string]int { |  | ||||||
| 	v, _ := ToStringMapIntE(i) |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ToStringMapInt64 casts an interface to a map[string]int64 type. |  | ||||||
| func ToStringMapInt64(i interface{}) map[string]int64 { |  | ||||||
| 	v, _ := ToStringMapInt64E(i) |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ToStringMap casts an interface to a map[string]interface{} type. |  | ||||||
| func ToStringMap(i interface{}) map[string]interface{} { |  | ||||||
| 	v, _ := ToStringMapE(i) |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ToSlice casts an interface to a []interface{} type. |  | ||||||
| func ToSlice(i interface{}) []interface{} { |  | ||||||
| 	v, _ := ToSliceE(i) |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ToBoolSlice casts an interface to a []bool type. |  | ||||||
| func ToBoolSlice(i interface{}) []bool { |  | ||||||
| 	v, _ := ToBoolSliceE(i) |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ToStringSlice casts an interface to a []string type. |  | ||||||
| func ToStringSlice(i interface{}) []string { |  | ||||||
| 	v, _ := ToStringSliceE(i) |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ToIntSlice casts an interface to a []int type. |  | ||||||
| func ToIntSlice(i interface{}) []int { |  | ||||||
| 	v, _ := ToIntSliceE(i) |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // ToDurationSlice casts an interface to a []time.Duration type. |  | ||||||
| func ToDurationSlice(i interface{}) []time.Duration { |  | ||||||
| 	v, _ := ToDurationSliceE(i) |  | ||||||
| 	return v |  | ||||||
| } |  | ||||||
							
								
								
									
vendor/github.com/spf13/cast/caste.go  (1249 lines, generated, vendored)
(File diff suppressed because it is too large)
											
										
									
								
							
							
								
								
									
vendor/github.com/spf13/cast/go.mod  (7 lines, generated, vendored)
									
									
								
							| @@ -1,7 +0,0 @@ | |||||||
| module github.com/spf13/cast |  | ||||||
|  |  | ||||||
| require ( |  | ||||||
| 	github.com/davecgh/go-spew v1.1.1 // indirect |  | ||||||
| 	github.com/pmezard/go-difflib v1.0.0 // indirect |  | ||||||
| 	github.com/stretchr/testify v1.2.2 |  | ||||||
| ) |  | ||||||
							
								
								
									
vendor/github.com/spf13/cast/go.sum  (6 lines, generated, vendored)
									
									
								
							| @@ -1,6 +0,0 @@ | |||||||
| github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= |  | ||||||
| github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= |  | ||||||
| github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= |  | ||||||
| github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= |  | ||||||
| github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w= |  | ||||||
| github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= |  | ||||||
							
								
								
									
vendor/github.com/spf13/jwalterweatherman/.gitignore  (24 lines, generated, vendored)
									
									
								
							| @@ -1,24 +0,0 @@ | |||||||
| # Compiled Object files, Static and Dynamic libs (Shared Objects) |  | ||||||
| *.o |  | ||||||
| *.a |  | ||||||
| *.so |  | ||||||
|  |  | ||||||
| # Folders |  | ||||||
| _obj |  | ||||||
| _test |  | ||||||
|  |  | ||||||
| # Architecture specific extensions/prefixes |  | ||||||
| *.[568vq] |  | ||||||
| [568vq].out |  | ||||||
|  |  | ||||||
| *.cgo1.go |  | ||||||
| *.cgo2.c |  | ||||||
| _cgo_defun.c |  | ||||||
| _cgo_gotypes.go |  | ||||||
| _cgo_export.* |  | ||||||
|  |  | ||||||
| _testmain.go |  | ||||||
|  |  | ||||||
| *.exe |  | ||||||
| *.bench |  | ||||||
| go.sum |  | ||||||
							
								
								
									
vendor/github.com/spf13/jwalterweatherman/LICENSE  (21 lines, generated, vendored)
									
									
								
							| @@ -1,21 +0,0 @@ | |||||||
| The MIT License (MIT) |  | ||||||
|  |  | ||||||
| Copyright (c) 2014 Steve Francia |  | ||||||
|  |  | ||||||
| Permission is hereby granted, free of charge, to any person obtaining a copy |  | ||||||
| of this software and associated documentation files (the "Software"), to deal |  | ||||||
| in the Software without restriction, including without limitation the rights |  | ||||||
| to use, copy, modify, merge, publish, distribute, sublicense, and/or sell |  | ||||||
| copies of the Software, and to permit persons to whom the Software is |  | ||||||
| furnished to do so, subject to the following conditions: |  | ||||||
|  |  | ||||||
| The above copyright notice and this permission notice shall be included in all |  | ||||||
| copies or substantial portions of the Software. |  | ||||||
|  |  | ||||||
| THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR |  | ||||||
| IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, |  | ||||||
| FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE |  | ||||||
| AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER |  | ||||||
| LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, |  | ||||||
| OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE |  | ||||||
| SOFTWARE. |  | ||||||
							
								
								
									
vendor/github.com/spf13/jwalterweatherman/README.md  (148 lines, generated, vendored)
									
									
								
							| @@ -1,148 +0,0 @@ | |||||||
| jWalterWeatherman |  | ||||||
| ================= |  | ||||||
|  |  | ||||||
| Seamless printing to the terminal (stdout) and logging to an io.Writer |  | ||||||
| (file) that’s as easy to use as fmt.Println. |  | ||||||
|  |  | ||||||
|  |  | ||||||
| Graphic by [JonnyEtc](http://jonnyetc.deviantart.com/art/And-That-s-Why-You-Always-Leave-a-Note-315311422) |  | ||||||
|  |  | ||||||
| JWW is primarily a wrapper around the excellent standard log library. It |  | ||||||
| provides a few advantages over using the standard log library alone. |  | ||||||
|  |  | ||||||
| 1. Ready to go out of the box.  |  | ||||||
| 2. One library for both printing to the terminal and logging (to files). |  | ||||||
| 3. Really easy to log to either a temp file or a file you specify. |  | ||||||
|  |  | ||||||
|  |  | ||||||
| I really wanted a very straightforward library that could seamlessly do |  | ||||||
| the following things. |  | ||||||
|  |  | ||||||
| 1. Replace all the println, printf, etc. statements throughout my code with |  | ||||||
|    something more useful |  | ||||||
| 2. Allow the user to easily control what levels are printed to stdout |  | ||||||
| 3. Allow the user to easily control what levels are logged |  | ||||||
| 4. Provide an easy mechanism (like fmt.Println) to print info to the user |  | ||||||
|    which can be easily logged as well  |  | ||||||
| 5. Thanks to 2 & 3, provide an easy verbose mode for output and logs |  | ||||||
| 6. Not have any unnecessary initialization cruft. Just use it. |  | ||||||
|  |  | ||||||
| # Usage |  | ||||||
|  |  | ||||||
| ## Step 1. Use it |  | ||||||
| Put calls throughout your source based on type of feedback. |  | ||||||
| No initialization or setup needs to happen. Just start calling things. |  | ||||||
|  |  | ||||||
| Available Loggers are: |  | ||||||
|  |  | ||||||
|  * TRACE |  | ||||||
|  * DEBUG |  | ||||||
|  * INFO |  | ||||||
|  * WARN |  | ||||||
|  * ERROR |  | ||||||
|  * CRITICAL |  | ||||||
|  * FATAL |  | ||||||
|  |  | ||||||
| These are each loggers based on the standard library log package and follow the |  | ||||||
| standard usage. E.g.: |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
|     import ( |  | ||||||
|         jww "github.com/spf13/jwalterweatherman" |  | ||||||
|     ) |  | ||||||
|  |  | ||||||
|     ... |  | ||||||
|  |  | ||||||
|     if err != nil { |  | ||||||
|  |  | ||||||
|         // This is a pretty serious error and the user should know about |  | ||||||
|         // it. It will be printed to the terminal as well as logged under the |  | ||||||
|         // default thresholds. |  | ||||||
|  |  | ||||||
|         jww.ERROR.Println(err) |  | ||||||
|     } |  | ||||||
|  |  | ||||||
|     if err2 != nil { |  | ||||||
|         // This error isn’t going to materially change the behavior of the |  | ||||||
|         // application, but it’s something that may not be what the user |  | ||||||
|         // expects. Under the default thresholds, Warn will be logged, but |  | ||||||
|         // not printed to the terminal.  |  | ||||||
|  |  | ||||||
|         jww.WARN.Println(err2) |  | ||||||
|     } |  | ||||||
|  |  | ||||||
|     // Information that’s relevant to what’s happening, but not very |  | ||||||
|     // important for the user. Under the default thresholds this will be |  | ||||||
|     // discarded. |  | ||||||
|  |  | ||||||
|     jww.INFO.Printf("information %q", response) |  | ||||||
|  |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| NOTE: You can also use the library in a non-global setting by creating an instance of a Notepad: |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| notepad := jww.NewNotepad(jww.LevelInfo, jww.LevelTrace, os.Stdout, ioutil.Discard, "", log.Ldate|log.Ltime) |  | ||||||
| notepad.WARN.Println("Some warning") |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| _Why 7 levels?_ |  | ||||||
|  |  | ||||||
| Maybe you think that 7 levels are too much for any application... and you |  | ||||||
| are probably correct. Just because there are seven levels doesn’t mean |  | ||||||
| that you should be using all 7 levels. Pick the right set for your needs. |  | ||||||
| Remember they only have to mean something to your project. |  | ||||||
|  |  | ||||||
| ## Step 2. Optionally configure JWW |  | ||||||
|  |  | ||||||
| Under the default thresholds : |  | ||||||
|  |  | ||||||
|  * Debug, Trace & Info go to /dev/null |  | ||||||
|  * Warn and above is logged (when a log file/io.Writer is provided) |  | ||||||
|  * Error and above is printed to the terminal (stdout) |  | ||||||
|  |  | ||||||
| ### Changing the thresholds |  | ||||||
|  |  | ||||||
| The threshold can be changed at any time, but will only affect calls that |  | ||||||
| execute after the change was made. |  | ||||||
|  |  | ||||||
| This is very useful if your application has a verbose mode. Of course you |  | ||||||
| can decide what verbose means to you or even have multiple levels of |  | ||||||
| verbosity. |  | ||||||
|  |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
|     import ( |  | ||||||
|         jww "github.com/spf13/jwalterweatherman" |  | ||||||
|     ) |  | ||||||
|  |  | ||||||
|     if Verbose { |  | ||||||
|         jww.SetLogThreshold(jww.LevelTrace) |  | ||||||
|         jww.SetStdoutThreshold(jww.LevelInfo) |  | ||||||
|     } |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| Note that JWW's own internal output uses log levels as well, so set the log |  | ||||||
| level before making any other calls if you want to see what it's up to. |  | ||||||
|  |  | ||||||
|  |  | ||||||
| ### Setting a log file |  | ||||||
|  |  | ||||||
| JWW can log to any `io.Writer`: |  | ||||||
|  |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
|  |  | ||||||
|     jww.SetLogOutput(customWriter)  |  | ||||||
|  |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
|  |  | ||||||
| # More information |  | ||||||
|  |  | ||||||
| This is an early release. I’ve been using it for a while and this is the |  | ||||||
| third interface I’ve tried. I like this one pretty well, but no guarantees |  | ||||||
| that it won’t change a bit. |  | ||||||
|  |  | ||||||
| I wrote this for use in [hugo](https://gohugo.io). If you are looking |  | ||||||
| for a static website engine that’s super fast please checkout Hugo. |  | ||||||
							
								
								
									
vendor/github.com/spf13/jwalterweatherman/default_notepad.go  (111 lines, generated, vendored)
									
									
								
							| @@ -1,111 +0,0 @@ | |||||||
| // Copyright © 2016 Steve Francia <spf@spf13.com>. |  | ||||||
| // |  | ||||||
| // Use of this source code is governed by an MIT-style |  | ||||||
| // license that can be found in the LICENSE file. |  | ||||||
|  |  | ||||||
| package jwalterweatherman |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"io" |  | ||||||
| 	"io/ioutil" |  | ||||||
| 	"log" |  | ||||||
| 	"os" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| var ( |  | ||||||
| 	TRACE    *log.Logger |  | ||||||
| 	DEBUG    *log.Logger |  | ||||||
| 	INFO     *log.Logger |  | ||||||
| 	WARN     *log.Logger |  | ||||||
| 	ERROR    *log.Logger |  | ||||||
| 	CRITICAL *log.Logger |  | ||||||
| 	FATAL    *log.Logger |  | ||||||
|  |  | ||||||
| 	LOG      *log.Logger |  | ||||||
| 	FEEDBACK *Feedback |  | ||||||
|  |  | ||||||
| 	defaultNotepad *Notepad |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| func reloadDefaultNotepad() { |  | ||||||
| 	TRACE = defaultNotepad.TRACE |  | ||||||
| 	DEBUG = defaultNotepad.DEBUG |  | ||||||
| 	INFO = defaultNotepad.INFO |  | ||||||
| 	WARN = defaultNotepad.WARN |  | ||||||
| 	ERROR = defaultNotepad.ERROR |  | ||||||
| 	CRITICAL = defaultNotepad.CRITICAL |  | ||||||
| 	FATAL = defaultNotepad.FATAL |  | ||||||
|  |  | ||||||
| 	LOG = defaultNotepad.LOG |  | ||||||
| 	FEEDBACK = defaultNotepad.FEEDBACK |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func init() { |  | ||||||
| 	defaultNotepad = NewNotepad(LevelError, LevelWarn, os.Stdout, ioutil.Discard, "", log.Ldate|log.Ltime) |  | ||||||
| 	reloadDefaultNotepad() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // SetLogThreshold sets the log threshold for the default notepad. Warn by default. |  | ||||||
| func SetLogThreshold(threshold Threshold) { |  | ||||||
| 	defaultNotepad.SetLogThreshold(threshold) |  | ||||||
| 	reloadDefaultNotepad() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // SetLogOutput sets the log output for the default notepad. Discarded by default. |  | ||||||
| func SetLogOutput(handle io.Writer) { |  | ||||||
| 	defaultNotepad.SetLogOutput(handle) |  | ||||||
| 	reloadDefaultNotepad() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // SetStdoutThreshold sets the standard output threshold for the default notepad. |  | ||||||
| // Error by default. |  | ||||||
| func SetStdoutThreshold(threshold Threshold) { |  | ||||||
| 	defaultNotepad.SetStdoutThreshold(threshold) |  | ||||||
| 	reloadDefaultNotepad() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // SetStdoutOutput sets the stdout output for the default notepad. Default is stdout. |  | ||||||
| func SetStdoutOutput(handle io.Writer) { |  | ||||||
| 	defaultNotepad.outHandle = handle |  | ||||||
| 	defaultNotepad.init() |  | ||||||
| 	reloadDefaultNotepad() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // SetPrefix sets the prefix for the default logger. Empty by default. |  | ||||||
| func SetPrefix(prefix string) { |  | ||||||
| 	defaultNotepad.SetPrefix(prefix) |  | ||||||
| 	reloadDefaultNotepad() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // SetFlags sets the flags for the default logger. "log.Ldate | log.Ltime" by default. |  | ||||||
| func SetFlags(flags int) { |  | ||||||
| 	defaultNotepad.SetFlags(flags) |  | ||||||
| 	reloadDefaultNotepad() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // SetLogListeners configures the default logger with one or more log listeners. |  | ||||||
| func SetLogListeners(l ...LogListener) { |  | ||||||
| 	defaultNotepad.logListeners = l |  | ||||||
| 	defaultNotepad.init() |  | ||||||
| 	reloadDefaultNotepad() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // LogThreshold returns the current global log threshold. |  | ||||||
| func LogThreshold() Threshold { |  | ||||||
| 	return defaultNotepad.logThreshold |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // StdoutThreshold returns the current global stdout threshold. |  | ||||||
| func StdoutThreshold() Threshold { |  | ||||||
| 	return defaultNotepad.stdoutThreshold |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // GetLogThreshold returns the Threshold for the log logger. |  | ||||||
| func GetLogThreshold() Threshold { |  | ||||||
| 	return defaultNotepad.GetLogThreshold() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // GetStdoutThreshold returns the Threshold for the stdout logger. |  | ||||||
| func GetStdoutThreshold() Threshold { |  | ||||||
| 	return defaultNotepad.GetStdoutThreshold() |  | ||||||
| } |  | ||||||
							
								
								
									
vendor/github.com/spf13/jwalterweatherman/go.mod  (7 lines, generated, vendored)
									
									
								
							| @@ -1,7 +0,0 @@ | |||||||
| module github.com/spf13/jwalterweatherman |  | ||||||
|  |  | ||||||
| require ( |  | ||||||
| 	github.com/davecgh/go-spew v1.1.1 // indirect |  | ||||||
| 	github.com/pmezard/go-difflib v1.0.0 // indirect |  | ||||||
| 	github.com/stretchr/testify v1.2.2 |  | ||||||
| ) |  | ||||||
							
								
								
									
vendor/github.com/spf13/jwalterweatherman/log_counter.go  (46 lines, generated, vendored)
									
									
								
							| @@ -1,46 +0,0 @@ | |||||||
| // Copyright © 2016 Steve Francia <spf@spf13.com>. |  | ||||||
| // |  | ||||||
| // Use of this source code is governed by an MIT-style |  | ||||||
| // license that can be found in the LICENSE file. |  | ||||||
|  |  | ||||||
| package jwalterweatherman |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"io" |  | ||||||
| 	"sync/atomic" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| // Counter is an io.Writer that increments a counter on Write. |  | ||||||
| type Counter struct { |  | ||||||
| 	count uint64 |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (c *Counter) incr() { |  | ||||||
| 	atomic.AddUint64(&c.count, 1) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Reset resets the counter. |  | ||||||
| func (c *Counter) Reset() { |  | ||||||
| 	atomic.StoreUint64(&c.count, 0) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Count returns the current count. |  | ||||||
| func (c *Counter) Count() uint64 { |  | ||||||
| 	return atomic.LoadUint64(&c.count) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (c *Counter) Write(p []byte) (n int, err error) { |  | ||||||
| 	c.incr() |  | ||||||
| 	return len(p), nil |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // LogCounter creates a LogListener that counts log statements >= the given threshold. |  | ||||||
| func LogCounter(counter *Counter, t1 Threshold) LogListener { |  | ||||||
| 	return func(t2 Threshold) io.Writer { |  | ||||||
| 		if t2 < t1 { |  | ||||||
| 			// Not interested in this threshold. |  | ||||||
| 			return nil |  | ||||||
| 		} |  | ||||||
| 		return counter |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
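
As a rough sketch of how Counter, LogCounter and SetLogListeners compose with the default notepad from default_notepad.go above, the following counts ERROR-and-above statements without changing what gets printed or logged:

```go
package main

import (
	"fmt"

	jww "github.com/spf13/jwalterweatherman"
)

func main() {
	// Attach a counting listener to the global notepad. Listeners see every
	// log event at or above their threshold, even ones that are discarded.
	counter := &jww.Counter{}
	jww.SetLogListeners(jww.LogCounter(counter, jww.LevelError))

	jww.ERROR.Println("something went wrong")          // counted (and printed under default thresholds)
	jww.WARN.Println("below the counted threshold")     // not counted

	fmt.Println(counter.Count()) // 1
}
```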
							
								
								
									
vendor/github.com/spf13/jwalterweatherman/notepad.go  (225 lines, generated, vendored)
									
									
								
							| @@ -1,225 +0,0 @@ | |||||||
| // Copyright © 2016 Steve Francia <spf@spf13.com>. |  | ||||||
| // |  | ||||||
| // Use of this source code is governed by an MIT-style |  | ||||||
| // license that can be found in the LICENSE file. |  | ||||||
|  |  | ||||||
| package jwalterweatherman |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"fmt" |  | ||||||
| 	"io" |  | ||||||
| 	"io/ioutil" |  | ||||||
| 	"log" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| type Threshold int |  | ||||||
|  |  | ||||||
| func (t Threshold) String() string { |  | ||||||
| 	return prefixes[t] |  | ||||||
| } |  | ||||||
|  |  | ||||||
| const ( |  | ||||||
| 	LevelTrace Threshold = iota |  | ||||||
| 	LevelDebug |  | ||||||
| 	LevelInfo |  | ||||||
| 	LevelWarn |  | ||||||
| 	LevelError |  | ||||||
| 	LevelCritical |  | ||||||
| 	LevelFatal |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| var prefixes map[Threshold]string = map[Threshold]string{ |  | ||||||
| 	LevelTrace:    "TRACE", |  | ||||||
| 	LevelDebug:    "DEBUG", |  | ||||||
| 	LevelInfo:     "INFO", |  | ||||||
| 	LevelWarn:     "WARN", |  | ||||||
| 	LevelError:    "ERROR", |  | ||||||
| 	LevelCritical: "CRITICAL", |  | ||||||
| 	LevelFatal:    "FATAL", |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Notepad is where you leave a note! |  | ||||||
| type Notepad struct { |  | ||||||
| 	TRACE    *log.Logger |  | ||||||
| 	DEBUG    *log.Logger |  | ||||||
| 	INFO     *log.Logger |  | ||||||
| 	WARN     *log.Logger |  | ||||||
| 	ERROR    *log.Logger |  | ||||||
| 	CRITICAL *log.Logger |  | ||||||
| 	FATAL    *log.Logger |  | ||||||
|  |  | ||||||
| 	LOG      *log.Logger |  | ||||||
| 	FEEDBACK *Feedback |  | ||||||
|  |  | ||||||
| 	loggers         [7]**log.Logger |  | ||||||
| 	logHandle       io.Writer |  | ||||||
| 	outHandle       io.Writer |  | ||||||
| 	logThreshold    Threshold |  | ||||||
| 	stdoutThreshold Threshold |  | ||||||
| 	prefix          string |  | ||||||
| 	flags           int |  | ||||||
|  |  | ||||||
| 	logListeners []LogListener |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // A LogListener can be supplied to a Notepad to listen on log writes for a given |  | ||||||
| // threshold. This can be used to capture log events in unit tests and similar. |  | ||||||
| // Note that this function will be invoked once for each log threshold. If |  | ||||||
| // the given threshold is not of interest to you, return nil. |  | ||||||
| // Note that these listeners will receive log events for a given threshold, even |  | ||||||
| // if the current configuration says not to log it. That way you can count ERRORs even |  | ||||||
| // if you don't print them to the console. |  | ||||||
| type LogListener func(t Threshold) io.Writer |  | ||||||
|  |  | ||||||
| // NewNotepad creates a new Notepad. |  | ||||||
| func NewNotepad( |  | ||||||
| 	outThreshold Threshold, |  | ||||||
| 	logThreshold Threshold, |  | ||||||
| 	outHandle, logHandle io.Writer, |  | ||||||
| 	prefix string, flags int, |  | ||||||
| 	logListeners ...LogListener, |  | ||||||
| ) *Notepad { |  | ||||||
|  |  | ||||||
| 	n := &Notepad{logListeners: logListeners} |  | ||||||
|  |  | ||||||
| 	n.loggers = [7]**log.Logger{&n.TRACE, &n.DEBUG, &n.INFO, &n.WARN, &n.ERROR, &n.CRITICAL, &n.FATAL} |  | ||||||
| 	n.outHandle = outHandle |  | ||||||
| 	n.logHandle = logHandle |  | ||||||
| 	n.stdoutThreshold = outThreshold |  | ||||||
| 	n.logThreshold = logThreshold |  | ||||||
|  |  | ||||||
| 	if len(prefix) != 0 { |  | ||||||
| 		n.prefix = "[" + prefix + "] " |  | ||||||
| 	} else { |  | ||||||
| 		n.prefix = "" |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	n.flags = flags |  | ||||||
|  |  | ||||||
| 	n.LOG = log.New(n.logHandle, |  | ||||||
| 		"LOG:   ", |  | ||||||
| 		n.flags) |  | ||||||
| 	n.FEEDBACK = &Feedback{out: log.New(outHandle, "", 0), log: n.LOG} |  | ||||||
|  |  | ||||||
| 	n.init() |  | ||||||
| 	return n |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // init creates the loggers for each level depending on the notepad thresholds. |  | ||||||
| func (n *Notepad) init() { |  | ||||||
| 	logAndOut := io.MultiWriter(n.outHandle, n.logHandle) |  | ||||||
|  |  | ||||||
| 	for t, logger := range n.loggers { |  | ||||||
| 		threshold := Threshold(t) |  | ||||||
| 		prefix := n.prefix + threshold.String() + " " |  | ||||||
|  |  | ||||||
| 		switch { |  | ||||||
| 		case threshold >= n.logThreshold && threshold >= n.stdoutThreshold: |  | ||||||
| 			*logger = log.New(n.createLogWriters(threshold, logAndOut), prefix, n.flags) |  | ||||||
|  |  | ||||||
| 		case threshold >= n.logThreshold: |  | ||||||
| 			*logger = log.New(n.createLogWriters(threshold, n.logHandle), prefix, n.flags) |  | ||||||
|  |  | ||||||
| 		case threshold >= n.stdoutThreshold: |  | ||||||
| 			*logger = log.New(n.createLogWriters(threshold, n.outHandle), prefix, n.flags) |  | ||||||
|  |  | ||||||
| 		default: |  | ||||||
| 			*logger = log.New(n.createLogWriters(threshold, ioutil.Discard), prefix, n.flags) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (n *Notepad) createLogWriters(t Threshold, handle io.Writer) io.Writer { |  | ||||||
| 	if len(n.logListeners) == 0 { |  | ||||||
| 		return handle |  | ||||||
| 	} |  | ||||||
| 	writers := []io.Writer{handle} |  | ||||||
| 	for _, l := range n.logListeners { |  | ||||||
| 		w := l(t) |  | ||||||
| 		if w != nil { |  | ||||||
| 			writers = append(writers, w) |  | ||||||
| 		} |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	if len(writers) == 1 { |  | ||||||
| 		return handle |  | ||||||
| 	} |  | ||||||
|  |  | ||||||
| 	return io.MultiWriter(writers...) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // SetLogThreshold changes the threshold at or above which messages are written |  | ||||||
| // to the log file. |  | ||||||
| func (n *Notepad) SetLogThreshold(threshold Threshold) { |  | ||||||
| 	n.logThreshold = threshold |  | ||||||
| 	n.init() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // SetLogOutput changes the file where log messages are written. |  | ||||||
| func (n *Notepad) SetLogOutput(handle io.Writer) { |  | ||||||
| 	n.logHandle = handle |  | ||||||
| 	n.init() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // GetLogThreshold returns the Threshold for the log logger. |  | ||||||
| func (n *Notepad) GetLogThreshold() Threshold { |  | ||||||
| 	return n.logThreshold |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // SetStdoutThreshold changes the threshold at or above which messages are written |  | ||||||
| // to the standard output. |  | ||||||
| func (n *Notepad) SetStdoutThreshold(threshold Threshold) { |  | ||||||
| 	n.stdoutThreshold = threshold |  | ||||||
| 	n.init() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // GetStdoutThreshold returns the Threshold for the stdout logger. |  | ||||||
| func (n *Notepad) GetStdoutThreshold() Threshold { |  | ||||||
| 	return n.stdoutThreshold |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // SetPrefix changes the prefix used by the notepad. Prefixes are displayed between |  | ||||||
| // brackets at the beginning of the line. An empty prefix won't be displayed at all. |  | ||||||
| func (n *Notepad) SetPrefix(prefix string) { |  | ||||||
| 	if len(prefix) != 0 { |  | ||||||
| 		n.prefix = "[" + prefix + "] " |  | ||||||
| 	} else { |  | ||||||
| 		n.prefix = "" |  | ||||||
| 	} |  | ||||||
| 	n.init() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // SetFlags chooses which flags the logger will display (after prefix and message |  | ||||||
| // level). See the standard log package for more information on this. |  | ||||||
| func (n *Notepad) SetFlags(flags int) { |  | ||||||
| 	n.flags = flags |  | ||||||
| 	n.init() |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Feedback writes plainly to the outHandle while |  | ||||||
| // logging with the standard extra information (date, file, etc). |  | ||||||
| type Feedback struct { |  | ||||||
| 	out *log.Logger |  | ||||||
| 	log *log.Logger |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (fb *Feedback) Println(v ...interface{}) { |  | ||||||
| 	fb.output(fmt.Sprintln(v...)) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (fb *Feedback) Printf(format string, v ...interface{}) { |  | ||||||
| 	fb.output(fmt.Sprintf(format, v...)) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (fb *Feedback) Print(v ...interface{}) { |  | ||||||
| 	fb.output(fmt.Sprint(v...)) |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (fb *Feedback) output(s string) { |  | ||||||
| 	if fb.out != nil { |  | ||||||
| 		fb.out.Output(2, s) |  | ||||||
| 	} |  | ||||||
| 	if fb.log != nil { |  | ||||||
| 		fb.log.Output(2, s) |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
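
The same LogListener hook also takes custom functions: any func returning an io.Writer per threshold can be passed to NewNotepad. A hedged sketch that captures ERROR output from a dedicated Notepad into a buffer (names like errBuf and the "demo" prefix are purely illustrative):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"io/ioutil"
	"log"
	"os"

	jww "github.com/spf13/jwalterweatherman"
)

func main() {
	// Collect everything at ERROR or above in a buffer, independently of
	// where the notepad itself prints or logs.
	var errBuf bytes.Buffer
	listener := func(t jww.Threshold) io.Writer {
		if t < jww.LevelError {
			return nil // not interested in lower thresholds
		}
		return &errBuf
	}

	n := jww.NewNotepad(jww.LevelInfo, jww.LevelWarn, os.Stdout, ioutil.Discard, "demo", log.Ldate|log.Ltime, listener)
	n.ERROR.Println("disk full")       // goes to stdout, the (discarded) log, and errBuf
	n.INFO.Println("routine message")  // goes to stdout only

	fmt.Print(errBuf.String()) // only the ERROR line ends up in the buffer
}
```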
							
								
								
									
vendor/github.com/spf13/viper/.editorconfig  (15 lines, generated, vendored)
									
									
								
							| @@ -1,15 +0,0 @@ | |||||||
| root = true |  | ||||||
|  |  | ||||||
| [*] |  | ||||||
| charset = utf-8 |  | ||||||
| end_of_line = lf |  | ||||||
| indent_size = 4 |  | ||||||
| indent_style = space |  | ||||||
| insert_final_newline = true |  | ||||||
| trim_trailing_whitespace = true |  | ||||||
|  |  | ||||||
| [*.go] |  | ||||||
| indent_style = tab |  | ||||||
|  |  | ||||||
| [{Makefile, *.mk}] |  | ||||||
| indent_style = tab |  | ||||||
							
								
								
									
vendor/github.com/spf13/viper/.gitignore  (5 lines, generated, vendored)
									
									
								
							| @@ -1,5 +0,0 @@ | |||||||
| /.idea/ |  | ||||||
| /bin/ |  | ||||||
| /build/ |  | ||||||
| /var/ |  | ||||||
| /vendor/ |  | ||||||
							
								
								
									
vendor/github.com/spf13/viper/.golangci.yml  (27 lines, generated, vendored)
									
									
								
							| @@ -1,27 +0,0 @@ | |||||||
| linters-settings: |  | ||||||
|     golint: |  | ||||||
|         min-confidence: 0.1 |  | ||||||
|     goimports: |  | ||||||
|         local-prefixes: github.com/spf13/viper |  | ||||||
|  |  | ||||||
| linters: |  | ||||||
|     enable-all: true |  | ||||||
|     disable: |  | ||||||
|         - funlen |  | ||||||
|         - maligned |  | ||||||
|  |  | ||||||
|         # TODO: fix me |  | ||||||
|         - wsl |  | ||||||
|         - gochecknoinits |  | ||||||
|         - gosimple |  | ||||||
|         - gochecknoglobals |  | ||||||
|         - errcheck |  | ||||||
|         - lll |  | ||||||
|         - godox |  | ||||||
|         - scopelint |  | ||||||
|         - gocyclo |  | ||||||
|         - gocognit |  | ||||||
|         - gocritic |  | ||||||
|  |  | ||||||
| service: |  | ||||||
|     golangci-lint-version: 1.21.x |  | ||||||
							
								
								
									
vendor/github.com/spf13/viper/LICENSE  (21 lines, generated, vendored)
									
									
								
							| @@ -1,21 +0,0 @@ | |||||||
| The MIT License (MIT) |  | ||||||
|  |  | ||||||
| Copyright (c) 2014 Steve Francia |  | ||||||
|  |  | ||||||
| Permission is hereby granted, free of charge, to any person obtaining a copy |  | ||||||
| of this software and associated documentation files (the "Software"), to deal |  | ||||||
| in the Software without restriction, including without limitation the rights |  | ||||||
| to use, copy, modify, merge, publish, distribute, sublicense, and/or sell |  | ||||||
| copies of the Software, and to permit persons to whom the Software is |  | ||||||
| furnished to do so, subject to the following conditions: |  | ||||||
|  |  | ||||||
| The above copyright notice and this permission notice shall be included in all |  | ||||||
| copies or substantial portions of the Software. |  | ||||||
|  |  | ||||||
| THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR |  | ||||||
| IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, |  | ||||||
| FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE |  | ||||||
| AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER |  | ||||||
| LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, |  | ||||||
| OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE |  | ||||||
| SOFTWARE. |  | ||||||
							
								
								
									
vendor/github.com/spf13/viper/Makefile  (76 lines, generated, vendored)
									
									
								
							| @@ -1,76 +0,0 @@ | |||||||
| # A Self-Documenting Makefile: http://marmelab.com/blog/2016/02/29/auto-documented-makefile.html |  | ||||||
|  |  | ||||||
| OS = $(shell uname | tr A-Z a-z) |  | ||||||
| export PATH := $(abspath bin/):${PATH} |  | ||||||
|  |  | ||||||
| # Build variables |  | ||||||
| BUILD_DIR ?= build |  | ||||||
| export CGO_ENABLED ?= 0 |  | ||||||
| export GOOS = $(shell go env GOOS) |  | ||||||
| ifeq (${VERBOSE}, 1) |  | ||||||
| ifeq ($(filter -v,${GOARGS}),) |  | ||||||
| 	GOARGS += -v |  | ||||||
| endif |  | ||||||
| TEST_FORMAT = short-verbose |  | ||||||
| endif |  | ||||||
|  |  | ||||||
| # Dependency versions |  | ||||||
| GOTESTSUM_VERSION = 0.4.0 |  | ||||||
| GOLANGCI_VERSION = 1.21.0 |  | ||||||
|  |  | ||||||
| # Add the ability to override some variables |  | ||||||
| # Use with care |  | ||||||
| -include override.mk |  | ||||||
|  |  | ||||||
| .PHONY: clear |  | ||||||
| clear: ## Clear the working area and the project |  | ||||||
| 	rm -rf bin/ |  | ||||||
|  |  | ||||||
| .PHONY: check |  | ||||||
| check: test lint ## Run tests and linters |  | ||||||
|  |  | ||||||
| bin/gotestsum: bin/gotestsum-${GOTESTSUM_VERSION} |  | ||||||
| 	@ln -sf gotestsum-${GOTESTSUM_VERSION} bin/gotestsum |  | ||||||
| bin/gotestsum-${GOTESTSUM_VERSION}: |  | ||||||
| 	@mkdir -p bin |  | ||||||
| 	curl -L https://github.com/gotestyourself/gotestsum/releases/download/v${GOTESTSUM_VERSION}/gotestsum_${GOTESTSUM_VERSION}_${OS}_amd64.tar.gz | tar -zOxf - gotestsum > ./bin/gotestsum-${GOTESTSUM_VERSION} && chmod +x ./bin/gotestsum-${GOTESTSUM_VERSION} |  | ||||||
|  |  | ||||||
| TEST_PKGS ?= ./... |  | ||||||
| .PHONY: test |  | ||||||
| test: TEST_FORMAT ?= short |  | ||||||
| test: SHELL = /bin/bash |  | ||||||
| test: export CGO_ENABLED=1 |  | ||||||
| test: bin/gotestsum ## Run tests |  | ||||||
| 	@mkdir -p ${BUILD_DIR} |  | ||||||
| 	bin/gotestsum --no-summary=skipped --junitfile ${BUILD_DIR}/coverage.xml --format ${TEST_FORMAT} -- -race -coverprofile=${BUILD_DIR}/coverage.txt -covermode=atomic $(filter-out -v,${GOARGS}) $(if ${TEST_PKGS},${TEST_PKGS},./...) |  | ||||||
|  |  | ||||||
| bin/golangci-lint: bin/golangci-lint-${GOLANGCI_VERSION} |  | ||||||
| 	@ln -sf golangci-lint-${GOLANGCI_VERSION} bin/golangci-lint |  | ||||||
| bin/golangci-lint-${GOLANGCI_VERSION}: |  | ||||||
| 	@mkdir -p bin |  | ||||||
| 	curl -sfL https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | bash -s -- -b ./bin/ v${GOLANGCI_VERSION} |  | ||||||
| 	@mv bin/golangci-lint $@ |  | ||||||
|  |  | ||||||
| .PHONY: lint |  | ||||||
| lint: bin/golangci-lint ## Run linter |  | ||||||
| 	bin/golangci-lint run |  | ||||||
|  |  | ||||||
| .PHONY: fix |  | ||||||
| fix: bin/golangci-lint ## Fix lint violations |  | ||||||
| 	bin/golangci-lint run --fix |  | ||||||
|  |  | ||||||
| # Add custom targets here |  | ||||||
| -include custom.mk |  | ||||||
|  |  | ||||||
| .PHONY: list |  | ||||||
| list: ## List all make targets |  | ||||||
| 	@${MAKE} -pRrn : -f $(MAKEFILE_LIST) 2>/dev/null | awk -v RS= -F: '/^# File/,/^# Finished Make data base/ {if ($$1 !~ "^[#.]") {print $$1}}' | egrep -v -e '^[^[:alnum:]]' -e '^$@$$' | sort |  | ||||||
|  |  | ||||||
| .PHONY: help |  | ||||||
| .DEFAULT_GOAL := help |  | ||||||
| help: |  | ||||||
| 	@grep -h -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' |  | ||||||
|  |  | ||||||
| # Variable outputting/exporting rules |  | ||||||
| var-%: ; @echo $($*) |  | ||||||
| varexport-%: ; @echo $*=$($*) |  | ||||||
							
								
								
									
vendor/github.com/spf13/viper/README.md  (806 lines, generated, vendored)
									
									
								
							| @@ -1,806 +0,0 @@ | |||||||
|  |  | ||||||
|  |  | ||||||
| [Mentioned in Awesome Go](https://github.com/avelino/awesome-go#configuration) |  | ||||||
|  |  | ||||||
| [CI](https://github.com/spf13/viper/actions?query=workflow%3ACI) |  | ||||||
| [Join the chat at Gitter](https://gitter.im/spf13/viper?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) |  | ||||||
| [Go Report Card](https://goreportcard.com/report/github.com/spf13/viper) |  | ||||||
| [PkgGoDev](https://pkg.go.dev/mod/github.com/spf13/viper) |  | ||||||
|  |  | ||||||
| **Go configuration with fangs!** |  | ||||||
|  |  | ||||||
| Many Go projects are built using Viper including: |  | ||||||
|  |  | ||||||
| * [Hugo](http://gohugo.io) |  | ||||||
| * [EMC RexRay](http://rexray.readthedocs.org/en/stable/) |  | ||||||
| * [Imgur’s Incus](https://github.com/Imgur/incus) |  | ||||||
| * [Nanobox](https://github.com/nanobox-io/nanobox)/[Nanopack](https://github.com/nanopack) |  | ||||||
| * [Docker Notary](https://github.com/docker/Notary) |  | ||||||
| * [BloomApi](https://www.bloomapi.com/) |  | ||||||
| * [doctl](https://github.com/digitalocean/doctl) |  | ||||||
| * [Clairctl](https://github.com/jgsqware/clairctl) |  | ||||||
| * [Mercure](https://mercure.rocks) |  | ||||||
|  |  | ||||||
|  |  | ||||||
| ## Install |  | ||||||
|  |  | ||||||
| ```console |  | ||||||
| go get github.com/spf13/viper |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
|  |  | ||||||
| ## What is Viper? |  | ||||||
|  |  | ||||||
| Viper is a complete configuration solution for Go applications including 12-Factor apps. It is designed |  | ||||||
| to work within an application, and can handle all types of configuration needs |  | ||||||
| and formats. It supports: |  | ||||||
|  |  | ||||||
| * setting defaults |  | ||||||
| * reading from JSON, TOML, YAML, HCL, envfile and Java properties config files |  | ||||||
| * live watching and re-reading of config files (optional) |  | ||||||
| * reading from environment variables |  | ||||||
| * reading from remote config systems (etcd or Consul), and watching changes |  | ||||||
| * reading from command line flags |  | ||||||
| * reading from buffer |  | ||||||
| * setting explicit values |  | ||||||
|  |  | ||||||
| Viper can be thought of as a registry for all of your application's configuration needs. |  | ||||||
|  |  | ||||||
|  |  | ||||||
| ## Why Viper? |  | ||||||
|  |  | ||||||
| When building a modern application, you don’t want to worry about |  | ||||||
| configuration file formats; you want to focus on building awesome software. |  | ||||||
| Viper is here to help with that. |  | ||||||
|  |  | ||||||
| Viper does the following for you: |  | ||||||
|  |  | ||||||
| 1. Find, load, and unmarshal a configuration file in JSON, TOML, YAML, HCL, INI, envfile or Java properties formats. |  | ||||||
| 2. Provide a mechanism to set default values for your different configuration options. |  | ||||||
| 3. Provide a mechanism to set override values for options specified through command line flags. |  | ||||||
| 4. Provide an alias system to easily rename parameters without breaking existing code. |  | ||||||
| 5. Make it easy to tell the difference between when a user has explicitly provided a command line or config file value that happens to equal the default, and when only the default is in effect. |  | ||||||
|  |  | ||||||
| Viper uses the following precedence order. Each item takes precedence over the item below it: |  | ||||||
|  |  | ||||||
|  * explicit call to `Set` |  | ||||||
|  * flag |  | ||||||
|  * env |  | ||||||
|  * config |  | ||||||
|  * key/value store |  | ||||||
|  * default |  | ||||||
|  |  | ||||||
| **Important:** Viper configuration keys are case insensitive. |  | ||||||
| There are ongoing discussions about making that optional. |  | ||||||
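|  |  | ||||||
| A minimal sketch of how the precedence order and case insensitivity play out (the `verbose` key and `myapp` prefix are hypothetical): |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| viper.SetDefault("Verbose", false) |  | ||||||
|  |  | ||||||
| viper.SetEnvPrefix("myapp") |  | ||||||
| viper.AutomaticEnv() |  | ||||||
| os.Setenv("MYAPP_VERBOSE", "true") // typically done outside of the app |  | ||||||
| viper.GetBool("verbose") // true: the environment variable beats the default (keys are case insensitive) |  | ||||||
|  |  | ||||||
| viper.Set("verbose", false) |  | ||||||
| viper.GetBool("verbose") // false: an explicit Set beats everything else |  | ||||||
| ``` |  | ||||||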
|  |  | ||||||
|  |  | ||||||
| ## Putting Values into Viper |  | ||||||
|  |  | ||||||
| ### Establishing Defaults |  | ||||||
|  |  | ||||||
| A good configuration system will support default values. A default value is not |  | ||||||
| required for a key, but it’s useful in the event that a key hasn't been set via |  | ||||||
| config file, environment variable, remote configuration or flag. |  | ||||||
|  |  | ||||||
| Examples: |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| viper.SetDefault("ContentDir", "content") |  | ||||||
| viper.SetDefault("LayoutDir", "layouts") |  | ||||||
| viper.SetDefault("Taxonomies", map[string]string{"tag": "tags", "category": "categories"}) |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| ### Reading Config Files |  | ||||||
|  |  | ||||||
| Viper requires minimal configuration so it knows where to look for config files. |  | ||||||
| Viper supports JSON, TOML, YAML, HCL, INI, envfile and Java Properties files. Viper can search multiple paths, but |  | ||||||
| currently a single Viper instance only supports a single configuration file. |  | ||||||
| Viper does not default to any configuration search paths, leaving that decision |  | ||||||
| to the application. |  | ||||||
|  |  | ||||||
| Here is an example of how to use Viper to search for and read a configuration file. |  | ||||||
| None of the specific paths are required, but at least one path should be provided |  | ||||||
| where a configuration file is expected. |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| viper.SetConfigName("config") // name of config file (without extension) |  | ||||||
| viper.SetConfigType("yaml") // REQUIRED if the config file does not have the extension in the name |  | ||||||
| viper.AddConfigPath("/etc/appname/")   // path to look for the config file in |  | ||||||
| viper.AddConfigPath("$HOME/.appname")  // call multiple times to add many search paths |  | ||||||
| viper.AddConfigPath(".")               // optionally look for config in the working directory |  | ||||||
| err := viper.ReadInConfig() // Find and read the config file |  | ||||||
| if err != nil { // Handle errors reading the config file |  | ||||||
| 	panic(fmt.Errorf("fatal error reading config file: %w", err)) |  | ||||||
| } |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| You can handle the specific case where no config file is found like this: |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| if err := viper.ReadInConfig(); err != nil { |  | ||||||
|     if _, ok := err.(viper.ConfigFileNotFoundError); ok { |  | ||||||
|         // Config file not found; ignore error if desired |  | ||||||
|     } else { |  | ||||||
|         // Config file was found but another error was produced |  | ||||||
|     } |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // Config file found and successfully parsed |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| *NOTE [since 1.6]:* You can also use a config file without an extension and specify its format programmatically. This is useful for configuration files that live in the user's home directory without an extension, such as `.bashrc`. |  | ||||||
|  |  | ||||||
| ### Writing Config Files |  | ||||||
|  |  | ||||||
| Reading from config files is useful, but at times you want to store all modifications made at run time. |  | ||||||
| For that, a bunch of commands are available, each with its own purpose: |  | ||||||
|  |  | ||||||
| * WriteConfig - writes the current viper configuration to the predefined path, if one exists. Errors if there is no predefined path. Will overwrite the current config file, if it exists. |  | ||||||
| * SafeWriteConfig - writes the current viper configuration to the predefined path. Errors if there is no predefined path. Will not overwrite the current config file, if it exists. |  | ||||||
| * WriteConfigAs - writes the current viper configuration to the given filepath. Will overwrite the given file, if it exists. |  | ||||||
| * SafeWriteConfigAs - writes the current viper configuration to the given filepath. Will not overwrite the given file, if it exists. |  | ||||||
|  |  | ||||||
| As a rule of thumb, everything marked with safe won't overwrite any file, but will just create it if it doesn't exist, whilst the default behavior is to create or truncate. |  | ||||||
|  |  | ||||||
| A few short examples: |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| viper.WriteConfig() // writes current config to predefined path set by 'viper.AddConfigPath()' and 'viper.SetConfigName' |  | ||||||
| viper.SafeWriteConfig() |  | ||||||
| viper.WriteConfigAs("/path/to/my/.config") |  | ||||||
| viper.SafeWriteConfigAs("/path/to/my/.config") // will error since it has already been written |  | ||||||
| viper.SafeWriteConfigAs("/path/to/my/.other_config") |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| ### Watching and re-reading config files |  | ||||||
|  |  | ||||||
| Viper supports the ability to have your application live read a config file while running. |  | ||||||
|  |  | ||||||
| Gone are the days of needing to restart a server to have a config take effect, |  | ||||||
| viper powered applications can read an update to a config file while running and |  | ||||||
| not miss a beat. |  | ||||||
|  |  | ||||||
| Simply tell the viper instance to `WatchConfig()`. |  | ||||||
| Optionally you can provide a function for Viper to run each time a change occurs. |  | ||||||
|  |  | ||||||
| **Make sure you add all of the configPaths prior to calling `WatchConfig()`** |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| viper.WatchConfig() |  | ||||||
| viper.OnConfigChange(func(e fsnotify.Event) { |  | ||||||
| 	fmt.Println("Config file changed:", e.Name) |  | ||||||
| }) |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| ### Reading Config from io.Reader |  | ||||||
|  |  | ||||||
| Viper predefines many configuration sources such as files, environment |  | ||||||
| variables, flags, and remote K/V store, but you are not bound to them. You can |  | ||||||
| also implement your own configuration source and feed it to viper. |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| viper.SetConfigType("yaml") // or viper.SetConfigType("YAML") |  | ||||||
|  |  | ||||||
| // any approach to require this configuration into your program. |  | ||||||
| var yamlExample = []byte(` |  | ||||||
| Hacker: true |  | ||||||
| name: steve |  | ||||||
| hobbies: |  | ||||||
| - skateboarding |  | ||||||
| - snowboarding |  | ||||||
| - go |  | ||||||
| clothing: |  | ||||||
|   jacket: leather |  | ||||||
|   trousers: denim |  | ||||||
| age: 35 |  | ||||||
| eyes : brown |  | ||||||
| beard: true |  | ||||||
| `) |  | ||||||
|  |  | ||||||
| viper.ReadConfig(bytes.NewBuffer(yamlExample)) |  | ||||||
|  |  | ||||||
| viper.Get("name") // this would be "steve" |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| ### Setting Overrides |  | ||||||
|  |  | ||||||
| These could be from a command line flag, or from your own application logic. |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| viper.Set("Verbose", true) |  | ||||||
| viper.Set("LogFile", LogFile) |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| ### Registering and Using Aliases |  | ||||||
|  |  | ||||||
| Aliases permit a single value to be referenced by multiple keys. |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| viper.RegisterAlias("loud", "Verbose") |  | ||||||
|  |  | ||||||
| viper.Set("verbose", true) // same result as next line |  | ||||||
| viper.Set("loud", true)   // same result as prior line |  | ||||||
|  |  | ||||||
| viper.GetBool("loud") // true |  | ||||||
| viper.GetBool("verbose") // true |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| ### Working with Environment Variables |  | ||||||
|  |  | ||||||
| Viper has full support for environment variables. This enables 12 factor |  | ||||||
| applications out of the box. There are five methods that exist to aid working |  | ||||||
| with ENV: |  | ||||||
|  |  | ||||||
|  * `AutomaticEnv()` |  | ||||||
|  * `BindEnv(string...) : error` |  | ||||||
|  * `SetEnvPrefix(string)` |  | ||||||
|  * `SetEnvKeyReplacer(string...) *strings.Replacer` |  | ||||||
|  * `AllowEmptyEnv(bool)` |  | ||||||
|  |  | ||||||
| _When working with ENV variables, it’s important to recognize that Viper |  | ||||||
| treats ENV variables as case sensitive._ |  | ||||||
|  |  | ||||||
| Viper provides a mechanism to try to ensure that ENV variables are unique. By |  | ||||||
| using `SetEnvPrefix`, you can tell Viper to use a prefix while reading from |  | ||||||
| the environment variables. Both `BindEnv` and `AutomaticEnv` will use this |  | ||||||
| prefix. |  | ||||||
|  |  | ||||||
| `BindEnv` takes one or two parameters. The first parameter is the key name, the |  | ||||||
| second is the name of the environment variable. The name of the environment |  | ||||||
| variable is case sensitive. If the ENV variable name is not provided, then |  | ||||||
| Viper will automatically assume that the ENV variable matches the following format: prefix + "_" + the key name in ALL CAPS. When you explicitly provide the ENV variable name (the second parameter), |  | ||||||
| it **does not** automatically add the prefix. For example, if the second parameter is "id", |  | ||||||
| Viper will look for the ENV variable "ID". |  | ||||||
|  |  | ||||||
| One important thing to recognize when working with ENV variables is that the |  | ||||||
| value will be read each time it is accessed. Viper does not fix the value when |  | ||||||
| the `BindEnv` is called. |  | ||||||
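|  |  | ||||||
| A minimal sketch of this read-at-access behavior (the `port` key is hypothetical, and no prefix is configured): |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| viper.BindEnv("port") // binds to the environment variable "PORT" |  | ||||||
|  |  | ||||||
| os.Setenv("PORT", "8080") |  | ||||||
| viper.GetInt("port") // 8080, read at access time |  | ||||||
|  |  | ||||||
| os.Setenv("PORT", "9090") |  | ||||||
| viper.GetInt("port") // 9090, reflects the latest environment value |  | ||||||
| ``` |  | ||||||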
|  |  | ||||||
| `AutomaticEnv` is a powerful helper, especially when combined with |  | ||||||
| `SetEnvPrefix`. When called, Viper will check for an environment variable any |  | ||||||
| time a `viper.Get` request is made, applying the following rule: it will |  | ||||||
| check for an environment variable with a name matching the key uppercased and |  | ||||||
| prefixed with the `EnvPrefix` if set. |  | ||||||
|  |  | ||||||
| `SetEnvKeyReplacer` allows you to use a `strings.Replacer` object to rewrite Env |  | ||||||
| keys to an extent. This is useful if you want to use `-` or something in your |  | ||||||
| `Get()` calls, but want your environment variables to use `_` delimiters. An |  | ||||||
| example of using it can be found in `viper_test.go`. |  | ||||||
|  |  | ||||||
| Alternatively, you can use `EnvKeyReplacer` with `NewWithOptions` factory function. |  | ||||||
| Unlike `SetEnvKeyReplacer`, it accepts a `StringReplacer` interface allowing you to write custom string replacing logic. |  | ||||||
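|  |  | ||||||
| A hedged sketch of both approaches (the `db.host` key is hypothetical): |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| // Singleton style: rewrite "." and "-" in keys to "_" for env lookups, |  | ||||||
| // so Get("db.host") checks the environment variable DB_HOST (plus any prefix). |  | ||||||
| viper.SetEnvKeyReplacer(strings.NewReplacer(".", "_", "-", "_")) |  | ||||||
| viper.AutomaticEnv() |  | ||||||
|  |  | ||||||
| // Dedicated instance with the EnvKeyReplacer option: |  | ||||||
| v := viper.NewWithOptions(viper.EnvKeyReplacer(strings.NewReplacer(".", "_"))) |  | ||||||
| v.AutomaticEnv() |  | ||||||
| ``` |  | ||||||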
|  |  | ||||||
| By default empty environment variables are considered unset and will fall back to |  | ||||||
| the next configuration source. To treat empty environment variables as set, use |  | ||||||
| the `AllowEmptyEnv` method. |  | ||||||
|  |  | ||||||
| #### Env example |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| SetEnvPrefix("spf") // will be uppercased automatically |  | ||||||
| BindEnv("id") |  | ||||||
|  |  | ||||||
| os.Setenv("SPF_ID", "13") // typically done outside of the app |  | ||||||
|  |  | ||||||
| id := Get("id") // 13 |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| ### Working with Flags |  | ||||||
|  |  | ||||||
| Viper has the ability to bind to flags. Specifically, Viper supports `Pflags` |  | ||||||
| as used in the [Cobra](https://github.com/spf13/cobra) library. |  | ||||||
|  |  | ||||||
| Like `BindEnv`, the value is not set when the binding method is called, but when |  | ||||||
| it is accessed. This means you can bind as early as you want, even in an |  | ||||||
| `init()` function. |  | ||||||
|  |  | ||||||
| For individual flags, the `BindPFlag()` method provides this functionality. |  | ||||||
|  |  | ||||||
| Example: |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| serverCmd.Flags().Int("port", 1138, "Port to run Application server on") |  | ||||||
| viper.BindPFlag("port", serverCmd.Flags().Lookup("port")) |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| You can also bind an existing set of pflags (pflag.FlagSet): |  | ||||||
|  |  | ||||||
| Example: |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| pflag.Int("flagname", 1234, "help message for flagname") |  | ||||||
|  |  | ||||||
| pflag.Parse() |  | ||||||
| viper.BindPFlags(pflag.CommandLine) |  | ||||||
|  |  | ||||||
| i := viper.GetInt("flagname") // retrieve values from viper instead of pflag |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| The use of [pflag](https://github.com/spf13/pflag/) in Viper does not preclude |  | ||||||
| the use of other packages that use the [flag](https://golang.org/pkg/flag/) |  | ||||||
| package from the standard library. The pflag package can handle the flags |  | ||||||
| defined for the flag package by importing these flags. This is accomplished |  | ||||||
| by calling a convenience function provided by the pflag package called |  | ||||||
| AddGoFlagSet(). |  | ||||||
|  |  | ||||||
| Example: |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| package main |  | ||||||
|  |  | ||||||
| import ( |  | ||||||
| 	"flag" |  | ||||||
|  |  | ||||||
| 	"github.com/spf13/pflag" |  | ||||||
| 	"github.com/spf13/viper" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| func main() { |  | ||||||
|  |  | ||||||
| 	// using standard library "flag" package |  | ||||||
| 	flag.Int("flagname", 1234, "help message for flagname") |  | ||||||
|  |  | ||||||
| 	pflag.CommandLine.AddGoFlagSet(flag.CommandLine) |  | ||||||
| 	pflag.Parse() |  | ||||||
| 	viper.BindPFlags(pflag.CommandLine) |  | ||||||
|  |  | ||||||
| 	i := viper.GetInt("flagname") // retrieve value from viper |  | ||||||
|  |  | ||||||
| 	_ = i // ... use the flag value as needed |  | ||||||
| } |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| #### Flag interfaces |  | ||||||
|  |  | ||||||
| Viper provides two Go interfaces to bind other flag systems if you don’t use `Pflags`. |  | ||||||
|  |  | ||||||
| `FlagValue` represents a single flag. This is a very simple example on how to implement this interface: |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| type myFlag struct {} |  | ||||||
| func (f myFlag) HasChanged() bool { return false } |  | ||||||
| func (f myFlag) Name() string { return "my-flag-name" } |  | ||||||
| func (f myFlag) ValueString() string { return "my-flag-value" } |  | ||||||
| func (f myFlag) ValueType() string { return "string" } |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| Once your flag implements this interface, you can simply tell Viper to bind it: |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| viper.BindFlagValue("my-flag-name", myFlag{}) |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| `FlagValueSet` represents a group of flags. This is a very simple example on how to implement this interface: |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| type myFlagSet struct { |  | ||||||
| 	flags []myFlag |  | ||||||
| } |  | ||||||
|  |  | ||||||
| func (f myFlagSet) VisitAll(fn func(FlagValue)) { |  | ||||||
| 	for _, flag := range f.flags { |  | ||||||
| 		fn(flag) |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| Once your flag set implements this interface, you can simply tell Viper to bind it: |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| fSet := myFlagSet{ |  | ||||||
| 	flags: []myFlag{myFlag{}, myFlag{}}, |  | ||||||
| } |  | ||||||
| viper.BindFlagValues("my-flags", fSet) |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| ### Remote Key/Value Store Support |  | ||||||
|  |  | ||||||
| To enable remote support in Viper, do a blank import of the `viper/remote` |  | ||||||
| package: |  | ||||||
|  |  | ||||||
| `import _ "github.com/spf13/viper/remote"` |  | ||||||
|  |  | ||||||
| Viper will read a config string (as JSON, TOML, YAML, HCL or envfile) retrieved from a path |  | ||||||
| in a Key/Value store such as etcd or Consul.  These values take precedence over |  | ||||||
| default values, but are overridden by configuration values retrieved from disk, |  | ||||||
| flags, or environment variables. |  | ||||||
|  |  | ||||||
| Viper uses [crypt](https://github.com/bketelsen/crypt) to retrieve |  | ||||||
| configuration from the K/V store, which means that you can store your |  | ||||||
| configuration values encrypted and have them automatically decrypted if you have |  | ||||||
| the correct gpg keyring.  Encryption is optional. |  | ||||||
|  |  | ||||||
| You can use remote configuration in conjunction with local configuration, or |  | ||||||
| independently of it. |  | ||||||
|  |  | ||||||
| `crypt` has a command-line helper that you can use to put configurations in your |  | ||||||
| K/V store. `crypt` defaults to etcd on http://127.0.0.1:4001. |  | ||||||
|  |  | ||||||
| ```bash |  | ||||||
| $ go get github.com/bketelsen/crypt/bin/crypt |  | ||||||
| $ crypt set -plaintext /config/hugo.json /Users/hugo/settings/config.json |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| Confirm that your value was set: |  | ||||||
|  |  | ||||||
| ```bash |  | ||||||
| $ crypt get -plaintext /config/hugo.json |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| See the `crypt` documentation for examples of how to set encrypted values, or |  | ||||||
| how to use Consul. |  | ||||||
|  |  | ||||||
| ### Remote Key/Value Store Example - Unencrypted |  | ||||||
|  |  | ||||||
| #### etcd |  | ||||||
| ```go |  | ||||||
| viper.AddRemoteProvider("etcd", "http://127.0.0.1:4001","/config/hugo.json") |  | ||||||
| viper.SetConfigType("json") // because there is no file extension in a stream of bytes, supported extensions are "json", "toml", "yaml", "yml", "properties", "props", "prop", "env", "dotenv" |  | ||||||
| err := viper.ReadRemoteConfig() |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| #### Consul |  | ||||||
| You need to set a key to Consul key/value storage with JSON value containing your desired config. |  | ||||||
| For example, create a Consul key/value store key `MY_CONSUL_KEY` with value: |  | ||||||
|  |  | ||||||
| ```json |  | ||||||
| { |  | ||||||
|     "port": 8080, |  | ||||||
|     "hostname": "myhostname.com" |  | ||||||
| } |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| viper.AddRemoteProvider("consul", "localhost:8500", "MY_CONSUL_KEY") |  | ||||||
| viper.SetConfigType("json") // Need to explicitly set this to json |  | ||||||
| err := viper.ReadRemoteConfig() |  | ||||||
|  |  | ||||||
| fmt.Println(viper.Get("port")) // 8080 |  | ||||||
| fmt.Println(viper.Get("hostname")) // myhostname.com |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| #### Firestore |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| viper.AddRemoteProvider("firestore", "google-cloud-project-id", "collection/document") |  | ||||||
| viper.SetConfigType("json") // Config's format: "json", "toml", "yaml", "yml" |  | ||||||
| err := viper.ReadRemoteConfig() |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| Of course, you can also use the secure variant, `AddSecureRemoteProvider`. |  | ||||||
|  |  | ||||||
| ### Remote Key/Value Store Example - Encrypted |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| viper.AddSecureRemoteProvider("etcd","http://127.0.0.1:4001","/config/hugo.json","/etc/secrets/mykeyring.gpg") |  | ||||||
| viper.SetConfigType("json") // because there is no file extension in a stream of bytes,  supported extensions are "json", "toml", "yaml", "yml", "properties", "props", "prop", "env", "dotenv" |  | ||||||
| err := viper.ReadRemoteConfig() |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| ### Watching Changes in etcd - Unencrypted |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| // alternatively, you can create a new viper instance. |  | ||||||
| var runtime_viper = viper.New() |  | ||||||
|  |  | ||||||
| runtime_viper.AddRemoteProvider("etcd", "http://127.0.0.1:4001", "/config/hugo.yml") |  | ||||||
| runtime_viper.SetConfigType("yaml") // because there is no file extension in a stream of bytes, supported extensions are "json", "toml", "yaml", "yml", "properties", "props", "prop", "env", "dotenv" |  | ||||||
|  |  | ||||||
| // read from remote config the first time. |  | ||||||
| err := runtime_viper.ReadRemoteConfig() |  | ||||||
|  |  | ||||||
| // unmarshal config |  | ||||||
| runtime_viper.Unmarshal(&runtime_conf) |  | ||||||
|  |  | ||||||
| // open a goroutine to watch remote changes forever |  | ||||||
| go func(){ |  | ||||||
| 	for { |  | ||||||
| 	    time.Sleep(time.Second * 5) // delay after each request |  | ||||||
|  |  | ||||||
| 	    // currently, only tested with etcd support |  | ||||||
| 	    err := runtime_viper.WatchRemoteConfig() |  | ||||||
| 	    if err != nil { |  | ||||||
| 	        log.Errorf("unable to read remote config: %v", err) |  | ||||||
| 	        continue |  | ||||||
| 	    } |  | ||||||
|  |  | ||||||
| 	    // unmarshal new config into our runtime config struct. you can also use channel |  | ||||||
| 	    // to implement a signal to notify the system of the changes |  | ||||||
| 	    runtime_viper.Unmarshal(&runtime_conf) |  | ||||||
| 	} |  | ||||||
| }() |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| ## Getting Values From Viper |  | ||||||
|  |  | ||||||
| In Viper, there are a few ways to get a value depending on the value’s type. |  | ||||||
| The following functions and methods exist: |  | ||||||
|  |  | ||||||
|  * `Get(key string) : interface{}` |  | ||||||
|  * `GetBool(key string) : bool` |  | ||||||
|  * `GetFloat64(key string) : float64` |  | ||||||
|  * `GetInt(key string) : int` |  | ||||||
|  * `GetIntSlice(key string) : []int` |  | ||||||
|  * `GetString(key string) : string` |  | ||||||
|  * `GetStringMap(key string) : map[string]interface{}` |  | ||||||
|  * `GetStringMapString(key string) : map[string]string` |  | ||||||
|  * `GetStringSlice(key string) : []string` |  | ||||||
|  * `GetTime(key string) : time.Time` |  | ||||||
|  * `GetDuration(key string) : time.Duration` |  | ||||||
|  * `IsSet(key string) : bool` |  | ||||||
|  * `AllSettings() : map[string]interface{}` |  | ||||||
|  |  | ||||||
| One important thing to recognize is that each Get function will return a zero |  | ||||||
| value if it’s not found. To check if a given key exists, the `IsSet()` method |  | ||||||
| has been provided. |  | ||||||
|  |  | ||||||
| Example: |  | ||||||
| ```go |  | ||||||
| viper.GetString("logfile") // case-insensitive Setting & Getting |  | ||||||
| if viper.GetBool("verbose") { |  | ||||||
|     fmt.Println("verbose enabled") |  | ||||||
| } |  | ||||||
| ``` |  | ||||||
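|  |  | ||||||
| A short sketch of using `IsSet()` to distinguish an unset key from a zero value (the `logfile` key is the hypothetical one from the example above): |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| if !viper.IsSet("logfile") { |  | ||||||
|     fmt.Println("logfile not configured") // GetString("logfile") would simply return "" |  | ||||||
| } |  | ||||||
| ``` |  | ||||||
|  |  | ||||||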
| ### Accessing nested keys |  | ||||||
|  |  | ||||||
| The accessor methods also accept formatted paths to deeply nested keys. For |  | ||||||
| example, if the following JSON file is loaded: |  | ||||||
|  |  | ||||||
| ```json |  | ||||||
| { |  | ||||||
|     "host": { |  | ||||||
|         "address": "localhost", |  | ||||||
|         "port": 5799 |  | ||||||
|     }, |  | ||||||
|     "datastore": { |  | ||||||
|         "metric": { |  | ||||||
|             "host": "127.0.0.1", |  | ||||||
|             "port": 3099 |  | ||||||
|         }, |  | ||||||
|         "warehouse": { |  | ||||||
|             "host": "198.0.0.1", |  | ||||||
|             "port": 2112 |  | ||||||
|         } |  | ||||||
|     } |  | ||||||
| } |  | ||||||
|  |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| Viper can access a nested field by passing a `.` delimited path of keys: |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| GetString("datastore.metric.host") // (returns "127.0.0.1") |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| This obeys the precedence rules established above; the search for the path |  | ||||||
| will cascade through the remaining configuration registries until found. |  | ||||||
|  |  | ||||||
| For example, given this configuration file, both `datastore.metric.host` and |  | ||||||
| `datastore.metric.port` are already defined (and may be overridden). If in addition |  | ||||||
| `datastore.metric.protocol` was defined in the defaults, Viper would also find it. |  | ||||||
|  |  | ||||||
| However, if `datastore.metric` was overridden (by a flag, an environment variable, |  | ||||||
| the `Set()` method, …) with an immediate value, then all sub-keys of |  | ||||||
| `datastore.metric` become undefined; they are “shadowed” by the higher-priority |  | ||||||
| configuration level. |  | ||||||
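|  |  | ||||||
| A minimal sketch of that shadowing behavior (the override value is hypothetical): |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| viper.Set("datastore.metric", "redis") // an immediate value overrides the whole sub-tree |  | ||||||
|  |  | ||||||
| viper.GetString("datastore.metric.host") // "" because the sub-key is now shadowed |  | ||||||
| ``` |  | ||||||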
|  |  | ||||||
| Lastly, if there exists a key that matches the delimited key path, its value |  | ||||||
| will be returned instead. E.g. |  | ||||||
|  |  | ||||||
| ```json |  | ||||||
| { |  | ||||||
|     "datastore.metric.host": "0.0.0.0", |  | ||||||
|     "host": { |  | ||||||
|         "address": "localhost", |  | ||||||
|         "port": 5799 |  | ||||||
|     }, |  | ||||||
|     "datastore": { |  | ||||||
|         "metric": { |  | ||||||
|             "host": "127.0.0.1", |  | ||||||
|             "port": 3099 |  | ||||||
|         }, |  | ||||||
|         "warehouse": { |  | ||||||
|             "host": "198.0.0.1", |  | ||||||
|             "port": 2112 |  | ||||||
|         } |  | ||||||
|     } |  | ||||||
| } |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| GetString("datastore.metric.host") // returns "0.0.0.0" |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| ### Extract sub-tree |  | ||||||
|  |  | ||||||
| You can extract a sub-tree from Viper. |  | ||||||
|  |  | ||||||
| For example, `viper` represents: |  | ||||||
|  |  | ||||||
| ```yaml |  | ||||||
| app: |  | ||||||
|   cache1: |  | ||||||
|     max-items: 100 |  | ||||||
|     item-size: 64 |  | ||||||
|   cache2: |  | ||||||
|     max-items: 200 |  | ||||||
|     item-size: 80 |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| After executing: |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| subv := viper.Sub("app.cache1") |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| `subv` represents: |  | ||||||
|  |  | ||||||
| ```yaml |  | ||||||
| max-items: 100 |  | ||||||
| item-size: 64 |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| Suppose we have: |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| func NewCache(cfg *viper.Viper) *Cache {...} |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| which creates a cache based on config information formatted as `subv`. |  | ||||||
| Now it’s easy to create these 2 caches separately as: |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| cfg1 := viper.Sub("app.cache1") |  | ||||||
| cache1 := NewCache(cfg1) |  | ||||||
|  |  | ||||||
| cfg2 := viper.Sub("app.cache2") |  | ||||||
| cache2 := NewCache(cfg2) |  | ||||||
| ``` |  | ||||||
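|  |  | ||||||
| Note that `Sub` returns `nil` when the requested key cannot be found, so a defensive sketch (reusing the hypothetical `app.cache1` key) might guard for that: |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| cfg1 := viper.Sub("app.cache1") |  | ||||||
| if cfg1 == nil { // the key was not found in the configuration |  | ||||||
| 	panic("app.cache1 configuration not found") |  | ||||||
| } |  | ||||||
| cache1 := NewCache(cfg1) |  | ||||||
| ``` |  | ||||||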
|  |  | ||||||
| ### Unmarshaling |  | ||||||
|  |  | ||||||
| You also have the option of unmarshaling all or a specific value to a struct, map, |  | ||||||
| etc. |  | ||||||
|  |  | ||||||
| There are two methods to do this: |  | ||||||
|  |  | ||||||
|  * `Unmarshal(rawVal interface{}) : error` |  | ||||||
|  * `UnmarshalKey(key string, rawVal interface{}) : error` |  | ||||||
|  |  | ||||||
| Example: |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| type config struct { |  | ||||||
| 	Port int |  | ||||||
| 	Name string |  | ||||||
| 	PathMap string `mapstructure:"path_map"` |  | ||||||
| } |  | ||||||
|  |  | ||||||
| var C config |  | ||||||
|  |  | ||||||
| err := viper.Unmarshal(&C) |  | ||||||
| if err != nil { |  | ||||||
| 	log.Fatalf("unable to decode into struct, %v", err) |  | ||||||
| } |  | ||||||
| ``` |  | ||||||
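|  |  | ||||||
| `UnmarshalKey` works the same way but decodes only a single section of the configuration. A hedged sketch, reusing the hypothetical `app.cache1` sub-tree shown earlier: |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| type cacheConfig struct { |  | ||||||
| 	MaxItems int `mapstructure:"max-items"` |  | ||||||
| 	ItemSize int `mapstructure:"item-size"` |  | ||||||
| } |  | ||||||
|  |  | ||||||
| var cc cacheConfig |  | ||||||
|  |  | ||||||
| if err := viper.UnmarshalKey("app.cache1", &cc); err != nil { |  | ||||||
| 	log.Fatalf("unable to decode cache config: %v", err) |  | ||||||
| } |  | ||||||
| ``` |  | ||||||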
|  |  | ||||||
| If you want to unmarshal configuration where the keys themselves contain a dot (the default key delimiter), |  | ||||||
| you have to change the delimiter: |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| v := viper.NewWithOptions(viper.KeyDelimiter("::")) |  | ||||||
|  |  | ||||||
| v.SetDefault("chart::values", map[string]interface{}{ |  | ||||||
|     "ingress": map[string]interface{}{ |  | ||||||
|         "annotations": map[string]interface{}{ |  | ||||||
|             "traefik.frontend.rule.type":                 "PathPrefix", |  | ||||||
|             "traefik.ingress.kubernetes.io/ssl-redirect": "true", |  | ||||||
|         }, |  | ||||||
|     }, |  | ||||||
| }) |  | ||||||
|  |  | ||||||
| type config struct { |  | ||||||
| 	Chart struct{ |  | ||||||
|         Values map[string]interface{} |  | ||||||
|     } |  | ||||||
| } |  | ||||||
|  |  | ||||||
| var C config |  | ||||||
|  |  | ||||||
| v.Unmarshal(&C) |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| Viper also supports unmarshaling into embedded structs: |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| /* |  | ||||||
| Example config: |  | ||||||
|  |  | ||||||
| module: |  | ||||||
|     enabled: true |  | ||||||
|     token: 89h3f98hbwf987h3f98wenf89ehf |  | ||||||
| */ |  | ||||||
| type config struct { |  | ||||||
| 	Module struct { |  | ||||||
| 		Enabled bool |  | ||||||
|  |  | ||||||
| 		moduleConfig `mapstructure:",squash"` |  | ||||||
| 	} |  | ||||||
| } |  | ||||||
|  |  | ||||||
| // moduleConfig could be in a module specific package |  | ||||||
| type moduleConfig struct { |  | ||||||
| 	Token string |  | ||||||
| } |  | ||||||
|  |  | ||||||
| var C config |  | ||||||
|  |  | ||||||
| err := viper.Unmarshal(&C) |  | ||||||
| if err != nil { |  | ||||||
| 	log.Fatalf("unable to decode into struct, %v", err) |  | ||||||
| } |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| Viper uses [github.com/mitchellh/mapstructure](https://github.com/mitchellh/mapstructure) under the hood for unmarshaling values which uses `mapstructure` tags by default. |  | ||||||
|  |  | ||||||
| ### Marshalling to string |  | ||||||
|  |  | ||||||
| You may need to marshal all the settings held in viper into a string rather than write them to a file. |  | ||||||
| You can use your favorite format's marshaller with the config returned by `AllSettings()`. |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| import ( |  | ||||||
|     "log" |  | ||||||
|  |  | ||||||
|     "github.com/spf13/viper" |  | ||||||
|     yaml "gopkg.in/yaml.v2" |  | ||||||
|     // ... |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| func yamlStringSettings() string { |  | ||||||
|     c := viper.AllSettings() |  | ||||||
|     bs, err := yaml.Marshal(c) |  | ||||||
|     if err != nil { |  | ||||||
|         log.Fatalf("unable to marshal config to YAML: %v", err) |  | ||||||
|     } |  | ||||||
|     return string(bs) |  | ||||||
| } |  | ||||||
| ``` |  | ||||||
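|  |  | ||||||
| The same idea works with any encoder. For example, a sketch using the standard library's `encoding/json` (the function name is hypothetical): |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| import ( |  | ||||||
|     "encoding/json" |  | ||||||
|     "log" |  | ||||||
|  |  | ||||||
|     "github.com/spf13/viper" |  | ||||||
| ) |  | ||||||
|  |  | ||||||
| func jsonStringSettings() string { |  | ||||||
|     c := viper.AllSettings() |  | ||||||
|     bs, err := json.Marshal(c) |  | ||||||
|     if err != nil { |  | ||||||
|         log.Fatalf("unable to marshal config to JSON: %v", err) |  | ||||||
|     } |  | ||||||
|     return string(bs) |  | ||||||
| } |  | ||||||
| ``` |  | ||||||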
|  |  | ||||||
| ## Viper or Vipers? |  | ||||||
|  |  | ||||||
| Viper comes ready to use out of the box. There is no configuration or |  | ||||||
| initialization needed to begin using Viper. Since most applications will want |  | ||||||
| to use a single central repository for their configuration, the viper package |  | ||||||
| provides this. It is similar to a singleton. |  | ||||||
|  |  | ||||||
| All of the examples above demonstrate using viper in its singleton |  | ||||||
| style approach. |  | ||||||
|  |  | ||||||
| ### Working with multiple vipers |  | ||||||
|  |  | ||||||
| You can also create many different vipers for use in your application. Each will |  | ||||||
| have its own unique set of configurations and values. Each can read from a |  | ||||||
| different config file, key value store, etc. All of the functions that the viper |  | ||||||
| package supports are mirrored as methods on a viper instance. |  | ||||||
|  |  | ||||||
| Example: |  | ||||||
|  |  | ||||||
| ```go |  | ||||||
| x := viper.New() |  | ||||||
| y := viper.New() |  | ||||||
|  |  | ||||||
| x.SetDefault("ContentDir", "content") |  | ||||||
| y.SetDefault("ContentDir", "foobar") |  | ||||||
|  |  | ||||||
| //... |  | ||||||
| ``` |  | ||||||
|  |  | ||||||
| When working with multiple vipers, it is up to the user to keep track of the |  | ||||||
| different vipers. |  | ||||||
|  |  | ||||||
| ## Q & A |  | ||||||
|  |  | ||||||
| Q: Why is it called “Viper”? |  | ||||||
|  |  | ||||||
| A: Viper is designed to be a [companion](http://en.wikipedia.org/wiki/Viper_(G.I._Joe)) |  | ||||||
| to [Cobra](https://github.com/spf13/cobra). While both can operate completely |  | ||||||
| independently, together they make a powerful pair to handle much of your |  | ||||||
| application foundation needs. |  | ||||||
|  |  | ||||||
| Q: Why is it called “Cobra”? |  | ||||||
|  |  | ||||||
| A: Is there a better name for a [commander](http://en.wikipedia.org/wiki/Cobra_Commander)? |  | ||||||
Some files were not shown because too many files have changed in this diff.