Compare commits

...

43 Commits

Author SHA1 Message Date
Aditya Manthramurthy
fd202f75df
Add GHC 9.8.2 to CI (#195) 2024-05-01 13:27:01 -07:00
Aditya Manthramurthy
996540bee2
CI: Add macos to build and update caching strategy (#194) 2024-05-01 12:35:37 -07:00
Aditya Manthramurthy
78a27b44bf
Add GHC 9.6 to CI and fix tests (#193)
Tests needed fixing for recent minio.

Also:
- set minimum tested version to 8.10
- remove building with cabal 3.6
- update stack.yaml to add crypton-connection

Update live server test env var handling
2024-05-01 10:25:26 -07:00
Marc Scholten
34a186ed33
Use crypton-connection instead of connection (#191)
The connection package is not maintained anymore and doesn't work
with GHC 9.6
2024-04-30 14:19:34 -07:00
Aditya Manthramurthy
45e88d813b
Enable StrictData and bump up version for release (#189)
* Enable StrictData and bump up version for release

- Types defined in Credentials.Types and Network.Minio.Data are now
strict

* ormolu fixes
2023-05-22 12:32:34 -07:00
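The effect of `StrictData` on types like those in `Credentials.Types` can be sketched in a few lines. The `Config` type below is a made-up illustration, not a type from the library:

```haskell
{-# LANGUAGE StrictData #-}

import Control.Exception (SomeException, evaluate, try)

-- Under StrictData, every field behaves as if declared with a bang (!):
-- evaluating a constructed value forces all of its fields.
data Config = Config {host :: String, port :: Int}

main :: IO ()
main = do
  -- A fully defined value works as usual.
  print (port (Config "localhost" 9000))
  -- A bottom field is forced as soon as the value is evaluated to WHNF,
  -- so the 'try' below catches an exception.
  r <- try (evaluate (Config "localhost" undefined)) :: IO (Either SomeException Config)
  putStrLn (either (const "strict field was forced") (const "field stayed lazy") r)
```

Making record fields strict like this is why the change warranted a version bump: code that relied on lazily constructed values can now fail earlier.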
Aditya Manthramurthy
fa62ed599a
Add support for AssumeRole STS API (#188)
This change adds support for requesting temporary object storage
credentials using the STS API. Some breaking changes are introduced to
enable this support:

- `Credentials` type has been removed. Use the `CredentialValue` type
instead. Corresponding to this the type signature for `setCreds` has
changed, though the functionality is the same.
- The type alias `Provider` has been renamed to `CredentialLoader` to
avoid naming confusion.
2023-05-03 17:52:46 -07:00
Aditya Manthramurthy
7ae8a8179d
Add GHC 9.4 to CI (#186) 2023-05-01 13:08:14 -07:00
Aditya Manthramurthy
6d3925d597
Fix XML generator tests (#187)
- Differences in quoting of XML content do not impact the equality of
XML docs, so we parse generated XML docs and compare them for equality.
2023-04-26 11:18:07 -07:00
Vehbi Sinan Tunalioglu
5d58cb3bfc
fix: update AWS region map (#185) 2023-04-26 11:14:38 -07:00
Aditya Manthramurthy
f4ae55468e
Add Credentials module to use Assume Role API (#184)
This exports a new module for retrieving STS based credentials, however
they are not yet convenient to use in the library - the session token
needs to be included as a custom header and may not be possible with all
APIs.
2022-12-23 07:53:27 -08:00
Aditya Manthramurthy
d87d67b75b
Add dev flag to cabal file and update README (#182)
This turns on the GHC option `-Werror` to ensure that warnings fail the
build in dev mode. The flag is enabled in the CI.

README is updated with cabal based instructions.
2022-10-04 15:07:41 -07:00
Aditya Manthramurthy
0b3a5559fd
Fix deprecation warnings (#181) 2022-10-04 09:56:38 -07:00
Aditya Manthramurthy
7eef9b08ea
Update CI configs (#180) 2022-10-04 09:25:56 -07:00
Aditya Manthramurthy
e06bb4c949
Enable partial fields warning (#179)
- Updates exported type `EventMessage` to avoid exporting partial
functions.
2022-10-04 09:25:39 -07:00
Aditya Manthramurthy
d82b093b6b
Bump up version for new release (#172)
- Also add hlint to CI
- Also update CI with latest action versions
2022-05-27 16:22:18 -07:00
Aditya Manthramurthy
d59f45fec4
Hlint fixes (#173)
* Hlint fixes

- Will require major version bump as some types were changed from data
  to newtype

* ormolu fixes after hlint
2022-05-27 14:33:05 -07:00
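Why changing `data` to `newtype` is a breaking, major-version change can be seen from pattern-matching semantics alone. The wrapper types below are illustrative, not the library's:

```haskell
import Control.Exception (SomeException, evaluate, try)

-- A newtype constructor is erased at compile time, so matching on it is
-- irrefutable (lazy), while matching a data constructor forces the value.
data DataWrap = DataWrap Int

newtype NewtypeWrap = NewtypeWrap Int

matchData :: DataWrap -> String
matchData (DataWrap _) = "matched"

matchNew :: NewtypeWrap -> String
matchNew (NewtypeWrap _) = "matched"

main :: IO ()
main = do
  -- The newtype match succeeds even on a bottom argument.
  putStrLn (matchNew undefined)
  -- The data match forces its argument, so this throws and is caught.
  r <- try (evaluate (matchData undefined)) :: IO (Either SomeException String)
  putStrLn (either (const "data match forced its argument") id r)
```

Because strictness (and unboxing behavior) observably changes, such a conversion can alter client-code semantics even when the API looks identical.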
Aditya Manthramurthy
b91a7afd6b
Update with changes for ormolu 0.5.0.0 (#171)
- Add ormolu check to CI
2022-05-27 12:07:28 -07:00
Tom McLaughlin
7b6547aca0
Test GHC 9.0.2 and 9.2.2 with Stack (#170) 2022-05-25 10:21:56 -07:00
Tom McLaughlin
baee20dfb6
Support aeson 2 (#169) 2022-05-24 15:34:47 -07:00
Aditya Manthramurthy
bdac380c77
Replace protolude with relude and build with GHC 9.0.2 (#168)
- relude is a better and more commonly used library

- Add compiler warnings and fixes

- Update stack lts to 18.24

- Add explicit deriving strategies
2022-02-11 13:48:08 -08:00
Aditya Manthramurthy
c59b7066fc
Updates from new ormolu 0.4 (#167)
* Changes from formatter

* Fix github action run on master branch
2022-02-10 13:34:11 -08:00
Aditya Manthramurthy
193be59432
Update CI (#166) 2022-02-10 10:43:43 -08:00
Aditya Manthramurthy
c52f2811fe
Use single CI file based on kowainik (#162)
* Use single CI file based on kowainik
2021-03-22 09:36:01 -07:00
Aditya Manthramurthy
aa2382b2e9
Use region specific endpoints for AWS S3 in presigned Urls (#164)
- Also update standard S3 endpoints

- Unify code that determines if path style or virtual style must be used for
regular and presigned requests

Fixes #160
2021-03-08 16:35:52 -08:00
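The shape of a region-to-endpoint mapping can be sketched as below. The entries follow AWS's documented endpoint naming scheme but are illustrative only; the library's actual table lives in its source:

```haskell
-- Map an AWS region to its region-specific S3 endpoint, as needed when
-- generating presigned URLs (a presigned URL must be signed against the
-- endpoint that will actually serve the request).
awsRegionEndpoint :: String -> String
awsRegionEndpoint "us-east-1" = "s3.amazonaws.com"
awsRegionEndpoint region = "s3." ++ region ++ ".amazonaws.com"

main :: IO ()
main = mapM_ (putStrLn . awsRegionEndpoint) ["us-east-1", "us-west-2", "eu-central-1"]
```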
Aditya Manthramurthy
b8cc1e57ee
Update formatting with latest ormolu 1.4 (#163) 2021-03-03 16:11:45 -08:00
Aditya Manthramurthy
73bc5b64a0
Fix XML generation test for S3 SELECT (#161)
- Test was failing because of non-unique ordering of CSV properties. It is fixed
by sorting the CSV properties before serialization.
2021-03-03 15:44:12 -08:00
Aditya Manthramurthy
5ab80384ae
Fix Actions CI (#159) 2021-01-05 12:30:28 -08:00
Aditya Manthramurthy
ab2c6b0b02
Bump up version for new release (#158) 2020-10-16 17:08:34 -07:00
Harshavardhana
787f638d45 Create CNAME 2020-08-02 19:18:12 -07:00
Aditya Manthramurthy
68a2b78010
Set continue-on-error at step level (#157)
* Set continue-on-error at step level

* Add windows cache dirs
2020-06-25 08:48:04 -07:00
Aditya Manthramurthy
a3538aa46c
CI: Add support for GHC 8.10, stack and live-server testing (#156)
* CI: Add support for GHC 8.10, stack and live-server testing

* Fix live-server tests for all platforms

* Fix windows tests

* Fix resourcet cleanup exceptions

* Mark minio-hs builds GHC 8.4, 8.8 on windows experimental

* Use minio with erasure code backend for tests

* Fix matrix combinations for cabal and stack

Co-authored-by: Krishnan Parthasarathi <kp@minio.io>
2020-06-24 10:35:11 -07:00
Alexander Vershilov
3dd235a1ad
Send Accept-Encoding: identity in the head requests. (#155)
It appeared that some S3 servers (Yandex storage in particular)
honour the `Accept-Encoding: gzip` request header. In such a case
servers can't send the `Content-Length` header, as the transfer size
differs from the body size.
The simplest solution for this problem is to force http-client
to send the `Accept-Encoding: identity` header in the HeadObject
request.

Co-authored-by: Aditya Manthramurthy <donatello@users.noreply.github.com>
2020-06-15 10:21:41 -07:00
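The essence of that fix can be sketched as a pure header rewrite. This is a simplified model with plain string pairs; the real change sets the header on http-client's `Request` for the HeadObject call:

```haskell
type Header = (String, String)

-- Replace any existing Accept-Encoding header with "identity" so the
-- server does not compress the response and can return an accurate
-- Content-Length header.
forceIdentityEncoding :: [Header] -> [Header]
forceIdentityEncoding hs =
  ("Accept-Encoding", "identity") : filter ((/= "Accept-Encoding") . fst) hs

main :: IO ()
main = print (forceIdentityEncoding [("Host", "play.min.io"), ("Accept-Encoding", "gzip")])
```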
Aditya Manthramurthy
8e5e51ceb8
Remove stylish-haskell config file (#154)
We are now using ormolu instead.
2020-06-15 09:59:51 -07:00
Aditya Manthramurthy
23fecbb469
Update code formatting and update dependencies (unliftio, protolude) (#152)
* Format code with ormolu

* Use latest unliftio-core

* Use latest protolude
2020-06-14 10:06:41 -07:00
Aditya Manthramurthy
ce23f7322a
Windows build (#150) 2020-05-29 15:54:41 -07:00
Aditya Manthramurthy
8e4874972b
Add github workflow (#148) 2020-05-29 11:45:32 -07:00
Aditya Manthramurthy
ae141fd6f5 Support for Yandex Object Storage (#147)
* Update src/Network/Minio/Sign/V4.hs

Co-authored-by: Sergey Ivanov <ivanovs-4@users.noreply.github.com>
2020-05-20 16:04:45 -07:00
Aditya Manthramurthy
b9a3cfcd1d
Set upper bound on unliftio-core (#146) 2020-05-20 07:47:55 -07:00
Aditya Manthramurthy
9739376227
Fix live server test (#142)
* Update travis ghc to 8.8.2

* Fix live server test
2020-01-29 14:53:27 -08:00
Aditya Manthramurthy
d2a78df4eb
New release (#140)
* Disable live-server tests by default

- They will always be run by our CI
- Also update and fix example in README.

* Update examples and add them to build

- Also drop support for GHC 8.2.2

* Bump up version for new release
2020-01-02 11:23:03 -08:00
Thomas Rodriguez
c31030beac add us-west-2 region to minio data (#139) 2019-12-10 05:59:26 +00:00
Aditya Manthramurthy
1eafa68648
Bump up version for new release (#138) 2019-10-29 13:11:54 -07:00
Aditya Manthramurthy
410d342cd5
Build with GHC 8.8 (#137)
* Fix to build with GHC 8.8 and fix error handling bug

To work with the addition of MonadFail constraint to parseTimeM in the time
library, the underlying monad was changed from Either to Maybe as it has a
MonadFail instance.

* Update build to run tests against local minio server
2019-10-28 15:40:20 -07:00
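The MonadFail point behind that change can be demonstrated with the time library directly; this is a standalone sketch, not the library's actual parsing code:

```haskell
import Data.Time (Day, defaultTimeLocale, parseTimeM)

-- Since time-1.9, parseTimeM has a MonadFail constraint on its result
-- monad. Maybe has a MonadFail instance, while Either String does not
-- out of the box, hence the switch of the underlying monad to Maybe.
parseDay :: String -> Maybe Day
parseDay = parseTimeM False defaultTimeLocale "%Y-%m-%d"

main :: IO ()
main = do
  print (parseDay "2019-10-28")
  print (parseDay "not-a-date")
```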
71 changed files with 6818 additions and 4802 deletions

.github/workflows/ci.yml (new file)

@ -0,0 +1,266 @@
name: CI

# Trigger the workflow on push or pull request, but only for the master branch
on:
  pull_request:
    branches: [master]
  push:
    branches: [master]

# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref }}
  cancel-in-progress: true

# Env vars for tests
env:
  MINIO_ACCESS_KEY: minio
  MINIO_SECRET_KEY: minio123
  MINIO_LOCAL: 1

jobs:
  ormolu:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: haskell-actions/run-ormolu@v15
        with:
          version: "0.5.0.1"

  hlint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: "Set up HLint"
        uses: haskell-actions/hlint-setup@v2
        with:
          version: "3.5"
      - name: "Run HLint"
        uses: haskell-actions/hlint-run@v2
        with:
          path: '["src/", "test/", "examples"]'
          fail-on: warning

  cabal:
    name: ${{ matrix.os }} / ghc-${{ matrix.ghc }} / cabal-${{ matrix.cabal }}
    runs-on: ${{ matrix.os }}
    needs: ormolu
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
        cabal: ["3.8", "latest"]
        ghc:
          - "9.8"
          - "9.6"
          - "9.4"
          - "9.2"
          - "9.0"
          - "8.10"
        exclude:
          # macos llvm issue for versions less than 9.2
          - os: macos-latest
            ghc: "8.10"
          - os: macos-latest
            ghc: "9.0"
          # Cabal 3.8 supports GHC < 9.6
          - cabal: "3.8"
            ghc: "9.6"
          - cabal: "3.8"
            ghc: "9.8"
    steps:
      - uses: actions/checkout@v4
        if: github.event.action == 'opened' || github.event.action == 'synchronize' || github.event.ref == 'refs/heads/master'
      - uses: haskell-actions/setup@v2
        id: setup
        name: Setup Haskell
        with:
          ghc-version: ${{ matrix.ghc }}
          cabal-version: ${{ matrix.cabal }}
          cabal-update: true
      - name: Configure
        run: |
          cabal configure --enable-tests --enable-benchmarks --test-show-details=direct -fexamples -fdev -flive-test
          cabal build all --dry-run
        # The last step generates dist-newstyle/cache/plan.json for the cache key.
      - name: Restore cached dependencies
        uses: actions/cache/restore@v4
        id: cache
        env:
          key: ${{ runner.os }}-ghc-${{ steps.setup.outputs.ghc-version }}-cabal-${{ steps.setup.outputs.cabal-version }}
        with:
          path: ${{ steps.setup.outputs.cabal-store }}
          key: ${{ env.key }}-plan-${{ hashFiles('**/plan.json') }}
          restore-keys: ${{ env.key }}-
      - name: Install dependencies
        # If we had an exact cache hit, the dependencies will be up to date.
        if: steps.cache.outputs.cache-hit != 'true'
        run: cabal build all --only-dependencies
      # Cache dependencies already here, so that we do not have to rebuild them should the subsequent steps fail.
      - name: Save cached dependencies
        uses: actions/cache/save@v4
        # If we had an exact cache hit, trying to save the cache would error because of key clash.
        if: steps.cache.outputs.cache-hit != 'true'
        with:
          path: ${{ steps.setup.outputs.cabal-store }}
          key: ${{ steps.cache.outputs.cache-primary-key }}
      - name: Build
        run: |
          cabal build all
      - name: Setup TLS certs for MinIO for testing (Linux)
        if: matrix.os == 'ubuntu-latest'
        run: |
          mkdir -p /tmp/minio /tmp/minio-config/certs
          cp test/cert/* /tmp/minio-config/certs/
          (cd /tmp/minio; wget -q https://dl.min.io/server/minio/release/linux-amd64/minio; chmod +x ./minio)
          sudo cp /tmp/minio-config/certs/public.crt /usr/local/share/ca-certificates/
          sudo update-ca-certificates
      ## Currently disable TLS setup for MacOS due to issues in trusting it on MacOS.
      - name: Setup TLS certs for MinIO for testing (MacOS)
        if: matrix.os == 'macos-latest'
        run: |
          mkdir -p /tmp/minio /tmp/minio-config/certs
          cp test/cert/* /tmp/minio-config/certs/
          (cd /tmp/minio; wget -q https://dl.min.io/server/minio/release/darwin-amd64/minio; chmod +x ./minio)
          # sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain /tmp/minio-config/certs/public.crt
      - name: Setup MinIO for testing (Windows)
        if: matrix.os == 'windows-latest'
        run: |
          New-Item -ItemType Directory -Path "$env:temp/minio-config/certs/"
          Copy-Item -Path test\cert\* -Destination "$env:temp/minio-config/certs/"
          Invoke-WebRequest -Uri https://dl.minio.io/server/minio/release/windows-amd64/minio.exe -OutFile $HOME/minio.exe
          Import-Certificate -FilePath "$env:temp/minio-config/certs/public.crt" -CertStoreLocation Cert:\LocalMachine\Root
      - name: Test (Linux)
        if: matrix.os == 'ubuntu-latest'
        env:
          MINIO_SECURE: 1
        run: |
          /tmp/minio/minio server --quiet --certs-dir /tmp/minio-config/certs data1 data2 data3 data4 2>&1 > minio.log &
          ghc --version
          cabal --version
          cabal test all
      - name: Test (MacOS)
        if: matrix.os == 'macos-latest'
        # # Leave MINIO_SECURE unset to disable TLS in tests.
        # env:
        #   MINIO_SECURE: 1
        run: |
          /tmp/minio/minio server --quiet data1 data2 data3 data4 2>&1 > minio.log &
          ghc --version
          cabal --version
          cabal test all
      - name: Test (Windows)
        if: matrix.os == 'windows-latest'
        env:
          MINIO_SECURE: 1
        run: |
          Start-Process -NoNewWindow -FilePath "$HOME/minio.exe" -ArgumentList "--certs-dir", "$env:temp/minio-config/certs", "server", "$env:temp/data1", "$env:temp/data2", "$env:temp/data3", "$env:temp/data4"
          ghc --version
          cabal --version
          cabal test all

  stack:
    name: stack / ghc ${{ matrix.ghc }}
    runs-on: ${{ matrix.os }}
    env:
      MINIO_SECURE: 1
    strategy:
      matrix:
        ghc:
          - "8.10.7"
          - "9.0.2"
          - "9.2.8"
          - "9.4.8"
          - "9.6.5"
          - "9.8.2"
        os: [ubuntu-latest]
    steps:
      - uses: actions/checkout@v4
        if: github.event.action == 'opened' || github.event.action == 'synchronize' || github.event.ref == 'refs/heads/master'
      - uses: haskell-actions/setup@v2
        with:
          ghc-version: ${{ matrix.ghc }}
          enable-stack: true
          stack-version: "latest"
      - uses: actions/cache@v4
        name: Cache ~/.stack
        with:
          path: ~/.stack
          key: ${{ runner.os }}-stack-global-${{ hashFiles('stack.yaml') }}-${{ hashFiles('package.yaml') }}
          restore-keys: |
            ${{ runner.os }}-stack-global-
      - uses: actions/cache@v4
        name: Cache .stack-work
        with:
          path: .stack-work
          key: ${{ runner.os }}-stack-work-${{ hashFiles('stack.yaml') }}-${{ hashFiles('package.yaml') }}-${{ hashFiles('**/*.hs') }}
          restore-keys: |
            ${{ runner.os }}-stack-work-
      - name: Install dependencies
        run: |
          stack --version
          stack build --system-ghc --test --bench --no-run-tests --no-run-benchmarks --only-dependencies
      - name: Build
        run: |
          stack build --system-ghc --test --bench --no-run-tests --no-run-benchmarks --flag minio-hs:examples --flag minio-hs:live-test --flag minio-hs:dev
      - name: Setup MinIO for testing (Linux)
        if: matrix.os == 'ubuntu-latest'
        run: |
          mkdir -p /tmp/minio /tmp/minio-config/certs
          cp test/cert/* /tmp/minio-config/certs/
          (cd /tmp/minio; wget -q https://dl.min.io/server/minio/release/linux-amd64/minio; chmod +x ./minio)
          sudo cp /tmp/minio-config/certs/public.crt /usr/local/share/ca-certificates/
          sudo update-ca-certificates
      - name: Setup MinIO for testing (MacOS)
        if: matrix.os == 'macos-latest'
        run: |
          mkdir -p /tmp/minio /tmp/minio-config/certs
          cp test/cert/* /tmp/minio-config/certs/
          (cd /tmp/minio; wget -q https://dl.min.io/server/minio/release/darwin-amd64/minio; chmod +x ./minio)
          sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain /tmp/minio-config/certs/public.crt
      - name: Setup MinIO for testing (Windows)
        if: matrix.os == 'windows-latest'
        run: |
          New-Item -ItemType Directory -Path "$env:temp/minio-config/certs/"
          Copy-Item -Path test\cert\* -Destination "$env:temp/minio-config/certs/"
          Invoke-WebRequest -Uri https://dl.minio.io/server/minio/release/windows-amd64/minio.exe -OutFile $HOME/minio.exe
          Import-Certificate -FilePath "$env:temp/minio-config/certs/public.crt" -CertStoreLocation Cert:\LocalMachine\Root
      - name: Test (Non-Windows)
        if: matrix.os != 'windows-latest'
        run: |
          /tmp/minio/minio server --quiet --certs-dir /tmp/minio-config/certs data1 data2 data3 data4 2>&1 > minio.log &
          ghc --version
          stack --version
          stack test --system-ghc --flag minio-hs:live-test --flag minio-hs:dev
      - name: Test (Windows)
        if: matrix.os == 'windows-latest'
        run: |
          Start-Process -NoNewWindow -FilePath "$HOME/minio.exe" -ArgumentList "--certs-dir", "$env:temp/minio-config/certs", "server", "$env:temp/data1", "$env:temp/data2", "$env:temp/data3", "$env:temp/data4"
          ghc --version
          cabal --version
          stack test --system-ghc --flag minio-hs:live-test --flag minio-hs:dev


@ -1,233 +0,0 @@
# stylish-haskell configuration file
# ==================================

# The stylish-haskell tool is mainly configured by specifying steps. These steps
# are a list, so they have an order, and one specific step may appear more than
# once (if needed). Each file is processed by these steps in the given order.
steps:
  # Convert some ASCII sequences to their Unicode equivalents. This is disabled
  # by default.
  # - unicode_syntax:
  #     # In order to make this work, we also need to insert the UnicodeSyntax
  #     # language pragma. If this flag is set to true, we insert it when it's
  #     # not already present. You may want to disable it if you configure
  #     # language extensions using some other method than pragmas. Default:
  #     # true.
  #     add_language_pragma: true

  # Align the right hand side of some elements. This is quite conservative
  # and only applies to statements where each element occupies a single
  # line.
  - simple_align:
      cases: true
      top_level_patterns: true
      records: true

  # Import cleanup
  - imports:
      # There are different ways we can align names and lists.
      #
      # - global: Align the import names and import list throughout the entire
      #   file.
      #
      # - file: Like global, but don't add padding when there are no qualified
      #   imports in the file.
      #
      # - group: Only align the imports per group (a group is formed by adjacent
      #   import lines).
      #
      # - none: Do not perform any alignment.
      #
      # Default: global.
      align: global

      # The following options affect only import list alignment.
      #
      # List align has following options:
      #
      # - after_alias: Import list is aligned with end of import including
      #   'as' and 'hiding' keywords.
      #
      #   > import qualified Data.List as List (concat, foldl, foldr, head,
      #   >                                     init, last, length)
      #
      # - with_alias: Import list is aligned with start of alias or hiding.
      #
      #   > import qualified Data.List as List (concat, foldl, foldr, head,
      #   >                                     init, last, length)
      #
      # - new_line: Import list starts always on new line.
      #
      #   > import qualified Data.List as List
      #   >     (concat, foldl, foldr, head, init, last, length)
      #
      # Default: after_alias
      list_align: after_alias

      # Right-pad the module names to align imports in a group:
      #
      # - true: a little more readable
      #
      #   > import qualified Data.List       as List (concat, foldl, foldr,
      #   >                                           init, last, length)
      #   > import qualified Data.List.Extra as List (concat, foldl, foldr,
      #   >                                           init, last, length)
      #
      # - false: diff-safe
      #
      #   > import qualified Data.List as List (concat, foldl, foldr, init,
      #   >                                     last, length)
      #   > import qualified Data.List.Extra as List (concat, foldl, foldr,
      #   >                                           init, last, length)
      #
      # Default: true
      pad_module_names: true

      # Long list align style takes effect when import is too long. This is
      # determined by 'columns' setting.
      #
      # - inline: This option will put as much specs on same line as possible.
      #
      # - new_line: Import list will start on new line.
      #
      # - new_line_multiline: Import list will start on new line when it's
      #   short enough to fit to single line. Otherwise it'll be multiline.
      #
      # - multiline: One line per import list entry.
      #   Type with constructor list acts like single import.
      #
      #   > import qualified Data.Map as M
      #   >     ( empty
      #   >     , singleton
      #   >     , ...
      #   >     , delete
      #   >     )
      #
      # Default: inline
      long_list_align: inline

      # Align empty list (importing instances)
      #
      # Empty list align has following options
      #
      # - inherit: inherit list_align setting
      #
      # - right_after: () is right after the module name:
      #
      #   > import Vector.Instances ()
      #
      # Default: inherit
      empty_list_align: inherit

      # List padding determines indentation of import list on lines after import.
      # This option affects 'long_list_align'.
      #
      # - <integer>: constant value
      #
      # - module_name: align under start of module name.
      #   Useful for 'file' and 'group' align settings.
      list_padding: 4

      # Separate lists option affects formatting of import list for type
      # or class. The only difference is single space between type and list
      # of constructors, selectors and class functions.
      #
      # - true: There is single space between Foldable type and list of its
      #   functions.
      #
      #   > import Data.Foldable (Foldable (fold, foldl, foldMap))
      #
      # - false: There is no space between Foldable type and list of its
      #   functions.
      #
      #   > import Data.Foldable (Foldable(fold, foldl, foldMap))
      #
      # Default: true
      separate_lists: true

      # Space surround option affects formatting of import lists on a single
      # line. The only difference is single space after the initial
      # parenthesis and a single space before the terminal parenthesis.
      #
      # - true: There is single space associated with the enclosing
      #   parenthesis.
      #
      #   > import Data.Foo ( foo )
      #
      # - false: There is no space associated with the enclosing parenthesis
      #
      #   > import Data.Foo (foo)
      #
      # Default: false
      space_surround: false

  # Language pragmas
  - language_pragmas:
      # We can generate different styles of language pragma lists.
      #
      # - vertical: Vertical-spaced language pragmas, one per line.
      #
      # - compact: A more compact style.
      #
      # - compact_line: Similar to compact, but wrap each line with
      #   `{-#LANGUAGE #-}'.
      #
      # Default: vertical.
      style: vertical

      # Align affects alignment of closing pragma brackets.
      #
      # - true: Brackets are aligned in same column.
      #
      # - false: Brackets are not aligned together. There is only one space
      #   between actual import and closing bracket.
      #
      # Default: true
      align: true

      # stylish-haskell can detect redundancy of some language pragmas. If this
      # is set to true, it will remove those redundant pragmas. Default: true.
      remove_redundant: true

  # Replace tabs by spaces. This is disabled by default.
  # - tabs:
  #     # Number of spaces to use for each tab. Default: 8, as specified by the
  #     # Haskell report.
  #     spaces: 8

  # Remove trailing whitespace
  - trailing_whitespace: {}

# A common setting is the number of columns (parts of) code will be wrapped
# to. Different steps take this into account. Default: 80.
columns: 80

# By default, line endings are converted according to the OS. You can override
# preferred format here.
#
# - native: Native newline format. CRLF on Windows, LF on other OSes.
#
# - lf: Convert to LF ("\n").
#
# - crlf: Convert to CRLF ("\r\n").
#
# Default: native.
newline: native

# Sometimes, language extensions are specified in a cabal file or from the
# command line instead of using language pragmas in the file. stylish-haskell
# needs to be aware of these, so it can parse the file correctly.
#
# No language extensions are enabled by default.
language_extensions:
  - BangPatterns
  - FlexibleContexts
  - FlexibleInstances
  - MultiParamTypeClasses
  - MultiWayIf
  - NoImplicitPrelude
  - OverloadedStrings
  - RankNTypes
  - ScopedTypeVariables
  - TupleSections
  - TypeFamilies


@ -1,52 +0,0 @@
sudo: true
language: haskell
git:
  depth: 5
cabal: "2.4"
cache:
  directories:
    - "$HOME/.cabal/store"
    - "$HOME/.stack"
    - "$TRAVIS_BUILD_DIR/.stack-work"
matrix:
  include:
    # Cabal
    - ghc: 8.2.2
    - ghc: 8.4.4
    - ghc: 8.6.5
    # Stack
    - ghc: 8.6.5
      env: STACK_YAML="$TRAVIS_BUILD_DIR/stack.yaml"
install:
  - |
    if [ -z "$STACK_YAML" ]; then
      ghc --version
      cabal --version
      cabal new-update
      cabal new-build --enable-tests --enable-benchmarks
    else
      # install stack
      curl -sSL https://get.haskellstack.org/ | sh
      # build project with stack
      stack --version
      stack build --system-ghc --test --bench --no-run-tests --no-run-benchmarks
    fi
script:
  - |
    if [ -z "$STACK_YAML" ]; then
      cabal new-test --enable-tests
    else
      stack test --system-ghc
    fi
notifications:
  email: false


@ -1,6 +1,48 @@
Changelog
==========
## Version 1.7.0 -- Unreleased
* Fix data type `EventMessage` to not export partial fields (#179)
* Bump up min bound on time dep and fix deprecation warnings (#181)
* Add `dev` flag to cabal for building with warnings as errors (#182)
* Fix AWS region map (#185)
* Fix XML generator tests (#187)
* Add support for STS Assume Role API (#188)
### Breaking changes in 1.7.0
* `Credentials` type has been removed. Use `CredentialValue` instead.
* `Provider` type has been replaced with `CredentialLoader`.
* `EventMessage` data type is updated.
## Version 1.6.0
* HLint fixes - some types were changed to newtype (#173)
* Fix XML generation test for S3 SELECT (#161)
* Use region specific endpoints for AWS S3 in presigned Urls (#164)
* Replace protolude with relude and build with GHC 9.0.2 (#168)
* Support aeson 2 (#169)
* CI updates and code formatting changes with ormolu 0.5.0.0
## Version 1.5.3
* Fix windows build
* Fix support for Yandex Storage (#147)
* Fix for HEAD requests to S3/Minio (#155)
* Bump up some dependencies, new code formatting, Github CI, example fixes and other minor improvements.
## Version 1.5.2
* Fix region `us-west-2` for AWS S3 (#139)
* Build examples in CI
* Disable live-server tests by default, but run them in CI
* Drop support for GHC 8.2.x
## Version 1.5.1
* Add support for GHC 8.8
## Version 1.5.0
* Switch to faster map data type - all previous usage of

CNAME (new file)

@ -0,0 +1 @@
minio-hs.min.io

README.md

@ -1,50 +1,62 @@
# MinIO Haskell Client SDK for Amazon S3 Compatible Cloud Storage [![CI](https://github.com/minio/minio-hs/actions/workflows/ci.yml/badge.svg)](https://github.com/minio/minio-hs/actions/workflows/ci.yml)[![Hackage](https://img.shields.io/hackage/v/minio-hs.svg)](https://hackage.haskell.org/package/minio-hs)[![Slack](https://slack.min.io/slack?type=svg)](https://slack.min.io)

The MinIO Haskell Client SDK provides simple APIs to access [MinIO](https://min.io) and any Amazon S3 compatible object storage.

This guide assumes that you have a working [Haskell development environment](https://www.haskell.org/downloads/).

## Installation

### Add to your project

Simply add `minio-hs` to your project's `.cabal` dependencies section or if you are using hpack, to your `package.yaml` file as usual.

### Try it out in a [REPL](https://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop)

#### For a cabal based environment

Download the library source and change to the extracted directory:

``` sh
$ cabal get minio-hs
$ cd minio-hs-1.6.0/ # directory name could be different
```

Then load the `ghci` REPL environment with the library and browse the available APIs:

``` sh
$ cabal repl
ghci> :browse Network.Minio
```

#### For a stack based environment

From your home folder or any non-haskell project directory, just run:

```sh
stack install minio-hs
```

Then start an interpreter session and browse the available APIs with:

```sh
$ stack ghci
> :browse Network.Minio
```

## Examples

The [examples](https://github.com/minio/minio-hs/tree/master/examples) folder contains many examples that you can try out and use to learn and to help with developing your own projects.

### Quick-Start Example - File Uploader

This example program connects to a MinIO object storage server, makes a bucket on the server and then uploads a file to the bucket.

We will use the MinIO server running at https://play.min.io in this example. Feel free to use this service for testing and development. Access credentials are present in the library and are open to the public.

### FileUploader.hs

``` haskell
#!/usr/bin/env stack
-- stack --resolver lts-14.11 runghc --package minio-hs --package optparse-applicative --package filepath
--
-- MinIO Haskell SDK, (C) 2017-2019 MinIO, Inc.

@ -106,16 +118,19 @@ main = do

  res <- runMinio minioPlayCI $ do
    -- Make a bucket; catch bucket already exists exception if thrown.
    bErr <- try $ makeBucket bucket Nothing

    -- If the bucket already exists, we would get a specific
    -- `ServiceErr` exception thrown.
    case bErr of
      Left BucketAlreadyOwnedByYou -> return ()
      Left e -> throwIO e
      Right _ -> return ()

    -- Upload filepath to bucket; object name is derived from filepath.
    fPutObject bucket object filepath defaultPutObjectOptions

  case res of
    Left e -> putStrLn $ "file upload failed due to " ++ show e
    Right () -> putStrLn "file upload succeeded."
```
@ -129,3 +144,55 @@ main = do
## Contribute

[Contributors Guide](https://github.com/minio/minio-hs/blob/master/CONTRIBUTING.md)
### Development
#### Download the source
```sh
$ git clone https://github.com/minio/minio-hs.git
$ cd minio-hs/
```
#### Build the package:
With `cabal`:
```sh
$ # Configure cabal for development enabling all optional flags defined by the package.
$ cabal configure --enable-tests --test-show-details=direct -fexamples -fdev -flive-test
$ cabal build
```
With `stack`:
``` sh
$ stack build --test --no-run-tests --flag minio-hs:live-test --flag minio-hs:dev --flag minio-hs:examples
```
#### Running tests:
A section of the tests uses the remote MinIO Play server at `https://play.min.io` by default. For library development, using this remote server may be slow. To run the tests against a locally running MinIO live server at `http://localhost:9000` with the credentials `access_key=minio` and `secret_key=minio123`, just set the environment variable `MINIO_LOCAL` to any value (and unset it to switch back to Play).
With `cabal`:
```sh
$ export MINIO_LOCAL=1 # to run live tests against local MinIO server
$ cabal test
```
With `stack`:
``` sh
$ export MINIO_LOCAL=1 # to run live tests against local MinIO server
stack test --flag minio-hs:live-test --flag minio-hs:dev
```
This will run all the test suites.
#### Building documentation:
```sh
$ cabal haddock
$ # OR
$ stack haddock
```


@ -1,18 +0,0 @@
--
-- MinIO Haskell SDK, (C) 2017 MinIO, Inc.
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
-- You may obtain a copy of the License at
--
-- http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.
--
import Distribution.Simple
main = defaultMain

examples/AssumeRole.hs Normal file

@ -0,0 +1,47 @@
--
-- MinIO Haskell SDK, (C) 2023 MinIO, Inc.
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
-- You may obtain a copy of the License at
--
-- http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.
--
{-# LANGUAGE OverloadedStrings #-}
import Control.Monad.IO.Class (liftIO)
import Network.Minio
import Prelude
main :: IO ()
main = do
-- Use play credentials for example.
let assumeRole =
STSAssumeRole
( CredentialValue
"Q3AM3UQ867SPQQA43P2F"
"zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG"
Nothing
)
$ defaultSTSAssumeRoleOptions
{ saroLocation = Just "us-east-1",
saroEndpoint = Just "https://play.min.io:9000"
}
-- Retrieve temporary credentials and print them.
cv <- requestSTSCredential assumeRole
print $ "Temporary credentials" ++ show (credentialValueText $ fst cv)
print $ "Expiry" ++ show (snd cv)
-- Configure 'ConnectInfo' to request temporary credentials on demand.
ci <- setSTSCredential assumeRole "https://play.min.io"
res <- runMinio ci $ do
buckets <- listBuckets
liftIO $ print $ "Top 5 buckets: " ++ show (take 5 buckets)
print res


@ -1,5 +1,5 @@
#!/usr/bin/env stack #!/usr/bin/env stack
-- stack --resolver lts-11.1 runghc --package minio-hs -- stack --resolver lts-14.11 runghc --package minio-hs
-- --
-- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc. -- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc.
@ -16,20 +16,17 @@
-- See the License for the specific language governing permissions and -- See the License for the specific language governing permissions and
-- limitations under the License. -- limitations under the License.
-- --
{-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE OverloadedStrings #-}
import Network.Minio
import Control.Monad.IO.Class (liftIO) import Control.Monad.IO.Class (liftIO)
import Prelude import Network.Minio
import Prelude
-- | The following example uses minio's play server at -- | The following example uses minio's play server at
-- https://play.min.io. The endpoint and associated -- https://play.min.io. The endpoint and associated
-- credentials are provided via the library constant, -- credentials are provided via the library constant,
-- --
-- > minioPlayCI :: ConnectInfo -- > minioPlayCI :: ConnectInfo
--
main :: IO () main :: IO ()
main = do main = do
let bucket = "missingbucket" let bucket = "missingbucket"
@ -39,5 +36,5 @@ main = do
liftIO $ putStrLn $ "Does " ++ show bucket ++ " exist? - " ++ show foundBucket liftIO $ putStrLn $ "Does " ++ show bucket ++ " exist? - " ++ show foundBucket
case res1 of case res1 of
Left e -> putStrLn $ "bucketExists failed." ++ show e Left e -> putStrLn $ "bucketExists failed." ++ show e
Right () -> return () Right () -> return ()


@ -1,5 +1,5 @@
#!/usr/bin/env stack #!/usr/bin/env stack
-- stack --resolver lts-11.1 runghc --package minio-hs -- stack --resolver lts-14.11 runghc --package minio-hs
-- --
-- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc. -- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc.
@ -16,41 +16,40 @@
-- See the License for the specific language governing permissions and -- See the License for the specific language governing permissions and
-- limitations under the License. -- limitations under the License.
-- --
{-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE OverloadedStrings #-}
import Network.Minio
import Control.Monad.Catch (catchIf) import Network.Minio
import Prelude import UnliftIO.Exception (catch, throwIO)
-- | The following example uses minio's play server at -- | The following example uses minio's play server at
-- https://play.min.io. The endpoint and associated -- https://play.min.io. The endpoint and associated
-- credentials are provided via the library constant, -- credentials are provided via the library constant,
-- --
-- > minioPlayCI :: ConnectInfo -- > minioPlayCI :: ConnectInfo
--
ignoreMinioErr :: ServiceErr -> Minio ()
ignoreMinioErr = return . const ()
main :: IO () main :: IO ()
main = do main = do
let let bucket = "test"
bucket = "test"
object = "obj" object = "obj"
objectCopy = "obj-copy" objectCopy = "obj-copy"
localFile = "/etc/lsb-release" localFile = "/etc/lsb-release"
res1 <- runMinio minioPlayCI $ do res1 <- runMinio minioPlayCI $ do
-- 1. Make a bucket; Catch BucketAlreadyOwnedByYou exception. -- 1. Make a bucket; Catch BucketAlreadyOwnedByYou exception.
catchIf (== BucketAlreadyOwnedByYou) (makeBucket bucket Nothing) ignoreMinioErr catch
(makeBucket bucket Nothing)
( \e -> case e of
BucketAlreadyOwnedByYou -> return ()
_ -> throwIO e
)
-- 2. Upload a file to bucket/object. -- 2. Upload a file to bucket/object.
fPutObject bucket object localFile fPutObject bucket object localFile defaultPutObjectOptions
-- 3. Copy bucket/object to bucket/objectCopy. -- 3. Copy bucket/object to bucket/objectCopy.
copyObject def {dstBucket = bucket, dstObject = objectCopy} def { srcBucket = bucket , srcObject = object } copyObject
defaultDestinationInfo {dstBucket = bucket, dstObject = objectCopy}
defaultSourceInfo {srcBucket = bucket, srcObject = object}
case res1 of case res1 of
Left e -> putStrLn $ "copyObject failed." ++ show e Left e -> putStrLn $ "copyObject failed." ++ show e
Right () -> putStrLn "copyObject succeeded." Right () -> putStrLn "copyObject succeeded."


@ -1,5 +1,5 @@
#!/usr/bin/env stack #!/usr/bin/env stack
-- stack --resolver lts-11.1 runghc --package minio-hs --package optparse-applicative --package filepath -- stack --resolver lts-14.11 runghc --package minio-hs --package optparse-applicative --package filepath
-- --
-- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc. -- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc.
@ -16,39 +16,39 @@
-- See the License for the specific language governing permissions and -- See the License for the specific language governing permissions and
-- limitations under the License. -- limitations under the License.
-- --
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE ScopedTypeVariables #-}
import Network.Minio
import Data.Monoid ((<>)) import Data.Text (pack)
import Data.Text (pack) import Network.Minio
import Options.Applicative import Options.Applicative
import System.FilePath.Posix import System.FilePath.Posix
import UnliftIO (throwIO, try) import UnliftIO (throwIO, try)
import Prelude
import Prelude
-- | The following example uses minio's play server at -- | The following example uses minio's play server at
-- https://play.min.io. The endpoint and associated -- https://play.min.io. The endpoint and associated
-- credentials are provided via the library constant, -- credentials are provided via the library constant,
-- --
-- > minioPlayCI :: ConnectInfo -- > minioPlayCI :: ConnectInfo
--
-- optparse-applicative package based command-line parsing. -- optparse-applicative package based command-line parsing.
fileNameArgs :: Parser FilePath fileNameArgs :: Parser FilePath
fileNameArgs = strArgument fileNameArgs =
(metavar "FILENAME" strArgument
<> help "Name of file to upload to AWS S3 or a MinIO server") ( metavar "FILENAME"
<> help "Name of file to upload to AWS S3 or a MinIO server"
)
cmdParser = info cmdParser :: ParserInfo FilePath
(helper <*> fileNameArgs) cmdParser =
(fullDesc info
<> progDesc "FileUploader" (helper <*> fileNameArgs)
<> header ( fullDesc
"FileUploader - a simple file-uploader program using minio-hs") <> progDesc "FileUploader"
<> header
"FileUploader - a simple file-uploader program using minio-hs"
)
main :: IO () main :: IO ()
main = do main = do
@ -62,13 +62,13 @@ main = do
-- Make a bucket; catch bucket already exists exception if thrown. -- Make a bucket; catch bucket already exists exception if thrown.
bErr <- try $ makeBucket bucket Nothing bErr <- try $ makeBucket bucket Nothing
case bErr of case bErr of
Left (MErrService BucketAlreadyOwnedByYou) -> return () Left BucketAlreadyOwnedByYou -> return ()
Left e -> throwIO e Left e -> throwIO e
Right _ -> return () Right _ -> return ()
-- Upload filepath to bucket; object is derived from filepath. -- Upload filepath to bucket; object is derived from filepath.
fPutObject bucket object filepath def fPutObject bucket object filepath defaultPutObjectOptions
case res of case res of
Left e -> putStrLn $ "file upload failed due to " ++ (show e) Left e -> putStrLn $ "file upload failed due to " ++ show e
Right () -> putStrLn "file upload succeeded." Right () -> putStrLn "file upload succeeded."


@ -1,5 +1,5 @@
#!/usr/bin/env stack #!/usr/bin/env stack
-- stack --resolver lts-11.1 runghc --package minio-hs -- stack --resolver lts-14.11 runghc --package minio-hs
-- --
-- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc. -- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc.
@ -17,14 +17,14 @@
-- limitations under the License. -- limitations under the License.
-- --
{-# LANGUAGE OverloadedStrings #-} import Network.Minio
import Network.Minio import Network.Minio.AdminAPI
import Network.Minio.AdminAPI import Prelude
import Prelude
main :: IO () main :: IO ()
main = do main = do
res <- runMinio def $ res <-
runMinio
minioPlayCI
getConfig getConfig
print res print res


@ -1,5 +1,5 @@
#!/usr/bin/env stack #!/usr/bin/env stack
-- stack --resolver lts-11.1 runghc --package minio-hs -- stack --resolver lts-14.11 runghc --package minio-hs
-- --
-- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc. -- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc.
@ -16,31 +16,26 @@
-- See the License for the specific language governing permissions and -- See the License for the specific language governing permissions and
-- limitations under the License. -- limitations under the License.
-- --
{-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE OverloadedStrings #-}
import Network.Minio
import qualified Data.Conduit as C import qualified Data.Conduit as C
import qualified Data.Conduit.Binary as CB import qualified Data.Conduit.Binary as CB
import Network.Minio
import Prelude import Prelude
-- | The following example uses minio's play server at -- | The following example uses minio's play server at
-- https://play.min.io. The endpoint and associated -- https://play.min.io. The endpoint and associated
-- credentials are provided via the libary constant, -- credentials are provided via the libary constant,
-- --
-- > minioPlayCI :: ConnectInfo -- > minioPlayCI :: ConnectInfo
--
main :: IO () main :: IO ()
main = do main = do
let let bucket = "my-bucket"
bucket = "my-bucket"
object = "my-object" object = "my-object"
res <- runMinio minioPlayCI $ do res <- runMinio minioPlayCI $ do
src <- getObject bucket object def src <- getObject bucket object defaultGetObjectOptions
C.connect src $ CB.sinkFileCautious "/tmp/my-object" C.connect (gorObjectStream src) $ CB.sinkFileCautious "/tmp/my-object"
case res of case res of
Left e -> putStrLn $ "getObject failed." ++ (show e) Left e -> putStrLn $ "getObject failed." ++ show e
Right _ -> putStrLn "getObject succeeded." Right _ -> putStrLn "getObject succeeded."


@ -1,5 +1,5 @@
#!/usr/bin/env stack #!/usr/bin/env stack
-- stack --resolver lts-11.1 runghc --package minio-hs -- stack --resolver lts-14.11 runghc --package minio-hs
-- --
-- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc. -- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc.
@ -16,28 +16,25 @@
-- See the License for the specific language governing permissions and -- See the License for the specific language governing permissions and
-- limitations under the License. -- limitations under the License.
-- --
{-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE OverloadedStrings #-}
import Network.Minio
import Network.Minio.S3API
import Prelude import Network.Minio
import Network.Minio.S3API
import Prelude
-- | The following example uses minio's play server at -- | The following example uses minio's play server at
-- https://play.min.io. The endpoint and associated -- https://play.min.io. The endpoint and associated
-- credentials are provided via the library constant, -- credentials are provided via the library constant,
-- --
-- > minioPlayCI :: ConnectInfo -- > minioPlayCI :: ConnectInfo
--
main :: IO () main :: IO ()
main = do main = do
let let bucket = "test"
bucket = "test"
object = "passwd" object = "passwd"
res <- runMinio minioPlayCI $ res <-
headObject bucket object runMinio minioPlayCI $
headObject bucket object []
case res of case res of
Left e -> putStrLn $ "headObject failed." ++ show e Left e -> putStrLn $ "headObject failed." ++ show e
Right objInfo -> putStrLn $ "headObject succeeded." ++ show objInfo Right objInfo -> putStrLn $ "headObject succeeded." ++ show objInfo


@ -1,5 +1,5 @@
#!/usr/bin/env stack #!/usr/bin/env stack
-- stack --resolver lts-11.1 runghc --package minio-hs -- stack --resolver lts-14.11 runghc --package minio-hs
-- --
-- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc. -- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc.
@ -17,18 +17,21 @@
-- limitations under the License. -- limitations under the License.
-- --
{-# LANGUAGE OverloadedStrings #-} import Network.Minio
import Network.Minio import Network.Minio.AdminAPI
import Network.Minio.AdminAPI import Prelude
import Prelude
main :: IO () main :: IO ()
main = do main = do
res <- runMinio def $ res <- runMinio minioPlayCI $
do do
hsr <- startHeal Nothing Nothing HealOpts { hoRecursive = True hsr <-
, hoDryRun = False startHeal
} Nothing
Nothing
HealOpts
{ hoRecursive = True,
hoDryRun = False
}
getHealStatus Nothing Nothing (hsrClientToken hsr) getHealStatus Nothing Nothing (hsrClientToken hsr)
print res print res


@ -1,5 +1,5 @@
#!/usr/bin/env stack #!/usr/bin/env stack
-- stack --resolver lts-11.1 runghc --package minio-hs -- stack --resolver lts-14.11 runghc --package minio-hs
-- --
-- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc. -- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc.
@ -16,19 +16,17 @@
-- See the License for the specific language governing permissions and -- See the License for the specific language governing permissions and
-- limitations under the License. -- limitations under the License.
-- --
{-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE OverloadedStrings #-}
import Network.Minio
import Control.Monad.IO.Class (liftIO) import Control.Monad.IO.Class (liftIO)
import Prelude import Network.Minio
import Prelude
-- | The following example uses minio's play server at -- | The following example uses minio's play server at
-- https://play.min.io. The endpoint and associated -- https://play.min.io. The endpoint and associated
-- credentials are provided via the library constant, -- credentials are provided via the library constant,
-- --
-- > minioPlayCI :: ConnectInfo -- > minioPlayCI :: ConnectInfo
--
-- This example lists buckets that belong to the user and returns -- This example lists buckets that belong to the user and returns
-- region of the first bucket returned. -- region of the first bucket returned.


@ -1,5 +1,5 @@
#!/usr/bin/env stack #!/usr/bin/env stack
-- stack --resolver lts-11.1 runghc --package minio-hs -- stack --resolver lts-14.11 runghc --package minio-hs
-- --
-- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc. -- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc.
@ -16,38 +16,36 @@
-- See the License for the specific language governing permissions and -- See the License for the specific language governing permissions and
-- limitations under the License. -- limitations under the License.
-- --
{-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE OverloadedStrings #-}
import Network.Minio
import Conduit import Conduit
import Prelude import Network.Minio
import Prelude
-- | The following example uses minio's play server at -- | The following example uses minio's play server at
-- https://play.min.io. The endpoint and associated -- https://play.min.io. The endpoint and associated
-- credentials are provided via the library constant, -- credentials are provided via the library constant,
-- --
-- > minioPlayCI :: ConnectInfo -- > minioPlayCI :: ConnectInfo
--
main :: IO () main :: IO ()
main = do main = do
let let bucket = "test"
bucket = "test"
-- Performs a recursive listing of incomplete uploads under bucket "test" -- Performs a recursive listing of incomplete uploads under bucket "test"
-- on a local minio server. -- on a local minio server.
res <- runMinio minioPlayCI $ res <-
runConduit $ listIncompleteUploads bucket Nothing True .| mapM_C (\v -> (liftIO $ print v)) runMinio minioPlayCI $
runConduit $
listIncompleteUploads bucket Nothing True .| mapM_C (liftIO . print)
print res print res
{- {-
Following is the output of the above program on a local MinIO server. Following is the output of the above program on a local MinIO server.
Right [UploadInfo { uiKey = "go1.6.2.linux-amd64.tar.gz" Right [UploadInfo { uiKey = "go1.6.2.linux-amd64.tar.gz"
, uiUploadId = "063eb592-bdd7-4a0c-be48-34fb3ceb63e2" , uiUploadId = "063eb592-bdd7-4a0c-be48-34fb3ceb63e2"
, uiInitTime = 2017-03-01 10:16:25.698 UTC , uiInitTime = 2017-03-01 10:16:25.698 UTC
, uiSize = 17731794 , uiSize = 17731794
} }
] ]
-} -}


@ -1,5 +1,5 @@
#!/usr/bin/env stack #!/usr/bin/env stack
-- stack --resolver lts-11.1 runghc --package minio-hs -- stack --resolver lts-14.11 runghc --package minio-hs
-- --
-- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc. -- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc.
@ -16,33 +16,31 @@
-- See the License for the specific language governing permissions and -- See the License for the specific language governing permissions and
-- limitations under the License. -- limitations under the License.
-- --
{-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE OverloadedStrings #-}
import Network.Minio
import Conduit
import Prelude
import Conduit
import Network.Minio
import Prelude
-- | The following example uses minio's play server at -- | The following example uses minio's play server at
-- https://play.min.io. The endpoint and associated -- https://play.min.io. The endpoint and associated
-- credentials are provided via the library constant, -- credentials are provided via the library constant,
-- --
-- > minioPlayCI :: ConnectInfo -- > minioPlayCI :: ConnectInfo
--
main :: IO () main :: IO ()
main = do main = do
let let bucket = "test"
bucket = "test"
-- Performs a recursive listing of all objects under bucket "test" -- Performs a recursive listing of all objects under bucket "test"
-- on play.min.io. -- on play.min.io.
res <- runMinio minioPlayCI $ res <-
runConduit $ listObjects bucket Nothing True .| mapM_C (\v -> (liftIO $ print v)) runMinio minioPlayCI $
runConduit $
listObjects bucket Nothing True .| mapM_C (liftIO . print)
print res print res
{- {-
Following is the output of the above program on a local MinIO server. Following is the output of the above program on a local MinIO server.
Right [ObjectInfo {oiObject = "ADVANCED.png", oiModTime = 2017-02-10 05:33:24.816 UTC, oiETag = "\"a69f3af6bbb06fe1d42ac910ec30482f\"", oiSize = 94026},ObjectInfo {oiObject = "obj", oiModTime = 2017-02-10 08:49:26.777 UTC, oiETag = "\"715a872a253a3596652c1490081b4b6a-1\"", oiSize = 15728640}] Right [ObjectInfo {oiObject = "ADVANCED.png", oiModTime = 2017-02-10 05:33:24.816 UTC, oiETag = "\"a69f3af6bbb06fe1d42ac910ec30482f\"", oiSize = 94026},ObjectInfo {oiObject = "obj", oiModTime = 2017-02-10 08:49:26.777 UTC, oiETag = "\"715a872a253a3596652c1490081b4b6a-1\"", oiSize = 15728640}]
-} -}


@ -1,5 +1,5 @@
#!/usr/bin/env stack #!/usr/bin/env stack
-- stack --resolver lts-11.1 runghc --package minio-hs -- stack --resolver lts-14.11 runghc --package minio-hs
-- --
-- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc. -- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc.
@ -16,24 +16,21 @@
-- See the License for the specific language governing permissions and -- See the License for the specific language governing permissions and
-- limitations under the License. -- limitations under the License.
-- --
{-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE OverloadedStrings #-}
import Network.Minio
import Prelude import Network.Minio
import Prelude
-- | The following example uses minio's play server at -- | The following example uses minio's play server at
-- https://play.min.io. The endpoint and associated -- https://play.min.io. The endpoint and associated
-- credentials are provided via the library constant, -- credentials are provided via the library constant,
-- --
-- > minioPlayCI :: ConnectInfo -- > minioPlayCI :: ConnectInfo
--
main :: IO () main :: IO ()
main = do main = do
let bucket = "my-bucket" let bucket = "my-bucket"
res <- runMinio minioPlayCI $ res <-
-- N B the region provided for makeBucket is optional. runMinio minioPlayCI $
makeBucket bucket (Just "us-east-1") -- N B the region provided for makeBucket is optional.
makeBucket bucket (Just "us-east-1")
print res print res


@ -1,5 +1,5 @@
#!/usr/bin/env stack #!/usr/bin/env stack
-- stack --resolver lts-11.1 runghc --package minio-hs -- stack --resolver lts-14.11 runghc --package minio-hs
-- --
-- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc. -- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc.
@ -16,42 +16,40 @@
-- See the License for the specific language governing permissions and -- See the License for the specific language governing permissions and
-- limitations under the License. -- limitations under the License.
-- --
{-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE OverloadedStrings #-}
import Network.Minio
import Control.Monad.IO.Class (liftIO) import Control.Monad.IO.Class (liftIO)
import qualified Data.ByteString.Char8 as B import qualified Data.ByteString.Char8 as B
import Data.CaseInsensitive (original) import Data.CaseInsensitive (original)
import qualified Data.Conduit.Combinators as CC import qualified Data.Conduit.Combinators as CC
import qualified Data.Text.Encoding as E import qualified Data.Text.Encoding as E
import Network.Minio
-- | The following example uses minio's play server at -- | The following example uses minio's play server at
-- https://play.min.io. The endpoint and associated -- https://play.min.io. The endpoint and associated
-- credentials are provided via the library constant, -- credentials are provided via the library constant,
-- --
-- > minioPlayCI :: ConnectInfo -- > minioPlayCI :: ConnectInfo
--
main :: IO () main :: IO ()
main = do main = do
let let bucket = "my-bucket"
bucket = "my-bucket" object = "my-object"
object = "my-object" kb15 = 15 * 1024
kb15 = 15*1024 -- Set query parameter to modify content disposition response
-- header
-- Set query parameter to modify content disposition response queryParam =
-- header [ ( "response-content-disposition",
queryParam = [("response-content-disposition", Just "attachment; filename=\"your-filename.txt\""
Just "attachment; filename=\"your-filename.txt\"")] )
]
res <- runMinio minioPlayCI $ do res <- runMinio minioPlayCI $ do
liftIO $ B.putStrLn "Upload a file that we will fetch with a presigned URL..." liftIO $ B.putStrLn "Upload a file that we will fetch with a presigned URL..."
putObject bucket object (CC.repeat "a") (Just kb15) def putObject bucket object (CC.repeat "a") (Just kb15) defaultPutObjectOptions
liftIO $ putStrLn $ "Done. Object created at: my-bucket/my-object" liftIO $ putStrLn "Done. Object created at: my-bucket/my-object"
-- Extract Etag of uploaded object -- Extract Etag of uploaded object
oi <- statObject bucket object oi <- statObject bucket object defaultGetObjectOptions
let etag = oiETag oi let etag = oiETag oi
-- Set header to add an if-match constraint - this makes sure -- Set header to add an if-match constraint - this makes sure
@ -61,23 +59,29 @@ main = do
-- Generate a URL with 7 days expiry time - note that the headers -- Generate a URL with 7 days expiry time - note that the headers
-- used above must be added to the request with the signed URL -- used above must be added to the request with the signed URL
-- generated. -- generated.
url <- presignedGetObjectUrl "my-bucket" "my-object" (7*24*3600) url <-
queryParam headers presignedGetObjectUrl
"my-bucket"
"my-object"
(7 * 24 * 3600)
queryParam
headers
return (headers, etag, url) return (headers, etag, url)
case res of case res of
Left e -> putStrLn $ "presignedPutObject URL failed." ++ show e Left e -> putStrLn $ "presignedPutObject URL failed." ++ show e
Right (headers, etag, url) -> do Right (headers, _, url) -> do
-- We generate a curl command to demonstrate usage of the signed -- We generate a curl command to demonstrate usage of the signed
-- URL. -- URL.
let let hdrOpt (k, v) = B.concat ["-H '", original k, ": ", v, "'"]
hdrOpt (k, v) = B.concat ["-H '", original k, ": ", v, "'"] curlCmd =
curlCmd = B.intercalate " " $ B.intercalate " " $
["curl --fail"] ++ map hdrOpt headers ++ ["curl --fail"]
["-o /tmp/myfile", B.concat ["'", url, "'"]] ++ map hdrOpt headers
++ ["-o /tmp/myfile", B.concat ["'", url, "'"]]
putStrLn $ "The following curl command would use the presigned " ++ putStrLn $
"URL to fetch the object and write it to \"/tmp/myfile\":" "The following curl command would use the presigned "
++ "URL to fetch the object and write it to \"/tmp/myfile\":"
B.putStrLn curlCmd B.putStrLn curlCmd


@ -1,5 +1,5 @@
#!/usr/bin/env stack #!/usr/bin/env stack
-- stack --resolver lts-11.1 runghc --package minio-hs -- stack --resolver lts-14.11 runghc --package minio-hs
-- --
-- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc. -- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc.
@ -16,69 +16,72 @@
-- See the License for the specific language governing permissions and -- See the License for the specific language governing permissions and
-- limitations under the License. -- limitations under the License.
-- --
{-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE OverloadedStrings #-}
import Network.Minio
import qualified Data.ByteString as B import qualified Data.ByteString as B
import qualified Data.ByteString.Char8 as Char8 import qualified Data.ByteString.Char8 as Char8
import qualified Data.HashMap.Strict as H import qualified Data.HashMap.Strict as H
import qualified Data.Text.Encoding as Enc import qualified Data.Text.Encoding as Enc
import qualified Data.Time as Time import qualified Data.Time as Time
import Network.Minio
-- | The following example uses minio's play server at -- | The following example uses minio's play server at
-- https://play.min.io. The endpoint and associated -- https://play.min.io. The endpoint and associated
-- credentials are provided via the library constant, -- credentials are provided via the library constant,
-- --
-- > minioPlayCI :: ConnectInfo -- > minioPlayCI :: ConnectInfo
--
main :: IO () main :: IO ()
main = do main = do
now <- Time.getCurrentTime now <- Time.getCurrentTime
let let bucket = "my-bucket"
bucket = "my-bucket" object = "photos/my-object"
object = "my-object" -- set an expiration time of 10 days
expireTime = Time.addUTCTime (3600 * 24 * 10) now
-- set an expiration time of 10 days -- create a policy with expiration time and conditions - since the
expireTime = Time.addUTCTime (3600 * 24 * 10) now -- conditions are validated, newPostPolicy returns an Either value
policyE =
-- create a policy with expiration time and conditions - since the newPostPolicy
-- conditions are validated, newPostPolicy returns an Either value expireTime
policyE = newPostPolicy expireTime [ -- set the object name condition
[ -- set the object name condition ppCondKey object,
ppCondKey "photos/my-object" -- set the bucket name condition
-- set the bucket name condition ppCondBucket bucket,
, ppCondBucket "my-bucket" -- set the size range of object as 1B to 10MiB
-- set the size range of object as 1B to 10MiB ppCondContentLengthRange 1 (10 * 1024 * 1024),
, ppCondContentLengthRange 1 (10*1024*1024) -- set content type as jpg image
-- set content type as jpg image ppCondContentType "image/jpeg",
, ppCondContentType "image/jpeg" -- on success set the server response code to 200
-- on success set the server response code to 200 ppCondSuccessActionStatus 200
, ppCondSuccessActionStatus 200 ]
]
case policyE of case policyE of
Left err -> putStrLn $ show err Left err -> print err
Right policy -> do Right policy -> do
res <- runMinio minioPlayCI $ do res <- runMinio minioPlayCI $ do
(url, formData) <- presignedPostPolicy policy (url, formData) <- presignedPostPolicy policy
-- a curl command is output to demonstrate using the generated -- a curl command is output to demonstrate using the generated
-- URL and form-data -- URL and form-data
let let formFn (k, v) =
formFn (k, v) = B.concat ["-F ", Enc.encodeUtf8 k, "=", B.concat
"'", v, "'"] [ "-F ",
formOptions = B.intercalate " " $ map formFn $ H.toList formData Enc.encodeUtf8 k,
"=",
"'",
v,
"'"
]
formOptions = B.intercalate " " $ map formFn $ H.toList formData
return $
return $ B.intercalate " " $ B.intercalate
["curl", formOptions, "-F file=@/tmp/photo.jpg", url] " "
["curl", formOptions, "-F file=@/tmp/photo.jpg", url]
case res of case res of
Left e -> putStrLn $ "post-policy error: " ++ (show e) Left e -> putStrLn $ "post-policy error: " ++ show e
Right cmd -> do Right cmd -> do
putStrLn $ "Put a photo at /tmp/photo.jpg and run command:\n" putStrLn "Put a photo at /tmp/photo.jpg and run command:\n"
-- print the generated curl command -- print the generated curl command
Char8.putStrLn cmd Char8.putStrLn cmd


@ -1,5 +1,5 @@
#!/usr/bin/env stack #!/usr/bin/env stack
-- stack --resolver lts-11.1 runghc --package minio-hs -- stack --resolver lts-14.11 runghc --package minio-hs
-- --
-- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc. -- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc.
@ -16,44 +16,43 @@
-- See the License for the specific language governing permissions and -- See the License for the specific language governing permissions and
-- limitations under the License. -- limitations under the License.
-- --
{-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE OverloadedStrings #-}
import Network.Minio
import qualified Data.ByteString.Char8 as B import qualified Data.ByteString.Char8 as B
import Data.CaseInsensitive (original) import Data.CaseInsensitive (original)
import Network.Minio
-- | The following example uses minio's play server at -- | The following example uses minio's play server at
-- https://play.min.io. The endpoint and associated -- https://play.min.io. The endpoint and associated
-- credentials are provided via the library constant, -- credentials are provided via the library constant,
-- --
-- > minioPlayCI :: ConnectInfo -- > minioPlayCI :: ConnectInfo
--
main :: IO () main :: IO ()
main = do main = do
let let -- Use headers to set user-metadata - note that this header will
-- Use headers to set user-metadata - note that this header will -- need to be set when the URL is used to make an upload.
-- need to be set when the URL is used to make an upload. headers =
headers = [("x-amz-meta-url-creator", [ ( "x-amz-meta-url-creator",
"minio-hs-presigned-put-example")] "minio-hs-presigned-put-example"
)
]
res <- runMinio minioPlayCI $ do res <- runMinio minioPlayCI $ do
-- generate a URL with 7 days expiry time -- generate a URL with 7 days expiry time
presignedPutObjectUrl "my-bucket" "my-object" (7*24*3600) headers presignedPutObjectUrl "my-bucket" "my-object" (7 * 24 * 3600) headers
case res of case res of
Left e -> putStrLn $ "presignedPutObject URL failed." ++ show e Left e -> putStrLn $ "presignedPutObject URL failed." ++ show e
Right url -> do Right url -> do
-- We generate a curl command to demonstrate usage of the signed -- We generate a curl command to demonstrate usage of the signed
-- URL. -- URL.
let let hdrOpt (k, v) = B.concat ["-H '", original k, ": ", v, "'"]
hdrOpt (k, v) = B.concat ["-H '", original k, ": ", v, "'"] curlCmd =
curlCmd = B.intercalate " " $ B.intercalate " " $
["curl "] ++ map hdrOpt headers ++ ["curl "]
["-T /tmp/myfile", B.concat ["'", url, "'"]] ++ map hdrOpt headers
++ ["-T /tmp/myfile", B.concat ["'", url, "'"]]
putStrLn $ "The following curl command would use the presigned " ++ putStrLn $
"URL to upload the file at \"/tmp/myfile\":" "The following curl command would use the presigned "
++ "URL to upload the file at \"/tmp/myfile\":"
B.putStrLn curlCmd B.putStrLn curlCmd
@ -1,5 +1,5 @@
#!/usr/bin/env stack #!/usr/bin/env stack
-- stack --resolver lts-11.1 runghc --package minio-hs -- stack --resolver lts-14.11 runghc --package minio-hs
-- --
-- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc. -- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc.
@ -16,39 +16,36 @@
-- See the License for the specific language governing permissions and -- See the License for the specific language governing permissions and
-- limitations under the License. -- limitations under the License.
-- --
{-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE OverloadedStrings #-}
import Network.Minio
import qualified Data.Conduit.Combinators as CC import qualified Data.Conduit.Combinators as CC
import Network.Minio
import Prelude import Prelude
-- | The following example uses minio's play server at -- | The following example uses minio's play server at
-- https://play.min.io. The endpoint and associated -- https://play.min.io. The endpoint and associated
-- credentials are provided via the library constant, -- credentials are provided via the library constant,
-- --
-- > minioPlayCI :: ConnectInfo -- > minioPlayCI :: ConnectInfo
--
main :: IO () main :: IO ()
main = do main = do
let let bucket = "test"
bucket = "test"
object = "obj" object = "obj"
localFile = "/etc/lsb-release" localFile = "/etc/lsb-release"
kb15 = 15 * 1024 kb15 = 15 * 1024
-- Eg 1. Upload a stream of repeating "a" using putObject with default options. -- Eg 1. Upload a stream of repeating "a" using putObject with default options.
res1 <- runMinio minioPlayCI $ res1 <-
putObject bucket object (CC.repeat "a") (Just kb15) def runMinio minioPlayCI $
putObject bucket object (CC.repeat "a") (Just kb15) defaultPutObjectOptions
case res1 of case res1 of
Left e -> putStrLn $ "putObject failed." ++ show e Left e -> putStrLn $ "putObject failed." ++ show e
Right () -> putStrLn "putObject succeeded." Right () -> putStrLn "putObject succeeded."
-- Eg 2. Upload a file using fPutObject with default options. -- Eg 2. Upload a file using fPutObject with default options.
res2 <- runMinio minioPlayCI $ res2 <-
fPutObject bucket object localFile def runMinio minioPlayCI $
fPutObject bucket object localFile defaultPutObjectOptions
case res2 of case res2 of
Left e -> putStrLn $ "fPutObject failed." ++ show e Left e -> putStrLn $ "fPutObject failed." ++ show e
Right () -> putStrLn "fPutObject succeeded." Right () -> putStrLn "fPutObject succeeded."
examples/README.md Normal file
@ -0,0 +1,17 @@
# Examples
The examples in this directory illustrate usage of various APIs provided by this library. Each file is self-contained and can be run like a script directly.
To build the examples, the `examples` build flag needs to be enabled:
```sh
stack build --flag minio-hs:examples
```
Now, to run an example script such as [BucketExists.hs](https://github.com/minio/minio-hs/blob/master/examples/BucketExists.hs):
```sh
stack exec BucketExists
```
The CI system is configured to build these examples with every change, so they should be current.
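For reference, a minimal script in the same style as the files in this directory might look like the following. This is only a sketch: it assumes the `minioPlayCI` constant and the `listBuckets` action exported by `Network.Minio`, and it requires network access to https://play.min.io to actually run.

```haskell
#!/usr/bin/env stack
-- stack --resolver lts-14.11 runghc --package minio-hs
{-# LANGUAGE OverloadedStrings #-}

import Network.Minio
import Prelude

-- Connect to the public play server and print the buckets it holds.
main :: IO ()
main = do
  res <- runMinio minioPlayCI listBuckets
  case res of
    Left e -> putStrLn $ "listBuckets failed: " ++ show e
    Right buckets -> mapM_ print buckets
```

Because each script carries its own stack shebang line, it can be run directly with `stack <script>.hs` even without the `examples` build flag.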
@ -1,5 +1,5 @@
#!/usr/bin/env stack #!/usr/bin/env stack
-- stack --resolver lts-11.1 runghc --package minio-hs -- stack --resolver lts-14.11 runghc --package minio-hs
-- --
-- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc. -- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc.
@ -16,23 +16,18 @@
-- See the License for the specific language governing permissions and -- See the License for the specific language governing permissions and
-- limitations under the License. -- limitations under the License.
-- --
{-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE OverloadedStrings #-}
import Network.Minio
import Prelude import Network.Minio
import Prelude
-- | The following example uses minio's play server at -- | The following example uses minio's play server at
-- https://play.min.io. The endpoint and associated -- https://play.min.io. The endpoint and associated
-- credentials are provided via the library constant, -- credentials are provided via the library constant,
-- --
-- > minioPlayCI :: ConnectInfo -- > minioPlayCI :: ConnectInfo
--
main :: IO () main :: IO ()
main = do main = do
let let bucket = "my-bucket"
bucket = "my-bucket"
res <- runMinio minioPlayCI $ removeBucket bucket res <- runMinio minioPlayCI $ removeBucket bucket
print res print res
@ -1,5 +1,5 @@
#!/usr/bin/env stack #!/usr/bin/env stack
-- stack --resolver lts-11.1 runghc --package minio-hs -- stack --resolver lts-14.11 runghc --package minio-hs
-- --
-- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc. -- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc.
@ -16,27 +16,24 @@
-- See the License for the specific language governing permissions and -- See the License for the specific language governing permissions and
-- limitations under the License. -- limitations under the License.
-- --
{-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE OverloadedStrings #-}
import Network.Minio
import Prelude import Network.Minio
import Prelude
-- | The following example uses minio's play server at -- | The following example uses minio's play server at
-- https://play.min.io. The endpoint and associated -- https://play.min.io. The endpoint and associated
-- credentials are provided via the library constant, -- credentials are provided via the library constant,
-- --
-- > minioPlayCI :: ConnectInfo -- > minioPlayCI :: ConnectInfo
--
main :: IO () main :: IO ()
main = do main = do
let let bucket = "mybucket"
bucket = "mybucket" object = "myobject"
object = "myobject"
res <- runMinio minioPlayCI $ res <-
removeIncompleteUpload bucket object runMinio minioPlayCI $
removeIncompleteUpload bucket object
case res of case res of
Left _ -> putStrLn $ "Failed to remove " ++ show bucket ++ "/" ++ show object Left _ -> putStrLn $ "Failed to remove " ++ show bucket ++ "/" ++ show object
@ -1,5 +1,5 @@
#!/usr/bin/env stack #!/usr/bin/env stack
-- stack --resolver lts-11.1 runghc --package minio-hs -- stack --resolver lts-14.11 runghc --package minio-hs
-- --
-- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc. -- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc.
@ -16,20 +16,19 @@
-- See the License for the specific language governing permissions and -- See the License for the specific language governing permissions and
-- limitations under the License. -- limitations under the License.
-- --
{-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE OverloadedStrings #-}
import Network.Minio import Network.Minio
import Prelude import Prelude
main :: IO () main :: IO ()
main = do main = do
let let bucket = "mybucket"
bucket = "mybucket" object = "myobject"
object = "myobject"
res <- runMinio minioPlayCI $ res <-
removeObject bucket object runMinio minioPlayCI $
removeObject bucket object
case res of case res of
Left _ -> putStrLn $ "Failed to remove " ++ show bucket ++ "/" ++ show object Left _ -> putStrLn $ "Failed to remove " ++ show bucket ++ "/" ++ show object
@ -1,5 +1,5 @@
#!/usr/bin/env stack #!/usr/bin/env stack
-- stack --resolver lts-13.1 runghc --package minio-hs -- stack --resolver lts-14.11 runghc --package minio-hs
-- --
-- MinIO Haskell SDK, (C) 2019 MinIO, Inc. -- MinIO Haskell SDK, (C) 2019 MinIO, Inc.
@ -16,35 +16,32 @@
-- See the License for the specific language governing permissions and -- See the License for the specific language governing permissions and
-- limitations under the License. -- limitations under the License.
-- --
{-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE OverloadedStrings #-}
import Network.Minio
import qualified Conduit as C import qualified Conduit as C
import Control.Monad (when) import Control.Monad (unless)
import qualified Data.ByteString.Lazy as LB import Network.Minio
import Prelude
import Prelude
main :: IO () main :: IO ()
main = do main = do
let bucket = "selectbucket" let bucket = "selectbucket"
object = "1.csv" object = "1.csv"
content = "Name,Place,Temperature\n" content =
<> "James,San Jose,76\n" "Name,Place,Temperature\n"
<> "Alicia,San Leandro,88\n" <> "James,San Jose,76\n"
<> "Mark,San Carlos,90\n" <> "Alicia,San Leandro,88\n"
<> "Mark,San Carlos,90\n"
res <- runMinio minioPlayCI $ do res <- runMinio minioPlayCI $ do
exists <- bucketExists bucket
unless exists $
makeBucket bucket Nothing
exists <- bucketExists bucket C.liftIO $ putStrLn "Uploading csv object"
when (not exists) $ putObject bucket object (C.sourceLazy content) Nothing defaultPutObjectOptions
makeBucket bucket Nothing
C.liftIO $ putStrLn "Uploading csv object" let sr = selectRequest "Select * from s3object" defaultCsvInput defaultCsvOutput
putObject bucket object (C.sourceLazy content) Nothing defaultPutObjectOptions res <- selectObjectContent bucket object sr
C.runConduit $ res C..| getPayloadBytes C..| C.stdoutC
let sr = selectRequest "Select * from s3object" defaultCsvInput defaultCsvOutput print res
res <- selectObjectContent bucket object sr
C.runConduit $ res C..| getPayloadBytes C..| C.stdoutC
print res
@ -1,5 +1,5 @@
#!/usr/bin/env stack #!/usr/bin/env stack
-- stack --resolver lts-11.1 runghc --package minio-hs -- stack --resolver lts-14.11 runghc --package minio-hs
-- --
-- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc. -- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc.
@ -17,14 +17,14 @@
-- limitations under the License. -- limitations under the License.
-- --
{-# LANGUAGE OverloadedStrings #-} import Network.Minio
import Network.Minio import Network.Minio.AdminAPI
import Network.Minio.AdminAPI import Prelude
import Prelude
main :: IO () main :: IO ()
main = do main = do
res <- runMinio def $ res <-
getServerInfo runMinio
minioPlayCI
getServerInfo
print res print res
@ -1,5 +1,5 @@
#!/usr/bin/env stack #!/usr/bin/env stack
-- stack --resolver lts-11.1 runghc --package minio-hs -- stack --resolver lts-14.11 runghc --package minio-hs
-- --
-- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc. -- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc.
@ -17,14 +17,13 @@
-- limitations under the License. -- limitations under the License.
-- --
{-# LANGUAGE OverloadedStrings #-} import Network.Minio
import Network.Minio import Network.Minio.AdminAPI
import Network.Minio.AdminAPI import Prelude
import Prelude
main :: IO () main :: IO ()
main = do main = do
res <- runMinio def $ res <-
serviceSendAction ServiceActionRestart runMinio minioPlayCI $
serviceSendAction ServiceActionRestart
print res print res
@ -1,5 +1,5 @@
#!/usr/bin/env stack #!/usr/bin/env stack
-- stack --resolver lts-11.1 runghc --package minio-hs -- stack --resolver lts-14.11 runghc --package minio-hs
-- --
-- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc. -- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc.
@ -17,14 +17,13 @@
-- limitations under the License. -- limitations under the License.
-- --
{-# LANGUAGE OverloadedStrings #-} import Network.Minio
import Network.Minio import Network.Minio.AdminAPI
import Network.Minio.AdminAPI import Prelude
import Prelude
main :: IO () main :: IO ()
main = do main = do
res <- runMinio def $ res <-
serviceSendAction ServiceActionStop runMinio minioPlayCI $
serviceSendAction ServiceActionStop
print res print res
@ -1,5 +1,5 @@
#!/usr/bin/env stack #!/usr/bin/env stack
-- stack --resolver lts-11.1 runghc --package minio-hs -- stack --resolver lts-14.11 runghc --package minio-hs
-- --
-- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc. -- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc.
@ -17,14 +17,14 @@
-- limitations under the License. -- limitations under the License.
-- --
{-# LANGUAGE OverloadedStrings #-} import Network.Minio
import Network.Minio import Network.Minio.AdminAPI
import Network.Minio.AdminAPI import Prelude
import Prelude
main :: IO () main :: IO ()
main = do main = do
res <- runMinio def $ res <-
serviceStatus runMinio
minioPlayCI
serviceStatus
print res print res
@ -1,5 +1,5 @@
#!/usr/bin/env stack #!/usr/bin/env stack
-- stack --resolver lts-11.1 runghc --package minio-hs -- stack --resolver lts-14.11 runghc --package minio-hs
-- --
-- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc. -- MinIO Haskell SDK, (C) 2017, 2018 MinIO, Inc.
@ -16,16 +16,15 @@
-- See the License for the specific language governing permissions and -- See the License for the specific language governing permissions and
-- limitations under the License. -- limitations under the License.
-- --
{-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE OverloadedStrings #-}
import Network.Minio
import Network.Minio.AdminAPI
import Prelude import Network.Minio
import Network.Minio.AdminAPI
import Prelude
main :: IO () main :: IO ()
main = do main = do
res <- runMinio def $ res <- runMinio minioPlayCI $
do do
let config = "{\"version\":\"25\",\"credential\":{\"accessKey\":\"minio\",\"secretKey\":\"minio123\"},\"region\":\"\",\"browser\":\"on\",\"worm\":\"off\",\"domain\":\"\",\"storageclass\":{\"standard\":\"\",\"rrs\":\"\"},\"cache\":{\"drives\":[],\"expiry\":90,\"exclude\":[]},\"notify\":{\"amqp\":{\"2\":{\"enable\":false,\"url\":\"amqp://guest:guest@localhost:5672/\",\"exchange\":\"minio\",\"routingKey\":\"minio\",\"exchangeType\":\"direct\",\"deliveryMode\":0,\"mandatory\":false,\"immediate\":false,\"durable\":false,\"internal\":false,\"noWait\":false,\"autoDeleted\":false}},\"elasticsearch\":{\"1\":{\"enable\":false,\"format\":\"namespace\",\"url\":\"http://localhost:9200\",\"index\":\"minio_events\"}},\"kafka\":{\"1\":{\"enable\":false,\"brokers\":null,\"topic\":\"\"}},\"mqtt\":{\"1\":{\"enable\":false,\"broker\":\"\",\"topic\":\"\",\"qos\":0,\"clientId\":\"\",\"username\":\"\",\"password\":\"\",\"reconnectInterval\":0,\"keepAliveInterval\":0}},\"mysql\":{\"1\":{\"enable\":false,\"format\":\"namespace\",\"dsnString\":\"\",\"table\":\"\",\"host\":\"\",\"port\":\"\",\"user\":\"\",\"password\":\"\",\"database\":\"\"}},\"nats\":{\"1\":{\"enable\":false,\"address\":\"\",\"subject\":\"\",\"username\":\"\",\"password\":\"\",\"token\":\"\",\"secure\":false,\"pingInterval\":0,\"streaming\":{\"enable\":false,\"clusterID\":\"\",\"clientID\":\"\",\"async\":false,\"maxPubAcksInflight\":0}}},\"postgresql\":{\"1\":{\"enable\":false,\"format\":\"namespace\",\"connectionString\":\"\",\"table\":\"\",\"host\":\"\",\"port\":\"\",\"user\":\"\",\"password\":\"\",\"database\":\"\"}},\"redis\":{\"test1\":{\"enable\":true,\"format\":\"namespace\",\"address\":\"127.0.0.1:6379\",\"password\":\"\",\"key\":\"bucketevents_ns\"},\"test2\":{\"enable\":true,\"format\":\"access\",\"address\":\"127.0.0.1:6379\",\"password\":\"\",\"key\":\"bucketevents_log\"}},\"webhook\":{\"1\":{\"enable\":true,\"endpoint\":\"http://localhost:3000\"},\"2\":{\"enable\":true,\"endpoint\":\"http://localhost:3001\"}}}}"
 let config = "{\"version\":\"25\",\"credential\":{\"accessKey\":\"minio\",\"secretKey\":\"minio123\"},\"region\":\"\",\"browser\":\"on\",\"worm\":\"off\",\"domain\":\"\",\"storageclass\":{\"standard\":\"\",\"rrs\":\"\"},\"cache\":{\"drives\":[],\"expiry\":90,\"exclude\":[]},\"notify\":{\"amqp\":{\"2\":{\"enable\":false,\"url\":\"amqp://guest:guest@localhost:5672/\",\"exchange\":\"minio\",\"routingKey\":\"minio\",\"exchangeType\":\"direct\",\"deliveryMode\":0,\"mandatory\":false,\"immediate\":false,\"durable\":false,\"internal\":false,\"noWait\":false,\"autoDeleted\":false}},\"elasticsearch\":{\"1\":{\"enable\":false,\"format\":\"namespace\",\"url\":\"http://localhost:9200\",\"index\":\"minio_events\"}},\"kafka\":{\"1\":{\"enable\":false,\"brokers\":null,\"topic\":\"\"}},\"mqtt\":{\"1\":{\"enable\":false,\"broker\":\"\",\"topic\":\"\",\"qos\":0,\"clientId\":\"\",\"username\":\"\",\"password\":\"\",\"reconnectInterval\":0,\"keepAliveInterval\":0}},\"mysql\":{\"1\":{\"enable\":false,\"format\":\"namespace\",\"dsnString\":\"\",\"table\":\"\",\"host\":\"\",\"port\":\"\",\"user\":\"\",\"password\":\"\",\"database\":\"\"}},\"nats\":{\"1\":{\"enable\":false,\"address\":\"\",\"subject\":\"\",\"username\":\"\",\"password\":\"\",\"token\":\"\",\"secure\":false,\"pingInterval\":0,\"streaming\":{\"enable\":false,\"clusterID\":\"\",\"clientID\":\"\",\"async\":false,\"maxPubAcksInflight\":0}}},\"postgresql\":{\"1\":{\"enable\":false,\"format\":\"namespace\",\"connectionString\":\"\",\"table\":\"\",\"host\":\"\",\"port\":\"\",\"user\":\"\",\"password\":\"\",\"database\":\"\"}},\"redis\":{\"test1\":{\"enable\":true,\"format\":\"namespace\",\"address\":\"127.0.0.1:6379\",\"password\":\"\",\"key\":\"bucketevents_ns\"},\"test2\":{\"enable\":true,\"format\":\"access\",\"address\":\"127.0.0.1:6379\",\"password\":\"\",\"key\":\"bucketevents_log\"}},\"webhook\":{\"1\":{\"enable\":true,\"endpoint\":\"http://localhost:3000\"},\"2\":{\"enable\":true,\"endpoint\":\"http://localhost:3001\"}}}}
"
setConfig config setConfig config
@ -1,5 +1,6 @@
cabal-version: 2.4
name: minio-hs name: minio-hs
version: 1.5.0 version: 1.7.0
synopsis: A MinIO Haskell Library for Amazon S3 compatible cloud synopsis: A MinIO Haskell Library for Amazon S3 compatible cloud
storage. storage.
description: The MinIO Haskell client library provides simple APIs to description: The MinIO Haskell client library provides simple APIs to
@ -13,22 +14,70 @@ maintainer: dev@min.io
category: Network, AWS, Object Storage category: Network, AWS, Object Storage
build-type: Simple build-type: Simple
stability: Experimental stability: Experimental
extra-source-files: extra-doc-files:
CHANGELOG.md CHANGELOG.md
CONTRIBUTING.md CONTRIBUTING.md
docs/API.md docs/API.md
examples/*.hs
README.md README.md
extra-source-files:
examples/*.hs
stack.yaml stack.yaml
tested-with: GHC == 8.10.7
, GHC == 9.0.2
, GHC == 9.2.8
, GHC == 9.4.8
, GHC == 9.6.5
, GHC == 9.8.2
cabal-version: >=1.10 source-repository head
type: git
location: https://github.com/minio/minio-hs.git
library Flag dev
hs-source-dirs: src Description: Build package in development mode
Default: False
Manual: True
common base-settings
ghc-options: -Wall ghc-options: -Wall
exposed-modules: Network.Minio -Wcompat
, Network.Minio.AdminAPI -Widentities
, Network.Minio.S3API -Wincomplete-uni-patterns
-Wincomplete-record-updates
-haddock
if impl(ghc >= 8.0)
ghc-options: -Wredundant-constraints
if impl(ghc >= 8.2)
ghc-options: -fhide-source-paths
if impl(ghc >= 8.4)
ghc-options: -Wpartial-fields
-- -Wmissing-export-lists
if impl(ghc >= 8.8)
ghc-options: -Wmissing-deriving-strategies
-Werror=missing-deriving-strategies
-- if impl(ghc >= 8.10)
-- ghc-options: -Wunused-packages -- disabled due to bug related to mixin config
if impl(ghc >= 9.0)
ghc-options: -Winvalid-haddock
if impl(ghc >= 9.2)
ghc-options: -Wredundant-bang-patterns
if flag(dev)
ghc-options: -Werror
default-language: Haskell2010
default-extensions: BangPatterns
, DerivingStrategies
, FlexibleContexts
, FlexibleInstances
, LambdaCase
, MultiParamTypeClasses
, MultiWayIf
, OverloadedStrings
, RankNTypes
, ScopedTypeVariables
, TupleSections
other-modules: Lib.Prelude other-modules: Lib.Prelude
, Network.Minio.API , Network.Minio.API
, Network.Minio.APICommon , Network.Minio.APICommon
@ -46,22 +95,30 @@ library
, Network.Minio.Utils , Network.Minio.Utils
, Network.Minio.XmlGenerator , Network.Minio.XmlGenerator
, Network.Minio.XmlParser , Network.Minio.XmlParser
, Network.Minio.XmlCommon
, Network.Minio.JsonParser , Network.Minio.JsonParser
, Network.Minio.Credentials.Types
, Network.Minio.Credentials.AssumeRole
, Network.Minio.Credentials
mixins: base hiding (Prelude)
, relude (Relude as Prelude)
, relude
build-depends: base >= 4.7 && < 5 build-depends: base >= 4.7 && < 5
, protolude >= 0.2 && < 0.3 , relude >= 0.7 && < 2
, aeson >= 1.2 , aeson >= 1.2 && < 3
, base64-bytestring >= 1.0 , base64-bytestring >= 1.0
, binary >= 0.8.5.0 , binary >= 0.8.5.0
, bytestring >= 0.10 , bytestring >= 0.10
, case-insensitive >= 1.2 , case-insensitive >= 1.2
, conduit >= 1.3 , conduit >= 1.3
, conduit-extra >= 1.3 , conduit-extra >= 1.3
, connection , crypton-connection
, cryptonite >= 0.25 , cryptonite >= 0.25
, cryptonite-conduit >= 0.2 , cryptonite-conduit >= 0.2
, digest >= 0.0.1 , digest >= 0.0.1
, directory , directory
, exceptions
, filepath >= 1.4 , filepath >= 1.4
, http-client >= 0.5 , http-client >= 0.5
, http-client-tls , http-client-tls
@ -69,203 +126,227 @@ library
, http-types >= 0.12 , http-types >= 0.12
, ini , ini
, memory >= 0.14 , memory >= 0.14
, raw-strings-qq >= 1 , network-uri
, resourcet >= 1.2 , resourcet >= 1.2
, retry , retry
, text >= 1.2 , text >= 1.2
, time >= 1.8 , time >= 1.9
, time-units ^>= 1.0.0
, transformers >= 0.5 , transformers >= 0.5
, unliftio >= 0.2 , unliftio >= 0.2 && < 0.3
, unliftio-core >= 0.1 , unliftio-core >= 0.2 && < 0.3
, unordered-containers >= 0.2 , unordered-containers >= 0.2
, xml-conduit >= 1.8 , xml-conduit >= 1.8
default-language: Haskell2010
default-extensions: BangPatterns library
, FlexibleContexts import: base-settings
, FlexibleInstances hs-source-dirs: src
, MultiParamTypeClasses exposed-modules: Network.Minio
, MultiWayIf , Network.Minio.AdminAPI
, NoImplicitPrelude , Network.Minio.S3API
, OverloadedStrings
, RankNTypes
, ScopedTypeVariables
, TypeFamilies
, TupleSections
Flag live-test Flag live-test
Default: True Description: Build the test suite that runs against a live MinIO server
Default: False
Manual: True Manual: True
test-suite minio-hs-live-server-test test-suite minio-hs-live-server-test
import: base-settings
type: exitcode-stdio-1.0 type: exitcode-stdio-1.0
hs-source-dirs: test, src hs-source-dirs: test, src
main-is: LiveServer.hs main-is: LiveServer.hs
ghc-options: -Wall -threaded -rtsopts -with-rtsopts=-N ghc-options: -Wall -threaded -rtsopts -with-rtsopts=-N
default-language: Haskell2010 other-modules: Network.Minio
default-extensions: BangPatterns
, FlexibleContexts
, FlexibleInstances
, MultiParamTypeClasses
, MultiWayIf
, NoImplicitPrelude
, OverloadedStrings
, RankNTypes
, ScopedTypeVariables
, TupleSections
, TypeFamilies
other-modules: Lib.Prelude
, Network.Minio
, Network.Minio.API
, Network.Minio.API.Test
, Network.Minio.APICommon
, Network.Minio.AdminAPI
, Network.Minio.CopyObject
, Network.Minio.Data
, Network.Minio.Data.ByteString
, Network.Minio.Data.Crypto
, Network.Minio.Data.Time
, Network.Minio.Errors
, Network.Minio.JsonParser
, Network.Minio.JsonParser.Test
, Network.Minio.ListOps
, Network.Minio.PresignedOperations
, Network.Minio.PutObject
, Network.Minio.S3API , Network.Minio.S3API
, Network.Minio.SelectAPI , Network.Minio.AdminAPI
, Network.Minio.Sign.V4 , Network.Minio.API.Test
, Network.Minio.JsonParser.Test
, Network.Minio.TestHelpers , Network.Minio.TestHelpers
, Network.Minio.Utils
, Network.Minio.Utils.Test , Network.Minio.Utils.Test
, Network.Minio.XmlGenerator
, Network.Minio.XmlGenerator.Test , Network.Minio.XmlGenerator.Test
, Network.Minio.XmlParser
, Network.Minio.XmlParser.Test , Network.Minio.XmlParser.Test
build-depends: base >= 4.7 && < 5 , Network.Minio.Credentials
, minio-hs build-depends: minio-hs
, protolude >= 0.1.6 , raw-strings-qq
, QuickCheck
, aeson
, base64-bytestring
, binary
, bytestring
, case-insensitive
, conduit
, conduit-extra
, connection
, cryptonite
, cryptonite-conduit
, digest
, directory
, exceptions
, filepath
, http-client
, http-client-tls
, http-conduit
, http-types
, ini
, memory
, raw-strings-qq >= 1
, resourcet
, retry
, tasty , tasty
, tasty-hunit , tasty-hunit
, tasty-quickcheck , tasty-quickcheck
, tasty-smallcheck , tasty-smallcheck
, temporary , QuickCheck
, text
, time
, transformers
, unliftio
, unliftio-core
, unordered-containers
, xml-conduit
if !flag(live-test) if !flag(live-test)
buildable: False buildable: False
test-suite minio-hs-test test-suite minio-hs-test
import: base-settings
type: exitcode-stdio-1.0 type: exitcode-stdio-1.0
hs-source-dirs: test, src hs-source-dirs: test, src
main-is: Spec.hs main-is: Spec.hs
build-depends: base >= 4.7 && < 5 build-depends: minio-hs
, minio-hs , raw-strings-qq
, protolude >= 0.1.6
, QuickCheck , QuickCheck
, aeson
, base64-bytestring
, binary
, bytestring
, case-insensitive
, conduit
, conduit-extra
, connection
, cryptonite
, cryptonite-conduit
, digest
, directory
, exceptions
, filepath
, http-client
, http-client-tls
, http-conduit
, http-types
, ini
, memory
, raw-strings-qq >= 1
, resourcet
, retry
, tasty , tasty
, tasty-hunit , tasty-hunit
, tasty-quickcheck , tasty-quickcheck
, tasty-smallcheck , tasty-smallcheck
, temporary
, text
, time
, transformers
, unliftio
, unliftio-core
, unordered-containers
, xml-conduit
ghc-options: -Wall -threaded -rtsopts -with-rtsopts=-N ghc-options: -Wall -threaded -rtsopts -with-rtsopts=-N
default-language: Haskell2010
default-extensions: BangPatterns
, FlexibleContexts
, FlexibleInstances
, MultiParamTypeClasses
, MultiWayIf
, NoImplicitPrelude
, OverloadedStrings
, RankNTypes
, ScopedTypeVariables
, TupleSections
, TypeFamilies
other-modules: Lib.Prelude other-modules: Lib.Prelude
, Network.Minio , Network.Minio
, Network.Minio.API
, Network.Minio.API.Test
, Network.Minio.APICommon
, Network.Minio.AdminAPI
, Network.Minio.CopyObject
, Network.Minio.Data
, Network.Minio.Data.ByteString
, Network.Minio.Data.Crypto
, Network.Minio.Data.Time
, Network.Minio.Errors
, Network.Minio.JsonParser
, Network.Minio.JsonParser.Test
, Network.Minio.ListOps
, Network.Minio.PresignedOperations
, Network.Minio.PutObject
, Network.Minio.S3API , Network.Minio.S3API
, Network.Minio.SelectAPI , Network.Minio.AdminAPI
, Network.Minio.Sign.V4
, Network.Minio.TestHelpers , Network.Minio.TestHelpers
, Network.Minio.Utils , Network.Minio.API.Test
, Network.Minio.JsonParser.Test
, Network.Minio.Utils.Test , Network.Minio.Utils.Test
, Network.Minio.XmlGenerator
, Network.Minio.XmlGenerator.Test , Network.Minio.XmlGenerator.Test
, Network.Minio.XmlParser
, Network.Minio.XmlParser.Test , Network.Minio.XmlParser.Test
, Network.Minio.Credentials
source-repository head Flag examples
type: git Description: Build the examples
location: https://github.com/minio/minio-hs Default: False
Manual: True
common examples-settings
ghc-options: -Wall -threaded -rtsopts -with-rtsopts=-N
default-language: Haskell2010
default-extensions: OverloadedStrings
build-depends: base >= 4.7 && < 5
, bytestring
, case-insensitive
, conduit
, conduit-extra
, filepath
, minio-hs
, optparse-applicative
, text
, time
, unliftio
, unordered-containers
hs-source-dirs: examples
if !flag(examples)
buildable: False
executable BucketExists
import: examples-settings
scope: private
main-is: BucketExists.hs
executable CopyObject
import: examples-settings
scope: private
main-is: CopyObject.hs
executable FileUploader
import: examples-settings
scope: private
main-is: FileUploader.hs
executable GetConfig
import: examples-settings
scope: private
main-is: GetConfig.hs
executable GetObject
import: examples-settings
scope: private
main-is: GetObject.hs
executable HeadObject
import: examples-settings
scope: private
main-is: HeadObject.hs
executable Heal
import: examples-settings
scope: private
main-is: Heal.hs
executable ListBuckets
import: examples-settings
scope: private
main-is: ListBuckets.hs
executable ListIncompleteUploads
import: examples-settings
scope: private
main-is: ListIncompleteUploads.hs
executable ListObjects
import: examples-settings
scope: private
main-is: ListObjects.hs
executable MakeBucket
import: examples-settings
scope: private
main-is: MakeBucket.hs
executable PresignedGetObject
import: examples-settings
scope: private
main-is: PresignedGetObject.hs
executable PresignedPostPolicy
import: examples-settings
scope: private
main-is: PresignedPostPolicy.hs
executable PresignedPutObject
import: examples-settings
scope: private
main-is: PresignedPutObject.hs
executable PutObject
import: examples-settings
scope: private
main-is: PutObject.hs
executable RemoveBucket
import: examples-settings
scope: private
main-is: RemoveBucket.hs
executable RemoveIncompleteUpload
import: examples-settings
scope: private
main-is: RemoveIncompleteUpload.hs
executable RemoveObject
import: examples-settings
scope: private
main-is: RemoveObject.hs
executable SelectObject
import: examples-settings
scope: private
main-is: SelectObject.hs
executable ServerInfo
import: examples-settings
scope: private
main-is: ServerInfo.hs
executable ServiceSendRestart
import: examples-settings
scope: private
main-is: ServiceSendRestart.hs
executable ServiceSendStop
import: examples-settings
scope: private
main-is: ServiceSendStop.hs
executable ServiceStatus
import: examples-settings
scope: private
main-is: ServiceStatus.hs
executable SetConfig
import: examples-settings
scope: private
main-is: SetConfig.hs
executable AssumeRole
import: examples-settings
scope: private
main-is: AssumeRole.hs
@ -15,19 +15,41 @@
-- --
module Lib.Prelude module Lib.Prelude
( module Exports ( module Exports,
, both both,
) where showBS,
toStrictBS,
fromStrictBS,
lastMay,
)
where
import Protolude as Exports hiding (catch, catches, import Control.Monad.Trans.Maybe as Exports (MaybeT (..), runMaybeT)
throwIO, try) import qualified Data.ByteString.Lazy as LB
import Data.Time as Exports
import Control.Monad.Trans.Maybe as Exports (MaybeT (..), runMaybeT) ( UTCTime (..),
import Data.Time as Exports (UTCTime (..), diffUTCTime,
diffUTCTime) )
import UnliftIO as Exports (catch, catches, throwIO, import UnliftIO as Exports
try) ( Handler,
catch,
catches,
throwIO,
try,
)
-- | Apply a function on both elements of a pair -- | Apply a function on both elements of a pair
both :: (a -> b) -> (a, a) -> (b, b) both :: (a -> b) -> (a, a) -> (b, b)
both f (a, b) = (f a, f b) both f (a, b) = (f a, f b)
showBS :: (Show a) => a -> ByteString
showBS a = encodeUtf8 (show a :: Text)
toStrictBS :: LByteString -> ByteString
toStrictBS = LB.toStrict
fromStrictBS :: ByteString -> LByteString
fromStrictBS = LB.fromStrict
lastMay :: [a] -> Maybe a
lastMay a = last <$> nonEmpty a
@ -1,5 +1,5 @@
-- --
-- MinIO Haskell SDK, (C) 2017-2019 MinIO, Inc. -- MinIO Haskell SDK, (C) 2017-2023 MinIO, Inc.
-- --
-- Licensed under the Apache License, Version 2.0 (the "License"); -- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License. -- you may not use this file except in compliance with the License.
@ -16,224 +16,237 @@
-- | -- |
-- Module: Network.Minio -- Module: Network.Minio
-- Copyright: (c) 2017-2019 MinIO Dev Team -- Copyright: (c) 2017-2023 MinIO Dev Team
-- License: Apache 2.0 -- License: Apache 2.0
-- Maintainer: MinIO Dev Team <dev@min.io> -- Maintainer: MinIO Dev Team <dev@min.io>
-- --
-- Types and functions to conveniently access S3 compatible object -- Types and functions to conveniently access S3 compatible object
-- storage servers like MinIO. -- storage servers like MinIO.
module Network.Minio
  ( -- * Credentials
    CredentialValue (..),
    credentialValueText,
    AccessKey (..),
    SecretKey (..),
    SessionToken (..),

    -- ** Credential Loaders

    -- | Run actions that retrieve 'CredentialValue's from the environment or
    -- files or other custom sources.
    CredentialLoader,
    fromAWSConfigFile,
    fromAWSEnv,
    fromMinioEnv,
    findFirst,

    -- * Connecting to object storage
    ConnectInfo,
    setRegion,
    setCreds,
    setCredsFrom,
    isConnectInfoSecure,
    disableTLSCertValidation,
    MinioConn,
    mkMinioConn,

    -- ** Connection helpers

    -- | These are helpers to construct 'ConnectInfo' values for common
    -- cases.
    minioPlayCI,
    awsCI,
    gcsCI,

    -- ** STS Credential types
    STSAssumeRole (..),
    STSAssumeRoleOptions (..),
    defaultSTSAssumeRoleOptions,
    requestSTSCredential,
    setSTSCredential,
    ExpiryTime (..),
    STSCredentialProvider,

    -- * Minio Monad

    ----------------

    -- | The Minio Monad provides connection-reuse, bucket-location
    -- caching, resource management and simpler error handling
    -- functionality. All actions on object storage are performed within
    -- this Monad.
    Minio,
    runMinioWith,
    runMinio,
    runMinioResWith,
    runMinioRes,

    -- * Bucket Operations

    -- ** Creation, removal and querying
    Bucket,
    makeBucket,
    removeBucket,
    bucketExists,
    Region,
    getLocation,

    -- ** Listing buckets
    BucketInfo (..),
    listBuckets,

    -- ** Listing objects
    listObjects,
    listObjectsV1,
    ListItem (..),
    ObjectInfo,
    oiObject,
    oiModTime,
    oiETag,
    oiSize,
    oiUserMetadata,
    oiMetadata,

    -- ** Listing incomplete uploads
    listIncompleteUploads,
    UploadId,
    UploadInfo (..),
    listIncompleteParts,
    ObjectPartInfo (..),

    -- ** Bucket Notifications
    getBucketNotification,
    putBucketNotification,
    removeAllBucketNotification,
    Notification (..),
    defaultNotification,
    NotificationConfig (..),
    Arn,
    Event (..),
    Filter (..),
    defaultFilter,
    FilterKey (..),
    defaultFilterKey,
    FilterRules (..),
    defaultFilterRules,
    FilterRule (..),

    -- * Object Operations
    Object,

    -- ** File-based operations
    fGetObject,
    fPutObject,

    -- ** Conduit-based streaming operations
    putObject,
    PutObjectOptions,
    defaultPutObjectOptions,
    pooContentType,
    pooContentEncoding,
    pooContentDisposition,
    pooContentLanguage,
    pooCacheControl,
    pooStorageClass,
    pooUserMetadata,
    pooNumThreads,
    pooSSE,
    getObject,
    GetObjectOptions,
    defaultGetObjectOptions,
    gooRange,
    gooIfMatch,
    gooIfNoneMatch,
    gooIfModifiedSince,
    gooIfUnmodifiedSince,
    gooSSECKey,
    GetObjectResponse,
    gorObjectInfo,
    gorObjectStream,

    -- ** Server-side object copying
    copyObject,
    SourceInfo,
    defaultSourceInfo,
    srcBucket,
    srcObject,
    srcRange,
    srcIfMatch,
    srcIfNoneMatch,
    srcIfModifiedSince,
    srcIfUnmodifiedSince,
    DestinationInfo,
    defaultDestinationInfo,
    dstBucket,
    dstObject,

    -- ** Querying object info
    statObject,

    -- ** Object removal operations
    removeObject,
    removeIncompleteUpload,

    -- ** Select Object Content with SQL
    module Network.Minio.SelectAPI,

    -- * Server-Side Encryption Helpers
    mkSSECKey,
    SSECKey,
    SSE (..),

    -- * Presigned Operations
    presignedPutObjectUrl,
    presignedGetObjectUrl,
    presignedHeadObjectUrl,
    UrlExpiry,

    -- ** POST (browser) upload helpers

    -- | Please see
    -- https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-HTTPPOSTConstructPolicy.html
    -- for detailed information.
    newPostPolicy,
    presignedPostPolicy,
    showPostPolicy,
    PostPolicy,
    PostPolicyError (..),

    -- *** Post Policy condition helpers
    PostPolicyCondition,
    ppCondBucket,
    ppCondContentLengthRange,
    ppCondContentType,
    ppCondKey,
    ppCondKeyStartsWith,
    ppCondSuccessActionStatus,

    -- * Error handling

    -- | Data types representing various errors that may occur while
    -- working with an object storage service.
    MinioErr (..),
    MErrV (..),
    ServiceErr (..),
  )
where
{-
This module exports the high-level MinIO API for object storage.
-}

import qualified Data.Conduit as C
import qualified Data.Conduit.Binary as CB
import qualified Data.Conduit.Combinators as CC
import Network.Minio.API
import Network.Minio.CopyObject
import Network.Minio.Credentials
import Network.Minio.Data
import Network.Minio.Errors
import Network.Minio.ListOps
import Network.Minio.PutObject
import Network.Minio.S3API
import Network.Minio.SelectAPI
-- | Lists buckets.
listBuckets :: Minio [BucketInfo]

@ -248,8 +261,12 @@ fGetObject bucket object fp opts = do
  C.connect (gorObjectStream src) $ CB.sinkFileCautious fp

-- | Upload the given file to the given object.
fPutObject ::
  Bucket ->
  Object ->
  FilePath ->
  PutObjectOptions ->
  Minio ()
fPutObject bucket object f opts =
  void $ putObjectInternal bucket object opts $ ODFile f Nothing

@ -257,8 +274,13 @@ fPutObject bucket object f opts =
-- known; this helps the library select optimal part sizes to perform
-- a multipart upload. If not specified, it is assumed that the object
-- can be potentially 5TiB and selects multipart sizes appropriately.
putObject ::
  Bucket ->
  Object ->
  C.ConduitM () ByteString Minio () ->
  Maybe Int64 ->
  PutObjectOptions ->
  Minio ()
putObject bucket object src sizeMay opts =
  void $ putObjectInternal bucket object opts $ ODStream src sizeMay

@ -268,18 +290,25 @@ putObject bucket object src sizeMay opts =
-- copy operation if the new object is to be greater than 5GiB in
-- size.
copyObject :: DestinationInfo -> SourceInfo -> Minio ()
copyObject dstInfo srcInfo =
  void $
    copyObjectInternal
      (dstBucket dstInfo)
      (dstObject dstInfo)
      srcInfo

-- | Remove an object from the object store.
removeObject :: Bucket -> Object -> Minio ()
removeObject = deleteObject

-- | Get an object from the object store.
getObject ::
  Bucket ->
  Object ->
  GetObjectOptions ->
  Minio GetObjectResponse
getObject bucket object opts =
  getObject' bucket object [] $ gooToHeaders opts

-- | Get an object's metadata from the object store. It accepts the
-- same options as GetObject.

@ -309,6 +338,8 @@ bucketExists = headBucket
-- | Removes an ongoing multipart upload of an object.
removeIncompleteUpload :: Bucket -> Object -> Minio ()
removeIncompleteUpload bucket object = do
  uploads <-
    C.runConduit $
      listIncompleteUploads bucket (Just object) False
        C..| CC.sinkList
  mapM_ (abortMultipartUpload bucket object) (uiUploadId <$> uploads)
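`removeIncompleteUpload` drains a conduit producer into a list with `C.runConduit` and `CC.sinkList`. The same pattern in miniature, against a pure in-memory source instead of `listIncompleteUploads` (a sketch using only the conduit package):

```haskell
import Data.Conduit (runConduitPure, (.|))
import qualified Data.Conduit.Combinators as CC

-- Drain a producer into a list, the way removeIncompleteUpload
-- collects the stream of incomplete uploads before aborting each one.
drained :: [Int]
drained = runConduitPure $ CC.yieldMany [1 .. 5] .| CC.sinkList
```

In the SDK the source is effectful (it pages through S3 list API responses), so `C.runConduit` runs in the `Minio` monad rather than `runConduitPure`.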


@ -1,5 +1,5 @@
--
-- MinIO Haskell SDK, (C) 2017-2023 MinIO, Inc.
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
@ -15,169 +15,260 @@
--
module Network.Minio.API
  ( connect,
    S3ReqInfo (..),
    runMinio,
    executeRequest,
    buildRequest,
    mkStreamRequest,
    getLocation,
    isValidBucketName,
    checkBucketNameValidity,
    isValidObjectName,
    checkObjectNameValidity,
    requestSTSCredential,
  )
where

import Control.Retry
  ( fullJitterBackoff,
    limitRetriesByCumulativeDelay,
    retrying,
  )
import qualified Data.ByteString as B
import qualified Data.Char as C
import qualified Data.Conduit as C
import qualified Data.HashMap.Strict as H
import qualified Data.Text as T
import qualified Data.Time.Clock as Time
import Lib.Prelude
import Network.HTTP.Client (defaultManagerSettings)
import qualified Network.HTTP.Client as NClient
import Network.HTTP.Conduit (Response)
import qualified Network.HTTP.Conduit as NC
import Network.HTTP.Types (simpleQueryToQuery)
import qualified Network.HTTP.Types as HT
import Network.HTTP.Types.Header (hHost)
import Network.Minio.APICommon
import Network.Minio.Credentials
import Network.Minio.Data
import Network.Minio.Errors
import Network.Minio.Sign.V4
import Network.Minio.Utils
import Network.Minio.XmlParser
-- | Fetch bucket location (region)
getLocation :: Bucket -> Minio Region
getLocation bucket = do
  resp <-
    executeRequest $
      defaultS3ReqInfo
        { riBucket = Just bucket,
          riQueryParams = [("location", Nothing)],
          riNeedsLocation = False
        }
  parseLocation $ NC.responseBody resp

-- | Looks for region in RegionMap and updates it using getLocation if
-- absent.
discoverRegion :: S3ReqInfo -> Minio (Maybe Region)
discoverRegion ri = runMaybeT $ do
  bucket <- MaybeT $ return $ riBucket ri
  regionMay <- lift $ lookupRegionCache bucket
  maybe
    ( do
        l <- lift $ getLocation bucket
        lift $ addToRegionCache bucket l
        return l
    )
    return
    regionMay

-- | Returns the region to be used for the request.
getRegion :: S3ReqInfo -> Minio (Maybe Region)
getRegion ri = do
  ci <- asks mcConnInfo

  -- getService/makeBucket/getLocation -- don't need location
  if
    | not $ riNeedsLocation ri ->
        return $ Just $ connectRegion ci
    -- if autodiscovery of location is disabled by user
    | not $ connectAutoDiscoverRegion ci ->
        return $ Just $ connectRegion ci
    -- discover the region for the request
    | otherwise -> discoverRegion ri

getRegionHost :: Region -> Minio Text
getRegionHost r = do
  ci <- asks mcConnInfo
  if "amazonaws.com" `T.isSuffixOf` connectHost ci
    then
      maybe
        (throwIO $ MErrVRegionNotSupported r)
        return
        (H.lookup r awsRegionMap)
    else return $ connectHost ci
-- | Computes the appropriate host, path and region for the request.
--
-- For AWS, always use virtual bucket style, unless bucket has periods. For
-- MinIO and other non-AWS, default to path style.
getHostPathRegion :: S3ReqInfo -> Minio (Text, ByteString, Maybe Region)
getHostPathRegion ri = do
ci <- asks mcConnInfo
regionMay <- getRegion ri
case riBucket ri of
Nothing ->
-- Implies a ListBuckets request.
return (connectHost ci, "/", regionMay)
Just bucket -> do
regionHost <- case regionMay of
Nothing -> return $ connectHost ci
Just "" -> return $ connectHost ci
Just r -> getRegionHost r
let pathStyle =
( regionHost,
getS3Path (riBucket ri) (riObject ri),
regionMay
)
virtualStyle =
( bucket <> "." <> regionHost,
encodeUtf8 $ "/" <> fromMaybe "" (riObject ri),
regionMay
)
( if isAWSConnectInfo ci
then
return $
if bucketHasPeriods bucket
then pathStyle
else virtualStyle
else return pathStyle
)
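`getHostPathRegion` picks virtual-hosted style (`bucket.host`, path `/object`) for AWS buckets without periods, and path style (`host`, path `/bucket/object`) otherwise. A standalone sketch of just that choice (the function name `hostAndPath` is illustrative, not the library's API):

```haskell
import qualified Data.Text as T

-- Illustrative only: choose request host and path the way
-- getHostPathRegion does, using virtual-hosted style for AWS buckets
-- without periods, and path style for everything else.
hostAndPath :: T.Text -> T.Text -> T.Text -> (T.Text, T.Text)
hostAndPath host bucket object
  | isAWS && not hasPeriods =
      (bucket <> T.pack "." <> host, T.pack "/" <> object)
  | otherwise =
      (host, T.pack "/" <> bucket <> T.pack "/" <> object)
  where
    isAWS = T.pack ".amazonaws.com" `T.isSuffixOf` host
    hasPeriods = T.any (== '.') bucket
```

Buckets containing periods fall back to path style on AWS because a dotted bucket name breaks TLS wildcard certificate matching for `*.s3.amazonaws.com`.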
-- | requestSTSCredential requests temporary credentials using the Security Token
-- Service API. The returned credential will include a populated 'SessionToken'
-- and an 'ExpiryTime'.
requestSTSCredential :: (STSCredentialProvider p) => p -> IO (CredentialValue, ExpiryTime)
requestSTSCredential p = do
endpoint <- maybe (throwIO $ MErrValidation MErrVSTSEndpointNotFound) return $ getSTSEndpoint p
let endPt = NC.parseRequest_ $ toString endpoint
settings
| NC.secure endPt = NC.tlsManagerSettings
| otherwise = defaultManagerSettings
mgr <- NC.newManager settings
liftIO $ retrieveSTSCredentials p ("", 0, False) mgr
buildRequest :: S3ReqInfo -> Minio NC.Request
buildRequest ri = do
  maybe (return ()) checkBucketNameValidity $ riBucket ri
  maybe (return ()) checkObjectNameValidity $ riObject ri

  ci <- asks mcConnInfo
  (host, path, regionMay) <- getHostPathRegion ri

  let ci' = ci {connectHost = host}
      hostHeader = (hHost, getHostAddr ci')
      ri' =
        ri
          { riHeaders = hostHeader : riHeaders ri,
            riRegion = regionMay
          }
      -- Does not contain body and auth info.
      baseRequest =
        NC.defaultRequest
          { NC.method = riMethod ri',
            NC.secure = connectIsSecure ci',
            NC.host = encodeUtf8 $ connectHost ci',
            NC.port = connectPort ci',
            NC.path = path,
            NC.requestHeaders = riHeaders ri',
            NC.queryString = HT.renderQuery False $ riQueryParams ri'
          }

  timeStamp <- liftIO Time.getCurrentTime

  mgr <- asks mcConnManager
  cv <- liftIO $ getCredential (connectCreds ci') (getEndpoint ci') mgr

  let sp =
        SignParams
          (coerce $ cvAccessKey cv)
          (coerce $ cvSecretKey cv)
          (coerce $ cvSessionToken cv)
          ServiceS3
          timeStamp
          (riRegion ri')
          (riPresignExpirySecs ri')
          Nothing

  -- Cases to handle:
  --
  -- 0. Handle presign URL case.
  --
  -- 1. Connection is secure: use unsigned payload
  --
  -- 2. Insecure connection, streaming signature is enabled via use of
  -- conduit payload: use streaming signature for request.
  --
  -- 3. Insecure connection, non-conduit payload: compute payload
  -- sha256hash, buffer request in memory and perform request.
  if
    | isJust (riPresignExpirySecs ri') ->
        -- case 0 from above.
        do
          let signPairs = signV4QueryParams sp baseRequest
              qpToAdd = simpleQueryToQuery signPairs
              existingQueryParams = HT.parseQuery (NC.queryString baseRequest)
              updatedQueryParams = existingQueryParams ++ qpToAdd
          return $ NClient.setQueryString updatedQueryParams baseRequest
    | isStreamingPayload (riPayload ri') && not (connectIsSecure ci') ->
        -- case 2 from above.
        do
          (pLen, pSrc) <- case riPayload ri of
            PayloadC l src -> return (l, src)
            _ -> throwIO MErrVUnexpectedPayload
          let reqFn = signV4Stream pLen sp baseRequest
          return $ reqFn pSrc
    | otherwise ->
        do
          sp' <-
            ( if connectIsSecure ci'
                then -- case 1 described above.
                  return sp
                else
                  ( -- case 3 described above.
                    do
                      pHash <- getPayloadSHA256Hash $ riPayload ri'
                      return $ sp {spPayloadHash = Just pHash}
                  )
              )
          let signHeaders = signV4 sp' baseRequest
          return $
            baseRequest
              { NC.requestHeaders =
                  NC.requestHeaders baseRequest ++ signHeaders,
                NC.requestBody = getRequestBody (riPayload ri')
              }
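In the presigned-URL branch (case 0), the signed query pairs are appended to whatever query string the base request already carries. A standalone sketch of that merge using http-types (`parseQuery`, `simpleQueryToQuery`, and `renderQuery` are real http-types functions; the `X-Amz-Expires` pair here is just an illustrative stand-in for the full signature parameters):

```haskell
import qualified Data.ByteString.Char8 as B8
import Network.HTTP.Types (parseQuery, renderQuery, simpleQueryToQuery)

-- Merge an existing query string with (illustrative) signed pairs,
-- as the presign branch does before calling setQueryString.
presignedQuery :: B8.ByteString
presignedQuery =
  renderQuery False $
    parseQuery (B8.pack "uploads=&prefix=abc")
      ++ simpleQueryToQuery [(B8.pack "X-Amz-Expires", B8.pack "3600")]
```

`simpleQueryToQuery` lifts `(key, value)` pairs into the `Query` type (whose values are `Maybe ByteString`), so the two lists concatenate directly.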
retryAPIRequest :: Minio a -> Minio a
retryAPIRequest apiCall = do
  resE <-
    retrying retryPolicy (const shouldRetry) $
      const $
        try apiCall
  either throwIO return resE
  where
    -- Retry using the full-jitter backoff method for up to 10 mins
    -- total
    retryPolicy =
      limitRetriesByCumulativeDelay tenMins $
        fullJitterBackoff oneMilliSecond
    oneMilliSecond = 1000 -- in microseconds
    tenMins = 10 * 60 * 1000000 -- in microseconds

    -- retry on connection related failure

@ -189,23 +280,23 @@ retryAPIRequest apiCall = do
      -- API request failed with a retryable exception
      Left httpExn@(NC.HttpExceptionRequest _ exn) ->
        case (exn :: NC.HttpExceptionContent) of
          NC.ResponseTimeout -> return True
          NC.ConnectionTimeout -> return True
          NC.ConnectionFailure _ -> return True
          -- We received an unexpected exception
          _ -> throwIO httpExn
      -- We received an unexpected exception
      Left someOtherExn -> throwIO someOtherExn

executeRequest :: S3ReqInfo -> Minio (Response LByteString)
executeRequest ri = do
  req <- buildRequest ri
  mgr <- asks mcConnManager
  retryAPIRequest $ httpLbs req mgr

mkStreamRequest ::
  S3ReqInfo ->
  Minio (Response (C.ConduitM () ByteString Minio ()))
mkStreamRequest ri = do
  req <- buildRequest ri
  mgr <- asks mcConnManager
@ -214,41 +305,50 @@ mkStreamRequest ri = do
-- Bucket name validity check according to AWS rules.
isValidBucketName :: Bucket -> Bool
isValidBucketName bucket =
  not
    ( or
        [ len < 3 || len > 63,
          any labelCheck labels,
          any labelCharsCheck labels,
          isIPCheck
        ]
    )
  where
    len = T.length bucket
    labels = T.splitOn "." bucket
    -- does label `l` fail basic checks of length and start/end?
    labelCheck l = T.length l == 0 || T.head l == '-' || T.last l == '-'
    -- does label `l` have non-allowed characters?
    labelCharsCheck l =
      isJust $
        T.find
          ( \x ->
              not
                ( C.isAsciiLower x
                    || x == '-'
                    || C.isDigit x
                )
          )
          l
    -- does label `l` have non-digit characters?
    labelNonDigits l = isJust $ T.find (not . C.isDigit) l
    labelAsNums = map (not . labelNonDigits) labels
    -- check if bucket name looks like an IP
    isIPCheck = and labelAsNums && length labelAsNums == 4

-- Throws exception iff bucket name is invalid according to AWS rules.
checkBucketNameValidity :: (MonadIO m) => Bucket -> m ()
checkBucketNameValidity bucket =
  unless (isValidBucketName bucket) $
    throwIO $
      MErrVInvalidBucketName bucket

isValidObjectName :: Object -> Bool
isValidObjectName object =
  T.length object > 0 && B.length (encodeUtf8 object) <= 1024

checkObjectNameValidity :: (MonadIO m) => Object -> m ()
checkObjectNameValidity object =
  unless (isValidObjectName object) $
    throwIO $
      MErrVInvalidObjectName object
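The validation above encodes the AWS bucket naming rules: 3-63 characters, dot-separated labels of lowercase ASCII letters, digits, and hyphens that do not start or end with `-`, and the whole name must not look like an IPv4 address. A condensed, self-contained rendering of the same rules (written positively rather than as negated failure checks, purely for illustration):

```haskell
import qualified Data.Char as Char
import qualified Data.Text as T

-- Condensed standalone sketch of the same AWS bucket-name rules.
validBucket :: T.Text -> Bool
validBucket b =
  len >= 3 && len <= 63 && all labelOk labels && not looksLikeIP
  where
    len = T.length b
    labels = T.splitOn (T.pack ".") b
    -- Each label: non-empty, no leading/trailing '-', chars in [a-z0-9-].
    labelOk l =
      not (T.null l)
        && T.head l /= '-'
        && T.last l /= '-'
        && T.all (\c -> Char.isAsciiLower c || Char.isDigit c || c == '-') l
    -- Exactly four all-digit labels is rejected as an IPv4 lookalike.
    looksLikeIP = length labels == 4 && all (T.all Char.isDigit) labels
```

For example, `my-bucket` and `my.bucket.1` pass, while `ab` (too short), `Bad_Bucket` (disallowed characters), and `192.168.1.1` (IP-shaped) fail.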


@ -16,37 +16,39 @@
module Network.Minio.APICommon where

import qualified Conduit as C
import qualified Data.ByteString as BS
import qualified Data.ByteString.Lazy as LB
import Data.Conduit.Binary (sourceHandleRange)
import qualified Data.Text as T
import Lib.Prelude
import qualified Network.HTTP.Conduit as NC
import qualified Network.HTTP.Types as HT
import Network.Minio.Data
import Network.Minio.Data.Crypto
import Network.Minio.Errors

sha256Header :: ByteString -> HT.Header
sha256Header = ("x-amz-content-sha256",)

-- | This function throws an error if the payload is a conduit (as it
-- will not be possible to re-read the conduit after it is consumed).
getPayloadSHA256Hash :: Payload -> Minio ByteString
getPayloadSHA256Hash (PayloadBS bs) = return $ hashSHA256 bs
getPayloadSHA256Hash (PayloadH h off size) =
  hashSHA256FromSource $
    sourceHandleRange
      h
      (return . fromIntegral $ off)
      (return . fromIntegral $ size)
getPayloadSHA256Hash (PayloadC _ _) = throwIO MErrVUnexpectedPayload

getRequestBody :: Payload -> NC.RequestBody
getRequestBody (PayloadBS bs) = NC.RequestBodyBS bs
getRequestBody (PayloadH h off size) =
  NC.requestBodySource size $
    sourceHandleRange
      h
      (return . fromIntegral $ off)
      (return . fromIntegral $ size)
getRequestBody (PayloadC n src) = NC.requestBodySource n src
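`PayloadH` bodies stream a byte sub-range of an already-open file handle via `sourceHandleRange` (offset, then count). A self-contained sketch of that streaming against a scratch file (the file name and contents are illustrative):

```haskell
import qualified Data.ByteString.Char8 as B8
import Data.Conduit (runConduit, (.|))
import Data.Conduit.Binary (sourceHandleRange)
import qualified Data.Conduit.Combinators as CC
import System.Directory (removeFile)
import System.IO

-- Stream a 5-byte range starting at offset 3 of a scratch file, the
-- same way a PayloadH body is streamed by getRequestBody above.
rangeDemo :: IO B8.ByteString
rangeDemo = do
  (fp, h) <- openTempFile "." "payload.tmp"
  B8.hPutStr h (B8.pack "0123456789")
  hClose h
  h' <- openFile fp ReadMode
  chunks <-
    runConduit $ sourceHandleRange h' (Just 3) (Just 5) .| CC.sinkList
  hClose h'
  removeFile fp
  pure (B8.concat chunks)
```

Offset 3 with count 5 over `"0123456789"` yields the bytes `"34567"`.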
@ -55,14 +57,24 @@ mkStreamingPayload :: Payload -> Payload
mkStreamingPayload payload =
  case payload of
    PayloadBS bs ->
      PayloadC
        (fromIntegral $ BS.length bs)
        (C.sourceLazy $ LB.fromStrict bs)
    PayloadH h off len ->
      PayloadC len $
        sourceHandleRange
          h
          (return . fromIntegral $ off)
          (return . fromIntegral $ len)
    _ -> payload

isStreamingPayload :: Payload -> Bool
isStreamingPayload (PayloadC _ _) = True
isStreamingPayload _ = False

-- | Checks if the connect info is for Amazon S3.
isAWSConnectInfo :: ConnectInfo -> Bool
isAWSConnectInfo ci = ".amazonaws.com" `T.isSuffixOf` connectHost ci

bucketHasPeriods :: Bucket -> Bool
bucketHasPeriods b = isJust $ T.find (== '.') b

File diff suppressed because it is too large


@ -16,19 +16,19 @@
module Network.Minio.CopyObject where

import qualified Data.List as List
import Lib.Prelude
import Network.Minio.Data
import Network.Minio.Errors
import Network.Minio.S3API
import Network.Minio.Utils

-- | Copy an object using single or multipart copy strategy.
copyObjectInternal ::
  Bucket ->
  Object ->
  SourceInfo ->
  Minio ETag
copyObjectInternal b' o srcInfo = do
  let sBucket = srcBucket srcInfo
      sObject = srcObject srcInfo
@@ -43,27 +43,33 @@ copyObjectInternal b' o srcInfo = do
      startOffset = fst range
      endOffset = snd range

  when
    ( isJust rangeMay
        && ( (startOffset < 0)
               || (endOffset < startOffset)
               || (endOffset >= srcSize)
           )
    )
    $ throwIO
    $ MErrVInvalidSrcObjByteRange range

  -- 1. If sz > 64MiB (minPartSize) use multipart copy, OR
  -- 2. If startOffset /= 0 use multipart copy
  let destSize =
        (\(a, b) -> b - a + 1) $
          maybe (0, srcSize - 1) identity rangeMay

  if destSize > minPartSize || (endOffset - startOffset + 1 /= srcSize)
    then multiPartCopyObject b' o srcInfo srcSize
    else fst <$> copyObjectSingle b' o srcInfo {srcRange = Nothing} []
-- | Given the input byte range of the source object, compute the
-- splits for a multipart copy object procedure. Minimum part size
-- used is minPartSize.
selectCopyRanges :: (Int64, Int64) -> [(PartNumber, (Int64, Int64))]
selectCopyRanges (st, end) =
  zip pns $
    zipWith (\x y -> (st + x, st + x + y - 1)) startOffsets partSizes
  where
    size = end - st + 1
    (pns, startOffsets, partSizes) = List.unzip3 $ selectPartSizes size
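The splitting that selectCopyRanges performs can be illustrated with a standalone sketch that divides an inclusive byte range into numbered, fixed-size parts. The 64 MiB `minPart` constant and the greedy fixed-size split are simplifying assumptions for illustration; the library's actual `selectPartSizes` may choose part sizes differently.

```haskell
import Data.Int (Int64)

-- Hypothetical fixed minimum part size (64 MiB), for illustration only.
minPart :: Int64
minPart = 64 * 1024 * 1024

-- Split the inclusive byte range [st, end] into numbered sub-ranges of
-- at most minPart bytes each, mirroring the shape of selectCopyRanges.
splitRanges :: (Int64, Int64) -> [(Int, (Int64, Int64))]
splitRanges (st, end) = zip [1 ..] (go st)
  where
    go off
      | off > end = []
      | otherwise = (off, min end (off + minPart - 1)) : go (off + minPart)

main :: IO ()
main = mapM_ print (splitRanges (0, 3 * minPart - 1))
```

Running this prints three part ranges that cover the input range without gaps or overlaps.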
@@ -71,22 +77,30 @@ selectCopyRanges (st, end) = zip pns $
-- | Perform a multipart copy object action. Since we cannot verify
-- existing parts based on the source object, there is no resuming
-- copy action support.
multiPartCopyObject ::
  Bucket ->
  Object ->
  SourceInfo ->
  Int64 ->
  Minio ETag
multiPartCopyObject b o cps srcSize = do
  uid <- newMultipartUpload b o []

  let byteRange = maybe (0, srcSize - 1) identity $ srcRange cps
      partRanges = selectCopyRanges byteRange
      partSources =
        map
          (\(x, (start, end)) -> (x, cps {srcRange = Just (start, end)}))
          partRanges
      dstInfo = defaultDestinationInfo {dstBucket = b, dstObject = o}

  copiedParts <-
    limitedMapConcurrently
      10
      ( \(pn, cps') -> do
          (etag, _) <- copyObjectPart dstInfo cps' uid pn []
          return (pn, etag)
      )
      partSources

  completeMultipartUpload b o uid copiedParts


@@ -0,0 +1,77 @@
--
-- MinIO Haskell SDK, (C) 2017-2023 MinIO, Inc.
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
-- You may obtain a copy of the License at
--
-- http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.
--
module Network.Minio.Credentials
( CredentialValue (..),
credentialValueText,
STSCredentialProvider (..),
AccessKey (..),
SecretKey (..),
SessionToken (..),
ExpiryTime (..),
STSCredentialStore,
initSTSCredential,
getSTSCredential,
Creds (..),
getCredential,
Endpoint,
-- * STS Assume Role
defaultSTSAssumeRoleOptions,
STSAssumeRole (..),
STSAssumeRoleOptions (..),
)
where
import Data.Time (diffUTCTime, getCurrentTime)
import qualified Network.HTTP.Client as NC
import Network.Minio.Credentials.AssumeRole
import Network.Minio.Credentials.Types
import qualified UnliftIO.MVar as M
data STSCredentialStore = STSCredentialStore
{ cachedCredentials :: M.MVar (CredentialValue, ExpiryTime),
refreshAction :: Endpoint -> NC.Manager -> IO (CredentialValue, ExpiryTime)
}
initSTSCredential :: (STSCredentialProvider p) => p -> IO STSCredentialStore
initSTSCredential p = do
let action = retrieveSTSCredentials p
-- start with dummy credential, so that refresh happens for first request.
now <- getCurrentTime
mvar <- M.newMVar (CredentialValue mempty mempty mempty, coerce now)
return $
STSCredentialStore
{ cachedCredentials = mvar,
refreshAction = action
}
getSTSCredential :: STSCredentialStore -> Endpoint -> NC.Manager -> IO (CredentialValue, Bool)
getSTSCredential store ep mgr = M.modifyMVar (cachedCredentials store) $ \cc@(v, expiry) -> do
now <- getCurrentTime
if diffUTCTime now (coerce expiry) > 0
then do
res <- refreshAction store ep mgr
return (res, (fst res, True))
else return (cc, (v, False))
data Creds
= CredsStatic CredentialValue
| CredsSTS STSCredentialStore
getCredential :: Creds -> Endpoint -> NC.Manager -> IO CredentialValue
getCredential (CredsStatic v) _ _ = return v
getCredential (CredsSTS s) ep mgr = fst <$> getSTSCredential s ep mgr
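The `Creds` type above unifies static and STS-backed credentials behind one `getCredential` call. A hypothetical usage sketch follows; the `play.min.io` endpoint and the "minio"/"minio123" keys are placeholders, and `newTlsManager` from the http-client-tls package is assumed to be available:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Network.HTTP.Client.TLS (newTlsManager)
import Network.Minio.Credentials

main :: IO ()
main = do
  mgr <- newTlsManager
  -- A static credential resolves without contacting any endpoint;
  -- the endpoint and manager arguments matter only for CredsSTS.
  let creds = CredsStatic (CredentialValue "minio" "minio123" Nothing)
  cv <- getCredential creds ("play.min.io", 443, True) mgr
  print (credentialValueText cv)
```

For an STS-backed value, the same `getCredential` call transparently refreshes expired credentials via the store's refresh action.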


@@ -0,0 +1,266 @@
--
-- MinIO Haskell SDK, (C) 2017-2023 MinIO, Inc.
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
-- You may obtain a copy of the License at
--
-- http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.
--
module Network.Minio.Credentials.AssumeRole where
import qualified Data.ByteArray as BA
import qualified Data.ByteString.Lazy as LB
import qualified Data.Text as T
import qualified Data.Time as Time
import Data.Time.Units (Second)
import Lib.Prelude (UTCTime, throwIO)
import Network.HTTP.Client (RequestBody (RequestBodyBS))
import qualified Network.HTTP.Client as NC
import Network.HTTP.Types (hContentType, methodPost, renderSimpleQuery)
import Network.HTTP.Types.Header (hHost)
import Network.Minio.Credentials.Types
import Network.Minio.Data.Crypto (hashSHA256)
import Network.Minio.Errors (MErrV (..))
import Network.Minio.Sign.V4
import Network.Minio.Utils (getHostHeader, httpLbs)
import Network.Minio.XmlCommon
import Text.XML.Cursor hiding (bool)
stsVersion :: ByteString
stsVersion = "2011-06-15"
defaultDurationSeconds :: Second
defaultDurationSeconds = 3600
-- | Assume Role API argument.
--
-- @since 1.7.0
data STSAssumeRole = STSAssumeRole
{ -- | Credentials to use in the AssumeRole STS API.
sarCredentials :: CredentialValue,
-- | Optional settings.
sarOptions :: STSAssumeRoleOptions
}
-- | Options for STS Assume Role API.
data STSAssumeRoleOptions = STSAssumeRoleOptions
{ -- | STS endpoint to which the request will be made. For MinIO, this is the
-- same as the server endpoint. For AWS, this has to be the Security Token
-- Service endpoint. If using with 'setSTSCredential', this option can be
-- left as 'Nothing' and the endpoint in 'ConnectInfo' will be used.
saroEndpoint :: Maybe Text,
-- | Desired validity for the generated credentials.
saroDurationSeconds :: Maybe Second,
-- | IAM policy to apply for the generated credentials.
saroPolicyJSON :: Maybe ByteString,
-- | Location is usually required for AWS.
saroLocation :: Maybe Text,
saroRoleARN :: Maybe Text,
saroRoleSessionName :: Maybe Text
}
-- | Default STS Assume Role options - all options are Nothing, except for
-- duration which is set to 1 hour.
defaultSTSAssumeRoleOptions :: STSAssumeRoleOptions
defaultSTSAssumeRoleOptions =
STSAssumeRoleOptions
{ saroEndpoint = Nothing,
saroDurationSeconds = Just 3600,
saroPolicyJSON = Nothing,
saroLocation = Nothing,
saroRoleARN = Nothing,
saroRoleSessionName = Nothing
}
data AssumeRoleCredentials = AssumeRoleCredentials
{ arcCredentials :: CredentialValue,
arcExpiration :: UTCTime
}
deriving stock (Show, Eq)
data AssumeRoleResult = AssumeRoleResult
{ arrSourceIdentity :: Text,
arrAssumedRoleArn :: Text,
arrAssumedRoleId :: Text,
arrRoleCredentials :: AssumeRoleCredentials
}
deriving stock (Show, Eq)
-- | parseSTSAssumeRoleResult parses an XML response of the following form:
--
-- <AssumeRoleResponse xmlns="https://sts.amazonaws.com/doc/2011-06-15/">
-- <AssumeRoleResult>
-- <SourceIdentity>Alice</SourceIdentity>
-- <AssumedRoleUser>
-- <Arn>arn:aws:sts::123456789012:assumed-role/demo/TestAR</Arn>
-- <AssumedRoleId>ARO123EXAMPLE123:TestAR</AssumedRoleId>
-- </AssumedRoleUser>
-- <Credentials>
-- <AccessKeyId>ASIAIOSFODNN7EXAMPLE</AccessKeyId>
-- <SecretAccessKey>wJalrXUtnFEMI/K7MDENG/bPxRfiCYzEXAMPLEKEY</SecretAccessKey>
-- <SessionToken>
-- AQoDYXdzEPT//////////wEXAMPLEtc764bNrC9SAPBSM22wDOk4x4HIZ8j4FZTwdQW
-- LWsKWHGBuFqwAeMicRXmxfpSPfIeoIYRqTflfKD8YUuwthAx7mSEI/qkPpKPi/kMcGd
-- QrmGdeehM4IC1NtBmUpp2wUE8phUZampKsburEDy0KPkyQDYwT7WZ0wq5VSXDvp75YU
-- 9HFvlRd8Tx6q6fE8YQcHNVXAkiY9q6d+xo0rKwT38xVqr7ZD0u0iPPkUL64lIZbqBAz
-- +scqKmlzm8FDrypNC9Yjc8fPOLn9FX9KSYvKTr4rvx3iSIlTJabIQwj2ICCR/oLxBA==
-- </SessionToken>
-- <Expiration>2019-11-09T13:34:41Z</Expiration>
-- </Credentials>
-- <PackedPolicySize>6</PackedPolicySize>
-- </AssumeRoleResult>
-- <ResponseMetadata>
-- <RequestId>c6104cbe-af31-11e0-8154-cbc7ccf896c7</RequestId>
-- </ResponseMetadata>
-- </AssumeRoleResponse>
parseSTSAssumeRoleResult :: (MonadIO m) => ByteString -> Text -> m AssumeRoleResult
parseSTSAssumeRoleResult xmldata namespace = do
r <- parseRoot $ LB.fromStrict xmldata
let s3Elem' = s3Elem namespace
sourceIdentity =
T.concat $
r
$/ s3Elem' "AssumeRoleResult"
&/ s3Elem' "SourceIdentity"
&/ content
roleArn =
T.concat $
r
$/ s3Elem' "AssumeRoleResult"
&/ s3Elem' "AssumedRoleUser"
&/ s3Elem' "Arn"
&/ content
roleId =
T.concat $
r
$/ s3Elem' "AssumeRoleResult"
&/ s3Elem' "AssumedRoleUser"
&/ s3Elem' "AssumedRoleId"
&/ content
convSB :: Text -> BA.ScrubbedBytes
convSB = BA.convert . (encodeUtf8 :: Text -> ByteString)
credsInfo = do
cr <-
maybe (Left $ MErrVXmlParse "No Credentials Element found") Right $
listToMaybe $
r $/ s3Elem' "AssumeRoleResult" &/ s3Elem' "Credentials"
let cur = fromNode $ node cr
return
( CredentialValue
{ cvAccessKey =
coerce $
T.concat $
cur $/ s3Elem' "AccessKeyId" &/ content,
cvSecretKey =
coerce $
convSB $
T.concat $
cur
$/ s3Elem' "SecretAccessKey"
&/ content,
cvSessionToken =
Just $
coerce $
convSB $
T.concat $
cur
$/ s3Elem' "SessionToken"
&/ content
},
T.concat $ cur $/ s3Elem' "Expiration" &/ content
)
creds <- either throwIO pure credsInfo
expiry <- parseS3XMLTime $ snd creds
let roleCredentials =
AssumeRoleCredentials
{ arcCredentials = fst creds,
arcExpiration = expiry
}
return
AssumeRoleResult
{ arrSourceIdentity = sourceIdentity,
arrAssumedRoleArn = roleArn,
arrAssumedRoleId = roleId,
arrRoleCredentials = roleCredentials
}
instance STSCredentialProvider STSAssumeRole where
getSTSEndpoint = saroEndpoint . sarOptions
retrieveSTSCredentials sar (host', port', isSecure') mgr = do
-- Assemble STS request
let requiredParams =
[ ("Action", "AssumeRole"),
("Version", stsVersion)
]
opts = sarOptions sar
durSecs :: Int =
fromIntegral $
fromMaybe defaultDurationSeconds $
saroDurationSeconds opts
otherParams =
[ ("RoleArn",) . encodeUtf8 <$> saroRoleARN opts,
("RoleSessionName",) . encodeUtf8 <$> saroRoleSessionName opts,
Just ("DurationSeconds", show durSecs),
("Policy",) <$> saroPolicyJSON opts
]
parameters = requiredParams ++ catMaybes otherParams
(host, port, isSecure) =
case getSTSEndpoint sar of
Just ep ->
let endPt = NC.parseRequest_ $ toString ep
in (NC.host endPt, NC.port endPt, NC.secure endPt)
Nothing -> (host', port', isSecure')
reqBody = renderSimpleQuery False parameters
req =
NC.defaultRequest
{ NC.host = host,
NC.port = port,
NC.secure = isSecure,
NC.method = methodPost,
NC.requestHeaders =
[ (hHost, getHostHeader (host, port)),
(hContentType, "application/x-www-form-urlencoded")
],
NC.requestBody = RequestBodyBS reqBody
}
-- Sign the STS request.
timeStamp <- liftIO Time.getCurrentTime
let sp =
SignParams
{ spAccessKey = coerce $ cvAccessKey $ sarCredentials sar,
spSecretKey = coerce $ cvSecretKey $ sarCredentials sar,
spSessionToken = coerce $ cvSessionToken $ sarCredentials sar,
spService = ServiceSTS,
spTimeStamp = timeStamp,
spRegion = saroLocation opts,
spExpirySecs = Nothing,
spPayloadHash = Just $ hashSHA256 reqBody
}
signHeaders = signV4 sp req
signedReq =
req
{ NC.requestHeaders = NC.requestHeaders req ++ signHeaders
}
-- Make the STS request
resp <- httpLbs signedReq mgr
result <-
parseSTSAssumeRoleResult
(toStrict $ NC.responseBody resp)
"https://sts.amazonaws.com/doc/2011-06-15/"
return
( arcCredentials $ arrRoleCredentials result,
coerce $ arcExpiration $ arrRoleCredentials result
)
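As a hedged sketch of how the AssumeRole provider above might be wired up against a local MinIO server: the endpoint, port, and keys below are illustrative placeholders, not values taken from this change, and http-client-tls is assumed for the manager.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Network.HTTP.Client.TLS (newTlsManager)
import Network.Minio.Credentials

main :: IO ()
main = do
  mgr <- newTlsManager
  let baseCreds = CredentialValue "minio" "minio123" Nothing
      opts =
        defaultSTSAssumeRoleOptions
          { saroEndpoint = Just "http://localhost:9000"
          }
  store <- initSTSCredential (STSAssumeRole baseCreds opts)
  -- The store starts with a dummy expired credential, so the first
  -- call performs the AssumeRole request and reports a refresh.
  (cv, refreshed) <- getSTSCredential store ("localhost", 9000, False) mgr
  print refreshed
  print (credentialValueText cv)
```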


@@ -0,0 +1,90 @@
--
-- MinIO Haskell SDK, (C) 2017-2023 MinIO, Inc.
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
-- You may obtain a copy of the License at
--
-- http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.
--
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE StrictData #-}
module Network.Minio.Credentials.Types where
import qualified Data.ByteArray as BA
import Lib.Prelude (UTCTime)
import qualified Network.HTTP.Client as NC
-- | Access Key type.
newtype AccessKey = AccessKey {unAccessKey :: Text}
deriving stock (Show)
deriving newtype (Eq, IsString, Semigroup, Monoid)
-- | Secret Key type - has a show instance that does not print the value.
newtype SecretKey = SecretKey {unSecretKey :: BA.ScrubbedBytes}
deriving stock (Show)
deriving newtype (Eq, IsString, Semigroup, Monoid)
-- | Session Token type - has a show instance that does not print the value.
newtype SessionToken = SessionToken {unSessionToken :: BA.ScrubbedBytes}
deriving stock (Show)
deriving newtype (Eq, IsString, Semigroup, Monoid)
-- | Object storage credential data type. It has support for the optional
-- [SessionToken](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html)
-- for using temporary credentials requested via STS.
--
-- The show instance for this type does not print the value of secrets for
-- security.
--
-- @since 1.7.0
data CredentialValue = CredentialValue
{ cvAccessKey :: AccessKey,
cvSecretKey :: SecretKey,
cvSessionToken :: Maybe SessionToken
}
deriving stock (Eq, Show)
scrubbedToText :: BA.ScrubbedBytes -> Text
scrubbedToText =
let b2t :: ByteString -> Text
b2t = decodeUtf8
s2b :: BA.ScrubbedBytes -> ByteString
s2b = BA.convert
in b2t . s2b
-- | Convert a 'CredentialValue' to a text tuple. Use this to output the
-- credential to files or other programs.
credentialValueText :: CredentialValue -> (Text, Text, Maybe Text)
credentialValueText cv =
( coerce $ cvAccessKey cv,
(scrubbedToText . coerce) $ cvSecretKey cv,
scrubbedToText . coerce <$> cvSessionToken cv
)
-- | Endpoint represented by host, port and TLS enabled flag.
type Endpoint = (ByteString, Int, Bool)
-- | Typeclass for STS credential providers.
--
-- @since 1.7.0
class STSCredentialProvider p where
retrieveSTSCredentials ::
p ->
-- | STS Endpoint (host, port, isSecure)
Endpoint ->
NC.Manager ->
IO (CredentialValue, ExpiryTime)
getSTSEndpoint :: p -> Maybe Text
-- | 'ExpiryTime' represents a time at which a credential expires.
newtype ExpiryTime = ExpiryTime {unExpiryTime :: UTCTime}
deriving stock (Show)
deriving newtype (Eq)

File diff suppressed because it is too large


@@ -13,23 +13,21 @@
-- See the License for the specific language governing permissions and
-- limitations under the License.
--
{-# LANGUAGE FlexibleInstances #-}

module Network.Minio.Data.ByteString
  ( stripBS,
    UriEncodable (..),
  )
where

import qualified Data.ByteString as B
import qualified Data.ByteString.Builder as BB
import qualified Data.ByteString.Char8 as BC8
import qualified Data.ByteString.Lazy as LB
import Data.Char (isAsciiLower, isAsciiUpper, isDigit, isSpace, toUpper)
import qualified Data.Text as T
import Numeric (showHex)

stripBS :: ByteString -> ByteString
stripBS = BC8.dropWhile isSpace . fst . BC8.spanEnd isSpace
@@ -39,8 +37,10 @@ class UriEncodable s where
instance UriEncodable [Char] where
  uriEncode encodeSlash payload =
    LB.toStrict $
      BB.toLazyByteString $
        mconcat $
          map (`uriEncodeChar` encodeSlash) payload

instance UriEncodable ByteString where
  -- assumes that uriEncode is passed ASCII encoded strings.
@@ -59,16 +59,17 @@ uriEncodeChar '/' True = BB.byteString "%2F"
uriEncodeChar '/' False = BB.char7 '/'
uriEncodeChar ch _
  | isAsciiUpper ch
      || isAsciiLower ch
      || isDigit ch
      || (ch == '_')
      || (ch == '-')
      || (ch == '.')
      || (ch == '~') =
      BB.char7 ch
  | otherwise = mconcat $ map f $ B.unpack $ encodeUtf8 $ T.singleton ch
  where
    f :: Word8 -> BB.Builder
    f n = BB.char7 '%' <> BB.string7 hexStr
      where
        hexStr = map toUpper $ showHex q $ showHex r ""
        (q, r) = divMod n (16 :: Word8)
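The unreserved-character rule in uriEncodeChar above can be shown as a small standalone sketch over plain `String`. This is not the library's builder-based implementation: it assumes ASCII-only input and always encodes slashes.

```haskell
import Data.Char (isAsciiLower, isAsciiUpper, isDigit, ord, toUpper)
import Numeric (showHex)

-- Percent-encode a String with the same unreserved-character rule as
-- uriEncodeChar (ASCII-only input assumed; slashes always encoded).
uriEncodeStr :: String -> String
uriEncodeStr = concatMap enc
  where
    enc ch
      | isAsciiUpper ch
          || isAsciiLower ch
          || isDigit ch
          || ch `elem` ("_-.~" :: String) =
          [ch]
      | otherwise = '%' : map toUpper (pad (showHex (ord ch) ""))
    -- hex strings for chars below 0x10 need a leading zero
    pad s = if length s < 2 then '0' : s else s

main :: IO ()
main = putStrLn (uriEncodeStr "a b/c") -- a%20b%2Fc
```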


@@ -15,55 +15,54 @@
--

module Network.Minio.Data.Crypto
  ( hashSHA256,
    hashSHA256FromSource,
    hashMD5,
    hashMD5ToBase64,
    hashMD5FromSource,
    hmacSHA256,
    hmacSHA256RawBS,
    digestToBS,
    digestToBase16,
    encodeToBase64,
  )
where

import Crypto.Hash
  ( Digest,
    MD5 (..),
    SHA256 (..),
    hashWith,
  )
import Crypto.Hash.Conduit (sinkHash)
import Crypto.MAC.HMAC (HMAC, hmac)
import Data.ByteArray (ByteArrayAccess, convert)
import Data.ByteArray.Encoding (Base (Base16, Base64), convertToBase)
import qualified Data.Conduit as C

hashSHA256 :: ByteString -> ByteString
hashSHA256 = digestToBase16 . hashWith SHA256

hashSHA256FromSource :: (Monad m) => C.ConduitM () ByteString m () -> m ByteString
hashSHA256FromSource src = do
  digest <- C.connect src sinkSHA256Hash
  return $ digestToBase16 digest
  where
    -- To help with type inference
    sinkSHA256Hash :: (Monad m) => C.ConduitM ByteString Void m (Digest SHA256)
    sinkSHA256Hash = sinkHash

-- Returns MD5 hash hex encoded.
hashMD5 :: ByteString -> ByteString
hashMD5 = digestToBase16 . hashWith MD5

hashMD5FromSource :: (Monad m) => C.ConduitM () ByteString m () -> m ByteString
hashMD5FromSource src = do
  digest <- C.connect src sinkMD5Hash
  return $ digestToBase16 digest
  where
    -- To help with type inference
    sinkMD5Hash :: (Monad m) => C.ConduitM ByteString Void m (Digest MD5)
    sinkMD5Hash = sinkHash

hmacSHA256 :: ByteString -> ByteString -> HMAC SHA256
@@ -72,15 +71,15 @@ hmacSHA256 message key = hmac key message
hmacSHA256RawBS :: ByteString -> ByteString -> ByteString
hmacSHA256RawBS message key = convert $ hmacSHA256 message key

digestToBS :: (ByteArrayAccess a) => a -> ByteString
digestToBS = convert

digestToBase16 :: (ByteArrayAccess a) => a -> ByteString
digestToBase16 = convertToBase Base16

-- Returns MD5 hash base 64 encoded.
hashMD5ToBase64 :: (ByteArrayAccess a) => a -> ByteString
hashMD5ToBase64 = convertToBase Base64 . hashWith MD5

encodeToBase64 :: (ByteArrayAccess a) => a -> ByteString
encodeToBase64 = convertToBase Base64


@@ -15,20 +15,24 @@
--

module Network.Minio.Data.Time
  ( awsTimeFormat,
    awsTimeFormatBS,
    awsDateFormat,
    awsDateFormatBS,
    awsParseTime,
    iso8601TimeFormat,
    UrlExpiry,
  )
where

import Data.ByteString.Char8 (pack)
import qualified Data.Time as Time
import Data.Time.Format.ISO8601 (iso8601Show)
import Lib.Prelude

-- | Time to expire for a presigned URL. It is interpreted as a number
-- of seconds. The maximum duration that can be specified is 7 days.
type UrlExpiry = Int

awsTimeFormat :: UTCTime -> [Char]
awsTimeFormat = Time.formatTime Time.defaultTimeLocale "%Y%m%dT%H%M%SZ"
@@ -46,4 +50,4 @@ awsParseTime :: [Char] -> Maybe UTCTime
awsParseTime = Time.parseTimeM False Time.defaultTimeLocale "%Y%m%dT%H%M%SZ"

iso8601TimeFormat :: UTCTime -> [Char]
iso8601TimeFormat = iso8601Show
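Since iso8601TimeFormat now delegates to `iso8601Show`, the two timestamp formats can be compared directly using only the `time` package. A small illustrative snippet, not library code:

```haskell
import Data.Time (UTCTime (..), defaultTimeLocale, formatTime, fromGregorian)
import Data.Time.Format.ISO8601 (iso8601Show)

main :: IO ()
main = do
  let t = UTCTime (fromGregorian 2023 5 1) 0
  -- AWS-style compact timestamp, as produced by awsTimeFormat:
  putStrLn (formatTime defaultTimeLocale "%Y%m%dT%H%M%SZ" t) -- 20230501T000000Z
  -- ISO8601 timestamp, as now produced by iso8601TimeFormat:
  putStrLn (iso8601Show t) -- 2023-05-01T00:00:00Z
```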


@@ -1,5 +1,5 @@
--
-- MinIO Haskell SDK, (C) 2017-2023 MinIO, Inc.
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
@@ -14,76 +14,83 @@
-- limitations under the License.
--

module Network.Minio.Errors
  ( MErrV (..),
    ServiceErr (..),
    MinioErr (..),
    toServiceErr,
  )
where

import Control.Exception (IOException)
import qualified Network.HTTP.Conduit as NC

---------------------------------
-- Errors
---------------------------------

-- | Various validation errors
data MErrV
  = MErrVSinglePUTSizeExceeded Int64
  | MErrVPutSizeExceeded Int64
  | MErrVETagHeaderNotFound
  | MErrVInvalidObjectInfoResponse
  | MErrVInvalidSrcObjSpec Text
  | MErrVInvalidSrcObjByteRange (Int64, Int64)
  | MErrVCopyObjSingleNoRangeAccepted
  | MErrVRegionNotSupported Text
  | MErrVXmlParse Text
  | MErrVInvalidBucketName Text
  | MErrVInvalidObjectName Text
  | MErrVInvalidUrlExpiry Int
  | MErrVJsonParse Text
  | MErrVInvalidHealPath
  | MErrVMissingCredentials
  | MErrVInvalidEncryptionKeyLength
  | MErrVStreamingBodyUnexpectedEOF
  | MErrVUnexpectedPayload
  | MErrVSTSEndpointNotFound
  deriving stock (Show, Eq)

instance Exception MErrV

-- | Errors returned by S3 compatible service
data ServiceErr
  = BucketAlreadyExists
  | BucketAlreadyOwnedByYou
  | NoSuchBucket
  | InvalidBucketName
  | NoSuchKey
  | SelectErr Text Text
  | ServiceErr Text Text
  deriving stock (Show, Eq)

instance Exception ServiceErr

toServiceErr :: Text -> Text -> ServiceErr
toServiceErr "NoSuchKey" _ = NoSuchKey
toServiceErr "NoSuchBucket" _ = NoSuchBucket
toServiceErr "InvalidBucketName" _ = InvalidBucketName
toServiceErr "BucketAlreadyOwnedByYou" _ = BucketAlreadyOwnedByYou
toServiceErr "BucketAlreadyExists" _ = BucketAlreadyExists
toServiceErr code message = ServiceErr code message

-- | Errors thrown by the library
data MinioErr
  = MErrHTTP NC.HttpException
  | MErrIO IOException
  | MErrService ServiceErr
  | MErrValidation MErrV
  deriving stock (Show)

instance Eq MinioErr where
  MErrHTTP _ == MErrHTTP _ = True
  MErrHTTP _ == _ = False
  MErrIO _ == MErrIO _ = True
  MErrIO _ == _ = False
  MErrService a == MErrService b = a == b
  MErrService _ == _ = False
  MErrValidation a == MErrValidation b = a == b
  MErrValidation _ == _ = False

instance Exception MinioErr


@@ -15,28 +15,35 @@
--

module Network.Minio.JsonParser
  ( parseErrResponseJSON,
  )
where

import Data.Aeson
  ( FromJSON,
    eitherDecode,
    parseJSON,
    withObject,
    (.:),
  )
import qualified Data.Text as T
import Lib.Prelude
import Network.Minio.Errors

data AdminErrJSON = AdminErrJSON
  { aeCode :: Text,
    aeMessage :: Text
  }
  deriving stock (Eq, Show)

instance FromJSON AdminErrJSON where
  parseJSON = withObject "AdminErrJSON" $ \v ->
    AdminErrJSON
      <$> v .: "Code"
      <*> v .: "Message"

parseErrResponseJSON :: (MonadIO m) => LByteString -> m ServiceErr
parseErrResponseJSON jsondata =
  case eitherDecode jsondata of
    Right aErr -> return $ toServiceErr (aeCode aErr) (aeMessage aErr)
    Left err -> throwIO $ MErrVJsonParse $ T.pack err


@@ -16,20 +16,50 @@
module Network.Minio.ListOps where

import qualified Data.Conduit as C
import qualified Data.Conduit.Combinators as CC
import qualified Data.Conduit.List as CL
import Network.Minio.Data
  ( Bucket,
    ListObjectsResult
      ( lorCPrefixes,
        lorHasMore,
        lorNextToken,
        lorObjects
      ),
    ListObjectsV1Result
      ( lorCPrefixes',
        lorHasMore',
        lorNextMarker,
        lorObjects'
      ),
    ListPartsResult (lprHasMore, lprNextPart, lprParts),
    ListUploadsResult
      ( lurHasMore,
        lurNextKey,
        lurNextUpload,
        lurUploads
      ),
    Minio,
    Object,
    ObjectInfo,
    ObjectPartInfo (opiSize),
    UploadId,
    UploadInfo (UploadInfo),
  )
import Network.Minio.S3API
  ( listIncompleteParts',
    listIncompleteUploads',
    listObjects',
    listObjectsV1',
  )

-- | Represents a list output item - either an object or an object
-- prefix (i.e. a directory).
data ListItem
  = ListItemObject ObjectInfo
  | ListItemPrefix Text
  deriving stock (Show, Eq)

-- | @'listObjects' bucket prefix recurse@ lists objects in a bucket
-- similar to a file system tree traversal.
@ -48,73 +78,103 @@ listObjects bucket prefix recurse = loop Nothing
where where
loop :: Maybe Text -> C.ConduitM () ListItem Minio () loop :: Maybe Text -> C.ConduitM () ListItem Minio ()
loop nextToken = do loop nextToken = do
let let delimiter = bool (Just "/") Nothing recurse
delimiter = bool (Just "/") Nothing recurse
res <- lift $ listObjects' bucket prefix nextToken delimiter Nothing res <- lift $ listObjects' bucket prefix nextToken delimiter Nothing
CL.sourceList $ map ListItemObject $ lorObjects res CL.sourceList $ map ListItemObject $ lorObjects res
unless recurse $ unless recurse $
CL.sourceList $ map ListItemPrefix $ lorCPrefixes res CL.sourceList $
map ListItemPrefix $
lorCPrefixes res
when (lorHasMore res) $ when (lorHasMore res) $
loop (lorNextToken res) loop (lorNextToken res)
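As a usage sketch of this listing conduit: the bucket name below is a placeholder, and the example assumes a reachable object store; @runMinio@, @minioPlayCI@ and @listObjects@ are the top-level entry points exported by @Network.Minio@.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Conduit (runConduit, sinkList, (.|))
import Network.Minio

-- Drain the listing conduit into a list. With recurse = True the
-- delimiter is disabled, so only ListItemObject values are produced;
-- with False, ListItemPrefix entries appear for each "directory".
main :: IO ()
main = do
  res <-
    runMinio minioPlayCI $
      runConduit $
        listObjects "my-bucket" Nothing True .| sinkList
  either print (mapM_ print) res
```

The pagination is handled inside the conduit: each page yields its items downstream, and the loop continues with @lorNextToken@ until @lorHasMore@ is False.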
-- | Lists objects - similar to @listObjects@, however uses the older
-- V1 AWS S3 API. Prefer @listObjects@ to this.
listObjectsV1 ::
  Bucket ->
  Maybe Text ->
  Bool ->
  C.ConduitM () ListItem Minio ()
listObjectsV1 bucket prefix recurse = loop Nothing
  where
    loop :: Maybe Text -> C.ConduitM () ListItem Minio ()
    loop nextMarker = do
      let delimiter = bool (Just "/") Nothing recurse
      res <- lift $ listObjectsV1' bucket prefix nextMarker delimiter Nothing
      CL.sourceList $ map ListItemObject $ lorObjects' res
      unless recurse $
        CL.sourceList $
          map ListItemPrefix $
            lorCPrefixes' res
      when (lorHasMore' res) $
        loop (lorNextMarker res)

-- | List incomplete uploads in a bucket matching the given prefix. If
-- recurse is set to True incomplete uploads for the given prefix are
-- recursively listed.
listIncompleteUploads ::
  Bucket ->
  Maybe Text ->
  Bool ->
  C.ConduitM () UploadInfo Minio ()
listIncompleteUploads bucket prefix recurse = loop Nothing Nothing
  where
    loop :: Maybe Text -> Maybe Text -> C.ConduitM () UploadInfo Minio ()
    loop nextKeyMarker nextUploadIdMarker = do
      let delimiter = bool (Just "/") Nothing recurse
      res <-
        lift $
          listIncompleteUploads'
            bucket
            prefix
            delimiter
            nextKeyMarker
            nextUploadIdMarker
            Nothing
      aggrSizes <- lift $
        forM (lurUploads res) $ \(uKey, uId, _) -> do
          partInfos <-
            C.runConduit $
              listIncompleteParts bucket uKey uId
                C..| CC.sinkList
          return $ foldl' (\sizeSofar p -> opiSize p + sizeSofar) 0 partInfos
      CL.sourceList $
        zipWith
          ( curry
              ( \((uKey, uId, uInitTime), size) ->
                  UploadInfo uKey uId uInitTime size
              )
          )
          (lurUploads res)
          aggrSizes
      when (lurHasMore res) $
        loop (lurNextKey res) (lurNextUpload res)

-- | List object parts of an ongoing multipart upload for given
-- bucket, object and uploadId.
listIncompleteParts ::
  Bucket ->
  Object ->
  UploadId ->
  C.ConduitM () ObjectPartInfo Minio ()
listIncompleteParts bucket object uploadId = loop Nothing
  where
    loop :: Maybe Text -> C.ConduitM () ObjectPartInfo Minio ()
    loop nextPartMarker = do
      res <-
        lift $
          listIncompleteParts'
            bucket
            object
            uploadId
            Nothing
            nextPartMarker
      CL.sourceList $ lprParts res
      when (lprHasMore res) $
        loop (show <$> lprNextPart res)

View File

@@ -1,5 +1,5 @@
--
-- MinIO Haskell SDK, (C) 2017-2023 MinIO, Inc.
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.

@@ -13,45 +13,51 @@
-- See the License for the specific language governing permissions and
-- limitations under the License.
--
{-# LANGUAGE CPP #-}

module Network.Minio.PresignedOperations
  ( UrlExpiry,
    makePresignedUrl,
    presignedPutObjectUrl,
    presignedGetObjectUrl,
    presignedHeadObjectUrl,
    PostPolicyCondition (..),
    ppCondBucket,
    ppCondContentLengthRange,
    ppCondContentType,
    ppCondKey,
    ppCondKeyStartsWith,
    ppCondSuccessActionStatus,
    PostPolicy (..),
    PostPolicyError (..),
    newPostPolicy,
    showPostPolicy,
    presignedPostPolicy,
  )
where

import Data.Aeson ((.=))
import qualified Data.Aeson as Json
import Data.ByteString.Builder (byteString, toLazyByteString)
import qualified Data.HashMap.Strict as H
import qualified Data.Text as T
import qualified Data.Time as Time
import Lib.Prelude
import qualified Network.HTTP.Client as NClient
import qualified Network.HTTP.Types as HT
import Network.Minio.API (buildRequest)
import Network.Minio.Credentials
import Network.Minio.Data
import Network.Minio.Data.Time
import Network.Minio.Errors
import Network.Minio.Sign.V4
import Network.URI (uriToString)

{- ORMOLU_DISABLE -}
#if MIN_VERSION_aeson(2,0,0)
import qualified Data.Aeson.Key as A
#endif
{- ORMOLU_ENABLE -}

-- | Generate a presigned URL. This function allows for advanced usage
-- - for simple cases prefer the `presigned*Url` functions.

@@ -61,42 +67,36 @@
--
-- All extra query parameters or headers are signed, and therefore are
-- required to be sent when the generated URL is actually used.
makePresignedUrl ::
  UrlExpiry ->
  HT.Method ->
  Maybe Bucket ->
  Maybe Object ->
  Maybe Region ->
  HT.Query ->
  HT.RequestHeaders ->
  Minio ByteString
makePresignedUrl expiry method bucket object region extraQuery extraHeaders = do
  when (expiry > 7 * 24 * 3600 || expiry < 0) $
    throwIO $
      MErrVInvalidUrlExpiry expiry

  let s3ri =
        defaultS3ReqInfo
          { riPresignExpirySecs = Just expiry,
            riMethod = method,
            riBucket = bucket,
            riObject = object,
            riRegion = region,
            riQueryParams = extraQuery,
            riHeaders = extraHeaders
          }
  req <- buildRequest s3ri
  let uri = NClient.getUri req
      uriString = uriToString identity uri ""
  return $ encodeUtf8 uriString

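The guard at the top of makePresignedUrl rejects expiry values outside what SigV4 presigning permits: non-negative and at most 7 days. A minimal base-only sketch of that bound check (validExpiry and maxPresignExpiry are illustrative names, not library functions):

```haskell
-- AWS SigV4 allows a presigned URL to live for at most 7 days.
maxPresignExpiry :: Int
maxPresignExpiry = 7 * 24 * 3600 -- 604800 seconds

-- Mirrors the guard in makePresignedUrl: an expiry is acceptable when
-- it is non-negative and within the 7-day cap.
validExpiry :: Int -> Bool
validExpiry e = e >= 0 && e <= maxPresignExpiry

main :: IO ()
main = mapM_ (print . validExpiry) [-1, 0, 3600, 604800, 604801]
```

Anything outside the window makes the library throw @MErrVInvalidUrlExpiry@ before any request is built.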
-- | Generate a URL with authentication signature to PUT (upload) an
-- object. Any extra headers if passed, are signed, and so they are

@@ -105,12 +105,22 @@
--
-- For a list of possible headers to pass, please refer to the PUT
-- object REST API AWS S3 documentation.
presignedPutObjectUrl ::
  Bucket ->
  Object ->
  UrlExpiry ->
  HT.RequestHeaders ->
  Minio ByteString
presignedPutObjectUrl bucket object expirySeconds extraHeaders = do
  region <- asks (Just . connectRegion . mcConnInfo)
  makePresignedUrl
    expirySeconds
    HT.methodPut
    (Just bucket)
    (Just object)
    region
    []
    extraHeaders

-- | Generate a URL with authentication signature to GET (download) an
-- object. All extra query parameters and headers passed here will be

@@ -121,12 +131,23 @@
--
-- For a list of possible request parameters and headers, please refer
-- to the GET object REST API AWS S3 documentation.
presignedGetObjectUrl ::
  Bucket ->
  Object ->
  UrlExpiry ->
  HT.Query ->
  HT.RequestHeaders ->
  Minio ByteString
presignedGetObjectUrl bucket object expirySeconds extraQuery extraHeaders = do
  region <- asks (Just . connectRegion . mcConnInfo)
  makePresignedUrl
    expirySeconds
    HT.methodGet
    (Just bucket)
    (Just object)
    region
    extraQuery
    extraHeaders

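A usage sketch for generating a one-hour download link: the bucket and object names are placeholders, and @runMinio@/@minioPlayCI@ are assumed from the top-level @Network.Minio@ module. The result is a plain ByteString URL, ready to hand to any HTTP client or browser.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Data.ByteString.Char8 as B8
import Network.Minio

-- Generate a presigned GET URL valid for 3600 seconds (one hour).
main :: IO ()
main = do
  res <-
    runMinio minioPlayCI $
      presignedGetObjectUrl "my-bucket" "path/to/photo.jpg" 3600 [] []
  either print B8.putStrLn res
```

Note that any extra query parameters or headers passed in the last two arguments are part of the signature, so the eventual GET request must send them verbatim.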
-- | Generate a URL with authentication signature to make a HEAD
-- request on an object. This is used to fetch metadata about an

@@ -135,50 +156,74 @@
--
-- For a list of possible headers to pass, please refer to the HEAD
-- object REST API AWS S3 documentation.
presignedHeadObjectUrl ::
  Bucket ->
  Object ->
  UrlExpiry ->
  HT.RequestHeaders ->
  Minio ByteString
presignedHeadObjectUrl bucket object expirySeconds extraHeaders = do
  region <- asks (Just . connectRegion . mcConnInfo)
  makePresignedUrl
    expirySeconds
    HT.methodHead
    (Just bucket)
    (Just object)
    region
    []
    extraHeaders

-- | Represents individual conditions in a Post Policy document.
data PostPolicyCondition
  = PPCStartsWith Text Text
  | PPCEquals Text Text
  | PPCRange Text Int64 Int64
  deriving stock (Show, Eq)

{- ORMOLU_DISABLE -}
instance Json.ToJSON PostPolicyCondition where
  toJSON (PPCStartsWith k v) = Json.toJSON ["starts-with", k, v]
#if MIN_VERSION_aeson(2,0,0)
  toJSON (PPCEquals k v) = Json.object [(A.fromText k) .= v]
#else
  toJSON (PPCEquals k v) = Json.object [k .= v]
#endif
  toJSON (PPCRange k minVal maxVal) =
    Json.toJSON [Json.toJSON k, Json.toJSON minVal, Json.toJSON maxVal]

  toEncoding (PPCStartsWith k v) = Json.foldable ["starts-with", k, v]
#if MIN_VERSION_aeson(2,0,0)
  toEncoding (PPCEquals k v) = Json.pairs ((A.fromText k) .= v)
#else
  toEncoding (PPCEquals k v) = Json.pairs (k .= v)
#endif
  toEncoding (PPCRange k minVal maxVal) =
    Json.foldable [Json.toJSON k, Json.toJSON minVal, Json.toJSON maxVal]
{- ORMOLU_ENABLE -}

-- | A PostPolicy is required to perform uploads via browser forms.
data PostPolicy = PostPolicy
  { expiration :: UTCTime,
    conditions :: [PostPolicyCondition]
  }
  deriving stock (Show, Eq)

instance Json.ToJSON PostPolicy where
  toJSON (PostPolicy e c) =
    Json.object
      [ "expiration" .= iso8601TimeFormat e,
        "conditions" .= c
      ]
  toEncoding (PostPolicy e c) =
    Json.pairs ("expiration" .= iso8601TimeFormat e <> "conditions" .= c)

-- | Possible validation errors when creating a PostPolicy.
data PostPolicyError
  = PPEKeyNotSpecified
  | PPEBucketNotSpecified
  | PPEConditionKeyEmpty
  | PPERangeInvalid
  deriving stock (Show, Eq)

-- | Set the bucket name that the upload should use.
ppCondBucket :: Bucket -> PostPolicyCondition

@@ -186,8 +231,10 @@
ppCondBucket = PPCEquals "bucket"

-- | Set the content length range constraint with minimum and maximum
-- byte count values.
ppCondContentLengthRange ::
  Int64 ->
  Int64 ->
  PostPolicyCondition
ppCondContentLengthRange = PPCRange "content-length-range"

-- | Set the content-type header for the upload.

@@ -210,83 +257,99 @@ ppCondSuccessActionStatus n =

-- | This function creates a PostPolicy after validating its
-- arguments.
newPostPolicy ::
  UTCTime ->
  [PostPolicyCondition] ->
  Either PostPolicyError PostPolicy
newPostPolicy expirationTime conds
  -- object name condition must be present
  | not $ any (keyEquals "key") conds =
      Left PPEKeyNotSpecified
  -- bucket name condition must be present
  | not $ any (keyEquals "bucket") conds =
      Left PPEBucketNotSpecified
  -- a condition with an empty key is invalid
  | any (keyEquals "") conds || any isEmptyRangeKey conds =
      Left PPEConditionKeyEmpty
  -- invalid range check
  | any isInvalidRange conds =
      Left PPERangeInvalid
  -- all good!
  | otherwise =
      return $ PostPolicy expirationTime conds
  where
    keyEquals k' (PPCStartsWith k _) = k == k'
    keyEquals k' (PPCEquals k _) = k == k'
    keyEquals _ _ = False

    isEmptyRangeKey (PPCRange k _ _) = k == ""
    isEmptyRangeKey _ = False

    isInvalidRange (PPCRange _ mi ma) = mi < 0 || mi > ma
    isInvalidRange _ = False

-- | Convert Post Policy to a string (e.g. for printing).
showPostPolicy :: PostPolicy -> ByteString
showPostPolicy = toStrictBS . Json.encode

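Putting the policy pieces together, here is a hedged usage sketch: it builds a policy that admits a single key into a fixed bucket with a 10 MiB size cap, valid for 10 minutes, then signs it. The bucket and key are placeholders, and @runMinio@/@minioPlayCI@ are assumed from the top-level @Network.Minio@ module.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Data.Time as Time
import Network.Minio

main :: IO ()
main = do
  now <- Time.getCurrentTime
  -- Policy expires 600 seconds (10 minutes) from now.
  let expiry = Time.addUTCTime 600 now
      policyE =
        newPostPolicy
          expiry
          [ ppCondBucket "my-bucket",
            ppCondKey "uploads/photo.jpg",
            ppCondContentLengthRange 1 (10 * 1024 * 1024)
          ]
  case policyE of
    Left err -> print err -- a PostPolicyError from validation
    Right policy -> do
      res <- runMinio minioPlayCI $ presignedPostPolicy policy
      -- On success: the POST URL plus the form fields a browser
      -- form must submit alongside the file.
      either print print res
```

Because newPostPolicy validates up front, missing "bucket" or "key" conditions are caught as Left values before any network call is made.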
-- | Generate a presigned URL and POST policy to upload files via a
-- browser. On success, this function returns a URL and POST
-- form-data.
presignedPostPolicy ::
  PostPolicy ->
  Minio (ByteString, H.HashMap Text ByteString)
presignedPostPolicy p = do
  ci <- asks mcConnInfo
  signTime <- liftIO Time.getCurrentTime
  mgr <- asks mcConnManager
  cv <- liftIO $ getCredential (connectCreds ci) (getEndpoint ci) mgr

  let extraConditions signParams =
        [ PPCEquals "x-amz-date" (toText $ awsTimeFormat signTime),
          PPCEquals "x-amz-algorithm" "AWS4-HMAC-SHA256",
          PPCEquals
            "x-amz-credential"
            ( T.intercalate
                "/"
                [ coerce $ cvAccessKey cv,
                  decodeUtf8 $ credentialScope signParams
                ]
            )
        ]
      ppWithCreds signParams =
        p
          { conditions = conditions p ++ extraConditions signParams
          }
      sp =
        SignParams
          (coerce $ cvAccessKey cv)
          (coerce $ cvSecretKey cv)
          (coerce $ cvSessionToken cv)
          ServiceS3
          signTime
          (Just $ connectRegion ci)
          Nothing
          Nothing
      signData = signV4PostPolicy (showPostPolicy $ ppWithCreds sp) sp

      -- compute form-data
      mkPair (PPCStartsWith k v) = Just (k, v)
      mkPair (PPCEquals k v) = Just (k, v)
      mkPair _ = Nothing
      formFromPolicy =
        H.map encodeUtf8 $
          H.fromList $
            mapMaybe
              mkPair
              (conditions $ ppWithCreds sp)
      formData = formFromPolicy `H.union` signData

      -- compute POST upload URL
      bucket = H.lookupDefault "" "bucket" formData
      scheme = byteString $ bool "http://" "https://" $ connectIsSecure ci
      url =
        toStrictBS $
          toLazyByteString $
            scheme
              <> byteString (getHostAddr ci)
              <> byteString "/"
              <> byteString bucket
              <> byteString "/"

  return (url, formData)

View File

@@ -15,29 +15,24 @@
--
module Network.Minio.PutObject
  ( putObjectInternal,
    ObjectData (..),
    selectPartSizes,
  )
where

import Conduit (takeC)
import qualified Conduit as C
import qualified Data.ByteString.Lazy as LBS
import qualified Data.Conduit.Binary as CB
import qualified Data.Conduit.Combinators as CC
import qualified Data.Conduit.List as CL
import qualified Data.List as List
import Lib.Prelude
import Network.Minio.Data
import Network.Minio.Errors
import Network.Minio.S3API
import Network.Minio.Utils

-- | A data-type to represent the source data for an object. A
-- file-path or a producer-conduit may be provided.

@@ -50,37 +45,45 @@
-- the input - if it is not provided, upload will continue until the
-- stream ends or the object reaches `maxObjectSize` size.
data ObjectData m
  = -- | Takes filepath and optional size.
    ODFile FilePath (Maybe Int64)
  | -- | Pass size (bytes) if known.
    ODStream (C.ConduitM () ByteString m ()) (Maybe Int64)

-- | Put an object from ObjectData. This high-level API handles
-- objects of all sizes, and even if the object size is unknown.
putObjectInternal ::
  Bucket ->
  Object ->
  PutObjectOptions ->
  ObjectData Minio ->
  Minio ETag
putObjectInternal b o opts (ODStream src sizeMay) = do
  case sizeMay of
    -- unable to get size, so assume non-seekable file
    Nothing -> sequentialMultipartUpload b o opts Nothing src
    -- got file size, so check for single/multipart upload
    Just size ->
      if
        | size <= 64 * oneMiB -> do
            bs <- C.runConduit $ src C..| takeC (fromIntegral size) C..| CB.sinkLbs
            putObjectSingle' b o (pooToHeaders opts) $ LBS.toStrict bs
        | size > maxObjectSize -> throwIO $ MErrVPutSizeExceeded size
        | otherwise -> sequentialMultipartUpload b o opts (Just size) src
putObjectInternal b o opts (ODFile fp sizeMay) = do
  hResE <- withNewHandle fp $ \h ->
    liftA2 (,) (isHandleSeekable h) (getFileSize h)

  (isSeekable, handleSizeMay) <-
    either
      (const $ return (False, Nothing))
      return
      hResE

  -- prefer given size to queried size.
  let finalSizeMay = listToMaybe $ catMaybes [sizeMay, handleSizeMay]

@@ -88,18 +91,25 @@ putObjectInternal b o opts (ODFile fp sizeMay) = do
  case finalSizeMay of
    -- unable to get size, so assume non-seekable file
    Nothing -> sequentialMultipartUpload b o opts Nothing $ CB.sourceFile fp
    -- got file size, so check for single/multipart upload
    Just size ->
      if
        | size <= 64 * oneMiB ->
            either throwIO return
              =<< withNewHandle fp (\h -> putObjectSingle b o (pooToHeaders opts) h 0 size)
        | size > maxObjectSize -> throwIO $ MErrVPutSizeExceeded size
        | isSeekable -> parallelMultipartUpload b o opts fp size
        | otherwise ->
            sequentialMultipartUpload b o opts (Just size) $
              CB.sourceFile fp

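The size-based dispatch in the guards above can be summarized as a pure rule. Below is an illustrative base-only sketch: the 64 MiB single-put threshold is taken from the code above, while the 5 TiB value for maxObjectSize is an assumption about the library's constant (the S3 object size ceiling).

```haskell
import Data.Int (Int64)

oneMiB :: Int64
oneMiB = 1024 * 1024

-- Assumed S3 object size ceiling: 5 TiB.
maxObjectSize :: Int64
maxObjectSize = 5 * 1024 * 1024 * oneMiB

data UploadMode = SinglePut | Multipart | TooLarge
  deriving (Show, Eq)

-- Mirrors the guards in putObjectInternal for a known size: small
-- objects go up in one PUT, oversized ones are rejected, everything
-- else uses multipart upload.
chooseUploadMode :: Int64 -> UploadMode
chooseUploadMode size
  | size <= 64 * oneMiB = SinglePut
  | size > maxObjectSize = TooLarge
  | otherwise = Multipart

main :: IO ()
main = mapM_ (print . chooseUploadMode) [oneMiB, 100 * oneMiB]
```

For files, a fourth input enters the decision: a seekable handle allows parallelMultipartUpload, while a non-seekable one falls back to the sequential path.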
parallelMultipartUpload ::
  Bucket ->
  Object ->
  PutObjectOptions ->
  FilePath ->
  Int64 ->
  Minio ETag
parallelMultipartUpload b o opts filePath size = do
  -- get a new upload id.
  uploadId <- newMultipartUpload b o (pooToHeaders opts)

@@ -109,15 +119,17 @@ parallelMultipartUpload b o opts filePath size = do
  let threads = fromMaybe 10 $ pooNumThreads opts

  -- perform upload with 'threads' threads
  uploadedPartsE <-
    limitedMapConcurrently
      (fromIntegral threads)
      (uploadPart uploadId)
      partSizeInfo

  -- if there were any errors, rethrow exception.
  mapM_ throwIO $ lefts uploadedPartsE

  -- if we get here, all parts were successfully uploaded.
  completeMultipartUpload b o uploadId $ rights uploadedPartsE
  where
    uploadPart uploadId (partNum, offset, sz) =
      withNewHandle filePath $ \h -> do

@@ -125,10 +137,13 @@ parallelMultipartUpload b o opts filePath size = do
        putObjectPart b o uploadId partNum [] payload

-- | Upload multipart object from conduit source sequentially
sequentialMultipartUpload ::
  Bucket ->
  Object ->
  PutObjectOptions ->
  Maybe Int64 ->
  C.ConduitM () ByteString Minio () ->
  Minio ETag
sequentialMultipartUpload b o opts sizeMay src = do
  -- get a new upload id.
  uploadId <- newMultipartUpload b o (pooToHeaders opts)

@@ -136,22 +151,23 @@ sequentialMultipartUpload b o opts sizeMay src = do
  -- upload parts in loop
  let partSizes = selectPartSizes $ maybe maxObjectSize identity sizeMay
      (pnums, _, sizes) = List.unzip3 partSizes
  uploadedParts <-
    C.runConduit $
      src
        C..| chunkBSConduit (map fromIntegral sizes)
        C..| CL.map PayloadBS
        C..| uploadPart' uploadId pnums
        C..| CC.sinkList

  -- complete multipart upload
  completeMultipartUpload b o uploadId uploadedParts
  where
    uploadPart' _ [] = return ()
    uploadPart' uid (pn : pns) = do
      payloadMay <- C.await
      case payloadMay of
        Nothing -> return ()
        Just payload -> do
          pinfo <- lift $ putObjectPart b o uid pn [] payload
          C.yield pinfo
          uploadPart' uid pns
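sequentialMultipartUpload chunks the stream by the sizes that selectPartSizes returns, so each chunk becomes one numbered part. A hypothetical base-only illustration of how a fixed 64 MiB part size tiles an object into (partNumber, offset, size) triples follows; the library's selectPartSizes may pick different sizes, so sketchPartSizes is illustrative only.

```haskell
import Data.Int (Int64)

oneMiB :: Int64
oneMiB = 1024 * 1024

-- Hypothetical fixed part size for this sketch.
partSize :: Int64
partSize = 64 * oneMiB

-- Tile the byte range [0, total) into (partNumber, offset, size)
-- triples; only the last part may be shorter than partSize.
sketchPartSizes :: Int64 -> [(Int, Int64, Int64)]
sketchPartSizes total =
  [ (n, off, min partSize (total - off))
    | (n, off) <- zip [1 ..] [0, partSize .. total - 1]
  ]

main :: IO ()
main = mapM_ print (sketchPartSizes (130 * oneMiB))
```

A 130 MiB object thus splits into two full 64 MiB parts plus a trailing 2 MiB part; unzipping such triples yields exactly the part numbers and sizes the upload loop consumes.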

View File

@@ -1,5 +1,5 @@
--
-- MinIO Haskell SDK, (C) 2017-2023 MinIO, Inc.
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.

@@ -14,150 +14,185 @@
-- limitations under the License.
--
-- |
-- Module: Network.Minio.S3API
-- Copyright: (c) 2017-2023 MinIO Dev Team
-- License: Apache 2.0
-- Maintainer: MinIO Dev Team <dev@min.io>
--
-- Lower-level API for S3 compatible object stores. Start with @Network.Minio@
-- and use this only if needed.
module Network.Minio.S3API
  ( Region,
    getLocation,

    -- * Listing buckets

    --------------------
    getService,

    -- * Listing objects

    --------------------
    ListObjectsResult (..),
    ListObjectsV1Result (..),
    listObjects',
    listObjectsV1',

    -- * Retrieving buckets
    headBucket,

    -- * Retrieving objects

    -----------------------
    getObject',
    headObject,

    -- * Creating buckets and objects

    ---------------------------------
    putBucket,
    ETag,
    maxSinglePutObjectSizeBytes,
    putObjectSingle',
    putObjectSingle,
    copyObjectSingle,

    -- * Multipart Upload APIs

    --------------------------
    UploadId,
    PartTuple,
    Payload (..),
    PartNumber,
    newMultipartUpload,
    putObjectPart,
    copyObjectPart,
    completeMultipartUpload,
    abortMultipartUpload,
    ListUploadsResult (..),
    listIncompleteUploads',
    ListPartsResult (..),
    listIncompleteParts',

    -- * Deletion APIs

    --------------------------
    deleteBucket,
    deleteObject,

    -- * Presigned Operations

    -----------------------------
    module Network.Minio.PresignedOperations,

    -- ** Bucket Policies
    getBucketPolicy,
    setBucketPolicy,

    -- * Bucket Notifications

    -------------------------
    Notification (..),
    NotificationConfig (..),
    Arn,
    Event (..),
    Filter (..),
    FilterKey (..),
    FilterRules (..),
    FilterRule (..),
    getBucketNotification,
    putBucketNotification,
    removeAllBucketNotification,
  )
where

import qualified Data.ByteString as BS
import qualified Data.Text as T
import Lib.Prelude
import qualified Network.HTTP.Conduit as NC
import qualified Network.HTTP.Types as HT
import Network.HTTP.Types.Status (status404)
import Network.Minio.API
import Network.Minio.APICommon
import Network.Minio.Data
import Network.Minio.Errors
import Network.Minio.PresignedOperations
import Network.Minio.Utils
import Network.Minio.XmlGenerator
import Network.Minio.XmlParser
import UnliftIO (Handler (Handler))
-- | Fetch all buckets from the service.
getService :: Minio [BucketInfo]
getService = do
  resp <-
    executeRequest $
      defaultS3ReqInfo
        { riNeedsLocation = False
        }
  parseListBuckets $ NC.responseBody resp
-- Parse headers from getObject and headObject calls.
parseGetObjectHeaders :: Object -> [HT.Header] -> Maybe ObjectInfo
parseGetObjectHeaders object headers =
  let metadataPairs = getMetadata headers
      userMetadata = getUserMetadataMap metadataPairs
      metadata = getNonUserMetadataMap metadataPairs
   in ObjectInfo
        <$> Just object
        <*> getLastModifiedHeader headers
        <*> getETagHeader headers
        <*> getContentLength headers
        <*> Just userMetadata
        <*> Just metadata
-- | GET an object from the service and return parsed ObjectInfo and a
-- conduit source for the object content
getObject' ::
  Bucket ->
  Object ->
  HT.Query ->
  [HT.Header] ->
  Minio GetObjectResponse
getObject' bucket object queryParams headers = do
  resp <- mkStreamRequest reqInfo
  let objInfoMaybe = parseGetObjectHeaders object $ NC.responseHeaders resp
  objInfo <-
    maybe
      (throwIO MErrVInvalidObjectInfoResponse)
      return
      objInfoMaybe
  return $
    GetObjectResponse
      { gorObjectInfo = objInfo,
        gorObjectStream = NC.responseBody resp
      }
  where
    reqInfo =
      defaultS3ReqInfo
        { riBucket = Just bucket,
          riObject = Just object,
          riQueryParams = queryParams,
          riHeaders =
            headers
              -- This header is required for safety as otherwise http-client,
              -- sends Accept-Encoding: gzip, and the server may actually gzip
              -- body. In that case Content-Length header will be missing.
              <> [("Accept-Encoding", "identity")]
        }
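A short sketch of consuming the stream that `getObject'` returns, not part of this change: the bucket and object names and the target path are illustrative, and it assumes the `Minio` monad's `MonadResource` instance so conduit's `sinkFile` can be used directly.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Conduit (runConduit, sinkFile, (.|))
import Network.Minio
import Network.Minio.S3API

main :: IO ()
main = do
  res <- runMinio minioPlayCI $ do
    -- fetch object info plus a streaming source of its bytes
    gor <- getObject' "my-bucket" "my-object" [] []
    -- stream the body straight to disk without buffering it in memory
    runConduit $ gorObjectStream gor .| sinkFile "/tmp/my-object"
  print res
```

Because the body is a conduit source, the object is never held fully in memory; any conduit sink can replace `sinkFile`.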
-- | Creates a bucket via a PUT bucket call.
putBucket :: Bucket -> Region -> Minio ()
putBucket bucket location = do
  ns <- asks getSvcNamespace
  void $
    executeRequest $
      defaultS3ReqInfo
        { riMethod = HT.methodPut,
          riBucket = Just bucket,
          riPayload = PayloadBS $ mkCreateBucketConfig ns location,
          riNeedsLocation = False
        }

-- | Single PUT object size.
@@ -173,314 +208,432 @@ putObjectSingle' bucket object headers bs = do
  let size = fromIntegral (BS.length bs)
  -- check length is within single PUT object size.
  when (size > maxSinglePutObjectSizeBytes) $
    throwIO $
      MErrVSinglePUTSizeExceeded size
  let payload = mkStreamingPayload $ PayloadBS bs
  resp <-
    executeRequest $
      defaultS3ReqInfo
        { riMethod = HT.methodPut,
          riBucket = Just bucket,
          riObject = Just object,
          riHeaders = headers,
          riPayload = payload
        }
  let rheaders = NC.responseHeaders resp
      etag = getETagHeader rheaders
  maybe
    (throwIO MErrVETagHeaderNotFound)
    return
    etag
-- | PUT an object into the service. This function performs a single
-- PUT object call, and so can only transfer objects up to 5GiB.
putObjectSingle ::
  Bucket ->
  Object ->
  [HT.Header] ->
  Handle ->
  Int64 ->
  Int64 ->
  Minio ETag
putObjectSingle bucket object headers h offset size = do
  -- check length is within single PUT object size.
  when (size > maxSinglePutObjectSizeBytes) $
    throwIO $
      MErrVSinglePUTSizeExceeded size
  -- content-length header is automatically set by library.
  let payload = mkStreamingPayload $ PayloadH h offset size
  resp <-
    executeRequest $
      defaultS3ReqInfo
        { riMethod = HT.methodPut,
          riBucket = Just bucket,
          riObject = Just object,
          riHeaders = headers,
          riPayload = payload
        }
  let rheaders = NC.responseHeaders resp
      etag = getETagHeader rheaders
  maybe
    (throwIO MErrVETagHeaderNotFound)
    return
    etag
-- | List objects in a bucket matching prefix up to delimiter,
-- starting from nextMarker.
listObjectsV1' ::
  Bucket ->
  Maybe Text ->
  Maybe Text ->
  Maybe Text ->
  Maybe Int ->
  Minio ListObjectsV1Result
listObjectsV1' bucket prefix nextMarker delimiter maxKeys = do
  resp <-
    executeRequest $
      defaultS3ReqInfo
        { riMethod = HT.methodGet,
          riBucket = Just bucket,
          riQueryParams = mkOptionalParams params
        }
  parseListObjectsV1Response $ NC.responseBody resp
  where
    params =
      [ ("marker", nextMarker),
        ("prefix", prefix),
        ("delimiter", delimiter),
        ("max-keys", show <$> maxKeys)
      ]
-- | List objects in a bucket matching prefix up to delimiter,
-- starting from nextToken.
listObjects' ::
  Bucket ->
  Maybe Text ->
  Maybe Text ->
  Maybe Text ->
  Maybe Int ->
  Minio ListObjectsResult
listObjects' bucket prefix nextToken delimiter maxKeys = do
  resp <-
    executeRequest $
      defaultS3ReqInfo
        { riMethod = HT.methodGet,
          riBucket = Just bucket,
          riQueryParams = mkOptionalParams params
        }
  parseListObjectsResponse $ NC.responseBody resp
  where
    params =
      [ ("list-type", Just "2"),
        ("continuation_token", nextToken),
        ("prefix", prefix),
        ("delimiter", delimiter),
        ("max-keys", show <$> maxKeys)
      ]
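A sketch of fetching one page of a listing with `listObjects'`, not part of this change: the bucket name and prefix are illustrative, and the `lorObjects` and `oiObject` accessors are assumed from the library's `ListObjectsResult` and `ObjectInfo` types.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Monad.IO.Class (liftIO)
import Network.Minio
import Network.Minio.S3API

main :: IO ()
main = do
  res <- runMinio minioPlayCI $ do
    -- one page of up to 100 keys under "photos/", grouped by "/"
    lr <- listObjects' "my-bucket" (Just "photos/") Nothing (Just "/") (Just 100)
    mapM_ (liftIO . print . oiObject) (lorObjects lr)
  print res
```

For full pagination, the result's continuation token would be fed back as the `nextToken` argument until no more results remain; the higher-level `listObjects` in `Network.Minio` does this automatically as a conduit.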
-- | DELETE a bucket from the service.
deleteBucket :: Bucket -> Minio ()
deleteBucket bucket =
  void $
    executeRequest $
      defaultS3ReqInfo
        { riMethod = HT.methodDelete,
          riBucket = Just bucket
        }

-- | DELETE an object from the service.
deleteObject :: Bucket -> Object -> Minio ()
deleteObject bucket object =
  void $
    executeRequest $
      defaultS3ReqInfo
        { riMethod = HT.methodDelete,
          riBucket = Just bucket,
          riObject = Just object
        }
-- | Create a new multipart upload.
newMultipartUpload :: Bucket -> Object -> [HT.Header] -> Minio UploadId
newMultipartUpload bucket object headers = do
  resp <-
    executeRequest $
      defaultS3ReqInfo
        { riMethod = HT.methodPost,
          riBucket = Just bucket,
          riObject = Just object,
          riQueryParams = [("uploads", Nothing)],
          riHeaders = headers
        }
  parseNewMultipartUpload $ NC.responseBody resp
-- | PUT a part of an object as part of a multipart upload.
putObjectPart ::
  Bucket ->
  Object ->
  UploadId ->
  PartNumber ->
  [HT.Header] ->
  Payload ->
  Minio PartTuple
putObjectPart bucket object uploadId partNumber headers payload = do
  -- transform payload to conduit to enable streaming signature
  let payload' = mkStreamingPayload payload
  resp <-
    executeRequest $
      defaultS3ReqInfo
        { riMethod = HT.methodPut,
          riBucket = Just bucket,
          riObject = Just object,
          riQueryParams = mkOptionalParams params,
          riHeaders = headers,
          riPayload = payload'
        }
  let rheaders = NC.responseHeaders resp
      etag = getETagHeader rheaders
  maybe
    (throwIO MErrVETagHeaderNotFound)
    (return . (partNumber,))
    etag
  where
    params =
      [ ("uploadId", Just uploadId),
        ("partNumber", Just $ show partNumber)
      ]
srcInfoToHeaders :: SourceInfo -> [HT.Header]
srcInfoToHeaders srcInfo =
  ( "x-amz-copy-source",
    encodeUtf8 $
      T.concat
        [ "/",
          srcBucket srcInfo,
          "/",
          srcObject srcInfo
        ]
  )
    : rangeHdr
    ++ zip names values
  where
    names =
      [ "x-amz-copy-source-if-match",
        "x-amz-copy-source-if-none-match",
        "x-amz-copy-source-if-unmodified-since",
        "x-amz-copy-source-if-modified-since"
      ]
    values =
      mapMaybe
        (fmap encodeUtf8 . (srcInfo &))
        [ srcIfMatch,
          srcIfNoneMatch,
          fmap formatRFC1123 . srcIfUnmodifiedSince,
          fmap formatRFC1123 . srcIfModifiedSince
        ]
    rangeHdr =
      maybe [] ((\a -> [("x-amz-copy-source-range", HT.renderByteRanges [a])]) . toByteRange) (srcRange srcInfo)

toByteRange :: (Int64, Int64) -> HT.ByteRange
toByteRange (x, y) = HT.ByteRangeFromTo (fromIntegral x) (fromIntegral y)
-- | Performs server-side copy of an object or part of an object as an
-- upload part of an ongoing multi-part upload.
copyObjectPart ::
  DestinationInfo ->
  SourceInfo ->
  UploadId ->
  PartNumber ->
  [HT.Header] ->
  Minio (ETag, UTCTime)
copyObjectPart dstInfo srcInfo uploadId partNumber headers = do
  resp <-
    executeRequest $
      defaultS3ReqInfo
        { riMethod = HT.methodPut,
          riBucket = Just $ dstBucket dstInfo,
          riObject = Just $ dstObject dstInfo,
          riQueryParams = mkOptionalParams params,
          riHeaders = headers ++ srcInfoToHeaders srcInfo
        }
  parseCopyObjectResponse $ NC.responseBody resp
  where
    params =
      [ ("uploadId", Just uploadId),
        ("partNumber", Just $ show partNumber)
      ]
-- | Performs server-side copy of an object that is up to 5GiB in
-- size. If the object is greater than 5GiB, this function throws the
-- error returned by the server.
copyObjectSingle ::
  Bucket ->
  Object ->
  SourceInfo ->
  [HT.Header] ->
  Minio (ETag, UTCTime)
copyObjectSingle bucket object srcInfo headers = do
  -- validate that srcRange is Nothing for this API.
  when (isJust $ srcRange srcInfo) $
    throwIO MErrVCopyObjSingleNoRangeAccepted
  resp <-
    executeRequest $
      defaultS3ReqInfo
        { riMethod = HT.methodPut,
          riBucket = Just bucket,
          riObject = Just object,
          riHeaders = headers ++ srcInfoToHeaders srcInfo
        }
  parseCopyObjectResponse $ NC.responseBody resp
-- | Complete a multipart upload.
completeMultipartUpload ::
  Bucket ->
  Object ->
  UploadId ->
  [PartTuple] ->
  Minio ETag
completeMultipartUpload bucket object uploadId partTuple = do
  resp <-
    executeRequest $
      defaultS3ReqInfo
        { riMethod = HT.methodPost,
          riBucket = Just bucket,
          riObject = Just object,
          riQueryParams = mkOptionalParams params,
          riPayload =
            PayloadBS $
              mkCompleteMultipartUploadRequest partTuple
        }
  parseCompleteMultipartUploadResponse $ NC.responseBody resp
  where
    params = [("uploadId", Just uploadId)]
-- | Abort a multipart upload.
abortMultipartUpload :: Bucket -> Object -> UploadId -> Minio ()
abortMultipartUpload bucket object uploadId =
  void $
    executeRequest $
      defaultS3ReqInfo
        { riMethod = HT.methodDelete,
          riBucket = Just bucket,
          riObject = Just object,
          riQueryParams = mkOptionalParams params
        }
  where
    params = [("uploadId", Just uploadId)]
-- | List incomplete multipart uploads.
listIncompleteUploads' ::
  Bucket ->
  Maybe Text ->
  Maybe Text ->
  Maybe Text ->
  Maybe Text ->
  Maybe Int ->
  Minio ListUploadsResult
listIncompleteUploads' bucket prefix delimiter keyMarker uploadIdMarker maxKeys = do
  resp <-
    executeRequest $
      defaultS3ReqInfo
        { riMethod = HT.methodGet,
          riBucket = Just bucket,
          riQueryParams = params
        }
  parseListUploadsResponse $ NC.responseBody resp
  where
    -- build query params
    params =
      ("uploads", Nothing)
        : mkOptionalParams
          [ ("prefix", prefix),
            ("delimiter", delimiter),
            ("key-marker", keyMarker),
            ("upload-id-marker", uploadIdMarker),
            ("max-uploads", show <$> maxKeys)
          ]
-- | List parts of an ongoing multipart upload.
listIncompleteParts' ::
  Bucket ->
  Object ->
  UploadId ->
  Maybe Text ->
  Maybe Text ->
  Minio ListPartsResult
listIncompleteParts' bucket object uploadId maxParts partNumMarker = do
  resp <-
    executeRequest $
      defaultS3ReqInfo
        { riMethod = HT.methodGet,
          riBucket = Just bucket,
          riObject = Just object,
          riQueryParams = mkOptionalParams params
        }
  parseListPartsResponse $ NC.responseBody resp
  where
    -- build optional query params
    params =
      [ ("uploadId", Just uploadId),
        ("part-number-marker", partNumMarker),
        ("max-parts", maxParts)
      ]
-- | Get metadata of an object.
headObject :: Bucket -> Object -> [HT.Header] -> Minio ObjectInfo
headObject bucket object reqHeaders = do
  resp <-
    executeRequest $
      defaultS3ReqInfo
        { riMethod = HT.methodHead,
          riBucket = Just bucket,
          riObject = Just object,
          riHeaders =
            reqHeaders
              -- This header is required for safety as otherwise http-client,
              -- sends Accept-Encoding: gzip, and the server may actually gzip
              -- body. In that case Content-Length header will be missing.
              <> [("Accept-Encoding", "identity")]
        }
  maybe (throwIO MErrVInvalidObjectInfoResponse) return $
    parseGetObjectHeaders object $
      NC.responseHeaders resp
-- | Query the object store if a given bucket exists.
headBucket :: Bucket -> Minio Bool
headBucket bucket =
  headBucketEx
    `catches` [ Handler handleNoSuchBucket,
                Handler handleStatus404
              ]
  where
    handleNoSuchBucket :: ServiceErr -> Minio Bool
    handleNoSuchBucket e
      | e == NoSuchBucket = return False
      | otherwise = throwIO e
    handleStatus404 :: NC.HttpException -> Minio Bool
    handleStatus404 e@(NC.HttpExceptionRequest _ (NC.StatusCodeException res _)) =
      if NC.responseStatus res == status404
        then return False
        else throwIO e
    handleStatus404 e = throwIO e
    headBucketEx = do
      resp <-
        executeRequest $
          defaultS3ReqInfo
            { riMethod = HT.methodHead,
              riBucket = Just bucket
            }
      return $ NC.responseStatus resp == HT.ok200
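A common use of `headBucket` is the create-if-missing pattern. A minimal sketch, not part of this change: the bucket name and region are illustrative.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Monad (unless)
import Network.Minio
import Network.Minio.S3API

main :: IO ()
main = do
  res <- runMinio minioPlayCI $ do
    -- headBucket maps both NoSuchBucket and a bare 404 to False
    exists <- headBucket "my-bucket"
    unless exists $ putBucket "my-bucket" "us-east-1"
  print res
```

Note there is still a race if another client creates the bucket between the two calls, so callers may also want to tolerate a "bucket already exists" error from `putBucket`.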
-- | Set the notification configuration on a bucket.
putBucketNotification :: Bucket -> Notification -> Minio ()
putBucketNotification bucket ncfg = do
  ns <- asks getSvcNamespace
  void $
    executeRequest $
      defaultS3ReqInfo
        { riMethod = HT.methodPut,
          riBucket = Just bucket,
          riQueryParams = [("notification", Nothing)],
          riPayload =
            PayloadBS $
              mkPutNotificationRequest ns ncfg
        }
-- | Retrieve the notification configuration on a bucket.
getBucketNotification :: Bucket -> Minio Notification
getBucketNotification bucket = do
  resp <-
    executeRequest $
      defaultS3ReqInfo
        { riMethod = HT.methodGet,
          riBucket = Just bucket,
          riQueryParams = [("notification", Nothing)]
        }
  parseNotification $ NC.responseBody resp

-- | Remove all notifications configured on a bucket.
@@ -490,11 +643,14 @@ removeAllBucketNotification = flip putBucketNotification defaultNotification
-- | Fetch the policy if any on a bucket.
getBucketPolicy :: Bucket -> Minio Text
getBucketPolicy bucket = do
  resp <-
    executeRequest $
      defaultS3ReqInfo
        { riMethod = HT.methodGet,
          riBucket = Just bucket,
          riQueryParams = [("policy", Nothing)]
        }
  return $ decodeUtf8Lenient $ toStrictBS $ NC.responseBody resp

-- | Set a new policy on a bucket.
-- As a special condition if the policy is empty
@@ -506,18 +662,24 @@ setBucketPolicy bucket policy = do
    else putBucketPolicy bucket policy
-- | Save a new policy on a bucket.
putBucketPolicy :: Bucket -> Text -> Minio ()
putBucketPolicy bucket policy = do
  void $
    executeRequest $
      defaultS3ReqInfo
        { riMethod = HT.methodPut,
          riBucket = Just bucket,
          riQueryParams = [("policy", Nothing)],
          riPayload = PayloadBS $ encodeUtf8 policy
        }

-- | Delete any policy set on a bucket.
deleteBucketPolicy :: Bucket -> Minio ()
deleteBucketPolicy bucket = do
  void $
    executeRequest $
      defaultS3ReqInfo
        { riMethod = HT.methodDelete,
          riBucket = Just bucket,
          riQueryParams = [("policy", Nothing)]
        }


@@ -15,113 +15,103 @@
-- --
module Network.Minio.SelectAPI module Network.Minio.SelectAPI
( ( -- | The `selectObjectContent` allows querying CSV, JSON or Parquet
-- format objects in AWS S3 and in MinIO using SQL Select
-- statements. This allows significant reduction of data transfer
-- from object storage for computation-intensive tasks, as relevant
-- data is filtered close to the storage.
selectObjectContent,
SelectRequest,
selectRequest,
-- | The `selectObjectContent` allows querying CSV, JSON or Parquet -- *** Input Serialization
-- format objects in AWS S3 and in MinIO using SQL Select InputSerialization,
-- statements. This allows significant reduction of data transfer defaultCsvInput,
-- from object storage for computation-intensive tasks, as relevant linesJsonInput,
-- data is filtered close to the storage. documentJsonInput,
defaultParquetInput,
setInputCSVProps,
CompressionType (..),
setInputCompressionType,
selectObjectContent -- *** CSV Format details
, SelectRequest -- | CSV format options such as delimiters and quote characters are
, selectRequest -- specified using using the functions below. Options are combined
-- monoidally.
CSVProp,
recordDelimiter,
fieldDelimiter,
quoteCharacter,
quoteEscapeCharacter,
commentCharacter,
allowQuotedRecordDelimiter,
FileHeaderInfo (..),
fileHeaderInfo,
QuoteFields (..),
quoteFields,
-- *** Input Serialization -- *** Output Serialization
OutputSerialization,
defaultCsvOutput,
defaultJsonOutput,
outputCSVFromProps,
outputJSONFromRecordDelimiter,
, InputSerialization -- *** Progress messages
, defaultCsvInput setRequestProgressEnabled,
, linesJsonInput
, documentJsonInput
, defaultParquetInput
, setInputCSVProps
, CompressionType(..) -- *** Interpreting Select output
, setInputCompressionType
-- *** CSV Format details -- | The conduit returned by `selectObjectContent` returns values of
-- the `EventMessage` data type. This returns the query output
-- messages formatted according to the chosen output serialization,
-- interleaved with progress messages (if enabled by
-- `setRequestProgressEnabled`), and at the end a statistics
-- message.
--
-- If the application is interested in only the payload, then
-- `getPayloadBytes` can be used. For example to simply print the
-- payload to stdout:
--
-- > resultConduit <- selectObjectContent bucket object mySelectRequest
-- > runConduit $ resultConduit .| getPayloadBytes .| stdoutC
--
-- Note that runConduit, the connect operator (.|) and stdoutC are
-- all from the "conduit" package.
getPayloadBytes,
EventMessage (..),
Progress (..),
Stats,
)
where
import Conduit ((.|))
import qualified Conduit as C
import qualified Data.Binary as Bin
import qualified Data.ByteString as B
import qualified Data.ByteString.Lazy as LB
import Data.Digest.CRC32 (crc32, crc32Update)
import Lib.Prelude
import qualified Network.HTTP.Conduit as NC
import qualified Network.HTTP.Types as HT
import Network.Minio.API
import Network.Minio.Data
import Network.Minio.Errors
import Network.Minio.Utils
import Network.Minio.XmlGenerator
import Network.Minio.XmlParser
import UnliftIO (MonadUnliftIO)

data EventStreamException
  = ESEPreludeCRCFailed
  | ESEMessageCRCFailed
  | ESEUnexpectedEndOfStream
  | ESEDecodeFail [Char]
  | ESEInvalidHeaderType
  | ESEInvalidHeaderValueType
  | ESEInvalidMessageType
  deriving stock (Eq, Show)
instance Exception EventStreamException
@@ -129,171 +119,176 @@ instance Exception EventStreamException
chunkSize :: Int
chunkSize = 32 * 1024

parseBinary :: (Bin.Binary a) => ByteString -> IO a
parseBinary b = do
  case Bin.decodeOrFail $ LB.fromStrict b of
    Left (_, _, msg) -> throwIO $ ESEDecodeFail msg
    Right (_, _, r) -> return r
bytesToHeaderName :: Text -> IO MsgHeaderName
bytesToHeaderName t = case t of
  ":message-type" -> return MessageType
  ":event-type" -> return EventType
  ":content-type" -> return ContentType
  ":error-code" -> return ErrorCode
  ":error-message" -> return ErrorMessage
  _ -> throwIO ESEInvalidHeaderType
parseHeaders ::
  (MonadUnliftIO m) =>
  Word32 ->
  C.ConduitM ByteString a m [MessageHeader]
parseHeaders 0 = return []
parseHeaders hdrLen = do
  bs1 <- readNBytes 1
  n :: Word8 <- liftIO $ parseBinary bs1
  headerKeyBytes <- readNBytes $ fromIntegral n
  let headerKey = decodeUtf8Lenient headerKeyBytes
  headerName <- liftIO $ bytesToHeaderName headerKey
  bs2 <- readNBytes 1
  headerValueType :: Word8 <- liftIO $ parseBinary bs2
  when (headerValueType /= 7) $ throwIO ESEInvalidHeaderValueType
  bs3 <- readNBytes 2
  vLen :: Word16 <- liftIO $ parseBinary bs3
  headerValueBytes <- readNBytes $ fromIntegral vLen
  let headerValue = decodeUtf8Lenient headerValueBytes
      m = (headerName, headerValue)
      k = 1 + fromIntegral n + 1 + 2 + fromIntegral vLen
  ms <- parseHeaders (hdrLen - k)
  return (m : ms)
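The header block parsed by `parseHeaders` above has a simple byte layout. As a standalone illustration (not part of minio-hs; `encodeHeader` is a hypothetical name), the inverse encoder makes the wire format explicit: a 1-byte name length, the name, a 1-byte value type (7 = string), a 2-byte big-endian value length, then the value. Only the GHC-bundled bytestring and binary packages are assumed.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Data.Binary.Put as Put
import qualified Data.ByteString as B
import qualified Data.ByteString.Lazy as LB

-- encodeHeader is a hypothetical inverse of the decoding done by
-- parseHeaders: 1-byte name length, name bytes, 1-byte value type
-- (7 = string), 2-byte big-endian value length, value bytes.
encodeHeader :: B.ByteString -> B.ByteString -> B.ByteString
encodeHeader name value =
  LB.toStrict . Put.runPut $ do
    Put.putWord8 (fromIntegral (B.length name))
    Put.putByteString name
    Put.putWord8 7 -- value type 7: a string value follows
    Put.putWord16be (fromIntegral (B.length value))
    Put.putByteString value

main :: IO ()
main = print (B.length (encodeHeader ":event-type" "Records"))
```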
-- readNBytes returns N bytes read from the stream and throws an
-- exception if N bytes are not present on the stream.
readNBytes :: (MonadUnliftIO m) => Int -> C.ConduitM ByteString a m ByteString
readNBytes n = do
  b <- LB.toStrict <$> (C.takeCE n .| C.sinkLazy)
  if B.length b /= n
    then throwIO ESEUnexpectedEndOfStream
    else return b
crcCheck ::
  (MonadUnliftIO m) =>
  C.ConduitM ByteString ByteString m ()
crcCheck = do
  b <- readNBytes 12
  n :: Word32 <- liftIO $ parseBinary $ B.take 4 b
  preludeCRC :: Word32 <- liftIO $ parseBinary $ B.drop 8 b
  when (crc32 (B.take 8 b) /= preludeCRC) $
    throwIO ESEPreludeCRCFailed

  -- we do not yield the checksum
  C.yield $ B.take 8 b

  -- 12 bytes have been read off the current message. Now read the
  -- next (n-12)-4 bytes and accumulate the checksum, and yield it.
  let startCrc = crc32 b
  finalCrc <- accumulateYield (fromIntegral n - 16) startCrc

  bs <- readNBytes 4
  expectedCrc :: Word32 <- liftIO $ parseBinary bs
  when (finalCrc /= expectedCrc) $
    throwIO ESEMessageCRCFailed

  -- we unconditionally recurse - downstream figures out when to
  -- quit reading the stream
  crcCheck
  where
    accumulateYield n checkSum = do
      let toRead = min n chunkSize
      b <- readNBytes toRead
      let c' = crc32Update checkSum b
          n' = n - B.length b
      C.yield b
      if n' > 0
        then accumulateYield n' c'
        else return c'
handleMessage :: (MonadUnliftIO m) => C.ConduitT ByteString EventMessage m ()
handleMessage = do
  b1 <- readNBytes 4
  msgLen :: Word32 <- liftIO $ parseBinary b1
  b2 <- readNBytes 4
  hdrLen :: Word32 <- liftIO $ parseBinary b2
  hs <- parseHeaders hdrLen
  let payloadLen = msgLen - hdrLen - 16
      getHdrVal h = fmap snd . find ((h ==) . fst)
      eventHdrValue = getHdrVal EventType hs
      msgHdrValue = getHdrVal MessageType hs
      errCode = getHdrVal ErrorCode hs
      errMsg = getHdrVal ErrorMessage hs
  case msgHdrValue of
    Just "event" -> do
      case eventHdrValue of
        Just "Records" -> passThrough $ fromIntegral payloadLen
        Just "Cont" -> return ()
        Just "Progress" -> do
          bs <- readNBytes $ fromIntegral payloadLen
          progress <- parseSelectProgress bs
          C.yield $ ProgressEventMessage progress
        Just "Stats" -> do
          bs <- readNBytes $ fromIntegral payloadLen
          stats <- parseSelectProgress bs
          C.yield $ StatsEventMessage stats
        Just "End" -> return ()
        _ -> throwIO ESEInvalidMessageType
      when (eventHdrValue /= Just "End") handleMessage
    Just "error" -> do
      let reqMsgMay = RequestLevelErrorMessage <$> errCode <*> errMsg
      maybe (throwIO ESEInvalidMessageType) C.yield reqMsgMay
    _ -> throwIO ESEInvalidMessageType
  where
    passThrough 0 = return ()
    passThrough n = do
      let c = min n chunkSize
      b <- readNBytes c
      C.yield $ RecordPayloadEventMessage b
      passThrough $ n - B.length b
selectProtoConduit ::
  (MonadUnliftIO m) =>
  C.ConduitT ByteString EventMessage m ()
selectProtoConduit = crcCheck .| handleMessage
-- | selectObjectContent calls the SelectRequest on the given
-- object. It returns a Conduit of event messages that can be consumed
-- by the client.
selectObjectContent ::
  Bucket ->
  Object ->
  SelectRequest ->
  Minio (C.ConduitT () EventMessage Minio ())
selectObjectContent b o r = do
  let reqInfo =
        defaultS3ReqInfo
          { riMethod = HT.methodPost,
            riBucket = Just b,
            riObject = Just o,
            riPayload = PayloadBS $ mkSelectRequest r,
            riNeedsLocation = False,
            riQueryParams = [("select", Nothing), ("select-type", Just "2")]
          }
  -- print $ mkSelectRequest r
  resp <- mkStreamRequest reqInfo
  return $ NC.responseBody resp .| selectProtoConduit
-- | A helper conduit that returns only the record payload bytes.
getPayloadBytes :: (MonadIO m) => C.ConduitT EventMessage ByteString m ()
getPayloadBytes = do
  evM <- C.await
  case evM of
    Just v -> do
      case v of
        RecordPayloadEventMessage b -> C.yield b
        RequestLevelErrorMessage c m -> liftIO $ throwIO $ SelectErr c m
        _ -> return ()
      getPayloadBytes
    Nothing -> return ()


@@ -1,5 +1,5 @@
--
-- MinIO Haskell SDK, (C) 2017-2023 MinIO, Inc.
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
@@ -13,88 +13,94 @@
-- See the License for the specific language governing permissions and
-- limitations under the License.
--
{-# LANGUAGE BangPatterns #-}

module Network.Minio.Sign.V4
  ( SignParams (..),
    signV4QueryParams,
    signV4,
    signV4PostPolicy,
    signV4Stream,
    Service (..),
    credentialScope,
  )
where
import qualified Conduit as C
import qualified Data.ByteArray as BA
import qualified Data.ByteString as B
import qualified Data.ByteString.Base64 as Base64
import qualified Data.ByteString.Char8 as B8
import qualified Data.ByteString.Lazy as LB
import Data.CaseInsensitive (mk)
import qualified Data.CaseInsensitive as CI
import qualified Data.HashMap.Strict as Map
import qualified Data.HashSet as Set
import Data.List (partition)
import qualified Data.List.NonEmpty as NE
import qualified Data.Time as Time
import Lib.Prelude
import qualified Network.HTTP.Conduit as NC
import Network.HTTP.Types (Header, SimpleQuery, hContentEncoding, parseQuery)
import qualified Network.HTTP.Types as H
import Network.HTTP.Types.Header (RequestHeaders)
import Network.Minio.Data.ByteString
import Network.Minio.Data.Crypto
import Network.Minio.Data.Time
import Network.Minio.Errors
import Text.Printf (printf)
-- these headers are not included in the string to sign when signing a
-- request
ignoredHeaders :: Set.HashSet ByteString
ignoredHeaders =
  Set.fromList $
    map
      CI.foldedCase
      [ H.hAuthorization,
        H.hContentType,
        H.hUserAgent
      ]
data Service = ServiceS3 | ServiceSTS
  deriving stock (Eq, Show)

toByteString :: Service -> ByteString
toByteString ServiceS3 = "s3"
toByteString ServiceSTS = "sts"

data SignParams = SignParams
  { spAccessKey :: Text,
    spSecretKey :: BA.ScrubbedBytes,
    spSessionToken :: Maybe BA.ScrubbedBytes,
    spService :: Service,
    spTimeStamp :: UTCTime,
    spRegion :: Maybe Text,
    spExpirySecs :: Maybe UrlExpiry,
    spPayloadHash :: Maybe ByteString
  }
  deriving stock (Show)
mkAuthHeader :: Text -> ByteString -> ByteString -> ByteString -> H.Header
mkAuthHeader accessKey scope signedHeaderKeys sign =
  let authValue =
        B.concat
          [ "AWS4-HMAC-SHA256 Credential=",
            encodeUtf8 accessKey,
            "/",
            scope,
            ", SignedHeaders=",
            signedHeaderKeys,
            ", Signature=",
            sign
          ]
   in (H.hAuthorization, authValue)
data IsStreaming = IsStreamingLength Int64 | NotStreaming
  deriving stock (Eq, Show)
amzSecurityToken :: ByteString
amzSecurityToken = "X-Amz-Security-Token"
-- | Given SignParams and request details, including request method,
-- request path, headers, query params and payload hash, generates an
@@ -108,124 +114,212 @@ mkAuthHeader accessKey scope signedHeaderKeys sign =
-- is being created. The expiry is interpreted as an integer number of
-- seconds. The output will be the list of query-parameters to add to
-- the request.
signV4QueryParams :: SignParams -> NC.Request -> SimpleQuery
signV4QueryParams !sp !req =
  let scope = credentialScope sp
      expiry = spExpirySecs sp

      headersToSign = getHeadersToSign $ NC.requestHeaders req
      signedHeaderKeys = B.intercalate ";" $ sort $ map fst headersToSign
      -- query-parameters to be added before signing for presigned URLs
      -- (i.e. when `isJust expiry`)
      authQP =
        [ ("X-Amz-Algorithm", "AWS4-HMAC-SHA256"),
          ("X-Amz-Credential", B.concat [encodeUtf8 $ spAccessKey sp, "/", scope]),
          ("X-Amz-Date", awsTimeFormatBS $ spTimeStamp sp),
          ("X-Amz-Expires", maybe "" showBS expiry),
          ("X-Amz-SignedHeaders", signedHeaderKeys)
        ]
          ++ maybeToList ((amzSecurityToken,) . BA.convert <$> spSessionToken sp)
      finalQP =
        parseQuery (NC.queryString req)
          ++ if isJust expiry
            then (fmap . fmap) Just authQP
            else []
      -- 1. compute canonical request
      canonicalRequest =
        mkCanonicalRequest
          False
          sp
          (NC.setQueryString finalQP req)
          headersToSign
      -- 2. compute string to sign
      stringToSign = mkStringToSign (spTimeStamp sp) scope canonicalRequest
      -- 3.1 compute signing key
      signingKey = getSigningKey sp
      -- 3.2 compute signature
      signature = computeSignature stringToSign signingKey
   in ("X-Amz-Signature", signature) : authQP
-- | Given SignParams and request details, including request method, request
-- path, headers, query params and payload hash, generates an updated set of
-- headers, including the x-amz-date header and the Authorization header, which
-- includes the signature.
--
-- The output is the list of headers to be added to authenticate the request.
signV4 :: SignParams -> NC.Request -> [Header]
signV4 !sp !req =
  let scope = credentialScope sp

      -- extra headers to be added for signing purposes.
      extraHeaders =
        ("X-Amz-Date", awsTimeFormatBS $ spTimeStamp sp)
          : ( -- payload hash is only used for S3 (not STS)
              [ ( "x-amz-content-sha256",
                  fromMaybe "UNSIGNED-PAYLOAD" $ spPayloadHash sp
                )
                | spService sp == ServiceS3
              ]
            )
          ++ maybeToList ((mk amzSecurityToken,) . BA.convert <$> spSessionToken sp)

      -- 1. compute canonical request
      reqHeaders = NC.requestHeaders req ++ extraHeaders
      (canonicalRequest, signedHeaderKeys) =
        getCanonicalRequestAndSignedHeaders
          NotStreaming
          sp
          req
          reqHeaders
      -- 2. compute string to sign
      stringToSign = mkStringToSign (spTimeStamp sp) scope canonicalRequest
      -- 3.1 compute signing key
      signingKey = getSigningKey sp
      -- 3.2 compute signature
      signature = computeSignature stringToSign signingKey
      -- 4. compute auth header
      authHeader = mkAuthHeader (spAccessKey sp) scope signedHeaderKeys signature
   in authHeader : extraHeaders

credentialScope :: SignParams -> ByteString
credentialScope sp =
  let region = fromMaybe "" $ spRegion sp
   in B.intercalate
        "/"
        [ encodeUtf8 $ Time.formatTime Time.defaultTimeLocale "%Y%m%d" $ spTimeStamp sp,
          encodeUtf8 region,
          toByteString $ spService sp,
          "aws4_request"
        ]
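The scope built by credentialScope is just a slash-joined quadruple, "date/region/service/aws4_request". A standalone sketch with a hypothetical `scopeFor` helper, using a fixed example date string instead of one derived from SignParams:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Data.ByteString.Char8 as B8

-- scopeFor mirrors the shape of the credential scope string:
-- "<yyyymmdd>/<region>/<service>/aws4_request".
scopeFor :: B8.ByteString -> B8.ByteString -> B8.ByteString -> B8.ByteString
scopeFor date region service =
  B8.intercalate "/" [date, region, service, "aws4_request"]

main :: IO ()
main = B8.putStrLn (scopeFor "20230503" "us-east-1" "s3")
```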
-- Folds header name, trims whitespace in header values, skips ignored headers
-- and sorts headers.
getHeadersToSign :: [Header] -> [(ByteString, ByteString)]
getHeadersToSign !h =
  filter ((\hdr -> not $ Set.member hdr ignoredHeaders) . fst) $
    map (bimap CI.foldedCase stripBS) h
-- | Given the list of headers in the request, computes the canonical headers
-- and the signed headers strings.
getCanonicalHeaders :: NonEmpty Header -> (ByteString, ByteString)
getCanonicalHeaders h =
  let -- Folds header name, trims spaces in header values, skips ignored
      -- headers and sorts headers by name (we must not re-order multi-valued
      -- headers).
      headersToSign =
        NE.toList $
          NE.sortBy (\a b -> compare (fst a) (fst b)) $
            NE.fromList $
              NE.filter ((\hdr -> not $ Set.member hdr ignoredHeaders) . fst) $
                NE.map (bimap CI.foldedCase stripBS) h
      canonicalHeaders = mconcat $ map (\(a, b) -> a <> ":" <> b <> "\n") headersToSign
      signedHeaderKeys = B.intercalate ";" $ map fst headersToSign
   in (canonicalHeaders, signedHeaderKeys)
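The fold performed by getCanonicalHeaders can be restated in a self-contained sketch. `canonicalize` below is a hypothetical, simplified stand-in: it assumes header names are already lower-cased and values trimmed, then sorts by name and produces the canonical `name:value\n` block plus the `;`-joined signed-header key list.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Data.ByteString.Char8 as B8
import Data.List (sortOn)

-- canonicalize: sort (already folded) headers by name, emit the
-- "name:value\n" canonical block and the ";"-separated key list.
canonicalize :: [(B8.ByteString, B8.ByteString)] -> (B8.ByteString, B8.ByteString)
canonicalize hs =
  let sorted = sortOn fst hs
      block = mconcat (map (\(k, v) -> k <> ":" <> v <> "\n") sorted)
      keys = B8.intercalate ";" (map fst sorted)
   in (block, keys)

main :: IO ()
main = print (canonicalize [("x-amz-date", "20230503T000000Z"), ("host", "play.min.io")])
```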
getCanonicalRequestAndSignedHeaders ::
  IsStreaming ->
  SignParams ->
  NC.Request ->
  [Header] ->
  (ByteString, ByteString)
getCanonicalRequestAndSignedHeaders isStreaming sp req requestHeaders =
  let httpMethod = NC.method req
      canonicalUri = uriEncode False $ NC.path req
      canonicalQueryString =
        B.intercalate "&" $
          map (\(x, y) -> B.concat [x, "=", y]) $
            sort $
              map
                (bimap (uriEncode True) (maybe "" (uriEncode True)))
                (parseQuery $ NC.queryString req)
      (canonicalHeaders, signedHeaderKeys) = getCanonicalHeaders $ NE.fromList requestHeaders
      payloadHashStr =
        case isStreaming of
          IsStreamingLength _ -> "STREAMING-AWS4-HMAC-SHA256-PAYLOAD"
          NotStreaming -> fromMaybe "UNSIGNED-PAYLOAD" $ spPayloadHash sp
      canonicalRequest =
        B.intercalate
          "\n"
          [ httpMethod,
            canonicalUri,
            canonicalQueryString,
            canonicalHeaders,
            signedHeaderKeys,
            payloadHashStr
          ]
   in (canonicalRequest, signedHeaderKeys)
mkCanonicalRequest ::
  Bool ->
  SignParams ->
  NC.Request ->
  [(ByteString, ByteString)] ->
  ByteString
mkCanonicalRequest !isStreaming !sp !req !headersForSign =
  let httpMethod = NC.method req
      canonicalUri = uriEncode False $ NC.path req
      canonicalQueryString =
        B.intercalate "&" $
          map (\(x, y) -> B.concat [x, "=", y]) $
            sortBy (\a b -> compare (fst a) (fst b)) $
              map
                (bimap (uriEncode True) (maybe "" (uriEncode True)))
                (parseQuery $ NC.queryString req)
      sortedHeaders = sort headersForSign
      canonicalHeaders =
        B.concat $
          map (\(x, y) -> B.concat [x, ":", y, "\n"]) sortedHeaders
      signedHeaders = B.intercalate ";" $ map fst sortedHeaders
      payloadHashStr =
        if isStreaming
          then "STREAMING-AWS4-HMAC-SHA256-PAYLOAD"
          else fromMaybe "UNSIGNED-PAYLOAD" $ spPayloadHash sp
   in B.intercalate
        "\n"
        [ httpMethod,
          canonicalUri,
          canonicalQueryString,
          canonicalHeaders,
          signedHeaders,
          payloadHashStr
        ]
mkStringToSign :: UTCTime -> ByteString -> ByteString -> ByteString
mkStringToSign ts !scope !canonicalRequest =
  B.intercalate
    "\n"
    [ "AWS4-HMAC-SHA256",
      awsTimeFormatBS ts,
      scope,
      hashSHA256 canonicalRequest
    ]
getSigningKey :: SignParams -> ByteString
getSigningKey sp =
  hmacSHA256RawBS "aws4_request"
    . hmacSHA256RawBS (toByteString $ spService sp)
    . hmacSHA256RawBS (encodeUtf8 $ fromMaybe "" $ spRegion sp)
    . hmacSHA256RawBS (awsDateFormatBS $ spTimeStamp sp)
    $ B.concat ["AWS4", BA.convert $ spSecretKey sp]
computeSignature :: ByteString -> ByteString -> ByteString
computeSignature !toSign !key = digestToBase16 $ hmacSHA256 toSign key
@@ -233,159 +327,168 @@ computeSignature !toSign !key = digestToBase16 $ hmacSHA256 toSign key
-- | Takes a validated Post Policy JSON bytestring and the signing
-- parameters, and returns form-data for the POST upload containing
-- just the signature and the encoded post-policy.
signV4PostPolicy ::
  ByteString ->
  SignParams ->
  Map.HashMap Text ByteString
signV4PostPolicy !postPolicyJSON !sp =
  let stringToSign = Base64.encode postPolicyJSON
      signingKey = getSigningKey sp
      signature = computeSignature stringToSign signingKey
   in Map.fromList $
        [ ("x-amz-signature", signature),
          ("policy", stringToSign)
        ]
          ++ maybeToList ((decodeUtf8 amzSecurityToken,) . BA.convert <$> spSessionToken sp)
chunkSizeConstant :: Int
chunkSizeConstant = 64 * 1024

-- base16Len computes the number of bytes required to represent @n (> 0)@ in
-- hexadecimal.
base16Len :: (Integral a) => a -> Int
base16Len n
  | n == 0 = 0
  | otherwise = 1 + base16Len (n `div` 16)
signedStreamLength :: Int64 -> Int64
signedStreamLength dataLen =
  let chunkSzInt = fromIntegral chunkSizeConstant
      (numChunks, lastChunkLen) = quotRem dataLen chunkSzInt
      -- Structure of a chunk:
      -- string(IntHexBase(chunk-size)) + ";chunk-signature=" + signature + \r\n + chunk-data + \r\n
      encodedChunkLen csz = fromIntegral (base16Len csz) + 17 + 64 + 2 + csz + 2
      fullChunkSize = encodedChunkLen chunkSzInt
      lastChunkSize = bool 0 (encodedChunkLen lastChunkLen) $ lastChunkLen > 0
      finalChunkSize = 1 + 17 + 64 + 2 + 2
   in numChunks * fullChunkSize + lastChunkSize + finalChunkSize

-- For streaming S3, we need to update the content-encoding header.
addContentEncoding :: [Header] -> [Header]
addContentEncoding hs =
  -- assume there is at most one content-encoding header.
  let (ceHdrs, others) = partition ((== hContentEncoding) . fst) hs
   in maybe
        (hContentEncoding, "aws-chunked")
        (\(k, v) -> (k, v <> ",aws-chunked"))
        (listToMaybe ceHdrs)
        : others
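The arithmetic in signedStreamLength can be checked with a standalone restatement (hypothetical names `base16Len'` and `signedLen`): each chunk carries hex(size), the 17-byte `;chunk-signature=` marker, a 64-hex-character signature, and two CRLF pairs around the data, and the stream ends with a terminating zero-length chunk.

```haskell
import Data.Bool (bool)
import Data.Int (Int64)

-- base16Len', restated locally: number of hex digits in a positive number.
base16Len' :: Int64 -> Int64
base16Len' n
  | n == 0 = 0
  | otherwise = 1 + base16Len' (n `div` 16)

-- signedLen mirrors signedStreamLength for a 64 KiB chunk size.
signedLen :: Int64 -> Int64
signedLen dataLen =
  let chunkSz = 64 * 1024
      (numChunks, lastLen) = quotRem dataLen chunkSz
      -- hex(size) + ";chunk-signature=" (17) + signature (64) + CRLF (2)
      -- + data + CRLF (2)
      encodedLen sz = base16Len' sz + 17 + 64 + 2 + sz + 2
      lastChunk = bool 0 (encodedLen lastLen) (lastLen > 0)
      finalChunk = 1 + 17 + 64 + 2 + 2 -- the terminating "0" chunk
   in numChunks * encodedLen chunkSz + lastChunk + finalChunk

main :: IO ()
main = mapM_ (print . signedLen) [100, 64 * 1024]
```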
signV4Stream ::
  Int64 ->
  SignParams ->
  NC.Request ->
  (C.ConduitT () ByteString (C.ResourceT IO) () -> NC.Request)
signV4Stream !payloadLength !sp !req =
  let ts = spTimeStamp sp

      -- compute the updated list of headers to be added for signing purposes.
      signedContentLength = signedStreamLength payloadLength
      extraHeaders =
        [ ("X-Amz-Date", awsTimeFormatBS $ spTimeStamp sp),
          ("x-amz-decoded-content-length", showBS payloadLength),
          ("content-length", showBS signedContentLength),
          ("x-amz-content-sha256", "STREAMING-AWS4-HMAC-SHA256-PAYLOAD")
        ]
          ++ maybeToList ((mk amzSecurityToken,) . BA.convert <$> spSessionToken sp)
      requestHeaders =
        addContentEncoding $
          foldr setHeader (NC.requestHeaders req) extraHeaders

      -- 1. Compute Seed Signature
      -- 1.1 Canonical Request
      (canonicalReq, signedHeaderKeys) =
        getCanonicalRequestAndSignedHeaders
          (IsStreamingLength payloadLength)
          sp
          req
          requestHeaders
      scope = credentialScope sp
      accessKey = spAccessKey sp
      -- 1.2 String to sign
      stringToSign = mkStringToSign ts scope canonicalReq
-- 1.3 Compute signature
-- 1.3.1 compute signing key
signingKey = getSigningKey sp
-- 1.3.2 Compute signature
seedSignature = computeSignature stringToSign signingKey
-- 1.3.3 Compute Auth Header
authHeader = mkAuthHeader accessKey scope signedHeaderKeys seedSignature
-- 1.4 Updated headers for the request
finalReqHeaders = authHeader : requestHeaders
-- headersToAdd = authHeader : datePair : streamingHeaders
-- 1.2 String toSign toHexStr n = B8.pack $ printf "%x" n
stringToSign = mkStringToSign ts scope canonicalReq (numParts, lastPSize) = payloadLength `quotRem` fromIntegral chunkSizeConstant
-- Function to compute string to sign for each chunk.
-- 1.3 Compute signature chunkStrToSign prevSign currChunkHash =
-- 1.3.1 compute signing key B.intercalate
signingKey = mkSigningKey ts region $ toS secretKey "\n"
[ "AWS4-HMAC-SHA256-PAYLOAD",
-- 1.3.2 Compute signature awsTimeFormatBS ts,
seedSignature = computeSignature stringToSign signingKey scope,
prevSign,
-- 1.3.3 Compute Auth Header hashSHA256 "",
authHeader = mkAuthHeader accessKey scope signedHeaderKeys seedSignature currChunkHash
]
-- 1.4 Updated headers for the request -- Read n byte from upstream and return a strict bytestring.
finalReqHeaders = authHeader : (computedHeaders ++ streamingHeaders) mustTakeN n = do
-- headersToAdd = authHeader : datePair : streamingHeaders bs <- LB.toStrict <$> (C.takeCE n C..| C.sinkLazy)
toHexStr n = B8.pack $ printf "%x" n
(numParts, lastPSize) = payloadLength `quotRem` fromIntegral chunkSizeConstant
-- Function to compute string to sign for each chunk.
chunkStrToSign prevSign currChunkHash =
B.intercalate "\n"
[ "AWS4-HMAC-SHA256-PAYLOAD"
, awsTimeFormatBS ts
, scope
, prevSign
, hashSHA256 ""
, currChunkHash
]
-- Read n byte from upstream and return a strict bytestring.
mustTakeN n = do
bs <- toS <$> (C.takeCE n C..| C.sinkLazy)
when (B.length bs /= n) $ when (B.length bs /= n) $
throwIO MErrVStreamingBodyUnexpectedEOF throwIO MErrVStreamingBodyUnexpectedEOF
return bs return bs
signerConduit n lps prevSign =
-- First case encodes a full chunk of length
-- 'chunkSizeConstant'.
if
| n > 0 -> do
bs <- mustTakeN chunkSizeConstant
let strToSign = chunkStrToSign prevSign (hashSHA256 bs)
nextSign = computeSignature strToSign signingKey
chunkBS =
toHexStr chunkSizeConstant
<> ";chunk-signature="
<> nextSign
<> "\r\n"
<> bs
<> "\r\n"
C.yield chunkBS
signerConduit (n - 1) lps nextSign
signerConduit n lps prevSign = -- Second case encodes the last chunk which is smaller than
-- First case encodes a full chunk of length -- 'chunkSizeConstant'
-- 'chunkSizeConstant'. | lps > 0 -> do
if | n > 0 -> do bs <- mustTakeN $ fromIntegral lps
bs <- mustTakeN chunkSizeConstant let strToSign = chunkStrToSign prevSign (hashSHA256 bs)
let strToSign = chunkStrToSign prevSign (hashSHA256 bs) nextSign = computeSignature strToSign signingKey
nextSign = computeSignature strToSign signingKey chunkBS =
chunkBS = toHexStr chunkSizeConstant toHexStr lps
<> ";chunk-signature=" <> ";chunk-signature="
<> nextSign <> "\r\n" <> bs <> "\r\n" <> nextSign
C.yield chunkBS <> "\r\n"
signerConduit (n-1) lps nextSign <> bs
<> "\r\n"
C.yield chunkBS
signerConduit 0 0 nextSign
-- Second case encodes the last chunk which is smaller than -- Last case encodes the final signature chunk that has no
-- 'chunkSizeConstant' -- data.
| lps > 0 -> do | otherwise -> do
bs <- mustTakeN $ fromIntegral lps let strToSign = chunkStrToSign prevSign (hashSHA256 "")
let strToSign = chunkStrToSign prevSign (hashSHA256 bs) nextSign = computeSignature strToSign signingKey
nextSign = computeSignature strToSign signingKey lastChunkBS = "0;chunk-signature=" <> nextSign <> "\r\n\r\n"
chunkBS = toHexStr lps <> ";chunk-signature=" C.yield lastChunkBS
<> nextSign <> "\r\n" <> bs <> "\r\n" in \src ->
C.yield chunkBS req
signerConduit 0 0 nextSign { NC.requestHeaders = finalReqHeaders,
NC.requestBody =
NC.requestBodySource signedContentLength $
src C..| signerConduit numParts lastPSize seedSignature
}
-- Last case encodes the final signature chunk that has no -- "setHeader r hdr" adds the hdr to r, replacing it in r if it already exists.
-- data. setHeader :: Header -> RequestHeaders -> RequestHeaders
| otherwise -> do setHeader hdr r =
let strToSign = chunkStrToSign prevSign (hashSHA256 "") let r' = filter (\(name, _) -> name /= fst hdr) r
nextSign = computeSignature strToSign signingKey in hdr : r'
lastChunkBS = "0;chunk-signature=" <> nextSign <> "\r\n\r\n"
C.yield lastChunkBS
in
\src -> req { NC.requestHeaders = finalReqHeaders
, NC.requestBody =
NC.requestBodySource signedContentLength $
src C..| signerConduit numParts lastPSize seedSignature
}
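The chunk-framing arithmetic behind signedStreamLength can be checked in isolation. The following is a dependency-free sketch (base only); chunkSize stands in for the SDK's chunkSizeConstant, whose actual value is not shown here — 64 KiB is an assumption for illustration.

```haskell
import Data.Bool (bool)
import Data.Int (Int64)

-- Hypothetical stand-in for the SDK's chunkSizeConstant (assumed 64 KiB).
chunkSize :: Int64
chunkSize = 64 * 1024

-- Number of hex digits needed to print n (0 takes zero digits, as in base16Len).
base16Len :: Int64 -> Int64
base16Len n
  | n == 0 = 0
  | otherwise = 1 + base16Len (n `div` 16)

-- Length of one encoded chunk:
-- hex(size) + ";chunk-signature=" (17) + signature (64) + "\r\n" (2) + data + "\r\n" (2)
encodedChunkLen :: Int64 -> Int64
encodedChunkLen csz = base16Len csz + 17 + 64 + 2 + csz + 2

-- Total signed body size ("content-length") for a payload of dataLen bytes:
-- full chunks, an optional short last chunk, and the empty final chunk.
signedStreamLength :: Int64 -> Int64
signedStreamLength dataLen =
  let (numChunks, lastChunkLen) = quotRem dataLen chunkSize
      fullChunkSize = encodedChunkLen chunkSize
      lastChunkSize = bool 0 (encodedChunkLen lastChunkLen) (lastChunkLen > 0)
      -- final chunk: "0" (1) + ";chunk-signature=" (17) + sig (64) + two "\r\n" (4)
      finalChunkSize = 1 + 17 + 64 + 2 + 2
   in numChunks * fullChunkSize + lastChunkSize + finalChunkSize
```

Even an empty payload yields 86 bytes here, since the terminating zero-length signature chunk is always sent.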

View File

@@ -1,5 +1,5 @@
--
-- MinIO Haskell SDK, (C) 2017-2023 MinIO, Inc.
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
@@ -16,71 +16,77 @@
module Network.Minio.Utils where

import qualified Conduit as C
import Control.Monad.IO.Unlift (MonadUnliftIO)
import qualified Control.Monad.Trans.Resource as R
import qualified Data.ByteString as B
import qualified Data.ByteString.Lazy as LB
import Data.CaseInsensitive (mk, original)
import qualified Data.Conduit.Binary as CB
import qualified Data.HashMap.Strict as H
import qualified Data.Text as T
import Data.Text.Read (decimal)
import Data.Time
  ( defaultTimeLocale,
    parseTimeM,
    rfc822DateFormat,
  )
import Lib.Prelude
import Network.HTTP.Conduit (Response)
import qualified Network.HTTP.Conduit as NC
import qualified Network.HTTP.Types as HT
import qualified Network.HTTP.Types.Header as Hdr
import Network.Minio.Data.ByteString
import Network.Minio.JsonParser (parseErrResponseJSON)
import Network.Minio.XmlCommon (parseErrResponse)
import qualified System.IO as IO
import qualified UnliftIO as U
import qualified UnliftIO.Async as A

allocateReadFile ::
  (MonadUnliftIO m, R.MonadResource m) =>
  FilePath ->
  m (R.ReleaseKey, Handle)
allocateReadFile fp = do
  (rk, hdlE) <- R.allocate (openReadFile fp) cleanup
  either (\(e :: U.IOException) -> throwIO e) (return . (rk,)) hdlE
  where
    openReadFile f = U.try $ IO.openBinaryFile f IO.ReadMode
    cleanup = either (const $ return ()) IO.hClose

-- | Queries the file size from the handle. Catches any file operation
-- exceptions and returns Nothing instead.
getFileSize ::
  (MonadUnliftIO m) =>
  Handle ->
  m (Maybe Int64)
getFileSize h = do
  resE <- liftIO $ try $ fromIntegral <$> IO.hFileSize h
  case resE of
    Left (_ :: U.IOException) -> return Nothing
    Right s -> return $ Just s

-- | Queries if handle is seekable. Catches any file operation
-- exceptions and return False instead.
isHandleSeekable ::
  (R.MonadResource m) =>
  Handle ->
  m Bool
isHandleSeekable h = do
  resE <- liftIO $ try $ IO.hIsSeekable h
  case resE of
    Left (_ :: U.IOException) -> return False
    Right v -> return v

-- | Helper function that opens a handle to the filepath and performs
-- the given action on it. Exceptions of type MError are caught and
-- returned - both during file handle allocation and when the action
-- is run.
withNewHandle ::
  (MonadUnliftIO m, R.MonadResource m) =>
  FilePath ->
  (Handle -> m a) ->
  m (Either U.IOException a)
withNewHandle fp fileAction = do
  -- opening a handle can throw MError exception.
  handleE <- try $ allocateReadFile fp
@@ -94,34 +100,61 @@ withNewHandle fp fileAction = do
  return resE

mkHeaderFromPairs :: [(ByteString, ByteString)] -> [HT.Header]
mkHeaderFromPairs = map (first mk)

lookupHeader :: HT.HeaderName -> [HT.Header] -> Maybe ByteString
lookupHeader hdr = listToMaybe . map snd . filter (\(h, _) -> h == hdr)

getETagHeader :: [HT.Header] -> Maybe Text
getETagHeader hs = decodeUtf8Lenient <$> lookupHeader Hdr.hETag hs

getMetadata :: [HT.Header] -> [(Text, Text)]
getMetadata =
  map (\(x, y) -> (decodeUtf8Lenient $ original x, decodeUtf8Lenient $ stripBS y))

-- | If the given header name has the @X-Amz-Meta-@ prefix, it is
-- stripped and a Just is returned.
userMetadataHeaderNameMaybe :: Text -> Maybe Text
userMetadataHeaderNameMaybe k =
  let prefix = T.toCaseFold "X-Amz-Meta-"
      n = T.length prefix
   in if T.toCaseFold (T.take n k) == prefix
        then Just (T.drop n k)
        else Nothing

toMaybeMetadataHeader :: (Text, Text) -> Maybe (Text, Text)
toMaybeMetadataHeader (k, v) =
  (,v) <$> userMetadataHeaderNameMaybe k

getNonUserMetadataMap :: [(Text, Text)] -> H.HashMap Text Text
getNonUserMetadataMap =
  H.fromList
    . filter
      ( isNothing
          . userMetadataHeaderNameMaybe
          . fst
      )

addXAmzMetaPrefix :: Text -> Text
addXAmzMetaPrefix s
  | isJust (userMetadataHeaderNameMaybe s) = s
  | otherwise = "X-Amz-Meta-" <> s

mkHeaderFromMetadata :: [(Text, Text)] -> [HT.Header]
mkHeaderFromMetadata = map (\(x, y) -> (mk $ encodeUtf8 $ addXAmzMetaPrefix x, encodeUtf8 y))

-- | This function collects all headers starting with `x-amz-meta-`
-- and strips off this prefix, and returns a map.
getUserMetadataMap :: [(Text, Text)] -> H.HashMap Text Text
getUserMetadataMap =
  H.fromList
    . mapMaybe toMaybeMetadataHeader

getHostHeader :: (ByteString, Int) -> ByteString
getHostHeader (host_, port_) =
  if port_ == 80 || port_ == 443
    then host_
    else host_ <> ":" <> show port_

getLastModifiedHeader :: [HT.Header] -> Maybe UTCTime
getLastModifiedHeader hs = do
@@ -131,19 +164,21 @@ getLastModifiedHeader hs = do
getContentLength :: [HT.Header] -> Maybe Int64
getContentLength hs = do
  nbs <- decodeUtf8Lenient <$> lookupHeader Hdr.hContentLength hs
  fst <$> either (const Nothing) Just (decimal nbs)

decodeUtf8Lenient :: ByteString -> Text
decodeUtf8Lenient = decodeUtf8With lenientDecode

isSuccessStatus :: HT.Status -> Bool
isSuccessStatus sts =
  let s = HT.statusCode sts
   in (s >= 200 && s < 300)

httpLbs ::
  (MonadIO m) =>
  NC.Request ->
  NC.Manager ->
  m (NC.Response LByteString)
httpLbs req mgr = do
  respE <- liftIO $ tryHttpEx $ NC.httpLbs req mgr
  resp <- either throwIO return respE
@@ -155,21 +190,26 @@ httpLbs req mgr = do
    Just "application/json" -> do
      sErr <- parseErrResponseJSON $ NC.responseBody resp
      throwIO sErr
    _ ->
      throwIO $
        NC.HttpExceptionRequest req $
          NC.StatusCodeException (void resp) (showBS resp)
  return resp
  where
    tryHttpEx ::
      IO (NC.Response LByteString) ->
      IO (Either NC.HttpException (NC.Response LByteString))
    tryHttpEx = try
    contentTypeMay resp =
      lookupHeader Hdr.hContentType $
        NC.responseHeaders resp

http ::
  (MonadUnliftIO m, R.MonadResource m) =>
  NC.Request ->
  NC.Manager ->
  m (Response (C.ConduitT () ByteString m ()))
http req mgr = do
  respE <- tryHttpEx $ NC.http req mgr
  resp <- either throwIO return respE
@@ -179,25 +219,31 @@ http req mgr = do
        respBody <- C.connect (NC.responseBody resp) CB.sinkLbs
        sErr <- parseErrResponse respBody
        throwIO sErr
      _ -> do
        content <- LB.toStrict . NC.responseBody <$> NC.lbsResponse resp
        throwIO $
          NC.HttpExceptionRequest req $
            NC.StatusCodeException (void resp) content
  return resp
  where
    tryHttpEx ::
      (MonadUnliftIO m) =>
      m a ->
      m (Either NC.HttpException a)
    tryHttpEx = try
    contentTypeMay resp =
      lookupHeader Hdr.hContentType $
        NC.responseHeaders resp

-- Similar to mapConcurrently but limits the number of threads that
-- can run using a quantity semaphore.
limitedMapConcurrently ::
  (MonadUnliftIO m) =>
  Int ->
  (t -> m a) ->
  [t] ->
  m [a]
limitedMapConcurrently 0 _ _ = return []
limitedMapConcurrently count act args = do
  t' <- U.newTVarIO count
@@ -206,17 +252,15 @@ limitedMapConcurrently count act args = do
  where
    wThread t arg =
      U.bracket_ (waitSem t) (signalSem t) $ act arg
    -- quantity semaphore implementation using TVar
    waitSem t = U.atomically $ do
      v <- U.readTVar t
      if v > 0
        then U.writeTVar t (v - 1)
        else U.retrySTM
    signalSem t = U.atomically $ do
      v <- U.readTVar t
      U.writeTVar t (v + 1)

-- helper function to 'drop' empty optional parameter.
mkQuery :: Text -> Maybe Text -> Maybe (Text, Text)
@@ -225,7 +269,7 @@ mkQuery k mv = (k,) <$> mv

-- helper function to build query parameters that are optional.
-- don't use it with mandatory query params with empty value.
mkOptionalParams :: [(Text, Maybe Text)] -> HT.Query
mkOptionalParams params = HT.toQuery $ uncurry mkQuery <$> params
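The TVar quantity semaphore used by limitedMapConcurrently can be approximated with base's QSem. The sketch below is dependency-free and simplified: the name limitedMapConcurrentlySketch is mine, and unlike the async-based original it does not propagate exceptions thrown inside the action (takeMVar would block).

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Concurrent.QSem (newQSem, signalQSem, waitQSem)
import Control.Exception (bracket_)

-- Run the action over every input, with at most 'limit' running at once.
-- QSem plays the role of the TVar quantity semaphore in the SDK version.
limitedMapConcurrentlySketch :: Int -> (a -> IO b) -> [a] -> IO [b]
limitedMapConcurrentlySketch limit act xs = do
  sem <- newQSem limit
  vars <-
    mapM
      ( \x -> do
          v <- newEmptyMVar
          -- Each worker waits for a semaphore unit, runs, then signals.
          _ <- forkIO $ bracket_ (waitQSem sem) (signalQSem sem) (act x >>= putMVar v)
          pure v
      )
      xs
  -- Collect results in input order.
  mapM takeMVar vars
```

All threads are forked up front; the semaphore only bounds how many are doing work at once, which matches the "limits the number of threads that can run" comment above.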
-- | Conduit that rechunks bytestrings into the given chunk
-- lengths. Stops after given chunk lengths are yielded. Stops if
@@ -233,41 +277,9 @@ mkOptionalParams params = HT.toQuery $ uncurry mkQuery <$> params
-- received. Does not throw any errors.
chunkBSConduit :: (Monad m) => [Int] -> C.ConduitM ByteString ByteString m ()
chunkBSConduit [] = return ()
chunkBSConduit (s : ss) = do
  bs <- fmap LB.toStrict $ C.takeCE s C..| C.sinkLazy
  if
    | B.length bs == s -> C.yield bs >> chunkBSConduit ss
    | B.length bs > 0 -> C.yield bs
    | otherwise -> return ()
-
--- | Select part sizes - the logic is that the minimum part-size will
--- be 64MiB.
-selectPartSizes :: Int64 -> [(PartNumber, Int64, Int64)]
-selectPartSizes size = uncurry (List.zip3 [1..]) $
-    List.unzip $ loop 0 size
-  where
-    ceil :: Double -> Int64
-    ceil = ceiling
-    partSize = max minPartSize (ceil $ fromIntegral size /
-                                fromIntegral maxMultipartParts)
-    m = fromIntegral partSize
-    loop st sz
-      | st > sz = []
-      | st + m >= sz = [(st, sz - st)]
-      | otherwise = (st, m) : loop (st + m) sz
-
-lookupRegionCache :: Bucket -> Minio (Maybe Region)
-lookupRegionCache b = do
-  rMVar <- asks mcRegionMap
-  rMap <- UM.readMVar rMVar
-  return $ H.lookup b rMap
-
-addToRegionCache :: Bucket -> Region -> Minio ()
-addToRegionCache b region = do
-  rMVar <- asks mcRegionMap
-  UM.modifyMVar_ rMVar $ return . H.insert b region
-
-deleteFromRegionCache :: Bucket -> Minio ()
-deleteFromRegionCache b = do
-  rMVar <- asks mcRegionMap
-  UM.modifyMVar_ rMVar $ return . H.delete b
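The case-insensitive @X-Amz-Meta-@ prefix handling (userMetadataHeaderNameMaybe / addXAmzMetaPrefix) can be sketched over String to stay dependency-free; the SDK versions above use Data.Text, and the names below are my own.

```haskell
import Data.Char (toLower)
import Data.List (isPrefixOf)
import Data.Maybe (isJust)

-- Case-insensitively strip the "X-Amz-Meta-" prefix, returning Just the
-- user metadata key on a match (String sketch of userMetadataHeaderNameMaybe).
userMetaMaybe :: String -> Maybe String
userMetaMaybe k
  | map toLower prefix `isPrefixOf` map toLower k = Just (drop (length prefix) k)
  | otherwise = Nothing
  where
    prefix = "X-Amz-Meta-"

-- Add the prefix only when it is not already present, case-insensitively
-- (String sketch of addXAmzMetaPrefix).
addPrefix :: String -> String
addPrefix s
  | isJust (userMetaMaybe s) = s
  | otherwise = "X-Amz-Meta-" ++ s
```

Checking for an existing prefix before adding one keeps addPrefix idempotent, so round-tripping headers through getMetadata and mkHeaderFromMetadata does not stack prefixes.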

View File

@@ -0,0 +1,65 @@
--
-- MinIO Haskell SDK, (C) 2017-2023 MinIO, Inc.
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
-- You may obtain a copy of the License at
--
-- http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.
--
module Network.Minio.XmlCommon where
import qualified Data.Text as T
import Data.Text.Read (decimal)
import Data.Time (UTCTime)
import Data.Time.Format.ISO8601 (iso8601ParseM)
import Lib.Prelude (throwIO)
import Network.Minio.Errors
import Text.XML (Name (Name), def, parseLBS)
import Text.XML.Cursor (Axis, Cursor, content, element, fromDocument, laxElement, ($/), (&/))
s3Name :: Text -> Text -> Name
s3Name ns s = Name s (Just ns) Nothing
uncurry4 :: (a -> b -> c -> d -> e) -> (a, b, c, d) -> e
uncurry4 f (a, b, c, d) = f a b c d
uncurry6 :: (a -> b -> c -> d -> e -> f -> g) -> (a, b, c, d, e, f) -> g
uncurry6 f (a, b, c, d, e, g) = f a b c d e g
-- | Parse time strings from XML
parseS3XMLTime :: (MonadIO m) => Text -> m UTCTime
parseS3XMLTime t =
maybe (throwIO $ MErrVXmlParse $ "timestamp parse failure: " <> t) return $
iso8601ParseM $
toString t
parseDecimal :: (MonadIO m, Integral a) => Text -> m a
parseDecimal numStr =
either (throwIO . MErrVXmlParse . show) return $
fst <$> decimal numStr
parseDecimals :: (MonadIO m, Integral a) => [Text] -> m [a]
parseDecimals numStr = forM numStr parseDecimal
s3Elem :: Text -> Text -> Axis
s3Elem ns = element . s3Name ns
parseRoot :: (MonadIO m) => LByteString -> m Cursor
parseRoot =
either (throwIO . MErrVXmlParse . show) (return . fromDocument)
. parseLBS def
parseErrResponse :: (MonadIO m) => LByteString -> m ServiceErr
parseErrResponse xmldata = do
r <- parseRoot xmldata
let code = T.concat $ r $/ laxElement "Code" &/ content
message = T.concat $ r $/ laxElement "Message" &/ content
return $ toServiceErr code message
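parseS3XMLTime above reduces to iso8601ParseM from the time package (which ships with GHC). A minimal sketch, keeping Maybe instead of throwing MErrVXmlParse as the SDK does (parseS3Time is my own name):

```haskell
import Data.Time (UTCTime)
import Data.Time.Format.ISO8601 (iso8601ParseM)

-- Parse an S3-style ISO8601 timestamp such as "2023-04-26T11:14:38Z".
-- iso8601ParseM works in any MonadFail; fixing the result to Maybe UTCTime
-- turns a parse failure into Nothing.
parseS3Time :: String -> Maybe UTCTime
parseS3Time = iso8601ParseM
```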

View File

@@ -1,5 +1,5 @@
--
-- MinIO Haskell SDK, (C) 2017-2023 MinIO, Inc.
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
@@ -15,89 +15,112 @@
--
module Network.Minio.XmlGenerator
  ( mkCreateBucketConfig,
    mkCompleteMultipartUploadRequest,
    mkPutNotificationRequest,
    mkSelectRequest,
  )
where

import qualified Data.ByteString.Lazy as LBS
import qualified Data.Text as T
import Network.Minio.Data
import Network.Minio.XmlCommon
import Text.XML

-- | Create a bucketConfig request body XML
mkCreateBucketConfig :: Text -> Region -> ByteString
mkCreateBucketConfig ns location = LBS.toStrict $ renderLBS def bucketConfig
  where
    s3Element n = Element (s3Name ns n) mempty
    root =
      s3Element
        "CreateBucketConfiguration"
        [ NodeElement $
            s3Element
              "LocationConstraint"
              [NodeContent location]
        ]
    bucketConfig = Document (Prologue [] Nothing []) root []

-- | Create a completeMultipartUpload request body XML
mkCompleteMultipartUploadRequest :: [PartTuple] -> ByteString
mkCompleteMultipartUploadRequest partInfo =
  LBS.toStrict $ renderLBS def cmur
  where
    root =
      Element "CompleteMultipartUpload" mempty $
        map (NodeElement . mkPart) partInfo
    mkPart (n, etag) =
      Element
        "Part"
        mempty
        [ NodeElement $
            Element
              "PartNumber"
              mempty
              [NodeContent $ T.pack $ show n],
          NodeElement $
            Element
              "ETag"
              mempty
              [NodeContent etag]
        ]
    cmur = Document (Prologue [] Nothing []) root []

-- Simplified XML representation without element attributes.
data XNode
  = XNode Text [XNode]
  | XLeaf Text Text
  deriving stock (Eq, Show)

toXML :: Text -> XNode -> ByteString
toXML ns node =
  LBS.toStrict $
    renderLBS def $
      Document (Prologue [] Nothing []) (xmlNode node) []
  where
    xmlNode :: XNode -> Element
    xmlNode (XNode name nodes) =
      Element (s3Name ns name) mempty $
        map (NodeElement . xmlNode) nodes
    xmlNode (XLeaf name content) =
      Element
        (s3Name ns name)
        mempty
        [NodeContent content]

class ToXNode a where
  toXNode :: a -> XNode

instance ToXNode Event where
  toXNode = XLeaf "Event" . toText

instance ToXNode Notification where
  toXNode (Notification qc tc lc) =
    XNode "NotificationConfiguration" $
      map (toXNodesWithArnName "QueueConfiguration" "Queue") qc
        ++ map (toXNodesWithArnName "TopicConfiguration" "Topic") tc
        ++ map (toXNodesWithArnName "CloudFunctionConfiguration" "CloudFunction") lc

toXNodesWithArnName :: Text -> Text -> NotificationConfig -> XNode
toXNodesWithArnName eltName arnName (NotificationConfig itemId arn events fRule) =
  XNode eltName $
    [XLeaf "Id" itemId, XLeaf arnName arn]
      ++ map toXNode events
      ++ [toXNode fRule]

instance ToXNode Filter where
  toXNode (Filter (FilterKey (FilterRules rules))) =
    XNode "Filter" [XNode "S3Key" (map getFRXNode rules)]

getFRXNode :: FilterRule -> XNode
getFRXNode (FilterRule n v) =
  XNode
    "FilterRule"
    [ XLeaf "Name" n,
      XLeaf "Value" v
    ]

mkPutNotificationRequest :: Text -> Notification -> ByteString
mkPutNotificationRequest ns = toXML ns . toXNode
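The simplified XNode tree above maps directly onto nested XML elements. A tiny illustrative renderer over String makes the shape concrete; this is my own sketch with no namespaces and no escaping, whereas the SDK renders through xml-conduit's renderLBS.

```haskell
-- Simplified XML tree, mirroring the SDK's XNode (Text replaced by String).
data XNode
  = XNode String [XNode] -- element with child elements
  | XLeaf String String -- element with text content

-- Render a tree as nested tags. No attributes, namespaces, or escaping:
-- just enough to see how toXNode structures a notification configuration.
render :: XNode -> String
render (XLeaf name content) =
  "<" ++ name ++ ">" ++ content ++ "</" ++ name ++ ">"
render (XNode name kids) =
  "<" ++ name ++ ">" ++ concatMap render kids ++ "</" ++ name ++ ">"
```

For example, the Filter instance above produces a Filter element wrapping an S3Key element wrapping FilterRule elements, each with Name and Value leaves.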
@ -106,60 +129,103 @@ mkSelectRequest :: SelectRequest -> ByteString
mkSelectRequest r = LBS.toStrict $ renderLBS def sr
  where
    sr = Document (Prologue [] Nothing []) root []
    root =
      Element "SelectRequest" mempty $
        [ NodeElement
            ( Element
                "Expression"
                mempty
                [NodeContent $ srExpression r]
            ),
          NodeElement
            ( Element
                "ExpressionType"
                mempty
                [NodeContent $ show $ srExpressionType r]
            ),
          NodeElement
            ( Element "InputSerialization" mempty $
                inputSerializationNodes $
                  srInputSerialization r
            ),
          NodeElement
            ( Element "OutputSerialization" mempty $
                outputSerializationNodes $
                  srOutputSerialization r
            )
        ]
          ++ maybe [] reqProgElem (srRequestProgressEnabled r)
    reqProgElem enabled =
      [ NodeElement
          ( Element
              "RequestProgress"
              mempty
              [ NodeElement
                  ( Element
                      "Enabled"
                      mempty
                      [ NodeContent
                          (if enabled then "TRUE" else "FALSE")
                      ]
                  )
              ]
          )
      ]
    inputSerializationNodes is =
      comprTypeNode (isCompressionType is)
        ++ [NodeElement $ formatNode (isFormatInfo is)]
    comprTypeNode (Just c) =
      [ NodeElement $
          Element
            "CompressionType"
            mempty
            [ NodeContent $ case c of
                CompressionTypeNone -> "NONE"
                CompressionTypeGzip -> "GZIP"
                CompressionTypeBzip2 -> "BZIP2"
            ]
      ]
    comprTypeNode Nothing = []
    kvElement (k, v) = Element (Name k Nothing Nothing) mempty [NodeContent v]
    formatNode (InputFormatCSV c) =
      Element
        "CSV"
        mempty
        (map (NodeElement . kvElement) (csvPropsList c))
    formatNode (InputFormatJSON p) =
      Element
        "JSON"
        mempty
        [ NodeElement
            ( Element
                "Type"
                mempty
                [ NodeContent $ case jsonipType p of
                    JSONTypeDocument -> "DOCUMENT"
                    JSONTypeLines -> "LINES"
                ]
            )
        ]
    formatNode InputFormatParquet = Element "Parquet" mempty []
    outputSerializationNodes (OutputSerializationJSON j) =
      [ NodeElement
          ( Element "JSON" mempty $
              rdElem $
                jsonopRecordDelimiter j
          )
      ]
    outputSerializationNodes (OutputSerializationCSV c) =
      [ NodeElement $
          Element
            "CSV"
            mempty
            (map (NodeElement . kvElement) (csvPropsList c))
      ]
    rdElem Nothing = []
    rdElem (Just t) =
      [ NodeElement $
          Element
            "RecordDelimiter"
            mempty
            [NodeContent t]
      ]
@@ -1,5 +1,5 @@
--
-- MinIO Haskell SDK, (C) 2017-2023 MinIO, Inc.
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
@@ -15,75 +15,38 @@
--
module Network.Minio.XmlParser
  ( parseListBuckets,
    parseLocation,
    parseNewMultipartUpload,
    parseCompleteMultipartUploadResponse,
    parseCopyObjectResponse,
    parseListObjectsResponse,
    parseListObjectsV1Response,
    parseListUploadsResponse,
    parseListPartsResponse,
    parseErrResponse,
    parseNotification,
    parseSelectProgress,
  )
where
import qualified Data.ByteString.Lazy as LB
import qualified Data.HashMap.Strict as H
import Data.List (zip4, zip6)
import qualified Data.Text as T
import Data.Time
import Network.Minio.Data
import Network.Minio.XmlCommon
import Text.XML.Cursor hiding (bool)
-- | Parse the response XML of a list buckets call.
parseListBuckets :: (MonadReader env m, HasSvcNamespace env, MonadIO m) => LByteString -> m [BucketInfo]
parseListBuckets xmldata = do
  r <- parseRoot xmldata
  ns <- asks getSvcNamespace
  let s3Elem' = s3Elem ns
      names = r $// s3Elem' "Bucket" &// s3Elem' "Name" &/ content
      timeStrings = r $// s3Elem' "Bucket" &// s3Elem' "CreationDate" &/ content
  times <- mapM parseS3XMLTime timeStrings
  return $ zipWith BucketInfo names times
@@ -116,41 +79,38 @@ parseCopyObjectResponse :: (MonadReader env m, HasSvcNamespace env, MonadIO m) =
parseCopyObjectResponse xmldata = do
  r <- parseRoot xmldata
  ns <- asks getSvcNamespace
  let s3Elem' = s3Elem ns
      mtimeStr = T.concat $ r $// s3Elem' "LastModified" &/ content
  mtime <- parseS3XMLTime mtimeStr
  return (T.concat $ r $// s3Elem' "ETag" &/ content, mtime)
-- | Parse the response XML of a list objects v1 call.
parseListObjectsV1Response ::
  (MonadReader env m, HasSvcNamespace env, MonadIO m) =>
  LByteString ->
  m ListObjectsV1Result
parseListObjectsV1Response xmldata = do
  r <- parseRoot xmldata
  ns <- asks getSvcNamespace
  let s3Elem' = s3Elem ns
      hasMore = ["true"] == (r $/ s3Elem' "IsTruncated" &/ content)
      nextMarker = listToMaybe $ r $/ s3Elem' "NextMarker" &/ content
      prefixes = r $/ s3Elem' "CommonPrefixes" &/ s3Elem' "Prefix" &/ content
      keys = r $/ s3Elem' "Contents" &/ s3Elem' "Key" &/ content
      modTimeStr = r $/ s3Elem' "Contents" &/ s3Elem' "LastModified" &/ content
      etagsList = r $/ s3Elem' "Contents" &/ s3Elem' "ETag" &/ content
      -- if the response xml contains empty etags, fill them with as
      -- many empty Text values for the zip6 below to work as intended.
      etags = etagsList ++ repeat ""
      sizeStr = r $/ s3Elem' "Contents" &/ s3Elem' "Size" &/ content
  modTimes <- mapM parseS3XMLTime modTimeStr
  sizes <- parseDecimals sizeStr
  let objects =
        map (uncurry6 ObjectInfo) $
          zip6 keys modTimes etags sizes (repeat H.empty) (repeat H.empty)
  return $ ListObjectsV1Result hasMore nextMarker objects prefixes
@@ -159,28 +119,24 @@ parseListObjectsResponse :: (MonadReader env m, HasSvcNamespace env, MonadIO m)
parseListObjectsResponse xmldata = do
  r <- parseRoot xmldata
  ns <- asks getSvcNamespace
  let s3Elem' = s3Elem ns
      hasMore = ["true"] == (r $/ s3Elem' "IsTruncated" &/ content)
      nextToken = listToMaybe $ r $/ s3Elem' "NextContinuationToken" &/ content
      prefixes = r $/ s3Elem' "CommonPrefixes" &/ s3Elem' "Prefix" &/ content
      keys = r $/ s3Elem' "Contents" &/ s3Elem' "Key" &/ content
      modTimeStr = r $/ s3Elem' "Contents" &/ s3Elem' "LastModified" &/ content
      etagsList = r $/ s3Elem' "Contents" &/ s3Elem' "ETag" &/ content
      -- if the response xml contains empty etags, fill them with as
      -- many empty Text values for the zip6 below to work as intended.
      etags = etagsList ++ repeat ""
      sizeStr = r $/ s3Elem' "Contents" &/ s3Elem' "Size" &/ content
  modTimes <- mapM parseS3XMLTime modTimeStr
  sizes <- parseDecimals sizeStr
  let objects =
        map (uncurry6 ObjectInfo) $
          zip6 keys modTimes etags sizes (repeat H.empty) (repeat H.empty)
  return $ ListObjectsResult hasMore nextToken objects prefixes
@@ -189,20 +145,18 @@ parseListUploadsResponse :: (MonadReader env m, HasSvcNamespace env, MonadIO m)
parseListUploadsResponse xmldata = do
  r <- parseRoot xmldata
  ns <- asks getSvcNamespace
  let s3Elem' = s3Elem ns
      hasMore = ["true"] == (r $/ s3Elem' "IsTruncated" &/ content)
      prefixes = r $/ s3Elem' "CommonPrefixes" &/ s3Elem' "Prefix" &/ content
      nextKey = listToMaybe $ r $/ s3Elem' "NextKeyMarker" &/ content
      nextUpload = listToMaybe $ r $/ s3Elem' "NextUploadIdMarker" &/ content
      uploadKeys = r $/ s3Elem' "Upload" &/ s3Elem' "Key" &/ content
      uploadIds = r $/ s3Elem' "Upload" &/ s3Elem' "UploadId" &/ content
      uploadInitTimeStr = r $/ s3Elem' "Upload" &/ s3Elem' "Initiated" &/ content
  uploadInitTimes <- mapM parseS3XMLTime uploadInitTimeStr
  let uploads = zip3 uploadKeys uploadIds uploadInitTimes
  return $ ListUploadsResult hasMore nextKey nextUpload uploads prefixes
@@ -210,34 +164,25 @@ parseListPartsResponse :: (MonadReader env m, HasSvcNamespace env, MonadIO m) =>
parseListPartsResponse xmldata = do
  r <- parseRoot xmldata
  ns <- asks getSvcNamespace
  let s3Elem' = s3Elem ns
      hasMore = ["true"] == (r $/ s3Elem' "IsTruncated" &/ content)
      nextPartNumStr = listToMaybe $ r $/ s3Elem' "NextPartNumberMarker" &/ content
      partNumberStr = r $/ s3Elem' "Part" &/ s3Elem' "PartNumber" &/ content
      partModTimeStr = r $/ s3Elem' "Part" &/ s3Elem' "LastModified" &/ content
      partETags = r $/ s3Elem' "Part" &/ s3Elem' "ETag" &/ content
      partSizeStr = r $/ s3Elem' "Part" &/ s3Elem' "Size" &/ content
  partModTimes <- mapM parseS3XMLTime partModTimeStr
  partSizes <- parseDecimals partSizeStr
  partNumbers <- parseDecimals partNumberStr
  nextPartNum <- parseDecimals $ maybeToList nextPartNumStr
  let partInfos =
        map (uncurry4 ObjectPartInfo) $
          zip4 partNumbers partETags partSizes partModTimes
  return $ ListPartsResult hasMore (listToMaybe nextPartNum) partInfos

parseNotification :: (MonadReader env m, HasSvcNamespace env, MonadIO m) => LByteString -> m Notification
parseNotification xmldata = do
  r <- parseRoot xmldata
@@ -246,32 +191,40 @@ parseNotification xmldata = do
      qcfg = map node $ r $/ s3Elem' "QueueConfiguration"
      tcfg = map node $ r $/ s3Elem' "TopicConfiguration"
      lcfg = map node $ r $/ s3Elem' "CloudFunctionConfiguration"
  Notification
    <$> mapM (parseNode ns "Queue") qcfg
    <*> mapM (parseNode ns "Topic") tcfg
    <*> mapM (parseNode ns "CloudFunction") lcfg
  where
    getFilterRule ns c =
      let name = T.concat $ c $/ s3Elem ns "Name" &/ content
          value = T.concat $ c $/ s3Elem ns "Value" &/ content
       in FilterRule name value
    parseNode ns arnName nodeData = do
      let c = fromNode nodeData
          itemId = T.concat $ c $/ s3Elem ns "Id" &/ content
          arn = T.concat $ c $/ s3Elem ns arnName &/ content
          events = mapMaybe textToEvent (c $/ s3Elem ns "Event" &/ content)
          rules =
            c
              $/ s3Elem ns "Filter"
              &/ s3Elem ns "S3Key"
              &/ s3Elem ns "FilterRule"
              &| getFilterRule ns
      return $
        NotificationConfig
          itemId
          arn
          events
          (Filter $ FilterKey $ FilterRules rules)
parseSelectProgress :: (MonadIO m) => ByteString -> m Progress
parseSelectProgress xmldata = do
  r <- parseRoot $ LB.fromStrict xmldata
  let bScanned = T.concat $ r $/ element "BytesScanned" &/ content
      bProcessed = T.concat $ r $/ element "BytesProcessed" &/ content
      bReturned = T.concat $ r $/ element "BytesReturned" &/ content
  Progress
    <$> parseDecimal bScanned
    <*> parseDecimal bProcessed
    <*> parseDecimal bReturned
@@ -15,7 +15,7 @@
# resolver:
#   name: custom-snapshot
#   location: "./custom-snapshot.yaml"
resolver: lts-22.19

# User packages to be built.
# Various formats can be used as shown in the example below.
@@ -36,17 +36,17 @@ resolver: lts-13.1
# non-dependency (i.e. a user package), and its test suites and benchmarks
# will not be run. This is useful for tweaking upstream packages.
packages:
  - "."
# Dependency packages to be pulled from upstream that are not in the resolver
# (e.g., acme-missiles-0.3)
extra-deps:
  - crypton-connection-0.3.2
# Override default flag values for local packages and extra-deps
flags: {}
# Extra package databases containing global packages
extra-package-dbs: []
# Control whether we use the GHC we find on the path
# system-ghc: true
#
stack.yaml.lock (new file, 19 lines)
@@ -0,0 +1,19 @@
# This file was autogenerated by Stack.
# You should not edit this file by hand.
# For more information, please see the documentation at:
# https://docs.haskellstack.org/en/stable/lock_files
packages:
- completed:
hackage: crypton-connection-0.3.2@sha256:c7937edc25ab022bcf167703f2ec5ab73b62908e545bb587d2aa42b33cd6f6cc,1581
pantry-tree:
sha256: f986ad29b008cbe5732606e9cde1897191c486a2f1f169a4cb75fd915bce397c
size: 394
original:
hackage: crypton-connection-0.3.2
snapshots:
- completed:
sha256: e5cac927cf7ccbd52aa41476baa68b88c564ee6ddc3bc573dbf4210069287fe7
size: 713340
url: https://raw.githubusercontent.com/commercialhaskell/stackage-snapshots/master/lts/22/19.yaml
original: lts-22.19
File diff suppressed because it is too large
@@ -15,88 +15,102 @@
--
module Network.Minio.API.Test
  ( bucketNameValidityTests,
    objectNameValidityTests,
    parseServerInfoJSONTest,
    parseHealStatusTest,
    parseHealStartRespTest,
  )
where

import Data.Aeson (eitherDecode)
import Network.Minio.API
import Network.Minio.AdminAPI
import Test.Tasty
import Test.Tasty.HUnit
assertBool' :: Bool -> Assertion
assertBool' = assertBool "Test failed!"
bucketNameValidityTests :: TestTree
bucketNameValidityTests =
  testGroup
    "Bucket Name Validity Tests"
    [ testCase "Too short 1" $ assertBool' $ not $ isValidBucketName "",
      testCase "Too short 2" $ assertBool' $ not $ isValidBucketName "ab",
      testCase "Too long 1" $ assertBool' $ not $ isValidBucketName "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
      testCase "Has upper case" $ assertBool' $ not $ isValidBucketName "ABCD",
      testCase "Has punctuation" $ assertBool' $ not $ isValidBucketName "abc,2",
      testCase "Has hyphen at end" $ assertBool' $ not $ isValidBucketName "abc-",
      testCase "Has consecutive dot" $ assertBool' $ not $ isValidBucketName "abck..eedg",
      testCase "Looks like IP" $ assertBool' $ not $ isValidBucketName "10.0.0.1",
      testCase "Valid bucket name 1" $ assertBool' $ isValidBucketName "abcd.pqeq.rea",
      testCase "Valid bucket name 2" $ assertBool' $ isValidBucketName "abcdedgh1d",
      testCase "Valid bucket name 3" $ assertBool' $ isValidBucketName "abc-de-dg-h1d"
    ]

objectNameValidityTests :: TestTree
objectNameValidityTests =
  testGroup
    "Object Name Validity Tests"
    [ testCase "Empty name" $ assertBool' $ not $ isValidObjectName "",
      testCase "Has unicode characters" $ assertBool' $ isValidObjectName "日本国"
    ]
parseServerInfoJSONTest :: TestTree
parseServerInfoJSONTest =
  testGroup "Parse MinIO Admin API ServerInfo JSON test" $
    map
      ( \(tName, tDesc, tfn, tVal) ->
          testCase tName $
            assertBool tDesc $
              tfn (eitherDecode tVal :: Either [Char] [ServerInfo])
      )
      testCases
  where
    testCases =
      [ ("FSBackend", "Verify server info json parsing for FS backend", isRight, fsJSON),
        ("Erasure Backend", "Verify server info json parsing for Erasure backend", isRight, erasureJSON),
        ("Unknown Backend", "Verify server info json parsing for invalid backend", isLeft, invalidJSON)
      ]
    fsJSON = "[{\"error\":\"\",\"addr\":\"192.168.1.218:9000\",\"data\":{\"storage\":{\"Used\":20530,\"Backend\":{\"Type\":1,\"OnlineDisks\":0,\"OfflineDisks\":0,\"StandardSCData\":0,\"StandardSCParity\":0,\"RRSCData\":0,\"RRSCParity\":0,\"Sets\":null}},\"network\":{\"transferred\":808,\"received\":1160},\"http\":{\"totalHEADs\":{\"count\":0,\"avgDuration\":\"0s\"},\"successHEADs\":{\"count\":0,\"avgDuration\":\"0s\"},\"totalGETs\":{\"count\":1,\"avgDuration\":\"0s\"},\"successGETs\":{\"count\":1,\"avgDuration\":\"0s\"},\"totalPUTs\":{\"count\":0,\"avgDuration\":\"0s\"},\"successPUTs\":{\"count\":0,\"avgDuration\":\"0s\"},\"totalPOSTs\":{\"count\":0,\"avgDuration\":\"0s\"},\"successPOSTs\":{\"count\":0,\"avgDuration\":\"0s\"},\"totalDELETEs\":{\"count\":0,\"avgDuration\":\"0s\"},\"successDELETEs\":{\"count\":0,\"avgDuration\":\"0s\"}},\"server\":{\"uptime\":5992503019270,\"version\":\"DEVELOPMENT.GOGET\",\"commitID\":\"DEVELOPMENT.GOGET\",\"region\":\"\",\"sqsARN\":[]}}}]"
    erasureJSON =
      "[{\"error\":\"\",\"addr\":\"192.168.1.218:9000\",\"data\":{\"storage\":{\"Used\":83084,\"Backend\":{\"Type\":2,\"OnlineDisks\":4,\"OfflineDisks\":0,\"StandardSCData\":2,\"StandardSCParity\":2,\"RRSCData\":2,\"RRSCParity\":2,\"Sets\":[[{\"uuid\":\"16ec6f2c-9197-4787-904a-36bb2c2683f8\",\"endpoint\":\"/tmp/1\",\"state\":\"ok\"},{\"uuid\":\"4052e086-ef99-4aa5-ae2b-8e27559432f6\",\"endpoint\":\"/tmp/2\",\"state\":\"ok\"},{\"uuid\":\"d0639950-ddd3-45b0-93ca-fd86f5d79f72\",\"endpoint\":\"/tmp/3\",\"state\":\"ok\"},{\"uuid\":\"30ec68c0-37e1-4592-82c1-26b143c0ac10\",\"endpoint\":\"/tmp/4\",\"state\":\"ok\"}]]}},\"network\":{\"transferred\":404,\"received\":0},\"http\":{\"totalHEADs\":{\"count\":0,\"avgDuration\":\"0s\"},\"successHEADs\":{\"count\":0,\"avgDuration\":\"0s\"},\"totalGETs\":{\"count\":0,\"avgDuration\":\"0s\"},\"successGETs\":{\"count\":0,\"avgDuration\":\"0s\"},\"totalPUTs\":{\"count\":0,\"avgDuration\":\"0s\"},\"successPUTs\":{\"count\":0,\"avgDuration\":\"0s\"},\"totalPOSTs\":{\"count\":0,\"avgDuration\":\"0s\"},\"successPOSTs\":{\"count\":0,\"avgDuration\":\"0s\"},\"totalDELETEs\":{\"count\":0,\"avgDuration\":\"0s\"},\"successDELETEs\":{\"count\":0,\"avgDuration\":\"0s\"}},\"server\":{\"uptime\":2738903073,\"version\":\"DEVELOPMENT.GOGET\",\"commitID\":\"DEVELOPMENT.GOGET\",\"region\":\"\",\"sqsARN\":[]}}}]"
    invalidJSON =
      "[{\"error\":\"\",\"addr\":\"192.168.1.218:9000\",\"data\":{\"storage\":{\"Used\":83084,\"Backend\":{\"Type\":42,\"OnlineDisks\":4,\"OfflineDisks\":0,\"StandardSCData\":2,\"StandardSCParity\":2,\"RRSCData\":2,\"RRSCParity\":2,\"Sets\":[[{\"uuid\":\"16ec6f2c-9197-4787-904a-36bb2c2683f8\",\"endpoint\":\"/tmp/1\",\"state\":\"ok\"},{\"uuid\":\"4052e086-ef99-4aa5-ae2b-8e27559432f6\",\"endpoint\":\"/tmp/2\",\"state\":\"ok\"},{\"uuid\":\"d0639950-ddd3-45b0-93ca-fd86f5d79f72\",\"endpoint\":\"/tmp/3\",\"state\":\"ok\"},{\"uuid\":\"30ec68c0-37e1-4592-82c1-26b143c0ac10\",\"endpoint\":\"/tmp/4\",\"state\":\"ok\"}]]}},\"network\":{\"transferred\":404,\"received\":0},\"http\":{\"totalHEADs\":{\"count\":0,\"avgDuration\":\"0s\"},\"successHEADs\":{\"count\":0,\"avgDuration\":\"0s\"},\"totalGETs\":{\"count\":0,\"avgDuration\":\"0s\"},\"successGETs\":{\"count\":0,\"avgDuration\":\"0s\"},\"totalPUTs\":{\"count\":0,\"avgDuration\":\"0s\"},\"successPUTs\":{\"count\":0,\"avgDuration\":\"0s\"},\"totalPOSTs\":{\"count\":0,\"avgDuration\":\"0s\"},\"successPOSTs\":{\"count\":0,\"avgDuration\":\"0s\"},\"totalDELETEs\":{\"count\":0,\"avgDuration\":\"0s\"},\"successDELETEs\":{\"count\":0,\"avgDuration\":\"0s\"}},\"server\":{\"uptime\":2738903073,\"version\":\"DEVELOPMENT.GOGET\",\"commitID\":\"DEVELOPMENT.GOGET\",\"region\":\"\",\"sqsARN\":[]}}}]"
parseHealStatusTest :: TestTree
parseHealStatusTest =
  testGroup "Parse MinIO Admin API HealStatus JSON test" $
    map
      ( \(tName, tDesc, tfn, tVal) ->
          testCase tName $
            assertBool tDesc $
              tfn (eitherDecode tVal :: Either [Char] HealStatus)
      )
      testCases
  where
    testCases =
      [ ("Good", "Verify heal result item for erasure backend", isRight, erasureJSON'),
        ("Corrupted", "Verify heal result item for erasure backend", isLeft, invalidJSON'),
        ("Incorrect Value", "Verify heal result item for erasure backend", isLeft, invalidItemType)
      ]
    erasureJSON' =
      "{\"Summary\":\"finished\",\"StartTime\":\"2018-06-05T08:09:47.644465513Z\",\"NumDisks\":4,\"Settings\":{\"recursive\":false,\"dryRun\":false},\"Items\":[{\"resultId\":1,\"type\":\"metadata\",\"bucket\":\"\",\"object\":\"\",\"detail\":\"disk-format\",\"diskCount\":4,\"setCount\":1,\"before\":{\"drives\":[{\"uuid\":\"c3487166-b8a4-481a-b1e7-fb9b249e2500\",\"endpoint\":\"/tmp/1\",\"state\":\"ok\"},{\"uuid\":\"55a6e787-184f-4e4c-bf09-03dcada658a9\",\"endpoint\":\"/tmp/2\",\"state\":\"ok\"},{\"uuid\":\"f035d8c3-fca1-4407-b89c-38c2bcf4a641\",\"endpoint\":\"/tmp/3\",\"state\":\"ok\"},{\"uuid\":\"4f8b79d3-db90-4c1d-87c2-35a28b0d9a13\",\"endpoint\":\"/tmp/4\",\"state\":\"ok\"}]},\"after\":{\"drives\":[{\"uuid\":\"c3487166-b8a4-481a-b1e7-fb9b249e2500\",\"endpoint\":\"/tmp/1\",\"state\":\"ok\"},{\"uuid\":\"55a6e787-184f-4e4c-bf09-03dcada658a9\",\"endpoint\":\"/tmp/2\",\"state\":\"ok\"},{\"uuid\":\"f035d8c3-fca1-4407-b89c-38c2bcf4a641\",\"endpoint\":\"/tmp/3\",\"state\":\"ok\"},{\"uuid\":\"4f8b79d3-db90-4c1d-87c2-35a28b0d9a13\",\"endpoint\":\"/tmp/4\",\"state\":\"ok\"}]},\"objectSize\":0}]}"
    invalidJSON' =
      "{\"Summary\":\"finished\",\"StartTime\":\"2018-06-05T08:09:47.644465513Z\",\"NumDisks\":4,\"Settings\":{\"recursive\":false,\"dryRun\":false},\"Items\":[{\"resultId\":1,\"type\":\"metadata\",\"bucket\":\"\",\"object\":\"\",\"detail\":\"disk-format\",\"diskCount\":4,\"setCount\":1,\"before\":{\"drives\":[{\"uuid\":\"c3487166-b8a4-481a-b1e7-fb9b249e2500\",\"endpoint\":\"/tmp/1\",\"state\":\"ok\"},{\"uuid\":\"55a6e787-184f-4e4c-bf09-03dcada658a9\",\"endpoint\":\"/tmp/2\",\"state\":\"ok\"},{\"uuid\":\"f035d8c3-fca1-4407-b89c-38c2bcf4a641\",\"endpoint\":\"/tmp/3\",\"state\":\"ok\"},{\"uuid\":\"4f8b79d3-db90-4c1d-87c2-35a28b0d9a13\",\"endpoint\":\"/tmp/4\",\"state\":\"ok\"}]},\"after\":{\"drives\":[{\"uuid\":\"c3487166-b8a4-481a-b1e7-fb9b249e2500\",\"endpoint\":\"/tmp/1\",\"state\":\"ok\"},{\"uuid\":\"55a6e787-184f-4e4c-bf09-03dcada658a9\",\"endpoint\":\"/tmp/2\",\"state\":\"ok\"},{\"uuid\":\"f035d8c3-fca1-4407-b89c-38c2bcf4a641\",\"endpoint\":\"/tmp/3\",\"state\":\"ok\"},{\"uuid\":\"4f8b79d3-db90-4c1d-87c2-35a28b0d9a13\",\"endpoint\":\"/tmp/4\",\"state\":\"ok\"}]},\"objectSize\":0}]"
    invalidItemType =
      "{\"Summary\":\"finished\",\"StartTime\":\"2018-06-05T08:09:47.644465513Z\",\"NumDisks\":4,\"Settings\":{\"recursive\":false,\"dryRun\":false},\"Items\":[{\"resultId\":1,\"type\":\"hello\",\"bucket\":\"\",\"object\":\"\",\"detail\":\"disk-format\",\"diskCount\":4,\"setCount\":1,\"before\":{\"drives\":[{\"uuid\":\"c3487166-b8a4-481a-b1e7-fb9b249e2500\",\"endpoint\":\"/tmp/1\",\"state\":\"ok\"},{\"uuid\":\"55a6e787-184f-4e4c-bf09-03dcada658a9\",\"endpoint\":\"/tmp/2\",\"state\":\"ok\"},{\"uuid\":\"f035d8c3-fca1-4407-b89c-38c2bcf4a641\",\"endpoint\":\"/tmp/3\",\"state\":\"ok\"},{\"uuid\":\"4f8b79d3-db90-4c1d-87c2-35a28b0d9a13\",\"endpoint\":\"/tmp/4\",\"state\":\"ok\"}]},\"after\":{\"drives\":[{\"uuid\":\"c3487166-b8a4-481a-b1e7-fb9b249e2500\",\"endpoint\":\"/tmp/1\",\"state\":\"ok\"},{\"uuid\":\"55a6e787-184f-4e4c-bf09-03dcada658a9\",\"endpoint\":\"/tmp/2\",\"state\":\"ok\"},{\"uuid\":\"f035d8c3-fca1-4407-b89c-38c2bcf4a641\",\"endpoint\":\"/tmp/3\",\"state\":\"ok\"},{\"uuid\":\"4f8b79d3-db90-4c1d-87c2-35a28b0d9a13\",\"endpoint\":\"/tmp/4\",\"state\":\"ok\"}]},\"objectSize\":0}]}"
parseHealStartRespTest :: TestTree
parseHealStartRespTest =
  testGroup "Parse MinIO Admin API HealStartResp JSON test" $
    map
      ( \(tName, tDesc, tfn, tVal) ->
          testCase tName $
            assertBool tDesc $
              tfn (eitherDecode tVal :: Either [Char] HealStartResp)
      )
      testCases
  where
    testCases =
      [ ("Good", "Verify heal start response for erasure backend", isRight, hsrJSON),
        ("Missing Token", "Verify heal start response for erasure backend", isLeft, missingTokenJSON)
      ]
    hsrJSON = "{\"clientToken\":\"3a3aca49-77dd-4b78-bba7-0978f119b23e\",\"clientAddress\":\"127.0.0.1\",\"startTime\":\"2018-06-05T08:09:47.644394493Z\"}"
    missingTokenJSON = "{\"clientAddress\":\"127.0.0.1\",\"startTime\":\"2018-06-05T08:09:47.644394493Z\"}"
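The tests above follow one table-driven pattern: decode a JSON sample with aeson's `eitherDecode` and assert only on which side of the `Either` the result lands. A minimal standalone sketch of that pattern, assuming the aeson package; the `Resp` record is a hypothetical stand-in for the library's `HealStartResp`:

```haskell
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (FromJSON, eitherDecode)
import qualified Data.ByteString.Lazy as LBS
import Data.Either (isLeft, isRight)
import GHC.Generics (Generic)

-- hypothetical stand-in for the library's HealStartResp
newtype Resp = Resp {clientToken :: String}
  deriving (Show, Generic)

instance FromJSON Resp

-- decode one sample and report whether the expected success/failure side matched
check :: (String, Bool, LBS.ByteString) -> IO ()
check (name, expectOk, input) = do
  let res = eitherDecode input :: Either String Resp
      ok = if expectOk then isRight res else isLeft res
  putStrLn $ name ++ ": " ++ (if ok then "ok" else "FAIL")

main :: IO ()
main =
  mapM_
    check
    [ ("Good", True, "{\"clientToken\":\"abc\"}"),
      ("Missing field", False, "{}"),
      ("Corrupted", False, "{\"clientToken\":")
    ]
```

Keeping the expectation as a predicate (`isRight`/`isLeft`) rather than an exact error string makes the table robust to aeson's error-message wording.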

View File

@@ -15,26 +15,26 @@
--
module Network.Minio.JsonParser.Test
  ( jsonParserTests,
  )
where

import Lib.Prelude
import Network.Minio.Errors
import Network.Minio.JsonParser
import Test.Tasty
import Test.Tasty.HUnit
import UnliftIO (MonadUnliftIO)

jsonParserTests :: TestTree
jsonParserTests =
  testGroup
    "JSON Parser Tests"
    [ testCase "Test parseErrResponseJSON" testParseErrResponseJSON
    ]

tryValidationErr :: (MonadUnliftIO m) => m a -> m (Either MErrV a)
tryValidationErr = try

assertValidationErr :: MErrV -> Assertion
assertValidationErr e = assertFailure $ "Failed due to validation error => " ++ show e
@@ -44,21 +44,20 @@ testParseErrResponseJSON = do
  -- 1. Test parsing of an invalid error json.
  parseResE <- tryValidationErr $ parseErrResponseJSON "ClearlyInvalidJSON"
  when (isRight parseResE) $
    assertFailure $
      "Parsing should have failed => " ++ show parseResE
  forM_ cases $ \(jsondata, sErr) -> do
    parseErr <- tryValidationErr $ parseErrResponseJSON jsondata
    either assertValidationErr (@?= sErr) parseErr
  where
    cases =
      [ -- 2. Test parsing of a valid error json.
        ( "{\"Code\":\"InvalidAccessKeyId\",\"Message\":\"The access key ID you provided does not exist in our records.\",\"Key\":\"\",\"BucketName\":\"\",\"Resource\":\"/minio/admin/v1/info\",\"RequestId\":\"3L137\",\"HostId\":\"3L137\"}",
          ServiceErr "InvalidAccessKeyId" "The access key ID you provided does not exist in our records."
        ),
        -- 3. Test parsing of a valid, empty Resource.
        ( "{\"Code\":\"SignatureDoesNotMatch\",\"Message\":\"The request signature we calculated does not match the signature you provided. Check your key and signing method.\",\"Key\":\"\",\"BucketName\":\"\",\"Resource\":\"/minio/admin/v1/info\",\"RequestId\":\"3L137\",\"HostId\":\"3L137\"}",
          ServiceErr "SignatureDoesNotMatch" "The request signature we calculated does not match the signature you provided. Check your key and signing method."
        )
      ]

View File

@@ -15,18 +15,18 @@
--
module Network.Minio.TestHelpers
  ( runTestNS,
  )
where

import Network.Minio.Data

newtype TestNS = TestNS {testNamespace :: Text}

instance HasSvcNamespace TestNS where
  getSvcNamespace = testNamespace

runTestNS :: ReaderT TestNS m a -> m a
runTestNS =
  flip runReaderT $
    TestNS "http://s3.amazonaws.com/doc/2006-03-01/"
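`runTestNS` above is just `runReaderT` flipped and applied to a fixed environment. The same shape in miniature, using only the transformers package; `Env`, `namespace`, and `describeNS` here are illustrative names, not part of the library:

```haskell
import Control.Monad.Trans.Reader (ReaderT, asks, runReaderT)

-- illustrative stand-ins for TestNS / HasSvcNamespace
newtype Env = Env {namespace :: String}

-- a computation that reads the namespace from its environment
describeNS :: (Monad m) => ReaderT Env m String
describeNS = asks (("namespace = " ++) . namespace)

main :: IO ()
main =
  putStrLn =<< runReaderT describeNS (Env "http://s3.amazonaws.com/doc/2006-03-01/")
```

Pinning the environment once in a helper keeps every test that needs the S3 document namespace from repeating the URL.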

View File

@@ -15,33 +15,31 @@
--
module Network.Minio.Utils.Test
  ( limitedMapConcurrentlyTests,
  )
where

import Network.Minio.Utils
import Test.Tasty
import Test.Tasty.HUnit

limitedMapConcurrentlyTests :: TestTree
limitedMapConcurrentlyTests =
  testGroup
    "limitedMapConcurrently Tests"
    [ testCase "Test with various thread counts" testLMC
    ]

testLMC :: Assertion
testLMC = do
  let maxNum = 50
  -- test with thread count of 1 to 2*maxNum
  forM_ [1 .. (2 * maxNum)] $ \threads -> do
    res <- limitedMapConcurrently threads compute [1 .. maxNum]
    sum res @?= overallResultCheck maxNum
  where
    -- simple function to run in each thread
    compute :: Int -> IO Int
    compute n = return $ sum [1 .. n]
    -- function to check overall result
    overallResultCheck n = sum $ map (\t -> (t * (t + 1)) `div` 2) [1 .. n]
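`overallResultCheck` relies on the triangular-number identity `sum [1..t] == t * (t + 1) `div` 2`, so the expected total is independent of how the concurrent map schedules the work. A quick standalone check that the direct sum and the closed form agree for every size the test exercises:

```haskell
import Control.Monad (forM_)

main :: IO ()
main = forM_ [1 .. 50] $ \n -> do
  -- direct evaluation: sum of partial sums, as each worker thread computes it
  let direct = sum [sum [1 .. t] | t <- [1 .. n]] :: Int
      -- closed form used by overallResultCheck
      closed = sum [t * (t + 1) `div` 2 | t <- [1 .. n]]
  if direct == closed
    then pure ()
    else error ("mismatch at n = " ++ show n)
```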

View File

@@ -13,30 +13,33 @@
-- See the License for the specific language governing permissions and
-- limitations under the License.
--

{-# LANGUAGE QuasiQuotes #-}

module Network.Minio.XmlGenerator.Test
  ( xmlGeneratorTests,
  )
where

import qualified Data.ByteString.Lazy as LBS
import Lib.Prelude
import Network.Minio.Data
import Network.Minio.TestHelpers
import Network.Minio.XmlGenerator
import Network.Minio.XmlParser (parseNotification)
import Test.Tasty
import Test.Tasty.HUnit
import Text.RawString.QQ (r)
import Text.XML (def, parseLBS)

xmlGeneratorTests :: TestTree
xmlGeneratorTests =
  testGroup
    "XML Generator Tests"
    [ testCase "Test mkCreateBucketConfig" testMkCreateBucketConfig,
      testCase "Test mkCompleteMultipartUploadRequest" testMkCompleteMultipartUploadRequest,
      testCase "Test mkPutNotificationRequest" testMkPutNotificationRequest,
      testCase "Test mkSelectRequest" testMkSelectRequest
    ]

testMkCreateBucketConfig :: Assertion
testMkCreateBucketConfig = do
@@ -44,100 +47,136 @@ testMkCreateBucketConfig = do
  assertEqual "CreateBucketConfiguration xml should match: " expected $
    mkCreateBucketConfig ns "EU"
  where
    expected =
      "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\
      \<CreateBucketConfiguration xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">\
      \<LocationConstraint>EU</LocationConstraint>\
      \</CreateBucketConfiguration>"

testMkCompleteMultipartUploadRequest :: Assertion
testMkCompleteMultipartUploadRequest =
  assertEqual "completeMultipartUpload xml should match: " expected $
    mkCompleteMultipartUploadRequest [(1, "abc")]
  where
    expected =
      "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\
      \<CompleteMultipartUpload>\
      \<Part>\
      \<PartNumber>1</PartNumber><ETag>abc</ETag>\
      \</Part>\
      \</CompleteMultipartUpload>"

testMkPutNotificationRequest :: Assertion
testMkPutNotificationRequest =
  forM_ cases $ \val -> do
    let ns = "http://s3.amazonaws.com/doc/2006-03-01/"
        result = fromStrictBS $ mkPutNotificationRequest ns val
    ntf <- runExceptT $ runTestNS $ parseNotification result
    either
      (\_ -> assertFailure "XML Parse Error!")
      (@?= val)
      ntf
  where
    cases =
      [ Notification
          []
          [ NotificationConfig
              "YjVkM2Y0YmUtNGI3NC00ZjQyLWEwNGItNDIyYWUxY2I0N2M4"
              "arn:aws:sns:us-east-1:account-id:s3notificationtopic2"
              [ReducedRedundancyLostObject, ObjectCreated]
              defaultFilter
          ]
          [],
        Notification
          [ NotificationConfig
              "1"
              "arn:aws:sqs:us-west-2:444455556666:s3notificationqueue"
              [ObjectCreatedPut]
              ( Filter $
                  FilterKey $
                    FilterRules
                      [ FilterRule "prefix" "images/",
                        FilterRule "suffix" ".jpg"
                      ]
              ),
            NotificationConfig
              ""
              "arn:aws:sqs:us-east-1:356671443308:s3notificationqueue"
              [ObjectCreated]
              defaultFilter
          ]
          [ NotificationConfig
              ""
              "arn:aws:sns:us-east-1:356671443308:s3notificationtopic2"
              [ReducedRedundancyLostObject]
              defaultFilter
          ]
          [ NotificationConfig
              "ObjectCreatedEvents"
              "arn:aws:lambda:us-west-2:35667example:function:CreateThumbnail"
              [ObjectCreated]
              defaultFilter
          ]
      ]
testMkSelectRequest :: Assertion
testMkSelectRequest = mapM_ assertFn cases
  where
    assertFn (a, b) =
      let generatedReqDoc = parseLBS def $ LBS.fromStrict $ mkSelectRequest a
          expectedReqDoc = parseLBS def $ LBS.fromStrict b
       in case (generatedReqDoc, expectedReqDoc) of
            (Right genDoc, Right expDoc) -> assertEqual "selectRequest XML should match: " expDoc genDoc
            (Left err, _) -> assertFailure $ "Generated selectRequest failed to parse as XML" ++ show err
            (_, Left err) -> assertFailure $ "Expected selectRequest failed to parse as XML" ++ show err
    cases =
      [ ( SelectRequest
            "Select * from S3Object"
            SQL
            ( InputSerialization
                (Just CompressionTypeGzip)
                ( InputFormatCSV $
                    fileHeaderInfo FileHeaderIgnore
                      <> recordDelimiter "\n"
                      <> fieldDelimiter ","
                      <> quoteCharacter "\""
                      <> quoteEscapeCharacter "\""
                )
            )
            ( OutputSerializationCSV $
                quoteFields QuoteFieldsAsNeeded
                  <> recordDelimiter "\n"
                  <> fieldDelimiter ","
                  <> quoteCharacter "\""
                  <> quoteEscapeCharacter "\""
            )
            (Just False),
          [r|<?xml version="1.0" encoding="UTF-8"?><SelectRequest><Expression>Select * from S3Object</Expression><ExpressionType>SQL</ExpressionType><InputSerialization><CompressionType>GZIP</CompressionType><CSV><FieldDelimiter>,</FieldDelimiter><FileHeaderInfo>IGNORE</FileHeaderInfo><QuoteCharacter>"</QuoteCharacter><QuoteEscapeCharacter>"</QuoteEscapeCharacter><RecordDelimiter>
</RecordDelimiter></CSV></InputSerialization><OutputSerialization><CSV><FieldDelimiter>,</FieldDelimiter><QuoteCharacter>"</QuoteCharacter><QuoteEscapeCharacter>"</QuoteEscapeCharacter><QuoteFields>ASNEEDED</QuoteFields><RecordDelimiter>
</RecordDelimiter></CSV></OutputSerialization><RequestProgress><Enabled>FALSE</Enabled></RequestProgress></SelectRequest>|]
        ),
        ( setRequestProgressEnabled False $
            setInputCompressionType CompressionTypeGzip $
              selectRequest
                "Select * from S3Object"
                documentJsonInput
                (outputJSONFromRecordDelimiter "\n"),
          [r|<?xml version="1.0" encoding="UTF-8"?><SelectRequest><Expression>Select * from S3Object</Expression><ExpressionType>SQL</ExpressionType><InputSerialization><CompressionType>GZIP</CompressionType><JSON><Type>DOCUMENT</Type></JSON></InputSerialization><OutputSerialization><JSON><RecordDelimiter>
</RecordDelimiter></JSON></OutputSerialization><RequestProgress><Enabled>FALSE</Enabled></RequestProgress></SelectRequest>|]
        ),
        ( setRequestProgressEnabled False $
            setInputCompressionType CompressionTypeNone $
              selectRequest
                "Select * from S3Object"
                defaultParquetInput
                ( outputCSVFromProps $
                    quoteFields QuoteFieldsAsNeeded
                      <> recordDelimiter "\n"
                      <> fieldDelimiter ","
                      <> quoteCharacter "\""
                      <> quoteEscapeCharacter "\""
                ),
          [r|<?xml version="1.0" encoding="UTF-8"?><SelectRequest><Expression>Select * from S3Object</Expression><ExpressionType>SQL</ExpressionType><InputSerialization><CompressionType>NONE</CompressionType><Parquet/></InputSerialization><OutputSerialization><CSV><FieldDelimiter>,</FieldDelimiter><QuoteCharacter>"</QuoteCharacter><QuoteEscapeCharacter>"</QuoteEscapeCharacter><QuoteFields>ASNEEDED</QuoteFields><RecordDelimiter>
</RecordDelimiter></CSV></OutputSerialization><RequestProgress><Enabled>FALSE</Enabled></RequestProgress></SelectRequest>|]
        )
      ]
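The reworked `testMkSelectRequest` compares parsed `Document` values instead of raw bytes, so attribute-quoting differences (`&#34;` vs `"`) no longer cause false failures. A minimal sketch of that idea, assuming the xml-conduit package:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Text.XML (def, parseLBS)

main :: IO ()
main = do
  -- the same element, serialized with different attribute quoting
  let a = parseLBS def "<doc><x a=\"1\"/></doc>"
      b = parseLBS def "<doc><x a='1'/></doc>"
  case (a, b) of
    -- Document has an Eq instance, so structural comparison is direct
    (Right da, Right db) -> putStrLn (if da == db then "equal" else "different")
    _ -> putStrLn "parse error"
```

Byte-for-byte comparison of serialized XML ties a test to one serializer's quoting and escaping choices; parsing both sides first keeps the assertion about the document, not its rendering.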

View File

@@ -13,48 +13,49 @@
-- See the License for the specific language governing permissions and
-- limitations under the License.
--

{-# LANGUAGE QuasiQuotes #-}

module Network.Minio.XmlParser.Test
  ( xmlParserTests,
  )
where

import qualified Data.HashMap.Strict as H
import Data.Time (fromGregorian)
import Lib.Prelude
import Network.Minio.Data
import Network.Minio.Errors
import Network.Minio.TestHelpers
import Network.Minio.XmlParser
import Test.Tasty
import Test.Tasty.HUnit
import Text.RawString.QQ (r)
import UnliftIO (MonadUnliftIO)

xmlParserTests :: TestTree
xmlParserTests =
  testGroup
    "XML Parser Tests"
    [ testCase "Test parseLocation" testParseLocation,
      testCase "Test parseNewMultipartUpload" testParseNewMultipartUpload,
      testCase "Test parseListObjectsResponse" testParseListObjectsResult,
      testCase "Test parseListObjectsV1Response" testParseListObjectsV1Result,
      testCase "Test parseListUploadsresponse" testParseListIncompleteUploads,
      testCase "Test parseCompleteMultipartUploadResponse" testParseCompleteMultipartUploadResponse,
      testCase "Test parseListPartsResponse" testParseListPartsResponse,
      testCase "Test parseCopyObjectResponse" testParseCopyObjectResponse,
      testCase "Test parseNotification" testParseNotification,
      testCase "Test parseSelectProgress" testParseSelectProgress
    ]

tryValidationErr :: (MonadUnliftIO m) => m a -> m (Either MErrV a)
tryValidationErr = try

assertValidtionErr :: MErrV -> Assertion
assertValidtionErr e = assertFailure $ "Failed due to validation error => " ++ show e

eitherValidationErr :: Either MErrV a -> (a -> Assertion) -> Assertion
eitherValidationErr (Left e) _ = assertValidtionErr e
eitherValidationErr (Right a) f = f a

testParseLocation :: Assertion
@ -62,224 +63,224 @@ testParseLocation = do
-- 1. Test parsing of an invalid location constraint xml. -- 1. Test parsing of an invalid location constraint xml.
parseResE <- tryValidationErr $ parseLocation "ClearlyInvalidXml" parseResE <- tryValidationErr $ parseLocation "ClearlyInvalidXml"
when (isRight parseResE) $ when (isRight parseResE) $
assertFailure $ "Parsing should have failed => " ++ show parseResE assertFailure $
"Parsing should have failed => " ++ show parseResE
forM_ cases $ \(xmldata, expectedLocation) -> do forM_ cases $ \(xmldata, expectedLocation) -> do
parseLocE <- tryValidationErr $ parseLocation xmldata parseLocE <- tryValidationErr $ parseLocation xmldata
either assertValidtionErr (@?= expectedLocation) parseLocE either assertValidtionErr (@?= expectedLocation) parseLocE
where where
cases = [ cases =
-- 2. Test parsing of a valid location xml. [ -- 2. Test parsing of a valid location xml.
("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\ ( "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\
\<LocationConstraint xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">EU</LocationConstraint>", \<LocationConstraint xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">EU</LocationConstraint>",
"EU" "EU"
) ),
, -- 3. Test parsing of a valid, empty location xml.
-- 3. Test parsing of a valid, empty location xml. ( "<LocationConstraint xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"/>",
("<LocationConstraint xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"/>", "us-east-1"
"us-east-1" )
)
] ]
testParseNewMultipartUpload :: Assertion
testParseNewMultipartUpload = do
  forM_ cases $ \(xmldata, expectedUploadId) -> do
    parsedUploadIdE <- tryValidationErr $ runTestNS $ parseNewMultipartUpload xmldata
    eitherValidationErr parsedUploadIdE (@?= expectedUploadId)
  where
    cases =
      [ ( "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\
          \<InitiateMultipartUploadResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">\
          \ <Bucket>example-bucket</Bucket>\
          \ <Key>example-object</Key>\
          \ <UploadId>VXBsb2FkIElEIGZvciA2aWWpbmcncyBteS1tb3ZpZS5tMnRzIHVwbG9hZA</UploadId>\
          \</InitiateMultipartUploadResult>",
          "VXBsb2FkIElEIGZvciA2aWWpbmcncyBteS1tb3ZpZS5tMnRzIHVwbG9hZA"
        ),
        ( "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\
          \<InitiateMultipartUploadResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">\
          \ <Bucket>example-bucket</Bucket>\
          \ <Key>example-object</Key>\
          \ <UploadId>EXAMPLEJZ6e0YupT2h66iePQCc9IEbYbDUy4RTpMeoSMLPRp8Z5o1u8feSRonpvnWsKKG35tI2LB9VDPiCgTy.Gq2VxQLYjrue4Nq.NBdqI-</UploadId>\
          \</InitiateMultipartUploadResult>",
          "EXAMPLEJZ6e0YupT2h66iePQCc9IEbYbDUy4RTpMeoSMLPRp8Z5o1u8feSRonpvnWsKKG35tI2LB9VDPiCgTy.Gq2VxQLYjrue4Nq.NBdqI-"
        )
      ]
testParseListObjectsResult :: Assertion
testParseListObjectsResult = do
  let xmldata =
        "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\
        \<ListBucketResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">\
        \<Name>bucket</Name>\
        \<Prefix/>\
        \<NextContinuationToken>opaque</NextContinuationToken>\
        \<KeyCount>1000</KeyCount>\
        \<MaxKeys>1000</MaxKeys>\
        \<IsTruncated>true</IsTruncated>\
        \<Contents>\
        \<Key>my-image.jpg</Key>\
        \<LastModified>2009-10-12T17:50:30.000Z</LastModified>\
        \<ETag>&quot;fba9dede5f27731c9771645a39863328&quot;</ETag>\
        \<Size>434234</Size>\
        \<StorageClass>STANDARD</StorageClass>\
        \</Contents>\
        \</ListBucketResult>"
      expectedListResult = ListObjectsResult True (Just "opaque") [object1] []
      object1 = ObjectInfo "my-image.jpg" modifiedTime1 "\"fba9dede5f27731c9771645a39863328\"" 434234 H.empty H.empty
      modifiedTime1 = flip UTCTime 64230 $ fromGregorian 2009 10 12
  parsedListObjectsResult <- tryValidationErr $ runTestNS $ parseListObjectsResponse xmldata
  eitherValidationErr parsedListObjectsResult (@?= expectedListResult)
testParseListObjectsV1Result :: Assertion
testParseListObjectsV1Result = do
  let xmldata =
        "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\
        \<ListBucketResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">\
        \<Name>bucket</Name>\
        \<Prefix/>\
        \<NextMarker>my-image1.jpg</NextMarker>\
        \<KeyCount>1000</KeyCount>\
        \<MaxKeys>1000</MaxKeys>\
        \<IsTruncated>true</IsTruncated>\
        \<Contents>\
        \<Key>my-image.jpg</Key>\
        \<LastModified>2009-10-12T17:50:30.000Z</LastModified>\
        \<ETag>&quot;fba9dede5f27731c9771645a39863328&quot;</ETag>\
        \<Size>434234</Size>\
        \<StorageClass>STANDARD</StorageClass>\
        \</Contents>\
        \</ListBucketResult>"
      expectedListResult = ListObjectsV1Result True (Just "my-image1.jpg") [object1] []
      object1 = ObjectInfo "my-image.jpg" modifiedTime1 "\"fba9dede5f27731c9771645a39863328\"" 434234 H.empty H.empty
      modifiedTime1 = flip UTCTime 64230 $ fromGregorian 2009 10 12
  parsedListObjectsV1Result <- tryValidationErr $ runTestNS $ parseListObjectsV1Response xmldata
  eitherValidationErr parsedListObjectsV1Result (@?= expectedListResult)
testParseListIncompleteUploads :: Assertion
testParseListIncompleteUploads = do
  let xmldata =
        "<ListMultipartUploadsResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">\
        \<Bucket>example-bucket</Bucket>\
        \<KeyMarker/>\
        \<UploadIdMarker/>\
        \<NextKeyMarker>sample.jpg</NextKeyMarker>\
        \<NextUploadIdMarker>Xgw4MJT6ZPAVxpY0SAuGN7q4uWJJM22ZYg1W99trdp4tpO88.PT6.MhO0w2E17eutfAvQfQWoajgE_W2gpcxQw--</NextUploadIdMarker>\
        \<Delimiter>/</Delimiter>\
        \<Prefix/>\
        \<MaxUploads>1000</MaxUploads>\
        \<IsTruncated>false</IsTruncated>\
        \<Upload>\
        \<Key>sample.jpg</Key>\
        \<UploadId>Agw4MJT6ZPAVxpY0SAuGN7q4uWJJM22ZYg1N99trdp4tpO88.PT6.MhO0w2E17eutfAvQfQWoajgE_W2gpcxQw--</UploadId>\
        \<Initiator>\
        \<ID>314133b66967d86f031c7249d1d9a80249109428335cd0ef1cdc487b4566cb1b</ID>\
        \<DisplayName>s3-nickname</DisplayName>\
        \</Initiator>\
        \<Owner>\
        \<ID>314133b66967d86f031c7249d1d9a80249109428335cd0ef1cdc487b4566cb1b</ID>\
        \<DisplayName>s3-nickname</DisplayName>\
        \</Owner>\
        \<StorageClass>STANDARD</StorageClass>\
        \<Initiated>2010-11-26T19:24:17.000Z</Initiated>\
        \</Upload>\
        \<CommonPrefixes>\
        \<Prefix>photos/</Prefix>\
        \</CommonPrefixes>\
        \<CommonPrefixes>\
        \<Prefix>videos/</Prefix>\
        \</CommonPrefixes>\
        \</ListMultipartUploadsResult>"
      expectedListResult = ListUploadsResult False (Just "sample.jpg") (Just "Xgw4MJT6ZPAVxpY0SAuGN7q4uWJJM22ZYg1W99trdp4tpO88.PT6.MhO0w2E17eutfAvQfQWoajgE_W2gpcxQw--") uploads prefixes
      uploads = [("sample.jpg", "Agw4MJT6ZPAVxpY0SAuGN7q4uWJJM22ZYg1N99trdp4tpO88.PT6.MhO0w2E17eutfAvQfQWoajgE_W2gpcxQw--", initTime)]
      initTime = UTCTime (fromGregorian 2010 11 26) 69857
      prefixes = ["photos/", "videos/"]
  parsedListUploadsResult <- tryValidationErr $ runTestNS $ parseListUploadsResponse xmldata
  eitherValidationErr parsedListUploadsResult (@?= expectedListResult)
testParseCompleteMultipartUploadResponse :: Assertion
testParseCompleteMultipartUploadResponse = do
  let xmldata =
        "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\
        \<CompleteMultipartUploadResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">\
        \<Location>http://Example-Bucket.s3.amazonaws.com/Example-Object</Location>\
        \<Bucket>Example-Bucket</Bucket>\
        \<Key>Example-Object</Key>\
        \<ETag>\"3858f62230ac3c915f300c664312c11f-9\"</ETag>\
        \</CompleteMultipartUploadResult>"
      expectedETag = "\"3858f62230ac3c915f300c664312c11f-9\""
  parsedETagE <- runExceptT $ runTestNS $ parseCompleteMultipartUploadResponse xmldata
  eitherValidationErr parsedETagE (@?= expectedETag)
testParseListPartsResponse :: Assertion
testParseListPartsResponse = do
  let xmldata =
        "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\
        \<ListPartsResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">\
        \<Bucket>example-bucket</Bucket>\
        \<Key>example-object</Key>\
        \<UploadId>XXBsb2FkIElEIGZvciBlbHZpbmcncyVcdS1tb3ZpZS5tMnRzEEEwbG9hZA</UploadId>\
        \<Initiator>\
        \<ID>arn:aws:iam::111122223333:user/some-user-11116a31-17b5-4fb7-9df5-b288870f11xx</ID>\
        \<DisplayName>umat-user-11116a31-17b5-4fb7-9df5-b288870f11xx</DisplayName>\
        \</Initiator>\
        \<Owner>\
        \<ID>75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a</ID>\
        \<DisplayName>someName</DisplayName>\
        \</Owner>\
        \<StorageClass>STANDARD</StorageClass>\
        \<PartNumberMarker>1</PartNumberMarker>\
        \<NextPartNumberMarker>3</NextPartNumberMarker>\
        \<MaxParts>2</MaxParts>\
        \<IsTruncated>true</IsTruncated>\
        \<Part>\
        \<PartNumber>2</PartNumber>\
        \<LastModified>2010-11-10T20:48:34.000Z</LastModified>\
        \<ETag>\"7778aef83f66abc1fa1e8477f296d394\"</ETag>\
        \<Size>10485760</Size>\
        \</Part>\
        \<Part>\
        \<PartNumber>3</PartNumber>\
        \<LastModified>2010-11-10T20:48:33.000Z</LastModified>\
        \<ETag>\"aaaa18db4cc2f85cedef654fccc4a4x8\"</ETag>\
        \<Size>10485760</Size>\
        \</Part>\
        \</ListPartsResult>"
      expectedListResult = ListPartsResult True (Just 3) [part1, part2]
      part1 = ObjectPartInfo 2 "\"7778aef83f66abc1fa1e8477f296d394\"" 10485760 modifiedTime1
      modifiedTime1 = flip UTCTime 74914 $ fromGregorian 2010 11 10
      part2 = ObjectPartInfo 3 "\"aaaa18db4cc2f85cedef654fccc4a4x8\"" 10485760 modifiedTime2
      modifiedTime2 = flip UTCTime 74913 $ fromGregorian 2010 11 10
  parsedListPartsResult <- runExceptT $ runTestNS $ parseListPartsResponse xmldata
  eitherValidationErr parsedListPartsResult (@?= expectedListResult)
testParseCopyObjectResponse :: Assertion
testParseCopyObjectResponse = do
  let cases =
        [ ( "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\
            \<CopyObjectResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">\
            \<LastModified>2009-10-28T22:32:00.000Z</LastModified>\
            \<ETag>\"9b2cf535f27731c974343645a3985328\"</ETag>\
            \</CopyObjectResult>",
            ( "\"9b2cf535f27731c974343645a3985328\"",
              UTCTime (fromGregorian 2009 10 28) 81120
            )
          ),
          ( "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\
            \<CopyPartResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">\
            \<LastModified>2009-10-28T22:32:00.000Z</LastModified>\
            \<ETag>\"9b2cf535f27731c974343645a3985328\"</ETag>\
            \</CopyPartResult>",
            ( "\"9b2cf535f27731c974343645a3985328\"",
              UTCTime (fromGregorian 2009 10 28) 81120
            )
          )
        ]
  forM_ cases $ \(xmldata, (etag, modTime)) -> do
    parseResult <- runExceptT $ runTestNS $ parseCopyObjectResponse xmldata
@ -287,73 +288,89 @@ testParseCopyObjectResponse = do
testParseNotification :: Assertion
testParseNotification = do
  let cases =
        [ ( "<NotificationConfiguration xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">\
            \ <TopicConfiguration>\
            \ <Id>YjVkM2Y0YmUtNGI3NC00ZjQyLWEwNGItNDIyYWUxY2I0N2M4</Id>\
            \ <Topic>arn:aws:sns:us-east-1:account-id:s3notificationtopic2</Topic>\
            \ <Event>s3:ReducedRedundancyLostObject</Event>\
            \ <Event>s3:ObjectCreated:*</Event>\
            \ </TopicConfiguration>\
            \</NotificationConfiguration>",
            Notification
              []
              [ NotificationConfig
                  "YjVkM2Y0YmUtNGI3NC00ZjQyLWEwNGItNDIyYWUxY2I0N2M4"
                  "arn:aws:sns:us-east-1:account-id:s3notificationtopic2"
                  [ReducedRedundancyLostObject, ObjectCreated]
                  defaultFilter
              ]
              []
          ),
          ( "<NotificationConfiguration xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">\
            \ <CloudFunctionConfiguration>\
            \ <Id>ObjectCreatedEvents</Id>\
            \ <CloudFunction>arn:aws:lambda:us-west-2:35667example:function:CreateThumbnail</CloudFunction>\
            \ <Event>s3:ObjectCreated:*</Event>\
            \ </CloudFunctionConfiguration>\
            \ <QueueConfiguration>\
            \ <Id>1</Id>\
            \ <Filter>\
            \ <S3Key>\
            \ <FilterRule>\
            \ <Name>prefix</Name>\
            \ <Value>images/</Value>\
            \ </FilterRule>\
            \ <FilterRule>\
            \ <Name>suffix</Name>\
            \ <Value>.jpg</Value>\
            \ </FilterRule>\
            \ </S3Key>\
            \ </Filter>\
            \ <Queue>arn:aws:sqs:us-west-2:444455556666:s3notificationqueue</Queue>\
            \ <Event>s3:ObjectCreated:Put</Event>\
            \ </QueueConfiguration>\
            \ <TopicConfiguration>\
            \ <Topic>arn:aws:sns:us-east-1:356671443308:s3notificationtopic2</Topic>\
            \ <Event>s3:ReducedRedundancyLostObject</Event>\
            \ </TopicConfiguration>\
            \ <QueueConfiguration>\
            \ <Queue>arn:aws:sqs:us-east-1:356671443308:s3notificationqueue</Queue>\
            \ <Event>s3:ObjectCreated:*</Event>\
            \ </QueueConfiguration>)\
            \</NotificationConfiguration>",
            Notification
              [ NotificationConfig
                  "1"
                  "arn:aws:sqs:us-west-2:444455556666:s3notificationqueue"
                  [ObjectCreatedPut]
                  ( Filter $
                      FilterKey $
                        FilterRules
                          [ FilterRule "prefix" "images/",
                            FilterRule "suffix" ".jpg"
                          ]
                  ),
                NotificationConfig
                  ""
                  "arn:aws:sqs:us-east-1:356671443308:s3notificationqueue"
                  [ObjectCreated]
                  defaultFilter
              ]
              [ NotificationConfig
                  ""
                  "arn:aws:sns:us-east-1:356671443308:s3notificationtopic2"
                  [ReducedRedundancyLostObject]
                  defaultFilter
              ]
              [ NotificationConfig
                  "ObjectCreatedEvents"
                  "arn:aws:lambda:us-west-2:35667example:function:CreateThumbnail"
                  [ObjectCreated]
                  defaultFilter
              ]
          )
        ]
  forM_ cases $ \(xmldata, val) -> do
    result <- runExceptT $ runTestNS $ parseNotification xmldata
@ -362,20 +379,25 @@ testParseNotification = do
-- | Tests parsing of both progress and stats
testParseSelectProgress :: Assertion
testParseSelectProgress = do
  let cases =
        [ ( [r|<?xml version="1.0" encoding="UTF-8"?>
<Progress>
     <BytesScanned>512</BytesScanned>
     <BytesProcessed>1024</BytesProcessed>
     <BytesReturned>1024</BytesReturned>
</Progress>|],
            Progress 512 1024 1024
          ),
          ( [r|<?xml version="1.0" encoding="UTF-8"?>
<Stats>
     <BytesScanned>512</BytesScanned>
     <BytesProcessed>1024</BytesProcessed>
     <BytesReturned>1024</BytesReturned>
</Stats>|],
            Progress 512 1024 1024
          )
        ]
  forM_ cases $ \(xmldata, progress) -> do
    result <- runExceptT $ parseSelectProgress xmldata
    eitherValidationErr result (@?= progress)
@ -1,5 +1,5 @@
--
-- MinIO Haskell SDK, (C) 2017-2023 MinIO, Inc.
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
@ -14,21 +14,17 @@
-- limitations under the License.
--

import qualified Data.ByteString as B
import qualified Data.List as L
import Lib.Prelude
import Network.Minio.API.Test
import Network.Minio.CopyObject
import Network.Minio.Data
import Network.Minio.Utils.Test
import Network.Minio.XmlGenerator.Test
import Network.Minio.XmlParser.Test
import Test.Tasty
import Test.Tasty.QuickCheck as QC

main :: IO ()
main = defaultMain tests
@ -51,82 +47,87 @@ properties = testGroup "Properties" [qcProps] -- [scProps]
-- ]
qcProps :: TestTree
qcProps =
  testGroup
    "(checked by QuickCheck)"
    [ QC.testProperty "selectPartSizes:" $
        \n ->
          let (pns, offs, sizes) = L.unzip3 (selectPartSizes n)
              -- check that pns increments from 1.
              isPNumsAscendingFrom1 = all (uncurry (==)) $ zip pns [1 ..]
              consPairs [] = []
              consPairs [_] = []
              consPairs (a : (b : c)) = (a, b) : consPairs (b : c)
              -- check `offs` is monotonically increasing.
              isOffsetsAsc = all (uncurry (<)) $ consPairs offs
              -- check sizes sums to n.
              isSumSizeOk = sum sizes == n
              -- check sizes are constant except last
              isSizesConstantExceptLast =
                all (uncurry (==)) (consPairs $ L.init sizes)
              -- check each part except last is at least minPartSize;
              -- last part may be 0 only if it is the only part.
              nparts = length sizes
              isMinPartSizeOk =
                if
                  | nparts > 1 -> -- last part can be smaller but > 0
                      all (>= minPartSize) (take (nparts - 1) sizes)
                        && all (> 0) (drop (nparts - 1) sizes)
                  | nparts == 1 -> -- size may be 0 here.
                      maybe True (\x -> x >= 0 && x <= minPartSize) $
                        listToMaybe sizes
                  | otherwise -> False
           in n < 0
                || ( isPNumsAscendingFrom1
                       && isOffsetsAsc
                       && isSumSizeOk
                       && isSizesConstantExceptLast
                       && isMinPartSizeOk
                   ),
      QC.testProperty "selectCopyRanges:" $
        \(start, end) ->
          let (_, pairs) = L.unzip (selectCopyRanges (start, end))
              -- is last part's snd offset end?
              isLastPartOk = maybe False ((end ==) . snd) $ lastMay pairs
              -- is first part's fst offset start
              isFirstPartOk = maybe False ((start ==) . fst) $ listToMaybe pairs
              -- each pair is >=64MiB except last, and all those parts
              -- have same size.
              initSizes = maybe [] (map (\(a, b) -> b - a + 1) . init) (nonEmpty pairs)
              isPartSizesOk =
                all (>= minPartSize) initSizes
                  && maybe
                    True
                    (\k -> all (== k) initSizes)
                    (listToMaybe initSizes)
              -- returned offsets are contiguous.
              fsts = drop 1 $ map fst pairs
              snds = take (length pairs - 1) $ map snd pairs
              isContParts =
                length fsts == length snds
                  && all (\(a, b) -> a == b + 1) (zip fsts snds)
           in start < 0
                || start > end
                || (isLastPartOk && isFirstPartOk && isPartSizesOk && isContParts),
      QC.testProperty "mkSSECKey:" $
        \w8s ->
          let bs = B.pack w8s
              r = mkSSECKey bs
           in case r of
                Just _ -> B.length bs == 32
                Nothing -> B.length bs /= 32
    ]
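As an aside, the adjacent-pair technique that the `selectPartSizes` property relies on (pairing each list element with its successor, then checking a relation over all pairs) can be sketched in isolation. This is a hypothetical standalone example, not part of the diff; `consPairs` mirrors the helper above using `zip`/`drop` instead of explicit recursion, and `isStrictlyAscending` is an illustrative name:

```haskell
-- Pair each element with its successor: [1,3,7] -> [(1,3),(3,7)].
-- Equivalent to the recursive consPairs in the property test above.
consPairs :: [a] -> [(a, a)]
consPairs xs = zip xs (drop 1 xs)

-- A list is strictly ascending iff every adjacent pair is ordered,
-- which is exactly how isOffsetsAsc checks the part offsets.
isStrictlyAscending :: (Ord a) => [a] -> Bool
isStrictlyAscending = all (uncurry (<)) . consPairs

main :: IO ()
main = do
  print (isStrictlyAscending [1, 3, 7 :: Int]) -- True
  print (isStrictlyAscending [3, 1 :: Int]) -- False
```

Empty and singleton lists yield no pairs, so they are vacuously ascending, matching the behavior of the recursive helper.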
unitTests :: TestTree
unitTests =
  testGroup
    "Unit tests"
    [ xmlGeneratorTests,
      xmlParserTests,
      bucketNameValidityTests,
      objectNameValidityTests,
      parseServerInfoJSONTest,
      parseHealStatusTest,
      parseHealStartRespTest,
      limitedMapConcurrentlyTests
    ]
test/cert/private.key Normal file
@ -0,0 +1,28 @@
-----BEGIN PRIVATE KEY-----
MIIEvwIBADANBgkqhkiG9w0BAQEFAASCBKkwggSlAgEAAoIBAQC3G9IiC+adjf0p
i/2KYc+4dizeuzUFN7wraSdhiOMdQgCnu9Dc3t2YEsQhNdrARjOTyXd36KeM3TwI
rPJ61dRGQSuN12l+mzngFJQjE0sysZHUJOLQC3rVvIrHSQ57utPg8ifxt/SunlPY
fhcUcq03onMGq44yOfE6mIhoe0Y9wcPQ3RjjNNS44bgmXiXwa+Do0h2hEn6/essq
5KjHL8WW2vGg7G9edpYdxINA/A2fdLtr8BwPNrZhOx84eee2XcUNdBuTtUUxE+0L
9yRqItqddriRxJFwOXb5OPW8xx2WGaV2a0wbE4gB2PTwwDvfo72mo9HXHZUHM1A8
4TD/RXMbAgMBAAECggEBAJ7r1oUWLyGvinn0tijUm6RNbMQjVvEgXoCO008jr3pF
PqxVpgEMrOa/4tmwFBus0jcCNF4t3r2zhddBw3I5A/O1vEdvHnBz6NdDBQ8sP6fP
1fF50iEe1Y2MBibQkXFxxVMG2QRB1Gt5nuvXA9ELdqtCovK3EsMk5ukkWb/UvjH5
8hcmQsaSqvzFEF4wJSY2mkeGSGIJTphPhhuA22xbhaBMInQyhZu8EHsn0h6s/Wgy
C4Cp2+4qZTKaaf6x3/ZjJ8CuKiSX+ZsJKjOEv8sqx7j/Y7QFOmJPewInKDhwazr/
xIK+N0KXPbUzeSEz6ZvExNDTxtR5ZlQP2UrRDg28yQECgYEA4Is1O2BvKVzNFOkj
bTVz25a/bb0Xrcfgi0Y9rdfLzlNdItFjAkxLTVRSW2Hv9ICl0RDDAG+wTlktXRdh
rfvDjwG2CvLQo1VEdMWTTkKVg03SwMEy2hFiWV69lENFGSaY8Y6unZDbia5HQinA
EgSS4sCojS+a2jtzG5FVVHJDKlkCgYEA0MKhMhD4SUhr2y1idPBrmLxuW5mVozuW
8bYaBeSzmfS0BRsN4fP9JGODPBPDdNbfGfGC9ezWLgD/lmCgjIEyBOq8EmqWSsiS
Kihds1+Z7hXtbzGsFGAFJJTIh7blBCsK5QFuyuih2UG0fL9z6K/dy+UUJkzrYqph
vSfKixyM8pMCgYEAmUPLsNyw4325aeV8TeWnUCJERaZFDFQa21W1cfyS2yEhuEtN
llr3JzBACqn9vFk3VU1onNqfb8sE4L696KCpKeqUFEMK0AG6eS4Gzus53Gb5TKJS
kHA/PhshsZp9Bp7G1FJ8s4YVo5N2hh2zQVkn3Wh9Y+kzfHQJrK51nO9lEvkCgYBi
BuKWle1gzAcJdnhDHRoJMIJJtQbVDYhFnBMALXJAmu1lcFzGe0GlMq1PKqCfXr6I
eiXawQmZtJJP1LPPBmOsd2U06KQGHcS00xucvQmVCOrjSdnZ/3SqxsqbH8DOgj+t
ZUzXLwHA+N99rJEK9Hob4kfh7ECjpgobPnIXfKKazQKBgQChAuiXHtf/Qq18hY3u
x48zFWjGgfd6GpOBZYkXOwGdCJgnYjZbE26LZEnYbwPh8ZUA2vp7mgHRJkD5e3Fj
ERuJLCw86WqyYZmLEuBciYGjCZqR5nbavfwsziWD00jeNruds2ZwKxRfFm4V7o2S
WLd/RUatd2Uu9f3B2J78OUdnxg==
-----END PRIVATE KEY-----
test/cert/public.crt Normal file
@ -0,0 +1,19 @@
-----BEGIN CERTIFICATE-----
MIIDCzCCAfOgAwIBAgIUaIUOMI78LCu+r1zl0mmFHK8n5/AwDQYJKoZIhvcNAQEL
BQAwFDESMBAGA1UEAwwJbG9jYWxob3N0MCAXDTE5MTAyNDE5NTMxOVoYDzIxMTkw
OTMwMTk1MzE5WjAUMRIwEAYDVQQDDAlsb2NhbGhvc3QwggEiMA0GCSqGSIb3DQEB
AQUAA4IBDwAwggEKAoIBAQC3G9IiC+adjf0pi/2KYc+4dizeuzUFN7wraSdhiOMd
QgCnu9Dc3t2YEsQhNdrARjOTyXd36KeM3TwIrPJ61dRGQSuN12l+mzngFJQjE0sy
sZHUJOLQC3rVvIrHSQ57utPg8ifxt/SunlPYfhcUcq03onMGq44yOfE6mIhoe0Y9
wcPQ3RjjNNS44bgmXiXwa+Do0h2hEn6/essq5KjHL8WW2vGg7G9edpYdxINA/A2f
dLtr8BwPNrZhOx84eee2XcUNdBuTtUUxE+0L9yRqItqddriRxJFwOXb5OPW8xx2W
GaV2a0wbE4gB2PTwwDvfo72mo9HXHZUHM1A84TD/RXMbAgMBAAGjUzBRMB0GA1Ud
DgQWBBSEWXQ2JRD+OK7/KTmlD+OW16pGmzAfBgNVHSMEGDAWgBSEWXQ2JRD+OK7/
KTmlD+OW16pGmzAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQCF
0zYRaabB3X0jzGI9/Lr3Phrb90GvoL1DFLRuiOuTlDkz0vrm/HrZskwHCgMNrkCj
OTD9Vpas4D1QZBbQbRzfnf3OOoG4bgmcCwLFZl3dy27yIDAhrmbUP++g9l1Jmy4v
vBR/M4lt2scQ8LcZYEPqhEaE5EzFQEjtaxDcKdWDNKY9W1NUzSIABhF9eHiAUNdH
AFNJlYeBlCHxcWIeqgon184Dqp/CsvKtz3z3Ni+rlwPM/zuJCFHh1VF+z++0LJjG
roBCV0Tro4XyiEz9yp7Cb5kQYMaj1KL9TqBG0tZx0pmv7y+lXc4TT6DEllXz6USy
rbIba9/uUet3BqeIMTqj
-----END CERTIFICATE-----