Saving dates as metadata


-sh-4.2$ aws s3api put-object --bucket storagegrid-training --key "test.file" --body "test.file" --metadata Change_date=$(stat test.file|grep Change|awk '{print $2}'),Change_time=$(stat test.file|grep Change|awk '{print $3}') --profile default
{
    "ETag": "\"d41d8cd98f00b204e9800998ecf8427e\""
}




-sh-4.2$ aws s3api get-object --bucket storagegrid-training --key "test.file" "test.file" --profile default
{
    "AcceptRanges": "bytes",
    "LastModified": "2021-12-02T11:21:50+00:00",
    "ContentLength": 0,
    "ETag": "\"d41d8cd98f00b204e9800998ecf8427e\"",
    "ContentType": "binary/octet-stream",
    "Metadata": {
        "change_date": "2021-12-02",
        "change_time": "12:20:48.279373897"
    }
}
-sh-4.2$
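The same metadata can be built in Python from os.stat instead of the stat|grep|awk pipeline; a minimal sketch (the change_metadata helper name is ours, and the boto3 call is left commented as an illustration):

```python
import os
from datetime import datetime


def change_metadata(path):
    """Build S3 user metadata from the file's ctime (the "Change" line in stat).

    Python's datetime carries microsecond precision, so the fractional part is
    shorter than the nanoseconds stat prints.
    """
    ctime = datetime.fromtimestamp(os.stat(path).st_ctime)
    return {
        'change_date': ctime.strftime('%Y-%m-%d'),
        'change_time': ctime.strftime('%H:%M:%S.%f'),
    }

# The dict can then be passed as user metadata on upload, e.g. with boto3:
# boto3.client('s3').put_object(Bucket='storagegrid-training', Key='test.file',
#                               Body=open('test.file', 'rb'),
#                               Metadata=change_metadata('test.file'))
```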


If you are using Elasticsearch as an external search index, you can then search for metadata fields like:

GET sgmetadata/_search
{
  "query": {
    "term": { "metadata.change_date": "2023-01-17" }
  }
}

{
  "took": 1,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": {
      "value": 1,
      "relation": "eq"
    },
    "max_score": 1,
    "hits": [
      {
        "_index": "sgmetadata",
        "_id": "lock-test_elasticsearch3.test_MUE4NThCQ0EtQkQ4QS0xMUVELTk5Q0QtQTZFOTAwQjk0NkFF",
        "_score": 1,
        "_source": {
          "bucket": "lock-test",
          "key": "elasticsearch3.test",
          "versionId": "MUE4NThCQ0EtQkQ4QS0xMUVELTk5Q0QtQTZFOTAwQjk0NkFF",
          "accountId": "32427727073175632701",
          "size": 40960000,
          "md5": "4081a22a1c2ef19e3c44ef14c8006fda",
          "region": "us-east-1",
          "metadata": {
            "change_date": "2023-01-17",
            "change_time": "15:19:36.500205796"
          }
        }
      }
    ]
  }
}
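The same term query can be sent from Python with requests; a sketch, assuming a hypothetical ES_ENDPOINT that you would point at your own Elasticsearch cluster:

```python
import json

import requests

# Hypothetical endpoint; replace with your external Elasticsearch cluster.
ES_ENDPOINT = 'https://elasticsearch.example.com:9200'


def term_query(field, value):
    """Build the Elasticsearch term query body shown above."""
    return {'query': {'term': {field: value}}}


def search_by_change_date(date):
    """GET sgmetadata/_search with the term query (requires a live cluster)."""
    return requests.get(ES_ENDPOINT + '/sgmetadata/_search',
                        headers={'Content-Type': 'application/json'},
                        data=json.dumps(term_query('metadata.change_date', date)))
```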

Modify the consistency level of all your buckets in StorageGRID

Sometimes it makes sense to set the consistency level of all your buckets to "available", for example when you are doing maintenance tasks and one node is down most of the time.

The default "read-after-write" policy should work fine, unless your clients start doing HEAD operations, which can lead to 500 errors when StorageGRID is unable to meet the consistency level.

To change the consistency level of all your buckets at once, we can use the management API. We define two helper functions as follows:

def get_consistency_level(tenant_authtoken, bucket_name):
    headers = {'Authorization': 'Bearer ' + tenant_authtoken}
    return requests.get(_url('/api/v3/org/containers/{}/consistency'.format(bucket_name)),
                        headers=headers, verify=verify)


def set_consistency_level(tenant_authtoken, bucket_name, level):
    headers = {
        'Authorization': 'Bearer ' + tenant_authtoken,
        'accept': 'application/json',
        'Content-Type': 'application/json'
    }
    data = {'consistency': level}

    # Note: format() must be applied to the URL template before calling _url(),
    # not to the Response object returned by requests.
    return requests.put(_url('/api/v3/org/containers/{}/consistency'.format(bucket_name)),
                        json=data, headers=headers, verify=verify)

Then we go through all the buckets and change the consistency level:

response = get_tenants_accounts(auth_token)

if response.status_code != 200:
    raise Exception('GET /api/v3/grid/accounts?limit=25 {}'.format(response.status_code)
                    + ' Error: ' + response.json()['message']['text'])

# For each tenant account, get the buckets:
for item in response.json()['data']:
    tenantid = item['id']
    tenant_name = item['name']
    buckets_response = get_storage_usage_in_tenant(tenantid, auth_token)
    tenant_auth = get_tenant_token(api_user, api_passwd, tenantid).json()['data']

    for bucket in buckets_response.json()['data']['buckets']:
        print('Tenant name: {} Bucket name: {} Consistency level: {}'.format(
            tenant_name, bucket['name'],
            get_consistency_level(tenant_auth, bucket['name']).json()['data']))

        setresponse = set_consistency_level(tenant_auth, bucket['name'], 'available')
        if setresponse.status_code != 200:
            raise Exception('PUT set consistency level: {}'.format(setresponse.status_code)
                            + ' Error: ' + setresponse.json()['message']['text'])
            

ssh X forwarding after sudo

To keep X forwarding (ssh -X or ssh -Y) working after doing sudo, we need to use xauth.

-sh-4.2$ ssh -Y mano@server12

-sh-4.2$ xauth list|tail -1
server12.sunwave.es:10  MIT-MAGIC-COOKIE-1  38a81b09365e1b5d13c50ad53d378a78

-sh-4.2$ sudo su
[sudo] password for mano:

[root@server12 mano]# xauth
Using authority file /root/.Xauthority
xauth> add server12.sunwave.es:10  MIT-MAGIC-COOKIE-1  38a81b09365e1b5d13c50ad53d378a78
xauth> exit
Writing authority file /root/.Xauthority
[root@server12 mano]# xeyes

Now we can launch GUI applications as root, and the traffic will be forwarded through our ssh session.

Use SSE-C with StorageGRID or S3 storage

In case you want to use server-side encryption with your own customer-provided keys (SSE-C):

Let’s create a 32-byte binary key:

mirettam@doraemon:~$ cat /dev/random | head -c 32 > key.bin

Let’s upload an object:

First check the md5sum of the object:

mirettam@doraemon:~$ md5sum awscliv2.zip
e6b46dd7cac2629a544ab343df00324f  awscliv2.zip

Then PUT the object:

mirettam@doraemon:~$ aws s3api put-object --key awscliv2.zip --body awscliv2.zip  --sse-customer-algorithm AES256 --sse-customer-key fileb://key.bin  --bucket storagegrid-training
{
    "ETag": "\"0edaf675a5c6d28c3c29d8b7f627a7ae\"",
    "SSECustomerAlgorithm": "AES256",
    "SSECustomerKeyMD5": "eUhaDMfLGFJZ21BC4qX2qg=="
}
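The SSECustomerKeyMD5 field in the response is the base64-encoded MD5 digest of the raw key, so you can compute it locally to confirm which key an object was stored with; a small sketch (the helper name is ours):

```python
import base64
import hashlib


def sse_customer_key_md5(key_path):
    """Return the base64-encoded MD5 of the raw key file, as S3 reports it."""
    with open(key_path, 'rb') as f:
        key = f.read()
    return base64.b64encode(hashlib.md5(key).digest()).decode('ascii')

# sse_customer_key_md5('key.bin') should match the "SSECustomerKeyMD5"
# value shown in the put-object/get-object responses above.
```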

Let’s try to retrieve the object using the keys.

mirettam@doraemon:~$ aws s3api get-object --key awscliv2.zip  --sse-customer-algorithm AES256 --sse-customer-key fileb://key.bin  --bucket storagegrid-training awscli2.zip


{
    "AcceptRanges": "bytes",
    "LastModified": "2020-08-18T11:04:02+00:00",
    "ContentLength": 33159785,
    "ETag": "\"0edaf675a5c6d28c3c29d8b7f627a7ae\"",
    "ContentType": "binary/octet-stream",
    "Metadata": {},
    "SSECustomerAlgorithm": "AES256",
    "SSECustomerKeyMD5": "eUhaDMfLGFJZ21BC4qX2qg=="
}

Check the md5sum of the retrieved object.

mirettam@doraemon:~$ md5sum awscli2.zip
e6b46dd7cac2629a544ab343df00324f  awscli2.zip

Let’s try to retrieve the object without any key; it should fail:

mirettam@doraemon:~$ aws s3api get-object --key awscliv2.zip   --bucket storagegrid-training awscli3.zip           


An error occurred (InvalidRequest) when calling the GetObject operation: The object was stored using a form of Server Side Encryption. The correct parameters must be provided to retrieve the object.                                 
