<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Writing Legacy That Still Works]]></title><description><![CDATA[Hi! I am Yehor.
I spent 10 years building Python systems that last long enough to become legacy. And still work.]]></description><link>https://yehorlevchenko.dev</link><generator>RSS for Node</generator><lastBuildDate>Thu, 16 Apr 2026 21:09:42 GMT</lastBuildDate><atom:link href="https://yehorlevchenko.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[How Our Proxy Setup Exposed a Critical Bug in AWS]]></title><description><![CDATA[As mentioned in the first part of the cycle, currently I’m running a large-scale IoT fleet on AWS Greengrass. These devices sit behind a very strict corporate firewall: every outbound flow must be accounted for, optimised and approved by security bef...]]></description><link>https://yehorlevchenko.dev/how-our-proxy-setup-exposed-a-critical-bug-in-aws</link><guid isPermaLink="true">https://yehorlevchenko.dev/how-our-proxy-setup-exposed-a-critical-bug-in-aws</guid><category><![CDATA[AWS Greengrass]]></category><category><![CDATA[AWS]]></category><category><![CDATA[greengrass]]></category><category><![CDATA[Python]]></category><dc:creator><![CDATA[Yehor Levchenko]]></dc:creator><pubDate>Sun, 28 Sep 2025 18:47:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1759085229505/9759785c-a7e6-4b15-b780-f630d477a1cd.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As mentioned in the first part of the cycle, currently I’m running a large-scale IoT fleet on AWS Greengrass. These devices sit behind a very strict corporate firewall: every outbound flow must be accounted for, optimised and approved by security before any port/IP/hostname is opened. In practice, that means every packet counts and every config change goes through an actual review.</p>
<p>The planned solution (the “nice” one, by the book): route all outbound traffic through a corporate proxy stack (NLB &gt; proxy on EC2) and let security whitelist only the proxy IP. On the Greengrass side, AWS has recipes for this - in theory, you set a few proxy variables in the merge config of your Nucleus and you’re done.</p>
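<p>For reference, the by-the-book shape of that Nucleus proxy config is roughly the following. This is a sketch: the proxy URL and addresses are illustrative placeholders, not our actual setup.</p>
<pre><code class="lang-yaml"># Fragment of the Greengrass config.yaml (or of a deployment
# configuration update) for aws.greengrass.Nucleus.
# All values below are illustrative placeholders.
services:
  aws.greengrass.Nucleus:
    configuration:
      networkProxy:
        noProxyAddresses: "localhost,127.0.0.1"       # reach these directly
        proxy:
          url: "http://proxy.internal.example:3128"   # corporate proxy behind the NLB
</code></pre>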
<p>Reality #1: the docs are vague. The “just add a couple of lines to the Nucleus config” approach reads fine on paper, but in the field you need to ensure the usual proxy envs are present everywhere: HTTP_PROXY, HTTPS_PROXY, NO_PROXY (and their lowercase variants). Setting them only in the Nucleus isn’t enough if components ignore the Nucleus environment.</p>
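<p>One blunt way to make the envs explicit per component is the <code>Setenv</code> block in the component recipe. A sketch - the component name and all values here are hypothetical:</p>
<pre><code class="lang-yaml"># Component recipe fragment (hypothetical component).
# Setenv applies to every lifecycle step of this component.
RecipeFormatVersion: '2020-01-25'
ComponentName: com.example.ProxyAwareComponent
ComponentVersion: '1.0.0'
Manifests:
  - Lifecycle:
      Setenv:
        HTTP_PROXY: "http://proxy.internal.example:3128"
        HTTPS_PROXY: "http://proxy.internal.example:3128"
        NO_PROXY: "localhost,127.0.0.1"
        http_proxy: "http://proxy.internal.example:3128"   # lowercase twins for picky clients
        https_proxy: "http://proxy.internal.example:3128"
        no_proxy: "localhost,127.0.0.1"
      Run: python3 -u {artifacts:path}/main.py
</code></pre>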
<p>Reality #2: most public components don’t play by the rules. They bypass the proxy and try to send traffic directly. Worse - there’s an intermittent Systems Manager bug where its proxy variables get reset after a Greengrass restart. So you can deploy, verify, sleep, and then a reboot turns a tidy, secure device into a leaking sieve. Or it simply locks you out of your edge device as Systems Manager keeps hitting the firewall.</p>
<h3 id="heading-what-i-did-blunt-solution-until-aws-addresses-this-issue">What I did (blunt solution until AWS addresses this issue):</h3>
<ul>
<li><p>Ensure proxy envs are explicitly injected into each public component (don’t rely solely on Nucleus inheritance).</p>
</li>
<li><p>Manually create a drop-in config for the proxy variables. Greengrass won’t overwrite it on restart, so it sticks.</p>
</li>
<li><p><em>Recommended hack, echoed by an AWS architect</em>: introduce a small “service component” that applies all critical settings (proxy, certs, env vars) and declare it as a dependency for every other component. That way, no other component even starts until the environment is sane.</p>
</li>
</ul>
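<p>The drop-in from the second bullet can look like this, assuming Greengrass was set up as a systemd service (<code>greengrass.service</code>); the path and values are illustrative:</p>
<pre><code class="lang-ini"># /etc/systemd/system/greengrass.service.d/proxy.conf
# A systemd drop-in survives Greengrass restarts and redeployments,
# unlike env vars injected at deployment time. The environment is
# inherited by the Nucleus process tree (unless a component scrubs it).
[Service]
Environment="HTTP_PROXY=http://proxy.internal.example:3128"
Environment="HTTPS_PROXY=http://proxy.internal.example:3128"
Environment="NO_PROXY=localhost,127.0.0.1,169.254.169.254"
</code></pre>
<p>After writing it, run <code>systemctl daemon-reload</code> and restart the service so the new environment takes effect.</p>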
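<p>The service-component trick from the last bullet is just a hard dependency in every other recipe. A sketch, with hypothetical component names:</p>
<pre><code class="lang-yaml"># Recipe fragment for any workload component: it won't start until
# com.example.EnvBootstrap (the hypothetical settings component) is running.
ComponentDependencies:
  com.example.EnvBootstrap:
    VersionRequirement: '&gt;=1.0.0 &lt;2.0.0'
    DependencyType: HARD
</code></pre>
<p>With <code>DependencyType: HARD</code>, Greengrass also restarts the dependent component whenever the bootstrap component changes state, so a fixed environment propagates without manual redeploys.</p>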
<p>And this is a perfect example of edge reality: the architecture diagrams look pretty, but the edge prefers surprises.</p>
<p>I filed the relevant bug/support case with AWS; until it’s fixed, treat Systems Manager’s public components like “cattle that occasionally forget they are behind a wall” and hard-enforce the proxy config locally.</p>
<p>Note for myself and my brothers and sisters on the edge: always test your components against Greengrass restarts and validate env propagation per component. It might save you some night shifts.</p>
]]></content:encoded></item><item><title><![CDATA[Leaves of Greengrass: my life on the (AWS) Edge]]></title><description><![CDATA[When I heard the learn’d architect,
When the components, the devices, were ranged in columns before me,
When I was shown the recipes and groups, to add, divide, and measure them,
When I sitting heard the architect where he lectured with much applause...]]></description><link>https://yehorlevchenko.dev/leaves-of-greengrass-my-life-on-the-aws-edge</link><guid isPermaLink="true">https://yehorlevchenko.dev/leaves-of-greengrass-my-life-on-the-aws-edge</guid><category><![CDATA[AWS]]></category><category><![CDATA[greengrass]]></category><category><![CDATA[Python]]></category><dc:creator><![CDATA[Yehor Levchenko]]></dc:creator><pubDate>Sun, 28 Sep 2025 16:16:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1759076066618/78c8b8d8-c99c-44f5-bcd1-76ad44a01510.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When I heard the learn’d architect,</p>
<p>When the components, the devices, were ranged in columns before me,</p>
<p>When I was shown the recipes and groups, to add, divide, and measure them,</p>
<p>When I sitting heard the architect where he lectured with much applause in the lecture-room,</p>
<p>How soon unaccountable I became tired and sick,</p>
<p>Till rising and gliding out I wander’d off by myself,</p>
<p>In the mystical moist night-air, and from time to time,</p>
<p>Look’d up in perfect silence at the CloudWatch.</p>
<p>To my dear W.W.</p>
<hr />
<p>This is my intro article in the “Leaves of Greengrass” cycle. In it, I’d like to share my knowledge and experience with AWS Greengrass, plus some extra pains someone might find entertaining and maybe even useful.</p>
<p>I’ve been living with AWS Greengrass for a while now. Depending on the day, it feels like either a very cool and serious project or a science experiment someone duct-taped together after too much Red Bull - and now I have to keep it alive.</p>
<p>On paper, Greengrass looks clean: “here are the components, here are the groups, here’s how it all connects.” In reality, you’re spelunking through logs in two or three different places, firing off deployments like a machine gun, and treating the next SDK update as a major threat.</p>
<p>The project I’m working on is far from what it should be on paper, or what one might build by following tutorials. It is a production environment where an internet connection is a luxury and costs a fortune, downtime is not an option, and every second of data is measured in terabytes.</p>
<p>Sounds badass, you’d say! Sounds like a minefield, I’d say.</p>
<p>So why write about this?</p>
<ul>
<li><p>Because the docs are sterile, and I want to show the dirt. At least, I wish someone had shown it to me back in the day.</p>
</li>
<li><p>Because behind many fixes there’s not just a technical answer, but a survival story.</p>
</li>
<li><p>Because at 2 a.m., staring at Greengrass logs, it helps to know you’re not alone.</p>
</li>
</ul>
<p>What <em>Leaves of Greengrass</em> will cover:</p>
<ul>
<li><p>Components that refuse to start and won’t tell you why.</p>
</li>
<li><p>Recipes and groups that look simple in diagrams but turn into hell in practice.</p>
</li>
<li><p>The small victories when, after twenty failed attempts, something finally runs.</p>
</li>
</ul>
<p>Anyway, I’m writing these as my field notes. Part war stories, part technical breakdowns.</p>
<p>If you love polished AWS whitepapers, this will hurt.</p>
<p>If you want to know what Greengrass looks like in real life, welcome.</p>
]]></content:encoded></item></channel></rss>