It has been eight years since I set foot on the developer path. I honestly cannot say how much the me who started working in his late twenties differs from the me today. So I want to write down those eight years the way you might write a travel diary. I have never done one of those year-in-review posts many people write at New Year’s—so this is eight years of reflection in one go.


Igloosec (Jan 2013 ~ Jun 2016)

Junior developer

I began my professional life as a developer in January 2013. My first company was Igloosec, which built security products. The flagship was a unified security management solution (ESM). I was lucky enough to join as a junior on the team that built that cash cow. Being the cash cow meant plenty of legacy code and customer-facing work beyond pure development, so it was hardly an easy gig. Even so, I call it luck because the grind I went through as a junior engineer paid dividends for a long time afterward. I also met many good seniors.

January 2, 2013, was my first day. I still remember the scene clearly. I wore the same stiff suit I had worn to the interview and trailed my mentor to my desk. On the bare-looking desk sat one thick book: the ESM manual, meant for customers who bought ESM. It was as fat as an encyclopedia. My first assignment was to read it and get a rough grasp of what ESM did.

That afternoon I got a second task: install CentOS, Oracle, and ESM on a test server in the server room. As a green junior, it felt daunting. Linux itself was unfamiliar, and Oracle installation kept throwing opaque errors—I rolled back and retried more times than I can count. ESM was no picnic either. It was not a single process but many modules wired together, and getting them to talk required a fairly intricate setup. I spent about a week on this. Afterward my mentor walked me through ESM’s architecture and overall flow in person. I picked up the standalone layout quickly enough; the master–slave distributed setup took longer to sink in.

Only after all of that did I see source code for the first time. My slice of the team was middleware, and the main language was Java. Even then the codebase was already eight years old—legacy everywhere—and it was plain Java without Spring or similar frameworks. The middleware handled things like time-window statistics on live ingested data, configuration management, and synchronization across distributed nodes. For a new hire, both the features and the code were too much to swallow whole, so I learned the big picture first and went deep only when a specific feature broke.
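To give a flavor of what "time-window statistics on live ingested data" means, here is a minimal sketch of tumbling-window counting over event timestamps. This is not the actual middleware code; the class and method names are invented for illustration.

```java
import java.util.Map;
import java.util.TreeMap;

public class WindowCounter {
    // Minimal tumbling-window counter: events are bucketed by
    // (timestamp / windowMs), so each bucket covers one window.
    private final long windowMs;
    private final Map<Long, Integer> counts = new TreeMap<>();

    public WindowCounter(long windowMs) {
        this.windowMs = windowMs;
    }

    // Record one event; returns this so calls can be chained.
    public WindowCounter record(long eventTimeMs) {
        counts.merge(eventTimeMs / windowMs, 1, Integer::sum);
        return this;
    }

    // Count of events in the window containing timeMs.
    public int countAt(long timeMs) {
        return counts.getOrDefault(timeMs / windowMs, 0);
    }

    public static void main(String[] args) {
        WindowCounter perSecond = new WindowCounter(1000);
        perSecond.record(100).record(900).record(1500);
        System.out.println(perSecond.countAt(500)); // events in window [0, 1000)
    }
}
```

The real middleware, of course, also had to evict old buckets and survive restarts; this only shows the bucketing idea.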

Boot camp in troubleshooting

As I said, ESM had many customers—on the order of hundreds. The R&D org did not talk to all of them directly; a separate engineering team handled installation, maintenance, and incidents. Common, simple failures were documented so the engineering team could resolve them without escalating. Everything else got escalated to R&D—which meant most issues that reached us were ones we had never seen before and were hard to diagnose. With that many customers, we sometimes had two or three such escalations a week.

Even after years in the field, the product saw that much churn mainly because environments varied wildly. The software ran on customer servers, so network layout and hardware differed from site to site. OS was usually CentOS, but we supported any environment that could run a JVM: IBM AIX, HP-UX, Solaris, and so on. The default database was Oracle, but some sites used DB2, MSSQL, or Tibero. Version skew across OS and DB meant that no matter how carefully we coded and tested, incidents still happened.

Two lessons stuck. First, I learned to think hard about failure modes. Even code that worked in the “normal” case had to consider spikes in ingest volume, higher latency to the DB, concurrency edge cases, transient storage hiccups, and so on—so I tended to code conservatively.
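One concrete flavor of that conservatism is wrapping flaky operations (a DB write during a latency spike, say) in bounded retries instead of assuming the happy path. This is a generic sketch, not code from the product; the helper name and parameters are invented.

```java
import java.util.concurrent.Callable;

public class Retry {
    // Run op, retrying up to maxAttempts times with capped exponential
    // backoff between attempts. The exception from the final attempt is
    // rethrown wrapped in a RuntimeException.
    public static <T> T withRetry(Callable<T> op, int maxAttempts, long baseDelayMs) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = new RuntimeException("attempt " + attempt + " failed", e);
                if (attempt < maxAttempts) {
                    long delay = Math.min(baseDelayMs << (attempt - 1), 5_000);
                    try {
                        Thread.sleep(delay);
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        break; // give up promptly if interrupted
                    }
                }
            }
        }
        throw last;
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Fails twice (simulating transient hiccups), then succeeds.
        String result = withRetry(() -> {
            if (++calls[0] < 3) throw new RuntimeException("transient hiccup");
            return "ok";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

The backoff cap matters in practice: an unbounded doubling delay can turn a brief outage into minutes of silence.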

Second, experience under fire. I do not remember every Oracle tuning tweak or minor OS-level debug session, but I think I got better instincts for how to behave when things break. Incident response often feels like detective work: lay out the clues, form hypotheses, test them. When clues are thin, narrow the blast radius—which stage, which kind of data—and try again. Without systems knowledge, hypotheses are hard. If you do not know what an inode is, a “no space left on device” error with plenty of free space leaves you stuck.

A new module

Besides middleware I built two other modules. The first was internally called “extension”: when ESM raised an alarm, it delivered the alert through customer-specific channels—often SMS or email. Each customer used different SMS gateways and email templates, so the module meant constant one-off work.

The logic was simple and the shape of the solution fairly fixed, so it was the kind of module juniors got. I owned it from hire to resignation. Security rules sometimes required on-site work at customer sites, which I occasionally used as an excuse to escape the office for a field day.

A new module from scratch

The second was SpDbReader. I owned the initial design end to end—probably in my second or third year.

When a customer adopted ESM, agents sat on security appliances (FW, IPS, IDS, UTM, …) across the infrastructure and fed security events upstream. For performance, agents were written in C. Appliance vendors and versions varied so much that the agent team carried a heavy load—they also pulled the most overtime.

The old flow looked like this. A field engineer visited the site, captured the environment and requirements, and sent them to the agent team. The team patched or extended the agent source, built it, and handed the binary back. The engineer installed and tested on site. If anything misbehaved—which was common—they collected logs and sent them back. The dev team patched again. Field and HQ both burned excessive time on agent integration.

Several ideas were tried to fix this; one was SpDbReader. The idea was simple: a Java daemon read and executed JavaScript files. Each customer had their own script describing where and how to pull appliance logs, how to transform them, and where to send them. Shared helpers lived in Java, so DB access via JDBC and similar stayed easy. One jar covered most environments without per-OS source forks. The real win was that JavaScript is interpreted—engineers could edit scripts on site and iterate immediately.
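The core mechanism is easy to sketch with the standard javax.script API, which is what a JDK of that era offered (Rhino, later Nashorn; Nashorn was removed in JDK 15, so this may not run on current JVMs). The class and method names here are illustrative, not SpDbReader's real internals.

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class ScriptRunner {
    // The JDK's built-in JavaScript engine, or null on JDK 15+.
    static final ScriptEngine engine =
            new ScriptEngineManager().getEngineByName("javascript");

    // Evaluate one customer-specific collection script.
    static Object run(String script) {
        if (engine == null) {
            throw new IllegalStateException("no JavaScript engine on this JVM");
        }
        try {
            return engine.eval(script);
        } catch (ScriptException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        if (engine == null) {
            System.out.println("No JavaScript engine available (JDK 15+).");
            return;
        }
        // A per-customer script: parse a raw appliance log line into a
        // normalized "device:action" record. Editable on site, no rebuild.
        String script =
                "var raw = 'FW01|DENY|10.0.0.5';" +
                "var parts = raw.split('|');" +
                "parts[0] + ':' + parts[1];";
        System.out.println(run(script));
    }
}
```

In production the daemon would load scripts from per-customer files and pick up edits on the next run; a single eval is enough to show the idea.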

After SpDbReader shipped I trained field engineers several times; they could adjust integrations themselves. That eased the agent team’s load and shortened integration time a lot.

Its throughput could not match a C agent's, so for very high event rates or tight resource budgets we still used the old path—but SpDbReader became the default elsewhere.

Tech beyond the office

Around then I started looking outside the company. “Big data” was hot; the Hadoop ecosystem was maturing. I tinkered with Spark and joined a Scala study group to learn a new language. I liked Scala enough to consider porting Java code, but hiring Scala developers seemed impractical, so I dropped the idea.

I also picked up React—early days, right around when people were moving from React.createClass toward ES6 classes. I had never done web professionally; HTML, CSS, and JavaScript were barely at beginner level, so front-end had always felt like another planet. React’s component model hooked me and I dug in for a while. I got married around then and even built our mobile wedding invitation with React.

Go entered the picture here too. Java bothered me a little because the artifact was bytecode, not a native binary. I looked at things like GCJ but was not convinced—and GCJ was effectively dead even then. Then Google released Go; study groups were springing up in Korea.

The company had a C collection engine slated for redesign, and Go was briefly on the table. Learning Go scratched an itch: binaries, nicer concurrency and resource use than Java, system calls in the language with better productivity than C/C++. In the end politics killed the Go rewrite, but I still wrote small utilities on the side—around my third year on the job.

First job change

I decided to move on. The thought had been there a while, but I only got serious about preparing around year three. I had just been promoted from staff to assistant manager—but I had no concrete target until I read RIDI’s engineering blog. My first impression was that RIDI would suit someone who likes new tech.

At Igloosec, adoption of new technology was necessarily conservative. I chalk that up to the difference between service companies and product vendors: if you deploy what you write to your own stack, you can trade a bit of stability for velocity when it helps. If your code runs on customer servers and an outage means an incident report to the client, new stacks are a hard sell. Few colleagues were chasing the bleeding edge.

Other things drew me to RIDI as well. I wanted to try B2C development. At Igloosec, B2B meant little emotional attachment to the product and scarce direct feedback. RIDI was the opposite: B2C and very feedback-driven. Every morning at nine they held “TOC” (Tears of Customer)—sharing praise and complaints. That spoke to how much they cared about users. I applied.

In hindsight the role I applied for was absurd: the data team. Three short years of résumé had nothing to do with data. I applied anyway with a “they’ll skim and call me if I’m useful” attitude.

I got a first-round invite. The data team asked about my work and gave a short coding test. It went fine; I moved on. Round two included the CTO and two others—I felt they genuinely looked out for candidates. They were clearly uneasy about my lack of data background, then offered me a web role on the bookstore team instead. They explained what the team did and which tech they used. I accepted and got the offer a few days later.

I told my employer I was leaving. Leaving the first company where I had started as a junior felt awkward. I documented my work and handed it off to my successor along with the source—I wanted a clean exit and put real effort into the handover. After resigning I took about a week off, then started at the new place.