
How I Got Into Software Development

Published: at 08:15 AM (16 min read)

The first challenge many of us face when looking at how to get into the world of software development is where to start. For some of us that’s a conscious choice made early in our lives; for others it’s something that seems to just happen. This post is about my personal journey into software development, back in 2014.


The Beginning

Strap in, this is going to be a serious bit of monologue. Apologies in advance.

It might surprise some readers (and not others) that when I was in college I failed to get a passing grade in my ‘Computing’ class; I did just fine in my other A-Levels, namely History and English. Partly, I just wasn’t as astute at studying nearly two decades ago as I am now, but I also really struggled with the theory side of the class. I found it boring in comparison to the practical element, which made it hard to stay engaged. At the time it didn’t occur to me that I might have a different learning style for programming than for other subjects, but I certainly learned that lesson later in life.

University

When I finished college I had my sights set on university, and at the time an undergraduate degree in English and History looked like a solid investment. Spoiler alert: as much as I love the humanities, if I had to choose all over again I’d have picked a degree with a more definitive and likely route onto the career ladder, one that was quicker, easier, and offered a wider variety of possible jobs.

Throughout university, during both my undergraduate degree and my master’s (in History, heavily research-focused), I worked part-time at the now-defunct PC World. I worked on the tech desk; having always had a passion for computers and robotics, I honed at least one of those skills there. One thing working on a tech desk will teach you is patience. That, and the ability to remain calm and collected under an unreasonable amount of abuse from the everyday consumer of electronics.

When I finished my master’s I wasn’t entirely sure what I wanted to do. I thought about teaching and went so far as to start the application process for a Postgraduate Certificate in Education (PGCE), volunteering part-time in a school to get some experience of what it was like to be in a classroom. Unfortunately I was a bit late to the application process and missed out on a placement school on the first try. This led me to look for a new full-time job in the interim, and I stumbled across a ‘System Administrator Assistant’ role at a school. I successfully navigated the interview to land the job; personally, I think it was because I questioned the school’s abundance of printers, and the Network Manager had a deep hatred of them and wanted to replace them with copiers (something I found out later on). I’ll come onto interview techniques in a future blog post; they’re a bit different for software developer roles than for system admin ones.

Early Work Life

Arguably, working as a system administrator was one of the best jobs I ever had, not just because at the time I thought I was getting more experience in a school environment, but because I worked with some really great people. Fast forward a few months and the lustre of one day becoming a teacher faded very quickly. I realised teachers just had no time to learn new things or do anything other than deal with wave after wave of marking and stressful situation after stressful situation (seriously, teachers need to be recognised for the stellar work they do more than they are).

About half a year in, I realised that because of the way IT roles in schools are structured I’d never really move up pay bands unless one of my colleagues left (there was only one and we got on quite well, so that seemed a bit too cut-throat to wish for at the time - hey Andrew!). So, two things happened almost simultaneously. First, my Network Manager noticed that I was creating bootable USBs for hardware diagnostics using Linux and Bash scripting, so he suggested I take a look at C#. The school’s intranet (very much his love child at the time) was written in C#, and he thought I’d find it interesting (he was right).

At around the same time I started looking for other IT admin roles in the hope of landing better pay. I attended a couple of interviews, and it was probably the first time in my life I was told I was unsuccessful due to being over-qualified. In reality, when a prospective employer says this it usually means they’re concerned you might get bored and move on; they don’t relish the thought of having to recruit again, so it can often put them off making an offer. That doesn’t make it any less disheartening at the time. Fortunately for me, sometimes things do just happen for a reason.

Learning to Code

I spent two years in that job learning how to code in C#. Every second I wasn’t doing something related to my typical day-to-day duties at the school, I was building desktop applications. It got to the point where I started building useful tools for other teachers, and that led to a fairly hefty demand on my time for productivity tools here and there to help with things in lessons.

I can’t quite recall exactly how I chose which desktop framework to start with. I remembered WinForms from college, but when I first set up Visual Studio 2012 I remember looking at some fairly useful guides on Windows Presentation Foundation (WPF). For the sake of hindsight, I’ve included some of the key differences between the two technologies below as a brief intermission from the story.

WinForms vs WPF Framework Differences

| Feature | WinForms | WPF |
| --- | --- | --- |
| Design philosophy | Imperative programming model | Declarative programming model with XAML |
| Rendering | GDI+ | DirectX |
| Layout | Less flexible; manual adjustment required | More flexible; supports automatic layout and resizing |
| Graphics | Basic; suitable for standard applications | Advanced; supports 3D graphics and complex animations |
| Data binding | Simple data binding | Advanced data binding capabilities |
| Styling & themes | Limited; requires more effort to customise | Extensive styling and theming capabilities |
| Deployment | Simple, direct deployment possible | Supports ClickOnce, but can be more complex due to dependencies |
| Learning curve | Easier for beginners | Steeper due to broader feature set and XAML |
| Community & support | Mature, with extensive resources and third-party controls | Growing, with ample resources but fewer third-party controls than WinForms |
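To make the imperative-versus-declarative distinction concrete, here’s a minimal, purely illustrative sketch (the class and handler names are my own, not from any real project) of roughly the same button defined both ways:

```csharp
// WinForms: the UI is assembled imperatively, control by control, in code.
using System.Windows.Forms;

public class GreetingForm : Form
{
    public GreetingForm()
    {
        var button = new Button { Text = "Say Hello", Left = 20, Top = 20, Width = 120 };
        button.Click += (sender, e) => MessageBox.Show("Hello!");
        Controls.Add(button);
    }
}

// WPF: the equivalent UI is declared in XAML and rendered via DirectX.
// The markup would look something like this:
//
//   <Window x:Class="Demo.GreetingWindow"
//           xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
//           xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
//       <Button Content="Say Hello" Width="120" Margin="20" Click="OnSayHello" />
//   </Window>
```

In WinForms you describe *how* to build the window step by step; in WPF you describe *what* the window contains and let the framework handle layout and rendering.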

My Earliest Three Defining Moments

I can recall three defining moments throughout my learning experience that I think are of note. It’s strange looking back and acknowledging that there are some really clearly defined moments from when you started learning how to code that stay with you no matter how far you get in your career.

1) The Eureka Moment

The first was that ‘Eureka’ moment; I’ve come to understand that we as developers chase this feeling every day. For reference, the “Eureka Moment” is when something you didn’t understand, or something in your code that doesn’t work, suddenly clicks: the feeling of sweet dopamine flooding your brain. Often it’s a problem you’ve had for a day or so, and one morning on the drive into work you figure it out. It’s an amazing feeling and one of the things I love most about software development.

Psychological Perspective

The “Eureka Moment” is not just folklore; it’s a well-documented psychological phenomenon often referred to as an insight moment. Research suggests that these moments are preceded by a phase of preparation and incubation, where the problem is analysed, and the brain works on it subconsciously. This absolutely aligns with my experience of solving a coding problem while driving to work, highlighting how solutions can come to us when we least expect them, often when doing something unrelated to the problem at hand.

The Role of Subconscious Processing

There’s fascinating research indicating that our subconscious plays a significant role in these sudden insights. While consciously we might take a break from the problem, our brain is still working on it in the background. This is why solutions sometimes pop into our minds at the most random times. A study published in the journal “PLOS ONE” found that taking breaks and allowing the mind to wander can significantly enhance creative problem-solving, suggesting that stepping away from your desk and coding problem could actually be a strategic move to encourage a “Eureka Moment.”

Famous Eureka Moments in Tech

It might be inspiring to note that many breakthroughs in technology started with a “Eureka Moment.” For instance, Larry Page, co-founder of Google, came up with the idea behind Google’s ranking algorithm in a dream. Similarly, Jack Dorsey got the idea for Twitter (now X) when he realised there was no way to know what his friends were doing, which led to a platform for posting short status updates. Innovation in software is often driven by people in our industry dreaming up ideas outside the context of their development work; some of the most profoundly unique products we use daily were created by this kind of happenstance.

2) Confused by Dependency Injection

The second defining moment was truly failing to understand the concept of dependency injection. There were plenty of online guides around the topic back when I started, and a lot of them referenced Ninject as a library for learning the fundamental concepts of dependency injection. But as someone learning these more complicated concepts without the support of an educational institution or a mentor, it was a challenge to grapple with some of the more intricate elements of the SOLID principles. I had to acknowledge at the time that this was OK. Fundamentally, as developers we design and develop software within the remit of our current knowledge and experience; solutions we create today we would likely develop differently in the future. Acknowledging the continuous learning required to be successful is a critical soft skill for anyone wanting a rewarding and enterprising career in the field.

SOLID Principles Explained

SOLID principles provide a robust framework for designing and developing software in an object-oriented environment. By adhering to these principles, developers can create systems that are easier to debug, extend, and refactor, ultimately leading to more robust and scalable applications.

S: Single Responsibility Principle (SRP)
O: Open/Closed Principle (OCP)
L: Liskov Substitution Principle (LSP)
I: Interface Segregation Principle (ISP)
D: Dependency Inversion Principle (DIP)
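As a minimal sketch of the “D” in action (the types here are illustrative, not from any real project): a high-level `Notifier` depends on an `IMessageSender` abstraction rather than a concrete sender, and the concrete implementation is supplied through the constructor. This is the constructor injection that libraries like Ninject automate.

```csharp
using System;

// The abstraction the high-level code depends on.
public interface IMessageSender
{
    string Send(string message);
}

// Two interchangeable low-level implementations.
public class EmailSender : IMessageSender
{
    public string Send(string message) => $"EMAIL: {message}";
}

public class SmsSender : IMessageSender
{
    public string Send(string message) => $"SMS: {message}";
}

// Notifier never names a concrete sender; the dependency is
// injected via the constructor, so it can be swapped freely.
public class Notifier
{
    private readonly IMessageSender _sender;

    public Notifier(IMessageSender sender) => _sender = sender;

    public string Notify(string message) => _sender.Send(message);
}
```

Swapping `EmailSender` for `SmsSender` requires no change to `Notifier` at all, which is exactly the loose coupling that makes code easier to test and extend.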

What is Ninject?

Ninject is a lightweight, open-source dependency injection framework for .NET applications. It aims to minimize the boilerplate code required to implement dependency injection, making your code cleaner, more maintainable, and easier to understand. Ninject operates on the principle of inversion of control (IoC), allowing your application components to be more loosely coupled and hence more modular and testable.

Learning Curve

For a newcomer in 2014, Ninject struck a balance between simplicity and functionality. Its API was designed to be approachable, making it an excellent choice for those new to dependency injection. The documentation and community resources available at the time, including tutorials and examples, provided a solid foundation for learning DI concepts through Ninject.
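In case it helps to see what that API looks like, here’s a small sketch based on Ninject’s canonical samurai-and-sword example (it assumes the Ninject package is installed via NuGet):

```csharp
using Ninject;

public interface IWeapon
{
    string Hit(string target);
}

public class Sword : IWeapon
{
    public string Hit(string target) => $"Chopped {target} clean in half";
}

public class Samurai
{
    private readonly IWeapon _weapon;

    // Ninject inspects this constructor and injects whichever
    // IWeapon implementation has been bound in the kernel.
    public Samurai(IWeapon weapon) => _weapon = weapon;

    public string Attack(string target) => _weapon.Hit(target);
}

public static class Program
{
    public static void Main()
    {
        // The kernel holds the bindings: "when someone asks for
        // an IWeapon, give them a Sword".
        var kernel = new StandardKernel();
        kernel.Bind<IWeapon>().To<Sword>();

        var samurai = kernel.Get<Samurai>();
        System.Console.WriteLine(samurai.Attack("the evildoers"));
    }
}
```

The point is that `Program` never constructs a `Sword` directly; changing the binding in one place changes what every `Samurai` in the application receives.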

3) Don’t build the brick when you can architect the castle

My final defining moment relates somewhat to stubbornness. When I started out I had a really stubborn outlook on the use of NuGet packages and third-party libraries. For reference, NuGet packages are bundles of reusable code that developers can integrate into their .NET projects to extend functionality, managed through the NuGet Package Manager for easy installation and updating.

I wanted to write everything myself. In part this was due to a lack of understanding of the NuGet ecosystem and what licensing meant in practice. The approach was a double-edged sword: it forced me to really think about the problem I was trying to overcome and how to develop meaningful code to meet my requirements, but it also meant I’d often run into blockers that were essentially impossible to get past with my level of expertise at the time.

Embracing the Giants

The turning point for me came when I stumbled upon a quote attributed to Isaac Newton: “If I have seen further it is by standing on the shoulders of giants.” (I promise not all of my prose and progress is going to rely on inspirational quotes). This stuck with me to a certain extent. The tech world is replete with stories of innovation spurred by collaboration and shared knowledge. Linus Torvalds, for example, didn’t set out to write every line of Linux from scratch; instead, he invited collaboration, leading to one of the most successful open-source projects in history. So if these visionaries and pioneers could acknowledge the value of not taking it all on alone, then surely so should I.

Lessons Learned

In hindsight, my initial reluctance to use NuGet packages and third-party libraries was a valuable learning phase. It taught me the importance of understanding the tools at my disposal, the wisdom in leveraging community knowledge, and the balance between innovation and integration. This underscores a fundamental truth in software development: sometimes, the most innovative thing we can do is recognise and utilise the innovations of others.

This shift in perspective didn’t just make me a more efficient developer; it transformed the way I view the entire development ecosystem. I learned that building software isn’t just about individual brilliance or starting from zero; it’s about contributing to and benefiting from a collective intelligence, a shared digital heritage that empowers us all to aim higher and build not just the brick, but the castle.

Becoming a Commercial Software Developer

Looking back, one of the habits that really served me well was digging into the Universal Windows Platform (UWP) as a desktop framework and converting all of the applications I’d written in WPF to UWP. That repetitive, iterative experience laid the foundations of some genuinely practical knowledge when it came to modernising desktop applications. I’d also started dabbling with the school’s intranet website, learning ASP.NET and SQL to help the Network Manager take on some of the tasks handed down by the teaching staff and the executive team. I must have been doing a fairly good job, as in year two I was offered a promotion to “ICT & Web Technician”, which bumped up my salary band and allowed me to take on more official development responsibilities.

At that point I knew there was still a whole plethora of development theory I needed to round off, but I was at a comfortable stage where I knew I could go out to market and start searching for a software developer role. I ended up doing two interviews in fairly quick succession, losing out in one to a more experienced web developer (from what I was told) but succeeding in the second. It was a deeply conflicting moment: I had really learned to love my current role and the people I worked with, but I knew that if I wanted a career I was passionate about, one that could take me to places I never thought I’d reach, I’d need to take the jump. So, in early August 2016, I became a Software Developer.