Tag: signalr

  • Craft, Curiosity, and Code: My Approach to Software Engineering

    How I Approach Software Engineering

    As a software engineer, my role goes far beyond writing code that compiles. Real engineering is about solving meaningful problems, communicating clearly, and understanding the broader impact of technical decisions. That includes how code behaves, how users experience it, and how it fits into the product’s long-term goals.

    This post is not a résumé or a list of frameworks. It’s a reflection of the habits, principles, and mindset that guide how I work—regardless of the tech stack.

    Strong Foundations That Go Beyond Any Framework

    Some of the most valuable skills I’ve learned aren’t tied to one language or library. Clean code, separation of concerns, testable design, and clear thinking apply whether you’re building in Angular, React, or a backend service. When you understand the patterns and ideas behind the tools, it becomes easier to adapt, improve, and contribute across environments.

    Frontend Expertise (Angular and Beyond)

    I’ve worked extensively with Angular, including modern techniques like Signals and Standalone Components. My focus is on building modular, maintainable applications with clear structure and strong reusability.

    I’ve also designed systems with complex asynchronous flows using RxJS, Angular signals, and service-based facades. On projects with real-time requirements, I’ve integrated SignalR to manage multiple live data streams. These implementations typically involve synchronising authentication, API states, and socket connections to ensure components only render when the right data is available.
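    To illustrate the idea (the names and shape here are hypothetical, not lifted from a real project), that “only render when the right data is available” rule ultimately reduces to a readiness check across the synchronised sources:

    ```typescript
    // Hypothetical sketch: in practice these flags come from auth, API, and
    // SignalR observables combined with combineLatest; reduced here to a
    // plain predicate to show the gating logic.
    interface SessionState {
      authenticated: boolean;   // auth token resolved
      apiReady: boolean;        // initial API data loaded
      socketConnected: boolean; // SignalR connection established
    }

    function canRenderLiveView(state: SessionState): boolean {
      return state.authenticated && state.apiReady && state.socketConnected;
    }
    ```

    In the real apps this check lives behind a facade, so components subscribe to a single “ready” stream rather than juggling three sources themselves.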

    Although Angular is my primary frontend tool, my deeper value lies in understanding where complexity lives and how to manage it. I focus on making that complexity predictable and easy for teams to work with.

    Testing and Code Quality

    I treat testing as a core part of development, not a separate phase. Whether using Jasmine, Jest, or writing code with testability in mind, I aim to bake quality into every layer.

    I structure components to be as lean and “dumb” as possible, using input and output bindings to keep them focused on presentation. This design makes components easier to test, easier to reuse, and easier to showcase in tools like Storybook.

    I consistently include data-testid attributes as part of my markup, not as an afterthought. These allow developers to write robust unit tests and enable QA teams to create automated scripts without chasing DOM changes. For me, writing test-friendly code means thinking about the entire lifecycle of the feature—from implementation through to testing and maintenance.
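    As a sketch of what this looks like in practice (`byTestId` is a hypothetical helper, not a library API), centralising the selector keeps unit specs and QA scripts querying elements the same way:

    ```typescript
    // Hypothetical helper: one place builds the selector, so unit tests and
    // QA automation look elements up consistently and survive DOM refactors.
    function byTestId(id: string): string {
      return `[data-testid="${id}"]`;
    }

    // In a spec: fixture.nativeElement.querySelector(byTestId('send-button'));
    ```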

    Clean Code and Clear Thinking

    I prioritise readability over cleverness. I write small, purposeful functions, use clear naming, and separate concerns to keep complexity under control. Where appropriate, I introduce wrappers or facades early to reduce future refactor pain and keep teams focused on business logic, not boilerplate.

    The goal isn’t to write perfect code. It’s to write code that’s easy to understand for my future self, my teammates, and the business that depends on it.

    Practical, Delivery-Focused Approach

    I have strong experience delivering MVPs, scoping features, and shipping under real-world constraints. That includes:

    Collaborating with product teams to define realistic outcomes

    Delivering in small, testable increments

    Communicating technical trade-offs without jargon

    Using CI/CD pipelines, code reviews, and static analysis tools as daily habits

    I don’t just implement tickets. I solve problems with attention to quality, context, and end-user value.

    Curiosity That Drives Consistency

    Books That Shape My Thinking

    I read regularly across topics like psychology, marketing, and personal development. Books like Thinking, Fast and Slow, Atomic Habits, The Psychology of Money, and The Mom Test influence how I think about user experience, product decisions, and clear communication.

    Staying Current with the Tech Landscape

    I follow engineering blogs, changelogs, and newsletters to stay up to date without chasing trends for their own sake. I stay aware of what’s evolving—framework updates, architectural shifts, tooling improvements—and choose what to adopt with intention.

    Using AI with Intention

    AI is part of how I work, but never a replacement for real engineering judgment. I use tools like ChatGPT and x.ai to explore ideas, compare strategies, and generate variations, especially when brainstorming or drafting. I take time to test outputs, question assumptions, and validate anything that feels uncertain.

    I also design prompts to avoid echo chambers and reduce bias. For topics where AI has limitations, I follow up with practical research. AI supports my thinking—it doesn’t make decisions for me.

    What I’m Not

    Knowing what you don’t do is just as valuable as knowing what you do.

    • I’m not a trend chaser. I adopt tools when they solve problems, not because they’re new.
    • I’m not a “rockstar” developer. I favour collaboration, clarity, and consistency over complexity or bravado.
    • I’m not tied to Angular. It’s where I’ve built deep experience, but my core practices apply across frameworks.
    • I don’t just complete tasks. I think about what happens next—how it’s tested, maintained, and evolved over time.

    Conclusion: Building With Intention

    Whether I’m writing code, reviewing work, or collaborating with product teams, I bring a thoughtful, disciplined approach. I aim to write software that is not only functional, but dependable, understandable, and ready to scale.

    I’m always learning and always looking for ways to improve. If you’re building something and this approach resonates with you, feel free to reach out.

  • MVP: What’s In, What’s Out — And Why I Still Use Facades

    Intro – Inspired by a PR Comment

    This post was triggered by a PR comment: ‘Isn’t this wrapper overkill for MVP?’ I’ve been there before — and I’ve learned that a little early structure saves a lot of late pain.

    Why I Stick to Facades and Wrappers From the Start

    Even when moving quickly, I find it worth using:

    • Per-feature facades (e.g. ChatFacade, EmailFacade)
    • Per-hub gateway services (e.g. ChatHubGateway, EmailHubGateway)
    • DTO mappers

    They:

    • Add barely any overhead at the start.
    • Keep backend transport concerns (SignalR/HTTP) isolated.
    • Make future changes predictable — especially when multiple hubs are involved.
    • Save me from pulling tangled logic out of components later.

    Even if the app doesn’t go far, it’s a minimal investment for peace of mind.
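    As a rough sketch (the names are illustrative), a per-feature facade is just a thin class that owns the transport details so components only express intent:

    ```typescript
    // Minimal transport abstraction: SignalR/HTTP specifics hide behind this
    // interface instead of leaking into components.
    interface ChatTransport {
      send(method: string, payload: unknown): void;
    }

    // Per-feature facade: components call sendMessage() and never touch the hub.
    class ChatFacade {
      constructor(private readonly transport: ChatTransport) {}

      sendMessage(text: string): void {
        this.transport.send('SendMessage', { text });
      }
    }
    ```

    Swapping SignalR for HTTP polling later then means changing the transport, not every component.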

    What I Include Early (Post-MVP Stability)

    1. Per-Hub Gateway Services

    Each hub (chat, email, agent) gets its own wrapper for connection and event handling.

    import { Injectable } from '@angular/core';
    import { Subject } from 'rxjs';
    import { HubConnectionBuilder } from '@microsoft/signalr';

    @Injectable({ providedIn: 'root' })
    export class ChatHubGateway {
      #conn = new HubConnectionBuilder().withUrl('/chatHub').build();
      #msg$ = new Subject<ChatMessage>();
      message$ = this.#msg$.asObservable();

      start() {
        this.#conn.on('ReceiveMessage', m => this.#msg$.next(m));
        return this.#conn.start();
      }
    }

    2. Typed DTO Mapping

    I always keep backend shapes away from UI models.

    interface ChatMessageDto { from: string; content: string; ts: string; }
    interface ChatMessage { sender: string; text: string; timestamp: Date; }

    toChatMessage(dto: ChatMessageDto): ChatMessage {
      return { sender: dto.from, text: dto.content, timestamp: new Date(dto.ts) };
    }

    3. Connection Status Signals

    @Injectable({ providedIn: 'root' })
    export class EmailHubService extends SignalRHubService {
      readonly status = signal<'connected' | 'disconnected' | 'reconnecting'>('disconnected');
    
      override startConnection() {
        this.hubConnection.onreconnecting(() => this.status.set('reconnecting'));
        this.hubConnection.onreconnected(() => this.status.set('connected'));
        this.hubConnection.onclose(() => this.status.set('disconnected'));
        return this.hubConnection.start();
      }
    }

    Quick Note

    My current understanding is that signals should only be used when data is required in the template, so the “status” above should only be used for informing the end user. Ideally I would separate observable and signal data into different services: I’ve been using a naming convention like “emailHubDataService” for all observable things and “emailHubService” for anything signal-based.

    In a component:

    @Component({ ... })
    export class EmailPanelComponent {
      readonly connectionStatus = inject(EmailHubService).status;
    }

    Now you can show:

    html
    @if (isReconnecting()) {
      <div>
        Attempting to reconnect...
      </div>
    }
    
    ts
    const status = signal<'connected' | 'disconnected' | 'reconnecting'>('disconnected');
    
    const isReconnecting = computed(() => status() === 'reconnecting');

    Tracking Incoming vs Outgoing Traffic

    It helps to distinguish what’s being sent to the server vs what’s coming from the server. I’ve found it useful to separate these both semantically and in logging.

    This is an area where I recently made some mistakes and was told I had over-engineered (hence the inspiration for this post). I added a wrapper around the emailHubService and then added two new services to distinguish between incoming and outgoing calls. I understand now that it was over-engineered; the clarity I wanted was already available by properly understanding the original hub services.

    Semantic distinction:

    An example of how what I wanted to achieve can be done without the separate services.

    // OUTGOING (client → server)
    sendDisconnect(...)
    sendAcceptEmail(...)
    sendRejectEmail(...)
    
    // INCOMING (server → client)
    onReceiveEmailMessage(...)
    registerHandlers() // binds handlers like 'ReceiveEmailMessage'

    Logging:

    Wrap both directions to log clearly:

    // Outgoing
    send<T>(method: string, payload: T) {
      this.logger.debug(`[OUTGOING] ${method}`, payload);
      this.hubConnection?.send(method, payload);
    }
    
    // Incoming
    #handleIncoming<T>(label: string, payload: T) {
      this.logger.debug(`[INCOMING] ${label}`, payload);
    }

    This makes tracing issues between frontend and backend a lot easier, especially when events stop flowing or are being sent with unexpected payloads.

    Side Note:

    Throughout these snippets I use the native “#” syntax in place of the “private” access modifier.
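    For context on the difference (a small illustration, not code from any real project): “#” is native ECMAScript private-field syntax, enforced at runtime, whereas TypeScript’s “private” keyword is only a compile-time check that is erased from the emitted JavaScript:

    ```typescript
    class Counter {
      #count = 0;               // ECMAScript private field: truly inaccessible from outside
      private label = 'count';  // TypeScript private: compile-time only, erased at runtime

      increment(): number {
        return ++this.#count;
      }

      describe(): string {
        return `${this.label}=${this.#count}`;
      }
    }
    ```

    At runtime, `(counter as any).label` still works, while accessing `#count` from outside the class is a syntax error.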

    What I Leave Until Later

    I think these should wait until there’s a clear need:

    • Global state libraries (NgRx, Akita)
    • Factories for creating hubs
    • Generic event buses
    • Central hub connection manager (unless coordinating 3+ hubs)

    Observables First, Signals Later (Reminder to Self)

    A quick personal rule:

    Keep SignalR data as observables until it reaches the DOM, then convert to signals. If the template has a dedicated service, I feel it is fine to do the conversion there too, as long as the signal is not referenced around the rest of the codebase.

    Why?

    • Observables are better for streaming, retries, and cancellations.
    • Signals are great for UI reactivity.
    • This keeps the core data flow reactive without tying it to the DOM too early.

    Typical usage:

    @Component({ ... })
    export class ChatComponent {
      readonly #chatHub = inject(ChatHubGateway);
      readonly messageSignal = toSignal(this.#chatHub.message$);
    }

    Final Thought

    This isn’t about gold-plating MVPs — it’s about laying groundwork that doesn’t cost much but saves me big later.

    Even if nothing ships, I’d rather have clean wrappers and small abstractions than spend hours later undoing a spaghetti mess. If it all falls over? At least I didn’t build the mess twice.