
Empty Chunks in Angular 19: A Technical Deep Dive

· 5 min read

Recently in our main application built with Angular 19, we encountered a perplexing issue: after deploying to a WebLogic server, certain functionalities on the UI broke completely without any error messages. Below is the technical deep dive into how we diagnosed and resolved the issue, which revolved around unexpected empty chunk files generated during the Angular build process.

Introduction — When the Error First Appeared

Some of the functionalities on the UI broke completely. There were no console errors or warnings. The Angular production build completed without any errors, yet the deployed frontend failed immediately.

This was specific to the WebLogic server. The application worked fine on other servers such as Tomcat and WebSphere.

This led us into a detailed debugging journey across build pipelines, server configurations, and module graphs.


Investigating WebLogic Content-Type Issues

By comparing the working and non-working environments, we suspected that WebLogic was not setting the Content-Type header for .js files.

Chrome DevTools has an amazing feature for overriding response headers. We used it to set the Content-Type header to application/javascript on the .js responses coming from the WebLogic server. After reloading the page, everything worked as expected.

So we concluded that the issue was WebLogic not setting the correct Content-Type for .js files, and that configuring explicit MIME-type headers for .js files on the server should fix it.

However, it did not solve the issue.


The 0-Byte Discovery — Manually Eliminating Suspects

Further debugging and close observation revealed a 0-byte .js file being downloaded. Once we saw the chunk size was 0 bytes, the suspicion shifted to unused or incorrectly split modules.

We began manual testing:

Using Chrome DevTools' "Override content" feature, we edited the Angular-generated main.js file and removed the import of that 0-byte chunk.js file. With this change in place, everything worked as expected.


Finding the Root Cause — Using stats.json

To find out why this 0 byte chunk was created, we generated build statistics:

ng build --configuration production --stats-json

Inside stats.json, we found this entry:

"chunk-F5X5MWHG.js": {
"imports": [],
"exports": [],
"inputs": {
"node_modules/@angular/material/fesm2022/tooltip.mjs": {
"bytesInOutput": 0
}
},
"bytes": 0
}

Interpretation

  • "inputs" indicates the chunk originated from tooltip.mjs
  • "bytesInOutput": 0 means tree-shaking removed all code
  • "bytes": 0 means the emitted chunk is literally empty

This confirmed the root cause:
Unused Angular Material imports can generate empty chunks after tree-shaking.

From the stats.json file it was clear that an Angular Material module, specifically @angular/material/tooltip, was responsible.

We imported MatTooltipModule but never actually used any tooltip in templates.

Angular’s tree-shaking eliminated the module’s code entirely, yet Webpack still generated a placeholder chunk for it — resulting in a 0-byte JS file, which WebLogic refused to handle cleanly.

Confirming our analysis

  1. Remove a suspicious import
  2. Rebuild the project
  3. Check whether the empty chunk disappeared

After removing the unused module, the Angular production build no longer generated this empty chunk, and everything worked as expected on the WebLogic server.


Why This Happens — The Webpack Explanation

Angular CLI uses Webpack under the hood.

Webpack performs:

  • Tree-shaking (eliminating unused code)
  • Module graph evaluation
  • Chunk splitting

According to Webpack documentation:

  • Unused imports are removed entirely
  • But chunk boundaries are determined before tree-shaking
  • Therefore, if an imported module becomes empty, Webpack may still emit an empty chunk file


This is expected behavior in highly optimized builds — but problematic when servers (like WebLogic) cannot handle empty files.


Solution — Eliminating Empty Chunks Safely

There are two main approaches.


1. Remove Unused Imports (Best Fix)

If you imported:

import { MatTooltipModule } from '@angular/material/tooltip';

…but are not actually using a tooltip anywhere in your templates:

👉 Remove this import.

This prevents Webpack from generating the chunk entirely.


2. Enable or Enforce removeEmptyChunks

Webpack has a built-in optimization:

"optimization": {
"removeEmptyChunks": true
}

This ensures empty chunks are dropped before they are written to the output. Angular CLI enables some of these optimizations internally, but depending on the build graph, empty chunks may still slip through, so you may need to enforce the option yourself, as sketched below.
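One hedged way to enforce it is a partial Webpack configuration merged into the CLI build via a custom-builder package such as @angular-builders/custom-webpack. The file name and wiring below are illustrative, not part of a default Angular workspace.

// webpack.partial.ts (hypothetical file referenced from angular.json by a custom builder)
import type { Configuration } from 'webpack';

const partial: Configuration = {
  optimization: {
    // Drop chunks that end up empty after tree-shaking, before they are written to disk
    removeEmptyChunks: true,
  },
};

export default partial;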


Conclusion

The empty chunk issue turned out to be a perfect example of how:

  • Tree-shaking
  • Lazy loading
  • Module splitting
  • Server behavior
  • And unused imports

all intersect in modern frontend pipelines.

Key takeaways:

  • Unused Angular Material imports (like MatTooltipModule) can produce empty chunks
  • stats.json is the fastest way to trace chunk origins
  • Webpack's removeEmptyChunks optimization prevents empty files from being emitted
  • Servers like WebLogic may fail when serving zero-byte JS files

Understanding the underlying build system is essential — because sometimes the issue isn’t in your code or your server, but in the subtle behavior of the bundler connecting the two.


Angular Design Patterns and Best Practices

· 5 min read


Why choose Angular ?

  1. Batteries Included
  • The Angular development team has already made several decisions for you.
  2. Google Support
  • Angular is backed by Google, which means it has a large community and a lot of resources available.
  3. Community
  • Angular has a large community of developers who are willing to help you with any questions you may have.
  4. Tooling
  • E.g. Angular CLI, testing
  5. TypeScript
  • TypeScript is a superset of the JavaScript language that adds type checking and other features to the language, ensuring a better developer experience and security for web development.
  6. RxJS
  • A library for reactive programming using Observables, which makes it easier to work with asynchronous data streams.
  • RxJS also provides mechanisms for state management, which is a common requirement in modern web applications.
  7. Webpack
  • Webpack is a very powerful and versatile bundler, and it is thanks to it that the framework manages to make some interesting optimizations such as tree shaking and lazy loading of bundles.

Organising your application ?

The basis for organizing your application is the Angular module. An Angular module is a TypeScript class marked with the @NgModule decorator, whose metadata includes the following (a minimal sketch appears after this list):

  • declarations: components, directives, and pipes that belong to this module
  • providers: In this attribute, we can register the classes we want to inject using Angular's dependency injector system, normally used for services
  • imports: other modules that this module depends on. We should not import components or services.
  • exports: components, directives, and pipes that can be used in the templates of components in other modules
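As a minimal sketch, here is what such a module can look like; UserListComponent and UserService are hypothetical names used only for illustration.

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { UserListComponent } from './user-list.component';
import { UserService } from './user.service';

@NgModule({
  // Components, directives, and pipes that belong to this module
  declarations: [UserListComponent],
  // Other modules this module depends on
  imports: [CommonModule],
  // Classes registered with Angular's dependency injector, normally services
  providers: [UserService],
  // Declarations made available to the templates of importing modules
  exports: [UserListComponent],
})
export class UserModule {}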

Module Types

  • Business Domain Module: For example, a module for user management or product management.
  • Component Module: The purpose of this module is to group components, directives, and pipes that will be reused by business domain components and even other components.

Avoiding anti-patterns - Single Module App

The problems with Single Module applications

  • Disorganized folder structure
  • Bundle size and build time optimizations are not effective
  • Component maintainability and update issues

Shared Module Pattern

  • E.g. HttpModule
  • SharedModule is a module that contains components, directives, and pipes that will be used in multiple modules, as sketched below.
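A minimal sketch of the pattern; the directive and pipe names are hypothetical.

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { HighlightDirective } from './highlight.directive';
import { TruncatePipe } from './truncate.pipe';

@NgModule({
  declarations: [HighlightDirective, TruncatePipe],
  imports: [CommonModule],
  // Re-export everything so feature modules only need to import SharedModule
  exports: [CommonModule, HighlightDirective, TruncatePipe],
})
export class SharedModule {}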

Improving the size of your app - Lazy loading

  • Lazy loading is a technique that allows you to load modules only when they are needed, rather than loading them all at once when the application starts.
  • This can significantly reduce the initial load time of your application and improve performance.
  • To implement lazy loading, you can use the loadChildren property in the route configuration of your application, as sketched below.
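A minimal sketch of a lazy-loaded route; the products module path and class name are hypothetical.

import { Routes } from '@angular/router';

export const routes: Routes = [
  {
    path: 'products',
    // The bundle for ProductsModule is downloaded only when the user navigates to /products
    loadChildren: () =>
      import('./products/products.module').then((m) => m.ProductsModule),
  },
];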

Communication between components - inputs and outputs.

  • Inputs and outputs are used to pass data between components.
  • Inputs are used to pass data from a parent component to a child component, while outputs are used to emit events (and data) from a child component to a parent component, as sketched below.
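A minimal sketch of this parent/child contract, using a hypothetical task-item component.

import { Component, EventEmitter, Input, Output } from '@angular/core';

@Component({
  selector: 'app-task-item',
  template: `
    <span>{{ task }}</span>
    <button (click)="completed.emit(task)">Done</button>
  `,
})
export class TaskItemComponent {
  // Data flows in from the parent: <app-task-item [task]="'Write blog post'">
  @Input() task = '';
  // Events flow out to the parent: <app-task-item (completed)="onCompleted($event)">
  @Output() completed = new EventEmitter<string>();
}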

Advantages of TrackBy attribute

  • trackBy is a function passed to Angular's *ngFor directive that allows you to optimize the performance of your application by reducing the number of DOM manipulations (see the sketch after this list).
  • It is used to track the identity of items in a list.
  • By using trackBy, Angular can identify which items have changed, been added, or removed from the list, and only update those items in the DOM.
  • Enables animations when removing and adding items from the collection.
  • Retains any DOM-specific UI state, such as focus and text selection, when the collection changes.
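A minimal sketch, assuming a hypothetical User interface with a stable id field.

import { Component } from '@angular/core';

interface User {
  id: number;
  name: string;
}

@Component({
  selector: 'app-user-list',
  template: `
    <ul>
      <li *ngFor="let user of users; trackBy: trackById">{{ user.name }}</li>
    </ul>
  `,
})
export class UserListComponent {
  users: User[] = [];

  // Angular reuses the existing DOM node as long as the returned id stays the same
  trackById(index: number, user: User): number {
    return user.id;
  }
}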

Communication between components using services

  • A characteristic of Angular services is that, by default, every service instantiated by the dependency injection mechanism shares the same reference; a new object is not created but reused.
  • The dependency injection mechanism implements the Singleton pattern, which means that only one instance of the service is created and shared across the application, as sketched below.
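A minimal sketch of component communication through a shared singleton service; the NotificationService name is hypothetical.

import { Injectable } from '@angular/core';
import { Subject } from 'rxjs';

// providedIn: 'root' registers a single shared instance for the whole application
@Injectable({ providedIn: 'root' })
export class NotificationService {
  private messagesSubject = new Subject<string>();

  // Components subscribe to this stream to receive messages
  readonly messages$ = this.messagesSubject.asObservable();

  // Any component injecting the same instance can publish a message
  notify(message: string): void {
    this.messagesSubject.next(message);
  }
}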

Forms in Angular

  • Template Driven Forms: These are forms that are defined in the template and are more suitable for simple forms. They are easier to set up and require less code. They require the FormsModule to be imported in the module.
  • Reactive Forms: These are forms that are defined in the component class and are more suitable for complex forms. They provide more control over the form and allow for more advanced validation. They require the ReactiveFormsModule to be imported in the module, as sketched below.
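A minimal sketch of a reactive form; the login fields are illustrative, and ReactiveFormsModule is assumed to be imported in the declaring module.

import { Component } from '@angular/core';
import { FormBuilder, FormGroup, Validators } from '@angular/forms';

@Component({
  selector: 'app-login',
  template: `
    <form [formGroup]="form" (ngSubmit)="submit()">
      <input formControlName="email" />
      <input formControlName="password" type="password" />
      <button [disabled]="form.invalid">Log in</button>
    </form>
  `,
})
export class LoginComponent {
  // The form model lives in the class, giving full programmatic control and validation
  form: FormGroup;

  constructor(private fb: FormBuilder) {
    this.form = this.fb.group({
      email: ['', [Validators.required, Validators.email]],
      password: ['', Validators.required],
    });
  }

  submit(): void {
    console.log(this.form.value);
  }
}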

Injecting services

  • Services can be injected into components, directives, and other services using the @Injectable decorator.
  • In Angular, the inject() function and constructor-based dependency injection are two ways to inject services, but they differ in how and when they are used.
  1. Constructor-based dependency injection
  • Services are injected into a component, directive, or another service via the constructor.
  • The dependency is resolved and injected when the class is instantiated.
  2. inject() function
  • The inject() function is a way to inject dependencies at runtime, allowing for more flexibility and dynamic behavior.
  • It can be used in places where constructor injection is not possible, such as in a factory function or a standalone component. Both styles are sketched below.
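A minimal sketch showing both styles side by side; UserService is a hypothetical service.

import { Component, inject } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { UserService } from './user.service'; // hypothetical service

@Component({
  selector: 'app-profile',
  template: '',
})
export class ProfileComponent {
  // inject(): resolved in a field initializer; also usable in factory functions
  private http = inject(HttpClient);

  // Constructor-based injection: resolved when the class is instantiated
  constructor(private userService: UserService) {}
}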

Interceptor design pattern

  • An interceptor is a service that can intercept HTTP requests and responses, allowing you to modify them before they are sent or received. Common use cases include:
  • Attaching a token to the request with an interceptor (sketched below)
  • Changing the request route
  • Creating a loader
  • Notifying success
  • Measuring the performance of a request
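A minimal sketch of the token-attaching case; how the token is stored (here, localStorage) is an assumption made only for illustration.

import { Injectable } from '@angular/core';
import {
  HttpEvent,
  HttpHandler,
  HttpInterceptor,
  HttpRequest,
} from '@angular/common/http';
import { Observable } from 'rxjs';

@Injectable()
export class AuthInterceptor implements HttpInterceptor {
  intercept(req: HttpRequest<unknown>, next: HttpHandler): Observable<HttpEvent<unknown>> {
    // Requests are immutable, so clone the request and attach the token before it is sent
    const token = localStorage.getItem('token') ?? '';
    const authReq = req.clone({ setHeaders: { Authorization: `Bearer ${token}` } });
    return next.handle(authReq);
  }
}

The interceptor is then registered with the HTTP_INTERCEPTORS multi-provider so Angular applies it to every outgoing request.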

Reactivity with RxJS

  • One of the most difficult tasks in frontend development is dealing with the asynchronous nature of web applications.
  • Observables: We use observables for asynchronous processing that does not return a single value but a collection of values that can be distributed over time as events, as sketched below.
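A minimal sketch of a stream of values distributed over time.

import { interval } from 'rxjs';
import { map, take } from 'rxjs/operators';

// Emits a value every second, transforms it, and completes after five emissions
const ticks$ = interval(1000).pipe(
  map((i) => `tick ${i}`),
  take(5)
);

ticks$.subscribe((value) => console.log(value)); // tick 0, tick 1, ... tick 4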

Modern Automated AI Agents

· One min read

What are AI Agents?

AI agents are (semi) autonomous systems that interact with environments, make decisions, and perform tasks on behalf of users.

  • Autonomy: Can perform tasks without continuous human intervention
  • Decision-making: Use data to analyze and choose actions
  • Adaptation: Learn and improve over time with feedback

Eg. ChatGPT is an Agent on top of an LLM (like GPT-4o)

Agents vs LLMs

Agent - Performs specific tasks and makes decisions based on its environment.
LLM - Focuses on understanding and generating human-like text.

Why AI Agents might be Essential

  • Automate repetitive tasks, freeing up human resources for more complex activities
  • Handle dynamic and real-time environments like finance or customer service
  • Tailor user experience based on individual preferences

AI concepts for Tech Professionals

· 16 min read

Artificial Intelligence

AI is a vast field focused on creating intelligent systems that can perform tasks usually requiring human intelligence, such as perception, reasoning, and decision-making. It encompasses a range of techniques and approaches, including machine learning, deep learning, and generative AI.

Machine Learning (ML)

ML is a subset of AI focused on developing methods that enable machines to learn from data and enhance their performance on specific tasks. It is commonly considered the simplest form of AI.

Neural Networks

A neural network is a computational model inspired by the human brain, consisting of interconnected layers of nodes or neurons that process and transform data through learned patterns and weights. It is commonly used in machine learning to recognize complex patterns, make predictions and solve tasks by training on large datasets.

Deep Learning

Deep learning is a subset of machine learning that utilizes multi-layered neural networks to automatically learn and extract features from large datasets. These deep networks can model complex patterns and perform tasks such as image and speech recognition with high accuracy by hierarchically processing data through multiple layers.

Computer Vision

Computer vision is a field of artificial intelligence that enables computers to interpret and understand visual information from the world, such as images and videos. It involves the use of algorithms and models to analyze and make decisions based on visual data, often mimicking human visual perception and cognition.

Natural Language processing (NLP)

Natural language processing (NLP) is a branch of artificial intelligence that allows computers to understand and interact with human language. It involves tasks like translating text, analyzing sentiment, and summarizing information by processing and interpreting language data.

AI Model

An AI model is a computational algorithm trained on data to perform specific tasks, such as classification, prediction, or pattern recognition. It learns from examples in training data, adjusting its parameters to improve its accuracy, and then applies this learned knowledge to make informed decisions or predictions on new, unseen data.

ML Algorithm

An ML algorithm is a set of procedures used to analyze data and make predictions or decisions based on patterns and insights. It adjusts its approach by learning from data, improving its accuracy over time through iterative training.

AI Model Training

AI model training is the process of teaching a model to make accurate predictions or decisions by feeding it large amounts of data, adjusting its parameters through iterative learning, and optimizing its performance based on feedback and error rates.

AI inferencing

AI inferencing is the process of applying a trained AI model to new data to generate predictions or decisions based on the patterns and knowledge it has learned. It involves using the model's learned parameters to analyze the input and produce outputs in real-time or on-demand.

Model fairness

AI model fairness refers to the principle of ensuring that a model's predictions or decisions do not disproportionately disadvantage or bias any particular group or individual, promoting equitable outcomes.

Model fit

Model fit describes how well a model's predictions match the actual data it was trained on, indicating its accuracy and effectiveness in capturing the underlying patterns.

Large Language Model (LLM)

A Large language model (LLM) is an AI model designed to understand and generate human-like text based on vast amounts of data. It uses advanced algorithms to process and respond to language, enabling tasks like text generation, translation, and question-answering.

Machine Learning Workflow

  • Identifying appropriate data is one of the most important aspects of ML workflow.

Labeled Data

Labeled data in AI refers to data that has been annotated with specific tags or categories, providing a reference for training models. This annotated information helps the model learn to identify patterns and make accurate predictions based on the labeled examples.

Unlabeled Data

Unlabeled data in AI refers to data that lacks predefined tags or categories, meaning it has not been annotated with specific information. This type of data is often used in unsupervised learning, where models identify patterns and structures without predefined labels.

Tabular data

Tabular data in AI is structured information organized in rows and columns, resembling a spreadsheet or database table. Each row typically represents a single record or observation, while each column contains specific attributes or features, making it easy to analyze and process for machine learning tasks.

Time-Series data

Time-series data in AI consists of observations collected sequentially over time, often at regular intervals. This type of data is used to analyze trends, patterns, and seasonal variations, making it valuable for tasks such as forecasting and anomaly detection. This data is often produced by IoT devices.

Image data

Image data in AI refers to visual information represented as pixel matrices, capturing various features such as colors, shapes, and textures. This type of data is commonly used in computer vision tasks, including image classification, object detection, and facial recognition.

Structured text data

Structured text data in AI refers to text that is organized in a predefined format, often with specific fields and tags, making it easy to analyze and process. Examples include data from forms, databases, or CSV files, where each entry has a consistent structure that facilitates tasks like information extraction and analysis.

Unstructured text data

Unstructured text data in AI refers to free-form text that lacks a predefined structure, such as documents, social media posts, or emails. This type of data is more challenging to analyze, as it requires natural language processing techniques to extract insights, identify patterns, and derive meaning from the content.


  • Select the ML Algorithm

Linear Regression

Modeling the relationship between a dependent variable and multiple independent variables. E.g. predicting housing prices based on size, location, and number of bedrooms (see the toy sketch below).
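As a toy sketch of the idea in code, with made-up weights that are not learned from any real data:

// Hypothetical weights and bias; in practice these are learned from training data
const weights = { size: 150, bedrooms: 10000, locationScore: 25000 };
const bias = 50000;

function predictPrice(size: number, bedrooms: number, locationScore: number): number {
  // The prediction is a linear combination of the inputs
  return bias + weights.size * size + weights.bedrooms * bedrooms + weights.locationScore * locationScore;
}

console.log(predictPrice(1200, 3, 2));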

Logistic Regression

Binary classification predicting the probability of an event occurring. E.g. email spam classification.

K-Nearest Neighbors (KNN)

Classification of data points based on the classification of their nearest neighbors. E.g. product recommendation based on user preferences.

Principal Component Analysis (PCA)

Condensing data while retaining the most important features. E.g. facial recognition.


  • Train the model on the data

Supervised Learning

Supervised learning is a machine learning approach where a model is trained on labeled data, using input-output pairs to learn the relationship between them. The model makes predictions on new, unseen data by applying the patterns it has learned from the training examples.

Unsupervised learning

Unsupervised learning is a machine learning approach where a model is trained on unlabeled data, aiming to identify patterns, structures, or groupings within the data without predefined output categories. It is commonly used for tasks such as clustering, dimensionality reduction, and anomaly detection, helping to uncover hidden relationships in the data.

Reinforcement Learning

Reinforcement learning is a machine learning approach where an agent learns to make decisions by interacting with an environment, receiving feedback in the form of rewards or penalties. The agent aims to maximize cumulative rewards over time by exploring different actions and learning from the consequences of its choices.

  • Evaluate Model performance. Perform a series of tests to validate whether the model generates usable output

Batch inferencing

Batch inferencing is the process of making predictions or decisions on a large set of data at once, rather than individually processing each data point. This approach allows for efficient and scalable analysis by handling multiple inputs in a single operation. Batch inferencing is used when accuracy is more important than speed of response.

Real time inferencing

Real-time inferencing is the process of making predictions or decisions on data instantly as it is received, enabling immediate responses. This approach is crucial for applications requiring quick, dynamic interaction, such as live video analysis or online recommendation systems. Self-driving cars use real-time inferencing while in motion.


Real World Examples of AI applications

  • Computer Vision: Autonomous vehicles utilize computer vision to interpret and navigate their environment. They rely on a combination of sensors, cameras, and AI algorithms to perceive the world around them.

  • NLP / Speech recognition: Virtual assistants leverage NLP and speech recognition to understand and respond to user queries in natural language. They allow for hands-free operation of devices, providing users with a seamless interaction experience.

  • Recommendation systems: E-commerce platforms employ recommendation systems to provide personalized shopping experiences for users. These systems analyze user behavior and preferences to suggest products that are most likely to be purchased.

  • Fraud detection: Financial institutions, including banks and credit card companies, employ fraud detection systems to identify and prevent fraudulent transactions in real time. These systems use ML algorithms to analyze transaction data and flag suspicious activities.

  • Forecasting: In supply chain management, accurate demand forecasting is crucial for ensuring products are available to meet customer demand while minimizing excess inventory costs. Companies use AI to analyze historical sales data and predict future demand.


Introduction to RAG

  • Retrieval Augmented Generation - The process of augmenting LLM output by referencing a knowledge base that is outside the context of the LLM training sources

Knowledge base options

  • Traditional Database or Indexing System: Use a traditional database or an indexing system like Elasticsearch. Here, the documents are indexed based on keywords or phrases. The retrieval process involves searching these indices to identify documents that match the query terms, which can then be sent to the LLM for generating responses.

  • Vector Database: In this method, structured or unstructured data is split into chunks, then embedded into vectors using a model (often a transformer-based encoder). These vectors are stored in a vector database that supports efficient similarity search. When a prompt is submitted, this database is searched first, using a vector representing the query. It then retrieves the most relevant documents based on vector similarity and adds this data to the prompt, as sketched below.
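A minimal sketch of the vector-database flow; embed, vectorSearch, and callLlm are hypothetical helpers standing in for a real embedding model, vector store, and LLM API.

// Hypothetical helper signatures, used only for illustration
type Embed = (text: string) => Promise<number[]>;
type VectorSearch = (queryVector: number[], topK: number) => Promise<string[]>;
type CallLlm = (prompt: string) => Promise<string>;

async function answerWithRag(
  question: string,
  embed: Embed,
  vectorSearch: VectorSearch,
  callLlm: CallLlm
): Promise<string> {
  // 1. Embed the user's question into a vector
  const queryVector = await embed(question);
  // 2. Retrieve the most similar chunks from the knowledge base
  const chunks = await vectorSearch(queryVector, 3);
  // 3. Augment the prompt with the retrieved context
  const prompt = `Answer using only this context:\n${chunks.join('\n---\n')}\n\nQuestion: ${question}`;
  // 4. Generate the grounded answer
  return callLlm(prompt);
}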

RAG Benefits

  • Enhanced factuality and accuracy
  • LLM contextual relevance
  • Improved handling of specific verticals

RAG Challenges

  • Pipeline complexity
  • Latency issues
  • Dependence on the quality of the Retrieval set
  • Resource Requirements
  • Difficulty in Tuning and maintenance

How do you understand if an AI model is delivering business objectives ?

Below are key considerations

  • Alignment with Business Objectives - Ensure that the model addresses specific goals.
  • Performance metrics - Define KPIs to measure effectiveness.
  • User Feedback - Collect qualitative insights from end users.
  • Integration and Usability - Evaluate how well the model integrates into existing workflows

Generative AI

  • Transformer-based LLMs are models that can understand and generate human-like text. They are trained on text data from various sources, and learn patterns and relationships between words and phrases.

  • Tokens - Units of text that the model processes individually. A token represents a fragment of the input text, which can be a word, subword, character, or even a punctuation mark, depending on the specific tokenization method used by the model.

  • Chunking - The practice of breaking down a large text input or output into smaller, more manageable pieces for processing. Chunk size (in tokens) is an important parameter when creating a vector database.

  • Vectors - A mathematical representation of data (a word, sentence, or document) as a series of numerical values organized in a specific order. This representation captures various features or dimensions of the data, enabling the calculation of relationships or similarities, as sketched below.
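A minimal sketch of one such similarity calculation (cosine similarity) between two embedding vectors.

// Cosine similarity: 1 means the vectors point in the same direction, 0 means unrelated
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 2, 3], [2, 4, 6])); // 1, identical direction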

Foundation Model Types for Generative AI

  • A Large Language Model (LLM) is an AI model designed to understand and generate human-like text based on vast amounts of data. It uses advanced algorithms to process and respond to language, enabling tasks like text generation, translation and question-answering

  • Diffusion Models start with noise or random data, and gradually add information until a recognizable pattern is obtained. This is often applied to image generation but can also be used for text or audio generation.

  • Multimodal models are foundation models which have been trained on multiple media types. These media types can include text, audio, video, and images. The models can both interpret and generate these media types.

  • Generative Adversarial Networks (GANs) - This model consists of two neural networks which compete with each other. One generates content, and the other attempts to differentiate that generated content from real data. The competition continues until the generated content and real data are indistinguishable from each other.

Generative AI Advantages

  • Adaptability - Generative AI excels in adapting to diverse tasks and problem domains, making it useful across a wide range of industries. It can seamlessly switch between language, visual, and data-centric applications without needing extensive reconfiguration. This flexibility helps organizations leverage AI to tackle varied challenges with a single adaptable system.

  • Responsiveness - Generative AI models can rapidly produce outputs and insights in real time, enabling swift responses to user queries and changing requirements. Their ability to process information and generate relevant content makes them suitable for interactive applications, such as chatbots and customer support. This responsiveness enhances user experience by providing instant and contextually appropriate answers.

  • Simplicity - Generative AI models often simplify complex workflows by automating content generation and decision-making processes. They reduce the need for manual intervention or domain-specific coding, making AI-driven solutions more accessible to non-technical users. As a result, businesses can deploy sophisticated solutions with minimal setup and oversight.

  • Creativity and Exploration - Generative AI opens up new avenues for creativity by suggesting novel ideas, designs, or content based on learned patterns. It can assist with brainstorming, creative writing, and design prototyping, providing users with unexpected and innovative options. This capability helps push the boundaries of traditional problem-solving and artistic creation.

  • Data Efficiency - Many generative AI models are designed to learn effectively from relatively small datasets through pre-training and fine-tuning techniques. This data efficiency reduces the dependency on massive labeled datasets, lowering costs and effort associated with data preparation. It also allows models to generate meaningful outputs even in data-sparse environments.

Generative AI Disadvantages

  • Regulatory Violations - Generative AI models can inadvertently generate content that violates regulatory guidelines, such as producing misleading financial advice or content that doesn't comply with advertising standards. Organizations using these models may face compliance challenges, especially in highly regulated industries like healthcare and finance. This risk underscores the need for strict oversight and adherence to legal requirements when deploying AI systems.

  • Social Risks - Generative AI can be used to create deepfakes, disinformation, or biased content, potentially amplifying harmful social impacts. Such outputs can erode trust, manipulate public opinion, or contribute to social polarization. The misuse of generative AI for malicious purposes poses significant ethical and societal concerns that require careful mitigation strategies.

  • Data Security and Privacy Concerns - Generative AI models often require access to sensitive datasets, raising risks of data leakage or unintended exposure of personal information. If improperly handled, these models may inadvertently reveal private data points from training data. Ensuring data security and maintaining user privacy is a critical challenge when deploying generative models, especially in sensitive applications.

  • Toxicity - Generative models can sometimes produce toxic or harmful content, such as offensive language or inappropriate suggestions, if they are not carefully monitored. This issue is often due to biases or toxic patterns present in the training data. It necessitates rigorous content moderation and filtering techniques to prevent harmful outputs in public-facing applications.

  • Hallucinations - Generative AI may produce outputs that are factually incorrect or completely fabricated, known as "hallucinations". This problem is particularly challenging when using AI for tasks requiring high accuracy, such as generating technical documentation or answering factual questions. Hallucinations can undermine trust and reliability, making it difficult to use generative AI in mission-critical scenarios.

  • Nondeterminism - Generative AI models can produce different outputs even when given the same input, due to their probabilistic nature. This nondeterminism complicates tasks that require consistency, such as legal document generation or standardized communication. It also makes debugging and validating AI-generated outputs more complex, limiting their applicability in certain use cases.


Model Selection Decision Tree

What content are you trying to generate?

  • Text
  • Image
  • Audio
  • Video
  • Multimodal

Other model considerations

  • Performance and latency
  • Customization
  • Constraints and Resource
  • GRC (Governance Risk and Compliance)

What is Prompt Engineering?

Prompt engineering is the process of designing and refining input prompts to optimize the performance of AI models. It enhances the quality of responses, guides model behavior, and can lead to more accurate results.

Key components of Prompt Engineering

  • Context - Information surrounding the prompt that helps the model understand the scenario
  • Instruction - The specific task or question being posed to the model.
  • Negative Prompts - Instructions that specify what the model should avoid or exclude in its response (see the example below).
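As an illustrative sketch, the three components can be assembled into a single prompt; the bookstore scenario below is entirely made up.

// Hypothetical example of combining the components into one prompt string
const context = 'You are a support assistant for an online bookstore.';
const instruction = 'Summarize the customer complaint below in two sentences.';
const negativePrompt = 'Do not include personal data and do not invent order details.';
const complaint = 'My order arrived two weeks late and the book cover was damaged.';

const prompt = `${context}\n${instruction}\n${negativePrompt}\n\nComplaint: ${complaint}`;
console.log(prompt);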

ML Development Lifecycle

  1. Business Goal - Objectively measure the business value of the outcomes against the defined business goal. Is ML the appropriate technology choice to solve the problem statement?
  • Business Goal Definition Workflow
  • Business consideration
  • Frame the ML problem
  • Determine the optimization objective
  • Review data requirements
  • Cost and performance optimization
  • Production consideration
  2. ML problem framing - Define what is observed and what should be predicted. Identify dependent and/or independent variables. Define inputs and outputs.
  3. Collect Data
  • Data labeling
  • Ingest (streaming, batch)
  • Data aggregation
  4. Data pre-processing workflow
  • Clean
  • Partition
  • Scale
  • Unbalance, Balance
  • Augment
  5. Feature Engineering tasks (Features are inputs to ML models used during training and inference)
  • Feature selection: The process of selecting a subset of extracted features. This is the subset that is relevant and contributes to minimizing the error rate of a trained model.
  • Feature transformation - Steps for replacing missing features or features that are not valid.
  • Feature Creation - The creation of new features from existing data to help with better predictions.
  • Feature Extraction - The process of reducing the data to be processed using dimensionality reduction techniques.
  6. Train, tune, and evaluate - The process of training a machine learning model involves providing the algorithm with training data to learn from.
  7. Hyperparameters are settings that can control the behavior of the ML algorithm. Hyperparameter tuning, or optimization, is the process of choosing the optimal hyperparameters for an algorithm.

The four principles of Great design by Robin Williams

· 2 min read

What started as a curiosity turned into a cherished hobby. After discovering Canva in 2022, I started taking inspiration from various designs on Canva and Pinterest and created simple graphics for WhatsApp statuses and Instagram. This hands-on experience helped me understand how to make something attractive and eye-catching within a limited space and with a limited use of words.

I enjoy spending hours on Canva, and at the same time I have gained creative and valuable skills by doing the work hands-on.

I came across this course from Robin Williams about the principles of great design, and it certainly built on my existing knowledge. The course is well structured, and here are some of the important points taken from it.

The principle of Proximity

Group related items together. Physical closeness implies a relationship. Proximity does not mean that everything is close together - it means elements that are intellectually connected should be visually connected.

The principle of Alignment

Nothing should be placed on the page arbitrarily. Every item should have a visual connection with something else on the page.

  • In life as well as in Design, alignment has a purpose.
  • Clean alignment can improve the communication of any piece of work. It presents a more professional appearance.

The principle of Repetition

Repeat some element of the design throughout the entire piece. This is a critical unifying factor.

  • You already create consistency in your design work. Take elements of that consistency and push it - emphasize the consistency so it becomes a repetitive and unifying element of design.
  • Repetition helps to clarify information and provide structure.

The principle of Contrast

Contrast is what draws a reader's eye to the page in the first place. Contrast also provides clarity of information.

  • One effect of contrast is that it pulls the reader's eye into the information, and one result is clearer communication.
  • Use contrast to clarify information as well as make the page more attractive.

Deciphering Polyfill.io Service vs. Polyfill.js

· 2 min read

In light of recent events, there's been some confusion about the polyfill.io service and polyfill.js. This article aims to clarify the differences and address some concerns.

The Polyfill.io Incident

News recently surfaced about the polyfill.io service injecting malicious code into JavaScript assets fetched from their domain. This article provides a detailed account of the incident.

Understanding Polyfills

According to MDN, a polyfill is a code snippet, typically JavaScript on the web, that provides modern functionality on older browsers lacking native support. For instance, if you want to use the latest JavaScript APIs like array filter or map—supported by Chrome but not IE7—you'd need a polyfill to ensure seamless functionality.
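As a simplified sketch of the idea (not a full, spec-compliant implementation), a polyfill checks for the native feature and provides a fallback only when it is missing.

// Provide Array.prototype.filter only if the browser does not implement it natively
if (!Array.prototype.filter) {
  (Array.prototype as any).filter = function <T>(
    this: T[],
    predicate: (value: T, index: number, array: T[]) => boolean
  ): T[] {
    const result: T[] = [];
    for (let i = 0; i < this.length; i++) {
      if (predicate(this[i], i, this)) {
        result.push(this[i]);
      }
    }
    return result;
  };
}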

The Role of CDNs

A Content Delivery Network (CDN) is a system of interconnected servers that accelerate webpage loading for data-heavy applications. Commonly used static assets like jQuery, AngularJS, React, and Bootstrap.css reside on CDNs. Web applications can directly use these assets, saving on network and storage costs while enhancing application performance.

When a user in Location X visits your web application, the static files needed are downloaded from the nearest CDN to Location X, reducing latency and improving performance.

The Case for External Services

This blog post provides an excellent discussion on using polyfill as a service. The main argument is that shipping polyfills for every feature can lead to unnecessary downloads for users with modern browsers. This can negatively impact performance and user experience. An external service can help by shipping only the relevant polyfills based on the requesting browser's user agent.

Angular's polyfill.js

Angular's build system generates optimized, production-ready code files, including a file named polyfill.js. There's been confusion about whether this polyfill.js is related to the polyfill.io incident. The answer is a resounding NO.

Angular's polyfill.js is a file generated by the Angular build system for polyfilling required functionalities. It doesn't use any of the polyfill.io services to generate this build file, unless you're using the service in your source code.

Lessons in Software Simplification - From AngularJS to Vanilla JS

· 4 min read

8 Years ago…

AngularJS was a very popular framework and the talk of the town.

The software product had a requirement to provide a search solution with a display of tabular data, pagination, and some UI animation.

It was decided by the tech lead and the management to go with AngularJS, and there could be various reasons for it, possibly:

  • AngularJS was a popular framework
  • Going forward, all new features in this software product had to be developed using AngularJS
  • It is always exciting to work on a new technology, irrespective of whether it is actually needed.

This feature was released and praised, but over the years there have been no instances of this library being used for any other feature. The reasons:

  • Continued usage of the legacy framework, it being the obvious choice
  • The rising popularity of Angular (2+) over AngularJS, causing a lack of time and interest.
  • Lack of resources / technical skills in AngularJS

So, this huge product was left with AngularJS as low-hanging fruit, used for only one single feature.

However, security fixes to AngularJS library were patched whenever available.

Transition from AngularJS to Vanilla JavaScript

Fast forward to today, and our feature remains, but the landscape has shifted. AngularJS is officially deprecated and so we had to reevaluate our choice.

  • AngularJS library is deprecated.
  • Security concerns from customers
  • In this entire product, AngularJS is just used for this one feature.

We chose to use Vanilla JavaScript for various reasons, though the specifics are not relevant here.

When I began working on this feature, it became clear that Vanilla JavaScript could effortlessly provide the same functionality.

Over-Engineering and Unnecessary Complexity in the Original Code

After careful evaluation of the code, it appeared that this feature was over-engineered.

  • I discovered unused or infrequently used library files, bootstrap files, and a templating engine library.
  • I believe these libraries were added with the assumption that they would be useful for developing new features in the future. However, this turned out not to be the case.
  • Naturally, no one wanted to work with this code again, so all the core library files were left untouched.
  • There were clear violations of the DRY (Don't Repeat Yourself) and KISS (Keep It Simple, Stupid) design principles, indicating areas for improvement.

Enter the era of simplification.

Opting for vanilla JavaScript, we embarked on a journey to streamline our codebase and embrace the principles of DRY (Don't Repeat Yourself), KISS (Keep It Simple, Stupid), and YAGNI (You Ain't Gonna Need It).

The entire exercise of removing AngularJS involved the following steps:

  • Reviewing and understanding the entire feature
  • Reading the AngularJS code and identifying areas for improvement
  • Rewriting the entire feature using vanilla JavaScript
  • Ensuring the transformation does not affect the user, as only the underlying technology is being changed, not the user experience.

The Transformation: From Excessive to Efficient Coding

What began as an experiment turned into a revelation. With over 16K lines of unnecessary clutter stripped away, and under 1K lines of focused, purposeful addition, we emerged with a leaner, more efficient feature.

Simplification

The journey wasn't without its challenges, but it was immensely rewarding. We honed our skills, boosted our confidence, and left behind a codebase that is not just functional, but elegant and maintainable.

  • Increased my confidence in working independently on a feature.
  • Enhanced my ability to read any framework code and convert it to vanilla JavaScript.
  • Deepened my understanding of vanilla JavaScript.
  • Refactored the code, making it more readable and maintainable.

As we continue to evolve, let's remember the value of simplicity, the power of pragmatism, and the importance of continuous improvement.

PR#458 - My Proudest PR yet!!!

· 5 min read

In this blog post, I am thrilled to share the story behind my proudest Pull Request (PR) yet. PR#458 wasn't just another contribution but a significant milestone in my journey as a software developer. It was a challenging task that pushed me to my limits, and in overcoming those challenges, I learned valuable lessons that have shaped my approach to coding.

By the Numbers

Before we delve into the story, let's take a moment to appreciate the sheer scale of this Pull Request. It comprised nearly 10 individual commits, introduced close to 2,800 new lines of code, and astonishingly, resulted in the deletion or modification of over 314,400 lines across almost 1,000 files.

History

The project I worked on has a rich history spanning almost two decades, evolving from a Windows application to a browser-based web app with a diverse tech stack. This enterprise application has made the careers of many software engineers, which also means the codebase has been touched by many hands. With a wide tech stack such as C++, Java, and the Dojo framework on the UI, it accumulated tech debt over the years, and my role primarily focused on UI enhancements and refactoring.

The need for a change

The accumulation of tech debt prompted a thorough review of the codebase to identify areas for improvement:

  • Removal of unused assets and code snippets.
  • Refactoring of legacy code to improve readability and maintainability.
  • Elimination of support for outdated browsers.
  • Streamlining of build scripts to remove unnecessary generated files.

Motivational Quotes

I came across a tweet from Elon Musk that resonated deeply with me: "Far better to delete code than add it." This philosophy encapsulates the essence of efficient software development. While striving for 100% optimized and performant code from day one may seem ideal, the reality is that codebases evolve over time, accumulating unnecessary complexities and redundancies.

Another quote I hold dear is, "Always leave the code better than you found it," attributed to Ward Cunningham. This mindset drove me to embark on a journey of code refactoring and deletion, particularly in a legacy codebase spanning two decades.

Given that we were at the onset of a new release cycle, it presented the perfect opportunity to implement these changes. In the process, I identified several areas ripe for improvement:

  • Eliminating unused styles and assets meticulously, even if they were part of the codebase for years.
  • Letting go of support for outdated browsers, such as IE6, as their usage dwindled over time.
  • Since our project utilized the Dojo framework, it came with its own set of theme files. I painstakingly sifted through these files, pinpointing and eliminating any redundant styles that were no longer in use.
  • Streamlining build scripts to remove unnecessary auto-generated files, optimizing the build process.

These actions required patience and thorough unit testing at every stage to ensure they didn't impact existing functionality adversely. By adhering to these principles and embracing the challenge of improving legacy code, I not only enhanced the codebase's quality but also cultivated a mindset of continuous improvement in software development.

We diligently conducted unit tests at every stage to ensure that our changes didn't inadvertently impact any existing functionality.

In the end, this comprehensive cleanup effort not only improved the overall quality of our codebase but also positioned us for smoother development cycles in the future.

The Result

The PR was not only about code changes but also about personal growth:

  • Increased confidence in tackling a codebase spanning two decades.
  • Improved code readability and maintainability.
  • Timely refactoring to prevent future tech debt.
  • Opened doors for new opportunities and stretch assignments.

Room for improvement

While I'm incredibly proud of this PR, reflecting on it, there are areas where I could have refined my approach:

  • Learning Opportunity: This PR provided me with a valuable opportunity to delve deep into the codebase, uncovering insights and learning valuable lessons along the way. It's crucial to leverage such opportunities for continuous growth and improvement.

  • Confidence Boost: Deleting code can be daunting, especially when it seems to be functioning correctly. However, this experience reinforced my confidence in making impactful changes to enhance the codebase's quality and performance.

  • Enhanced Readability and Maintainability: By eliminating unused code and improving overall code cleanliness, we not only optimized performance but also made future development efforts more efficient. Why burden ourselves with code that serves no purpose? Additionally, utilizing version control tools like Git and GitHub ensures that we can always reference previous versions if needed.

  • Doors to New Opportunities: Although this PR focused on code cleanup rather than adding new features, it opened doors to exciting opportunities. It demonstrated my commitment to maintaining code quality and readiness to tackle tech debt, qualities that are highly valued in any development team.

In hindsight, I could have further optimized my approach:

  • Breaking down the tasks into smaller, more focused PRs could have facilitated smoother integration and minimized the risk of unintended side effects. This iterative approach would have allowed for more granular testing and validation over multiple production builds, ensuring a seamless transition.

Conclusion

In conclusion, working on PR#458 was an enriching experience:

  • Deepened my understanding of the codebase.
  • Boosted my confidence in refactoring and deletion.
  • Enhanced the overall quality of the codebase.
  • Presented new opportunities for professional growth and learning.
  • Overall, PR#458 represents not just a code contribution but a journey of growth, learning, and improvement.

5 Reasons to enjoy working on Legacy code

· 3 min read

Working on legacy code has its own advantages, and in this post I want to talk about how I enjoy and appreciate working on code that is as old as 15+ years.

You do not always get to start a project from scratch. Any software product usually evolves over time, and ensuring that all future developments are robust requires considerable effort.