AI concepts for Tech Professionals

· 16 min read

Artificial Intelligence

AI is a vast field focused on creating intelligent systems that can perform tasks usually requiring human intelligence, such as perception, reasoning, and decision-making. It encompasses a range of techniques and approaches, including machine learning, deep learning, and generative AI.

Machine Learning (ML)

ML is a subset of AI focused on developing methods that enable machines to learn from data and enhance their performance on specific tasks. It is commonly considered the simplest form of AI.

Neural Networks

A neural network is a computational model inspired by the human brain, consisting of interconnected layers of nodes or neurons that process and transform data through learned patterns and weights. It is commonly used in machine learning to recognize complex patterns, make predictions and solve tasks by training on large datasets.
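To make the idea of layers and weights concrete, here is a rough, framework-free sketch in Python/NumPy of a single forward pass through a tiny two-layer network. The weights are randomly initialized, made-up values, so the output is arbitrary until the network is trained.

```python
import numpy as np

def relu(x):
    # Non-linear activation applied element-wise
    return np.maximum(0, x)

# Made-up input features and randomly initialized weights (illustrative only)
rng = np.random.default_rng(0)
x = rng.normal(size=3)                          # one input sample with 3 features
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # hidden layer: 3 -> 4 neurons
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # output layer: 4 -> 1 neuron

hidden = relu(W1 @ x + b1)   # each layer transforms the data it receives
output = W2 @ hidden + b2    # final prediction (meaningless until trained)
print(output)
```

Training would adjust W1, b1, W2, and b2 so that the output matches labeled examples.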

Deep Learning

Deep learning is a subset of machine learning that utilizes multi-layered neural networks to automatically learn and extract features from large datasets. These deep networks can model complex patterns and perform tasks such as image and speech recognition with high accuracy by hierarchically processing data through multiple layers.

Computer Vision

Computer vision is a field of artificial intelligence that enables computers to interpret and understand visual information from the world, such as images and videos. It involves the use of algorithms and models to analyze and make decisions based on visual data, often mimicking human visual perception and cognition.

Natural Language Processing (NLP)

Natural language processing (NLP) is a branch of artificial intelligence that allows computers to understand and interact with human language. It involves tasks like translating text, analyzing sentiment, and summarizing information by processing and interpreting language data.

AI Model

An AI model is a computational algorithm trained on data to perform specific tasks, such as classification, prediction, or pattern recognition. It learns from examples in training data, adjusting its parameters to improve its accuracy, and then applies this learned knowledge to make informed decisions or predictions on new, unseen data.

ML Algorithm

An ML algorithm is a set of procedures used to analyze data and make predictions or decisions based on patterns and insights. It adjusts its approach by learning from data, improving its accuracy over time through iterative training.

AI Model Training

AI model training is the process of teaching a model to make accurate predictions or decisions by feeding it large amounts of data, adjusting its parameters through iterative learning, and optimizing its performance based on feedback and error rates.

AI inferencing

AI inferencing is the process of applying a trained AI model to new data to generate predictions or decisions based on the patterns and knowledge it has learned. It involves using the model's learned parameters to analyze the input and produce outputs in real-time or on-demand.

Model fairness

AI model fairness refers to the principle of ensuring that a model's predictions or decisions do not disproportionately disadvantage or bias any particular group or individual, promoting equitable outcomes.

Model fit

Model fit describes how well a model's predictions match the actual data it was trained on, indicating its accuracy and effectiveness in capturing the underlying patterns. A model that underfits misses these patterns, while a model that overfits memorizes the training data and generalizes poorly to new data.

Large Language Model (LLM)

A Large language model (LLM) is an AI model designed to understand and generate human-like text based on vast amounts of data. It uses advanced algorithms to process and respond to language, enabling tasks like text generation, translation, and question-answering.

Machine Learning Workflow

  • Identifying appropriate data is one of the most important aspects of the ML workflow.

Labeled Data

Labeled data in AI refers to data that has been annotated with specific tags or categories, providing a reference for training models. This annotated information helps the model learn to identify patterns and make accurate predictions based on the labeled examples.

Unlabeled Data

Unlabeled data in AI refers to data that lacks predefined tags or categories, meaning it has not been annotated with specific information. This type of data is often used in unsupervised learning, where models identify patterns and structures without predefined labels.

Tabular data

Tabular data in AI is structured information organized in rows and columns, resembling a spreadsheet or database table. Each row typically represents a single record or observation, while each column contains specific attributes or features, making it easy to analyze and process for machine learning tasks.

Time-Series data

Time-series data in AI consists of observations collected sequentially over time, often at regular intervals. This type of data is used to analyze trends, patterns, and seasonal variations, making it valuable for tasks such as forecasting and anomaly detection. This data is often generated by IoT devices.

Image data

Image data in AI refers to visual information represented as pixel matrices, capturing various features such as colors, shapes, and textures. This type of data is commonly used in computer vision tasks, including image classification, object detection, and facial recognition.

Structured text data

Structured text data in AI refers to text that is organized in a predefined format, often with specific fields and tags, making it easy to analyze and process. Examples include data from forms, databases, or CSV files, where each entry has a consistent structure that facilitates tasks like information extraction and analysis.

Unstructured text data

Unstructured text data in AI refers to free-form text that lacks a predefined structure, such as documents, social media posts, or emails. This type of data is more challenging to analyze, as it requires natural language processing techniques to extract insights, identify patterns, and derive meaning from the content.


  • Select the ML Algorithm

Linear Regression

Models the relationship between a dependent variable and one or more independent variables. E.g., predicting housing prices based on size, location, and number of bedrooms.
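A minimal sketch using scikit-learn; the house sizes, bedroom counts, and prices below are made-up numbers purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative training data: [size_sqft, bedrooms] per house
X = np.array([[1000, 2], [1500, 3], [2000, 3], [2500, 4]])
y = np.array([200_000, 280_000, 340_000, 410_000])  # made-up prices

model = LinearRegression().fit(X, y)
print(model.predict([[1800, 3]]))  # estimated price for an unseen house
```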

Logistic Regression

Binary classification that predicts the probability of an event occurring. E.g., email spam classification.
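A toy spam-classification sketch with scikit-learn, assuming two hand-picked features (number of links and number of ALL-CAPS words); the data is fabricated for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# [num_links, num_caps_words] per email; 1 = spam, 0 = not spam (made-up)
X = np.array([[0, 1], [1, 0], [5, 8], [7, 12], [0, 0], [6, 9]])
y = np.array([0, 0, 1, 1, 0, 1])

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[4, 7]]))  # probabilities for [not spam, spam]
```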

K-Nearest Neighbors (KNN)

Classifies a data point based on the classes of its nearest neighbors. E.g., product recommendation based on user preferences.
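A small KNN sketch in scikit-learn for a recommendation-style classification; the preference scores and labels are invented for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Invented user preference vectors: [likes_electronics, likes_books]
X = np.array([[9, 1], [8, 2], [1, 9], [2, 8], [7, 3], [3, 7]])
y = np.array(["gadget", "gadget", "novel", "novel", "gadget", "novel"])

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(knn.predict([[6, 4]]))  # classify based on the 3 closest users
```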

Principal Component Analysis (PCA)

Condenses data while retaining the most important features. E.g., facial recognition.
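A brief PCA sketch that compresses synthetic 10-dimensional samples down to 2 principal components; the random data simply stands in for real measurements.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 10))          # 100 synthetic samples, 10 features

pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)        # keep the 2 most informative directions
print(X_reduced.shape)                  # (100, 2)
print(pca.explained_variance_ratio_)    # variance retained by each component
```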


  • Train the model on the data

Supervised Learning

Supervised learning is a machine learning approach where a model is trained on labeled data, using input-output pairs to learn the relationship between them. The model makes predictions on new, unseen data by applying the patterns it has learned from the training examples.

Unsupervised learning

Unsupervised learning is a machine learning approach where a model is trained on unlabeled data, aiming to identify patterns, structures, or groupings within the data without predefined output categories. It is commonly used for tasks such as clustering, dimensionality reduction, and anomaly detection, helping to uncover hidden relationships in the data.
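A minimal clustering sketch (k-means on fabricated 2-D points), one common form of unsupervised learning; the groups are discovered without any labels.

```python
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled, made-up points that loosely form two groups
X = np.array([[1.0, 1.1], [0.9, 1.2], [1.2, 0.8],
              [8.0, 8.1], [8.2, 7.9], [7.8, 8.3]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)   # cluster assignments discovered from structure alone
print(labels)
```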

Reinforcement Learning

Reinforcement learning is a machine learning approach where an agent learns to make decisions by interacting with an environment, receiving feedback in the form of rewards or penalties. The agent aims to maximize cumulative rewards over time by exploring different actions and learning from the consequences of its choices.
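A toy sketch of the reinforcement-learning loop, using an epsilon-greedy agent on a made-up multi-armed bandit; the hidden reward probabilities and the environment are invented for illustration only.

```python
import random

# Hypothetical environment: three actions with hidden reward probabilities
reward_probs = [0.2, 0.5, 0.8]

def step(action):
    # The environment returns a reward of 1 with the action's hidden probability
    return 1.0 if random.random() < reward_probs[action] else 0.0

q_values = [0.0, 0.0, 0.0]   # the agent's running estimate of each action's value
counts = [0, 0, 0]
epsilon = 0.1                # exploration rate

for _ in range(1000):
    if random.random() < epsilon:
        action = random.randrange(3)              # explore a random action
    else:
        action = q_values.index(max(q_values))    # exploit the best-known action
    reward = step(action)
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward
    q_values[action] += (reward - q_values[action]) / counts[action]

print(q_values)  # should roughly recover the hidden reward probabilities
```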

  • Evaluate Model performance. Perform a series of tests to validate whether the model generates usable output

Batch inferencing

Batch inferencing is the process of making predictions or decisions on a large set of data at once, rather than individually processing each data point. This approach allows for efficient and scalable analysis by handling multiple inputs in a single operation. Batch inferencing is used when accuracy is more important than speed of response.

Real time inferencing

Real-time inferencing is the process of making predictions or decisions on data instantly as it is received, enabling immediate responses. This approach is crucial for applications requiring quick, dynamic interaction, such as live video analysis or online recommendation systems. Self-driving cars use real-time inferencing while in motion.
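A small sketch contrasting the two inference modes with a toy scikit-learn model; the features and records are fabricated, and a production system would wrap the real-time call in an API or stream consumer.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Train a toy model on fabricated data
X_train = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])
y_train = np.array([0, 1, 0, 1])
model = LogisticRegression().fit(X_train, y_train)

# Batch inferencing: score a whole batch of records in one call
X_batch = np.array([[0.15, 0.25], [0.85, 0.75], [0.3, 0.2]])
print(model.predict(X_batch))

# Real-time inferencing: score a single record the moment it arrives
incoming = np.array([[0.7, 0.95]])
print(model.predict(incoming))
```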


Real World Examples of AI applications

  • Computer Vision Autonomous vehicles utilize computer vision to interpret and navigate their environment. They rely on a combination of sensors, cameras, and AI algorithms to perceive the world around them.

  • NLP Speech recognition Virtual assistants leverage NLP and speech recognition to understand and respond to user queries in natural language. They allow for hands-free operation of devices, providing users with a seamless interaction experience.

  • Recommendation systems E-commerce platforms employ recommendation systems to provide personalized shopping experiences for users. These systems analyze user behavior and preferences to suggest products that are most likely to be purchased.

  • Fraud detection Financial institutions, including banks and credit card companies, employ fraud detection systems to identify and prevent fraudulent transactions in real-time. These systems use ML algorithms to analyze transaction data and flag suspicious activities.

  • Forecasting In supply chain management, accurate demand forecasting is crucial for ensuring products are available to meet customer demand while minimizing excess inventory costs. Companies use AI to analyze historical sales data and predict future demand.


Introduction to RAG

  • Retrieval Augmented Generation - The process of augmenting LLM output by referencing a knowledge base that is outside the context of the LLM training sources

Knowledge base options

  • Traditional Database or Indexing System - Use a traditional database or an indexing system like Elasticsearch. Here, the documents are indexed based on keywords or phrases. The retrieval process involves searching these indices to identify documents that match the query terms, which can then be sent to the LLM for generating responses.

  • Vector Database - In this method, structured or unstructured data are split into chunks, then embedded into vectors using a model (often a transformer-based encoder). These vectors are then stored in a vector database that supports efficient similarity search. When a prompt is submitted, this database is searched first, using a vector representing the query. It then retrieves the most relevant documents based on vector similarity and adds this data to the prompt.
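A minimal sketch of that retrieval step, assuming chunk embeddings already exist. The three-element vectors and the example chunks below are made-up placeholders; a real system would use an embedding model and a vector database rather than a Python dictionary.

```python
import numpy as np

# Pretend these are embeddings of three knowledge-base chunks (made-up values)
chunks = {
    "Refunds are processed within 5 business days.": np.array([0.9, 0.1, 0.0]),
    "Our office is open Monday to Friday.":          np.array([0.1, 0.8, 0.1]),
    "Shipping is free for orders over $50.":         np.array([0.2, 0.1, 0.9]),
}

def cosine(a, b):
    # Similarity between two vectors, ignoring their magnitudes
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend this is the embedding of the user's question about refunds
query_vec = np.array([0.85, 0.15, 0.05])

# Retrieve the most similar chunk and prepend it to the prompt sent to the LLM
best_chunk = max(chunks, key=lambda text: cosine(chunks[text], query_vec))
prompt = f"Context: {best_chunk}\n\nQuestion: How long do refunds take?"
print(prompt)
```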

RAG Benefits

  • Enhanced factuality and accuracy
  • LLM contextual relevance
  • Improved handling of specific verticals

RAG Challenges

  • Pipeline complexity
  • Latency issues
  • Dependence on the quality of the Retrieval set
  • Resource Requirements
  • Difficulty in Tuning and maintenance

How do you know if an AI model is delivering on business objectives?

Below are key considerations

  • Alignment with Business Objectives - Ensure that the model addresses specific goals.
  • Performance Metrics - Define KPIs to measure effectiveness.
  • User Feedback - Collect qualitative insights from end users.
  • Integration and Usability - Evaluate how well the model integrates into existing workflows.

Generative AI

  • Transformer-based LLMs are models that can understand and generate human-like text. They are trained on text data from various sources, and learn patterns and relationships between words and phrases.

  • Tokens - Units of text that the model processes individually. Each token represents a fragment of the input text, which can be a word, subword, character, or even a punctuation mark, depending on the specific tokenization method used by the model.

  • Chunking - The practice of breaking down a large text input or output into smaller, more manageable pieces for processing. Chunk size (in tokens) is an important parameter when creating a vector database (see the sketch after this list).

  • Vectors - A mathematical representation of data (a word, sentence, or document) as a series of numerical values organized in a specific order. This representation captures various features or dimensions of the data, enabling the calculation of relationships or similarities.
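To make the three terms above concrete, here is a deliberately naive Python sketch. Real systems use learned subword tokenizers and embedding models, not whitespace splitting and word counts; this is only meant to show the mechanics.

```python
from collections import Counter
import math

text = "Tokens, chunks, and vectors are the plumbing of generative AI systems."

# Tokens: a naive whitespace tokenizer (real models use subword tokenizers)
tokens = text.lower().split()
print(tokens)

# Chunking: break the token stream into fixed-size pieces
def chunk(tokens, size=4):
    return [tokens[i:i + size] for i in range(0, len(tokens), size)]
print(chunk(tokens))

# Vectors: represent two texts as word-count vectors and compare them
def to_vector(s):
    return Counter(s.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = lambda v: math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm(a) * norm(b))

print(cosine(to_vector("generative ai systems"), to_vector("ai systems are generative")))
```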

Foundation Model Types for Generative AI

  • A Large Language Model (LLM) is an AI model designed to understand and generate human-like text based on vast amounts of data. It uses advanced algorithms to process and respond to language, enabling tasks like text generation, translation and question-answering

  • Diffusion Models start with noise or random data, and gradually add information until a recognizable pattern is obtained. This is often applied to image generation but can also be used for text or audio generation.

  • Multimodal models are foundation models which have been trained on multiple media types. These media types can include text, audio, video, and images. The models can both interpret and generate these media types.

  • Generative Adversarial Networks (GANs) - This model consists of two neural networks which compete with each other. One generates content, and the other attempts to differentiate that generated content from real data. The competition continues until the generated content and real data are indistinguishable from each other.

Generative AI Advantages

  • Adaptability - Generative AI excels in adapting to diverse tasks and problem domains, making it useful across a wide range of industries. It can seamlessly switch between language, visual, and data-centric applications without needing extensive reconfiguration. This flexibility helps organizations leverage AI to tackle varied challenges with a single adaptable system.

  • Responsiveness - Generative AI models can rapidly produce outputs and insights in real time, enabling swift responses to user queries and changing requirements. Their ability to process information and generate relevant content makes them suitable for interactive applications, such as chatbots and customer support. This responsiveness enhances user experience by providing instant and contextually appropriate answers.

  • Simplicity - Generative AI models often simplify complex workflows by automating content generation and decision-making processes. They reduce the need for manual intervention or domain-specific coding, making AI-driven solutions more accessible to non-technical users. As a result, businesses can deploy sophisticated solutions with minimal setup and oversight.

  • Creativity and Exploration - Generative AI opens up new avenues for creativity by suggesting novel ideas, designs, or content based on learned patterns. It can assist with brainstorming, creative writing, and design prototyping, providing users with unexpected and innovative options. This capability helps push the boundaries of traditional problem-solving and artistic creation.

  • Data Efficiency - Many generative AI models are designed to learn effectively from relatively small datasets through pre-training and fine-tuning techniques. This data efficiency reduces the dependency on massive labeled datasets, lowering costs and effort associated with data preparation. It also allows models to generate meaningful outputs even in data-sparse environments.

Generative AI Disadvantages

  • Regulatory Violations - Generative AI models can inadvertently generate content that violates regulatory guidelines, such as producing misleading financial advice or content that doesn't comply with advertising standards. Organizations using these models may face compliance challenges, especially in highly regulated industries like healthcare and finance. This risk underscores the need for strict oversight and adherence to legal requirements when deploying AI systems.

  • Social Risks - Generative AI can be used to create deepfakes, disinformation, or biased content, potentially amplifying harmful social impacts. Such outputs can erode trust, manipulate public opinion, or contribute to social polarization. The misuse of generative AI for malicious purposes poses significant ethical and societal concerns that require careful mitigation strategies.

  • Data Security and Privacy Concerns - Generative AI models often require access to sensitive datasets, raising risks of data leakage or unintended exposure of personal information. If improperly handled, these models may inadvertently reveal private data points from training data. Ensuring data security and maintaining user privacy is a critical challenge when deploying generative models, especially in sensitive applications.

  • Toxicity - Generative models can sometimes produce toxic or harmful content, such as offensive language or inappropriate suggestions, if they are not carefully monitored. This issue is often due to biases or toxic patterns present in the training data. It necessitates rigorous content moderation and filtering techniques to prevent harmful outputs in public-facing applications.

  • Hallucinations - Generative AI may produce outputs that are factually incorrect or completely fabricated, known as "hallucinations". This problem is particularly challenging when using AI for tasks requiring high accuracy, such as generating technical documentation or answering factual questions. Hallucinations can undermine trust and reliability, making it difficult to use generative AI in mission-critical scenarios.

  • Nondeterminism - Generative AI models can produce different outputs even when given the same input, due to their probabilistic nature. This nondeterminism complicates tasks that require consistency, such as legal document generation or standardized communication. It also makes debugging and validating AI-generated outputs more complex, limiting their applicability in certain use cases.


Model Selection Decision Tree

What content are you trying to generate?

  • Text
  • Image
  • Audio
  • Video
  • Multimodal

Other model considerations

  • Performance and latency
  • Customization
  • Constraints and Resources
  • GRC (Governance Risk and Compliance)

What is Prompt Engineering?

The process of designing and refining input prompts to optimize the performance of AI models. It enhances the quality of responses, guides model behavior, and can lead to more accurate results.

Key components of Prompt Engineering

  • Context - Information surrounding the prompt that helps the model understand the scenario
  • Instruction - The specific task or question being posed to the model.
  • Negative Prompts - Instructions that specify what the model should avoid or exclude in its response.
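A simple sketch of how these three components might be assembled into a single prompt string. The wording, field labels, and placeholder values are illustrative, not a required format.

```python
def build_prompt(context: str, instruction: str, negative_prompt: str) -> str:
    # Combine the key prompt-engineering components into one prompt for the model
    return (
        f"Context: {context}\n"
        f"Instruction: {instruction}\n"
        f"Avoid: {negative_prompt}\n"
    )

prompt = build_prompt(
    context="You are a support assistant for a cloud storage product.",
    instruction="Explain how to restore a deleted file in three short steps.",
    negative_prompt="Do not mention pricing or unrelated products.",
)
print(prompt)
```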

ML Development Lifecycle

  1. Business Goal - Objectively measure the business value of the outcomes against the defined business goal. Is ML the appropriate technology choice to solve the problem statement?
  • Business Goal Definition Workflow
  • Business considerations
  • Frame the ML problem
  • Determine the optimization objective
  • Review data requirements
  • Cost and performance optimization
  • Production considerations
  2. ML Problem Framing - Define what is observed and what should be predicted. Identify dependent and/or independent variables. Define inputs and outputs.

  3. Collect Data

  • Data labeling
  • Ingest (streaming, batch)
  • Data aggregation
  4. Data pre-processing workflow
  • Clean
  • Partition
  • Scale
  • Balance (address class imbalance)
  • Augment
  5. Feature Engineering tasks (Features are inputs to ML models used during training and inference)
  • Feature selection - The process of selecting a subset of extracted features. This is the subset that is relevant and contributes to minimizing the error rate of a trained model.
  • Feature transformation - Steps for replacing missing features or features that are not valid.
  • Feature creation - The creation of new features from existing data to help with better predictions.
  • Feature extraction - The process of reducing the data to be processed using dimensionality reduction techniques.
  6. Train, tune, and evaluate - The process of training a machine learning model involves providing the algorithm with training data to learn from.

  7. Hyperparameters are settings that control the behavior of the ML algorithm. Hyperparameter tuning, or optimization, is the process of choosing the optimal hyperparameters for an algorithm.
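A minimal hyperparameter-tuning sketch using scikit-learn's GridSearchCV. The synthetic dataset and the parameter grid are illustrative only; a real project would tune the hyperparameters relevant to its chosen algorithm.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

# Synthetic dataset standing in for real training data
X, y = make_classification(n_samples=200, n_features=8, random_state=0)

# Hyperparameter: how many neighbors the KNN algorithm consults
param_grid = {"n_neighbors": [3, 5, 7, 9]}

search = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_)   # the hyperparameter value that scored best
print(search.best_score_)    # mean cross-validated accuracy for that value
```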

The four principles of Great design by Robin Williams

· 2 min read

What started as a curiosity turned into a desirable hobby. After discovering Canva in 2022, I started taking inspiration from various designs on Canva and Pinterest and created simple graphics for WhatsApp Status and Instagram. This hands-on experience helped me understand how to make something attractive and eye-catching within a limited space and with limited use of words.

I enjoy spending hours on Canva, and at the same time, I have gained creative and valuable skills while doing it hands-on.

I came across this course from Robin Williams about the principles of great design, and it built nicely on my existing knowledge. The course is well structured, and here are some of the important points taken from it.

The principle of Proximity

Group related items together. Physical closeness implies a relationship. Proximity does not mean that everything is close together - it means elements that are intellectually connected should be visually connected.

The principle of Alignment

Nothing should be placed on the page arbitrarily. Every item should have a visual connection with something else on the page.

  • In life as well as in Design, alignment has a purpose.
  • Clean alignment can improve the communication of any piece of work. It presents a more professional appearance.

The principle of Repetition

Repeat some element of the design throughout the entire piece. This is a critical unifying factor.

  • You already create consistency in your design work. Take elements of that consistency and push it - emphasize the consistency so it becomes a repetitive and unifying element of design.
  • Repetition helps to clarify information and provide structure.

The principle of Contrast

Contrast is what draws a reader's eye to the page in the first place. Contrast also provides clarity of information.

  • One effect of contrast is that it pulls the reader's eye into the information, and one result is clearer communication.
  • Use contrast to clarify information as well as make the page more attractive.

Deciphering Polyfill.io Service vs. Polyfill.js

· 2 min read

In light of recent events, there's been some confusion about the polyfill.io service and polyfill.js. This article aims to clarify the differences and address some concerns.

The Polyfill.io Incident

News recently surfaced about the polyfill.io service injecting malicious code into JavaScript assets fetched from their domain. This article provides a detailed account of the incident.

Understanding Polyfills

According to MDN, a polyfill is a code snippet, typically JavaScript on the web, that provides modern functionality on older browsers lacking native support. For instance, if you want to use the latest JavaScript APIs like array filter or map—supported by Chrome but not IE7—you'd need a polyfill to ensure seamless functionality.

The Role of CDNs

A Content Delivery Network (CDN) is a system of interconnected servers that accelerate webpage loading for data-heavy applications. Commonly used static assets like jQuery, AngularJS, React, and Bootstrap.css reside on CDNs. Web applications can directly use these assets, saving on network and storage costs while enhancing application performance.

When a user in Location X visits your web application, the static files needed are downloaded from the nearest CDN to Location X, reducing latency and improving performance.

The Case for External Services

This blog post provides an excellent discussion on using polyfill as a service. The main argument is that shipping polyfills for every feature can lead to unnecessary downloads for users with modern browsers. This can negatively impact performance and user experience. An external service can help by shipping only the relevant polyfills based on the requesting browser's user agent.

Angular's polyfill.js

Angular's build system generates optimized, production-ready code files, including a file named polyfill.js. There's been confusion about whether this polyfill.js is related to the polyfill.io incident. The answer is a resounding NO.

Angular's polyfill.js is a file generated by the Angular build system for polyfilling required functionalities. It doesn't use any of the polyfill.io services to generate this build file, unless you're using the service in your source code.

Lessons in Software Simplification - From AngularJS to Vanilla JS

· 4 min read

8 Years ago…

AngularJS was a very popular library and the talk of the town.

The software product had a requirement to provide a search solution with a display of tabular data, pagination, etc., along with some UI animation.

It was decided by the tech lead and the management to go with AngularJS and there could be various reasons for it, possibly:

  • AngularJS was a popular framework
  • Going forward all the new features in this software product had to be developed using AngularJS
  • It is always exciting to work on a new technology, irrespective of whether it is actually needed.

This feature was released and praised, but over the years, there have been no instances of this library being used for any other features, for reasons such as:

  • Continued usage of the legacy framework, it being the obvious choice
  • The rising popularity of Angular2 over AngularJS, causing a lack of time and interest.
  • Lack of resources / technical skills in AngularJS

So, this huge product carried AngularJS as a dependency used for only one single feature.

However, security fixes to AngularJS library were patched whenever available.

Transition from AngularJS to Vanilla JavaScript

Fast forward to today, and our feature remains, but the landscape has shifted. AngularJS is officially deprecated and so we had to reevaluate our choice.

  • AngularJS library is deprecated.
  • Security concerns from customers
  • In this entire product, AngularJS is just used for this one feature.

We chose to use Vanilla JavaScript for various reasons, though the specifics are not relevant here.

When I began working on this feature, it became clear that Vanilla JavaScript could effortlessly provide the same functionality.

Over-Engineering and Unnecessary Complexity in the Original Code

After careful evaluation of the code, it appeared that this feature was over-engineered.

  • I discovered unused or infrequently used library files, bootstrap files, and a templating engine library.
  • I believe these libraries were added with the assumption that they would be useful for developing new features in the future. However, this turned out not to be the case.
  • Naturally, no one wanted to work with this code again, so all the core library files were left untouched.
  • There were clear violations of the DRY (Don't Repeat Yourself) and KISS (Keep It Simple, Stupid) design principles, indicating areas for improvement.

Enter the era of simplification.

Opting for vanilla JavaScript, we embarked on a journey to streamline our codebase and embrace the principles of DRY (Don't Repeat Yourself), KISS (Keep It Simple, Stupid), and YAGNI (You Ain't Gonna Need It).

The entire exercise of removing AngularJs involved the following steps:

  • Reviewing and understanding the entire feature
  • Reading the AngularJS code and identifying areas for improvement
  • Rewriting the entire feature using vanilla JavaScript
  • Ensuring the transformation does not affect the user, as only the underlying technology is being changed, not the user experience.

The Transformation: From Excessive to Efficient Coding

What began as an experiment turned into a revelation. With over 16K lines of unnecessary clutter stripped away, and under 1K lines of focused, purposeful addition, we emerged with a leaner, more efficient feature.

Simplification

The journey wasn't without its challenges, but it was immensely rewarding. We honed our skills, boosted our confidence, and left behind a codebase that is not just functional, but elegant and maintainable.

  • Increased my confidence in working independently on a feature.
  • Enhanced my ability to read any framework code and convert it to vanilla JavaScript.
  • Deepened my understanding of vanilla JavaScript.
  • Refactored the code, making it more readable and maintainable.

As we continue to evolve, let's remember the value of simplicity, the power of pragmatism, and the importance of continuous improvement.

PR#458 - My Proudest PR yet!!!

· 5 min read

In this blog post, I am thrilled to share the story behind my proudest Pull Request (PR) yet. PR#458 wasn't just another contribution but a significant milestone in my journey as a software developer. It was a challenging task that pushed me to my limits, and in overcoming those challenges, I learned valuable lessons that have shaped my approach to coding.

By the Numbers

Before we delve into the story, let's take a moment to appreciate the sheer scale of this Pull Request. It comprised nearly 10 individual commits, introduced close to 2,800 new lines of code, and astonishingly, resulted in the deletion or modification of over 314,400 lines across almost 1,000 files.

History

The project I worked on has a rich history spanning almost two decades, evolving from a Windows application to a browser-based web app with a diverse tech stack. This enterprise application has made the careers of many software engineers, which means the codebase has been touched by many hands. With a wide tech stack, including C++, Java, and the Dojo framework on the UI, it accumulated tech debt over the years, and my role primarily focused on UI enhancements and refactoring.

The need for a change

The accumulation of tech debt prompted a thorough review of the codebase to identify areas for improvement:

  • Removal of unused assets and code snippets.
  • Refactoring of legacy code to improve readability and maintainability.
  • Elimination of support for outdated browsers.
  • Streamlining of build scripts to remove unnecessary generated files.

Motivational Quotes

I came across a tweet from Elon Musk that resonated deeply with me: "Far better to delete code than add it." This philosophy encapsulates the essence of efficient software development. While striving for 100% optimized and performant code from day one may seem ideal, the reality is that codebases evolve over time, accumulating unnecessary complexities and redundancies.

Another quote I hold dear is, "Always leave the code better than you found it," attributed to Ward Cunningham. This mindset drove me to embark on a journey of code refactoring and deletion, particularly in a legacy codebase spanning two decades.

Given that we were at the onset of a new release cycle, it presented the perfect opportunity to implement these changes. In the process, I identified several areas ripe for improvement:

  • Eliminating unused styles and assets meticulously, even if they were part of the codebase for years.
  • Letting go of support for outdated browsers, such as IE6, as their usage dwindled over time.
  • Since our project utilized the Dojo framework, it came with its own set of theme files. I painstakingly sifted through these files, pinpointing and eliminating any redundant styles that were no longer in use.
  • Streamlining build scripts to remove unnecessary auto-generated files, optimizing the build process.

These actions required patience and thorough unit testing at every stage to ensure they didn't impact existing functionality adversely. By adhering to these principles and embracing the challenge of improving legacy code, I not only enhanced the codebase's quality but also cultivated a mindset of continuous improvement in software development.

We diligently conducted unit tests at every stage to ensure that our changes didn't inadvertently impact any existing functionality.

In the end, this comprehensive cleanup effort not only improved the overall quality of our codebase but also positioned us for smoother development cycles in the future.

The Result

The PR was not only about code changes but also about personal growth:

  • Increased confidence in tackling a codebase spanning two decades.
  • Improved code readability and maintainability.
  • Timely refactoring to prevent future tech debt.
  • Opened doors for new opportunities and stretch assignments.

Room for improvement

While I'm incredibly proud of this PR, reflecting on it, there are areas where I could have refined my approach:

  • Learning Opportunity: This PR provided me with a valuable opportunity to delve deep into the codebase, uncovering insights and learning valuable lessons along the way. It's crucial to leverage such opportunities for continuous growth and improvement.

  • Confidence Boost: Deleting code can be daunting, especially when it seems to be functioning correctly. However, this experience reinforced my confidence in making impactful changes to enhance the codebase's quality and performance.

  • Enhanced Readability and Maintainability: By eliminating unused code and improving overall code cleanliness, we not only optimized performance but also made future development efforts more efficient. Why burden ourselves with code that serves no purpose? Additionally, utilizing version control tools like Git and GitHub ensures that we can always reference previous versions if needed.

  • Doors to New Opportunities: Although this PR focused on code cleanup rather than adding new features, it opened doors to exciting opportunities. It demonstrated my commitment to maintaining code quality and readiness to tackle tech debt, qualities that are highly valued in any development team.

In hindsight, I could have further optimized my approach:

  • Breaking down the tasks into smaller, more focused PRs could have facilitated smoother integration and minimized the risk of unintended side effects. This iterative approach would have allowed for more granular testing and validation over multiple production builds, ensuring a seamless transition.

Conclusion

In conclusion, working on PR#458 was an enriching experience:

  • Deepened my understanding of the codebase.
  • Boosted my confidence in refactoring and deletion.
  • Enhanced the overall quality of the codebase.
  • Presented new opportunities for professional growth and learning.
  • Overall, PR#458 represents not just a code contribution but a journey of growth, learning, and improvement.

5 Reasons to enjoy working on Legacy code

· 3 min read

Working on legacy code has its own advantages, and in this post I want to talk about how I enjoy and appreciate working on code that is as old as 15+ years.

You do not always get to start a project from scratch. Any software product usually evolves over time, and ensuring that all future developments are robust requires considerable effort.

First time attending a meetup - My Experience

· 6 min read

A group of individuals with a common interest plan to meet to share their knowledge and network. This is my definition of a meetup. I have read several blogs about meetups but never attended any. I did not plan anything, nor did I know about this meetup until a day before. It was just on the spur of the moment. I am glad I attended this meetup, and hence I am sharing my experience through this blog post.

I have no clue what made me register on the meetup.com website. It was Friday evening, and I was about to wrap up my daily office work. I pointed my browser to meetup.com and registered. The website showed me a meetup happening on Saturday, 05th of Oct 2019, from 10.00 AM to 12.00 PM. The hosts were sharing their experience about React with TypeScript == React on Steroids. Interesting. I wanted to learn about it. Unfortunately, RSVPs were closed. "I could always watch a talk on such a topic on YouTube", I thought.

Surprisingly, someone in the comments had mentioned that there was no need to RSVP and that everyone was welcome. I did not have any excuse not to attend.

The minimalist phone

· 7 min read

Smartphones, and in turn social media, play a very important role in our lives, without which we would be stranded. Right? To a certain extent that is the truth. We use social media because we want to be connected, but we forget the toll its continuous usage takes on our lives. In this post, I go through my journey from being a social media addict to being a social media ghost.

A night of coding - Developer chaos

· 7 min read

First, a little background. I had an important feature delivery in the coming week but, due to personal work, had to take two days off on short notice. This meant I had to complete my current development tasks beforehand. "No problemo!", I thought. My inspiration for working overnight comes from the movie "The Social Network". I have mentioned how amazing the soundtrack of that movie is umpteen times here on this blog. This essay is my retrospection on working overnight.

Work commitments for an important feature delivery and family time never go hand in hand. "You will spend more time working than with family or honing your hobbies" - the universally accepted law for the software professional, by the software professional - always holds true.