Jaipur, Rajasthan, India
2K followers
500+ connections
About
Articles by Harpreet Singh
Activity
Experience & Education
Licenses & Certifications
Publications
-
The "Expert" Button: Can AI Really Magic Up Knowledge?
Harpreet Singh Sachdev
Ever tried the "writing expert mode" with AI and got predictable results? Here's the scoop: AI like ChatGPT relies on its prior training, so it won't surprise you with new information. But here's the real magic: treat AI as your creative buddy. Think of it as a friend who suggests and refines, not a mind-controller.
You're the artist with the unique ideas; AI adds the knowledge blend.
1. Architect Your Masterpiece, Step by Step: Plan with AI and align your vision step by step.
2. Clarify the Fog: Seek Understanding Before Speed: Communicate clearly and refine AI's grasp of your needs.
3. Fill the Knowledge Gaps: Build on Solid Ground: Source facts wisely to ensure unbiased, well-rounded content.
4. Take a Breath, Refine the Gem: Pause for accuracy and let AI align with your request.
Discover detailed insights in the article, which discusses leveraging AI by refining instructions, transforming it from a mere content generator or task performer into an advisor that corrects, suggests ideas, and fine-tunes tasks. The tips above can help you wield AI's knowledge base alongside your creativity to create pure magic!
-
AI Drift In Retrieval Augmented Generation and ways to control it
Harpreet Singh Sachdev
This article examines AI Drift, the gradual decline in AI performance, in the context of Retrieval Augmented Generation (RAG). Analogous to a chameleon's color-changing ability, RAG's responses may deviate over time due to changes in the retrieval source, biased data, and shifts in ranking and generative algorithms. Three types of drift are identified: Data Drift, Model Drift, and Interaction Drift. Data Drift results from outdated retrieval sources, while Model Drift involves changes in ranking and generative models; Interaction Drift is driven by user feedback and evolving expectations. Controlling AI drift in RAG demands a comprehensive strategy involving data hygiene, bias mitigation, model monitoring, and the integration of human and user feedback. This proactive, ongoing approach keeps RAG reliable for storytelling and information retrieval.
-
Large Language Models (LLMs): Understanding and Optimizing for Programmatic Use
Harpreet Singh Sachdev
The emergence of LLMs like ChatGPT, Google Gemini, Google Bard, and LlamaIndex has revolutionized the field of artificial intelligence. These powerful models, trained on vast datasets, excel at understanding, summarizing, generating, and predicting text content. For AI enthusiasts, the ability to interact with these models programmatically through Python opens exciting possibilities. However, navigating the intricacies of parameters can be challenging. This guide focuses on three key parameters (temperature, top_k, top_p) that can significantly impact the quality and creativity of your LLM outputs.
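The three knobs can be illustrated without tying the example to any particular vendor's API. Below is a minimal, self-contained sketch (my illustration, not any specific library's implementation) of how temperature, top_k, and top_p interact when sampling the next token from a model's logits:

```python
import math
import random

def sample_token(logits, temperature=1.0, top_k=0, top_p=1.0):
    """Sample a token index from raw logits, showing the three knobs:
    temperature rescales the logits, top_k keeps only the k most likely
    tokens, and top_p (nucleus sampling) keeps the smallest set of
    tokens whose cumulative probability reaches p."""
    # Temperature: values below 1 sharpen the distribution, above 1 flatten it.
    scaled = [l / temperature for l in logits]
    # Softmax over the scaled logits (shifted by the max for stability).
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = sorted(((i, e / total) for i, e in enumerate(exps)),
                   key=lambda x: x[1], reverse=True)
    # top_k: truncate to the k highest-probability tokens (0 = disabled).
    if top_k > 0:
        probs = probs[:top_k]
    # top_p: keep tokens until their cumulative probability reaches p.
    kept, cum = [], 0.0
    for i, p in probs:
        kept.append((i, p))
        cum += p
        if cum >= top_p:
            break
    # Renormalise the surviving tokens and draw one.
    z = sum(p for _, p in kept)
    r, acc = random.random() * z, 0.0
    for i, p in kept:
        acc += p
        if acc >= r:
            return i
    return kept[-1][0]
```

With a very low temperature and `top_k=1` (or a tiny `top_p`), sampling collapses to always picking the most likely token, which is why low temperatures make outputs predictable.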
-
Sim Swap Fraud
Harpreet Singh Sachdev
In this era of technological advancement, the double-edged nature of technology is evident in the emergence of cyber threats like SIM swap attacks. This sophisticated form of high-tech fraud involves manipulating individuals by momentarily disrupting their phone network, followed by a call from a purported representative of their cell phone company. The unsuspecting victim is then coerced into pressing a button, which compromises their SIM card and gives hackers unauthorized access to sensitive information, enabling fraudulent bank transactions. The attackers leverage extensive personal information gathered through online searches, making it crucial for individuals to minimize their digital footprint and treat unknown calls and messages with caution. Safeguards include strong account security practices, authentication apps, and limiting the disclosure of personal details online. Awareness and vigilance are key to mitigating the risks of this type of cyber attack.
-
Machine Learning vs Deep Learning
Harpreet Singh Sachdev
This article discusses the dilemma of choosing between machine learning and deep learning algorithms for data science problems. It notes that machine learning suits datasets with fewer features, where parsing the data and learning patterns is relatively straightforward, citing examples like the recommendation algorithms in streaming services. Deep learning, a subset of machine learning, is recommended for scenarios with many features or multidimensional data, particularly when extracting features becomes tedious. The article illustrates this with a cat vs dog image classifier, highlighting the challenge of dealing with a vast number of features and the use of convolutional neural networks (CNNs) to extract relevant features automatically. It also discusses dimensionality reduction techniques like PCA, concluding that deep learning becomes beneficial as data volume grows and feature extraction remains challenging even after dimensionality reduction. The key distinction lies in deep learning models' ability to assess prediction accuracy without manual intervention.
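To make the feature-count problem concrete: even a modest image flattens into tens of thousands of raw features, and PCA is one classical way to compress them before a traditional ML model. A minimal NumPy sketch (the data here is synthetic, purely for illustration):

```python
import numpy as np

# A 100x100 RGB image already flattens into 100*100*3 = 30,000 raw
# features -- the regime where manual feature engineering breaks down.
n_raw_features = 100 * 100 * 3

# Minimal PCA via SVD: project strongly correlated 2-D data onto its
# single dominant principal component.
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
data = np.hstack([x, 2 * x + rng.normal(scale=0.1, size=(200, 1))])

centered = data - data.mean(axis=0)        # PCA assumes centered data
_, _, vt = np.linalg.svd(centered, full_matrices=False)
reduced = centered @ vt[:1].T              # keep the top component only
```

When even the reduced representation stays unwieldy, that is the signal (per the article's argument) to let a CNN learn the features instead.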
-
Choosing number of Hidden Layers and number of hidden neurons in Neural Networks
Harpreet Singh Sachdev
This article provides a pragmatic approach to the often challenging task of choosing a neural network architecture during deep learning model development. It emphasizes considering the complexity and dimensionality of the data when deciding the number of hidden layers: linearly separable data may need no hidden layers, moderately complex data can do with 1 to 2, and more complex data may need 3 to 5 for an optimal solution. The article also offers guidelines for the number of nodes in each hidden layer, advocating a decreasing pattern to aid pattern and feature extraction. It acknowledges that these rules are flexible and can be adjusted for specific use cases and problem statements, noting that factors such as overfitting may be influenced by the chosen architecture. Overall, the article offers a concise set of rules to streamline neural network design and ease the tuning of hidden layers and nodes.
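As one illustration, the rules of thumb above can be turned into a tiny helper. This is a loose reading of the heuristics, not a formula from the article; the complexity labels and the geometric step-down between layer sizes are my assumptions:

```python
def hidden_layer_plan(n_inputs, n_outputs, complexity):
    """Sketch of the rules of thumb: 0 hidden layers for linearly
    separable data, ~2 for moderately complex data, ~4 for complex
    data, with node counts shrinking from input toward output.
    The geometric interpolation is an illustrative choice."""
    n_layers = {"linear": 0, "moderate": 2, "complex": 4}[complexity]
    if n_layers == 0:
        return []
    sizes = []
    for i in range(1, n_layers + 1):
        # Step down geometrically from n_inputs toward n_outputs.
        frac = i / (n_layers + 1)
        size = round(n_inputs * (n_outputs / n_inputs) ** frac)
        sizes.append(max(size, n_outputs))
    return sizes

plan = hidden_layer_plan(64, 2, "moderate")  # e.g. a small tabular net
```

For 64 inputs and 2 outputs this yields two hidden layers with a decreasing node count, matching the article's "funnel" recommendation; the exact numbers should still be tuned per problem.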
-
Fake News Detection using Deep Learning
Harpreet Singh Sachdev
Fake news is a major concern in our society right now. Fake news is any news that is factually wrong or misrepresents the facts, and that spreads virally (or to a targeted audience). We believe what we see, and sometimes that leads to catastrophic results. Some time back, WhatsApp announced that it was deleting 1.5 million accounts every month to prevent the spread of fake news.
It has gone hand-in-hand with the rise of the data-driven era, which is no coincidence when you consider the sheer volume of data we generate every second. To deal with fake news, a detection system can be built that classifies news as fake or real. Recurrent neural networks are a natural fit, since the sequence of words matters when differentiating real news from fake. Accuracy can be further increased with encoder-decoder architectures and, beyond that, an attention mechanism.
-
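A toy illustration of why word order, and hence a recurrent model, matters: a bag-of-words representation cannot tell two headlines apart if they use the same words in a different order. Both sentences below are contrived examples, not real news:

```python
from collections import Counter

# Two "headlines" built from the same words in different orders.
a = "officials confirm the report was false"
b = "the report officials confirm was false"  # contrived reordering

# Bag-of-words: only word counts survive, so the two look identical.
bow_a = Counter(a.split())
bow_b = Counter(b.split())

# A sequence model (like the RNN described above) sees the ordered
# token lists, which clearly differ.
seq_a, seq_b = a.split(), b.split()
```

Because `bow_a == bow_b` while `seq_a != seq_b`, any order-blind classifier must score these identically; an RNN, reading token by token, does not have that limitation.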
Tensorflow vs Keras vs PyTorch vs Theano
Harpreet Singh Sachdev
The article discusses the growing popularity of artificial intelligence (AI) since 2016, with a significant number of big companies incorporating AI into their businesses, as highlighted by a 2018 McKinsey report. The potential value of AI is explored across industries, with estimates reaching $300 billion in banking and $600 billion in retail. The focus then shifts to deep learning frameworks for building AI models, comparing TensorFlow, Keras, PyTorch, and Theano. TensorFlow, known for dataflow programming and graph computation, is the most widely used library, offering scalability and accessibility. Keras is a high-level neural network API capable of running on top of TensorFlow, CNTK, or Theano. PyTorch, developed by Facebook, is praised for its dynamic computational graph and GPU support, while Theano, developed at the Université de Montréal, is lauded for efficient symbolic differentiation and multi-dimensional array computation. The article concludes that TensorFlow's accessibility, visualization capabilities, and scalability have made it the most popular choice for building models in industry.
-
How does Google Search Mechanism work?
Harpreet Singh Sachdev
This article provides insight into the intricacies of Google's search process, explaining the remarkable speed and efficiency with which it delivers results. The author details the steps involved, from accepting and parsing the query to using an extensive index of the internet rather than searching the web directly. The analogy of looking up contacts in a phone book illustrates the efficiency of Google's indexing. The article also highlights Google's vast data centers, each equipped with numerous servers and automated crawlers, which underpin its impressive lookup capabilities. It then delves into Google's PageRank algorithm, which considers factors like keyword frequency, a page's age, and the number of incoming links to rank search results. Overall, the article demystifies the mechanisms behind Google's ability to return 70 million results in a fraction of a second.
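The link-counting idea behind PageRank can be sketched with a few lines of power iteration. This is the textbook simplification on a toy three-page web, not Google's production algorithm, which blends many more signals:

```python
def pagerank(links, damping=0.85, iters=50):
    """Power-iteration PageRank over a dict {page: [outgoing links]}.
    Each page repeatedly shares its rank among the pages it links to;
    the damping factor models a surfer who sometimes jumps randomly."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start uniform
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            # A dangling page (no out-links) spreads its rank evenly.
            targets = outs if outs else pages
            share = rank[p] / len(targets)
            for q in targets:
                new[q] += damping * share
        rank = new
    return rank

# Toy web: "a" has two incoming links, so it should rank highest.
ranks = pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]})
```

The ranks always sum to 1, and the page with the most (and best-sourced) incoming links wins, which is the intuition the article describes.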
-
How Data Science can be helpful in identifying and curing Mental Illness?
Harpreet Singh Sachdev
The article highlights the escalating global mental health crisis, with over 16% of the population affected by conditions such as anxiety and depression. Despite the severity of the issue, more than half of these cases remain untreated, with profound consequences for individuals, families, and economies; the shortage of mental health specialists exacerbates the problem. The article proposes leveraging data science to address the crisis through innovative methods: behavioral detection systems using sentiment analysis on social media, AI-powered chatbots for psychometric tests and counseling, parental-control apps for early monitoring, and facial-expression monitoring systems. The main challenge lies in data collection, with a focus on analyzing internet activity, search history, and content consumption patterns, particularly for teens through adults up to 40. Integrating AI-powered solutions and data-driven insights is suggested as a way to detect and treat mental health issues in their early stages, potentially revolutionizing mental healthcare.
Projects
-
CI and CD for Machine Learning
-
Data science projects, especially those involving machine learning models, are iterative by nature. Data scientists frequently experiment with different datasets, algorithms, and hyperparameters to optimize model performance. Manual testing throughout this process can be time-consuming and error-prone.
-
Text Autosummarization
-
Built a text auto-summarization tool that can compress a large text into a minimal set of meaningful sentences while keeping the text's emotion intact.
The model is built using TF-IDF and natural language processing via NLTK, together with Python's heapq library. -
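In the same spirit, here is a simplified extractive summarizer using TF-IDF scores and heapq.nlargest. It skips NLTK's tokenizers and stop-word lists, so it is a sketch of the approach rather than the project's actual code:

```python
import heapq
import math
from collections import Counter

def summarize(text, n_sentences=2):
    """Score each sentence by the average TF-IDF weight of its words
    (treating each sentence as a 'document'), then keep the top few
    sentences via heapq.nlargest."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    docs = [s.lower().split() for s in sentences]
    n = len(docs)
    # Document frequency and a smoothed inverse document frequency.
    df = Counter(w for d in docs for w in set(d))
    idf = {w: math.log(n / df[w]) + 1 for w in df}
    def score(d):
        tf = Counter(d)
        return sum(tf[w] * idf[w] for w in d) / len(d)
    top = heapq.nlargest(n_sentences, sentences,
                         key=lambda s: score(s.lower().split()))
    return ". ".join(top) + "."
```

A real pipeline would add proper sentence tokenization and stop-word removal (NLTK's `sent_tokenize` and stop-word corpus), but the scoring-then-`nlargest` shape is the core of the technique.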
Personal Voice Assistant
-
Building an intelligent personal assistant (IPA) that can perform tasks or services as instructed through voice commands, via a Python-based interface.
Working toward increasing its scope using an LSTM model (a recurrent neural network). -
Visiting Card Boundary Detection and Details Extraction
-
Built an algorithm that opens the camera, captures a picture, detects the visiting card's boundary, and finally extracts the text from it.
The algorithm uses contour detection in OpenCV and pytesseract for text extraction. -
CV Extraction
-
Created an automated system that extracts minute details such as Name, Phone No, Email, Location, Educational Qualifications, Gender, Date of Birth, Projects, and Skill Set from a Resume/Curriculum Vitae using natural language processing and Python when any resume file is fed as input. The code can also be deployed to the cloud.
-
Deep Neural Network for Cat Images classification
-
Built a cat vs non-cat image classifier by training a deep neural network, achieving 91% accuracy. Used a basic neural network approach and also deployed the model using TensorFlow.
-
MARKET MIX ANALYSIS
-
• Applied regression techniques to predict the company's sales and set future sales targets.
• Tools & Technology Used- Python, regression, and Matplotlib for visualization.
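The regression step can be sketched in closed form for a single feature. The numbers below are made up purely for illustration; the project itself presumably used library implementations over real sales data:

```python
def fit_line(xs, ys):
    """Ordinary least squares for one feature, in closed form:
    slope = cov(x, y) / var(x), intercept = mean(y) - slope*mean(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical quarterly figures lying exactly on y = 2x + 1.
slope, intercept = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
```

The fitted line then extrapolates to future periods, which is how a regression model turns historical sales into a sales target.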
-
STEGANOGRAPHY USING PYTHON
-
• Used Python to encode and decode secret messages in images (hiding text in images and retrieving it without compromising image quality).
• Technology Used- Python
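The usual technique for this is least-significant-bit (LSB) embedding, which is presumably what the project uses. A byte-level sketch, operating on a raw byte buffer rather than a real image file:

```python
def embed(pixels, message):
    """Hide message bytes in the least significant bit of successive
    carrier bytes; each carrier byte changes by at most 1, which is
    why the image quality is visually unaffected."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("message too long for carrier")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # overwrite only the lowest bit
    return bytes(out)

def extract(pixels, n_bytes):
    """Reassemble n_bytes of hidden data from the carrier's lowest bits."""
    out = bytearray()
    for i in range(n_bytes):
        byte = 0
        for b in pixels[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | (b & 1)
        out.append(byte)
    return bytes(out)

carrier = bytes(range(200))        # stand-in for raw image pixel data
stego = embed(carrier, b"hi")
```

Running `extract(stego, 2)` recovers `b"hi"`, while no carrier byte differs from the original by more than 1.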
-
STACK OVERFLOW SURVEY
-
• Applied Matplotlib and Seaborn to analyze Stack Overflow survey data, cleaning it and visualizing it with several plots.
• Technology Used- Python, Matplotlib and Seaborn
-
SCRAPY AND MYSQL
-
• Used web scraping to collect data from online sites and stored it in MySQL tables for further processing.
• Technology Used- Web scraping (Scrapy) and MySQL
-
FLIGHT CRASH ANALYSIS
-
• Applied big data techniques with Hadoop to study flight data recorder data and derive the cause of a crash.
• Technology Used- Hadoop and Power BI for visualization.
Honors & Awards
-
Gold Medal in Graduation
Vivekananda Global University
Won Gold Medal for scoring 9.99 CGPA in my B.Tech
Recommendations received
4 people have recommended Harpreet Singh
Websites
- Personal Website
-
https://topmate.io/hss245/
- Personal Website
-
https://github.com/hss245
- Company Website
-
https://www.sachdevaisolutions.com/