Unlike other types of software application, deep learning applications don’t have a linear lifecycle: models need to be constantly refined, optimized, and tested.
In a nutshell, model robustness grows in direct proportion to model optimization!
As a deep learning practitioner, you cannot deny that choosing the correct hyperparameters for your model is a critical and painful task.
So, Google’s TensorFlow team created an awesome framework to solve the pain points of hyperparameter tuning and optimization.
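To make the idea concrete, here is a minimal, framework-agnostic sketch of random search over a hyperparameter space — the `evaluate` function and the search space are invented for illustration; in practice it would train and score a real model (the TensorFlow tooling mentioned above automates a similar loop):

```python
import random

# Hypothetical "train and evaluate" step: in real use this would fit a
# model and return a validation score. Here it just rewards configurations
# close to a pretend optimum (learning_rate=0.01, num_layers=3).
def evaluate(learning_rate, num_layers):
    return -abs(learning_rate - 0.01) - abs(num_layers - 3)

search_space = {
    "learning_rate": [0.1, 0.01, 0.001],
    "num_layers": [1, 2, 3, 4],
}

random.seed(0)
best_score, best_params = float("-inf"), None
for _ in range(10):  # try 10 random configurations
    params = {name: random.choice(values) for name, values in search_space.items()}
    score = evaluate(**params)
    if score > best_score:
        best_score, best_params = score, params

print(best_params)
```

Random search is only one strategy; dedicated tuning frameworks add smarter search (e.g. Bayesian optimization), early stopping, and bookkeeping of trials.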
Artificial Intelligence is, in essence, the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. A subset of artificial intelligence is Natural Language Processing, which is concerned with the interactions between computers and human language, in particular how to program computers to process and analyze large amounts of natural language data.
We use natural language processing applications, or benefit from them, every day. Here are some examples of the most widely used NLP applications:
An activation function is a function added to an artificial neural network to help the network learn complex patterns in the data. By analogy with the neuron-based model in our brains, the activation function is what ultimately decides what is to be fired to the next neuron.
In artificial neural networks, the activation function of a node defines the output of that node given an input or set of inputs. A standard integrated circuit can be seen as a digital network of activation functions that can be “ON” (1) or “OFF” (0), depending…
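As a quick illustration, here are minimal plain-Python sketches of three activation functions commonly used in deep learning (the choice of these three is mine, not the post’s):

```python
import math

def sigmoid(x):
    """Squashes any real input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    """Passes positive inputs through unchanged, zeroes out negatives."""
    return max(0.0, x)

def tanh(x):
    """Squashes input into (-1, 1), centered at zero."""
    return math.tanh(x)

for x in (-2.0, 0.0, 2.0):
    print(f"x={x:+.1f}  sigmoid={sigmoid(x):.3f}  relu={relu(x):.1f}  tanh={tanh(x):+.3f}")
```

Note how each function maps the same inputs differently — it is exactly this nonlinearity that lets a network learn patterns a purely linear model cannot.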
This blog is not for people who fear math!
(But make sure to give it a read if that offends you :P)
By now, you must be thinking of the robots in Michael Bay’s series of American science-fiction action films!
Well nope, this is something different.
The Transformer is a deep learning model introduced in 2017, with its architecture proposed in the paper “Attention Is All You Need”. It is based on a self-attention mechanism and is used primarily in the field of natural language processing (NLP).
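To give a feel for that self-attention mechanism, here is a bare-bones sketch of scaled dot-product attention in plain Python — real Transformers use tensor libraries and learned query/key/value projections, and the toy vectors below are invented for illustration:

```python
import math

def softmax(scores):
    """Turn raw scores into weights that are positive and sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    d_k = len(keys[0])  # key dimension, used for scaling
    outputs = []
    for q in queries:
        # similarity of this query to every key, scaled by sqrt(d_k)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in keys]
        weights = softmax(scores)
        # each output is a weighted sum of the value vectors
        outputs.append([sum(w * v[i] for w, v in zip(weights, values))
                        for i in range(len(values[0]))])
    return outputs

# Tiny example: 2 tokens with 2-dimensional embeddings.
q = k = v = [[1.0, 0.0], [0.0, 1.0]]
print(attention(q, k, v))
```

Each token’s output mixes information from every other token, weighted by how similar their representations are — that all-pairs mixing is what “self-attention” refers to.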
Have you ever found yourself in a situation where, upon training your machine learning model, you obtain an accuracy above 90%, but then realize that the model is predicting everything as the class with the majority of records?
What is Imbalanced Data?
Imbalance means that the number of data points available for different classes is different. For example, consider the famous dataset of ‘normal’ and ‘fraudulent’ credit card transactions, where the number of records for ‘normal’ transactions is 284,315, while that of ‘fraud’ transactions is only 492.
When you deal with such a problem, you are likely to work…
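One common remedy (assumed here for illustration — the post’s own approach may differ) is random oversampling: resampling minority-class rows with replacement until the classes are balanced. A minimal sketch:

```python
import random
from collections import Counter

def oversample(rows, labels):
    """Duplicate minority-class rows at random until classes are balanced."""
    counts = Counter(labels)
    majority_size = max(counts.values())
    balanced_rows, balanced_labels = list(rows), list(labels)
    for cls, n in counts.items():
        idx = [i for i, y in enumerate(labels) if y == cls]
        extra = random.choices(idx, k=majority_size - n)
        balanced_rows += [rows[i] for i in extra]
        balanced_labels += [cls] * len(extra)
    return balanced_rows, balanced_labels

random.seed(0)
rows = [[i] for i in range(10)]
labels = ["normal"] * 8 + ["fraud"] * 2
_, new_labels = oversample(rows, labels)
print(Counter(new_labels))  # both classes now have 8 records
```

Oversampling should be applied only to the training split (never the test set), and more sophisticated techniques exist, such as generating synthetic minority samples rather than exact duplicates.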
When I was beginning my way in data science, I often faced the problem of choosing the most appropriate algorithm for my specific problem. If you’re like me, when you open some article about machine learning algorithms, you see dozens of detailed descriptions. The paradox is that they don’t ease the choice.
Well, so that you don’t feel off track, I would suggest getting a good understanding of the implementation and mathematical intuition behind several supervised and unsupervised machine learning algorithms, like -
Swagger is in essence an Interface Description Language for describing RESTful APIs, expressed using JSON. Swagger is used together with a set of open-source software tools to design, build, document, and use RESTful web services. Swagger includes automated documentation, code generation, and test-case generation.
The Swagger UI is an open-source project to visually render documentation for an API defined with the OpenAPI (Swagger) Specification. Swagger UI lets you visualize and interact with the API’s resources without having any of the implementation logic in place, making it easy for back-end implementation and client-side consumption. …
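As a sketch of what such a description looks like, here is a minimal OpenAPI 3.0 document for a single, hypothetical endpoint (the API name and path are invented); Swagger UI can render interactive documentation from a spec like this:

```json
{
  "openapi": "3.0.0",
  "info": { "title": "Hypothetical Pet Store API", "version": "1.0.0" },
  "paths": {
    "/pets": {
      "get": {
        "summary": "List all pets",
        "responses": {
          "200": { "description": "A JSON array of pets" }
        }
      }
    }
  }
}
```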
This is how your web application will look …
Exploratory Data Analysis (EDA) is used to explore different aspects of the data we are working on. It should be performed before creating a model or running a hypothesis test, in order to find the patterns, visual insights, and other characteristics the dataset holds. In short, EDA is a general approach to identifying what the data is telling us by visualizing the dataset, before any formal modelling.
Analyzing a dataset is a hectic task and takes a lot of time, according…
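A first pass of EDA often starts with just a few calls. Here is a minimal sketch on a tiny invented dataset (the post’s actual dataset is not shown):

```python
import pandas as pd

# Tiny made-up dataset, purely for illustration.
df = pd.DataFrame({
    "age": [23, 31, 45, 31, 27],
    "income": [40000, 52000, 61000, 52000, 48000],
})

print(df.shape)         # rows and columns
print(df.describe())    # count, mean, std, quartiles per numeric column
print(df.isna().sum())  # missing values per column
```

From there, one typically moves on to visualizations such as histograms, box plots, and correlation heatmaps to surface the patterns described above.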
Good news for people looking at this blog :P
“THE REAL PREREQUISITE FOR MACHINE LEARNING ISN’T MATH, IT’S DATA ANALYSIS.”
When beginners get started with machine learning, the inevitable question is “what are the prerequisites? What do I need to know to get started?”
And once they start researching, beginners frequently find well-intentioned but disheartening advice, like the following:
You need to master math. You need all of the following:
– Vector Calculus
– Differential equations
– Mathematical statistics
– Algorithm analysis
– and …… oh crap, I am not into this!