Tuesday, February 19, 2019

Points to improve your programming logic!

As we know, logic in programming is a fundamental key to being a good programmer. Depending on your job, you will use algorithms more or less: if you are a web designer you probably won't deal with complex algorithms, a front-end developer a little more, and a back-end developer much more. First you need to understand the requirement thoroughly, then go through the workflows, class modules, etc.

This article is for everyone who is a software engineer: if we manage to develop good logic, we will be able to move between different languages in a flexible way. Try not to depend on the language.

  • Think to solve
Programming is about solving problems. A good technique is to split a big problem into smaller ones so you can focus on each piece in a better way; you can sketch the solution in pseudocode, in a program or on a simple piece of paper. Here you need to understand concepts like loops, conditions, workflows, etc.
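As a sketch of that idea, here is a hypothetical example: finding the most frequent word in a piece of text, split into small, named steps that each solve one sub-problem.

```python
def normalize(text):
    """Step 1: lowercase the text and strip punctuation."""
    return "".join(c if c.isalnum() or c.isspace() else " " for c in text.lower())

def tokenize(text):
    """Step 2: split the text into words."""
    return text.split()

def count_words(words):
    """Step 3: loop over the words and count each one."""
    counts = {}
    for word in words:
        counts[word] = counts.get(word, 0) + 1
    return counts

def most_frequent(text):
    """Combine the small solutions into the full one."""
    counts = count_words(tokenize(normalize(text)))
    return max(counts, key=counts.get)

print(most_frequent("To be, or not to be"))  # -> to
```

Each small function is easy to reason about and test on its own, which is exactly the point of splitting the problem.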

  • Practice
The most important point is this: practice. An algorithm is nothing more than an ordered and finite set of operations that we carry out for the sole purpose of finding a solution to a problem. So practice with simple problems to build better logic.
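A classic practice problem that shows what "an ordered and finite set of operations" means is Euclid's algorithm for the greatest common divisor:

```python
def gcd(a, b):
    # Repeat until the remainder is zero; each step strictly shrinks b,
    # so the loop is guaranteed to finish (the "finite" part).
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # -> 6
```

Working through problems like this by hand first, then in code, is a good way to train your logic.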

  • Learn about Data Structures and Algorithms
Learning about data structures will give you a better plan for approaching your problems and building efficient software. You can also play games like chess and practice mathematics.
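A quick sketch of why this matters: a membership test on a Python list scans every element (O(n)), while a set uses hashing (O(1) on average). Picking the right structure changes performance dramatically.

```python
import timeit

items = list(range(100_000))
as_list = items
as_set = set(items)

# Looking up the last element repeatedly:
list_time = timeit.timeit(lambda: 99_999 in as_list, number=100)
set_time = timeit.timeit(lambda: 99_999 in as_set, number=100)

print(f"list: {list_time:.4f}s  set: {set_time:.4f}s")  # the set is far faster
```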

  • Learn programming paradigms
A very good point is to learn programming paradigms. Probably the most popular one is the object-oriented paradigm (OOP). A programming paradigm is like a blueprint to follow when creating our projects. You can also learn functional programming to see how to develop programs and solve problems in a different way.
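To make the contrast concrete, here is the same small made-up task, doubling the even numbers in a list, written in both styles:

```python
# Object-oriented style: data and behaviour bundled in a class.
class NumberList:
    def __init__(self, numbers):
        self.numbers = numbers

    def doubled_evens(self):
        result = []
        for n in self.numbers:
            if n % 2 == 0:
                result.append(n * 2)
        return result

# Functional style: a pure function with no mutable state.
def doubled_evens(numbers):
    return [n * 2 for n in numbers if n % 2 == 0]

nums = [1, 2, 3, 4, 5, 6]
print(NumberList(nums).doubled_evens())  # -> [4, 8, 12]
print(doubled_evens(nums))               # -> [4, 8, 12]
```

Both produce the same result; the paradigm changes how you organize the thinking, not what the program can do.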

  • Look at other people’s code
In programming we have many ways to solve problems; maybe another person has already solved the problem you have in an optimal and simple way. Looking into other people's minds is essential to advance as a programmer, and GitHub is a great place to see a lot of great projects.


Monday, February 18, 2019

What is Machine Learning?

Nowadays, robotics and data science are growing around the world, and several sub-branches have come up, like Big Data, Machine Learning, Cognitive Intelligence, etc. In this article we are going to talk about Machine Learning.

Machine learning is the scientific study of algorithms and statistical models that computer systems use to effectively perform a specific task without explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model of sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task; it is used where it is infeasible to develop an algorithm of specific instructions for the task. Mathematics is essential here: the study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a field of study within machine learning that focuses on exploratory data analysis through unsupervised learning. Domain knowledge is also a most important part of ML for getting good accuracy.

The concept here is that the machine learns by itself from experience. Machine learning covers many tasks, and they are classified into several broad categories. In supervised learning, the algorithm builds a mathematical model from a set of data that contains both the inputs and the desired outputs. For example, if the task were determining whether an image contained a certain object, the training data for a supervised learning algorithm would include images with and without that object (the input), and each image would have a label (the output) designating whether it contained the object. In special cases, the input may be only partially available, or restricted to special feedback. Semi-supervised learning algorithms develop mathematical models from incomplete training data, where a portion of the sample inputs don't have labels.
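A minimal supervised-learning sketch, with made-up data: a 1-nearest-neighbour classifier. The training data pairs each input (two numeric features) with a desired output label, and the model predicts the label of the closest known example.

```python
import math

# (input, desired output) pairs -- the "training data"
training_data = [
    ((150, 50), "small"),
    ((160, 60), "small"),
    ((180, 90), "large"),
    ((190, 100), "large"),
]

def predict(point):
    # Pick the label of the training example nearest to the new input.
    def distance(example):
        (x, y), _ = example
        return math.dist(point, (x, y))
    _, label = min(training_data, key=distance)
    return label

print(predict((155, 55)))  # -> small
print(predict((185, 95)))  # -> large
```

Real systems use far richer models, but the shape is the same: labelled examples in, a prediction rule out.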

Classification algorithms and regression algorithms are types of supervised learning.
  • Classification algorithms are used when the outputs are restricted to a limited set of values, e.g. email filtering.
  • Regression algorithms are named for their continuous outputs, meaning they may have any value within a range. Examples of continuous values are the temperature, length, or price of an object.
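A small regression sketch with made-up data: fitting a straight line with the closed-form least-squares formulas, so the prediction is a continuous value (like a price) rather than a label.

```python
xs = [1.0, 2.0, 3.0, 4.0]   # e.g. size of an object
ys = [2.1, 4.0, 6.2, 7.9]   # e.g. its price

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# slope = covariance(x, y) / variance(x)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def predict(x):
    return intercept + slope * x

print(round(predict(5.0), 2))  # roughly 9.95, a continuous output
```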
Machine learning and data mining often employ the same methods and overlap significantly, but while machine learning focuses on prediction based on known properties learned from the training data, data mining focuses on the discovery of (previously) unknown properties in the data (this is the analysis step of knowledge discovery in databases). Data mining (not data warehousing) uses many machine learning methods, but with different goals; on the other hand, machine learning also employs data mining methods as "unsupervised learning" or as a pre-processing step to improve learner accuracy.

Machine learning also has intimate ties to optimization: many learning problems are formulated as minimization of some loss function on a training set of examples. Loss functions express the discrepancy between the predictions of the model being trained and the actual problem instances (for example, in classification, one wants to assign a label to instances, and models are trained to correctly predict the pre-assigned labels of a set of examples). The difference between the two fields arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples.
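The optimization view can be sketched in a few lines: minimizing a mean-squared-error loss on a training set by gradient descent. The model here is a single weight w, and the made-up data follows y = 3x, so the descent should drive w toward 3.

```python
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]

def loss(w):
    # Discrepancy between the model's predictions (w * x) and the targets.
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

w = 0.0                 # initial guess
learning_rate = 0.01
for _ in range(200):
    # Gradient of the loss with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= learning_rate * grad

print(round(w, 3))  # converges toward 3.0
```

Generalization is the part this sketch leaves out: driving the training loss to zero says nothing by itself about performance on unseen samples.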

Machine learning and statistics are closely related fields. According to Michael I. Jordan, the ideas of machine learning, from methodological principles to theoretical tools, have had a long pre-history in statistics. He also suggested the term data science as a placeholder to call the overall field. Leo Breiman distinguished two statistical modelling paradigms: data model and algorithmic model, wherein "algorithmic model" means more or less the machine learning algorithms like Random forest. Some statisticians have adopted methods from machine learning, leading to a combined field that they call statistical learning.