My Computer Vision Roadmap

Ramazan Güven
25 min read · Mar 11, 2020

--

My Notes, Advice, and a 70-Day Challenge

I will repeat what I know for a week and then I will go deeper.

My background before starting this 70-day challenge:

Python, Git, machine learning algorithms and tools (Scikit-Learn, Keras, TensorFlow); beginner level with Seaborn + Matplotlib; intermediate level with OpenCV + NumPy.

Courses I have completed: Coursera: Neural Networks and Deep Learning; Udemy: DataiTeam courses [in Turkish] (about four courses in this area).

I will be sharing the code I write in my GitHub account => Rguven

— — — — — — — — — — — — — — — — — — — — — — — — — —

#Day 1 (Decision Day for me)

01/February/2020

I searched for computer vision and image processing examples and real-life application areas. Trust me, this area is awesome.

I have decided what I should learn in this area for now.

In short:

Programming Language: Python

Tools: OpenCV, NumPy

Mathematical Knowledge: linear algebra, matrix operations

Machine Learning Algorithms: supervised and unsupervised techniques, including clustering. Specifically, I will focus on K-means clustering, SVMs, and Bayesian probability theory.

Deep Learning: neural networks, CNN architectures, GANs.

Image Processing / Digital Signal Processing: digital filters, image transformations.

  • Python (I think I know it so-so, but I will improve every day, step by step.)

I think Python is simple and useful, which is why so many developers use it. Python is usually used in backend web development and machine learning applications. I will focus on machine learning, deep learning, and computer vision.

  • OpenCV (I already know it, but I will keep improving from now on.)

I will combine machine learning and OpenCV, and later on I will combine them with CNNs (Convolutional Neural Networks).

Machine learning approach: object recognition with classical machine learning requires features to be defined first, before classification.

Deep learning approach: object recognition with deep learning does not need manually defined features.

  • NumPy (a very powerful tool for mathematical calculations)

These days, Sentdex has an idea that I think will be awesome: he will build his own neural network library using only Python and NumPy. I will be following this series on YouTube.

###########Today’s Decisions#######

I will work on linear algebra, matrix operations, and general mathematical operations. At the same time, I'm going to solve NumPy exercises and share them on my GitHub account. A tiny warm-up sketch is below.
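Just to show the kind of matrix operations I mean (nothing from the exercise set itself, only the basics):

```python
import numpy as np

# A small symmetric, invertible matrix and a vector to exercise the basics.
A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])
x = np.array([1., 2., 3.])

print(A @ x)                 # matrix-vector product
print(A.T)                   # transpose
print(np.linalg.det(A))      # determinant
print(np.linalg.inv(A) @ A)  # inverse times A is (numerically) the identity
w, v = np.linalg.eig(A)      # eigenvalues and eigenvectors
print(w)
```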

Watch List for Tomorrow: 3Blue1Brown ( I watched it before but I want to watch it again at 1.5x speed :) )

Read List for Tomorrow: 1) A Visual Intro to NumPy, 2) Towards Data Science, 3) From Python to NumPy, 4) Stanford NumPy Tutorial

Exercises List for Tomorrow: rougier/numpy-100

You can access the addresses by clicking on the underlined words.

#################################

I learned new things at the end of this decision day:

1) Scale-Invariant Feature Transform (SIFT):
SIFT extracts key points of objects and stores them in a database. When categorizing a new image, SIFT matches the key points of that image against those found in the database.
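To make this concrete, here is a minimal sketch of extracting and matching SIFT keypoints with OpenCV. The image paths are placeholders, and note that SIFT_create needs OpenCV 4.4+ (in older versions it lives in the contrib package):

```python
import cv2

# Load a query image and a scene image in grayscale (paths are placeholders).
query = cv2.imread("object.jpg", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()                      # SIFT detector/descriptor
kp1, des1 = sift.detectAndCompute(query, None)
kp2, des2 = sift.detectAndCompute(scene, None)

# Brute-force matching with Lowe's ratio test to keep only good matches.
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} good matches out of {len(matches)}")
```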

Okay, I think this is enough for today. I'm going to play basketball. See you :)

— — — — — — — — — — — — — — — — — — — — — — — — — —

#Day 2 (On The Job)

I was impatient and watched almost 10 videos of 3Blue1Brown last night; I think that's enough. I went through the reading list, which took almost 1.5 hours.
I have solved half of the exercises so far; I will finish the rest in the future (I hope :) ).

###########Today’s Decisions#######

Tomorrow I will look at OpenCV, which is a must-have library for this job. I already know OpenCV more or less, but I want to go deeper and progress regularly. I will also look at the SciPy library.
I have some code left over from before and I will share it on my GitHub account,
but I have to clean it up first.

Watch List for Tomorrow: Sentdex (first 15 videos)

Read List for Tomorrow: 1) Computer Vision for Beginners, 2) Introduction to Computer Vision, 3) Introduction to OpenCV Part I, 4) Introduction to OpenCV Part II

Exercises List for Tomorrow: on GitHub :)

#################################

What I’ve searched today on Google:

1) What is the difference between lists and arrays in Python?

2) What is the difference between the flatten and ravel functions in NumPy? (A small check is below.)
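The short answer I found for the second question, as a runnable check: ravel returns a view when it can, while flatten always returns a copy.

```python
import numpy as np

a = np.arange(6).reshape(2, 3)

r = a.ravel()    # usually a view of the original data (no copy)
f = a.flatten()  # always a fresh copy

r[0] = 99        # modifying the view changes the original array
f[1] = 77        # modifying the copy does not touch the original

print(a)         # [[99  1  2], [ 3  4  5]] -> ravel's change is visible
print(f)         # [ 0 77  2  3  4  5]      -> flatten's copy changed only locally
```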

— — — — — — — — — — — — — — — — — — — — — — — — — —

#Day 3 ( Fighting With OpenCV )

I watched some of Sentdex's videos again last night :) I could not stop :) I also went through the reading list, which took me quite a long time, and I wrote some code with OpenCV.

###########Today’s Decisions#######

Today I worked on OpenCV, and I will keep looking at it tomorrow, because it is required for this job.

Read List: the OpenCV library tutorial on the official website.

Exercises List: I will read the OpenCV tutorial on the website and rewrite the code myself.

#################################

— — — — — — — — — — — — — — — — — — — — — — — — — —

#Day 4 ( Keep Fighting With OpenCV )

Today I worked on OpenCV, and I will continue tomorrow. I am following the OpenCV tutorial. I am not afraid to go deeper. :)

#Day 5 ( Keep Fighting With OpenCV )

Today I worked on OpenCV.

#Day 6 ( Keep Fighting With OpenCV )

Today I worked on OpenCV. I studied OpenCV's own website.

#Day 7 ( Keep Fighting With OpenCV )

Today I worked on OpenCV.

Today: I investigated K-means and SVM, two machine learning algorithms used for classification, in depth.

#Day 8 ( Keep Fighting With OpenCV )

I'm ending my OpenCV research today. If necessary, I now have a repo I can come back to and review in the future. I gained a lot of information from this research; OpenCV is a really powerful library. As a result, I set a new goal for myself about object detection. I know this method is not state of the art nowadays, but sometimes it may be necessary. We often use ready-made things and I don't like that situation, so I want to know how I can do my own object detection with the Haar Cascade method. My plan is to create my own .xml cascade file. (A small usage sketch with a pre-trained cascade is below.)
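For reference, this is roughly what using a cascade looks like; my own trained .xml would simply replace the bundled face cascade used here, and the image path is a placeholder:

```python
import cv2

# One of OpenCV's bundled pre-trained cascades; a custom cascade would
# simply replace this path with your own trained .xml file.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

img = cv2.imread("people.jpg")                 # placeholder image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# scaleFactor and minNeighbors are the usual knobs to tune detections.
boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in boxes:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detections.jpg", img)
```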

— — — — —

Extra Info: Haar cascades generally save your life in terms of FPS; you can run them fast. On the other hand, suppose you need real-time cup detection: you could use the pre-trained yolov3.weights model, since YOLO is faster than the other deep learning detectors for now.

On a CPU machine you can get approximately 5 FPS on Ubuntu 16.04 LTS and 2 FPS on Windows 10.

On a GPU machine you can get approximately 11 FPS.

You may be able to increase FPS by removing classes from the YOLOv3 COCO configuration. This is possible, but you then have to retrain the whole network from the beginning; don't worry, you will be able to train the model fairly quickly.

This answer may serve as a roadmap for you => Stack Overflow

— — — — —

###########Today’s Decisions#######

I will create my own .xml file for Haar Cascade.

I will follow this tutorial from Sentdex => PythonProgramming.net

Read List About This Issue: 1) Document, 2) Instructables, 3) Quora, 4) Pyblog, 5) TowardsDataScience

Extra Reading: Tutorial, Coding Robin

#################################

— — — — — — — — — — — — — — — — — — — — — — — — — —

#Day 9 ( Creating My Own Haar Cascade File)

I went through the reading list, applied the training steps one by one, and got a result, but it doesn't run perfectly. Training also took a little longer than expected.
For higher accuracy, the following can be done:

1) You can add new data to the training set.

2) The images you run detection on should be the same size as the training images.

3) Maybe you can change the threshold value.

I will also share code for automatically downloading images from the internet (if you prefer, you can use Google Chrome downloader extensions instead). You can look at ImageNet for images to download; ImageNet includes a lot of labeled images, which will be needed for object detection in the future. A rough downloader sketch is below.
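Here is a rough sketch of the kind of downloader I mean, assuming you already have a plain text file of image URLs (for example exported from an ImageNet synset); the file and folder names are placeholders:

```python
import os
import urllib.request


def download_images(url_file, out_dir="images"):
    """Download every URL listed (one per line) in url_file into out_dir."""
    os.makedirs(out_dir, exist_ok=True)
    with open(url_file) as f:
        urls = [line.strip() for line in f if line.strip()]

    for i, url in enumerate(urls):
        try:
            urllib.request.urlretrieve(url, os.path.join(out_dir, f"{i:05d}.jpg"))
        except Exception as exc:      # dead links are common, just skip them
            print(f"skipping {url}: {exc}")


# download_images("urls.txt")  # urls.txt is a placeholder list of image URLs
```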

###########Today’s Decisions#######

I can go a little further now. I built my own Haar cascade, and tomorrow I want to do image classification with machine learning algorithms. The most commonly used algorithms are Support Vector Machines (SVM), Logistic Regression, Random Forest, and Decision Trees. The K-means algorithm is generally used for image segmentation; I will do that the day after tomorrow.

Introduction to SVM algorithm=> Medium Towards Data Science.

Read List: Medium-SVM

Watch List: Sentdex

— — — — —

Extra Info: these classical machine learning algorithms are a bit weaker than CNN-based classification and generally give lower accuracy. However, it is always useful to know them: they are useful not only for image classification but for general data problems, so we should know them. Anyway, we will talk about CNN algorithms and architectures later.

— — — — —

#################################

— — — — — — — — — — — — — — — — — — — — — — — — — —

#Day 10 ( Image Classification with ML Algorithm)

I did image classification with the SVM algorithm. I also researched ML algorithms on Google to understand them better. A minimal sketch of what I mean is below.
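This sketch uses scikit-learn's small built-in digits dataset rather than my own images, just to show the shape of the workflow:

```python
from sklearn import datasets, metrics, svm
from sklearn.model_selection import train_test_split

# 8x8 grayscale digit images, flattened to 64-dimensional feature vectors.
digits = datasets.load_digits()
X = digits.images.reshape(len(digits.images), -1)
y = digits.target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

clf = svm.SVC(kernel="rbf", gamma=0.001)  # a classic baseline for this dataset
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print("accuracy:", metrics.accuracy_score(y_test, pred))
```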

###########Today’s Decisions#######

I worked on image classification, and tomorrow I will focus on image segmentation. I don't think I have ever done segmentation before, so tomorrow I should spend more time on it.

Read List: 1) Intro to K-means, 2) K-means on the MNIST data set, 3) Introduction to Segmentation

Watch List: YouTube

Read and Watch: ThePythonCode

GOOGLE => What is the MNIST data set? (You should understand MNIST, because this data set is very popular and useful; we will meet it again and again from now on.)

GOOGLE => What is the difference between image classification vs image segmentation vs object detection? Maybe this question is useful to you.

Maybe you have already seen the U-Net architecture; we will focus on it later.

#################################

— — — — — — — — — — — — — — — — — — — — — — — — — —

#Day 11 ( Image Segmentation with ML Algorithm)

Image segmentation is a very interesting problem, I think. I love it so much. I want to do some more research on this topic later. A minimal K-means segmentation sketch is below.
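A minimal color-based K-means segmentation sketch with OpenCV; the image path and the number of clusters K are placeholders you would tune:

```python
import cv2
import numpy as np

img = cv2.imread("scene.jpg")                      # placeholder image path
pixels = img.reshape(-1, 3).astype(np.float32)     # one row per pixel (B, G, R)

K = 4                                              # number of color clusters
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(pixels, K, None, criteria, 10,
                                cv2.KMEANS_RANDOM_CENTERS)

# Replace every pixel with its cluster center to visualize the segments.
segmented = centers[labels.flatten()].astype(np.uint8).reshape(img.shape)
cv2.imwrite("segmented.jpg", segmented)
```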

###########Today’s Decisions#######

Yes, I think I have come a little way. Tomorrow I will do handwritten digit recognition in Python using scikit-learn, because I am still working on ML algorithms.
This is a very famous exercise, so you can easily find information about it.

Read List and Watch List:

np.random.random(MNIST data set, scikit-learn library on Google) :) :)
You will be able to access a lot of information easily.

#################################

— — — — — — — — — — — — — — — — — — — — — — — — — —

#Day 12 (MNIST Using Scikit-Learn)

First, I am focusing on understanding this data set perfectly, because I know it is used with many other algorithms. If you don't want to waste time later while learning other algorithms, you should understand this data set perfectly.

I focused on machine learning algorithms to solve this problem: first classical machine learning algorithms, then Convolutional Neural Networks (CNNs). This way, I think I can understand the difference between machine learning accuracy and CNN accuracy.

Which gives the best performance for this job? The shortest training time? The best accuracy?

###########Today’s Decisions#######

I think that, for now, this is enough machine learning and OpenCV. If necessary I can come back (I already know I will, because progressing this fast is not very healthy, but I know what I'm doing :) ).

I can't wait to start DEEEP LEARNING :). Tomorrow I will start deep learning. My plan is to first understand the deep learning fundamentals thoroughly, with linear algebra and calculus, and then go deeper into Convolutional Neural Networks.

Read List: 1) Weird Introduction, 2) The Data Science Blog, 3) Towards Data Science, 4) Towards Data Science 2

Watch List: Coursera, Andrew Ng

Exercises: I will write out the calculations by hand, because I feel more comfortable that way and I think I understand better by seeing them.

#################################

— — — — — — — — — — — — — — — — — — — — — — — — — —

#Day 13 (Introduction To Deep Learning Day)

I read the reading list and watched half of the watch list. I'm going a little slowly, because I watch and calculate at the same time, and understanding also takes time :).

Souvenir from this day (don't be afraid to go deep):

My neural network notes :)

This page is for a basic, simple understanding.

#############Today’s Decisions#############

Tomorrow I will write code in plain Python, without any framework, and then I will write code using a framework. I am thinking of using TensorFlow for now, but my aim is to use PyTorch in the future. If you want, you can google the differences between frameworks.

#################################

— — — — — — — — — — — — — — — — — — — — — — — — — —

#Day 14 (Write Code Without Framework)

I solved the Andrew Ng course programming exercises on this topic. Afterwards, I wrote some simple code without any framework to make sure I understood. A minimal sketch of what such code looks like is below.
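To give an idea of what no-framework code looks like, here is a minimal sketch (not the course exercise itself): a one-hidden-layer network trained on XOR with plain NumPy.

```python
import numpy as np

np.random.seed(0)

# XOR problem: 4 samples, 2 inputs, 1 output.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

W1, b1 = np.random.randn(2, 4), np.zeros((1, 4))   # input -> hidden
W2, b2 = np.random.randn(4, 1), np.zeros((1, 1))   # hidden -> output

lr = 0.5
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (chain rule, squared-error loss)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(3))   # should approach [[0], [1], [1], [0]]
```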

###########Today’s Decisions#######

I understand the neural network fundamentals more or less. Tomorrow I will start TensorFlow 2.0. I used TensorFlow 1.4 before, but the update has brought some changes. [An important event in the development of this area should not be forgotten: TensorFlow was created and open-sourced.] I will research these changes and write a little code.

Read: What's coming in TensorFlow 2.0

#################################

— — — — — — — — — — — — — — — — — — — — — — — — — —

#Day 15 (Write Code With TensorFlow 2.0)

I researched TensorFlow 2.0 updates.

Understanding Tensors Read List: 1) KDnuggets, 2) Hackernoon

I liked Hackernoon's blog. Nice. I plan to read all of Daniel Jeffries' articles, not now of course, but later :)

Best repos for understanding TF 2.0 => 1) Easy-TensorFlow, 2) TF examples

Since I wrote code with Keras before, I chose to start with TensorFlow 2.0 directly. If you are a beginner in this area, it may be more comfortable to start with Keras.
I think it will not be difficult to switch to TensorFlow 2.0 if you are familiar with Keras; with the new update, Keras has already been merged into TensorFlow 2.0.

If you are wondering what the difference between Keras and tf.keras is, Adrian Rosebrock has written a nice article about it.

###########Today’s Decisions#######

I wrote some simple code using TensorFlow 2.0. Tomorrow, I will first classify the MNIST dataset without a Convolutional Neural Network, and after that I will start on CNN architectures. Okay, let's go.

#################################

— — — — — — — — — — — — — — — — — — — — — — — — — —

#Day 16 (MNIST with TensorFlow 2.0 )

I classified the MNIST data set with a plain neural network and with logistic regression. Today I also researched TensorFlow 2.0 a little more; I read TensorFlow's website, not everything yet, but I will read more next time :). A minimal sketch of the dense-network version is below.
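A minimal sketch of the dense (non-convolutional) version with tf.keras, roughly the standard TF 2.0 beginner example:

```python
import tensorflow as tf

# Load MNIST and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A plain fully connected network: no convolutions yet.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5, validation_split=0.1)
print(model.evaluate(x_test, y_test))
```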

Understanding Deep Learning Software: Stanford Lecture 8

→ Nice Video Thx Stanford University :)

###########Today’s Decisions#######

Today I focused on understanding the TensorFlow structure and, on top of that, how I can apply it to the MNIST dataset. Tomorrow I will start on CNNs.

I was waiting for this day. Here we goooo :)

Read List: 1) TowardsDataScience, 2) Understanding CNN, 3) For Beginners, 4) TowardsDataScience, 5) TowardsDataScience, 6) TowardsDataScience, 7) Machine Learning is Fun Part 3

Watch List: 1) Stanford Lecture 5 on YouTube, 2) MIT Lecture 6.S191 (2020)

Exercises: the MNIST dataset. Afterwards I will compare it with my other MNIST classifiers.

#################################

— — — — — — — — — — — — — — — — — — — — — — — — — —

#Day 17 ( CONVOLUTIONAL NEURAL NETWORK )

Today, I did what I set out to do. I read the reading list and watched the watch list. I refreshed the information in my brain, and I looked at Andrew Ng's course again for the mathematical calculations. A minimal CNN sketch for MNIST is below.
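A minimal CNN counterpart to the dense MNIST model from Day 16, again just a sketch with tf.keras:

```python
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# Conv layers expect a channel dimension: (28, 28) -> (28, 28, 1).
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_split=0.1)
print(model.evaluate(x_test, y_test))
```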

###########Today’s Decisions#######

Today I focused on understanding Convolutional Neural Networks. Tomorrow I will look at CNN architectures and the differences between them, because I know I will come across them a lot from now on.

Read List: 1) CNN Architectures [Das], 2) Illustrated: 10 CNNs, 3) CNN Architectures [Prabhu], 4) CNN Networks Every ML Engineer Should Know

Watch List: Stanford University YouTube video, Lecture 9

#################################

— — — — — — — — — — — — — — — — — — — — — — — — — —

#Day 18 ( CNN ARCHITECTURES)

I now have an intuition for what kind of data to create when training on my own data in the future; for example, how I should approach a small dataset, and so on. I saw how deep the architectures go and understood them.

Do you want to see architectures? Click Please

Yann LeCun's MNIST paper → See Paper

AlexNet → Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton, ImageNet Classification → See Paper

VGG-Net → Karen Simonyan and Andrew Zisserman, Very Deep Convolutional Networks → See paper, See Project

###########Today’s Decisions#######

Tomorrow I will look at transfer learning and data augmentation details for my own small data set.

News →→ I recently ordered Adrian Rosebrock's book (Deep Learning for Computer Vision with Python). It should arrive soon, and I'm thinking of pausing the code writing and reading the book for a while :) I will go deeper into image processing and the details with Adrian.

DECISION: Tomorrow I will implement VGG-Net by hand :)

Source Practice →Tutorial

#################################

— — — — — — — — — — — — — — — — — — — — — — — — — —

#Day 19 ( VGG-Net Implementing)

I tried to understand the original documentation, which is written for MATLAB. I believe my horizons are expanding :) After that, I read the code. For now, I gave up on writing it by hand, but I know that I can simply import it into my project with TensorFlow. A minimal sketch of that import is below.
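For reference, a minimal sketch of the import path via tf.keras.applications (ImageNet weights, 224x224 input):

```python
import tensorflow as tf

# VGG16 with ImageNet weights, ready to use as a classifier out of the box.
vgg = tf.keras.applications.VGG16(weights="imagenet", include_top=True)
vgg.summary()

# For feature extraction / transfer learning, drop the classifier head instead.
backbone = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
```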

Today I read this article about CNN architecture from Adrian.

###########Today’s Decisions#######

Tomorrow I will work on training techniques.

#################################

— — — — — — — — — — — — — — — — — — — — — — — — — —

#Day 20 ( Training Techniques)

~~~~~~Transfer Learning, Fine Tuning, Data Augmentation~~~~~~

I think transfer learning is a very useful and helpful technique in this area, so it was easy to find documentation. Today I will only read this documentation and take a look. (A minimal transfer learning sketch is below, after the read lists.)

Transfer Learning Read List: 1) Guide to Transfer Learning, 2) For Beginners with Keras, 3) Transfer Learning using Keras, 4) Python Code for ResNet50, 5) Transfer Learning from a Pre-Trained Model, 6) Pengenalan Deep Learning Part 8 (includes data augmentation), 7) For Large Datasets, 8) Transfer Learning with 5 Lines of Code

Transfer Learning and Fine-Tuning: How to use transfer learning and fine-tuning

Data Augmentation: 1) Data Augmentation Techniques, 2) For Limited Data
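A minimal sketch combining the two ideas: a frozen pre-trained backbone plus simple on-the-fly augmentation. The backbone choice, class count, and directory layout are placeholders of my own, not anything taken from the articles above:

```python
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

NUM_CLASSES = 2                      # placeholder: set to your own class count

# Frozen ImageNet backbone + a small trainable head (basic transfer learning).
base = tf.keras.applications.MobileNetV2(weights="imagenet", include_top=False,
                                         input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# Simple on-the-fly data augmentation for a small dataset.
datagen = ImageDataGenerator(rescale=1.0 / 255, rotation_range=20,
                             horizontal_flip=True, zoom_range=0.2)
train_gen = datagen.flow_from_directory("data/train",  # placeholder directory
                                        target_size=(224, 224), batch_size=16)

# model.fit(train_gen, epochs=10)    # uncomment once the data directory exists
```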

I haven't tried the code today, because sometimes my laptop gets mad at me :)
I think I sometimes push it too hard. Sorry, my laptop :)

###########Today’s Decisions#######

I want to take a break, and then I will start reading Adrian's book.

#################################

— — — — — — — — — — — — — — — — — — — — — — — — — —

#Day 21 ( ~~Life Break~~)

Break~Break~Break~Break~Break~Break~Break~Break~Break~Break~Break~Break~Break~Break~Break~Break~Break~Break~Break~Break~

#Day 22 ( Start Book)

##~~##~~##~~What Did I Learn~~##~~##~~#

How “Deep” Is Deep?

Image Fundamentals

256*256*256 = 16,777,216 possible colors

I’m reading: Image Manipulations → Adrian’s blog

##~~##~~##~~##~~##~~#~~##~~##~~##~~##~

#Day 23 ( Continue Reading)

##~~##~~##~~What Did I Learn~~##~~##~~#

Image Classification Basics

Datasets for Image Classification

Configuring Your Development Environment

Command Line Arguments

I’m reading from Adrian’s Blog :OpenCV Tutorial

##~~##~~##~~##~~##~~#~~##~~##~~##~~##~

#Day 24 ( Continue Reading)

##~~##~~##~~What Did I Learn~~##~~##~~#

Your First Image Classifier

Parameterized Learning

A Simple Linear Classifier With Python

hinge loss and cross-entropy loss

##~~##~~##~~##~~##~~#~~##~~##~~##~~##~

#Day 25 ( Continue Reading)

##~~##~~##~~What Did I Learn~~##~~##~~#

Optimization Methods and Regularization

Today I was a little bored and tired. Optimization methods and regularization are a little hard to understand, I think, and I wanted to go a little deeper.

My Notes from Andrew NG’s Courses

Advanced info from Murat Tekalp (Prof. at Koç University) about L1 and L2 regularization:

Actually, I don't know exactly why this is; what I learned from Murat Tekalp is that, intuitively, if your training data and validation data are similar to each other, you should use L2 regularization for better results. A minimal Keras sketch of adding L2 regularization is below.
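For reference, a minimal sketch of how L2 (or L1) regularization is attached to a Keras layer; the 1e-4 strength is just an example value:

```python
import tensorflow as tf
from tensorflow.keras import regularizers

# L2 weight decay on a dense layer; swap in regularizers.l1(...) to compare.
layer_l2 = tf.keras.layers.Dense(
    64, activation="relu",
    kernel_regularizer=regularizers.l2(1e-4))

layer_l1 = tf.keras.layers.Dense(
    64, activation="relu",
    kernel_regularizer=regularizers.l1(1e-4))
```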

##~~##~~##~~##~~##~~#~~##~~##~~##~~##~

— — — — — — — — — — — — — — — — — — — — — — — — — —

#Day 26 ( Continue Reading)

##~~##~~##~~What Did I Learn~~##~~##~~#

Neural Network Fundamentals

Maybe you can look at the perceptron concept. I read about it before, so I am not rereading it now.

##~~##~~##~~##~~##~~#~~##~~##~~##~~##~

#Day 27 ( Continue Reading)

##~~##~~##~~What Did I Learn~~##~~##~~#

Convolutional Neural Networks

Read → Ahmed Besbes' blog

##~~##~~##~~##~~##~~#~~##~~##~~##~~##~

#Day 28 ( Continue Reading)

##~~##~~##~~What Did I Learn~~##~~##~~#

Training Your First CNN

Saving and Loading Your Models

##~~##~~##~~##~~##~~#~~##~~##~~##~~##~

#Day 29 ( Continue Reading)

##~~##~~##~~What Did I Learn~~##~~##~~#

LeNet: Recognizing Handwritten Digits

MiniVGGNet: Going Deeper with CNNs

##~~##~~##~~##~~##~~#~~##~~##~~##~~##~

#Day 30 ( Continue Reading)

##~~##~~##~~What Did I Learn~~##~~##~~#

Learning Rate Schedulers

Spotting Underfitting and Overfitting

##~~##~~##~~##~~##~~#~~##~~##~~##~~##~

#Day 31 ( Continue Reading)

##~~##~~##~~What Did I Learn~~##~~##~~#

Checkpointing Models

Visualizing Network Architectures

##~~##~~##~~##~~##~~#~~##~~##~~##~~##~

#Day 32 ( Continue Reading)

##~~##~~##~~What Did I Learn~~##~~##~~#

Out-of-the-box CNNs for Classification

Case Study: Breaking Captchas with a CNN

Case Study: Smile Detection

Today I finished this book (the Starter Bundle), and I will continue with the Practitioner Bundle the day after tomorrow. I feel that learning by reading instead of watching videos makes me progress faster. Of course, I can only say this because I already have the mathematical and programming background. I want to follow the book for a while and progress regularly.

Thanks to this blog, I both refresh my knowledge and become aware of things I don't know. I missed some things because I did not progress regularly when I first started learning, and now I am connecting the pieces :)

Sometimes I go fast and sometimes slowly, because I am an Electrical and Electronics Engineering student and I also work at a company part-time. That's why.

My roles at this company are basically: 1) PCB design, 2) computer vision R&D

I am also working on image processing at the company, but I'm not going to talk about that here. Here I want to read calmly and go slowly.
I plan to share my experience from the company in the object detection section, because I'm working on object detection there.

Keep * (Learning + Happy) 😂

##~~##~~##~~##~~##~~#~~##~~##~~##~~##~

— — — — — — — — — — — — — — — — — — — — — — — — — —

#Day 33 ( ~~Life Break~~)

Break~Break~Break~Break~Break~Break~Break~Break~Break~Break~Break~Break~Break~Break~Break~Break~Break~Break~Break~Break~

#Day 34 ~~ ( Starting Adrian's Second Book )

I started the Deep Learning for Computer Vision with Python: Practitioner Bundle book.

# ~~ Day 55 ( Finito! Adrian's Second Book ) 26 March

Today I finished Adrian's second book, which is excellentttttt. I learned a huge amount, and I did not hesitate to dig deep when needed. I searched for information many times on Google, Quora, Stack Overflow, and various forums. I am not planning to share the code I wrote for now, because all of it is in Adrian's book, but I'm thinking of sharing it later, adapted to my own dataset and my own work.

#################################

— — — — A Sad Time for Our World — — — —

You know, our world is going through a bad period and I'm badly affected. As a result, I didn't work for a few days, but I did do a project about COVID-19 (following Adrian, my big brother :) :) ), and it is in my GitHub account.

My plan is to improve this work and feed it a new, larger dataset (by sharing), and perhaps do object detection in the future if I gain some medical knowledge. For now, I only did image classification with transfer learning.

I'm finding the normal (healthy) images on Kaggle (I'm taking them from previous competitions).

I'm searching on Google a few hours a day, and I follow doctors who are interested in this issue to find images containing COVID-19.

#StayAtHome please.

#################################

— — — — — — — — — — — — — — — — — — — — — — — — — —

#Day 56 ( Starting to Read a New Book)

Today I started Adrian Rosebrock's Practical Python and OpenCV.

#Day 57 ( This Book is Over too)

Today I finished Adrian Rosebrock's Practical Python and OpenCV.

I read this book because sometimes I like to turn back and refresh my brain. This book is about basic image processing techniques, but it is practical rather than theoretical.

#Day 58(Break for Computer Vision, not in Learning)

I want to take a break from computer vision starting today, because sometimes taking a break makes me stronger when I come back (maybe that's not true, but I hope so :) ). Of course, I am not idle during this period: I keep improving myself in other areas or reviewing what I know. My goals are to read these two books:

1) TensorFlow Learn In 1 Day ~~ Krishna Rungta

2) Machine Learning Yearning ~~ Andrew Ng

Okay. I will hang out a little on Quora and Stack Overflow today, and I'll start reading the books I mentioned tomorrow. See you :)

— — — — — — — — — — — — — — — — — — — — — — — — — —

#Day 59 ( to Develop By Reading ) 30 March

Today I'm starting the TensorFlow Learn In 1 Day book. I think I will learn good things.

The author mentioned Docker today. I already know Docker; if you don't, you can search Google (Docker is a good, modern technology, but
it is not directly related to computer vision).

He also covered: 1) ML vs DL, 2) TensorFlow architecture, 3) TF on AWS.

— — — — — — — — — — — — — — — — — — — — — — — — — —

#Day 60( Continue )

Today I read these sections:

→Numpy, →Pandas, →Scikit-learn, →Linear Regression, →Lr in Tensorflow →Kernel Methods (RandomFourierFeatureMapper, KernelLinearClassifier)

#Day 61 ( Finished )

Today I read these sections:

→Tensorflow (Artificial NN), →ConvNet, →Autoencoder, →RNN

#Day 62 ( Starting to Read)

I'm starting the book Machine Learning Yearning ~~ Andrew Ng.

I couldn't read much of the book today; I think I only read the first 25 chapters.

#Day 63 ( Continue )

Today I made a decision: I will read this book gradually over time. You should absolutely read this book if you are a machine learning manager or team leader.

#Day 64 ( Object Oriented Programming )

Today I want to try something new. I actually know object-oriented programming so-so (I have written code with OOP), but I want to learn more about this paradigm, because it makes things much easier if you know where to use it. If you are interested in programming paradigms, you can visit this website. (A tiny example is sketched after the list below.)

A few resources I used for object-oriented programming:

1) Object-Oriented Programming → source

2) Real Python website → source

3) Jeff Knupp's personal website → source

Extra: What is the difference between functional programming and object-oriented programming?
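A tiny, made-up example of the kind of class I mean: hiding dataset loading behind a small, reusable interface (the folder name is a placeholder):

```python
import os
import cv2


class ImageFolder:
    """Minimal wrapper around a folder of images (a toy OOP example)."""

    def __init__(self, folder, size=(224, 224)):
        self.folder = folder
        self.size = size
        self.paths = sorted(
            os.path.join(folder, f) for f in os.listdir(folder)
            if f.lower().endswith((".jpg", ".jpeg")))

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        img = cv2.imread(self.paths[idx])
        return cv2.resize(img, self.size)


# dataset = ImageFolder("images")          # placeholder folder
# print(len(dataset)); first = dataset[0]  # behaves like a simple sequence
```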

— — — — — — — — — — — — — — — — — — — — — — — — — —

#Day 65 ( Threads and Multi threads)

Today I will study threading and multithreading. First, if you have never heard of these concepts, read this page and this page. Programmers often use them to do more than one job at a time and reduce total waiting time. (A minimal sketch is below, after the lists.)

Watch List: YouTube, Corey Schafer → Threading, Multiprocessing

(I think he is a really good programmer; you should add him to your follow list.)

Read List: Real Python, Python's own documentation
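A minimal sketch of splitting I/O-bound work across threads; the load_image function and file names are placeholders simulating slow I/O:

```python
import threading
import time


def load_image(path):
    """Stand-in for an I/O-bound task such as reading or downloading a file."""
    time.sleep(0.5)                  # simulate slow I/O
    print(f"loaded {path}")


paths = [f"img_{i}.jpg" for i in range(4)]   # placeholder file names

threads = [threading.Thread(target=load_image, args=(p,)) for p in paths]
for t in threads:
    t.start()
for t in threads:
    t.join()                         # wait for all workers to finish

print("all done")
```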

— — — — — — — — — — — — — — — — — — — — — — — — — —

#Day 66 ( Object Detection Algorithms )

I'm moving forward again after today (yupiii :) ). I already know about this topic from both Adrian's book and my part-time job, but now I will progress more regularly and refresh my knowledge. If you want, we can first look at an overview and history of object detection → SOURCE

Object detection history since 2012:

R-CNN → OverFeat → SPPNet → MultiBox → Fast R-CNN → YOLO → Faster R-CNN → SSD → Mask R-CNN (2017)

Video Tutorials: 1) Stanford University YouTube, 2) Edureka! YouTube

Read List: 1) Towards Data Science, 2) Jonathan Hui Medium, 3) Jonathan Hui Medium, 4) Tryolabs, 5) Part 1, 6) Part 2, 7) Athelas Medium Blog

— — — — — — — — — — — — — — — — — — — — — — — — — —

#Day 67 ( Fighting With Object Detection ) 7 April

I reviewed Adrian's Practitioner Bundle again: Chapter 15, Faster R-CNN; Chapter 16, Training a Faster R-CNN from Scratch; Chapter 17, Single Shot Detectors (SSD); Chapter 18, Training an SSD from Scratch.

After this, I want to implement the Mask R-CNN algorithm from scratch with my own dataset. I will gather a dataset and detect eggplants and potatoes. I chose vegetables because I have an idea for the future. :)

EXTRA INFO FOR LABELLING and FILE FORMAT:

You will need a labeling tool to tag images. I have a program to recommend for this job: check GitHub → labelImg. Yeah, cool.

→ You should use the ".jpg" or ".jpeg" file format; do not use ".png".

→ Your image files shouldn't be bigger than 200 kB.

→ Your image resolution shouldn't be bigger than 720x1280.

→ Choose your image names carefully, keeping the train and test folders in mind. I'm saying that because I made a mistake with my train/test image folders: I numbered the training data from 1 onwards and did the same for the test data :) so everything got mixed up; in such cases, we Turks say "çarşı pazar karıştı" (the whole marketplace got jumbled) :D

— — — — — — — — — — — -

About the ".jpg", ".jpeg", and ".png" file formats:

→ [ ".jpg" and ".jpeg" ] Functionally there is no difference between the two; the only difference is the number of characters in the extension.

→ JPG only exists because earlier versions of Windows required a three-letter extension for file names.

→ [ ".png" ] supports an alpha channel as well as single-color transparency, while JPEGs are opaque. We avoid PNGs here because the extra fourth channel can cause trouble with CNN pipelines that expect three channels. A small check is sketched below.
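A small check that shows the channel difference and drops the alpha channel if a PNG sneaks into the dataset; the file path is a placeholder:

```python
import cv2

# IMREAD_UNCHANGED keeps the alpha channel if the file has one.
img = cv2.imread("example.png", cv2.IMREAD_UNCHANGED)   # placeholder path
print(img.shape)                     # e.g. (H, W, 4) for an RGBA PNG

if img.ndim == 3 and img.shape[2] == 4:
    img = cv2.cvtColor(img, cv2.COLOR_BGRA2BGR)          # drop alpha -> 3 channels
print(img.shape)                     # (H, W, 3), safe for a 3-channel CNN input
```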

— — — — — — — — — — — — — — — — — — — — — — — — — —

#Day 68 ( Setups,Collect Image, Labeling) 8 nsn

I've been dealing with setup almost all day. I gathered my own dataset from Google and then labeled all of the images. I started to get bored after half an hour, but big deal! :)

→ I got so tired today dealing with errors, but it was good.

Nice GitHub repo → datitran

For installing the TensorFlow Object Detection API → tutorial

— — — — — — — — — — — — — — — — — — — — — — — — — —

#Day 69 ( Training My Own Dataset ) 9 nsn

Today I trained my own dataset with SSD-MobileNet-v1, but the results are not very good, so for now I am not sharing my weights file; if I get better results, I will share it.

Advice → If you have a small dataset with few classes, use SSD. The Mask R-CNN algorithm would make sense if you have a larger dataset.

This error was the bane of my day →

tensorflow.python.framework.errors_impl.NotFoundError: ; No such file or directory

Solution: after installing protoc, make sure the path is set correctly.

python generate_tfrecord.py --csv_input=data/train_labels.csv --output_path=data/train.record --image_dir=images/

Add the --image_dir=images/ argument in the terminal if you follow the raccoon detection repo from GitHub.

— — — — — — — — — — — — — — — — — — — — — — — — — —

#Day 70 ( Strive for Better Results )

Today I spent the day researching how to improve results on a small dataset with various algorithms. I think SSD MobileNet gives the best result for a small dataset.

: A complaint :

→ I got a lot of errors because I worked on my own computer, and it was really tiring. Next time I will work on a GPU machine (maybe Google Cloud, maybe AWS).

BIG SPEECH FROM ME :)

~~

I think I have come a long way so far. I want to talk about the algorithms, tools, and concepts that anyone interested in computer vision should know up to this point. It would still be good to do your own research beyond that. Based on what I know, have researched, and have seen, I'm writing the list below for you.

Applications Of Computer Vision

Generative Adversarial Networks (GANs): creating new images from existing images.

Neural Style Transfer: merge your image with a Picasso painting.

Object Segmentation: which pixels belong to which object in the image.

OpenPose: turn yourself into a stick figure.

Image Captioning: predict a description of the image and print it below your image.

Motion Analysis: e.g., tracking facial expressions.

Optical Character Recognition (OCR): identifying characters in images of printed or handwritten text.

Image Restoration: repair a degraded image.

Medical image segmentation, human behaviour analysis, self-driving cars, face recognition, face detection, and more.

~~

#Day 71 ( To Plan For The Future)

I think this tempo is enough for now. Of course, I will continue the computer vision learning roadmap after a break. Here is what I plan to learn after the break.

New Topics: Generative Adversarial Networks (GANs), autoencoders, segmentation (U-Net), more object detection algorithms

New Framework: PyTorch

New Programming Language: Julia

Yes, I want to learn all of them for now. I like learning new things when I'm bored. Frankly, I don't know why, but I have been a little bored and my productivity has decreased lately. In times like these, learning new things works like medicine for me.

It doesn't make sense to switch to another programming language without specializing in one (Python, in my case), but a second language allows comparison with the first; maybe this way I will see my mistakes, my gaps, and so on. It doesn't make much sense to advocate for any language fanatically; they are all the same to me. But I will always like Python :) :)

PyTorch is used by many startups, as far as I can see. I used it at my startup company, but I had to use it quickly to get the project out. Now I will learn it from scratch, starting with the essentials.

#About Julia

I searched a little about Julia, its community, tutorials, etc. on Google. Everyone mentions that Julia is faster than Python, though Julia certainly has a smaller community. It is written in Julia, C, and Fortran, so it is very fast.

History of Julia:

Work on Julia started in 2009. Julia's developers introduced the first public version in February 2012. Libraries such as Flux and Knet implement complete machine learning frameworks written entirely in Julia; Apache MXNet and TensorFlow are also available through wrapper libraries.

  • Stable release: 1.4.0 / 21 March 2020
  • Machine learning frameworks: Flux and Knet (Koç University)
  • In Julia, array indexing starts at 1, like in MATLAB.

Extras:

A talk by Andrew Ng on the steps for building AI applications.

30 Amazing Applications of Deep Learning => Yaron Hadad's blog


Ramazan Güven

Electrical and Electronics Engineering ~~~ Deep Learning Researcher ~~~ LinkedIn: rguven20