Why Gradient Descent for Optimization?

November 4, 2017 at 2:58 am #794
firstname.lastname@example.org (Participant)

Hello all,

I have a question regarding the optimization technique used for updating the weights. People generally use gradient descent for optimization, whether it is plain SGD or an adaptive variant. Why can't we use other techniques like Newton-Raphson?

Please help. Thanks in advance. I didn't find the right solution on the internet.

References:
https://discuss.analyticsvidhya.com/t/why-gradient-descent-for-optimization/40127
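For context, the two update rules the question contrasts can be sketched on a toy one-dimensional quadratic (this example and its function are my own illustration, not from the thread):

```python
# Toy objective: f(w) = (w - 3)^2 + 1, minimized at w = 3.
# Its gradient is f'(w) = 2*(w - 3) and its second derivative is f''(w) = 2.

def f_prime(w):
    return 2.0 * (w - 3.0)

def f_double_prime(w):
    return 2.0

# Gradient descent: w <- w - lr * f'(w), first-order, needs a learning rate.
w_gd = 0.0
lr = 0.1
for _ in range(100):
    w_gd -= lr * f_prime(w_gd)

# Newton-Raphson: w <- w - f'(w) / f''(w), second-order, no learning rate.
w_nr = 0.0
for _ in range(5):
    w_nr -= f_prime(w_nr) / f_double_prime(w_nr)

print(w_gd, w_nr)  # both approach the minimum at w = 3
```

In one dimension Newton-Raphson converges much faster here (one step, since the objective is quadratic). The usual caveat for neural networks is that with n weights the Newton step needs the n-by-n Hessian and a linear solve against it, which is O(n^2) memory and O(n^3) work per update, whereas a gradient step is O(n).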