Optimization Algorithms
2022/9/7 14:23:22
This article introduces optimization algorithms — batch gradient descent and stochastic gradient descent — with example code. It should be a useful reference for programmers working through these problems; follow along with the editor below.
Gradient descent and stochastic gradient descent:
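Before the code, it helps to state the quantities being implemented. For the linear model \(\hat{y} = w x\), the mean-squared-error cost and its gradient with respect to \(w\) are (a standard derivation, using the same symbols as the code below):

```latex
J(w) = \frac{1}{N}\sum_{i=1}^{N} (w x_i - y_i)^2
\qquad
\frac{\partial J}{\partial w} = \frac{2}{N}\sum_{i=1}^{N} x_i\,(w x_i - y_i)
```

Batch gradient descent updates \(w \leftarrow w - \eta\,\partial J/\partial w\) using the average over all samples; SGD instead updates \(w \leftarrow w - \eta \cdot 2 x_i (w x_i - y_i)\) once per sample, which is exactly the split between the `gradient` and `SGD_gradient` functions in the code.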
```python
import matplotlib.pyplot as plt

x_data = [5, 6, 7, 8.5, 9, 10, 11.5, 12]
y_data = [1, 2, 8, 4, 5, 6.5, 7.5, 8]

w = 1  # initial weight


def forward(x):
    return x * w


# MSE cost over the whole dataset (used by batch gradient descent)
def cost(xs, ys):
    cost = 0
    for x, y in zip(xs, ys):
        y_pred = forward(x)
        cost += (y - y_pred) ** 2
    return cost / len(xs)


# squared error of a single sample (used by SGD)
def SGD_loss(xs, ys):
    y_pred = forward(xs)
    return (y_pred - ys) ** 2


# gradient of a single sample's squared error w.r.t. w
def SGD_gradient(xs, ys):
    return 2 * xs * (xs * w - ys)


# gradient of the MSE cost averaged over all samples
def gradient(xs, ys):
    grad = 0
    for x, y in zip(xs, ys):
        grad += 2 * x * (x * w - y)
    return grad / len(xs)


def draw(x, y):
    fig = plt.figure(num=1, figsize=(4, 4))
    ax = fig.add_subplot(111)
    ax.plot(x, y)
    plt.show()


# Batch gradient descent (commented out; uncomment to compare):
# epoch_lis = []
# loss_lis = []
# learning_rate = 0.012
# for epoch in range(100):
#     cost_val = cost(x_data, y_data)
#     grad_val = gradient(x_data, y_data)
#     w -= learning_rate * grad_val
#     print("Epoch = {} w = {} loss = {}".format(epoch, w, cost_val))
#     epoch_lis.append(epoch)
#     loss_lis.append(cost_val)
# print(forward(4))
# draw(epoch_lis, loss_lis)
# draw(x_data, y_data)

# SGD: update w after every single sample
l_lis = []
learning_rate = 0.009
for epoch in range(10):
    for x, y in zip(x_data, y_data):
        grad = SGD_gradient(x, y)
        w -= learning_rate * grad
        print(" x:{} y:{} grad:{}".format(x, y, grad))
        l = SGD_loss(x, y)
        print("loss: ", l)
        l_lis.append(l)

X = [int(i) for i in range(len(l_lis))]
draw(X, l_lis)
```
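Between the two extremes shown above — averaging the gradient over the whole dataset and updating on one sample at a time — sits mini-batch SGD, which is what most frameworks use in practice. Here is a minimal sketch on the same data and the same linear model; the function name `minibatch_sgd` and the parameters `batch_size` and `seed` are illustrative choices, not part of the original article:

```python
import random

x_data = [5, 6, 7, 8.5, 9, 10, 11.5, 12]
y_data = [1, 2, 8, 4, 5, 6.5, 7.5, 8]


def mse(w, xs, ys):
    # mean squared error of the linear model y = w * x
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)


def minibatch_sgd(xs, ys, lr=0.009, epochs=10, batch_size=2, seed=0):
    # gradient of (w*x - y)^2 w.r.t. w is 2*x*(w*x - y),
    # averaged here over the samples in each mini-batch
    rng = random.Random(seed)
    w = 1.0  # same initial weight as above
    data = list(zip(xs, ys))
    for _ in range(epochs):
        rng.shuffle(data)  # reshuffle each epoch so batches differ
        for i in range(0, len(data), batch_size):
            batch = data[i:i + batch_size]
            grad = sum(2 * x * (w * x - y) for x, y in batch) / len(batch)
            w -= lr * grad
    return w


w = minibatch_sgd(x_data, y_data)
print("w = {}  final MSE = {}".format(w, mse(w, x_data, y_data)))
```

With `batch_size=1` this reduces to the per-sample SGD loop above, and with `batch_size=len(x_data)` (and no shuffling) it reduces to the commented-out batch version; mini-batches trade some gradient noise for fewer, cheaper updates.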
That concludes this article on optimization algorithms. I hope the recommended material is helpful, and please continue to support 为之网!