# Machine Learning: How a Neural Network (Multilayer Perceptron, MLP) Works

Apr 1, 2018 · 8 min read

A single perceptron with weights (1, 1) and bias -0.5 computes y = f(1*x1 + 1*x2 - 0.5), where f is the step function (f(z) = 1 when z > 0, otherwise 0). Evaluating it on all four inputs:

A(0,1) → y = f(1*0 + 1*1 - 0.5) = f(0.5) = 1

A(1,0) → y = f(1*1 + 1*0 - 0.5) = f(0.5) = 1

A(1,1) → y = f(1*1 + 1*1 - 0.5) = f(1.5) = 1

A(0,0) → y = f(1*0 + 1*0 - 0.5) = f(-0.5) = 0

This unit computes OR, not XOR: it outputs 1 for (1,1), and no single perceptron with one linear decision boundary can output 0 for (1,1) while outputting 1 for (0,1) and (1,0).
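The four evaluations above can be reproduced with a short sketch; the step activation f and the weights and bias are taken directly from the formulas above:

```python
def step(z):
    """Step activation: returns 1 when z > 0, else 0."""
    return 1 if z > 0 else 0

def perceptron(x1, x2, w1=1.0, w2=1.0, bias=-0.5):
    """Single perceptron: y = f(w1*x1 + w2*x2 + bias)."""
    return step(w1 * x1 + w2 * x2 + bias)

# Evaluate on the four binary inputs
for x in [(0, 1), (1, 0), (1, 1), (0, 0)]:
    print(x, "->", perceptron(*x))
```

The threshold comparison `z > 0` matches the worked values above: f(0.5) = 1 and f(-0.5) = 0.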

h1(x) — first hidden unit, weights (1, 1), bias -0.5:

A(0,1) → h1((0,1)) = f(1*0 + 1*1 - 0.5) = f(0.5) = 1

A(1,0) → h1((1,0)) = f(1*1 + 1*0 - 0.5) = f(0.5) = 1

A(1,1) → h1((1,1)) = f(1*1 + 1*1 - 0.5) = f(1.5) = 1

A(0,0) → h1((0,0)) = f(1*0 + 1*0 - 0.5) = f(-0.5) = 0

h2(x) — second hidden unit, weights (1, 1), bias -1.5, so it fires only when both inputs are 1:

A(0,1) → h2((0,1)) = f(1*0 + 1*1 - 1.5) = f(-0.5) = 0

A(1,0) → h2((1,0)) = f(1*1 + 1*0 - 1.5) = f(-0.5) = 0

A(1,1) → h2((1,1)) = f(1*1 + 1*1 - 1.5) = f(0.5) = 1

A(0,0) → h2((0,0)) = f(1*0 + 1*0 - 1.5) = f(-1.5) = 0
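The two hidden units can be written down the same way; this minimal sketch uses the weights and biases from the truth tables above:

```python
def step(z):
    """Step activation: 1 if z > 0, else 0."""
    return 1 if z > 0 else 0

def h1(x1, x2):
    """Hidden unit 1: weights (1, 1), bias -0.5 -- fires on OR."""
    return step(1 * x1 + 1 * x2 - 0.5)

def h2(x1, x2):
    """Hidden unit 2: weights (1, 1), bias -1.5 -- fires on AND."""
    return step(1 * x1 + 1 * x2 - 1.5)

for x in [(0, 1), (1, 0), (1, 1), (0, 0)]:
    print(x, "-> h1 =", h1(*x), ", h2 =", h2(*x))
```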

Class 1 (XOR = 1): data(0,1) and data(1,0)

Class 0 (XOR = 0): data(0,0) and data(1,1)

Passing each input through the hidden layer maps it to the point (h1, h2):

data(0,1) → (h1, h2) = (1, 0)

data(1,0) → (h1, h2) = (1, 0)

data(0,0) → (h1, h2) = (0, 0)

data(1,1) → (h1, h2) = (1, 1)

In this hidden space the two classes become linearly separable: both class-1 points collapse onto (1, 0), while the class-0 points sit at (0, 0) and (1, 1).

The output layer combines the hidden units with weights (1, -2) and bias -0.5:

A(0,1) → f(1*h1((0,1)) - 2*h2((0,1)) - 0.5) = f(1*1 - 2*0 - 0.5) = f(0.5) = 1

A(1,0) → f(1*h1((1,0)) - 2*h2((1,0)) - 0.5) = f(1*1 - 2*0 - 0.5) = f(0.5) = 1

A(1,1) → f(1*h1((1,1)) - 2*h2((1,1)) - 0.5) = f(1*1 - 2*1 - 0.5) = f(-1.5) = 0

A(0,0) → f(1*h1((0,0)) - 2*h2((0,0)) - 0.5) = f(1*0 - 2*0 - 0.5) = f(-0.5) = 0

This is exactly the XOR truth table.
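Putting the layers together gives the complete forward pass for XOR, a sketch using the step activation and the weights derived above:

```python
def step(z):
    """Step activation: 1 if z > 0, else 0."""
    return 1 if z > 0 else 0

def xor_mlp(x1, x2):
    """2-2-1 MLP: hidden layer (h1, h2), then one output unit."""
    h1 = step(1 * x1 + 1 * x2 - 0.5)    # OR-like hidden unit
    h2 = step(1 * x1 + 1 * x2 - 1.5)    # AND-like hidden unit
    return step(1 * h1 - 2 * h2 - 0.5)  # output: f(h1 - 2*h2 - 0.5)

for x in [(0, 1), (1, 0), (1, 1), (0, 0)]:
    print(x, "->", xor_mlp(*x))
```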

The neural-network structure designed for the XOR problem is shown in the figure below: two inputs, two hidden units (h1 and h2), and one output unit.
