Machine Learning 2015.06.06. Linear Regression

Linear Regression. sigma α, Kangwon National University. cs.kangwon.ac.kr/~parkce/seminar/2015_MachineLearning/03... · 2016. 6. 17. · In other words, a regression problem is a method for predicting numeric target values.


Page 1

๐‘ ๐‘–๐‘”๐‘š๐‘Ž ๐œถ

Machine Learning

๐‘ ๐‘–๐‘”๐‘š๐‘Ž ๐œถ

2015.06.06.

Linear Regression

Page 2

Issues

Page 3

Issues

โ€ข https://www.facebook.com/Architecturearts/videos/1107531579263808/

• “Explain to your 8-year-old nephew, in three sentences, what a database (DB) is.”

• After as many as 25 rounds of interviews over six months, the probability of becoming a Googler (the term for a Google employee) is 0.25%: 25 times harder than getting into Harvard.

• “We only hire people who are ‘Googley’ (Being Googley)”

• Whether they can bring some different value or talent to the company

• Whether they have the intellectual humility and flexibility to accept new knowledge

• Whether they are the kind of self-motivated person who picks up stray trash on their own

• “Moonshot thinking”: the idea that rather than improving a telescope's performance, it is better to launch a spacecraft to the moon. Source: JoongAng Ilbo

Page 4

Issues

โ€ข ์‹ค๋ฆฌ์ฝ˜๋ฐธ๋ฆฌ์˜ ์Šคํƒ€ํŠธ์—… โ€˜๋กœ์ฝ”๋ชจํ‹ฐ๋ธŒ๋žฉ์Šคโ€˜ ์ด์ˆ˜์ธ(39) ๋Œ€ํ‘œ๋Š” โ€œ๊ธฐ์ˆ ๊ธฐ์—…์—์„  ๋ชจ๋‘๊ฐ€ ๋˜‘๊ฐ™์€ ๊ทผ๋ฌด์‹œ๊ฐ„์„ ์ฑ„์šฐ๋Š” ๊ฒƒ๋ณด๋‹ค ์ตœ๊ณ ์˜ ์‹ค๋ ฅ์„ ๊ฐ€์ง„ 1๊ธ‰ ๊ฐœ๋ฐœ์ž๋“ค์ด ์ตœ๊ณ ์˜์„ฑ๊ณผ๋ฅผ ๋‚ผ ์ˆ˜ ์žˆ๋„๋ก ํ•˜๋Š” ๊ฒŒ ๋” ์ค‘์š”ํ•˜๋‹ค.โ€

โ€ข โ€œ์ด๋“ค์ด ์ด์งํ•˜์ง€ ์•Š๋„๋ก ๋ถ™์žก์•„ ๋‘๋ ค๋ฉด ๊ณ ์•ก์—ฐ๋ด‰ ์™ธ์—, โ€˜์ž์œ โ€™ ๊ฐ™์€ ํ”Œ๋Ÿฌ์Šค ์•ŒํŒŒ์˜ ๊ฐ€์น˜๋ฅผ ๋” ์ค˜์•ผ ํ•œ๋‹ค๋Š” ๊ฒŒ์‹ค๋ฆฌ์ฝ˜๋ฐธ๋ฆฌ์˜ ๋ณดํŽธ์ ์ธ ๋ถ„์œ„๊ธฐโ€

โ€ข http://www.washingtonpost.com/graphics/business/robots/

์ถœ์ฒ˜: ์ค‘์•™์ผ๋ณด

Page 5

Issues

Page 6

Linear Regression

โ€ข ์ž„์˜์˜ ๋ฐ์ดํ„ฐ๊ฐ€ ์žˆ์„ ๋•Œ, ๋ฐ์ดํ„ฐ ์ž์งˆ ๊ฐ„์˜ ์ƒ๊ด€๊ด€๊ณ„๋ฅผ ๊ณ ๋ คํ•˜๋Š” ๊ฒƒ

          Friend 1  Friend 2  Friend 3  Friend 4  Friend 5
Height      160       165       170       170       175
Weight       50        50        55        50        60

Page 7

Linear Regression

• In other words, a regression problem is..

• a method for predicting a numeric target value

• An equation for the target value is needed: the regression equation (Regression equation)

• To estimate a house price, an equation like the one below is used

• Ex) house price = 0.125 * floor area + 0.5 * distance to the station

• “floor area” and “distance to the station”: input data

• “house price”: the estimated value

• The values 0.125 and 0.5: regression weights (Regression weight)

• To estimate a girlfriend's weight..

• Ex) weight = 0.05 * height

• “height”: input data

• “weight”: the estimated value

• 0.05: regression weight
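A regression equation of this kind is just a weighted sum of the inputs. A minimal sketch in Python; the weights are the illustrative values from the slide, and the input values (floor area, distance, height) are made up for the example:

```python
def predict(weights, features):
    """Evaluate a regression equation: the dot product of weights and features."""
    return sum(w * x for w, x in zip(weights, features))

# house price = 0.125 * floor area + 0.5 * distance to the station
price = predict([0.125, 0.5], [30.0, 1.2])   # hypothetical inputs

# weight = 0.05 * height
weight = predict([0.05], [170.0])
```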

Page 8

Hypothesis

๐‘ฆ = ๐‘ค๐‘ฅ + ๐‘๐‘ฅ์ž…๋ ฅ๋ฐ์ดํ„ฐ: ํ‚ค

๐‘ฆ์ถ”์ •๋ฐ์ดํ„ฐ: ๋ชธ๋ฌด๊ฒŒ

๐‘คํšŒ๊ท€๊ฐ€์ค‘์น˜: ๊ธฐ์šธ๊ธฐ

Hypothesis

Page 9

Hypothesis

[Figure: three toy datasets on 0-3 axes, each fit with a different line y = wx + b; slide credit: Andrew Ng]

๐‘ฆ = ๐‘ค๐‘ฅ + ๐‘

Page 10

Hypothesis

๐‘ฆ๐‘– = ๐‘ค0 +๐‘ค๐‘‡๐‘ฅ๐‘–

๐‘ฆ๐‘– = ๐‘ค0 ร— 1 +

๐‘–=1

๐‘š

๐‘ค๐‘–๐‘ฅ๐‘–

๐‘ฆ๐‘– = ๐‘–=0๐‘š ๐‘ค๐‘–๐‘ฅ๐‘– ๐‘ค๐‘ฅ

๐‘ฆ = ๐‘ค๐‘ฅ + ๐‘ ๐‘ฆ = ๐‘ค๐‘ฅ

Variable     Description
J(θ), r      Cost function, residual (r)
y            Instance label vector
ŷ, h(θ)      Hypothesis
w₀, b        Bias (b), y-intercept
xᵢ           Feature vector, x₀ = 1
W            Weight set (w₁, w₂, w₃, …, wₙ)
X            Feature set (x₁, x₂, x₃, …, xₙ)     (generalization)

Page 11

Regression: statistical example

• Population: the amount of vitamin C destroyed as a function of shelf time

โ€ข ๋…๋ฆฝ ๋ณ€์ˆ˜ X๊ฐ€ ์ฃผ์–ด์กŒ์„ ๋•ŒY์— ๋Œ€ํ•œ ๊ธฐ๋Œ€ ๊ฐ’

Shelf time (days), X:          15   20   25   30   35
Vitamin C destroyed (mg), Y:    0   15   30   50   55
                                5   20   35   55   60
                               10   25   40   60   65
                               15   30   45   65   70
                               20   35   50   70   75

๐‘ฆ = ๐‘ค๐‘ฅ + ๐‘ + ๐œ€

๐‘ฆ = ๐œƒ๐‘ฅ + ๐œ€

๐œ€: disturbance term, error variable

Page 12

Regression: statistical example

Random variable of Y

Page 13

Residual

[Figure: the true model vs. the estimated model, with true data points and estimated values; the gaps between them are the residuals r₁, …, r₅. Residual: r (= ε)]

โ€ข ์•„๋ž˜์˜๋ง์€์„œ๋กœ๊ฐ™์€์˜๋ฏธโ€ข ์ •๋‹ต๋ฐ์ดํ„ฐ์™€์ถ”์ •๋ฐ์ดํ„ฐ์˜์ฐจ์ดโ€ข ์ •๋‹ต๋ชจ๋ธ๊ณผ์ถ”์ •๋ชจ๋ธ์˜์ฐจ์ด

๐‘ฆ = ๐‘ค๐‘ฅ + ๐‘, ๐‘ . ๐‘ก. min(๐‘Ÿ)

Page 14

Least Square Error (LSE)

r = y − h_θ(x)     (residual)

rᵢ = yᵢ − ŷᵢ ,    r = Σᵢ (yᵢ − ŷᵢ)

Least squares:

min Σ_{i=1}^{m} rᵢ² = min Σ_{i=1}^{m} (yᵢ − ŷᵢ)²

r = Σ_{i=1}^{m} (yᵢ − wᵀxᵢ − b)²

r = (1/2) Σ_{i=1}^{m} (yᵢ − wᵀxᵢ − b)² = J(θ)     “cost function”

y − r ≅ h_θ(x)
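The cost above can be sketched directly in Python, reusing the height/weight table from the earlier slide; the weight and bias chosen here are arbitrary illustration values, not fitted ones:

```python
def lse_cost(w, b, xs, ys):
    """J = 1/2 * sum of squared residuals for the model y ≈ w*x + b."""
    return 0.5 * sum((y - (w * x + b)) ** 2 for x, y in zip(xs, ys))

heights = [160, 165, 170, 170, 175]   # x: height (from the slide)
weights = [50, 50, 55, 50, 60]        # y: weight (from the slide)

cost = lse_cost(0.6, -50.0, heights, weights)  # w, b are example values
```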

Page 15

[Figure: left, the data and the line h_θ(x) (for fixed θ, a function of x); right, the cost J(θ₁) plotted as a function of the parameter θ₁]

Cost Function

f(x₁) = h_θ(x₁) = θ₁x₁ = 1 ,    J(θ₁) = y₁ − f(x₁)

J(θ₁) = 1 − 1 = 0 = r

∴ min J(θ) == min r     (Andrew Ng)

f(x₁) = h_θ(x₁) = w₁x₁ = 1

Page 16

Training

• The residual must be reduced, i.e., the LSE value must be minimized

• A quadratic function has a single minimum

• The derivative is a linear function of each w, so the minimum along each dimension can be found

• That is, the global minimum can be found

• Gradient descent is used to find this minimum

๐ฝ(๐œƒ) =1

2

๐‘–=1

๐‘š

๐‘ฆ๐‘– โˆ’๐‘ค๐‘‡๐‘ฅ๐‘– โˆ’ ๐‘ 2

Minimum!!

Page 17

Training: Gradient

โ€ข ๊ฐ ๋ณ€์ˆ˜์— ๋Œ€ํ•œ ์ผ์ฐจ ํŽธ๋ฏธ๋ถ„ ๊ฐ’์œผ๋กœ ๊ตฌ์„ฑ๋˜๋Š” ๋ฒกํ„ฐโ€ข ๋ฒกํ„ฐ: ๐‘“(. )์˜ ๊ฐ’์ด ๊ฐ€ํŒŒ๋ฅธ ์ชฝ์˜ ๋ฐฉํ–ฅ์„ ๋‚˜ํƒ€๋ƒ„

โ€ข ๋ฒกํ„ฐ์˜ ํฌ๊ธฐ: ๋ฒกํ„ฐ ์ฆ๊ฐ€, ์ฆ‰ ๊ธฐ์šธ๊ธฐ๋ฅผ ๋‚˜ํƒ€๋ƒ„

โ€ข ์–ด๋–ค ๋‹ค๋ณ€์ˆ˜ ํ•จ์ˆ˜ ๐‘“(๐‘ฅ1, ๐‘ฅ2, โ€ฆ , ๐‘ฅ๐‘›)๊ฐ€ ์žˆ์„ ๋•Œ, ๐‘“์˜gradient๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Œ

๐›ป๐‘“ = (๐œ•๐‘“

๐œ•๐‘ฅ1,๐œ•๐‘“

๐œ•๐‘ฅ2, โ€ฆ ,

๐œ•๐‘“

๐œ•๐‘ฅ๐‘›)

• Using the gradient, a multivariate scalar function f has a linear approximation near a point aₖ (via Taylor expansion):

๐‘“ ๐‘Ž = ๐‘“ ๐‘Ž๐‘˜ + ๐›ป๐‘“ ๐‘Ž๐‘˜ ๐‘Ž โˆ’ ๐‘Ž๐‘˜ + ๐‘œ( ๐‘Ž โˆ’ ๐‘Ž๐‘˜ )

Page 18

Training: Gradient Descent

โ€ข Formula

๐‘Ž ๐‘˜+1 = ๐‘Ž๐‘˜ โˆ’ ๐œ‚๐‘˜๐›ป๐‘“ ๐‘Ž๐‘˜ , ๐‘˜ โ‰ฅ 0

๐œ‚๐‘˜: ๐‘™๐‘’๐‘Ž๐‘Ÿ๐‘›๐‘–๐‘›๐‘” ๐‘Ÿ๐‘Ž๐‘ก๐‘’

โ€ข Algorithm

๐’ƒ๐’†๐’ˆ๐’Š๐’ ๐‘–๐‘›๐‘–๐‘ก ๐‘Ž, ๐‘กโ„Ž๐‘Ÿ๐‘’๐‘ โ„Ž๐‘œ๐‘™๐‘‘ ๐œƒ, ๐œ‚๐’…๐’ ๐‘˜ โ† ๐‘˜ + 1

๐‘Ž โ† ๐‘Ž โˆ’ ๐œ‚๐›ป๐‘“ ๐‘Ž๐’–๐’๐’•๐’Š๐’ ๐œ‚๐›ป๐‘Ž ๐‘˜ < 0

๐’“๐’†๐’•๐’–๐’“๐’ ๐‘Ž๐’†๐’๐’…

์ถœ์ฒ˜: wikipedia
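The algorithm above can be sketched as a minimal one-dimensional example; the function f(a) = (a − 3)², the learning rate, and the threshold are illustrative assumptions, not values from the slides:

```python
def gradient_descent(grad, a, eta=0.1, threshold=1e-8, max_iter=10_000):
    """Repeat a <- a - eta * grad(a) until the step size drops below threshold."""
    for _ in range(max_iter):
        step = eta * grad(a)
        a = a - step
        if abs(step) < threshold:
            break
    return a

# Example: minimize f(a) = (a - 3)^2, whose gradient is 2*(a - 3).
a_min = gradient_descent(lambda a: 2 * (a - 3), a=0.0)
```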

Page 19

Training: Gradient Descent

min ๐ฝ(๐œƒ) =1

2

๐‘–=1

๐‘š

๐‘ฆ๐‘– โˆ’ ๐‘ค๐‘‡๐‘ฅ๐‘–2

๐œ•๐ฝ(๐œƒ)

๐œ•๐‘ค=

๐‘–=1

๐‘š

๐‘ฆ๐‘– โˆ’ ๐‘ค๐‘‡๐‘ฅ๐‘– (โˆ’๐‘ฅ๐‘–)โ€ข ๋ฒกํ„ฐ์—๋Œ€ํ•œ๋ฏธ๋ถ„

๐‘Ž ๐‘˜+1 = ๐‘Ž๐‘˜ โˆ’ ๐œ‚๐‘˜๐›ป๐‘“ ๐‘Ž๐‘˜ , ๐‘˜ โ‰ฅ 0

๐‘ค โ† ๐‘ค โˆ’ ๐œ‚๐œ•๐‘Ÿ

๐œ•๐‘คโ€ข Weight update

๐‘Ÿ์„์ตœ์†Œํ™”ํ•˜๋Š” ๐‘ค๋ฅผ์ฐพ์•„๋ผ!!

Pages 20-28: Training: Gradient Descent

[Figure sequence: successive gradient-descent steps. Each slide pairs the contour plot of the cost J (a function of the parameters) with the current fit h(x) (for fixed parameters, a function of x), as the parameters move toward the minimum; slide credit: Andrew Ng]

Page 29

Training: Solution Derivation

โ€ข ๋ถ„์„์  ๋ฐฉ๋ฒ•(analytic method)โ€ข ๐ฝ(๐œƒ)๋ฅผ ๊ฐ ๋ชจ๋ธ ํŒŒ๋ผ๋ฏธํ„ฐ๋“ค๋กœ ํŽธ๋ฏธ๋ถ„ํ•œ ํ›„์— ๊ทธ ๊ฒฐ๊ณผ๋ฅผ 0์œผ๋กœ

ํ•˜์—ฌ ์—ฐ๋ฆฝ๋ฐฉ์ •์‹ ํ’€์ด

โ€ข ๐‘“ ๐‘ฅ = ๐‘ค๐‘ฅ + ๐‘ ์ธ ๊ฒฝ์šฐ์—๋Š” ๋ชจ๋ธ ํŒŒ๋ผ๋ฏธํ„ฐ ๐‘ค์™€ ๐‘๋กœ ํŽธ๋ฏธ๋ถ„

๐œ•๐‘Ÿ

๐œ•๐‘ค=

๐‘–=1

๐‘š

๐‘ฆ๐‘– โˆ’ ๐‘ค๐‘‡๐‘ฅ๐‘– โˆ’ ๐‘ (โˆ’๐‘ฅ๐‘–) = 0

๐œ•๐‘Ÿ

๐œ•๐‘=

๐‘–=1

๐‘š

๐‘ฆ๐‘– โˆ’ ๐‘ค๐‘‡๐‘ฅ๐‘– โˆ’ ๐‘ (โˆ’1) = 0

๐‘ค์—๋Œ€ํ•œํŽธ๋ฏธ๋ถ„

๐‘์—๋Œ€ํ•œํŽธ๋ฏธ๋ถ„

Page 30

Training: Solution Derivation

๐œ•๐‘Ÿ

๐œ•๐‘=

๐‘–=1

๐‘š

๐‘ฆ๐‘– โˆ’ ๐‘ค๐‘‡๐‘ฅ๐‘– โˆ’ ๐‘ (โˆ’1) = 0

๐‘์—๋Œ€ํ•œํŽธ๋ฏธ๋ถ„

๐œ•๐‘Ÿ

๐œ•๐‘=

๐‘–=1

๐‘š

๐‘ฆ๐‘– โˆ’ ๐‘ค๐‘‡

๐‘–=1

๐‘š

๐‘ฅ๐‘– โˆ’ ๐‘๐‘š = 0

๐œ•๐‘Ÿ

๐œ•๐‘=

๐‘–=1

๐‘š

๐‘ฆ๐‘– โˆ’ ๐‘ค๐‘‡

๐‘–=1

๐‘š

๐‘ฅ๐‘– = ๐‘๐‘š

๐œ•๐‘Ÿ

๐œ•๐‘= ๐‘ฆ โˆ’ ๐‘ค๐‘‡ ๐‘ฅ = ๐‘

Page 31

Training: Solution Derivation

๐œ•๐‘Ÿ

๐œ•๐‘ค=

๐‘–=1

๐‘š

๐‘ฆ๐‘– โˆ’ ๐‘ค๐‘‡๐‘ฅ๐‘– โˆ’ ๐‘ (โˆ’๐‘ฅ๐‘–) = 0

๐‘ค์—๋Œ€ํ•œํŽธ๋ฏธ๋ถ„

0 =

๐‘–=1

๐‘š

๐‘ฆ๐‘–๐‘ฅ๐‘– โˆ’ ๐‘ค๐‘‡๐‘ฅ๐‘–๐‘ฅ๐‘– โˆ’ ๐’ƒ๐‘ฅ๐‘–

0 =

๐‘–=1

๐‘š

๐‘ฆ๐‘–๐‘ฅ๐‘– โˆ’ ๐‘ค๐‘‡๐‘ฅ๐‘–๐‘ฅ๐‘– โˆ’ ( ๐‘ฆ โˆ’ ๐‘ค๐‘‡ ๐‘ฅ)๐‘ฅ๐‘–

0 =

๐‘–=1

๐‘š

๐‘ฆ๐‘–๐‘ฅ๐‘– โˆ’ ๐‘ค๐‘‡๐‘ฅ๐‘–๐‘ฅ๐‘– โˆ’ ๐‘ฆ๐‘ฅ๐‘– + ๐‘ค๐‘‡ ๐‘ฅ๐‘ฅ๐‘–

๐‘–=1

๐‘š

(๐‘ค๐‘‡ ๐‘ฅ๐‘ฅ๐‘– โˆ’๐‘ค๐‘‡๐‘ฅ๐‘–๐‘ฅ๐‘–) =

๐‘–=1

๐‘š

๐‘ฆ๐‘–๐‘ฅ๐‘– โˆ’ ๐‘ฆ๐‘ฅ๐‘–

(

๐‘–=1

๐‘š

๐‘ฅ๐‘ฅ๐‘– โˆ’ ๐‘ฅ๐‘–๐‘ฅ๐‘– ๐‘ค๐‘‡) =

๐‘–=1

๐‘š

๐‘ฆ๐‘–๐‘ฅ๐‘– โˆ’ ๐‘ฆ๐‘ฅ๐‘–

๐‘ค๐‘‡ =

๐‘–=1

๐‘š

๐‘ฅ๐‘ฅ๐‘– โˆ’ ๐‘ฅ๐‘–๐‘ฅ๐‘–

โˆ’1

๐‘–=1

๐‘š

๐‘ฆ๐‘–๐‘ฅ๐‘– โˆ’ ๐‘ฆ๐‘ฅ๐‘–

๐‘ฆ โˆ’ ๐‘ค๐‘‡ ๐‘ฅ = ๐‘

0์˜๊ฐ’์„๊ฐ–๋Š”์ด์œ ๋Š”๋ชจ๋“  instance์˜๊ฐ’์„๋”ํ•˜๋Š”๊ฒƒ๊ณผํ‰๊ท ์„ n๋ฒˆ๋”ํ•˜๋Š”๊ฒƒ์€๊ฐ™์€๊ฐ’์„๊ฐ–๊ฒŒํ•˜๊ธฐ๋•Œ๋ฌธ

Page 32

Training: Solution Derivation

๐œ•๐‘Ÿ

๐œ•๐‘ค=

๐‘–=1

๐‘š

๐‘ฆ๐‘– โˆ’ ๐‘ค๐‘‡๐‘ฅ๐‘– โˆ’ ๐‘ (โˆ’๐‘ฅ๐‘–) = 0

๐‘ค์—๋Œ€ํ•œํŽธ๋ฏธ๋ถ„

๐‘ค๐‘‡ =

๐‘–=1

๐‘š

๐‘ฅ๐‘ฅ๐‘– โˆ’ ๐‘ฅ๐‘–๐‘ฅ๐‘–

โˆ’1

๐‘–=1

๐‘š

๐‘ฆ๐‘–๐‘ฅ๐‘– โˆ’ ๐‘ฆ๐‘ฅ๐‘–

๐‘ค๐‘‡ =

๐‘–=1

๐‘š

๐‘ฅ๐‘–๐‘ฅ๐‘–๐‘‡ โˆ’ ๐‘ฅ๐‘‡๐‘ฅ๐‘– + ( ๐‘ฅ ๐‘ฅ๐‘‡ โˆ’ ๐‘ฅ๐‘ฅ๐‘–

๐‘‡)

โˆ’1

๐‘–=1

๐‘š

๐‘ฆ๐‘–๐‘ฅ๐‘– โˆ’ ๐‘ฆ๐‘ฅ๐‘– + ( ๐‘ฆ ๐‘ฅ โˆ’ ๐‘ฆ๐‘– ๐‘ฅ)

๐‘ค๐‘‡ =

๐‘–=1

๐‘š

๐‘ฅ๐‘– โˆ’ ๐‘ฅ)(๐‘ฅ๐‘– โˆ’ ๐‘ฅ ๐‘‡

โˆ’1

๐‘–=1

๐‘š

๐‘ฅ๐‘– โˆ’ ๐‘ฅ (๐‘ฆ๐‘– โˆ’ ๐‘ฆ)

๐‘ค๐‘‡ =

๐‘–=1

๐‘š

๐‘ฃ๐‘Ž๐‘Ÿ(๐‘ฅ๐‘–)

โˆ’1

๐‘–=1

๐‘š

๐‘๐‘œ๐‘ฃ(๐‘ฅ๐‘– , ๐‘ฆ๐‘–)

solution

๐‘ค๐‘‡ =

๐‘–=1

๐‘š

๐‘ฅ๐‘– โˆ’ ๐‘ฅ)(๐‘ฅ๐‘– โˆ’ ๐‘ฅ ๐‘‡

โˆ’1

๐‘–=1

๐‘š

๐‘ฅ๐‘– โˆ’ ๐‘ฅ (๐‘ฆ๐‘– โˆ’ ๐‘ฆ)

๐‘ = ๐‘ฆ โˆ’ ๐‘ค๐‘‡ ๐‘ฅ
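For a single feature the solution above reduces to a scalar division, which can be sketched on the height/weight table from the earlier slides (the matrix inverse becomes division by the summed variance):

```python
heights = [160, 165, 170, 170, 175]   # x (from the slide)
weights = [50, 50, 55, 50, 60]        # y (from the slide)

m = len(heights)
x_mean = sum(heights) / m
y_mean = sum(weights) / m

# w = (sum (x_i - x̄)^2)^-1 * sum (x_i - x̄)(y_i - ȳ),   b = ȳ - w*x̄
var_x = sum((x - x_mean) ** 2 for x in heights)
cov_xy = sum((x - x_mean) * (y - y_mean) for x, y in zip(heights, weights))
w = cov_xy / var_x
b = y_mean - w * x_mean
```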

Page 33

Training: Algorithm

Page 34

Regression: other problems

Page 35

Regression: Multiple variables

• When we have more information about each friend

ํ‚ค ๋‚˜์ด ๋ฐœํฌ๊ธฐ ๋‹ค๋ฆฌ๊ธธ์ด ๋ชธ๋ฌด๊ฒŒ

์นœ๊ตฌ1 160 17 230 80 50

์นœ๊ตฌ2 165 20 235 85 50

์นœ๊ตฌ3 170 21 240 85 55

์นœ๊ตฌ4 170 24 245 90 60

์นœ๊ตฌ5 175 26 250 90 60

Features Label

Instance โ†’ ๐‘–

โ„Ž ๐‘ฅ = ๐‘ค0๐‘ฅ0 +๐‘ค1๐‘ฅ1 + ๐‘ค2๐‘ฅ2 + ๐‘ค3๐‘ฅ3 + ๐‘ค4๐‘ฅ4 + ๐‘ค5๐‘ฅ5Hypothesis:

๐‘ค0, ๐‘ค1, ๐‘ค2, ๐‘ค3, ๐‘ค4, ๐‘ค5Parameters:

๐‘ฅ0, ๐‘ฅ1, ๐‘ฅ2, ๐‘ฅ3, ๐‘ฅ4, ๐‘ฅ5Features:

๐‘ฆ๐‘ฅ1 ๐‘ฅ2 ๐‘ฅ3 ๐‘ฅ4

๐‘–1

๐‘–2

๐‘–3

๐‘–4

๐‘–5

Page 36

Regression: Multiple variables

• Hypothesis:  h(x) = wᵀx = w₀x₀ + w₁x₁ + w₂x₂ + ⋯ + wₙxₙ

• Parameters:  w₀, w₁, w₂, w₃, w₄, …, wₙ ∈ ℝⁿ⁺¹

• Features:    x₀, x₁, x₂, x₃, x₄, …, xₙ ∈ ℝⁿ⁺¹

• Cost function:  J(w₀, w₁, …, wₙ) = (1/2) Σ_{i=1}^{m} (yᵢ − h(xᵢ))²

x = (x₀, x₁, x₂, x₃, …, xₙ)ᵀ ∈ ℝⁿ⁺¹ ,    w = (w₀, w₁, w₂, w₃, …, wₙ)ᵀ ∈ ℝⁿ⁺¹

Page 37

Multiple variables: Gradient descent

• Gradient descent

∂J(θ)/∂w = Σ_{i=1}^{m} (yᵢ − wᵀxᵢ)(−xᵢ)

Standard (n = 1), n: num. of features

Repeat {
  w₀ = w₀ − η Σ_{i=1}^{m} (yᵢ − wᵀxᵢ)(−xᵢ₀)      (xᵢ₀ = 1)
  w₁ = w₁ − η Σ_{i=1}^{m} (yᵢ − wᵀxᵢ)(−xᵢ₁)
}

Multiple (n ≥ 1)

Repeat {
  wⱼ = wⱼ − η Σ_{i=1}^{m} (yᵢ − wᵀxᵢ)(−xᵢⱼ)      (update every j simultaneously)
}

That is:
  w₀ = w₀ − η Σ_{i=1}^{m} (yᵢ − wᵀxᵢ)(−xᵢ₀)
  w₁ = w₁ − η Σ_{i=1}^{m} (yᵢ − wᵀxᵢ)(−xᵢ₁)
  w₂ = w₂ − η Σ_{i=1}^{m} (yᵢ − wᵀxᵢ)(−xᵢ₂)
  …
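The simultaneous updates above can be sketched in vectorized form. A minimal NumPy sketch; the toy data, learning rate, and iteration count are illustrative assumptions, not values from the slides:

```python
import numpy as np

def batch_gradient_descent(X, y, eta=0.01, iters=1000):
    """Batch gradient descent for linear regression.
    X: (m, n+1) feature matrix with x0 = 1 in the first column; y: (m,) labels.
    Implements w_j <- w_j - eta * sum_i (y_i - w.x_i)(-x_ij) for all j at once."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        residual = y - X @ w      # (m,) vector of y_i - w.x_i
        grad = -X.T @ residual    # vector of sum_i (y_i - w.x_i)(-x_ij)
        w = w - eta * grad
    return w

# Toy data generated from y = 1 + 2*x, so GD should recover w ≈ [1, 2].
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
X = np.column_stack([np.ones_like(x), x])
y = 1 + 2 * x
w = batch_gradient_descent(X, y, eta=0.05, iters=5000)
```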

Page 38

Multiple variables: Feature scaling

โ€ข Feature scaling

โ€ข ๊ฐ๊ฐ์˜ ์ž์งˆ ๊ฐ’ ๋ฒ”์œ„๋“ค์ด ์„œ๋กœ ๋‹ค๋ฆ„โ€ข ํ‚ค: 160~175, ๋‚˜์ด: 17~26, ๋ฐœ ํฌ๊ธฐ: 230~250, ๋‹ค๋ฆฌ ๊ธธ์ด:

80~90

โ€ข Gradient descent ํ•  ๋•Œ ์ตœ์†Œ ๊ฐ’์œผ๋กœ ์ˆ˜๋ ดํ•˜๋Š”๋ฐ ์˜ค๋ž˜๊ฑธ๋ฆผ

ํ‚ค ๋‚˜์ด ๋ฐœํฌ๊ธฐ ๋‹ค๋ฆฌ๊ธธ์ด ๋ชธ๋ฌด๊ฒŒ

์นœ๊ตฌ1 160 17 230 80 50

์นœ๊ตฌ2 165 20 235 85 50

์นœ๊ตฌ3 170 21 240 85 55

์นœ๊ตฌ4 170 24 245 90 60

์นœ๊ตฌ5 175 26 250 90 60

Page 39

Multiple variables: Feature scaling

โ€ข Feature scaling

โ€ข ์ž์งˆ ๊ฐ’ ๋ฒ”์œ„๊ฐ€ ๋„ˆ๋ฌด ์ปค์„œ ๊ทธ๋ฆผ๊ณผ ๊ฐ™์ด ๋ฏธ๋ถ„์„ ๋งŽ์ด ํ•˜๊ฒŒ ๋จ, ์ฆ‰ iteration์„ ๋งŽ์ด ์ˆ˜ํ–‰ํ•˜๊ฒŒ ๋จ

• For example:
  • features with ranges like these are fine:
      −0.5 ≤ x₁ ≤ 0.5 ,   −2 ≤ x₂ ≤ 3
  • features with ranges like these are a problem:
      −1000 ≤ x₁ ≤ 2000 ,   0 ≤ x₂ ≤ 5000

Page 40

Multiple variables: Feature scaling

โ€ข Feature scaling

โ€ข ๋”ฐ๋ผ์„œ ์ž์งˆ ๊ฐ’ ๋ฒ”์œ„๋ฅผ โˆ’1 โ‰ค ๐‘ฅ๐‘– โ‰ค 1 ์‚ฌ์ด๋กœ ์žฌ์ •์˜

Feature scaling

• Scaling:  (xᵢ − μᵢ) / Sᵢ
    xᵢ: feature value
    μᵢ: mean of the feature values
    Sᵢ: range of the feature values,  Sᵢ = max(feat.) − min(feat.)

Example (foot size): 230 ≤ xᵢ ≤ 250 → range Sᵢ = 250 − 230 = 20 ,  μᵢ = 240

(xᵢ − 240) / 20

x₁ = 230 → (230 − 240) / 20 = −0.5
x₅ = 250 → (250 − 240) / 20 = 0.5
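The mean-normalization step can be sketched as follows, applied to the foot-size column from the friends table:

```python
def feature_scale(values):
    """Rescale a feature: (x - mean) / range, mapping it into roughly [-0.5, 0.5]."""
    mu = sum(values) / len(values)
    s = max(values) - min(values)        # S_i = max(feat.) - min(feat.)
    return [(x - mu) / s for x in values]

foot_sizes = [230, 235, 240, 245, 250]   # foot-size column from the slide
scaled = feature_scale(foot_sizes)       # [-0.5, -0.25, 0.0, 0.25, 0.5]
```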

Page 41

Multiple variables: Feature scaling

โ€ข Feature scaling

• Normalize via feature scaling

• A simple operation

• As a result, gradient descent can converge quickly

Page 42

Linear Regression: Normal equation

โ€ข ์•ž์—์„œ ๋‹ค๋ค˜๋˜ ๋ฐฉ๋ฒ•์€ ๋‹คํ•ญ์‹์„ ์ด์šฉํ•œ ๋ถ„์„์  ๋ฐฉ๋ฒ•

โ€ข ๋ถ„์„์  ๋ฐฉ๋ฒ•์€ ๊ณ ์ฐจ ํ•จ์ˆ˜๋‚˜ ๋‹ค๋ณ€์ˆ˜ ํ•จ์ˆ˜๊ฐ€ ๋˜๋ฉด ๊ณ„์‚ฐ์ด์–ด๋ ค์›€

โ€ข ๋”ฐ๋ผ์„œ ๋Œ€์ˆ˜์  ๋ฐฉ๋ฒ•์œผ๋กœ ์ ‘๊ทผ Normal equation

๋ถ„์„์ ๋ฐฉ๋ฒ•:

โ€ข Gradient Descent ํ•„์š”

๐œ‚์™€ many iteration ํ•„์š”

โ€ข ๐‘›์ด๋งŽ์œผ๋ฉด์ข‹์€์„ฑ๋Šฅ

Such as, ๐‘š training examples, ๐‘› features

๋Œ€์ˆ˜์ ๋ฐฉ๋ฒ•:

โ€ข Gradient Descent ํ•„์š”์—†์Œ

๐œ‚์™€many iteration ํ•„์š”์—†์Œ

โ€ข ๐‘‹๐‘‡๐‘‹ โˆ’1์˜๊ณ„์‚ฐ๋งŒํ•„์š” ๐‘‚(๐‘›3)

โ€ข ๐‘›์ด๋งŽ์œผ๋ฉด์†๋„๋Š๋ฆผ

Page 43

Linear Regression: Normal equation

Examples:

x₀  Size (feet²)  Bedrooms  Floors  Age of home (years) | Price ($1000)
1      2104          5        1           45            |     460
1      1416          3        2           40            |     232
1      1534          3        2           30            |     315
1       852          2        1           36            |     178

W = (w₀, w₁, w₂, w₃, w₄)ᵀ

∴ XW = y

Page 44

Examples: (the same housing table as on the previous page)

Linear Regression: Normal equation

๐‘Š = ๐‘‹๐‘‡๐‘‹ โˆ’1๐‘‹๐‘‡๐‘ฆ๐‘Š๐‘‹ = ๐‘ฆ โ†’

Page 45

Linear Regression: Normal equation

โ€œ๐‘Š = ๐‘‹๐‘‡๐‘‹ โˆ’1๐‘‹๐‘‡๐‘ฆโ€๊ฐ€์ •๋ง ๐‘Ÿ๐‘’๐‘ ๐‘–๐‘‘๐‘ข๐‘Ž๐‘™2 ํ•ฉ์„์ตœ์†Œ๋กœํ•˜๋Š”๋ชจ๋ธ์ธ๊ฐ€?์–ด๋–ป๊ฒŒ์œ ๋„ํ•˜๋Š”๊ฐ€?

๐‘Ÿ = ๐‘ฆ โˆ’ ๐‘ฆ โ†’ ๐‘Œ โˆ’๐‘Š๐‘‹ 2

min( ๐‘Œ โˆ’๐‘Š๐‘‹ 2)์„๋งŒ์กฑํ•˜๋Š”๐‘Š๋ฅผ๊ตฌํ•˜๋ผ

โˆด ๐‘Š์„ํŽธ๋ฏธ๋ถ„ํ•œํ›„ 0์œผ๋กœ๋†“์œผ๋ฉด

โˆ’2๐‘‹๐‘‡ ๐‘Œ โˆ’๐‘Š๐‘‹ = 0

โˆ’2๐‘‹๐‘‡๐‘Œ + 2๐‘‹๐‘‡๐‘Š๐‘‹ = 0

2๐‘‹๐‘‡๐‘Š๐‘‹ = 2๐‘‹๐‘‡๐‘Œ

โˆด ๐‘Š = ๐‘‹๐‘‡๐‘‹ โˆ’1๐‘‹๐‘‡๐‘Œ

๐‘‹๐‘‡๐‘Š๐‘‹ = ๐‘‹๐‘‡๐‘Œ

Page 46

References

โ€ข https://class.coursera.org/ml-007/lecture

โ€ข http://deepcumen.com/2015/04/linear-regression-2/

โ€ข http://www.aistudy.com/math/regression_lee.htm

โ€ข http://en.wikipedia.org/wiki/Linear_regression

Page 47

Q&A

๊ฐ์‚ฌํ•ฉ๋‹ˆ๋‹ค.

๋ฐ•์ฒœ์Œ, ๋ฐ•์ฐฌ๋ฏผ, ์ตœ์žฌํ˜, ๋ฐ•์„ธ๋นˆ, ์ด์ˆ˜์ •

๐‘ ๐‘–๐‘”๐‘š๐‘Ž ๐œถ , ๊ฐ•์›๋Œ€ํ•™๊ต

Email: [email protected]