4 Mon (2021 JAN)

TIL

[Inflearn] Data Analysis and Visualization with Just Two Pages of Documentation

Reshaping a DataFrame with df.sort_values, rename, sort_index, reset_index - Reshaping Data

```python
import pandas as pd
import numpy as np
import seaborn as sns  # visualization tool

df = sns.load_dataset("mpg")
df.shape
# (398, 9)
df.head()
```

|   | mpg | cylinders | displacement | horsepower | weight | acceleration | model_year | origin | name |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 18.0 | 8 | 307.0 | 130.0 | 3504 | 12.0 | 70 | usa | chevrolet chevelle malibu |
| 1 | 15.0 | 8 | 350.0 | 165.0 | 3693 | 11.5 | 70 | usa | buick skylark 320 |
| 2 | 18.0 | 8 | 318.0 | 150.0 | 3436 | 11.0 | 70 | usa | plymouth satellite |
| 3 | 16.0 | 8 | 304.0 | 150.0 | 3433 | 12.0 | 70 | usa | amc rebel sst |
| 4 | 17.0 | 8 | 302.0 | 140.0 | 3449 | 10.5 | 70 | usa | ford torino |
```python
df.sort_values('mpg')                   # Order rows by values of a column (low to high).
df.sort_values('mpg', ascending=False)  # Order rows by values of a column (high to low).
df.sort_values('mpg').head()
```

|   | mpg | cylinders | displacement | horsepower | weight | acceleration | model_year | origin | name |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 28 | 9.0 | 8 | 304.0 | 193.0 | 4732 | 18.5 | 70 | usa | hi 1200d |
| 25 | 10.0 | 8 | 360.0 | 215.0 | 4615 | 14.0 | 70 | usa | ford f250 |
| 26 | 10.0 | 8 | 307.0 | 200.0 | 4376 | 15.0 | 70 | usa | chevy c20 |
| 103 | 11.0 | 8 | 400.0 | 150.0 | 4997 | 14.0 | 73 | usa | chevrolet impala |
| 124 | 11.0 | 8 | 350.0 | 180.0 | 3664 | 11.0 | 73 | usa | oldsmobile omega |

```python
df.sort_values('mpg', ascending=False).head()
# the default for ascending is True
```

|   | mpg | cylinders | displacement | horsepower | weight | acceleration | model_year | origin | name |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 322 | 46.6 | 4 | 86.0 | 65.0 | 2110 | 17.9 | 80 | japan | mazda glc |
| 329 | 44.6 | 4 | 91.0 | 67.0 | 1850 | 13.8 | 80 | japan | honda civic 1500 gl |
| 325 | 44.3 | 4 | 90.0 | 48.0 | 2085 | 21.7 | 80 | europe | vw rabbit c (diesel) |
| 394 | 44.0 | 4 | 97.0 | 52.0 | 2130 | 24.6 | 82 | europe | vw pickup |
| 326 | 43.4 | 4 | 90.0 | 48.0 | 2335 | 23.7 | 80 | europe | vw dasher (diesel) |

```python
df.sort_values?  # view the docstring

df.rename(columns={'y': 'year'})  # Rename the columns of a DataFrame
df = df.rename(columns={'model_year': 'year'})
df.head()
```

|   | mpg | cylinders | displacement | horsepower | weight | acceleration | year | origin | name |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 18.0 | 8 | 307.0 | 130.0 | 3504 | 12.0 | 70 | usa | chevrolet chevelle malibu |
| 1 | 15.0 | 8 | 350.0 | 165.0 | 3693 | 11.5 | 70 | usa | buick skylark 320 |
| 2 | 18.0 | 8 | 318.0 | 150.0 | 3436 | 11.0 | 70 | usa | plymouth satellite |
| 3 | 16.0 | 8 | 304.0 | 150.0 | 3433 | 12.0 | 70 | usa | amc rebel sst |
| 4 | 17.0 | 8 | 302.0 | 140.0 | 3449 | 10.5 | 70 | usa | ford torino |

```python
df.sort_index()   # Sort the index of a DataFrame
df.reset_index()  # Reset index of DataFrame to row numbers, moving index to columns
df.sort_index().head(10)
```

|   | mpg | cylinders | displacement | horsepower | weight | acceleration | year | origin | name |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 18.0 | 8 | 307.0 | 130.0 | 3504 | 12.0 | 70 | usa | chevrolet chevelle malibu |
| 1 | 15.0 | 8 | 350.0 | 165.0 | 3693 | 11.5 | 70 | usa | buick skylark 320 |
| 2 | 18.0 | 8 | 318.0 | 150.0 | 3436 | 11.0 | 70 | usa | plymouth satellite |
| 3 | 16.0 | 8 | 304.0 | 150.0 | 3433 | 12.0 | 70 | usa | amc rebel sst |
| 4 | 17.0 | 8 | 302.0 | 140.0 | 3449 | 10.5 | 70 | usa | ford torino |
| 5 | 15.0 | 8 | 429.0 | 198.0 | 4341 | 10.0 | 70 | usa | ford galaxie 500 |
| 6 | 14.0 | 8 | 454.0 | 220.0 | 4354 | 9.0 | 70 | usa | chevrolet impala |
| 7 | 14.0 | 8 | 440.0 | 215.0 | 4312 | 8.5 | 70 | usa | plymouth fury iii |
| 8 | 14.0 | 8 | 455.0 | 225.0 | 4425 | 10.0 | 70 | usa | pontiac catalina |
| 9 | 15.0 | 8 | 390.0 | 190.0 | 3850 | 8.5 | 70 | usa | amc ambassador dpl |

```python
df.reset_index().head(10)
# moves the old index into an 'index' column and creates a fresh default index
```

|   | index | mpg | cylinders | displacement | horsepower | weight | acceleration | year | origin | name |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 0 | 18.0 | 8 | 307.0 | 130.0 | 3504 | 12.0 | 70 | usa | chevrolet chevelle malibu |
| 1 | 1 | 15.0 | 8 | 350.0 | 165.0 | 3693 | 11.5 | 70 | usa | buick skylark 320 |
| 2 | 2 | 18.0 | 8 | 318.0 | 150.0 | 3436 | 11.0 | 70 | usa | plymouth satellite |
| 3 | 3 | 16.0 | 8 | 304.0 | 150.0 | 3433 | 12.0 | 70 | usa | amc rebel sst |
| 4 | 4 | 17.0 | 8 | 302.0 | 140.0 | 3449 | 10.5 | 70 | usa | ford torino |
| 5 | 5 | 15.0 | 8 | 429.0 | 198.0 | 4341 | 10.0 | 70 | usa | ford galaxie 500 |
| 6 | 6 | 14.0 | 8 | 454.0 | 220.0 | 4354 | 9.0 | 70 | usa | chevrolet impala |
| 7 | 7 | 14.0 | 8 | 440.0 | 215.0 | 4312 | 8.5 | 70 | usa | plymouth fury iii |
| 8 | 8 | 14.0 | 8 | 455.0 | 225.0 | 4425 | 10.0 | 70 | usa | pontiac catalina |
| 9 | 9 | 15.0 | 8 | 390.0 | 190.0 | 3850 | 8.5 | 70 | usa | amc ambassador dpl |

```python
df.drop(columns=['Length', 'Height'])  # Drop columns from DataFrame
df.drop(columns=['mpg', 'year']).head(10)
```

|   | cylinders | displacement | horsepower | weight | acceleration | origin | name |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 8 | 307.0 | 130.0 | 3504 | 12.0 | usa | chevrolet chevelle malibu |
| 1 | 8 | 350.0 | 165.0 | 3693 | 11.5 | usa | buick skylark 320 |
| 2 | 8 | 318.0 | 150.0 | 3436 | 11.0 | usa | plymouth satellite |
| 3 | 8 | 304.0 | 150.0 | 3433 | 12.0 | usa | amc rebel sst |
| 4 | 8 | 302.0 | 140.0 | 3449 | 10.5 | usa | ford torino |
| 5 | 8 | 429.0 | 198.0 | 4341 | 10.0 | usa | ford galaxie 500 |
| 6 | 8 | 454.0 | 220.0 | 4354 | 9.0 | usa | chevrolet impala |
| 7 | 8 | 440.0 | 215.0 | 4312 | 8.5 | usa | plymouth fury iii |
| 8 | 8 | 455.0 | 225.0 | 4425 | 10.0 | usa | pontiac catalina |
| 9 | 8 | 390.0 | 190.0 | 3850 | 8.5 | usa | amc ambassador dpl |

```python
df.drop(columns=['mpg', 'year', 'name']).head(10)
```

|   | cylinders | displacement | horsepower | weight | acceleration | origin |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | 8 | 307.0 | 130.0 | 3504 | 12.0 | usa |
| 1 | 8 | 350.0 | 165.0 | 3693 | 11.5 | usa |
| 2 | 8 | 318.0 | 150.0 | 3436 | 11.0 | usa |
| 3 | 8 | 304.0 | 150.0 | 3433 | 12.0 | usa |
| 4 | 8 | 302.0 | 140.0 | 3449 | 10.5 | usa |
| 5 | 8 | 429.0 | 198.0 | 4341 | 10.0 | usa |
| 6 | 8 | 454.0 | 220.0 | 4354 | 9.0 | usa |
| 7 | 8 | 440.0 | 215.0 | 4312 | 8.5 | usa |
| 8 | 8 | 455.0 | 225.0 | 4425 | 10.0 | usa |
| 9 | 8 | 390.0 | 190.0 | 3850 | 8.5 | usa |

Making Tidy Data with melt and pivot - Reshaping Data, Method Chaining

```python
pd.melt(df)  # Gather columns into rows.
pd.melt?     # view the docstring

df = pd.DataFrame({'A': {0: 'a', 1: 'b', 2: 'c'},
                   'B': {0: 1, 1: 3, 2: 5},
                   'C': {0: 2, 1: 4, 2: 6}})
df
```

|   | A | B | C |
| --- | --- | --- | --- |
| 0 | a | 1 | 2 |
| 1 | b | 3 | 4 |
| 2 | c | 5 | 6 |

```python
pd.melt(df, id_vars=['A'], value_vars=['B'])
```

|   | A | variable | value |
| --- | --- | --- | --- |
| 0 | a | B | 1 |
| 1 | b | B | 3 |
| 2 | c | B | 5 |

```python
pd.melt(df, id_vars=['A'], value_vars=['B', 'C'])
```

|   | A | variable | value |
| --- | --- | --- | --- |
| 0 | a | B | 1 |
| 1 | b | B | 3 |
| 2 | c | B | 5 |
| 3 | a | C | 2 |
| 4 | b | C | 4 |
| 5 | c | C | 6 |

```python
pd.melt(df, value_vars=['A', 'B', 'C'])
```

|   | variable | value |
| --- | --- | --- |
| 0 | A | a |
| 1 | A | b |
| 2 | A | c |
| 3 | B | 1 |
| 4 | B | 3 |
| 5 | B | 5 |
| 6 | C | 2 |
| 7 | C | 4 |
| 8 | C | 6 |

```python
pd.melt(df, value_vars=['A', 'B', 'C']).rename(columns={
    'variable': 'var',
    'value': 'val'})
```

|   | var | val |
| --- | --- | --- |
| 0 | A | a |
| 1 | A | b |
| 2 | A | c |
| 3 | B | 1 |
| 4 | B | 3 |
| 5 | B | 5 |
| 6 | C | 2 |
| 7 | C | 4 |
| 8 | C | 6 |

```python
df.pivot(columns='var', values='val')  # Spread rows into columns.
df.pivot?  # view the docstring

df = pd.DataFrame({'foo': ['one', 'one', 'one', 'two', 'two', 'two'],
                   'bar': ['A', 'B', 'C', 'A', 'B', 'C'],
                   'baz': [1, 2, 3, 4, 5, 6],
                   'zoo': ['x', 'y', 'z', 'q', 'w', 't']})
df
```

|   | foo | bar | baz | zoo |
| --- | --- | --- | --- | --- |
| 0 | one | A | 1 | x |
| 1 | one | B | 2 | y |
| 2 | one | C | 3 | z |
| 3 | two | A | 4 | q |
| 4 | two | B | 5 | w |
| 5 | two | C | 6 | t |

```python
df2 = df.pivot(index='foo', columns='bar', values='baz')
# values that were in rows move into columns
# (the index is named 'foo', the column level is named 'bar')
df2
```

| foo | A | B | C |
| --- | --- | --- | --- |
| one | 1 | 2 | 3 |
| two | 4 | 5 | 6 |

```python
df3 = df.pivot(index='foo', columns='bar', values='baz').reset_index()
df3
```

|   | foo | A | B | C |
| --- | --- | --- | --- | --- |
| 0 | one | 1 | 2 | 3 |
| 1 | two | 4 | 5 | 6 |

```python
df3.melt(id_vars=['foo'], value_vars=['A', 'B', 'C'])
```

|   | foo | bar | value |
| --- | --- | --- | --- |
| 0 | one | A | 1 |
| 1 | two | A | 4 |
| 2 | one | B | 2 |
| 3 | two | B | 5 |
| 4 | one | C | 3 |
| 5 | two | C | 6 |

```python
df3.melt(id_vars=['foo'], value_vars=['A', 'B', 'C']).sort_values('bar')
```

|   | foo | bar | value |
| --- | --- | --- | --- |
| 0 | one | A | 1 |
| 1 | two | A | 4 |
| 2 | one | B | 2 |
| 3 | two | B | 5 |
| 4 | one | C | 3 |
| 5 | two | C | 6 |

```python
df3.melt(id_vars=['foo'], value_vars=['A', 'B', 'C']).sort_values(['foo', 'bar'])
```

|   | foo | bar | value |
| --- | --- | --- | --- |
| 0 | one | A | 1 |
| 2 | one | B | 2 |
| 4 | one | C | 3 |
| 1 | two | A | 4 |
| 3 | two | B | 5 |
| 5 | two | C | 6 |

```python
df3.melt(id_vars=['foo'], value_vars=['A', 'B', 'C']).sort_values(['foo', 'bar']).rename(columns={'value': 'baz'})
```

|   | foo | bar | baz |
| --- | --- | --- | --- |
| 0 | one | A | 1 |
| 2 | one | B | 2 |
| 4 | one | C | 3 |
| 1 | two | A | 4 |
| 3 | two | B | 5 |
| 5 | two | C | 6 |

Combining Series and DataFrames with pd.concat([df1, df2]) - Reshaping Data

```python
pd.concat([df1, df2])          # Append rows of DataFrames
pd.concat([df1, df2], axis=1)  # Append columns of DataFrames
pd.concat?                     # view the docstring

s1 = pd.Series(['a', 'b'])
s1
# 0    a
# 1    b
# dtype: object
s2 = pd.Series(['c', 'd'])
s2
# 0    c
# 1    d
# dtype: object
pd.concat([s1, s2])
# 0    a
# 1    b
# 0    c
# 1    d
# dtype: object
pd.concat([s1, s2], ignore_index=True)
# 0    a
# 1    b
# 2    c
# 3    d
# dtype: object
pd.concat([s1, s2], keys=['s1', 's2'])
# s1  0    a
#     1    b
# s2  0    c
#     1    d
# dtype: object
pd.concat([s1, s2], keys=['s1', 's2'],
          names=['Series name', 'Row ID'])
# Series name  Row ID
# s1           0         a
#              1         b
# s2           0         c
#              1         d
# dtype: object
```
```python
df1 = pd.DataFrame([['a', 1], ['b', 2]],
                   columns=['letter', 'number'])
df1
```

|   | letter | number |
| --- | --- | --- |
| 0 | a | 1 |
| 1 | b | 2 |

```python
df2 = pd.DataFrame([['c', 3], ['d', 4]],
                   columns=['letter', 'number'])
df2
```

|   | letter | number |
| --- | --- | --- |
| 0 | c | 3 |
| 1 | d | 4 |

```python
pd.concat([df1, df2])
```

|   | letter | number |
| --- | --- | --- |
| 0 | a | 1 |
| 1 | b | 2 |
| 0 | c | 3 |
| 1 | d | 4 |

```python
df3 = pd.DataFrame([['c', 3, 'cat'], ['d', 4, 'dog']],
                   columns=['letter', 'number', 'animal'])
df3
```

|   | letter | number | animal |
| --- | --- | --- | --- |
| 0 | c | 3 | cat |
| 1 | d | 4 | dog |

```python
# combining two DataFrames with different shapes
pd.concat([df1, df3])
```

|   | letter | number | animal |
| --- | --- | --- | --- |
| 0 | a | 1 | NaN |
| 1 | b | 2 | NaN |
| 0 | c | 3 | cat |
| 1 | d | 4 | dog |

```python
pd.concat([df1, df3], join="inner")
```

|   | letter | number |
| --- | --- | --- |
| 0 | a | 1 |
| 1 | b | 2 |
| 0 | c | 3 |
| 1 | d | 4 |

```python
df4 = pd.DataFrame([['bird', 'polly'], ['monkey', 'george']],
                   columns=['animal', 'name'])
df4
```

|   | animal | name |
| --- | --- | --- |
| 0 | bird | polly |
| 1 | monkey | george |

```python
df5 = pd.DataFrame([1], index=['a'])
df5
```

|   | 0 |
| --- | --- |
| a | 1 |

```python
df6 = pd.DataFrame([2], index=['a'])
df6
```

|   | 0 |
| --- | --- |
| a | 2 |

```python
pd.concat([df5, df6])
```

|   | 0 |
| --- | --- |
| a | 1 |
| a | 2 |

```python
pd.concat([df5, df6], verify_integrity=True)
# raises ValueError because the indexes overlap
```

Combining DataFrames with merge: using the left, right, inner, outer options - Combine Data Sets

```python
import pandas as pd

adf = pd.DataFrame({"x1": ["A", "B", "C"], "x2": [1, 2, 3]})
adf
```

|   | x1 | x2 |
| --- | --- | --- |
| 0 | A | 1 |
| 1 | B | 2 |
| 2 | C | 3 |

```python
bdf = pd.DataFrame({"x1": ["A", "B", "D"], "x3": ["T", "F", "T"]})
bdf
```

|   | x1 | x3 |
| --- | --- | --- |
| 0 | A | T |
| 1 | B | F |
| 2 | D | T |

```python
pd.merge(adf, bdf, how='left', on='x1')   # Join matching rows from bdf to adf.
pd.merge(adf, bdf, how='right', on='x1')  # Join matching rows from adf to bdf.
pd.merge(adf, bdf, how='inner', on='x1')  # Join data. Retain only rows in both sets.
pd.merge(adf, bdf, how='outer', on='x1')  # Join data. Retain all values, all rows.

pd.merge(adf, bdf, how='left', on='x1')
```

|   | x1 | x2 | x3 |
| --- | --- | --- | --- |
| 0 | A | 1 | T |
| 1 | B | 2 | F |
| 2 | C | 3 | NaN |

```python
pd.merge(adf, bdf, how='right', on='x1')
```

|   | x1 | x2 | x3 |
| --- | --- | --- | --- |
| 0 | A | 1.0 | T |
| 1 | B | 2.0 | F |
| 2 | D | NaN | T |

```python
pd.merge(adf, bdf, how='inner', on='x1')
```

|   | x1 | x2 | x3 |
| --- | --- | --- | --- |
| 0 | A | 1 | T |
| 1 | B | 2 | F |

```python
pd.merge(adf, bdf, how='outer', on='x1')
```

|   | x1 | x2 | x3 |
| --- | --- | --- | --- |
| 0 | A | 1.0 | T |
| 1 | B | 2.0 | F |
| 2 | C | 3.0 | NaN |
| 3 | D | NaN | T |

```python
adf[adf.x1.isin(bdf.x1)]   # All rows in adf that have a match in bdf.
adf[~adf.x1.isin(bdf.x1)]  # All rows in adf that do not have a match in bdf.

adf.x1.isin(bdf.x1)
# 0     True
# 1     True
# 2    False
# Name: x1, dtype: bool
adf[adf.x1.isin(bdf.x1)]
```

|   | x1 | x2 |
| --- | --- | --- |
| 0 | A | 1 |
| 1 | B | 2 |

```python
adf[~adf.x1.isin(bdf.x1)]
```

|   | x1 | x2 |
| --- | --- | --- |
| 2 | C | 3 |

```python
pd.merge(ydf, zdf)               # Rows that appear in both ydf and zdf (Intersection).
pd.merge(ydf, zdf, how='outer')  # Rows that appear in either or both ydf and zdf (Union).
(pd.merge(ydf, zdf, how='outer', indicator=True)
   .query('_merge == "left_only"')
   .drop(columns=['_merge']))    # Rows that appear in ydf but not zdf (Setdiff).

ydf = pd.DataFrame({"x1": ["A", "B", "C"], "x2": [1, 2, 3]})
ydf
```

|   | x1 | x2 |
| --- | --- | --- |
| 0 | A | 1 |
| 1 | B | 2 |
| 2 | C | 3 |

```python
zdf = pd.DataFrame({"x1": ["B", "C", "D"], "x2": [2, 3, 4]})
zdf
```

|   | x1 | x2 |
| --- | --- | --- |
| 0 | B | 2 |
| 1 | C | 3 |
| 2 | D | 4 |

```python
pd.merge(ydf, zdf)
# the default is an inner join
```

|   | x1 | x2 |
| --- | --- | --- |
| 0 | B | 2 |
| 1 | C | 3 |

```python
pd.merge(ydf, zdf, how='outer')
```

|   | x1 | x2 |
| --- | --- | --- |
| 0 | A | 1 |
| 1 | B | 2 |
| 2 | C | 3 |
| 3 | D | 4 |

```python
pd.merge(ydf, zdf, how='outer', indicator=True)
```

|   | x1 | x2 | _merge |
| --- | --- | --- | --- |
| 0 | A | 1 | left_only |
| 1 | B | 2 | both |
| 2 | C | 3 | both |
| 3 | D | 4 | right_only |

```python
pd.merge(ydf, zdf, how='outer', indicator=True).query('_merge == "left_only"')
```

|   | x1 | x2 | _merge |
| --- | --- | --- | --- |
| 0 | A | 1 | left_only |

```python
pd.merge(ydf, zdf, how='outer', indicator=True).query('_merge == "left_only"').drop(columns=['_merge'])
```

|   | x1 | x2 |
| --- | --- | --- |
| 0 | A | 1 |

concat works well for stacking data vertically; merge works well for joining data side by side.
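That rule of thumb can be sketched side by side; the toy frames below are made up for illustration:

```python
import pandas as pd

df1 = pd.DataFrame({"x1": ["A", "B"], "x2": [1, 2]})
df2 = pd.DataFrame({"x1": ["C", "D"], "x2": [3, 4]})
df3 = pd.DataFrame({"x1": ["A", "B"], "x3": ["T", "F"]})

# concat: stack rows vertically (same columns)
stacked = pd.concat([df1, df2], ignore_index=True)

# merge: join columns horizontally on a shared key
joined = pd.merge(df1, df3, on="x1")
```

`stacked` has 4 rows and the original two columns, while `joined` has 2 rows and all three columns aligned on `x1`.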

ํŒŒ์ด์ฌ ํŒ๋‹ค์Šค๋กœ groupby ํ™œ์šฉํ•˜์—ฌ ๋‹ค์–‘ํ•œ ๋ฐ์ดํ„ฐ ์ง‘๊ณ„๋ฅผ ํ™œ์šฉํ•˜๊ธฐ - Group Data

```python
import pandas as pd
import seaborn as sns

df.groupby(by="col")     # Return a GroupBy object, grouped by values in column named "col".
df.groupby(level="ind")  # Return a GroupBy object, grouped by values in index level named "ind".
# size()        : Size of each group.
# agg(function) : Aggregate group using function.

df = sns.load_dataset("mpg")
df.head()
```

|   | mpg | cylinders | displacement | horsepower | weight | acceleration | model_year | origin | name |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 18.0 | 8 | 307.0 | 130.0 | 3504 | 12.0 | 70 | usa | chevrolet chevelle malibu |
| 1 | 15.0 | 8 | 350.0 | 165.0 | 3693 | 11.5 | 70 | usa | buick skylark 320 |
| 2 | 18.0 | 8 | 318.0 | 150.0 | 3436 | 11.0 | 70 | usa | plymouth satellite |
| 3 | 16.0 | 8 | 304.0 | 150.0 | 3433 | 12.0 | 70 | usa | amc rebel sst |
| 4 | 17.0 | 8 | 302.0 | 140.0 | 3449 | 10.5 | 70 | usa | ford torino |

```python
df.groupby(by="origin")
# <pandas.core.groupby.generic.DataFrameGroupBy object at 0x0000017AD720BE88>
df.groupby(by="origin").size()
# origin
# europe     70
# japan      79
# usa       249
# dtype: int64
df['origin'].value_counts()
# usa       249
# japan      79
# europe     70
# Name: origin, dtype: int64
```
```python
df.groupby(by="origin").max()
```

| origin | mpg | cylinders | displacement | horsepower | weight | acceleration | model_year | name |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| europe | 44.3 | 6 | 183.0 | 133.0 | 3820 | 24.8 | 82 | vw rabbit custom |
| japan | 46.6 | 6 | 168.0 | 132.0 | 2930 | 21.0 | 82 | toyouta corona mark ii (sw) |
| usa | 39.0 | 8 | 455.0 | 230.0 | 5140 | 22.2 | 82 | pontiac ventura sj |

```python
df.groupby(by="origin").min()
```

| origin | mpg | cylinders | displacement | horsepower | weight | acceleration | model_year | name |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| europe | 16.2 | 4 | 68.0 | 46.0 | 1825 | 12.2 | 70 | audi 100 ls |
| japan | 18.0 | 3 | 70.0 | 52.0 | 1613 | 11.4 | 70 | datsun 1200 |
| usa | 9.0 | 4 | 85.0 | 52.0 | 1800 | 8.0 | 70 | amc ambassador brougham |

```python
df.groupby(by="origin")['weight'].mean()
# origin
# europe    2423.300000
# japan     2221.227848
# usa       3361.931727
# Name: weight, dtype: float64
df.groupby(by="origin")['weight'].median()
# origin
# europe    2240
# japan     2155
# usa       3365
# Name: weight, dtype: int64
df.groupby?  # view the docstring
```
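The cheat sheet's `agg(function)` entry is not demonstrated above; here is a minimal sketch on a small made-up frame, using a column-to-function mapping:

```python
import pandas as pd

df_toy = pd.DataFrame({
    "origin": ["usa", "usa", "japan", "japan", "europe"],
    "mpg": [18.0, 15.0, 31.0, 33.0, 26.0],
    "weight": [3504, 3693, 1773, 1795, 1835],
})
# agg accepts a dict mapping each column to its aggregation
out = df_toy.groupby("origin").agg({"mpg": "mean", "weight": "max"})
```

This yields one row per group with `mpg` averaged and `weight` maximized, which generalizes the single-function calls like `.mean()` shown above.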
```python
df.groupby(['origin', 'model_year'])['cylinders'].mean()
```

```
origin  model_year
europe  70            4.000000
        71            4.000000
        72            4.000000
        73            4.000000
        74            4.000000
        75            4.000000
        76            4.250000
        77            4.000000
        78            4.833333
        79            4.250000
        80            4.111111
        81            4.500000
        82            4.000000
japan   70            4.000000
        71            4.000000
        72            3.800000
        73            4.250000
        74            4.000000
        75            4.000000
        76            4.500000
        77            4.166667
        78            4.000000
        79            4.000000
        80            4.076923
        81            4.333333
        82            4.000000
usa     70            7.636364
        71            6.200000
        72            6.888889
        73            7.241379
        74            6.266667
        75            6.400000
        76            6.363636
        77            6.222222
        78            6.000000
        79            6.260870
        80            4.285714
        81            4.923077
        82            4.300000
Name: cylinders, dtype: float64
```
```python
pd.DataFrame(df.groupby(['origin', 'model_year'])['cylinders'].mean())
```

The result contains the same values as the Series above, wrapped in a DataFrame with an (origin, model_year) MultiIndex and a single `cylinders` column.

```python
df.pivot_table?
# Signature:
# df.pivot_table(
#     values=None,
#     index=None,
#     columns=None,
#     aggfunc='mean',
#     fill_value=None,
#     margins=False,
#     dropna=True,
#     margins_name='All',
#     observed=False,
# ) -> 'DataFrame'
```
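The signature above is not exercised in this note; a quick sketch on a made-up frame, relying on the default `aggfunc='mean'`:

```python
import pandas as pd

toy = pd.DataFrame({
    "origin": ["usa", "usa", "japan", "japan"],
    "model_year": [70, 71, 70, 71],
    "mpg": [18.0, 20.0, 30.0, 32.0],
})
# one cell per (origin, model_year) pair, aggregated with the mean
pt = toy.pivot_table(values="mpg", index="origin", columns="model_year", aggfunc="mean")
```

Unlike `df.pivot`, `pivot_table` tolerates duplicate (index, columns) pairs by aggregating them, which is why it takes an `aggfunc`.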
```python
df2 = pd.DataFrame(
    [[4, 7, 10],
     [5, 8, 11],
     [6, 9, 12]],
    index=[1, 2, 3],
    columns=['a', 'b', 'c'])
df2
```

|   | a | b | c |
| --- | --- | --- | --- |
| 1 | 4 | 7 | 10 |
| 2 | 5 | 8 | 11 |
| 3 | 6 | 9 | 12 |

```python
df2.shift(1)
```

|   | a | b | c |
| --- | --- | --- | --- |
| 1 | NaN | NaN | NaN |
| 2 | 4.0 | 7.0 | 10.0 |
| 3 | 5.0 | 8.0 | 11.0 |

```python
df2.shift(-1)
```

|   | a | b | c |
| --- | --- | --- | --- |
| 1 | 5.0 | 8.0 | 11.0 |
| 2 | 6.0 | 9.0 | 12.0 |
| 3 | NaN | NaN | NaN |

```python
df2['a'].shift(2)
# 1    NaN
# 2    NaN
# 3    4.0
# Name: a, dtype: float64
df2['b'].shift(-1)
# 1    8.0
# 2    9.0
# 3    NaN
# Name: b, dtype: float64
```
```python
df2['b'] = df2['b'].shift(-1)
df2
```

|   | a | b | c |
| --- | --- | --- | --- |
| 1 | 4 | 9.0 | 10 |
| 2 | 5 | NaN | 11 |
| 3 | 6 | NaN | 12 |

df['model_year']
0      70
1      70
2      70
3      70
4      70
       ..
393    82
394    82
395    82
396    82
397    82
Name: model_year, Length: 398, dtype: int64
df['model_year'].rank(method='max')
# ํ•ด๋‹น ๊ฐ’์ด ํฐ ์ˆœ์œผ๋กœ ๋ช‡๋ฒˆ์งธ ์ˆœ์„œ์ธ์ง€
0       29.0
1       29.0
2       29.0
3       29.0
4       29.0
       ...  
393    398.0
394    398.0
395    398.0
396    398.0
397    398.0
Name: model_year, Length: 398, dtype: float64
df['model_year'].rank(method='min')
# ํ•ด๋‹น ๊ฐ’์ด ์ž‘์€ ์ˆœ์œผ๋กœ ๋ช‡๋ฒˆ์งธ ์ˆœ์„œ์ธ์ง€
0        1.0
1        1.0
2        1.0
3        1.0
4        1.0
       ...  
393    368.0
394    368.0
395    368.0
396    368.0
397    368.0
Name: model_year, Length: 398, dtype: float64
df['model_year'].rank(method='min').value_counts()
86.0     40
245.0    36
183.0    34
368.0    31
153.0    30
339.0    29
281.0    29
310.0    29
1.0      29
217.0    28
58.0     28
30.0     28
126.0    27
Name: model_year, dtype: int64
df['model_year'].rank(pct=True).head()
# pct stands for percentage: ranks are returned as a fraction of the total
0    0.037688
1    0.037688
2    0.037688
3    0.037688
4    0.037688
Name: model_year, dtype: float64
df['model_year'].rank(method='first').head()
# tied values are ranked in the order they appear
0    1.0
1    2.0
2    3.0
3    4.0
4    5.0
Name: model_year, dtype: float64
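The tie-breaking methods are easiest to see side by side; a minimal sketch on a short series with a duplicated value:

```python
import pandas as pd

# a small series with a tie (two 7s), to compare the tie-breaking methods
s = pd.Series([7, 3, 7, 1])

# ascending ranks: 1 -> 1st, 3 -> 2nd, and the tied 7s share the 3rd/4th slots
for method in ['average', 'min', 'max', 'first', 'dense']:
    print(method, s.rank(method=method).tolist())
```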
df.rank?
df2

   a    b   c
1  4  9.0  10
2  5  NaN  11
3  6  NaN  12

df2.cumsum()
# cumulative sums down each column

    a    b   c
1   4  9.0  10
2   9  NaN  21
3  15  NaN  33

df2['b'] = [9.0, 13.0, 11.5]
df2.cummax()
# b's third value, 11.5, is smaller than 13, so the running maximum stays at 13.0

   a     b   c
1  4   9.0  10
2  5  13.0  11
3  6  13.0  12

df2.cummin()

   a    b   c
1  4  9.0  10
2  4  9.0  10
3  4  9.0  10

df2.cumprod()
# cumulative product

     a       b     c
1    4     9.0    10
2   20   117.0   110
3  120  1345.5  1320
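One detail worth noting about all four cumulative operations: a missing value stays missing in the output, but accumulation continues past it. A quick sketch on a tiny series:

```python
import pandas as pd

# how the cumulative operations treat a missing value:
# the NaN position stays NaN, but accumulation continues past it
s = pd.Series([2.0, None, 3.0])

print(s.cumsum().tolist())   # [2.0, nan, 5.0]
print(s.cumprod().tolist())  # [2.0, nan, 6.0]
print(s.cummax().tolist())   # [2.0, nan, 3.0]
print(s.cummin().tolist())   # [2.0, nan, 2.0]
```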

๋งˆํฌ๋‹ค์šด ์ˆ˜์‹์ž…๋ ฅ

์ด๋ฒˆ์ฃผ์ฐจ ๊ฐ•์˜๊ฐ€ ์ˆ˜์‹์„ ์ž…๋ ฅํ•  ์ผ์ด ๋งŽ์•„ ์•„๋ž˜ ์‚ฌ์ดํŠธ๋ฅผ ์†Œ๊ฐœํ•œ๋‹ค. (๋ฌผ๋ก  ๋‚˜์—๊ฒŒ)

[Programmers AI School Cohort 1] Week 5, DAY 1

Machine Learning Basics - Introduction

What Is Machine Learning?

  • Machine learning: the study of computer algorithms that improve automatically through experience.

  • Training data: input vectors and their target values.

  • The output of a machine learning algorithm is a function that predicts the target values.

  • In digit recognition, the input vector is a handwritten image and the target value is the predicted digit from 0 to 9; more precisely, one of 10 classes (here the class numbers start from 1).

ํ•ต์‹ฌ๊ฐœ๋…s

  • ํ•™์Šต๋‹จ๊ณ„: ํ•จ์ˆ˜ y(x)๋ฅผ ํ•™์Šต๋ฐ์ดํ„ฐ์— ๊ธฐ๋ฐ˜ํ•ด ๊ฒฐ์ •ํ•˜๋Š” ๋‹จ๊ณ„

  • ์‹œํ—˜์…‹ : ๋ชจ๋ธ์„ ํ‰๊ฐ€ํ•˜๊ธฐ ์œ„ํ•ด ์‚ฌ์šฉํ•˜๋Š” ์ƒˆ๋กœ์šด ๋ฐ์ดํ„ฐ

  • ์ผ๋ฐ˜ํ™” : ๋ชจ๋ธ์—์„œ ํ•™์Šต์— ์‚ฌ์šฉ๋œ ๋ฐ์ดํ„ฐ๊ฐ€ ์•„๋‹Œ ์ด์ „์— ์ ‘ํ•˜์ง€ ๋ชปํ•œ ๋กœ์šด ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•ด ์˜ฌ๋ฐ”๋ฅธ ์˜ˆ์ธก์„ ์ˆ˜ํ–‰ํ•˜๋Š” ์—ญ๋Ÿ‰

  • ์ง€๋„ํ•™์Šต : ๋ถ„๋ฅ˜์™€ ํšŒ๊ท€

  • ๋น„์ง€๋„ํ•™์Šต : ๊ตฐ์ง‘

๋‹คํ•ญ์‹ ๊ณก์„  ๊ทผ์‚ฌ

  • Polynomial Curve Fitting

  • ํšŒ๊ท€ ๋ฌธ์ œ์— ํ•ด๋‹นํ•œ๋‹ค.

  • ์ ๋“ค์„ ์ง€๋‚˜๋Š”(์ง€๋‚˜์ง€ ๋ชปํ•˜๋”๋ผ๋„ ์ตœ๋Œ€ํ•œ ๊ฐ€๊น๊ฒŒ) ํ•จ์ˆ˜ ๊ตฌํ•˜๊ธฐ

  • ํ•™์Šต๋ฐ์ดํ„ฐ : ์ž…๋ ฅ๋ฒกํ„ฐ์™€ ๋ชฉํ‘œ๊ฐ’

  • ๋ชฉํ‘œ : ์ƒˆ๋กœ์šด ์ž…๋ ฅ๋ฒกํ„ฐ๊ฐ€ ์ฃผ์–ด์งˆ ๋•Œ ๋ชฉํ‘œ๊ฐ’์„ ์˜ˆ์ธกํ•˜๋Š” ๊ฒƒ

  • ํ™•๋ฅ ์ด๋ก  : ์˜ˆ์ธก๊ฐ’์˜ ๋ถˆํ™•์‹ค์„ฑ์„ ์ •๋Ÿ‰ํ™”์‹œ์ผœ ํ‘œํ˜„ํ•  ์ˆ˜ ์žˆ๋Š” ์ˆ˜ํ•™์  ํ”„๋ ˆ์ž„์›Œํฌ ์ œ๊ณต

  • ๊ฒฐ์ •์ด๋ก  : ํ™•๋ฅ ์  ํ‘œํ˜„์„ ๋ฐ”ํƒ•์œผ๋กœ ์ตœ์ ์˜ ์˜ˆ์ธก์„ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ๋Š” ๋ฐฉ๋ฒ•๋ก  ์ œ๊ณต

๊ณผ์†Œ์ ํ•ฉ๊ณผ ๊ณผ๋Œ€์ ํ•ฉ

์‹ค์ œ ํ•™์Šต ๋ฐ์ดํ„ฐ์— ํฌ๊ธฐ์— ๋น„ํ•ด ๋„ˆ๋ฌด ๊ณ ์ฐจ์› ํ•จ์ˆ˜ ๋˜๋Š” ๋„ˆ๋ฌด ์ €์ฐจ์› ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ์‹ค์ œ ์„ฑ๋Šฅ์—์„œ ์—๋Ÿฌ๊ฐ€ ๋งŽ์ด ๋ฐœ์ƒํ•œ๋‹ค.

ERMS=2E(wโˆ—)/N E_{RMS} = \sqrt {2E(w^*)/N} ERMSโ€‹=2E(wโˆ—)/Nโ€‹ : Root Mean Square

๋˜ํ•œ, ๊ณ ์ฐจ์› ํ•จ์ˆ˜๋”๋ผ๋„ ๋งŽ์€ ์–‘์— ๋ฐ์ดํ„ฐ๊ฐ€ ์กด์žฌํ•œ๋‹ค๋ฉด ๊ณผ๋Œ€์ ํ•ฉ์ด ๋ฐœ์ƒํ•  ๊ฐ€๋Šฅ์„ฑ์ด ์ ์–ด์ง„๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์‹ค์ œ๋กœ ๋จธ์‹ ๋Ÿฌ๋‹์˜ ๋ฐ์ดํ„ฐ ์ˆ˜๋Š” ๊ต‰์žฅํžˆ ๋งŽ๋‹ค๋Š” ์ .

Regularization

A technique for keeping the parameter values from growing too large.

If applied too strongly, it can unexpectedly cause over- or underfitting.

Machine Learning Basics - Probability Theory 1

Random Variables

A random variable X is a function that maps each element e of the sample space S to a real value X(e) = x.

  • Uppercase X, Y, ... : random variables

  • Lowercase x, y, ... : values a random variable can take

  • The probability P is a function that maps subsets of S to real values.

  • ex) S = {HH, HT, TH, TT}; tossing a coin twice

  • X(HH) = 2, X(HT) = 1, X(TH) = 1, X(TT) = 0; the number of heads that appear

  • $P[X = 1] = P[\{HT, TH\}] = \frac{2}{4} = \frac{1}{2}$
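The coin example can be checked by enumerating the sample space; a small sketch treating X as a plain Python function over outcomes:

```python
from itertools import product

# sample space for tossing a coin twice
S = [''.join(p) for p in product('HT', repeat=2)]  # ['HH', 'HT', 'TH', 'TT']

def X(e):
    """The random variable X: number of heads in outcome e."""
    return e.count('H')

# outcomes are equally likely, so P[X = 1] is a simple count
p = sum(1 for e in S if X(e) == 1) / len(S)
print(p)  # 0.5
```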

์—ฐ์† ํ™•๋ฅ  ๋ณ€์ˆ˜(Continuous Random Variables)

๋ˆ„์ ๋ถ„ํฌํ•จ์ˆ˜ F(x) = P[X โˆˆ \inโˆˆ(-โˆž \inftyโˆž, x)] ์ผ ๋•Œ, F(x)๋ฅผ ๊ฐ€์ง„ ํ™•๋ฅ  ๋ณ€์ˆ˜ X์— ๋Œ€ํ•ด์„œ ๋‹ค์Œ์„ ๋งŒ์กฑํ•˜๋Š” ํ•จ์ˆ˜ f(x)๊ฐ€ ์กด์žฌํ•œ๋‹ค๋ฉด X๋ฅผ ์—ฐ์† ํ™•๋ฅ  ๋ณ€์ˆ˜๋ผ๊ณ  ๋ถ€๋ฅด๊ณ  f(x)๋ฅผ X์˜ ํ™•๋ฅ  ๋ฐ€๋„ ํ•จ์ˆ˜(probability density function)๋ผ๊ณ  ๋ถ€๋ฅธ๋‹ค.

  • ํ™•๋ฅ  ๋ณ€์ˆ˜๋ฅผ ๋ช…ํ™•ํžˆ ํ•˜๊ธฐ ์œ„ํ•ด F(x), f(x)๋กœ ์“ฐ๊ธฐ๋กœ ํ•˜๋ฉฐ ๋ฐ€๋„ ํ•จ์ˆ˜์˜ ๊ฒฝ์šฐ์—๋Š” p(x)๋ฅผ ์‚ฌ์šฉํ•˜๊ธฐ๋„ ํ•œ๋‹ค.

ํ™•๋ฅ  ๋ณ€์ˆ˜์˜ ์„ฑ์งˆ

  • ๋ง์…ˆ ๋ฒ•์น™

  • ๊ณฑ์…ˆ ๋ฒ•์น™

  • ๋ฒ ์ด์ฆˆ ํ™•๋ฅ  (posterior ์‚ฌํ›„ํ™•๋ฅ , likelihood ๊ฐ€๋Šฅ์„ฑ, prior ์‚ฌ์ „ํ™•๋ฅ , marginal normalization ๊ฒฝ๊ณ„ํ™•๋ฅ )

ํ™•๋ฅ ๋ณ€์ˆ˜์˜ ํ•จ์ˆ˜

ํ™•๋ฅ ๋ณ€์ˆ˜ X์˜ ํ•จ์ˆ˜ Y = f(X)๋„ ํ™•๋ฅ ๋ณ€์ˆ˜์ด๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ํ™•๋ฅ  ๋ณ€์ˆ˜ X๊ฐ€ ์ฃผ(week)์˜ ์ˆ˜๋กœ ํ‘œํ˜„๋˜์—ˆ๋‹ค๊ณ  ํ•˜๋ฉด ์ผ(day)์˜ ์ˆ˜๋กœ ํ‘œํ˜„๋œ ์ƒˆ๋กœ์šด ํ™•๋ฅ ๋ณ€์ˆ˜๋ฅผ ์ •์˜ํ•  ์ˆ˜ ์žˆ๋‹ค.

  • Y = 7X

  • P[14 <= Y <= 21] = P[2 <= X <= 3]

  • py(y)=px(x)โˆฃdxdyโˆฃ p_y(y) = p_x(x)|\frac {dx} {dy} | pyโ€‹(y)=pxโ€‹(x)โˆฃdydxโ€‹โˆฃ
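The weeks-to-days example can be sanity-checked by Monte Carlo; a sketch assuming, purely for illustration, that X is uniform on [0, 4]:

```python
import numpy as np

rng = np.random.default_rng(0)

# X: a duration in weeks; uniform on [0, 4] is an arbitrary choice for illustration
x = rng.uniform(0, 4, size=100_000)
y = 7 * x  # Y = 7X: the same duration measured in days

# the two event probabilities must agree: P[14 <= Y <= 21] = P[2 <= X <= 3]
p_y = np.mean((y >= 14) & (y <= 21))
p_x = np.mean((x >= 2) & (x <= 3))
print(p_y, p_x)

# in density terms: p_y(y) = p_x(y/7) * |dx/dy| = (1/4) * (1/7) on [0, 28]
```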

k์ฐจ์›์˜ ํ™•๋ฅ ๋ณ€์ˆ˜ ๋ฒกํ„ฐ x = (x1, ... , xk)๊ฐ€ ์ฃผ์–ด์งˆ ๋•Œ, k๊ฐœ์˜ x์— ๊ด€ํ•œ ํ•จ์ˆ˜๋“ค์€ ์ƒˆ๋กœ์šด ํ™•๋ฅ ๋ณ€์ˆ˜๋ฒกํ„ฐ y = (y1, ... yk)๋ฅผ ์ •์˜ํ•œ๋‹ค. ๊ฐ„๋žตํ•˜๊ฒŒ y = (x)๋กœ ๋‚˜ํƒ€๋‚ผ ์ˆ˜ ์žˆ๋‹ค. ๋งŒ์•ฝ y = g(x)๊ฐ€ ์ผ๋Œ€์ผ ๋ณ€ํ™˜์ธ ๊ฒฝ์šฐ(x = w(y)๋กœ ์œ ์ผํ•œ ํ•ด๋ฅผ ๊ฐ€์งˆ ๋•Œ), y์˜ ๊ฒฐํ•ฉํ™•๋ฅ ๋ฐ€๋„ํ•จ์ˆ˜๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™๋‹ค.

  • py(y1,...,yk)=px(x1,...,xk)โˆฃJโˆฃ p_y (y_1, ... , y_k) = p_x(x_1, ... , x_k)|J|pyโ€‹(y1โ€‹,...,ykโ€‹)=pxโ€‹(x1โ€‹,...,xkโ€‹)โˆฃJโˆฃ

  • where J = dx1dy1dx1dy2...dx1dykdx2dy1............dxkdy1......dxkdyk \begin{matrix} \frac {dx_1} {dy_1} & \frac {dx_1} {dy_2} & ... & \frac {dx_1} {dy_k} \\ \frac {dx_2} {dy_1} & ... & ... & ... \\ ... \\ \frac {dx_k} {dy_1} & ... & ... & \frac {dx_k} {dy_k} \end{matrix}dy1โ€‹dx1โ€‹โ€‹dy1โ€‹dx2โ€‹โ€‹...dy1โ€‹dxkโ€‹โ€‹โ€‹dy2โ€‹dx1โ€‹โ€‹......โ€‹.........โ€‹dykโ€‹dx1โ€‹โ€‹...dykโ€‹dxkโ€‹โ€‹โ€‹

Example

When $p_{x_1,x_2}(x_1, x_2) = e^{-(x_1 + x_2)}$, $x_1 > 0$, $x_2 > 0$, what is the pdf of y defined by $y_1 = x_1$, $y_2 = x_1 + x_2$?

Here $x_1 = y_1$ and $x_2 = y_2 - y_1$, so $|J| = 1$ and

$f_{y_1,y_2}(y_1, y_2) = f_{x_1,x_2}(x_1, x_2)\,|J| = f_{x_1,x_2}(y_1, y_2 - y_1) = e^{-(y_1 + (y_2 - y_1))} = e^{-y_2}$

$0 < y_1 < y_2 < \infty$

$f_{y_1}(y_1) = \int_{y_1}^{\infty} e^{-y_2}\,dy_2 = -e^{-y_2}\Big|_{y_1}^{\infty} = e^{-y_1}$

Using the inverse CDF technique, we can place points uniformly at random inside a circle of radius r.
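A sketch of that disk-sampling trick: for a uniform point in a disk of radius r, the radial CDF is F(ρ) = ρ²/r², so the inverse-CDF sample is ρ = r·√u with u uniform on [0, 1]:

```python
import math
import random

random.seed(0)

def random_point_in_disk(r):
    """Uniform point in a disk of radius r via the inverse CDF of the radius.

    The radial CDF is F(rho) = rho**2 / r**2, so F^{-1}(u) = r * sqrt(u).
    """
    rho = r * math.sqrt(random.random())
    theta = random.uniform(0, 2 * math.pi)
    return rho * math.cos(theta), rho * math.sin(theta)

pts = [random_point_in_disk(1.0) for _ in range(10_000)]

# about a quarter of the points fall within radius 1/2 (the area ratio),
# which is what uniformity over the disk predicts
inner = sum(1 for px, py in pts if px * px + py * py <= 0.25)
print(inner / len(pts))
```

Sampling ρ uniformly instead of via √u would cluster points near the center; the square root is exactly the inverse-CDF correction.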

Expectation

The average value of a function f(x) under a probability distribution p(x).

Variance and Covariance

Variance of f(x): how widely the values of f(x) are spread around their expectation.

ํ™•๋ฅ ์„ ํ•ด์„ํ•˜๋Š” ๋‘ ๊ฐ€์ง€ ๋‹ค๋ฅธ๊ด€์  : ๋นˆ๋„์ฃผ์˜ ๋Œ€ ๋ฒ ์ด์ง€์•ˆ

๋นˆ๋„์ฃผ์˜ : ๋ฐ˜๋ณต๊ฐ€๋Šฅํ•œ ์‚ฌ๊ฑด๋“ค์˜ ๋นˆ๋„์ˆ˜์— ๊ธฐ๋ฐ˜

๋ฒ ์ด์ง€์•ˆ : ๋ถˆํ™•์‹ค์„ฑ์„ ์ •๋Ÿ‰์ ์œผ๋กœ ํ‘œํ˜„

๋ถ๊ทน ์–ผ์Œ์ด ์ด๋ฒˆ ์„ธ๊ธฐ๋ง๊นŒ์ง€ ๋…น์•„ ์—†์–ด์งˆ ํ™•๋ฅ ์€ ๋ฐ˜๋ณต๊ฐ€๋Šฅํ•˜์ง€ ์•Š์€ ์‚ฌ๊ฑด์ž„

๋นˆ๋„์ฃผ์˜๋Š” ์ถ”์ •์‚ฌ๋ฅผ ์‚ฌ์šฉํ•ด์„œ ํ™•๋ฅ ์„ ๊ตฌํ•˜๋ฉฐ ๊ตฌํ•ด์ง„ ํŒŒ๋ผ๋ฏธํ„ฐ์˜ ๋ถˆํ™•์‹ค์„ฑ์€ ๋ถ€ํŠธ์ŠคํŠธ๋žฉ ๋ฐฉ๋ฒ•์„ ์ด์šฉํ•ด ๊ตฌํ•œ๋‹ค.

๋ฒ ์ด์ง€์•ˆ์€ ์‚ฌ์ „ํ™•๋ฅ ์„ ๋ชจ๋ธ์— ํฌํ•จ์‹œํ‚ฌ ์ˆ˜ ์žˆ๋Š” ์žฅ์ ์ด ์žˆ๋‹ค.

์ •๊ทœ๋ถ„ํฌ

๋‹จ์ผ๋ณ€์ˆ˜ x๋ฅผ ์œ„ํ•œ ๊ฐ€์šฐ์‹œ์•ˆ ๋ถ„ํฌ


https://math.meta.stackexchange.com/questions/5020/mathjax-basic-tutorial-and-quick-reference