Bert Huang
  • 93 videos
  • 1,008,993 views

Videos

A bird's eye view of neural networks
733 views • 2 years ago
This video series will be based on some presentations I've created for a class I'm teaching to policy students at Tufts on AI. The goal of the presentations is to describe AI concepts in sufficient detail so that students with backgrounds outside computing can understand them enough to make informed decisions and assessments about the technology. I figured these might be valuable to others, so ...
Experimenting with JAX | AI Professor Improvises Chess Programming #16
769 views • 3 years ago
I try to convert my current learning code to use JAX. I get it to run and compute a gradient automatically and correctly, but it’s very slow. It might be taking a step toward being able to do neural network learning, but certainly not yet. Broadcasted live on Twitch Watch live at www.twitch.tv/profberthuang
Debugging Because Nothing is Working | AI Professor Improvises Chess Programming #15
910 views • 3 years ago
I try to find why my learning bot isn't learning. Almost everything I test works, except possibly one critical error in my epsilon-greedy logic. I add basic PGN output for learning so we can examine games from training. Watch live at www.twitch.tv/profberthuang
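For context, the epsilon-greedy logic being debugged above usually takes the shape below (a minimal hypothetical sketch, not the stream's actual code); a classic bug is flipping the comparison so the bot explores when it should exploit:

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """Pick a random action with probability epsilon, else the best one.

    Inverting the `<` comparison is the classic bug: the agent would
    then explore with probability 1 - epsilon instead of epsilon.
    """
    if rng.random() < epsilon:  # explore: uniformly random action
        return rng.randrange(len(q_values))
    # exploit: index of the highest-valued action
    return max(range(len(q_values)), key=lambda i: q_values[i])
```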
Attempting Experience Replay | AI Professor Improvises Chess Programming #14
292 views • 3 years ago
I attempt to code experience replay and set up self-competition to allow the engine to train against increasingly strong opponents. Broadcasted live on Twitch Watch live at www.twitch.tv/profberthuang
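For context, an experience replay buffer along the lines attempted here can be sketched as follows (hypothetical minimal code, not the stream's implementation); sampling past transitions uniformly at random breaks the correlation between consecutive positions in a self-play game:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of (state, action, reward, next_state) transitions."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest transitions fall off

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # uniform sampling decorrelates consecutive game positions
        return random.sample(list(self.buffer), batch_size)

    def __len__(self):
        return len(self.buffer)
```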
Random Projections | AI Professor Improvises Chess Programming #13
344 views • 3 years ago
I try to give the bot the ability to reason about relationships among pieces (without making a true multi-layer neural network) by adding random projection features. The bot starts training, but I'll need to run it for a long while before we see any progress. Watch live at www.twitch.tv/profberthuang
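The random projection idea described here can be sketched as follows (a hypothetical minimal version, assuming NumPy): a frozen random matrix followed by a nonlinearity gives a linear evaluator fixed features that mix pieces, without training a true multi-layer network:

```python
import numpy as np

def random_projection_features(x, n_features, seed=0):
    """Expand a raw feature vector with fixed random projections.

    W is random but frozen (never trained); only the linear weights on
    the expanded features need to be learned.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_features, len(x)))
    hidden = np.maximum(W @ x, 0.0)     # ReLU of random projections
    return np.concatenate([x, hidden])  # keep the raw features too
```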
Actual Reinforcement Learning | AI Professor Improvises Chess Programming #12
1.9K views • 3 years ago
I try to avoid saying Q-learning because Q means something else now. But I implement Q-learning, and it seems to work. I add some more features to the board representation. I also correct my rating on my personal lichess account after accidentally playing my bot in rated games last stream. Watch live at www.twitch.tv/profberthuang
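For reference, the Q-learning update named here has the standard form below in the tabular case (a minimal sketch with hypothetical names; the stream learns over board features rather than a table, but the update rule is the same idea):

```python
from collections import defaultdict

def q_learning_update(Q, state, action, reward, next_state, next_actions,
                      alpha=0.1, gamma=0.99):
    """Move Q[state, action] toward the bootstrapped target.

    The target uses the best action available in the next state
    (the max over next_actions), not the action actually played.
    """
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    target = reward + gamma * best_next
    Q[(state, action)] += alpha * (target - Q[(state, action)])
    return Q[(state, action)]
```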
Fixing Bugs in Fake Reinforcement Learning | AI Professor Improvises Chess Programming #11
194 views • 3 years ago
I fix a crucial sign error in my fake reinforcement learning setup. It sort of starts to work. At the end, everything is set up for actual reinforcement learning in the next session. Broadcasted live on Twitch Watch live at www.twitch.tv/profberthuang
Setting up for Reinforcement Learning | AI Professor Improvises Chess Programming #10
647 views • 3 years ago
I set up a basic scoring agent that will be able to do reinforcement learning. Initially, it will just try to learn material values. I try to make it learn at the end of the session, but it's full of bugs. Watch live at www.twitch.tv/profberthuang
Fixing (?) Quiescence Search | AI Professor Improvises Chess Programming #9
542 views • 3 years ago
I tried to fix some issues with quiescence search, but I'm not sure if I was successful. Got into an interesting discussion with a viewer about recent advances in chess engines. Watch live at www.twitch.tv/profberthuang
Refactoring & Trying to Spell Quiescence Search | AI Professor Improvises Some Chess Programming #8
210 views • 3 years ago
I refactor the code to accommodate multiple forms of search. I use the refactoring to build quiescence search, which seeks a "quiet position" at the end of the normal search. Later I watch a game against Stockfish, but I didn't realize that I had a second bot running in the background that was actually controlling things. github.com/berty38/lichess-bot Watch live at www.twitch.tv/profberthuang
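The quiescence search described here ("seek a quiet position at the end of the normal search") typically looks like the sketch below (hypothetical helper callables, not the repository's code): only captures are searched, and the static evaluation acts as a stand-pat lower bound because a side may always decline to capture:

```python
def quiescence(board, alpha, beta, evaluate, capture_moves, make, unmake):
    """Search only 'noisy' moves (captures) until the position is quiet."""
    stand_pat = evaluate(board)
    if stand_pat >= beta:        # already too good: fail high
        return beta
    alpha = max(alpha, stand_pat)
    for move in capture_moves(board):
        make(board, move)
        score = -quiescence(board, -beta, -alpha,
                            evaluate, capture_moves, make, unmake)
        unmake(board, move)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha
```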
Time Management Prototype | AI Professor Improvises Some Chess Programming #7
216 views • 3 years ago
Adding some time management to my bot. I was rushed, so I'm not sure if my implementation was correct. I also briefly cover some speedups I did between streams. Watch live at www.twitch.tv/profberthuang
Introducing Bugs and Not Fixing Them | AI Professor Improvises Some Chess Programming #6
174 views • 3 years ago
I tried to improve the move sorting, with possibly good results, but then I introduced a bug that caused bad play. (I think I fixed the bug a few minutes after the stream.) Watch live at www.twitch.tv/profberthuang
Attempting Alpha-Beta Pruning | AI Professor Improvises Some Chess Programming #5
808 views • 3 years ago
I try to convert my makeshift pruning algorithm to proper alpha-beta pruning. It looks correct but it doesn't lead to any obvious speedups. github.com/berty38/lichess-bot Watch live at www.twitch.tv/profberthuang
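For reference, "proper" alpha-beta pruning in negamax form looks roughly like this (a minimal sketch with hypothetical helpers); correct pruning never changes the move chosen, only the number of nodes visited, which is one reason speedups can be hard to observe on shallow searches:

```python
def negamax(board, depth, alpha, beta, evaluate, moves, make, unmake):
    """Negamax search with alpha-beta pruning.

    A child's score is negated and the (alpha, beta) window is negated
    and swapped, because the opponent is the maximizer one ply down.
    """
    if depth == 0:
        return evaluate(board)
    best = float('-inf')
    for move in moves(board):
        make(board, move)
        score = -negamax(board, depth - 1, -beta, -alpha,
                         evaluate, moves, make, unmake)
        unmake(board, move)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:  # cutoff: opponent would avoid this position
            break
    return best
```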
Pruning the Search Tree | AI Professor Improvises Some Chess Programming #4
344 views • 3 years ago
I try a couple of strategies for pruning the search tree. The results, as usual, are mixed: pruning shrinks the search tree a bit, but my caching technique did nothing but slow things down. More mysteries for next time. github.com/berty38/lichess-bot Watch live at www.twitch.tv/profberthuang
Making Moves on Lichess, but Now Slowly! | AI Professor Improvises Some Chess Programming #3
347 views • 3 years ago
Slight Improvements to Heuristic | AI Professor Improvises Some Chess Programming #2
393 views • 3 years ago
Trading a Queen for a Pawn | AI Professor Improvises Some Chess Programming #1
1.5K views • 3 years ago
What is soft or max about the softmax function?
985 views • 3 years ago
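As a companion to the title question: softmax is really a "soft" version of argmax, not of max. A minimal sketch, assuming only the standard library:

```python
import math

def softmax(xs):
    """Turn scores into a probability distribution favoring the largest.

    Subtracting the max first is a numerical-stability step: it keeps
    exp() from overflowing without changing the result.
    """
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]
```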
Avoid Numerical EXPLOSIONS with the Log Sum Exp Trick
2K views • 3 years ago
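The trick named in the title fits in a few lines (a minimal sketch, standard library only): naively, `exp(1000)` overflows a float, but shifting by the maximum first keeps every exponent non-positive:

```python
import math

def logsumexp(xs):
    """Compute log(sum(exp(x) for x in xs)) without overflow.

    Factor out the max m: log sum exp(x) = m + log sum exp(x - m),
    where every x - m <= 0, so each exp() stays in (0, 1].
    """
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))
```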
What Decision Makers Should Know About Machine Learning
463 views • 3 years ago
Fairness in Machine Learning
2.4K views • 4 years ago
Reduced-Bias Co-Trained Ensembles for Weakly Supervised Cyberbullying Detection
264 views • 4 years ago
Relating the Binary and Multi-class Perceptron (handwritten notes)
2.3K views • 4 years ago
Perceptrons are Weird! (handwritten notes)
406 views • 5 years ago
Lagrangian relaxation (handwritten notes)
4.5K views • 5 years ago
Back Propagation
4.5K views • 5 years ago
The Worst Unboxing Video of NVIDIA Titan V
1.3K views • 5 years ago
monk parakeets of barcelona
1.8K views • 5 years ago
All Lagrangian Duals (of Minimizations) are Concave
777 views • 5 years ago

Комментарии (Comments)

  • @gunhatornie
    @gunhatornie 5 days ago

    remember when??

  • @Ama-be
    @Ama-be 16 days ago

    Thanks so much for this video! It was very insightful for me👍🏾

  • @nehalkalita
    @nehalkalita 29 days ago

    11:49 Can anyone tell me why phi(A, C) = P(C|A) and not P(A)P(C|A)?

  • @thegrey53
    @thegrey53 1 month ago

    Heat Death

  • @imtryinghere1
    @imtryinghere1 2 months ago

    Never take this down Bert. I send it to everyone asking me what is a good overview of the main models for classic ML, even if it's 9 years old.

  • @reanwithkimleng
    @reanwithkimleng 2 months ago

    ❤❤❤❤❤

  • @reanwithkimleng
    @reanwithkimleng 2 months ago

    Thanks for helping me❤❤❤

  • @saisheinhtet2446
    @saisheinhtet2446 3 months ago

    awesome

  • @HoangTran-bu2ej
    @HoangTran-bu2ej 3 months ago

    what is 'phi' sir

  • @Ahmed.r.a
    @Ahmed.r.a 4 months ago

    Thank you for this brilliant explanation. I wish there were practice questions with solutions.

  • @morzen5894
    @morzen5894 5 months ago

    Very unclear and confusing. Using Venn diagrams to represent some of the probabilities, and giving a detailed example of the math with numbers to show how it runs, would be of great help for people discovering the subject. I'm fairly sure this is a great video for people who already understand the subject or have some grasp of it, but for a newcomer it is very confusing. Not to mention the jump in difficulty between the first part, which is quite easy to understand (although Venn diagrams would help), and the second part, which looks like Elvish.

  • @newbie8051
    @newbie8051 6 months ago

    Very simple explanation, thanks!

  • @zoltankurti
    @zoltankurti 6 months ago

    The bot sacrificed the bishop because of the depth-3 search. It saw bishop to b5, pawn takes bishop, and then queen takes rook. It didn't see that you can recapture the rook; that's depth 4. It thought it could take a rook for a bishop and a pawn.

  • @cosmopaul8773
    @cosmopaul8773 6 months ago

    Thanks for the great video! It helped me a lot in understanding this stuff for my uni course :)

  • @user-sg2ko8ty3b
    @user-sg2ko8ty3b 6 months ago

    you suck at following any sort of linear pace. Fuck youtube videos

    • @user-sg2ko8ty3b
      @user-sg2ko8ty3b 6 months ago

      oh let me explain how the product rule works and let me show you how the simple probability of a and b works, then immediately reference 7 more advanced terms without any introduction like we all know them perfectly. Cunt you just taught nothing congrats

  • @niyaali2379
    @niyaali2379 7 months ago

    great stuff!!

  • @RishiRajvid
    @RishiRajvid 7 months ago

    from Bihar (INDIA)

  • @UsefulMotivation365
    @UsefulMotivation365 7 months ago

    With all due respect to you, though not to the people who created this "variable elimination" thing: the name sounds like bullshit, because you are already computing all the possible states of the variable that you are going to eliminate, meaning you aren't really eliminating anything. Or am I wrong?

  • @dr.merlot1532
    @dr.merlot1532 8 months ago

    absolutely useless.

  • @deeplearn6584
    @deeplearn6584 8 months ago

    Thanks for the great explanation! I finally understood the implementation of HMMs.

  • @siomokof3425
    @siomokof3425 9 months ago

    6:52

  • @ytpah9823
    @ytpah9823 9 months ago

    🎯 Key Takeaways for quick navigation:
    00:00 📊 Probabilistic graphical models, such as Bayesian networks, represent probability distributions through graphs, enabling the visualization of conditional independence structures.
    01:34 🎲 Bayesian networks consist of nodes (variables) and directed edges representing conditional dependencies, allowing the representation of full joint probability distributions.
    03:21 🔀 Bayesian network structures reveal conditional independence relationships, simplifying the calculation of conditional probabilities and inference.
    09:10 🧠 Naive Bayes and logistic regression can be viewed as specific Bayesian networks, with the former relying on conditional independence assumptions.
    11:55 📜 Conditional independence is a key concept in Bayesian networks, defining that each variable is independent of its non-descendants given its parents.
    15:15 ⚖️ Inference in Bayesian networks often involves calculating marginal probabilities efficiently, which can be achieved through variable elimination, avoiding full enumeration.
    23:54 ⚙️ Variable elimination replaces summations over variables with functions, reducing the computational complexity of inference.
    24:05 🧮 Variable elimination computes marginal probabilities efficiently by eliminating variables one by one.
    28:07 ⏱️ In tree-structured Bayesian networks, variable elimination can achieve linear time complexity for exact inference.
    29:02 📊 Learning in a fully observed Bayesian network is straightforward, involving counting probabilities based on training data.

  • @JebbigerJohn
    @JebbigerJohn 10 months ago

    This is so good!!!

  • @EdupugantiAadityaaeb
    @EdupugantiAadityaaeb 11 months ago

    What is the name of the textbook?

  • @jub8891
    @jub8891 1 year ago

    Thank you so much, you explain the subject very well and have helped me to understand.

  • @theedmaster7748
    @theedmaster7748 1 year ago

    My professor for AI explained this so badly that I had no idea what was going on. Thanks for this in-depth and logical explanation of these topics

  • @seanxu6741
    @seanxu6741 1 year ago

    Fantastic video! Thanks a lot!

  • @hongkyulee9724
    @hongkyulee9724 1 year ago

    Thank you for the good explanation :D

  • @Iris-pb2er
    @Iris-pb2er 1 year ago

    the BEST lecture about fairML

  • @JazzLispAndBeer
    @JazzLispAndBeer 1 year ago

    Great for getting up to speed again!

  • @elita__
    @elita__ 1 year ago

    i dont want to learn technology but i want you so bad bro.

  • @tuongnguyen9391
    @tuongnguyen9391 1 year ago

    This is very nice. In LDPC decoders this numerical stuff happens very often; back then I used MATLAB vpa, but it was very slow. Thank you so much!

  • @stevendurr
    @stevendurr 1 year ago

    Fantastic video! Thanks so much for making this.

  • @alvynabranches1
    @alvynabranches1 1 year ago

    Next time prepare for streams before going live. Check "Machine Learning with Phil"

  • @alvynabranches1
    @alvynabranches1 1 year ago

    Can you please update the code on your GitHub repository? The version currently on GitHub is too old, and I get an error: `ValueError: shapes (395,) and (790,) not aligned: 395 (dim 0) != 790 (dim 0)`

  • @rajkiran1982
    @rajkiran1982 1 year ago

    Can you please let me know what software you use for writing? Is it the notes feature in Zoom? Seeing your face surely makes it better.

  • @anto1756
    @anto1756 1 year ago

    Is this playlist still useful if I want to use deep learning instead of reinforcement learning?

  • @ea1766
    @ea1766 1 year ago

    Top tier video without a doubt.

  • @margotgorske6986
    @margotgorske6986 1 year ago

    When a flock leaves a tree, it is as if the tree went bare!

  • @bitvision-lg9cl
    @bitvision-lg9cl 1 year ago

    Very impressive, you make the model crystal clear. Now I know that computing with a Bayesian network is nothing more than calculating a probability (for discrete variables) or a probability distribution (for continuous variables) efficiently.

  • @rezaqorbani1327
    @rezaqorbani1327 1 year ago

    Great explanation! Thank you for the video!

  • @efchen1590
    @efchen1590 1 year ago

    Very good explanation!

  • @lemke5497
    @lemke5497 1 year ago

    Exactly what I was looking for. Can't wait to get some free time in a few weeks to start the project myself.

  • @quaternion3267
    @quaternion3267 1 year ago

    Thank you! What is your chess rating?

  • @robotech2566
    @robotech2566 1 year ago

    The hardest part of the Python language is that you can't easily pass variables/functions between parent/child/sibling levels, and that's your problem too at 34:56.

  • @robotech2566
    @robotech2566 1 year ago

    23:35 You didn't get an error, but we got one saying you cannot compare a float with None using '>'. It was finally resolved by changing 'and' to 'or'. I'd like to share this for anyone stuck on that error!

  • @rickstoic6907
    @rickstoic6907 2 years ago

    Thank you Professor Huang.

  • @JustinMasayda
    @JustinMasayda 2 years ago

    This was fantastic, thank you!

  • @shan35178
    @shan35178 2 years ago

    which book is he using for the reference?

  • @quantlfc
    @quantlfc 2 years ago

    Absolutely amazing lecture!!!