- 93 videos
- 1,008,993 views
Bert Huang
United States
Joined Mar 4, 2006
This channel is a mix of personal, research, and teaching videos.
A Bird's Eye View of Recommender Systems
Continuing my series of videos on overviews of key machine learning technology, here's one about recommender systems.
651 views
Videos
A bird's eye view of neural networks
733 views · 2 years ago
This video series will be based on some presentations I've created for a class I'm teaching to policy students at Tufts on AI. The goal of the presentations is to describe AI concepts in sufficient detail so that students with backgrounds outside computing can understand them enough to make informed decisions and assessments about the technology. I figured these might be valuable to others, so ...
Experimenting with JAX | AI Professor Improvises Chess Programming #16
769 views · 3 years ago
I try to convert my current learning code to use JAX. I get it to run and compute a gradient automatically and correctly, but it's very slow. It might be a step toward being able to do neural network learning, but it's certainly not there yet. Broadcast live on Twitch. Watch live at www.twitch.tv/profberthuang
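As a rough sketch of the kind of conversion attempted in this episode (hypothetical `score` function and shapes, not the stream's actual code), JAX can differentiate a plain Python evaluation function automatically:

```python
import jax
import jax.numpy as jnp

def score(weights, features):
    """Hypothetical linear board evaluation: weights dotted with features."""
    return jnp.dot(weights, features)

# jax.grad builds a new function that returns d(score)/d(weights).
score_grad = jax.grad(score)

w = jnp.array([0.5, -1.0, 2.0])
x = jnp.array([1.0, 0.0, 3.0])
g = score_grad(w, x)  # gradient of a linear function is the feature vector
```

The slowness described in the video is typical of un-jitted JAX; wrapping the gradient function in `jax.jit` is the usual first fix.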
Debugging Because Nothing is Working | AI Professor Improvises Chess Programming #15
910 views · 3 years ago
I try to find why my learning bot isn't learning. Almost everything I test works, except possibly one critical error in my epsilon-greedy logic. I add basic PGN output for learning so we can examine games from training. Watch live at www.twitch.tv/profberthuang
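For reference, a minimal epsilon-greedy sketch (hypothetical names, not the bot's code). The classic bug in this kind of logic is flipping the comparison, which makes the agent explore with probability 1 - epsilon instead of epsilon:

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """With probability epsilon pick a random action, otherwise the greedy one.

    Inverting the comparison below is the kind of critical error being
    hunted here: the agent would then act randomly almost all the time.
    """
    if rng.random() < epsilon:  # explore
        return rng.randrange(len(q_values))
    # exploit: argmax over the estimated action values
    return max(range(len(q_values)), key=lambda a: q_values[a])
```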
Attempting Experience Replay | AI Professor Improvises Chess Programming #14
292 views · 3 years ago
I attempt to code experience replay and set up self-competition so the engine can train against increasingly strong opponents. Broadcast live on Twitch. Watch live at www.twitch.tv/profberthuang
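A minimal sketch of the technique being attempted (hypothetical transition format, not the stream's code): store transitions in a fixed-size buffer and train on uniformly random batches, which breaks the correlation between consecutive positions in a game:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size buffer of (state, action, reward, next_state) tuples."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # old items fall off the left

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # uniform sampling decorrelates training examples
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=100)
for t in range(10):
    buf.push((t, "move", 0.0, t + 1))
batch = buf.sample(4)
```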
Random Projections | AI Professor Improvises Chess Programming #13
344 views · 3 years ago
I try to give the bot the ability to reason about relationships among pieces (without making a true multi-layer neural network) by adding random projection features. The bot starts training, but I'll need to run it for a long while before we see any progress. Watch live at www.twitch.tv/profberthuang
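The idea can be sketched as follows (hypothetical dimensions and choice of tanh nonlinearity, not the stream's exact code): multiply the raw board features by a fixed random matrix and squash the result, so a linear learner on top sees random nonlinear combinations of piece features without a trained hidden layer:

```python
import math
import random

def make_projection(n_in, n_out, seed=0):
    """A fixed random matrix; it is never trained, only the weights
    applied to its outputs are."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0 / math.sqrt(n_in)) for _ in range(n_in)]
            for _ in range(n_out)]

def project(features, matrix):
    """phi(x) = tanh(R x): random projection plus squashing nonlinearity."""
    return [math.tanh(sum(r * x for r, x in zip(row, features)))
            for row in matrix]

R = make_projection(n_in=4, n_out=8)
phi = project([1.0, 0.0, -1.0, 0.5], R)
```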
Actual Reinforcement Learning | AI Professor Improvises Chess Programming #12
1.9K views · 3 years ago
I try to avoid saying Q-learning because Q means something else now. But I implement Q-learning, and it seems to work. I add some more features to the board representation. I also correct my rating on my personal lichess account after accidentally playing my bot in rated games last stream. Watch live at www.twitch.tv/profberthuang
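The core update being implemented is, in its tabular form (a generic sketch; the bot itself learns weights over board features rather than a table):

```python
def q_update(Q, s, a, reward, s_next, actions_next, alpha=0.1, gamma=0.9):
    """One temporal-difference step of Q-learning:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max((Q.get((s_next, a2), 0.0) for a2 in actions_next),
                    default=0.0)  # terminal states have no successor value
    td_target = reward + gamma * best_next
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (td_target - Q.get((s, a), 0.0))
    return Q

Q = {}
# Repeated updates pull the estimate of a winning capture toward its reward.
for _ in range(100):
    q_update(Q, "s", "take_pawn", 1.0, "terminal", actions_next=[])
```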
Fixing Bugs in Fake Reinforcement Learning | AI Professor Improvises Chess Programming #11
194 views · 3 years ago
I fix a crucial sign error in my fake reinforcement learning setup, and it sort of starts to work. At the end, everything is set up for actual reinforcement learning in the next session. Broadcast live on Twitch. Watch live at www.twitch.tv/profberthuang
Setting up for Reinforcement Learning | AI Professor Improvises Chess Programming #10
647 views · 3 years ago
I set up a basic scoring agent that will be able to do reinforcement learning. Initially, it will just try to learn material values. I try to make it learn at the end of the session, but it's full of bugs. Watch live at www.twitch.tv/profberthuang
Fixing (?) Quiescence Search | AI Professor Improvises Chess Programming #9
542 views · 3 years ago
I tried to fix some issues with quiescence search, but I'm not sure whether I was successful. I also got into an interesting discussion with a viewer about recent advances in chess engines. Watch live at www.twitch.tv/profberthuang
Refactoring & Trying to Spell Quiescence Search | AI Professor Improvises Some Chess Programming #8
210 views · 3 years ago
I refactor the code to accommodate multiple forms of search. I use the refactoring to build quiescence search, which seeks a "quiet position" at the end of the normal search. Later I watch a game against Stockfish, but I didn't realize that I had a second bot running in the background that was actually controlling things. github.com/berty38/lichess-bot Watch live at www.twitch.tv/profberthuang
Time Management Prototype | AI Professor Improvises Some Chess Programming #7
216 views · 3 years ago
I add some time management to my bot. I was rushed, so I'm not sure whether my implementation is correct. I also briefly cover some speedups I made between streams. Watch live at www.twitch.tv/profberthuang
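A common baseline for this (a sketch under assumed clock semantics, not necessarily what the video implements) budgets a fraction of the remaining clock per move plus the increment, while keeping a small safety reserve:

```python
def time_for_move(remaining_s, increment_s=0.0, moves_to_go=30, safety=0.05):
    """Spend roughly 1/moves_to_go of the remaining clock plus the
    increment, but never dip into the safety reserve."""
    budget = remaining_s / moves_to_go + increment_s
    return max(0.0, min(budget, remaining_s * (1.0 - safety)))
```

With 60 seconds left and no increment this allocates 2 seconds; with only 1 second left, the reserve caps spending even if the increment is large.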
Introducing Bugs and Not Fixing Them | AI Professor Improvises Some Chess Programming #6
174 views · 3 years ago
I tried to improve the sorting with possibly good results, but then I introduced a bug that caused bad play. (I fixed the bug a few minutes after the stream I think.) Watch live at www.twitch.tv/profberthuang
Attempting Alpha-Beta Pruning | AI Professor Improvises Some Chess Programming #5
808 views · 3 years ago
I try to convert my makeshift pruning algorithm to proper alpha-beta pruning. It looks correct but it doesn't lead to any obvious speedups. github.com/berty38/lichess-bot Watch live at www.twitch.tv/profberthuang
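For reference, proper alpha-beta on a toy game tree (a generic sketch using nested lists as the tree, not the bot's code): it returns the same value as plain minimax while cutting branches that cannot change the result, which is why correctness alone doesn't guarantee a visible speedup on shallow searches:

```python
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta cutoffs. A leaf is a number; an internal
    node is a list of child nodes."""
    if not isinstance(node, list):
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:  # opponent would never allow this line
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

tree = [[3, 5], [2, [9, 1]], [4, 6]]
best = alphabeta(tree, maximizing=True)  # same value minimax would return
```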
Pruning the Search Tree | AI Professor Improvises Some Chess Programming #4
344 views · 3 years ago
I try a couple of strategies for pruning the search tree. The results, as usual, are mixed: pruning shrinks the search tree a bit, but my caching technique did nothing but slow things down. More mysteries for next time. github.com/berty38/lichess-bot Watch live at www.twitch.tv/profberthuang
Making Moves on Lichess, but Now Slowly! | AI Professor Improvises Some Chess Programming #3
347 views · 3 years ago
Slight Improvements to Heuristic | AI Professor Improvises Some Chess Programming #2
393 views · 3 years ago
Trading a Queen for a Pawn | AI Professor Improvises Some Chess Programming #1
1.5K views · 3 years ago
What is soft or max about the softmax function?
985 views · 3 years ago
Avoid Numerical EXPLOSIONS with the Log Sum Exp Trick
2K views · 3 years ago
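A minimal illustration of the point of both of these videos (generic sketch): softmax is a smooth ("soft") version of max/argmax, and the log-sum-exp trick subtracts the maximum before exponentiating so that large scores cannot overflow:

```python
import math

def logsumexp(xs):
    """log(sum(exp(x))) computed stably: factoring out the max makes
    every exponent <= 0, so math.exp never overflows."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def softmax(xs):
    """Softmax via the same shift; outputs are positive and sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# math.exp(1000.0) would overflow, but the shifted version is fine:
stable = logsumexp([1000.0, 1000.0])  # exactly 1000 + log(2)
probs = softmax([1000.0, 999.0, 998.0])
```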
What Decision Makers Should Know About Machine Learning
463 views · 3 years ago
Reduced-Bias Co-Trained Ensembles for Weakly Supervised Cyberbullying Detection
264 views · 4 years ago
Relating the Binary and Multi-class Perceptron (handwritten notes)
2.3K views · 4 years ago
Perceptrons are Weird! (handwritten notes)
406 views · 5 years ago
Lagrangian relaxation (handwritten notes)
4.5K views · 5 years ago
The Worst Unboxing Video of NVIDIA Titan V
1.3K views · 5 years ago
All Lagrangian Duals (of Minimizations) are Concave
777 views · 5 years ago
remember when??
Thanks so much for this video! It was very insightful for me👍🏾
11:49 Can anyone tell me why phi(A, C) = P(C|A) and not P(A)P(C|A)?
Heat Death
Never take this down Bert. I send it to everyone asking me what is a good overview of the main models for classic ML, even if it's 9 years old.
❤❤❤❤❤
Thanks for helping me❤❤❤
awesome
what is 'phi' sir
thank you for this brilliant explanation. I wish there were questions with solutions to practice on.
Very unclear and confusing. Using Venn diagrams to represent some of the probabilities, and giving a detailed worked example with actual numbers, would be a great help for people discovering the subject. I'm fairly sure this is a great video for people who already understand the subject or have some grasp of it, but for newcomers it is very confusing. Not to mention the jump in difficulty between the first part, which is quite easy to understand (although Venn diagrams would help), and the second part, which looks like Elvish.
Very simple explanation, thanks!
The bot sacrificed the bishop because of the depth-3 search. It saw bishop to b5, pawn takes bishop, and then queen takes rook. It didn't see that you can recapture the rook; that's depth 4. It thought it could take a rook for a bishop and a pawn.
Thanks for great video! Helped me a lot in understanding this stuff for my Uni course :)
you suck at following any sort of linear pace. Fuck youtube videos
oh let me explain how the product rule works and let me show you how the simple probability of a and b works, then immediately reference 7 more advanced terms without any introduction like we all know them perfectly. Cunt you just taught nothing congrats
great stuff!!
from Bihar (INDIA)
With all due respect to you, but not to the people who created this "variable elimination" thing: the "elimination" sounds like bullshit, because you are already computing all the possible states of the variable you are going to eliminate, meaning you aren't really eliminating anything. Or am I wrong?
absolutely useless.
Thanks for the great explanation! I finally understood the implementation of HMMs.
6:52
🎯 Key takeaways for quick navigation:
- 00:00 📊 Probabilistic graphical models, such as Bayesian networks, represent probability distributions through graphs, enabling the visualization of conditional independence structures.
- 01:34 🎲 Bayesian networks consist of nodes (variables) and directed edges representing conditional dependencies, allowing the representation of full joint probability distributions.
- 03:21 🔀 Bayesian network structures reveal conditional independence relationships, simplifying the calculation of conditional probabilities and inference.
- 09:10 🧠 Naive Bayes and logistic regression can be viewed as specific Bayesian networks, with the former relying on conditional independence assumptions.
- 11:55 📜 Conditional independence is a key concept in Bayesian networks, defining that each variable is independent of its non-descendants given its parents.
- 15:15 ⚖️ Inference in Bayesian networks often involves calculating marginal probabilities efficiently, which can be achieved through variable elimination, avoiding full enumeration.
- 23:54 ⚙️ Variable elimination replaces summations over variables with functions, reducing computational complexity for inference.
- 24:05 🧮 Variable elimination computes marginal probabilities efficiently by eliminating variables one by one.
- 28:07 ⏱️ In tree-structured Bayesian networks, variable elimination can achieve linear time complexity for exact inference.
- 29:02 📊 Learning in a fully observed Bayesian network is straightforward, involving counting probabilities based on training data.
This is so good!!!
What is the name of textbook
thank you so much, you explain the subject very well and have helped me to understand.
My professor for AI explained this so badly that I had no idea what was going on. Thanks for this in-depth and logical explanation of these topics
Fantastic video! Thanks a lot!
Thank you for the good explanation :D
the BEST lecture about fairML
Great for getting up to speed again!
i dont want to learn technology but i want you so bad bro.
This is very nice. In LDPC decoders this numerical problem happens very often; back then I used MATLAB vpa, but it was very slow. Thank you so much!!
Fantastic video Thanks so much for making this
Next time prepare for streams before going live. Check "Machine Learning with Phil"
Can you please update the code on your GitHub repository? The version currently on GitHub is too old, and I get an error: `ValueError: shapes (395,) and (790,) not aligned: 395 (dim 0) != 790 (dim 0)`
Can you please let me know what software you use for writing? Is it the notes feature in Zoom? Seeing your face surely makes it better.
Is this playlist still useful if I want to use deep learning instead of reinforcement learning ?
Top tier video without a doubt.
When a flock leaves a tree, it is as if the tree went bare!
Very impressive, you make the model crystal clear. Now I see that Bayesian network inference is nothing more than calculating a probability (for discrete variables) or a probability distribution (for continuous variables) efficiently.
Great explanation! Thank you for the video!
Very good explanation!
Exactly what I was looking for. Can't wait to get some free time in a few weeks to start the project myself.
Thank you! What is your chess rating?
The hardest part of the Python language is that you can't easily pass variables/functions between parent/child/sibling scopes, and that's your problem too at 34:56.
23:35 You didn't get the error, but we got one saying a float cannot be compared with None using '>'. It was finally resolved by changing 'and' to 'or'. I'd like to share that with anyone who is stuck on this error!
Thank you Professor Huang.
This was fantastic, thank you!
which book is he using for the reference?
Absolutely amazing lecture!!!