AaravYadav

#100-days-in-public Day 16 I really want to make a chess AI. It seems like a fun idea, and I want to make one that can beat me, so something that's actually alright at the game. I think I'm partly inspired by this YouTube video at around the 36-minute mark: youtu.be/Ne40a5LkK6A?si=AoenyRehoYxF-EXD
https://imgutil.s3.us-east-2.amazonaws.com/37daf2ab01b1b6a9101417499032585cd14a289b6dbf3a752a65f12d628593ab/8863592f-d0ea-467c-b078-08c1d5017e3f.png
#100-days-in-public Day 16 I finished training the model and made a video visualizing its training, linked below from my YouTube channel. This time I actually remembered to use a validation dataset, and because of that the model learned substantially faster than before, yay! youtu.be/iBYUoslkwQw
https://imgutil.s3.us-east-2.amazonaws.com/c1638a3553568bcacb1ddcdc8ef4a321177bd7ee718e50021fb29832d946bc79/31579cf0-c4b1-417e-9a44-3b70c1456737.png
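A minimal sketch of what passing a held-out validation set looks like, assuming a tf.keras setup (the posts don't say which framework was used); the toy data, layer sizes, and split below are placeholders:

```python
import numpy as np
import tensorflow as tf

# Hypothetical toy data standing in for the real dataset.
x = np.random.rand(1000, 8).astype("float32")
y = np.random.rand(1000, 1).astype("float32")
x_train, x_val = x[:800], x[800:]
y_train, y_val = y[:800], y[800:]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# validation_data is never trained on, so val_loss tracks real progress
# and makes overfitting visible early.
history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=10, verbose=0)
print(history.history["val_loss"][-1])
```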
#100-days-in-public Day 15 I am going to redo the last simulation, but this time using ReLU only and with a validation dataset. The model is currently training.
https://scrapbook-into-the-redwoods.s3.amazonaws.com/6e4d583d-b045-4e54-bbd8-6cc03dadb154-image.png
#100-days-in-public Day 14 I made it two weeks! Today I finished training two neural networks, one using leaky ReLU activation functions and one using tanh activation functions, with both of their last layers being a modified tanh function, and I created visualizations of their training. The videos of the training are on my channel! youtu.be/vZn3FnS02OI youtu.be/SVH7nVDXPf0
https://imgutil.s3.us-east-2.amazonaws.com/f99a3b62b0b920a7b448a12c574991e392d667af6985aeb6485d4a14b70f76fa/44befdc5-a942-41bb-9d70-6db87de768b8.png
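A minimal sketch of the two variants, assuming tf.keras (the posts don't name a framework); layer sizes, input/output shapes, and the exact form of the "modified tanh" output are all guesses for illustration:

```python
import tensorflow as tf

def scaled_tanh(x):
    # Hypothetical "modified tanh": shifted and scaled so outputs land in [0, 1].
    return 0.5 * (tf.tanh(x) + 1.0)

def make_model(hidden_activation):
    # Illustrative shapes: e.g. (x, y) coordinates in, RGB values out.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(2,)),
        tf.keras.layers.Dense(64, activation=hidden_activation),
        tf.keras.layers.Dense(64, activation=hidden_activation),
        tf.keras.layers.Dense(3, activation=scaled_tanh),
    ])

leaky_model = make_model(tf.nn.leaky_relu)   # variant 1: leaky ReLU hidden layers
tanh_model = make_model("tanh")              # variant 2: tanh hidden layers
```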
#100-days-in-public Day 13 I had an issue with matplotlib not working, and because of that I wasted a lot of time uninstalling and reinstalling it. I will be uploading another video of the stuff I did last week for the 100-days-in-public challenge in a bit. I also read the first half of this book:
https://scrapbook-into-the-redwoods.s3.amazonaws.com/c452c9e4-dd96-4ae2-a7c4-49e81f264a35-image.png
#100-days-in-public Day 12 Today I am going to upload some of the things I have done to my YouTube channel as a sort of dev log. I'm recording them right now and should be done tonight: www.youtube.com/channel/UCrk5M8CDdMTUrwLGTjs16gA
https://imgutil.s3.us-east-2.amazonaws.com/8c503c462165fb49363298f3ed9d1509f9ea77c49490e44e2cf37b993b977f63/184c3c92-a66a-45c4-be50-cadffc45b397.png
#100-days-in-public Day 12 I finished the coding for the interpolation AI that I am making. I am going to let it run overnight and see how it ends up; so far the best result I have gotten is a loss of about 0.95, which is pretty good. In the meantime, here is the animation I was able to make with just 10 epochs, before I get the actual one.
https://scrapbook-into-the-redwoods.s3.amazonaws.com/b66b3c92-3bce-4b08-a7fc-b038a2b74909-animation.gif
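One way a training animation like this could be stitched together, assuming matplotlib is used for the visualization and that per-epoch predictions were collected during training; the curves below are placeholder data, not the real run:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation, PillowWriter

xs = np.linspace(0, 1, 100)
ys = np.sin(2 * np.pi * xs)                        # stand-in target curve
snapshots = [ys * (e / 10) for e in range(1, 11)]  # stand-in per-epoch fits

fig, ax = plt.subplots()
ax.plot(xs, ys, label="target")
pred_line, = ax.plot(xs, snapshots[0], label="model")
ax.legend()

def update(frame):
    # Redraw only the prediction curve for each epoch snapshot.
    pred_line.set_ydata(snapshots[frame])
    ax.set_title(f"epoch {frame + 1}")
    return (pred_line,)

anim = FuncAnimation(fig, update, frames=len(snapshots), interval=200)
anim.save("training.gif", writer=PillowWriter(fps=5))
```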
#100-days-in-public Day 11 Today I started coding a neural network that takes in numbers and maps them to the values it needs. Unfortunately, I have to end the day with an error saying the .mp4 file type is not known. I was able to get the neural network part working; I just needed the visualization to save as an mp4, and it didn't.
https://scrapbook-into-the-redwoods.s3.amazonaws.com/ee6685bd-493e-4e97-b414-2e7ea38bd9c8-image.png
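That ".mp4 is not known" error usually points at the animation writer backend rather than the network code; the actual cause here isn't shown, but one possible workaround is to use matplotlib's ffmpeg writer when the ffmpeg binary is available and fall back to a GIF otherwise:

```python
import matplotlib.animation as animation

def save_animation(anim, basename="training"):
    # Writing .mp4 needs the ffmpeg binary on PATH; check before saving.
    if animation.FFMpegWriter.isAvailable():
        anim.save(f"{basename}.mp4", writer=animation.FFMpegWriter(fps=30))
    else:
        # Pillow can always write a GIF, so the run still produces a video.
        anim.save(f"{basename}.gif", writer=animation.PillowWriter(fps=30))
```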
#100-days-in-public Day 10 I created an AI model that combined the semi-Fourier-series idea with what I was already doing, but it ended up producing a worse output for some reason. It has me stumped, because I thought that adding more data into the model would help, but unfortunately it didn't.
https://scrapbook-into-the-redwoods.s3.amazonaws.com/bbb6580c-7352-49b4-b341-ea3117531671-image.png
#100-days-in-public Day 9 I got some more results from the image compression AI I am making, and I realized that instead of going over every single pixel individually, I could build a CNN with a 1x1 filter and it would do basically the same thing. That made me realize a deeper connection between CNNs and normal DNNs, which has me feeling really trippy.
https://scrapbook-into-the-redwoods.s3.amazonaws.com/ba79890e-7edc-4ba9-997d-bbf0f591e238-image.png
https://scrapbook-into-the-redwoods.s3.amazonaws.com/d96bd8be-ea5c-42b8-834a-bb5d64c84211-image.png
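A small check of the 1x1-filter observation, assuming tf.keras: a Conv2D with a 1x1 kernel applies one shared weight matrix at every pixel, which is exactly a Dense layer run per pixel (the shapes below are illustrative):

```python
import numpy as np
import tensorflow as tf

image = np.random.rand(1, 16, 16, 3).astype("float32")   # toy RGB image

dense = tf.keras.layers.Dense(8, activation="relu")
conv = tf.keras.layers.Conv2D(8, kernel_size=1, activation="relu")

# Dense applied to each pixel's 3 channels, then reshaped back to an image.
per_pixel = dense(image.reshape(-1, 3)).numpy().reshape(1, 16, 16, 8)

# Copy the dense weights into the 1x1 conv kernel.
conv.build(image.shape)
w, b = dense.get_weights()
conv.set_weights([w.reshape(1, 1, 3, 8), b])
conv_out = conv(image).numpy()

print(np.allclose(per_pixel, conv_out, atol=1e-5))  # True: same computation
```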
#100-days-in-public Day 8 I got my image compression algorithm to work a bit: it created the last two images after training on the first image. I have another idea that could be much better. I am going to sort the image array and then train an algorithm to take that sorted array and turn it back into the main image. Using that, I could use 255 bytes for the data and probably very few bytes for the machine learning model as well. I am going to try that tomorrow.
https://scrapbook-into-the-redwoods.s3.amazonaws.com/047482aa-7fe7-48c5-964a-c0b0e8afc6e6-image.png
https://scrapbook-into-the-redwoods.s3.amazonaws.com/23cf9f7e-ff38-44db-9b40-68628e5c4753-image.png
https://scrapbook-into-the-redwoods.s3.amazonaws.com/188eb337-6495-4cb7-9a59-be57abd9cb16-image.png
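A rough NumPy-only reading of the sort-then-reconstruct idea (the post only outlines the plan, so this is just one interpretation): the sorted pixel stream is fully determined by a 256-entry count table, which may be what the "255 bytes for the data" refers to, and the learned model would be left to recover the spatial arrangement:

```python
import numpy as np

image = np.random.randint(0, 256, size=(32, 32), dtype=np.uint8)
flat = image.flatten()

# How many times each byte value 0..255 occurs in the image.
counts = np.bincount(flat, minlength=256)

# The counts alone reproduce the sorted pixel stream exactly...
sorted_pixels = np.sort(flat)
rebuilt_sorted = np.repeat(np.arange(256), counts)
print(np.array_equal(sorted_pixels, rebuilt_sorted))  # True

# ...so the hard part left for the model is learning the permutation that
# maps this sorted stream back to the original 32x32 layout.
```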
#100-days-in-public Day 7 I have made it a week! Unfortunately, I am now also very sick, so I wasn't able to code much. I did start training the model I made for image "compression" that uses trigonometry functions in an almost Fourier-transform-like way. Unfortunately, it also takes very long to train, so I'll train it through the night. Here is a screenshot of some of the code.
https://scrapbook-into-the-redwoods.s3.amazonaws.com/c96c07d3-81cb-4a59-bc4f-61907137501c-image.png
#100-days-in-public Day 6 I am still working on representing images as partial Fourier series, using AI to figure out the values of the coefficients.
https://scrapbook-into-the-redwoods.s3.amazonaws.com/802d4ac5-1190-47c6-b572-c62126d52387-image.png
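A minimal sketch of fitting partial Fourier series coefficients by gradient descent, done in plain NumPy on a 1D signal just to show the mechanics; the real project works on images and presumably a neural-network framework, so every detail here is illustrative:

```python
import numpy as np

x = np.linspace(0, 1, 256)
signal = np.clip(np.sign(np.sin(6 * np.pi * x)), 0, 1)   # stand-in "image row"

K = 16                                      # number of frequencies kept
freqs = np.arange(1, K + 1)
basis = np.concatenate([np.sin(2 * np.pi * np.outer(x, freqs)),
                        np.cos(2 * np.pi * np.outer(x, freqs))], axis=1)
coeffs = np.zeros(2 * K)                    # the values being learned
bias = signal.mean()

lr = 0.1
for step in range(2000):
    pred = basis @ coeffs + bias
    grad = 2 * basis.T @ (pred - signal) / len(x)   # d(MSE)/d(coeffs)
    coeffs -= lr * grad

print("final MSE:", np.mean((basis @ coeffs + bias - signal) ** 2))
```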
#100-days-in-public Day 5 Unfortunately, for some reason a convolution-only network hasn't been working for the image compression idea I had, which is confusing to me. So I'm going to try using ideas from this video, combining sine and cosine values with convolution layers. (The image I was using was supposed to be the hacker image, but it strangely keeps returning weird blank images.) www.youtube.com/watch?v=TkwXa7Cvfr8&t=6s
https://scrapbook-into-the-redwoods.s3.amazonaws.com/22137412-7adf-4e55-98b5-0e268f2d9812-image.png
https://scrapbook-into-the-redwoods.s3.amazonaws.com/047a454c-6600-4c1f-940a-76643bb9f298-image.png
https://imgutil.s3.us-east-2.amazonaws.com/9509e602d70e718ab091340609b85342ae7f10001d559c9c7994e63960d13ea8/8e4fce70-b1dc-4fc1-be31-65e716ac701b.png
#100-days-in-public Day 4 After getting bored with trying to make the MNIST model faster, I have come up with the idea of creating a neural network that takes in a certain static image, or any type of image, and trains on it to be able to convert it into an image that I would like it to be. So, sort of like very bad image compression. From now on my logs will be about the development of that.
https://scrapbook-into-the-redwoods.s3.amazonaws.com/ecec8f36-a3a8-48c6-8587-12e671db408b-image.png
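A sketch of the single-image setup described above, assuming tf.keras and using random arrays as stand-ins for the static input image and the desired output; sizes and layer widths are made up for illustration:

```python
import numpy as np
import tensorflow as tf

h, w = 32, 32
static_input = np.random.rand(1, h, w, 3).astype("float32")   # e.g. noise
target_image = np.random.rand(1, h, w, 3).astype("float32")   # desired output

model = tf.keras.Sequential([
    tf.keras.Input(shape=(h, w, 3)),
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(3, 3, padding="same", activation="sigmoid"),
])
model.compile(optimizer="adam", loss="mse")

# Training on a single (input, target) pair: the network memorizes the
# mapping, which is why this behaves like very lossy image "compression".
model.fit(static_input, target_image, epochs=200, verbose=0)
```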
#100-days-in-public Day 3 I got the model I was working on to converge even faster, reaching 99% in 29.972 seconds, about 8 seconds faster than before. I also have an idea about image compression that I want to try out tomorrow.
https://scrapbook-into-the-redwoods.s3.amazonaws.com/f94cfe34-0ed8-4241-b50a-d91fbe10b03b-image.png
#100-days-in-public Day 2 GREAT DAY TODAY. Yesterday I had a model that took around 1000 seconds to learn the MNIST data to 99% accuracy on some test data. Today I got the time needed to converge to 99% down to ONLY 38.6 SECONDS, a HUGE improvement over the previous time. The graph shows the accuracy at each epoch; each colored curve is a different trial.
https://scrapbook-into-the-redwoods.s3.amazonaws.com/8ff0483a-57af-428c-ad0d-e96e68812476-image.png
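One way a "seconds to 99%" number like this could be measured, assuming tf.keras and the built-in MNIST loader; the small CNN below is generic, not the actual model from the post:

```python
import time
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

start = time.time()
for epoch in range(50):
    model.fit(x_train, y_train, epochs=1, batch_size=128, verbose=0)
    _, acc = model.evaluate(x_test, y_test, verbose=0)
    if acc >= 0.99:                       # stop once the target is reached
        break
print(f"reached {acc:.4f} test accuracy after {time.time() - start:.1f} s")
```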
Day 1 of #100-days-in-public I am going to start this voyage, which I have been anticipating for a long time now. Starting today, I will try to make the fastest-converging model for the MNIST dataset and put it on GitHub, as a way to help me learn about neural networks and GitHub. This will hopefully cover the first 10 days; afterwards I would like to move on to some visualizations with ML.
https://scrapbook-into-the-redwoods.s3.amazonaws.com/4faaa6fa-38f2-42ec-b3fb-bcbb66ae3878-image.png