Week 15 (18-22 November)
This week we started trying our hands at Deep Learning assignment part 2, "Speech Recognition".
In this challenge we will take our knowledge of feedforward neural networks and apply it to a more useful task than recognizing handwritten digits: speech recognition. We were provided a dataset of audio recordings (utterances) and their phoneme state (subphoneme) labels. The data comes from articles published in the Wall Street Journal (WSJ) that are read aloud and labelled using the original text. It is crucial for us to have a means to distinguish different sounds in speech that may or may not represent the same letter or combination of letters in the written alphabet. For example, the words "jet" and "ridge" both contain the same sound, and we refer to this elemental sound as the phoneme "JH". For this challenge we will consider 46 phonemes in the English language.
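Since the labels are given per phoneme state, the task (as we understand it so far) comes down to classifying each frame of audio features into one of these classes with a feedforward network. Below is a minimal sketch in base R of how a single frame could be scored against the 46 phoneme classes with one hidden layer; the feature dimension, hidden width and random weights are purely illustrative assumptions, not the actual assignment setup.

set.seed(1)
n_features <- 40   # assumed size of one frame of audio features
n_hidden   <- 128  # assumed hidden-layer width
n_phonemes <- 46   # the 46 phoneme classes mentioned above

frame <- rnorm(n_features)  # stand-in for one real feature frame

W1 <- matrix(rnorm(n_hidden * n_features, sd = 0.1), n_hidden, n_features)
b1 <- rep(0, n_hidden)
W2 <- matrix(rnorm(n_phonemes * n_hidden, sd = 0.1), n_phonemes, n_hidden)
b2 <- rep(0, n_phonemes)

h      <- pmax(W1 %*% frame + b1, 0)      # hidden layer with ReLU activation
logits <- W2 %*% h + b2
probs  <- exp(logits) / sum(exp(logits))  # softmax over the 46 classes
which.max(probs)                          # index of the predicted phoneme

In the real assignment the weights would of course be learned from the labelled WSJ utterances rather than drawn at random.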
Next we had a session with Danko sir and had an open discussion with him on human intelligence, which gave us a clearer view of the extent to which machines can acquire intelligence. We then solved Sehra sir's next assignment on hypothesis testing using R. The function t.test is available in R for performing t-tests.
> x = rnorm(10)
> y = rnorm(10)
> t.test(x,y)

        Welch Two Sample t-test

data:  x and y
t = 1.4896, df = 15.481, p-value = 0.1564
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -0.3221869  1.8310421
sample estimates:
mean of x mean of y 
0.1944866 -0.5599410
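Continuing the same session, the object returned by t.test can also be stored and its components inspected directly, for example to compare the p-value against a significance level (0.05 below is just the conventional choice, not something fixed in the assignment):

res <- t.test(x, y)
res$p.value          # 0.1564 in the run shown above
res$p.value < 0.05   # FALSE, so we fail to reject the null hypothesis
res$conf.int         # the 95 percent confidence interval printed above

Since both x and y were drawn from the same standard normal distribution, failing to reject the null hypothesis of equal means is exactly what we would expect here.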