Monday, January 29, 2007

Week 4, Tuesday 4.1-4.2

The history regarding DES was very cool. The ways of creating, attacking, and improving DES were quite interesting. The simplified DES-type algorithm was not as cool, but it was nice to know the background. The algorithm was difficult to understand, but reading it through twice helped.
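What finally clicked for me on the second read-through is that each round just swaps the two halves of the block and XORs one half with a scrambled version of the other. Here is a tiny C++ sketch of one Feistel-style round; the round function f below is a made-up stand-in of mine, not the book's actual expander and S-boxes:

```cpp
#include <cassert>
#include <cstdint>
#include <utility>

// One Feistel-style round: split the block into halves (L, R);
// the new halves are (R, L XOR f(R, key)). This f is a placeholder
// stand-in, NOT the book's actual expansion/S-box round function.
uint8_t f(uint8_t right, uint8_t key) {
    return static_cast<uint8_t>((right ^ key) + 0x3B); // arbitrary mixing
}

std::pair<uint8_t, uint8_t> feistelRound(uint8_t L, uint8_t R, uint8_t key) {
    return {R, static_cast<uint8_t>(L ^ f(R, key))};
}

// Decryption runs the same structure with the halves swapped, which is
// why a Feistel cipher is invertible even when f itself is not.
std::pair<uint8_t, uint8_t> feistelUnround(uint8_t L, uint8_t R, uint8_t key) {
    return {static_cast<uint8_t>(R ^ f(L, key)), L};
}
```

The neat part is that decryption reuses the exact same f, so f never has to be invertible.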

Thursday, January 25, 2007

Week 3, Thursday 3.11

Could you explain the discrete log problem? This chapter reminded me of basic linear algebra and the definition of a field and the lemmas it satisfies. It was interesting applying them to fields mod p. Moreover, it was nice to finally understand why we were using mod a prime rather than any other integer. The polynomial fields were familiar to me, but the method of using the extended Euclidean algorithm to find inverses of polynomials was quite interesting. Could you please go into more detail about how a byte of data can represent a polynomial? That seems odd, and much of the explanation after it is vague if you don't understand the initial steps. LFSRs seem really cool, especially finding the period of recurrence. I followed much of the explanation, but I have not learned Lagrange's theorem yet. It would be nice if there were another example; it would really make things a lot clearer.
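As far as I can tell, the idea is that the eight bits of a byte are the coefficients of a polynomial of degree at most 7 over GF(2), so XOR is polynomial addition. I tried sketching multiplication in C++, assuming the Rijndael modulus x^8 + x^4 + x^3 + x + 1 (the book may use a different irreducible polynomial):

```cpp
#include <cassert>
#include <cstdint>

// A byte b7..b0 is read as the polynomial b7*x^7 + ... + b1*x + b0
// with coefficients in GF(2). Multiplying by x is a left shift; if an
// x^8 term appears we reduce it modulo x^8 + x^4 + x^3 + x + 1, whose
// low byte is 0x1B. (That modulus is the Rijndael/AES choice -- an
// assumption of mine, since the book may pick a different one.)
uint8_t xtime(uint8_t b) {
    uint8_t shifted = static_cast<uint8_t>(b << 1);
    return (b & 0x80) ? static_cast<uint8_t>(shifted ^ 0x1B) : shifted;
}

// Full GF(2^8) multiplication by shift-and-add: accumulate the
// power-of-x multiples of a that correspond to the set bits of b.
uint8_t gfMul(uint8_t a, uint8_t b) {
    uint8_t result = 0;
    while (b) {
        if (b & 1) result ^= a; // add this power-of-x multiple of a
        a = xtime(a);
        b >>= 1;
    }
    return result;
}
```

So a byte like 0x57 really is the polynomial x^6 + x^4 + x^2 + x + 1, and multiplying bytes is multiplying polynomials and reducing.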

Tuesday, January 23, 2007

Week 3, Tuesday 3.3-3.4

The reading was pretty dry but necessary. Congruences and inverses in modular arithmetic were quite interesting and familiar; they follow the same rules as ordinary inverses. The notion of an inverse is generalized so that the inverse of a number mod n can be multiplied by that number to cancel it out when doing multiplication mod n. I also thought the part about fractions was interesting. I didn't grasp what fractions meant in modular arithmetic until I read the examples.
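What made fractions click for me is that 1/a mod n is just the inverse of a, which the extended Euclidean algorithm finds. Here is my own C++ sketch of it (not the book's code):

```cpp
#include <cassert>

// Extended Euclidean algorithm: returns g = gcd(a, b) and fills in
// x, y with a*x + b*y = g.
long extGcd(long a, long b, long &x, long &y) {
    if (b == 0) { x = 1; y = 0; return a; }
    long x1, y1;
    long g = extGcd(b, a % b, x1, y1);
    x = y1;
    y = x1 - (a / b) * y1;
    return g;
}

// Inverse of a mod n, i.e. the "fraction" 1/a mod n. It exists only
// when gcd(a, n) = 1; we return -1 otherwise.
long modInverse(long a, long n) {
    long x, y;
    if (extGcd(a, n, x, y) != 1) return -1;
    return ((x % n) + n) % n; // normalize into 0..n-1
}
```

For example, 1/3 mod 26 comes out to 9, since 3 * 9 = 27, which is 1 (mod 26), while 1/2 mod 26 doesn't exist because gcd(2, 26) = 2.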

I found the Chinese Remainder Theorem interesting, and it was quite useful to have the T.A. go over it before we did the reading. It made the reading a lot more interesting and understandable. The examples in the book also make the theorem easier to understand. My only concern is: when are you coming back? The midterm is approaching rapidly, and I feel my questions haven't been answered; a review by you would be most helpful. Your way of explaining things makes the information click. Let me know if you have extra sessions when you get back. I will surely attend!
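Working the theorem into code helped me too. This is my own sketch for two coprime moduli (the example numbers in the test are mine, not the book's):

```cpp
#include <cassert>

// Extended Euclidean algorithm, used to get inverses below.
long extGcd(long a, long b, long &x, long &y) {
    if (b == 0) { x = 1; y = 0; return a; }
    long x1, y1;
    long g = extGcd(b, a % b, x1, y1);
    x = y1;
    y = x1 - (a / b) * y1;
    return g;
}

long inv(long a, long n) {
    long x, y;
    extGcd(a, n, x, y);
    return ((x % n) + n) % n;
}

// Chinese Remainder Theorem for two coprime moduli: the unique
// x mod (m*n) with x = r1 (mod m) and x = r2 (mod n).
long crt(long r1, long m, long r2, long n) {
    // Write x = r1 + m*t; then t = (r2 - r1) * m^{-1} (mod n).
    long t = (((r2 - r1) % n + n) % n) * inv(m % n, n) % n;
    return r1 + m * t;
}
```

For instance, asking for x with x = 3 (mod 7) and x = 5 (mod 15) gives x = 80, and you can check both congruences by hand.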

Sunday, January 21, 2007

Week 3, Sunday 3.1-3.2

The reading was pretty dry for the most part, consisting mostly of theorems and proofs. The Euclidean algorithm seems very powerful. I understand the process of both the Euclidean and the extended Euclidean algorithms, but I am unsure how they really work. The Euclidean algorithm seems perfect for a computer program; it would not be hard to program this in C++. We touched on the application of GCD and primes in the last chapter. I can't wait to get to the crypto stuff, when we can see where these theorems and lemmas are used!
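Since it seemed so programmable, I went ahead and wrote it out:

```cpp
#include <cassert>

// The Euclidean algorithm: keep replacing (a, b) with (b, a mod b);
// the last nonzero remainder is gcd(a, b).
long gcd(long a, long b) {
    while (b != 0) {
        long r = a % b;
        a = b;
        b = r;
    }
    return a;
}
```

It really is only a few lines, which makes it even more surprising how much the rest of the chapter gets out of it.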

Thursday, January 18, 2007

Week 2, Thursday 2.8-2.11

I didn't understand exactly why one-time pads are unbreakable. The book explained that Alice doesn't use the same key twice, but do you use more than one key per plaintext? Also, clarification of the proof on page 48 would be helpful. Binary and ASCII, both methods of writing numbers, can complicate codes greatly. Generating truly random numbers for keys is a problem: computer-generated 'random' numbers do not exemplify pure randomness, so cryptographers rely heavily on natural occurrences of randomness. Blum Blum Shub is practically unpredictable. The linear feedback shift register is very fast and can be easily implemented. The attack is to look for recurrences and then make some reasonable guesses; once the length of the recurrence is determined, all subsequent terms of the sequence can be determined. Using nonlinear recurrences creates a very complicated, hard-to-break code.
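Since the LFSR is supposed to be so easy to implement, I tried one in C++. The recurrence x_{n+3} = x_{n+1} + x_n (mod 2) is just my example, not necessarily the book's; with a nonzero start its output repeats with period 7:

```cpp
#include <cassert>
#include <vector>

// Linear feedback shift register over GF(2): each new bit is the
// XOR (mod-2 sum) of fixed earlier bits. Here the recurrence is
// x_{n+3} = x_{n+1} + x_n (mod 2), my own example choice.
std::vector<int> lfsr(std::vector<int> state, int length) {
    std::vector<int> out = state;
    while ((int)out.size() < length) {
        int n = (int)out.size();
        out.push_back(out[n - 2] ^ out[n - 3]); // x_n = x_{n-2} + x_{n-3}
    }
    return out;
}

// Smallest p > 0 such that the observed sequence repeats with lag p.
int period(const std::vector<int> &seq) {
    for (int p = 1; p < (int)seq.size(); ++p) {
        bool ok = true;
        for (int i = 0; i + p < (int)seq.size(); ++i)
            if (seq[i] != seq[i + p]) { ok = false; break; }
        if (ok) return p;
    }
    return (int)seq.size();
}
```

This matches the attack described: spot the period from the output, and the whole keystream follows.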

Tuesday, January 16, 2007

Week 2, Tuesday 2.5-2.8

The book explained how to encrypt with the ADFGX Cipher very well. However, the decryption method was quite confusing. Could you explain the second-to-last paragraph on pg. 33, starting with "Here is the one technique that was..."? The codes keep getting more and more complicated; as I read them, I'm sometimes stunned at the complexity. After reading about the Hill Cipher and reviewing my linear algebra, I see that cipher methods can become complex very quickly. I feel that many of the code-breaking methods use trial and error as well as luck. Did people 100 years ago really sit down and use brute force to reveal the key? Shannon's principles of diffusion and confusion seem to be a simple yet safe way of checking one's code to make sure it is sound. I will start to analyze future codes in the cryptography book using these principles.
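The Hill Cipher got easier once I wrote it down in code and connected it back to the matrix multiplication from linear algebra. Here is a 2x2 encryption sketch; the key matrix {{3, 3}, {2, 5}} is a standard example I've seen elsewhere, not necessarily the book's, and it has to be invertible mod 26 for decryption to exist:

```cpp
#include <cassert>
#include <string>

// Hill cipher with a 2x2 key matrix mod 26: each pair of lowercase
// letters, treated as a column vector, is multiplied by the key
// matrix and reduced mod 26.
std::string hillEncrypt(const std::string &plain, int k[2][2]) {
    std::string out = plain;
    for (size_t i = 0; i + 1 < plain.size(); i += 2) {
        int a = plain[i] - 'a', b = plain[i + 1] - 'a';
        out[i]     = (char)('a' + (k[0][0] * a + k[0][1] * b) % 26);
        out[i + 1] = (char)('a' + (k[1][0] * a + k[1][1] * b) % 26);
    }
    return out;
}
```

With that key, "help" encrypts to "hiat", which shows the diffusion idea nicely: each ciphertext letter depends on two plaintext letters at once.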

Wednesday, January 10, 2007

Week 1, Thursday, 2.1-2.4 Rough Draft

The most difficult material was understanding how to break the Vigenere Cipher. I understood exactly how they encoded the plaintext but got lost trying to understand the second method used to find the key. I think an example using the second method would clear up my confusion; I've learned that examples are a great tool for learning cryptography.

First off, the first chapter and the first four sections of chapter 2 were awesome! I did not fathom how many ways there are to encrypt plaintext, or that it went back to Julius Caesar. The T.A. was great, and the initial part of chapter 2 went by quickly. It is interesting to note that Julius Caesar, with such important messages, didn't pick a more complicated encryption algorithm.

Second, I completely understand what mod 26 is thanks to Pic 10a,b,c, and it is interesting to note that you must make sure all numbers in the code are between 0 and 25, which can be done with mod, a cool trick. In addition, it is interesting how the definition of inverse is extended when introducing mod. With respect to the Affine Cipher, the multiplicative inverse of 9 (mod 26) is 3. I really like how the book introduces the topic and the order it is in. For example, as I was reading about the Affine Cipher I questioned its validity: what if there were more than one input that gave the same output? The book went on to describe how to ensure the map is one-to-one, which holds if and only if gcd(a, 26) = 1. Moreover, I find that understanding how to break the code using different techniques and thought processes is interesting. It is sometimes amazing how complicated these codes can seem, yet how easily, in the right mind-set, one can break them.
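To convince myself about that inverse, I wrote a little affine cipher in C++ with a = 9 and its inverse 3; the shift b = 2 is just my own pick, not from the book:

```cpp
#include <cassert>
#include <string>

// Affine cipher: encryption is x -> a*x + b (mod 26), which is
// one-to-one exactly when gcd(a, 26) = 1. With a = 9 the
// multiplicative inverse is 3 (9 * 3 = 27, which is 1 mod 26),
// so decryption is y -> 3 * (y - b) (mod 26).
const int A = 9, B = 2, A_INV = 3; // b = 2 is an arbitrary example shift

std::string affineEncrypt(const std::string &s) {
    std::string out = s;
    for (char &c : out) c = (char)('a' + (A * (c - 'a') + B) % 26);
    return out;
}

std::string affineDecrypt(const std::string &s) {
    std::string out = s;
    // +26 keeps the value nonnegative before reducing mod 26
    for (char &c : out) c = (char)('a' + (A_INV * ((c - 'a') - B + 26)) % 26);
    return out;
}
```

Decrypting an encryption gets the plaintext back every time, which is exactly the one-to-one property the gcd condition guarantees.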