Sunday, July 21, 2019
Ah, zero, such a unique number. The perfect balance between positive and negative. What exactly is it, though? What does it mean to have zero of something? Does it exist in nature? If we could divide by zero, then we could prove 1 = 2 (an exercise left to the reader). Philosophically, its only rival is infinity…
Enough already, you’re giving me a headache!
I’ve been battling with zero in a different sense. Not so much what it is, but why I can’t calculate it. Excuse my ignorance; let me get you up to speed.
***Warning! You are about to learn some physics and math! Those of you with weaker constitutions may want to leave the blog post now.***
Welcome to Concepts That Sound Hard, But Are Actually Pretty Simple. I’m Professor Schuh, and today we’ll be learning about orthogonality!
First things first. Orthogonal is a fancy way of saying perpendicular. It is how mathematicians say that things are at right angles with respect to one another. For example, the x-axis and the y-axis that form the well-known x-y coordinate plane are orthogonal to one another because the angle between them is 90°.
Next, we must define something called a vector. Don’t let the term scare you! It’s just a quantity that also has a direction attached to it. What do I mean by that? Well, if I say I’m driving my car at 40 MPH and that’s all, then that is just my speed. Speed is not a vector. Now if I say I’m driving 40 MPH northeast, then that is my velocity. Velocity is a vector. As a bonus, you now know the difference between speed and velocity.
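If a little code helps the idea stick, here’s a quick Python sketch of the speed-versus-velocity example above (the 40 MPH northeast trip, just translated into numbers):

```python
import math

# Speed alone is just a number: 40 MPH.
speed = 40.0

# Velocity is that number plus a direction. "Northeast" is 45 degrees from
# due east, so the speed splits into an east (x) part and a north (y) part.
angle = math.radians(45)
velocity = (speed * math.cos(angle), speed * math.sin(angle))

print(velocity)  # roughly (28.28, 28.28): about 28 MPH east and 28 MPH north
```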
We’ve now reached what is called the dot product. In short, the dot product is useful when measuring how perpendicular (aka orthogonal) or parallel two vectors are. If the vectors are indeed perpendicular, their dot product will equal 0. For all those physics heads out there, the dot product is important when computing the work done by a force on an object.
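Here’s a tiny Python sketch of the dot product in action (the vectors and the little work-done example are made up purely for illustration):

```python
import numpy as np

# The x-axis and y-axis directions are perpendicular, so their dot product is 0.
x_axis = np.array([1.0, 0.0])
y_axis = np.array([0.0, 1.0])
print(np.dot(x_axis, y_axis))   # 0.0

# Parallel vectors give a big (non-zero) dot product instead.
print(np.dot(x_axis, np.array([3.0, 0.0])))   # 3.0

# Physics bonus: work = force "dot" displacement.
force = np.array([5.0, 0.0])           # a 5 N push along x
displacement = np.array([2.0, 2.0])    # the object moves diagonally
print(np.dot(force, displacement))     # 10.0; only the x-part of the motion counts
```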
Finally, we can define the inner product, something I’ve been spending a lot of time thinking about. The inner product is a more generalized version of the dot product. This time, though, we can use it to measure orthogonality between two arbitrary functions as opposed to vectors. Just like with the dot product, two functions are orthogonal if their inner product equals 0.
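To see it in code, here’s a quick Python sketch using one common choice of inner product for functions, the integral of their product over an interval (sin and cos are just stand-in examples here, not the functions from my research):

```python
import numpy as np
from scipy.integrate import quad

# One common inner product for functions: integrate their product over an interval.
def inner_product(f, g, a, b):
    value, _ = quad(lambda x: f(x) * g(x), a, b)
    return value

# sin and cos are orthogonal on [-pi, pi]: their inner product comes out to 0
# (numerically, something absurdly tiny like 1e-17).
print(inner_product(np.sin, np.cos, -np.pi, np.pi))

# sin is not orthogonal to itself, of course; this gives pi (about 3.14159).
print(inner_product(np.sin, np.sin, -np.pi, np.pi))
```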
Congratulations, you passed! Give yourself a nice pat on the back! To make yourself feel even prouder, I’ll tell you a secret. You just understood some actual concepts you would learn in a general physics class and/or a linear algebra class. Also, if you felt especially confident about the inner product portion, then you technically just learned some calculus. Yeah, really! You did! The subject with such a difficult connotation behind it. If it weren’t for the risk of going wildly off-track, I would go on my rant about how the concepts of calculus are actually easier to understand than those of algebra, but we’ll save that for another time.
Anyway, you’re now caught up and ready to understand my recent research struggles. In simplest terms, I have been trying to prove that two functions are orthogonal. Really, I’m trying to show they are what is called bi-orthogonal, but for simplicity we’ll stick with orthogonal. Either way, based on what I just taught you, it means I’ve been trying to prove that the inner product between two functions equals 0. The problem, though, is that I am not getting 0! A logical question you might now ask is, “Well, what if the functions are not actually orthogonal?” I understand why you would ask that, but for physical and mathematical reasons that I can’t get into now, we know they are in fact orthogonal. Therefore, I should be getting 0! So why am I not? Why am I having nightmares about the number zero?
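For the curious, “bi-orthogonal” roughly means there are two separate families of functions, and each member of one family has inner product 0 with every member of the other family except its own partner. Here’s a toy Python sketch with made-up functions; they have nothing to do with my actual research:

```python
import numpy as np
from scipy.integrate import quad

def inner_product(f, g, a=0.0, b=1.0):
    value, _ = quad(lambda x: f(x) * g(x), a, b)
    return value

# A toy bi-orthogonal pair of families on [0, 1]:
#   f1(x) = 1,       f2(x) = x
#   g1(x) = 4 - 6x,  g2(x) = -6 + 12x
fs = [lambda x: 1.0, lambda x: x]
gs = [lambda x: 4.0 - 6.0 * x, lambda x: -6.0 + 12.0 * x]

# <f_i, g_j> should be 1 when i == j and 0 otherwise.
for i, f in enumerate(fs):
    for j, g in enumerate(gs):
        print(i, j, round(inner_product(f, g), 10))
```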
Welcome to my life! These are some of the questions I’ve been asking myself since last week.
Part of the answer to my problem I already know. I’m working in an area of math known as numerical analysis. Our group’s research problem is so mathematically complicated that we need a computer to solve it, and even then the computer must make some approximations along the way. Those approximations introduce some error into the final result, meaning that although we should get 0 in theory, in practice we expect a number that is very small and can be considered 0. What we are actually getting, though, is a complex number (a number with a real part and an imaginary part) that fluctuates wildly around 0. When we change the method the computer uses to solve the problem, it sometimes gets us really close to what we want even when we aren’t expecting it to, and other times it makes things worse when we thought it would improve the result. There really isn’t much of a pattern to it, and it’s driving me bonkers.
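If you’re wondering what “very small and can be considered 0” looks like in practice, it boils down to a check like this (the numbers and the tolerance here are invented for illustration, not pulled from my actual code):

```python
# Hypothetical result from the solver: a complex number that should be 0,
# but discretization and floating-point error leave a small leftover.
computed = complex(3.2e-9, -1.7e-9)

# How small counts as "basically zero" for this problem is itself a judgment call.
tolerance = 1e-8

if abs(computed) < tolerance:
    print("Close enough to zero; the functions look orthogonal.")
else:
    print("Not zero; either the method's error is too big or something is wrong.")
```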
The good news is that if I can figure out how to fix everything and finally get what we want, then I will have reached a pretty good milestone considering the amount of time I’ve spent on the project. My goal was to have the issue solved by Friday, but unfortunately, I didn’t quite get there. That’s okay though. Life goes on. I’m sure I’ll figure it out. This week I made considerable progress toward a solution and am now hot on its trail. Either way, I wouldn’t be learning as much as I currently am if everything just worked the way it was meant to. I’ll be sure to let you know of any updates by the time next week’s post rolls around.
Well folks, now you’ve heard about my recent research struggles, but hopefully you learned something in the process. If not and I instead just brought back horrible memories for you, then I sincerely apologize. I did give you a warning. Tune in next week for my penultimate post. I promise not to bore you with my day job again. Terry Schuh, signing off.
Terance Schuh