After reading a post from Stephen Wolfram’s blog, I was particularly intrigued by one passage.
Here’s a kind of fun example that I did. It relates to personal analytics—or what’s sometimes called “quantified self”. I’ve been a data-oriented guy for a long time. So I’ve been collecting all kinds of data about myself. Every email for 23 years. Every keystroke for a dozen years. Every walking step for a bunch of years. And so on. I’ve found these things pretty useful in sort of keeping my life organized and productive.
Earlier this year I thought I’d take all this data I’ve accumulated, and feed it to Mathematica and Wolfram|Alpha. And pretty soon I’m getting all these plots and analyses and so on. Sort of my automated personal historian, showing me all these events and trends in my life and so on.
I thought this was a really interesting idea: to take all these small quantifiable aspects of your life and track them so that you can later chart them and have an “automated personal historian,” as Wolfram puts it. However, I have to question the actual effect this has on him, and whether it has any effect at all. I’m not sure how much effort he puts into this tracking, but I wonder if the end product is worth it. For example, in my personal experience, when I analyzed my Facebook data (an example he cites later in the article), I certainly found it interesting and fun, but I don’t think I learned anything useful or new about myself or the people I am friends with, and certainly nothing I could apply to myself or my life in a meaningful way. Sure, it may be fun to attempt to quantify your life, and it is entirely possible that this is useful to Stephen Wolfram, but I am not yet convinced.
When I myself used the Facebook analytics tool on Wolfram|Alpha, I did find some things that I didn’t expect. For example, the fact that I have significantly more female friends than male friends. In addition, my female friends were more likely to be in a relationship.
I also tried to use Wolfram|Alpha to explore my EE topic. However, since I haven’t narrowed down my EE topic yet and only have the broad category of literature, that is what I chose to search. It suggested many much more specific topics, including notable texts (such as the US Constitution and the Bible) as well as novels, plays, and poems.
For a more frivolous search on Wolfram|Alpha, I chose to use the random feature, which informed me of various random facts, such as that Achilles died in Troy, a comparison of three major American television networks, and the number of robberies in Iowa.
With a technology like Wolfram|Alpha, there are undoubtedly some knowledge issue questions that are going to come up. For example:
To what extent is the “knowledge” of a tool like Wolfram|Alpha superior to that of a human? To what extent is it inferior?
How much trust should we put into a tool such as this?
In addition, there is the question discussed at length by Conrad Wolfram: is using Wolfram|Alpha in math class cheating?
I think this is a difficult question to answer because the answer differs from situation to situation. Should a young student who is just learning their multiplication tables be allowed to use a computer to do all of these computations? Since what they are trying to learn is something basic, something that is fairly necessary to know without the aid of a calculator, it would be cheating to use Wolfram|Alpha because it defeats the purpose of what the teachers are trying to test. However, in higher-level courses, it often wouldn’t be cheating because the math is more conceptual, less about finding the right figures and more about figuring out how to apply them. In cases such as this, once the basic math has already been mastered, I don’t consider it cheating to use Wolfram|Alpha to eliminate the possibility of simple, foolish errors as well as to save time.
Lastly, after reading the article on Siri, I was really impressed by how much data and information about you goes into Siri’s “thought process” every time you ask her a question, and by the sheer volume of information she has about you. Some of this surprise came from the fact that my limited experience with Siri hadn’t led to her working anything miraculous. I also came across a cartoon online that I found interesting:
I thought it was interesting because it not only speaks to the intrinsic fear we have of technology growing more powerful and “human” than us, but also shows the fear we have of technology and companies knowing too much about us.