Pointing a few more features at the search function has made me wish I'd done a few things differently. However, I did make one nice improvement this evening.
Now, when searching, if more than a day passed between two consecutive posts, a visual separator appears between them, along with an indicator of how much time elapsed. I also have it unambiguously describe the direction of the passage of time depending on whether you're sorting Ascending or Descending.
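Under the hood it amounts to something like this (a rough sketch, not the actual idkfa code; the field names are made up):

<?php
// Sketch: insert a separator between consecutive search results when
// more than a day elapsed between them. 'posted_at' is assumed to be
// a Unix timestamp.
function render_search_results( array $posts, $ascending ) {
    $prev_time = null;
    foreach( $posts as $post ) {
        if( $prev_time !== null ) {
            $gap = abs( $post['posted_at'] - $prev_time );
            if( $gap > 86400 ) { // more than one day, in seconds
                $days      = floor( $gap / 86400 );
                $direction = $ascending ? "later" : "earlier";
                echo "<div class='time-gap'>" . $days . " day(s) " . $direction . "</div>";
            }
        }
        echo "<div class='post'>" . htmlspecialchars( $post['body'] ) . "</div>";
        $prev_time = $post['posted_at'];
    }
}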
Anybody need a 50 gallon fish tank, complete with a plecostomus and two eels?
My parents have a coworker who is giving away said setup for free.
If you haven't seen it already, I highly recommend seeing the BBC's latest efforts with Sherlock (www.imdb.com).
Maybe I'm just uninitiated in terms of Sherlock Holmes mystery stories, but all of the episodes so far have been interesting and engaging, and the character development between Holmes (Benedict Cumberbatch) and John Watson (Martin Freeman) is very entertaining. There are only a handful of episodes out currently, and each is 1.5 hours long. However, I have yet to feel as if the episodes were too long, dragged on, or weren't worth the time investment.
See, I know definitively that there isn't an external API for programmatically posting to idkfa. And I know that it would be non-trivial to automate posting given things like the validation steps, double-posting alerts, etc.
So when I see the same message on Google Plus, Facebook, Twitter, and also idkfa, all around the same time, I feel appreciative that the effort is taken to include this little corner of the Internet.
I didn't like that the display of <#Recent> threads vs. [#Unread] threads in the Discussion Areas box was inconsistent: when you were logged in with nothing unread, there might be no third column displayed at all. Now, in the case where you're logged in and you've read everything, the Discussion Areas display will "fall back" to displaying the <#Recent> posts rather than omitting the column entirely.
Also, I added a few extra search links to the Discussion Areas box. Basically, when you see a number, you can click on it and see, as a search result, exactly which posts that number represents. When logged in, you can click to search your unread posts. Whether logged in or out, you can click the post counts in either the (#Comments) or <#Recent> columns to see which posts belong to that item, or which of that item's posts are among the most recent 50 or so.
SPDCA: The other direction.
I convinced myself that idkfa needed to have an "unread" feature. That is to say: after spending incredible amounts of time getting the "read" part right, I felt I might find value in going the opposite direction. So now, in addition to the "speedreading" feature, we also have an "un-reading" feature for marking certain posts as unread.
Like the speedreader, it's plugged into the search feature. Basically: once you've searched your way to a data set you're happy with, whether it's an item, a thread (one of the icon links, if you're interested), or a single post, you can choose to mark it as read (speedreading) or unread (un-reading).
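Conceptually (a rough sketch, not the actual idkfa schema or code), the difference between speedreading and un-reading is just whether rows get added to or removed from a viewing-history table:

<?php
// Sketch: "speedreading" adds rows to a viewing-history table and
// "un-reading" deletes them. The table/column names are hypothetical,
// and $db is assumed to be a PDO connection (MySQL here, hence INSERT IGNORE).
function mark_posts( PDO $db, $user_id, array $post_ids, $unread = false ) {
    $sql = $unread
        ? "DELETE FROM viewing_history WHERE user_id = ? AND post_id = ?"
        : "INSERT IGNORE INTO viewing_history (user_id, post_id) VALUES (?, ?)";
    $stmt = $db->prepare( $sql );
    foreach( $post_ids as $post_id ) {
        $stmt->execute( array( $user_id, $post_id ) );
    }
}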
This helps me in a few ways:
I realize this probably isn't an absolutely necessary feature. But it makes the seen/unseen mechanism more complete, and gives you control over it to make it conform to the way you want to read a post.
Also, I was a bit bored and looking at forum features on other sites and software. I liked the idea of abbreviating post counts by listing them in the appropriate kilo- (K), mega- (M), or giga- (G) notation. It's unlikely we'll ever see 1,000,000 posts, let alone 1,000,000,000, but it saves a little bit of space.
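For the curious, the abbreviation boils down to something like this (a quick sketch, not necessarily how it's implemented here):

<?php
// Sketch: abbreviate a post count with K/M/G suffixes.
function abbreviate_count( $n ) {
    if( $n >= 1000000000 ) return round( $n / 1000000000, 1 ) . "G";
    if( $n >= 1000000 )    return round( $n / 1000000, 1 )    . "M";
    if( $n >= 1000 )       return round( $n / 1000, 1 )       . "K";
    return (string) $n;
}

echo abbreviate_count( 1234 );   // "1.2K"
echo abbreviate_count( 987 );    // "987"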
SPDCA: Cookies. For the longest time I've fought with browser cookies.
(Browser cookies are small, named variables passed between a web server and a web browser. They are what allow servers to maintain stateful information about you (Are you logged in? What is in your shopping cart? etc.) over the course of your browsing session. They are often looked upon as the least sophisticated part of the HTTP protocol specification.)
In all of my previous projects, I have simply tried to get the cookies "right," and then walk away and never touch it again. In v2, they were a goddamn mess. Not only was I passing back and forth your username (something that's easy to guess and/or forge), I was also passing back and forth your plaintext, unencrypted password (something that is notoriously easy to spot if you were, say, listening in on a coffee shop wireless, etc.). Even worse: upon every page access I tried to "update" the cookie so its expiration time would be pushed further out. Each time I did so, I told the server to re-send your password in order to update the timestamp. It's probably for the best that I shut that baby down.
For v3, I decided I would try to address the problem. PHP (the programming language idkfa is written in) has built-in session-tracking functionality that takes care of things like session creation / teardown, storing complex data types within the session, and session validation, and it keeps the sensitive bits on the server, handing the browser only an opaque session ID.
Unfortunately that decision, up until now, has caused my idkfa logins to last around 24 minutes unless you were a) a user actively accessing the site, or b) sitting on auto-update.
This is for the following reasons:
1. Out of the box, PHP's session garbage collector throws away session data after session.gc_maxlifetime seconds of inactivity, and the default is 1440 seconds (i.e., 24 minutes).
2. Nothing was refreshing the session cookie or the session itself on each visit, so unless something kept touching it (you actively browsing, or auto-update doing it for you), the login simply aged out.
When I woke up this morning I was surprised to see myself still logged in to idkfa. It's been a while. I think I've got it "correct" now, but I'm still watching it.
Also, because my brain works wrongly, I added the "pre" tag to the list of approved HTML tags. There isn't a button... so you have to look at the code... to get the pre tag. Ha... ha... ha.
ini_set("session.gc_maxlifetime", COOKIE_EXPR_TIME );
ini_set("session.save_path", SESS_PATH );

// Get some cookie modifications going
session_name( SESS_NAME );
# $ck = session_get_cookie_params();
# $ck['lifetime'] = $ck['lifetime'] + COOKIE_EXPR_TIME;
session_set_cookie_params( COOKIE_EXPR_TIME, COOKIE_PATH );
session_cache_expire( COOKIE_EXPIRE_TIME / 60 );

// Globally, start or resume current session
session_start();

# We've already got our cookie set.
$is_session_available = false;
if( isset( $_COOKIE[SESS_NAME] ) ) {
    $is_session_available = true;
    setcookie( SESS_NAME, session_id(), time() + COOKIE_EXPR_TIME, COOKIE_PATH );
}
(Actually, it's a little bit unwieldy at the moment. But as I find occasion to put more code in, I'll see if I can't improve upon it.)
Another "nice to have" thing I've wanted for a while: for plain links that are excessively long, I am "collapsing" them so that they don't mess with the formatting of the comment boxes. You can still explicitly link things of arbitrary length (making normal text appear as a link (php.net)), but using a lengthy URL (https://picasaweb.google.com/lh/photo/ywObKSO0AlKzBeedzVyADi2TBFbFtxRPtqpluxrkO2Q?feat=directlink (picasaweb.google.com)) will attempt to collapse things, like so:
https://picasaweb....O2Q?feat=directlink (picasaweb.google.com)
So the question is: why do links get collapsed in some places and not others? Well, it depends on your browser. Some browsers (apparently Chrome and IE) automatically wrap links pasted into special "textareas" like the comment box in the correct linking tags. I might try to address this in the future so that explicit links get collapsed as well, but we'll see.
Alright... I've got this handled "better" now, though explicit links can still be fairly lengthy and mess with the formatting of the comment box. However, every link now has an indicator next to it showing the host name of the site the link actually goes to. For example:
http://idkfa.com/v3/v_thread.php?thread_id=3995&msg_id=3995 (reddit.com)
Note that the visible link text differs from the host name indicated after it; the indicator reflects where the link actually goes. This should help you decide whether you want to click on a link.
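The indicator itself is just standard URL parsing; a minimal sketch (not the real idkfa code, and the truncation widths are made up) might look like:

<?php
// Sketch: collapse an overly long URL and append the real host name.
function format_link( $url, $max_len = 40 ) {
    $host = parse_url( $url, PHP_URL_HOST );
    if( $host === null || $host === false ) {
        $host = "";
    }
    $text = $url;
    if( strlen( $text ) > $max_len ) {
        // Keep the beginning and the end, drop the middle.
        $text = substr( $text, 0, 18 ) . "..." . substr( $text, -20 );
    }
    return "<a href=\"" . htmlspecialchars( $url ) . "\">"
         . htmlspecialchars( $text ) . "</a> (" . htmlspecialchars( $host ) . ")";
}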
I was into link misdirection before it was cool. Sadly, I don't have a 5 digit slashdot UserID (slashdot.org), so I wasn't there for the birth of their link shortening/verification system. (books.google.com) I'm going to put myself out of nerd misery now...
Why didn't anyone tell me Jonathan Coulton released a new album (secure.jonathancoulton.com) in September?
(It features Sara Quin, of Tegan and Sara fame, singing Still Alive.)
Also, I plugged "2^2147483648" into Wolfram Alpha (http://www.wolframalpha.com/input/?i=2%5E2147483648 (www.wolframalpha.com)), which gave me a) Hope, and b) a better answer.
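For a rough sense of scale (my own back-of-the-envelope arithmetic, not Alpha's output): the number of decimal digits in 2^2147483648 is floor(2147483648 * log10(2)) + 1, which comes out to roughly 646 million.

<?php
// Back-of-the-envelope: how many decimal digits does 2^2147483648 have?
$digits = floor( 2147483648 * log10( 2 ) ) + 1;
echo number_format( $digits );   // roughly 646,456,994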
Also, slightly related, an article about Hypercard: http://www.loper-os.org/?p=568 (www.loper-os.org)
This guy's argument is that Apple/Steve Jobs killed Hypercard because it weakened the distinction between the use and programming of a computer. This guy is sort of crazy... and even though he calls my kind the "autistic software engineer," he's still my kind of crazy. He wants people to do more interesting things with computers.
As per some advice from Josh, I made a new YouTube channel especially for all the dirty signs I will be signing:
http://www.youtube.com/user/ASLTerpCat/videos (www.youtube.com)
Tell your parents. Enjoy.
Reading some of the comments (www.youtube.com)...
It's too bad you're being "disliked" not for your technical merit, but because laymen think you're being rude or contrary to Kristen's original videos, or simply being "attention-seeking."
Speaking as someone who has to constantly defend my high towers, I salute you.
Random thought/idea: So say I see that there's 3 new posts that I haven't read in Mercy General. I click on one of them and they all happen to be in the same thread, but since I clicked on one of them, they're all counted as read. The big thing here though is how do I distinguish which ones I haven't read yet on the thread screen? Can the windows be outlined or something to show that they're new? Maybe I'm missing it, but that would be nice. Then if I hit F5, since they're already read, the outline goes away.
(nods) Yeah, I know what you mean. I've entertained the idea, but there's sort of an "order of operations" problem. When I'm rendering a page, I have a choice to make:
1. Update your viewing history first, then render everything against that freshly updated (post-viewing) history.
2. Render against your old (pre-viewing) history, and update the history somewhere in the middle of it all.
With #1, everything generates with the same consistent, post-viewing history. Everybody thinks they've seen everything, even what's being displayed for the first time.
With #2, I would be playing a bit of a shell game. Every time I update a page, I would need to be cognizant of "Does this function need the pre-viewing history? Or the post viewing?" If ever I screwed that up, or introduced something that always needed pre-viewing or post-viewing, I would have to reorder things in my rendering mechanism. This isn't fun.
I might be able to make a modification... essentially, have my "maintain_viewing_history" function return a list of things it considered "unseen" before its call. Then, I would have to change a number of other functions to pay attention to that. It would mean dragging even more parameters through the different rendering calls, but it would be less hellish in the end.
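Something in that direction might look like the sketch below (the signature and the sample data are hypothetical; the real function would be reading from and writing to the database):

<?php
// Sketch: mark a batch of post IDs as seen, but return the subset that
// was unseen *before* this call so the renderer can highlight it.
// $seen_ids stands in for the user's existing viewing history.
function maintain_viewing_history( array &$seen_ids, array $post_ids ) {
    $previously_unseen = array();
    foreach( $post_ids as $post_id ) {
        if( !in_array( $post_id, $seen_ids ) ) {
            $previously_unseen[] = $post_id;
            $seen_ids[] = $post_id;   // would be persisted to the DB in real life
        }
    }
    return $previously_unseen;
}

// The rendering code could then outline anything in the returned list:
$history  = array( 101, 102 );
$new_ones = maintain_viewing_history( $history, array( 101, 102, 103, 104 ) );
// $new_ones is array( 103, 104 )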
I'll think about it.
I had an issue with "classic view."
I made the change in this thread with full view in mind, assuming that all IDs passed through the maintenance function would be marked as read. Classic view behavior is different, marking only the post you're viewing as read, not the entire thread.
I found a way around it that allows the same function to serve both cases, but it's definitely not pretty. If I stare at it some more, I might figure something else out, but for now it is working.
I miss classic. It had a certain, understated elegance. But it did take longer.
I've also updated the "streaming" page to indicate which threads/posts are unread. It's... actually pretty cool, being able to scroll through a bunch of posts and pick out the new ones among those you've already seen. I recommend checking it out.
However, because the streaming page has the potential to display many, many posts, and could unintentionally mark more posts as read than you intended, I'm keeping it from updating your history unless you explicitly click on a thread. Maybe I can make this a user-selectable option in the future, but for now I'm erring on the side of caution.
Remember also that there is a "speedread" function under Shortcuts > More, or through the Search feature by clicking on "Mark these posts as read" for a given search result. Seriously, it's useful.
I need to know something regarding your workplace; specifically, how you work with Excel (or other spreadsheet applications):
1. What do you primarily use Excel for?
2. How often do you use it? Is it usually for the same kinds of tasks?
3. Do you share your spreadsheets with coworkers? How?
4. Where does the data you work with come from, and is it stored in the spreadsheets themselves?
5. Do you (or your coworkers) use macros or scripts?
6. How badly would it hurt if a spreadsheet you depend on disappeared?
7. What are Excel's best and worst features?
Thanks for your time.
1. Format data, process data, organize data.
2. I use it many times a day. Typically for the same sort of shit.
3. I share a lot, using a shared drive. Only way we pass around data really.
4. It is stored in excel, but exported to Word for reports.
5. No macros at this point because I hate them, but probably should start implementing them.
6. It would suck, but ultimately the data is stored elsewhere. But backups are created daily.
7. Excel's best feature is that anyone can use it... and because of that it is necessary. Its worst feature is that it fucking sucks (i.e., data manipulation is difficult).
1. I use Excel to conduct data exploration ('Is there something interesting going on here?'), perform complex calculations, display results in organized tables or charts, and report information to others.
2. I use Excel approximately half of my working day, maybe more. I used to use it exclusively, but I now conduct much more analysis using SQL and Tableau. Some of my reports are recurring (weekly, monthly, etc.), and I have created dashboard-like workbooks that dynamically update to minimize the work required to refresh data. But most of my work consists of special data requests or long-term, changing analyses.
3. About half of my workbooks get shared. Special requests are most often sent via email. Larger reports and recurring documents are posted to Sharepoint.
4. It is very, very rare that I store data in excel. I almost exclusively import it from SQL databases. The exception is census data found online. And even that is a paste at a moment in time. Data permanently stored in excel scares me. Seriously.
5. I have used quite a few docs with Macros (though I seem to use them less and less). They would run printing scripts, which would cycle through dynamic dashboard views; update formatting on large data sets; or import data directly from external data sources like other workbooks or SQL Server.
6. I don't have any workbooks that are single points of failure in any given process. My recurring reports have many copies, and I have SQL code to document my data pulls. Some analysis results could be painful to lose (a lot of successive calculations), but I could recreate them. Truly, the point of failure is me. I work alone for the most part, and other folks in the office probably couldn't pick up my work easily if I were hit by a bus.
7. Oh gosh. Excel's best feature... I suppose it is the transparency of its functionality. I can trace all its calculations and 'watch' analyses happen visually, which I just can't do with SQL code. I couldn't live without Excel's selection of lookup functions. The worst is a tie between charting (and all its inherent bugs) and the lack of a count-distinct function.
1. performing calculations, occasionally graphing. also, putting together equipment schedules to use with Xcel2CAD to put them into engineering drawings.
2. depends on the project i'm working on and where I am in the project. at times, i will work with excel for calculations for up to a full week, then only occasionally in a project after that. in general, it's the same basic calculations with new data sets, since the same calculations need to be done for each building or project i work on.
3. there are anywhere from 3 or 4, and up to 20, spreadsheets that are shared among my coworkers for review and/or input into xcel2cad. Most of the time, they are stored on our server (which has automatic backups), and we usually make incremental copies to show our progress on calculations. when i need to direct other coworkers to my spreadsheets, i usually email or IM them a link to the directory. some coworkers attach them to an email, a practice discouraged since it usually creates several working copies of a calc.
4. some of the calculation variables are measured or quantified from engineering documents, and some are brought in from codes, standards, manufacturer's information, or other sources.
5. yes. i've never programmed macros or scripts myself, but current and previous coworkers have. I use their spreadsheets and change the data as I see fit. usually the script compares an input cell to a table of data copied into another worksheet in the file, and outputs the answer from the table. the table is input by hand into the worksheet from a code, standard, manufacturer's information, or other sources.
6. minimal. we have enough copies of calculations and backlog of experience that it would not severely hinder our business operations. Now, if a whole server nuked itself, or we lost computing functions altogether, that's a serious issue and will cause delays. Ultimately, data could be tabulated and calculated by hand, but that would significantly increase our working time.
7. =CONCATENATE(text1,text2...). Absolute and Relative cell values. cell borders and shading, and text coloring, assisting in the presentation of data. those are the ones that came first to mind, but if I thought about it I could probably come up with more.