Category Archives: Software

Importing 3D models into Oculus Home from Sketchfab

Oculus Home (not mine)

A couple of weeks ago, I posted a video about importing items from Sketchfab, a site that hosts 3D models people have created, into Oculus Home. I meant to put up a companion post to that video, but as sometimes happens I was busy, and so the post didn’t go up as quickly as I’d hoped. Luckily, I recently had a five-hour flight from the east coast with many delays due to weather, and was able to start writing it up, and now that I’m back home there’s some time (but not much) to power through and finish. If you just want to see the video, it’s at the end of this post and linked earlier in this paragraph.

For those not familiar, Oculus Home is a personal space, sort of like an apartment, that appears when you first boot up the Oculus Rift headset. For a long while, Oculus has provided a wide range of additional items you could use to decorate your space, everything from furniture to suits of armor to completely new environments such as a gothamesque city or outer space. Even with all of that, one of the main requests users had over at the Rift Suggestion Portal was to be able to create their own objects and import them into their virtual living space. Here’s a crappy picture of an earlier version of my Home, taken off YouTube from this article’s companion video.

Oculus Home

(Side note: In the video I discuss the texture-loading issues and choppy performance; however, a Dash update released a week later fixed it all. No more issues!)

About a month ago, Oculus delivered. You can now import items created in some of the virtual modeling programs available on the platform, such as Medium, as well as from the website Sketchfab, and it is the latter that really opens up the possibilities. Because people can upload models created using any tool, there is a wide variety available, although be warned that, as with anything the public can upload, some creations are of the NSFW variety. Also, some models must be bought, but there are many free ones as well.

Another nice thing about Sketchfab is that you can manipulate the objects in your browser before downloading, so you can get a very good idea of how they will look in your 3D environments. It also holds modelers to a standard: they can't cut corners on, say, the backs of things, and have to develop the whole model completely (modelers will sometimes skip hidden surfaces if they know an object will be placed against a wall, for example – in fact, games simply don't render what can't be seen, at both the object and world level!). As with anything, quality varies, although I was, for the most part, very pleased with what the site offered. Also be aware that not all models are available for download, such as the notable ‘Old Truck and Gas Pumps’ and ‘The Pumpkin House’ seen in the screenshot below, but most smaller, decoration-style models are.

Some Sketchfab models

Importing models is not an overly complicated process; however, there is an issue with scaling. Many objects you import will be far too big to fit in the space and will need to be scaled down. Because Oculus does not allow for scaling within Home itself, a third-party tool is needed to do the scaling, and there's no way to determine whether the object is properly scaled other than to import it and test it in the space. If it's too big or too small, you'll have to go through the process again. My hope is that Oculus will soon allow for scaling within the Home environment itself. Until then, we have to do it the hard way.

So, we will first find the object we wish to import, scale it, then import it.

In the video, I imported what looks like a file cabinet that would be found in post-apocalyptia (remember Three-Dog?) or a building from the zombie apocalypse. It adds a nice yet subtle, creepy flair.

I found the model by doing a search for free downloads, but you can find the specific one in the video by searching for “Rusty Metal Cabinet” or simply click here. I should mention that if you decide to do the search and see what else is available, ‘Cabinet’ is misspelled for this particular model on Sketchfab, but you’ll find it anyway.

Rusty Metal Cabinet on Sketchfab

Once you have found it, you can rotate it and view it from all angles right in your browser, and when you are ready to make it yours just click on the blue ‘Download’ link under the author’s name, right under the model itself.

Downloading is the big catch here: When the download dialog appears, you want to be sure to download the ‘Autoconverted format.’ That glTF format is an open standard for exchanging 3D models between applications other than those in which they were initially created. What that means is that it can be converted to the format we need for importing into Oculus Home, while the ‘Original Format’ will be whatever the creation tool itself uses – Blender, 3ds Max, or whatever – and that's not what we want.

Be sure to download the ‘Autoconverted format’ option

Once downloaded, extract the zipped file so you have the folder with the associated files inside it. I use 7-Zip, which for me is the easiest, provides the most control and options, and integrates nicely into the context menu (their website, however, leaves much to be desired). However you extract it, once you have the resulting folder, open it up so you can see the ‘Textures’ folder, along with ‘Scene.bin’ and ‘Scene.gltf.’ Then, bring up https://glb-packer.glitch.me/ and drag all three of those onto the page: the folder and the two Scene files. You don't have to do anything else; it will automatically convert everything and then download a file called ‘glb.out’ that has all you need – a binary packing of the original files and folder into a single file.

It couldn’t be easier. Seriously, this is the whole page, and if converting to GLB there’s nothing to do!

GLB packer. This is it.
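
If you'd rather not ship your model off to a website, the same packing can be done locally. Here's a minimal sketch assuming the third-party pygltflib Python package (pip install pygltflib) and its GLTF2 / BufferFormat API; depending on the model, textures referenced as separate files may still need embedding.

```python
# A local alternative to glb-packer, assuming the third-party pygltflib
# package and its GLTF2 / BufferFormat API.
from pygltflib import GLTF2, BufferFormat

gltf = GLTF2().load("scene.gltf")              # picks up scene.bin alongside it
gltf.convert_buffers(BufferFormat.BINARYBLOB)  # fold external buffers into one binary blob
gltf.save_binary("out.glb")                    # a single-file GLB, like glb-packer produces
```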

Of course, that easy step shouldn't lull you into a false sense of complacency, because now it gets cumbersome. You should have, in your Documents folder, an Oculus Home folder containing an ‘_Import’ folder; you can make the _Import folder if it's not there, but don't forget the underscore! Rename your ‘glb.out’ file to something that makes sense – in the video I renamed it to ‘RustyMetalCabinet’ – and drag it into the ‘_Import’ folder.
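
If you end up importing a lot of models, a throwaway script can handle the copying. A minimal sketch, assuming the default Documents location described above:

```python
# Copies the packed file into Oculus Home's import folder, creating
# '_Import' (underscore included!) if needed. Adjust the paths if your
# Documents folder lives elsewhere.
from pathlib import Path
import shutil

packed = Path("glb.out")  # the file glb-packer downloaded
imports = Path.home() / "Documents" / "Oculus Home" / "_Import"
imports.mkdir(parents=True, exist_ok=True)
shutil.copy(packed, imports / "RustyMetalCabinet")  # give it a sensible name
```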

For now, that’s it! Go into home, check in the ‘My Imports’ tab of your inventory, and you will see it there just as you named it. Bring it into home, and…it will likely be way, WAY too big. And that means it’s scaling time, which is easily the most tedious part of the process; all you can do is scale it, hope it’s right, then re-import it back into Home. If it still isn’t the right size, you’ll need to do it again. Come on Oculus, let us scale in Home!

Anyway, to scale the object, you'll need to bring up the threejs.org/editor site, drag your .out file onto it, then scale by clicking and sliding your mouse over the ‘Scale’ numbers on the right-hand side, as you can see in the video and image below, then select ‘Export .GLB’ from the file menu. The problem is, there's no real way to know how the thing scales relative to your particular Home: it could be too big or too small, and all you can do is re-import it and check it, then do the scaling over again if it doesn't fit. For example, I ended up making a teensy file cabinet before I got the size right. I kept it anyway, though; it was neat, in its way.

Adjusting scale in the threejs editor
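
For the curious, the scaling step can also be scripted rather than done in the editor. A rough sketch, again assuming the pygltflib package; it doesn't solve the real problem, since you still have to guess the factor, test in Home, and repeat:

```python
# Apply a uniform scale factor to every root node of the default scene.
# 'model.glb' is a hypothetical filename (the packed file, given a .glb
# extension); node.scale is glTF's per-node scale, None meaning [1, 1, 1].
from pygltflib import GLTF2

FACTOR = 0.01                        # guess, test in Home, adjust, repeat

gltf = GLTF2().load("model.glb")
scene = gltf.scenes[gltf.scene or 0]
for index in scene.nodes:            # scale each root node of the scene
    node = gltf.nodes[index]
    current = node.scale or [1.0, 1.0, 1.0]
    node.scale = [component * FACTOR for component in current]
gltf.save_binary("model-scaled.glb")
```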

Then, once you have found your scale, you're all set! Another neat thing about doing this is that when imported models load into your Home on startup, they do this futuristic materialization into existence, like they're being 3D-printed by light. It's a really nifty effect.

So there you go! Have fun – there is a huge number of models to choose from, and the recent Oculus update has fixed most of the performance issues, so the possibilities are almost endless! If we can just get scaling, perhaps even direct import, that will be the icing on the cake, but until then, it's still pretty fun.

Here’s the video:

A win for digital preservation

As many of you know, I am a big fan of older software and systems, even maintaining a small collection of vintage computer systems and software. Indeed, it is the software that is important, as hardware without software is just hardware, nothing more. Every piece of hardware needs a killer app to run on it, or people don’t buy the hardware in the first place. It’s why game consoles need to have a robust software lineup available at launch, or else risk being left behind for the entire generation.

Thing is, preserving physical copies of software is easy – procedurally, anyway. You have the physical software, and you digitize it while preserving the physical copy itself, and it could be a permanent record. Of course, there are issues with maintaining the software in a runnable state, both for the digital version (are there suitable emulators available?) and clearly for the physical version, which is subject to all kinds of risks, including environmental and technical. There is the issue of bit rot, an ill-defined term that generally refers either to a physical medium becoming unreadable because, as technology advances, the hardware used to read it goes obsolete, or to the general degradation of a physical medium over time due to the aforementioned environmental or other factors.

Even with physical copies, the grey-area legality of emulators is always front and center, with the real focus being on ROMs. Nintendo recently shut down long-time and much-loved ROM site EmuParadise simply by threatening legal action. Curious, considering EmuParadise has been around so long, but now Nintendo wants to start monetizing its older IPs, and EP might put a bee in that bonnet. Also, ‘shut down’ isn't entirely true: the site is still up, but it no longer hosts ROMs, and even though there is still a lot of information there, the loss of the actual games is a big problem. I have to go further out of my way to find ROMs, even for games whose companies, platforms, and sadly even developers are long gone.

EmuParadise – “This Game is Unavailable”

So why does this all come up now? Because we are in an age now where much of the software, data, and information we have is in digital form, not the physical form of old, and this leads to huge problems for historians and archivists such as myself. If something is only available digitally, when the storefront or host on which it is available goes down, how will we maintain an archived copy for future generations to see and experience? I have a Steam library with, at the time of this writing, over 300 titles that are all digital. There is no physical copy. So, just for argument's sake, let's say Steam closes shop. What happens to all those games? Those VR titles? That one DAW? Will they just vanish into the ether? Steam claims if they ever shut down, we'll be able to download them all, DRM free. But will we? As the best answer in that earlier link states, Steam's EULA, which all users are required to accept, states that their games are licensed, not sold. That may seem strange, but it has actually been that way for a long time. Even with physical games, you don't actually own them, and what you can do with them outside of simply playing them is exceptionally limited.

So how do we preserve these digital-only games? Whether it's a tiny development house's app or a huge AAA title developed by hundreds of people: if the company shuts down, or the people move on, or whatever happens, how will we access those games in the future – 10, 20, 50 years from now?

This becomes even more of an issue when we have games with a back-end, or a server-side component. The obvious example is MMOs such as Everquest and World of Warcraft. What happens when their servers shut down and the game can no longer be played? How do we continue to experience it, even if for research or historical purposes? If the server code is gone, having a local copy of the game does us no good. As many MMOs and other online games continue to shut down, the fear is they will simply fade into nothingness, as if they never existed, but their preservation as a part of the history of gaming and computing is important.

Some brave souls have tried running what are called ‘private servers’ – server code not run, supported, or authorized by the developing company. Again, the most common examples are the World of Warcraft private servers you can join, which run not the current code but earlier versions that people often prefer, and that brings up another interesting issue regarding the evolution of these digital worlds: Even if a game is still going strong, as WoW is, how do we accommodate those who prefer an earlier version of the game – in this case known as vanilla, meaning with the original mechanics, structure, narrative, and other gameplay elements that were present at launch but have since been designed out of the game? Vanilla WoW is not officially offered, although that will apparently be changing, and many private servers are immediately shut down via Cease and Desist orders from Blizzard, with one being shut down the day it came online after two years of development. Others manage to hang on for a while.

Vanilla WoW map – remember this place?

At the risk of going off on too much of a tangent, the reason I'm talking about this, and why it has taken me six paragraphs to get to my point, is that there has been a semi-wonderful ruling from the Librarian of Congress that essentially maintains an already-written rule: if a legacy game is simply checking an authentication server before it will run, it's legal to crack that game and bypass the check procedure.

Additionally, while that small thing includes the ability to allow legacy server code to run and be made accessible, it specifically cannot be done by private citizens – only by a small group of archivists for scholarly / scientific / historic / other related reasons – and most importantly, the server code has to be obtained legally. That may be the biggest hurdle of all, as it is well known some companies sit on games and IP long after their market value has faded, preventing them from being released or even reimagined by the public. I'm hoping this is the beginning of a renewed push for legal support to archive and utilize legacy code, including legacy server code, to preserve software titles of all types, not just games (and the new ruling includes everything), along with the interim forms they took throughout the course of their development.

One additional note that surprises me even to this day: There is one online game that, while very popular, was eventually shut down by its parent company. That game was Toontown Online, and it was developed by Disney, a company well known for aggressively protecting its IP. So it is especially surprising that when an enterprising teenager who missed the world they had created decided to run a free private server, even renaming it Toontown Rewritten, Disney let it go! It has been up and running for a good number of years, has seen many improvements, updates, and additions (its terrible log-in client aside), and runs quite well. Here's a screenshot of its current state, and as you can see it is still popular.

Toontown Rewritten

It still uses Disney-owned names and imagery, and has the unmistakable Disney aesthetic – there's even a Toontown in Disneyland! If you want to see the proper way to handle this whole issue: once your game is done and abandoned, let an enthusiastic team who is passionate about the project and treats it with respect take over. It generates goodwill for you, a positive experience for them, and ensures your creation will continue to live on.

Just like old times: Running console emulators and PLEX on the virtual flat screen TV in Oculus Home

A couple of months ago, Oculus introduced a few flat-panel TVs (and a desktop monitor) into Oculus Home. That would have been fine in and of itself; however, these TVs can have ‘panels’ embedded into them. Panels are essentially windows from your desktop, so if you had Word or Plex or chat or even a game session open, you could have it appear in a TV or monitor in your virtual home.

Naturally, my first instinct was to fire up one of my retro-console emulators and see if I could have it run by projecting it through one of the TVs in the virtual space. It’s almost a meta thing: Playing a classic console on a flat-screen TV in a virtual environment via VR. If it worked, since Oculus allows you to place as many of these as you wish, I could set up the custom gaming room of my dreams. I am happy to report it worked perfectly, and I am in the process of designing that gameroom right now! I’m thinking futuristic city as the backdrop, but we’ll see.

PLEX, which is movie-playing and streaming software for your personal media collection, also ran beautifully with no lag, and I’ll talk a little about that towards the end.

Before I get to the results, let's talk about emulators. Emulators trick software into thinking it's running on the console being emulated, and I mean that they really trick it. Whatever shortcomings the console itself had, so will the emulator (and possibly a few more, but we'll get to that). If a bug appeared in a game, or the console just wasn't quite fast enough to push a game, that will be replicated during emulation. Atari 2600 emulators will even show racing-the-beam errors, in which the next line of video won't be drawn if the emulator is still carrying out logic operations. In the following image, which shows The Empire Strikes Back for the Atari 2600 running in the Stella emulator (Stella was the code name for the console during development), if you look closely you'll see black lines down the left-hand side of the image: those are racing-the-beam errors, even though it's not a CRT and not even a console! The point is, when you're emulating, the system is the console.

Racing the Beam errors in the Stella emulator

Not only that, the games are actually the games, not some reprogrammed nonsense. An ISO, for example, is an exact bit-level copy of digital media, so someone will put the original game disc into their PC's optical drive, or sometimes use the console itself with special connections, and use special software to turn that disc or cart into a file. Other times, the ROM chip itself will have to be dumped using an EPROM programmer or reader.
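
That bit-level fidelity is also what makes dumps verifiable: two correct dumps of the same disc are byte-identical, so their hashes match. Here's a quick sketch for checking a dump against a published checksum (preservation projects maintain lists of known-good hashes):

```python
# Stream a disc or cart image through SHA-1 and print the digest for
# comparison against a known-good checksum. Standard library only.
import hashlib

def sha1_of(path, chunk_size=1 << 20):
    digest = hashlib.sha1()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):  # read in 1 MB chunks
            digest.update(chunk)
    return digest.hexdigest()

print(sha1_of("game.iso"))  # compare to the published hash for your dump
```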

The point is, with an emulator, you’re getting the actual console experience with the actual games and all their flaws or lack thereof, not some hokey recreation. It also means some games may not work all that well, or at all, but that’s rare.

Generally, emulators are not terribly difficult to set up and get running. Each has its own quirks, but once you start experimenting you’ll be hooked, not to mention surprised at (generally) how well they run.

I use many emulators, and I'll give you a crash course on setting them up. All you really need to know is that you need to specify a folder that contains your games, and you can call the folder anything you want: whenever I set one up I'll call it ‘Games,’ ‘ROMs,’ or ‘ISOs,’ but you can use anything. Speaking of the games, those files can come in several forms: some emulators allow for zip files in the games folder, others will require an .iso, .bin, or possibly another specific file type such as .gcn, which is for the Nintendo GameCube. Some can use whatever you throw at them. Additionally, for the later consoles, you'll need a BIOS file in your games folder as well, and they can generally be found with a low-effort web search, as can the games themselves.

Here’s an example of how the directories work, and these are almost always assigned during setup, although you can certainly set it through the options as well. For clarification, I run mine from a network drive, which is nice because I can save a game and then pick it up again from any PC in the house, or on the planet. Speaking of game saves, the emulators take care of setting up memory cards for saving games as well, however you can also create ‘Save States’ at any point in a game regardless of how its save structure is set up. So if you have to leave or otherwise shut down the emulator but haven’t reached a checkpoint, you create a save state, then load that instead of the save from within the game itself. You’re essentially saving the state of the emulator, not the game.

Here’s an example of setting up the games directory: In the image below, in image ‘1’ in the upper left, you can see I have an ‘Emulators’ folder, and after opening that, in image ‘2,’ top right, I am selecting the PCSX2 folder, which is a PlayStation2 emulator. Then, in image ‘3,’ lower left, I have created a folder called ‘Games’ where I keep all game files, and finally the last image shows the .iso and .bin files that represent that games I have for that emulator. On the other hand, my Gamecube / Wii emulator, Dolphin, has a folder called ISOs where I keep all my games, even though not all of them are actually .isos.

Directory structure for the PCSX2 emulator
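
If you want to sanity-check a layout like this, a few lines of Python will do it. The root path and folder names below are examples mirroring my setup; substitute your own (a local drive works just as well as a network one):

```python
# List what each emulator's games folder can see, following the
# one-folder-per-emulator layout described above.
from pathlib import Path

ROOT = Path(r"\\NAS\Emulators")                     # hypothetical network root
GAME_FOLDERS = {"PCSX2": "Games", "Dolphin": "ISOs"}  # per-emulator game folders
EXTENSIONS = {".iso", ".bin", ".gcn"}

for emulator, folder_name in GAME_FOLDERS.items():
    games = ROOT / emulator / folder_name
    found = sorted(p.name for p in games.iterdir() if p.suffix.lower() in EXTENSIONS)
    print(f"{emulator}: {len(found)} game files in {games}")
```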

The next big thing is that I have an Xbox One controller connected to my PC, and any controller you use has to be configured. The initial controller setup, although straightforward, can be moderately time-consuming, but you should only have to do it once. For most emulators, you'll need to go through and indicate which button on the controller does what, and you do that by clicking on each function in the emulator, then pressing the button / pulling the trigger you'd like to assign. Below are images of the PCSX2 controller configuration screen, and underneath that the Dolphin controller configuration screen. Once that's done, controls should be all set. Ultimately, it's a really nice way of doing it, because you can set the controllers any weird, wacky way you'd like.

PCSX2 Controller Configuration dialog

Dolphin Controller Setup

The only other major issue you are likely to run into is the graphical settings. Older consoles weren't designed with 4K TVs in mind; many never even considered widescreen! Although most games will work fine, each emulator will have its own peculiarities when it comes to this, and they all offer EXTENSIVE graphics customization. PCSX2, for example, allows you to use OpenGL, Direct3D9, or Direct3D11, and it's fascinating how a game won't run under DX9 but will run perfectly under DX11, or vice versa – I have seen many variations on that. For that particular emulator, I also set resolution to at least 2x native; then it will run surprisingly well even on a widescreen 4K TV. And of course, it allows you to customize graphics for the emulator in general, as well as make specific tweaks for the renderer you choose.

PCSX2 configuration menu

PCSX2 renderer selection and tweaking

When it comes to actually selecting a game, some emulators are more user-friendly than others. Prettier, I guess I should say. PCSX2 only offers up a straight file selection window, while Dolphin and ReDream, a Dreamcast emulator, provide lovely selection menus. Being an HCI person I prefer that, but the PCSX2 barebones method works just as well.

ReDream game browser

Dolphin game browser

So now that we have the details out of the way, what about running them on the TVs in Oculus Home? I'm glad to report it works perfectly! There is, however, one big caveat: you can choose to embed the emulator window that's displaying the game, but the window won't be listed as a panel if it's fullscreen, so you'd have to window the emulator on the desktop, then embed the window, then maximize it on the desktop. The better choice is to fullscreen the emulator window and then embed the desktop itself. If you do that, it will work perfectly, with no lag (either visual or input), no jitter, no performance hit. For PLEX and even a couple of the emulators, Oculus Home won't recognize them when they're fullscreen anyway, so you'll have to embed the desktop no matter what: it's just a better all-around option.

Panel selection

It also works for running MAME through the arcade cabinet, as seen below:

Running MAME in the arcade cabinet

If you’re interested in running PLEX, I have a separate video series on ripping videos and setting up PLEX (part 1 (Ripping), part 2 (PLEX)), and while it’s also not difficult, it’s beyond the scope of this post. Even so, the process of embedding is the same, especially since, as mentioned earlier, PLEX doesn’t show up in the panel selection menu anyhow.

Here’s the video showing four different emulators running on the Virtual TV in Oculus Home, as well as PLEX running a streaming rip of Coco from a network drive, and there is also a wall-mounted TV option along with a computer monitor. It all worked incredibly well, much better than expected.

A new exploit for Intel processors?

Intel Coffee Lake

Whether or not this turns out to be a big deal is yet to be seen, but researchers at UC Riverside, Binghamton University, Carnegie Mellon Qatar, and the College of William and Mary have discovered a new possible exploit in certain Intel processors that they have dubbed BranchScope – that link will take you straight to the research paper itself. The research is published in the Proceedings of the Twenty-Third International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS).

This is a new form of speculative (predictive) execution exploit, in which the CPU attempts to predict what will happen as the result of a particular instruction, then branches based off of that guess, then guesses what will happen as a result of that instruction, and so on. It's the same manner in which a computerized chess game plays chess: every time you make a move, the game will play out thousands of games as a result, trying to guess the moves you will make and the moves it can make in response, and decide on an appropriate next move accordingly. It doesn't make this guess blind, either: in the processor is a component of the Branch Prediction Unit, or BPU, known as the Pattern History Table (PHT), which stores the results of previous operations, and each time a particular branch is taken, the ‘value,’ for lack of a better term, of that branch is increased or decreased. That allows the BPU to be more accurate in its future predictions. (Spectre attacked the other part of the BPU, the Branch Target Buffer, which focuses on the target of a branch as opposed to the direction it took.)
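
To make the PHT mechanics concrete, here's a toy simulation of a single entry as a two-bit saturating counter – the textbook design, and not a claim about Intel's actual hardware:

```python
# Toy model of one Pattern History Table entry: a two-bit saturating
# counter nudged toward 'taken' or 'not taken' by each branch outcome.
class PHTEntry:
    def __init__(self):
        self.counter = 2          # 0-1 predict not-taken, 2-3 predict taken

    def predict(self) -> bool:
        return self.counter >= 2

    def update(self, taken: bool):
        if taken:
            self.counter = min(3, self.counter + 1)
        else:
            self.counter = max(0, self.counter - 1)

entry = PHTEntry()
for outcome in (True, True, True, False, True):   # a mostly-taken branch
    print(f"predicted {entry.predict()}, actually {outcome}")
    entry.update(outcome)
# A few 'taken' outcomes prime the entry to keep predicting 'taken' --
# deliberately priming entries like this is exactly what BranchScope does.
```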

The exploit, therefore, is to set up selected branches that will modify the PHT and run them so that the PHT will always select one branch over another in a given situation, as well as to monitor that the changes to the PHT have actually taken place. These attacks can be used to ensure a particular path is taken when speculative execution happens, and that can be used to divulge information that is otherwise inaccessible, even to the OS, such as segments of cryptographic keys, or even to provide access to data protected by Software Guard Extensions (SGX), an Intel feature that allows deft software developers to place sensitive or critical data in protected areas of memory that should, ideally, be accessible to nothing besides the program itself. The whole purpose of SGX is to prevent bad actors from accessing that data.

Both the Spectre vulnerability and BranchScope remind me of the more widely known, and difficult to pull off, NOP (No-Operation) slide (sometimes ‘sled’ or ‘ramp’) type of attack, in which an attacker steers execution into a long run of do-nothing instructions so that, when execution lands there, it slides down into a specific portion of memory where malicious code has already been installed and will then be run. These often fail, by the way; in fact, the section of memory that holds the malware will often be padded in front and behind so that the target region is bigger and the slide hits by luck – it's a big shot in the dark. The hex value of the NOP instruction on x86 is 0x90, and if you look at the image below, which traces CPU instruction executions, you can actually see the slide happening as the series of ‘90s’ at the top of the lower-left and lower-right windows (source: samsclass).

NOP Slide (Lower left). Source: samsclass
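
A quick, entirely made-up simulation shows why that padding matters: the wider the sled, the better the odds that a blind jump lands somewhere inside it.

```python
# Back-of-the-envelope illustration: a blind jump 'hits' if it lands
# anywhere in the sled, since execution slides down the 0x90s into the
# payload. All the numbers here are invented for illustration.
import random

REGION = 1_000_000                      # plausible landing addresses
TRIALS = 100_000

for sled_bytes in (16, 4_096, 65_536):
    hits = sum(random.randrange(REGION) < sled_bytes for _ in range(TRIALS))
    print(f"{sled_bytes:>6}-byte sled -> ~{100 * hits / TRIALS:.2f}% hit rate")
```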

BranchScope is considered a side-channel attack because it leaks information indirectly, through the observable state of shared hardware – here, the branch predictor – rather than by breaking into the software directly. Despite the similar-sounding name, it's unrelated to side-loading, the practice of installing non-approved, outdated, or unknown-source OSs or apps on a smartphone.

Speculative execution is complex, the exploits are complex, and as stated earlier, whether or not we see them in the wild is a completely separate issue; the payoff would really have to be worth the effort, and I don't see that being the case unless the exploit was adapted to pair with, say, a trojan-horse-style attack – but even then the returned data would be useless without significant analysis. Knock on wood, but I don't see this as a common attack vector in the near future.

Why your screen just went black and white: Windows 10 color settings and color-blindness

UPDATE: I’m leaving the original post up, however the latest Windows update changes some of this around. The ‘Windows key-ctrl-c’ shortcut for setting filters is no longer enabled by default, and while you can still use the ‘right-click on desktop and select Personalize’ technique, in settings you now have to choose ‘Ease of Access’ to access these settings, which does make more sense. Additionally, high-contrast settings are where they’ve always been, but black and white / color-blind settings are now under the ‘Color filters’ option.

Here’s the new screen in settings, and the original post follows:

New Windows 10 color and filter settings

If you’re using Windows 10 and looking for a fix as to why your screen has suddenly gone black and white, or if it hasn’t and you’re looking simply to learn something new and interesting and fun (and what can be used as a cruel trick on your friends / acquaintances / enemies, but please don’t do that), try the following: Hold down the Windows key, then Control, then C. If your screen was black and white, problem solved! If it wasn’t, well, now it is. Do the combo again and you’ll be good as new. Spoiler: You can choose other visual settings for that shortcut; black and white is simply the default. More on that in a moment, and here’s the companion video I recorded for this post.

I’d also like to briefly talk about the importance of color. It is a vital tool in communicating information, although it should never be used as the sole communicator: if someone can’t distinguish red, a red sign will serve no purpose if there isn’t text that also communicates what’s up. But there is no such thing as a green stop sign or a red ‘everything’s ok’ light, and there is a reason for that. For you programmers, you know that you tweak the colors of your IDE to enhance readability and comfort. Never underestimate the purpose of color in design. Also, my Ph.D. was in this very subject so I’m especially passionate about it, as you can tell.

Green stop sign

I don’t think so

But black and white? You may be wondering why something like this is even a feature. The reason is readability and accessibility, a primary focus of HCI, interaction design, and the larger field of human factors, and remember it’s not the only setting – it’s just the default. The point is to make the screen easier to see for those with visual impairments, particularly color blindness, but these adjustments can also be used by those who simply find stock Windows color settings to be too much. And let’s face it – sometimes they can really be too much.

It reminds me of the old Atari 2600 and its B&W / Color switch, which is, as far as I can tell, the first time a dedicated option was provided to select between one or the other. This wasn't an accessibility feature, however; it existed because in 1977 black-and-white TVs were still quite prevalent, and the black-and-white setting would adjust the contrast by switching the palettes used, making games easier to see. Without that – in other words, if it just switched to standard black and white with no consideration given to the original colors – on-screen objects that were easily distinguishable in color might be very difficult to distinguish in shades of gray. So the idea was the same, but the reason was quite different.

The image below shows the B&W / Color switch on one of our 2600 consoles in the gameroom here at UCI, while the other two screens show Slot Machine for the 2600, running in the Stella emulator, and how it displays in color and in switched black and white.

Atari 2600 Slot Machine in color and black and white

Yet that was back in 1977. With today's monitors having HD and 4K capabilities, why on earth would we need a black-and-white setting? There are several reasons, the most obvious being that black-and-white photography, which adds an ethereal, ageless quality to a photo and, in my opinion, often requires the viewer to focus more on the content and composition of the image, is still very common today. However, you can't simply switch from color to black and white, for the same reasons we had a switch on the old Atari: what works well and is easily distinguishable in color may blend into a single shade in black and white. Unless you are shooting in black and white natively (and if you are, you should use the RAW setting to get as much contrast and separation as possible), you have to convert an image from color to black and white, manually adjusting for all those settings that would have been handled automatically otherwise. That's one use for the setting in Windows. Of course, it's also possible that a user might have complete color-blindness, and even though that is extremely rare, it does happen: estimates for men range between 1:30,000 and 1:100,000 depending on type.
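
A tiny example makes the problem concrete. Using the common Rec. 601 luma weights (the example colors are my own picks), two obviously different colors can collapse to nearly the same gray:

```python
# Why a naive color-to-gray switch can erase distinctions: a saturated
# red and a dark green land on almost the same luminance value.
def luma(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b  # Rec. 601 weights

red, green = (200, 0, 0), (0, 102, 0)
print(luma(*red))    # 59.8
print(luma(*green))  # ~59.87 -- nearly indistinguishable in black and white
```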

There is much more to this feature than simply switching between color and shades of gray, however. That capability in Windows is simply foreshadowing for much more important color-based adjustments that can be made. To see what they are, go into Settings, select ‘Colors’ (alternatively: right-click on your desktop and select ‘Personalize’), and scroll down until you see ‘High contrast settings.’ Click that, and whatever is selected under the heading ‘Choose a filter’ is what Windows will switch to when the shortcut Windows key – Ctrl – C is pressed. You can see the choices below.

High contrast settings

Color filters

So what do these do? The first three should be obvious: ‘Grayscale’ converts to shades of gray (black and white, informally), and ‘Invert’ inverts the colors on the screen, although what ‘Invert’ actually means can be complicated. Factors to consider include whether you are using the RGB or CMYK color standard, the former being additive and the latter subtractive, and what you're trying to invert. Normally, when colors are inverted they are switched to their complementary colors, which are those opposite each other on the color wheel and offer the highest contrast to each other, although that high contrast can be a nightmare for text (red and green are complementary on a CMY color wheel, for example), so always consider the application and the color scheme you're using.
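
For the simple RGB case, inversion is trivial to express, and doing so shows where the complements come from:

```python
# In additive RGB terms, inversion is per-channel subtraction from the
# maximum, which yields the color's complement.
def invert(rgb):
    return tuple(255 - channel for channel in rgb)

print(invert((255, 0, 0)))    # (0, 255, 255): red -> cyan
print(invert((30, 60, 200)))  # (225, 195, 55): blue-ish -> yellow-ish
```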

I’ll be using UCI’s EEE page as I see it to show what these selections do, since it has an interesting combination of colors including various shades of blue and yellow which are sort-of-but-not-quite complimentary, my avatar has red hair, the ‘Learn about’ box in the upper right is green, and if you look closely you can see the ‘MON’ in the lower left is a shade of orange.

EEE in its natural state

EEE with Windows color filter set to ‘Grayscale’

EEE with Windows color filter set to ‘Grayscale Inverted’

EEE with Windows color filter set to ‘Inverted’

As you can see, these allow a user to change the color scheme to fit their particular needs for viewing web pages or images or whatever; however, it is the next three options that deserve our attention, as they are a cornerstone of accessibility and design in the user interface and HCI in general.

If you look back at the earlier image showing the filter options, you'll notice the last three are ‘Deuteranopia,’ ‘Protanopia,’ and ‘Tritanopia.’ These are types of color blindness – a term that I really don't like even though it's accurate – that affect someone's ability to process colors. We also need to distinguish between some terms here, as they are often, at least in my experience, misused. All of these conditions can manifest in two primary ways: -opia and -omaly. The former, which are the ones indicated in Windows' color settings, mean you are missing the related cones altogether. In other words, Protanopia is a complete lack of red cones (known as L-cones, for long wavelength), Deuteranopia is a complete lack of green cones (known as M-cones, for medium wavelength), and Tritanopia is a lack of blue cones (known as S-cones, for short wavelength). Tritanopia is classed as a form of blue-yellow color blindness, although Tritanopes don't actually confuse those two colors, so it's not an appropriate name, while the former two are versions of red-green color blindness.

In contrast to the previous circumstances, in which the cones are entirely absent, it's also possible that the cones are present but weak to a particular color stimulus; those cases use the ‘-omaly’ suffix. So Protanomaly is a weakness to red, Deuteranomaly is a weakness to green, and Tritanomaly is a weakness to blue (and very rare: about 1:1,000,000). The cones are not missing as they are in the previous cases; they are simply weak or mutated. Therefore, one way to address it is to increase the amount or saturation of the color to which a person is weak, although this in itself is not a complete solution. Additionally, how these manifest in someone's vision can be wide-ranging, from little impact on color vision to something very close to the -opia variant.
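
As a toy illustration of that 'boost the weak color' idea – and only a toy, since real color-blindness filters are built on perceptual models of the cone responses, not naive channel scaling:

```python
# Naive per-channel boost as a sketch of compensating for an -omaly
# condition; real accessibility filters use perceptual color models.
def boost_channel(rgb, channel, factor=1.3):
    values = list(rgb)
    values[channel] = min(255, round(values[channel] * factor))
    return tuple(values)

print(boost_channel((120, 80, 40), channel=0))  # nudge red up, e.g. for protanomaly
```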

These types of color deficiencies can cause real problems in distinguishing elements in any visual field, especially a computer screen. Research into accessibility has been ongoing for many years, and guidelines for the use of color are in place at all the large companies, such as Microsoft, Apple, and Google, and all the way up to the U.S. Government (Usability.gov is overseen by the Department of Health and Human Services, by the way).

So in returning to the color / filter settings in Windows 10, I can’t say with certainty whether Microsoft actually means, for example, Deuteranopia, as indicated, or Deuteranomaly, or is using it as an umbrella term. They aren’t the same, and can only be addressed as such through design to a certain, limited extent, but there aren’t separate settings so…I don’t know.

When switching among these filters on EEE, there was no impact as there is no red or green, however there was a noticeable effect when applying the ‘Tritanopia’ setting, as that refers to an absence of blue cones. As you can see, the yellow sections were switched to green, which makes sense as Tritanopes have generally normal red-green vision.

EEE with Windows color filter set to ‘Tritanope’

For the Protanopia setting, a good example to use instead of EEE is IS301.com itself: because this condition involves insensitivity to red light, and IS301 has a strong red background, the setting reduced the saturation of red in all on-screen elements for a noticeable change.

IS301.com with Windows color filter set to ‘Protanope’

When set to ‘Deuteranope,’ the only change was an increase in saturation of the red elements of the screen, but nothing dramatic.

I also want to mention that you can set up your own high-contrast color themes, tweaking the color for every aspect of the display so it fits your needs perfectly; there are a lot of options for configuration.

Prebuilt but editable contrast themes

On the other hand, apps and programs and everything else have to be designed with all this in mind, or else they will be negatively impacted by your adjustments, and that points to another problem: Windows will natively adapt to your specification, but that doesn't mean any of the programs you run under Windows will. That's something that needs to be enforced more clearly and strictly in Microsoft's design guidelines.

So what was a simple fix for a black-and-white screen turns into a deep dive on accessibility and color blindness – I love it when that kind of thing happens. If you just wanted to fix your screen, then you're not reading this anyway, but we really do need to be aware, especially software and interface designers and developers, that for usability and accessibility these options and many others need to be available, so everyone can benefit from the functionality your software brings.

On a related side note for those who may be reading and are living with color-blindness, I discovered that there are glasses that claim to fix the problem. Apparently they create a bespoke filter on the lens for your particular condition and severity, and can bring you to almost normal. I'm not affiliated with the company and have no idea how well they work, but for the cost they'd better work pretty well. If you're interested, the company is called EnChroma. If anyone tries them, let me know how it goes!

Zoho Writer: An excellent online alternative

I recently gave my opinions on OpenOffice.org’s word processing software Writer. My conclusion was that while they have certainly made improvements over the last twenty years, it still just doesn’t hold up to Microsoft’s behemoth package, either online or off. However, that doesn’t mean there isn’t an alternative for those who seek: I also mentioned in that post I would be reviewing Zoho Writer, the online word processor from (obviously) Zoho, which turns out to be a very competent competitor to Word. It’s not the perfect replacement, but it comes pretty close for everyday use, and then some.

Zoho offers many applications and services, all of which are relatively competent. And I'm not talking just basic office suite stuff, either; they have everything from CRM platforms to retail inventory management. For an online service, they have a surprising breadth of applications on offer.

Zoho Apps

As you can see, there is a lot there. For this review I'll be focusing on their word processor Writer, which I feel is a highlight of their office productivity suite, but it's important to note the range of capabilities they have, especially as compared to other online options. I also want to mention that their PowerPoint equivalent, Show, is absolutely fantastic as well, while their version of Excel, Sheet, is also very good. It doesn't quite live up to the rest of the suite in terms of functionality or design, but that's not to say it's not good – it is. It does have significant room for improvement, though, with a less involved and less usable interface: sparsely populated toolbars, buried commands, and no sidebar, which matters for a program as powerful as a spreadsheet. Indeed, it's the one instance in which I can say OpenOffice's alternative, Calc, is the better option, with more features, a more familiar interface, and better design all around.

That being said, Zoho is the superior option for everything else, and OpenOffice doesn't provide any enterprise functionality or features beyond the basic office suite anyway. Here we'll be talking about Writer – a name shared with the OpenOffice equivalent, so don't mix them up! – and it will hopefully give a good impression of how it all works.

I threw together a nonsense document to mouse around with, and below you can see the spell check, which gives the initial suggestion as well as alternates, which is nice, although it doesn't offer meanings / definitions as Word does. You can also see the Format menu in the sidebar on the left with all the functionality you would expect, including Cut / Paste / Format Painter / etc., as well as standard font and paragraph formatting options. Two additional features I think are very useful: the first is the Quick Text option, which allows you to specify a particular piece of text that can be inserted with the click of a button or a shortcut key – very good if you frequently use the same phrase or sentence in a document. The other is evidenced by the two very faint boxes you can see in the left margin: the plus box and the text box. See those? You might have to look closely. The box with the plus is an insert menu, and the text box opens a formatting menu. They follow your insertion point automatically, and they turned out to be quite a convenience.

A variety of page layout templates comes built in, with more available for download, and they're very easy to implement.

Zoho Writer format menu and spell check

Page Layout Templates

Paragraph Options

Although this document is hardly complex, inserting an image and a table is something that can trip up even stalwart word processors, and that was true here as well. Inserting the image was easy and worked fine; however, arranging it for word wrap was more problematic. Smaller images had much more layout success.

Inserting a table was also quite easy, and I especially like the live preview it provided as I hovered the mouse over the design options, all of which were neatly contained in the Design tab of the table options window. This is very similar to how Word will open custom contextual tabs that provide additional functionality depending on the specific element you’re working with.

Manipulating the table was much easier than manipulating the picture, which wanted to jump around the page, and even manipulating individual cells was very smooth; everything from changing their dimensions to their color to their alignment and everything in between worked perfectly the first time.

Image and table (with table options) in Zoho Writer

Now is when I need to mention the biggest advantage of (Zoho) Writer over (OpenOffice) Writer: .docx compatibility. OO Writer doesn’t have it, and that’s a huge knock, a terminal knock, actually, against it. With Zoho Writer, when saving a document, you have multiple options: You can save to a cloud service, and Zoho has collaborated with many as you can see (although I couldn’t help but notice the logo for OneDrive is oddly blurry, while the others are very clear), or you can download the file as a Microsoft Word .docx, or even as an .odt file, which is the open standard used by OpenOffice itself.

Writer’s Other Drives options

Zoho Writer Save dialog and File menu

As one would hope, there are also multiple ways to get statistics about your document. They run along the bottom as one would expect, but I am also a big fan of the Document Properties pane, accessed by clicking on the ‘i’ with a circle around it at the upper right of the window. Not only does this provide a wealth of information about the document itself, but coming from an HCI perspective, having the page count, word count, and character count in individually shaded boxes, easy to see and immediately locate, is genius. In terms of information presentation and availability, it puts those interrelated metrics right next to each other yet visually separated, and thereby fuses them into a cognitive whole.

Not to get too far off topic, but the reason I'm so enamored with this design choice is that humans, in order to make sense of, and order out of, the information and stimuli presented to them, rely on several perceptual principles (the Gestalt principles) that help them organize what they see. One of them is known as Proximity, in which spatially close elements are perceived as part of a larger group, even if they're not related – so when they are related, as these three metrics are, our ability to process them, their meaning, and their relation to each other is increased even more. Another is Similarity, in which, when considering multiple individual objects in a field, those that are similar in some dimension, whether shape, color, orientation, or anything else, are perceived as part of the same group, again even if they aren't. Those are two reasons presenting that specific document information this way is so brilliant – it addresses the way we as humans process information.

Consider the Windows logo below. This is an example of both proximity and similarity. We consider the logo as a single shape because the four smaller squares that comprise it are both similar and proximal.

Example of Similarity and Proximity

This is the Writer document properties pane, which uses these concepts to such great effect.

Document Properties

It even estimates the average reading time. There is nothing I don't like about this information window; it's one of the best interface designs I've ever seen, and I've seen a lot.

Before I just put up the rest of the menu dialogs for your perusal to give you an idea of how they’re implemented, I also want to address a nice usability touch they have incorporated in the form of a support icon. See that little blue envelope there way on the right hand side?

Support icon

That’s the Writer support icon, and it opens up a window that allows you to offer feedback or ask questions directly. For a free service, that’s pretty remarkable. It really gives the idea – true or not – that they are at least receptive to feedback or requests for help. I was very surprised by that, and pleasantly so.

Writer support

As you can probably tell, I think Writer is a great program. It’s not often I get to gush over software, but this company is so unknown, comparatively, yet they have such a well-designed and functional, free service, I’m surprised we don’t hear about it much more often. According to their webpage, they are a private company but one with five thousand employees! They hold user conferences and expos – that is not small time, yet their name recognition, even with millions of users, is small. If I may be so bold, even sacrilegious to some, I’ll even go so far as to say it destroys the offerings from Google. The only service that can compete is Microsoft’s Office 365, and even then it’s just a little better, plus Zoho integrates with all of them anyhow. Zoho has really created something great, and I encourage anyone interested in alternatives to standard office suites to give it a try.

Here are some functional dialogs if you’re interested in seeing all the options available before taking the plunge.

[Best_Wordpress_Gallery id=”8″ gal_title=”Zoho Dialogs”]

Has Firefox opted you into any of its studies?

Remember my previous post that talked about the fantastic new version of Firefox, especially the full-screen screenshot capability? I still haven't adopted it as my main browser, but I was so impressed with the overall changes that were made, I've found myself using it more and more; certainly more often than I have in the past. High five for no more memory leaks! Anyway, when I started it up today, it opened to the screen you see below. I was intrigued, especially considering the great improvements they've made so far, but as I read through it I also became a little curious. It's talking about tracking us for usability-study purposes, which I'm completely for, but is there anything they perhaps are not telling us? Little did I know the drill-down that was about to happen.

Firefox Pioneer

This is a new initiative called Firefox Pioneer, which aims to discover how people use their browsers, as well as what they vaguely refer to as ‘health of the web.’ They are careful to note that you need to opt in, which is good, and they are pretty clear about what they will and will not do, and what you can expect in terms of privacy, monitoring, opting out, how private sessions play in, and so on. I do get the feeling they are striving for high-level transparency here and acting in good faith, but perhaps a deeper dive into what they could do is warranted.

I should also mention that while Firefox allows for full-page screenshots, that option was not available here. I had to open the page in Opera, save it as a PDF (which Opera allows, in contrast to Firefox's screencap approach), open the PDF in Firefox, and then I could save a full-page screenshot. Very curious, but a triumph for ingenuity.

As the page indicates, you can type about:studies into the address bar and see which studies you're part of, which have completed, and which are available. In fact, you can type about:[topic] into the address bar to discover many interesting things; to see how extensive this capability is, you can see a list at this link. For a real fun time, try about:config – it shows this not-at-all-scary warning, which is really just trying to be funny before letting you access settings. I would normally be for this; however, it's incredibly unclear what's going on if someone isn't familiar (it fooled me too), especially as it says ‘this application’ and not ‘Firefox.’ Why the pseudo-third person?

Careful now

Clicking ‘I accept the risk!’ takes you to a configuration page that isn’t much better in terms of readability or functionality.

Firefox’s about:config

I don’t usually check this kind of thing, but I certainly should. We all know that companies use your data for many things, sometimes not asking first, so it’s important to keep an eye on what any program you’re using is doing. Typing about:studies showed me this screen, which I have to admit got me to wondering what exactly has been going on behind my back:

about:studies page in Firefox

A collaboration between Mozilla and the creators of Mr. Robot? Someone's reality is different than mine? What is going on here, exactly? I clicked on Learn More and was taken to their SHIELD page, where I learned all about their SHIELD studies, which they claim are to test new features; however, from the original request it's clear they test more than that. I also can't figure out why it's capitalized in such a way; I found no evidence it's an acronym. A deeper dive led me to another Shield page where it's maddeningly no longer capitalized, and that's where I found, towards the bottom, a very concerning entry called ‘Normandy – User Profile Matching and Recipe Deployment.’ I wasn't completely sure what that meant, but it stood out as sounding perhaps not so good. Just the name is curious: if you're not familiar, and you should be, Normandy happens to be the location in France where the United States, Britain, Canada, and some of France itself launched the D-Day invasion against the Nazis in 1944. Did you ever see Saving Private Ryan? The opening scene was Normandy. I can't help but wonder why Mozilla has named this bit of its process in such a way. It may be common knowledge, I don't know, but it was new – and enlightening – to me.

Normandy

You’ll notice the first item there, “Filters for Shield Targeting.” That took me to another interesting page, the very first paragraph of which states how certain users can be pegged for recipe execution in their browser, and the means by which that is done has access to location and locale (I’m uncertain as to how those differ in Mozilla parlance). It’s further down the page that you begin to see all the ways you can be tracked and monitored.

Filter Expressions
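To give you a concrete taste of what these filters look like, the expressions on that page are small boolean snippets along these lines (I’m recalling the attribute names from the docs, so treat the exact spelling as approximate rather than gospel):

normandy.channel == 'release' && normandy.locale == 'en-US' && normandy.country == 'US'

In plain English: only run this recipe for release-channel users with a US English build who appear to be located in the United States. Innocuous enough on its own, until you see just how many attributes are available to filter on.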

I was going to take a screenshot and crop in the sections that were most troublesome, but there are SO MANY it simply was not possible. To give you an idea, here’s what a screenshot of the page looked like in my image editor. I should also add that from a pure design perspective, they are really not using space efficiently.

So there’s a Normandy server, and Normandy is also an object that contains what they call ‘general information’ about the client. ‘Client’ in a case like this should mean the browser itself, not the person using it, but separating the two is not so easily done; as a designer / developer, you’re ultimately trying to learn about the user – their wants, needs, and habits – so you can provide a better product. I have no problem with that, and in fact am completely on board with their Heartbeat initiative, which is relatively benign.

Anyway, back to the above screenshot. They can track a lot, and in the crop below you’ll see they do something interesting called bucket sampling. I’m intrigued by what this would be used for and how / why / when, so I will have to dig even deeper. My guess is that it’s a way of slicing users into groups by demographics or usage statistics, but I’m not completely sure. I only found one mention of it on Mozilla’s site, on this page towards the bottom under the heading “Filling the Gap,” but it does lend credence to my assumption. While there’s no formal statistical method called bucket sampling, it’s something we do, in effect, all the time. I’m looking at you, marketing!

Bucket Sample
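To make my guess concrete, here’s a toy sketch in Python – very much my own illustration, not Mozilla’s actual code – of how hash-based bucket sampling generally works: hash a stable client identifier into one of N buckets, then enroll only the clients whose bucket falls within a chosen range.

import hashlib

def in_bucket_sample(client_id, start, count, total=10000):
    # Hash the client ID so each client lands in a stable,
    # roughly uniform bucket between 0 and total - 1.
    digest = hashlib.sha256(client_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % total
    # The client is sampled if its bucket falls in [start, start + count).
    return start <= bucket < start + count

# With these numbers, roughly 1% of clients would be enrolled.
print(in_bucket_sample("some-client-id", start=0, count=100))

The appeal over a simple random roll is that the same client always gets the same answer, so a study’s population stays stable across restarts without the server having to remember who’s in it.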

So what does all of this have to do with the study that started this whole post? To be quite honest, this rabbit hole didn’t tell me anything I, or any of you, didn’t already know: it’s very easy to track you online, and not just the websites you visit. Where you are, what software you use, what OS you have, whether you’re a new user or experienced, how many mobile and desktop clients you have, how long you stayed at a site – they can even track whether you’ve enabled Do Not Track!

And while this sounds like a conspiratorial, tinfoil-hat rant, I actually am not overly bothered by it. I don’t do anything outrageous online other than download Sega Genesis ROMs, because I’m dangerous and like to live life on the edge; beyond that it’s all pretty boring. My issue is mainly that Mozilla may be opting us in to studies without formally informing anyone, and that I just can’t get behind. This was originally going to be a very short post that I thought could be whipped up in about fifteen minutes, but it instead turned into a downhill slalom.

If you’re interested in reading more about Normandy, Mozilla has a helpful website where you can learn all about it and what it can (and does) do, and it really is an interesting, dare I say fascinating, read. http://normandy.readthedocs.io/en/latest/

Oh, one more thing – in the latest build of Firefox, Mozilla has also made individual cookie management much more cumbersome. I don’t use that feature a lot, but it is definitely not something people should have to go into the developer console to access. I love the new Firefox, but this isn’t good – user control should be at the center of all commercial software development. Get it together, Mozilla.

Fixing a component install error in Visual Studio Community 2017 (hint: It’s really easy)

I recently noticed in the notification window of Visual Studio Community ’17 that there were some Azure-related updates available. Normally these updates lead to an uneventful experience in which I approve the updates, VS takes a loooong time to download and install said updates, then everything moves forward as planned. This time, however, was slightly different, so I thought I’d bring you all on my journey of discovery and illustrate why Visual Studio’s attempt to help me figure out what went wrong was so unhelpful. Even so, it’s an easy fix that I’m guessing many of you could figure out anyway. Indeed, my approach takes the slightly longer way round.

The story begins with the update notifications. This is a very common thing to happen with this particular Integrated Development Environment (IDE), and now that I think about it, I can’t recall a time there weren’t updates to be had. Anyway, as I mentioned they’re normally uneventful, and these two were for components I don’t use too terribly much – Azure Data Lake and Stream Analytics Tools, and Azure Functions and Web Jobs Tools. The former are tools to assist with the streaming, gathering, and analysis of the massive amounts of data produced by IoT-connected devices, an area in which I have significant interest, so those tools will be useful for me soon (I hope). Azure Functions and Web Jobs Tools are of less interest to me, but I can’t have an incomplete update hanging in the air above me. So, without thinking too much about it, I clicked the notification, clicked on [Update], and went about my business.

No problem

As expected, the process began without much of a hitch.

Good so far

Still good

I should mention that when it comes to Visual Studio, you don’t ‘update’ or ‘patch,’ you ‘Modify.’ It’s all just semantics. Anyway, although things had gone well up to this point, that was about to change.

Uh Oh

So the Data Lake and Stream Analytics tools installed fine, but the Functions and Web Jobs tools were another story. I don’t even need those, and I probably should have let it go, but I just couldn’t. That red X would have haunted my dreams.

The obvious next step was to check the install log, as the installer so helpfully suggested. Speaking of the installer, and if you’re interested, VSIX is the package format used to deploy extensions and component updates to Visual Studio, putting the right files in the right places. A VSIX file is really just a compressed archive; you can even rename one to .zip and extract it. Just wanted to get that out of the way.
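You can verify that for yourself without even renaming anything: Python’s zipfile module will open a VSIX directly. A quick sketch, with the file name standing in for whatever VSIX you have lying around:

import zipfile

# A VSIX package is an ordinary zip archive with a manifest inside.
with zipfile.ZipFile("SomeExtension.vsix") as vsix:
    for name in vsix.namelist():
        # Expect entries like extension.vsixmanifest and [Content_Types].xml.
        print(name)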

I clicked on ‘View Install Log,’ and the result was a log that was a breezy 865 lines long, mainly, but not entirely, consisting of stuff that looked like this:

Thanks for the help, install log

This log punches you in the gut twice: A) you have to find the part that identifies the problem in those 800+ lines all on your own, and B) the part that identified this particular problem was all the way at the very end. Luckily, ctrl-F helped out significantly here.

There you are
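If ctrl-F isn’t your style, a few lines of Python will do the digging for you; the log path below is just a placeholder, since yours will be wherever the installer writes its temporary files:

# Pull out the lines of an installer log that look like failures.
with open(r"C:\path\to\install.log", encoding="utf-8", errors="replace") as log:
    for number, line in enumerate(log, start=1):
        if any(word in line.lower() for word in ("error", "failed", "exception")):
            print(number, line.rstrip())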

Luckily, I knew, or at least suspected, that from here it would be easy. That the update task didn’t exist and the setup instance was not in a launchable state indicated to me that there was a disagreement between the installer and Visual Studio, not that some fatal, unrecoverable error had happened. My suspicions turned out to be correct, and it was really easy to fix. In fact, I could have just done it from within Visual Studio, but this way is more illustrative. At least I think it is. Maybe it’s not.

I didn’t even need to go full Control Panel on it. I simply went into the Apps section of Settings, scrolled down to Visual Studio, and selected ‘Modify.’

That brought up the Modification window with the ‘Workloads’ tab highlighted, but we want ‘Individual Components.’

Workload tab in modification window

Individual Components tab in modification window

You’ll notice that ‘Microsoft Azure WebJobs Tools’ isn’t checked, so it’s an easy fix. That’s also the reason I mentioned earlier that it could easily have been repaired from within VS itself. Clicking that checkbox caused almost all the other checkboxes to become selected, and after clicking [Modify] there in the lower right-hand corner, the long, slow update process began anew!

It was at zero percent for a long time, but that’s normal for Visual Studio. Its updates are never a quick thing. After it chugged along for a while, success!

Success!

Remember, I don’t actually need these components, but I was not going to let the VSIX installer win, and now I could raise my head in triumph. Plus, I can experience – for a brief moment anyway, if I know Visual Studio – the blissful Zen of the empty notifications window.

Ahh…

When it comes to a failed component update, I have seen others suggest uninstalling and reinstalling Visual Studio, clearing caches, even making changes to the registry! I have never needed to do any of that for any kind of installation or update hiccup, and I don’t know why people jump straight to those extremes when a simple repair or modify, as shown here, has always worked for me. This is not an advanced technique or a hidden trick Microsoft doesn’t want you to know about; it’s a standard, Occam’s-razor approach that should be the first line of defense for this kind of thing.

If this helps anyone out there avoid all those unnecessary machinations, then I’ve done my part.

How to kill Windows tasks, even those that just won’t die

I recently created a video that shows how to kill tasks via the command line; if that doesn’t work, how to kill them via Process Explorer; and if that doesn’t work, how to disable them using the Services window. It’s embedded just below, but if you’re not in a video mood, fear not! I have summed up its contents here, although the video shows you the steps and comes complete with witty commentary.

If you’re running Windows, any version of Windows, you know there are what seem like thousands of things going on in the background. All you have to do is bring up the Task Manager ([Ctrl]-[Shift]-[Esc], or the well-known [Ctrl]-[Alt]-[Delete] if you’re old-school and enjoy the extra steps) and you’ll come face-to-face with the process party happening inside your machine.

Task Manager – Hello, running tasks!

As you may also know, each of those running tasks requires some of your machine’s resources – sometimes a little, sometimes a lot – and you can see in Task Manager how much of each resource each process is using. If a process is something you don’t use or don’t need, it’s not a bad idea to stop it and recover whatever system resources it’s hogging. In fact, while taking the above screenshot I happened to notice Windows’ Photos app, Movies & TV app, and Messaging app were taking up a lot, so away they went! Right-clicking brings up a menu that allows you to ‘End Task.’

But that’s also where an insidious problem lies. Sometimes, when you right click and select End Task, the task just doesn’t end. This can manifest in a couple of ways: The task may continue to run, mocking and taunting you to click End Task again, reveling in your failure to stop it. It might stop for a moment, only to reappear a second later, better, stronger, faster (not really better or stronger or faster, but it will reappear).

This even happens when using the command line to kill tasks. Normally this is done by listing the running tasks using tasklist, then, once the Process ID (PID) is known, using taskkill /PID actualPID. Even here, however, termination isn’t guaranteed, and you’ll get a similarly mocking response from your system.

Damn
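For the record, the strongest version of that incantation I know of is taskkill /PID 1234 /F /T, where /F forces termination, /T takes the whole process tree down with it, and 1234 stands in for whatever PID tasklist reported. Even that can come back with an access-denied error or a respawned process, which is exactly what the rest of this post is about.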

Why does that happen? The main, but not necessarily only, reason is that the task is actually part of a hierarchy of processes that prevents it from being shut down. Either it’s a child spawned from a parent process, and upon being stopped the parent just restarts it, or it’s a parent process that can’t be terminated because the system won’t allow its children to be orphaned.
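If you like to script these things, the third-party psutil package can walk that same hierarchy. Here’s a minimal sketch – my own, and deliberately brute-force – that kills a process’s children before the process itself:

import psutil  # third-party: pip install psutil

def kill_tree(pid):
    # Snapshot the whole subtree, then kill children before the parent.
    parent = psutil.Process(pid)
    for child in parent.children(recursive=True):
        child.kill()
    parent.kill()

kill_tree(1234)  # 1234 is a placeholder PID from tasklist or Process Explorer

A determined parent can still respawn a child in the window between those calls, and none of this helps when the respawning is done by a Windows service outside the tree, which is where the Services step below comes in.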

To determine which of these is the case, I am a big fan of Process Explorer, part of Microsoft’s Sysinternals software suite that helps maintain and monitor Windows. If you click on the link, you can see in the left-hand sidebar the other packages available; they’re quite comprehensive.

Process Explorer shows you everything that is going on: how many resources each process is using, including private and shared memory, the Process ID (PID), the name of the company that developed the software, all color-coded by category. It’s a great program that shows a lot and really gives an idea of not just what is going on but how it all relates.

Process Explorer

You can see that some processes are children and some are parents, and whether you can kill a given process depends very much on the nature of that hierarchy. Trying to stop a task even here will prove futile, as the hierarchy is the same. Interestingly, while right-clicking in Task Manager gives the option to End Task, in Process Explorer it gives the option to Kill Process. Hmm. The deeper we go, the more violent we get.

Ultimately, if you really want a process to not run, you should first make sure it’s not something you need. Google, even Bing, is your friend. If you are certain the process is just leeching resources, and it belongs to a service, then you’ll have to go into services.msc, which you can run by typing it into the search bar next to the Start menu, and disable the service by right-clicking it, selecting Properties, and changing the Startup Type. Don’t set it to Automatic, obviously, or even Automatic (Delayed Start), but also don’t set it to Manual, because then it will just start up again if the system decides it’s needed. Be sure to set it to Disabled.

Services.msc

Set Startup Type to ‘Disabled’
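If you’d rather do this from an elevated command prompt, sc stop "SomeService" followed by sc config "SomeService" start= disabled accomplishes the same thing – the space after start= is required, oddly enough. SomeService is a placeholder here; the actual service name is shown on the service’s Properties page in services.msc.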

That will stop the process from starting up, even if the system thinks it should. Remember, though, that if you *do* end up needing the service, you’ll have to go back into services.msc and start it yourself, as this shuts it down completely. And I have to say it again: be careful about disabling processes. If you disable one that you need, or worse, one the system needs, you may find your device acting very strangely, or in a worst-case scenario, software or even hardware not working at all. Even if you don’t disable anything, this is a good opportunity to learn what your system is actually doing.

Russian botnet master nabbed in Spain, extradited to US

This is a story that has been ongoing for some time. Pyotr Levashov, a well-known and well-established Russian cybercriminal who was arrested in April of last year (2017 if you’re reading this in the distant future – welcome alien overlords!) while vacationing in Spain, has finally been extradited to the U.S. Apparently cybercriminaling does pay well sometimes. The arrest was based on a formal U.S. Department of Justice indictment against him for, among other things, operating the Kelihos Botnet, a long-running, expansive, global botnet that bombarded the world with all kinds of spam for nonsense like get-rich-quick schemes and enhancement medications; if you’re interested, and you should be, you can read the DOJ press release about the indictment and the actual search warrant that allowed for their infiltration of the botnet.

Before we continue, let’s talk about what a botnet is. When malware, or bad software (get it? Mal ware?), is surreptitiously installed on your machine – through a drive-by attack in which it’s embedded in a Flash ad, a link or attachment in a rogue email, or one of many other attack vectors – it will use your machine to carry out tasks without your permission, involvement, or even knowledge. And just to be clear, the tasks it’s carrying out are bad. It can use your machine to send spam, participate in DDoS attacks, store harmful or illegal files, and do many other unethical / criminal things, all without you ever being privy to what’s going on. When that happens, your machine is what’s known as a zombie computer, or more commonly, a bot. Now, imagine hundreds of thousands of these infected machines all acting in unison, for a common goal or under a central control authority. That’s a botnet. Here’s an effective graphic from Reuters that illustrates the architecture of a botnet.

Typical botnet architecture (Source: Reuters)

I wanted to embed an interactive map from Arbor Networks that shows attacks happening in real time, along with historical data, but their embed code, which uses iframes, doesn’t work on WordPress. I find it strange a security firm would still be supplying iframe embed codes, but who am I to judge? No matter; there are other sites that provide similar information using their own honeypot networks, such as Kaspersky’s real-time threat map and the well-known Norsecorp map. Actually, I had intended to use Norsecorp’s IPViking map; however, it is now run under HP’s banner, although powered by Norse, and I simply couldn’t get it to work in any browser. Their map linked above works beautifully, though.

Kaspersky’s Threat Map

Norsecorp’s Threat Map

There are several interesting facets to this case. The first is that this guy has been around a long time and was one of the bad actors behind the Storm botnet that first manifested all the way back in 2007. That botnet was eventually dismantled by the combined efforts of Microsoft, anti-malware firms, and the feds, a collaboration that continues to this day. We’ll come back to this particular botnet soon, because the architecture of these things is going to become important.

By soon, I mean right now! Another interesting aspect of this case is that the botnet was very sophisticated, using a hybrid structure that is unusual for this kind of thing. Botnets are typically either peer-to-peer, in which all the infected machines communicate with each other to coordinate and carry out their nefarious activities, or they use what’s known as a C&C (Command and Control) server that oversees the whole thing and controls the botnet from a more centralized location, allowing better control and oversight of the bots.

Kelihos, however, was a hybrid: there was a C&C server, but also a peer-to-peer aspect, with enough autonomy in the architecture that the bots could continuously update among themselves a list of secondary control servers to which they would report, and those secondary servers were directly overseen by the main C&C. This is in direct contrast to the Storm botnet mentioned earlier, which was pure peer-to-peer. A hybrid network also allows for rapid updates to, and distribution of, the associated malware.

That leads to the next neat(?) thing about the botnet: it was aggressively and frequently updated. In fact, when a live sinkholing – in which the bots are redirected to different targets that can then help track or even deactivate them – took place at the 2013 RSA security conference, a new version of the botnet rapidly took its place, indicating that the creators were prepared for just such an emergency and had a contingency pre-planned.

And this was not just a spamming botnet. Along with pushing spam of both the email and desktop pop-up kind, it also stole bitcoin and targeted banks and other large industry outlets with industry-specific malware that could rake in millions of dollars while running undetected. For botnet software, it had a wide range of functionality, both general and specific, although for all it could do, it was not hard to track.

The next interesting aspect of this case is that Russia fought vigorously against Levashov’s extradition – not by attempting to block it, but by filing an extradition request of their own based on crimes they say he committed in Russia itself. Regardless of whether the Russian charges are true or simply an attempt to protect one of their own, it was a clever way of approaching it. It didn’t work, ultimately, and Levashov is now in U.S. custody, but it was an interesting tactic to counter the original extradition request – and it has happened before.

A really interesting story all the way around, and I’m curious to see how it concludes. In the meantime, be careful, ensure your OS is up to date and fully patched, be sure you are running up-to-date anti-virus and anti-malware protection, try not to visit questionable sites, don’t activate or respond to emails from unknown sources, use an ad-blocker (uBlock Origin is my preferred choice, and I have no connection to them; purely my own opinion), and just generally practice safe computing.

Going Up